
Custom integration changes

· 3 min read

Happy New Year everyone! 2021 is finally here 🎉

As you are probably aware, security issues were recently discovered in several popular custom integrations. You can read more about that here:

In light of these incidents, and starting with the Home Assistant 2021.2.0 beta that was just released, we are changing two things that will affect custom integrations.

Deprecated utilities

The sanitize_filename and sanitize_path helpers located in the homeassistant.util package have been deprecated and are pending removal. This will happen with the release of Home Assistant 2021.4.0, scheduled for the first week of April this year.

We have added raise_if_invalid_filename and raise_if_invalid_path as replacements. They are located in the same homeassistant.util package. These new functions raise a ValueError instead of relying on the developer comparing the output of the function to the input to see if it is different. This prevents misuse.
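The change in shape looks roughly like this. The functions below are simplified stand-ins that illustrate the contract, not Home Assistant's actual implementations:

```python
# Simplified stand-in for homeassistant.util.raise_if_invalid_filename;
# it illustrates the new "raise on bad input" contract, not the real checks.
def raise_if_invalid_filename(filename: str) -> None:
    if "/" in filename or "\\" in filename or filename in ("", ".", ".."):
        raise ValueError(f"{filename} is not a safe filename")


# Stand-in for the old sanitize_filename-style helper.
def sanitize_filename(filename: str) -> str:
    return filename.replace("/", "").replace("\\", "")


# Old pattern: easy to misuse, because forgetting the comparison
# silently accepts a dangerous filename.
def save_upload_old(filename: str) -> str:
    sanitized = sanitize_filename(filename)
    if sanitized != filename:  # the step developers tended to forget
        raise ValueError(f"{filename} was modified by sanitizing")
    return sanitized


# New pattern: the helper raises itself, so the check cannot be skipped.
def save_upload_new(filename: str) -> str:
    raise_if_invalid_filename(filename)
    return filename
```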

Versions

The second change is pretty cool! Versions!

The manifest.json file now supports a version key. The version should be a string with a major, minor, and patch version, for example, "1.0.0".

This version helps users tell you which version of your integration they had issues with. And if you ever find a security issue in your custom integration, Home Assistant will be able to block insecure versions from being used.

The version key will be required for custom integrations starting with Home Assistant version 2021.6.
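For example, a manifest.json using the version key might look like this (the domain, name, and URL are placeholders):

```json
{
  "domain": "my_integration",
  "name": "My Integration",
  "documentation": "https://example.com/my_integration",
  "version": "1.0.0"
}
```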

Hassfest updated

hassfest is our internal tool that is used in Home Assistant to validate all integrations. In April we made this available as a GitHub Action to help you find issues in your custom integration. This action can be used in any custom integration hosted on GitHub. If you have not added that to your repository yet, now is the time! Read more about that here.

If you are using the hassfest GitHub action, you will now start to see warnings when it runs if the version key is missing from your manifest.json file. This warning will become an error at a later point, when the version key becomes fully required for custom integrations.
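If you have not set it up yet, a minimal workflow using the hassfest action could look something like this (the file name and triggers are illustrative; adjust them to your repository):

```yaml
# .github/workflows/hassfest.yaml
name: Validate with hassfest
on:
  push:
  pull_request:
jobs:
  validate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: home-assistant/actions/hassfest@master
```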

Serving files

Making resources available to the user is a common use case for custom integrations, whether that is images, panels, or enhancements the user can use in Lovelace. The only supported way to serve static files from a path is hass.http.async_register_static_paths. Use this method rather than rolling your own, as that can lead to serious bugs or security issues.

from pathlib import Path

from homeassistant.components.http import StaticPathConfig

should_cache = False
files_path = Path(__file__).parent / "static"
files2_path = Path(__file__).parent / "static2"

await hass.http.async_register_static_paths([
    StaticPathConfig("/api/my_integration/static", str(files_path), should_cache),
    StaticPathConfig("/api/my_integration/static2", str(files2_path), should_cache),
])

That's it for this update about custom integrations. Keep doing awesome stuff! Until next time 👋

System Health and Templates

· 2 min read

In Home Assistant 0.118, there will be two changes that could impact your custom integration.

Removed deprecated helpers.template.extract_entities

The previously deprecated extract_entities method from the Template helper has been removed (PR 42601). Instead of extracting entities and then manually listening for state changes, use the new async_track_template_result from the Event helper. It dynamically makes sure that every touched entity is tracked correctly.

from homeassistant.helpers.event import TrackTemplate, async_track_template_result
from homeassistant.helpers.template import Template

template = Template("{{ states('light.kitchen') == 'on' }}", hass)

async_track_template_result(
    hass,
    [TrackTemplate(template, None)],
    lambda event, updates: print(event, updates),
)

Improved System Health

Starting with Home Assistant 0.118, we're deprecating the old way of providing system health information for your integration. Instead, create a system_health.py file in your integration (PR 42785).

Starting this release, you can also include health checks that take longer to resolve (PR 42831), like checking if the service is online. The results will be passed to the frontend when they are ready.

"""Provide info to system health."""
from homeassistant.components import system_health
from homeassistant.core import HomeAssistant, callback

from .const import DOMAIN


@callback
def async_register(
hass: HomeAssistant, register: system_health.RegisterSystemHealth
) -> None:
"""Register system health callbacks."""
register.async_register_info(system_health_info)


async def system_health_info(hass):
"""Get info for the info page."""
client = hass.data[DOMAIN]

return {
"server_version": client.server_version,
"can_reach_server": system_health.async_check_can_reach_url(
hass, client.server_url
)
}

Upcoming changes to add-ons

· 4 min read

GitHub Action

You can now use our builder as a GitHub action! 🎉

This is already in use in our hassio-addons repository; you can see an example of how we implemented it here.

It can be used to ensure that the add-on will still build with changes made to your repository and publish the images as part of a release workflow. How to use the action is documented in the builder repository.

Here is an example of how you can use it:

jobs:
  build:
    name: Test build
    runs-on: ubuntu-latest
    steps:
      - name: Checkout the repository
        uses: actions/checkout@v2
      - name: Test build
        uses: home-assistant/builder@master
        with:
          args: |
            --test \
            --all \
            --target /data

This example will run a test build on all supported architectures of the add-on.

tip

Your repository is mapped to /data in the action, so if you have your add-on files in subdirectories, you need to supply --target /data/{directoryname} as an argument to the builder action.

Documentation

Our API documentation has moved to the developer documentation site. During this move, it also got a style update to make it easier to navigate. Some of the endpoints are still missing some content. If you have not yet met your quota for Hacktoberfest, maybe you want to contribute some more details to our API descriptions?

API Changes

  • Using the /homeassistant/* endpoints is deprecated and will be removed later this year. You need to use /core/* instead.
  • Using http://hassio/ is deprecated and will be removed later this year. You need to use http://supervisor/ instead.
  • Using HASSIO_TOKEN is deprecated and will be removed later this year. You need to use SUPERVISOR_TOKEN instead.
  • Deleting snapshots with a POST call to /supervisor/snapshots/<slug>/remove is deprecated and will be removed later this year. You need to use the DELETE method on /supervisor/snapshots/<slug> instead.
  • Using X-Hassio-Key header as an authentication method is deprecated and will be removed later this year. You need to use an authorization header with a Bearer token instead.

The API documentation has been updated to reflect these changes.
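Put together, a call against the Supervisor API under the new conventions looks roughly like this. The /core/info endpoint is used for illustration, and inside a real add-on the SUPERVISOR_TOKEN environment variable is provided for you:

```python
import os
import urllib.request

# SUPERVISOR_TOKEN replaces the deprecated HASSIO_TOKEN; the fallback value
# here is only so this sketch runs outside an add-on.
token = os.environ.get("SUPERVISOR_TOKEN", "example-token")

request = urllib.request.Request(
    "http://supervisor/core/info",  # http://supervisor/ replaces http://hassio/
    headers={"Authorization": f"Bearer {token}"},  # replaces X-Hassio-Key
)

# Inside an add-on, urllib.request.urlopen(request) would perform the call.
print(request.full_url, request.get_header("Authorization"))
```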

Add-on options

The permissions of the /data/options.json file have changed from 644 to 600. If your add-on runs as non-root and reads this file, it will now run into permission issues.

There are several things you can do in your add-on to continue to use this information:

  • If you are using S6-Overlay in your add-on, you can use /etc/fix-attrs.d to ensure that the user you are running the add-on as has access to the file.
  • You can change your add-on to run as root (default).
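For the S6-Overlay route, the fix-attrs.d entry could look like this sketch (one entry per line: path, recurse, account, file mode, directory mode; abc is a placeholder for your add-on's user, and the exact format is described in the s6-overlay documentation):

```
# /etc/fix-attrs.d/options
/data/options.json false abc 0600 0600
```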

Releases

Until now, the Supervisor, our plugins, and our add-ons have been using a mix of build numbers and Semantic Versioning (SemVer) as the versioning system. We have decided to replace that for these repositories and adopt Calendar Versioning (CalVer) instead.

We are migrating the Supervisor from release-based development to continuous development. This fits perfectly with our existing channel-based update strategy (stable, beta, and dev). We now leverage automated pipelines to build and push out new Supervisor versions to the correct channels. Moving to this structure removed the need for a dual-branch setup, so both our dev and master branches have been replaced with a new main branch. Our plugins (DNS, Multicast, Observer, CLI) for the Supervisor will also follow this continuous development principle.

We made this move to provide higher software quality with an automatic test system. Every commit now triggers a new dev release, which gets tested by our test instances. Issues are immediately reported to Sentry. This gives us the opportunity to test all changes before we create a release. When a release is created, the changes become available in the beta channel. Once declared stable, we promote the release to the stable channel.

We are using our builder action with GitHub actions to build and publish the Supervisor, our plugins and base images for our Docker containers. If you are interested in how we are doing this, you can look at the builder action for the Supervisor here, and the action helpers here.

Lazy-loading more-info controls

· 5 min read

In 0.115 we converted our more info controls to be lazy-loaded. This was done because the more info dialog is always loaded in an early stage of the page load. Every domain can have its own controls and they can be pretty heavy. We don't need all these elements on page load, so we decided to only load them when they are needed.

We got feedback from Lovelace custom card developers that this broke their cards, as not all the elements would be available anymore. While we don't support using our internal elements in custom cards, for the reasons mentioned below, we decided to revert this change for 0.115. In 0.116 we will re-add the lazy loading of the more info dialog controls. To help our Lovelace custom card developers, we have made the function that we use to lazy-load the more info controls available for custom card developers. This means you can load the controls of the domain of choice if your card needs them.

info

Please be aware that we do not support this in any way. We won't promise the same elements will be available in a future update, and breaking changes will happen when you rely on our elements.

An example loading the more info controls of the light domain:

const helpers = await loadCardHelpers();
helpers.importMoreInfoControl("light");

Why do we do this?

We always try to make our frontend as fast as possible. Not only on your super fast desktop PC with fiber internet but also on a slow and cheap phone that only has 3G.

One of the things we try to optimize is loading our code. The less code we load, the faster it loads and the sooner we can start rendering the page. We also use less memory if we don't load everything. There is, however, a tradeoff: if we only ever want to load the parts we need, we would have to be able to load every element separately. This would mean a lot of small network requests, which ends up slower than loading a little more in fewer requests, because of the overhead of each network request.

So it is always a search for the right balance between chunk sizes and the number of chunks needed. The splitting of our code into chunks is done with a bundler. At the moment, we use Webpack for this.

Some custom Lovelace developers have asked us why we can't expose all the elements we use internally to custom cards. This is simply because the element you need, might not be loaded yet as it was not needed yet.

What makes this extra difficult is that we use custom elements. A custom element can only be defined once; if you try to define it again, it will throw an error. This means we can't have a mix of bigger chunks and separate elements, as it would error when an element is loaded a second time.

If we wanted to make this happen, we would either have to load every component on first load, which would make the first load very slow, or split every component into its own chunk, which would cause a lot of small network requests and thus also result in a slower experience.

Besides that, the elements we use internally are not supposed to be used by external developers; the API is not documented or supported and could break at any time. We could decide in any release to replace an element because there is a better one or a new use case, or change the API, like we recently did for the chart element. We could not develop at the pace we do now if we had to support all our elements. We develop an application; the elements are just our building blocks.

What about external elements?

We use external custom elements in our frontend, like the Material Web Components from Google. While we would love you to use the same elements to provide a uniform experience, we cannot advise this.

Just like our own elements, the external elements are part of our code-split chunks, and they will probably be lazy-loaded. That means they will not be available at all times but could be loaded later. So if a custom card loads and defines an mwc element because it was not yet available when the card needed it, and Home Assistant later tries to define the same element when it is lazy-loaded, the frontend will run into an error.

Unfortunately, there is no technical solution for this at the moment. There are some approaches, like scoped elements from open-wc, but they will not work in most cases, as the imported element will either self-register or define sub-elements that cannot be scoped. There is a discussion about a proposal for Scoped Custom Element Definitions that could potentially fix this problem, but it may take a long time before it would be available in all our supported browsers, if the proposal is accepted at all.

Is there a solution?

Is there a solution to all these problems? So custom cards can provide the same uniform user experience, without the risk of having breaking changes every release?

The best solution we see is a set of elements created by the custom card community. This set would have its own namespace that would not collide with that of the elements that Home Assistant uses. All custom cards could use these elements, without the risk of breaking changes.

Lovelace: Custom View Layouts

· 2 min read

Custom Element developers can now create Custom View Layouts that users can load and use!

In 0.116, we will be changing the way that we create Views in Lovelace. In the past, we had 2 views, a default view and a panel view. When talking about adding Drag and Drop to Lovelace, we decided we could do even better and start allowing custom view types.

Custom Developers will now be able to create a view that receives the following properties:

interface LovelaceViewElement {
  hass?: HomeAssistant;
  lovelace?: Lovelace;
  index?: number;
  cards?: Array<LovelaceCard | HuiErrorCard>;
  badges?: LovelaceBadge[];
  setConfig(config: LovelaceViewConfig): void;
}

Cards and Badges will be created and maintained by the core code and given to the custom view. The custom views are meant to load the cards and badges and display them in a customized layout.

Here is an example (note: this example does not implement all of the properties, just the necessities to show the idea):

class MyNewView extends LitElement {
  setConfig(_config) {}

  static get properties() {
    return {
      cards: { type: Array, attribute: false },
    };
  }

  render() {
    if (!this.cards) {
      return html``;
    }
    return html`${this.cards.map((card) => html`<div>${card}</div>`)}`;
  }
}

And you can define this element in the Custom Element Registry just as you would with a Custom Card:

customElements.define("my-new-view", MyNewView);

You can find an example of this in our default view, the Masonry View, located here: frontend/src/panels/lovelace/views/hui-masonry-view.ts

A user who downloads and installs your new Custom View can then use it via editing the YAML configuration of their view to be:

- title: Home View
  type: custom:my-new-view
  badges: [...]
  cards: [...]

Custom Developers can add a layout property to each card that can store a key, position info, width, height, etc:

- type: weather-card
  layout:
    key: 1234
    width: 54px
  entity: weather.my_weather

Breaking Change

For Custom Card developers who use something like this:

const LitElement = Object.getPrototypeOf(customElements.get("hui-view"));

You will no longer be able to use the hui-view element to retrieve LitElement, as it has been changed to be an UpdatingElement. Instead you can use:

const LitElement = Object.getPrototypeOf(customElements.get("hui-masonry-view"));

But note! This is not supported by Home Assistant. In the future, this may no longer work to import LitElement.

Improving Python's speed by 40% when running Home Assistant

· 8 min read

We use Alpine for most of our containers. It is the perfect distribution for containers because it is small (BusyBox based), available for a lot of CPU architectures, and the package system is slim. Alpine uses musl as its C library instead of the more commonly used glibc.

Alpine and musl are relatively young compared to their peers (15 and 9 years old, respectively) but have seen a significant development pace. Because things move so fast, a lot of misconceptions exist about both, based on things that are no longer true. The goal of this post is to address a couple of those and explain how we have solved them.

This blogpost is not meant as a musl vs. glibc flamewar. Each use case is different and has its own trade-offs. For example, we use glibc in our OS.

For the tests, I used the images from the Docker Python library, and the results are published to our base images. I used pyperformance for lab testing and Home Assistant's internal benchmark tools for a more real-life comparison. The test environment was running inside a container on the same Docker host.

C/POSIX standard library

I often read: Python is slower when it uses musl as the default C library. This is not 100% correct. If the Python runtime is compiled with the same GCC and with -O3, the glibc variant is a bit faster in lab benchmarks, but in the real world, the difference is insignificant. However, Alpine compiles it with -Os while most other distributions compile it with -O2. This causes the often-reported difference between the Python runtime interpreters. But when using the same compiler optimizations, musl-based Python runtimes have no negative side effects.

But there is a game-changer that makes the musl runtime more useful than the glibc-based one: the memory allocator jemalloc, a general-purpose malloc implementation that emphasizes fragmentation avoidance and scalable concurrency support. There is an interesting effect, which I found in a blog post about Rust: some developers saw that musl is much faster when using jemalloc, while glibc is slower when using jemalloc. To be sure, the benefit of jemalloc with glibc is not speed, as it optimizes memory management, but musl gets both benefits. While the difference between pure musl and glibc can be ignored, the difference between musl + jemalloc and glibc is substantial (with the GCC built-in memory allocator optimization disabled). And yes, today's jemalloc is compatible with musl (there was a time when it was not).

Compiler

How you compile Python is also essential. There were statements from Fedora/Red Hat about disabling semantic interposition to get a significant performance boost. I was not able to reproduce this on GCC 9.3.0, but I also saw no adverse side effects. I can recommend disabling semantic interposition, disabling the built-in allocator optimization, and linking jemalloc at build time. I also recommend the -O3 optimization level. We never saw an issue with these aggressive optimizations on our targeted platforms. I need to say, unlike the distro Python runtime interpreters, we don't need to run everywhere, so we can use --enable-optimizations without any overrides and add more flags. I can say today that PGO/LTO/-O3 make Python faster, and it works on our target CPUs.

Python packages

Alpine indeed has no manylinux compatibility because of musl. If you don't cache your builds, C extensions need to be compiled when installing packages that require them. This process takes time, just like cross-building with QEMU for different CPU architectures. You cannot get precompiled binaries from PyPI. This is not a problem for us, as the binaries provided on PyPI are mostly not optimized for our target systems.

To fix the installation time of Python packages, we created our own wheel index and a backend that compiles all needed wheels and keeps them up to date using CI agents. We pre-build over 1k packages for each CPU architecture, so the build time of the Dockerfile is not so important at all.

Alpine Linux

Alpine is a great base system for containers and allows us to provide the best experience to our users. A big thanks to Alpine Linux, musl, and jemalloc, which make this all possible.

The table below shows the results comparing Alpine Linux's own Python runtime with our optimized build (GCC 9.3.0/musl). All tests were done using Python 3.8.3.

| Benchmark | Alpine | Optimized |
| --- | --- | --- |
| 2to3 | 924 ms | 699 ms: 1.32x faster (-24%) |
| chameleon | 37.9 ms | 25.6 ms: 1.48x faster (-33%) |
| chaos | 393 ms | 273 ms: 1.44x faster (-31%) |
| crypto_pyaes | 373 ms | 245 ms: 1.52x faster (-34%) |
| deltablue | 22.8 ms | 16.4 ms: 1.39x faster (-28%) |
| django_template | 184 ms | 145 ms: 1.27x faster (-21%) |
| dulwich_log | 157 ms | 122 ms: 1.29x faster (-22%) |
| fannkuch | 1.81 sec | 1.32 sec: 1.38x faster (-27%) |
| float | 363 ms | 263 ms: 1.38x faster (-28%) |
| genshi_text | 113 ms | 83.9 ms: 1.34x faster (-26%) |
| genshi_xml | 226 ms | 171 ms: 1.32x faster (-24%) |
| go | 816 ms | 598 ms: 1.36x faster (-27%) |
| hexiom | 36.8 ms | 24.2 ms: 1.52x faster (-34%) |
| json_dumps | 34.8 ms | 25.6 ms: 1.36x faster (-26%) |
| json_loads | 61.2 us | 47.4 us: 1.29x faster (-23%) |
| logging_format | 30.0 us | 23.5 us: 1.28x faster (-22%) |
| logging_silent | 673 ns | 486 ns: 1.39x faster (-28%) |
| logging_simple | 27.2 us | 21.3 us: 1.27x faster (-22%) |
| mako | 54.5 ms | 35.6 ms: 1.53x faster (-35%) |
| meteor_contest | 344 ms | 219 ms: 1.57x faster (-36%) |
| nbody | 526 ms | 305 ms: 1.73x faster (-42%) |
| nqueens | 368 ms | 246 ms: 1.49x faster (-33%) |
| pathlib | 64.4 ms | 45.2 ms: 1.42x faster (-30%) |
| pickle | 20.3 us | 17.1 us: 1.19x faster (-16%) |
| pickle_dict | 40.2 us | 33.6 us: 1.20x faster (-16%) |
| pickle_list | 6.77 us | 5.88 us: 1.15x faster (-13%) |
| pickle_pure_python | 1.85 ms | 1.27 ms: 1.45x faster (-31%) |
| pidigits | 274 ms | 222 ms: 1.24x faster (-19%) |
| pyflate | 2.53 sec | 1.74 sec: 1.45x faster (-31%) |
| python_startup | 14.9 ms | 12.1 ms: 1.23x faster (-19%) |
| python_startup_no_site | 9.84 ms | 8.24 ms: 1.19x faster (-16%) |
| raytrace | 1.61 sec | 1.23 sec: 1.30x faster (-23%) |
| regex_compile | 547 ms | 398 ms: 1.38x faster (-27%) |
| regex_dna | 445 ms | 484 ms: 1.09x slower (+9%) |
| regex_effbot | 10.3 ms | 9.96 ms: 1.03x faster (-3%) |
| regex_v8 | 81.8 ms | 71.6 ms: 1.14x faster (-12%) |
| richards | 265 ms | 182 ms: 1.46x faster (-31%) |
| scimark_fft | 1.31 sec | 851 ms: 1.54x faster (-35%) |
| scimark_lu | 616 ms | 384 ms: 1.61x faster (-38%) |
| scimark_monte_carlo | 390 ms | 248 ms: 1.57x faster (-36%) |
| scimark_sor | 838 ms | 571 ms: 1.47x faster (-32%) |
| scimark_sparse_mat_mult | 19.0 ms | 13.2 ms: 1.43x faster (-30%) |
| spectral_norm | 567 ms | 388 ms: 1.46x faster (-32%) |
| sqlalchemy_declarative | 364 ms | 286 ms: 1.27x faster (-21%) |
| sqlalchemy_imperative | 60.3 ms | 46.8 ms: 1.29x faster (-22%) |
| sqlite_synth | 6.88 us | 5.09 us: 1.35x faster (-26%) |
| sympy_expand | 1.39 sec | 1.05 sec: 1.32x faster (-24%) |
| sympy_integrate | 67.3 ms | 49.5 ms: 1.36x faster (-26%) |
| sympy_sum | 505 ms | 389 ms: 1.30x faster (-23%) |
| sympy_str | 945 ms | 656 ms: 1.44x faster (-31%) |
| telco | 17.9 ms | 12.5 ms: 1.44x faster (-31%) |
| tornado_http | 347 ms | 273 ms: 1.27x faster (-21%) |
| unpack_sequence | 232 ns | 212 ns: 1.09x faster (-9%) |
| unpickle | 41.6 us | 30.7 us: 1.36x faster (-26%) |
| unpickle_list | 10.5 us | 9.24 us: 1.14x faster (-12%) |
| unpickle_pure_python | 1.28 ms | 945 us: 1.36x faster (-26%) |
| xml_etree_parse | 335 ms | 292 ms: 1.15x faster (-13%) |
| xml_etree_iterparse | 281 ms | 226 ms: 1.24x faster (-20%) |
| xml_etree_generate | 330 ms | 219 ms: 1.51x faster (-34%) |
| xml_etree_process | 263 ms | 181 ms: 1.45x faster (-31%) |

Lovelace: getCardSize can now be async

· 2 min read

Ever since we introduced lazy-loading cards in Lovelace, getting the card size of a lazy-loaded card has been hard.

We used to send out an error element before the real element was loaded, which would have a getCardSize function, but it would report the wrong size. When the element was defined, we would fire a rebuild event so the right card would be recreated.

In 0.110 we stopped doing this: we give back the correct element, but its constructor may not be loaded yet, so it doesn't have getCardSize. When the constructor is loaded, the element is upgraded and we set the config. From that moment we can call getCardSize.

In this version, we changed the logic of getCardSize so it waits for this. This means some cards, like stacks, will return a promise, because they have to wait for their children to be defined.

If you are a custom card developer and your custom card uses getCardSize to get the size of other cards, you have to adjust it to handle these promises.

Our function to get the card size, which you could copy, now looks like this:

export const computeCardSize = (
  card: LovelaceCard | LovelaceHeaderFooter
): number | Promise<number> => {
  if (typeof card.getCardSize === "function") {
    return card.getCardSize();
  }
  if (customElements.get(card.localName)) {
    return 1;
  }
  return customElements
    .whenDefined(card.localName)
    .then(() => computeCardSize(card));
};

We first have the same check as before: if the element has a getCardSize function, we return its value, which should be a number or a Promise that resolves to a number.

If the function doesn't exist, we check if the constructor of the element is registered. If it is, the element simply doesn't have a getCardSize, and we return 1 as we did before.

If the element isn't registered yet, we wait until it is and then call the same function again on the now-defined card to get the size.
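If your card aggregates the sizes of other cards, as our stack cards do, it now has to tolerate both plain numbers and promises. A sketch of how that can look (computeChildrenSize is a hypothetical helper, not part of our card helpers):

```javascript
// Hypothetical helper for a stack-like card: sums child card sizes,
// awaiting any that are still promises. Cards without getCardSize count as 1.
async function computeChildrenSize(cards) {
  const sizes = await Promise.all(
    cards.map((card) =>
      typeof card.getCardSize === "function" ? card.getCardSize() : 1
    )
  );
  return sizes.reduce((total, size) => total + size, 0);
}
```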

Entity class names

· One min read

Ever wondered when implementing entities for our entity integrations why you had to extend BinarySensorDevice and not BinarySensorEntity? Wonder no longer, as we have addressed the situation in Home Assistant 0.110 by renaming all classes that incorrectly had Device in their name. The old classes are still around but will log a warning when used.

All integrations in Home Assistant have been upgraded. Custom component authors need to do the migration themselves. You can do this while keeping backwards compatibility by using the following snippet:

try:
    from homeassistant.components.binary_sensor import BinarySensorEntity
except ImportError:
    from homeassistant.components.binary_sensor import (
        BinarySensorDevice as BinarySensorEntity,
    )

The following classes have been renamed:

| Old Class Name | New Class Name |
| --- | --- |
| BinarySensorDevice | BinarySensorEntity |
| MediaPlayerDevice | MediaPlayerEntity |
| LockDevice | LockEntity |
| ClimateDevice | ClimateEntity |
| CoverDevice | CoverEntity |
| VacuumDevice | VacuumEntity |
| RemoteDevice | RemoteEntity |
| Light | LightEntity |
| SwitchDevice | SwitchEntity |
| WaterHeaterDevice | WaterHeaterEntity |

Custom icon sets

· 2 min read

If you are the maintainer of a custom icon set, you might need to update it.

In Home Assistant core version 0.110 we will change the way our icons are loaded. We no longer load all the mdi icons at once, and they will not become DOM elements. This will save us almost 5000 DOM elements and will reduce loading time.

This also means we no longer use or load <ha-iconset-svg>, if your icon set relied on this element, you will have to change your icon set.

We introduced a new API where you can register your custom icon set as an async function that we will call with the icon name as a parameter. We expect a promise that resolves to an object describing the requested icon. Your icon set can decide its own strategy for loading and caching.

The format of the API is:

window.customIconsets: {
  [iconset_name: string]: (icon_name: string) => Promise<{ path: string; viewBox?: string }>;
};

path is the path data of the SVG: the string that goes in the d attribute of the <path> element. viewBox is optional and defaults to 0 0 24 24.

A very simple example of this for the icon custom:icon:

async function getIcon(name) {
  return {
    path: "M13,14H11V10H13M13,18H11V16H13M1,21H23L12,2L1,21Z",
  };
}
window.customIconsets = window.customIconsets || {};
window.customIconsets["custom"] = getIcon;

Home Assistant will call the function getIcon("icon") when the icon custom:icon is requested.

Instance URL helper

· One min read

If you are an integration developer and have come across the problem of getting the URL of a user's Home Assistant instance, you probably know this wasn't always easy.

The main problem is that a Home Assistant instance is generally installed at home, meaning the internal and external addresses can be different, and even those can have variations (for example, if a user has a Home Assistant Cloud subscription).

Matters become worse if the integration has specific requirements for the URL; for example, it must be externally available and requires SSL.

As of Home Assistant Core 0.110, a new instance URL helper is introduced to ease that. We started out with the following flow chart to solve this issue:

Flow chart of getting a Home Assistant instance URL

As a result of this, the previously available base_url is now replaced by two new core configuration settings for the user: the internal and external URL.

From a development perspective, the use of hass.config.api.base_url is now deprecated in favor of the new get_url helper method.

For more information on using and implementing this new URL helper method, consult our documentation here.
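The real helper lives in homeassistant.helpers.network and is called as get_url(hass, ...) with keyword arguments such as require_ssl and allow_internal. Its selection logic can be sketched roughly like this (a simplified stand-in taking the URLs directly, not the actual implementation):

```python
# Simplified stand-in for homeassistant.helpers.network.get_url: prefer the
# internal URL, falling back to the external one, while honoring requirements.
def get_url(internal_url, external_url, require_ssl=False, allow_internal=True):
    candidates = []
    if allow_internal and internal_url:
        candidates.append(internal_url)
    if external_url:
        candidates.append(external_url)
    for url in candidates:
        if require_ssl and not url.startswith("https://"):
            continue  # skip URLs that don't satisfy the SSL requirement
        return url
    raise ValueError("No URL available that matches the requirements")


# An integration that must be reachable externally over SSL:
print(get_url("http://192.168.1.2:8123", "https://example.duckdns.org",
              require_ssl=True, allow_internal=False))
```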