
· 4 min read

It's been a while since we posted about changes to the Supervisor. Here are some highlights from the past year and a look at what's coming. This information is mainly for add-on developers, but there is a little something for everyone in here. If you have not yet seen it, we have posted a blog on the main site that you should read.

Snapshot -> Backup#

First up, as mentioned in the blog on the main site, we have started a transition away from the name "snapshot" that has been with us since the beginning of the Supervisor and are now moving to the more recognizable "backup".

These changes are live now on the dev channel for the Supervisor, so you can start testing and adjusting your tools/add-ons to make sure they will still work when your users get this.

API changes#

With the transition from "snapshot" to "backup", a new base section has been added to the Supervisor API: /backups. It operates the same way as /snapshots and has all the same endpoints as the old section, but there are two key differences:

  • If you access /backups, the returned data will be {"backups": []} instead of {"snapshots": []}.
  • To delete a backup you now have to use the DELETE HTTP method with the /backups endpoint; previously both POST and DELETE were supported.
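To illustrate, here is a minimal sketch of calling the new section from an add-on with the standard library. The endpoint paths and the DELETE semantics are from this post; the backup slug, the use of urllib, and the helper name are illustrative assumptions:

```python
import os
import urllib.request

# In-container hostname for the Supervisor API.
SUPERVISOR_URL = "http://supervisor"


def backup_request(method: str, path: str, token: str) -> urllib.request.Request:
    """Build an authenticated request against the new /backups section."""
    req = urllib.request.Request(f"{SUPERVISOR_URL}{path}", method=method)
    req.add_header("Authorization", f"Bearer {token}")
    return req


# List all backups (the response payload now uses the "backups" key):
list_req = backup_request("GET", "/backups", os.environ.get("SUPERVISOR_TOKEN", ""))

# Delete a backup ("a1b2c3d4" is a hypothetical slug); DELETE is now the
# only supported method for this.
delete_req = backup_request(
    "DELETE", "/backups/a1b2c3d4", os.environ.get("SUPERVISOR_TOKEN", "")
)
```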

The old /snapshots endpoints are now deprecated and are scheduled for removal in Q4 of this year.

Backup structure changes#

For consistency, we have also changed the name of the meta file inside the backup tar from snapshot.json to backup.json. If you have a tool that uses that file you should look for both so your tool will work for existing as well as new backups.

Streaming ingress#

Some add-ons need to receive large payloads from the user, for instance file uploads. Previously, there was a limit of 16 MB per request for add-ons running behind ingress, and this is still the default. If you need to receive larger payloads, enable streaming by setting ingress_stream to true in the add-on configuration. The request is then streamed from the client to your add-on, with no size limit and virtually no overhead.

Note that not all webservers can handle streamed requests by default, so you might need to adjust your webserver configuration.
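In the add-on configuration this could look like the following; every field except ingress_stream is a hypothetical placeholder:

```yaml
name: Example Upload Add-on
version: "1.0.0"
slug: example_upload
ingress: true
ingress_stream: true  # stream requests through ingress, lifting the 16 MB limit
```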

Deprecated API endpoints#

Over the past years, we have restructured parts of our API endpoints, but we have also kept old endpoints working. If you use any of the deprecated endpoints in your tools/add-ons you should move to use the new ones now. All deprecated endpoints are scheduled for removal in Q4 this year.

Here is a list of the deprecated endpoints and their replacements:

| Deprecated endpoints | Replaced with |
| --- | --- |

In addition to this, the following are also deprecated and are also scheduled for removal in Q4 this year.

  • The environment variable HASSIO_TOKEN has been replaced with SUPERVISOR_TOKEN.
  • Using X-Hassio-Key header has been replaced with using Authorization with a Bearer token.
  • Using http://hassio/ to communicate with the Supervisor has been replaced with http://supervisor/.
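A sketch of migrating to the new variable and header, with a fallback to the deprecated environment variable for older Supervisor versions:

```python
import os


def supervisor_auth_headers() -> dict:
    """Build Supervisor auth headers, preferring the new SUPERVISOR_TOKEN
    variable and the Authorization/Bearer style over the deprecated
    HASSIO_TOKEN variable and X-Hassio-Key header."""
    token = os.environ.get("SUPERVISOR_TOKEN") or os.environ.get("HASSIO_TOKEN")
    if token is None:
        raise RuntimeError("No Supervisor token found in the environment")
    return {"Authorization": f"Bearer {token}"}
```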

Supervised installation#

Maintaining a supervised installation is currently not the best experience. The installation script most users rely on lags behind what the Supervisor expects from the host, and since there is no real upgrade path, users need to adjust their installations manually.

Recently we created the OS Agent, as mentioned in the blog on the main site. This allows for better communication between the host OS and the Supervisor, and brings in more features. To take advantage of these features, users of current supervised installations have to install the OS Agent manually.

An alternative to this route is to package and distribute the supervised installation as a deb package that can be installed and upgraded with apt on the host. For this to be viable, we are looking for a person (or group of people) who wants to create and maintain this type of deployment, bring the supervised installation method up to par with our OS, and, more importantly, make the updates needed on the host easier for users.

If you have questions about these changes feel free to reach out in the #devs_supervisor channel on our Discord server.

Until next time 👋

· 2 min read

A new state class, total_increasing has been added. In addition, the last_reset attribute is removed from SensorEntity. The driver for the changes is to make it easier to integrate with devices, like utility meters.

State classes#

There are two defined state classes:

  • measurement, the state represents a measurement in present time, for example a temperature, electric power, or the value of a stock portfolio. For supported sensors, statistics of hourly min, max and average sensor readings, or of the accumulated growth or decline of the sensor's value since it was first added, are updated hourly.
  • total_increasing, a monotonically increasing total, e.g. an amount of consumed gas, water or energy. When supported, the accumulated growth of the sensor's value since it was first added is updated hourly.


For sensors with state_class STATE_CLASS_TOTAL_INCREASING, a decreasing value is interpreted as the start of a new meter cycle or the replacement of the meter. It is important that the integration ensures the value cannot erroneously decrease when it is calculated from a sensor with measurement noise. This state class is useful for gas meters, electricity meters, water meters, etc.

The sensor's state when it's first added to Home Assistant is used as an initial zero-point. When a new meter cycle is detected the zero-point will be set to 0. Please refer to the tables below for how this affects the statistics.

Example of STATE_CLASS_TOTAL_INCREASING with a new meter cycle:


Example of STATE_CLASS_TOTAL_INCREASING where the initial state at the beginning of the new meter cycle is not 0, but 0 is used as zero-point:


This state class used to be represented by state class measurement in combination with a last_reset value. This approach has been deprecated and will be interpreted as a total_increasing state class instead with an automatic last reset.
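The cycle-detection rule above can be sketched in plain Python. This mirrors the described semantics for illustration only; it is not the actual statistics code:

```python
def accumulated_growth(readings: list[float]) -> float:
    """Accumulate the growth of a total_increasing sensor across meter cycles.

    The first reading is the initial zero-point. A decreasing value is
    treated as the start of a new meter cycle: the zero-point resets to 0,
    so the new reading counts in full.
    """
    total = 0.0
    prev = None
    for value in readings:
        if prev is None:
            prev = value  # initial state is the zero-point
            continue
        if value < prev:
            # Decrease => new meter cycle (or meter replacement).
            total += value
        else:
            total += value - prev
        prev = value
    return total
```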

· One min read

Temperature unit conversions are moving from the Entity base class to the SensorEntity base class. Unit conversions will only be done if the sensor's device_class attribute is set to DEVICE_CLASS_TEMPERATURE. If device_class is not set, or is set to something other than DEVICE_CLASS_TEMPERATURE, temperature conversion will still take place during a transition period, and a warning will be logged.

To facilitate this, the sensor entity model has been updated with two new properties, native_value and native_unit_of_measurement. This allows us to add additional unit conversions in the future instead of relying on the integrations to do it themselves.

Sensor implementations should no longer implement the state() property function or set the _attr_state attribute. Sensor implementations should also not implement the unit_of_measurement property function, set the _attr_unit_of_measurement attribute or set the unit_of_measurement member of EntityDescription.


native_value#

The value reported by the sensor. The actual state written to the state machine may be modified by SensorEntity due to unit conversions.


native_unit_of_measurement#

The unit of measurement of the sensor, if any. The unit_of_measurement written to the state machine may be modified by SensorEntity due to unit conversions.
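As an illustration of the model, the integration supplies values in its native unit and the base class owns the conversion. This is a simplified stand-in, not the real SensorEntity implementation:

```python
class TemperatureSensorSketch:
    """Simplified sketch of the native_value / native_unit_of_measurement model.

    The integration only reports native_value and native_unit_of_measurement;
    the (here greatly simplified) base class converts to the user's unit.
    """

    def __init__(self, native_value: float, native_unit: str, user_unit: str):
        self.native_value = native_value
        self.native_unit_of_measurement = native_unit
        self._user_unit = user_unit

    @property
    def state(self) -> float:
        # Conversion lives in the base class, not in each integration.
        if self.native_unit_of_measurement == "°C" and self._user_unit == "°F":
            return self.native_value * 9 / 5 + 32
        if self.native_unit_of_measurement == "°F" and self._user_unit == "°C":
            return (self.native_value - 32) * 5 / 9
        return self.native_value
```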

· 2 min read

The sensor entity model has been updated with two new properties, state_class and last_reset. The driver for both the new properties is to enable automatic generation of long-term statistics.


Sensor device classes such as DEVICE_CLASS_TEMPERATURE are used to represent wildly different types of data, for example:

  • A regularly updated temperature measurement
  • Historical or statistic data, for example daily average temperature
  • Future data, for example tomorrow's forecast

Differentiating between those sensors which represent a measurement and those which don't is needed in order to automatically make a reasonable selection of sensors to include in long-term statistics.

state_class#

The state_class property classifies the type of state: the state could be a measurement in present time from a temperature sensor or an energy meter, a historical value such as the average temperature during the last 24 hours or the amount of energy used last month, or a predicted value such as a weather forecast or the next garbage pickup schedule. If state_class="measurement", the state represents a current value, and not a historical aggregation or a prediction of the future. Otherwise, state_class=None. There is an architecture discussion with some additional background.

Note that measurement in present time above does not imply that the state has to be updated with a certain frequency, or that the sensor is not allowed to do indirect measurements such as integrating power to calculate energy. To put it in another way, if the sensor represents the latest observation or the newest data point in a time series it qualifies as state_class="measurement".


last_reset#

The time when an accumulating sensor such as an electricity usage meter, gas meter, water meter, etc. was initialized. If the time of initialization is unknown and the meter will never reset, set it to UNIX epoch 0: homeassistant.util.dt.utc_from_timestamp(0). Note that the datetime.datetime returned by the last_reset property will be converted to an ISO 8601-formatted string when the entity's state attributes are updated. When changing last_reset, the state must be a valid number.
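The epoch-0 convention and the ISO 8601 serialization can be reproduced with the standard library; utc_from_timestamp(0) is, in essence, a UTC-aware fromtimestamp:

```python
from datetime import datetime, timezone

# Equivalent of homeassistant.util.dt.utc_from_timestamp(0):
last_reset = datetime.fromtimestamp(0, tz=timezone.utc)

# The datetime is serialized to an ISO 8601 string when it is written
# to the entity's state attributes.
last_reset_attr = last_reset.isoformat()
```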

· 2 min read

We upgraded our frontend to Lit 2.0, a major version bump of both LitElement (3.0) and lit-html (2.0), which now continue together under the name Lit.

This upgrade comes with a ton of great improvements, but also with some breaking changes.

If you have developed a custom card or view and are using LitElement and lit-html from our components, your component will be using Lit 2.0 in the next release (2021.6). If you are unsure whether you are using LitElement from our components, look for code like this:

```js
const LitElement = Object.getPrototypeOf(
  customElements.get("ha-panel-lovelace")
);
const html = LitElement.prototype.html;
const css = LitElement.prototype.css;
```

This is not a recommended practice; we advise you to bundle Lit into your component, or import it from another source as in this example. This way your card does not depend on the Lit version shipped with Home Assistant.

One of the things that changed, is that the creation of the shadowRoot is no longer done in the constructor, but just before the first update. This means that if you directly interact with the DOM, like with a query selector, you can no longer assume shadowRoot will always be available.

For all the changes check the upgrade guide in the Lit documentation.

We expect most of the cards to work without issues with Lit 2.0, but ask custom card developers to ensure compatibility. You can do this using the current dev version of Home Assistant or by using a nightly version of Home Assistant, both currently use Lit 2.0.

· 2 min read

Three years ago Paul Ganssle wrote a comparison of time zone handling between pytz and python-dateutil. In this article he shows how easy it is to use pytz incorrectly in a way that is hard to spot, because it's almost correct:

```python
import pytz
from datetime import datetime, timedelta

NYC = pytz.timezone('America/New_York')
dt = datetime(2018, 2, 14, 12, tzinfo=NYC)
print(dt)
# 2018-02-14 12:00:00-04:56
```

(link to part of the article explaining why it's -4:56)

In Home Assistant 2021.6 we're going to switch to python-dateutil. You will need to upgrade your custom integration if it relies on the unofficial interface my_time_zone.localize(my_dt). Use Python's official method my_dt.astimezone(my_time_zone) instead.
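With the standard library's zoneinfo module (Python 3.9+, which is what Home Assistant ultimately settled on, see the update below) the correct pattern looks like this; the example date mirrors the pytz snippet above:

```python
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo  # requires system tz data or the tzdata package

NYC = ZoneInfo("America/New_York")

# Attaching a time zone directly now gives the expected offset,
# with no .localize() step needed:
dt = datetime(2018, 2, 14, 12, tzinfo=NYC)
# str(dt) -> "2018-02-14 12:00:00-05:00", not the surprising -04:56

# Converting an aware datetime into another zone:
utc_dt = datetime(2018, 2, 14, 17, tzinfo=timezone.utc)
local_dt = utc_dt.astimezone(NYC)
```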

The property hass.config.time_zone will also change to a string instead of a time zone object.

Thanks to @bdraco for helping revive this effort and push this change past the finish line. We actually found a couple of bugs during the migration! Also thanks to Paul Ganssle for maintaining python-dateutil and the excellent write up.

Update May 10#

Wow, time flies! Paul, the author of python-dateutil and also the author of the blog post that inspired us, pointed us to the fact that Python 3.9 includes upgraded timezone handling and that we should use that instead. With the help of Nick and Paul python-dateutil has been removed again and zoneinfo is used instead (PR).

· One min read

We recently merged a pull request to upgrade the astral library used in Home Assistant Core to version 2.2, which will be released with Home Assistant 2021.5. This is a major version bump of astral that includes some breaking changes, which required us to update our built-in helpers and integrations that depend on astral. This has resulted in a couple of breaking changes to our sun helpers.

Custom integration authors that are maintaining integrations that use the sun helpers or the astral library directly, should review the breaking changes and update their custom integrations if needed.

The sun helper's get_astral_location and get_location_astral_event_next have changed signatures to include an elevation parameter. The return value of get_astral_location has also changed to a tuple that includes the elevation.

```python
@callback
@bind_hass
def get_astral_location(
    hass: HomeAssistant,
) -> tuple[astral.location.Location, astral.Elevation]:
    """Get an astral location for the current Home Assistant configuration."""


@callback
def get_location_astral_event_next(
    location: astral.location.Location,
    elevation: astral.Elevation,
    event: str,
    utc_point_in_time: datetime.datetime | None = None,
    offset: datetime.timedelta | None = None,
) -> datetime.datetime:
    """Calculate the next specified solar event."""
```

Please see the changelog of astral for further details.

· 3 min read

Happy New Year everyone! 2021 is finally here 🎉

As you probably know, security issues were recently discovered in several popular custom integrations. You can read more about that here:

In light of these incidents, starting with the Home Assistant 2021.2.0 beta that was just released, we are changing two things that will affect custom integrations.

Deprecated utilities#

The sanitize_filename and sanitize_path helpers located in the homeassistant.util package have been deprecated and are pending removal. This will happen with the release of Home Assistant 2021.4.0, scheduled for the first week of April this year.

We have added raise_if_invalid_filename and raise_if_invalid_path as replacements. They are located in the same homeassistant.util package. These new functions raise a ValueError instead of relying on the developer comparing the function's output to its input to see if it differs. This prevents misuse.
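To see why raising prevents misuse, compare the two patterns. The validation logic below is an illustrative stand-in, not Home Assistant's actual implementation:

```python
import os


def raise_if_invalid_path_sketch(path: str) -> None:
    """Illustrative sketch: raise instead of returning a sanitized copy.

    The real helpers live in homeassistant.util; this only demonstrates
    the raise-on-invalid pattern that replaces compare-after-sanitize.
    """
    if ".." in path.split(os.sep) or path.startswith(os.sep) or "~" in path:
        raise ValueError(f"Invalid path: {path}")


# Old, error-prone pattern: the developer had to remember the comparison.
#   if sanitize_path(user_path) != user_path: ...
# New pattern: the helper raises by itself, so a forgotten check can no
# longer silently let a bad path through.
```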


Versions#

The second change is pretty cool! Versions!

The manifest.json file now supports a version key. The version should be a string with a major, minor and patch version, for example "1.0.0".
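A manifest with the new key could look like this; the domain, name and URL are placeholders:

```json
{
  "domain": "my_integration",
  "name": "My Integration",
  "documentation": "https://example.com",
  "version": "1.0.0"
}
```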

This version helps users tell you which version they had issues with. And if you ever find a security issue in your custom integration, Home Assistant will be able to block insecure versions from being used.

The version key is required from Home Assistant version 2021.6.

Hassfest updated#

hassfest is our internal tool that is used in Home Assistant to validate all integrations. In April we made this available as a GitHub Action to help you find issues in your custom integration. This action can be used in any custom integration hosted on GitHub. If you have not added that to your repository yet, now is the time! Read more about that here.

If you are using the hassfest GitHub action, you will now start to see warnings when it runs if you are missing the version key in your manifest.json file. This warning will become an error at a later point when the version key becomes fully required for custom integrations.

Serving files#

Making resources available to the user is a common use case for custom integrations, whether that is images, panels, or enhancements the user can use in Lovelace. The only way you should serve static files from a path is with hass.http.register_static_path. Use this method and avoid rolling your own, as that can lead to serious bugs or security issues.

```python
from pathlib import Path

should_cache = False
files_path = Path(__file__).parent / "static"

hass.http.register_static_path("/api/my_integration/static", str(files_path), should_cache)
```

That's it for this update about custom integrations. Keep doing awesome stuff! Until next time 👋

· 2 min read

In Home Assistant 0.118, there will be two changes that could impact your custom integration.

Removed deprecated helpers.template.extract_entities#

The previously deprecated extract_entities method of the Template helper has been removed (PR 42601). Instead of extracting entities and then manually listening for state changes, use the new async_track_template_result from the Event helper. It will dynamically make sure that every touched entity is tracked correctly.

```python
from homeassistant.helpers.event import async_track_template_result, TrackTemplate

# "light.bed_light" is a placeholder entity for this example.
template = "{{ states('light.bed_light') == 'on' }}"

async_track_template_result(
    hass,
    [TrackTemplate(template, None)],
    lambda event, updates: print(event, updates),
)
```

Improved System Health#

Starting with Home Assistant 0.118, we're deprecating the old way of providing system health information for your integration. Instead, create a file in your integration (PR 42785).

Starting with this release, you can also include health checks that take longer to resolve (PR 42831), like checking whether a service is online. The results will be passed to the frontend when they are ready.

"""Provide info to system health."""from homeassistant.components import system_healthfrom homeassistant.core import HomeAssistant, callback
from .const import DOMAIN

@callbackdef async_register(    hass: HomeAssistant, register: system_health.RegisterSystemHealth) -> None:    """Register system health callbacks."""    register.async_register_info(system_health_info)

async def system_health_info(hass):    """Get info for the info page."""    client =[DOMAIN]
    return {      "server_version": client.server_version,      "can_reach_server": system_health.async_check_can_reach_url(          hass, client.server_url      )    }

· 4 min read

GitHub Action#

You can now use our builder as a GitHub action! 🎉

This is already in use in our hassio-addons repository; you can see an example of how we implemented it here.

It can be used to ensure that the add-on will still build with changes made to your repository and publish the images as part of a release workflow. How to use the action is documented in the builder repository.

Here is an example of how you can use it:

```yaml
    name: Test build
    runs-on: ubuntu-latest
    - name: Checkout the repository
      uses: actions/checkout@v2
    - name: Test build
      uses: home-assistant/builder@master
      with:
        args: |
          --test \
          --all \
          --target /data
```

This example will run a test build on all supported architectures of the add-on.


Your repository is mapped to /data in the action, so if you have your add-on files in subdirectories, you need to supply --target /data/{directoryname} as an argument to the builder action.


Our API documentation has moved to the developer documentation site. During this move, it also got a style update to make it easier to navigate. Some of the endpoints are still missing some content. If you have not yet met your quota for Hacktoberfest, maybe you want to contribute some more details to our API descriptions?

API Changes#

  • Using the /homeassistant/* endpoints is deprecated and will be removed later this year. You need to use /core/* instead.
  • Using http://hassio/ is deprecated and will be removed later this year. You need to use http://supervisor/ instead.
  • Using HASSIO_TOKEN is deprecated and will be removed later this year. You need to use SUPERVISOR_TOKEN instead.
  • Deleting snapshots with a POST call to /supervisor/snapshots/<slug>/remove is deprecated and will be removed later this year. You need to use the DELETE method when calling /supervisor/snapshots/<slug> instead.
  • Using X-Hassio-Key header as an authentication method is deprecated and will be removed later this year. You need to use an authorization header with a Bearer token instead.

The API documentation has been updated to reflect these changes.

Add-on options#

The permissions of the /data/options.json file have changed from 644 to 600. If your add-on runs as non-root and reads this file, it will now run into permission issues.

There are several things you can do in your add-on to keep using this information:

  • If you are using s6-overlay in your add-on, you can use /etc/fix-attrs.d to ensure that the user your add-on runs as has access to the file.
  • You can change your add-on to run as root (default).
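For the s6-overlay route, a fix-attrs.d entry could look like the following. The file name, user and modes here are hypothetical; the line format (path, recurse flag, owner, file mode, directory mode) is s6-overlay's fix-attrs convention:

```
# /etc/fix-attrs.d/options
# path                recurse  account  file-mode  dir-mode
/data/options.json    false    abc:abc  0640       0750
```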


Until now, the Supervisor, our plugins and add-ons have been using a mix of the build number and Semantic Versioning (SemVer) as the versioning system. We have decided to replace that for these repositories and to adopt Calendar Versioning (CalVer) as our versioning system instead.

We are migrating the Supervisor from release-based development to continuous development. This fits perfectly with our existing channel-based update strategy (stable, beta and dev). We are now leveraging automated pipelines to build and push out new Supervisor versions to the correct channels. Moving to this structure removed the need for a dual-branch setup, so both our dev and master branches have been replaced with a new main branch. Our plugins (DNS, Multicast, Observer, CLI) for the Supervisor will also follow this continuous development principle.

We made this move to provide higher software quality with an automatic test system. Every commit now triggers a new dev release, which gets tested by our test instances. Issues are immediately reported to Sentry. This gives us the opportunity to test all changes before we create a release. When a release is created, the changes become available in the beta channel. Once declared stable, we promote the release to the stable channel.

We are using our builder action with GitHub actions to build and publish the Supervisor, our plugins and base images for our Docker containers. If you are interested in how we are doing this, you can look at the builder action for the Supervisor here, and the action helpers here.