
· 2 min read

The sensor entity model has been updated with two new properties, state_class and last_reset. Both new properties were added to enable automatic generation of long-term statistics.

state_class

Sensor device classes such as DEVICE_CLASS_TEMPERATURE are used to represent wildly different types of data, for example:

  • A regularly updated temperature measurement
  • Historical or statistic data, for example daily average temperature
  • Future data, for example tomorrow's forecast

Differentiating between those sensors which represent a measurement and those which don't is needed in order to automatically make a reasonable selection of sensors to include in long-term statistics.

The state_class property classifies the type of state: the state could be a measurement in present time from a temperature sensor or an energy meter, a historical value such as the average temperature during the last 24 hours or the amount of energy used last month, or a predicted value such as a weather forecast or the next garbage pickup. If state_class="measurement", the state represents a current value, not a historical aggregation or a prediction of the future. Otherwise, state_class=None. There is an architecture discussion with some additional background.

Note that measurement in present time above does not imply that the state has to be updated with a certain frequency, or that the sensor is not allowed to do indirect measurements such as integrating power to calculate energy. To put it another way: if the sensor represents the latest observation or the newest data point in a time series, it qualifies as state_class="measurement".
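
As a minimal sketch (assuming the STATE_CLASS_MEASUREMENT constant exported by the sensor integration), a regularly updated temperature sensor could declare its state class like this:

from homeassistant.components.sensor import STATE_CLASS_MEASUREMENT, SensorEntity


class MyTemperatureSensor(SensorEntity):
    """Temperature sensor reporting the latest observation."""

    @property
    def state_class(self):
        """Return the state class of this entity."""
        return STATE_CLASS_MEASUREMENT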

last_reset

The time when an accumulating sensor (such as an electricity usage meter, gas meter, or water meter) was initialized. If the time of initialization is unknown and the meter will never reset, set it to UNIX epoch 0: homeassistant.util.dt.utc_from_timestamp(0). Note that the datetime.datetime returned by the last_reset property will be converted to an ISO 8601-formatted string when the entity's state attributes are updated. When changing last_reset, the state must be a valid number.
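
A minimal sketch of an accumulating meter whose initialization time is unknown (the class and its values are illustrative):

import homeassistant.util.dt as dt_util
from homeassistant.components.sensor import STATE_CLASS_MEASUREMENT, SensorEntity


class MyEnergyMeter(SensorEntity):
    """Accumulating energy meter that never resets."""

    @property
    def state_class(self):
        """Return the state class of this entity."""
        return STATE_CLASS_MEASUREMENT

    @property
    def last_reset(self):
        """Return the initialization time (unknown, meter never resets)."""
        return dt_util.utc_from_timestamp(0)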

· 2 min read

We upgraded our frontend to use Lit 2.0. This is a major bump of both LitElement (3.0) and lit-html (2.0), which will now continue together under the name Lit.

This upgrade comes with a ton of great improvements, but also with some breaking changes.

If you have developed a custom card or view, and are using LitElement and lit-html from our components, your component will be using Lit 2.0 in the next release (2021.6). If you are not sure whether you are using LitElement from our components, check for code that looks something like this:

const LitElement = Object.getPrototypeOf(customElements.get("ha-panel-lovelace"));
const html = LitElement.prototype.html;
const css = LitElement.prototype.css;

This is not a recommended practice; we advise you to bundle Lit into your component, or import it from unpkg.com or another source, as in this example. This way your card does not depend on the Lit version that is shipped with Home Assistant.

One of the things that changed is that the creation of the shadowRoot is no longer done in the constructor, but just before the first update. This means that if you directly interact with the DOM, for example with a query selector, you can no longer assume the shadowRoot will always be available.

For all the changes check the upgrade guide in the Lit documentation.

We expect most cards to work without issues with Lit 2.0, but we ask custom card developers to verify compatibility. You can do this using the current dev version of Home Assistant or a nightly version of Home Assistant; both currently use Lit 2.0.

· 2 min read

Three years ago, Paul Ganssle wrote a comparison of time zone handling between pytz and python-dateutil. In this article he shows how easy it is to use pytz in an incorrect way that is hard to spot because it is almost correct:

import pytz
from datetime import datetime, timedelta

NYC = pytz.timezone('America/New_York')
dt = datetime(2018, 2, 14, 12, tzinfo=NYC)
print(dt)
# 2018-02-14 12:00:00-04:56

(link to part of the article explaining why it's -4:56)

In Home Assistant 2021.6 we're going to switch to python-dateutil. You will need to upgrade your custom integration if it relies on the unofficial interface my_time_zone.localize(my_dt). Use Python's official method my_dt.astimezone(my_time_zone) instead.

The property hass.config.time_zone will also change to a string instead of a time zone object.
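
A minimal sketch of the migration, assuming the helpers in homeassistant.util.dt (get_time_zone and utcnow):

from homeassistant.util import dt as dt_util

time_zone = dt_util.get_time_zone(hass.config.time_zone)

# Before (pytz-specific, unofficial API):
#     local_dt = time_zone.localize(naive_dt)
# After (standard datetime API, works with any tzinfo implementation):
local_dt = dt_util.utcnow().astimezone(time_zone)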

Thanks to @bdraco for helping revive this effort and push this change past the finish line. We actually found a couple of bugs during the migration! Also thanks to Paul Ganssle for maintaining python-dateutil and the excellent write up.

Update May 10

Wow, time flies! Paul, the author of python-dateutil and also the author of the blog post that inspired us, pointed out that Python 3.9 includes improved time zone handling and that we should use that instead. With the help of Nick and Paul, python-dateutil has been removed again and zoneinfo is used in its place (PR).
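
For comparison, the same construction as the pytz example above, written with Python 3.9's zoneinfo, yields the expected offset:

from datetime import datetime
from zoneinfo import ZoneInfo

NYC = ZoneInfo("America/New_York")
dt = datetime(2018, 2, 14, 12, tzinfo=NYC)
print(dt)
# 2018-02-14 12:00:00-05:00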

· One min read

We recently merged a pull request to upgrade the astral library used in Home Assistant Core to version 2.2. This will be released with Home Assistant 2021.5. It is a major version bump of astral that includes some breaking changes, which required us to update our built-in helpers and the integrations that depend on astral. This has resulted in a couple of breaking changes to our sun helpers.

Custom integration authors who maintain integrations that use the sun helpers or the astral library directly should review the breaking changes and update their custom integrations if needed.

The sun helper has changed the signatures of get_astral_location and get_location_astral_event_next to include an elevation parameter. The return value of get_astral_location has also changed to a tuple that includes the elevation.

@callback
@bind_hass
def get_astral_location(
    hass: HomeAssistant,
) -> tuple[astral.location.Location, astral.Elevation]:
    """Get an astral location for the current Home Assistant configuration."""


@callback
def get_location_astral_event_next(
    location: astral.location.Location,
    elevation: astral.Elevation,
    event: str,
    utc_point_in_time: datetime.datetime | None = None,
    offset: datetime.timedelta | None = None,
) -> datetime.datetime:
    """Calculate the next specified solar event."""

Please see the changelog of astral for further details.
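
A minimal usage sketch against the new signatures (assuming the helpers live in homeassistant.helpers.sun):

from homeassistant.helpers.sun import (
    get_astral_location,
    get_location_astral_event_next,
)

# get_astral_location now returns a (location, elevation) tuple
location, elevation = get_astral_location(hass)

# ...and the elevation is passed along when calculating events
next_sunrise = get_location_astral_event_next(location, elevation, "sunrise")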

· 3 min read

Happy New Year everyone! 2021 is finally here 🎉

As you are probably aware, we were recently made aware of security issues in several popular custom integrations. You can read more about that here:

In light of these incidents, starting with the Home Assistant 2021.2.0 beta that was just released, we are changing two things that will affect custom integrations.

Deprecated utilities

The sanitize_filename and sanitize_path helpers located in the homeassistant.util package have been deprecated and are pending removal. This will happen with the release of Home Assistant 2021.4.0, scheduled for the first week of April this year.

We have added raise_if_invalid_filename and raise_if_invalid_path as replacements. They are located in the same homeassistant.util package. These new functions raise a ValueError instead of relying on the developer comparing the output of the function to the input to see if it is different. This will prevent misuse.
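
A minimal sketch of the new pattern (save_upload and filename are illustrative names):

from homeassistant.util import raise_if_invalid_filename


def save_upload(filename: str) -> None:
    """Store an uploaded file only if its name is safe."""
    try:
        raise_if_invalid_filename(filename)
    except ValueError:
        # Reject the request instead of comparing sanitized output to the input
        return
    # ... continue handling the file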

Versions

The second change is pretty cool! Versions!

The manifest.json file now supports a version key. The version should be a string with a major, minor and patch version. For example, "1.0.0".

This version helps users tell you which version they had issues with. And if you ever find a security issue with your custom integration, Home Assistant will be able to block insecure versions from being used.

The version key will be required starting with Home Assistant 2021.6.
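
A minimal manifest.json sketch with the new key (all values are illustrative):

{
  "domain": "my_integration",
  "name": "My Integration",
  "documentation": "https://example.com/my_integration",
  "version": "1.0.0",
  "requirements": [],
  "codeowners": ["@example"]
}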

Hassfest updated

hassfest is our internal tool that is used in Home Assistant to validate all integrations. In April we made this available as a GitHub Action to help you find issues in your custom integration. This action can be used in any custom integration hosted on GitHub. If you have not added that to your repository yet, now is the time! Read more about that here.

If you are using the hassfest GitHub action, you will now start to see warnings when it runs if you are missing the version key in your manifest.json file. This warning will become an error at a later point when the version key becomes fully required for custom integrations.

Serving files

Making resources available to the user is a common use case for custom integrations, whether that is images, panels, or enhancements the user can use in Lovelace. The only way one should serve static files from a path is to use hass.http.register_static_path. Use this method and avoid rolling your own, as that can lead to serious bugs or security issues.

from pathlib import Path

should_cache = False
files_path = Path(__file__).parent / "static"
hass.http.register_static_path("/api/my_integration/static", str(files_path), should_cache)

That's it for this update about custom integrations. Keep doing awesome stuff! Until next time 👋

· 2 min read

In Home Assistant 0.118, there will be two changes that could impact your custom integration.

Removed deprecated helpers.template.extract_entities

The previously deprecated extract_entities method from the Template helper has been removed (PR 42601). Instead of extracting entities and then manually listening for state changes, use the new async_track_template_result from the Event helper. It will dynamically make sure that every touched entity is tracked correctly.

from homeassistant.helpers.event import TrackTemplate, async_track_template_result
from homeassistant.helpers.template import Template

template = Template("{{ states('light.kitchen') == 'on' }}", hass)

async_track_template_result(
    hass,
    [TrackTemplate(template, None)],
    lambda event, updates: print(event, updates),
)

Improved System Health

Starting with Home Assistant 0.118, we're deprecating the old way of providing system health information for your integration. Instead, create a system_health.py file in your integration (PR 42785).

Starting with this release, you can also include health checks that take longer to resolve (PR 42831), like checking if the service is online. The results will be passed to the frontend when they are ready.

"""Provide info to system health."""
from homeassistant.components import system_health
from homeassistant.core import HomeAssistant, callback

from .const import DOMAIN


@callback
def async_register(
hass: HomeAssistant, register: system_health.RegisterSystemHealth
) -> None:
"""Register system health callbacks."""
register.async_register_info(system_health_info)


async def system_health_info(hass):
"""Get info for the info page."""
client = hass.data[DOMAIN]

return {
"server_version": client.server_version,
"can_reach_server": system_health.async_check_can_reach_url(
hass, client.server_url
)
}

· 4 min read

GitHub Action

You can now use our builder as a GitHub action! 🎉

This is already in use in our hassio-addons repository; you can see an example of how we implemented it here.

It can be used to ensure that the add-on will still build with changes made to your repository and publish the images as part of a release workflow. How to use the action is documented in the builder repository.

Here is an example of how you can use it:

jobs:
  build:
    name: Test build
    runs-on: ubuntu-latest
    steps:
      - name: Checkout the repository
        uses: actions/checkout@v2
      - name: Test build
        uses: home-assistant/builder@master
        with:
          args: |
            --test \
            --all \
            --target /data

This example will run a test build on all supported architectures of the add-on.

tip

Your repository is mapped to /data in the action, so if you have your add-on files in subdirectories, you need to supply --target /data/{directoryname} as an argument to the builder action.

Documentation

Our API documentation has moved to the developer documentation site. During this move, it also got a style update to make it easier to navigate. Some of the endpoints are still missing some content. If you have not yet met your quota for Hacktoberfest, maybe you want to contribute some more details to our API descriptions?

API Changes

  • Using the /homeassistant/* endpoints is deprecated and will be removed later this year. You need to use /core/* instead.
  • Using http://hassio/ is deprecated and will be removed later this year. You need to use http://supervisor/ instead.
  • Using HASSIO_TOKEN is deprecated and will be removed later this year. You need to use SUPERVISOR_TOKEN instead.
  • Deleting snapshots with POST calling /supervisor/snapshots/<slug>/remove is deprecated and will be removed later this year. You need to use the DELETE method when calling /supervisor/snapshots/<slug> instead.
  • Using X-Hassio-Key header as an authentication method is deprecated and will be removed later this year. You need to use an authorization header with a Bearer token instead.

The API documentation has been updated to reflect these changes.

Add-on options

The permissions of the /data/options.json file have changed from 644 to 600. If your add-on runs as non-root and reads this file, you will now run into permission issues.

There are several things you can do in your add-on to keep using this information:

  • If you are using S6-overlay in your add-on, you can use /etc/fix-attrs.d to ensure that the user your add-on runs as has access to the file (see the sketch after this list).
  • You can change your add-on to run as root (default).
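
For example, a minimal fix-attrs.d sketch, assuming your add-on image runs as a user named abc (the file name and user are illustrative; the fields are path, recurse, owner, file mode, directory mode), placed in a file such as /etc/fix-attrs.d/options:

/data/options.json false abc 0640 0750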

Releases

Until now, the Supervisor, our plugins and add-ons have been using a mix of the build number and Semantic Versioning (SemVer) as the versioning system. We have decided to replace that for these repositories and to adopt Calendar Versioning (CalVer) as our versioning system instead.

We are migrating the Supervisor from release-based development to continuous development. This fits perfectly with our existing channel-based update strategy (stable, beta and dev). We are now leveraging automated pipelines to build and push out new Supervisor versions to the correct channels. Moving to this structure removed the need for a dual-branch setup, so both our dev and master branches have now been replaced with a new main branch. Our plugins (DNS, Multicast, Observer, CLI) for the Supervisor will also follow this continuous development principle.

We made this move to provide higher software quality with an automatic test system. Every commit now triggers a new dev release, which gets tested by our test instances. Issues are immediately reported to Sentry. This gives us the opportunity to test all changes before we create a release. When a release is created, the changes become available in the beta channel. Once declared stable, we can promote the release to the stable channel.

We are using our builder action with GitHub actions to build and publish the Supervisor, our plugins and base images for our Docker containers. If you are interested in how we are doing this, you can look at the builder action for the Supervisor here, and the action helpers here.

· 5 min read

In 0.115 we converted our more info controls to be lazy-loaded. This was done because the more info dialog is loaded at an early stage of the page load. Every domain can have its own controls, and they can be pretty heavy. We don't need all these elements on page load, so we decided to only load them when they are needed.

We got feedback from Lovelace custom card developers that this broke their cards, as not all the elements would be available anymore. While we don't support using our internal elements in custom cards, for the reasons mentioned below, we decided to revert this change for 0.115. In 0.116 we will re-add the lazy loading of the more info dialog controls. To help our Lovelace custom card developers, we have made the function that we use to lazy-load the more info controls available for custom card developers. This means you can load the controls of the domain of choice if your card needs them.

info

Please be aware that we do not support this in any way. We won't promise the same elements will be available in a future update, and breaking changes will happen when you rely on our elements.

An example loading the more info controls of the light domain:

const helpers = await loadCardHelpers();
helpers.importMoreInfoControl("light");

Why do we do this?

We always try to make our frontend as fast as possible. Not only on your super fast desktop PC with fiber internet but also on a slow and cheap phone that only has 3G.

One of the things we try to optimize is loading our code. The less code we load, the faster it loads and the sooner we can start rendering the page. We also use less memory if we don't load everything. There is, however, a tradeoff: if we only ever load the parts we need, we have to be able to load every element separately. That would mean a lot of small network requests, which ends up being slower than loading a little more in fewer requests, because of the overhead of each network request.

So it is always a search for the right balance between chunk sizes and the number of chunks needed. The splitting of our code into chunks is done with a bundler. At the moment, we use Webpack for this.

Some custom Lovelace developers have asked us why we can't expose all the elements we use internally to custom cards. This is simply because the element you need might not be loaded yet, as it was not needed yet.

What makes this extra difficult is that we use custom elements. A custom element can only be defined once; trying to define it again throws an error. This means we can't have a mix of bigger chunks and separate elements, as it would error when an element is loaded a second time.

If we wanted to make this happen, we would either have to load every component on the first load, which would make the first load very slow, or split every component into its own chunk, which would cause a lot of small network requests and thus also result in a slower experience.

Besides that, the elements we use internally are not meant to be used by external developers; the API is not documented or supported and could break at any time. In any release we could decide to replace an element because there is a better one or a new use case, or change its API, like we recently did for the chart element. We could not develop at the pace we do now if we had to support all our elements: we develop an application, and the elements are just our building blocks.

What about external elements?

We use external custom elements in our frontend, like the Material Web Components from Google. While we would love you to use the same elements to provide a uniform experience, we cannot advise this.

Just like our own elements, the external elements are part of our code-split chunks, and they will probably be lazy-loaded. That means they will not be available at all times and could be loaded later. So if a custom card loads and defines an mwc element because it was not available at the time the card needed it, and we then try to define it again when our lazy-loaded chunk arrives, the Home Assistant frontend will run into an error.

Unfortunately, there is no complete technical solution for this at the moment. There are partial solutions, like scoped elements from open-wc, but they will not work in most cases, as the imported element will either self-register or define sub-elements that cannot be scoped. There is a discussion about a proposal for Scoped Custom Element Definitions that could potentially fix this problem, but it may take a long time before it is available in all our supported browsers, if the proposal is accepted at all.

Is there a solution?

Is there a solution to all these problems, so custom cards can provide the same uniform user experience without the risk of breaking changes every release?

The best solution we see is a set of elements created by the custom card community. This set would have its own namespace that would not collide with that of the elements that Home Assistant uses. All custom cards could use these elements, without the risk of breaking changes.

· 2 min read

Custom Element developers can now create Custom View Layouts that users can load and use!

In 0.116, we will be changing the way that we create Views in Lovelace. In the past, we had 2 views, a default view and a panel view. When talking about adding Drag and Drop to Lovelace, we decided we could do even better and start allowing custom view types.

Custom Developers will now be able to create a view that receives the following properties:

interface LovelaceViewElement {
  hass?: HomeAssistant;
  lovelace?: Lovelace;
  index?: number;
  cards?: Array<LovelaceCard | HuiErrorCard>;
  badges?: LovelaceBadge[];
  setConfig(config: LovelaceViewConfig): void;
}

Cards and Badges will be created and maintained by the core code and given to the custom view. The custom views are meant to load the cards and badges and display them in a customized layout.

Here is an example (note: it does not implement all of the properties, just enough to illustrate the idea):

class MyNewView extends LitElement {
  setConfig(_config) {}

  static get properties() {
    return {
      cards: { type: Array, attribute: false },
    };
  }

  render() {
    if (!this.cards) {
      return html``;
    }
    return html`${this.cards.map((card) => html`<div>${card}</div>`)}`;
  }
}

And you can define this element in the Custom Element Registry just as you would with a Custom Card:

customElements.define("my-new-view", MyNewView);

You can find an example of this in our default view, the Masonry View, located at frontend/src/panels/lovelace/views/hui-masonry-view.ts.

A user who downloads and installs your new Custom View can then use it by editing the YAML configuration of their view to be:

- title: Home View
  type: custom:my-new-view
  badges: [...]
  cards: [...]

Custom Developers can add a layout property to each card that can store a key, position info, width, height, etc:

- type: weather-card
  layout:
    key: 1234
    width: 54px
  entity: weather.my_weather

Breaking Change

For Custom Card developers that use something like this:

const LitElement = Object.getPrototypeOf(customElements.get("hui-view"));

You will no longer be able to use the hui-view element to retrieve LitElement, as it has been changed to be an UpdatingElement. Instead, you can use:

const LitElement = Object.getPrototypeOf(customElements.get("hui-masonry-view"));

But note! This is not supported by HA. In the future, this way of importing LitElement may stop working.

· 8 min read

We use Alpine for most of our Containers. It is the perfect distribution for containers because it is small (BusyBox based), available for a lot of CPU architectures, and the package system is slim. Alpine uses musl as their C library instead of the more commonly used glibc.

Alpine and musl are relatively young compared to their peers (15 and 9 years old, respectively) but have seen a significant development pace. Because things move so fast, a lot of misconceptions exist about both, based on things that are no longer true. The goal of this post is to address a couple of those and explain how we have solved them.

This blogpost is not meant as a musl vs. glibc flamewar. Each use case is different and has its own trade-offs. For example, we use glibc in our OS.

For the tests, I used the images from the Docker Python library, and the results are published to our base images. I used pyperformance for lab testing and the Home Assistant internal benchmark tools for a more real-life comparison. The test environment was running inside a container on the same Docker host.

C/POSIX standard library

I often read: Python is slower when it uses musl as the default C library. This claim is not 100% correct. If the Python runtime is compiled with the same GCC and with -O3, the glibc variant is a bit faster in the lab benchmark, but in the real world, the difference is insignificant. Alpine compiles it with -Os while most other distributions compile it with -O2. This explains the often-reported difference between the Python runtime interpreters. When using the same compiler optimizations, musl-based Python runtimes have no negative side effects.

But there is a game-changer that makes the musl runtime more useful than the glibc-based one: the memory allocator jemalloc, a general-purpose malloc implementation that emphasizes fragmentation avoidance and scalable concurrency support. There is an interesting effect, which I found in a blog post about Rust: some developers saw that musl is much faster when using jemalloc, while glibc is slower when using jemalloc. With glibc, the benefit of jemalloc is certainly not speed, since it optimizes memory management, but musl gets both benefits. While the difference between pure musl and glibc can be ignored, the difference between musl + jemalloc and glibc is substantial (with the GCC memory allocator built-in optimization disabled). Yes, today's jemalloc is compatible with musl (there was a time when it was not).

Compiler

How you compile Python is also essential. There were statements from Fedora/Red Hat about disabling semantic interposition to get a big performance boost. I was not able to reproduce this on GCC 9.3.0, but I also saw no adverse side effects. I can recommend disabling semantic interposition, like the built-in allocator optimization, and linking jemalloc at build time. I would also recommend using the -O3 optimization. We never saw an issue with these aggressive optimizations on our targeted platforms. I need to say that, unlike the distro Python runtime interpreters, we don't need to run everywhere, so we can use --enable-optimizations without any override and add more flags. I can say today that PGO/LTO/-O3 make Python faster and that it works on our target CPUs.

Python packages

Alpine indeed has no manylinux compatibility with musl. If you don't cache your builds, C extensions need to be compiled when installing packages that require them. This process takes time, just as if you cross-build with QEMU for different CPU architectures. You cannot get precompiled binaries from PyPI. This is not a problem for us, as the binaries provided on PyPI are mostly not optimized for our target systems.

To fix the installation times of Python packages, we created our own wheel index and a backend to compile all needed wheels and keep them up to date using CI agents. We pre-build over 1k packages for each CPU architecture, so the build time of the Dockerfile is not so important at all.

Alpine Linux

Alpine is a great base system for containers and allows us to provide the best experience to our users. A big thanks to Alpine Linux, musl, and jemalloc, which make this all possible.

The table below shows the results comparing Alpine Linux's Python runtime with our optimized build (GCC 9.3.0/musl). All tests were done using Python 3.8.3.

| Benchmark | Alpine | Optimized |
| --- | --- | --- |
| 2to3 | 924 ms | 699 ms: 1.32x faster (-24%) |
| chameleon | 37.9 ms | 25.6 ms: 1.48x faster (-33%) |
| chaos | 393 ms | 273 ms: 1.44x faster (-31%) |
| crypto_pyaes | 373 ms | 245 ms: 1.52x faster (-34%) |
| deltablue | 22.8 ms | 16.4 ms: 1.39x faster (-28%) |
| django_template | 184 ms | 145 ms: 1.27x faster (-21%) |
| dulwich_log | 157 ms | 122 ms: 1.29x faster (-22%) |
| fannkuch | 1.81 sec | 1.32 sec: 1.38x faster (-27%) |
| float | 363 ms | 263 ms: 1.38x faster (-28%) |
| genshi_text | 113 ms | 83.9 ms: 1.34x faster (-26%) |
| genshi_xml | 226 ms | 171 ms: 1.32x faster (-24%) |
| go | 816 ms | 598 ms: 1.36x faster (-27%) |
| hexiom | 36.8 ms | 24.2 ms: 1.52x faster (-34%) |
| json_dumps | 34.8 ms | 25.6 ms: 1.36x faster (-26%) |
| json_loads | 61.2 us | 47.4 us: 1.29x faster (-23%) |
| logging_format | 30.0 us | 23.5 us: 1.28x faster (-22%) |
| logging_silent | 673 ns | 486 ns: 1.39x faster (-28%) |
| logging_simple | 27.2 us | 21.3 us: 1.27x faster (-22%) |
| mako | 54.5 ms | 35.6 ms: 1.53x faster (-35%) |
| meteor_contest | 344 ms | 219 ms: 1.57x faster (-36%) |
| nbody | 526 ms | 305 ms: 1.73x faster (-42%) |
| nqueens | 368 ms | 246 ms: 1.49x faster (-33%) |
| pathlib | 64.4 ms | 45.2 ms: 1.42x faster (-30%) |
| pickle | 20.3 us | 17.1 us: 1.19x faster (-16%) |
| pickle_dict | 40.2 us | 33.6 us: 1.20x faster (-16%) |
| pickle_list | 6.77 us | 5.88 us: 1.15x faster (-13%) |
| pickle_pure_python | 1.85 ms | 1.27 ms: 1.45x faster (-31%) |
| pidigits | 274 ms | 222 ms: 1.24x faster (-19%) |
| pyflate | 2.53 sec | 1.74 sec: 1.45x faster (-31%) |
| python_startup | 14.9 ms | 12.1 ms: 1.23x faster (-19%) |
| python_startup_no_site | 9.84 ms | 8.24 ms: 1.19x faster (-16%) |
| raytrace | 1.61 sec | 1.23 sec: 1.30x faster (-23%) |
| regex_compile | 547 ms | 398 ms: 1.38x faster (-27%) |
| regex_dna | 445 ms | 484 ms: 1.09x slower (+9%) |
| regex_effbot | 10.3 ms | 9.96 ms: 1.03x faster (-3%) |
| regex_v8 | 81.8 ms | 71.6 ms: 1.14x faster (-12%) |
| richards | 265 ms | 182 ms: 1.46x faster (-31%) |
| scimark_fft | 1.31 sec | 851 ms: 1.54x faster (-35%) |
| scimark_lu | 616 ms | 384 ms: 1.61x faster (-38%) |
| scimark_monte_carlo | 390 ms | 248 ms: 1.57x faster (-36%) |
| scimark_sor | 838 ms | 571 ms: 1.47x faster (-32%) |
| scimark_sparse_mat_mult | 19.0 ms | 13.2 ms: 1.43x faster (-30%) |
| spectral_norm | 567 ms | 388 ms: 1.46x faster (-32%) |
| sqlalchemy_declarative | 364 ms | 286 ms: 1.27x faster (-21%) |
| sqlalchemy_imperative | 60.3 ms | 46.8 ms: 1.29x faster (-22%) |
| sqlite_synth | 6.88 us | 5.09 us: 1.35x faster (-26%) |
| sympy_expand | 1.39 sec | 1.05 sec: 1.32x faster (-24%) |
| sympy_integrate | 67.3 ms | 49.5 ms: 1.36x faster (-26%) |
| sympy_sum | 505 ms | 389 ms: 1.30x faster (-23%) |
| sympy_str | 945 ms | 656 ms: 1.44x faster (-31%) |
| telco | 17.9 ms | 12.5 ms: 1.44x faster (-31%) |
| tornado_http | 347 ms | 273 ms: 1.27x faster (-21%) |
| unpack_sequence | 232 ns | 212 ns: 1.09x faster (-9%) |
| unpickle | 41.6 us | 30.7 us: 1.36x faster (-26%) |
| unpickle_list | 10.5 us | 9.24 us: 1.14x faster (-12%) |
| unpickle_pure_python | 1.28 ms | 945 us: 1.36x faster (-26%) |
| xml_etree_parse | 335 ms | 292 ms: 1.15x faster (-13%) |
| xml_etree_iterparse | 281 ms | 226 ms: 1.24x faster (-20%) |
| xml_etree_generate | 330 ms | 219 ms: 1.51x faster (-34%) |
| xml_etree_process | 263 ms | 181 ms: 1.45x faster (-31%) |