How to install py-spy on a HA OS instance, please instruct

I’m traveling tomorrow so I might miss your response, but feel free to ping me if it turns out it’s not bad data being injected somehow.

Ping:

$.data.entities[1187].capabilities.available_tones<key: 1>=1(<class 'int'>, $.data.entities[1187].capabilities.available_tones<key: 2>=2(<class 'int'>, $.data.entities[1187].capabilities.available_tones<key: 3>=3(<class 'int'>, $.data.entities[1187].capabilities.available_tones<key: 4>=4(<class 'int'>, $.data.entities[1187].capabilities.available_tones<key: 5>=5(<class 'int'>, $.data.entities[1187].capabilities.available_tones<key: 7>=7(<class 'int'>, $.data.entities[1187].capabilities.available_tones<key: 8>=8(<class 'int'>, $.data.entities[1187].capabilities.available_tones<key: 9>=9(<class 'int'>, $.data.entities[1187].capabilities.available_tones<key: 10>=10(<class 'int'>, $.data.entities[1187].capabilities.available_tones<key: 11>=11(<class 'int'>, $.data.entities[1187].capabilities.available_tones<key: 12>=12(<class 'int'>, $.data.entities[1187].capabilities.available_tones<key: 13>=13(<class 'int'>, $.data.entities[1187].capabilities.available_tones<key: 14>=14(<class 'int'>, $.data.entities[1187].capabilities.available_tones<key: 15>=15(<class 'int'>, $.data.entities[1187].capabilities.available_tones<key: 16>=16(<class 'int'>, $.data.entities[1187].capabilities.available_tones<key: 17>=17(<class 'int'>, $.data.entities[1187].capabilities.available_tones<key: 18>=18(<class 'int'>, $.data.entities[1187].capabilities.available_tones<key: 19>=19(<class 'int'>, $.data.entities[1187].capabilities.available_tones<key: 20>=20(<class 'int'>, $.data.entities[1187].capabilities.available_tones<key: 21>=21(<class 'int'>, $.data.entities[1187].capabilities.available_tones<key: 22>=22(<class 'int'>, $.data.entities[1187].capabilities.available_tones<key: 23>=23(<class 'int'>, $.data.entities[1187].capabilities.available_tones<key: 24>=24(<class 'int'>, $.data.entities[1187].capabilities.available_tones<key: 25>=25(<class 'int'>, $.data.entities[1187].capabilities.available_tones<key: 26>=26(<class 'int'>, $.data.entities[1187].capabilities.available_tones<key: 27>=27(<class 'int'>, $.data.entities[1187].capabilities.available_tones<key: 28>=28(<class 'int'>, $.data.entities[1187].capabilities.available_tones<key: 29>=29(<class 'int'>, $.data.entities[1187].capabilities.available_tones<key: 30>=30(<class 'int'>, $.data.entities[1187].capabilities.available_tones<key: 255>=255(<class 'int'>, $

I also see this now:

2022-06-24 09:43:20 ERROR (MainThread) [homeassistant.helpers.storage] Error writing config for core.entity_registry: Failed to serialize to JSON: /config/.storage/core.entity_registry. Bad data at $.data.entities[1187].capabilities.available_tones<key: 0>=0(<class 'int'>, $.data.entities[1187].capabilities.available_tones<key: 1>=1(<class 'int'>, $.data.entities[1187].capabilities.available_tones<key: 2>=2(<class 'int'>, $.data.entities[1187].capabilities.available_tones<key: 3>=3(<class 'int'>, $.data.entities[1187].capabilities.available_tones<key: 4>=4(<class 'int'>, $.data.entities[1187].capabilities.available_tones<key: 5>=5(<class 'int'>, $.data.entities[1187].capabilities.available_tones<key: 7>=7(<class 'int'>, $.data.entities[1187].capabilities.available_tones<key: 8>=8(<class 'int'>, $.data.entities[1187].capabilities.available_tones<key: 9>=9(<class 'int'>, $.data.entities[1187].capabilities.available_tones<key: 10>=10(<class 'int'>, $

and I haven’t got a clue what that would indicate…

Found them to be the Aeotec siren endpoints, and I’ve updated the issue with that info.
Apparently the core Z-Wave JS integration writes incorrect JSON tuples in the registry?
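For what it’s worth, a small illustration of how integer dict keys go wrong (my own sketch, not HA’s actual serializer code; `find_bad_keys` is a hypothetical helper): plain `json.dumps` silently coerces int keys to strings, so a storage writer that validates keys before persisting the registry would flag exactly these `available_tones` paths as bad data:

```python
import json

# json.dumps silently coerces int keys to strings:
data = {"available_tones": {1: 1, 2: 2, 255: 255}}
print(json.dumps(data))  # {"available_tones": {"1": 1, "2": 2, "255": 255}}


def find_bad_keys(obj, path="$"):
    """Walk a structure and report non-string dict keys, in a format
    similar to the error log (hypothetical helper, not HA's code)."""
    bad = []
    if isinstance(obj, dict):
        for key, value in obj.items():
            if not isinstance(key, str):
                bad.append(f"{path}<key: {key}>={key!r}({type(key)}")
            bad.extend(find_bad_keys(value, f"{path}.{key}"))
    elif isinstance(obj, list):
        for i, item in enumerate(obj):
            bad.extend(find_bad_keys(item, f"{path}[{i}]"))
    return bad


registry = {"data": {"entities": [{"capabilities": {"available_tones": {1: 1}}}]}}
print(find_bad_keys(registry))
# ["$.data.entities[0].capabilities.available_tones<key: 1>=1(<class 'int'>"]
```

So a fix on the integration side would be to write the tone IDs as strings (or a list) before they reach the registry.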

https://github.com/home-assistant/core/pull/73936 is merged, so we should be able to see a difference with tonight’s dev.


Exciting news! Thanks (and also Paulus, ofc).

will let you know and send new py-spy profiles in the morning :wink:

Template speed-ups made it in as well: https://github.com/home-assistant/core/pull/73728

The only thing outstanding is the aiounifi changes

Sent in my latest profile/py-spy files (though the .svg files are untraceable…)

Must admit startup feels really swift nowadays, but the listing isn’t really that improved and fluctuates enormously.

Startup started at exactly 10:54 and flies by, until this:

2022-06-25 10:54:32 INFO (MainThread) [homeassistant.setup] Setup of domain script took 0.3 seconds
2022-06-25 10:54:36 WARNING (MainThread) [homeassistant.components.sensor] Setup of sensor platform command_line is taking over 10 seconds.
2022-06-25 10:55:00 INFO (MainThread) [homeassistant.setup] Setting up google_assistant
2022-06-25 10:55:00 INFO (MainThread) [homeassistant.setup] Setup of domain google_assistant took 0.0 seconds
2022-06-25 10:55:01 WARNING (MainThread) [custom_components.sun2.binary_sensor] Astral Above Astronomical Dawn: Sun elevation will not reach -18.000000 again until 2022-07-22
2022-06-25 10:55:04 WARNING (MainThread) [custom_components.sun2.binary_sensor] Astral Above Astronomical Dawn: Sun elevation will not reach -18.000000 again until 2022-07-22
2022-06-25 10:55:08 INFO (MainThread) [homeassistant.setup] Setting up ffmpeg
2022-06-25 10:55:08 INFO (MainThread) [homeassistant.setup] Setup of domain ffmpeg took 0.1 seconds

and then it’s finished.

Apparently the command_line platform takes longer and causes the delay?

Strange that you can’t get a py-spy recording but can get a top. I’ve never seen that happen before.

From your top:

0.00% 0.00% 1.87s 1.96s update (homeassistant/components/file/sensor.py)

How large are the files you are using for the sensor?

I see them in the VSC editor but not in my Samba share on the Mac…

I’ll send them over soon

I am sorry, but what does that mean…? You referring to the sensor.py file in the Docker container?
I’ve just updated to today’s dev20220625, so I guess that’s the file I am using…?

About the py-spy files not showing: I mean not showing in Mac Finder, where they were visible before:

but they are in Studio code server:

so I copied the content and pasted it into handmade .svg files :wink:

For the file size, I’m asking about the actual size (in bytes) of the files for the file sensor,

i.e. file_path: /home/user/.homeassistant/sensor-data.txt

The underlying code reads the whole file to the last line and gives you the result. If the file is growing without bounds it will get slower and slower over time since there is more data to read through until it gets to the line at the end.
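A minimal sketch of that behavior (my own simplification, assuming a plain line-by-line read; not the integration’s exact code) shows why the cost grows with file size: every update re-reads all preceding lines just to reach the last one, so a multi-megabyte file costs far more per update than a few-kilobyte one.

```python
def last_line(path):
    """Return the last non-empty line, scanning the whole file.
    Cost is O(file size) on every call."""
    last = None
    with open(path, encoding="utf-8") as file:
        for line in file:  # each update re-reads everything before the tail
            if line.strip():
                last = line.strip()
    return last


# Demo: write a small log-style file and read its tail.
with open("/tmp/sensor-data.txt", "w", encoding="utf-8") as f:
    f.write("18 Jun 04:55:08: Daylight: off\n")
    f.write("18 Jun 22:34:43: Daylight: on\n")

print(last_line("/tmp/sensor-data.txt"))  # 18 Jun 22:34:43: Daylight: on
```

An append-only log that is never rotated makes this scan longer on every state update.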

Changes in the py-spy

Unifi is clearly the top runtime now, but should be better after https://github.com/Kane610/aiounifi/pull/145

The date parsing in gdacs (see post #46 by bdraco in this thread) would be number 1, but it’s number 2 because it doesn’t update as much; the overall runtime is less, but you’ll get a CPU spike when it does.

Google Assistant runtime is now about 25% of what it previously was. There might be some additional small optimizations that can be done in the future, but it’s mostly diminishing returns at this point.

Template runtime is about 89% of what it previously was. Only an 11% savings, but since you have a lot of templates, it’s still worth it.

It’s pretty clear what’s going on with the file sensor.

Ah, I see.
I do have a couple of these, and they ‘store’ all notifications. And some others.

When I get back to my system I’ll check exactly.

filed_daylight_settings.txt: 86 kB

18 Jun 04:55:08: Daylight: off - Elevation: -4.0 - Light level: 3242
18 Jun 22:34:43: Daylight: on - Elevation: -4.35 - Light level: 3242
19 Jun 04:55:11: Daylight: off - Elevation: -4.0 - Light level: 3242
19 Jun 22:35:03: Daylight: on - Elevation: -4.35 - Light level: 3242
20 Jun 04:59:17: Daylight: off - Elevation: -3.57 - Light level: 231
20 Jun 22:35:19: Daylight: on - Elevation: -4.35 - Light level: 3242
21 Jun 04:55:28: Daylight: off - Elevation: -4.0 - Light level: 3242
21 Jun 22:35:32: Daylight: on - Elevation: -4.35 - Light level: 231
22 Jun 04:59:41: Daylight: off - Elevation: -3.57 - Light level: 6252
22 Jun 22:35:42: Daylight: on - Elevation: -4.35 - Light level: 3242
23 Jun 04:55:59: Daylight: off - Elevation: -4.0 - Light level: 3242
23 Jun 22:35:49: Daylight: on - Elevation: -4.36 - Light level: 0
24 Jun 04:56:20: Daylight: off - Elevation: -4.0 - Light level: 231
24 Jun 22:35:53: Daylight: on - Elevation: -4.36 - Light level: 231
25 Jun 04:56:45: Daylight: off - Elevation: -4.0 - Light level: 231

Using these types of sensors to understand when and why, and to be able to adjust settings.

filed_intercom_messages.txt: 258 kB
filed_notifications.txt: 1.4 MB

and 2 older ones, no longer daily updated (automation == off) but still existing:

filed_automations.txt: 16.6 MB
filed_ios_messages.txt: 6 kB

They all use templates, ofc, so maybe that counts twice as heavy? I mean, templates plus the file sensor behavior?
Especially maybe because there’s also a template in the sensor itself:

sensor:

  - platform: file
    file_path: /config/logging/filed_notifications.txt
    name: Filed notifications
    value_template: >
      {% if value is not none %}
        {% if value|length < 255 %} {{value}}
        {% else %} Truncated: {{value|truncate(240,True, '')}}
        {% endif %}
      {% endif %}

automation:

  - alias: Forward notifications to filed notifications
    id: forward_notifications_to_filed_notifications
    mode: queued
    trigger:
      platform: event
      event_type: call_service
      event_data:
        domain: notify
    condition: >
      {{ trigger.event.data.service not in
         ['filed_notifications', 'filed_automations', 'filed_intercom_messages'] }}
    action:
      service: notify.filed_notifications
      data:
        message: >
          {% set message = trigger.event.data.service_data.message %}
          {% set service = trigger.event.data.service %}
            {{now().timestamp()|timestamp_custom('%d %b: %X')}} - {{service}}: {{message}}

Or, e.g.:

  - alias: Person forward intercom messages to filed intercom messages
    id: person_forward_intercom_messages_to_filed_intercom_messages
    mode: queued
    trigger:
      platform: state
      entity_id: script.intercom_text_message
    condition: >
      {{ trigger.to_state is not none }}
    action:
      - condition: >
          {{trigger.to_state.context.user_id is not none}}
      - service: notify.filed_intercom_messages
        data:
          message: >
            {% set message = states('input_select.intercom_message') %}
            {% set device = states('input_select.intercom') %}
            {% set language = states('input_select.intercom_language') %}
            {% set id = trigger.to_state.context.user_id %}
            {% set time = now().timestamp()|timestamp_custom('%d %b: %X') %}
            {% set user = states.person|selectattr('attributes.user_id','eq',id)|first %}
            {% set user = user.name %}
            {{time}}: {{user}} played "{{message}}" on {{device}} in {{language}}

let me know if you need more

So, seeing this as the current listing toppers:

I guess MQTT is the most interesting integration to check why it takes so long to set up, especially when seeing the entities flow in in only a couple of seconds at most.

The other integrations, especially group and template, rely on those MQTT sensors, and maybe even Powercalc, because of some switch power/energy calculations it makes based on those MQTT entities.

Could anything be improved in MQTT?

It looks like the file change should be in tonight’s dev so it would be good to get a fresh py-spy with the change if you can.

MQTT likely has a few more places that can be optimized to speed up startup. They are likely longer-term refactorings, though.

Sure, will do tomorrow first thing.

MQTT: OK, no quick wins, no problem, as long as we can keep that on the radar. Since it isn’t a true ‘issue’ I can file, it might be difficult for me to do in the regular streams/channels.

Can you maybe somehow make the MQTT devs aware of the matter?

While there is more that can be done based on seeing the startup times, there really isn’t anything left of substance to make them aware of as everything that was visible in the profiles has already been addressed.

We likely would need a py-spy captured during startup to get some more insight into the startup time. This is likely hard to do without setting up a full development and test environment, unless you are really fast with the timing (it’s possible if you restart and then start the py-spy just at the right time).

I can do that :wink: I make the py-spy recordings outside of the instance…

I’ll give it a try tonight!

Any specific settings you need for that?

You’ll probably have to experiment a bit. Likely durations of 15/30/60/90/120 seconds, as some of it will be luck whether the sampling picks up the right thing.

Hmm, don’t think there’s a way for me to get all of those commands into the terminal within 1 minute from restart… you’re right… can’t we do that in some predefined script?

ssh [email protected] -p 22222

docker exec -it homeassistant /bin/bash

top (to get the PID for homeassistant)

cd py_spy-0.3.12.data/scripts

./py-spy record --duration 120 --rate 100 --pid 61

Short of modifying the init script in the container, I don’t have a good suggestion.