How to install py-spy on a HA OS instance, please instruct

Changes in the py-spy profile:

Unifi is clearly the top runtime now, but it should be better after https://github.com/Kane610/aiounifi/pull/145

The date parsing in gdacs (see #46 by bdraco) would be number 1, but it's number 2 because it doesn't update as much; the overall runtime is less, but you'll get a CPU spike when it does.

Google Assistant run time is now about 25% of what it previously was. There might be some additional small optimizations that can be done in the future, but it's mostly diminishing returns at this point.

Template run time is about 89% of what it previously was. Only an 11% savings, but since you have a lot of templates it's still worth it.

It's pretty clear what's going on with the file sensor.

Ah, I see.
I do have a couple of these and they 'store' all notifications, and some others.

When I get back to my system I’ll check exactly.

filed_daylight_settings.txt: 86 kB

18 Jun 04:55:08: Daylight: off - Elevation: -4.0 - Light level: 3242
18 Jun 22:34:43: Daylight: on - Elevation: -4.35 - Light level: 3242
19 Jun 04:55:11: Daylight: off - Elevation: -4.0 - Light level: 3242
19 Jun 22:35:03: Daylight: on - Elevation: -4.35 - Light level: 3242
20 Jun 04:59:17: Daylight: off - Elevation: -3.57 - Light level: 231
20 Jun 22:35:19: Daylight: on - Elevation: -4.35 - Light level: 3242
21 Jun 04:55:28: Daylight: off - Elevation: -4.0 - Light level: 3242
21 Jun 22:35:32: Daylight: on - Elevation: -4.35 - Light level: 231
22 Jun 04:59:41: Daylight: off - Elevation: -3.57 - Light level: 6252
22 Jun 22:35:42: Daylight: on - Elevation: -4.35 - Light level: 3242
23 Jun 04:55:59: Daylight: off - Elevation: -4.0 - Light level: 3242
23 Jun 22:35:49: Daylight: on - Elevation: -4.36 - Light level: 0
24 Jun 04:56:20: Daylight: off - Elevation: -4.0 - Light level: 231
24 Jun 22:35:53: Daylight: on - Elevation: -4.36 - Light level: 231
25 Jun 04:56:45: Daylight: off - Elevation: -4.0 - Light level: 231

I use these types of sensors to understand when and why things happen, and to be able to adjust settings.

filed_intercom_messages.txt: 258 kB
filed_notifications.txt: 1.4 MB

and 2 older ones, no longer daily updated (automation == off) but they still exist:

filed_automations.txt: 16.6 MB
filed_ios_messages.txt: 6 kB

They all use templates of course, so maybe that counts double? I mean, both the templates and the file sensor behavior?
Especially since there's also a template in the service itself:

sensor:

  - platform: file
    file_path: /config/logging/filed_notifications.txt
    name: Filed notifications
    value_template: >
      {% if value is not none %}
        {% if value|length < 255 %} {{value}}
        {% else %} Truncated: {{value|truncate(240,True, '')}}
        {% endif %}
      {% endif %}

automation:

  - alias: Forward notifications to filed notifications
    id: forward_notifications_to_filed_notifications
    mode: queued
    trigger:
      platform: event
      event_type: call_service
      event_data:
        domain: notify
    condition:
      >
       {{trigger.event.data.service not in
         ['filed_notifications','filed_automations','filed_intercom_messages']}}
    action:
      service: notify.filed_notifications
      data:
        message: >
          {% set message = trigger.event.data.service_data.message %}
          {% set service = trigger.event.data.service %}
            {{now().timestamp()|timestamp_custom('%d %b: %X')}} - {{service}}: {{message}}

or e.g.:

  - alias: Person forward intercom messages to filed intercom messages
    id: person_forward_intercom_messages_to_filed_intercom_messages
    mode: queued
    trigger:
      platform: state
      entity_id: script.intercom_text_message
    condition:
      >
       {{trigger.to_state is not none}}
    action:
      - condition: >
          {{trigger.to_state.context.user_id is not none}}
      - service: notify.filed_intercom_messages
        data:
          message: >
            {% set message = states('input_select.intercom_message') %}
            {% set device = states('input_select.intercom') %}
            {% set language = states('input_select.intercom_language') %}
            {% set id = trigger.to_state.context.user_id %}
            {% set time = now().timestamp()|timestamp_custom('%d %b: %X') %}
            {% set user = states.person|selectattr('attributes.user_id','eq',id)|first %}
            {% set user = user.name %}
            {{time}}: {{user}} played "{{message}}" on {{device}} in {{language}}

Let me know if you need more.

So, seeing these as the current list toppers:

I guess MQTT is the most interesting integration to check to see why it takes so long to set up, especially when the entities flow in within only a couple of seconds at most.

The other integrations, especially group and template, rely on those MQTT sensors, and maybe even Powercalc does, because of some switch power/energy calculations it makes based on those MQTT entities.

Could anything be improved in MQTT?

It looks like the file change should be in tonight's dev build, so it would be good to get a fresh py-spy with the change if you can.

MQTT likely has a few more places that can be optimized to speed up startup. They are likely longer-term refactoring efforts, though.

Sure, will do tomorrow first thing.

MQTT: OK, no quick wins, no problem, as long as we can keep that on the radar. Since it isn't a true 'issue' I can file, it might be difficult for me to raise it through the regular streams/channels.

Can you maybe somehow make the MQTT devs aware of the matter?

While there is more that can be done based on the startup times, there really isn't anything left of substance to make them aware of, as everything that was visible in the profiles has already been addressed.

We would likely need a py-spy captured during startup to get some more insight into the startup time. This is likely hard to do without setting up a full development and test environment, unless you are really fast with the timing (it's possible if you restart and then start the py-spy at just the right time).

I can do that :wink: I make the py-spy recordings outside of the instance…

I’ll give it a try tonight!

Any specific settings you need for that?

You'll probably have to experiment a bit. Likely durations of 15/30/60/90/120, as some of it will come down to luck whether the sampling picks up the right thing.

Hmm, I don't think there's a way for me to get all of those commands into the terminal within 1 minute of a restart… you're right… we can't do that in some predefined script, can we?

ssh [email protected] -p 22222

docker exec -it homeassistant /bin/bash

top (to get the PID of the homeassistant process)

cd py_spy-0.3.12.data/scripts

./py-spy record --duration 120 --rate 100 --pid 61

Short of modifying the init script in the container, I don't have a good suggestion.
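
For what it's worth, here is a rough, untested sketch of how the commands above could be chained from a script on another machine, so the recording starts within a few seconds of the restart. The host address, the pgrep pattern and the output path are placeholders/assumptions; if the pattern does not match, fall back to looking up the PID with top as above.

#!/bin/sh
# Untested sketch: run from another machine right around triggering a restart.
# Assumes host SSH on port 22222 and py-spy unpacked in the container at the
# same relative path used above. HOST, the pgrep pattern and the output path
# are placeholders, not tested values.

HOST="root@192.168.1.123"   # placeholder: your HA host
DURATION="${1:-60}"         # try 15/30/60/90/120 as suggested above

ssh -p 22222 "$HOST" <<EOF
# wait for the homeassistant container to be back up after the restart
until docker inspect -f '{{.State.Running}}' homeassistant 2>/dev/null | grep -q true; do
  sleep 1
done
docker exec homeassistant /bin/bash -c '
  # the [h] bracket keeps pgrep from matching this helper shell itself; the
  # pattern assumes the core command line contains "homeassistant --config"
  until pid=\$(pgrep -f "[h]omeassistant --config" | head -n 1) && [ -n "\$pid" ]; do
    sleep 1
  done
  cd py_spy-0.3.12.data/scripts
  ./py-spy record --duration $DURATION --rate 100 --pid \$pid --output /config/startup-profile.svg
'
EOF

Kick it off right around the restart; whether the samples catch the interesting part of startup is still partly luck, as noted above.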

New files sent.

I've discovered an oddity in the YAML checker:

Using anchors in dashboard files has always worked nicely, especially when making a mistake… it would simply state things like 'missing anchor' or 'anchor not defined', not exactly sure.

And it did show the card YAML in a placeholder in the dashboard view.

With the new YAML checker, however, this turns into a cryptic:

which is simply not true, and an even odder:

the test card I use hasn’t even got a logbook card configured.

Is this something you recognize without anything further, or would you need an issue for that?

probably related to: https://github.com/home-assistant/core/pull/73874 ?

Update: wrote up an issue just to be sure.

Based on your new py-spy, the I/O performance fix for file was very effective as it disappeared from the profile.


PR to speed this up: https://github.com/exxamalte/python-aio-georss-client/pull/32

Edit: The actual data didn't match the test data, so this won't actually help.


Unifi - JSON decoding overhead (would need a separate change to aiounifi after https://github.com/home-assistant/core/pull/72847; see https://github.com/Kane610/aiounifi/pull/145) :white_check_mark:

This should be in tonight's nightly.

Just noticed these once more; not sure if we already took it off the list of possible optimizations:

2022-07-11 10:12:13.552 DEBUG (MainThread) [homeassistant.setup] Dependency template will wait for after dependencies ['group']
2022-07-11 10:12:13.563 INFO (MainThread) [homeassistant.setup] Setting up group
2022-07-11 10:12:13.897 DEBUG (MainThread) [homeassistant.setup] Dependency template will wait for after dependencies ['group']
2022-07-11 10:12:13.915 DEBUG (MainThread) [homeassistant.setup] Dependency template will wait for after dependencies ['group']
2022-07-11 10:12:13.915 DEBUG (MainThread) [homeassistant.setup] Dependency template will wait for after dependencies ['group']
2022-07-11 10:12:13.915 DEBUG (MainThread) [homeassistant.setup] Dependency template will wait for after dependencies ['group']
2022-07-11 10:12:13.923 DEBUG (MainThread) [homeassistant.setup] Dependency template will wait for after dependencies ['group']

happening midway through startup…

even though group itself is not taking very long:

2022-07-11 10:12:16.091 INFO (MainThread) [homeassistant.setup] Setup of domain group took 2.5 seconds

Confirming that the groups matter for those numbers in the integration startup list (btw, notice this is the first time an MQTT setup time below 100 seconds was logged):

HA core-2022.8.0.dev20220711

Great that you found your issue. Always a worry when things like that happen :wink:

Would it be too much to ask to edit this post (or even delete it), so the thread remains fully on topic and no further confusion will arise…
Thanks!

Just an FYI: you don't actually have to set up host SSH for this task. If you install the SSH & Web Terminal add-on and disable protection mode, then you can access the Docker CLI and follow the rest of the steps from there.

Important: this is the SSH add-on in the community add-ons repo, not the SSH add-on in the official add-ons repo. You cannot access the Docker CLI from the SSH add-on in the official add-ons repo.

Yes, I am aware, but as stated in the instructions above: one cannot copy & paste in the add-on.
Since these commands are a pain to type, I wanted to copy and paste them.

A terminal window allows us to do so.

In the config of both add-ons you can expose the SSH port on the host via the network options on the Configuration tab. Then you can use it over SSH from a terminal on any machine, just like with the host SSH.
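
As a purely hypothetical example: if you mapped the add-on's internal SSH port to 2223 in the Network section of its Configuration tab, connecting from any other machine would look something like this (the username and port depend entirely on your add-on configuration):

# hypothetical values: adjust the username and mapped port to your add-on config
ssh <username>@homeassistant.local -p 2223
# then continue with the same docker exec / py-spy steps listed earlier in the thread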

I've never used the web UI of the SSH add-on tbh, I always forget it exists.