Custom Component - ESXi Stats

Getting a ton of errors in my logs since upgrading to HA 0.109.x. I believe they’ve changed something fundamental that is now spamming my log with these warnings:

2020-05-18 23:17:47 WARNING (MainThread) [homeassistant.util.async_] Detected I/O inside the event loop. This is causing stability issues. Please report issue to the custom component author for esxi_stats doing I/O at custom_components/esxi_stats/__init__.py, line 259: vm_name = vm.summary.config.name.replace(" ", "_").lower()
2020-05-18 23:17:47 WARNING (MainThread) [homeassistant.util.async_] Detected I/O inside the event loop. This is causing stability issues. Please report issue to the custom component author for esxi_stats doing I/O at custom_components/esxi_stats/esxi.py, line 173: vm_conf = vm.configStatus
2020-05-18 23:17:47 WARNING (MainThread) [homeassistant.util.async_] Detected I/O inside the event loop. This is causing stability issues. Please report issue to the custom component author for esxi_stats doing I/O at custom_components/esxi_stats/esxi.py, line 174: vm_sum = vm.summary
2020-05-18 23:17:47 WARNING (MainThread) [homeassistant.util.async_] Detected I/O inside the event loop. This is causing stability issues. Please report issue to the custom component author for esxi_stats doing I/O at custom_components/esxi_stats/esxi.py, line 175: vm_run = vm.runtime
2020-05-18 23:17:47 WARNING (MainThread) [homeassistant.util.async_] Detected I/O inside the event loop. This is causing stability issues. Please report issue to the custom component author for esxi_stats doing I/O at custom_components/esxi_stats/esxi.py, line 176: vm_snap = vm.snapshot
2020-05-18 23:17:47 WARNING (MainThread) [homeassistant.util.async_] Detected I/O inside the event loop. This is causing stability issues. Please report issue to the custom component author for esxi_stats doing I/O at custom_components/esxi_stats/__init__.py, line 259: vm_name = vm.summary.config.name.replace(" ", "_").lower()

I started using this decluttering template instead of the horseshoe card one:

decluttering_templates:
  vm_entity_template:
    card:
      type: entities
      title: '[[name]]'
      entities:
        - entity: '[[entity]]'
          type: attribute
          name: Status
          attribute: state
          icon: 'mdi:server'
        - entity: '[[entity]]'
          type: attribute
          attribute: guest_os
          name: Guest OS
          icon: 'mdi:server'
        - entity: '[[entity]]'
          type: attribute
          attribute: guest_ip
          name: Guest IP
          icon: 'mdi:ip'
        - entity: '[[entity]]'
          type: attribute
          attribute: cpu_count
          name: CPU Count
          icon: 'mdi:cpu-64-bit'
          suffix: cores
        - entity: '[[entity]]'
          type: attribute
          attribute: used_space_gb
          name: Diskspace Used
          icon: 'mdi:harddisk'
          suffix: GB
        - entity: '[[entity]]'
          type: attribute
          attribute: memory_allocated_mb
          name: RAM Allocated
          icon: 'mdi:memory'
          suffix: MB
        - entity: '[[entity]]'
          type: attribute
          attribute: uptime_hours
          name: Uptime
          icon: 'mdi:history'
          suffix: hours
        - entity: '[[entity]]'
          type: attribute
          attribute: snapshots
          name: Snapshots
          icon: 'mdi:camera'
        - type: 'custom:bar-card'
          entity_row: true
          entities:
            - entity: '[[entity]]'
              attribute: cpu_use_pct
              unit_of_measurement: '%'
              decimal: '0'
              min: '0'
              max: '100'
              icon: 'mdi:cpu-64-bit'
              height: 30px
              severity:
                - color: var(--label-badge-green)
                  to: '40'
                  from: '0'
                - color: orange
                  from: '40'
                  to: '65'
                - color: var(--label-badge-red)
                  from: '65'
                  to: '100'
              name: CPU Usage
              animation:
                state: 'on'
          style: |-
            bar-card-name {
              font-weight: bold;
              text-shadow: 2px 2px 2px black;
            }
            bar-card-value {
              font-weight: bold;
              text-shadow: 2px 2px 2px black;
            }
        - type: 'custom:bar-card'
          entity_row: true
          entities:
            - entity: '[[entity]]'
              attribute: memory_used_mb
              unit_of_measurement: MB
              decimal: '0'
              min: '0'
              max: '[[max_memory]]'
              icon: 'mdi:memory'
              height: 30px
              name: Memory Used
              animation:
                state: 'on'
          style: |-
            bar-card-name {
              font-weight: bold;
              text-shadow: 2px 2px 2px black;
            }
            bar-card-value {
              font-weight: bold;
              text-shadow: 2px 2px 2px black;
            }

Card:

template: vm_entity_template
type: 'custom:decluttering-card'
variables:
  - entity: sensor.esxi_vm_pfsense
  - name: pfSense
  - max_memory: 3072

I’m getting these errors since I updated to version 0.110.2.

## Log Details (WARNING)

Logger: homeassistant.util.async_
Source: util/async_.py:120
First occurred: 12:04:27 AM (57 occurrences)
Last logged: 12:04:27 AM

* Detected I/O inside the event loop. This is causing stability issues. Please report issue to the custom component author for esxi_stats doing I/O at custom_components/esxi_stats/esxi.py, line 173: vm_conf = vm.configStatus
* Detected I/O inside the event loop. This is causing stability issues. Please report issue to the custom component author for esxi_stats doing I/O at custom_components/esxi_stats/esxi.py, line 174: vm_sum = vm.summary
* Detected I/O inside the event loop. This is causing stability issues. Please report issue to the custom component author for esxi_stats doing I/O at custom_components/esxi_stats/esxi.py, line 175: vm_run = vm.runtime
* Detected I/O inside the event loop. This is causing stability issues. Please report issue to the custom component author for esxi_stats doing I/O at custom_components/esxi_stats/esxi.py, line 176: vm_snap = vm.snapshot
* Detected I/O inside the event loop. This is causing stability issues. Please report issue to the custom component author for esxi_stats doing I/O at custom_components/esxi_stats/esxi.py, line 31: current_session = conn.content.sessionManager.currentSession.key

It is known.

edit - solved. The problem below was due to my rookie YAML mistakes. Once I cleaned up the syntax, everything worked fine. Using a real editor was a huge help in finding the indentation errors.

I’ve installed ESXi Stats and it’s working great. Trying to set up the horseshoe card based on the examples found at https://github.com/wxt9861/esxi_stats/tree/master/examples has been a challenge, though. No matter what changes I make, the result is a blank screen. The flex-horseshoe-card and decluttering-card have been installed via HACS and the corresponding .js files are in the www/community folder. I’ve tried referencing the URL path under resources with “url: /community_plugin/” as well as “url: /hacsfiles”, both without success.

I know it is probably something simple, or I’m missing a basic concept, but at this point I am out of ideas. I’d appreciate any suggestions anyone might have. If this post should be put somewhere else, please let me know.

thanks in advance

ui-lovelace.yaml file

resources:
  - url: /community_plugin/flex-horseshoe-card/flex-horseshoe-card.js
    type: module
   
  - url: /community_plugin/decluttering-card/decluttering-card.js
    type: module

decluttering_templates:
  !include decluttering_card_templates.yaml

views:
  title: Test
  cards:
    - type: custom:decluttering-card
      template: vm_flex_template
      variables:
        - entity: sensor.sensor.esxi_vm_win10_homeserver
        - name: 'HomeServer'
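For anyone hitting the same blank screen: a corrected sketch of the ui-lovelace.yaml above, assuming Lovelace YAML mode (the likely “rookie YAML mistakes” are that views must be a list, the !include belongs on the same line as its key, and the entity ID has a doubled “sensor.” prefix):

resources:
  - url: /community_plugin/flex-horseshoe-card/flex-horseshoe-card.js
    type: module
  - url: /community_plugin/decluttering-card/decluttering-card.js
    type: module

decluttering_templates: !include decluttering_card_templates.yaml

views:
  - title: Test
    cards:
      - type: 'custom:decluttering-card'
        template: vm_flex_template
        variables:
          - entity: sensor.esxi_vm_win10_homeserver
          - name: 'HomeServer'

With the single-line !include, the included decluttering_card_templates.yaml (shown below) should start directly at vm_flex_template: rather than repeating the decluttering_templates: key. On newer HACS installs the resources are usually served from /hacsfiles/... instead of /community_plugin/..., so use whichever path matches your HACS version.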

decluttering_card_templates.yaml

decluttering_templates:
  vm_flex_template:
    card:
      type: 'custom:flex-horseshoe-card'
      entities:
        - entity: '[[entity]]'
          attribute: cpu_use_pct
          decimals: 2
          unit: '%'
          area: CPU
          name: '[[name]]'
        - entity: '[[entity]]'
          name: 'Uptime'
          attribute: uptime_hours
          decimals: 0
          unit: 'H'
        - entity: '[[entity]]'
          name: Mem Use
          attribute: memory_used_mb
          unit: 'MB'
        - entity: '[[entity]]'
          name: 'Disk'
          attribute: used_space_gb
          decimals: 0
          unit: 'GB'
        - entity: '[[entity]]'
          name: 'Status'
          unit: ' '
          attribute: status
      show:
        horseshoe_style: 'lineargradient'
        scale_tickmarks: true
      card_filter: card--dropshadow-none
      layout:
        hlines:
          - id: 0
            xpos: 50
            ypos: 38
            length: 70
            styles:
              - opacity: 0.2;
              - stroke-width: 4;
              - stroke-linecap: round;
        vlines:
          - id: 0
            xpos: 50
            ypos: 58
            length: 38
            styles:
              - opacity: 0.2;
              - stroke-width: 5;
              - stroke-linecap: round;
        states:
          - id: 0
            entity_index: 0
            xpos: 50
            ypos: 30
            styles:
              - font-size: 2.6em;
              - opacity: 0.9;
          - id: 1
            entity_index: 1
            xpos: 46
            ypos: 54
            styles:
              - font-size: 1.6em;
              - text-anchor: end;
          - id: 2
            entity_index: 2
            xpos: 54
            ypos: 54
            styles:
              - font-size: 1.6em;
              - text-anchor: start;
          - id: 3
            entity_index: 3
            xpos: 54
            ypos: 74
            styles:
              - font-size: 1.6em;
              - text-anchor: start;
          - id: 4
            entity_index: 4
            xpos: 46
            ypos: 74
            styles:
              - font-size: 1.6em;
              - text-anchor: end;             
        names:
          - id: 0
            xpos: 50
            ypos: 100
            entity_index: 0
            styles:
              - font-size: 1.3em;
              - opacity: 0.7;
          - id: 1
            xpos: 46
            ypos: 60
            entity_index: 1
            styles:
              - font-size: 0.8em;
              - text-anchor: end;
              - opacity: 0.6;
          - id: 2
            entity_index: 2
            xpos: 54
            ypos: 60
            styles:
              - font-size: 0.8em;
              - text-anchor: start;
              - opacity: 0.6;
          - id: 3
            xpos: 54
            ypos: 80
            entity_index: 3
            styles:
              - font-size: 0.8em;
              - text-anchor: start;
              - opacity: 0.6;
          - id: 4
            xpos: 46
            ypos: 80
            entity_index: 4
            styles:
              - font-size: 0.8em;
              - text-anchor: end;
              - opacity: 0.6;
        areas:
          - id: 0
            entity_index: 0
            xpos: 50
            ypos: 15
            styles:
              - font-size: 0.8em;
      horseshoe_state:
        color:  '#FFF6E3'
      horseshoe_scale:
        min: 0
        max: 100
        width: 3
      color_stops:
        05: '#FFF6E3'
        15: '#FFE9B9'
        25: '#FFDA8A'
        35: '#FFCB5B'
        45: '#FFBF37'
        55: '#ffb414'
        65: '#FFAD12'
        75: '#FFA40E'
        85: '#FF9C0B' 
        95: '#FF8C06' 

thanks

Funny you posted this, I posted something similar for this integration using decluttering cards and button cards.

In case anyone is interested…

I feel like I am missing something stupidly simple. I have the integration going, but I am only getting 2 Datastores, 1 License, 4 VMs, and 1 Ethernet as entities. I want to keep those 8 entities, but I also want CPU, RAM, etc. entities for each of the VMs individually. Is this possible? What am I missing?

Got this resolved. For anyone looking for another example of splitting the attributes up into individual sensors:

#PLEX SENSORS
  - platform: template
    sensors:
      plex_vm_state:
        friendly_name: "Plex State"
        value_template: "{{ state_attr('sensor.esxi_vm_plex','state') }}"
  - platform: template
    sensors:
      plex_vm_uptime_hours:
        friendly_name: "Plex Uptime (Hours)"
        value_template: "{{ state_attr('sensor.esxi_vm_plex','uptime_hours') }}"
  - platform: template
    sensors:
      plex_vm_uptime_days:
        friendly_name: "Plex Uptime (Days)"
        value_template: "{{ states('sensor.plex_vm_uptime_hours')|int /24|round(1) }}"
  - platform: template
    sensors:
      plex_vm_cpu_count:
        friendly_name: "Plex CPU Count"
        value_template: "{{ state_attr('sensor.esxi_vm_plex','cpu_count') }}"
  - platform: template
    sensors:
      plex_vm_cpu_use_pct:
        friendly_name: "Plex CPU Used (%)"
        value_template: "{{ state_attr('sensor.esxi_vm_plex','cpu_use_pct') }}"
  - platform: template
    sensors:
      plex_vm_memory_allocated_mb:
        friendly_name: "Plex Memory Allocated (MB)"
        value_template: "{{ state_attr('sensor.esxi_vm_plex','memory_allocated_mb') }}"
  - platform: template
    sensors:
      plex_vm_memory_used_mb:
        friendly_name: "Plex Memory Used (MB)"
        value_template: "{{ state_attr('sensor.esxi_vm_plex','memory_used_mb') }}"
  - platform: template
    sensors:
      plex_vm_used_space_gb:
        friendly_name: "Plex Used Space (GB)"
        value_template: "{{ state_attr('sensor.esxi_vm_plex','used_space_gb') }}"
  - platform: template
    sensors:
      plex_vm_tools_status:
        friendly_name: "Plex Tools Status"
        value_template: "{{ state_attr('sensor.esxi_vm_plex','tools_status') }}"
  - platform: template
    sensors:
      plex_vm_guest_ip:
        friendly_name: "Plex Guest IP"
        value_template: "{{ state_attr('sensor.esxi_vm_plex','guest_ip') }}"
  - platform: template
    sensors:
      plex_vm_snapshots:
        friendly_name: "Plex Snapshots"
        value_template: "{{ state_attr('sensor.esxi_vm_plex','snapshots') }}"
  - platform: template
    sensors:
      plex_vm_memory_used_pct:
        friendly_name: "Plex Memory Used (%)"
        value_template: "{{ states('sensor.plex_vm_memory_allocated_mb')|int / states('sensor.plex_vm_memory_used_mb')|int *100|round(1) }}"
  - platform: template
    sensors:
      plex_vm_used_space_pct:
        friendly_name: "Plex Storage Used (%)"
        value_template: "{{102400 / states('sensor.plex_vm_used_space_gb')|int /10.24|round(1) }}"

@wxt9861
Anyone running into issues pulling the “memory used mb” field? Mine is showing at about 99% used, but looking at ESXi directly, it should be reading at something like 8% used.

Mine is similar
[screenshots of the ESXi memory stats]

Seems we might have a bug on our hands. Might try rolling back the install via HACS tomorrow just to see if that changes anything.

This is not a bug; they are two different stats.

Active guest memory is what the hypervisor estimates the VM is using.

Consumed host memory is how much RAM the hypervisor has allocated to the VM, including any overhead - this is the stat we’re pulling. If you add up all the consumed host memory, you should get a total that is close to the total used memory for the VMHost.
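If you want to sanity-check that, here is a quick template sensor sketch that sums the memory_used_mb attribute over the VM sensors (entity IDs taken from earlier posts; extend the list with your own VMs):

sensor:
  - platform: template
    sensors:
      esxi_consumed_host_memory_mb:
        friendly_name: "ESXi Consumed Host Memory (MB)"
        value_template: >-
          {{ (state_attr('sensor.esxi_vm_plex', 'memory_used_mb') | float
              + state_attr('sensor.esxi_vm_pfsense', 'memory_used_mb') | float)
             | round(0) }}

The result should come out close to the used-memory figure reported for the host itself.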


Yeah, I get that. It seems that Memory Allocated MB in your component should be pulling from the Consumed Host Memory in ESXi, and Memory Used MB should be pulling from the Active Guest Memory, right? In the examples above, those numbers are way off though.

The metrics are correct; maybe the wording needs to be changed.

Memory Allocated MB is how much RAM you allocated to the VM in the VM configuration. In the screenshot you provided, it is 16384, or 16 GB. I can change it to Memory Configured.
Memory Usage is how much RAM the host has allocated to the VM (including overhead). In the screenshot you provided, it is 16479, or 16.09 GB. I can change it to Memory Allocated.

So basically, looking at Allocated and Consumed does not make much sense, since these are just what is preconfigured at VM setup and what the OS tries to allocate to the machine, without looking at actual usage… What we really need is Active Guest Memory, which shows whether the allocated memory was right for the machine (close to 100%) or overestimated (way below 100%) and how it changes over time. Otherwise we are getting two very similar and mostly static numbers that tell us nothing about potential VM performance problems. But I do not think it is available?

I don’t agree with this - the host will allocate memory when it is needed. If you configure a VM with 32GB but it only ever needs 4GB, the host is not going to allocate all 32GB just because.

Active memory is also not the best metric to use for right-sizing; I am going to refer you to this article: Understanding vSphere Active Memory - VMware vSphere Blog.

It sounds like you want to know what is going on with the VM at the OS level. There are better tools to monitor those conditions (netdata, glances, etc.). The purpose of this component is to see what is happening at the host level.


Fully agree, I want to know what is going on at the VM level! And I’d prefer to have this in one place, in a commonly accessed set of sensors from one integration, rather than having several different components, perhaps one per VM, to gather data from. The issue is that in some cases I do not even know what the underlying OS is for some of the appliances I’m using, nor whether it is possible to monitor OS-level data for them. For some appliances, even if the underlying OS is known and supports installing a monitoring component, the actual implementation might be stripped down to a minimum, preventing installation of, for example, VMware Tools.
That said, if the sensors the ESXi integration exposes fit your needs - perfect. But this is not what I’d expect/desire. If I allocate 4GB to my HA VM and I see that all 4GB are used, I might think that more is needed. But then, when I look at HA memory use via sysmonitor, I see that only 23% of the allocated RAM is used. So what is the point of the ESXi Memory Consumed sensor? It only makes sense if you look at it from the host perspective.
Luckily, for HA there are sensors available, so I can check. For other VMs there are not.


Thanks for the clarification. 🙂

Ah, and one more thing about why my approach is different: my server has 32GB of RAM and all of my always-on VMs are configured to use 24GB altogether, so it also leaves plenty of capacity for the host itself. That’s why the current setup is not so interesting for me, as the host is able to allocate the full configured amount of RAM to the respective VMs. For sure it would be a different situation if I had overprovisioned RAM and the host needed to dynamically manage RAM depending on VM load. Then such sensors would be of great use!

Not to upstage the ESXi Stats component, because I do still use it to monitor the datastores and whatnot, but if you want a closer look into the VMs themselves, take a look at OpenHardwareMonitor. Basically you set it up on the VM and enable the web console for OHM. Then there is a component for HA that reads the web console for each of your machines. I use both side by side: ESXi Stats for the top-level view, and OHM for a close-up of each VM.
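A minimal sketch of the HA side of that setup, assuming the built-in openhardwaremonitor sensor platform and OHM’s web server left on its default port (the host IP is just an example):

sensor:
  - platform: openhardwaremonitor
    # IP of the VM running OpenHardwareMonitor with its remote web server enabled
    host: 192.168.1.50
    # OHM web server default port
    port: 8085

Repeat the entry for each VM you want to watch.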