Custom Component - ESXi Stats

Yes, and that’s a good point. Snapshot handling/monitoring is a good idea. For example: send an alert/notification if a snapshot is x days old, point out when VM disk consolidation is needed, quickly snapshot before an upgrade, etc.


I like it!

I have been wanting to make an ESXi integration for some time now, and now I don’t have to :see_no_evil: so thanks for that!

Can anyone provide some information on how to split the sensor data into more specific sensors?
Thanks!

https://www.home-assistant.io/components/template/ @apedance

Thanks @wxt9861

Yes, I am getting VM CPU graphs in the UI.

I upgraded and no more stack trace. Obviously, no CPU info either!
Here is what sensor.esxi_stats_vms looks like (note how memory_used_mb returns -1 also)

ubi1: {
  "name": "ubi1",
  "status": "green",
  "state": "running",
  "uptime_hours": "n/a",
  "cpu_count": 4,
  "cpu_use_%": "n/a",
  "memory_allocated_mb": 4096,
  "memory_used_mb": -1,
  "used_space_gb": 104.11,
  "tools_status": "toolsOk",
  "guest_os": "Ubuntu Linux (64-bit)"
}
template-ubuntu1804: {
  "name": "template-ubuntu1804",
  "status": "green",
  "state": "off",
  "uptime_hours": "n/a",
  "cpu_count": 4,
  "cpu_use_%": "n/a",
  "memory_allocated_mb": 2048,
  "memory_used_mb": "n/a",
  "used_space_gb": 15.17,
  "tools_status": "toolsNotRunning",
  "guest_os": "Ubuntu Linux (64-bit)"
}
kubemaster: {
  "name": "kubemaster",
  "status": "green",
  "state": "running",
  "uptime_hours": "n/a",
  "cpu_count": 4,
  "cpu_use_%": "n/a",
  "memory_allocated_mb": 4096,
  "memory_used_mb": -1,
  "used_space_gb": 254.07,
  "tools_status": "toolsOk",
  "guest_os": "Ubuntu Linux (64-bit)"
}
kubepod1: {
  "name": "kubepod1",
  "status": "green",
  "state": "running",
  "uptime_hours": "n/a",
  "cpu_count": 4,
  "cpu_use_%": "n/a",
  "memory_allocated_mb": 8192,
  "memory_used_mb": -1,
  "used_space_gb": 38.25,
  "tools_status": "toolsOk",
  "guest_os": "Ubuntu Linux (64-bit)"
}
kubepod2: {
  "name": "kubepod2",
  "status": "green",
  "state": "running",
  "uptime_hours": "n/a",
  "cpu_count": 4,
  "cpu_use_%": "n/a",
  "memory_allocated_mb": 8192,
  "memory_used_mb": -1,
  "used_space_gb": 31.11,
  "tools_status": "toolsOk",
  "guest_os": "Ubuntu Linux (64-bit)"
}
tvhub: {
  "name": "tvhub",
  "status": "green",
  "state": "running",
  "uptime_hours": "n/a",
  "cpu_count": 4,
  "cpu_use_%": "n/a",
  "memory_allocated_mb": 12288,
  "memory_used_mb": -1,
  "used_space_gb": 112.11,
  "tools_status": "toolsOk",
  "guest_os": "Ubuntu Linux (64-bit)"
}
konekt: {
  "name": "konekt",
  "status": "green",
  "state": "off",
  "uptime_hours": "n/a",
  "cpu_count": 4,
  "cpu_use_%": "n/a",
  "memory_allocated_mb": 2048,
  "memory_used_mb": "n/a",
  "used_space_gb": 18.34,
  "tools_status": "toolsNotRunning",
  "guest_os": "Ubuntu Linux (64-bit)"
}
k3s: {
  "name": "k3s",
  "status": "green",
  "state": "off",
  "uptime_hours": "n/a",
  "cpu_count": 4,
  "cpu_use_%": "n/a",
  "memory_allocated_mb": 2048,
  "memory_used_mb": "n/a",
  "used_space_gb": 15.17,
  "tools_status": "toolsNotRunning",
  "guest_os": "Ubuntu Linux (64-bit)"
}
minikube: {
  "name": "minikube",
  "status": "green",
  "state": "running",
  "uptime_hours": 2513.5,
  "cpu_count": 4,
  "cpu_use_%": 0,
  "memory_allocated_mb": 6144,
  "memory_used_mb": 5782,
  "used_space_gb": 22.87,
  "tools_status": "toolsNotRunning",
  "guest_os": null
}
template-alpine: {
  "name": "template-alpine",
  "status": "green",
  "state": "off",
  "uptime_hours": "n/a",
  "cpu_count": 1,
  "cpu_use_%": "n/a",
  "memory_allocated_mb": 512,
  "memory_used_mb": "n/a",
  "used_space_gb": 0,
  "tools_status": "toolsNotInstalled",
  "guest_os": "Other 3.x Linux (64-bit)"
}
auth1: {
  "name": "auth1",
  "status": "green",
  "state": "running",
  "uptime_hours": 1824.3,
  "cpu_count": 1,
  "cpu_use_%": 0,
  "memory_allocated_mb": 512,
  "memory_used_mb": 482,
  "used_space_gb": 1.16,
  "tools_status": "toolsNotInstalled",
  "guest_os": null
}
manageiq: {
  "name": "manageiq",
  "status": "green",
  "state": "off",
  "uptime_hours": "n/a",
  "cpu_count": 4,
  "cpu_use_%": "n/a",
  "memory_allocated_mb": 6144,
  "memory_used_mb": "n/a",
  "used_space_gb": 8.52,
  "tools_status": "toolsNotRunning",
  "guest_os": "CentOS 4/5 or later (64-bit)"
}
unit_of_measurement: virtual machine(s)
friendly_name: ESXi Stats vms

I did some side testing with a Rundeck script I wrote a while back. Since maxCpuUsage is OK, a (somewhat twisted) way to retrieve the CPU usage percentage would be to use content.perfManager with QueryAvailablePerfMetric() or QueryPerf() to retrieve this counter (from perfManager.perfCounter):

groupInfo.key: 'cpu'
nameInfo.key: 'usagemhz'
rollupType: 'average'

This returns the frequency in MHz (summed across all vCPUs), which you can then compare to maxCpuUsage.
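For reference, the approach above could be sketched with pyVmomi roughly as follows. This is an untested sketch, not the component’s actual code: it assumes `si` is a ServiceInstance from pyVmomi’s SmartConnect and `vm` is a `vim.VirtualMachine`; the counter lookup matches the groupInfo/nameInfo/rollupType triple listed above.

```python
def query_vm_cpu_usage_mhz(si, vm):
    """Sketch (untested): fetch the latest cpu.usagemhz.average sample for a VM.

    Assumes `si` is a pyVmomi ServiceInstance and `vm` a vim.VirtualMachine.
    """
    from pyVmomi import vim  # imported lazily; only needed when actually querying

    perf = si.content.perfManager
    # Find the counter id matching cpu / usagemhz / average
    counter_id = next(
        c.key for c in perf.perfCounter
        if c.groupInfo.key == "cpu"
        and c.nameInfo.key == "usagemhz"
        and c.rollupType == "average"
    )
    # instance="" requests the aggregate across all vCPUs
    metric = vim.PerformanceManager.MetricId(counterId=counter_id, instance="")
    spec = vim.PerformanceManager.QuerySpec(entity=vm, metricId=[metric], maxSample=1)
    result = perf.QueryPerf(querySpec=[spec])
    return result[0].value[0].value[0]  # aggregate MHz across all vCPUs


def mhz_to_percent(usage_mhz, max_cpu_mhz):
    """Express aggregate MHz usage as a percentage of the VM's maxCpuUsage."""
    if not max_cpu_mhz:
        return None
    return round(100.0 * usage_mhz / max_cpu_mhz, 1)
```

The second helper is just the comparison against maxCpuUsage described above.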

No rush, if I am the only one seeing this issue!

@Fusion
This is strange. I think the real question is why quickStats are not being populated on the host. It seems like only 2 of 7 running VMs are reporting quickStats.

The similarities I see:
a - all VMs not reporting are running tools and Ubuntu Linux (64-bit)
b - all VMs reporting are running something other than Ubuntu, and tools are not running or not installed

Can you see all the missing statistics from the web client?
Are there any alarms on VMs that are not reporting?
Can you power on that CentOS VM (manageiq) to see if it will report?

We can go to perfManager for missing stats, but I would want to test that first.

I made some progress on adding service calls to hass until I reached this point.

ERROR (SyncWorker_8) [custom_components.esxi_stats.esxi] Current license or ESXi version prohibits execution of the requested operation.

So basically, when running a free ESXi license, the API is read-only.

Maybe this will finally push me to purchase VMUG Advantage :thinking:

Noooooooooooooooooooooooooooooooooooooooo :cry:

@Fusion I think I am able to reproduce the issue you’re experiencing by restarting hostd service on the host.

Any VM that was running prior to the hostd restart does not return uptime, CPU, and memory, while VMs powered on after the restart show the data. Not sure if this is the case for you: did hostd crash at some point?

Oh, very interesting.
hostd did not crash, but I restarted it a couple of times, so this could definitely explain it, yes!

@apedance

value_template: {{ state_attr('sensor.esxi_stats_vms', 'DEVICE').status }}

You can replace "status" with uptime_hours, memory_used_mb, used_space_gb, or any of the other attributes.
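Spelled out as a complete template sensor, following that pattern (the VM name `ubi1` is taken from the attribute dump earlier in the thread; substitute your own VM name):

```yaml
sensor:
  - platform: template
    sensors:
      esxi_ubi1_status:
        friendly_name: "ubi1 Status"
        value_template: >-
          {{ state_attr('sensor.esxi_stats_vms', 'ubi1').status }}
```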

Awesome!!! Many thanks for creating this component

Awesome work @wxt9861! I’ve been waiting for someone to do this for a long time! :slight_smile:

Service calls and snapshot monitoring would make a great addition!

Hey, could you show me how you are using that value_template, please? I’ve been trying to figure this out, and the documentation really doesn’t have an example like this. Home Assistant doesn’t like my config and won’t boot when I add this. I’ve tried a few different ways; here is an example.

- platform: template
  sensors:
    plex-cpu:
      friendly_name: "Plex CPU"
      unit_of_measurement: '%'
      value_template: {{ state_attr('sensor.esxi_stats_vms', plex).cpu_use_% }}

I copied this from the states page:


plex: {
  "name": "plex",
  "status": "green",
  "state": "running",
  "uptime_hours": 41.4,
  "cpu_count": 6,
  "cpu_use_%": 15,
  "memory_allocated_mb": 16384,
  "memory_used_mb": 16466,
  "used_space_gb": 65.18,
  "tools_status": "toolsOk",
  "guest_os": "Ubuntu Linux (64-bit)"
}
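For what it’s worth, there look to be a few likely problems in the config above: the template needs to be quoted or written as a YAML block, the VM name should be a quoted string, the entity id should use underscores rather than hyphens, and `cpu_use_%` contains a `%`, which is not valid in Jinja dot notation, so bracket notation is needed. A sketch of a corrected version (untested):

```yaml
- platform: template
  sensors:
    plex_cpu:  # underscores, not hyphens, in the entity id
      friendly_name: "Plex CPU"
      unit_of_measurement: '%'
      value_template: >-
        {{ state_attr('sensor.esxi_stats_vms', 'plex')['cpu_use_%'] }}
```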

Here is a sample sensor created for a VM:

# WHS Server
      esx_vm_whs_server_uptime:
        friendly_name: "WHS Server VM Uptime"
        value_template: >-
          {{ states.sensor.esxi_stats_vms.attributes.whs_server["uptime_hours"] }}
        icon_template: mdi:timer-sand

whs_server is the name of the VM as reported by the ESXi host; the uptime_hours attribute is retrieved by the template.

Here is one for ESXi host data:

      esx_host_uptime_hours:
        friendly_name: "ESXi Host Uptime"
        value_template: >-
          {{ states.sensor.esxi_stats_hosts.attributes["esxi.local"]["uptime_hours"] }}
        icon_template: mdi:timer-sand

Please note that in my case the server name is esxi.local, and using a ‘dot’ in JSON creates some issues (esxi is treated as the node and local as an attribute that does not exist), so I used bracket notation to overcome this.

And lastly example for datastore:

      esx_datastore_total_space_gb_1:
        friendly_name: "ESXi Host Datastore 1 Total"
        value_template: >-
          {{ states.sensor.esxi_stats_datastores.attributes.datastore1["total_space_gb"] }}

Here datastore1 is the name of the datastore and total_space_gb the name of the attribute retrieved.


Man, thank you for taking the time to explain. I’ve been trying to learn more and more about parsing JSON. I hadn’t heard or read about the dot causing issues; that actually helped me with another problem too :) I got it working now, thank you very much!

This is coming! I’ve added basic snapshot monitoring. For now, to keep things simple, it just shows the number of snapshots per VM. I also want to include the time a snapshot was taken to allow for automations based on snapshot age but, in the case where there are multiple snapshots per VM, do we want to include the date of the oldest snapshot or the latest one :thinking:

Service calls are also in progress.
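Once the snapshot count lands, an automation along these lines could alert on lingering snapshots. Note this is purely a sketch: the `snapshots` attribute name and the VM name `plex` are assumptions until the release is out.

```yaml
automation:
  - alias: "Warn when plex has snapshots"  # hypothetical example
    trigger:
      - platform: template
        value_template: >-
          {{ (state_attr('sensor.esxi_stats_vms', 'plex') or {}).get('snapshots', 0) | int > 0 }}
    action:
      - service: persistent_notification.create
        data:
          message: "VM plex has at least one snapshot"
```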

Hey, I am very excited to try this. However, when adding https://github.com/wxt9861/esxi_stats as a repository I do not get any new add-ons. I also see the following errors:

19-08-19 18:27:24 INFO (MainThread) [hassio.store.git] Clone add-on https://github.com/wxt9861/esxi_stats repository
19-08-19 18:27:25 ERROR (MainThread) [hassio.utils.json] Can't read json from /data/addons/git/90387c89/repository.json: [Errno 2] No such file or directory: '/data/addons/git/90387c89/repository.json'
19-08-19 18:27:25 WARNING (MainThread) [hassio.store.data] Can't read repository information from /data/addons/git/90387c89/repository.json

Any suggestions?

It is not an add-on repository; you cannot add it in Hass.io.

Follow the installation instructions

Hi,
Thanks for the component. Very useful. I guess I hit a bug… My test instance of HA would not open today, complaining about too many open files. It seems that the component is leaving connections to the ESXi host open, so after each update the number of open files increases:

homeassistant# lsof | grep 192.168.0.x | wc -l
60
homeassistant# lsof | grep 192.168.0.x | wc -l
80
homeassistant# lsof | grep 192.168.0.x | wc -l
100

Has anyone else seen this?
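If the leak is indeed unclosed API sessions, the fix on the component side is presumably to tear the session down after every poll, even when an update raises. A generic sketch of that pattern (in real code the connect/disconnect pair would be pyVmomi’s `pyVim.connect.SmartConnect` / `Disconnect`; the callables here are placeholders):

```python
from contextlib import contextmanager

@contextmanager
def esxi_session(connect, disconnect):
    """Open a session for one update cycle and always close it,
    even if the update raises, so open file handles cannot accumulate."""
    session = connect()
    try:
        yield session
    finally:
        disconnect(session)

# Usage sketch: with esxi_session(SmartConnect_callable, Disconnect_callable) as si: ...
```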

Update: sorry… I’m running it against a free ESXi 6.5 (HP MicroServer version).