No problem, glad I could help!
You can customize it in a package. Add this to the top of the package:
homeassistant:
  customize:
    <your sensor here>:
      friendly_name: Your chosen friendly name
      icon: mdi:<your mdi icon>
      hidden: true
      unit_of_measurement: '°F' # if any, otherwise delete
Just an update… I've replaced the input_slider with input_number and fixed the automations based on the breaking changes in 0.55. My package file on GitHub is up to date.
What is the difference between Internet Speed IN and WAN Download Average?
Is it possible to add Daily/Weekly/Monthly usage statistics too?
Internet speed in is an instantaneous value, whereas Download average is just that, an average.
Unfortunately the statistics component is not clever enough to handle several values, so you'd have to use three different sensors for your averages. The next problem is what sample size to use, as the sensor doesn't allow you to set a time only; you need a sample size too, and it has to be big enough to handle thousands of updates per day. (It defaults to 20 samples if you omit it.)
Ah, great, then I understand. Average over what time?
All of the different components in hass make it really hard to understand. Could you help me with a short example with, say, usage over a day?
Thank you!
That's the tricky part; the only answer I can give is "it depends". The statistics component requires a fixed number of samples to work, the default value is 20. There is also a time limit variable called max_age so that values older than that are omitted, even if that makes for fewer than the defined number of samples.
Unfortunately there seems to be no way to use only the max_age variable, making it quite tricky to create statistics for sensors that change unpredictably or very often. An outdoor temperature sensor might change 100 or 200 times per day, that's not too bad. A bandwidth sensor that updates every second or two, on the other hand, makes for a huge number of samples over a day (86,400) or a month (2.7 M), and I don't know how well HASS handles such large datasets.
A 24h sensor might look something like this:
sensor:
  - platform: statistics
    entity_id: sensor.internet_speed_in
    sampling_size: 100000
    max_age:
      hours: 24
I have created a GitHub issue on this matter, asking for a statistics sensor based on time rather than sample size.
Great! It should be possible to create a week sensor based on 7 samples of that sensor then. But what I want is to show the cumulative amount of bytes transmitted (perhaps summing the snmp_wan_out directly)…
Well, yes and no. A new sample is generated whenever a monitored value changes, so updating an hourly sensor would trigger an update of the daily sensor, which in turn would trigger an update of the weekly sensor, and so on… You'd need some way to keep the daily sensor from updating every second; I'm not sure how to do that.
As for the cumulative value, the statistics component offers a number of attributes that you can use in template sensors. In this case you could use {{states.sensor.internet_speed_in.attributes.total}} (or something similar, I can't test the syntax right now).
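A rough sketch of how that could be wrapped in a template sensor; the entity and attribute names here are placeholders, so check the actual attributes your statistics sensor exposes (in the dev tools) before relying on this:
sensor:
  - platform: template
    sensors:
      internet_usage_total:
        friendly_name: 'Internet usage (total)'
        # Placeholder: 'total' attribute of the statistics sensor, if it exists
        value_template: '{{ states.sensor.internet_speed_in.attributes.total }}'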
I try to configure SNMP to get the bandwidth of my router similar to this configuration and I get an error:
Unable to find service input_number/set_value
sensor.yaml
automation.yaml
I get numbers from 'snmp_wan_in' and 'snmp_wan_out'. I think the problem is in the automation script?
Now it's working. I added the input_number from sensor.yaml to configuration.yaml.
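For reference, a minimal sketch of the input_number entities that the automations in this thread expect; the names match the ones used in the automations, but the min/max and mode values here are only illustrative:
input_number:
  internet_traffic_delta_in:
    name: 'Internet Traffic Delta In'
    min: 0
    max: 1000000000   # illustrative upper bound
    mode: box
  internet_traffic_delta_out:
    name: 'Internet Traffic Delta Out'
    min: 0
    max: 1000000000   # illustrative upper bound
    mode: box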
Hi everybody, I'm trying to get a valid value in HASSIO from a Mikrotik router through the snmp bandwidth monitor configuration, but I haven't accomplished it yet.
configuration.yaml
- platform: snmp
  host: 192.168.1.1
  community: public
  baseoid: .1.3.6.1.2.1.31.1.1.1.6.20
  name: 'PPoE_download'
  unit_of_measurement: 'Mbit/sec'
  value_template: '{{((value | int) / 8 / 1024 / 1024) | int}}'
  accept_errors: true
- platform: snmp
  host: 192.168.1.1
  community: public
  baseoid: .1.3.6.1.21.1.1.1.10.20
  name: 'PPoE_upload'
  unit_of_measurement: 'Mbit/sec'
  value_template: '{{((value | int) / 8 / 1024 / 1024) | int}}'
  accept_errors: true
but the OID DataType is Counter64 and not an Integer
snmpwalk command:
snmpwalk -Os -c public -v 1 192.168.1.1 .1.3.6.1.2.1.31.1.1.1.6.20
ifHCInOctets.20 = Counter64: 8499094646
so how do I convert it in order to get a valid result?
The SNMP MIB variable that you are retrieving returns a running counter of the number of octets (bytes) that have been transmitted (or received, depending on which one) over the interface. This is not a rate. To compute a rate, you need to fetch the counter, wait an interval, fetch the counter again and see how much the counter changed over what interval of time. (Often, you might also retrieve system.sysUpTime.0 each time to get the managed entity's idea of the elapsed time between the measurements, to factor out the latency of sending, processing and returning the response.)
It might be that your router also computes a rate as some proprietary MIB variable, but that's not the one that you're retrieving in your example.
I don't have a specific recommendation on how best to implement this rate computation in Home Assistant.
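For what it's worth, here is a minimal sketch of that counter-delta idea, along the lines of the automation pattern used earlier in this thread; the sensor and input_number entity names are placeholders for your own:
automation:
  - alias: Compute download rate from octet counter
    trigger:
      platform: state
      entity_id: sensor.ppoe_download_octets   # placeholder: raw ifHCInOctets counter sensor
    action:
      - service: input_number.set_value
        data_template:
          entity_id: input_number.ppoe_download_rate   # placeholder target
          # (new counter - old counter) * 8 bits, divided by elapsed seconds = bit/s
          value: >-
            {{ ((trigger.to_state.state | int - trigger.from_state.state | int) * 8) /
               (as_timestamp(trigger.to_state.last_updated) - as_timestamp(trigger.from_state.last_updated)) }}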
OK, thanks Louis, I will dig more into that.
Is anyone using this with the data provided by the UPNP sensors? sensor.igd_received_bytes and sensor.igd_sent_bytes
I imagine it should work, but I haven't gotten any value out of it yet.
Yay! I got this working with IGD stats provided by upnp: (which is a much simpler and easier data sensor since there are no SNMP strings)!
TL;DR: IGD stats are in Megabytes not Bytes so you have to multiply by 8000000 instead of 8 in automations, but otherwise everything works pretty much the same.
Here's the relevant entry to configure IGD instead of SNMP data. Everything else is the same as @mconway's package on GitHub.
automation:
  - id: snmp_monitor_traffic_in
    alias: Monitor Traffic In
    trigger:
      platform: state
      entity_id: sensor.igd_received_bytes
    action:
      - service: input_number.set_value
        data_template:
          entity_id: input_number.internet_traffic_delta_in
          value: '{{ ((trigger.to_state.state | int - trigger.from_state.state | int) * 8000000 ) / ( as_timestamp(trigger.to_state.last_updated) - as_timestamp(trigger.from_state.last_updated) ) }}'
  - id: snmp_monitor_traffic_out
    alias: Monitor Traffic Out
    trigger:
      platform: state
      entity_id: sensor.igd_sent_bytes
    action:
      - service: input_number.set_value
        data_template:
          entity_id: input_number.internet_traffic_delta_out
          value: '{{ ((trigger.to_state.state | int - trigger.from_state.state | int) * 8000000 ) / ( as_timestamp(trigger.to_state.last_updated) - as_timestamp(trigger.from_state.last_updated) ) }}'
Thanks for your work, and thanks to @mconway for the package!
Trying to read through the .yaml file and understand what is happening. Two questions:
- Why are you using an 'input_number' (slider) for the delta? This creates a visible element on the dashboard; is there any reason I would ever need to manually change this?
- @mconway has a sensor.ping_time_mean, but I don't see any associated code? Is this contained elsewhere?
I implemented bandwidth monitoring, based on the @mconway GitHub sample (thanks!), and it worked awesome.
Now, my Asus RT3100 router reports strangely similar ifInOctets and ifOutOctets on the WAN (eth0) interface, which makes HASS report similar in and out bandwidth, especially noticeable when streaming Netflix (17 Mbps in and about the same out; the 'in' value is accurate but the 'out' is not), which is obviously wrong.
Turns out there is an explanation for the strange behavior, details here:
eth0 ifInOctets: 1.3.6.1.2.1.2.2.1.10.5
eth0 ifOutOctets: 1.3.6.1.2.1.2.2.1.16.5
In my case I opted to show three 'out' values: one calculated using the eth2 'in' (WiFi 5 GHz), one using the eth1 'in' (2.4 GHz), and the other using vlan1 (the 4-port wired switch). All measurement tests for inbound and outbound traffic, from either radio or the wired connections, are now OK (validated with speedtests on different devices).
Well, the above 4-OID solution worked perfectly but it turned out to be too much for the RPi3b; HASS CPU utilization went through the roof together with the temps.
Turns out that the snmp platform is very inefficient (there is a pull request meant to help on that front). I changed the approach and used command_line (the snmpget command) instead. Then, to further optimize, I merged all 4 OID queries into one command and used a template to split the output. CPU utilization and temps decreased a lot. Finally, to further minimize CPU use (I use the same RPi as a headless Kodi music appliance, with a HiFiBerry card), I went back from 4 sensors to 2, by using a template to add up the wired and both WiFi 'output traffic' values into a single sensor.
Really pleased with the result; CPU load increased only slightly even with 10-second polling and 4 OIDs. This is the relevant bit:
- platform: command_line
  name: snmp_multiple_oids  # scan 4 OIDs in one shot
  command: snmpget -v 2c -c your_community -Oqv 192.168.1.1 1.3.6.1.2.1.2.2.1.10.5 1.3.6.1.2.1.2.2.1.10.9 1.3.6.1.2.1.2.2.1.10.8 1.3.6.1.2.1.2.2.1.10.7 | tr '\n' ':'
  scan_interval: 10
- platform: template
  sensors:
    snmp_wan_in:
      value_template: "{% set list = states.sensor.snmp_multiple_oids.state.split(':') %} {{ list[0] }}"
    snmp_wan_out: # adding up all three 'output traffic' OIDs to avoid having to present them separately (high cpu)
      value_template: >-
        {{ states.sensor.snmp_multiple_oids.state.split(':')[1] | float +
           states.sensor.snmp_multiple_oids.state.split(':')[2] | float +
           states.sensor.snmp_multiple_oids.state.split(':')[3] | float | round(0) }}