Part 2: The visuals
Now that I’ve collected a week’s worth of data for most of these sensors, I think I’ve worked out a good dashboard for myself. I’m tracking the outbreak across two areas of the state, because my wife works as a doctor in a different region, and I’m tracking Pennsylvania as a whole to give myself some comparisons. Again, most of this code won’t be copy-pasteable to your situation, but it might give you ideas for your own dashboard.
I’ve tried to make this useful as both a big-screen, data-heavy dashboard (that I use) and a phone display that my wife occasionally glances at.
Across the top I’ve pulled out the most relevant information in badges: the number of cases, the percent of tests that come back positive, the percent of confirmed cases that end in death (the case fatality rate), the number of deaths, and the rate at which those deaths are doubling. In a week or so I’ll add info on how that doubling rate is changing (is this thing getting better or worse?).
The cards:
The upper map is an iframe card that references a Google map that is updated by the city. It gives a useful overall picture.
aspect_ratio: 50%
type: iframe
url: 'https://www.google.com/maps/d/embed?z=6&mid=1lpLPPrltIuVkwFtXhoWdWEA4B1DvNmne'
The next is county-by-county level information. I’ve used the multiple-entity-row and fold-entity-row to show high-level information while still giving me access to the underlying input_numbers. This is important because I’m still hand-entering data every day at 1 PM when our state releases it; there still isn’t a machine-readable version. Because the city uses a dynamically generated aspx website, I can’t use the scrape sensor or my own Python script.
and with one open.
entities:
- entities:
- entity: input_number.berks_county_covid_cases
icon: 'mdi:counter'
secondary_info: last-changed
- entity: input_number.bucks_county_covid_cases
icon: 'mdi:counter'
secondary_info: last-changed
....
head:
label: Cases
type: section
type: 'custom:fold-entity-row'
- entities:
- entity: sensor.philadelphia_area_covid_cases
name: Current
entity: sensor.philly_area_cases_doubling_time
icon: 'mdi:counter'
name: Philadelphia Area
secondary_info: last-changed
state_header: Doubling Every
type: 'custom:multiple-entity-row'
....
- entities:
- entity: input_number.berks_county_covid_deaths
icon: 'mdi:counter'
secondary_info: last-changed
....
head:
label: Deaths
type: section
type: 'custom:fold-entity-row'
- entities:
- entity: sensor.philadelphia_area_covid_deaths
name: Current
entity: sensor.philly_area_death_doubling_time_filter
icon: 'mdi:counter'
name: Philadelphia Area
secondary_info: last-changed
state_header: Doubling Every
type: 'custom:multiple-entity-row'
....
- entities:
- entity: input_number.confirmed_positive_covid_tests
icon: 'mdi:counter'
secondary_info: last-changed
- entity: input_number.negative_covid_tests
icon: 'mdi:counter'
secondary_info: last-changed
- entity: input_number.greater_pa_covid_deaths
icon: 'mdi:counter'
secondary_info: last-changed
head:
label: Overall
type: section
type: 'custom:fold-entity-row'
show_header_toggle: false
title: By County
type: entities
I use the multiple-entity-row to show the number of cases as well as the doubling rate (more on this later).
Next I have a collection of two graphs showing the state of testing. These are mini-graph-cards that show the positive test rate and the number of positive, negative, and total tests.
aggregate_func: last
color_thresholds:
- color: green
value: 30
- color: yellow
value: 60
- color: red
value: 70
- color: crimson
value: 90
color_thresholds_transition: hard
entities:
- entity: sensor.covid_current_rate
show_fill: false
show_state: true
state_adaptive_color: true
title: Current
group_by: date
hours_to_show: 156
icon: 'mdi:home-thermometer'
lower_bound: 0
name: Positive Test Rates
show:
labels: true
show_state: false
smoothing: true
type: 'custom:mini-graph-card'
upper_bound: 20
I calculate the positive test rate with a simple template sensor:
- platform: template
sensors:
covid_current_rate:
friendly_name: Positive Case Rate
unit_of_measurement: '%'
value_template: >-
{% set pos = states.input_number.confirmed_positive_covid_tests.state | float %}
{% set neg = states.input_number.negative_covid_tests.state | float %}
{% set p_pos = states.input_number.presumptive_positive_covid_tests.state | float %}
{% set pending = states.input_number.pending_covid_tests.state | float %}
{# pending tests are excluded from the total: only completed tests count toward the rate #}
{% set total = pos + neg + p_pos %}
{{ (100 * (pos + p_pos) / total) | round(1) }}
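To sanity-check the template's arithmetic, here is the same calculation in plain Python (the counts are made up; the real values come from the input_numbers above):

```python
# Hypothetical test counts standing in for the input_number states
pos, neg, p_pos = 120.0, 880.0, 10.0  # pending tests are excluded, as in the template
total = pos + neg + p_pos
rate = round(100 * (pos + p_pos) / total, 1)
print(rate)  # percent of completed tests that came back positive
```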
These are useful to see whether there is a surge in testing and how that impacts the rate of positive tests. Currently awaiting said surge.
The next is the case fatality rate. I’m plotting it across the multiple sites and through time. This will help me understand how overwhelmed the system is: if there are too many cases and not enough resources, this will increase; if we’re managing things well, it will stay constant.
- platform: template
sensors:
philly_area_case_fatality_rate:
friendly_name: Philly Case Fatality Rate
unit_of_measurement: '%'
value_template: >-
{% set deaths = states.sensor.philadelphia_area_covid_deaths.state | float %}
{% set cases = states.sensor.philadelphia_area_covid_cases.state | float %}
{{ (100*(deaths/cases)) | round(2) }}
The next is a set of graphs for the number of cases/deaths and the change in doubling times. Currently there are only three days’ worth of doubling data (because of trouble with the sensors; more on this later).
Calculating the doubling time turned out to be obnoxious. At first I figured I could go with the Trend sensor. It measures gradients well; I’ve used it for my temperature sensors and found it particularly useful. What I didn’t realize is that it doesn’t persist its data across a HASS restart: its buffer of samples doesn’t reload, so it won’t give you a gradient until at least two more updates arrive. When you’re dealing with daily-level data and you’re stuck inside hacking on your Home Assistant, there are a lot of restarts and you’ll never get a gradient.
My solution was to use InfluxDB to track and aggregate the data and calculate the rates, then use the influxdb sensor to bring that data back into Home Assistant. I ended up making a dedicated InfluxDB database so that if I found another data source I could merge things later. If someone can think of a pure HA solution, I’d love to learn a new trick.
Here’s the influx query that works for my setup.
This one gathers my sensor data, aggregates it by day, then inserts it into my covid database.
SELECT max("value") AS "philly_area_deaths" INTO "covid"."autogen"."raw" FROM "home_assistant"."autogen"."value" WHERE "entity_id"='philadelphia_area_covid_deaths' GROUP BY time(1d) FILL(previous);
The next one takes the day-level data and uses the derivative function to build the equation for the instantaneous doubling rate between any two days. This equation boils down to log2(today/yesterday) (in words: to what power must 2 be raised to produce the growth I saw between yesterday and today). That is the growth rate in doublings per day; most people prefer the doubling time, which is just its inverse. I had to use the derivative function to calculate the “yesterday” denominator because Influx doesn’t seem to have a way to lag values. So I calculated yesterday as “today minus the one-day derivative”; since the derivative is exactly the difference between today and yesterday, the arithmetic works out.
SELECT 1/log2((max("philly_area_deaths")/(max("philly_area_deaths")-derivative(max("philly_area_deaths"), 1d)))) AS "philly_area_deaths_doubling_time" INTO "covid"."autogen"."rates" FROM "covid"."autogen"."raw" WHERE time > :dashboardTime: GROUP BY time(1d) FILL(null);
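The arithmetic that query implements can be checked with a short Python sketch (the numbers here are made up for illustration):

```python
import math

def doubling_time(today, yesterday):
    """Instantaneous doubling time between two daily totals:
    the inverse of log2(today / yesterday)."""
    return 1 / math.log2(today / yesterday)

# If a count goes from 50 to 100 in one day, it doubled in exactly one day
assert doubling_time(100, 50) == 1.0

# Reconstructing "yesterday" the way the Influx query does:
# derivative = today - yesterday, so yesterday = today - derivative
today, derivative = 80.0, 16.0
yesterday = today - derivative
print(round(doubling_time(today, yesterday), 2))
```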
I have these running on a continuous query.
Then I use the influxdb sensor to get the doubling time back out and graph it. Remember, higher numbers are better (6 days to double is better than 2 days to double). A word of caution on interpreting changes in doubling times: when we surge testing, I expect the case doubling time to decrease (because we’re DETECTING more cases per day), and that is a good thing. If the case doubling time is increasing (like in my graphs), it means testing is actually slowing relative to the spread. The death doubling time is what to look at when assessing whether social distancing is working: if that number is increasing, we’re seeing improvement; if it’s decreasing, it’s a symptom of faster spread and a more overloaded system.
Lastly, I have these prediction tabs. I keep them in folded rows like a “spoiler tag”, because sometimes you may not want to see them. The predictions come from modifications of the sensors from the earlier post. I’ve made them specific to the data from each region, and I’ve used the filter sensor to clean up some noise.
entities:
- entities:
- entity: input_number.greater_pa_covid_deaths
name: Deaths Seen Today
- entity: sensor.pa_area_death_doubling_time_filter
name: Doubling every
- entity: sensor.pa_area_deaths_tom
name: Tomorrow
- entity: sensor.pa_area_deaths_week
name: In a week
head:
label: PA
type: section
type: 'custom:fold-entity-row'
title: Predictions
type: entities
As before, I need the CFR and the doubling time to make my predictions. But as I get values day to day, there’s going to be noise, so I’m putting them through a low-pass filter with a 7-day time constant. My doubling times and CFRs will therefore roughly be an average of the past week, which seemed reasonable. I’m also using a time-window filter to remove the effect of my data entry on the CFR. Since I enter values by hand, only one value gets updated at a time; each update causes my template sensors to recalculate, and those intermediate values aren’t “real”, which throws off the filter’s time constant. Putting the time_throttle filter on removes those intermediate values. I also put on a range filter just in case I enter a 0 somewhere and get out-of-bounds values.
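For intuition on why the 7-day time constant tames single-day noise: as I understand it, Home Assistant's lowpass filter blends the previous output with each new sample using B = time_constant / (time_constant + 1). A quick sketch (the sample values are invented):

```python
def lowpass(prev, new, time_constant=7):
    # Blend the previous filtered value with the new sample.
    # With time_constant=7, B = 7/8, so a new sample only
    # contributes 1/8 of its value to the output.
    b = time_constant / (time_constant + 1)
    return b * prev + (1 - b) * new

# A one-day spike in CFR from 2% to 10% barely moves the filtered value:
print(lowpass(2.0, 10.0))  # 0.875*2.0 + 0.125*10.0 = 3.0
```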
# Get the doubling time back out of influx
- platform: influxdb
host: localhost
username:
password:
queries:
- name: Philly Area Deaths Doubling Time
unit_of_measurement: days
value_template: '{{ value | round(1) }}'
group_function: last
where: >-
time > now() - 20d
measurement: rates
database: covid
field: philly_area_deaths_doubling_time
- platform: filter
name: "Philly Area Case Fatality Rate Filter"
entity_id: sensor.philly_area_case_fatality_rate
filters:
- filter: time_throttle
window_size: "00:05"
- filter: range
lower_bound: 0
upper_bound: 100
- filter: lowpass
time_constant: 7
precision: 2
- platform: filter
name: "Philly Area Death Doubling Time Filter"
entity_id: sensor.philly_area_deaths_doubling_time
filters:
- filter: range
lower_bound: 0
- filter: lowpass
time_constant: 7
precision: 2
Then I use these filtered values in my prediction sensors.
- platform: template
sensors:
philly_area_deaths_tom:
friendly_name: Tomorrow's COVID Deaths (Philly)
unit_of_measurement: 'people'
value_template: >-
{% set deaths = states.sensor.philadelphia_area_covid_deaths.state | float %}
{% set lag = 17.3 %}
{% set doubling_time = states.sensor.philly_area_death_doubling_time_filter.state | float %}
{% set CFR = (states.sensor.philly_area_case_fatality_rate_filter.state | float)/100 %}
{% set doublings_during_lag = lag/doubling_time %}
{% set true_cases_lagged = deaths/CFR %}
{% set true_tom_lag = (true_cases_lagged * (2**(1/doubling_time))) | round %}
{{ (true_tom_lag*CFR) | round}}
philly_area_deaths_week:
friendly_name: Week COVID Deaths (Philly)
unit_of_measurement: 'people'
value_template: >-
{% set deaths = states.sensor.philadelphia_area_covid_deaths.state | float %}
{% set lag = 17.3 %}
{% set doubling_time = states.sensor.philly_area_death_doubling_time_filter.state | float %}
{% set CFR = (states.sensor.philly_area_case_fatality_rate_filter.state | float)/100 %}
{% set doublings_during_lag = lag/doubling_time %}
{% set true_cases_lagged = deaths/CFR %}
{% set true_tom_lag = (true_cases_lagged * (2**(7/doubling_time))) | round %}
{{ (true_tom_lag*CFR) | round}}
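The core arithmetic of both templates (the parts that actually affect the output; the lag variable is set but unused in the final calculation) can be sketched in Python with made-up inputs:

```python
def predict_deaths(deaths, doubling_time, cfr, days_ahead):
    """Mirror of the template sensors' math: back out case counts
    from deaths via the CFR, grow them exponentially at the current
    doubling time, and convert back to deaths."""
    true_cases = deaths / cfr                      # cases implied by deaths
    grown = round(true_cases * 2 ** (days_ahead / doubling_time))
    return round(grown * cfr)

# Hypothetical numbers: 40 deaths so far, doubling every 3 days, 2% CFR.
# Three days out is exactly one doubling, so deaths should double too:
assert predict_deaths(40, 3.0, 0.02, 3) == 80
print(predict_deaths(40, 3.0, 0.02, 7))  # the one-week projection
```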
I’m still working out a way to measure my prediction error correctly. It’ll probably have to be another InfluxDB trick.
Has anyone else out there come up with another good metric to track? Anyone having better luck getting data in machine-readable format at the local level?