Below are my supply costs for the past few months. The recent months have been rough with the volatile temps and prices. This June is the highest I’ve ever paid per kWh; my average since April 2020 has been $0.03 per kWh.
| Month  | Usage    | Supply Cost | $/kWh     |
|--------|----------|-------------|-----------|
| Jan-22 | 1811 kWh | $64.13      | $0.03541  |
| Feb-22 | 1476 kWh | $59.56      | $0.04035  |
| Mar-22 | 1256 kWh | $47.85      | $0.03810  |
| Apr-22 | 927 kWh  | $43.81      | $0.04726  |
| May-22 | 949 kWh  | $61.30      | $0.06459  |
| Jun-22 | 988 kWh  | $65.73      | $0.06653  |
Also, I can see on my bill that CEJA really impacted the capacity charge rate. It went from capacity obligation x 6.12131 to capacity obligation x 2.03025, i.e. the charge rate is now about a third of what it used to be. While capacity obligation management is still a good figure to manage, it’s not nearly as big a lever as it used to be.
Thanks for the info. Yeah, it’s good to see the capacity charge rate go down by so much. It’s been in the $4.50 - $6.50 range for the past 5 years. But, no guarantees it won’t go back up in the future.
I realized my calculation for my average supply cost (if I were on hourly pricing) was wrong - it would actually have been 9.3c per kWh for the past 30 days, so just barely higher than what I’m paying with my alternate supplier. Going to wait and see and keep watching it over the next few months before making the decision to switch back to hourly.
Think I fixed my timezone issue. All of HA’s timestamps are stored in UTC in the DB, which the front end converts to local time. The high-marker query was definitely including duplicate CDT days because it was breaking days down by UTC.
I added convert_tz(last_updated, '+00:00', '-05:00') to the query below, which seems to have solved it (doing the same for the ComEd counterpart). Luckily this appears to be a non-issue so far in 2022, as the top 5 entries included no duplicate days. And it would be rare to begin with - the peak would have to hit at 7pm, when the UTC day rolls over. Regardless, good to fix. I stand corrected: the PJM total was okay, but ComEd actually had 2 duplicate days from those abnormal heat waves that hit later in the day.
-- 5th-highest daily peak for the June-September window (days bucketed in Central time)
select last_updated, cast(state as integer) as state
from (
    select *,
           -- rank each day's readings, highest load first; convert_tz shifts
           -- the UTC timestamps to Central so days break at local midnight
           row_number() over (partition by date_format(convert_tz(last_updated, '+00:00', '-05:00'), '%Y-%m-%d')
                              order by cast(state as integer) desc) as _rn
    from states
    where entity_id = 'sensor.pjm_total_load_current_hour_high'
      and state != 'unknown' and state != ''
      -- date_sub anchors the Jun 1 - Oct 1 window to the correct year
      and last_updated > date_format(date_sub(current_date(), interval 5 month), '%Y-06-01')
      and last_updated < date_format(date_sub(current_date(), interval 5 month), '%Y-10-01')
) as _max
where _rn = 1
order by cast(state as integer) desc
limit 1 offset 4;
How would one go about converting the YAML-based SQL queries into UI-based integrations, so I can get rid of the newly discovered SQL integrations on the Devices page in HA?
Looks like the most recent database upgrades in 2023.4 don’t like the SQL queries here anymore. I’m getting errors that the SQL query does a full table scan, causing instability. It appears the queries will need to be modified - I’ll take a look at some point.
There is a warning on the PJM Data Snapshot | Instantaneous Load page, which is used by the custom PJM sensor from joe248:
Data Snapshot will be retired on June 1, 2023. Any systems that currently scrape Data Snapshot for data must transition to Data Miner APIs.
I tried adding that line directly below - platform: and then separately directly below - type:, and both times there was an error while checking the config:
Invalid config for [sensor.pjm]: [state_class] is an invalid option for [sensor.pjm]. Check: sensor.pjm->state_class. (See ?, line ?).
Invalid config for [sensor.pjm]: [state_class] is an invalid option for [sensor.pjm]. Check: sensor.pjm->monitored_variables->0->state_class. (See ?, line ?).
I’ve been working on updating the SQL sensors for MariaDB. I think I got them working? Need to wait a few more days to see how the data is flowing. I ran into some nuances with the new last_updated_ts column: it needs to be converted to a human-readable timestamp using from_unixtime(), but that also converts the timezone, which wasn’t done before… I’ll share my updated SQL once I’ve confirmed it’s working.
One thought - I think the SQL queries within the AppDaemon app will need to get updated too in order to use 'states_meta' and 'last_updated_ts'. Don’t think they’ll work as of now.
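For anyone who wants to experiment in the meantime, a minimal sketch of the shape I’m working with (the entity name is a placeholder, and it assumes the post-2023.x YAML schema for the SQL integration plus a DB session running in UTC):

sql:
  - name: "PJM load - today's high (local time)"
    # from_unixtime() renders the epoch in the session time zone, so the
    # '+00:00' source below assumes the MariaDB session runs in UTC.
    query: >
      SELECT MAX(CAST(state AS INTEGER)) AS peak
      FROM states
      WHERE metadata_id = (SELECT metadata_id FROM states_meta
                           WHERE entity_id = 'sensor.pjm_total_load')
        AND state NOT IN ('unknown', 'unavailable', '')
        AND DATE(CONVERT_TZ(FROM_UNIXTIME(last_updated_ts), '+00:00', '-05:00'))
            = DATE(CONVERT_TZ(UTC_TIMESTAMP(), '+00:00', '-05:00'))
    column: peak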
Hourly pricing is great for EVs, solar, and powerwalls. I have my solar nearly always exporting during the 5 peak hours/days, so I usually can get a negative capacity charge, but I don’t want to get too off topic. I haven’t integrated your code yet, but I want to try it. I’m thinking that I can use this to predict the “at risk” days, and then I can have 100% of the solar export and just run on the powerwalls, which means I should be able to double or triple the negative size of my capacity charge…maybe??? My current capacity charge is -2.43kW.
For the noobs who are only looking for the ComEd hourly integration, this is a good support thread: Switching by ComEd kWh supply cost? Sorry for the mini snipe, but I spent hours trying to find useful threads on the ComEd Hourly integration, and this is pretty much the only decent one google finds. I hope this helps some noobs save time if they need something a bit more basic.
So I am trying to install this and unpack how it works. I have only been using HA for a couple weeks, so I am still learning how to navigate the OS.
I have the configuration at the top of the YAML and the sensor data in the sensor section of my YAML. This is accepted as valid YAML except for the platform: pjm section.
For the AppDaemon app, what do I do with this? I didn’t find any directions, so I am not sure what I’m supposed to do with this file in HA. I assume that properly installing the AppDaemon app will fix the YAML error on platform: pjm.
Lastly, I am a bit confused by the SQL part. From this thread, it appears that some people are using it and some are not. For my use, all I need is a flag I can check that tells me “this might be a peak day” or “a peak-day hour might be coming up soonish”, so I can flip my solar and powerwall handling. This doesn’t need to be as accurate as for the folks who are losing AC, as I can run the house on powerwalls without issue for as many hours as needed. So do I need the database for this, or can the program/AppDaemon still run the eco flags without a database?
If I need a database, where does this database come from? I see HA wants a database URL on the integration setup - is this something I spin up on a hosting site like GoDaddy? Or is there an add-on that just puts the database right on the HA hard drive?
Is this integration too hard for a noob? It did take me multiple hours to get the HA ComEd 5 minute pricing integration working with a trigger.
If the PJM sensor isn’t being accepted in your configuration, it’s likely that you didn’t install the custom PJM component correctly.
You shouldn’t need the AppDaemon app at all for your use case because you don’t care about nailing the exact hour of the peak and you have the ability to run off your powerwalls for long periods of time. The PJM sensor also provides daily load forecast data. You can add the forecast sensors in addition to the instantaneous sensors like this:
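# Snippet reconstructed from the full config posted later in this thread;
# adjust the zone to your own utility.
sensor:
  - platform: pjm
    monitored_variables:
      - type: instantaneous_total_load
      - type: total_load_forecast
      - type: instantaneous_zone_load
        zone: "COMED"
      - type: zone_load_forecast
        zone: "COMED"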
Home Assistant comes with a local SQLite database by default and you shouldn’t need to do anything to set it up as far as I’m aware. You definitely need the 2 “High Marker” SQL sensors for your use case because these keep track of what the highest loads have been so far for the current year.
Once you have the custom PJM component installed and the SQL sensors, you’ll need to create an automation to use them. You’ll probably want something that checks whether the forecast load for the day is greater than, say, 98% of the current high marker. If that’s true, there’s a good chance that a new high will be made that day, so run your house on the powerwalls from, say, 12pm-8pm (historically the peaks usually occur between 3-6pm, but if you’re able to run as many hours as needed then 12-8pm is safer). Your automation will need to look at both the PJM total and the ComEd zone loads.
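For illustration, a rough sketch of that automation (every entity name here is a placeholder - point them at your own forecast and high-marker sensors):

automation:
  - alias: "Possible peak day - run on Powerwalls"
    trigger:
      - platform: time
        at: "12:00:00"
    condition:
      # Placeholder entities: the daily forecast sensors vs. the "High Marker"
      # SQL sensors, for both the PJM total and the ComEd zone.
      - condition: template
        value_template: >
          {{ states('sensor.pjm_load_forecast') | float(0)
               > 0.98 * (states('sensor.pjm_year_high_marker') | float(0))
             or states('sensor.comed_load_forecast') | float(0)
               > 0.98 * (states('sensor.comed_year_high_marker') | float(0)) }}
    action:
      # Swap in whatever actually switches your Powerwalls to self-power.
      - service: persistent_notification.create
        data:
          message: "Forecast within 2% of the current high marker - possible peak day."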
The PJM ComEd zonal load price is the same as the ComEd Hourly Pricing 5-minute price - one is just $/MWh and the other is $/kWh. You can eliminate the ComEd sensor by taking the PJM value and dividing by 1000.
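For example, a template sensor along these lines would do it (the LMP entity name is an assumption - use whatever your PJM sensor exposes):

template:
  - sensor:
      - name: "ComEd hourly price (derived)"
        unit_of_measurement: "$/kWh"
        # sensor.comed_zonal_lmp is a placeholder for the PJM zonal LMP
        # entity, which reports in $/MWh.
        state: >
          {{ (states('sensor.comed_zonal_lmp') | float(0)) / 1000 }}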
You should incorporate the day-ahead and real-time hourly LMPs. ComEd’s API doesn’t include the day-ahead price, but they show it on their website.
I’ll have to give your PJM sensor a spin. I’m glad to see you incorporated the PJM Data Miner API.
One strategy I was thinking about: if you suspect you’re on a peak day but can’t stand the heat, you could run your AC in one-hour blocks starting at the 30-minute mark. For example, between 4PM-6PM you would have the AC run from 4:30-5:30, splitting the load between two hours. Then, if either hour was the peak, you’ve only consumed half an hour’s worth of energy.
When I was first working on the PJM sensor I was IP banned (permanently I think) several times because they have pretty strict rate limits in place. The ComEd sensor was created first, but it makes sense to continue using it to reduce the number of calls made to PJM.
The few times I’ve compared the day-ahead prices to what the actual prices ended up being, I found the day-ahead prices to be pretty much worthless. They also seem to follow a fairly predictable pattern based on forecast temperature and time of day. What is your use case for wanting them?
Your strategy to split the load between two hours is a great idea if you can’t stand the heat. Keep in mind, though, that we can’t predict with high certainty the exact hour the peak will occur. In practice, on a hot summer day the peak could easily occur anywhere within a 2-5 hour window, and the PJM and ComEd peaks can easily be 1-2 hours apart as well. If you wanted to run your AC at 50% you’d need to do something like:
A: 2:30 - 3:30, 4:30 - 5:30, 6:30 - 7:30
or B: 2:00 - 2:30, 3:00 - 3:30, etc
My house is insulated well enough that I can go 4-5 hours with no AC without getting too uncomfortable, but others might find that strategy very useful.
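If anyone wants to automate option B, here’s a bare-bones sketch (the thermostat entity is a placeholder, and you’d want to gate it on your own peak-day flag):

automation:
  # Option B sketch: run the AC only during the first half of each hour
  # across the risk window, so no single hour sees more than 30 minutes of load.
  - alias: "Peak day - AC on at the top of the hour"
    trigger:
      - platform: time_pattern
        minutes: 0
    condition:
      - condition: time
        after: "14:00:00"
        before: "19:00:00"
    action:
      - service: climate.turn_on
        target:
          entity_id: climate.placeholder_thermostat
  - alias: "Peak day - AC off at the half hour"
    trigger:
      - platform: time_pattern
        minutes: 30
    condition:
      - condition: time
        after: "14:00:00"
        before: "19:30:00"
    action:
      - service: climate.turn_off
        target:
          entity_id: climate.placeholder_thermostat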
I’ve revised your sensor.py to include the 5-min LMP average and the 5-min load forecast.
The 5-min load forecast can be used to predict the peak. It’s updated every 5 minutes and extends out 2 hours. You can then run an automation that triggers if the peak is predicted to occur within the next hour.
You should be able to pass any utility zone, so it’s now applicable to any utility within PJM.
Here’s the config.yaml:
sensor:
  - platform: pjm
    monitored_variables:
      - type: instantaneous_total_load
      - type: total_load_forecast
      - type: total_short_forecast
      - type: instantaneous_zone_load
        zone: "COMED"
      - type: zone_load_forecast
        zone: "COMED"
      - type: zone_short_forecast
        zone: "COMED"
      # - type: zonal_lmp
      #   zone: "COMED"

  # Compare to short forecast timestamp attribute (PJM and Zone) to determine if peak occurs within next hour
  - platform: template
    sensors:
      current_time_plus_one_hour:
        value_template: >
          {{ (as_timestamp(now()) + 3600) | int }}
        friendly_name: "Next Hour"
Sensor.py
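And a rough sketch of the automation that comment describes (the attribute name on the short-forecast sensor is an assumption - check what the revised sensor.py actually exposes):

automation:
  - alias: "Peak predicted within the next hour"
    trigger:
      - platform: time_pattern
        minutes: "/5"
    condition:
      # Compare the predicted peak timestamp to the Next Hour template sensor above.
      - condition: template
        value_template: >
          {% set peak = state_attr('sensor.comed_zone_short_forecast', 'forecast_peak_time') %}
          {{ peak is not none and
             as_timestamp(peak) <= states('sensor.current_time_plus_one_hour') | int(0) }}
    action:
      - service: persistent_notification.create
        data:
          message: "ComEd zone peak predicted within the next hour."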
I was also having trouble getting the SQL to work - I kept getting invalid-query errors using the SQL integration in the web UI. These ended up working for me (just tweak them to point to the ComEd or PJM load sensor to create the hourly peak, and point to the new hourly-peak SQL sensor to develop a 5th coincident peak):
Hourly Peak
SELECT state
FROM (
    SELECT CAST(state AS INTEGER) AS state,
           -- rank the current hour's readings numerically, highest load first
           ROW_NUMBER() OVER (ORDER BY CAST(state AS INTEGER) DESC) AS rownum
    FROM states
    WHERE metadata_id = (
        SELECT metadata_id
        FROM states_meta
        WHERE entity_id = 'sensor.comed_zone_load'
    )
    AND state != 'unknown'
    AND state != ''
    AND strftime('%Y-%m-%d %H', last_updated_ts, 'unixepoch') = strftime('%Y-%m-%d %H', 'now')
) AS ranked
WHERE rownum = 1;
and 5th Coincident Peak
WITH daily_peaks AS (
SELECT
date(last_updated_ts, 'unixepoch') AS date,
state,
MAX(CAST(state AS INTEGER)) AS peak_value
FROM states
WHERE metadata_id = (
SELECT metadata_id FROM states_meta WHERE entity_id = 'sensor.pjm_total_load_current_hour_high'
)
AND state NOT IN ('unknown', '', 'unavailable')
AND state GLOB '[0-9]*' -- Keeps only states that start with a digit
AND last_updated_ts >= strftime('%s', 'now', '-4 months')
GROUP BY date
),
ranked_peaks AS (
SELECT state, peak_value, date,
ROW_NUMBER() OVER (ORDER BY peak_value DESC) AS rank
FROM daily_peaks
)
SELECT peak_value
FROM ranked_peaks
WHERE rank = 5;