I have a temperature sensor from a water heater. The value provided by the device has a granularity of 0.5 °C, e.g. it shows 31.5 °C, 31.0 °C, 30.5 °C, etc.

Unfortunately it seems that the manufacturer of the water heater did not implement any flap-prevention buffering for the measured value. When the water temperature slowly cools down, roughly every 2 h the reading starts flapping for about 20 min, at intervals of 1 to 5 min, between the current value and the next lower one (e.g. 31.5 and 31.0), until the new lower value (31.0) is finally stable.

I have a trend binary sensor for the temperature, and the flapping of the temperature unfortunately also leads to flapping of the trend state.

How could I prevent the flapping of the temperature value (GUI strongly preferred)?
See the screenshot of the 4 h graph cooling down from 52.5 °C to 51.0 °C, with the 0.5 °C flapping occurring every 2 h.

Thanks. Do I get it right that with that approach I would get (depending on the first value) exclusively either integer values (1, 2, 3, …) or fractional values (1.5, 2.5, 3.5, …)?

To prevent this little ugliness, roughness, and data loss,
would it be an optimization to first use the statistics integration with average_linear over the last 10 min, and on top of that use the threshold integration with a threshold of 0.5?
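The first stage of that chain can be sketched in Python; the 10-min statistics helper is approximated here by a mean over the last few samples, and all numbers are invented for illustration:

```python
from collections import deque

def rolling_mean(samples, window=4):
    """Mean over the last `window` samples: a rough stand-in for the
    statistics helper's 10-min mean."""
    buf = deque(maxlen=window)
    out = []
    for s in samples:
        buf.append(s)
        out.append(sum(buf) / len(buf))
    return out

# Raw sensor flapping between 31.5 and 31.0 while cooling down:
raw = [31.5, 31.0, 31.5, 31.0, 31.0, 31.5, 31.0, 31.0, 31.0, 31.0]
smoothed = rolling_mean(raw)

# The smoothed series stays between the two flap values and settles
# at 31.0 once the flapping is over:
print(smoothed[-1])  # -> 31.0
```

Whether a threshold of 0.5 on top of this helps depends on what the final entity should be: the threshold helper outputs a binary state, not a quantized temperature.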

The Threshold binary sensor is of no use to me. I simply do not need what it provides.

My approach for the flap prevention is the following:

I have a statistics integration on top of the temperature sensor value, of type “mean”, which computes the mean of the sensor values over the last 10 min.
If there was no new sensor value in the last 10 min, the last value is kept (keep_last_sample = true).

So, if the value flaps between e.g. 45.5 and 45.0 within the 10 min window, the mean value is somewhere between those two values. In my case of cooling water, the “mean” value will finally reach 45.0.

But I still cannot base my trend sensor on the mean value, because while it is no longer a two-value flapping, its fractional value can still go up and down.

So what I now need is a filter that says “change the value only if the new value differs by at least 0.5 from the old value”.
That should result in a 0.5 granularity, but, thanks to the mean calculation beforehand, without flapping.
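As far as I know there is no built-in helper that does exactly this, but the intended filter can be sketched in Python (the function name and the 0.5 step are illustrative, not an existing integration):

```python
def delta_filter(values, min_delta=0.5):
    """Emit a new value only when it differs from the last emitted
    value by at least `min_delta`; otherwise repeat the last value."""
    out = []
    last = None
    for v in values:
        if last is None or abs(v - last) >= min_delta:
            last = v
        out.append(last)
    return out

# A slowly drifting mean only steps once it has moved a full 0.5
# away from the last committed value:
print(delta_filter([45.5, 45.4, 45.3, 45.2, 45.1, 45.0]))
# -> [45.5, 45.5, 45.5, 45.5, 45.5, 45.0]
```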

The threshold approach is what pretty much every system in the world uses when the thing you want to control is on/off only. In electronics it is called a Schmitt trigger, and it is also what most thermostats do.
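The behaviour of such a trigger can be sketched minimally; the threshold values here are chosen arbitrarily for illustration:

```python
def schmitt(values, lower, upper, state=False):
    """Classic Schmitt trigger: switch on at or above `upper`, off at
    or below `lower`; between the two thresholds the previous state
    is kept, which is what suppresses flapping."""
    out = []
    for v in values:
        if v >= upper:
            state = True
        elif v <= lower:
            state = False
        out.append(state)
    return out

# A signal flapping around 30.0 does not toggle the output as long
# as it stays inside the 29.5-30.5 hysteresis band:
print(schmitt([31.0, 30.0, 30.5, 30.0, 29.0], lower=29.5, upper=30.5))
# -> [True, True, True, True, False]
```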

When you use a statistics function to reduce “flapping”, what you do is slow down the reaction to changes. In your case it uses temperatures from 10 to 15 minutes ago, when the temperature could have been wildly different.

This approach will most likely lead to a bigger deviation from the optimal point to switch. That may or may not be a problem to you, as long as you know the implications of your choice.

For controlling my stuff I use the trend binary sensor, which takes the water heater temperature as input. It shall show e.g. that the water heater temperature is “increasing” or “strongly increasing”. This works so far. The only problem is that the temperature value coming directly from the water heater flaps at times (see screenshot), and therefore, when there are only e.g. three flapping values in the last 30 min, the trend binary sensor also flaps, which I do not want.

What I do not need is something like “when water heater is below 30.0 °C threshold, switch on”.

The temperature of the water heater does not change by more than 0.5 °C within 15 min; you know, it holds 270 L, so it changes rather unhurriedly…

In that case I think you should not use the trend sensor, as it is hard to see what it does. I’d use a derivative sensor, over which you have more control; you can graph it to see what it does. The derivative has a moving-average window (so does the trend sensor, by the way), so you do not need a statistics sensor to average out hiccups. Then I’d use a threshold sensor on the derivative, because flapping can occur on any discrete value.
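A derivative over a moving time window can be sketched like this (the real derivative helper's internals may differ; the sample times and values are invented):

```python
def windowed_derivative(samples, window_s=1800):
    """Slope between the oldest and newest sample inside the trailing
    window, in degrees per hour. `samples` is a list of
    (time_in_seconds, value) pairs, oldest first."""
    t_new, v_new = samples[-1]
    in_window = [(t, v) for t, v in samples if t >= t_new - window_s]
    t_old, v_old = in_window[0]
    if t_new == t_old:
        return 0.0  # only one sample in the window: no slope yet
    return (v_new - v_old) * 3600 / (t_new - t_old)

# Cooling 0.5 degrees over 30 minutes gives -1.0 degrees per hour:
print(windowed_derivative([(0, 31.5), (900, 31.5), (1800, 31.0)]))
# -> -1.0
```

A threshold sensor with some hysteresis on top of such a derivative then yields a stable rising/falling binary state.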

You are creating a binary sensor when you use trend, and a threshold/Schmitt trigger is what is most often used to convert analog values into a binary value. As I said earlier, averages too can flip-flop around the desired value; they may also need protection from it.

The problem is that during the cooling period the heater does not provide a new value for 90 min, and for the trend (or derivative) I want to consider only the last 30 min. So the trend/derivative has only one value left, from 90 min ago. Then suddenly a new value comes in, e.g. 0.5 below the last one. One minute later the next value is up 0.5 again, and then it flaps every minute. And I want to treat 4 values as sufficient for the derivative computation, since when the heater is at work, requiring more values (one every 20 min) would be too slow to recognize the trend.
But the flapping every minute, over just 4 minutes, already triggers the derivative/trend in the middle of the night. That was the problem.

I think my approach with the maximum of the last 15 min irons this flapping out. I will use it for the temperature graph as well, so I get rid of those ugly zigzags like in the screenshot. I can live with the slight imprecision.

Tell the software engineers at the global company LG about the Schmitt trigger; it seems they have never heard of it.

If you have no new measurements for 90 min, how can you have a useful average over 15 min? I think you mean no changes for 90 min; that is somewhat different. Both statistics and trend have problems with 90 min without new values, because they update only when a new, changed value comes in.

This is all the more reason for using the derivative, because of this:

and the solution:

The first discussion also has other solutions, like seeding the updates with tiny variations in order to get a proper derivative. But in fact, the fast flip-flopping is the period when this problem is present the least, if you use a moving average.

As I had written, I am not using an average but the statistics integration with the maximum value of the last 15 min and keep_last_sample=true. The latter means that the last known maximum is always kept and provided every 5 min as the last known “maximum” value, even if the source entity does not provide a new value.
So if the temperature flaps down from 40.0 to 39.5 and then up again, there are 2 new values, but the known maximum stays the same (40.0), until the new value 39.5 has been the maximum for 15 min, which is only the case once the flapping period is over. So there is only one clean change from 40.0 to 39.5.
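The maximum-over-window idea can be sketched as follows (timestamps in seconds, values invented; keep_last_sample is left out of this toy model):

```python
def rolling_max(samples, window_s=900):
    """Maximum of all samples inside the trailing 15-min window; a
    stand-in for the statistics helper with a max characteristic."""
    out = []
    for t_now, _ in samples:
        out.append(max(v for t, v in samples if t_now - window_s <= t <= t_now))
    return out

# Flapping down to 39.5 and back up within the window: the reported
# maximum stays at 40.0 the whole time.
flap = [(0, 40.0), (60, 39.5), (120, 40.0), (180, 39.5)]
print(rolling_max(flap))  # -> [40.0, 40.0, 40.0, 40.0]
```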

Since it took you only 7 minutes to discard a very relevant discussion that has been going on for ages, I’m out of this discussion. You decided long ago that you know best, so I hope it works out for you…

Unfortunately, this does not work: when, after e.g. 1 h of cooling down, the value changes from 30.0 to 29.5 and the old kept maximum was 30.0, that old maximum is not kept any longer; the fresh value replaces it as the new maximum. Right afterwards, when it flaps back up to 30.0, 30.0 is the new fresh maximum, and the issue is there again.

The solution for my use case is:
Configure a statistics integration helper with the minimum of the last 15 min, max samples = 20, and keep_last_sample = true.
On top of that, put a derivative integration helper with a time window of 30 min. That provides a stable, reliable indication of whether the temperature is falling or rising.
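The first stage of this final chain (minimum over the last 15 min) can be sketched like this; keep_last_sample, which the real helper additionally needs for the 90-min gaps, is left out of the toy model, and the sample data is invented:

```python
def rolling_min(samples, window_s=900):
    """Minimum of all samples inside the trailing 15-min window; a
    stand-in for the statistics helper with a min characteristic."""
    out = []
    for t_now, _ in samples:
        out.append(min(v for t, v in samples if t_now - window_s <= t <= t_now))
    return out

# After 1 h of silence the value drops to 29.5 and then flaps back
# up: the minimum steps down once and ignores the upward flap, so
# the derivative stage above it sees a single clean change.
flap = [(0, 30.0), (3600, 29.5), (3660, 30.0), (3720, 29.5)]
print(rolling_min(flap))  # -> [30.0, 29.5, 29.5, 29.5]
```

This works for the cooling direction because a downward step is accepted immediately while upward flaps are filtered out; that asymmetry is exactly why the maximum variant failed here.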