I think it’s because most sensors report their state periodically and in discrete steps. There is no guarantee that when a sensor’s state goes from, say, 5 to 10 it will ever report a state of 6 or 7.5 or 8.657.
But that is still not convincing enough for me to think it makes any sense that the small (I am guessing) extra bit of code needed for ‘equal to’ should have been omitted.
(Please don’t anyone trot out the usual remark made when someone suggests a change is small or easy - I am no Python programmer and even less of a GitHub user.)
And with the ‘recent’ addition of helpers being allowed in the ‘above’ or ‘below’ fields, the omission makes even less sense.
In a software context it can be a can of worms. If a sensor’s value is truly represented as an integer, an equality comparison may be reliable. But sensor values in general are floating point, and comparing floating-point numbers for equality in a program is hit or miss.
What you see as 5 deg C may internally be 5.0000001 or 4.9999999, and then the comparison is not equal. You want to avoid comparing floating-point values for equality in computer programs; you only compare values for equality when they are typed as integers. Exposing type distinctions in a UI aimed at normal people is a bit geeky to explain and understand. The alternative is to define an interval around a value within which it counts as equal, but then who defines that tolerance? Is it +/- 0.1? Or 0.01? It depends on what you measure. A real can of worms.
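To illustrate the point, here is a minimal sketch in plain Python (not Home Assistant code, and the values are made up purely for illustration): numbers that look identical on screen can still fail an exact equality test once they have been through floating-point arithmetic.

```python
# Plain Python sketch of why exact floating-point equality is fragile.
reported = 0.1 + 0.2       # what a measurement pipeline might compute
target = 0.3               # what the user types into an 'equal to' field

print(reported)            # 0.30000000000000004
print(reported == target)  # False, even though both display as 0.3

# Accumulating small steps drifts the same way:
total = sum([0.1] * 10)
print(total == 1.0)        # False (total is 0.9999999999999999)
```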
Again, very fair points (and I think we might have, erm… ‘clashed’ before over the use of zero a long time ago).
The thing is, surely ‘normal people’ would expect to be able to compare with ‘equals’?
And I definitely think that, now that helpers can be used, there is a very good argument for including it.
I am not sure whether the ‘can of worms’ defence is admissible. Surely it is for developers (of which I used to be one) to provide solutions1 for users (of which I used to be and still am one!)?
Anyway, it’s really not a show stopping issue, I just thought I’d raise it.
And I’m glad I did because both the raised objections/explanations were good ones.
1 Yes, I accept that, based on your point, this could result in it not being a small bit of code.
It is the normal people getting grumpy when their thermometer says 70 deg F, but the actual temperature measured by the thermometer is in Celsius and jumps in 0.5 deg increments, and the user complains that the trigger does not work because 70 deg F converted to Celsius in floating point ends up as 21.1111111111 and that is not the same as the measured 21.0.
It just sucks making automations based on exact floating-point comparisons, because in real life measured analog values are rarely 100% equal. And if you ask the end user to add an accuracy interval to compensate, then you end up having to supply two numbers: the target and a tolerance. And how do you know how much tolerance to allow? It is easier for the end user to understand below and above.
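As a rough sketch of what a tolerance-based ‘equal to’ would involve (plain Python with assumed example values, not anything Home Assistant actually exposes):

```python
import math

# Hypothetical tolerance-based 'equal to' check: the user would have to
# supply both a target and a tolerance, and the right tolerance is guesswork.
measured_c = 21.0                      # value the sensor actually reports
target_c = (70.0 - 32.0) * 5.0 / 9.0   # user's 70 deg F target -> 21.111... deg C

print(measured_c == target_c)                            # False: exact match never fires
print(math.isclose(measured_c, target_c, abs_tol=0.01))  # False: tolerance too tight
print(math.isclose(measured_c, target_c, abs_tol=0.5))   # True: but who decided 0.5 is right?
```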
If you really need to compare for equality, it is better to round the measured value off to an integer and turn it into a string, which can then be compared exactly against another string.
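Something along these lines (again just a plain Python sketch of the idea, with invented values):

```python
# Sketch of the 'round, then compare as text' approach described above.
measured = 20.999999999    # raw floating-point reading (made up)
target_text = "21"         # what the user typed

# Round to the nearest whole number, then compare the string forms exactly.
if str(round(measured)) == target_text:
    print("trigger fires")  # fires: '21' == '21'
```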
There is nowhere in the HA user interface where the end user has to think about whether values are integer or floating point, and let us keep it that way. They are already confused about numbers vs text strings. It is too geeky.
It is easy to compare two numbers in the code. It is hard to answer 1000s of questions from confused users afterwards.