I find it really hard to understand what’s happening in an automation and why it stopped working. Sometimes it’s easy, but if you have more conditions it’s hard to follow, and most of the trace information is useless … can we optimize that?
Example:
Trace Timeline:
What the hell is “Test Test if 2 conditions matches”?
What is the outcome? Which conditions are checked? What are the values? That’s stupid…
Related logbook entries:
Every time I have a look at this tab, it’s empty.
In short, the traces “could” be very helpful, but they aren’t. Maybe absolute HA pros can read them, or at least guess what’s happening, but they don’t show me in seconds what happened, which variables changed, or why a condition failed, and there are so many views to use that it’s confusing where to find the information I’m looking for.
I have used it since its inception to debug my automations as well as other people’s automations. I have found it to be an invaluable tool but, like all tools, it requires some practice to learn how to use it properly.
Complaints about its functionality won’t improve anything. If you have actionable suggestions to improve it, create a Feature Request.
I just want to steer you in the right direction—I don’t need help in understanding the issue myself. So, please avoid offering assistance.
Please understand that any function requiring ‘training,’ ‘teaching,’ or ‘explanation’ indicates it’s not yet fully optimized. This might have been acceptable during HA’s early development stages, but in today’s context, I believe the ‘traces’ log is not up to the modern standard.
This topic was created to bring attention to a function within HA that, in my view, needs significant refinement.
Think of it this way: you choose to drive a new car because you’re familiar with how other cars work. If the new car demanded an extensive tutorial before you could operate it, no one would want to buy or use it.
It would be great if this topic reaches the right people involved in HA’s development, or perhaps leads to the creation of a ‘WTH’ topic to drive necessary changes.
Please avoid complaining about product functionality in a place where developers are unlikely to read it or add it to their to-do list.
It would be great if you submitted a Feature Request containing details for improvement. The WTH campaign only occurs every second year or so. It may happen this October, it may not.
Traces has a steep learning curve, and I am still learning, but it has been very helpful to me.
Suggestion: Look at traces after running a working automation. Click on each step in the trace and you will see what the automation did at that step. When you see how a working automation looks, troubleshooting becomes a bit easier.
I agree with the OP of this thread. The traces are mostly useless, and I am a professional software developer; HA traces are not very usable even for me.
Has anyone tried reading a trace of a big automation? It doesn’t even fit on the screen, and it is not possible to scroll or move it:
And everything is heavily mouse-based, so whatever part you can’t see, you can’t click, and therefore you cannot debug. This clearly shows the authors of this “tool” do not use it beyond basic testing.
And I know I’m posting in an old thread, but I have shared this exact frustration for years, so rather than creating a new one, I have decided to “upvote” this one.
I agree that automation traces are not at all like a source code trace. But if you click on each dot, you see what the step executed was.
OTOH, I can’t conceive of an automation this complex, and most users probably won’t ever have more than a dozen bubbles in “Traces”. So, IMO, there’s no end-user pressure to fix it.
OTOH, all that trace data is somewhere in the byzantine Home Assistant database. It has to be for the current traces page. Whoever created the traces page could easily have created a printable text file listing each bubble and its status.
That is exactly the problem: you have to click it, but you can’t click what you can’t see. A non-scrollable UI that requires clicking and can be hidden behind other UI elements is the worst UX, no matter what scenario you think is reasonable to expect.
Even with the simplest automations this can happen. Different screen sizes, screen resolutions, or even mobile access can make this unusable.
You may think that “mobile” usage is not intended. Well, as a father who has to take care of his children, I have little time to spend in front of a computer outside of work, and my mobile (and it’s a big one, a Fold 5, 8″) coupled with some “dead times” here and there is one of the few opportunities I have to check what my wife is complaining about in a certain automation.
Other people may not have very advanced automations, but they may have a low-end computer with a small screen and a low resolution. Don’t they deserve a good debugging experience too?
Also, an automation system is only as powerful as your ability to debug it. If there is no way to get visibility into a complex trace, any potential the system has is absolutely worthless. Are you suggesting that only the most basic automations (which, on the other hand, are the ones that need the least debugging) are the only expected usage?
And, even if we discard all that, it is clear that people are using blueprints quite extensively, and almost all blueprints are this complex, so being able to see what an automation from a blueprint is doing and why it is failing is, I think, key.
In your situation, I’d take all those choose statements and turn them into scripts. I have extremely complicated automations and I don’t run into this issue, but I compartmentalize everything into separate “features”: essentially scripts that act as functions. Also, you could take some of your logic, move it to Jinja, and abuse the variable system. If you post your automation, I can give pointers specific to it. Otherwise, you’ll just have to wait until the automation changes go through, although I don’t believe those changes are focused on traces.
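To illustrate the “scripts as functions” idea, here is a minimal hypothetical sketch. All entity IDs, script names, and values are made up for illustration and are not from this thread; the pattern is simply moving one branch of a big choose block into its own script, so each run produces a short, separate trace:

```yaml
# Hypothetical example: one branch of a large choose block,
# extracted into a reusable script that acts like a function.
script:
  handle_cover_low_sun:
    alias: "Handle cover when sun is low"
    fields:
      cover:
        description: "Cover entity to control"
        example: cover.living_room
    sequence:
      # Bail out early if the sun is still high.
      - condition: numeric_state
        entity_id: sun.sun
        attribute: elevation
        below: 10
      - service: cover.set_cover_position
        target:
          entity_id: "{{ cover }}"
        data:
          position: 30

automation:
  - alias: "Sun protection"
    trigger:
      - platform: state
        entity_id: sun.sun
    action:
      # The automation's own trace now shows a single script call
      # instead of a deeply nested choose tree.
      - service: script.handle_cover_low_sun
        data:
          cover: cover.living_room
```

Each script call also gets its own trace under Settings, so the nesting that overflows the screen in one giant automation is split across several small, readable traces.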
This may be part of the problem. Automations are not software - they’re just rules. “When this happens, do that.”
I’ve never seen a good case for the “big” automation - in your screenshot there seem to be four separate sequences. Why not four automations (or scripts)? Much easier to maintain.
People keep focusing on the wrong things.
That screenshot is an example of how traces can become useless. The exact reason why doesn’t matter that much: it can be the automation’s size, the screen resolution, the screen size, or even assistive technology for people with disabilities.
The thing is that a trace can become unreadable, and that is a serious limitation.
This is what I do. Then I have AI read it and find the issue, then I go back and fix it. AI does that part very well, pinpointing any issue in two seconds flat. Fixing it? Well, it’s still working on that. That’s my job.
I round-tripped 22 unit tests last night like that and built three very complex scripts. Cake.
The trace gets complex because the YAML gets complex. It’s unavoidable… but the trace system and the traces are far from useless.
That is not my situation; it’s the situation of a lot of users using the cover control blueprint.
If it were me creating such a complex automation, I would definitely split it into parts. But this is an automation created from a blueprint, and as far as I know HA does not provide any other way of sharing “code”.
I could try to replicate it myself, but this is a complete solution that works perfectly fine and has been tested by a lot of people, so why not use it?
Finding an edge case that I need to debug should not be a reason to stop using it.
Tracing is a very important part of any system, and I think that providing a way to navigate each step, however it’s done (scroll, a next-step button, or whatever), is low-hanging fruit that can have a big impact on usability.
Which is a huge JSON blob. Why would I prefer that over a dedicated UI?
Why are people so against end users? The solution is to fix the visualization; everything else is a workaround until that gets fixed.