If I may share my dev-biased (I’m the dev of the HASS Configurator) view on this topic:
First and foremost, the user is responsible for what happens, at least with the type of software we are talking about here. Home Assistant (and my Configurator) is free of charge and created primarily in the spare time of devs willing to contribute to open-source software. Regardless of what licenses state, my personal opinion is that I, as a developer, cannot be held responsible for damage caused by my software (I'm not getting paid to produce a safe piece of software), especially in cases where the software was set up in a way that wasn't even intended. A guarantee of safety could only be expected if the software (and corresponding hardware) had been set up following strict guidelines and reviewed by whoever is the maintainer.
As an example: Philips would be to blame if Hue-related data stored in their cloud got compromised. But if I expose my Hue bridge to the internet (which is not intended by design) and it gets hacked, Philips doesn't have anything to do with that.
That being said, Home Assistant has its setup guides, but also clear warnings that it's not always the best idea to expose it to the web. If users decide they need to do so anyway, then that has been their choice. And since no one reviews whether a user did the setup properly, the devs shouldn't be to blame for it. For something to be certified as safe, it has to be reviewed; any change in the system would void this certification, requiring yet another review. If I as a vendor ship software, then I will only guarantee security as long as the software is configured to the standards I dictate. If the user changes something, it's none of my concern. Only trained professionals would be allowed to make changes while retaining certification.
Now to the point of security while developing:
Of course I, as a developer, keep the most obvious security problems in mind while developing software. But bugs don't occur on purpose. In fact, developers spend a lot of time looking for them. Some are found, some are not. And some are relevant, some are not.
Taking my Configurator as an example, it was originally intended to be used only locally. Who would have thought people would want to change their setup while not at home, and make sure everything still works? I personally would never restart my HASS when I'm not at home, because in the worst case my self-made alarm system would be down and I might not even know about it.
But: quite a few people still say “but I want to use it from the outside!”. I react with “but that’s such a bad idea!”. And then they say “pfff, I’ll do it anyways!”. I can’t prohibit users from placing a reverse proxy in front of my app and thereby exposing my software to the web, which it originally wasn’t designed for.
Seeing this process, my thoughts were: “ok, if they do it, I'll at least try to keep them from shooting themselves in the foot”. Hence I came up with some solutions that improved security, which I hadn't planned to do in the beginning. But since users run my software in all kinds of setups I don't have control over, I chose to add security the best way I can. Then some time later there's another user with a new, totally different kind of setup. How should I know about that upfront? It's almost like some sort of cat-and-mouse game where users always find some way around the security features I implement. And it would be soooooo simple if they would just stick to not exposing at all. But they do, so I do the best I can. But I don't take blame for damage done by a setup I haven't reviewed for security!
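One concrete safeguard of the kind described above is an IP allow-list: even if someone puts the app behind a reverse proxy, requests coming from outside known networks can still be rejected. Here is a minimal sketch in Python of that idea; the network ranges and the function name are my own illustration, not the Configurator's actual implementation:

```python
import ipaddress

# Hypothetical allow-list, just a sketch of the technique, not real
# Configurator code. Only requests from these networks get through.
ALLOWED_NETWORKS = [
    ipaddress.ip_network("127.0.0.1/32"),    # localhost only
    ipaddress.ip_network("192.168.0.0/16"),  # typical home LAN
]

def is_allowed(client_ip: str) -> bool:
    """Return True if the requesting IP falls inside an allowed network."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in ALLOWED_NETWORKS)

# A request from the LAN passes, one from a public address is rejected.
print(is_allowed("192.168.1.42"))  # True
print(is_allowed("203.0.113.7"))   # False
```

A check like this doesn't make exposing the app a good idea, but it narrows the damage when users do it anyway: the web server would run this against the client address before serving anything.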
I did dramatize a bit with the part about users finding ways around my security features. No one ever did. But reading the feedback, I could estimate what type of problems they were going to run into. They themselves didn't even know that their intentions were bad in terms of security. So I acted proactively. Which doesn't help if there's an attack vector I haven't thought about yet.