Sure, that works great for user accounts, which is why HA already does it. Look in .storage/auth_provider.homeassistant and you'll notice the passwords are not actually your passwords in plaintext.
This doesn't work when there is no user interaction (i.e. literally every other password, API token, etc. you have stored with HA). Do you enter your Google, Amazon, Philips Hue, Lutron, etc. credentials every time HA starts up before they begin working? No, because you have stored them with HA as part of your config somewhere, and it uses them every time. You can't hash user input and compare the hash if there is no user-input step.
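To make the distinction concrete, here's a minimal stdlib sketch of the hash-and-compare flow that only works for interactive logins (HA's own auth provider uses bcrypt rather than PBKDF2, so treat the details as illustrative):

```python
import hashlib
import hmac
import os

# Sketch of why hashing works for interactive logins only.

def store_password(password: str) -> tuple[bytes, bytes]:
    """At signup: keep only a salted one-way hash, never the password itself."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def verify_login(attempt: str, salt: bytes, digest: bytes) -> bool:
    """At login: re-hash the user's fresh input and compare the hashes."""
    candidate = hashlib.pbkdf2_hmac("sha256", attempt.encode(), salt, 600_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = store_password("hunter2")
print(verify_login("hunter2", salt, digest))  # True - the user re-typed it
# A Hue/Google/Lutron token has no "user re-types it" step, so HA must be
# able to recover the original value; a one-way hash is useless for that.
```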
Which goes back to my point of security theatre and a false sense of security. People who don’t know better will think that it’s secure, when all you’ve done is hit them with ROT13.
Exactly. I feel very uncomfortable with that and would like to be able to disable it inside Home Assistant, even if it were just a YAML option under cloud. I do not like the idea that a security issue at Nabu Casa could lead to direct access to a computer behind my firewall. Being a Nabu Casa customer should be a reward, not a fear.
I am no security expert and I don't claim to be, but surely if they are stored hashed, the algorithm can unhash them to pass along when logging into the services? Is that not providing at least some additional security over storing them unhashed, or is that still all "Security Theatre" @Tinkerer?
Well, you can't "un-hash" something. You could encode or encrypt it and then decode or decrypt it, but the code for doing that will be public in HA's code base and so no better than ROT13… yes, security theatre.
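A toy illustration of that point, using the very ROT13 being joked about: any scheme whose decode step ships in public source is equivalent.

```python
import codecs

# "Encrypting" with a method (or key) that ships in a public code base:
stored = codecs.encode("hunter2", "rot13")
print(stored)                          # uhagre2 - looks scrambled on disk
print(codecs.decode(stored, "rot13"))  # hunter2 - anyone who reads the
                                       # source reverses it just as easily
```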
OK, makes sense now. Unless the encryption code was not public? However, I don't know whether it's possible to keep parts of an open source project closed.
FWIW I am not concerned that this has been exploited, given it's been out there for six years, and I know there are far smarter people working on this than me. I was just trying to understand why the passwords in secrets.yaml were stored in plain text. I thought there would have been a way to store them hashed.
Technically yes, but that wouldn't help. Reverse engineering the binaries doing the encryption/decryption is not very hard for someone with the right skills. And you can bet that people would try to break it right away and publish the results. Security through obscurity is not usually a good thing.
The add-on logging most connections constantly in my config is the core Mosquitto add-on.
Since it used to log a lot of anonymous entries, I moved to an 'active' configuration, where
allow_anonymous false
(not a typo: Mosquitto's own config format uses no colon, unlike YAML)
was added as a single line in /share/mosquitto/auth.conf, and the add-on config was changed to enable it (sketched below).
Not even sure if that is still required to restrict the add-on to registered users, but in this context it feels good to share, as it is not a widespread config setting.
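For reference, the 'active' configuration refers (if I'm reading the core Mosquitto add-on docs right) to its customize option, which makes the broker read extra .conf files from a folder under /share; roughly:

```yaml
# Core Mosquitto add-on configuration (illustrative)
customize:
  active: true
  folder: mosquitto   # broker then loads /share/mosquitto/*.conf,
                      # including the auth.conf with `allow_anonymous false`
```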
Doesn't matter. HA must be able to decrypt it, so however it's doing that is public. Something is going to call into whatever black-box encryption service you choose to use during HA startup, and anyone will be able to copy and paste what HA does.
The only way it could actually work is if HA was missing a piece of information at startup, like a key or password. Something which you (the admin) had to enter manually before HA could begin anything. And of course that would mean you (the admin) would have to enter this key every time you restarted HA, and I mean before it could do anything: no integrations started, can't even spin up what is required for remote access, since SSL keys would definitely be encrypted. Perhaps a basic HTTP server could be spun up to at least allow entering it over LAN instead of physically hooking up a keyboard to the otherwise headless machine, but that's about it.
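A hypothetical sketch of that boot flow, purely to show the shape of it; nothing like this exists in HA today, and every name below is made up:

```python
import getpass
import hashlib

# Hypothetical: the admin types a passphrase at every restart; it never
# touches disk, so nothing on disk alone can decrypt the secrets store.

def derive_key(passphrase: str, salt: bytes) -> bytes:
    # Derive a symmetric key from the typed passphrase.
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 600_000)

# At startup, before any integration (or even SSL) is brought up:
salt = b"stored-beside-the-encrypted-secrets"  # the salt itself may be public
key = derive_key(getpass.getpass("Unlock HA secrets: "), salt)
# ...only now could an encrypted secrets store be decrypted with `key`,
# and only now could integrations and remote access start.
```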
Of course even if you did all of this, it's still kind of security theatre. Because HA allows custom components to run in the same Python process as itself. So if HA has access to credentials, so does everything you install from HACS, and anything put in the custom_components folder by someone with access to that directory and malicious intent (say a hacker, or any addon which maps /config).
Is there some system checker available to watch out for this? I guess people can post anything to HACS, or to any other repo the user can download from at their own risk (not holding HA responsible here, since it's obviously out of HA's control).
Other than community experience, can we 'test' the code for such malicious intent, or only for functionality (it needn't be malicious per se, of course)?
Could HA integrate such a scanning tool and report "this and that custom component/add-on uses xyz login/system access, please be careful"?
Reviewing the code of the components you use is about it. There's no scanner for malicious intent, and if there was, code can be pretty easily obfuscated to defeat a scanner looking for particular patterns.
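A trivial example of why pattern scanning is a losing game: both lines below do the same thing, but a scanner grepping for the string "password" only flags the first.

```python
# Both lines fetch the same secret; trivial obfuscation defeats the grep.
entry = {"host": "10.0.0.5", "password": "hunter2"}

flagged = entry["password"]       # a naive scanner catches this
missed = entry["pass" + "word"]   # identical behaviour, not caught
print(flagged == missed)          # True
```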
Login info isn't treated specially in HA. Look in .storage/core.config_entries: any information an integration needs to set itself up goes into its JSON entry under data. Doesn't matter whether it's an IP address, boolean, or password, it's all treated the same.
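You can see this for yourself; a small illustrative script (the path assumes a standard /config layout, and the exact JSON shape is from memory):

```python
import json
from pathlib import Path

# Dump which integrations store what in their config entries.
store = json.loads(Path("/config/.storage/core.config_entries").read_text())

for entry in store["data"]["entries"]:
    # `data` holds whatever the integration needed at setup time - hosts,
    # booleans, and passwords/tokens all sit side by side in plain JSON.
    print(entry["domain"], "->", sorted(entry.get("data", {}).keys()))
```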
Outside of .storage it's even more of the "wild west". Now there's HA config mixed in with integration config and integration credentials all over. And before you say secrets.yaml: that's really just sugar to make sharing your config via copy and paste easier. The secrets are substituted in as part of YAML processing, way before HA has any idea what integration(s) are loading.
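In other words, !secret is plain-text indirection, not encryption; an illustrative pair of files:

```yaml
# configuration.yaml - the !secret tag is swapped in while the YAML is parsed
mqtt:
  broker: 10.0.0.5
  password: !secret mqtt_password

# secrets.yaml - still plain text on disk, just a separate file
mqtt_password: hunter2
```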
With addons you have a bit more info. Addons by default do not have access to much in HA and Supervisor, pretty much just their own config. If they want any more access than that, they have to request it as part of their configuration. Many of the things they can request lower their security score, so users know to keep an eye out.
Although notably, requesting access to /config does not currently lower the security score. Personally I think it should, but feedback from users is that they really want all their configuration in /config. That's why, for example, the Node-RED and AppDaemon addons have access to /config. There's no reason those addons should have access to all the config and secrets you added to HA; instead the config of those addons could go in /share and access to /config be taken away. But users wanted the convenience of all config in one place more than the security of denying it access to HA secrets, so they were changed.
But either way, with addons at least the config of the addon acts as a permission system. So you can see everything it is requesting by simply looking at its config.yaml or config.json file and decide whether you feel it's appropriate or not.
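For instance, a hypothetical addon manifest; the map list is effectively its permission request, and the /config entry is the one worth noticing:

```yaml
# config.yaml of a hypothetical addon (illustrative excerpt)
name: Example Add-on
map:
  - share:rw   # read/write under /share - fine for the addon's own data
  - config     # read access to HA's /config, including secrets.yaml
```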
Thanks Mike, that's a very informative post. Appreciated.
The current state of affairs might (should…) be reason to reconsider the above in security HQ… even the fact that it would be a breaking change should not stop improving on matters of this importance, in a growing world of threats, with HA becoming more and more important as coordinator of all of those services.
One problem is that the HA integration and component system has never been designed with any security in mind. It's a purely technical approach that assumes 100% trust.
This is a problem, especially in a cloud-connected system that allows custom components which could act as trojan horses once control access is gained. Ideally a fully isolated approach to custom components should be used, with user-controllable permissions to various HA subsystems, akin to virtualization and sandboxing in browsers or mobile operating systems. Of course that would be an extremely large and probably prohibitive task. But realistically, in the absence of such a trust-management system, custom components are always going to be an easy entry door, even outside of zero-days. Just release a cool-looking component onto HACS.
HA needs a security redesign with a zero-trust architecture. Not that we trust nothing, but that there is no implicit trust: authentication, authorization, and auditing everywhere.
Personally, until we know otherwise, and this is not a suggestion that everyone should do the same, this is just what I personally intend doing:
I am doing absolutely nothing. I’m not changing any passwords, API tokens, nothing.
If the exploit has potentially existed for so long, and none of my linked services have been compromised in that time, then there is no reason for me to be alarmed.
In fact, the simple observation that one of my add-ons is the Google Drive Backup add-on, yet there has been no suspicious activity on my Google account, tells me everything I need to know. If someone had access to a token that gave them write access to a specific folder inside my Google Drive account, that is absolutely something they would start using fairly quickly. They wouldn't sit on it for 6 years because, apart from anything else, there would be no guarantee that the token would still be valid 6 years later.
For now I am treating this as a case of: there was the potential for unauthenticated access to the system, but it probably hasn't been exploited by anyone.
That's exactly the same approach I'd recommend, and it's what I'm doing.
None of the services whose keys were connected with HA's secrets and installations has been abused in any way, nor have I seen anything unusual with my installation (new backups, changes to disk usage, etc.).
I think there's too much noise arising from misunderstanding of how security disclosures should be read and what the terms inside them mean.
There is no sign on the Proxmox box that HA lives on, for example, that anyone has tried to SSH into it from HA, despite there being credentials for Proxmox on HA and the SSH add-on being installed. Obviously anyone who gained access to HA would want to see what other connected systems they had access to, and gaining access to the virtual server that HA lives on would be a top priority for any half-decent hacker. HA OS is too locked down in what you can install on it to make it a useful network penetrator or, more importantly, to install some sort of remote access trojan on it.
The vast majority of network device hacks are generally done so that the device can be made to join a botnet, where it can be instructed to do further things. Generally speaking, hackers aren't terribly interested in the data you actually have on the device; it's not worth enough. It's better for them, for example, to break into a server that has millions of Twitter usernames and tokens than to break into a handful of servers and get one token from each. So generally they are only interested in installing some code so they can remotely tell your machine to mine bitcoin, send email spam, or DDoS some target.