Disclosure: Supervisor security vulnerability

I also have this error. You are not alone.

What I know is that if you google “How to find subdomains” there are numerous results.

My only reason to comment was because you said

I believe the answer is that it’s not ‘far less likely’.

Time for a reality check, guys! This vulnerability has existed for six years. If it had been exploited, don’t you think numerous of your linked services would have been hacked already? Hackers aren’t going to sit on stolen credentials for six years.

13 Likes

That’s interesting, and I thank you for the input. I’ll look into how those subdomains are obtained, as that could change the notion of “far less likely”. If subdomains are somehow spidered from what users themselves disclose, that won’t change my mind (I’ll never disclose my subdomain!!!).
If there is a real list that contains even my own instance, then Nabu Casa is doing it wrong and that should be addressed.
My experience, however, tells me that my subdomain is not available anywhere on the net, as otherwise I would have seen at least one access attempt with wrong credentials. That never happened, so I’m inclined to consider the first hypothesis the most trustworthy.

@123
@henrik_sozzi
Certificate Authorities (CAs) like Let’s Encrypt have to send their logs to public servers for auditing (keyword: “Certificate Transparency”).
So just see crt.sh | nabu.casa

Hostnames leak the moment you get an SSL certificate for them.
(A solution would be to use wildcard certs.)

Those random hostnames are just security theater.
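If you want to see how trivial this is, here’s a minimal sketch that pulls every hostname the CT logs have recorded for a domain. It uses crt.sh’s unofficial JSON output; I’m assuming the endpoint keeps behaving as it does today (it’s slow, heavily rate-limited, and offers no stability guarantees):

```python
# Minimal sketch: list hostnames recorded in Certificate Transparency logs
# for a domain, via crt.sh's unofficial JSON output (assumption: the
# endpoint keeps working like this; it is slow and rate-limited).
import json
import urllib.request

def ct_logged_names(domain: str) -> set[str]:
    # %25 is a URL-encoded '%' wildcard, i.e. "any subdomain of <domain>"
    url = f"https://crt.sh/?q=%25.{domain}&output=json"
    with urllib.request.urlopen(url, timeout=60) as resp:
        entries = json.load(resp)
    names: set[str] = set()
    for entry in entries:
        # name_value can contain several SAN entries separated by newlines
        names.update(entry["name_value"].splitlines())
    return names

if __name__ == "__main__":
    for name in sorted(ct_logged_names("nabu.casa")):
        print(name)
```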

4 Likes

This is a great callout, @gubiq. Thank you.

This is effectively proof that there is a public directory of all Nabu Casa Cloud-proxied HA instances. (I confirmed this by locating one of my own HA instances in the list, an instance whose URL I am 100% confident hasn’t been crawled or leaked anywhere else.)

There are many APT groups (Nation-states and NGOs alike) who have been known to exploit zero days, establish backdoors, and then do nothing with them until they decide they “need” them. It’s the “hack everything” approach.

I’d wager that many HA users are in tech fields, and these APT groups may consider us valuable targets that could serve as pivots into more sensitive (e.g., corporate) networks.
(Of recent note: the LastPass breach was via a super-admin’s home Plex server. His personal devices were targeted by the attackers to facilitate an eventual pivot into LastPass.)

I say this next bit as someone who’s also been in cybersecurity for a decade+, with the majority of that time in a Detection & Response specialty. Diving through logs and performing post-compromise forensics is my day-to-day bread-and-butter. I’m also very familiar with responding to “vulncidents”, where no active exploitation has been observed, but the ease of RCE and the potential impact/risk associated is significant enough to warrant treating it like there may be abuse of the vuln in the wild:

Given the public directory @gubiq pointed out and this context that compromising personal servers is an emerging TTP, I think that it is critical we get some more technical details on the vulnerability, soon, from a “where to start doing forensics” perspective.
If it was exploitable via the Nabu Casa “tunnel”, we need to know how it may have been exploited, and how that may have appeared in supervisor logs.
It doesn’t matter if Nabu Casa Cloud tunnels are E2E encrypted if the adversary can be one of the endpoints and exploit the vuln. AIUI, this isn’t a MitM situation, so E2EE does nothing. (As mentioned above, client certificates could have been a layer of defense here, since they would effectively prevent the adversary from “being” one of the endpoints allowed to talk to HA.)
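To make the client-certificate point concrete, here’s a minimal sketch of that defensive layer: a TLS endpoint that refuses to even complete a handshake with a peer that can’t present a certificate signed by a private CA. This is purely illustrative of the concept, not how the NC tunnel is actually built, and the file names are placeholders:

```python
# Sketch: mutual TLS as a defensive layer. A peer without a certificate
# signed by our private CA cannot complete the handshake, so it can never
# "be" one of the endpoints. File names are hypothetical placeholders.
import socket
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain("server.pem", "server.key")  # the instance's own cert/key
ctx.load_verify_locations("clients_ca.pem")      # private CA for allowed clients
ctx.verify_mode = ssl.CERT_REQUIRED              # no client cert, no handshake

with socket.create_server(("0.0.0.0", 8443)) as srv:
    with ctx.wrap_socket(srv, server_side=True) as tls:
        while True:
            try:
                conn, addr = tls.accept()  # TLS handshake happens here
            except ssl.SSLError:
                continue  # unauthenticated peer rejected before any app traffic
            conn.sendall(b"hello\n")
            conn.close()
```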

This is the kind of thing where if anyone using HA can find evidence of exploitation on their instance, then we need to start changing how everyone is responding to this. (Not trying to stir up fear or panic here. Just trying to indicate that if we find evidence of even ONE case where this has been exploited in-the-wild, then this shouldn’t just be considered a vulnerability anymore; there’s potentially an attack campaign that’s gone undetected, and that campaign could have potentially just gone through the public directory of HA instances and compromised them all.)

We can’t even start digging into this and gaining confidence that there’s no exploitation ITW until we know where to start looking.

I very much appreciate the Nabu Casa team’s approach on patching this vuln and disclosing it like this. That’s a mark of a team that takes their product seriously and respects their users.
Given the no-auth RCE and open directory of vulnerable targets though, I think the very respectable thing to do is to provide the guidance users need to determine if their instances may have been exploited.

I’m very interested to see how this vuln impacts Nabu Casa strategy and security going forward.

14 Likes

Wow. That is an eye-opener. I also had the false impression that this URL and its long random subdomain were secret. I never used it beyond trying it for fun. I even remember turning it off, but when I looked now, it was active again.

I used to have a public domain, but in the past year I changed to not exposing anything and instead using my own WireGuard. Both the iOS and Android clients can be set up to use the tunnel or go direct based on the target IP, so you just set up the 192.168.x.x URLs in the HA app and you have only one hole in the firewall: the WireGuard one, which I trust more than this Python soup of multiple Docker containers that makes up HA.

I am really shocked that the xxx.ui.nabu.casa URLs are not secret. I would have wanted to know that.

3 Likes

Oh wow, thank you for pointing that out! In fact, for the similar service I’m managing for my customers, I’m using a wildcard certificate valid for all third-level domains. That way you can use the third-level domain as a “string” to identify the correct instance, and apply a fail2ban strategy when that identification is wrong, to weaken brute-force attempts (a sketch of the idea follows below).
That said, I’ve tried the crt.sh link you provided and my specific instance was not there. Then I also tried subdomains.whoisxmlapi.com and my instance wasn’t there either, but I’m sure the reason is just that the result set is not complete (10,000 records…); I just got lucky.
I believed my third-level domain was secret, but it wasn’t! That’s not good.
So, maybe someone from Nabu Casa can explain why they don’t use a wildcard cert there? I’m sure they evaluated it, as it’s even easier to set up.
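For what it’s worth, here’s a bare-bones sketch of the ban strategy I described above. The instance names and thresholds are made up for illustration; a real deployment would use fail2ban or the reverse proxy itself:

```python
# Sketch: use the requested subdomain as the instance identifier and
# temporarily ban source IPs that keep asking for hostnames that don't
# exist. All names and thresholds here are illustrative.
import time
from collections import defaultdict

KNOWN_INSTANCES = {"customer-a", "customer-b"}  # hypothetical instance labels
MAX_FAILURES = 5
BAN_SECONDS = 3600

failures: dict[str, list[float]] = defaultdict(list)
banned_until: dict[str, float] = {}

def allow_request(source_ip: str, host_header: str) -> bool:
    """Return True if the request may be forwarded to an instance."""
    now = time.time()
    if banned_until.get(source_ip, 0.0) > now:
        return False  # still banned
    if host_header.split(".")[0] in KNOWN_INSTANCES:
        return True
    # Unknown subdomain: count it like a failed credential attempt.
    recent = [t for t in failures[source_ip] if now - t < BAN_SECONDS]
    recent.append(now)
    failures[source_ip] = recent
    if len(recent) >= MAX_FAILURES:
        banned_until[source_ip] = now + BAN_SECONDS
    return False
```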

Note that when you disable Remote Control, an attacker who manages to get access to your Nabu Casa account can remotely enable Remote Control again. And it seems you cannot disable that. That security risk needs to be plugged.

See Nabu Casa - disable possibility to turn on "remote UI" remotly - #19 by KennethLavrsen

3 Likes

It does, after all, state:

In a compromised situation like the one we are currently experiencing, this should probably also be notified in that panel, and auto-turned-off?

The statement, btw, is a strong one, and users who are not tech/security savvy, like most of us, should be able to rely on it. Nothing less. If ‘HA Cloud provides a secure external connection’ is not the case, this being the foremost reason to subscribe, then I fear for the livelihood of NC…

1 Like

Sorry, I might be direct, but I definitely have no rude or false intentions.

I’m aware of what it is, and a paying customer of Shodan myself for many years.

Let me outline this a little, especially why I think, as I worded it, it “has not much to do with this”. That wording means I’m not saying it has nothing to do with this in general, just that it is less important.

Port scanning makes you find directly exposed services, which is cool for finding things like cameras, MQTT instances, and stuff like that. Sure, you’ll also find Home Assistant instances that have been exposed directly.

Correct. That is one “less obvious” vector fewer, which is good.

So, services like Shodan (and similar port scanners) are cool, but also used a lot, meaning they are less viable. Maybe in the case of a zero-day, but otherwise :man_shrugging:

There are many more things that can be done to find domains, and ways in which you expose your domain yourself. There is no “hiding” when you have a “public” domain, which is what my comment has been referring to and is based on.

Certificate Authorities (CAs) (like any Let’s Encrypt domain), DNS enumeration, or how about your browser, with tons of plugins with telemetry? Not even to go into: have you ever followed an external link from your HA instance? What do you think your browser sends as a Referer? Be sure, this list is not even exhaustive. (A tiny demo of the Referer point follows below.)
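As a toy demo of that Referer leak (nothing NC-specific): run this server anywhere and follow a link to it from any page. Even with the modern strict-origin-when-cross-origin browser default, the origin, and thus your hostname, is what shows up:

```python
# Toy server that prints the Referer header of every request it receives.
# Modern browsers usually trim cross-site Referers to the origin, but the
# origin alone already reveals the hostname.
from http.server import BaseHTTPRequestHandler, HTTPServer

class RefererLogger(BaseHTTPRequestHandler):
    def do_GET(self):
        print("Referer:", self.headers.get("Referer", "<none>"))
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8000), RefererLogger).serve_forever()
```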

Is port scanning a risk for being found? Sure. But the whole bad world using Shodan also makes Shodan less valuable as an attack vector: everybody uses it.

I see a lot of listings above of Nabu Casa and the CA logs. The same can be done for, e.g., DuckDNS, for which most people use Let’s Encrypt as well: https://crt.sh/?q=duckdns.org

All those factors make me believe port scanning is the lesser issue. It mostly obfuscates a bit… maybe… somewhat.

…/Frenck

1 Like

My answer from a purely personal perspective: I feel the same. I’m not doing an extra rotation myself at this point either.

From a general best practice perspective: one should rotate all credentials.

Not very secure, probably, but since we have this boolean, do people use it to toggle remote access based on home presence?

I mean, when at home, we don’t need the external connection in the first place?
How would that work out with cloud TTS… hmm. (A rough sketch of the idea follows below.)
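Here’s roughly what that toggle could look like from outside HA, calling the cloud.remote_connect / cloud.remote_disconnect services over the REST API. The URL and token are placeholders, and in practice you’d do this as a normal automation triggered by presence:

```python
# Sketch: toggle the Nabu Casa remote connection based on presence, using
# Home Assistant's REST API. HA_URL and HA_TOKEN are placeholders.
import urllib.request

HA_URL = "http://homeassistant.local:8123"  # assumption: default local URL
HA_TOKEN = "YOUR_LONG_LIVED_ACCESS_TOKEN"   # placeholder

def call_service(domain: str, service: str) -> None:
    req = urllib.request.Request(
        f"{HA_URL}/api/services/{domain}/{service}",
        data=b"{}",
        method="POST",
        headers={
            "Authorization": f"Bearer {HA_TOKEN}",
            "Content-Type": "application/json",
        },
    )
    urllib.request.urlopen(req, timeout=10)

everyone_home = True  # in a real automation this would come from presence state
call_service("cloud", "remote_disconnect" if everyone_home else "remote_connect")
```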

Frenck, you keep using this lingo. “Rotating” indicates circling between various existing username/password combinations.

While you probably mean: change your existing username/password (delete the existing one and create a new one)?

This blog was the first to pop up and seems very admin-centered: https://www.ibm.com/cloud/blog/how-to-enhance-security-by-rotating-service-credentials. Something like it should probably also be implemented in the NC connection panel?

Notify the user to adjust, and tell them what exactly.

But DuckDNS is on the Public Suffix List (https://publicsuffix.org/), so you have to search for the subdomain, like https://crt.sh/?Identity=sylvain-maison.duckdns.org&deduplicate=Y, to get results.
Searching for duckdns.org only shows results from the time before its inclusion in the list.
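If you want to check programmatically how the PSL splits a name, here’s a small sketch using the third-party tldextract package (pip install tldextract), which bundles the PSL including its private section:

```python
# Sketch: see how the Public Suffix List treats a DuckDNS name. Requires
# the third-party `tldextract` package (pip install tldextract).
import tldextract

# include_psl_private_domains=True makes it honor private PSL entries such
# as duckdns.org, the same way crt.sh's Identity search expects them
extract = tldextract.TLDExtract(include_psl_private_domains=True)

ext = extract("sylvain-maison.duckdns.org")
print(ext.suffix)             # 'duckdns.org' (a PSL private suffix)
print(ext.registered_domain)  # 'sylvain-maison.duckdns.org'
```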

Perhaps @frenck should get ui.nabu.casa on the list too.

EDIT: Sorry, I didn’t see that it is in fact on the publicsuffix list and no new hostnames have been logged in the last 4 years. Thanks @CentralCommand for correcting me. :kissing_heart:

1 Like

That is fairly easy to answer. Each Home Assistant instance using NC creates and holds its own certificates, locally. This means all traffic is end-to-end encrypted, as your instance is the only one that has the secret parts of the certificate.

This is the reason why NC markets it as a “secure remote connection”. It provides an SSL-encrypted connection to your instance that is end-to-end encrypted. NC cannot view the traffic either. Even if NC ever had a security incident, nobody could get in between your traffic.

This is not possible with wildcard certificates.

Above all, it just makes things easier, as no port forwarding or router fiddling is needed. Some have been shouting: Cloudflare! Sure, that is an option and also a great service (I’m a paying customer myself for many other things as well). However, Cloudflare is not end-to-end encrypted when used as a proxied service. Is that bad? No? It depends; it is all about choices.
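To illustrate the “only your instance holds the secret parts” idea, here’s a conceptual sketch: a key and certificate generated and stored locally, so the secret material never leaves the machine. In reality the instance obtains a browser-trusted certificate rather than a self-signed one; the hostname and file names below are placeholders. It uses the third-party `cryptography` package:

```python
# Conceptual sketch: generate a private key and certificate locally, so only
# this machine ever holds the secret half. Hostname/file names are placeholders.
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec

key = ec.generate_private_key(ec.SECP256R1())  # secret part, never leaves disk
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "example.ui.nabu.casa")])
cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.datetime.utcnow())
    .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=90))
    .sign(key, hashes.SHA256())
)
with open("instance.key", "wb") as f:
    f.write(key.private_bytes(
        serialization.Encoding.PEM,
        serialization.PrivateFormat.PKCS8,
        serialization.NoEncryption(),
    ))
with open("instance.pem", "wb") as f:
    f.write(cert.public_bytes(serialization.Encoding.PEM))
```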

3 Likes

Right, I should have used a better example to make my point, hehe (should have drunk my morning coffee first). The gist of my response, however, remains the same: there is no such thing as a non-public public domain. You may try to hide… but you have to assume they’ll find you :slight_smile:

2 Likes

So is the fix for this Supervisor vulnerability the reason why my HA REST API sensors no longer work?

https://community.home-assistant.io/t/2023-3-dialogs/541999/311

We added hardening, but that should have just kept working, I guess. Maybe raise an issue on GitHub so we can take a look.

We also found an issue that blocked documentation & changelog requests, preventing them from being viewed in the UI. A fix for that is coming in the next patch release. Your issue is most likely similar.

Should I raise the issue in the Core or Supervisor repository?