Improve Privacy, Stop using hardcoded DNS

Nothing, that’s part of the problem… Periodically it ‘checks’ the hardcoded Cloudflare fallback DNS. This is blocked on my network, so it tries again, and again, and again, until it is sending 5 requests per second. This slows everything down in the HA UI, and the only way to resolve the issue is to reboot the system (and no, ‘ha dns restart’ does not work).

It’s a classic rookie mistake: ‘Oh, there seems to be a problem contacting a service on the network, so I’ll try contacting it more often, and that must fix the problem.’ Genius…


You should probably offer your help to solve the rookie mistake. :wink:

Don’t think they would accept my pull request. The CoreDNS devs are of the opinion that the current behaviour is correct. I’ve tried a couple of times to get this changed, but they just shut down my bug reports/requests, stating this is the expected behaviour. That’s why I started this thread…


Ok, understood now. :slight_smile:

Huh. Any links to your issues?


This isn’t a solution to the DNS problem per se, but have you considered dropping HassOS/Supervisor and just running a Core install? When I look at the readme for the CoreDNS plugin, it says

Home Assistant CoreDNS plugin

This is for HassOS and is the login shell for supervised systems.

So do a Core install in a Python venv; that should eliminate your issue. I only offer this suggestion because going any official route doesn’t seem to be getting much traction.

Thanks for the suggestion. I’m running Home Assistant OS on a Raspberry Pi for convenience, rather than messing around with a separate OS install with HA on top… this is ‘supposed’ to offer the best performance.

This just reinforces my view that most people don’t give a crap about privacy; they just want stuff to work.


There is privacy and privacy. :slight_smile:

Cloudflare is known to have a better privacy policy than most of its competitors. Plus, I really don’t care that Cloudflare knows I’m using Home Assistant, but I guess that’s personal.
I use a local DNS server. I care about privacy for the things that matter from my point of view, but again, that’s very personal.

I don’t mind there being a fallback but I do think it should be configurable.

Also I am a bit bugged by this particular line in the corefile being used:

It uses my DHCP-provided DNS servers first, which is as expected. And having a backup plan in case of REFUSED or SERVFAIL makes sense, since a lot of people using Home Assistant run local DNS servers, so preparing for downtime seems sensible. But when the first-choice DNS server successfully returns a response that says “there is no domain here”, why does that send it to the fallback?
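For context, a trimmed sketch of what that corefile looks like, reconstructed from memory of the supervisor dns plugin (exact directives, ports and options may differ); the NXDOMAIN line is the contentious one:

```
.:53 {
    # DHCP_SERVERS is a placeholder for the DHCP-provided resolvers
    forward . DHCP_SERVERS {
        policy sequential
    }
    # Fallback to the hardcoded upstream on these rcodes; note that
    # NXDOMAIN ("no such domain") is a definitive answer, yet it is
    # still retried against the fallback.
    fallback REFUSED . dns://127.0.0.1:5553
    fallback SERVFAIL . dns://127.0.0.1:5553
    fallback NXDOMAIN . dns://127.0.0.1:5553
}

.:5553 {
    # Hardcoded DNS-over-TLS fallback to Cloudflare
    forward . tls://1.1.1.1 tls://1.0.0.1 {
        tls_servername cloudflare-dns.com
    }
}
```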


REFUSED should be honoured as well: that response (usually) means the request was blocked for policy reasons, and it should not be passed to a ‘fallback’.

Also, once a fallback request has been initiated, it never stops trying to resolve that query, even when the main DNS is back online, and the more these requests get blocked, the more it sends.

So, I have AdGuard on HA which is doing DNS blocking running on a platform that is doing its best to circumvent DNS blocking…
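The distinction the posts above are drawing can be stated as a tiny predicate. This is illustrative Python only, not CoreDNS code: fall back only when the primary resolver itself failed, never when it gave a definitive answer, even a negative one.

```python
# Which responses from the primary resolver should trigger a fallback?
DEFINITIVE = {"NOERROR", "NXDOMAIN", "REFUSED"}  # the primary answered, even if negatively
FAILED = {"SERVFAIL", "TIMEOUT"}                 # SERVFAIL rcode, or no reply at all (timeout)

def should_fall_back(rcode: str) -> bool:
    """Fall back only when the primary failed to produce an answer,
    never when it returned a definitive result such as NXDOMAIN or REFUSED."""
    return rcode in FAILED

print(should_fall_back("SERVFAIL"))   # True: the primary itself is broken
print(should_fall_back("NXDOMAIN"))   # False: "no such domain" is a real answer
print(should_fall_back("REFUSED"))    # False: blocked by policy, honour it
```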


Didn’t know that; I thought REFUSED related to network/auth issues, e.g. a server configured to only answer requests from specific IP addresses, with HA’s not being one of them. I agree then, that shouldn’t fall back either.

Tried again to raise this issue and it just gets closed by the devs. One of my biggest pet peeves with HA devs is the way they close tickets with:
A) no explanation, or
B) “working on it” but not resolved; it shouldn’t be set to closed until it’s resolved.

HA DNS issues when using local resolver / unwanted failover to cloudflare dns · Issue #2966 · home-assistant/supervisor (github.com)

I hope enough other people file bug reports and keep referencing the other closed bugs and this thread.


What’s worse, CoreDNS keeps breaking.
I have a secured network where all DNS/DoT/… traffic is blocked.
After a few hours CoreDNS ‘forgets’ the DNS server it was assigned
and just stops resolving completely, as it can’t reach Cloudflare.

I even tried the other way round to finally fix this:
set up my own DoT forwarding provider, plus redirect rules
(nginx streaming DNS over SSL).
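For reference, the nginx side of that trick is a `stream` block terminating TLS on the DNS-over-TLS port and proxying to a plain-DNS upstream. A minimal sketch, with placeholder addresses and certificate paths:

```nginx
stream {
    upstream local_dns {
        server 192.168.1.2:53;   # plain-DNS upstream (placeholder address)
    }
    server {
        listen 853 ssl;          # standard DNS-over-TLS port
        ssl_certificate     /etc/ssl/certs/dot.example.pem;    # placeholder
        ssl_certificate_key /etc/ssl/private/dot.example.key;  # placeholder
        proxy_pass local_dns;
    }
}
```

This fails for exactly the reason shown below: the client validates the certificate against the hardcoded name cloudflare-dns.com, which your own certificate can never match.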

Now HA still stops resolving, because:

x509: certificate is valid for *.xxx.pw, xxx.pw, not cloudflare-dns.com

So that’s not gonna work either… (except if someone might have the Cloudflare cert :rofl: )

I fell into this hassio core-dns rabbit hole a few days ago. I never had name resolution problems with HA; TBH I didn’t even know HA was using its own DNS until then. In my case this seems to have been triggered by switching from a “dnsmasq classic” name server to Pi-hole for local name resolution. Since then, ha-dns regularly fails to do proper name resolution for local hosts after some time and gets stuck on the hardcoded Cloudflare DNS until the DNS container is restarted.

I don’t know why ha-dns is so finicky about this change. Maybe Pi-hole answers DNS requests a bit more slowly…? Or just because? I don’t know.

Unreliable name resolution in a well-curated local network with proper host names and static DHCP leases is a huge PITA. Failing name resolution breaks virtually everything. In the worst case it leads to me sitting next to the broken DNS server with a laptop connected directly via ethernet cable, debugging it, because even WiFi access points stop working properly after a couple of minutes when name resolution is down.

Because of that, I deliberately run two name servers with failover functionality in my local network to avoid a SPOF for name resolution. HA breaks this redundancy by insisting on using its own nameserver. If that breaks, a lot of things in HA go sideways, even HA core functionality like writing values to the recorder and InfluxDB databases.
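On the client side, that kind of redundancy is often just two resolver entries plus tight timeouts, e.g. (placeholder addresses):

```
# /etc/resolv.conf with two local resolvers (placeholder addresses)
nameserver 192.168.1.2
nameserver 192.168.1.3
options timeout:1 attempts:2   # fail over quickly to the second server
```

It is exactly this kind of setting that HA’s own nameserver bypasses.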


This came up on Reddit today.

Quite surprised by this; bad practice for sure. I see devs doing this type of thing more and more: Firefox is trying to bypass DNS using DoH, and systemd had a similar incident. It’s the same attitude I read here, from devs who have never managed and/or tried to secure a network. It would be great if HA respected the local network settings.


That’s my reddit post, thanks for visibility.

In Firefox’s case, they allow me to disable DoH. I’d like the same for HA if possible.
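For anyone looking for that switch: as I understand Firefox’s documentation, DoH is controlled by the `network.trr.mode` preference in about:config (0 = default off, 2 = DoH with fallback to normal DNS, 3 = DoH only, 5 = explicitly disabled by the user). In a user.js it looks like:

```
// Explicitly opt out of DNS-over-HTTPS (mode 5 = off by user choice)
user_pref("network.trr.mode", 5);
```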


Cloudflare is a perfectly reasonable DNS provider but forcing people to use it is not.

In the unlikely event Cloudflare goes down, you’re screwed (probably not, because the cache would last until they came back, but still). It does happen sometimes.


The main issue for me is that you should never override the DHCP-provided DNS by default. This breaks things and is a security issue. With Firefox wanting DoH enabled by default, this would break resolution in split-DNS environments. So, for instance, if a user connects a device running Firefox with default DoH enabled to a network I manage, they cannot access internal webmail, chat, etc.
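Split DNS here means internal names only resolve through the internal resolver, so any client that bypasses it simply gets NXDOMAIN for them. A minimal dnsmasq illustration, with a placeholder zone and addresses:

```
# dnsmasq: answer the internal zone locally, forward everything else
server=/internal.example.lan/192.168.10.5
server=9.9.9.9
```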

Also, regarding Cloudflare: Americans are somewhat protected from spying at the endpoint; the rest of the world is not.


Just don’t try to mention this to a dev or project leader, or you’ll get muzzled.
