Home Assistant Supervisor/CoreDNS uses hardcoded Cloudflare DNS-over-TLS servers.
Supposedly this is used as a ‘fallback’ in case the main DNS fails (clue: that’s what a user-assigned secondary DNS server is for), yet HA constantly sends requests to these servers.
This is a breach of security, trust and privacy.
This either needs to be removed completely or needs an option to enable/disable this ‘feature’.
I block all external DNS requests that do not come from my own DNS server; the result is a request from HA to Cloudflare DNS more than once a second, 24/7. It’s ridiculous.
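For anyone wanting to do the same, a rough nftables sketch of the blocking rule on the router (the resolver address 192.168.1.2 is a placeholder for your own DNS server):

```
# Only the local resolver may do outbound DNS/DoT; everything else on
# ports 53/853 gets dropped. 192.168.1.2 is a placeholder address.
table inet dnswall {
  chain forward {
    type filter hook forward priority 0; policy accept;
    ip saddr 192.168.1.2 udp dport 53 accept
    ip saddr 192.168.1.2 tcp dport { 53, 853 } accept
    udp dport 53 drop
    tcp dport { 53, 853 } drop
  }
}
```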
Nothing, that’s part of the problem… Periodically it will ‘check’ the Cloudflare fallback DNS. This is blocked on my network, so it tries again, and again, and again, until it is sending 5 requests per second. This causes everything in the HA UI to slow down, and the only way to resolve the issue is to reboot the system (and no, ‘ha dns restart’ does not work).
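You can watch the storm for yourself with something like this on the HA host’s uplink (the interface name is an assumption, adjust for your setup):

```
# show the DNS-over-TLS attempts to Cloudflare; eth0 is a placeholder
tcpdump -ni eth0 'tcp port 853 and (host 1.1.1.1 or host 1.0.0.1)'
```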
It’s a classic rookie mistake: ‘Oh, there seems to be a problem contacting a service on the network, so I’ll try to contact it more often, and that must fix the problem.’ Genius…
Don’t think they would accept my pull request. The CoreDNS devs are of the opinion that the current behaviour is correct. I’ve tried a couple of times to get this changed, but they just shut down my bug reports/requests, stating this is the expected behaviour. That’s why I started this thread…
This isn’t a solution to the DNS problem per se, but have you considered dropping HassOS/Supervisor and just running a Core install? When I look at the README for the CoreDNS plugin, it says:
Home Assistant CoreDNS plugin
This is for HassOS and is the login shell for supervised systems.
So do a Core install in a Python venv; that should eliminate your issue. I only offer this suggestion because going any official route doesn’t seem to be getting much traction.
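Roughly like this (the path is just an example; check the official install docs for your distro):

```
# minimal Core-in-venv sketch; /srv/homeassistant is an arbitrary path
python3 -m venv /srv/homeassistant
source /srv/homeassistant/bin/activate
pip install --upgrade pip wheel
pip install homeassistant
hass   # first run creates the configuration directory
```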
Thanks for the suggestion. I’m running a Raspberry Pi with Home Assistant OS for convenience, rather than messing around with a separate OS install with HA over the top… this is ‘supposed’ to offer the best performance.
This just reinforces my view that most people don’t give a crap about privacy; they just want stuff to work.
Cloudflare is known to have a better privacy policy than most of its competitors. Plus, I really don’t care that Cloudflare knows that I’m using Home Assistant, but I guess that’s personal.
I use a local DNS server. I care about privacy for things that matter from my point of view, but again, that’s very personal.
I don’t mind there being a fallback, but I do think it should be configurable.
Also, I am a bit bugged by this particular line in the Corefile being used:
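From memory it is something like the following (quoting the Supervisor DNS plugin’s Corefile from memory, so the exact argument list may differ):

```
fallback REFUSED,SERVFAIL,NXDOMAIN . dns://127.0.0.1:5553
```

where 127.0.0.1:5553 appears to be the local forwarder that points at tls://1.1.1.1 and tls://1.0.0.1.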
It uses my DHCP-provided DNS servers first, which is as expected. And having a backup plan in case of REFUSED or SERVFAIL makes sense, since a lot of people using Home Assistant use local DNS servers, so preparing for downtime seems sensible. But when the first-choice DNS server successfully returns a response that says “there is no domain here” (NXDOMAIN), why does that send it to the fallback?
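You can see the distinction by asking the primary resolver directly and looking at the status field in the reply header; NXDOMAIN is an authoritative “no such name”, not a failure (the resolver address and query name below are placeholders):

```
# inspect the rcode the primary resolver actually returns
dig @192.168.1.2 no-such-host.example.com | grep status
```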
REFUSED should be honoured as well; this response (usually) means the request is blocked for policy reasons, and it should not be passed to a ‘fallback’.
Also, once a fallback request has been initiated, it never stops trying to resolve that query, even when the main DNS is back online, and the more these requests get blocked, the more it sends.
So I have AdGuard on HA doing DNS blocking, running on a platform that is doing its best to circumvent DNS blocking…
Didn’t know that; I thought REFUSED was related to network/auth issues, like if the server was configured to only answer requests from specific IP addresses and HA’s wasn’t one of them. I agree then, that shouldn’t fall back either.
Tried again to raise this issue and it just gets closed by the devs. One of my biggest pet peeves with the HA devs is the way they close tickets with:
A) no explanation
B) ‘working on it’ but not resolved; it shouldn’t be set to closed until it’s resolved.
What’s worse, CoreDNS keeps breaking.
I have a secured network where all DNS/DoT/… is blocked. After a few hours CoreDNS ‘forgets’ the DNS server assigned and just stops resolving completely, as it can’t reach Cloudflare.
I even tried the other way round to finally fix this: set up my own DoT forwarding provider, plus redirect rules (nginx streaming DNS over TLS).
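Roughly, the idea was something like this (certificate paths and the resolver address are placeholders; note that CoreDNS verifies the cloudflare-dns.com server name, so the faked endpoint also needs a certificate the client will accept):

```
# nginx stream block: terminate 'DNS over TLS' locally and hand the
# queries to the LAN resolver. Combine with a router rule that
# redirects 1.1.1.1:853 / 1.0.0.1:853 to this box.
stream {
    server {
        listen 853 ssl;
        ssl_certificate     /etc/ssl/dot/fullchain.pem;  # placeholder path
        ssl_certificate_key /etc/ssl/dot/privkey.pem;    # placeholder path
        proxy_pass 192.168.1.2:53;                       # local DNS server
    }
}
```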
I fell into this hassio core-dns rabbit hole a few days ago. I never had name resolution problems with HA. TBH, I didn’t even know HA was using its own DNS until then. In my case this seems to have been triggered by switching from a “dnsmasq classic” name server to Pi-hole for local name resolution. Since then, ha-dns regularly fails to do proper name resolution for local hosts after some time and is stuck with the hardcoded Cloudflare DNS until the DNS container is restarted.
I don’t know why ha-dns is so finicky about this change. Maybe Pi-hole answers DNS requests a bit more slowly…? Or just because? I don’t know.
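For reference, restarting just the DNS plugin goes like this (the container name is what current Supervisor builds use, so it may differ on other versions):

```
# via the Supervisor CLI
ha dns restart
# or directly, if the CLI route doesn't help
docker restart hassio_dns
```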
Unreliable name resolution in a local network that is well curated with proper host names and static DHCP leases is a huge PITA. Failing name resolution virtually breaks everything. In the worst case it leads to me sitting next to the broken DNS server with a laptop connected directly via Ethernet cable, debugging it, because even Wi-Fi access points stop working properly after a couple of minutes when name resolution is down.
Because of that, I’m deliberately running two name servers with failover functionality in my local network to avoid a SPOF for name resolution. HA breaks this redundancy by insisting on using its own nameserver. If that breaks, a lot of things in HA go sideways, even core functionality like writing values to the recorder and InfluxDB databases.
I’m quite surprised about this; it’s bad practice for sure, and I see devs doing this type of thing more and more. Firefox is trying to bypass DNS using DoH; systemd had a similar incident with its hardcoded fallback DNS. It’s the same attitude I read here, from devs who have never managed and/or tried to secure a network. It would be great if HA respected the local network settings.