Improve Privacy, Stop using hardcoded DNS

REFUSED should be honoured as well; this response (usually) means the request was blocked for policy reasons, and it should not be passed on to a ‘fallback’.

Also, once a fallback request has been initiated, it never stops trying to resolve that query, even when the main DNS is online, and the more these requests get blocked, the more it sends.
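For illustration (hypothetical blocked name and resolver address), this is roughly what a policy block looks like with dig, and it is clearly distinguishable from an outage:

$ dig @192.168.1.3 ads.example.com

;; ->>HEADER<<- opcode: QUERY, status: REFUSED, id: 48213
;; flags: qr rd; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0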

So I have AdGuard on HA doing DNS blocking, running on a platform that is doing its best to circumvent DNS blocking…

6 Likes

Didn’t know that; I thought that was related to network/auth issues, like if the server was configured to only listen to requests from specific IP addresses and HA’s wasn’t one of them. I agree then, that shouldn’t fall back either.

Tried again to raise this issue and it just gets closed by the devs. One of my biggest pet peeves with the HA devs is the way they close tickets with:
A) no explanation, or
B) “working on it” but not resolved; it shouldn’t be set to closed until it’s resolved.

HA DNS issues when using local resolver / unwanted failover to cloudflare dns · Issue #2966 · home-assistant/supervisor (github.com)

I hope enough other people file bug reports and keep referencing the other closed bugs and this thread.

8 Likes

What’s worse, CoreDNS keeps breaking.
I have a secured network where all DNS/DoT/… is blocked.
After a few hours CoreDNS ‘forgets’ the DNS server assigned
and just stops resolving completely, as it can’t reach Cloudflare.

I even tried the other way around to finally fix this:
set up my own DoT forwarding provider, plus redirect rules
(nginx streaming DNS over SSL).

Now HA still stops resolving, because:

x509: certificate is valid for *.xxx.pw, xxx.pw, not cloudflare-dns.com

So that’s not going to work either… (unless someone happens to have the Cloudflare cert :rofl: )

I fell into this hassio CoreDNS rabbit hole a few days ago. I never had name resolution problems with HA before. TBH I didn’t even know HA was using its own DNS until then. In my case this seems to have been triggered by switching from a classic dnsmasq name server to Pi-hole for local name resolution. Since then, ha-dns regularly fails to do proper name resolution for local hosts after some time and is stuck with the hardcoded Cloudflare DNS until the DNS container is restarted.

I don’t know why ha-dns is so finicky about this change. Maybe Pi-hole answers DNS requests a bit more slowly…? Or just because? I don’t know.

Unreliable name resolution in a local network that is well curated with proper host names and static DHCP leases is a huge PITA. Failing name resolution breaks virtually everything. In the worst case it ends with me sitting next to the broken DNS server with a laptop connected directly via Ethernet cable, debugging it, because even Wi-Fi access points stop working properly after a couple of minutes when name resolution is down.

Because of that, I deliberately run two name servers with failover functionality in my local network to avoid a SPOF for name resolution. HA breaks this redundancy by insisting on using its own nameserver. If that breaks, a lot of things in HA go sideways, even HA core functionality like writing values to the recorder and InfluxDB databases.

3 Likes

This came up on Reddit today.

Quite surprised about this; it’s bad practice for sure, and I see devs doing this type of thing more and more. Firefox is trying to bypass DNS using DoH; systemd had a similar incident. It’s the same attitude I read here, from devs who have never managed and/or tried to secure a network. It would be great if HA respected the local network settings.

9 Likes

That’s my Reddit post; thanks for the visibility.

In Firefox’s case, they allow me to disable DoH. I’d like the same for HA if possible.
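For reference, the Firefox toggle lives in about:config under the network.trr.mode preference; as far as I know, 5 means DoH is explicitly disabled:

network.trr.mode = 5   (0 = default off, 2 = DoH first, 3 = DoH only, 5 = off by user choice)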

6 Likes

Cloudflare is a perfectly reasonable DNS provider, but forcing people to use it is not.

In the unlikely event Cloudflare goes down, you’re screwed. (Probably not, because the cache would last until it came back, but still.) It does happen sometimes.

3 Likes

Main issue for me is that you should never override DHCP by default. This breaks things and is a security issue. With Firefox wanting DoH enabled by default, this would break resolution in split-DNS environments. So, for instance, if a user connects a device running Firefox with default DoH enabled to a network I manage, they cannot access internal webmail, chat, etc., as illustrated below.
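Roughly what that looks like (hypothetical internal zone and resolver addresses): the name resolves via the internal resolver but not via the public one the browser silently switches to:

$ dig +short webmail.corp.example @192.168.1.1
10.0.0.25
$ dig +short webmail.corp.example @1.1.1.1
$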

Also, regarding Cloudflare: Americans are somewhat protected from spying at the endpoint; the rest of the world is not.

10 Likes

Just don’t try to mention this to a dev or project leader; you’ll get muzzled.

6 Likes

Hardcoding DNS is, at best, against best practices. I hope the devs see how silly this choice is.

7 Likes

For some reason they consider having it not hardcoded a feature. People who point out the error that it is are treated as demanding a feature. Just mark it as the bug it is and fix it when you can, I would think.

3 Likes

It’s not a bug since it’s intentional. Just a bad decision.

3 Likes

I live on a sailboat. I use HA to control various things aboard the boat. Right now, I have full Internet access. When I go offshore, I’m not going to have access to anything outside of my little network.

Having a (breaking) reliance on external DNS is a travesty.

7 Likes

Hi, I’m new to Home Assistant. I installed the VM to test it, and the first thing I noticed was how it was hammering my edge firewall non-stop trying to reach the Cloudflare DoT servers.
I read the thread and all the issues posted, and it is sad to see the devs do not consider this a bug.
Hardcoding DNS is a no-go, and it is difficult to imagine a reason other than simplifying development. If that is the case, they should not close the issues but leave them open for future improvement.

My current solution is to block traffic to 1.1.1.1 and 1.0.0.1 at the host using iptables. It’s a hack and it does not survive a reboot, but so far it works. I guess I could make it persistent, but I want to test it for a few days first.

I took a quick look at the DNS container and could not find the obvious source of the requests. If someone knows, I’d like a pointer, because I would prefer to point this at my own servers rather than block them.

If someone wants to do the same, it’s very simple. Use the “SSH & Web Terminal” add-on to SSH into the host and then execute the following:

iptables -I FORWARD -d 1.1.1.1 -j REJECT
iptables -I FORWARD -d 1.0.0.1 -j REJECT

Be careful: if you mess with iptables, Docker’s own communications might fail.
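If you want to make the rules persistent on a generic Linux host (not HAOS, where the host filesystem is mostly read-only), one sketch is a oneshot systemd unit; the unit name and paths here are just my assumptions:

# /etc/systemd/system/block-cloudflare-dns.service (hypothetical)
[Unit]
Description=Block hardcoded Cloudflare DNS fallback
After=network-pre.target

[Service]
Type=oneshot
ExecStart=/sbin/iptables -I FORWARD -d 1.1.1.1 -j REJECT
ExecStart=/sbin/iptables -I FORWARD -d 1.0.0.1 -j REJECT

[Install]
WantedBy=multi-user.target

Then systemctl enable block-cloudflare-dns.service should re-apply the rules at boot.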

Have a look at /usr/share/tempio/corefile inside the hassio_dns container; I think you’ll see the reason why Cloudflare DNS is getting hammered.
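If you want to dump it without opening a shell in the container, something like this should work from the host (assuming the Supervisor’s usual container name):

docker exec hassio_dns cat /usr/share/tempio/corefile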

Too fast: I found the rendered configuration file at /etc/corefile

bash-5.1# vi /etc/corefile 

.:53 {
    log {
        class error
    }
    errors
    loop

    hosts /config/hosts {
        fallthrough
    }
    template ANY AAAA local.hass.io hassio {
        rcode NOERROR
    }
    mdns
    forward . dns://192.168.1.3 dns://192.168.1.3 dns://127.0.0.1:5553 {
        except local.hass.io
        policy sequential
        health_check 1m
    }
    fallback REFUSED,SERVFAIL,NXDOMAIN . dns://127.0.0.1:5553
    cache 600
}

.:5553 {
    log {
        class error
    }
    errors

    forward . tls://1.1.1.1 tls://1.0.0.1  {
        tls_servername cloudflare-dns.com
        except local.hass.io
        health_check 5m
    }
    cache 600
}

As you can see, all requests are sent to my local server (192.168.1.3) but also to a local instance on port 5553. That local instance sends all requests, plus any that the local DNS server rejects, to Cloudflare using DNS over TLS.

Basically, it’s an anti-Pi-hole feature. It does not matter whether you filter with your local DNS, because your filters are always bypassed via Cloudflare unless you block port 853 on your edge firewall.
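For anyone blocking at the edge instead, the rule is a one-liner on a Linux-based firewall (adjust the chain to your setup):

# reject all outbound DNS-over-TLS so the hardcoded fallback cannot bypass local filtering
iptables -I FORWARD -p tcp --dport 853 -j REJECT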

I confirmed that if I change 1.1.1.1 to another IP, the requests change accordingly.

Thanks.
I need to find a way to make the configuration changes persistent.

If you’re not running HassOS, you could mount your own corefile into that path.
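Purely as a sketch (the Supervisor manages this container and will recreate it, so this is fragile, and the image name is a placeholder), the idea is a read-only bind mount over the rendered config:

# illustrative only: bind-mount a custom corefile over the generated one
docker run -d --name hassio_dns \
  -v /etc/my-corefile:/etc/corefile:ro \
  <hassio-dns-image>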

1 Like

I do.
Also, I need a Docker crash course. :slight_smile: