Amcrest times out a lot, fixed with keepalive proxy

I have four different amcrest cameras on wifi, and when starting home-assistant (as well as periodically while it’s running), I’ll get log messages saying there was a ‘ReadTimeout’ when talking to the cameras. In addition, the startup is delayed with “platform amcrest is taking longer than 10 seconds”, and the camera images are frequently missing or significantly delayed on the frontend.

After days of this driving me nuts, I discovered that this is all because the HTTP connection to the camera is closed after each request. It seems that the initial connection phase takes up most of the time. Knowing this, I was able to fix it using an intermediate nginx proxy that keeps a pool of connections alive. Now only the connection between home-assistant and nginx is closed, which is fine, while the connection from nginx to the cameras stays alive (hopefully forever).

Since that change I have yet to see any logs about ReadTimeouts, restarting HA is at least twice as fast, and the camera images update instantly on the frontend. So I figured I would share this just in case someone else is having similar problems.

The relevant section of nginx.conf:

upstream camera-1 {
    keepalive 16;
    server camera-1.localdomain;
}

server {
    listen 7771;
    location / {
        proxy_pass http://camera-1;
        proxy_read_timeout 300;
        proxy_connect_timeout 300;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}
Then point the amcrest block in configuration.yaml at the proxy: use the proxy's address with port: 7771 instead of host: camera-1.localdomain. (Repeat for additional cameras, picking a different port number for each.)
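For example, the amcrest block might look like this (a sketch, assuming Home Assistant runs on the same machine as nginx; the name and secrets references are placeholders for your own values):

```yaml
# configuration.yaml -- point the amcrest integration at the nginx proxy
# rather than at the camera directly. Host, port, and credentials here
# are illustrative.
amcrest:
  - host: 127.0.0.1              # the nginx proxy, not the camera
    port: 7771                   # matches "listen 7771;" in nginx.conf
    username: !secret amcrest_username
    password: !secret amcrest_password
    name: Camera 1
```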

I think there’s a way to restrict the proxy to listening on localhost, but I haven’t experimented with the configuration enough yet.

Very interesting… I am using the Amcrest integration for a few Dahua cameras, and I am seeing the same read timeout issue. Unfortunately I am not using a proxy yet, so I don't have the setup to test this on my system.

Maybe if I’m lucky a dev will see this and be able to fix it in the Amcrest integration :slight_smile:

Great find!

From the proxy side, I discovered it’s pretty easy to get nginx to listen only on localhost:
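It amounts to binding the listen directive to the loopback address (a minimal sketch of the relevant server block; the port and upstream name are from the earlier config):

```nginx
server {
    # Bind to 127.0.0.1 so only processes on this machine can reach
    # the proxy; the camera itself is still reached via the upstream.
    listen 127.0.0.1:7771;
    location / {
        proxy_pass http://camera-1;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}
```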


There are still some timeouts that I noticed just now, but for most of the day everything was fine. There must be some other factors at play, and it’s possible the proxy is only masking the problem. I will dig deeper and post updates here if I find anything.

I think this solution might fix my problems. I have two IP cams that behave exactly as the OP described. I'm using FFmpeg for streaming, but it fails frequently.

2019-07-31 14:54:29 WARNING (MainThread) [] Timeout reading image.
2019-07-31 14:54:29 WARNING (MainThread) [] Timeout reading image.

So I suppose that the FFmpeg requests to the cameras are overloading the system.

I don't have any clear information on how to use RTP streaming. I have a Firefox add-on that plays the streams smoothly, but I don't know how it works; it detects the camera as the Tenvis brand.
I hope to find a solution, because with this big overhead I haven't enabled camera streaming yet.

After running with the proxy setup, I think the problem isn't connection pooling on the HA side (it seems HA tries to use a pool of connections too, although I don't know whether they're persistent). I can reproduce the same timeout problems if I restart my proxy enough times, so it seems the problem is in the camera's connection handling. Maybe all connections stick around for a while, and new connections are blocked until previous ones expire? (Just guessing.)

If that’s the case, then by using a proxy I’ve made it so that the connections to the camera do not terminate on HA restart, giving me the freedom to restart HA over and over to deal with changes (e.g. additional template/mqtt entities), while keeping the cameras working. But the underlying problem is in the camera and there’s nothing I can do about it.

I've also reduced the "keepalive" count to 2, which might help if the problem really is too many concurrent connections to the camera. I'll keep this setup for a few days and see what happens.
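Concretely, that's a one-line change in the upstream block from the earlier config (same names as before):

```nginx
upstream camera-1 {
    keepalive 2;                 # cache at most 2 idle connections to the camera
    server camera-1.localdomain;
}
```

The keepalive directive caps how many idle upstream connections nginx keeps open per worker, so a lower value means fewer simultaneous connections held against the camera.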

What I see is that the image is requested once every 20 seconds. The requests time out, so the next request finds the port busy and the display goes blank. That's just for the entity picture; the same thing happens when trying to show the full stream, waiting for the connection to be established.

When I saw this approach, I thought it might be a chance to resolve these timeouts and the synchronization problems I haven't been able to solve.

Anyway, I'm not skilled enough to set up nginx as a proxy, or to know how it will handle the connections to my IP cams. I may give it a try :smiley: .