HLK-LD2450 Initial experiments to connect to HomeAssistant

That ‘denied by Referer ACL’ message means the server is checking the HTTP Referer header.
The referrer (misspelled as “referer” in the original HTTP spec) is the site you came from.
If you click the link, your browser sends this site’s URL in the Referer header and you get a 403 (not sure why their firewall has that rule, but it’s still set that way).
If you instead copy the link and open it in a new tab, no referring site is sent. Apparently that makes their firewall happy, so it doesn’t block you and you can see the doc.

That’s the trick if anyone else wants to look at the (Chinese) hardware spec sheet from the link.

Hi, just need to clarify on something.
Does it do better static detection than the LD2450?
The review here seems to imply that it does.

Is there an esphome component for the rd-03d?

I was watching the same videos. Any idea what the difference is between the RD-03D and the RD-03E?

The E seems like an LD2410 with gesture recognition.

Hi,
I’m also trying to connect the D1mini to LD2450.
RX–>TX
TX–>RX.
But nothing shows up in the sensors, even with the HLKRadar app.
Can you please share your YAML so I can correct mine?
Or any other ideas for troubleshooting?
thx,
Doron

I guess it is the same.

Do you mind sharing how you got it to work with ESPHome? I should get mine sometime next week.

Sorry, I’m doing the packet parsing myself in C++/JS and have never used the RD-03 with ESPHome. But I’d guess the LD2450 component works out of the box, since my C++ parsing is exactly the same for both, just with fewer config options on the RD-03.
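For anyone who wants to try it, here’s a rough sketch of what the stock ESPHome `ld2450` component config could look like on a D1 mini. This is an untested sketch from memory, not a verified config: the pin assignments assume the D1 mini’s hardware UART (GPIO1/GPIO3), the sensor key names should be double-checked against the current ESPHome docs, and you need a recent ESPHome release that ships the `ld2450` component. The LD2450 itself talks at 256000 baud, 8N1.

```yaml
# Untested sketch — assumes a recent ESPHome with the built-in ld2450 component.
# Disable UART logging so the hardware UART is free for the radar:
logger:
  baud_rate: 0

uart:
  id: uart_radar
  rx_pin: GPIO3    # D1 mini RX  <- LD2450 TX
  tx_pin: GPIO1    # D1 mini TX  -> LD2450 RX
  baud_rate: 256000
  parity: NONE
  stop_bits: 1

ld2450:
  id: radar
  uart_id: uart_radar

sensor:
  - platform: ld2450
    ld2450_id: radar
    target_count:            # key name from memory — verify in the docs
      name: "Radar target count"
```

The `logger` override matters on the ESP8266: it only has one full hardware UART, and by default ESPHome logs on it, which corrupts the radar frames.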


I’ve been experimenting with presence detection for quite some time now and have an LD2450 connected to an ESP8266 installed in every room. The detection itself works great, but the “zoning” is a rather tedious and challenging task, especially when setting up the zones. Anyone who has played around with this knows what I mean. For example, I have an L-shaped hallway and want to define each corridor as a separate zone—ideally as many small zones as possible so the lights can follow you. :wink:

It’s been a while since I was in school, but some things from math class have stuck with me, and I came up with the following thought experiment. I’d love to hear your opinions on it:

If I can get the coordinates of detected people (target sensor x, y) through HomeAssistant, can’t I define zones within HomeAssistant and check whether a target is inside a polygon and how many are there?

→ Here’s how I imagine it: you measure four points (the corners of your zone).
From these, you can derive four individual 𝑓(𝑥) functions, which define an area and thus a zone/polygon. Then you check which areas or planes the target coordinates fall into. In my view, this would allow you to divide the space into as many shapes as you like.

This way, you could create binary sensors that indicate whether a zone is occupied and how many people are in it.

I’ve also made an initial attempt, but it’s not quite working yet—what do you think?

- name: TestZone
  unique_id: test_zone
  icon: mdi:calendar-today          
  state: >
          {% set x_target = states('sensor.sens_sz_presence_p0x') | float %}
          {% set y_target = states('sensor.sens_sz_presence_p0y') | float %}
          {% set polygon = [(0, 0), (0, 500), (500, 500), (500, 0)] %}
          
          {% set count = 0 %}
          {% for i in range(polygon | length) %}
            {% set x1, y1 = polygon[i] %}
            {% set x2, y2 = polygon[(i + 1) % (polygon | length)] %}
            
            {% if (y1 > y_target) != (y2 > y_target) and
                   (x_target < (x2 - x1) * (y_target - y1) / (y2 - y1) + x1) %}
              {% set count = count + 1 %}
            {% endif %}
          {% endfor %}
          
          {{ count % 2 != 0 }}

##############################################
edit
I found out why it didn’t work: in Jinja, a variable set inside a for loop is scoped to that iteration, so count is reset on every pass and never makes it out of the loop.
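For anyone who wants to keep the pure-template version anyway: Jinja’s namespace() object survives loop scoping, so roughly this should work (same entities as in my attempt above — an untested sketch, not something I’ve run):

```yaml
state: >
  {% set x_target = states('sensor.sens_sz_presence_p0x') | float %}
  {% set y_target = states('sensor.sens_sz_presence_p0y') | float %}
  {% set polygon = [(0, 0), (0, 500), (500, 500), (500, 0)] %}
  {# namespace() keeps state alive across loop iterations #}
  {% set ns = namespace(inside=false) %}
  {% for i in range(polygon | length) %}
    {% set x1 = polygon[i][0] %}{% set y1 = polygon[i][1] %}
    {% set x2 = polygon[(i + 1) % (polygon | length)][0] %}
    {% set y2 = polygon[(i + 1) % (polygon | length)][1] %}
    {% if (y1 > y_target) != (y2 > y_target) and
          (x_target < (x2 - x1) * (y_target - y1) / (y2 - y1) + x1) %}
      {% set ns.inside = not ns.inside %}
    {% endif %}
  {% endfor %}
  {{ ns.inside }}
```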

So I wrote it in Python, and voilà – it works so far. I can define a polygon with 4 vertices and use the ray casting algorithm to check whether the target point is inside or outside the polygon. It works quite well and is very fast.

Now I’m extending the script to handle a number n of polygons. With an automation, you can easily trigger the script on every change of the target coordinates, so it responds to updates automatically.
target_x and target_y are just the entities sensor_presence_p0x and p0y …

I just need to figure out how to use the response in the most meaningful way. I was considering a Boolean helper or something similar. Any good ideas?

Here’s the Python script:

target_x = float(data.get("target_x", 0))
target_y = float(data.get("target_y", 0))
polygon = [(0, 0), (0, 500), (500, 500), (500, 0)]

def point_in_polygon(x, y, poly):
    count = 0
    for i in range(len(poly)):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % len(poly)]
        if (y1 > y) != (y2 > y) and (x < (x2 - x1) * (y - y1) / (y2 - y1) + x1):
            count += 1
    return count % 2 == 1

# Check whether the point lies inside the polygon
in_zone = point_in_polygon(target_x, target_y, polygon)

hass.services.call("notify", "persistent_notification", {"message": f"Person in zone: {in_zone}. {target_x} {target_y}"})
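Regarding the n-polygons extension and the Boolean-helper question, here’s one way it could look: loop over a dict of named zones and flip an input_boolean helper per zone. This is a sketch under assumptions — the zone names, coordinates, and the `input_boolean.zone_<name>` helper entities are all made up for illustration; you’d create those helpers yourself and adapt the names. In a Home Assistant python_script, `hass` is injected for you.

```python
# Same ray-casting test as in the script above.
def point_in_polygon(x, y, poly):
    """Count edge crossings of a horizontal ray going right from (x, y)."""
    inside = False
    for i in range(len(poly)):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % len(poly)]
        if (y1 > y) != (y2 > y) and x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
            inside = not inside
    return inside

# Hypothetical zones — replace with your own measured corner points (in mm).
ZONES = {
    "hall_a": [(0, 0), (0, 500), (500, 500), (500, 0)],
    "hall_b": [(500, 0), (500, 500), (1000, 500), (1000, 0)],
}

def update_zone_helpers(hass, x, y, zones):
    """Turn an input_boolean.zone_<name> helper on/off per zone for one target."""
    results = {}
    for name, poly in zones.items():
        occupied = point_in_polygon(x, y, poly)
        results[name] = occupied
        service = "turn_on" if occupied else "turn_off"
        hass.services.call(
            "input_boolean", service,
            {"entity_id": f"input_boolean.zone_{name}"},
        )
    return results
```

With that, a template binary_sensor per zone isn’t even needed; the helpers themselves can drive the light automations.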