Valetudo Vacuums Map Camera for Home Assistant

Valetudo Vacuum Camera

Integration for Valetudo Vacuums to Home Assistant

Current stable release: 2024.05.0 → Important update for Home Assistant 2024.05

About:
Extracts the maps of rooted vacuum cleaners running Hypfer or RE (rand256) Valetudo firmware and brings them to Home Assistant via MQTT, with easy setup thanks to HACS and guided configuration via the Home Assistant GUI. This integration was inspired by the great work of Piotr Machowski; it simply renders the vacuum's maps.


What it is:
Setup of this camera, which decodes the vacuum maps and renders them to Home Assistant, is straightforward, and the integration is fully compatible with all possible Home Assistant installation types.

If you also want to control your vacuum from Home Assistant, you will need to install the
lovelace-xiaomi-vacuum-map-card (recommended) from HACS as well.

How to install:

A step-by-step setup and usage guide is available in our repository.

Known Supported Vacuums:

  • Dreame D9
  • Dreame Z10 Pro
  • Dreame L10s Ultra
  • Mi Robot Vacuum-Mop P
  • Roborock.S5 / S50 / S55 (Gen.2)
  • Roborock.S6
  • Roborock.S7
  • Roborock.S8
  • Roborock.V1 (Gen.1)
  • Xiaomi C1
  • In general, it works with all flashed vacuums supported by Valetudo Hypfer or RE (rand256).

Features:

  1. Renders virtual restrictions, active zones, active segments, and obstacle positions.
  2. Automatically generates the calibration points for the lovelace-xiaomi-vacuum-map-card to ensure full compatibility with this user-friendly card.
  3. Automatically generates a room-based configuration when the vacuum supports this functionality, allowing you to configure the rooms quickly in the lovelace-xiaomi-vacuum-map-card.
  4. The camera can automatically take snapshots (when the vacuum is idle, docked, or in error); the snapshots can also be saved with a custom name and location via the snapshot services in Home Assistant.
  5. Change the image options directly from the Home Assistant integration UI with a simple click on the integration configuration.
  • The options menus are designed to work on mobile phones with as little scrolling as possible.
    • Image rotation: 0, 90, 180, 270 (default is 0).
    • Automatic image trimming: the images are automatically trimmed to the minimum size, and you can add margins around them.
    • Base colors: the colors for the robot, charger, walls, background, zones, etc.
    • Room colors: Room 1 is actually also the floor color (for vacuums that do not support rooms).
    • It is possible to display the vacuum status on the image.
    • Custom transparency levels for all elements and rooms.
  6. This integration makes it possible to integrate multiple vacuums; each camera is named after its vacuum (example: vacuum.robot1 = camera.robot1_camera… vacuum.robotx = camera.robotx_camera).
  7. The camera, like all cameras in HA, supports the ON/OFF services; it is possible to suspend and resume the camera stream as desired.
  8. We also recently started to implement CPU and memory reporting to HA so that the threads can be better handled in Home Assistant virtual environments.
  9. Log export for reporting issues with the camera. Because we value privacy, the camera filters the Home Assistant logs and exports only the data related to the integration. We also save the last JSON and image payload received (when the vacuum is idle, docked, or in error). The data are stored in the .storage folder of Home Assistant and can be copied to the www folder from the options of the camera.
  10. Auto-zoom on the segments being cleaned, if the vacuum supports segments. This feature is available as an option from v1.5.9.
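As a sketch of how a snapshot could be stored (the entity name and file path are assumptions for illustration), Home Assistant's built-in `camera.snapshot` service can write the current camera image to a custom location:

```yaml
service: camera.snapshot
target:
  entity_id: camera.robot1_camera
data:
  filename: /config/www/robot1_snapshot.png
```

The `/config/www` folder is served by Home Assistant under `/local/`, so a snapshot saved there can be reused in dashboard cards.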


It is also possible to display the vacuum's status in the status text, in the same language the Home Assistant UI is set up to use (for more translations, do not hesitate to contribute).

Notes:

  • This integration is developed and tested on a Pi 4 with Home Assistant OS fully updated to the latest version, which allows us to confirm that the component works properly with Home Assistant. It is also tested on Proxmox and on a Docker Supervised “production” environment (a fully set up home installation).

Visit us and report any issues you face; suggestions and improvement requests are also welcome in our:

:link: GitHub Repository: GitHub - sca075/valetudo_vacuum_camera: Integration that export and render all Valetudo Vacuums (Hypfer and RE(rand256)) maps to Home Assistant
Join the Valetudo Vacuum Camera community and help us enhance our vacuum cleaning routines with the functionality provided by this custom component. :rocket:


Hi @gsca075,

I tried your project on current HASS and it basically works with a Dreame Z10 Pro, even though it is not on the supported device list. It receives the map data from the Valetudo-driven robot through MQTT and displays it. Yet, when cropping and trimming the map, Home Assistant keeps restarting.

Is that expected behaviour? The debug log doesn’t give much intel as to why this happens.

Regards
Dirk

Dear @digodigo,

Once you confirm the version of the camera you are using, we will start the investigation and try to reproduce the behaviour you face. We didn’t release (except in some beta tests) any version causing HA to reboot. The cropping factor was introduced because some of the maps produced by the vacuums are quite big to handle (causing instability). Platform-wise, we will later introduce an Intel-based version of this component, as we will use OpenCV instead of PIL (currently, on a Pi 4 or Home Assistant OS, it isn’t possible to set up OpenCV just by importing it). In general, all Valetudo vacuums should be supported; we will add the Dreame Z10 Pro to the list too :slight_smile:

@digodigo,

The Z10 Pro was tested based on feedback from other users; no issues were reported on v1.3.5 or on v1.4.0, which is the latest.

If you are still facing problems please let us know.

Regards,
Sandro

Sorry for my late reply. v1.4.0 works as intended indeed. Thanks for your good work!

@digodigo thanks to you… over the last week we worked on checking the camera performance more in depth, and we have now overcome the limits we had in our architecture: we created our own library (similar to OpenCV) and drastically improved the performance of the integration. We strongly recommend updating to version v1.4.3 if you haven't already.

@gsca075 great work! Setting it up was, well, not a breeze, but straightforward.

A few ideas:

  • Automate trimming. This should be relatively straightforward to do: just find the leftmost/topmost/rightmost/bottommost pixel position of a wall object being drawn, add a configurable padding, and set the trim to those edges. This way the trimming doesn’t need to be manually re-configured every time the map updates.
  • It would be nice to have an option for exporting the various layers within the Valetudo data into separate camera objects - one for the walls, one for the rooms, one for the vacuum position, one for the dock position, etc., allowing the user to freely assemble the layers into an overview via e.g. the Picture Elements card
  • Vector rendering. I understand that the Valetudo map data is a compressed bitmap without any anti-alias, meaning just by changing the drawn pixel size (e.g. instead of a 1x1 pixel, one draws a 4x4 unit), it can be scaled up to great lengths. This should also mean that this bitmap can be processed into a vector graphic using e.g. drawsvg (which has a similar API to PIL), then return the appropriately scaled bitmap to the camera entity, while also making the raw SVG content available to be used by e.g. ha-floorplan
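The auto-trim idea from the first bullet can be sketched in a few lines of NumPy. This is a minimal sketch, assuming a boolean wall-layer array as input (the function name and data layout are illustrative, not the integration's actual code): take the bounding box of the wall pixels, pad it, and clamp it to the image bounds.

```python
import numpy as np

def auto_trim_bounds(walls: np.ndarray, padding: int = 50):
    """Compute a (left, top, right, bottom) trim box around wall pixels.

    walls: 2-D boolean array, True where a wall pixel is drawn
    (a hypothetical layer representation for this sketch).
    """
    ys, xs = np.nonzero(walls)
    if xs.size == 0:                      # empty map: nothing to trim
        h, w = walls.shape
        return 0, 0, w, h
    left = max(int(xs.min()) - padding, 0)
    top = max(int(ys.min()) - padding, 0)
    right = min(int(xs.max()) + 1 + padding, walls.shape[1])
    bottom = min(int(ys.max()) + 1 + padding, walls.shape[0])
    return left, top, right, bottom

# Toy 100x100 map with a rectangular wall block:
img = np.zeros((100, 100), dtype=bool)
img[40:60, 30:70] = True
print(auto_trim_bounds(img, padding=5))   # → (25, 35, 75, 65)
```

Because the box is recomputed on every map update, the trim would follow the map automatically instead of relying on manually configured crop values.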

@fonix232 Those are good points… in summary:

  1. We should auto-trim the image: by default auto-trim ON, or OFF with manual trimming as it is now… this would give the best options to everyone, do you agree?
  2. We could separate the output image if desired, giving the option to store the different layers. Anyhow, the robot position is already available as an attribute; we could add the dock position too… If the idea is to use the camera to create a picture element (and I think it is a cool idea), would it be enough to extract the floor (segments) and walls? This data can be exported as SVG as well.
  3. We use NumPy to draw the elements, and Valetudo uses compressed pixels, which means we already draw the image, as you say, in a kind of vector way (NumPy does this like OpenCV, which isn’t compatible with all HA installations). As far as I know, the cameras in HA can only render JPEG, PIL, or bitmap images… we chose NumPy because it basically runs on all installations of Home Assistant (to ensure compatibility)…
    In general… this camera's purpose is to use the vacuums… so okay for 1 and 2, we could implement this. It will take a little time anyhow, because right now we are also trying to support the Valetudo RE vacuums.
    So… after v1.5.0 is released we will start to work on it…
    Would you please open a discussion on our repository for your ideas?
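The compressed-pixel drawing described in point 3 can be sketched with NumPy roughly as follows (the function and parameter names are illustrative assumptions, not the integration's actual API): each map coordinate is painted as a pixel_size × pixel_size block, which is what makes lossless upscaling possible.

```python
import numpy as np

def draw_pixels(canvas: np.ndarray, pixels, color, pixel_size: int = 4):
    """Paint each (x, y) map coordinate as a pixel_size x pixel_size
    block of RGBA `color` onto an H x W x 4 uint8 canvas."""
    for x, y in pixels:
        x0, y0 = x * pixel_size, y * pixel_size
        canvas[y0:y0 + pixel_size, x0:x0 + pixel_size] = color
    return canvas

# Toy example: two map pixels scaled up by pixel_size = 4.
canvas = np.zeros((32, 32, 4), dtype=np.uint8)
draw_pixels(canvas, [(1, 1), (2, 1)], color=(93, 109, 126, 255))
# The block for map pixel (1, 1) covers rows 4:8 and cols 4:8.
```

Changing `pixel_size` rescales the whole map without interpolation artifacts, which is the "vector-like" property discussed above.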

I’d say make auto-trim the default and supply an appropriate default padding (e.g. 50px, or whatever units the current trim fields take). Then you can make it an option on the config page, with a collapsible section for manual trim tuning.

Yep, that’s the idea I’m going off of :slight_smile: The Xiaomi Vacuum Card repo has a good example on such picture-element overlaying techniques, and I’ve managed to get it mostly working:

But this obviously requires a bunch of manual tinkering (ignore some slight discrepancies, e.g. the balcony door/window edges not lining up; the floor plan isn’t perfect). Having separate layers for the walls, the rooms, the robot, and the dock would allow one to compose whatever layout they want. But of course, for ready-made cards you’d need a full map too… so the ability to create multiple cameras would come in handy.

I get why numpy is being used. I’m also aware that HA, at this moment, does not support vector images for cameras - this is actually something I wanted to bring up for some time, and will ask on Discord to see if it can get any level of traction. Such vector approaches would be useful, as vector images can contain tons of attributes and can be easily generated/drawn (there are even open-source JS libraries that allow rudimentary WYSIWYG vector editing!). Imagine, in a few releases, if we had full vector support for a new “floorplan” camera entity (sub)type, making it possible to easily create a visual representation of the home that HA can use, like the aforementioned floorplan card approach, but all built in.

I’ll open a discussion on the git repo later today!

@fonix232 I think we are talking about a kind of advanced home panel here… well… the idea is really cool. And yes, at this point we will do some experiments to “crop automatically” the images and then apply the trims as you say, keeping the configuration page available. You see, the Valetudo maps are 5210×5210 pixels or 7xxx×7xxx, and we do have a pixel_size that can scale the image… it’s quite a big challenge, but in general you would like to get an image where you can control other HA elements, such as lights and other home appliances, in just one “card”… using SVG could actually lead to other applications, such as scanning floors and importing the result into CADs… I think we will start with a stand-alone project for this… therefore… challenge accepted…

Why does the zone cleaning feature not seem to work as expected? (I’m also unsure if I’m asking my question in the right place.) My current configuration looks like this:

type: custom:xiaomi-vacuum-map-card
entity: vacuum.valetudo_dustin
vacuum_platform: Hypfer/Valetudo
map_source:
  camera: camera.dustin_camera
calibration_source:
  camera: true
internal_variables:
  topic: valetudo/your_topic
map_locked: true

Is there something within this configuration that might be causing the issue?

@Flodo1987 I’m not sure if I understand it correctly. Anyhow, I just got an S50, and I found a couple of “bugs” in the integration that I plan to solve with the next release: the zone or room cleaning was not properly shown on the map, but the vacuum went to the right position and performed as it should. In your config:

internal_variables:
  topic: valetudo/your_topic

should be replaced with the valid topic that you should see in the camera attributes. According to your vacuum entity… it should be something like:

internal_variables:
  topic: valetudo/Dustin

Please check the camera's vacuum_topic attribute; this parameter is case sensitive, and if it is not set correctly in the card, this could explain why the zone_clean or go_to functions do not work as expected.
Hope this helps. :slight_smile:


Oh god, that was stupid of me :sweat_smile::see_no_evil:
Thank you very much, it works.
And thanks for the great work. :+1:


I have another question.
Is it possible to initiate room cleaning with Alexa using voice commands?
I’m not sure if I’m asking this question in the right place.
Thank you in advance for any assistance!

Yes, it is possible, but you need to write an automation or script in Home Assistant.

  • Copy the MQTT command the card sends to the vacuum and use the “MQTT: Publish” service in Home Assistant.
  • The script will then be exposed to Alexa from HA.
  • Then you can use the Alexa app or your Echo to trigger the script.
    If you have a motion sensor in your room, you can also trigger the robot to pause when someone is in the room and resume when the room is free… actually, if you predefine zones or points, you could in theory voice-command the robot to go anywhere or do whatever.
    The card is really useful for getting the correct payload (in edit mode you can “copy service call”) to use in the service; you just have to figure out how to translate it and basically tell Alexa what to do: run the script in HA. There are also a couple of examples in the Valetudo Cloud.

Below is an example:

service: mqtt.publish
data:
  topic: valetudo/JustForDemonstration/MapSegmentationCapability/clean/set
  payload: '{"segment_ids": ["20", "18", "16"], "iterations": 2, "customOrder": true}'
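To expose such a call to Alexa, it has to be wrapped in a Home Assistant script. A sketch (the script name is just for demonstration, as is the topic) where the mqtt.publish call sits under the script's `sequence:` key:

```yaml
script:
  clean_bedroom:
    alias: "Clean bedroom"
    sequence:
      - service: mqtt.publish
        data:
          topic: valetudo/JustForDemonstration/MapSegmentationCapability/clean/set
          payload: '{"segment_ids": ["18"], "iterations": 1, "customOrder": true}'
```

Note that a bare `service:` key placed at the top level of a script (outside `sequence:`) is rejected by Home Assistant with a “Message malformed: extra keys not allowed” error.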

Okay, then I haven’t understood how to do it yet.

I’ve been looking at it for a few days now and testing some things, but when I copy the example and try to create a script with it, I get an error, specifically:

“Message malformed: extra keys not allowed @ data[‘service’].”

And now I’m stuck and don’t quite understand how to resolve it.

@Flodo1987 No worries, I will soon put some template or script examples in the repository so that you can have a look at them; they will probably be interesting to others as well :slight_smile:
I did some automations long ago, and at that time, having a V1, I had to use zones… I just need to look back at my config and will share it, if I can still find it :wink:

Okay, then I will continue testing it. Maybe I can figure it out before you find your configuration.:sweat_smile:

I haven’t found any instructions that have helped me so far.:see_no_evil:

Look, I just ran the example I gave you… and it did work with my vacuum. It is a little complicated right now: the script or automation you write (“clean bedroom” or whatever) has to be exposed to Alexa in the settings of Home Assistant. The script must have a service call (edit it in YAML) similar to the one I posted above. The robot (my S5) started without any problem; the only thing is to use your room ID number and your own topic instead of valetudo/JustForDemonstration, as you learned… the rest is copy-paste :wink:

Alright, when I try to test this on my end, I receive the error message.

This is my script:

service: mqtt.publish
data:
  topic: valetudo/Dustin/MapSegmentationCapability/clean/set
  payload: '{\"segment_ids\": [\"18\"], \"iterations\": 1, \"customOrder\": true}'

The error message I receive is:

Message malformed: extra keys not allowed @ data[‘service’]