The original project used only 4 colors for rooms (segments); I am using a random color increment. Could be implemented later, I guess. Implemented in v1.0.1.
No customization of colors or layer/entity visibility, but again, this could be implemented later if there is demand.
Heeey. Thanks. I’ve seen your project and noticed it was added to lovelace-xiaomi-vacuum-map-card only a few days ago (after I started working on my mapper).
Well, can you tell me what sort of magic you did to maximize performance? Myself, I am doing this (a rough sketch follows the list):
Downscale robot coordinates from pixelSize to 1.
Draw the floor/wall/segment layers pixel by pixel.
Aliased upscale to a scaling factor, fully manual and multithreaded.
Draw entities using a 2D drawing library (to get anti-aliased lines with proper connections & transparency).
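For comparison, here is a rough, single-threaded Python sketch of those four steps (my actual implementation is Go and multithreaded; the layer/pixel field names are illustrative, not the real Valetudo schema):

```python
# Rough single-threaded sketch of the four steps above; the real thing is
# Go and multithreaded. Field names ("type", "pixels") are illustrative.
import numpy as np
from PIL import Image, ImageDraw

FLOOR = (170, 220, 255)
WALL = (90, 90, 90)

def render(layers, path_points, pixel_size, width, height, scale=4):
    # width/height are the downscaled grid dimensions.
    img = np.zeros((height, width, 3), dtype=np.uint8)
    # Steps 1-2: downscale coordinates from pixelSize to 1 and draw the
    # floor/wall pixels one by one (assuming coordinates arrive in
    # pixelSize units).
    for layer in layers:
        color = WALL if layer["type"] == "wall" else FLOOR
        for x, y in layer["pixels"]:
            img[y // pixel_size, x // pixel_size] = color
    # Step 3: aliased (nearest-neighbour) integer upscale.
    img = np.kron(img, np.ones((scale, scale, 1), dtype=np.uint8))
    # Step 4: draw entities with a 2D library, so lines get proper joins
    # and transparency.
    out = Image.fromarray(img)
    draw = ImageDraw.Draw(out, "RGBA")
    pts = [(x * scale // pixel_size, y * scale // pixel_size) for x, y in path_points]
    if len(pts) > 1:
        draw.line(pts, fill=(255, 255, 255, 180), width=2)
    return out
```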
The result is great, I would say. The performance is good enough that it could be hosted on the vacuum itself and would work completely fine. The data could even be pulled from http://127.0.0.1/api/v2/robot/state directly via HTTP request and the image later submitted via MQTT, but I would prefer longer battery life over…convenience?
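If it were hosted on the vacuum, the loop could look roughly like this (broker address and topic are assumptions, and render_map stands in for the pipeline sketched above):

```python
# Hypothetical on-vacuum loop: pull the state locally, render, publish the
# PNG over MQTT. Broker address and topic name are assumptions.
import requests
import paho.mqtt.client as mqtt

state = requests.get("http://127.0.0.1/api/v2/robot/state", timeout=5).json()
png_bytes = render_map(state)  # render_map(): the pipeline sketched above

client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)  # paho-mqtt 2.x API
client.connect("homeassistant.local", 1883)
client.publish("valetudo/map/image", png_bytes, qos=1, retain=True)
client.disconnect()
```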
The map elements are extracted from the JSON that the vacuums provide inside the Valetudo PNG our integration receives. Then we build the layers on an np.array and, from there, calculate the data the cards require. The calibration points and room (segment) configuration can be extracted from the vacuum’s JSON. As for vacuum_clean_zone_predefined, I think it could also be extracted from the vacuum’s JSON when the vacuum offers this option; otherwise, the card has to be set up as in the guides. What robot type is the one you use? Can you try the integration from my repository, and then let’s discuss the improvements we should make to it?
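Roughly, the layer-building step looks like this (a simplified sketch; the key names follow the Valetudo map JSON as we read it, so treat them as assumptions):

```python
# Simplified sketch of building the layers on an np.array from the map JSON.
# Key names ("size", "pixelSize", "layers", "compressedPixels") follow the
# Valetudo map format as we read it; treat them as assumptions.
import numpy as np

def build_layers(map_data):
    px = map_data["pixelSize"]
    w = map_data["size"]["x"] // px
    h = map_data["size"]["y"] // px
    img = np.zeros((h, w, 3), dtype=np.uint8)
    colors = {"floor": (170, 220, 255), "wall": (90, 90, 90)}
    for layer in map_data["layers"]:
        color = colors.get(layer["type"], (120, 200, 120))  # segments: fallback
        runs = layer["compressedPixels"]  # flat [x, y, count, x, y, count, ...]
        for i in range(0, len(runs), 3):
            x, y, count = runs[i], runs[i + 1], runs[i + 2]
            img[y, x : x + count] = color
    return img
```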
Some context you might not be aware of:
ICBINV is a relic from the past. I never wanted to write or maintain it, because I have no use for the functionality. It exists only because, at the time, I was unable to say no to user requests.
The feature it provides was at some point part of Valetudo but had to be removed from there due to performance constraints. However, removing it made people very angry, because the feature was gone.
At the time, I just wasn’t able to resist, so I built the thing, and then it sat there, slowly decaying.
At one point, I even put up a note in the readme saying that the thing was basically unmaintained, which promptly got removed by a PR.
Over time, there were a lot of people who complained about its state, but no one stepped up to the task and did something about it.
It’s nice to see that now finally you did.
Because of that, the ICBINV repo is now archived, as there is no need to keep that half-broken thing around anymore. Again, thank you! Finally, that thing is gone.
@Hypfer You did a great job with the vacuums, and I think we can cooperate to make it easy for the folks who retrofit their vacuums with Valetudo to integrate them into HA. I would really appreciate cooperating with both of you, and I am very open to any improvements or suggestions. If you would then also consider adding our integrations to the official Valetudo page, well, that would be much appreciated.
@Hypfer omg, I did not expect such a reaction from you. Thank you!!!
Your points are valid, and I know from my own experience how hard it is to maintain a project you have no motivation and/or use cases for. I am glad to help.
Also removed the “Motyvation” part from my README.md.
> Because of that, the ICBINV repo is now archived, as there is no need to keep that half-broken thing around anymore.
It would be nice if you could put a link to my repo next to your note about why the repo is archived. Your repo is super popular, and users would be confused about why it’s archived without any additional information.
@gsca075, we need to clarify what we mean by “cooperation” in this context. We each have our own application, both serving the same purpose but implemented differently. Your application is built with Python, while mine is developed in Golang. I’m not entirely sure how we can collaborate effectively. Do you have any suggestions?
Looking at it from the Valetudo perspective: open http://<vacuum_ip>/api/v2/robot/state in your browser and inspect the JSON. If it changes in the future, this makes it incredibly easy to understand what happened and to debug. The same JSON is delivered via MQTT.
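From Python, the same quick inspection would be something like this (stdlib only; the IP is an example):

```python
# Quick inspection of the state JSON (stdlib only; the IP is an example).
import json
import urllib.request

with urllib.request.urlopen("http://192.168.1.50/api/v2/robot/state") as resp:
    state = json.load(resp)
print(json.dumps(state, indent=2)[:1000])  # peek at the structure
```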
@erkexzcx Yes, the Python version (valetudo_vacuum_camera) was created to support the Xiaomi vacuum card that I have used for many years. The camera grew slowly and with a lot of work, as I had to keep adapting the code. We managed to build an integration that is easy to configure and supports the card 100% (the original model was the camera Piotr made for the non-rooted vacuums).
There are several options: cropping, trimming, colour management, snapshots, plus room and calibration data out of the box. The integration is fully tested on real machines; a lot of work and time has already gone into it. Anyhow, the goal is to make it easy for those of us who have a Valetudo vacuum to render the image at the lowest memory and CPU cost, almost in real time. This is why it was developed. About the cooperation: man, I do not really know Go, nor the architecture behind your project. The data extraction from the PNG, instead of from the vacuum API, is done to reduce the load and get the data faster, directly from MQTT, so we don’t duplicate the data over the network. I would suggest, if it helps, having a look at my code to understand how it works. Perhaps we could try to merge the projects somehow (where this doesn’t impact performance or functionality). I really do wish we have the same goal here; it isn’t a competition, and I only want to share what the camera has offered for a while now. An easy way to install and use is the first thing to cover when replacing ICBINV, don’t you agree?
I looked deeper into your project, and I had missed a crucial part of it: it installs as a HACS project, which is incredibly easy from the user’s perspective.
My suggestion: update your README.md and make it clear that the user can easily install your app using HACS and…that’s it. Currently, it feels like this project has no “automated” install and the user is required to run a Python script manually. Your installation documentation can definitely be improved.
Also, I noticed that Hypfer has added a link in his repo to mine as a successor. Let’s not make this a competition, so I added your repo to my README.md so that new users can make their own choice before deploying your project or mine.
Yep, I need to clean up the documentation and make it a little easier to read… thanks for the update. I will link your project from my repo too so that, as you said, we let the users choose. This is absolutely not a competition; we want the same thing in the end: easy to use and configure, with full compatibility for the vacuum and HA.
As quoted from @Hypfer’s README for ICBINV (emphasis mine): “If mqtt.publishAsBase64 is set to true, the image data will instead be published as base64-encoded string, which can be useful for OpenHAB.”
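For what it’s worth, consuming that base64 payload is simple on the subscriber side; a minimal sketch (the topic name is hypothetical, paho-mqtt 2.x API):

```python
# Minimal consumer sketch for the base64 mode; topic name is hypothetical.
import base64
import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    png = base64.b64decode(msg.payload)  # payload is a base64 string
    with open("map.png", "wb") as f:
        f.write(png)

client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)  # paho-mqtt 2.x API
client.on_message = on_message
client.connect("localhost", 1883)
client.subscribe("valetudo/mymap/map")  # hypothetical topic
client.loop_forever()
```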
Instead of drawing and encoding a PNG image, which takes, let’s say, 100 ms, rendering an SVG image would take ~5 ms. That would open up the possibility of hosting the renderer on the robot vacuum itself (see the sketch below). Sadly, it does not work with lovelace-xiaomi-vacuum-map-card (I’ve already tried this).
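The idea, sketched: emit one <rect> per horizontal pixel run instead of rasterizing, so the output stays tiny and is cheap to produce (the (x, y, count) run format and all names are illustrative):

```python
# Sketch of the SVG idea: one <rect> per horizontal run instead of a raster.
# The (x, y, count) run format and all names are illustrative.
def map_to_svg(runs, pixel_size, width, height, color="#0076ff"):
    parts = [f'<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 {width} {height}">']
    for x, y, count in runs:
        parts.append(
            f'<rect x="{x * pixel_size}" y="{y * pixel_size}" '
            f'width="{count * pixel_size}" height="{pixel_size}" fill="{color}"/>'
        )
    parts.append("</svg>")
    return "".join(parts)
```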
The SVG format can’t be used for “cameras” in Home Assistant; this isn’t a limitation of the card. SVG is also not easy to implement on Home Assistant Operating System, although SVG can be used in picture elements, and the card can actually have a custom background.
@erkexzcx I’ve created a PR to fix the discovery topics - they need to be retained, otherwise the entities are missing after an MQTT integration reload/HA restart.
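In Python/paho-mqtt terms, for illustration, the fix boils down to the retain flag (the discovery topic and payload here are just examples):

```python
# The fix, in essence: publish discovery configs with retain=True so they
# survive an MQTT integration reload / HA restart. Topic/payload are examples.
import json
import paho.mqtt.client as mqtt

client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)  # paho-mqtt 2.x API
client.connect("localhost", 1883)
client.publish(
    "homeassistant/camera/valetudo_map/config",  # example discovery topic
    json.dumps({"name": "Vacuum Map", "topic": "valetudo/map/image"}),
    qos=1,
    retain=True,  # without this, entities vanish after a reload/restart
)
client.disconnect()
```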