Hi,
Since the approach of using Tesseract in Node-RED to read the temperature of my heat pump does not seem to work (I was unable to import a trained data set built for seven-segment characters, see Node-Red Tesseract OCR to read heatpump temperature - #6 by smi), I tried SSOCR instead.
This is my current config:
camera:
  - platform: local_file
    file_path: /config/www/images/heatpump4.jpg
    name: heatpump_cam
  - platform: local_file
    file_path: /srv/homeassistant/img_2_processed.jpg
    # file_path: /local/images/img_2_processed.jpg
    name: ssocr_processed

image_processing:
  - platform: seven_segments
    source:
      - entity_id: camera.heatpump_cam
    x_position: 0
    y_position: 0
    height: 580
    width: 685
    threshold: 0
    digits: 2
    extra_arguments: '-Dimg_2_processed.jpg'
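For reference, my understanding is that with these settings the integration calls ssocr roughly like the command below. This is only my guess based on ssocr's command-line options and the values above (the integration actually feeds ssocr the camera snapshot rather than the file path directly, and the exact command it builds may differ), but it is also the kind of command I was hoping to test manually in the terminal:

# rough manual equivalent of the config above (my assumption, not the exact command the integration runs)
# crop takes x y width height; -t is the threshold, -d the number of digits, -D writes a debug image
ssocr -d 2 -t 0 -Dimg_2_processed.jpg crop 0 0 685 580 /config/www/images/heatpump4.jpg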
Unfortunately, in the log, I get:
Logger: homeassistant.components.seven_segments.image_processing
Source: components/seven_segments/image_processing.py:135
Integration: seven_segments (documentation, issues)
First occurred: 18:44:49 (31 occurrences)
Last logged: 18:50:10
Unable to detect value: found only 0 of 2 digits
The input image looks like this:
Also, I can’t run ssocr from the terminal (I’m on hass.io), it just says “service unknown”, even though the integration shows up under system info in the frontend. I also can’t find the processed image from SSOCR (if one is created at all).
Any idea how to get Seven Segments OCR running? Are there any additional steps required to set up SSOCR on hass.io? The documentation only mentions this for core installs.
Thanks!