OCR for analog meter using OpenCV

That’s really cool!
About the focus: Did you try to put a small magnifying glass in front of the lens?

Hi,
I know it’s an old discussion, but I am looking for exactly this solution to read my old gas and electricity meters. The repository that @jhhbe shared doesn’t work (at least, the add-ons in it look deprecated and are no longer supported by HA).

Any chance of getting it working with the current version of HA, or any other alternatives?

Thanks

I only have low-tech add-on experience, so I don’t have prebuilt images. To be reasonably sure that they work, I have only made them available for armv7 (I think), as I’m running on an RPi 3B+, and they need to build on your Pi.

If you let me know what it is complaining about, I can have a look. Ah, as it is now there is a constraint on the webcam: the script drops a pic in /config/www, so you might want to check that you have that folder. It would be fairly simple to make the webcam optional and use a jpg you drop in that folder yourself; an automation that saves a pic to /config/www would take care of that.
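Such an automation could look roughly like this (the camera entity id and the interval are placeholders, not from this add-on; `camera.snapshot` writes the file for the script to pick up):

```yaml
# Hypothetical automation: refresh /config/www/meter.jpg every 30 minutes
automation:
  - alias: "Save meter snapshot"
    trigger:
      - platform: time_pattern
        minutes: "/30"
    action:
      - service: camera.snapshot
        data:
          entity_id: camera.water_meter
          filename: /config/www/meter.jpg
```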

The code snippet below shows that the script expects a webcam supporting basic authentication and saves the snapshot to /config/www:

    import requests

    # download the snapshot from the camera URL (basic auth)
    r = requests.get(config_json["url"], auth=(config_json["user"], config_json["password"]))
    r.raise_for_status()
    # write the image bytes to the folder the OCR script reads from
    with open("/config/www/meter.jpg", "wb") as f:
        f.write(r.content)

hope that helps!
Jhh

I know that it’s not the same as OCR but I’ve got similar water meters and I’m reading them with infrared reflective sensors.

You can see the shiny circle at the bottom-right of the meter, just under the numbers. One full rotation of that circle means 1 liter has passed (at least on my meter ;)). You just need to detect this movement, and it’s fairly easy, as the circle has a non-reflective part: the black triangle.

I’ve got a Wemos D1 Mini, flashed Tasmota on it, attached two TCRT5000 sensors, and prepared two Tasmota rules that write the readout into Tasmota’s counter variables. This way my meter is independent of Home Assistant (in case of reboots or other issues), but HA can of course read those counters via MQTT.

It will need some calibration (there’s a little screw on the sensor for that), and it’s important to keep the meter and sensor in the dark, as any additional light can mess with the readings and make them jump hundreds of liters up :wink:
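Those “hundreds of liters” jumps from stray light can also be filtered in software when the counter is polled. A minimal sketch of such a plausibility filter (the function name and threshold are my own, not from this setup):

```python
def filtered_delta(prev_count, new_count, max_step=500):
    """Return the pulse-count increase since the last poll, or None when the
    jump is implausible (e.g. stray light hitting the reflective sensor).

    max_step is a hypothetical per-poll ceiling; tune it to your meter and
    polling interval. On this meter, each pulse is one liter."""
    delta = new_count - prev_count
    if delta < 0 or delta > max_step:
        return None  # reject the reading, keep the previous baseline
    return delta
```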

I’ve had this running for months now; it works great and is very reliable after proper calibration. My meters are in the garage, enclosed in a small cabinet, so there’s no problem with light leaks. Additionally, I have temperature sensors attached to the very same Wemos and plan to add a PIR for motion detection, so it’s going to be a real multi-sensor setup with almost no effort.

1 Like

Oh I’m running HA on a NUC, that’s probably why - too bad :frowning:

Sounds exciting!

found some old pictures of first testing device:

Okay, I’ve added architectures. As it is simple Python, I doubt there will be crazy dependencies. If you install it, it should build for your architecture and hopefully it will work.

Jhh

1 Like

Great, thanks!
Unfortunately, I’ve just realized that an AWS account is needed (even though it’d be free for the first 12 months), so I don’t think I’m going to use this solution permanently.

hi,

Thanks for that remark, you’re right: only the first year is free. I’ll need to change my refresh frequency to once every hour soon to stay below 1000 images per month. 1 USD per 1,000 images seems reasonable until our meters are swapped for digital meters with a P1 service port that we can read directly (that will also be more accurate, as Rekognition is not perfect).
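The arithmetic behind the hourly frequency checks out: one snapshot per hour stays under a 1,000-image monthly allowance even in the worst case.

```python
# One snapshot per hour, worst-case 31-day month
images_per_month = 24 * 31  # 744
# beyond the free tier, at the $1 per 1,000 images rate mentioned above
cost_usd = images_per_month / 1000 * 1.0
assert images_per_month < 1000
```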

I had missed that - I thought I could stay in the free tier for quite long.

Thx!
Jhh

Hi,

First of all, a great idea to read the meters like this, and thanks for sharing your hard work. I got it all working, creating an image and making the connection to AWS. But then it somehow fails with the following log output. I have no idea what it could mean and would really appreciate your help:

```
x i x
Conversion failed
Nw:  0
erg waarschijnlijk juist:
i
0 0 2
-----------
Tekst: H
Type: WORD
Confidence 67.27286529541016
Id 6
ParentId: 0
value conversion
x H x
Conversion failed
Nw:  0
erg waarschijnlijk juist:
H
0 0 2
-----------
Tekst: e
Type: WORD
Confidence 50.415550231933594
Id 8
ParentId: 1
value conversion
x e x
Conversion failed
Nw:  0
erg waarschijnlijk juist:
e
0 0 2
-----------
Tekst: W
Type: WORD
Confidence 46.646087646484375
Id 7
ParentId: 0
value conversion
x W x
Conversion failed
Nw:  0
erg waarschijnlijk juist:
W
0 0 2
-----------
Tekst: 20
Type: WORD
Confidence 74.04178619384766
Id 9
ParentId: 2
value conversion
x 20 x
Nw:  20
-----------
Tekst: 0049m
Type: WORD
Confidence 94.37973022460938
Id 10
ParentId: 3
value conversion
x 0049m x
Conversion failed
Nw:  0
erg waarschijnlijk juist:
0049m
0 0 2
-----------
Tekst: LLLIIIW
Type: WORD
Confidence 70.71917724609375
Id 11
ParentId: 4
value conversion
x LLLIIIW x
Conversion failed
Nw:  0
erg waarschijnlijk juist:
LLLIIIW
0 0 2
-----------
Tekst: 41622252324
Type: WORD
Confidence 88.07597351074219
Id 12
ParentId: 4
value conversion
x 41622252324 x
Nw:  41622252324
-----------
Reading = ok
Traceback (most recent call last):
  File "ocr_aws.py", line 112, in <module>
    baseline = int(reading)
ValueError: invalid literal for int() with base 10: 'LLLIIIW'
[cmd] /run.sh exited 1
[cont-finish.d] executing container finish scripts...
[cont-finish.d] done.
[s6-finish] waiting for services.
[s6-finish] sending all processes the TERM signal.
```

Thanks,
Thomas

hi,

When AWS reads the text in an image, it does not by magic return the correct answer. It gives you a number of guesses with a confidence rating (though I doubt they like that phrasing at Amazon).

That is why I need a current reading with a +/- range, so the script can work out whether the reading makes sense (a new reading must lie between the old one minus the delta and the old one plus the delta), as the confidence ratings don’t really work.

It also means the script is trying to treat a lot of gibberish as numbers, so it looks like there is an is-numeric check that is not working. I’ll have a look at that, but my Python is not great.
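The sanity check described above (a numeric test plus the baseline window from the add-on’s `baseline`, `under`, and `over` options) can be sketched roughly like this; the function name is mine and the actual script may differ:

```python
def plausible_reading(candidate, baseline, under, over):
    """Accept an OCR candidate only if it is purely numeric and falls inside
    the window [baseline - under, baseline + over]; otherwise return None."""
    if not candidate.isdigit():
        return None  # rejects gibberish such as 'LLLIIIW' or 'H'
    value = int(candidate)
    if baseline - under <= value <= baseline + over:
        return value
    return None
```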

Given that it sees so much in your picture, you may want to change the camera angle a bit; reflections are not good. Can you include a pic of the meter you are using?

Jhh

@jhhbe

Hi, thanks very much for this addon! Exactly what I was looking for. :slight_smile:
However, I can’t install it due to the error “Failed to install addon, Unknown Error, see logs”. I’m on Hass.io 0.113 on an RPi 3B.

Here is my supervisor log:

20-07-28 09:12:34 INFO (SyncWorker_7) [supervisor.docker.addon] Start build 47b59106/armv7-addon-meterreader:0.1.16
20-07-28 09:12:49 ERROR (SyncWorker_7) [supervisor.docker.addon] Can't build 47b59106/armv7-addon-meterreader:0.1.16: The command '/bin/ash -o pipefail -c pip3 install boto3' returned a non-zero code: 127

Thanks!

UPDATE: I could install boto3 manually using “pip3 install boto3” in the terminal, but the install still hangs on the same message.
UPDATE 2: solved. We need to create a build.json file according to this post.
UPDATE 3: actually, you are amazing, you already solved it in a different way :slight_smile: Thanks so much!
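For context, exit code 127 means the command was not found, i.e. the default base image has no pip3. The fix boils down to a build.json next to the add-on’s Dockerfile that selects a base image shipping Python; the exact image tag below is an assumption, check the linked post for the one that actually worked:

```json
{
  "build_from": {
    "armv7": "homeassistant/armv7-base-python:latest"
  }
}
```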

@alexbelgium,

Ha, yes, I posted a new version a bit earlier today. The whole setup is a bit picky, as it requires a setup quite similar to what I had; if you got this far, I assume you’ll manage. Good find on that fix. I’ll probably include it, as it seems more reliable than what I came up with.

I’m not putting too much time in this anymore - I’ve repurposed the webcam to find out what animal is after our chickens and we’re getting a smart meter sooner or later.

Good luck! Should you get stuck; just ask.

Jhh

2 Likes

@jhhbe
Well, it works perfectly, so thanks very much. It now reads the camera snapshot and saves the data to the MQTT sensor as expected :slight_smile:

For the chickens, what I did is use the motionEye add-on to capture motion. It’s not perfect on an RPi 3B but it just works. And I built an Arduino-managed coop door that opens and closes based on luminosity. It was tricky to set up, but it now works rather well.

All the best,
Alexandre

Hi,
what’s the code to read numbers and text? What do I have to write in the targets?
Thank you

    - platform: amazon_rekognition
      aws_access_key_id: *****
      aws_secret_access_key: *****
      region_name: eu-west-1 # optional region, default is us-east-1
      save_file_folder: /config/www/amazon-rekognition/ # Optional image storage
      save_timestamped_file: True # Set True to save timestamped images, default False
      confidence: 90 # Optional, default is 80 percent
      targets: # Optional target objects, default person
        - ???
      roi_x_min: 0.2 # optional, range 0-1, must be less than roi_x_max
      roi_x_max: 0.6 # optional, range 0-1, must be more than roi_x_min
      roi_y_min: 0.3 # optional, range 0-1, must be less than roi_y_max
      roi_y_max: 0.45 # optional, range 0-1, must be more than roi_y_min
      source:
        - entity_id: camera.water_meter

Here are the full options that worked very well for me. I didn’t set targets, and it provided all readable numbers with high probability as output.

upd_interval: 1800
url: 'http://192.168.178.52:8080/cgi-bin/snapshot.sh'
user: 'null'
password: 'null'
baseline: '617550'
under: 10
over: 10
aws_access_key_id: ACCESSKEY
aws_secret_access_key: SECRETKEY
region: us-east-2
mqtt_host: 192.168.178.23
mqtt_port: 1883
mqtt_user: mqtt
mqtt_pwd: mqtt
mqtt_topic: home/meterelectricity
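On the HA side, the value published to that topic can then be picked up with a plain MQTT sensor; the sensor name and unit below are assumptions:

```yaml
sensor:
  - platform: mqtt
    name: "Electricity meter"
    state_topic: "home/meterelectricity"
    unit_of_measurement: "kWh"
```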

Hi, I know it’s quite an old thread but would you mind elaborating on your solution? Thanks

Hello, I’ve got an Aquarite LT from Hayward (apparently no RS485 on this model). I’d like to do the same with OCR; can you share your YAML please?
Thanks.

Hi, I don’t have the code anymore, but in fact I did the OCR processing on the server, not on the ESP.