Object detection: did my chickens lay an egg? (no longer need to check myself)

Hello fellow HA enthusiasts,

So we have a problem with magpies stealing our eggs. I have a chicken coop with a webcam and an ESPHome-powered door (which works quite well). So, in principle, I could detect an egg, close the door, and send a notification for me to pick up the egg. Thinking a bit further, I could reliably detect that my two feathered friends are inside and close the door in the evening, as we have predators around, but right now I need to check the webcam regularly to be sure they are in. (Currently it's based on sunset + x minutes, but that is only 95% reliable, and lives are depending on HA here.)

I've followed the Model Maker tutorial and uploaded the resulting TFLite model to DOODS, but it didn't like it. I have a problem with DOODS anyway, as it recognizes a car in whatever picture I show it.

Robin's DeepStack object implementation also looks interesting, but the label list (animal: bird, cat, dog, horse, sheep, cow, elephant, bear, zebra, giraffe) doesn't include 'chicken', of course.

I have HA running on a Pi 4 and a NUC, so resources should be fine, but I'm really at a loss as to where to look, so pointers are more than welcome!

Thanks!


A weight sensor in the floor would be my choice, I think… :stuck_out_tongue:

Thanks for the suggestion, but then I'd lose an opportunity to learn… and there is some excitement to an over-engineered solution too.

The chickens sleep on the other side, and I suppose a sensor might work, but the weight of the egg gets lost in the straw I provide. Alternatively, there are laying nests that evacuate the egg automatically, but I don't want to traumatize our hens by having the egg disappear as soon as they move.

Reading up on DeepStack now.

I meant sensing the weight of the chickens, to see if they are home :slight_smile:


Make the door small enough that you can NFC-tag them.

Have one reader on each side of the door: if the left one reads a chicken tag, it's going in; if the right one does, it's going out.
Each movement sets a boolean.

Should work fine, unless they reverse out the door.


If you really want to over-engineer it, you could put a depth sensor above the hollow where the eggs are; this would tell you whether there's a chicken and whether there is an egg. I use an ultrasonic sensor like that on an Arduino to detect water levels, but it would work for any depth sensing.
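
In ESPHome terms (since the coop door already runs ESPHome) a minimal, untested sketch of such a sensor could look like the snippet below - the GPIO pins and the name are placeholders, and you would still need to map the reported distance to "chicken / egg / empty":

# ESPHome sketch (assumption: an HC-SR04-style sensor mounted above the nest)
sensor:
  - platform: ultrasonic
    trigger_pin: GPIO12   # placeholder pin
    echo_pin: GPIO14      # placeholder pin
    name: "Nest depth"
    update_interval: 30s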


Thanks for all the suggestions - I used @robmarkcole's DeepStack object recognition, installed through HACS.

Then I read up on custom models, followed the instructions, and used a Google Colab to train the model. For now I have the impression that, given the memory and runtime restrictions (I timed out on my first attempt), I got the best results from:

python3 train.py --epochs 150 --batch-size 32 --dataset-path "/content/drive/MyDrive/my-dataset"

Connecting Google Drive was more straightforward than uploading and unzipping the training data.

Convincing the image processing integration to use your own trained model was a bit confusing:

image_processing:
  - platform: deepstack_object
    ip_address: localhost
    port: 80
    api_key: mysecretkey
    custom_model: mask
    confidence: 60

mask is not a filename mask; it was the name of the sample custom model for face mask recognition, so simply use the name of your own model here. For some reason the documentation mentions .pth files; the Colab generates a .pt file and that works just fine.

Et voilà:

The bounding rectangle is drawn a bit fine, but I no longer need to go look myself whether there is an egg.

targets_found: 
- egg: 77.362

summary: 
egg: 1

last_target_detection: 2021-05-09_08-58-49
custom_model: best
all_objects: 
- egg: 77.362
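
In case anyone wants the acting-on-it part too: a minimal notification automation on top of that entity could look roughly like this (untested sketch - the entity and notifier names are placeholders for whatever your setup creates; the entity state is the number of targets found):

# Sketch only - placeholder entity/notifier names
automation:
  - alias: "Egg detected - send notification"
    trigger:
      - platform: numeric_state
        entity_id: image_processing.deepstack_object_coop_cam
        above: 0
    action:
      - service: notify.mobile_app_my_phone
        data:
          message: "Egg spotted in the coop - time to collect it."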

It somehow feels wrong that I'm able to do this - pretty impressed with the whole thing.

Thx!


I love this application!


Robin,

Two applications, really - yes, it shows if there is an egg, which is nice as I prefer to get them rather than anyone else, but I also check that the happy hens are inside when the automation closes the door of the chicken coop. The automation runs at sunset + 15 minutes, but they have been caught out before, and the chances of them surviving a night outside aren't that high. So this is potentially a life saver.
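
Roughly, the "are they inside" check hangs off the door automation like this (a sketch only, not my exact config - all entity names are placeholders, and it assumes the custom model also reports a chicken class):

# Sketch - close the coop door at sunset + 15 min only if both hens are detected
automation:
  - alias: "Close coop door when both hens are inside"
    trigger:
      - platform: sun
        event: sunset
        offset: "00:15:00"
    action:
      - service: image_processing.scan
        target:
          entity_id: image_processing.deepstack_object_coop_cam  # placeholder
      - delay: "00:00:15"  # give the scan time to finish
      - choose:
          - conditions:
              - condition: template
                value_template: >
                  {{ (state_attr('image_processing.deepstack_object_coop_cam', 'summary') or {}).get('chicken', 0) >= 2 }}
            sequence:
              - service: cover.close_cover
                target:
                  entity_id: cover.coop_door  # placeholder door entity
        default:
          - service: notify.mobile_app_my_phone  # placeholder notifier
            data:
              message: "Hens not detected inside - check the coop before closing the door."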

It would be nice to have the server as an add-on, but I'm not complaining; I might look into it myself, but that's another learning track.

Thx!

Did you look into Frigate? That already has an add-on and supports custom models, I believe.

Frigate looks very polished, but DeepStack just works - their guide for training the model with a Google Colab is super simple. I started on TensorFlow Lite, but for training I struggled with Python versions, dependencies, and modules that either didn't want to install or that I couldn't uninstall to start all over.
I'll probably come back to it later; I have one non-RTSP cam, and I use motionEye to rotate another cam 90°, so there are a few things to look into before I'm ready.

I have DeepStack on the NUC, not on the Pi, so comparing performance and ease of setup against an add-on on the Pi is not really apples to apples. But then again, getting a USB Coral for the Pi may solve a lot.

What a pleasant waste of time this all is… hehe. Thanks for sharing your work, if I haven't mentioned that before!

My wife has a hard time with me over-engineering things for the fun of it. When she asks "why", I always answer "because I can". I mean, I don't need a micro camera and sensors on my mailbox reporting over Wi-Fi from the ESP to Home Assistant so my house can speak to me and let me know I have mail, but I do it because "I can".


Hi @jhhbe, very nice example of using object recognition. I was looking for something very similar.
Question: where did you find the egg images to train the model? Would you be able to share them?

Thanks
Koen

Koen,

I created a set myself: I had an HA automation take a picture in the chicken coop every 5 minutes. Then I used GitHub - tzutalin/labelImg to annotate the JPGs, which didn't take that long. I did that for about 3 days and made sure I had both sunny and overcast pictures, as the lighting is very different.
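
The picture-taking automation was just a time-triggered snapshot; a sketch of it below (the camera entity and the folder are placeholders, and the folder needs to be whitelisted via allowlist_external_dirs):

# Sketch - save a training snapshot every 5 minutes
automation:
  - alias: "Coop training snapshots"
    trigger:
      - platform: time_pattern
        minutes: "/5"
    action:
      - service: camera.snapshot
        target:
          entity_id: camera.coop_cam  # placeholder camera entity
        data:
          filename: "/share/coop/{{ now().strftime('%Y%m%d_%H%M%S') }}.jpg"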

I need to retrain my model, as the hens have now decided to lay their eggs at the far end of their home, and for those it doesn't work. So I doubt it's going to be very effective in your setup.

Anyway, the original set I used is at jhhbe/dataset: dataset egg detection (github.com)

Have fun!


Hey, do you still use it this way?
Could you share some more info on how to set this all up?
I have a camera working in HA with RTSP, and I could install DeepStack object too, but how do you get that image processed to check whether there's an egg or not?
Is it an automation that sends it to DeepStack every x minutes, or how do you do it? Thanks in advance.

Yes, I still use it that way, but as the chickens decided to lay their eggs on the opposite side, in the nest section, the camera can't get a good view. So it only detects the chickens now.

Setup: install the DeepStack Docker container, then add an image_processing section with the deepstack_object platform to configuration.yaml (the snippet I posted earlier), where you tell DeepStack which camera to look at. Install the HACS integration for DeepStack - I'm unsure which entities come from the configuration entry and which from HACS, but I assume the YAML entry gives you the image_processing.scan service and the HACS integration gives you an entity that lets you consume the result of that scan easily. I have an automation that calls the scan service every 5 minutes.
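
The scan automation itself is tiny - something along these lines (the entity name is a placeholder for whatever the integration creates in your setup):

# Sketch - trigger a DeepStack scan every 5 minutes
automation:
  - alias: "Scan coop camera"
    trigger:
      - platform: time_pattern
        minutes: "/5"
    action:
      - service: image_processing.scan
        target:
          entity_id: image_processing.deepstack_object_coop_cam  # placeholder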

Optional: make a custom-trained model available for use inside the DeepStack container.

I put some more effort into explaining the custom model part, as I thought the other bits and pieces were already documented.

Good luck!

Thank you for the info. I have the DeepStack part and the scanning already working;
I'm now trying to train the egg model.

The Google Colab is still online, and that was pretty easy. If you set up a 5-minute schedule saving pictures to your share folder, to get training material specific to your setup, then the instructions in my earlier posts should have what you need.

Have fun!

I don't know if it's normal; I think I have an error somewhere:

root@deepstack:~/deepstack-trainer# python3 train.py --dataset-path "/root/models/kipei"
Using torch 1.13.1+cu117 CPU

Namespace(adam=False, batch_size=16, bucket='', cache_images=False, cfg='./models/yolov5m.yaml', classes='', data={'train': '/root/models/kipei', 'val': '/root/models/kipei', 'nc': 2, 'names': ['egg', '']}, dataset_path='/root/models/kipei', device='', epochs=300, evolve=False, exist_ok=False, global_rank=-1, hyp='data/hyp.scratch.yaml', image_weights=False, img_size=[640, 640], local_rank=-1, log_imgs=16, model='yolov5m', multi_scale=False, name='exp', noautoanchor=False, nosave=False, notest=False, project='train-runs/kipei', rect=False, resume=False, save_dir='train-runs/kipei/exp2', single_cls=False, sync_bn=False, total_batch_size=16, weights='yolov5m.pt', workers=8, world_size=1)
Start Tensorboard with "tensorboard --logdir train-runs/kipei", view at http://localhost:6006/
Hyperparameters {'lr0': 0.01, 'lrf': 0.2, 'momentum': 0.937, 'weight_decay': 0.0005, 'warmup_epochs': 3.0, 'warmup_momentum': 0.8, 'warmup_bias_lr': 0.1, 'box': 0.05, 'cls': 0.5, 'cls_pw': 1.0, 'obj': 1.0, 'obj_pw': 1.0, 'iou_t': 0.2, 'anchor_t': 4.0, 'fl_gamma': 0.0, 'hsv_h': 0.015, 'hsv_s': 0.7, 'hsv_v': 0.4, 'degrees': 0.0, 'translate': 0.1, 'scale': 0.5, 'shear': 0.0, 'perspective': 0.0, 'flipud': 0.0, 'fliplr': 0.5, 'mosaic': 1.0, 'mixup': 0.0}
Overriding model.yaml nc=80 with nc=2

                 from  n    params  module                                  arguments
  0                -1  1      5280  models.common.Focus                     [3, 48, 3]
  1                -1  1     41664  models.common.Conv                      [48, 96, 3, 2]
  2                -1  1     67680  models.common.BottleneckCSP             [96, 96, 2]
  3                -1  1    166272  models.common.Conv                      [96, 192, 3, 2]
  4                -1  1    639168  models.common.BottleneckCSP             [192, 192, 6]
  5                -1  1    664320  models.common.Conv                      [192, 384, 3, 2]
  6                -1  1   2550144  models.common.BottleneckCSP             [384, 384, 6]
  7                -1  1   2655744  models.common.Conv                      [384, 768, 3, 2]
  8                -1  1   1476864  models.common.SPP                       [768, 768, [5, 9, 13]]
  9                -1  1   4283136  models.common.BottleneckCSP             [768, 768, 2, False]
 10                -1  1    295680  models.common.Conv                      [768, 384, 1, 1]
 11                -1  1         0  torch.nn.modules.upsampling.Upsample    [None, 2, 'nearest']
 12           [-1, 6]  1         0  models.common.Concat                    [1]
 13                -1  1   1219968  models.common.BottleneckCSP             [768, 384, 2, False]
 14                -1  1     74112  models.common.Conv                      [384, 192, 1, 1]
 15                -1  1         0  torch.nn.modules.upsampling.Upsample    [None, 2, 'nearest']
 16           [-1, 4]  1         0  models.common.Concat                    [1]
 17                -1  1    305856  models.common.BottleneckCSP             [384, 192, 2, False]
 18                -1  1    332160  models.common.Conv                      [192, 192, 3, 2]
 19          [-1, 14]  1         0  models.common.Concat                    [1]
 20                -1  1   1072512  models.common.BottleneckCSP             [384, 384, 2, False]
 21                -1  1   1327872  models.common.Conv                      [384, 384, 3, 2]
 22          [-1, 10]  1         0  models.common.Concat                    [1]
 23                -1  1   4283136  models.common.BottleneckCSP             [768, 768, 2, False]
 24      [17, 20, 23]  1     28287  models.yolo.Detect                      [2, [[10, 13, 16, 30, 33, 23], [30, 61, 62, 45, 59, 119], [116, 90, 156, 198, 373, 326]], [192, 384, 768]]
Traceback (most recent call last):
  File "train.py", line 530, in <module>
    train(hyp, opt, device, tb_writer, wandb)
  File "train.py", line 90, in train
    model = Model(opt.cfg or ckpt['model'].yaml, ch=3, nc=nc).to(device)  # create
  File "/root/deepstack-trainer/models/yolo.py", line 96, in __init__
    self._initialize_biases()  # only run once
  File "/root/deepstack-trainer/models/yolo.py", line 151, in _initialize_biases
    b[:, 4] += math.log(8 / (640 / s) ** 2)  # obj (8 objects per 640 image)
RuntimeError: a view of a leaf Variable that requires grad is being used in an in-place operation.
root@deepstack:~/deepstack-trainer#

You can insert code in the Colab - run ls -l /root/models/kipei to see whether your training data is where you think it is. If not, use something like How to Connect Google Colab with Google Drive - MarkTechPost.

For me it was just following a cookbook-style recipe - I have no idea what it does, which is why I commented that it feels wrong this is so easy. But I suppose if it isn't, then it isn't.