Image Object detection w/ Google Coral on Intel NUC

I’ve been very intrigued by this image processing platform and all the great work that @Robmarkcole has done to date. So much so that I’d like to consolidate all my Pis onto my new Intel NUC with Docker and a Google Coral stick.

My image processing time on the Pi w/ the Coral is about a second. Deepstack on the NUC under Docker takes about 6-8 seconds. That’s too slow to work at all in my use case; I need sub-second recognition.

I see this article doing it on a Pi, and it’s exactly what I’d like to do on the NUC:

I suspect that because this image was built for the Pi, it won’t work on the NUC (I’ve tried it but get errors).

Has anyone done this on a NUC w/ Docker and the Google Coral stick?

I’m good enough at this to get this far, but lack the knowledge to go further. I’ve got Portainer.io running Deepstack, Home Assistant, etc. on the NUC now.

Jeff


Take a look at Frigate in this forum. Seems to be what you’re looking for.

@jompa68

Tell me what your setup looks like. Mine is very specific.

I have an Intel NUC with Ubuntu 20.04 and a Google Coral stick, running Home Assistant Supervised v243 under Docker with Portainer. It took me 5 hours today to get the Coral stick working with Deepstack on the same NUC machine, so I’d have to backtrack a lot of steps to unwind what I did. There were a few key commands that did end up being breakthroughs, though. I’m not sure what I did was the most efficient way to do this, but it works… and it’s fast. -Jeff

Hi @jazzmonger
I have a PC with Debian 10, a Coral stick, and HA Supervised 243 running in Docker.
I’d really like to have my Coral running with Deepstack, so I’d be more than happy if you could tell me how you did it.

OK, sounds similar enough. Have you installed the Coral stick and gotten far enough that it recognizes the macaw bird example?

Should give you this:

INFO: Initialized TensorFlow Lite runtime.
----INFERENCE TIME----
Note: The first inference on Edge TPU is slow because it includes loading the model into Edge TPU memory.
11.8ms
3.0ms
2.8ms
2.9ms
2.9ms
-------RESULTS--------
Ara macao (Scarlet Macaw): 0.76562
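
For reference, that output comes from Coral’s getting-started classification demo. A command along these lines produces it (file names here follow the Coral example repo; adjust the paths to wherever you cloned it):

python3 classify_image.py \
  --model models/mobilenet_v2_1.0_224_inat_bird_quant_edgetpu.tflite \
  --labels models/inat_bird_labels.txt \
  --input images/parrot.jpg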

Yeah, so far so good.

Where are you located? It’s 4:30 am here in CA, so the wife is asleep… I need to get in front of my NUC for the rest of it, and turning on that light will bring on the wrath… happy wife, happy life :sunglasses:

Hehe, I live in Sweden, so it’s midday here: 1:38 pm.

Have you installed this (robmarkcole’s coral-pi-rest-server)?

I modified these files to get the REST server running and talking to the Coral stick. Then I turned it into a systemd service on the NUC.

I had already installed it on my Pi, so I was able to copy what I needed.
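
If you’re copying from a Pi as well, something along these lines does it (hypothetical hostname and paths; adjust to your setup):

scp -r pi@raspberrypi.local:~/coral-pi-rest-server ~/coral-pi-rest-server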

Here’s the new coral-app.py that I modified to work on the NUC. Modify w/ your username and correct paths:

# Start the server:
#   python3 coral-app.py
# Submit a request via cURL (substitute your own test image):
#   curl -X POST -F image=@image.jpg 'http://localhost:5000/v1/vision/detection'

from edgetpu.detection.engine import DetectionEngine
import argparse
from PIL import Image
import flask
import logging
import io

app = flask.Flask(__name__)

LOGFORMAT = "%(asctime)s %(levelname)s %(name)s %(threadName)s : %(message)s"
logging.basicConfig(filename='coral.log', level=logging.DEBUG, format=LOGFORMAT)

engine = None
labels = None

#ROOT_URL = "/"
ROOT_URL = "/v1/vision/detection"

# Function to read labels from text files.
def ReadLabelFile(file_path):
    with open(file_path, "r", encoding="utf-8") as f:
        lines = f.readlines()
        ret = {}
        for line in lines:
            pair = line.strip().split(maxsplit=1)
            ret[int(pair[0])] = pair[1].strip()
    return ret


@app.route("/")
def info():
    info_str = "Flask app exposing tensorflow lite model {}".format(MODEL)
    return info_str


@app.route(ROOT_URL, methods=["POST"])
def predict():
    data = {"success": False}

    if flask.request.method == "POST":
        if flask.request.files.get("image"):
            image_file = flask.request.files["image"]
            image_bytes = image_file.read()
            image = Image.open(io.BytesIO(image_bytes))

            # Run inference.
            predictions = engine.detect_with_image(
                image,
                threshold=0.05,
                keep_aspect_ratio=True,
                relative_coord=False,
                top_k=10,
            )

            if predictions:
                data["success"] = True
                preds = []
                for prediction in predictions:
                    preds.append(
                        {
                            "confidence": float(prediction.score),
                            "label": labels[prediction.label_id],
                            "y_min": int(prediction.bounding_box[0, 1]),
                            "x_min": int(prediction.bounding_box[0, 0]),
                            "y_max": int(prediction.bounding_box[1, 1]),
                            "x_max": int(prediction.bounding_box[1, 0]),
                        }
                    )
                data["predictions"] = preds

    # return the data dictionary as a JSON response
    return flask.jsonify(data)


if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Flask app exposing coral USB stick")
    parser.add_argument(
        "--models_directory",
#        default="~/Documents/GitHub/edgetpu/test_data/",
        default="/home/jeff/all_models/",
        help="the directory containing the model & labels files",
    )
    parser.add_argument(
        "--model",
        default="mobilenet_ssd_v2_coco_quant_postprocess_edgetpu.tflite",
        help="model file",
    )
    parser.add_argument(
        "--labels", default="coco_labels.txt", help="labels file of model"
    )
    parser.add_argument("--port", default=5000, type=int, help="port number")
    args = parser.parse_args()

    global MODEL
    MODEL = args.model
    model_file = args.models_directory + args.model
    labels_file = args.models_directory + args.labels

    engine = DetectionEngine(model_file)
    print("\n Loaded engine with model : {}".format(model_file))

    labels = ReadLabelFile(labels_file)
    app.run(host="0.0.0.0", port=args.port)

This command doesn’t work:

python3 coral-app.py --models-directory ~/my/dir

It should be:

python3 coral-app.py --models_directory ~/my/dir

(argparse only recognizes the flag exactly as it’s defined in coral-app.py, which uses an underscore.)

You also have to modify coral-app.py to change all references to “pi” (both the directory paths and the pi username) to match your NUC username; one quick way is sketched below.
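
For the path edits, a sed one-liner covers most of it (assuming your username is jeff; swap in your own, and double-check the result before running anything):

sed -i 's|/home/pi|/home/jeff|g' coral-app.py coral.service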

There will be errors along the way calling out dependencies, which need to be installed as they come up (a sketch of the likely installs is below the curl example). But once it finally executes, this tests it, using whatever image you feed it:

curl -X POST -F image=@images/test-image3.jpg 'http://localhost:5000/v1/vision/detection'

You should get an inference result.
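
The missing dependencies on my machine were roughly the following; treat this as a sketch, not a definitive list (the edgetpu Python library itself comes from Google’s Coral setup instructions, not pip):

sudo apt-get install python3-pip
pip3 install flask pillow

And the response follows the dictionary that predict() builds in coral-app.py above, so a successful detection comes back as JSON shaped like this (illustrative values):

{
  "predictions": [
    {
      "confidence": 0.87,
      "label": "person",
      "x_min": 120,
      "y_min": 40,
      "x_max": 380,
      "y_max": 510
    }
  ],
  "success": true
}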

Ah, I think I managed to fix that.

The coral.service file I modified is below. I used the default model.

[Unit]
Description=Flask app exposing tensorflow lite model on the Coral USB stick
After=network.target

[Service]
# "--model", default="mobilenet_ssd_v2_coco_quant_postprocess_edgetpu.tflite"
# "--labels", default="coco_labels.txt"
ExecStart=/usr/bin/python3 -u /home/jeff/coral-pi-rest-server/coral-app.py
WorkingDirectory=/home/jeff/coral-pi-rest-server
StandardOutput=inherit
StandardError=inherit
Restart=always
RestartSec=10
User=jeff

[Install]
WantedBy=multi-user.target
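
To wire it up, the standard systemd steps apply (assuming you saved the file as /etc/systemd/system/coral.service):

sudo systemctl daemon-reload
sudo systemctl enable coral.service
sudo systemctl start coral.service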

systemctl status coral.service

should show the service up and running.
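
A healthy status looks roughly like this (typical systemd output; your timestamps and PID will differ):

● coral.service - Flask app exposing tensorflow lite model on the Coral USB stick
     Loaded: loaded (/etc/systemd/system/coral.service; enabled; vendor preset: enabled)
     Active: active (running)
   Main PID: 1234 (python3)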

That’s it. I’m a total Linux novice, which is why it took me all day :wink:

Wow! Working on the first try. Really good help, @jazzmonger.
Got one info/warning entry in coral.log. Not sure if it’s important.

2020-09-14 14:43:30,423 WARNING root Thread-2 : From /home/hsadmin/coral-pi-rest-server/coral-app.py:61: The name DetectWithImage will be deprecated. Please use detect_with_image instead.

2020-09-14 14:43:30,445 INFO werkzeug Thread-2 : 127.0.0.1 - - [14/Sep/2020 14:43:30] "POST /v1/vision/detection HTTP/1.1" 200 -

FANTASTIC!
Interesting. My log file stays empty, though it used to work on my Pi. I tried to fix that, but after spending 5 hours getting this far yesterday, I decided to start drinking wine instead :). I’m in my office now looking at it.

But, that’s the last thing to worry about.

I got this warning as well: DetectWithImage will be deprecated. No big deal.
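
If your copy of coral-app.py still calls the old camel-case name, the warning goes away with a straight rename (same arguments, per the deprecation message itself):

# before
predictions = engine.DetectWithImage(image, threshold=0.05, keep_aspect_ratio=True, relative_coord=False, top_k=10)
# after
predictions = engine.detect_with_image(image, threshold=0.05, keep_aspect_ratio=True, relative_coord=False, top_k=10)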

I went through all these posts and edited them, adding code, examples, etc., so the next person trying to do this can replicate it! I’ve been at it for 3 months and finally just powered through it yesterday.


Hi Jeff: how fast is your RPi 3 + Coral USB?

I’m buying hardware to set up an image object detection system.

Thanks

Actually, I have an Intel NUC that I use for my Home Assistant installation now. The Raspberry Pis are just way too underpowered to do anything reliable regarding image detection, even with the Google Coral accelerator USB dongle. My NUC runs Ubuntu 20.04, which of course is unsupported by Home Assistant but works great.

In my very unscientific performance tests, the R Pis are about 1/10 as fast as the Intel NUCs. So it really depends on your application and your requirements. For me, when I arrive at my gate, I want my license plate recognized immediately, or at least within 10 seconds; after that I grab the remote control. The NUC is usually up to the task. The Pis? Forget about it…

Jeff