Category Archives: Deep Learning

Fine-Tuned MobilenetV2 on MyriadX and Coral

Fine-tuning MobileNetV2 (MV2) on a GPU is not hard – especially if you use the TensorFlow Object Detection API in a Docker container. It turns out that deploying a quantization-aware model to a Coral Edge TPU is also not hard.

Doing the same thing for MyriadX devices is somewhat harder. There doesn’t appear to be a good way to take your quantized model and convert it for MyriadX. If you try to convert your quantized model to OpenVINO you get recondite errors; if you turn up logging on the tool you see the errors happen when it hits the FakeQuantization nodes.

Thankfully, for OpenVINO you can just retrain without quantization and things work fine. As such it seems like you end up training twice – which is less than ideal.

Right now my pipeline is as follows:

  • Annotate using LabelImg – installable via brew (OS X)
  • Train (using my NVIDIA RTX 2070 Super). For this I use TensorFlow 1.15.2 and a specific commit of the tensorflow/models repo. A different 1.x version might work – but 2.x definitely did not, in my case.
  • For Coral
    • Export to a frozen graph
    • Export the frozen graph to tflite
    • Compile the tflite bin to the edgetpu
    • Write custom C++ code that interfaces with the Coral edgetpu libraries to run the net on images (in my case the code can grab frames live from a camera or from a file)
  • For DepthAI/MyriadX
    • Re-train without the quantization-aware portion of the pipeline config
    • Convert the frozen graph to OpenVINO’s intermediate representation (IR)
    • Compile the IR to a MyriadX blob
    • Push the blob to a local depthai checkout

Let’s go through each stage of the pipeline:

Annotation

For my custom dataset I gathered a collection of video clips, then exploded them into individual frames using ffmpeg (the -r 1/1 option keeps one frame per second):

ffmpeg -i $1 -r 1/1 $1_%04d.jpg

I then installed LabelImg and annotated all my classes. As I got more proficient in LabelImg I could do about one image every second or two – fast enough, but certainly not as fast as Tesla’s auto labeler!

Note that on my MacBook, the following worked for installing labelImg:

conda create --name deeplearning
conda activate deeplearning
pip install pyqt5==5.15.2 lxml
pip install labelImg
labelImg

Training

In my case I found that MobileNetV2 meets all my needs: it is fast, fairly accurate, and it runs on all my devices (in software, on the Coral, and on the OAK-D Lite).

There are gobs of tutorials on training MobileNetV2 in general. For example, you can grab one of the great TensorFlow Docker images. But don’t assume that once you have the image you’re done – unless you are literally just re-running someone else’s tutorial. The moment you throw in your own dataset there are a number of extra steps, and most likely they will fail. Over and over. So script them.

But before we get to that, let’s talk versions. I found that the models produced by TensorFlow 2 didn’t work well with any of my hardware accelerators. Version 1.15.2 worked well, so I went with that. I even tried other versions of TensorFlow 1.x and had issues. I would like to dive into the cause of those issues – but have not done so yet.

See my GitHub repo for an example Dockerfile. Note that for my GPU (an RTX 2070 Super) I had to work around memory-growth issues by patching the TensorFlow Object Detection model_main.py. I also reduced my pipeline batch size (to 6, from the default 24). Without these fixes, training would blow up mysteriously with an unhelpful error message. If only those blasted bitcoin miners hadn’t made GPUs so expensive, perhaps I could upgrade to a GPU with more memory!
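
For reference, the TF1 Object Detection API training entry point is invoked roughly as follows (the paths, directory names, and step count here are placeholders, not my exact setup):

python object_detection/model_main.py \
    --pipeline_config_path=path/to/pipeline.config \
    --model_dir=training/ \
    --num_train_steps=100000 \
    --alsologtostderr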

It is also worth noting that the MobileNet checkpoint I started from, and the pipeline config, were different for the Coral and Myriad devices:

  • Coral: ssd_mobilenet_v2_quantized_300x300_coco_2019_01_03
  • Myriad: ssd_mobilenet_v2_coco_2018_03_29

Update: it didn’t seem to matter which version of MobileNet I started from – both work.

With my dataset and pipeline configuration, the loss got down to around 0.3 after about 100,000 iterations. On my GPU this took about 3 hours, which was not bad at all.
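
To watch the loss while training runs, TensorBoard pointed at the training directory works (assuming the standard TFOD training loop, which writes event files there; the directory name is a placeholder):

tensorboard --logdir=training/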

Deploying to Coral

Coral requires a quantization-aware-trained model exported to TFLite. Once exported to TFLite, the model must be compiled for the Edge TPU.
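
Concretely, the export-and-compile path looks roughly like this (the checkpoint number and paths are placeholders; the input/output array names are the standard ones produced by the TFOD SSD TFLite export script):

# 1. Export the quantization-aware checkpoint to a TFLite-compatible frozen graph
python object_detection/export_tflite_ssd_graph.py \
    --pipeline_config_path=pipeline.config \
    --trained_checkpoint_prefix=training/model.ckpt-100000 \
    --output_directory=tflite_export/ \
    --add_postprocessing_op=true

# 2. Convert the frozen graph to a fully quantized .tflite
tflite_convert \
    --graph_def_file=tflite_export/tflite_graph.pb \
    --output_file=tflite_export/detect.tflite \
    --input_shapes=1,300,300,3 \
    --input_arrays=normalized_input_image_tensor \
    --output_arrays='TFLite_Detection_PostProcess','TFLite_Detection_PostProcess:1','TFLite_Detection_PostProcess:2','TFLite_Detection_PostProcess:3' \
    --inference_type=QUANTIZED_UINT8 \
    --mean_values=128 --std_dev_values=128 \
    --allow_custom_ops

# 3. Compile the .tflite for the Edge TPU
edgetpu_compiler -o tflite_export/ tflite_export/detect.tflite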

On the Coral it is simple enough to switch from the default MobileNet model to your custom one – literally only the filenames change. You must point it at your new label map (so it can correctly map the class names) and your new model.
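
For a quick sanity check before dropping the model into my own C++ code, the stock pycoral example script can be pointed at the new files (the filenames here are just examples):

python3 pycoral/examples/detect_image.py \
    --model custom_model_edgetpu.tflite \
    --labels custom_labels.txt \
    --input test_frame.jpg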

Deploying to MyriadX

Myriad was a lot more difficult. To deploy a trained model, one must first convert it to the OpenVINO IR format, as follows:

source /opt/intel/openvino_2021/bin/setupvars.sh && \
    python /opt/intel/openvino_2021/deployment_tools/model_optimizer/mo.py \
    --input_model learn_tesla/frozen_graph/frozen_inference_graph.pb \
    --tensorflow_use_custom_operations_config /opt/intel/openvino_2021/deployment_tools/model_optimizer/extensions/front/tf/ssd_v2_support.json \
    --tensorflow_object_detection_api_pipeline_config source_model_ckpt/pipeline.config \
    --reverse_input_channels \
    --output_dir learn_tesla/openvino_output/ \
    --data_type FP16

Then the IR can be converted to blob format by running the compile_tool command. I had significant problems with compile_tool, mainly because it didn’t like something about my trained output. In the end I found the cause: OpenVINO simply doesn’t like the FakeQuantization nodes that quantization-aware training inserts into the graph. Removing quantization from the pipeline solved this. However, this cuts the other way for Coral – it ONLY accepts quantization-aware training (caveat: in some cases you can post-training-quantize instead, but Coral explicitly states this doesn’t work in all cases).
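
For completeness, here is roughly what the compile_tool step looks like (the shave/CMX-slice counts follow the DepthAI documentation of the time, and the IR filename follows from the mo.py invocation above; adjust paths to your OpenVINO install):

source /opt/intel/openvino_2021/bin/setupvars.sh && \
    /opt/intel/openvino_2021/deployment_tools/tools/compile_tool/compile_tool \
    -m learn_tesla/openvino_output/frozen_inference_graph.xml \
    -ip U8 \
    -d MYRIAD \
    -VPU_NUMBER_OF_SHAVES 4 \
    -VPU_NUMBER_OF_CMX_SLICES 4
# the resulting .blob file is what gets copied into the depthai checkout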

Once the blob file was ready, I put the blob and a JSON snippet in the resources/nn/<custom> folder inside my depthai checkout. I was able to see full framerate (at 300×170), or about 10 fps at full resolution (from a file). Not bad!

Runtime performance

On the Google Coral I am able to create an extremely low-latency pipeline – from capture to the start of inference in only a few milliseconds. Inference itself completes in about 20 ms with my fine-tuned model, so I am able to operate at full camera speed.

I have not fully characterized the MyriadX pipeline’s latency. Running from a file, it was able to keep up at 30 fps. My Coral pipeline involved pulling the video frames down with my own C/C++ code – including the YUV→BGR colorspace conversion. Since the MyriadX grabs and processes frames on the device itself, it has the potential to be low latency – but this has yet to be tested.

Object detection using the MyriadX (DepthAI OAK-D Lite) and the Google Coral produced similar results.

Architecture For DIY AI-Driven Home Security

The Raspberry Pi’s ecosystem makes it a tempting platform for building computer vision projects such as home security systems. For around $80, one can assemble a Pi (with SD card, camera, case, and power) that captures video and suppresses dead scenes using motion detection – typically with motion or MotionEye. This low-effort, low-cost solution seems attractive until one considers some of its shortfalls:

  • False detection events. The algorithm used to detect motion is susceptible to false positives – tree branches waving in the wind, clouds, etc. A user works around this by tweaking motion parameters (how BIG must an object be?) or masking out regions (don’t look at the sky, just the road).
  • Lack of high-level understanding. Even after tweaking the motion parameters, anything that moves is deemed of concern. There is no way to discriminate between a moving dog and a moving person.

The net result of these flaws – which all stem from a lack of real scene understanding – is wasted time. At a minimum the user is annoyed. Worse, they become fatigued and miss events, or stop responding entirely.

By applying current state-of-the-art AI techniques such as object detection and facial detection/recognition, one can vastly reduce the load on the user. To do this at full frame rate, one needs to add an accelerator, such as the Coral Edge TPU.

In testing we’ve found fairly good accuracy at nearly full frame rate. Although Coral claims “400 fps”, that figure covers inference only – not the full cycle of loading the image, running inference, and examining the results. In real-world testing we found the full-cycle rate closer to 15 fps. This is still significantly better than the 2–3 fps one obtains by running in software.

In terms of scalability, running inference on each Pi means the system scales with the number of cameras. The server’s job is simply to log the video and metadata (object information, motion masks, etc.).

Here’s a rough sketch of such a system:

This approach is currently working successfully, providing the following per RPi camera:

  • moving / static object detection
  • facial recognition
  • 3D object mapping – speed/location determination

This is all done at around 75% CPU utilization on a 2 GB RPi 4B. The imagery and metadata are streamed to a central server, which performs no processing other than archiving the data from the cameras and serving it to clients (connected via an app or web page).

Armchair Deep Learning Enthusiast: Object Detection Tips #1

Recently I’ve been using the TensorFlow Object Detection (TFOD) API, which makes it easy to fine-tune pre-trained detectors on your own data. Much of my approach follows material provided on pyimagesearch.com. Rather than rehash that material, I just wanted to give a few pointers that I found helpful.

  • Don’t limit yourself to the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) dataset. This dataset is credited with helping really juice up the state of the art in image recognition; however, it has some serious problems for armchair DL enthusiasts like myself:
    1. It isn’t easy to get the actual images for training. You have to petition for access to the dataset, and your email must not look “common”. Uh-huh – so what are your options? Sure, you can get links to the images on the ImageNet website and download them yourself; sure, you can get the bounding boxes – but then how do you match them up with the images you manually downloaded? I don’t want any of this – I just want the easy button: download it and use it. The closest you’ll get to that is grabbing the images from Academic Torrents: http://academictorrents.com/collection/imagenet-lsvrc-2015. But even that doesn’t feel satisfying – if ImageNet is about moving the state of the art forward, and assuming I even had the capability to do so, they sure aren’t making it easy for me to do that!
    2. It seems outdated. The object detection dataset hasn’t changed since 2012. That is probably good for stability, but the total size (~1M images) no longer seems big. People’s hairstyles, clothing, etc. are all changing – time for an update!
    3. Oh, that’s right – there is no official "person" synset inside the ILSVRC image set! So don’t worry about those out-of-date hairstyles or clothes!
    4. There are better datasets out there. Bottom line – people are moving to other datasets, and you should too.
      1. Open Images being one of the best
      2. Oh, and you can download subsets of this easily using a tool like https://github.com/harshilpatel312/open-images-downloader.git.
  • The TFOD flow is easy to follow – provided you use the right TensorFlow version.
    • TFOD is not compatible with TensorFlow 2.0. You have to use the 1.x series.
    • I am using Anaconda to install tensorflow-gpu version 1.15.0. To do this, type “conda install tensorflow-gpu=1.15.0” (inside an activated conda environment).
    • You then grab the TFOD library, as per the website (see the setup sketch after this list).
  • Make sure you actually feed TFOD data, else you get weird hanging-like behavior.
    • At some point I found that TensorFlow was crashing because a bounding box fell outside the image.
    • In the process of fixing that, I introduced a bug that caused zero records to be sent to TensorFlow.
    • When I then ran a training loop, I saw TensorFlow progress as usual until it reported it had loaded libcublas (“Successfully opened dynamic library libcublas.so.10.2”) – and then nothing.
    • I thought this was a TensorFlow issue, and even found a GitHub issue mentioning the same “Successfully opened dynamic library libcublas” message – however, this was all a red herring. It was NOT TensorFlow; my bounding-box fix had simply eliminated all bounding boxes. Once I fixed that, all was well.
  • Make sure you provide enough discriminatory data. For example, if you want to find squirrels, don’t train only on the squirrel set; otherwise your detector will think almost anything that looks like an object is a squirrel. Add in a few other datasets and you will find that squirrels are squirrels, and everything else is hit or miss.
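
For what it’s worth, here is a rough sketch of the TF1-era TFOD setup described above (version numbers, paths, and the test-file name vary with the models-repo commit; protoc must already be installed):

conda create -n tfod python=3.7
conda activate tfod
conda install tensorflow-gpu=1.15.0

# grab the TFOD code (pin to a 1.x-era commit of tensorflow/models)
git clone https://github.com/tensorflow/models.git
cd models/research
protoc object_detection/protos/*.proto --python_out=.
export PYTHONPATH=$PYTHONPATH:$(pwd):$(pwd)/slim

# smoke test (may be named model_builder_tf1_test.py on newer commits)
python object_detection/builders/model_builder_test.py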

How I Set Up DLIB

It is fairly easy to check out and build dlib. Getting it to work in a performance-optimized manner – Python bindings included – takes a little more work.

Per the dlib GitHub page, one can build the bindings by simply issuing:

python setup.py install

The first problem I found is that the setup process latched on to an old version of CUDA. That was my bad – fixed by updating my PATH variable to point to the new CUDA’s bin directory.

The second problem is that during compilation I saw the following:

Invoking CMake build: 'cmake --build . --config Release -- -j12'
[  1%] Building NVCC (Device) object dlib_build/CMakeFiles/dlib.dir/cuda/dlib_generated_cusolver_dlibapi.cu.o
[  2%] Building NVCC (Device) object dlib_build/CMakeFiles/dlib.dir/cuda/dlib_generated_cuda_dlib.cu.o
/home/carson/code/2020/facenet/dlib/dlib/cuda/cuda_dlib.cu(1762): error: calling a constexpr __host__ function("log1p") from a __device__ function("cuda_log1pexp") is not allowed. The experimental flag '--expt-relaxed-constexpr' can be used to allow this.

As the message suggests, this is resolved by passing a flag to the compiler. To do this, modify the setup.py invocation:

python setup.py install --set USE_AVX_INSTRUCTIONS=1 --set DLIB_USE_CUDA=1 --set CUDA_NVCC_FLAGS="--expt-relaxed-constexpr"
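
A quick way to confirm the rebuilt bindings actually picked up CUDA (this prints True when the CUDA path was compiled in):

python -c "import dlib; print(dlib.DLIB_USE_CUDA)"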

Everything went just peachy from there, except that when I attempted to use dlib from within Python I got an error (something like):

dlib 19.19.99 is missing cblas_dtrsm symbol

After which I tried importing face_recognition and got a segfault.

I fixed this by installing openblas-devel and re-running the setup.py script as above. Magically, this fixed everything.
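
Roughly speaking (the package name suggests a Fedora/RHEL-style system; on Debian/Ubuntu the equivalent package is libopenblas-dev):

sudo dnf install openblas-devel
python setup.py install --set USE_AVX_INSTRUCTIONS=1 --set DLIB_USE_CUDA=1 --set CUDA_NVCC_FLAGS="--expt-relaxed-constexpr"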

Again, not bad – dlib seems cool – just normal troubleshooting stuff.

GPUs Are Cool

Expect no witty sayings or clever analyses here – I just think GPUs are cool. And here are a few reasons why:

Exhibit A: Machine Learning

Training a standard feed-forward neural net on CIFAR-10 progresses at about 50 µs/sample on the GPU; my 2.4 GHz i7 takes almost 500 µs/sample. The full training run takes around 5 minutes on the GPU versus over 35 minutes on my CPU. On long tasks this means a difference of days versus weeks.

Exhibit B: Video Transcoding

In order to make backups of all my Blu-ray discs, I rip and transcode them using ffmpeg or HandBrake. Normally I’m lucky to get a few dozen frames per second – completely maxing out my CPU in the process. By compiling ffmpeg with nvenc/CUDA support I get 456 fps (19x faster). As the screenshots show, my average CPU usage stayed below 20% – and even GPU usage stayed under 10%. Video quality was superb (I couldn’t tell the difference).

ffmpeg -vsync 0 -hwaccel cuvid -i 00800.m2ts -c:a copy -c:v h264_nvenc -b:v 5M prince_egypt.mp4

(Screenshot: raw frame from the Blu-ray)
(Screenshot: the same frame after ffmpeg/nvenc transcoding)

My setup:

  • GPU: RTX 2070 Super (8 GB RAM)
  • CPU: i7-8700K (6-core HT @ 3.7 GHz)
  • RAM: 32GB
  • Disk: 1TB PM981 (NVME)