Apple Watch Solo

Apple makes some good products… I’m not so sure the Apple Watch can be counted as one of them. Reasons: the battery life isn’t stellar, and, most importantly, it is an utter pain to set one up for a family member.

The setup problem exists because Apple requires an iPhone to set up the Apple Watch and, to travel the royal path, said iPhone should be on a plan that supports the Apple Watch. Since I am cheap, I am not on one of those plans.

After much pain and wasted time I found the one true way to connect the watch to cellular while keeping a cheap plan on your phone. The process is superficially easy, but unnecessarily painful. Roughly it goes like this:

  1. Call Verizon. Yes. Call. In the year 2023, you MUST call them.
    • You must connect to their “inside sales” department
  2. Tell the sales associate you want to connect an Apple Watch to cellular in standalone mode
  3. Give them the watch IMEI
  4. Ask them to activate the watch. Then activate the watch using your iPhone. Do this with them on the phone
    • If they don’t activate it now, you will try endlessly, and in vain, to get it to work. You will call them the next day and have to pick up right where you left off.

Note to self: Even Rocky 9.2 uses bridge-utils

Sometimes I decrypt LUKS volumes via dracut/dropbear (SSH). To enable this I follow the instructions from the dracut-ssh project. However, since I usually use my Linux boxes as hypervisors, and VMs need a bridge, I end up with bridged networking.

Now for the problem: if your dracut networking config does not align with your normal post-dracut networking, you end up with a weird blending of the two. To avoid this I keep the dracut config consistent, building my bonds/bridges in dracut to match the normal configuration.

Here’s a sample portion of my /etc/default/grub:

rd.neednet=1 ip=10.10.2.2::10.10.0.1:255.255.240.0:server-2:bridge0:none:8.8.8.8 bridge=bridge0:enp47s0
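In full, those arguments sit inside GRUB_CMDLINE_LINUX, and the grub config must be regenerated afterwards. A minimal sketch (the grub.cfg path may differ on UEFI systems):

GRUB_CMDLINE_LINUX="... rd.neednet=1 ip=10.10.2.2::10.10.0.1:255.255.240.0:server-2:bridge0:none:8.8.8.8 bridge=bridge0:enp47s0"

# Regenerate the grub config so the new kernel arguments take effect
grub2-mkconfig -o /boot/grub2/grub.cfg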

This all works well, so long as you are sure to install bridge-utils – otherwise dracut silently fails.

yum install -y bridge-utils

Be sure to rebuild your dracut initramfs:

dracut --force --regenerate-all
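As a quick sanity check (my own habit, not an official step), the rebuilt initramfs should now contain the bridge tooling:

# look for bridge-related files (e.g. brctl or the bridge module) in the new initramfs
lsinitrd | grep -iE "brctl|bridge"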

Voila! Fin.

Disneyworld sucks

It’s not magical – no matter how many times they say it is. It is as magical as having a pickpocket empty your wallet every day and give you back only lost days of your life and a sunburn.

I hope someone starts a new theme park that is actually based on some new ideas and real entertainment, not hackneyed recycled fluff from yesteryear. Walt would be so ashamed.

Review of Starlink Internet

I have been using Starlink for several months and have been thoroughly impressed! I switched to Starlink even though I live in an area with multiple high-speed internet options: Verizon FIOS, Comcast Xfinity, etc… The “big boys” of internet. They all tout multi-hundred-Mbit/sec downlink; in the case of FIOS, a symmetric uplink. Starlink, on the other hand, barely breaks 100 Mbit/sec down. So why bother with it?

Normally you don’t care about your ISP – you just use your internet and it’s great… but what about when it isn’t so great?

Enter exhibit A: FIOS performance against Starlink. FIOS basically went to pot around January. No amount of rebooting routers would resolve it. Starlink didn’t tank – just Verizon.

Now enter Verizon’s consistently crappy service: I called Verizon and got bounced between 4 or 5 different “agents.” Nobody could help, but they acknowledged the problem. They assured me they valued me as a customer. Then they started blaming my router… but they didn’t realize I have redundant routers running pfSense. I’m not a black-belt network engineer, but I am fairly capable of handling my home internet router. I tried in vain to explain to them that it wasn’t my router. This proved a waste of time, so I asked them to cancel. Even getting them to cancel took forever! And to top it off, they informed me that since I was cancelling at the beginning of a billing cycle I would still have to pay the full amount – no proration – “per the contract.” Well, so much for valuing me as a customer: they don’t care and never did, even up to the very bitter end.

Now enter Starlink. Normally you have no bargaining chip if you have a single internet uplink. I have pfSense load-balance my internet across Starlink and FIOS. This meant I could cancel Verizon without any interruption of service. So satisfying.

Granted, Starlink is NOT as fast as FIOS. Not even close. But the fact that “Starlink != Verizon” is good enough for me. I’m even willing to pay more, obviously; Starlink is now $120/mo for me, and Verizon was only $80/mo. That’s what sheer disdain does.

Oh, and the user experience with Starlink is way better. Everything is managed out of the Starlink app. Want to cancel? Upgrade? Just hit the button, for goodness’ sake! And you are never required to interact with anyone to set it up, change it, or shut it down. Take a hint, Verizon – nobody wants your service reps.

Also, Starlink is improving: around March the already-low latency dropped another 10 ms, as seen in the image below.

Now for the bad… obviously the uplink is weak – single-digit Mbit/sec. Also, during torrential rain I lose internet for a minute or two at a time.

Elon: I’m hoping Starlink can add a cheaper tier – perhaps $60/mo? I’d even be good with $60/mo for 60 Mbit/sec downlink. But please don’t drop the uplink speed. It’s already too low.

Probably time to dump Redhat

Late in 2020 Redhat made its first attempt to kill CentOS. Then came the heroic rescue by the Alma and Rocky distros. Now, just last month (June 2023), Redhat is attempting to kill these “downstream distros.” They assert – wrongly, I might add – that “recently, we have determined that there isn’t value in having a downstream rebuilder.”

We could argue with Redhat or just move on. There are other distros. But the tragedy here is that Redhat is missing the entire point of open source. Everyone contributing anything that worked on CentOS was bolstering Redhat’s offering. It is arrogant for anyone, Redhat included, to view the community as a bunch of freeloaders. Open source isn’t narrowly defined as the sharing of code – it is a community of sharing. In that light Redhat is saying they are done sharing. So they are done with open source. So I am done with them. Time to move on.

The only question now is which distro to use next. In the past I’ve avoided other distros because there was simply no compelling reason to switch. Now there is. Perhaps Ubuntu or Debian? Arch?

Goodbye Redhat – have a good time sliding further into irrelevance.

GPU + TPU parallelism

I use my RTX for heavier ML workflows, like training and high-quality inference (say, CenterNet). Additionally, my security system has edge-based (Coral TPU) cameras (one TPU per camera) that perform their own inference workloads. Thus all my compute resources are busy burning any solar power I get, and then some.

This poses a problem, as it means I have no spare compute to handle any remaining workloads. For example, during regression testing of new models I need to re-run inference against old data. Neither the camera TPUs nor the RTX is available. Thankfully, one can just add extra TPUs to the main workhorse server to gain additional inference capacity. Each of these TPUs is capable of 90-ish inferences per second on a MobileNetV2-based net. All this on the same server that is performing inference and training on the RTX, without impacting performance.

Each TPU operates independently. Currently I have two TPUs in addition to my RTX. CPU utilization sits around 25% while the TPU and GPU resources are pegged. A busy box is a happy box.

For reference, a good Docker container for running Coral: https://github.com/pklinker/coral-container.git – the scripts provided for objdet are good. Just remember you can pass “@:0” or “@:1” to reference your 1st, 2nd, etc. TPU.
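If you want to double-check how many TPUs the host actually sees before addressing them by index, a couple of quick checks I find handy (assuming USB and/or PCIe/M.2 Corals):

# USB Corals enumerate as "Global Unichip Corp." (uninitialized) or "Google Inc."
lsusb | grep -Ei "global unichip|google"

# PCIe/M.2 Corals show up as apex devices
ls /dev/apex_*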

Putin is a Sad Bad Banana

How sad that a grown man – a single sorrowful, pitiful human being – could dash the hope and optimism of the world by plunging it into a stupid war. We all make dumb mistakes – but very few of us have the distinction of making a mistake that causes the following:

  • Young children, who should be in school and playing on playgrounds, are instead being killed, scared to death, and left fatherless. Their schools are bombed and they are living in a hell zone.
  • Mothers, who should be receiving the best of care as they bring up their children, are dying – some in maternity hospitals. This is because in war there is no real safety and everyone is potential “collateral damage.”
  • Young people, in Ukraine and Russia, who should be stretching their minds and talents to the benefit of society, are wasting their lives in a pointless war of one man’s doing
  • The world, instead of solving great problems, like traveling to infinity and beyond and helping the poor and the sick, has had its hopes reset with the realization that a modern-day madman can bring the storms of war to everyone’s doorstep.

These are the things that fall squarely on Mr. Putin. As such Putin earns, with ignominy, the “sad banana” award – he’s just a gross, rotten banana that nobody wants. How very sad.

Home Assistant Dynamic Entity Lists with Entity-Dependent Style

Home Assistant is very powerful – possessing a great deal of flexibility both in what can be monitored and how the monitored state is displayed. Much of the display flexibility comes through a combination of the Jinja templating engine and custom components.

Here is one example of this flexibility in action: displaying a dynamic list of sensors, coloring the status LED differently depending on the sensor state. More specifically: the ping status for a set of devices – in this case the ping status of all my Raspberry Pi cameras.

To display a dynamic list of sensors I first helped myself by giving my devices a consistent name prefix – e.g. “pi-cam-X” where X is the instance. I then installed the wonderful auto-entities plugin and specified an inclusion filter of “/pi-cam/”.

Next, to perform the styling, I downloaded the template-entity-row plugin, which allows applying a template to each entity row. The card YAML is actually fairly compact and readable:

type: custom:auto-entities
card:
  type: entities
filter:
  include:
    - name: /pi-cam/
      options:
        style: |
          :host {
            --card-mod-icon-color: {{ 'rgb(221,54,58)' if is_state('this.entity_id','off')  else 'rgb(67,212,58)' }};
          }
        type: custom:template-entity-row
        icon: mdi:circle
        state: >-
          {% if is_state('this.entity_id','off') %} Down {% else  %} Up {% endif %}

To be honest, it took me a while to accept that this kind of flexibility is not available out of the box – it is too bad some of these add-ons are not just core to HA, as they seem like core functionality.

The “out of the box” display is on the left. The ostensibly improved status is on the right. Note that, in truth, I don’t see a huge need for displaying textually what the red/green LED shows; however, I much prefer “Up/Down” to “Connected/Disconnected”.

Putin Has Lost His Ukrainian War

Putin has lost the war for one simple reason: he never had a cause to start it in the first place – and everyone knows it.

What he is doing might have worked 50 years ago – when the skies weren’t monitored by non-military satellites. When we didn’t have social media showing us the people we already knew and loved in Ukraine being murdered. We can plainly see what he is doing.

We knew he was coming weeks in advance. The US took away any shred of surprise he might have had – if he indeed had any, given that he parked 100k+ troops right on Ukraine’s doorstep for a month. We saw the Ukrainians’ good faith destroyed.

What Putin needs is help thinking, as his brain seems rotten. Here’s some help I offer:

  • “We have to de-nazify Ukraine.” The Nazis were those who decided they didn’t like a group of people, invaded their country, and killed them. That is what the Russians are doing. The Jews were the people that, sadly, took the abuse from the Nazis. The Ukrainians are the Jews. Nobody is buying what Putin is saying here because it doesn’t square with what he is doing. He is killing Ukrainians. He is being the Nazi. What Putin should be saying here is: “Russia has long desired to emulate the Nazis and will now do so by invading Ukraine.” Very faithfully said, Mr. Putin.
  • “The Ukrainian soldiers are using their people as human shields.” Since Putin is attacking Ukraine, EVERY UKRAINIAN is a defender – not just the army. No human shields exist – just the men, women, and children of Ukraine he is killing. Ukraine is justified in defending. Russia is not justified in attacking any individual in Ukraine. Here is what Putin meant: “Russia will kill innocent Ukrainians indiscriminately.” I give Putin a 10/10 for this honest statement.
  • “Any foreigner who joins the Ukrainians will meet consequences they have never seen.” Here Putin wants to deter any decent human being from helping defend the victim he is trying to murder. He is threatening nuclear war against any who might want to help. This is further proof he has no cause – if we needed further proof – for with a real cause for war, one would hope for allies to stand at one’s side in doing what is right. Putin has no right to what he is doing. He knows it. He also knows he has a slim chance of defeating Ukraine – because nobody in Ukraine wants Russia (except a narrow strip of Russians in the east, who were already somewhat separate). He will only “win” if he basically beats the Ukrainians into submission over a long stretch of time. Anyone helping would totally frustrate his plans. Sadly, the West appears to be listening to Putin. What the world needs is to not be afraid. Putin can’t progress in a war that the world opposes. He can only progress if people are scared to oppose him. So really all he meant to say was “Russia would rather destroy the entire world than lose a war it unjustly started.”

I have no ill will against the Russian people, who were stuck with Putin when he started this war. It is their fault if they keep him around. If they do, they are complicit with him. Putin can arrest tens of thousands of people – but not millions. If his own people stood up to him, they could clear their own guilt. Otherwise, they are just like the German Nazis of WWII who stood by as the worst evils humankind can commit were perpetrated by their army.

The history of the Ukraine war, with respect to Putin, has been written. He’s guilty beyond measure and everyone knows it. He can shell innocent civilians only so long as his own troops are blinded from this fact. Only so long as his people sit quietly by. History has written his portion of this conflict. It anxiously awaits to see how the Ukrainian liberation will unfold.

Fine-Tuned MobilenetV2 on MyriadX and Coral

Fine-tuning MV2 using a GPU is not hard – especially if you use the TensorFlow Object Detection API in a Docker container. It turns out that deploying a quantization-aware model to a Coral Edge TPU is also not hard.

Doing the same thing for MyriadX devices is somewhat harder. There doesn’t appear to be a good way to take your quantized model and convert it to MyriadX. If you try to convert your quantized model to OpenVINO you get recondite errors; if you turn up logging on the tool you see the errors happen when it hits FakeQuantization nodes.

Thankfully, for OpenVINO you can just retrain without quantization and things work fine. As such it seems like you end up training twice – which is less than ideal.

Right now my pipeline is as follows:

  • Annotate using LabelImg – installable via brew (OS X)
  • Train (using my NVidia RTX 2070 Super). For this I use TensorFlow 1.15.2 and a specific hash of the tensorflow models dir. Using a different version of 1.x might work – but using 2.x definitely did not, in my case.
  • For Coral
    • Export to a frozen graph
    • Export the frozen graph to tflite
    • Compile the tflite bin to the edgetpu
    • Write custom C++ code to interface with the Coral edgetpu libraries to run the net on images (in my case my code can grab live from a camera or from a file)
  • For DepthAI/MyriadX
    • Re-train without the quantization-aware portion of the pipeline config
    • Convert the frozen graph to openvino’s intermediate representation (IR)
    • Compile the IR to a MyriadX blob
    • Push the blob to a local depthai checkout

Let’s go through each stage of the pipeline:

Annotation

For my custom dataset I gathered sample video clips. I then exploded those clips into individual frames using ffmpeg:

ffmpeg -i $1 -r 1/1 $1_%04d.jpg

I then installed LabelImg and annotated all my classes. As I got more proficient with LabelImg I could do about one image every second or two – fast enough, but certainly not as fast as Tesla’s auto-labeler!

Note that on my MacBook, I found the following worked for getting LabelImg:

conda create --name deeplearning
conda activate deeplearning
pip install pyqt5==5.15.2 lxml
pip install labelImg
labelImg

Training

In my case I found that MobileNetV2 meets all my needs. Specifically, it is fast, fairly accurate, and it runs on all my devices (both in software and on the Coral and OAK-D Lite).

There are gobs of tutorials on training MobileNetV2 in general. For example, you can grab one of the great TensorFlow Docker images. By no means should you assume that once you have the image all is done – unless you are literally just running someone else’s tutorial. The moment you throw in your own dataset you’re going to have to do a number of things. And most likely they will fail. Over and over. So script them.

But before we get to that, let’s talk versions. I found that the models produced by TensorFlow 2 didn’t work well with any of my HW accelerators. Version 1.15.2 worked well, however, so I went with that. I even tried other versions of TensorFlow 1.x and had issues. I would like to dive into the cause of the issues – but have not done so yet.

See my GitHub repo for an example Dockerfile. Note that for my GPU (an RTX 2070 Super) I had to work around memory-growth issues by patching the TensorFlow Object Detection model_main.py. I also modified my pipeline batch size (to 6, from the default 24). Without these fixes the training would explode mysteriously with an unhelpful error message. If only those blasted bitcoin miners hadn’t made GPUs so expensive, perhaps I could upgrade to another GPU with more memory!
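For reference, the training run itself is the stock TF1 Object Detection API invocation – roughly the following, where the paths and step count are placeholders mirroring my setup rather than anything canonical:

# Run inside the TF 1.15 container, from the models/research directory
python object_detection/model_main.py \
    --pipeline_config_path=learn_tesla/pipeline.config \
    --model_dir=learn_tesla/training/ \
    --num_train_steps=100000 \
    --alsologtostderr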

It is also worth noting that the version of MobileNet I used, and the pipeline config, were different for the Coral and Myriad devices, e.g.:

  • Coral: ssd_mobilenet_v2_quantized_300x300_coco_2019_01_03
  • Myriad: ssd_mobilenet_v2_coco_2018_03_29

Update: it didn’t seem to matter which version of MobileNet I used – both work.

I found that I could get near a loss of 0.3 using my dataset and pipeline configuration after about 100,000 iterations. On my GPU this took about 3 hours, which was not bad at all.

Deploying to Coral

Coral requires quantization-aware training exported to TFLite. Once exported to TFLite you must compile it for the Edge TPU.
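As a rough sketch of that export-and-compile path using the TF1 Object Detection tooling (the checkpoint number, directories, and 300x300 input shape below are illustrative placeholders, not exact copies of my commands):

# Export the quantization-aware checkpoint to a TFLite-compatible frozen graph
python object_detection/export_tflite_ssd_graph.py \
    --pipeline_config_path=learn_tesla/pipeline.config \
    --trained_checkpoint_prefix=learn_tesla/training/model.ckpt-100000 \
    --output_directory=learn_tesla/tflite_export \
    --add_postprocessing_op=true

# Convert the frozen graph to a fully quantized .tflite
tflite_convert \
    --graph_def_file=learn_tesla/tflite_export/tflite_graph.pb \
    --output_file=learn_tesla/tflite_export/detect.tflite \
    --inference_type=QUANTIZED_UINT8 \
    --input_arrays=normalized_input_image_tensor \
    --output_arrays='TFLite_Detection_PostProcess','TFLite_Detection_PostProcess:1','TFLite_Detection_PostProcess:2','TFLite_Detection_PostProcess:3' \
    --input_shapes=1,300,300,3 \
    --mean_values=128 \
    --std_dev_values=128 \
    --allow_custom_ops

# Compile for the Edge TPU (produces detect_edgetpu.tflite)
edgetpu_compiler -o learn_tesla/tflite_export learn_tesla/tflite_export/detect.tflite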

On the Coral it is simple enough to change from the default MobileNet model to your custom one – literally, only the filenames change. You must point it at your new label map (so it can correctly map the class names) and your new model file.

Deploying to MyriadX

Myriad was a lot more difficult. To deploy a trained model one must first convert it to the OpenVINO IR format, as follows:

source /opt/intel/openvino_2021/bin/setupvars.sh && \
    python /opt/intel/openvino_2021/deployment_tools/model_optimizer/mo.py \
    --input_model learn_tesla/frozen_graph/frozen_inference_graph.pb \
    --tensorflow_use_custom_operations_config /opt/intel/openvino_2021/deployment_tools/model_optimizer/extensions/front/tf/ssd_v2_support.json \
    --tensorflow_object_detection_api_pipeline_config source_model_ckpt/pipeline.config \
    --reverse_input_channels \
    --output_dir learn_tesla/openvino_output/ \
    --data_type FP16

Then the IR can be converted to blob format by running the compile_tool command. I had significant problems with compile_tool, mainly because it didn’t like something about my trained output. In the end I found the cause: OpenVINO simply doesn’t like the nodes that quantization-aware training puts into the graph. Removing quantization from the pipeline solved this. However, this cuts the other way for Coral – it ONLY accepts quantization-aware training (caveat: in some cases you can post-training-quantize, but Coral explicitly states this doesn’t work in all cases).
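For reference, the compile_tool step looks roughly like this – the device options and SHAVE/CMX counts are the values commonly suggested for DepthAI, not necessarily what your model needs:

source /opt/intel/openvino_2021/bin/setupvars.sh && \
    /opt/intel/openvino_2021/deployment_tools/tools/compile_tool/compile_tool \
    -m learn_tesla/openvino_output/frozen_inference_graph.xml \
    -d MYRIAD \
    -ip U8 \
    -VPU_NUMBER_OF_SHAVES 4 \
    -VPU_NUMBER_OF_CMX_SLICES 4
# the resulting frozen_inference_graph.blob is written to the current directory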

Once the blob file was ready, I put the blob, and a JSON snippet, in the resources/nn/<custom> folder inside my depthai checkout. I was able to see full framerate (at 300×170), or about 10 fps at full resolution (from a file). Not bad!

Runtime performance

On the Google Coral I am able to create an extremely low-latency pipeline – from capture to inference in only a few milliseconds. Inference itself completes in about 20 ms with my fine-tuned model – so I am able to operate at full speed.

I have not fully characterized the MyriadX-based pipeline latency. Running from a file I was able to keep up at 30 fps. My Coral pipeline involved pulling the video frames down with my own C/C++ code – including the YUV-to-BGR colorspace conversion. Since the MyriadX can grab and process on-device, it does have the potential to be low latency – but this has yet to be tested.

Object detection using the MyriadX (DepthAI OAK-D Lite) and the Google Coral produced similar results.