Good Readings for Convolutional Neural Networks (CNNs)

https://brohrer.github.io/how_convolutional_neural_networks_work.html – This page did a good job of breaking down the individual operations that are common to all CNNs – ReLU, pooling, convolution, etc. After reading the article you can basically implement your own CNN – but without a lot of the advanced improvements that have made them faster and more powerful.

https://towardsdatascience.com/review-yolov3-you-only-look-once-object-detection-eab75d7a1ba6 – a review of YOLOv3; the same author has lots of reviews of other algorithms.

Dive into Deep Learning – a free book: http://d2l.ai

Making a Transparent Firewall

In the latest iteration of my home networking stack I am factoring the firewall out of my router into a discrete unit. I decided to try a transparent firewall, as this has the advantage of a reduced attack surface. Plus, I used it as a chance to try out nftables.

Hardware: I chose the Raspberry Pi Compute Module 4 with the DFRobot dual-NIC carrier. This board is able to push about 1 Gbit of traffic while consuming less than 2 watts of power – all in about 2 square inches. To enable the 2nd NIC on this board I had to recompile the kernel – see my Recompiling the Kernel on the Pi Compute Module 4 post for details.

Once this is done, we enable the serial port so we can manage the firewall out of band. I specifically do not want a listening port on the firewall (remember, it’s transparent!). To do this I hooked up an FT232 USB-to-serial converter to the pins on the DFRobot headers. I then connect to the firewall using minicom, with 115200 8N1 as the serial params. On the pi, make sure you run “raspi-config” and enable the serial port under the interfaces section.

I also turned off dhcpcd to ensure the pi is only passing traffic through – not actually on the network: “systemctl disable dhcpcd; systemctl stop dhcpcd”

On the pi I did have to install nftables: “apt-get install nftables”. Then we write some basic rules. Subjectively speaking, nftables is a lot nicer to write rules for than iptables, ebtables, etc. Here’s a brief sample:

table inet firewallip {
  chain c1 {
    type filter hook input priority 0; policy drop;
    meta nftrace set 1
    ip saddr 10.55.0.10 icmp type echo-request accept
    reject with icmp type host-unreachable
  }
}

A brief description of this example is apropos:

  • The table is named “firewallip” (the name doesn’t matter), but “inet” can be one of ip, arp, ip6, bridge, inet, or netdev (see https://wiki.nftables.org/wiki-nftables/index.php/Quick_reference-nftables_in_10_minutes)
  • The chain, c1 (again, the name doesn’t matter – although it makes sense to have it match the filter type), is instantiated with the type line – this is extremely significant, as rules will only match packets if the chain is hooked into the proper place.
  • meta nftrace set 1 (can be 0 or 1) lets us run “nft monitor trace” and trace the rules live (see the sketch after this list)
  • there are a LOT of things that can go inside a chain – however, it is worth noting that “reject” phrases such as the one I show are not applicable in some chain types, such as bridge.

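To tie that together, here is a minimal sketch of loading the ruleset and watching it evaluate live. It assumes the rules are saved in /etc/nftables.conf (the usual Debian/Raspberry Pi OS location) – adjust the path to taste:

# Load (or reload) the ruleset from a file, then confirm what the kernel has:
sudo nft -f /etc/nftables.conf
sudo nft list ruleset

# Because the chain sets "meta nftrace set 1", this prints each rule
# evaluation live as packets traverse the chain:
sudo nft monitor trace
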
Note that for connection tracking inside a bridge I actually had to rebuild my kernel yet again. If you don’t do this you’ll get “Protocol unsupported” errors when you try to do connection tracking inside a bridge. To enable this module, follow the same steps as in the procedure for adding the Realtek driver, except the menu item to enable in the kernel is under “Networking support” -> “Networking options” -> “Network packet filtering framework” -> “IPv4/IPv6 bridge connection tracking support”. Or just add “CONFIG_NF_CONNTRACK_BRIDGE=m” to your “.config” file inside the linux tree. Once this is done you need to ensure “nf_conntrack_bridge” is loaded: “modprobe nf_conntrack_bridge” should do it.

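If you want the module to come back after a reboot, one approach (assuming systemd, as on Raspberry Pi OS) is a modules-load.d entry:

# Load the module now, and have systemd load it automatically at boot:
sudo modprobe nf_conntrack_bridge
echo nf_conntrack_bridge | sudo tee /etc/modules-load.d/nf_conntrack_bridge.conf
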
At this point we can do things like connection tracking INSIDE the bridge, which is great. I won’t post my actual ruleset, but here’s a starting one that works well, allowing ssh/http/https from the LAN (assuming the LAN is on eth1), and only established connections and ARP otherwise.

table bridge firewall {
  chain input {
    type filter hook input priority 0; policy drop;
  }
  chain output {
    type filter hook output priority 0; policy drop;
  }
  chain forward {
    type filter hook forward priority 0; policy drop;
    meta protocol arp accept
    ct state established,related accept
    iifname eth1 tcp dport { ssh, http, https } accept
  }
}

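For completeness, the bridge itself has to exist before these chains ever see traffic. Here is a minimal sketch using iproute2; it assumes the two NICs are eth0 and eth1 and names the bridge br0 (you would make this persistent with your preferred network configuration rather than typing it every boot):

# Create the bridge and enslave both NICs. No IP addresses are assigned,
# so the firewall has no presence on the network -- that's the point.
sudo ip link add br0 type bridge
sudo ip link set eth0 master br0
sudo ip link set eth1 master br0
sudo ip link set eth0 up
sudo ip link set eth1 up
sudo ip link set br0 up
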
Unfortunately, all this fun on the pi isn’t as readily available on CentOS. CentOS 7.6 is still on a 3.x kernel, and 8.4 is still on a 4.x kernel. You can upgrade the kernel using elrepo-kernel (http://elrepo.org/tiki/kernel-ml).

Recompiling the Kernel on the Pi Compute Module 4

I wanted to enable the Realtek RTL8111 NIC on the DFRobot DFR0767 dual-NIC carrier board for my Pi Compute Module 4. The driver for this NIC does not come with the 32-bit Raspberry Pi OS by default; apparently it is included with the 64-bit version. To enable this driver I had to recompile the kernel. I followed the basic flow from this post, but (because I’m not as cool as the poster) I did not cross-compile. As a result my procedure was a lot simpler and more readable:

sudo apt-get update
sudo apt-get install flex bison libssl-dev bc -y
git clone --depth=1 https://github.com/raspberrypi/linux
cd linux
make bcm2711_defconfig
# Add "CONFIG_R8169=m" to .config (no need to run "make menuconfig")
make -j4 zImage modules dtbs
sudo make modules_install
sudo cp arch/arm/boot/zImage /boot/kernel7l.img
sudo cp arch/arm/boot/dts/overlays/*.dtb* /boot/overlays/

After this I rebooted, and all was right as rain: the new NIC was present! Compile time was only a couple of hours, which was quite impressive given it was done directly on the pi (4 cores, 2GB RAM).

Controlling 433 MHz Blinds from Home Assistant (the easy way)

A while back I bought a superhet 433 MHz transmitter/receiver pair (like this). The goal was to attach this simple device to a pi and control it using the great rpi-rf package. Unfortunately, it appears that my particular remotes (one BY-305 controlling five blinds, one AC-123-06D controlling two blinds) emit codes that aren’t easily detected by rpi-rf.

Rather than try to get a PhD in reverse engineering the protocol (well, I did try for it but barely qualify as A.B.D.) I found the life-saving rpi-rfsniffer package. I noticed that I could simply do the following:

 rfsniffer --rxpin 7 record 2ndFloorS_All4_Up

This let me override the default receive pin (I had the receiver wired to board pin 7, which is GPIO4). I then held down the “up” button on my 433 MHz remote. I repeated this procedure for up and down on both remotes, giving each recording session an appropriate name. One note: I found that rfsniffer sometimes waited far too long to terminate the recording. To limit it I modified line 62 of /usr/local/lib/python3.7/dist-packages/rfsniffer.py as follows:

if len(capture) < 16000 and GPIO.wait_for_edge(rx_pin, GPIO.BOTH, timeout=1000):

This limits the capture to 16000 samples (about 6 seconds or so).

To play back the recordings I did the following:

rfsniffer --txpin 11 play 2ndFloorS_All4_Up

Again I overrode the TX pin, using board pin 11 (GPIO17), to play back the samples I had just recorded.

To integrate this into Home Assistant I simply used the excellent command_line integration. This was quick and easy, and probably cannot be improved upon since there is no status coming back from the blinds.

switch:
  - platform: command_line
    switches:
      blinds_2ndfloor:
        command_on: ssh -o StrictHostKeyChecking=no user1@<pi1_ip> '/home/user1/blinds_south_up.sh'
        command_off: ssh -o StrictHostKeyChecking=no user1@<pi1_ip> '/home/user1/blinds_south_down.sh'

This exposes a single entity, called ‘blinds_2ndfloor’, that has “on” and “off” buttons. The “on” script looks like this:

rfsniffer --txpin 11 play 2ndFloorN_Up
rfsniffer --txpin 11 play 2ndFloorN_Up
rfsniffer --txpin 11 play 2ndFloorN_Up
rfsniffer --txpin 11 play 2ndFloorS_All4_Up
rfsniffer --txpin 11 play 2ndFloorS_All4_Up
rfsniffer --txpin 11 play 2ndFloorS_All4_Up

The repetition seemed necessary, empirically, as sometimes a single blind would not start if only one or two playbacks were made. Additionally, since there is only one pi, I found that running all the commands in series was better than letting HA possibly try to run the north and south transmitters in parallel. The script for “off” looks similar but, of course, plays back the “Down” samples.

With those scripts in place one can easily add HA automations to bring the blinds up or down based on the time of day. Eventually you need not touch anything in your house and can progress to a future of blissful automation taking care of everything, allowing us to evolve into our proper form:

A list of ways our society is already like Pixar's dystopia in WALL·E

Disclaimers:

  • For docker-based HA: To enable ssh-based remote invocation, mount /root/.ssh as a volume, then generate your ssh keypair inside the HA container. Then add the HA public key to the pi’s authorized_keys file (see the sketch after this list).
  • I totally realize this method of capturing the RF signals is suboptimal: in theory, if you live in a busy RF environment, you could be capturing – and then playing back – stray signals. The solution, obviously, is to do your RF captures in your neighborhood anechoic chamber.

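For the first disclaimer above, here is a rough sketch of the key setup. It assumes the HA container is named homeassistant and that ssh-keygen is available inside it; if it is not, generate the keypair on the host directly into the mounted /root/.ssh volume instead.

# Generate a keypair inside the HA container (it lands in the mounted /root/.ssh):
docker exec -it homeassistant ssh-keygen -t ed25519 -f /root/.ssh/id_ed25519 -N ""
# Print the public key, then append it to /home/user1/.ssh/authorized_keys on the pi:
docker exec -it homeassistant cat /root/.ssh/id_ed25519.pub
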
Physically Disconnecting the Speaker and Microphone on the Wyzecam V3

The Wyzecam v3 comes with some great features – namely the $20 price tag and excellent starlight sensor. It also comes with a microphone and speaker, both of which have their downsides. For those who wish to disconnect them (say, for privacy reasons) – and don’t fully trust the software “disable” – one can physically disconnect them without damaging the camera.

NOTE: if you plan to disconnect the speaker you should probably do this only AFTER setting up the camera, as it provides voice prompts during the setup.

Estimated time: This takes less than 5 minutes.

Step 1 – Use a plastic spudger such as this one for $1.99 from iFixit.

Step 2 – Guide the spudger under the outside of the white rim on the front of the camera. Run it around the ENTIRE white rim to loosen the underlying adhesive. Be careful not to push it too far under the rim as it will mar the adhesive tape and thereby decrease re-assembly quality.

Note the sticky tape on the back of the white insert. You want to avoid marring this as it will affect the reassembled product.

Step 3 – Use the pointy end of the tool to carefully remove the three white inserts. This part is easy – but if you get it wrong it will be VERY hard to get the underlying screws out! Tip: push on the far side of the squishy insert to cause it to rotate, then you can carefully tweezer it out.

Step 4 – Use a small screwdriver to loosen the three Phillips screws. Yes – only three; if Wyze had a fourth hole and screw the price would be much higher.

Step 5 – Carefully insert the spudger in between the white case and the black front. This is the trickiest part! You don’t want to damage the red moisture seal just underneath the black front. To avoid damaging it, do not repeatedly pry at the black front – instead get the tool just under the edge and lift.

Step 6 – Once you have carefully lifted out the black portion, the electronics slide out easily. The mic and speaker are on the bottom of the assembly. You can use the tool to carefully loosen the connectors. This should allow easy reconnection later if desired.

With microphone and speaker disconnected

Step 7 – Reassembly. Push the assembly back into the case. Insert the three screws and tighten. Carefully push the white inserts back in. Re-attach the white rim.

The reassembled product – you can’t even tell it was modified – which is kinda the point!

Comparison of Wyzev3 Sensor vs RPi IMX327

One of the best low-light sensors for the raspberry pi is the Sony IMX327 (available here: https://www.inno-maker.com/product/mipi-cam-327/ for around $90). The wyzev3 offers similar starlight performance for $20 – a fraction of the cost. But how do they compare?

IMX327
Wyzev3

As can be seen, both images capture the overall scene well. The white balance of the IMX327 was not adjusted, leaving its image seeming a bit “warmer” than the Wyzev3’s.

As for detail, a few important differences pop out:

  • The wyze seems to saturate around light sources – especially pronounced near the streetlamp and the car’s brake lights.
  • The wyze has a slightly wider field of view
  • The wyzev3 doesn’t seem to capture some detail, such as the lettering on the stop sign, as well as the IMX327 – this is especially apparent when zooming in on the full size image.
  • The wyze seems better at keeping details crisp in “busier” sections of the scene – examples include the ground around the forest on the right of the image
  • The wyze seems to suffer more motion blur. I believe the wyze achieves some of its performance by combining multiple frames, which improves low-light performance but smears together details.

While the IMX327 seems to have potentially better quality, cost almost hands the victory to the Wyzev3. By the time you add a fully-equipped Pi 4 to the IMX327 your cost is close to $160. For half that you could point four wyze cameras at the problem and have vastly better combined video coverage. Add to this that the Wyzev3 includes IR LEDs (near and far), microphones, a speaker, and can be set up by a non-PhD.

The only caveat here is that the Wyzev3 is a “closed” product. Unless open-source firmware can be loaded on it, it will never be as secure as the pi solution – users of the Wyzev3 are at the mercy of Wyze to protect their data. To this end there is some hope that Wyze will release an RTSP version of their firmware, as they did for their v2 product.

HA MQTT Auto Discovery

As described on their official MQTT Auto Discovery page, Home Assistant allows one to create sensors on the fly. This is particularly important if you are, say, trying to add some non-trivial number of devices that speak MQTT.

I saw a lot of posts about people struggling to get this working. I too had a few issues and felt it worth sharing my take on it. Spoiler alert: HA MQTT auto discovery works perfectly, so long as you get past some of the setup nuances. I thought I’d document these for posterity:

  • If you added MQTT support through the GUI, auto discovery is set to false by default. There doesn’t appear to be any way to fix this through the GUI, and if you then add MQTT to your configuration.yaml, the GUI config overrides it! The only way to overcome this is to delete MQTT from the GUI, add it in configuration.yaml, restart HA, etc.
  • To publish a value there is a two-step process (see the sketch after this list):
    1. Publish to the configuration topic to define the sensor. The concept is well documented at the main link given above – however, it is important to note that you do NOT have to provide the configuration for all values in an entity with multiple values. This is actually a nice feature – and it works so long as you always publish the config before sending the value. It was not clear to me (and I didn’t investigate) whether there is any good way to minimize the need to broadcast configurations; e.g. how do you know HA saw your config before you send state? To be safe I just publish the config and the state one after another every time. This doesn’t seem optimal, but it works.
    2. Publish the state via the state topic passed in during the config publish. The only thing to note here is that, unfortunately, values do not appear to get rooted under the object id you provide. Instead they are placed under a sensor with the name you provide – seemingly HA completely ignores the object id…

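Here is a sketch of that two-step flow using mosquitto_pub from the shell. The broker address and credentials match the mosquitto_sub example further down; the “grinch_temp” object id, the entity name, and the temperature field are made up for illustration:

# Step 1: publish the discovery config. The topic format is
# <discovery_prefix>/<component>/<object_id>/config.
mosquitto_pub -h localhost -u user -P pass \
  -t 'homeassistant/sensor/grinch_temp/config' \
  -m '{"name": "Grinch Temperature", "state_topic": "homeassistant/sensor/grinch/state", "unit_of_measurement": "°F", "value_template": "{{ value_json.temperature }}"}'

# Step 2: publish the state to the state_topic declared above.
mosquitto_pub -h localhost -u user -P pass \
  -t 'homeassistant/sensor/grinch/state' \
  -m '{"temperature": 68.5}'
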
Otherwise this feature works very well. Some tips I found for debugging this:

  • Enabling logging on MQTT works well. To do this, add to configuration.yaml:
logger:
  default: warning
  logs:
    homeassistant.components.mqtt: debug

You can then tail (if under docker) config/home-assistant.log

  • Using the “Publish a packet” or “Listen to a topic” pages under MQTT -> Configure (in the HA Configuration -> Integrations page) is good for eliminating any client/broker issues you might see
  • Additionally, if you are using eclipse-mosquitto as your MQTT server, you can directly view publishes (HA aside). In docker it would be:
docker exec -it <mosquittodockerid> /bin/sh
mosquitto_sub -u user -P pass -h localhost -t topic

Where “topic” above is whatever you are publishing, e.g. “homeassistant/sensor/grinch/state”

Lastly, in looking for a good C++ MQTT client, I tried Paho and mosquitto. I found mosquitto worked best – both for the server (which I run in Docker) and the C++ client. My decision was based on these simple requirements: the library should build (Paho failed), work with C++11 (other, albeit cool-looking, C++ libraries required C++14), and be easy. Mosquitto fit the bill perfectly. I am sure there could be other libraries that might work as well – I just never found them.

InfluxDB Queries on HomeAssistant Data

I recently added InfluxDB + Grafana + MariaDB to one of my HA instances. I was surprised at how easy it was to make HA happy – essentially nothing more than running the docker instances for each of these components and adding minimal yaml to HA.

When I went to query the data there were a few surprises that I thought I’d note. First, as noted in https://community.home-assistant.io/t/missing-sensors-in-influxdb/144948, HA puts sensors that have a unit of measurement inside a measurement of that name. Thus if I show measurements that match my model_y, for example, all I get are binary (i.e. unitless) sensors:

> show measurements with measurement =~ /sensoricareabout/
name: measurements
name
----
binary_sensor.sensoridontcareabout1
climate.sensoridontcareabout2
device_tracker.sensoridontcareabout3
...

However if I query the series similarly, I see the measurement I care about:

> show series where "entity_id" =~ /sensoricareabout/
key
---
°F,domain=sensor,entity_id=sensoricareabout_foo_1
°F,domain=sensor,entity_id=sensoricareabout_foo_2

Thus, oddly, to query the data I want I form a query something like this:

 select value from "°F" where entity_id = 'sensoricareabout_foo_1'

A couple of things. First, it is odd to me to have to query from what I essentially view as a table named °F… but that is probably more HA’s fault than InfluxDB’s. Second, I think the quoting is a little weird. Note above that the measurement is double-quoted, whereas the tag value is single-quoted. This is important. You can sometimes (maybe?) leave the double quotes off the former, but the latter must strictly be single-quoted…

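If you would rather poke at this from the command line than from Grafana, the same query can be run through the InfluxDB 1.x CLI inside the container. The container name and database name here are assumptions – substitute whatever your setup uses:

# Run the query inside the InfluxDB container (1.x "influx" CLI):
docker exec -it influxdb influx -database homeassistant \
  -execute "select value from \"°F\" where entity_id = 'sensoricareabout_foo_1'"
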
The moral of the story is that if you want to find the data you are looking for, you need to look under ‘%’ or the actual unit of measurement. For example, in Grafana I would:

  • click on explore
  • select ‘%’
  • Add a where clause to filter by entity_id
  • Possibly find my series there…
    • But if not, remove the where clause, switch from ‘%’ to the unit of measurement, and repeat with the where clause added back

This all works well but is a little unintuitive, and might make one think HA has forgotten to put the data into InfluxDB.

Architecture For DIY AI-Driven Home Security

The raspberry pi’s ecosystem makes it a tempting platform for building computer vision projects such as home security systems. For around $80, one can assemble a pi (with SD card, camera, case, and power) that can capture video and suppress dead scenes using motion detection – typically with motion or MotionEye. This low-effort, low-cost solution seems attractive until one considers some of its shortfalls:

  • False detection events. The algorithm used to detect motion is susceptible to false positives – tree branches waving in the wind, clouds, etc. A user works around this by tweaking motion parameters (how BIG must an object be?) or masking out regions (don’t look at the sky, just the road).
  • Lack of high-level understanding. Even after tweaking the motion parameters, anything that is moving is deemed of concern. There is no way to discriminate between a moving dog and a moving person.

The net result of these flaws – which all stem from a lack of real understanding – is wasted time. At a minimum the user is annoyed. Worse, they become fatigued and miss events, or stop responding entirely.

By applying current state-of-the-art AI techniques such as object detection and facial detection/recognition, one can vastly reduce the load on the user. To do this at full frame rate one needs to add an accelerator, such as the Coral TPU.

In testing we’ve found fairly good accuracy at almost full frame rate. Although Coral claims “400 fps” of speed, that is inference only – not the full cycle of loading the image, running inference, and then examining the results. In real-world testing we found the full-cycle results closer to 15 fps. This is still significantly better than the 2-3 fps one obtains by running in software.

In terms of scalability, running inference on the pi means we can scale endlessly. The server’s job is simply to log the video and metadata (object information, motion masks, etc.).

Here’s a rough sketch of such a system:

This approach is currently working successfully to provide the following, per rpi camera:

  • moving / static object detection
  • facial recognition
  • 3d object mapping – speed / location determination

This is all done at around 75% CPU utilization on a 2GB RPi 4B. The imagery and metadata are streamed to a central server, which performs no processing other than archiving the data from the cameras and serving it to clients (connected via an app or web page).

Why a Tesla Model Y Is The Best Car Ever

Lots of people have opinions. These are mine as to the supremacy of the Model Y:

  • Purchase experience. In case the entire world has not clearly understood what the past 20 years have increasingly shown: people would rather order online than in person. Let that sink in deeply if it gives you pause. Here’s how Tesla makes that work:
    • You can test drive a Tesla – this is entirely optional. I test drove a Model S and Model Y before deciding sedans are the worst and the Y, being “not a sedan”, was right for me. I am now convinced all the test drive did was excite me to make the purchase – however I now know, as described next, that it was entirely unnecessary.
    • When you go to pick up your car a few weeks later (it took 4 weeks for me, at a time when Tesla was advertising 6-8), you spend ~1 hour inspecting the car and ensuring the title is transferred. Better yet, you get the car for 1,000 miles or 7 days (whichever comes first) – then you can send it back at zero cost. This is an amazing deal. I’m not sure you get that kind of guarantee from any major car company.
    • Tesla will fix any delivery-related issue at no cost. When I accepted delivery I noted some superficial items that needed fixing:
      • Grease from a seat on the rear carpet. The Tesla tech cleaned it up on the spot.
      • Scuffed portable charger. The charger looked like someone had been carrying it around in their (large) pockets for a few weeks. The tech swapped it. I didn’t like the one he found any better, so he swapped it again. The guy was VERY accommodating.
      • The windshield washer fluid connector was disconnected. I didn’t discover this until AFTER I drove home, but thankfully the connector was under the hood and I simply snapped it back together.
      • Paint missing on bolts around the rear liftgate and front doors. I didn’t notice the liftgate bolt problem until recently so I have not bothered to fix it, but the paint on the front door bolts was fixed on the spot during delivery.
  • Service. After accepting delivery I noticed a few issues: Tesla came TO MY HOUSE and fixed them at zero cost. It turns out they had to come twice due to missing parts. The issues they fixed were minor:
    • There was a scratch on part of the passenger sun visor. They replaced it
    • The inner liner had slumped down (near sunroof). They reconnected it. If it happens again they recommended taking it in.
    • The weather seals around the back doors were detaching – they put on new ones
    • The Tesla floor markers inside the front doors were wiggly – they removed them and put new ones on
  • Autopilot.
    • Having Autopilot on is best described in one word: soothing. It’s like having a buddy drive for you – almost. You do have to let Autopilot know you’re not asleep by nudging the wheel every 15-30 seconds, but even then your role as driver is more that of a supervisor of a much more aware computer. It has more eyes than you, including sonar and radar, and keeps track of 1) what is around you, 2) where you’re going (your route), and 3) when you need to refuel (it even picks where you should stop to supercharge).
    • At least once Autopilot saved me when I became drowsy. Good luck getting that in a normal car.
    • It does an excellent job on highways – making complicated interchanges correctly where I as a human have failed.
  • Speed. While Tesla does sell a Performance Model Y, the standard Long Range left me feeling like I was manning a rocket ship. After a month of driving the Tesla I went back to our other, gas-powered car. I used to think it was a fast car – but it feels like a joke compared to the Tesla. I’m convinced that if I were raised on Teslas and then asked to drive any standard gas-powered car, I would think it was broken – it doesn’t even appear to accelerate compared to the Tesla.
  • Charging. There are two main components here:
    • Supercharging. I took a trip across the country to see how the Y and the Supercharger network would do together. It was absolutely a pleasure; we ALWAYS had enough energy to make it between Superchargers. The only problem we found was coming back from UT: we wanted to drop down to Nauvoo, IL, but we were on HWY 80, and the only way to hit Nauvoo and stay on our trip was to drop straight down from Davenport, then go back to Davenport. The better, time-saving route would have been to drop from 80 to Nauvoo, then continue on HWY 74 – however this was not possible, as there wasn’t a Supercharger near enough to let us continue on that route, so we ended up ditching Nauvoo. Moral: while the Supercharger network is currently very good, it needs to be several times larger – something like the network of gas stations.
    • Home charging. I spend seconds connecting my charger to my wall at home. No more gas stations. No more oil changes. The best.
  • Being part of the future. I love time- and life-saving technology. There is nothing more human, in my opinion, than trying to improve. The Tesla is like a breath of fresh air in an industry that hasn’t changed much in a generation. It is layers of improvement all at once in a single impressive package. It hits on several themes:
    • Less is more. As in less buttons is more better. My Odyssey has over 60 buttons up front. The Tesla has 6ish – 4 for the windows, 2 for the door locks. Everything else is on that one big beautiful screen.
    • Audio commands that work. I can tell it to navigate, call people on the phone, and send text messages, all while NOT diverting my eyes from the road if I don’t want to.
    • Over-the-air updates. I have received an update every week or so, improving things such as efficiency, Autopilot, etc. I truly feel like the car is getting better as I own it.
    • Electricity vs. gas. Other than the fact that it takes a lot longer to charge than to fill a tank of gas, Tesla has shown the Model Y is superior in almost every way. While we clearly have a long way to go – maybe faster Superchargers or longer-range batteries would help – for most people their daily commute is entirely drivable with the Model Y today, with the recharging done overnight at home.

As I was contemplating purchasing a new Odyssey or the Model Y, I wasn’t sure the Model Y would be as good as the Odyssey. I took a risk on the Model Y. After the past few months I now hold the opposite view: there is no way anyone could tempt me to sink a dollar into a new gas-powered car. The features gas-powered cars offer are fairly irrelevant – fancy interiors and updated trims. Who wants to smell gas fumes in their garage just because they had to move the car? Who really wants to go to the gas station or have their oil changed? Who really wants to visit a dealership to find a car? Or take it to a repair shop to fix its vastly more complicated (and, from personal experience, breakable) interiors? Not I.

In short: the big car makers’ approach (end to end) is leaving the gas-powered industry in Tesla’s dust. My message to them (in case they’re reading this humble blog): be like Tesla, or you’re going to die.