I wanted to enable the Realtek RTL8111 NIC on the DFRobot DFR0767 dual-NIC carrier board for my Raspberry Pi Compute Module 4. The driver for this NIC does not come with (32-bit) Raspberry Pi OS by default; apparently it is included with the 64-bit version. To enable this driver I had to recompile the kernel. I followed the basic flow from this post, but (because I’m not as cool as the poster) I did not cross-compile. As a result my procedure was a lot simpler and more readable:
sudo apt-get update
sudo apt-get install flex bison libssl-dev bc -y
git clone --depth=1 https://github.com/raspberrypi/linux
cd linux
make bcm2711_defconfig
Add "CONFIG_R8169=m" to .config (no need to "make menuconfig")
make -j4 zImage modules dtbs
sudo make modules_install
sudo cp arch/arm/boot/zImage /boot/kernel7l.img
sudo cp arch/arm/boot/dts/overlays/*.dtb* /boot/overlays/
After this I rebooted, and all was right as rain: the new NIC was present! Compile time was only a couple of hours, which was quite impressive given it was done directly on the Pi (4 cores, 2 GB RAM).
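For reference, here is a minimal sketch of the two non-obvious bits: one way to make the .config edit from the command line, and a quick post-reboot sanity check that the driver and NIC actually appeared (the interface name is an assumption; yours may differ):

echo "CONFIG_R8169=m" >> .config   # append the Realtek driver as a module before building
lsmod | grep r8169                 # after reboot: confirm the module is loaded
dmesg | grep -i r8169              # look for the driver probe messages
ip link show                       # the second NIC should show up, e.g. as eth1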
A while back I bought a superhet 433 MHz transmitter/receiver pair (like this). The goal was to attach this simple device to a pi and control my blinds using the great rpi-rf package. Unfortunately, it appears that my particular remotes (one BY-305 controlling five blinds, one AC-123-06D controlling two blinds) emit codes that aren’t easily detected by rpi-rf.
Rather than try to get a PhD in reverse engineering the protocol (well, I did try, but I barely qualify as A.B.D.), I found the lifesaving rpi-rfsniffer package. I found that I could simply do the following:
rfsniffer --rxpin 7 record 2ndFloorS_All4_Up
This let me override the default receive pin to match where I had actually plugged in the receiver (pin 7, i.e. GPIO4). I then held down the “up” button on my 433 MHz remote. I repeated this procedure for up and down on both remotes, giving each recording session an appropriate name (the full set of captures is sketched below). One note: I found that rfsniffer sometimes waited far too long to terminate a recording. To limit it I modified line 62 of /usr/local/lib/python3.7/dist-packages/rfsniffer.py as follows:
if len(capture) < 16000 and GPIO.wait_for_edge(rx_pin, GPIO.BOTH, timeout=1000):
This limits the capture to 16000 samples (about 6 seconds or so).
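For completeness, the whole recording session ended up looking roughly like this, one capture per button per remote (the “Down” names are just my naming scheme; the actual names don’t matter as long as you reuse them at playback time):

rfsniffer --rxpin 7 record 2ndFloorS_All4_Up
rfsniffer --rxpin 7 record 2ndFloorS_All4_Down
rfsniffer --rxpin 7 record 2ndFloorN_Up
rfsniffer --rxpin 7 record 2ndFloorN_Down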
To play back the recordings I did the following:
rfsniffer --txpin 11 play 2ndFloorS_All4_Up
Again I overrode the TX pin, this time to pin 11 (GPIO17), and played back the samples I had just recorded.
To integrate this into Home Assistant I simply used the excellent command_line integration. This was quick and easy, and since the blinds report no status back, there probably isn’t much to improve upon.
This exposes a single entity, called ‘blinds_2ndfloor’, that has “on” and “off” buttons. The on script looks like this:
rfsniffer --txpin 11 play 2ndFloorN_Up
rfsniffer --txpin 11 play 2ndFloorN_Up
rfsniffer --txpin 11 play 2ndFloorN_Up
rfsniffer --txpin 11 play 2ndFloorS_All4_Up
rfsniffer --txpin 11 play 2ndFloorS_All4_Up
rfsniffer --txpin 11 play 2ndFloorS_All4_Up
The repetition proved necessary empirically: sometimes a single blind would not start if only one or two playbacks were made. Additionally, since there is only one pi, I found running all the commands serially was better than letting HA possibly try to send the north and south commands in parallel. The off script looks similar but, of course, plays back the “Down” samples (see below).
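For reference, here is roughly what the off script looks like, assuming the Down captures were named analogously to the Up ones:

rfsniffer --txpin 11 play 2ndFloorN_Down
rfsniffer --txpin 11 play 2ndFloorN_Down
rfsniffer --txpin 11 play 2ndFloorN_Down
rfsniffer --txpin 11 play 2ndFloorS_All4_Down
rfsniffer --txpin 11 play 2ndFloorS_All4_Down
rfsniffer --txpin 11 play 2ndFloorS_All4_Down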
With those scripts in place one can easily add HA automations to bring the blinds up or down based on the time of day. Eventually you need not touch anything in your house and can progress to a future of blissful automation taking care of everything and allowing us to evolve into our proper form:
Disclaimers:
For Docker-based HA: to enable SSH-based remote invocation, make sure /root/.ssh is mounted as a volume, then generate your SSH keypair inside the HA container. Then add the HA public key to your pi’s authorized_keys file.
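Roughly, something like the following (a sketch only: the container name homeassistant and the host pi@blinds-pi are placeholders, and it assumes ssh-keygen is available inside the container):

docker exec -it homeassistant ssh-keygen -t ed25519 -N "" -f /root/.ssh/id_ed25519   # keypair lands in the mounted volume
docker exec homeassistant cat /root/.ssh/id_ed25519.pub | ssh pi@blinds-pi 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'   # authorize HA on the pi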
I totally realize this method of capturing the RF signals is suboptimal: in theory, if you live in a busy RF environment you would be capturing, and then playing back, stray signals. The solution is obviously to do your RF captures in your neighborhood anechoic chamber.
The Raspberry Pi’s ecosystem makes it a tempting platform for building computer vision projects such as home security systems. For around $80, one can assemble a Pi (with SD card, camera, case, and power) that can capture video and suppress dead scenes using motion detection, typically with motion or MotionEye. This low-effort, low-cost solution seems attractive until one considers some of its shortfalls:
False detection events. The algorithm used to detect motion is susceptible to false positives: tree branches waving in the wind, clouds, etc. A user works around this by tweaking motion parameters (how BIG must an object be?) or masking out regions (don’t look at the sky, just the road).
Lack of high-level understanding. Even after tweaking the motion parameters, anything that moves is deemed of concern. There is no way to discriminate between a moving dog and a moving person.
The net result of these flaws, which all stem from a lack of real understanding, is wasted time. At a minimum the user is annoyed. Worse, they become fatigued and miss events, or stop responding entirely.
By applying current state-of-the-art AI techniques such as object detection and facial detection/recognition, one can vastly reduce the load on the user. To do this at full frame rate one needs to add an accelerator, such as the Coral Edge TPU.
In testing we’ve found fairly good accuracy at nearly full frame rate. Although Coral claims “400 fps”, that figure is for inference alone, not the full cycle of loading the image, running inference, and then examining the results. In real-world testing we found the full-cycle rate closer to 15 fps, which is still significantly better than the 2-3 fps one gets by running inference in software.
In terms of scalability, running inference on each Pi means the cameras carry their own compute, so adding cameras does not add load on the server. The server’s job is simply to log the video and metadata (object information, motion masks, etc.).
Here’s a rough sketch of such a system:
This approach is currently working successfully to provide the following, per rpi camera:
moving / static object detection
facial recognition
3D object mapping (speed / location determination)
This is all done at around 75% CPU utilization on a 2 GB Raspberry Pi 4B. The imagery and metadata are streamed to a central server, which performs no processing other than archiving the data from the cameras and serving it to clients (connected via an app or web page).
Recently I purchased a Raspberry Pi 4 to see how it performs on basic computer vision tasks. I largely followed this guide on building OpenCV: https://www.learnopencv.com/install-opencv-4-on-raspberry-pi/
However, the guide is missing several dependencies needed on a fresh install of Raspbian Buster. There are also some apparent errors in the CMakeLists.txt file, which other users have already discovered. After applying those fixes and adding the missing dependencies, I now have OpenCV running on my Pi. Here’s my complete script below.
IMPORTANT: You must READ the script, don’t just run it! For example, check the version of Python you have at the time you run it; when I ran it I was on Python 3.7. Also, feel free to bump to a later version (or master) of OpenCV.
Oh, also: the numpy step hung, and I was too lazy to look into it. It didn’t seem to affect my ability to use OpenCV, so I didn’t go back and dig. My bad.
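For what it’s worth, these are the two quick sanity checks I’d suggest before and after running the script (both trivial):

python3 --version                                   # mine was 3.7; adjust any version-specific paths accordingly
python3 -c "import cv2; print(cv2.__version__)"     # after the build: confirm OpenCV imports cleanly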
I purchased a couple of Raspberry Pi 3s. The major draw is that you no longer need an external Wi-Fi adapter, yet the cost is the same as the older Pis.
The very first thing I noticed was that SSH traffic was stuttery. This cleared up after an apt-get update/upgrade; everything was silky smooth thereafter.