Home Assistant Dynamic Entity Lists with Entity-Dependent Style

Home Assistant is very powerful – possessing a great deal of flexibility both in what can be monitored and in how the monitored state is displayed. Much of the display flexibility comes from a combination of the Jinja templating engine and custom components.

Here is one example of this flexibility in action: displaying a dynamic list of sensors, coloring each status LED differently depending on the sensor state. More specifically: the ping status for a set of devices – in this case the ping status of all my Raspberry Pi cameras.

To display a dynamic list of sensors I first helped myself by giving my devices a consistent name prefix – e.g. “pi-cam-X”, where X is the instance. I then installed the wonderful auto-entities plugin and specified an inclusion filter of “/pi-cam/”.

Next, to perform the styling, I downloaded the template-entity-row plugin, which allows applying a template to each entity row. The card YAML is actually fairly compact and readable:

type: custom:auto-entities
card:
  type: entities
filter:
  include:
    - name: /pi-cam/
      options:
        style: |
          :host {
            --card-mod-icon-color: {{ 'rgb(221,54,58)' if is_state('this.entity_id','off')  else 'rgb(67,212,58)' }};
          }
        type: custom:template-entity-row
        icon: mdi:circle
        state: >-
          {% if is_state('this.entity_id','off') %} Down {% else  %} Up {% endif %}

To be honest, it took me far too long to accept that this kind of flexibility is not available out of the box – it is too bad some of these add-ons are not simply part of core HA, as they feel like core functionality.

The “out of the box” display is on the left. The ostensibly improved status is on the right. In truth I don’t see a huge need for displaying textually what the red/green LED already shows, but I much prefer “Up/Down” to “Connected/Disconnected”.

Putin Has Lost His Ukrainian War

Putin has lost the war for one simple reason: he never had a cause to start it in the first place – and everyone knows it.

What he is doing might have worked 50 years ago – when the skies weren’t monitored by non-military satellites. When we didn’t have social media showing us the people we already knew and loved in Ukraine being murdered. We can plainly see what he is doing.

We knew he was coming weeks in advance. The US took away any shred of surprise he might have had – if he indeed had any, having parked 100k+ troops right on Ukraine’s doorstep for a month. We saw the Ukrainians’ good faith destroyed.

What Putin needs is help thinking, as his brain seems rotten. Here is some help I offer:

  • “We have to de-nazify Ukraine.” The Nazis were those who decided they didn’t like a group of people, invaded their country, and killed them. That is what the Russians are doing. The Jews were the people who, sadly, took the abuse from the Nazis. The Ukrainians are the Jews. Nobody is buying what Putin is saying here because it doesn’t square with what he is doing. He is killing Ukrainians. He is being the Nazi. What Putin should be saying here is: “Russia has long desired to emulate the Nazis and will now do so by invading Ukraine.” Very faithfully said, Mr. Putin.
  • “The Ukrainian soldiers are using their people as human shields.” Since Putin is attacking Ukraine, EVERY UKRAINIAN is a defender – not just the army. No human shields exist – just the men, women, and children of Ukraine he is killing. Ukraine is justified in defending itself. Russia is not justified in attacking any individual in Ukraine. Here is what Putin meant: “Russia will kill innocent Ukrainians indiscriminately.” I give Putin a 10/10 for this honest statement.
  • “Any foreigner who joins the Ukrainians will meet consequences they have never seen.” Here Putin wants to deter any decent human being from helping defend the victim he is trying to murder. He is threatening nuclear war against any who might want to help. This is further proof he has no cause – if we needed further proof – for with a real cause for war, you would hope for allies to stand at your side in doing what is right. Putin has no right to what he is doing. He knows it. He also knows he has a slim chance of defeating Ukraine – because nobody in Ukraine wants Russia (except a narrow strip of Russians in the east, which was already somewhat separate). He will only “win” if he basically beats the Ukrainians into submission over a long stretch of time. Anyone helping would totally frustrate his plans. Sadly, the West appears to be listening to Putin. What the world needs is to not be afraid. Putin can’t progress in a war that the world opposes. He can only progress if people are scared to oppose him. So really all he meant to say was “Russia would rather destroy the entire world than lose a war they unjustly started.”

I have no ill will against the Russian people who were stuck with Putin when he started this war. It is their fault if they keep him around; if they do, they are complicit with him. Putin can arrest tens of thousands of people – but not millions. If his own people stood up to him, they could clear their own guilt. Otherwise they are just like the Germans in WWII who stood by as the worst evils humankind can commit were perpetrated by their army.

The history of the Ukraine war, with respect to Putin, has been written. He’s guilty beyond measure and everyone knows it. He can shell innocent civilians only so long as his own troops are blinded to this fact – only so long as his people sit quietly by. History has written his portion of this conflict. It anxiously awaits to see how the Ukrainian liberation will unfold.

Fine-Tuned MobilenetV2 on MyriadX and Coral

Fine-tuning MV2 using a GPU is not hard – especially if you use the TensorFlow Object Detection API in a Docker container. It turns out that deploying a quantization-aware model to a Coral Edge TPU is also not hard.

Doing the same thing for MyriadX devices is somewhat harder. There doesn’t appear to be a good way to take your quantized model and convert it to MyriadX. If you try to convert your quantized model to OpenVINO you get recondite errors; if you turn up logging on the tool you see the errors happen when it hits “FakeQuantization nodes.”

Thankfully, for OpenVINO you can simply retrain without quantization and things work fine. As such you seem to end up training twice – which is less than ideal.

Right now my pipeline is as follows:

  • Annotate using LabelImg – installable via brew (OS X)
  • Train (using my NVIDIA RTX 2070 Super). For this I use TensorFlow 1.15.2 and a specific hash of the tensorflow models dir. Using a different version of 1.x might work – but using 2.x definitely did not, in my case.
  • For Coral
    • Export to a frozen graph
    • Export the frozen graph to tflite
    • Compile the tflite bin to the edgetpu
    • Write custom C++ code to interface with the Coral edgetpu libraries to run the net on images (in my case my code can grab live from a camera or from a file)
  • For DepthAI/MyriadX
    • Re-train without the quantization-aware portion of the pipeline config
    • Convert the frozen graph to OpenVINO’s intermediate representation (IR)
    • Compile the IR to a MyriadX blob
    • Push the blob to a local depthai checkout

Let’s go through each stage of the pipeline:

Annotation

For my custom dataset I gathered a set of video clips. I then exploded those clips into frames using ffmpeg:

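# -r 1/1 emits roughly one frame per second from the input clip ($1)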
ffmpeg -i $1 -r 1/1 $1_%04d.jpg

I then installed LabelImg and annotated all my classes. As I got more proficient in LabelImg I could do about one image every second or two – fast enough, but certainly not as fast as Tesla’s auto labeler!

Note that for my MacBook, I found the following worked for getting labelImg:

conda create --name deeplearning
conda activate deeplearning
pip install pyqt5==5.15.2 lxml
pip install labelImg
labelImg

Training

In my case I found that MobileNetV2 meets all my needs. Specifically, it is fast, fairly accurate, and it runs on all my devices (both in software and on the Coral and OAK-D-Lite).

There are gobs of tutorials on training MobileNetV2 in general. For example, you can grab one of the great TensorFlow Docker images. By no means should you assume that once you have the image all is done – unless you are literally just running someone else’s tutorial. The moment you throw in your own dataset you’re going to have to do a number of things. And most likely they will fail. Over and over. So script them.

But before we get to that, let’s talk versions. I found that the models produced by TensorFlow 2 didn’t work well with any of my HW accelerators. Version 1.15.2 worked well, however, so I went with that. I even tried other versions of TensorFlow 1.x and had issues. I would like to dive into the cause of those issues – but have not done so yet.

See my GitHub repo for an example Dockerfile. Note that for my GPU (an RTX 2070 Super) I had to work around memory-growth issues by patching the TensorFlow Object Detection model_main.py. I also modified my pipeline batch size (to 6, from the default 24). Without these fixes the training would explode mysteriously with an unhelpful error message. If only those blasted bitcoin miners hadn’t made GPUs so expensive, perhaps I could upgrade to another GPU with more memory!

It is also worth noting that the version of MobileNet I used, and the pipeline config, were different for the Coral and Myriad devices, e.g.:

  • Coral: ssd_mobilenet_v2_quantized_300x300_coco_2019_01_03
  • Myriad: ssd_mobilenet_v2_coco_2018_03_29

Update: it didn’t seem to matter which version of MobileNet I used – both work.

I found that I could get the loss down to around 0.3 with my dataset and pipeline configuration after about 100,000 iterations. On my GPU this took about 3 hours, which was not bad at all.
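For reference, the training launch and frozen-graph export look roughly like this with the TF1 Object Detection API – treat the directories, step count, and checkpoint number as placeholders rather than my exact values:

# Launch training with the TF1 Object Detection API (paths and step count are placeholders)
python object_detection/model_main.py \
    --pipeline_config_path=pipeline.config \
    --model_dir=training/ \
    --num_train_steps=100000 \
    --alsologtostderr

# Export a regular frozen graph (this is what the OpenVINO/MyriadX path consumes)
python object_detection/export_inference_graph.py \
    --input_type=image_tensor \
    --pipeline_config_path=pipeline.config \
    --trained_checkpoint_prefix=training/model.ckpt-100000 \
    --output_directory=learn_tesla/frozen_graph/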

Deploying to Coral

Coral requires quantization-aware training, exported to TFLite. Once exported to TFLite you must compile the model for the Edge TPU.
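A sketch of those steps using the TF1 Object Detection API tooling – again the directories and checkpoint number are placeholders, and the 300×300 input shape assumes the quantized SSD MobileNet pipeline above:

# Export the quantization-aware checkpoint to a TFLite-compatible frozen graph
python object_detection/export_tflite_ssd_graph.py \
    --pipeline_config_path=pipeline.config \
    --trained_checkpoint_prefix=training/model.ckpt-100000 \
    --output_directory=tflite_export/ \
    --add_postprocessing_op=true

# Convert it to a fully quantized .tflite
tflite_convert \
    --graph_def_file=tflite_export/tflite_graph.pb \
    --output_file=tflite_export/detect.tflite \
    --inference_type=QUANTIZED_UINT8 \
    --input_arrays=normalized_input_image_tensor \
    --output_arrays='TFLite_Detection_PostProcess','TFLite_Detection_PostProcess:1','TFLite_Detection_PostProcess:2','TFLite_Detection_PostProcess:3' \
    --input_shapes=1,300,300,3 \
    --mean_values=128 \
    --std_dev_values=128 \
    --allow_custom_ops

# Compile for the Edge TPU (produces detect_edgetpu.tflite)
edgetpu_compiler -o tflite_export/ tflite_export/detect.tflite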

On the Coral it is simple enough to change from the default MobileNet model to your custom one – literally only the filenames change. You must point it at your new label map (so it can correctly map the class names) and your new model file.

Deploying to MyriadX

Myriad was a lot more difficult. To deploy a trained model one must first convert it to the OpenVINO IR format, as follows:

source /opt/intel/openvino_2021/bin/setupvars.sh && \
    python /opt/intel/openvino_2021/deployment_tools/model_optimizer/mo.py \
    --input_model learn_tesla/frozen_graph/frozen_inference_graph.pb \
    --tensorflow_use_custom_operations_config /opt/intel/openvino_2021/deployment_tools/model_optimizer/extensions/front/tf/ssd_v2_support.json \
    --tensorflow_object_detection_api_pipeline_config source_model_ckpt/pipeline.config \
    --reverse_input_channels \
    --output_dir learn_tesla/openvino_output/ \
    --data_type FP16

Then the IR can be converted to blob format by running the compile_tool command. I had significant problems with compile_tool, mainly because it didn’t like something about my trained output. In the end I found the cause: OpenVINO simply doesn’t like the nodes that quantization-aware training puts into the graph. Removing that from the pipeline solved this. However, this cuts the other way for Coral – it ONLY accepts quantization-aware training (caveat: in some cases you can use post-training quantization, but Coral explicitly states this doesn’t work in all cases).
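A sketch of the compile_tool step – the file names and the shave/CMX counts here are placeholders (the right counts depend on what else is running on the device):

source /opt/intel/openvino_2021/bin/setupvars.sh && \
    /opt/intel/openvino_2021/deployment_tools/tools/compile_tool/compile_tool \
    -m learn_tesla/openvino_output/frozen_inference_graph.xml \
    -d MYRIAD \
    -ip U8 \
    -VPU_NUMBER_OF_SHAVES 4 \
    -VPU_NUMBER_OF_CMX_SLICES 4 \
    -o frozen_inference_graph.blob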

Once the blob file was ready I put the blob, and a JSON snippet, in the resources/nn/<custom> folder inside my depthai checkout. I was able to see full framerate (at 300×170), or about 10fps at full resolution (from a file). Not bad!

Runtime performance

On the Google Coral I am able to create an extremely low-latency pipeline – from capture to the start of inference in only a few milliseconds. Inference itself completes in about 20ms with my fine-tuned model – so I am able to operate at full speed.

I have not fully characterized the MyriadX-based pipeline latency. Running from a file I was able to keep up at 30 fps. My Coral pipeline involved pulling the video frames down using my own C/C++ code – including the YUV->BGR colorspace conversion. Since the MyriadX can grab and process on the device itself, it has the potential to be low latency – but this has yet to be tested.

Object detection using the MyriadX (depthai oakd-lite) and Google Coral produced similar results.

Good Readings for Convolutional Neural Networks (CNNs)

https://brohrer.github.io/how_convolutional_neural_networks_work.html – This page did a good job of breaking down the individual operations that are common to all CNNs – ReLU, pooling, convolution, etc. After reading the article you can basically implement your own CNN – but without a lot of the advanced improvements that have made them faster and more powerful.

https://towardsdatascience.com/review-yolov3-you-only-look-once-object-detection-eab75d7a1ba6 has lots of reviews of detection algorithms (this one covers YOLOv3)

Dive into Deep Learning – a free book: http://d2l.ai

Making a Transparent Firewall

In the latest iteration of my home networking stack I am factoring my firewall out of my router into a discrete unit. I decided to try a transparent firewall, as this has the advantage of a reduced attack surface. Plus I used it as a chance to try out nftables.

Hardware: I chose the Raspberry Pi Compute Module 4 with the DFRobot dual-NIC carrier. This board is able to push about 1Gbit of traffic while consuming less than 2 watts of power! All this in about two square inches. To enable the 2nd NIC on this board I had to recompile the kernel – see my Recompiling the Kernel on the Pi Compute Module 4 post on this.

Once this is done, we enable the serial port so we can manage the firewall out of band. I specifically do not want a listening port on the firewall (remember, it’s transparent!). To do this I hooked up an FT232 USB->serial converter to the pins on the DFRobot headers. I then connect to the firewall using minicom, with 115200 8N1 for the serial parameters. Inside the pi, make sure you run “raspi-config” and enable the serial port under the interfaces section.
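From the machine doing the managing, connecting is a one-liner (assuming the FT232 enumerates as /dev/ttyUSB0):

minicom -b 115200 -D /dev/ttyUSB0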

I also turned off dhcpcd to ensure the pi only passes traffic through – it is not actually on the network: “systemctl disable dhcpcd; systemctl stop dhcpcd”.
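For completeness, the bridge itself can be stood up with iproute2. This is a minimal, non-persistent sketch that assumes the two NICs appear as eth0 and eth1:

# Create a bridge spanning the two NICs; no IP address is assigned, which keeps the firewall transparent
ip link add name br0 type bridge
ip link set eth0 master br0
ip link set eth1 master br0
ip link set eth0 up
ip link set eth1 up
ip link set br0 up

To survive a reboot, the equivalent would need to go into whatever persistent network configuration you use.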

To get nftables on the pi I did have to install it: “apt-get install nftables”. Then we write some basic rules. Subjectively speaking, nftables is a lot nicer to write rules for than iptables, ebtables, etc. Here’s a brief sample:

table inet firewallip {
  chain c1 {
    type filter hook input priority 0; policy drop;
    meta nftrace set 1
    ip saddr 10.55.0.10 icmp type echo-request accept
    reject with icmp type host-unreachable
  }
}

A brief description of this example is apropos:

  • The table is named “firewallip” (the name doesn’t matter), but “inet” can be one of ip, arp, ip6, bridge, inet, or netdev (see https://wiki.nftables.org/wiki-nftables/index.php/Quick_reference-nftables_in_10_minutes)
  • The chain, c1 (again, the name doesn’t matter – although it makes sense to have it match the filter type), is instantiated with the type line – this is extremely significant, as rules will only match packets arriving at the hook given in that type line.
  • meta nftrace set 1 (can be 0 or 1) lets us run “nft monitor trace” and trace the rules live
  • there are a LOT of things that can go inside a chain – however it is worth noting that “reject” statements such as the one I show are not applicable in some chain types, such as bridge.
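Loading and debugging the ruleset is then just a couple of commands (assuming the rules live in /etc/nftables.conf):

nft -f /etc/nftables.conf    # load the rules (put "flush ruleset" at the top to make reloads idempotent)
nft list ruleset             # confirm what is actually loaded
nft monitor trace            # live-trace packets hitting rules where nftrace is set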

Note that for connection tracking inside a bridge I actually had to build my kernel yet again. If you don’t do this you’ll get “Protocol unsupported” errors when you try to do connection tracking inside a bridge. To enable this module, follow the same steps as in the procedure for adding the Realtek driver; however, the menu item to enable in the kernel is under “Networking support” -> “Networking options” -> “Network packet filtering framework” -> “IPV4/IPV6 bridge connection tracking support”. Or just add “CONFIG_NF_CONNTRACK_BRIDGE=m” to your “.config” file inside the linux tree. Once this is done you need to ensure “nf_conntrack_bridge” is loaded: “modprobe nf_conntrack_bridge” should do it.

At this point we can do things like connection tracking INSIDE the bridge. Which is great. I won’t post my actual ruleset, but here’s a starting one that works well, allowing ssh/http/https from the LAN (assuming the LAN is on eth1), and only established connections and ARP otherwise.

table bridge firewall {
  chain input {
    type filter hook input priority 0; policy drop;
  }
  chain output {
    type filter hook output priority 0; policy drop;
  }
  chain forward {
    type filter hook forward priority 0; policy drop;
    meta protocol {arp} accept
    ct state established,related accept
    iifname eth1 tcp dport {ssh, http, https} accept
  }
}

Unfortunately, all this fun on the pi doesn’t seem as readily available on CentOS. CentOS 7.6 is still on a 3.x kernel, and 8.4 is still on a 4.x kernel. You can upgrade the kernel using elrepo-kernel (http://elrepo.org/tiki/kernel-ml).

Recompiling the Kernel on the Pi Compute Module 4

I wanted to enable the Realtek RTL8111 NIC on the DFRobot DFR0767 dual-NIC carrier board for my Pi Compute Module 4. The driver for this NIC does not come with piOS by default; apparently it is included with the 64-bit version. To enable this driver I had to recompile the kernel. I followed the basic flow from this post, but (because I’m not as cool as the poster) I did not cross-compile. As a result my procedure was a lot simpler and more readable:

sudo apt-get update
sudo apt-get install flex bison libssl-dev bc  -y
git clone --depth=1 https://github.com/raspberrypi/linux
cd linux
make bcm2711_defconfig
Add "CONFIG_R8169=m" to .config (no need to "make menuconfig")
make -j4 zImage modules dtbs
sudo make modules_install
sudo cp arch/arm/boot/zImage /boot/kernel7l.img
sudo cp arch/arm/boot/dts/overlays/*.dtb* /boot/overlays/

After this I rebooted, and all was right as rain: the new NIC was present! Compile time was only a couple of hours, which was quite impressive given it was done directly on the pi (4 cores, 2GB RAM).

Controlling 433 MHz Blinds from Home Assistant (the easy way)

A while back I bought a superhet 433 MHz transmitter/receiver pair (like this). The goal was to attach this simple device to a pi and control it using the great rpi-rf package. Unfortunately, it appears that my particular remotes (one BY-305 controlling five blinds, one AC-123-06D controlling two blinds) emit codes that aren’t easily detected by rpi-rf.

Rather than try to get a PhD in reverse engineering the protocol (well, I did try for it but barely qualify for A.B.D.) I found the life-saving rpi-rfsniffer package. I noticed that I could simply do the following:

 rfsniffer --rxpin 7 record 2ndFloorS_All4_Up

This allowed me to override the default receive pin (I had the receiver plugged into GPIO4, which is board pin 7). I then held down the “up” button on my 433 MHz remote. I repeated this procedure for up and down on both remotes, giving each recording session an appropriate name. One note: I found that rfsniffer sometimes waited far too long to terminate the recording. To limit it I modified line 62 of /usr/local/lib/python3.7/dist-packages/rfsniffer.py as follows:

if len(capture) < 16000 and  GPIO.wait_for_edge(rx_pin, GPIO.BOTH, timeout=1000):

This limits the capture to 16000 samples (about 6 seconds or so).

To play back the recordings I did the following:

rfsniffer --txpin 11 play 2ndFloorS_All4_Up

Again I overrode the tx pin – board pin 11 (GPIO17) – and played back the samples I had just recorded.

To integrate this into Home Assistant I simply used the excellent command_line integration. This was quick and easy and probably cannot be improved upon, as there is no status coming back from the blinds.

switch:
  - platform: command_line
    switches:
      blinds_2ndfloor:
        command_on: ssh -o StrictHostKeyChecking=no user1@<pi1_ip> '/home/user1/blinds_south_up.sh'
        command_off: ssh -o StrictHostKeyChecking=no user1@<pi1_ip> '/home/user1/blinds_south_down.sh'

This exposes a single entity, called ‘blinds_2ndfloor’, that has “on” and “off” buttons. The on script looks like this:

rfsniffer --txpin 11 play 2ndFloorN_Up
rfsniffer --txpin 11 play 2ndFloorN_Up
rfsniffer --txpin 11 play 2ndFloorN_Up
rfsniffer --txpin 11 play 2ndFloorS_All4_Up
rfsniffer --txpin 11 play 2ndFloorS_All4_Up
rfsniffer --txpin 11 play 2ndFloorS_All4_Up

The repetition seemed necessary, empirically, as sometimes a single blind would not start if only one or two playbacks were made. Additionally, since there is only one pi, I found running all the commands serially was better than letting HA possibly try to run the north and south transmitters in parallel. The script for off looks similar but, of course, plays back the “Down” samples.

With those scripts in place one can easily add HA automations to bring the blinds up or down based on the time of day. Eventually you need not touch anything in your house, and can progress to a future of blissful automation taking care of everything, allowing us to evolve into our proper form:

A list of ways our society is already like Pixar's dystopia in WALL·E

Disclaimers:

  • For Docker-based HA: to enable ssh-based remote invocation, mount /root/.ssh as a volume, then generate your ssh keypair inside the HA container. Then add the HA public key to the pi’s authorized_keys file (see the sketch after this list).
  • I totally realize this method of capturing the RF signals is suboptimal: in theory, if you live in a busy RF environment, you would be capturing – and then playing back – stray signals. The solution is obviously to do your RF captures in your neighborhood anechoic chamber.
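Regarding that first disclaimer, the key setup amounts to a couple of commands – this sketch assumes the HA container is named homeassistant (adjust names and paths to your setup):

# Generate a keypair inside the HA container (it persists thanks to the /root/.ssh volume)
docker exec -it homeassistant ssh-keygen -t ed25519 -N '' -f /root/.ssh/id_ed25519
# Print the public key, then append it to ~/.ssh/authorized_keys for user1 on the pi
docker exec -it homeassistant cat /root/.ssh/id_ed25519.pub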

Physically Disconnecting the Speaker and Microphone on the Wyzecam V3

The Wyzecam v3 comes with some great features – namely the $20 price tag and excellent starlight sensor. It also comes with a microphone and speaker – both of which have their downsides. For those who wish to disconnect them (say, for privacy reasons) – and don’t fully trust the software “disable” – one can physically disconnect them without damaging the camera.

NOTE: if you plan to disconnect the speaker you should probably do this only AFTER setting up the camera, as it provides voice prompts during the setup.

Estimated time: less than 5 minutes.

Step 1 – Use a plastic spudger, such as this one for $1.99 from iFixit.

Step 2 – Guide the spudger under the outside of the white rim on the front of the camera. Run it around the ENTIRE white rim to loosen the underlying adhesive. Be careful not to push it too far under the rim as it will mar the adhesive tape and thereby decrease re-assembly quality.

Note the sticky tape on the back of the white insert. You want to avoid marring this, as it will affect the reassembled product.

Step 3 – Use the pointy end of the tool to carefully remove the three white inserts. This part is easy – but if you get it wrong it will be VERY hard to get the underlying screws out! Tip: push on the far side of the squishy insert to cause it to rotate, then you can carefully tweezer it out.

Step 4 – Use a small screwdriver to loosen the three Phillips screws. Yes – only three; if Wyze had a fourth hole and screw the price would be much higher.

Step 5 – Carefully insert the spudger in between the white case and the black front. This is the trickiest part! You don’t want to damage the red moisture seal just underneath the black front. To avoid damaging it, do not repeatedly pry at the black front – instead get the tool just under the edge and lift.

Step 6 – Once you have carefully lifted out the black portion, the electronics slide out easily. The mic and speaker are on the bottom of the assembly. You can use the tool to carefully loosen the connectors. This should allow easy reconnection if desired later.

With microphone and speaker disconnected

Step 7 – Reassembly. Push the assembly back into the case. Insert the three screws and tighten them. Carefully push the white inserts back in. Re-attach the white rim.

The reassembled product – you can’t even tell it was modified – which is kinda the point!

Comparison of Wyzev3 Sensor vs RPi IMX327

One of the best low-light sensors for the Raspberry Pi is the Sony IMX327 (available here: https://www.inno-maker.com/product/mipi-cam-327/ for around $90). The wyzev3 offers similar starlight performance for $20 – a fraction of the cost. But how do they compare?

Sample captures from the IMX327 and the Wyzev3

As can be seen, both cameras capture the overall scene well. The white balance of the IMX327 was not adjusted, leaving its scene looking a bit “warmer” than the wyzev3’s.

As for detail a few important differences pop out:

  • The wyze seems to saturate around light sources. This is especially pronounced near the streetlamp and the car’s brake lights.
  • The wyze has a slightly wider field of view
  • The wyzev3 doesn’t seem to capture some detail, such as the lettering on the stop sign, as well as the IMX327 – this is especially apparent when zooming in on the full size image.
  • The wyze seems better at keeping details crisp in “busier” sections of the scene – examples include the ground around the forest on the right of the image
  • The wyze seems to suffer more motion blur. I believe the wyze achieves some of its performance by combining multiple frames. This improves low-light performance but smears together details.

While the IMX327 seems to have potentially better image quality, cost almost hands the victory to the wyzev3. By the time you add a fully-equipped pi4 to the IMX327 your cost is close to $160. Even at half that you could point four wyze cameras at the problem and have vastly better combined video coverage. Add to this that the wyzev3 includes IR LEDs (near and far), microphones, a speaker, and can be set up by a non-PhD.

The only caveat here is that the wyzev3 is a “closed” product. Unless open-source firmware can be loaded on it, it will never be as secure as the pi solution – users of the wyzev3 are at the mercy of Wyze to protect their data. To this end there is some hope that Wyze will release an RTSP version of their firmware, as they did for their v2 product.

HA MQTT Auto Discovery

As described on their official MQTT Auto Discovery page, Home Assistant allows one to create sensors on the fly. This is particularly important if you are, say, trying to add some non-trivial number of devices that speak MQTT.

I saw a lot of posts about people struggling to get this working. I too had a few issues and felt it worth stating my take on it. Spoiler alert: HA MQTT auto discovery works perfectly, so long as you get over some of the setup nuances. I thought I’d document these for posterity:

  • If you added MQTT support through the GUI, auto discovery is set to false by default. There doesn’t appear to be any way to fix this through the GUI. If you then add MQTT to your configuration YAML, the GUI config overrides it! The only way to overcome this is to delete MQTT from the GUI and then add it in the configuration YAML, restart HA, etc.
  • To publish a value there is a two-step process (sketched below, after this list):
    1. Publish to the configuration topic to define the sensor. The concept of how this works is well documented at the main link given above – however, it is important to note that you do NOT have to provide the configuration for all values in an entity with multiple values. This is actually a nice feature – and it works so long as you always publish the config before sending the value. It was not clear to me (and I didn’t investigate) whether there is any good way to minimize the need to broadcast configurations; e.g. how do you know HA saw your config before you send state? To be safe I just publish the config and state one after another every time. This doesn’t seem optimal but it works.
    2. Publish the state via the state topic passed in during the config publish. The only thing to note here is that, unfortunately, values do not appear to get rooted under the object_id you provide. Instead they are placed under a sensor with the name you provide – seemingly HA completely ignores the object_id…
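Here is a sketch of those two publishes using mosquitto_pub – the topics reuse the “grinch” example further below, and the name and unit are made up for illustration:

# 1. Define the sensor on the discovery (config) topic
mosquitto_pub -u user -P pass -h localhost \
  -t 'homeassistant/sensor/grinch/config' \
  -m '{"name": "grinch", "state_topic": "homeassistant/sensor/grinch/state", "unit_of_measurement": "°C"}'

# 2. Publish the value on the state topic named in that config
mosquitto_pub -u user -P pass -h localhost \
  -t 'homeassistant/sensor/grinch/state' \
  -m '23.4'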

Otherwise this feature works very well. Some tips I found for debugging this:

  • Enabling logging for MQTT works well. To do this, add to configuration.yaml:
logger:
  default: warning
  logs:
    homeassistant.components.mqtt: debug

You can then tail the log (under Docker it is at config/home-assistant.log).

  • Using the “Publish a packet” or “Listen to a topic” pages under MQTT->Configure (in the HA Configuration->Integrations page) is good for eliminating any client/broker issues you might see.
  • Additionally, if you are using eclipse-mosquitto as your MQTT server, you can view publishes directly (HA aside). In Docker it would be:
docker exec -it <mosquittodockerid> /bin/sh
mosquitto_sub -u user -P pass -h localhost -t topic

Where “topic” above is whatever you are publishing, e.g. “homeassistant/sensor/grinch/state”

Lastly, in looking for a good C++ MQTT client, I tried Paho and mosquitto. I found mosquitto worked best – both for the server (which I run in a Docker container) and the C++ client. My decision was based on this simple requirement: the library should build (Paho failed), work with C++11 (other, albeit cool-looking, C++ libraries required C++14), and be easy. Mosquitto fit the bill perfectly. I am sure there are other libraries that might work as well – I just never found them.