As mentioned here: https://www.raspberrypi.org/forums/viewtopic.php?t=247054, you can add the following lines to /boot/config.txt to disable this annoying light:
dtparam=pwr_led_trigger=none
dtparam=pwr_led_activelow=off
Let’s say you have an elastiflow docker instance set up. This stack pushes all flow info into an index named “elastiflow-<version>-<Year>.<Month>.<Day>”. What if you wanted to use the same ELK stack for both elastiflow AND other stuff?
This is possible, of course!
Clone the elastiflow git repo
cd into the repo
Add a new input to logstash/elastiflow/conf.d/10_input_syslog.conf. For example, to bring in syslog:
input {
  udp {
    host => "0.0.0.0"
    port => 10514
    codec => "json"
    type => "rsyslog"
    tags => ["rsyslog"]
  }
}
filter { }
Modify logstash/elastiflow/conf.d/30_output_10_single.logstash.conf:
output {
  if "rsyslog" in [tags] {
    elasticsearch {
      user => "${ELASTIFLOW_ES_USER:elastic}"
      password => "${ELASTIFLOW_ES_PASSWD:changeme}"
      hosts => [ "172.10.4.1:9200" ]
      index => "logstash-%{+YYYY.MM.dd}"
      template => "${ELASTIFLOW_TEMPLATE_PATH:/etc/logstash/elastiflow/templates}/logstash.template.json"
      template_name => "logstash-1.0.0"
    }
  } else {
    elasticsearch {
      id => "output_elasticsearch_single"
      hosts => [ "${ELASTIFLOW_ES_HOST:127.0.0.1:9200}" ]
      ssl => "${ELASTIFLOW_ES_SSL_ENABLE:false}"
      ssl_certificate_verification => "${ELASTIFLOW_ES_SSL_VERIFY:false}"
      # If ssl_certificate_verification is true, uncomment cacert and set the path to the certificate.
      #cacert => "/PATH/TO/CERT"
      user => "${ELASTIFLOW_ES_USER:elastic}"
      password => "${ELASTIFLOW_ES_PASSWD:changeme}"
      index => "elastiflow-3.5.3-%{+YYYY.MM.dd}"
      template => "${ELASTIFLOW_TEMPLATE_PATH:/etc/logstash/elastiflow/templates}/elastiflow.template.json"
      template_name => "elastiflow-3.5.3"
      template_overwrite => "true"
    }
  }
}
Rebuild the image:
docker build --tag logstash-elastiflow-custom:1.0 .
Now bring up your stack, e.g. "docker-compose up -d".
Now let’s test it. We can generate a new syslog message by, say, logging into the syslog server. If we do this the server shows the following message:
Mar 31 08:37:37 zoobie-2-1 sshd[2625]: Accepted publickey for magplus from 172.10.4.32 port 61811 ssh2: RSA SHA256:2dui2biubddjwbdjbd
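Alternatively, as a minimal sketch, you can push a JSON datagram straight at the new input with netcat (host 172.10.4.1 and port 10514 are the values from the config above; the payload field names are illustrative, so adjust for your setup):

```shell
# Craft a payload shaped like what the "json" codec expects and fire it
# at the logstash UDP input.
payload='{"message":"smoke test from netcat","sysloghost":"testhost"}'
echo "sending: $payload"
# -u = UDP, -w1 = give up after one second; skip gracefully if nc is absent
command -v nc >/dev/null && echo "$payload" | nc -u -w1 172.10.4.1 10514 || true
```

A document containing these fields should then show up in the logstash index.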
If we go to Kibana -> Management and create an index pattern, we should see a new logstash index. Add it to Kibana, then view the index in the Discover view. It should look like this:
node.hostname:elk.myhomenet.sys node.ipaddr:172.10.4.1 event.type:rsyslog event.host:syslogserver.myhomenet.sys @version:3.5.3 facility:auth @timestamp:Mar 31, 2020 @ 08:24:52.000 sysloghost:zoobie-2-1 severity:info programname:sshd procid:2575 logstash_host:syslogserver.myhomenet.sys tags:rsyslog message: Accepted publickey for magplus from 172.10.4.32 port 61736 ssh2: RSA SHA256:2dui2biubddjwbdjbd _id:IJ6xanIBxE6Ab_zHIO3i _type:_doc _index:logstash-2020.03.31 _score:0
And there you have it!
NOTE: This example does not cover setting up syslog forwarding, which is required to get syslog into Logstash. For a good example of this, see this DigitalOcean tutorial on syslog and Logstash.
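For reference, here is a rough sketch of the client side. It assumes rsyslog with the modern (RainerScript) config syntax, and borrows the field names that appear in the Kibana output above; treat the file path, template details, and target address as assumptions to adapt:

```
# /etc/rsyslog.d/60-logstash.conf (hypothetical path)
# Emit each message as JSON so logstash's "json" codec can parse it.
template(name="json-template" type="list") {
  constant(value="{")
  constant(value="\"@timestamp\":\"")     property(name="timereported" dateFormat="rfc3339")
  constant(value="\",\"message\":\"")     property(name="msg" format="json")
  constant(value="\",\"sysloghost\":\"")  property(name="hostname")
  constant(value="\",\"severity\":\"")    property(name="syslogseverity-text")
  constant(value="\",\"facility\":\"")    property(name="syslogfacility-text")
  constant(value="\",\"programname\":\"") property(name="programname")
  constant(value="\",\"procid\":\"")      property(name="procid")
  constant(value="\"}")
}
# A single @ means UDP; substitute your logstash host.
*.* @172.10.4.1:10514;json-template
```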
Recently I’ve been using the Tensorflow Object Detection API. Much of my approach follows material provided on pyimagesearch.com. Rather than rehash that material, I just wanted to give a few pointers that I found helpful.
It is fairly easy to check out and build dlib. Getting it to work in a performance-optimized manner – Python bindings included – takes a little more work.
Per the dlib GitHub, one can build the bindings by simply issuing:
python setup.py install
The first problem I found is that the setup process decided to latch on to an old version of CUDA. That was my bad – fixed by updating my PATH variable to point to the new CUDA’s bin dir.
The second problem is that during compilation I saw the following:
Invoking CMake build: 'cmake --build . --config Release -- -j12'
[ 1%] Building NVCC (Device) object dlib_build/CMakeFiles/dlib.dir/cuda/dlib_generated_cusolver_dlibapi.cu.o
[ 2%] Building NVCC (Device) object dlib_build/CMakeFiles/dlib.dir/cuda/dlib_generated_cuda_dlib.cu.o
/home/carson/code/2020/facenet/dlib/dlib/cuda/cuda_dlib.cu(1762): error: calling a constexpr __host__ function("log1p") from a __device__ function("cuda_log1pexp") is not allowed. The experimental flag '--expt-relaxed-constexpr' can be used to allow this.
As the message suggests, this is resolved by passing a flag to the compiler. To do this, modify the setup.py invocation:
python setup.py install --set USE_AVX_INSTRUCTIONS=1 --set DLIB_USE_CUDA=1 --set CUDA_NVCC_FLAGS="--expt-relaxed-constexpr"
Everything went just peachy from there, except when I attempted to use dlib from within Python I got an error (something like):
dlib 19.19.99 is missing cblas_dtrsm symbol
After which I tried importing face_recognition and got a segfault.
I fixed this by installing openblas-devel, then re-running the setup.py script as above. Magically this fixed everything.
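As a quick sanity check (a sketch – it assumes dlib is importable from your active Python), you can ask dlib whether the CUDA build took:

```shell
# dlib exposes a DLIB_USE_CUDA flag on the python module; True means the
# CUDA-enabled build is the one that got installed.
python3 -c "import dlib; print(dlib.__version__, dlib.DLIB_USE_CUDA)" 2>/dev/null \
  || echo "dlib not importable in this environment"
```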
Again, not bad – dlib seems cool – just normal troubleshooting stuff.
Expect no witty sayings or clever analyses here – I just think GPUs are cool. And here are a few reasons why:
Exhibit A: Machine Learning. Training a standard feed-forward neural net on CIFAR-10 progresses at 50 usec/sample on the GPU; my 2.4 GHz i7 takes almost 500 usec/sample. The total set takes around 5 min to train on the GPU vs over 35 min on my CPU. On long tasks this means the difference between days and weeks.
Exhibit B: Video Transcoding. In order to make backups of all my Blu-ray disks, I rip and transcode them using ffmpeg or HandBrake. Normally I’m lucky to get a few dozen frames per second – completely maxing out my CPU in the process. By compiling ffmpeg to include nvenc/CUDA support I get 456 fps (19x faster). As the screenshots show, my average CPU usage was below 20% – and even GPU usage stayed under 10%. Video quality was superb (I couldn’t tell the difference).
ffmpeg -vsync 0 -hwaccel cuvid -i 00800.m2ts -c:a copy -c:v h264_nvenc -b:v 5M prince_egypt.mp4
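Before transcoding, a sketch of a quick check (assuming ffmpeg is on your PATH) that your build actually includes the NVIDIA encoder:

```shell
# List encoders and filter for nvenc; no matches means your ffmpeg build
# was not compiled with nvenc support.
ffmpeg -hide_banner -encoders 2>/dev/null | grep -i nvenc \
  || echo "no nvenc encoders found in this ffmpeg build"
```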
My setup:
We decided to try out Minecraft as a family. Our setup includes a half dozen devices (some iOS, some Android, an Xbox One, etc.) with users spread out across the country.
Rather than pay for a realms subscription, we chose to host our own server. Besides being cheaper this gives us flexibility and control over the world data.
Run the docker container (we used the Bedrock Minecraft Server):
docker run -d -e EULA=TRUE --name cworld -e GAMEMODE=creative -e DIFFICULTY=normal -e SERVER_NAME=fenhome2 -v /home/<user>/data:/data -p 127.0.0.1:19132:19132/udp itzg/minecraft-bedrock-server
Note 1 – I purposely run the Minecraft server only on localhost – not on the public IP. More on this later.
Note 2 – I specify a volume for the data. This lets you preserve Minecraft state across server restarts and allows you to port it to other servers easily. Replace <user> with your username on your server.
For the sake of security we would like to allow access only to clients that we specifically approve. I found the easiest way to do this is one firewall-cmd rule per allowed host:
firewall-cmd --direct --add-rule ipv4 nat PREROUTING 0 -s <allowedHostIp>/32 -p udp --dport 19132 -j DNAT --to-destination <dockerIp>:19132
Substitute the IP of the allowed host in place of <allowedHostIp>. To find your IP, just go to http://whatismyip.com. The <dockerIp> can be found with the following command: docker exec -it cworld ip addr | grep 172
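Alternatively (a sketch, assuming the container name cworld from the run command above), docker inspect can print the container IP directly:

```shell
# Ask docker for the container's IP on its attached network(s);
# falls through gracefully if docker is not available here.
command -v docker >/dev/null \
  && docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' cworld 2>/dev/null \
  || echo "docker not available"
```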
It’s really easy to connect Android and iOS clients. Just tap the Servers tab (shown here):
Then click on the “Add Server” button.
Fill in the server name with whatever you want; the hostname and port are your server’s hostname and port (19132 in this example). See this example:
Xbox clients are more challenging than Android/iOS because they don’t have the “Add Server” button. However, there is still a way to make it work! The magic involves adding a “network proxy” on your network that tricks the Xbox into thinking there is a local server. The Xbox connects to this proxy, which forwards traffic to your server.
Build phantom with make (or grab a prebuilt phantom-windows.exe), then open a cmd prompt:
cd <path to downloaded phantom-windows.exe>
phantom-windows.exe -server <server>:19132
IMPORTANT: You will need to ensure phantom is running for the duration of your game play.
If you trust me (why would you?!) you can use the copy of phantom I built (md5 5e7595f82f117b545c511629df3304ba)
If you want to start a second server for survival mode games, the process is almost identical. The second container is created as follows:
docker run -d -e EULA=TRUE --name cworld_survival -e GAMEMODE=survival -e DIFFICULTY=normal -e SERVER_NAME=fenhome2_survival -v /home/<user>/data_survival:/data -p 127.0.0.1:19133:19132/udp itzg/minecraft-bedrock-server
And for the firewall
firewall-cmd --direct --add-rule ipv4 nat PREROUTING 0 -s <allowedHostIp>/32 -p udp --dport 19133 -j DNAT --to-destination <dockerIp2>:19132
Where <dockerIp2> is the IP of the second minecraft server.
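One caveat worth noting: rules added with firewall-cmd --direct as above are runtime-only and vanish on reboot. A sketch of persisting them (an assumption about your firewalld setup; repeat for each rule, including the 19132 one):

```
firewall-cmd --permanent --direct --add-rule ipv4 nat PREROUTING 0 -s <allowedHostIp>/32 -p udp --dport 19133 -j DNAT --to-destination <dockerIp2>:19132
firewall-cmd --reload
```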
You can now add a second server in your Android/iOS clients with the name “<server> survival”, <server> for the IP, and 19133 for the port.
Note that on Xbox you will need a second phantom instance with 19133 as the port, as in:
phantom-windows.exe -server <server>:19133
The installation command shown on pytorch.org didn’t quite work for me on CentOS 7. Instead of using the Anaconda installer provided on pytorch.org, I just used the yum-provided conda:
yum install conda
However, when I went to run the command as specified, I got the following error:
~$ conda install pytorch torchvision cudatoolkit=10.0 -c pytorch
NoBaseEnvironmentError: This conda installation has no default base environment. Use 'conda create' to create new environments and 'conda activate' to activate environments.
It seems the problem is that pytorch assumes you’ve set up a working conda environment – which I had not. You can work around this by simply naming the environment (I suppose you could also have made a base environment, but I decided to save that adventure for another day). The following did work for me:
conda create -n pytorch pytorch torchvision matplotlib cudatoolkit=10.0 -c pytorch
Note I also added matplotlib, as I was following pytorch examples that require it. Also, I found I could run any pytorch example by directly referencing the conda environment’s python, instead of activating the environment and then running the script, as follows:
~/.conda/envs/pytorch/bin/python3.7 train.py
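A sanity-check sketch (assuming the environment path above) to confirm pytorch can actually see the GPU:

```shell
# torch.cuda.is_available() reports whether pytorch can use a working GPU;
# skip gracefully if the conda env python is not present on this machine.
py=~/.conda/envs/pytorch/bin/python3.7
[ -x "$py" ] && "$py" -c "import torch; print(torch.cuda.is_available())" \
  || echo "conda env python not found"
```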
Recently I purchased a Raspberry Pi 4 to see how it performs basic computer vision tasks. I largely followed this guide on building OpenCV: https://www.learnopencv.com/install-opencv-4-on-raspberry-pi/
However, the guide is missing several dependencies on a fresh version of Raspbian Buster. There are also some apparent errors in the CMakeLists.txt file which other users already discovered. After applying these fixes and adding the missing dependencies, I now have OpenCV running on my Pi. Here’s my complete script below.
IMPORTANT: You must READ the script, don’t just run it! For example, you should check the version of Python you have at the time you run this script. When I ran it I was at Python 3.7. Also, feel free to bump to a later version (or master) of OpenCV.
Oh, also, during the numpy step it hung and I was too lazy to look into it. It didn’t seem to affect my ability to use OpenCV – so I didn’t go back and dig. My bad.
#!/bin/bash

cvVersion="4.1.1"
pythonVersion="3.7"
opencvDirRoot=/home/pi/opencv

sudo apt-get -y purge wolfram-engine
sudo apt-get -y purge libreoffice*
sudo apt-get -y clean
sudo apt-get -y autoremove

mkdir -p $opencvDirRoot
cd $opencvDirRoot

# Clean build directories
rm -rf opencv/build
rm -rf opencv_contrib/build

# Create directory for installation
rm -fr installation
mkdir -p installation
mkdir installation/OpenCV-"$cvVersion"

sudo apt -y update
sudo apt -y upgrade
sudo apt-get -y remove x264 libx264-dev

## Install dependencies
sudo apt-get -y install libblas-dev liblapack-dev
sudo apt-get -y install libeigen3-dev
sudo apt-get -y install qtbase5-dev qtdeclarative5-dev
sudo apt-get -y install build-essential checkinstall cmake pkg-config yasm
sudo apt-get -y install git gfortran
sudo apt-get -y install libjpeg8-dev libjasper-dev libpng12-dev
sudo apt-get -y install libtiff5-dev
sudo apt-get -y install libtiff-dev
sudo apt-get -y install libavcodec-dev libavformat-dev libswscale-dev libdc1394-22-dev
sudo apt-get -y install libxine2-dev libv4l-dev
cd /usr/include/linux
sudo ln -s -f ../libv4l1-videodev.h videodev.h
cd $opencvDirRoot
sudo apt-get -y install libgstreamer0.10-dev libgstreamer-plugins-base0.10-dev
sudo apt-get -y install libgtk2.0-dev libtbb-dev qt5-default
sudo apt-get -y install libatlas-base-dev
sudo apt-get -y install libmp3lame-dev libtheora-dev
sudo apt-get -y install libvorbis-dev libxvidcore-dev libx264-dev
sudo apt-get -y install libopencore-amrnb-dev libopencore-amrwb-dev
sudo apt-get -y install libavresample-dev
sudo apt-get -y install x264 v4l-utils
sudo apt-get -y install libmesa-dev
sudo apt-get -y install freeglut3-dev

# Optional dependencies
sudo apt-get -y install libprotobuf-dev protobuf-compiler
sudo apt-get -y install libgoogle-glog-dev libgflags-dev
sudo apt-get -y install libgphoto2-dev libeigen3-dev libhdf5-dev doxygen
sudo apt-get -y install python3-dev python3-pip
sudo -H pip3 install -U pip numpy
sudo apt-get -y install python3-testresources

cd $opencvDirRoot
# Install virtual environment
python3 -m venv OpenCV-"$cvVersion"-py3
echo "# Virtual Environment Wrapper" >> ~/.bashrc
echo "alias workoncv-$cvVersion=\"source $opencvDirRoot/OpenCV-$cvVersion-py3/bin/activate\"" >> ~/.bashrc
source "$opencvDirRoot"/OpenCV-"$cvVersion"-py3/bin/activate

############ For Python 3 ############
# Now install python libraries within this virtual environment
sudo sed -i 's/CONF_SWAPSIZE=100/CONF_SWAPSIZE=1024/g' /etc/dphys-swapfile
sudo /etc/init.d/dphys-swapfile stop
sudo /etc/init.d/dphys-swapfile start
pip install numpy dlib

# Quit virtual environment
deactivate

git clone https://github.com/opencv/opencv.git
cd opencv
git checkout $cvVersion
cd ..
git clone https://github.com/opencv/opencv_contrib.git
cd opencv_contrib
git checkout $cvVersion
cd ..

cd opencv
mkdir build
cd build

# Change Eigen/Core to eigen3/Eigen/Core
sed -i 's,Eigen/Core,eigen3/Eigen/Core,g' ../modules/core/include/opencv2/core/private.hpp

# Add these to opencv/samples/cpp/CMakeLists.txt:
#   find_package(OpenGL REQUIRED)
#   find_package(GLUT REQUIRED)

cmake .. -D CMAKE_BUILD_TYPE=RELEASE \
  -D CMAKE_INSTALL_PREFIX=$opencvDirRoot/installation/OpenCV-"$cvVersion" \
  -D INSTALL_C_EXAMPLES=ON \
  -D INSTALL_PYTHON_EXAMPLES=ON \
  -D WITH_TBB=ON \
  -D WITH_V4L=ON \
  -D OPENCV_PYTHON3_INSTALL_PATH=$opencvDirRoot/OpenCV-$cvVersion-py3/lib/python$pythonVersion/site-packages \
  -D WITH_QT=ON \
  -D WITH_OPENGL=ON \
  -D OPENCV_EXTRA_EXE_LINKER_FLAGS=-latomic \
  -D OPENCV_EXTRA_MODULES_PATH=../../opencv_contrib/modules \
  -D BUILD_EXAMPLES=ON

make -j$(nproc)
make install

sudo sed -i 's/CONF_SWAPSIZE=1024/CONF_SWAPSIZE=100/g' /etc/dphys-swapfile
sudo /etc/init.d/dphys-swapfile stop
sudo /etc/init.d/dphys-swapfile start

echo "sudo modprobe bcm2835-v4l2" >> ~/.profile
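Once the script finishes, a minimal smoke-test sketch (the paths assume the defaults in the script above):

```shell
# Use the virtualenv's python directly to confirm the cv2 bindings landed
# in the expected site-packages; prints the OpenCV version on success.
vpy=/home/pi/opencv/OpenCV-4.1.1-py3/bin/python
[ -x "$vpy" ] && "$vpy" -c "import cv2; print(cv2.__version__)" \
  || echo "virtualenv python not found at $vpy"
```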
ssh -R 3333:localhost:22 -p 2222 root@website.com
This is a precursor to a distributed password management system. At the end of this exercise you have a secure password manager for one device, but with a small step or two the full solution can be implemented.
At this point, it would be good to create a few folders. The nice thing to note is you don’t have to use QtPass for this – it can be a bit squirrelly. Just open a command line prompt, cd into “password-store”, and create folders with mkdir.
Try adding a new password under a folder – it should let you enter the login name, click on “generate password”, and when you save it should show the password under your folder.
Next step is to add git support, and create a secure git repo on a website. Then you’ll basically have a custom, secure, distributed password store!
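A sketch of that next step (assuming the standard pass CLI is managing ~/.password-store, and using a hypothetical private remote you would substitute):

```
pass git init
pass git remote add origin git@yourserver:password-store.git
pass git push -u origin master
```

From then on, pass records each password change as a commit, and pass git pull / pass git push keep your devices in sync.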