Adding in Custom Indices to Elastiflow

Let’s say you have an elastiflow docker instance set up. This stack pushes all flow info into an index named “elastiflow-<version>-<Year>.<Month>.<Day>”. What if you wanted to use the same ELK stack for both elastiflow AND other stuff?

This is possible, of course!

Clone the elastiflow git repo

Cd into the repo
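For reference, that looks something like the following (this assumes the standard robcowart/elastiflow repository; use your own fork if you have one):

git clone https://github.com/robcowart/elastiflow.git
cd elastiflow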

Add a new input by creating logstash/elastiflow/conf.d/10_input_syslog.conf. For example, to bring in syslog:

input {
  udp {
    host => "0.0.0.0"
    port => 10514
    codec => "json"
    type => "rsyslog"
    tags => ["rsyslog"]
  }
}

filter { }

Modify logstash/elastiflow/conf.d/30_output_10_single.logstash.conf


output {
  if "rsyslog" in [tags]  {
    elasticsearch {
      user => "${ELASTIFLOW_ES_USER:elastic}"
      password => "${ELASTIFLOW_ES_PASSWD:changeme}"
      hosts => [ "172.10.4.1:9200" ]
      index => "logstash-%{+YYYY.MM.dd}"
      template => "${ELASTIFLOW_TEMPLATE_PATH:/etc/logstash/elastiflow/templates}/logstash.template.json"
      template_name => "logstash-1.0.0"
    
    }
  } else {
    elasticsearch {
      id => "output_elasticsearch_single"
      hosts => [ "${ELASTIFLOW_ES_HOST:127.0.0.1:9200}" ]
      ssl => "${ELASTIFLOW_ES_SSL_ENABLE:false}"
      ssl_certificate_verification => "${ELASTIFLOW_ES_SSL_VERIFY:false}"
      # If ssl_certificate_verification is true, uncomment cacert and set the path to the certificate.
      #cacert => "/PATH/TO/CERT"
      user => "${ELASTIFLOW_ES_USER:elastic}"
      password => "${ELASTIFLOW_ES_PASSWD:changeme}"
      index => "elastiflow-3.5.3-%{+YYYY.MM.dd}"
      template => "${ELASTIFLOW_TEMPLATE_PATH:/etc/logstash/elastiflow/templates}/elastiflow.template.json"
      template_name => "elastiflow-3.5.3"
      template_overwrite => "true"
    }
  } 
} 

Rebuild the image:

docker build --tag logstash-elastiflow-custom:1.0 .

Now bring up your stack, e.g. “docker-compose up -d”

Now let’s test it. We can generate a new syslog message by, say, logging into the syslog server. If we do this the server shows the following message:

Mar 31 08:37:37 zoobie-2-1 sshd[2625]: Accepted publickey for magplus from 172.10.4.32 port 61811 ssh2: RSA SHA256:2dui2biubddjwbdjbd
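If you don't want to wait around for a real event, you can also generate a test message by hand with logger on the forwarding host (the message text here is just an arbitrary example):

logger -p auth.info "test message for the logstash syslog pipeline"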

If we go to Kibana -> Management and create an index pattern, we should see the new logstash index. Add it to Kibana, then view the index in the Discover view. It should look something like this:

node.hostname:elk.myhomenet.sys node.ipaddr:172.10.4.1 event.type:rsyslog event.host:syslogserver.myhomenet.sys @version:3.5.3 facility:auth @timestamp:Mar 31, 2020 @ 08:24:52.000 sysloghost:zoobie-2-1 severity:info programname:sshd procid:2575 logstash_host:syslogserver.myhomenet.sys tags:rsyslog message: Accepted publickey for magplus from 172.10.4.32 port 61736 ssh2: RSA SHA256:2dui2biubddjwbdjbd _id:IJ6xanIBxE6Ab_zHIO3i _type:_doc _index:logstash-2020.03.31 _score:0

And there you have it!

NOTE: This example does not cover setting up syslog forwarding, which is required to get syslog into logstash. For a good example of that, see the Digital Ocean tutorial on syslog and Logstash.
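For completeness, the forwarding side ends up being only a couple of lines on the client. Treat this as a rough sketch, not a drop-in config: it assumes rsyslog, that 172.10.4.1 is the host running this stack, and that you have defined the json-template from that tutorial (our logstash input uses the json codec, so plain syslog won't parse):

cat <<'EOF' | sudo tee /etc/rsyslog.d/60-logstash.conf
# send everything to logstash over UDP port 10514, formatted by json-template
*.* @172.10.4.1:10514;json-template
EOF
sudo systemctl restart rsyslog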

Armchair Deep Learning Enthusiast: Object Detection Tips #1

Recently I’ve been using the TensorFlow Object Detection API (TFOD). Much of my approach follows material provided on pyimagesearch.com. Rather than rehash that material, I just want to give a few pointers that I found helpful.

  • Don’t limit yourself to the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) dataset. This dataset is credited with helping really juice up the state of the art in image recognition; however, there are some serious problems with it for armchair DL enthusiasts like myself:
    1. It isn’t easy to get the actual images for training. You have to petition for access to the dataset, and your email must not look “common”. Uh-huh. So what are your options? Sure, you can get links to the images on the ImageNet website and download them yourself, and sure, you could get the bounding boxes, but how do you match them up with the images you manually downloaded? I don’t want any of this; I just want the easy button: download it and use it. The closest thing you’ll get to that is to grab the images from Academic Torrents: http://academictorrents.com/collection/imagenet-lsvrc-2015. But even that doesn’t feel satisfying: if ImageNet is about moving the state of the art forward, and assuming I even had the capability to do so, they sure aren’t making it easy for me to do that!
    2. It seems outdated. The object detection dataset hasn’t changed since 2012. That is probably good for stability, but the total image count (~1M images) no longer seems big. People’s hairstyles, clothing, etc. are all changing. Time for an update!
    3. Oh, that’s right: there is no official “person” synset inside the ILSVRC image set! So don’t worry about those out-of-date hairstyles or clothes!
    4. There are better datasets out there. Bottom line: people are moving to other datasets and you should too.
      1. Open Images being one of the best
      2. Oh, and you can download subsets of this easily using a tool like https://github.com/harshilpatel312/open-images-downloader.git.
  • The TFOD workflow is easy to follow, provided you use the right TensorFlow version.
    • TFOD is not compatible with TensorFlow 2.0; you have to use the 1.x series.
    • I used Anaconda to install tensorflow-gpu 1.15.0: type “conda install tensorflow-gpu=1.15.0” (inside an activated conda environment).
    • You then grab the TFOD library itself, as per the project’s instructions (a minimal setup sketch follows this list).
  • Make sure you actually feed TFOD data, else you get weird hanging-like behavior.
    • At some point I found that TensorFlow was crashing because a bounding box fell outside its image.
    • In the process of fixing that I introduced a bug that caused zero records to be sent to TensorFlow.
    • When I then ran a training loop, I saw TensorFlow progress as usual until it reported it had loaded libcublas (“Successfully opened dynamic library libcublas.so.10.2”) and then appeared to hang.
    • I thought this was a TensorFlow issue, and even found a GitHub issue mentioning “Successfully opened dynamic library libcublas.so.10.0”, but this was all a red herring. It was NOT because of TensorFlow; it was just that my bounding box fix had eliminated all the bounding boxes. Once I fixed that, all was well.
  • Make sure you provide enough discriminatory data. E.g., if you want to find squirrels, don’t train on only the squirrel set; otherwise your detector will think almost anything that looks like an object is a squirrel. Add in a few other datasets and you will find that squirrels are squirrels and everything else is hit or miss.
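For what it’s worth, “grabbing the TFOD library” (third bullet above) boiled down to roughly the following for me. Treat it as a sketch for the 1.x-era API; the repo layout and the sanity-check test name can differ between releases:

git clone https://github.com/tensorflow/models.git
cd models/research
# compile the protobuf definitions used by the object detection API (requires protoc)
protoc object_detection/protos/*.proto --python_out=.
# make object_detection and the bundled slim package importable
export PYTHONPATH=$PYTHONPATH:`pwd`:`pwd`/slim
# quick sanity check that the install works
python object_detection/builders/model_builder_test.py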

How I Set Up DLIB

It is fairly easy to check out and build dlib. Getting it to work in a performance-optimized manner, Python bindings included, takes a little more work.

Per the dlib GitHub, one can build the bindings by simply issuing:

python setup.py install

The first problem I found is that the setup process decided to latch on to an old version of CUDA. That was my bad; it was fixed by pointing my PATH variable at the new CUDA’s bin directory.
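In practice that was just a matter of prepending the right bin directory before re-running setup.py (a sketch; /usr/local/cuda is an assumption, point it at whichever toolkit you actually want picked up):

export PATH=/usr/local/cuda/bin:$PATH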

Second problem is that during compilation I saw the following:

Invoking CMake build: 'cmake --build . --config Release -- -j12'
[  1%] Building NVCC (Device) object dlib_build/CMakeFiles/dlib.dir/cuda/dlib_generated_cusolver_dlibapi.cu.o
[  2%] Building NVCC (Device) object dlib_build/CMakeFiles/dlib.dir/cuda/dlib_generated_cuda_dlib.cu.o
/home/carson/code/2020/facenet/dlib/dlib/cuda/cuda_dlib.cu(1762): error: calling a constexpr __host__ function("log1p") from a __device__ function("cuda_log1pexp") is not allowed. The experimental flag '--expt-relaxed-constexpr' can be used to allow this.

As the error suggests, this is resolved by passing a flag to the compiler. To do this, modify the setup.py invocation:

python setup.py install --set USE_AVX_INSTRUCTIONS=1 --set DLIB_USE_CUDA=1 --set CUDA_NVCC_FLAGS="--expt-relaxed-constexpr"

Everything went just peachy from there, except that when I attempted to use dlib from within Python I got an error (something like):

dlib 19.19.99 is missing cblas_dtrsm symbol

After which I tried importing face_recognition and got a segfault.

I fixed this by installing openblas-devel and then re-running the setup.py command as above. Magically this fixed everything.
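If you want to confirm the rebuild actually picked up CUDA, dlib’s Python bindings expose a flag you can check from the shell:

python -c "import dlib; print(dlib.__version__, dlib.DLIB_USE_CUDA)"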

Again, not bad – dlib seems cool – just normal troubleshooting stuff.

GPUs Are Cool

Expect no witty sayings or clever analyses here – I just think GPUs are cool. And here are a few reasons why:

Exhibit A: Machine learning. Training a standard feed-forward neural net on CIFAR-10 progresses at 50 usec/sample on the GPU; my 2.4 GHz i7 takes almost 500 usec/sample. The full set takes around 5 minutes to train on the GPU vs. over 35 minutes on my CPU. On long tasks this means the difference between days and weeks.

Exhibit B: Video transcoding. In order to make backups of all my Blu-ray disks, I rip and transcode them using ffmpeg or HandBrake. Normally I’m lucky to get a few dozen frames per second, completely maxing out my CPU in the process. By compiling ffmpeg to include nvenc/CUDA support I get 456 fps (19x faster). As the screenshots show, my average CPU usage was below 20%, and even GPU usage stayed under 10%. Video quality was superb (I couldn’t tell the difference).

ffmpeg -vsync 0 -hwaccel cuvid -i 00800.m2ts -c:a copy -c:v h264_nvenc -b:v 5M prince_egypt.mp4
RAW frame from blu-ray
Same frame after ffmpeg/nvenc transcoding
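If you want to verify your ffmpeg build actually has NVENC before kicking off a long transcode, a quick check is enough (h264_nvenc only shows up if ffmpeg was built with nvenc support):

ffmpeg -hide_banner -encoders | grep nvenc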

My setup:

  • GPU: RTX 2070 Super (8GB ram)
  • CPU: i7-8700K (6 core HT @3.7Ghz)
  • RAM: 32GB
  • Disk: 1TB PM981 (NVME)

Hosting a Minecraft Server

We decided to try out Minecraft as a family. Our setup includes a half dozen devices (some iOS, some Android, an Xbox One, etc.) with users spread out across the country.

Rather than pay for a realms subscription, we chose to host our own server. Besides being cheaper this gives us flexibility and control over the world data.

Server Setup

Run the docker container (we used the Bedrock Minecraft Server):

docker run -d -e EULA=TRUE --name cworld -e GAMEMODE=creative -e DIFFICULTY=normal -e SERVER_NAME=fenhome2 -v /home/<user>/data:/data  -p 127.0.0.1:19132:19132/udp itzg/minecraft-bedrock-server

Note 1 – I purposefully run the minecraft server only on the localhost, not on the public IP. More on this later.

Note 2 – I specify a volume for the data. This lets you preserve minecraft state across server restarts and allows you to port it to other servers easily. Replace <user> with your username on your server.

For the sake of security, we would like to allow access only to the clients we specifically approve. I found the easiest way to do this is one firewall-cmd rule per allowed host:

firewall-cmd --direct --add-rule ipv4 nat PREROUTING 0 -s <allowedHostIp>/32 -p udp --dport 19132 -j DNAT --to-destination  <dockerIp>:19132

Substitute the IP of the allowed host in place of <allowedHostIp>; to find your public IP just go to http://whatismyip.com from that device. The <dockerIp> can be found with the following command: docker exec -it cworld ip addr | grep 172
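Filled in, a rule looks something like this (both addresses are made up for illustration: 203.0.113.27 stands in for the remote player’s public IP, and 172.17.0.2 for whatever docker exec reported):

firewall-cmd --direct --add-rule ipv4 nat PREROUTING 0 -s 203.0.113.27/32 -p udp --dport 19132 -j DNAT --to-destination 172.17.0.2:19132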

Connecting Android/iOS Clients

It’s really easy to connect Android and iOS clients. Just click on the Servers tab (shown here):

Then click on the “Add Server” button.

Fill in the server name with whatever you want; the address and port are your server’s hostname (or public IP) and port (19132 in this example). See this example:

Connecting XBox One Clients

XBox clients are more challenging than Android/iOS because they don’t have the “Add Server” button. However there is still a way to make it work! The magic involves adding a “network proxy” on your network that tricks the XBox into thinking there is a local server. The XBox connects to this proxy which forwards traffic to your server.

  1. Download phantom from https://github.com/jhead/phantom
  2. Inside your phantom checkout run make
  3. Copy phantom-windows.exe to your Windows box.
  4. On your Windows box, open a command prompt (run cmd).
  5. In your console, type cd <path to downloaded phantom-windows.exe>
  6. Type phantom-windows.exe -server <server>:19132
  7. If prompted by windows to allow phantom through your firewall say “yes.”

IMPORTANT: You will need to ensure phantom is running for the duration of your game play.

If you trust me (why would you?!) you can use the copy of phantom I built (md5 5e7595f82f117b545c511629df3304ba)

Add a Second Server for Survival Mode

If you want to start a second server for survival-mode games, the process is almost identical. The docker container is created as follows:

docker run -d -e EULA=TRUE --name cworld_survival -e GAMEMODE=survival -e DIFFICULTY=normal -e SERVER_NAME=fenhome2_survival -v /home/<user>/data_survival:/data  -p 127.0.0.1:19133:19132/udp itzg/minecraft-bedrock-server

And for the firewall rule:

firewall-cmd --direct --add-rule ipv4 nat PREROUTING 0 -s <allowedHostIp>/32 -p udp --dport 19133 -j DNAT --to-destination  <dockerIp2>:19132

Where <dockerIp2> is the IP of the second minecraft server.

You can now add a second server in your android/iOS clients with the name “<server> survival”, <server> for the ip, and 19133 for the port.

Note that on XBox you will need a second phantom instance with 19133 as the port, as in:

phantom-windows.exe -server <server>:19133

Installing pytorch/torchvision on centos7

The installation command shown on pytorch.org didn’t quite work for me on CentOS 7. Instead of using the Anaconda installer linked from pytorch.org, I just used the yum-provided conda:

yum install conda

However, when I went to run the command as specified I got the following error:

~$ conda install pytorch torchvision cudatoolkit=10.0 -c pytorch

NoBaseEnvironmentError: This conda installation has no default base environment. Use
'conda create' to create new environments and 'conda activate' to
activate environments.

It seems the problem is that the instructions assume you’ve already set up a working base conda environment, which I had not. You can work around this by simply creating a named environment (I suppose I could also have made a base environment, but I decided to save that adventure for another day). The following did work for me:

conda create -n pytorch pytorch torchvision matplotlib cudatoolkit=10.0 -c pytorch

Note that I also added matplotlib, since I was following pytorch examples that require it. Also, I found I could run any pytorch example by directly referencing the conda environment’s interpreter, instead of activating the environment and then running the script, as follows:

~/.conda/envs/pytorch/bin/python3.7 train.py
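A quick way to confirm the environment actually sees your GPU, using the same trick of calling the environment’s interpreter directly:

~/.conda/envs/pytorch/bin/python3.7 -c "import torch; print(torch.__version__, torch.cuda.is_available())"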

Installing OpenCV 4.1.1 on Raspberry Pi 4

Recently I purchased a Raspberry Pi 4 to see how it performs on basic computer vision tasks. I largely followed this guide on building OpenCV: https://www.learnopencv.com/install-opencv-4-on-raspberry-pi/

However, the guide is missing several dependencies needed on a fresh install of Raspbian Buster. There are also some apparent errors in the CMakeLists.txt file which other users have already discovered. After applying those fixes and adding the missing dependencies, I now have OpenCV running on my Pi. My complete script is below.

IMPORTANT: You must READ the script, don’t just run it! For example, you should check the version of Python you have at the time you run this script; when I ran it I was at Python 3.7. Also, feel free to bump to a later version (or master) of OpenCV.

Oh, also, during the numpy step it hung and I was too lazy to look into it. It didn’t seem to affect my ability to use OpenCV, so I didn’t go back and dig. My bad.

#!/bin/bash

cvVersion="4.1.1"
pythonVersion="3.7"
opencvDirRoot=/home/pi/opencv

sudo apt-get -y purge wolfram-engine
sudo apt-get -y purge libreoffice*
sudo apt-get -y clean
sudo apt-get -y autoremove

mkdir -p $opencvDirRoot
cd $opencvDirRoot

# Clean build directories
rm -rf opencv/build
rm -rf opencv_contrib/build

# Create directory for installation
rm -fr installation
mkdir -p installation
mkdir installation/OpenCV-"$cvVersion"


sudo apt -y update
sudo apt -y upgrade
sudo apt-get -y remove x264 libx264-dev
 
## Install dependencies
sudo apt-get -y install libblas-dev liblapack-dev
sudo apt-get -y install libeigen3-dev
sudo apt-get -y install qtbase5-dev qtdeclarative5-dev
sudo apt-get -y install build-essential checkinstall cmake pkg-config yasm
sudo apt-get -y install git gfortran
sudo apt-get -y install libjpeg8-dev libjasper-dev libpng12-dev

 
sudo apt-get -y install libtiff5-dev
 
sudo apt-get -y install libtiff-dev

sudo apt-get -y install libavcodec-dev libavformat-dev libswscale-dev libdc1394-22-dev
sudo apt-get -y install libxine2-dev libv4l-dev
cd /usr/include/linux
sudo ln -s -f ../libv4l1-videodev.h videodev.h
cd $opencvDirRoot

sudo apt-get -y install libgstreamer0.10-dev libgstreamer-plugins-base0.10-dev
sudo apt-get -y install libgtk2.0-dev libtbb-dev qt5-default
sudo apt-get -y install libatlas-base-dev
sudo apt-get -y install libmp3lame-dev libtheora-dev
sudo apt-get -y install libvorbis-dev libxvidcore-dev libx264-dev
sudo apt-get -y install libopencore-amrnb-dev libopencore-amrwb-dev
sudo apt-get -y install libavresample-dev
sudo apt-get -y install x264 v4l-utils
sudo apt-get -y install libmesa-dev
sudo apt-get -y install freeglut3-dev



# Optional dependencies
sudo apt-get -y install libprotobuf-dev protobuf-compiler
sudo apt-get -y install libgoogle-glog-dev libgflags-dev
sudo apt-get -y install libgphoto2-dev libeigen3-dev libhdf5-dev doxygen

sudo apt-get -y install python3-dev python3-pip
sudo -H pip3 install -U pip numpy
sudo apt-get -y install python3-testresources

cd $opencvDirRoot
# Install virtual environment
python3 -m venv OpenCV-"$cvVersion"-py3
echo "# Virtual Environment Wrapper" >> ~/.bashrc
echo "alias workoncv-$cvVersion=\"source $opencvDirRoot/OpenCV-$cvVersion-py3/bin/activate\"" >> ~/.bashrc
source "$opencvDirRoot"/OpenCV-"$cvVersion"-py3/bin/activate
#############

############ For Python 3 ############
# now install python libraries within this virtual environment
sudo sed -i 's/CONF_SWAPSIZE=100/CONF_SWAPSIZE=1024/g' /etc/dphys-swapfile
sudo /etc/init.d/dphys-swapfile stop
sudo /etc/init.d/dphys-swapfile start
pip install numpy dlib
# quit virtual environment
deactivate

git clone https://github.com/opencv/opencv.git
cd opencv
git checkout $cvVersion
cd ..

git clone https://github.com/opencv/opencv_contrib.git
cd opencv_contrib
git checkout $cvVersion
cd ..

cd opencv
mkdir build
cd build


# Eigen/Core to eigen3/Eigen/Core
sed -i 's|Eigen/Core|eigen3/Eigen/Core|g' ../modules/core/include/opencv2/core/private.hpp

# Add these two lines to opencv/samples/cpp/CMakeLists.txt by hand (they are CMake, not shell,
# so they would error if left uncommented in this script):
#   find_package(OpenGL REQUIRED)
#   find_package(GLUT REQUIRED)

cmake .. -D CMAKE_BUILD_TYPE=RELEASE \
         -D CMAKE_INSTALL_PREFIX=$opencvDirRoot/installation/OpenCV-"$cvVersion" \
         -D INSTALL_C_EXAMPLES=ON \
         -D INSTALL_PYTHON_EXAMPLES=ON \
         -D WITH_TBB=ON \
         -D WITH_V4L=ON \
         -D OPENCV_PYTHON3_INSTALL_PATH=$opencvDirRoot/OpenCV-$cvVersion-py3/lib/python$pythonVersion/site-packages \
         -D WITH_QT=ON \
         -D WITH_OPENGL=ON \
         -D OPENCV_EXTRA_EXE_LINKER_FLAGS=-latomic \
         -D OPENCV_EXTRA_MODULES_PATH=../../opencv_contrib/modules \
         -D BUILD_EXAMPLES=ON


make -j$(nproc)
make install


sudo sed -i 's/CONF_SWAPSIZE=1024/CONF_SWAPSIZE=100/g' /etc/dphys-swapfile
sudo /etc/init.d/dphys-swapfile stop
sudo /etc/init.d/dphys-swapfile start

echo "sudo modprobe bcm2835-v4l2" >> ~/.profile
                                                                                                                                        

Setting up GPG/qtpass/pwgen for windows

This is the groundwork for a distributed password management system. At the end of this exercise you will have a secure password manager for one device, but with a small step or two the full solution can be implemented.

  1. Download and install GPG (the Gpg4win package on Windows); at minimum install Kleopatra, as this will give you a nice GUI for generating your keys.
  2. Generate a new key pair. Open Kleopatra and click on File -> New Key Pair. Select “Create a personal OpenPGP pair.” Fill in your name (first and last) and email. Click on Advanced and select 4096 bits. Ensure you publish your key so you can do other fun things, like send encrypted email.
  3. Download and install qtpass.
  4. Configure qtpass. Click on Users and select the user you generated a key for earlier. Close the dialog, exit qtpass, and relaunch it. Select Config. I recommend the following:
    • On-demand copy to clipboard. Hide after 10 seconds.
    • Check hide password. Check autoclear, after 10 seconds
    • Password length: try 20 characters. Some sites won’t accept all 20, so you’ll need to shorten it for them, but most sites do let you go long.
    • Use tray icon.
    • Start minimized
    • Click on programs, set the path to gpg: You can browse to it, but it should be “C:/program files (x86)/GnuPG/bin/gpg.exe”.

At this point, it would be good to create a few folders. The nice thing to note is that you don’t have to use qtpass for this (it can be a bit squirrelly). Just open a command prompt and cd into your “password-store” directory; you can then use “mkdir” to create new folders, as shown below.
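For example, from a Windows command prompt (qtpass keeps the store under your user profile by default; the folder names here are just examples):

cd %USERPROFILE%\.password-store
mkdir websites
mkdir banking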

Try adding a new password under a folder: it should let you enter the login name and click on “Generate password”, and when you save, the entry should show up under your folder.

The next step is to add git support and create a secure git repo on a hosting site, as sketched below. Then you’ll basically have a custom, secure, distributed password store!
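When you get to that step, the git side is just an ordinary repository inside the password store (a sketch; <your-private-repo-url> is whatever private remote you set up, e.g. on GitHub or a self-hosted server):

cd %USERPROFILE%\.password-store
git init
git add -A
git commit -m "initial password store"
git remote add origin <your-private-repo-url>
git push -u origin master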