I am currently writing custom software to create a variety of distributed media solutions. One of these involves a Raspberry Pi security camera. My software isn't finished yet, but I wanted to prove out the concept using existing software. System requirements:
Raspberry Pi:
- Broadcast audio (over the LAN) to a server
- Broadcast video (over the LAN) to a server
- Receive (and play over the speaker) audio
Laptop:
- Receive audio/video from the raspberry pi (and display it)
- Send audio (from the laptop’s own microphone) to the pi.
I bought a Raspberry Pi B+, a cheapo USB microphone, an IR camera (with built-in LEDs), and some speakers. The whole package was around $100, which is just plain awesome. I'm sure my grandkids will marvel, 40 years down the road, that for the cost of an Obama-approved school lunch, I could buy so much.
Here’s a schematic of the setup:
This can be accomplished with a few scripts, which I based heavily on those found here: http://blog.tkjelectronics.dk/2013/06/how-to-stream-video-and-audio-from-a-raspberry-pi-with-no-latency/
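Both scripts below reference a handful of variables that are never defined in them ($broadcastIp, $piAudioPort, and friends). Here is a hypothetical settings file you might source from each script; every value is purely my own example and must be adjusted for your network:

```shell
# settings.sh -- hypothetical shared config; all values are examples.
piVideoPort=9000            # TCP port for the pi's video server
piAudioPort=9001            # UDP port the pi sends its mic audio to
macAudioPort=9102           # UDP port for laptop mic audio (the pi script listens on 9102)
broadcastIp=192.168.1.255   # LAN broadcast address
localIp=0.0.0.0             # bind address for the pi's TCP video server
piIp=192.168.1.50           # the pi's address, as seen from the laptop
```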
Run the following on the Pi:
#!/bin/bash
# $broadcastIp, $localIp, $piAudioPort, and $piVideoPort must be set
# for your network before this runs.

function on_signal() {
    kill -9 $piVideoPid $piAudioPid $osxAudioPid
}
trap 'on_signal' EXIT

# Receive OS X audio
gst-launch-1.0 -v udpsrc port=9102 caps="application/x-rtp" ! queue ! rtppcmudepay ! mulawdec ! audioconvert ! alsasink device=plughw:ALSA sync=false &
osxAudioPid=$!

# Send pi audio
gst-launch-1.0 -v alsasrc device=plughw:Device ! mulawenc ! rtppcmupay ! udpsink host=$broadcastIp port=$piAudioPort &
piAudioPid=$!

# Send pi video
raspivid -t 999999 -w 1080 -h 720 -fps 25 -hf -b 2000000 -o - | gst-launch-1.0 -v fdsrc ! h264parse ! rtph264pay config-interval=1 pt=96 ! gdppay ! tcpserversink host=$localIp port=$piVideoPort
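The script's supervision pattern is worth calling out on its own: each pipeline runs in the background with `&`, its PID is captured from `$!`, and an EXIT trap tears everything down when the script stops. A minimal sketch, with two `sleep` commands standing in for the gst-launch pipelines:

```shell
# Minimal sketch of the background-process + trap cleanup pattern.
# The sleeps are placeholders for the real gst-launch pipelines.
sleep 5 &
fakeAudioPid=$!
sleep 5 &
fakeVideoPid=$!

function on_signal() {
    kill -9 $fakeAudioPid $fakeVideoPid 2>/dev/null
}
trap 'on_signal' EXIT

echo "supervising $fakeAudioPid $fakeVideoPid"
```

Because the trap fires on EXIT (not just on SIGINT), the pipelines get cleaned up whether you Ctrl-C the script or it ends on its own.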
Run the following on the laptop (a MacBook in my case). Note: if this were a Linux box, you would probably use "autoaudiosrc" in place of "osxaudiosrc".
#!/bin/bash
# $broadcastIp, $piAudioPort, $piVideoPort, and $macAudioPort must be
# set for your network before this runs.

function on_signal() {
    kill -9 $piVideoPid $piAudioPid $osxStreamPid
}
trap 'on_signal' EXIT

piIp=<raspberryPiIp>

# Stream OS X microphone (note the capsfilter, apparently needed due to an OS X bug)
gst-launch-1.0 -v osxaudiosrc ! capsfilter caps=audio/x-raw,rate=44100 ! audioconvert ! audioresample ! mulawenc ! rtppcmupay ! udpsink host=$broadcastIp port=$macAudioPort &
osxStreamPid=$!

# Receive audio
gst-launch-1.0 -v udpsrc port=$piAudioPort caps="application/x-rtp" ! queue ! rtppcmudepay ! mulawdec ! audioconvert ! osxaudiosink &
piAudioPid=$!

# Receive video
gst-launch-1.0 -v tcpclientsrc host=$piIp port=$piVideoPort ! gdpdepay ! rtph264depay ! avdec_h264 ! videoconvert ! autovideosink sync=false &
piVideoPid=$!

echo "Waiting on $piVideoPid $piAudioPid $osxStreamPid"
wait
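Before starting the receive pipelines, it can help to confirm the Pi's TCP video server is actually up. A sketch using bash's built-in /dev/tcp redirection (check_pi is my own helper, not part of the scripts above; `timeout` comes from GNU coreutils and isn't on stock OS X):

```shell
# check_pi <host> <port> -- hypothetical helper: prints "reachable" if a
# TCP connection to host:port succeeds within 2 seconds.
function check_pi() {
    if timeout 2 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null; then
        echo "reachable"
    else
        echo "unreachable"
    fi
}

# e.g. run: check_pi "$piIp" "$piVideoPort" before launching the video pipeline
```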
You should be able to see and talk to whoever is on the other end of the Pi. The biggest problem is audio feedback if the laptop and Pi are close to one another. I haven't even begun to think about that, but the coolest things to note are:
- You get silky-smooth H.264 video
- The audio is acceptable
- It works almost entirely using GStreamer. The cool thing about that is GStreamer has a very easy programmatic API. So… that makes my job easy when I get to that part of my project.
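On that last point: these gst-launch pipelines are just strings of elements joined by "!", and GStreamer's programmatic API accepts the very same description syntax (gst_parse_launch), which is a big part of why the jump from shell scripts to real code should be painless. A sketch of assembling a pipeline string from pieces (join_elems is my own helper, not a GStreamer tool):

```shell
# join_elems -- hypothetical helper that glues pipeline elements
# together with " ! ", the same syntax gst-launch-1.0 consumes.
function join_elems() {
    local out="$1"
    shift
    local e
    for e in "$@"; do
        out="$out ! $e"
    done
    echo "$out"
}

recvAudio=$(join_elems \
    "udpsrc port=9001 caps=application/x-rtp" \
    "queue" "rtppcmudepay" "mulawdec" "audioconvert" "autoaudiosink")
echo "$recvAudio"
```

The same string could then be handed to gst-launch-1.0 from the shell, or to gst_parse_launch from code.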
Enjoy!