Measuring Raspberry Pi Performance

I initially had high hopes for the Raspberry Pi 3 and snapped up three of them. I hoped one would take the place of my media center, and the other two would handle monitoring tasks. Unfortunately I became less than enthused about the Pi as a media center: it has no good way to stream YouTube or VidAngel, and it couldn't keep up with streaming some HD videos.

It has now been a few months since my initial disappointment and I decided to try out the new firmware and see if things got better.

The first tests I ran were using iperf. One Raspberry Pi, rpi1, was left on the old firmware. Another Pi, rpi2, was upgraded to the latest.

       Firmware                                                  Kernel
rpi1   1b7da52ec944a9e1691745036966b3b2a48b19e8b (Apr 7 2016)    4.1.21-v7+
rpi2   1e7b8e2c9a7319f7b22869f1334c66e2cfc99f4a (Jun 27 2016)    4.4.14-v7+

Initial iperf test (iperf client running on my MacBook, server on each Raspberry Pi – tests were run independently, on one Raspberry Pi at a time):

Run   rpi1           rpi2
1     28 Mbits/sec   39 Mbits/sec
2     31 Mbits/sec   37 Mbits/sec
3     30 Mbits/sec   39 Mbits/sec
4     30 Mbits/sec   38 Mbits/sec

I also tried this with a parallel iperf test (iperf -c -P 10)

Run   rpi1           rpi2
1     30 Mbits/sec   34 Mbits/sec
2     29 Mbits/sec   38 Mbits/sec
3     29 Mbits/sec   35 Mbits/sec
4     29 Mbits/sec   35 Mbits/sec

I also wanted to test sustained throughput (iperf -c -t <10,120>):

Duration (seconds)   rpi1           rpi2
10                   29 Mbits/sec   39 Mbits/sec
120                  29 Mbits/sec   35 Mbits/sec

Even on a UDP iperf test (-u on the server, -b 50m on the client) rpi1 gets 32 Mbits/sec while rpi2 gets about 39 Mbits/sec. In all cases the new firmware averages 5-9 Mbits/sec higher. Pretty significant!
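As a quick sanity check on that 5-9 Mbits/sec claim, a small sketch can average the four initial TCP runs (rpi1 measured 28-31 Mbits/sec, rpi2 measured 37-39 Mbits/sec):

```python
# Average the per-run TCP throughput numbers measured above.
def mean(xs):
    return sum(xs) / float(len(xs))

rpi1_tcp = [28, 31, 30, 30]   # Mbits/sec, old firmware
rpi2_tcp = [39, 37, 39, 38]   # Mbits/sec, new firmware

delta = mean(rpi2_tcp) - mean(rpi1_tcp)
print("rpi1 avg: %.2f  rpi2 avg: %.2f  delta: %.2f" % (mean(rpi1_tcp), mean(rpi2_tcp), delta))
```

The averages come out to roughly 29.75 vs 38.25 Mbits/sec, squarely inside the 5-9 Mbits/sec range.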

Note that if I run these same tests where my MacBook is the client but a virtual machine on my network is the server, I see transfer rates in the 500 Mbits/sec range (the -P 10 test gave 513 Mbits/sec and the UDP test with -b 1000m gave 627 Mbits/sec). This at least demonstrates that my infrastructure is more than capable of higher transfer rates. Thank goodness for 802.11ac routers!

It is also worth noting a few differences between the Pi and VM tests. One difference is that the virtual machine is on a direct-cabled server, whereas both the Raspberry Pi and the MacBook are on wireless. Thus the VM test involves only one traversal of WiFi, whereas the MacBook-to-Pi tests involve two traversals.

Now, if I upgrade rpi1 I would expect the numbers to be equal. Right…?

Strangely, no. Even after the upgrade I see rates around 30 Mbits/sec for rpi1. Digging deeper I found one minor discrepancy between the two – I had set aside 256 MB for GPU RAM on rpi1, but only 128 MB on rpi2. Even after switching rpi1 to 128 MB, I still see the same numbers on both. I noticed I had a wireless keyboard adapter on the slower Pi – but removing it didn't affect the transfer rates.

Another difference is that there is a Raspberry Pi camera on the faster Pi – it would be interesting to see if that affects things.

I tried testing the Ethernet interfaces and found performance across the Pis identical.

At the moment I'm not sure how to account for the roughly 25% higher WiFi performance on one Pi over the other.

WiFi Performance on Raspberry Pi 3

I purchased a couple Raspberry Pi 3s. The major draw is that you no longer need an external WiFi adapter – yet the cost is the same as the older Pis.

The very first thing I noticed is that ssh traffic was stuttery. This cleared up after performing an apt-get update/upgrade. Everything seemed silky-smooth thereafter.

Exporting Paths for QNAP Albums

As nice as Photo Station is inside of the QNAP, sometimes you just want to do your own thing with your photos and videos. But what if they are organized into albums on the QNAP? Couldn't you just pull down the photos/videos in an album? Or, better, get local paths on your QNAP network mounts? All this, and more, is possible with the following script:

#!/usr/bin/python
import os
import subprocess

parms = {}
parms['host'] = "qnaphostname"
parms['login'] = "admin"
parms['albumName'] = "Album1"
localBase = '/Volumes'

# Run the query remotely over ssh; s01 is the QNAP media database
cmdBase = """ssh %(login)s@%(host)s "/usr/local/mariadb/bin/mysql -u read --password=read s01 -B -S /tmp/mysql_mediadb.sock -e 'select cFileName, cFullPath FROM pictureAlbumTable picAlbum JOIN pictureAlbumMapping picMap ON picMap.iPhotoAlbumId = picAlbum.iPhotoAlbumId JOIN %(mediaTable)s pic ON pic.%(mediaColumn)s = picMap.iMediaId JOIN dirTable ON dirTable.iDirId = pic.iDirId WHERE picAlbum.cAlbumTitle = \\"%(albumName)s\\" and type = %(type)s; ' " """

tables = []
tables.append({'mediaTable': 'pictureTable', 'mediaColumn': 'iPictureId', 'type': '1'})
tables.append({'mediaTable': 'videoTable', 'mediaColumn': 'iVideoId', 'type': '2'})

allFilenames = set()
for table in tables:
    parms.update(table)
    cmd = cmdBase % parms
    print cmd
    res = subprocess.check_output(cmd, shell=True)
    # Skip the first row, which has column headers
    hits = res.split("\n")[1:]
    print "Found %s hits" % len(hits)
    for f in hits:
        parts = f.split("\t")
        if len(parts) != 2:
            continue
        filename = os.path.join(localBase, parts[1], parts[0])
        allFilenames.add(filename)

with open(parms['albumName'], 'w') as out:
    out.write("\n".join(allFilenames))

This grabs videos and photos by using ssh to run the mysql query remotely. QNAP references videos and photos from a very similar set of tables in a mysql database (called s01 on my box). The username and password for this database are read/read. The -B option makes the results tab-delimited, and thus easier to parse.

Of course this isn’t super polished or anything; just modify the host and albumName parameters and you should be fine. That is, assuming you’re on OS X. If you are not, modify localBase to point to your mount location for the QNAP shared folder.

Update: I missed this originally, but you also have to pass in the type for each of the two queries. If you don’t do this you invariably get more videos and pictures than are actually in the album. The query is now updated above.

2015 Review of Open Source Media Sharing Software

Open source software rules the internet. The majority of websites are supported in some form by open source: be it Linux, Apache, or some open source library or language.

Surely, then, we could find an open source alternative to social media websites like Facebook, Instagram, and Flickr… right?

Here are my requirements:

  1. Server-side import. I have a huge set of media that increases weekly. I need it to be automatically imported – so something server-side must work.
  2. Multi-user. I am hosting storage space for several of my family members. I need a way to have separate logins, preferably with some level of permissions. I don’t want search engines crawling it so “unauthenticated” users should be rejected.
  3. Ability to accept comments. If people have something to say about a photo I want to hear about it.

Here are the options I found:

  • OpenPhoto. So much potential. A Yahoo engineer quit his job to work on this around 2011 or so. The goal was to provide an open-source alternative to Flickr or Instagram. The project was eventually renamed Trovebox. Even after Kickstarter funding, and $1M of additional funding, the project never reached critical mass and was shut down in 2014. It limped along but was really inactive as of 2015. The details of the venture are given on the OpenPhoto creator's website. Huge amount of promise, sad to hear it failed.
  • Piwigo. Looks old school but has an impressive lifespan: over 10 years and still in active development. The featureset is correspondingly fairly good and mature. It has support for batch uploading, all kinds of plugins, themes, etc. There are apps for uploading from Android and Apple devices. I installed it and think I dislike it least. My only complaint: why does it feel so clunky?! Also, I had to modify my php.ini to include my locale in order for a bunch of errors (which were spewing to the webpage) to go away. I like how you can associate one photo with many albums. It has extensive plugins, including one that lets users add tags to photos. Lots of options for user management, and really easy-to-use server-side import.
  • ZenPhoto. Like Piwigo, ZenPhoto has been around for a while. It seems to have a lot of the same features, and seems slicker, but didn't have a good comment system (the comment UI took its design notes from some earlier decade of web design. It has something like six fields to post a comment. I think I could open a home loan with the same detailed form. What happened to single-field comment boxes?) and batch import didn't seem easy or possible. That said, its install was vastly superior to Piwigo's. Very nice install.
  • Lychee. Looks super slick. Extremely bare bones. New. Unlike zenPhoto and Piwigo, Lychee appears to follow the look and feel of the modern Internet. Work to support commenting and multiple users appears to be underway (and going slowly).
  • ownStagram. The idea here is to replace Instagram. It looks like capable software. Not sure how it fares on comments or bulk upload. The UI isn't that sleek – weird-looking ripoffs of Instagram images. I didn't try this one as it didn't look compelling from the UI standpoint (and it has no documented featureset, and only two contributors on GitHub). It does have an Android app though!
  • MediaGoblin. I have no idea what is going on here. The concept is cool: federated media sharing. Media includes audio, pictures, movies, etc. I installed it and found that to even get it to work I had to deviate from the install instructions and use an older version of flup. Once that was installed I found I was able to upload photos one at a time. No real support for server-side importing (some ideas are in the works). I was left scratching my head. I did a quick LOC count – 36k lines of code to do single-photo uploading? I couldn't get PDF or MP4 uploads to work. A lot of the unit tests weren't passing. Augh, GNU.

Given the state of things, I’d say Lychee is my first pick, were it not for missing multi-user and comment features. For now I’m sticking with Piwigo.

Good Settings for QNAP

The last few years of using my QNAPs have been great. QNAP isn’t perfect but it is a good, quiet, Linux-ish platform for serving up files. Since some of my family are buying their own, I thought I would document some of the settings I commonly change:

  • General: I force HTTPS (it's silly and insecure not to). I tend to give QNAPs hostnames that allow for future growth (fenqnap-1, fenqnap-2, etc.). I synchronize with NTP (important for making sure that a pair of QNAPs stay in sync).
  • Storage manager: When I provision a new QNAP I use thin provisioning to form a storage pool, as this lets you flexibly break out individual volumes of any size – either thin or thick will let you do snapshotting. On my volumes I always enable encryption – but DO NOT save the password (that entirely defeats the purpose of disk encryption!). Also do NOT use a static single volume – it's too limiting!
  • Networking: On my TS-453 Pro I have four NICs. I bond two using the ALB scheme, and use that as my default gateway. I then leave two free, since for virtualization I only had luck configuring the virtual switch on a non-trunked eth. The ALB scheme has two advantages: load balancing and redundancy. Each client will ask for files over a given interface, but the QNAP picks an output eth based on a hashing scheme. Additionally, having ALB turned on means that if one link goes down, it will be removed from the hashing scheme, and the other links will be used instead.
  • Security: I allow all connections, but under "Network Access Protection" I lock out multiple failed attempts forever
  • Hardware: I disable beeps for system operations – QNAPs are a bit too beepy
  • Power: I make sure the QNAP always turns back on.
  • Notification: Configure to send alerts to your email
  • Shared Folders: I like to group things at a shared folder level – pictures in one folder, docs in another, music in another, ISOs in another, etc. It turns out that if you don't group at the shared folder level, you can run into some funky permissions problems (specifically: I found I couldn't reliably restrict access to my photos even when "Advanced Permissions" clearly limited access – it appeared that QNAP was accessing photos via the admin user, which always has access; I filed a bug with QNAP but they don't appear to have resolved it entirely). I also enable "Advanced Folder Permissions" to let me lock out individual files via setfacl/getfacl.
  • Network sharing ("Win/Mac/NFS"): I like leaving Windows (SMB) and Mac (AFP) enabled, but NFS just isn't secure, so I disable it.
  • FTP: Disable it – it's insecure
  • Network recycle bin: I disable it – it doesn't fit any workflow I use, and I don't want stuff piling up in it.
  • QSync: In the past I have disabled this, but I do think it could be useful. Until recently I was pretty fine keeping all my data on the NAS. There are times – when I am away from the Internet, for example – when I want to at least, say, edit my journal, then sync to the NAS later. QNAP's QSync feature directly addresses this, even letting you resolve conflicts (when multiple people are editing a file, for example) and keep versions. QNAP stores your QSync files inside the user's home directory on the NAS. Even though this feature is useful, I think I still prefer my own approach: 1) nightly snapshots on the QNAP, and 2) rsyncs in crontabs. Why? First of all, QSync only runs on Mac/Windows, so Linux is out of luck. Second, the QSync client is very limiting: it only lets you specify a single location where you put all your files. That might work, except even if I symlink in other locations it doesn't do the right thing. Notwithstanding my reservations, I think QSync is great for single-folder to single-folder stuff. In my case I'll stick to rsync and crontab.
  • Station manager: I disable the music station, as the iTunes server is the only real way I would stream music from the QNAP.
  • Multimedia management: I disable indexing images in my document folder. The idea is: I’d like one location where even images, such as sensitive document scans, aren’t indexed for general viewing
  • VPN Server: I enable this, even though I don’t use it, because QNAP appears to do crazy things when running your own openvpn client unless there is already some VPN service enabled.
  • Antivirus: I do daily scans
  • *: Pretty much everything else gets disabled

On top of this I configure my own key-based (passwordless) VPN, and then set up individual backup jobs in the Backup Station. I describe how to set up the password-less VPN in a previous post. Each job connects to my backup QNAP over the VPN. I enable encryption, and also have the job apply custom permissions (be sure Advanced Permissions is enabled on the other QNAP!). I have each job auto-sync on a schedule.

Note: I don’t use RTRR, even though it seems cool, because it doesn’t fit my workflow – i don’t want the QNAP auto-sycning live, as it would eat up too much bandwidth – im ok with a nightly sync. Plus I have no idea what RTRR uses as a protocol – and I am doubtful whatever it is really is superior to rsync (not that I am incredulous, but if it is QNAP sure is keeping it a secret).

Note 2: On the destination QNAP I go into the Backup Station and enable the rsync server – but only the middle checkbox (you don't need to enable the one that makes you enter a username and password). Once enabled, you can create a user, or just use admin, then on the source QNAP use that user's credentials to set up the rsync.

Note 3: I don't use myQNAPcloud. It is a nice service, but since I have my own VPN I can access my QNAPs just as if I were at home.

Note 4: In terms of extra apps I get by with simply installing HD Station and the CodexPack to enable HW-accelerated transcoding. I also use Virtualization Station so I can run a few "real" Linux VMs on the QNAP. I've been very interested in Container Station, but as yet haven't gotten it to work.

Note 5: Be sure you run nessus, or nmap, to get a good profile of any vulnerabilities on your QNAP. I found a few ports, like 631, that absolutely did not need to be open; in some cases I found a service that was configured but that I didn't need, so I shut it down. Sadly in the IPP (631) case, I could find no way to shut it down.

Note 6: If you want to do your own version of QSync: I just set up password-less logins from all the sync "sources", then create a directory (or set of directories) I want to sync. I wrote a single script to sync these folders, something like:

#!/bin/bash
# Backup docs
rsync -auvhP /local/path/syncToNas/* admin@qnap1:/share/CE_CACHEDEV2_DATA/DocShare/mac
# Backup passwords
rsync -auvhP /local/path/.password-store admin@qnap1:/share/CE_CACHEDEV2_DATA/DocShare/passwords/

You can put this script, call it backupMac.sh (or whatever), in your /local/path/syncToNas to be sure your script, and your local files, are synced. Your crontab would then just be something like:

* * * * * /local/path/syncToNas/backupMac.sh > /dev/null 2>&1

Of course this only works on Mac and Linux; for Windows maybe you go ahead and use QSync.

Learning Log: React.js and HTML5 video

Carson’s Web Development Maxim: If you don’t like any of the web toolkits that are available just wait a few months: there will be ten new ones waiting for you.

I like, and still use, Angular, but I wanted to try something new. Enter React: a framework from the Facebook side of the fence. What better way to learn than with a project! I decided to make an app to help my kids clean up.

The concept behind the application is to track the kids' progress as they pick up toys. Specifically, they are asked to pick up six toys at a time, take a picture of each toy, and put the toy away. Thankfully I had an old Nexus 7 tablet laying around that nobody was using. Also, Chrome's support for HTML5 video lets a web app access the tablet's camera. Looks like I am set!

After reading through the “comments” tutorial for React I whipped up the following (shown running on the Nexus 7):

Screenshot_20151229-110001

The user interaction is designed as follows: each kid has a row for their "pickup tiles." For each toy, they click a new tile and the app takes a picture. They can retake a square as many times as they want. When they fill all six squares a tantalizing reward appears: an animated GIF of their choosing (yes, yes – the 90s are calling and they want their webpage back!). This is shown below:

Screenshot_20151229-110407

This app was interesting from an HCI perspective. I made the following observations:

  • Kids had a hard time using the tablet to “click” on a tile. I think their little fingers don’t register too well on a tablet.
  • There was no way to preview: maybe clicking should have shown a live preview which snapped the picture after a 2-second delay.
  • The kids were more interested in taking pictures of toys than putting them away. This resulted in toys being piled up near the tablet, rather than being put away, as kids ran around finding new toys to take pictures of.
  • Pickup actually took longer than without the app. Even when the kids put things away they took a lot longer staring at the screen.

Originally I was going to back this app with a server-side component that would store all the pictures, the thought being that it would be fun to see the silly things the kids picked up. In the end I decided to skip this because of my observations from some trial runs, which basically indicated that the app makes pickup worse. I'll just stick to the traditional approach of motivating kids (e.g. "Pick up your toys or I'll give them to Chuck at work").

From a programming perspective here are a few observations:

  • React is awesome. I was amazed at Angular's novel approach that effectively extended HTML to allow you to embed display logic in your markup. My mind was blown again by JSX, which essentially goes the opposite route, letting you embed markup naturally inside your code. I liked the way you could easily form components and interact with their data models.
  • Also, in terms of HTML5 video in Chrome: this worked perfectly. The only annoyance is that you must use HTTPS to get at the camera, which doesn't seem necessary on a local net. Also the aspect ratio on my Nexus 7 is weird when in portrait view.
  • There is a ton of wasted space in Chrome on the tablet. I couldn't find any currently-supported method for making a web app full screen. There had been support in the past, but when I looked in Chrome's settings it appears the options are no longer available. Wouldn't it be nice to see your app in full screen, especially on such a space-constrained device?

The source code for my app is here.

Containers for Fun and Profit

With all the whirl about containers I decided I could wait no longer to join the fray. Here is a log of some of the things I learned.

Basic test, e.g. what the heck is this?

I spun up a CentOS 7 VM several months back. Apparently my repos were a tad old and things didn't work until I did a yum update first, then a reboot. Then I could systemctl enable docker and systemctl start docker.

Note that you MUST sudo for docker commands to work.

Also note: CentOS 7 has docker in its repos natively – no need to wget an installer (unless you want to use some of the newer features like the networking module).

The following is your smoke test:

sudo docker run hello-world 

Test 2: Expanded sample

This worked flawlessly…

Test 3: Interactive sample

Ditto, worked perfectly

Test 4: Do my own thing

I made a “looper.py” app with the following code:

#!/usr/bin/python
import time
while True:
	print "Hi! ", time.time()
	time.sleep(1)

I then made the following Dockerfile:

FROM docker.io/centos:latest
RUN yum install python -y
COPY looper.py /
CMD /looper.py

And created the image with:

	docker build -t looper .

When I ran the build, one of the things I noticed is that python was already installed in the centos image. I modified the Dockerfile by removing the RUN line, and one cool thing is that when I re-ran the build command, the python install layer was automatically removed, and everything else was basically a no-op. In other words, docker appears to do a good job of being efficient.

I then ran my looper:

	docker run looper

Nothing happened… So I thought… and thought… and eventually decided to try to attach. Unbeknownst to me, by doing docker run I WAS attached, but nonetheless I learned a few things:

  • To attach you need your container id
  • To get your container id you run “docker ps”

Once I did a docker attach to my container id, I still saw nothing. I did a Ctrl-C and voila, my looper output appeared! I suspected buffering, which turned out to be the case. I modified looper as follows:

#!/usr/bin/python
import time
import sys
while True:
	print "Hi! ", time.time()
	sys.stdout.flush()
	time.sleep(1)

Then rebuilt, and re-ran, and it all worked.

Note that this only runs the command in the foreground. To run it in the background:

	docker run -d looper

You can then docker ps, find the cid, and docker attach to it. But… you cannot detach (without sending a SIGKILL)! The docs say Ctrl-P, Ctrl-Q will detach, but this appears to only work if you use the following command when running it:

	docker run -tdi looper

Where t means create a tty, and i means “keep stdin open even if not attached”. This works well.

Note that each time I make changes to looper, the rebuild takes at most 20 seconds; if there are no changes, docker takes milliseconds.

Test 5: Layers

What if the container modifies a file?

I modified looper.py to write to stdout and a file:

#!/usr/bin/python
import time
import sys
while True:
	msg = "Hi! " + str(time.time())
	print msg
	with open('myfile','a') as f:
		f.write(msg + '\n')
	sys.stdout.flush()
	time.sleep(1)

For fun I created a file named "myfile", then built the image, then ran the container. While it runs I can do a docker diff:

# docker diff f04849645523
A /myfile

And to be clear, this means the file was added in the image. Docker won't let the app reach into my own version of "myfile".

What if I want to see the file? In older versions of docker you apparently had a few options, such as running ssh or making a snapshot, but now it's easy:

	docker exec -t -i <cid> /bin/bash

You can then just cat the file, etc. If you actually want to copy files out, you can export the whole filesystem (docker export) as a tar, but this seems nuts. If you just want a single file, use docker cp:

	docker cp <cid>:<src> <dest>

Test 6: CPU limit

You can do a couple things:

1) Limit the share of CPU usage across multiple containers. This is done by specifying a relative weighting (with -c).

2) Pin the process to certain CPUs with --cpuset-cpus=.

I haven't been able to find an equivalent to the simple "limit to N processors" idea on virtual machines. The weighting is fairly close.
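As a sketch of how those two flags might combine (the values here are illustrative, not recommendations), this builds a docker run invocation for the looper image with half the default CPU weight, pinned to the first two CPUs:

```python
import subprocess

# Illustrative values: -c 512 is half the default share of 1024,
# and --cpuset-cpus=0,1 pins the container to CPUs 0 and 1.
cmd = [
    "docker", "run", "-d",
    "-c", "512",            # relative CPU weight
    "--cpuset-cpus=0,1",    # pin to CPUs 0 and 1
    "looper",
]
print(" ".join(cmd))
# subprocess.check_call(cmd)  # uncomment on an actual docker host
```

Remember the weighting only matters under contention: a lone container with -c 512 can still use all the CPU it is allowed to touch.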

Migrating QNAP Static Vol to Storage Pool

WARNING: This is risky. Just assume that you would only try this if you were ok losing your data.

When I got my first QNAP almost two years ago it didn't support storage pools (or if it did, I was oblivious). One of the advantages of storage pools is that they enable QNAP's snapshot replica feature. The trick is that you cannot migrate static volumes to storage pools – or so says QNAP.

But it turns out there is a way to do this. Say you have a RAIDed static volume. There are at least two disks in such a volume, and you can withstand the loss of a single drive. You can “migrate” your data as follows:

  1. Shutdown NAS
  2. Remove all but one drive
  3. Power on NAS and wait for it to boot up. If you go to it (using QFinder for example) it will indicate that there are no disks.
  4. Insert a single drive (henceforth dubbed the “pool drive”)
  5. Choose “Restore to factory”
  6. Once it boots, delete the volume
  7. Use the deleted volume to create a storage pool and a new volume (I chose thin, because “why not”)
  8. Once the storage pool and volumes are created, shut down the NAS
  9. Remove the pool drive and insert the other drives
  10. Once the NAS starts up, restore to factory settings again.
  11. Once the factory reset is complete, insert the pool drive again.
  12. You can now create shared folders on the storage pool, then rsync from the old static vol to the new storage pool.

For the final step, once this is done (assuming you didn't lose any disks along the way!), you can expand the storage pool using the disks from the old static volume, and voila, you're back in action!

GSM Phone Tracking Methods

I decided to conduct a few tests with my FONA 808.

  • Battery. The at+cbc command gives you the current charge mode, percent charged, and millivolts. I found on the Adafruit FONA 808 that the charge mode indicator always gave status 0 ("not charging") even when charged.
  • GSM Location. This is given by at+cipgsmloc. It spits out lat and lon. I don't know where it is getting the data from, but based on the plots I made, the location it gives is an estimate based on the nearest cell tower's location.
  • GPS. Given by at+cgnsinf. I found this to be spot-on, always accurate, even with my pea-sized GPS antenna!
  • DIY cell tower triangulation. If you put the FONA into ENG mode ("AT+CENG=3"), it will give you the MNC, MCC, LAC, and CellId for the towers around you (I usually got six reports per "AT+CENG?" query). You can then use a site like cellphonetrackers.org to turn the tower info into a lat/lon coordinate. The +CENG messages also give a power level which you can use, in conjunction with the cell tower lat/lon coordinates, to perform the triangulation. The method of doing this is described elsewhere, but basically each power level becomes a weight, w_i = rx_i / (rx_0 + rx_1 + … + rx_n), which is multiplied by the lat/lon of the corresponding cell tower. You then just add up the weighted lat/lons, and voila!
    • My results?

      • I found that I lost about 10% battery on my 1-hour trip. This seems terrible – barely 10 hours per charge, extrapolating.
      • The GPS info was perfect. It took about 2 min to acquire.
      • The GSM location was too coarse, but was actually cleaner than my DIY location.
      • The DIY location seems crummy. I'm not sure what would fix this. Possible things to look at: 1) Filtering. Maybe I could throw out the lowest power rating, or smooth out the locations somehow. 2) See if there are better cell tower DBs. As far as I know, cell tower info isn't public, so any DB is most likely based on reported, possibly inaccurate, values.

      Here’s a map with plots of the three localization methods: GPS (magenta), GSM (green), and DIY Triangulation (yellow).

      gsmtrack
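The weighted-centroid step described above can be sketched in a few lines. The tower coordinates and rx power values below are made up for illustration; in practice they would come from the +CENG reports plus a cell tower database lookup:

```python
# Weighted-centroid "triangulation": each tower's lat/lon is weighted
# by its share of the total received power, then the weighted
# coordinates are summed.
def weighted_centroid(towers):
    total_rx = float(sum(rx for _, _, rx in towers))
    lat = sum(t_lat * rx / total_rx for t_lat, t_lon, rx in towers)
    lon = sum(t_lon * rx / total_rx for t_lat, t_lon, rx in towers)
    return lat, lon

# Hypothetical (lat, lon, rx power) tuples for three nearby towers
towers = [
    (40.100, -111.700, 30),
    (40.110, -111.710, 20),
    (40.105, -111.690, 10),
]
print(weighted_centroid(towers))
```

The estimate is naturally pulled toward the strongest tower, which is why throwing out the weakest reports (the filtering idea above) might clean things up.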

My favorite FONA commands

These are more like “the commands i have found useful as of present.” I bear no real affinity for them, except that I do appreciate the data they yield.

AT+COPS? Ensure you are connected to the network (it gives "+COPS: 0" if you are not)
at+ccid Get the SIM number; you need this for activation
AT+CMGF=1 This sets us into text mode. I haven't used the other mode (PDU) yet.
at+sapbr=3,1,"contype","gprs" You can set the connection type to GPRS (data) or CSD (circuit switched) – I think this is why you can call or do data on GSM networks, but not both (at the same time).
at+sapbr=3,1,"apn","wholesale" Until you set your access point name, you might not be able to do things like geolocate (based on cell towers) or do data stuff. TING's APN is "wholesale" – I'm sure it is different for every provider.
at+sapbr=1,1 Open up your bearer… sounds good, but I'm not entirely sure what that means.
at+cipgsmloc=1,1 This gives you your lat/lon. Note that on my old T-Mobile SIM card, which had no data plan, I got nothing back for some reason; apparently you have to have a data plan to get this info?

For SMS (which I used only briefly) I found the following useful:

at+cmgl="all" I didn't realize this, but all text messages really are stored somewhere in the provider's network, at least until you do something with them (makes sense) – I just found it interesting that I transplanted my SIM card from my T-Mobile device to my FONA and could see text messages from years ago.
at+cmgs="180188xyzwl" This is of course how you send a text. You press enter after typing the phone number ("1801…" – notice the leading "1", since I'm in the USA; not sure if this is needed, but it works with it). When you are done you must hit Ctrl-Z; if you hit Esc it cancels the message!
at+cmgr=<index> This is your way of reading a text message

For GPS I used the following (note I have v2 of the FONA 808 – the commands are different for v1):

AT+CGNSPWR=1 The device starts with the GPS off, so I turn it on, because I want it.
AT+CGNSINF Gives you a crudload of GPS-related info, including lat/lon, altitude, UTC, etc. (see table 2-2 in the "SIM800 GNSS Application Note")
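A +CGNSINF response is just a comma-separated payload, so pulling out the fix status, UTC stamp, and position is a one-liner-ish parse. The field positions below follow my reading of the SIM800 GNSS application note (run status, fix status, UTC, lat, lon, altitude, …), and the sample line is made up:

```python
# Parse the comma-separated payload of a +CGNSINF response.
# Assumed field order (per the SIM800 GNSS application note):
#   0: GNSS run status, 1: fix status, 2: UTC date/time,
#   3: latitude, 4: longitude, 5: MSL altitude, ...
def parse_cgnsinf(line):
    payload = line.split(":", 1)[1].strip()
    fields = payload.split(",")
    return {
        "run": fields[0] == "1",
        "fix": fields[1] == "1",
        "utc": fields[2],
        "lat": float(fields[3]),
        "lon": float(fields[4]),
        "alt_m": float(fields[5]),
    }

# Made-up sample response for illustration:
sample = "+CGNSINF: 1,1,20160627060000.000,40.104167,-111.701667,1400.0,0.0"
print(parse_cgnsinf(sample))
```

Checking the fix flag before trusting lat/lon is worthwhile; during the ~2-minute acquisition window the position fields can be empty or stale.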