Exporting Paths for QNAP Albums

As nice as the photo station is inside of the QNAP, sometimes you just want to do your own thing with your photos and videos. But what if they are organized into albums on the QNAP? Couldn’t you just pull down the photos/videos in the album? Or, better, get local paths on your QNAP network mounts? All this, and more, is possible with the following script:

#!/usr/bin/env python3
import subprocess
import os

parms = {}
parms['host'] = "qnaphostname"
parms['login'] = "admin"
parms['albumName'] = "Album1"
# Where the QNAP shares are mounted locally (OS X default)
localBase = '/Volumes'

cmdBase = """ssh %(login)s@%(host)s "/usr/local/mariadb/bin/mysql -u read --password=read s01 -B -S /tmp/mysql_mediadb.sock -e 'select cFileName, cFullPath FROM pictureAlbumTable picAlbum JOIN pictureAlbumMapping picMap ON picMap.iPhotoAlbumId = picAlbum.iPhotoAlbumId JOIN %(mediaTable)s pic ON pic.%(mediaColumn)s = picMap.iMediaId JOIN dirTable ON dirTable.iDirId = pic.iDirId WHERE picAlbum.cAlbumTitle = \\"%(albumName)s\\" and type = %(type)s; ' " """

# Photos and videos live in near-identical tables; query each in turn
tables = []
tables.append({'mediaTable': 'pictureTable', 'mediaColumn': 'iPictureId', 'type': '1'})
tables.append({'mediaTable': 'videoTable', 'mediaColumn': 'iVideoId', 'type': '2'})

allFilenames = set()
for table in tables:
    parms.update(table)
    cmd = cmdBase % parms
    print(cmd)
    res = subprocess.check_output(cmd, shell=True).decode()
    # Skip the first row, which has column headers
    hits = res.split("\n")[1:]
    print("Found %s hits" % len(hits))
    for f in hits:
        parts = f.split("\t")
        if len(parts) != 2:
            continue
        # lstrip the leading slash so an absolute cFullPath
        # doesn't make os.path.join discard localBase
        filename = os.path.join(localBase, parts[1].lstrip('/'), parts[0])
        allFilenames.add(filename)

with open(parms['albumName'], 'w') as out:
    out.write("\n".join(allFilenames))

This grabs videos and photos by using ssh to run the mysql query remotely. QNAP references videos and photos from a very similar set of tables in a mysql database (called s01 on my box). The username and password for this database are read/read. The -B option makes the results tab-delimited, and thus easier to parse.
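To make the parsing concrete, here is the same transformation done in plain shell: mysql -B emits a header row followed by tab-separated cFileName/cFullPath columns, which we skip and join into local paths. The sample rows below are made up for illustration; only the column names come from the query above.

```shell
#!/bin/sh
# Hypothetical sample of 'mysql -B' output: header row, then
# tab-separated cFileName and cFullPath columns.
sample='cFileName\tcFullPath
IMG_001.jpg\tMultimedia/2016/Vacation
clip.mp4\tMultimedia/2016/Vacation'

# Skip the header (tail -n +2), then join each row into a local
# path under /Volumes, just as the Python script does.
printf '%b\n' "$sample" | tail -n +2 | \
    awk -F'\t' '{ print "/Volumes/" $2 "/" $1 }'
```

Running this prints one local path per album entry, e.g. /Volumes/Multimedia/2016/Vacation/IMG_001.jpg.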

Of course this isn’t super polished or anything; just modify the host and albumName parameters and you should be fine. That is, assuming you’re on OS X. If you are not, modify localBase to point to your mount location for the QNAP shared folder.

Update: I missed this originally, but you also have to pass in the type for each of the two queries. If you don’t do this you invariably get more videos and pictures than are actually in the album. The query is now updated above.

Good Settings for QNAP

The last few years of using my QNAPs have been great. QNAP isn’t perfect but it is a good, quiet, Linux-ish platform for serving up files. Since some of my family are buying their own, I thought I would document some of the settings I commonly change:

  • General: I force HTTPS (it’s silly and insecure not to); I give QNAPs hostnames that allow for future growth (fenqnap-1, fenqnap-2, etc.); and I synchronize with NTP (important for making sure a pair of QNAPs stay in sync).
  • Storage manager: When I provision a new QNAP I use thin provisioning to form a storage pool, as this lets you flexibly break out individual volumes of any size – either thin or thick will let you do snapshotting. On my volumes I always enable encryption – but DO NOT save the password (that entirely defeats the purpose of disk encryption!). Also do NOT use a static single volume – it’s too limiting!
  • Networking: On my TS-453 Pro I have four NICs. I bond two using the ALB scheme, and use that as my default gateway. I then leave two free, since for virtualization I only had luck configuring the virtual switch on a non-trunked eth. The ALB scheme has two advantages: load balancing and redundancy. Each client will ask for files over a given interface, but the QNAP chooses an output eth based on a hashing scheme. Additionally, having ALB turned on means that if one link goes down, it is removed from the hashing scheme and the other links are used instead.
  • Security: I allow all connections, but under “Network Access Protection” I lock out multiple failed attempts forever.
  • Hardware: I disable beeps for system operations – QNAPs are a bit too beepy
  • Power: I make sure the QNAP always turns back on.
  • Notification: Configure to send alerts to your email
  • Shared Folders: I like to group things at the shared folder level – pictures in one folder, docs in another, music in another, ISOs in another, etc. It turns out that if you don’t group at the shared folder level, you can run into some funky permissions problems (specifically: I found I couldn’t reliably restrict access to my photos even when “Advanced Permissions” clearly limited access – it appeared that QNAP was accessing photos via the admin user, which always has access; I filed a bug with QNAP but they don’t appear to have resolved it entirely). I also enable “Advanced Folder Permissions” to let me lock out individual files via setfacl/getfacl.
  • Network sharing (“Win/Mac/NFS”): I like leaving Windows (SMB) and Mac (AFP) enabled, but NFS just isn’t secure, so I disable it.
  • FTP: Disable it – it’s insecure.
  • Network recycle bin: I disable it – it doesn’t fit any workflow I use, and I don’t want stuff piling up in it.
  • QSync: In the past I have disabled this, but I do think it could be useful. Until recently I was pretty fine keeping all my data on the NAS. There are times, when I am away from the Internet for example, where I want to at least, say, edit my journal, then sync to the NAS later. QNAP’s QSync feature directly addresses this, even letting you resolve conflicts (say multiple people are editing a file) and keep versions. QNAP stores your QSync files inside the user’s home directory on the NAS. Even though this feature is useful, I think I still prefer my own thing: 1) nightly snapshots on the QNAP, and 2) rsyncs in crontabs. Why? First of all, QSync only runs on Mac/Windows, so Linux is SOL. Second, the QSync client is very limiting: it only lets you specify a single location where you put all your files. That might work, except even if I symlink in other locations it doesn’t do the right thing. Notwithstanding my reservations, I think QSync is great for single-folder to single-folder stuff. In my case I’ll stick to rsync and crontab.
  • Station manager: I disable the music station, as the iTunes server is the only real way I would stream music from the QNAP.
  • Multimedia management: I disable indexing images in my document folder. The idea is: I’d like one location where even images, such as sensitive document scans, aren’t indexed for general viewing.
  • VPN Server: I enable this, even though I don’t use it, because QNAP appears to do crazy things when running your own openvpn client unless there is already some VPN service enabled.
  • Antivirus: I do daily scans
  • *: Pretty much everything else gets disabled
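To illustrate what the “Advanced Folder Permissions” bullet above buys you, here is the kind of setfacl/getfacl usage I mean. The share path and the user name are made up; this assumes the ACL tools are installed and the underlying filesystem supports POSIX ACLs.

```shell
#!/bin/sh
# Deny one specific user access to a single sensitive file,
# without touching the permissions on the rest of the share.
# (/share/DocShare and the user 'guest' are hypothetical.)
setfacl -m u:guest:--- /share/DocShare/taxes-2016.pdf

# Inspect the resulting ACL to confirm the entry took effect.
getfacl /share/DocShare/taxes-2016.pdf
```

The nice part is that this composes with the normal share-level permissions: the ACL entry only tightens access for that one user on that one file.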

On top of this I configure my own key-based (passwordless) VPN, and then set up individual backup jobs in the Backup Station. I describe how to set up password-less VPN in a previous post. Each job connects to my backup QNAP over the VPN. I enable encryption, and also have the job apply custom permissions (be sure Advanced Permissions is enabled on the other QNAP!). I have each job auto-sync on a schedule.

Note: I don’t use RTRR, even though it seems cool, because it doesn’t fit my workflow – I don’t want the QNAP auto-syncing live, as it would eat up too much bandwidth – I’m OK with a nightly sync. Plus I have no idea what RTRR uses as a protocol – and I am doubtful whatever it is really is superior to rsync (not that I am incredulous, but if it is, QNAP sure is keeping it a secret).

Note2: On the destination QNAP I go into the Backup Station and enable the rsync server – but only the middle checkbox (you don’t need to enable the one that makes you enter a username and password). Once enabled, you can create a user, or just use admin, then on the source QNAP use that user’s credentials to enable the rsync.

Note3: I don’t use myqnapcloud. It is a nice service, but since I have my own VPN I can access my QNAPs just as if I were at home.

Note4: In terms of extra apps I get by with simply installing HD Station and the CodexPack to enable HW-accelerated transcoding. I also use Virtualization Station so I can run a few “real” Linux VMs on the QNAP. I’ve been very interested in Container Station, but as yet I haven’t gotten it to work.

Note5: Be sure you run Nessus, or nmap, to get a good profile of any vulnerabilities on your QNAP. I found a few ports, like 631, that absolutely did not need to be open; in some cases I found a service that was configured but that I didn’t need, so I shut it down. Sadly, in the IPP (631) case, I could find no way to shut it down.

Note6: If you want to do your own version of QSync, just set up password-less logins from all the sync “sources”, then create a directory (or set of directories) you want to sync. I wrote a single script to sync these folders, something like:

#!/bin/bash
# Backup docs
rsync -auvhP /local/path/syncToNas/* admin@qnap1:/share/CE_CACHEDEV2_DATA/DocShare/mac
# Backup passwords
rsync -auvhP /local/path/.password-store admin@qnap1:/share/CE_CACHEDEV2_DATA/DocShare/passwords/

You can put this script, call it backupMac.sh (or whatever), in your /local/path/syncToNas to be sure your script, and your local files, are synced. Your crontab would then just be something like:

0 2 * * * /local/path/syncToNas/backupMac.sh > /dev/null 2>&1

Of course this only works on Mac and Linux; for Windows maybe you go ahead and use QSync.
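One wrinkle worth guarding against with a cron-driven rsync: if a sync runs long, cron will happily start a second copy on top of it. A minimal sketch of a lock wrapper for the crontab entry, assuming the util-linux flock tool is available (the script path is the same placeholder as above):

```shell
#!/bin/sh
# Run the backup under an exclusive lock. If a previous run still
# holds the lock, -n makes flock give up immediately instead of
# queuing up a second rsync behind it.
flock -n /tmp/backupMac.lock /local/path/syncToNas/backupMac.sh
```

You would then point the crontab line at this wrapper instead of calling backupMac.sh directly.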

Migrating QNAP Static Vol to Storage Pool

WARNING: This is risky. Just assume that you would only try this if you were ok losing your data.

When I got my first QNAP almost two years ago it didn’t support storage pools (or if it did, I was oblivious). One of the advantages of storage pools is that they enable QNAP’s snapshot replica feature. The trick is that you cannot migrate static volumes to storage pools, or so says QNAP.

But it turns out there is a way to do this. Say you have a RAIDed static volume. There are at least two disks in such a volume, and you can withstand the loss of a single drive. You can “migrate” your data as follows:

  1. Shutdown NAS
  2. Remove all but one drive
  3. Power on NAS and wait for it to boot up. If you go to it (using QFinder for example) it will indicate that there are no disks.
  4. Insert a single drive (henceforth dubbed the “pool drive”)
  5. Choose “Restore to factory”
  6. Once it boots, delete the volume
  7. Use the deleted volume to create a storage pool and a new volume (I chose thin, because “why not”)
  8. Once the snapshot pool and vols are created, shutdown the NAS
  9. Remove the pool drive and insert the other drives
  10. Once the NAS starts up, restore to factory settings again.
  11. Once the factory reset is complete, insert the pool drive again.
  12. You can now create shared folders on the storage pool, then rsync from the old static vol to the new storage pool.

Finally, once this is done (assuming you didn’t lose any disks along the way!), you can expand the storage pool using the old static-volume drives, and voilà, you’re back in action!
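For step 12, the copy itself is just a local rsync on the NAS, since both the old static volume and the new storage-pool volume are mounted under /share. A sketch, with made-up share paths (your CACHEDEV numbers and share names will differ):

```shell
#!/bin/sh
# Copy everything from the old static volume's share into the new
# storage-pool share. The trailing slash on the source means
# "contents of the directory", not the directory itself.
# Both paths are hypothetical examples.
rsync -avhP /share/CE_CACHEDEV1_DATA/OldShare/ /share/CE_CACHEDEV2_DATA/NewShare/
```

Because rsync is resumable, you can safely re-run this if it gets interrupted partway through.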