Speed of in-memory algorithms in scripting languages

This blog post gives some examples of how much slower in-memory algorithms are in scripting languages than in C.

Before writing this blog post I had the general impression that the speed ratio between code in a scripting language and code in C for the same CPU-bound algorithm is between 5 and 20. I was very much surprised that for LZMA2 decompression I experienced a much larger ratio between Perl and C: 285.

Then I looked at the C speeds and Perl speeds on the Debian Computer Language Benchmarks Game, and I've found these ratios (in decreasing order) between Perl and C: 413, 79.7, 66.3, 62.2, 49.2, 20.8, 12.1, 10.2, 5.87, 1.91. So it turns out that there is a huge fluctuation in the speed ratio, depending on the algorithm.


Some observations:

  • One doesn't need to use 64-bit registers or vector (SIMD) instructions (e.g. AVX, SSE, MMX) or other special instructions in C code to get a huge speed ratio: for LZMA2 decompression, there can be a huge speed difference even if all variables are 32-bit unsigned integers.
  • One doesn't need to use 64-bit code in C to get a huge speed ratio: for LZMA2 decompression, the benchmarked C code was running as 32-bit (i386, more specifically: i686) code.
  • One doesn't have to use C compiler optimization flags for fast execution (e.g. -O3 or -O2) to get a huge speed ratio: for LZMA2 decompression the size-optimized output of gcc -Os was that fast.
  • Cache usage (e.g. L1 cache, L2 cache, L3 cache) can have a huge effect on the speed of C code. The executable https://github.com/pts/muxzcat/releases/download/v1/muxzcat is 7376 bytes in total, thus the code fits into the fastest (L1) cache of modern Intel processors (L1 cache size is at least 8 KiB, typically at least 32 KiB). The data itself doesn't fit into the cache though.
  • I/O buffering and the associated memory copies can also affect execution speed. For LZMA2 decompression the typical size of read(2) calls is >60 KiB, and the typical size of write(2) calls is even larger (2--3 times larger); this is fast enough in both the C and the Perl code.
  • Memory allocation can also affect execution speed. The C code for LZMA2 decompression doesn't do any memory allocation. The algorithm of the Perl code doesn't do any either (but the Perl interpreter may do some as part of its overhead), except for the occasional exponential doubling of the string capacity. (Preallocating these string buffers didn't make it any faster.)
  • Even older C compilers (e.g. GCC 4.8 from 2014) can generate highly optimized low-level i386 machine code.
  • Some scripting languages are faster than others, e.g. Lua in LuaJIT and JavaScript in Node.js are typically faster than the Python, Perl and Ruby interpreters written in C, and PyPy is faster than the Python interpreter written in C.
  • Different integer sizes (e.g. 8-bit, 16-bit, 32-bit, 64-bit) can affect execution speed. Sometimes larger integers are faster (e.g. 32-bit is faster than 16-bit), because they are better aligned in memory, and fewer conversion instructions are necessary.
  • Integer fixups can contribute to the slowness of scripting languages. For example, the algorithm for LZMA2 decompression works with unsigned 32-bit integers, but Perl has only signed 64-bit or signed 32-bit integers, so the inputs of some operators (e.g. >>, <, ==, %, /) need to be bit-masked to get correct results. Out of these, / and % would be the slowest to fix, but since LZMA2 decompression doesn't use these operators, < is the slowest in practice: in total, the 32-bit Perl runs the LZMA2 decompression 1.1017 times slower than the 64-bit Perl, mostly because operator < and its possibly negative inputs need more complicated masking if Perl is doing 32-bit arithmetic.
  • Function calls can be very slow in scripting languages, while the C compiler can inline some of the smaller functions, avoiding most of the overhead. For LZMA2 decompression, manual inlining of the fixup for operator < on 32-bit Perls made the entire program about 1.3 times faster.

Matching balanced parentheses with recursive Perl regular expressions

This blog post explains how to use recursive Perl regular expressions (regexps) to match substrings with balanced parentheses. Recursive regular expressions are also available in Ruby (with a different syntax) and in the third-party regex module of Python (but not in the built-in re module), but they are explicitly not available in RE2.

Let's suppose the input file in.txt contains lines like:

a = EQ(x + 6, 42);
a = EQ((x + 6) * 2, 42);
if (x + 6 == 42) { ... }
if (EQ(x + 6, 42)) { ... }
if (EQ((x + 6) * 2, 42)) { ... }
Let's also suppose that we want to get all instances of EQ and if, with their arguments.

A non-recursive regexp can get only the instances without nested parentheses:

$ <in.txt perl -0777 -wne '
while (m@(?>\b(if|while|EQ|NE|LT|LE|GT|GE)\s*\(([^()]*)\))@g)
{ print "$1($2)\n" }'
EQ(x + 6, 42)
if(x + 6 == 42)
EQ(x + 6, 42)

With a recursive regexp we can get all matches:

$ <in.txt perl -0777 -wne '
while (m@(?>\b(if|while|EQ|NE|LT|LE|GT|GE)\s*(\(((?:[^()]+|(?2))*)\)))@g)
{ print "$1$2\n" }'
EQ(x + 6, 42)
EQ((x + 6) * 2, 42)
if(x + 6 == 42)
if(EQ(x + 6, 42))
if(EQ((x + 6) * 2, 42))

Please note that the EQ inside the if was not matched, because with the global flag (m@...@g) Perl doesn't consider overlapping or enclosed matches.

The recursive part of the regexp is the (?2): it's a recursive reuse of paren group 2. For more information about recursive regexps, see recursive subpatterns in perlre(1). The (?>...) construct is a performance optimization to prevent backtracking.

It's also possible to match individual (comma-separated) arguments. For example, here is how to match both arguments of EQ separately, recursively:

$ <in.txt perl -0777 -wne '
while (m@(?>\b(if|while|EQ|NE|LT|LE|GT|GE)\s*\(((?:[^(),]+|(\(((?:[^()]+|(?3))*)\)))*),\s*((?:[^(),]+|(?3))*)\))@g)
{ print "$1($2, $5)\n" }'
EQ(x + 6, 42)
EQ((x + 6) * 2, 42)
EQ(x + 6, 42)
EQ((x + 6) * 2, 42)

The non-recursive version matches the 2 arguments only if neither contains nested parentheses:

$ <in.txt perl -0777 -wne '
while (m@(?>\b(if|while|EQ|NE|LT|LE|GT|GE)\s*\(([^(),]*),\s*([^(),]*))@g)
{ print "$1($2, $3)\n" }'
EQ(x + 6, 42)
EQ(x + 6, 42)

Here is how to match only a single argument (no comma) recursively:

$ <in.txt perl -0777 -wne '
while (m@(?>\b(if|while|EQ|NE|LT|LE|GT|GE)\s*\(((?:[^(),]+|(\((?:[^()]+|(?3))*\)))*)\))@g)
{ print "$1($2)\n" }'
if(x + 6 == 42)
if(EQ(x + 6, 42))
if(EQ((x + 6) * 2, 42))

The non-recursive version matches a single argument only if it doesn't contain nested parentheses:

$ <in.txt perl -0777 -wne '
while (m@(?>\b(if|while|EQ|NE|LT|LE|GT|GE)\s*\(([^(),]*)\))@g)
{ print "$1($2)\n" }'
if(x + 6 == 42)


How to copy files securely between computers running Linux or Unix?

This blog post gives various recommendations on how to copy files securely between computers running Linux or Unix.

All the recommendations below copy the file in an encrypted way, protecting against eavesdropping and partially protecting against man-in-the-middle attacks (i.e. a third party tricking the receiver into accepting forged file contents).

If both computers run either Chrome or Firefox, and it's convenient for you to use these web browsers, visit any of the following sites to copy the file: sharedrop.io, reep.io, takeafile.com, send-anywhere.com, justbeamit.com. These sites use WebRTC (thus the transfer is encrypted) to copy the file directly from the sender to the receiver without uploading it to a server, and they traverse NAT firewalls using STUN and ICE. (Don't use sites based on WebTorrent (such as instant.io or file.pizza), because WebTorrent transfers are not end-to-end encrypted.)

Otherwise, if one of the computers is running the OpenSSH server (sshd), the other one is able to connect to it over the network, and you know a user's password on the server (or SSH public keys are set up instead of a password), then use scp or rsync. Otherwise, if one of the computers is able to connect to the other over the network, the client computer (the one which initiates the TCP connection) has the OpenSSH client (ssh) installed, you have root access on the server, and you don't mind installing software on the server temporarily, then follow the instructions in the One-off SCP with Dropbear section below.

The rest of the setups are typically useful if one of the computers has been installed recently (so it doesn't contain your SSH private keys yet), or you don't want either of them to act as a server, or you don't have root access.

Otherwise, if both computers are connected to the same local network (e.g. same wifi network), and they are able to connect to each other, try ecplcnw (available and documented here: https://github.com/pts/copystrap).

Otherwise, if both computers have web access, and you don't mind uploading securely encrypted files to a shared hosting provider, use ecptrsh (available and documented here: https://github.com/pts/copystrap).

Otherwise, if you have a USB pen drive, SD card, external hard disk or other writable storage medium which you can physically take from one computer to another, use ecplmdr (available and documented here: https://github.com/pts/copystrap).

Otherwise I have no secure and convenient recommendation for you.

Other secure options for file copy

  • Direct connection between the computers using an Ethernet cable or serial cable. This can work, but it is not convenient, because it needs rare hardware and increasingly rare ports on the laptops and extensive and error-prone manual setup.
  • netcat for transfer + GPG for encryption. Some more details here. This is similar to ecplcnw above, but less convenient and less secure, because user-invented passphrases tend to be weak, and strong passphrases are long and cumbersome to type. Also it's a bit inconvenient to get the IP addresses in the command-lines right. Also the user has to remember some GPG quirks to get the security right: specifying --force-mdc and checking the return value of gpg -d.
  • USB pen drive + GPG for encryption. This is similar to ecplmdr above, but less convenient and less secure, because user-invented passphrases tend to be weak, and strong passphrases are long and cumbersome to type. Also the user has to remember some GPG quirks to get the security right: specifying --force-mdc and checking the return value of gpg -d.
  • Using a QR code and scanning it with the webcam: qrencode + zbarcam + GPG for encryption. This works for files smaller than about 10 KiB, because the resolution of the webcam in many laptops is not good enough to scan large QR codes. Without GPG this is not secure, in case someone is taking a video recording of the computer screen. Also the user has to remember some GPG quirks to get the security right: specifying --force-mdc and checking the return value of gpg -d.
  • Setting up the secret key in your YubiKey on one computer, copying the public key from it onto the second computer, and connecting via ssh to the second computer. This works if you already have a YubiKey, the first computer is nearby, and it's convenient for you to set up and dump keys on your YubiKey. How to retrieve the SSH public key from the YubiKey: use ssh-add -L | grep cardno: . Because of the many skilled manual steps involved, this solution is less convenient than the recommendations above.
  • Setting up the secret key in your YubiKey on one computer, adding metadata, then using the list command in gpg --card-edit to get the metadata. This can be used to copy a few hundred bytes if both computers are nearby (i.e. you can connect the same YubiKey to both). This is similar to using an USB pen drive to copy files, but perhaps a bit more secure. (It's more secure only if an attacker stealing your YubiKey can't extract the metadata without knowing the passphrase. This has to be checked.)


Security requirements:

  • It encrypts the data end-to-end, so only the receiver is able to decrypt it.
  • The receiver is able to detect if the data is indeed what the sender has sent (e.g. it was not tampered with and it was not replaced by the data provided by the attacker).

Convenience requirements:

  • It works on the command-line.
  • It works as a regular user (non-root) on both computers.
  • It works without software installation on both computers.
  • It works without creating any file other than the output data file in the receiver. (We can relax this: a few small temporary files are OK, if they get removed automatically in the end.)
  • It works with very little typing (at most 20 characters of key typing in total). Copy-pasting is OK, but not between the sender and the receiver.
  • There is a mode which works on the local network without a public or local service running and without extra hardware (network cables or USB pen drives).
  • There is a mode which works without a local network and without extra hardware; it is allowed to use a public service.
  • There is a mode which works without any network (local network or internet); it is allowed to use a USB pen drive.

One-off SCP with Dropbear

If one of the computers (let's call it the client) has the OpenSSH client (ssh) installed and is able to connect to the other computer (let's call it the server), you have root access on the server, the server doesn't have a working OpenSSH server (sshd) installed, and you don't mind installing software on the server temporarily, you can follow these steps to copy files securely.

On the server, install Dropbear. For example, on Debian 9 or later, run this as root (without the leading #):

# apt-get install dropbear-bin

On the server, install the scp command-line tool, part of OpenSSH. For example, on Debian 9 or later:

# apt-get install openssh-client

On the server, generate an SSH host key, and start the server:

# dropbearkey -t rsa -s 4096 -f dbhostkey
# /usr/sbin/dropbear -r dbhostkey -F -E -m -w -j -k -p 64358 -P dbtmp.pid

The last command (dropbear) makes the Dropbear SSH server keep running and serving incoming connections until you press Ctrl-C in the terminal window. This is normal.

When dropbearkey above prints the host key fingerprint (the line starting with Fingerprint: md5), remember the value, because you will have to compare it with the value printed by the client.

On the client, initiate the copy with the following command (without the leading $):

$ SSH_AUTH_SOCK= scp -o Port=64358 -o HostName=... -o User=... \
    -F /dev/null -o UserKnownHostsFile=/dev/null \
    -o HostKeyAlgorithms=ssh-rsa -o FingerprintHash=md5 SOURCE DESTINATION

In the command above:

  • Specify HostName=... as the host name of the server.
  • Specify User=... as the non-root user name to be used on the server. scp will ask that user's password on the client.
  • SOURCE and DESTINATION can be a filename on the client, or, if prefixed by r:, then it's a filename inside the home directory of the user on the server.
  • If scp complains about FingerprintHash, then drop the -o FingerprintHash=md5, and try again.
  • When the client prints RSA key fingerprint is MD5:..., compare the ... value with the value printed by dropbearkey on the server. If they don't match perfectly, stop. If you continue even then, then you may be a victim of a man-in-the-middle attack, and your copy is not secure.

You may run multiple copies with scp between the client and the server.

As an alternative to scp, you can also use rsync to do the copies (if rsync is installed to both the client and the server). The command to be run on the client looks like this:

$ SSH_AUTH_SOCK= rsync -e 'ssh -o Port=64358 \
    -o HostName=... -o User=... -F /dev/null \
    -o UserKnownHostsFile=/dev/null -o HostKeyAlgorithms=ssh-rsa \
    -o FingerprintHash=md5' --progress -avz SOURCE DESTINATION

Abort Dropbear on the server by pressing Ctrl-C.

Having run the copies, remove unnecessary packages from the server. For example (do it carefully, don't remove anything you need), on Debian 9:

# apt-get purge dropbear-bin libtommath1 libtomcrypt1
# apt-get purge openssh-client


How to force OpenSSH to log in with a specific password or public key

This blog post explains how to force the OpenSSH client to log in with a specific password or public key. This is useful if some of the SSH client config files (/etc/ssh/ssh_config, /etc/ssh/ssh_known_hosts, /etc/ssh/ssh_known_hosts2, ~/.ssh/config, ~/.ssh/known_hosts) or the ssh-agent are in a broken state, and you want to try whether login works independently of these client-side issues.

Run this command to log in, substituting the "${...}" values:

SSH_AUTH_SOCK= /usr/bin/ssh -F /dev/null \
    -o UserKnownHostsFile=/dev/null -o GlobalKnownHostsFile=/dev/null \
    -o StrictHostKeyChecking=no \
    -p "${PORT}" -i "${KEYFILE}" -- "${USERNAME}"@"${HOST}"

Usage notes:

  • To use the default port (22), drop the -p "${PORT}".
  • To use password login instead of public key login, drop the -i "${KEYFILE}".
  • If you don't know where your private key file is, try -i ~/.ssh/id_rsa
  • To use the same username as your local client username, drop the "${USERNAME}"@.

How it works:

  • SSH_AUTH_SOCK= disables the ssh-agent for this connection.
  • Spelling out /usr/bin/ssh makes sure that shell aliases, shell functions and strange directories in $PATH have no effect on which SSH client is used.
  • -o UserKnownHostsFile=/dev/null -o GlobalKnownHostsFile=/dev/null makes the client ignore existing host keys in the known_hosts files, thus the connection will be established even if old or incorrect host keys are saved there. Please note that this also makes it impossible to detect a man-in-the-middle attack, so attackers may be able to steal your password if you use a password to log in; attackers can also steal the contents of your session (commands and their results).
  • -o StrictHostKeyChecking=no suppresses the prompt to add the host key to the known_hosts files.


A quest to find a fast enclosure for multiple SATA 3.5" hard drives

This blog post documents the quest I'm undertaking to find a fast enclosure for multiple SATA 3.5" hard drives, supporting both USB 3 and eSATA, and the ability to read from both hard drives at the same time with at least 275 MB/s total speed. So far I haven't found a fast enough enclosure, so the quest is still ongoing. I'll keep updating the blog post with speed benchmark results.

The maximum sequential read speeds my drives support are 112 MB/s and 170 MB/s. (There are much faster drives on the market, e.g. the Seagate IronWolf NAS 10 TB can read at 250 MB/s in the first 1 TB of the disk.)

I've decided not to order the IcyBox IB-RD3662U3S, because my online research indicates that it would be too slow. It uses the JMicron JMB 352 chipset (produced in 2014), which doesn't support UASP (thus it's slow and uses too much CPU), and its maximum SATA speed is 3 Gbit/s.

I've ordered the StarTech S3520BU33ER instead, which uses the JMicron JMS 562 chipset (also produced in 2014); it supports UASP, and its maximum SATA speed is 6 Gbit/s. I'll run the benchmarks after it arrives.

I've also found the OWC 0GB Mercury Elite Pro Dual RAID USB 3.1 / eSATA Enclosure Kit, which is potentially even faster. It supports USB 3.1, eSATA and UASP, and claims to be very fast: more than 400 MB/s over both USB and eSATA. It also uses the same chipset: JMicron JMS 562. It's available from amazon.com and from the manufacturer's webshop (with expensive international delivery).

Depending on the computer, it can be much faster to connect the 2 hard drives in separate single-drive enclosures, using separate USB 3 ports or an unpowered hub. I'm not pursuing this option right now, because I have other uses for my USB ports, and I want low CPU usage (eSATA uses less CPU than USB 3).

For a home media server, it may be cheaper to buy a NAS, e.g. the QNAP TS-251+ with Ethernet and HDMI ports, DLNA with full HD video transcoding and other media server features, with a maximum transfer speed of 224 MB/s. (Other kinds of QNAP NASes don't seem to be any faster.) However, with a NAS I wouldn't get the flexibility and configurability of a stock Debian operating system running on a stock amd64 machine with 4 GiB of RAM.


How to update the BIOS on a Lenovo T400 laptop

This blog post explains how to update the BIOS to version 3.24 (released on 2012-12-16, latest release as of 2018-04-21) on a Lenovo T400 laptop.

You will need a working and charged battery pack for the BIOS update, so install the battery pack first and start charging it.

If you are running Windows XP, Windows Vista or Windows 7 on the laptop, download the BIOS Update Utility from here (choose the 32-bit or the 64-bit version depending on your Windows type, or try both versions if you don't know), run it, and you are done.

Otherwise, if you are able to burn a CD or DVD (either on the Lenovo T400 laptop or on another computer), and you have a working DVD reader in the Lenovo T400, then download the installer DVD .iso from here, burn it to a DVD, insert the DVD into the Lenovo T400, reboot the Lenovo T400, press the blue ThinkVantage button (near the top left corner of the keyboard), press F12 to select a boot device, select the DVD, and boot from it.

Otherwise, if you have a USB pen drive of at least 34 MB in size whose contents can be overwritten, and you have a Linux system running (either on the Lenovo T400 laptop or on another computer), then connect the pen drive and figure out its device name using sudo fdisk -l (typically it will be /dev/sdb or /dev/sdc, but be extra careful, otherwise you will overwrite the contents of some other drive). Run this command to download the image: wget https://download.lenovo.com/ibmdl/pub/pc/pccbbs/mobiles/7uuj49uc.iso . Run this command to copy the bootable BIOS update utility to the pen drive (replacing /dev/sdB with the device of the pen drive): sudo dd if=7uuj49uc.iso of=/dev/sdB bs=49152 skip=1; sync . Then insert the pen drive into one of the USB slots of the Lenovo T400, reboot the Lenovo T400, press the blue ThinkVantage button (near the top left corner of the keyboard), press F12 to select a boot device, select USB HDD, and boot from it.

After booting into the BIOS update utility, follow the instructions to update the system software. (Don't reboot or turn the laptop off until asked.) The next reboot will take longer, and the Lenovo logo will appear and disappear 3 times. After that you are done.

Now if you enter the BIOS setup at boot time (by pressing the blue ThinkVantage button), you will see version 3.24 (7UET94WW) 2012-10-17.


How to change which characters are selected by double-clicking in xterm

Various terminal emulators on Linux (e.g. xterm, gnome-terminal, rxvt) have word selection: when you double-click a character, it selects the entire word containing the character. This blog post explains how to customize which characters are part of a word in xterm.

The default word characters of the various terminal emulators (each in addition to digits and the letters a-z and A-Z) are:

  • gnome-terminal: # % & + , - . / = ? @ \ _ ~
  • rxvt: ! # $ % + - . / : _
  • xterm default: _
  • xterm in Ubuntu: ! # % & + , - . / : = ? @ _ ~

It's possible to customize which characters are part of a word in xterm by specifying the charClass resource. A :48 suffix means: consider these characters part of a word (48 is the character class of the digits and letters). The numbers before the colon are ASCII character codes or ranges, for example 43-47 means the ASCII characters 43 (+), 44 (,), 45 (-), 46 (.) and 47 (/).

Here is how to trigger various default behaviors from the command-line:

  • gnome-terminal: xterm -xrm '*.VT100.charClass: 35:48,37:48,38:48,43-47:48,61:48,63-64:48,92:48,95:58,126:48'
  • rxvt: xterm -xrm '*.VT100.charClass: 33:48,35-37:48,43:48,45-47:48,58:48,95:58'
  • xterm default: xterm -xrm '*.VT100.charClass: 95:48'
  • xterm in Ubuntu: xterm -xrm '*.VT100.charClass: 33:48,35:48,37-38:48,43-47:48,58:48,61:48,63-64:48,95:48,126:48'

To save the setting permanently, add a line like this to your ~/.Xresources file (create it if it doesn't exist):

! Here is a pattern that is useful for double-clicking on a URL (default xterm in Ubuntu):
XTerm.VT100.charClass: 33:48,35:48,37-38:48,43-47:48,58:48,61:48,63-64:48,95:48,126:48

Make sure that the line containing charClass doesn't start with !, because then it would be a comment.

The change takes effect automatically the next time you log in. To make it take effect earlier (for all xterms you start), run: xrdb -merge ~/.Xresources


How to restrict an SSH user to file transfers

This blog post explains how a user on a Unix server can be restricted to file transfers only over SSH. The restriction is implemented by specifying a login shell which imposes a whitelist of allowed commands (e.g. rsync, sftp-server, scp, mkdir), and Unix permissions are used to restrict which files can be read and/or written by these commands.

Implementation using a custom login shell

First install Python 2 (as /usr/bin/python), then create a custom login shell script, and save it to e.g. /usr/local/bin/transfer_shell. The contents of /usr/local/bin/transfer_shell should be:

#! /usr/bin/python
# by pts@fazekas.hu at Wed Dec  6 15:46:18 CET 2017

"""Login shell in Python 2 for SSH service restricted to data copying.

Use normal Unix permissions to restrict what files can be accessed.
"""

import os
import sys

if os.access(__file__, os.W_OK) or os.access(
    os.path.dirname(__file__), os.W_OK):
  sys.stderr.write('error: copy shell not safe\n')
  sys.exit(1)
if os.getenv('SSH_ORIGINAL_COMMAND', ''):
  sys.stderr.write('error: bad command= config\n')
  sys.exit(1)

cs = (len(sys.argv) == 3 and sys.argv[1] == '-c' and sys.argv[2]) or ''
if cs == '/bin/sh .ssh/rc':  # Run by sshd at session startup; do nothing.
  sys.exit(0)
cmd = cs.split()
# cmd0 will be '' for interactive shells, thus it will be disallowed.
cmd0 = (cmd or ('',))[0]
if cmd0 not in ('ls', 'pwd', 'id', 'cat', 'echo', 'cp', 'mv', 'rm',
                'mkdir', 'rmdir',
                'rsync', 'scp', '/usr/lib/openssh/sftp-server'):
  # In case of sftp, we can't write to stderr.
  sys.stderr.write('error: command not allowed: %s\n' % cmd0)
  sys.exit(1)


def is_scp_unsafe(cmd):
  has_tf = False
  for i in xrange(1, len(cmd) - 1):
    arg = cmd[i]
    if arg == '--' or not arg.startswith('-'):
      break  # Only filenames follow.
    elif arg in ('-t', '-f'):  # Flags indicating remote operation.
      has_tf = True
    elif arg not in ('-v', '-r', '-p', '-d'):
      return True
  return not has_tf


if ((cmd0 == 'rsync' and (len(cmd) < 2 or cmd[1] != '--server')) or
    (cmd0 == 'scp' and is_scp_unsafe(cmd))):
  # This is to disallow arbitrary command execution with rsync -e and
  # scp -S.
  sys.stderr.write('error: command-line not allowed: %s\n' % cs)
  sys.exit(1)
os.environ['PATH'] = '/bin:/usr/bin'
os.environ.pop('DISPLAY', '')  # Disable X11.
os.environ.pop('XDG_SESSION_COOKIE', '')
os.environ.pop('XAUTHORITY', '')
try:
  os.chdir(os.path.join(os.getenv('HOME', '/'), 'data'))
except OSError:
  sys.stderr.write('error: data dir not found\n')
  sys.exit(1)
try:
  # This is insecure: os.execl('/bin/sh', 'sh', '-c', cs)
  os.execvp(cmd0, cmd)
except OSError:
  sys.stderr.write('error: command not found: %s\n' % cmd0)
  sys.exit(1)

Run these commands as root (without the leading #) to set the permissions of transfer_shell:

# chown root.root /usr/local/bin/transfer_shell
# chmod 755       /usr/local/bin/transfer_shell

To set up restrictions for a new user

  1. Create the Unix user if not already created.
  2. Set up Unix groups and permissions on the system so the user doesn't have access to more files than he should have.
  3. Optionally, set up SSH public keys in ~/.ssh/authorized_keys for the user. No need to specify command="..." or other restrictions in that line.
  4. To change the login shell of the user, run this command as root (substituting USER with the login name of the user): chsh -s /usr/local/bin/transfer_shell USER
  5. Create a symlink named data in the home directory of the user. It should point to the default directory for file transfers.
  6. It's strongly recommended that you make the home directory and its contents unwritable by the user. Example command (run it as root, substitute USER): chown root.root ~USER ~USER/.ssh ~USER/.ssh/authorized_keys

Alternatives considered

  • Using a restrictive login shell and setting Unix file permissions. (This is implemented above, and also in scponly and rssh.) The disadvantage is that the Unix permissions may be set up incorrectly by accident (i.e. they are too permissive), and then the user has access to too many files. Another disadvantage is that the custom login shell implementation may be vulnerable or hard to audit (example exploits for running arbitrary commands with rsync and scp: https://www.exploit-db.com/exploits/24795/).
  • Using a restrictive command="..." in ~/.ssh/authorized_keys. This is insecure, because OpenSSH sshd still runs ~/.bashrc and ~/.ssh/rc as shell scripts, and a malicious user could upload their own version of these files, or trigger some command execution in /etc/bash.bashrc. Any of these could lead to the user being able to execute arbitrary shell commands, which we don't want for this user.
  • Running a restrictive, custom SSH server implementation on a different port (while OpenSSH sshd is still running on port 22). This comes with its own risk of possible security bugs, and needs to be upgraded regularly. Also it can be complex to understand and set up correctly.
  • See some more alternatives here: https://serverfault.com/questions/83856/allow-scp-but-not-actual-login-using-ssh.


Comparison of encrypted Git remote (remote repository) implementations

This blog post is a comparison of encrypted Git remote implementations. A Git remote is a combination of storage space on a remote server, remote server software and local software working together. An encrypted Git remote is a Git remote which makes sure that the storage space on the remote server contains the Git objects encrypted. It is useful if the Git repository contains sensitive information (e.g. passwords, bank account details), and the remote server is not trusted to keep such information hidden from unauthorized readers.

See the recent Hacker News discussion Keybase launches encrypted Git about the encrypted, hosted cloud Git remote provided by Keybase.


  • name of the Git remote software
    • grg: git-remote-gcrypt
    • git-gpg: git-gpg
    • keybase: git-remote-keybase, the encrypted, hosted cloud Git remote provided by Keybase
  • does it support collaboration (users with different keys pull and push)?
    • grg: yes
    • git-gpg: yes
    • keybase: yes
  • does it encrypt the local .git repository directory?
    • grg: no
    • git-gpg: no
    • keybase: no
  • does it encrypt any files in the local working tree?
    • grg: no
    • git-gpg: no
    • keybase: no
  • does it encrypt the remote repository users push to?
    • grg: yes, it encrypts locally before push
    • git-gpg: yes, it encrypts locally before push
    • keybase: yes, it encrypts locally before push
  • by looking at the remote files, can anyone learn the total number of Git objects?
    • grg: no
    • git-gpg: yes
    • keybase: probably yes
  • can root on the remote server learn the list of contributors (users who do git pull and/or git push)?
    • grg: yes, by making sshd log which SSH public key was used
    • git-gpg: yes, by making sshd log which SSH public key was used
    • keybase: yes
  • by looking at the remote files, can anyone learn the list of contributors (users who do git pull and/or git push)?
    • grg: no
    • git-gpg: no
    • keybase: probably yes
  • by looking at the remote files, can anyone learn when data was pushed?
    • grg: yes
    • git-gpg: yes
    • keybase: probably yes
  • does it support hosting of encrypted remotes on your own server?
    • grg: yes
    • git-gpg: yes
    • keybase: no, at least not by default, and not documented
  • supported remote repository types
    • grg: rsync, local directory, sftp, git repo (local or remote)
    • git-gpg: rsync, local directory
    • keybase: custom, data is stored on KBFS (Keybase filesystem, an encrypted network filesystem)
  • required software on the remote server
    • grg: sshd, (rsync or sftp-server or git)
    • git-gpg: sshd, rsync
    • keybase: custom, the KBFS server, there are no official installation instructions
  • required local software
    • grg: git, gpg, ssh, (rsync or sftp), git-remote-gcrypt
    • git-gpg: git, gpg, ssh, rsync, Python (2.6 or 2.7), git-gpg
    • keybase: binaries provided by Keybase: keybase, git-remote-keybase, kbfsfuse (only for remote repository creation)
  • product URL with installation instructions
    • grg: https://git.spwhitton.name/git-remote-gcrypt/tree/README.rst
    • git-gpg: https://github.com/glassroom/git-gpg
    • keybase: https://keybase.io/blog/encrypted-git-for-everyone
  • source code URL
    • grg: https://git.spwhitton.name/git-remote-gcrypt/tree/git-remote-gcrypt
    • git-gpg: https://github.com/glassroom/git-gpg/blob/master/git-gpg
    • keybase: https://github.com/keybase/kbfs/blob/master/kbfsgit/git-remote-keybase/main.go
  • implementation language
    • grg: Unix shell (e.g. Bash), single file
    • git-gpg: Python 2.6 and 2.7, single file
    • keybase: Go
  • source code size, number of bytes, including comments
    • grg: 21 448 bytes
    • git-gpg: 19 702 bytes
    • keybase: 5 617 305 bytes (including client/go/libkb/**/*.go and kbfs/{env,kbfsgit,libfs,libgit,libkbfs}/**/*.go)
  • is the source code easy to understand?
    • grg: yes, but some developers reported it's less easy than git-gpg
    • git-gpg: yes
    • keybase: no, because it's huge; individual pieces are simple
  • encryption tool used
    • grg: gpg (works with old versions, e.g. 1.4.10 from 2008)
    • git-gpg: gpg (works with old versions, e.g. 1.4.10 from 2008)
    • keybase: custom, written in Go
  • is it implemented as a Git remote helper?
    • grg: yes, git push etc. works
    • git-gpg: no, it works as git gpg push instead of git push etc.
    • keybase: yes, git push etc. works
  • how much extra disk space does it use locally, per repository?
    • grg: less than 1000 bytes
    • git-gpg: stores 2 extra copies of the .git repository locally, one of them containing only loose objects (thus mostly uncompressed)
    • keybase: less than 1000 bytes
  • how much disk space does it use remotely, per repository?
    • grg: one encrypted packfile for each push, encryption has a small (constant) overhead, occasionally runs git repack (locally, and uploads the result), and right after repacking it stores only 1 packfile (plus a small metadata file) per repository
    • git-gpg: one encrypted file for each object, encryption has a small (constant) overhead, no packfiles (thus the remote repository will be large and contain a lot of files, because of the lack of diff compression supported by the packfiles)
    • keybase: probably one encrypted file for each object


How to run Windows XP on Linux using QEMU and KVM

This blog post is a tutorial explaining how to run Windows XP as a guest operating system using QEMU and KVM on a Linux host. It should take less than 16 minutes, including installation.

Requirements: You need a recent Linux system (Ubuntu 14.04 LTS will work) with a GUI, 620 MB of free disk space and 550 MB of free memory. If you don't want to browse the web from Windows XP, then 300 MB of free memory is enough.

Software used:

  • The latest version of Hiren's BootCD (version 15.2) was released on 2012-11-09. It contains a live (no need to install) mini Windows XP system with a web browser (Opera). (Additionally, it contains hundreds of system rescue, data recovery, antivirus, backup, password recovery, hard disk diagnostics and system diagnostics tools. To see many of them with screenshot, look at this article about Hiren's BootCD, or click on the See CD Contents link on the official Hiren's BootCD download page.)
  • QEMU. It's a full system emulator, which can emulate multiple architectures, and it can run multiple operating systems as guests.
  • KVM. It's a fast virtualization (emulation) of guest operating systems on Linux. It's used by QEMU, and it lets QEMU execute the CPU-intensive operations on guest systems quickly, with only 10% or less overhead. (I/O-intensive operations can be much slower.)

Log in to the GUI, open a terminal window, and run the following command (without the leading $, copy-paste it as a single, big, multiline paste):

$ python -c'import os, struct, sys, zlib
def work(f):  # Extracts the .iso from the .zip on the fly.
 while 1:
  data, i, c, s = f.read(4), 0, 0, 0
  if data[:3] in ("PK\1", "PK\5", "PK\6"): return f.read()
  assert data[:4] == "PK\3\4", repr(data); data = f.read(26)
  _, _, mth, _, _, _, cs, us, fnl, efl = struct.unpack("<HHHHHlLLHH", data)
  fn = f.read(fnl); assert len(fn) == fnl
  ef = f.read(efl); assert len(ef) == efl
  if fn.endswith(".iso"): uf = open("hirens.iso", "wb")
  else: mth = -1
  if mth == 8: zd = zlib.decompressobj(-15)
  while i < cs:
   j = min(65536, cs - i); data = f.read(j); assert len(data) == j; i += j
   if mth == 8: data = zd.decompress(data)
   if mth != -1: uf.write(data)
  if mth == 8: uf.write(zd.flush())
work(os.popen("wget -nv -O- "

The command above downloads the Hiren's BootCD image and extracts it to the file hirens.iso. (Alternatively, you could download from your browser and extract the .iso manually. That would use more temporary disk space.)

Install QEMU. If you have a Debian or Ubuntu system, do so by running the following command (without the leading $):

$ sudo apt-get install qemu-system-x86

On other Linux systems, use your package manager to install QEMU with KVM support.

The only step in this tutorial which needs root access (and thus the root password) is the QEMU installation above.

Run the following command in your terminal window (without the leading $, copy-paste it):

$ SDL_VIDEO_X11_DGAMOUSE=0 qemu-system-i386 -m 512 -machine pc-1.0,accel=kvm \
    -cdrom hirens.iso -localtime -net nic -net user -smb "$HOME"

This command will start a virtual machine running Hiren's Boot CD, and it will display it in a window (of size 800x600). The command will not exit until you close the window (and thus abort the virtual machine).

The virtual machine will use 512 MB of memory (as specified by -m 512 above). It's possible for the mini Windows XP to use less memory: e.g. if you specify -m 256 instead, it will still work, but web browsing (with Opera) won't work, and you will have to click OK on the Your system is low on virtual memory dialog later.

In a few seconds, the boot menu of Hiren's BootCD is displayed in the QEMU window:

Press the down arrow key and press Enter to choose Mini Windows Xp. Then wait about 1 minute for Windows XP to start. It will look like this:

To use the mouse within the QEMU window, click on the window. To release your mouse (to be used in other windows), press Ctrl and Alt at the same time.

Networking (such as web and file sharing) is not enabled by default. To enable it, click on the Network Setup icon in the QEMU window desktop, and wait about 20 seconds. The IP address of the guest Windows XP is 10.0.2.15, and the IP address of the host Linux system is 10.0.2.2 (QEMU's user-mode networking defaults). Because of the user mode networking emulation provided by QEMU, external TCP connections can also be made from Windows XP (e.g. you can browse the web). Please note that ping won't work (because QEMU doesn't emulate it).

To browse the web, click on the Internet icon in the QEMU Windows desktop. It will start the Opera browser. Web browsing will be quite slow, so better try some fast sites such as google.com or whatismyip.com.

To use the command line, click on the Command prompt icon in the QEMU Windows desktop. There is a useful command to type into that window: net use s: \\10.0.2.4\qemu (press Enter after typing it). That will make your Linux home folder available as drive S: in Windows XP, for reading and writing. (10.0.2.4 is the address of QEMU's built-in SMB server. You can change which folder to make available by specifying it after -smb when starting QEMU.)

Copy-pasting between Linux and Windows XP clipboards doesn't work.

You can make the QEMU window larger by changing Start menu / Settings / Control Panel / Display / Settings / Screen resolution to 1024 by 768 pixels. The 1024x768 shortcut on the QEMU Windows desktop doesn't work.

Because of efficient CPU virtualization by KVM, an idle Windows XP in a QEMU window doesn't use more than 10% CPU on the host Linux system.

Hiren's BootCD contains hundreds of Windows apps. Only a fraction of the apps are available from the Windows XP start menu. To see all apps, click on the HBCD Menu icon in the QEMU Windows desktop, and then click on the Browser Folder button.


How to avoid unnecessary copies when appending to a C++ vector

This blog post explains how to avoid unnecessary copies when appending to a C++ std::vector, and recommends the fast_vector_append helper library, which eliminates most copies automatically.

TL;DR If you are using C++11, and your element classes have an efficient move constructor defined, then just use push_back to append, it won't do any unnecessary copies. In addition to that, if you are constructing the to-be-appended element, you can use emplace_back to append, which even avoids the (otherwise fast) call to the move constructor.

Copying is slow and needs a lot of (temporary) memory if the object contains lots of data. Such an object is a long std::string: the entire array of characters gets copied to a new array. This hurts performance if the copy is unnecessary, e.g. if only a temporary copy is made. For example:

std::string create_long_string(int);

std::vector<std::string> v;
// Case A.
std::string s1 = create_long_string(1);
std::string s2 = create_long_string(2);
std::string s3 = create_long_string(3);
// Case B.
v.push_back(s1);
std::cout << s1;
// Case C.
v.push_back("foo");
// Case D, from C++11.
v.emplace_back("foo");
// Case E.
v.push_back(create_long_string(4));
// Case F.
v.push_back(std::string()); v.back().swap(s2);
// Case G, from C++11.
v.push_back(std::move(s3));

In Case A, return value optimization prevents the unnecessary copying: the string built in the function body of create_long_string is placed directly to s1.

In Case B, a copy has to be made (there is no way around it), because v is still valid after s1 is destroyed, thus it cannot reuse the data in s1.

Case C could work without a copy, but in C++98 an unnecessary copy is made. So first std::string("foo") is called (which makes a copy of the data), and then the copy constructor of std::string is called to create a new string (with a 2nd copy of the data), which gets added to v.

Case D avoids the 2nd (unnecessary) copy, but it works only from C++11. In earlier versions of C++ (such as C++98), std::vector doesn't have the emplace_back method.

In Case E, there is an unnecessary copy in C++98: create_long_string creates an std::string, and it gets copied to a new std::string within v. It would be better if create_long_string could create the std::string at its final location.

Case F shows the workaround in C++98 of adding s2 to an std::vector without a copy. It's a workaround because it's a bit ugly and it still involves some copying. Fortunately this copying is fast: it copies only the empty string. As a side effect, the value of s2 is lost, it will then be the empty string.

Case G shows the C++11 way of adding s3 to an std::vector without a copy. It doesn't work in C++98 (there is no std::move in C++98). The std::move(s3) visibly documents that the old value of s3 is lost.

C++11 (the version of C++ after C++98) introduces rvalue references, move constructors and move semantics to avoid unnecessary copies. This will fix both Case C and Case E. For this to work, new code needs to be added to the element class (in our case std::string) and to the container class (in our case std::vector) as well. Fortunately, the callers (including our code above and the body of create_long_string) can be kept unchanged. The following code has been added to the C++ standard library (STL) in C++11:

class string {
 public:
  // Copy constructor. C++98, C++11.
  string(const string& old) { ... }
  // Move constructor. Not in C++98, added in C++11.
  string(string&& old) { ... }
  ...
};

template<typename T, ...>
class vector {
 public:
  // Takes a const reference. C++98, C++11.
  void push_back(const T& t);
  // Takes an rvalue reference. Not in C++98, added in C++11.
  void push_back(T&& t);
  ...
};

As soon as both of these are added, v.push_back(...) will attempt to call the 2nd method (which takes the rvalue reference), and that calls the move constructor of std::string instead of the copy constructor. This avoids the copy, because the move constructor is typically fast: it doesn't copy data. In general, the move constructor creates the new object with the data of the old object, and it may leave the old object in an arbitrary but valid state. For std::string, it just copies the pointer to the data (which is fast, because it doesn't copy the data itself), and clears the pointer in the old std::string. Thus Case C and Case E become fast in C++11. Case B is not affected (it still copies), and that's good, because we want to print s1 to cout afterwards, so the data must remain there. This happens automatically: in the call v.push_back(s1), s1 is not an rvalue reference, thus the const-reference push_back is called, which makes a copy. For more details about how the proper push_back gets selected, see this tutorial or this tutorial.

Guidelines to avoid unnecessary copies

Define your (element) classes like this:

  • Define the default constructor (C() { ... }).
  • Define the destructor (~C() { ... }).
  • Define the copy constructor (C(const C& c) { ... }).
  • It's a good practice to define operator=, but not needed here.
  • For C++11 classes, define a move constructor (e.g. C(C&& c) { ... }).
  • For C++11 classes, don't define a member swap method. If you must define it, then also define a method void shrink_to_fit() { ... }. It doesn't matter what the method does, you can just declare it. The fast_vector_append library detects shrink_to_fit, and will use the move constructor instead of the swap method, the former being slightly faster, although neither copies the data.
  • For C++98 classes, don't define a move constructor. In fact, C++98 doesn't support move constructors.
  • For C++98 classes, define a member swap method.

To append a new element to an std::vector without unnecessary copying, as fast as possible, follow this advice from top to bottom:

  • If it's C++11 mode, and the object is being constructed (not returned by a function!), use emplace_back without the element class name.
  • If it's C++11 mode, and the class has a move constructor, use push_back.
  • If it's C++11 mode, and the class has the member swap method, use: { C c(42); v.resize(v.size() + 1); v.back().swap(c); }
  • If the class has the member swap method, use: { C c(42); v.push_back(C()); v.back().swap(c); }
  • Use push_back. (This is the only case with a slow copy.)

Automating the avoidance of unnecessary copies when appending to a vector

It would be awesome if the compiler could guess the programmer's intentions: e.g. pick emplace_back if it is faster than push_back, and avoid the copy even in C++98 code, e.g. use swap if it's available but the move constructor isn't. This is important because sometimes it's inconvenient to modify old parts of a codebase defining the element class, and it already has swap.

For automation, use fast_vector_append(v, ...) in the fast_vector_append library to append elements to an std::vector. It works in both C++98 and C++11, but it can avoid more copies in C++11. The example above looks like:

#include "fast_vector_append.h"
std::string create_long_string(int);

std::vector<std::string> v;
  // Case A. No copy.
  std::string s1 = create_long_string(1);
  std::string s2 = create_long_string(2);
  std::string s3 = create_long_string(3);
  // Case B. Copied.
  fast_vector_append(v, s1);
  std::cout << s1;
// Case C. Not copied.
fast_vector_append(v, "foo");
// Case D. Not copied.
fast_vector_append(v, "foo");
// Case E. Copied in C++98.
fast_vector_append(v, create_long_string(4));
{ std::string s4 = create_long_string(4);
  // Case E2. Not copied.
  fast_vector_append_move(v, s4);
}
// Case F. Not copied.
fast_vector_append_move(v, s2);
// Case G. Not copied.
fast_vector_append_move(v, s3);
// Case H. Copied in C++98.
fast_vector_append(v, std::string("foo"));

Autodetection of class features with SFINAE

The library fast_vector_append does some interesting SFINAE tricks to autodetect the features of the element class, so that it will be able to use the fastest way of appending supported by the class.

For example, this is how it detects whether to use the member swap method:

// Use swap iff: has swap, doesn't have std::get, doesn't have shrink_to_fit,
// doesn't have emplace, doesn't have remove_suffix. By doing so we match
// all C++11, C++14 and C++17 STL templates except for std::optional and
// std::any. Not matching a few of them is not a problem because then member
// .swap will be used on them, and that's good enough.
// Based on HAS_MEM_FUNC in http://stackoverflow.com/a/264088/97248 .
// Based on decltype(...) in http://stackoverflow.com/a/6324863/97248 .
template<typename T>
struct __aph_use_swap {
  template <typename U, U> struct type_check;
  // This also checks the return type of swap (void). The checks with
  // decltype below don't check the return type.
  template <typename B> static char (&chk_swap(type_check<void(B::*)(B&), &B::swap>*))[2];
  template <typename  > static char (&chk_swap(...))[1];
  template <typename B> static char (&chk_get(decltype(std::get<0>(*(B*)0), 0)))[1];
  // ^^^ C++11 only: std::pair, std::tuple, std::variant, std::array.
  template <typename  > static char (&chk_get(...))[2];
  template <typename B> static char (&chk_s2f(decltype(((B*)0)->shrink_to_fit(), 0)))[1];
  // ^^^ C++11 only: std::vector, std::deque, std::string, std::basic_string.
  template <typename  > static char (&chk_s2f(...))[2];
  template <typename B> static char (&chk_empl(decltype(((B*)0)->emplace(), 0)))[1];
  // ^^^ C++11 only: std::vector, std::deque, std::set, std::multiset, std::map,
  //     std::multimap, std::unordered_multiset, std::unordered_map,
  //     std::unordered_multimap, std::stack, std::queue, std::priority_queue.
  template <typename  > static char (&chk_empl(...))[2];
  template <typename B> static char (&chk_rsuf(decltype(((B*)0)->remove_suffix(0), 0)))[1];
  // ^^^ C++17 only: std::string_view, std::basic_string_view.
  template <typename  > static char (&chk_rsuf(...))[2];
  static bool const value = sizeof(chk_swap<T>(0)) == 2 && sizeof(chk_get<T>(0)) == 2 &&
      sizeof(chk_s2f<T>(0)) == 2 && sizeof(chk_empl<T>(0)) == 2 &&
      sizeof(chk_rsuf<T>(0)) == 2;
};

The autodetection is used like this, to select one of the 2 implementations (either with v.back().swap(t) or v.push_back(std::move(t))):

template<typename V, typename T> static inline
typename std::enable_if<std::is_same<typename V::value_type, T>::value &&
    __aph_use_swap<typename V::value_type>::value, void>::type
fast_vector_append(V& v, T&& t) { v.resize(v.size() + 1); v.back().swap(t); }                               

template<typename V, typename T> static inline
typename std::enable_if<std::is_same<typename V::value_type, T>::value &&
    !__aph_use_swap<typename V::value_type>::value, void>::type
fast_vector_append(V& v, T&& t) { v.push_back(std::move(t)); }


How to back up your WhatsApp chats and photos safely on Android

This blog post explains how to make backups of your WhatsApp chats and photos safely on your Android device, and how to restore your backups. By safely we mean that you won't lose data unless you remove some backup files manually.

WhatsApp saves all chats to the WhatsApp/Databases folder on the phone's storage (sdcard), and it saves all photos and other media files to the WhatsApp/Media folder. (In fact, from the chats only the file WhatsApp/Databases/msgstore.db.cryptNNN may be needed, where NNN is an integer, currently 12.) If you make a copy of these folders, and copy them back to a new or reinstalled Android device before installing WhatsApp, then this effectively restores the backup, WhatsApp will recognize and use these files the first time it's installed (you need to tap on the Restore button within WhatsApp). See this FAQ entry for more information on restoring WhatsApp backups on Android.

WhatsApp supports creating backups to Google Drive (automatically, every day), and restoring those backups when the app is (re)installed. This sounds convenient and safe, but it's not safe: you can still lose your chat history and photos (see below how). So if you care about your WhatsApp chat history and photos, and you need an automated backup for them, here is my recommendation: use the FolderSync Lite free Android app to make automatic backups to the cloud (it supports Google Drive, Dropbox and more than 20 other cloud providers). To restore the backup, you can use FolderSync Lite, or you can download the files and copy them to your Android device manually.

Here is how to set up FolderSync Lite on Android for automatic backups of WhatsApp chats, photos and other media:

  1. Create a Google account, open Google Drive, create a folder named FolderSyncBackup-WhatsApp, and within it create subfolders Databases and Media (both case sensitive). It can also be done similarly on Dropbox instead, but this tutorial focuses on Google Drive.
  2. Install FolderSync Lite to your Android device.
  3. Add your Google account to FolderSync lite.
  4. Create a folderpair for backing up chats:
    • Account: your Google account
    • Unique name: wad
    • Sync type: To remote folder
    • Remote folder: /FolderSyncBackup-WhatsApp/Databases/
    • Local folder: .../WhatsApp/Databases/
    • Use scheduled sync: yes
    • Sync interval: Daily
    • Copy files to time-stamped folder: no
    • Sync subfolders: yes
    • Sync hidden files: yes
    • Delete source files after sync: no
    • Retry sync if failed: yes
    • Only resync source files if modified (ignore target deletion): yes
    • Sync deletions: no
    • Overwrite old files: Always
    • If conflicting modifications: Skip file
    • Use WiFi: yes
    • (Many settings below are fine, skipped here.)
  5. Save the folderpair, and do the first sync manually.
  6. Create a folderpair for backing up media files, including photos:
    • Account: your Google account
    • Unique name: wam
    • Sync type: To remote folder
    • Remote folder: /FolderSyncBackup-WhatsApp/Media/
    • Local folder: .../WhatsApp/Media/
    • (Subsequent settings are the same as above.)
  7. Save the folderpair, and do the first sync manually.
  8. Optionally, you can turn off WhatsApp's automatic backup to Google Drive in the WhatsApp app's chat settings.
  9. To remove WhatsApp's automatic backup files from Google Drive, go to Google Drive, click on the gear icon (Settings), click on Settings, click on Manage Apps, find and click on the Options of WhatsApp Messenger, click on Delete app data, and then click on Disconnect from Drive.

If FolderSync starts failing consistently with the error message IllegalStateException: Expected BEGIN_OBJECT but was STRING, you can fix it by unlinking and reauthenticating the sync account on the Android device. To do so, open the FolderSync app on the device, tap Accounts, tap on your GoogleDrive account, tap on the black UNLINK ACCOUNT button, tap on the black RE-AUTHENTICATE ACCOUNT button, tap on Save, go back, tap on Folderpairs, and tap on the black Sync buttons with an error next to them.

WhatsApp saves all chat history so far to a new file every day (file name pattern: WhatsApp/Databases/msgstore-????-??-??.*.db.crypt*). These files will accumulate and fill up your Google Drive quota in a year or two, so you may want to remove old files. You can do it manually on the Google Drive web UI: just visit the FolderSyncBackup-WhatsApp/Databases folder, select old files (by date), and remove them. Alternatively, you can automate removal using an Apps Script. Here is how:

  1. Visit http://script.google.com/
  2. You see a script editor window. Remove the existing code (function myFunction() { ... }).
  3. In the File / Rename... menu, rename the project to script to remove old WhatsApp backups
  4. Copy-paste the following code to the script editor window:
    var FOLDER_NAME = 'FolderSyncBackup-WhatsApp';
    // Please note that msgstore.db.crypt12 will always be kept in addition to the dated files.
    var MINIMUM_DBS_TO_KEEP = 8;  // A value smaller than 8 doesn't make sense, WhatsApp keeps that many, and these files would be reuploaded by FolderSync.
    var USE_TRASH = true;
    var DO_DRY_RUN = false;
    function getSingleFolder(foldersIter) {
      var folder = null;
      while (foldersIter.hasNext()) {
        if (folder != null) throw new Error("Multiple folders found.");
        folder = foldersIter.next();
      }
      if (folder == null) throw new Error("Folder not found.");
      return folder;
    }
    function getAll(iter) {
      var result = [];
      while (iter.hasNext()) {
        result.push(iter.next());
      }
      return result;
    }
    function compareByName(a, b) {
      var an = a.getName(), bn = b.getName();
      return an < bn ? -1 : an == bn ? 0 : 1;
    }
    function removeOldWhatsAppBackups() {
      Logger.log('Config FOLDER_NAME=' + FOLDER_NAME);
      Logger.log('Config MINIMUM_DBS_TO_KEEP=' + MINIMUM_DBS_TO_KEEP);
      Logger.log('Config USE_TRASH=' + USE_TRASH);
      Logger.log('Config DO_DRY_RUN=' + DO_DRY_RUN);
      // var folders = DriveApp.getFolders();  // Also lists subfolders.
      var folder = getSingleFolder(getSingleFolder(DriveApp.getRootFolder().getFoldersByName(FOLDER_NAME)).getFoldersByName('Databases'));
      var files = getAll(folder.getFiles());
      var sortedFiles = [];
      var i;
      for (i = 0; i < files.length; ++i) {
        var file = files[i];
        var name = file.getName();
        // Logger.log(name + ': ' + file.getDateCreated());  // No last-modification time :-(.
        if (name.match(/^msgstore-\d\d\d\d-\d\d-\d\d[.]/)) {
          sortedFiles.push(file);
        }
      }
      sortedFiles.sort(compareByName);  // The name reflects the date. Earlier file first.
      var toKeep = MINIMUM_DBS_TO_KEEP;
      if (toKeep < 1) toKeep = 1;
      for (i = 0; i < sortedFiles.length - toKeep; ++i) {
        var file = sortedFiles[i];
        // Logger.log('?? ' + file.getName() + ': ' + file.getSize());
        // A 100-byte decrease in file size is tolerable, can be a compression artifact.
        if (file.getSize() - 100 < sortedFiles[i + 1].getSize()) {
          Logger.log('-- Removing: ' + file.getName() + ': ' + file.getSize());
          if (DO_DRY_RUN) {
            // Dry run: log only, don't remove anything.
          } else if (USE_TRASH) {
            file.setTrashed(true);  // Move the file to the Google Drive trash.
          } else {
            folder.removeFile(file);  // Remove the file from the folder without trashing.
          }
          // Utilities.sleep(100); // Prevent exceeding rate limit (currently 10 requests per second). Is it still in effect?
        } else {
          Logger.log('Keeping: ' + file.getName() + ': ' + file.getSize());
        }
      }
      for (; i < sortedFiles.length; ++i) {
        var file = sortedFiles[i];
        Logger.log('Keeping recent: ' + file.getName() + ': ' + file.getSize());
      }
    }
  5. Click on the File / Save menu to save the script.
  6. Open the Resources / All your triggers menu window, choose Add a new trigger, add this: removeOldWhatsAppBackups, Time-driven, Day timer, 5am to 6am, also add a notification to yourself in e-mail, sent at 6am.
  7. Click on Save to save the triggers.
  8. You may wonder if this Apps Script deletes chat backups in a safe way. You have to decide this for yourself. It keeps the 8 last backups, and it also keeps all backups after which the backup file size has decreased. This latter condition prevents the data loss case described below (in the next section).

That's it, automatic and safe backup of WhatsApp chats, photos and other media files to Google Drive is now set up using FolderSync Lite.

Is WhatsApp's built-in backup to Google Drive safe?

No it isn't, you can lose all your data in some cases. The data loss happened to a friend in Feb 2017 in the following way:

  • The Android phone was lost, thus the WhatsApp folder on the phone wasn't available.
  • There was a working and recent backup on Google Drive.
  • When WhatsApp was installed to a new phone, it started restoring the backup from Google Drive.
  • Shortly after the start of the restore, the internet connection broke and the restore was aborted. Only a small part of the messages was restored.
  • The owner of the phone didn't notice that many chats are missing, and started using WhatsApp.
  • Within 24 hours, WhatsApp created a new backup to Google Drive, overwriting the old, full data with the new partial data. At this point the majority of the old chats got lost.
  • A couple of days later the owner of the phone noticed, but it was too late.

If WhatsApp's built-in backup to Google Drive had been written in a way that it never overwrites old backups, it would have been possible to reinstall WhatsApp and restore all the chats without data loss. Unfortunately the users of WhatsApp have no control on how WhatsApp does backups. But they can use an alternative backup method which never loses data (see the method with FolderSync Lite above).

Another way to safely restore WhatsApp's built-in backup from Google Drive would be downloading it from Google Drive first, and keeping a copy until WhatsApp has successfully restored everything on the new Android device. Unfortunately this is impossible, because the user cannot download their own backup from Google Drive (!): it is saved to a hidden app folder, which only the WhatsApp app can read and write, and the user is unable to access it (apart from deleting it). This StackOverflow question has an answer which describes a cumbersome and fragile way of downloading it, but this can easily change in the future, so I don't recommend it for general use. I recommend the method with FolderSync Lite above instead, which makes it easy for the user to see and download their WhatsApp backup from Google Drive.