

Deadlines and Timeouts for Realtime MongoDB Access with TxMongo


Murphy had an adage: “Anything that can go wrong, will go wrong.” The best we can do is anticipate the problems that might come up and keep ‘the machine’ running. From an end-user perspective that means staying responsive, even in the face of errors. If there is a network error, we want to know as soon as possible, with the guarantee that the state of ‘the machine’ was not affected by it.

With the release of TxMongo 15.3.1 we’ve introduced a few things that are useful when creating real-time applications.

We now have per-call deadlines and timeouts!

Deadline: The latest time (in the future) by which the call should be completed. Useful when your application has a deadline to complete a task and you pass the same deadline to all MongoDB calls.

Timeout: How much time the call has to complete itself. Useful when each call in your application is allowed a certain amount of time to complete itself.

If either of these is exceeded, a TimeExceeded error is raised that you can trap. The guarantee is that when the error is raised, the call has not modified MongoDB.

Here are two examples of how to implement these in your application:

yield conn.db.coll.insert({'x': 42}, safe=True, timeout=10)
yield conn.db.coll.insert({'x': 42}, safe=True, deadline=time()+10)
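Putting the two together, here is a minimal sketch of trapping the error, assuming `TimeExceeded` is importable from `txmongo.errors` and that `conn` is an already-established connection (the function and variable names are just illustrations):

```python
from time import time

from twisted.internet import defer
from txmongo.errors import TimeExceeded


@defer.inlineCallbacks
def insert_with_budget(conn, doc, budget=10):
    # Share a single deadline across every MongoDB call in this task.
    deadline = time() + budget
    try:
        yield conn.db.coll.insert(doc, safe=True, deadline=deadline)
    except TimeExceeded:
        # Guaranteed: the failed call did not modify MongoDB.
        print "insert exceeded its %d second budget" % budget
```

If this task made several MongoDB calls, passing the same `deadline` to each of them caps the total time for the whole task, which is exactly the case deadlines were designed for.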

We have also added a few other useful features:

  • When dealing with connection.ConnectionPool, max_delay is now exposed; it sets the maximum number of seconds between connection attempts. The default is 60.
  • When dealing with connection.ConnectionPool, initial_delay is now exposed; it sets the initial backoff retry delay. The default is 1.
  • NotMaster instead of AutoReconnect will now be returned when a call can be safely retried.
  • Python3 support!

If you have any feature requests or problems, you can use our txmongo GitHub issue tracker!

Vagrant, VirtualBox and Ubuntu Wily Werewolf: Getting them to play along


I recently upgraded to Ubuntu 15.10 (Wily Werewolf), which automatically upgraded VirtualBox from 4.3 to 5.0 and broke compatibility with Vagrant 1.6 in the process. Figuring that Vagrant knew about this, since they claim VBox 5.0 compatibility, I upgraded to 1.7 and ran into the same error!

Vagrant attempted to execute the capability ‘configure_networks’ on the detect guest OS ‘linux’

There is a workaround!

Apparently it hasn’t been fixed yet and others are also running into this problem. Being pragmatic, I reverted Vagrant back to 1.6 and purged VirtualBox from my system. I manually downloaded an older 4.3 build of VBox from here: https://www.virtualbox.org/wiki/Download_Old_Builds_4_3

Of course these builds are only supported up to Ubuntu 14.10, so we have a bit more work to do. The first snag is that the VBox deb package won’t install until libvpx1 is installed. The problem is that this package no longer exists in Ubuntu 15.10; only its successor, libvpx2, does. So we have to create a temporary empty deb package to allow installation without forcing anything.

You can follow the directions here to create your own package: Creating_dummy_packages_to_fullfill_dependencies_in_Debian.

Or you can download it from me here: libvpx1_1.0.0_all.deb
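If you'd rather build the dummy package yourself, a minimal sketch using the `equivs` tool looks like this (the package name and version are chosen here purely to satisfy the libvpx1 dependency):

```shell
sudo apt-get install equivs
# generate a template control file named "libvpx1"
equivs-control libvpx1
# edit the generated file so it contains:
#   Package: libvpx1
#   Version: 1.0.0
# then build the empty package and install it
equivs-build libvpx1
sudo dpkg -i libvpx1_1.0.0_all.deb
```

The resulting package ships no files; it only satisfies the dependency check so dpkg stops complaining.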

Now you can install your VBox 4.3 without it complaining. However if you try to do this:

vagrant up

You’ll get this:

Bringing machine 'default' up with 'virtualbox' provider...
There was an error while executing VBoxManage, a CLI used by Vagrant
for controlling VirtualBox. The command and stderr is shown below.

Command: ["list", "hostonlyifs"]

Stderr: VBoxManage: error: Failed to create a session object!
VBoxManage: error: Code NS_ERROR_FACTORY_NOT_REGISTERED (0x80040154) - Class not registered (extended info not available)
VBoxManage: error: Most likely, the VirtualBox COM server is not running or failed to start.

This is solved by making a symlink named libvpx.so.1 that points at libvpx.so.2. With that in place we’re back to normal, until the original problem with VirtualBox 5.0 and Vagrant is fixed.
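A sketch of that symlink fix (the library path is an assumption for a 64-bit Ubuntu install; check where libvpx.so.2 actually lives with `ldconfig -p | grep libvpx`):

```shell
# let VBox 4.3 binaries that look for libvpx.so.1 load libvpx.so.2 instead
sudo ln -s /usr/lib/x86_64-linux-gnu/libvpx.so.2 /usr/lib/x86_64-linux-gnu/libvpx.so.1
# refresh the dynamic linker cache so the new name is picked up
sudo ldconfig
```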

Cross-compiling for Raspberry Pi on Ubuntu


While the Raspberry Pi 2 has four cores to churn through code, compiling on it still takes longer than on most workstations and laptops. If you are feeling adventurous, you can try cross-compiling, which has become easier to set up and get working. Cross-compiling means producing binaries for a target architecture other than the one you are compiling on; this kind of setup is very typical when creating Android applications. The end result is that you can take the resulting binary, place it on its target platform, and it will run there. There are even tricks to get the cross-compiled binary to also run on your native system! In this guide, I'll walk you through:

  • Setting up a cross-compile toolchain in Ubuntu (15.04 Vivid)
  • Setting up the proper exports
  • Compiling a test program for your native and target armhf platform
  • Compiling the latest Raspberry Pi 2 kernel with VC4 support.

The first thing we need to do is set up your Ubuntu to be able to compile software for a Raspberry Pi (1 and 2). You'll need at least Ubuntu Vivid (15.04) installed. From there, you'll need to install the following packages.

sudo apt-get install binutils-arm-linux-gnueabihf \
cmake \
cpp-4.9-arm-linux-gnueabihf \
cpp-arm-linux-gnueabihf \
cross-gcc-dev \
dpkg-cross \
g++-4.9-arm-linux-gnueabihf \
g++-4.9-multilib-arm-linux-gnueabihf \
g++-arm-linux-gnueabihf \
gcc-4.9-arm-linux-gnueabihf \
gcc-4.9-arm-linux-gnueabihf-base \
gcc-4.9-multilib-arm-linux-gnueabihf \
gcc-arm-linux-gnueabihf \
libasan1-armhf-cross \
libatomic1-armhf-cross \
libc6-armel-armhf-cross \
libc6-armel-cross \
libc6-armhf-cross \
libc6-dev-armel-armhf-cross \
libc6-dev-armel-cross \
libc6-dev-armhf-cross \
libdebian-dpkgcross-perl \
libfile-homedir-perl \
libgcc-4.9-dev-armhf-cross \
libgcc1-armhf-cross \
libgomp1-armhf-cross \
libsfasan1-armhf-cross \
libsfatomic1-armhf-cross \
libsfgcc-4.9-dev-armhf-cross \
libsfgcc1-armhf-cross \
libsfgomp1-armhf-cross \
libsfstdc++-4.9-dev-armhf-cross \
libsfstdc++6-armhf-cross \
libsfubsan0-armhf-cross \
libstdc++-4.9-dev-armhf-cross \
libstdc++6-armhf-cross \
libubsan0-armhf-cross \
linux-libc-dev-armhf-cross \
pdebuild-cross \
pkg-config-arm-linux-gnueabihf \
xapt

The last package in the list is xapt, a wrapper around apt so that we can install packages specifically for other architectures like armhf. This includes things like *-dev packages with headers which will likely be required if you compile other software. Once those are installed, you need to tell the terminal you are targeting the armhf architecture. The CROSS_COMPILE flag will make your toolchain (gcc and friends) and your software aware that you are using a cross-compiler.
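As a sketch of what that looks like in practice (zlib1g-dev is just an example package, and you should check `man xapt` for the exact options shipped with your release):

```shell
# fetch the zlib development package for the armhf target; xapt converts
# it with dpkg-cross and installs it where the cross toolchain can find it
sudo xapt -a armhf zlib1g-dev
```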

export $(dpkg-architecture -aarmhf) 
export CROSS_COMPILE=arm-linux-gnueabihf-

You might get this warning:

dpkg-architecture: warning: specified GNU system type arm-linux-gnueabihf does not match gcc system type x86_64-linux-gnu, try setting a correct CC environment variable

This message is harmless and you can ignore it. Now to test this, create a file called main.c and copy this Hello World code into it.

#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    printf("Hello world\n");
    return 0;
}

You'll then compile it twice, first natively and second for your target platform.

gcc -o hello_x86 main.c -static
arm-linux-gnueabihf-gcc -o hello_arm main.c -static

You can then use file to test the resulting output and it should match below:

bcurtis@Redqueen:~/workspace/RPi$ file hello_x86
hello_x86: ELF 64-bit LSB executable, x86-64, version 1 (GNU/Linux), statically linked, for GNU/Linux 2.6.32, BuildID=217c28644cf5be3ea4d24bea79c3da3bbdd9a2a9, not stripped
bcurtis@Redqueen:~/workspace/RPi$ file hello_arm
hello_arm: ELF 32-bit LSB executable, ARM, EABI5 version 1 (GNU/Linux), statically linked, for GNU/Linux 2.6.32, BuildID=3a5e42174d6b72ddf8b0265a9b76b3cea0668623, not stripped

Notice how the last one is ARM, EABI5 version 1; this indicates that the binary is compiled for armhf, your Raspberry Pi. Next we are going to try to run them:

bcurtis@Redqueen:~/workspace/RPi$ ./hello_x86
Hello world
bcurtis@Redqueen:~/workspace/RPi$ ./hello_arm
Hello world

You might be asking how the hello_arm binary can run on an x86 system. This is thanks to the -static flag during compilation, which shoves all the required libraries into your binary. The ones included here are specially crafted multi-libs that can be used on both your host and your target platform (both x86 and ARM). The resulting binaries are larger as a consequence. You can remove the -static flag and see that the binary no longer runs on your host machine, but it will be much smaller and will run on your target RPi2.

Aiming higher, we will try to get a Linux kernel built using Eric Anholt's VC4 branch. Go ahead and check out Eric's branch: https://github.com/anholt/linux/tree/vc4-kms-v3d-rpi2

git clone git@github.com:anholt/linux.git -b vc4-kms-v3d-rpi2 --depth 10
cd linux
export $(dpkg-architecture -aarmhf); export CROSS_COMPILE=arm-linux-gnueabihf-
make ARCH=arm -j`nproc` bcm2709_defconfig
make ARCH=arm -j`nproc`

It will spawn a number of processes in parallel; nproc returns how many cores you have. After a few minutes of tea sipping, you'll have your newly minted arch/arm/boot/zImage that you can then copy over to your sdcard. Take a moment to make sure your config.txt is pointing at the right kernel, then give it a try. You should now have your RPi2 online with Linux 4.0! Please note: at the time of this post, while the option to compile in VC4 support is there, it currently isn't functioning. Eric is still busy getting the RPi2 back to the same state as the original RPi. Cheers!
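The copy step above can be sketched like this (the mount point and kernel filename are assumptions; adjust them for wherever your sdcard's boot partition is mounted):

```shell
# copy the freshly built kernel onto the sdcard's boot partition
sudo cp arch/arm/boot/zImage /media/$USER/boot/kernel-vc4.img
# tell the firmware to boot the new kernel
echo "kernel=kernel-vc4.img" | sudo tee -a /media/$USER/boot/config.txt
```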

TxMongo - Your Asynchronous MongoDB Twisted Client


We're proud to announce the release of TxMongo 0.6, which brings SSL support using Twisted's SSL context factory, "find with cursor" support just like PyMongo, bug fixes and updated unit tests! TxMongo is an asynchronous MongoDB client written for Twisted in Python.

The biggest change is that TxMongo is now sponsored by Amplidata. Through them we were able to get development, bug fixes and Twisted first-party sponsorship online. We now have continuous integration (CI) with a wide matrix of support for py26/py27/pypy using Twisted 12.1 to 14.0 (and trunk). As a result, we also now have 78% code coverage with unit testing! This is the very last release in the 0.x series before we step over to the "year.release" model used by Twisted, and TxMongo will also eventually find its way into Twisted's GitHub organization as a first-class library. You can download TxMongo 0.6.0 and other releases here: TxMongo Github Releases

What to expect

We have a list of priorities:

  • Switch documentation over to Sphinx for readthedocs.org support.
  • Get TxMongo moved over to Twisted's org, with Travis-CI and Coveralls.
  • Get coverage to at least 80%.
  • Research functions found in PyMongo that are missing in TxMongo.
  • Contact the various TxMongo forks and gather up bugs/issues/patches from the distros.

Backstory

In evaluating ways to use MongoDB with Twisted, there were two options:

  1. PyMongo
  2. TxMongo

The first option, supported by MongoDB themselves, is up to date feature-wise but is synchronous and blocking. To get around this behaviour, you'll need to defer it to a thread. The second option, TxMongo, lacks a lot of the features of PyMongo but is made for Twisted. Amplidata's only concern was the lack of SSL support in TxMongo; all the main features that we needed are there. Thankfully the original author Alexandre Fiori, who had put the project into maintenance mode, accepted our patch. We talked a bit about the future of TxMongo, and as it turns out he is no longer developing it; since he no longer has the time, he would love to hand it to the community to see it further developed and maintained. We included Glyph of Twisted in the conversation to see about a new home, with the driving development work coming from Amplidata. The rest, as they say, is history.

Example code using TxMongo and SSL

First we start up mongodb:

#!/bin/bash
# create the path
mkdir -p /tmp/mongodb
# start mongodb process
mongod --dbpath /tmp/mongodb --sslMode requireSSL --sslPEMKeyFile mongodb.pem
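If you don't already have a certificate, a self-signed one for local testing can be generated like this (the filenames match the ones used above and in the client code below; the localhost subject is an assumption for a local-only setup):

```shell
# generate an unencrypted key and a self-signed certificate for localhost
openssl req -newkey rsa:2048 -nodes -x509 -days 365 \
    -subj "/CN=localhost" -keyout mongodb.key -out mongodb.crt
# mongod wants the key and certificate concatenated into a single PEM file
cat mongodb.key mongodb.crt > mongodb.pem
```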

Second we run this code:


from OpenSSL import SSL
from txmongo.connection import ConnectionPool
from twisted.internet import defer, reactor, ssl


class ServerTLSContext(ssl.DefaultOpenSSLContextFactory):
    def __init__(self, *args, **kw):
        kw['sslmethod'] = SSL.TLSv1_METHOD
        ssl.DefaultOpenSSLContextFactory.__init__(self, *args, **kw)


@defer.inlineCallbacks
def example():
    tls_ctx = ServerTLSContext(privateKeyFileName='./mongodb.key',
                               certificateFileName='./mongodb.crt')
    mongodb_uri = "mongodb://localhost:27017"

    mongo = yield ConnectionPool(mongodb_uri, ssl_context_factory=tls_ctx)

    foo = mongo.foo  # `foo` database
    test = foo.test  # `test` collection

    # fetch some documents
    docs = yield test.find(limit=10)
    for doc in docs:
        print doc

if __name__ == '__main__':
    example().addCallback(lambda ign: reactor.stop())
    reactor.run()