Tips & Tricks on Setting up a Web2py Turnkey Appliance

May 23, 2013

Web2py Turnkey Appliance


The web2py Turnkey Linux appliance is a great way to quickly develop and deploy web2py applications into production. It is free, 100% open source, and can be deployed to bare metal, to virtual machines, or into the cloud on Amazon Web Services using the Turnkey Hub. It combines benefits from both the appliance world and the custom-server world: it is a small, lightweight, pre-built, easy-to-maintain, well-supported appliance with controlled security updates, yet you can still look under the hood, where you'll find an Ubuntu Server core. That core is useful for those times when you need additional packages or customization, or when you just want to understand more about what makes the appliance tick. It comes prepackaged and pre-configured with web2py, Apache, WSGI, SSL, MySQL, IPython, SFTP, SSH, Postfix, Webmin, Turnkey start-up scripts and the Turnkey configuration console. As good as this all sounds, you're likely to encounter a few issues along the way. The goal of this post is to help you painlessly navigate through them.

Item #1:  Web2py Version Upgrade

The first problem you might run into is that version 12 of the Turnkey appliance runs web2py 1.99.7. Not a big deal, right? Just click the "check for upgrades" button under the version information on the web2py admin page and voilà! Well, not quite ... in 1.99.7 the version parser had an issue that was fixed in later releases. If you click the check-for-upgrades button in 1.99.7 you won't be notified of an error and you won't be able to upgrade from the web2py admin front-end, but you will find a corresponding ticket if you click on the errors button under the admin application:

TypeError: not all arguments converted during string formatting

Don't panic! There is a quick and easy work-around: SSH into the web2py appliance and edit line 113 in /var/www/web2py/applications/admin/controllers/ so that "version_number" is wrapped with repr(), which lets the version tuple be displayed as a string:

return sp_button(URL('upgrade_web2py'), T('upgrade now to %s') % repr(version_number))
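To see why wrapping with repr() works, here is a standalone sketch; the version tuple below is made up, but it has the same shape as the real problem: %-formatting treats a bare tuple as a list of arguments, while repr() turns it into a single string first.

```python
version_number = (2, 4, 6, 'stable', 20130523)  # hypothetical version tuple

# A bare tuple supplies one argument per element, so a single %s raises
# "not all arguments converted during string formatting".
try:
    msg = 'upgrade now to %s' % version_number
except TypeError as exc:
    print(exc)

# repr() collapses the tuple into one string argument.
print('upgrade now to %s' % repr(version_number))
```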

Save the change and restart Apache …

/etc/init.d/apache2 restart

Now try clicking the check-for-upgrades button; this time an upgrade button should appear once the check completes. Clicking it should upgrade you to the latest stable version of web2py and you'll be good to go. There have been significant changes in web2py since 1.99.7, so I highly recommend upgrading.

Item #2:  Creating a New Web2py Application

Another issue you may run into is that whenever you try to create an application from the web2py admin console, a message flashes:

unable to create application "your-application" (it may exist already)

This is most likely because the appliance was built by installing web2py from the trunk source, which does not include welcome.w2p. No problem, there is an easy fix for this too: download a copy of the latest stable web2py release and transfer (i.e. SCP or SFTP) the welcome.w2p file located in the web2py root directory into the /var/www/web2py directory on the Turnkey appliance. Now try creating a new application from the web admin console. Presto!

Item #3:  Setting the Date & Time

When you start using web2py you may find that your timestamps and/or application database dates are off. The fix is easy:

apt-get install webmin-time
/etc/init.d/webmin start

Now access Webmin at http://your-ipaddress:12321 and you should see a clock icon. Click on it and set the current time, your timezone and, if desired, NTP servers.

Item #4: Developing Behind a Proxy Server (optional)

If you're connected to the Internet via a proxy, you can add the following line to APT's configuration (typically /etc/apt/apt.conf):

Acquire::http::Proxy "http://<server>:<port>";

Of course, replace server and port with the appropriate values.

Have fun 🙂


How-to use Python LDAP paged results control to handle large LDAP searches

April 5, 2013

python-ldap has evolved through the years, and a lot of the information still found on the Internet regarding LDAP paged results only works with older versions of Python. The purpose of this post is to illustrate the use of the LDAP paged results control with Python 2.7 and higher.

Snippet Objective

Most LDAP servers limit the number of results returned by searches; this code uses an LDAP paged results control to overcome that limitation.

Snippet Results

Generates a list of all of the queried LDAP data and stores it in the results variable.

RFC 2696: pagedResultsControl

An LDAP client application that needs to control the rate at which results are returned MAY specify on the searchRequest a pagedResultsControl ... [on the first request, with the] cookie set to the zero-length string.

In python-ldap the control is imported as:

from ldap.controls import SimplePagedResultsControl


Active Directory maximum page size

For Active Directory on Windows Server 2003-2008, the default maximum page size supported for LDAP responses is 1,000 records.

PAGE_SIZE = 1000

RFC 2696: Criticality

"... if the client requested it as critical ... the server MUST return an error of unsupportedCriticalExtension ... otherwise the server SHOULD ignore the control."

With python-ldap, if the LDAP page control is not supported it should theoretically raise the ldap.UNAVAILABLE_CRITICAL_EXTENSION exception; however, I can't find any example code where anyone handles this exception, and all of the LDAP servers I use support the pagedResultsControl. I don't normally like to use a generic exception handler, but until I can test out the exception handling in more detail, the snippet just displays the raw error if it encounters an exception during the LDAP search.


The following code is just a snippet; it assumes that LDAP_CONN is an established python-ldap connection object and that the LDAP search parameters BASE_DN, LDAP_SCOPE, FILTERSTR and ATTRIBS are already set.

import sys
import ldap
from ldap.controls import SimplePagedResultsControl

PAGE_SIZE = 1000

results = []
first_pass = True
# criticality=True, size=PAGE_SIZE, cookie set to the zero-length string
pg_ctrl = SimplePagedResultsControl(True, PAGE_SIZE, '')

while first_pass or pg_ctrl.cookie:
    first_pass = False
    try:
        msgid = LDAP_CONN.search_ext(
                BASE_DN, LDAP_SCOPE,
                FILTERSTR, ATTRIBS,
                serverctrls=[pg_ctrl])
        result_type, data, msgid, serverctrls = \
            LDAP_CONN.result3(msgid)
    except Exception as error_msg:
        sys.stderr.write("ERROR: \n%s\n" % error_msg)
        break
    # feed the server's paging cookie back into the control for the next page
    pg_ctrl.cookie = serverctrls[0].cookie
    results += [i[0] for i in data]

Automate SFTP transfers in Linux with Expect inside a BASH script wrapper

April 5, 2013



#!/bin/bash
# Set HOST, PORT, USER, PASSWORD and FILE here before running
expect <<EOF
spawn /usr/bin/sftp -o Port=$PORT $USER@$HOST
expect "password:"
send "$PASSWORD\r"
expect "sftp>"
send "put $FILE\r"
expect "sftp>"
send "bye\r"
EOF

Just set HOST, PORT, USER, PASSWORD and FILE at the top of the script and run it.

reference: SFTP a file using a shell script

Frequently used Microsoft Active Directory LDAP search filters

April 3, 2013

It seemed much more difficult than it should have been to find a nice logical grouping of LDAP search filter strings for Active Directory that accomplish the most frequently used functions, such as returning the nested groups that a user belongs to, searching for a group by name using wildcards, etc. The following search filters are particularly useful for developers who may not have administrative rights, but still need to build LDAP integration and/or troubleshoot application authentication issues.

The account names and distinguished names in the filters below (jsmith, NetworkAdmins, DC=example,DC=com) are placeholders; substitute your own.

All Users

(&(objectCategory=person)(objectClass=user))

Specific User

Search by logon name, wildcards permitted

(&(objectCategory=person)(objectClass=user)(sAMAccountName=jsmith*))

All Groups

(objectCategory=group)

Specific Groups

Search by group name, wildcards permitted

(&(objectCategory=group)(cn=NetworkAdmins*))

All Members of a Group

Search by Group DN

(&(objectCategory=person)(memberOf=CN=NetworkAdmins,OU=Groups,DC=example,DC=com))

All Nested-Groups a User Belongs-To

Search by User DN, using the LDAP_MATCHING_RULE_IN_CHAIN matching rule

(&(objectCategory=group)(member:1.2.840.113556.1.4.1941:=CN=John Smith,OU=Users,DC=example,DC=com))

Python imaplib – IMAP4 search method parameters

November 18, 2010

The imaplib IMAP4 "search" method is very powerful because it allows mail to be filtered on the mail server before the results are ever sent back across the network. The following example returns a list of message numbers for all unread messages newer than the 1st of the year that do not contain "Smith" in the "From" header (imap_conn here stands for a connected IMAP4 object):

imap_conn.search(None, 'UNSEEN SINCE 1-Jan-2010 NOT FROM "Smith"')
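One gotcha worth noting: the SINCE date must be in the DD-Mon-YYYY format shown above. A small sketch of generating it with the standard library (the date is just the one from the example; the month abbreviation assumes an English/C locale):

```python
import datetime

# Build an IMAP4 SEARCH criteria string with an RFC 3501 style date
since = datetime.date(2010, 1, 1).strftime('%d-%b-%Y')
criteria = 'UNSEEN SINCE %s NOT FROM "Smith"' % since
print(criteria)
```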

Like many other IMAP4 object methods, you won't find the options for this search parameter in the Python imaplib documentation. That is because those parameters are specified in detail in RFC 3501, the Internet Message Access Protocol - Version 4rev1 standard.

RFCs are great; they're very detailed, but they were never designed to be user-friendly how-tos. For example, the search-string parameter options accepted by IMAP4's SEARCH command are strewn across five (5) different pages of the RFC and are listed in alphabetical order rather than grouped by function.

For future reference I decided to create an IMAPv4 SEARCH command reference, i.e. cheat-sheet. I’ve grouped the commands by:

  • Search for String in Message commands
  • Search for Message with Flags Set commands
  • Search for Messages with Flags Not Set commands
  • Search on Internal Message Date commands
  • Search on Message Header Date commands
  • Search on Message Size commands

Note: If you use Lotus Notes the mailbox must be full-text indexed before the IMAP4 SEARCH command will work.

How-to retrieve Rancid device configs & change-history from CVS

November 16, 2010

OK, you've got Rancid working and you're automatically collecting and version-controlling your network device configurations. Great! But if you're a network admin rather than a developer, not a veteran Unix admin, or perhaps just used to newer version control systems like Git or Bazaar, you may not be familiar with CVS. So you've got the configs via Rancid, but now what do you do with them? You could use CVSWeb, but what if you're a network admin who still prefers the command line? I thought I'd share some really basic CVS commands to get you started.

The examples below use:
  • "Routers" as the Rancid group name of interest. You can replace it with whatever group you want to look at.
  • /var/lib/rancid as the Rancid home directory, the default for the Ubuntu Rancid package. If you are using a different Linux distro, just replace it with the correct home directory.

CVS checkout:

First you need to check out a local copy; by default it will go in the current directory.

sudo cvs -d /var/lib/rancid/CVS co Routers

To get status:

sudo cvs -d /var/lib/rancid/CVS status Routers

To get a revision history:

sudo cvs -d /var/lib/rancid/CVS rlog Routers

To compare versions:

sudo cvs -d /var/lib/rancid/CVS diff -r 1.2 -r 1.3 Routers

To replace a file:

Delete the file and use the following command:

sudo cvs -d /var/lib/rancid/CVS update Routers

When a little awk goes a long way

November 16, 2010

There is a great post on a blog called Gregable titled "Why you should know just a little awk". The comments are also worth reading. I discovered the link from another post titled "A little awk" on John Cook's blog, The Endeavour.

I use a wide variety of Unix text-processing tools on a regular basis, but over time, like many others, I started migrating the tasks that required the power of 'awk' over to another language; in my case that language was Python. Typically I can write scripts faster in Python, and I find the code more readable. However, after reading the above post I was reminded that there are some one-liner 'awk' tasks that are really clean and effective. Lately I have found myself sparingly using 'awk' again; here's why...

When to use ‘awk’ instead of ‘cut’

  1. Cut's delimiter is a single character; awk's delimiter is a regular expression.
  2. Awk allows fields to be specified relative to the last field position using 'NF'.
  3. Cut always displays fields in ascending field-number order, regardless of the order in which they are specified in the field-list parameter; awk can display the fields in any order you specify.
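Points 1 and 3 are easy to see side by side; a quick sketch at the shell:

```shell
# cut splits on exactly one character and keeps field order ascending
printf 'a:b:c\n' | cut -d: -f3,1        # prints "a:c", not "c:a"

# awk's -F takes a regular expression, and fields print in any order
printf 'a::b:::c\n' | awk -F':+' '{print $3 ":" $1}'   # prints "c:a"
```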

    splits fields at multiple characters either a, b, c, d
    awk -F'[abcd]'

    split at one (1) or more spaces
    awk -F' +'

    re-order fields
    awk '{print $3 "\t" $2 "\t" $1}'

    prints last field
    awk '{print $NF}'

    prints next to last field
    awk '{print $(NF-1)}'
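The NF sketches above can be tried on a live line of input; the interface line here is invented:

```shell
# last field and next-to-last field of a whitespace-separated line
echo 'eth0 1500 up' | awk '{print $NF}'        # prints "up"
echo 'eth0 1500 up' | awk '{print $(NF-1)}'    # prints "1500"
```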

When to use 'awk' instead of Python, Perl, etc.?

1. When you can write the task in one simple, readable line with awk, i.e.
  1. Simple reformatting of data.
  2. Simple comparisons on fields.
  3. Rearranging the order of fields.
  4. Splitting on regular expressions, including multiple characters.
  5. Feel free to comment on other reasons.
2. When the speed of Python, Perl, etc. scripts is too slow for repeated use; this is rare when they are coded properly.

Watch your quotes with 'awk' ...

Here is the standard Unix method of quoting:

$ awk '$NF > 385 && $(NF-1) ~ "^Sh" {print NR "\t" $0}' orders.txt
4       10416   2005-05-10 00:00:00     Shipped 386
6       10418   2005-05-16 00:00:00     Shipped 412

Here is the equivalent command using unxutils for Windows; note the difference in quoting:

C:\> gawk "$NF>385 && $(NF-1) ~ \"^Sh\" {print NR \"\t\" $0}" orders.txt
4       10416   2005-05-10 00:00:00     Shipped 386
6       10418   2005-05-16 00:00:00     Shipped 412

Here is what the above command is doing:

1. Iterates through every line in the file "orders.txt".
2. Splits each line into fields at whitespace (the default delimiter).
3. Tests whether the last field is greater than 385.
4. Tests whether the next-to-last field matches the regular expression "^Sh", i.e. begins with the letters "Sh".
5. If tests 3 & 4 were true, prints the line number followed by a tab and then the line text itself.

Sample text being processed:

$ cat orders.txt
10413   2005-05-05 00:00:00     Shipped 175
10414   2005-05-06 00:00:00     On Hold 362
10415   2005-05-09 00:00:00     Disputed        471
10416   2005-05-10 00:00:00     Shipped 386
10417   2005-05-13 00:00:00     Disputed        141
10418   2005-05-16 00:00:00     Shipped 412
10419   2005-05-17 00:00:00     Shipped 382
10420   2005-05-29 00:00:00     In Process      282

How-to turn your network diagrams into schematics instead of arts & crafts projects

November 14, 2010

Visio can create awesome-looking network diagrams with cool pictures of your network equipment; however, those pictures come at a cost: they consume valuable real estate. As a result, drawings are often cluttered with device-specific information scattered all around the outside of the visual representations.

In the electrical engineering world we used lots of schematics; schematics didn't waste valuable real estate with actual pictures of the resistors, capacitors, inductors, diodes, transformers, etc. Why? Because a schematic is supposed to be a design & troubleshooting tool, not a flipp'n arts & crafts project!

As an experiment, try using geometric shapes, i.e. triangles, pentagons, hexagons, octagons, etc., to represent network devices.

1. Use a geometric shape that matches the number of interconnecting interfaces, then label each inside corner of the shape with interface-specific info.
2. Use the remaining interior of the shape to record device-specific information, i.e. hostname, IP/mask, serial number, firmware/software version, etc.
3. Fill in the geometric shapes with a color to identify the device type, i.e. red for firewalls, orange for IPSs, blue for routers, green for switches, etc.

Whether or not you like the idea of using geometric shapes in your network drawings, at least consider putting the emphasis on creating useful schematics for design & troubleshooting; the quality and presentation of the data and the interconnects should always trump the artwork. Chances are anyone else using your drawings will appreciate it too.

Thanks to J. Scott Haugdahl for his book Network Analysis and Troubleshooting; it was from his book that I first got the idea of using geometric shapes in network drawings.

Quickly parsing Cobol fixed-length data from Copybook definitions into Python lists

November 9, 2010

Here is a really simple module for converting fixed-length Cobol data into a Python list; you can find this code and related modules in the Harbingers-Hollow repository on Github.

You can pipe the results of the copybook-parsing script into this module to quickly parse the data into a list, or once you already know the structure you can call parse_data(struct_fmt, lines) directly. If the copybook field lengths and the actual record lengths don't match, it will still parse the data, but it will display a warning indicating that the data may have been truncated or padded to fit the field definitions.

#!/usr/bin/env python
# -*- coding: utf-8 -*-
__version__ = """COBOL Fixed-length Data Parser ver 0.2
Note: This version does not work with OCCURS in Copybook files,
but is a lot faster than the variable-length data parser modules.
License: GPLv3, Copyright (C) 2010 Brian Peterson
This is free software.  There is NO warranty."""
USAGE = """ CopybookFile"""
import load
import csv, struct, sys

def parse_data(struct_fmt, lines):
    try:
        return [ struct.unpack(struct_fmt, i) for i in lines ]
    except struct.error:
        sys.stderr.write('Record layout vs. record size mismatch\n')
        size = sum([ int(i) for i in struct_fmt.split('s')[:-1] ])
        return [ struct.unpack(struct_fmt, i.ljust(size)[:size])
                 for i in lines ]

def main(args):
    copybook = load.csv_(args.copybook.readlines(), strip_=True)[1:]
    field_lengths = [ int(i[2]) for i in copybook ]
    struct_fmt = 's'.join([ str(i) for i in field_lengths ]) + 's'
    if args.struct:
        print struct_fmt
    else:
        for record in parse_data(struct_fmt, load.lines(args.datafile)):
            print record

if __name__ == '__main__':
    from cmd_line_args import Args
    args = Args(USAGE, __version__)
    args.add_files('datafile', 'copybook')
    args.parser.add_argument('-s', '--struct', action='store_true',
        help='show structure format')
    main(args.parse())
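The core trick in the module is struct.unpack with an 's'-format string built from the field lengths. A standalone sketch with invented field widths and an invented record:

```python
import struct

# Hypothetical copybook field lengths: 5, 10 and 4 characters wide
field_lengths = [5, 10, 4]
struct_fmt = 's'.join(str(i) for i in field_lengths) + 's'   # '5s10s4s'

# A 19-byte fixed-width record built from made-up values
record = b'10416' + b'Shipped'.ljust(10) + b'386'.rjust(4)
fields = struct.unpack(struct_fmt, record)
print(fields)
```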

Simple Python argparse wrapper

November 3, 2010

When creating Python modules I frequently turn them into Unix-style command-line applications. That makes them easy to demo, test, debug, pipe together, and parse with Unix tools. The argparse module, available in Python versions 2.7 and later, is great for this sort of thing, but I found myself copying and pasting a lot of code for each new application, thereby breaching the DRY (Don't Repeat Yourself) principle. I used plac for a while, but when I went to deploy an application to production I needed to switch it over to argparse. As a result I decided to build my own argparse wrapper with the following features:

• Ability to display multi-line version-information strings (argparse ignores line feeds), used with the --version option.
• Ability to include positional filename arguments with very little code.
• Ability to allow the last positional filename argument to be replaced with Stdin.
• Ability to call common options from a standard command-argument library.
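For contrast, here is the kind of boilerplate the wrapper factors out; a minimal plain-argparse sketch (the option names mirror the wrapper's standard option library, the filename is made up):

```python
import argparse

parser = argparse.ArgumentParser(usage='%(prog)s [options] datafile')
parser.add_argument('-d', '--debug', action='store_true',
                    help='Turn on debug mode.')
parser.add_argument('-v', '--verbose', action='store_true',
                    help='Turn on verbose mode.')
parser.add_argument('datafile', help='filename... datafile')

# parse a sample command line instead of sys.argv
args = parser.parse_args(['-v', 'records.txt'])
print(args.verbose, args.datafile)
```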


__version__ = """argparse wrapper sample code version 0.1a
This is free software.  There is NO warranty."""
USAGE = """
COPYBOOK - Filename: output from
DATAFILE - Filename: COBOL records, fixed-width text"""
import load

def main(args):
    fields = load.csv_(args.copybook, strip_="right", prune=True)
    data = load.lines(args.datafile, stop_at_line=1)
    if args.verbose:
        print 'In verbose mode...'
    if args.license:
        print 'GPL (GNU Public License)'

if __name__ == '__main__':
    from cmd_line_args import Args
    args = Args(USAGE, __version__)
    args.allow_stdin = True
    args.add_files('copybook', 'datafile')
    args.add_options('debug', 'verbose')
    args.parser.add_argument('--license', action='store_true',
        help='display license information')
    main(args.parse())


The sample code above:

• Adds 2 positional filename parameters: copybook & datafile
• Allows the 'datafile' parameter to be optionally omitted and Standard Input (Stdin) used instead of reading from a file. For example, cat file1.txt | ./ layout.csv will work.
• Automatically adds -d, --debug, -v, --verbose from a standard library of command-line options
• Accesses the argparse object directly to support all normal argparse methods & functionality (i.e. see the --license option).

Note: The source code below may be dated; to get the most current version visit Harbingers-Hollow at Github.

import argparse, sys

__all__ = ['Args']

class VersionAction(argparse.Action):
    """Overrides argparse._VersionAction to allow line feeds within
    the version display information."""
    def __init__(self, option_strings, version=None,
            dest=None, default=None, help=None):
        super(VersionAction, self).__init__(option_strings=option_strings,
            dest=dest, default=default, nargs=0, help=help)
        self.version = version
    def __call__(self, parser, namespace, values, option_string=None):
        version = self.version
        if version is None:
            version = parser.version
        print version
        parser.exit()

class Args:
    """argparse wrapper"""
    allow_stdin = False
    def __init__(self, usage, version):
        self.parser = argparse.ArgumentParser(usage=usage)
        self.parser.version = version
        self.parser.add_argument('-V', '--version',
            action=VersionAction,
            help='display version information and exit')
        self.file_args = ()
    def add_files(self, *file_args):
        """Add positional filename arguments.  If self.allow_stdin is set,
        the last filename may be replaced by standard input.  Example:
            object.add_files('config_file', 'data_file')
        The 1st filename will be saved in a variable called 'config_file'.
        The 2nd filename will be saved in a variable called 'data_file'."""
        if self.allow_stdin:
            for file_arg in file_args[:-1]:
                self.parser.add_argument(file_arg, help='filename... %s' % file_arg)
            self.parser.add_argument(file_args[-1],
                help='filename... %s' % file_args[-1], nargs='?')
        else:
            for file_arg in file_args:
                self.parser.add_argument(file_arg, help='filename... %s' % file_arg)
        self.file_args = file_args
    def add_filelist(self):
        """FUTURE: ability to add a list of files to be processed.  Similar to
        the Python fileinput module, but with the ability to include other
        arguments.  Support for wildcards."""
    def add_options(self, *options):
        """Add from a standard library of pre-defined command-line arguments"""
        for option in options:
            option = option.lower()
            if option == 'debug':
                self.parser.add_argument('-d', '--debug', action='store_true',
                    help='Turn on debug mode.')
            elif option == 'debug_level':
                self.parser.add_argument('-d', '--debug', type=int,
                    help='Set debug level 1-10.')
            elif option == 'verbose':
                self.parser.add_argument('-v', '--verbose', action='store_true',
                    help='Turn on verbose mode.')
            elif option == 'quiet':
                self.parser.add_argument('-q', '--quiet', action='store_true',
                    help='Suppress all output to terminal.')
    def use_stdin(self):
        """Allow the last filename argument to be replaced by standard input."""
        self.allow_stdin = True
    def parse(self):
        """Parse args & use sys.stdin if applicable.
        Sets all file arguments to open file objects."""
        args = self.parser.parse_args()
        if self.file_args:
            stdin_used = False
            if self.allow_stdin and not sys.stdin.isatty():
                setattr(args, self.file_args[-1], sys.stdin)
                stdin_used = True
            # skip opening the last argument if it was replaced by stdin
            last_arg_idx = len(self.file_args) - stdin_used
            for file_arg in self.file_args[:last_arg_idx]:
                try:
                    file_ = open(getattr(args, file_arg))
                except IOError, error_msg:
                    sys.stderr.write('ERROR loading file "%s".\n%s\n' %
                        (file_arg, error_msg))
                    sys.exit(1)
                setattr(args, file_arg, file_)
        return args