Jan 21, 2014

I recently re-modified an install of awstats to detect iOS and iPhone devices, and while I was at it, I reapplied my changes to detect our Xymon server as well.

The instructions here: http://cornempire.net/2009/08/31/adding-a-robot-to-awstats/ and here: http://wiki.cornempire.net/awstats/awstatsrobots still work. I just made a small tweak to match our new version of Xymon, and to make the detection broader, which should future-proof it a bit: http://wiki.cornempire.net/awstats/awstatsrobots#updatejan_21_2014

As a side note, if you are trying to detect iOS on iPhones and iPads, this tutorial is useful: http://serverfault.com/questions/248692/analyze-http-logs-looking-for-ios except for one thing.

I received an error about a mismatch in the number of entries in @OSSearchIDOrder and %OSHashID. More recent versions of awstats include basic iOS detection, and because the footprint of the new code uses the same ios label as the stock code, awstats throws this error. Commenting out the built-in iOS detection did the trick for me. I may try to expand the iOS section a bit and detect version numbers in a future version. I may also create a patch that breaks iOS and Android devices out into a new family called tablets, or mobile devices…maybe.

Dec 03, 2013

At work we have started using a piece of software called Alfresco. We use the Share web interface for collaboration within our department. A co-worker and I attended the excellent Alfresco Summit: http://summit.alfresco.com/ in Boston. We learned a lot about the software and how extensible it is. It can be used in many ways, far more heavily than we are currently using it. We have hatched plans to use it for our departmental workflows, as well as to build an online repository on top of the deep repository backend that is available.

The application is written in Java and has lots of configuration and modification options. I’m going to highlight the first change we made to the Share UI here. I plan to document more in the future.

Goal: We plan to use our Share implementation mostly for internal collaboration and have it tied to our Active Directory for that purpose. However, we do plan to allow others in for some projects. Because of this, we wanted to change the default option when creating a site from ‘Public’ to ‘Private’. This will help prevent staff from accidentally creating public sites.

While at the conference, I spoke with an Alfresco Engineer and he explained how we could customize this page.

The file which controls the screen is called: create-site.get.html.ftl. I found it at ./tomcat/webapps/share/WEB-INF/classes/alfresco/site-webscripts/org/alfresco/modules/create-site.get.html.ftl

Grab a copy of all of the content in this file and make your changes.

Then, create the same file in this directory: ./tomcat/shared/classes/alfresco/web-extension/site-webscripts/org/alfresco/modules/ (you will likely have to create some of these directories yourself). Note that everything after WEB-INF in the original path matches everything after the shared folder in the override path, except for the addition of the web-extension folder. I’m sure that is relevant, which is why I’ll highlight it below:
Path 1: ./tomcat/webapps/share/WEB-INF/classes/alfresco/site-webscripts/org/alfresco/modules/
Path 2: ./tomcat/shared/classes/alfresco/web-extension/site-webscripts/org/alfresco/modules/
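The copy itself can be sketched in shell. To be clear, ROOT below stands in for your Alfresco install directory; this sketch uses a scratch directory so it is safe to run anywhere, and you would point it at your real install instead:

```shell
# ROOT stands in for the Alfresco install root; a scratch dir is used here so
# this sketch can run anywhere -- substitute your real install path.
ROOT="${ROOT:-$(mktemp -d)}"
DST="$ROOT/tomcat/shared/classes/alfresco/web-extension/site-webscripts/org/alfresco/modules"
mkdir -p "$DST"   # the web-extension tree usually does not exist yet
# copy your edited file into place (create-site.get.html.ftl is the real name)
cp create-site.get.html.ftl "$DST/" 2>/dev/null \
  || echo "now copy your edited create-site.get.html.ftl into $DST"
```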

If you wanted to customize the HTML available in other parts of the app, you could probably drop the override file into the matching directory from its original location.

Anyway, once you have the file in place, you can restart Alfresco and it should be in play. This worked the first time I tried it, which was quite excellent!

Now, when you upgrade your repository, you should back up your customizations and reapply them using the newer version of the code. The Alfresco team could change, add, or remove features on the page you are customizing, and your override file will not take those changes into account. This could cause erratic behaviour after the upgrade.

Jun 21, 2013

The Goal

Have all syslogs from all servers shipped to a central server so that we can query them in one spot and review old logs in the event of a compromise, using only free software.

After looking at a number of options, I settled on rsyslog for the server, standard syslog on the clients (for now), LogAnalyzer as the web UI, and both a mysql backend and a file-based backend for rsyslog. The configuration for rsyslog will be a little different from most of the tutorials out there. I wanted to be able to query across all servers at once, and LogAnalyzer only allows you to configure specific endpoints. Most configuration examples on the web show you how to either dump everything into a database (which will no doubt get huge if you don’t clean it up) or dump each server to a single file which rotates daily, which is not ideal as a LogAnalyzer endpoint because your config would have to change daily. This solution dumps all events from all servers into a database, which will be configured as one realtime endpoint. rsyslog will also be configured to write to a per-host file, and this file will be rotated monthly using the usual logrotate scripts. Several archival (non-compressed) files will also be configured in LogAnalyzer for historical purposes.

Install rsyslog

I’m not going to go into this in detail. I’m going to assume you can already do this. But here are some useful notes:

  • My solution was built on RHEL5 using the packages available from yum. You will need the rsyslog package, as well as apache, php, and mysql for the database. I used link 2 below for this, although I didn’t remove any of the previous syslog packages. I just turned them off. Additionally, you only need to install rsyslog on your central server for this solution, not on each client.
  • I used this template line at the end of my rsyslog config: $template DailyPerHostLogs,”/var/log/LOGHOSTS/%HOSTNAME%/%HOSTNAME%.log”

Configure DB

I’m going to assume you have already installed the DB, but if not, here is a cheat sheet:

  • yum install mysql-server

Run this command (or create the database in another way):

  • mysql -uroot -p < /usr/share/doc/rsyslog-mysql-3.22.1/createDB.sql

Then create a user and password that can access the database from the host the server is on.
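For example, from the mysql client as root; the user name and password here are placeholders you would choose yourself, and the Syslog database name comes from createDB.sql:

```sql
-- 'rsyslogwriter' and 'CHANGEME' are placeholders; adjust the host if rsyslog
-- connects from another machine.
CREATE USER 'rsyslogwriter'@'localhost' IDENTIFIED BY 'CHANGEME';
GRANT SELECT, INSERT, DELETE ON Syslog.* TO 'rsyslogwriter'@'localhost';
FLUSH PRIVILEGES;
```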

Configure rsyslog to log to both

Now this is where the magic happens. Provided you took some part of a sample config from somewhere, you should have something like this in your rsyslog config:

$ModLoad imuxsock.so
$ModLoad imklog.so
$ModLoad imudp.so
#load the network stuff
$UDPServerRun 514
#reduce any duplicates
$RepeatedMsgReduction on
# The template that will format the message as it is written to the file.
# You can edit this line if you want to customize the message format.
$template TraditionalFormat,"%timegenerated% %HOSTNAME% %syslogtag%%msg:::drop-last-lf%\n"
# Log anything (except mail) of level info or higher.
# Don't log private authentication messages!
#*.info;mail.none;authpriv.none;cron.none                /var/log/messages
if      $source == 'YOUR.CENTRAL.SERVER'
        and $syslogseverity <= '6'
        and (
                $syslogfacility-text != 'mail'
                and $syslogfacility-text != 'authpriv'
                and $syslogfacility-text != 'cron'
        )
then    /var/log/messages;TraditionalFormat
# The authpriv file has restricted access.
#authpriv.*                                              /var/log/secure
if      $source == 'YOUR.CENTRAL.SERVER'
        and $syslogfacility-text == 'authpriv'
then    /var/log/secure;TraditionalFormat
# Log all the mail messages in one place.
#mail.* /var/log/maillog;TraditionalFormat
if      $source == 'YOUR.CENTRAL.SERVER'
        and $syslogfacility-text == 'mail'
then    /var/log/maillog;TraditionalFormat
# Log cron stuff
#cron.* /var/log/cron;TraditionalFormat
if      $source == 'YOUR.CENTRAL.SERVER'
        and $syslogfacility-text == 'cron'
then    /var/log/cron;TraditionalFormat
# Everybody gets emergency messages
#*.emerg *
if      $source == 'YOUR.CENTRAL.SERVER'
        and $syslogseverity-text == 'emerg'
then    *
# This template stores the messages for each host in a separate file.
# The same file will always be used, and should be rotated using a logrotate script.
$template DailyPerHostLogs,"/var/log/LOGHOSTS/%HOSTNAME%/%HOSTNAME%.log"
*.* -?DailyPerHostLogs;TraditionalFormat

What we need to add for the mysql logging is:

$ModLoad ommysql.so
*.*  :ommysql:localhost OR IP OF MYSQL SERVER,Syslog,USERNAME,PASSWORD

I added the above code to the very end of the file. Then I restarted rsyslog.

Then I configured a new entry in my config file for LogAnalyzer. If you haven’t installed it yet, you can use the install wizard to set this up. If you have, you will need to edit the config.php file to make changes. The sample included in the comments seemed a little off, so this is what I used:

$CFG['Sources']['Source2']['ID'] = "Source2";
$CFG['Sources']['Source2']['Name'] = "All Servers Realtime";
$CFG['Sources']['Source2']['Description'] = "";
$CFG['Sources']['Source2']['SourceType'] = SOURCE_DB;
$CFG['Sources']['Source2']['MsgParserList'] = "";
$CFG['Sources']['Source2']['DBTableType'] = "winsyslog";
$CFG['Sources']['Source2']['DBType'] = DB_MYSQL;
$CFG['Sources']['Source2']['DBServer'] = "localhost OR YOUR DB HOST";
$CFG['Sources']['Source2']['DBName'] = "Syslog";
$CFG['Sources']['Source2']['DBUser'] = "YOUR DB USERNAME";
$CFG['Sources']['Source2']['DBPassword'] = "YOUR DB PASSWORD";
$CFG['Sources']['Source2']['DBTableName'] = "SystemEvents";
$CFG['Sources']['Source2']['ViewID'] = "SYSLOG";

You can optionally configure a mysql database to store the configuration settings and manage user accounts. I eventually did this for our install, and it did make configuration much simpler.

Log Rotate and mysql Purge Script

It is a good idea to rotate the logs regularly and update the database to purge older entries.

Place this file in /etc/logrotate.d/ and call it rsyslog:

# Rotate each log monthly
/var/log/LOGHOSTS/*/*.log {
    monthly
#    extension .1
    rotate 4
    create 644 root root
    postrotate
        /etc/init.d/rsyslog restart >/dev/null 2>&1 || true
    endscript
}
#/var/log/LOGHOSTS/*/*.log.1 {
#    monthly
#    extension .2
#}
#/var/log/LOGHOSTS/*/*.log.2 {
#    monthly
#    extension .3
#}
/var/log/LOGHOSTS/*/*.log.3 {
    monthly
    rotate 8
    maxage 365
}

EDIT: My logrotate script didn’t work as expected. It seemed to work OK for the first month, then started adding extra .# extensions and messed things up a bit. Another option could be to reverse the order of the stanzas above: handle .3 first, then .2, and so on.

To purge the database and keep your logs clean, just set up this script to run as a cronjob:

#!/bin/bash
# Special account to run this query only.
# MySQL queries
RECORDSTODELETE=`mysql -u{MYSQL USERNAME} -p{MYSQL PASSWORD} --silent -e "SELECT COUNT(*) FROM Syslog.SystemEvents WHERE Syslog.SystemEvents.ReceivedAt < DATE_SUB(NOW(), INTERVAL 30 DAY);"`
mysql -u{MYSQL USERNAME} -p{MYSQL PASSWORD} --silent -e "DELETE FROM Syslog.SystemEvents WHERE Syslog.SystemEvents.ReceivedAt < DATE_SUB(NOW(), INTERVAL 30 DAY);"
logger -i "Purged ${RECORDSTODELETE} records from logging database"
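To schedule it, save the script somewhere sensible and add a cron entry. The path and the 02:30 schedule below are just examples:

```
# /etc/cron.d/syslog-purge -- nightly purge of DB entries older than 30 days
30 2 * * * root /usr/local/sbin/purge-syslog-db.sh
```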

Odds and Ends

If you got this all working, you may notice that your database-backed sources show more information than your file-based sources. If you want facility and severity to display for text-based entries, you will need to set up rsyslog to include this information in the file.

First, I’m using version 3.22.1 of rsyslog from the RHEL5 repos, which is very old. I used this template to change what is logged to the file:

$template TraditionalFormatWithPRI,"%syslogfacility%.%syslogpriority%: %timegenerated% %HOSTNAME% %syslogtag%%msg:::drop-last-lf%\n"

I also added a new database template that separates the process ID from the syslog tag:

$template SQLWithProcessID,"insert into SystemEvents (Message, Facility, FromHost, Priority, DeviceReportedTime, ReceivedAt, InfoUnitID, SysLogTag, ProcessID) values ('%msg%', %syslogfacility%, '%HOSTNAME%', %syslogpriority%, '%timereported:::date-mysql%', '%timegenerated:::date-mysql%', %iut%, '%syslogtag:R,ERE,1,FIELD:([a-zA-Z\/]+)(\[[0-9]{1,5}\])*:--end%', '%syslogtag:R,ERE,1,BLANK:\[([0-9]{1,5})\]--end%')",sql

and then make a slight change to our database logging line:

*.*  :ommysql:localhost OR IP OF MYSQL SERVER,Syslog,USERNAME,PASSWORD;SQLWithProcessID

But you then have to change LogAnalyzer in order to read the extra data.
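As a quick sanity check of what those two expressions are meant to pull out of a tag like sshd[12345] (the program name and the process ID), here is a shell equivalent using the same character classes:

```shell
# split a syslog tag into program name and PID, mirroring the two EREs above
tag='sshd[12345]'
name=$(printf '%s' "$tag" | sed -E 's|^([a-zA-Z/]+)(\[[0-9]{1,5}\])?.*$|\1|')
pid=$(printf '%s' "$tag" | sed -E 's|^[a-zA-Z/]+\[([0-9]{1,5})\].*$|\1|')
echo "$name $pid"   # -> sshd 12345
```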

In $APPROOT$/classes/logstreamlineparsersyslog.class.php, the first four entries in the if block should look like this:

                // Standard syslog with process and facility/severity. TGH Apr. 4, 2013
                if ( preg_match("/(.*)\.(.*): (...)(?:.|..)([0-9]{1,2} [0-9]{1,2}:[0-9]{1,2}:[0-9]{1,2}) ([a-zA-Z0-9_\-\.]{1,256}) ([A-Za-z0-9_\-\/\.]{1,32})\[(.*?)\]:(.*?)$/", $szLine, $out ) )
                {
                        // Copy parsed properties!
                        $arrArguments[SYSLOG_FACILITY] = strtoupper($out[1]);
                        $arrArguments[SYSLOG_SEVERITY] = strtoupper($out[2]);
                        $arrArguments[SYSLOG_DATE] = GetEventTime($out[3] . " " . $out[4]);
                        $arrArguments[SYSLOG_HOST] = $out[5];
                        $arrArguments[SYSLOG_SYSLOGTAG] = $out[6];
                        $arrArguments[SYSLOG_PROCESSID] = $out[7];
                        $arrArguments[SYSLOG_MESSAGE] = $out[8];
                }
                // Standard syslog with process.
                elseif ( preg_match("/(...)(?:.|..)([0-9]{1,2} [0-9]{1,2}:[0-9]{1,2}:[0-9]{1,2}) ([a-zA-Z0-9_\-\.]{1,256}) ([A-Za-z0-9_\-\/\.]{1,32})\[(.*?)\]:(.*?)$/", $szLine, $out ) )
                {
                        // Copy parsed properties!
                        $arrArguments[SYSLOG_DATE] = GetEventTime($out[1] . " " . $out[2]);
                        $arrArguments[SYSLOG_HOST] = $out[3];
                        $arrArguments[SYSLOG_SYSLOGTAG] = $out[4];
                        $arrArguments[SYSLOG_PROCESSID] = $out[5];
                        $arrArguments[SYSLOG_MESSAGE] = $out[6];
                }
                // Sample (Syslog): syslog with facility.priority and no process - TGH Apr. 4, 2013
                else if ( preg_match("/(.*)\.(.*): (...)(?:.|..)([0-9]{1,2} [0-9]{1,2}:[0-9]{1,2}:[0-9]{1,2}) ([a-zA-Z0-9_\-\.]{1,256}) ([A-Za-z0-9_\-\/\.]{1,32}):(.*?)$/", $szLine, $out ) )
                {
                        // Copy parsed properties!
                        $arrArguments[SYSLOG_FACILITY] = strtoupper($out[1]);
                        $arrArguments[SYSLOG_SEVERITY] = strtoupper($out[2]);
                        $arrArguments[SYSLOG_DATE] = GetEventTime($out[3] . " " . $out[4]);
                        $arrArguments[SYSLOG_HOST] = $out[5];
                        $arrArguments[SYSLOG_SYSLOGTAG] = $out[6];
                        $arrArguments[SYSLOG_MESSAGE] = $out[7];
                }
                // Sample (Syslog): Mar 10 14:45:39 debandre syslogd 1.4.1#18: restart
                else if ( preg_match("/(...)(?:.|..)([0-9]{1,2} [0-9]{1,2}:[0-9]{1,2}:[0-9]{1,2}) ([a-zA-Z0-9_\-\.]{1,256}) ([A-Za-z0-9_\-\/\.]{1,32}):(.*?)$/", $szLine, $out ) )
                {
                        // Copy parsed properties!
                        $arrArguments[SYSLOG_DATE] = GetEventTime($out[1] . " " . $out[2]);
                        $arrArguments[SYSLOG_HOST] = $out[3];
                        $arrArguments[SYSLOG_SYSLOGTAG] = $out[4];
                        $arrArguments[SYSLOG_MESSAGE] = $out[5];
                }

Once you put it all together, you should have a solid log viewer for all of your servers at once, as well as access to historical logs.


Here are some tutorials that I found useful:

  1. Install docs for LogAnalyzer: http://loganalyzer.adiscon.com/doc/install.html
  2. RHEL Install and config of rsyslog: http://www.howtoforge.com/building-a-central-loghost-on-centos-and-rhel-5-with-rsyslog
  3. rsyslog and mysql: http://www.dawnstar.id.au/linux-documentation/configuring-rsyslog-log-mysql-rhel-52/

Jun 05, 2013

I recently had a problem where awstats stopped processing log entries from one of my sites while the others worked fine. While some of these items don’t make sense to check in this instance, you may want to give them a look if you are having problems.

Check a few things first:

  1. Do you have enough disk space? – Yes
  2. Is the log file getting updated? (I was shipping logs from one server to another for processing) – Yes
  3. Has the log format changed? (Take a look at historical logs if available to make sure. Also check your /etc/awstats/awstats.config.conf file to make sure it is the same here) – All good
  4. Files still getting written to your data dir? Do you have any contents in any missing files? (/var/lib/awstats/awstats{month}{year}{day}.txt) – New files are there for working sites, nothing present for the failing site.
  5. Turn on showing dropped records and what did it say? – Dropped record (method/protocol ‘rtmp’ not qualified when LogType=S):
  6. Have you made any changes to the awstats application recently? – Yep….wait what?

It seems at some point we upgraded awstats to the 7.0 branch, and had previously made customizations to the /usr/local/awstats/wwwroot/cgi-bin/awstats.pl to handle our FMS logs. We did it according to the instructions here: http://www.wowza.com/forums/showthread.php?163-Log-Analysis

When we did the upgrade, we overwrote our changes, and this one log stopped processing.

So, I made the following change to /usr/local/awstats/wwwroot/cgi-bin/awstats.pl around line 18158 to make this block:

                        ( $LogType eq 'W' || $LogType eq 'S' )
                        && (   uc($field[$pos_method]) eq 'GET'
                                || uc($field[$pos_method]) eq 'MMS'
                                || uc($field[$pos_method]) eq 'RTSP'
                                || uc($field[$pos_method]) eq 'HTTP'
                                || uc($field[$pos_method]) eq 'RTP' )

look like this:

                        ( $LogType eq 'W' || $LogType eq 'S' )
                        && (   uc($field[$pos_method]) eq 'GET'
                                || uc($field[$pos_method]) eq 'MMS'
                                || uc($field[$pos_method]) eq 'RTSP'
                                || uc($field[$pos_method]) eq 'RTMP'
                                || uc($field[$pos_method]) eq 'RTMPT'
                                || uc($field[$pos_method]) eq 'HTTP'
                                || uc($field[$pos_method]) eq 'RTP' )

I then started reprocessing old logs like this:

/usr/local/awstats/wwwroot/cgi-bin/awstats.pl -update -config=fms -LogFile=/oldlogs/access.30.log -showdropped -showcorrupted

And all is good in the world again.

Mar 31, 2012

A few weeks ago, I tossed Mint on an old machine to give it a try. I was looking for something to replace my Ubuntu 10.04, and I’m not a fan of Unity at all. Mint, with its extensions to Gnome 3, seems to give the best of the new paradigm without changing everything I like about the old.

So last night I backed up my computer and began the reinstall. I had issues even booting the Live DVD on my main computer. After some internet searching, I resolved this by:

  • Pressing e at the bootup screen
  • Adding nomodeset to the configuration line, before the --

I was then able to boot, and install the OS. This didn’t surprise me, as I’ve had to do similar tricks over the years to install OSes. Normally when everything is installed, we are good to go.

Oh boy, not this time. After the install and reboot, I could not get X to load. Using the same trick as above, I was able to get into a GUI, but video performance was lackluster as no drivers were loaded. I attempted to install the restricted drivers, but they would install and then, on reboot, I’d have nothing again. I have noticed similar problems on other computers I have with NVIDIA cards and AMD CPUs. I was determined to find a fix this time. Instead of using the drivers from the repos, or the open source nouveau driver, I decided to go straight to the source and download the driver from NVIDIA. You can’t install this from within X, and you will also need some extra packages, so before you start, make sure that:

  • You have all of the compilation tools installed: sudo apt-get install build-essential
  • And you have any header or development libraries that are needed for your kernel.

Once that is installed, you need to disable X. You cannot simply kill the X process, because it will restart itself, and you can’t really use runlevel 3 or 1: in the Debian world, runlevels 2 through 5 seem to be the same, and runlevel 1 may not run some of the system services required to install the driver. Many earlier tutorials mention running these commands to stop X for GNOME:

  • sudo /etc/init.d/gdm stop
  • sudo /etc/init.d/gdm3 stop


  • sudo /etc/init.d/kdm stop

But on mint, you need to use:

  • sudo /etc/init.d/lightdm stop

Once I ran that, I was dropped to a terminal, and I could run the NVIDIA installer as per the instructions. It disabled the nouveau driver and compiled my driver. After a reboot, all was good in the world.

I have a few thoughts on a simpler process. It is possible that the nouveau driver was conflicting with the NVIDIA driver. If I had just disabled or blacklisted the nouveau driver, the restricted drivers install wizard might have worked fine.

Here is a list of the pages I consulted to come to my resolution:

Jan 08, 2012

This is the second part of a three part series on how to embed a Google Calendar into a web page and use it to accept online bookings/appointments from other online users.

The Series:

  1. Part 1: Setting up Google Calendar
  2. Part 2: OAuth2 and Configuring Your ‘Application’ With Google <- You Are Here
  3. Part 3: A Sample Web Page For Bookings


This was by far the hardest part of the whole exercise. I had worked with version 1 of the Google API for PHP a few years back. It allowed you to code your username and password into your script, and it would handle authentication for your application. Now that we are on version 3 of the API, that method is no longer available. Instead, OAuth2 is used for authentication and token management.

I downloaded code samples and went about building my application; however, I quickly realized that the OAuth2 code samples are designed to let you interact with a visitor’s calendar. In the case of taking online bookings, I need to work with a single calendar, namely my calendar, not theirs.

After a lot of trial and error, and then reading, I realized that it could be done by relying on what OAuth2 calls a ‘Refresh Token’. This token allows you to get a new valid authentication token when the initial grant from your end user expires. Since the refresh token doesn’t expire, you can always use it to get a new authentication token, and therefore people can continue to use your application after you have initially configured it. I spent a while trying to implement it myself with no success, but then I came across this page: http://www.ericnagel.com/how-to-tips/google-affiliate-network-api.html It explains in some detail how to configure the application and token. It is written for the Google Affiliate Network API, but I made a few tweaks to make it work for Calendar. I will now take you through the steps of setting up your application with Google and generating your Refresh Token.

Create Your Application

Log into your Google Account, and then visit https://code.google.com/apis/console/. This will take you to a page that invites you to create a project with the Google API. Click on Create project….

You are now asked to activate the services you wish to use. Click the button next to Calendar API to enable the calendar. You will be redirected to a page with a Terms of Service. Read and accept this.

Now click on API Access. Here we will configure the IDs needed for your application to authenticate with Google. Click on Create an OAuth2 client ID…. You will be offered to create Branding Information. You should add your project/product name. The rest won’t be necessary as you will not be asking users directly for access to their resources, but you can complete it if you like.

Then click Next. Here you will want to select Installed application. Click Create client ID.

You will be taken back to the API Access screen, with your new Client ID and Client secret. You will need this information to generate your Refresh Token, and to configure your application.

This page will also have your API key for ‘Simple API Access’. You will also need this API Key for your final calendar application.

Generate Your Refresh Token

Now that you have your application information, it is time to generate your refresh token. I’ve modified the script available from http://www.ericnagel.com/how-to-tips/google-affiliate-network-api.html to just get us our refresh token for our calendar application. Here is the code for the script. Download this and save it as oauth-setup.php:

<?php
$cScope         =   'https://www.googleapis.com/auth/calendar';
$cClientID      =   '';
$cClientSecret  =   '';
$cRedirectURI   =   'urn:ietf:wg:oauth:2.0:oob';
$cAuthCode      =   '';
$cRefreshToken  =   '';

if (empty($cAuthCode)) {
    $rsParams = array(
                        'response_type'   => 'code',
                        'client_id'       => $cClientID,
                        'redirect_uri'    => $cRedirectURI,
                        'access_type'     => 'offline',
                        'scope'           => $cScope,
                        'approval_prompt' => 'force'
    );

    $cOauthURL = 'https://accounts.google.com/o/oauth2/auth?' . http_build_query($rsParams);
    echo("Go to\n" . $cOauthURL . "\nand enter the given value into this script under \$cAuthCode\n");
} // ends if (empty($cAuthCode))
elseif (empty($cRefreshToken)) {
    $cTokenURL = 'https://accounts.google.com/o/oauth2/token';
    $rsPostData = array(
                        'code'          =>   $cAuthCode,
                        'client_id'     =>   $cClientID,
                        'client_secret' =>   $cClientSecret,
                        'redirect_uri'  =>   $cRedirectURI,
                        'grant_type'    =>   'authorization_code',
    );

    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, $cTokenURL);
    curl_setopt($ch, CURLOPT_POST, 1);
    curl_setopt($ch, CURLOPT_POSTFIELDS, $rsPostData);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    $cTokenReturn = curl_exec($ch);
    $oToken = json_decode($cTokenReturn);
    echo("Here is your Refresh Token for your application.  Do not lose it!\n\n");
    echo("Refresh Token = '" . $oToken->refresh_token . "';\n");
} // ends elseif (empty($cRefreshToken))

Before running this script, you will need to enter your Client ID ($cClientID) and Client Secret ($cClientSecret) as we found on the API page with Google. Once these values are added, run this script from the command line like this: php oauth-setup.php. You should see output like this:

thomas@thomas-desktop:~/code$ php oauth-setup.php 
Go to
and enter the given value into this script under $cAuthCode

Visit the website, grant permission to access your resources, and then copy the code on this page. This is your auth code, and is normally good for 3600 seconds or so.

Enter this code into the oauth-setup.php script in the $cAuthCode variable. Then run the script again: php oauth-setup.php. You should see output like this:

thomas@thomas-desktop:~/code$ php oauth-setup.php 
Here is your Refresh Token for your application.  Do not lose it!

Refresh Token = '#####################################';

Now, copy down the Refresh Token and save it for later. You will need it to make subsequent requests to Google for a valid access token for each transaction.

Stay tuned for Part 3 of the tutorial, which will use the above information to make calendar requests to Google, and allow us to create a web application that uses Google Calendar as a backend for a scheduling application.

Aug 09, 2011

Filezilla is a great ftp/sftp/… tool, but it is missing one feature that would make it more useful: a file diff ability. This lets you see the remote file and the local file side by side to see what is different between them. This is an often-requested feature in the Filezilla forums, and until it is implemented officially, you can use this workaround to get diffs in Filezilla.

This was done running Filezilla 3.3.1 on Ubuntu 10.04.

  1. First, install Meld (haven’t tried it with any other diff viewer yet)
  2. Then go to Edit -> Settings -> File editing
  3. Change to ‘Use custom editor:’ and enter: /usr/bin/meld /home/{yourusername}/filezillafake.txt
  4. Click OK.
  5. Create a file in your home directory called filezillafake.txt
  6. You may also need to select “Always use default editor”. Optionally, you can go one menu below to “Filetype associations” and add the command only for the file types you want, so it isn’t used for everything.

Now, when you View/Edit a file, it will open in meld with your fake file on the left, and the remote file on the right. Then drag the file from your filezilla window into the ‘Browse’ area for the fake file in the Meld Window. It will load up and show you the diff.

You should be able to edit the file here and save it and Filezilla should prompt for an upload.

This works because Meld supports drag and drop, and also inserting two file names at the command prompt. Any application for any platform that does that should be supported by this method. The only downside is that Meld is now your default editor for files, and you may not like that if you do a lot of remote editing.

Jan 27, 2011

Recently, I needed the ability to get a readout of the average bandwidth in use on a server at a given moment. As many of you probably know, bandwidth is tough to measure for an instant. I could not find a tool or command that would give me the data I wanted, so I decided to write one.

I wrote a Perl script that compares the output of one invocation of ifconfig with a later invocation, then extracts and prints the total transferred along with the average bps.

Here is an example of the printout:

thomas@thomas-desktop:~$ perl bandwidth-checker.pl -p -i=eth0 -t=5
Monitored traffic on interface eth0 for 5 seconds:
Incoming: 17.7109375 kbit (2.2138671875 KB) Average: 3.5421875 kbps (0.4427734375 KBps)
Outgoing: 2.109375 kbit (0.263671875 KB) Average: 0.421875 kbps (0.052734375 KBps)
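The core idea can be sketched in a few lines of shell. This version reads the kernel’s byte counters from /sys rather than parsing ifconfig output the way the script does (an assumption for brevity; the interface and duration default to lo and 2 seconds here):

```shell
# sample the interface's RX byte counter twice and report the average rate
IFACE="${IFACE:-lo}"
T="${T:-2}"
rx1=$(cat "/sys/class/net/$IFACE/statistics/rx_bytes")
sleep "$T"
rx2=$(cat "/sys/class/net/$IFACE/statistics/rx_bytes")
awk -v a="$rx1" -v b="$rx2" -v t="$T" \
    'BEGIN { printf "Incoming average: %.4f kbps\n", (b - a) * 8 / 1000 / t }'
```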

I wrote a little page on the script, and have the source there as well. Hopefully it will help someone else out there: http://wiki.cornempire.net/scripts/bandwidth

If you have any use for it, or find any problems with the script, please comment below.

Dec 04, 2010

I was trying to use the Knowledgetree automated installer on a fresh install of Ubuntu 10.04.01 64-bit server. While trying to install, I received an error about swftools, which I was able to install thanks to the directions here (in post 2):

However, I then received an error that read:
Failed to fetch http://repos.zend.com/zend-server/deb/dists/server/non-free/binary-amd64/Packages.bz2 Hash Sum mismatch
followed by a few Zend packages that couldn’t be installed.

After much purging and googling, I found that this worked well:

  1. Go to /var/lib/apt/lists/partial/ and delete the files that failed to download. (If you are curious, I looked at the file that was supposed to be correct in the partial directory, and noticed it was still encoded. I suspect an incorrectly expanded archive file was causing the problem. Some people reported that simply deleting these files fixed the problem, but that did not help for me.)
  2. Go to the root URL of the repo that had the hash problem. For me it was http://repos.zend.com/zend-server/deb/dists/server/non-free/binary-amd64/
    • Here I noticed that there were a number of files containing the package index. The file it failed on was the bz2 file; however, there is a plain-text one available called Packages.
    • I clicked on Packages and copied all of the text.
    • On the server, I created a new file in the /var/lib/apt/lists/ directory with the same name as the file that failed/was encoded in the partial directory (sudo vim repos.zend.com_zend-server_deb_dists_server_non-free_binary-amd64_Packages) and pasted all of the text in there.
  3. Then I ran apt-get update and it ran without errors.
  4. Finally, I ran the Knowledgetree install again, and it downloaded all of the required packages fine.
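For reference, the file name in step 2 is not arbitrary: apt derives it from the index URL by stripping the scheme and replacing every slash with an underscore. The little helper below (my own, purely illustrative) shows the mapping, and the commented wget/apt-get lines are the hands-off version of the copy-and-paste step (they need root and network access):

```shell
# Illustrative helper: apt's list-file name is the index URL with the
# scheme stripped and every '/' turned into '_'.
list_name() {
  echo "$1" | sed -E -e 's|^https?://||' -e 's|/|_|g'
}

url=http://repos.zend.com/zend-server/deb/dists/server/non-free/binary-amd64/Packages
list_name "$url"

# On the affected box (needs root), the manual copy/paste becomes:
#   sudo wget -O "/var/lib/apt/lists/$(list_name "$url")" "$url"
#   sudo apt-get update
```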

I imagine this would work for any repo having a similar Hash Sum mismatch problem, though of course the URL you visit and the file name you use will be different.

Oct 312010

Hello Folks,
Long time and no posting. I’ve been quite busy. So here are some updates:



I’m currently completing a Master’s Certificate in Project Management. It is a four-month course that prepares you to be a professional project manager. It has been an interesting experience, and the skills I’ve learned so far have already been put to good use both at work and with my business on the side.

Project Work

My company has been awarded a contract to develop a Document Management System, as well as install servers and software to support this system. Work will be starting next week, and I will have very little free time to work on anything else during this. 🙁 But, it will be a great experience and our first big project!

Server Hosting

In the last month or so, the server I had with my VPS provider, http://www.webserve.ca (BBB Rating: C – 121 complaints), was constantly attacked and hacked by outsiders. No matter what lengths I went to, somehow someone was back on the server sending mail and doing bad things. It seems the company does very little intrusion detection, and given the way the server was compromised, I suspect the virtual hosts were actually the source. After a 12-hour downtime with no answer from their 24/7 support, I decided to return to my former web host (http://www.canadianwebhosting.com, BBB Rating: A+ – 0 complaints). Although more expensive, their support has been fantastic, and the quality and value of the service are good.

Software Plans

That last piece is a great segue into my software plans. During my migration, I (obviously) had to move my database. That experience has pushed me to develop a script that dumps the commands needed to recreate all of your users and grants when you have to migrate your MySQL database. Coupled with a mysqldump, this should let you quickly copy an entire database server! Hopefully I will find some time in the next few months to write the code.
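The script doesn’t exist yet, but the core of the idea can be sketched in shell. The helper name is mine, and the mysql pipeline in the comment assumes a client with read access to mysql.user:

```shell
# Hypothetical sketch: build the SHOW GRANTS statement for one account.
show_grants_stmt() {
  # args: user host
  printf "SHOW GRANTS FOR '%s'@'%s';\n" "$1" "$2"
}

show_grants_stmt webapp localhost

# The planned script would feed every account through this, e.g.:
#   mysql -N -e "SELECT user, host FROM mysql.user" |
#     while read u h; do
#       mysql -N -e "$(show_grants_stmt "$u" "$h")" | sed 's/$/;/'
#     done > grants.sql
```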

I’m also planning to write a simple script that will aggregate the logwatch reports from our 15 servers into a single daily report. Instead of the default behaviour, which is to email each report, the logwatch reports will be scp’d to a central server, and then a script will go over all of them and put a single summary together for all of the servers. I’ll make this available once it is written.
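The aggregator is also just a plan at this point, but the core loop might look something like this. The directory layout and file names are my own assumptions, with two throwaway sample reports standing in for the scp’d ones:

```shell
# Sketch of the planned aggregator: one report file per host,
# concatenated into a single daily summary with per-host headers.
dir=$(mktemp -d)

# Stand-ins for the reports scp'd in from each server:
echo "sshd: 3 failed logins" > "$dir/web01.txt"
echo "disk: /var at 80%"     > "$dir/db01.txt"

for report in "$dir"/*.txt; do
  host=$(basename "$report" .txt)
  printf '===== %s =====\n' "$host"
  cat "$report"
done > "$dir/daily-summary.txt"

cat "$dir/daily-summary.txt"
```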


I’ve been playing a bit of Civilization 5 lately. It is an OK game, but I find it is still very rough around the edges: lots of bugs, incomplete features, and terrible AI. I do really enjoy the new combat engine, though.

I’ve also been playing a bit of Urban Terror, a free, realistic FPS available for Linux.


I’ve also run across a few nice software packages that I’d like to recommend:
Adminer: A great PHP script that you can upload to your site to gain phpMyAdmin-style access to your databases. It supports a number of database types and lets you execute queries, create databases, back up databases, and just browse your tables. It is a single file, so there is no installation, and I intend to use it for some light DB admin work.

Firewall Builder: I was looking for a program to help me manage iptables on Linux. I’ve run Firestarter in the past, but that is a GUI app, and I wanted to configure the firewall for a server with no GUI. I ran across Firewall Builder and gave it a try. It is dual-licensed: commercial for Windows and Mac users, and free for Linux users. Being a Linux user, I jumped at the chance to try it.

The interface is a bit daunting for the first-time user, so I strongly recommend you watch the intro video (and hopefully you already know what a firewall does ;)). This program lets you use drag and drop to configure rules for your firewall. It creates the rules in a custom format, then compiles them into a number of different formats for various firewall programs and devices. It even goes a step further and lets you install the firewall on the remote servers. A very nice touch.

I’ve only tried iptables configuration so far, but it ran very well. I intend to use it for all of our servers and put the configuration files under revision control, so I can manage all versions of our firewall configs from a single box.

That’s all for now.