Sunday 5 October 2014

AWS Tip of the day: Signing requests using IAM instance roles

If you are averse to having the AWS SDKs conveniently manage IAM instance role credentials and service requests (including Signature V4 signing) for you, then it is quite important to know that the IAM role session token must be included in the request headers. This post describes the steps needed to sign an API request using an instance IAM role. It is assumed that you already have an instance launched with a role that has permission to perform the requested API action. For this example the role is named ec2-ro and, as the name implies, it has read-only permissions on the EC2 APIs.

As a good starting point we will use the first GET example on this page as a working base and modify it to retrieve the instance role credentials and add the session token to the request as follows.

Step 1: Retrieve the instance role credentials

As per the AWS documentation, instance credentials can be retrieved from the instance metadata by performing a GET against the following URL, where <role-name> is the name of the instance role containing the relevant permissions:

http://169.254.169.254/latest/meta-data/iam/security-credentials/<role-name>

In our example with the role named ec2-ro the result of the request will be similar to the result below where the body of the token has been replaced with [...] for brevity:

$ curl http://169.254.169.254/latest/meta-data/iam/security-credentials/ec2-ro
{
  "Code" : "Success",
  "LastUpdated" : "2014-10-05T07:25:22Z",
  "Type" : "AWS-HMAC",
  "AccessKeyId" : "ASIXXXXXXXXXXXX",
  "SecretAccessKey" : "XXXXXXXXXXXXXXXXXXXX",
  "Token" : "AXXXXXX//////////[...]==",
  "Expiration" : "2014-10-05T13:30:35Z"
}

Achieving the same in Python is pretty simple:

import requests

creds = requests.get('http://169.254.169.254/latest/meta-data/iam/security-credentials/ec2-ro')
access_key = creds.json()['AccessKeyId']
secret_key = creds.json()['SecretAccessKey']
token = creds.json()['Token']

These can then be used to replace the environment-based access key lines in the example:

access_key = os.environ.get('AWS_ACCESS_KEY_ID')
secret_key = os.environ.get('AWS_SECRET_ACCESS_KEY')

Unfortunately we are not finished yet as running the new code now results in a 401 response with the following message:

"AWS was not able to validate the provided access credentials"

The reason for the error above is that the instance credentials require a session token to be considered valid, which brings us to the next step.

Step 2: Add the session token to the request headers

Fixing the invalid credential error above is relatively trivial and simply involves adding the session token to the request headers with a header name of x-amz-security-token. So replacing this line:

headers = {'x-amz-date':amzdate, 'Authorization':authorization_header}

With the following line:

headers = {'x-amz-date':amzdate, 'Authorization':authorization_header, 'x-amz-security-token':token}

Allows the code to execute successfully. One thing to be aware of is that the instance credentials are rotated fairly frequently and the credentials will need to be refreshed after the date and time in the instance metadata Expiration field.
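A minimal sketch of handling that rotation, assuming the ec2-ro role from above (needs_refresh, get_credentials and the five-minute margin are illustrative choices, not part of the original example):

```python
from datetime import datetime, timedelta

METADATA_URL = ('http://169.254.169.254/latest/meta-data/'
                'iam/security-credentials/ec2-ro')

def needs_refresh(expiration, now=None, margin_minutes=5):
    """True if the credentials expire within margin_minutes of now (UTC)."""
    now = now or datetime.utcnow()
    expires = datetime.strptime(expiration, '%Y-%m-%dT%H:%M:%SZ')
    return expires - now < timedelta(minutes=margin_minutes)

def get_credentials():
    """Fetch fresh role credentials from the instance metadata service."""
    import requests
    creds = requests.get(METADATA_URL).json()
    return (creds['AccessKeyId'], creds['SecretAccessKey'],
            creds['Token'], creds['Expiration'])
```

Before signing each request, check the stored Expiration value with needs_refresh and re-fetch the credentials when it returns True.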

Sunday 14 September 2014

AWS script of the day: Core count

In case you are curious, or just want to know how close you are to being a supercomputer, here is a quick script to count the number of cores an AWS account is currently running:

Make sure you have boto set up first.
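The embedded script itself can be sketched roughly as follows, assuming the boto 2 EC2 API and an approximate vCPU table (the VCPUS mapping below is illustrative; extend it for the instance types you actually run):

```python
# Rough vCPU counts for some 2014-era instance types; extend as needed.
VCPUS = {
    't1.micro': 1, 'm1.small': 1, 'm1.medium': 1, 'm1.large': 2,
    'm3.medium': 1, 'm3.large': 2, 'm3.xlarge': 4, 'm3.2xlarge': 8,
    'c3.large': 2, 'c3.xlarge': 4, 'c3.2xlarge': 8,
}

def count_cores(instance_types):
    """Sum vCPUs for a list of instance type names (unknown types count 0)."""
    return sum(VCPUS.get(t, 0) for t in instance_types)

def running_cores(region='us-east-1'):
    """Count the cores of all running instances in one region."""
    import boto.ec2
    conn = boto.ec2.connect_to_region(region)
    instances = conn.get_only_instances(
        filters={'instance-state-name': 'running'})
    return count_cores(i.instance_type for i in instances)
```

Call running_cores once per region (or loop over boto.ec2.regions()) and sum the results for the whole account.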

Saturday 13 September 2014

AWS script of the day: Cascade delete of security groups

A common bugbear of AWS security groups is having to delete all references to a security group before deleting the group itself. Here is a quick boto script to simplify this process; you will need to have configured boto as per these instructions. After that, 'python sg_cascade_delete -h' will give you:

usage: [-h] [--region REGION] [--quick] [--force]
                            group_ids [group_ids ...]

Remove all references to a security group and then delete it

positional arguments:
  group_ids        The ID of the security group to delete, eg. sg-xxxxxxx

optional arguments:
  -h, --help       show this help message and exit
  --region REGION  AWS region name the security group is in, default: us-
  --quick          Skip checks for whether or not the group is used by
                   RDS/ElastiCache. Faster but may cause error on delete if
                   the group is referenced.
  --force          Force delete without requiring confirmation
  --quiet          Do not print references or success message

An example of usage would be:
python sg_cascade_delete --region eu-west-1 sg-1231234

This will find all references to the sg-1231234 security group in the region and display them before asking for confirmation to delete the group. Note that you will be prevented from deleting any groups used in ElastiCache or RDS security groups as doing so tends to break things in unexpected ways.

If you don't want to have to confirm the deletion (for a large number of groups, for example) you can specify the --force option; this will skip the confirmation question and simply delete the groups after displaying their references. For example:
python sg_cascade_delete --force --region eu-west-1 sg-1231234 sg-33221133

If you prefer your deletion silent then the --quiet option is for you; specifying it will prevent any messages from being printed (other than the confirmation question and any errors that occur). For no interaction at all, combine it with --force to magically delete the groups without a sound. A non-zero process exit code indicates an error.

If you have a large number of ElastiCache clusters and RDS instances you can skip the reference checks by specifying the --quick option. This may result in errors (in VPC) if the group is actually referenced when you try to delete it, and it causes some strange behaviour in EC2 Classic: you are actually able to delete the group, leaving a dangling authorisation on the ElastiCache/RDS security group. As such, use this option with care, or only when you are truly certain that the security group is not used anywhere but in EC2.

As this code is mutating (it changes your stuff) it would be wise to run it in a test environment before making changes in production. In other words: use at your own risk.
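The core of the script's approach can be sketched without touching AWS at all: collect every rule whose grants reference the target group, show them, then revoke and delete. The find_references helper and dict shapes below are illustrative only, not the script's actual API; the real script revokes each reference via boto's revoke_security_group before calling delete_security_group.

```python
def find_references(groups, target_id):
    """Return (group_id, rule) pairs whose grants reference target_id.

    groups maps a security group id to its rules; each rule is a dict
    like {'proto': 'tcp', 'from': 80, 'to': 80,
          'grants': ['sg-aaa', '10.0.0.0/8']}.
    """
    refs = []
    for gid, rules in groups.items():
        for rule in rules:
            if target_id in rule['grants']:
                refs.append((gid, rule))
    return refs
```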

Wednesday 3 September 2014

AWS Tip: Upgrade your PHP 1.x SDK

If you are still using version 1.6.2 of the AWS PHP SDK you can expect some strange issues: for example, it will not list T2 instance types. Save yourself some hassle and upgrade to the latest version with the help of this migration guide.

Saturday 30 August 2014

AWS Tip: elmo.ec2sg preventing security group deletion

If you are trying to delete an EC2 security group and get an error something like:

sg-12345678: Group 111222333444:My SG Name is used by groups: 123412341234:elmo.ec2sg.887766

Save yourself some time and check that your ElastiCache and RDS security groups don't reference (by name) the group you are trying to delete. See this forum thread.

Saturday 28 June 2014

Installing the AWS .NET SDK on Mono/Linux using NuGet

Following a previous post about compiling the AWS .NET SDK on Mono/Linux, David Fevre asked whether it was possible to use NuGet to install the SDK. It turns out that this is actually quite a bit simpler if you are running Mono 3.2+. Below are the steps required to get the AWS SDK working on Mono/Linux (Ubuntu 14.04).

1. Install mono and associated tools, mozroots downloads root certificates to enable SSL via mono:
sudo apt-get update
sudo apt-get install mono-complete git wget
mozroots --import --sync

2. Create a working directory and change to it:
mkdir mono-aws
cd mono-aws

3. Download the command line version of NuGet and update it to make sure it is version 2.8+:
wget https://nuget.org/nuget.exe
mono nuget.exe update -self

4. Install the AWSSDK via NuGet:
mono nuget.exe install AWSSDK

5. Create a symbolic link to the lib folder (note that the version number might be different from the example command):
ln -s AWSSDK.<version>/lib lib

6. Create a command line program to test with, for example in s3.cs:
using System;
using Amazon.S3;
using Amazon.S3.Model;

class Upload
{
  public static void Main(string[] args)
  {
    // Create a client
    AmazonS3Client client = new AmazonS3Client(Amazon.RegionEndpoint.USWest2);

    // Create a PutObject request
    PutObjectRequest request = new PutObjectRequest
    {
      BucketName = "SampleBucket",
      Key = "Item1",
      ContentBody = "This is sample content..."
    };

    // Put object
    PutObjectResponse response = client.PutObject(request);
  }
}

7. Compile your program:
mcs s3.cs -r:./lib/AWSSDK.dll

8. Set the Mono path
export MONO_PATH=`pwd`/lib:.

9. Create an application configuration (must be named to match the compiled program) for example in s3.exe.config:
<?xml version="1.0"?>
<configuration>
  <appSettings>
    <add key="AWSAccessKey" value="AKXXXXXXXXXXXXXX"/>
    <add key="AWSSecretKey" value="XXXXXXXXXXXXXXXXXXXXXXXX"/>
  </appSettings>
</configuration>

10. Run your program and verify that the object is created in the bucket:
mono s3.exe

Friday 27 June 2014

Code snippet: Adding a security group egress rule in boto

Quick example of adding an egress rule to an existing security group (it turns out cidr_ip is actually required). This assumes you have boto installed and your AWS credentials configured:

import boto

c = boto.connect_ec2()
# example CIDR only; restrict the destination range as appropriate
c.authorize_security_group_egress('sg-xxxxxxx', 'tcp', from_port=1024, to_port=1024, cidr_ip='0.0.0.0/0')

Saturday 17 May 2014

JBoss 6.x AS and log4j

This post is the result of spending quite a bit of time trying to get a custom log4j (1.2) appender to work on JBoss 6.x. The purpose of this article is not to explain how to get log4j working in a WAR/JAR/EAR; if that is what you are looking for, rather have a look here or here. If you are looking to add a log4j appender to the default JBoss server logging, then carry on reading.

The first thing to be aware of is that log4j in JBoss 6.0 is broken, as is also the case in JBoss 6.1. The steps below are required to fix this and are based on clean JBoss 6.x AS installations.

1. Add and configure a log4j appender

This is done by editing the jboss-logging.xml file located in the server deployment directory (jboss-6.x.0.Final/server/default/deploy)

First add an appender named "LOG4J" (or whatever you would like to call it):

   <log4j-appender name="LOG4J" class="org.apache.log4j.FileAppender">
      <properties>
         <property name="file">${jboss.server.log.dir}/log4j.log</property>
         <property name="append">true</property>
      </properties>
      <formatter>
         <pattern-formatter pattern="%d %-5p [%c] (%t) %m%n"/>
      </formatter>
   </log4j-appender>

This configuration is for a standard log4j file appender writing to a file named "log4j.log".

Next you need to add your new appender to the root-logger section in the same file:

   <root-logger>
      <!-- Set the root logger priority via a system property, with a default value. -->
      <level name="${jboss.server.log.threshold:INFO}"/>
      <handlers>
         <handler-ref name="CONSOLE"/>
         <handler-ref name="FILE"/>
         <handler-ref name="LOG4J"/>
      </handlers>
   </root-logger>

If you were to start JBoss now it would be reasonable to expect a new log file named "log4j.log" to have been created in the server/default/log/ directory; unfortunately this is not the case. JBoss 6.0 gives you no errors at all, while JBoss 6.1 spews the only slightly more useful message below for each line it should be writing to the log file:

ERROR [STDERR] log4j:ERROR No output stream or file set for the appender named [null].

2. Update log4j.jar packaged with JBoss installation

Download the latest version of the log4j 1.2 package from Apache log4j and replace the JBoss version in jboss-6.x.0/common/lib. Using 1.2.17:

cp apache-log4j-1.2.17/log4j-1.2.17.jar jboss-6.x.0.Final/common/lib/log4j.jar

3. Update the jboss-logging packages

Download the patch (zip) from the bug report and unzip it. There is unfortunately no directory structure so you will need to manually copy the replacement JAR files as follows:

cp jboss-logmanager-log4j.jar jboss-6.x.0.Final/common/lib
cp jboss-logmanager.jar jboss-6.x.0.Final/lib
cp logging-service-metadata.jar jboss-6.x.0.Final/server/default/deployers/jboss-logging.deployer

4. Start JBoss and enjoy your new logging framework

Monday 7 April 2014

AWS .NET SDK on Mono/Linux

A quick guide to getting an AWS .NET console program running on Linux. Starting with the hard part, compiling the SDK from source.

Update: NuGet can also be used if you are running Mono 3.2+, see this post

1. Install mono and associated tools (on Ubuntu/Debian), mozroots downloads root certificates to enable SSL via mono:
sudo apt-get update
sudo apt-get install mono-complete git
mozroots --import --sync

2. Create a working directory and change to it:
mkdir mono-aws
mkdir mono-aws/lib
cd mono-aws

3. Retrieve the SDK source:
git clone https://github.com/aws/aws-sdk-net.git

4. Fix some file names (case is important to real operating systems). Update: this should be fixed in the next release of the SDK in which case it can be skipped:
cd aws-sdk-net/AWSSDK_DotNet35/
mv Amazon.S3/Model/PUtACLResponse.cs Amazon.S3/Model/PutACLResponse.cs
mv Amazon.S3/IAmazonS3.extensions.cs Amazon.S3/IAmazonS3.Extensions.cs

5. Compile core library, the command line parameters are to default the compile to use .NET 3.5 and not warn on anything other than compile errors (still in aws-sdk-net/AWSSDK_DotNet35):
xbuild /p:TargetFrameworkProfile="" /p:WarningLevel=0

6. Compile extensions (still in aws-sdk-net/AWSSDK_DotNet35):
cd ../AWS.Extensions/
xbuild /p:TargetFrameworkProfile="" /p:WarningLevel=0

7. Install libraries (from aws-sdk-net/AWS.Extensions):
cd ../..
cp aws-sdk-net/AWS.Extensions/SessionProvider/bin/Debug\ v3.5/AWS.SessionProvider.dll lib
cp aws-sdk-net/AWS.Extensions/TraceListener/bin/Debug\ v3.5/AWS.TraceListener.dll lib
cp aws-sdk-net/AWSSDK_DotNet35/bin/Debug/AWSSDK.dll lib

8. Create a command line program for example in s3.cs:
using System;
using Amazon.S3;
using Amazon.S3.Model;

class Upload
{
  public static void Main(string[] args)
  {
    // Create a client
    AmazonS3Client client = new AmazonS3Client(Amazon.RegionEndpoint.USWest2);

    // Create a PutObject request
    PutObjectRequest request = new PutObjectRequest
    {
      BucketName = "SampleBucket",
      Key = "Item1",
      ContentBody = "This is sample content..."
    };

    // Put object
    PutObjectResponse response = client.PutObject(request);
  }
}

9. Compile your program:
mcs s3.cs -r:./lib/AWSSDK.dll -r:./lib/AWS.TraceListener.dll -r:./lib/AWS.SessionProvider.dll

10. Set the Mono path
export MONO_PATH=`pwd`/lib:.

11. Create an application configuration (must be named to match the compiled program) for example in s3.exe.config:
<?xml version="1.0"?>
<configuration>
    <appSettings>
        <add key="AWSAccessKey" value="AKXXXXXXXXXXXXXX"/>
        <add key="AWSSecretKey" value="XXXXXXXXXXXXXXXXXXXXXXXX"/>
    </appSettings>
</configuration>

12. Run your program:
mono s3.exe

Of course there is also an easier approach that does not require the SDK to be compiled: just copy the DLLs from a Windows machine that has the SDK installed (typically into C:\Program Files (x86)\AWS SDK for .NET\bin\Net35). This allows you to skip steps 3 - 7.

Wednesday 26 March 2014

Data formatting with vim regular expressions

Using the right tool for a job can save massive amounts of time. This is a quick post to demonstrate a practical application of the power of regular expressions in vim. The problem is transforming the output from a SQL query into a wiki ready table format.

Starting with the output from the query (test data for demonstration purposes):

 category_name                  total  grated  mashed  boiled
  Artichoke            107   67     9    31
  Pepper                65   38     2    25
  Carrot                 46   32  NULL    14
  Lettuce                24   24  NULL  NULL
  Spinach               16    8     1     7
  Zuchini                 4    4  NULL  NULL

Paste this data, excluding the heading line, into vim (or pipe it to an output file and then edit the file). First, let's get rid of those ugly NULL values:

:%s/NULL/0/g

This searches for the character sequence "NULL" and replaces it with 0 globally. The data should now look like this:

  Artichoke            107   67     9    31
  Pepper                65   38     2    25
  Carrot                 46   32  0    14
  Lettuce                24   24  0  0
  Spinach               16    8     1     7
  Zuchini                 4    4  0  0

Next, let's format the lines and add separators:

:%s/^ /|-^M| /g

This one is a bit trickier: the ^M is actually a control character indicating a new line and is created by pressing Ctrl-V and then Enter. The command searches for lines starting with white space (^ ) and inserts |- and a new line before prefixing the line with a |:

| Artichoke            107   67     9    31
| Pepper                65   38     2    25
| Carrot                 46   32  0    14
| Lettuce                24   24  0  0
| Spinach               16    8     1     7
| Zuchini                 4    4  0  0

Finally we want to split each number onto a separate line. This is where the power of regular expressions really shines:

:%s/ \+\(\d\+\)/^M| \1/g

The regex is actually quite simple: it says find all instances of one or more spaces ( \+) followed by one or more digits (\d\+), and save the digits for substitution (by putting them in brackets \( and \)). Then replace the pattern with a newline, a pipe and the saved digits (\1). And voilà, columns to rows:

| Artichoke
| 107
| 67
| 9
| 31
| Pepper
| 65
| 38
| 2
| 25
| Carrot
| 46
| 32
| 0
| 14
| Lettuce
| 24
| 24
| 0
| 0
| Spinach
| 16
| 8
| 1
| 7
| Zuchini
| 4
| 4
| 0
| 0

Pretty cool.
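For completeness, the same pipeline can be sketched with Python's re module (wikify is a hypothetical helper; note that the second pattern collapses all leading spaces in one go, a slight deviation from the single-space vim version):

```python
import re

def wikify(text):
    """Apply the three vim substitutions with Python's re module."""
    text = text.replace('NULL', '0')            # :%s/NULL/0/g
    # roughly :%s/^ /|-^M| /g, but eating all leading spaces at once
    text = re.sub(r'(?m)^ +', '|-\n| ', text)
    text = re.sub(r' +(\d+)', r'\n| \1', text)  # :%s/ \+\(\d\+\)/^M| \1/g
    return text
```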

Wednesday 19 February 2014

Curl with Kerberos authentication

Quick note on retrieving content using curl from Kerberos-authenticated sites (so that I don't have to reread the man page every 6 months to figure it out). First, request a forwardable Kerberos ticket:

$ kinit -f

You may need to enter your password to authenticate yourself. Next tell curl to retrieve the URL using GSS-Negotiate authentication (--negotiate) and no username or password (-u : ) as they are not used. Note that curl needs to have been compiled with support for this, check that you see GSS-Negotiate in the features list when doing a curl -V.

$ curl "https://your-secure-url/path/query?param=1&value=2" -u : --negotiate

This will return the requested page and print it to the console. Doing the same thing in Python (with pycurl: 'pip install pycurl'):

import pycurl

curl = pycurl.Curl()
curl.setopt(pycurl.HTTPAUTH, pycurl.HTTPAUTH_GSSNEGOTIATE)
curl.setopt(pycurl.USERPWD, ':')
curl.setopt(pycurl.URL, 'https://your-secure-url/path/query?param=1&value=2')
curl.perform()

And finally in PHP:

$ch = curl_init();

curl_setopt($ch, CURLOPT_URL, "https://your-secure-url/path/query?param=1&value=2");
curl_setopt($ch, CURLOPT_HTTPAUTH, CURLAUTH_GSSNEGOTIATE);
curl_setopt($ch, CURLOPT_USERPWD, ":");
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);

$page = curl_exec($ch);

Wednesday 8 January 2014

Python gotcha: bound method

A quick Python observation that will hopefully save somebody some time. Given the following Python code:

class Example:
    X = 5

    def __init__(self, value):
        self.some_value = value

    def method(self, value):
        if value > Example.X:
            return 'Yup'
        return 'Nope'

    def another_method(self):
        return self.method(self.some_value)

e = Example(2)
print e.some_value

This code defines a class named Example, instantiates it and prints out one of the instance's attributes. In this case the code will print the value "2", which is logical as it was used to instantiate the object.

Extending this and trying to do:
print e.another_method

Results in output that looks something like:
<bound method Example.another_method of <__main__.Example instance at 0x7ffde0f4fd88>>

This seems a bit strange until you realise that Python is doing exactly what you are telling it to do, which is to return the method object rather than invoke the method. There are two really simple fixes:
  1. Change the call to "print e.another_method()", this causes the method to be invoked and the result returned as expected (rather obvious really)
  2. Add the @property decorator to the method definition. This allows you to access the method as a read-only property
Both approaches work but they are mutually exclusive. Changing the method definition to:

    @property
    def another_method(self):
        return self.method(self.some_value)

And then trying to call e.another_method() will result in the error:
TypeError: 'str' object is not callable