Installing and using imapsync on Ubuntu 12.04

imapsync is a popular tool for migrating from one IMAP server to another, including GMail and Google for Business. You can no longer download the source code directly from the author for free; however, the unique ‘no limit’ license allows an unofficial fork to remain available on GitHub. As of this writing the fork lags slightly behind the original, but that should not be an issue for most people.

Here is how to install it on Ubuntu.

Install Dependencies

sudo apt-get install makepasswd rcs perl-doc libmail-imapclient-perl

Clone the Git Repository

git clone git://

Don’t forget to cd into the ‘imapsync’ directory once the clone has finished.

Create a Dist Directory

You need to make a `dist` directory before building. I suspect this is necessary because Git does not track empty directories, only files:

mkdir dist

Build and Install

sudo make install

Run It

Once this is done, you can run a script like the one below, which was inspired by (and borrowed from) the blog post linked under Further Reading.
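A minimal version of such a script might look like the following; the host names, users, and password files are placeholders you will need to change for your own migration:

#!/bin/bash
# Migrate a single mailbox from the old IMAP server to Gmail.
# Reading the passwords from files keeps them out of your shell history.
imapsync \
  --host1 mail.example.com --user1 olduser \
  --passfile1 /etc/imapsync/passfile1 \
  --host2 imap.gmail.com --port2 993 --ssl2 \
  --user2 newuser@example.com \
  --passfile2 /etc/imapsync/passfile2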

Further Reading

  • This post is where I found the original version of the script above.
  • You might also find this page about migrating your email to Google Apps useful.

Creating a Puppet Master on Ubuntu

You will need two systems for this walkthrough. One will become the Puppetmaster, controlling the deployment of the other system, known as a Puppet Node. I am using Amazon AWS for these systems, but you can use any solution that offers Ubuntu systems.

We will start with the Puppetmaster…


The Puppetmaster

sudo apt-get install puppetmaster

The Node

sudo apt-get install puppet


You will need to be logged into both the master and the node for this next step.

The next command will make the node connect to the master and then wait 60 seconds for the master to sign its certificate. Timing is important: you need to complete the steps on both servers within that window.

On the node, run the following command:

sudo puppet agent --server <puppetmaster hostname> --waitforcert 60 --test

On the master, run the following to show the list of certificate requests. You should see your node listed:

sudo puppet cert list

Now sign the request:

sudo puppet cert --sign <node hostname>

Timeouts? Trouble?

If you experience a timeout during the certificate-signing process, or run into other trouble, try deleting the /var/lib/puppet/ssl directory on both systems, restart the puppetmaster daemon with the command below, and then repeat the signing steps above:

sudo /etc/init.d/puppetmaster restart

Start Puppet Automatically

When you install puppetmaster on Ubuntu, it is configured to start automatically on boot, but you need to configure the puppet agent on the node to do the same.

Edit /etc/default/puppet and change:

START=no

to:

START=yes
Then start puppet:

service puppet start


At this point you have a complete Puppet setup. The only thing missing is a configuration for the master to share with the node.
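As a quick sanity check, a minimal manifest on the master will get something deployed to the node. The path below is the Ubuntu default; the node name and the choice of the ntp package are just placeholders for this sketch:

# /etc/puppet/manifests/site.pp on the puppetmaster
node '<node hostname>' {
  package { 'ntp':
    ensure => installed,
  }

  service { 'ntp':
    ensure  => running,
    require => Package['ntp'],
  }
}

The next time the agent runs on the node (for example, via sudo puppet agent --test), it should install and start ntp.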

Nano Tip: Make Pressing the Tab Key Add Spaces Instead of Tabs

The Nano text editor often gets a bad rap, but in reality it is lightning quick and easy to use, with a very low learning curve. Once you have used it for a while, though, you might wonder why pressing <tab> inserts an actual tab character, when most DevOps folks and coders expect a series of spaces. Luckily, it is easy to make this change.

For a one-off run, you can start nano like this:

nano -ET4

The ‘-E’ flag makes nano convert typed tabs to spaces, and ‘-T4’ sets the tab size to four columns.

If you want to make this change permanent, just create (or edit) the ~/.nanorc file in your home directory and add these lines:

set tabstospaces
set tabsize 4

Now whenever you press tab in nano, it will automatically add 4 spaces.

For more information on this and other nano parameters, check out the man page.

Using Maven to Enable Ebean Enhancement

I’ve been using the Ebean ORM layer for a while, but I hadn’t bothered to enable its bytecode enhancement until now. When I searched for how to enable this enhancement, I found a configuration example on the Ebean homepage, but it doesn’t work, due to some problems in the plugin configuration. The demo project on GitHub has the same problems.

Luckily, I know Maven pretty well. 🙂

The example on the Avaje page has some mistakes.

  1. The <phase> tag is incorrect: ‘process-ebean-enhancement’ is not a Maven lifecycle phase, so the plugin in the example configuration won’t run. The correct phase is ‘process-classes’.
  2. The ‘classSource’ parameter is incorrect. As configured, the ebean plugin would enhance @Entity classes in your test code, rather than your production code. The correct value is ‘${project.build.outputDirectory}’.
  3. Make sure to update the package space for your project. This is ‘com.mycompany.**’ in the example below.

The complete config should be:

                    <property name="compile_classpath" refid="maven.compile.classpath" />
                    <echo message="------ Ebean enhancing @Entity classes ------" />
                    <echo message="Classpath: ${compile_classpath}" />
                    <taskdef name="ebeanEnhance"
                        classname="com.avaje.ebean.enhance.ant.AntEnhanceTask"
                        classpath="${compile_classpath}" />
                    <ebeanEnhance classSource="${project.build.outputDirectory}"
                        packages="com.mycompany.**" transformArgs="debug=1" />

This will enhance all of the @Entity classes in “src/main/java/com/mycompany” and below.

You can improve execution time by being more restrictive on the packages to examine.

Also, you may need to duplicate this config if you do need to enhance test classes. In that case, bind a second <execution> (with its own <configuration> block) to the ‘process-test-classes’ phase.
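Put together, the surrounding plugin block might look roughly like the sketch below. The AntEnhanceTask class name and the use of a <target> block are from memory rather than from the original example, so double-check them against the Ebean and maven-antrun-plugin versions you are using:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-antrun-plugin</artifactId>
    <executions>
        <execution>
            <id>ebean-enhance</id>
            <phase>process-classes</phase>
            <goals>
                <goal>run</goal>
            </goals>
            <configuration>
                <target>
                    <property name="compile_classpath" refid="maven.compile.classpath" />
                    <taskdef name="ebeanEnhance"
                        classname="com.avaje.ebean.enhance.ant.AntEnhanceTask"
                        classpath="${compile_classpath}" />
                    <ebeanEnhance classSource="${project.build.outputDirectory}"
                        packages="com.mycompany.**" transformArgs="debug=1" />
                </target>
            </configuration>
        </execution>
    </executions>
</plugin>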

Development Environments

I’ve been thinking about writing this post for a while, but was too lazy to do it. Now I think it’s time.

When you work on a project, you need an environment in which to test what you are doing, especially if it is a big enterprise system with a complicated deployment process and many dependencies on other systems (a system can usually both depend on and be depended on by other systems).

I’ve faced the problem of testing my projects several times: you finish your task, but you don’t know how to check that it works. Sometimes it’s not possible to run the system on your local computer, and even when it is, a system that works on your computer doesn’t necessarily work in production. So I want to share my thoughts on how the process can be organized. To be clear, this is not my idea; I just want to share an approach that I think works well. So let’s get started.

In big enterprise systems there are usually many roles working on a project: business analysts, system analysts, QA engineers, developers, and deployment engineers. Based on that, let’s say there are different layers of environments, each serving a particular group of roles.

Let’s assume there are two projects that depend on each other, project A and project B, with a different team working on each. So how do you do development and make sure it all works?

Here is how I think this can be accomplished. Again, this is not my idea, but I like it. You can have three layers of environments; let’s call them DEV (development), QA (quality assurance) or SIT (system integration testing), and UAT (user acceptance testing).

The DEV environment is where a developer can deploy and run the system to see how it’s working (or not working :) ). All dependencies on other systems can be stubbed out at this layer; we don’t care whether the dependent systems work at this stage. Project A and project B can have separate DEV environments, say DEVA and DEVB, so the projects can be deployed and run in isolation from each other. The QA team can also use this environment to perform some testing. Once the QA team or the development team is happy with the results, the project is deployed to the next layer: QA or SIT.

The QA or SIT environment is where the QA team tests how the whole system works, including both project A and project B. There are no stubs at this stage; both projects are tested together. Once this is done, the whole system is deployed to the next layer: UAT.

The UAT environment is where end users perform testing and provide sign-off for the system. This environment should look as much like production as possible. Once the end users are happy with the system, it can be deployed to production.

There is one more environment that stands apart from the ones described above: a prod-like environment. Here, anyone from dev, QA, or the user community can take a look at how the system currently behaves in production. This is very useful when there is a bug and you don’t know whether it was introduced in the current release or already exists in production. It is also sometimes useful to observe how the system behaves in production when you are trying to track down behavior you are not sure about.

Thank you for reading!

Java: Static Variables are WRONG (Almost Always)

This topic came up at work recently, and I thought it might be useful to the wider internet audience…

When doing code reviews, I often come across components which utilize static mutable objects. Typically these are static Maps and other Collections.

If you are using the ‘static’ keyword without the ‘final’ keyword, this should be a signal to carefully consider your design. Even the presence of a ‘final’ is not a free pass, since a mutable static final object can be just as dangerous.

I would estimate somewhere around 85% of the time I see a ‘static’ without a ‘final’, it is WRONG. Often, I will find strange workarounds to mask or hide these problems.

Please don’t create static mutables. Especially Collections. In general, Collections should be initialized when their containing object is initialized and should be designed so that they are reset or forgotten about when their containing object is forgotten.

Using statics can create very subtle bugs which will cause sustaining engineers days of pain. I know, because I’ve both created and hunted these bugs.

If you would like more details, please read on…

Why Not Use Statics?

There are many issues with statics:

  • Writing Tests
  • Executing Tests
  • Subtle Bugs

Writing Tests

Code that relies on static objects can’t be easily unit tested, and statics can’t be easily mocked.

If you use statics, it is not possible to swap out the implementation of the class in order to test higher level components. For example, imagine a static CustomerDAO that returns Customer objects it loads from the database. Now I have a class CustomerFilter that needs to access some Customer objects. If CustomerDAO is static, I can’t write a test for CustomerFilter without first initializing my database and populating it with useful data.

Database initialization and population take a long time. And in my experience, your DB initialization framework will change over time, meaning the data will morph and tests may break. For example, imagine Customer 1 used to be a VIP, but the DB initialization framework changed and Customer 1 is no longer a VIP, while your test was hard-coded to load Customer 1…

A better approach is to instantiate a CustomerDAO and pass it into the CustomerFilter when it is constructed. (An even better approach would be to use Spring or another Inversion of Control framework.)

Once you do this, you can quickly mock or stub out an alternate DAO in your CustomerFilterTest, allowing you to have more control over the test.

Without the static DAO, the test will be faster (no DB initialization) and more reliable (it won’t fail when the DB initialization code changes). In this case, for example, the stub can ensure that Customer 1 is, and always will be, a VIP as far as the test is concerned.
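Here is a minimal sketch of that approach. The CustomerDAO, CustomerFilter, and CustomerFilterTest names come from the discussion above; everything else (the Customer fields, findVips, and so on) is invented for illustration:

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

import org.junit.Test;
import static org.junit.Assert.assertEquals;

// A hypothetical domain class, just enough for the example.
class Customer {
    private final int id;
    private final boolean vip;
    Customer(int id, boolean vip) { this.id = id; this.vip = vip; }
    boolean isVip() { return vip; }
}

// A narrow interface makes the DAO easy to stub out.
interface CustomerDAO {
    List<Customer> loadCustomers();
}

// CustomerFilter receives its DAO through the constructor instead of a static reference.
class CustomerFilter {
    private final CustomerDAO dao;

    CustomerFilter(CustomerDAO dao) {
        this.dao = dao;
    }

    List<Customer> findVips() {
        List<Customer> vips = new ArrayList<Customer>();
        for (Customer c : dao.loadCustomers()) {
            if (c.isVip()) {
                vips.add(c);
            }
        }
        return vips;
    }
}

// The test swaps in a stub DAO: no database, no initialization framework.
public class CustomerFilterTest {
    @Test
    public void findsOnlyVips() {
        CustomerDAO stub = new CustomerDAO() {
            public List<Customer> loadCustomers() {
                // Customer 1 is, and always will be, a VIP as far as this test is concerned.
                return Arrays.asList(new Customer(1, true), new Customer(2, false));
            }
        };
        CustomerFilter filter = new CustomerFilter(stub);
        assertEquals(1, filter.findVips().size());
    }
}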

Executing Tests

Statics cause a real problem when running suites of unit tests together (for example, on your Continuous Integration server). Imagine a static map of network Socket objects that persists from one test to another. The first test might open a Socket on port 8080, but forget to clear out the Map when the test is torn down. Now when a second test launches, it is likely to crash when it tries to create a new Socket on port 8080, since the port is still occupied. Worse, the Socket references in the static Collection are never removed and (unless you use something like a WeakHashMap) are never eligible for garbage collection, causing a memory leak.
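A compressed sketch of that scenario (the class, field, and method names here are invented for illustration):

import java.io.IOException;
import java.net.ServerSocket;
import java.util.HashMap;
import java.util.Map;

public class SocketRegistry {

    // A static registry of sockets that nothing ever clears or closes.
    static final Map<Integer, ServerSocket> OPEN_SOCKETS =
            new HashMap<Integer, ServerSocket>();

    static void open(int port) throws IOException {
        OPEN_SOCKETS.put(port, new ServerSocket(port));
    }

    public static void main(String[] args) throws IOException {
        open(8080);   // "test one" binds the port and never cleans up
        open(8080);   // "test two", same JVM: java.net.BindException: Address already in use
    }
}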

This is an over-generalized example, but in large systems, this problem happens ALL THE TIME. People don’t think of unit tests starting and stopping their software repeatedly in the same JVM, but it is a good test of your software design, and if you have aspirations towards high availability, it is something you need to be aware of.

These problems often arise with framework objects, for example, your DB access, caching, messaging, and logging layers. If you are using J2EE or some best-of-breed frameworks, they probably manage a lot of this for you, but if, like me, you are dealing with a legacy system, you might have a lot of custom frameworks to access these layers.

If the system configuration that applies to these framework components changes between unit tests, and the unit test framework doesn’t tear down and rebuild the components, those changes can’t take effect, and any test that relies on them will fail.

Even non-framework components are subject to this problem. Imagine a static map called OpenOrders. You write one test that creates a few open orders, and checks to make sure they are all in the right state, then the test ends. Another developer writes a second test which puts the orders it needs into the OpenOrders map, then asserts the number of orders is accurate. Run individually, these tests would both pass, but when run together in a suite, they will fail.

Worse, failure might be based on the order in which the tests were run.

In this case, by avoiding statics, you avoid the risk of persisting data across test instances, ensuring better test reliability.
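A compressed sketch of that interaction, using the OpenOrders idea from above (the OrderBook class and the test bodies are invented; real code would obviously be larger):

import java.util.HashMap;
import java.util.Map;

import org.junit.Test;
import static org.junit.Assert.assertEquals;

// The problem: a static, shared map of open orders that survives between tests.
class OrderBook {
    static final Map<Integer, String> OPEN_ORDERS = new HashMap<Integer, String>();
}

public class OpenOrdersTest {

    @Test
    public void firstDevelopersTest() {
        OrderBook.OPEN_ORDERS.put(1, "NEW");
        OrderBook.OPEN_ORDERS.put(2, "NEW");
        assertEquals(2, OrderBook.OPEN_ORDERS.size());   // passes when run alone
    }

    @Test
    public void secondDevelopersTest() {
        OrderBook.OPEN_ORDERS.put(3, "FILLED");
        assertEquals(1, OrderBook.OPEN_ORDERS.size());   // passes when run alone
    }

    // In a suite, whichever test runs second sees the other test's orders still
    // sitting in the static map, so its size assertion fails; which test fails
    // depends on the order in which they happen to run.
}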

Subtle Bugs

If you work in a high-availability environment, or anywhere that threads might be started and stopped, the same concern described above for unit test suites can apply when your code is running in production as well.

When dealing with threads, rather than using a static object to store data, it is better to use an object initialized during the thread’s startup phase. This way, each time the thread is started, a new instance of the object (with a potentially new configuration) is created, and you avoid data from one instance of the thread bleeding through to the next instance.

When a thread dies, a static object doesn’t get reset or garbage collected. Imagine you have a thread called “EmailCustomers”, and when it starts it populates a static String collection with a list of email addresses, then begins emailing each of the addresses. Let’s say the thread is interrupted or canceled somehow, so your high availability framework restarts the thread. When the thread starts up again, it reloads the list of customers. But because the collection is static, it may still contain the email addresses from the previous run. Now some customers might get duplicate emails.
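Roughly what that looks like in code; the EmailCustomers name comes from above, while loadAddresses and sendEmail are placeholder helpers standing in for the real data access and mail code:

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class EmailCustomers extends Thread {

    // The bug: this list outlives any single run of the thread.
    static final List<String> ADDRESSES = new ArrayList<String>();

    @Override
    public void run() {
        ADDRESSES.addAll(loadAddresses());   // a restart adds on top of the old entries
        for (String address : ADDRESSES) {
            sendEmail(address);
        }
    }

    private List<String> loadAddresses() {
        return Arrays.asList("a@example.com", "b@example.com");
    }

    private void sendEmail(String address) {
        System.out.println("Sending to " + address);
    }

    public static void main(String[] args) throws InterruptedException {
        Thread first = new EmailCustomers();
        first.start();
        first.join();

        // The high availability framework restarts the job with a new thread...
        Thread second = new EmailCustomers();
        second.start();
        second.join();
        // ...and each address is mailed twice on the second run, because ADDRESSES
        // still holds the entries from the first run. An instance field initialized
        // inside run() would give each restart a fresh, empty list.
    }
}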

An Aside: Static Final

The use of “static final” is effectively the Java equivalent of a C #define, although there are technical implementation differences. A C/C++ #define is swapped out of the code by the pre-processor, before compilation. A Java “static final” is a real field that lives with its class (and compile-time constants of primitive and String types are additionally inlined into the classes that use them). In that way, it is more similar to a “static const” variable in C++ than it is to a #define.


I hope this helps explain a few basic reasons why statics are problematic. If you are using a modern Java framework like J2EE or Spring, you may not encounter many of these situations, but if you are working with a large body of legacy code, they can become much more frequent.