What part of your job can you automate?

Let me start this post by saying that if you’ve never read the book The Passionate Programmer by Chad Fowler, you’re doing yourself a disservice. Go check it out right now; I’ll wait.

The Passionate Programmer is a book about creating a great career as a software engineer, and it is packed with tips such as “Automate Yourself into a Job”. It is this tip that I want to talk with you about today. Unfortunately, many people look at software engineers as fungible. We have some work, we need to get it done, we can throw X software engineers at it! They don’t understand why, if one programmer can do a task in three months, we can’t just throw three programmers at it and get it done in one month. Why do we need all of these expensive software engineers? Can’t we just go hire a bunch of cheap developers and get much more work done?

Unfortunately (or maybe fortunately for you), this isn’t the way it works. Generally speaking, an expert software engineer will produce more in a given time frame: not necessarily more raw output (inexperienced software engineers can often produce a *ton* of code), but better-designed systems and software that runs reliably without causing a constant stream of bugs and downtime.

When it comes to enhancing software throughput, your options are…

  1. Get faster people to do the work
  2. Get more people to do the work, or
  3. Automate the work.

It is hard to measure whether one programmer is faster than another, and adding more developers to a project actually tends to slow it down. Therefore, an experienced engineer will recognize that their time is wasted on routine and repetitive tasks, and will go out of their way to automate them. This is why Larry Wall (the creator of the Perl programming language) very tongue-in-cheek declared laziness to be one of the three virtues of a great programmer:

Laziness – The quality that makes you go to great effort to reduce overall energy expenditure. It makes you write labor-saving programs that other people will find useful, and document what you wrote so you don’t have to answer so many questions about it. Hence, the first great virtue of a programmer.

While there are many tasks in our daily lives as developers that can be automated, many of them fall into the realm of DevOps. One of the tasks that I realized I had been wasting too much time on was deploying server updates.

My Automation – AWS OpsWorks Command Deployments

AWS OpsWorks is a service that allows you to use code to automate server configurations through tools such as Chef and Puppet. We use OpsWorks to bootstrap AWS EC2 instances, allowing us to have an easily repeatable environment for our applications to run in.
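
For anyone unfamiliar with OpsWorks, here is a rough sketch of what adding a server to one of those layers looks like from the AWS CLI. This is not our actual setup; the stack ID, layer ID, and instance type below are placeholders:

#!/bin/bash
# Hypothetical example: add an instance to an OpsWorks layer from the CLI.
# OpsWorks runs the layer's Chef setup recipes automatically when it boots.
STACK_ID="00000000-0000-0000-0000-000000000000"   # placeholder
LAYER_ID="11111111-1111-1111-1111-111111111111"   # placeholder

INSTANCE_ID=$(aws opsworks create-instance \
  --stack-id "$STACK_ID" \
  --layer-ids "$LAYER_ID" \
  --instance-type t2.medium \
  --query 'InstanceId' --output text)

aws opsworks start-instance --instance-id "$INSTANCE_ID"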

As every good engineer knows, regular and prompt patching of your servers and libraries is critical to ensure that any vulnerabilities are quickly remedied. We had long ago created Chef tasks to run updates on our servers, but we were still logging in every week to run the commands, waiting for the servers to re-enter the load balancers, then running them against the next batch. It’s a process that doesn’t take a ton of time, but it’s still time that could be better spent doing other things.

One way to do this is by logging in to AWS and deploying commands to the servers. Kicking off the Chef commands to patch the servers involves:

  1. Log into the AWS web console (entering credentials and using multi-factor authentication)
  2. Navigate to OpsWorks, then to each Stack that needs updating
  3. Execute the deployment commands for each batch of servers
  4. Repeat the deployment commands for each Stack, and
  5. Repeat the entire process for each staging and production environment you have.

It isn’t a ton of work, but there is a lot of waiting in between steps, which means a lot of context switching. That seems like a prime candidate for automation, but we put off automating it for a long time, because the *right* solution felt like a lot of work and logging in and clicking a few buttons once a week didn’t seem like a big deal. This is the trap engineers often fall into: putting off time-saving efforts because solving the problem the way we feel it *should* be solved would take a lot of effort, when in fact a much simpler solution would suffice.

My Simple Solution: Bash Script

After a bit of research, we realized that we could kick off the OpsWorks agent directly on each server without going through the AWS API, so writing a shell script to SSH into each server and kick off the commands was pretty straightforward.

A small excerpt of the script is below:

#!/bin/bash
# Patch the staging servers one at a time by triggering the OpsWorks agent
# over SSH (through the bastion host), waiting between servers so each one
# can pass its health checks and re-enter the load balancer.

echo 'Updating Staging Servers...'

echo -e '\tUpdating Server 1'
ssh -A -i ~/.ssh/ssh_key -t username@bastion-ip \
  ssh server1-ip "sudo opsworks-agent-cli run_command execute_recipes &>execute_recipes.log"

echo -e '\tWaiting 3 minutes so the server can get back into the load balancer'
sleep 180

echo -e '\tUpdating Server 2'
ssh -A -i ~/.ssh/ssh_key -t username@bastion-ip \
  ssh server2-ip "sudo opsworks-agent-cli run_command execute_recipes &>execute_recipes.log"
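
One note if you try something similar: the -A flag forwards your local SSH agent for the bastion-to-server hop, so the key has to be loaded into the agent before the script runs. The script name below is just a placeholder:

# Load the key into the agent, then kick off the updates
ssh-add ~/.ssh/ssh_key
./update_staging_servers.sh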

In the future we could clean up this solution further and use the AWS API to auto-discover the servers in each OpsWorks layer across our AWS accounts, detect when the servers re-enter the load balancer, etc… but this solution was fast and served our current needs. Never let perfect be the enemy of good!
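
If we ever do go that route, most of the building blocks are already in the AWS CLI. Here’s a rough, untested sketch of what it could look like; the stack ID, layer ID, load balancer name, and recipe name are all placeholders, not our actual configuration:

#!/bin/bash
# Hypothetical sketch of the "proper" version: discover the servers through the
# OpsWorks API, run the recipes via a deployment, and wait on the load balancer
# instead of hard-coding hostnames and sleeping. All IDs and names are placeholders.
STACK_ID="00000000-0000-0000-0000-000000000000"
LAYER_ID="11111111-1111-1111-1111-111111111111"
ELB_NAME="example-elb"

# List the online instances in the layer: OpsWorks ID (for deployments)
# and EC2 ID (for the load balancer health check)
aws opsworks describe-instances --layer-id "$LAYER_ID" \
  --query "Instances[?Status=='online'].[InstanceId,Ec2InstanceId]" --output text |
while read -r OPSWORKS_ID EC2_ID; do
  # Run the same Chef recipes, but through the OpsWorks API instead of SSH
  DEPLOYMENT_ID=$(aws opsworks create-deployment \
    --stack-id "$STACK_ID" \
    --instance-ids "$OPSWORKS_ID" \
    --command '{"Name":"execute_recipes","Args":{"recipes":["example::update"]}}' \
    --query 'DeploymentId' --output text)

  # Block until the deployment finishes on this instance
  aws opsworks wait deployment-successful --deployment-ids "$DEPLOYMENT_ID"

  # Then wait until the classic load balancer reports the instance InService
  until aws elb describe-instance-health \
          --load-balancer-name "$ELB_NAME" \
          --instances InstanceId="$EC2_ID" \
          --query 'InstanceStates[0].State' --output text | grep -q InService; do
    sleep 30
  done
done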

There are many other opportunities for quick wins like this, but it makes me wonder, what part of your job can you automate?
