Streamlining AWS Deployments with Python & Ansible (Part I)

The basics of Ansible and how to build a playbook

Peanut butter and jelly. Batman and Robin. Denim jorts and…anything. Despite our differences, we can all agree that some things simply “go together.”

This is no different in the world of contemporary cloud computing, where one winning combination among software engineers is Python, Ansible, and AWS. As an example, a developer might craft a playbook (Ansible) that invokes scripts (in Python) to create things in the cloud (AWS).


This post has three (3) parts:

  • Part I will explain Ansible and provide a basic example in Python.
  • Part II will outline basic testing using mock, moto, and localstack.
  • Part III will provide some technical nuance unique to testing Ansible modules.

Part I: Understanding Ansible

Before we get started, let’s establish a working architectural model in AWS that we’ll reference throughout this series.

Let’s say we have a special fortune-telling API which returns snarky fortune cookie-style messages. As depicted below, when people make HTTP calls to your API, that traffic is routed through a load balancer, which spreads it across a compute cluster. On this compute cluster live microservices, where the traffic ultimately ends its journey and a response, complete with a snarky fortune, is sent back to the caller.

In the world of AWS, the load balancer is an Elastic Load Balancer (ELB), the compute cluster is managed by the Elastic Container Service (ECS), and our microservices are simply called “services.”

This image shows the relationship between the load balancer (also known as an Elastic Load Balancer, or ELB), the compute cluster (managed by the Elastic Container Service, or ECS), and our microservices (which are simply called “services”).

So many DevOps tools…

Across the DevOps universe, there are dozens of tools that help with automating the ELBs, ECSs, and Services shown in the above diagram. Regarding the “best” choice, there are passionate debates raging at all times.

Within a category sometimes referred to as Infrastructure-as-Code (IaC), a subset of these tools — many with overlapping features — are designed to assist in a developer’s quest towards fully templating and versioning infrastructure and deployments.

A well-regarded arrow in this IaC quiver is Ansible, an open-source procedural configuration tool. The concept is simple: you provide a playbook of tasks, and Ansible executes those tasks in a cloud environment.

What about Terraform or CloudFormation?

In contrast, Terraform and CloudFormation are declarative provisioning tools, designed to enforce a “state” (e.g. a set of cloud resources) in a cloud environment. That is, you provide a blueprint and the tool handles the sequencing and execution.

This image shows how some things (like ALBs and ECS clusters) are best suited to Terraform, while Ansible is best suited for software deploys (services).

In my experience, Ansible excels in the delicate art of software deployments, whereas Terraform is best for infrastructure provisioning. As depicted above, you might provision a compute cluster using Terraform modules, but then use the procedural magic of Ansible to verify cluster health, perform a software deployment, and verify the health of the new service on the cluster.

That said, many engineers with more experience than myself support liberal use of Ansible, even for provisioning infrastructure. Such engineers may not like the “throw-it-over-the-wall” style of Terraform and appreciate the transparent, interactive nature of Ansible. Seems fair, right?

What about Chef, Puppet, or SaltStack?

You might ask, “What about Chef, Puppet, or SaltStack?” While these are also configuration management tools that excel in multi-tier software deployments, Ansible has the benefit of an agentless architecture, whereby modules are pushed to a remote server, executed, and cleaned up afterwards. All over SSH, with no agent installation required! 🥇

Are you telling me Ansible is…perfect?

Certainly not! While Ansible is a shining star in the DevOps toolbox, major cloud providers have yet to grant the Ansible OSS project what you might call “premium stewardship.” For that reason, despite active community stewardship, some parts of Ansible are still considered “buggy” by some and lacking in feature depth by others.

How does Ansible work - A simple tutorial

Within a directory of playbooks, you will find lists of tasks written in YAML, an easy-to-read format for spelling out key-value pairs. YAML is a very popular choice for config files that are subject to human editing, such as our playbooks, which will change over time. Each task runs in sequence and maps to a module, which in our case is a single Python file that Ansible promptly executes.

Show me the YAML!

We start with playbooks/main.yml, which defines a procedure to deploy a new application:

- name: Deploy application
  hosts: all
  roles:
    - role: basic_deploy
      when: environment == "dev"
    - role: blue_green_deploy
      when: environment == "qa" or environment == "prod"

Wait, what’s a role?

Good question. Ansible provides the roles concept to allow you to tailor playbook executions to specific situations.

For our example, our basic_deploy would provide a basic “all-at-once” deployment for our dev experimentation. And the blue_green_deploy role might provide a blue-green deployment in our stable qa and prod environments.

Here’s a literal illustration of how the directory structure might look with these two (2) roles:

This image shows the directory structure of a generic Ansible playbook.

As you can see, each role contains a tasks/main.yml file to spell out exactly what its responsibilities are. Not depicted is an optional defaults/main.yml, whereby we might provide the role with some sane defaults.
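For instance, a hypothetical defaults/main.yml for the basic_deploy role might pre-populate variables like aws_region, so a playbook only needs to override them when the defaults don’t fit:

```yaml
# roles/basic_deploy/defaults/main.yml
# Hypothetical sane defaults; any of these can be overridden
# by playbook vars or --extra-vars at runtime.
aws_region: "us-east-1"
health_check_retries: 3
```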

What’s in a task?

Within each role’s tasks/main.yml, we may list dozens of individual tasks. For example, one task might be a simple check to see whether an ELB already exists that we can use for our deployment:

- name: "Check if ELB"
  check_elb:
    elb_name: "{{ ansible_facts.elb_name }}"
    region: "{{ aws_region }}"

Let’s enhance this example using register and when, which gives us the ability to check for the ELB and create one if it doesn’t yet exist:

- name: "Check if ELB"
  check_elb:
    elb_name: "{{ ansible_facts.elb_name }}"
    region: "{{ aws_region }}"
  register: elb

- name: "Create New ELB"
  create_elb:
    elb_name: "{{ ansible_facts.elb_name }}"
    region: "{{ aws_region }}"
  when: not elb.exists
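Under the hood, a check module like this boils down to a simple predicate over the response from AWS. Here’s a minimal sketch, assuming the response mirrors the dict shape returned by boto3’s elbv2 describe_load_balancers call (a list of load balancers under a "LoadBalancers" key):

```python
def elb_exists(describe_response, elb_name):
    """Return True if an ELB with the given name appears in the response.

    `describe_response` is assumed to mirror the shape returned by
    boto3's elbv2 describe_load_balancers call.
    """
    names = [lb.get("LoadBalancerName") for lb in describe_response.get("LoadBalancers", [])]
    return elb_name in names


# A canned response, standing in for a real AWS call:
fake_response = {"LoadBalancers": [{"LoadBalancerName": "fortune-elb"}]}
print(elb_exists(fake_response, "fortune-elb"))   # True
print(elb_exists(fake_response, "missing-elb"))   # False
```

The module would then report that boolean back to Ansible, which is what the `when: not elb.exists` condition keys off of.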

Enough YAML, release the Python! 🐍

You’re right! It’s about time. To support our suave veneer of YAML, we need to get our hands dirty and write code to execute these tasks. Though Ansible supports many scripting languages, we promised you Python so that’s what we’ll use.

Returning to our example, our create_elb task would map directly to a file named create_elb.py, which might look like this:

#!/usr/bin/env python

from ansible.module_utils.aws.core import AnsibleAWSModule
from boto3 import Session


class CreateELB:
    """Class used to create a shiny new ELB"""

    def __init__(self, module):
        self.module = module
        session = Session(region_name="us-east-1")
        self.elbv2 = session.client("elbv2")

    def create_elb(self, elb_name, subnet_id):
        results = self.elbv2.create_load_balancer(Name=elb_name, Subnets=[subnet_id])
        if "LoadBalancers" not in results:
            self.module.fail_json(msg="Not created!", elb_name=elb_name)
        self.module.exit_json(msg="Created!", elb_name=elb_name)


def main():
    module = AnsibleAWSModule(
        argument_spec=dict(
            elb_name=dict(required=True, type="str"),
            subnet_id=dict(required=False, type="str"),
        )
    )
    creator = CreateELB(module)
    creator.create_elb(module.params["elb_name"], module.params["subnet_id"])


if __name__ == "__main__":
    main()
Really, the key thing to focus on here is that this file is designed to be run as a script in a shell session, not imported into a Python session as a module. That is, Ansible will call it via python create_elb.py instead of import create_elb.

This means:

  • It accepts STDIN input from Ansible.
  • It writes back to Ansible via STDOUT.
  • When executed, it relies on the contents of its if __name__ == "__main__": block for instructions on what to execute (in our case, a main() function).
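To make that contract concrete, here’s a toy stand-in (not the real AnsibleModule machinery) that mimics the exchange: Ansible-style JSON arguments go in over STDIN, and a JSON result comes back over STDOUT:

```python
import json
import subprocess
import sys

# A toy module that mimics the Ansible contract: read a JSON payload
# from STDIN, write a JSON result to STDOUT. (The real AnsibleModule
# does far more; this only illustrates the plumbing.)
MODULE_SOURCE = """
import json, sys
args = json.load(sys.stdin)["ANSIBLE_MODULE_ARGS"]
print(json.dumps({"changed": True, "elb_name": args["elb_name"]}))
"""

# Ansible wraps module parameters under an ANSIBLE_MODULE_ARGS key.
payload = json.dumps({"ANSIBLE_MODULE_ARGS": {"elb_name": "demo-elb"}})

proc = subprocess.run(
    [sys.executable, "-c", MODULE_SOURCE],
    input=payload, capture_output=True, text=True,
)
result = json.loads(proc.stdout)
print(result["elb_name"])  # demo-elb
```

Running this prints the name the “module” echoed back, which is exactly the kind of result a register statement would capture.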

What about variables?

The vars defined in the task’s main.yml are neatly passed into our CreateELB class and appear in the logic as a params dictionary. Then, upon success, the task registers the module’s output in a variable called elb (which includes our elb_name), which can be consumed by subsequent tasks.
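As a sketch, a later task could echo that registered value back with Ansible’s built-in debug module (assuming the earlier task registered its result as elb and included an elb_name field in its output):

```yaml
- name: "Report on the new ELB"
  debug:
    msg: "Deployment will use ELB {{ elb.elb_name }}"
```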

In summary, Ansible will execute a file that runs the code found in the main() function, which creates a shiny new Ansible module and feeds it to a CreateELB class, which actually creates the ELB resource in AWS.

What’s next?

Great question! Now that we have a working Ansible module, in Part II we will figure out how to unit test it!


*Header image by pressfoto on

Ford Prior, Principal DevOps Engineer

Ford Prior is a Principal DevOps Engineer who works on delivery experience, inner-sourcing, and CICD pipelines. He’s passionate about productivity engineering, tech education, and the community of Richmond, where he lives with his partner and kids. Between Ansible deployments, he enjoys spending time trail running or in his homemade sauna.
