Streamlining AWS deployments with Python & Ansible, Part III

Five tips for refactoring code

In Part I of this series, we experimented with writing Ansible modules. In Part II, we learned to unit test them. In Part III, we will refactor the code written in Parts I and II to conform with best practices and improve efficiency and quality.

Part III — Five tips for refactoring code


Tip #1 — Avoid coding where possible!

Instead of writing and maintaining your own custom roles and tasks ❄️, consider leveraging Ansible’s cloud modules.

For example, all of the Python code from Parts I–II could be replaced with the following usage of the ec2_elb_info and ec2_elb_lb modules:

    # Gather information about a specific ELB
    - action:
        module: ec2_elb_info
        names:
          - "{{ some_elb_name }}"
      register: elb_info

    # Create the ELB if none exists by that name
    - action:
        module: ec2_elb_lb
        state: present
        name: 'New ELB'
        subnets: 'subnet-123456'
      when: not elb_info.elbs

Tip #2 — Keep it consistent!

Just as English grammar standards allow for easy reading, relying on Python formatting allows us to focus on what the code is doing rather than how it’s written.

This not only benefits the reader, but it also settles some tedious internal debates on style that often consume the energy of any developer who’s focused on the quality of their code.

Because our Ansible playbooks consist of YAML and Python files, we will introduce two packages to help keep things consistent:

    pipenv install --dev black       # for formatting Python code
    pipenv install --dev yamllint    # for linting YAML

You can run Black on your code like this:

pipenv run black .

While Black does not typically require any configuration or fine-tuning, it is worth deciding how aggressively you’d like your YAML linted. In general, there are two ways to run yamllint:

    yamllint .              # uses the default (stricter) rule set

    yamllint -d relaxed .   # uses the relaxed rule set
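If neither preset fits, you can also tune individual rules in a `.yamllint` file at the root of your repository. Here is a small sketch; the specific rule choices below are illustrative, not a recommendation:

```yaml
# .yamllint — extend the relaxed preset, then tighten a few rules
extends: relaxed

rules:
  line-length:
    max: 120        # allow longer lines than the default
  truthy: enable    # flag ambiguous booleans like `yes`/`no`
```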

It’s also recommended to use pre-commit hooks to integrate tools like Black and yamllint into your version control workflow (e.g. on GitHub). That way, pull requests are flagged if new or modified code is not formatted or linted correctly.
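A minimal `.pre-commit-config.yaml` wiring up both tools might look like the sketch below; both projects publish their own pre-commit hooks, but pin the `rev` values to the releases you actually use:

```yaml
# .pre-commit-config.yaml — run Black and yamllint before each commit
repos:
  - repo: https://github.com/psf/black
    rev: 22.3.0       # pin to your preferred release
    hooks:
      - id: black
  - repo: https://github.com/adrienverge/yamllint
    rev: v1.26.3      # pin to your preferred release
    hooks:
      - id: yamllint
```

After adding the file, `pipenv run pre-commit install` registers the hooks with your local Git clone.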

You can find the integration instructions for Black here and yamllint here.

Tip #3 — Follow the (Python) best practices recommended by Ansible

Ansible provides a number of general guidelines and AWS-specific guidelines for writing modules, some specifically with regard to Python. As a matter of opinion, my favorites are:

  • Where possible, consolidate shared code in a module_utils directory so it can be accessed by any task throughout your playbook.

  • Write crystal clear and meaningful exit_json and fail_json messages, and format AWS errors where possible using fail_json_aws.
  • Rely on Ansible’s sanity checker throughout development to keep you on track with agreed-upon conventions.

Tip #4 — Anticipate & handle network drama

Inevitably, your relationship with AWS will feature network errors (such as RateLimitExceeded) that can be difficult to diagnose. Unlike daytime television, this kind of “network drama” simply cannot be avoided. To help prevent these errors from interfering with your deployments, you can easily enable backoff and jitter using the following reference code:

    from ansible.module_utils.ec2 import AWSRetry

    @AWSRetry.exponential_backoff(retries=5, delay=5)
    def describe_some_resource_with_backoff(self):
        ...

Alternatively, if using AnsibleAWSModule (as in our example code in Part 2), you can enable this for all network calls by configuring your module. Here’s an example:

    def main():
        module = AnsibleAWSModule(...)
        module.client('ec2', retry_decorator=AWSRetry.jittered_backoff(retries=10))

Tip #5 — Watch for import collisions!

As you develop and test locally in the safety of a Python virtual environment, things may look totally fine ✅.

However, when Ansible runs in the wild, it packages your code — including botocore and its wrapper boto3 — into a standalone Python program, which it then attempts to run on a remote server using whatever version of Python it finds there.

Normally, this is no problem. However, a mismatch between your local Python version and the one installed on the remote server can result in an ImportError when your packaged code attempts to import a version of the botocore library that doesn’t exist there.

Luckily for us, our AnsibleAWSModule will raise its own special error for this situation. All we have to do is silence this error, like this:

    from ansible.module_utils.aws.core import AnsibleAWSModule

    try:
        import botocore
    except ImportError:
        pass  # handled by AnsibleAWSModule

If you’re using AnsibleModule, check out these docs for some reference code that will provide a similarly smooth failure.
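With plain AnsibleModule, the conventional approach is an availability flag checked at runtime. A minimal sketch follows; the helper name `check_requirements` and the callback wiring are mine, for illustration, while in a real module you would pass `module.fail_json` itself:

```python
# Record whether botocore imported cleanly instead of letting the
# ImportError escape at module load time.
try:
    import botocore  # noqa: F401
    HAS_BOTOCORE = True
except ImportError:
    HAS_BOTOCORE = False


def check_requirements(fail_json):
    """Fail with a clear message if botocore is missing.

    `fail_json` is expected to behave like AnsibleModule.fail_json,
    i.e. accept a `msg` keyword argument.
    """
    if not HAS_BOTOCORE:
        fail_json(msg="botocore is required for this module")
```

This keeps the failure message actionable for the operator instead of surfacing a raw traceback from the remote host.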

In closing…

If Parts I, II, and III of this series did their job, you will now instinctively yell “AWS PYTHON ANSIBLE” when someone broaches the dinner table topic of “things that go together.” While perhaps not the most common answer, the beauty of deploying to AWS using Python-crafted Ansible modules is undeniable.

And how do you know this is true? Well, we found out together! As a refresher we (1) crafted our first Ansible module in Part I, (2) authored unit tests for it in a variety of ways in Part II, and even (3) covered best practices for building clean, efficient, and easy-to-maintain Ansible code in Part III. 🏁



Ford Prior, Principal DevOps Engineer

Ford Prior is a Principal DevOps Engineer who works on delivery experience, inner-sourcing, and CICD pipelines. He’s passionate about productivity engineering, tech education, and the community of Richmond, where he lives with his partner and kids. Between Ansible deployments, he enjoys spending time trail running or in his homemade sauna.
