Prologue

I read and watch a lot of science fiction, both modern and classic, and one of my favorites is Frank Herbert's Dune. I also have a soft spot for the oft-maligned David Lynch film adaptation. The opening line of the film has always captured my imagination: "A beginning is a very delicate time."

Since I joined Pixability four short months ago, I've been reflecting on that quote. Starting a new job is always a very delicate experience: one wants to prove value early, and the quickest way to do that is to fall back on familiar patterns, tools, languages, and so on. But are those familiar patterns still the BEST way to accomplish a task? Are they even a fit for the current environment? It speaks to the quality of the engineering culture here at Pixability that engineers are given the time and resources to explore these important questions.

The Birth of a Drop Pod

My colleague Martin Kerr wrote a great post a few months back on the work he's doing with Terraform and Terragrunt. When I joined Pixability, I wasn't very familiar with Terraform or Ansible. However, I was eager to explore them rather than fall back on the infrastructure tooling I knew better.

I feel it's important to keep infrastructure code tightly coupled with the application code that depends on it (wherever practical). To my mind, that means embedding infrastructure code (be it Ansible playbooks or Terraform files) directly in your application repositories, and running it through automation and tests like any other kind of code. Since Terraform lets you reference modules from external sources and can generate a static plan file, adapting it to that pattern seemed entirely possible. So my plan was to have our builds generate a static Terraform plan file, which I would later consume and apply as part of our application deployment tooling.
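As a bare-bones illustration of that plan-as-artifact idea (commands only, with an arbitrary plan file name), the build would render a plan once, and the deploy step would later apply exactly that plan:

# Build stage: render the proposed changes into a static plan file
terraform init
terraform plan -out=tfplan.out

# Deploy stage, later and ideally somewhere else: apply exactly what was planned
terraform apply tfplan.out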

My first attempt was a false start, due largely to me not bothering to RTFM. Simply running "terraform plan --out=tfplan.out" inside my builds to generate a Terraform "artifact" was not going to be enough. When you generate a Terraform plan, the plan file itself contains absolute paths to modules, files, and so on. So if you generate the plan on an ephemeral build slave and later want to apply it from a different host (or even your own laptop), it won't work, because the module paths referenced in the plan will be invalid.

HashiCorp's suggested workaround is to run terraform plan inside a Docker container, so that you keep some control over the file paths. This still seemed clumsy to me: even if the Docker container gives me predictable paths for modules (/tmp, for instance), I still need my deployment tools to assert that location, plus deal with tarring and untarring the plan files on top of that.

So I went back to the drawing board. My ultimate goal was to generate a Terraform plan that I could treat as a stand-alone, portable artifact. And what are Docker images if not a kind of artifact? If I'm already running Terraform inside a container to generate the plan, why not just build a container with all of the plan files and required modules, and use that as my "artifact"?

And thus was our first configuration "drop pod" born: a fully self-contained bundle, with everything it needs stored on-image to accomplish its mission.

Our builds now have a stage (implemented via a Jenkins shared library, so anything can easily plug into it) that does the following:

  • Pulls down a HashiCorp Terraform image at whatever version is specified by the invoking build

  • Launches a container with the current working directory mounted

  • Copies the Terraform files to a staging path inside the container itself

  • Runs terraform init from inside the container

  • Switches to a workspace that corresponds to the environment

  • Runs the Terraform plan, producing a plan file that lives inside the container

  • Runs docker commit to save the image, including the plan files and all modules pulled down by terraform init

  • Tags the container with build/branch information and pushes it to the registry.

Our Jenkins shared library definition is essentially a scripted version of those steps.
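In rough shell terms, and with the registry, image names, staging path, and tags as illustrative placeholders rather than our actual values, the stage amounts to something like this:

#!/usr/bin/env bash
set -euo pipefail

TF_VERSION="$1"    # Terraform version requested by the invoking build
ENVIRONMENT="$2"   # maps to a Terraform workspace
IMAGE_TAG="$3"     # build/branch identifier, e.g. myapp-master-42

# Pull the HashiCorp Terraform image at the requested version
docker pull hashicorp/terraform:"${TF_VERSION}"

# Launch a container with the current working directory mounted
docker run -d --name tf-plan --entrypoint /bin/sh \
  -v "$(pwd)":/src hashicorp/terraform:"${TF_VERSION}" -c 'sleep 3600'

# Stage the Terraform files on-image, init, select the workspace, and render the plan
docker exec tf-plan sh -c "cp -r /src/terraform /staging && cd /staging \
  && terraform init \
  && terraform workspace select ${ENVIRONMENT} \
  && terraform plan -out=tfplan.out"

# Commit the container (plan file plus every module pulled down by init),
# tag it with build/branch info, and push it to the registry
docker commit tf-plan "registry.example.com/drop-pods:${IMAGE_TAG}"
docker push "registry.example.com/drop-pods:${IMAGE_TAG}"
docker rm -f tf-plan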

In the end, we have a runnable Docker container with a pinned version of Terraform on board, along with everything else needed to execute a given Terraform plan. You can probably imagine how this solves other frequently annoying problems as well. For instance, if we really need to leverage a feature in a newer version of Terraform for a given application, it's very easy for us to build a fully self-contained drop pod that uses the newer version without impacting anything else.

We have a nice wrapper supporting this pattern which accepts arguments of "app", "branch", "build", and "environment", and handles the docker pull and the subsequent docker run that applies the plan, along with the associated cleanup. And because every plan is rendered as part of a build, we can clearly see a history of both proposed and applied changes. To provide a final set of linkages, all of our Terraform code applies a 'terraform-stack' tag to every resource it manages, indicating the branch and build that produced the plan.
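A minimal sketch of such a wrapper, with the argument handling, registry, and on-image paths as illustrative assumptions rather than our actual script:

#!/usr/bin/env bash
set -euo pipefail

APP="$1"; BRANCH="$2"; BUILD="$3"; ENVIRONMENT="$4"
IMAGE="registry.example.com/${APP}-drop-pod:${BRANCH}-${BUILD}"   # illustrative naming

# Fetch the drop pod for the requested app/branch/build and apply the plan
# that was rendered at build time; everything it needs lives on the image.
docker pull "${IMAGE}"
docker run --rm --entrypoint /bin/sh "${IMAGE}" -c \
  "cd /staging && terraform workspace select ${ENVIRONMENT} && terraform apply tfplan.out"

# Cleanup: drop the local copy of the image once the plan has been applied
docker rmi "${IMAGE}"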

Extending Drop Pods to Ansible

I understand the appeal of Ansible: the simplicity, the procedural ordering of tasks, the lack of a host 'agent', and so on. Unfortunately, I think that simplicity breeds some questionable practices, not unlike what I wanted to avoid with Terraform (e.g., operators checking out a repo and executing code from it with no central system of record or guardrails). Ansible is very powerful, but I think it often gets used as a glorified scripting tool and nothing more.

So I found myself asking: could I build an Ansible drop pod that builds on the approach we took with Terraform?

There were three problems to solve to adapt this pattern to Ansible.

  1. The need to share and version roles, so we could embed our Ansible playbooks alongside our application code in a portable way

  2. A means to bundle everything into a container

  3. A way to actually run the playbooks once bundled into a container.

The answer to the first problem turned out to be ansible-galaxy. Galaxy lets you store your roles separately from your playbooks, and they don't necessarily have to be uploaded to the Galaxy repository either; you can store and version them as part of your own SCM. Sadly, storing roles in Git requires each one of them to have its own repository. That seemed like a non-starter to me (I don't really want dozens of tiny GitHub repositories eating up our license space). However, you can also pull roles from a web location if they're in tar format. So we took the approach of storing roles in a single repo, giving each of them a unique tag (e.g., role1-1.0), and then creating a build job to automatically create tar bundles for them that get uploaded to a web-hosting-enabled S3 bucket. Essentially, we created our own "poor man's" package repository.
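A rough sketch of that build job, with the bucket name and tag scheme as illustrative assumptions:

#!/usr/bin/env bash
set -euo pipefail

TAG="$1"           # role tag pushed to the repo, e.g. role1-1.0
ROLE="${TAG%-*}"   # role directory inside the single roles repo, e.g. role1

# Package the tagged role and publish it to the web-hosting-enabled S3 bucket,
# which acts as our "poor man's" package repository
git checkout "${TAG}"
tar -czf "${TAG}.tar.gz" "${ROLE}"
aws s3 cp "${TAG}.tar.gz" "s3://example-ansible-roles/${TAG}.tar.gz" --acl public-read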

Ansible-galaxy also provided the answer to the second problem. We can embed a 'requirements.yml' containing any shared role dependencies in our application repos, alongside a playbook that defines the application-specific configuration tasks. By running "ansible-galaxy install -r requirements.yml" from inside our build container, we produce an Ansible analog to the Terraform drop pod. We end up with a container that "internally" has a filesystem structure that looks like this:

  
/root
  /ansible
    playbook.yml
    requirements.yml
    /roles
      RoleA-1.0
        tasks/main.yml
        meta/...
      RoleB-1.0
        tasks/main.yml
        meta/...
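For reference, a requirements.yml pointing at those tarred roles might look something like the following sketch (the role names and bucket URL are placeholders, not our actual values); the build simply runs ansible-galaxy install against it from inside the container:

# Illustrative requirements.yml referencing roles from the S3 "repository"
cat > requirements.yml <<'EOF'
- name: RoleA-1.0
  src: https://example-ansible-roles.s3.amazonaws.com/roleA-1.0.tar.gz
- name: RoleB-1.0
  src: https://example-ansible-roles.s3.amazonaws.com/roleB-1.0.tar.gz
EOF

# Pull the shared roles into ./roles alongside the playbook
ansible-galaxy install -r requirements.yml -p ./roles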

Now on to problem three. We can't just run the playbook from inside the container, because it will end up running the playbook against the container itself. That isn't what we want: we want to execute the playbook against an instance EXTERNAL to the container, specifically the host EC2 instance where we've downloaded the drop pod.

The trick was to run the container in "net=host" mode and configure /etc/ansible/hosts (inside the container, of course) to reference "localhost". This way, executing the playbook from inside the container actually treats localhost as if it were a remote host.

There are some additional pieces to making this work. We have to pre-bake these Ansible drop pods with an SSH private key that allows connecting as the 'ansible-runner' user (but ONLY from localhost).
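A rough sketch of that pre-baking, split between the image build and the host; the key-distribution path and file locations here are illustrative, not our actual setup:

# Inside the drop pod image: point the default inventory at localhost over SSH
# (forcing SSH so Ansible does not fall back to a local connection), and carry
# the private half of the ansible-runner key pair on-image.
echo "localhost ansible_connection=ssh" > /etc/ansible/hosts
install -m 600 /build-secrets/ansible_runner_key /root/.ssh/id_rsa   # illustrative source path

# On the host EC2 instance (e.g. baked into the AMI): authorize the matching
# public key for the ansible-runner user, restricted to localhost connections only.
echo "from=\"127.0.0.1\" $(cat ansible_runner_key.pub)" \
  >> /home/ansible-runner/.ssh/authorized_keys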

Applying the playbook thus looks like this (keeping in mind the entry point is 'ansible-playbook'):

docker run -e ANSIBLE_HOST_KEY_CHECKING=False --net='host' --rm \
  pixability/application_repository:some_versioned_drop_pod \
  --private-key /root/.ssh/id_rsa --user ansible-runner playbook.yml

The container runs this playbook as if you'd given it an external host to connect to. It just so happens that the external host is actually the host operating system where the container is running.

We now have a fully self-contained "package" that not only contains the playbook and all associated roles at the right versions, but also the full Ansible execution environment. This too is wrapped in a simple script that takes arguments of "app", "branch", and "build", and handles pulling the container and executing it with the correct arguments.
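A minimal sketch of that wrapper for the Ansible side (the registry name and tag scheme are illustrative):

#!/usr/bin/env bash
set -euo pipefail

APP="$1"; BRANCH="$2"; BUILD="$3"
IMAGE="registry.example.com/${APP}:${BRANCH}-${BUILD}"   # illustrative registry

# Pull the Ansible drop pod and run it against the host it lands on;
# the image's entry point is ansible-playbook.
docker pull "${IMAGE}"
docker run -e ANSIBLE_HOST_KEY_CHECKING=False --net=host --rm "${IMAGE}" \
  --private-key /root/.ssh/id_rsa --user ansible-runner playbook.yml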

Epilogue

Drop pods as a pattern are still being refined here, with plans to extend them to our Terragrunt-managed infrastructure coming down the road as well. By leveraging Docker to create fully self-contained "packages" with everything needed to apply a Terraform plan or execute an Ansible playbook, we get a system that is highly portable, versionable, and repeatable. We also completely sidestep version compatibility issues, and can tightly couple our configuration and infrastructure definitions with the application builds that depend on them.

For Pixability, this system will be most useful for managing our shared infrastructure components, since we already run most of our web applications and microservices as containers. However, as a means of configuring and enforcing the underlying state of the container scheduling and orchestration layer itself, it will work very well indeed. From starting a new job to developing a new infrastructure package, these new beginnings hold a lot of promise for Pixability and its Engineering team.
