Building a singular software delivery pipeline in the open

Learn how a team of developers embraced open source to create a singular software delivery pipeline.

It's 2015. You have a small group of developers in the card organization of Capital One in Canada, and you're starting new development from the ground up. What begins as a small development-operations pipeline with everybody located in the same office starts to grow, and eventually you end up with multiple pipelines spanning the company because different groups are solving the same problem in their own ways.

You try to fix this by employing various DevOps concepts, with development and operations working together in a cloud computing environment to roll out high-quality applications that have been validated and are good to go, but it keeps getting more complicated. You recognize it's time for a shift.

That was the state we were in for our team — and we solved it by building a singular software delivery pipeline. It was a huge success, and in time, the entire enterprise got on board to implement a single, simple unified process that everyone could understand and use effectively.

Why build a singular software delivery pipeline?

We saw a need to improve the developer experience around how we released software. Our software release process took a long time and involved painful, manual steps. The processes were error-prone and not a great experience for the developers or engineers using them. So we developed a CI/CD pipeline using InnerSource strategies to meet the requirements of our users: Capital One developers deploying software to the cloud. It automated the manual steps so they were no longer time-consuming and painful, allowing software releases to be faster and more frequent.

Development went through several stages. In the beginning, we were very much in start-up mode, with everybody in the same office. We sat next to each other: changes were made, and we could witness the results first hand as our colleagues explored them. When we moved beyond our team and scaled to the Division and Enterprise, we had to adapt. We were no longer sitting next to our users, and the feedback wasn't as direct.

We weren't using our current framework back then, and there were multiple frameworks across the company creating duplicate effort. We wanted to consolidate and pull our talent together to solve the problem once, allowing us to move on and use that talent more effectively on bigger and more challenging problems.

What we wound up doing was pulling everybody together for what we called a barn-raising — a discussion to identify the core problems, agree on what we needed to do and explore possible methods that could help.

After more conversations, our team discovered that many teams were completing similar tasks differently. Eventually, we chose a framework that met the needs of our business and users.

Things that a singular pipeline did for us

We got a lot out of the change right away. At every stage, we had positive results we could point to and build on for the next stage of maturing the pipeline:

  • Developer experience: Many of our developers were new and needed to learn a lot of complex concepts, so their experience was challenging. The pipeline eased that burden by hiding some of that complexity, which reduced how much developers had to learn up front and gave them a better experience from the start.
  • Avoiding duplication: If you have more than one version of anything, it's going to wind up costing your organization more to manage it than the cost of moving to one system. Moving to a single software pipeline avoids that costly duplication. Developers who were once working on redundancies were freed up to focus on new and more challenging problems.
  • Compliance: We have to document and report certain things, maintain security across all our operations and keep things safe from unauthorized access and other threats. In practice, we have to document just about everything we do and show our work. Better still, by automating software delivery with the CI/CD pipeline, we built the validation and checks into it to make sure deployments were compliant. This made software delivery faster because manual, error-prone checks were no longer needed (see the sketch after this list).
  • Familiarity and commonality: By far one of the most important things a shared pipeline gave us was uniformity of systems. Doing everything within a single unified framework lets us handle configurations at scale to deploy in the cloud. That lets developers build and deploy something new faster. Having everything in the same pipeline also makes it easier than it's ever been to bring new people up to speed on a system they might not have used before.
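
The compliance checks themselves aren't published, but as a rough illustration of the idea, a declarative Jenkins pipeline can bake a compliance gate into every build so the manual reviews disappear. The stage names and scripts below are placeholders, not Capital One's actual implementation:

```
// Hypothetical sketch only -- stage names and scripts are placeholders.
pipeline {
    agent any
    stages {
        stage('Build & Test') {
            steps {
                sh './gradlew build'                    // assumes a Gradle project
            }
        }
        stage('Compliance gate') {
            steps {
                // Automated checks stand in for the old manual, error-prone reviews.
                sh './scripts/scan-dependencies.sh'     // placeholder vulnerability scan
                sh './scripts/verify-change-record.sh'  // placeholder change-approval check
            }
        }
    }
}
```

Because the gate runs on every build, the evidence that a deployment met the checks is produced automatically rather than assembled by hand.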

What we built: A singular software delivery pipeline accessible across the organization

We’re open source-first at our company and decided from the beginning to use open-source software and tools throughout development. We found ourselves using many different open source technologies, such as Jenkins.

As workflows evolved, everything got a lot easier because the single unified pipeline streamlines each step. Rolling out a new deployable piece, for example, starts with the team working in its own GitHub repository. Inside is a configuration file that defines what the team wants the pipeline to do with their software. They set the parameters, and the pipeline automates the actions.
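
The article doesn't show that configuration file, but as a rough sketch, in a Jenkins shared-library setup it might be a short file handing team-specific parameters to a common pipeline definition. The library name, the `deliveryPipeline` step and every parameter below are hypothetical:

```
// Hypothetical example -- `deliveryPipeline` and its parameters are invented
// to illustrate a shared pipeline driven by team-owned configuration.
@Library('shared-delivery-pipeline') _

deliveryPipeline {
    appName       = 'example-card-service'   // placeholder application name
    language      = 'java'
    environments  = ['dev', 'qa', 'prod']    // environments to roll through
    notifyChannel = '#example-team'          // where build results are posted
}
```

Because the heavy lifting lives in the shared pipeline, each team only maintains this small file.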

In the early days, before a product could be rolled out, multiple people needed specialized knowledge. Switching to the pipeline removed that complexity and the need to learn the specialized knowledge, replacing it with defining simple configuration instead.

This system has also sped up the way our teams talk to each other. We've adopted InnerSource strategies at scale and use many tools, including Jenkins and GitHub. Those two tools are integrated so the pipeline can work out the configuration that's needed. Meanwhile, as developers make changes to their code, they commit to GitHub and create a pull request for review. Once it's approved, everything gets merged into the master branch and deployed through the pipeline.
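
The details of our Jenkins and GitHub integration aren't spelled out here, but as a minimal sketch, a multibranch declarative Jenkins pipeline can express that pull-request-then-merge flow. The script names are placeholders, not our actual steps:

```
// Hypothetical sketch of the PR-then-merge flow in a multibranch pipeline.
pipeline {
    agent any
    stages {
        stage('Validate pull request') {
            when { changeRequest() }          // runs for PR builds only
            steps {
                sh './scripts/run-tests.sh'   // placeholder checks, results reported on the PR
            }
        }
        stage('Deploy') {
            when { branch 'master' }          // runs once the PR is merged to master
            steps {
                sh './scripts/deploy.sh'      // placeholder deployment step
            }
        }
    }
}
```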

Contributors, InnerSource and building together

Adopting InnerSource strategies gave us the ability early on to increase code reuse and improve knowledge sharing across the DevOps teams.

At Capital One, we use GitHub. The open transparency you get from GitHub helps foster a great InnerSource environment. Using “all-in-public code repositories” that are searchable and viewable by everyone in the company is one of the ways we cultivate our open source culture.

Evolving the product for the user

In the early days of our pipeline development, we still had teams building everything out from scratch. Updates and new rollouts had to be entered manually, and a lot of effort was wasted. Eventually, when the DevOps team was running properly on a single pipeline, we got the green light to roll the same unified system out to the entire credit card organization within Capital One, followed by adoption across the entire enterprise. At this middle stage of the journey, we put a lot of emphasis on effective communication.

Developers were encouraged to contribute to the pipeline. We used a trusted contributor model, allowing users to take on more responsibility as they demonstrated they held a high quality bar and moved up the ladder. What's key about this model is that by bringing in new contributors and letting them work their way up, you also allow the people who originally built the pipeline to move on to new projects, knowing that what they've built is in good hands.

Another aspect of scaling was the need to create and maintain thorough documentation. Documentation allows people outside the team that owns the code to become knowledgeable about the pipeline so that they can become contributors.

As the pipeline has grown in usage, we've incorporated Net Promoter Score (NPS) surveys to keep track of user happiness. This tells us how others perceive our product and lets us focus on specific users or groups to learn more about what informs their perception. This valuable feedback, in turn, helps us find strategies to make the product better.

The guiding principle throughout this whole journey has been the philosophy that this is not our team's product or somebody else’s product, but rather it's our product and everybody has a role to play.

Running with an idea can make a really big impact

As you can probably tell, building a singular pipeline was a long journey, and one we're proud to have gone on. Having a single software delivery pipeline is so much more efficient and accurate than the older ways of doing things that it frankly speaks for itself. By moving from our former ways of doing things to our current streamlined approach, we've empowered our developers to build better, faster and in a way that lets them focus on delivering software products and features that delight our customers.


Kirsteen Donachie, Senior staff software engineer

Kirsteen has many years of experience working as an engineer, designer and architect in the tech industry. She is an advocate for doing the right thing in a pragmatic way.

Yes, We’re Open Source!

Learn more about how we make open source work in our highly regulated industry.
