How to put GitOps to work for your software delivery

Colin Domoney, Consultant, Codethink

GitOps is increasingly popular in the cloud-native world because it lets developers deliver software to production using their native tooling: a pull request in Git. The underlying principle is infrastructure as code; namely, any change in operations can be effected by a change in code.

This is also the origin of the term "GitOps": operations via Git.

GitOps is typically thought of as a cloud-native enabler, given its close association with Kubernetes, where an operational environment can be expressed declaratively in code. Kubernetes will seek to ensure that the desired state (the code) matches the observed state (the live instance).
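The reconciliation idea at the heart of this model can be sketched in a few lines. This is a toy illustration of the pattern, not Kubernetes' actual controller code:

```python
# Toy reconciliation loop: a Kubernetes-style controller continuously
# drives the observed state toward the declared desired state.

def reconcile(desired: dict, observed: dict) -> list:
    """Return the actions needed to make `observed` match `desired`."""
    actions = []
    for name, spec in desired.items():
        if name not in observed:
            actions.append(f"create {name}")
        elif observed[name] != spec:
            actions.append(f"update {name}")
    for name in observed:
        if name not in desired:
            actions.append(f"delete {name}")
    return actions

desired = {"web": {"replicas": 3}, "db": {"replicas": 1}}
observed = {"web": {"replicas": 2}, "cache": {"replicas": 1}}
print(reconcile(desired, observed))  # → ['update web', 'create db', 'delete cache']
```

In GitOps, the `desired` side of this comparison lives in a Git repository, so every change to it arrives through a reviewable commit.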

A Fortune 500 organization I consulted with recently transformed its deployment of a legacy three-tier application to a global footprint. Existing deployments were slow, error-prone, fragile, and lacking in audit controls.

Here's how we solved some key enterprise software delivery pain points using GitOps—and the key benefits for your team.

Enterprise software delivery challenges

Unfortunately, at many enterprises the adoption of DevOps is nascent, and software is still deployed by manual or—at best—semi-automated methods. Central to such deployments is the infamous change advisory board, which is required (by ITIL et al.) to rubber-stamp any change request.

Ubiquitous in this process is the run book, providing instructions on how to perform the deployment. Typically, these are complex, lengthy documents describing a sequence of manual steps that must be performed. Often, steps are skipped or performed incorrectly, resulting in deployments that differ from the expected state.

This lack of reproducibility requires the intervention of operators to access production systems to correct observed discrepancies, typically via SSH access as the root user. More often than not, such interventions create as many problems as they resolve. And more important, there is a loss of control, since it is necessary to provide operational staff with privileged access to production systems.

An additional problem arises from manually intensive processes: the system cannot be audited, since there is no record of exactly what changes were made and by whom. Without this audit trail, incident resolution becomes the frustratingly slow process familiar in much enterprise software.

Frustrated users raise service tickets that are passed from one team to the next, with no one taking responsibility for resolution and teams instead blaming colleagues or vendors.

How GitOps delivered

Codethink, where I am a consultant, was tasked with transforming an organization's legacy three-tier application to one that included a global footprint. The solution comprised application deployment automation using PowerShell scripting, Azure ARM templates, Azure DevOps Pipelines, and Git as the repository. First, all of the application deployment run books were transformed into PowerShell scripts to ensure that the deployment was fully automatable and repeatable.

To make sure that the computation and database resources were built in a consistent manner, standard Azure ARM templates were used to build instances, and PowerShell Desired State Configuration (DSC) was used to ensure that the end states matched the desired states.
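The desired-state pattern that DSC applies can be reduced to two operations per resource: a test (is the state already correct?) and a set (make it correct). Here is a minimal Python sketch of that pattern, using a dict to stand in for a real filesystem:

```python
# Desired-state pattern (as in PowerShell DSC): only resources whose
# test fails get corrected, so applying a configuration is idempotent.

class FileResource:
    def __init__(self, path: str, contents: str, fs: dict):
        self.path, self.contents, self.fs = path, contents, fs  # `fs` stands in for a filesystem

    def test(self) -> bool:
        return self.fs.get(self.path) == self.contents

    def set(self) -> None:
        self.fs[self.path] = self.contents

def apply(resources) -> int:
    """Invoke set() only where test() fails; return the number of changes made."""
    changed = 0
    for r in resources:
        if not r.test():
            r.set()
            changed += 1
    return changed

fs = {"/etc/app.conf": "old"}
resources = [FileResource("/etc/app.conf", "new", fs)]
print(apply(resources))  # first run corrects the drift → 1
print(apply(resources))  # second run is a no-op → 0
```

Idempotency is what makes the automation safe to re-run: correcting drift and deploying from scratch are the same operation.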

To utilize Git as the system of record, a cluster of independent repositories was used as follows: 

  • The config repository stored the desired configuration of the deployment.
  • The validation repository performed validation of the configuration against the enterprise IT policies.
  • The deployment repository contained the scripts responsible for taking a given configuration (received, for example, from a service desk ticket) and using automation scripts to deploy the configuration to a live instance.
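The validation repository's role can be illustrated with a short sketch. The specific policy rules below (allowed regions, a VM-count limit) are hypothetical examples, not the client's actual policies:

```python
# Sketch of the validation step: check a requested configuration against
# enterprise IT policy before it may be deployed. Rules are hypothetical.

ALLOWED_REGIONS = {"westeurope", "northeurope"}
MAX_VM_COUNT = 10

def validate(config: dict) -> list:
    """Return a list of policy violations; an empty list means the config passes."""
    errors = []
    if config.get("region") not in ALLOWED_REGIONS:
        errors.append(f"region {config.get('region')!r} not approved")
    if config.get("vm_count", 0) > MAX_VM_COUNT:
        errors.append("vm_count exceeds enterprise limit")
    return errors

print(validate({"region": "westeurope", "vm_count": 4}))  # → []
print(validate({"region": "eastus", "vm_count": 20}))     # two violations
```

Because validation is itself code in a repository, the policies are versioned and reviewable in exactly the same way as the configurations they police.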

Partitioning responsibilities

A core design principle of this approach was to partition responsibilities, to ensure that individual roles and duties were rigidly enforced. This was done via Azure identity and access management, using fine-grained access controls on the individual repositories. As an example, an operator performing a deployment could not access the configuration repository, since this was not the operator's responsibility.
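Conceptually, the partition is a simple role-to-repository mapping, enforced by the identity platform rather than by convention. The role and repository names below are illustrative:

```python
# Illustration of partitioned responsibilities: each role may touch only
# the repository it owns. Names are illustrative, not the client's.

PERMISSIONS = {
    "config-author": {"config"},
    "policy-owner": {"validation"},
    "operator": {"deployment"},
}

def can_access(role: str, repo: str) -> bool:
    """True only if the role's duties include this repository."""
    return repo in PERMISSIONS.get(role, set())

print(can_access("operator", "deployment"))  # → True
print(can_access("operator", "config"))      # → False: not the operator's duty
```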

Upon a successful deployment, a repository representing the deployment would be created from the configuration, validation, and deployment repositories. This repository would be assigned to the end user requesting the resource. This ensured a hard enforcement of the principle of separation of control—only the resource owner had access to the resource.

GitOps is all about operations via Git. Here's how an operational change could be made using this solution. Each deployed instance had an associated repository containing full details of the instance. To modify an instance—for example, to add a new database—a change could be made to the configuration code, which could be checked in and reviewed in a pull request.

Upon a successful merge, a continuous integration (CI) process would run, using automation to bring the end state into accordance with the new desired state. That is precisely the promise of GitOps.

Key benefits for your team

This relatively novel approach to solving a traditional enterprise pain point using GitOps brings with it a number of key benefits. First, all changes are fully declarative based on code or configuration; gone are the run books of yore. Probably the main benefit is that Git is the underlying system of record, so there is a very strong and auditable record of change.

All changes to the target deployment are made via changes to Git and, as such, every change is held in a Git changeset (underpinned by the mathematics of a Merkle tree). Other interesting areas for enhancement include the ability to automate pull request reviews using algorithms to determine the impact of the change. Think of this as an automated change advisory board!
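One way such an automated review could work is a risk score derived from the files a pull request touches. This is a toy heuristic; the path prefixes, weights, and threshold are invented for illustration:

```python
# Toy "automated change advisory board": score a pull request's risk
# from the files it changes. Weights and threshold are invented.

RISK_WEIGHTS = {"database/": 5, "network/": 3, "app/": 1}

def change_risk(changed_files: list) -> int:
    """Sum the risk weight of every changed file that matches a known prefix."""
    return sum(
        weight
        for path in changed_files
        for prefix, weight in RISK_WEIGHTS.items()
        if path.startswith(prefix)
    )

def review_decision(changed_files: list, threshold: int = 4) -> str:
    return "auto-approve" if change_risk(changed_files) < threshold else "needs human review"

print(review_decision(["app/settings.json"]))                          # → auto-approve
print(review_decision(["database/schema.sql", "app/settings.json"]))   # → needs human review
```

Low-risk changes could merge automatically while high-risk ones are routed to a human reviewer, replacing a blanket change advisory board with a graduated one.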

Since every stage of the deployment is performed by a step in a CI process, one can perform a post-mortem on a failed deployment to determine at which stage the failure occurred and what remedial action is needed. This eliminates the all-too-familiar finger-pointing of manual processes by enforcing a clean partition of responsibility.

Most important for the enterprise, however, is the rigid enforcement of separation of control, by virtue of identity and access controls that use Git repositories as the enforcement boundaries. It is no longer possible for operators to directly access deployments; rather, such access must be attained via a relevant changeset in a repository.

Colin Domoney presented on GitOps for enterprise software delivery at BristoCyberCon on May 15. Has your team turned to GitOps? Share your experiences in the comments section.
