In the previous blog post, we discussed the role containers and Kubernetes play in modern DevOps and how rapidly enterprises have adopted them for application delivery. In this blog, we take a look at the container-based application delivery pipeline for continuous integration/continuous delivery (CI/CD) and its challenges.
DevOps with Containers: The Workflow
DevOps is a combination of development and operations. In the case of containerization, source code goes through a series of transformations to produce Docker images that are eventually provisioned as Docker containers, as shown below:
As shown in the illustration, container-based application delivery starts with an automated trigger within the enterprise build infrastructure, such as Jenkins, which picks up the source code from a version control system and compiles it into application binaries. These application binaries (WARs, JARs, etc.) are then checked into an artifact repository such as JFrog Artifactory.
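As a sketch, the build-and-publish stages described above could be expressed as a declarative Jenkins pipeline. The repository URL, Maven build command, artifact path, and Artifactory coordinates below are illustrative assumptions, not details from the original post:

```groovy
// Illustrative Jenkinsfile sketch; repo URL, build tool, and
// artifact repository coordinates are assumptions.
pipeline {
    agent any
    stages {
        stage('Checkout') {
            steps {
                // Pull source from the version control system
                git url: 'https://example.com/acme/app.git', branch: 'main'
            }
        }
        stage('Build') {
            steps {
                // Compile source into application binaries (a WAR/JAR)
                sh 'mvn -B clean package'
            }
        }
        stage('Publish Artifact') {
            steps {
                // Check the binary into an artifact repository (e.g. JFrog Artifactory)
                sh 'curl -u $ART_USER:$ART_TOKEN -T target/app.war ' +
                   'https://example.jfrog.io/artifactory/libs-release-local/app.war'
            }
        }
    }
}
```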
Within the container world, there is a fundamental shift in how application binaries are deployed to a runtime environment. In the VM world, a series of post-processing tasks would deploy and set up the application and its environment. In the container world, by contrast, the application, with all of its stack dependencies and configuration, must be bundled together as a single Docker image. This Docker image is then readily deployed to a container runtime environment, which means the Docker image needs to be finalized as part of the build process.
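This bundling step can be sketched as a minimal Dockerfile. The base image, artifact name, config file, and port here are illustrative assumptions:

```dockerfile
# Illustrative sketch: bundle the application binary, its runtime
# stack, and its configuration into a single image.
# Base image, artifact name, and paths are assumptions.
FROM tomcat:9-jdk11

# Copy the built WAR from the CI workspace into the servlet container
COPY target/app.war /usr/local/tomcat/webapps/app.war

# Application-level configuration baked into the image at build time
COPY config/app.properties /usr/local/tomcat/conf/app.properties

EXPOSE 8080
CMD ["catalina.sh", "run"]
```

Because the configuration is baked in at build time, any change to it means rebuilding and redeploying the image rather than editing a running environment.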
This makes the continuous delivery process in the container world complex, time-consuming, and laborious, as it involves manually writing scripts, Dockerfiles, etc., to generate Docker images with the appropriate configuration and environment information. Also, when deploying to a container runtime such as Kubernetes, a corresponding deployment manifest (YAML file) needs to be created and supplied along with the images.
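For example, deploying the resulting image to Kubernetes requires a manifest along the following lines. The names, image reference, replica count, and port are illustrative assumptions:

```yaml
# Illustrative Kubernetes Deployment manifest; names, image
# reference, and replica count are assumptions for the sketch.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: registry.example.com/my-app:1.0.0
          ports:
            - containerPort: 8080
```

Every application and every environment needs its own such manifest, which is part of what makes hand-written scripting hard to scale.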
CI/CD with Scripts: Will it scale for Containers?
| VM-based deployment | Container-based deployment |
| --- | --- |
| IT teams set up VMs with the OS and app stacks, then build, deploy, and configure app artifacts on a running VM | IT teams create a single container image pre-built with configurations and app artifacts |
| App updates are manually scripted and re-configured on a running VM | Containers are lightweight and are rebuilt and re-deployed with app updates |
| Modifications and fixes are performed after logging into the VM | Containers are never modified; they are replaced with a new image that contains the fix |
As the table above shows, an app deployed on a running VM requires a complex build strategy, with multiple fragmented, spaghetti-like scripts written to handle changes to the application. Shifting from a VM-based to a container-based app delivery model involves re-engineering these scripts to accommodate containers, which can itself create app delivery problems. Even if container-based changes are made to the existing scripts, the approach has its own set of challenges:
- Manual scripting can cause a lot of disruption, as it involves major changes to accommodate a container-based application delivery approach.
- While writing scripts might work for a single app deployment, it does not scale to hundreds of apps across an enterprise.
- With multiple developers writing a large number of custom scripts, it also raises concerns about the standardization of best practices.
- It provides no visual tracking, visibility, or data for future use.
- It requires teams to learn Docker and Kubernetes configuration, resulting in a long learning curve and a focus on infrastructure rather than on the application.
As a result of these challenges, container-based application delivery requires upfront automation and intelligence to allow for smoother adoption. As enterprises scale their adoption, the need for an automated platform grows.
In the final post of the Modern DevOps with Containers series, we will look at how a platform-based approach can help enterprises simplify, accelerate, and scale their container journey.