Are you aware of Docker complexities?

How will you intuitively manage a large number of containers?

Organizations using the cloud already struggle to manage their hardware resources. Critical housekeeping, such as tracking each VM's owner and application usage, is often missed, and the utilization of provisioned VMs is rarely known. A lack of best practices and the right tools leads to scattered access and key management.

Containers take this complexity to another level because they are ephemeral: they are best designed stateless so that they can easily be migrated and restarted in different locations as needed. Containers also grow far more numerous than VMs, and because they typically run on top of those same VMs, the existing challenges of tracking, bookkeeping and access management multiply.

The need of the hour is a platform with intuitive, UI-based management and built-in best practices that keeps you efficient: application-centric drill-down into the container infrastructure, intuitive naming of containers and efficient access management. And it is needed sooner rather than later, because you usually recognize these problems only when they can no longer be fixed easily.

If left uncontrolled early on, how will you later measure container resource usage for chargebacks and prevent resource abuse?

Container configurations such as CPU and RAM limits are set through command-line options when the container is run. If every user in the company launches containers this way, it soon becomes very hard to track container configurations. Users simply copy-paste options from other configurations without knowing which values are right, which defeats the purpose. It is also difficult to hold users to a quota, and chargebacks become complicated to implement.
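
A rough sketch of this ad-hoc, per-launch configuration, using the Docker SDK for Python (the CLI equivalent would be docker run --memory 512m --cpus 1 nginx); the image and limits are illustrative only:

```python
# Ad-hoc resource limits chosen at launch time: easy to copy-paste between
# users and scripts, hard to track, standardize or meter for chargebacks.
import docker

client = docker.from_env()

container = client.containers.run(
    "nginx:latest",             # illustrative image
    detach=True,
    mem_limit="512m",           # RAM cap for this container
    nano_cpus=1_000_000_000,    # 1.0 CPU, expressed in units of 1e-9 CPUs
)
print(container.name, container.status)
```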

A good platform provides options to create reusable container configurations: in effect, your own container types, similar to AWS instance types. An option to use a custom container configuration should also be available, because an enterprise's heterogeneous workloads will always need some custom resource allocation.

Do you know how to leverage container agility for up to 80% infrastructure savings?

In the VM era, launching an application from scratch used to take hours. Containers have changed the dynamics of application delivery: they can now be launched in sub-second times, fast enough to start an application in response to a web request. So why keep applications running all the time? If the total available infrastructure is much larger than what is actually utilized (utilization of 50% or less), a good Docker platform can exploit this gap and reduce costs dramatically.

Containers can easily be hibernated when they have been idle for a configured period, and then reactivated automatically on the next web request. Combining hibernation with techniques like container density tuning and container pooling, organizations can save a large share of their hardware resources.
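
As a rough sketch of how hibernation might be implemented (assuming, for illustration, that idleness can be approximated by near-zero CPU usage between the two samples Docker returns per stats call; a real platform would track incoming requests instead), a small reaper using the Docker SDK for Python could stop idle containers so a front-end proxy can start them again on demand:

```python
# Hibernation sketch: stop containers whose CPU usage is effectively zero.
import docker

client = docker.from_env()
CPU_IDLE_THRESHOLD = 0.01  # fraction of total host CPU treated as "idle"

def cpu_fraction(stats):
    """Approximate CPU usage from the current and previous stats samples."""
    cpu = (stats["cpu_stats"]["cpu_usage"]["total_usage"]
           - stats["precpu_stats"].get("cpu_usage", {}).get("total_usage", 0))
    system = (stats["cpu_stats"].get("system_cpu_usage", 0)
              - stats["precpu_stats"].get("system_cpu_usage", 0))
    return cpu / system if system > 0 else 0.0

for container in client.containers.list():        # running containers only
    stats = container.stats(stream=False)          # one-shot stats sample
    if cpu_fraction(stats) < CPU_IDLE_THRESHOLD:
        container.stop()                           # "hibernate" the idle container
        # A front-end proxy would call container.start() on the next request.
```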

Are you monitoring containers, containerized applications and infrastructure from separate consoles?

Infrastructure teams use their favorite monitoring tools to monitor at multiple levels, including applications, VMs and hosts. With containers added to the infrastructure, yet another layer needs to be monitored, while users still expect an integrated monitoring experience.

The Docker platform should not only provide monitoring across applications, containers and infrastructure; it should also be flexible enough to accommodate the user's favorite monitoring tools.

Are you ready to launch complex applications on different infrastructures including private or public cloud?

Docker has made workload portability almost seamless: it is now very easy to migrate a containerized workload to any part of your infrastructure. The catch is that an application is rarely a single container. You need to automate the deployment and delivery of an entire application stack (a set of orchestrated container images), and you need to be able to launch it on different infrastructures, including private and public clouds (if not now, then in the future).

A platform that is ready to burst your applications to the cloud today can save a lot of research and implementation cost later, whether you need to expand deployments into the cloud or pull applications back into a more secure private deployment.

Are you sure that you are using standard software stacks in your Docker infrastructure?

VM images are typically base images containing the OS and common infrastructure; they are few in number and maintained by the infrastructure team. Docker images, on the other hand, are far more portable and have become part of the build and release process, so DevOps teams now build images very often. Standardization is much harder to reach in this case, because the infrastructure team has little visibility into, or control over, the content of the images users actually run.

You need a platform that lets the infrastructure team intuitively visualize the correlation between Docker images and the software stacks they contain.

Can you upgrade your software stack in a couple of hours?

Docker provides a great abstraction, the Docker image, that separates the base software stack from application files and data. This can be leveraged to upgrade the software stack in record time, assuming the software in the stack is backward compatible. A good platform should use this to give the infrastructure and operations teams quick and easy upgrades.
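
A minimal sketch of such an upgrade with the Docker SDK for Python, assuming the application's files and data live in a named volume so the container itself is disposable (the image, container and volume names are illustrative):

```python
# Swap the software stack under an application by recreating its container
# from a newer image while reattaching the same data volume.
import docker

client = docker.from_env()

# Pull the upgraded stack image.
client.images.pull("php", tag="8.3-apache")

# Replace the running container.
old = client.containers.get("storefront")
old.stop()
old.remove()

client.containers.run(
    "php:8.3-apache",
    name="storefront",
    detach=True,
    volumes={"storefront-data": {"bind": "/var/www/html", "mode": "rw"}},
    ports={"80/tcp": 8080},
)
```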

Are you able to configure automated backups for your container-based infrastructure and recover them easily?

Multiple Docker volumes can be attached to a container with the -v option. This adds yet another piece of bookkeeping that any container-based infrastructure has to do, and the complexity grows enormously in a distributed deployment given the ephemeral nature of containers: after all, a container can be migrated to any part of the infrastructure.
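
For illustration, a hedged sketch of this bookkeeping through the Docker SDK for Python: attaching a named volume (the SDK equivalent of docker run -v) and backing it up by mounting it read-only into a throwaway container (names and paths are assumptions):

```python
# Attach a named volume to a database container, then back up the volume.
import docker

client = docker.from_env()

# Equivalent to: docker run -v app-data:/var/lib/postgresql/data postgres:16
client.containers.run(
    "postgres:16",
    name="appdb",
    detach=True,
    volumes={"app-data": {"bind": "/var/lib/postgresql/data", "mode": "rw"}},
    environment={"POSTGRES_PASSWORD": "example"},
)

# Back up the volume by mounting it read-only into a throwaway container
# that tars its contents into a host directory.
client.containers.run(
    "alpine:3.20",
    command="tar czf /backup/app-data.tgz -C /data .",
    volumes={
        "app-data": {"bind": "/data", "mode": "ro"},
        "/srv/backups": {"bind": "/backup", "mode": "rw"},
    },
    remove=True,
)
```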

The solution you opt for should not only organize container files and data, but also offer innovative ways to simplify and streamline the whole backup and recovery process.

With so many new players in the Docker ecosystem, are you sure you are using proven Enterprise-grade software?

With a plethora of tools in the Docker landscape, it is easy to get caught up in the moment and use whatever is available and compatible right now. Many of these start-up offerings have never been tested under enterprise-scale load.

It is advisable to use a platform built on software that has already been tested under load and is well supported. This becomes even more important when you assemble the tools yourself, where the choice of tools is far more diverse and complex.

As you move towards self-service, how do you ensure proper control of access to resources?

Security best practices must be followed for secure container isolation, the security of the Docker daemon, user access controls and more. If you choose to run the Docker deployment yourself, you not only need to be thorough with the security implementation, you also need to keep it up to date.

A Docker platform can abstract this away and provide better access controls. Because low-level security is not handled directly by users, the chances of accidental or malicious abuse are much lower. Built-in access control in the platform can grant the right access to container infrastructure, functionality and applications.

Are you confident in the longevity of the tools and technologies you’ve selected and assembled?

With so many tools in the Docker landscape, it is easy to get caught up in the moment and assemble whatever is available and compatible right now. Those tools may not remain compatible, or even supported, in the future, and maintaining such a diverse toolchain over the long run is very tricky.

You need a platform that abstracts these tools beneath its implementation and takes care of supporting them in the future.

Did you figure out the right set of compatible Docker tools for your immediate and upcoming needs? Will they be compatible in the future?

As you set up your Docker deployments, you will need a lot of plumbing (as Solomon Hykes put it at DockerCon 2015): orchestration, scheduling and service discovery through to data management, networking and the evolving beta versions of the tools that provide them.

Even after a good amount of plumbing, you may find that the pieces no longer fit together: the tools you were using are no longer compatible, your future use cases are not supported by the current solution, or in some cases the tool simply no longer exists.

It is better to let a professional platform take care of this plumbing as part of its implementation; this is where a good Docker platform can help you.

For any complex application, do you spend considerable time automating the deployment?

While setting up a web application, you need to set up the databases, caching services and web servers, and tie them together with a mechanism that load-balances calls across the web nodes. This application orchestration is crucial to the project. Wouldn't it be great if you didn't need to think about setting up load balancers for availability and scalability, and could instead just provide your application and launch it with a few intuitive commands or clicks in a UI?
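
To make this concrete, here is a minimal hand-rolled sketch with the Docker SDK for Python: a private network, a database, a cache and two web replicas. The myshop-web image, its environment variables and the load balancer that would sit in front of the replicas are all assumptions; a platform or UI would generate this wiring for you:

```python
# Hand-wired application stack: network, database, cache and web replicas.
import docker

client = docker.from_env()
client.networks.create("shop-net", driver="bridge")

client.containers.run("postgres:16", name="shop-db", detach=True,
                      network="shop-net",
                      environment={"POSTGRES_PASSWORD": "example"})
client.containers.run("redis:7", name="shop-cache", detach=True,
                      network="shop-net")

# Two web replicas; a load balancer (not shown) would forward requests
# to shop-web-1 and shop-web-2.
for i in (1, 2):
    client.containers.run("myshop-web", name=f"shop-web-{i}", detach=True,
                          network="shop-net",
                          environment={"DB_HOST": "shop-db",
                                       "CACHE_HOST": "shop-cache"})
```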

What if an intuitive UI could help you connect services in a scalable way, and also connect them to non-containerized external services?

Can you reuse existing scripts and your investment in Puppet/Chef/Ansible when moving to Docker?

DevOps teams have invested heavily in their current build and release processes, often around configuration management tools like Puppet, Chef or Ansible. Having to rebuild the whole process from scratch would be a major barrier to moving to Docker.

You should be able to reuse existing scripts, including Puppet/Chef/Ansible scripts, rather than writing Dockerfiles from scratch. A platform that offers both the advantages of idempotent Chef/Puppet/Ansible and of immutable Docker can preserve your investment in these tools: the workload can still be generated with the configuration management scripts you already have, while deployment to different environments is configured with immutable Docker images.
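
One way this can look in practice (a sketch only; the playbook.yml file, base image and package names are assumptions) is to run an existing idempotent Ansible playbook during the image build, so the output is an immutable Docker image generated from scripts you already own:

```python
# Build an immutable image whose software stack is prepared by an
# existing Ansible playbook (assumed to sit next to this script).
import pathlib
import docker

DOCKERFILE = """\
FROM ubuntu:22.04
RUN apt-get update && apt-get install -y ansible
COPY playbook.yml /tmp/playbook.yml
# Apply the existing playbook to the image itself (local connection).
RUN ansible-playbook -i localhost, -c local /tmp/playbook.yml
"""

pathlib.Path("Dockerfile").write_text(DOCKERFILE)

client = docker.from_env()
image, _logs = client.images.build(path=".", tag="myapp:ansible-built")
print(image.tags)
```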

Additionally, the platform should be able to manage the life cycle of application processes effectively, including detecting and restarting dead processes and managing start-up and shutdown sequences.

Do you have an intuitive UI to manage containers from an application perspective, and to debug by drilling down to the problem containers?

As the number of containers in the infrastructure grows, managing them as a flat list is no longer efficient. Application-centric management of these containers proves to be much more productive.

A Docker platform can facilitate this with an intuitive UI in which you drill down from an application and its services to the corresponding containers. You can also debug troublesome containers by looking at their logs and taking corrective action.
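
A minimal sketch of that drill-down with the Docker SDK for Python, assuming containers carry a label identifying the application they belong to (the app=storefront label is an assumption):

```python
# Find a given application's containers, surface the unhealthy ones,
# print their recent logs and restart them.
import docker

client = docker.from_env()

for container in client.containers.list(all=True,
                                        filters={"label": "app=storefront"}):
    if container.status != "running":
        print(container.name, container.status)
        print(container.logs(tail=20).decode())   # last 20 log lines
        container.restart()                        # simple corrective action
```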

Do you realize the benefits of automated microservice deployments with minimal effort?

Previously, only large SaaS vendors with big IT teams could afford microservice-based deployments. With the advent of Docker, microservices architecture is gaining wider popularity. The advantages of microservices, such as faster updates and horizontal scalability of individual services, are very desirable, but managing a large number of services is complex.

A platform that automates the deployment and scaling of these services can greatly boost the agility of the organization's release process.

Did you manage to address the need for scalability on a heterogeneous infrastructure?

Tools from major SaaS vendors are designed for homogeneous workloads running on sophisticated infrastructure. They may not be fully relevant to the requirements of an enterprise whose infrastructure is neither as large nor as homogeneous.

An enterprise Docker platform must provide options for scalability and availability on the heterogeneous infrastructure that enterprises actually have.

Email: info@wavemaker.com
Call: +1 (866) 660-6099