7 Design Principles for Containers

Hemant Jain
8 min read · Sep 25, 2022


“Cloud native” is a term used to describe applications designed specifically to run on a cloud-based infrastructure. Typically, cloud-native applications are developed as loosely coupled microservices running in containers managed by container orchestration platforms.


These applications anticipate failure, and they run and scale reliably even when their underlying infrastructure is experiencing outages. To offer such capabilities, cloud-native platforms impose a set of contracts and constraints on the applications running on them.

Nowadays, it is possible to put almost any application in a container and run it. But creating a containerized application that can be automated and orchestrated effectively by a cloud-native platform such as Kubernetes requires additional effort.

The ideas below are inspired by many other works such as “The Twelve-Factor App,” in which the scope ranges from source code management to application scalability models. However, the scope of the following principles is constrained to designing containerized microservices-based applications for cloud-native platforms such as Kubernetes.

The principles for creating containerized applications listed below use the container image as the basic primitive and the container orchestration platform as the target container runtime environment.

Following these principles will ensure that the resulting containers behave like good cloud-native citizens in most container orchestration engines, allowing them to be scheduled, scaled, and monitored in an automated fashion.

These principles are presented in no particular order.

1. SINGLE CONCERN PRINCIPLE (SCP)

The SCP dictates that every container should address a single concern and do it well. Achieving it is easier than achieving the single responsibility principle (SRP) in the object-oriented world, because containers usually manage a single process, and most of the time that single process addresses a single concern.

If your containerized microservice needs to address multiple concerns, it can use patterns such as sidecar and init-containers to combine multiple containers into a single deployment unit (pod), where each container still handles a single concern.
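
As an illustration, here is a minimal sketch of such a deployment unit: one pod whose containers each handle a single concern. The image names, the schema-migration init container, and the Fluent Bit log-shipping sidecar are assumptions made for the example, not something the article prescribes.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shop-api
spec:
  initContainers:
    - name: schema-migrate            # runs to completion before the main containers start
      image: example.com/shop/schema-migrate:1.4.0
  containers:
    - name: api                       # single concern: serve the application API
      image: example.com/shop/api:1.4.0
      volumeMounts:
        - name: logs
          mountPath: /var/log/app
    - name: log-shipper               # sidecar with its own single concern: ship logs
      image: fluent/fluent-bit:2.1
      volumeMounts:
        - name: logs
          mountPath: /var/log/app
  volumes:
    - name: logs
      emptyDir: {}
```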

Similarly, you can swap containers that address the same concern. For example, you can replace the web server container, or a queue implementation container, with a newer and more scalable one.

2. HIGH OBSERVABILITY PRINCIPLE (HOP)

Containers provide a unified way for packaging and running applications by treating them like a black box. But any container aiming to become a cloud-native citizen must provide application programming interfaces (APIs) for the runtime environment to observe the container health and act accordingly. This is a fundamental prerequisite for automating container updates and life cycles in a unified way, which in turn improves the system’s resilience and user experience.

In practical terms, at a very minimum, your containerized application must provide APIs for the different kinds of health checks: liveness and readiness. Better-behaved applications should also provide other means to observe their state.
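
As a sketch, this is how the two basic health-check APIs might be wired into the container section of a pod spec. The /healthz and /ready paths and port 8080 are assumptions; they should match whatever endpoints your application actually exposes.

```yaml
containers:
  - name: api
    image: example.com/shop/api:1.4.0
    ports:
      - containerPort: 8080
    livenessProbe:              # "is the process still healthy enough to keep running?"
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 10
      periodSeconds: 10
    readinessProbe:             # "is the application ready to receive traffic?"
      httpGet:
        path: /ready
        port: 8080
      periodSeconds: 5
```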

The application should log important events into the standard error (STDERR) and standard output (STDOUT) for log aggregation by tools such as Fluentd and Logstash and integrate with tracing and metrics-gathering libraries such as OpenTracing, Prometheus, and others.

3. LIFE-CYCLE CONFORMANCE PRINCIPLE (LCP)

The HOP dictates that your container provide APIs for the platform to read from. The LCP dictates that your application have a way to read the events coming from the platform. Moreover, apart from getting events, the container should conform and react to those events. This is where the name of the principle comes from. It is almost like having a “write API” in your application to interact with the platform.

But some events are more important than others. For example, any application that requires a clean shutdown process needs to catch the terminate signal (SIGTERM) and shut down as quickly as possible. This avoids the forceful shutdown through the kill signal (SIGKILL) that follows a SIGTERM.

There are also other events, such as PostStart and PreStop, that might be significant to your application life-cycle management. For example, some applications need to warm up before servicing requests and some need to release resources before shutting down cleanly.
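
The sketch below wires both hooks into a container and bounds the graceful-shutdown window discussed above. The warm-cache.sh and drain.sh scripts are hypothetical placeholders for your application's own warm-up and shutdown logic.

```yaml
containers:
  - name: api
    image: example.com/shop/api:1.4.0
    lifecycle:
      postStart:                # runs right after the container starts, e.g. to warm a cache
        exec:
          command: ["/bin/sh", "-c", "/opt/app/warm-cache.sh"]
      preStop:                  # runs before SIGTERM is sent, e.g. to drain connections
        exec:
          command: ["/bin/sh", "-c", "/opt/app/drain.sh"]
# Pod-level setting: how long the platform waits for a clean shutdown before sending SIGKILL.
terminationGracePeriodSeconds: 30
```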

4. IMAGE IMMUTABILITY PRINCIPLE (IIP)

Containerized applications are meant to be immutable, and once built are not expected to change between different environments. This implies the use of an external means of storing the runtime data and relying on externalized configurations that vary across environments, rather than creating or modifying containers per environment. Any change in the containerized application should result in building a new container image and reusing it across all environments. The same principle is also popular under the name of immutable server/infrastructure and used for server/host management, too.

Following the IIP prevents the creation of similar container images for different environments; instead, one container image is built and then configured for each environment. This principle allows practices such as automatic roll-back and roll-forward during application updates, which is an important aspect of cloud-native automation.
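
A minimal sketch of what that looks like in practice, assuming a hypothetical shop-api image and a per-environment ConfigMap named shop-api-config: the same image reference is promoted unchanged from one environment to the next, and only the externalized configuration differs.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: shop-api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: shop-api
  template:
    metadata:
      labels:
        app: shop-api
    spec:
      containers:
        - name: api
          # The same immutable image (ideally pinned by tag or digest) is deployed
          # to every environment without being rebuilt or modified.
          image: example.com/shop/api:1.4.0
          envFrom:
            - configMapRef:
                name: shop-api-config   # only this ConfigMap's contents differ per environment
```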

5. PROCESS DISPOSABILITY PRINCIPLE (PDP)

One of the primary motivations for moving to containers is that they are meant to be as ephemeral as possible and ready to be replaced by another container instance at any point in time. There are many reasons to replace a container: a failed health check, a scale-down of the application, migration of the container to a different host, resource starvation on the platform, or another issue.

This means that containerized applications must keep their state externalized or distributed and redundant. It also means the application should start up and shut down quickly, and even be ready for a sudden, complete hardware failure. Another helpful practice in implementing this principle is to create small containers. Containers in cloud-native environments may be automatically scheduled and started on different hosts, and smaller images lead to quicker start-up times because the container image has to be pulled to the host system before the container can be started.

6. SELF-CONTAINMENT PRINCIPLE (S-CP)

This principle dictates that a container should contain everything it needs at build time. The container should rely only on the presence of the Linux® kernel and have any additional libraries added into it at the time the container is built. In addition to the libraries, it should also contain things such as the language runtime, the application platform if required, and other dependencies needed to run the containerized application.

The only exceptions are things such as configurations, which vary between environments and must be provided at runtime, for example through a Kubernetes ConfigMap. Some applications are composed of multiple containerized components.

For example, a containerized web application may also require a database container. This principle does not suggest merging both containers. Instead, it suggests that the database container contain everything needed to run the database, and the web application container contain everything needed to run the web application, such as the web server. At runtime, the web application container will depend on and access the database container as needed.
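
A short sketch of that runtime dependency, assuming the database is reachable through a Kubernetes Service named orders-db; the image and variable names are illustrative.

```yaml
containers:
  - name: web
    image: example.com/shop/web:1.4.0    # contains the web server, runtime, and application code
    env:
      - name: DB_HOST
        value: orders-db                 # Service in front of the database container
      - name: DB_PORT
        value: "5432"
```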

7. RUNTIME CONFINEMENT PRINCIPLE (RCP)

S-CP looks at containers from a build-time perspective and at the resulting binary with its content. But a container is not just a single-dimensional black box with one size on disk. Containers have multiple dimensions at runtime, such as memory usage, CPU usage, and other resource consumption.

The RCP suggests that every container declare its resource requirements and pass that information to the platform. The resource profile of a container, in terms of CPU, memory, networking, and disk, influences how the platform performs scheduling, auto-scaling, and capacity management, and how it meets the general service-level agreements (SLAs) of the container.
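
For example, CPU and memory requirements are declared directly in the container spec; the numbers below are illustrative assumptions, not recommendations.

```yaml
containers:
  - name: api
    image: example.com/shop/api:1.4.0
    resources:
      requests:              # what the scheduler reserves for the container
        cpu: "250m"
        memory: "256Mi"
      limits:                # the ceiling the container is expected to stay within
        cpu: "500m"
        memory: "512Mi"
```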

In addition to passing the resource requirements of the container, it is also important that the application stay confined to the indicated resource requirements. If the application stays confined, the platform is less likely to consider it for termination and migration when resource starvation occurs.

CONCLUSION

Cloud native is more than an end state; it is a way of working. This article described a number of principles that represent foundational guidelines containerized applications must comply with in order to be good cloud-native citizens.

In addition to those principles, creating good containerized applications requires familiarity with other container-related best practices and techniques. While the principles described above are more fundamental and apply to most use cases, the best practices listed below require judgment about when to apply them.

Here are some of the more common container-related best practices; several of them are illustrated in the sketch after the list:

  • Aim for small images. Create smaller images by cleaning up temporary files and avoiding the installation of unnecessary packages. This reduces container size, build time, and networking time when copying container images.
  • Support arbitrary user IDs. Avoid using the sudo command or requiring a specific user ID to run your container.
  • Mark important ports. While it is possible to specify port numbers at runtime, specifying them with the EXPOSE instruction makes it easier for both humans and software to use your image.
  • Use volumes for persistent data. The data that needs to be preserved after a container is destroyed must be written to a volume.
  • Set image metadata. Image metadata in the form of tags, labels, and annotations makes your container images more usable, resulting in a better experience for developers using your images.
  • Synchronize host and image. Some containerized applications require the container to be synchronized with the host on certain attributes such as time and machine ID.
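
The sketch below touches several of these practices from the Kubernetes side; all names, user IDs, and paths are illustrative assumptions.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shop-api
  labels:                              # metadata that makes the pod easier to discover and manage
    app: shop-api
    version: "1.4.0"
spec:
  securityContext:
    runAsNonRoot: true                 # works with arbitrary, non-root user IDs
    runAsUser: 1001
  containers:
    - name: api
      image: example.com/shop/api:1.4.0
      ports:
        - containerPort: 8080          # the port the image marks with EXPOSE
      volumeMounts:
        - name: data
          mountPath: /var/lib/app      # persistent data goes to a volume, not the container layer
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: shop-api-data
```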


Written by Hemant Jain

Sr. SRE at Oracle, Ex-PayPal, Ex-RedHat. Professional Graduate Student interested in Cloud Computing and Advanced Big Data Processing and Optimization.
