Introduction
Containerization has transformed the world of software development and deployment. Docker, a leading containerization platform, leverages Linux namespaces, cgroups, and chroot to provide robust isolation, resource management, and security.
In this hands-on guide, we’ll skip the theory (go through the attached links above if you want to learn more about the mentioned topics) and jump straight into the practical implementation.
Before we delve into building our own Docker-like environment using namespaces, cgroups, and chroot, it’s important to clarify that this hands-on guide is not intended to replicate Docker’s full functionality.
Docker has features such as layered images, networking, container orchestration, and extensive tooling that make it a powerful and versatile solution for deploying applications.
The purpose of this guide is to offer an educational exploration of the foundational technologies that form the core of Docker. By building a basic container environment from scratch, we aim to gain a deeper understanding of how these underlying technologies work together to enable containerization.
Let’s build Docker
Step 1: Setting Up the Namespace
To create an isolated environment, we start by setting up new namespaces. We use the unshare command, specifying the namespace flags --uts, --pid, --net, --mount, and --ipc, which give our container separate instances of system identifiers and resources.
unshare --uts --pid --net --mount --ipc --fork
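For a quick sanity check that the new namespaces are in effect, something like the following might help (run inside the unshared shell; exact output depends on your system):
# the UTS namespace lets us change the hostname without touching the host
hostname container1
hostname
# thanks to --pid and --fork, this shell is PID 1 in the new PID namespace
echo $$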
Step 2: Configuring the cgroups
Cgroups (control groups) help manage resource allocation and control the usage of system resources by our containerized processes.
We create a new cgroup for our container and assign CPU quota limits to restrict its resource usage.
mkdir /sys/fs/cgroup/cpu/container1
echo 100000 > /sys/fs/cgroup/cpu/container1/cpu.cfs_quota_us
echo 0 > /sys/fs/cgroup/cpu/container1/tasks
echo $$ > /sys/fs/cgroup/cpu/container1/tasks
On the third and fourth lines we write to the tasks file within the /sys/fs/cgroup/cpu/container1/ directory. The tasks file controls which processes are assigned to a particular cgroup. Writing 0 is shorthand for the writing process itself, and $$ is a special shell variable that expands to the process ID (PID) of the current shell or script, so both lines attach the current shell to the container1 cgroup; the second form simply makes the intent explicit.
This ensures that any subsequent child processes spawned by the shell or script will also be part of the container1 cgroup, and their resource usage will be subject to the specified CPU quota limit.
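Note that the paths above assume the cgroup v1 CPU controller. On a distribution that has moved to cgroup v2 (the unified hierarchy), those files won’t exist; a rough equivalent, assuming cgroup v2 is mounted at /sys/fs/cgroup and the cpu controller is enabled in the parent’s cgroup.subtree_control, might look like this:
mkdir /sys/fs/cgroup/container1
# "quota period" in microseconds: at most 100000us of CPU time per 100000us window
echo "100000 100000" > /sys/fs/cgroup/container1/cpu.max
# cgroup v2 uses cgroup.procs instead of tasks; $$ attaches the current shell
echo $$ > /sys/fs/cgroup/container1/cgroup.procs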
Step 3: Building the Root File System
To create the file system for our container, we use debootstrap to set up a minimal Ubuntu environment within a directory named "ubuntu-rootfs". This serves as the root file system for our container.
debootstrap focal ./ubuntu-rootfs http://archive.ubuntu.com/ubuntu/
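If the command above fails because debootstrap is missing, it can be installed from the host’s package manager, and a quick listing afterwards confirms the minimal root file system was created (a sketch, assuming a Debian/Ubuntu host):
apt install debootstrap
ls ./ubuntu-rootfs
# bin  boot  dev  etc  home  lib  ...  usr  var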
Step 4: Mounting and Chrooting into the Container
We mount essential file systems, such as /proc, /sys, and /dev, within our container’s root file system. Then we use the chroot command to change the root directory to our container’s file system.
mount -t proc none ./ubuntu-rootfs/proc
mount -t sysfs none ./ubuntu-rootfs/sys
mount -o bind /dev ./ubuntu-rootfs/dev
chroot ./ubuntu-rootfs /bin/bash
The first command mounts the proc filesystem into the ./ubuntu-rootfs/proc directory. The proc filesystem exposes information about processes and system resources as virtual files. Mounting it here allows processes within the ./ubuntu-rootfs/ environment to access and interact with the system’s process-related information.
The next command mounts the sysfs filesystem into the ./ubuntu-rootfs/sys directory. The sysfs filesystem exposes information about devices, drivers, and other kernel subsystems in a hierarchical format. Mounting it here enables processes within the ./ubuntu-rootfs/ environment to access the system information exposed through the sysfs interface.
Finally, we bind-mount the /dev directory onto the ./ubuntu-rootfs/dev directory. The /dev directory contains device files that represent physical and virtual devices on the system. With the bind mount in place, any device files accessed within the ./ubuntu-rootfs/ environment are redirected to the corresponding devices on the host, so processes running within the environment can interact with the devices they need as if they were accessing them directly on the host system.
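When you eventually exit the chrooted shell, these mounts remain in place; a minimal cleanup sketch, run from the host shell outside the chroot, might be:
umount ./ubuntu-rootfs/dev
umount ./ubuntu-rootfs/sys
umount ./ubuntu-rootfs/proc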
Step 5: Running Applications within the Container
Now that our container environment is set up, we can install and run applications within it. In this example, we install the Nginx web server to demonstrate how applications behave within the container.
(container) $ apt update
(container) $ apt install nginx
(container) $ service nginx start
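Keep in mind that the --net flag from Step 1 gives the container an empty network namespace, so apt can only reach the Ubuntu archives if you configure networking for the namespace (for example with a veth pair) or omit --net. Once nginx is running, a rough local check, assuming iproute2 and curl are available inside the rootfs, might be:
(container) $ ip link set lo up
(container) $ curl -s http://localhost | head -n 5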
Conclusion
By taking a hands-on approach and exploring the code and command examples, we’ve gained a practical understanding of building our own Docker-like environment using Linux namespaces, cgroups, and chroot.
Of course, Docker’s containerization involves a lot more than what we just explored, but these fundamentals empower us to create isolated and efficient environments for our applications.