Container Instance Architecture
If you use `nsc run`, or any of the CI runners (e.g. GitHub Runners), you are using Container Instances.
Each Container Instance is a micro virtual machine that is fully isolated from other workloads. It gets its own CPU, RAM, and disk allocations.
Namespace relies on `containerd` to provide the container primitives users expect.
User workloads, and a small set of supporting services including `containerd`, run in "guest" space, i.e. within the virtual machine. They are supported by a number of services that operate outside the guest, either for performance or security isolation reasons. We call those services "host services".
Container Instances are optimized for interactivity and thus have very low startup times. To achieve this, a Container Instance is devoid of most traditional Linux management software, being supported primarily by host services.
A Container Instance includes the following software:
- `init`: a custom init system developed by the Namespace team ("concierge").
- `containerd`: for container management.
- `vector`: for logs and metrics exfiltration.
After the kernel boots, Namespace's `init` performs minimal operating system initialization, including system mount management, network setup, etc. This guest-boot procedure does not take more than a few milliseconds.
It then starts `containerd` and defers any additional work to it.
Additional non-critical software is also included, notably `dockerd`.
Workload Identity
Each Container Instance obtains unique workload identity credentials which allow it to identify itself when calling other Namespace services. These can also be used for cross-service federation purposes. See Federation for more.
Namespaces and networking
Each container that is started with `nsc run`, `nsc run --on`, or via one of the runners is deployed as an individual container and task in `containerd`, in the `default` namespace.
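As a quick illustration, these containers and tasks can be inspected from within the guest using containerd's `ctr` CLI; this assumes `ctr` is available in your environment and is only a sketch:

```bash
# List the containers and tasks placed in the "default" containerd namespace.
ctr --namespace default containers list
ctr --namespace default tasks list
```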
Depending on configuration, containers can rely on internal networking or host networking.
Each Container Instance benefits from Namespace's DNS caching, which improves both resolution time and reliability.
Docker API
Containers can optionally be given access to the Docker API. When that capability is enabled, a `/var/run/docker.sock` socket is made available in their root filesystem, which can be used by any Docker client.
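For example, assuming the Docker CLI is installed in your container and the capability is enabled, standard Docker commands work against that socket unchanged:

```bash
# The Docker CLI talks to /var/run/docker.sock by default.
docker version
docker run --rm alpine echo "hello from a sibling container"
```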
Note, however, that unlike in a traditional Docker environment, `/var/run/docker.sock` is not backed by `dockerd` directly; rather, it is served by concierge, which itself is a Docker-aware reverse proxy. This reverse proxy is aware of the calling container's identity and can perform translations, authorization checks, etc. based on that identity.
One important aspect is that, to retain full compatibility and feature set, when a container creates additional containers using the Docker API, those new containers become siblings of the creating container rather than children.
To enable bind mounts that refer to contents within the creating container, `/var/run/docker.sock` is backed by a Docker API proxy which understands the container anatomy and performs the necessary directory translations to ensure correct behavior.
Example
If you have a container you've started with `nsc run` with the Docker API enabled, and you create `/mydata/foobar`, followed by `docker run --rm -it -v /mydata:/parent ubuntu` and `ls /parent`, you'll observe `foobar`.
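In shell form, the sequence above looks roughly like this (a sketch, run from inside the container started with `nsc run`):

```bash
# Inside the container started with nsc run:
mkdir -p /mydata && touch /mydata/foobar

# Start a sibling container via the proxied Docker socket, bind-mounting /mydata:
docker run --rm -it -v /mydata:/parent ubuntu

# Then, inside the ubuntu container's shell:
ls /parent   # prints: foobar
```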
This works by having `/mydata/` translated to an overlayfs-aware local path, which is then passed to `dockerd`. E.g.
/mydata --> /run/containerd/io.containerd.runtime.v2.task/default/{containerid}/mydata
This translation mechanism also ensures that translated paths can't escape the calling container's root.
Cgroup management
In a Container Instance, `containerd` is the sole owner of the cgroups subsystem, which is exposed exclusively as cgroups v2.
Docker support is available in the environment, but rather than using its own `containerd`, it is configured to use the Namespace-managed one. Containers started with Docker will thus be observed alongside other Namespace-managed containers, albeit in a different containerd namespace: Docker deploys containers to the `moby` containerd namespace.
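For instance, containers launched via Docker can be listed side by side with Namespace-managed ones by pointing `ctr` at the `moby` namespace (again assuming `ctr` is available in the guest):

```bash
# Docker-started containers live in the "moby" containerd namespace.
ctr --namespace moby containers list
```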
Machine resource shapes
Namespace allows you to configure your compute resources with flexibility. For simplicity, we describe resource configurations as `AxB`, where `A` represents the number of vCPUs and `B` represents the amount of RAM in gigabytes. For example, `2x4` refers to a machine shape with 2 vCPUs and 4 GB of RAM.
Available numbers of vCPUs are 2, 4, 8, 16, and 32. Available values of RAM (in GB) are 2, 4, 8, 16, 32, 64, 80[1], 96[1], 112[1], 128[1], 256[1], 384[1], and 512[1].
[1] Using instances with over 64GB of RAM is not enabled by default for all plans. Please contact support@namespace.so to request access to high-memory instances.
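If it helps, here is a small, purely illustrative sketch (not part of any Namespace tooling) that checks whether a requested `AxB` string uses the documented vCPU and RAM values:

```bash
# Hypothetical helper: validate an AxB shape against the documented values.
shape="8x32"
vcpu="${shape%x*}"; ram="${shape#*x}"
case " 2 4 8 16 32 " in
  *" $vcpu "*) ;;
  *) echo "invalid vCPU count: $vcpu"; exit 1 ;;
esac
case " 2 4 8 16 32 64 80 96 112 128 256 384 512 " in
  *" $ram "*) ;;
  *) echo "invalid RAM (GB): $ram"; exit 1 ;;
esac
echo "shape $shape is valid"
```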
Disk space
The ephemeral disk space provided to your instances automatically scales based on the selected machine shape.
With the `unit_count` of an instance defined as:
unit_count = max(vCPU_count, RAM_GB / 2)
The provisioned disk space for an instance is calculated using the following formula:
disk_size = 32GB + (unit_count) * 8GB
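As a worked example, take a hypothetical `4x16` shape (4 vCPUs, 16 GB of RAM); this sketch simply applies the two formulas above:

```bash
# Worked example for a 4x16 shape (4 vCPUs, 16 GB RAM).
vcpu=4; ram_gb=16
unit_count=$(( vcpu > ram_gb / 2 ? vcpu : ram_gb / 2 ))  # max(4, 8) = 8
disk_gb=$(( 32 + unit_count * 8 ))                       # 32 + 64 = 96
echo "${disk_gb}GB"                                      # prints: 96GB
```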
Resource limits
You can schedule workloads concurrently with Namespace. To ensure that you can reliably predict how much capacity is available to you, Namespace enforces concurrency limits per workspace (how much vCPU and RAM you can allocate concurrently).
When you hit your concurrency limits, additional workloads are queued until older workloads complete and free up capacity.
Concurrency limits protect against a large bill in case of an unexpected burst of workloads. Larger subscriptions include higher concurrency limits.
You can observe your concurrent resource usage at cloud.namespace.so/workspace/usage/concurrency. This also allows you to track how close you are to your concurrency limit.
If you need larger concurrency limits than are available to you, please contact support@namespace.so.