
On-Demand Buildkite Agents

Integrate Namespace with Buildkite to run fast jobs that scale with you, without any infrastructure to maintain.

Agents running on Namespace infrastructure are fully isolated, guaranteeing that your tests and builds finish correctly regardless of what other jobs have run.

Using Namespace Runners

Namespace only supports Agents in Buildkite Clusters. If you have not yet enabled Clusters for your Organization, enable them now!

Enable Clusters

Create a Buildkite Access API key

  1. Specify the Buildkite Organization where you plan to run the Namespace integration.
  2. Select the following REST API Scopes:
    1. read_clusters
    2. write_clusters
    3. read_builds
    4. write_builds
    5. read_organization
    6. read_pipelines
  3. Click on Create New API Access Token.
  4. Take note of the newly generated token; you'll need it in a moment.

Head to your workspace

Click on Get Started to connect Namespace to your Buildkite Organization.

You will see the following form.


Set up access to AWS resources

(Optional): To store your pipeline secrets in S3 Buckets, specify the AWS IAM Role ARN from your federated AWS account to access AWS resources from Buildkite pipelines.

See secrets management and AWS federation for details.

Update your Buildkite Organization settings

  1. Create a new Webhook in Buildkite. The webhook allows Namespace to receive notifications about new jobs.
  2. Update the Webhook URL to:
  3. Update the Webhook Token to:
  4. Select the following events:
    • ping
    • job.scheduled
    • job.started
    • job.finished
  5. Select the following pipelines, filtering events to only pipelines assigned to Namespace-managed clusters:
    • Pipelines In Clusters...
    • Namespace Cluster
  6. Press Save.

Assign pipelines to the Namespace-managed cluster

At this point, Namespace has already created and configured a Buildkite Cluster in your organization and started managing this cluster on your behalf. The Namespace Cluster implements a single default queue. Namespace schedules Agents only for pipelines assigned to this cluster.

Add one or more pipelines to the Namespace-managed cluster from their Settings > General page.

You are done!

When the settings are configured correctly, the web UI shows a green circle and Namespace starts receiving events from Buildkite.

Namespace Runners Details

The Buildkite Agent runs within a container on a 4 vCPU, 8 GB RAM instance. An Agent instance serves only a single Buildkite Job and terminates afterward.

Namespace Builders are enabled by default for Buildkite Agents, so any Docker build automatically uses your Workspace Builders. Check out the Builders product page for more details.
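For example, a step that runs a plain docker build needs no extra configuration to use your Workspace Builders (the image name below is hypothetical):

```yaml
steps:
  - command: docker build -t myapp:ci .
```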

Base Container Image

Namespace provides a pre-configured base container image for the Buildkite Agent, based on Ubuntu 22.04. This image contains every dependency and tool that the Buildkite Agent needs to run. Most pipeline use cases can run with the base container image.

If you need additional tools or dependencies, consider using the Docker Compose Buildkite plugin or providing a custom Agent base container image.

The base image contains the following tools:

  • Docker: v23.0.5.
  • Docker Buildx: v0.10.4.
  • Docker Compose: v2.17.3.
  • Miscellaneous: Git, Git LFS, Python 3, sudo, bash, curl, rsync, unzip, perl, jq, openssh-agent, and tini.

Managing Pipeline Secrets

Use Secrets from AWS S3 Bucket

The Agent image is pre-configured to use Buildkite S3 secrets hooks, allowing you to store any secret in your encrypted S3 bucket. The S3 secrets hook runs before the build steps. It fetches secret files from the S3 bucket and exposes them to the local Agent environment.

Follow these steps to use secrets from S3 buckets:

  1. Configure AWS Workload Federation to allow your Namespace workspace to access your AWS S3 bucket.
  2. If you have not configured it yet, specify the AWS IAM Role ARN from your federated AWS account in the Buildkite installation dialog. See installation steps.
  3. Change your pipeline definition to set the variables BUILDKITE_SECRETS_BUCKET and BUILDKITE_SECRETS_BUCKET_REGION to your S3 bucket name and its AWS region:

env:
  BUILDKITE_SECRETS_BUCKET: "your-bucket-name"
  BUILDKITE_SECRETS_BUCKET_REGION: "bucket-region"
steps:

  4. Done!

The S3 secrets hook expects a specific file structure in the bucket. Check the Buildkite documentation to configure it correctly.
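The hook looks for well-known file names in the bucket. A typical layout, following the Buildkite S3 secrets hooks convention (the bucket and pipeline names below are placeholders), looks like:

```
s3://your-bucket-name/
├── env                  # environment variables exported into every job
├── private_ssh_key      # SSH key used for Git checkouts
└── {pipeline-slug}/
    ├── env              # exported only for this pipeline
    └── private_ssh_key  # overrides the top-level key for this pipeline
```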


Use Secrets from GCP Secret Manager

You can configure Namespace to access your GCP account and inject secrets from GCP Secret Manager into Agent environments.

Follow these steps to use secrets from GCP Secret Manager:

  1. Configure GCP federation with Namespace and allow your Namespace workspace to access GCP resources.
    • Keep a reference of the values you set for PROJECT_NUMBER, POOL_ID, PROVIDER_ID and SERVICE_ACCOUNT.
    • Don't forget to include the Secret Manager Secret Accessor role in the permissions of the federated service account.
  2. Add a new secret to your GCP Secret Manager and grant the SERVICE_ACCOUNT access to it.
    • Take note of the resource name of this secret: click on the three vertical dots ... and click Copy resource name.
  3. Then, set the following environment variables in your Buildkite pipeline file:

env:
  BUILDKITE_GCP_SSH_KEY_SECRET_NAME: "{Secret resource name}/versions/latest"
  BUILDKITE_GCP_WORKLOAD_IDENTITY: "https://iam.googleapis.com/projects/{PROJECT_NUMBER}/locations/global/workloadIdentityPools/{POOL_ID}/providers/{PROVIDER_ID}"
  BUILDKITE_GCP_SERVICE_ACCOUNT: "{SERVICE_ACCOUNT}"
  4. Done! The SSH private key is automatically copied into the Agent's environment.

This is an early-access feature; it currently only supports injecting SSH private keys from GCP secrets.
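For step 2, the secret can be created and shared with the federated service account using the gcloud CLI. A sketch, with a hypothetical secret name and key file:

```
# Hypothetical secret name and key file; replace with your own.
gcloud secrets create buildkite-ssh-key --data-file=./id_ed25519

# Allow the federated service account to read the secret.
gcloud secrets add-iam-policy-binding buildkite-ssh-key \
  --member="serviceAccount:${SERVICE_ACCOUNT}" \
  --role="roles/secretmanager.secretAccessor"
```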

Custom shapes and architectures

Namespace runs agents on isolated instances with 4 vCPU and 8 GB RAM each by default.

To select different instance shapes or architectures, specify agent Tags in your pipeline file.

Use the following tags to change the configuration of a particular agent:

  • nsc-shape: set the shape for the agent instance. Valid values use the format "{vcpu}x{mem}", where:
    • vcpu: acceptable numbers for vCPU are 2, 4, 8, 16 and 32.
    • mem: acceptable numbers for RAM GB are 2, 4, 8, 16, 32, 64, 80, 96, 112 and 128.
  • nsc-arch: set the platform for the agent instance. Valid values are "amd64" or "arm64".

For instance, the following pipeline configuration requests an ARM64 agent with 2 vCPU and 4 GB of RAM:

agents:
  nsc-shape: "2x4"
  nsc-arch: "arm64"

Users on the "Developer" plan can use Buildkite Agents with up to 8 vCPU and 16 GB of RAM; users on the "Team", "Business", or "Business+" plans, up to 32 vCPU and 64 GB of RAM. Buildkite Agents with more than 64 GB of RAM (80, 96, 112, or 128) are not available by default; reach out to support@namespace.so if you want to use high-memory Buildkite Agents.

Cross-invocation caching

You can use Namespace's caching capabilities to cache artifacts across invocations. You can cache downloaded resources, stored packages, or other artifacts that don't change run-over-run and would otherwise slow down your job run.

By enabling this feature, you can attach a cache volume to an agent invocation. All agents sharing the same cache tag share the cache volume contents. Cache volumes are attached on a best-effort basis depending on their locality, expiration, and current usage, so they should not be relied upon as durable data storage.

After a cache volume is attached to the agent invocation, you can store files in it using whatever tools you prefer: by configuring paths in your favorite tools, copying files in or out, or simply relying on bind mounts.

You can use our nscache-buildkite-plugin to wire your cache mounts for you.

Namespace uses fast local storage for cache volumes - and some magic. Cache volumes are forked on demand, ensuring that jobs can concurrently use the same caches with uncontested read-write access.


Configuring a cache volume

Cache volumes are attached to agent invocations by using agent tags:

  • nsc-cache-tag (required): the key to select the volume to mount. Namespace infrastructure will attempt to attach the most recently used volume with this tag (or failing that, an empty volume).
  • nsc-cache-path (optional): where to mount the cache volume; defaults to /cache.
  • nsc-cache-size (optional): the size of the cache volume; defaults to 20gb. The minimum is 20 GB; the maximum depends on your subscription plan: 50 GB on the Team plan, 100 GB on the Business plan.
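Putting the three tags together, an agent block requesting a 50 GB volume mounted at the default path might look like this (the cache tag value is hypothetical):

```yaml
agents:
  nsc-cache-tag: myrepo/build-cache
  nsc-cache-path: /cache
  nsc-cache-size: 50gb
```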

Tips:

  • As a starting point, we suggest using unique cache tags for each step in a workflow, since each step might cache files specific to its logic.
  • If multiple steps in a workflow cache the same data (e.g. installing exactly the same NPM packages), specify the same cache tag for all of them for optimal storage utilization.
  • When caching the results of a computation that depends on the repo content, it's a good idea to store them in subdirectories of the cache volume named after a hash of the input. E.g. when caching compilation output for cc a.c b.c, store the output under /cache/`shasum a.c b.c`/a.out.

Example: Caching NPM packages

steps:
  - command: |
      time npm install
      npm run test

    plugins:
      - namespacelabs/nscache#v0.1:
          # By default the plugin detects the platform automatically and mounts the cache
          # into appropriate directories (currently supported: Go, NodeJS).
          paths: # optional
            - node_modules
            - ~/.npm

    agents:
      nsc-cache-tag: myrepository/npm-cache
      nsc-cache-size: 20gb

Experimental: Caching Docker images across invocations

Namespace also makes it trivial to cache container image pulls (and unpacks, often the most expensive bit) across invocations.

You can either cache images in a separate cache volume, by specifying containerd-cache-tag and containerd-cache-size, or you can place the container state in an existing cache volume by using nsc-cache-tag and containerd-cache-relative-path.

We're still ironing out some of the kinks so you should consider this support experimental. We've also prefixed the configuration keys with nsc-experimental to make that clearer.

steps:
  - command: |
      time docker pull ubuntu
    agents:
      nsc-experimental-containerd-cache-tag: myrepo/container-cache
      nsc-experimental-containerd-cache-size: 50gb

Alternatively, use an existing cache volume:

steps:
  - command: |
      time npx -y supabase start
    plugins:
      - namespacelabs/nscache#v0.1:
          paths: ["~/.npm"]
    agents:
      nsc-cache-tag: myrepo/cache
      nsc-experimental-containerd-cache-relative-path: containers

For optimal performance with container image caching, we recommend configuring a cache size of 50 GB or more.

Experimental: Configure Git mirror across invocations

Namespace Cache Volumes can also be used to speed up Git checkouts. You can configure Buildkite agents to use the local cache volume to cache the Git repository.

Specify the agent tag nsc-experimental-git-mirror to enable this feature. Namespace creates a new volume for each Git repository your Buildkite pipelines refer to. The Buildkite agent is automatically configured to look for the Git mirror in the cache volume.

The following configures a Git mirror volume with the default size of 5 GB:

agents:
  nsc-experimental-git-mirror: "true"

You can specify a larger value if your Git repository needs more space. For example, the following configures a Git mirror volume with a custom size of 10 GB:

agents:
  nsc-experimental-git-mirror: "10gb"

Using a custom base image

You can provide Namespace with a custom base image if you need additional tools or dependencies to run your Buildkite pipelines.

Custom base images are container images containing your required software. Note that Namespace's base image includes Docker and a few other required dependencies; if you build a custom image from scratch, you'll need to install these yourself. Alternatively, follow the example below, which derives from our base image.

We'll use nsc build and Namespace's Container Registry to build and store the image itself.

Prepare your custom Dockerfile.

mkdir baseimage/

And add the following as baseimage/Dockerfile.

FROM us-docker.pkg.dev/foundation-344819/prebuilts/namespacelabs.dev/internal/buildkite/base-image:prod

RUN apt-get update && apt-get install -y ruby nodejs npm
RUN npm install -g supabase

Change the above to add any additional packages you need.

Use nsc build to build and push the image to Namespace's Container Registry.

nsc build . --name buildkite/base:v1 --push \
  --platform=linux/amd64,linux/arm64

When nsc build finishes, it emits the built image reference. For example:

Pushed:
  nscr.io/97mf7kvepuhe7u91a5vg/buildkite/base:v1

The above command builds a multi-platform container image as Namespace supports both amd64 and arm64 Buildkite Agents.


Head over to the dashboard

Change the default base image

Press Edit under the organization's menu.

Check the Use custom base system image checkbox in the pop-up window and paste your desired image reference in the text field below.


Finally, press Save.

Alternatively: specify the custom base image in your pipeline

You can specify nsc-base-image-ref as an agent tag to select a base image. For example:

agents:
  nsc-base-image-ref: nscr.io/97mf7kvepuhe7u91a5vg/buildkite/base:v1

Or use an existing public image:

agents:
  nsc-base-image-ref: node
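The agent tags described above compose. For example, a single step could combine a custom base image, a larger shape, and a cache volume (all values below are hypothetical):

```yaml
steps:
  - command: |
      npm install
      npm run test
    agents:
      nsc-base-image-ref: nscr.io/97mf7kvepuhe7u91a5vg/buildkite/base:v1
      nsc-shape: "8x16"
      nsc-arch: "amd64"
      nsc-cache-tag: myrepo/npm-cache
```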