
Faster GitHub Actions

Accelerate your GitHub Actions with Namespace while saving cost.

Namespace offers performant and configurable GitHub Runners that boot in a few seconds and come with caching capabilities to speed up your GitHub workflows.

Why Namespace Runners?

Designed as a drop-in replacement for GitHub Actions runners, Namespace Runners bring to the table:

1. More capacity out of the box, starting at 4 vCPU / 16 GB RAM, with higher performance per core.
2. High-throughput local cache volumes to speed up job executions.
3. Highly flexible configuration of instance shapes and custom runner images to fit your computing needs.
4. Native AMD64 and ARM64 support for Linux runners.
5. macOS runners on Apple Silicon (M2).
6. A transparent, predictable, and competitive pricing model.
7. Full compatibility with GitHub Actions: you configure your workflow to use our runners with a single-line change and can continue using GitHub Actions as usual.

Getting Started

To speed up your workflows with Namespace, you need to connect Namespace with your GitHub organization and make a one-line change to your workflow definition:

1. Connect Namespace with your GitHub organization.

2. Change the runs-on field in a workflow file to runs-on: namespace-profile-default. For example:

```diff
jobs:
  build:
-   runs-on: ubuntu-latest
+   runs-on: namespace-profile-default
    name: Build Docker image
```

You can read about the available configuration options, including machine sizes, in our documentation.

3. Done! Your GitHub workflow will now run on Namespace.

Namespace Runners run with 4 vCPU / 16 GB RAM by default. You can flexibly change their shape.

Details

Configuring a Runner

You can configure machine shape and architecture and enable caching using the Namespace UI or by specifying runner labels in the workflow file. The two approaches are equivalent but mutually exclusive. Some newer features are only available when using Runner Profiles.

Using Runner Profiles

Create a Profile

Open GitHub runner configuration page on the Namespace Dashboard and press "New Profile".

Select a name. The name will be part of the label to specify in the runs-on option in the workflow file.

Select the desired options. See below for details on the available features.

edit profile dialog

Confirm by pressing "Create Profile".

Select the Profile

Change your workflow file to mention the runner profile name: runs-on: namespace-profile-{name}.

```yaml
jobs:
  myjob:
    runs-on: namespace-profile-arm64-large
```

The cache volume will be shared between all jobs running for a Git repository.

Using Runner Labels

Alternatively you can keep the entire runner configuration version-controlled within your workflow file. Specify one of the following labels to use different runner CPU architectures or machine shapes.

| Label | OS | Architecture | vCPU | Memory |
| --- | --- | --- | --- | --- |
| nscloud | Ubuntu 22.04 | AMD 64-bit | 4 | 16 GB |
| nscloud-ubuntu-22.04 | Ubuntu 22.04 | AMD 64-bit | 4 | 16 GB |
| nscloud-amd64 | Ubuntu 22.04 | AMD 64-bit | 4 | 16 GB |
| nscloud-ubuntu-22.04-amd64 | Ubuntu 22.04 | AMD 64-bit | 4 | 16 GB |
| nscloud-ubuntu-22.04-amd64-4x16 | Ubuntu 22.04 | AMD 64-bit | 4 | 16 GB |
| nscloud-ubuntu-20.04-amd64-4x16 | Ubuntu 20.04 | AMD 64-bit | 4 | 16 GB |
| nscloud-arm64 | Ubuntu 22.04 | ARM 64-bit | 4 | 16 GB |
| nscloud-ubuntu-22.04-arm64 | Ubuntu 22.04 | ARM 64-bit | 4 | 16 GB |
| nscloud-ubuntu-22.04-arm64-4x16 | Ubuntu 22.04 | ARM 64-bit | 4 | 16 GB |
| nscloud-ubuntu-20.04-arm64-4x16 | Ubuntu 20.04 | ARM 64-bit | 4 | 16 GB |

Note that only one nscloud label is allowed in the runs-on field of your workflow file. Namespace will not schedule a workflow job if runs-on specifies more than one nscloud label or an invalid one.
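For instance, a workflow job targeting one of the well-known labels from the table above looks like this (the job name is illustrative):

```yaml
jobs:
  build:
    # Exactly one nscloud label; adding a second one or mistyping it
    # would prevent Namespace from scheduling the job.
    runs-on: nscloud-ubuntu-22.04-arm64-4x16
    steps:
      - run: uname -m
```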

Runner Machine Resources

Namespace supports custom machine shapes, so you can configure your runner to a bigger or smaller machine than what is provided by the table of well-known labels above. You can specify the machine shape with the label format as follows:

```yaml
runs-on: nscloud-{os}-{arch}-{vcpu}x{mem}
```

Where:

  • {os}: the operating system image to use. Today, only ubuntu-22.04 and ubuntu-20.04 are supported.
  • {arch}: either amd64 or arm64.
  • {vcpu}: acceptable vCPU counts are 2, 4, 8, 16, and 32.
  • {mem}: acceptable RAM values in GB are 2, 4, 8, 16, 32, and 64. For larger values, see below.

The GitHub Runner software requires some resources from the underlying instance. Therefore, we recommend configuring the runner with at least 8 GB of memory. Below that, depending on the workload, the runner might experience some flakiness.

For example, to create a 2 vCPU 8 GB ARM runner, use the following label:

```yaml
runs-on: nscloud-ubuntu-22.04-arm64-2x8
```

Runners will only be scheduled if the machine shape is valid and your workspace has available concurrent capacity.

Users under the "Developer", "Team", "Business" or "Business+" plan can use up to 32 vCPU and 64GB of RAM. Please contact support@namespace.so if you want to use high-memory GitHub Runners (up to 512 GB).

Details

The provisioned disk size per runner automatically scales with its shape. See machine resource shapes.

Cross-invocation Caching

You can use Namespace cache volumes to speed up your GitHub Actions. Workflows can store dependencies, tools, and any data that needs to be available across invocations. Cached data persists beyond the runner instance, so subsequent runners can read it and skip expensive downloads and installations.

By enabling this feature, you can attach cache volumes to runner instances. Cache volumes are tagged so that any runner sharing the same cache tag will share cache volume contents. Cache volumes are attached on a best-effort basis depending on their locality, expiration, and current usage (so they should not be relied upon as durable data storage).

After a cache volume is attached to a runner, you can store files using whatever tools you prefer: by configuring paths in your favorite tools, copying files in or out, or simply relying on bind mounts.
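As a sketch of the tool-configuration approach (the /cache/... paths below are illustrative assumptions, not defaults), a workflow step might point package-manager caches at an attached volume:

```yaml
steps:
  - name: Point tool caches at the cache volume
    run: |
      # Assumes a cache volume mounted at /cache (illustrative path).
      npm config set cache /cache/npm
      go env -w GOMODCACHE=/cache/gomod
```

In practice, the nscloud-cache-action mentioned below takes care of this wiring for you.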

The cache volume content is only committed when the GitHub workflow completes successfully. So, any intermediate data written into the cache volume is automatically discarded if the workflow fails.

You can use our nscloud-cache-action to wire your cache mounts for you.

Namespace uses local fast storage for cache volumes - and some magic. Cache volumes are forked on demand, ensuring that jobs can concurrently use the same caches with uncontested read-write access.

Learn more

Using a Cache Volume

If you configured a runner using the web UI and use the namespace-profile-{name} runner label, enable caching by simply going to the runner profile and enabling cache.

runner profile cache configuration

You can also make the runner configure its built-in software (Git, containerd, toolchains) to use the cache volume.

Example: Caching NPM Packages

Use our nscloud-cache-action to mount the volume under the paths you want to cache.

```yaml
jobs:
  tests:
    runs-on:
      - namespace-profile-node-tests

    steps:
      - name: Setup npm cache
        uses: namespacelabs/nscloud-cache-action@v1
        with:
          path: |
            ~/.npm
            ./node_modules

      - name: Install dependencies
        run: npm install

      - name: NPM tests
        run: npm run test
```

Caching Docker Images Across Invocations

Namespace also makes it trivial to cache container image pulls (and unpacks, often the most expensive bit) across invocations.

To enable this feature, just open the runner profile configuration, add a cache volume and check Container images.

```yaml
jobs:
  tests:
    runs-on:
      - namespace-profile-integration-tests

    steps:
      - name: Pull ubuntu image
        run: |
          time docker pull ubuntu
```

The second time the above example runs, the time to pull the ubuntu container image should be close to 0, as every layer was already cached by the first run.

To ensure runner stability, when you enable this feature, the minimum cache size allowed is 50 GB. If you specify a lower value, our backend automatically raises the requested cache size to 50 GB.

Caching GitHub Tools

Namespace volumes can cache GitHub tools across invocations. For example, the actions/setup-go and actions/setup-python actions first look for their binaries in the $RUNNER_TOOL_CACHE directory before fetching from remote.

To make GitHub tool cache use Namespace volumes, open the runner profile configuration, add a cache volume and check Toolchain downloads.

```yaml
jobs:
  tests:
    runs-on:
      - namespace-profile-go-python-tests

    steps:
      - uses: actions/setup-go@v5
      - uses: actions/setup-python@v4
```

Caching Git Repositories

With cache volumes, you can set your GitHub workflow to cache large git repositories to speed up the checkout phase.

After you enable Git caching (see below), you'll need to change your workflows to call our optimized checkout action, which uses the cache volume to store and retrieve git mirrors. See the namespacelabs/nscloud-checkout-action page for more details.

To enable this feature, just open the runner profile configuration, add a cache volume and check Git repository checkouts.

```yaml
jobs:
  tests:
    runs-on:
      - namespace-profile-integration-tests

    steps:
      - uses: namespacelabs/nscloud-checkout-action@v2
        name: Checkout
        with:
          path: my-repo
      - run: |
          cd my-repo && git status
```

Expected Use Cases

Note that the namespacelabs/nscloud-checkout-action is optimized to speed up only specific use cases. If your workflows do not fall into these, you might not see the expected performance improvement.

  • Very large repositories: when the workflow needs to check out a repository with many or big files.
  • Long commit histories: when the workflow needs to check out many commits (the default is only 1).
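For the long-history case, and assuming namespacelabs/nscloud-checkout-action accepts a fetch-depth input following the actions/checkout convention (an assumption here), a full-history checkout would be sketched as:

```yaml
steps:
  - uses: namespacelabs/nscloud-checkout-action@v2
    name: Checkout with full history
    with:
      # fetch-depth: 0 fetches all commits; with Git caching enabled,
      # most objects are served from the cached git mirror.
      fetch-depth: 0
```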

Advanced: Protect Caches from Updates

You can configure Cache Volumes to limit what git branches can perform updates to them. This configuration allows you to use the same cache as the main branch from other branches (e.g. pull requests), but not commit changes to it.

To specify which branches can update the cache volume, open the cache volume configuration, then check Show Advanced features, and finally type the branch names.

branch cache configuration

Any GitHub Actions job belonging to a git branch that is not included in the allow-list will still be able to access the Cache Volumes, but its changes to the caches' content will not be persisted in the end.

Preview: Custom Runner Image

You can save further time by pre-installing Ubuntu packages in the runner base image that you would otherwise install as part of the GitHub workflow. This feature is still in preview; we are working on its reliability and user experience. It is only available when using Runner Profiles.

To install custom Ubuntu packages:

  1. Open the GitHub Runner configuration page.
  2. Click an existing profile or create a new one.
  3. Click the Base image field and select Custom Ubuntu 22.04 image.
  4. Type the Ubuntu packages one by one.
    runner profile base image configuration
  5. Click the "Create Profile" or "Update Profile" button.
  6. Namespace builds the custom runner image using your workspace's Remote Builder instances and shows when it's ready to use.
    runner profile base image configuration ready
  7. That's it! From now on, any new GitHub job targeting this runner profile will run on the custom image.

The custom runner image build will fail if any of the specified packages does not exist. You can search for the list of available packages from the Ubuntu website.

All Ubuntu 22.04 packages

You can always modify the list of pre-installed packages. Any update will result in a new image build.

The Runner Custom Base Image is built starting from our default runner base image. Any GitHub workflow job using a custom base image will be delayed until the base image is built successfully.

High-performance Builds

Namespace Runners seamlessly integrate with Remote Builders, allowing you to speed up your GitHub Actions runs even more. By offloading expensive Docker builds to Remote Builders, you remove much of the burden from the workflow run, so you can select a smaller runner shape.

Within Namespace Runners, the Docker context comes pre-configured to use our Remote Builders. No extra changes to your workflow file are required. Any call to docker build or docker/build-push-action automatically offloads the build to Namespace Remote Builder.

Remote Builders rely on a Docker buildx context. Make sure that your workflow does not overwrite the buildx context. For example, calling the docker/setup-buildx-action action would prevent using Namespace Remote Builders. Consider removing it or using namespacelabs/nscloud-setup-buildx-action instead.
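As a sketch, swapping the setup action keeps the Remote Builders context intact (the version tag and step names are assumptions):

```yaml
steps:
  # Replaces docker/setup-buildx-action, which would otherwise override
  # the pre-configured Remote Builders buildx context.
  - uses: namespacelabs/nscloud-setup-buildx-action@v0
  - name: Build image on Remote Builders
    run: docker build -t my-image:dev .
```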

Disabling Namespace Remote Builders

By default, Namespace-managed GitHub Runners have Remote Builders pre-configured. Docker image builds (e.g., docker build command) use Remote Builders by default.

If you prefer to use the runner's local Docker builder instead of Namespace Remote Builders, you have two options:

  • Disable Remote Builders: You can request that runners created for a particular workflow job do not use Remote Builders.

    • If you are using runner profiles (see above), disable remote builders in the runner profile's advanced configuration in the UI.
    runner profile advanced block
    • If you are configuring the runners with labels, pass the additional configuration label nscloud-no-remote-builders, e.g.:

    ```yaml
    jobs:
      myjob:
        runs-on: [nscloud-ubuntu-22.04-amd64-4x16, nscloud-no-remote-builders]
    ```

  • Revert the Docker build context to the default: Namespace configures Remote Builders as a separate buildx context. You can switch back to the default by calling docker buildx use default, e.g.:

    ```yaml
    jobs:
      build:
        steps:
          - name: Use default builder
            run: docker buildx use default
    ```

Access to Private Container Registry

Namespace Runners are pre-configured with authentication credentials to access your Workspace's private container registry by default. The container registry address is available in the environment variable NSC_CONTAINER_REGISTRY.

For example, if you want to build-and-push to your Workspace container registry, you can run the following step:

```yaml
- name: Build and push
  uses: docker/build-push-action@v5
  with:
    context: .
    push: true
    tags: ${{ env.NSC_CONTAINER_REGISTRY }}/my-image:v1
    platforms: linux/amd64,linux/arm64
```

Or, in case you need to pull an image from the private registry, you can use the NSC_CONTAINER_REGISTRY variable directly in your commands, for example:

```yaml
- name: Docker pull
  run: docker pull ${{ env.NSC_CONTAINER_REGISTRY }}/my-image:v1
```

What's Next?