Faster GitHub Actions
Accelerate your GitHub Actions with Namespace while reducing costs.
Namespace offers performant and configurable GitHub Runners that boot in a few seconds and come with caching capabilities to speed up your GitHub workflows.
Why Namespace Runners?
Designed as a drop-in replacement for GitHub Actions runners, Namespace Runners bring to the table:
Getting Started
To speed up your workflows with Namespace, you need to connect Namespace to your GitHub organization and make a one-line change to your workflow definition:
Connect your GitHub organization to Namespace
Update your workflows
Change the runs-on field in a workflow file to a Namespace profile or label.
jobs:
  build:
-   runs-on: ubuntu-latest
+   runs-on: namespace-profile-default
    steps:
      - name: Checkout
        uses: actions/checkout@v4
    ...
Done!
Your GitHub workflow will now run on Namespace!
Configuring a Runner
You can configure the machine shape and architecture and enable caching either through the Namespace UI or by specifying runner labels in the workflow file. The two approaches are equivalent and mutually exclusive. Some newer features are only available when using Runner Profiles.
Using Runner Profiles
Create a Profile
Open the GitHub runner configuration page on the Namespace Dashboard and press "New Profile".
Select a name. The name will be part of the label to specify in the runs-on option in the workflow file.
Select the desired options. See below for details on the available features.
Confirm by pressing "Create Profile".
Select the Profile
Change your workflow file to reference the runner profile name: runs-on: namespace-profile-{name}.
jobs:
myjob:
runs-on: namespace-profile-arm64-large
The cache volume will be shared between all jobs running for a Git repository.
Using Runner Labels
Alternatively, you can keep the entire runner configuration version-controlled within your workflow file. Specify one of the following labels to select a runner CPU architecture or machine shape.
Label | OS | Architecture | vCPU | Memory |
---|---|---|---|---|
nscloud | Ubuntu 22.04 | AMD 64-bit | 4 | 16 GB |
nscloud-ubuntu-22.04 | Ubuntu 22.04 | AMD 64-bit | 4 | 16 GB |
nscloud-amd64 | Ubuntu 22.04 | AMD 64-bit | 4 | 16 GB |
nscloud-ubuntu-22.04-amd64 | Ubuntu 22.04 | AMD 64-bit | 4 | 16 GB |
nscloud-ubuntu-22.04-amd64-4x16 | Ubuntu 22.04 | AMD 64-bit | 4 | 16 GB |
nscloud-ubuntu-20.04-amd64-4x16 | Ubuntu 20.04 | AMD 64-bit | 4 | 16 GB |
nscloud-arm64 | Ubuntu 22.04 | ARM 64-bit | 4 | 16 GB |
nscloud-ubuntu-22.04-arm64 | Ubuntu 22.04 | ARM 64-bit | 4 | 16 GB |
nscloud-ubuntu-22.04-arm64-4x16 | Ubuntu 22.04 | ARM 64-bit | 4 | 16 GB |
nscloud-ubuntu-20.04-arm64-4x16 | Ubuntu 20.04 | ARM 64-bit | 4 | 16 GB |
nscloud-macos-sonoma-arm64-12x28 | macOS Sonoma | ARM 64-bit | 12 | 28 GB |
nscloud-macos-sequoia-arm64-12x28 | macOS Sequoia | ARM 64-bit | 12 | 28 GB |
Note that only one nscloud label is allowed in the runs-on field of your workflow file. Namespace will not schedule a workflow job if runs-on specifies more than one nscloud label or an invalid one.
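For example, a job targeting one of the ARM labels from the table above:
jobs:
  build:
    runs-on: nscloud-ubuntu-22.04-arm64-4x16
    steps:
      - name: Checkout
        uses: actions/checkout@v4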
Runner Machine Resources
Namespace supports custom machine shapes, so you can configure your runner with a bigger or smaller machine than those listed in the table of well-known labels above. Specify the machine shape with the following label format:
runs-on: nscloud-{os}-{arch}-{vcpu}x{mem}
Where:
- {os}: the operating system image to use. Currently, ubuntu-22.04, ubuntu-20.04, ubuntu-24.04, macos-sonoma, and macos-sequoia are allowed.
- {arch}: either "amd64" or "arm64".
- {vcpu}: acceptable vCPU values are 2, 4, 8, 16, and 32. For macOS, the valid values are 4, 6, 8, and 12.
- {mem}: acceptable RAM values in GB are 2, 4, 8, 16, 32, and 64. For larger values, see below. For macOS, the valid values are 7, 14, and 28.
GitHub's runner software requires some resources from the underlying instance. Therefore, we recommend configuring the runner with at least 8 GB of memory. Below that, depending on the workload, the runner might experience flakiness.
For example, to create a 2 vCPU 8 GB ARM runner, use the following label:
runs-on: nscloud-ubuntu-22.04-arm64-2x8
Runners will only be scheduled if the machine shape is valid and your workspace has available concurrent capacity.
Users on the "Developer", "Team", "Business", or "Business+" plans can use up to 32 vCPUs and 64 GB of RAM. Please contact support@namespace.so if you want to use high-memory GitHub Runners (up to 512 GB).
The provisioned disk size per runner automatically scales with its shape. See machine resource shapes.
Cross-invocation Caching
Namespace cache volumes can help you dramatically speed up your GitHub Actions.
Workflows can store dependencies, tools, and any data that needs to be available across invocations. Cached data is persisted beyond the runner instance and is available to subsequent jobs.
Because Cache Volumes are not uploaded or downloaded inline with the run, the size of your cache data does not negatively affect the total runtime of your job. In fact, most of the time you'll see close to zero seconds added to the total runtime when Cache Volumes are used.
Cache Volumes are named using "tags": job definitions that rely on the same tag share the Cache Volume contents.
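For illustration, a minimal sketch of two jobs sharing the same volume through a common cache tag, using the runner-label form that appears later in this document (the tag and size values are examples):
jobs:
  lint:
    runs-on:
      - nscloud-ubuntu-22.04-arm64-4x16-with-cache
      - nscloud-cache-tag-cache-npm
      - nscloud-cache-size-100gb
    steps:
      - uses: actions/checkout@v4
  tests:
    runs-on:
      # Same cache tag as the lint job, so both jobs see the same volume contents.
      - nscloud-ubuntu-22.04-arm64-4x16-with-cache
      - nscloud-cache-tag-cache-npm
      - nscloud-cache-size-100gb
    steps:
      - uses: actions/checkout@v4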
An important design principle of Cache Volumes is that they're, well, caches. They're best-effort: even though Namespace aims to offer a close to 100% cache hit ratio, you should not depend on Cache Volumes as durable storage.
A Cache Volume is just an additional "mount" in Unix terms, so you can use it by reading and writing files, remounting the cache directory, and more.
If a run fails, the new contents are discarded. If it succeeds, the new contents are "committed" and used for subsequent runs, allowing you to maximize incrementality and reduce the total run time of your builds and tests.
To simplify the use of Cache Volumes further, check out nscloud-cache-action, which provides a simple configuration model for popular frameworks, or for when you just want to cache a particular set of directories or files.
Namespace uses local fast storage for cache volumes - and some magic. Cache volumes are forked on demand, ensuring that jobs can concurrently use the same caches with uncontested read-write access.
Using a Cache Volume
If you configured a runner using the web UI and use a namespace-profile-{name} runner label, enable caching by going to the runner profile and enabling cache.
You can also make the runner configure the built-in software (Git, containerd, toolchain) to use the cache volume. The minimum cache size is 20 GB.
Example: Caching NPM Packages
Use our nscloud-cache-action to mount the volume under the paths you want to cache.
jobs:
tests:
runs-on:
- namespace-profile-node-tests
steps:
- name: Setup npm cache
uses: namespacelabs/nscloud-cache-action@v1
with:
path: |
~/.npm
./node_modules
- name: Install dependencies
run: npm install
- name: NPM tests
run: npm run test
If you run GitHub jobs in a container, you also need to mount the cache volume inside the container. Check the action documentation for more details.
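As a rough sketch for container jobs (the /cache host path below is an assumption; confirm the actual mount path and recommended setup in the nscloud-cache-action documentation):
jobs:
  tests:
    runs-on: namespace-profile-node-tests
    container:
      image: node:20
      volumes:
        # Assumption: the cache volume is available on the host under /cache;
        # check the action documentation for the exact path to map.
        - /cache:/cache
    steps:
      - uses: namespacelabs/nscloud-cache-action@v1
        with:
          path: ~/.npm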
Caching Docker Images Across Invocations
Namespace also makes it trivial to cache container image pulls (and unpacks, often the most expensive bit) across invocations.
To enable this feature, just open the runner profile configuration, add a cache volume, and check Container images.
jobs:
tests:
runs-on:
- namespace-profile-integration-tests
steps:
- name: Pull ubuntu image
run: |
time docker pull ubuntu
The second time the above example runs, the time to pull the ubuntu container image should be close to 0, as every layer was already cached by the first run.
Known issues for Docker Image caching
Docker image caching relies on standard Docker APIs. However, if you are using a tool that does not fully support Docker APIs, Docker Image caching can break your workflow. Known issues are:
- Spring Boot does not provide full image spec compatibility in their Buildpacks plugin
Caching Bazel build artifacts
This is a preview feature. If you want to try Bazel caching, reach out to us and we can enable it for you.
Namespace provides high-performance Bazel caching with very low network latency between runners and the cache storage. This allows your Bazel workflows to reuse build artifacts across runs, irrespective of your chosen granularity, significantly reducing build times.
Bazel caching is particularly effective for:
- Build steps that are time-consuming to execute but produce relatively small outputs
- Projects where compilation is typically slow, such as Rust and C++ codebases
- Workflows that can benefit from cross-invocation artifact reuse
jobs:
  build:
    runs-on: namespace-profile-default
    steps:
      - name: Setup Bazel cache
        run: |
          nsc bazel cache setup --bazelrc /etc/bazel.bazelrc
      - name: Bazel test
        run: |
          bazel --bazelrc=/etc/bazel.bazelrc test //...
The bazelrc path can be customized if you need to use a specific configuration file.
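For instance, with a hypothetical repository-local path instead of /etc/bazel.bazelrc (both steps must reference the same file):
- name: Setup Bazel cache
  run: |
    nsc bazel cache setup --bazelrc .github/namespace.bazelrc
- name: Bazel test
  run: |
    bazel --bazelrc=.github/namespace.bazelrc test //...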
See Bazel's documentation for granularity control options.
Caching GitHub Tools
Namespace volumes can cache GitHub tools across invocations. For example, the actions/setup-go and actions/setup-python actions first look for their binaries in the $RUNNER_TOOL_CACHE directory before fetching them from remote.
To make the GitHub tool cache use Namespace volumes, open the runner profile configuration, add a cache volume, and check Toolchain downloads.
jobs:
tests:
runs-on:
- namespace-profile-go-python-tests
steps:
- uses: actions/setup-go@v5
- uses: actions/setup-python@v4
Caching Git Repositories
With cache volumes, you can set up your GitHub workflow to cache large git repositories and speed up the checkout phase.
After you enable Git caching (see below), you'll need to change your workflows to call our optimized checkout action, which uses the cache volume to store and retrieve git mirrors. See the namespacelabs/nscloud-checkout-action page for more details.
To enable this feature, just open the runner profile configuration, add a cache volume, and check Git repository checkouts.
jobs:
tests:
runs-on:
- namespace-profile-integration-tests
steps:
- uses: namespacelabs/nscloud-checkout-action@v5
name: Checkout
with:
path: my-repo
- run: |
cd my-repo && git status
Expected Use Cases
Note that namespacelabs/nscloud-checkout-action is optimized to speed up only specific use cases.
If your workflows do not fall into these, you might not see the expected performance improvement.
- Very large repositories: when the workflow needs to check out a repository with many or big files.
- Long commit histories: when the workflow needs to check out many commits rather than the default depth of 1 (see the sketch after this list).
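For example, a sketch that checks out the full commit history, assuming namespacelabs/nscloud-checkout-action accepts the same fetch-depth input as actions/checkout (confirm on the action page):
jobs:
  release:
    runs-on: namespace-profile-default
    steps:
      - uses: namespacelabs/nscloud-checkout-action@v5
        name: Checkout with full history
        with:
          # Assumption: fetch-depth mirrors actions/checkout; 0 fetches all commits.
          fetch-depth: 0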
Advanced: Protect Caches from Updates
You can configure Cache Volumes to limit which git branches can update them. This allows other branches (e.g., pull requests) to use the same cache as the main branch without committing changes to it.
To specify which branches can update the cache volume, open the cache volume configuration, then check Show Advanced features, and finally type the branch names.
Any GitHub Actions job belonging to a git branch that is not included in the allow-list can still access the Cache Volumes, but its changes to the caches' contents will not be persisted.
Manage Cache Access per Job
This feature is Preview.
Instead of protecting your cache per branch, you can also select which jobs may write to the cache. This allows you to share a cache among multiple jobs while preventing some of them from updating the cache contents. Any job without commit rights can still access the cache and change its contents locally, but any edits it makes will be ignored for future runs.
If using runner labels, you can add the label nscloud-cache-exp-do-not-commit to a job.
jobs:
tests:
runs-on:
- nscloud-ubuntu-22.04-arm64-4x16-with-cache
- nscloud-cache-tag-cache-npm
- nscloud-cache-size-100gb
- nscloud-cache-exp-do-not-commit
steps:
- uses: actions/setup-node@v3
Custom Runner Images
This feature is Preview.
You can save further time by pre-installing Ubuntu packages in the runner base image that you would otherwise install as part of the GitHub workflow. This feature is still in preview; we are working on its reliability and user experience. It is only available when using Runner Profiles.
To install custom Ubuntu packages:
- Open the GitHub Runner configuration page.
- Click on an existing profile or create a new one.
- Click the Base image field and select Custom Ubuntu 22.04 image.
- Type the Ubuntu packages one by one.
- Click the "Create Profile" or "Update Profile" button.
- Namespace builds the custom runner image using your workspace's Remote Builder instances and shows when it's ready to use.
- That's it! From now on, any new GitHub job targeting this runner profile will run on the custom image.
The custom runner image build will fail if any of the specified packages does not exist. You can search the list of available packages on the Ubuntu website.
You can always modify the list of pre-installed packages. Any update will result in a new image build.
The Runner Custom Base Image is built starting from our default runner base image. Any GitHub workflow job using a custom base image will be delayed until the base image is built successfully.
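For example, if a runner profile's custom image pre-installs jq and protobuf-compiler (hypothetical package choices, with a hypothetical profile name), the workflow no longer needs an apt-get step:
jobs:
  build:
    runs-on: namespace-profile-custom-ubuntu   # hypothetical profile with a custom base image
    steps:
      - uses: actions/checkout@v4
      - name: Use pre-installed tools
        # jq and protoc come pre-installed via the custom base image.
        run: |
          jq --version
          protoc --version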
Accelerating Docker builds
Namespace provides a variety of build environments for you that can be tuned to your needs to deliver optimal build performance. If you have advanced requirements, reach out to support@namespace.so for a detailed consultation.
Namespace Remote Builders
Namespace Runners seamlessly integrate with Remote Builders, allowing you to speed up your GitHub Actions runs further. By offloading expensive Docker builds to Remote Builders, you move their resource needs off the runner, so you can run workflows with smaller (and cheaper) runner shapes.
Remote Builders provide the best performance for most workflows. Each GitHub Action run on Namespace comes pre-configured with Remote Builders by default.
No additional changes to your workflow file are required. docker build, docker/build-push-action, and any docker build-compatible usage are supported out of the box and are automatically offloaded.
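For example, a plain docker build step is transparently offloaded to Remote Builders (the image name is illustrative):
jobs:
  build:
    runs-on: namespace-profile-default
    steps:
      - uses: actions/checkout@v4
      - name: Build image
        # No extra setup needed: the build is offloaded to Remote Builders automatically.
        run: docker build -t my-image:ci .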
Remote Builders rely on Docker's buildx context. Make sure that your workflow does not overwrite the buildx context, as you may inadvertently disable Namespace's configuration.
For example, calling the docker/setup-buildx-action action would overwrite our configuration. Consider removing it or using namespacelabs/nscloud-setup-buildx-action instead.
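If your workflow already includes a setup-buildx step, a sketch of the swap (the @v0 version tag is an assumption; check the action page for the current release):
steps:
  # Replaces docker/setup-buildx-action, which would overwrite Namespace's builder configuration.
  - uses: namespacelabs/nscloud-setup-buildx-action@v0
  - name: Build
    uses: docker/build-push-action@v5
    with:
      context: .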
Building very large images
This feature is advanced. For most users, relying on Remote Builders is the preferred option.
When building very large images, data transfers to Remote Builders can add up. In that case, your Docker builds can benefit from local cross-invocation caching instead of using Remote Builders.
With "Local caching" enabled, Namespace automatically configures your instance to both upload build metadata to your workspace, so it's logged and traced; and caching is configured using a previously attached cache volume.
Make sure to size your cache appropriately to benefit from a high cache hit ratio.
To enable this feature, just open the runner profile configuration and add a cache volume. Next, select Locally cached for your Docker builds.
Caveats
- Build caching using "local caching" is not shared with Remote Builders; each repository uses its own separate cache.
- Although multi-platform builds are supported, only builds of the same platform as the runner itself will experience native performance.
Billing
Using local caching does not lead to any additional compute usage beyond the runner's execution time itself. However, to support the feature, including logging and tracing, each build made with local caching counts towards the total build usage.
Large amounts of concurrent builds
Namespace Remote Builders are configured to offer great performance for many concurrent builds. These defaults provide ideal performance for most customers. If you run a very large number of concurrent builds, please reach out to support@namespace.so and we'll scale your Remote Builders to match your needs.
Disable Build Caching
If you prefer to skip build caching altogether, you have two options:
- Revert the Docker build context to the default: Namespace configures Remote Builders as a separate buildx context. Before invoking a build that should not be cached, you can switch back to the default by calling docker buildx use default. E.g.
jobs:
build:
steps:
- name: Use default builder
run: docker buildx use default
- name: Build and push
uses: docker/build-push-action@v5
with:
context: .
platforms: linux/amd64,linux/arm64
- Disable Remote Builders: You can request that runners created for a particular workflow job do not use Remote Builders.
To disable Remote Builders, just open the runner profile configuration and select No caching for your Docker builds.
High-performance Artifact Storage
This feature is Preview.
Benefit from Namespace's high-performance Artifact Storage to share workflow artifacts between jobs in your workflows.
Simply replace your uses of actions/upload-artifact and actions/download-artifact with namespace-actions/upload-artifact and namespace-actions/download-artifact.
Limitations of Namespace artifacts actions:
- Uploaded artifacts are not displayed in the GitHub Actions UI.
- All uploaded artifacts are retained for 30 days; the retention-days input is not respected.
For example:
jobs:
upload:
name: Demo Upload Archive
runs-on: namespace-profile-default
steps:
- name: Upload Archive
uses: namespace-actions/upload-artifact@v0
with:
name: test-archive
path: ./archive
download:
needs: upload
name: Demo Download Archive
runs-on: namespace-profile-default
steps:
- name: Download Archive
uses: namespace-actions/download-artifact@v0
with:
name: test-archive
path: /tmp/destination
Access to Private Container Registry
GitHub Actions runs on Namespace are pre-configured with authentication credentials to access your workspace's private container registry.
The container registry address is available in the environment variable NSC_CONTAINER_REGISTRY.
For example, if you want to build and push to the private registry, you can run the following step:
- name: Build and push
uses: docker/build-push-action@v5
with:
context: .
push: true
tags: ${{ env.NSC_CONTAINER_REGISTRY }}/my-image:v1
platforms: linux/amd64,linux/arm64
Or, if you need to pull an image from the private registry, you can use the NSC_CONTAINER_REGISTRY variable directly in your commands, for example:
- name: Docker pull
run: docker pull ${{ env.NSC_CONTAINER_REGISTRY }}/my-image:v1
Advanced: Privileged workflows
Namespace Runner Instances run the runner software itself in a container. This approach facilitates software packaging and enables custom base images. See how to use custom base images.
If your workflow requires deeper access to the host system, Namespace can run your workflow as privileged and in the host PID namespace. Simply enable the corresponding experiments in your runner labels, as in the example below:
runs-on:
- nscloud-ubuntu-22.04-arm64-4x16-with-features
- nscloud-exp-features:privileged;host-pid-namespace
If your workflow requirements are not yet covered, please reach out to support@namespace.so.
What's Next?
- Email the team with questions, or leave us feedback.