Pricing

Find the plan that best fits your needs

Simple, transparent & flexible pricing for every team.

Developer

Pay as you go
Get started with

Managed GitHub runners

Cloud image building

Cross-invocation caching

Start for Free

No credit card required

Team

$100 per month
Supporting teams with

Everything from Developer

Increased concurrency

E-mail support

Get started

30-day free trial

Business

Most Popular
$250 per month
Enhance your business with

Everything from Team

High concurrency

Very large jobs and builds

Dedicated Slack channel

Get started

30-day free trial

Enterprise

Custom
Empowering enterprises with

Everything from Business

Audit log export

Enterprise SSO

Custom log & metric sinks

Very high build concurrency

Contact sales

Customized plan for higher demand


Frequently asked questions

Have any questions? Email us.

Namespace accounts for usage in "compute units". A compute unit represents the use of 1 vCPU and 2 GB of RAM for one minute, multiplied by the platform multiplier (e.g. 1 for Linux).
For Linux, if you run an instance that has 4 vCPU and 8GB RAM for 5 minutes, then that's:
4 (vCPU) x 5 (minutes) x 1 (Linux multiplier) = 20 unit minutes
For Windows, running an instance with 2 vCPU and 4GB RAM for 6 minutes is:
2 (vCPU) x 6 (minutes) x 2 (Windows multiplier) = 24 unit minutes
For Mac, an instance with 6 vCPU and 14GB RAM running for 3 minutes is:
6 (vCPU) x 3 (minutes) x 10 (Mac multiplier) = 180 unit minutes
We bill a minimum of 1 minute. Beyond that, if fewer than 15 seconds remain past a full minute, they are rounded down; otherwise usage is rounded up to the next minute. For example, 30 seconds of usage is billed as 1 minute, 70 seconds as 1 minute, and 150 seconds as 3 minutes.
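For illustration, here is a minimal Python sketch of this rounding rule (the function is illustrative only, not part of any Namespace API):

```python
def billable_minutes(seconds: int) -> int:
    """Round raw runtime to billable minutes.

    Minimum of 1 minute; fewer than 15 leftover seconds round down,
    15 or more round up to the next minute (sketch of the rule above).
    """
    if seconds <= 60:
        return 1
    full_minutes, leftover = divmod(seconds, 60)
    return full_minutes if leftover < 15 else full_minutes + 1

# The examples above: 30s -> 1 minute, 70s -> 1 minute, 150s -> 3 minutes
assert billable_minutes(30) == 1
assert billable_minutes(70) == 1
assert billable_minutes(150) == 3
```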
Unit minutes usage per shape can be found under "What is the unit minutes usage and price for each shape?" below.
For Linux and Windows, our standard shapes are multiples of 1 vCPU and 2GB RAM.
For shapes with a different ratio (e.g. 1 vCPU, 4 GB RAM), compute units are calculated as:
max(vCPU count, GB RAM divided by 2)
For example, running a Linux instance on 8 vCPU and 32GB RAM for 2 minutes will be:
16 (32GB RAM / 2) x 2 (minutes) x 1 (Linux multiplier) = 32 unit minutes
Unit minutes usage per shape can be found under "What is the unit minutes usage and price for each shape?" below.
Jobs that run on Windows or macOS consume minutes at a higher rate than jobs on Linux: 2x and 10x respectively.

| Platform | Multiplier |
| --- | --- |
| Linux | 1 |
| Windows | 2 |
| macOS | 10 |
| Linux on Apple Silicon | 3 |

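Putting the shape formula and the platform multipliers together, here is a minimal Python sketch of the unit-minute calculation. It mirrors the worked examples on this page: the max(vCPU, RAM / 2) rule above is stated for Linux and Windows, while the macOS example is priced by vCPU count. The function and dictionary names are illustrative only.

```python
# Platform multipliers from the table above.
MULTIPLIERS = {"linux": 1, "windows": 2, "macos": 10, "linux-on-apple-silicon": 3}

def unit_minutes(vcpu: int, ram_gb: int, minutes: int, platform: str) -> float:
    """Unit minutes = compute units x billable minutes x platform multiplier."""
    if platform in ("linux", "windows"):
        compute_units = max(vcpu, ram_gb / 2)
    else:
        # The macOS and Apple Silicon examples on this page are priced by vCPU count.
        compute_units = vcpu
    return compute_units * minutes * MULTIPLIERS[platform]

# Worked examples from this page:
assert unit_minutes(4, 8, 5, "linux") == 20      # standard Linux shape
assert unit_minutes(2, 4, 6, "windows") == 24    # Windows
assert unit_minutes(6, 14, 3, "macos") == 180    # macOS
assert unit_minutes(8, 32, 2, "linux") == 32     # non-standard Linux ratio
```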
Below is a breakdown of the cost of each machine shape per platform. Prepaid is the effective cost of minutes included in your plan; overage is the cost of any usage that exceeds your plan.
Linux

| Machine shape | Prepaid | Overage |
| --- | --- | --- |
| 1 vCPU, 2 GB | $0.001 / min | $0.0015 / min |
| 2 vCPU, 4 GB | $0.002 / min | $0.003 / min |
| 2 vCPU, 8 GB | $0.004 / min | $0.006 / min |
| 4 vCPU, 8 GB | $0.004 / min | $0.006 / min |
| 4 vCPU, 16 GB | $0.008 / min | $0.012 / min |
| 8 vCPU, 16 GB | $0.008 / min | $0.012 / min |
| 8 vCPU, 32 GB | $0.016 / min | $0.024 / min |
| 16 vCPU, 32 GB | $0.016 / min | $0.024 / min |
| 16 vCPU, 64 GB | $0.032 / min | $0.048 / min |
| 32 vCPU, 64 GB | $0.032 / min | $0.048 / min |
| 32 vCPU, 128 GB | $0.064 / min | $0.096 / min |
| 32 vCPU, 256 GB | $0.128 / min | $0.192 / min |
| 32 vCPU, 512 GB | $0.256 / min | $0.384 / min |

Windows

| Machine shape | Prepaid | Overage |
| --- | --- | --- |
| 1 vCPU, 2 GB | $0.002 / min | $0.003 / min |
| 2 vCPU, 4 GB | $0.004 / min | $0.006 / min |
| 2 vCPU, 8 GB | $0.008 / min | $0.012 / min |
| 4 vCPU, 8 GB | $0.008 / min | $0.012 / min |
| 4 vCPU, 16 GB | $0.016 / min | $0.024 / min |
| 8 vCPU, 16 GB | $0.016 / min | $0.024 / min |
| 8 vCPU, 32 GB | $0.032 / min | $0.048 / min |
| 16 vCPU, 32 GB | $0.032 / min | $0.048 / min |
| 16 vCPU, 64 GB | $0.064 / min | $0.096 / min |
| 32 vCPU, 128 GB | $0.128 / min | $0.192 / min |
| 32 vCPU, 256 GB | $0.256 / min | $0.384 / min |
| 32 vCPU, 512 GB | $0.512 / min | $0.768 / min |

macOS

| Machine shape | Prepaid | Overage |
| --- | --- | --- |
| 6 vCPU, 14 GB | $0.06 / min | $0.09 / min |
| 12 vCPU, 28 GB | $0.12 / min | $0.18 / min |
| 12 vCPU, 56 GB | $0.18 / min | $0.27 / min |

Linux on Apple Silicon

Shapes may differ during early access.

| Machine shape | Prepaid | Overage |
| --- | --- | --- |
| 6 vCPU, 14 GB | $0.018 / min | $0.027 / min |
| 12 vCPU, 28 GB | $0.036 / min | $0.054 / min |
| 12 vCPU, 56 GB | $0.054 / min | $0.081 / min |

We invest heavily in the best infrastructure; it is something we care deeply about. We deploy our own hardware across multiple sites, which allows us to offer great performance at a great price.
Linux and Windows amd64 workflows run on high-performance AMD EPYC CPUs.
Linux arm64 workloads run on AmpereOne or on Apple Silicon.
For macOS, most workloads run on Apple M4 Pro, with some on Apple M2 Pro.
Up-to-date usage reports are available in the Dashboard, and are calculated in real time. Check out your current usage here.
To better understand your usage, click on the ‘Explore’ button on the Usage page to break down and filter usage (per platform, shape, GitHub Repository, etc.).
For GitHub jobs, you can check out the Insights page to look at the Duration or Queue time of all GitHub workloads run on Namespace, with breakdowns and filters.
To improve performance and reduce cost, consider the following:
  • Are you using the right shape for your workflows?
    Larger may be faster, and smaller may be cheaper if your runner is oversized (see the sketch below).
  • Are you running on Namespace standard shapes (a ratio of 1 vCPU : 2 GB RAM)?
    This gives you the best price for your usage.
  • Are you using cache effectively?
Our support team is here to help you optimize your setup.
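As a rough illustration of the shape question above, here is a hedged sketch using prepaid Linux rates from the table on this page; the job durations are hypothetical:

```python
# Prepaid Linux per-minute rates from the table above (illustrative subset).
PREPAID_LINUX_PER_MIN = {
    (4, 8): 0.004,    # 4 vCPU / 8 GB
    (8, 16): 0.008,   # 8 vCPU / 16 GB
}

def job_cost(shape: tuple, billable_minutes: int) -> float:
    """Cost of one job on a (vCPU, GB RAM) shape at prepaid rates."""
    return PREPAID_LINUX_PER_MIN[shape] * billable_minutes

# Hypothetical job: 20 billable minutes on 4 vCPU vs. 12 on 8 vCPU.
print(job_cost((4, 8), 20))   # 0.08  -> $0.08
print(job_cost((8, 16), 12))  # 0.096 -> the larger shape costs more here
```

In this hypothetical, the doubled per-minute rate of the larger shape only pays off if it cuts the job duration by more than half.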
You can now run your Windows workflows on Namespace! Windows runners are still in early access while we work toward parity for pre-installed software. We're constantly adding more, so let us know if you're missing something. That said, some of our early customers already report measurably higher performance on Namespace.
For full transparency: Our Windows runners do not yet have support for Cache Volumes or Docker Builders. We expect this to land soon.
To get access, reach out to support@namespace.so
You can now run your Linux arm64 workflows on Apple Silicon! This feature is still in early access, so not all Namespace features may be available yet, and the list of available instance shapes is not final. That said, some of our early customers report unmatched single-core performance for their workloads.
To get access, reach out to support@namespace.so
Namespace offers a host of high-performance caching solutions that lead to major speed-ups during test setup and therefore lower your usage in unit minutes. Because our caches are backed by high-performance local storage, they are especially fast. Our caching integrations (e.g. Bazel, Turborepo) work seamlessly with your tooling and provide very high cache hit ratios, ensuring consistently fast workflows. Get started.
To use cache volumes effectively, you'll need to run some workloads so the cache gets filled and becomes widely available. The more regularly you use your cache volumes, the higher the probability of a cache hit and the greater the savings in compute time.
Namespace uses storage to offer high-performance caching solutions and a built-in private container registry. Storage usage is accounted for in blocks of 100 GB. You can also see storage usage and cache hit ratios on your usage page.
When you attach a cache volume to an instance, we calculate the cost based on two factors: how long your instance was running and the total storage capacity of the cache volume.
This gives us a measurement in GB-hours. For example, if your instance runs for one hour with a 100 GB cache volume attached, you'll be charged for 100 GB-hours.
If you run an instance for 3 hours with a 50 GB cache volume attached, the calculation would be 50 GB × 3 hours = 150 GB-hours.
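A minimal sketch of this GB-hour calculation (the function is illustrative only):

```python
def cache_volume_gb_hours(volume_size_gb: float, instance_hours: float) -> float:
    """GB-hours = cache volume capacity (GB) x instance runtime (hours)."""
    return volume_size_gb * instance_hours

# The examples above:
assert cache_volume_gb_hours(100, 1) == 100   # 100 GB volume attached for 1 hour
assert cache_volume_gb_hours(50, 3) == 150    # 50 GB volume attached for 3 hours
```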
Bazel remote caches are a paid add-on; contact our sales team to enable them and get more information. Bazel caches are billed for the storage they consume as well as for active reads from the cache: storage is $0.20 / GB-month and reads are $0.002 / GB, with a minimum fee of $100 per month. Bazel cache writes are not billed. Enterprise contracts get a discount.
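As a rough sketch of this pricing (assuming the $100 monthly minimum applies to the combined storage and read charge; enterprise discounts are not modeled):

```python
def bazel_cache_monthly_cost(storage_gb: float, reads_gb: float) -> float:
    """Storage at $0.20 / GB-month plus reads at $0.002 / GB.

    Assumption: the $100 monthly minimum applies to the combined charge.
    Writes are not billed; enterprise discounts are not modeled.
    """
    cost = storage_gb * 0.20 + reads_gb * 0.002
    return max(cost, 100.0)

# Hypothetical month: 500 GB of cache storage and 30,000 GB of reads.
print(bazel_cache_monthly_cost(500, 30_000))  # 100 + 60 = 160.0 -> $160
```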

Accelerate your developer team

Join hundreds of teams using Namespace to build faster, test more efficiently, and ship with confidence.