
Dependencies can be easy

We’ve all been there. We just want to add a database or object store, and suddenly we have a whole fleet of problems to solve: (a) figure out which server to run, or which managed infrastructure to use; (b) make sure the resources are created before our application starts; and (c) configure our application with the right endpoints, credentials, and so on.

And most importantly, all of this needs to be repeatable: other developers will need to run the same initialization, and tests will need the same workflow.

What do you do today? Write a README with instructions? Commit a setup.sh shell script to your repository?

Diving into a real example

Let’s pick one example: using an S3 bucket. For development, it's not uncommon to have to:

  • Set up MinIO or LocalStack: Whether you run them manually or via a Docker Compose file, you still need to set up shared credentials and volume storage.
  • Create a bucket: You have two options here: do it manually in the admin interface, or programmatically in your server code (a sketch of this approach follows the list). Both have problems.
    • The first requires manual setup. Do you commit an initialization shell script to your repo? Does your CI use the same script?
    • The latter is not robust for production use. Does your server have credentials to create buckets? Who’s responsible for creating the bucket across the various replicas/function invocations?
  • Configure your application: You have to provide the bucket endpoint and credentials to your application. You can either hardcode them or set them via environment variables. But regardless, figuring out which values need to be provided to your application is something folks do manually.
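
To make the manual workflow concrete, here's a minimal Go sketch of the programmatic approach, the kind of startup code folks end up writing by hand. The environment variable names (S3_ENDPOINT, S3_ACCESS_KEY_ID, S3_SECRET_ACCESS_KEY) and the use of aws-sdk-go are assumptions for illustration; you'd have to define and populate all of them yourself.

package main

import (
	"log"
	"os"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/aws/credentials"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	// Everything below has to be known and wired up by hand: the endpoint of the
	// local MinIO/LocalStack, static credentials, and the bucket name.
	sess, err := session.NewSession(&aws.Config{
		Endpoint:         aws.String(os.Getenv("S3_ENDPOINT")),
		Region:           aws.String("us-east-1"), // placeholder region for a local S3-compatible store
		Credentials:      credentials.NewStaticCredentials(os.Getenv("S3_ACCESS_KEY_ID"), os.Getenv("S3_SECRET_ACCESS_KEY"), ""),
		S3ForcePathStyle: aws.Bool(true),
	})
	if err != nil {
		log.Fatal(err)
	}

	// Create the bucket on startup, and tolerate it already existing.
	if _, err := s3.New(sess).CreateBucket(&s3.CreateBucketInput{Bucket: aws.String("testbucket")}); err != nil {
		if aerr, ok := err.(awserr.Error); !ok || aerr.Code() != s3.ErrCodeBucketAlreadyOwnedByYou {
			log.Fatal(err)
		}
	}
}

Every server or test that might start first needs this logic, which is exactly the repeatability problem described above.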

Enter Encapsulation

Namespace encapsulates these workflows into reusable code and infrastructure, which we call Resource Providers.

Providers instantiate Resources, each representing a resource the application depends on. You declare a resource, reference the Provider you want to use, and Namespace orchestrates all of the resulting dependencies.

Let's see what this means in practice. Let’s use Resources to get our application an S3 bucket. You can find the full example on GitHub.

Note: The following will assume some familiarity with Namespace primitives. The From Docker Compose to Namespace post is a good introduction.

server: {
  resources: {
    dataBucket: {
      class:    "namespacelabs.dev/foundation/library/storage/s3:Bucket"
      provider: "namespacelabs.dev/foundation/library/oss/minio"
 
      intent: {
        bucketName: "testbucket"
      }
    }
  }
 
  env: {
    // Inject values exported by the resource into our application.
    S3_ACCESS_KEY_ID: fromResourceField: {
      resource: ":dataBucket"
      fieldRef: "accessKey"
    }
    // …
  }
}
 

Running ns dev started a development stack. But what happened behind the scenes?

  • Validated configuration: Namespace confirmed that the chosen provider can indeed provide the requested class, and that the input provided (the intent) is valid.
  • Resolved direct dependencies: The MinIO provider itself depends on other resources, which Namespace resolved by adding a MinIO server to the deployment plan and by granting access to two secrets generated for it.
  • Loaded dependent servers: Namespace loaded the MinIO server configuration and resolved any dependencies it has (no other servers, but it requests the generation of two secrets).
  • Produced a DAG that represents the deployment plan: From the resulting dependency graph, Namespace instantiated an execution plan, which includes a compiled list of Kubernetes resources required to support the servers above. These include:
    • A StatefulSet for the MinIO server, a Service that points at the MinIO API, and a PersistentVolume to back MinIO's storage.
    • Two Kubernetes secrets. Namespace will automatically generate their content from a secret specification (e.g., format and data length).
    • A Deployment for the Go server, a Service that points to the Go server's exported API, and a ConfigMap which includes its resulting runtime configuration.
    • A Pod which runs the provider's initialization: in this case, connecting to MinIO and creating the requested bucket if it doesn't exist. The required configuration is injected as a combination of flags and secret mounts that expose the generated secrets above.
  • Deployed the plan to Kubernetes: Namespace's scheduler executed this plan respecting the dependency order:
    • First, it deployed the MinIO server.
    • Then, it deployed the provider initializer and waited for its completion.
    • And then, finally, the scheduler deployed the Go server. The deployment order guarantees that the bucket exists at this point.

These steps are fully automated and repeatable across different environments. So if you're running a test, for example, Namespace orchestrates the same steps in the ephemeral environment created for that test.

Same use-case, different provider

What if we want to use LocalStack instead of MinIO?

LocalStack is a fully functional local cloud stack that helps you develop and test your cloud and serverless applications offline. It emulates various AWS services and, for many applications, is a great choice for local development.

Just change the provider.

server: {
  resources: {
    dataBucket: {
      class:    "namespacelabs.dev/foundation/library/storage/s3:Bucket"
      provider: "namespacelabs.dev/foundation/library/oss/localstack"
 
      intent: {
        bucketName: "testbucket"
      }
    }
  }
  // ...
}

Since LocalStack supports multiple AWS services, a single LocalStack instance is shared to provide them as you add more AWS dependencies to your application.

Taking a deeper look

Resource Classes and Providers

Resources have classes that define their type. You provide their inputs as intents, and your server receives the configuration of the resulting instance. There can be multiple providers for the same resource class.

Contrast this with the earlier examples: we still have the flexibility of using different implementations, but as application developers we no longer have to care about the provider's implementation details.

Runtime configuration

Namespace automatically injects a set of configurations that describe the environment where the application is running: which servers it depends on and, in this case, which resources it consumes. These are available under /namespace.

This configuration makes it simple for applications to configure themselves to the environment they're deployed to.
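
As a sketch of what consuming this looks like, an application could read a runtime configuration file from that directory at startup. The file name (runtime.json) and the fields shown below are assumptions for illustration, not the actual schema.

package main

import (
	"encoding/json"
	"log"
	"os"
)

// Illustrative shape only; the actual runtime configuration schema may differ.
type runtimeConfig struct {
	Environment struct {
		Name    string `json:"name"`
		Purpose string `json:"purpose"`
	} `json:"environment"`
}

func main() {
	// Assumed location; the post only states that the configuration lives under /namespace.
	data, err := os.ReadFile("/namespace/config/runtime.json")
	if err != nil {
		log.Fatal(err)
	}

	var cfg runtimeConfig
	if err := json.Unmarshal(data, &cfg); err != nil {
		log.Fatal(err)
	}

	log.Printf("running in environment %q (%s)", cfg.Environment.Name, cfg.Environment.Purpose)
}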

Environment variable injection

The dynamic nature of configuring infrastructure often means that automatically wiring up your application becomes a complex problem: some properties of the resources you create are only known after they're created. For example, perhaps a certificate is generated as part of creating a resource; you'll need to orchestrate starting your application only after the certificate exists.

Because Namespace orchestrates the entire dependency graph, it can not only provide configuration at runtime but also ensure that environment variables point at newly created resource instances.
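
Tying it back to the earlier example, the server can simply read the injected variables at startup and never worry about creating the bucket. This is a minimal Go sketch; S3_ACCESS_KEY_ID comes from the configuration above, while S3_ENDPOINT and S3_SECRET_ACCESS_KEY are assumed to be wired up the same way.

package main

import (
	"log"
	"os"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/credentials"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	// These values are injected by Namespace from the dataBucket resource; only
	// S3_ACCESS_KEY_ID appears in the snippet above, the other names are assumed.
	sess, err := session.NewSession(&aws.Config{
		Endpoint:         aws.String(os.Getenv("S3_ENDPOINT")),
		Region:           aws.String("us-east-1"), // placeholder region for a local S3-compatible store
		Credentials:      credentials.NewStaticCredentials(os.Getenv("S3_ACCESS_KEY_ID"), os.Getenv("S3_SECRET_ACCESS_KEY"), ""),
		S3ForcePathStyle: aws.Bool(true),
	})
	if err != nil {
		log.Fatal(err)
	}

	// No bucket creation here: the provider's initializer guaranteed that the
	// bucket exists before this server was deployed.
	out, err := s3.New(sess).ListObjectsV2(&s3.ListObjectsV2Input{Bucket: aws.String("testbucket")})
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("testbucket currently holds %d objects", len(out.Contents))
}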

An open ecosystem

Unlike traditional PaaS products, Namespace is an open ecosystem. Anyone can write resource classes and providers.

The Namespace team maintains a selection at namespacelabs.dev/foundation/library, but there is no central resource catalog. Similar to Go, Namespace's package management is distributed, and you can use any dependency available publicly or privately in any Git repository.

What to do now

Share this story on Twitter or join the team on Discord.