May 14, 2026 · 11 min read · Cadence Editorial

How to use Pulumi for infrastructure as code

Photo by [Markus Spiske](https://www.pexels.com/@markusspiske) on [Pexels](https://www.pexels.com/photo/close-up-shot-of-computer-monitor-9858906/)


Pulumi lets you define cloud infrastructure in TypeScript, Python, Go, or .NET instead of HCL, which means your application engineers can ship infra without learning a second language. The fastest path in 2026: install the CLI, run pulumi new aws-typescript, define resources as normal class instances, then pulumi preview and pulumi up. Use Pulumi Cloud's free Individual tier for state, or an S3 backend if you prefer to self-host.

Why Pulumi matters in 2026

The infrastructure-as-code conversation in 2026 is no longer Terraform vs CloudFormation. It is "do we want our app developers writing infra at all, and if yes, in what language?"

Three things have changed since 2023. First, OpenTofu shipped a stable 1.8 and absorbed most of the Terraform community after the BSL license switch, so HCL workflows are now split across two near-identical tools. Second, Pulumi added native Terraform/HCL support in January 2026, meaning you can run existing Terraform modules inside a Pulumi program. Third, AI-native engineering teams are smaller and more full-stack: the same engineer who wrote your Stripe webhook is also provisioning the Lambda that handles it.

When the same person owns app code and infra code, the friction of context-switching between TypeScript and HCL stops being a quirky preference and starts being a real productivity tax. That is the case for Pulumi in 2026.

Pulumi vs Terraform: an honest comparison

Terraform (and OpenTofu) still wins on ecosystem size and operational maturity. The Terraform Registry lists roughly 4,800 providers in 2026; Pulumi's registry is closer to 1,800. If you need a niche provider (PagerDuty, Snowflake, some on-prem ESXi setup), Terraform usually has it first. Terraform also has a deeper bench of consultants, more StackOverflow answers, and a battle-tested HCP Terraform managed offering.

Where Pulumi wins is developer experience for application teams.

| Dimension | Pulumi | Terraform / OpenTofu |
|---|---|---|
| Languages | TypeScript, Python, Go, .NET, Java, YAML | HCL (DSL) |
| State backend | Pulumi Cloud (free Individual), S3, Azure Blob, GCS | Local file, S3, GCS, HCP Terraform |
| Secrets in state | Encrypted by default with per-stack keys | Plaintext in state by default |
| Policy as code | Pulumi Policy (free, OSS) in TS/Python/Rego | Sentinel (paid, HCP only) or OPA |
| Provider count (2026) | ~1,800 native + Terraform bridge | ~4,800 |
| Testing | Jest, pytest, Go testing | terraform test (improving) |
| Free team tier | Free for individuals; teams from ~$100/mo | OpenTofu free; HCP from $20/user/mo |

The honest summary: if your team already lives in HCL, the switching cost is real and often not worth it. If you are starting fresh, or your team is a 4-person TypeScript shop staring down their first VPC, Pulumi removes a category of friction.

When Pulumi is the right call

Four signals that Pulumi is the right pick:

  1. You already have a TypeScript or Python team. Code review, linting, and IDE autocomplete are already configured. Adding @pulumi/aws to package.json is one more dependency, not a new toolchain.
  2. You want type safety on your infra. A typo in an HCL for_each block fails at terraform apply time. The same typo in TypeScript fails at compile time, before you push.
  3. You want to share types between app and infra. This is the big one. If your Lambda handler imports a Zod schema for its payload, your Pulumi program can import the same schema and use it to generate API Gateway request validators. That is impossible in HCL. Combine this with a Zod-based API validation layer and your contract is enforced once, in one language, across runtime and infra.
  4. You want OOP patterns in your infra. Need 12 microservices that all need the same VPC + ECR + IAM role + CloudWatch log group? In Pulumi you write a MicroserviceStack class and instantiate it 12 times. In HCL you write a module and call it 12 times, which works, but reuse is shallower and inheritance does not exist.
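Signal 3 is easiest to see in code. A minimal sketch of the idea, with assumed names throughout: a plain JSON Schema object stands in for the Zod schema (a converter like zod-to-json-schema would produce the same shape), and the infra side would hand that string to something like aws.apigateway.Model's schema property.

```typescript
// shared/schema.ts (hypothetical shared module): one payload contract, defined once.
const signupPayloadSchema = {
  type: "object",
  required: ["email", "plan"],
  properties: {
    email: { type: "string" },
    plan: { type: "string", enum: ["free", "pro"] },
  },
} as const;

// App side: the Lambda handler validates incoming bodies against the shared
// contract at runtime (a toy check; a real validator would enforce types too).
function isValidSignup(body: Record<string, unknown>): boolean {
  return signupPayloadSchema.required.every((key) => key in body);
}

// Infra side: the same object, serialized, becomes the request-validator
// model your Pulumi program attaches to the API Gateway route.
const requestValidatorModelSchema = JSON.stringify(signupPayloadSchema);

console.log(isValidSignup({ email: "a@b.co", plan: "pro" })); // true
console.log(isValidSignup({}));                               // false
```

One schema object, imported by both the handler and the Pulumi program, is the whole trick: change the contract once and both runtime validation and the gateway validator move together.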

Steps

These steps take a Pulumi project from zero to deployed in roughly 20 minutes. This is the literal sequence we'd run on a fresh laptop.

  1. Install the CLI. On macOS: brew install pulumi/tap/pulumi. On Linux: curl -fsSL https://get.pulumi.com | sh. Then pulumi version to confirm. The CLI is a single Go binary, no JVM, no Python runtime required for the CLI itself.
  2. Log in to a backend. Run pulumi login for Pulumi Cloud (free for individuals, browser auth flow). For self-host on AWS: pulumi login s3://my-pulumi-state-bucket. The S3 backend needs a bucket with versioning enabled and a KMS key for encryption. Pick this on day one; switching backends later means exporting and re-importing state.
  3. Create your first stack. mkdir infra && cd infra && pulumi new aws-typescript. The CLI scaffolds Pulumi.yaml, index.ts, package.json, and a Pulumi.dev.yaml config file. It also installs @pulumi/aws and @pulumi/pulumi. A "stack" is Pulumi's term for an isolated deployment environment (dev, staging, prod). Each stack has its own state and config.
  4. Define resources in index.ts. Replace the boilerplate with your real infra. Resources are class instances: const vpc = new awsx.ec2.Vpc("main", { cidrBlock: "10.0.0.0/16" });. The awsx package provides higher-level "crosswalk" components (VPC with sane defaults, ECS cluster with Fargate, etc.) that save 100+ lines vs raw @pulumi/aws.
  5. Preview the diff. pulumi preview shows exactly which resources will be created, updated, or destroyed, with full property-level diffs. This is your last chance to catch mistakes before they hit AWS. Read every line. Get in the habit of treating preview like git diff before commit.
  6. Deploy. pulumi up runs the diff again and prompts for confirmation. Type yes and watch resources provision in parallel where the dependency graph allows. A typical VPC + RDS + ECS stack takes 8 to 12 minutes on first run, mostly waiting on RDS.
  7. Manage state. State is automatically saved to whichever backend you logged into. For Pulumi Cloud, it shows up in the web UI immediately with full audit history. For S3, it lands as a versioned JSON blob. Lock conflicts surface as "state is currently locked" errors; fix with pulumi cancel if the lock is stale.
  8. Add secrets with ESC. pulumi config set --secret dbPassword <value> encrypts the value with the stack's per-stack key before writing to config. For shared secrets across stacks, use Pulumi ESC (Environments, Secrets, Config) to centralize and pull them at runtime. Never commit unencrypted .env files into the same repo.
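The steps above collapse into a short terminal session. A sketch, assuming macOS, the aws-typescript template, and a placeholder bucket name for the self-hosted variant:

```shell
# 1-2. Install the CLI and log in to a backend
brew install pulumi/tap/pulumi          # Linux: curl -fsSL https://get.pulumi.com | sh
pulumi version
pulumi login                            # Pulumi Cloud (browser auth)
# pulumi login s3://my-pulumi-state-bucket   # self-hosted alternative, pick on day one

# 3. Scaffold the project and its dev stack
mkdir infra && cd infra
pulumi new aws-typescript

# 4-6. Edit index.ts, review the diff, then deploy
pulumi preview
pulumi up

# 8. Add a secret to the stack config
pulumi config set --secret dbPassword "$(openssl rand -base64 24)"
```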

Working examples

AWS VPC + RDS + ECS

A typical web-backend stack in Pulumi looks like this:

import * as pulumi from "@pulumi/pulumi";
import * as aws from "@pulumi/aws";
import * as awsx from "@pulumi/awsx";

const config = new pulumi.Config();
// set with: pulumi config set --secret dbPassword <value>
const dbPassword = config.requireSecret("dbPassword");

const vpc = new awsx.ec2.Vpc("main", {
  cidrBlock: "10.0.0.0/16",
  natGateways: { strategy: "Single" }, // saves ~$100/mo vs one NAT per AZ
});

const db = new aws.rds.Instance("app-db", {
  engine: "postgres",
  engineVersion: "16.4",
  instanceClass: "db.t4g.small",
  allocatedStorage: 20,
  dbSubnetGroupName: new aws.rds.SubnetGroup("db", {
    subnetIds: vpc.privateSubnetIds,
  }).name,
  username: "app",
  password: dbPassword,
  skipFinalSnapshot: true,
});

const cluster = new aws.ecs.Cluster("app");
const service = new awsx.ecs.FargateService("api", {
  cluster: cluster.arn,
  taskDefinitionArgs: {
    container: {
      name: "api",
      image: "myorg/api:latest",
      cpu: 256,
      memory: 512,
      essential: true,
      environment: [
        {
          name: "DATABASE_URL",
          // interpolate lifts both Outputs (password and endpoint) into one string
          value: pulumi.interpolate`postgres://app:${dbPassword}@${db.endpoint}/app`,
        },
      ],
    },
  },
});

That is a few dozen lines for what would be a 200-line Terraform module. The .apply method, and the pulumi.interpolate tagged template built on it, are Pulumi's idiomatic way to compose values that are only known after deployment (the RDS endpoint, in this case).

Cloudflare Workers and Pages

import * as pulumi from "@pulumi/pulumi";
import * as cloudflare from "@pulumi/cloudflare";
import * as fs from "fs";

const config = new pulumi.Config();
const cfAccountId = config.require("cfAccountId");
const cfZoneId = config.require("cfZoneId");

const worker = new cloudflare.WorkerScript("api", {
  accountId: cfAccountId,
  name: "api-worker",
  content: fs.readFileSync("../dist/worker.js", "utf-8"),
});

new cloudflare.WorkerRoute("api-route", {
  zoneId: cfZoneId,
  pattern: "api.example.com/*",
  scriptName: worker.name,
});

Same typed style as the AWS example. That consistency across providers is a real workflow win.

State backend: Pulumi Cloud vs S3

The choice is mostly about team size and audit needs.

Pulumi Cloud (free Individual tier) is the default and the right call for solo developers and small teams. You get state storage, encrypted secrets, deployment history, a web UI showing every resource and every change, and team-based RBAC on the paid tiers. The free tier covers individual use forever; the Team tier starts around $100/month and adds shared organizations, SSO, and concurrency limits. Enterprise pricing (typically $300+/month per seat depending on volume) adds SAML, audit logs, and self-hosted agents.

S3 backend is the right call when compliance demands data stays in your AWS account, when you want to avoid a vendor for state, or when you are running a fleet of stacks and don't want per-seat pricing. Setup: a versioned S3 bucket and a KMS key for state encryption. Pulumi handles locking with lock files written into the bucket itself, so no DynamoDB table is needed (unlike Terraform's S3 backend). You give up the web UI, RBAC, and deployment history. You keep full control.
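The S3 setup is a few AWS CLI calls plus the login. A sketch, with the bucket name and region as placeholders:

```shell
# Versioned bucket so every state revision is recoverable
aws s3api create-bucket --bucket my-pulumi-state-bucket --region us-east-1
aws s3api put-bucket-versioning \
  --bucket my-pulumi-state-bucket \
  --versioning-configuration Status=Enabled

# Default encryption with KMS
aws s3api put-bucket-encryption \
  --bucket my-pulumi-state-bucket \
  --server-side-encryption-configuration \
  '{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"aws:kms"}}]}'

# Point Pulumi at the bucket
pulumi login s3://my-pulumi-state-bucket
```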

For most startups in 2026, start on Pulumi Cloud free, upgrade to Team when you have more than 3 engineers touching infra, and only consider S3 self-host when compliance or cost forces the conversation.

Common gotchas

A short list of things that will bite you, ordered by how often they show up in real projects.

  • State file conflicts. Two engineers run pulumi up on the same stack at the same time. The lock should prevent it, but stale locks happen (laptop closed mid-deploy). Symptom: "state is currently locked". Fix: pulumi cancel after confirming nobody is actually deploying. Process fix: deploy from CI only, never from laptops, for any stack with more than one human owner. The same discipline you'd apply to a healthy CI/CD pipeline belongs here.
  • Secrets leaking into outputs. Anything you export from a Pulumi program lands in state. If you export a connection string with the password embedded, it is encrypted at rest but readable to anyone with stack admin. Use pulumi.secret() to wrap exports that contain sensitive values; outputs marked secret are masked in the CLI and require explicit --show-secrets to view.
  • Async output composition. New users try to write const url = `https://${domain}.com` where domain is an Output. Instead of the value, you get Pulumi's "Calling [toString] on an [Output<T>] is not supported" warning baked into the string. Use pulumi.interpolate or .apply() instead. The type system catches this, but the error message can be confusing the first few times.
  • Provider version pinning. Pulumi auto-installs the latest provider version when you run pulumi up. A minor version bump in @pulumi/aws can change resource defaults. Pin versions in package.json and only upgrade intentionally. This is the same hygiene you would apply to any production dependency.
  • Pulumi Policy not enforced in CI. Writing policies (pulumi-policy) in TypeScript is great. Forgetting to run pulumi up --policy-pack ./policies in CI means none of those policies actually run in production. Make it a required CI step or use a deployment-time policy enforcement service.
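The output-composition gotcha is plain JavaScript string coercion at heart: interpolating any non-string object calls its toString(). A toy stand-in below makes the failure mode concrete without needing @pulumi/pulumi installed; the toString message paraphrases Pulumi's real warning, and apply() mirrors the real API's shape.

```typescript
// Stand-in for a Pulumi Output: the value isn't available yet, so toString()
// can only return a warning, never the value itself.
class FakeOutput {
  constructor(private promised: string) {}
  // Paraphrase of the warning @pulumi/pulumi returns from toString()
  toString(): string {
    return "Calling [toString] on an [Output<T>] is not supported.";
  }
  // apply() runs your function once the value is known (synchronous here)
  apply<U>(fn: (value: string) => U): U {
    return fn(this.promised);
  }
}

const domain = new FakeOutput("example");
const broken = `https://${domain}.com`;                 // warning text, not a URL
const correct = domain.apply((d) => `https://${d}.com`); // "https://example.com"
console.log(broken);
console.log(correct);
```

In a real program, pulumi.interpolate`https://${domain}.com` is the terser equivalent of the apply() form.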

When you can skip Pulumi entirely

Three honest cases where Pulumi is the wrong tool.

  • You're a 2-person team pre-revenue with one Vercel project and a Supabase database. Click through the dashboards. IaC has overhead; wait until you have more than one environment or 5+ cloud resources.
  • Your team is deeply HCL-fluent with a large existing codebase. A 30,000-line Terraform monorepo is an asset. The conversion cost likely exceeds the DX benefit. Use OpenTofu to escape the licensing concern instead.
  • You need a niche provider that only exists on Terraform. Pulumi's Terraform bridge covers most of these, but if you are wiring up an obscure on-prem appliance with a 2018-vintage Terraform provider, native HCL is faster. Same logic applies to your container runtime: read our Kubernetes vs Docker Swarm comparison before assuming you need k8s.

How Cadence engineers ship Pulumi work

Most Pulumi rollouts we see at Cadence land on the Senior tier ($1,500/week). The work is rarely "stand up one VPC"; it is "migrate three environments off Heroku to AWS, set up Pulumi state, write the policy pack, train the team." That is a 4 to 6 week scope for one Senior engineer, often pairing with a founder for the first week to lock down the shape of the migration. If you're costing the move, our writeup on migrating from Heroku to AWS walks through the line items.

Every engineer on Cadence is AI-native by default, vetted on Cursor and Claude Code fluency before they unlock bookings, which matters here because Pulumi work benefits enormously from inline AI assistance: "convert this Terraform module to Pulumi TypeScript" or "write a policy pack that flags any S3 bucket with public access" are exactly the kind of prompts that turn a 3-day task into a 3-hour task.

If you want a sanity check on whether your current stack is ready for Pulumi, run it through our Ship or Skip stack audit to get an honest grade before you commit to the migration.

Booking a Pulumi engineer: Cadence shortlists vetted engineers in 2 minutes with a 48-hour free trial. Senior tier handles full Pulumi rollouts including state migration and policy packs. Weekly billing, replace any week.

FAQ

How long does a Pulumi rollout take?

For a fresh project: under a day to ship a useful stack. For migrating an existing Terraform codebase: budget 2 to 6 weeks depending on size, with one senior engineer leading and the rest of the team pairing on conversions. The conversion tool (pulumi convert) handles 70 to 80 percent of HCL automatically; the rest is manual review.

What does Pulumi cost in 2026?

Pulumi Cloud is free for individual use forever (unlimited resources, basic state management). Team plans start around $100/month for small organizations. Enterprise tiers with SSO, audit logs, and self-hosted agents typically run $300+/month, with custom pricing above certain seat counts. Self-hosting state on S3 is free apart from S3 and KMS costs (under $5/month for most teams).

Should I use Pulumi if my team is 3 engineers?

Yes, if you are doing more than one cloud environment or have any infra that survives more than 6 months. Pulumi pays for itself the second time you redeploy a stack from scratch. The exception is if your team is already deeply HCL-fluent; then OpenTofu is less disruption.

Can Pulumi import existing infrastructure?

Yes. pulumi import pulls existing AWS, GCP, or Azure resources into your stack and generates the corresponding Pulumi code. The generated code needs cleanup, but it gets you 80 percent of the way to managing existing infra under Pulumi without recreating anything.
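As an illustration, adopting an existing S3 bucket (the resource type is real; the logical name and bucket name are examples):

```shell
# pulumi import <type> <name> <id>
pulumi import aws:s3/bucket:Bucket app-logs my-existing-logs-bucket
# The CLI adds the bucket to stack state and prints generated TypeScript to
# paste into index.ts; imported resources carry protect: true by default.
```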

Is Pulumi safe to use in production at scale?

Yes. Pulumi is used by Snowflake, Mercedes-Benz, and BMW in production. The state backend (Pulumi Cloud or S3) and the resource provisioning use the same provider SDKs as Terraform under the hood for most resources, so reliability characteristics are similar. The bigger production risks are organizational (who can deploy what stack), not technical, and those are solved with policy packs, RBAC, and CI-only deploys.
