platform engineering Mar 14, 2026 8 min read

the serverless monolith is not a contradiction

on deliberately shipping one deployable unit across dozens of serverless functions — and why it's the most underrated platform decision of the last five years.

pramod

co-founder

There is a decade-old instinct that says serverless means microservices. One function per endpoint, one repository per function, one pipeline per repository, one PagerDuty rotation per pipeline. The instinct made sense in 2016, when deployment was the bottleneck and the cost of a cold start was measured in grant money. It makes very little sense now. And yet: I am still walking into engineering estates where a team of twelve is operating ninety-seven Lambda repos, each with its own CI, its own drift, and its own subtle version of utils/date.ts.

The serverless monolith is the correction. It is not a contradiction; it is a refusal to use physical deployment units as the organising principle for logical boundaries.

what actually changed

The original case for microservices rested on two load-bearing assumptions: deployment was expensive, and process isolation was cheap. Both were true when VM-backed services were the alternative. Neither is true today. With serverless runtimes, deployment is free and uniform; with container platforms, process isolation is trivially configured. The thing that is still expensive — and getting more so — is coordination: the merge conflicts you pay in human time, the schema drift you pay in incidents, the distributed tracing overhead you pay in dollars.

A serverless monolith is one codebase, one deploy, one versioned schema, and many — possibly hundreds of — serverless entrypoints. Each entrypoint is a thin handler. All the interesting logic lives in a shared domain module that every entrypoint can import directly, in-process, without a network hop. You can change a function signature and the compiler tells you which of your fifty handlers needs updating, in the PR, not in production.
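A minimal sketch of that shape, with hypothetical names (Transaction, applyFee, processPayment are illustrations, not a real client's code): one pure domain module, one thin handler that imports it in-process. If the applyFee signature changes, every handler that calls it fails to compile in the PR rather than at runtime.

```typescript
// --- domain/transactions.ts (shared, network-free) ---
export interface Transaction {
  id: string;
  amountCents: number;
  currency: string;
}

export function applyFee(tx: Transaction, feeBps: number): Transaction {
  // Pure domain logic: no I/O, trivially unit-testable.
  const fee = Math.round((tx.amountCents * feeBps) / 10_000);
  return { ...tx, amountCents: tx.amountCents + fee };
}

// --- handlers/processPayment.ts (thin serverless entrypoint) ---
// The handler does wiring only; the interesting logic is an
// in-process import, not a network hop to another function.
export async function handler(event: { tx: Transaction }) {
  const settled = applyFee(event.tx, 25); // 25 bps fee, called directly
  return { statusCode: 200, body: JSON.stringify(settled) };
}
```

In a real repo the two sections above are separate files; they are inlined here so the sketch is self-contained.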

the usual objections, shortened

"But now a bug takes everything down." It does not, because the platform still deploys each function independently at the infrastructure layer. You are pinning a version per handler; you are just not forcing every handler to live in a different repository. A bad release of one function does not ship the others. You get atomic rollbacks for free and gradual rollouts with a feature flag.

"But blast radius." Blast radius in production is determined by IAM, by rate limits, and by the database you connect to — not by the repository layout. If your payments.process handler can write to the same database rows your users.delete handler owns, the repo boundary is not saving you. Policy is saving you, or it isn't.

"But team autonomy." Team autonomy is a function of module ownership, not of repository ownership. Two teams can own two modules in the same monorepo without blocking each other. What they cannot do is violate each other's interfaces silently — which is a feature, not a bug.

what a serverless monolith looks like in practice

A recent client was running a payments platform with ~40 serverless functions across 14 repositories. The platform was correct. It was also unbearable. A typical "small" schema change — adding an optional field to a transaction record — required pull requests against six repos, three of which had diverged build pipelines, and one of which had been abandoned by its original owner. Shipping took three days. Most of that time was not review; it was the coordination tax.

We consolidated to a single repository with the following structure:

/src
  /domain       # pure, network-free, unit-testable
  /handlers     # thin serverless entrypoints
  /infra        # terraform + platform plumbing
  /tests

Every handler is a file under /handlers, registered as an independent function in the platform. Every handler imports from /domain like any normal module. The build pipeline produces one deployment artifact per handler but from one shared compile; if the compile fails, no handler ships. Shared code stays shared without a private package registry.
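A sketch of the artifact step, under stated assumptions (the file layout is the one above; artifactPlan and its output shape are illustrative, and the actual bundling would be done by whatever bundler the pipeline uses): enumerate the handler files once, after the single shared compile has passed, and emit one deployable entry per handler.

```typescript
import * as path from "node:path";

// One shared compile, one artifact per handler: each .ts file under
// /handlers becomes an independently deployable function, but all of
// them are built from the same typechecked source tree — if the
// compile fails, this plan is never produced and no handler ships.
function artifactPlan(
  handlerFiles: string[]
): { fn: string; entry: string }[] {
  return handlerFiles
    .filter((f) => f.endsWith(".ts"))
    .map((f) => ({
      fn: path.basename(f, ".ts"), // function name in the platform
      entry: path.join("src/handlers", f), // entrypoint handed to the bundler
    }));
}
```

In practice the file list comes from reading /src/handlers off disk, and each entry is fed to the bundler and the deployment tool; the plan is the whole trick.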

The outcome was not glamorous: schema changes dropped from three days to the same afternoon. The incident rate fell by roughly a third, almost entirely due to the elimination of version-skew bugs between handlers that called each other over the network when they had no business being networked at all.

when not to do this

This pattern is wrong for two populations. First: true multi-tenant platforms where different deployment targets need genuinely different binaries — for example, where a customer-per-instance deployment model is part of the product. Second: organisations where the engineering population is large enough that repo boundaries are carrying real organisational load. Somewhere above roughly 80 engineers working in one codebase, the cost of coordination inside the monorepo exceeds the cost of coordination between separate repositories. Most organisations asking me about this are nowhere near that threshold.

The interesting consequence is that the shape of your deployable unit is a leading indicator of your team design. If you have a handler-per-repo layout with five engineers, you are not architecting for scale; you are architecting for a future org that may never arrive.

the boring version of the principle

Ship one logical codebase. Deploy as many physical functions as the platform rewards you for. Let the compiler, not the network, be your first line of defence against integration bugs. Reserve repository boundaries for the moments when they are genuinely carrying organisational weight — and notice, honestly, how rarely that is.

The serverless monolith is not a contradiction. It is the serverless bet, taken seriously.
