If you've been busy focusing on the - let's face it - noise around Kubernetes in recent years, you might have missed it happening. While we've been busily solving orchestration and inventing increasingly ergonomic ways to hide YAML, in secret (well, not really in secret) folks have been figuring out how to ship code to production without worrying about servers and orchestration at all.
This is the age of Serverless 2.0. Not FaaS - FaaS (Lambda and friends) was Serverless 1.0.
Print a JAM
It's 2021 and you're creating a new app. You're going to need to store data and process credit cards; you obviously need an analytics stack; and since you're planning a mobile client, you'll need to provide a GraphQL sync service for that too.
So: spin up a Kubernetes cluster and start applying some Operator YAML and GitOps, right?
Not quite. Welcome to Serverless 2.0 (aka PaaS 2.0 - pick your own marketing name)! Note that you didn't spend lots of time messing around directly with FaaS abstractions or setting up API Gateways, and you didn't need to set up a cluster or manage any YAML: you just wrote code and used a couple of services.
Scaling Containers in Serverless 2.0
Many of the big providers now have platforms that take a container image and scale it up or down based on request count, including to zero when unused. Unlike with Serverless 1.0, these containers needn't be simple functions: with Google Cloud Run, IBM Code Engine, or AWS App Runner you can push arbitrary container images and have them automatically scaled. Most of these platforms also make it easy to roll back and forth between revisions of your application and to split traffic between versions (look ma, no Istio...).
So: create a container, push it, let the platform worry about it. Put a Dockerfile in a git repo (or use a buildpack) and tell the platform to deploy it. It now has a URL and will scale to zero when idle.
That's fine, but what about Data?
You are absolutely welcome to set up and run your own database; you are absolutely welcome to spin up a Kubernetes cluster to run it in; and you are absolutely welcome to figure out schema migrations and backups and set up a funky Operator system to manage it all for you. Doing this may be cheaper than using one of the various Serverless 2.0 database services - like Prisma or, more recently, PlanetScale - which handle this for you and have generous free tiers to get started. Or it may not.
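In practice, "the database is someone else's problem" mostly shows up in your code as a single connection string. A sketch of what that looks like, assuming the common `mysql://` URL style that hosted MySQL-compatible services such as PlanetScale hand out (the helper name is my own, not any vendor's API):

```python
from urllib.parse import urlparse

def parse_database_url(url: str) -> dict:
    """Split a DATABASE_URL-style connection string into the pieces a
    driver needs. With a hosted database, this string is roughly the
    whole of your 'database operations' story."""
    parts = urlparse(url)
    return {
        "host": parts.hostname,
        "port": parts.port or 3306,  # MySQL's default port
        "user": parts.username,
        "password": parts.password,
        "database": (parts.path or "/").lstrip("/"),
    }
```

Schema migrations, backups, and scaling live on the other side of that URL, at the provider.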
Much like when Cloud first became a thing, the initial response to this idea (using Data-as-a-Service rather than maintaining your own database) is likely to be that it will be more expensive than running the services yourself, or that regulatory reasons (or just sunk costs) mean you have to rebuild these services in your own cluster. Sometimes - especially temporarily, before economies of scale kick in at the service providers - this will be true.
Please let me write YAML
Ok, so Serverless 2.0 sounds great (you say), but what if I need to run code when events happen? What if I need websockets? How about traffic splitting, or rollback, or feature flags, or observability, or custom container images, or sidecars? Surely then it's time to crank up my YAML-hiding tool of choice and start spinning up some CRDs?
Good news: there will always be use cases that require a Kube cluster! None of the ones above, though - they're all easily implementable in Serverless 2.0 platforms now.
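To make the "run code when events happen" point concrete: on these platforms, events typically arrive as plain HTTP POSTs to your container - Cloud Run, for instance, can receive Pub/Sub push deliveries, whose JSON envelope carries a base64-encoded payload. A sketch of the decoding step; the envelope shape follows the documented Pub/Sub push format, everything else here is illustrative:

```python
import base64
import json

def decode_push_event(body: bytes) -> str:
    """Unwrap a Pub/Sub-style push delivery. Event-driven code on a
    Serverless 2.0 platform is just another HTTP endpoint - no cluster,
    no event-bus infrastructure of your own."""
    envelope = json.loads(body)
    data = envelope["message"]["data"]  # payload is base64-encoded
    return base64.b64decode(data).decode("utf-8")
```

Wire that up as a route in the same container you already deployed, point a subscription at its URL, and you have event handling without touching a cluster.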