> The median product engineer should reason about applications as composites of high-level, functional Lego blocks where technical low-level details are invisible.
We don't make buildings from Lego blocks. We do use modular components in buildings (ceramic bricks, steel beams, etc.), but they are cemented or welded together into a monolithic whole.
In my opinion, "serverless" (which, as others have noted, is an horrible misnomer since the server still exists; true "serverless" software would run code exclusively on the client, like desktop software of old) suffers from the same issue as "remote procedure call" style distributed software from back when that was the fashion: introducing the network in place of a simple synchronous in-process call also introduces several extra failure modes.
From what I recall about that situation, they had a really stupid architecture that was using S3 as intermediate storage and processing video multiple times across multiple stages.
For the same amount of compute, Lambda is priced far higher than Fargate, which itself is priced higher than EC2. People run large workloads on k8s with the base workload's compute fully covered by dedicated EC2 instances not because it's fun, but because it saves you a lot of $.
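To put rough numbers on it (illustrative us-east-1 list prices from memory, and Lambda's documented ~1 vCPU at 1769 MB; check current rates before trusting this):

    # Rough $/hour for roughly one vCPU's worth of sustained compute.
    lambda_per_gb_second = 0.0000166667    # illustrative x86 Lambda rate
    lambda_hourly = (1769 / 1024) * 3600 * lambda_per_gb_second  # ~$0.10/hr
    fargate_hourly = 0.04048 + 2 * 0.004445   # 1 vCPU + 2 GB, ~$0.05/hr
    print(f"Lambda ~${lambda_hourly:.3f}/hr vs Fargate ~${fargate_hourly:.3f}/hr")

Roughly 2x at full utilization, with EC2 undercutting both; scale-to-zero only flips the math when the workload is mostly idle.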
Python is super slow and inefficient, and let's say that the build environment is not so nice.
However, it's so easy to write functioning software in Python that the alternative sometimes is not more efficient code, it's no code.
Lambda, in theory, follows a similar paradigm: if you can click a button and have a service that scales to zero (with logging and monitoring), then you're more likely to make toy webhooks and tiny services. If instead I have to make a build pipeline and a Docker container, wrangle some YAML, and configure service accounts and a service definition with the right labels and annotations, well, that's a decent chunk of work that means I'm probably going to think a bit longer about even deploying my little toy service that might see one request a day.
Going to reiterate though: I do not advocate for serverless in production. If you seriously think you're building something that will scale, it's fiscally illiterate to use a managed serverless provider.
If you're under the impression that CI/CD and observability are easy with Lambda, I have a bridge to sell you. I worked on a large-scale pure-serverless project where we wrote more CDK code than application code.
Serverless is more of a billing philosophy than a design philosophy in my opinion.
Serverless is all about outsourcing the infrastructure for scaling a microservice. How you design the service itself, or the system it's a part of, can vary widely.
There are definitely design constraints to going serverless, but I'd argue those are largely just the constraints of going with microservices rather than a monolith.
> Serverless is all about outsourcing the infrastructure for scaling a micro service.
Technically, it is all about removing the server from your application. The name literally tells you so. It is true that removing the server can offer some benefits in the scaling realm. In particular, it allows you to scale to 0 now that you no longer have to keep the process alive to serve requests.
Of course, that is not tradeoff-free. Scaling to 0 brings you right back to the old problem of slow initialization once a request does come in, which was the primary driver for why we moved to hosting the server in the application in the first place.
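To make the contrast concrete, a minimal sketch of the same endpoint written both ways (the handler signature follows the AWS Lambda convention, but the shape is similar on other platforms):

    # Server in the application: the process owns the socket and must
    # stay alive between requests, so it cannot scale to zero.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class Hello(BaseHTTPRequestHandler):
        def do_GET(self):
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"hello")

    # HTTPServer(("", 8080), Hello).serve_forever()

    # "Serverless": no socket, no loop. The platform listens and invokes
    # this per request, which is what allows scaling to zero.
    def handler(event, context):
        return {"statusCode": 200, "body": "hello"}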
> Technically, it is all about removing the server from your application.
Technically its about removing the server from my list of responsibilities. There is still a server running my app, it just isn't managed by me and likely comes with auto-scaling features.
> Technically its about removing the server from my list of responsibilities.
Not necessarily. Back in the serverless CGI days you would still typically manage Apache. What you say is probably why you would choose serverless in 2025, but not always.
> There is still a server running my app
There may be a separate server (e.g. Apache) that runs your app, but that is someone else's app, not your app. Calling someone else's app your own would be rather silly.
Scaling to 0 is pretty much the only benefit you get, which is not much of a benefit at any reasonable scale. Fargate runs on the same Firecracker VMs as Lambda; they just don't scale to 0.
And even then it is not strictly necessary. Cloud Run comes to mind as offering the ability to scale to zero, yet allows (maybe even requires?) maintaining the server in the application. The real benefit of "serverless" is that you no longer have to worry about the server, it being removed. Granted, that is not a big worry much these days with all the great server frameworks available.
I’d much rather write 20 lines of boilerplate code and a Dockerfile than deal with 100s lines of CDK code, distributed tracing, and the associated observability challenges.
Why? Just throw your program executable in the cgi-bin directory like we did in the olden days, go wash your hands, and then call it a day. Simple. Serverless!
Serverless doesn't have to be over-engineered garbage, even if some bored technologists looking for a promotion and/or to pad their resume try their best to make it so.
The pricing is different for the same amount of compute; is that news for you? Lambda is priced an order of magnitude higher than Fargate, which is priced significantly higher than EC2. For small-scale workloads your TCO might be lower with the higher-level abstraction.
Because we use Nix recipes to deploy our Datalog-ish backend connectors that talk to Amazon Elastic Beanstalk via a bespoke database we wrote in Julia that's deployed on Snowflake Container Cloud. But there's a missing backslash somewhere and nobody can find it because even ChatGPT cannot decipher the error messages.
Maybe it's an expired certificate, but the guy who knew how that stuff works built a 12,000-line shell script that uses awk, perl, and a cert library that for some reason requires both CMake and Autotools. It also requires GCC 4.6.4 because nobody can figure out how to turn off warnings-as-errors.
> Microservices made a canonical example of how easy it is to miscalibrate that bet. Since the trend started ~15y ago,....
What started was the rebranding of distributed systems.
We have had Sun RPC (The network is the computer, a slogan now owned by Cloudflare), DCE, CORBA, DCOM, RMI, Jini, .NET Remoting, SOAP, XML-RPC, JSON-RPC,....
Simple monoliths are much easier to reason about and debug. And the costs are much easier to estimate.
Serverless functions are quite interesting for certain use cases, but those are mostly additions to the main application. I'd hesitate to build a typical web application with mostly CRUD around serverless, it's just more complexity I don't need. But for handling jobs that are potentially resource intensive or that come in bursts something like Lambda would be a good fit.
I understand the appeal of serverless, especially for small stuff (we have a few serverless projects at work, and I've built some hobby projects using Lambda), but in my experience the DevEx is such a dealbreaker. Testing changes or debugging an issue? Forget about it.
Without tooling to run a serverless service locally, this is always going to be a sticking point. This is fine for hobby projects where you can push to prod in order to test (which is what I've ended up doing) but if you want stronger safeguards, it's a real problem.
We're all-in on serverless / cloud-native for our platform (document management); it works really well for our model, as we deploy into the customer's AWS account.
The initial development learning curve was higher, but the end result is a system that runs with high reliability in customer clouds that doesn't require customers (or us) to manage servers. There are also benefits for data sovereignty and compliance from running in the customer's cloud account.
But another upside to serverless is the flexibility we've found when orchestrating the components. Deploying certain modules in specific configurations has been more manageable for us with this serverless / cloud-native architecture vs. past projects with EC2s and other servers.
The only downside that we see is possible vendor lock-in, but having worked across the major cloud providers, I don't think it's an impossible task to eventually offer Azure and GCP versions of our platform.
There's a huge grey area between "I want the response of a warmed-up lambda" and "I don't have enough hits that it is actually warmed up"; pair that with certain language "runtimes" like the JVM and there you have it.
'Serverless' has its uses, but not for everything:
- Serverless can get very expensive
- DevEx is less than stellar, can't run a debugger
- Vendor lock-in
- You might be forced to update when they stop supporting older runtime versions
We've looked at these tradeoffs over and over at places I work.
There's always a part of the stack (at least on the kinds of problems I work on) that is CPU-intensive. That part makes sense to have elastic scaling.
But there's also a ton of the stack that is stateful and / or requires long buildup to start from scratch. That part will always be a server. It's just much easier.
For my own projects, I prefer Lambda. It comes with zero sysadmin, costs zero to start and maintain, and can more easily scale to infinity than a backend server. It's not without costs, but most of the backend services I use can easily work in Lambda or a traditional server (FastAPI, axum), so it is a two-way door.
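A sketch of that two-way door, assuming the Mangum adapter (which wraps an ASGI app as a Lambda handler):

    from fastapi import FastAPI
    from mangum import Mangum  # ASGI-to-Lambda adapter

    app = FastAPI()

    @app.get("/")
    def root():
        return {"hello": "world"}

    # Door one: run it as a traditional server, e.g. `uvicorn main:app`.
    # Door two: point Lambda at this and the same app runs serverless.
    handler = Mangum(app)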
Because tools like lambda are expensive
Because it locks us into a cloud provider
Because the architectures tend towards function explosion. Think CommonFunctions.java, but all the calls are on the network. What could have been 2 containers and RabbitMQ has become 50 lambdas and 51 SQS queues.
Because distributed observability is hard
The ESB people became serverless-function people and they brought their craziness with them. I'm busy cleaning up what should be a fairly simple application but instead has 300 lambdas.
All that said, serverless managed services like databases are useful.
Before we all go into why lambda doesn’t work, remember that companies are happily handing many many millions of dollars to AWS each year, and will continue to do so for some time.
IMO, the dev workflow is significantly worse, integration testing is harder, and I don't see the value of "scale to zero" when the alternative is a $5/mo VPS.
Agree with you, but you're paying for the potential of a sudden burst in traffic, planned or unplanned. Are you going to maintain 5000 servers when you may only use them for an intense period of a few hours on a single day of the month? That's the canonical serverless pitch. I'd hate to develop a new pipeline using serverless as my dev environment.
I understand the value of something like this for seasonal businesses / black friday / etc; but for normal companies, how likely is it to _suddenly_ blow up in traffic?
If you are lucky enough to have your company go viral and receive a sudden spike in traffic, will the rest of the infrastructure tolerate it? Will your database accept hundreds of concurrent connections, or will it tip over?
If you need to engineer and test the auto-scaling capabilities of the rest of your infrastructure, is there value in not needing to think about the scaling of your APIs?
These may sound snarky, but they are real questions -- I used to administer ~300k CPU cores, so I have some trouble imagining the use-cases for serverless
The optimistic tone at the start of the article might just be a hallucinatory strawman setup.
But as a probably old dog, I fail to see the allure of these technologies(*).
When I read the copy trying to peddle them, to me it sounds quite like someone saying "Heey... PSST! Wanna borrow $5000 in cash? I can give it to you right now! Don't worry about 'interest rates', we'll get back to that LATER".
When I build stuff out of 'serverless', I find it rather difficult to figure out what my operation costs are going to be; I usually learn later through perusing the monthly bills.
I think the main two things I have appreciated(?) are:
(1) I can publish/update functions on the cloud in 1-5 seconds, whereas the older web services I also use often take 30-120 seconds (not minutes, sorry) to 'flip around' for each publish.
(2) I can publish/deploy relatively small units of code with 'functions'. But again, that is not quite accurate. It's more like 'I need to include less boilerplate' with some code to deploy it, because to do anything relevant I more or less need to publish the same amount of domain/business-logic code as I used to with the older technologies.
Apart from that, I mostly see downsides:
- my 'function/serverless' code becomes very tied-to-vendor.
- testing a local dev setup is either impossible or convoluted, so I usually end up doing my dev work directly against cloud instances.
I'm probably just old dog, but I much prefer a dev environment that allows me to work on my own laptop, even if the TCP/IP cable is yanked.
Oh yeah, and spit on you too, YAML :-)
They found a curse to match the abomination of "coding in XML languages" of 20 years ago...
They're useful in a small set of situations. If you have a particular job that runs infrequently but is burstable, it doesn't make sense to have a server hanging around for just that purpose.
My current employer standardized on serverless and for many things it works well enough, but from my standpoint it's just more expensive.
I work for a community project that is building a decentralized orchestration mechanism intended, among other things, to democratise access to serverless open compute while also being cloudless.
Take a look at the project at https://nunet.io to know more about it!
I built a serverless startup (GalaticFog) about 8 years ago, had to shut it down. Market never developed. There were some obvious lessons learned.
First, most companies thought they needed to do containers before serverless, and frankly it took them a while to get good at that.
Second, the programming model was crap. It's really hard to debug across a bunch of function calls that are completely separate apps. It's just a lot of work, and it made you want to go monolith and containers.
Third, the spin-up time was a deal killer in that most people would not let that go, and wanted something always running so there was no latency. Sure, workloads exist that do not require that, but they are niche, and serverless stayed niche.
I always found the whole thing odd, personally. The Venn diagram of people who both need to run a service in the cloud AND cannot manage an EC2 instance is a seemingly small set of people. I never saw the advantage to it, and it's got plenty of drawbacks.
To me it seems much more intuitive to think in terms of actual servers. Lambda seems like chicken nuggets, but I wanna eat, say, a decent rotisserie, not nuggets.
>Something I’m still having trouble believing is that complex workflows are going to move to e.g. AWS Lambda rather than stateless containers orchestrated by e.g. Amazon EKS. I think 0-1 it makes sense, but operating/scaling efficiently seems hard. […]
This isn't really saying anything about serverless though. The issue here is not with serverless but that Lambda wants you to break up your server into multiple smaller functions. Google Cloud Run[0] lets you simply upload a Dockerfile and it will run it for you and deal with scaling (including scaling to zero).
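And the contract is pleasantly small; the container just has to serve HTTP on whatever port the PORT environment variable names. A minimal sketch:

    # Minimal Cloud Run-style container entrypoint: serve HTTP on $PORT.
    import os
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"ok")

    port = int(os.environ.get("PORT", "8080"))
    HTTPServer(("", port), Handler).serve_forever()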
I think it’s all about cost analysis. That said, there are definitely some services that are worth outsourcing, like smtp, until you get to a certain size.
Separately, when you factor in data privacy, your decision making tree will certainly change quickly.
I still find the DevEx of serverless terrible compared to the well-established monolith frameworks available to us.
The YAML config, IAM permissions, generating requests and responses, it's all so painful to get anything done.
Admittedly I speak as a software engineer primarily building CRUD apps, where frameworks have had decades of development. I can see use cases for event-driven applications where serverless may make life easier. But for CRUD, currently no chance.
Serverless can be useful for very specific tasks, such as processing files you upload, things that should happen in the background, but if you already have a simple monolith web app, I don't see why going serverless just to go serverless will help you.
I do see its usefulness, but it's not a one-size-fits-all tool.
> Admittedly I speak as a software engineer primarily building CRUD apps
Ya, this is the majority of us.
Or maybe we just lack frameworks that provide the same developer experience but with transparent serverless deployment?
Here is an open source framework that my company makes that I think meets your requirements:
https://github.com/dbos-inc/dbos-transact-py
You can build your software as a monolith and deploy it with one command to your own cloud or our cloud.
I find serverless to be a breeze, with zero sysadmin costs compared to setting up VPS, EC2, doing your own custom monitoring, etc. Each to their own, however.
And gateway+lambda is a near-perfect "dumb crud" app, though it is not without a startup cost.
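For what it's worth, the whole "dumb crud" handler can be one function switching on the API Gateway proxy event. A sketch (field names follow the REST proxy integration format; persistence is stubbed out):

    import json

    def handler(event, context):
        method = event["httpMethod"]          # REST proxy integration event
        if method == "GET":
            return {"statusCode": 200,
                    "body": json.dumps({"path": event["path"]})}
        if method == "POST":
            item = json.loads(event["body"] or "{}")
            # ...persist item, e.g. to DynamoDB (stubbed out here)...
            return {"statusCode": 201, "body": json.dumps(item)}
        return {"statusCode": 405, "body": "method not allowed"}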
> with zero sysadmin costs compared to setting up VPS
If you need RDS, for example, you need the VPS.
It only looks good on the outside.
There is no good reason to build a distributed monolith. You can always think of/design your monolith as a collection of (micro-)services and get the best of both worlds.
I find FaaS best when needing to automate something completely unrelated to what goes into serving the customer. Stuff like bots to report CWV metrics from DataDog to a Slack channel.
A good reason to build a non-distributed monolith, though.
I think that's true for smaller shops. Larger shops start building their developer experience over everything and you can make it work.
But that means you're not starting with serverless, and it's your pivot from the original monolith.
If you use AWS CDK the DX is amazing.
This misses the main factor, I think: Vendor lock-in.
There is no unification of APIs; every provider has their own bespoke abstractions, typically requiring heavy integration into further vendor-specific services, more so if you are to leverage USPs.
Testing and reproducing locally is usually a pipe dream (or takes significantly more effort than the production deploy). Migrating to a different cloud usually requires significant rewrites and sometimes rearchitecting.
This is the reason I tend not to use a serverless solution in most cases.
I want my code to be written and executed on my machine in a way that can at least kind of resemble the production execution environment. I want a binary that gets run and some IO access, most of the time.
If I have a VM or a "serverless"-style compute like Fargate on ECS, I can define an entry point, some environment variables, and we're off to the races in a very similar environment to my local (thank god for containers and VMs).
The _idea_ of lambda and the similar services is awesome to me, but it's just such a PITA to deal with as a developer, at least in my experience.
Example: Vercel and Netlify, both running on top of AWS, yet their serverless offering is a tiny subset of Lambda's capabilities.
There are a few platform abstractions. Quarkus, a Java framework, has Funqy, an extension that abstracts the differences between something like AWS Lambda and Knative triggers, and feels quite easy to use.
https://quarkus.io/guides/funqy
In Python there is lithops, which provides nice Executor primitives that can run on a wide range of cloud services (AWS Lambda, GCF, etc.).
https://github.com/lithops-cloud/lithops
Omg the Python code examples are center aligned. But it looks sweet
That's extremely funny in the one language that cares about alignment.
(& the argument that I keep using against significant whitespace, which is that all sorts of other tools assume it can mess around with it with no downsides)
People should probably click before downvoting... this is what it looks like in the README:

    from lithops import FunctionExecutor

    def hello(name):
        return f'Hello {name}!'

    with FunctionExecutor() as fexec:
        f = fexec.call_async(hello, 'World')
        print(f.result())

If you copy/paste it the indentation is correct, it's just the display formatting for some reason.

I'm building a serverless platform with the familiar interface of Kubernetes: https://kapycluster.com. Does this fit your expectations?
It does look like it! Personally I'm off the k8s train and don't currently have a use-case but best of luck!
Feedback: why make a clear distinction between "magic node" and "BYON"? Two new concepts to learn, when I feel a value prop for some users would be to not have to think about these distinctions? (Just talking about wording and communication here; can you get the value prop across with less reification?)
Thanks for the feedback! I’ve been thinking about the wording too—and you might be right, perhaps highlighting the Magic Node as the mainstay vs BYON being a smaller, side feature might help sell the main value prop. I’ll try clearing that up!
Thanks again!
Cheers! What I meant was: skip even naming/introducing the two words/concepts at all.
"Managed node" and "self-hosted nodes" are examples of more familiar concepts you can utilize to communicate.
Ah, I see! That helps a ton, thanks for the tip!
Google Cloud Run and Azure Container Apps both let you run an arbitrary Docker image without having to deal with custom setups. Both scale automatically, so they are serverless. AWS has App Runner, but it doesn't scale to zero.[0]
[0] https://github.com/aws/apprunner-roadmap/issues/9 (amusingly the issue OP posts on HN)
Lambda does as well. You can even use their runtime interface client to run your function within the same wrapper that Lambda uses irl.
Can I upload my web server as a Docker image to Lambda and have it run forever there? I thought Lambdas were supposed to be more short-lived (like a couple hours), is that not the case? It's been a while since I actually looked at Lambda because GCP's Cloud Run is so clean.
There is also Knative, which Cloud Run is based on.
Because most applications have 27 active users per day and a $10/month VPS can handle 100,000.
With a SQLite database, at that.
This article misses the most important reason not to use serverless: cost. It's way more expensive to run serverless than it is to run any other format; even something like AWS Fargate is better than Lambda if your Lambda is running just 5% of the time.
The second reason is even more important though: time. How many of my systems are guaranteed to stop after 15 minutes or less? Web servers wouldn't like that; anything stateful doesn't like that. Even compute-heavy tasks where you might like elastic scaling fall down if they take longer than 15 minutes sometimes, and then you've paid for those 15 minutes at a steep premium and have to restart the task.
Serverless only makes sense if I can do it for minor markup compared to owning a server and if I can set time limits that make sense. Neither of these are true in the current landscape.
This is why I use serverless (API Gateway + Lambda) for super-low-traffic stuff. If I have a cron that runs 24 times a day for 12 seconds, or a service that occasionally gets a request every few days, it makes sense not to deal with the overhead and waste of a server or container running constantly.
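The back-of-the-envelope math is stark. A sketch using AWS's published per-GB-second rate from when I last looked (treat the numbers as illustrative, and I'm ignoring the tiny per-request fee):

    # 24 runs/day x 12 s at 128 MB, vs. an always-on box.
    price_per_gb_second = 0.0000166667     # illustrative x86 Lambda rate
    seconds_per_month = 24 * 30 * 12       # 8,640 s of actual compute
    gb_seconds = seconds_per_month * (128 / 1024)
    print(f"Lambda: ~${gb_seconds * price_per_gb_second:.2f}/month")  # ~$0.02
    print("Always-on VPS/container: ~$5-10/month, idle ~99.7% of the time")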
For me, it's because Rails has continued to be an excellent solution in every application I've ever needed to build, whether it's a project with 1 user or 10 million, or with a dev team of 1 or 100.
Every time I try to solve a problem with anything other than Rails, I run into endless issues and headaches that would have been already solved if I just. used. Rails.
Even when you have a problem set bigger than Rails, you can keep everything in the Rails world and use something like Sidekiq to manage most of the backend complexity. For many cases, reasonable polling works as well as event-driven architecture, but if you absolutely have to do that, one-off lambdas that talk to Sidekiq or other parts of your Rails stack work well enough.
> The median product engineer should reason about applications as composites of high-level, functional Lego blocks where technical low-level details are invisible
This is a good way to get a nonfunctioning product. Or at least a lot of frustrating meetings.
The thing is, "serverless" still has a server in it, it's just one that you don't own or control and instead lease by short timeslices. Mostly that doesn't matter, but the costs are really there.
they constantly try to escape
from the complexity outside and within
by dreaming of abstractions so perfect that no one will need to be good
but the latency that is will shadow
the "simple" that pretends to be
grug wonder why big brain take hardest problem, factoring system correctly, and introduce network call too
seem very confusing to grug
This is a very midwit meme: the beginner uses a monolith, the midwit uses serverless microservices, the expert uses a monolith.
I've heard the noise of a virtual machine,
Now I'm stuck in the reality of backlash
And cashed-in chips.
Because it's more expensive for less performance with less control.
If it were 5% worse performance for 7% more cost, most people would probably not bat an eye.
When it can be 50% less performant for 200% more cost, eventually someone is going to say: sure, there's overhead to owning that, but I will be at a major competitive advantage if I can do it even just OK. And it turns out that for most businesses, doing it at the scale they need isn't all that difficult to get right.
Indeed... I've run on Hetzner for 20 years with triple redundancy, plus VPSes for batch processing/CI and some internal tasks. My costs are fixed, and only on very big database alters/upgrades/migrations does our service have any downtime.
I have a friend who recently made a stupid bug in his processing pipeline on AWS. He woke up one morning and saw a message from his bank that his CC was over the limit.
When we have a bug, our Nagios sends us a message that responses are more than 150% of average and we do a rollback.
So it's not only the risk of vendor lock-in, but also the surprising bills, policy changes, updates, and other third-party risks you end up with.
This highly depends on workload. We migrated a service that generates terabytes of content to send to customers each day. We moved the content generation from J2EE to Java Lambdas and our costs went from $6K/month of EC2 (on savings plans, even) to ~$400/month in Lambda, SQS, and ElastiCache/Redis costs, and the work was done in 1/8th the time. Mind you, our content is highly bursty, where we need to be able to generate the content within seconds of initiation.
Serverless also means a lot of things. We also serve static content from an S3 bucket and CloudFront. Nothing else to manage once it's set up.
The flip side of serverless is you really do need to think about state yourself. The J2EE code was rock solid in reliability, including recovering from almost every kind of issue you can imagine over a decade (database, connectivity, software crashes).
> The median product engineer should reason about applications as composites of high-level, functional Lego blocks where technical low-level details are invisible. Serverless represents just about the ultimate abstraction for this mindset.
I think the answer is in the first sentence. A lot of engineers make products that don't touch the internet. This concept is lost in the noise quite a bit.
Because serverless doesn't exist. Serverless just means it runs on someone else's servers, just like the cloud. And 10 years down the road people have forgotten how to run basic things, but Bezos buys Panama.
Wireless routers also have wires.
Routers are much simpler to use when you connect to them with an Ethernet cable.
Not all abstractions and simplified services are good in all situations.
I really wish the 3.5mm headphone jack weren't being replaced with just Bluetooth. 3.5mm has worked for me 100% of the time. Bluetooth is regularly a piece of garbage.
Besides, Bluetooth usually re-encodes your audio before streaming it, which means even if you used the best codec to encode it from FLAC with carefully chosen settings, it will still suffer from being transcoded from one lossy format to another at the moment you listen to it.
I read Apple avoids that with their AirPods if your audio is already encoded in a supported format, but I don’t know if other general systems are smart about it.
In software, a server refers to an application that listens for requests from a client.
If you remember the olden days of web development, when CGI was king, the web applications didn't listen. Instead, a separate web server (e.g. Apache) called upon the application as a subprocess and communicated with it using system primitives like environment variables, stdin, and stdout.
Over time, we started moving away from the CGI model, moving the server process into the application itself. While often a fronting web server (e.g. nginx) would proxy the requests to the application, technically the application was able to stand on its own.
Serverless returns to the old CGI model, although not necessarily using the CGI protocol anymore, removing the server from the application. The application is less a server, hence the name.
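For anyone who never saw one, a complete CGI-era "application" really was this small; note there is no server anywhere in it (a sketch using the standard CGI environment variables):

    #!/usr/bin/env python3
    # A complete CGI program: no listening socket, no event loop.
    # The web server sets env vars, pipes the body to stdin, reads stdout.
    import os
    import sys

    method = os.environ.get("REQUEST_METHOD", "GET")
    path = os.environ.get("PATH_INFO", "/")
    length = int(os.environ.get("CONTENT_LENGTH") or 0)
    body = sys.stdin.read(length)

    print("Content-Type: text/plain")
    print()  # blank line ends the headers
    print(f"You sent a {method} to {path} with {len(body)} bytes")

Which is also why you could test these straight from the shell: export REQUEST_METHOD and friends, run the file, read stdout.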
It is too bad Plan 9 did not take off; from what I read, that system was designed for a serverless environment. You can use resources like memory, disk, and CPU cycles from many other Plan 9 systems at the same time.
Of course, I think that would be a DRM nightmare for big corps. One could stream items another person's system owns for "free" without dealing with companies.
I agree the name isn't the best, but serverless as a hosting model has used the same definition for many years now.
They aren't your servers and the server processes running your code are only active temporarily, usually with auto-scaling features.
Serverless doesn’t mean no-server. It means someone else’s server. Their system. Their rules. Their way or the highway. No thank you.
I hate the term "serverless". It's a misnomer to the extent that it feels like it was designed to deliberately mislead. Even vague consultant-speak like "externally provisioned infrastructure" would feel more accurate.
There is an apocryphal story going around at my company, which in all honesty I don't believe to be true, but it's too good to not believe. :)
Back when the hype was virtualization (so probably mid-2000s, before my time at the company), a big project was run to try moving to virtual machines. After the research phase had been deemed a success, they were gearing up to go into production and put in a hardware order. This was supposedly rejected by an executive who complained that they should not need physical servers if they do everything on virtual machines now.
It seems it is only a misnomer if you are too young to remember how these types of applications used to be written. They weren't always servers. In the early days they were subprocess modules[1]. "Serverless" is a return to the subprocess model, seeing the application lose the server, or to put it another way the application is less a server.
This must be why they say programming is dead once you turn 40: You can no longer communicate with the young-ins.
[1] https://en.wikipedia.org/wiki/Common_Gateway_Interface
> It seems it is only a misnomer if you are too young to remember how these types of applications used to be written. They weren't always servers. In the early days they were subprocess modules[1]. [1] https://en.wikipedia.org/wiki/Common_Gateway_Interface
No, it's even worse a misnomer when you are old enough to remember these days. They were CGI modules... running under a server. They were not "without a server". They didn't work without a server.
And in these days, we did have plenty of applications without any server. For instance, desktop applications using local in-process databases were very common, and plenty of what people nowadays do within their browser (connecting to a remote server on the other side of the world) was instead done using these local-only desktop applications. These applications are what could legitimately claim the moniker of "serverless". Not something which can only work when under the control of a server.
> They didn't work without a server.
Not true at all. You can use CGI scripts from the shell just fine. And you almost certainly did, to aid with testing! Per the CGI specification, communication is through environment variables, stdin, and stdout. There was not a server in the application like we saw later. Since around the mid-2000s, when CGI fell out of fashion, applications of this nature have usually embedded their own server, listening on port 3000 (probably). "Serverless" sees removal of the server from the application again, moving back to a process-based concept similar to what we did when CGI was the thing to use, although the protocols may be different nowadays. It is not in reference to a specific technology like CGI, rather the broader idea.
> And in these days, we did have plenty of applications without any server.
"These types of applications", not "all applications"...
I did start dabbling in computers in the early 80s, but I think there are far more people doing that now who have no concept of what actually happens and are often sold pups, because things are given deliberately obscure and cool-sounding names. (And people who name things like that are often extremely hostile when you ask questions; see my comment here: https://news.ycombinator.com/item?id=42549723) Of course buyer should beware, but I still think it's not OK to do.
I'm from even before that, and it makes no sense to me.
I understand both what you say and what "serverless" commonly means, I'm just saying it's essentially arbitrary. A symbol with no etymology that holds water.
All words are arbitrary, but how does it not make sense? "Server" is a well understood term in software. "Less a X" is well understood in English to mean something akin to "without X", or "not having X". Serverless is short for "less a server", which succinctly indicates exactly what it is: The application is without a server. Which is a shift in how these types of applications are written, as since the mid-2000s it was common to include a server as part of the application. "Serverless" makes more sense than most terms we use in industry. It literally describes itself, although does require some historical context to understand why "server" is relevant.
Understandably, if you don't come from the software world you might think a server is a person who does work at your request, like serve you food at a restaurant. Is that where the problem lies? There are definitely still people, servers, serving the care to the systems that run the software. But that is clearly not the context in which it is used. It is very much used as a software term, and as a software term alone.
> "Server" is a well understood term in software.
This might be why people are having trouble with it. "Cloud" and "serverless" both refer to hardware, not software.
"Cloud" was moving the hardware from something you managed, either in office or a datacenter, to using machines someone else managed, but it was still server-oriented (such as software in VMs or bare-metal that you managed).
"Serverless" drops even that, using ephemeral code that doesn't care what server it's running on, so the people managing the hardware can move your software as-needed.
> "Cloud" and "serverless" both refer to hardware, not software.
Not really. "Cloud" refers to a software abstraction that tries to hide the existence of actual hardware namely to remove dependence on the availability of any specific hardware component (like, as in, being able to transparently move to another physical machine without the user ever knowing). It is clearly a software term. I can't go down to my local Walmart and buy a "Cloud". Amazon won't ship a "Cloud" to my place. There is nothing "hard" implied by the word as used in this context. I'll grant you that it more or less maintains some kind of "virtual" hardware concept. Perhaps that is what you mean?
There is no hardware association with "serverless". It refers to a pattern for developing a certain breed of applications that resemble servers, but without the server. "Server" is definitively a software term as used here. A server is an application that listens for requests from a client, typically on a network port if we are to dive into implementation territory, but could be something else like a unix domain socket. Much like CGI of yore, these "serverless" applications shed the server, relying on the runtime environment to fill in the missing pieces.
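To illustrate, a server in that software sense is little more than this (a minimal Python sketch; port 3000 chosen arbitrarily):

    # A "server" in the software sense: a long-lived process that
    # listens for client requests and answers them.
    import socketserver

    class EchoHandler(socketserver.StreamRequestHandler):
        def handle(self):
            # Answer each client by echoing back its first line.
            self.wfile.write(self.rfile.readline())

    with socketserver.TCPServer(("127.0.0.1", 3000), EchoHandler) as srv:
        srv.serve_forever()

Delete that listening loop and let the runtime environment hand you requests instead, and you have the essence of "serverless".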
This doesn't make sense to me because "fooless" means "lacks foo"
You are describing a thing that is not a foo, or does not do foo, not a thing that possesses no foo.
So, I say, this is just something you're saying, not a definition I ever heard or would have inferred from context or from others' usage. And it's not a new term by now, so there have been several years for me to have gained this impression or understanding if anyone else were using it that way.
> "fooless" means "lacks foo"
Exactly. "Serverless" lacks a server. Which, granted, wouldn't make sense with no context, but when you remember that the same types of applications were previously written to include a server and no longer include a server when written in a "serverless" fashion, that is exactly where the differentiation is found. It literally describes what it is.
> And it's not a new term by now, so there have been several years for me to have gained this impression or understanding
I have never, ever, seen "serverless" refer to anything else outside of the previous commenter who thinks it has something to do with hardware. But it clearly has nothing to do with hardware. There aren't warehouses full of "serverlesses" ready to be loaded onto trucks. It is not something physical. It is not in any way "hard", that much is obvious. It is undeniably a software term. So, what, exactly, is your impression?
To try and put my first comment more clearly: They both refer to distance removed from hardware. Neither refers purely to software things like webservers or application servers, which is how you and others described them in what I was first responding to.
"Server" does not solely refer to software, it is also a name used for hardware. Think along the lines of mainframe, host, hypervisor, and so on.
> does not solely refer to software, it is also a name used for hardware.
A computer that primarily runs a server (or multiple servers) is often colloquially called a server in recognition of what it is doing, but server is still in reference to the software. If you repurposed that hardware to sit on someone's desk to run Excel, most wouldn't call it a server anymore.
> Think along the lines of mainframe
I am not sure that works. I expect a mainframe running Excel on someone's desk would still be considered a mainframe by most as mainframe refers to an architecture. A server, in the colloquial hardware sense, could be of any kind of architecture, including a mainframe!
Regardless, serverless is clearly not associated with any kind of hardware. It is about removing, or not needing, the server in your application, typically in the context of adopting a service like AWS Lambda, which offers the aforementioned runtime environment that negates the need for your application to be a server.
How much do your data centers cost to build, roughly? How do you get global bandwidth without peering?
How much would a VPS or a rented server cost where you can boot your own OS, be the sole tenant of the SSD, and not have to fight over IOPS with the other video-converting dude using the same machine?
So you agree there is some price/convenience function around control. And “someone else’s” server is a long way before serverless on that scale.
> The median product engineer should reason about applications as composites of high-level, functional Lego blocks where technical low-level details are invisible.
We don't make buildings from Lego blocks. We do use modular components in buildings (ceramic bricks, steel beams, etc.), but they are cemented or welded together into a monolithic whole.
In my opinion, "serverless" (which, as others have noted, is a horrible misnomer since the server still exists; true "serverless" software would run code exclusively on the client, like desktop software of old) suffers from the same issue as "remote procedure call"-style distributed software from back when that was the fashion: introducing the network in place of a simple synchronous in-process call also introduces several extra failure modes.
How about cost at scale? Amazon itself shifted Prime Video from serverless to mostly containers and it resulted in huge savings.
From what I recall about that situation, they had a really stupid architecture that used S3 as intermediate storage and processed the video multiple times across multiple stages.
In fact, the solution still used serverless afaik: https://www.youtube.com/watch?v=BcMm0aaqnnI
(take that u/UltraSane! https://news.ycombinator.com/item?id=42506205)
It likely could have been solved by serverless too, by using local storage and having the pipeline condensed into a single action...
FD: I'm not a fan of serverless for production anything.
For the same amount of compute, Lambda is priced far higher than Fargate, which itself is priced higher than EC2. People run large workloads on k8s with the base workload's compute fully covered by dedicated EC2 instances not because it's fun, but because it saves you a lot of $.
Totally agree.
But it's the Python argument.
Python is super slow and inefficient, and let's say the build environment is not so nice.
However, it's so easy to write functioning software in Python that the alternative sometimes is not more efficient code, it's no code.
Lambda, in theory, follows a similar paradigm: if you can click a button and have a service that scales to zero (with logging and monitoring), then you're more likely to make toy webhooks and tiny services. If instead I have to make a build pipeline and a Docker container, wrangle some YAML, and configure service accounts and a service definition with the right labels and annotations...
Well, that's a decent chunk of work, which means I'm probably going to think a bit longer about even deploying my little toy service that might see one request a day.
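To be concrete about the click-a-button end of that spectrum, a toy webhook really can be this small (a hedged sketch; the event shape assumes an API Gateway proxy integration):

    import json

    def handler(event, context):
        # API Gateway proxy integrations deliver the HTTP body as a
        # string under "body"; the returned dict maps back onto an
        # HTTP response.
        payload = json.loads(event.get("body") or "{}")
        return {
            "statusCode": 200,
            "body": json.dumps({"received": payload}),
        }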
Going to reiterate though: I do not advocate for serverless in production. If you seriously think you're building something that will scale, it's fiscally illiterate to use a managed serverless provider.
If you're under the impression that CI/CD and observability are easy with Lambda, I have a bridge to sell you. I worked on a large-scale, pure-serverless project where we wrote more CDK code than application code.
That’s sad, I saw a one click deploy button in my IDE and made an assumption.
What's the point then if it's not easier?
It scales to 0. Scaling to 0 and not having to write a few lines of Dockerfile are the only tangible benefits.
Without a link and a breakdown that makes no sense. We switched from blue to square and saw a honeysuckle savings.
Amazon runs both and serverless is a billing model. Many serverless runtimes consume containers.
Serverless, like microservices, is a design philosophy.
Serverless is more of a billing philosophy than a design philosophy in my opinion.
Serverless is all about outsourcing the infrastructure for scaling a microservice. How you design the service itself, or the system it's a part of, can vary widely.
There are definitely design constraints to going serverless, but I'd argue those are largely just the constraints of going with microservices rather than a monolith.
> Serverless is all about outsourcing the infrastructure for scaling a microservice.
Technically, it is all about removing the server from your application. The name literally tells you so. It is true that removing the server can offer some benefits in the scaling realm. In particular, it allows you to scale to 0 now that you no longer have to keep the process alive to serve requests.
Of course, that is not tradeoff-free. Scaling to 0 brings you right back to the old problem of slow initialization once a request does come in, which was the primary driver for why we moved to hosting the server in the application in the first place.
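The usual mitigation, sketched here for a Python Lambda (expensive_init stands in for whatever your real setup is): hoist the setup to module scope so it runs once per cold start and gets reused across warm invocations.

    import time

    def expensive_init():
        # Stand-in for loading a model, warming a cache, opening a
        # connection pool, etc.
        time.sleep(2)
        return {"ready": True}

    # Module scope: executed once per cold start, reused while warm.
    STATE = expensive_init()

    def handler(event, context):
        # Warm invocations skip expensive_init entirely; only the
        # first request after scaling from zero pays for it.
        return {"statusCode": 200, "body": "ok" if STATE["ready"] else "not ready"}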
> Technically, it is all about removing the server from your application.
Technically it's about removing the server from my list of responsibilities. There is still a server running my app; it just isn't managed by me and likely comes with auto-scaling features.
> Technically it's about removing the server from my list of responsibilities.
Not necessarily. Back in the serverless CGI days you would still typically manage Apache. What you describe is probably why you would choose serverless in 2025, but it wasn't always so.
> There is still a server running my app
There may be a separate server (e.g. Apache) that runs your app, but that is someone else's app, not your app. Calling someone else's app your own would be rather silly.
Scaling to 0 is pretty much the only benefit you get. Which is not much of a benefit at any reasonable scale. Fargate uses the same Firecracker VMs as Lambda; it just doesn't scale to 0.
And even then it is not strictly necessary. Cloud Run comes to mind as offering the ability to scale to zero, yet it allows (maybe even requires?) maintaining the server in the application. The real benefit of "serverless" is that you no longer have to worry about the server, it being removed. Granted, that is not much of a worry these days with all the great server frameworks available.
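For contrast, a minimal sketch of a Cloud Run-style service, where the application keeps its own server and the platform merely injects the port to bind via $PORT:

    import os
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(b"hello from the in-app server\n")

    # The application owns the listening loop; the platform only
    # tells it which port to use.
    port = int(os.environ.get("PORT", "8080"))
    HTTPServer(("", port), Handler).serve_forever()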
I'd much rather write 20 lines of boilerplate code and a Dockerfile than deal with hundreds of lines of CDK code, distributed tracing, and the associated observability challenges.
Some of the frameworks used to support serverless applications are completely over-engineered cesspools, no doubt, but it doesn't have to be that way.
It has nothing to do with the framework. If it's a web app, you have to manage the deployment of multiple Lambdas and an API Gateway.
Why? Just throw your program executable in the cgi-bin directory like we did in the olden days, go wash your hands, and then call it a day. Simple. Serverless!
Serverless doesn't have to be over-engineered garbage, even if some bored technologists looking for a promotion and/or to pad their resume try their best to make it so.
https://archive.is/ehJbY
> Scaling up the Prime Video audio/video monitoring service and reducing costs by 90%
> The move from a distributed microservices architecture to a monolith application helped achieve higher scale, resilience, and reduce costs.
The pricing is different for the same amount of compute; is that news to you? Lambda is priced an order of magnitude higher than Fargate, which is priced significantly higher than EC2. For small-scale workloads your TCO might be lower with the higher-level abstraction.
A billing switch? Crazy, then, that Amazon would publish a report of themselves rewriting an application if the distinction were merely one of bookkeeping.
You would think they would want to sell their expensive solution.
There is tangible overhead to all the extra tracking Lambda infra has to do.
Because we use Nix recipes to deploy our Datalog-ish backend connectors that talk to Amazon Elastic Beanstalk via a bespoke database we wrote in Julia that's deployed on Snowflake Container Cloud. But there's a missing backslash somewhere and nobody can find it because even ChatGPT cannot decipher the error messages.
Maybe it's an expired certificate, but the guy who knew how that stuff works built a 12,000-line shell script that uses awk, perl, and a cert library that for some reason requires both CMake and Autotools. It also requires GCC 4.6.4 because nobody can figure out how to turn off warnings-as-errors.
The problem with astronaut architecture is that nobody tells you about (or has a handle on) all the space junk.
> Microservices made a canonical example of how easy it is to miscalibrate that bet. Since the trend started ~15y ago,....
What started was the rebranding from distributed systems.
We have had Sun RPC (The network is the computer, a slogan now owned by Cloudflare), DCE, CORBA, DCOM, RMI, Jini, .NET Remoting, SOAP, XML-RPC, JSON-RPC,....
Client-Server, N-Tier Architecture, SOA, WebServices,...
Apparently the new trend is Microservices-based, API-first, Cloud-native, and Headless with SaaS products, aka MACH.
Simple monoliths are much easier to reason about and debug. And the costs are much easier to estimate.
Serverless functions are quite interesting for certain use cases, but those are mostly additions to the main application. I'd hesitate to build a typical web application with mostly CRUD around serverless; it's just more complexity I don't need. But for handling jobs that are potentially resource intensive or that come in bursts, something like Lambda would be a good fit.
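For example, the bursty-background-job case often looks like an S3-triggered function. A sketch assuming the standard S3 event notification shape, with the heavy lifting elided:

    import urllib.parse

    def handler(event, context):
        # Each record describes one uploaded object; Lambda fans out
        # invocations as uploads burst in, then scales back to zero.
        for record in event["Records"]:
            bucket = record["s3"]["bucket"]["name"]
            key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
            # Kick off the resource-intensive work here.
            print(f"processing s3://{bucket}/{key}")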
You pay per unit of compute, and thus you have unpredictable costs. People and businesses don't like unpredictable things; we tend to avoid them.
"Look at how streamlined our organization is. We have no infrastructure to manage!" -IT Director with 206 different contract renewal dates.
I understand the appeal of serverless, especially for small stuff (we have a few serverless projects at work, and I've built some hobby projects using Lambda), but IME the DevEx is such a dealbreaker. Testing changes or debugging an issue? Forget about it.
Without tooling to run a serverless service locally, this is always going to be a sticking point. This is fine for hobby projects where you can push to prod in order to test (which is what I've ended up doing), but if you want stronger safeguards, it's a real problem.
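The workaround that has kept me sane: since a handler is just a function, you can at least poke it locally with a hand-built event, no AWS involved (a minimal sketch):

    import json

    def handler(event, context):
        # The function you would otherwise deploy to Lambda.
        return {"statusCode": 200, "body": event.get("body", "")}

    if __name__ == "__main__":
        fake_event = {"body": json.dumps({"ping": True})}
        response = handler(fake_event, None)
        assert response["statusCode"] == 200
        print(response)

It's nowhere near a real integration test (no IAM, no API Gateway, no event-source quirks), which is exactly the sticking point.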
We're all-in on serverless / cloud-native for our platform (document management); it works really well for our model, as we deploy into the customer's AWS account.
The initial development learning curve was higher, but the end result is a system that runs with high reliability in customer clouds that doesn't require customers (or us) to manage servers. There are also benefits for data sovereignty and compliance from running in the customer's cloud account.
But another upside to serverless is the flexibility we've found when orchestrating the components. Deploying certain modules in specific configurations has been more manageable for us with this serverless / cloud-native architecture vs. past projects with EC2s and other servers.
The only downside that we see is possible vendor lock-in, but having worked across the major cloud providers, I don't think it's an impossible task to eventually offer Azure and GCP versions of our platform.
There's a huge grey area between "I want the response time of a warmed-up Lambda" and "I don't have enough hits for it to actually stay warm". Pair that with certain language "runtimes" like the JVM and there you have it.
'Serverless' has its uses, but not for everything
- Serverless can get very expensive
- DevEx is less than stellar; can't run a debugger
- Vendor lock-in
- You might be forced to update when they stop supporting older runtime versions
We've looked at these tradeoffs over and over at places I work.
There's always part of the stack (at least on the kinds of problems I work on) that is CPU intense. That part makes sense to have elastic scaling.
But there's also a ton of the stack that is stateful and / or requires long buildup to start from scratch. That part will always be a server. It's just much easier.
For my own projects, I prefer Lambda. It comes with zero sysadmin work, costs zero to start and maintain, and can more easily scale to infinity than a backend server. It's not without costs, but most of the backend frameworks I use (FastAPI, Axum) work just as well in Lambda as on a traditional server, so it is a two-way door.
1. Speed
2. Price
3. Vendor lock-in
* https://einaregilsson.com/serverless-15-percent-slower-and-e...
Because tools like Lambda are expensive.
Because it locks us into a cloud provider.
Because the architectures tend towards function explosion. Think CommonFunctions.java, but all the calls are on the network. What could have been 2 containers and RabbitMQ has become 50 Lambdas and 51 SQS queues.
Because distributed observability is hard.
The ESB people became serverless-function people and they brought their craziness with them. I'm busy cleaning up what should be a fairly simple application, but instead it has 300 Lambdas.
All that said, serverless managed services like databases are useful.
For anyone here struggling with AWS Amplify or AWS CDK - I recently discovered https://sst.dev/ for serverless deployment.
It doesn't solve all problems (it isn't a CRUD framework), but it does make the developer experience much better compared to Amplify.
Before we all go into why lambda doesn’t work, remember that companies are happily handing many many millions of dollars to AWS each year, and will continue to do so for some time.
IMO, the dev workflow is significantly worse, integration testing is harder and I don't see the value on "scale to zero", when the alternative is a $5/mo VPS.
Agree with you, but you're paying for the potential of a sudden burst in traffic, planned or unplanned. Are you going to maintain 5000 servers when you may only use them for some intense period, a few hours of a single day in a month? That's the canonical serverless pitch. I'd hate to develop a new pipeline using serverless as my dev environment.
I understand the value of something like this for seasonal businesses / black friday / etc; but for normal companies, how likely is it to _suddenly_ blow up in traffic?
If you are lucky enough to have your company go viral and receive a sudden spike in traffic, will the rest of the infrastructure tolerate it? Will your database accept hundreds of concurrent connections, or will it tip over?
If you need to engineer and test the auto-scaling capabilities of the rest of your infrastructure, is there value in not needing to think about the scaling of your APIs?
These may sound snarky, but they are real questions -- I used to administer ~300k CPU cores, so I have some trouble imagining the use-cases for serverless
The optimistic tone at the start of the article might just be a hallucinatory straw-man setup. But, probably being an old dog, I fail to see the allure of these technologies(*).
When I read the copy trying to peddle them, to me it sounds quite like someone saying "Heey.. PSST! Wanna borrow $5000 in cash? I can give it to you right now! Don't worry about 'interest rates', we'll get back to that LATER".
When I build stuff out of 'serverless', I find it rather difficult to figure out what my operation costs are going to be; I usually learn later through perusing the monthly bills.
I think the two main things I have appreciated(?) are
(1) that I can publish/update functions in the cloud in 1-5 seconds, whereas the older web services I also use often take 30-120 SECONDS (not minutes, sorry) to 'flip around' for each publish.
(2) I can publish/deploy relatively small units of code with 'functions'. But again, that is not quite accurate. It's more like 'I need to include less boilerplate' with the code I deploy. Because to do anything relevant, I more or less need to publish the same amount of domain/business-logic code as I used to with the older technologies.
Apart from that, I mostly see downsides:
- my 'function/serverless' code becomes very tied to the vendor.
- testing in a local dev setup is either impossible or convoluted, so I usually end up doing my dev work directly against cloud instances.
I'm probably just an old dog, but I much prefer a dev environment that allows me to work on my own laptop, even if the TCP/IP cable is yanked.
Oh yeah, and spit on you too, YAML :-) They found a curse to match the abomination of "coding in xml languages" of 20 years ago..
They're useful in a small set of behaviors. If you have a particular job that is run infrequently but is burstable, it doesn't make sense to have a server hanging around for just that purpose.
My current employer standardized on serverless and for many things it works well enough, but from my standpoint it's just more expensive.
For what I need it sounds overly complex and expensive, when a $5/mo VPS works just fine.
Any time AWS is mentioned I know it's going to be some huge expensive setup.
Shameless plug.
I work for a community project that is building a decentralized orchestration mechanism intended, among other things, to democratise access to serverless open compute while also being cloudless.
Take a look at the project at https://nunet.io to know more about it!
I built a serverless startup (GalaticFog) about 8 years ago and had to shut it down. The market never developed. There were some obvious lessons learned.
First, most companies thought they needed to do containers before serverless, and frankly it took them a while to get good at that.
Second, the programming model was crap. It's really hard to debug across a bunch of function calls that are completely separate apps. It's just a lot of work, and it made you want to go monolith and containers.
Third, the spin-up time was a deal-killer, in that most people would not let it go and wanted something always running so there was no latency. Sure, workloads exist that do not require that, but they are niche, and serverless stayed niche.
I always found the whole thing odd personally. The Venn diagram of people who both need to run a service in the cloud AND cannot manage an EC2 instance is a seemingly small set of people. I never saw the advantage to it, and it's got plenty of drawbacks.
It's not appropriate for high-compute / long-running workloads, e.g. video transcoding. It's more expensive. Potentially higher latency.
I worked for a company once whose entire product was built on hundreds of lambdas, it was a nightmare.
Why would I care about serverless? I love managing and working with bare metal servers.
Why should I be serverless? I like servers. I like containers. I like options.
Isn't a shared host with PHP serverless for all intents and purposes?
Lack of vendor-agnostic solutions, and ridiculous amounts of configuration. It explodes complexity.
To me it seems much more intuitive to think in terms of actual servers. Lambda seems like chicken nuggets, but I wanna eat, say, a decent rotisserie chicken, not nuggets.
Because most companies have incompetent ops and leadership, who cargo-cult themselves into more tech debt.
It’s expensive. Even considering dev and ops hours.
>Something I’m still having trouble believing is that complex workflows are going to move to e.g. AWS Lambda rather than stateless containers orchestrated by e.g. Amazon EKS. I think 0-1 it makes sense, but operating/scaling efficiently seems hard. […]
This isn't really saying anything about serverless though. The issue here is not with serverless but that Lambda wants you to break up your server into multiple smaller functions. Google Cloud Run[0] lets you simply upload a Dockerfile, and it will run it for you and deal with scaling (including scaling to zero).
[0] https://cloud.google.com/run
I think it's all about cost analysis. That said, there are definitely some services that are worth outsourcing, like SMTP, until you get to a certain size.
Separately, when you factor in data privacy, your decision making tree will certainly change quickly.
It's too bloody expensive. QED.
"A death star of death stars".
it's annoying and expensive
Because some of us need to have the servers that the "serverless" people use.