
So serverless is essentially a return to the PHP/CGI execution model... What's changed to turn a bad idea into a good idea again?


Was it ever a bad idea, or did it just go out of fashion? I'm not a PHP fan, but I miss the simplicity of low-end deployments on a LAMP stack by literally dragging files over FTP. It seems like something has been lost with the total embrace of more modern web frameworks.


It's just a difference in scale. I mean, you can still set up an FTP server and drag and drop your files to deploy, if you want to. But companies with many developers deploying multiple changes per day need something that works better and faster. Not saying that serverless is the answer for them, but that's why things get more complex sometimes. You're still able to choose "older" things.


Definitely, I guess I just miss having a platform (LAMP) that scaled down as well as up. I think it's really cool that some of the most trafficked websites run WordPress and MediaWiki, which can also be installed in a few clicks on a $5/month shared host. I recently wrote a Python web app that uses DynamoDB and Lambda, and even though it's open source it feels way less portable.


Yeah. PHP may be out of fashion, but it's still powering a massive portion of the internet because this deployment model is so lightweight. Much different than the now-popular model of "spin up an app server and reverse proxy to it".


It's less like CGI (though some projects only support that). Think of it like FastCGI, where most of the issues with CGI are fixed up (i.e. throughput isn't a problem), combined with containerisation features from Kubernetes etc. to make packaging easier (again, not all projects use a Docker image format).

If you're wondering whether Serverless is catching on then see also: AWS Lambda.


But isn't "serverless" fundamentally a restart-the-world model? That would make it more like CGI than FastCGI, which is an application-server model not fundamentally different from putting a reverse proxy in front of an HTTP-speaking application server.


No. In AWS Lambda at least, the container your application runs in lives through multiple requests until some period of inactivity passes and it is shut down. It isn’t “running” in the sense that your code is in control, but long-lived things like database connections do not need to be re-established on every request.
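The container-reuse behavior described above is why the common Lambda pattern is to do expensive setup at module level (or lazily in a cached global) rather than inside the handler. A minimal sketch, with a dict standing in for a real database connection and `handler` as a hypothetical Lambda entry point:

```python
import time

_connection = None  # module-level: survives as long as the container is warm


def get_connection():
    """Expensive setup runs only on a cold start; warm invocations reuse it."""
    global _connection
    if _connection is None:
        # Stand-in for connecting to a database, loading a model, etc.
        _connection = {"created_at": time.time()}
    return _connection


def handler(event, context):
    conn = get_connection()
    # Same connection object is returned on every warm invocation.
    return {"cold_start": conn["created_at"], "event": event}
```

Two back-to-back invocations in the same container will report the same `cold_start` timestamp; only after the container is recycled does setup run again.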


That analogy only really applies if you stick your serverless function behind an API gateway and invoke it through an HTTP request; honestly, AWS Lambda behind an API gateway has always felt like a pretty poor way to build a microservice. Serverless functions seem to come into their own when they're connected to a reliable event triggering system. That looks to be the really interesting part of OpenWhisk: it adds extensible triggers, much like the way AWS Lambda can respond to events within the AWS ecosystem, but platform independent. In a high volume event-based world, small stateless processes become a very appealing idea again.


- Scaling infrastructure up and down is hard to get right.

- Paying for idle servers can get costly if you have many of them.

- Idling servers are inefficient for cloud providers as well.


A complete machine instance is too coarse a grain sometimes. So the progression is: "A physical box" → "A VM with many things running" → "A half-OS and a couple of related processes in a container" → "A single OS-process-like function invoked on demand".

The upside is that you invoke the "functions" not as processes on one box you have to maintain (*CGI), but on many boxes maintained automatically. Your limiting resource is not the capacity of the box, but only your budget. Also, stuff like security, software updates, load balancing, etc. is taken care of by the provider.


CGI didn't scale because forking is literally part of the API. It was a good idea other than that.
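The "forking is part of the API" point can be made concrete: under CGI, the web server spawns a fresh process per request, passes the request in via environment variables, and reads the response from stdout, so every request pays full process-startup cost. A rough illustrative sketch (the `serve_one_request` helper is hypothetical, not any real server's API):

```python
import os
import subprocess
import sys


def serve_one_request(script_path, query_string):
    """One request = one fork/exec of the CGI script.

    Environment variables carry the request (QUERY_STRING here), and the
    child's stdout carries the response -- that is essentially the whole
    CGI contract, which is why per-request process startup is unavoidable.
    """
    env = dict(os.environ, QUERY_STRING=query_string,
               REQUEST_METHOD="GET", GATEWAY_INTERFACE="CGI/1.1")
    result = subprocess.run(
        [sys.executable, script_path],
        env=env,
        capture_output=True,
        text=True,
    )
    return result.stdout
```

FastCGI (and warm serverless containers) keep the same request/response shape but hold the worker process alive across requests, which is exactly the scaling fix.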



