(Replying to PARENT post)
If you write your Lambda function so that the interaction with the AWS Lambda service sits at the surface level of your application (i.e. the entry/exit point), and then write your business logic in between, you can create a function that is quite easily transferable between cloud providers.
The vendor lock-in creeps in more around the surrounding services that the cloud provider offers. You can protect yourself from this, though, by writing good-quality hooks that have pluggable backends, e.g. reading from/writing to an object store should be abstracted.
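Roughly what I mean, as a minimal sketch (the ObjectStore/S3ObjectStore/save_report names and the bucket are made up for illustration, and it assumes boto3 is available):

```python
import json
from typing import Protocol

import boto3  # only the AWS-specific adapter needs this


class ObjectStore(Protocol):
    """Pluggable backend: any object store can satisfy this."""
    def put(self, key: str, data: bytes) -> None: ...
    def get(self, key: str) -> bytes: ...


class S3ObjectStore:
    """AWS-specific implementation; swap for a GCS/Azure/local-disk one elsewhere."""
    def __init__(self, bucket: str):
        self._bucket = bucket
        self._s3 = boto3.client("s3")

    def put(self, key: str, data: bytes) -> None:
        self._s3.put_object(Bucket=self._bucket, Key=key, Body=data)

    def get(self, key: str) -> bytes:
        return self._s3.get_object(Bucket=self._bucket, Key=key)["Body"].read()


def save_report(store: ObjectStore, report_id: str, payload: dict) -> str:
    """Business logic: knows nothing about Lambda or S3."""
    key = f"reports/{report_id}.json"
    store.put(key, json.dumps(payload).encode("utf-8"))
    return key


def handler(event, context):
    """Thin Lambda entry point: translate the event, delegate, translate the response."""
    body = json.loads(event["body"])
    key = save_report(S3ObjectStore(bucket="example-bucket"), body["id"], body)
    return {"statusCode": 200, "body": json.dumps({"stored_at": key})}
```

Moving to another provider then means writing a new entry point and a new ObjectStore implementation; save_report doesn't change.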
(Replying to PARENT post)
Lambda also provides a lot more than containerized services, as the article mentions - I no longer have to patch my system, which is a huge operational/security burden that many companies struggle with. The time to configure API Gateway/Lambda feels trivial compared to the work involved in keeping a self-hosted FaaS solution patched.
(Replying to PARENT post)
Let’s split lock-in into two categories:
1. Essential complexity
DynamoDB and Firebase are different products with different features and complexities. There is no “ANSI SQL” here. They are as different as they are similar. Moving from one to the other is a non-zero cost because they have different features your app would have to make up for or adopt.
2. Inessential complexity
Serverless functions don’t necessarily need different signatures, and it is conceivable that a standard for HTTP-evented functions could emerge. Many are working on this, and I expect it to be largely resolved over the next few years.
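To make that concrete, here's a rough sketch of provider-neutral HTTP handling with thin per-vendor wrappers. It assumes API Gateway's proxy-integration event shape and Google Cloud Functions' Flask-style request; handle_http and the /greet route are invented for illustration.

```python
import json
from typing import Optional, Tuple


def handle_http(method: str, path: str, body: Optional[dict]) -> Tuple[int, dict]:
    """Provider-agnostic core: plain HTTP in, (status, body) out."""
    if method == "POST" and path == "/greet":
        name = (body or {}).get("name", "world")
        return 200, {"message": f"hello {name}"}
    return 404, {"error": "not found"}


def aws_lambda_handler(event, context):
    """AWS API Gateway (proxy integration) wrapper."""
    status, payload = handle_http(
        event["httpMethod"],
        event["path"],
        json.loads(event["body"]) if event.get("body") else None,
    )
    return {"statusCode": status, "body": json.dumps(payload)}


def gcf_handler(request):
    """Google Cloud Functions (Flask request) wrapper."""
    status, payload = handle_http(
        request.method, request.path, request.get_json(silent=True)
    )
    return json.dumps(payload), status
```

The core is identical in both; only the handful of wrapper lines are vendor-specific, which is exactly the kind of inessential complexity a signature standard would remove.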
Lastly, I’m grateful for ANSI SQL but over the last 20 years I think I’ve seen one or two clients migrate a mature Java app from one database vendor to another (excluding some very recent moves to AWS RDS). Keep in mind that JDBC is about as good an abstraction as we’ve ever had for database agnosticism.
When you choose to build a lot of complexity upon an abstraction, you have to be honest with yourself: have you ever dealt with a service (storage, queues, naming, auth, etc.) that didn’t have a leaky abstraction? Of those with few or zero leaks, how often did you need to migrate to a different vendor? Why do we expect a rapidly evolving set of systems and services to behave like mature commodity software?
Fear of vendor lock-in is a premature optimization.
(Replying to PARENT post)
(Replying to PARENT post)
Also, Lambda doesn’t provide any SLAs on container reuse. They could restart your container multiple times per second or every few minutes. You are at their mercy when it comes to keeping your containers warm.
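The usual workaround (and it's best effort only, precisely because there's no SLA) is a scheduled CloudWatch/EventBridge rule that invokes the function every few minutes with a marker payload, and a handler that returns early on those pings. The "warmup" key below is an arbitrary choice for this sketch:

```python
import time

_STARTED_AT = time.time()  # set once per container, so it reveals reuse


def handler(event, context):
    if isinstance(event, dict) and event.get("warmup"):
        # Ping from the scheduled rule: skip real work, just keep the container alive.
        return {"warm": True, "container_age_seconds": round(time.time() - _STARTED_AT)}

    # ... real request handling goes here ...
    return {"statusCode": 200, "body": "ok"}
```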
Finally, with the Meltdown example, would containers actually need to be patched if the parent OS gets patched, given that the kernel is shared between the host and its containers? With Fargate, Amazon would patch the base OS and your containers would be safe, as they get rescheduled onto patched nodes.
(Replying to PARENT post)
There are also abstractions like Kubernetes that are vendor-neutral yet offered as managed services with extras by all the major providers, which is a nice middle ground.
None of this is new or groundbreaking other than the silly hype words like "serverless".
(Replying to PARENT post)
(Replying to PARENT post)
(Replying to PARENT post)
Forget that they take more time to care for, to learn how to ride, and to feed properly.
(Replying to PARENT post)
(Replying to PARENT post)
You can choose to deploy your own open-source FaaS framework (like OpenWhisk or OpenFaaS) on one of these clouds, but YOU will have to:
1. Manage scaling of the underlying EC2 instances
2. Manage security patching of both Docker and the underlying OS
3. Set up a whole lot of configuration
4. Manage optimizations
With a managed FaaS you're trading ops for more dev time, but someone still has to be paid for the ops - it isn't free.
(Replying to PARENT post)
Serverless (AWS Lambda) is just the cloud providers' way of trying to "de-commoditize" the containers that commoditized them. They want you to tightly couple your applications to their specific platform again. And charge you more. How much is that function in the zip from S3 going to cost you to run in a container managed and run by AWS? And how long did you spend configuring the API Gateway (and where else can you apply what you've learned doing that configuration)? Next time you see an article fawning over serverless and saying things like "Containers just don’t matter in the serverless world", take a look at what the author does for a living. You'll start to see the same pattern I've been seeing.
Meanwhile, you could be spending less money and time treating AWS like a dumb processor with a network connection while only really configuring the open source software you already know. But it's your time and money.