
This is not the Serverless I Ordered

 

Recently at Container Solutions we explored serverless: where it succeeds, where it fails, and, more importantly, where we think it may be going.  Having played with Lambda before, I found the discussions far more interesting than the actual execution, because we started talking about what we want versus what we got.

I reject the idea that vendors should be in charge of the trajectory of “serverless.”  I also believe we need an open standard.  Finally, I have an ideal of my own, built on ideas I have been bouncing around.  That last part is purely self-indulgent, so suffer through it or add your own suggestions.

Vendors

So when AWS introduced Lambda, I think there was a general round of excitement and, in many cases, confusion.  I am not sure everyone got their head around how this would work or whom it would help.  Since then, it has demonstrated itself to be a very cool utility that can definitely cut your operating costs.  It also amplified, if not outright created, the buzz around the “serverless” craze we are currently experiencing.

But to be honest, I don’t like AWS, Azure, and GCE dictating what serverless is.  I like these providers, and I think their platforms are amazing to work on, but I am a purist at heart.  If serverless is to become “a thing,” we need to develop an open standard.

I also think these vendors are looking for the cheapest solutions that fit a consumer’s need, and the cheapest solution is not always the best one.

Open Standard

Open standards bring us closer together: we speak the same language and our tools become more compatible.  It just makes sense.

Currently, the FaaS (Function as a Service) offerings provide none of that.  Each vendor has built a framework where you work entirely inside its ecosystem.

So we need an open standard, but what would it look like?  First we would need to establish some ground rules for how it should work.

  • It must be secure while supporting multitenancy
  • It needs to be fast
  • It needs to have an agreed upon data format
  • It must be language agnostic
  • It needs to run anywhere

We don’t have any of the above in our current offerings.  I want to imagine a world where we could get all these things.

The Future

So I am going to try to project a little into the future here, and eventually I will drag us back into the past.

Containers

Containers are where we will try to achieve security and speed in a serverless system.  When we talk about containers, most people immediately think of Docker.  Docker is not a great fit for serverless functions: it is slow, bloated, and requires a daemon.  This is not a dig against Docker; it is simply a Swiss Army knife where we need a scalpel.  And neither Docker nor rkt is a container anyway; they are just tools that facilitate containerization.

That does not mean we cannot develop a tool that containerizes our functions, starts them in milliseconds, and applies the same isolation to every function.

But maybe containers are not the answer.  Maybe unikernels are, or maybe just a tweaked Linux server that effectively isolates each and every process, giving it no file access and only outbound TCP.
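
To make that a little more concrete, here is a minimal sketch, in Go, of what that kind of process-level isolation could look like on Linux: the runtime launches a function binary in its own PID, mount, hostname, and network namespaces, wiring only STDIN and STDOUT through.  The binary path is hypothetical, and a real runtime would still need to drop file access and attach an outbound-only network link; this is only meant to show that the building blocks already exist in the kernel.

    package main

    import (
        "os"
        "os/exec"
        "syscall"
    )

    func main() {
        // Hypothetical function binary; in a real runtime this would be
        // whatever artifact the developer uploaded.
        cmd := exec.Command("/opt/functions/resize-image")

        // The event flows in on STDIN and the result flows out on STDOUT.
        cmd.Stdin = os.Stdin
        cmd.Stdout = os.Stdout
        cmd.Stderr = os.Stderr

        // Give the process its own PID, mount, hostname, and network
        // namespaces (requires root or CAP_SYS_ADMIN). A fresh network
        // namespace has no interfaces at all, so a real runtime would
        // have to attach a constrained, outbound-only link.
        cmd.SysProcAttr = &syscall.SysProcAttr{
            Cloneflags: syscall.CLONE_NEWPID |
                syscall.CLONE_NEWNS |
                syscall.CLONE_NEWUTS |
                syscall.CLONE_NEWNET,
        }

        if err := cmd.Run(); err != nil {
            os.Exit(1)
        }
    }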

STDIN/STDOUT

I think I have bored my coworkers to death with this concept, but I have become a firm believer in utilizing standard in and standard out.  It first hit me when I started using tools like the AWS KCL (Kinesis Client Library), which provides a daemon that wraps itself around your process so you can consume Kinesis messages in any language.  This was followed shortly by wrapping my first Go program in a NodeJS wrapper on Lambda.  We write in many, many different languages and communicate over many different protocols, but STDIN/STDOUT is always universal.

Since a serverless function is meant to be born, do its job, and die, this seems to me a great way to actually accept data.

If you look at most of the FaaS implementations from the cloud providers, they only provide your function with two variables.  One is the “event,” whose contents the provider knows nothing about.  The second is the “context,” which contextualizes your request.  That does not seem so different from sending data to STDIN along with some flags or environment variables on an executable.

Falling back to the simpler times of STDIN/STDOUT gives us a lot of latitude.  It keeps our functions language agnostic, and with Linux’s very powerful ability to pipe commands, we can build very robust chains of functionality.
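
As a sketch of what that contract might look like, here is a hypothetical Go function that treats whatever arrives on STDIN as the event, reads its context from environment variables (FUNCTION_REQUEST_ID is an invented name, not any provider’s API), and writes its result to STDOUT.

    package main

    import (
        "fmt"
        "io"
        "os"
    )

    func main() {
        // The event is simply whatever arrives on STDIN; the runtime
        // never needs to know what is inside it.
        event, err := io.ReadAll(os.Stdin)
        if err != nil {
            fmt.Fprintln(os.Stderr, "reading event:", err)
            os.Exit(1)
        }

        // The context arrives as environment variables set by the
        // platform (FUNCTION_REQUEST_ID is a hypothetical name).
        requestID := os.Getenv("FUNCTION_REQUEST_ID")

        // Do the work, write the result to STDOUT, and exit:
        // born, do its job, die.
        fmt.Printf("request %s processed %d bytes\n", requestID, len(event))
    }

Because the contract is nothing more than bytes in and bytes out, functions written in any language compose with ordinary pipes, something like ./parse < event | ./enrich | ./store, where each stage is a hypothetical function binary.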

Data Format

JSON seems to be the de facto format people fall back on to communicate, but in a cloud native world, I think we just need to move on.  Please take a moment if you need to say goodbye.

The current market holds two formats that I think would be an ideal match: Cap’n Proto and Protobuf.  The former allows fast traversal of large data sets, and the latter allows concatenating bytes of data onto existing payloads.  At this point I am not sure which would be the ideal choice, or whether there is an even better one out there.  I do know that if we establish a standard for how we share data between processes, we can then build interchangeable parts.
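
As a rough illustration (not a proposal for the actual schema), here is a Go sketch that uses Protobuf’s generic Struct type to emit a binary-encoded event on STDOUT instead of JSON.  An open standard would of course define a proper, agreed-upon schema rather than this catch-all type, and the field names here are invented.

    package main

    import (
        "os"

        "google.golang.org/protobuf/proto"
        "google.golang.org/protobuf/types/known/structpb"
    )

    func main() {
        // Build an example event. A real standard would define a fixed
        // schema; structpb.Struct is used here only to avoid generated code.
        event, err := structpb.NewStruct(map[string]interface{}{
            "source": "sensor-42", // hypothetical field
            "value":  21.5,        // hypothetical field
        })
        if err != nil {
            os.Exit(1)
        }

        // Marshal to the Protobuf wire format and hand it to the next
        // function in the chain via STDOUT.
        payload, err := proto.Marshal(event)
        if err != nil {
            os.Exit(1)
        }
        os.Stdout.Write(payload)
    }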

To Be Continued

I don’t think what we have right now is the vendor-agnostic “serverless” infrastructure we are looking for.  We have tools that work very well, but they work solely for the profit of the service providers, which is completely respectable but always reaches a point where further development is not profitable for them.  We should start thinking about an open framework that would allow us to develop robust and open tooling.  If we could build a platform that followed open standards, third parties could develop tooling around those standards, rather than around standards built for vendor lock-in or made to suit the highest-paying customers.
This is an ongoing thought project, and I look forward to suggestions.  My next post will be about a make-believe system that could achieve my ideal serverless ecosystem.
