In our last few posts we’ve talked about two of the architectural and operational weapons of Cloud Native: containers & dynamic management. However, when I go out and talk to Cloud Native users I find that containers and orchestrators aren’t always where they started. Many companies begin with microservices and don’t adopt containers until later.
In this post we are going to look at “microservices-oriented architectures” and think about how they fit in with the other Cloud Native tools.
The microservice concept is deceptively simple. Complex, multi-purpose applications (aka monoliths) are broken down into small, ideally single-purpose and self-contained services that are decoupled and communicate with one another via well-defined messages.
In theory, the motivation is threefold – microservices are potentially:
- Easier to develop and update.
- More robust and scalable.
- Cheaper to operate and support.
However, these benefits are not trivial to deliver. How to architect microservices is a difficult thing to get your head around. Microservices can serve several competing objectives, so it's very important to think carefully about what your initial goal is, or you could end up with a mess.
Let’s Talk About State
Let’s quickly step back and discuss something that often comes up when we’re talking about microservices. State.
There are broadly two types of microservice: “stateless” and “stateful”.
- Stateful microservices save data in a database that they read from and write to directly. Note that well-behaved stateful microservices don't tend to share databases with other microservices, because that makes it hard to maintain decoupling and well-defined interfaces. When a stateful service terminates, it has to save its state.
- Stateless microservices don’t save anything. They handle requests and return responses. Everything they need to know is supplied on the request and once the request is complete they forget it. They don’t keep any handy permanent notes to remind them where they got to. When a stateless service terminates it has nothing to save. It may not complete a request but c’est la vie – that’s the caller’s problem.
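The distinction can be sketched in a few lines of Python. The function names and the in-memory dict (standing in for a real database) are hypothetical, purely for illustration:

```python
# Stateless: everything needed arrives on the request, and nothing is
# remembered afterwards -- safe to kill and restart at any time.
def convert_currency(amount: float, rate: float) -> float:
    return round(amount * rate, 2)


# Stateful: reads from and writes to a datastore it owns exclusively.
# A dict stands in here for the service's private database.
_balances: dict[str, float] = {}

def deposit(account: str, amount: float) -> float:
    _balances[account] = _balances.get(account, 0.0) + amount
    return _balances[account]
```

Killing the stateless service loses nothing; killing the stateful one mid-write is exactly the "saving state is slow and risky" problem described above.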
The Point of Microservices
In an earlier post we discussed how Cloud Native has three potential goals: speed (i.e. feature velocity or time to value), scale and margin. To optimize for each of these you might design your microservice architecture differently.
Microservices for Speed (Feature Velocity)
A very common motivation for moving to a microservices architecture is to make life easier for your tech teams. If you have a large team all working on the same big codebase then that can cause clashes and merge conflicts and there’s a lot of code for everyone to grok. So it would instantly seem easier if every service was smaller and separated by a clear interface. That way each microservice could be owned by a small team who could all work together happily as long as they liked the same two pizza toppings. Teams can then safely deploy changes at will without having to even talk to those four cheeses down the hall – as long as no fool changes the API….
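To make that "clear interface" idea concrete, here is a hypothetical sketch (the service name and message shape are made up). The team owning the order service can change its implementation however it likes; the only thing the other teams depend on is the JSON contract:

```python
import json


def handle_get_order(request: str) -> str:
    """Contract: accepts {"order_id": ...}; returns {"order_id": ..., "status": ...}.

    The implementation behind this boundary (database, caching, even the
    language) is the owning team's business -- only the message shape is shared.
    """
    payload = json.loads(request)
    return json.dumps({"order_id": payload["order_id"], "status": "shipped"})
```

As long as the message shape stays stable, the two-pizza teams on either side of this boundary can deploy independently.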
Microservices for Scale
In the very olden days you would spend $5M on a mainframe and it would run for 10 years with no downtime (in fact, IBM see the market for mainframes lasting another 30 years for some users!). Mainframes are the classic example of vertical scaling, with all its strengths and weaknesses. I don’t want a mainframe for many reasons, but three particularly leap to mind:
- any one machine will eventually run out of capacity
- a single machine can only be in one place – it can’t provide fast response times for users all over the world
- I have better things to do with my spare bedroom.
If I want to scale forever or I have geographically dispersed users I may need to architect for horizontal scaling (lots of distributed small machines rather than one big one).
Basically, I want to be able to start more copies of my application to support more users. The self-contained nature of microservices works well with this: an individual instance of a microservice is generally decoupled not only from other microservices but also from other instances of itself, so you can safely start lots and lots of copies. That effectively gives you instant horizontal scaling. How cool is that?
Actually it gets cooler. If you have lots of copies of your application running for scale, that can also provide you with resilience – if one falls over you just start up another. You can even automate this if you put your application in a container and then use an orchestrator to provide fault tolerance. Automating resilience is a good example of where microservices, containers and dynamic management work particularly well together.
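As a hypothetical illustration of that combination (the service name and image are made up), a Kubernetes Deployment asks the orchestrator to keep a fixed number of container replicas running; if one falls over, a replacement is started automatically:

```yaml
# Hypothetical manifest: ask Kubernetes to keep 3 identical copies of a
# containerised microservice running at all times.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payment-service            # made-up service name
spec:
  replicas: 3                      # horizontal scaling: 3 instances
  selector:
    matchLabels:
      app: payment-service
  template:
    metadata:
      labels:
        app: payment-service
    spec:
      containers:
        - name: payment-service
          image: example.com/payment-service:1.0   # hypothetical image
```

Because the instances are stateless and decoupled from one another, the orchestrator is free to kill, restart and reschedule them at will.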
Microservices for Margin
Switching to a more modern example, if my monolithic application is running out of memory on my giant cloud instance then I have to buy a bigger instance, even if I’m hardly using any CPU.
However, if my memory-intensive function was split out into its own microservice I could scale that independently and possibly use a more specialised machine type for hosting it. A flexible microservices architecture can give you more hosting options, which generally cuts your costs.
What’s The Catch?
If this all sounds too good to be true, it kind of is. Microservices architectures can be really, really complex to manage. Distributed systems have ways of failing that you’ve never thought of before.
The paradox is this: if you want your system to be easy for your developers, you can architect your microservices for that. It will probably involve a lot of queues, and it will be expensive to host and slow to run, but it will be easier to develop on and support.
However, if you want your system to be hyperscale and cheap then you will have to understand complex distributed failure modes. In the short term, it will be more difficult for your developers and they’ll have lots to learn.
So you have an initial decision to make: ease, or scale and margin? It may be sensible to start easy and add incremental complexity as you gain familiarity and expertise.
Microservice vs Monolith
Not all application architectures fully benefit from a Cloud Native approach. For example, stopping a container fast, which is important to dynamic management, only works if the application inside the container is happy to be stopped quickly. This may not be true if the app is maintaining lots of information about its internal state that needs to be saved when the process terminates. Saving state is slow.
Lots of older applications maintain state because that was how we used to architect things – as big, multi-purpose “monoliths” that were slow to stop and start. We often still architect that way because it has many benefits but it happens not to work so well with some aspects of dynamic management.
If you have a monolith there are still advantages to a Cloud Native approach, but Cloud Native works optimally for scalability and resilience with a system of small, independent microservices that are quick to stop and start and that communicate with one another via clear interfaces. These scaling and resilience advantages exist whether you have gone for an easy-but-expensive or a hyperscale-but-complicated microservice architecture (or somewhere in between, which is where most folks are).
So there are clear speed, scale, productivity, resilience and cost advantages to using microservices, containers and dynamic management. And they all work even better together! Great! But where to start? In the next post we’ll start looking at that…
This was an excerpt from the free ebook The Cloud Native Attitude
Art By Paolo Redwings from London, UK – Banksy, CC BY 2.0, https://commons.wikimedia.org/w/index.php?curid=3015423