So far in this blog series on Cloud Native we’ve said a lot of nice things about it being an effective approach, but “fine words butter no parsnips”, as we used to say in the 17th century. Cool tech is not much use unless we can practically apply the concepts.
So in this post we’re going to consider some ways of going Cloud Native-ish and ponder how even a blank slate is not actually blank.
The Mythical Blank Slate
A company of any size might start a project that appears to be an architectural blank slate. Hooray! Developers like blank slates. It’s a chance to do everything properly, not like those cowboys last time. A blank slate project is common for a start-up but a large enterprise can also be in this position.
However, even in a startup with no existing code base you still have legacy.
- The existing knowledge and experience within your team is a valuable legacy, which may not include microservices, containers or orchestrators because they are quite new concepts.
- There may be a cultural legacy of existing third-party products or open source code that could really help your project but which may not be Cloud Native.
- You may possess useful personal legacy code, tools or processes from other projects that don’t fit the Cloud Native model.
Legacy is not always a bad thing. It’s the abundance and reuse of our legacy that allows the software industry to move so quickly. For example, Linux is a code base that demonstrates some of the common pros and cons of legacy (including: it’s a decent OS and it’s widely used but it’s bloated and hardly anyone can support it). We generally accept that the Linux pros outweigh the cons. One day we may change our minds but we haven’t done so yet.
Using your valuable legacy might help you start faster but push you away from a Cloud Native approach. So, what do you do?
What’s Your Problem?
Consider the problems that Cloud Native is designed to solve: fast and iterative delivery, scale, and margin. Are any of these actually your most pressing problem? Right now they might not be. Cloud Native requires an investment in time and effort and that effort won’t pay off if neither speed (feature velocity), scale nor margin are your prime concern.
Thought Experiment 1 – Repackaging a Monolith
Imagine you are an enterprise with an existing monolithic product that, with some minor tweaks and re-positioning, could be suited to a completely new market. Your immediate problem is not iterative delivery (you can tweak your existing product fairly easily). Scale is not yet an issue and neither is margin (because you don’t yet know if the product will succeed). Your goal is to get a usable product live as quickly and cheaply as possible to assess interest.
Alternatively, you may be a start-up that could rapidly produce a proof-of-concept to test the market using a monolithic framework, like Ruby on Rails, with which your team is already familiar.
So, you potentially have two options:
- Develop a new Cloud Native product from scratch using a microservices architecture.
- Rapidly create a monolith MVP, launch the new product on IaaS and measure interest.
In this case, the lowest-risk initial strategy might be option 2, even if it is less fashionable and Cloud Nativey. If the product is successful then you can re-assess. If it fails, at least it did so quickly and you aren’t too emotionally attached to it.
Thought Experiment 2 – It Worked! Now Scale.
Imagine you chose to build the MVP monolith in thought experiment 1 and you rapidly discover that there’s a huge market for your new product. Your problem now is that the monolith won’t scale to support your potential customer base.
Oh no! You’re a total loser! You made a terrible mistake in your MVP architecture just like all those other short-termist cowboys! Walking the plank is too good for you!
What should you do next?
As a result of the very successful MVP strategy you are currently castigating yourself for, you learned loads. You understand the market better and know it’s large enough to be worth making some investment. You may now decide that your next problem is scale. You could choose to implement a new version of your product using a scalable microservices approach. Or you may not yet. There are always good arguments either way and more than one way to scale. Have the discussions and make a reasoned decision. Ultimately, having to move from a monolith to a Cloud Native architecture is not the end of the world as we’ll hear next.
The Monolithic Legacy
However you arrive at it, a monolithic application is often your actual starting point for a Cloud Native strategy. Why not just throw it out and start again?
What if the Spaghetti is Your Secret Sauce?
It’s hard to successfully re-implement legacy products. They always contain more high-value features than is immediately apparent. The value may be years of workarounds for obscure field issues (been there). Or maybe the hidden value is in undocumented behaviours that are now taken for granted and relied upon by users (been there too).
Underestimated, evolved value increases the cost and pain of replacing older legacy systems, but it is real value and you don’t want to lose it. If you have an evolved, legacy monolith then converting it to microservices is not easy or safe. However, it might be the correct next step.
So what are folk doing? How do they accomplish the move from monolith to microservice?
Can a Monolith Benefit From Cloud Native?
I recently discussed with Daniel Van Gils of the DevOps-as-a-Service platform Cloud66 what their customers are doing with Cloud Native. The data was very interesting.
Cloud66 hosting is container-based, so all of their customers are containerised, but how they are utilising containers, and how that has progressed over the past year, paints a useful picture.
In June 2016:
- 70% of Cloud66’s 500+ business users ran a containerised monolith.
- Around 20% had taken an “API-first” architectural approach and split their monolith into 2 or 3 large subservices (usually a frontend and a backend) with a clear API between them. Each of these subservices was containerised and the frontend was usually stateless.
- 6% had evolved their API-first approach further, often by splitting the backend monolith into a small, distributable, scalable API service and small distributed backend worker services.
- 4% had a complete native microservice architecture.
In January 2017, they revisited their figures to see how things had progressed. By then:
- Only 40% were running a containerised monolith, down from 70% six months earlier.
- 30% had adopted the API-first approach described above (separated services for backend and frontend with a clear API), up from 20% in June 2016.
- 20% had further split the backend monolith (> 3 different services), up from 6%.
- 10% were operating a native microservice architecture (> 10 different services), up from 4% in June 2016.
So, in June 2016, 96% of those who had chosen to containerise on the Cloud66 platform were not running a full microservice-based Cloud Native architecture. Even six months later, 90% were still not fully Cloud Native. However, Cloud66’s data gives us some idea of the iterative strategy that some folk with monoliths are following to get to Cloud Native.
- First, they containerise their existing monolithic application. This step makes the containerised application image easier to manage and streamlines test and deploy. Potentially there are also security advantages in immutable container image deployments.
- Second, they split the monolithic application into a stateless, scalable frontend and a stateful (fairly monolithic) backend with a clear API on the backend. Being stateless, the frontend becomes easier to scale. This step improves scalability and resilience, and potentially margin via orchestration.
- Third, they break the stateful, monolithic backend up into increasingly smaller components, some of which are stateless. Ideally they split out the API at this point. This further improves scale, resilience and margin. At this stage, businesses might be more likely to start leveraging useful third-party services like databases (DBaaS) or managed queues (QaaS).
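The third step above can be sketched in miniature. This is a hypothetical illustration, not Cloud66’s implementation: Python’s standard-library `queue` and `threading` stand in for a managed queue service (QaaS) and a separately deployed worker, and all the names (`handle_request`, `worker`, `results`) are made up for the example.

```python
import queue
import threading

# Hypothetical sketch: the API service stays small and stateless; slow work
# goes onto a queue (a stand-in for a managed queue, QaaS) and is handled
# by a worker (here a thread; in production a separate, scalable service).

jobs = queue.Queue()   # stand-in for a managed queue service (QaaS)
results = {}           # stand-in for a managed database (DBaaS)

def handle_request(order_id, quantity):
    """Stateless API service: validate, enqueue and return immediately."""
    if quantity <= 0:
        raise ValueError("quantity must be positive")
    jobs.put((order_id, quantity))
    return {"status": "accepted", "order_id": order_id}

def worker():
    """Backend worker service: drain the queue and persist the results."""
    while True:
        order_id, quantity = jobs.get()
        results[order_id] = quantity * 2   # pretend business logic
        jobs.task_done()

threading.Thread(target=worker, daemon=True).start()

handle_request("order-1", 100)
jobs.join()                    # wait until the worker has caught up
print(results["order-1"])      # prints 200
```

Because the API service holds no state of its own, any number of copies can sit behind a load balancer, which is exactly the scalability property the second and third steps are chasing.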
The Cloud66 data suggests that, at least for their customers, businesses that choose to go Cloud Native often iteratively break an existing monolithic architecture up into smaller and smaller chunks, starting at the front and working backwards, and integrating third-party commodity services like DBaaS as they go.
Iterative break-up with regular deployment to live may be a safer way to re-architect a monolith. You’ll still occasionally drop important features by accident, but at least you’ll find out about that sooner, when it’s relatively easy to resolve.
So, we can see that even a monolith can have an evolutionary strategy for benefitting from a microservice-oriented, containerised and orchestrated approach – without the kind of big bang re-write that gives us all nightmares and often critically undervalues what we already have.
Example Cloud Native Strategies
So, there are loads of different Cloud Native approaches:
- Some folk start with CI and then add containerisation.
- Some folk start with containerisation and then add CI.
- Some folk start with microservices and add CI.
- Some folk slowly break up their monolith; some just containerise it.
- Some folk do microservices from a clean slate.
Many enterprises do several of these things at once in different parts of the organisation and then tie them together – or don’t.
So is only one of these approaches correct? I take the pragmatic view. From what I’ve seen, for software the “proof of the pudding is in the eating”. Software is not moral philosophy. The ultimate value of Cloud Native should not be intrinsic (“it’s fashionable” or “it’s more correct”). It should be extrinsic (“it works for us and our clients”).
If containers, microservices and orchestration might be useful to you then try them out iteratively and in the smallest, safest and highest value order for you. If they help, do more. If they don’t, do something else.
Things will go wrong, try not to beat yourself up about it like a crazy person. Think about what you learned and attempt something different. No one can foresee the future. A handy alternative is to get there sooner.
In this post I’ve talked a lot about strategies for moving from Monolith to Microservice. Surely just starting with microservices is easier? Inevitably the answer is yes and no. It has different challenges. In the next post I’m going to let out my inner pessimist and talk about why distributed systems are so hard. Maybe they obey Conway’s Law but they most definitely obey Murphy’s Law – what can go wrong, will go wrong. But does that matter?