An argument against REST in microservices

REST has become a widely accepted standard for APIs. There are a few reasons for this: it is easy to follow, it works much like a web browser does, and as such the service can be consumed with common tools.

But REST brings baggage with it that can create complex, hard-to-maintain coupling when working in a diverse microservice environment.


URLs are one of the biggest issues with REST in microservices. URL schemes for REST and HATEOAS have a way of drawing services together into a monolithic system.


URLs bring context to your service: for a service to return properly formatted HATEOAS links in its payload, it needs to know where it stands in the overall architecture. The service needs to know that it resides at a particular URL.

If all your services need to be aware of their surroundings, you inch closer and closer to a monolith and further from a microservice infrastructure.

A better option is to abandon the idea of a service as an endpoint and instead embrace the idea of a service as a reference. It is better to refer to your service as domain:service:method:1234

This means we do not need to know where, or even if, the service is running; we just need to know that this resource originated from this service reference. If we need to find the service, we let the backend route it for us by reference rather than by address. This allows your services to reside as endpoints on a router, as consumers of a queue, as a Lambda fired after a DynamoDB insert, or as subscribers to an SNS topic.
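As a sketch of what dispatch by reference might look like (the ReferenceRouter class and the shop:bar:get names are hypothetical illustrations, not from any particular framework):

```python
# A minimal sketch of dispatch by service reference rather than by address.
# The "domain:service:method:id" format and the in-memory registry are
# assumptions; in a real system the backend (router, queue, broker) would
# resolve the reference to whatever transport the service currently uses.

class ReferenceRouter:
    def __init__(self):
        self._handlers = {}

    def register(self, reference, handler):
        """Bind a "domain:service:method" reference to any callable transport."""
        self._handlers[reference] = handler

    def dispatch(self, reference, payload):
        domain, service, method, resource_id = reference.split(":")
        handler = self._handlers[f"{domain}:{service}:{method}"]
        return handler(resource_id, payload)


router = ReferenceRouter()

# Today "get" might sit behind an HTTP endpoint; tomorrow it could be a
# queue consumer or a Lambda. The caller never knows or cares.
router.register("shop:bar:get", lambda rid, payload: {"id": rid, **payload})

result = router.dispatch("shop:bar:get:1234", {"filter1": "value1"})
print(result)  # {'id': '1234', 'filter1': 'value1'}
```

The caller holds only the reference string; swapping the registered handler for a different transport requires no change on the calling side.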

It is the difference between entering the address of the specific Starbucks you want to visit and searching Google Maps for the closest one. At any point in time the Starbucks you knew could be gone, or most likely is no longer the closest one; tomorrow the quickest option may be home delivery.


With REST, your routing and your RESTful service become tied to each other because of the mix of parameters and endpoints. As an example, given the URL

Your bar service is directly mapped to /bar. If at some point you wish to reposition bar, you will need to reconfigure your router and then refactor your service. This also becomes problematic when you want to break up your service, such as

If you want to split bar and bar2 into two separate services, you will need to start parsing your URLs at the router level to ensure each service receives the correct parameters. In addition, even though bar2 may be accepting calls at

it needs to know the API is expecting it at

Or, conversely, you have it accept calls at bar/{id}/bar2. Either way, you have tied your routing to your service or your service to the router. It starts to look more and more monolithic, or, even worse, you end up with an ESB as the centerpiece of your infrastructure.
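To make the coupling concrete, here is a sketch of the parsing a router would be forced to do once bar and bar2 are separate services behind a nested path (the path segments and parameter names are assumptions, since the original URLs are not shown):

```python
# Hypothetical illustration of the routing problem: once bar and bar2 are
# separate services, the router itself must pick apart the URL to decide
# which service gets which path segments and parameters. The router now
# owns knowledge of bar's URL scheme.

def route(path):
    """Return (service, params) for a path like /bar/{id}/bar2/{id}."""
    parts = [p for p in path.split("/") if p]
    if len(parts) >= 3 and parts[0] == "bar" and parts[2] == "bar2":
        return "bar2-service", {
            "bar_id": parts[1],
            "bar2_id": parts[3] if len(parts) > 3 else None,
        }
    if parts and parts[0] == "bar":
        return "bar-service", {"bar_id": parts[1] if len(parts) > 1 else None}
    raise ValueError(f"no route for {path}")


print(route("/bar/42/bar2/7"))  # ('bar2-service', {'bar_id': '42', 'bar2_id': '7'})
print(route("/bar/42"))         # ('bar-service', {'bar_id': '42'})
```

Every reshuffle of the services forces a matching change in this routing logic, which is exactly the coupling being described.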

A better option, if you use HTTP, is to map your services directly to endpoints that have no URL parameters, so that any URL can become a service and every service serves from


HTTP is a perfectly fine protocol for microservice communication in certain scenarios, but tying your system to the protocol in the way REST is defined is problematic.

Looking at a typical GET request, we have this:


GET /bar/12345?filter1=value1&filter2=value2 HTTP/1.1 Host:

Breaking this down, we have:

A method on the service (GET)

A service (/bar)

An ID (12345)

Random parameters (filter1=value1, filter2=value2)


First and foremost, we have to be clear that within your service, your random parameters and your ID are just parameters on some sort of method. They will be consumed together by a “getter” to determine the return value. So internally, GET must be something of the form get(id, filter1, filter2) or get(id, arrayOfFilters[]), unless you are invoking globals… Yet we have spread these parameters across several different locations as if they were mutually independent.
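To illustrate (the internal handler get comes from the article; the parsing code is a sketch), the request above scatters across the path and the query string the very arguments the service ultimately consumes in a single call:

```python
from urllib.parse import urlparse, parse_qs

# Sketch: the REST request spreads across the URL what the service
# ultimately consumes as one method call, get(id, filters).

def parse_rest_request(url):
    parsed = urlparse(url)
    resource_id = parsed.path.rstrip("/").split("/")[-1]   # ID from the path
    filters = {k: v[0] for k, v in parse_qs(parsed.query).items()}  # query params
    return resource_id, filters


def get(resource_id, filters):
    """The internal "getter" that actually consumes all the parameters together."""
    return {"id": resource_id, **filters}


rid, filters = parse_rest_request("/bar/12345?filter1=value1&filter2=value2")
print(get(rid, filters))  # {'id': '12345', 'filter1': 'value1', 'filter2': 'value2'}
```

The service must reassemble into one call what the protocol pulled apart; nothing about the split was required by the service's own interface.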

Next, how does this translate to other protocols? What if we want to send this request to an RPC over WebSockets or gRPC? How would such requests be interpreted? Let’s have a look.

Fortunately, someone has already demonstrated how this would have to work.

The gRPC gateway is a tool that transforms REST calls into gRPC calls. Let’s look at the graphic it provides.


Let’s review what this router has to map.

Method = host/URL path + HTTP verb

Parameters = maybe URL path, maybe body, maybe URL query, maybe headers…

As we can see in this example, the gateway takes what we already knew to be parameters (id and email) and, for PUT /profile, maps two different values from two completely different locations in the HTTP request onto one RPC endpoint that accepts a serialized blob of data. Some people also consider query parameters in the URL of a POST/PUT to be completely valid REST, so we can assume those are yet more values to be added to the request.

As the graphic demonstrates, if we had passed a serialized request directly in the body of the HTTP request, no transformation would be needed, aside from converting one form of serialization to another.
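The transformation such a gateway performs can be sketched roughly like this (the id and email fields come from the example above; the function and message shape are assumptions, not the gateway's actual code):

```python
import json

# Rough sketch of what a REST-to-RPC gateway must do: collect values from
# several places in the HTTP request (path, body, possibly query string)
# and merge them into the single serialized message an RPC endpoint expects.

def rest_to_rpc(verb, path, body_json, query=None):
    message = dict(query or {})
    message.update(json.loads(body_json))            # e.g. email from the body
    message["id"] = path.rstrip("/").split("/")[-1]  # e.g. id from the URL path
    return {"rpc": f"{verb} {path.split('/')[1]}", "payload": message}


call = rest_to_rpc("PUT", "/profile/1234", '{"email": "a@b.com"}')
print(call)  # {'rpc': 'PUT profile', 'payload': {'email': 'a@b.com', 'id': '1234'}}
```

Had the caller sent the whole payload as one serialized body, the merge step would disappear and only the serialization format would change.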

Now, the gRPC gateway was built explicitly to transform REST to gRPC. What happens when we start bringing open source tools into our infrastructure that do not use gRPC? Or what do you do when your interpretation of proper REST differs from someone else’s? Well, you can start hacking away at the router.

Let’s look at an IoT scenario where the client is attached to a WebSocket reverse proxy in front of RESTful microservices. In this scenario, the WebSocket RPCs are invoked by some device in the cloud (a cell phone?). Would the reverse proxy have to deserialize each and every request and then assemble a RESTful request for the backend? That seems very costly for a router to perform. Does the RPC respond with HATEOAS links embedded? What would those mean to this WebSocket client?

The better answer is to route whole services, with method routing performed internally, based either on a method named in the serialized request or on direct method-to-endpoint mapping as in gRPC. Using this model, things become far more portable and sensible. An HTTP body is much easier to move between serialization methods than a full HTTP request is to dissect.
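A sketch of what this might look like (the envelope format and BarService are assumptions): the transport hands the whole serialized request to the service, which dispatches internally, so the same envelope can arrive over HTTP, a queue, or a WebSocket frame.

```python
import json

# Sketch: the router forwards the entire serialized request to a service;
# the service dispatches to its own methods internally. Because nothing is
# encoded in the URL, the same envelope works for any transport.

class BarService:
    def get(self, params):
        return {"id": params["id"]}

    def handle(self, raw):
        """Single entry point: the method name travels inside the payload."""
        request = json.loads(raw)
        return getattr(self, request["method"])(request["params"])


envelope = json.dumps({"method": "get", "params": {"id": "1234"}})
service = BarService()

print(service.handle(envelope))  # {'id': '1234'}
```

A production dispatcher would validate the method name against an allow-list rather than using getattr directly; the point here is only that the transport no longer carries routing information.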


The Conclusion

Making a system based on microservices, one where you can move your parts around, fail, try again, redeploy, and reconfigure, requires a different approach. We need to build applications that make no assumptions about where they will reside within the infrastructure.

With the multiple methods of synchronous and asynchronous communication, and the ephemerality of services, we need to abandon a paradigm built on the old assumption of pure request-response.

If we do not, we will develop heavy coupling, and even though our microservices are nominally independent, they will be so tied to each other that we undermine the benefits of a microservice architecture. That in turn leads to frustration over the complexity microservices bring, while seeing little of their benefit.

A microservice architecture already brings its own level of complexity; it does not need yet another layer that also enforces heavy coupling. REST is good for many things, but a microservice infrastructure is not one of them.


Jason is a backend engineer mostly building integrations. He is a devout student of the K.I.S.S. philosophy. He has become focused on microservices and the best practices for developing in a distributed environment.


  1. While I agree that REST may not be the best means of implementing microservices, you make a couple of assumptions that make REST more trouble than it has to be.

    The first is that your microservices must be run from the same domain / zone. This assumption supports your point about microservices becoming a monolith, but there are other ways to expose your services. For example, if you use separate DNS zones for your services, you can avoid much of the addressing problem. In other words, rather than being a service, you could use and let your service discovery and load balancing system deal with sending traffic to the correct system. Docker Swarm and Kubernetes both support this sort of thing, and using a service discovery tool like Consul would also work outside a containerized system.

    The other thing that I think most people miss about REST is that HATEOAS implies the contract is meant to reside in the representation of the resource, or in other words, the document provided by the resource. If you are treating a direct relationship between an HTTP method and a URL as a remote function call, then you’ve lost some of the benefits of REST. Instead, your contract should be HTTP and the document returned by the service. In doing so, services can evolve by providing more information while clients make an effort to use only the necessary parts of the document.

    For example, if you had some service return some JSON about a user, the consuming service might only be interested in an “email” field. As long as that field doesn’t change, it will work with a minimal contract. The providing service can then add new features to the document with confidence that it shouldn’t break existing consumers. The converse is that in an RPC environment, you’d need to ensure the caller is expecting the same type and version of that type, or has the ability to handle multiple versions (i.e. rolling out an upgrade where an RPC endpoint returns new data and clients are rolled out expecting both the old and new results).

    None of this is to suggest REST is the best solution for microservices. Providing an RPC interface with explicit types can be much faster, avoid the overhead of traversing a document, etc. My only point is that if you consider a REST contract from the standpoint of a document rather than an RPC, there are some benefits that might be helpful. What’s more, there is no reason not to utilize RESTful techniques in an RPC-based environment to help get some of the benefits while reducing the overhead (i.e. service bar() -> JSON).

  2. I’m sorry, but what you are describing is not REST. The place where you went wrong is when you started talking about URLs instead of link relations. The URL is totally irrelevant with respect to REST.

    I honestly think that these kinds of ‘arguments against REST’ are harmful. Sure, there are tradeoffs when going RESTful, but the thing you are describing is not one of them.

  3. A well-written article that makes a point against hard-coding all of a REST request (endpoint, method name, parameters) into the URL.
    I am not sure I understood the problem about the context. The article seems to indicate that a REST URL defines a fixed place for a microservice within the architecture. If my understanding is correct, I tend to disagree.

    After all, a domain name is just a mapping to an IP address, and this mapping can be changed if required.

    I also would expect that a discovery service could deliver dynamic URLs to clients, so the endpoint address would no longer be carved in stone.

    I do agree on the routing problem. The proposed solution of routing only the service name via URL and having the service itself dispatch the method calls internally surely is worth thinking about. The downside seems to be more JSON en-/decoding work, as there would be no “pure” REST URLs (with empty request bodies) anymore. As with any real-world problem, the pros and cons have to be weighed carefully against each other.

  4. While I don’t love REST, and think that using protobuf-based RPC can be significantly better, this article does not nail it. Ultimately, the author sets up a simplistic straw-man argument against REST without espousing some of the biggest benefits of using alternatives like protobufs.

    The first and most glaring assumption the author makes here is treating the hostname of the RESTful endpoint as different from a ‘reference’. Fundamentally, your ‘’ is the same thing as ‘find the closest Starbucks’ and not the other way around: DNS gives you the ability to dynamically define endpoints for various hostnames, and you can create infinite numbers of hostnames to represent each microservice using the service discovery built into any modern system. This allows you to point to ‘’ and get sent to the proper service, regardless of where it’s located. It is completely possible to rely on hostnames and be dynamic in an ever-changing infrastructure.

    The author’s point about coupling request types and URLs is highly dependent on the environment: if you are already communicating with clients that speak something other than REST, it might not make sense to be serializing and deserializing between the two different systems. On the other hand, if there is purely service-to-service communication, it might not be as sensitive an issue. In the end, this comes down to how the contracts between services are negotiated.

    Ultimately, I like the sentiment. Using protobufs with gRPC creates a ton of positives that REST does not have. Clients can be auto-generated from an agreed-upon spec that is law. There is no difference in implementation across programming languages, and serialization and deserialization are significantly less of a concern. This allows services to communicate without the frequent annoyance of sending JSON around and generating URLs, and there is no debate about correctness across implementations. This provides a very distinct layer of coupling that is easily updated and often backwards compatible. What the article does not describe are the real reasons why this is useful; it sounds more like the author is having trouble translating calls between the two methods of content addressing, and might want to rethink the idea of using some sort of centralized “microservice router” — they are certainly not necessary!

    If I’ve misread any of the article, I’d love to know.

    • Dear Jan,

      This post from Jason seems to challenge your patience. There might be a misinterpretation in the use of wording or definitions.
      I am interested to hear your point of view: are REST APIs an added value or not? As technology progresses, there might be better alternatives you are applying… let me know what your thoughts are.
