I sympathize with the pedantry here and found Fielding's paper to be interesting, but this is a lost battle. When I see "REST API" I can safely assume the following:
- The API returns JSON
- CRUD actions are mapped to POST/GET/PUT/DELETE
- The team constantly bikesheds over correct status codes and at least a few are used contrary to the HTTP spec
- There's a decent chance listing endpoints were changed to POST to support complex filters
Like Agile, CI, or DevOps, you can insist on the original definition or submit to the semantic diffusion and use the terms as they are commonly understood.
Fielding won the real battle precisely because he was intellectually incoherent and mostly wrong. It's the "worse is better" of the 21st century.
RPC systems were notoriously unergonomic and at best marginally successful. See Sun RPC, RMI, DCOM, CORBA, XML-RPC, SOAP, Protocol Buffers, etc.
People say it is not RPC, but all the time we write some function in JavaScript like
const getItem = async (itemId) => { ... }
which does a
GET /item/{item_id}
and on the backend we have a function that looks like
Item getItem(String itemId) { ... }
with some annotation that explains how to map the URL to an item call. So it is RPC, but instead of a highly complex system that is intellectually coherent but awkward and makes developers puke, we have a system that's more manual than it could be but has a lot of slack and leaves developers feeling like they're in control. 80% of what's wrong with it is that people won't just use ISO 8601 dates.
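For concreteness, here is a minimal sketch of that server-side half, assuming an Express-style Node server; the stubbed getItem and the route are illustrative, not taken from the comment above:

    import express from "express";                               // assumed dependency
    const app = express();

    // Stand-in for the backend function above (stubbed for illustration).
    const getItem = async (itemId: string) => ({ id: itemId });

    app.get("/item/:itemId", async (req, res) => {
      // Plain RPC behind a resource-shaped URL; the route mapping is the only "REST" part.
      res.json(await getItem(req.params.itemId));
    });

    app.listen(3000);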
> I sympathize with the pedantry here and found Fielding's paper to be interesting, but this is a lost battle.
Why do people feel compelled to even consider it to be a battle?
As I see it, the REST concept is useful, but the HATEOAS detail ends up having no practical value and creates more problems than the ones it solves. This is in line with the Richardson maturity model[1], where the apex of REST includes all the HATEOAS bells and whistles.
Should REST without HATEOAS classify as REST? Why not? I mean, what is the strong argument to differentiate an architectural style that meets all but one requirement? And is there a point to this nitpicking if HATEOAS is practically irrelevant and the bulk of RESTful APIs do not implement it? What's the value in this nitpicking? Is there any value in citing theses as if they were Monty Python skits?
For me the battle is with people who want to waste time bikeshedding over the definition of "REST" and whether the APIs are "RESTful", with no practical advantage, and then with having to steer the conversation--and their motivation--towards more useful things without alienating them. It's tiresome.
I’m with you. HATEOAS is great when you have two independent (or more) enterprise teams with PMs fighting for budget.
When it's just you and your two-pizza team, contract-first design is totally fine. Just make sure you can version your endpoints or feature-flag new APIs so it doesn't break your older clients.
Defining media types seems right to me, but what ends up happening is that people use Swagger instead to define APIs, and out the window goes HATEOAS; part of the reason for this is just that defining media types is not something people do (though they should).
Basically: define a schema for your JSON, use an obvious CRUD mapping to HTTP verbs for all actions, use URI local-parts embedded in the JSON, use standard HTTP status codes, and embed more error detail in the JSON.
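Something like this is usually what that boils down to; a sketch only, where the Item/ApiError shapes and the paths are made up for illustration:

    // Illustrative JSON schema; dates are ISO 8601 strings.
    interface Item { id: string; name: string; updatedAt: string; }
    interface ApiError { code: string; message: string; details?: unknown; }

    // The obvious CRUD mapping:
    //   POST   /items        -> create an Item
    //   GET    /items/{id}   -> read an Item
    //   PUT    /items/{id}   -> replace an Item
    //   DELETE /items/{id}   -> delete an Item
    // On failure: a standard status code (400, 404, 409, ...) plus an ApiError body with the detail.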
>the HATEOAS detail ends up having no practical value and creates more problems than the ones it solves.
Many server-rendered websites support REST by design: a web page with links and forms is the state transferred to the client. Even in SPAs, HATEOAS APIs are great for shifting business logic and security to the server, where it belongs. I have built plenty of them; it does require a certain mindset, but it does make many things easier. What problems are you talking about?
We should probably stop calling the thing that we call REST, REST and be done with it - it's only tangentially related to what Fielding tried to define.
> We should probably stop calling the thing that we call REST (...)
That solves no problem at all. We have Richardson maturity model that provides a crisp definition, and it's ignored. We have the concept of RESTful, which is also ignored. We have RESTless, to contrast with RESTful. Etc etc etc.
None of this discourages nitpickers. They are pedantic in one direction, and so lax in another direction.
Discoverability by whom, exactly? Like if it's for developer humans, then good docs are better. If it's for robots, then _maybe_ there's some value... But in reality, it's not for robots.
HATEOAS solves a problem that doesn't exist in practice. Can you imagine an API provider being like, "hey, we can go ahead and change our interface...should be fine as long as our users are using proper clients that automatically discover endpoints and programmatically adapt accordingly"? Or can you imagine an API consumer going, "well, this HTTP request delivers the data we need, but let's make sure not to hit it directly -- instead, let's recursively traverse a graph of requests each time to make sure this is still the way to do it!"
You have got it wrong.
Let's say I build some API with different user roles. Some users can delete an object, others can only read it. The UI knows about the semantics of the operations and their logical names, so when the UI gets the object from the server it can simply check whether certain operations are available, instead of encoding the permission checking on the client side. This is the discoverability. It does not imply generated interfaces; the UI may know something about the data in advance.
This is actually what we do at [DAYJOB] and it's been working well for over 12 years. Like any other kind of interface indirection it adds the overhead of indirection for the benefit of being able to change the producer's side of the implementation without having to change all of the consumers at the same time.
In this example you receive a list of permitted operations embedded in the resource model. href=. means you can perform this operation on the resource's self link.
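Roughly, in a HAL-style representation (the field names here are illustrative, not from the comment above), the server embeds only the operations the current user may perform:

    // Sketch of a HAL-style resource; the UI renders a Cancel action only because the link is present.
    const order = {
      id: "42",
      status: "open",
      _links: {
        self:   { href: "/orders/42" },
        cancel: { href: "." },   // "." = perform this operation against the self link
        // no "delete" link here: the current user is not allowed to delete this order
      },
    };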
Oh, interesting. So rather than the UI computing what operations should currently be allowed by, say, knowing the user's current role and having rules baked into it about the relationship between role and UI widgets, the UI can work out what to offer simply off of explicit statements of capability from the server.
I can see some meat on these bones. The counterpoint is that the protocol is now chattier than it would be otherwise... But a full analysis of bandwidth to the client would have to factor in that you have to ship over a whole framework to implement those rules and keep those rules synchronized between client and server implementations.
It's something else. The list of available actions may include other resources, so you cannot express it with pure HTTP; you need a data model for that (HAL is one possible solution, but there are others).
Or probably just an Allow header on a response to another query (e.g. when fetching an object, server could respond with an Allow: GET, PUT, DELETE if the user has read-write access and Allow: GET if it’s read-only).
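A minimal client-side sketch of that idea; the URL is made up, and Allow on a GET response is the convention proposed above, not something servers send by default:

    const res = await fetch("/objects/42");
    const allow = (res.headers.get("Allow") ?? "").split(",").map(s => s.trim());
    const readOnly = !allow.includes("PUT") && !allow.includes("DELETE");   // drive the UI from the server's answer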
> If it's for robots, then _maybe_ there's some value...
Nah, machine readable docs beat HATEOAS in basically any application.
The person that created HATEOAS was really not designing an API protocol. It's a general use content delivery platform and not very useful for software development.
The problems do exist, and they're everywhere. People just invented all sorts of hacks and workarounds for these issues instead of thinking more carefully about them. See my posts in this thread for some examples:
For most APIs that doesn’t deliver any value which can’t be gained from API docs, so it’s hard to justify. However, these days it could be very useful if you want an AI to be able to navigate your API. But MCP has the spotlight now.
I think you throw away a useful description of an API by lumping them all under RPC. If you tell me your API is RPC instead of REST then I'll assume that:
* If the API is available over HTTP then the only verb used is POST.
* The API is exposed on a single URL and the `method` is encoded in the body of the request.
It is true, if you say "RPC" I'm more likely to assume gRPC or something like that. If you say "REST", I'm 95% confident that it is a standard / familiar OpenAPI style json-over-http style API but will reserve a 5% probability that it is actually HATEOAS and have to deal with that. I'd say, if you are doing Roy Fielding certified REST / HATEOAS it is non-standard and you should call it out specifically by using the term "HATEOAS" to describe it.
Try getting people in the real world to refer to "REST" APIs, the kind that use HTTP verbs and have routes like /resource/id, as RPC APIs. As it stands, in the world outside of this thread nobody does that.
At some level language is outside of your control as an individual even if you think it's literally wrong--you sometimes have to choose between being 'correct' and communicating clearly.
This article also tries to make the point that the focus shouldn't be on the verbs themselves, and that the RESTful dissertation doesn't focus on them.
The other side of this is that the IETF RESTful proposals from 1999 that talk about the protocol for implementation are just incomplete. The obscure verbs have no consensus on their implementation and libraries across platforms may do PUT, PATCH, DELETE incompatibly. This is enough reason to just stick with GET and POST and not try to be a strict REST adherent, since you'll hit a wall.
While I ask people whether they actually mean REST according to the paper or not, I am one of the people who refuse to just move on. The reason being that the mainstream use of the term doesn’t actually mean anything, it is not useful, and therefore not pragmatic at all. I basically say “so you actually just mean some web API, ok” and move on with that. The important difference being that I need to figure out the peculiarities of each such web API.
>> The important difference being that I need to figure out the peculiarities of each such web API
So if they say it is Roy Fielding certified, you would not have to figure out any "peculiarities"? I'd argue that creating a typical OpenAPI style spec which sticks to standard conventions is more professional than creating a pedantically HATEOAS API. Users of your API will be confused and confusion leads to bugs.
So you enjoy being pedantic for the sake of being pedantic? I see no useful benefit either from a professional or social setting to act like this.
I don't find this method of discovery very productive; often, regardless of whether the API meets some standard, the real peculiarities are in the logic of the endpoints and not on the surface.
I can see a value in pedantry in a professional setting from a signaling point of view. It's a cheap way to tell people "Hey! I'm not like those other girls, I care about quality," without necessarily actually needing to do the hard work of building that quality in somewhere where the discerning public can actually see your work.
(This is not a claim that the original commenter doesn't do that work, of course, they probably do. Pedants are many things but usually not hypocrites. It's just a qualifier.)
You'd still probably rather work with that guy than with me, where my preferred approach is the opposite of pedantry. I slap it all together and rush it out the door as fast as possible.
It also means they made some effort to use appropriate http verbs instead of GET/POST for everything, and they made an effort to organize their urls into patterns like `/things/:id/child/:child_id`.
It was probably an organic response to the complexity of SOAP/WSDL at the time, so people harping on how it's not HATEOAS kinda miss the historical context; people didn't want another WSDL.
> It also means they made some effort to use appropriate http verbs instead of GET/POST for everything, and they made an effort to organize their urls into patterns like `/things/:id`.
No not really. A lot of people don't understand REST to be anything other than JSON over HTTP. Sometimes, the HTTP verbs thing is done as part of CRUD but actually CRUD doesn't necessarily have to do with the HTTP verbs at all and there can just be different endpoints for each operation. It's a whole mess.
I really hate my conclusions here, but from a limited freedom point of view, if all of that is going to happen...
> The team constantly bikesheds over correct status codes and at least a few are used contrary to the HTTP spec
So we'd better start with a standard scaffolding for the replies so we can encode the errors there and forget about status codes. Then the only thing generating an error status is an unhandled exception mapped to 500. That's the one design that survives people disagreeing.
> There's a decent chance listing endpoints were changed to POST to support complex filters
So we'd better just standardize that lists support both GET and POST from the beginning. While you are at it, also accept queries in both the URL and the body parameters.
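A rough sketch of what that scaffolding could look like; nothing here is prescribed, the envelope shape and paths are just illustrative:

    // One reply envelope for everything; handled failures live in the body, not in the status code.
    type Envelope<T> =
      | { ok: true; data: T }
      | { ok: false; error: { code: string; message: string } };
    // Only an unhandled exception maps to HTTP 500.

    // Listings accept the same query two ways from day one:
    //   GET  /items?status=open&limit=50             (query string)
    //   POST /items  {"status": "open", "limit": 50} (JSON body, also honouring URL parameters)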
HTTP/JSON API works too, but you can assume it's what they mean by REST.
It makes me wish we stuck with XML-based stuff, it had proper standards, strictly enforced by libraries that get confused by things not following the standards. HTTP/JSON APIs are often hand-made and hand-read, NIH syndrome running rampant because it's perceived to be so simple and straightforward. To the point of "we don't need a spec, you can just see the response yourself, right?". At least that was the state ~2012; nowadays they use an OpenAPI spec, but it's often incomplete, regardless of whether it's handmade (in which case people don't know everything they have to fill in) or generated (in which case the generators will often have limitations and MAYBE support for some custom comments that can fill in the gaps).
> HTTP/JSON API works too, but you can assume it's what they mean by REST.
This is the kind of slippery slope where pedantic nitpickers thrive. The start to complain that if you accept any media type other than JSON then it's not "REST-adjacent" anymore because JSON is in the name and some bloke wrote down somewhere that JSON was a trait of this architectural style.
In this sense, the term "RESTful" is useful to shut down these pedantic nitpickers. It's "REST-adjacent" still, but the right answer to nitpicking is "who cares".
> They start to complain that if you accept any media type other than JSON then it's not "REST-adjacent" anymore, because JSON is in the name and some bloke wrote down somewhere that JSON was a trait of this architectural style.
wat?
Nowhere is JSON in the name of REpresentational State Transfer. Moreover, sending other representations than JSON (and/or different presentations in JSON) is not only acceptable, but is really a part of REST
> Nowhere is JSON in the name of REpresentational State Transfer.
If you read the message you're replying to, you'll notice you are commenting on the idea of coining the concept of HTTP/JSON API as a better fitting name.
This. Or maybe we should call it "Rest API" in lowercase, meaning not the state transfer, but the state of mind, where the developer has reached satisfaction with the API design and is no longer bothered with hypermedia controls, schemas, etc.
I recall having to maintain an integration with some obscure SOAP API that ate and spit out XML with strict schemas, and while I can't remember much about it, I think the integration broke quite easily if the other end changed their API somehow.
Assuming the / was meant to describe it as both an HTTP API and a JSON API (rather than HTTP API / JSON API) it should be JSON/HTTP, as it is JSON over HTTP, like TCP/IP or GNU/Linux :)
SOAP in particular can really not be described as "proper".
It had the advantage that the API docs were always generated, and thus correct, but the most common problem was one software stack not being able to consume a service built with another stack.
It's always better to use GET/POST exclusively. The verb mapping was theoretical, from someone who didn't have to implement it. I've long ago caved to the reality of the web's limited support for most of the other verbs.
> - CRUD actions are mapped to POST/GET/PUT/DELETE
Agree on your other three but I've seen far too many "REST APIs" with update, delete & even sometimes read operations behind a POST. "SOAP-style REST" I like to call it.
Do you care? From my point of view, POST, PUT, DELETE, UPDATE, and PATCH all do the same thing. I would argue that if there is a difference, making the distinction in the URL instead of the request method makes it easier to search code and logs. And what's the correct verb anyway?
So that's an argument that there may be too many request methods, but you could also argue there aren't enough. But then standardization becomes an absolute mess.
I agree.
From what I have seen in corporate settings, using anything more than GET/POST takes the time to deploy the API to a different level. Using UPDATE, PATCH etc. typically involves firewall changes that may take weeks or months to get approved and deployed, followed by a never-ending audit/re-justification process.
There's no point in idempotency for operations that change the state. DELETE is supposed to be idempotent, but it can only be if you limit yourself to deletion by unique, non-repeating id. Should you do something like delete by email or product, you have to use another operation, which then obviously will be POST anyway. And there's no way to "cache" a delete operation.
It's just absurd to mention idempotency when the state gets altered.
Yeah but GET doesn’t allow requests to have bodies (yeah, I know, technically you can but it’s not very useful), and this is a legitimate issue preventing its use in complex APIs.
> Like Agile, CI or DevOps you can insist on the original definition or submit to the semantic diffusion and use the terms as they are commonly understood
This is an insightful observation. It happens with pretty much everything
As has been happening recently with the term vibecoding. It started with some definition, and now it's morphed into more or less just meaning AI-assisted coding. Some people don't like it[1]
I can count on one hand the number of times I've worked on a service that can accurately be modeled as just representational state transfer. The rest have at least some features that are inherently, inescapably some form of remote procedure call. Which the original REST model eschews.
This creates a lot of impedance mismatch, because the HTTP protocol's semantics just weren't designed to model that kind of thing. So yeah, it is hard to figure out how to shoehorn that into POST/GET/PUT/DELETE and HTTP status codes. And folks who say it's easy tend to get there by hyper-focusing on that one time they were lucky enough to be working on a project where it wasn't so hard, and dismissing as rare exceptions the 80% of cases where it did turn out to be a difficult quagmire that forced a bunch of unsatisfying compromises.
Alternatively you can pick a protocol that explicitly supports RPC. But that's not necessarily any better because all the well-known options with good language support are over-engineered monstrosities like GRPC, SOAP, and (shudder) CORBA. It might reduce your domain modeling headaches, but at the cost of increased engineering and operations hassle. I really can't blame anyone for deciding that an ad-hoc, ill-specified, janky application of not-actually-REST is the more pragmatic option. Because, frankly, it probably is.
xml-rpc (before it transmogrified into SOAP) was pretty simple and flexible. Still exists, and there is a JSON variant now too. It's effectively what a lot of web APIs are: a way to invoke a method or function remotely.
I use the term "HTTP API"; more general. Context, in light of your definition: In many cases labeled "REST", there will only be POST, or POST and GET, and HTTP 200 status with an error in JSON is used instead of HTTP status codes. Your definition makes sense as a weaker form of the original, but it is still too strict compared to how the term is used. "REST" = "HTTP with JSON bodies" is the most practical definition I have.
> HTTP 200 status with an error in JSON is used instead of HTTP status codes
I've seen some APIs that not only always return a 200 code, but will include a response in the JSON that itself indicates whether the HTTP request was successfully received, not whether the operation was successfully completed.
Building usable error handling with that kind of response is a real pain: there's no single identifier that indicates success/failure status, so we had to build our own lookup table of granular responses specific to each operation.
How can you idiomatically do a read only request with complex filters? For me both PUT and POST are "writable" operations, while "GET" are assumed to be read only. However, if you need to encode the state of the UI (filters or whatnot), it's preferred to use JSON rather than query params (which have length limitations).
One uses POST and recognizes that REST doesn't have to be so prescriptive.
The part of REST to focus on here is that the response from earlier well-formed requests will include all the forms (and possibly scripts) that allow for the client to make additional well-formed requests. If the complex filters are able to be made with a resource representation or from the root index, regardless of HTTP methods used, I think it should still count as REST (granted, HATEOAS is only part of REST but I think it should be a deciding part here).
When you factor in the effects of caching by intermediate proxy servers, you may find yourself adapting any search-like method to POST regardless, or at least GET with params, but you don't always want to, or can't, put the entire formdata in params.
Plus, with the vagaries of CSRF protections, per-user rate-limiting and access restrictions, etc., your GET is likely to turn into a POST for anything non-trivial. I wouldn't advise trying for pure REST-ful on the merits of its purity.
There's no requirement in HTTP (or REST) to either create a resource or return a Location header.
For the purposes of caching etc, it's useful to have one, as well as cache controls for the query results, and there can be links in the result relative to the Location (eg a link href of "next" is relative to the Location).
The response to POST can return everything you need. The Location header that you receive with it will contain permanent link for making the same search request again via GET.
Pros: no practical limit on query size.
Cons: permalink is not user-friendly - you cannot figure out what filters are applied without making the request.
If you really want this idiomatically correct, put the data in JSON or another suitable format, zip it, and encode it in Base64 to pass via GET as a single parameter (sketched below). To hit the browser limits you would need such a big query that in many cases you will hit UX constraints earlier (2048 bytes is 50+ UUIDs or 100+ polygon points, etc).
Pros: the search query is a link that can be shared, the result can be cached.
Cons: harder to debug, may not work in some cases due to URI length limits.
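A quick sketch of that encoding on the client side; the compression step is omitted and the parameter name q is made up:

    const filters = { status: ["open", "blocked"], updatedAfter: "2024-01-01" };
    const q = btoa(JSON.stringify(filters));                        // JSON -> Base64
    const res = await fetch(`/items?q=${encodeURIComponent(q)}`);   // one shareable, cacheable GET
    // Server side: decode the single parameter, parse, validate, then run the query.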
HTML FORMs are limited to www-form-encoded or multipart. The length of the queries on a GET with a FORM is limited by intermediaries that shouldn't be limiting it. But that's reality.
Do a POST of a query document/media type that returns a "Location" that contains the query resource that the server created as well as the data (or some of it) with appropriate link elements to drive the client to receive the remainder of the query.
In this case, the POST is "writing" a query resource to the server and the server is dealing with that query resource and returning the resulting information.
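A minimal sketch of that POST-then-Location flow from the client's perspective; the URLs and payload are illustrative:

    // POST the filter once; the server stores it as a query resource and points at it.
    const created = await fetch("/searches", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ status: ["open", "blocked"], text: "timeout" }),
    });
    const permalink = created.headers.get("Location");   // e.g. "/searches/8f3a0c"
    const firstPage = await created.json();              // may already carry results plus "next" links
    // Later, or from any other client, the same query is just a GET on the permalink:
    const nextRun = await (await fetch(permalink!)).json();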
Soon, hopefully, QUERY will save us all. In the meantime, simply using POST is fine.
I've also seen solutions where you POST the filter config, then reference the returned filter ID in the GET request, but that often seems like overkill even if it adds some benefits.
Haha, our API still returns XML. At least, most of the endpoints do. Not the ones written by that guy who thinks predictability in an API is lower priority than modern code, those ones return JSON.
Presumably they had an existing API, and then REST became all the rage, so they remapped the endpoints and simply converted the XML to JSON. What do you do with the <tag>value</tag> construct? Map it to the name `$`!
Congratulations, we're REST now, the world is a better place for it. Off to the pub to celebrate, gents. Ugh.
I think people tend to forget these things are tools, not shackles
> The team constantly bikesheds over correct status codes and at least a few are used contrary to the HTTP spec
I've done this enough times that now I don't really bother engaging.
I don't believe anyone gets it 100% correct ever.
As long as there is nothing egregiously incorrect, I'll accept whatever.
Importantly for the discussion, this also doesn't mean the push for REST APIs was a failure. Sure, we didn't end up with what was precisely envisioned from that paper, but we still got a whole lot better than CORBA and SOAP.
The lowest common denominator in the REST world is a lot better than the lowest common denominator in SOAP world, but you have to convince the technically literate and ideological bunch first.
> I sympathize with the pedantry here and found Fielding's paper to be interesting, but this is a lost battle.
True. Losing hacking/hacker was sad but I can live with it - crypto becoming associated with scam coins instead of cryptography makes me want to fight.
This is very true. Over my 15 years of engineering, I have never suffered _that_ much with integrating with an API (assuming it exists). So the lack of "HATEOAS" hasn't even been noticeable for me. As long as they get most of the 400 status codes right (specifically 200, 401, 403, 429) I usually have no issues integrating and don't even notice that they don't have some "discoverable api". As long as I can get the data I need or can make the update I need I am fine.
I think good rest api design is more a service for the engineer than the client.
> As long as they get most of the 400 status codes right (specifically 200, 401, 403, 429)
A client had built an API that would return 200 on broken requests. We pointed it out and asked if maybe it could return 500, to make monitoring easier. Sure thing, next version: "HTTP 200 - 500". They just wrote 500 in the message body; the return code remained 200.
I just consumed an API where errors were marked with a "success": false field.
The "success" is never true. If it's successful, it's not there. Also, a few endpoints return 500 instead, because of course they do. Oh, and one returns nothing on error and data on success, because, again, of course it does.
Anyway, if you want a clearer symptom that your development stack is shit and has way too much accidental complexity, there isn't any.
This is the real world. You just deal with it (at least I do) because fighting it is more work and at the end of the day the boss wants the project done.
I've seen this a few times in the past, but for a different reason. What would happen in these cases was that internally there'd be some cascade of calls to microservices that all get collected. In the most egregious examples it's just some proxy call wrapping the "real" response.
So it becomes entirely possible to get a 200 from the thing responding to you, but it may be wrapping an upstream error that gave it a 500.
I've had frontend devs ask for this, because it was "easier" to handle everything in the same .then() callback. They wanted me to put ANY error stuff as a payload in the response.
> So the lack of "HATEOAS" hasn't even been noticeable for me.
I think HATEOAS tackles problems such as API versioning, service discovery, and state management in thin clients. API versioning is trivial to manage with sound API Management policies, and the remaining problems aren't really experienced by anyone. So you end up having to go way out of your way to benefit from HATEOAS, and you require more complexity both on clients and services.
In the end it's a solution searching for problems, and no one has those problems.
>- There's a decent chance listing endpoints were changed to POST to support complex filters
Please. Everyone knows they tried to make the complex filter work as a GET, then realized the filtering query is so long that it breaks whatever WAF or framework is being used because they block queries longer than 4k chars.
I disagree. It's a perfectly fine approach to many kinds of APIs, and people aren't "mediocre" just for using widely accepted words to describe this approach to designing HTTP APIs.
So your view is that the person who coins a term forever has full rights to dictate the meaning of that term, regardless of what meaning turns out to be useful in practice and gets broadly accepted by the community? And you think that anyone who disagrees with such an ultra-prescriptivist view of linguistics is somehow a "mediocre programmer"? Do I have that right?
No. For all people who use "REST": if reading Fielding is the exception that gets you on HN, then not reading Fielding is what the average person does. Mediocre.
Using Fielding's term to refer to something else is an extra source of confusion, which kinda makes the term useless. Nobody knows what exactly the speaker is referring to.
The point is lost on you though. There are REST APIs (almost none), and there are "REST APIs" - a battle cry of mediocre developers. Now go tell them their restful has nothing to do with rest. And I am now just repeating stuff said in article and in comments here.
Why should I (or you, for that matter) go and tell them their restful has nothing to do with rest? Why does it matter? They're making perfectly fine HTTP APIs, and they use the industry standard term to describe what kind of HTTP API it is.
It's convenient to have a word for "HTTP API where entities are represented by JSON objects with unique paths, errors are communicated via HTTP status codes and CRUD actions use the appropriate HTTP methods". The term we have for that kind of API is "rest". And that's fine.
> 1. Never said I'm going to tell them. It's on someone else. I'm just going to lower my expectation from such developers accordingly.
This doesn't seem like a useful line of conversation, so I will ignore it.
> 2. So just "HTTP API".
No! There are many kinds of HTTP APIs. I've both made and used "HTTP APIs" where HTTP is used as a transport and API semantics are wholly defined by the message types. I've seen APIs where every request is an HTTP POST with a protobuf-encoded request message and every response is a 200 OK with a protobuf-encoded response message (which might then indicate an error). I've seen GraphQL APIs. I've seen RPC-style APIs where every "RPC call" is a POST request to an endpoint whose name looks like a function name. I've seen APIs where request and response data is encoded using multipart/form-data.
Hell, even gRPC APIs are "HTTP APIs": gRPC uses HTTP/2 as a transport.
Telling me that something is an "HTTP API" tells me pretty much nothing about how it works or how I'm expected to use it, other than that HTTP is in some way involved. On the other hand, if you tell me that something is a "REST API", I already have a very good idea about how to use it, and the documentation can assume a lot of pre-existing context because it can assume that I've used similar APIs before.
> On the other hand, if you tell me that something is a "REST API", I already have a very good idea about how to use it (...)
Precisely this. The value of words is that they help communicate concepts. REST API or even RESTful API conveys a precise idea. To help keep pedantry in check, Richardson's maturity model provides value.
Everyone manages to work with this. Not those who feel the need to attack people with blanket accusations of mediocrity, though. They hold onto meaningless details.
It’s the worst kind of pedantry. Simultaneously arrogant, uncharitable and privileged.
Most of us are not writing proper RESTful APIs because we're dealing with legacy software, weird requirements, and the egos of other developers. We're not able to build whatever we want.
> It’s the worst kind of pedantry. Simultaneously arrogant, uncharitable and privileged.
I'd go as far as to claim it is by far the dumbest kind, because it has no value, serves no purpose, and solves no problem. It's just trivia used to attack people.
I met a DevOps guy who didn't know what "dotfiles" are.
However, I'd argue people who use the term to describe it the same way as everyone else are the smart ones; if you want to refer to the "real" one, just add "strict" or "real" in front of it.
I don't think we should dismiss people over drifting definitions and lack of "foundational knowledge".
This is more like people arguing over "proper" English, the point of language is to communicate ideas. I work for a German company and my German is not great but if I can make myself understood, that's all that's needed. Likewise, the point of an API is to allow programs, systems, and people to interoperate. If it accomplishes that goal, it's fine and not worth fighting over.
If my API is supposed to rely on content-type, how many different representations do I need? JSON is a given these days, and maybe XML, but why not plain text, why not PDF? My job isn't an academic paper; good enough to get the job done is going to have to be good enough.
> This is more like people arguing over "proper" English, the point of language is to communicate ideas.
ur s0 rait, eye d0nt nnno wy ne1 b0dderz tu b3 "proppr"!!!!1!!
</sarcasm>
You are correct that communication is the point. Words do communicate a message. So too does disrespect for propriety: it communicates the message that the person who is ignorant or disrespectful of proper language is either uneducated or immature, and that in turn implies that such a person’s statements and opinions should be discounted if not ignored entirely.
Words and terms mean things. The term ‘REST’ was coined to mean something. I contend that the thing ‘REST’ originally denoted is a valuable thing to discuss, and a valuable thing to employ (I could be wrong, but how easy will it be for us to debate that if we can’t even agree on a term for the thing?).
It’s similar to the ironic use of the word ‘literally.’ The word has a useful meaning, there is already the word ‘figuratively’ which can be used to mean ‘not literally’ and a good replacement for the proper meaning of ‘literally’ doesn’t spring to mind: misusing it just decreases clarity and hinders communication.
> If my API is supposed to rely on content-type, how many different representations do I need? JSON is a given anymore, and maybe XML, but why not plain text, why not PDF?
Whether something is JSON or XML is independent of the representation — they are serialisations (or encodings) of a representation. E.g. {"type": "foo","id":1}, <foo id="1"/>, <foo><id>1</id></foo> and (foo (id 1)) all encode the same representation.
>misusing it just decreases clarity and hinders communication
There is no such thing as "misusing language". Language changes. It always does.
Maybe you grew up in an area of the world where it's really consistent everywhere, but in my experience I'm going to have a harder time understanding people even two to three villages away.
Because language always changes.
Words mean a particular thing at a point in time and space. At another one, they might mean something completely different. And that's fine.
You can like it or dislike it, that's up to you. However, I'd say every little bit of negative thought in that area only serves to make yourself miserable, since humanity and language at large just aren't consistent.
And that's ok. Be it REST, literally or even a normal word such as 'nice', which used to mean something like 'foolish'.
Again, language is inconsistent by default and meanings never stay the same for long - the more a terminus technicus gets adopted by the wider population, the more its meaning gets widened and/or changed.
One solution for this is to just say "REST in its original meaning" when referring to what is now the exception instead of the norm.
When I was working on my first HTTP-based API 13 years ago, based on many comments about true REST, I decided to first study what REST should really be. I've read Fielding's paper cover to cover, I've read the RESTful Web Services Cookbook from O'Reilly, and then proceeded to work around Django idioms to provide a REST API. This was a bit of cargo cult thinking on my end; I didn't truly understand how REST would benefit my service. It took me several more years and several more HTTP APIs to understand that in the case of these services, there were no benefits.
The vision of API that is self discoverable and that works with a generic client is not practical in most cases. I think that perhaps AWS dashboard with its multitude of services has some generic UI code that allows to handle these services without service-specific logic, but I doubt even that.
Fielding's paper doesn't provide a complete recipe for building self-discoverable APIs. It is an architecture, but the details of how clients should really discover the endpoints and determine what these endpoints are doing are left out of the paper. To make a truly discoverable API you need to specify a protocol for endpoint discovery, operation descriptions, help messages etc. Then you need clients that understand your specification, so it is not really a generic client. If your service is the only one that implements this client, you made a lot of extra effort to end up with the same solution that non-REST services implement - a service provides an API and JS code to work with the API (or a command line client that works with the API), but there is no client code reuse at all.
I also think that good UX is not compatible with REST goals. From a user perspective, app-specific code can provide better UX than generic code that can discover endpoints and provide UI for any app. Of course, UI elements can be standardized and described in some languages (remember XUL?), so UI can adapt to app requirements. But the most flexible way for such standardization is to provide a language like JavaScript that is responsible for building UI.
The browser is "generic code" that provides the UX we use all day, every day.
REST includes allowing code to be part of the response from a server, there are the obvious security issues, but the browsers (and the standards) have dealt with a lot of that.
I think you're right. APIs have a lot of aspects to them, so describing them is hard. API users need to know typical latency bounds, which error codes may be retried, whether an action is atomic or idempotent. HATEOAS gets you none of these things.
So fully implementing a perfect version of REST is usually not necessary for most types of problems users actually encounter.
What REST has given us is an industry-wide lingua franca. At the basic level, it's a basic understanding of how to map nouns/verbs to HTTP verbs and URLs. Users get to use the basic HTTP response codes. There's still a ton of design and subtlety to all this. Do you really get to do things that are technically allowed, but might break at a typical load balancer (returning bodies with certain error codes)? Is your returning 500 retriable in all cases, with what preferred backoff behavior?
>API users need to know typical latency bounds, which error codes may be retried, whether an action is atomic or idempotent. HATEOAS gets you none of these things.
Those things aren't always necessary. However API users always need to know which endpoints are available in the current context. This can be done via documentation and client-side business logic implementing it (arguably, more work) or this can be done with HATEOAS (just check if server returned the endpoint).
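In code, "just check if the server returned the endpoint" can be as small as this; a sketch assuming a HAL-style body, where the field names and the UI helper are hypothetical:

    declare function showApproveButton(url: string): void;   // hypothetical UI helper

    const invoice = await (await fetch("/invoices/7")).json();
    // The server includes an "approve" link only when this user may approve this invoice right now.
    if (invoice._links?.approve) {
      showApproveButton(invoice._links.approve.href);
    }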
HTTP 500 retriable sounds like a design error, when you can use HTTP 503 to explicitly say "try again later, it's temporary".
I think this hits the nail on the head. Complaining that the current understanding of REST isn't exactly the same as the original usage is missing the point that now REST gives people a good idea of what to expect and how to use the exposed interface.
It's actually a very analogous complaint to how object-oriented programming isn't how it was supposed to be and that only Smalltalk got it right. People now understand what is meant when people say OOP even if it's not what the creator of the term envisioned.
Computer Science, and even the world in general, is littered with examples of this process in action. What's important is that there's a general consensus of the current meaning of a word.
One thing though - if you do take the time to learn the original "perfect" versions of these things, it helps you become a much better system designer. I'm constantly worried about API design because it has such large and hard-to-change consequences.
On the other hand, we as an industry have also succeeded quite a bit! So many of our abstractions work really well.
It's not just the original REST that usually has no benefits. The industry's reinterpreted version of weak REST also usually has little to no benefits. Who really cares that deleting a resource must necessarily be done with the DELETE HTTP verb rather than simply a POST?
You have to represent the action somehow. And letting proxies understand a wee bit of what's going on is useful. That's how you can have a proxy that lets your users browse the web but not login to external sites, and so on.
The POST verb exists, there's no reason not to use it to ask a server to delete data.
In fact, there are plenty of reasons not to use DELETE and PUT. Middleboxes managed by incompetent security people block them, they require that developers have a minimum of expertise and don't break the idempotency rule, lots of software stacks simply don't support them (yeah, those stacks are bad, which still doesn't change anything), and most of the internet just doesn't use the benefit they provide (because they don't trust the developers behind the server not to break the rules).
And you just added more work to yourself to interpret the HTTP verb. You already need work to interpret the body of a POST request, so why not put the information of "the operation is trying to delete" inside the body?
> To make truly discoverable API you need to specify protocol for endpoints discovery, operations descriptions, help messages etc. Then you need clients that understand your specification, so it is not really a generic client.
Generic clients just need to understand hypermedia and they can discover your API, as long as your API returns hypermedia from its starting endpoint and all other endpoints are transitively linked from that start point.
Let me ask you this: if I gave you an object X in your favourite OO language, could you use your languages reflection capabilities to discover all properties of every object transitively reachable from X, and every method that could be called on X and all objects transitively reachable from X? Could you not even invoke many of those methods assuming the parameter types are mostly standardized objects or have constructors that accept standardized objects?
This is what discoverability via HATEOAS is. True REST can be seen as exporting an object model with reflection capabilities. For clients that are familiar with your API, they are using hypermedia to access known/named properties and methods, and generic clients can use reflection to do the same.
> Let me ask you this: if I gave you an object X in your favourite OO language, could you use your languages reflection capabilities to discover all properties of every object transitively reachable from X, and every method that could be called on X and all objects transitively reachable from X? Could you not even invoke many of those methods assuming the parameter types are mostly standardized objects or have constructors that accept standardized objects?
Sure this can be done, but I can't see how to build a useful generic app that interacts with objects automatically by discovering the methods and calling them with discovered parameters. For things like debugger, REPL, or some database inspection/manipulation tool, this approach is useful, but for most apps exposed to end users, the UI needs to be aware what the available methods do and need to be intentionally designed to provide intuitive ways of calling the methods.
> For things like debugger, REPL, or some database inspection/manipulation tool, this approach is useful, but for most apps exposed to end users
Yes, exactly, but the point is that something like Swagger becomes completely trivial, and so you no longer need a separate, complex tool to do what the web automatically gives you.
The additional benefits are on the server-end, in terms of maintenance and service flexibility. For instance, you can now replace and transition any endpoint URL (except the entry endpoint) at any time without disrupting clients, as clients no longer depend on specific URL formats (URLs are meaningful only to the server), but depend only on the hypermedia that provides the endpoints they should be using. This is Wheeler's aphorism: hypermedia adds one level of indirection to an API which adds all sorts of flexibility.
For example, you could have a set of servers implementing an application function, each designated by a different URL, and serve the URL for each server in the hypermedia using any policy that makes sense, effectively making an application-specific load balancer. We worked around scaling issues over the years by adding SNI to TLS and creating dedicated load balancers, but Fielding's REST gave us everything we needed long before! And it's more flexible than SNI because these servers don't even have to be physically located behind a load balancer.
There are many ideas in the REST paper that are super useful, but the goal of making a generic client working with any API is difficult if not impossible to achieve.
Was the client of the service that you worked on fully generic and application independent? It is one thing to be able to change URLs only on the server, without requiring a client code change, and such flexibility is indeed practical benefit that the REST architecture gives us. It is another thing to change say, a calendar application into a messaging application just by returning a different entry point URL to the same generic client code. This goal is something that REST architecture tried to address, but IMO it was not realized in practice.
> There are many ideas in the REST paper that are super useful, but the goal of making a generic client working with any API is difficult if not impossible to achieve.
It's definitely possible to achieve: anywhere that data is missing you present an input prompt, which is exactly what a web browser does.
That said, the set of autonomous programs that can do something useful without knowing what they're doing is of course more limited. These are generic programs like search engines and AI training bots that crawl and index information.
> It is another thing to change say, a calendar application into a messaging application just by returning a different entry point URL to the same generic client code.
Browsers provide a generic execution environment, but the client code (JavaScript/HTML/CSS) is not generic. Calendar application and messaging application entry points provide application-specific code for implementing calendar or messaging app functions. I don't think this is what was proposed in the REST paper, otherwise we wouldn't have articles like 'Most RESTful APIs aren't really RESTful'.
> but the client code (JavaScript/HTML/CSS) is not generic
The HTML/hypermedia returned is never generic, that's why HATEOAS works at all and is so flexible.
The "client" JS code is provided by the server, so it's not really client-specific (the client being the web browser here--maybe should call it "agent"). Regardless, sending JS is an optimization, calendars and messaging are possible using hypermedia alone, and proves the point that the web browser is a generic hypermedia agent that changes behaviour based on hypermedia that's dictated solely by the URL.
You can start programming any app with a plain hypermedia version and then add JS to make the user experience better, which is the approach that HTMx is reviving.
> Generic clients just need to understand hypermedia
Yikes. Nobody wants to implement a browser to create a UI for ordering meals from a restaurant. I'm pretty sure the reason we ended up settling on just tossing JSON blobs around and baking the semantics of them into the client is that we don't want the behavior of the application to get tripped up on whether someone failed to close a <b> tag.
(Besides: practically, for a web-served interface, the client may as well carry semantic understanding because the client came from the server).
> Yikes. Nobody wants to implement a browser to create a UI for ordering meals from a restaurant.
You don't need a full web browser. Fielding published his thesis in 2000, browsers were almost trivial then, and the needs for programming are even more trivial: you can basically skip any HTML that isn't a link tag or form data for most purposes.
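For a machine client, "keep only the links and forms" really is just a few lines; a sketch assuming the browser's DOMParser and a made-up endpoint, though any small HTML parser would do elsewhere:

    const html = await (await fetch("/api/orders")).text();   // hypothetical endpoint serving hypermedia
    const doc = new DOMParser().parseFromString(html, "text/html");
    const links = [...doc.querySelectorAll("a[href]")].map(a => a.getAttribute("href"));
    const forms = [...doc.querySelectorAll("form")].map(f => ({
      action: f.getAttribute("action"),
      method: (f.getAttribute("method") ?? "GET").toUpperCase(),
      fields: [...f.querySelectorAll("input[name], select[name]")].map(el => el.getAttribute("name")),
    }));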
> baking the semantics of them into the client is that we don't want the behavior of the application to get tripped up on whether someone failed to close a <b> tag.
This is such a non-issue. Why aren't you worried about badly formatted JSON? Because we have well-tested JSON formatters. In a world where people understood the value of hypermedia as an interchange format, we'd be in exactly the same position.
And to be clear, if JSON had links as a first class type rather than just strings, then that would qualify as a hypermedia format too.
If I'm going to do HTML that isn't HTML then I might as well not do HTML, there's a lot of sharp edges in that particular markup that I'd prefer to avoid.
> Why aren't you worried about badly formatted JSON?
Because the json spec is much smaller than the HTML spec so it is much easier for the parser to prevalidate and reject invalid JSON.
Maybe I need to reread the paper and substitute "a good hypermedia language" for HTML conceptually, see if it makes more sense to me.
> Fielding's paper doesn't provide a complete recipe for building self-discoverable APIs.
But it does though. An HTTP server returns an HTTP response to a request from a browser. The response is an HTML webpage that is rendered to the user with all discoverable APIs visible as clickable links. Welcome to the World Wide Web.
You describe how web pages work; web pages are intended for human interaction, APIs are intended for machine interaction. How can a generic Python or JavaScript client discover these APIs? Such clients will request the JSON representation of a resource, because JSON is intended for machine consumption and HTML is intended for humans. Representations are equivalent: if you request the JSON representation of a /users resource, you get a JSON list; if you request the HTML representation of a /users resource, you get an HTML list, but the content should be the same. Should you return UI controls for modifying the list as part of the HTML representation? If you do so, your JSON and HTML representations are different, and your Python and JavaScript clients still cannot discover what list modification operations are possible; only a human can, by looking at the HTML representation. This is not REST, if I understand the paper correctly.
> You describe how web pages work, web pages are intended for human interactions
Exactly, yes! The first few sentences from Wikipedia...
"REST (Representational State Transfer) is a software architectural style that was created to describe the design and guide the development of the architecture for the World Wide Web. REST defines a set of constraints for how the architecture of a distributed, Internet-scale hypermedia system, such as the Web, should behave." -- [1]
If you are desiging a system for the Web, use REST. If you are designing a system where a native app (that you create) talks to a set of services on a back end (that you also create), then why conform to REST principles?
Where this kind of API design is useful is when there is a user with an agent (e.g. a browser or similar) who can navigate the API and interact with the different responses based on their media types and what the links are called.
Most web APIs are not designed with this use-case in mind. They're designed to facilitate web apps that are much more specific in what they're trying to present to the user. This is both deliberate and valuable; app creators need to be able to control the presentation to achieve their apps' goals.
REST API design is for use-cases where the users should have control over how they interact with the resources provided by the API. Some examples that should be using REST API design:
- Government portals for publicly accessible information, like legal codes, weather reports, or property records
- Government portals for filing forms and other interactions
- Open data initiatives like Wikipedia and OpenStreetmap
Considering these examples, it makes sense that policing of what "REST" means comes from the more academically-minded, while the detractors of the definition are typically app developers trying to create a very specific user experience. The solution is easy: just don't call it REST unless it actually is.
> Where this kind of API design is useful is when there is a user with an agent (e.g. a browser or similar) who can navigate the API and interact with the different responses based on their media types and what the links are called.
It's also useful when you're programming a client that is not a web page!
You GET a thing, you dereference fields/paths in the returned representation, you construct a new URI, you perform an operation on it, and so on.
Consider a directory / database application. You can define a RESTful, HATEOAS API for it, write a single-page web application for it -or a non-SPA if you prefer-, and also write libraries and command-line interfaces to the same thing, all using roughly similar code that does what I described above. That's pretty neat. In the case of a non-SPA you can use pure HTML and not think that you're "dereferencing fields of the returned representation", but the user and the user-agent are still doing just that.
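A sketch of that loop for the directory example; the representation format and field names are made up, the point being that the client only follows URIs the server hands back rather than assembling them from its own conventions:

    // Start at the entry point and follow links in the returned representation.
    const root = await (await fetch("/api")).json();
    const people = await (await fetch(root._links.people.href)).json();
    const first = people._embedded.people[0];
    // Operate on a discovered resource.
    await fetch(first._links.self.href, {
      method: "PATCH",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ displayName: "New Name" }),
    });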
> Where this kind of API design is useful is when there is a user with an agent (e.g. a browser or similar) who can navigate the API and interact with the different responses based on their media types and what the links are called.
The funny thing is, that perfectly describes HTML. Here’s a document with links to other documents, which the user can navigate based on what the links are called. Because if it’s designed for users, it’s called a User Interface. If it’s designed for application programming, it’s called an Application Programming Interface. This is why HATEOAS is kinda silly to me. It pretends APIs should be used by Users directly. But we already have that, it’s called a UI.
The point is that your Web UI can easily be made to be a REST HATEOAS conforming API at the same time. No separate codepaths, no duplicate efforts, just maybe some JSON templates in addition to HTML templates.
You're right, pure REST is very academic. I've worked with open/big data, and there's always a struggle to get realistic performance and app architecture design; for anything non-obvious, I'd say there are shades of REST rather than a simple boolean yes/no. Even academics have to produce a working solution or "application", i.e. that which can be actually applied, at some point.
> Where this kind of API design is useful is when there is a user with an agent (e.g. a browser or similar) who can navigate the API and interact with the different responses based on their media types and what the links are called.
> Most web APIs are not designed with this use-case in mind.
I wonder if this will change as APIs might support AI consumption?
Discoverability is very important to an AI, much more so than to a web app developer.
MCP shows us how powerful tool discoverability can be. HATEOAS could bring similar benefits to bare API consumption.
This is a very good and detailed review of the concepts of REST, kudos to the author.
One additional point I would add is that making use of the REST-ful/HATEOAS pattern (in the original sense) requires a conforming client to make the juice worth the squeeze:
I'll never understand why the HATEOAS meme hasn't died.
Is anyone using it? Anywhere?
What kind of magical client can make use of an auto-discoverable API? And why does this client have no prior knowledge of the server they are talking to?
I used it on an enterprise-grade video surveillance system. It was great - basically solved the versioning and permissions problem at the API level. We leveraged other RFCs where applicable.
The biggest issue was that people wanted to subvert the model to "make things easier" in ways that actually made things harder. The second biggest issue is that JSON is not, out of the box, a hypertext format. This makes application/json not suitable for HATEOAS, and forcing some hypertext semantics onto it always felt like a kludge.
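For anyone who hasn't seen it, HAL is one of those attempts to graft hypertext onto JSON; a response ends up looking roughly like this (field and link names invented for the example):

    {
      "id": 42,
      "name": "Lobby camera",
      "status": "recording",
      "_links": {
        "self":    { "href": "/cameras/42" },
        "streams": { "href": "/cameras/42/streams" },
        "disable": { "href": "/cameras/42/disable" }
      }
    }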
> I'll never understand why the HATEOAS meme hasn't died.
> Is anyone using it? Anywhere?
As I recall ACME (the protocol used by Let’s Encrypt) is a HATEOAS protocol. If so (a cursory glance at RFC 8555 indicates that it may be), then it’s used by almost everyone who serves HTTPS.
Arguably HTTP, when used as it was intended, is itself a HATEOAS protocol.
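Concretely, an ACME client starts from a single directory URL and discovers everything else from the response, which looks something like this (URLs invented, key names as I recall them from RFC 8555):

    {
      "newNonce":   "https://acme.example.org/acme/new-nonce",
      "newAccount": "https://acme.example.org/acme/new-account",
      "newOrder":   "https://acme.example.org/acme/new-order",
      "revokeCert": "https://acme.example.org/acme/revoke-cert",
      "keyChange":  "https://acme.example.org/acme/key-change"
    }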
> What kind of magical client can make use of an auto-discoverable API? And why does this client have no prior knowledge of the server they are talking to?
LLMs seem to do well at this.
And remember that ‘auto-discovery’ means different things. A link typed next enables auto-discovery of the next resource (whatever that means); it assumes some pre-existing knowledge in the client of what ‘next’ actually means.
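For example, a plain RFC 8288 web link like the one below tells the client where the next page lives, but the client still has to know, out of band, what "next" means:

    Link: <https://api.example.com/items?page=3>; rel="next"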
Yes. You used such an api to post your reply. And I am using it as well, via the affordances presented by the mobile safari hypermedia client program. Quite an amazing system!
This is true, but isn’t this quite far away from the normal understanding of API, which is an interface consumed by a program? Isn’t this the P in Application Programming Interface? If it’s a human at the helm, it’s called a User Interface.
I agree that's a common understanding of things, but I don't think that it's 100% accurate. I think that a web browser is a client program, consuming a RESTful application programming interface in the manner that RESTful APIs are designed to be consumed, and presenting the result to a human to choose actions.
I think if you restrict the notion of client to "automated programs that do not have a human driving them" then REST becomes much less useful:
If you allow the notion of client to include "web browser driven by humans", then what is it about Fielding's dissertation that is considered so important and original in the first place? Sure it's formal and creates some new and precise terminology, but the concept of browsing was already well established when he wrote it.
I think OData isn't used, and that's a proper standard and a lower bar to clear. HATEOAS isn't even benefiting from a popular standard, which is both a cause and a result.
You realize that anyone using a browser to view HTML is using HATEOAS, right? You could probably argue whether SPAs fit the bill, but for sure any server rendered or static site is using HATEOAS.
The point isn't that clients must have absolutely no prior knowledge of the server, it's that clients shouldn't have to have complete knowledge of the server.
We've grown used to that approach because most of us have been building tightly coupled apps where the frontend knows exactly how the backend works, but that isn't the only way to build a website or web app.
UI designers want control over the look of the page in detail. E.g. some actions that can be taken on a resource are a large button and some are hidden in a menu or not rendered in the UI at all.
A client application that doesn't have any knowledge about what actions are going to be possible with a resource, instead rendering them dynamically based on the API responses, is going to make them all look the same.
So RESTful APIs as described in the article aren't useful for the most common use case of Web APIs, implementing frontend UIs.
1. UX designers operate at every stage of the software development lifecycle, from product discovery to post-launch support (validation of UX hypotheses); they do not exercise control - they work within constraints as part of the team. The location of a specific action in the UI, and the interaction that triggers it, are orthogonal to the availability of that action. Availability is defined by the state. If the state restricts certain actions, the UX must reflect that.
2. From an architectural point of view, once you encapsulate the state-checking behavior, the following will work the same way: "if (state === something)" and "if (resource.links["action"] !== null)". The latter approach is much better, because in most cases any state-changing action will require validation on the server, so you can implement the logic only once (on the server).
I have been developing HATEOAS applications for quite a while and maintain HAL4J library: there are some complexities in this approach, but UI design is certainly not THE problem.
My experience with "RESTful APIs" rarely has much to do with the UI. Why even have any API if all you care about is the UI? Why not go back to server driven crap like DWR then?
My experience is that SPAs have been the way to make frontends, for the last eight years or so. May be coming to an end now. Anyway, contact with the backend all went through an API.
During that same time, the business also wanted to use the fact that our applications had an API as a selling point - our customers are pretty technical and some of them write scripts against our backends.
Backenders read about API design, they get the idea they should be REST like (as in, JSON, with different HTTP methods for CRUD operations).
And of course we weren't going to have two separate APIs, that we ran our frontends on our API was another selling point (eat your own dog food, proof that the API can do everything our frontend can, etc).
So: the UI runs on a REST API.
I'm hoping that we'll go back to Django templates with a sprinkle of HTMX here and there in the future, but who knows. That will probably be a separate backend that runs in front of this API then...
> our applications had an API as a selling point - our customers are pretty technical and some of them write scripts against our backends
It is a selling point. A massive one if you're writing enterprise software. It's not merely about "being technical", but mandatory for recurring automated jobs and integration with their other software.
What's often missed when this topic comes up is the question of who the back end API is intended for.
REST and HATEOAS are beneficial when the consumer is meant to be a third party that doesn't directly own the back end. The usual example is a plain old HTML page, the end user of that API is the person using a browser. MCP is a more recent example, that protocol is only needed because they want agents talking to APIs they don't own and need a solution for discoverability and interpretability in a sea of JSON RPC APIs.
When the API consumer is a frontend app written specifically for that backend, the benefits of REST often just don't outweigh the costs. It takes effort to design a more generic, better documented and specified API. While I don't like using tools like tRPC in production, it's hugely useful for me when prototyping for much the same reason: I'm building both ends of the app and it's faster to ignore separation of concerns.
> The core problem it addresses is client-server coupling. There are probably countless projects where a small change in a server’s URI structure required a coordinated (and often painful) deployment of multiple client applications. A HATEOAS-driven approach directly solves this by decoupling the client from the server’s namespace. This addresses the quality of evolvability.
Not sure I agree with this. All it does is move the coupling problem around. A client that doesn't understand where to find a URL in a document (or even which URLs are available for what purpose within that document) is just as bad as a client that assumes the wrong URL structure.
At some point, the client of an API needs to understand the semantics of what that API provides and how/where it provides those semantics. Moving it from a URL hierarchy to a document structure doesn't provide a huge amount of added value. (Particularly in a world where essentially all of the server APIs are defined in terms of URL patterns routing to handlers. This is explicit hardcoded encouragement to think in a style in opposition to the HATEOAS philosophy.)
I also tend to think that the widespread migration of data formats from XML to JSON has worked against "Pure" REST/HATEOAS. XML had/has the benefit of a far richer type structure when compared to JSON. While JSON is easier to parse on a superficial level, doing things like identifying times, hyperlinks, etc. is more difficult due to the general lack of standardization of these things. JSON doesn't provide enough native and widespread representations of basic concepts needed for hypertext.
(This is one of those times I'd love some counterexamples. Aside from the original "present hypertext documents to humans via a browser" use case, I'd love to read more about examples of successful programmatic API's written in a purely HATEOAS style.)
Strict HATEOAS is bad for an API as it leads to massively bloated payloads. We _should_ encode information in the API documentation or a meta endpoint so that we don't have to send tons of extra information with every request.
Similarly, I call Java programs "Object Oriented programs" despite Alan Kay's protests that it isn't at all what Object Orientation was described as in early papers.
The sad truth is that it's the less widely used concept that has to shift terminology, if it comes into wide use for something else or a "diluted" subset of the original idea(s). Maybe the true-OO-people have a term for Kay-like OO these days?
I think the idea of saving "REST" to mean the true Fielding style including HATEOAS and everything is probably as futile as trying to reserve OO to not include C++ or Java.
> By using HATEOAS and referencing schema definitions (such as XSD or JSON Schema) from within your resource representations, you can enable clients to understand the structure of the data and navigate the API dynamically.
I actually think this is where the problem lies in the real world. One of the most useful features of a JSON schema is the "additionalProperties" keyword. If applied to the "_links" subschema we're back to the original problem of "out of band" information defining the API.
I just don't see what the big deal is if we have more robust ways of serving the docs somewhere else outside of the JSON response. Would it be equivalent if the only URL in "_links" that I ever populate is a link to the JSONified Swagger docs for the "self" path for the client to consume? What's the point in even having "_links" then? How insanely bloated would that client have to be to consume something that complicated? The templates in Swagger are way more information dense and dynamic than just telling you what path and method to use. There's often a lot more for the client to handle than just CRUD links and there exists no JSON schema that could be consistent across all parts of the API.
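To illustrate: the moment you pin "_links" down in a schema like the sketch below (made-up relations, "additionalProperties": false), the set of link relations is itself an out-of-band, fixed contract again:

    {
      "type": "object",
      "properties": {
        "_links": {
          "type": "object",
          "properties": {
            "self":   { "$ref": "#/$defs/link" },
            "orders": { "$ref": "#/$defs/link" }
          },
          "additionalProperties": false
        }
      },
      "$defs": {
        "link": {
          "type": "object",
          "properties": { "href": { "type": "string" } },
          "required": ["href"]
        }
      }
    }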
> If you are building a public API for external developers you don’t control, invest in HATEOAS. If you are building a backend for a single frontend controlled by your own team, a simpler RPC-style API may be the more practical choice.
My conclusion is exactly the opposite. In-house developers can be expected (read: cajoled) to do things the "right" way, like follow links at runtime. You can run tests against your client and server. Internally, flexible REST makes independent evolution of the front end and back end easy.
Externally, you must cater to somebody who hard-coded a URL into their curl command that runs on cron and whose code can't tolerate the slightest deviation from exactly what existed when the script was written. In that case, an RPC-like call is great and easy to document. Increment from `/v1/` to `/v2/`, write a BC layer between them and move on.
I think we should focus less on API schemas and more on just copying how browsers work.
Some examples:
It should be far more common for http clients to have well supported and heavily used Cookie jar implementations.
We should lean on Accept headers much more, especially with multiple mime-types and/or wildcards.
Http clients should have caching plugins to automatically respect caching headers.
There are many more examples. I've seen so much of HTTP reimplemented on top of itself over the years, often with poor results. Let's stop doing that. And when all our clients are doing those parts right, I suspect our APIs will get cleaner too.
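As a small sketch of what "just use HTTP" looks like on the client side (TypeScript, toy in-memory cache; a real client would also honour Cache-Control, and the Accept values here are only an example):

    const etagCache = new Map<string, { etag: string; body: unknown }>();

    async function getJson(url: string): Promise<unknown> {
      const cached = etagCache.get(url);
      const res = await fetch(url, {
        headers: {
          Accept: "application/json;q=1.0, */*;q=0.1",            // negotiate, don't assume
          ...(cached ? { "If-None-Match": cached.etag } : {}),     // conditional request
        },
      });
      if (res.status === 304 && cached) return cached.body;        // server said: reuse what you have
      const body = await res.json();
      const etag = res.headers.get("ETag");
      if (etag) etagCache.set(url, { etag, body });
      return body;
    }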
> REST isn’t about exposing your internal object model over HTTP — it’s about building distributed systems that behave like the web.
I think I finally understand what Fielding is getting at. His REST principles boil down to allowing dynamic discovery of verbs for entities that are typed only by their media types. There's a level of indirection to allow for dynamic discovery. And there's a level of abstraction in saying entities are generic media objects. These two conceptual leaps allow the REST API to be used in a more dynamic, generic way - with benefits at the API level that the other levels of the web stack have ("client decoupling, evolvability, dynamic interaction").
In the simple (albeit niche) case, a UI could populate a list of buttons based on the URIs/verbs that the REST API returns. So the UI would be totally dynamic based on the backend - and so, work pretty generically across REST APIs.
But for a client, UI or otherwise, to make use of a dynamic set of URIs/verbs would require it to either look for a specific keyword (hard coding the intents it can satisfy) or be able to semantically understand the API (which is hard, requires a human).
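A sketch of that simple button-populating case (relation names invented); note it doesn't remove the semantic problem just described, because the client still only "understands" a relation if someone hard-coded that knowledge:

    type Links = Record<string, { href: string; title?: string }>;

    // Render one button per action the server currently advertises for the resource.
    function renderActions(links: Links): HTMLElement {
      const container = document.createElement("div");
      for (const [rel, link] of Object.entries(links)) {
        if (rel === "self") continue;                              // not an action
        const button = document.createElement("button");
        button.textContent = link.title ?? rel;                    // label comes from the API
        button.onclick = () => { void fetch(link.href, { method: "POST" }); };
        container.appendChild(button);
      }
      return container;
    }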
Oddly, all this stuff is full circle with the AI stuff. The MCP protocol is designed to give AIs text-based descriptions of APIs, so they can reason about how to use them.
The simplest case, and the most common, is that of a browser rendering the HTML response from a website request. The HTML contains the URL links to other APIs that the user can click on. Think of navigating any website.
Academically it might be correct, but shipping real features will in most cases be more important than hitting some text book definition of correctness.
Sure, you’re right: pragmatics, in practice, are more important than theory.
But you’re assuming that there is a real contradiction between shipping features and RESTful design. I believe that RESTful design can in many cases actually increase feature delivery speed through its decoupling of clients and servers and more deeply due to its operational model.
Notice that both of those are plural words. When you have many clients and many servers implementing a protocol a formal agreement of protocol is required. REST (which I will not claim to understand well) makes a formal agreement much easier, but you still need some agreement. However when there is just one server and just one client (I'll count all web browsers as one since the browser protocols are well defined enough) you can go faster by just implementing both sides and testing they work for a long time.
It felt easier going through the post after reading these bits near the end:
> The widespread adoption of a simpler, RPC-like style over HTTP can probably be attributed to practical trade-offs in tooling and developer experience
> Therefore, simply be pragmatic. I personally like to avoid the term “RESTful” for the reasons given in the article and instead say “HTTP” based APIs.
Yeah but why cause needless confusion? The colloquial definition of "RESTful" is better understood as just something you defined using the OpenAPI spec. All other variants of "HTTP API" are likely hot garbage nobody wants anyway.
Ok I may have been wrong. I checked the thesis and couldn't see this aspect mentioned. Most of the thesis seems like stuff I agree with. Damn. I'm fighting an impression of REST I had.
And have a dictionary in my server mapping method names to the actual functions.
All functions take one param (a dictionary with the data), validate it, use it and return another single dictionary along with appropriate status code.
You can add versions and such but at that point you just use JSON-RPC.
This kind of setup can be much better than REST APIs for certain usecases
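For illustration, a bare-bones version of that setup might look like this (TypeScript; method names, route, and payload shape all invented):

    type Params = Record<string, unknown>;
    type Result = { status: number; body: Record<string, unknown> };

    // Every function takes one params dictionary and returns a body plus status code.
    const methods: Record<string, (params: Params) => Result> = {
      getBookings: (p) => ({ status: 200, body: { bookings: [], userId: p.userId } }),
      addBooking:  (p) => ({ status: 201, body: { created: true, date: p.date } }),
    };

    // Everything arrives as e.g. POST /api with { "method": "addBooking", "params": { "date": "2025-07-01" } }
    function dispatch(method: string, params: Params): Result {
      const fn = methods[method];
      if (!fn) return { status: 404, body: { error: `unknown method ${method}` } };
      return fn(params);                                           // each handler validates its own params
    }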
This makes automating things like retrying network calls hell. You can safely assume a GET will be idempotent, and safely retry on failure with delay. A POST might, or might not also empty your bank account.
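For instance, a generic retry wrapper can only key off the method; once everything is a POST to one endpoint, this kind of middleware has nothing safe to go on (sketch, TypeScript):

    const IDEMPOTENT = new Set(["GET", "HEAD", "OPTIONS", "PUT", "DELETE"]);

    async function fetchWithRetry(url: string, init: RequestInit = {}, attempts = 3): Promise<Response> {
      const method = (init.method ?? "GET").toUpperCase();
      for (let attempt = 0; ; attempt++) {
        try {
          return await fetch(url, init);
        } catch (err) {
          if (!IDEMPOTENT.has(method) || attempt >= attempts - 1) throw err;  // never blindly replay a POST
          await new Promise((r) => setTimeout(r, 2 ** attempt * 100));        // back off, then retry
        }
      }
    }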
When you are retrying an API, you are calling the API, you know whether it's a getBookings() or an addBooking() API. So write the client code based on that.
Instead of the API developer making sure GET /bookings is idempotent, he is going to be making sure getBookings() is idempotent. Really, what is the difference?
As for the benefits, you get a uniform interface, no quirks with URL encoding, no nonsense with browsers pre-loading, etc. It's basically full control with zero surprises.
The only drawback is with cookies. Samesite: Lax depends on you using GET for idempotent actions and POST for unsafe actions. However, I am advocating the use of this only for "fetch() + createElement() = UI" kind of app, where you will use tokens for everything anyways.
I struggle to believe that any API in history has been improved by the developer more faithfully following REST’s strictures. The closest we’ve come to actually decoupled, self describing APIs is MCP, and that required inventing actual AIs to understand them.
The most successful API in history – the World-Wide Web – uses REST principles. That’s where REST came from. It was somebody who was involved in the creation of the early web who looked at it and wrote down a description of what properties of the web made it so successful.
REST on the WWW only works because humans read and interpret the results. Arguably, that’s not an API (Application Programming Interface) but a UI (User Interface).
I have yet to see an API that was improved by following strict REST principles. If REST describes the web (a UI, not an API), and it’s the only useful example of REST, is REST really meaningful?
> REST on the WWW only works because humans read and interpret the results.
This is very obviously not true. Take search engine crawlers, for example. There isn’t a human operator of GoogleBot deciding which links to follow on a case-by-case basis.
> I have yet to see an API that was improved by following strict REST principles.
I see them all the time. It’s ridiculous how many instances of custom logic in APIs can be replaced with “just follow the link we give you”.
It’s not. It’s pretty much the opposite. This is what he’s talking about:
> our clever thinker invents a new, higher, broader abstraction
> When you go too far up, abstraction-wise, you run out of oxygen.
> They tend to work for really big companies that can afford to have lots of unproductive people with really advanced degrees that don’t contribute to the bottom line.
REST is the opposite. REST is “We did this. It worked great! This is why.” And web developers around the world are using this every single day in practical projects without even realising it. The average web developer uses REST, including HATEOAS, all the time, and it works great for them. It’s just when they set out to do it on purpose, they often get distracted by some weird fake definition of REST that is completely different.
That's absolutely not what the essay is about. It's about the misassignment of credit for the success of a technology by people who think the minutiae of the clever implementation was important.
HATEOAS + Document Type Description which includes (ideally internationalized) natural language description in addition to machine readable is what MCP should have been.
I am wondering if anyone can resolve this misunderstanding of REST for me…
If the backend provides a _links map which contains “orders” for example in the list - doesn’t the front end need to still understand what that key represents? Is there another piece I am missing that would actually decouple the front end from the backend?
I see a lot of people who read Fielding's thesis and found it interesting.
I did not find it interesting. I found it excessively theoretical and proscriptive. It led to a lot of people arguing pedantically over things that just weren't important.
I just want to exchange JSON-structured messages over HTTP, using the least amount of HTTP required to implement request and response. I'm also OK with protocol buffers over grpc, or really any decent serialization technology over any well-implemented transport. Sometimes it's CRUD, sometimes it's inference, sometimes it's direct actions on a server.
Hmm. I should write a thesis. JSMOHTTP (pronounced "jizmo-huttup")
The thing to internalize about "true" REST is that HN (and the rest of the web) is really a RESTful web service. You visit the homepage, a hypermedia format is delivered to a generic client (your browser), and its resources (pages, sections, profiles, etc) can all be navigated to by following links.
Links update when you log in or out, indicating the state of your session. Vote up/down links appear or disappear based on one's profile. This is HATEOAS.
Link relations can be used to alter how the client (browser) interprets the link—a rel="stylesheet" causes very different behavior from rel="canonical".
JavaScript even provides "code on-demand", as it's called in Fielding's paper.
From that perspective, REST is incredible. REST is extremely flexible, scalable, evolvable, etc. It is the pattern that powers the web.
Now, it's an entirely different story when it comes to what many people call REST APIs, which are often nothing like HN. They cannot be consumed by a generic client. They are not interlinked. They don't ship code on-demand.
Is "REST" to blame? No. Few people have time or reason to build a client as powerful as the browser to consume their SaaS product's API.
But even building a truly generic client isn't the hardest thing about building RESTful APIs—the hardest thing is that the web depends entirely on having a human-in-the-loop and your standard API integration's purpose is to eliminate having a human in the loop.
For example, a human reads the link text saying "Log in" or "Reset password" and interprets that text to understand the state of the system (they do not have an authenticated session). And a human can reinterpret a redesigned webpage with links in a new location, but trivial clients can't reinterpret a refactored JSON object (or XML for that matter).
The folly is in thinking that there's some design pattern out there that's better than REST without understanding that the actual problem to be solved by that elusive, perfect paradigm is how you'll be able to refactor your API when your API's clients will likely be bodged-together JS programs whose authors dug through JSON for the URL they needed and then hard-coded it in a curl command instead of conscientiously and meticulously reading documentation, semantically looking up the URL at runtime, following redirects, and handling failures gracefully.
Because that is the easiest to implement, the easiest to write, the easiest to manually test and tinker with (by writing it directly into the url bar), the easiest to automate (curl .../draw_point?x=7&y=20). It also makes it possible to put it into a link and into a bookmark.
This is great for API's that only have a few actions that can be taken on a given resource.
REST-API's then are especially suited for acting as a gateway to a database, to easily CRUD and fetch lists of information.
The best API's I've seen mix and match both patterns. RESTful API endpoints for data, "function call" endpoints for often-used actions like voting, bulk actions and other things that the client needs to be able to do, but you want the API to be in control of how it is applied.
> REST-API's then are especially suited for acting as a gateway to a database, to easily CRUD and fetch lists of information.
I don't disagree, but I've found (delivering LoB applications) that they are not homogenous: The way REST is implemented, right now, makes it not especially suitable for acting as a gateway to a database.
When you're free of constraints (i.e. greenfield application) you can do better (ITO reliability, product feature velocity, etc) by not using a tree exchange form (XML or JSON).
Because then it's not just a gateway to a database, it's an ill-specified, crippled, slow, unreliable and ad-hoc ORM: it tries to map trees (objects) to tables (relations) and vice versa, with predictably poor results.
Bots, browsers that preload URLs, caching (both browser and backend and everything in between), the whole infrastructure of the Web that assumes GET never mutates and is always safe to repeat or serve from cache.
Using GET also circumvents browser security stuff like CORS, because again the browser assumes GET never mutates.
Then that does not conform to the HTTP spec. GET endpoints must be safe, idempotent, cacheable. Opening up a site to cases where web crawlers/scrapers may wreak havoc.
Indeed, user-embedded pictures can fire GET requests while they cannot make POST requests. But this is not a problem if you don't allow users to embed pictures, or you authenticate the GET request somehow. Anyway GET requests are just fine.
CORS prevents reading from a resource, not from sending the request.
If you find that surprising, think about that the JS could also have for example created a form with the vote page as the target and clicked on the submit button. All completely unrelated to CORS.
CORS does nothing of the sort. It does the exact opposite – it’s explicitly designed to allow reading a resource, where the SOP would ordinarily deny it.
That any bot crawling your website is going to click on your links and inadvertently mutate data.
Reading your original comment I was thinking "Sure, as long as you have a good reason of doing it this way anything goes" but I realized that you prefer to do it this way because you don't know any better.
If you rely on the HTTP method to authenticate users to mutate data, you are completely lost. Bots and humans can send any method they like. It's just a string in the request.
Use cookies and auth params like HN does for the upvote link. Not HTTP methods.
> If you rely on the HTTP method to authenticate users to mutate data, you are completely lost
I don't know where you are getting that from but it's the first time I've heard of it.
If your link is indexed by a bot, then that bot will "click" on your links using the HTTP GET method—that is a convention and, yes, a malicious bot would try to send POST and DELETE requests. For the latter, this is why you authenticate users but this is unrelated to the HTTP verb.
> Use cookies and auth params like HN does for the upvote link
If it uses GET, this is not standard and I would strongly advise against it except if it's your pet project and you're the only maintainer.
Follow conventions and make everyone's lives easier, ffs.
Because HTTP is a lot more sophisticated than anyone cares to acknowledge. The entire premise of "REST", as it is academically defined, is an oversimplification of how any non-trivial API would actually work. The only good part is the notion of "state transfer".
Not a REST API, but I've found it particularly useful to include query parameters in a POST endpoint that implements a generic webhook ingester.
The query parameters allow us to specify our own metadata when configuring the webhook events in the remote application, without having to modify our own code to add new routes.
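In other words, something along these lines (path and parameter names invented), where each configured webhook gets its own query string but they all hit one route:

    POST /webhooks/ingest?source=billing&tenant=acme&event=invoice.paid
    Content-Type: application/json

    { ...provider-specific payload... }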
I used to do that but I've been fully converted to the REST and CRUD gang. Once you establish the initial routes and objects it's really easy to mount everything else on it and move fast with changes. Also using tools like httpie it's super easy to test anything right in your terminal.
It is not sufficient to crawl the API. The client also needs to know how to display the forms, which collect the data for the links presented by the API. If you want to crawl the API you also have to crawl the whole client GUI.
> The core problem it addresses is client-server coupling. There are probably countless projects where a small change in a server’s URI structure required a coordinated (and often painful) deployment of multiple client applications. A HATEOAS-driven approach directly solves this by decoupling the client from the server’s namespace. This addresses the quality of evolvability.
Eh, "a small change in a server’s URI structure" breaks links, so already you're in trouble.
But sure, embedding [local-parts of] URIs in the contents (or headers) exchanged is indeed very useful.
In my experience REST is just a code word for a distributed glob of function calls which communicate via JSON. It's a development and maintenance nightmare.
I tried to follow the approach with hypermedia and discoverable resources/actions in my hobby projects. But I "failed" at the point where this would mean additional HTTP calls from a client to "discover" a resource and its actions. Given the latency of an HTTP call, relatively speaking, this was not convincing for me.
This doesn’t provide any good arguments for why Roy Fielding’s conception should be taken as the gospel of how things should be done. At best, it points out that what we call REST now isn’t what Roy Fielding wanted.
Furthermore, it doesn’t explain how Roy Fielding’s conception would make sense for non-interactive clients. The fact that it doesn’t make sense is a large part of why virtually nobody is following it.
Take this quote: “A REST API should be entered with no prior knowledge beyond the initial URI (bookmark) and set of standardized media types that are appropriate for the intended audience (i.e., expected to be understood by any client that might use the API). From that point on, all application state transitions must be driven by client selection of server-provided choices that are present in the received representations or implied by the user’s manipulation of those representations.”
If the client application only understands media types and isn’t supposed to know anything about the interrelationships of the data or possible actions on it, and there is no user that could select from the choices provided by the server, then it’s not clear how the client can do anything purposeful.
Surely, an automated client, or rather its developer, needs a model (a schema) of what is possible to do with the API. Roy Fielding doesn't address that aspect at all. At best, his REST API would provide a way for the client to map its model to the actual server calls to make, based on configuration information provided by the server as "hypertext". But the point of such an indirection is unclear, because the configuration information itself would have to follow a schema known and understood by the client, so again wouldn't be RESTful in Roy Fielding's sense.
People are trying to fill in the blanks of what Roy Fielding might have meant, but in the end it just doesn’t make a lot of sense for what REST APIs are used in practice.
As I replied to the sibling comment, you're misunderstanding rest and hypermedia. The "schema" is html and the browser is the automated client that is exceptionally good at rendering whatever html the backend has decided to send.
Browsers are interactive clients, the opposite of automated clients. What you are saying supports the conclusion that Roy Fielding’s conception is unsuitable for non-interactive clients. However, the vast majority of real-world REST APIs are targeting automation, hence it doesn’t make sense for them to be “RESTful”.
Fielding was absolutely not saying that his REST was the One True approach. But it DOES mean something
The issue at hand here is that he coined REST and the whole world is using that term for something completely unrelated (eg an http json api).
You could start writing in binary here if you thought that that would be a more appropriate way to communicate, but it wouldn't be English (or any humanly recognizable language) no matter how hard you try to say it is.
If you want to discuss whether hypermedia/rest/hateaos is a better approach for web apps than http json APIs, I'd encourage you to read htmx.org/essays and engage with that community who find it to be an enormous liberation.
It may mean something, but Roy Fielding went out of his way, over many years, to not talk about the actual use cases he had in mind. It would have been easy for him to clarify that he was only talking about interactive browser applications. But he didn't. And the people who came up with HATEOAS didn't think he was. Nor did any of the blog articles that are espousing the alleged virtues of RESTfulness. So it's not surprising that the term "REST" was appropriated for something else. In any case, it's much too late to change that, it's water under the bridge.
I’m only mildly interested in discussing hypothetical hypermedia browsers, for which Roy Fielding’s conception might be well and good (but also fairly incomplete, IMO). What developers care about is how to design HTTP-based APIs for programmatic use.
How are web browsers hypothetical? We're using one with rest/hateoas/hypermedia right now...
You don't seem to have even the slightest idea of what you're talking about here. Again, I suggest checking out the htmx essays and their hypermedia.systems book
In a non-interactive case, what is supposed to be reading a response and deciding which links to do some something with or what to do with them?
Let's say you've got a non-interactive program to get daily market close prices. A response returns a link labelled "foobarxyz", which is completely different to what the API returned yesterday and the day before.
How is your program supposed to magically know what to do? (without your input/interaction)
Why does "your program" need to know anything? The whole point of hypermedia is that there isn't any "program" other than the web browser that agnostically renders whatever html it receives. If the (backend) "program" development team decides that a foobarxyz link should be returned, then that's what is correct.
I suspect that your misunderstanding is because you're still looking at REST as a crud api, rather than what it actually is. That was the point of this article, though it was too technical.
ElasticSearch and OpenSearch are certainly egregiously guilty of this. Their API is an absolute nightmare to work with if you don't have a supported native client. Why such a popular project doesn't have an easy-to-use OpenAPI spec document in this day and age is beyond me.
This post follows the general, highly academic/dogmatic, tone that I’ve seen when certain folks talk about REST. Most of the article talks about what _not_ to do, and has very little details on how to actually do it.
The idea of having client/server decoupled via a REST api that is itself discoverable, and that allows independent deployment, seems like a great advantage.
However, the article lacks even the simplest example of an api done the “wrong” vs the “right” way. Say I have a TODO api, how do I make it so that it uses HATEOAS (also who’s coming up with these acronyms…smh)?
Overall the article comes across more as academic pontification on “what not to do” instead of actionable advice.
Agreed. I wish there was some examples to better understand what the author means.
Like, in a web app, do I have any prior knowledge about the "_links" actions? Do I know that the server is going to return the actions "self" and "activate"? Is the idea to hide the routes from the user until the API call, but they should know that the API could return actions like "self", "activate" or "deactivate"? How do you communicate that an action requires a specific body? For example, the activate call is done with a POST and expects a JSON body with a date inside. How do you tell that to the user?
> However, the article lacks even the simplest example of an api done the “wrong” vs the “right” way.
Unless the design and requirements are unusually complex or extreme, all styles of API and front end work well enough. Any example would have to be lengthy, to provide context for the advantages of "true" ReST architecture, and contrived.
If you want to produce better APIs, try consuming them. A lot of places have this clean split between backend and frontend teams. They barely talk to each other sometimes. And a pattern I've seen over and over again is that some product manager decides feature X is needed. The backend team goes to work and delivers some API for feature X and then the frontend team has to consume the API. These APIs aren't necessarily very good if the backend people don't understand how the frontend uses them.
The symptom is usually if a seemingly simple API change on the backend leads to a lot of unexpected client side complexity to consume the API. That's because the API change breaks with some frontend expectation/assumption that frontend developers then need to work around. A simple example: including a userId with a response. To a frontend developer, the userId is not useful. They'll need a user name, a profile photo, etc. Now you get into all sorts of possible "why don't you just .." type solutions. I've done them all. They all have issues and it leads to a lot of complexity on either the server or the client.
You can bloat your API and calculate all this server side. Now all your API calls that include a userId gain some extra fields. Which means extra lookups and joins. So they get a bit slower as well. But the frontend can pretend that the server always tells it everything it needs. The other solution is to look things up from the frontend. This adds overhead. But if the frontend is clever about it, a lot of that information is very cachable. And of course graphql emerged to give frontend developers the ability to just ask for what they need from some microservices.
All these approaches have pros and cons. Most of the complexity is about what comes back, not about how it comes back or how it is parsed. But it helps if the backend developers are at least aware of what is needed on the frontend. A good way is to just do some front end development for a while. It will make you a better backend developer. Or do both. And by that I don't mean do javascript everywhere and style yourself as a full stack developer because you whack all nails with the same hammer. I mean doing things properly and experiencing the mismatches and friction for yourself. And then learn to do it properly.
The above example with the userIds is real. I've had to deal with that on multiple projects. And I've tried all of the approaches. My most recent insight here is that user information changes infrequently and should be looked up separately from other information asynchronously and then cached client side. This keeps APIs simple and forces frontend developers to not treat the server as a magical oracle and instead do sane things client side to minimize API calls and deal with application state. Good state management is key. If you don't have that, dealing with stateless network protocols (like REST) is painful. But state has to live somewhere and having it client side makes you less dependent on how the server side state management works. Which means it's easier to fix things when that needs to change.
We collectively glazed over Roy Fielding's dissertation, didn't really see the point, liked the sound of the word "REST" and used it to describe whatever we wanted to do with http / json. Sorry, Roy, but you can keep HATEOAS - no one is going to take that from you.
At some point, we built REST clients so generic they could handle nearly any use case. Honestly, building truly RESTful APIs has been easy for ages, just render HTML on the server and send it to the browser. That's 100% REST with no fuss.
The irony is, when people try to implement "pure REST" (as in Level 3 of the Richardson Maturity Model with HATEOAS), they often end up reinventing a worse version of a web browser. So it's no surprise that most developers stop at Level 2—using proper HTTP verbs and resource-based URIs. Full REST just isn't worth the complexity in most real-world applications.
RESTful APIs are not RESTful because REST is meh. Our APIs include HATEOAS links and I have never, not once, witnessed their actual use (but they do double the size of response payloads).
It’s interesting that Stripe still even uses form-post on requests.
And rather than just using next-href your clients append next-id to a hardcoded things base URL? That seems like way more work than doing it the REST way.
And not everything in reality maps nicely to hypermedia conventions. The problem with REST is trying to shoehorn a lot of problems in a set of abstractions that were initially created for documents.
I spent years fussing about getting all of my APIs to fit the definition of REST and to do HATEOAS properly. I spent way too much time trying to conform everything as an action on a resource. Now, don't get me wrong. It is quite helpful to try to model things as stateless resources with a limited set of actions on them and to think about idempotency for specific actions in ways I don't think we did properly in the SOAP days (at least I didn't). And in many cases it led to less brittle interfaces which were easier to reason about.
I still like REST and try to use it as much as I can when developing interfaces but I am not beholden to it. There are many cases which are not resources or are not stateless and sure you can find some obtuse way to make them be resources but that at times either leads to bad abstractions that don't convey the vocabulary of the underlying system and thus over time creates this rift in context between the interface and the underlying logic or we expose underlying implementation details as they could be easier to model as resources.
LMAO all companies asking for extensive REST API design/implementation experience in their job requirements, along with the latest hot frontend frameworks.
I should probably fire back by asking if they know what they're asking for, because I'm pretty sure they don't.
Unless you really read and followed the paper, just call it a web api and tell your sales people to do the same. Calling it REST makes you sound like a manager that hasn't done any actual dev in 15 years.
I find it pretty shocking that this was written in 2025 without a mention of the fact that the only clients that are evolvable enough to interface with a REST API can be categorized into these three types:
1. Browsers and "API Browsers" (think something like Swagger)
2. Human and Artificial Intelligence (basically LLMs)
3. Clients downloaded from the server
You'd think that they'd point out these massive caveats. After all, the evolvable client that can handle any API, which is the thing that Roy Fielding has been dreaming about, has finally been invented.
REST and HATEOAS were intentionally developed against the common use case of a static, non-evolving client, such as an Android app that isn't a browser.
Instead you get this snarky blog post telling people that they are doing REST wrong, rather than pointing out that REST is something almost nobody needs (self discoverable APIs intended for evolvable clients).
If you wanted to build e.g. the matrix chat protocol on top of REST, then Roy Fielding would tell you to get lost.
If what I'm saying doesn't make sense to you, then your understanding of REST is insufficient, but let me tell you that understanding REST is a meaningless endeavor, because all you'll gain from that understanding is that you don't need it.
In REST clients are not allowed to have any out of band information about the structure or schema of the API.
You are not allowed to send GET, POST, PUT, DELETE requests to client constructed URLs.
Now that might sound reasonable. After all HATEOAS gives you all the URLs so you don't need to construct them.
Except here is the kicker. This isn't some URL specific thing. It also applies to the attributes and links in the response. You're not allowed to assume that the name "John Doe" is stored under the attribute "name" or that the activate link is stored in "activate". Your client needs to handle any theoretical API that could come from the server. "name" could be "fullName" or "firstNameAndLastName" or "firstAndLastName" or "displayName".
Now you might argue, hey but I'm allowed to parse JSON into a hierarchical object layout [0] and JPEGs into a two dimensional pixel array to be displayed onto a screen, surely it's just a matter of setting a content type or media type? Then I'll be allowed to write code specific to my resource! Except, REST doesn't define or propose any mechanism for application specific media types. You must register your media type globally for all humanity at IANA or go bust.
This might come across as a rant, but it is meant to be informative so I'll tell you what REST and HATEOAS are good for: Building micro browsers relying on human intelligence to act as the magical evolvable client. The way you're supposed to use REST and HATEOAS is by using e.g. the HAL-FORMS media type to give a logical representation of your form. Your evolvable client then translates the HAL-FORM into a html form or an android form or a form inside your MMO which happens to have a registration form built into the game itself, rather than say the launcher.
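For the curious, a HAL-FORMS response looks very roughly like the following; I'm writing this from memory, so treat the exact property names and structure as approximate:

    {
      "_links": { "self": { "href": "/registration" } },
      "_templates": {
        "default": {
          "method": "POST",
          "contentType": "application/json",
          "properties": [
            { "name": "email",    "prompt": "Email address", "required": true },
            { "name": "password", "prompt": "Password",      "required": true }
          ]
        }
      }
    }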
Needless to say, this is completely useless for machine to machine communication, which is where the phrase "REST API" is most commonly (ab)used.
Now for one final comment on this article in particular:
>Why aren’t most APIs truly RESTful?
>The widespread adoption of a simpler, RPC-like style over HTTP can probably be attributed to practical trade-offs in tooling and developer experience: The ecosystem around specifications like OpenAPI grew rapidly, offering immediate benefits that proved irresistible to development teams.
This is actually completely irrelevant and ignores the fact that REST as designed was never meant to be used in the vast majority of situations where RPC over HTTP is used. The use cases for "RPC over HTTP" and REST have incredibly low overlap.
>These tools provided powerful features like automatic client/server code generation, interactive documentation, and request validation out-of-the-box. For a team under pressure to deliver, the clear, static contract provided by an OpenAPI definition was and still is probably often seen as “good enough,”
This feels like a complete reversal and shows that the author of this blog post himself doesn't understand the practical implications of his own blog post. The entire point of HATEOAS is that you cannot have automatic client code generation unless it happens during the runtime of the application. It's literally not allowed to generate code in REST, because it prevents your client from evolving at runtime.
>making the long-term architectural benefits of HATEOAS, like evolvability, seem abstract and less urgent.
Except as I said, unless you have a requirement to have something like a mini browser embedded in a smartphone app, desktop application or video game, what's the point of that evolvability?
>Furthermore, the initial cognitive overhead of building a truly hypermedia-driven client was perceived as a significant barrier.
Significant barrier is probably the understatement of the century. Building the "truly hypermedia-driven client" is equivalent to solving AGI in the machine to machine communication use case. The browser use-case only works because humans already possess general intelligence.
>It felt easier for a developer to read documentation and hardcode a URI template like /users/{id}/orders than to write a client that could dynamically parse a _links section and discover the “orders” URI at runtime.
Now the author is using snark to appeal to emotions by equating the simplest and most irrelevant problem with the hardest problem in a hand-waving manner. "Those silly code monkeys, how dare they not build AGI! It's as simple as parsing _links and discovering the "orders" URI at runtime". Except, as I said, you're not allowed to assume that there is an "orders" link, since that is out-of-band information. Your client must be intelligent enough to handle more than just an API where the "/user/{id}/orders" link is stored under _links. The server is allowed to give the link "/user/{id}/orders" a randomly generated name that changes with every request. It's also allowed to change the URL path to any randomly generated structure, as long as the server is able to keep track of it. The HATEOAS server is allowed to return a human-language description of each field and link, but the client is not allowed to assume that the orders are stored under any specific attribute. Hence you'd need an LLM to know which field is the "orders" field.
>In many common scenarios, such as a front-end single-page application being developed by the same team as the back-end, the client and server are already tightly coupled. In this context, the primary problem that HATEOAS solves—decoupling the client from the server’s URI structure—doesn’t present as an immediate pain point, making the simpler, documentation-driven approach the path of least resistance.
Bangs head at desk over and over and over. A webapp that is using HTML and JS downloaded from the server is following the spirit of HATEOAS. The client evolves with the server. That's the entire point of REST and HATEOAS.
[0] Whose contents may only be processed in a structure oblivious way
> Instead you get this snarky blog post telling people that they are doing REST wrong, rather than pointing out that REST is something almost nobody needs (self discoverable APIs intended for evolvable clients).
We're using actual REST right now. That's what SSR HTML uses.
The rest of your (vastly snarkier) diatribe can be ignored.
And, yet, you then said the following, which seems to contradict the rest of what you said before it...
> Bangs head at desk over and over and over. A webapp that is using HTML and JS downloaded from the server is following the spirit of HATEOAS. The client evolves with the server. That's the entire point of REST and HATEOAS.
> rather than pointing out that REST is something almost nobody needs (self discoverable APIs intended for evolvable clients).
Well, besides that, I don't see how REST solves the problem it says it addresses. So your user object includes an activate field that describes the URI you hit to activate the user. When that URI changes, the client doesn't even notice, because it queries for a user and then visits whatever it finds in the activate field.
Then you change the term from "activate" to "unslumber". How does the client figure that out? How is this a different problem from changing the user activation URI?
Htmx essays have already been mentioned, so here are my thoughts on the matter. I feel like to have a productive discussion of REST and HATEOAS, we must first agree on the basics. Repeating my own comment from a couple of weeks ago, H stands for hypermedia, and hypermedia is a type of media, that uses common format for representing some server-driven state and embedding hypermedia controls which are presented by back-end agnostic hypermedia client to a user for discoverability and interaction.
As such, JSON driven APIs can't be REST, since there is no common format for representing hypermedia controls, which means that there's no way to implement a hypermedia client which can present those controls to the user and facilitate interactions. Is there such an implementation? Yes, HTML is the hypermedia, <input>s and <button>s are controls and browsers are the clients. REST and HATEOAS are designed for humans, and trying to somehow combine them with machine-to-machine interaction results in awkward implementations, blurry definitions and overcomplication.
The Richardson maturity model is a clear indication of those problems; I see it as an admission of "well, there isn't much practicality in doing proper REST for machine-to-machine comms, but that's fine, you can do only some parts of it and it still counts". I'm not saying we shouldn't use its ideas: resource-based URLs are nice, using the features of HTTP is reasonable, but under the name REST it leads to constant arguments between the "dissertation" crowd and the "the industry has moved on" crowd. The worst/best part is that both those crowds are totally right, and this argument will continue for as long as we use HTTP.
> 80% of what's wrong with it is that people won't just use ISO 8601 dates.
Amen. Particularly ISO8601.
>the HATEOAS detail ends up having no practical value and creates more problems than the ones it solves.
Many server-rendered websites support REST by design: a web page with links and forms is the state transferred to the client. Even in SPAs, HATEOAS APIs are great for shifting business logic and security to the server, where it belongs. I have built plenty of them; it does require a certain mindset, but it does make many things easier. What problems are you talking about?
complexity
We should probably stop calling the thing that we call REST, REST and be done with it - it's only tangentially related to what Fielding tried to define.
> We should probably stop calling the thing that we call REST (...)
That solves no problem at all. We have the Richardson maturity model that provides a crisp definition, and it's ignored. We have the concept of RESTful, which is also ignored. We have RESTless, to contrast with RESTful. Etc etc etc.
None of this discourages nitpickers. They are pedantic in one direction, and so lax in another direction.
Ultimately it's all about nitpicking.
> but the HATEOAS detail ends up having no practical value and creates more problems than the ones it solves.
Only because we never had the tools and resources that, say, GraphQL has.
And now everyone keeps re-inventing half of HTTP anyway. See this diagram https://raw.githubusercontent.com/for-GET/http-decision-diag... (docs https://github.com/for-GET/http-decision-diagram/tree/master...) and this: https://github.com/for-GET/know-your-http-well
HATEOAS adds lots of practical value if you care about discoverability and longevity.
Discoverability by whom, exactly? Like if it's for developer humans, then good docs are better. If it's for robots, then _maybe_ there's some value... But in reality, it's not for robots.
HATEOAS solves a problem that doesn't exist in practice. Can you imagine an API provider being like, "hey, we can go ahead and change our interface...should be fine as long as our users are using proper clients that automatically discover endpoints and programmatically adapt accordingly"? Or can you imagine an API consumer going, "well, this HTTP request delivers the data we need, but let's make sure not to hit it directly -- instead, let's recursively traverse a graph of requests each time to make sure this is still the way to do it!"
You have got it wrong. Let's say I build some API with different user roles. Some users can delete an object, others can only read it. The UI knows about the semantics of the operations and logical names of it, so when UI gets the object from server it can simply check, if certain operations are available, instead of encoding the permission checking on the client side. This is the discoverability. It does not imply generated interfaces, UI may know something about the data in advance.
This is actually what we do at [DAYJOB] and it's been working well for over 12 years. Like any other kind of interface indirection it adds the overhead of indirection for the benefit of being able to change the producer's side of the implementation without having to change all of the consumers at the same time.
That's actually an interesting take, thank you.
How does the UI check if certain operations are available?
It’s literally in server response:
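Something along these lines, for example (just a sketch; HAL-style field names, all values illustrative):

    {
      "id": 123,
      "name": "Example object",
      "_links": {
        "self":   { "href": "/objects/123" },
        "update": { "href": "." },
        "delete": { "href": "." }
      }
    }

A read-only user would simply not get the "update" and "delete" entries.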
In this example you receive the list of permitted operations embedded in the resource model. href="." means you can perform this operation on the resource's self link.
Oh, interesting. So rather than the UI computing what operations should currently be allowed by, say, knowing the user's current role and having rules baked into it about the relationship between role and UI widgets, the UI can decide which widgets should be on or off simply from explicit statements of capability from the server.
I can see some meat on these bones. The counterpoint is that the protocol is now chattier than it would be otherwise... But a full analysis of bandwidth to the client would have to factor that you have to ship over a whole framework to implement those rules and keep those rules synchronized between client and server implementation.
OPTIONS https://datatracker.ietf.org/doc/html/rfc2616
More links here: https://news.ycombinator.com/item?id=44510745
It’s something else. List of available actions may include other resources, so you cannot express it with pure HTTP, you need a data model for that (HAL is one of possible solutions, but there are others)
Or probably just an Allow header on a response to another query (e.g. when fetching an object, server could respond with an Allow: GET, PUT, DELETE if the user has read-write access and Allow: GET if it’s read-only).
> If it's for robots, then _maybe_ there's some value...
Nah, machine readable docs beat HATEOAS in basically any application.
The person that created HATEOAS was really not designing an API protocol. It's a general use content delivery platform and not very useful for software development.
The problems do exist, and they're everywhere. People just invented all sorts of hacks and workarounds for these issues instead of thinking more carefully about them. See my posts in this thread for some examples:
https://news.ycombinator.com/item?id=44509745
LLMs also appear to have an easier time consuming it (not surprisingly.)
For most APIs that doesn’t deliver any value which can’t be gained from API docs, so it’s hard to justify. However, these days it could be very useful if you want an AI to be able to navigate your API. But MCP has the spotlight now.
And that's fine, but then you're doing RPC instead of REST and we should all be clear and honest about that.
I think you throw away a useful description of an API by lumping them all under RPC. If you tell me your API is RPC instead of REST then I'll assume that:
* If the API is available over HTTP then the only verb used is POST.
* The API is exposed on a single URL and the `method` is encoded in the body of the request.
It is true, if you say "RPC" I'm more likely to assume gRPC or something like that. If you say "REST", I'm 95% confident that it is a standard / familiar OpenAPI style json-over-http style API but will reserve a 5% probability that it is actually HATEOAS and have to deal with that. I'd say, if you are doing Roy Fielding certified REST / HATEOAS it is non-standard and you should call it out specifically by using the term "HATEOAS" to describe it.
What would it take for you to update your assumptions?
People in the real world referring to "REST" APIs, the kind that use HTTP verbs and have routes like /resource/id, as RPC APIs. As it stands, in the world outside of this thread, nobody does that.
At some level language is outside of your control as an individual even if you think it's literally wrong--you sometimes have to choose between being 'correct' and communicating clearly.
100% agreed, “language evolves”
This article also tries to make the distinction of not focusing on the verbs themselves. That the RESTful dissertation doesn’t focus on them.
The other side of this is that the IETF RESTful proposals from 1999 that talk about the protocol for implementation are just incomplete. The obscure verbs have no consensus on their implementation and libraries across platforms may do PUT, PATCH, DELETE incompatibly. This is enough reason to just stick with GET and POST and not try to be a strict REST adherent, since you’ll hit a wall.
While I ask people whether they actually mean REST according to the paper or not, I am one of the people who refuse to just move on. The reason being that the mainstream use of the term doesn’t actually mean anything, it is not useful, and therefore not pragmatic at all. I basically say “so you actually just mean some web API, ok” and move on with that. The important difference being that I need to figure out the peculiarities of each such web API.
>> The important difference being that I need to figure out the peculiarities of each such web API
So if they say it is Roy Fielding certified, you would not have to figure out any "peculiarities"? I'd argue that creating a typical OpenAPI style spec which sticks to standard conventions is more professional than creating a pedantically HATEOAS API. Users of your API will be confused and confusion leads to bugs.
So you enjoy being pedantic for the sake of being pedantic? I see no useful benefit either from a professional or social setting to act like this.
I don’t find this method of discovery very productive, and often, regardless of meeting some standard in the API, the real peculiarities are in the logic of the endpoints and not the surface.
I can see a value in pedantry in a professional setting from a signaling point of view. It's a cheap way to tell people "Hey! I'm not like those other girls, I care about quality," without necessarily actually needing to do the hard work of building that quality in somewhere where the discerning public can actually see your work.
(This is not a claim that the original commenter doesn't do that work, of course, they probably do. Pedants are many things but usually not hypocrites. It's just a qualifier.)
You'd still probably rather work with that guy than with me, where my preferred approach is the opposite of pedantry. I slap it all together and rush it out the door as fast as possible.
>> "Hey! I'm not like those other girls, I care about quality,"
OMG. Pure gold!
REST is pretty much impossible to adhere to for any sufficiently complex API, and we should just toss it in the garbage.
REST means, generally, HTTP requests with json as a result.
It also means they made some effort to use appropriate http verbs instead of GET/POST for everything, and they made an effort to organize their urls into patterns like `/things/:id/child/:child_id`.
It was probably an organic response to the complexity of SOAP/WSDL at the time, so people harping on how it's not HATEOAS kinda miss the historical context; people didn't want another WSDL.
> instead of GET/POST for everything
Sometimes that's a pragmatic choice too. I've worked with HTTP clients that only supported GET and POST. It's been a while but not that long ago.
>> /things/:id/child/:child_id
It seems that nesting isn't super common in my experience. Maybe two levels if completely composite but they tend to be fairly flat.
> It also means they made some effort to use appropriate http verbs instead of GET/POST for everything, and they made an effort to organize their urls into patterns like `/things/:id`.
No not really. A lot of people don't understand REST to be anything other than JSON over HTTP. Sometimes, the HTTP verbs thing is done as part of CRUD but actually CRUD doesn't necessarily have to do with the HTTP verbs at all and there can just be different endpoints for each operation. It's a whole mess.
I really hate my conclusions here, but from a limited freedom point of view, if all of that is going to happen...
> The team constantly bikesheds over correct status codes and at least a few are used contrary to the HTTP spec
So we'd better start with a standard scaffolding for the replies so we can encode the errors and forget about status codes. Then the only thing generating an error status is an unhandled exception mapped to 500. That's the one design that survives people disagreeing.
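For example, every reply could share a scaffolding like this (a hypothetical envelope; field names are illustrative, and the HTTP status stays 200):

    {
      "data": null,
      "error": {
        "code": "VALIDATION_FAILED",
        "message": "field 'name' is required"
      }
    }

Successful replies would put the payload in "data" and omit "error" entirely.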
> There's a decent chance listing endpoints were changed to POST to support complex filters
So we'd better just standardize that lists support both GET and POST from the beginning. While you are there, also accept queries in both the URL and the body parameters.
HTTP/JSON API works too, but you can assume it's what they mean by REST.
It makes me wish we stuck with XML-based stuff, it had proper standards, strictly enforced by libraries that get confused by things not following the standards. HTTP/JSON APIs are often hand-made and hand-read, NIH syndrome running rampant because it's perceived to be so simple and straightforward. To the point of "we don't need a spec, you can just see the response yourself, right?". At least that was the state ~2012, nowadays they use an OpenAPI spec but it's often incomplete, regardless of whether it's handmade (in which case people don't know everything they have to fill in) or generated (in which case the generators will often have limitations and MAYBE support for some custom comments that can fill in the gaps).
> HTTP/JSON API works too, but you can assume it's what they mean by REST.
This is the kind of slippery slope where pedantic nitpickers thrive. They start to complain that if you accept any media type other than JSON then it's not "REST-adjacent" anymore because JSON is in the name and some bloke wrote down somewhere that JSON was a trait of this architectural style.
In this sense, the term "RESTful" is useful to shut down these pedantic nitpickers. It's "REST-adjacent" still, but the right answer to nitpicking is "who cares".
> They start to complain that if you accept any media type other than JSON then it's not "REST-adjacent" anymore because JSON is in the name and some bloke wrote down somewhere that JSON was a trait of this architectural style.
wat?
Nowhere is JSON in the name of REpresentational State Transfer. Moreover, sending other representations than JSON (and/or different presentations in JSON) is not only acceptable, but is really a part of REST
> Nowhere is JSON in the name of REpresentational State Transfer.
If you read the message you're replying to, you'll notice you are commenting on the idea of coining the concept of HTTP/JSON API as a better fitting name.
Read messages before replying? It's the internet! Ain't no one got time for that
:)
Don't stress it. It happens to the best of us.
This. Or maybe we should call it "Rest API" in lowercase, meaning not the state transfer, but the state of mind, where developer reached satisfaction with API design and is no longer bothered with hypermedia controls, schemas etc.
I recall having to maintain an integration to some obscure SOAP API that ate and spit out XML with strict schemas and while I can't remember much about it, I think the integration broke quite easily if the other end changed their API somehow.
Assuming the / was meant to describe it as both an HTTP API and a JSON API (rather than HTTP API / JSON API) it should be JSON/HTTP, as it is JSON over HTTP, like TCP/IP or GNU/Linux :)
> it had proper standards
Lol. Have you read them?
SOAP in particular can really not be described as "proper".
It had the advantage that the API docs were always generated, and thus correct, but the most common thing was for one software stack to be unable to consume a service built with another stack.
> The team constantly bikesheds over correct status codes and at least a few are used contrary to the HTTP spec
I had to chuckle here. So true!
> I can safely assume [...] CRUD actions are mapped to POST/GET/PUT/DELETE
Not totally sure about that - I think you need to check what they decided about PUT vs PATCH.
It's always better to use GET/POST exclusively. The verb mapping was theoretical, from someone who didn't have to implement it. I've long ago caved to the reality of the web's limited support for most of the other verbs.
Isn't that fairly straightforward? PUT for full updates and PATCH for partial ones. Does anybody do anything different?
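For the straightforward reading, roughly (a sketch against a hypothetical /projects/123, using fetch):

    // PUT: full replacement -- send the complete representation
    await fetch('/projects/123', {
      method: 'PUT',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ name: 'New name', description: 'Same as before', visibility: 'private' }),
    });

    // PATCH: partial update -- send only the fields that change
    await fetch('/projects/123', {
      method: 'PATCH',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ name: 'New name' }),
    });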
PUT for partial updates, yes, constantly. What I worked with last week: https://docs.gitlab.com/api/projects/#edit-a-project
Lots of people make PUTs that work like PATCHes and it drives me crazy. Same with people who use POST to retrieve information.
These verbs don't even make sense most of the time.
Well you can't reliably use GET with bodies. There is the proposed SEARCH but using custom methods also might not work everywhere.
No, QUERY. https://datatracker.ietf.org/doc/html/draft-ietf-httpbis-saf...
SEARCH is from RFC 5323 (WebDAV).
The SEARCH verb draft was superseded by the QUERY verb draft last I checked. QUERY is somewhat more adopted, though it's still very new.
You sweet summer child.
> - CRUD actions are mapped to POST/GET/PUT/DELETE
Agree on your other three but I've seen far too many "REST APIs" with update, delete & even sometimes read operations behind a POST. "SOAP-style REST" I like to call it.
Do you care? From my point of view, post, put, delete, update, and patch all do the same thing. I would argue that if there is a difference, making the distinction in the URL instead of the request method makes it easier to search code and logs. And what's the correct verb anyway?
So that's an argument that there may be too many request methods, but you could also argue there aren't enough. But then standardization becomes an absolute mess.
So I say: GET or POST.
I agree. From what I have seen in corporate settings, using anything more than GET/POST takes the time to deploy the API to a different level. Using UPDATE, PATCH etc. typically involves firewall changes that may take weeks or months to get approved and deployed, followed by a never-ending audit/re-justification process.
> From my point of view, post, put, delete, update, and patch all do the same.
That's how we got POST-only GraphQL.
In HTTP (and hence REST) these verbs have well-defined behaviour, including the very important things like idempotence and caching: https://github.com/for-GET/know-your-http-well/blob/master/m...
There's no point in idempotency for operations that change the state. DELETE is supposed to be idempotent, but it can only be if you limit yourself to deletion by unique, non-repeating id. Should you do something like delete by email or product, you have to use another operation, which then obviously will be POST anyway. And there's no way to "cache" a delete operation.
It's just absurd to mention idempotency when the state gets altered.
Yeah but GET doesn’t allow requests to have bodies (yeah, I know, technically you can but it’s not very useful), and this is a legitimate issue preventing its use in complex APIs.
I actually had to change an API recently TO this. The request payload was getting too big, so we needed to send it via POST as a body.
> even sometimes read operations behind a POST
Even worse than that, when an API like the Pinboard API (v1) uses GET for write operations!
I work with an API that uses GET for delete :)
> Like Agile, CI or DevOps you can insist on the original definition or submit to the semantic diffusion and use the terms as they are commonly understood
This is an insightful observation. It happens with pretty much everything
As it has been happening recently with the term vibecoding. It started with some definition, and now it’s morphed into more or less just meaning ai-assisted coding. Some people don’t like it[1]
1: https://simonwillison.net/2025/Mar/19/vibe-coding/
Hell yeah. IMO we should collectively get over ourselves and just agree that what you describe is the true, proper, present-day meaning of "REST API".
I also view it as inevitable.
I can count on one hand the number of times I've worked on a service that can accurately be modeled as just representational state transfer. The rest have at least some features that are inherently, inescapably some form of remote procedure call. Which the original REST model eschews.
This creates a lot of impedance mismatch, because the HTTP protocol's semantics just weren't designed to model that kind of thing. So yeah, it is hard to figure out how to shoehorn that into POST/GET/PUT/DELETE and HTTP status codes. And folks who say it's easy tend to get there by hyper-focusing on that one time they were lucky enough to be working on a project where it wasn't so hard, and dismissing as rare exceptions the 80% of cases where it did turn out to be a difficult quagmire that forced a bunch of unsatisfying compromises.
Alternatively you can pick a protocol that explicitly supports RPC. But that's not necessarily any better because all the well-known options with good language support are over-engineered monstrosities like GRPC, SOAP, and (shudder) CORBA. It might reduce your domain modeling headaches, but at the cost of increased engineering and operations hassle. I really can't blame anyone for deciding that an ad-hoc, ill-specified, janky application of not-actually-REST is the more pragmatic option. Because, frankly, it probably is.
xml-rpc (before it transmogrified into SOAP) was pretty simple and flexible. Still exists, and there is a JSON variant now too. It's effectively what a lot of web APIs are: a way to invoke a method or function remotely.
I use the term "HTTP API"; more general. Context, in light of your definition: In many cases labeled "REST", there will only be POST, or POST and GET, and HTTP 200 status with an error in JSON is used instead of HTTP status codes. Your definition makes sense as a weaker form of the original, but it is still too strict compared to how the term is used. "REST" = "HTTP with JSON bodies" is the most practical definition I have.
>HTTP 200 status with an error in JSON is used instead of HTTP status codes
This is a bad approach. It prevents your frontend proxies from handling certain errors better. Such as: caching, rate limiting, or throttling abuse.
> HTTP 200 status with an error in JSON is used instead of HTTP status codes
I've seen some APIs that not only always return a 200 code, but will include a response in the JSON that itself indicates whether the HTTP request was successfully received, not whether the operation was successfully completed.
Building usable error handling with that kind of response is a real pain: there's no single identifier that indicates success/failure status, so we had to build our own lookup table of granular responses specific to each operation.
the last point got me.
How can you idiomatically do a read-only request with complex filters? For me both PUT and POST are "writable" operations, while "GET" is assumed to be read-only. However, if you need to encode the state of the UI (filters or whatnot), it's preferred to use JSON rather than query params (which have length limitations).
So ... how does one do it?
One uses POST and recognizes that REST doesn't have to be so prescriptive.
The part of REST to focus on here is that the response from earlier well-formed requests will include all the forms (and possibly scripts) that allow for the client to make additional well-formed requests. If the complex filters are able to be made with a resource representation or from the root index, regardless of HTTP methods used, I think it should still count as REST (granted, HATEOAS is only part of REST but I think it should be a deciding part here).
When you factor in the effects of caching by intermediate proxy servers, you may find yourself adapting any search-like method to POST regardless, or at least GET with params, but you don't always want to, or can't, put the entire formdata in params.
Plus, with the vagaries of CSRF protections, per-user rate-limiting and access restrictions, etc., your GET is likely to turn into a POST for anything non-trivial. I wouldn't advise trying for pure REST-ful on the merits of its purity.
POST the filter, get a response back with the query to follow up with for the individual resources.
which then responds with a link to the created search resource. And then you can make GET request calls against that resource. It adds in some data expiration problems to be solved, but it's reasonably RESTful.
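Roughly, with hypothetical endpoints:

    // POST the filter; the server creates a search resource and points at it
    const res = await fetch('/searches', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ status: 'open', tags: ['red', 'blue'] }),
    });
    const searchUrl = res.headers.get('Location'); // e.g. /searches/3f2a9c0e

    // Follow-up reads are plain, cacheable GETs against that resource
    const results = await fetch(searchUrl).then(r => r.json());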
This has RESTful aesthetics but it is a bit unpractical if a read-only query changes state on the server, as in creating the uuid-referenced resource.
There's no requirement in HTTP (or REST) to either create a resource or return a Location header.
For the purposes of caching etc, it's useful to have one, as well as cache controls for the query results, and there can be links in the result relative to the Location (eg a link href of "next" is relative to the Location).
Isn't this twice as slow? If your server was far away it would double load times?
The response to POST can return everything you need. The Location header that you receive with it will contain permanent link for making the same search request again via GET.
Pros: no practical limit on query size. Cons: permalink is not user-friendly - you cannot figure out what filters are applied without making the request.
There was a proposal[1] a while back to define a new SEARCH verb that was basically just a GET with a body for this exact purpose.
[1]: https://www.ietf.org/archive/id/draft-ietf-httpbis-safe-meth...
Similarly, a more recent proposal for a new QUERY verb: https://httpwg.org/http-extensions/draft-ietf-httpbis-safe-m...
If you really want this idiomatically correct, put the data in JSON or other suitable format, zip it and encode it in Base64 to pass via GET as a single parameter. To hit the browser limits you will need such a big query that you may hit UX constraints earlier in many cases (2048 bytes is 50+ UUIDs or 100+ polygon points etc).
Pros: the search query is a link that can be shared, the result can be cached. Cons: harder to debug, may not work in some cases due to URI length limits.
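A rough browser-side sketch of that (the compression step is omitted here; a CompressionStream could be added where supported):

    // Encode a complex filter object as a single URL-safe GET parameter
    const filter = { ids: ['...50+ UUIDs...'], polygon: ['...100+ points...'] };
    const q = btoa(JSON.stringify(filter)); // base64 of the JSON
    const res = await fetch('/items?q=' + encodeURIComponent(q));
    // The server decodes the parameter, parses the JSON and runs the query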
HTML FORMs are limited to www-form-encoded or multipart. The length of the queries on a GET with a FORM is limited by intermediaries that shouldn't be limiting it. But that's reality.
Do a POST of a query document/media type that returns a "Location" that contains the query resource that the server created as well as the data (or some of it) with appropriate link elements to drive the client to receive the remainder of the query.
In this case, the POST is "writing" a query resource to the server and the server is dealing with that query resource and returning the resulting information.
Soon, hopefully, QUERY will save us all. In the meantime, simply using POST is fine.
I've also seen solutions where you POST the filter config, then reference the returned filter ID in the GET request, but that often seems like overkill even if it adds some benefits.
Haha, our API still returns XML. At least, most of the endpoints do. Not the ones written by that guy who thinks predictability in an API is lower priority than modern code, those ones return JSON.
I present to you this monstrosity: https://stackoverflow.com/q/39110233
Presumably they had an existing API, and then REST became all the rage, so they remapped the endpoints and simply converted the XML to JSON. What do you do with the <tag>value</tag> construct? Map it to the name `$`!
Congratulations, we're REST now, the world is a better place for it. Off to the pub to celebrate, gents. Ugh.
I think people tend to forget these things are tools, not shackles
Sounds about right. I've been calling this REST-ish for years and generally everyone I say that to gets what I mean without much (any) explanation.
As long as it's not SOAP, it's great.
If I never have to use SOAP again in my life, I will die a happy man.
Yeah
I can assure you very few people care
And why would they? They're getting value out of this and it fits their head and model view
Sweating over this takes you nowhere
> The team constantly bikesheds over correct status codes and at least a few are used contrary to the HTTP spec
I've done this enough times that now I don't really bother engaging. I don't believe anyone gets it 100% correct ever. As long as there is nothing egregiously incorrect, I'll accept whatever.
Importantly for the discussion, this also doesn't mean the push for REST api's was a failure. Sure, we didn't end up with what was precisely envisioned from that paper, but we still got a whole lot better than CORBA and SOAP.
The lowest common denominator in the REST world is a lot better than the lowest common denominator in SOAP world, but you have to convince the technically literate and ideological bunch first.
We still have gRPC though...
> I sympathize with the pedantry here and found Fielding's paper to be interesting, but this is a lost battle.
True. Losing hacking/hacker was sad but I can live with it - crypto becoming associated with scam coins instead of cryptography makes me want to fight.
This is very true. Over my 15 years of engineering, I have never suffered _that_ much with integrating with an API (assuming it exists). So the lack of "HATEOAS" hasn't even been noticeable for me. As long as they get most of the 400 status codes right (specifically 200, 401, 403, 429) I usually have no issues integrating and don't even notice that they don't have some "discoverable api". As long as I can get the data I need or can make the update I need I am fine.
I think good rest api design is more a service for the engineer than the client.
> As long as they get most of the 400 status codes right (specifically 200, 401, 403, 429)
A client had built an API that would return 200 on broken requests. We pointed it out and asked if maybe it could return 500, to make monitoring easier. Sure thing, next version "Http 200 - 500", they just wrote 500 in the message body, return remained 200.
Some developers just do not understand http.
I just consumed an API where errors were marked with a "success": false field.
The "success" is never true. If it's successful, it's not there. Also, a few endpoints return 500 instead, because of course they do. Oh, and one returns nothing on error and data on success, because, again, of course it does.
Anyway, if you want a clearer symptom that your development stack is shit and has way too much accidental complexity, there isn't any.
This is the real world. You just deal with it (at least I do) because fighting it is more work and at the end of the day the boss wants the project done.
I've seen this a few times in the past but for a different reason. What would happen in these cases was that internally there’d be some cascade of calls to microservices that all get collected. In the most egregious examples it’s just some proxy call wrapping the “real” response.
So it becomes entirely possible to get a 200 from the thing responding to you but it may be wrapping an upstream error that gave it a 500.
Sometimes I wish HN supported emojis so I could reply with the throw-up one.
{ "statusCode": 200, "error" : "internal server error" }
Nice.
I've had frontend devs ask for this, because it was "easier" to handle everything in the same then callback. They wanted me to put ANY error stuff as a payload in the response.
> So the lack of "HATEOaS" hasn't even been noticable for me.
I think HATEOAS tackles problems such as API versioning, service discovery, and state management in thin clients. API versioning is trivial to manage with sound API Management policies, and the remaining problems aren't really experienced by anyone. So you end up having to go way out of your way to benefit from HATEOAS, and you require more complexity both on clients and services.
In the end it's a solution searching for problems, and no one has those problems.
It isn't clear that HATEOAS would be better. For instance:
>>Clients shouldn’t assume or hardcode paths like /users/123/posts
Is it really net better to return something like the following just so you can change the url structure.
"_links": { "posts": { "href": "/users/123/posts" }, }
I mean, so what? We've create some indirection so that the url can change (e.g. "/u/123/posts").
Yes, so the link doesn't have to be relative to the current host. If you move user posts to another server, the href changes, nothing else does.
If suddenly a bug is found that lets people iterate through users that aren't them, you can encrypt the url, but nothing else changes.
The bane of the life of backend developers is frontend developers that do dumb "URL construction" which assumes that the URL format never changes.
It's brittle and will break some time in the future.
>- There's a decent chance listing endpoints were changed to POST to support complex filters
Please. Everyone knows they tried to make the complex filter work as a GET, then realized the filtering query is so long that it breaks whatever WAF or framework is being used because they block queries longer than 4k chars.
[flagged]
I disagree. It's a perfectly fine approach to many kinds of APIs, and people aren't "mediocre" just for using widely accepted words to describe this approach to designing HTTP APIs.
> and people aren't "mediocre" just for using widely accepted words
If you work off "widely accepted words" when there is disagreeing primary literature, you are probably mediocre.
So your view is that the person who coins a term forever has full rights to dictate the meaning of that term, regardless of what meaning turns out to be useful in practice and gets broadly accepted by the community? And you think that anyone who disagrees with such an ultra-prescriptivist view of linguistics is somehow a "mediocre programmer"? Do I have that right?
No. For all people who use "REST": If reading Fielding is the exception that gets you on HN, then not reading Fielding is what the average person does. Mediocre.
Using Fielding's term to refer to something else is an extra source of confusion, which kinda makes the term useless. Nobody knows what exactly the speaker refers to.
The point is lost on you though. There are REST APIs (almost none), and there are "REST APIs" - a battle cry of mediocre developers. Now go tell them their restful has nothing to do with rest. And I am now just repeating stuff said in article and in comments here.
Why should I (or you, for that matter) go and tell them their restful has nothing to do with rest? Why does it matter? They're making perfectly fine HTTP APIs, and they use the industry standard term to describe what kind of HTTP API it is.
It's convenient to have a word for "HTTP API where entities are represented by JSON objects with unique paths, errors are communicated via HTTP status codes and CRUD actions use the appropriate HTTP methods". The term we have for that kind of API is "rest". And that's fine.
1. Never said I'm going to tell them. It's on someone else. I'm just going to lower my expectation from such developers accordingly.
2. So just "HTTP API". And that would suffice. Adding "restful" is trying to be extra-smart, or to fit in when everyone around is extra-smart.
> 1. Never said I'm going to tell them. It's on someone else. I'm just going to lower my expectation from such developers accordingly.
This doesn't seem like a useful line of conversation, so I will ignore it.
> 2. So just "HTTP API".
No! There are many kinds of HTTP APIs. I've both made and used "HTTP APIs" where HTTP is used as a transport and API semantics are wholly defined by the message types. I've seen APIs where every request is an HTTP POST with a protobuf-encoded request message and every response is a 200 OK with a protobuf-encoded response message (which might then indicate an error). I've seen GraphQL APIs. I've seen RPC-style APIs where every "RPC call" is a POST request to an endpoint whose name looks like a function name. I've seen APIs where request and response data is encoded using multipart/form-data.
Hell, even gRPC APIs are "HTTP APIs": gRPC uses HTTP/2 as a transport.
Telling me that something is an "HTTP API" tells me pretty much nothing about how it works or how I'm expected to use it, other than that HTTP is in some way involved. On the other hand, if you tell me that something is a "REST API", I already have a very good idea about how to use it, and the documentation can assume a lot of pre-existing context because it can assume that I've used similar APIs before.
> On the other hand, if you tell me that something is a "REST API", I already have a very good idea about how to use it (...)
Precisely this. The value of words is that they help communicate concepts. REST API or even RESTful API conveys a precise idea. To help keep pedantry in check, Richardson's maturity model provides value.
Everyone manages to work with this. Not those who feel the need to attack people with blanket accusations of mediocrity, though. They hold onto meaningless details.
You're being needlessly pedantic, and it seems the only purpose to this pedantry is finding a pretext to accuse everyone of being mediocre.
I think the pushback is because you labelled people who create "REST APIs" as "mediocre" without any explanation. That may be a good starting point.
It’s the worst kind of pedantry. Simultaneously arrogant, uncharitable and privileged.
Most of us are not writing proper Restful APIs because we’re dealing with legacy software, weird requirements, and the egos of other developers. We’re not able to build whatever we want.
And I agree with the feature article.
> It’s the worst kind of pedantry. Simultaneously arrogant, uncharitable and privileged.
I'd go as far as to claim it is by far the dumbest kind, because it has no value, serves no purpose, and solves no problem. It's just trivia used to attack people.
I met a DevOps guy who didn't know what "dotfiles" are.
However I'd argue that people who use the term to describe it the same as everyone else are the smart ones; if you want to refer to the "real" one, just add "strict" or "real" in front of it.
I don't think we should dismiss people over drifting definitions and lack of "foundational knowledge".
This is more like people arguing over "proper" English, the point of language is to communicate ideas. I work for a German company and my German is not great but if I can make myself understood, that's all that's needed. Likewise, the point of an API is to allow programs, systems, and people to interoperate. If it accomplishes that goal, it's fine and not worth fighting over.
If my API is supposed to rely on content-type, how many different representations do I need? JSON is a given anymore, and maybe XML, but why not plain text, why not PDF? My job isn't an academic paper, good enough to get the job done is going to have to be good enough.
I agree, though it would be really, really nice if an HTTP method like GET would not modify things. :)
> This is more like people arguing over "proper" English, the point of language is to communicate ideas.
ur s0 rait, eye d0nt nnno wy ne1 b0dderz tu b3 "proppr"!!!!1!!
</sarcasm>
You are correct that communication is the point. Words do communicate a message. So too does disrespect for propriety: it communicates the message that the person who is ignorant or disrespectful of proper language is either uneducated or immature, and that in turn implies that such a person’s statements and opinions should be discounted if not ignored entirely.
Words and terms mean things. The term ‘REST’ was coined to mean something. I contend that the thing ‘REST’ originally denoted is a valuable thing to discuss, and a valuable thing to employ (I could be wrong, but how easy will it be for us to debate that if we can’t even agree on a term for the thing?).
It’s similar to the ironic use of the word ‘literally.’ The word has a useful meaning, there is already the word ‘figuratively’ which can be used to mean ‘not literally’ and a good replacement for the proper meaning of ‘literally’ doesn’t spring to mind: misusing it just decreases clarity and hinders communication.
> If my API is supposed to rely on content-type, how many different representations do I need? JSON is a given anymore, and maybe XML, but why not plain text, why not PDF?
Whether something is JSON or XML is independent of the representation — they are serialisations (or encodings) of a representation. E.g. {"type": "foo","id":1}, <foo id="1"/>, <foo><id>1</id></foo> and (foo (id 1)) all encode the same representation.
>misusing it just decreases clarity and hinders communication
There is no such thing as "misusing language". Language changes. It always does.
Maybe you grew up in an area of the world where it's really consistent everywhere, but in my experience I'm going to have a harder time understanding people even two to three villages away.
Because language always changes.
Words mean a particular thing at a point in time and space. At another one, they might mean something completely different. And that's fine.
You can like it or dislike it, that's up to you. However, I'd say every little bit of negative thoughts in that area only serve to make yourself miserable, since humanity and language at large just aren't consistent.
And that's ok. Be it REST, literally or even a normal word such as 'nice', which used to mean something like 'foolish'.
Again, language is inconsistent by default and meanings never stay the same for long - the more a terminus technicus gets adapted by the wider population, the more its meaning gets widened and/or changed.
One solution for this is to just say "REST in its original meaning" when referring to what is now the exception instead of the norm.
> I work for a German company and my German is not great but if I can make myself understood, that's all that's needed.
Really? What if somebody else wants to get some information to you? How do you know what to work on?
What an incredibly bad take.
When I was working on my first HTTP-based API 13 years ago, based on many comments about true REST, I decided to first study what REST should really be. I've read Fielding's paper cover to cover, I've read RESTful Web Services Cookbook from O'Reilly and then proceeded to work around Django idioms to provide a REST API. This was a bit of cargo cult thinking from my end, I didn't truly understand how REST would benefit my service. It took me several more years and several more HTTP APIs to understand that in the case of these services, there were no benefits.
The vision of an API that is self-discoverable and that works with a generic client is not practical in most cases. I think that perhaps the AWS dashboard with its multitude of services has some generic UI code that allows it to handle these services without service-specific logic, but I doubt even that.
Fielding's paper doesn't provide a complete recipe for building self-discoverable APIs. It is an architecture, but the details of how clients should really discover the endpoints and determine what these endpoints are doing is left out of the paper. To make a truly discoverable API you need to specify a protocol for endpoint discovery, operation descriptions, help messages etc. Then you need clients that understand your specification, so it is not really a generic client. If your service is the only one that implements this client, you made a lot of extra effort to end up with the same solution that non-REST services implement - a service provides an API and JS code to work with the API (or a command line client that works with the API), but there is no client code reuse at all.
I also think that good UX is not compatible with REST goals. From a user perspective, app-specific code can provide better UX than generic code that can discover endpoints and provide UI for any app. Of course, UI elements can be standardized and described in some languages (remember XUL?), so UI can adapt to app requirements. But the most flexible way for such standardization is to provide a language like JavaScript that is responsible for building UI.
The browser is "generic code" that provides the UX we use all day, every day.
REST includes allowing code to be part of the response from a server, there are the obvious security issues, but the browsers (and the standards) have dealt with a lot of that.
https://ics.uci.edu/~fielding/pubs/dissertation/net_arch_sty...
I think you're right. APIs have a lot of aspects to them, so describing them is hard. API users need to know typical latency bounds, which error codes may be retried, whether an action is atomic or idempotent. HATEOAS gets you none of these things.
So fully implementing a perfect version of REST is usually not necessary for most types of problems users actually encounter.
What REST has given us is an industry-wide lingua franca. At the basic level, it's a basic understanding of how to map nouns/verbs to HTTP verbs and URLs. Users get to use the basic HTTP response codes. There's still a ton of design and subtlety to all this. Do you really get to do things that are technically allowed, but might break at a typical load balancer (returning bodies with certain error codes)? Is your returning 500 retriable in all cases, with what preferred backoff behavior?
>API users need to know typical latency bounds, which error codes may be retried, whether an action is atomic or idempotent. HATEOAS gets you none of these things.
Those things aren't always necessary. However API users always need to know which endpoints are available in the current context. This can be done via documentation and client-side business logic implementing it (arguably, more work) or this can be done with HATEOAS (just check if server returned the endpoint).
HTTP 500 retriable sounds like a design error, when you can use HTTP 503 to explicitly say "try again later, it's temporal".
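For the "just check if server returned the endpoint" part, the client-side code stays pretty trivial, something like this (assuming a HAL-style _links field; showDeleteButton is a hypothetical UI helper):

    // Render the control only if the server advertised the operation
    if (item._links && item._links.delete) {
      showDeleteButton(item._links.delete.href);
    }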
I think this hits the nail on the head. Complaining that the current understanding of REST isn't exactly the same as the original usage is missing the point that now REST gives people a good idea of what to expect and how to use the exposed interface.
It's actually a very analogous complaint to how object-oriented programming isn't how it was supposed to be and that only Smalltalk got it right. People now understand what is meant when people say OOP even if it's not what the creator of the term envisioned.
Computer Science, and even the world in general, is littered with examples of this process in action. What's important is that there's a general consensus of the current meaning of a word.
Yes, the field is littered with imperfection.
One thing though - if you do take the time to learn the original "perfect" versions of these things, it helps you become a much better system designer. I'm constantly worried about API design because it has such large and hard-to-change consequences.
On the other hand, we as an industry have also succeeded quite a bit! So many of our abstractions work really well.
It's not just the original REST that usually has no benefits. The industry's reinterpreted version of weak REST also usually has little to no benefits. Who really cares that deleting a resource must necessarily be done with the DELETE HTTP verb rather than simply a POST?
You have to represent the action somehow. And letting proxies understand a wee bit of what's going on is useful. That's how you can have a proxy that lets your users browse the web but not login to external sites, and so on.
The DELETE verb exists, there's no reason not to use it.
There is one reason. The DELETE absolutely must be idempotent. If it's not, then use POST.
The POST verb exists, there's no reason not to use it to ask a server to delete data.
In fact, there are plenty of reasons not to use DELETE and PUT. Middleboxes managed by incompetent security people block them, they require that developers have a minimum of expertise and don't break the idempotency rule, lots of software stacks simply don't support them (yeah, those stacks are bad, which still doesn't change anything), and most of the internet just doesn't use the benefit they provide (because they don't trust the developers behind the server to not break the rules).
And you just added more work to yourself to interpret the HTTP verb. You already need work to interpret the body of a POST request, so why not put the information of "the operation is trying to delete" inside the body?
> To make truly discoverable API you need to specify protocol for endpoints discovery, operations descriptions, help messages etc. Then you need clients that understand your specification, so it is not really a generic client.
Generic clients just need to understand hypermedia and they can discover your API, as long as your API returns hypermedia from its starting endpoint and all other endpoints are transitively linked from that start point.
Let me ask you this: if I gave you an object X in your favourite OO language, could you use your languages reflection capabilities to discover all properties of every object transitively reachable from X, and every method that could be called on X and all objects transitively reachable from X? Could you not even invoke many of those methods assuming the parameter types are mostly standardized objects or have constructors that accept standardized objects?
This is what discoverability via HATEOAS is. True REST can be seen as exporting an object model with reflection capabilities. For clients that are familiar with your API, they are using hypermedia to access known/named properties and methods, and generic clients can use reflection to do the same.
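As a rough JavaScript sketch of that reflection analogy (purely illustrative, not any particular API):

    // Walk everything transitively reachable from x, listing state and callable members,
    // much like a hypermedia client discovering links and forms from a starting resource.
    function discover(x, seen = new Set(), path = '$') {
      if (x === null || typeof x !== 'object' || seen.has(x)) return;
      seen.add(x);
      for (const key of Object.keys(x)) {
        const value = x[key];
        if (typeof value === 'function') {
          console.log(path + '.' + key + '()  <- an operation you could invoke');
        } else {
          console.log(path + '.' + key + '  <- state you can inspect');
          discover(value, seen, path + '.' + key);
        }
      }
    }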
> Let me ask you this: if I gave you an object X in your favourite OO language, could you use your languages reflection capabilities to discover all properties of every object transitively reachable from X, and every method that could be called on X and all objects transitively reachable from X? Could you not even invoke many of those methods assuming the parameter types are mostly standardized objects or have constructors that accept standardized objects?
Sure this can be done, but I can't see how to build a useful generic app that interacts with objects automatically by discovering the methods and calling them with discovered parameters. For things like debugger, REPL, or some database inspection/manipulation tool, this approach is useful, but for most apps exposed to end users, the UI needs to be aware what the available methods do and need to be intentionally designed to provide intuitive ways of calling the methods.
> For things like debugger, REPL, or some database inspection/manipulation tool, this approach is useful, but for most apps exposed to end users
Yes, exactly, but the point is that something like Swagger becomes completely trivial, and so you no longer need a separate, complex tool to do what the web automatically gives you.
The additional benefits are on the server-end, in terms of maintenance and service flexibility. For instance, you can now replace and transition any endpoint URL (except the entry endpoint) at any time without disrupting clients, as clients no longer depend on specific URL formats (URLs are meaningful only to the server), but depend only on the hypermedia that provides the endpoints they should be using. This is Wheeler's aphorism: hypermedia adds one level of indirection to an API which adds all sorts of flexibility.
For example, you could have a set of servers implementing an application function, each designated by a different URL, and serve the URL for each server in the hypermedia using any policy that makes sense, effectively making an application-specific load balancer. We worked around scaling issues over the years by adding SNI to TLS and creating dedicated load balancers, but Fielding's REST gave us everything we needed long before! And it's more flexible than SNI because these servers don't even have to be physically located behind a load balancer.
There are many ideas in the REST paper that are super useful, but the goal of making a generic client working with any API is difficult if not impossible to achieve.
Was the client of the service that you worked on fully generic and application independent? It is one thing to be able to change URLs only on the server, without requiring a client code change, and such flexibility is indeed a practical benefit that the REST architecture gives us. It is another thing to change, say, a calendar application into a messaging application just by returning a different entry point URL to the same generic client code. This goal is something that the REST architecture tried to address, but IMO it was not realized in practice.
> There are many ideas in the REST paper that are super useful, but the goal of making a generic client working with any API is difficult if not impossible to achieve.
It's definitely possible to achieve: anywhere that data is missing you present an input prompt, which is exactly what a web browser does.
That said, the set of autonomous programs that can do something useful without knowing what they're doing is of course more limited. These are generic programs like search engines and AI training bots that crawl and index information.
> It is another thing to change say, a calendar application into a messaging application just by returning a different entry point URL to the same generic client code.
Web browsers do exactly this!
> Web browsers do exactly this!
Browsers provide a generic execution environment, but the client code (JavaScript/HTML/CSS) is not generic. Calendar application and messaging application entry points provide application-specific code for implementing calendar or messaging app functions. I don't think this is what was proposed in the REST paper, otherwise we wouldn't have articles like 'Most RESTful APIs aren't really RESTful'.
> but the client code (JavaScript/HTML/CSS) is not generic
The HTML/hypermedia returned is never generic, that's why HATEOAS works at all and is so flexible.
The "client" JS code is provided by the server, so it's not really client-specific (the client being the web browser here--maybe should call it "agent"). Regardless, sending JS is an optimization, calendars and messaging are possible using hypermedia alone, and proves the point that the web browser is a generic hypermedia agent that changes behaviour based on hypermedia that's dictated solely by the URL.
You can start programming any app with a plain hypermedia version and then add JS to make the user experience better, which is the approach that HTMx is reviving.
> Generic clients just need to understand hypermedia
Yikes. Nobody wants to implement a browser to create a UI for ordering meals from a restaurant. I'm pretty sure the reason we ended up settling on just tossing JSON blobs around and baking the semantics of them into the client is that we don't want the behavior of the application to get tripped up on whether someone failed to close a <b> tag.
(Besides: practically, for a web-served interface, the client may as well carry semantic understanding because the client came from the server).
> Yikes. Nobody wants to implement a browser to create a UI for ordering meals from a restaurant.
You don't need a full web browser. Fielding published his thesis in 2000, browsers were almost trivial then, and the needs for programming are even more trivial: you can basically skip any HTML that isn't a link tag or form data for most purposes.
> baking the semantics of them into the client is that we don't want the behavior of the application to get tripped up on whether someone failed to close a <b> tag.
This is such a non-issue. Why aren't you worried about badly formatted JSON? Because we have well-tested JSON formatters. In a world where people understood the value of hypermedia as an interchange format, we'd be in exactly the same position.
And to be clear, if JSON had links as a first class type rather than just strings, then that would qualify as a hypermedia format too.
If I'm going to do HTML that isn't HTML then I might as well not do HTML, there's a lot of sharp edges in that particular markup that I'd prefer to avoid.
> Why aren't you worried about badly formatted JSON?
Because the json spec is much smaller than the HTML spec so it is much easier for the parser to prevalidate and reject invalid JSON.
Maybe I need to reread the paper and substitute "a good hypermedia language" for HTML conceptually, see if it makes more sense to me.
> Fielding's paper doesn't provide a complete recipe for building self-discoverable APIs.
But it does though. An HTTP server returns an HTTP response to a request from a browser. The response is an HTML webpage that is rendered to the user with all discoverable APIs visible as clickable links. Welcome to the World Wide Web.
You describe how web pages work, web pages are intended for human interactions, APIs are intended for machine interaction. How a generic Python or JavaScript client can discover these APIs? Such clients will request JSON representation of a resource, because JSON is intended for machine consumption, HTML is intended for humans. Representations are equivalent, if you request JSON representations of a /users resource, you get a JSON list. If you request HTML representation of a /users resource you get an HTML list, but the content should be the same. Should you return UI controls for modifying a list as part of the HTML representation? If you do so, your JSON and HTML representations are different, and your Python and JavaScript client still cannot discover what list modification operations are possible, only human can do it by looking at the HTML representation. This is not REST if I understand the paper correctly.
> You describe how web pages work, web pages are intended for human interactions
Exactly, yes! The first few sentences from Wikipedia...
"REST (Representational State Transfer) is a software architectural style that was created to describe the design and guide the development of the architecture for the World Wide Web. REST defines a set of constraints for how the architecture of a distributed, Internet-scale hypermedia system, such as the Web, should behave." -- [1]
If you are desiging a system for the Web, use REST. If you are designing a system where a native app (that you create) talks to a set of services on a back end (that you also create), then why conform to REST principles?
[1] - https://en.wikipedia.org/wiki/REST
Where this kind of API design is useful is when there is a user with an agent (e.g. a browser or similar) who can navigate the API and interact with the different responses based on their media types and what the links are called.
Most web APIs are not designed with this use-case in mind. They're designed to facilitate web apps that are much more specific in what they're trying to present to the user. This is both deliberate and valuable; app creators need to be able to control the presentation to achieve their apps' goals.
REST API design is for use-cases where the users should have control over how they interact with the resources provided by the API. Some examples that should be using REST API design:
- Government portals for publicly accessible information, like legal codes, weather reports, or property records
Considering these examples, it makes sense that policing of what "REST" means comes from the more academically-minded, while the detractors of the definition are typically app developers trying to create a very specific user experience. The solution is easy: just don't call it REST unless it actually is.
> Where this kind of API design is useful is when there is a user with an agent (e.g. a browser or similar) who can navigate the API and interact with the different responses based on their media types and what the links are called.
It's also useful when you're programming a client that is not a web page!
You GET a thing, you dereference fields/paths in the returned representation, you construct a new URI, you perform an operation on it, and so on.
Consider a directory / database application. You can define a RESTful, HATEOAS API for it, write a single-page web application for it (or a non-SPA if you prefer), and also write libraries and command-line interfaces to the same thing, all using roughly similar code that does what I described above. That's pretty neat. In the case of a non-SPA you can use pure HTML and not think that you're "dereferencing fields of the returned representation", but the user and the user-agent are still doing just that.
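A minimal sketch of that GET-dereference-follow loop in Python (the entry point, link relations, and field names are all assumptions for illustration, not a prescribed format):

    import requests

    BASE = "https://api.example.com"  # made-up entry point

    # Enter the API at its root and follow links from there,
    # instead of hard-coding every URI template in the client.
    root = requests.get(BASE + "/", headers={"Accept": "application/json"}).json()

    # The relation names ("users", "self", "deactivate") are illustrative;
    # a real API would document its own link relations.
    users = requests.get(BASE + root["_links"]["users"]["href"]).json()
    first = users["items"][0]
    detail = requests.get(BASE + first["_links"]["self"]["href"]).json()

    # Only perform the operation if the server advertises that transition.
    if "deactivate" in detail.get("_links", {}):
        requests.post(BASE + detail["_links"]["deactivate"]["href"])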
> Where this kind of API design is useful is when there is a user with an agent (e.g. a browser or similar) who can navigate the API and interact with the different responses based on their media types and what the links are called.
The funny thing is, that perfectly describes HTML. Here’s a document with links to other documents, which the user can navigate based on what the links are called. Because if it’s designed for users, it’s called a User Interface. If it’s designed for application programming, it’s called an Application Programming Interface. This is why HATEOAS is kinda silly to me. It pretends APIs should be used by Users directly. But we already have that, it’s called a UI.
The point is that your Web UI can easily be made to be a REST HATEOAS conforming API at the same time. No separate codepaths, no duplicate efforts, just maybe some JSON templates in addition to HTML templates.
You're right, pure REST is very academic. I've worked with open/big data, and there's always a struggle to get realistic performance and app architecture design; for anything non-obvious, I'd say there are shades of REST rather than a simple boolean yes/no. Even academics have to produce a working solution or "application", i.e. that which can be actually applied, at some point.
When there is lots of data and performance is important, HTTP is the wrong protocol. JSON/XML/HTML is the wrong data format.
> Where this kind of API design is useful is when there is a user with an agent (e.g. a browser or similar) who can navigate the API and interact with the different responses based on their media types and what the links are called.
> Most web APIs are not designed with this use-case in mind.
I wonder if this will change as APIs start to support AI consumption.
Discoverability is very important to an AI, much more so than to a web app developer.
MCP shows us how powerful tool discoverability can be. HATEOAS could bring similar benefits to bare API consumption.
> Government portals for publicly accessible information, like legal codes, weather reports, or property records
Yes, and it's so nice when done well.
https://www.weather.gov/documentation/services-web-api
This is a very good and detailed review of the concepts of REST, kudos to the author.
One additional point I would add is that making use of the REST-ful/HATEOAS pattern (in the original sense) requires a conforming client to make the juice worth the squeeze:
https://htmx.org/essays/hypermedia-clients
https://intercoolerjs.org/2016/05/08/hatoeas-is-for-humans
I'll never understand why the HATEOAS meme hasn't died.
Is anyone using it? Anywhere?
What kind of magical client can make use of an auto-discoverable API? And why does this client have no prior knowledge of the server they are talking to?
I used it on an enterprise-grade video surveillance system. It was great - basically solved the versioning and permissions problem at the API level. We leveraged other RFCs where applicable.
The biggest issue was that people wanted to subvert the model to "make things easier" in ways that actually made things harder. The second biggest issue is that JSON is not, out of the box, a hypertext format. This makes application/json not suitable for HATEOAS, and forcing some hypertext semantics onto it always felt like a kludge.
> I'll never understand why the HATEOAS meme hasn't died.
> Is anyone using it? Anywhere?
As I recall ACME (the protocol used by Let’s Encrypt) is a HATEOAS protocol. If so (a cursory glance at RFC 8555 indicates that it may be), then it’s used by almost everyone who serves HTTPS.
Arguably HTTP, when used as it was intended, is itself a HATEOAS protocol.
> What kind of magical client can make use of an auto-discoverable API? And why does this client have no prior knowledge of the server they are talking to?
LLMs seem to do well at this.
And remember that ‘auto-discovery’ means different things. A link typed ‘next’ enables auto-discovery of the next resource (whatever that means); it assumes some pre-existing knowledge in the client of what ‘next’ actually means.
> As I recall ACME (the protocol used by Let’s Encrypt) is a HATEOAS protocol.
In this case specifically, everybody's lives are worse because of that.
Yes. You used it to enter this comment.
I am using it to enter this reply.
The magical client that can make use of an auto-discoverable API is called a "web browser", which you are using right this moment, as we speak.
So, given a HATEOAS API, and stock Firefox (or Chrome, or Safari, or whatever), it will generate client views with CRUD functionality?
Let alone ux affordances, branding, etc.
Yes. You used such an api to post your reply. And I am using it as well, via the affordances presented by the mobile safari hypermedia client program. Quite an amazing system!
No. I was served HTML, not a JSON response that the browser discovered how to display.
html is the hateoas response
This is true, but isn’t this quite far away from the normal understanding of API, which is an interface consumed by a program? Isn’t this the P in Application Programming Interface? If it’s a human at the helm, it’s called a User Interface.
I agree that's a common understanding of things, but I don't think that it's 100% accurate. I think that a web browser is a client program, consuming a RESTful application programming interface in the manner that RESTful APIs are designed to be consumed, and presenting the result to a human to choose actions.
I think if you restrict the notion of client to "automated programs that do not have a human driving them" then REST becomes much less useful:
https://htmx.org/essays/hypermedia-clients/
https://intercoolerjs.org/2016/05/08/hatoeas-is-for-humans.h...
AI may change this at some point.
If you allow the notion of client to include "web browser driven by humans", then what is it about Fielding's dissertation that is considered so important and original in the first place? Sure it's formal and creates some new and precise terminology, but the concept of browsing was already well established when he wrote it.
The web browser is just following direct commands. The auto discovery and logic is implemented by my human brain
Yes.
https://intercoolerjs.org/2016/05/08/hatoeas-is-for-humans.h...
Wait what? So everything is already HATEOAS?
I thought the “problem” was that no one was building proper restful / HATEOAS APIs.
It can’t go both ways.
https://htmx.org/ might be the closest attempt?
https://data-star.dev are taking things a bit further in terms of simplicity and performance and hypermedia concepts. Worth a look.
I think OData isn't used, and that's a proper standard and a lower bar to clear. HATEOAS isn't even benefiting from a popular standard, which is both a cause and a result.
You realize that anyone using a browser to view HTML is using HATEOAS, right? You could probably argue whether SPAs fit the bill, but for sure any server-rendered or static site is using HATEOAS.
The point isn't that clients must have absolutely no prior knowledge of the server, it's that clients shouldn't have to have complete knowledge of the server.
We've grown used to that approach because most of us have been building tightly coupled apps where the frontend knows exactly how the backend works, but that isn't the only way to build a website or web app.
Can you be more specific? What exactly is the partial knowledge? And how is that different from non-conforming APIs?
HATEOAS is anything that serves the talking point now apparently
UI designers want control over the look of the page in detail. E.g. some actions that can be taken on a resource are a large button and some are hidden in a menu or not rendered in the UI at all.
A client application that doesn't have any knowledge about what actions are going to be possible with a resource, instead rendering them dynamically based on the API responses, is going to make them all look the same.
So RESTful APIs as described in the article aren't useful for the most common use case of Web APIs, implementing frontend UIs.
This is wrong on many levels.
1. UX designers operate at every stage of the software development lifecycle, from product discovery to post-launch support (validation of UX hypotheses); they do not exercise control, they work within constraints as part of the team. The location of a specific action in the UI, and the interaction triggering it, is orthogonal to the availability of that action. Availability is defined by the state. If the state restricts certain actions, the UX must reflect that.
2. From an architectural point of view, once you encapsulate the state-checking behavior, the following will work the same way: "if (state === something)" and "if (resource.links["action"] !== null)". The latter approach is much better, because in most cases any state-changing action will require validation on the server, so you can implement the logic only once (on the server); a small sketch of this follows after this comment.
I have been developing HATEOAS applications for quite a while and maintain HAL4J library: there are some complexities in this approach, but UI design is certainly not THE problem.
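A minimal sketch of that single-source-of-truth idea (the resource, the state names, and the ui helper are invented for illustration):

    # Server side: the "is this action available?" rule lives in one place.
    def order_representation(order):
        links = {"self": {"href": f"/orders/{order.id}"}}
        if order.state == "pending":  # the state check happens here, once
            links["cancel"] = {"href": f"/orders/{order.id}/cancel"}
        return {"id": order.id, "state": order.state, "_links": links}

    # Client side: no duplicated business rule, just "did the server send the link?"
    def render_actions(resource, ui):
        # ui is a stand-in for whatever presentation layer is in use
        if "cancel" in resource["_links"]:
            ui.show_cancel_button(resource["_links"]["cancel"]["href"])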
My experience with "RESTful APIs" rarely has much to do with the UI. Why even have any API if all you care about is the UI? Why not go back to server driven crap like DWR then?
My experience is that SPAs have been the way to make frontends, for the last eight years or so. May be coming to an end now. Anyway, contact with the backend all went through an API.
During that same time, the business also wanted to use the fact that our applications had an API as a selling point - our customers are pretty technical and some of them write scripts against our backends.
Backenders read about API design, they get the idea it should be REST-like (as in, JSON, with different HTTP methods for CRUD operations).
And of course we weren't going to have two separate APIs, that we ran our frontends on our API was another selling point (eat your own dog food, proof that the API can do everything our frontend can, etc).
So: the UI runs on a REST API.
I'm hoping that we'll go back to Django templates with a sprinkle of HTMX here and there in the future, but who knows. That will probably be a separate backend that runs in front of this API then...
> our applications had an API as a selling point - our customers are pretty technical and some of them write scripts against our backends
It is a selling point. A massive one if you're writing enterprise software. It's not merely about "being technical", but mandatory for recurring automated jobs and integration with their other software.
What's often missed when this topic comes up is the question of who the back end API is intended for.
REST and HATEOAS are beneficial when the consumer is meant to be a third party that doesn't directly own the back end. The usual example is a plain old HTML page, the end user of that API is the person using a browser. MCP is a more recent example, that protocol is only needed because they want agents talking to APIs they don't own and need a solution for discoverability and interpretability in a sea of JSON RPC APIs.
When the API consumer is a frontend app written specifically for that backend, the benefits of REST often just don't outweigh the costs. It takes effort to design a more generic, better documented and specified API. While I don't like using tools like tRPC in production, it's hugely useful for me when prototyping for much the same reason: I'm building both ends of the app and it's faster to ignore separation of concerns.
edit: typo
agree very strongly and think it goes even deeper than that!
https://intercoolerjs.org/2016/05/08/hatoeas-is-for-humans
https://htmx.org/essays/hypermedia-clients
*HATEOAS
> The core problem it addresses is client-server coupling. There are probably countless projects where a small change in a server’s URI structure required a coordinated (and often painful) deployment of multiple client applications. A HATEOAS-driven approach directly solves this by decoupling the client from the server’s namespace. This addresses the quality of evolvability.
Not sure I agree with this. All it does is move the coupling problem around. A client that doesn't understand where to find a URL in a document (or even which URLs are available for what purpose within that document) is just as bad as a client that assumes the wrong URL structure.
At some point, the client of an API needs to understand the semantics of what that API provides and how/where it provides those semantics. Moving it from a URL hierarchy to a document structure doesn't provide a huge amount of added value. (Particularly in a world where essentially all of the server APIs are defined in terms of URL patterns routing to handlers. This is explicit hardcoded encouragement to think in a style in opposition to the HATEOAS philosophy.)
I also tend to think that the widespread migration of data formats from XML to JSON has worked against "Pure" REST/HATEOAS. XML had/has the benefit of a far richer type structure when compared to JSON. While JSON is easier to parse on a superficial level, doing things like identifying times, hyperlinks, etc. is more difficult due to the general lack of standardization of these things. JSON doesn't provide enough native and widespread representations of basic concepts needed for hypertext.
(This is one of those times I'd love some counterexamples. Aside from the original "present hypertext documents to humans via a browser" use case, I'd love to read more about examples of successful programmatic APIs written in a purely HATEOAS style.)
This is what I don’t understand either.
/user/123/orders
How is this fundamentally different than requesting /user/123 and assuming there’s a link called “orders” in the response body?
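Spelled out, the two client styles being contrasted look something like this (the paths, the relation name, and the http_get helper are all illustrative stand-ins):

    # Style 1: the URI structure is baked into the client.
    orders = http_get(f"/user/{user_id}/orders")

    # Style 2: only the entry point and the relation name are known;
    # the URI itself comes from the server's response.
    user = http_get(f"/user/{user_id}")
    orders = http_get(user["_links"]["orders"]["href"])

    # Either way the client has to know that "orders" is a thing; the
    # coupling moves from the URI template to the link relation name.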
Good.
Strict HATEOAS is bad for an API as it leads to massively bloated payloads. We _should_ encode information in the API documentation or a meta endpoint so that we don't have to send tons of extra information with every request.
Similarly, I call Java programs "Object Oriented programs" despite Alan Kay's protests that it isn't at all what Object Orientation was described as in early papers.
The sad truth is that it's the less widely used concept that has to shift terminology, if it comes into wide use for something else or a "diluted" subset of the original idea(s). Maybe the true-OO-people have a term for Kay-like OO these days?
I think the idea of saving "REST" to mean the true Fielding style including HATEOAS and everything is probably as futile as trying to reserve OO to not include C++ or Java.
> By using HATEOAS and referencing schema definitions (such as XSD or JSON Schema) from within your resource representations, you can enable clients to understand the structure of the data and navigate the API dynamically.
I actually think this is where the problem lies in the real world. One of the most useful features of a JSON schema is the "additionalProperties" keyword. If applied to the "_links" subschema we're back to the original problem of "out of band" information defining the API.
I just don't see what the big deal is if we have more robust ways of serving the docs somewhere else outside of the JSON response. Would it be equivalent if the only URL in "_links" that I ever populate is a link to the JSONified Swagger docs for the "self" path for the client to consume? What's the point in even having "_links" then? How insanely bloated would that client have to be to consume something that complicated? The templates in Swagger are way more information dense and dynamic than just telling you what path and method to use. There's often a lot more for the client to handle than just CRUD links and there exists no JSON schema that could be consistent across all parts of the API.
> If you are building a public API for external developers you don’t control, invest in HATEOAS. If you are building a backend for a single frontend controlled by your own team, a simpler RPC-style API may be the more practical choice.
My conclusion is exactly the opposite. In-house developers can be expected (read: cajoled) to do things the "right" way, like follow links at runtime. You can run tests against your client and server. Internally, flexible REST makes independent evolution of the front end and back end easy.
Externally, you must cater to somebody who hard-coded a URL into their curl command that runs on cron and whose code can't tolerate the slightest deviation from exactly what existed when the script was written. In that case, an RPC-like call is great and easy to document. Increment from `/v1/` to `/v2/`, write a BC layer between them and move on.
The article is seemingly accurate, but isn't particularly useful as it is written in FAR too technical of a style.
If anyone wants to learn more about all of this, https://htmx.org/essays and their free https://hypermedia.systems book are wonderful.
You could also check out https://data-star.dev for an even better approach to this.
I think we should focus less on API schemas and more on just copying how browsers work.
Some examples:
It should be far more common for http clients to have well supported and heavily used Cookie jar implementations.
We should lean on Accept headers much more, especially with multiple mime-types and/or wildcards.
Http clients should have caching plugins to automatically respect caching headers.
There are many more examples. I've seen so much of HTTP reimplemented on top of itself over the years, often with poor results. Let's stop doing that. And when all our clients are doing those parts right, I suspect our APIs will get cleaner too.
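A rough illustration of the first couple of examples above with an off-the-shelf client (the URL is made up; the caching library is just one option):

    import requests

    # A Session gives you a persistent cookie jar across requests,
    # much like a browser profile, instead of re-authenticating every call.
    session = requests.Session()
    session.headers["Accept"] = "application/json, text/html;q=0.9, */*;q=0.1"

    resp = session.get("https://api.example.com/reports/latest")
    resp.raise_for_status()

    # Content negotiation: branch on what the server actually sent back.
    if resp.headers.get("Content-Type", "").startswith("application/json"):
        data = resp.json()

    # For cache headers, wrap the session with a caching adapter (e.g. the
    # third-party CacheControl package) rather than re-implementing HTTP
    # caching by hand.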
> REST isn’t about exposing your internal object model over HTTP — it’s about building distributed systems that behave like the web.
I think I finally understand what Fielding is getting at. His REST principles boil down to allowing dynamic discovery of verbs for entities that are typed only by their media types. There's a level of indirection to allow for dynamic discovery. And there's a level of abstraction in saying entities are generic media objects. These two conceptual leaps allow the REST API to be used in a more dynamic, generic way - with benefits at the API level that the other levels of the web stack has ("client decoupling, evolvability, dynamic interaction").
In what context would a user discover parts of a REST API dynamically?
In the simple (albeit niche) case, a UI could populate a list of buttons based on the URIs/verbs that the REST API returns. So the UI would be totally dynamic based on the backend - and so, work pretty generically across REST APIs.
But for a client, UI or otherwise, to make use of a dynamic set of URIs/verbs would require it to either look for a specific keyword (hard coding the intents it can satisfy) or be able to semantically understand the API (which is hard, requires a human).
Oddly, all this stuff is full circle with the AI stuff. The MCP protocol is designed to give AIs text-based descriptions of APIs, so they can reason about how to use them.
The simplest case, and the most common, is that of a browser rendering the HTML response from a website request. The HTML contains the URL links to other APIs that the user can click on. Think of navigating any website.
Academically it might be correct, but shipping real features will in most cases be more important than hitting some textbook definition of correctness.
Sure, you’re right: pragmatics, in practice, are more important than theory.
But you’re assuming that there is a real contradiction between shipping features and RESTful design. I believe that RESTful design can in many cases actually increase feature delivery speed through its decoupling of clients and servers and more deeply due to its operational model.
> its decoupling of clients and servers.
Notice that both of those are plural words. When you have many clients and many servers implementing a protocol a formal agreement of protocol is required. REST (which I will not claim to understand well) makes a formal agreement much easier, but you still need some agreement. However when there is just one server and just one client (I'll count all web browsers as one since the browser protocols are well defined enough) you can go faster by just implementing both sides and testing they work for a long time.
It felt easier going through the post after reading these bits near the end:
> The widespread adoption of a simpler, RPC-like style over HTTP can probably be attributed to practical trade-offs in tooling and developer experience
> Therefore, simply be pragmatic. I personally like to avoid the term “RESTful” for the reasons given in the article and instead say “HTTP” based APIs.
I prefer to call them "REST-like" APIs
Yeah but why cause needless confusion? The colloquial definition of "RESTful" is better understood as just something you defined using the OpenAPI spec. All other variants of "HTTP API" are likely hot garbage nobody wants anyway.
Drake meme for me:
REST = Hell No
GQL = Hell No.
RPC with status codes = Grin and point.
I like to get stuff done.
Imagine you are forced to organize your code files like REST. Folder is a noun. Functions are verbs. One per folder. Etc. Would drive you nuts.
Why do this for API unless the API really really fits that style (rare).
GQL is expensive to parse and hides information from proxies (200 for everything)
> Imagine you are forced to organize your code files like REST. Folder is a noun. Functions are verbs. One per folder. Etc. Would drive you nuts.
That’s got nothing to do with REST. You don’t have to do that at all with a REST API. Your URLs can be completely arbitrary.
Ok I may have been wrong. I checked the thesis and couldn't see this aspect mentioned. Most of the thesis seems like stuff I agree with. Damn. I'm fighting an impression of REST I had.
> RPC with status codes
Yes. All endpoints POST, JSON in, JSON out (or whatever) and meaningful HTTP status codes. It's a great sweet spot.
Of course, this works only for apps that fetch() and createElement() the UI. But that's a lot of apps.
If I don't want to use an RPC framework or whatever, I just have a dictionary in my server mapping method names to the actual functions. All functions take one param (a dictionary with the data), validate it, use it, and return another single dictionary along with an appropriate status code.
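A minimal sketch of that setup (framework left out; the handler names are invented to match the examples later in this thread):

    # One POST endpoint, one dispatch table; every handler takes a dict
    # and returns a dict plus a status code.
    def get_bookings(params):
        return {"bookings": []}, 200

    def add_booking(params):
        if "date" not in params:
            return {"error": "date is required"}, 400
        return {"id": 123}, 201

    HANDLERS = {
        "getBookings": get_bookings,
        "addBooking": add_booking,
    }

    def handle(request_json):
        method = request_json.get("method")
        if method not in HANDLERS:
            return {"error": "unknown method"}, 404
        return HANDLERS[method](request_json.get("params", {}))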
You can add versions and such but at that point you just use JSON-RPC.
This kind of setup can be much better than REST APIs for certain use cases
>All endpoints POST
This makes automating things like retrying network calls hell. You can safely assume a GET will be idempotent, and safely retry on failure with delay. A POST might, or might not also empty your bank account.
HTTP verbs are not just for decoration.
> not just for decoration
Still, they are just a convention.
When you are retrying an API call, you are the one calling the API; you know whether it's a getBookings() or an addBooking() API. So write the client code based on that.
Instead of the API developer making sure GET /bookings is idempotent, he is going to be making sure getBookings() is idempotent. Really, what is the difference?
As for the benefits, you get a uniform interface, no quirks with URL encoding, no nonsense with browsers pre-loading, etc. It's basically full control with zero surprises.
The only drawback is with cookies. Samesite: Lax depends on you using GET for idempotent actions and POST for unsafe actions. However, I am advocating the use of this only for "fetch() + createElement() = UI" kind of app, where you will use tokens for everything anyways.
I love all the comments here saying that you can't build a proper UX/UI with a "perfect" REST API, even though browsers do it all day, every day.
REST includes code-on-demand as part of the style, HTTP allows for that with the "Link" header and HTML via <script>.
I struggle to believe that any API in history has been improved by the developer more faithfully following REST’s strictures. The closest we’ve come to actually decoupled, self describing APIs is MCP, and that required inventing actual AIs to understand them.
The most successful API in history – the World-Wide Web – uses REST principles. That’s where REST came from. It was somebody who was involved in the creation of the early web who looked at it and wrote down a description of what properties of the web made it so successful.
REST on the WWW only works because humans read and interpret the results. Arguably, that’s not an API (Application Programming Interface) but a UI (User Interface).
I have yet to see an API that was improved by following strict REST principles. If REST describes the web (a UI, not an API), and it’s the only useful example of REST, is REST really meaningful?
> REST on the WWW only works because humans read and interpret the results.
This is very obviously not true. Take search engine crawlers, for example. There isn’t a human operator of GoogleBot deciding which links to follow on a case-by-case basis.
> I have yet to see an API that was improved by following strict REST principles.
I see them all the time. It’s ridiculous how many instances of custom logic in APIs can be replaced with “just follow the link we give you”.
This is, almost canonically, the subject of Joel Spolsky's architecture astronauts essay.
It’s not. It’s pretty much the opposite. This is what he’s talking about:
> our clever thinker invents a new, higher, broader abstraction
> When you go too far up, abstraction-wise, you run out of oxygen.
> They tend to work for really big companies that can afford to have lots of unproductive people with really advanced degrees that don’t contribute to the bottom line.
REST is the opposite. REST is “We did this. It worked great! This is why.” And web developers around the world are using this every single day in practical projects without even realising it. The average web developer uses REST, including HATEOAS, all the time, and it works great for them. It’s just when they set out to do it on purpose, they often get distracted by some weird fake definition of REST that is completely different.
That's absolutely not what the essay is about. It's about the misassignment of credit for the success of a technology by people who think the minutiae of the clever implementation was important.
Wasn't the entire point of calling an API RESTful that it's explicitly not REST, but only kind of REST-like?
Also, who determined these rules are the definition of RESTful?
RESTful means that it respects REST constraints. One is an adjective and the other a noun (like "state" and "stateless").
> Also, who determined these rules are the definition of RESTful?
Roy Fielding.
HATEOAS + a Document Type Description that includes an (ideally internationalized) natural-language description in addition to the machine-readable one is what MCP should have been.
I have always said that HATEOAS starting with “HATE” is highly descriptive of my attitude toward it.
It is a fundamentally flawed concept that does not work in the real world. Full stop.
At my FAANG company, the central framework team has taken calling what people do in reality HTTP bindings. https://smithy.io/2.0/spec/http-bindings.html
I am wondering if anyone can resolve this misunderstanding of REST for me…
If the backend provides a _links map which contains “orders” for example in the list - doesn’t the front end need to still understand what that key represents? Is there another piece I am missing that would actually decouple the front end from the backend?
I see a lot of people who read Fielding's thesis and found it interesting.
I did not find it interesting. I found it excessively theoretical and proscriptive. It led to a lot of people arguing pedantically over things that just weren't important.
I just want to exchange JSON-structured messages over HTTP, using the least amount of HTTP required to implement request and response. I'm also OK with protocol buffers over grpc, or really any decent serialization technology over any well-implemented transport. Sometimes it's CRUD, sometimes it's inference, sometimes it's direct actions on a server.
Hmm. I should write a thesis. JSMOHTTP (pronounced "jizmo-huttup")
i completely agree with you. the author's approach seems complex and unnecessary. my basic expectation when I see something labeled as a REST API is:
1. i can submit a request via HTTP
2. data is returned as JSON by a response
3. the most minimal amount of HTTP/Pagination necessary is required
Hot take: HATEOAS only works when humans are navigating.
The thing to internalize about "true" REST is that HN (and the rest of the web) is really a RESTful web service. You visit the homepage, a hypermedia format is delivered to a generic client (your browser), and its resources (pages, sections, profiles, etc) can all be navigated to by following links.
Links update when you log in or out, indicating the state of your session. Vote up/down links appear or disappear based on one's profile. This is HATEOAS.
Link relations can be used to alter how the client (browser) interprets the link—a rel="stylesheet" causes very different behavior from rel="canonical".
JavaScript even provides "code on-demand", as it's called in Fielding's paper.
From that perspective, REST is incredible. REST is extremely flexible, scalable, evolvable, etc. It is the pattern that powers the web.
Now, it's an entirely different story when it comes to what many people call REST APIs, which are often nothing like HN. They cannot be consumed by a generic client. They are not interlinked. They don't ship code on-demand.
Is "REST" to blame? No. Few people have time or reason to build a client as powerful as the browser to consume their SaaS product's API.
But even building a truly generic client isn't the hardest thing about building RESTful APIs—the hardest thing is that the web depends entirely on having a human-in-the-loop and your standard API integration's purpose is to eliminate having a human in the loop.
For example, a human reads the link text saying "Log in" or "Reset password" and interprets that text to understand the state of the system (they do not have an authenticated session). And a human can reinterpret a redesigned webpage with links in a new location, but trivial clients can't reinterpret a refactored JSON object (or XML for that matter).
The folly is in thinking that there's some design pattern out there that's better than REST without understanding that the actual problem to be solved by that elusive, perfect paradigm is how you'll be able to refactor your API when your API's clients will likely be bodged-together JS programs whose authors dug through JSON for the URL they needed and then hard-coded it in a curl command instead of conscientiously and meticulously reading documentation and semantically looking up the URL at runtime, follows redirects, and handles failures gracefully.
You know what type of API I like best?
Because that is the easiest to implement, the easiest to write, the easiest to manually test and tinker with (by writing it directly into the URL bar), the easiest to automate (curl .../draw_point?x=7&y=20). It also makes it possible to put it into a link and into a bookmark. This is also how HN does it.
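For what it's worth, the server side of that style is about as small as it gets; here is a sketch using Flask, with the endpoint name taken from the curl example above (and, as the replies below point out, GET-with-side-effects has real downsides):

    from flask import Flask, request

    app = Flask(__name__)
    points = []

    # Every action is a plain GET with query parameters, so it can be typed
    # into the URL bar, bookmarked, or curl'd.
    @app.route("/draw_point")
    def draw_point():
        x, y = int(request.args["x"]), int(request.args["y"])
        points.append((x, y))
        return {"ok": True, "count": len(points)}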
This is great for APIs that only have a few actions that can be taken on a given resource.
REST APIs then are especially suited for acting as a gateway to a database, to easily CRUD and fetch lists of information.
The best APIs I've seen mix and match both patterns: RESTful API endpoints for data, "function call" endpoints for often-used actions like voting, bulk actions and other things that the client needs to be able to do, but where you want the API to be in control of how it is applied.
> REST APIs then are especially suited for acting as a gateway to a database, to easily CRUD and fetch lists of information.
I don't disagree, but I've found (delivering LoB applications) that they are not homogenous: The way REST is implemented, right now, makes it not especially suitable for acting as a gateway to a database.
When you're free of constraints (i.e. a greenfield application) you can do better (in terms of reliability, product feature velocity, etc.) by not using a tree exchange format (XML or JSON).
Because then it's not just a gateway to a database, it's an ill-specified, crippled, slow, unreliable and ad-hoc ORM: it tries to map trees (objects) to tables (relations) and vice versa, with predictably poor results.
Can you give an example of an endpoint where you would prefer a "RESTful API endpoint"?
If you type it into the URL bar, it will use GET.
Surely you're not advocating mutating data with GET?
What's your problem with it?
Bots, browsers that preload URLs, caching (both browser and backend and everything in between), the whole infrastructure of the Web that assumes GET never mutates and is always safe to repeat or serve from cache.
Using GET also circumvents browser security stuff like CORS, because again the browser assumes GET never mutates.
So why is there no problem with vote/flag/vouch on HN being GET endpoints?
Then that does not conform to the HTTP spec. GET endpoints must be safe, idempotent, and cacheable. Anything else opens up a site to cases where web crawlers/scrapers may wreak havoc.
There is, it's bad. Luckily votes aren't very crucial.
Votes are crucial. HN goes to great lengths to prevent votes that do not stem from real user intent.
See this post for example:
https://news.ycombinator.com/item?id=22761897
Quotes:
"Voting ring detection has been one of HN's priorities for over 12 years"
"I've personally spent hundreds of hours working on this"
https://news.ycombinator.com/item?id=3742902
Indeed, user-embedded pictures can fire GET requests but cannot make POST requests. But this is not a problem if you don't allow users to embed pictures, or if you authenticate the GET request somehow. Anyway, GET requests are just fine.
The same would have worked with a POST endpoint.
The story URL would only have to point to a web page that creates the upvote POST request via JS.
That runs into CORS protections though.
CORS is a lot less strict around GET as it is supposed to be safe.
Nope, it would not have been prevented by CORS.
CORS prevents reading from a resource, not from sending the request.
If you find that surprising, think about that the JS could also have for example created a form with the vote page as the target and clicked on the submit button. All completely unrelated to CORS.
> CORS prevents reading from a resource
CORS does nothing of the sort. It does the exact opposite – it’s explicitly designed to allow reading a resource, where the SOP would ordinarily deny it.
Even mdn calls it "violating the CORS security rules" instead of SOP rules: https://developer.mozilla.org/en-US/docs/Web/HTTP/Guides/COR...
Anyway, this is lame low effort trolling for some unknown purpose. Stop it.
That any bot crawling your website is going to click on your links and inadvertently mutate data.
Reading your original comment I was thinking "Sure, as long as you have a good reason of doing it this way anything goes" but I realized that you prefer to do it this way because you don't know any better.
If you rely on the HTTP method to authenticate users to mutate data, you are completely lost. Bots and humans can send any method they like. It's just a string in the request.
Use cookies and auth params like HN does for the upvote link. Not HTTP methods.
You say that, but there are lots of security features like SameSite=Lax that are built on the assumption that GET requests are harmless.
> If you rely on the HTTP method to authenticate users to mutate data, you are completely lost
I don't know where you are getting that from but it's the first time I've heard of it.
If your link is indexed by a bot, then that bot will "click" on your links using the HTTP GET method—that is a convention and, yes, a malicious bot would try to send POST and DELETE requests. For the latter, this is why you authenticate users but this is unrelated to the HTTP verb.
> Use cookies and auth params like HN does for the upvote link
If it uses GET, this is not standard and I would strongly advise against it except if it's your pet project and you're the only maintainer.
Follow conventions and make everyone's lives easier, ffs.
There was a post about Garage opener I read here sometime back. https://news.ycombinator.com/item?id=16964907
That’s pretty bad design. Only GETs should include a querystring. Links should only read, not create, update or delete.
> Only GETs should include a querystring.
Why?
Because HTTP is a lot more sophisticated than anyone cares to acknowledge. The entire premise of "REST", as it is academically defined, is an oversimplification of how any non-trivial API would actually work. The only good part is the notion of "state transfer".
Not a REST API, but I've found it particularly useful to include query parameters in a POST endpoint that implements a generic webhook ingester.
The query parameters allow us to specify our own metadata when configuring the webhook events in the remote application, without having to modify our own code to add new routes.
I used to do that but I've been fully converted to the REST and CRUD gang. Once you establish the initial routes and objects, it's really easy to mount everything else on them and move fast with changes. Also, using tools like httpie it's super easy to test anything right in your terminal.
You're going to run into all kinds of security issues if you let GET endpoints have side effects.
See https://stackoverflow.com/a/29520505/771665
The term has caused so much bikeshedding and unnecessary confusion.
It is not sufficient to crawl the API. The client also needs to know how to display the forms, which collect the data for the links presented by the API. If you want to crawl the API, you also have to crawl the whole client GUI.
> The core problem it addresses is client-server coupling. There are probably countless projects where a small change in a server’s URI structure required a coordinated (and often painful) deployment of multiple client applications. A HATEOAS-driven approach directly solves this by decoupling the client from the server’s namespace. This addresses the quality of evolvability.
Eh, "a small change in a server’s URI structure" breaks links, so already you're in trouble.
But sure, embedding [local-parts of] URIs in the contents (or headers) exchanged is indeed very useful.
In my experience REST is just a code word for a distributed glob of function calls which communicate via JSON. It's a development and maintenance nightmare.
I tried to follow the approach with hypermedia and discoverable resources/actions in my hobby projects. But I "failed" at the point where this would mean additional HTTP calls from the client to "discover" a resource and its actions. Given the latency of each extra HTTP call, relatively speaking, this did not seem worth it to me.
This doesn’t provide any good arguments for why Roy Fielding’s conception should be taken as the gospel of how things should be done. At best, it points out that what we call REST now isn’t what Roy Fielding wanted.
Furthermore, it doesn’t explain how Roy Fielding’s conception would make sense for non-interactive clients. The fact that it doesn’t make sense is a large part of why virtually nobody is following it.
Why doesn't Fielding's conception make sense for non-interactive clients?
Take this quote: “A REST API should be entered with no prior knowledge beyond the initial URI (bookmark) and set of standardized media types that are appropriate for the intended audience (i.e., expected to be understood by any client that might use the API). From that point on, all application state transitions must be driven by client selection of server-provided choices that are present in the received representations or implied by the user’s manipulation of those representations.”
If the client application only understands media types and isn’t supposed to know anything about the interrelationships of the data or possible actions on it, and there is no user that could select from the choices provided by the server, then it’s not clear how the client can do anything purposeful.
Surely, an automated client, or rather its developer, needs a model (a schema) of what is possible to do with the API. Roy Fielding doesn't address that aspect at all. At best, his REST API would provide a way for the client to map its model to the actual server calls to make, based on configuration information provided by the server as "hypertext". But the point of such an indirection is unclear, because the configuration information itself would have to follow a schema known and understood by the client, so again wouldn't be RESTful in Roy Fielding's sense.
People are trying to fill in the blanks of what Roy Fielding might have meant, but in the end it just doesn’t make a lot of sense for what REST APIs are used in practice.
As I replied to the sibling comment, you're misunderstanding rest and hypermedia. The "schema" is html and the browser is the automated client that is exceptionally good at rendering whatever html the backend has decided to send.
Browsers are interactive clients, the opposite of automated clients. What you are saying supports the conclusion that Roy Fielding’s conception is unsuitable for non-interactive clients. However, the vast majority of real-world REST APIs are targeting automation, hence it doesn’t make sense for them to be “RESTful”.
Sorry, perhaps we're talking past each other.
Fielding was absolutely not saying that his REST was the One True approach. But it DOES mean something
The issue at hand here is that he coined REST and the whole world is using that term for something completely unrelated (eg an http json api).
You could start writing in binary here if you thought that that would be a more appropriate way to communicate, but it wouldn't be English (or any humanly recognizable language) no matter how hard you try to say it is.
If you want to discuss whether hypermedia/rest/hateaos is a better approach for web apps than http json APIs, I'd encourage you to read htmx.org/essays and engage with that community who find it to be an enormous liberation.
It may mean something, but Roy Fielding went out of his way, over many years, to not talk about the actual use cases he had in mind. It would have been easy for him to clarify that he was only talking about interactive browser applications. But he didn't. And the people who came up with HATEOAS didn't think he was. Nor did any of the blog articles that are espousing the alleged virtues of RESTfulness. So it's not surprising that the term "REST" was appropriated for something else. In any case, it's much too late to change that; it's water under the bridge.
I’m only mildly interested in discussing hypothetical hypermedia browsers, for which Roy Fielding’s conception might be well and good (but also fairly incomplete, IMO). What developers care about is how to design HTTP-based APIs for programmatic use.
How are web browsers hypothetical? We're using one with rest/hateoas/hypermedia right now...
You don't seem to have even the slightest idea of what you're talking about here. Again, I suggest checking out the htmx essays and their hypermedia.systems book
In a non-interactive case, what is supposed to be reading a response and deciding which links to do something with, or what to do with them?
Let's say you've got a non-interactive program to get daily market close prices. A response returns a link labelled "foobarxyz", which is completely different to what the API returned yesterday and the day before.
How is your program supposed to magically know what to do? (without your input/interaction)
Why does "your program" need to know anything? The whole point of hypermedia is that there isn't any "program" other than the web browser that agnostically renders whatever html it receives. If the (backend) "program" development team decides that a foobarxyz link should be returned, then that's what is correct.
I suspect that your misunderstanding is because you're still looking at REST as a crud api, rather than what it actually is. That was the point of this article, though it was too technical.
https://htmx.org/essays is a good introduction to these things
ElasticSearch and OpenSearch are certainly egregiously guilty of this. Their API is an absolute nightmare to work with if you don't have a supported native client. Why such a popular project doesn't have an easy-to-use OpenAPI spec document in this day and age is beyond me.
This post follows the general, highly academic/dogmatic, tone that I’ve seen when certain folks talk about REST. Most of the article talks about what _not_ to do, and has very little details on how to actually do it.
The idea of having client/server decoupled via a REST api that is itself discoverable, and that allows independent deployment, seems like a great advantage.
However, the article lacks even the simplest example of an api done the “wrong” vs the “right” way. Say I have a TODO api, how do I make it so that it uses HATEOAS (also who’s coming up with these acronyms…smh)?
Overall the article comes across more as academic pontification on “what not to do” instead of actionable advice.
Agreed. I wish there was some examples to better understand what the author means. Like, in a web app, do i have any prior knowledge about the "_links" actions? Do I know that the server is going to return the actions "self" and "activate"? Is the idea to hide the routes from the user until the api call, but he should know that the api could return actions like "self", "activate" or "deactivate"? How do you communicate that an action requires a specific body? For example, the call activate is done in POST and expect a json body with a date inside. How do you tell that to the user?
> However, the article lacks even the simplest example of an api done the “wrong” vs the “right” way.
Unless the design and requirements are unusually complex or extreme, all styles of API and front end work well enough. Any example would have to be lengthy, to provide context for the advantages of "true" ReST architecture, and contrived.
Most databases aren't relational, either, in the sense that Codd defined relational. They are, instead, useful.
Ah yes - nobody is doing REST correctly. My favorite form of bikeshedding.
If you want to produce better APIs, try consuming them. A lot of places have this clean split between backend and frontend teams. They barely talk to each other sometimes. And a pattern I've seen over and over again is that some product manager decides feature X is needed. The backend team goes to work and delivers some API for feature X and then the frontend team has to consume the API. These APIs aren't necessarily very good if the backend people don't understand how the frontend uses them.
The symptom is usually if a seemingly simple API change on the backend leads to a lot of unexpected client side complexity to consume the API. That's because the API change breaks with some frontend expectation/assumption that frontend developers then need to work around. A simple example: including a userId with a response. To a frontend developer, the userId is not useful. They'll need a user name, a profile photo, etc. Now you get into all sorts of possible "why don't you just .." type solutions. I've done them all. They all have issues and it leads to a lot of complexity on either the server or the client.
You can bloat your API and calculate all this server side. Now all your API calls that include a userId gain some extra fields. Which means extra lookups and joins. So they get a bit slower as well. But the frontend can pretend that the server always tells it everything it needs. The other solution is to look things up from the frontend. This adds overhead. But if the frontend is clever about it, a lot of that information is very cachable. And of course graphql emerged to give frontend developers the ability to just ask for what they need from some microservices.
All these approaches have pros and cons. Most of the complexity is about what comes back, not about how it comes back or how it is parsed. But it helps if the backend developers are at least aware of what is needed on the frontend. A good way is to just do some front end development for a while. It will make you a better backend developer. Or do both. And by that I don't mean do javascript everywhere and style yourself as a full stack developer because you whack all nails with the same hammer. I mean doing things properly and experiencing the mismatches and friction for yourself. And then learn to do it properly.
The above example with the userIds is real. I've had to deal with that on multiple projects. And I've tried all of the approaches. My most recent insight here is that user information changes infrequently and should be looked up separately from other information asynchronously and then cached client side. This keeps APIs simple and forces frontend developers to not treat the server as a magical oracle and instead do sane things client side to minimize API calls and deal with application state. Good state management is key. If you don't have that, dealing with stateless network protocols (like REST) is painful. But state has to live somewhere and having it client side makes you less dependent on how the server side state management works. Which means it's easier to fix things when that needs to change.
Basically JSON-RPC really, and a better use of HTTP verbs, most of the time.
Ironically it feels like GraphQL is more RESTful than most REST api's if we want to follow Fielding's paper.
Except for discoverability, nice URLs, and meaningful HTTP methods.
Did you just say "discoverability" is an issue with GraphQL with a straight face?
There are plenty of valid criticisms, but that is not one, in fact thats where it shines.
Discoverability of resources starting from a root URL is what I meant, which is probably moot, because GraphQL wants you to use just one. :D
We collectively glazed over Roy Fielding's dissertation, didn't really see the point, liked the sound of the word "REST" and used it to describe whatever we wanted to do with http / json. Sorry, Roy, but you can keep HATEOAS - no one is going to take that from you.
At some point, we built REST clients so generic they could handle nearly any use case. Honestly, building truly RESTful APIs has been easy for ages, just render HTML on the server and send it to the browser. That's 100% REST with no fuss.
The irony is, when people try to implement "pure REST" (as in Level 3 of the Richardson Maturity Model with HATEOAS), they often end up reinventing a worse version of a web browser. So it's no surprise that most developers stop at Level 2—using proper HTTP verbs and resource-based URIs. Full REST just isn't worth the complexity in most real-world applications.
RESTful APIs are not RESTful because REST is meh. Our APIs include HATEOAS links and I have never, not once, witnessed their actual use (but they do double the size of response payloads).
It’s interesting that Stripe still even uses form-post on requests.
> Our APIs include HATEOAS links and I have never, not once, witnessed their actual use (but they do double the size of response payloads).
So your payloads look like this:
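Presumably something along these lines (the field names are guessed from the follow-up question below, not taken from the actual API):

    {
      "things": [ ... ],
      "next-id": 124,
      "next-href": "/things?after=124"
    }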
And rather than just using next-href, your clients append next-id to a hardcoded things base URL? That seems like way more work than doing it the REST way.

And not everything in reality maps nicely to hypermedia conventions. The problem with REST is trying to shoehorn a lot of problems into a set of abstractions that were initially created for documents.
Nooooo not this discourse again.
I spent years fussing about getting all of my APIs to fit the definition of REST and to do HATEOAS properly. I spent way too much time trying to make everything conform as an action on a resource. Now, don't get me wrong. It is quite helpful to try to model things as stateless resources with a limited set of actions on them, and to think about idempotency for specific actions in ways I don't think we did properly in the SOAP days (at least I didn't). And in many cases it led to less brittle interfaces which were easier to reason about.
I still like REST and try to use it as much as I can when developing interfaces, but I am not beholden to it. There are many cases which are not resources or are not stateless, and sure, you can find some obtuse way to make them resources. But at times that either leads to bad abstractions that don't convey the vocabulary of the underlying system (and thus, over time, creates a rift in context between the interface and the underlying logic), or we expose underlying implementation details because they are easier to model as resources.
https://htmx.org/img/memes/dbtohtml.png
LMAO all companies asking for extensive REST API design/implementation experience in their job requirements, along with the latest hot frontend frameworks.
I should probably fire back by asking if they know what they're asking for, because I'm pretty sure they don't.
Who cares, honestly? I never understood this debate; nobody has ever produced a perfect RESTful API anyway
Unless you really read and followed the paper, just call it a web api and tell your sales people to do the same. Calling it REST makes you sound like a manager that hasn't done any actual dev in 15 years.
I find it pretty shocking that this was written in 2025 without a mention of the fact that the only clients evolvable enough to interface with a REST API can be categorized into these three types:
1. Browsers and "API Browsers" (think something like Swagger)
2. Human and Artificial Intelligence (basically LLMs)
3. Clients downloaded from the server
You'd think that they'd point out these massive caveats. After all, the evolvable client that can handle any API, which is the thing that Roy Fielding has been dreaming about, has finally been invented.
REST and HATEOAS were intentionally developed against the common use case of a static, non-evolving client, such as an Android app that isn't a browser.
Instead you get this snarky blog post telling people that they are doing REST wrong, rather than pointing out that REST is something almost nobody needs (self discoverable APIs intended for evolvable clients).
If you wanted to build e.g. the matrix chat protocol on top of REST, then Roy Fielding would tell you to get lost.
If what I'm saying doesn't make sense to you, then your understanding of REST is insufficient, but let me tell you that understanding REST is a meaningless endeavor, because all you'll gain from that understanding is that you don't need it.
In REST clients are not allowed to have any out of band information about the structure or schema of the API.
You are not allowed to send GET, POST, PUT, DELETE requests to client constructed URLs.
Now that might sound reasonable. After all HATEOAS gives you all the URLs so you don't need to construct them.
Except here is the kicker. This isn't some URL specific thing. It also applies to the attributes and links in the response. You're not allowed to assume that the name "John Doe" is stored under the attribute "name" or that the activate link is stored in "activate". Your client needs to handle any theoretical API that could come from the server. "name" could be "fullName" or "firstNameAndLastName" or "firstAndLastName" or "displayName".
Now you might argue: hey, I'm allowed to parse JSON into a hierarchical object layout [0] and JPEGs into a two-dimensional pixel array to be displayed on a screen, so surely it's just a matter of setting a content type or media type? Then I'll be allowed to write code specific to my resource! Except REST doesn't define or propose any mechanism for application-specific media types. You must register your media type globally, for all humanity, at IANA, or go bust.
This might come across as a rant, but it is meant to be informative, so I'll tell you what REST and HATEOAS are good for: building micro-browsers that rely on human intelligence to act as the magical evolvable client. The way you're supposed to use REST and HATEOAS is by using, e.g., the HAL-FORMS media type to give a logical representation of your form. Your evolvable client then translates the HAL-FORMS document into an HTML form, or an Android form, or a form inside your MMO that happens to have a registration form built into the game itself rather than, say, the launcher.
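To make that less abstract, here's a rough sketch of the idea in TypeScript. The payload is only HAL-FORMS-like (field names are approximations, not a normative example from the spec), and the renderer is a hypothetical generic client that knows the media type but nothing about this particular form:

    // Illustrative sketch only: a HAL-FORMS-like representation of a signup form.
    const signupRepresentation = {
      _links: {
        self: { href: "/api/users" },
      },
      _templates: {
        default: {
          title: "Sign up",
          method: "POST",
          contentType: "application/json",
          properties: [
            { name: "email", prompt: "Email", required: true },
            { name: "password", prompt: "Password", required: true },
          ],
        },
      },
    };

    // A hypothetical generic client: it understands the media type, not this
    // particular form, so the same code could emit an HTML form, an Android
    // layout, or an in-game form. Here it targets HTML.
    function renderForm(rep: typeof signupRepresentation): string {
      const t = rep._templates.default;
      const fields = t.properties
        .map((p) => `<label>${p.prompt}: <input name="${p.name}"${p.required ? " required" : ""}></label>`)
        .join("\n");
      return `<form method="${t.method}" action="${rep._links.self.href}">\n${fields}\n</form>`;
    }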
Needless to say, this is completely useless for machine-to-machine communication, which is where the phrase "REST API" is most commonly (ab)used.
Now for one final comment on this article in particular:
>Why aren’t most APIs truly RESTful?
>The widespread adoption of a simpler, RPC-like style over HTTP can probably be attributed to practical trade-offs in tooling and developer experience: The ecosystem around specifications like OpenAPI grew rapidly, offering immediate benefits that proved irresistible to development teams.
This is actually completely irrelevant and ignores the fact that REST as designed was never meant to be used in the vast majority of situations where RPC over HTTP is used. The use cases for "RPC over HTTP" and REST have incredibly low overlap.
>These tools provided powerful features like automatic client/server code generation, interactive documentation, and request validation out-of-the-box. For a team under pressure to deliver, the clear, static contract provided by an OpenAPI definition was and still is probably often seen as “good enough,”
This feels like a complete reversal and shows that the author of this blog post doesn't understand the practical implications of his own post. The entire point of HATEOAS is that you cannot have automatic client code generation unless it happens at runtime. Generating client code ahead of time is literally not allowed in REST, because it prevents your client from evolving at runtime.
>making the long-term architectural benefits of HATEOAS, like evolvability, seem abstract and less urgent.
Except as I said, unless you have a requirement to have something like a mini browser embedded in a smartphone app, desktop application or video game, what's the point of that evolvability?
>Furthermore, the initial cognitive overhead of building a truly hypermedia-driven client was perceived as a significant barrier.
Significant barrier is probably the understatement of the century. Building the "truly hypermedia-driven client" is equivalent to solving AGI in the machine-to-machine communication use case. The browser use case only works because humans already possess general intelligence.
>It felt easier for a developer to read documentation and hardcode a URI template like /users/{id}/orders than to write a client that could dynamically parse a _links section and discover the “orders” URI at runtime.
Now the author is using snark to appeal to emotions, equating the simplest and most irrelevant problem with the hardest problem in a hand-waving manner. "Those silly code monkeys, how dare they not build AGI! It's as simple as parsing _links and discovering the "orders" URI at runtime." Except, as I said, you're not allowed to assume that there is an "orders" link, since that is out-of-band information. Your client must be intelligent enough to handle more than an API where the "/user/{id}/orders" link happens to be stored under _links. The server is allowed to give the "/user/{id}/orders" link a randomly generated name that changes with every request. It's also allowed to change the URL path to any randomly generated structure, as long as the server is able to keep track of it. The HATEOAS server is allowed to return a human-language description of each field and link, but the client is not allowed to assume that the orders are stored under any specific attribute. Hence you'd need an LLM to know which field is the "orders" field.
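To make the gap concrete, here's a minimal TypeScript sketch (paths and field names are hypothetical) of the two clients being compared. The second one still bakes in the out-of-band knowledge that the relation is called "orders", and even its entry URL is client-constructed, so it isn't the evolvable client the dissertation demands either:

    // Style 1: the hardcoded URI template the article scolds people for.
    async function getOrdersHardcoded(userId: string) {
      const res = await fetch(`/users/${userId}/orders`);
      return res.json();
    }

    // Style 2: "dynamic" discovery via _links. The string "orders" is still
    // out-of-band knowledge about the API, and /users/{id} is still a
    // client-constructed URL, so strict REST forbids this too.
    async function getOrdersViaLinks(userId: string) {
      const user = await (await fetch(`/users/${userId}`)).json();
      const ordersUrl = user?._links?.orders?.href;
      if (!ordersUrl) throw new Error("server no longer exposes a link named 'orders'");
      const res = await fetch(ordersUrl);
      return res.json();
    }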
>In many common scenarios, such as a front-end single-page application being developed by the same team as the back-end, the client and server are already tightly coupled. In this context, the primary problem that HATEOAS solves—decoupling the client from the server’s URI structure—doesn’t present as an immediate pain point, making the simpler, documentation-driven approach the path of least resistance.
Bangs head at desk over and over and over. A webapp that is using HTML and JS downloaded from the server is following the spirit of HATEOAS. The client evolves with the server. That's the entire point of REST and HATEOAS.
[0] Whose contents may only be processed in a structure-oblivious way
> Instead you get this snarky blog post telling people that they are doing REST wrong, rather than pointing out that REST is something almost nobody needs (self discoverable APIs intended for evolvable clients).
We're using actual REST right now. That's what SSR HTML uses.
The rest of your (vastly snarkier) diatribe can be ignored.
And, yet, you then said the following, which seems to contradict the rest of what you said before it...
> Bangs head at desk over and over and over. A webapp that is using HTML and JS downloaded from the server is following the spirit of HATEOAS. The client evolves with the server. That's the entire point of REST and HATEOAS.
> rather than pointing out that REST is something almost nobody needs (self discoverable APIs intended for evolvable clients).
Well, besides that, I don't see how REST solves the problem it says it addresses. So your user object includes an activate field that describes the URI you hit to activate the user. When that URI changes, the client doesn't even notice, because it queries for a user and then visits whatever it finds in the activate field.
Then you change the term from "activate" to "unslumber". How does the client figure that out? How is this a different problem from changing the user activation URI?
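A tiny sketch of that failure mode (attribute and relation names hypothetical): the client survives the URI changing, but not the relation being renamed.

    // Follows whatever URI the server advertises, so a URI change is harmless.
    async function activateUser(user: { _links: Record<string, { href: string }> }) {
      const link = user._links["activate"]; // out-of-band knowledge: the relation name
      if (!link) {
        // If the server renames the relation to "unslumber", this error is all
        // the "evolvability" the client actually gets.
        throw new Error("no 'activate' link on this user");
      }
      return fetch(link.href, { method: "POST" });
    }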
REST(ful) API issues can all be resolved with one addition:
Adding actions to it!
POST to api/registration or api/signup? All of this sucks. Posting or putting to api/user? Also doesn't feel right.
POST to api/user:signup
Boom! Full REST for entities + actions with custom requests and responses for actions!
How do I make a RESTful filter call? GET request params are not enough…
You POST to api/user:search, boom!
(I prefer the description "RESTful API" over "REST API": everyone fails to implement pure REST anyway, and it's unnecessarily limited.)
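A rough sketch of what that convention might look like on the wire (paths and payload fields are just illustrations of the idea above, not an established standard):

    // Action on the user entity type: custom request/response, no artificial
    // "signup" resource in the model.
    await fetch("/api/user:signup", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ email: "jane@example.com", password: "hunter2" }),
    });

    // Complex filtering that wouldn't fit comfortably in GET query params.
    const results = await fetch("/api/user:search", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ createdAfter: "2024-01-01", roles: ["admin", "editor"] }),
    }).then((r) => r.json());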
What is the problem with posting to /user/signup that posting to /user:signup solves?
You might not want a dedicated "Signup" entity in your model and DB.
you would POST to /users
what's the confusion? you're creating a new user entity in the users collection.
r/noshitsherlock
for a lot of places, POST with JSON body is REST
Htmx essays have already been mentioned, so here are my thoughts on the matter. I feel like, to have a productive discussion of REST and HATEOAS, we must first agree on the basics. Repeating my own comment from a couple of weeks ago: the H stands for hypermedia, and hypermedia is a type of media that uses a common format for representing some server-driven state and embedding hypermedia controls, which are presented by a back-end-agnostic hypermedia client to a user for discoverability and interaction.
As such, JSON-driven APIs can't be REST, since there is no common format for representing hypermedia controls, which means there's no way to implement a hypermedia client that can present those controls to the user and facilitate interactions. Is there such an implementation? Yes: HTML is the hypermedia, <input>s and <button>s are the controls, and browsers are the clients. REST and HATEOAS are designed for humans, and trying to somehow combine them with machine-to-machine interaction results in awkward implementations, blurry definitions, and overcomplication.
The Richardson maturity model is a clear indication of those problems; I see it as an admission that "well, there isn't much practicality in doing proper REST for machine-to-machine comms, but that's fine, you can do only some parts of it and it still counts". I'm not saying we shouldn't use its ideas: resource-based URLs are nice, and using the features of HTTP is reasonable. But under the name REST it leads to constant arguments between the "dissertation" crowd and the "the industry has moved on" crowd. The worst/best part is that both crowds are totally right, and this argument will continue for as long as we use HTTP.