(Replying to PARENT post)
If you force your frontend to only use what makes sense as a "general" backend API, it ends up being inefficient: you need 1:n calls to get all the data, a missing filter forces you to pull back everything and filter client-side, and so on.
On the other hand, if you build out your back-end to support every single front-end use case, it gets massive quickly. If other third parties start using it (as part of your published API), you are now forced to support it long term - even if your frontend changes and no longer consumes it - until you finally decide to break backwards compatibility (and the third-party app). Break things enough and your users will quickly hate you.
This is especially true early on, when your frontend is going through rapid and radical changes.
I see BFF as the intermediate, giving the best of both. You can rapidly adapt to front-end needs but don't commit to long-term support. There's actually no need for BC support at all if you can deploy the front-end and BFF atomically.
If you build your code in a structured way, you're sharing the actual business and data logic, and it's trivially easy to "promote" a BFF resource to the official API, with whatever BC commitment you want to provide. The key thing is you are doing this deliberately, and not because the front-end is trying something out that might completely change in the next iteration.
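A minimal sketch of what that "promotion" can look like, assuming an Express-style Node backend; all routes and names below are made up for illustration:

```typescript
// Hypothetical sketch: the BFF route and the "official" API route call the same
// service-layer function, so promoting a BFF resource is mostly a matter of
// exposing it under the stable API with whatever BC commitment you want.
import express from "express";

// Shared business/data logic (names are illustrative).
async function getOrderSummary(userId: string) {
  // ...load orders, totals, etc. from the underlying services/DB
  return { userId, openOrders: 3, lifetimeTotal: 1249.5 };
}

const app = express();

// BFF endpoint: shaped for one screen, free to change with the next frontend iteration.
app.get("/bff/account-dashboard/:userId", async (req, res) => {
  const summary = await getOrderSummary(req.params.userId);
  res.json({ orders: summary }); // plus whatever else this screen needs
});

// "Promoted" public endpoint: same logic, now with a documented, stable contract.
app.get("/api/v1/users/:userId/order-summary", async (req, res) => {
  res.json(await getOrderSummary(req.params.userId));
});

app.listen(3000);
```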
(Replying to PARENT post)
Any half-decent backend API will offer parameters to limit the response of an endpoint, from basic pagination to filtering what info is returned per unit. What's the use of the extra complexity of a "BFF" if those calls can be crafted on the fly?
And to be clear, I am not suggesting that a custom endpoint be crafted for every call that gets made; that just seems like a strawman the article is positing. Rather, calls should specify what info they need.
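For illustration only (the endpoint and parameter names are assumed), that might look like a client crafting its own call:

```typescript
// The client asks a generic endpoint for exactly what it needs via
// pagination and field-selection params; nothing here is BFF-specific.
const res = await fetch(
  "/api/orders?status=open&page=1&pageSize=20&fields=id,total,createdAt"
);
const orders = await res.json(); // only one page, only the requested fields
```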
(Replying to PARENT post)
Our company has been working on a new large platform built around BFF and it's been really nice to work with.
We have several SPAs and mobile apps which interact with our data very differently. It took a bit to break the mindset of using the HTTP layer as the "point of reuse", but now it's far easier to add and maintain functionality.
We feel much more productive compared to past projects where we spent a lot of time trying to make "one API to rule them all" with a single ever-changing model.
(Replying to PARENT post)
Why isn't there just one API gateway with minimal front-end-specific endpoints?
(Replying to PARENT post)
1. Use a single traffic receiver for both the browser application and the terminal application. This thing works like a load balancer: it receives messages from a network and sends them where they need to go. There is only one place to troubleshoot connectivity concerns. You can still choose between monolith and multi-service applications and have as many microservices as you want.
2. Reduce reliance on HTTP. HTTP is a unidirectional communication technology with a forced round trip, which is a tremendous amount of unnecessary overhead/complexity. I prefer a persistent websocket opened from the browser. Websockets are bidirectional and there is no round trip, which means no more polling the server for updates. Instead, everything is an independent event: the browser sends messages to the server at any time, and the server independently sends messages to the browser as quickly as it can. That is real-time communication happening as quickly as your network connection has bandwidth and your application can process it.
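A minimal browser-side sketch of that event model, assuming a /ws endpoint and made-up message shapes:

```typescript
// Everything is an independent event: the server pushes whenever it wants,
// the client sends whenever it wants, and nothing waits on a request/response pair.
const socket = new WebSocket("wss://example.com/ws");

socket.addEventListener("open", () => {
  // Client-initiated message, sent whenever the app feels like it.
  socket.send(JSON.stringify({ type: "subscribe", topic: "orders" }));
});

socket.addEventListener("message", (event) => {
  // Server-initiated message, delivered without the client having to poll.
  const msg = JSON.parse(event.data);
  switch (msg.type) {
    case "order-updated":
      // update UI state here
      break;
    default:
      console.warn("unhandled message", msg);
  }
});
```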
(Replying to PARENT post)
There are a number of downsides to graphql that arise from its overly generic (and yet, at the same time, not generic enough) nature. No HTTP caching of graphql requests is one. Very deeply nested payloads is another. Difficulty of expressing complex query logic through parameters. Inability to natively represent structures such as trees of unknown depth. Special efforts spent on preventing arbitrarily deep/complex queries. And so on.
A BFF does not have any of these problems. It can be made as closely tailored to a particular frontend as possible. It does not need to concern itself with a generic use case.
(Replying to PARENT post)
https://docs.wundergraph.com/docs/use-cases/backend-for-fron...
(Replying to PARENT post)
I am pretty certain that the only reason BFFs were invented is for security in mobile and SPA applications, to securely deal with OAuth tokens.
If you don't need that, don't use a BFF; keep things simple. Start by not building a SPA when it can be a classic server-client web app. I feel like most of the web should just be that.
(Replying to PARENT post)
If for whatever reason you don't want to call a few REST endpoints to render some stuff, then you should probably replace those endpoints with some other technology.
(Replying to PARENT post)
One particularly annoying pain point was that one of the engineering directors made it his personal mission to ensure that every service's Protobuf definition conformed to some platonic ideal. We'd have a BFF service Foo that essentially provides a wrapper around another service, Bar (among other things). Naturally, Foo's service definition included some Protobuf messages defined in Bar. Now the simple thing to do here would be to import those Protobuf message definitions into Foo.proto and expose them to consumers of the Foo service, but that apparently couples clients too tightly to Bar's service definition (?). So, it was decided that Foo's Protobuf file would contain actual copies of the messages defined in Bar. Adding a new field to a message in Bar wouldn't expose it to clients of Foo unless you also updated the message definition in Foo.proto as well as the code that converts from Bar::SomeMessage to Foo::SomeMessage. Now imagine that the backend service is something commonly used, like UserService, and that there are potentially dozens of BFF services that contain copies of the definition of UserService::User. Welcome to hell.
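A TypeScript analogue of that duplication (the real thing was Protobuf; these names just follow the Foo/Bar example above):

```typescript
// Bar's own definition of the message (owned by the Bar service).
interface BarUser {
  id: string;
  email: string;
  displayName: string; // newly added in Bar...
}

// Foo's hand-maintained copy, which clients of Foo actually see.
interface FooUser {
  id: string;
  email: string;
  // displayName stays invisible to Foo's clients until someone copies it here
}

// ...and the conversion code that also has to be updated for every new field.
function toFooUser(user: BarUser): FooUser {
  return { id: user.id, email: user.email };
}
```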
The aforementioned engineering director would hunt down BFF services that dared to include a message dependency instead of copying the definition. He would then open a ticket and assign it to the service owner, informing them that they were not in compliance with the "Architectural Decision Record": the indisputable, no-exceptions, holy-of-holies log of technical mandates that were decided by a handful of people in a meeting three years ago.
This issue isn't necessarily inherent to BFF architecture, but I think it comes from the same place of overengineering and architectural astronautics.
(Replying to PARENT post)
> You probably shouldn't implement microservices.
Because the whole article just seems like creating more problems on top of your existing problems.
(Replying to PARENT post)
The big issue I've been seeing is that occasionally frontend teams will decide to develop "features" by stringing together massive series of calls to the backend to implement logic that a single backend could do much more efficiently. For example, they commonly have their backend-for-frontend attempt to query large lists of data, even walking through multiple pages, in order to deliver a summary that a backend service could implement in a SQL query. That puts unnecessary load on the backend service and on the DB, transmitting all that data needlessly to the BFF.
I know the easy answer is to blame the frontend devs, but this pattern seems to almost encourage this sort of thing. Frontends try to skip involving the backend teams due to time constraints or just plain naivety, and suddenly the backend team wakes up one morning to find a massive increase in load on their systems triggering alerts, while the frontend team believes it's not their fault. This just feels like an innate risk of promoting a frontend team to owning an individual service living in the data center.
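A sketch of that anti-pattern, with all of the names invented for illustration:

```typescript
// BFF-side anti-pattern: page through every order just to compute a number
// that the backend could return from a single aggregate query.
interface OrdersPage {
  items: { total: number; status: string }[];
  hasMore: boolean;
}

async function openOrderTotalViaPaging(
  listOrders: (page: number) => Promise<OrdersPage>
): Promise<number> {
  let page = 1;
  let sum = 0;
  let hasMore = true;
  while (hasMore) {
    const res = await listOrders(page++); // N round trips, full payloads over the wire
    sum += res.items
      .filter((o) => o.status === "open")
      .reduce((acc, o) => acc + o.total, 0);
    hasMore = res.hasMore;
  }
  return sum;
}

// What the backend service could expose instead: one summary endpoint backed by
// something like SELECT SUM(total) FROM orders WHERE status = 'open'.
```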
(Replying to PARENT post)
Hard to understand from just text and the concepts.
(Replying to PARENT post)
If you actually measure real latencies (e.g. collect metrics from your mobile apps), this is not true most of the time, at least in my experience. Most responses are fairly small, so making them a bit smaller doesn't have much impact on mobile app performance in the real world.
I'd guess for most apps there are maybe a handful of endpoints where the response size can be reduced a tonne for mobile clients, AND where that makes a big performance difference in practice. But instead of hopping straight to BFFs, or other approaches to solving this like GraphQL, you can just add a "fields" query param to those few key endpoints. If provided by the caller, you return only the fields they ask for. No massive re-architecture to BFFs or GQL, just a small change to a couple of endpoints that a single dev can implement in a day or two.
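Roughly what that might look like on one endpoint, assuming an Express-style handler and made-up resource/field names:

```typescript
import express from "express";

// Stand-in for however the endpoint already fetches its data.
declare function loadProduct(id: string): Promise<Record<string, unknown>>;

const app = express();

// e.g. GET /api/products/42?fields=id,name,price
app.get("/api/products/:id", async (req, res) => {
  const product = await loadProduct(req.params.id);

  const fieldsParam = typeof req.query.fields === "string" ? req.query.fields : "";
  if (!fieldsParam) {
    return res.json(product); // default: unchanged behaviour for existing callers
  }

  // Return only the fields the caller asked for.
  const wanted = new Set(fieldsParam.split(","));
  const slim = Object.fromEntries(
    Object.entries(product).filter(([key]) => wanted.has(key))
  );
  res.json(slim);
});

app.listen(3000);
```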
Now, there are other benefits to BFFs than minimal, client-specific payloads. One is that you can make breaking changes to underlying APIs without breaking always-so-slow-to-upgrade mobile clients, by preserving backwards compatibility in your mobile BFF layer. Another is coordinating multiple calls: turning many sequential round trips into a single round trip can be a big win. But if your main concern is payload size, BFFs are probably an overengineered solution.