(Replying to PARENT post)

Can somebody please explain to me why we would use gRPC?
👤diogenesjunior🕑5y🔼0🗨️0

(Replying to PARENT post)

Things I "love" about REST APIs:

- How should I pass an argument? Let me count the many ways:

    1. Path parameters
    2. Query parameters in the URL
    3. Query parameters in the body of the request
    4. JSON/YAML/etc. in the body of the request
    5. Request headers (yes, people use these for API tokens, API versions, and other things sometimes)
- There's also the REST verb, which is often super arbitrary. PUT vs. POST vs. PATCH... so many ways to do the same thing.

- HTTP response codes... so many numbers, so little meaning! There are so many ways to interpret a lot of these codes, and people often use 200 where they really "should" use 202... etc. Response codes other than 200 and 500 are effectively never good enough by themselves, so then we come to the next part:

- HTTP responses. Do we put the response in the body using JSON, MessagePack, YAML, or which format do we standardize on? Response headers are used for... some things? And responses like HTTP redirects will often just throw HTML into an API that otherwise speaks JSON.

- Bonus round: HTTP servers will often support compressing the response, but almost never do they allow compressing the request body, so if you're sending large requests frequently... well, oops.

I don't personally have experience with gRPC, but REST APIs can be a convoluted mess, and even standardizing internally only goes so far.

I like the promise of gRPC, where it handles all of the mundane transport details for me, and as a bonus... it will generate client implementations from a definition file, and stub out the server implementation for me... in whatever languages I want.
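For illustration only, here's a minimal sketch of what such a definition file might look like; the service, messages, and package path are all made up:

```protobuf
syntax = "proto3";

package greeter.v1;

option go_package = "example.com/gen/greeter";

// One service, one unary RPC. Everything here is hypothetical.
service Greeter {
  rpc SayHello(HelloRequest) returns (HelloReply);
}

message HelloRequest {
  string name = 1;
}

message HelloReply {
  string message = 1;
}
```

From a file like this, the toolchain generates the client and server scaffolding in each target language.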

Why wouldn't you want that?

👤coder543🕑5y🔼0🗨️0

(Replying to PARENT post)

I work at a startup that is ~10 months old, where we've decided to go all in on gRPC for all communication, both inter-service and client (web SPA and a CLI) to service.

Although the investment in tooling was significant at the beginning, it has truly paid dividends now: we develop in Golang (microservices, CLI), JavaScript (SPA), and Python (end-to-end testing framework), and have a single definition for all our API endpoints and messages in the form of protobufs. These protobufs automatically generate all client and server code and give us out-of-the-box backward and forward compatibility, increased performance thanks to the binary format over the wire, and more.
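As a rough illustration of that workflow (not our actual code; the Greeter service and the generated package path below are hypothetical), a Go server implementing a stub generated by protoc-gen-go-grpc looks roughly like this:

```go
package main

import (
	"context"
	"log"
	"net"

	"google.golang.org/grpc"

	// Hypothetical package generated by protoc from a greeter.proto definition.
	pb "example.com/gen/greeter"
)

// greeterServer fills in the server interface that the code generator stubs out.
type greeterServer struct {
	pb.UnimplementedGreeterServer
}

func (s *greeterServer) SayHello(ctx context.Context, req *pb.HelloRequest) (*pb.HelloReply, error) {
	return &pb.HelloReply{Message: "Hello, " + req.GetName()}, nil
}

func main() {
	lis, err := net.Listen("tcp", ":50051")
	if err != nil {
		log.Fatal(err)
	}
	s := grpc.NewServer()
	pb.RegisterGreeterServer(s, &greeterServer{})
	log.Fatal(s.Serve(lis)) // serves every RPC defined in the .proto over one listener
}
```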

Our architect, who put together most of this infrastructure, has written an entire series of blog posts about how we use gRPC in practice, detailing our decisions and tooling: https://stackpulse.com/blog/tech-blog/grpc-in-practice-intro...

https://stackpulse.com/blog/tech-blog/grpc-in-practice-direc...

https://stackpulse.com/blog/tech-blog/grpc-web-using-grpc-in...

👤hagsh🕑5y🔼0🗨️0

(Replying to PARENT post)

Sure. You would use gRPC when:

    - You want to have inter-service RPC.
    - You want not only unary calls, but also bidirectional streaming (see the sketch below).
    - You want a well-defined schema for your RPC data and methods.
    - You want a cross-language solution that guarantees interoperability (no more JSON parsing differences! [1])
[1] - http://seriot.ch/parsing_json.php
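On the bidirectional streaming point, here is a minimal Go sketch of a bidi handler, assuming a hypothetical `rpc Chat(stream ChatMessage) returns (stream ChatMessage);` method and its generated package:

```go
package chat

import (
	"io"

	// Hypothetical package generated from a .proto that declares
	// rpc Chat(stream ChatMessage) returns (stream ChatMessage);
	pb "example.com/gen/greeter"
)

type chatServer struct {
	pb.UnimplementedGreeterServer
}

// Chat reads client messages as they arrive and can push replies on the same
// HTTP/2 stream at any time; there is no per-message request/response round trip.
func (s *chatServer) Chat(stream pb.Greeter_ChatServer) error {
	for {
		in, err := stream.Recv()
		if err == io.EOF {
			return nil // client finished sending
		}
		if err != nil {
			return err
		}
		// Echo each message back immediately; a real service would do real work here.
		if err := stream.Send(&pb.ChatMessage{Text: "ack: " + in.GetText()}); err != nil {
			return err
		}
	}
}
```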
👤q3k🕑5y🔼0🗨️0

(Replying to PARENT post)

If you're communicating between two systems, gRPC has a few benefits:

- It keeps a socket open between them (HTTP/2) and puts all your method calls on that connection, so you don't have to set up a connection on each call or handle your own pooling.
- It comes with built-in, fast (de)serialization using protobuf.
- It uses a definition language to generate SDKs for a whole bunch of languages.
- It makes testing super easy, because your testing team, if you have a separate one, can generate an SDK in their preferred language and write tests.

Much better developer experience and performance than hand-writing HTTP services and the code to call them.

Cons are:

- You can't use Postman / Firebug; nothing on the wire is human-readable.
- Load balancer support is sketchy because of the use of HTTP trailers and HTTP/2 on the full path. That's why AWS ALB supporting it is news.
- The auth story isn't very clear: do you use the out-of-band setup, or add tokens to every single RPC?
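To make the connection-reuse and auth points concrete, here is a rough Go client sketch; the generated greeter package is hypothetical, and sending the token as per-RPC metadata is only one of the possible answers to the auth question above:

```go
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	"google.golang.org/grpc/metadata"

	// Hypothetical generated package for a simple Greeter service.
	pb "example.com/gen/greeter"
)

func main() {
	// One dialed connection (a multiplexed HTTP/2 socket) is reused for every call;
	// there is no per-request connection setup or hand-rolled pooling.
	conn, err := grpc.Dial("localhost:50051", grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	client := pb.NewGreeterClient(conn)

	// Per-call deadline, plus an auth token attached as per-RPC metadata.
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()
	ctx = metadata.AppendToOutgoingContext(ctx, "authorization", "Bearer <token>")

	reply, err := client.SayHello(ctx, &pb.HelloRequest{Name: "world"})
	if err != nil {
		log.Fatal(err)
	}
	log.Println(reply.GetMessage())
}
```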

👤sudhirj🕑5y🔼0🗨️0

(Replying to PARENT post)

It makes it really nice to define APIs (like with OpenAPI or Swagger). There are a bunch of code generators out there that produce code from your definitions, giving you native Swift, Objective-C, Java, or Go API stubs for either clients or servers. It is a joy to work with in cross-functional teams: you define your APIs while taking into account what API versioning would mean, how to define enums, and how to rename fields while staying compatible on the wire, among other things. Also, if you route a payload from service A via B to C, and each service is deployed independently and picks up new API changes at different times, gRPC supports you in handling those scenarios.
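For example, the rename compatibility comes from protobuf identifying fields by number rather than by name; here is a hypothetical message showing that, plus reserving removed fields:

```protobuf
syntax = "proto3";

message CreateUserRequest {
  // Only the field number (1) goes on the wire, so renaming `username` to
  // `display_name` stays compatible with binaries built against the old name.
  string display_name = 1;  // was: string username = 1;

  // Numbers (and names) of deleted fields get reserved so they are never
  // reused later with a different meaning.
  reserved 2;
  reserved "legacy_flag";
}
```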

Sure enough, OpenAPI can do all of this, I guess, but gRPC definitions in protobuf (or Google Artman) are just way quicker to understand and work with. (At least for me.)

👤weitzj🕑5y🔼0🗨️0

(Replying to PARENT post)

My main reasons:

1. It's standardized: the implementation for each language is roughly similar and has the same feature set (middlewares, stubs, retries, hedging, timeouts, deadlines, etc.).

2. A high-performance framework/web server in "every" language. No more "should I use Flask or the built-in HTTP server or Gunicorn or Waitress or..."

3. Tooling can be built around schemas as code. There's a great talk that I highly recommend about some of the magic you can do [0].

4. Protos are great for serialization, and not just over a network! Need to store configs or data in a range of formats (JSON, prototxt, binary, YAML)? The same message definition covers all of them (see the sketch below).

5. Streaming. It's truly amazing and can dramatically reduce latency if you need to do a lot of collection-of-things or as-soon-as-x processing.

6. Lower resource usage. Encoding/decoding protos is faster than encoding and decoding JSON. At high throughput, that begins to matter.

7. Linting & standards can be defined and enforced programmatically [1].

8. Pressures people to document things. You comment your .c or .java code, why not comment your .proto?

[0] - https://youtu.be/j6ow-UemzBc?t=435

[1] - https://google.aip.dev/
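On point 4, here is a small Go sketch of rendering one message in several formats; it uses the well-known Timestamp type only so the example doesn't depend on any generated code:

```go
package main

import (
	"fmt"
	"log"
	"time"

	"google.golang.org/protobuf/encoding/protojson"
	"google.golang.org/protobuf/encoding/prototext"
	"google.golang.org/protobuf/proto"
	"google.golang.org/protobuf/types/known/timestamppb"
)

func main() {
	// Any proto message can be serialized from the same schema into multiple formats;
	// a generated config message would work exactly the same way as this Timestamp.
	msg := timestamppb.New(time.Date(2020, 10, 26, 0, 0, 0, 0, time.UTC))

	fmt.Println(prototext.Format(msg)) // human-readable text format ("prototxt")
	fmt.Println(protojson.Format(msg)) // canonical JSON mapping

	bin, err := proto.Marshal(msg) // compact binary wire format
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(len(bin), "bytes on the wire")
}
```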

👤gravypod🕑5y🔼0🗨️0

(Replying to PARENT post)

The biggest benefit is when your company supports multiple programming languages: all RPC calls stay uniform regardless of the backend language.
👤didip🕑5y🔼0🗨️0

(Replying to PARENT post)

Serialization/deserialization speed and reduced transfer size are good reasons for high-throughput service-to-service communication. There's also a decent ecosystem around code generation from .proto files, plus gateways that still support some level of JSON-based calls.
👤jruroc🕑5y🔼0🗨️0

(Replying to PARENT post)

HTTP is super great for loosely coupled, request-based services.

RPC is more lightweight for persistent connections to stateful services. RPC makes broadcast easier than HTTP. Individual RPC requests have (much) less overhead than HTTP requests, which is very helpful when tight coupling is acceptable.

Trying to run, say, MMO gaming servers over HTTP is an exercise in always paying double for everything you want to do. (Also, trying to run a FPS gaming server over TCP instead of UDP is equally not the right choice!)

👤jwatte🕑5y🔼0🗨️0

(Replying to PARENT post)

When you don't want to deal with the portmapper anymore and think distributed reference counting is a bad idea.

Seriously, binary RPC has been around for ages.

👤wbl🕑5y🔼0🗨️0