(Replying to PARENT post)
So I think the answer is that it’s mostly just noise, but it’s noise you probably don’t want to have to deal with unless you’re invested in Rust itself.
(Replying to PARENT post)
This is not noisy for no reason - the expectation is that all dependencies are compiled from source on your computer, and here the developer forces the use of a precompiled binary, with no alternative.
(Replying to PARENT post)
Is it worth it? Maybe. Downloading blobs isn't ideal but could be made easier in the future. Ideally a hack like this wouldn't be needed and we'd have comptime serde.
(Replying to PARENT post)
It has to do with how Rust builds are structured.
The compilation unit is the crate, and the result of compiling it depends on the source code, the version of the compiler, the compiler options, the target triple, and the set of enabled features (think #ifdef FEATURE_FLAG).
Since there's an exponential blowup in the possible build configurations of a single crate, the largest package registry (crates.io) does not cache compiled crates. It caches only their metadata, and when Cargo (the package manager) does dependency resolution it uses that metadata alone - your build system (also Cargo, most of the time) then has to actually build all those artifacts.
Seems pretty reasonable, right?
There's a special kind of crate called a proc_macro crate (procedural macros). When you use one of these crates you're not adding its code to your compiled output - you're creating a compiler extension: the compiled crate is loaded by the compiler and used to generate code in your crate, which the compiler then compiles.
One of the reasons Rust build times are long (particularly for clean builds) is that the compiler first has to build all these compiler extensions and then run them to generate the code it actually compiles. These crates are super useful, however - serde, for example, generates serializers/deserializers that work across a wide variety of encodings (XML, JSON, TOML, various binary formats, and so on).
It would seem that the same rules apply to proc_macro crates as to normal crates - exponential blowup of compiler version, target triple, and features - but that's not really true in practice. proc_macro crates usually don't have features, the compiler's proc-macro interface is stable enough that compiled macros probably won't break across versions, and almost everyone and their mother is compiling on a handful of host systems (x86_64 Linux being the most important one).
So while it's true that crates.io probably shouldn't cache every crate for every permutation of compiler/target/features/etc., a few crates are so pervasive and built in so few configurations that it should be possible for the registry to host prebuilt artifacts for those.
In fact, some build systems can do this automatically - but Cargo can't. What Cargo does have is a pre-build hook (build.rs) that, like a proc macro, runs code at build time, and a build script can pull down a precompiled proc-macro binary. So widely used crates can use this hack to get around Cargo's limitations.
My personal take is that the reason crates.io and Cargo haven't been upgraded to do this, despite years of people complaining and known solutions existing, is mostly personal/political. There aren't many people willing to put in the work, and the teams haven't been open to this kind of contribution for a long time. For example, they maintain that Cargo is not a build system, so a lot of this is "out of scope" for Cargo.
(Replying to PARENT post)
I'd also be curious whether anyone has a good explanation for why a choice like this would be made.