(Replying to PARENT post)
> The results are a trade-off between being sparse and having false positives.
Rust just takes the other side of the trade-off and will reject some valid programs. That's why the unsafe keyword exists, and why tools like Miri (https://github.com/rust-lang/miri) exist specifically for Rust.
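A minimal sketch (hypothetical example, not from the Miri repo) of the kind of program Miri is for: it compiles cleanly, the unsafe block hides an out-of-bounds read, and running it under Miri on a nightly toolchain reports the undefined behaviour instead of letting it pass silently.

    fn main() {
        let values = vec![1u8, 2, 3];
        let ptr = values.as_ptr();

        // This compiles without complaint: inside `unsafe`, the compiler
        // trusts the programmer. Reading one element past the end of the
        // Vec is undefined behaviour, which Miri detects at run time
        // (e.g. via `cargo +nightly miri run`).
        let past_end = unsafe { *ptr.add(3) };

        println!("{past_end}");
    }

Miri interprets the program and reports the out-of-bounds pointer use at the dereference, which is exactly the class of bug the compiler's static checks can no longer promise to catch once you opt into unsafe.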
👤 krona · 3y · 🔼 0 · 💬 0
(Replying to PARENT post)
> and that relies on having sufficient test and fuzz coverage
At the FAANG company I worked at, a small portion of servers ran the sanitizers in prod, so you're not nearly as reliant on test coverage for catching rare issues.
👤 djwatson24 · 3y · 🔼 0 · 💬 0
(Replying to PARENT post)
Static analysis tools have a much harder job analyzing C++ (aliasing and escape analysis are far harder, and static analysis of thread safety is basically impossible due to the lack of thread-safety information in the type system). The results are a trade-off between being sparse and having false positives.
The sanitizers only catch issues they can observe at run time, and that relies on having sufficient test and fuzz coverage. Some data races are incredibly hard to reproduce, and might depend on a timing difference that won't happen in your test harness.
OTOH, Rust proves the absence of these issues by construction, at compile time.
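For example (a sketch of my own, not anything specific from the tools above): the commented-out version below is rejected by the compiler because two threads would mutate the same variable without synchronization; the Arc-plus-Mutex version is the shape the compiler will accept, so the data race is ruled out before the program ever runs.

    use std::sync::{Arc, Mutex};
    use std::thread;

    fn main() {
        // Rejected at compile time: both closures borrow `counter` mutably,
        // and thread::spawn requires 'static data, so the compiler reports
        // the conflict before anything runs.
        //
        //     let mut counter = 0;
        //     let t1 = thread::spawn(|| counter += 1);
        //     let t2 = thread::spawn(|| counter += 1);

        // Accepted: shared ownership via Arc plus a Mutex for synchronized
        // mutation, so no data race is possible.
        let counter = Arc::new(Mutex::new(0));

        let handles: Vec<_> = (0..2)
            .map(|_| {
                let counter = Arc::clone(&counter);
                thread::spawn(move || *counter.lock().unwrap() += 1)
            })
            .collect();

        for handle in handles {
            handle.join().unwrap();
        }

        println!("{}", *counter.lock().unwrap());
    }

No test has to hit the right interleaving: the unsynchronized version simply never makes it to a binary.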
It's like the difference between dynamically typed and statically typed languages. Sure, you can fuzz type errors out of JS or Python, but in statically typed languages such errors are eliminated entirely at compile time. Rust extends this experience to more classes of errors.