(Replying to PARENT post)
Proof assistants aren’t a whimsical cottage industry the way programming languages are. A proof assistant is divided into three parts: the core or foundation (F), a very small language at the heart of the prover; the vernacular (V), the language in which humans write the code; and the metalanguage (M), used for proof search and metaprogramming.
V is translated into F, and this translation is often proven correct. The De Bruijn criterion says that F has to be small and should be independently checkable (mostly through inspection, or through mechanization in another framework). In Coq, the derived proof is checked again from the ground up by F when you write Qed.
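A minimal Lean 4 sketch of that same pipeline (the theorem name is invented; in Coq the kernel recheck happens at Qed, while in Lean it happens as each declaration is added):

    -- V: the vernacular, a human-facing tactic proof.
    theorem add_comm' (n m : Nat) : n + m = m + n := by
      exact Nat.add_comm n m

    -- The elaborator translates the tactics into a term of the small
    -- core calculus F, which the kernel rechecks from scratch.
    #print add_comm'  -- shows the underlying core term the kernel checked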
Andrej Bauer explains it better than I can.
https://mathoverflow.net/questions/376839/what-makes-depende...
(Replying to PARENT post)
The biggest advantage would come from increased interoperability between dependently typed languages, so that they could use each other's standard libraries, but the field may not yet be well-developed enough to standardize on basic definitions.
(Replying to PARENT post)
I'm philosophically aligned with you. Basically, I think this is because of where resources are invested: there's a tremendous amount of work on programming languages to make them ergonomic and to enable composition of pieces. Much of that work is, at its essence, processing friendly syntax into some form of typed lambda calculus. Since dependently typed lambda calculus is a very handy way to encode proofs, it's easier to adapt all that machinery than to develop infrastructure just for proofs.
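A small Lean illustration of that encoding (a generic sketch, nothing project-specific): under Curry-Howard, "proving" A ∧ B → B ∧ A is just writing a well-typed function that swaps a pair.

    -- The proposition is a type; the proof is a lambda term of that type.
    example (A B : Prop) : A ∧ B → B ∧ A :=
      fun ⟨a, b⟩ => ⟨b, a⟩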
Interop is hard, because all the axiom systems tend to be subtly different in ways that are hard for normal people to appreciate, though there is some work on it, including MM0.
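One concrete instance of those subtle differences (a hedged sketch): function extensionality is a theorem in Lean's foundation but an independent axiom in Coq's core calculus, so proofs relying on it don't port one-to-one.

    -- Lean 4: funext is derivable (via quotients, built into the kernel),
    -- so this needs no extra axiom. In Coq's core theory the same
    -- statement must be assumed, which complicates naive library interop.
    example (f g : Nat → Nat) (h : ∀ x, f x = g x) : f = g := funext h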
(Replying to PARENT post)
Why3 is a big success in the way you're asking about. It's just not suited to pure-maths proofs the way Lean is.
(Replying to PARENT post)
I've got a similar, but unrelated problem.
Most development of proof and formal-methods languages has involved dependently typed languages (modulo Frama-C and SPARK), which are dandy. However, the formula definitions are all in low-level, function-application-ish form, while the proof techniques are high-level tactics. The result is that plain ol' simple formal logic is not directly applicable when using these formal techniques, and the proofs are completely unreadable.
Why aren't the formal techniques syntax-sugared into something understandable?
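For contrast, a hedged Lean sketch of both styles on the same trivial statement: the tactic proof is an opaque search script, while a calc block reads like the chain of reasoning itself.

    -- Tactic style: the script says how to search, not why the claim holds.
    example (a b c : Nat) (h₁ : a = b) (h₂ : b = c) : a = c := by
      simp [h₁, h₂]

    -- Declarative style: each reasoning step is visible and checkable by eye.
    example (a b c : Nat) (h₁ : a = b) (h₂ : b = c) : a = c :=
      calc a = b := h₁
           _ = c := h₂

Isabelle's Isar and Lean's calc are existing attempts at exactly this kind of sugaring; they just don't cover the whole tactic vocabulary.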
(Replying to PARENT post)
https://github.com/adsharma/py2many/blob/main/tests/cases/de...
(Replying to PARENT post)
AFAIK, a lot of proof-language code is machine-generated. And even when it isn’t, the syntax and semantics of these provers seem to be stuck in the era of “one runtime, one language”, where switching languages forces you to switch runtimes, and proof libraries written for different proof languages can’t be used together in the same project because they have different core runtime semantics.
Why isn’t there instead a “proof-checker VM” that accepts some formally specced “proof bytecode” with a formally specced standard semantics, such that (1) it is easy for machines to generate, and (2) these different proof languages would then just be compilers emitting said bytecode?
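The core of such a VM can be tiny. A hypothetical Lean 4 sketch (all names invented, for minimal implicational logic only) of a checker whose "bytecode" has two instructions: push an instance of the K axiom scheme, or apply modus ponens to two already-checked lines.

    -- Hypothetical proof bytecode for minimal implicational logic.
    inductive Form where
      | var : Nat → Form
      | imp : Form → Form → Form
    deriving BEq, Repr

    inductive Instr where
      | axK : Form → Form → Instr   -- push the axiom instance A → (B → A)
      | mp  : Nat → Nat → Instr     -- from lines i : A → B and j : A, push B
    deriving Repr

    open Form in
    def step (proved : Array Form) : Instr → Option (Array Form)
      | .axK a b => some (proved.push (imp a (imp b a)))
      | .mp i j => do
        let hImp ← proved[i]?
        let hArg ← proved[j]?
        match hImp with
        | imp p q => if p == hArg then some (proved.push q) else none
        | _ => none

    -- A proof checks if every instruction succeeds and the last
    -- line proved is the goal formula.
    def check (prog : List Instr) (goal : Form) : Bool :=
      match prog.foldlM step #[] with
      | some proved => proved.back? == some goal
      | none => false

This is close in spirit to Metamath Zero (MM0), mentioned upthread: its whole design is a small, formally specified verifier plus a proof format that is cheap to emit and cheap to check.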