(Replying to PARENT post)
There is a happy day coming for web developers, sometime this decade hopefully, when skills learned as long as 12 months ago will still be up to date.
Did you notice OpenJDK has been upgraded from v11 to v11? That sort of radical change is why I run Debian. They don't break stuff.
(Replying to PARENT post)
And many of our critical systems (mostly MRIs) are on RHEL 5. (Not connected to any network, don't worry!)
(Replying to PARENT post)
https://wiki.debian.org/DebianAlternatives
Once set up, one command can toggle between your version and the Debian repo version as the system default.
DA is great. I used it for years on Ubuntu to override the distro's older default versions, or to slot in anything I built from source.
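For example, registering a locally built interpreter (a minimal sketch; the link location, alternative name, paths, and priorities are my own illustrations, not anything Debian ships):

    # register two candidates under one alternatives name
    sudo update-alternatives --install /usr/local/bin/python3 python3 /usr/local/bin/python3.12 100
    sudo update-alternatives --install /usr/local/bin/python3 python3 /usr/bin/python3.9 50

    # the one-command toggle: interactively pick the system default
    sudo update-alternatives --config python3

Putting the managed link under /usr/local/bin rather than /usr/bin keeps it ahead of the distro binary on the default PATH without touching any dpkg-owned files.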
I eventually migrated over to NixOS because, in a way, NixOS takes the concept of Debian Alternatives and applies it to the entire OS.
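For a taste of what that looks like, assuming Nix is installed (the package attribute is illustrative):

    # ad-hoc shell with a specific runtime, leaving the system default untouched
    nix-shell -p python312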
(Replying to PARENT post)
Unless what you're doing is writing an application to be packaged with Debian, the version shouldn't matter to you: /usr/local is there, use it.
(note: none of this is "official" in any way, just what I've learned over 15 years or so of deploying on Debian-derived distros)
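Concretely, that usually means building into /usr/local so you never collide with dpkg-owned files. A sketch with CPython (the version is illustrative):

    # build a newer Python under /usr/local, leaving /usr/bin alone
    ./configure --prefix=/usr/local
    make -j"$(nproc)"
    sudo make altinstall   # installs "python3.12" but deliberately not "python3"

"make altinstall" is CPython's own convention for exactly this: it skips creating the unversioned python3 name, so OS scripts keep using the distro interpreter.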
(Replying to PARENT post)
PHP 8.0's release wasn't long enough ago to make it into the stable release; bullseye was already too close to its initial freeze when PHP 8.0 came out.
(Replying to PARENT post)
This has been my personal policy for the last 6 years, and it works well in practice.
You get the best of both worlds. A rock solid system with the flexibility to use whatever programming runtime versions you want.
(Replying to PARENT post)
At my last employer, we ran Debian stable, and we told people to use versions of software (notably Python and Python packages) from the OS. One of my teammates was a Debian Developer, meaning he had full rights to upload packages, and the two of us would package things up for the OS as we needed and get fixes into Debian as appropriate. In many cases we needed newer versions of packages than were appropriate for a stable release and (for whatever reason) weren't appropriate for backports either, so we ended up with a noticeable delta - it wasn't obviously less work than just building things ourselves independent of Debian. And because those packages were systemwide and you needed someone on the systems team to install them, what ended up happening was that my team found ourselves in the middle of artificial development conflicts between other groups in the company, where one wanted a really old version and one wanted a really new version of the same package.
At my current employer, we also run Debian stable, but our VCS repository ("monorepo," as people seem to call it these days) includes both internal and third-party code. Our third-party directory includes GCC, Python, a JDK, etc. (sometimes compiled from source, sometimes just pre-built binaries from the upstream). A particular revision / commit hash of our repo describes not just a particular version of our code, but a particular version of GCC and Python and Python libraries and so forth. Our deployment tool, in turn, bundles up all your runtime dependencies (like the Python interpreter) when you make a release. In effect, it gives you exactly the software-release-cycle properties you get from something like Docker.
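Hypothetically (these names are mine, not the real layout), the shape is something like:

    monorepo/
      third_party/
        gcc-12/          # built from source, vendored into the repo
        python-3.11/     # pre-built upstream binaries
        jdk-17/
        pylibs/          # pinned third-party Python libraries
      src/
        service_a/       # internal code

One commit hash pins all of the above together, so checking out a revision checks out the toolchain too.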
The practical effect is that upgrades are so much easier because they're decoupled. Upgrading the OS is hard enough - you have some script that's calling curl, and /usr/bin/curl is using the OS's version of OpenSSL, which has deprecated all the ciphers that some internal service that nobody wants to touch uses. Or whatever. Testing this is particularly hard if it affects your bare-metal OS, because you have a fairly slow process of upgrading/reimaging a specific machine, and then you run all your code on the new OS, and you see if it works. If this change also includes upgrading your GCC and Python and JDK versions, it becomes extremely cumbersome. If you can deploy your language runtime, library, etc. upgrades separately (via Docker, via LXC, via something like our monorepo, whatever), then you've decoupled all of your potentially-breaking changes, and you can also revert changes for a single application. If the latest GCC miscompiles some piece of software, it's a lot nicer, operationally, to say that this one application gets deployed to prod from an old branch until you figure it out than to prevent anyone from upgrading. And if a particular team insists on staying on an old version of some library, they don't hold up the rest of the company.
And in turn that's why Debian has such a long freeze cycle and doesn't release the latest version of things. There's a whole lot of PHP software in Debian. All of it has to work with the exact same version of PHP (or you have to go to lengths to find every place that calls /usr/bin/php and stick a version number on it). If something is incompatible with PHP 8, the whole OS gets held back. That's exactly the policy you want for things that are unavoidably system-wide (init, bash, libc, PAM, etc. etc.), but you don't need that policy for application development.
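"Stick a version number on it" means moving callers from the unversioned binary to a versioned one; Debian does ship co-installable versioned binaries like /usr/bin/php7.4, though the script below is a made-up example:

    #!/usr/bin/php7.4
    <?php
    // pinned: keeps working even after /usr/bin/php starts pointing at PHP 8
    echo "still on 7.4\n";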
(Replying to PARENT post)
I'm still unsure whether Docker injects that much more complexity and runtime overhead, but things sure feel a little messier after installing a few tools.
(Replying to PARENT post)
But I guess this comes down to "there is no free lunch".
(Replying to PARENT post)
I am starting to reconsider my personal policy of "use debian-stable as a benchmark for what language runtimes I should build on top of", now tending towards "use debian stable as the bare-metal OS, and build all my projects inside docker, using each language's most recent stable release".
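A minimal sketch of that setup for a Python project (the image tag and file names are assumptions):

    # host runs Debian stable; the project pins its own runtime here
    FROM python:3.12-slim-bookworm
    WORKDIR /app
    COPY requirements.txt .
    RUN pip install --no-cache-dir -r requirements.txt
    COPY . .
    CMD ["python", "main.py"]

The bare-metal box then only needs Docker itself from the stable repos; the runtime version churns inside the image, where upgrading or reverting is a one-line change.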