πŸ‘€jsnellπŸ•‘3yπŸ”Ό157πŸ—¨οΈ108

(Replying to PARENT post)

> However, contemporary applications rarely run on a single machine. They increasingly use remote procedure calls (RPC), HTTP and REST APIs, distributed key-value stores, and databases,

I'm seeing an increasing trend of pushback against this norm. An early example was David Crawshaw's one-process programming notes [1]. Running the database in the same process as the application server, using SQLite, is getting more popular with the rise of Litestream [2]. Earlier this year, I found the post "One machine can go pretty far if you build things properly" [3] quite refreshing.

Most of us can ignore FAANG-scale problems and keep right on using POSIX on a handful of machines.

[1]: https://crawshaw.io/blog/one-process-programming-notes

[2]: https://litestream.io/

[3]: https://rachelbythebay.com/w/2022/01/27/scale/

πŸ‘€mwcampbellπŸ•‘3yπŸ”Ό0πŸ—¨οΈ0

(Replying to PARENT post)

> Unix was the first operating system written by programmers for programmers

This is of course far from true; CTSS and especially ITS (which was so programmer-friendly that, rather than what is today called a "shell", its user interface was a debugger) long predated it, and both Thompson and Ritchie were exposed to them while at MIT to work on Multics. The supreme programming environments for programmers, the MIT Lisp machines and the D-machines at PARC, are the ultimate in programmer-friendly OSes and, for various important reasons, remain unmatched today.

Also, in their early papers about Unix they talked about how they didn't need all that fancy stuff that mainframe systems like Multics needed, such as VM and I/O modules (Unix just does everything through the kernel in a simple fashion). Well, it was for a small computer system. There were good reasons mainframes were structured the way they were, and by the mid-80s that model started migrating to workstations (e.g. the Britton-Lee accelerator and graphics subsystems), and by the 90s it was already quite common in microcomputers too (for example, MPUs in disk controllers and network interfaces).

But the Unix cultists ignored that revolution. They came late to the party with regard to networking; even when developing the sockets interface they relied heavily on the PDP-10 approach, yet seemingly didn't learn anything else from it.

Glad they are finally noticing that the world has changed.

πŸ‘€gumbyπŸ•‘3yπŸ”Ό0πŸ—¨οΈ0

(Replying to PARENT post)

POSIX used to feel like a nice compromise that worked for high-level applications, one-off tasks, and low-level system software. I remember the first time I created an ISO file using "cat /dev/cdrom > some_file.iso", and I thought it was amazing that you didn't need a separate application for that (as you do on Windows).

Now the POSIX APIs feel like the worst of both worlds.

Most applications shouldn't really store data on the filesystem, and probably not even configuration. Most applications use a database for a zillion reasons (e.g. better backup, restore, consistency, replication, transactions, avoiding random writes, ...). Yet the filesystem is also not a great interface if you're building a database: you want more control over flushing and buffering and atomic file operations. The filesystem is mainly good at storing compiled code.
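
To make the "more control" point concrete, here is a minimal sketch (my own, with names and error handling simplified, and assuming both files live in the current directory) of the write-fsync-rename dance a database-like program has to perform on POSIX just to replace a file atomically and durably:

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Sketch only: replace `path` atomically and durably. */
    int atomic_replace(const char *path, const char *tmp,
                       const void *buf, size_t len) {
        int fd = open(tmp, O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0) return -1;
        if (write(fd, buf, len) != (ssize_t)len) { close(fd); return -1; }
        if (fsync(fd) < 0) { close(fd); return -1; }  /* flush file data */
        close(fd);
        if (rename(tmp, path) < 0) return -1;         /* atomic swap */
        /* The rename isn't durable until the directory is fsync'd;
           that's the step everyone forgets. */
        int dfd = open(".", O_RDONLY);
        if (dfd < 0) return -1;
        int rc = fsync(dfd);
        close(dfd);
        return rc;
    }

And even then the exact guarantees vary by filesystem, which is exactly why you'd rather let a database engine own this.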

The "everything is a text file" interface is not great anymore, either. Simple manipulation of the output of basic commands (like text 'ps') requires error-prone parsing and quite some skill with things like grep, sed, awk, sort, etc. That's OK for experienced sysadmins working on individual machines, but doesn't integrate very well in larger systems where you have lots of automated access to that information. I have some experience doing sysadmin-like work, but I still run into problems with simple things like spaces in filenames and getting the quoting and escaping right in a regex. One option is for it to be more database-like, but that's not the only option. Erlang provides nice programmatic access to processes and other system information while still feeling very free and ad-hoc.

Of course, Linux is still great in many ways. But it feels like some of the core posix APIs and interfaces just aren't a great fit, and a lot of the interesting stuff is happening in non-posix APIs. And applications/services are being built on higher-level abstractions that aren't tied to single machines.

πŸ‘€jeff-davisπŸ•‘3yπŸ”Ό0πŸ—¨οΈ0

(Replying to PARENT post)

> POSIX provides abstractions for writing applications in a portable manner across Unix-like operating system variants and machine architectures. However, contemporary applications rarely run on a single machine. They increasingly use remote procedure calls (RPC), HTTP and REST APIs, distributed key-value stores, and databases, all implemented with a high-level language such as JavaScript or Python, running on managed runtimes. These managed runtimes and frameworks expose interfaces that hide the details of their underlying POSIX abstractions and interfaces. Furthermore, they also allow applications to be written in a programming language other than C, the language of Unix and POSIX. Consequently, for many developers of contemporary systems and services, POSIX is largely obsolete because its abstractions are low-level and tied to a single machine.

> Nevertheless, the cloud and serverless platforms are now facing a problem that operating systems had before POSIX: their APIs are fragmented and platform-specific, making it hard to write portable applications. Furthermore, these APIs are still largely CPU-centric, which makes it hard to efficiently utilize special-purpose accelerators and disaggregated hardware without resorting to custom solutions. For example, JavaScript is arguably in a similar position today as POSIX was in the past: it decouples the application logic from the underlying operating system and machine architecture. However, the JavaScript runtime is still CPU-centric, which makes it hard to offload parts of a JavaScript application to run on accelerators on the NIC or storage devices. Specifically, we need a language to express application logic that enables compilers and language runtimes to efficiently exploit the capabilities of the plethora of hardware resources emerging across different parts of the hardware stack. At the same time, it would be an interesting thought experiment to ponder how different the hardware design of these devices would be without the CPU-centrism in POSIX.

It sounds like the author has discovered the motivation behind the last time the POSIX model was declared obsolete, and some of the reasons Plan 9 was developed.

πŸ‘€generalizationsπŸ•‘3yπŸ”Ό0πŸ—¨οΈ0

(Replying to PARENT post)

This falls right in line with the recent "hardware offload for all" article posted on HN, and the talks around Hubris that Bryan Cantrill has given. It's an exciting time for computing, I think.

I've often lamented that I was born too late to watch Unix/POSIX be born and grow up. That's always felt like a golden time in computing. I also wasn't in the industry yet for the early 2000s and the rise of web-scale, though I was at least aware. It's exciting to think about what systems will do over the next twenty years. I'm looking forward to it.

πŸ‘€IcathianπŸ•‘3yπŸ”Ό0πŸ—¨οΈ0

(Replying to PARENT post)

My answer to the title is "no" for the same reason we're still using email protocols designed in 1985. I guess the counterpoint would be systemd.

TFA says "Therefore, we argue that the POSIX era is over, and future designs need to transcend POSIX." So they're actually not answering their own question; they're saying "the answer should be x", which is subtly different. There are a lot of should-be's in computing.

πŸ‘€user3939382πŸ•‘3yπŸ”Ό0πŸ—¨οΈ0

(Replying to PARENT post)

There was a great talk, which I can't find right now, about how the share of computer hardware that is directly controlled by the OS kernel keeps shrinking.

In a modern smartphone, the OS directly controls just one of a dozen chips. The rest are running their own OSes which do their own thing independently of the main OS.

πŸ‘€323πŸ•‘3yπŸ”Ό0πŸ—¨οΈ0

(Replying to PARENT post)

POSIX (a name suggested by Richard Stallman) now has to do one of the hardest things in the software realm: resist planned obsolescence while replacing some interfaces with ones that are genuinely better in the long run. This is extremely tough: often, replacing interfaces with supposedly "in-the-end better" ones is actually planned obsolescence.

For instance, for event-based programming, "epoll" should replace "select"; for synchronous programming, signalfd, timerfd, etc. But then "signals" should be more accurately classified: in single-threaded applications, a segfault won't be delivered via a "signalfd" read in an "epoll" loop, because synchronous signals like SIGSEGV must be handled in the context where they occur. And why not keep only the "realtime signal" behavior?
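
Concretely, the combination looks something like this; a minimal sketch, and all three mechanisms are Linux-specific rather than POSIX, which is rather the point:

    #include <sys/epoll.h>
    #include <sys/signalfd.h>
    #include <sys/timerfd.h>
    #include <signal.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    int main(void) {
        /* Block SIGINT so it is only delivered via the signalfd. */
        sigset_t mask;
        sigemptyset(&mask);
        sigaddset(&mask, SIGINT);
        sigprocmask(SIG_BLOCK, &mask, NULL);
        int sfd = signalfd(-1, &mask, 0);

        /* A 1-second periodic timer, as a plain file descriptor. */
        int tfd = timerfd_create(CLOCK_MONOTONIC, 0);
        struct itimerspec ts = { .it_value    = { .tv_sec = 1 },
                                 .it_interval = { .tv_sec = 1 } };
        timerfd_settime(tfd, 0, &ts, NULL);

        /* One event loop for both: signals and timers are just fds. */
        int ep = epoll_create1(0);
        struct epoll_event ev = { .events = EPOLLIN };
        ev.data.fd = sfd; epoll_ctl(ep, EPOLL_CTL_ADD, sfd, &ev);
        ev.data.fd = tfd; epoll_ctl(ep, EPOLL_CTL_ADD, tfd, &ev);

        for (;;) {
            struct epoll_event out;
            if (epoll_wait(ep, &out, 1, -1) < 1) break;
            if (out.data.fd == tfd) {
                uint64_t expirations;   /* drain the expiration count */
                read(tfd, &expirations, sizeof expirations);
                printf("tick\n");
            } else {
                struct signalfd_siginfo si;
                read(sfd, &si, sizeof si);
                printf("got SIGINT, exiting\n");
                break;
            }
        }
        return 0;
    }

But as noted above, this only covers asynchronous signals; a SIGSEGV still has to go through an old-style handler.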

If POSIX only "adds" new "options", you'll end up with a gigantic spec that is just the mirror image of the very few existing implementations, since its sheer size will make a new implementation from scratch unreasonable.

The "leaner POSIX" route seems the right way here, but it is really easier to say than to do.

πŸ‘€sylwareπŸ•‘3yπŸ”Ό0πŸ—¨οΈ0

(Replying to PARENT post)

Hadn't heard of "Dennard scaling" before:

> These services cannot expect to run faster from year to year with increasing CPU clock frequencies because the end of Dennard scaling circa 2004 implies that CPU clock frequencies are no longer increasing at the rate that was prevalent during the commoditization of Unix.

Definition:

> Dennard scaling, also known as MOSFET scaling, is a scaling law which states roughly that, as transistors get smaller, their power density stays constant, so that the power use stays in proportion with area; both voltage and current scale (downward) with length. The law, originally formulated for MOSFETs, is based on a 1974 paper co-authored by Robert H. Dennard, after whom it is named.

* https://en.wikipedia.org/wiki/Dennard_scaling
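
The arithmetic behind it is short: scale every linear dimension by 1/k, so capacitance and voltage each drop by a factor of k while frequency rises by k, and the dynamic power per transistor P = C V^2 f becomes

    P' = (C/k) (V/k)^2 (k f) = P / k^2

Transistor density rises by k^2 at the same time, so power per unit area stays constant. Once voltage could no longer scale down (leakage current gets in the way), frequency had to stop climbing too, hence the circa-2004 cutoff.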

The article then mentions Moore's Law.

πŸ‘€throw0101cπŸ•‘3yπŸ”Ό0πŸ—¨οΈ0

(Replying to PARENT post)

It's interesting that this is almost entirely about the filesystem API, but barely touches on all the other things that the object-store proponents and other "POSIX is dead" advocates usually focus on. Hierarchical directory structure, byte addressability, write in place, permissions model, persistence and consistency requirements, and so on. If everybody switched to a new API today, POSIX would still be very much alive. I'm not going to argue about whether the rest of POSIX should die or not, having played an active role in such discussions for about twenty years, but for that to happen it has to be a lot more than just changing how requests and responses are communicated.
πŸ‘€notacowardπŸ•‘3yπŸ”Ό0πŸ—¨οΈ0

(Replying to PARENT post)

I agree. We need a standard specification for distributed computing. Categorize all the different aspects of a distributed system, then write standard APIs for them all.

The whole point behind POSIX was so people didn't need to constantly reinvent the wheel and glue things together to do simple things on different platforms. Today if you need to do anything in the cloud, you must "integrate" the pieces, which means to take out scissors and glue and toothpicks and a text editor and literally arts-and-crafts the pieces together. Want to add something to your system? More arts and crafts! Rather than waste all that time, if everything just used a common interface, everything would just work together, the way it (mostly) does with POSIX.

This is bad for the companies that make all their money off proprietary/incompatible interfaces - lock-in devices that are advertised as a plus because they can be "integrated" with enough time and money. But it's better for our productivity and the advancement of technology if we didn't have to integrate at all. Just run the same thing everywhere.

πŸ‘€throwaway892238πŸ•‘3yπŸ”Ό0πŸ—¨οΈ0

(Replying to PARENT post)

> The primary use case for Unix was to multiplex storage (the filesystem) and provide an interactive environment for humans (the shell) [2], [3]. In contrast, the primary use case of many contemporary POSIX-based systems is a service running on a machine in a data center, which have orders of magnitude lower latency requirements.

And yet the most pleasant development environment for producing the latter is going to be the former.

πŸ‘€bigdictπŸ•‘3yπŸ”Ό0πŸ—¨οΈ0

(Replying to PARENT post)

It was a nice read. Thanks for sharing the article on HN. However, I feel the following claim in the article is not backed by any detailed facts or analysis.

> Virtual memory is considered to be a fundamental operating system abstraction, but current hardware and application trends are challenging its core assumptions.

What, specifically, is challenging the VM assumptions?

πŸ‘€dis-sysπŸ•‘3yπŸ”Ό0πŸ—¨οΈ0

(Replying to PARENT post)

Why would POSIX need to be replaced? Sure, POSIX is a bit clunky to use, but who uses it directly? And there aren't any performance benefits to be had from replacing POSIX. This is just one of those questions CS academics ask themselves because they've run out of algorithms to discover.
πŸ‘€phendrenad2πŸ•‘3yπŸ”Ό0πŸ—¨οΈ0

(Replying to PARENT post)

How can you claim POSIX is dead when you'd need something 10-100x better to get a replacement adopted?
πŸ‘€stjohnswartsπŸ•‘3yπŸ”Ό0πŸ—¨οΈ0

(Replying to PARENT post)

I've come around to the opinion that (POSIX-style) filesystems are no longer a great idea. Instead, I think something more like Git (plumbing, not porcelain) would be preferable as the underlying storage abstraction.
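
The essence of the plumbing is a content-addressed object store: you put in bytes and get back a name derived from a hash of the content. A toy sketch of the idea in C (FNV-1a stands in for a real cryptographic hash purely to keep it short, and it assumes an objects/ directory already exists):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* 64-bit FNV-1a; a real store would use a cryptographic hash. */
    static uint64_t fnv1a(const unsigned char *p, size_t n) {
        uint64_t h = 1469598103934665603ULL;
        while (n--) { h ^= *p++; h *= 1099511628211ULL; }
        return h;
    }

    /* The moral equivalent of `git hash-object -w`: store content
       under objects/<hash> and return the hash as its name. */
    static uint64_t put_object(const void *buf, size_t len) {
        uint64_t h = fnv1a(buf, len);
        char path[64];
        snprintf(path, sizeof path, "objects/%016llx",
                 (unsigned long long)h);
        FILE *f = fopen(path, "wb");
        if (f) { fwrite(buf, 1, len, f); fclose(f); }
        return h;
    }

    int main(void) {
        const char *data = "hello, content-addressed world\n";
        uint64_t id = put_object(data, strlen(data));
        printf("stored as %016llx\n", (unsigned long long)id);
        return 0;
    }

Immutability and deduplication fall out for free, and directories become just another object pointing at hashes, which is the part POSIX filesystems make you rebuild by hand.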
πŸ‘€carapaceπŸ•‘3yπŸ”Ό0πŸ—¨οΈ0