rbanffy
email: username at that google mail thing
http://about.me/rbanffy
https://linkedin.com/in/ricardobanffy
[ my public key: https://keybase.io/rbanffy; my proof: https://keybase.io/rbanffy/sigs/HtF1uAf_RNpwIkNP1-YGWP_-3doWV6S5Cc1KywXeLYo ]
Joined in 2008
186,374 Karma
61,653 posts
(Replying to PARENT post)
The cores, yes, but you can get an AmpereOne with 192 ARM cores (or rent beefier machines from AWS and Azure). If you need to run macOS, then you are tied to Apple, but if all you want is ARM (for, say, emulated embedded hardware development), you have other options in the ARM ecosystem. I'm actually surprised Ampere maxes out at 192 cores when Intel's Xeon 6 line has parts with 288 cores on a single socket (and that can go up to 4 sockets).
I wonder how many cores you'd need to make htop crash.
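As a side note, here is a minimal C++ sketch of how to ask the runtime how many hardware threads it sees, which is roughly how many per-core meters htop would have to draw (hardware_concurrency() is allowed to return 0 when the count isn't computable):

    #include <iostream>
    #include <thread>

    int main() {
        // Hardware threads the C++ runtime can detect on this machine.
        unsigned n = std::thread::hardware_concurrency();
        if (n == 0)
            std::cout << "core count not reported\n";
        else
            std::cout << n << " hardware threads visible\n";
    }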
(Replying to PARENT post)
It's a fun perception. For the longest time, all the "serious" computers were used through networks and terminals and didn't even come with any way to connect a monitor or a keyboard (although a serial terminal would work as the system console). I used to joke, usually while looking at Unisys's big Windows-based servers, that if a computer had VGA and PS/2 ports, it wasn't a computer but a toy. Those Unisys servers weren't toys, but you could run Pinball and Minesweeper directly on them, which kind of said otherwise.
I think we've become so used to platform bloat that we don't care that a modern UI toolkit is bigger than the entire operating system that runs 95% of the world's payment transactions.
(Replying to PARENT post)
At least in my house, ARM cores outnumber x86 cores by at least four to one. And I'm not even counting the 32-bit ARM cores in embedded devices.
There is a lot of space for memory ordering bugs to manifest in all those devices.
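A minimal sketch of what I mean, using the classic store-buffer litmus test in standard C++: with relaxed atomics, ARM's weaker memory model allows both loads to see 0 in the same run, an outcome x86's stronger ordering makes much harder to observe (a single run rarely triggers it; real litmus harnesses loop millions of iterations):

    #include <atomic>
    #include <cstdio>
    #include <thread>

    std::atomic<int> x{0}, y{0};
    int r1 = 0, r2 = 0;

    int main() {
        // Each thread stores to its own flag, then loads the other's.
        // With memory_order_relaxed, the CPU may reorder the store past
        // the load, so r1 == r2 == 0 is a legal outcome on ARM.
        std::thread t1([] {
            x.store(1, std::memory_order_relaxed);
            r1 = y.load(std::memory_order_relaxed);
        });
        std::thread t2([] {
            y.store(1, std::memory_order_relaxed);
            r2 = x.load(std::memory_order_relaxed);
        });
        t1.join();
        t2.join();
        if (r1 == 0 && r2 == 0)
            std::puts("store-buffer reordering observed");
        // Using memory_order_seq_cst on all four operations forbids it.
    }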
(Replying to PARENT post)
I'd love to have a Xeon 6, a big EPYC, or an AmpereOne (or a loaded IBM LinuxONE Express) as my daily driver, but that's just not something I can justify. It wouldn't be easy to come up with something for all that compute capacity to do. A reasonable GPU is a much better match for most of my workloads, which aren't even about pushing pixels anymore (iGPUs are enough these days) but about multiplying matrices with embarrassingly low precision so it can pretend to understand programming tasks.
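For illustration, a toy version of that low-precision arithmetic: int8 inputs accumulated into int32, the same pattern quantized inference kernels follow (the shapes and values here are made up):

    #include <array>
    #include <cstdint>
    #include <cstdio>

    constexpr int N = 2;

    int main() {
        std::array<std::array<int8_t, N>, N> a{{{1, -2}, {3, 4}}};
        std::array<std::array<int8_t, N>, N> b{{{5, 6}, {-7, 8}}};
        std::array<std::array<int32_t, N>, N> c{};  // 32-bit accumulators

        for (int i = 0; i < N; ++i)
            for (int j = 0; j < N; ++j)
                for (int k = 0; k < N; ++k)
                    // 8-bit products, accumulated in 32 bits so the sum
                    // doesn't overflow the tiny input type.
                    c[i][j] += int32_t{a[i][k]} * b[k][j];

        for (auto& row : c) {
            for (auto v : row) std::printf("%d ", v);
            std::printf("\n");
        }
    }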
(Replying to PARENT post)
Won't ARM have validation silicon available to its licensees?