(Replying to PARENT post)
Is it? It has clean and logical abstractions, and consistency. Services depending on each other isn't complex or difficult to understand.
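For example, a service that must start after its database is just a couple of declarative lines in a unit file (a rough sketch; the unit and binary names here are made up):

    # /etc/systemd/system/myapp.service (hypothetical)
    [Unit]
    Description=My self-hosted app
    Requires=postgresql.service
    After=network-online.target postgresql.service

    [Service]
    ExecStart=/usr/local/bin/myapp
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target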
I suspect that a nice GUI would make systemd quite usable for non-expert users.
BTW: It's called "systemd":
> Yes, it is written systemd, not system D or System D, or even SystemD. And it isn't system d either. [0]
[0]: https://www.freedesktop.org/wiki/Software/systemd/#spelling
(Replying to PARENT post)
But why stop at self-hosting? It could be extended to the local-first paradigm [1], meaning there's a choice to add a scalable, on-demand, cloud-based auxiliary for handling bursty access demands if you need it. In addition, you can keep extra backups for peace of mind.
I'm currently working on reliable solutions (physical wireless and application layers) to extend the local-first system so that it stays automatically secured even when you have an intermittent Internet outage [2], unlike TailScale and ZeroTier. This system will be invaluable where the Internet connection is not reliable due to weather, harsh environments, war, unreliable power providers, or lousy ISPs [3].
[1] Local-First Software: You Own Your Data, in spite of the Cloud:
https://martin.kleppmann.com/papers/local-first.pdf
[2] Internet outage:
https://en.wikipedia.org/wiki/Internet_outage
[3] NIST Helps Next-Generation Cell Technology See Past the Greenery:
https://www.nist.gov/news-events/news/2022/01/nist-helps-nex...
(Replying to PARENT post)
Self-hosting means:
- Needing to know how to configure your Linux host across firewalls, upgrades, and backups.
- Negotiating contracts with network service providers, while verifying that you have the right kind of optic on the network line drop.
- Thinking through the order of operations on every remote hands request, and idiot proofing them so that no one accidentally unplugs your DB.
- Making sure that you have sufficient cold spares that a server loss doesn't nuke your business for 6-12 weeks depending on how the hardware manufacturers view your business.
- Building your own monitoring, notifications, and deployment tools using both open source and in-house tools.
- Building expertise in all of your custom tools.
- A 6-20 week lead time to provision a build server.
- Paying for all of your hardware for 3-5 years, regardless of whether you will actually need it.
- Over-provisioning memory or CPU to make up for the fact that you can't get hardware fast enough.
- Getting paged in the middle of the night because the hardware is over-provisioned and something gets overwhelmed or a physical machine dies.
- Dealing with the fact that an overworked systems engineer or developer will never make any component the best, and everything you touch will just passably work.
- Everyone will have their own opinions on how something should be done, and every decision will have long term consequences. Get ready for physical vs virtual debates till the heat death of the universe.
(Replying to PARENT post)
today i accidentally nuked everything by not flushing the disk before rebooting and then naively letting fsck try to "fix" it (which just makes things worse since it unlinks every inode it thinks is wrong instead of helping you recover data). now i'm manually dumping blocks and re-linking them in, supplementing whatever's not recoverable with a 3-day old backup. that's probably gonna take an entire day to fix up.
after this i have to figure out a better backup solution, because it costs me $5 of API requests every time i rclone the system to Backblaze, making frequent backups too expensive.
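(i should probably try rclone's --fast-list first; it trades memory for fewer listing transactions, which i believe is the bulk of what B2 bills on a sync. something like this, with made-up bucket and paths:)

    # more memory, but far fewer listing calls against B2
    rclone sync /srv b2:my-bucket/srv --fast-list --transfers 8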
after that, i have to figure out the email part of things. AFAICT it's pretty much impossible to 100% self-host email because of blacklisting. you have to at least proxy it through a VPS, or something.
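the usual workaround seems to be relaying outbound mail through a box whose IP has a decent reputation; in Postfix that's roughly the following (hostname and credentials file are placeholders):

    # /etc/postfix/main.cf (sketch)
    relayhost = [vps.example.com]:587
    smtp_sasl_auth_enable = yes
    smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
    smtp_sasl_security_options = noanonymous
    smtp_tls_security_level = encrypt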
and in between that i may spin up a DNS server to overcome the part where it takes 60min for any new service to be accessible because of the negative caching common in DNS.
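(the ~60 min figure is typically the SOA minimum field, which resolvers use as the negative-caching TTL per RFC 2308; if you run your own zone you can drop it. rough BIND-style sketch with placeholder names:)

    ; example.com zone (placeholders)
    @  IN  SOA  ns1.example.com. hostmaster.example.com. (
           2024010101  ; serial
           3600        ; refresh
           900         ; retry
           604800      ; expire
           300 )       ; minimum / negative-caching TTL; 3600 here = the ~60min wait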
no, this stuff is just way too involved for anyone who hasn't already spent a decade in the CLI. i'm only doing this because i'm a nerd with time on his hands between jobs. self-hosting isn't gonna catch on this decade. but maybe we can federate, so that you just need to have one friend who cares about this stuff manage the infra and provide it to their peers as a social good.
also, i don't think privacy is the right angle for promoting self-hosting. a good deal of the things that people self-host have a public-facing component (websites; public chatrooms; etc). if privacy is what you seek, then you should strive to live life offline. the larger differentiator for self-hosting is control.
(Replying to PARENT post)
(Replying to PARENT post)
Hosting a static page on GitHub Pages makes all that a ton easier, and it's also free.
That's a trite example, sure. But when I was at a previous company that did almost everything on premises, I couldn't help but think: if we had an internal portal/system a la GCP's or Amazon's console, as a way for devs to spin up resources, have it all managed, and even be a bit programmatic (no, K8s doesn't solve all of this; it's its own bag of crazy), then we wouldn't need the cloud much, since we wouldn't need the almost infinite scale that the cloud offers.
(Replying to PARENT post)
If Mozilla still had FlyWeb, things could be plug and play.
I have a set of proposals here to bring some of that back: https://github.com/WICG/proposals/issues/43
And some are considering the Tox protocol, but in general, we have not solved the most basic issue of self-hosting: how do I connect to my device in a way that just works, LAN or WAN, without manually setting up the client or registering for a service?
(Replying to PARENT post)
The Linux model of "first learn how to use a package manager, edit configuration files by hand, and configure init scripts" is never going to be something that I can comfortably explain to computer users like my parents...
(Replying to PARENT post)
(Replying to PARENT post)
(Replying to PARENT post)
Good for them that it is seeing traction :)
(Replying to PARENT post)
(Replying to PARENT post)
I'm pretty sure that's exactly what we did and ended up where we are today. Any sufficiently-advanced self-hosting is indistinguishable from AWS?
I'm not sure how joking I am.
(Replying to PARENT post)
openziti is strong for app-centric use cases - put the (programmable, zero trust) network into your self-hosted app (via SDKs for various languages), rather than putting the app on the network.
https://openziti.github.io/ (quick starts) https://github.com/openziti
disclosure: founder of company selling saas on top of openziti
(Replying to PARENT post)
There are some projects trying to tackle the workload orchestration piece; CasaOS (https://www.casaos.io/) is one of my favorites, but there's also Portainer (https://portainer.io). TailScale and ZeroTier are great for mesh VPN networking, where you may need to run some workloads in the cloud but want them networked with your home applications (or just to keep them offline). They also allow you to access applications running on a home server that doesn't have a static IP. Cloudflare Access is okay; I haven't tried it because it deviates from the mesh VPN model significantly.
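For reference, Portainer's single-node setup is basically one container; this roughly mirrors their documented quick start (image tag and ports may have changed):

    docker volume create portainer_data
    docker run -d -p 9443:9443 --name portainer --restart=always \
      -v /var/run/docker.sock:/var/run/docker.sock \
      -v portainer_data:/data \
      portainer/portainer-ce:latest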