(Replying to PARENT post)

Self-hosting is something we should be constantly iterating on making easier; it's really the path forward for privacy-centric folks. The main challenges are workload scheduling (SystemD is complicated for a layperson) and networking; for instance, if you want all or part of these services to remain offline or on a mesh VPN, there's a lot of knowledge required.

There are some projects trying to tackle the workload orchestration piece; CasaOS (https://www.casaos.io/) is one of my favorites, but there's also Portainer (https://portainer.io). Tailscale and ZeroTier are great for mesh VPN networking, where you may need to run some workloads in the cloud but want them networked with your home applications (or just kept offline). They also let you access applications running on a home server that doesn't have a static IP. Cloudflare Access is okay; I haven't tried it because it deviates from the mesh VPN model significantly.

👤kodah 🕑3y 🔼0 🗨️0

(Replying to PARENT post)

> SystemD is complicated for a layperson

Is it? It has clean, logical abstractions, and consistency. Services depending on each other isn't complex or difficult to understand.

I suspect that a nice GUI would make systemd quite usable for non-expert users.

BTW: It's called "systemd":

> Yes, it is written systemd, not system D or System D, or even SystemD. And it isn't system d either. [0]

[0]: https://www.freedesktop.org/wiki/Software/systemd/#spelling

👤Hendrikto 🕑3y 🔼0 🗨️0

(Replying to PARENT post)

Totally agreed, for the sake of humanity we should strive to make self-hosting as easy and as seamless as possible.

But why stop at self-hosting? It could be extended to a local-first paradigm [1], meaning you have the choice of a scalable, on-demand auxiliary cloud for handling bursty access demands if you need it. In addition, you get an extra backup for peace of mind.

I'm currently working on reliable solutions (physical wireless and application layers) to extend the local-first system so it stays automatically secured even through intermittent Internet outages [2], unlike Tailscale and ZeroTier. This system will be invaluable where the Internet connection is unreliable due to weather, harsh environments, war, unreliable power providers, or lousy ISPs [3].

[1] Local-First Software: You Own Your Data, in spite of the Cloud:

https://martin.kleppmann.com/papers/local-first.pdf

[2] Internet outage:

https://en.wikipedia.org/wiki/Internet_outage

[3] NIST Helps Next-Generation Cell Technology See Past the Greenery:

https://www.nist.gov/news-events/news/2022/01/nist-helps-nex...

👤teleforce 🕑3y 🔼0 🗨️0

(Replying to PARENT post)

Having started my career in hosting, I would suggest that this world is unlikely to come back except for exceptionally small applications with minimal business impact. What does self-hosting provide that end-to-end encryption does not?

Self-hosting means:

- Needing to know how to configure your Linux host: firewalls, upgrades, backups.

- Negotiating contracts with network service providers, while verifying that you have the right kind of optic on the network line drop.

- Thinking through the order of operations on every remote hands request, and idiot proofing them so that no one accidentally unplugs your DB.

- Making sure that you have sufficient cold spares that a server loss doesn't nuke your business for 6-12 weeks depending on how the hardware manufacturers view your business.

- Building your own monitoring, notifications, and deployment tools using both open source and in-house tools.

- Building expertise in all of your custom tools.

- A 6-20 week lead time to provision a build server.

- Paying for all of your hardware for 3-5 years, regardless of whether you will actually need it.

- Over-provisioning memory or CPU to make up for the fact that you can't get hardware fast enough.

- Getting paged in the middle of the night because the hardware is over-provisioned and something gets overwhelmed or a physical machine died.

- Dealing with the fact that an overworked systems engineer or developer never makes any component the best; everything you touch will only passably work.

- Everyone will have their own opinions on how something should be done, and every decision will have long term consequences. Get ready for physical vs virtual debates till the heat death of the universe.

👤lumost 🕑3y 🔼0 🗨️0

(Replying to PARENT post)

i started self-hosting a bunch of stuff last month: Pleroma (like Mastodon/Twitter), Matrix (chat), Gitea (like GitHub) and Jellyfin (like Plex, a media server). AFTER i set up the hardware/OS, these each took about 1-2 hours to set up, and it gets faster each time as i get more accustomed to the common parts (nginx, systemd, Let's Encrypt, and whatever containerization you use).
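
for reference, the nginx piece of that common setup is usually just a small reverse-proxy block. a minimal sketch; the hostname is hypothetical, the cert paths assume a certbot-style Let's Encrypt layout, and 3000 is Gitea's default HTTP port:

```nginx
server {
    listen 443 ssl;
    server_name gitea.example.com;  # hypothetical hostname

    # certificates issued by Let's Encrypt (e.g. via certbot)
    ssl_certificate     /etc/letsencrypt/live/gitea.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/gitea.example.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:3000;  # Gitea listening on localhost
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

each new service is basically this block with a different name and port, which is why it gets faster every time.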

today i accidentally nuked everything by not flushing the disk before rebooting and then naively letting fsck try to 'fix' it (which just makes things worse, since it unlinks every inode it thinks is wrong instead of helping you recover data). now i'm manually dumping blocks and re-linking them in, supplementing whatever's not recoverable with a 3-day-old backup. that's probably gonna take an entire day to fix up.

after this i have to figure out a better backup solution, because it costs me $5 of API requests every time i rclone the system to Backblaze, making frequent backups too expensive.
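
for Backblaze B2 specifically, much of that cost tends to be class-C transaction calls from per-directory listing; rclone's documented `--fast-list` flag trades memory for far fewer listing calls. a sketch (the paths and bucket name are hypothetical):

```shell
# one recursive listing of the bucket instead of one LIST call per directory;
# uses more memory, but dramatically reduces billable class-C transactions
rclone sync /srv/services b2:my-backup-bucket --fast-list --transfers 8
```
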

after that, i have to figure out the email part of things. AFAICT it's pretty much impossible to 100% self-host email because of blacklisting; you have to at least relay it through a VPS, or something.

and in between that i may spin up a DNS server to overcome the part where it takes 60 minutes for any new service to become accessible because of the negative caching common in DNS.
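
(that 60-minute wait comes from resolvers caching the NXDOMAIN answer; per RFC 2308 the negative-cache lifetime comes from the zone's SOA record, specifically its last field. a sketch of a BIND-style SOA with that field lowered to 5 minutes; names and serial are hypothetical:)

```
@ IN SOA ns1.example.com. admin.example.com. (
        2022010101 ; serial
        3600       ; refresh
        900        ; retry
        1209600    ; expire
        300 )      ; negative-caching TTL, lowered from the common 3600
```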

no, this stuff is just way too involved for anyone who hasn't already spent a decade in the CLI. i'm only doing this because i'm a nerd with time on his hands between jobs. self-hosting isn't gonna catch on this decade. but maybe we can federate, so that you just need one friend who cares about this stuff to manage the infra and provide it to their peers as a social good.

also, i don't think privacy is the right angle for promoting self-hosting. a good deal of the things people self-host have a public-facing component (websites, public chatrooms, etc.). if privacy is what you seek, you should strive to live life offline. the larger differentiator for self-hosting is control.

👤wallacoloo 🕑3y 🔼0 🗨️0

(Replying to PARENT post)

For laypeople self-hosting is out of the question for now. I'd say the more immediate problem is that even for competent engineers this is a difficult task with all the artificial restrictions put in place in the name of security, anti-fraud, etc.
👤kovac 🕑3y 🔼0 🗨️0

(Replying to PARENT post)

This. I think -- speaking for myself mostly -- folks move to the cloud for the simplicity a web interface provides. If you prefer, there's usually a CLI that abstracts the compute and other things. Self-hosting -- at least whenever I did it -- was always: start with a VM, install Linux, configure ports, configure security, install a web server, deal with the security of that, manage conf files, deploy the website, etc.

Hosting a static page on GitHub Pages makes all that a ton easier, and it's free.

That's a trite example, sure. But when I was at a previous company that did almost everything on premises, I couldn't help but think: if we had an internal portal/system a la GCP's or Amazon's console, where devs could spin up resources, have it all managed, and even work a bit programmatically (no, K8s doesn't solve all of this; it's its own bag of crazy), then we wouldn't need the cloud much, since we wouldn't need the almost infinite scale that cloud offers.

👤gigatexal 🕑3y 🔼0 🗨️0

(Replying to PARENT post)

The problem is certificates and WAN access, and the lack of mDNS on Android. There's basically no way to do anything that doesn't involve some manual setup, aside from developing a new purpose-built app and maintaining it in addition to the product, probably on two platforms.

If Mozilla still had FlyWeb things could be plug and play.

I have a set of proposals here to bring some of that back: https://github.com/WICG/proposals/issues/43

And some are considering the Tox protocol, but in general we have not solved the most basic issue of self-hosting: how do I connect to my device in a way that just works, LAN or WAN, without manually setting up the client or registering for a service?

👤eternityforest 🕑3y 🔼0 🗨️0

(Replying to PARENT post)

The only model I've ever seen work for this is the Mac/Windows model. You provide a standard installer for the server (or even distribute it via the app store). The user launches it through the standard graphical app launch flow (Finder or Start Menu), the server displays a suitably user-friendly GUI configuration panel, and then it minimises itself to the notifications tray.

The Linux model of "first learn how to use a package manager, edit configuration files by hand, and configure init scripts" is never going to be something that I can comfortably explain to computer users like my parents...

👤swiftcoder 🕑3y 🔼0 🗨️0

(Replying to PARENT post)

Most consumer platforms don't have functional automatic backups, so this is pie in the sky at the moment. Even for a professional, self-hosting is kind of time-consuming.
👤ClumsyPilot 🕑3y 🔼0 🗨️0

(Replying to PARENT post)

You can still self-host and use external resources to manage network and system security; you keep full control over the machine this way. Having professionals sensibly partition different resources into respective subnets is still one of the most valuable defense mechanisms against many threats.
👤raxxorrax 🕑3y 🔼0 🗨️0

(Replying to PARENT post)

Quite surprised at seeing CasaOS mentioned so often here. It's quite a young project, and as best I can tell it was sort of a side project by the folks sitting on their hands while trying to ship the ZimaBoard Kickstarter hardware during the chip shortage.

Good for them that it is seeing traction :)

👤Havoc 🕑3y 🔼0 🗨️0

(Replying to PARENT post)

Huh, ZimaBoard [0] (Hardware SBC project by the CasaOS people) looks super cool. Sadly still on pre-order, but that is almost exactly what I want.

[0]: https://www.zimaboard.com/

👤Semaphor 🕑3y 🔼0 🗨️0

(Replying to PARENT post)

> Self-hosting is something that we should be constantly iterating on making easier

I'm pretty sure that's exactly what we did and ended up where we are today. Any sufficiently-advanced self-hosting is indistinguishable from AWS?

I'm not sure how joking I am.

👤fknorangesite 🕑3y 🔼0 🗨️0

(Replying to PARENT post)

tailscale is strong for network-centric use cases.

openziti is strong for app-centric use cases - put the (programmable, zero trust) network into your self-hosted app (via SDKs for various languages), rather than putting the app on the network.

https://openziti.github.io/ (quick starts)

https://github.com/openziti

disclosure: founder of company selling saas on top of openziti

👤gz5 🕑3y 🔼0 🗨️0

(Replying to PARENT post)

It's pretty easy to write a unit file for a service and install/use it. A layperson could easily follow a guide with just a few of the most common cases.
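
A minimal sketch of such a unit file; the service name, user, and binary path here are hypothetical, stand-ins for whatever you're actually running:

```ini
# /etc/systemd/system/myapp.service -- hypothetical example
[Unit]
Description=My self-hosted app
After=network-online.target
Wants=network-online.target

[Service]
User=myapp
ExecStart=/usr/local/bin/myapp --port 8080
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Then it's just `systemctl daemon-reload`, `systemctl enable --now myapp`, and `journalctl -u myapp` for logs.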
👤karmakaze 🕑3y 🔼0 🗨️0

(Replying to PARENT post)

There was a time when everyone and their brother were self-hosting: Napster, Kazaa, Hotline, etc. Why has this trend stalled for 20 years?
👤jdrc 🕑3y 🔼0 🗨️0

(Replying to PARENT post)

What about the hardware side of this? All this talk about software...
👤pishpash 🕑3y 🔼0 🗨️0

(Replying to PARENT post)

Can CasaOS run compose stacks, or only single containers?
👤Phlogi 🕑3y 🔼0 🗨️0

(Replying to PARENT post)

First off, it's systemd, not systemD. And it's not complicated; in fact, it's simpler than writing shell scripts.

Portainer is a bad Docker clone, and more often than not it doesn't work with Docker scripts. But containerization is the wrong way anyway. If you can't set up a certain service on your machine, you should not expose that host to the internet. That might sound arrogant, but only to the inexperienced mind. Packaging everything in a Docker image doesn't remove the burden of responsibility of operating an internet-exposed host.

Idk what you're even trying to say. It feels like you're just throwing buzzwords out there without knowing what they mean or what they're for. If you want a VPN, WireGuard is your choice. If you need a gatekeeper/firewall, OPNsense or OpenWrt.
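
A WireGuard setup really is just a key pair plus a small config file. A sketch of a server-side wg-quick config; the keys, addresses, and port below are placeholders, not working values:

```ini
# /etc/wireguard/wg0.conf -- illustrative only; generate real keys with `wg genkey`
[Interface]
Address = 10.0.0.1/24
ListenPort = 51820
PrivateKey = <server-private-key>

[Peer]
# a roaming laptop or phone
PublicKey = <client-public-key>
AllowedIPs = 10.0.0.2/32
```

Bring it up with `wg-quick up wg0` and you have your mesh-style overlay without any third-party coordination service.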
👤ddaalluu2 🕑3y 🔼0 🗨️0