gabrielgrant
Joined in 2010
114 Karma
28 posts
(Replying to PARENT post)
Much of the research (and the FCC's regulatory regime) has traditionally used a model based on cell PHONE usage, i.e., testing for the effects of relatively short periods of exposure, with the phone beside the head, where the skull acts as a shield[1]. But these assumptions aren't applicable to smartphone usage today, where 90% of people under 35 have been sleeping with their phones for years[2], and most usage involves the screen in front of the face, where there are large areas of only soft tissue (eyes & nasal cavity) between the device and your brain.
Furthermore, most testing centers largely on Specific Absorption Rate (SAR), which is basically concerned with the question of power (i.e., heat) transmitted to the body (basically "are the microwaves cooking you?"). "Cooking" is a fairly well-understood process, and tissue's ability to dissipate heat can be fairly easily modeled to ensure exposure stays within a safe range. But several of the concerns this meta-analysis points to have a dose-response curve that is not so simple or clearly understood, meaning the effects could well occur even within the usage patterns deemed "safe" on a SAR basis (their language is actually stronger, claiming these "should be considered [...] established effects of Wi-Fi").
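For reference, the quantity SAR measures is simple to state: power absorbed per kilogram of tissue, SAR = sigma * |E|^2 / rho at a point. A quick illustration (the tissue values I'm plugging in below are ballpark figures for brain-like tissue, not numbers from the paper):

    # Point SAR = sigma * |E|^2 / rho  (watts absorbed per kg of tissue)
    #   sigma:   tissue conductivity (S/m)
    #   e_field: RMS electric field strength (V/m)
    #   rho:     tissue density (kg/m^3)
    def sar(sigma, e_field, rho):
        return sigma * e_field ** 2 / rho

    # Illustrative, assumed values only -- not measured exposure data:
    print(sar(sigma=1.4, e_field=50.0, rho=1040.0))  # ~3.4 W/kg

The point is that this is purely a heating number: the FCC's 1.6 W/kg limit (averaged over 1 g of tissue) caps this quantity and says nothing about non-thermal, dose-dependent effects of the kind the meta-analysis raises.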
All that being said, I don't see this as a definitive answer that our current exposure levels of RF radiation are necessarily harmful, but I have definitely wondered whether our (grand)children's generation will look back at images of us staring at our screens the way we look at images of our parents' generation frolicking in clouds of DDT[3] (I've also wondered this, more often, about spray-on sunscreen ads[4], but that's a whole other rant).
----
[1]: Most industry recommendations make the laughable assumption that the phone is not even in contact with the body. Manufacturers basically threw a shit-fit when Berkeley started trying to inform people that most "normal", against-the-body usage was likely outside of what FCC exposure guidelines test for: https://arstechnica.com/tech-policy/2017/04/berkeleys-cellph...
[2]: https://www.businessinsider.com/90-of-18-29-year-olds-sleep-...
(Replying to PARENT post)
It seems to be roughly the same idea, but without any fees at all (I haven't used any of them, so I'm likely missing something)
(Replying to PARENT post)
Its primary author is Yehuda Katz, the man behind (much of) Rails 3.
(Replying to PARENT post)
Faults are handled differently depending on whether they are ZeroRPC-layer errors or application errors. To maintain the integrity of the connection itself, there is a heartbeat system independent of any given request. There is also an optional timeout that can be set for a given call's response. Application-level errors are propagated as "RemoteError" exceptions in the python interface. In order to collect more info about remote errors, there is also support for ZeroRPC[0] in Raven[1], the Sentry[2] client (Disqus' error logging system). In any failure case, both sides of the connection are notified and given an opportunity to clean up after themselves.
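For a sense of what that looks like from the Python side, here's a minimal sketch (the endpoint and the compute() method are made up; the client options and exception classes are the ones I recall zerorpc exposing):

    import zerorpc

    # timeout and heartbeat are per-client options; values here are arbitrary
    client = zerorpc.Client(timeout=10, heartbeat=5)
    client.connect("tcp://127.0.0.1:4242")  # hypothetical endpoint

    try:
        result = client.compute("some-input")  # hypothetical remote method
    except zerorpc.RemoteError as e:
        # an application-level exception on the server, re-raised locally
        # with the remote exception's name and message
        print("remote application error:", e.name, e.msg)
    except zerorpc.TimeoutExpired:
        # no response to this particular call within the timeout
        print("call timed out")
    except zerorpc.LostRemote:
        # heartbeat failure: the other side of the connection went away
        print("lost the remote")

The useful property is that the three failure modes (application error, slow response, dead peer) surface as distinct exception types rather than one opaque connection error.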
As to the philosophical concerns, I do agree on some level: RPC in general (and ZeroRPC in particular) is a powerful weapon that needs to be treated with care. That being said, there are a number of cases that are greatly simplified by the higher-level abstractions and more-robust error handling ZeroRPC provides.
------
[0]: http://raven.readthedocs.org/en/latest/config/zerorpc.html
[1]: https://github.com/dcramer/raven
[2]: https://github.com/dcramer/sentry
(Replying to PARENT post)
With dotCloud, you start higher up: list the components (e.g., a Python web frontend, MySQL, and Redis) that make up your actual application, push your code, and you're handed back a URL with your app running (a sketch of what that listing looks like is below). A single command lets you scale out for reliability, with load balancing across your multiple web front-ends and master-slave replication for your databases. Running a single physical server? Sure, not that hard. Setting up reliable, automated failover for every component in your stack? That's a bit more work.
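If memory serves, "listing the components" was literally a few lines of YAML in the dotcloud.yml build file; the service names below are invented and the syntax is from memory, so treat this as a sketch rather than gospel:

    # hypothetical dotcloud.yml: one entry per service in your stack
    www:
      type: python
    db:
      type: mysql
    cache:
      type: redis

From there, deploying was roughly "dotcloud push", and scaling was something like "dotcloud scale www=3".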
In the end, it's really a question of the value of your time. Can you do all your own sysadmin work? Sure. You could also build and maintain your own hardware, but from the sounds of it, you've decided that work is worth outsourcing to Hetzner.
In the same way, for a lot of people, wasting time administering servers is just time taken away from building a real business: a distraction that is worth paying a few extra dollars a month to make disappear.
(Replying to PARENT post)
The latter have an easier time integrating into existing client-side frameworks (Backbone, Ember, Angular, etc.), since those mostly seem to be built with a REST-based model in mind, but it will be interesting to see what patterns emerge for plugging push-based updates into these pull-oriented systems. For starters, I suspect we'll need the client-side frameworks to clarify the distinction between server and client state and the source of changes. Henrik Joreteg's talk at BackboneConf[1] was a good overview of this and other problems they've run into.
[1]: https://speakerdeck.com/u/henrikjoreteg/p/real-world-realtim...
(Replying to PARENT post)
There is an extensive comparison of the two processes in [1]. Specifically:
> the Bankruptcy Code provides trustees the authority to avoid, that is, claw-back or reverse, certain transfers (subject to certain limitations) made by debtors
> the FDIC as conservator or receiver may not avoid (i.e., reverse or claw-back) any property transfer pursuant to a qualified financial contract unless the transfer was performed with the "actual intent to hinder, delay, or defraud."
[1]: https://www.everycrsreport.com/reports/R40530.html