(Replying to PARENT post)
People tend to lump Node and Erlang together because they both avoid shared-state concurrency. But they're completely opposite approaches: Erlang has concurrency but no shared state. Node has shared state but no concurrency.
Not that you'd get this from the article.
(Replying to PARENT post)
But how's Node.js "backwards"?
It seems to me that the author has picked on one statement by one individual and turned it into link-bait.
(Replying to PARENT post)
This is what I was thinking when reading all the discussions about Node.js (mainly here, at HN).
Disclaimer: I don't program in JS, and I won't touch it without a six-foot pole (read: a code generator from some higher-level strongly-typed language).
(Replying to PARENT post)
(Replying to PARENT post)
By "actor-like way" here I just mean a code module ("actor") sees one thread (at a time), and the runtime takes care of the details of scheduling threads when a module has an event/message/request to process. Also I guess avoiding callbacks. But you could be more Erlang-ish/Akka-ish in more details if you wanted.
node.js punts this to the app developer to instead run a herd of processes. In most cases that's probably fine, but in theory with one process and many threads, the runtime can do a better job saturating the CPU cores because it can move actors among the threads rather than waiting for the single-threaded process an actor happens to be in to become free. The practical situations where this comes up, I admit, are probably not that numerous as long as you never use blocking IO and are basically IO-bound. (Only CPU-intensive stuff would cause a problem.)
btw this has been hashed out to death on the node.js list: http://groups.google.com/group/nodejs/browse_thread/thread/c...
(Replying to PARENT post)
(Replying to PARENT post)
(Replying to PARENT post)
Could someone familiar with Erlang please clarify:
"To understand why this is misleading, we need to go over some background information. Erlang popularized the concept of lightweight processes (Actors) and provides a runtime that beautifully abstracts the concurrency details away from the programmer. You can spawn as many Erlang processes as you need and focus on the code that functionally declares their communication. Behind the scenes, the VM launches enough kernel threads to match your system (usually one per CPU) "
In common Unix tools like 'ps' and 'top' the term 'Lightweight Process' is used as a synonym for OS thread, eg, the LWP column in 'ps -eLf' shows the thread ID.
In this article, LWPs seem to be different from threads? Is this correct? If they're not threads, what are they?
(Replying to PARENT post)
Lulz? Here's a much simpler explanation: it's a polling server. It's not an intentional approximation of this or that (Erlang); this is just how event loops using select/poll/epoll/kqueue have always worked. Unless you want to do a bunch of extra work and throw in per-core preforking/threading and scrap the libev dependency Node is built upon.
(Replying to PARENT post)
However, I don't think the inability of current JavaScript to do async I/O without callbacks is Node's biggest problem. As others have said, it works for smaller projects (and even has some geek appeal). And as Havoc Pennington and Dave Herman have explained, generators (which are coming with ECMAScript Harmony) and promises will eventually provide a very nice solution. So Node has a path to grow out of the callback model without giving up its single threaded paradigm.
http://blog.ometer.com/2010/11/28/a-sequential-actor-like-ap...
http://blog.mozilla.com/dherman/2011/03/11/who-says-javascri...
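The shape of that generator-plus-promise solution is easy to sketch (hedged: a toy runner, not task.js or any real library; `run` and `fetchUser` are illustrative names):

```javascript
// A tiny "runner" drives a generator: each yielded promise is awaited,
// and its resolved value is fed back in, so async steps read sequentially
// instead of as nested callbacks.
function run(genFn) {
  const gen = genFn();
  return new Promise((resolve, reject) => {
    function step(input) {
      const { value, done } = gen.next(input);
      if (done) return resolve(value);
      Promise.resolve(value).then(step, reject);
    }
    step(undefined);
  });
}

// A fake async call standing in for real I/O.
const fetchUser = (id) => Promise.resolve({ id, name: 'ada' });

run(function* () {
  const user = yield fetchUser(42); // reads like blocking code,
  return user.name;                 // but never blocks the event loop
}).then((name) => console.log(name)); // prints "ada"
```

A production version would also wire promise rejections into `gen.throw()` so `try/catch` works inside the generator.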
The bigger problem (which I don't see getting solved anywhere down the road) is the lack of preemptive scheduling, which is available in Erlang or on the JVM. What you see under high load with Node is that latency is spread almost linearly over a very wide spectrum, from very fast to very slow, whereas response times on a preemptively scheduled platform are much more uniform.
http://jlouisramblings.blogspot.com/2010/12/differences-betw...
And no, this is something that can't be solved by distributing load over multiple CPU cores. The problem really manifests itself within each core, and it is a direct consequence of Node's single-threaded execution model. If anybody knows how to solve this without resorting to some kind of preemptive threading, I'd be very curious to hear about it.
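The effect is easy to reproduce (a hedged toy demo, not a benchmark): with no preemption, one CPU-heavy callback holds the single thread and every queued callback waits behind it, which is exactly what smears the latency distribution.

```javascript
// A timer due in 10ms can only fire once the thread is free; a synchronous
// ~200ms busy loop scheduled before it delays it by ~200ms. On a
// preemptively scheduled runtime the timer wouldn't have to wait.
const start = Date.now();

setTimeout(() => {
  const delay = Date.now() - start;
  console.log('timer due at 10ms actually fired after ' + delay + 'ms');
}, 10);

// Simulate a CPU-bound request handler: spin without yielding.
const busyUntil = Date.now() + 200;
while (Date.now() < busyUntil) { /* burn CPU, never yield to the loop */ }
```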
(Replying to PARENT post)
What are some good web libraries for Erlang?
(Replying to PARENT post)
(Replying to PARENT post)
(Replying to PARENT post)
This is like saying "This Honda Civic clearly sucks compared to my helicopter. Let me write you an article about everything my helicopter does better than your Civic." Clearly the Civic was built for another purpose, so the comparison is void.
(Replying to PARENT post)
(Replying to PARENT post)
This is a blanket statement that perhaps displays the author's opinion on Javascript as a language itself.
(Replying to PARENT post)
Unless you write your own Objective-C http server and run it on Mac OS X Server (it's not that hard, I've done it), this isn't very useful for Web programming. However, if you're comparing the languages / frameworks themselves (you can use all three to code command line tools, for example), GCD becomes a very seductive option.
GCD works by throwing code blocks (obj-c closures) into queues, and letting the runtime do its magic. You can have it execute code synchronously, asynchronously, or in parallel.
GCD will optimize and distribute your blocks across the available CPU cores. You can even enumerate using blocks, and instead of doing loop iterations one by one, it'll distribute them to the cores in parallel.
[1]: http://en.wikipedia.org/wiki/Grand_Central_Dispatch [2]: http://developer.apple.com/library/ios/#documentation/cocoa/...