(Replying to PARENT post)

This whole debate is always so bewildering. Programming paradigm fanboys get into heated arguments about which model is the "best" one, but actual computer science research uses myriad different models of computation, usually endeavoring to select the one that is most convenient for the given purpose.

Sometimes that could mean using the lambda calculus, particularly in the study of language theory and type systems. Other times it could mean some sort of black-box model, such as when proving lower bounds on solving problems using specific operations (see e.g. the comparison-based sorting lower bound). Yet other times, such as when establishing the ground zero of some new variety of computational hardness, I can't think of a more suitable model to cut into pieces and embed into the substrate of another problem than one based on Turing machines.
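
(For anyone unfamiliar with that sorting lower bound, here's a minimal LaTeX sketch of the standard decision-tree counting argument; this is the usual textbook derivation, not anything specific to this thread:)

    \documentclass{article}
    \usepackage{amsmath}
    \begin{document}
    % Decision-tree argument for the comparison-sorting lower bound.
    Any comparison sort of $n$ distinct keys can be viewed as a binary
    decision tree whose leaves are the $n!$ possible input orderings,
    so its worst-case comparison count $h$ is at least the tree's height:
    \[
      h \;\ge\; \log_2(n!) \;\ge\; \log_2\!\left(\tfrac{n}{2}\right)^{\!n/2}
        \;=\; \tfrac{n}{2}\,\log_2\tfrac{n}{2} \;=\; \Omega(n \log n).
    \]
    % The middle step uses n! >= (n/2)^(n/2), since the largest n/2
    % factors of n! are each at least n/2.
    \end{document}

Note this bound is relative to the black-box model: it only counts comparisons, which is exactly what makes it the convenient model for this purpose.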

๐Ÿ‘คfnrslvr๐Ÿ•‘4y๐Ÿ”ผ0๐Ÿ—จ๏ธ0

(Replying to PARENT post)

> Programming paradigm fanboys get into heated arguments about which model is the "best" one

My original comment certainly reads that way, but my intent was really to point out that it doesn't make sense to privilege the Turing machine model in the study of computation. I wrote more about why in this comment: https://news.ycombinator.com/item?id=27334163

๐Ÿ‘คantonvs๐Ÿ•‘4y๐Ÿ”ผ0๐Ÿ—จ๏ธ0

(Replying to PARENT post)

Well, computing science studies programming paradigms, so defining them and analyzing what makes them suitable for which purposes is pretty much within its scope.

As I said above, it may very well be that the best use for Turing machines is in mathematical proofs, where the efficiency of the computation is not a concern.

๐Ÿ‘คTuringTest๐Ÿ•‘4y๐Ÿ”ผ0๐Ÿ—จ๏ธ0