(Replying to PARENT post)

There's no inherent reason why doing lazy evaluation in an eager-by-default language should be any harder than the reverse. I can imagine a language that is eager by default but has a '#' operator that holds expressions in unevaluated form until terms are requested.
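To make the idea concrete, here is a minimal sketch of what such a '#' operator could desugar to in a strict language, written as an explicit thunk in Python; the Thunk class and its force method are hypothetical, not a feature of any particular language:

```python
# Minimal sketch (hypothetical): an explicit thunk standing in for a '#'
# operator in a strict language. Nothing runs until force() is called.
class Thunk:
    def __init__(self, compute):
        self._compute = compute   # zero-argument function, not yet run
        self._done = False
        self._value = None

    def force(self):
        # Evaluate at most once and cache the result.
        if not self._done:
            self._value = self._compute()
            self._done = True
        return self._value

expensive = Thunk(lambda: sum(range(10**7)))  # nothing computed yet
total = expensive.force()                     # work happens here, on demand
```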
👤cvoss 🕑3y 🔼0 🗨️0

(Replying to PARENT post)

The combination of lazy evaluation and state mutation/side effects can be pretty difficult to reason about. For example, if you have a function that changes a global variable as part of a lazy computation, then once that function could have been called, you have no way of knowing whether or when that global variable will change in the future. If you have other functions that depend on the value of that variable, their future behavior is now much more challenging to reason about than in a strict language. You can also imagine something akin to a race condition, in which multiple lazy computations could eventually set that variable to different values, and the actual sequence of state transitions depends entirely on the dependency order of a possibly unrelated piece of code. In practice, this means that in languages that are strict by default, lazy computations are often forced to run so that the code remains possible to reason about, rather than because the actual results of the computation are required.
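A hypothetical Python sketch of that ordering problem: two deferred computations both write to the same global, so the final value depends on which one unrelated code happens to force first.

```python
# Hypothetical sketch: two deferred computations both mutate the same
# global, so the final value of `config` depends entirely on which one
# some unrelated piece of code happens to force first.
config = "default"

def from_cache():
    global config
    config = "cached"
    return 1

def from_network():
    global config
    config = "network"
    return 2

pending = [from_cache, from_network]   # thunks waiting to be forced

pending[1]()    # config is now "network"
pending[0]()    # ...and now "cached": forcing order decided the outcome
print(config)   # prints "cached"
```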

Since pure functions compute the same results under lazy or strict evaluation and require that any data dependencies they have are explicitly provided as inputs, they interact with lazy computations in a much more tractable way. This means that adding a strictness operator to a lazy language is much easier than adding a laziness operator to a strict language.
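By contrast, here is a small (hypothetical) sketch of why pure thunks are tractable: forcing them in any order, or not at all, can only change when work happens, never what is computed.

```python
# Sketch: pure thunks carry all their inputs with them, so forcing order
# cannot change any observable result, only when the work happens.
area = lambda: 6 * 7
greeting = lambda: "hello, " + "world"

# Force in either order, or skip one entirely; the values are the same.
assert greeting() == "hello, world"
assert area() == 42
```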

An alternate approach is what Python did with generators: there is a data type for lazy computation, but it lives apart from the rest of the language, so it is mostly used for things like stream processing, where a lazy-by-default approach is conceptually straightforward and less likely to lead to extremely non-trivial control flow. This approach does, however, basically give up on having a laziness operator that will turn a strict computation into a lazy one.
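For example, a typical generator pipeline computes only as much of the stream as the consumer actually asks for (a small illustrative sketch, not from the post):

```python
import itertools

# Python's built-in flavor of laziness: a generator produces elements
# only as they are consumed, which fits stream processing well.
def naturals():
    n = 0
    while True:
        yield n
        n += 1

def squares(xs):
    for x in xs:
        yield x * x

first_five = list(itertools.islice(squares(naturals()), 5))
print(first_five)   # [0, 1, 4, 9, 16]; nothing past the fifth element is computed
```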

👤chas 🕑3y 🔼0 🗨️0

(Replying to PARENT post)

Sadly, even though what you say is true, I do not know of any languages that get that right. Even the ones that care enough to give you a way to make lazy values require you to explicitly wrap and force them all over the place, making lazy programming so noisy it's effectively unusable. Which is the main reason why I still find myself coming back to Haskell (stuff like parser combinators is just so unwieldy without laziness and do notation...)
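For a feel of that noise, here is roughly what explicit wrapping and forcing looks like when threading laziness through a data structure in a strict language; Lazy and force here are hypothetical stand-ins, not any particular library:

```python
# Hypothetical Lazy/force helpers: every producer has to wrap its tail,
# and every consumer has to force it, which quickly gets noisy.
class Lazy:
    def __init__(self, compute):
        self._compute, self._value, self._done = compute, None, False

    def force(self):
        if not self._done:
            self._value, self._done = self._compute(), True
        return self._value

def ints_from(n):
    # An "infinite list" as a (head, lazy tail) pair.
    return (n, Lazy(lambda: ints_from(n + 1)))

def take(k, stream):
    out = []
    while k > 0:
        head, tail = stream
        out.append(head)
        stream = tail.force()      # explicit forcing at every step
        k -= 1
    return out

print(take(3, ints_from(0)))       # [0, 1, 2]
```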
👤shirogane86x 🕑3y 🔼0 🗨️0

(Replying to PARENT post)

> There's no inherent reason why doing lazy evaluation in an eager-by-default language should be any harder than the reverse.

Lazy by default enables better composition and local reasoning.

https://publish.obsidian.md/paretooptimaldev/Advantages+of+l...
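A tiny generator-based illustration of the composition point (hypothetical example, not from the linked page): the consumer decides how much the producer computes, so each piece can be written without knowing the other's needs.

```python
# The consumer (any) decides how much the producer computes; neither side
# needs to know anything about the other's demand.
def squares():
    for n in range(10**9):        # nobody ever walks this whole range
        yield n * n

print(any(s > 100 for s in squares()))   # True; stops as soon as 11*11 exceeds 100
```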

👤ParetoOptimal 🕑3y 🔼0 🗨️0

(Replying to PARENT post)

While you're technically right, it's only in the same sense that any Turing-complete language can do anything any other Turing-complete language can -- you can theoretically always delay a computation behind a lambda, but the ergonomics have wide-ranging consequences. You also don't get any memoization that way, at least not without further runtime support.
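A small illustrative sketch of both points in Python (hypothetical example): a bare lambda only delays the work, while sharing the result needs extra machinery such as a cache.

```python
from functools import lru_cache

# Delaying behind a lambda: the work is deferred, but redone on every call.
slow = lambda: sum(i * i for i in range(10**6))
slow(); slow()              # the sum is computed twice

# Sharing the result takes extra machinery; a cache is one way to get it.
@lru_cache(maxsize=None)
def slow_once():
    return sum(i * i for i in range(10**6))

slow_once(); slow_once()    # computed once; the second call is a cache hit
```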

There was a good Reddit thread recently where a lot of experts weighed in: https://www.reddit.com/r/haskell/comments/wpbs4z/what_things...

I recommend reading the thread -- lots of great stuff in there, but...

TL;DR: It's a lot more complicated than it seems at first and it's far from obvious that strict-by-default is worth the cost.

👤Quekid5 🕑3y 🔼0 🗨️0