(Replying to PARENT post)
For example, Dart prioritized familiarity: its designers wanted to make it easy to pick up for people who already know JavaScript, C#, Java, or ActionScript.
JavaScript was also made to look somewhat like Java, and "Java" was even put right in the name to make it more appealing to the masses.
Creating completely new syntax is a very risky move which will always hinder adoption.
(Replying to PARENT post)
I personally prefer to omit type information from naming, so for example I would declare a username text field of a view controller like so: `let username = UITextField()`. Other devs might declare the text field as `usernameTextField`, and somewhere else declare a variable called `username` to hold the string from the text field, but now you have a view controller concerned with both the text field and the data from the text field. By naming the text field simply `username`, I force myself not to have a `username` value anywhere else in this particular view controller, which forces me to entirely separate these concerns. I can elaborate on this if someone is interested in trying to get it working in practice.
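Roughly what I have in mind, as a minimal sketch assuming UIKit (the controller and method names are made up for illustration):

```swift
import UIKit

// The property named `username` IS the text field, so a separate
// `username: String` cannot coexist on this controller.
final class LoginViewController: UIViewController {
    // The field itself takes the plain name...
    let username = UITextField()

    @objc private func submit() {
        // ...and the string is read at the point of use and handed off
        // immediately, rather than being stored on the controller.
        sendLogin(username: username.text ?? "")
    }

    private func sendLogin(username: String) {
        // Illustrative stub: hand the value to whatever owns the data concern.
    }
}
```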
(Replying to PARENT post)
For starters, syntax drives how I interact with a language as much as - maybe more than - semantics. How expressions are laid out is intensely important to me, as it affects how I remember and visualise the code. I can visualise the layout of code I have not worked with in years when the syntax is clear, and the code is well formatted.
I can work around painful semantics and find ways to pretend they don't exist by avoiding features or picking patterns that work better; but painful syntax usually stares me in the face every moment I work with a language.
I have more than once rejected or picked languages based on syntax. E.g. I can't look at a Python program without getting annoyed with the syntax, and I avoid using the language whenever possible because of it; conversely, I work with Ruby whenever I can for the same reason (though the language geek in me wants to cry whenever I think about the Ruby grammar).
I also reject the idea of avoiding hand-written parsers to start with. I sympathise a bit with the idea - I can see the appeal of quickly testing changes with a parser generator - and certainly, if you hand-write a parser, you need to avoid the temptation of adding all kinds of awful exceptions.
E.g. I love Ruby as a user of the language, but the MRI parser is beyond awful, and I think the syntax could have kept most of the nice aspects and avoided most of the awful syntactical warts with a bit more discipline ("favorite" wart at the moment: '% x ' parses to the literal string "x" - when "%" is not preceded by an operand that makes it the infix operator "%", it starts a quote-sequence where the following character indicates what the quote character should be - with the exception of a few special characters, most characters set the quote character to themselves. So in '% x ', the quote character is a space).
MRI does use a Bison parser, but it contains thousands of lines of handwritten exceptions, demonstrating both the bad parts of hand-writing irregular exceptions into parsers, and how easily you can mess things up even when using a parser generator, if you have one that isn't strict enough.
But to me, if your hand written parser becomes big and/or problematic to maintain, you're designing a language that will be problematic to parse cleanly, and it's probably worth revising your grammar (I wish this rule had been adhered to for Ruby).
Nice, regular, clean grammars tend to lend themselves very well to small, compact hand-written parsers. For this reason, in practice I've never run into a situation where a grammar change required major rewrites of a parser in any project I've worked on, unless the rule deviated majorly from what I'd consider good practice in language design, in ways that would cause problems for most parser generators too.
Modularising a hand-written parser along the lines of the grammar rules is easy, and few changes cut so deeply across grammar rules as to make this difficult.
But what a hand-written parser tends to get you over a parser generator is a better ability to do clean error reporting, and more ease of introspecting how parser changes actually change the processing in ways that are meaningful to mortals. To me at least, this is a lot more difficult to do with every parser generator I've tried (and I keep hoping to be proven wrong; I've tried writing my own too, to try to prove myself wrong, and so far I've failed to come up with something I consider a usable replacement for hand-written parsers - you certainly can come up with something expressive enough, but it tends to end up being verbose enough to lose most of the benefit over clean code in the target language, which at least saves you from having to deal with the idiosyncrasies of the generator).
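To illustrate what I mean by modularising along the grammar rules and getting decent errors, here's a toy sketch - a made-up, tiny expression grammar, not code from any real project:

```swift
// Toy grammar (invented for illustration):
//   expr -> term (("+" | "-") term)*
//   term -> NUMBER | "(" expr ")"
struct ParseError: Error { let message: String; let position: Int }

struct Parser {
    private let tokens: [String]
    private var pos = 0

    init(_ tokens: [String]) { self.tokens = tokens }

    // One function per grammar rule keeps the parser modular:
    // a change to `term` rarely touches `expr`.
    mutating func expr() throws -> Int {
        var value = try term()
        while let op = peek(), op == "+" || op == "-" {
            pos += 1
            let rhs = try term()
            value = (op == "+") ? value + rhs : value - rhs
        }
        return value
    }

    mutating func term() throws -> Int {
        guard let tok = peek() else {
            throw ParseError(message: "expected a number or '(', found end of input", position: pos)
        }
        if tok == "(" {
            pos += 1
            let value = try expr()
            try expect(")")
            return value
        }
        if let n = Int(tok) { pos += 1; return n }
        // Hand-written parsers make it easy to say *what* was expected *where*.
        throw ParseError(message: "expected a number or '(', found '\(tok)'", position: pos)
    }

    private func peek() -> String? { pos < tokens.count ? tokens[pos] : nil }

    private mutating func expect(_ tok: String) throws {
        guard peek() == tok else {
            throw ParseError(message: "expected '\(tok)'", position: pos)
        }
        pos += 1
    }
}
```

The point is that the code reads like the grammar, so a grammar change maps directly onto one or two functions.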
To me, the "solutions" offered demonstrate exactly why syntax matters:
I deeply admire Forth and Lisp and their descendants on a technical level, but the syntax has always been a massive barrier to me for both language families. I chose an s-expression inspired syntax to kick off my own compiler project, basically treating it as a serialization format for the parse tree and only adding a parser on top later, but I did that first to be able to toy with the semantics of something I didn't intend to make into its own language, and then to act as the "guts" of my in-progress Ruby compiler, not because I'd be willing to work with it more than that.
If anything, I've found it incredibly painful to work with, and I'd never have "held out" for very long without bolting a more human-friendly parser on top very early on. The experience has made me more insistent - not less - that if you don't start with the syntax, you should at the very least co-evolve semantics and syntax from the outset.
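A toy sketch of what I mean by treating s-expressions as a serialization format for the parse tree (the node names are made up):

```swift
// The tree is defined in the host language; its textual form is just
// nested parentheses, so no "real" surface syntax is needed at first.
indirect enum Node {
    case atom(String)
    case list([Node])

    var sexpr: String {
        switch self {
        case .atom(let s): return s
        case .list(let children):
            return "(" + children.map { $0.sexpr }.joined(separator: " ") + ")"
        }
    }
}

// (def square (x) (* x x)) - the "syntax" is the parse tree itself.
let square = Node.list([.atom("def"), .atom("square"),
                        .list([.atom("x")]),
                        .list([.atom("*"), .atom("x"), .atom("x")])])
print(square.sexpr)  // (def square (x) (* x x))
```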
(Replying to PARENT post)
As soon as you do so you will realize that all modern programming languages are as dumb as a sack of rocks, and the functionality you are coding is trivial.
You can then design the syntax, which is what separates your language from every other dumb as a sack of rocks language.
Quick, name a language where I can do something simple like mention "Whenever this function is called, make sure there is enough free memory (at least ---- MB) before actually calling it; if there isn't, first swap out any objects from memory to disk, starting with the ones that had been used the longest time ago, and after the function has finished running, swap those back into memory."
That's pretty straightforward and well-specified. Name a language I can do that in.
Or how about this: "analyze this library and include a logically simplified version that only needs to address these cases:" (followed by a list of conditions). What language will even try to simplify included source code?
Or take debugging. Name a language I can add this line to: "if variables A and B ever both change as a result of the same function call, print the following debugging message." That's trivial to state. Name a language that can do it.
Languages are dumb. They do almost nothing. I can't wait for the future, it can't get here fast enough.
(Replying to PARENT post)
My growing belief is that if you can't express your language using a Pratt parser, you need to rethink what you are doing. Once you grok them and get your first one up, they're more extensible than anything else - far simpler than a parser generator and very easy to write in any language.
http://javascript.crockford.com/tdop/tdop.html
http://journal.stuffwithstuff.com/2011/03/19/pratt-parsers-e...
http://www.oilshell.org/blog/2016/11/01.html
http://effbot.org/zone/tdop-index.htm
Several of them link to each other. Thorsten Ball's book where you write an interpreter in Go uses one too, to parse his own language Monkey.
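A minimal sketch of the technique in the spirit of those write-ups (the token handling and binding powers are made up for illustration; a real parser would also handle prefix operators and parentheses):

```swift
// Minimal Pratt (top-down operator precedence) expression parser sketch.
indirect enum Expr {
    case number(Int)
    case binary(String, Expr, Expr)
}

struct PrattParser {
    private let tokens: [String]
    private var pos = 0

    init(_ tokens: [String]) { self.tokens = tokens }

    // Higher binding power binds tighter ("*" over "+").
    private func bindingPower(_ op: String) -> Int? {
        switch op {
        case "+", "-": return 10
        case "*", "/": return 20
        default: return nil
        }
    }

    mutating func parseExpression(minPower: Int = 0) -> Expr? {
        // "nud": a token with nothing to its left - here just number literals.
        guard let tok = next(), let n = Int(tok) else { return nil }
        var left = Expr.number(n)

        // "led": keep consuming infix operators while they bind tightly enough.
        while let op = peek(), let power = bindingPower(op), power > minPower {
            pos += 1
            guard let right = parseExpression(minPower: power) else { return nil }
            left = .binary(op, left, right)
        }
        return left
    }

    private func peek() -> String? { pos < tokens.count ? tokens[pos] : nil }
    private mutating func next() -> String? {
        defer { pos += 1 }
        return peek()
    }
}

// Usage: "1 + 2 * 3" parses as 1 + (2 * 3) because "*" has higher binding power.
var parser = PrattParser(["1", "+", "2", "*", "3"])
if let tree = parser.parseExpression() { print(tree) }
```

Adding a new infix operator is one line in the binding-power table, which is where most of the "more extensible than anything else" feeling comes from.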