Going back to Kate's message that started this branch[^], we were talking about C#.
Since 2013, all C# design discussions have been public on GitHub:
GitHub - dotnet/csharplang: The official repo for the design of the C# programming language[^]
Earlier discussions were not made public, but some information was available on various MSDN blogs.
The language was originally designed by a team led by Anders Hejlsberg.
As for C/C++, the details of those decisions are likely lost in the murky depths of the dim and distant past of 1972, unless Dennis Ritchie kept some notes somewhere.
"These people looked deep within my soul and assigned me a number based on the order in which I joined."
- Homer
JavaScript is obviously dynamically typed, but as I gravitate towards statically typed languages, I'll go for a language like OCaml, Haskell, or F#. They all define record types similarly, but with slightly different keywords and syntax.
I'll give an example in Haskell:
data Location = Location { city :: String }
let home = Location { city = "Boston" }
and in F#:
type Location = { City : string }
let home : Location = { City = "Boston" }
let other_home = {| City = "Boston" |}
johnywhy wrote: I mean, easy to understand at a glance.
I find Haskell's syntax pretty clear for pure functions. Having pattern matching brought out to the top level of function definitions makes it similar to how you might write a piece-wise definition of the function. Having function signatures be optional (the compiler will work out what the signature should be if you leave it out) leaves code less cluttered:
factorial 1 = 1
factorial n = n * factorial (n-1)
list_length [] = 0
list_length (first_element:rest) = 1 + list_length rest
and F# again:
let rec fac n =
    match n with
    | 1 -> 1
    | n -> n * fac (n - 1)

let rec length list =
    match list with
    | [] -> 0
    | first :: rest -> 1 + length rest
johnywhy wrote: What's "Concise"?
johnywhy wrote: What's "Practical"
I'll probably select F#. It's a .NET language, so it can use .NET libraries. I like how the various features work together to provide powerful facilities with little code. An example: some code taken from the Microsoft website to download a bunch of URLs in parallel, with async/await:
open System.Net
open Microsoft.FSharp.Control.WebExtensions

let urlList = [ "Microsoft.com", "http://www.microsoft.com/"
                "MSDN", "http://msdn.microsoft.com/"
                "Bing", "http://www.bing.com" ]

let fetchAsync (name, url: string) =
    async {
        try
            let uri = new System.Uri(url)
            let webClient = new WebClient()
            let! html = webClient.AsyncDownloadString(uri)
            printfn "Read %d characters for %s" html.Length name
        with
        | ex -> printfn "%s" (ex.Message)
    }

let runAll () =
    urlList
    |> Seq.map fetchAsync
    |> Async.Parallel
    |> Async.RunSynchronously
    |> ignore

runAll ()
Java, Basic, who cares - it's all a bunch of tree-hugging hippy cr*p
Every professional program has 3 audiences:
1. the programmer
2. other programmers
3. the compiler
The "fantasy language" fails in all 3 areas:
1. Sure, right now the syntax makes perfect sense. But come back to the same code after a 6-month or even 2-year lapse? Every programmer I know (including me) looks at their own, older code with a "WTF was I doing?" expression on their face. Remove the keywords and punctuation and the problem gets a lot harder. Everyone thinks they'll remember exactly what they were doing at the time; 99%+ of us don't.
2. A professional program is highly likely to be supported (eventually) by at least one other programmer. The problems of point #1 are multiplied by 10 or even 100 because the second programmer has to figure things out. I've supported enough code to have a deep appreciation for well organized code that includes comments which explain "why" something is being done. My effort is reduced tremendously by those 2 things.
3. Use of "judicious" whitespace as syntax is a poor idea. Sure, the compiler can figure it out, but when the whitespace is wrong, the programmer cannot see the mistake. The real problem here is not compiler errors (which should point out the offending line so the programmer can [hopefully] figure it out), but when the program DOES compile and run, but produces wrong results.
The constant move towards conciseness is being taken to an extreme, and it's causing more problems than it is solving. The more concise the language, the steeper the learning curve, the more prone it is to programmer errors, and the harder those problems are to solve.
Programming languages are created for people, not computers. The computer runs machine code -- regardless of how the program starts, it ends up as machine code. Write your code so that someone else can figure things out in the least amount of time.
Let me explain to you the undesirability of your desire:
All that punctuation is to remove ambiguity.
You can play games and replace semicolons with new-lines, specific indent counts, whatever, and it's still the same. Is "Start->End" really better than "{->}"? Neither logically nor visually.
If you have a block of code in a conditional you need some method to mark statements off as part of the block and not just the next statements to be executed after the conditional.
As for why things keep copying the 'C-like' structure? Not saying it's the be-all and end-all of design, but it eliminates a lot of absurd arbitrariness - like my first language, FORTRAN, requiring all statements to start in column seven.
I find your JavaScript example ludicrous. You make the JavaScript unnecessarily complex for the simple act of assigning a value to a symbol - but not in your dream version. Just to start.
Coming right down to it - human readable is a nebulous concept as you intend it. Mandarin is human readable - unless you don't know it. So is Hindi, Hebrew, and Arabic. Unless you are not accustomed to it. Those who speak these languages natively would differ with you strongly as to what is readable. Thus it is for a coding language. You learn to look at it and your eyes and brain will translate it. Meanwhile, the efficient interpretation of the syntax is guaranteed by the punctuation.
Ravings en masse[^]
"The difference between genius and stupidity is that genius has its limits." - Albert Einstein
"If you are searching for perfection in others, then you seek disappointment. If you seek perfection in yourself, then you will find failure." - Balboos HaGadol Mar 2010
Well, just in terms of the JavaScript, here is how I would write it.
var home =
{
    cit : "Boston"
};

home.city = function(aCity)
{
    if (arguments.length)
        this.cit = aCity;
    else
        return this.cit;
};
Yes, the function is a bit longer, but there is only one function instead of two, and writing pointless "get" and "set" prefixes is no longer necessary. That is a holdover from languages which do not have true getter / setter type attributes. I would note that the function would normally be part of a class (or prototype) and not the object itself.
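For comparison, here is a minimal sketch of the same object written with JavaScript's native get / set accessors, which allow plain assignment and read syntax:
var home =
{
    cit : "Boston",
    get city()      { return this.cit; },   // read:  var c = home.city;
    set city(aCity) { this.cit = aCity; }   // write: home.city = "Boston";
};

home.city = "New York";
console.log(home.city); // "New York"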
In C++, it takes more work, but you can write code so that you can literally write
aCity = home.city;
home.city = aCity;
I usually don't do that, but I do do the same thing in C++ as in JavaScript. Note that brace placement may make it look longer, but that is a matter of style. Braces are part of a statement block, so I indent them the same as the statement block. And I don't use unnecessary braces. They make code less readable.
I agree with you about APL not being readable. Perl has a lot of the same characteristics.
I disagree about COBOL, it is both far less concise and far less readable.
If something is too concise, it becomes unreadable. But, conversely, when it is too expansive (non-concise?), it is also less readable. A straight mathematical expression, whether in a conditional or an assignment, is generally maximally readable, assuming that the length and complexity of the expression are not too large and that weird and strange operators (as in APL) are not in use.
So languages whose basic structure is similar to C tend to be most readable without becoming too concise.
A couple of questions:
What is your audience? Non-technical average Joe? Technical (domain expert) but non-programmer? Software engineer? Firmware engineer? Etc.
What is the target program type? Web? Windows/Linux application? Embedded realtime? Etc.
I believe you're not going to find (or create) the 'perfect' language that's practical and concise that will handle all these user types and targets.
Kotlin: maybe not the best "human-understandable" language, but it takes Java and makes it more concise.
You don't have to write many of the things in Kotlin that you have to write in Java.
Red: it's still in alpha, but you should keep an eye on it.
You can write in different styles, but you can write small & concise code. It has some "weird features".
Ruby: it might not be the fastest language, but it's very readable. No "parenthesis hell" (same as in Red) as in other languages.
No one has mentioned Python?!? That's interesting.
That was exactly my thought, and I had to scroll down way too much to find this.
Cheers,
विक्रम
"We have already been through this, I am not going to repeat myself." - fat_boy, in a global warming thread
Why? Why optimize for concise? To save YOU time typing?
I write code in a pseudo code format for the first pass, then have to translate it to one of many languages depending on the target. Most concepts are clear enough.
What about R? (Functional languages?)
But again... I prefer a language that has the libraries/components/tools that I need.
If I gave you your "perfectly" concise language, but with a bunch of caveats - no libraries, no classes, no protocol support - would you want to use it? If you had to implement AES, SMTP, HTTP, etc. all by hand?
==
The most important thing I ever learned: once you know how to write the program properly, the language is not usually what is holding you back. Especially with UDFs.
==
For me, C was the most elegant language and always my favorite. It was PDP macro assembler as a language, with indentation, code blocks, UDFs, etc. And probably a bit too concise, LOL!
> Why optimize for concise? To save YOU time typing?
I don't think typing is a problem when you have IDEs with autocompletion.
The problem is reading. Sure, your IDE will colour this and that, but you still have, for example, a whole line just saying "<foo> is the name of the function".
I think of it more at a concept level than as saving typing. Avoid redundancy. Avoid tokens that serve no purpose. Remove clutter.
As I argued earlier in the thread: A language such as CHILL allows any block to have an exception handler attached. Putting "try{...}" around the block before the handler is redundant: If the handler is present, then the block has a handler - no need to pre-announce it! Removing the need for that pre-announcement simplifies the code.
(The real reason for the C/C++ try{} is purely historic: C didn't have any such thing, while several other languages did. So to "compete", a macro-based solution was devised that required no extension to the underlying compiler. When adopted into the C/C++ language, it could have been given a cleaner syntax, but to be backwards compatible with all the code that had been written for the macro-based implementation, the syntax was kept unchanged.)
In Pascal (or CHILL), you write a condition without enclosing parentheses, like in prose text: If it is raining, then you had better wear a raincoat. You need not clutter up that sentence with parenthesis markup. You remove the conceptually information-less tokens.
var matching = new List<FileInfo>(); rather than
List<FileInfo> matching = new List<FileInfo>(); removes redundancy and the risk of inconsistency, and the reader has fewer tokens to interpret. I do not see what we gain by repeating the type information when it can be inferred. (var has other uses as well, unrelated to this.)
A language recognizing a single statement as a block avoids redundant bracketing: at the spur of the moment, I cannot think of a single case where a single-statement block and a single statement are both valid but with different semantics. So I see no reason why some languages insist on adding these extra tokens that carry no semantic information.
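For example, in JavaScript these two forms are identical in meaning; the braces in the second add nothing but tokens:
const raining = true;
function wearRaincoat() { console.log("Wearing a raincoat."); }

// A single statement as the conditional's body:
if (raining) wearRaincoat();

// The same statement wrapped in a one-statement block - identical semantics:
if (raining) { wearRaincoat(); }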
The idea is that non-information-carrying elements (that you nevertheless have to relate to) are bad. The fewer elements you need to process mentally, the better.
This is, of course, assuming that the programmer/reader easily relates to the concepts represented by the tokens. E.g. to a mathematician, matrix inversion is such a basic and common concept that giving it a separate symbol (analogous to multiply and divide), as is done in APL (created as a mathematical notation, not as a programming language), is perfectly fine even though non-mathematicians say "Huh?". For those who are "only engineers", powers are essential, so languages with that user group in mind have a token for that operation (like 2**11 or 2^11); that simplifies and improves the readability of engineering calculations, compared to "power(2, 11)". A low-level language for handling bit patterns does the same with left and right shift operators: there is no need for a symbolic function name, and parentheses around an argument, when you can simply write it as a doubled greater-than or less-than sign. That removes clutter.
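JavaScript happens to illustrate this: the operator forms below compute the same values as the function-call form, with fewer tokens to read:
console.log(2 ** 11);         // 2048 - exponentiation operator
console.log(Math.pow(2, 11)); // 2048 - same result, more tokens
console.log(1 << 11);         // 2048 - left shift, no function name or extra parentheses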
This kind of simplification, avoiding information-less elements, results in a conciseness that eases reading and comprehension of the program.
And also: what is essential information should be clearly visible. Whitespace[^] was certainly created as a joke, but we may wonder if it was a reply to languages that demarcate, e.g., what is to be repeated - a loop body - by ... whitespace. OK, the number of (visible) tokens is reduced, but when the conciseness is based on invisible whitespace tokens that carry essential semantic importance, then the language designers have gone too far in "conciseness"!
Kirk 10389821 wrote: If I gave you your "perfectly" concise language, but with a bunch of caveats - no libraries, no classes, no protocol support - would you want to use it? If you had to implement AES, SMTP, HTTP, etc. all by hand?
I never said anything about eliminating libraries, classes, or protocol support.
johnywhy wrote: Here's some COBOL. Very human understandable. But not concise:
ADD YEARS TO AGE.
MULTIPLY PRICE BY QUANTITY GIVING COST.
SUBTRACT DISCOUNT FROM COST GIVING FINAL-COST.
The boilerplate required in COBOL to encapsulate those three lines is going to be around another 300 lines of code, code which is not particularly human-readable.
“That which can be asserted without evidence, can be dismissed without evidence.”
― Christopher Hitchens
I would say your rewrite fails the "Human understandable" test, because my understanding of your code is quite different from what the original JavaScript does. I'd assume it was closer to:
var home =
{
    get City()
    {
        return "Boston";
    }
};
Truth,
James
James Curran wrote: I would say your rewrite fails the "Human understandable" test, because my understanding of your code is quite different from what the original JavaScript does.
You're interpreting "human readable" to mean "JS-programmer readable". That's not what I mean. My rewrite throws out much JS syntax and can't be interpreted through a JS lens. It's a different language.
File this one under Weird...or annoying.
Background
Created a little web app (including a .NET Core Web API) that allows me to read comics more easily. Because, I mean, why not.
I wanted to be able to save the date of the last comic I read to a remote location so I can then load it from any browser (any of my multiple devices) and keep reading the comics consecutively by date. I got it all working and it's a fun little hobby app.
The Problem
I pulled the app up on my TV via Amazon FireStick (runs Amazon Silk browser).
It all loaded up great. Then I attempted to press the button and nothing happened.
<button id="loadComicDatesButton" onclick="getComicDatesFromApi()">Load Dates</button>
onclick Not Supported On Mobile
This is considered a mobile browser and onclick is not supported. You need the touchstart event.
First Attempt to Fix
I looked it up and found documentation. I figured you could just add the ontouchstart="functionName()" to the button.
Nope!
You have to attach the event to the element using javascript.
document.querySelector("#loadComicDatesButton").addEventListener("touchstart", getComicDatesFromApi);
I guess that is because the <button> element doesn't know about ontouchstart or something??
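Something like the following sketch would cover both desktop and touch browsers with one handler (using the same button id and handler name as above); calling preventDefault() on touchstart suppresses the synthesized click that can follow it, so the handler only runs once:
var button = document.querySelector("#loadComicDatesButton");

function onActivate(event) {
    // On touch browsers, preventDefault() stops the synthesized
    // click that follows touchstart, so the handler only fires once.
    if (event.type === "touchstart") {
        event.preventDefault();
    }
    getComicDatesFromApi();
}

button.addEventListener("click", onActivate);
button.addEventListener("touchstart", onActivate);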
Other devs have probably figured this out and said this 1 million times before. But, it is still weird.
modified 11-May-20 15:30pm.
I thought browsers were supposed to fire the mouse events as well as the touch events for single-finger activation gestures?
Touch Events - Level 2[^]
Even Amazon's own documentation say they do:
Touch - Amazon Silk[^]
"These people looked deep within my soul and assigned me a number based on the order in which I joined."
- Homer
Interesting. That's what I had expected too.
But, until I added the explicit touchstart event I was getting nothing. I even tested it with a simple alert(). And, once I added it, it started working. So documentation may say something, but we all know the _real documentation_ is the code.
Thanks for finding those though, because I couldn't find those docs. I will read them more closely.
Closing the loop on this...
That second link explains that the onclick event should fire and it is correct.
I created another barebones HTML page and added a button with an onclick event and tried it from my Amazon Kindle fire pad (running Silk browser) and it worked with no problem.
Hmmm...maybe I had a syntax error in my original HTML and it precluded the touch event from firing properly while a normal desktop browser handled it??
Well, thanks again for your help.
The "inconsistency" of what events are supported in the HTML vs. events that can only be wired up with addEventListener is basically why I always use addEventListener. Besides the fact that I rather despise the whole "on[event]="someFunctionCall()" in the HTML. Makes it a PITA to figure out where to change the handler, and I also use a "UI event router" for these things so they can be handled by actual object instances, logged, etc.
Yes yes, I know, you're doing a fun web app.
Marc Clifton wrote: basically why I always use addEventListener. Besides the fact that I rather despise the whole "on[event]="someFunctionCall()" in the HTML.
I like that you made those points. I will keep that in mind.
Yours is a much better Separation of Concerns anyway. I believe the lesson I've just learned (a new convention for me) is that it is better to wire up events in the JS itself, due to your point and the fact that mine didn't work very well.
Thanks again.
EDIT - NOTE: someone pointed out that I said 5.51 billion but I actually generated 551 Billion random numbers.
Yesterday, I was thinking about the JavaScript Math.random() function.
The Math.random() function returns a floating-point, pseudo-random number in the range 0 to less than 1 (inclusive of 0, but not 1) with approximately uniform distribution over that range.
I had a script that uses Math.random() to generate values in the range 1 - 10 (inclusive).
I basically ignored the fact that you could ever get zero. Then something happened and I started wondering why I never seemed to hit the 0 value.
Generate As Many Random Values As Possible
So I decided to write a script and let it just generate random values and run a long while to see if I'd ever actually get 0.
I let the following script run (via NodeJS) for something like 5 or 6 hours and it never generated a value of 0. I finally just killed the script.
var counter = 1;
var MaxLoops = Number.MAX_VALUE;

function runForZero() {
    console.log("running...");
    for (var x = 1; x <= MaxLoops; x++) {
        var rnd = Math.random();
        if (rnd === 0) {
            console.log("rnd is " + rnd + " It took " + counter + " tries.");
            return;
        }
        if (counter % 5000000 == 0) {
            console.log(new Date().toTimeString() + " - Still running : " + counter);
        }
        counter++;
    }
    console.log("Complete.");
}
After Reading a Few Thoughts, I Understand, but....
I read this javascript - Can Math.random() exactly equal .5 - Stack Overflow[^], which really exposes the idea that, probabilistically (is that a real word?), you will essentially never hit one specific value.
That SO will also lead you to other readings like : How does JavaScript’s Math.random() generate random numbers? | Hacker Noon[^]
Since 0 is a specific value, it is unlikely that you will ever see that value.
Here's The Thought I Had To Get To For Understanding
Basically, think about sticking your hand into a bin of numbers which contains everything from 0 to 1.7976931348623157e+308 (the largest Number in JavaScript)*. How likely would it be that you get 0? Or, how many times would you have to stick your hand into the bin to randomly grab the 0? Lots.
But at some point you must see that value - though you may wait a long time (hundreds of years or something?).
Or it could happen with the first Math.random() call you make.
I think this is an interesting thought experiment, because it will make you think.
*It is some number similar to this for the number of decimal values 0 <= N < 1 (dependent upon the Number data type's size and precision).
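A rough back-of-the-envelope estimate: if the engine draws uniformly from 2^53 equally spaced doubles in [0, 1) (a common implementation choice; the exact count varies by engine), and the loop above manages a hypothetical 10 million calls per second, the expected wait looks like this:
// Both inputs below are assumptions, not measured values.
var possibleValues = Math.pow(2, 53); // ~9.007e15 distinct doubles in [0, 1)
var callsPerSecond = 10000000;        // hypothetical throughput of the loop above
var expectedSeconds = possibleValues / callsPerSecond;
var expectedYears = expectedSeconds / (60 * 60 * 24 * 365);
console.log("Expected wait for an exact 0: ~" + Math.round(expectedYears) + " years"); // ~29 years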
modified 19-Apr-20 21:51pm.