|
'The cloud' is a synonym for 'someone else's computer'.
Any processing provided across the internet is the cloud.
|
|
|
|
|
Cross-compilers aren't good enough if you want to encourage developers to port apps to Windows on Arm. We have no right to bear ARM?
|
|
|
|
|
I always wished that ARM had termed their GPUs 'Low Energy Graphics'.
I always wanted a machine with two CPU chips and two GPUs working in tandem, having two ARMs and two LEGs.
|
|
|
|
|
But how much would it cost?
"These people looked deep within my soul and assigned me a number based on the order in which I joined."
- Homer
|
|
|
|
|
At least an arm and a leg, plus a pound of flesh?
«The mind is not a vessel to be filled but a fire to be kindled» Plutarch
|
|
|
|
|
When you’re learning computer science, you typically learn that programming languages fall into two categories. I guess it depends on how you interpret your compiler?
|
|
|
|
|
Kent Sharkey wrote: how you interpret your compiler? A magical program that emits error messages.
|
|
|
|
|
Kent Sharkey wrote: I guess it depends on how you interpret your compiler?
Wrong! It's all about how you compile your interpreter!
"The only place where Success comes before Work is in the dictionary." Vidal Sassoon, 1928 - 2012
|
|
|
|
|
Kornfeld Eliyahu Peter wrote: It's all about how you compile your interpreter!
Exactly. Interpreters are compiled. Then again, somewhere in the dark recesses of ancient history I remember writing an interpreter on top of an interpreter. Probably in my Commodore PET / C64 days.
|
|
|
|
|
Quote: If we take just the definitions above, we see they don’t really mean anything:
1. Any “interpreted language” can be compiled, by making a compiler which emits the interpreter bundled with the source program.
2. Any “compiled language” can be interpreted, by making an interpreter which compiles the program and then immediately runs it.
Yes, but then in case 1, you've now created a compiled language, and in case 2 (gads, interpreting machine code???) you've written an interpreter.
He clearly needs to take some courses in logic.
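Definition 1 above is easy to sketch in a few lines of Python. Everything here is invented for the sake of the example (`tiny_eval`, the space-separated "+/-" mini-language, `compile_program`): the "compiler" does nothing but bundle an interpreter with the embedded source program, yielding a self-contained runnable artifact.

```python
# A toy "compiler" for definition 1: it emits a self-contained Python
# script that bundles a tiny expression interpreter with the source
# program. All names here are invented for illustration.

INTERPRETER = '''
def tiny_eval(tokens):
    # Interpret a space-separated chain like "1 + 2 - 4", left to right.
    total = int(tokens[0])
    for op, num in zip(tokens[1::2], tokens[2::2]):
        total = total + int(num) if op == "+" else total - int(num)
    return total
'''

def compile_program(source: str) -> str:
    # "Compilation" here is just bundling: interpreter + embedded source.
    return INTERPRETER + f"\nprint(tiny_eval({source.split()!r}))\n"

bundled = compile_program("1 + 2 - 4")
# The compiler's output is itself a runnable program:
exec(bundled)   # prints -1
```

Which rather supports the objection: once the output is a standalone program, you have arguably just built a (very lazy) compiled language.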
|
|
|
|
|
I'd argue that the IL (Intermediate Language) bytecode for both dotNet and Java are interpreted in some instances and compiled in others.
I think the real distinction comes from the fact that good compilers do a lot of static analysis of the code to identify errors before the code is executed, while interpreters wait for code execution before doing even this level of analysis.
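That distinction (analysis before execution vs. analysis on demand) can be demonstrated with Python itself, since Python happens to sit on both sides of the line: the parser checks syntax up front, but an undefined name inside a function body goes unnoticed unless the function is actually called. The `never_called` example below is invented for illustration.

```python
import ast

# A compiler analyzes the whole program before running anything; a pure
# interpreter only notices problems when execution reaches them. Python
# shows the split: syntax is checked up front, but a bogus name inside a
# function body is only reported if the function is ever called.

source = """
def never_called():
    return this_name_does_not_exist
print("program ran fine")
"""

# Up-front ("compiler-like") check: the parse succeeds, so there is no
# syntax error -- and no complaint about the undefined name either.
ast.parse(source)

# Runtime ("interpreter-like") behavior: the broken function is never
# executed, so the bug ships silently.
exec(source)   # prints: program ran fine
```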
|
|
|
|
|
obermd wrote: I'd argue that the IL (Intermediate Language) bytecode for both dotNet and Java are interpreted in some instances and compiled in others.
Are there any interpreters of dotNet IL in existence? The language was never designed for direct interpretation, and my gut feeling is that it would require quite some machinery to realize. And I don't see any advantage of direct interpretation, rather than doing JIT compilation the "normal" way for IL.
Java bytecode was, on the other hand, explicitly designed for direct interpretation (strongly inspired by the quite successful Pascal P4 bytecode). For years, it also was interpreted directly: Compilation down to native code didn't come until the critical voices about performance became too strong. (I don't know when the first Java bytecode compiler was created, but it sure took a number of years after the release of Java to become widespread.)
In the first years of JVM, it was marketed as a front end solution: The code should be dynamically deployed to (typically) web browsers, to be interpreted in the JVM of the browser. There were browser implementations; I am not sure whether I ever touched one personally. It was a big flop, anyway. No one in those days talked about the browser compiling the byte code to native code.
I guess that the dotNet people had similar visions of IL being distributed to browsers, and JIT compiled there. Maybe it could have succeeded if Javascript hadn't been around. I wouldn't exactly say that JS is any "better" solution, from a technical viewpoint, but we just have to accept that it won the battle for browser control. We have to live with it.
|
|
|
|
|
IL isn't interpreted. It's simply an intermediate symbolism that is then Just-in-Time compiled to the target processor's instruction set so it can be optimized for the particular processor.
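The idea is easy to miniaturize: rather than dispatching on each IL instruction every time a method runs, translate the method once into the host's native form and call that from then on. The three-op stack IL and the `jit_compile` function below are invented for illustration; here "native form" is a compiled Python function standing in for machine code.

```python
# A toy JIT: translate a bytecode-like stack IL into Python source once,
# compile it, and run the compiled function thereafter -- instead of
# re-interpreting the IL instruction by instruction on every call.
# The three-op IL below is invented for illustration.

def jit_compile(il):
    lines = ["def method(x):", "    stack = []"]
    for op, arg in il:
        if op == "push_arg":
            lines.append("    stack.append(x)")
        elif op == "push_const":
            lines.append(f"    stack.append({arg})")
        elif op == "add":
            lines.append("    b = stack.pop(); a = stack.pop(); stack.append(a + b)")
    lines.append("    return stack.pop()")
    namespace = {}
    # One-time translation to the host's executable form.
    exec(compile("\n".join(lines), "<jit>", "exec"), namespace)
    return namespace["method"]

add_five = jit_compile([("push_arg", None), ("push_const", 5), ("add", None)])
print(add_five(37))   # prints 42
```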
|
|
|
|
|
Any interpreter that before execution makes a complete syntactical analysis of the source code, as well as all the semantic analysis that can reasonably be performed statically, is indistinguishable from a compiler.
But what's the use then? If you spend all those resources on detecting syntactic and semantic errors in code that is not yet to be executed (and may end up not being executed at all), you will be doing the major part of the work required for generating 'compiled' code. So why not go the full length?
Ten or twenty years ago, people were still arguing in favor of interpretation because 'You don't have to wait for a lengthy compile'. Last time I met this argument, about fifteen years ago, I dug up the log from my last compile, showing up to eight module compilations completing per second. (We used an old style make system that compiled only modules affected by code changes; the number of recompiled modules was usually quite limited.) Later, I have laughed off such arguments.
I'd much rather have a compiler telling me of a problem in, say, an exception or error handling code before the software is shipped to the customer. Even if the execution start is delayed by a whole second (often it is far less, on today's machines), I think it is worth it.
My first statement is not completely correct, though: A modern compiler will often do a lot more, such as code optimization and flow analysis to determine stack requirements. I take this as extra bonus advantages of compilation. Even without it, the complete syntactical check and static semantic analysis can alone justify a delay of a second, or even two seconds if required.
Sure, there are linters and similar tools adapted to interpreted languages. Except for huge, costly static analysis software (in the class of, say, Coverity), they do a much poorer analysis job than a decent compiler, and certainly no code optimization. If you think the F5 delay in Visual Studio is frustrating, so you'd rather run lint before execution (not to speak of Coverity!), then you are on the wrong track.
A much more essential distinction between languages is the type strictness (and static-ness). To some degree, this correlates to interpreted vs. compiled: On the average, compiled languages have stricter type control, and are able to point out semantic issues at compile time. Few languages designed for interpretation provide the same type control as those designed for compilation. This is just correlation, not causality. So the real issue is kind of type control, regardless of how the code is processed.
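The type-strictness point in miniature: in a dynamically typed language the mismatch below is perfectly legal until the bad call actually runs, whereas a statically typed compiler (or a checker such as mypy reading the annotations) would reject it before execution. The `total_price` function is invented for illustration; note that Python itself never enforces the annotations at runtime.

```python
# Dynamic typing defers the error to the moment of execution; static
# type control would have flagged the second call before the program ran.

def total_price(quantity: int, unit_price: float) -> float:
    return quantity * unit_price

print(total_price(3, 2.5))        # fine: prints 7.5

try:
    total_price("3", 2.5)         # a static type checker flags this line
except TypeError:
    print("caught only at runtime")
```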
|
|
|
|
|
When you tell people something important, you want it to be easily understood. I'm not sure this is. "Marketing is far too important to be left only to the marketing department!"
|
|
|
|
|
|
Make your codebase easy for everyone to get acquainted with "Welcome to my nightmare. I think you're gonna like it. I think you're gonna feel you belong"
|
|
|
|
|
It was not until 1980 that a video-game player could maneuver at will through an imaginary landscape, wreaking havoc until brought to an untimely end by enemy tanks. Blow up all the triangles!
The first metaverse
|
|
|
|
|
Measuring complexity of your code with two different metrics - Cyclomatic Complexity & Cognitive Complexity "The complexity that we despise is the complexity that leads to difficulty"
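Cyclomatic complexity, at least, is simple enough to sketch: count the decision points in a piece of code (if/for/while, conditional expressions, and/or operators, exception handlers) and add one. The rough AST-walking counter below is invented for illustration; real tools refine the rules, and cognitive complexity additionally weights nesting, which this sketch ignores.

```python
import ast

# Rough cyclomatic complexity: decision points + 1.
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.IfExp)

def cyclomatic_complexity(source: str) -> int:
    decisions = 0
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, BRANCH_NODES):
            decisions += 1
        elif isinstance(node, ast.BoolOp):
            # "a and b and c" contributes two decision points.
            decisions += len(node.values) - 1
    return decisions + 1

sample = """
def classify(n):
    if n < 0 and n % 2:
        return "negative odd"
    for d in range(2, n):
        if n % d == 0:
            return "composite"
    return "other"
"""
print(cyclomatic_complexity(sample))  # 5: two ifs, one for, one 'and', +1
```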
|
|
|
|
|
Right. If the problem is super-complicated, then you can't expect the solution to be super-simple.
If the complexity of the problem really isn't that bad, the code shouldn't be bad either.
But how would you know that the problem really isn't that bad? It simply isn't the agile style. The modern way is to start by coding 'void main(int argc, char** argv) { ...' long before you start asking what the real problem is. At the time when you begin to really grasp the nature of the problem, you have already written a whole lot of code, based on an incomplete understanding. You will be living with that code as 'technical debt' for a long, long, time. In the style of agility, you will be programming around the old, badly (non)designed code, adding numerous 'ToDo's.
Most real problems are really far less complex than the software written to solve them.
|
|
|
|
|
I can tell that the explanation of why you don't like Agile is too complex.
«The mind is not a vessel to be filled but a fire to be kindled» Plutarch
|
|
|
|
|
One of the System Engineering textbooks we used at university summarized, in a bullet list, "Reasons why the project did not complete on schedule". Every second bullet point said "Poor planning".
So some people said "Hey, that is a great idea for making our project last!" So they sat down and developed agile methods.
|
|
|
|
|
Microsoft has disabled the MSIX ms-appinstaller protocol handler exploited in malware attacks to install malicious apps directly from a website via a Windows AppX Installer spoofing vulnerability. You mean randomly downloading code from the internet is a bad thing?
|
|
|
|
|
Intel has just released a new 2021 product security report detailing the number of bugs that were found in its hardware during the course of last year. Maybe stop accepting pull requests from the competition?
|
|
|
|
|
If AMD caused "nearly half", then that means Intel caused more than half. From my standpoint, AMD wins.
".45 ACP - because shooting twice is just silly" - JSOP, 2010 ----- You can never have too much ammo - unless you're swimming, or on fire. - JSOP, 2010 ----- When you pry the gun from my cold dead hands, be careful - the barrel will be very hot. - JSOP, 2013
|
|
|
|