
Why are assembly programs faster than HLL programs, despite compilers being so advanced?

The paradox.

Hand-written assembly language programs are faster and use less memory than programs with the same features written in high-level languages.

I will give some real-life examples below. In addition, I have made several artificial experiments that show the same result.

This looks like a paradox, because HLL compilers are very effective these days and, for big programs, generate better-optimized code than hand-written assembly language.

Yes, the following can even be formally proven:


Hand-written assembly language code is always at least as optimal as compiler-generated code.

This statement is easily proved by the fact that the programmer can always read the compiler output and optimize it further, while the compiler cannot do the same with the programmer's code.

But this theorem is not very helpful in real life, simply because compilers generate huge amounts of code that cannot be manually read, analyzed, and optimized by a human.

Fortunately, the assembly language programmer does not need to compete with the compiler on platform-specific optimizations in order to beat it.

The fastest assembler today is FlatAssembler. It is written in assembly language and is "optimized" for the 80386 CPU, if this can be called optimization at all. Its competitors are written in C/C++ and are slower, despite the more optimal code generated by the C/C++ compilers.

The fastest OS is KolibriOS, written in assembly language. It boots to the GUI desktop in less than 2 seconds; even the BIOS startup is slower. And the GUI is instantly responsive even on very slow, old machines.

The fastest web server is RWASA, and it is the only one of the above examples that is really optimized for speed. Yet it still uses less memory than its competitors.

Actually, in most cases, when writing programs in assembly language, programmers put more effort into writing smaller, more readable code than into writing faster code. But as a result, these programs still always perform faster than their HLL equivalents.


Let us look at the so-called "Jevons paradox". It is an economics paradox, but it can actually be applied to programming as well.

Jevons paradox

In economics, the Jevons paradox occurs when technological progress increases the efficiency with which a resource is used (reducing the amount necessary for any one use), but the rate of consumption of that resource rises due to increasing demand.

If we apply this definition to programming, we can see a direct analogy. Compilers actually increase the efficiency with which computer resources are used by programmers. With one line of HLL code, the programmer can use more resources than with one line of assembly code.

In the same way, a more efficient car travels more kilometers on a liter of fuel.

As a result, the use of the computer's resources, CPU and RAM, increases, because of the increased demand.

In the same way, the kilometers traveled and the total fuel consumption increase with more efficient cars.

Notice that the increased efficiency of the compilers is not the only efficiency increase in IT. Hardware efficiency also increases with time. RAM is now faster and bigger than ever, and the CPUs are also pretty efficient.

But all these improvements only lead to increased consumption of resources, in full accordance with the Jevons paradox.
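The effect can be sketched with a toy model (all numbers below are invented purely for illustration): even if a better compiler halves the resources spent per feature, a tripling of the demand for features still raises total consumption.

```python
# Toy model of the Jevons paradox applied to software.
# All numbers are invented purely for illustration.

def total_consumption(resource_per_feature: float, features_demanded: int) -> float:
    """Total resource use = per-feature cost x number of features shipped."""
    return resource_per_feature * features_demanded

before = total_consumption(resource_per_feature=100.0, features_demanded=10)

# A better compiler halves the per-feature cost...
# ...but the cheaper features triple the demand:
after = total_consumption(resource_per_feature=50.0, features_demanded=30)

print(before, after)  # 1000.0 1500.0 -- consumption rises despite the efficiency gain
```

The per-unit efficiency improves, yet the total goes up, which is exactly the paradox.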

As simple as that.

How to counteract it?

In economics, there are tools to counteract the Jevons paradox. For example, increased taxes can stop the rise in demand and neutralize the effect, if the goal is to reduce resource consumption.

But there are no taxes on "CPU cycles" or "RAM usage".

The only way for programmers is to consciously refrain from using this increased effectiveness of the HLL compilers and the hardware improvements.

Someone will probably ask here: "Why should we limit resource use? RAM is cheap and CPUs are fast."

The answer is simple and straightforward: because we will need these resources in order to develop our programs further.

In addition, the programmer's time is of course important, but programmers often forget that a program is written once, yet executed sometimes millions of times. One second saved by writing a faster program can result in millions of saved seconds for the users of the program.
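A quick back-of-the-envelope calculation shows the scale of the trade-off (the run count and the extra development time here are made-up figures, not measurements):

```python
# Back-of-the-envelope: one-time optimization cost vs. time saved by all users.
# The figures (1 s saved per run, 1,000,000 runs, 8 h extra work) are invented.

def net_saved_seconds(saved_per_run: float, runs: int, extra_dev_seconds: float) -> float:
    """Seconds saved across all executions, minus the one-time development cost."""
    return saved_per_run * runs - extra_dev_seconds

net = net_saved_seconds(saved_per_run=1.0, runs=1_000_000,
                        extra_dev_seconds=8 * 3600)
print(net)  # 971200.0 -- roughly 11 days of the users' time saved, net
```

Of course, the break-even point depends entirely on how often the program really runs.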

Yes, a program needs maintenance and further development, but who said that a faster program needs more maintenance effort?

After all, the above-mentioned FlatAssembler has been developed and maintained for almost 20 years by a single person. I haven't noticed any enormously great effort being needed to maintain this program, regardless of the fact that it is written entirely in assembly language.

Of course, programs in HLLs can also be written in an efficient manner. There are many examples of C/C++ projects written this way that really perform excellently.

But the higher the level of the language, the harder it is to write efficient code. In Java and .NET it is almost impossible.

The future

Fortunately or not, the rapid growth of computer performance has actually ended. There will be no exponential growth anymore, nor anything close to it. We can expect slow linear growth, or even some decline (on mobile devices) of computer performance and resources.

So now programmers have the time to put their code in order and to start paying off the giant technical debt they accumulated during the big bang of the hardware.

During the exponential growth, technical progress provided the RAM and CPU for the next program version; that time has now ended.

If someone has a program that uses all the resources of the computer and wants to implement new features, he will be forced to first optimize the existing features to use less memory and CPU, and to use the freed resources for the new features of the program.

And I have a strong suspicion that this process is already silently running in most software companies.

But for new projects, economical use of resources from day one seems to be the better strategy, simply because it is easier to write resource-friendly code from the beginning than to rewrite it later.

BTW, more and more people are starting to call for more resource-friendly code. I read such articles every day on different blogs and social networks.

So, interesting times are coming. The era of deep software optimization. :-)

Need a nice book about NASM or FASM, do you have something?


Need a nice book about NASM or FASM, do you have something?

I don't know about NASM, but there is no such book for FASM. The best way I know to learn FASM is to use it:

Read some instruction set tutorial (regardless of the particular assembler, just avoid GAS and the AT&T syntax). Any assembly language tutorial for beginners will do the job.

Read the FASM reference provided with the distribution. It covers the FASM syntax. Keep it close for quick reference later. (In Fresh IDE it is a Ctrl-F1 away.)

Then try playing with the examples (also in the distribution package). Ask on the forum and read the old posts, especially in the "examples" section. There are tons of good sources there.

Try changing some of the examples in a way you like. Try starting your own small project. Always set goals higher than your current level.

And what is your next move after AsmBB? :-)


And what is your next move after AsmBB? :-)

Why "after"? I am working in parallel on my Fresh IDE project. There is a lot of work to do in order to make it fully portable.

There are still at least two advantages of Asm programming. The first advantage can be illustrated with a picture that HLL programmers troll each other with.


But in fact, this is almost the case when programming in Asm. And for this it is enough to be a typical Asm programmer and to use a safe style. Indeed, in the debugger you see exactly the code that is in the source, and while you are busy loading it and setting breakpoints, you most often spot the error before even running the code in the debugger.

The second advantage is obvious: just look at my avatar. After all, it is obvious that a cyborg would only come to a true Asm programmer, and not to some dull, trained HLL monkey.

Hello johnfound, when you edited the first post, it produced a new notification and moved the topic to first place. Is this intended? (I don't think this should happen.)


Hello johnfound, when you edited the first post, it produced a new notification and moved the topic to first place. Is this intended? (I don't think this should happen.)

Hm. I would say it is "semi-intended". This way the visitors can notice that the post has been edited (and I will later implement a diff algorithm in order to show what exactly has been changed). On the other hand, yes, it is a little bit annoying...

I will think about how best to resolve this problem. :-)


AsmBB v2.7 (check-in: b1b34acbf71dada0); SQLite v3.30.0 (check-in: c20a353364320254);

©2016..2018 John Found; Licensed under EUPL.
Powered by Assembly language
Created with Fresh IDE

Icons are made by Egor Rumyantsev, vaadin and icomoon from www.flaticon.com