Amigaworld.net Forum Index / General Technology (No Console Threads) / The (Microprocessors) Code Density Hangout
noXLar 
Re: Future Topic
Posted on 2-May-2021 6:59:26
#21 ]
Cult Member
Joined: 8-May-2003
Posts: 715
From: Norway

@cdimauro

short circuit? or just bored:)

_________________
nox's in the house!

MEGA_RJ_MICAL 
Re: Future Topic
Posted on 2-May-2021 7:00:09
#22 ]
Regular Member
Joined: 13-Dec-2019
Posts: 498
From: AMIGAWORLD.NET WAS ORIGINALLY FOUNDED BY DAVID DOYLE

VOOOR

_________________
I HAVE ABS OF STEEL
--
CAN YOU SEE ME? CAN YOU HEAR ME? OK FOR WORK

MEGA_RJ_MICAL 
Re: Future Topic
Posted on 2-May-2021 7:00:22
#23 ]
Regular Member
Joined: 13-Dec-2019
Posts: 498
From: AMIGAWORLD.NET WAS ORIGINALLY FOUNDED BY DAVID DOYLE

IIIIRRR

_________________
I HAVE ABS OF STEEL
--
CAN YOU SEE ME? CAN YOU HEAR ME? OK FOR WORK

MEGA_RJ_MICAL 
Re: Future Topic
Posted on 2-May-2021 7:00:54
#24 ]
Regular Member
Joined: 13-Dec-2019
Posts: 498
From: AMIGAWORLD.NET WAS ORIGINALLY FOUNDED BY DAVID DOYLE

ZORRAM

_________________
I HAVE ABS OF STEEL
--
CAN YOU SEE ME? CAN YOU HEAR ME? OK FOR WORK

cdimauro 
Re: The (Microprocessors) Code Density Hangout
Posted on 2-May-2021 7:01:02
#25 ]
Elite Member
Joined: 29-Oct-2012
Posts: 2276
From: Germany

@noXLar Quote:

noXLar wrote:
@cdimauro

wow, are you back:)

nice to see you bro! hope you will supply site with lots of interesting things:)

Thanks a lot.

I was/am a bit busy with other stuff (family, a German course), so I've had little time to contribute. But I'll try to reserve some space.

@matthey Quote:

matthey wrote:
cdimauro Quote:

Since we talk about code density from time to time, I'd like to open a thread in this section to collect all information, instead of having it spread around in several threads (which also disappear and are difficult to retrieve).

@matthey: I think that you're the most competent and active expert about this topic. Would you like to contribute?

...

@matthey can you take care of creating and updating those comments? As I said, you're the most competent and also active, so I think that you can contribute much better than me.

The topic is a nice idea but I get the feeling that most people here really don't want to know about code density so I'm not very motivated. I really should stop wasting my time here but the interesting court case lured me in.

If some people don't understand the importance of code density, that's their problem / ignorance.

Almost all computer architectures have devoted instructions and/or execution modes and/or ISA "sub-extensions" to improving the code density of their processors (the only strange exception is ARM with AArch64), for very good reasons: reduced memory consumption AND memory bandwidth across the WHOLE memory hierarchy. Memory is expensive, especially for caches. And memory bandwidth is quite limited, especially on multicore systems.
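As a rough back-of-the-envelope sketch of the cache/bandwidth point, in Python (the average instruction sizes here are assumed purely for illustration, not measurements of any specific ISA):

```python
# Back-of-the-envelope: how much code fits in a 32 KiB L1 instruction
# cache at different average instruction sizes.
L1_BYTES = 32 * 1024

def instructions_in_cache(avg_insn_bytes: float, cache_bytes: int = L1_BYTES) -> int:
    """How many average-sized instructions the cache can hold."""
    return int(cache_bytes / avg_insn_bytes)

dense = instructions_in_cache(3.0)   # e.g. a compact variable-length encoding
sparse = instructions_in_cache(4.0)  # e.g. a fixed 32-bit RISC encoding

# A ~25% smaller encoding holds ~33% more instructions in the same cache,
# and needs proportionally less fetch bandwidth for the same instruction rate.
print(dense, sparse, dense / sparse)
```

The same arithmetic applies at every level of the hierarchy, which is why a denser encoding pays off from L1 all the way down to system memory.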

I understand your disappointment, but I hope that you reconsider your decision and give your precious contribution. In the meantime I'll take care of organizing this thread and collecting information (when I have time).
Quote:
cdimauro Quote:

A request from my side: please can you share your updated 68K source for this http://deater.net/weave/vmwprod/asm/ll/ll.html ? Looking at your tables in Google Documents, the numbers don't match what Dr. Weaver published at that link.


The last update I sent Dr. Weaver was mostly an improvement of the decompression code by ross on EAB. The assembler files are available on the last 2 pages of the following thread.

http://eab.abime.net/showthread.php?s=e81df9c472e296778e1c2996bf076333&t=85855&page=8

Notice that ross complains in the last post about the statistics not being updated.

ross Quote:

Just noticed that my 54 byte version is not signaled.
http://www.deater.net/weave/vmwprod/asm/ll/
(best is the 56 bytes 8086 version)

68k deserve the throne

Got it, thanks. I really, really hate AT&T's syntax: it's so difficult to read compared to Motorola's or Intel's.
I'll take a deeper look at the source (68K is my favorite ISA, and I'm comfortable with it. x86/x64 is my second choice, and I'll take a look afterwards).
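For reference, the general shape of the LZSS decompression at the heart of the contest program can be sketched in Python. The token format below (a flag byte controlling 8 items, literal bytes, and 2-byte back-references) is a generic illustration of the scheme, not the exact format used by the linux_logo entries:

```python
def lzss_decode(data: bytes) -> bytes:
    """Decode a toy LZSS stream: each flag byte controls the next 8 items
    (LSB first); a 1 bit means a literal byte follows, a 0 bit means a
    2-byte back-reference: (distance - 1, copy length)."""
    out = bytearray()
    i = 0
    while i < len(data):
        flags = data[i]
        i += 1
        for bit in range(8):
            if i >= len(data):
                break
            if (flags >> bit) & 1:         # literal byte
                out.append(data[i])
                i += 1
            else:                           # back-reference into the output
                dist = data[i] + 1
                length = data[i + 1]
                i += 2
                for _ in range(length):     # byte-by-byte copy allows overlap
                    out.append(out[-dist])
    return bytes(out)

# "ABC" as three literals, then copy 5 bytes starting 3 back.
print(lzss_decode(b"\x07ABC\x02\x05"))  # b'ABCABCAB'
```

The inner copy loop is where the assembly versions win or lose bytes, which is why tricks like reusing a condition flag as a state signal matter so much at this scale.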

cdimauro 
Re: Future Topic
Posted on 2-May-2021 7:03:22
#26 ]
Elite Member
Joined: 29-Oct-2012
Posts: 2276
From: Germany

@noXLar Quote:

noXLar wrote:
@cdimauro

short circuit? or just bored:)

Bored.

@MEGA_RJ_MICAL
Quote:
MEGA_RJ_MICAL wrote:
VOOOR

Quote:
MEGA_RJ_MICAL wrote:
IIIIRRR

Quote:
MEGA_RJ_MICAL wrote:
ZORRAM


This isn't a circus, but a technical thread.

MEGA_RJ_MICAL 
Re: Future Topic
Posted on 2-May-2021 7:16:13
#27 ]
Regular Member
Joined: 13-Dec-2019
Posts: 498
From: AMIGAWORLD.NET WAS ORIGINALLY FOUNDED BY DAVID DOYLE

@cdimauro

Friend cdimauro,

every "technical thread" you open, and every other thread you derail into the pointless banter you believe to be "technical", is a circus.

Just ask around.

Warm regards,
MEGA!

_________________
I HAVE ABS OF STEEL
--
CAN YOU SEE ME? CAN YOU HEAR ME? OK FOR WORK

noXLar 
Re: Future Topic
Posted on 2-May-2021 8:33:06
#28 ]
Cult Member
Joined: 8-May-2003
Posts: 715
From: Norway

@MEGA_RJ_MICAL

it was just teasing:)

_________________
nox's in the house!

noXLar 
Re: Future Topic
Posted on 2-May-2021 8:33:36
#29 ]
Cult Member
Joined: 8-May-2003
Posts: 715
From: Norway

@cdimauro

have you got ur self a vampire yet?

_________________
nox's in the house!

cdimauro 
Re: Future Topic
Posted on 2-May-2021 8:49:45
#30 ]
Elite Member
Joined: 29-Oct-2012
Posts: 2276
From: Germany

@MEGA_RJ_MICAL Quote:

MEGA_RJ_MICAL wrote:
@cdimauro

Friend cdimauro,

every "technical thread" you open, and every other thread you derail into the pointless banter you believe to be "technical", is a circus.

Just ask around.

Warm regards,
MEGA!

Friend MEGA_RJ_MICAL, you should be able to understand the content and the purpose of the first post in this thread: it doesn't require a PhD in linguistics...

@noXLar Quote:

noXLar wrote:
@cdimauro

have you got ur self a vampire yet?

No, I'm not interested.

IMO emulation is the best (and least expensive) way to enjoy / revive retro-platforms.

Fl@sh 
Re: The (Microprocessors) Code Density Hangout
Posted on 2-May-2021 9:46:06
#31 ]
Regular Member
Joined: 6-Oct-2004
Posts: 228
From: Napoli - Italy

@matthey

Your contributions are welcome; you are one of the most skilled guys to talk about this topic.
My recent replies aim to consider not only code density in a CPU: there are a lot of other things that can make a huge difference.

I hope you'll find the time and motivation to give us some very interesting info.

_________________
Pegasos II G4@1GHz 2GB Radeon 9250 256MB
AmigaOS4.1 fe - MorphOS - Debian 9 Jessie

noXLar 
Re: The (Microprocessors) Code Density Hangout
Posted on 2-May-2021 10:09:10
#32 ]
Cult Member
Joined: 8-May-2003
Posts: 715
From: Norway

@Fl@sh

+1

100% agree

_________________
nox's in the house!

OneTimer1 
Re: The (Microprocessors) Code Density Hangout
Posted on 3-May-2021 20:52:52
#33 ]
Cult Member
Joined: 3-Aug-2015
Posts: 675
From: Unknown

@cdimauro

- General information (what's code density, why it's important, etc.)

Meaning:
How many bytes does my code need?

Importance:
Not so much any more, because memory became cheap and caches got huge.

- Benchmarks (state of the art)

Depending on CPU purpose.

Whetstone for floating point
Dhrystone for integer

Other benchmarks for 3D, Video, I/O or other special purposes applications

- Compilers (which ones are best)

The best 3 compilers are:
1st: GCC
2nd: GCC
3rd: GCC

- Literature (book, academic papers, web sites)

Best things are online

- Microprocessors (general information like if the ISA is more or less oriented to code density, if it has specific execution modes for compact code, if it has specific extensions for compact code, etc.)

Is always application dependent.
There are still reasons to use 8 bit MCUs

- Motorola 68K corner (anything which is useful about this family, which is not covered by other topics)

It's dead

- Embedded corner (...)

Join a microcomputing forum; the last one I used on a regular basis for Atmel programming was not an English-speaking forum.

One of the most impressive articles I found was about software-driven video output on an Atmel, with some kind of genlock to display flight information on an FPV camera.

Last edited by OneTimer1 on 03-May-2021 at 08:57 PM.
Last edited by OneTimer1 on 03-May-2021 at 08:55 PM.

DiscreetFX 
Re: The (Microprocessors) Code Density Hangout
Posted on 3-May-2021 22:01:32
#34 ]
Elite Member
Joined: 12-Feb-2003
Posts: 2047
From: Chicago, IL

@cdimauro

I take this thread to mean that the results are still TBD (To Be Determined?).

_________________
Sent from my Quantum Computer.

cdimauro 
Re: The (Microprocessors) Code Density Hangout
Posted on 4-May-2021 6:13:59
#35 ]
Elite Member
Joined: 29-Oct-2012
Posts: 2276
From: Germany

@OneTimer1 Quote:

OneTimer1 wrote:
@cdimauro

- General information (what's code density, why it's important, etc.)

Meaning:
How many bytes does my code need?

Importance:
Not so much any more, because memory became cheap and caches got huge.

As I've said in my introductory post, code density is still important because it affects memory at all levels: from system memory up to L1 caches.
System memory is expensive too, not to mention caches (especially L1, of course).
And bandwidth is limited.

Those are the reasons why microprocessor vendors have invested A LOT in improving code density for their processors, and they continue to do so even with novel architectures.

Either they are driven by crazy people or code density is really important. Only one of the two is right...
Quote:
- Benchmarks (state of the art)

Depending on CPU purpose.

Whetstone for floating point
Dhrystone for integer

Other benchmarks for 3D, Video, I/O or other special purposes applications

The purpose in this case is obvious: code size. So, none of what you reported applies.
Quote:
- Compilers (which ones are best)

The best 3 compilers are:
1st: GCC
2nd: GCC
3rd: GCC

That's plainly wrong, taking the topic into account.

GCC is good compared to Clang/LLVM (though the latter has improved; unfortunately there's no recent study on code density using the latest compilers), but really poor compared to other compilers.

Specifically, on Windows I found that binaries compiled with GCC have poor code density compared to binaries generated by other compilers (Visual Studio, primarily).

So, as you can see, compilers also matter when talking about code density.
Quote:
- Literature (book, academic papers, web sites)

Best things are online

Indeed, albeit some aren't accessible without subscriptions.

I'll try to add some resources, most of them shared by Matt some time ago.
Quote:
- Microprocessors (general information like if the ISA is more or less oriented to code density, if it has specific execution modes for compact code, if it has specific extensions for compact code, etc.)

Is always application dependent.

Correct.
Quote:
There are still reasons to use 8 bit MCUs

Indeed, but they are disappearing, judging by recent reports from embedded portals.
Quote:
- Motorola 68K corner (anything which is useful about this family, which is not covered by other topics)

It's dead

Correct, but it's nevertheless one of the best architectures, if not the best, when looking at code density.

So that's why it's impossible to talk about code density without considering it.
Quote:
- Embedded corner (...)

Join a microcomputing forum; the last one I used on a regular basis for Atmel programming was not an English-speaking forum.

One of the most impressive articles I found was about software-driven video output on an Atmel, with some kind of genlock to display flight information on an FPV camera.

But this thread is about code density, and Atmel's processors never looked good from this perspective.

Also, Atmel was acquired by Microchip years ago, and both are switching to ARM...

@DiscreetFX Quote:

DiscreetFX wrote:
@cdimauro

I take this thread to mean that the results are still TBD (To Be Determined?).

To Be Done.

cdimauro 
Re: The (Microprocessors) Code Density Hangout
Posted on 4-May-2021 6:32:49
#36 ]
Elite Member
Joined: 29-Oct-2012
Posts: 2276
From: Germany

@Hammer Quote:

Hammer wrote:
@matthey Quote:

matthey wrote:

Assuming the Nvidia buyout of ARM opens up opportunities in the market, isn't POWER in a different market? The Libre project is one research project using POWER for embedded hardware with some weird extensions like a variable length encoding for CPU/GPU vector processors. At least RISC-V is focused on the embedded market and uses a variable length encoding to try to improve code density. POWER/PPC fans will likely say that code density doesn't matter but it has been the ISAs with the best code density which have dominated the embedded market starting with the 68k.

68k baseline
Thumb2 +2% code size
SuperH +16% code size
x86-64 +31% code size
RISCV64IMC +34% code size
AArch64 +50% code size
PPC +81% code size
MIPS +85% code size
SPARC +93% code size

Is it a coincident that the 68k, SuperH and then ARM Thumb2 cores were the best selling 32 bit cores in the embedded market? Can we see why Motorola lost the embedded market when they stopped developing the 68k and forced PPC into the embedded market? Do you still think POWER/PPC has an opportunity in the embedded market because of Nvidia buying ARM? Can we see that RISC-V would have an opportunity because of lack of code density competition?

Why compare 32bit 68K with 64-bit X86 when there's 32-bit X86?

Correct, but it's also reported in the study those numbers come from.
Quote:
For code density, from http://web.eece.maine.edu/~vweaver/papers/iccd09/ll_document.pdf refer to page 2
For Linux_Logo benchmark, 8086 has superior code density when compared to 68K

For LZSS decompression code,
X86-64 beats SH3
X86-64 beats Thumb-2

For size of string concatenation code
X86-64 beats SH3
X86-64 beats Thumb-2

For size of string searching code
X86-64 beats SH3
X86-64 beats Thumb-2


For size of integer printing code
X86-64 beats SH3
X86-64 beats Thumb-2

Matt already replied, but I'll add a few things worth mentioning.

First, the numbers that you criticized come from compiler-generated code: specifically, from compiling the SPEC suite.
Whereas your considerations above refer to a specific contest where only manually-written code was used and compared.
So, you're comparing apples and oranges (besides the fact that the 8086 wasn't running Linux, as was reported).

Second, this contest is of very limited use for comparing architectures, because:
- assembly code is rarely used (even on embedded systems);
- the used program was really tiny (in the order of 1KB);
- it isn't using so much real-life code;
- the provided solutions aren't all equally "top-notch".

Regarding the last point, for example: the 68K code provided by Matt and ross reorganized the LZSS code (removing one branch instruction) and used a neat trick (using the X flag as a flag/signal). Something similar can be used on x86/x64 as well, but I had no time to adapt their code (I only did it for my ISA).

Anyway, as I've said, those kinds of contests are just for fun: IMO the important thing is to look at compiler-generated code.
Quote:
I'm game for another ISA debate?

You can, if you want, but do it right...

@simplex Quote:

simplex wrote:
@matthey Quote:
The 8086 would likely win the contest if Linux supported the 8086. The contest program is small and does a lot of byte processing which the 8086 is excellent at.

I know almost nothing about this, but I noticed that the author had mentioned 8086 was an 8/16-bit architecture, while 68k and many others were 32-bit. Could that be a contributing cause?

For sure, since the 8086 often uses 16-bit offsets for addressing memory, and 16-bit constants, which helps A LOT in improving code density.
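The effect is visible in the encodings themselves: the standard x86 `MOV reg, imm` form is opcode B8+r followed by the immediate in little-endian order, so a 16-bit immediate costs 3 bytes where a 32-bit one costs 5. A small Python sketch spelling out those byte sequences:

```python
import struct

def mov_ax_imm16(value: int) -> bytes:
    # 8086 (16-bit mode): MOV AX, imm16  ->  B8 iw  (3 bytes total)
    return bytes([0xB8]) + struct.pack("<H", value)

def mov_eax_imm32(value: int) -> bytes:
    # 386+ (32-bit mode): MOV EAX, imm32 ->  B8 id  (5 bytes total)
    return bytes([0xB8]) + struct.pack("<I", value)

print(len(mov_ax_imm16(0x1234)), len(mov_eax_imm32(0x1234)))  # 3 5
```

Multiply that 2-byte difference across every immediate and displacement in a program and the 8086's advantage on small code adds up quickly.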
Quote:
(I realize not all the 8-bit architectures had better code density, but I was wondering how a 6809 would do.)

It looks like the 68HC12 (which I consider mostly a 16-bit ISA) did even better. That's why I've paid homage to it in my company car's plate.

matthey 
Re: The (Microprocessors) Code Density Hangout
Posted on 4-May-2021 8:58:00
#37 ]
Super Member
Joined: 14-Mar-2007
Posts: 1139
From: Kansas

cdimauro Quote:

Whereas your considerations above refer to a specific contest where only manually-written code was used and compared.
So, you're comparing apples and oranges (besides the fact that the 8086 wasn't running Linux, as was reported).

Second, this contest is of very limited use for comparing architectures, because:
- assembly code is rarely used (even on embedded systems);
- the used program was really tiny (in the order of 1KB);
- it isn't using so much real-life code;
- the provided solutions aren't all equally "top-notch".


Dr. Weaver's contest looks at the absolute limits of code density, but its usefulness is limited by the tiny size. It is a real-life program, although maybe not the best choice. Benchmark compiling looks at how well compilers support an architecture, where older and less popular architectures are at a disadvantage.

cdimauro Quote:

Regarding the last and for example, the 68K code provided by Matt and ross reorganized the LZSS code (removing one branch instruction) and used a net trick (using the X flag as a flag/signal). Something similar can be used on x86/x64 as well, but I had no time to adapt their code (I only did for my ISA).


The contest started out as a contest between Dr. Weaver and a friend to optimize the x86 program. I would therefore expect the x86 code to be relatively well optimized. Variable-length encodings do allow many opportunities for small optimizations.

simplex Quote:

I know almost nothing about this, but I noticed that the author had mentioned 8086 was an 8/16-bit architecture, while 68k and many others were 32-bit. Could that be a contributing cause?


cdimauro Quote:

For sure, since the 8086 often uses 16-bit offsets for addressing memory, and 16-bit constants, which helps A LOT in improving code density.


The 68k is a 16/32 bit architecture but supports 8, 16 and 32 bit data sizes (operations, immediates and displacements). The 8086 supports fewer sizes but has very good code density when using those sizes and poor code density when using larger sizes. The 8086 is specialized at byte processing where the 68k is more general purpose.

simplex Quote:

(I realize not all the 8-bit architectures had better code density, but I was wondering how a 6809 would do.)


cdimauro Quote:

It looks like the 68HC12 (which I consider mostly a 16-bit ISA) did even better. That's why I've paid homage to it in my company car's plate.


Generally, accumulator architectures do not have as good code density as register-memory architectures, due to requiring more instructions. Memory was precious in the 8-bit days, but so were the transistors in the processors, and accumulator-architecture processors used fewer transistors. The 6502 was a stripped-down 6800 and had poor code density. I would expect the 6800 to have better code density, and the 6809 better yet, but I don't know by how much. The 8086 and Z80 had good code density.

The 68000 ended up being a register-memory and memory-memory hybrid architecture. A memory-memory architecture usually has better code density than a register-memory architecture, so this improved the code density. The x86 ended up being a register-memory and accumulator hybrid architecture. An accumulator architecture usually has worse code density than a register-memory architecture, so this may have resulted in worse code density, although the 8086, which it is based on, has one of the best code densities for an 8-bit architecture. Then came load-store architectures, which generally have worse code density than either register-memory or accumulator architectures.
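The difference between these classes is easiest to see on a single statement. An idealized sketch in Python for `mem[c] = mem[a] + mem[b]` (the mnemonics are invented for illustration and belong to no real ISA; the counts follow the textbook machine models):

```python
# Instruction sequences for mem[c] = mem[a] + mem[b] under four
# idealized machine models.
sequences = {
    "memory-memory":   ["add c, a, b"],        # one instruction touches memory 3x
    "register-memory": ["move a, r0",          # ALU op may read memory directly
                        "add  b, r0",
                        "move r0, c"],
    "accumulator":     ["load  a",             # everything funnels through one register
                        "add   b",
                        "store c"],
    "load-store":      ["ld  r1, a",           # memory only via loads/stores
                        "ld  r2, b",
                        "add r3, r1, r2",
                        "st  r3, c"],
}

counts = {name: len(seq) for name, seq in sequences.items()}
print(counts)
```

Here the accumulator and register-memory models tie; the accumulator's extra instructions appear as soon as more than one intermediate value is live, since results must be spilled and reloaded through the single accumulator.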

matthey 
Re: The (Microprocessors) Code Density Hangout
Posted on 4-May-2021 20:54:38
#38 ]
Super Member
Joined: 14-Mar-2007
Posts: 1139
From: Kansas

cdimauro Quote:

- Microprocessors (general information like if the ISA is more or less oriented to code density, if it has specific execution modes for compact code, if it has specific extensions for compact code, etc.)


OneTimer1 Quote:

Is always application dependent.


cdimauro Quote:

Correct.


Code density wasn't supposed to matter for fat RISC, but then ignoring it turned into a liability. The first RISC architectures to fall were Alpha and PA-RISC, the two with the worst code density. The remaining RISC architectures could not dominate high-performance computing, which most were designed for, so they tried to improve code density to compete in lower-performance computing.

ARM - Thumb, Thumb2
MIPS - MIPS16, MicroMIPS
PPC - CodePack, VLE
SPARC - SPARC16

The newer RISC architectures like RISC-V and AArch64 have improved code density from the start although it is challenging for these load/store architectures. Instruction counts are high and instructions are usually large to encode more registers which are needed for load/store performance.

The 68k, then SuperH and then ARM Thumb2 were world leaders in 32 bit core sales. The x86-64 dominated the desktop and server markets. What did these architectures have in common? They all have good code density.

OneTimer1 Quote:

There are still reasons to use 8 bit MCUs


cdimauro Quote:

Indeed, but they are disappearing, looking at recent reports coming from embedded portals.


The 2019 EE Times Embedded Survey gives the following percentages for the main processor.

8 bit processor 10%
16 bit processor 11%
32 bit processor 61%
64 bit processor 15%

It looks like 8 bit processors have been maintaining their share over time too. There are some very simple embedded devices where an 8 bit processor is adequate but I suspect this also has to do with small FPGAs being cheap enough for the final board. The FPGA can be cheaper than a separate CPU when the size of the FPGA is small due to mass production of FPGAs. If there is enough volume for an ASIC SoC, then a 16 bit, 32 bit or even 64 bit processor becomes much cheaper to add. The silicon for the logic of a 32 bit processor like a 68060 would likely cost less than external memory. This is why a small amount of SRAM is popular in embedded SoCs but requires an architecture with good code density to fit.

The most noticeable trend from the main processor percentages above has been that 32 bit processors are losing share to 64 bit processors. Some of this is likely attributed to the increased availability of 64 bit cores like when using a Raspberry Pi as an embedded device. The importance of good code density 64 bit architectures is likely growing even for embedded. There isn't much competition as RISC load/store architectures are not particularly good at code density and x86-64 isn't very good at code density for a register-memory architecture.

cdimauro Quote:

- Motorola 68K corner (anything which is useful about this family, which is not covered by other topics)


OneTimer1 Quote:

It's dead


cdimauro Quote:

Correct, but nevertheless it's one of the best architectures, if not even the best, when looking at code density.


I believe the 68k is more alive than PPC. There are more PPC processors still available for sale, but they are all EOL with no further development planned. The 68k has FIDO, which is *not* EOL as far as I know. It was carefully designed for a long product life (at least a decade), which is easier for embedded products. It is innovative, although specialized enough (embedded automation) that it has not been used in Amiga hardware yet. There are multiple Amiga 68k cores in active development for FPGA hardware, like the Apollo Core, TG68, FPGA Arcade core, etc.

The 68k has one of the best code densities, yet little was done by Motorola/Freescale to improve it. Code size could likely be 5% smaller with a few enhancements (like adding some new ColdFire instructions and immediate compression). Turning it into a 32/64-bit architecture, where it would likely have the best code density of any 64-bit architecture, may be more important, as it would be seen as having a path forward.
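The immediate-compression idea can be sketched numerically: if every 32-bit immediate whose value fits in a signed 16-bit field could use a hypothetical 2-byte short form, each qualifying immediate saves 2 bytes. A Python sketch (the sample immediate values are invented for illustration):

```python
def compressed_savings(immediates, full_bytes=4, short_bytes=2):
    """Bytes saved if every immediate fitting a signed 16-bit short form
    is encoded in short_bytes instead of full_bytes."""
    saved = 0
    for v in immediates:
        if -0x8000 <= v <= 0x7FFF:   # fits the hypothetical short form
            saved += full_bytes - short_bytes
    return saved

# Invented sample: most immediates in real code are small values.
sample = [0, 1, -1, 8, 255, 4096, 100_000, -70_000]
print(compressed_savings(sample))  # 6 of 8 fit in 16 bits -> 12 bytes saved
```

Since small constants dominate real instruction streams, a short form like this is exactly the kind of enhancement that could plausibly shave a few percent off total code size.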

cdimauro 
Re: The (Microprocessors) Code Density Hangout
Posted on 5-May-2021 6:07:04
#39 ]
Elite Member
Joined: 29-Oct-2012
Posts: 2276
From: Germany

@matthey Quote:

matthey wrote:
cdimauro Quote:

Whereas your considerations above refer to a specific contest where only manually-written code was used and compared.
So, you're comparing apples and oranges (besides the fact that the 8086 wasn't running Linux, as was reported).

Second, this contest is of very limited use for comparing architectures, because:
- assembly code is rarely used (even on embedded systems);
- the used program was really tiny (in the order of 1KB);
- it isn't using so much real-life code;
- the provided solutions aren't all equally "top-notch".

Dr. Weaver's contest looks at the absolute limits of code density but the usefulness is limited by the tiny size. It is a real life program although maybe not the best choice.

Indeed. It isn't a complex application of some tens or even hundreds of KB, nor even a useful application. The code is limited, the algorithms used as well, and so is the usefulness.
Quote:
Benchmark compiling looks at how well compilers support an architecture where older and less popular architectures are at a disadvantage.

Yes, but unfortunately there's no other choice: almost all applications are compiled nowadays, and we can't even think of fully converting them to hand-optimized code.

The good thing is that 68K support was recently added to Clang/LLVM, which is the most promising compiler. So, there's a chance that the generated code can improve a lot.
Quote:
cdimauro Quote:

Regarding the last and for example, the 68K code provided by Matt and ross reorganized the LZSS code (removing one branch instruction) and used a net trick (using the X flag as a flag/signal). Something similar can be used on x86/x64 as well, but I had no time to adapt their code (I only did for my ISA).

The contest started out as a contest between Dr. Weaver and a friend to optimize the x86 program. I would therefore expect the x86 code to be relatively well optimized.

Yes. There are only a few things to be applied, but I had no time to change the sources.
Quote:
cdimauro Quote:

It looks like the 68HC12 (which I consider mostly a 16-bit ISA) did even better. That's why I've paid homage to it in my company car's plate.

Generally, accumulator architectures do not have as good code density as register-memory architectures, due to requiring more instructions. Memory was precious in the 8-bit days, but so were the transistors in the processors, and accumulator-architecture processors used fewer transistors. The 6502 was a stripped-down 6800 and had poor code density. I would expect the 6800 to have better code density, and the 6809 better yet, but I don't know by how much. The 8086 and Z80 had good code density.

Unfortunately, and incredibly, the 6800 family wasn't considered by Dr. Weaver, so there's no data. It's quite strange, because it was one of the most used microprocessors in the embedded world.
Quote:
The 68000 ended up being a register-memory and memory-memory hybrid architecture. A memory-memory architecture usually has better code density than a register-memory architecture, so this improved the code density. The x86 ended up being a register-memory and accumulator hybrid architecture. An accumulator architecture usually has worse code density than a register-memory architecture, so this may have resulted in worse code density, although the 8086, which it is based on, has one of the best code densities for an 8-bit architecture. Then came load-store architectures, which generally have worse code density than either register-memory or accumulator architectures.

The 68000 has only very limited memory-memory support, with a few instructions allowing a memory reference for both source and destination.
The 8086 also has very limited accumulator support, with just a few instructions which use only the accumulator, and some which have a short encoding when using it. It also has a few memory-memory instructions.
The 68HC12 is accumulator-based (but has many registers), register-memory, and also memory-memory (but limited to a few instructions as well).

Quote:
matthey wrote:
cdimauro Quote:

- Microprocessors (general information like if the ISA is more or less oriented to code density, if it has specific execution modes for compact code, if it has specific extensions for compact code, etc.)


OneTimer1 Quote:

Is always application dependent.


cdimauro Quote:

Correct.

Code density wasn't supposed to matter for fat RISC, but then ignoring it turned into a liability. The first RISC architectures to fall were Alpha and PA-RISC, the two with the worst code density. The remaining RISC architectures could not dominate high-performance computing, which most were designed for, so they tried to improve code density to compete in lower-performance computing.

ARM - Thumb, Thumb2
MIPS - MIPS16, MicroMIPS
PPC - CodePack, VLE
SPARC - SPARC16

The newer RISC architectures like RISC-V and AArch64 have improved code density from the start although it is challenging for these load/store architectures. Instruction counts are high and instructions are usually large to encode more registers which are needed for load/store performance.

The 68k, then SuperH and then ARM Thumb2 were world leaders in 32 bit core sales. The x86-64 dominated the desktop and server markets. What did these architectures have in common? They all have good code density.

I agree.

To better clarify my previous statement, what I've observed is that code density changes depending on the specific application. That's why it's "application dependent".
Quote:
OneTimer1 Quote:

There are still reasons to use 8 bit MCUs


cdimauro Quote:

Indeed, but they are disappearing, looking at recent reports coming from embedded portals.

The 2019 EE Times Embedded Survey gives the following percentages for the main processor.

8 bit processor 10%
16 bit processor 11%
32 bit processor 61%
64 bit processor 15%

It looks like 8 bit processors have been maintaining their share over time too. There are some very simple embedded devices where an 8 bit processor is adequate, but I suspect this also has to do with small FPGAs being cheap enough for the final board. The FPGA can be cheaper than a separate CPU when the size of the FPGA is small, due to mass production of FPGAs. If there is enough volume for an ASIC SoC, then a 16 bit, 32 bit or even 64 bit processor becomes much cheaper to add. The silicon for the logic of a 32 bit processor like a 68060 would likely cost less than external memory. This is why a small amount of SRAM is popular in embedded SoCs, but it requires an architecture with good code density for the code to fit.

That's why the embedded world is abandoning the 8 and 16-bit families: 32-bit parts became cheaper to produce, with far fewer constraints, so they are also easier to develop for. Combined with good code density, they have become the favorites.
Quote:
The most noticeable trend from the main processor percentages above has been that 32 bit processors are losing share to 64 bit processors. Some of this is likely attributed to the increased availability of 64 bit cores like when using a Raspberry Pi as an embedded device.

The Raspberry Pi has had a 64-bit architecture only since the Pi 3. But this platform isn't selling a lot: only a few million units, compared to a market of billions of devices.
Quote:
The importance of good code density for 64 bit architectures is likely growing even for embedded. There isn't much competition, as RISC load/store architectures are not particularly good at code density and x86-64 isn't very good at code density for a register-memory architecture.

x86-64 sucks: on average each instruction is about 0.8 bytes longer than its IA-32 counterpart. It has really poor code density, and nothing can be done to change this. So the embedded market for 64-bit devices should take a look at different ISAs.
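To put that 0.8-byte figure in perspective, a quick calculation shows the relative growth it implies (the ~3.5 byte IA-32 average used here is a commonly quoted ballpark, not a measurement from this thread), along with concrete encodings from the Intel manuals showing where the extra bytes come from:

```python
# Hypothetical average instruction sizes in bytes; the IA-32 figure is a
# commonly quoted ballpark, the x86-64 figure follows the +0.8 claim.
ia32_avg = 3.5
x64_avg = ia32_avg + 0.8            # REX prefixes, wider operands, etc.

growth = x64_avg / ia32_avg - 1.0
print(f"x86-64 code ~{growth:.0%} larger for the same instruction count")

# Concrete byte counts for the same move, with and without REX prefixes:
mov_eax_imm32 = 5   # B8 imm32         mov eax, 1
mov_r8d_imm32 = 6   # 41 B8 imm32      mov r8d, 1  (REX.B to reach r8-r15)
mov_rax_imm32 = 7   # 48 C7 C0 imm32   mov rax, 1  (REX.W for 64-bit width)
print(mov_eax_imm32, mov_r8d_imm32, mov_rax_imm32)
```

So every use of the extended registers or of 64-bit operand size costs at least one prefix byte, which is exactly the kind of fixed overhead that accumulates into the 0.8-byte average.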
Quote:
cdimauro Quote:

- Motorola 68K corner (anything which is useful about this family, which is not covered by other topics)


OneTimer1 Quote:

It's dead


cdimauro Quote:

Correct, but nevertheless it's one of the best architectures, if not even the best, when looking at code density.

I believe the 68k is more alive than PPC. There are more PPC processors still available for sale, but they are all EOL with no further development planned. The 68k has FIDO, which is *not* EOL as far as I know. It was carefully designed for a long product life (at least a decade), which matters for embedded products. It is innovative, although specialized enough (embedded automation) that it has not been used in Amiga hardware yet. There are multiple Amiga 68k cores in active development for FPGA hardware, like the Apollo Core, TG68, the FPGA Arcade core, etc.

Yes, but those are products for nano-niche markets: nothing that generates large volumes. That's why I consider them dead. BTW, FIDO isn't even 100% 68K-compatible.
Quote:
The 68k has one of the best code densities, yet little was done by Motorola/Freescale to try to improve it. Code size could likely be 5% smaller with a few enhancements (like adding some new ColdFire instructions and immediate compression). Turning it into a 32/64 bit architecture, where it would likely have the best code density of any 64 bit architecture, may be more important, as it would then be seen as having a path forward.

Absolutely agree. The problem is: who'll do it to address the embedded market (AKA large volumes)?

megol 
Re: The (Microprocessors) Code Density Hangout
Posted on 5-May-2021 15:18:14
#40 ]
Regular Member
Joined: 17-Mar-2008
Posts: 355
From: Unknown

This thread again?


Copyright (C) 2000 - 2019 Amigaworld.net.
Amigaworld.net was originally founded by David Doyle