bison
Re: 68k Developement - Posted on 24-Sep-2018 21:29:17 [ #361 ]
Elite Member | Joined: 18-Dec-2007 | Posts: 2112 | From: N-Space

@cdimauro
Quote:
There are also A72 and A73. |
There are, but they both have OoO execution and are almost certainly susceptible to side-channel exploits, which makes them less interesting to me.
I would just as soon avoid OoO execution until the full extent of possible exploits is known and understood. The problem was, after all, unknown for more than twenty years before becoming common knowledge only in the last year, and strategies for exploiting the architecture are still being discovered.
Last edited by bison on 24-Sep-2018 at 09:33 PM.
_________________ "Unix is supposed to fix that." -- Jay Miner
bison
Re: 68k Developement - Posted on 24-Sep-2018 21:32:22 [ #362 ]
Elite Member | Joined: 18-Dec-2007 | Posts: 2112 | From: N-Space

@Barana
If you're going to post things like that, would you at least consider changing your signature?
_________________ "Unix is supposed to fix that." -- Jay Miner
cdimauro
Re: 68k Developement - Posted on 25-Sep-2018 5:47:41 [ #363 ]
Elite Member | Joined: 29-Oct-2012 | Posts: 3650 | From: Germany

@ppcamiga1
Quote:
ppcamiga1 wrote: @cdimauro
I use 68k applications on my NG Amiga. They work better on my NG Amiga than on real 68k hardware. |
Guess where almost all of them came from... |
cdimauro
Re: 68k Developement - Posted on 25-Sep-2018 5:51:37 [ #364 ]
Elite Member | Joined: 29-Oct-2012 | Posts: 3650 | From: Germany

@megol
Quote:
megol wrote: @Barana Disgusting. |
@bison Quote:
bison wrote: @Barana
If you're going to post things like that, would you at least consider changing your signature? |
+2
I think that his behavior is totally incompatible with the basic, minimum civil rules of a forum. I hope the moderators take action. |
cdimauro
Re: 68k Developement - Posted on 25-Sep-2018 5:55:29 [ #365 ]
Elite Member | Joined: 29-Oct-2012 | Posts: 3650 | From: Germany

@bison
Quote:
bison wrote: @cdimauro
Quote:
There are also A72 and A73. |
There are, but they both have OoO execution and are almost certainly susceptible to side-channel exploits, which makes them less interesting to me.
I would just as soon avoid OoO execution until the full extent of possible exploits is known and understood. The problem was, after all, unknown for more than twenty years before becoming common knowledge only in the last year, and strategies for exploiting the architecture are still being discovered. |
I respect your opinion, but I prefer to stick with OoO execution. It's relatively secure even without patches/mitigations if you know which applications might be affected and how to use them: security is not a black-or-white thing.
If you like in-order micro-architectures, the latest from ARM is the Cortex-A55. |
NutsAboutAmiga
Re: 68k Developement - Posted on 25-Sep-2018 6:29:48 [ #366 ]
Elite Member | Joined: 9-Jun-2004 | Posts: 12820 | From: Norway

@ppcamiga1
Your comment comes across as a bit blunt.
>Real 68k is too slow to be usable.
Depends on what you're going to use it for.
>There is no fun in making software for real 68k.
I think it's easier to code for AmigaOS4.1 / PPC; at least GCC works well. I have memory protection that catches simple pointer bugs; you don't have that on 68k / OS3.x.
>68k users should not expect amiga developers to optimize software for 68k amiga.
>You want to use slow 68k Amiga?
Developers need to make special cases for any special instructions, and it would be silly to think that all developers will do that. Things like AltiVec are hardly used, just in special cases on PowerPC; the same will be true of AMMX and other stuff the Vampire team dreams up, and old programs will not automatically use the new stuff.
I'm a C/C++ programmer; my open-source projects will be written in C/C++ and won't be optimized for a specific CPU. I'm not that good at assembler and I think it's too time-consuming for me.
Last edited by NutsAboutAmiga on 25-Sep-2018 at 08:57 PM. Last edited by NutsAboutAmiga on 25-Sep-2018 at 06:32 AM.
_________________ http://lifeofliveforit.blogspot.no/ Facebook::LiveForIt Software for AmigaOS |
NutsAboutAmiga
Re: 68k Developement - Posted on 25-Sep-2018 6:42:30 [ #367 ]
Elite Member | Joined: 9-Jun-2004 | Posts: 12820 | From: Norway

@umisef
In the context of wawa's original question I'll follow up on your questions, even if they seem to be off-topic here. Quote:
And especially given that you develop for a system which lacks modern protections, developing on the same system sounds like a Really Bad Idea(tm). Having to reboot |
Most of the time I can just iconify a shell window, or move the window off the screen.
Quote:
and recreate all the development state from scratch, |
I just use macros to take me quickly back to the project. I use a script to run test cases.
Quote:
each time one makes a silly pointer mistake --- sounds painful. |
These bugs are easily detected by Grim Reaper. I know you're a 68K developer and are used to Guru Meditations; I never see the Guru.
Most often I reboot to remove crashed windows, or because of other bugs; see my previous comment.
Like freeing memory twice, or freeing the wrong address: because there is no memory resource tracking in AmigaOS, you are allowed to free a different program's memory, with the stupid mess that creates.
Freezes are mostly the result of Forbid()s and Intuition or bitmap locks; the system is running but the screen is frozen. You might use a remote shell to access the system in this state.
_________________ http://lifeofliveforit.blogspot.no/ Facebook::LiveForIt Software for AmigaOS |
NutsAboutAmiga
Re: 68k Developement - Posted on 25-Sep-2018 6:44:14 [ #368 ]
Elite Member | Joined: 9-Jun-2004 | Posts: 12820 | From: Norway
OlafS25
Re: 68k Developement - Posted on 25-Sep-2018 9:19:11 [ #369 ]
Elite Member | Joined: 12-May-2010 | Posts: 6353 | From: Unknown

@NutsAboutAmiga
Perhaps for GCC, but you have lots of programming environments, languages and libraries on 68k that are not available on "NG" and are not working there because they need the chipset (of course you could use UAE). GCC has advantages when porting software from other platforms, but for developing something specifically for the Amiga, 68k would be the better choice.
So to me it is a mixed picture... |
megol
Re: 68k Developement - Posted on 25-Sep-2018 9:20:39 [ #370 ]
Regular Member | Joined: 17-Mar-2008 | Posts: 355 | From: Unknown

@bison
Quote:
bison wrote: @cdimauro
Quote:
There are also A72 and A73. |
There are, but they both have OoO execution and are almost certainly susceptible to side-channel exploits, which makes them less interesting to me.
I would just as soon avoid OoO execution until the full extent of possible exploits is known and understood. The problem was, after all, unknown for more than twenty years before becoming common knowledge only in the last year, and strategies for exploiting the architecture are still being discovered.
|
I wouldn't say unknown in itself; the bandwidth at which information could be extracted through that mechanism was unknown.
Even the in-order designs have several paths for information leakage, as they have caches, TLBs, prefetch, and branch predictors. For instance, if the branch predictor can be determined to have a certain memory-address-to-predictor-entry mapping, an attacker can set the predictor state and see which path the victim code chose. If one knows that the victim will fetch data to a certain location in some interesting case, one can make sure to set the cache state right, call the victim, then detect whether that location is likely to have been touched.
What one can't do is directly get the interesting data fetched; that's where the power of these new kinds of exploits comes in: much higher bandwidth, and in some cases they can get data the victim code tries to protect with range checks and the like. The Intel/ARM/IBM Meltdown type of thing is extremely high bandwidth and trivial to exploit. |
umisef
Re: 68k Developement - Posted on 25-Sep-2018 10:49:24 [ #371 ]
Super Member | Joined: 19-Jun-2005 | Posts: 1714 | From: Melbourne, Australia

@NutsAboutAmiga
Quote:
For argument's sake, if I use UAE for testing, will I stop needing to reboot UAE?
|
The suggestion was to use one system for the change and build stages of the "change, build, test" cycle, and another for the test stage.
In such a setup, one definitely does not lose the state of the dev system, regardless of what goes wrong on the test system, so even if the dev system and the test system use the same hardware/OS setup, it is still a win.
The further suggestion was that, having made the separation anyway, it would be wise to re-evaluate whether using the same hardware/OS setup for both systems is the best choice. A different setup for the dev system may provide benefits such as access to better tools, faster build cycles, or better ergonomics.
|
umisef
Re: 68k Developement - Posted on 25-Sep-2018 11:06:12 [ #372 ]
Super Member | Joined: 19-Jun-2005 | Posts: 1714 | From: Melbourne, Australia

@megol
Quote:
megol wrote: @bison Quote:
I would just as soon avoid OoO execution until the full extent of possible exploits is known and understood.
|
I wouldn't say unknown in itself; the bandwidth at which information could be extracted through that mechanism was unknown.
|
Is there anything in speculative execution that requires OoO?
To the best of my knowledge, in-order implementations will keep executing straight through a predicted branch, relying on "just never commit any results if the prediction turns out incorrect". The problem is that "never commit any results" merely deals with the processor's programming model, and there is plenty of observable state in a modern processor which is not part of the programming model (such as the various caches for data, instructions and MMU configuration, as well as branch prediction, automatic prefetch systems, etc.).
(The unrelated madness of speculatively using data from a memory read before the MMU access rights validation for that data has completed is also independent of any OoO implementation, as far as I can see) |
wawa
Re: 68k Developement - Posted on 25-Sep-2018 11:34:29 [ #373 ]
Elite Member | Joined: 21-Jan-2008 | Posts: 6259 | From: Unknown

@NutsAboutAmiga
Quote:
NutsAboutAmiga wrote: @umisef
For argument's sake, if I use UAE for testing, will I stop needing to reboot UAE? |
No, but as umisef said, the development system will not be taken down along with it, which I think must have a pretty drastic impact when it happens every other time your test case goes wrong, considering you probably lose all your undo history in open editors and shells and may have forgotten where you last edited the files, not to mention when the system is slow to reboot.
Other than that, when the test case succeeds, you may not even need to reboot UAE: you simply recompile the binary, go to UAE and rerun it. Last edited by wawa on 25-Sep-2018 at 11:36 AM.
megol
Re: 68k Developement - Posted on 25-Sep-2018 13:56:09 [ #374 ]
Regular Member | Joined: 17-Mar-2008 | Posts: 355 | From: Unknown

@umisef
Quote:
umisef wrote: @megol Is there anything in speculative execution that requires OoO?
|
Some level of speculative execution is still there in a high performance in-order design. The difference is what level of speculation can be done. Quote:
To the best of my knowledge, in-order implementations will keep executing straight through a predicted branch, relying on "just never commit any results if the prediction turns out incorrect". The problem being that "never commit any results" merely deals with the processor's programming model, and there is plenty of observable state in a modern processor which is not part of the programming model (such as various cache for data, instruction and MMU config, as well branch prediction, automatic prefetch systems, etc.).
|
In a way one can say the OoO processor does the same - just making sure things commit in the right order in the end. The difference of course is that the level of speculation possible in a design able to speculate over several L1 cache misses is much higher than in one that can tolerate maybe one miss. Quote:
(The unrelated madness of speculatively using data from a memory read before the MMU access rights validation for that data has completed is also independent of any OoO implementation, as far as I can see) |
Well, it requires the check to be postponed long enough that a memory read of the target can be done, followed by a dependent read to prime the data cache. That is unlikely to be possible on in-order designs.
However, I did a search before submitting this post - and apparently the POWER6 has a demonstrated Spectre-type exploit. Admittedly it is a special case, since it supports run-ahead execution, but still. |
Hypex
Re: 68k Developement - Posted on 25-Sep-2018 15:59:04 [ #375 ]
Elite Member | Joined: 6-May-2007 | Posts: 11222 | From: Greensborough, Australia
NutsAboutAmiga
Re: 68k Developement - Posted on 25-Sep-2018 16:41:12 [ #376 ]
Elite Member | Joined: 9-Jun-2004 | Posts: 12820 | From: Norway

@OlafS25
Quote:
but you have lots of programming environments, languages and libraries on 68k that are not available on "NG" and are not working there because they need the chipset. |
Yes, there are a lot of old tools that have not been updated. I googled for AsmOne and AsmPro, which were hugely popular; they have not been updated in decades, and it's unlikely they support the 68080, even if the source code is available.
The only tool I found that does support the 68080 is VBCC. VBCC is not bad and is actually being updated; it works fine on AmigaOS4, and I have created a cross-compiler to compile 680x0 assembler stuff.
Petunia has no problem with 680x0 libraries. Sadly, if the source code is not available, no one will update or improve these libraries, and so they will probably never be optimized for the 68080.
Quote:
So to me it is a mixed picture... |
Yes, it's a mixed picture of "this is working, this is not working":
a) Great, this is written well and just works.
b) This is not working, but source code is available and easy to work with.
c) This is not working, and the source code is written in a language that is abandoned.
d) This is not working, and the source code is not available.
e) This is not working, and the source code is not available, but it's worth replicating.
f) This is not working, and the source code is not available, but the owner is willing to sell it.
_________________ http://lifeofliveforit.blogspot.no/ Facebook::LiveForIt Software for AmigaOS |
megol
Re: 68k Developement - Posted on 25-Sep-2018 16:42:03 [ #377 ]
Regular Member | Joined: 17-Mar-2008 | Posts: 355 | From: Unknown

@Hypex
Quote:
Hypex wrote: @BigD
Quote:
Worthy is a new A500 compatible Classic game! A commercial A500 OCS game that looks far better than 3D Spencer on NG |
Have you seen Spencer? I haven't as I still need a 3d card. I could just play it on Windows at this point.
But OCS? Seriously, AGA made OCS obsolete 25 years ago. AGA is so much more powerful. That would almost be like still doing 256 colour PC games when the world shifted to true colour ages ago. No wonder the Amiga died out if game makers are still targeting an A500 and floppy as the most common machine.
|
I have no AGA machine but several ECS ones, and there are a lot of others who have no AGA machine. In many ways I prefer a skilled pixel artist working with 16 or 32 colors to one who has true color or even 256 colors available. A larger market (...) and more of a technical challenge. What's not to like? ;) |
ppcamiga1
Re: 68k Developement - Posted on 25-Sep-2018 16:51:49 [ #378 ]
Cult Member | Joined: 23-Aug-2015 | Posts: 771 | From: Unknown
I have no printer. In the last ten years I have printed very rarely, almost always on a printer in a local shop or on a PC at my work. I do not need TurboPrint. What I need is a fast Amiga to convert .ps to .pdf, which I can use everywhere.
I prefer Personal Paint over Deluxe Paint. But in rare cases I use Deluxe Paint IV - it works without problems.
I do not need the chipset. Every usable piece of 68k software from my old 68k Amiga works better on my NG Amiga, without problems.
ppcamiga1
Re: 68k Developement - Posted on 25-Sep-2018 16:59:49 [ #379 ]
Cult Member | Joined: 23-Aug-2015 | Posts: 771 | From: Unknown
Amiga is my hobby, not my work. As a user I do not want to spend my precious free time on something slower and less comfortable than a cheap PC from the Windows 95 era. As a developer I do not want to waste my precious free time optimizing software to work on something slower than a cheap PC from the Windows 95 era. If a slower Amiga is OK for you, then fine, we can cooperate in some way, but don't expect us to downgrade.
matthey
Re: 68k Developement - Posted on 26-Sep-2018 3:40:50 [ #380 ]
Elite Member | Joined: 14-Mar-2007 | Posts: 2015 | From: Kansas

Quote:
cdimauro wrote: Frankly speaking, now I don't know what you really want with the new ISA design. I'm not kidding you, but I want to better understand your vision / goal. So, I'll ask some questions.
|
Don't worry. I'm open-minded enough to consider how to do things I don't currently consider necessary, like adding more registers, and what they would cost.
Quote:
Do you want 64-bit support?
|
IMO, a 32 bit CPU hits the sweet spot for performance, has a good amount of address space, is easy to program and provides a small footprint. This is why they continue to be popular for embedded use. Personally, I would be happy with 32 bit CPUs with a small footprint. However, 64 bit CPUs are popular and hyped right now so it is necessary to have 64 bit plans for the future to gain any kind of respect. Some applications practically require 64 bit including advanced GPUs.
Quote:
How much of 68K and/or x86/x64 instructions compatibility do you want to keep? In different words, how much can be thrown out?
|
I think compatibility is more important to a 68k CPU than any entrenched architecture. The huge 68k software library and 68k fans depend on compatibility.
Quote:
Which registers set is ok for you: (current) 8 (data) + 8 (address), 16 + 16, 16 + 8, 32 + 16?
|
I think 8+8 integer registers is enough but 16+8 is a possibility if necessary. 8+8 is the best for compatibility and code density while providing acceptable memory traffic from stats I have seen. The x86_64 ISA provides world dominating performance with only 16 GP registers.
Quote:
Code density is a dogma or a slight decrease is acceptable?
|
Code density is important and becoming more competitive with new ISAs. I think it would be a mistake to decrease code density when the 68k can leverage the code density advantage.
Quote:
Is binary compatibility required, or source (assembly) compatibility is fine? 100% or less?
|
I prefer to retain 68k binary compatibility but 100% compatibility with the 68000-68060 is not possible. It is possible to add ColdFire compatibility but I'm not sure whether source or binary is better at this point.
Quote:
Do you want to keep the more complex addressing modes (e.g.: double memory indirect)?
|
I think it best to keep some form of support for compatibility. It may be possible that the single memory accessing variations are useful and not too problematic. It is generally better to split the 2 memory access variations into 2 instructions for better scheduling. They are useful for modern OO code but they are inherently multi-cycle. An OoO 68k CPU would likely have less problems with the double memory accessing addressing modes.
Quote:
Do want to keep 68K compatibility (e.g.: the processor has a 68K compatibility mode) or only new (even 32-bit) mode(s) are available (e.g.: the new ISA is only 68K and/or x86/x64 inspired)? Or something like ARM, with ARM32 + Thumb-2 (like a 32-bit redesign) + ARM64 modes usable?
|
I prefer to allow the 32 bit mode at the same time as the 64 bit mode (like ARM). The 32 bit code could provide better compatibility and sometimes have improved code density.
Quote:
Are opcodes little or big endian? Is data access little endian, big endian, or selectable?
|
Big endian is needed for 68k compatibility. Of course there should be ways to accelerate little endian data accesses.
Quote:
How long an instruction can be? >16 bytes?
|
Longer than 16 bytes is necessary for compatibility and it would be difficult to limit instructions to 16 bytes for 64 bit support. With that said, I have been careful to make sure that the maximum instruction length does not grow from the 68020 ISA.
Quote:
Which market segments should be targeted?
|
Embedded, hobbyist/retro and education.
Quote:
I don't like them but they are an option if necessary.
Quote:
Is instruction decoding changeable at runtime (by the kernel, or even by the application)?
In other words and to simplify, do you (already) have an idea of what you want achieve with the new processor (and the new ISA)?
|
Do you mean an FPGA CPU core or CPU microcode which can be updated? I would like to have a hard and standard CPU. Customization of the 68k CPU instructions would likely be limited. An FPGA on the board with the reset controlled by CPU software (but not tied to the CPU reset) is more interesting for customization. Software could then load various FPGA acceleration programs (for embedded or codec accelerators) or custom chip sets (console or computer) into the FPGA. There still hasn't been the ultimate and easy to use out of the box CPU and FPGA combo even as there are CPU+FPGA SoCs.
Quote:
I can say that I prefer to write new ISAs which are only INSPIRED by existing ones. I think that it's enough to take only the good parts/ideas, while creating something new. If it's 100% assembly compatible it's OK, but it's not strictly necessary (for my NEx64T it was my goal because I wanted to make it really easy to port software, and I was lucky to be able to achieve it, but I had to make several compromises).
So, how is your post-68K/inspired new processor/ISA looking?
|
I consider my incomplete 68k_64 ISA to still be 68k and not just inspired. Yes, I have thought about a universal SuperCISC ISA "inspired" by the 68k but nothing exists for it.
Quote:
Please read ALL comments.
Speculative execution is here to stay. Mitigated, certainly, but it'll not be abandoned.
|
I read all the comments. Of course speculation is here to stay but it is unlikely to be as deep, at least for a while. I expect the simpler in-order CPUs have had a surge of sales since Spectre-type exploits came to light and I'm not sure OoO CPU sales have hit bottom yet.
Quote:
cdimauro wrote: That's very strange, because I see that in most games the processors which show better performances are always the ones with HT enabled. At least on Intel side, whereas AMD processors shown to suffer it and AMD presented a "Game mode" for its Ryzen, which essentially disables SMT.
|
When I looked at stats a few years ago, Intel CPUs were having problems with multi-threading performance too. Even from slightly different Intel CPUs there was inconsistent performance.
Quote:
OK, so three answers here to my above questions: 64-bit, embedded market, and a "mid" SIMD unit (with 16 registers, I assume).
|
I am not set on 16 SIMD registers although it would make several things easier. It is much more expensive to run out of SIMD registers as the data is often streaming. The SIMD unit is the last I will work on.
Quote:
There are around 15 billion ARM cores produced each year.
Western Digital announced last year that it will completely move from ARM to RISC-V for its micro-controllers, and it produces 1-2 billion per year.
Just to give an example. But other big CPU vendors have joined the RISC-V foundation/committee, so I see a threat for ARM business here.
|
AArch64 was a big gamble to go after higher performance markets (especially servers) but ARM has become less competitive in deeply embedded devices where customization, simplicity and code density are important. RISC-V has signed up some big names and people like the open cores with no licensing concept. ARM has a strong reputation in embedded but maybe they have given up Thumb2 like Motorola/Freescale gave up the 68k.
Quote:
Yes, this preliminary result is quite encouraging, but it's not enough to make a serious comparison with all other ISAs. I need a compiler which targets the new ISA, and allows to generate binaries using the most common applications used for such benchmarks (usually SPECint and SPECfp), but this requires a HUGE work and I've no time now. It's even worse, because currently I cannot take advantage of many features which I've defined (and which can further improve both code density and performance), because it requires proper code generation (which might be complex to implement). |
The compiler has been the Achilles heel of the 68k as well.
Quote:
Most of the results were relative to some previous RISC-V result so are mostly meaningless. I found the RISC-V video which came on after it more interesting.
https://www.youtube.com/watch?v=Ii_pEXKKYUg
The Renewed Case for the Reduced Instruction Set Computer: Avoiding ISA Bloat with Macro-Op Fusion for RISC-V https://people.eecs.berkeley.edu/~krste/papers/EECS-2016-130.pdf
Christopher Celio mostly looks at instruction counts and code density in his comparison of x86_64, ARM32, AArch64 and RISC-V 64G(C). He uses dynamic traces of the SPEC CINT2006 benchmark programs. RV64GC does compare pretty well to AArch64 considering the complexity of AArch64 (load/store pair with post increment needs 3 write ports to execute in a single cycle, as the article points out). Some of the comparisons are unfair, like including the macro-op (instruction) fusion results in some charts with RV64GC. Other architectures are doing fusing as well, so I really don't see it as an advantage for RISC-V to reduce instructions/operations to fewer than them. There are also disadvantages to macro-op fusion, like reduced code density and more difficult instruction scheduling vs ISA-supported instructions and addressing modes. The main advantages are that the simpler ISA scales better to low performance hardware and dependent instructions can sometimes be removed.
He really didn't compare to anything with good code density. Thumb2 was excluded and I guess the 68k was not "popular" enough. He did find that x86_64 had an average instruction length of 3.71 bytes/instruction which is poor. That is interesting as Vince Weaver's x86_64 code optimized for size gave a very good 2.29 bytes/instruction (uses only 8 registers and the stack often). Seeing as how x86_64 compiler support is good, perhaps we can see the cost of prefixes and larger instructions which access more registers with prefixes and less memory accesses. Of course Vince's static results show a different story when optimized for size.
Fewest Instructions
1) AArch64 (power packed instructions for a RISC ISA)
2) 68k (I know a few ISA enhancements to reduce instructions)
3) ARM32/EABI (difficult to optimize but power packed for some algorithms)
4-5) RISCV32IMC & RISCV64IMC (good for a simple ISA)
6) PPC (good before the competition showed up)
7) MIPS
8) Thumb2 (increased code density increased instructions)
9) SPARC
10) Thumb1
11) x86
12) x86_64 (optimizing for size increases instruction counts and memory traffic)
13) SH-3 (16 bit fixed length encoding was not good for performance)
Best Code Density
1) 68k (I know a few ISA enhancements to improve code density)
2) Thumb2 (great code density but 21% more instructions than the 68k)
3) Thumb1
4) SH-3 (good code density but 47% more instructions than the 68k)
5) x86 (good code density but 31% more instructions than RISCV32IMC)
6) RISCV32IMC (good for a simple ISA)
7) x86_64 (good code density but 34% more instructions than RISCV64IMC)
8) RISCV64IMC (good for a simple ISA)
9) AArch64 (powerful for RISC but mediocre code density)
10) ARM32/EABI (RISC code density didn't used to be important)
11) PPC (mediocre before the competition showed up)
12) MIPS (beat SPARC but MIPS programs are larger because of data)
13) SPARC
Christopher's conclusion was, "Our analysis using the SPEC CINT2006 benchmark suite shows that the RISC-V ISA can be both denser and higher performance than the popular, existing commercial CISC ISAs." They obviously didn't compare to a good CISC ISA but rather to one with many trade-offs. Where would the 68k be with a good compiler and enhancements?
One last interesting find in the appendix.
Quote:
400.perlbench benchmarks the interpreted Perl language with some of the more OS-centric elements removed and file I/O reduced. Although libc_malloc and _int_free routines make an appearance for a few percent of the instruction count, the only thing of any serious note in 400.perlbench is a significant amount of the instruction count spent on stack pushing and popping. This works against RV64G as its larger register pool requires it to spend more time saving and restoring more registers. Although counter-intuitive, this can be an issue if functions exhibit early function returns and end up not needing to use all of the allocated registers.
|
More registers *decreased* performance in functions with early returns.
Large Register File Advantages and Disadvantages
+ decreased memory traffic
- code density
- transistor count
- more stack space used
- more registers to save during exceptions
- early return functions have more registers to save and restore
|