what is wrong with 68k

matthey 
Re: what is wrong with 68k
Posted on 2-Dec-2024 20:25:48
#241
Elite Member
Joined: 14-Mar-2007
Posts: 2393
From: Kansas

cdimauro Quote:

On top of what Matt already reported, the first ARM versions were fully MICROCODED. Yes, like a 68000 and many other... CISC processors.


Some people think of ARM and SuperH as the better embedded RISC ISAs. Both used microcode in early core designs, which allowed CISC-like features, including complex addressing modes (using the ARM barrel shifter, for example) and complex move/load/store multiple/block instructions, from a shallow pipeline design.

https://en.wikichip.org/wiki/ARM2#Decode Quote:

The reason the decode is implemented in a number of separate units is because the ARM2 makes use of microcode ROMs (PLA). Each instruction is decoded into up to four µOP signal-wise. In other words, the ARM instructions are broken down into up to four sets of internal-µOP signals indicating things such as which registers to select or what value to shift by. For some complex operations such as block-transfers, the microsequencer also performs a looping operation for each register.
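
To make the looping part concrete, here is a toy C model of the idea (my sketch, not ARM2's actual microcode or signal names): a block-transfer operation word carries a register mask, and a microsequencer-style loop emits one internal load micro-op per selected register.

Code:

#include <stdio.h>
#include <stdint.h>

/* Toy model of a microsequencer expanding a block transfer:
   the 16-bit register mask selects which of r0-r15 to load,
   and each selected register becomes one internal micro-op. */
typedef struct { int reg; uint32_t addr; } uop;

static int expand_block_load(uint16_t mask, uint32_t base, uop out[16])
{
    int n = 0;
    for (int r = 0; r < 16; r++) {
        if (mask & (1u << r)) {
            out[n].reg  = r;
            out[n].addr = base + 4u * (uint32_t)n; /* ascending addresses */
            n++;
        }
    }
    return n; /* number of micro-ops generated */
}

int main(void)
{
    uop uops[16];
    int n = expand_block_load(0x8011, 0x1000, uops); /* selects r0, r4, r15 */
    for (int i = 0; i < n; i++)
        printf("uop %d: load r%d from 0x%04x\n", i, uops[i].reg,
               (unsigned)uops[i].addr);
    return 0;
}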


The original ARM, SuperH and Thumb(-2) ISAs are inferior, RISCier copies of the 68000 ISA. Early ARM and SuperH core designs were influenced by the by-then-old 68000 but offered some improvements like pipelining and, later, caches, which were also added to the then-in-development 68020/68030.

cdimauro Quote:

Then why were many instructions removed from the 68k added again on ColdFire?

Cost reduction makes sense if you have super small cores with a few transistors/gates, but with new processes and the addition of bigger and bigger caches even on embedded processors, saving a few gates on a processor core makes little practical difference.


Adding back 68k instructions to ColdFire that had been prematurely castrated was as much about regaining 68k compatibility as restoring lost 68k performance. Imagine x86 losing compatibility and performance at the same time and the effect that would have had on the x86 PC platform. The x86(-64) architecture would benefit from area and power reductions, as can be seen from the x86S proposal and the UEFI replacement of BIOS. The 68k has a tiny fraction of the baggage of x86(-64).

cdimauro Quote:

How much money have companies like Microchip made, and are still making, with their sub-$1 SoCs?

Other companies licensed the 68k and used it in the embedded market (see FIDO, for example).

It was only Motorola who did not believe in its own project and killed it.


Why did FIDO need 68k licensing? Motorola/Freescale told the Natami team they did not need a license for new 68k development. It is when reusing existing CPU core designs, software and documentation that licensing may be needed, but the obsolete designation of the 68k may make licensing reasonable enough that development time and overall costs would be saved.

cdimauro Quote:

I give you another example that shows why this isn't true: the J-Core processor.

https://j-core.org

You can read the motivations for why SuperH was restarted. And you can see for yourself the similarities with the 68k, which had a much bigger audience & market.

More interesting is a talk which was given: https://j-core.org/talks/japan-2015.pdf

You can read slides 9-10 which further show why J-Core was born.


Slide 11 reads "Code Density : Efficiency", "There *Are* other metrics, but none actually matter more (unless something is broken)". Vince Weaver's code density data shown in the slide was flawed; it was later fixed on his website, with the following results.

http://www.deater.net/weave/vmwprod/asm/ll/ Quote:

31 Architectures, Smallest Linux executable is 870 bytes! (on m68k)


Jeff Dionne's revival of SuperH was heavily influenced by flawed code density data, even though he prefers to code in assembly on the 68k. Code density may be the most important CPU metric for embedded cores, as performance is improved while power is reduced, but for performance alone the number of instructions executed is very important too, and there SuperH, with its fixed-length 16-bit encoding, is severely handicapped. SuperH at least has good code density, unlike the original ARM ISA. The 68k has the best CPU metrics of SuperH, the original ARM and the Thumb(-2) ISAs: the best code density, the fewest instructions executed and the lowest memory traffic. The 68k has more complexity than these simplified embedded RISCier ISAs, but the last remaining RISC tenet they keep is load/store memory access, and considering the 68k remains the leader in CPU/performance metrics, that one should fall too.

Kronos Quote:

Which says nothing unless you include the average profit per unit (I'd guess single digits) and put that in relation to the R&D cost that would be needed to keep it relevant for the next x years.


The profits from the 68k were being used for PPC development so 68k profit margins were not important to 68k development. ARM Holdings and the RPi Foundation successfully leveraged low profit margin embedded economies of scale to become what they are today.

Kronos Quote:

It was important when computers had ROM and RAM counted in kB and most of that was taken up by code.

Today you only run out of RAM when data balloons out of control, often the result of sloppy coding.

Not meaningless, but also not more than an afterthought in most use cases.


I believe you underestimate the benefits of code density, much like early RISC developers did. Is it not obvious from listing 32/64-bit architectures by code density, from worst to best?

Alpha
PA-RISC
MIPS
88k
SPARC
PPC
ARM (original)
ARM64/AArch64
x86-64
RV64IMC
SuperH (revived by J-Core project)
x86
RV32IMC
Thumb
Thumb-2
68k

Code density is more important today because it lets more code fit in CPU caches. Also, good code density increases the number of instructions fetched per access and decreases memory traffic.
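
A quick back-of-the-envelope illustration of that last point (the fetch width and average instruction sizes here are assumptions for the sake of arithmetic, not measurements of any real core):

Code:

#include <stdio.h>

/* Instructions delivered per fetch for different average instruction
   sizes; the byte counts are illustrative assumptions. */
int main(void)
{
    const double fetch_bytes = 16.0; /* assumed fetch/cache-line slice */
    const double avg_size[]  = { 4.0, 3.0, 2.6 };
    const char  *label[]     = { "fixed 32-bit RISC", "~3.0-byte average",
                                 "~2.6-byte average" };
    for (int i = 0; i < 3; i++)
        printf("%-18s: %.1f instructions per %.0f-byte fetch\n",
               label[i], fetch_bytes / avg_size[i], fetch_bytes);
    return 0;
}

More instructions per fetch means fewer fetches, and therefore less memory traffic, for the same program.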

Last edited by matthey on 02-Dec-2024 at 11:31 PM.
Last edited by matthey on 02-Dec-2024 at 08:33 PM.

cdimauro 
Re: what is wrong with 68k
Posted on 3-Dec-2024 6:02:40
#242
Elite Member
Joined: 29-Oct-2012
Posts: 4127
From: Germany

@matthey

Quote:

matthey wrote:
cdimauro Quote:
Then why were many instructions removed from the 68k added again on ColdFire?

Cost reduction makes sense if you have super small cores with a few transistors/gates, but with new processes and the addition of bigger and bigger caches even on embedded processors, saving a few gates on a processor core makes little practical difference.


Adding back 68k instructions to ColdFire that had been prematurely castrated was as much about regaining 68k compatibility as restoring lost 68k performance. Imagine x86 losing compatibility and performance at the same time and the effect that would have had on the x86 PC platform.

Exactly. If they wanted to save some gates then they could have microcoded or, even better, millicoded the most complex instructions on ColdFire whilst retaining full 68k compatibility.

The problem with Motorola is that they never understood how important backward compatibility was.
Quote:
The x86(-64) architecture would benefit from area and power reductions, as can be seen from the x86S proposal and the UEFI replacement of BIOS.

Here it'll be different, because the plan with X86S is to remove many legacy parts, so those processors will not be fully backward-compatible.

However, this isn't really a problem, since modern software (including OSes) has already been using only a subset of x86/x64 for a very long time.

That's the reason why on my previous architecture (NEx64T) I decided to support only some x86/x64 features, and I millicoded everything else with a proper mechanism. And it was... 2011 when I started. Intel woke up quite late.
Quote:
The 68k has a tiny fraction of the baggage of x86(-64).

Indeed, but the biggest baggage is represented by the stack frames for interrupts/exceptions: they are very complicated, because the Motorola engineers wanted fully controllable & restartable instructions. That was a very nice feature and convenient to implement at the time, but it complicates chip design A LOT when you want to introduce pipelining and superpipelining.

However, AFAIR the 68060 doesn't support this mechanism anymore and uses a more modern approach: fault -> roll back the partial execution -> point to the offending instruction. Which is the way to go.
Quote:
cdimauro Quote:

How much money have companies like Microchip made, and are still making, with their sub-$1 SoCs?

Other companies licensed the 68k and used it in the embedded market (see FIDO, for example).

It was only Motorola who did not believe in its own project and killed it.


Why did FIDO need 68k licensing? Motorola/Freescale told the Natami team they did not need a license for new 68k development. It is when reusing existing CPU core designs, software and documentation that licensing may be needed, but the obsolete designation of the 68k may make licensing reasonable enough that development time and overall costs would be saved.

Well, companies usually require you to pay licenses for using their architecture designs. So, I was thinking that Innovasic licensed the CPU32 architecture from Motorola to produce its FIDO.

To me this is further proof that Motorola didn't know the gold it had/has in its hands: they're giving away their architecture for free. Unbelievable...
Quote:
cdimauro Quote:

I give you another example that shows why this isn't true: the J-Core processor.

https://j-core.org

You can read the motivations for why SuperH was restarted. And you can see for yourself the similarities with the 68k, which had a much bigger audience & market.

More interesting is a talk which was given: https://j-core.org/talks/japan-2015.pdf

You can read slides 9-10 which further show why J-Core was born.


Slide 11 reads "Code Density : Efficiency", "There *Are* other metrics, but none actually matter more (unless something is broken)".

Yes, thanks for fixing it: I was rushing in the morning before work, so I typed 10 instead of 11 and failed to highlight this very important slide. Which should speak for itself...
Quote:
Vince Weaver's code density data shown in the slide was flawed; it was later fixed on his website, with the following results.

http://www.deater.net/weave/vmwprod/asm/ll/ Quote:

31 Architectures, Smallest Linux executable is 870 bytes! (on m68k)

Well, the slides were old and using the old data. The 68k is again leading this code density contest thanks to your and Ross' work.
Quote:
Jeff Dionne's revival of SuperH was heavily influenced by flawed code density data, even though he prefers to code in assembly on the 68k. Code density may be the most important CPU metric for embedded cores, as performance is improved while power is reduced, but for performance alone the number of instructions executed is very important too,

Absolutely. That's something which should be set in stone and in the minds of people when they talk about computer architectures.
Quote:
and there SuperH, with its fixed-length 16-bit encoding, is severely handicapped. SuperH at least has good code density, unlike the original ARM ISA. The 68k has the best CPU metrics of SuperH, the original ARM and the Thumb(-2) ISAs: the best code density, the fewest instructions executed and the lowest memory traffic. The 68k has more complexity than these simplified embedded RISCier ISAs, but the last remaining RISC tenet they keep is load/store memory access, and considering the 68k remains the leader in CPU/performance metrics, that one should fall too.

I don't get why the J-Core team didn't revive the 68000 core instead of the SuperH, since there are no licensing issues.

They waited for the SuperH patents to expire before working on J-Core, whereas Motorola had already offered everything for free.

And the 68k had much better support and tools, was more widespread/used, easier to program, etc., as you've also reported.
Quote:
Kronos Quote:

It was important when computers had ROM and RAM counted in kB and most of that was taken up by code.

Today you only run out of RAM when data balloons out of control, often the result of sloppy coding.

Not meaningless, but also not more than an afterthought in most use cases.


I believe you underestimate the benefits of code density, much like early RISC developers did. Is it not obvious from listing 32/64-bit architectures by code density, from worst to best?

Alpha
PA-RISC
MIPS
88k
SPARC
PPC
ARM (original)
ARM64/AArch64
x86-64
RV64IMC
SuperH (revived by J-Core project)
x86
RV32IMC
Thumb
Thumb-2
68k

Code density is more important today because it lets more code fit in CPU caches. Also, good code density increases the number of instructions fetched per access and decreases memory traffic.

Which also helps A LOT with the now much more widespread multicore systems, since you have so many cores contending for access to memory.

Memory hasn't scaled as much in terms of bandwidth. It has been the primary bottleneck for CPUs for a very long time: CPU cores advanced very quickly in frequency and in the number of executed instructions, but memory frequencies and bandwidth haven't followed the same path, and this has held back CPU performance.

That's why caches were introduced: to mitigate the problem. And we have multi-level caches up to L4 / HBM for this reason.

Code density helps A LOT from this PoV, because it considerably reduces accesses to memory at all levels (starting from the L1 cache).

That's ALSO the reason why it's one of the most important factors when talking about computer architectures, along with the number of executed instructions (also because saving space in memory matters: we have A LOT of code in memory nowadays, because systems have become very complicated, with several complex components running at the same time and on several cores).

People who aren't computer architecture experts or enthusiasts don't get how important it is, despite all processor vendors having solutions / extensions specifically addressing it. Bah...

BTW, there's space for what I call "data density". In fact, a computer architecture can also save memory accesses made for data. Immediates embedded in instructions allow this, and they are considered part of the code density domain. However, more can be done.

Now... working time (as usual) and no time to read again and fix typos etc.

kolla 
Re: what is wrong with 68k
Posted on 3-Dec-2024 9:27:34
#243
Elite Member
Joined: 20-Aug-2003
Posts: 3275
From: Trondheim, Norway

@matthey

As you of course know, SPARC isn't entirely dead just yet; Fujitsu has planned end-of-sale of their M12 in 2029 and end-of-support in 2034. They are migrating to ARM.

(Also, just like disco, MIPS didn't die, it just went underground... and evolved into LoongArch)

Last edited by kolla on 03-Dec-2024 at 09:30 AM.


matthey 
Re: what is wrong with 68k
Posted on 3-Dec-2024 22:01:46
#244
Elite Member
Joined: 14-Mar-2007
Posts: 2393
From: Kansas

cdimauro Quote:

The problem with Motorola is that they never understood how important backward compatibility was.


Backward compatibility is less important for embedded hardware than for desktop/workstation/server hardware, but it is still important, as demonstrated by the A1222+ PPC hardware and ColdFire hardware which, despite using a subset of the existing CPU ISA, lost enough compatibility to cause major problems. The 68k had poor planning after the 68000 ISA.

Desktop 68k

68000: excellent, providing 32-bit datatypes and address space with easy-to-decode 16-bit ops
68010: minor improvement fixing minor mistakes, with some compatibility loss
68020: mediocre, with solid enhancements like long branches, a barrel shifter, scaled addressing modes and 32/64-bit MUL/DIV, but also double memory indirect addressing modes and rarely used language-specific instructions
6888x: good FPU ISA for an external FPU
68851: external MMU that should never have been developed; the development effort would have been better spent releasing the 68030 sooner
68030: same as the 68020 except for an incompatible integrated MMU
68040: FPU integrated but castrated, MMU incompatible
68060: some integer instruction castration too, including the less-than-rare 64-bit MUL/DIV

Despite the castrations, 68k backward compatibility in user mode was good, but support and avoiding performance bottlenecks became more difficult. As transistor budgets increased, some of the support could have been brought back, but I admit it was tempting to slim the 68060 to deepen the pipeline, double the caches and reduce the price at the time. Unfortunately, we never saw the benefit of higher clock speeds or doubled caches, which were already possible when the 68060 was released, because the 68060 was already embarrassing the shallow-pipeline, limited-OoO PPC designs.

Embedded 68k

68000: excellent, providing 32-bit datatypes and address space with easy-to-decode 16-bit ops
68008: no ISA changes; narrowed 8-bit data bus instead of the 68000's 16-bit data bus; never should have been developed considering the performance loss and the similar cost to the 68000
CPU32: a good and more practical ISA than the 68020's for early embedded use, with no double memory indirect addressing modes and no bitfield instructions; excellent 68000 compatibility and good 68020 compatibility
ColdFire: major castration of the 68k ISA, with poor 68000 compatibility and terrible 68020 compatibility

ColdFire was barely simpler than the 68000 ISA, so it made little sense to castrate and scale below the 68000. The CPU32 ISA was well liked for embedded use, with cores already developed supporting it, but they obviously competed with low-end PPC cores.

cdimauro Quote:

Well, companies usually require you to pay licenses for using their architecture designs. So, I was thinking that Innovasic licensed the CPU32 architecture from Motorola to produce its FIDO.

To me this is further proof that Motorola didn't know the gold it had/has in its hands: they're giving away their architecture for free. Unbelievable...


I did not ask Dave Alsup whether 68k IP was licensed for FIDO, so it is possible. I doubt it though. "CPU32" was used in marketing and documentation, but there is no trademark registered with the USPTO. It looks like "68k", "68000", "68020" etc. are unregistered as well. All patents should be long expired, which leaves copyrights in existing cores, support code and documentation.

cdimauro Quote:

I don't get why the J-Core team didn't revive the 68000 core instead of the SuperH, since there are no licensing issues.

They waited for the SuperH patents to expire before working on J-Core, whereas Motorola had already offered everything for free.

And the 68k had much better support and tools, was more widespread/used, easier to program, etc., as you've also reported.


SuperH cores may be simpler and scale a little lower than 68000 cores, while there is more modern SuperH development that can be used without changes. If Jeff had had accurate CPU metrics earlier, would he have chosen the 68k to revive instead? Maybe? He may have been able to license ColdFire as another option, where he would receive ColdFire cores including the Verilog sources but would not be able to distribute them. He co-developed uClinux and supports the open software and hardware philosophy though.

cdimauro Quote:

BTW, there's space for what I call "data density". In fact, a computer architecture can also save memory accesses made for data. Immediates embedded in instructions allow this, and they are considered part of the code density domain. However, more can be done.


Absolutely. As important as code density is, it is much better for immediate and displacement data to be in the more predictable, prefetch-friendly code stream. SuperH is just about the worst case for "data density" because its 16-bit fixed-length encoding only allows tiny immediates and displacements in each instruction. A fixed-length 32-bit encoding is a large improvement, but a variable-length encoding with 8, 16 and 32-bit (and 64-bit for 64-bit architectures) immediates and displacements encoded with each instruction gives a significant performance advantage that avoids less predictable data loads and dependencies between instructions. Variable-length RISC encodings like Thumb-2 and RISC-V RVC continue to miss this advantage, while the 68k and x86(-64) have it. ColdFire's "variable length RISC" encoding has some of this advantage, but it was reduced by castration to a 6-byte maximum instruction length. Smaller instructions have advantages for small cores, but Mitch Alsup was working on a "RISC" load/store ISA with a variable-length encoding (VLE) like the 68k's 16-bit VLE, enhanced to support 64-bit immediates and displacements. Some people want to castrate CISC into RISC and others know better.
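
To illustrate what such a VLE buys (a toy encoding I made up for illustration, not Alsup's ISA or the 68k's actual opcodes): a 16-bit operation word carries a size field that selects how much immediate data follows, so small constants cost only 2 extra bytes while the full 64-bit case remains encodable.

Code:

#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Toy 68k-style variable-length encoding: a 16-bit opword with a 2-bit
   size field, followed by a 16-, 32- or 64-bit immediate. */
enum { IMM16 = 0, IMM32 = 1, IMM64 = 2 };

static size_t encode_addi(uint8_t *buf, int reg, int64_t imm)
{
    int size = (imm == (int16_t)imm) ? IMM16
             : (imm == (int32_t)imm) ? IMM32 : IMM64;
    uint16_t opword = (uint16_t)(0xD000 | (reg << 4) | size); /* made-up opcode */
    size_t immbytes = (size == IMM16) ? 2 : (size == IMM32) ? 4 : 8;
    memcpy(buf, &opword, 2);
    memcpy(buf + 2, &imm, immbytes); /* little-endian host assumed */
    return 2 + immbytes;             /* total instruction length */
}

int main(void)
{
    uint8_t code[10];
    printf("addi #100    -> %zu bytes\n", encode_addi(code, 1, 100));
    printf("addi #100000 -> %zu bytes\n", encode_addi(code, 1, 100000));
    printf("addi #2^40   -> %zu bytes\n", encode_addi(code, 1, 1LL << 40));
    return 0;
}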

kolla Quote:

As you of course know, SPARC isn't entirely dead just yet; Fujitsu has planned end-of-sale of their M12 in 2029 and end-of-support in 2034. They are migrating to ARM.

(Also, just like disco, MIPS didn't die, it just went underground... and evolved into LoongArch)


A dead ISA is when there are no more developed cores using the ISA and no new silicon. The F-35 is still produced and may still have PPC CPU SoCs, if they have not already been upgraded to ARM or RISC-V SoCs like NASA's space-hardware upgrade to RISC-V. The 68k was popular for military equipment before development stopped, forcing the switch to PPC. Leading-edge tech is only 2-3 die shrinks away from being outdated, while less important tech takes longer to bury the corpse.

LoongArch may be considered a living MIPS successor, but it is not MIPS, because the MIPS IP owner endorsed the switch to RISC-V. I expect RISC-V will gain many LoongArch, MIPS and SPARC customers. LoongArch was a hedge against a Chinese chip embargo, but RISC-V open hardware reduces, if not eliminates, the need for it.

Last edited by matthey on 03-Dec-2024 at 10:17 PM.

Hammer 
Re: what is wrong with 68k
Posted on 4-Dec-2024 3:26:31
#245
Elite Member
Joined: 9-Mar-2003
Posts: 6058
From: Australia

@matthey

Quote:

Desktop 68k

68000: excellent, providing 32-bit datatypes and address space with easy-to-decode 16-bit ops
68010: minor improvement fixing minor mistakes, with some compatibility loss

Technology is useless without good management.

Motorola shouldn't have encouraged dead-end 68000 compatibility when the 68010 had the corrected 68K instruction set; e.g. Motorola should have priced out the 68000 with a lower-priced 68010.

A lower-priced 68010 is a better foundation leading into a full 32-bit 68020.

When Commodore-Amiga and Sega released their respective machines, the 68010 should have been the baseline.

Quote:

@matthey

68020: mediocre, with solid enhancements like long branches, a barrel shifter, scaled addressing modes and 32/64-bit MUL/DIV, but also double memory indirect addressing modes and rarely used language-specific instructions

68020/68030 MUL instructions are very slow, which gave MIPS a competitive advantage.

Silicon Graphics (SGI) adopted the R2000 MIPS architecture for its workstations, having noted that the Motorola 68000 series of processors was "at the end of its price-performance curve".
https://archive.org/details/sim_electronic-business_1986-11-15_12_22/page/110/mode/2up


Hammer 
Re: what is wrong with 68k
Posted on 4-Dec-2024 4:13:00
#246
Elite Member
Joined: 9-Mar-2003
Posts: 6058
From: Australia

@minator

Quote:
x86 still uses variable-length instructions and pays a price in complexity for it. Apple's and ARM's latest designs can execute 10 instructions per cycle; the 68060 had a hard time doing 2.


Zen 5 has 8 per-cycle decoder dispatches and 12 per-cycle op-code dispatches, but the bottleneck is Zen 4's recycled I/O chip. Zen 5's potential can be seen via the Ryzen 7 9800X3D vs Ryzen 7 9700X difference.

AMD's desktop Zen 5 CPUs must move into a 256-bit bus memory platform e.g. Strix Halo.

For Cinebench 2024 MT (mostly scalar FP workloads)
https://www.cgdirector.com/cinebench-2024-scores/
Apple M3 Max (16 cores) = 1677
Apple M3 Max (14 cores) = 1436

https://www.techpowerup.com/review/amd-ryzen-7-9800x3d/9.html
Ryzen 7 9800X3D PBO Max (8 cores) = 1405
Ryzen 9 3950X (16 cores) = 1325
Ryzen 7 9700X (8 cores) = 1208

The major part of the bottleneck problem is memory bandwidth.

On Blender
Ryzen 7 9800X3D (8 cores) almost rivals Ryzen 9 7900 (12 cores) and Ryzen 9 5950X (16 cores).

A single-thread benchmark doesn't show single-core performance with SMT.


cdimauro 
Re: what is wrong with 68k
Posted on 4-Dec-2024 5:12:22
#247
Elite Member
Joined: 29-Oct-2012
Posts: 4127
From: Germany

@kolla

Quote:

kolla wrote:
@matthey

As you of course know, SPARC isn't entirely dead just yet; Fujitsu has planned end-of-sale of their M12 in 2029 and end-of-support in 2034. They are migrating to ARM.

They have already done it.

BTW, Fujitsu was the first processor vendor to adopt ARM's new vector extension (SVE), for its supercomputer (which led the TOP500 for some time).

SPARC is probably just maintained because of existing long-term contracts. Like it was for Intel with Itanium (a not-formally-dead architecture which was supported for several years before the effective / official EoL).
Quote:
(Also, just like disco, MIPS didn't die, it just went underground... and evolved into LoongArch)

MIPS has already moved to something else. I don't recall now if it was ARM or RISC-V. Whatever: they've abandoned their own architecture.

LoongArch is still using MIPS only because, when they started the project, it was the simplest architecture which had mainstream support / toolchains. It would have been RISC-V if it had already existed AND had decent support.

In short: almost all historical architectures are dead.

cdimauro 
Re: what is wrong with 68k
Posted on 4-Dec-2024 5:50:11
#248
Elite Member
Joined: 29-Oct-2012
Posts: 4127
From: Germany

@matthey

Quote:

matthey wrote:

cdimauro Quote:

I don't get why the J-Core team didn't revive the 68000 core instead of the SuperH, since there are no licensing issues.

They waited for the SuperH patents to expire before working on J-Core, whereas Motorola had already offered everything for free.

And the 68k had much better support and tools, was more widespread/used, easier to program, etc., as you've also reported.


SuperH cores may be simpler and scale a little lower than 68000 cores, while there is more modern SuperH development that can be used without changes. If Jeff had had accurate CPU metrics earlier, would he have chosen the 68k to revive instead? Maybe? He may have been able to license ColdFire as another option, where he would receive ColdFire cores including the Verilog sources but would not be able to distribute them. He co-developed uClinux and supports the open software and hardware philosophy though.

I don't think that code density was the primary factor for adopting SuperH.

Jeff looks like an open source aficionado, and probably that's the major reason why he wanted to start with something which has no patents or other constraints that go against his philosophy.
Quote:
cdimauro Quote:

BTW, there's space for what I call "data density". In fact, a computer architecture can also save memory accesses made for data. Immediates embedded in instructions allow this, and they are considered part of the code density domain. However, more can be done.


Absolutely. As important as code density is, it is much better for immediate and displacement data to be in the more predictable, prefetch-friendly code stream.

Indeed. You can also have "compressed" immediates and displacements as part of data density, which helps code density a lot as well.

In general, having the possibility to use/define data like that helps reduce the space needed in the data section/segment and the corresponding loads, gaining a general benefit in memory usage, saved bandwidth, fewer executed instructions and less memory access contention: a win-win situation.

Other possibilities in the data density domain (which provide similar benefits) are listed below, with a small sketch of the compressed-pointers idea after the list:
- compressed function pointers tables;
- compressed VMTs;
- compressed switch/case structures;
- compressed pointers;
- compressed immediate known constants (especially on FP side: pi, log2, etc.);
- broadcasting of data (not only on SIMD / Vector: it might be used for loading the same value on multiple registers, for example);
- sparse data.
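
As a minimal C sketch of the compressed-pointers idea (my illustration, with made-up names; the same base-plus-offset trick applies to function-pointer tables and VMTs): objects live in a single pool and links are 16-bit indices instead of full 64-bit pointers, expanded to real addresses only on use.

Code:

#include <stdio.h>
#include <stdint.h>

#define POOL_MAX 65536

typedef struct {
    int      value;
    uint16_t next;  /* compressed "pointer": index into pool, 0 = null */
} node;

static node pool[POOL_MAX];
static uint16_t pool_top = 1; /* slot 0 reserved as the null index */

static uint16_t node_new(int value, uint16_t next)
{
    uint16_t i = pool_top++;
    pool[i].value = value;
    pool[i].next  = next;
    return i;
}

int main(void)
{
    /* Build the list 3 -> 2 -> 1 with 2-byte links instead of 8-byte
       pointers; each link is expanded (pool base + index) only on use. */
    uint16_t head = node_new(3, node_new(2, node_new(1, 0)));
    for (uint16_t i = head; i != 0; i = pool[i].next)
        printf("%d\n", pool[i].value);
    return 0;
}
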
Quote:
SuperH is just about the worst case for "data density" because its 16-bit fixed-length encoding only allows tiny immediates and displacements in each instruction. A fixed-length 32-bit encoding is a large improvement, but a variable-length encoding with 8, 16 and 32-bit (and 64-bit for 64-bit architectures) immediates and displacements encoded with each instruction gives a significant performance advantage that avoids less predictable data loads and dependencies between instructions.

The only problem is that 64-bit immediates / displacements might be too big to handle for the value they deliver (e.g.: not many constants require > 32 bits to be represented).

I mean, too big to handle because they might cause problems with the longest instruction length that can be handled and with fitting within a code cache line.
Quote:
Smaller instructions have advantages for small cores, but Mitch Alsup was working on a "RISC" load/store ISA with a variable-length encoding (VLE) like the 68k's 16-bit VLE, enhanced to support 64-bit immediates and displacements.

Do you mean his 66000 architecture?
Quote:
Some people want to castrate CISC into RISC and others know better.

Every architecture has its unique strong and weak points. Sometimes removing some feature from an architecture isn't properly a castration, but rather a needed reorganization of the ISA that removes some legacy baggage. That's what Intel is doing with the already discussed X86S. I did the same with my previous NEx64T, where I've "re -> moved" almost all x86/x64 legacy.

That's because old architectures were designed with different principles & ideas, which don't apply anymore or pose problems.

For example, having the possibility of freely defining/using the size of a data processing instruction (e.g.: adding integers in byte/word/longword/quadword) gives great value in terms of flexibility, code density and performance.
However, partial register updates are a nightmare to handle for superpipelined microarchitectures. So, it would have been better to extend the result to the entire register size, to avoid such issues (that's what x64 has done for 32-bit operations: all results are zero-extended to 64-bit; see the sketch below).
Another problem is that your ALU becomes much more complicated, because it has to support & implement four different sizes for the same operation (primarily because flags are generated differently, but the same operation with a different size could also give a different result).
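
The x64 rule can be shown even in plain C (a sketch of the semantics, not of any microarchitecture):

Code:

#include <stdio.h>
#include <stdint.h>

/* x64 semantics in C terms: a 32-bit operation produces a result that is
   zero-extended into the full 64-bit register, so no stale upper bits
   survive and no partial-register dependency is created. */
int main(void)
{
    uint64_t reg = 0xFFFFFFFFFFFFFFFFULL; /* stale 64-bit contents */
    uint32_t lo  = (uint32_t)reg;         /* the 32-bit view */
    reg = lo + 1; /* 32-bit add wraps to 0, then zero-extends: reg == 0 */
    printf("%016llx\n", (unsigned long long)reg); /* 0000000000000000 */
    return 0;
}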

Designing a completely new architecture with all this hindsight would probably adopt different solutions which better fit modern needs, or which are simply better.

The main problem with the current CISC architectures is that... they are very old, and have a lot of legacy baggage & design decisions of the time, which limited or crippled them in some ways. A review & clean-up might be required, but not much can be changed because of backward compatibility. So, not an easy situation for them.

bhabbott 
Re: what is wrong with 68k
Posted on 4-Dec-2024 5:52:40
#249
Regular Member
Joined: 6-Jun-2018
Posts: 488
From: Aotearoa

@matthey

Quote:

matthey wrote:
cdimauro Quote:

The problem with Motorola is that they never understood how important backward compatibility was.


Backward compatibility is less important for embedded hardware...

Not just less important, totally unimportant as far as the CPU is concerned. But peripheral compatibility is very important.

Quote:

68000: excellent, providing 32-bit datatypes and address space with easy-to-decode 16-bit ops
68010: minor improvement fixing minor mistakes, with some compatibility loss
68020: mediocre, with solid enhancements like long branches, a barrel shifter, scaled addressing modes and 32/64-bit MUL/DIV, but also double memory indirect addressing modes and rarely used language-specific instructions

The 68020 was more than just 'mediocre'. Unfortunately the dominant and therefore 'standard' CPU in the Amiga was the 68000 (this changed in 1992 with AGA, since all AGA machines had a 68020 or better). But even when limited to plain 68000 instructions the 68020 was much better. The only downside was cost.

The 68030's internal MMU was nice, but again since 68000 was the standard the MMU wouldn't be used to the fullest. Many accelerator cards used the cheaper 68EC030 with no impact on compatibility with earlier CPUs.

Quote:
Despite the castrations, 68k backward compatibility in user mode was good, but support and avoiding performance bottlenecks became more difficult. As transistor budgets increased, some of the support could have been brought back, but I admit it was tempting to slim the 68060 to deepen the pipeline, double the caches and reduce the price at the time.

Castrations were the beginning of the end. No problem for embedded where every device has its own custom firmware, but a problem for desktop computers expected to be compatible with earlier models. 68060 caused significant problems for the Amiga that made it disappointing for users. You paid a huge amount of money, only to have it not perform as well as expected. The problem was exacerbated by the lack of Commodore as a 'gatekeeper' to ensure that compatibility was maintained.

But the worst part is that castrating was the wrong strategy for improving performance, since it could only be taken so far. The 68060 had better integer performance than the Pentium, but worse FPU performance - at a time when floating point performance was becoming crucial. The real killer was Intel being able to bring out faster and faster chips while Motorola couldn't. They should have concentrated on getting the silicon to go faster rather than cutting down the CPU.

matthey 
Re: what is wrong with 68k
Posted on 4-Dec-2024 19:28:38
#250
Elite Member
Joined: 14-Mar-2007
Posts: 2393
From: Kansas

Hammer Quote:

Technology is useless without good management.

Motorola shouldn't have encouraged dead-end 68000 compatibility when the 68010 had the corrected 68K instruction set; e.g. Motorola should have priced out the 68000 with a lower-priced 68010.

A lower-priced 68010 is a better foundation leading into a full 32-bit 68020.

When Commodore-Amiga and Sega released their respective machines, the 68010 should have been the baseline.


I tend to agree, but the 68000 was released in 1979 and the 68010 in 1982, closer to the 1984 68020. The 68010 needed to be released earlier and aggressively priced to replace the 68000. The 68010 was pin compatible with the 68000 and transistor counts were close enough that there was minimal cost difference other than the development cost of the 68010 improvements.

Hammer Quote:

68020/68030 MUL instructions are very slow, which gave MIPS a competitive advantage.

Silicon Graphics (SGI) adopted the R2000 MIPS architecture for its workstations, having noted that the Motorola 68000 series of processors was "at the end of its price-performance curve".


The 68020 used transistors for some niceties absent on most RISC cores, as well as wasting transistors on some rarely used features, which left fewer transistors to optimize the MUL/DIV instructions. The 68000 lacked enough transistors for a barrel shifter due to providing niceties compared to some competitors. The lack of a barrel shifter in the 68000 likely impacted performance more than the poorly optimized MUL/DIV instructions in the 68020. It would have been nice if the 68010 had been upgraded with a barrel shifter and the 68030 had used more transistors to optimize at least the 68020 MUL instructions. Then major development issues began with the 68040. It looks like management issues were more of a problem than 68k complexity.
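
To see why the barrel shifter matters, here is a toy cycle model (the 6 + 2n figure is roughly the 68000's documented register-shift timing; the barrel figure is an idealized assumption, not a datasheet value):

Code:

#include <stdio.h>

/* Without a barrel shifter the 68000 shifts one bit per iteration
   (roughly 6 + 2n cycles for a register shift of n bits); a barrel
   shifter resolves any shift count in a single pass. These numbers
   are a model for illustration, not a datasheet table. */
static int cycles_iterative(int n) { return 6 + 2 * n; }
static int cycles_barrel(int n)    { (void)n; return 2; }

int main(void)
{
    for (int n = 1; n <= 31; n += 10)
        printf("shift by %2d: %2d cycles iterative vs %d with barrel\n",
               n, cycles_iterative(n), cycles_barrel(n));
    return 0;
}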

cdimauro Quote:

I don't think that code density was the primary factor for adopting SuperH.

Jeff looks like an open source aficionado, and probably that's the major reason why he wanted to start with something which has no patents or other constraints that go against his philosophy.


Code density was likely the primary performance metric used in the selection of SuperH. RISC-V may not have been available when Jeff began researching SuperH, and it is not shown in the slideshow you linked. As I recall, RISC-V was available by the time I talked to Jeff, but he was not a fan, perhaps because SuperH is more similar to the 68k, on which it is obviously based despite being converted to a load/store architecture. There are plenty of people who do not like the RISC-V ISA. Mitch Alsup did not like RISC-V either. Open source software and hardware are very important to Jeff, which made cooperation to use Gunnar's precious unlikely.

cdimauro Quote:

The only problem is that 64-bit immediates / displacements might be too big to handle for the value they deliver (e.g.: not many constants require > 32 bits to be represented).

I mean, too big to handle because they might cause problems with the longest instruction length that can be handled and with fitting within a code cache line.


No doubt large instructions are more difficult to process for small and low-power cores, as I already mentioned in this thread and as you describe above. However, I believe there is a performance advantage to the larger instructions with immediate/displacement data when instruction buffers are large enough. This was already an issue on the 68k, where the 68060's instruction buffer limited superscalar execution to instructions of 6 bytes or less, reducing superscalar capability for many powerful immediate and addressing-mode combinations and for all FPU immediate use. ColdFire castrated all 68k instructions to 6 bytes to simplify and reduce the decoupled instruction buffer between the IFP and OEPs, which even scaled-down ColdFire cores used. The ColdFire PRM describes how to handle the loss of FPU immediate capabilities due to castrating instructions to 6 bytes.

https://www.nxp.com/docs/en/reference-manual/CFPRM.pdf 7-9 Quote:

The 68K FPU allows floating-point instructions to directly specify immediate values; the ColdFire FPU does not support these types of immediate constants. It is recommended that floating-point immediate values be moved into a table of constants that can be referenced using PC-relative addressing or as an offset from another address pointer. See Table 7-9. Note for ColdFire that if a PC-relative effective address is specified for an FPU instruction, the PC always holds the address of the 16-bit operation word plus 2.


ColdFire's instruction castration to 6 bytes removes FPU immediates from the predictable-to-prefetch code stream and requires more data cache accesses. Handling 64-bit integer immediates is not much more difficult than handling 64-bit FPU immediates and is better than the alternatives. The 68060 scales low enough for a low-power superscalar core while still supporting large immediates. The x86 competition moved in the other direction, toward wider fixed-length instruction slots in the instruction buffer to support larger immediates/displacements, which increased performance. The 68060 could do the same to increase superscalar parallel execution, and the 68k's orthogonality would provide a superior in-order design that could scale lower than x86.
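
In C terms the PRM's recommendation amounts to something like this (an illustrative sketch; the names are made up):

Code:

#include <stdio.h>

/* ColdFire-style workaround sketched in C: the FP constant lives in a
   constant pool referenced through memory (PC-relative in practice),
   because the ColdFire FPU cannot take instruction-stream immediates.
   A 68k FPU could encode the same literal directly in the code stream. */
static const double k_pool[] = { 3.14159265358979, 0.5 };

static double area(double r)
{
    return k_pool[0] * r * r; /* compiled as a data load from the pool */
}

int main(void)
{
    printf("%f\n", area(2.0));
    return 0;
}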

cdimauro Quote:

Do you mean his 66000 architecture?


I believe so. Mitch Alsup was involved with 68k and 88k development, perhaps influencing the name. As I recall, he also developed/architected x86 (AMD) and SPARC CPU cores, a Samsung GPU core and the Libre SoC project vector unit.

bhabbott Quote:

Not just less important, totally unimportant as far as the CPU is concerned. But peripheral compatibility is very important.


Backward compatibility is less important for embedded CPUs, but it is still important. The A1222+ PPC CPU lacking a standard FPU made PPC support difficult, and ColdFire made 68k support difficult. The CPU successor of the A1222+ CPU added back the standard PPC FPU, and ColdFire added back more than a few 68k instructions.

bhabbott Quote:

The 68020 was more than just 'mediocre'. Unfortunately the dominant and therefore 'standard' CPU in the Amiga was the 68000 (this changed in 1992 with AGA, since all AGA machines had a 68020 or better). But even when limited to plain 68000 instructions the 68020 was much better. The only downside was cost.


I was talking more about the 68020 ISA. The 68020 core design was above average for 1984.

bhabbott Quote:

Castrations were the beginning of the end. No problem for embedded where every device has its own custom firmware, but a problem for desktop computers expected to be compatible with earlier models. 68060 caused significant problems for the Amiga that made it disappointing for users. You paid a huge amount of money, only to have it not perform as well as expected. The problem was exacerbated by the lack of Commodore as a 'gatekeeper' to ensure that compatibility was maintained.

But the worst part is that castrating was the wrong strategy for improving performance, since it could only be taken so far. The 68060 had better integer performance than the Pentium, but worse FPU performance - at a time when floating point performance was becoming crucial. The real killer was Intel being able to bring out faster and faster chips while Motorola couldn't. They should have concentrated on getting the silicon to go faster rather than cutting down the CPU.


The 68060 had good user mode backward compatibility and could have acceptable FPU performance but, yes, the castrations made support difficult and performance more challenging to realize. This is why it became a more desirable upgrade later rather than when it was released.

Hammer 
Re: what is wrong with 68k
Posted on 5-Dec-2024 1:52:05
#251
Elite Member
Joined: 9-Mar-2003
Posts: 6058
From: Australia

@matthey

Quote:

I tend to agree, but the 68000 was released in 1979 and the 68010 in 1982, closer to the 1984 68020. The 68010 needed to be released earlier and aggressively priced to replace the 68000. The 68010 was pin compatible with the 68000 and transistor counts were close enough that there was minimal cost difference other than the development cost of the 68010 improvements.

If a corrective instruction set is released, it's best for new products to have the corrected instruction set.

Motorola failed to manage its customers' instruction set deployment; e.g. the 68010 should have priced out the 68000.


@matthey
Quote:

The 68020 used transistors for some niceties absent on most RISC cores as well as wasting transistors on some rarely used features which left fewer transistors to optimize the MUL/DIV instructions. The 68000 lacked enough transistors for a barrel shifter due to providing niceties compared to some competitors. The lack of a barrel shifter in the 68000 likely impacted performance more than the poorly optimized MUL/DIV instructions in the 68020. It would have been nice if the 68010 had been upgraded to a barrel shifter and the 68030 had used more transistors to optimize at least the 68020 MUL instructions. Then major development issues began with the 68040. It looks like management issues were more of a problem than 68k complexity.

It's a business decision. Motorola's fast fixed-point MUL was in its DSP products, i.e. pushing the customer to purchase two chips, e.g.
68020 + 68851 MMU, which was later reduced to a single 68030;
68030 + 56000 DSP with its fast 24-bit MUL, e.g. the Atari Falcon.

The 68EC040 had relatively competitive MUL performance, but the asking price wasn't competitive, e.g. the CL-450 MIPS-X SoC example.

Dave Haynie was willing to design new glue chips for 68EC040-25-based mid-range Amigas that would have been desirable for strong-currency markets such as the USA, Canada, and Australia. The problem was Commodore management. This was about getting the Amiga ready to face 486SX gaming PCs.

With the 68030, 68LC040, and 68040, Motorola thought it had Intel's x86 PC clone business, when it was actually competing directly against discounted RISC competitors.

If Motorola didn't respect 68K backward compatibility, then its platform customers would seek the best bang-per-buck CPU solution without 68K backward compatibility.

The pricing policy is on Motorola management.

Motorola burned 68K advantages by allowing Intel to catch up. This is a Motorola management problem.

Technology is nothing without skilled personnel.

Motorola didn't survive against high-performance SoC or APU competitors.


Last edited by Hammer on 05-Dec-2024 at 02:00 AM.
Last edited by Hammer on 05-Dec-2024 at 01:57 AM.


Hammer 
Re: what is wrong with 68k
Posted on 5-Dec-2024 2:18:14
#252
Elite Member
Joined: 9-Mar-2003
Posts: 6058
From: Australia

@bhabbott
Quote:

Castrations were the beginning of the end. No problem for embedded where every device has its own custom firmware, but a problem for desktop computers expected to be compatible with earlier models. 68060 caused significant problems for the Amiga that made it disappointing for users. You paid a huge amount of money, only to have it not perform as well as expected. The problem was exacerbated by the lack of Commodore as a 'gatekeeper' to ensure that compatibility was maintained.

But the worst part is that castrating was the wrong strategy for improving performance, since it could only be taken so far. The 68060 had better integer performance than the Pentium, but worse FPU performance - at a time when floating point performance was becoming crucial. The real killer was Intel being able to bring out faster and faster chips while Motorola couldn't. They should have concentrated on getting the silicon to go faster rather than cutting down the CPU.

FYI, the Pentium FPU can process fixed-point integers.

Pentium has the 64-bit memory bandwidth to back dual integer streams.

The Cyrix 6x86's superior integer (dual ALU) advantage is backed by the Pentium's 64-bit bus, which is not the case with the 68060's 32-bit bus.

The 68060 didn't use the PowerPC/88000's 64-bit 60x bus.

The 64-bit 60x bus could have supported either a 64-bit-bus-equipped 68060 or a PowerPC 601.

Last edited by Hammer on 05-Dec-2024 at 02:20 AM.
Last edited by Hammer on 05-Dec-2024 at 02:19 AM.


cdimauro 
Re: what is wrong with 68k
Posted on 5-Dec-2024 6:03:45
#253
Elite Member
Joined: 29-Oct-2012
Posts: 4127
From: Germany

@matthey

Quote:

matthey wrote:

The 68020 used transistors for some niceties absent on most RISC cores, as well as wasting transistors on some rarely used features, which left fewer transistors to optimize the MUL/DIV instructions. The 68000 lacked enough transistors for a barrel shifter due to providing niceties compared to some competitors. The lack of a barrel shifter in the 68000 likely impacted performance more than the poorly optimized MUL/DIV instructions in the 68020. It would have been nice if the 68010 had been upgraded with a barrel shifter and the 68030 had used more transistors to optimize at least the 68020 MUL instructions. Then major development issues began with the 68040. It looks like management issues were more of a problem than 68k complexity.

The performance of the MUL/DIV instructions was good enough when the 68020 was introduced. Also, those instructions aren't used that much (especially DIV).
Quote:
cdimauro Quote:

I don't think that code density was the primary factor for adopting SuperH.

Jeff looks like an open source aficionado, and probably that's the major reason why he wanted to start with something which has no patents or other constraints that go against his philosophy.


Code density was likely the primary performance metric used in the selection of SuperH. RISC-V may not have been available when Jeff began researching SuperH, and it is not shown in the slideshow you linked. As I recall, RISC-V was available by the time I talked to Jeff

Very likely, because it was introduced in 2011.
Quote:
but he was not a fan, perhaps because SuperH is more similar to the 68k, on which it is obviously based despite being converted to a load/store architecture. There are plenty of people who do not like the RISC-V ISA. Mitch Alsup did not like RISC-V either.

Me neither.

Interesting. Do you have any post from Mitch where he explains some technical reasons why he doesn't like it?
Quote:
cdimauro Quote:

The only problem is that 64-bit immediates / displacements might be too big to handle for the value they deliver (e.g.: not many constants require > 32 bits to be represented).

I mean, too big to handle because they might cause problems with the longest instruction length that can be handled and with fitting within a code cache line.


No doubt large instructions are more difficult to process for small and low-power cores, as I already mentioned in this thread and as you describe above. However, I believe there is a performance advantage to the larger instructions with immediate/displacement data when instruction buffers are large enough. This was already an issue on the 68k, where the 68060's instruction buffer limited superscalar execution to instructions of 6 bytes or less, reducing superscalar capability for many powerful immediate and addressing-mode combinations and for all FPU immediate use.

There's a clear advantage, no doubt about that: once you're able to pack immediates and displacements/offsets into the same instruction, you've removed instructions to be executed and accesses to memory, providing an overall benefit to both performance and power usage.

My only concern is allowing 64-bit everywhere, producing very long instructions which can cause implementation problems (requiring a much larger instruction buffer, as you stated) for questionable benefits.

While 64-bit immediates are fine and welcome (I also support them on both NEx64T and my new architecture, but only for certain instruction types), 64-bit absolute addresses/displacements/offsets aren't used enough to justify the additional complication (because you can have 64 bits for an immediate + 64 bits for abs/disp/offs = 16 bytes just for them -> very long instructions to handle).

That's the only critical point for me.
Quote:
ColdFire castrated all 68k instructions to 6 bytes to simplify and reduce the decoupled instruction buffer between the IFP and OEPs, which even scaled-down ColdFire cores used. The ColdFire PRM describes how to handle the loss of FPU immediate capabilities due to castrating instructions to 6 bytes.

https://www.nxp.com/docs/en/reference-manual/CFPRM.pdf 7-9 Quote:

The 68K FPU allows floating-point instructions to directly specify immediate values; the ColdFire FPU does not support these types of immediate constants. It is recommended that floating-point immediate values be moved into a table of constants that can be referenced using PC-relative addressing or as an offset from another address pointer. See Table 7-9. Note for ColdFire that if a PC-relative effective address is specified for an FPU instruction, the PC always holds the address of the 16-bit operation word plus 2.


ColdFire's instruction castration to 6 bytes removes FPU immediates from the predictable-to-prefetch code stream and requires more data cache accesses. Handling 64-bit integer immediates is not much more difficult than handling 64-bit FPU immediates and is better than the alternatives.

Absolutely. It was a shame. However, with such a 6-byte max length, nothing else could have been done (FPU instructions are already 32-bit; the maximum is 16-bit immediates).
Quote:
The 68060 scales low enough for a low-power superscalar core while still supporting large immediates. The x86 competition moved in the other direction, toward wider fixed-length instruction slots in the instruction buffer to support larger immediates/displacements, which increased performance.

Not exactly. x64 allows 64-bit absolute addresses only with the two special MOV Abs64,RAX and MOV RAX,Abs64 instructions. Similarly for 64-bit immediates: they are only allowed with the MOV RAX,Imm64 instruction.

That's because there's an upper 15-byte limit on the length of an instruction. Plus, 64-bit absolute addresses are very rare (only recently, with APX, did Intel add a JMP Abs64 instruction).
Quote:
The 68060 could do the same to increase superscalar parallel execution, and the 68k's orthogonality would provide a superior in-order design that could scale lower than x86.

Indeed.
Quote:
cdimauro Quote:

Do you mean his 66000 architecture?


I believe so. Mitch Alsup was involved with 68k and 88k development, perhaps influencing the name. As I recall, he also developed/architected x86 (AMD) and SPARC CPU cores, a Samsung GPU core and the Libre SoC project vector unit.

Oh, nice. Now I know why the Libre SoC vector unit is so different from what I've seen, and it looks similar to the 66000 one (which has... no vector unit, but allows doing vector processing).

bhabbott 
Re: what is wrong with 68k
Posted on 5-Dec-2024 10:21:53
#254
Regular Member
Joined: 6-Jun-2018
Posts: 488
From: Aotearoa

@Hammer

Quote:

Hammer wrote:

Cyrix 6x86's superior integer (dual ALU) advantage is backed by Pentium's 64-bit bus, which is

Funny, ISTR that the Cyrix 6x86 was a dog compared to Pentium. Am I wrong?

Quote:
not the case for 68060's 32-bit bus.

Yes, the Pentium's 64-bit bus did give it an advantage - in being able to use lower-speed memory without slowing down. But the CPU itself was still only 32-bit, right? Perhaps the wider bus was what allowed the FPU to go faster.

IIRC some Amiga accelerator cards used interleaved RAM to boost memory speed.

Quote:
The 68060 didn't use PowerPC/88000's 64-bit 60x bus.

Amazing, I learn more every day! OTOH the 68060 didn't need twice as many instructions to do anything.

bhabbott 
Re: what is wrong with 68k
Posted on 5-Dec-2024 11:29:06
#255
Regular Member
Joined: 6-Jun-2018
Posts: 488
From: Aotearoa

@Hammer

Quote:

Hammer wrote:

Motorola burned 68K advantages by allowing Intel to catch up. This is a Motorola management problem.

Technology is nothing without skilled personnel.


And how much did Intel burn on Itanium?

Quote:
after a decade of development, Itanium's performance was disappointing compared to better-established RISC and CISC processors. Emulation to run existing x86 applications and operating systems was particularly poor.

Motorola was always behind Intel. Perhaps not in architecture, but in production. Intel was simply better at making high-density chips, starting from the early days when they made the first DRAM chip in 1970.

But that doesn't mean Motorola was incompetent. Somebody had to be the best, and that just happened to be Intel. But Motorola also made lots of other stuff, including analog ICs and discrete transistors. So their R&D was spread thinner, whereas Intel could concentrate on CPUs. This was even more true after the IBM PC was introduced, as Intel then had a big incentive to make more powerful CPUs that suited the PC.

All this talk about how Motorola should have been better is silly. There was only room for one architecture in the PC market, and the IBM PC standard included an Intel CPU, so anything different would lose. Luckily for Motorola, home computers and gaming consoles were different markets that didn't have to follow the IBM PC standard. But they were mostly low-end and therefore didn't incentivize developing high-end CPUs. The other market - workstations - did, but it was a small and shrinking market not suited to high-volume production.

If Motorola had been a bit quicker out of the gate with the 68000, things might have been a lot different. Had IBM chosen the 68000 for the PC, Motorola would be the kingpin and Intel would have struggled. But that's not likely, because IBM didn't want to make the PC too good or it would compete against their own products. That's why they chose the 'toy' 8088 CPU over the 8086 - too afraid that a 16-bit bus would make the machine too 'professional'. Then when PC sales exceeded their wildest expectations they changed their minds, but were locked into x86.

BTW in 1982 IBM did make a 68000-based computer, the System 9000, originally intended as a laboratory computer. It had an 8 MHz 68000, a multitasking real-time OS in 128k of ROM, 128k RAM expandable to 5 MB, the Motorola VME system bus, 80 × 30 text and 768 × 480 graphics, various I/O ports for communicating with laboratory instruments, and a sophisticated version of BASIC for programming. For storage it used a 5.25" or 8" floppy drive and up to four 10MB hard drives.

That gives you some idea of what the PC might have been like with a 68000 CPU.

Last edited by bhabbott on 05-Dec-2024 at 11:30 AM.

minator 
Re: what is wrong with 68k
Posted on 5-Dec-2024 16:55:43
#256 ]
Super Member
Joined: 23-Mar-2004
Posts: 1007
From: Cambridge

@cdimauro

Quote:
But ARM is clearly a CISC member. No doubt on that.


If you go by the definition John Mashey describes in the post I linked earlier, Arm isn't pure RISC, but it's very clearly not CISC.

Quote:
Thumb was created by ARM EXPLICITLY AND SOLELY for getting a much better code density, because the original ARM architecture sucked very badly at that.


Thumb was created for Arm to get a deal with a customer in the mobile phone space, back when memory in phones was very constrained.

Quote:
That was the only reason and also the reason why ARM had a HUGE success on the embedded market.


Arm were probably successful for many different reasons, but the biggest was low power. It was low power from the beginning because Acorn didn't want to use a ceramic chip package; plastic was cheaper, but could only withstand a certain amount of heat. Arm the company exists because Apple wanted a low-power chip for the Newton. ARM was spun out as a separate company because Apple didn't want to buy the chip from a competitor.

Quote:
The original ARM was quickly surpassed by the new Thumb first, and especially Thumb-2 after, precisely for this reason.


The first chip with Thumb was in the ARM7 series, 7 years after the original.
It compromised performance, so Thumb-2 brought back some 32-bit instructions to speed things up a bit. The original ARM instructions were never removed.

Its successor, T32, is used in the M-series chips, but I believe the 32-bit modes (and Thumb variants) have been completely removed from the latest 64-bit Arm chips.

_________________
Whyzzat?

matthey 
Re: what is wrong with 68k
Posted on 5-Dec-2024 22:59:34
#257 ]
Elite Member
Joined: 14-Mar-2007
Posts: 2393
From: Kansas

Hammer Quote:

If a corrective instruction set is released, it's best for new products to have the corrected instruction set.

Motorola failed to manage their customers' instruction set deployment, e.g. the 68010 should have priced out the 68000.


Considering how long the 68000 had already been out, the best way to handle the 68010 upgrade may have been to use a 68000 mode on startup but allow switching to 68010 mode. A pin at the opposite high/low voltage on power up from the 68000 spec may have been able to indicate 68010 mode too. A later 68000 CPU design had a selectable 16 or 8-bit data bus which could have avoided the 1982 68008 development if incorporated in the 68010 (the 68008 saved pins, but the much reduced value compared to the 68000 predictably failed to achieve the economies of scale needed to push the price below the 68000). The 68010 could have maintained 68000 compatibility and replaced the 68000 and 68008 with better planning. Instead, the 1982 68008 and 1982 68010 died before the 1979 68000.

1979 68000
1982 68010 add 68000 mode & selectable 8/16 bit data bus
198x 68008 unnecessary as incorporated in 68010, assign dev team to 68010
1983 68451 short lived external MMU for 68010, assign dev team to 68030
1984 68020
1984 68881 external FPU for 68020 and 68030 is worthwhile
198x 68851 external MMU obsolete with release of 68030, assign dev team to 68030
1986 68030 release a year earlier with MMU help from 68451 & 68851 MMU teams
1989 68040 release a year earlier with moved up product pipeline
1993 68060 release a year earlier with moved up product pipeline

The Sinclair QL, Mac, Amiga and Atari ST all likely would have used the 68010 with these changes. More of these PCs may have started with or used 8-bit memory for low end systems like the Sinclair QL though. The 68008 reduced the price of the QL by ~10-15% but this may have been less later.

https://retrocomputing.stackexchange.com/questions/15570/were-there-really-any-cost-savings-in-sinclair-ql-because-of-it-being-8-bit-desi

Motorola may have been selling the 68008 near cost like the Intel 8088 while economies of scale were driving down the price of the better value 68000 faster. Splitting the 68k market with an incompatible 68000, 68008 and 68010 was unnecessary and unsuccessful. Wider data and address busses requiring chips with more pins would split the 68k market although another pin compatible successor to the 68010 was possible with internal MMU, caches, 32-bit ALU, 32-bit barrel shifter and ISA upgrade. Motorola was much too slow at getting out even minor upgrades like the 68008 and 68010 though.

cdimauro Quote:

The performance of MUL/DIV instructions was good enough when the 68020 was introduced. Also, those instructions aren't so much used (especially the DIV).


Hammer exaggerates the importance of MUL/DIV performance. The 68020 MUL/DIV instructions were introduced when many CPUs lacked hardware multiply/divide instructions and 8x8 multiplication table lookups were more common. Several RISC CPUs had higher performance MUL/DIV instructions before the 68040, which was slow to arrive and had mediocre performance.
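The table-lookup approach works roughly like this (a generic C sketch of the technique for illustration, not any shipping CPU or library code):

#include <stdint.h>
#include <stdio.h>

/* When a CPU had no (or only a slow) hardware MUL, code could precompute
   an 8x8 -> 16-bit product table once and turn every multiply into an
   indexed load. A full 256x256 table costs 128 KB, so real 8-bit era code
   usually used smaller quarter-square tables, but the principle is the same. */
static uint16_t mul_table[256][256];

static void init_mul_table(void)
{
    for (unsigned a = 0; a < 256; a++)
        for (unsigned b = 0; b < 256; b++)
            mul_table[a][b] = (uint16_t)(a * b);
}

int main(void)
{
    init_mul_table();
    printf("27 * 41 = %u\n", mul_table[27][41]); /* 1107, without a runtime MUL */
    return 0;
}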

cdimauro Quote:

Me neither.

Interesting. Do you have any post from Mitch where he explains some technical reasons why he doesn't like it?


Searching for "Mitch Alsup" and "RISC-V" brings up several hits. The first hit for me just happens to be Mitch talking about immediates and displacements in the code vs worse alternatives.

RISC-V worse and better than 1980s RISC
https://www.realworldtech.com/forum/?threadid=219885&curpostid=220922

There are many RISC-V fans Mitch has sparred with on several occasions. RISC-V was an improvement in many ways, but more knowledgeable developers would not be surprised to see a RISC-VI ISA do-over, which seems to be the predominant RISC philosophy today.
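The cost Mitch points at is easy to see even in C: with large immediates the constant lives in the instruction stream, while a fixed 32-bit RISC encoding has to synthesize it. An illustration of the tradeoff (not any particular ISA's code):

#include <stdint.h>

uint64_t direct(void)
{
    /* an ISA with 64-bit immediates encodes this in one instruction */
    return 0x123456789ABCDEF0ull;
}

uint64_t from_pieces(void)
{
    /* a fixed 32-bit encoding builds it from halves with extra
       instructions, or loads it from a nearby constant pool */
    uint64_t hi = 0x12345678u;
    uint64_t lo = 0x9ABCDEF0u;
    return (hi << 32) | lo;
}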

cdimauro Quote:

There's a clear advantage, no doubt on that: once you're able to pack immediates and displacements/offsets on the same instruction, then you've removed more instructions to be executed and more accesses to memory, providing an overall benefit to both performance and power usage.

My only concern is allowing 64-bit everywhere, having very long instructions which can cause implementation problems (require much larger instruction buffer, as you stated), for questionable benefits.

While 64-bit immediates are fine and welcome (I also support them on both NEx64T and my new architecture, but only limited to certain instruction types), 64-bit absolute addresses/displacements/offsets aren't so much used to justify the additional complication (because you can have 64-bit for immediates + 64-bits for abs/disp/offs = 16 byte only for them -> very long instructions to handle).

That's the only critical point for me.


It is difficult to know how big is too big for immediates/displacements/addresses without an implementation for testing. Even then, limiting the max instruction size is likely a tradeoff sacrificing some performance for lower power and area.

cdimauro Quote:

Absolutely. It was a shame. However, with such a 6-byte max length, there's nothing else that could have been done (FPU instructions are already 32 bits long; the most that would fit is a 16-bit immediate).


Limiting the ColdFire max instruction size to 8 bytes would have allowed single precision immediates, which many double precision immediates can be exactly converted/compressed into with the optimization Frank Wille and I developed, now working in Vasm/Vbcc. Gunnar suggested on the NXP forums that 8-byte instructions could be supported by ColdFire with minimal degradation in core timing. Did the ColdFire developers care about performance anymore after ColdFire was likely arbitrarily castrated enough to satisfy their PPC zealot masters?
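The core test behind that optimization is simple; a minimal C sketch of the idea (not the actual Vasm/Vbcc source) is just an exact round-trip check:

#include <stdio.h>

/* A 64-bit FPU immediate can be emitted as a 32-bit single precision
   immediate whenever narrowing it round-trips exactly. NaNs compare
   unequal to themselves, so they conservatively stay double here. */
static int fits_in_single(double d)
{
    float f = (float)d;        /* narrow to single precision */
    return (double)f == d;     /* exact iff widening gives the value back */
}

int main(void)
{
    printf("1.5     -> %s\n", fits_in_single(1.5)     ? "single" : "double");
    printf("0.1     -> %s\n", fits_in_single(0.1)     ? "single" : "double");
    printf("1.0/3.0 -> %s\n", fits_in_single(1.0/3.0) ? "single" : "double");
    return 0;
}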

cdimauro Quote:

Not exactly. x64 allows 64-bit absolute addresses only with the two special MOV Abs64,RAX and MOV RAX,Abs64 instructions. Similarly for 64-bit immediates: they are only allowed with the MOV RAX,Imm64 instruction.

That's because there's an upper limit of 15 bytes on instruction length. Plus, 64-bit absolute addresses are very rare (only recently, with APX, did Intel add a JMP Abs64 instruction).


The x86-64 15-byte instruction limit may be due to inefficient decoding. Decoding is performed 8 bits at a time and there are more wasted bits per instruction compared to a 16-bit base encoding; the waste can be anywhere from half a byte to well over a byte per instruction depending on optimizations, int/fp/SIMD code distribution, etc.

x86-64 instruction length | frequency | cumulative
1B  4%  4%
2B 18% 22%
3B 23% 45%
4B 17% 62%
5B 18% 80%
6B  7% 87%
7B  8% 95%
8B  4% 99%
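As a sanity check on that distribution, the frequency-weighted mean length comes out to about 4 bytes (the percentages sum to 99 due to rounding):

#include <stdio.h>

int main(void)
{
    const int length[]  = {1, 2, 3, 4, 5, 6, 7, 8};      /* bytes */
    const int percent[] = {4, 18, 23, 17, 18, 7, 8, 4};  /* from the table */
    double total = 0.0, weighted = 0.0;
    for (int i = 0; i < 8; i++) {
        total    += percent[i];
        weighted += (double)length[i] * percent[i];
    }
    printf("average x86-64 instruction length: %.1f bytes\n", weighted / total);
    return 0;
}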

On average, 3 bytes have to be examined one at a time on x86-64 to decode roughly the equivalent of 2 bytes at once with a 16-bit base encoding, and about 5 bytes have to be examined one at a time to reach the equivalent of 4 bytes (2x2B) with a 16-bit base encoding. Size decode is also easier without prefixes and override encoding bytes. I expect the instruction size of over 90% of 68k instructions can be determined by looking at the first 2 bytes, and 97% by examining the first 4 bytes. The 68k average decoding case is much simpler with fewer wasted bits, which applies to most other 16-bit VLEs like your NEx64T as well.

The 68k does allow up to 22-byte instructions, but decoding 16 bits at a time makes that in some ways more like 11 bytes when compared to x86(-64). The 68k could easily limit instructions to 16 bytes max and most existing programs wouldn't need any changes. For a 68k64 ISA, a 22-byte limit would allow 64-bit immediates/displacements without double memory indirect modes, which the 68060 could handle without much performance loss.

With an 8 byte/cycle instruction fetch instead of a 4 byte/cycle fetch, and wider than 96-bit fixed-length instructions in the instruction buffer, the 68060 could have superscalar/parallel executed more large (greater than 6 byte) instructions rather than losing a cycle or two when encountering them. The 68060 balanced power with performance, where x86(-64) fixed-length macro-op instructions grew in size with performance-oriented core designs.

cdimauro Quote:

Oh, nice. Now I know why the Libre SoC vector unit is so different from what I've seen, and it looks similar to the 66000 one (which has... no vector unit, but allows vector processing).


Vector units instead of SIMD units seem to be popular for RISC-V, perhaps because the love for the ISA stops short of assembly coding. SiFive won the NASA contract to replace PPC with an SoC using the CISC-like U74 CPU core design and a vector unit. I wonder if SiFive looked at the Libre SoC vector unit, considering the project was originally supposed to be RISC-V based.

Last edited by matthey on 05-Dec-2024 at 11:09 PM.
Last edited by matthey on 05-Dec-2024 at 11:07 PM.

agami 
Re: what is wrong with 68k
Posted on 5-Dec-2024 23:31:39
#258 ]
Super Member
Joined: 30-Jun-2008
Posts: 1858
From: Melbourne, Australia

@thread

This is one of those perfect examples of why there is no right or wrong, only opinion.

Inherently, there is nothing "wrong" with 68k. On the contrary, many will point out all the things that are "right" with it, and that it was Motorola/industry that made wrong decisions. But that is the very crux of the matter: It was the (short-sighted) opinion of Motorola execs who wanted to pursue more profitable projects for which they deemed the 68k unsuited, and it was the opinion of the industry/market that cheaper brute-force designs were preferable over more costly elegant designs.

Many companies have and still make the mistake of thinking their competitor is another company or market trends, when in truth they are competing with themselves. Like individuals, companies are often their own worst enemies and can't get out of their own way.

In my opinion, it's wrong that the 68k was sidelined by Motorola's business struggles and straw-clutching at PowerPC via the AIM alliance. Almost as wrong as the loss of the Amiga as an alternative computing approach to Wintel and Mac. It is capitalism after all, and it required its sacrifice, an industry-wide focus on RISC, so that out of the rubble we could find the balance between RISC and CISC.

_________________
All the way, with 68k

Hammer 
Re: what is wrong with 68k
Posted on 6-Dec-2024 2:08:25
#259 ]
Elite Member
Joined: 9-Mar-2003
Posts: 6058
From: Australia

@matthey

Quote:

Considering how long the 68000 had already been out, the best way to handle the 68010 upgrade may have been to use a 68000 mode on startup but allow switching to 68010 mode. A pin at the opposite high/low voltage on power up from the 68000 spec may have been able to indicate 68010 mode too. A later 68000 CPU design had a selectable 16 or 8-bit data bus which could have avoided the 1982 68008 development if incorporated in the 68010 (the 68008 saved pins, but the much reduced value compared to the 68000 predictably failed to achieve the economies of scale needed to push the price below the 68000). The 68010 could have maintained 68000 compatibility and replaced the 68000 and 68008 with better planning. Instead, the 1982 68008 and 1982 68010 died before the 1979 68000.

1979 68000
1982 68010 add 68000 mode & selectable 8/16 bit data bus


The 68010 is not 100% software compatible with the 68000; the changes include:
1. The MOVE from SR instruction is privileged in the 68010 and can only be executed in supervisor mode, meeting Popek and Goldberg virtualization requirements. Because the 68000 offers an unprivileged MOVE from SR, it does not meet them (see the sketch after this list).

2. The MOVE from CCR instruction was added to partially compensate for the removal of the user-mode MOVE from SR.

3. It can recover from bus faults, allowing it to implement virtual memory. The exception stack frame is different.

4. It introduced a 32-bit Vector Base Register (VBR), which allowed the vector jump table to be placed anywhere in addressable RAM. The 68000 vector table was always based at address zero.
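A hypothetical trap-and-emulate sketch in C (all names invented; only the MOVE from SR opcode encoding is real) shows what change #1 buys:

#include <stdint.h>

/* On a 68010, a user-mode MOVE from SR raises a privilege violation, so a
   supervisor handler can decode the opcode and hand back a *virtual* status
   register instead -- the pattern Popek and Goldberg require. */

#define MOVE_FROM_SR_MASK 0xFFC0u  /* MOVE from SR: 0100 0000 11eeeeee */
#define MOVE_FROM_SR_BASE 0x40C0u

struct vcpu {
    uint16_t virtual_sr;  /* the SR the guest is allowed to see */
    uint32_t d[8];        /* data registers */
};

/* Called from the privilege-violation handler with the faulting opcode.
   Returns 1 if the instruction was emulated and can be skipped. */
static int emulate_privileged(struct vcpu *v, uint16_t opcode)
{
    if ((opcode & MOVE_FROM_SR_MASK) == MOVE_FROM_SR_BASE &&
        (opcode & 0x38u) == 0) {           /* data register direct: MOVE SR,Dn */
        v->d[opcode & 7u] = v->virtual_sr;
        return 1;
    }
    return 0;  /* anything else is a genuine privilege violation */
}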

The 68010's changes are geared toward Unix-like OSs and are not major factors for the baseline 68000 AmigaOS. The early Amiga calculator program could crash on a 68010.

The 68010's corrective instruction set changes were inherited by the 68020.

From 1986 to 1987, Commodore wasted R&D resources on custom MMUs for the 16-bit 68000 and 32-bit 68020, e.g. Bob Welland's C= custom MMU with a 68020 was in the early A2620 accelerator and was operational inside an A2000 in December 1986.

Many 32-bit 68K workstation vendors wasted R&D on custom MMUs. 68K workstation vendors like DEC and SGI moved to MIPS R2000-based workstations.

Commodore wasted time on A2620 accelerator R&D from 1987 to 1988.

Even after the 68851 MMU was released, Bob Welland continued to work on the A2620 accelerator in 1988. There are reasons for the A3000 being late.

For x86, Intel ensured the memory-protected, virtual-memory-enabled, Unix-capable 386 was the baseline standard, while 68K was gradually losing the business workstation market.

By 1988, the 386-based PC platform was mass-produced on a large scale with no time-wasting 68K MMU debacle. Windows/386 2.0 and Xenix 386 were released in 1987.

The i386 was the platform for Xenix 386, Linux, and Windows NT 386. The large economies of scale of the 386 PC platform could flex its Xenix 386-ready advantage, which laid the groundwork for the 486 (1989), IBM's 32-bit OS/2 2.0 (1992), MS Windows 3.1/Win32s beta (1992), MS Windows NT 3.1 32-bit (beta Build 1.175.1 in July 1991, released in 1993), and later Linux i386.

Both Windows 386 and Xenix 386 can run multiple MS-DOS applications concurrently.

Microsoft's Windows NT 3.1 WIP 1991 demos and Win32S (Win32 bridge for Win3.1) caused an Osborne effect on the competition.

Motorola's workstation market loss was repeated in the smart handheld market when ARMv4T (bundled with an MMU) displaced the 68000-based DragonBall (MC68VZ328). The MC68328 has a semi-custom 68EC000 static core with a 32-bit (4 GB) address space and a 16-bit data bus.

Commodore also wasted R&D and HR resources on an evolved Coherent/AT&T Unix hybrid instead of modernizing AmigaOS with modern Unix features, i.e. Unix with an AmigaOS "look and feel" (compare the later Mac OS X move). Another problem: the 68K MMU carried a premium price.

Reference
https://www.abortretry.fail/p/the-history-of-xenix


Quote:

@matthey,

The Sinclair QL, Mac, Amiga and Atari ST all likely would have used the 68010 with these changes. More of these PCs may have started with or used 8-bit memory for low end systems like the Sinclair QL though. The 68008 reduced the price of the QL by ~10-15% but this may have been less later.

PC clones used the 8086 as a one-up against IBM PC's 8088.

IBM PC established a common BIOS, HAL, and boot standards for PC clones, hence unifying most X86 PC vendors i.e. the "United States" of the computing world.

68K platform fragmentation is like balkanized Europe. None of the 68K platform vendors allowed clones, and none was strong enough to establish a de facto standard.

Intel has strong leadership in the PC chipset business e.g. PCI displaced NEC-led VL-Bus (VESA). NEC-led VL-Bus displaced IBM MCA. Intel effectively kicked NEC out of PC mainboard standards.

-----------

IBM lost the 386 leadership to Compaq's 386. IBM management had focused on the 16-bit 286, which Bill Gates labeled "brain-dead". During the OS/2 1.x era, Bill Gates argued for the 386 in his meetings with IBM management.

MS worked with Compaq on 386-enhanced Windows 2.0, and MS/SCO released Xenix 386.

Bill Gates' pro-386 argument against IBM's pro-286 stance was due to MS's experience with the Apple Mac 68K.

Microsoft (backed by PC clones) and IBM fought over the PC's road map.

Quote:

@matthey,

Motorola may have been selling the 68008 near cost like the Intel 8088 while economies of scale were driving down the price of the better value 68000 faster. Splitting the 68k market with an incompatible 68000, 68008 and 68010 was unnecessary and unsuccessful. Wider data and address busses requiring chips with more pins would split the 68k market although another pin compatible successor to the 68010 was possible with internal MMU, caches, 32-bit ALU, 32-bit barrel shifter and ISA upgrade. Motorola was much too slow at getting out even minor upgrades like the 68008 and 68010 though.


CPU pin differences weren't a major issue when BIOS (firmware), HAL (e.g. ACPI, AT), boot and basic display (e.g. VBIOS) standards were the major issues.

None of the major 68K platform vendors allowed clones, and none was strong enough to establish a de facto standard for the 68K desktop market.

PC clones could group together and establish platform standards; the Intel and AMD x86 Ecosystem Advisory Group is a modern example.

x86 Ecosystem Advisory Group founding members
1. Intel
2. AMD
3. DELL
4. HP
5. HPE (formerly Cray, SGI, HP RISC, etc.)
6. Lenovo (formerly IBM PC division)
7. Microsoft
8. Oracle
9. Red Hat (owned by IBM, LOL)
10. Broadcom

They can compete against each other, but there's a greater common good.


Last edited by Hammer on 06-Dec-2024 at 02:28 AM.

_________________
Amiga 1200 (rev 1D1, KS 3.2, PiStorm32/RPi CM4/Emu68)
Amiga 500 (rev 6A, ECS, KS 3.2, PiStorm/RPi 4B/Emu68)
Ryzen 9 7950X, DDR5-6000 64 GB RAM, GeForce RTX 4080 16 GB

Hammer 
Re: what is wrong with 68k
Posted on 6-Dec-2024 2:48:02
#260 ]
Elite Member
Joined: 9-Mar-2003
Posts: 6058
From: Australia

@matthey

Quote:

Hammer exaggerates the importance of MUL/DIV performance. The 68020 MUL/DIV instructions were introduced when many CPUs lacked hardware multiply/divide instructions and 8x8 multiplication table lookups were more common. Several RISC CPUs had higher performance MUL/DIV instructions before the 68040, which was slow to arrive and had mediocre performance.

1. It wasn't an exaggeration for SGI's core target audience i.e. the 3D graphics workstation market.

2. It wasn't an exaggeration for Commodore's future core target audience i.e. the 3D graphics games market.

The Namco System 22 augmented a 24.5 MHz 68020 CPU with two Texas Instruments TMS32025 DSPs @ 49 MHz.

Minus the texture-mapper GPU, try cost-reducing Namco System 22-level math performance with your pure Motorola 68K solution.

It's a no-brainer why Commodore departed from 68K with the Amiga Hombre project, which scales from a CD3D game console to 3D graphics workstations. Commodore evaluated the Motorola 88000 in 1989!

From Commodore - The Final Years
Quote:

RISC
Ever since the late eighties, when A500 co-designer Bob Welland was still working for Commodore, the company had been investigating RISC processor technology, often in relation to developing its Unix machines.

(skip)

Commodore’s Ted Lenthe began looking into RISC chips with the AAA chipset back in the summer of 1989, specifically the Motorola 88000.

But the engineers always balked at concrete plans due to the incompatibility problems a new processor would cause.

For his part, Ed Hepler favored creating his own RISC CPU on the basis that Commodore could produce it much cheaper than buying the 88000 from Motorola.

(skip)

In late 1990, Hepler began writing a formal design for a new chipset dubbed Hombre, which would use a RISC processor. However, department heads Ted Lenthe and Ned McCook tried their best to keep him focused on AAA.

(skip)

By 1993, Ed Hepler began favoring the PA-RISC chip (Precision Architecture) from Hewlett-Packard, the same company that fabricated many of Amiga’s custom chips. “What we were looking for was something that we could integrate right into our design,” says Hepler, who had temporarily given up his dream of designing his own CPU. “Obviously we weren't going to build a 68000 chip at that point.”

The PA-RISC chip allowed Commodore’s semiconductor engineers to implement a graphics chip on the same chip as the PA-RISC CPU core. “You could buy cores from some folks. Most of the time the cores were rectangular in shape,” explains Hepler. “If you put the core down on a chip, all you had left was a periphery around the edge where you could put your own logic—an L shaped thing was all that was left because typically chips are made to be rectangular and you want to make them as close to square as possible because they're stronger that way. If you make something that's a long thin rectangle, they'll break sometimes.”

(skip)

The big question from Commodore’s customers was whether this architecture could run the existing Amiga software library. On this point, even Commodore was loathe to give a straight answer. “We thought that we could emulate a 68000, if we needed to, very efficiently using a PA-RISC,” says Hepler. “It just seemed like a pretty good fit.”

Curiously, Lew Eggebrecht favored the MIPS and PowerPC RISC chips because they were likely to run an “industry standard” operating system, such as Unix and the upcoming Windows NT.

“A lot of Amiga people don't like to hear this, but we also had some guidelines from our management that said that whatever we picked had to be able to run Windows NT,” recalls Hepler. “Obviously for game kinds of systems, people wanted something like an Amiga.



Notice Commodore planned to buy a RISC CPU core and customize it. This is similar to the PS1 path.

The difference is that the PS1's texture mapping hardware was outsourced to Toshiba, while Commodore created its texture mapper hardware in-house.

Commodore's discrete PA-RISC CPU selection was based on Hitachi's PA/50 PA-RISC clone.

The 3DO's ARM lacked fast multiply hardware and was augmented by a custom 3D math co-processor. For the 3DO M2, this changed to two PowerPC 602 CPUs @ 66 MHz.

Continuing from Commodore - The Final Years,
Quote:

A RISC chip would essentially cause a break with the old Amiga computers. “I think we would have come up with some emulators, but it would not have run natively,” says Hepler. “It would not have taken object code for an Amiga that was running 68000 and run it directly on the hardware. There would have been an emulation package.”

The engineer’s reluctance to commit to full hardware backward compatibility had to do with previous efforts at retaining compatibility in the AAA chipset. “There were a number of reasons for that.

In doing AAA we had to be backwards compatible and that came at a pretty high price,” explains Hepler. “The folks that did the original Amiga did an amazing job but all of the registers and addresses were 16-bit addresses.

Later on, to expand things in the AGA chipset, we added some address extensions. However the registers came in two parts, so you'd write the lower-order 16-bits in one place and the high-order 4-bits someplace else. That made for very unclean software.”

Ed Hepler feels the emulator would have been effective at running legacy software. “With the instruction set of the PA-RISC and the addressing modes and things like that, it would have been pretty efficient to run a 68000 emulator on a PA-RISC,” says Hepler.



https://www.emaculation.com/forum/viewtopic.php?t=10868
Apple's official MAE 3.0 68K emulator running on HP-UX PA-RISC; the HP-UX PA-RISC machine is emulated by QEMU on a modern Windows PC.


https://www.reddit.com/r/VintageApple/comments/11v877r/most_absurd_marchintosh/
Apple's official MAE 3.0 68K emulator running on a real HP Apollo 9000 Series 735 workstation, complete with a 99 MHz PA-7100 PA-RISC processor.


An A1200 SoC, needed to run legacy games, was discussed by Commodore.

Last edited by Hammer on 06-Dec-2024 at 03:27 AM.
Last edited by Hammer on 06-Dec-2024 at 03:18 AM.
Last edited by Hammer on 06-Dec-2024 at 03:14 AM.
Last edited by Hammer on 06-Dec-2024 at 03:05 AM.
Last edited by Hammer on 06-Dec-2024 at 03:01 AM.
Last edited by Hammer on 06-Dec-2024 at 02:56 AM.

_________________
Amiga 1200 (rev 1D1, KS 3.2, PiStorm32/RPi CM4/Emu68)
Amiga 500 (rev 6A, ECS, KS 3.2, PiStorm/RPi 4B/Emu68)
Ryzen 9 7950X, DDR5-6000 64 GB RAM, GeForce RTX 4080 16 GB
