cdimauro | Re: what is wrong with 68k | Posted on 30-Nov-2024 20:06:58 [ #221 ]
Elite Member | Joined: 29-Oct-2012 | Posts: 4127 | From: Germany

@Kronos
Quote:
Kronos wrote: @cdimauro
Quote:
cdimauro wrote: but 68k processors are still produced and available.

They are about as "available" as spare parts for Ford Model Ts.

It depends on the model: 68060s are becoming quite rare, since they are no longer produced.
Other models are still produced and available.
Quote:
And about as relevant....

Well, it hasn't been relevant for a long time.
Quote:
68k died because Motorola came to the conclusion that further development of that line made no economic sense, and beyond a last hurrah (68060) and some legacy product (ColdFire) nothing really happened in over 30 years.

Wrong evaluation: the PowerPCs that they embraced have been just as dead for several years. There has been no development and no new models for more than a decade, because they (Motorola -> Freescale -> NXP) embraced ARM.
The good thing about the ColdFires is that they showed it was still possible for the 68k to be developed and keep up with other mainstream processors, since they reached high frequencies (up to 500MHz for the last model, AFAIR) on similar process nodes.
Which is further proof that Motorola's decision to stop this glorious family was wrong.
Quote:
But I'm sure you gonna drag that dead horse over at least another dozen pages.

Only if there's still something to say AND I have time.
Status: Offline

Kronos | Re: what is wrong with 68k | Posted on 30-Nov-2024 20:16:48 [ #222 ]
Elite Member | Joined: 8-Mar-2003 | Posts: 2708 | From: Unknown

@cdimauro
Quote:
cdimauro wrote:
Wrong evaluation: the PowerPCs that they have embraced were/are as dead as well since several years.

Guess what: just because PPC turned out to be a failure doesn't make further 68k development any more viable.
_________________
- We don't need good ideas, we haven't run out on bad ones yet - blame Canada
Status: Offline

cdimauro | Re: what is wrong with 68k | Posted on 30-Nov-2024 20:48:27 [ #223 ]
Elite Member | Joined: 29-Oct-2012 | Posts: 4127 | From: Germany

@Kronos
Quote:
Kronos wrote: @cdimauro
Quote:
cdimauro wrote:
Wrong evaluation: the PowerPCs that they have embraced were/are as dead as well since several years.

Guess what: just because PPC turned out to be a failure doesn't make further 68k development any more viable.

The 68k was leading the embedded market, which Motorola decided to hand over to ARM.
Take a look at where ARM was around 20 years ago and where it is now.
Something could have changed, of course, but Motorola was very, very well positioned to at least keep a good slice of the cake with its family.
Status: Offline

Kronos | Re: what is wrong with 68k | Posted on 30-Nov-2024 21:05:20 [ #224 ]
Elite Member | Joined: 8-Mar-2003 | Posts: 2708 | From: Unknown

@cdimauro
Quote:
cdimauro wrote: which Motorola decided to give to ARM.

Yeah sure, everybody at Moto (C=, Apple, IBM etc.) was incompetent and clueless, and only random armchair experts know what would have been best...
Or maybe they just realized that they couldn't push the 68k ahead with just embedded and the limited margins that could be made there.
_________________
- We don't need good ideas, we haven't run out on bad ones yet - blame Canada
Status: Offline

cdimauro | Re: what is wrong with 68k | Posted on 30-Nov-2024 21:28:39 [ #225 ]
Elite Member | Joined: 29-Oct-2012 | Posts: 4127 | From: Germany

@Kronos
Quote:
Kronos wrote: @cdimauro
Quote:
cdimauro wrote: which Motorola decided to give to ARM.

Yeah sure, everybody at Moto (C=, Apple, IBM etc.) was incompetent and clueless, and only random armchair experts know what would have been best...

Well, not armchairs (SIC!), but history has proven it. With such companies.
C=... you already know. No words to spend here.
Apple was about to go bankrupt, and it was Microsoft that saved it. It was one of the PowerPC founders and the first one to walk away.
IBM was THE chip / server / PC (!) company and is now mostly a software company, since it lost almost everything of that. It created POWER and then PowerPC, but now there's only POWER (PowerPC has been dead for a long time), which survives only because IBM was so desperate that it needed to open up the architecture and created the OpenPOWER consortium to get external help.
Motorola was leading the workstation market, then the desktop market (the 68k were the highest-performance processors), then the embedded market, and it lost everything. What's Motorola now? I leave you the pleasure of answering.
Quote:
Or maybe they just realized that they couldn't push 68k ahead with just embedded

Then why develop the ColdFires for such a long time?
Quote:
and the limited margins that could be made there.

You have to ask ARM that as well. ARM "survived". As did many other companies which still operate in this market (even 6502s are still sold!).
Status: Offline

OneTimer1 | Re: what is wrong with 68k | Posted on 30-Nov-2024 22:26:18 [ #226 ]
Super Member | Joined: 3-Aug-2015 | Posts: 1127 | From: Germany

@cdimauro
Quote:
Quote:
cdimauro wrote: which Motorola decided to give to ARM.

Or maybe they just realized that they couldn't push 68k ahead with just embedded and the limited margins that could be made there.

Motorola gave nothing to ARM; the company was sold off slice by slice, and what was once the home of the 68k is not even a shadow of its former self.
The ColdFire, an incompatible 68k successor, is at the end of its development. When new GCC versions lacked 68k support, it was financed via bounties from Amiga and Atari fans; NXP did nothing.
NXP has some interesting ARM SoCs for embedded in the 1.5 - 2.0 GHz range, but nothing with 68k. When even the owner of the platform refuses any further development and doesn't even sell real 68k parts above the 68000, you can consider the platform dead.
If you don't believe me, take a look:
Quote:
MC68000
No Longer Manufactured
This page contains information on a product that is not recommended for new designs.

https://www.nxp.com/products/MC68000
The MC68000 family is as dead as a dodo.
And believe it or not, this is a technical fact.
Status: Offline

BigD | Re: what is wrong with 68k | Posted on 30-Nov-2024 23:01:36 [ #227 ]
Elite Member | Joined: 11-Aug-2005 | Posts: 7468 | From: UK

@OneTimer1
And yet there's the Apollo 68080 core!
_________________
"Art challenges technology. Technology inspires the art." John Lasseter, Co-Founder of Pixar Animation Studios
Status: Offline

OneTimer1 | Re: what is wrong with 68k | Posted on 1-Dec-2024 0:48:34 [ #228 ]
Super Member | Joined: 3-Aug-2015 | Posts: 1127 | From: Germany

Quote:
BigD wrote:
And yet there's the Apollo 68080 core!

And a bunch of free and compatible 68k soft cores, but none of them can even beat an end-of-life ColdFire in performance.
You would need something with a 1 GHz clock and loads of cache; currently even the older PPC AmigaNG machines are too slow for a web browser, and on 68k you are doomed to retro software that runs faster on WinUAE.
Status: Offline

minator | Re: what is wrong with 68k | Posted on 1-Dec-2024 1:45:53 [ #229 ]
Super Member | Joined: 23-Mar-2004 | Posts: 1012 | From: Cambridge
For a more definitive answer on what the problem with 68K is, see this discussion.
Or, if you really want to go into intricate detail, see this post by John Mashey.
Simply put, you can't make them fast. All the techniques employed to speed up CPUs, like superscalar and out-of-order execution, are extraordinarily difficult on CISC processors. The reason is that the addressing modes make instructions complex to decode, and instruction lengths are unpredictable. Compare this with RISC, whose fixed-length instructions are simple to decode and predictable.
That x86 is still around is because Intel had the lion's share of the market and the money to fund the best silicon process in the industry. That's not the case any more, and Apple's ARM designs beat Intel and AMD on single-threaded performance while consuming less power. That's with Intel using a very similar process.
x86 still uses variable-length instructions and pays a price in complexity for it. Apple's and ARM's latest designs can execute 10 instructions per cycle; the 68060 had a hard time doing 2.
Motorola had seen this in the mid 80s and developed their own 88000 series RISC designs and later PowerPC. They didn't give up on 68K: they used it in embedded designs for years, and it actually had 2 successors, CPU32 and ColdFire, which both simplified it and continued for quite a while. 68K is dead, CPU32 is end-of-life, but ColdFire is still available.
_________________
Whyzzat?
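The decode-predictability point above can be sketched in a few lines of Python. This is a toy model with invented encodings, not any real ISA: with fixed-length instructions every boundary is known up front, so a wide decoder can attack all slots in parallel, while with variable lengths each boundary depends on having decoded everything before it.

```python
# Toy model of instruction fetch; the encodings are invented for illustration.

def fixed_boundaries(code, width=4):
    # Fixed-length ISA: every instruction start is known immediately,
    # so N decoders can each grab their slot in parallel.
    return list(range(0, len(code), width))

def variable_length(first_byte):
    # Invented rule: the top two bits of the first byte select the length.
    return {0: 2, 1: 4, 2: 6, 3: 6}[first_byte >> 6]

def variable_boundaries(code):
    # Variable-length ISA: each boundary depends on the decoded length of
    # every earlier instruction, so the walk is inherently sequential.
    offsets, pc = [], 0
    while pc < len(code):
        offsets.append(pc)
        pc += variable_length(code[pc])
    return offsets

stream = bytes([0x00, 0x11, 0x40, 0x22, 0x33, 0x44, 0x80, 0, 0, 0, 0, 0])
assert fixed_boundaries(stream) == [0, 4, 8]
assert variable_boundaries(stream) == [0, 2, 6]
```

The sequential walk in `variable_boundaries` is exactly what superscalar front-ends have to break with predecode bits or brute-force speculative decoding at every byte offset.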
Status: Offline

cdimauro | Re: what is wrong with 68k | Posted on 1-Dec-2024 5:58:26 [ #230 ]
Elite Member | Joined: 29-Oct-2012 | Posts: 4127 | From: Germany

@OneTimer1
Quote:
OneTimer1 wrote: @cdimauro
Quote:
Or maybe they just realized that they couldn't push 68k ahead with just embedded and the limited margins that could be made there.

Motorola gave nothing to ARM; the company was sold off slice by slice, and what was once the home of the 68k is not even a shadow of its former self.
The ColdFire, an incompatible 68k successor, is at the end of its development. When new GCC versions lacked 68k support, it was financed via bounties from Amiga and Atari fans; NXP did nothing.
NXP has some interesting ARM SoCs for embedded in the 1.5 - 2.0 GHz range, but nothing with 68k. When even the owner of the platform refuses any further development and doesn't even sell real 68k parts above the 68000, you can consider the platform dead.
If you don't believe me, take a look:
Quote:
MC68000
No Longer Manufactured
This page contains information on a product that is not recommended for new designs.

https://www.nxp.com/products/MC68000
The MC68000 family is as dead as a dodo.
And believe it or not, this is a technical fact.

Again, it's a MANAGEMENT fact / decision.
It has nothing to do with the 68k's ARCHITECTURE and/or MICROARCHITECTURE, which are purely technical aspects.
Last edited by cdimauro on 01-Dec-2024 at 05:59 AM.
Status: Offline

cdimauro | Re: what is wrong with 68k | Posted on 1-Dec-2024 6:17:07 [ #231 ]
Elite Member | Joined: 29-Oct-2012 | Posts: 4127 | From: Germany

@minator
Quote:
minator wrote: For a more definitive answer on what the problem with 68K is, see this discussion.

There's really nothing relevant beyond what was already discussed here.
Only, it's always a pleasure to read Peter Cordes' posts.
Quote:
Or, if you really want to go into intricate detail, see this post by John Mashey.

I've known it for a very long time, and really a long time, because this post is very old (in fact, it stops at the 68040).
Many of the things and definitions which he wrote no longer apply, especially to RISCs which... are CISCs (well, since much before his post, to be more precise).
Just check the list which he shared and which should define RISCs, and compare it to the RISC processors which are available today (and also from many, many years before): you'll find several surprises.
Quote:
Simply put, you can't make them fast.

No, the post says nothing of the sort. It shows that it's more complicated / difficult, which is a different thing.
In fact, CISC processors are the fastest ones.
Quote:
All the techniques employed to speed up CPUs like superscalar and out-of-order execution are extraordinarily difficult on CISC processors.

Yes, but difficult is different from "can't make them fast".
Besides that, this doesn't necessarily apply to all CISCs.
Quote:
The reason being that the addressing modes mean that instructions are complex to decode and instruction lengths are unpredictable.

This strictly depends on the specific architecture.
Addressing modes on some processors can make instructions difficult to decode, for sure. But this doesn't apply to all CISCs.
In fact, and as I've already reported before, my latest architecture offers mem-mem-mem instructions (even for SIMD/vector instructions) like the VAX one reported in the post:
ADDL @(R1)+,@(R1)+,@(R2)+
However, it's trivial to decode, because by checking a few bits I immediately know:
- how long the instruction is;
- whether any of the three EAs needs an immediate, displacement or offset;
- how long (2 or 4 bytes) this additional information is (for each of them, of course);
- the position in the instruction where to find it (again: for each of them).
Quote:
Compare this with RISC with fixed length instructions which are simple to decode and predictable.

See above: many RISCs have variable-length instructions.
They were needed because RISCs sucked pretty badly at code density, which is one of the most important aspects of a computer architecture.
However, RISCs gave up pretty much all of their foundations / pillars, embracing CISC to stay competitive. Otherwise they would have disappeared four decades ago already.
There's no way a RISC, as per the definition, could have survived until today without becoming a CISC.
Quote:
That x86 is still around is because of Intel had the lion's share of the market and the money to fund the best silicon process in the industry.

Absolutely. Money matters.
Quote:
That's not the case any more and Apple's ARM designs beat intel and AMD on single threaded performance while consuming less power. That's with Intel using a very similar process.

Intel's recent Lunar Lake showed that this is no longer the case, by adopting the same process and integrating the memory into the SoC.
Quote:
x86 still uses variable length instructions and pays a price in complexity for it.

Correct. But this applies only to x86 and not to all CISCs: see my example above.
Quote:
Apple's and ARM's latest designs can execute 10 instructions per cycle,

x86 doesn't need to decode & execute such a large number of instructions to achieve better performance, thanks to its CISC nature.
Quote:
the 68060 had a hard time doing 2.

Well, like the Pentium.
And at the time ARM executed only one instruction per cycle...
Quote:
Motorola had seen this in the mid 80s and developed their own 88000 series RISC designs and later PowerPC. They didn't give up on 68K, they used it in embedded designs for years and it actually had 2 successors: CPU32 and ColdFire which both simplified it and continued for quite a while. 68K is dead, CPU32 is end-of-life, but Coldfire is still available.

Exactly, but that's due to a bad management decision, unfortunately.
Motorola lost everything with its stupid decisions...
Last edited by cdimauro on 01-Dec-2024 at 06:45 AM.
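The "length from a few bits" decoding described earlier in this post can be sketched like this. The encoding is hypothetical, invented purely for illustration (it is not the actual architecture being discussed): assume a 16-bit opcode word carries, for each of the three EAs, one bit saying whether an extension follows and one bit for its size, so the total instruction length falls out in constant time, with no sequential scan.

```python
# Hypothetical length decoder: two header bits per EA (presence + size)
# are enough to compute the full instruction length in one step.
# This encoding is invented for illustration, not a real ISA.

def instruction_length(opcode_word):
    length = 2  # the 16-bit opcode word itself
    for ea in range(3):  # three effective addresses, mem-mem-mem style
        has_ext = (opcode_word >> (2 * ea)) & 1      # extension present?
        big_ext = (opcode_word >> (2 * ea + 1)) & 1  # 4-byte instead of 2?
        if has_ext:
            length += 4 if big_ext else 2
    return length

# No extensions -> 2 bytes; all three EAs with 4-byte extensions -> 14 bytes.
assert instruction_length(0b000000) == 2
assert instruction_length(0b111111) == 14
```

The point of the sketch: the length is a pure function of a handful of fixed-position bits, so even a mem-mem-mem CISC instruction need not make boundaries unpredictable.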
Status: Offline

cdimauro | Re: what is wrong with 68k | Posted on 1-Dec-2024 6:53:37 [ #232 ]
Elite Member | Joined: 29-Oct-2012 | Posts: 4127 | From: Germany

Status: Offline

CosmosUnivers | Re: what is wrong with 68k | Posted on 1-Dec-2024 7:45:36 [ #233 ]
Regular Member | Joined: 20-Sep-2007 | Posts: 108 | From: Unknown

@cdimauro
Quote:
Again, it's a MANAGEMENT fact / decision. It has nothing to do with 68k's ARCHITECTURE and/or MICROARCHITECTURE, which are purely technical aspects

Wrong: they stopped the 68k for spiritual reasons... Many guys HATE good things, and want to replace them with crap: and they call this "progress"!
Last edited by CosmosUnivers on 01-Dec-2024 at 07:46 AM.
Status: Offline

cdimauro | Re: what is wrong with 68k | Posted on 1-Dec-2024 7:55:37 [ #234 ]
Elite Member | Joined: 29-Oct-2012 | Posts: 4127 | From: Germany

@CosmosUnivers
Quote:
CosmosUnivers wrote: @cdimauro
Quote:
Again, it's a MANAGEMENT fact / decision. It has nothing to do with 68k's ARCHITECTURE and/or MICROARCHITECTURE, which are purely technical aspects

Wrong: they stopped the 68k for spiritual reasons... Many guys HATE good things, and want to replace them with crap: and they call this "progress"!

Well, management and technical guys rarely match. Unfortunately.
Motorola's management was not aware of the gold they had in their hands, and completely failed. Bah...
Status: Offline

CosmosUnivers | Re: what is wrong with 68k | Posted on 1-Dec-2024 8:08:52 [ #235 ]
Regular Member | Joined: 20-Sep-2007 | Posts: 108 | From: Unknown

@cdimauro
Quote:
Motorola's management was not aware of the gold they had in their hands, and completely failed. Bah...

Wrong again: they were 100% aware of what they did...
Status: Offline

Kronos | Re: what is wrong with 68k | Posted on 1-Dec-2024 8:11:10 [ #236 ]
Elite Member | Joined: 8-Mar-2003 | Posts: 2708 | From: Unknown

@cdimauro
Quote:
cdimauro wrote:
This you've to ask ARM as well. ARM "survived".

ARM "survived" because....
... it was open for licensing
... some of those licenses happened to be taken by major players
... it was well designed for the time
... it was inside the RISC "hype"
... sheer luck

Now it doesn't matter how good the 68k was (or not), or how good a potential future could have looked, looking back from 30 years on.
What does matter is how much it would have cost to really push it forward vs how much money could have been generated with its limited market share and margins.
The conclusion was that nothing but a cost-reduced 68k made sense, hence ColdFire being a thing that was always way behind in performance.
PPC and later ARM seemed attractive because development costs were shared and they looked to have a bright future (which was the correct guess for ARM).
No one doing CPUs, GPUs or SoCs as a business is in the business of making the best product possible no matter the cost; they are in the business of making the most money for the least effort, and in that regard the 68k was just bad tech.
_________________
- We don't need good ideas, we haven't run out on bad ones yet - blame Canada
Status: Offline

OneTimer1 | Re: what is wrong with 68k | Posted on 1-Dec-2024 19:20:00 [ #237 ]
Super Member | Joined: 3-Aug-2015 | Posts: 1127 | From: Germany

@Kronos
Quote:
Kronos wrote:
What does matter is how much it would have cost to really push it forward vs how much money could have been generated with its limited market share and margins.

Ack!
1. Complexity: The 68k had more complex addressing modes than x86, making further improvements difficult; the i386 is as dumb as a RISC CPU and open for improvements, and ARM is a RISC CPU without the need to be backwards compatible.
2. Demand: All big users of the 68k were gone; without big desktop customers they could switch to other systems.
3. Code density: This is meaningless. I'm currently working on an embedded project, and they might screw up the existing resources not because of the CPU architecture used, but just because they are using the 'wrong' OS and a full set of unused libraries.
Last edited by OneTimer1 on 01-Dec-2024 at 09:51 PM.
Last edited by OneTimer1 on 01-Dec-2024 at 07:22 PM.
Status: Offline

matthey | Re: what is wrong with 68k | Posted on 2-Dec-2024 3:29:53 [ #238 ]
Elite Member | Joined: 14-Mar-2007 | Posts: 2412 | From: Kansas

Kronos Quote:
ARM "survived" because.... ... it was open for licensing ... some those licenses happened to be by major players ... it was well designed for the time

I disagree with the original ARM ISA being well designed for the time. The ARM ISA had half the GP registers of most other RISC ISAs like MIPS, SPARC, PA-RISC, Alpha, 88k and PPC, but the code density was just as bad. For RISC embedded use, SuperH had double the GP registers and much better code density. The PC was visible as a GP register, which increases branch logic complexity and reduced code density. Only 26 bits were used for the PC, which was combined with the PSR occupying the upper bits of the PC register, reducing the address space from 4GiB to 64MiB.
https://en.wikipedia.org/wiki/26-bit_computing#Early_ARM_processors
The 68k had none of these ARM handicaps, even though the 68k started with the 16-bit 68000 CPU, more than half a decade before the 32-bit ARM CPU.
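The combined PC/PSR register can be sketched roughly like this. The bit layout is simplified from the early ARM (26-bit) programmer's model as I understand it: processor mode in bits 0-1, the word-aligned PC in bits 2-25, and condition/interrupt flags in the top bits; treat the exact field positions as an assumption.

```python
# Rough sketch of the early-ARM combined PC/PSR register (simplified;
# field positions are an assumption based on the 26-bit programmer's model).

def split_pc_psr(r15):
    mode  = r15 & 0b11               # bits 0-1: processor mode
    pc    = r15 & 0x03FF_FFFC        # bits 2-25: word-aligned PC
    flags = (r15 >> 26) & 0x3F       # bits 26-31: interrupt + NZCV flags
    return mode, pc, flags

# Whatever sits in the top bits, the PC can never reach past 2**26 bytes,
# which is the 64 MiB limit mentioned above.
assert split_pc_psr(0xFFFF_FFFF)[1] == 0x03FF_FFFC
assert split_pc_psr(0xFFFF_FFFF)[1] < 2**26
```

Every 64 MiB of "stolen" address space here is a direct cost of packing status bits into the one register that branches read and write.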
Kronos Quote:
... it was inside the RISC "hype" ... sheer luck

ARM adaptation and perseverance were as important as luck. It was a switch to the Thumb and Thumb-2 ISAs, licensed from Hitachi's SuperH and based on the licensed 68000, before ARM had much embedded success. The 32-bit embedded market leading ISAs were the 68k, SuperH and Thumb(-2), which were also the code density leaders.
Kronos Quote:
Now it doesn't matter how good 68k was (or not) or how good a potential future could have looked looking back from 30 years on.
What does matter is how much would have costed to really push it forward vs how much money it could have generated with it's limited market share and margins.

RISC Volume Gains But 68K Still Reigns
https://websrv.cecs.uci.edu/~papers/mpr/MPR/19980126/120102.pdf
The article above, with 1997 data, has ARM in 4th place for 32-bit embedded volumes after more than a decade:
1. 68k 79.3M (includes Saturn and Jaguar consoles)
2. MIPS 44.0M (includes PS1 and N64 MIPS cores)
3. SuperH 23.5M (includes the Saturn's 3 SuperH cores)
4. ARM 10.0M
5. i960 9M
6. x86 9M
7. PPC 3.9M
The 68k had about 8 times the embedded market volume of ARM after the 68k was gone from the desktop and workstation markets. The 68k had about 20 times the embedded market volume of PPC and about 10 times the PPC desktop and embedded market volumes combined. The 68k market share was holding up while margins were reduced by it being relegated to low-end ColdFire embedded use, while investment and development went into PPC with poor "market share" results.
Kronos Quote:
The conclusion was that nothing but 68k-cost reduced made sense, hence ColdFire being a thing always way behind in performance.
PPC and later ARM seemed attractive because development cost were shared and they looked to have a bright future (which was the correct guess for ARM).
Noone doing CPUs,GPUs,SoCs as a business is in the business of making the best product possible no matter the cost, they are in business of making the most money for the least effort and in that regard 68k was just bad tech.

Sales success usually comes from selling in-demand products (68k) rather than unwanted (PPC) products. At least ARM did a good job of licensing and catering to customer needs after borrowing 68k/SuperH "code density" technology.
OneTimer1 Quote:
Ack!
1. Complexity: The 68k had more complex addressing modes compared to i86, making further improvements difficult, the i386 is as dumb as a RISC CPU open for improvements and the ARM is a RISC CPU without the need to be backwards compatible.

RISC propaganda. The RISC ISAs with the simplest addressing modes, like MIPS and RISC-V, suffer from poor performance. Powerful orthogonal addressing modes are a performance multiplier. The most common 68k addressing modes are simple and very worthwhile, as seen by ARM64/AArch64 adopting CISC/68k-like addressing modes to increase performance. The complex double memory indirect addressing modes are rarely used, not used at all in most programs, and do not require microcode on the 68060. In comparison, x86(-64) code is more difficult to decode, the x86(-64) ISAs lack orthogonality, and x86(-64) cores use slow microcode for legacy code support.
OneTimer1 Quote:
2. Demand All big users of 68k where gone, without big desktop customers you can switch to other systems.

That was Motorola/Freescale's AIM (Alliance). There was a 68k market which persisted for a long time despite being ignored. Eventually the 68k silicon and designs grew old, as they were not modernized, while PPC failed to replace the 68k anywhere.
OneTimer1 Quote:
3. Code density This is meaningless. I'm currently working with an embedded project and they might screw up the existing resources, not because of the used CPU architecture, just because they are using the 'wrong' OS and a full set of unused libraries.

If there is much code, code density is important. Calling one of the most important computer metrics meaningless is naive.
Last edited by matthey on 02-Dec-2024 at 04:40 AM.
Status: Offline

cdimauro | Re: what is wrong with 68k | Posted on 2-Dec-2024 6:02:35 [ #239 ]
Elite Member | Joined: 29-Oct-2012 | Posts: 4127 | From: Germany

@CosmosUnivers
Quote:
CosmosUnivers wrote: @cdimauro
Quote:
Motorola's management was not aware of the gold they had in their hands, and completely failed. Bah...

Wrong again: they were 100% aware of what they did...

Source?
@Kronos: Matt already replied / answered to most of the points. I'll do it only for a few things.
Quote:
Kronos wrote: @cdimauro
Quote:
cdimauro wrote:
This you've to ask ARM as well. ARM "survived".

ARM "survived" because....
... it was open for licensing
... some of those licenses happened to be taken by major players

ARM was a failure at the beginning, and the licensing system only started after the ARM company was spun off from its mother company (Acorn).
Quote:
... it was well designed for the time

On top of what Matt already reported, the first ARM versions were fully MICROCODED. Yes, like the 68000 and many other... CISC processors.
Quote:
... it was inside the RISC "hype"

Despite the "RISC" label (also in its name), the ARM architecture is one of the most complex ones, with super-complicated instructions and complex addressing modes (even with the base address updated with... offsets. So, not just auto pre/post-increment).
I agree that RISC, in general, is hype, because no processor can really be defined like that AND follow all the "RISC principles".
But ARM is clearly a CISC member. No doubt about that.
Quote:
The conclusion was that nothing but 68k-cost reduced made sense, hence ColdFire being a thing always way behind in performance.

Then why were many instructions that had been removed from the 68k added again on the ColdFires?
Cost reduction makes sense if you have super small cores with a few transistors/gates, but with the new processes and the addition of bigger and bigger caches even on embedded processors, saving a few gates on a processor core makes no real difference.
Quote:
PPC and later ARM seemed attractive because development cost were shared and they looked to have a bright future (which was the correct guess for ARM).

Only the PowerPC costs were shared for Motorola, thanks to the joint venture with IBM and Apple.
Motorola entered the ARM market very late. When it had already lost all its battles (68xx, 68xxx, 88xxx, PowerPCs).
Great vision...
Quote:
Noone doing CPUs,GPUs,SoCs as a business is in the business of making the best product possible no matter the cost, they are in business of making the most money for the least effort and in that regard 68k was just bad tech.

How much money did, and do, companies like Microchip make with their sub-$1 SoCs?
Other companies licensed the 68k and used it in the embedded market (see FIDO, for example).
It was only Motorola that did not believe in its own project and killed it.
@OneTimer1
Quote:
OneTimer1 wrote: @Kronos
Quote:
Kronos wrote:
What does matter is how much would have costed to really push it forward vs how much money it could have generated with it's limited market share and margins.

Ack!
1. Complexity: The 68k had more complex addressing modes compared to i86, making further improvements difficult,

As I've already said, the double memory indirect modes can be handled by splitting them into proper micro-ops. And the problem is solved.
AFAIR, the 68060 did something like that, and it was super-pipelined. It was "just" necessary to continue in the same direction.
Quote:
the i386 is as dumb as a RISC CPU open for improvements

I suggest opening Intel's 80386 manual and comparing it to Motorola's 68020 manual: you'll find how much more complex the former is compared to the latter.
There's no way you can put it close to a RISC CPU: it's a super-complicated monster.
Quote:
and the ARM is a RISC CPU

See above for that.
Quote:
without the need to be backwards compatible.

Which is even worse, right? You have to write all the needed tools from scratch.
Quote:
2. Demand All big users of 68k where gone, without big desktop customers you can switch to other systems.

I'll give you another example that shows why this isn't true: the J-Core processor.
https://j-core.org
You can read the motivations and why SuperH was re-started. And you can see for yourself the similarities with the 68k, which had a much bigger audience & market.
More interesting is a talk which was given: https://j-core.org/talks/japan-2015.pdf
You can read slides 9-10, which further show why J-Core was born.
Quote:
3. Code density This is meaningless. I'm currently working with an embedded project and they might screw up the existing resources, not because of the used CPU architecture, just because they are using the 'wrong' OS and a full set of unused libraries.

What you're doing is irrelevant in this context. We're talking about computer architectures, where code density is one of the most important factors in this domain.
You may not know it because you're neither an expert nor passionate about it. So you lack the knowledge in this field.
That's fine, because you might be interested in other things.
However, it's better not to talk about things where there's no good knowledge.
Taking the above J-Core links:
Quote:
The SuperH processor is a Japanese design developed by Hitachi in the late 1990's. As a second generation hybrid RISC design it was easier for compilers to generate good code for than earlier RISC chips, and it recaptured much of the code density of earlier CISC designs by using fixed length 16 bit instructions [...] - instruction set density (16 bit fixed length) [...] SuperH ISA was the blueprint for ARM Thumb

Thumb was created by ARM EXPLICITLY AND SOLELY to get much better code density, because the original ARM architecture sucked very badly at that.
That was the only reason, and also the reason why ARM had HUGE success in the embedded market. The original ARM ISA was quickly surpassed first by the new Thumb, and especially by Thumb-2 later, precisely for this reason.
For decades there have been ARM processors which are based only on Thumb-2 (the ARM ISA was dropped), and they are the ones which dominate ARM sales.
All mainstream computer architectures have proper extensions explicitly addressing code density, or even a big part of their ISA devoted to it. The latest example is RISC-V, which already introduced super-complicated instructions (PUSHM/POPM, even much more complicated than the 68k's MOVEM) only for that.
PowerPC had a specific extension for this as well.
MIPS as well.
SPARC as well.
And we can continue.
ARM... I've already talked about. Only with ARM64 have they not done it, and in fact the code density isn't good. Albeit much better compared to the original ARM ISA, because they introduced several instructions which combine multiple operations to get similar benefits.
But it's the only notable (and very strange!) example nowadays.
In short: could you please tell me why such a MASSIVE effort was made by companies (and academia / researchers) on improving code density, if it weren't so important?
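As a rough sketch of the kind of win being argued here (the sizes are illustrative round figures, not measured from any real encoding): a MOVEM-style instruction spends one opcode word plus a 16-bit register mask, where a plain one-instruction-per-register ISA spends a full instruction for every register saved.

```python
# Illustrative code-size comparison for saving a set of registers.
# Assumed sizes: 4 bytes for a MOVEM-style opcode+mask, 4 bytes per
# individual store. These are hypothetical figures for illustration only.

def movem_bytes(register_mask):
    # opcode word (2 bytes) + 16-bit register mask, regardless of the mask
    return 4

def per_register_bytes(register_mask):
    # one fixed-size store per register selected in the mask
    return 4 * bin(register_mask).count("1")

mask = 0b0000_0011_1111_1111  # e.g. ten registers selected
assert movem_bytes(mask) == 4
assert per_register_bytes(mask) == 40
```

A function prologue/epilogue runs on every call, so a 10x size difference on exactly that code is the sort of thing the Thumb/SuperH-style density extensions were chasing.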
Status: Offline

Kronos | Re: what is wrong with 68k | Posted on 2-Dec-2024 11:08:57 [ #240 ]
Elite Member | Joined: 8-Mar-2003 | Posts: 2708 | From: Unknown

@matthey
Quote:
1. 68k 79.3M (included Saturn and Jaguar consoles)

Which says nothing unless you include the average profit per unit (I'd guess single digits) and put that in relation to the R&D costs that would have been needed to keep it relevant for the next x years.
Quote:
If there is much code, code density is important. Calling one of the most important computer metrics meaningless is naive.

It was important when computers had ROM and RAM counted in kB and most of that was taken up by code.
Today you only run out of RAM when data balloons out of control, often the result of sloppy coding.
Not meaningless, but also not more than an afterthought in most use cases.
_________________
- We don't need good ideas, we haven't run out on bad ones yet - blame Canada
Status: Offline