Forum Index / Classic Amiga Hardware / One major reason why Motorola and 68k failed...
Matt3k 
One major reason why Motorola and 68k failed...
Posted on 6-May-2024 16:37:35
#1
Regular Member
Joined: 28-Feb-2004
Posts: 244
From: NY

Since we have talked about this subject from various perspectives, I thought I would add another... Intel.

Intel used the same famed consultants that brought Google to prominence.

I read this book years ago and it discusses it nicely:
https://www.whatmatters.com/the-book

Read chapter 3: Operation Crush: an Intel Story.

Give you one guess on who they wanted to crush? :)

Article on it.

Another article.

Clearly Motorola got outplayed by Intel and the rest is history...

Last edited by Matt3k on 06-May-2024 at 05:39 PM.

Hammer 
Re: One major reason why Motorola and 68k failed...
Posted on 7-May-2024 4:39:58
#2
Elite Member
Joined: 9-Mar-2003
Posts: 6039
From: Australia

@Matt3k

The critical design win for Intel was the IBM PC standard. Intel's OKRs didn't work for Intel's own attempts to kill x86, e.g. the iAPX 432, i860, and Itanium. A CPU by itself is nothing without software.

IBM's open PC standard killed NEC's PC-98 despite both platforms having common x86 CPUs.

The 68K-based Amiga and Apple machines could compete against the likes of the NEC PC-98.

IBM's imposed "second source" insurance for the x86 supply contract worked as intended, i.e. AMD64 (x86-64) killed Itanium.

Motorola didn't price the 68000 out of the market with the 68010, e.g. by selling the 68000 at $8 while the 68010 sold at $7. The 68010's instruction set changes laid the foundations for the 68020 and 68030. Bad habits need to be priced out.

This problem repeated in the later handheld mobile market: the 68000-based DragonBall @ 33 MHz was displaced by the ARM925T @ 144 MHz with an integrated MMU. DragonBall reached 66 MHz with the MC68SZ328 model.

The 68020 wasn't a direct drop-in replacement for the 68000 since it's not 100% instruction-set compatible. Chaos for Sega Mega Drive games, let alone Amiga/Atari ST games.

The upgrade pathway for many 68000 retro platforms is PiStorm, with its 100% instruction compatibility with the 68000 while being faster.

Motorola couldn't shift their 68000 success into the newer 68020, 68030, and 68040.

Motorola didn't seriously cultivate MMU-enabled Unix/Linux OS insurance and relied on MMU-less OS platform customers. Motorola was late with the integrated-MMU 68030 and did NOT guarantee MMU standards across their products. Motorola treated the MMU as a "nickel and dime" premium product-segmentation feature.

There's no BOM cost difference between 68EC030 and 68030 since they are the same chips.

Intel guaranteed an MMU across the i386 standard, which enabled mass memory-protected OS deployment. Intel guaranteed an MMU across the i286 standard, which enabled Xenix potential for every 286-based PC. Without the Intel MMU standard, NEC's 8086 clones dropped from the market.

A certain 386 PC was used for creating Linux.

https://segaretro.org/History_of_the_Sega_Saturn/Development

EGM reported that this new Saturn project was likely to use a Motorola 68030 processor. It also became increasingly unlikely that this new project would not be compatible with Mega Drive or Mega-CD software
...

On September 21st, 1993, Sega announced a joint venture with Hitachi with the intention of producing a "32-bit" video game multimedia machine, the idea being that Hitachi would be responsible for producing the processor. Mega Drive and Mega-CD support was effectively ruled out around this point - it was unlikely that Sega would equip its new console with all the chips required to run this software natively, and more modern techniques such as software emulation was completely unheard of at the time.


There are many stories of the 68K losing to RISC competitors, not just Intel.


Last edited by Hammer on 07-May-2024 at 05:39 AM.

_________________
Amiga 1200 (rev 1D1, KS 3.2, PiStorm32/RPi CM4/Emu68)
Amiga 500 (rev 6A, ECS, KS 3.2, PiStorm/RPi 4B/Emu68)
Ryzen 9 7950X, DDR5-6000 64 GB RAM, GeForce RTX 4080 16 GB

matthey 
Re: One major reason why Motorola and 68k failed...
Posted on 7-May-2024 17:32:10
#3
Elite Member
Joined: 14-Mar-2007
Posts: 2387
From: Kansas

@Matt3k
This looks like propaganda told by one of the winners to sell books. The winners write history from their perspective.

How did Intel feel about the 68000?

https://archive.computerhistory.org/resources/text/Oral_History/Motorola_68000/102658164.05.01.acc.pdf Quote:

House: You were saying that you'd won most of the designs and Murray had said how electrifying that was. I can tell you that at Intel it was pretty electrifying too because we saw the design loops being won by the 68000, and we talked to the customers about why, and it was terrifying to the team at Intel.


Motorola 68000 Oral History Panel
https://www.youtube.com/watch?v=UaHtGf4aRLs

Intel's "Operation Crush" should be called "Operation Avoid the Motorola Crush". The 68000 was obviously vastly superior to the 808x but it wasn't the end of the world. The 68000 was also a new CPU family with limited support chips, and the new ISA had limited software support, especially for development tools like compilers.

Motorola had outdated fabs and fab production issues at the introduction of the 68000. High demand and production issues meant a limited 68000 chip supply for customers. This left an opening for Intel to at least retain existing customers with their more mature 808x family, and Intel availability was not a problem even as many potential customers chose the 68000.

Intel lost most of the new and high end designs but retained a cost advantage at the low end: the 808x had been out longer, and the 8086 only used about 29,000 transistors to the 68000's 68,000 transistors, making it potentially cheaper to produce. The cheaper 8088 variant, with an 8-bit data bus reducing the CPU pin count and cost, was available much sooner than the equivalent 68008, which arrived so late that there was little demand for it and its development was a waste of resources. The cheap 8088 for the IBM PC is the lucky win that saved Intel from the Motorola 68000 crush.

Besides the OKR management propaganda, your article mentions 4 Intel strategies.

https://www.pragmaticinstitute.com/resources/articles/product/leader-of-the-pack/ Quote:

1. Develop and publish five benchmarks showing superior 8086 family performance (Applications)
2. Repackage the entire 8086 family of products (Marketing)
3. Get the 8MHz part into production (Engineering, Manufacturing)
4. Sample the arithmetic coprocessor no later than June 15 (Engineering)


Intel's benchmark and marketing strategy was partially successful. Intel planted doubts about the 68000 in some potential customers' minds, but more tech-savvy customers recognized the propaganda, and it likely backfired with some as Intel stretched the truth beyond credibility. Consider the following article which calls out Intel's benchmarks.

https://marc.retronik.fr/motorola/68K/68000/Benchmarking_The_68000-80x86_[Micro_Cornucopia_1985-1p].pdf Quote:

What's the fastest 16-bit chip around? It depends on whom you're listening to.

Intel has published reports comparing the speeds of its 80*86 family and Motorola's 68000. Their reports claim the iAPX286 is three to six times faster than the 8086 and three times faster than the 68000. Motorola decided to study Intel's benchmark results, and they found some inconsistencies in Intel's comparisons. Here's food for thought:

1. Intel used the fastest iAPX286 they make (8MHz), but not the 12.5MHz Motorola 68000.
2. Intel used a record area of 64K for the linked list benchmark (which is the maximum memory all 80*86 chips can address without segment switching) and used a 16 Megabyte area for the 68000.
3. None of Intel's benchmarks handled the case of crossing a segment boundary. Obviously, many applications require more than 64K RAM. Crossing a segment boundary means more overhead (slower operation) for Intel's parts.

...

EDN asked Intel to send in the code for their benchmarks, but Intel refused. Motorola interpreted Intel's refusal to mean that the code for the iAPX286 was so long and clumsy Intel would be embarrassed to see it in print.

...

Editor's note: Of course, there's more to a microprocessor's success than benchmarks. The Intel-Motorola battle illustrates how marketing moxy can outweigh performance in the battle for industry's pocketbooks.

In 1981, when the Motorola 68000 was gaining momentum, Intel president Andy Grove called in Regis McKenna, a public relations hotshot from Palo Alto, California.

Grove, McKenna, and six Intel managers met to develop a new marketing strategy for Intel. Their project was code named CRUSH. Very simply, its intention was to stop the movement of designers from the Intel chips to the newer 68000 series.

After surveying the market, they concluded that if customers compared the 8086 to the 68000, chip to chip, "Intel would have trouble." The 68000 was becoming more and more popular among software-oriented companies, while the 8086 was holding its own among hardware-oriented companies. (See "The Last Page" this issue for details.)

The CRUSH strategy was to play on customers' fears. They wanted people to worry about the consequences of committing themselves to Motorola. After all, the 68000 had very little software, no peripheral chips, and no development system. And Motorola hadn't clearly defined its future. Would customers get stuck with an orphan if they went 68000?

During the next quarter, Intel gave 50 half-day seminars to potential customers, and thereby won the positioning battle. Motorola is only now beginning to catch up in the home computer market, with new machines coming from Amiga, Atari, and Apple.



The article has benchmark results showing the 68000 outperforming the 8086, 80186 and 80286. There are certainly some cases where 808x and x86 instruction execution uses fewer cycles, and there are cases where compilers will generate poor code for the 68000, but the 68000 is clearly higher performance than the 8086, 80186 and 80286. The big difference, as noted in this article and as I recently pointed out in an Amigaworld.net post, is the 68000's flat address space compared to a 64kiB segmented address space.

Is there any doubt why Jay Miner pushed so hard for using the 68000 in the Amiga, and that it was the perfect choice? The Amiga, Atari and Apple all got it right, but IBM may have wanted a PC with more limited hardware so it wouldn't compete with their higher margin products. Intel offered good chip availability, more support chips and more software. Maybe Intel's propaganda made it to IBM management who lacked tech knowledge, or maybe IBM just lacked visionaries who could see the PC becoming the dynamic, versatile and expandable machine that Jay Miner and Steve Jobs foresaw.

Intel was good at marketing and at upgrading products and customers. They were persistent with their incremental upgrades. Both Motorola and Intel made some huge mistakes. Motorola could have listened to Chuck Peddle and developed what became the 6502 family, though they then may not have come up with the 68000 in an effort to regain the CPU market they lost from the 6800 to the 6502. The biggest mistake may have been throwing away their beautiful 68k baby. Intel never threw out their ugly x86 baby, even after huge expensive mistakes like the iAPX 432 and Itanium that were supposed to replace it. It's sad that x86 has been developed so far while the vastly superior and beautiful 68k that started the dynamic PC revolution gets so little development love despite its viability today based on performance metrics. Even the older 6502 is more available as licensable cores and has proliferated more than the 68k, since Motorola threw out their baby and never looked back.

Last edited by matthey on 07-May-2024 at 05:37 PM.

Hammer 
Re: One major reason why Motorola and 68k failed...
Posted on 8-May-2024 2:53:49
#4
Elite Member
Joined: 9-Mar-2003
Posts: 6039
From: Australia

@matthey

Quote:
The article has benchmark results showing the 68000 outperforming the 8086, 80186 and 80286. There are certainly some cases where 808x and x86 instruction execution uses fewer cycles, and there are cases where compilers will generate poor code for the 68000, but the 68000 is clearly higher performance than the 8086, 80186 and 80286. The big difference, as noted in this article and as I recently pointed out in an Amigaworld.net post, is the 68000's flat address space compared to a 64kiB segmented address space. Is there any doubt why Jay Miner pushed so hard for using the 68000 in the Amiga, and that it was the perfect choice? The Amiga, Atari and Apple all got it right, but IBM may have wanted a PC with more limited hardware so it wouldn't compete with their higher margin products. Intel offered good chip availability, more support chips and more software. Maybe Intel's propaganda made it to IBM management who lacked tech knowledge, or maybe IBM just lacked visionaries who could see the PC becoming the dynamic, versatile and expandable machine that Jay Miner and Steve Jobs foresaw.

Fact: Motorola's 68008 was released in 1982, which was too late for the cost-reduced IBM PC's requirement of an 8-bit bus CPU with a 16-bit ALU, hence Intel's 8088 was selected.

If the 68008 had been released in 1979 along with the 68000, the timeline could have been different.

NEC selected the 16-bit FSB equipped 8086 for PC-98.

Many desktop microcomputers in the early 1980s had 8-bit data bus PCB designs for cost reasons.

Some PC clones selected the faster 8086, with its 16-bit front-side bus, which still ran the original IBM PC's business software on MS-DOS.

In 1984, IBM released the PC/AT with the 286 CPU and its integrated MMU. PC clones also used 286 CPUs. Before the PC/AT's release, Intel focused on Unix (and Unix-like clones) for 286 platforms.

Motorola didn't integrate the 68K's MMU until 1987's 68030 release. x86's built-in MMU had a cost advantage for system integrators when compared to Motorola's two-chip solution, e.g. 68010 / 68451 (or a single-vendor boat-anchor custom MMU).

Apple's 68000-based Lisa was released in 1983, two years after the original IBM PC's 1981 release.

The PC platform leveraged its business/professional software legacy that was built up during Commodore's 8-bit microcomputer era into the full 16-bit (8086/80286) and 32-bit (80386 with guaranteed MMU) era while the 68000 competition started from ground zero.

Despite owning MOS 65xx CPU family, Commodore couldn't carry over the success of the MOS 65xx PET business and C64 games legacy software into the full 16-bit era. The Amiga started from ground zero in 1985.

Under Jack Tramiel's Commodore, Tramiel didn't care about software compatibility across Commodore's 8-bit computers, i.e. the incompatible Plus/4 and C16 versus the C64. Tramiel repeated the same shoddy product segmentation at Atari with the Mega ST (custom blitter) and ST (no blitter). Jack Tramiel didn't understand 3rd-party software development.

The same lessons apply to AMD64's win against Itanium (IA-64) and 64-bit POWER. AMD64 didn't win on "64-bit" alone; the IA-32 legacy/x86-64 pathway was the sledgehammer against 64-bit POWER and Itanium. Software sold the hardware.

Apple was aware of the business (productive) software use case and convinced enough major business software vendors for the Mac. Microsoft created software teams for Mac and learned from it.

There are reasons for Apple purchasing business professional software vendors for its Mac platform, hence creating 1st party professional software teams for the Mac.
Apple's 1st party software teams are similar to Nintendo/Sony's 1st party software teams.

Acorn foresaw the dead end of Commodore's crap CPU R&D road map and designed their own ARM CPU family. Acorn wasn't a "big iron" Unix vendor, hence their profit expectations and pricing model were closer to those of normal customers, i.e. a RISC CPU for the masses, not the classes.

Most end consumers didn't care about the 68000's virtues. When my Amiga 500 was purchased, the 68000's virtues weren't on my mind, i.e. it was the good quality games, the entry price point, and the "homework" education excuse. The Atari ST was considered, but I was willing to pay a slight premium for superior graphics hardware, hence the Amiga 500 in 1989.

----------------

In modern times, Intel has had its "unstable Pentium III GHz race" moment with the Raptor Lake debacle. AMD piled on the pressure and Intel made another mistake.

When extreme pressure is applied, Intel usually makes mistakes, i.e. the Pentium FPU bug, the unstable Pentium III GHz race, and the recent unstable Raptor Lake power profiles.

Without Jack Tramiel's Commodore as a low-cost competitor, the Raspberry Pi could continue from Acorn's low-cost educational microcomputer entry point.

Last edited by Hammer on 08-May-2024 at 03:31 AM.


matthey 
Re: One major reason why Motorola and 68k failed...
Posted on 8-May-2024 5:21:22
#5
Elite Member
Joined: 14-Mar-2007
Posts: 2387
From: Kansas

Hammer Quote:

The critical design win for Intel was the IBM PC standard. Intel's OKRs didn't work for Intel's own attempts to kill x86, e.g. the iAPX 432, i860, and Itanium. A CPU by itself is nothing without software.

IBM's open PC standard killed NEC's PC-98 despite both platforms having common x86 CPUs.

The 68K-based Amiga and Apple machines could compete against the likes of the NEC PC-98.

IBM's imposed "second source" insurance for the x86 supply contract worked as intended, i.e. AMD64 (x86-64) killed Itanium.


Itanium had multiple producers too, including Intel and HP. The issue with the VLIW Itanium was very expensive and slow development, resulting in lackluster general-purpose performance much like the i860. The EPIC mistake was targeting the general-purpose high-performance desktop and workstation markets, which set very high performance expectations. The AMD64/x86-64 ISA may have violated license agreements, but Intel saw the light and negotiated with AMD to share the ISA instead of suing them.

Hammer Quote:

Motorola didn't price the 68000 out of the market with the 68010, e.g. by selling the 68000 at $8 while the 68010 sold at $7. The 68010's instruction set changes laid the foundations for the 68020 and 68030. Bad habits need to be priced out.

This problem repeated in the later handheld mobile market: the 68000-based DragonBall @ 33 MHz was displaced by the ARM925T @ 144 MHz with an integrated MMU. DragonBall reached 66 MHz with the MC68SZ328 model.


The 68000 and 68010 have similar transistor and pin counts so the cost to produce the chips was similar. However, there were additional development and chip production costs for the 68010 Motorola likely wanted to recoup with an expected short life before the 68020 replacement and they could charge a premium price for features needed primarily by the high margin workstation market. The 68000 was adequate for the embedded market and most of the desktop market although the 68010 was better for the desktop market to allow a more compatible upgrade path to the 68020. Jay Miner likely preferred the 68010 for his expandable desktop Amiga vision but the game machine guys considered the 68008 and wanted no more expense than the 68000. Engineering is the art of compromise and the 68000 was chosen which Apple and Atari also selected for the desktop. External MMU add on ability and proper virtualization support obviously was not as important for the desktop as a large flat address space.

DragonBall embedded chips used 68000 cores as they targeted small, low-power devices before ColdFire aimed even lower. Motorola had higher performance CPU cores and could have used existing CPU32 or 68020 cores with or without an MMU, but those would have competed with embedded PPC. The 68k was only allowed low-end embedded use where the fat PPC couldn't go. An MMU was a very low priority for ColdFire as well, only appearing in ColdFire v4e, launched in 2000. While the DragonBall embedded chips were early SoCs with 2D support, they were much more primitive and lower end than later ARM-based SoCs with integrated 3D GPUs like the TI OMAP SoCs. Motorola developed higher end PPC SoCs to compete with ARM SoCs, like the MPC51xx and MPC52xx, but they were too little too late, and ARM had Thumb technology with 68k-like code density, licensed from Hitachi's SuperH and influenced by the 68k from when Hitachi was a 68k second-source chip producer. Motorola was beaten by the very 68k technology they refused to use.

Hammer Quote:

The 68020 wasn't a direct drop-in replacement for the 68000 since it's not 100% instruction-set compatible. Chaos for Sega Mega Drive games, let alone Amiga/Atari ST games.

The upgrade pathway for many 68000 retro platforms is PiStorm, with its 100% instruction compatibility with the 68000 while being faster.


The 68020 isn't pin compatible with the 68000 either. The 68010 is pin compatible and mostly software compatible despite a few ISA changes, like making SR register reads supervisor-only. The 68010 usually could be dropped into an Amiga, Sega Genesis, Mac and later Atari computers with enough compatibility to boot and work most of the time. This includes the Sega Genesis, which operates in supervisor mode all the time without an OS and so was unaffected by the ISA change. Some OSs which operated in user mode were slower with the 68010 if they handled the supervisor mode violation exceptions, but this made it mostly compatible with the 68000.

The 68020+ kept the 68010's supervisor-only SR register accesses, but there were some supervisor mode changes which affected compatibility, especially for bare metal programming in supervisor mode like the Sega Genesis uses. The PiStorm and AC68080 allowing SR register reads in user mode like the 68000, for what you call "100% instruction compatibility", doesn't affect this. Some Sega Genesis games would have needed patches to work on the 68020+, but it should have been a smoother transition than the change to SuperH.

Hammer Quote:

Motorola couldn't shift their 68000 success into the newer 68020, 68030, and 68040.


The 68020 and 68030 were very successful in the embedded and desktop markets. Motorola did start losing workstation market share to RISC competition. Later, desktop market share was lost to the 80386. The very late and hot-running 68040 was where the 68k hit the wall. The 68040 was too hot for most embedded uses, and the workstation market had mostly been lost, leaving only the desktop market. Apple was the only big user of the 68040 on the desktop though. The 68060 design was what the 68040 design should have been and was better suited for embedded use, but Apple had already switched to PPC so there was no 68k desktop market left. CBM was best positioned to challenge IBM PCs with the 68k Amiga, but their vision was a C64-replacement Amiga using a custom chipset to offload the CPU so they could provide the cheapest possible 68k CPU, which was an embedded CPU.

Hammer Quote:

Motorola didn't seriously cultivate MMU-enabled Unix/Linux OS insurance and relied on MMU-less OS platform customers. Motorola was late with the integrated-MMU 68030 and did NOT guarantee MMU standards across their products. Motorola treated the MMU as a "nickel and dime" premium product-segmentation feature.

There's no BOM cost difference between 68EC030 and 68030 since they are the same chips.


Sure. The 68010 is what the 68000 should have been in 1979 but the market for MMUs was tiny then. Motorola needed to get out their 68k successors with updated features sooner but what they produced was generally high quality. Standardization could have been better but all major CPUs from the 68000-68060 work well with the Amiga.

Hammer Quote:

Intel guaranteed an MMU across the i386 standard, which enabled mass memory-protected OS deployment. Intel guaranteed an MMU across the i286 standard, which enabled Xenix potential for every 286-based PC. Without the Intel MMU standard, NEC's 8086 clones dropped from the market.

A certain 386 PC was used for creating Linux.


MMU pages for a segmented address space are not the same as MMU pages for a flat address space. This is a major reason why the 80386, and not the 80286, was the game changer for x86 and Intel. A standard on-chip MMU was helpful for the desktop market but it came at a higher Intel price. If an MMU was needed, an on-chip MMU provided better performance and a potential cost savings compared to an external MMU. Most embedded CPUs did not use an MMU, and embedded was a larger part of Motorola's customer base. Some desktop computers did not have MMU support for some time either. ARM was introduced in 1985 with no MMU, no FPU, no caches, no hardware MUL & DIV, no misaligned access support, etc., and did not receive many of these long-available 68k features for many years to come; some did not become standard until AArch64. RISC-V chooses a la carte features for customer versatility like ARM used to, but too much hardware versatility is hell for software developers and results in less optimized code.

Hammer Quote:

https://segaretro.org/History_of_the_Sega_Saturn/Development

EGM reported that this new Saturn project was likely to use a Motorola 68030 processor. It also became increasingly unlikely that this new project would not be compatible with Mega Drive or Mega-CD software
...

On September 21st, 1993, Sega announced a joint venture with Hitachi with the intention of producing a "32-bit" video game multimedia machine, the idea being that Hitachi would be responsible for producing the processor. Mega Drive and Mega-CD support was effectively ruled out around this point - it was unlikely that Sega would equip its new console with all the chips required to run this software natively, and more modern techniques such as software emulation was completely unheard of at the time.


There are many stories of the 68K losing to RISC competitors, not just Intel.


RISC was overhyped and divided between too many ISAs. Workstation customers left the CISC 68k for more performance and returned to the CISC x86 for more performance, which was in many ways a downgrade from the 68k. There was no 68k to return to, as Motorola fell for the RISC hype themselves and threw their beautiful baby out, while Intel kept their ugly baby and nurtured it to a strong but even uglier maturity.

matthey 
Re: One major reason why Motorola and 68k failed...
Posted on 8-May-2024 18:48:38
#6
Elite Member
Joined: 14-Mar-2007
Posts: 2387
From: Kansas

Hammer Quote:

Fact: Motorola 68008 was released in 1982 which was too late for cost-reduced IBM PC's 8-bit bus with 16-bit ALU CPU requirement, hence Intel's 8088 was selected.

If 68008 was released in 1979 along with 68000, the timeline could be different.


Even if the 68008 had been available earlier, I have doubts it would have made much of a difference for IBM PC consideration. The 68008's value in performance/$ was much less than the 68000's, and it was not as competitive as the 68000. The 68008 uses a similar number of transistors to the 68000, so the only savings come from the pin count, which isn't much, and that is one of the reasons why its price often wasn't cheaper. It did lower the board cost for the customer, but it wasn't worth it most of the time. There were cheaper 16-bit CPUs like the 8088 that use less than half the transistors of the 68008. CBM chose the 16-bit Zilog Z8001 for their Commodore 900 next generation computer before acquiring the 68000 Amiga.

CPU   | pins   | transistors
68000 | 64-pin | 68,000
68008 | 48-pin | 70,000
Z8001 | 48-pin | 17,500
8088  | 40-pin | 29,000

There was a 40-pin Z8002, but it only supported 64kiB of address space, where the Z8001 supported up to 8MiB with segmentation like the 8088. The 68008 was at a cost disadvantage and no longer had a performance advantage. It still had a better, more orthogonal ISA with a large flat address space, but it was not nearly as competitive in the budget 16-bit CPU market. Motorola perhaps should have looked at this and decided not to develop and produce the 68008.

Perhaps a better idea than the 68008 would have been a 68015 with a 16 bit data bus but 32 bit ALUs and a small cache.

68015
68010 ISA changes and external MMU support
64-pin DIP (pin compatible with 68000 and 68010)
data bus: 16-bit
address bus: 24-bit
32 bit ALUs
instruction cache: 64 or 128 byte direct mapped
data cache: 64 or 128 byte direct mapped

This 68015 could have been a significantly cheaper alternative to the 32 bit data bus 68020 while offering a drop in 68000 and 68010 replacement with a performance and feature upgrade. The small caches would have been more effective than the 68010 prefetch for loops and made the 32 bit ALUs more worthwhile than the many GP registers alone. With this intermediate upgrade, the 68020 could have even been skipped with the 68030 features added instead without the fear of making the chip too expensive. Maybe this 68015 could have been the 68020 and the 68030 would have been the full 32 bit upgrade with more pipelining.

Hammer Quote:

In 1984, IBM released the PC/AT with 286 CPU with integrated MMU. PC clones also used 286 CPUs. Before PC/AT's release, Intel focused on Unix (and Unix-like clones) with 286 platforms.

Motorola didn't integrate 68K's MMU until 1987's 68030 release. X86's built-in MMU has a cost advantage for system integrators when compared to Motorola's two-chip solution e.g. 68010 / 68451 (or single vendor boat anchor custom MMU).


Once again, it was the 80386 with a large flat address space and standard MMU that was the game changer, and not the 80286. Yes, Motorola should have stopped disabling the MMU and made it standard once it wasn't worthwhile to make new chips without an MMU to reduce the silicon space. The FPU probably should have been standard by the 68060 as well, as one chip improves economies of scale and is easier to support. Also, this could have allowed the FPU and the integer MUL and DIV instructions with 64-bit results to more easily share silicon instead of being removed (this is a potential advantage of an extended precision FPU with 64 bits of fraction precision instead of 53 bits for double precision).

Hammer Quote:

Apple's 68000-based Lisa was released in 1983 that is 2 years later than the original IBM PC's 1981 release.

The PC platform leveraged its business/professional software legacy that was built up during Commodore's 8-bit microcomputer era into the full 16-bit (8086/80286) and 32-bit (80386 with guaranteed MMU) era while the 68000 competition started from ground zero.


Developing from zero is very difficult. It took about 5 years after the 68000's introduction before the Mac, Amiga and Atari computers started to be released, and their timing had as much to do with the 68000's price as with development effort. The value (performance/$) of the 68000 sparked a PC and game machine evolution like the 6502 had done years earlier.

Hammer Quote:

Despite owning MOS 65xx CPU family, Commodore couldn't carry over the success of the MOS 65xx PET business and C64 games legacy software into the full 16-bit era. The Amiga started from ground zero in 1985.


The 68000 is capable of good 6502 family emulation and there were good C64 Amiga emulators later but CBM lost customers by not prioritizing C64 emulators sooner. They seemed more fixated on 808x emulation which they also botched and wasted valuable development efforts on hardware PC support for the Amiga that increased Amiga costs.

Hammer Quote:

Under Jack Tramiel's Commodore, Tramiel didn't care about software compatibility across Commodore's 8-bit computers, i.e. the incompatible Plus4/C16 and C64. Tramiel pulled the same shit product segmentation at Atari with the Mega ST (custom blitter) and ST (no blitter). Jack Tramiel didn't understand 3rd party software development.


Jack sabotaged CBM before he left but the new upper management wasn't any better.

Hammer Quote:

The same lessons apply to AMD64's win against Itanium (IA-64) and Power64. AMD64 didn't win by "64-bit" alone; the IA-32 legacy/x86-64 pathway was the sledgehammer against 64-bit Power64 and Itanium. Software sold the hardware.


The Itanium offered x86 compatibility through Transmeta-like "code morphing", which Intel called "dynamic binary translation". There were translation layers made to execute MIPS and SPARC code as well as x86 code. Itanium was an effort to consolidate and replace many high end ISAs for the workstation and high end desktop markets. General purpose performance was the problem for VLIW CPUs, not compatibility. There is a higher VLIW resource cost, somewhat like emulation, but lack of performance is what kills VLIW and emulation competitiveness for CPUs.

Hammer Quote:

Apple was aware of the business (productive) software use case and convinced enough major business software vendors for the Mac. Microsoft created software teams for Mac and learned from it.

There are reasons for Apple purchasing business professional software vendors for its Mac platform, hence creating 1st party professional software teams for the Mac.
Apple's 1st party software teams are similar to Nintendo/Sony's 1st party software teams.


The Amiga received Microsoft Amiga Basic. Wait, were you talking about professional quality software for the Amiga?

Hammer Quote:

Acorn predicted Commodore's crap CPU R&D road map and designed their own ARM CPU family. Acorn wasn't a "big iron" Unix vendor, hence their profit expectations and pricing model were closer to normal customers, i.e. a RISC CPU for the masses, not the classes.


ARM CPUs had few features early on and had limited adoption and success for many years. Success finally came from Thumb ISA adoption, with its code density for the embedded market, and from the low margin expectations you mention. The early ARM CPUs were over-hyped like many RISC CPUs and ISAs. ARM is on their 4th ISA today, which is far from good planning and long term standardization.

Hammer Quote:

Most end consumers didn't care about 68000's virtue. When my Amiga 500 was purchased, 68000's virtue wasn't in my mind i.e. it was the good quality games, price entry point, and "homework" education excuse. Atari ST was considered, but willing to pay a slight premium for superior graphics hardware, hence the Amiga 500 in 1989.


Most developers did care about the 68k and liked it. Most potential 68k Amiga customers care about the 68k because it brings maximum software compatibility to the retro Amiga market. They care about the Amiga custom chips for the same reason.

Hammer Quote:

In modern times, Intel has its "unstable Pentium III Ghz race" moment with the RaptorLake debacle. AMD piled in the pressure and Intel made another mistake.

When extreme pressure is applied, Intel usually makes mistakes i.e. Pentium FPU bug, unstable Pentium III Ghz race, and recent unstable RaptorLake's power profiles.


It's surprising there aren't more problems with the complexity of high performance x86-64 CPUs.

Hammer Quote:

Without Jack Tramiel's Commodore as a low-cost competitor, Raspberry Pi can continue from Acorn's low-cost education micro-computer entry point.


There is plenty of Chinese competition for the RPi. They don't have the RPi standard install base, don't have as much software support and don't have cost advantages using the same CPU and paying similar royalties. The SiFive RISC-V SBCs have a cost advantage and some good CPU designs but they lack the RISC-V software which had to be built from zero and hasn't caught up yet. The 68k has the software, especially hot retro games, but lacks the hardware to compete. Worse than Motorola upper management was CBM upper management and all the Amiga IP squatters. It's just as bad today trying to compete using emulation which is worse than the EPIC fail of code morphing on VLIW CPUs.

Last edited by matthey on 09-May-2024 at 04:50 PM.

 Status: Offline
Profile     Report this post  
kolla 
Re: One major reason why Motorola and 68k failed...
Posted on 9-May-2024 5:16:55
#7 ]
Elite Member
Joined: 20-Aug-2003
Posts: 3270
From: Trondheim, Norway

@matthey

Quote:
The Amiga received Microsoft WordPerfect


What?

_________________
B5D6A1D019D5D45BCC56F4782AC220D8B3E2A6CC

Hammer 
Re: One major reason why Motorola and 68k failed...
Posted on 9-May-2024 6:04:51
#8 ]
Elite Member
Joined: 9-Mar-2003
Posts: 6039
From: Australia

@matthey

Quote:

68015
68010 ISA changes and external MMU support
64-pin DIP (pin compatible with 68000 and 68010)
data bus: 16-bit
address bus: 24-bit
32 bit ALUs
instruction cache: 64 or 128 byte direct mapped
data cache: 64 or 128 byte direct mapped

This 68015 could have been a significantly cheaper alternative to the 32 bit data bus 68020 while offering a drop in 68000 and 68010 replacement with a performance and feature upgrade. The small caches would have been more effective than the 68010 prefetch for loops and made the 32 bit ALUs more worthwhile than the many GP registers alone. With this intermediate upgrade, the 68020 could have even been skipped with the 68030 features added instead without the fear of making the chip too expensive. Maybe this 68015 could have been the 68020 and the 68030 would have been the full 32 bit upgrade with more pipelining.

I was thinking about 68000 with 100% 68020 instruction set compatibility.

The tactic is to remove instruction set mistakes from the market by pricing them out e.g. 68015 for $8 and 68000 for $10.

When Sega designed and released their Mega Drive in 1988, it would have used a 100% 68020-instruction-set-compatible 68015 instead of a 68000.

There are mistakes, and they need to be removed from the marketplace by a "big nanny", or by shepherding the platform customers in a certain direction.

The 68000's instruction set mistakes needed to be nipped in the bud.

The 100% instruction set compatibility with a faster CPU roadmap is the legacy software addiction.

Motorola couldn't convert the 68000's success with the newer 68K when its enemy was the 68000 itself.

Quote:

Once again, it was the 80386 with large flat address space and standard MMU that was the game changer and not the 80286. Yes, Motorola should have stopped disabling the MMU and made it standard when it wasn't worthwhile to make new chips without a MMU to reduce the silicon space.

It's to cultivate less reliance on single source non-MMU platform customers. A backdoor towards creating a cloneable desktop/workstation platform.

ARM has a "SystemReady" platform initiative.

Raspberry Pi 4, Raspberry Pi 400, and two Rockchip RK3399 SBC (RockPro64 & Leez P710) have received ARM SystemReady IR certification.

Raspberry Pi 4 has a UEFI firmware task force. https://github.com/pftf/RPi4

Raspberry Pi 4 had Arm SystemReady ES (server) certification allowing it to run Windows 11 for instance.

Sometimes, a "big nanny" from the CPU vendor is needed.

Reference
https://www.cnx-software.com/2021/10/16/raspberry-pi-4-rockchip-rk3399-sbcs-get-arm-systemready-ir-certification/


Quote:

The FPU probably should have been standard by the 68060 as well as one chip improves economies of scale and is easier to support. Also, this could have allowed the FPU and integer MUL and DIV instructions with 64 bit results to more easily share silicon instead of being removed (this is a potential advantage of an extended precision FPU with 64 bits of fraction precision instead of 53 bits for double precision).

Again, Motorola couldn't convert the 68000's success with the newer 68K when its enemy was the 68000 itself.

The death spiral can cause a reduction in R&D funds.

Motorola committed two separate ASIC designs for PowerPC, i.e. the 603 and the 604, in a single generation. Motorola had stricter product segmentation.

Meanwhile, Intel's P6 microarchitecture design scaled across Celeron, Pentium II, and Xeon.

Quote:

The 68000 is capable of good 6502 family emulation and there were good C64 Amiga emulators later but CBM lost customers by not prioritizing C64 emulators sooner. They seemed more fixated on 808x emulation which they also botched and wasted valuable development efforts on hardware PC support for the Amiga that increased Amiga costs.

For the ARM925T, PalmOS 5 had user-space DragonBall 68K emulation for legacy apps.

Officially, there wasn't a direct software legacy link between the C64 and the Amiga, hence the C64C/C128 was effectively a dead path.

Quote:

The Itanium offered x86 compatibility through Transmeta like "code morphing" which they called "dynamic binary translation".


Early attempts
https://slashdot.org/story/01/01/25/135201/itanium-preview-and-32-bit-benchmarks

"Tweakers.net has posted a preview of the Itanium that includes benchmarks of the x86 emulator. Looks pretty dismal here, as it struggles to keep up with even a Pentium I in many areas!"


------------
IA-32 EL benchmark: a 1.5GHz Itanium 2 processor compared to a 1.6GHz Xeon processor.
https://www.researchgate.net/figure/Relative-performance-of-IA-32-EL-running-on-a-15GHZ-ItaniumR-2-processor-compared-to-a_fig5_2884026

No problem for Athlon 64/Opteron 64 to beat.


Quote:

There were translation layers made to execute MIPS and SPARC code as well as x86 code. Itanium was an effort to consolidate and replace many high end ISAs for workstations and high end desktop markets. General purpose performance was the problem and not compatibility for VLIW CPUs. There is a higher VLIW resource cost somewhat like emulation too but lack of performance is what kills VLIW and emulation competitiveness for CPUs.

NVIDIA's Project Denver is a more recent example of a VLIW CPU core running ARMv8 through binary translation (dynamic recompilation).

Google's Nexus 9 had this SoC.
https://www.anandtech.com/show/8701/the-google-nexus-9-review/3

NVIDIA switched to the ARM Cortex-A78AE with the Orin SoC.


Quote:

The Amiga received Microsoft WordPerfect and Amiga Basic. Wait, were you talking about professional quality software for the Amiga?

Microsoft didn't sell WordPerfect.

The Amiga's WordPerfect 4.1 wasn't the Mac version, and the WordPerfect 5.0 the Amiga received is not the Mac version either.

https://www.macintoshrepository.org/2666-wordperfect-1-0
1988, WordPerfect 1.0 for Mac.

https://www.macintoshrepository.org/1034-wordperfect-2-1
WordPerfect 2.1 for Mac.

A no-brainer for Mac's publishing dominance.

The PC had a stable 640x480 resolution.

https://www.youtube.com/watch?v=VBPGzNWo66Y
https://winworldpc.com/product/microsoft-word/1x
MS Word for Windows 1.0 was released in 1989 for Windows 2.x. This version resembles the Mac version.

Amiga OCS was focused on action 2D games.


Quote:

There is plenty of Chinese competition for the RPi. They don't have the RPi standard install base, don't have as much software support and don't have cost advantages using the same CPU and paying similar royalties.

"Application processor" is a different market from the "embedded processor" market.

Refer to ARM SystemReady certification for RPi 4.

https://hackaday.com/2024/05/03/google-removes-risc-v-support-from-android/
Google removed RISC-V support from Android.

There are independent ARM clones from Apple's M series and Qualcomm's Oryon.

RISC-V is useful for single-purpose microcontrollers.

Last edited by Hammer on 09-May-2024 at 06:16 AM.
Last edited by Hammer on 09-May-2024 at 06:05 AM.

_________________
Amiga 1200 (rev 1D1, KS 3.2, PiStorm32/RPi CM4/Emu68)
Amiga 500 (rev 6A, ECS, KS 3.2, PiStorm/RPi 4B/Emu68)
Ryzen 9 7950X, DDR5-6000 64 GB RAM, GeForce RTX 4080 16 GB

Hammer 
Re: One major reason why Motorola and 68k failed...
Posted on 9-May-2024 7:02:38
#9 ]
Elite Member
Joined: 9-Mar-2003
Posts: 6039
From: Australia

@matthey

Quote:

The 68020 isn't pin compatible with the 68000 either.

My comment wasn't about physical drop-in replacement.


Quote:

The 68010 is pin compatible and mostly compatible despite a few ISA changes like making SR register accesses supervisor only. The 68010 usually could be dropped into an Amiga, Sega Genesis, Mac and later Atari computers with enough compatibility to boot and work most of the time. This includes the Sega Genesis without an OS because it operates in supervisor mode all the time so was unaffected by the ISA change. Some OSs which operated in user mode were slower with the 68010 if they handled supervisor mode violation exceptions but this made it mostly compatible to the 68000. The 68020+ kept the 68010 SR register accesses being supervisor mode only but there were some supervisor mode changes which affected compatibility, especially for bare metal programming in supervisor mode like the Sega Genesis uses. The PiStorm and AC68080 making SR register accesses allowed in user mode like the 68000 for what you call "100% instruction compatibility" doesn't affect this. Some Sega Genesis games would have needed patches to work on the 68020+ but it should have been a smoother transition than the change to SuperH.

Who will patch legacy Sega Genesis games for newer 68020/68030 CPUs and flash them into ROM?

Sega Mega Drive's ROM storage is not a hard disk or floppy disk.

Sega has enough 1st party game studios to partly solve the "chicken vs egg" problem, but the Saturn's SDK documentation was poor.

Both the Saturn and the 3DO made the quadrilateral 3D mistake.

Quote:

The 68020 and 68030 were very successful for embedded and desktop markets. Motorola did start losing workstation market share to RISC competition

Again, http://archive.computerhistory.org/resources/access/text/2013/04/102723315-05-01-acc.pdf
Page 86 of 417, DataQuest 1995

1994 Worldwide Microprocessor Market Share Ranking.

For 1994 Market Share
1. Intel, 73.2%
2. AMD, 8.6%
3. Motorola, 5.2%
4. IBM, 2.2%

Motorola's revenue in 1994 was less than AMD's.
-------------

Supply Base for 32-Bit Microprocessors—1994,
For Product's Share of Total 32-Bit-and-Up MPU Market 1994
Page 89 of 417,

68000, 17%
80386SX/SL, 3%
80386DX, 3%
80486SX, 16%
80486DX, 21%
683XX, 9%
68040, 3%
68030, 1%
68020, 3%
80960, 4%
AM29000, 1%
32X32, 3%
R3000/R4000, 1%
Sparc, 1%
Pentium, 4%
Others, 10%

Motorola wasn't able to convert 68000's success for 68020, 68030 and 68040.

AMD vs Motorola

AMD
80386DX = 3%, AMD has 85%, 2.55%
80386SX/SL = 3%, AMD has 56%, 1.68%
80486SX = 16%, AMD has 5%, 0.8%
80486DX = 21%, AMD has 16%, 3.36%
AM29000 = 1%
Sub-total: 9.39%


Motorola
68040, 3%
68030, 1%
68020, 3%
683XX, 9% (68000 and semi-custom-CPU32)
Sub-total: 16%

AMD's AM29000 = 1%
AMD's 3rd gen CPU market share = 4.23%
AMD's 4th gen CPU market share = 4.16%
Subtotal: 9.39%

Motorola's 2nd gen CPU market share = 3%
Motorola's 3rd gen CPU market share = 1%
Motorola's 4th gen CPU market share = 3%
Subtotal: 7%
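The subtotals above can be cross-checked mechanically; a quick sketch using the DataQuest percentages as quoted (each product's market share times the vendor's share of that product):

```python
# Cross-check of the AMD vs Motorola subtotals above: each product's share
# of the 32-bit-and-up MPU market times the vendor's share of that product.

amd = {
    "80386DX":    0.03 * 0.85,  # 2.55%
    "80386SX/SL": 0.03 * 0.56,  # 1.68%
    "80486SX":    0.16 * 0.05,  # 0.80%
    "80486DX":    0.21 * 0.16,  # 3.36%
    "AM29000":    0.01,         # AMD's own design
}
motorola = {"68020": 0.03, "68030": 0.01, "68040": 0.03}

amd_total = sum(amd.values())        # ~9.39% of the market
moto_total = sum(motorola.values())  # 7%, excluding the 683XX's 9%
```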

Motorola couldn't convert 68000's success for 68020/68030/68040.


683XX was a sloppy copy-and-paste of the 68000, with CPU32 a kitbashed 68EC020.

Quote:

Sure. The 68010 is what the 68000 should have been in 1979 but the market for MMUs was tiny then. Motorola needed to get out their 68k successors with updated features sooner but what they produced was generally high quality. Standardization could have been better but all major CPUs from the 68000-68060 work well with the Amiga.

Motorola allowed 68000's bad habits to continue instead of pricing it out.

Unix requirements changed over time.

Quote:

MMU pages for a segmented address space are not the same as MMU pages for a flat address space.

286 MMU can still apply memory protection.


Quote:

This is a major reason why the 80386 was the game changer for x86 and Intel and not the 80286. A standard on chip MMU was helpful for the desktop market but it came at a higher Intel price.

Motorola's 68030 with MMU was tracking Intel 386DX-25 prices until it was sideswiped by AMD's 386-40.

https://archive.computerhistory.org/resources/access/text/2013/04/102723262-05-01-acc.pdf
Page 119 of 981

For 1992
68000-12 = $5.5
68EC020-16 PQFP = $16.06,
68EC020-25 PQFP = $19.99,

68EC030-25 PQFP = $35.94
68030-25 CQFP = $108.75

68040-25 = $418.52
68EC040-25 = $112.50
---
Competition

AM386-40 = $102.50
386DX-25 PQFP = $103.00

486SX-20 PQFP = $157.75
486DX-33 = $376.75
486DX2-50 = $502.75


For the 68030-25, Motorola nearly price matched Intel's 386DX-25. Motorola didn't factor in AMD's 386-40 "attack of the clones".


Quote:

If an MMU was needed, an on chip MMU provided better performance and a potential cost savings compared to an external MMU. Most embedded CPUs did not use a MMU which was a larger part of Motorola's customers.

Embedded requirements change over time.

An MMU wasn't needed for most 16-bit MS-DOS software, yet MMU-less NEC 8086 clones still dropped from the market.

A major criticism against AC68080 is the lack of 68K MMU i.e. evolved 68060++ CPU power without 68K MMU.

Quote:

Some desktop computers did not have MMU support for some time too. ARM was introduced in 1985 with no MMU, no FPU, no caches, no hardware MUL & DIV, no misaligned address support, etc. and did not receive many of these long available 68k features for many years to come. Some did not become standard until AArch64. RISC-V chooses a la carte features for customer versatility like ARM used to but too much hardware versatility is hell for software developers and results in less optimized code.

PalmOS 5's ARM925T has an MMU. Handheld CPUs evolved with integrated MMUs.




Last edited by Hammer on 09-May-2024 at 07:11 AM.

_________________
Amiga 1200 (rev 1D1, KS 3.2, PiStorm32/RPi CM4/Emu68)
Amiga 500 (rev 6A, ECS, KS 3.2, PiStorm/RPi 4B/Emu68)
Ryzen 9 7950X, DDR5-6000 64 GB RAM, GeForce RTX 4080 16 GB

Lou 
Re: One major reason why Motorola and 68k failed...
Posted on 9-May-2024 14:02:43
#10 ]
Elite Member
Joined: 2-Nov-2004
Posts: 4228
From: Rhode Island

If you all really want to fight...then:

A 1 MHz 6502 was just as fast as a 7.14 MHz 68000 because many instructions on the 68000 take 8 cycles. :P
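Whether that clock-for-clock claim holds depends on average cycles per instruction (CPI); a back-of-envelope sketch, with both CPI figures being rough assumptions rather than measured workload averages:

```python
# Back-of-envelope instruction throughput (MIPS = million instructions/s).
# The average CPI (cycles per instruction) values are assumptions for
# illustration only; real averages depend heavily on the workload.

def mips(clock_mhz, avg_cpi):
    """Instruction throughput in millions of instructions per second."""
    return clock_mhz / avg_cpi

mips_6502 = mips(1.0, 3)    # assumed ~3 cycles per 6502 instruction
mips_68000 = mips(7.14, 8)  # assumed ~8 cycles per 68000 instruction

# The rates are the same order of magnitude, though a single 68000
# instruction can do more work (16/32-bit ALU ops, richer addressing).
```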

Since a C128 in C64 mode can run at closer to 1.2 MHz, Sonic on the C64 running on C128 hardware really outperforms all other base Amiga platformers until you get to the A1200.

A 6502 is just a RISC 6800. (6800 - not 68000).

The W65C02S was able to go to 14 MHz in 1983.

The TurboGrafx-16/PC Engine used a variant running at 7.16 MHz.
This is why the SNES, using a 16-ish-bit 65C816 variant at 3.57 MHz, ran circles around the Amiga.

A C128 with a REU with supported games like Sonic = Blast Processing! :)
https://www.youtube.com/watch?v=L4CGwp4N9xg

pixie 
Re: One major reason why Motorola and 68k failed...
Posted on 9-May-2024 14:21:50
#11 ]
Elite Member
Joined: 10-Mar-2003
Posts: 3384
From: Figueira da Foz - Portugal

@Lou

Regarding the SNES, was it the CPU or rather the gfx chip?

_________________
Indigo 3D Lounge, my second home.
The Illusion of Choice | Am*ga

Lou 
Re: One major reason why Motorola and 68k failed...
Posted on 9-May-2024 14:35:42
#12 ]
Elite Member
Joined: 2-Nov-2004
Posts: 4228
From: Rhode Island

@pixie

Quote:

pixie wrote:
@Lou

Regarding SNES, was the cpu or rather the gfx chip?

You can say the same thing about Amiga vs C64...

You got smaller code and more registers going to the 68000 but, clock for clock, the 6502 wins.
I remember learning 8088 assembly in college and noting how there was one instruction on the 6502 that required two instructions on the 8088. The 6502 is very efficient. You lose one register compared to the 6800, but it was cheaper and faster. It's akin to the Z80 vs the 8086.

Note: a 1 MHz 6502 is more or less equivalent to a 4 MHz Z80.

Last edited by Lou on 09-May-2024 at 02:38 PM.

matthey 
Re: One major reason why Motorola and 68k failed...
Posted on 9-May-2024 21:55:42
#13 ]
Elite Member
Joined: 14-Mar-2007
Posts: 2387
From: Kansas

Hammer Quote:

I was thinking about 68000 with 100% 68020 instruction set compatibility.

The tactic is to remove instruction set mistakes from the market by pricing them out e.g. 68015 for $8 and 68000 for $10.

When Sega designed and released their Mega Drive in 1988, it would be 100% 68020-based instruction set compatible 68015 instead of 68000.

There are mistakes and they need to be removed from the marketplace by "big nanny" or shepherd the platform customers in a certain direction.

68000's instruction set mistakes need to be culled in the bud.

The 100% instruction set compatibility with a faster CPU roadmap is the legacy software addiction.

Motorola couldn't convert the 68000's success with the newer 68K when its enemy was the 68000 itself.


The 68010 was not enough of a performance upgrade to replace the 68000 considering the higher price. The 32 bit 68020 price and cost of 32 bit hardware was too high for many customers. That is why I suggested the 68015 with enough performance and better value to upgrade 68000 customers while cheaper 16 bit hardware and existing 68000 hardware could be used.

The 68010 was kind of like Amiga ECS which was not enough of an upgrade from Amiga OCS for most customers to justify the upgrade. My 68015 suggestion is a more Amiga Ranger like upgrade that is better and sooner but still practical compared to more major upgrades to follow. The 68008 was kind of like trying to sell a 68000@7MHz ECS Amiga 600 at the same time as a 68EC020@14MHz AGA Amiga 1200.

Hammer Quote:

Again, Motorola couldn't convert the 68000's success with the newer 68K when its enemy was the 68000 itself.

The death spiral can cause a reduction in R&D funds.

Motorola committed two separate ASIC designs for PowerPC i.e. 603 and 604 in a single generation. Motorola has a stricter product segmentation.

Meanwhile, Intel P6 microarchitecture design scales from Celeron, Pentium II, and Xeon.


The Intel OoO P6 microarchitecture was used for the low end Celeron because their in-order P5 microarchitecture was disappointing in power, performance and area (PPA). This limited how low x86 could scale and mostly kept them out of embedded markets. In comparison, the 68060 primarily sold into embedded markets where it was successful and had a long life.

Before the low end PPC 603 and high end PPC 604, there was a universal PPC 601. The low end PPC 603 targeted the embedded and low end desktop markets, was bad at both, and gave PPC a poor reputation. Low end PPC has always been hurt by poor code density. Doubling the caches and a die shrink for the PPC 603 and PPC 604 were a good way to improve performance but not value (value=performance/$). PPC was already showing a lack of competitiveness before they tried to clock up the shallow pipelines.

Hammer Quote:

For ARM925T, PalmOS 5 had user space Dragon Ball 68K emulation for legacy apps.

Officially, there wasn't a software legacy direct link between C64 and the Amiga, hence C64c/C128 was effectively a dead path.


There was limited upgrade potential while maintaining software compatibility for hardware like the C64. Simply clocking the CPU from 1MHz to 2MHz would break a significant amount of software. In comparison, an Amiga CPU which was originally clocked at 7MHz could be clocked to 7GHz and most of the software would be compatible. Existing C64 software can only address 64kiB of memory while existing Amiga software can address and use up to 2GiB of memory. The 68k Amiga large flat address space doesn't seem as huge as it used to be but it is still pretty good for a memory miser CPU like the 68k thanks to the foresight of Jay Miner.
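The addressing gap is easy to make concrete. A small sketch of how the same computed address wraps under each architectural limit (the masks are architectural address widths, not any specific machine's memory map; the 2 GiB figure above is a practical OS-level limit within the 68020+'s 4 GiB):

```python
# Architectural address-space limits, expressed as pointer masks.
MASK_6502  = 0xFFFF      # 16-bit addresses: 64 KiB
MASK_68000 = 0xFFFFFF    # 24 address pins on the 68000: 16 MiB
MASK_68020 = 0xFFFFFFFF  # full 32-bit addresses on the 68020+: 4 GiB

ptr = 0x123456  # an address computed by a program

low16 = ptr & MASK_6502   # 0x3456 -- on a 6502 the high bits don't exist
low24 = ptr & MASK_68000  # 0x123456 -- fits comfortably on a 68000
```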

Hammer Quote:

Early attempts
https://slashdot.org/story/01/01/25/135201/itanium-preview-and-32-bit-benchmarks

"Tweakers.net has posted a preview of the Itanium that includes benchmarks of the x86 emulator. Looks pretty dismal here, as it struggles to keep up with even a Pentium I in many areas!"


------------
IA-32 EL benchmark: a 1.5GHz Itanium 2 processor compared to a 1.6GHz Xeon processor.
https://www.researchgate.net/figure/Relative-performance-of-IA-32-EL-running-on-a-15GHZ-ItaniumR-2-processor-compared-to-a_fig5_2884026

No problem for Athlon 64/Opteron 64 to beat.


It's still better performance than emulation while maintaining compatibility. I did say the problem with a VLIW CPU is general purpose performance.

Hammer Quote:

NVIDIA's Project Denver is a recent VLIW CPU core emulating "binary translation (dynamic recompilation)" ARMv8 example.

Google's Nexus 9 had this SoC.
https://www.anandtech.com/show/8701/the-google-nexus-9-review/3

NVIDIA switch to ARM Cortex-A78AE with Orin SoC.


Good article that adds transparency to the Nvidia VLIW propaganda. VLIW has several positive traits like high peak IPC and low power operation, but on average 15% of instructions are branches, and branches are the weakness of VLIW. To be fair, NVIDIA's Denver core performance was on par with some OoO cores and outperformed the ARM OoO cores of the time. Today's in-order cores outperform those older OoO ARM cores and are much easier to develop than either OoO or VLIW cores. We haven't even seen what the limits of an in-order CISC core are, but it may reach half the performance of an OoO core for a tiny fraction of the cost.

Hammer Quote:

Microsoft didn't sell WordPerfect.


For whatever reason, I was thinking Microsoft bought WordPerfect before swapping it out for Word, but it was Corel. WordPerfect was a major piece of productivity software the Amiga received though. The Amiga wasn't that bad for productivity software but was generally behind Windows and Mac.

Hammer Quote:

"Application processor" is a different market from the "embedded processor" market.

Refer to ARM SystemReady certification for RPi 4.

https://hackaday.com/2024/05/03/google-removes-risc-v-support-from-android/
Google removed RISC-V support from Android.

There are independent ARM clones from Apple's M series and Qualcomm's Oryon.

RISC-V is useful for single-purpose microcontrollers.


There are MCU and non-MCU embedded SoCs for the embedded market today. When the 68000 came to market, there was very limited silicon space for the additional hardware in MCUs and SoCs. Some CPUs had a few features of modern MCUs and SoCs, but there were generally just CPUs and external support chips for the desktop and embedded markets. Integration improved with later 68k generations like the 683xx chips, which were early SoCs but had no or too little on-chip memory to be considered MCUs.

Hammer Quote:

Who will patch legacy Sega Genesis games for newer 68020/68030 CPUs and flash them into ROM?

Sega Mega Drive's ROM storage is not a hard disk or floppy disk.


The easy solution is to include a 68000 like the Atari Jaguar did. The Jaguar was designed to use the 68000 for I/O processing, but many games used it as the main CPU because developers were familiar with it and it avoided bugs in other hardware.

Hammer Quote:

Again, http://archive.computerhistory.org/resources/access/text/2013/04/102723315-05-01-acc.pdf
Page 86 of 417, DataQuest 1995

1994 Worldwide Microprocessor Market Share Ranking.

For 1994 Market Share
1. Intel, 73.2%
2. AMD, 8.6%
3. Motorola, 5.2%
4. IBM, 2.2%

Motorola's revenue in 1994 was less than AMD's.
-------------

Supply Base for 32-Bit Microprocessors—1994,
For Product's Share of Total 32-Bit-and-Up MPU Market 1994
Page 89 of 417,

68000, 17%
80386SX/SL, 3%
80386DX, 3%
80486SX, 16%
80486DX, 21%
683XX, 9%
68040, 3%
68030, 1%
68020, 3%
80960, 4%
AM29000, 1%
32X32, 3%
R3000/R4000, 1%
Sparc, 1%
Pentium, 4%
Others, 10%

Motorola wasn't able to convert 68000's success for 68020, 68030 and 68040.


Customers upgrading to newer 68k CPUs was not as bad as the data appears. Embedded customers did not upgrade in the early years because the 68000 was usually adequate for their needs and in their low price range. When the price of the 68020+ came down after a few years, the 68020, 68030 and 683xx chips started to sell more into the embedded market. The 68020 and 68030 sold less as the 683xx chips became popular with SoC features lowering hardware costs. The desktop market had already moved on to the 68040 also causing 68030 sales to drop by the time of the data above. Revenue was low because the embedded market is high volume but low margin.

Hammer Quote:

683XX was a sloppy copy-and-paste with 68000 and kitbashed 68EC020 as CPU32.


The 683xx chips were very popular in the embedded market for the same reason that SoCs and MCUs are popular today. Motorola should have been more standardized and consistent with features and upgraded them better though. Later, ARM did a better job of catering and customizing to embedded customers' needs, while Motorola unnecessarily castrated the loved 68k into the incompatible ColdFire for the low end embedded market and shoved fat PPC down customers' throats, increasing hardware costs for the mid to high end embedded market.

Hammer Quote:

286 MMU can still apply memory protection.


The 808x and x86 had various security features like segment addressing limitations, hardware virtualization, protection rings, etc. These features were more miss than hit and often went unused even though they could have provided more security. The implementation is important, and MMU pages over segmented memory are not optimal.

Hammer Quote:

Embedded requirements change over time.

MMU wasn't needed for most 16-bit MS-DOS software but still resulted in MMU-less NEC 8086 clones dropping from the market.

A major criticism against AC68080 is the lack of 68K MMU i.e. evolved 68060++ CPU power without 68K MMU.


Most AC68080 users want to use the core for desktop like retro use. I haven't heard of any embedded customers asking for a MMU even though a MMU is used in more embedded hardware today.

Last edited by matthey on 09-May-2024 at 10:24 PM.
Last edited by matthey on 09-May-2024 at 09:57 PM.

OneTimer1 
Re: One major reason why Motorola and 68k failed...
Posted on 9-May-2024 22:23:57
#14 ]
Super Member
Joined: 3-Aug-2015
Posts: 1111
From: Germany

@Matt3k


There were two times when Motorola and its 68k missed the mass market:

1st

When it goes to PCs (before IBM entered the market) Intel feared the Z8000 from Zilog more than the 68k.

Zilog with its Z80 had built a 8080 compatible CPU that was better in every aspect.

The 68000 was something for bigger systems, systems that where thought to be to expensive for the PC market.

IBM, on the other hand, was not interested in making the next big computer revolution; they just wanted a cheap machine that could act as a terminal for their mainframes and could be sold to customers who were asking for such a useless toy called a PC.

Because Intel was afraid of Zilog, they quickly put together an enhanced 8080 with 16-bit registers and some additional segment registers extending the address space to 1MB.

Motorola was out of this game because a fully built 68k system would have been more expensive than the enhanced 8-bit junk from Intel.

The rest is history.

2nd

68k variants were later used in a lot of high end workstations, with or without the original 68k MMU. The VME bus (technically a 68k bus with additional signals) became an industry standard for modular Unix systems.
A lot of those 32-bit 68k systems from Sharp, Sun, HP, Apple and other companies were used wherever real work had to be done.

But Motorola couldn't ramp up the performance of their 68k line, so many workstation companies started to build their own CPUs using RISC technology: MIPS at SGI, SPARC at Sun and PA-RISC at HP were typical products of this period. Motorola tried to compete with its own MC88 family but failed.
And in the end they were all made obsolete by 100MHz i486 or Pentium systems running Windows NT.

The rest is history.

In the end Motorola lost its own high performance CPU architecture. I don't know if they could have built a PPC without paying IBM for the use of their architecture; using ARM instead of PPC for their microcontrollers could even be cheaper for them.

Last edited by OneTimer1 on 09-May-2024 at 10:25 PM.
Last edited by OneTimer1 on 09-May-2024 at 10:24 PM.

matthey 
Re: One major reason why Motorola and 68k failed...
Posted on 9-May-2024 23:44:05
#15 ]
Elite Member
Joined: 14-Mar-2007
Posts: 2387
From: Kansas

Lou Quote:

If you all really want to fight...then:

A 1 Mhz 6502 was just as fast as a 7.14 Mhz 68000 because many instructions on the 68000 take 8 cycles. :P


No. The basic instruction on the 68000 is 4 cycles, which I believe is 2 cycles on a 6502 family CPU. Therefore, a 6502@2MHz may be able to reach the performance of a 68000@8MHz, but this is a theoretical best case by clock cycles and there may be other limiting factors. In reality, a 6502@4MHz is closer to the equivalent of a 68000@8MHz, and the 68000 still has a major advantage with larger datatypes. This is pretty close to the Sega Mega Drive (Genesis in N/A) vs the SNES.

https://segaretro.org/Sega_Mega_Drive/Hardware_comparison

The SNES Ricoh 5A22@~3.5MHz holds its own with 8 bit datatypes while the Sega 68000@~7.6MHz has a large advantage with 16 bit datatypes and a huge advantage with 32 bit datatypes (shown in the first chart). The 68000 has more memory bandwidth and doesn't have to access memory as often, because more registers and better code density reduce memory accesses for data and instructions. The 68000 also has fewer instructions to execute.

Lou Quote:

Since a C128 in C64 mode can run at closer to 1.2 Mhz, Sonic on the C64 on 128 hardware really outperforms all other base Amiga platformers until you get to the A1200.


There is some impressive software for the C64, but the 68000 Amiga is vastly better hardware than a C64 or C128. The SNES is a closer comparison to the 68000 Amiga, but the Sega seems to have better hardware than both.

https://segaretro.org/Sega_Mega_Drive/Hardware_comparison Quote:

Vs. Amiga

The Mega Drive was generally more powerful than the Amiga. The Mega Drive's 68000 CPU is clocked at 7.6 MHz, while the Amiga's 68000 CPU was clocked at 7.16 MHz (NTSC) or 7.09 MHz (PAL). The Mega Drive displays eighty 15-color sprites at 32×32 pixels each, while the Amiga displays eight 3-color sprites at 8 pixels wide. The Mega Drive displays 61–64 colors standard and 183–192 colors with Shadow/Highlight, while the Amiga displays 2–32 colors standard and 64 colors with EHB. The Mega Drive's VDP can DMA blit 3.21845–6.4 MB/s bandwidth (6.4 MPixels/s fillrate), while the Amiga's Blitter can blit 1.7725–3.58 MB/s (2.363333–4.773333 MPixels/s with 64 colors). During active display, with 64 colors at 60 FPS, the VDP can write 708 KB/s to 2 MB/s (1.4–2 MPixels/s) during 320×224 display, while the Blitter can write 332.5–700 KB/s (443,333–933,333 pixels/s) during 320×200 display. The Mega Drive supports tilemap backgrounds, reducing processing, memory and bandwidth requirements by up to 64 times compared to the Amiga's bitmap backgrounds, giving the Mega Drive an effective tile fillrate of 6–36 MPixels/s. The Mega Drive has a Z80 sound CPU and supports 10 audio channels, while the Amiga lacks a sound CPU and supports 4 audio channels.


The Sega Mega Drive is better at what it is good at, especially sprites and tilemaps, but the Amiga has some versatility advantages, and the sound is slighted here since the Amiga has 4 DMA audio channels that rarely require the CPU and sound better than the Mega Drive's. The SNES was the most colorful of the three and had a good chipset but struggled at times with performance, especially CPU performance. Jay Miner would have had the Ranger chipset in the competition but CBM liked the C64 performance level too much.

Lou Quote:

A 6502 is just a RISC 6800. (6800 - not 68000).


No. The 6502 may use some RISC-like concepts such as a reduced instruction set and simplification, but an accumulator architecture is much different from a RISC architecture. In a RISC architecture, memory is only accessed by load/store instructions moving values into or out of GP registers, while accumulator and CISC architectures perform mem-reg and possibly reg-mem operations like ADD with a memory operand. RISC architectures also need many GP registers because of this.
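A toy illustration of that load/store distinction, using made-up mnemonics rather than real 68k or RISC assembly:

```python
# Hypothetical instruction sequences for incrementing a counter held
# in memory. The mnemonics are illustrative only, not real assembly.

# Accumulator/CISC style: ALU instructions may take a memory operand,
# so a read-modify-write is a single instruction.
cisc_inc_mem = ["ADD #1,mem"]

# RISC load/store style: only loads and stores touch memory, so the
# same work decomposes into three instructions over GP registers --
# one reason RISC designs need many GP registers.
risc_inc_mem = ["LOAD mem,r1", "ADD #1,r1", "STORE r1,mem"]

assert len(cisc_inc_mem) == 1
assert len(risc_inc_mem) == 3
```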

Lou Quote:

The W65C02S was able to go to 14mhz in 1983.

The TurboGrafx-16/PC Engine used a variant that was running at 7.16Mhz.
This is why the SNES using a 16ish-bit variant 3.57 Mhz 65C816 ran circles around the Amiga.

A C128 with a REU with supported games like Sonic = Blast Processing! :)
https://www.youtube.com/watch?v=L4CGwp4N9xg


Where the SNES ran circles around the Amiga, it was because of the chipset and 5 years newer silicon and not because of the CPU.

Last edited by matthey on 09-May-2024 at 11:48 PM.

Lou 
Re: One major reason why Motorola and 68k failed...
Posted on 10-May-2024 2:46:43
#16 ]
Elite Member
Joined: 2-Nov-2004
Posts: 4228
From: Rhode Island

@matthey

Not quite sir.
Please see here:
https://wiki.neogeodev.org/index.php?title=68k_instructions_timings

Some are 4, some are 6, some are 8, some are 10, some are 12....the list goes on ... One even takes 34 cycles...

The average is closer to 8.

So, again, depending on the workload and how many registers a task requires, a 6502 wins in some cases and loses in others. But generally speaking there isn't as much performance difference as you think.

Last edited by Lou on 10-May-2024 at 02:49 AM.

matthey 
Re: One major reason why Motorola and 68k failed...
Posted on 10-May-2024 6:41:57
#17 ]
Elite Member
Joined: 14-Mar-2007
Posts: 2387
From: Kansas

Lou Quote:

Not quite sir.
Please see here:
https://wiki.neogeodev.org/index.php?title=68k_instructions_timings

Some are 4, some are 6, some are 8, some are 10, some are 12....the list goes on ... One even takes 34 cycles...

The average is closer to 8.


A 68000 DIVS instruction can take more than 158 cycles, and even that may save cycles over the 6502 performing a similar division in software. It is better to look at the most common and basic operations like ADD mem,Dn, which is 4 cycles on the 68000 and I believe 2 cycles on the 6502. The 68000 has addressing modes that increase instruction cycle counts, but the equivalent code on the 6502 is often multiple instructions.

Lou Quote:

So, again, depending on workload and how many registers is required for a task, a 6502 wins in some cases and loses in others. But generally speaking there isn't as much performance difference as you think.


The 6502 had good performance/MHz. If the 6502 is 2.5 times the performance/MHz of the 68000 but the 68000 has 8 times the clock speed, the 68000 has better performance. The 6502 isn't going to win this battle very often.
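The arithmetic behind that claim, with the 2.5x per-clock figure taken as the generous assumption it is:

```python
# Relative performance = (performance per MHz) x (clock in MHz).
# Assumed numbers from the post: the 6502 gets 2.5x the work done
# per clock, but the 68000 runs at 8x the clock (1MHz vs 8MHz).
perf_per_mhz_6502 = 2.5   # relative to the 68000's 1.0 (assumption)
perf_per_mhz_68000 = 1.0

relative_6502 = perf_per_mhz_6502 * 1.0    # 6502 @ 1 MHz
relative_68000 = perf_per_mhz_68000 * 8.0  # 68000 @ 8 MHz

assert relative_68000 > relative_6502      # 8.0 vs 2.5
```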

Hammer 
Re: One major reason why Motorola and 68k failed...
Posted on 10-May-2024 7:40:15
#18 ]
Elite Member
Joined: 9-Mar-2003
Posts: 6039
From: Australia

@matthey

Quote:

The 68010 was not enough of a performance upgrade to replace the 68000 considering the higher price. The 32 bit 68020 price and cost of 32 bit hardware was too high for many customers. That is why I suggested the 68015 with enough performance and better value to upgrade 68000 customers while cheaper 16 bit hardware and existing 68000 hardware could be used.

The 68010 was kind of like Amiga ECS which was not enough of an upgrade from Amiga OCS for most customers to justify the upgrade. My 68015 suggestion is a more Amiga Ranger like upgrade that is better and sooner but still practical compared to more major upgrades to follow. The 68008 was kind of like trying to sell a 68000@7MHz ECS Amiga 600 at the same time as a 68EC020@14MHz AGA Amiga 1200.

The goal is to de-incentivise the 68000.

Motorola was aware of the 68000's instruction set mistakes, but continued to sell the 68000 at a lower cost compared to other, newer 68K CPU designs.



Quote:

The Intel OoO P6 microarchitecture was used for the low end Celeron because their in-order P5 microarchitecture was disappointing in power, performance and area (PPA). This limited how low x86 could scale and mostly kept them out of embedded markets. In comparison, the 68060 primarily sold into embedded markets where it was successful and had a long life.

Pentium P54 delivers superior Quake results when compared to 68060.

Pentium P54 was available in low power consumption SKUs e.g. Embedded Pentium 133 with VRE with 7.9 to 12.25 watts. Embedded Pentiums were used in laptops which is not the case for 68060.

The desktop PC's Pentiums are biased toward maximum performance with less concern for low power.

For budget gaming PCs, Celeron is the superior option compared to Pentium MMX 233Mhz.


Quote:

Before the low end PPC 603 and high end PPC 604, there was a universal PPC 601. The low end PPC 603 targeted embedded and low end desktop markets where it was bad at both giving PPC a poor reputation.

PPC 603's native code is pretty good. 68K emulation is better with a larger cache.

On native code, PPC 601 FPU is pretty good compared to Pentium P54's FPU. Pentium FPU was improved with Pentium MMX.

Quote:

Low end PPC has always been bad due to poor code density though. Doubling the caches and a die shrink for the PPC 603 and PPC 604 is a good way to improve performance but not value (value=performance/$). PPC was already showing a lack of competitiveness before they tried to clock up the shallow pipelines.

For Apple, 68K emulation is a problem.

PowerPC has other problems, from PC-ported games mixing GPR and FPU registers to being less optimal for stack-based C++ function calls, etc.

Quote:

There was limited upgrade potential while maintaining software compatibility for hardware like the C64. Simply clocking the CPU from 1MHz to 2MHz would break a significant amount of software.

Solved by a "turtle" mode. The PC had the "turbo" button.

PC's DirectX has to factor in laptop to desktop hardware scaling, hence the resource tracking feature.

PS4 Pro and PS5 have PS4 legacy mode to match timings. Hit-the-metal access has a downside.

Quote:

In comparison, an Amiga CPU which was originally clocked at 7MHz could be clocked to 7GHz and most of the software would be compatible.

For WHDLoad games, the high performance emulated 68K CPU from PiStorm/Emu68 needs "turtle" features. Considerable effort was invested in Emu68's "turtle" features.

WinUAE is aware of the timing scope for CPU and Amiga Custom Chips.

You're forgetting Vampire has turtle mode. I used modified CoffinOS R58 to R60 to insert Emu68's "turtle" features.

Quote:

Existing C64 software can only address 64kiB of memory while existing Amiga software can address and use up to 2GiB of memory. The 68k Amiga large flat address space doesn't seem as huge as it used to be but it is still pretty good for a memory miser CPU like the 68k thanks to the foresight of Jay Miner.

Western Design Center (WDC)'s 65C816 has 24-bit memory addressing via segmented memory, but it launched in 1985, which was too late for many 16-bit designs of the early to mid 1980s.

For 32-bit, Bill Mensch recommended ARM and canceled the WDC 65C832.

The 65C832 was to be the 32-bit version of the 6500 series, with linear addressing over a 24-bit address bus and a 32-bit ALU. Bill Mensch doesn't have R&D resources at the level of ARM's.

Commodore didn't have a proper business plan for competitive CPU R&D.

Intel 386's 1985 release was timely for the X86 PC's evolution, but too late for Amiga Lorraine, Apple Lisa and Atari ST.

Later, Intel reused the memory segmentation concept with Pentium Pro's PAE-36 as a stop-gap measure for memory requirements larger than 4GB.

When AMD defined their 64-bit extension for x86 architecture, AMD64 or x86-64, AMD also enhanced the paging system in "Long Mode" based on PAE.

Reference
https://downloads.reactivemicro.com/Electronics/CPU/WDC%2065C832%20Datasheet.pdf
Documentation for unreleased 65C832.

Quote:

It's still better performance than emulation while maintaining compatibility. I did say the problem with a VLIW CPU is general purpose performance.

The problem is IBM's second source insurance i.e. AMD.

Quote:

Good article that adds transparency to the Nvidia VLIW propaganda. VLIW has several positive traits like high peak IPC and low power operation but 15% of instructions on average are branches and they are the weakness of VLIW. To be fair, NVIDIA's Denver core performance was on par with some OoO cores and outperformed ARM OoO cores back when it was developed. In-order cores outperform OoO ARM cores from back then and are much easier to develop than either OoO or VLIW cores. We haven't even seen what the limits of an in-order CISC core are but they may be half of the performance of an OoO core for a tiny fraction of the cost.

NVIDIA's current CPU direction is focusing on ARM's Cortex X and Neoverse families.

Qualcomm has the jump on ARM clones via Nuvia purchase. Nuvia has key engineers from Apple M1.

Quote:

We haven't even seen what the limits of an in-order CISC core are but they may be half of the performance of an OoO core for a tiny fraction of the cost.

68060 is expensive.


Quote:

For whatever reason, I was thinking Microsoft bought WordPerfect before swapping it out for Word but it was Corel. WordPerfect was a major productivity software the Amiga received though. The Amiga wasn't that bad for productivity software but generally behind Windows and Mac.

For publishing and WYSIWYG word processing, ECS Denise should have been released in 1987, or Amiga's Ranger (7 bitplanes, 128-color display).

Apple had stable monochrome, 16 color and 256-color business resolution Macs in 1987.

Amiga's ProWrite was okay, but the interlaced business resolution wasn't good.

Amiga's successes were in games and Video Toaster.

The A2000 in 1987 should have had at least ECS Denise for stable 4-color publishing and WYSIWYG word processing at business resolution. ECS Denise was too late.

Quote:

I haven't heard of any embedded customers asking for a MMU even though a MMU is used in more embedded hardware today.

Motorola's 68K lost the handheld smart device market despite a good start.

Last edited by Hammer on 10-May-2024 at 09:04 AM.
Last edited by Hammer on 10-May-2024 at 07:44 AM.

_________________
Amiga 1200 (rev 1D1, KS 3.2, PiStorm32/RPi CM4/Emu68)
Amiga 500 (rev 6A, ECS, KS 3.2, PiStorm/RPi 4B/Emu68)
Ryzen 9 7950X, DDR5-6000 64 GB RAM, GeForce RTX 4080 16 GB

OlafS25 
Re: One major reason why Motorola and 68k failed...
Posted on 10-May-2024 9:04:38
#19 ]
Elite Member
Joined: 12-May-2010
Posts: 6448
From: Unknown

@Matt3k

the main problem to me was that developing new 68k processors would have needed a high investment in R&D. Motorola was not able or willing to do that. Instead, RISC like PowerPC offered an easier route that seemed less risky and more profitable than further investment in the CISC 68k range of processors. In the end it is all about money. In the long run CISC won, with the exception of ARM of course. PowerPC today is only used in servers and partly in embedded hardware. Except for ARM, most RISC processors disappeared.

@Hammer

as a community we are largely stuck in a time bubble of the early 90s. For many, Amiga is 68k + chipsets. If we had a living OS and not a retro community, Amiga hardware would be very different (like a PC) and AmigaOS would also be very different from today.

Last edited by OlafS25 on 10-May-2024 at 09:20 AM.
Last edited by OlafS25 on 10-May-2024 at 09:19 AM.
Last edited by OlafS25 on 10-May-2024 at 09:08 AM.

Matt3k 
Re: One major reason why Motorola and 68k failed...
Posted on 10-May-2024 13:09:45
#20 ]
Regular Member
Joined: 28-Feb-2004
Posts: 244
From: NY

@Everyone

Great information I wasn't aware of, just like I wasn't aware of the process Intel adopted before I read the book (FYI it does have some great ideas to glean for any business).

Seems like more than a few factors and some by Motorola themselves.

Thanks to everyone for the info...

Concerning Commodore and the Amiga, I think the biggest issue that destroyed both was firing Tom Rattigan. He had already proved that he could pivot, bring Commodore back from the brink of collapse and clean up the portfolio (i.e. the 500 and 2000), and that he could read the market and make tough decisions. The guy saved Commodore once, so it isn't a big stretch to me that his talent could have carried it forward. He also left the engineering guys to do their thing.

It was interesting that John Sculley and he both came from Pepsi, and I think Commodore fared better with Rattigan than Apple did with Sculley. It would have been very interesting to see where it all would have gone if Rattigan had been left in place...

Last edited by Matt3k on 10-May-2024 at 01:10 PM.


Copyright (C) 2000 - 2019 Amigaworld.net.
Amigaworld.net was originally founded by David Doyle