Amigaworld.net
/  Forum Index
   /  Classic Amiga Hardware
      /  One major reason why Motorola and 68k failed...
OlafS25 
Re: One major reason why Motorola and 68k failed...
Posted on 10-May-2024 15:07:13
#21 ]
Elite Member
Joined: 12-May-2010
Posts: 6369
From: Unknown

@Matt3k

The main problem with the Amiga was not one manager but a complete lack of understanding of the technology. Even the sales management responsible for the Amiga used PCs rather than Amigas (if I understand the stories I have read correctly). To them it was just another home computer they earned money with, not the heart of the company and its future. I cannot imagine people at Apple using PCs instead of their own products. Commodore spent a lot of money on useless managers but hardly any on technology.

Last edited by OlafS25 on 10-May-2024 at 05:41 PM.

Matt3k 
Re: One major reason why Motorola and 68k failed...
Posted on 10-May-2024 17:18:26
#22 ]
Regular Member
Joined: 28-Feb-2004
Posts: 228
From: NY

@OlafS25

Very fair point, but he was there for such a short period that neither you nor I could predict that.

In his short tenure he took Commodore from a $230 million loss to almost $25 million in profit in roughly one quarter, and diversified the portfolio. Sadly, we never got the time to see more. He literally saved the company in a very short period, and then he was taken out back and beaten up.

I think he was smart enough to see the weaknesses and address them. I would surmise he would have wanted to get a real handle on the products to steer the ship forward. I know this is very academic at this point, but it is fun nonetheless.

Many times one person with leadership can save a company. Steve Jobs certainly made Apple relevant and created a future for it, so I could certainly have seen the same at Commodore...

Lou 
Re: One major reason why Motorola and 68k failed...
Posted on 11-May-2024 0:25:11
#23 ]
Elite Member
Joined: 2-Nov-2004
Posts: 4181
From: Rhode Island

@matthey

Quote:

matthey wrote:
Lou Quote:

Not quite sir.
Please see here:
https://wiki.neogeodev.org/index.php?title=68k_instructions_timings

Some are 4, some are 6, some are 8, some are 10, some are 12....the list goes on ... One even takes 34 cycles...

The average is closer to 8.


A 68000 DIVS instruction can take more than 158 cycles and that may save cycles over the 6502 performing a similar division in software. It is better to look at the most common and basic operations like ADD mem,Dn which is 4 cycles on the 68000 and I believe 2 cycles on the 6502. The 68000 has addressing modes that increase the cycles of instructions but the equivalent code on the 6502 is often multiple instructions.

Lou Quote:

So, again, depending on workload and how many registers is required for a task, a 6502 wins in some cases and loses in others. But generally speaking there isn't as much performance difference as you think.


The 6502 had good performance/MHz. If the 6502 is 2.5 times the performance/MHz of the 68000 but the 68000 has 8 times the clock speed, the 68000 has better performance. The 6502 isn't going to win this battle very often.


You are cherry-picking one instruction. Since the 6502 doesn't have DIV and MUL, most coders used LUTs, which are faster than looping over ordinary add/sub...

I repeat: the 6502 in 1983 could do 14 MHz, and the SuperCPU of the late 80s claimed 20 MHz.
Try over-clocking a 68000 to 150 MHz...
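Lou's LUT point can be made concrete. The classic trick on MUL-less CPUs like the 6502 is quarter-square multiplication; here is a minimal Python sketch of the idea (the names `SQ4` and `mul8` are mine):

```python
# The quarter-square trick behind many 6502 multiply LUTs. It is exact
# for integers because (a+b) and (a-b) always share parity:
#     a*b = (a+b)**2 // 4 - (a-b)**2 // 4
# Only 511 table entries cover all 8-bit operands (a+b <= 510), far
# cheaper than a full 256x256 product table.
SQ4 = [n * n // 4 for n in range(511)]

def mul8(a, b):
    """Multiply two unsigned 8-bit values using the quarter-square table."""
    assert 0 <= a <= 255 and 0 <= b <= 255
    return SQ4[a + b] - SQ4[abs(a - b)]

assert mul8(7, 9) == 63
# Exhaustive check against real multiplication
assert all(mul8(a, b) == a * b for a in range(256) for b in range(256))
```

On a real 6502 the two table lookups and the subtraction cost far fewer cycles than a shift-and-add multiply loop, which is why the technique was so common.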

Hammer 
Re: One major reason why Motorola and 68k failed...
Posted on 11-May-2024 3:44:26
#24 ]
Elite Member
Joined: 9-Mar-2003
Posts: 5367
From: Australia

@Lou

The MC68SZ328's 68000 core runs at 66 MHz. It was displaced in the handheld market by the ARM925T (ARMv4T) at 126 to 144 MHz.

_________________
Ryzen 9 7900X, DDR5-6000 64 GB RAM, GeForce RTX 4080 16 GB
Amiga 1200 (Rev 1D1, KS 3.2, PiStorm32lite/RPi 4B 4GB/Emu68)
Amiga 500 (Rev 6A, KS 3.2, PiStorm/RPi 3a/Emu68)

Hammer 
Re: One major reason why Motorola and 68k failed...
Posted on 11-May-2024 3:54:12
#25 ]
Elite Member
Joined: 9-Mar-2003
Posts: 5367
From: Australia

@OlafS25

Quote:
As a community we are largely stuck in a time bubble of the early 90s. For many, Amiga is 68k + chipsets. If we had a living OS and not just a retro community, Amiga hardware would be very different (like a PC) and AmigaOS would also be very different from today.


The Amiga is not a Mac.

Amiga games' bare-metal hardware access echoes PC DOS games. It took many years for the Windows PC games library to reach critical mass and displace PC DOS games, around Windows XP's 2001 release.

Windows 9x was the transition phase from the PC's MS-DOS to Windows (DirectX).

The Apple-influenced AmigaPPC, and Amithlon ("we don't care about games" - Bernd Meyer), wouldn't work for the Amiga.

Notice the following sales pattern for the Amiga AGA.

UK:
A1200 (Oct-Dec 1992) = 44,000 (Amiga Format May 1993 report)
A1200 (Jan-Aug 1993) = 100,000 (Amiga Format Sep 1993 report)
A1200 (Sep-Dec 1993) = 160,000 (Amiga Format Feb 1994 report)
CD32 = 75,000
Sub-total: 379,000. No numbers for UK's A4000 unit sales.

Germany:
A1200 = 95,000
CD32 = 25,000
A4000 = 11,300
Sub-total: 131,300

Total from Germany and UK: in excess of 510,300 AGA units.

Germany's market was less into CD32 game consoles compared to the UK's market.

It's overwhelmingly about games.



Last edited by Hammer on 11-May-2024 at 04:01 AM.


Hammer 
Re: One major reason why Motorola and 68k failed...
Posted on 11-May-2024 5:15:39
#26 ]
Elite Member
Joined: 9-Mar-2003
Posts: 5367
From: Australia

@matthey

Quote:

No. The basic instruction on the 68000 is 4 cycles which I believe is 2 cycles on a 6502 family CPU. Therefore, a 6502@2MHz may be able to reach the performance of a 68000@8MHz but this is a theoretical best case by clock cycles and there may be other limiting factors. In reality, a 6502@4MHz is more equivalent to a 68000@8MHz and the 68000 still has a major advantage with larger datatypes. This is pretty close to the Sega Mega Drive (Genesis in N/A) vs the SNES.

https://segaretro.org/Sega_Mega_Drive/Hardware_comparison

The SNES Ricoh 5A22@~3.5MHz holds its own with 8 bit datatypes while the Sega 68000@~7.6MHz has a large advantage with 16 bit datatypes and a huge advantage with 32 bit datatypes (shown in first chart). The 68000 has more memory bandwidth, doesn't have to access memory as often because of more registers and better code density reduced memory accesses for instructions. The 68000 has fewer instructions to execute too.


The 6502 operates at double rate, i.e. on both the leading and falling clock edges.
https://youtu.be/acUH4lWe2NQ?t=544

The 6502's average instruction completion time is around 4 clock cycles.

The designer of the C65's CPU was hired by AMD and assigned to the K7 Athlon team. The K7 Athlon's front-side bus is double data rate (DDR).

Commodore was sitting on a double-rate IP gold mine. Commodore's loss, AMD's gain.

https://gunkies.org/wiki/MOS_Technology_6502
WDC licenses 65C02 cores that run at up to 200 MHz.


https://youtu.be/acUH4lWe2NQ?t=749
An Apple IIGS with a 65C816 @ 8 MHz accelerator running Wolfenstein 3D at near full screen.

----
The spiritual successor to the 65xx is ARM.

https://web.archive.org/web/20221121195648/https://hcommons.org/deposits/objects/hc:49710/datastreams/CONTENT/content


...the design team decided to develop their own processor, which would provide an environment with some similarities to the familiar 6502 instruction set but lead Acorn and its products directly into the world of 32-bit computing.


The design team worked in secret to create a chip which met their requirements. As described earlier, these were for a processor which retained the ethos of the 6502 but in a 32-bit RISC environment


The 6502, to which Acorn's designers looked when designing the original ARM, had a short and simple instruction set which lent itself well to RISC.


Taken from 'The ARM RISC Chip: A Programmers' Guide' by Carol Atack and Alex van Someren, published 1993 by Addison-Wesley

--------

The ARM1 was designed with VLSI Technology's toolchain, and the first samples were fabricated on VLSI's 3 um process. VLSI Technology Inc. had some success convincing other companies to use the ARM and was extensively involved with early ARM designs.
---------

The 65xx didn't evolve into a 32-bit version in a timely manner.

The 68000's 32-bit programming model won designs with the Apple Lisa/Mac, Atari ST, Amiga and many others.

Last edited by Hammer on 11-May-2024 at 06:07 AM.


Hammer 
Re: One major reason why Motorola and 68k failed...
Posted on 11-May-2024 6:33:40
#27 ]
Elite Member
Joined: 9-Mar-2003
Posts: 5367
From: Australia

@Matt3k

Quote:
Concerning Commodore and the Amiga, I think the biggest issue that destroyed them was firing Tom Rattigan. He had already proven that he could pivot, bring Commodore back from the brink of collapse, clean up the portfolio (i.e. the 500 and 2000), read the market and make tough decisions. The guy saved Commodore once, so it isn't a big stretch to me that his talent could have carried forward. He left the engineering guys to do their thing as well.

It is interesting that both he and John Sculley came from Pepsi, and I think Commodore fared better with Rattigan than Apple did with Sculley. It would have been very interesting to see where it all would have gone if Rattigan had been left in...


I wondered about the management source for "read my lips, no new chips" directive.

Rattigan was Commodore's CEO from April 1st, 1986.
Rattigan was fired as Commodore's CEO around April 1987.

What's needed is the A500's development timeline.

Rattigan's mistake was the shutdown of the original Los Gatos Amiga team. He should have fast-tracked ECS Denise for the larger DTP and WYSIWYG word-processing markets for the A2000.

When a 256-color chipset was needed for the Amiga, it wasn't production-ready to counter the SNES and the PC's 1990 SVGA clones.

Commodore's A2410, released in 1991, was a rush job in an attempt to counter criticism of the A3000's aging ECS chipset.




Last edited by Hammer on 11-May-2024 at 06:34 AM.


OlafS25 
Re: One major reason why Motorola and 68k failed...
Posted on 11-May-2024 8:22:20
#28 ]
Elite Member
Joined: 12-May-2010
Posts: 6369
From: Unknown

@Hammer

He closed the original team? To me that was already the beginning of the end.

matthey 
Re: One major reason why Motorola and 68k failed...
Posted on 12-May-2024 0:01:40
#29 ]
Elite Member
Joined: 14-Mar-2007
Posts: 2064
From: Kansas

Hammer Quote:

The goal was to de-incentivise the 68000.

Motorola was aware of the 68000 instruction set's mistakes, but continued to sell the 68000 at a lower cost compared to other, newer 68k CPU designs.


I wouldn't call the 68000 ISA a mistake. There is nothing wrong with it for the embedded market. It was adequate for the desktop market, as its adoption by the Mac, Amiga and Atari shows; all of them could have used the 68010 (the Amiga could have used the 68000 for a game machine and the 68010 for the desktop). The 68000 ISA was adequate for the workstation market, which the 68000 created, even though some customers wanted better MMU support. The 68000 ISA is better than some other 68k ISAs. If I were to give grades for the ISAs, they would be something like the following.

68000 A-
68010 A-
68020 C
CPU32 A
ColdFire D

Notice I didn't change the grade from the 68000 to the 68010. The 68000, 68010 and 68020 ISAs were challenging as they were general purpose ISAs targeting embedded, desktop and workstation markets. The CPU32 and ColdFire ISAs were targeting the embedded market only which is much easier. The fact that there were so many ISAs is a standardization failure by Motorola which is worse than any 68000 ISA "mistake". Was it really that bad to be able to read the whole SR from user mode?

bit(s) | function
15-14  | trace enable
13     | supervisor/user state
12     | master/interrupt state
11     | reserved
10-8   | interrupt priority mask

This data can only be read (MOVE.W SR,EA) and can never be written (MOVE.W EA,SR) in user mode. Is this a major security risk or a minor one?
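The fields in the table above can be pulled out with plain masks and shifts; a minimal Python sketch (the function name and key names are mine, the layout is the one given in the table plus the standard CCR bits):

```python
# Decode a 68000 status register word into its fields using masks
# and shifts, following the bit layout listed above.
def decode_sr(sr):
    return {
        "trace":      (sr >> 14) & 0b11,   # bits 15-14
        "supervisor": (sr >> 13) & 1,      # bit 13
        "master":     (sr >> 12) & 1,      # bit 12
        "int_mask":   (sr >> 8)  & 0b111,  # bits 10-8
        "ccr":        sr & 0x1F,           # bits 4-0: X N Z V C
    }

# 0x2700: supervisor mode, interrupt mask 7, all CCR flags clear --
# the familiar reset-time SR value.
assert decode_sr(0x2700) == {
    "trace": 0, "supervisor": 1, "master": 0, "int_mask": 7, "ccr": 0,
}
```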

The 68000 MMU issue was the inability to restart instructions after some exceptions, because some state was not saved on the stack. It's not really a bug but an oversight, kind of like the OoO AMD K6 FPU behavior where independent instructions could not be retired. It's more of a technical oversight than an ISA mistake.

Hammer Quote:

The Pentium P54 delivers superior Quake results compared to the 68060.

The Pentium P54 was available in low-power SKUs, e.g. the Embedded Pentium 133 with VRE at 7.9 to 12.25 watts. Embedded Pentiums were used in laptops, which was not the case for the 68060.

Desktop PC Pentiums are biased toward maximum performance with less concern for low power.

For budget gaming PCs, the Celeron was the superior option compared to the Pentium MMX 233 MHz.


Intel tried twice to go back to the in-order P54C core design to reduce power and area but the attempts were short lived.

https://en.wikipedia.org/wiki/Bonnell_(microarchitecture)
https://en.wikipedia.org/wiki/Larrabee_(microarchitecture)

One of the keys to success is to save power by not breaking instructions down as far into micro-ops. The trouble is that x86 is far from orthogonal, with too many instructions needing special handling, which also limits how many instructions can be multi-issued and executed together. This was the problem with the P54C Pentium, and it is where the 68060 has a significant integer performance advantage.

The Cyrix 6x86 is a better comparison to the 68060 than the lower-tech Pentium P54C. It is a mostly in-order core design with a focus on integer performance, where it crushed the in-order P54C and competed with OoO Intel and AMD x86 designs because it had a much higher multi-issue rate. It has a longer 7-stage pipeline, integer register renaming, optimizations to source data from another multi-issued instruction, removes the memory access penalty of the P54C, and has a non-pipelined FPU that can multi-issue integer instructions alongside FPU instructions. These features are closer to the 68060 than to the more primitive (for integer execution) P54C.

There are some differences: the 6x86 has a unified cache, very limited OoO completion, more speculative execution, a hardware return stack and a 4-instruction FPU execution queue, while the 68060 has an instruction buffer decoupling the fetch and execution pipelines (which reduces power with a smaller instruction fetch), split caches, better code density increasing cache efficiency, and a better, more orthogonal ISA likely allowing a better multi-issue rate and generally better instruction execution times. The 6x86 was released about a year after the 68060.

year | CPU   | int stages | caches   | transistors
1994 | 68060 | 8-stage    | 8kiB I+D | ~2.5 million
1994 | P54C  | 5-stage    | 8kiB I+D | ~3.3 million
1995 | 6x86  | 7-stage    | 16kiB U  | ~3.5 million

All three of these CPUs have 16kiB of cache in total, while a 68060+ with double the caches would have been ~3.3 million transistors and, according to Motorola, would have improved performance by 20%-30%. This gave the 68060 design the versatility of either saving power with the smaller cache or gaining performance with the larger one. The x86 competition did not have this luxury due to the x86 ISA and its baggage.

The Cyrix 6x86 was known as the Pentium killer until Quake came along and killed the 6x86 instead. Integer performance is far more important than fp performance, and the 6x86's FPU performance is not bad on more common mixed int and fp code, even outperforming the Pentium in some benchmarks.

http://dns.fermi.mn.it/inform/materiali/evarchi/cyrix.dir/fpu-summ.htm

The 6x86 is generally at a disadvantage in games, but not as much as in Quake. The issue is the poor x86 FPU ISA and Quake being optimized for the Pentium's x86 FPU. The minimalist 68060 FPU has less FPU hardware than the 6x86, but I expect it to perform better. The Pentium chose to execute an FXCH instruction for free instead of multi-issuing integer instructions in parallel like the 6x86 and 68060 do. The 6x86 carries the additional overhead of all the FXCH instructions caused by the poor x86 FPU ISA.

FPU latencies in cycles
instruction | Pentium  | 6x86  | 68060
FABS        | 1        | 2     | 1
FADD        | 3        | 4-9   | 3  (FSUB, FCMP and FTST are similar and common too)
FMUL        | 3        | 4-9   | 3
FDIV        | 19/33/39 | 24-34 | 37
FSQRT       | 70       | 59-60 | 68
FXCH        | 1        | 3     | unnecessary

The Pentium making FXCH effectively free (1 cycle, but executed in parallel), so that the stack-based registers behave like GP registers, made this instruction much more common than in older x86 code. The 6x86 has to add 3 cycles to all the FPU instruction latencies above when the FXCH instruction is used.

FPU latencies in cycles with FXCH
instruction | Pentium  | 6x86  | 68060
FABS        | 1        | 5     | 1
FADD        | 3        | 7-12  | 3
FMUL        | 3        | 7-12  | 3
FDIV        | 19/33/39 | 27-37 | 37
FSQRT       | 70       | 62-63 | 68

The most common FPU instructions are FABS, FADD, FSUB, FCMP, FTST and FMUL, which have latencies ~3 times higher on the 6x86 than on the Pentium and 68060. The poor x86 FPU ISA played a major role in killing the 6x86, and Quake's Pentium optimizations finished it off. If Quake had been optimized for the 6x86, it would have used memory (cache) more, where the 6x86 sometimes has an advantage over the Pentium, and the instruction scheduling would have been more favorable to the 6x86. Instruction scheduling was more important with these old FPUs.

The majority of the performance gain from my 68060 vbcc support-code changes likely came from the elimination of trapped FPU instructions and from the instruction scheduling of the linked 68060 code, even though there is no 68k instruction scheduler. The 68060 and 6x86 designs are overall superior to the Pentium P54C. The P54C FPU has a small advantage from FPU optimizations that Intel did a good job of recognizing and exploiting with developer support, but the x86 FPU ISA is still a major handicap and the reason it was deprecated.
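The FXCH penalty can be expressed as simple arithmetic over the first latency table. This is my own illustrative sketch, using the best-case figures quoted above:

```python
# Adding the per-architecture FXCH overhead to the base FPU latencies:
# the Pentium pairs FXCH for free, the 68060 never needs it, and the
# 6x86 pays 3 extra cycles per FXCH.
BASE = {            # instruction: (Pentium, 6x86 best case, 68060), in cycles
    "FABS": (1, 2, 1),
    "FADD": (3, 4, 3),
    "FMUL": (3, 4, 3),
}
FXCH = (0, 3, 0)    # extra cycles when an FXCH accompanies the instruction

def with_fxch(instr):
    return tuple(b + x for b, x in zip(BASE[instr], FXCH))

assert with_fxch("FADD") == (3, 7, 3)  # matches the "with FXCH" table above
assert with_fxch("FABS") == (1, 5, 1)
```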

Hammer Quote:

PPC 603's native code is pretty good. 68K emulation is better with a larger cache.

On native code, PPC 601 FPU is pretty good compared to Pentium P54's FPU. Pentium FPU was improved with Pentium MMX.


Most low-end in-order RISC CPU cores need instruction scheduling to have any performance at all. Load-to-use stalls are performance killers that don't exist for CISC cores (instruction scheduling is highly recommended even with the PPC's limited OoO). The PPC 603, with only one simple integer unit, makes instruction scheduling nearly impossible, with resulting pipeline stalls. The 68060 and 6x86 did a good job of showing the performance advantage of CISC for existing and unoptimized code, as these good CPU designs received less developer support but generally had good performance. The SiFive U74 core is a step in the right direction for RISC cores, with a deliberate design intention of reducing stalls and making scheduling easier. Of course, a CISC-like pipeline design was used even though RISC-V is the ISA least able to take advantage of the performance gains.
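The load-to-use stall argument can be illustrated with a toy model. This is my own sketch of a single-issue in-order pipeline with assumed latencies, not a model of any real core:

```python
# Toy in-order, single-issue pipeline: a load's result becomes usable
# LOAD_LAT cycles after issue; an instruction that needs a value before
# it is ready stalls. Moving independent work between a load and its
# use (instruction scheduling) hides the latency.
LOAD_LAT, ALU_LAT = 2, 1  # assumed latencies for the sketch

def run(program):
    """program: list of (dest, srcs, latency); returns the last issue cycle."""
    ready, clock = {}, 0
    for dest, srcs, lat in program:
        # stall until all source registers are available
        clock = max([clock + 1] + [ready.get(r, 0) for r in srcs])
        ready[dest] = clock + lat
    return clock

load = lambda d: (d, [], LOAD_LAT)
add = lambda d, a, b: (d, [a, b], ALU_LAT)

# use-right-after-load order vs. loads hoisted ahead of their uses
naive = [load("r1"), add("r2", "r1", "r0"), load("r3"), add("r4", "r3", "r0")]
sched = [load("r1"), load("r3"), add("r2", "r1", "r0"), add("r4", "r3", "r0")]

assert run(naive) == 6  # two stall cycles
assert run(sched) == 4  # load latency fully hidden
```

The same four instructions take two extra cycles when each load's result is consumed immediately, which is exactly why unscheduled code hurts these cores so badly.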

CISC FPUs and SIMD units should have a significant performance advantage over RISC FPUs, but early CISC FPUs had more limited transistor budgets, as integer pipelining used more transistors, partially offset by fewer GP registers being needed. Most 68k FPU instructions only allow "Fop mem,reg" accesses and not "Fop reg,mem" accesses, which gets most of the performance advantage with reduced complexity (no RMW FPU instructions). The 68060's minimalist FPU is likely due to saving transistors for caches (68060+) and targeting the embedded market, where FPU performance is not as important. The 68k FPU ISA has more performance potential than the x86 FPU and most RISC FPU ISAs, but heavy FPU-using code would benefit from doubling the FPU registers from 8 to 16. An ABI that passes fp function args in extended-precision registers instead of as double-precision args on the stack would be a big improvement as well.

Hammer Quote:

Solve by "turtle" mode. PC had the "turbo" button.

PC's DirectX has to factor in laptop to desktop hardware scaling, hence the resource tracking feature.

PS4 Pro and PS5 have PS4 legacy mode to match timings. Hit-the-metal access has a downside.


Fully static core designs have no problem at any clock speed below max but different clock sources are tricky, add expense and may have limited granularity.

Hammer Quote:

For WHDload games, high performance emulated 68K CPU from PiStormEmu68 needs "turtle" features. Considerable effort was invested in Emu68's "turtle" features.

WinUAE is aware of the timing scope for CPU and Amiga Custom Chips.


For an ASIC 68k SoC, a cycle-exact fully static 68000 core could improve compatibility and could be used as an I/O processor. At only ~68,000 transistors, it is cheap.

Hammer Quote:

You're forgetting Vampire has turtle mode. I used modified CoffinOS R58 to R60 to insert Emu68's "turtle" features.


When I was part of the AC team, I suggested Gunnar allow the TG68 core in Vampires for compatibility. He was miffed by the idea and added turtle mode instead.

Hammer Quote:

Western Design Center (WDC)'s 65C816 has 24-bit memory addressing via segmented memory, but it launched in 1985, too late for many 16-bit designs of the early-to-mid 1980s.

For 32-bit, Bill Mensch recommended ARM and canceled the WDC 65C832.

The 65C832 was to be the 32-bit version of the 6500 series, with linear addressing over a 24-bit address bus and a 32-bit ALU. Bill Mensch doesn't have R&D resources at the level of ARM's.


I believe Bill Mensch and his WDC's early fabless semiconductor development capabilities were at least on par with ARM's. The 6502 is a minimal-silicon CPU that was, and is, perfect for many low-end, low-cost designs.

https://www.westerndesigncenter.com/ Quote:

The legendary 6502/65816 microprocessors with both 8-bit and 8/16-bit ISA's keep cranking out the unit volumes in ASIC and standard microcontroller forms supplied by WDC and WDC's licensees. Annual volumes in the hundreds (100's) of millions of units keep adding in a significant way to the estimated shipped volumes of five (5) to ten (10) billion units. With 200MHz+ 8-bit W65C02S and 100MHz+ 8/16-bit W65C816S processors coming on line in ASIC and FPGA forms, we see these annual volumes continuing for a long, long time.

The 6502 is likely the only processor family that has remained loyal to its ISA over the last 45 years. In addition it has served the widest spectrum of electronic markets through those years. For example, it has served and in some cases created markets for the PC, video game, toy, communication, industrial control, automotive, life support embedded in the human body medical devices, outside the body medical systems, engineering education systems, hobby systems, and you name it electronic market segments. I might add the 6502 has served in a highly reliable and successful way!


The 6502 is a great tiny minimalist CPU but it is not good at everything. Few registers result in lots of memory traffic, code density is poor and there are many simple instructions to execute. The 6502 ISA does not have the upgrade or performance potential of other ISAs. ARM was a good upgrade recommendation for a larger but small core that has less memory traffic and fewer instructions to execute. The 68000 is larger yet but improves memory traffic, code density and reduces the number of instructions executed further.

Hammer Quote:

Commodore didn't have a proper business plan for competitive CPU R&D.

Intel 386's 1985 release was timely for the X86 PC's evolution, but too late for Amiga Lorraine, Apple Lisa and Atari ST.


The x86 ISA has less upgrade potential than the 68k ISA, but it was extended anyway. The 68k has less memory traffic, better code density and fewer instructions to execute than x86. The x86 encoding map is full, resulting in a further decline in code density as new instructions were added and used. The x86 ISA has only 7 GP registers, which is too few even for a CISC ISA.

Hammer Quote:

68060 is expensive.


There is more demand than supply. It's not my fault. A 68060 CPU would use less than $0.50 USD of silicon today but it is not currently available using semi-modern silicon.

matthey 
Re: One major reason why Motorola and 68k failed...
Posted on 12-May-2024 1:38:47
#30 ]
Elite Member
Joined: 14-Mar-2007
Posts: 2064
From: Kansas

Lou Quote:

You are cherry-picking one instruction. Since the 6502 doesn't have DIV and MUL, most coders used LUTs, which are faster than looping over ordinary add/sub...


You mentioned an instruction with a long latency, and it was far from the longest. I was just saying that long-latency instructions matter less to performance than common instructions. The long-latency instructions can actually save cycles, and they certainly save code in memory, which is important when there is only 64kiB max, as in the case of the 6502. It is possible to do a lookup table for 8-bit MUL and DIV, but tables for 16-bit datatypes use a lot of memory again. The 6502 has good performance/MHz with 8-bit datatypes but quickly falls behind with larger ones.
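The memory-cost argument can be checked with back-of-the-envelope arithmetic (my own figures for illustrative table layouts, not from the post):

```python
# Rough LUT sizes for multiplication tables. A full product table is
# already impossible for 8-bit operands in a 64 KiB address space; the
# practical quarter-square tables stay small for 8-bit operands but
# balloon for 16-bit ones.
KiB = 1024

full_8bit = 256 * 256 * 2         # every 8-bit product, 2 bytes each
qsq_8bit = 511 * 2                # quarter-square table, entries for a+b <= 510
qsq_16bit = (2 * 65535 + 1) * 4   # same trick for 16-bit operands, 4-byte entries

assert full_8bit == 128 * KiB     # double the 6502's entire address space
assert qsq_8bit == 1022           # ~1 KiB: why coders used this trick
assert qsq_16bit > 500 * KiB      # ~512 KiB: impractical, as noted above
```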

Lou Quote:

I repeat: the 6502 in 1983 could do 14 MHz, and the SuperCPU of the late 80s claimed 20 MHz.
Try over-clocking a 68000 to 150 MHz...


The 1979 68000 was not on modern silicon in 1983. Usually only new designs use modern silicon, and not all do. The 68000 used an older process when introduced, as that was the best the outdated Motorola fabs could produce. The 68000 is microcoded and not pipelined (instruction prefetch could be considered primitive pipelining), which reduces performance and limits the max clock speed. The 68000 is not a particularly high-performance design, but the 68000 ISA boosts performance. The 68000 ISA is relatively simple, and a pipelined hardware 68000 core could reach many GHz.

Hammer Quote:

I wondered about the management source for "read my lips, no new chips" directive.

Rattigan was Commodore's CEO from April 1st, 1986.
Rattigan was fired as Commodore's CEO around April 1987.

What's needed is A500's development timeline.

Rattigan's mistake was the shutdown of the original Los Gatos Amiga team. He should have fast-tracked ECS Denise for the larger DTP and WYSIWYG word-processing markets for the A2000.


Wikipedia gives slightly different dates for the start of Thomas Rattigan's management.

https://en.wikipedia.org/wiki/Amiga#History Quote:

In late 1985, Thomas Rattigan was promoted to COO of Commodore, and then to CEO in February 1986. He immediately implemented an ambitious plan that covered almost all of the company's operations. Among these was the long-overdue cancellation of the now outdated PET and VIC-20 lines, as well as a variety of poorly selling Commodore 64 offshoots and the Commodore 900 workstation effort.


By this account, he was likely involved in the Los Gatos closure. This may have been shortly after the "We made the Amiga, they f***ed it up" Easter egg was found in the ROM. There are different conflicting stories about this, but one says that UK Amiga 1000s were pulled at launch until the ROM could be changed. This would have been 1986 and the Kickstart 1.2 ROM?

Amiga engineers were given the option to move to the CBM headquarters on the East Coast. CBM also paid Jay Miner as a consultant. The direction certainly switched from Jay Miner's vision to CBM management's. It would be interesting to know what Thomas Rattigan thought about the Ranger chipset and other Amiga upgrades. Some layoffs were certainly called for as a cost-cutting measure.

https://arstechnica.com/gadgets/2008/02/amiga-history-part-6/ Quote:

Chopping heads

When a corporation is bleeding money, often the only way to save it is to drastically lower fixed expenses by firing staff. Commodore had lost over $300 million between September 1985 and March 1986, and over $21 million in March alone. Commodore's new CEO, Thomas Rattigan, was determined to stop the bleeding.

Rattigan began three separate rounds of layoffs. The first to go were the layabouts, people who hadn't proven their worth to the company and were never likely to. The second round coincided with the cancellation of many internal projects. The last round was necessary for the company to regain profitability, but affected many good people and ultimately may have hurt the company in the long run. Engineer Dave Haynie recalled that the first round was actually a good thing, the second was of debatable value, and the last was "hitting bone."


There is more on Thomas Rattigan at the link above. The Los Gatos group survived this round of layoffs and competed for the Amiga 2000 and 500 designs, which they lost as CBM management moved further and further from Jay's vision. It doesn't say whether Thomas or the remaining Los Gatos team was let go first, but it doesn't really matter, as both had lost power.

bhabbott 
Re: One major reason why Motorola and 68k failed...
Posted on 12-May-2024 11:56:20
#31 ]
Regular Member
Joined: 6-Jun-2018
Posts: 347
From: Aotearoa

Quote:

matthey wrote:
The Los Gatos group survived this round of layoffs and competed for the Amiga 2000 and 500 designs, which they lost as CBM management moved further and further from Jay's vision. It doesn't say whether Thomas or the remaining Los Gatos team was let go first, but it doesn't really matter, as both had lost power.

Which was good because Jay's vision was a bad one. The A500 was the only really successful Amiga, and he was against it. His 'Ranger' design was awful. CBM management were right to choose the A2000 over it.

Unfortunately, Commodore didn't 'stray' far enough. Instead of building on the strengths of the A500 and A2000, they created the loss-making A3000 and CDTV, and spent years trying to make a high-end graphics chipset while not doing much to improve the low end - until AGA (which should have been made a priority and released at least a year earlier).

bhabbott 
Re: One major reason why Motorola and 68k failed...
Posted on 12-May-2024 12:33:37
#32 ]
Regular Member
Joined: 6-Jun-2018
Posts: 347
From: Aotearoa

Quote:

Lou wrote:
I repeat, the 6502 in 1983 could do 14 MHz and the SuperCPU of the late 80's claimed 20 MHz.
Try over-clocking a 68000 to 150 MHz...

Clock speed alone isn't that useful. A 20MHz 6502 is not equivalent to a 150MHz 68000 in a real-world computer, not even close. By the late 80's the 68000 had become the 68030, a far more competent processor than the 65816 - while still being practically 100% compatible with the 68000 without any compromises.

In practice designs in the 80's were limited by memory speed. DRAM was very slow. Fast Static RAM was available, but was very expensive. The 'SuperCPU' had up to 128k of it, enough for a C64 but not nearly enough for the Amiga (just the ROM alone would need twice that much to shadow).


matthey 
Re: One major reason why Motorola and 68k failed...
Posted on 12-May-2024 17:15:10
#33 ]
Elite Member
Joined: 14-Mar-2007
Posts: 2064
From: Kansas

bhabbott Quote:

Which was good because Jay's vision was a bad one. The A500 was the only really successful Amiga, and he was against it. His 'Ranger' design was awful. CBM management were right to choose the A2000 over it.


Jay Miner was more interested in an expandable desktop computer and less interested in a gaming machine but he supported the gaming machine goal too. He didn't want the game machine goals to cripple his desktop goals and wanted compatibility between the desktop and gaming Amiga which was ultimately achieved but not without conflicts.

o Jay fought to keep the 68000 instead of the cheaper but much lower value 68008 or another cheaper CPU.

o Jay fought to have a memory expansion on the Amiga 1000.

o Jay fought to have 512kiB of memory on the motherboard of the Amiga 1000. CBM wasted valuable time and resources having it reduced to 256kiB only to turn around and use Jay's memory expansion to add it back in a more expensive way. Memory prices also dropped as Jay foresaw in his planning.

o Jay foresaw the need for an Amiga chipset upgrade to stay competitive and developed the Ranger chipset without being told.

o Jay foresaw VRAM prices dropping enough to use for the Ranger chipset. Jay foresaw the need for more chip RAM bandwidth, higher resolutions and more colors to stay competitive for the high-end desktop.

The Ranger design competed with the Amiga 2000 for the high end, which is what he cared about. The Amiga 2000's physical design was more practical than the Ranger prototype in some ways, but the Ranger hardware was better. The Amiga 2000 design was more expensive than it should have been, with wasted ISA connectors, slots and space that require longer and more expensive Zorro boards. The Amiga 2000 is ugly too.

bhabbott Quote:

Unfortunately Commodore didn't 'stray' far enough. Instead of building on the strengths of the A500 and A2000 they created the loss-making A3000 and CDTV, and spent years trying to make a high-end graphics chipset while not doing much to improve the low end - until AGA (which should have been made a priority and released at least a year earlier).


The Amiga 3000 got a lot of things right, but the ECS chipset was not competitive in the high-end market. CBM lost what high-end market it had by not delivering an upgraded chipset sooner. Even the low-end market needed an upgrade sooner; AGA was too little, too late. The Amiga 1200 and CD32 did not sell particularly well because of this, yet they were exactly the products CBM needed. A 68EC030@28MHz with an AA+ chipset a year earlier in the Amiga 1200 and CD32 likely would have saved the business.

Not making the Amiga 600 and CDTV mistakes would have helped a lot too. Letting the Amiga 300 turn into the Amiga 600 instead of canceling it was an obvious mistake, while the CDTV idea may have been workable with enough changes, especially cost reductions and an upgraded CPU+chipset, which is what it morphed into with the CD32. I wonder what Jay thought of the expandable CD32 console. Modern consoles are somewhat similar but are closed, locked-down systems, while the RPi is perhaps closer to being a successful low-cost, expandable, open-hardware microconsole, though with some standards lacking and lacking for game software.

https://en.wikipedia.org/wiki/Microconsole

A CD32 with expansions probably has a better game selection than a RPi without using emulation. I wonder how compatible those 68060 CD32s are. I haven't found a video yet. A modernized expandable sub $100 USD CD32+ would be interesting today.

Last edited by matthey on 12-May-2024 at 05:17 PM.

OneTimer1 
Re: One major reason why Motorola and 68k failed...
Posted on 12-May-2024 19:01:09
#34 ]
Cult Member
Joined: 3-Aug-2015
Posts: 995
From: Unknown

@OlafS25

Quote:


As a community we are largely stuck in a time bubble of the early 90s. For many, Amiga is 68k + chipsets. If we had a living OS and not a retro community, Amiga hardware would be very different (like a PC) and AmigaOS would also be very different from today.


Absolutely, Apple changed its CPU platform 4 times and its OS at least 1 time ...

But Apple has professional applications that have strong support by the software industry and will be ported to a new platform quickly, and users that don't cry for lost 68k / 24Bit compatibility.

Most professional applications on the Amiga had active support for only a few years, and a lot of them switched to other platforms before C='s bankruptcy.


Last edited by OneTimer1 on 12-May-2024 at 07:09 PM.

matthey 
Re: One major reason why Motorola and 68k failed...
Posted on 12-May-2024 20:24:34
#35 ]
Elite Member
Joined: 14-Mar-2007
Posts: 2064
From: Kansas

OneTimer1 Quote:

Absolutely, Apple changed its CPU platform 4 times and its OS at least 1 time ...


More specifically you are probably talking about Mac. Counting ISAs is kind of tricky.

Mac ISAs
68k
PPC 32-bit
PPC 64-bit
x86
x86-64
ARM (original 32-bit ARM, Thumb and Thumb2 ISAs together and switchable)
ARM AArch64

It's funny that they went from 64-bit ISAs back to 32-bit ISAs twice. Nintendo did it once, going from the N64's 64-bit MIPS to the GameCube's 32-bit PPC. This couldn't be anything but a well-planned roadmap based only on technical factors. It is impressive that developers followed this path given the expense of switching ISAs so often.

OneTimer1 Quote:

But Apple has professional applications that have strong support by the software industry and will be ported to a new platform quickly, and users that don't cry for lost 68k / 24Bit compatibility.


The original 68000 has full 32-bit address registers, so software written for the 68000 should work on all 68k CPUs. Some early software was not 32-bit clean because developers used the upper 8 bits of addresses, but these were only horrible software developers like Microsoft with AmigaBasic. The Amiga RKRM manuals clearly tell developers not to do this, and there is little advantage to it anyway. I would be very surprised if anybody asks for 24-bit addressing on any kind of upgraded 68k Amiga. The 32-bit addressing can take a small-footprint 68k Amiga further than fat PPC, which is already using bank switching techniques to access more address space. Even the Vamp/ACSA hardware only supports up to 512MiB of memory.

The 68k users didn't cry when PPC AmigaNOne was created. We just ignored the overpriced commodity PC hardware they stuck an AmigaNOne label on and tried to pass off as Amiga hardware with emulation of the 68k. After 20 years of failure and with sales of only a few thousand, we still aren't buying that crap. I even refused a free Sam for developers. Maybe we are more stubborn than Apple users but we are smarter too. The majority of us like the 68k and consider it one of the greatest CPUs of all time. Almost all 68k Amiga users like the compatibility that the 68k, chipset and AmigaOS allow. THEA500 Mini leaves a lot to be desired for hardware but it has this compatibility and has likely sold in the hundreds of thousands compared to thousands for AmigaNOne.

Last edited by matthey on 12-May-2024 at 08:30 PM.

OneTimer1 
Re: One major reason why Motorola and 68k failed...
Posted on 12-May-2024 21:01:51
#36 ]
Cult Member
Joined: 3-Aug-2015
Posts: 995
From: Unknown

@matthey

Quote:

original 68000 has full 32 bit address registers so software written for the 68000 should work on all 68k CPUs.


It's even more chaotic on the Mac(intosh). Early MacOS used the upper 8 bits of a 32-bit address on system calls for additional information, making it impossible to use more than 16MB of address space. Some people told me they used their own MMUs (could be misinformation); MMUs were primarily used like programmable glue logic for memory management, and the 'security feature' most people talk about came later.
Some people claim AmigaBasic was developed for the Mac(intosh) and used the same tricks, so it didn't work on 68k CPUs with a full 32-bit address space.

They ironed it out on the Mac(intosh) and refused to talk much about it.


Last edited by OneTimer1 on 12-May-2024 at 09:02 PM.

matthey 
Re: One major reason why Motorola and 68k failed...
Posted on 12-May-2024 21:39:41
#37 ]
Elite Member
Joined: 14-Mar-2007
Posts: 2064
From: Kansas

@OneTimer1
I was aware that early Mac software had a problem with not being 32-bit clean. I didn't know the cause, but suspected some kind of tagged pointer.

https://en.wikipedia.org/wiki/Tagged_pointer

I found a good explanation of the Mac debacle.

https://en.wikipedia.org/wiki/Classic_Mac_OS_memory_management#32-bit_clean Quote:

Because memory was a scarce resource, the authors of Classic Mac OS decided to take advantage of the unused byte in each address. The original Memory Manager (up until the advent of System 7) placed flags in the high 8 bits of each 32-bit pointer and handle. Each address contained flags such as "locked", "purgeable", or "resource", which were stored in the master pointer table. When used as an actual address, these flags were masked off and ignored by the CPU.

While a good use of very limited RAM space, this design caused problems when Apple introduced the Macintosh II, which used the 32-bit Motorola 68020 CPU. The 68020 had 32 physical address lines which could address up to 4 GB of memory. The flags that the Memory Manager stored in the high byte of each pointer and handle were significant now, and could lead to addressing errors.

On the Macintosh IIci and later machines, HLock() and other APIs were rewritten to implement handle locking in a way other than flagging the high bits of handles. But many Macintosh application programmers and a great deal of the Macintosh system software code itself accessed the flags directly rather than using the APIs, such as HLock(), which had been provided to manipulate them. By doing this they rendered their applications incompatible with true 32-bit addressing, and this became known as not being "32-bit clean".

In order to stop continual system crashes caused by this issue, System 6 and earlier running on a 68020 or a 68030 would force the machine into 24-bit mode, and would only recognize and address the first 8 megabytes of RAM, an obvious flaw in machines whose hardware was wired to accept up to 128 MB RAM – and whose product literature advertised this capability. With System 7, the Mac system software was finally made 32-bit clean, but there were still the problem of dirty ROMs. The problem was that the decision to use 24-bit or 32-bit addressing has to be made very early in the boot process, when the ROM routines initialized the Memory Manager to set up a basic Mac environment where NuBus ROMs and disk drivers are loaded and executed. Older ROMs did not have any 32-bit Memory Manager support and so was not possible to boot into 32-bit mode. Surprisingly, the first solution to this flaw was published by software utility company Connectix, whose System 6 extension, OPTIMA, reinitialized the Memory Manager and repeated early parts of the Mac boot process, allowing the system to boot into 32-bit mode and enabling the use of all the RAM in the machine. OPTIMA would later evolve into the more familiar 1991 product, MODE32, for System 7. Apple licensed the software from Connectix later in 1991 and distributed it for free. The Macintosh IIci and later Motorola based Macintosh computers had 32-bit clean ROMs.

It was quite a while before applications were updated to remove all 24-bit dependencies, and System 7 provided a way to switch back to 24-bit mode if application incompatibilities were found. By the time of migration to the PowerPC and System 7.1.2, 32-bit cleanliness was mandatory for creating native applications and even later Motorola 68040 based Macs could not support 24-bit mode.


The 68k AmigaOS has one function that does not preserve the most significant bit of a returned address, limiting memory to 2GiB, but this can be worked around: software that knows how to handle the upper 2GiB with care can use the whole 4GiB address space. At least one PCI expansion has used the upper 2GiB of the 68k Amiga's address space.
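The high-byte flag trick that the Wikipedia excerpt describes is easy to sketch in C. This is an illustrative reconstruction only, not the real Memory Manager API; the constant names and helper functions are hypothetical:

```c
#include <stdint.h>

/* Classic Mac OS kept handle flags in the unused top byte of each
   32-bit master pointer. A 68000/68010 with only 24 address lines
   silently ignored that byte, so the trick worked... until the
   68020's full 32 address lines made the flags part of the address. */

#define FLAG_LOCKED    0x80u
#define FLAG_PURGEABLE 0x40u
#define FLAG_RESOURCE  0x20u

#define ADDR_MASK_24   0x00FFFFFFul  /* low 24 bits = real address */

/* Pack flags into the high byte of a 24-bit address. */
static inline uint32_t tag_pointer(uint32_t addr24, uint8_t flags) {
    return (addr24 & ADDR_MASK_24) | ((uint32_t)flags << 24);
}

/* Recover the real address: a no-op on a 68000's bus,
   but mandatory masking in software on a 68020+. */
static inline uint32_t strip_tag(uint32_t tagged) {
    return tagged & ADDR_MASK_24;
}

/* Read the flags back out of the high byte. */
static inline uint8_t get_flags(uint32_t tagged) {
    return (uint8_t)(tagged >> 24);
}
```

On a 68000 the tagged value still dereferences correctly because the top byte never reaches the address bus; on a 68020 it points somewhere else entirely, which is exactly the kind of crash System 6 avoided by forcing 24-bit mode.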

kolla 
Re: One major reason why Motorola and 68k failed...
Posted on 12-May-2024 22:52:09
#38 ]
Elite Member
Joined: 21-Aug-2003
Posts: 2950
From: Trondheim, Norway

Apple didn’t "move to ARM" - they finally moved away from x86; they’ve more or less always supported ARM. Switching ISAs was never much of a problem once they had moved away from "classic" MacOS, as the core of the OS (XNU, Darwin), with its BSD and Mach roots, was created to be portable - NeXTSTEP ran on both 32-bit and 64-bit before Apple purchased NeXT. It’s not much different from BSDs and Linux existing and running on just about all archs, 32-bit and 64-bit.

I’m breathing some life into my Gentoo/m68k activities these days… with qemu-system-m68k now supporting a non-existent "virt" profile, we now have an emulated 68040 system with up to 4GB of RAM and no hardware emulation, only virtio.

_________________
B5D6A1D019D5D45BCC56F4782AC220D8B3E2A6CC

kolla 
Re: One major reason why Motorola and 68k failed...
Posted on 12-May-2024 22:59:58
#39 ]
Elite Member
Joined: 21-Aug-2003
Posts: 2950
From: Trondheim, Norway

@OneTimer1

Quote:

some people told me they used their own MMUs (could be misinformation); MMUs were primarily used like programmable glue logic for memory management, and the 'security feature' most people talk about came later.


Sun certainly made their own MMUs for 68k and even did tricks like running all code on two CPUs in parallel, with one running slightly behind the other, ready to take over should the first CPU trip on some bad code.

_________________
B5D6A1D019D5D45BCC56F4782AC220D8B3E2A6CC

Lou 
Re: One major reason why Motorola and 68k failed...
Posted on 13-May-2024 2:42:04
#40 ]
Elite Member
Joined: 2-Nov-2004
Posts: 4181
From: Rhode Island

@bhabbott

Quote:

bhabbott wrote:
Quote:

Lou wrote:
I repeat, the 6502 in 1983 could do 14 MHz and the SuperCPU of the late 80's claimed 20 MHz.
Try over-clocking a 68000 to 150 MHz...

Clock speed alone isn't that useful. A 20MHz 6502 is not equivalent to a 150MHz 68000 in a real-world computer, not even close. By the late 80's the 68000 had become the 68030, a far more competent processor than the 65816 - while still being practically 100% compatible with the 68000 without any compromises.

In practice designs in the 80's were limited by memory speed. DRAM was very slow. Fast Static RAM was available, but was very expensive. The 'SuperCPU' had up to 128k of it, enough for a C64 but not nearly enough for the Amiga (just the ROM alone would need twice that much to shadow).

The 65C816 was designed to run at 14 MHz at its debut... It's the use cases that underclocked it to 3.57 MHz in the C65 and SNES... just as Commodore down-clocked the 6510.

If you want to start throwing 'successors' into the mix: ARM. As Hammer pointed out, it was essentially the 32-bit successor of the 6502 because the 65C816 was underwhelming. The 68k loses against ARM.
Heck - ARM is the best 68k accelerator right now. :)

People want simple.

Please remember, the 6502 was created because Motorola's 6800 was overpriced and slow (2 MHz).
For the same reason, the Z80 was created to simplify the Intel 8080 and lower the price.

Blame Motorola for abandoning it. They were found in many vehicle ECMs of the time and into the 90's before they went PPC. Unfortunately PPC wasn't cheap enough, and now ARM is everywhere.

You want to talk about memory... if CBM hadn't cheaped out on the C128's MMU, it could have been a 1 MB machine. There are mods available to give C128s 1MB of RAM, and this is not including the REU... The VDC acts like an external GPU; if they had added a better way to communicate with it and some sprites, it could have really been something. Heck, this demo alone makes one wonder what could have been: https://www.youtube.com/watch?v=sW4V-ehYFQw

Oh wait isn't this a cpu thread...?
Here's a datasheet for the 65C832... though it was never actually made:
https://downloads.reactivemicro.com/Electronics/CPU/WDC%2065C832%20Datasheet.pdf

No need to get into what-ifs. ARM won.

Last edited by Lou on 13-May-2024 at 03:34 AM.
Last edited by Lou on 13-May-2024 at 03:15 AM.
Last edited by Lou on 13-May-2024 at 03:06 AM.


Copyright (C) 2000 - 2019 Amigaworld.net.
Amigaworld.net was originally founded by David Doyle