Hammer (Elite Member; Joined: 9-Mar-2003; Posts: 6134; From: Australia)
Re: what is wrong with 68k, posted on 6-Dec-2024 3:49:55 [ #261 ]

@bhabbott
Quote:
BTW in 1982 IBM did make a 68000-based computer, the System 9000, originally intended to be a laboratory computer. It had an 8 MHz 68000, multitasking real-time OS stored on 128k ROM, 128k RAM expandable to 5 MB, Motorola VME system bus, 80 × 30 text and 768 × 480 graphics, various I/O ports for communicating with laboratory instruments, and a sophisticated version of BASIC for programming. For storage it used a 5.25" or 8" floppy drive and up to four 10MB hard drives.
|
During the 1985 OS/2 meeting, IBM forced the 16-bit 286 CPU issue over Bill Gates' pro-386 32-bit argument. IBM OS/2 1.x targeted the "brain dead" 286 CPU.
IBM didn't want 32-bit microcomputers to step on IBM's other 32-bit products. The 68000 would have been rejected for the same reason, as per the 1985 OS/2 meeting.
System 9000 was developed by IBM Instruments, Inc., an IBM subsidiary established in 1980 that focused on selling scientific and technical instruments as well as the computer equipment designed to control, log, or process these instruments.
Bill Gates labeled the 286 as "brain dead".
Meanwhile, for Windows 2.x, Microsoft worked on Compaq's 386 projects along with Intel. The MS Excel team (with experience from Apple's MacOS GUI) and Compaq co-funded Windows 2.x 386 R&D.
The System 9002 could also run the multi-user Microsoft Xenix OS for 68K; the PC had Xenix 286.
The System 9002 was unsuccessful in the business market due to the lack of business application software support from developers other than IBM.
Compare IBM's System 9002 (68K) efforts against the Apple Mac (68K)'s 3rd-party "next-gen" GUI business application successes, e.g. Adobe PostScript, Aldus PageMaker, MS Word GUI, MS Excel GUI, etc.
MS Word crushed IBM DisplayWrite.
The A3000 with Unix did not have credible GUI business software.
Last edited by Hammer on 06-Dec-2024 at 03:57 AM.
_________________ Amiga 1200 (rev 1D1, KS 3.2, PiStorm32/RPi CM4/Emu68) Amiga 500 (rev 6A, ECS, KS 3.2, PiStorm/RPi 4B/Emu68) Ryzen 9 7950X, DDR5-6000 64 GB RAM, GeForce RTX 4080 16 GB |
Status: Offline

bhabbott (Cult Member; Joined: 6-Jun-2018; Posts: 507; From: Aotearoa)
Re: what is wrong with 68k, posted on 6-Dec-2024 4:54:11 [ #262 ]

@agami
Quote:
agami wrote:
In my opinion, it's wrong that 68k was sidelined by Motorola's business struggles and straw-clutching of PowerPC via the AIM alliance. |
Not 'wrong', just the way the cookie crumbled.
The truth is, Motorola struggled to keep their CPUs relevant the whole time they were making them, starting with the 6800 in 1974. Their problems didn't start there though... Quote:
In 1971, Motorola decided to enter the calculator business. Looking for someone to lead the effort, they hired Bennett away from Victor. Shortly after joining, Olivetti visited Motorola with an outline of a design for a microprocessor they were planning to use in a series of programmable calculators. Motorola agreed to complete the design and produce it on their PMOS lines in Phoenix.
While the design was eventually completed successfully, their fab proved unable to produce the chips. The problems with the line had become obvious with a number of similar failures; it also proved unable to make competitive memory devices and other designs. To save the contract, Motorola licensed the design to their competitor, Mostek. |
I was introduced to the 6800 in 1980 when the Post Office sent me on a two week microprocessor training course at the Wellington Polytechnic. I was so impressed that I bought a 6802 chip while down there, and built a computer around it. However, having to write an OS for it quickly showed up the 6800's weaknesses - in particular the single index register, which made moving blocks of memory around a pain. It did have one advantage over the 6502 though: the bus could be tri-stated, which allowed it to transparently share memory with the video system using a minimum number of buffer chips.
The next time I attended a course in Wellington the local electronics supplier there had a 6809 with the programmer's reference manual for NZ$25. I bought one with the idea of building a computer around it, but this didn't happen because in the mean time I had been introduced to the Z80 via a Sinclair ZX81. This mirrored what was happening in the rest of the industry. The 6809 was a great improvement over the 6800, but came too late for many of us. Motorola's next microprocessor - the 68000 - was an even bigger improvement. But once again it arrived just a bit too late for the business market, while being considered overkill in the home market. Looking back that attitude might seem shortsighted, but we had to go with what we had at the time. A system that cost more would be a hard sell, even if it did more and was more future-proof.
So Motorola had design and production problems that kept them on the back foot compared to Intel. But to a large extent it was just luck. Other chip manufacturers struggled to make a reliable 16-bit design unencumbered by legacy baggage. However, in Intel's case it turned out in their favor. The 8088 wasn't doing very well until IBM chose it for the PC, which they did largely because they didn't want a powerful 16-bit CPU. And when they did, it changed everything. Now suddenly x86 was the standard that the industry rushed to embrace, and everyone else (including Motorola) was locked out. Luckily for Motorola, some designers recognized the clear advantages of the 68000 over the 8088 that made it worth spending a little more on. Jay Miner was one of them. But the Amiga struggled to maintain relevance in a world dominated by x86. The Macintosh did too, despite Steve Jobs managing to develop a cult following for it. The PC was bound to win out in the end, dashing Motorola's hopes of continuing to be a major competitor to Intel in the desktop CPU market.
The revenue Intel was receiving meant that they had far more resources to throw into improving their lacklustre x86 design, and the industry was more accommodating than they would be of an incompatible rival. So it wasn't long before the 286 and 386 were rivaling Motorola's CPUs, and once again they found themselves on the backfoot. With the PC outselling 68k systems by 5:1 or more there was little hope of Motorola holding out against x86.
PPC was a desperate attempt to find a shortcut that would match Pentium performance without the excessive R&D needed to advance 68k. It worked for a while, until Apple finally threw in the towel and joined the dark side.
If you want someone to blame, put it on consumers who were gaga over IBM and would have bought whatever they produced, even something as poor as the 8088. Blame them for being sheep who were too lazy and too afraid to consider alternatives. IOW, blame them for being human.
Quote:
Almost as wrong as the loss of Amiga as an alternative computing approach to Wintel and Mac. It is capitalism after all, and it required its sacrifice of an industry wide focus on RISC so that out of the rubble we could find the balance between RISC and CISC.
|
Capitalism drove the innovation that gave us what we have today. But capitalism didn't kill the Amiga - consumers did. Put bluntly, they didn't want it. They wanted to know that when they bought 'a' computer, it would work with all the other computer hardware and software out there. They wanted the industry to standardize on one platform, and develop it to the detriment of others. A few didn't of course - they bought Macs and Amigas instead. But they were not enough. Eventually one platform had to dominate for purely practical reasons. Economy of scale and profit would favor the dominant architecture, pushing out all others. And most people consider this to be a good thing.
Things are different today though, because software development has become far more sophisticated. Now we can easily compile an OS and apps for multiple platforms with different CPUs and peripherals, so the hardware is no longer so important. Some ISAs may be more efficient than others, but modern CPUs are so powerful that it hardly matters. The most commonly used software isn't even running on the CPU itself, but on a virtual machine via JavaScript etc.
We even see this on the Amiga. You can build an 'Amiga' that is far more powerful than any hardware you could make by simply installing WinUAE on an old PC - which are often being practically (if not literally) given away. Or for a couple hundred bucks you can upgrade your vintage Amiga to mind blowing performance with a PiStorm - no 68k required!
So for us it doesn't matter a damn that Motorola didn't advance 68k more. Commodore would have gone bankrupt anyway due to consumers rejecting the Amiga, and any other advantages 68k might have had are now moot. Last edited by bhabbott on 06-Dec-2024 at 05:55 AM.
Status: Offline

bhabbott (Cult Member; Joined: 6-Jun-2018; Posts: 507; From: Aotearoa)
Re: what is wrong with 68k, posted on 6-Dec-2024 5:48:26 [ #263 ]

@Hammer
Quote:
Hammer wrote:
During 1985 OS/2 meeting, IBM forced the 16-bit 286 CPU issue over Bill Gates' pro-386 32-bit argument. IBM OS/2 1.x targeted the "brain dead" 286 CPU.
IBM doesn't want 32-bit micro-computers to step on IBM's other 32-bit products. 68000 would be rejected as per the OS/2 1985 meeting.
System 9000 was developed by IBM Instruments, Inc., an IBM subsidiary established in 1980 that focused on selling scientific and technical instruments as well as the computer equipment designed to control, log, or process these instruments. |
IBM made a lot of bad decisions for sure. But they accidentally made one good one (the PC), and that is all that matters. They just had to give the industry something that they could run with without too many restraints, and the PC did that in spades. The System 9000 just shows that they could have done it with 68k instead, if they wanted to. Had they done that we wouldn't be talking about the 'brain-dead' 286, because it probably wouldn't even exist (Intel would have stuck with 80186, which was great for embedded applications but not quite compatible enough for the PC).
Quote:
Bill Gates labeled the 286 as "brain dead". |
He was wrong though. It was only 'brain-dead' when used in the PC, which had to operate in Real mode for BIOS and DOS functionality.
Quote:
Meanwhile, for Windows 2.x, Microsoft worked on Compaq's 386 projects along with Intel. MS Excel team (with experience from Apple's MacOS GUI) and Compaq co-funded Windows 2.x 386 R&D. |
Meanwhile, Windows 2.x ran on 80286.
Quote:
Compare IBM's System 9002 (68K) efforts against Apple Mac (68K)'s 3rd party "next-gen" GUI business application success e.g. Adobe Postscript, Aldus Page Maker GUI, MS Word GUI, MS Excel GUI and etc. |
Thank Jobs for that. He pushed for the Mac as a desktop publishing tool, even to the point of rejecting a color display because it wasn't 'what you see is what you get'.
IBM didn't have that vision. The System 9000 was tailored for a different market that already existed. So it lacked the innovation required to reach new markets that would make it popular. Nothing wrong with that though. If I was running a lab I would have taken it over a PC or Mac. If I could run 'business' apps on it too that would be a bonus. Quote:
MS Word crushed IBM Display Write. |
Not just Word. I bought an IBM spreadsheet program for my JX. It worked well but was a bit different from the 'industry standard' of designating rows and columns by letter and number. So of course it wasn't popular.
You see, people don't want choice. If they learnt how to use one word processor or spreadsheet, they don't want to have to learn another one - even if it's better. And they don't want to be bothered with compatibility issues. Even something as simple as a slightly different font could be enough to put them off. We can't blame them for that. When you just want to get a job done these issues are very frustrating. So of course people stick with what they know because the alternatives are too risky. Quote:
A3000 with Unix does not have credible GUI business software. |
That wasn't the purpose of Unix. Have you any appreciation of what Unix workstations were actually used for?
However the Amiga did have some excellent GUI business apps. I know because I used them in my businesses. The only problem is they weren't PC apps.
As an example, Easy Ledgers was an excellent GUI accounting and Point of Sale package that blew away the 'industry standard' DOS programs used in New Zealand which were written in QuickBASIC. But my accountant hated it because he couldn't just grab the database and open it in his own accounting package (which of course used a secret proprietary format). I worked around that by reformatting the reports using an AmigaBASIC program I wrote. You see the problem here - most office workers wouldn't have a clue how to do that - ergo Amiga bad, PC good.
Last edited by bhabbott on 06-Dec-2024 at 05:52 AM.
Status: Offline

cdimauro (Elite Member; Joined: 29-Oct-2012; Posts: 4127; From: Germany)
Re: what is wrong with 68k, posted on 6-Dec-2024 5:58:00 [ #264 ]

@bhabbott
Quote:
bhabbott wrote: @Hammer
Quote:
Hammer wrote:
Cyrix 6x86's superior integer (dual ALU) advantage is backed by Pentium's 64-bit bus, which is
|
Funny, ISTR that the Cyrix 6x86 was a dog compared to Pentium. Am I wrong? |
Only for the FPU. Quote:
Quote:
not the case for 68060's 32-bit bus. |
Yes, the Pentium's 64 bit bus did give it an advantage - in being able to use lower speed memory without slowing down. But the CPU itself was still only 32-bit, right? |
Yes. Quote:
Perhaps the wider bus was what allowed the FPU to go faster. |
Yes, because it was able to directly load 64-bit values (64-bit ints and FP64).
However, a 64-bit bus was very good not only for the FPU. In fact, you can fill the caches more quickly (I mean: you can load twice the data in the same time). Quote:
IIRC some Amiga accelerator cards used interleaved RAM to boost memory speed. |
That's a different thing and other systems (including PCs) had interleaved memory. Quote:
bhabbott wrote:
If Motorola had been a bit quicker out the gate with 68000 things might have been a lot different. Had IBM chosen 68000 for the PC, Motorola would be the kingpin and Intel would have struggled. But that's not likely because IBM didn't want to make the PC too good or it would compete against their own products. That's why they chose the 'toy' 8088 CPU over the 8086 - too afraid to have a 16-bit bus lest it make the machine too 'professional'. Then when PC sales exceeded their wildest expectations they changed their minds, but were locked into x86.
BTW in 1982 IBM did make a 68000-based computer, the System 9000, originally intended to be a laboratory computer. It had an 8 MHz 68000, multitasking real-time OS stored on 128k ROM, 128k RAM expandable to 5 MB, Motorola VME system bus, 80 × 30 text and 768 × 480 graphics, various I/O ports for communicating with laboratory instruments, and a sophisticated version of BASIC for programming. For storage it used a 5.25" or 8" floppy drive and up to four 10MB hard drives.
That gives you some idea of what the PC might have been like with a 68000 CPU. |
The main problem is that Motorola had no second source, nor was the 68000 ready when IBM was building the first PC.
Plus, it cost too much (Intel sold its CPU for $5). |
Status: Offline

cdimauro (Elite Member; Joined: 29-Oct-2012; Posts: 4127; From: Germany)
Re: what is wrong with 68k, posted on 6-Dec-2024 6:05:30 [ #265 ]

@minator
Quote:
minator wrote: @cdimauro
Quote:
But ARM is clearly a CISC member. No doubt on that. |
If you go by the definition John Mashey describes in the post I linked earlier, Arm isn't pure RISC, but it's very clearly not CISC. |
If you look at all of what Mashey has written (I mean: not only the list of features) and compare it to the ARM, then you'll see why I say it's a CISC. Quote:
Quote:
Thumb was created by ARM EXPLICITLY AND SOLELY for getting a much better code density, because the original ARM architecture sucked very badly at that. |
Thumb was created for Arm to get a deal with a customer in the mobile phone space, back when memory in phones was very constrained. |
Which confirms my point. Quote:
Quote:
That was the only reason and also the reason why ARM had a HUGE success on the embedded market. |
Arm were probably successful for many different reasons, but the biggest was low power. It was low power from the beginning because Acorn didn't want to use a ceramic chip case; plastic was cheaper, but could only withstand a certain amount of heat. |
Right as well. And it was also cheap (only 25k transistors used -> very small area). Quote:
Arm the company exists because Apple wanted a low power chip for the Newton. ARM was spun out as a separate company because they didn't want to buy the chip from a competitor. |
That was its fortune, in the end. Albeit Apple later faded and substantially exited from the ARM joint venture. Quote:
Quote:
The original ARM was quickly surpassed by the new Thumb first, and especially Thumb-2 after, precisely for this reason. |
The first chip with Thumb was in the ARM7 series, 7 years after the original. |
Yes, but again: there's only ONE reason why Thumb was created: purely and solely the code density. Quote:
It compromised performance so Thumb 2 brought back some 32 bit instructions to speed things up a bit. |
Not some: almost all 32-bit instructions of the ARM ISA were added back. The difference is that those instructions cannot be conditionally executed. Quote:
The original ARM instructions were never removed. |
Its successor, T32, is used in the M-series chips | In fact, all Cortex-M chips have only Thumb and not the old ARM ISA: https://en.wikipedia.org/wiki/ARM_Cortex-M#Cortex-M3
"Only Thumb-1 and Thumb-2 instruction sets are supported in Cortex-M architectures; the legacy 32-bit ARM instruction set isn't supported." Quote:
but I believe the 32 bit modes (and Thumb variants) have been completely removed from the latest 64 bit Arm chips. |
Yes. ARM has a clear separation: Thumb for 32-bit systems and A64 for 64-bit systems.
The original ARM ISA has been dead for some years now. With good reasons... |
Status: Offline

bhabbott (Cult Member; Joined: 6-Jun-2018; Posts: 507; From: Aotearoa)
Re: what is wrong with 68k, posted on 6-Dec-2024 23:07:08 [ #266 ]

@cdimauro
Quote:
cdimauro wrote: The main problem is that Motorola had no second source, nor was the 68000 ready when IBM was building the first PC.
Plus, it cost too much (Intel sold its CPU for $5). |
One big problem was that IBM insisted on having large quantities for burn-in testing, and Motorola couldn't initially supply them in quantity. So IBM's staid conservative approach favored Intel.
The other, as you say, was price. IBM wanted a machine to compete against the Apple II and S100 bus CP/M machines, not a 'mainframe on a chip'. They went for the 8088 over the 8086 despite its lower performance because it made the whole system cheaper. It was all about getting costs down, with performance that only had to match existing 8-bitters using a 6502 or Z80.
Most people don't realize just how low they went with the original PC. In its base configuration it only had a miserable 16k of RAM installed, expandable to 64k on the motherboard (no wonder Bill Gates didn't think MS-DOS needed to support more than 640k). The operating system was BASIC in ROM, and the storage media was your own audio cassette recorder. Everything else was an option via plug-in cards.
The standard video was CGA with NTSC composite output to display on your own TV, and the CPU clock frequency was chosen to suit that (the CGA card gets its color clock from the motherboard). To use the full-height 180k 5.25" floppy disk drive you needed a floppy disk controller card, just like the Apple II did. For that you would also need PC-DOS, which - like CP/M - did not have subdirectories. 64 files was the maximum you could put on a disk. If you wanted a (unidirectional) parallel printer port, (unbuffered) serial port, more than 64k RAM or a battery-backed real-time clock, you would have to buy more plug-in cards.
However the monochrome text MDA card (a full-length card stuffed with 65 TTL logic chips plus the BIOS ROM and Motorola 6845 CRTC) had a parallel port built in, which reduced cost in the standard business model that came with a printer. The dedicated mono monitor had a higher horizontal frequency (18.432 kHz) and lower vertical frequency (50 Hz) than NTSC to get nicer looking text, but to avoid flicker used a long-persistence phosphor which blurred movement. This didn't matter much though, because the PC's BIOS print routine (as well as general operation of the computer) was so slow that you generally had to wait anyway. With system specs like these a 68000 CPU would be total overkill. The 8088 was a clever choice because it matched the low system specs of the PC, while providing a path to higher performance in the future via its internal 16-bit architecture and segment registers which extended the address space to 1MB. Its numerous 8-bit opcodes favored an 8-bit data bus, which is why the 8086 is only ~50% faster despite its data bus being twice as wide.
In short the PC was a turd, but could be polished. |
Status: Offline

Hammer (Elite Member; Joined: 9-Mar-2003; Posts: 6134; From: Australia)
Re: what is wrong with 68k, posted on 6-Dec-2024 23:41:58 [ #267 ]

@bhabbott
Quote:
IBM made a lot of bad decisions for sure. But they accidentally made one good one (the PC), and that is all that matters. They just had to give the industry something that they could run with without too many restraints, and the PC did that in spades. The System 9000 just shows that they could have done it with 68k instead, if they wanted to. Had they done that we wouldn't be talking about the 'brain-dead' 286, because it probably wouldn't even exist (Intel would have stuck with 80186, which was great for embedded applications but not quite compatible enough for the PC).
|
1. IBM President John Opel assigned William C. Lowe and Philip Don Estridge as heads of the new Entry Level Systems unit.
2. Refer to IBM's Corporate Management Committee, which provided "Project Chess" funds.
Corporate politics matter.
3. It was 3rd party "killer apps" that sold the IBM PC platform.
4. System 9000's 1982 release was later than the IBM PC's 1981 release. Refer to IBM corporate's release timing demands.
5. IBM Instruments Computer subsidiary released System 9000. Factor in the corporate structure.
Quote:
@bhabbott,
He was wrong though. It was only 'brain-dead' when used in the PC, which had to operate in Real mode for BIOS and DOS functionality.
|
You're wrong.
The 286's protected mode was a dead end.
The Excel and Word teams had experience with the Mac's 68K.
Most of Bill Gates's GUI use-case arguments during his meetings with IBM came from Mac GUI experience.
Microsoft reused its Apple Mac experience and smashed the DOS establishment of Lotus 1-2-3, WordStar, and WordPerfect.
Quote:
@bhabbott
Meanwhile, Windows 2.x ran on 80286.
|
Windows/286 wasn't the future for Microsoft. Dave Cutler was hired in October 1988 and worked on the 32-bit Windows NT (then known as OS/2 3.x) project.
Existing 286 PCs had 386SX/386SLC upgrade paths.
Microsoft engaged in drip-feed marketing of the Windows NT work in progress.
Quote:
@bhabbott,
Thank Jobs for that. He pushed for the Mac as a desktop publishing tool, even to the point of rejecting a color display because it wasn't 'what you see is what you get'.
|
Steve Jobs focused on high-resolution business GUI apps.
Apple released the 256-color Macintosh II and color QuickDraw RTG in March 1987. Macintosh II's graphics chipset has VRAM.
The Macintosh II project was begun by Dhuey and Berkeley during 1985 without the knowledge of Apple co-founder and Macintosh division head Steve Jobs.
Quote:
@bhabbott
IBM didn't have that vision. The System 9000 was tailored for a different market that already existed. So it lacked the innovation required to reach new markets that would make it popular. Nothing wrong with that though. If I was running a lab I would have taken it over a PC or Mac. If I could run 'business' apps on it too that would be a bonus.
|
Factor in IBM's corporate structure. System 9000 wasn't part of IBM's core governance.
Quote:
Not just Word. I bought an IBM spreadsheet program for my JX. It worked well but was a bit different from the 'industry standard' of designating rows and columns by letter and number. So of course it wasn't popular.
|
A "me too" product wouldn't displace the DOS Lotus 1-2-3 establishment.
Quote:
@bhabbott,
You see, people don't want choice. If they learnt how to use one word processor or spreadsheet, they don't want to have to learn another one - even if it's better. And they don't want to be bothered with compatibility issues. Even something as simple as a slightly different font could be enough to put them off. We can't blame them for that. When you just want to get a job done these issues are very frustrating. So of course people stick with what they know because the alternatives are too risky.
|
Microsoft had to battle the DOS establishment of Lotus 1-2-3, WordStar, and WordPerfect.
Quote:
@bhabbott,
That wasn't the purpose of Unix. Have you any appreciation of what Unix workstations were actually used for?
|
Where's ECC memory support?
From Commodore - The Final Years, Quote:
The A3000UX was not well received by the press. Unix World magazine reviewed the system in depth and found that, although it had outstanding graphics and sound compared to competing Unix workstations, it was no bargain at $6998, excluding monitor and tape drive.
The reviewer found fault mainly with the outdated 68030 chip (which had been superseded by the 68040) and the sparse software available. Around 100 of the most popular Unix programs had been compiled to Amix by Commodore, and most of those did not use the Amiga’s remarkable sound and graphics capabilities.
|
Meanwhile, NeXT attracted major 3rd-party software developers from the Apple Mac market.
https://youtu.be/CtnX1EJHbC0?t=285 (internal NeXT video, 1991). In 1990, Sun sold around 40,000 workstations in the professional workstation market.
Quote:
@bhabbott,
However the Amiga did have some excellent GUI business apps. I know because I used them in my businesses. The only problem is they weren't PC apps.
|
I used IntroCAD during my early high school years, but it wasn't a good experience with the A500 (the same problem applies to the A2000/A2000-CR). The A3000 works around the A500's interlace problem with the frame-buffered Amber chip and a PC VGA monitor, but the A3000 wasn't mass-produced like the A500.
Commodore's big-box Amigas weren't cost-reduction optimized like the A500's Far East mass production.
For OCS, A2024 monitor production was in the 5000-unit range.
According to Dataquest November 1989, VGA crossed more than 50 percent market share in 1989 i.e. 56%. http://bitsavers.trailing-edge.com/components/dataquest/0005190_PC_Graphics_Chip_Sets--Product_Analysis_1989.pdf
Low-End PC Graphics Market Share by Standard Type, Estimated Worldwide History and Forecast
Total low-end PC graphics chipset shipment history and forecast:
1987 = 9.2 million, VGA 16.4% market share, i.e. 1.5088 million VGA
1988 = 11.1 million, VGA 34.2%, i.e. 3.79 million VGA
1989 = 13.7 million, VGA 54.6%, i.e. 7.67 million VGA
1990 = 14.3 million, VGA 66.4%, i.e. 9.50 million VGA
1991 = 15.8 million, VGA 76.6%, i.e. 12.10 million VGA
1992 = 16.4 million, VGA 84.2%, i.e. 13.81 million VGA
1993 = 18.3 million, VGA 92.4%, i.e. 16.9 million VGA
https://dosdays.co.uk/topics/Manufacturers/tseng_labs.php By 1991, according to IDC, Tseng Labs held a 25% market share in the total VGA market.
Economies of scale matter.
AGA mass production should have been started earlier.
Apple had an install base of about 14 million when Commodore went bust. Apple's install base was mostly businesses with a high business-resolution baseline.
Quote:
As an example, Easy Ledgers was an excellent GUI accounting and Point of Sale package that blew away the 'industry standard' DOS programs used in New Zealand which were written in QuickBASIC. But my accountant hated it because he couldn't just grab the database and open it in his own accounting package (which of course used a secret proprietary format). I worked around that by reformatting the reports using an AmigaBASIC program I wrote. You see the problem here - most office workers wouldn't have a clue how to do that - ergo Amiga bad, PC good.
|
Note that MS Excel and WinWord made major contributions to MS's revenue base.
For the 1988 time context, name one accounting software company that matched MS's revenue base.
"1988 - With the arrival of Windows 2.0 in 1987, computers start becoming more commonplace in the office. Microsoft becomes the largest PC software company based on global sales" - Reuters
1991: With Mac GUI and DOS versions already available, Quicken launches a version for Windows.
https://en.wikipedia.org/wiki/History_of_Microsoft
On February 16, 1986, Microsoft relocated their headquarters to a corporate office campus in Redmond, Washington. Around one month later, on March 13, the company went public with an IPO, raising US$61 million at US$21.00 per share. By the end of the trading day, the price had risen to US$28.00. In 1987, Microsoft eventually released their first version of OS/2 to OEMs. By then the company was the world's largest producer of software for personal computers—ahead of former leader Lotus Development—and published the three most-popular Macintosh business applications. In July 1987 Microsoft purchased Forethought, the developer of PowerPoint. It was the company's first major acquisition, and gave Microsoft a Silicon Valley base
(skip)
During the transition from MS-DOS to Windows, the success of Microsoft Office allowed the company to gain ground on application-software competitors, such as WordPerfect and Lotus 1-2-3.
On August 8, 1989, Microsoft introduced its most successful office product, Microsoft Office.
You're not factoring in the economies of scale.
Commodore's economies of scale, in annual million-unit volumes, came from the A500 model.
--------------------
I have checked Easy Ledgers rev A6 (1989) and compared it to the Macintosh's Quicken 3.0 (1991): 1. Amiga OCS's resolution is not high enough, and interlace wouldn't be a good experience. 2. Easy Ledgers rev A6's displayed reports, such as the trial balance, look and feel like a DOS text presentation.
https://www.youtube.com/watch?v=RO98SpkhaVk Apple Macintosh's Quicken 3.0 (1991). Reports in WYSIWYG. Intuit released Quicken 1.0 for Windows 3 in 1991, and it's similar to the Mac's 1991 version.
The Amiga can run Mac apps via AMax/AMax II, Emplant, Fusion, and ShapeShifter. For non-multimedia work, I run my A3000 as a Macintosh. A few thousand A3000 unit sales wouldn't change the multi-million A500 install base's business-resolution display issues.
Last edited by Hammer on 07-Dec-2024 at 06:17 AM.
_________________ Amiga 1200 (rev 1D1, KS 3.2, PiStorm32/RPi CM4/Emu68) Amiga 500 (rev 6A, ECS, KS 3.2, PiStorm/RPi 4B/Emu68) Ryzen 9 7950X, DDR5-6000 64 GB RAM, GeForce RTX 4080 16 GB |
Status: Offline

Hammer (Elite Member; Joined: 9-Mar-2003; Posts: 6134; From: Australia)
Re: what is wrong with 68k, posted on 7-Dec-2024 0:54:23 [ #268 ]

Quote:
@bhabbott
So Motorola had design and production problems that kept them on the back foot compared to Intel. But to a large extent it was just luck. Other chip manufacturers struggled to make a reliable 16-bit design unencumbered by legacy baggage. However, in Intel's case it turned out in their favor. The 8088 wasn't doing very well until IBM chose it for the PC, which they did largely because they didn't want a powerful 16-bit CPU. And when they did, it changed everything. Now suddenly x86 was the standard that the industry rushed to embrace, and everyone else (including Motorola) was locked out.
|
Motorola would have missed the IBM PC's 1981 release timing; e.g. the System 9000 was a 1982 release.
Quote:
@bhabbott,
Luckily for Motorola some designers recognized the clear advantages of the 68000 over the 8088 that made it worth spending a little more on. Jay Miner was one of them. But the Amiga struggled to maintain relevance in a world dominated by x86. The Macintosh did too, despite Steve Jobs managing to develop a cult following for it. The PC was bound to win out in the end, dashing Motorola's hopes of continuing to be a major competitor to Intel in the desktop CPU market.
The revenue Intel was receiving meant that they had far more resources to throw into improving their lacklustre x86 design, and the industry was more accommodating than they would be of an incompatible rival. So it wasn't long before the 286 and 386 were rivaling Motorola's CPUs, and once again they found themselves on the backfoot. With the PC outselling 68k systems by 5:1 or more there was little hope of Motorola holding out against x86.
|
Unlike the 386 standard: 1. the 68020's MMU was late, and 2. the 68020 wasn't bundled with an MMU until the 68030's release.
Any 386 PC is Xenix 386 or Windows/386 capable.
Quote:
PPC was a desperate attempt to find a shortcut that would match Pentium performance without the excessive R&D needed to advance 68k. It worked for a while, until Apple finally threw in the towel and joined the dark side.
|
Most of the PowerPC 601's CPU core R&D was done by IBM, with Motorola contributing the 88000-derived 60x bus.
Quote:
Capitalism drove the innovation that gave us what we have today. But capitalism didn't kill the Amiga - consumers did. Put bluntly, they didn't want it. They wanted to know that when they bought 'a' computer, it would work with all the other computer hardware and software out there. They wanted the industry to standardize on one platform, and develop it to the detriment of others. A few didn't of course - they bought Macs and Amigas instead. But they were not enough. Eventually one platform had to dominate for purely practical reasons. Economy of scale and profit would favor the dominant architecture, pushing out all others. And most people consider this to be a good thing.
|
Nope. STMicro was formerly Italian and French state-owned, and ATI used its fabrication services.
https://en.wikipedia.org/wiki/STMicroelectronics
ST was formed in 1987 by the merger of two government-owned semiconductor companies: Italian SGS Microelettronica (where SGS stands for Società Generale Semiconduttori, "General Semiconductor Company"), and French Thomson Semiconducteurs, the semiconductor arm of Thomson.
From https://medium.com/discourse/tsmc-the-taiwanese-titan-be0774531bb : "In the 1970s and 1980s, the Taiwanese government gave the semiconductor industry strategic priority for development."
China wasn't the first country with state intervention in the semiconductor sector.
The US had enough of other countries' state-backed semiconductor intervention and created the CHIPS Act.
My home country has its own version of a "Chips Act". Thanks to China's race to the bottom.
Last edited by Hammer on 07-Dec-2024 at 12:59 AM.
_________________ Amiga 1200 (rev 1D1, KS 3.2, PiStorm32/RPi CM4/Emu68) Amiga 500 (rev 6A, ECS, KS 3.2, PiStorm/RPi 4B/Emu68) Ryzen 9 7950X, DDR5-6000 64 GB RAM, GeForce RTX 4080 16 GB |
Status: Offline

Hammer (Elite Member; Joined: 9-Mar-2003; Posts: 6134; From: Australia)
Re: what is wrong with 68k, posted on 7-Dec-2024 2:14:42 [ #269 ]

@bhabbott
Quote:
Funny, ISTR that the Cyrix 6x86 was a dog compared to Pentium. Am I wrong? |
Only the FPU.
The Cyrix 6x86 (1995) has the following:
- register renaming
- out-of-order completion
- multi-branch prediction
- speculative execution
Quote:
@bhabbott
Yes, the Pentium's 64 bit bus did give it an advantage - in being able to use lower speed memory without slowing down.
|
PC-100 SDRAM didn't exist in 1993.
Quote:
@bhabbott
IIRC some Amiga accelerator cards used interleaved RAM to boost memory speed.
|
It wouldn't boost 68060's 32-bit bus.
Quote:
@bhabbott Amazing, I learn more every day! OTOH the 68060 didn't need twice as many instructions to do anything.
|
Show 68060 rev 6 @ 100 MHz beating the Pentium 100's Quake results.
A Warp1260 (68060 @ 100 MHz + RTG + DDR3) wouldn't beat a Pentium 100 with an S3 Trio64V+.
https://www.youtube.com/watch?v=0_dW-21gdkw Warp1260 68060 @ 100 MHz + RTG running Quake. Last edited by Hammer on 07-Dec-2024 at 02:36 AM.
_________________ Amiga 1200 (rev 1D1, KS 3.2, PiStorm32/RPi CM4/Emu68) Amiga 500 (rev 6A, ECS, KS 3.2, PiStorm/RPi 4B/Emu68) Ryzen 9 7950X, DDR5-6000 64 GB RAM, GeForce RTX 4080 16 GB |
Status: Offline

Hammer (Elite Member; Joined: 9-Mar-2003; Posts: 6134; From: Australia)
Re: what is wrong with 68k, posted on 7-Dec-2024 2:29:50 [ #270 ]

@bhabbott
Quote:
And how much did Intel burn on Itanium? |
Intel believed in HP's PA-WideWord Kool-Aid.
Intel had a backup 64-bit x86 plan, i.e. Project Yamhill.
PA-WideWord was the follow-on architecture to PA-RISC.
_________________ Amiga 1200 (rev 1D1, KS 3.2, PiStorm32/RPi CM4/Emu68) Amiga 500 (rev 6A, ECS, KS 3.2, PiStorm/RPi 4B/Emu68) Ryzen 9 7950X, DDR5-6000 64 GB RAM, GeForce RTX 4080 16 GB |
Status: Offline

cdimauro (Elite Member; Joined: 29-Oct-2012; Posts: 4127; From: Germany)
Re: what is wrong with 68k, posted on 7-Dec-2024 4:58:27 [ #271 ]

@matthey
Quote:
matthey wrote:
cdimauro Quote:
The performance of MUL/DIV instructions was good enough when the 68020 was introduced. Also, those instructions aren't so much used (especially the DIV).
|
Hammer exaggerates the importance of MUL/DIV performance. The 68020 MUL/DIV instructions were introduced when many CPUs lacked hardware instructions and 8x8 multiplication table lookups were more common. Several RISC CPUs had higher performance MUL/DIV instructions before the 68040, which was slow to arrive and had mediocre performance. |
Indeed. Contextualizing is very important, both in terms of history (position on the market) and usage (how much code used such instructions). Quote:
cdimauro Quote:
Me neither.
Interesting. Do you have any post from Mitch where he explains some technical reasons why he doesn't like it?
|
Searching for "Mitch Alsup" and "RISC-V" brings up several hits. The first hit for me just happens to be Mitch talking about immediates and displacements in the code vs worse alternatives.
RISC-V worse and better than 1980s RISC https://www.realworldtech.com/forum/?threadid=219885&curpostid=220922
There are many RISC-V fans which Mitch has sparred with on several occasions. |
Thanks. I've started reading this thread, and it's a pleasure. So far I'm pretty much in line with Mitch's statements. Quote:
RISC-V was an improvement in many ways, but more knowledgeable developers would not be surprised to see a RISC-VI ISA do-over, which seems to be the predominant RISC philosophy today. |
As someone else said in the above thread (and as I've been repeating for some years), RISC-V is too academic and far from the real world. That's why it's a weak ISA.
The base ISA is pretty much useless. In fact, despite what some big experts like Jim Keller said some time ago (more or less: "8 instructions should be enough for a CPU"), it becomes useful only by adding several extensions. Guess what: Keller's company is selling products based on RISC-V, but on a more advanced profile (sporting TONS of extensions!): https://tenstorrent.com/ip/tt-ascalon For reference: https://github.com/riscv/riscv-profiles/blob/main/src/rva23-profile.adoc Coherent people...
Anyway, I don't believe that a RISC-VI will ever come. Berkeley's RISC family was a complete failure and only good for teaching at the university (again: "academics"...). They were very lucky with RISC-V, transforming a university summer project into something which gained attention from commercial vendors, primarily because it required no licenses and the ISA, with the drafted extensions, looked decent. Quote:
cdimauro Quote:
There's a clear advantage, no doubt on that: once you're able to pack immediates and displacements/offsets on the same instruction, then you've removed more instructions to be executed and more accesses to memory, providing an overall benefit to both performance and power usage.
My only concern is allowing 64-bit everywhere, having very long instructions which can cause implementation problems (require much larger instruction buffer, as you stated), for questionable benefits.
While 64-bit immediates are fine and welcome (I also support them on both NEx64T and my new architecture, but only limited to certain instruction types), 64-bit absolute addresses/displacements/offsets aren't used enough to justify the additional complication (because you can have 64 bits for immediates + 64 bits for abs/disp/offs = 16 bytes just for them -> very long instructions to handle).
That's the only critical point for me.
|
It is difficult to know how big is too big for immediates/displacements/addresses without an implementation for testing. |
It's easy for displacements and addresses: how often could 64-bit values for them be used and be useful in real code? 32-bit values already allow referencing 4GB of memory, either the bottom (unsigned) or bottom+top (signed) 4GB, or relative to the PC. That should be more than enough both for referencing code and local data. Bigger arrays are usually allocated dynamically, so they are accessed through pointers -> no direct reference.
Immediates are different: 64-bit values are rare for integers, but much more used/useful for floating-point values. Quote:
Even then, limiting the max instruction size is likely a tradeoff sacrificing some performance for lower power and area. |
Exactly. This allows to use large immediates while keeping the maximum length on reasonable values. Quote:
cdimauro Quote:
Absolutely. It was a shame. However, with such a 6-byte max length, there's nothing else that could have been done (FPU instructions are already 32-bit; the maximum is having 16-bit immediates).
|
Limiting the ColdFire max instruction size to 8 bytes would have allowed single precision immediates which many double precision immediates can be exactly converted/compressed into with the optimization Frank Wille and I developed and is working in Vasm/Vbcc. Gunnar suggested that 8 byte instructions could be supported by ColdFire with minimal degradation in core timing on the NXP forums. |
Too late. Probably NXP was already moving towards ARM at the time. Quote:
Did the ColdFire developers care about performance anymore after ColdFire was likely arbitrarily castrated enough to satisfy their PPC zealot masters? |
Well, you can see how much traction PowerPC gained on the embedded market...
PowerPC: an ISA good for... what? Quote:
cdimauro Quote:
Not exactly. x64 allows 64-bit absolute addresses only with the two special MOV Abs64,RAX and MOV RAX,Abs64 instructions. Similarly for 64-bit immediates: they are only allowed with the MOV RAX,Imm64 instruction.
That's because there's an upper 15-byte limit to the length of the instruction. Plus, 64-bit absolute addresses are very rare (only recently, with APX, Intel added a JMP Abs64 instruction).
|
The x86-64 15 byte instruction limit may be due to inefficient decoding. |
No, it's a hard limit which Intel has had for a very long time (at least since the 80386). Quote:
Decoding is performed 8-bits at a time and there are more wasted bits per instruction compared to a 16-bit base encoding which can be anywhere from half a byte to well over a byte per instruction depending on optimizations, int/fp/SIMD code distribution, etc.
x86-64 instruction lengths | frequencies (cumulative)
1B   4%
2B  18%  (22%)
3B  23%  (45%)
4B  17%  (62%)
5B  18%  (80%)
6B   7%  (87%)
7B   8%  (95%)
8B   4%  (99%)
On average, 3 bytes have to be examined one at a time on x86-64 to decode roughly the equivalent of 2 bytes at once with a 16-bit base encoding. About 5 bytes have to be examined one at a time to reach the equivalent of 4 bytes (2x2B) with a 16-bit base encoding. |
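(A quick back-of-envelope sketch follows - hypothetical Python, using only the frequencies quoted above - that reproduces the cumulative column and the implied mean instruction length of roughly 4 bytes.)
Code:
# Assumed input: the x86-64 instruction-length distribution quoted above (1..8 bytes).
freq = {1: 0.04, 2: 0.18, 3: 0.23, 4: 0.17, 5: 0.18, 6: 0.07, 7: 0.08, 8: 0.04}

cumulative = 0.0
for size in sorted(freq):
    cumulative += freq[size]
    print(f"{size}B  {freq[size]:>4.0%}  cumulative {cumulative:.0%}")

# Weighted mean length, normalized because the quoted shares only sum to 99%.
mean_len = sum(size * share for size, share in freq.items()) / sum(freq.values())
print(f"mean instruction length ~ {mean_len:.2f} bytes")  # ~4.0 with these numbers
|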
It's a bit more complicated, because of not only the prefixes but also the EA extension byte (ModRM), which is similar to the 68k's extension word and embeds the length of the displacement (if any).
Both ISAs share this problem, unfortunately, which makes their decoding harder.
Prefixes could be handled quite easily, because they are 8-bit and it's relatively fast to apply comparators to the bytes of a (code) cache line. After that you have a bit mask as a result, which allows you to quickly catch the beginning of the real opcode, and derive everything else. It requires transistors, of course, and it's expensive in terms of drawn power (which is the reason why the micro-op cache was introduced: to completely detach -> turn off the instruction decoders).
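To make that idea concrete, here is a minimal sketch (hypothetical Python, not taken from any real decoder): classify every byte of a fetched line against the prefix encodings and collect the matches into a bit mask. A byte that matches a prefix encoding may still belong to an immediate or displacement, so a real decoder would only use such a mask as a first-pass hint.
Code:
# x86-64 legacy prefixes: LOCK/REPNE/REP, segment overrides, operand/address-size overrides.
LEGACY_PREFIXES = {0xF0, 0xF2, 0xF3, 0x2E, 0x36, 0x3E, 0x26, 0x64, 0x65, 0x66, 0x67}

def prefix_mask(line: bytes) -> int:
    """Return a mask with bit i set when byte i matches a prefix encoding.

    In 64-bit mode the bytes 0x40-0x4F are REX prefixes, so match them as well.
    """
    mask = 0
    for i, b in enumerate(line):
        if b in LEGACY_PREFIXES or 0x40 <= b <= 0x4F:
            mask |= 1 << i
    return mask

# Example: REX.W mov rbx, rax (48 89 C3) followed by rep movsb (F3 A4).
print(bin(prefix_mask(bytes([0x48, 0x89, 0xC3, 0xF3, 0xA4]))))  # bits 0 and 3 set
|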
The problem remains with the EA byte (and word for the 68ks), which requires a table to understand which instructions have an EA, and then checking the extension word if it's used. That's more tricky IMO, but I'm not an expert in this area and I prefer to stop here. Quote:
Size decode is also easier without prefixes and override encoding bytes. |
Consider that x86 is dead in terms of development and evolution, and x64 is going towards a set of established extensions/prefixes, which are much easier to catch and already encode all the information (they don't require other prefixes). REX is the standard for x64's general-purpose instructions, and you use the LOCK prefix only when you need atomic access to memory. AVX has its two prefixes for the SIMD instructions (removing all the others used with MMX and SSE). AVX-512 has its single prefix for SIMD instructions (no other prefixes are needed). APX has a new REX2 prefix and reuses the AVX & AVX-512 prefixes.
In short, I expect that applications will become more regular in the future, when MMX and SSE are dropped and AVX becomes the minimum requirement. This allows chip vendors like Intel and AMD to spend fewer resources on decoding legacy code and to focus on the new extensions, minimizing both transistor usage & drawn power. But the x86 heritage, unfortunately, will always be there. Quote:
I expect the instruction size of over 90% of 68k instructions can be determined by looking at the first 2 bytes and 97% by examining the first 4 bytes. The 68k average decoding case is much simpler with less wasted bits which applies to most other 16-bit VLEs |
Yes, but this is also the main problem with the 68k: you need to compare 16-bit values instead of 8-bit ones, which requires more transistors & power.
It's not a big problem if you have regular opcode structures, because you can use fewer bits for the comparisons (by checking specific portions of the opcodes).
Time has passed, and I no longer have a good view of the situation for the 68ks. So, I don't know how complicated a decoder for this architecture would be. Quote:
as well, like your NEx64T. |
Yes, but it's much more regular and requires less than 16 bits (max 14 bits, and only for a specific group of instructions) for decoding the instruction length (and all related information for immediates, displacements/offsets, and absolute addresses).
With my new ISA I need a maximum of 15 bits for decoding everything, because I had room to add the mem-mem-mem instructions (but that's the only group of instructions which needs 15 bits, and it's fairly easy to catch). Besides this single case, it's much more regular compared to NEx64T (which was already an enormous improvement compared to x86/x64). Quote:
The 68k does allow up to 22 byte instructions but decoding 16-bits at a time is in some ways more like 11 bytes when compared to x86(-64). |
That's exactly the reason why I've adopted a 16-bit VLE: in case you need to introduce tag bits in the code cache, I need half the amount compared to x86/x64 for the same instruction lengths. Quote:
The 68k could easily limit instructions to 16 bytes max and most existing programs wouldn't need any changes. For a 68k64 ISA, a 22 byte limit would allow 64 bit immediates/displacements without double memory indirect modes which the 68060 could handle without much performance loss. |
Have you removed the double memory indirect modes on your 68k64? Quote:
With a 8 byte/cycle instruction fetch instead of 4 byte/cycle fetch and wider than 96-bit fixed length instructions in the instruction buffer, the 68060 could have superscalar/parallel executed more large (greater than 6 byte) instructions rather than losing a cycle or two when encountering them. |
Nowadays you can also use more bits for the micro-ops, and go beyond the 96 used by the 68060. x86 CPUs should use around 110-120 bits. So, even using 128 bits isn't a handicap. Quote:
The 68060 balanced power with performance where x86(-64) fixed length macro-op instructions grew in size with performance designed cores. |
Indeed. But I don't expect that you just want to recycle the same 68060 core design, right? I mean, a more modern (micro)architecture doesn't mean just adding more caches, increasing the instruction buffer, and widening the data bus. Quote:
cdimauro Quote:
Oh, nice. Now I know why the Libre SoC vector unit is so different from what I've seen, and it looks similar to the 66000 one (which has... no vector unit, but allows doing vector processing).
|
Vector units instead of SIMD units seem to be popular for RISC-V, perhaps because the love for the ISA stops short of assembly coding. SiFive won the NASA contract to replace PPC with a SOC using a CISC like U74 CPU core design and a vector unit. |
It's not only RISC-V. ARM also introduced vector units with SVE and now SVE2, because it's much easier to handle both at the microarchitecture level and in software development, while also being future-proof (write the code once and support any microarchitecture).
The good thing about ARM is that it reuses NEON's SIMD registers and extends them to vectors, while also keeping both ISA extensions. So, you can freely use SIMD and vector instructions (which I also do with my architectures).
That's something which RISC-V lacks, AFAIR (the SIMD and vector extensions are independent/separate: you use either one or the other).
BTW, vector assembly coding is more complicated with RISC-V, but a bit easier with ARM. Another cool ISA which I look at (and from which I took inspiration for some parts of the vector extension) is MRISC32: https://mrisc32.bitsnbites.eu
Vector coding is way more fun on my architectures (especially my last one) even in assembly. Quote:
I wonder if SiFive looked at the Libre SoC vector unit considering the project was originally supposed to be RISC-V based. |
No, they look different. |
Status: Offline

cdimauro (Elite Member; Joined: 29-Oct-2012; Posts: 4127; From: Germany)
Re: what is wrong with 68k, posted on 7-Dec-2024 5:01:43 [ #272 ]

@agami
Quote:
agami wrote: @thread
This is one of those perfect examples of why there is no right or wrong, only opinion.
Inherently, there is nothing "wrong" with 68k. On the contrary, many will point out all the things that are "right" with it, and that it was Motorola/industry that made wrong decisions. But that is the very crux of the matter: It was the (short-sighted) opinion of Motorola execs who wanted to pursue more profitable projects for which they deemed the 68k unsuited, and it was the opinion of the industry/market that cheaper brute-force designs were preferable over more costly elegant designs.
Many companies have and still make the mistake of thinking their competitor is another company or market trends, when in truth they are competing with themselves. Like individuals, companies are often their own worst enemies and can't get out of their own way.
In my opinion, it's wrong that 68k was sidelined by Motorola's business struggles and straw-clutching of PowerPC via the AIM alliance. Almost as wrong as the loss of Amiga as an alternative computing approach to Wintel and Mac. It is capitalism after all, and it required its sacrifice of an industry wide focus on RISC so that out of the rubble we could find the balance between RISC and CISC.
|
To be brief, and besides the lack of vision (and courage) of its management: Motorola was another victim of the RISC propaganda. |
Status: Offline

cdimauro (Elite Member; Joined: 29-Oct-2012; Posts: 4127; From: Germany)
Re: what is wrong with 68k, posted on 7-Dec-2024 5:12:50 [ #273 ]

@bhabbott
Quote:
bhabbott wrote: @cdimauro
Quote:
cdimauro wrote: The main problem is that Motorola had no second source, nor was the 68000 ready when IBM was building the first PC.
Plus, it cost too much (Intel sold its CPU for $5). |
One big problem was that IBM insisted on having large quantities for burn-in testing, and Motorola couldn't initially supply them in quantity. So IBM's staid conservative approach favored Intel.
The other, as you say, was price. IBM wanted a machine to compete against the Apple II and S100 bus CP/M machines, not a 'mainframe on a chip'. They went for the 8088 over the 8086 despite its lower performance because it made the whole system cheaper. It was all about getting costs down, with performance that only had to match existing 8-bitters using a 6502 or Z80. |
Correct. But if Motorola wasn't able to deliver, then it's not IBM's fault: they needed CPUs for their PCs and didn't want to wait for Motorola, of course. Quote:
Most people don't realize just how low they went with the original PC. In its base configuration it only had a miserable 16k of RAM installed, expandable to 64k on the motherboard (no wonder Bill Gates didn't think MS-DOS needed to support more than 640k). |
It wasn't Gates who stated this.
Anyway, 640kB is fair enough for a 1MB address space: proportionally, the Amiga had exactly the same limit (RAM stopped at $A00000, the PC's at $A0000). Quote:
The operating system was BASIC in ROM, and the storage media was your own audio cassette recorder. Everything else was an option via plug-in cards.
The standard video was CGA with NTSC composite output to display on your own TV, and the CPU clock frequency was chosen to suit that (the CGA card gets its color clock from the motherboard). To use the full-height 180k 5.25" floppy disk drive you needed a floppy disk controller card, just like the Apple II did. For that you would also need PC-DOS, which - like CP/M - did not have subdirectories. 64 files was the maximum you could put on a disk. If you wanted a (unidirectional) parallel printer port, (unbuffered) serial port, more than 64k RAM or a battery-backed real-time clock, you would have to buy more plug-in cards.
However the monochrome text MDA card (a full-length card stuffed with 65 TTL logic chips plus the BIOS ROM and Motorola 6845 CRTC) had a parallel port built in, which reduced cost in the standard business model that came with a printer. The dedicated mono monitor had a higher horizontal frequency (18.432 kHz) and lower vertical frequency (50Hz) than NTSC to get nicer looking text, but to avoid flicker used a long-persistence phosphor which blurred movement. This didn't matter much though because the PC's BIOS print routine (as well general operation of the computer) was so slow that you generally had to wait anyway. With system specs like these a 68000 CPU would be total overkill. The 8088 was a clever choice because it matched the low system specs of the PC, while providing a path to higher performance in the future via its internal 16-bit architecture and segment registers which extended the address space to 1MB. Its numerous 8-bit opcodes favored an 8-bit data bus, which is why the 8086 is only ~50% faster despite its data bus being twice as wide.
In short the PC was a turd, but could be polished. |
Do you understand that the first PC was delivered in 1981? Of course it was very, very limited: what do you expect?
Just check what the market was offering at the time, including Apple and Commodore machines.
The Amiga arrived FOUR years later and, guess what, offered much, much more. Clap clap clap...
BTW, CGA and MDA had a programmable display controller: something which on the Amiga arrived only with the ECS (great "enhancement")... |
Status: Offline

cdimauro (Elite Member; Joined: 29-Oct-2012; Posts: 4127; From: Germany)
Re: what is wrong with 68k, posted on 7-Dec-2024 5:17:20 [ #274 ]

@Hammer
Quote:
Hammer wrote:
Quote:
@bhabbott
Capitalism drove the innovation that gave us what we have today. But capitalism didn't kill the Amiga - consumers did. Put bluntly, they didn't want it. They wanted to know that when they bought 'a' computer, it would work with all the other computer hardware and software out there. They wanted the industry to standardize on one platform, and develop it to the detriment of others. A few didn't of course - they bought Macs and Amigas instead. But they were not enough. Eventually one platform had to dominate for purely practical reasons. Economy of scale and profit would favor the dominant architecture, pushing out all others. And most people consider this to be a good thing.
|
Nope. STMicro was formerly Italian and French state-owned, and ATI used its fabrication services.
https://en.wikipedia.org/wiki/STMicroelectronics
ST was formed in 1987 by the merger of two government-owned semiconductor companies: Italian SGS Microelettronica (where SGS stands for Società Generale Semiconduttori, "General Semiconductor Company"), and French Thomson Semiconducteurs, the semiconductor arm of Thomson.
|
Correct. BTW, my mother-in-law worked for ATES in Catania: https://en.wikipedia.org/wiki/STMicroelectronics#History
SGS Microelettronica originated in 1972 from a previous merger of two companies:
- ATES (Aquila Tubi e Semiconduttori), a vacuum tube and semiconductor maker headquartered in L'Aquila, the regional capital of the region of Abruzzo in Southern Italy, which in 1961 changed its name to Azienda Tecnica ed Elettronica del Sud and relocated its manufacturing plant to the Industrial Zone of Catania, in Sicily;
- Società Generale Semiconduttori (founded in 1957 by Jewish-Italian engineer, politician, and industrialist Adriano Olivetti).
I also did my internship at STM, and worked with them for my bachelor thesis.
/OT |
Status: Offline

Hammer (Elite Member; Joined: 9-Mar-2003; Posts: 6134; From: Australia)
Re: what is wrong with 68k, posted on 7-Dec-2024 6:02:37 [ #275 ]

@cdimauro
Semiconductor chips have reached national-security-level importance.
Taiwan's participation in the PC market needs to factor in Taiwanese state support.
Like the BBC Micro, the Raspberry Pi Foundation has state support via the UK's education initiatives.
Commodore's C64 beat the state-funded BBC Micro. Last edited by Hammer on 07-Dec-2024 at 06:05 AM.
_________________ Amiga 1200 (rev 1D1, KS 3.2, PiStorm32/RPi CM4/Emu68) Amiga 500 (rev 6A, ECS, KS 3.2, PiStorm/RPi 4B/Emu68) Ryzen 9 7950X, DDR5-6000 64 GB RAM, GeForce RTX 4080 16 GB |
Status: Offline

cdimauro (Elite Member; Joined: 29-Oct-2012; Posts: 4127; From: Germany)
Re: what is wrong with 68k, posted on 7-Dec-2024 6:06:52 [ #276 ]

@Hammer
Quote:
Hammer wrote: @cdimauro
Semiconductor chips have reached national security level issues.
Taiwan's participation in the PC market needs to factor in state support.
Like the BBC Micro, RPi Foundation has state support via the UK's education initiatives. |
Correct again. The situation is different now, and the silicon industry is among the most strategic and important ones in the world panorama.
The strongest is whoever owns the best technology. Quote:
Commodore's C64 beat the state-funded BBC Micro. |
Indeed. Which proves that a good product can have success regardless of the money / investments. |
Status: Offline

Hammer (Elite Member; Joined: 9-Mar-2003; Posts: 6134; From: Australia)
Re: what is wrong with 68k, posted on 8-Dec-2024 10:51:05 [ #277 ]

@cdimauro
Taiwan's state support started in the 1970s and ran into the 1980s. The US tolerated the trade BS from Taiwan.
Trump's argument on trade BS is correct.
https://medium.com/discourse/tsmc-the-taiwanese-titan-be0774531bb
"In the 1970s and 1980s, the Taiwanese government gave the semiconductor industry strategic priority for development."
Commodore Germany's big-box Amigas didn't have a chance. For some countries, there were extra national security issues even back in the 1970s and 1980s. -----------
Last edited by Hammer on 08-Dec-2024 at 10:51 AM.
_________________ Amiga 1200 (rev 1D1, KS 3.2, PiStorm32/RPi CM4/Emu68) Amiga 500 (rev 6A, ECS, KS 3.2, PiStorm/RPi 4B/Emu68) Ryzen 9 7950X, DDR5-6000 64 GB RAM, GeForce RTX 4080 16 GB |
Status: Offline

kolla (Elite Member; Joined: 20-Aug-2003; Posts: 3337; From: Trondheim, Norway)
Re: what is wrong with 68k, posted on 8-Dec-2024 11:17:50 [ #278 ]
Last edited by kolla on 08-Dec-2024 at 11:25 AM.
_________________ B5D6A1D019D5D45BCC56F4782AC220D8B3E2A6CC |
Status: Offline

ppcamiga1 (Cult Member; Joined: 23-Aug-2015; Posts: 985; From: Unknown)
Re: what is wrong with 68k, posted on 8-Dec-2024 11:46:20 [ #279 ]
So what is wrong with 68k? There is still nothing like the good old OCS blitter updated to, for example, 1992: nothing easy to use with support for 16-bit color, textures, Z-buffer, and clipping. 16 years after the start of this whole NatAmi/Apollo/Vampire thing, 68k is where it was in 1992. Many 68k followers have problems with a PC that has a CPU other than x86, but they have failed to provide something even at PS1 level.
Status: Offline

Karlos (Elite Member; Joined: 24-Aug-2003; Posts: 4817; From: As-sassin-aaate! As-sassin-aaate! Ooh! We forgot the ammunition!)
Re: what is wrong with 68k, posted on 8-Dec-2024 18:59:12 [ #280 ]

@ppcamiga1
Please correct your dosage and/or medication. You are complaining that the 68K doesn't support a range of fixed function 3D stuff that's the purview of a GPU. That can't make sense, even in your head. _________________ Doing stupid things for fun... |
Status: Offline