matthey
Re: DoomAttack (Akiko C2P) on Amiga CD32 + Fast RAM (Wicher CD32) Posted on 23-Jun-2024 17:11:57 [ #121 ]
Elite Member
Joined: 14-Mar-2007 Posts: 2380
From: Kansas
CosmosUnivers Quote:
Commodore had a success plan, they sold millions of computers. They found the recipe. They created a real market. For me, they failed because of too many software and hardware mistakes...
|
OneTimer1 Quote:
It's not that simple. C= had a huge success with the PET, one of the first PCs on the market.
|
It is simpler today. CBM bought MOS because there were no fabless semiconductor businesses back then. The chip foundry needed to be owned, or a business partner needed to own one. Small design teams working for fabless semiconductor businesses using modern design tools and commodity fab services are common and relatively easy today. ARM built an a la carte embedded chip market based on it, and the Raspberry Pi Foundation/Ltd. built a thriving business based on ARM SoC chips. The newcomer Raspberry Pi has been so successful that it has sold over 50 million units, is publicly traded on the LSE as RPI, has received investment from ARM, and is developing and producing successful SoCs like the RP2040, selling millions of chips as a fabless semiconductor business. The original startup expected to sell only 1000 units and was funded by only 6 individuals.
https://www.zdnet.com/article/we-thought-wed-sell-1000-the-inside-story-of-the-raspberry-pi/ Quote:
"We honestly did think we would sell about 1,000, maybe 10,000 in our wildest dreams. We thought we would make a small number and give them out to people who might want to come and read computer science at Cambridge," he told ZDNet.
The first inkling of the fervour the credit card-sized board would create came in May 2011, when the first public outing of the Pi in a BBC video generated some 600,000 views on YouTube.
Upton and his colleagues revised their initial run of boards up to 10,000, thinking that would be more than enough to meet demand.
It wasn't. The 10,000 boards sold out within hours of going on sale in February last year, with an incredible 100,000 boards ordered on that first day.
...
Meeting demand far in excess of what the Foundation planned for posed a challenge. As the Pi was getting ready to launch, the operation to build and ship the boards — from booking factory time to purchasing the chips — fell to the relatively modest resources of the Raspberry Pi Foundation, a charitable body initially funded by loans from Upton and five other trustees.
|
RJ Mical estimated it would take $49 million to take the Amiga from design to market if starting over which is something like $110 million today adjusted for inflation. The RPi was likely brought to market with a few million dollars and likely less than Trevor personally wasted on PPC Amiga1 hardware to sell maybe 5000 units compared to 50 million RPi units. Also, the Amiga1 failure has been on the market for another decade compared to the RPi. The problem is not the Amiga either as THEA500 Mini has shown with hundreds of thousands of handicapped toys sold. The problem is the brain dead mentality that PPC is good, that Amiga1 hardware has adequate 68k Amiga compatibility and that an Amiga has to be an expensive desktop with a PCIe graphics card. The RPi would have gone nowhere if it was a $350 desktop requiring a graphics card instead of a $35 SBC with integrated graphics.
OneTimer1 Quote:
Tramiel had the idea of selling more by making it cheaper, but the C64 was not on the same level as a PC (Apple II), and their top-level machines weren't either.
|
Pushing the price point was how Tramiel and the RPi Foundation found success. The C64 outsold the Apple II by about 2:1 despite the Apple II being available longer, being older, originally competing with the PET, and being upgraded more in later models.
OneTimer1 Quote:
By accident they got the Amiga and somehow repeated the same mistake: sold a lot with little profit but were unable to compete in the top market, the big desktop machines were too expensive or too underpowered (depends on how you look at it) and finally users replaced them with machines that weren't.
|
The problem at the high end was the decision not to use VRAM. High end graphics used VRAM and low end graphics did not in the late 1980s. Jay Miner wanted VRAM and a high end Amiga but CBM wanted to turn the Amiga into a low end C64 only.
OneTimer1 Quote:
Maybe an AGA machine (made right) sold in 1990, or single motherboards like in the PC industry, would have helped them, maybe a console that was paid for by copyright-protected games on CD, maybe it was all too late because C= didn't really have a plan for where to go, they had tactics but no strategy.
|
CBM was just way too slow to upgrade the Amiga.
OneTimer1 Quote:
If C= had asked me (as an A500 user) what I wanted, I would have asked for an Amiga-compatible desktop with 256 color graphics, at least a 640x480 flicker-free hires mode and a faster CPU for a low price.
I would have got an A1200 in a big box, maybe with the option of a cheaper 3.5" HD, maybe it would have had a ZII slot, but it would not have been enough to solve the Amiga problem in general.
|
CBM delivered most of what you wanted with 68EC020@14MHz and AGA but by then the competition was delivering more value.
OneTimer1 Quote:
But if I look back, even good ideas went nowhere: the Acorn Archimedes was discontinued as a PC/HC. Atari went nowhere with their Transputer workstation. The Sinclair QL leaped, crashed and was discontinued. Even Apple's Macintosh was changed into a niche gadget that hardly resembles the revolutionary start. And the IBM PC? IBM lost it.
|
There was too much competition with too many incompatible 6502, 68k and x86 PCs and consoles, and then too many RISC ISAs after that. The x86 ISA and BIOS were standardized to outlive the divided competition. Today, there aren't enough ISAs considering the ones available don't scale well.
x86-64 - high end high performance only, has trouble scaling to mobile and embedded markets
Thumbs - low end low power only, only used where AArch64 could not scale down
AArch64 - mid to high end, die shrinks allowed it to scale somewhat lower but code density is lacking
RISC-V - mid to low end due to weak performance, poor compiler support as standardization lacks
The 68k scaled from low to high end practically owning the low end 16/32 embedded market and creating the high end MPU workstation market. It was successful in the middle PC and console markets too with the 68k Amiga, Mac, Atari ST, X68000, Genesis and NeoGeo. Then Motorola threw their 68k baby out with the bath water to promote fat PPC that had trouble scaling down and up.
Last edited by matthey on 23-Jun-2024 at 05:22 PM.
|
Status: Offline

bhabbott
Re: DoomAttack (Akiko C2P) on Amiga CD32 + Fast RAM (Wicher CD32) Posted on 24-Jun-2024 7:10:13 [ #122 ]
Regular Member
Joined: 6-Jun-2018 Posts: 471
From: Aotearoa

@matthey
Quote:
matthey wrote:
CBM was just way too slow to upgrade the Amiga. |
It wouldn't have helped much if they had done it faster. 1995 was the end for the Amiga regardless of hardware.
Quote:
CBM delivered most of what you wanted with 68EC020@14MHz and AGA but by then the competition was delivering more value. |
Where 'more value' is defined as 'IBM compatible'.
Quote:
There was too much competition with too many incompatible 6502, 68k and x86 PCs and consoles and then too many RISC ISAs after that. The x86 ISA and BIOS was standardized to outlive the divided competition. |
Yep. IBM nailed it in 1981. Every other platform was then 'divided competition'. This despite the poor job they did of the PC (its numerous faults would be fixed as time went on). The reason? "Nobody ever got fired for buying IBM". The industry was desperate for a standard that could be easily cloned, especially one created by IBM. And IBM handed it to them on a silver platter.
Quote:
The 68k scaled from low to high end practically owning the low end 16/32 embedded market and creating the high end MPU workstation market. It was successful in the middle PC and console markets too with the 68k Amiga, Mac, Atari ST, X68000, Genesis and NeoGeo. Then Motorola threw their 68k baby out with the bath water to promote fat PPC that had trouble scaling down and up. |
True. However Motorola had always struggled to match Intel's processes. I think they gave up because developing advanced CISC CPUs was just too hard. Perhaps if Commodore had been stronger they might have convinced Motorola to keep making 68k CPUs. But then Commodore's engineers also fell under the RISC spell.
Doesn't worry me though. IMO Commodore expired at just the right time, before they got around to screwing up the Amiga. I've got my 68k CPU to enjoy writing code for so I am in Heaven!
|
Status: Offline

Hammer
Re: DoomAttack (Akiko C2P) on Amiga CD32 + Fast RAM (Wicher CD32) Posted on 25-Jun-2024 4:46:00 [ #123 ]
Elite Member
Joined: 9-Mar-2003 Posts: 6014
From: Australia

@bhabbott
Quote:
Where 'more value' is defined as 'IBM compatible'. |
Between 1987 and 1990, the A500 could deliver a contemporary 2D gaming experience, usually with superior multiplatform game results and performance punching above its price class.
The key phrase is "contemporary 2D gaming experience".
Jeff Porter's "8-bit planes with 16 million colors" focus (AGA) and Bill Sydnes' PCjr/A600Jr mindset with the bottom-of-the-barrel fake "32-bit" 68EC020 @ 14 MHz meant the A1200 and CD32 could NOT run a contemporary texture-mapped 3D gaming experience in the 1993 to 1994 time period.
The PS1 had enough compute power and RAM to run a contemporary Pentium-class texture-mapped 3D gaming experience from H2 1995 onwards. The PS1 has a custom chip solution (58 MHz, 66 MIPS) to boost geometry processing power beyond the MIPS R3000A @ 33.8 MHz (30 MIPS).
Most consumers could see the value, and a 68030 @ 50 MHz accelerator in an A1200 was not value for money relative to 486 PC clones and Apple's 68LC040 competition.
The A1200 and CD32 didn't have a custom chip solution to boost 3D geometry and texture-mapping use cases.
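For context on what such a custom chip buys you, here is a rough sketch in C of the per-vertex work (rotate, translate, perspective divide) that a geometry engine like the PS1's GTE takes off the main CPU. The 16.16 fixed-point format and the function names are illustrative assumptions, not PS1 specifics.

#include <stdint.h>

/* Illustrative 16.16 fixed-point vertex transform: the multiply-heavy
   per-vertex work a geometry coprocessor offloads from the CPU. */
typedef int32_t fx;                                  /* 16.16 fixed point */

static fx fx_mul(fx a, fx b) { return (fx)(((int64_t)a * b) >> 16); }

typedef struct { fx x, y, z; } Vec3;

/* screen = perspective(rotate(v) + translate): 9 multiplies and 2 divides per vertex */
static void transform_vertex(const fx m[3][3], Vec3 t, Vec3 v, fx focal, fx *sx, fx *sy)
{
    Vec3 r;
    r.x = fx_mul(m[0][0], v.x) + fx_mul(m[0][1], v.y) + fx_mul(m[0][2], v.z) + t.x;
    r.y = fx_mul(m[1][0], v.x) + fx_mul(m[1][1], v.y) + fx_mul(m[1][2], v.z) + t.y;
    r.z = fx_mul(m[2][0], v.x) + fx_mul(m[2][1], v.y) + fx_mul(m[2][2], v.z) + t.z;
    if (r.z <= 0) r.z = 1;                           /* clamp to avoid divide by zero */
    *sx = (fx)(((int64_t)r.x * focal) / r.z);        /* perspective divide */
    *sy = (fx)(((int64_t)r.y * focal) / r.z);
}

Doing that for thousands of vertices per frame in software is exactly the load a dedicated geometry chip would have absorbed.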
When Amiga's gaming scene dropped, it took out most of Amiga's non-gaming scene with it. For a professional workstation, the Amiga was behind in math power!
Last edited by Hammer on 25-Jun-2024 at 04:58 AM. Last edited by Hammer on 25-Jun-2024 at 04:55 AM. Last edited by Hammer on 25-Jun-2024 at 04:52 AM. Last edited by Hammer on 25-Jun-2024 at 04:49 AM.
_________________ Amiga 1200 (rev 1D1, KS 3.2, PiStorm32/RPi CM4/Emu68) Amiga 500 (rev 6A, ECS, KS 3.2, PiStorm/RPi 4B/Emu68) Ryzen 9 7950X, DDR5-6000 64 GB RAM, GeForce RTX 4080 16 GB |
Status: Offline

Hammer
Re: DoomAttack (Akiko C2P) on Amiga CD32 + Fast RAM (Wicher CD32) Posted on 25-Jun-2024 5:33:25 [ #124 ]
Elite Member
Joined: 9-Mar-2003 Posts: 6014
From: Australia

@matthey
Quote:
RJ Mical estimated it would take $49 million to take the Amiga from design to market if starting over which is something like $110 million today adjusted for inflation. The RPi was likely brought to market with a few million dollars and likely less than Trevor personally wasted on PPC Amiga1 hardware to sell maybe 5000 units compared to 50 million RPi units. Also, the Amiga1 failure has been on the market for another decade compared to the RPi. The problem is not the Amiga either as THEA500 Mini has shown with hundreds of thousands of handicapped toys sold. The problem is the brain dead mentality that PPC is good, that Amiga1 hardware has adequate 68k Amiga compatibility and that an Amiga has to be an expensive desktop with a PCIe graphics card. The RPi would have gone nowhere if it was a $350 desktop requiring a graphics card instead of a $35 SBC with integrated graphics.
|
A PCIe video card is a neutral factor since it can be used with PCs. Like many others, I already have the necessary Radeon HD GCN graphics card for AmigaOne PCIe.
Quote:
x86-64 - high end high performance only, has trouble scaling to mobile and embedded markets
|
FALSE.
AMD's mobile Ryzen APU and Intel's mobile Meteor Lake are making inroads in handheld mobile gaming devices e.g. Steam Deck and clones.
https://www.sahmcapital.com/news/content/new-high-level-sony-playstation-handheld-in-the-works-with-amd-tech-report-2024-02-02 Sony is working with AMD on a handheld mobile PlayStation with backward compatibility with the PS4.
AMD's embedded Ryzen APU powers Tesla's recent EVs.
Like the Steam Deck, AMD's embedded Ryzen APUs have entered handheld medical devices and portable medical electronics. Hint: the Radeon RDNA 2 or 3 IGP beats Qualcomm's Adreno.
Last edited by Hammer on 25-Jun-2024 at 05:41 AM.
_________________ Amiga 1200 (rev 1D1, KS 3.2, PiStorm32/RPi CM4/Emu68) Amiga 500 (rev 6A, ECS, KS 3.2, PiStorm/RPi 4B/Emu68) Ryzen 9 7950X, DDR5-6000 64 GB RAM, GeForce RTX 4080 16 GB |
Status: Offline

Hammer
Re: DoomAttack (Akiko C2P) on Amiga CD32 + Fast RAM (Wicher CD32) Posted on 25-Jun-2024 6:35:17 [ #125 ]
Elite Member
Joined: 9-Mar-2003 Posts: 6014
From: Australia

@Kronos
Quote:
Kronos wrote:
They did get the Amiga by accident and did get it to market, but failed to sell in sufficient numbers or to get anything beyond basic SW support.
It was so bad that they had to stop making them, and it was only years later that someone came up with a cost-reduced Amiga that could be sold as a C64 replacement. That did work, and no real progress on the HW was made for the next 5 years.
So yeah "success plan" indeed.
|
From Commodore: The Final Years by Brian Bagnall:
The Amiga Ranger was cancelled in mid-1986.
Commodore leadership looked at their competitors and decided to either match or one-up the competition's high-resolution offerings, e.g. the monochrome Mac and the monochrome hi-res Atari ST mode.
Hi-res Denise started as monochrome 8369R1 and evolved to four-color Hi-res 8373.
Hi-res Denise 8373 was largely completed in late 1987. By August 1987, Hedley Davis’ high-resolution monitor was given full authorization i.e. A2024 hack job.
After the A500 was completed, there was a large time-wasting debate over the post-A500 machine while the safe hi-res Agnus and Denise were being completed, i.e. ECS Agnus and ECS Denise. The A500 Rev 6A has reserved 2MB Chip RAM ECS capability.
The "Super A500" concept had a 68020, 1 MB of expensive VRAM, and a matching chipset.
(Side comment: From 1987 to 1989, the PC/Mac world had 256 KB and 512 KB of VRAM instead of a crazy expensive 1 MB VRAM UMA. Tseng Labs worked on memory interleave controllers to bring 70 ns access time FP DRAM up to VRAM-like performance, e.g. the ET4000AX released in 1989.)
Commodore would have to spend $77 for the 68020 plus $100 for VRAM on top of the basic $230 cost of the original A500 (minus redundant costs for memory and processor).
AGA's faster FP DRAM selection cost less than the crazy "Super A500".
Jeff Porter "specifically wanted 1000 by 800 resolution with 8 bit planes and 16 million colors—something", hence why AGA didn't have chunky graphics when the focus was on "8 bit planes".
All of these plans would be discussed at Commodore’s worldwide engineering meeting, scheduled for September 22, 1987 at the Embassy Suites hotel in New York.
-------------------------- My comment:
By 1992-1993, the 3DO approach was MADAM (the Agnus counterpart) having the capability to access both 2 MB of FP DRAM and 1 MB of VRAM.
A 1988-1989 3DO MADAM-style approach for an evolved Agnus would have had the capability to access 1 MB of FP DRAM and 512 KB of VRAM. Certain memory address ranges in the UMA would have had high-performance VRAM. An evolved Denise would have had its high-performance graphics modes in the VRAM address range.
Motorola's full 32-bit CPU cost issue would have needed to be countered, i.e. geometry math power via custom chips or a DSP.
3DO had the lessons from the crazy "Super A500".
When VRAM prices dropped, the 512 KB of VRAM would have been expanded to 1 MB.
Modern Xbox Series X has a fast 320-bit bus of 10 GB and a slower 192-bit bus of 6 GB in UMA. Last edited by Hammer on 29-Jun-2024 at 12:47 PM. Last edited by Hammer on 25-Jun-2024 at 06:43 AM.
_________________ Amiga 1200 (rev 1D1, KS 3.2, PiStorm32/RPi CM4/Emu68) Amiga 500 (rev 6A, ECS, KS 3.2, PiStorm/RPi 4B/Emu68) Ryzen 9 7950X, DDR5-6000 64 GB RAM, GeForce RTX 4080 16 GB |
Status: Offline

matthey
Re: DoomAttack (Akiko C2P) on Amiga CD32 + Fast RAM (Wicher CD32) Posted on 27-Jun-2024 1:43:45 [ #126 ]
Elite Member
Joined: 14-Mar-2007 Posts: 2380
From: Kansas

bhabbott Quote:
It wouldn't have helped much if they had done it faster. 1995 was the end for the Amiga regardless of hardware.
|
I disagree. There were many changes CBM could have made that would have increased the chances of them surviving. They had about 5 years with the Amiga before the PC monster grew into a giant. They started with a vastly superior CPU and chipset in the 68k Amiga.
bhabbott Quote:
Where 'more value' is defined as 'IBM compatible'.
|
There was value in being IBM compatible but the 68000 Amiga had the value (performance/$) advantage. The performance/$ advantage was so large that the Amiga was very close to being able to emulate an 8088 PC. A PC bridgeboard was actually an inferior value solution for the Amiga compared to increasing the 68k Amiga performance/$. A 68020 with chipset upgrade Amiga would have encouraged 68k Amiga development and PC emulation would have improved while PC bridgeboards instead boosted the competition without increasing the Amiga performance/$.
bhabbott Quote:
Yep. IBM nailed it in 1981. Every other platform was then 'divided competition'. This despite the poor job they did of the PC (its numerous faults would be fixed as time went on). The reason? "Nobody ever got fired for buying IBM". The industry was desperate for a standard that could be easily cloned, especially one created by IBM. And IBM handed it to them on a silver platter.
|
The 8088 IBM PC was more fail than nail and it handicapped the PC for a long time, even today. The x86-64 code density is poor for a VLE, decoding cost is well above that of any other current ISA, and x86-64 cores are large and power hungry, limiting their ability to scale down.
IBM did not just hand the IBM PC to the world as open hardware. They would have liked to keep the IBM PC proprietary but they had U.S. antitrust regulators closely scrutinizing them. The PC was a cheap and primitive low end business PC with a high margin so they made it more open. It had limited upgradability and they could satisfy regulators where it didn't seem important. As I recall, a lawsuit was required to free the BIOS for clone use. They could have created an incompatible 68k based PC successor but they thought they could control the 808x/x86 PC monster they had unleashed. The rest is history but x86 is and was a handicap even though it offered good CISC performance.
Anyone who thinks I'm exaggerating the IBM antitrust scrutiny should look at AT&T. AT&T was also under scrutiny, which also had a profound effect on computer history.
https://en.wikipedia.org/wiki/Breakup_of_the_Bell_System Quote:
The monopoly position of the Bell System in the U.S. was ended on January 8, 1982, by a consent decree providing that AT&T Corporation would, as had been initially proposed by AT&T, relinquish control of the Bell Operating Companies, which had provided local telephone service in the United States. AT&T would continue to be a provider of long-distance service, while the now-independent Regional Bell Operating Companies (RBOCs), nicknamed the "Baby Bells", would provide local service, and would no longer be directly supplied with equipment from AT&T subsidiary Western Electric.
This divestiture was initiated in 1974 when the United States Department of Justice filed United States v. AT&T, an antitrust lawsuit against AT&T. At the time, AT&T had substantial control over the United States' communications infrastructure. Not only was it the sole telephone provider throughout most of the country, its subsidiary Western Electric produced much of its equipment. Relinquishing ownership of Western Electric was one of the Justice Department’s primary demands.
Believing that it was about to lose the suit, AT&T proposed an alternative: its breakup. It proposed that it retain control of Western Electric, Yellow Pages, the Bell trademark, Bell Labs, and AT&T Long Distance. It also proposed that it be freed from a 1956 antitrust consent decree, then administered by Judge Vincent P. Biunno in the United States District Court for the District of New Jersey, that barred it from participating in the general sale of computers (retreat from international markets, relinquish ownership in Bell Canada, and Northern Electric a Western Electric subsidiary). In return, it proposed to give up ownership of the local operating companies. This last concession, it argued, would achieve the government's goal of creating competition in supplying telephone equipment and supplies to the operative companies. The settlement was finalized on January 8, 1982, with some changes ordered by the decree court: the regional holding companies received the Bell trademark, Yellow Pages, and about half of Bell Labs.
|
From Bell Labs came the solid state transistor, Unix and the C language, which are the better known computer advances. Lesser known is what they developed but had limitations on selling, namely the Bellmac 32 CPU, later marketed as the WE 32000 family after the Bell breakup.
https://en.wikipedia.org/wiki/Bellmac_32
This was a CMOS 32 bit MPU with 32 bit addressing (flat 32 bit address space), 32 bit data paths, a 32 bit barrel shifter, pipelining, a 256 byte instruction cache, an instruction queue, etc. that was developed in the late 1970s to be "ready by 1980".
https://ethw.org/First-Hand:The_AT%26T_BELLMAC-32_Microprocessor_Development Quote:
For telecommunications services, AT&T Bell Laboratories first developed the 8-bit microprocessor chip BELLMAC-8 in the mid-1970s using 5 micron CMOS technology with its debut in 1977, followed by 4-bit microcomputer chip BELLMAC-4 using 3.5 micron CMOS technology in late 1970s. In 1974, however, the federal government started putting more pressure on AT&T to divide into two independent companies -- a long-distance telecommunications service company and another consisting of seven regional Bell operating companies (Baby Bells). Before agreeing to the divestiture, which happened on Jan. 1, 1984, AT&T asked to get into computer business and was allowed to do so. As a result of the request, the development of the next generation microprocessor chip, initially named BELLMAC-80, became critically important. A strategic decision was made for the architecture of BELLMAC-80 to be 32-bit instead of 16-bit chip, which would have been more gradual and prevented major surprises in development. Also, AT&T wanted the chip ready by 1980, another reason to call it BELLMAC-80. Thus, a serious project was initiated in late 1970s.
|
Bell Labs wasn't able to release it until 1985 because of antitrust issues. It was higher end than the 68000 and no doubt more expensive. The 68000 has a better ISA in my opinion with more usable GP registers, likely better code density, a 16 bit VLE instead of 8 bit VLE, etc. The CISC ISA perhaps resembles VAX more than anything but there are similarities to the PDP-11, 68k and x86.
http://www.bitsavers.org/pdf/westernElectric/WE_32100_Microprocessor_Information_Manual_Jan85.pdf
If it was introduced earlier, there would have been more time for integration and price reductions to potentially become another competing MPU with a large flat address space. It was certainly ahead of its time like the 68000 and NS32000. By the time it was released, the 68k already dominated the workstation market.
The Amiga was very close to using the AT&T DSP3210 as well but CBM may have been waiting on new features.
http://www.bambi-amiga.co.uk/amigahistory/leweggebrecht.html Quote:
Does that mean that a AA+ machine will have a DSP?
"We can't make that decision right now - it's something we'll have to look at but in that time frame, even in the low end, every machine is likely to have a DSP. It's a cost thing - although the AT&T chip itself is only $20 to $30 or so. AT&T has a number of lower cost options, as well, that are designed more specifically to go on the motherboard. The problem with the present DSP design is that it has one serial channel and everything you attach to it has to be run through at that channel rate. I think they're looking at having four independent channels running at different clock rates, and with that kind of enhancement, DSP makes a lot of sense."
|
AT&T developed several DSPs like DSP1, DSP20, DSP3210 and later StarCore which was a joint development between Lucent (successor of Bell Labs) and Motorola. The StarCore ISA closely resembles the 68k ISA, especially for mnemonics, and uses a 16 bit VLE like the 68k. Some instruction names are better than ColdFire. For example, I prefer StarCore SXT and ZXT rather than ColdFire MVS and MVZ and x86 MOVSX and MOVZX.
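For readers who don't know these mnemonics, they all name the same narrow-to-wide conversion; a minimal C illustration (the comments list the instruction each ISA would typically use for the register case, per the mnemonics discussed above):

#include <stdint.h>

/* Widening a byte to 32 bits, the operation behind SXT/ZXT, MVS/MVZ and MOVSX/MOVZX. */
int32_t  widen_signed(int8_t v)    { return v; }  /* sign-extend: SXT, MVS, MOVSX (68020+ EXTB.L) */
uint32_t widen_unsigned(uint8_t v) { return v; }  /* zero-extend: ZXT, MVZ, MOVZX */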
bhabbott Quote:
True. However Motorola had always struggled to match Intel's processes. I think they gave up because developing advanced CISC CPUs was just too hard. Perhaps if Commodore had been stronger they might have convinced Motorola to keep making 68k CPUs. But then Commodore's engineers also fell under the RISC spell.
Doesn't worry me though. IMO Commodore expired at just the right time, before they got around to screwing up the Amiga. I've got my 68k CPU to enjoy writing code for so I am in Heaven!
|
Motorola only had an economies of scale advantage for lower end embedded 68k CPUs where pushing the envelope of chip processes was financially detrimental. CBM effectively killed the 68k Amiga high end so they were not buying high end 68k CPUs. Atari was trying to undercut the Amiga at the low end so they were not buying high end 68k CPUs. This left Apple, and alone Apple did not generate economies of scale like x86 PCs did.
It is true that CBM engineers "fell under the RISC spell" but it may not have lasted. Once x86 had SIMD unit instructions some of them may have seen that CISC SIMD units are stronger with mem-reg instructions and need fewer SIMD registers. The PA-RISC MAX-2 SIMD ISA only uses a 16 bit integer datatype and has few instructions for RISC simplicity which was underwhelming. CBM added or planned to add some customizations although I don't know what. I have some doubts about the performance of PA-RISC using the weak MAX-2 ISA. SIMD operations were performed in the CPU integer registers (like A1222 minimalist SIMD unit) which made it difficult to upgrade to wider SIMD registers than 64 bit.
History
MC88110 GPU (coprocessor like SFU extension) 1992
MAX - HP PA-RISC PA-7100LC 1994
MAX-2 - HP PA-RISC PA-8000 1996
MMX - Intel Pentium MMX 1997
SSE - Intel Pentium 3 1999
Altivec - Motorola PPC 7400 (G4) 1999
SSE2 - Intel Pentium 4 2000
Neon - ARM1136J (ARMv6) 2002
WMMX - Intel XScale PXA270 2004
AVX - Intel Sandy Bridge CPUs 2011
AArch64 - Apple A7 (iPhone 5S) 2013
Basic features when introduced
MC88110 GPU - 32x32b int regs (64b SIMD ops using reg pairs); int4x16, int8x8, int16x4, int32x2, uint4x16, uint8x8, uint16x4, uint32x2
MAX - 32x32b int regs; int16x2, uint16x2
MAX-2 - 32x64b int regs; int16x4, uint16x4
MMX - 8x64b regs shared with FPU; int8x8, int16x4, int32x2, uint8x8, uint16x4, uint32x2
SSE - 8x128b regs; fp32x4
Altivec/VMX - 32x128b regs; int8x16, int16x8, int32x4, uint8x16, uint16x8, uint32x4, fp32x4
SSE2 - 16x128b regs; int8x16, int16x8, int32x4, uint8x16, uint16x8, uint32x4, fp32x4, fp64x2
Neon - 16x128b regs shared with FPU; int8x16, int16x8, int32x4, uint8x16, uint16x8, uint32x4, fp32x4
WMMX - 16x64b regs; int8x8, int16x4, int32x2, uint8x8, uint16x4, uint32x2
AVX - 16x256b regs; fp32x8, fp64x4
AArch64 - 32x128b regs shared with FPU; int8x16, int16x8, int32x4, uint8x16, uint16x8, uint32x4, fp32x4, fp64x2
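To make the notation concrete, an entry like "SSE2 ... int16x8" means one 128-bit register holds eight 16-bit lanes and one instruction operates on all of them at once; a minimal C sketch using the SSE2 intrinsics (x86 only, purely to illustrate the packed-lane idea):

#include <emmintrin.h>   /* SSE2 intrinsics */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    int16_t a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    int16_t b[8] = {10, 20, 30, 40, 50, 60, 70, 80};
    int16_t r[8];

    __m128i va = _mm_loadu_si128((const __m128i *)a);  /* 8 x int16 lanes in one register */
    __m128i vb = _mm_loadu_si128((const __m128i *)b);
    __m128i vr = _mm_add_epi16(va, vb);                 /* 8 additions in one instruction */
    _mm_storeu_si128((__m128i *)r, vr);

    for (int i = 0; i < 8; i++)
        printf("%d ", r[i]);                            /* prints 11 22 33 44 55 66 77 88 */
    printf("\n");
    return 0;
}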
PA-RISC was early with SIMD but the 88k SIMD had some advantages, at least until the 88k was killed off. MMX and SSE came shortly after MAX-2 which starts to look outdated fast. I did some comparisons between PA-RISC with and without SIMD unit and a 68060 on EAB about 7 years ago.
http://eab.abime.net/showpost.php?s=9a7c7ee9b88f1ebc4dde420b41148e94&p=1142968&postcount=22 Quote:
Let's evaluate the PA-RISC 7150 (introduced in 1994) with SIMD. It was based on the PA-RISC 7100 using the same fab process but with improved circuit design to allow 125MHz. The PA-RISC was one of the first general purpose processors to include an SIMD called MAX which processed only 32 bits as 2x16 bit data at one time using a single instruction (1.9x to 2.7x fps speedup claimed for MPEG video, convolve 512x512, zoom 512x512 and H.261 video). MAX-2 could process 64 bits as 4x16 bit data at one time using a single instruction (likely 2x the performance of 32 bit MAX).
The PA-RISC 7100@99MHz (L1: 256kB ICache/256kB DCache) without SIMD could decode MPEG 320x240 video at 18.7 fps. My 68060@75MHz (L1: 8kB ICache/8kB DCache) using the old RiVA 0.50 decodes MPEG video between 18-22fps (average ~20fps). An update to the new RiVA 0.52 works now giving 21-29 fps (average is ~26fps with more 68060 optimization possible). Note that the PA-RISC 7100 was introduced in 1992 and used in technical and graphical workstations and computing servers while the 68060 was introduced in 1994 for desktop and embedded applications (less demanding and lower cost applications). The PA-RISC 7100LC@60MHz (L1: 32kB ICache/32kB DCache) introduced in 1994 with SIMD (initially 32 bit MAX but may have been upgraded to MAX-2 later?) could do 26fps decoding 320x240 MPEG. MAX not only improved the performance (finally better than the 68060 at MPEG fps) but improved the code density by replacing many RISC instructions allowing the cache sizes to be reduced tremendously. The PA-RISC 7100LC@80MHz (L1: 128kB ICache/128kB DCache) with MAX SIMD could do 33fps decoding 320x240 MPEG. The Apollo Core 68k@78MHz should be about the same performance, if not a little better, without using an SIMD (the Apollo Core with SIMD is likely twice as fast as the PA-RISC 7100LC@80MHz in MPEG decoded fps). As we can see, the PA-RISC had unimpressive performance even with an SIMD and lots of resources.
|
With these performance results, clocking up the 68060 with its deep 8-stage pipeline sounds like the easiest solution to more performance. A real SIMD unit could be added later. If wider than 64 bit registers are desired for more SIMD parallelism, SIMD operations using shared CPU integer registers should be avoided as used in PA-RISC, 88k, AC68080 and the PPC e500v2 core in the A1222.
Last edited by matthey on 27-Jun-2024 at 02:04 AM. Last edited by matthey on 27-Jun-2024 at 01:55 AM. Last edited by matthey on 27-Jun-2024 at 01:50 AM.
|
Status: Offline

kolla
Re: DoomAttack (Akiko C2P) on Amiga CD32 + Fast RAM (Wicher CD32) Posted on 27-Jun-2024 8:08:34 [ #127 ]
Elite Member
Joined: 20-Aug-2003 Posts: 3265
From: Trondheim, Norway

@matthey
In 1995, the biggest "flaw" with Amiga wasn’t the hardware, it was the operating system. And that’s why so many Amiga _developers_ moved to Linux, BSD and even Windows.
_________________ B5D6A1D019D5D45BCC56F4782AC220D8B3E2A6CC
Status: Offline

Karlos
Re: DoomAttack (Akiko C2P) on Amiga CD32 + Fast RAM (Wicher CD32) Posted on 27-Jun-2024 13:40:46 [ #128 ]
Elite Member
Joined: 24-Aug-2003 Posts: 4653
From: As-sassin-aaate! As-sassin-aaate! Ooh! We forgot the ammunition!

Status: Offline
K-L
Re: DoomAttack (Akiko C2P) on Amiga CD32 + Fast RAM (Wicher CD32) Posted on 27-Jun-2024 14:07:31 [ #129 ]
Super Member
Joined: 3-Mar-2006 Posts: 1427
From: Oullins, France

@Karlos
And DSP if I remember correctly.
_________________ PowerMac G5 2,7Ghz - 2GB - Radeon 9650 - MorphOS 3.14 AmigaONE X1000, 2GB, Sapphire Radeon HD 7700 FPGA Replay + DB 68060 at 85Mhz
Status: Offline

ppcamiga1
Re: DoomAttack (Akiko C2P) on Amiga CD32 + Fast RAM (Wicher CD32) Posted on 27-Jun-2024 14:20:02 [ #130 ]
Cult Member
Joined: 23-Aug-2015 Posts: 909
From: Unknown

@kolla
kolla bought this shit pistorm so as others start retro propaganda. Developers switched from Amiga to other platforms around 1995 because AGA has no chunky pixels. The CPU and OS were still good enough.
|
Status: Offline

ppcamiga1
Re: DoomAttack (Akiko C2P) on Amiga CD32 + Fast RAM (Wicher CD32) Posted on 27-Jun-2024 14:21:08 [ #131 ]
Cult Member
Joined: 23-Aug-2015 Posts: 909
From: Unknown

Status: Offline
pixie
Re: DoomAttack (Akiko C2P) on Amiga CD32 + Fast RAM (Wicher CD32) Posted on 27-Jun-2024 15:49:06 [ #132 ]
Elite Member
Joined: 10-Mar-2003 Posts: 3376
From: Figueira da Foz - Portugal

@ppcamiga1
Quote:
kolla bought this shit pistorm so as others start retro propaganda. Developers switched from Amiga to other platforms around 1995 |
But didn't you have a Raspberry Pi also? And a PC? It makes way less sense...
_________________ Indigo 3D Lounge, my second home. The Illusion of Choice | Am*ga
Status: Offline

pixie
Re: DoomAttack (Akiko C2P) on Amiga CD32 + Fast RAM (Wicher CD32) Posted on 27-Jun-2024 15:49:46 [ #133 ]
Elite Member
Joined: 10-Mar-2003 Posts: 3376
From: Figueira da Foz - Portugal

Status: Offline
Karlos
Re: DoomAttack (Akiko C2P) on Amiga CD32 + Fast RAM (Wicher CD32) Posted on 27-Jun-2024 16:20:21 [ #134 ]
Elite Member
Joined: 24-Aug-2003 Posts: 4653
From: As-sassin-aaate! As-sassin-aaate! Ooh! We forgot the ammunition!

Status: Offline
matthey
Re: DoomAttack (Akiko C2P) on Amiga CD32 + Fast RAM (Wicher CD32) Posted on 27-Jun-2024 19:50:01 [ #135 ]
Elite Member
Joined: 14-Mar-2007 Posts: 2380
From: Kansas

kolla Quote:
In 1995, the biggest "flaw" with Amiga wasn’t the hardware, it was the operating system. And that’s why so many Amiga _developers_ moved to Linux, BSD and even Windows.
|
Sarcasm? In 1995 the biggest issue with Amiga was that CBM was defunct and the Amiga future was in doubt.
The 68k Amiga would not have been competitive in low end markets with Linux.
https://www.linuxjournal.com/article/2090 Quote:
Like Linux/i386, 4MB of RAM is the absolute minimum, with 8MB being sufficient for most uses. The X Window System requires a minimum of 12MB of RAM for a usable system. A minimal installation currently requires about 55MB of hard drive space, plus at least a few MB of swap space. My personal system currently has about 830MB of hard drive space devoted to Linux (one SCSI hard drive and most of two IDE hard drives). When it comes to RAM and hard drive space, you can never have too much.
|
An Amiga would have required an MMU, a 55+ MiB hard drive and 12 MiB of memory to use Linux with a GUI, which was only available in 1993. The Amiga 3000UX came with 9 MiB of memory minimum and a large hard drive (200MiB?) for $4998 but did not include the A2410 graphics card recommended for a GUI? Linux reduced the OS cost of a Unix-like OS to zero and brought it to cheap and available 386 hardware, but it was primitive for several years and 386 hardware was a pain to develop it on.
https://gunkies.org/wiki/Linux Quote:
Hi folks, For quite some time this "novice" has been wondering as to how one goes about the task of writing an OS from "scratch". So here are some questions, and I would appreciate if you could take time to answer 'em. Well, I see someone else already answered, but I thought I'd take on the linux-specific parts. Just my personal experiences, and I don't know how normal those are.
1) How would you typically debug the kernel during the development phase? Depends on both the machine and how far you have gotten on the kernel: on more simple systems it's generally easier to set up. Here's what I had to do on a 386 in protected mode. The worst part is starting off: after you have even a minimal system you can use printf etc, but moving to protected mode on a 386 isn't fun, especially if you at first don't know the architecture very well. It's distressingly easy to reboot the system at this stage: if the 386 notices something is wrong, it shuts down and reboots - you don't even get a chance to see what's wrong. Printf() isn't very useful - a reboot also clears the screen, and anyway, you have to have access to video-mem, which might fail if your segments are incorrect etc. Don't even think about debuggers: no debugger I know of can follow a 386 into protected mode. A 386 emulator might do the job, or some heavy hardware, but that isn't usually feasible. What I used was a simple killing-loop: I put in statements like
die: jmp die
at strategic places. If it locked up, you were ok, if it rebooted, you knew at least it happened before the die-loop. Alternatively, you might use the sound io ports for some sound-clues, but as I had no experience with PC hardware, I didn't even use that. I'm not saying this is the only way: I didn't start off to write a kernel, I just wanted to explore the 386 task-switching primitives etc, and that's how I started off (in about April-91). After you have a minimal system up and can use the screen for output, it gets a bit easier, but that's when you have to enable interrupts. Bang, instant reboot, and back to the old way. All in all, it took about 2 months for me to get all the 386 things pretty well sorted out so that I no longer had to count on avoiding rebooting at once, and having the basic things set up (paging, timer-interrupt and a simple task-switcher to test out the segments etc).
2) Can you test the kernel functionality by running it as a process on a different OS? Wouldn't the OS(the development environment) generate exceptions in cases when the kernel (of the new OS) tries to modify 'priviledged' registers?
Yes, it's generally possible for some things, but eg device drivers usually have to be tested out on the bare machine. I used minix to develop linux, so I had no access to IO registers, interrupts etc. Under DOS it would have been possible to get access to all these, but then you don't have 32-bit mode. Intel isn't that great - it would probably have been much easier on a 68040 or similar.
So after getting a simple task-switcher (it switched between two processes that printed AAAA... and BBBB... respectively by using the timer-interrupt - Gods I was proud over that), I still had to continue debugging basically by using printf. The first thing written was the keyboard driver: that's the reason it's still written completely in assembler (I didn't dare move to C yet - I was still debugging at about instruction-level). After that I wrote the serial drivers, and voila, I had a simple terminal program running (well, not that simple actually). It was still the same two processes (AAA..), but now they read and wrote to the console/serial lines instead. I had to reboot to get out of it all, but it was a simple kernel.
After that is was plain sailing: hairy coding still, but I had some devices, and debugging was easier. I started using C at this stage, and it certainly speeds up developement. This is also when I start to get serious about my megalomaniac ideas to make "a better minix that minix". I was hoping I'd be able to recompile gcc under linux some day...
The harddisk driver was more of the same: this time the problems with bad documentation started to crop up. The PC may be the most used architecture in the world right now, but that doesn't mean the docs are any better: in fact I haven't seen /any/ book even mentioning the weird 386-387 coupling in an AT etc (Thanks Bruce). After that, a small filesystem, and voila, you have a minimal unix. Two months for basic setups, but then only slightly longer until I had a disk-driver (seriously buggy, but it happened to work on my machine) and a small filesystem. That was about when I made 0.01 available (late august-91? Something like that): it wasn't pretty, it had no floppy driver, and it couldn't do much anything. I don't think anybody ever compiled that version. But by then I was hooked, and didn't want to stop until I could chuck out minix.
3) Would new linkers and loaders have to be written before you get a basic kernel running?
All versions up to about 0.11 were crosscompiled under minix386 - as were the user programs. I got bash and gcc eventually working under 0.02, and while a race-condition in the buffer-cache code prevented me from recompiling gcc with itself, I was able to tackle smaller compiles. 0.03 (October?) was able to recompile gcc under itself, and I think that's the first version that anybody else actually used. Still no floppies, but most of the basic things worked. Afetr 0.03 I decided that the next version was actually useable (it was, kind of, but boy is X under 0.96 more impressive), and I called the next version 0.10 (November?). It still had a rather serious bug in the buffer-cache handling code, but after patching that, it was pretty ok. 0.11 (December) had the first floppy driver, and was the point where I started doing linux developement under itself.
Quite as well, as I trashed my minix386 partition by mistake when trying to autodial /dev/hd2. By that time others were actually using linux, and running out of memory. Especially sad was the fact that gcc wouldn't work on a 2MB machine, and although c386 was ported, it didn't do everything gcc did, and couldn't recompile the kernel. So I had to implement disk-paging: 0.12 came out in January (?) and had paging by me as well as job control by tytso (and other patches: pmacdona had started on VC's etc). It was the first release that started to have "non-essential" features, and being partly written by others. It was also the first release that actually did many things better than minix, and by now people started to really get interested. Then it was 0.95 in March, bugfixes in April, and soon 0.96. It's certainly been fun (and I trust will continue to be so) - reactions have been mostly very positive, and you do learn a lot doing this type of thing (on the other hand, your studies suffer in other respects :)
Linus
|
"Intel isn't that great - it would probably have been much easier on a 68040 or similar.", according to Linus. His influence was a 68008 Sinclair QL with preemptive multitasking QDOS OS.
https://en.wikipedia.org/wiki/Sinclair_QL#Linux Quote:
Linus Torvalds has attributed his eventually developing the Linux kernel, likewise having pre-emptive multitasking, in part to having owned a Sinclair QL in the 1980s. Because of the lack of support, particularly in his native Finland, Torvalds became used to writing his own software rather than relying on programs written by others. In part, his frustration with Minix, on the Sinclair, led, years later, to his purchase of a more standard IBM PC compatible on which he would develop Linux. In Just for Fun, Torvalds wrote, "Back in 1987, one of the selling points of the QL was that it looked cool", because it was "entirely matte black, with a black keyboard" and was "fairly angular". He also wrote he bought a floppy controller so he could stop using microdrives, but the floppy controller driver was bad, so he wrote his own. Bugs in the operating system, or discrepancies with the documentation, that made his software not work properly, got him interested in operating systems. "Like any good computer purist raised on a 68008 chip," Torvalds "despised PCs", but decided in autumn 1990 to purchase a 386 custom-made IBM PC compatible, which he did in January 1991.
|
Did he overlook the Amiga or choose the 386 because Amigas with MMUs were too expensive? The Amiga did not have a high end market after they chose to make the chipset low end and use embedded CPUs. The Amiga 3000UX became available at about the time he bought his 386 PC but it was expensive (Linux perhaps played a role in the Amiga 3000UX being discontinued?). Inferior but cheap x86 PC hardware won with inferior OSs. Windows 95 had just graduated from a GUI file manager for MS-DOS to a real OS in 1995. The AmigaOS was a much better embedded OS than Linux or Windows in 1995, although perhaps not as advanced a desktop OS in some areas. The Amiga would not have been as successful with Linux as with the AmigaOS even if Linux had been available way back in 1985. The "thin" 68k AmigaOS allowed the small footprint and cheaper hardware that made the 68k Amiga popular.
|
Status: Offline

OneTimer1
Re: DoomAttack (Akiko C2P) on Amiga CD32 + Fast RAM (Wicher CD32) Posted on 27-Jun-2024 20:23:10 [ #136 ]
Super Member
Joined: 3-Aug-2015 Posts: 1108
From: Unknown

Quote:
K-L wrote:
And DSP if I remember correctly.
|
The DSP56000 has its own 24-bit(!) RAM; you can hardly share it with the computer's main memory.
I don't know what memory interface they used, but it might have been a bottleneck with limited usability for normal applications.
It was a good chip for real-time audio processing; programming it in assembler was one of the first tasks I did commercially. There were times when I (as an Amiga fan) thought Atari would make it ... 1st with a Transputer, 1st with a DSP, 1st with a 70Hz monitor, 1st with 256 colors, 1st with a low cost VME bus computer.
And in the end they were
1st to go
|
Status: Offline

matthey
Re: DoomAttack (Akiko C2P) on Amiga CD32 + Fast RAM (Wicher CD32) Posted on 28-Jun-2024 0:10:51 [ #137 ]
Elite Member
Joined: 14-Mar-2007 Posts: 2380
From: Kansas

OneTimer1 Quote:
The DSP56000 has its own 24-bit(!) RAM; you can hardly share it with the computer's main memory.
I don't know what memory interface they used, but it might have been a bottleneck with limited usability for normal applications.
|
This was not unusual for a DSP. The Amiga AT&T DSP would have had similar limitations. Actual performance depends on DSP local memory, interface(s) to other memory and cache coherency with other processors if there are caches involved. Some people think MIPS and FLOPS of a DSP can be added to that of the CPU to determine cumulative performance. A DSP has limited ability to directly boost the performance of the CPU but is very good at offloading specific number crunching tasks away from the CPU.
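To illustrate the offload pattern (as opposed to adding MIPS together), here is a hypothetical host-side sketch in C: the CPU stages a block of samples where the DSP can see it, posts a command, and picks up the result later. All names, the mailbox layout and the polling scheme are invented for the sketch; no real DSP3210 or DSP56000 interface is implied.

#include <stdint.h>

typedef struct {
    volatile uint32_t command;   /* 0 = idle, 1 = run filter on the staged block */
    volatile uint32_t status;    /* 0 = busy, 1 = done (written by the DSP) */
    int16_t in[256];             /* block handed to the DSP */
    int16_t out[256];            /* results written back by the DSP */
} DspMailbox;

static void dsp_filter_block(DspMailbox *mb, const int16_t *samples)
{
    for (int i = 0; i < 256; i++)
        mb->in[i] = samples[i];  /* stage the work in memory the DSP can reach */
    mb->status  = 0;
    mb->command = 1;             /* kick the DSP */
    while (mb->status == 0)
        ;                        /* the CPU could do other work here; we just spin */
    /* mb->out now holds the filtered block; the CPU never did the number crunching */
}

The point of the sketch is that the CPU's own instruction throughput is unchanged; what it gains is the time it no longer spends in the inner loop.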
OneTimer1 Quote:
It was a good chip for real-time audio processing; programming it in assembler was one of the first tasks I did commercially. There were times when I (as an Amiga fan) thought Atari would make it ... 1st with a Transputer, 1st with a DSP, 1st with a 70Hz monitor, 1st with 256 colors, 1st with a low cost VME bus computer.
And in the end they were
1st to go
|
Actually, Atari Corporation outlived CBM and survived until 1996. Atari discontinued their 68k PC line in 1993 so the 68k Amiga lived a little longer. The 68k Amiga finally won the battle against the 68k Atari but Atari beat CBM by surviving longer and giving Jack Tramiel his revenge. Apple was the big beneficiary of the Amiga Atari battle for the low end of the 68k PC market. Despite the Amiga having "too much hardware", Apple was mostly unopposed in the high end high margin 68k PC market. Then they switched to PPC in 1994-1996 and almost went bankrupt in 1997.
Last edited by matthey on 28-Jun-2024 at 12:14 AM.
|
Status: Offline

Hypex
Re: DoomAttack (Akiko C2P) on Amiga CD32 + Fast RAM (Wicher CD32) Posted on 28-Jun-2024 5:46:05 [ #138 ]
Elite Member
Joined: 6-May-2007 Posts: 11341
From: Greensborough, Australia

@ppcamiga1
This has obviously all been discussed by now but I wanted to chime in.
Quote:
DOOM on Amiga CD32 with Fast RAM. Runs better than on a 386. Amiga with 68020, Akiko and FAST RAM is good enough to run DOOM better than an affordable PC 30 years ago. |
I don't think so. It's a smaller screen. And it's laggy. It's OK, but this game is designed to move fast. The Akiko is a terrible example of chunky to planar. For one thing, needing a chip function to do it. Just forget the chip and implement a packed data mode. It's just a simple dumb framebuffer. Less advanced than bitplanes. Another thing is, the Akiko way of doing it is terrible. Write 32 pixels in, read 8 long words out and write back to 8 bitplanes one plane at a time. Unless the screen renderer can write directly to Akiko, which is unlikely, you've got a game that needs to render the screen to RAM, then copy it in blocks to Akiko, then read it out in smaller blocks, and write it to chip RAM for each plane. That's a terrible solution. And if you have only CHIP RAM then it's even WORSE.
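A rough sketch of that round trip in C, assuming the commonly documented Akiko chunky-to-planar register at 0xB80038 (treat the address and exact access pattern as assumptions; real code also has to land each output longword at the right offset in each bitplane):

#include <stdint.h>

#define AKIKO_C2P (*(volatile uint32_t *)0xB80038)   /* assumed register address */

/* Convert one chunk of 32 chunky 8-bit pixels into 8 planar longwords:
   write 8 longwords of chunky data in, read one longword per bitplane out.
   Assumes the chunky buffer is longword aligned. */
static void akiko_c2p_chunk(const uint8_t chunky[32], uint32_t planar[8])
{
    const uint32_t *src = (const uint32_t *)chunky;
    int i;
    for (i = 0; i < 8; i++)
        AKIKO_C2P = src[i];      /* feed 32 pixels (8 longwords) */
    for (i = 0; i < 8; i++)
        planar[i] = AKIKO_C2P;   /* one longword per bitplane */
    /* the caller still has to scatter planar[0..7] into 8 separate bitplanes
       in chip RAM, which is the extra copying complained about above */
}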
Quote:
Best proof that Commodore went bankrupt because AGA was too slow and too outdated. And should be replaced by something better. |
That contradicts your point. You said it runs better than on 386 using a CD32 with FAST RAM as example. Then later said AGA was too slow. Was your point that AGA needed Akiko as standard?
They had something better. It was called AAA. It could have competed with the Falcon. AGA could not. Even post AAA they planned to kill off AGA and go back to ECS for compatibility.
The A500 needed AGA to compete with the VGA onslaught that was coming. At the least they needed AGA in 1990 when DOS VGA had become the de facto standard. Around 1989 they stopped producing Amiga games because it lacked a 256 colour mode. And all the Amiga got were poor ports of the PC versions.
AGA itself was a hack as it was a 32-bit design retrofitted onto a 16-bit chipset. Now it was a neat job, but bank switching was a workaround from the C64 days. It didn't suit the Amiga. They really needed to create new custom chips. But then it needed compatibility. What they should have done is expand it so the palette is in chip RAM at the least, so that a copper list could have created a pseudo chunky mode. But productivity modes really needed a proper packed mode. Games could have used it if they needed chunky.
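For what it's worth, the "copper chunky" idea is simple to sketch: a copper list that keeps rewriting COLOR00 across a scanline so each slice of the background colour behaves like a fat chunky pixel. A hedged illustration in C that only builds the MOVE pairs; timing, WAIT instructions and list termination are left out, and the achievable horizontal resolution is limited by how fast the copper can issue MOVEs:

#include <stdint.h>

#define COLOR00 0x0180   /* custom chip colour register 0 (offset from the custom base) */

/* Emit copper MOVE instructions that repaint COLOR00 n times along a line. */
static uint16_t *emit_copper_chunky_line(uint16_t *cl, const uint16_t *pix, int n)
{
    for (int i = 0; i < n; i++) {
        *cl++ = COLOR00;   /* first word of a copper MOVE: destination register */
        *cl++ = pix[i];    /* second word: 12-bit RGB value for this "pixel" */
    }
    return cl;             /* caller appends WAITs / end of list and points COP1LC at it */
}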
The Amiga was fine for hardware 2D acceleration. That was what it was designed for. Unfortunately, it didn't become the 3D flight simulator Jay had designed it for. And although the blitter could help, 3D wasn't its strong suit. Nor was it upgraded to assist like it should have been. It couldn't even do 2D scaling or 2D rotation. Even a Nintendo could. So when Doom took over, the Amiga had no comeback. In reality, as advanced as it was, games like Doom use techniques that are too old fashioned even for an A500. They write to the screen on every frame and stick software sprites on top. Totally old fashioned and archaic techniques! The A500 was above such primitive methods. But ironically it lost out, as until the 90's 3D games were all software 3D. It lacked the hardware for both software and hardware 3D. Shame. Last edited by Hypex on 28-Jun-2024 at 05:56 AM.
|
Status: Offline

Hypex
Re: DoomAttack (Akiko C2P) on Amiga CD32 + Fast RAM (Wicher CD32) Posted on 28-Jun-2024 5:47:54 [ #139 ]
Elite Member
Joined: 6-May-2007 Posts: 11341
From: Greensborough, Australia

@OneTimer1
Quote:
And in the end they were
1st to go |
LOL! |
Status: Offline

kolla
Re: DoomAttack (Akiko C2P) on Amiga CD32 + Fast RAM (Wicher CD32) Posted on 28-Jun-2024 7:58:58 [ #140 ]
Elite Member
Joined: 20-Aug-2003 Posts: 3265
From: Trondheim, Norway

@ppcamiga1
Quote:
ppcamiga1 wrote: @kolla
kolla bought this shit pistorm so as others start retro propaganda.
|
Why not, they were cheap... I have 4 pistorms currently.
Quote:
developers switch from Amiga to other platforms around 1995
|
Well, that's funny, because at the large IRC "conferences" after CBM folded, developers were already looking elsewhere; several had already moved to Linux and NetBSD and were encouraging others to join, and many did... this was late 1994. In 1995 users also really switched to Win95.
Quote:
because AGA has not chunky pixels.
|
No, because CBM was dead, the OS unfit for Internet, and prices for third party hardware were high.
Quote:
cpu and os was still good enough.
|
I agree that the CPU was good enough, if you had an 030+882 or 040 (and eventually 060), but the OS was not. People were getting on the Internet, and this was before NAT and fancy firewalls, so being online meant being on the actual Internet. Without any memory protection, sandboxing or multiuser capabilities whatsoever, very often with AmiTCP's TCP: device mounted, IRC clients with vulnerable scripts, "backtick" bugs all over the place etc., Amiga systems were very exploitable for anyone who cared; luckily few cared, as there were bigger fish in that pond.
_________________ B5D6A1D019D5D45BCC56F4782AC220D8B3E2A6CC
Status: Offline