Forum Index / Amiga General Chat / How good or bad was the AGA chipset in 1992/1993.
Poll : How good or bad was the AGA chipset in 1992/1993.
10p Excellent (Best at 2D/3D, colors, and resolution, frame rate etc.)
5p Good / better than most computers.
0p Barely hanging in there.
-5p Below average / slow but usable
-10p useless / horrible
 
Hammer 
Re: How good or bad was the AGA chipset in 1992/1993.
Posted on 18-Oct-2022 2:24:46
#381
Elite Member
Joined: 9-Mar-2003
Posts: 5275
From: Australia

@agami

Quote:

agami wrote:
@Hammer

Quote:
Can't you see the SKU gap between the A1200 (020 @ 14 MHz) and the A4000/040 in Q4 1992?

I remember this baffling me when I was in Maxwells Computer Centre almost 30 years ago to the day, as I was gladly handing over hard-saved money from a minimum wage post-uni job for the machine that would go on to change everything for me. While waiting for Lee to get everything together, I can vividly recall how much I coveted the A4000 040, and how it would've been nice to have a machine in between the A1200HD and the A4000, the latter being more than 3x the price of the former.
In my mind I started thinking about future upgrades that would bridge the gap, but there was still a missed opportunity in not having that mid-tier SKU, or add-ons that could immediately elevate the A1200 to the mid-tier position.

Interestingly, Apple did the same during the Steve Jobs comeback tour in the late '90s as they had the new and simplified SKU approach, with the all-in-one, colourful and popular iMac at one end, and then a big jump to the short-lived PowerMac G3 quickly replaced by the PowerMac G4.
They did have a misguided attempt at creating a mid-tier machine in 2000, with the PowerMac G4 Cube.


Quote:
It's relevant for gaming PCs that can run Doom (Q3 1993).

Doom didn't come out until December 1993, so for all intents and purposes in terms of influencing PC purchase decisions, Doom is a 1994 game.

I agree that in late 1993, the A1200 (with or without accelerator) and the newly released CD32, were not looking so good to an objective, first-time personal computer buyer.
If I hadn't got the A1200 a year earlier, and didn't already have a decent library of Amiga game titles and productivity software, in all honesty I can't say that I would not seriously consider a 386 machine at that time. Certainly, if I stuck with the A500 until mid-to-late 1994, which some friends did, then anything AGA would not even be a factor unless it was at stupendously reduced "clearance" prices.

Computer Gaming World, July 1993 has an article on Doom.

http://71.80.226.128/pub_cdn/DoomArchive/Unorganized/cgw_doom.zip

PC's Doom was being hyped before its Xmas December 1993 release. There was a concerted effort to prepare PC upgrades for Doom.

I had already left the Amiga gaming scene in 1993, and my A3000 was mostly used for school multimedia presentations and for acting like a Macintosh.

My "sold for parts, found in the attic" A1200 purchase was during the COVID-19 lockdown in 2020, and it turned out to be working, i.e. the seller was ignorant.

If the A1200 had been released in Q4 1991, Commodore would have had two years to ramp up AGA's manufacturing capability and build up the AGA install base.

I support the following viewpoints:
1. Dave Haynie's: release AGA ASAP, since the AGA chipset was completed in March 1991.

2. David Pleasance's: out-of-the-box accelerated A1200 bundle SKUs.



Last edited by Hammer on 18-Oct-2022 at 03:52 AM.
Last edited by Hammer on 18-Oct-2022 at 03:44 AM.

_________________
Ryzen 9 7900X, DDR5-6000 64 GB RAM, GeForce RTX 4080 16 GB
Amiga 1200 (Rev 1D1, KS 3.2, PiStorm32lite/RPi 4B 4GB/Emu68)
Amiga 500 (Rev 6A, KS 3.2, PiStorm/RPi 3a/Emu68)

bhabbott 
Re: How good or bad was the AGA chipset in 1992/1993.
Posted on 18-Oct-2022 7:55:59
#382
Regular Member
Joined: 6-Jun-2018
Posts: 332
From: Aotearoa

@Hammer

Quote:

Hammer wrote:
@bhabbott

Quote:
In 1991 I paid NZ$7200 for my A3000.


https://www.poundsterlinglive.com/bank-of-england-spot/historical-spot-exchange-rates/usd/USD-to-NZD-1991
U.S. Dollar / New Zealand Dollar Historical Reference Rates from Bank of England for 1991

Amiga World Magazine (June 1991) page 88 of 104 in USD
From Commodore,
Amiga 3000-25/100 has $1500 (25 MHz 68030, 100 MB HDD)
Amiga 3000-25/50 has $1250 (25 MHz 68030, 50 MB HDD)
Amiga 3000-16/50 has $1150 (16 MHz 68030, 50 MB HDD)

Amiga 3000-25/100 should cost about $2550 NZD to $3000 NZD when directly translated from USD to NZD.

$7200 NZD is abnormally high unless it has other extras e.g. 1992 OpalVision Card for PAL or VideoToaster with PAL converter (e.g. GVP TBC+), extra RAM, and hard disks.

NZ's weaker economies of scale seem to be a factor.

Er, wot?

Launch price of the A3000-25/40 was US$3999 (Amiga World June 1990). Exchange rate NZD to USD Jan 14 1991 0.5935. $3999/0.5935 = NZ$6738. + 10% sales tax (GST) = $7412. My supplier said he gave me a discount, and it seems he was telling the truth!
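For what it's worth, that arithmetic checks out. A quick sanity check (Python, using only the figures quoted above):

```python
# Sanity check of the launch-price conversion quoted above.
usd_price = 3999          # A3000-25/40 launch price, USD (Amiga World June 1990)
usd_per_nzd = 0.5935      # exchange rate, Jan 14 1991
gst = 1.10                # 10% New Zealand GST

nzd_before_tax = usd_price / usd_per_nzd
nzd_after_tax = nzd_before_tax * gst

print(round(nzd_before_tax))  # 6738
print(round(nzd_after_tax))   # 7412
```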

Your estimate of NZ$3000 is way off. How could that be?

Oh dear, I just figured it out:-

Quote:
If You Own A Commodore Computer, It's Worth Up To $1500 Toward An Amiga 3000.

The Amiga Power Up™ Program rewards Commodore or Amiga owners with up to $1,500 to trade up to a powerful Amiga 3000 computer. Without trading in your current Amiga or Commodore CPU.

If you have a Commodore VIC20, 64, 128 or Amiga 500, 1000 or 2000 series, save up to:
$1,500 on the Amiga 3000-25/100
$1,250 on the Amiga 3000-25/50
$1,150 on the Amiga 3000-16/50

That's not the price, it's the trade-in discount!

Quote:
My Dad purchased an ex-corporate Amiga 3000-25/120 (68030 @ 25Mhz, 120 MB HDD, 1 MB Chip RAM, 1 MB Fast RAM) for about $850 AUD with A500 1MB trade-in.

Your dad got a good deal, but this has no bearing on the price of a new A3000 when it was released.

Quote:
PC World Australia May 1991, Page 67
OCT 386C-33 PC has $2,445 AUD.
OCT 386-25 PC has $1,845 AUD.
OCT 386SX-16 PC has $1,295 AUD

Let me guess: no monitor, no sound card, no mouse, no operating system. And what about the video card? "You don't need to know that."

"All OCT computers are assembled in AUSTRALIA, using the highest quality components". Translation - "We buy the cheapest clone parts we can get, throw them together and stick our badge (the only bit we actually designed) on the front".

You know there's a world of difference between a solidly built reliable machine with high performance 32 bit bus, supported by a company that actually designs their own stuff, and a crappy ISA bus 'motherboard of the day' clone, right? If you don't then consider this - I was supporting those crappy clones through the 80s and 90s, and they really were crappy. We loved them because we got plenty of business from customers needing repairs!

Interestingly, 386 clones are actually quite rare now. Can you guess why? Yes, most of them didn't last. Even though they must have outnumbered Amigas by at least 10:1 back then, now it's harder to find one in reasonable condition. I have a late model 386SX-25 'all-in-1' motherboard that I recently bought from a guy in Norway. Having everything on the motherboard is great (no ISA cards to crap out) and it goes well, but I can't find the jumper settings anywhere so I don't know how to disable the onboard VGA. Some day I hope to get a slimline case to put it in, if I can be bothered.



agami 
Re: How good or bad was the AGA chipset in 1992/1993.
Posted on 18-Oct-2022 9:04:22
#383
Super Member
Joined: 30-Jun-2008
Posts: 1650
From: Melbourne, Australia

@Hammer

Quote:
Computer Gaming World, July 1993 has an article on Doom.
(link removed)
PC's Doom was being hyped before its Xmas December 1993 release. There was a concerted effort to prepare PC upgrades for Doom.

Yes, I can see why PC retailers would want to do that. And sure, some 'innovators' and 'early adopters' will buy HW based on magazine coverage of an upcoming game. The majority of early sales will however come from the rest of the 'early adopters' and 'early majority' groups, who would buy after seeing the game running.
I have no doubt that Doom moved a crap-ton of 486 PCs in 1994. More than any Microsoft package.



Quote:
I support the following viewpoints
1. Dave Haynie's release AGA ASAP when AGA chipset was completed in March 1991.
2. David Pleasance's out-of-the-box accelerated A1200 bundle SKUs.

I agree.

Might not have saved Commodore, but it might have saved the Amiga by attracting better buyers.

_________________
All the way, with 68k

agami 
Re: How good or bad was the AGA chipset in 1992/1993.
Posted on 18-Oct-2022 9:19:54
#384
Super Member
Joined: 30-Jun-2008
Posts: 1650
From: Melbourne, Australia

@bhabbott

Quote:
"All OCT computers are assembled in AUSTRALIA, using the highest quality components". Translation - "We buy the cheapest clone parts we can get, throw them together and stick our badge (the only bit we actually designed) on the front".

Ah, the good old nondescript beige tower with a designated square area for a postage-stamp sized stick-on badge.

I remember my A1200 tower case, which I got from some Amiga retailer in Adelaide if I recall correctly. It was some no-brand beige full-tower conversion: mostly just a custom back panel, a cable for the included AT PSU, and instructions on how to solder the power wires to the A1200 motherboard.
Luckily, my brother was running his own sign-making business, and I designed some AMIGA 1240T decals (I had a Blizzard 040 40MHz with SCSI II in there) that my brother cut with his vinyl cutter. Looked better than anything ESCOM/AT designed.

_________________
All the way, with 68k

Hypex 
Re: How good or bad was the AGA chipset in 1992/1993.
Posted on 18-Oct-2022 13:11:56
#385
Elite Member
Joined: 6-May-2007
Posts: 11204
From: Greensborough, Australia

@Karlos

Quote:
One idea I had in the distant past with superhires was to use a fixed 2 bits per gun palette for the 64 base palette, then by simple bit shifting of RGB choose the closest one of those for the first pixel. Then modify the green, red and blue in that order for the next three. Specifically in that order because green contributes the most luminance to an RGB pixel, followed by red, then blue.


It'd be good to see how that would work one day. I tend to imagine a fixed palette like VGA had, or a base palette where they chose a variant of RGB levels. AGA would provide more colour resolution with HAM. It could be used to modify only RGB and ignore the base palette, which would give at least 6 bits per gun.

Kronos 
Re: How good or bad was the AGA chipset in 1992/1993.
Posted on 18-Oct-2022 13:54:09
#386
Elite Member
Joined: 8-Mar-2003
Posts: 2561
From: Unknown

@Hammer

Quote:

Hammer wrote:

Not with IBM VGA. It must be a fast 16-bit VGA clone.



And what does that mean to the price of the fish???

In 1993, 16-bit VGA with 0.5 or 1 MB was dirt cheap, so that was the competition.

The OCS had 2 major selling points:

- using RAM cycles that would otherwise be wasted with minimum impact on the CPU performance.
--- those weren't a thing with an 020 and running the RAM at half the CPU's speed meant that the CPU performance was crippled even before AGA drew a single pixel

- it could do things useful for 80s-style 2D games with little to no CPU needed
--- that type of gaming was going out of fashion fast by 1993

If it wasn't for backwards compatibility, an A1200 with 1MB of proper Fast RAM and a fully memory-mapped 1MB 16-bit VGA would have been much better at the early 3D games and productivity. Heck, with the same level of HW coding it might have done the old-skool 2D just as fast as AGA (due to much better CPU performance).

_________________
- We don't need good ideas, we haven't run out on bad ones yet
- blame Canada

Hypex 
Re: How good or bad was the AGA chipset in 1992/1993.
Posted on 18-Oct-2022 16:22:47
#387
Elite Member
Joined: 6-May-2007
Posts: 11204
From: Greensborough, Australia

@bhabbott

Quote:
So many points to address...


Yes, these discussions do tend to escalate.

Quote:
Yes, that conductive rubber connector wasn't the best idea. I did have to clean mine a couple of times. If I still had it today I would put a better connector system on it.


It was some kind of standard part. I should clean mine as well. I still have it, but my Cobra had an annoying issue of yellow-screening a lot.

Quote:
Connectors often cause problems. Every now and then I would have to re-seat the 060 board in my A3000. PCs can often be fixed by simply pulling out the cards and RAM modules and re-inserting them. VL bus cards were particularly finicky. The ZX-81 was famous for crashing due to 'RAM-pack wobble'. Even the Blizzard 1230-IV in my A1200 needs to be moved a bit to clean the contacts every few years.


I've only seen real RAM problems on AmigaOne PPC machines, and they could at times be fixed by the pull-out-and-in trick. It led me to believe that only PowerPC machines had this issue and they just weren't as robust as a PC. Put it this way: I keep my PPC machines relatively clean and vac them out when they look too dusty inside, and after five or so years they start failing. I've seen a PC that was so full of dust there were hairs everywhere made of dust, and it still booted up! That had no RAM issues. Not the best looking example, but it still worked in a terrible state.

Quote:
This was a legacy issue, with the latest OS3 versions you don't need it. Not a big deal once you know about it.


Without knowing, you can get crashes and data corruption. But it is a hardware limitation and should have been hard-wired in the driver. What's worse is that it had to be set for every partition, which is ridiculous when all transfers through the controller were affected, so it was certainly a bad design flaw.

Quote:
More concerning to me was the flaky operation of external SCSI devices on my A3000. Even with carefully applied termination and settings some devices just didn't want to play nice. Having an 060 probably made it worse. Also early DMA chip revisions were known to have problems. But such is life on the 'bleeding edge'. The PIO IDE interface was generally much more forgiving, as well as being easier to reproduce. Discerning Amiga fans might have been disappointed by it, but being able to add a compatible IDE interface to retro machines with a simple circuit makes up for it.


When my Ferret worked well, the SCSI worked fine. I had a Yamaha CD-RW connected externally, set up so I could turn it on and off at will when I needed it. I don't recall it being set up with a terminator, so the internal CD-ROM in the tower must have acted as terminator. I recall it had some termination jumpers on the back.

Quote:
What low power models?


The A600 and A1200. I mean low CPU power. The A1200 may have been clocked at 14 MHz, but without Fast RAM it was rather crippled.

Quote:
One day I want to create a low power Amiga by replacing chips that draw more than they could, just for the hell of it. For example the A1200 has four 74F245 chips to buffer the PCMCIA port. If they were replaced with CMOS equivalents it could save over 250mA on the 5V line! However even with this inefficiency the A500 PSU powering my A1200 only gets warm after many hours of use. I have PC laptops that get much hotter.


The same kind of idea is in a C16 RAM expansion I have. It uses lower power chips for RAM and replaces some other chips with lower powered ones to reduce heat. I read ROM can be replaced to reduce power. There are also FPGA designs of Amiga chips in the works. This would help to reduce power requirements on boards like ReAmiga. Of course soon enough a ReAmiga type board with one FPGA doing all chips would make sense.

Quote:
Squashing stuff down to fit on one disk was lots of fun. But most 'productivity' users bought another floppy drive to cut down on disk swapping. I have an A1010 drive (originally for the A1000) which I plug into the A1200 or A500 as needed. On the A1000 I used to have a 5.25" drive for use with PC floppies.


They had an A500 in the art room at school with external floppy. Good for games.

Quote:
However if you only want to play games a single floppy drive is often enough. There are probably thousands of games that only need a single drive. What's important is that customers could purchase the base model and get an external drive later if they needed it. Many A1200 owners may already have had a drive that they used on an earlier Amiga. Furthermore the A1200 could have a hard drive added later, which would get cheaper and larger capacity the longer you waited.


Unless you wanted to play Dragon's Lair and really needed that second drive! So the A500 at school was best for games that relied on two drives.

I never had an external drive until my A1200 days when I got a few second hand.

Quote:
You can't have it both ways. Should they have used only specs they could rely on or not? At least the official PCMCIA spec was imminent and Commodore had a draft document. DMA IDE wasn't going to happen anyway (too expensive) and higher PIO speeds might violate Amiga bus timing so safer to stick with the lower speed (which was still pretty fast for the hard drives of the time).


Reliable specs would be best. The only real problem was the CC reset, which could be dealt with in software, and I expect ROM updates should have included it.

Commodore could have used the SCSI chip on their cards since they already sourced them. But laptop SCSI drives weren't commonplace. By that stage, using SCSI drives was feeding off the back of Apple while the Mac was using it.

I have read of Amiga users with a big-box Amiga SCSI setup asking why the A1200 IDE transfer is slower and eats up CPU.

Quote:
I didn't work with the A4000 much, but I put lots of hard drives in A600s and A1200s. Never had one that didn't work properly.


Those were the days.

Quote:
Through the parallel port. I used a standard CMOS dual 64 bit shift register IC to buffer the signals. Worked perfectly!


64 bit!

Quote:
IEC was bitbanged. There was also a serial port cartridge that plugged into the user port. This was also bitbanged. Some 3rd party cards used a proper UART chip such as the R6551, which only had a 1-byte receive buffer, like the Amiga.


OTOH I read the Plus/4 user port had proper RS232 in hardware and could do up to 19,200 baud. Imagine that, a Plus/4 better for hackers. But hackers don't need SID or sprites when hacking over the network.

Quote:
It's less than 1% out, plenty accurate enough (need 5% if the other end is perfect, 2.5% at both ends).


I read some info on the ST being able to clock to the exact MIDI rate, which put it on the map. As well as having actual built-in ports, something Commodore still had not addressed by the A1200. And how for years later the PC still wasn't the best for MIDI, despite having dedicated card hardware, so the Atari ST kept the MIDI lead for years after its demise.
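The "less than 1% out" figure from the quote can be sanity-checked. A sketch, assuming the Amiga serial port derives its baud rate from the colour clock (3,546,895 Hz PAL, 3,579,545 Hz NTSC) through a simple integer divider, baud = clock / (SERPER + 1):

```python
# How far off 31,250 baud (MIDI) the nearest achievable Amiga serial
# rate is, assuming baud = colour_clock / (SERPER + 1) with an integer
# SERPER value. Clock figures: PAL 3,546,895 Hz, NTSC 3,579,545 Hz.
MIDI = 31250

def midi_error(clock_hz):
    divisor = round(clock_hz / MIDI)    # best integer divider (SERPER + 1)
    actual = clock_hz / divisor         # nearest achievable baud rate
    return abs(actual - MIDI) / MIDI

print(f"PAL:  {midi_error(3546895):.2%}")   # well under 1%
print(f"NTSC: {midi_error(3579545):.2%}")   # well under 1%
```

Either way the nearest divider lands within about half a percent of 31,250 baud, consistent with the quoted claim.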


Quote:
Technically you are correct, Amiga OS is not a 'realtime' OS. Game designers understood this, but some application designers didn't. If you want real-time response you either have to turn off multitasking, or have a good appreciation of what could be compromising response time and deal with it.


I tend to think they shouldn't have had to do this, or all music software would be like ProTracker: only good for an ECS A500.

So the hardware was slightly lacking if interrupts are meant to avoid that kind of banging.

Quote:
The serial device driver is on disk for a reason. You can replace it with one that works better in your application, or use a completely custom driver, or have nothing and access the hardware directly. Only problem is if you want full control then you miss out on all those lovely things the OS gives you. MIDI app developers probably didn't want to do that, so they were stuck with trying to do 'realtime' with the OS adding unknown latency.


I thought the serial driver was on disk as it didn't need to be in ROM, so it could be stored elsewhere like parallel.device or printer.device. Usually software allowed you to change it, but selecting a device driver API is rather low-level. They never fixed that up in the system.

I was checking the hardware registers and there are three main ones, plus the interrupt control. 31,250 is thought of as a high rate. But with an interrupt per byte of serial data I would have expected it to be fine, as that would override any code running.

I've read that it can send 16 bits total including the stop bit. Sounds useful if it could have worked. But more so if 16 bits could have been buffered in.

Quote:
There were products that used this technique. eg. the 'ALF' hard drive which I copied. Dead simple interface. I don't know of anyone doing this with a PC serial port card, but it shouldn't be hard. Main reason you wouldn't do it is that you probably couldn't do much with high speed serial on a stock A500.


There would have been less demand in the A500 years. I didn't even get a modem until after my A1200, and by then the BBS years had passed. But using the same serial hardware into the next century soon caught up with the Amiga.

Quote:
So does adding a realtime clock. Many A1200 users did far more 'hacking' than that. Everything from jamming a 3.5" hard drive in it to taking the entire motherboard and putting it in a PC case. Today some accelerator cards come with a clock port or two on board.


I bought a clock. A small internal board that didn't poke out the back. Hardly used it. An accelerator with clock was better, though more expensive. Cases were harder, as the ports need to poke out the back. Thus why there were all these custom cases with the "A1200" slot on the back.

Quote:
Many of us enjoy doing all kinds of hacks to our machines, and have been doing it ever since we got our first home computer. If you didn't want to do it yourself then a friendly dealer or other user was often at hand to do it for you. I was involved in several user groups for various machines, where we did stuff like this for each other. This was an important part of the hobby for us.


I did more hacking with my A1200, but I still liked to keep it neat. I only had a drive tower because a dealer set that up early for me when I added a CD-ROM. It was a SCSI "Tower of Power", as it was called. Cool name, but based on an ugly PC case. But I don't like hacks where the insides need to be hacked out for it to fit, and even less outside hacks that have no covers, attract dust and simply look unprofessional.

Quote:
But hey, some people just want to do a job and not have to muck around 'hacking' their machines. That's what 'big box' Amigas (or PCs) and computer technicians are for. They have the skills to install a serial card so you don't have to! I made a living out of it.


Seems obvious, but you really need those hand-eye coordination skills, as it is physical work. And slotting in a card seems simple. But an A4000 Zorro slot can be a real bitch!

...

Quote:
The ISA slots were supposed to be for a 'Bridgeboard' (PC on a card). In most machines they lay idle because we had no use for them. However they were useful when I put a PC hard drive in my friend's A2000. I made a card that bridged the buses to access the PC ST506 hard drive controller. Worked very well.


Perhaps it's not surprising the last Amiga also came with last century's ISA slots and not the replacement, PCI. Oh no. I found another one.

Quote:
Yes. It was only needed because I wanted to use the minimum possible number of logic chips (none).


Lol. I thought it may have been because it expected EPP/ECP port lines, even if the mode couldn't be used.

Did a nibble mode have to be used?

Quote:
Not that hard. A simple MUX to switch data and control lines between Amiga and 765 is all you need.


But can it be used to transfer Amiga data? That is, work with both PC and Amiga disks transparently.

Quote:
No, it didn't. You see, the Amiga was always inferior and a joke because it didn't comply with 'current standards'. You can blame Jay Miner for that. But at least, unlike the Apple Macintosh, it could read and write 'standard' PC 360k 5.25" floppies with Commodore's external 5.25" drive (I had one).


Well, that would be objectively inferior, not technically inferior. Technically, it was superior. But the IBM PC barely had 3 years in by that stage, so I don't see how being "IBM compatible" was such a big deal back then like it was later.

In any case, I didn't spend much time with this IBM PC thing, but I found the Amiga was actually very IBM compatible and adhered to sacred IBM standards. My CBM printer had to be set to IBM mode to connect to my Amiga. The character set wasn't a proper CBM character set with card pictures; it had a more serious IBM-style character set in the Topaz font. There were other things I forget, along with PC transfer tools in the OS. So people can't say the Amiga didn't follow PC standards at all, as it supported them in the beginning.

Quote:
Kindwords 2:
- program disk: 819k used
- dictionary disk: 724k used
- Superfonts disks: 542k used


Oh no. You brought it back. The horror.

Quote:
The program and dictionary disks alone total 1543k, so you would need two 1.44MB disks just for them. At most you save one disk by using HD PC format. But the dictionary and fonts disk are not needed all the time, and the Amiga will ask for them if necessary.


Possibly all three with compression. But I didn't have a compressor in those days. However on one disk drive it was terrible. The dictionary would incur 20 disk swaps or something. I don't think they tested it on one drive. The file manager was terrible.

Quote:
But... Windows NT came on 22 disks! Beyond ludicrous!!! And one version of Windows 95 came on 28 disks!!! Ludicrous squared!!?!!! Can we top that? Sure. Microsoft Office 97 Professional was provided on a total of 55 diskettes !!!@#$%&*!!! (words fail me).


I don't know whether to laugh or cry at that one. I could laugh now. Those that went before me, actually installing those disks, were surely crying before the horror ended.

Quote:
Aha! You've caught on to why they used so many disks. You see, there comes a point when the scummy pirates can't justify using up that many disks on a game they probably won't play much anyway. :)


The solution to pirating: trip them up on laziness. Most games like that would be adventure games, to my knowledge. Though some only needed 3 disks. A compressor would help, but even if they didn't write it themselves, that's extra work scaling those disks down.

Quote:
True, and not so necessary either, when games only came on one or two disks. But many users did get an external disk drive.


I wonder what they installed to it? I mean, few games installed to HDD. Copy-protected ones installed to HDD but then needed the game disk right back in the drive.

Quote:
Adding a CDROM to any computer was expensive. But PC owners did because they wanted the software that came on CDs.


PCs had the benefit of having a case with a drive slot to plug it into.

Quote:
Despite the expense, a lot of Amiga owners got one. So much so that Amiga magazines had cover CDs. Before that Amiga PD collections were distributed on CD, including Aminet. This must have cost them a bit to produce, so they must have been reasonably confident of getting enough sales to justify it.


It wasn't as fun as booting a floppy but I couldn't imagine my A1200 without a CDROM and checking out the CUCD once a month.

Quote:
Unless you had a CD Writer. I did. It cost NZ$750 ($350 of which was the freight from the US).


I got one later. Still got my original backups, CD and CD-RW. I could reuse the CD-RW, but I seem to think 20-plus-year-old Amiga discs are sacred.

Quote:
Commodore was working on a solution called the CD1200, which would have included Fast RAM and an optional 030 CPU. Unfortunately they went bankrupt before they could get it out.


It looks real nice and would have been suitable for the part. But I'm not sure of the way they were going to connect it. I would have expected it to connect neatly on the side, like the A570, but through the PCMCIA slot and be joined at the hip.

Karlos 
Re: How good or bad was the AGA chipset in 1992/1993.
Posted on 18-Oct-2022 16:50:37
#388
Elite Member
Joined: 24-Aug-2003
Posts: 4402
From: As-sassin-aaate! As-sassin-aaate! Ooh! We forgot the ammunition!

@Hypex

Quote:

Hypex wrote:
@Karlos

Quote:
One idea I had in the distant past with superhires was to use a fixed 2 bits per gun palette for the 64 base palette, then by simple bit shifting of RGB choose the closest one of those for the first pixel. Then modify the green, red and blue in that order for the next three. Specifically in that order because green contributes the most luminance to an RGB pixel, followed by red, then blue.


Be good to see how that would work one day. I tend to imagine fixed palette like VGA had. Or the base palette where they chose a variant of RGB levels. AGA would provide more colour resolution with HAM. It could be used to modify only RGB and ignore base palette which would give at least 6 bits per gun.


I reckon I could knock up a simple enough program to test the idea. It would just read a source RGB image and generate one that's 4x wider and repeats every row 4x, applying the logic described.
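A minimal sketch of how that test program's core might look (my reading of the scheme; the `expand_pixel` helper and the exact 2-bit quantisation are assumptions, not anything specified above):

```python
# A toy model of the superhires scheme described above. Pixel 1 of each
# group of four is the nearest entry of a fixed 2-bit-per-gun (64 colour)
# base palette, found by simple bit shifting; pixels 2-4 then fix up
# green, red and blue in that order (green carries the most luminance).
def expand_pixel(r, g, b):
    def q2(v):
        # Keep the top 2 bits of an 8-bit gun, replicated across the
        # byte so 0b11 maps to 0xFF rather than 0xC0.
        return (v >> 6) * 0x55
    base_r, base_g, base_b = q2(r), q2(g), q2(b)
    return [
        (base_r, base_g, base_b),  # base palette approximation
        (base_r, g, base_b),       # green corrected first
        (r, g, base_b),            # then red
        (r, g, b),                 # then blue: exact colour by pixel 4
    ]

# Expanding one row: each source pixel becomes 4 superhires pixels.
def expand_row(row):
    out = []
    for (r, g, b) in row:
        out.extend(expand_pixel(r, g, b))
    return out
```

The full program would also repeat each output row 4x, as described, to keep the aspect ratio; that part is omitted here.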

_________________
Doing stupid things for fun...

Hypex 
Re: How good or bad was the AGA chipset in 1992/1993.
Posted on 18-Oct-2022 16:51:07
#389
Elite Member
Joined: 6-May-2007
Posts: 11204
From: Greensborough, Australia

@cdimauro

Quote:
Indeed. But the main problem is that SuperHires using 8 bitplanes is utterly slow: it leaves only a few memory slots free for doing something other than displaying the screen.


Given HAM was always a hardware mode, I always wondered what impact it had on the system. It uses the max planes, but aside from that does HAM8 mode take more out of the system? Though not as pretty, HAM6 would be better if system load is considered.

C2P takes some time, though not too much on an 060, I found out. But here it needs to render four times as much. So it does its raycasting in low-res. But then it upscales it to super-hires! Ouch.

16 bpp, 24 bpp, either way the framebuffer it renders is already bigger than 8-bit. In the best-quality case it needs to render a long word per pixel, so 320 pixels across will be 1280 bytes to store per line. Then, to dump it in HAM, it needs to read in a long per pixel and output 4 bits in planar per plane. So, taking 8 pixels and scaling each to a nibble per plane, 8 long words of 24-bit RGB pixels become 8 long words in planar. Madness!
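The sizes above can be tallied up (a back-of-envelope sketch; the figures simply restate the post's own arithmetic):

```python
# Back-of-envelope for the chunky-to-HAM8 conversion described above.
chunky_width = 320        # low-res raycast render, pixels per line
bytes_per_pixel = 4       # one long word of 24-bit RGB per pixel
chunky_line = chunky_width * bytes_per_pixel
print(chunky_line)        # 1280 bytes of truecolour per rendered line

upscale = 4               # 320 chunky pixels -> 1280 superhires pixels
planes = 8                # HAM8 uses 8 bitplanes
superhires_width = chunky_width * upscale
planar_line = (superhires_width // 8) * planes  # 1 bit per pixel per plane
print(planar_line)        # 1280 bytes of planar data per displayed line
```

So every rendered line has to be read (1280 bytes of chunky data) and written again (1280 bytes of planar data), which is the "madness" the post describes.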

bhabbott 
Re: How good or bad was the AGA chipset in 1992/1993.
Posted on 18-Oct-2022 23:02:16
#390 ]
Regular Member
Joined: 6-Jun-2018
Posts: 332
From: Aotearoa

@Hypex

Quote:

Hypex wrote:

Given HAM was always a hardware mode, I always wondered what impact it had on the system. It uses the maximum number of planes, but aside from that does HAM8 mode take more out of the system?

Superhires 8 bitplanes saturates the bus during active display time no matter what mode is in use. HAM8 is no more taxing than 256 colors or 16 color dual playfield. Actually if you don't touch the base colors it's less taxing, because you only have to render to 6 bitplanes.

A better choice would be hires HAM8, giving an effective horizontal resolution of ~213 pixels. On TV it would look pretty good, with close to the same resolution and much smoother color gradations than 256 colors. In hires there is no DMA contention so the blitter/CPU has full-speed access, and there's much less to do.
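The ~213 figure follows from HAM needing up to three modify pixels (one per gun) to reach an arbitrary colour; a trivial check:

```python
HIRES_WIDTH = 640
# HAM modifies only one gun per pixel, so in the worst case three
# consecutive pixels are needed to land on an arbitrary colour:
effective = HIRES_WIDTH // 3   # worst-case effective resolution
```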

Quote:
16 bpp, 24 bpp, either way, the framebuffer it renders is already in a bigger size against 8 bit.

But this isn't a problem with 'modern' CPUs. 32 bits is only one memory access (same as or faster than 8 bits into a chunky graphics card). You'll need lots of Fast RAM too, but that's not a problem either these days.

bhabbott 
Re: How good or bad was the AGA chipset in 1992/1993.
Posted on 19-Oct-2022 2:35:21
#391 ]
Regular Member
Joined: 6-Jun-2018
Posts: 332
From: Aotearoa

@Hypex

Quote:

Hypex wrote:

Without knowing you can get crashes and data corruption. But it is a hardware limitation and should have been hard wired in the driver. What's worse is it had to be set for every partition which is ridiculous when all controller transfers were affected so it was certainly a bad design flaw.

No, not in the driver - in HDtoolbox. So it's not really a system design issue except that HDtoolbox was bundled with the OS. The reason for not limiting it to 64k by default was that other drives didn't have the problem, so it would be unnecessarily limiting their performance. However it might have been better to go with the safe option and let 'power' users change it as required.

Quote:
Quote:
What low power models?


A600 and A1200. I mean low CPU power. The A1200 may have been clocked at 14MHz, but without Fast RAM it was more crippled.

Oh, you meant performance, not electrical power consumption.

The 68020 was less 'crippled' than you might think, due to its 256 byte instruction cache and 32 bit data bus (the CPU in a 386SX PC was more crippled because it had a 16 bit bus and no internal cache). Also it wasn't crippled when reading the system ROM, so OS operation was quite snappy. All in all it was a cheap way to get much better performance than an A500, with the added 'bonus' that 'Fast' RAM really was faster!

Quote:
Commodore could have used the SCSI chip on their cards since they already sourced them. But laptop SCSI drives weren't commonplace. By that stage using SCSI drives was feeding off the back of Apple while the Mac was using it.

Back then SCSI was still commonly used in high performance PC systems as well. It was particularly suitable for RAID because you could have up to 8 drives on one controller.

IDE became popular because it was cheap, then its performance got better and it pushed SCSI out of the marketplace. I remember buying a Quantum 120MB SCSI drive for my A3000 that 'only' cost NZ$1000 wholesale. A few years later Gigabyte size IDE drives were the same price. The Amiga doesn't generally need as much storage space as a PC though, so that drive lasted me a decade.

Quote:
I have read of Amiga users with a big box Amiga SCSI setup ask why the A1200 IDE transfer is slower and eats up CPU.

It's the nature of Amiga fans to get dejected about not having the ultimate best possible performance. But the biggest bottleneck in most systems was the drive itself. The A590 had DMA, but the XT-IDE drive in it maxed out at 150kB/s. The 120MB Seagate 2.5" drive supplied with some A1200s was much slower than the older 40MB model. Any drive that managed to get over 1MB/s was considered to be 'high end'. The 1GB 2.5" IDE drive in my A1200 gets ~2.5MB/s, which is plenty fast enough for me. With a CF Card in the PCMCIA slot it's even faster in typical operation due to the near instantaneous seek time.

Quote:
OTOH I read the Plus/4 user port had proper RS232 in hardware and could do up to 19,200 baud. Imagine that, a Plus/4 better for hackers.

Yes, the Plus/4 had a 6551 (or 8551) UART chip in it.

Quote:
I read some info on the ST being able to clock to the exact MIDI rate, which put it on the map. As well as having actual built-in ports that Commodore still had not addressed by the A1200. And how for years later the PC still wasn't the best for MIDI, despite having dedicated card hardware, so the Atari ST kept the MIDI lead for years after its demise.

The PC's serial port couldn't do MIDI baud rate (not even close). The Amiga's serial port did, but of course nobody was comparing the ST to the Amiga, they were comparing it to the PC. PCs eventually solved this problem by putting a MIDI capable UART on the sound card. Since you probably wanted a sound card anyway if you were into making music this wasn't so much of a big deal.

However the 'standard' on PC sound cards was to have TTL-level MIDI signals on the 15-pin joystick port, and you had to buy a special cable with active components in it to do MIDI (just like you did on the Amiga). I bought a bunch of these cables from my supplier who made all kinds of computer cables and adapters. Had to send them all back though, because the MIDI-in part didn't work. I took one of the plugs apart and discovered the actual manufacturer (probably in Taiwan) hadn't connected it! MIDI in goes through an optocoupler which I'm guessing they couldn't be bothered putting into the cable.

Quote:
I tend to think they shouldn't have had to do this. Or all music software would be like ProTracker. Only good for an ECS A500.

Yet nobody had an issue with the numerous parallel port samplers that disabled multitasking while recording.

They didn't have to kick the OS out completely to get reliable MIDI in, they just had to prevent higher interrupts from interfering and perhaps have a more efficient driver. But that took knowledge of the machine that the developers didn't have (and apparently didn't think to get). Those of us who were into real-time systems knew what was needed, and we weren't fooled into thinking the 'massive power' of the Amiga's 68k CPU and custom chip hardware would make it unnecessary to consider interrupt latency.

Of course this wasn't an issue on the ST because it didn't have preemptive multitasking. So all software on the ST was 'like Protracker' in that it had the system to itself while running (we won't talk about the appallingly bad programming in the original Protracker, or all the 'improved' hacks based on a disassembly of it).

Quote:
So the hardware was slightly lacking if interrupts are meant to avoid that kind of banging.

No, you don't understand. The Amiga has many interrupt sources at different levels. The serial port is not the highest, so if you aren't careful a higher priority interrupt can override it and steal CPU time. Also the standard serial port driver was designed to 'play nice' with other interrupts by only doing one character at a time. This was fine when 1200 baud was fast, not so good at 31250 baud with a MIDI keyboard blasting out note sequences with no flow control. The answer is to suppress higher interrupts and deal with the received data efficiently. And don't run a hires 16 color screen!
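To put numbers on that: a serial byte is 10 bits on the wire (start + 8 data + stop), so the per-byte interrupt budget shrinks dramatically at MIDI speed (quick Python check):

```python
def byte_interval_us(baud, bits_per_byte=10):
    """Microseconds between consecutive bytes at a given baud rate."""
    return 1_000_000 * bits_per_byte / baud

midi = byte_interval_us(31250)   # 320 microseconds per byte at MIDI rate
slow = byte_interval_us(1200)    # ~8333 microseconds at 1200 baud
```

A one-character-at-a-time driver that tolerated milliseconds of latency at 1200 baud has well under a third of a millisecond per byte at 31250 baud.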

Quote:
I thought the serial driver was on disk as it didn't need to be in ROM so could be stored elsewhere like the parallel.device or printer.device. Usually, software allowed you to change it, but selecting a device driver API is rather low level. They never fixed that up in the system.

Main reason was so Commodore could upgrade the driver if necessary without having to make new ROMs, or for adding different serial port hardware without having to patch existing programs to use it. The Amiga has a standard device interface to make this easy, and it was fully expected that users might replace the 'stock' driver with one tuned to their needs.

Quote:
I've read that it can send 16 bits total including stop bit. Sounds useful if could have worked. But more so if 16 bits could have been buffered in.

Not useful for MIDI etc., even if possible.

Quote:
There would have been less demand in the A500 years. I didn't even get a modem until after my A1200 and by then the BBS years had passed. But using the same serial hardware in the next century soon caught up with the Amiga.

A lot of things soon caught up with the Amiga, but we should remember that the A1200 was designed in 1991 - and was not supposed to be a high-end machine. Most PCs of its class at that time didn't have buffered serial ports either.

I remember when 9600bd MODEMs came out. I was gobsmacked at how they could have done this on a normal telephone line. I was trained as a telecoms technician and so knew about bandwidth theory which said it shouldn't be possible, but I didn't know about QAM because it wasn't a thing when I was trained.

The first Supra 9600bd MODEM I got for the shop's bulletin board cost NZ$1000, half the price of a complete Amiga computer system. In 1992 Supra released their first 14400bd MODEM for US$399. The A1200 had no problem handling this baud rate either, but then came 33k6, and finally 56k which was too much for it. By this time the average PC was a fast 486 or Pentium, and anyone with a 386SX (A1200 equivalent in the PC world) had dumped it - or at least was not trying to use it on the Web!

Quote:
Perhaps it's not surprising the last Amiga also came with last century's ISA slots and not the replacement PCI. Oh no. I found another one.

The last Amigas came out in 1992. All PCs back then had ISA slots, and few had PCI. I have an 800MHz Celeron PC with one ISA bus slot - and a parallel printer port, two RS232 serial ports and a 5.25" floppy drive too! All useful stuff you can't get today.

It's a pity that bridgeboard slot wasn't better utilized, because the market was awash with cheap ISA bus cards back then. Later on they became available second-hand for next to nothing, and would have been an excellent way for Amiga fans to do more on the cheap.

Quote:
Lol. I thought it may have been because it expected ECC/ECP port lines. Even if the mode couldn't be used.

No, the Zip drive didn't use ECP, only 'EPP' (bidirectional), which the Amiga had. The issue was the PC printer port has 4 programmable output control lines, but the Amiga's strobe line is dedicated to producing strobe pulses only. I could have used the other 3 lines with a demux chip to create 8 states, but the joystick port was right there so...

Quote:
Did a nibble mode have to be used?

AFAIK the Zip drive didn't use nibble mode, and if it did I wouldn't have wanted to use it (too slow, too hacky!).

Quote:
Quote:
Not that hard. A simple MUX to switch data and control lines between Amiga and 765 is all you need.


But can it be used to transfer Amiga data? That is work with both PC and Amiga disks transparently.

Yes, that's what the MUX is for, to switch from one controller to the other. A lot of hassle just to get higher density though.

Quote:
But the IBM PC barely had 3 years in by that stage so don't see how being "IBM Compatible" was such a big deal back then like it was later.

I can assure you it was, at least in the US which was Commodore's home turf. It was a big deal here in NZ too in the 'serious' market. Home computers like the Amstrad CPC and Amiga were considered to be toys, not worth even thinking about. The very idea of a 'toy' being 'technically superior' to a PC was laughable. And yes, 3 years was all the time it took....

Quote:
In any case though I didn't spend much with this IBM PC thing, but I found the Amiga was actually very IBM compatible and adhering to sacred IBM standards. My CBM printer had to be set to IBM mode to connect to my Amiga. The character set wasn't a proper CBM character set with card pictures and had a more serious IBM style character set in the Topaz font. There were other things I forget along with PC transfer tools in the OS. So people can't say the Amiga didn't follow PC standards at all as it supported them in the beginning.

Yes, compared to other platforms like the C64 and Mac the Amiga was a lot more 'compatible'. At the launch of the A1000 they even demonstrated it running the PC's 'killer app' Lotus 1-2-3, from a copy-protected disk! But in the PC world that wasn't nearly enough. Many an early PC clone got dinged for not being 100% compatible, which meant it had to run everything you threw at it without the slightest glitch. That attitude never let up, except for the '100%' goalposts being moved to suit modern machines.

Quote:
Possibly all three with compression. But I didn't have a compressor in those days. However on one disk drive it was terrible. The dictionary would incur 20 disk swaps or something. I don't think they tested it on one drive. The file manager was terrible.

Kindwords wasn't the best word processor, and like most sophisticated productivity apps it benefited greatly from a second disk drive. But it did work with one drive, cost nothing and was useful. To do the same thing on a PC you needed a hard drive, which in 1985 was not cheap!

Quote:
I don't know whether to laugh or cry at that one. I could laugh now. For those that went before me actually installing those disks were surely crying before the horror ends.

I reckon I spent half my life shoving floppy disks into PCs. It took well over an hour to install Windows 95 from floppies.

Quote:
I wonder what they installed to it? I mean, few games installed to HDD. Protective ones installed to HDD then needed the game disk right back in the drive.

A lot of games used 'manual' protection (we even did that for one title I coded - forced the user to read the manual!). In general games produced in the US tended to be hard drive installable, while those from Europe didn't. This reflected the ownership trends, with many more US Amiga fans having hard drives. About half the people I knew with an Amiga had a hard drive, and many of them had 'big box' Amigas stuffed with cards.

Quote:
PCs had the benefit of having a case with a drive slot to plug it into.

Mostly true. For a while there was a trend towards 'slimline' machines with only one or even no 5.25" drive bays, then CDROM drives became popular and everybody wanted a 'tower' case.

Quote:
It looks real nice and would have been suitable for the part. But I'm not sure of the way they were going to connect it. I would have expected it to connect neatly on the side, like the A570, but through the PCMCIA slot and be joined at the hip.

It had a card that plugged into the trapdoor slot, and a cable coming out back on the right hand side. No good if you had an accelerator card. Like I said, we are lucky Commodore died when they did!

cdimauro 
Re: How good or bad was the AGA chipset in 1992/1993.
Posted on 19-Oct-2022 6:00:02
#392 ]
Elite Member
Joined: 29-Oct-2012
Posts: 3650
From: Germany

@matthey

Quote:

matthey wrote:
cdimauro Quote:

Possible. But a guy in a post said that Lightwave increased the speed by 300% when it was properly compiled for the 68040. So, I assume that OxyPatcher wasn't required at least for this processor and which also means that it was good for the 68060 as well, since it had an extra instruction (which was removed on the 68040).


SAS/C may generate more optimal code for the 68040 and maybe even the 68060 but it is still using plenty of trapped 6888x FPU instructions. It takes a significant amount of work to update the backend and support code of a compiler and remove all trapped instructions. I'm not even sure GCC has done it for 68k FPU code generation although the number of 6888x instructions has been greatly reduced to where they are uncommon and don't make much difference to performance.

I don't expect many "legacy" / more complex instructions to be generated. The most important and common ones are natively supported by the 68040/68060.
Quote:
Vbcc shouldn't use any trapped instructions when compiling for the 68040 or 68060 which is more important for embedded use where trap support like the 68060SP may not exist. It was a little easier to change because the backend was simple, not generating trapped instructions to begin with. It was primarily the fp math libraries that needed to be changed (and created in the case of the m060.lib) which is what I worked on. Despite the simplicity of the vbcc 68k FPU support, it appears to work well and outperforms GCC and SAS/C in floating point performance on the 68060.

Unbelievable. Have you tried with LLVM? Recently it got a 68k backend.
Quote:
If you have the last versions of Lightwave, try diassembling it. The version of ADis I updated does a good job of disassembling it.

No, I don't have it. Was it horrible?
Quote:
I could have started optimizing it but it is big. Maybe back in the desktop video days if I had my knowledge and the tools I have today, I could have optimized/patched it for the 68040 and 68060 and there would have been money in it.

Absolutely. The time saved on rendering is worth money, and someone could have been interested in paying to reduce the time-to-market.
Quote:
I'm surprised it wasn't simply compiled with GCC for the Amiga which should have been an improvement. GCC was one of the most logical candidates for compiling the x86 version of Lightwave but if it was used then why didn't Newtek compile the Amiga version with GCC also?

The most logical candidate would have been Intel's compiler, which generates much better code than any other.

So, SAS/C was used for the Amiga and something else for PCs? It's not clear to me.
Quote:
cdimauro Quote:

What was left I think it was the instructions ordering for the 68060's superpipeline.


The Diab Data 3.4A C compiler is the only compiler I know of with an instruction scheduler for the 68060. Motorola had developed their own compiler which they were using for the 88k just a couple of years earlier but stopped development to save money and worked with other compiler developers. Mitch Alsup complains about this being the start of the downward spiral at Motorola and he has experience working on compilers.

It depends on what the company's goal was. If Motorola developed its own compiler to make money AND there was a market for that, then continuing made sense.

But if it was only for supporting its architectures, then it's better to route the resources to an existing compiler, like GCC, to save money while still providing the required support for its customers.
Quote:
The Diab Data compiler/toolchain is owned by Wind River now and switched to LLVM (Wind River is the VxWorks OS embedded guys).

Yes, I know them: they were colleagues when I was at Intel.
Quote:
LLVM doesn't sound like a good embedded compiler with its bloat

It's not bloat. Its problem is that it takes a HUGE amount of resources just to compile such a big project.

But once you have the binaries, then it's like other compilation tools.
Quote:
but I guess all compiling is done remotely anymore.

That's a good option for systems with few resources.
Quote:
Maybe a smaller footprint compiler with few dependencies like vbcc isn't needed for embedded systems.

It entirely depends on the project's needs. If you don't need to compile often and the compilation time isn't a critical factor, then a big compiler like LLVM is fine.

Otherwise VBcc could be a good option, IF the architecture is well supported. A backend could be added, because it's a very simple compiler. However I really don't like how VBcc is written.
Quote:
An instruction scheduler for vbcc would be especially good for the 68k. It could do optimizations that Volker expects the assembler to do but can't (vasm can only do relatively simple peephole optimizations). The instruction scheduler has information where the branches are so can do more advanced optimizations besides instruction scheduling. A good example is the following.

movem.l d2,-(sp) ; 4 bytes pOEP only

to

move.l d2,-(sp) ; 2 bytes pOEP|sOEP

Vasm can't make this optimization because MOVEM.L does not affect the flags while MOVE.L does.

Of course. This cannot be solved by a peephole analyzer: it should be addressed at the backend level.
Quote:
With branch information, I believe the instruction scheduler could optimize this in most cases while keeping the backend simpler and optimizations modular like Volker wants. An instruction scheduler could provide optimizations for earlier 68k CPUs which don't need instruction scheduling although scheduling all code for the 68060 is worthwhile as it is no worse on earlier CPUs (except for 68040 optimized FPU code which is better with parallel FMOVE scheduling).

But actually the 68060 could only issue one FPU instruction per cycle, at most. So, no instruction scheduling should be needed here.
Quote:
cdimauro Quote:

As you know, I don't like those synthetic benchmarks: I always prefer real world applications. A rich set, if possible.


The ByteMark benchmarks come from real algorithms like the SPEC benchmarks.

No, SPEC provided applications, whereas ByteMark is only a collection of algorithms. That's not the same thing.

Testing how well a processor performs on a single algorithm isn't a real measure of how it performs running an application, with many algorithms in use and maybe calling each other.
Quote:
Lightwave rendering is a good benchmark but I expect it shows the weakness of SAS/C code generation on the Amiga likely partially saved by OxyPatcher when used. There is likely no SAS/C 68060 instruction scheduling either but the strength of the in-order 68060 with unscheduled code still shines through.

I doubt that the compilers of the time were able to do proper scheduling for the Pentium, and especially for its FPU section (since it required some special handling).
Quote:
cdimauro Quote:

They look strange to me. On the usual code (not fully optimized for the 68060) 16-bit instructions didn't even reached 50% of the total (on average).

The 68060 was able to execute a pair of instructions only if they were both 16-bit.

Let's simplify the calculations and say that 16-bit instructions were exactly 50% of the total. It means that the probability to have (and execute) a pair of them is 50% * 50% = 25%. So, very very distant from the above numbers.


Don't be fooled by the 4 byte/cycle instruction fetch of the 68060 even though it is half of the less efficient Pentium and PPC 603e competition. The key is the decoupled fetch (IFP) and execution (OEP) pipelines with a FIFO between them. The IFP consistently fetches, predecodes instructions and places them in the FIFO when the OEPs are stalled or executing multi-cycle instructions. This is often enough that the FIFO rarely empties with just a 4 byte/cycle fetch and the FIFO often contains enough instructions for multi-issue into the OEPs. The IFP may spend multiple cycles fetching a long instruction but it doesn't matter unless the FIFO is nearly empty. Too many large instructions together can starve the FIFO as can a large instruction after a pipeline refill but these are rare as the average instruction length is less than 3 bytes. An 8 byte/cycle fetch may have increased performance slightly but it is not worthwhile until other improvements are made, especially cache size increases.

Some 68k developers believe the instructions coming out of the FIFO are very restricted in size and thus the instructions multi-issued are weak. I believe this is due to various errors and lack of clarity in 68060 documentation.

FIFO instructions (i+1),(i+2) or (i+2),(i+3) issued to the OEPs

Some developers believe i=2B because a basic 68k instruction or OP=2B. The 68060UM says that the FIFO is 96B and other documentation states that there are 16 FIFO locations (96B/16=6B). Even with this belief, i=6B already allows much stronger instructions to be executed. I suspect there is an error in the 68060UM and the 96B FIFO should actually be a 16x96b FIFO.

Superscalar Architecture of the MC68060 Quote:

The four-stage instruction fetch pipeline, which the FIFO instruction buffer decouples from the dual execution pipelines, performs the chip's instruction prefetch. The third stage is a table lookup that uses the 16-bit opcode to produce a 32-bit longword of decode information.

...

Another field provided by the early decode is the instruction length. Having this information available allows packaging of the stream of fetched variable-length instruction into machine instructions before they move into the instruction buffer. The buffer provides 16 storage locations, each location containing a 16-bit operation word, 32-bit extension words, and the early decode information. Typically, instructions require a single instruction buffer entry, although the more complex instructions require multiple locations.


16b OP (operation word) + 32b (extension words) + 32b (early decode) = 80b

Joe Circello is describing a 16x80b FIFO which is significantly larger than the 16x48b (16x6B=96B) FIFO of the 68060UM. Now what I believe Joe left out.

16b (OP) + 16b (brief/full extension word) + 32b (immediate/displacement) + 32b (early decode) = 96b

The 16b OP and 16b brief/full extension word together provide basically a 32 bit instruction. The brief/full extension word has to be read to calculate the instruction length when using (d,An,Xn*Scale) addressing modes. There can be two brief/extension words for MOVE mem,mem but these instructions are broken into 2 RISCier FIFO instructions and the load half executed separate from the store half (pOEP until last & 2 cycles). The 32b immediate/displacement is what Joe calls extension words of the instruction. I believe this matches the capabilities of the 68060 with instructions that require multiple brief/full extension words or more than 32 bits of immediate/displacement data requiring more than one FIFO instruction slot, requiring more than one cycle to execute and do not multi-issue. An internal 96b fixed length instruction format allows for powerful 68k instructions to fit in one FIFO instruction slot to be multi-issued. The P6 Pentium and successors used an internal 118b fixed length RISC format for a long time although this was eventually widened for more powerful instructions.

I see, but I'm still not convinced that the 4B/cycle instructions fetch is enough to keep the instructions buffer properly fed to guarantee that the two pipes are used.

I don't know which simulations Motorola did, but looking at several disassembled or written lines of 68k assembly code, there aren't many chances to do it. Usually it could happen when instructions take more cycles to execute, or if a pair cannot match. But in those cases it means that you're already missing the execution of instructions on both pipes. I don't know if you got what I mean.
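My doubt can be put as a simple steady-state bound (a toy Python calculation, not a simulation of the real decoupled pipelines):

```python
def max_issue_rate(fetch_bytes_per_cycle, avg_instr_len, max_issue=2):
    """Sustained issue rate can't exceed fetch bandwidth / code density."""
    return min(max_issue, fetch_bytes_per_cycle / avg_instr_len)

# With the quoted <3-byte average length, a 4 byte/cycle fetch caps
# sustained dual issue well below 2 instructions per cycle:
rate_3b = max_issue_rate(4, 3.0)   # ~1.33 instructions/cycle
rate_2b = max_issue_rate(4, 2.0)   # 2.0 only with pure 16-bit code
```

The FIFO can of course cover bursts while the OEPs stall, which is matthey's point; the bound only says what the fetch side can sustain on average.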
Quote:
The 64 bit integer math instructions on the "CISC" 68060 had to be removed to be more "RISC" like though. It's not like Motorola philosophy changed in 2 years.

Well, it already started with the 68030. And not only about instructions: functionalities were affected as well by Motorola's axe.
Quote:
It's more like start axing features to save transistors, area and chip cost for embedded use.

Hmm, I don't think so. Maybe it could be true for the 68060, but the 68030 and 68040 were sold for the desktop & enterprise markets.
Quote:
RISC philosophy even then seems to be keep it simple and today we have micro-coded OoO monstrosities with CISC like features except for reg-mem accesses, at least not yet.

If you add the reg-mem operations then what's left from the RISC philosophy? Nothing!

It would be the complete, total failure. But I'm pretty sure that the academics will find other ways to justify it and continue with their ridiculous propaganda.
Quote:
Here is another good quote by Mitch Alsup.

Mitch Alsup https://groups.google.com/g/comp.arch/c/SrAHwT4lfEo Quote:

A 16KB L1 cache (small) is already bigger than the register file, forwarding logic, and all integer execution stuff, and other interfaces this section talks to. Adding RMW operations to the cache does not add "that much" logic or "that much" to verification.


The last, most important and most taboo separation of RISC and CISC isn't really that much. The caches use more logic than reg-mem which saves caches.

Indeed. But I don't understand what Mitch is stating with this "Adding RMW operations to the cache". To me it looks like a typo: it should have been ISA instead of cache.
Quote:
Requiring more caches must be more RISC like though.

ROFL
Quote:
Moore's law is over and the battle to get to a newer chip fab process first has changed into a battle of CPU design and ISA/ABI efficiency.

Absolutely. That's the opportunity for new architectures to be considered.
Quote:
cdimauro Quote:

Yes, it had many advantages.

But Pentium had also its own: 64-bit data bus, larger instructions fetch buffer (so, being able to pair longer instructions), all instructions supported (no trapping and/or software patches or special library versions needed).


The 64 bit data bus increased cost of not just the CPU due to more pins but also memory required for it while providing a small performance benefit.

I've no data, but to me fetching more instructions and filling the data cache faster should give good improvements.
Quote:
A good compromise would have been to allow 32 or 64 bit memory like the PPC 603(e) allowing for cheaper or higher performance memory but it still requires more pins/multiplexing and logic.

That's if you consider the embedded market. In other markets, having bigger memory bandwidth is an advantage.
Quote:
It's more compelling to use a 64 bit data bus today as 64 bit memory is more common and cheaper. Most high performance CPUs are integrated into a SoC where as much memory bandwidth as possible is more advantageous.

And we have 128-bit memory accesses even on low-end desktop computers.
Quote:
The 68060 decoupled IFP and OEPs handle variable length instructions including larger instructions with elegance and efficiency instead of Pentium brute force as explained above. Any Pentium performance advantage here is "marginally faster". The only time there is a performance advantage is when the FIFO is nearly empty (i less than 3) which is rare. The 68060 could have easily added an 8 byte/cycle fetch but deemed it not worthwhile.

Superscalar Architecture of the MC68060 Quote:

We used dynamic code analysis of existing 68K applications to determine the instruction-fetch bandwidth necessary to support the superscalar operand-execution pipelines. The chip’s instruction-set architecture contains 16-bit and larger instructions, with a measured average instruction length of less than 3 bytes. Simulations based on trace data indicated that, holding the rest of the architecture constant, with the combination of a branch-prediction driven prefetch and an instruction buffer, a 64-bit instruction prefetch would be only marginally faster than a 32-bit instruction prefetch. Based on this analysis, the instruction cache to the instruction-fetch pipeline interface has separate 32-bit address and data buses. All instruction fetches are 32-bit aligned fetches. The instruction cache supports a continuous one instruction fetch per cycle rate.

Already replied above about this. However if instruction fetches are also 32-bit aligned, then the situation becomes even worse.
Quote:
Removing instructions and hardware support of rarely used instructions doesn't have much theoretical effect on performance but the reality is that compatibility is lost and compiler support is more difficult and generates less efficient code. The fact that we talked about OxyPatcher above which is a kludge and that the tiny development team vbcc compiler may generate the best performance FPU code for the 68060 more than two decades after its release shows that this can be important. I can see Motorola's logic in removing instructions when transistors were expensive but I think they went too far with the 68040 (FPU) and 68060 (64 bit MUL/DIV and FPU). Yes, this is a small performance advantage for the Pentium which has an advantage from keeping its inferior FPU implementation in hardware while the 68k removed much of a cleaner implementation.

Indeed, and it wouldn't have cost too much: at the very end, microcode is there for those cases...
Quote:
cdimauro Quote:

I didn't mention it because this is another advantage for the Pentiums.

They were able to achieve those much higher frequencies despite packing many more transistors. Especially for the MMX versions.


Intel turned it into an advantage. They optimized the parts of the Pentium that were limiting the clock speed. Motorola had no desire to do this for the 68060.

Unfortunately it already started with the 68040, which wasn't able to reach high frequencies and it was also hot.
Quote:
cdimauro Quote:

It looks like that IBM had no other choice, since the 68k wasn't able to fit its requirements (especially the missing second supplier).


Of course IBM had a choice. It may have been riskier or more difficult to go with the 68000 at that point, is all. IBM is known for being very conservative. Is it too conservative to go with a gimped 8088 in good supply because it is a slow seller, instead of a fast-selling and much superior 68000 which may run out of stock?

AFAIR at the time when the 68000 was evaluated Motorola had issues producing them.
Quote:
Toshiba, Hitachi, SGS-Thomson and Philips had 15.5% of the 68k market share by units in 1991, according to Dataquest, so 68k licensing did happen but maybe not early enough.

But the first IBM PC was released in 1981, and its development started a couple of years before that, AFAIR. So IBM had to take a decision around 1979 about which processor to use.
Quote:
cdimauro Quote:

Exactly. AGA was low-end but... it was used on the top notch machine: the Amiga 4000. Only Commodore could have done it...


CBM did much better in the low end computer market than they did in the high end market because the Amiga chipset had slipped to low end. Would they consider upgrading the high end chipset and then the low end to the same spec when it became cheap enough? Was that what Jay Miner was suggesting with the VRAM Ranger chipset? Instead, CBM buried the Ranger chipset and developed an even higher spec and more expensive AAA chipset using VRAM which couldn't even be used for the high end. Then they hastily created a cheaper rushed AGA chipset for all Amigas but continued to develop and produce the previous obsolete chipset (Amiga 600 & 500+) which led to financial trouble. They finally created specs for a more practical chipset with practical needed features with AA+ but ran out of cash to produce it due to their chipset incompetence.

Indeed. But what's important to underline is that they rejected the Ranger chipset, which at least brought some new things, and STARTED the AAA.

This way they had NOTHING new to give to their customers, who had been waiting for years.

And then they rushed out the infamous AGA.

How stupid were they?! Bah

 Status: Offline
Profile     Report this post  
cdimauro 
Re: How good or bad was the AGA chipset in 1992/1993.
Posted on 19-Oct-2022 6:06:05
#393 ]
Elite Member
Joined: 29-Oct-2012
Posts: 3650
From: Germany

@Hypex

Quote:

Hypex wrote:
@cdimauro

Quote:
Indeed. But the main problem is that SuperHires using 8 bitplanes is utterly slow: it leaves only a few memory slots free for doing something other than displaying the screen.


Given HAM was always a hardware mode I always wondered what incursion it had on the system. It uses max planes but aside from that does HAM8 mode take more out of the system?

Of course. HAM = 6 bitplanes whereas HAM8 = 8 bitplanes. So the latter consumes more memory slots.
Quote:
Though not as pretty, HAM6 would be better, if system load is considered.

Yes, but the quality is much lower then.
Quote:
C2P takes some time. Though not too much on an 060, I found out. But here it needs to render four times as much. So it does its raycasting in low res. But then it upscales it to super hi res! Ouch.

Indeed. And if you work without fast mem, then you end up having only a few free memory slots available to do this process.
Quote:
16 bpp or 24 bpp, either way the framebuffer it renders is already bigger than an 8-bit one. In the best-quality case, it needs to render a long word per pixel, so 320 pixels across will be 1280 bytes to store per line. Then, to dump it in HAM, it needs to read in a long per pixel and output 4 bits in planar per plane. So, taking 8 pixels and scaling each to a nibble per plane, 8 long words of 24-bit RGB pixels become 8 long words in planar. Madness!

Yup.

@bhabbott

Quote:

bhabbott wrote:
@Hypex

Quote:

Hypex wrote:

Given HAM was always a hardware mode I always wondered what incursion it had on the system. It uses max planes but aside from that does HAM8 mode take more out of the system?

Superhires with 8 bitplanes saturates the bus during active display time no matter what mode is in use. HAM8 is no more taxing than 256 colors or 16-color dual playfield. Actually, if you don't touch the base colors it's less taxing, because you only have to render to 6 bitplanes.

A better choice would be hires HAM8, giving an effective horizontal resolution of ~213 pixels. On a TV it would look pretty good, with close to the same resolution and much smoother color gradations than 256 colors. In hires there is no DMA contention, so the blitter/CPU has full-speed access, and there's much less to do.

Which is plainly wrong: 8 bitplanes in hires use HALF of the available slots during the active display.

OneTimer1 
Re: How good or bad was the AGA chipset in 1992/1993.
Posted on 19-Oct-2022 18:18:50
#394 ]
Cult Member
Joined: 3-Aug-2015
Posts: 973
From: Unknown

@Hypex

There was not much software supporting HAM.

It was good for displaying static pictures but hardly used in games or productivity software, even adventures avoided this mode.

I wouldn't call it a failure but it wasn't that much of a benefit.

bhabbott 
Re: How good or bad was the AGA chipset in 1992/1993.
Posted on 19-Oct-2022 23:28:09
#395 ]
Regular Member
Joined: 6-Jun-2018
Posts: 332
From: Aotearoa

@cdimauro

Quote:

cdimauro wrote:
@Hypex

Quote:

Hypex wrote:
@cdimauro

Given HAM was always a hardware mode I always wondered what incursion it had on the system. It uses max planes but aside from that does HAM8 mode take more out of the system?

Of course. HAM = 6 bitplanes whereas HAM8 = 8 bitplanes. So the latter consumes more memory slots.

You misinterpreted what he said. HAM8 uses no more resources than any other (same size) screen mode that uses max planes.

Quote:
Which is plainly wrong: 8 bitplanes in hires use HALF of the available slots during the active display.

OK, I slightly misspoke there. The CPU does not have access to those slots, so it is not contended by bitplane DMA. The blitter can use those slots, so it is about 25% slower with 8 bitplanes than with 2 bitplanes. However, with 8 bitplanes it is the same speed as the OCS/ECS blitter with 2 bitplanes (or 4 bitplanes in lores), so it is not contended in comparison to commonly used OCS/ECS screen modes.

cdimauro 
Re: How good or bad was the AGA chipset in 1992/1993.
Posted on 20-Oct-2022 5:57:12
#396 ]
Elite Member
Joined: 29-Oct-2012
Posts: 3650
From: Germany

@bhabbott

Quote:

bhabbott wrote:
@cdimauro

Quote:

cdimauro wrote:

Which is plainly wrong: 8 bitplanes in hires use HALF of the available slots during the active display.

OK, I slightly misspoke there. The CPU does not have access to those slots, so it is not contended by bitplane DMA.

The CPU isn't a 68000 (AGA machines have at least a 68020), so it always has access to those slots (I mean: it can potentially access all of them).

And even a 68000 in those conditions (potentially) has access to half of those slots.

bhabbott 
Re: How good or bad was the AGA chipset in 1992/1993.
Posted on 20-Oct-2022 8:11:23
#397 ]
Regular Member
Joined: 6-Jun-2018
Posts: 332
From: Aotearoa

@cdimauro

Quote:

cdimauro wrote:

The CPU isn't a 68000 (AGA machines have at least a 68020), so it always has access to those slots (I mean: it can potentially access all of them).

So these people are wrong?

A1200 CPU memory access speed (with DMA)
Quote:
Toni Wilen
WinUAE developer

Blitter timing has not changed. Chipset timing has not changed. Main difference is bus width, CPU has 32-bit access to chip ram... timing is same, chipset still needs 2 clocks to complete single CPU access just like OCS/ECS.


Quote:
Kalms

Whenever the CPU - regardless of whether it's a 68000/020/030/040/060 - performs a write operation, it will spend 2 buscycles communicating with the custom chipset. This puts the peak throughput for the CPU at ~1.77M write operations per second. A 68020+ in an AGA system would perform 32-bit writes, thus ~7.09MB/s write throughput.

The CPU will not be able to utilize more than every 2nd buscycle for the actual data transfer. That leaves the 1st buscycle in each buscycle pair free for other things - system DMA, bitplane DMA, blitter DMA.


Hammer 
Re: How good or bad was the AGA chipset in 1992/1993.
Posted on 20-Oct-2022 9:28:11
#398 ]
Elite Member
Joined: 9-Mar-2003
Posts: 5275
From: Australia

@bhabbott

Quote:

Er, wot?

Launch price of the A3000-25/40 was US$3999 (Amiga World June 1990). Exchange rate NZD to USD Jan 14 1991 0.5935. $3999/0.5935 = NZ$6738. + 10% sales tax (GST) = $7412. My supplier said he gave me a discount, and it seems he was telling the truth!

Your estimate of NZ$3000 is way off. How could that be?

Oh dear, I just figured it out:-


https://archive.org/details/Australian_Commodore_and_Amiga_Review_The_Volume_9_Issue_9_1992-09_Saturday_Magazine_AU/page/n3/mode/2up




September 1992 for Australian price from the state of Western Australia
Amiga 3000 with 52 MB HDD = $2,765 AUD.
Amiga 500 Plus = $589

For a new computer in Q3 1992, the A3000's asking price was not competitive compared to a 386DX-25 PC clone.

A 386DX PC clone with ISA slots can be upgraded with a fast SVGA card.

The A3000 is not AGA-upgradable, hence, like many other full 32-bit 020/030 OCS/ECS Amigas, it can't run 256-color Doom.



A2000 base = $995 AUD
A2000 with 52 MB HDD = $1,399 AUD.

The A2000 has the same core 68000/OCS or ECS as the A500/A600, at an inflated price.

https://jolt.law.harvard.edu/digest/intel-and-the-x86-architecture-a-legal-perspective

during which the PC platform’s market share grew from 55% in 1986 to 84% in 1990; leaving Apple’s Macintosh at a distant second place with just 6%.


In 1990, PC clones already had 84% of the desktop computer market.


Last edited by Hammer on 20-Oct-2022 at 09:29 AM.

_________________
Ryzen 9 7900X, DDR5-6000 64 GB RAM, GeForce RTX 4080 16 GB
Amiga 1200 (Rev 1D1, KS 3.2, PiStorm32lite/RPi 4B 4GB/Emu68)
Amiga 500 (Rev 6A, KS 3.2, PiStorm/RPi 3a/Emu68)

bhabbott 
Re: How good or bad was the AGA chipset in 1992/1993.
Posted on 20-Oct-2022 20:45:08
#399 ]
Regular Member
Joined: 6-Jun-2018
Posts: 332
From: Aotearoa

@Hammer

Quote:

Hammer wrote:

https://archive.org/details/Australian_Commodore_and_Amiga_Review_The_Volume_9_Issue_9_1992-09_Saturday_Magazine_AU/page/n3/mode/2up

Thanks for the link - another magazine I had forgotten about!

But...

Amiga 3000
Quote:
Release date June 1990
Discontinued 1992

Commodore replaced the A3000 six months behind schedule, in the fall of 1992, with the A4000

We knew something was coming even before the A4000 was announced, because Commodore slashed the price of the A3000 a few months beforehand. The price you see it selling for in September 1992 was not the launch price!

And yes, those of us who bought it at full price were a little miffed. But we understood why they had to dump it, and the promise of something new helped alleviate the pain of seeing the value of our 'investment' drop dramatically. Such is life on the bleeding edge...

The magazine you should be looking at is ACAR Annual 1991, with a review of the A3000 quoting prices of:-

Amiga 3000-25-40 (25MHz with 40Mb HD) AU$6119
Amiga 3000-25-100 (25MHz with 100Mb HD) AU$7199

bhabbott 
Re: How good or bad was the AGA chipset in 1992/1993.
Posted on 20-Oct-2022 22:31:19
#400 ]
Regular Member
Joined: 6-Jun-2018
Posts: 332
From: Aotearoa

@Hammer

Quote:

Hammer wrote:

For a new computer in Q3 1992, A3000's asking price is not competitive when compared to the 386DX-25 PC clone.

Equating a crappy PC clone with a slow 16-bit ISA bus to the A3000 with its 32-bit Zorro III bus and 32-bit Chip RAM? You can't be serious.

Quote:
386DX PC clone with an ISA slot can be upgraded with a fast SVGA card.

Oh, you are serious!

Could you upgrade the CPU in that 386DX clone to a Pentium? Or more 32 bit RAM than could fit on the motherboard? No. But the A3000 had a CPU slot that could take not only an 060 but oodles of faster RAM and super-fast graphics too if you wanted.

I upgraded my A3000 with a Cyberstorm 060 at 50MHz and 32MB of local Fast RAM, and a Picasso-II RTG card with 2MB RAM (which is only Zorro II, but still pretty snappy due to the Cirrus Logic GD5428's 32-bit blitter). With this configuration it provided sufficient performance to do the same job as a Pentium PC in the applications I was using it for, well into the 21st century. The only reason I sold it was compatibility issues, particularly on the Web.

Quote:
The A3000 is not AGA-upgradable, hence, like many other full 32-bit 020/030 OCS/ECS Amigas, it can't run 256-color Doom.

It can with an RTG card. I was running Quake on my A3000. No other stock OCS/ECS Amiga was 'full' 32-bit, but you could buy 32-bit accelerator cards with very fast on-board RTG for the A2000, and the A500 also had a solution for 256 colors. However, even on OCS you can run Doom in 64 colors, which doesn't look quite as good as 256 colors but plays just as well (with textures optimized for 64 colors it would look better). The only thing you really need is a fast CPU.

If Doom is what you really wanted, that is. I had various PCs over the years, including a 386DX and a 486. The 486 played Doom very well, but it got boring fast. I preferred real-time strategy and adventure games with nicely drawn graphics. Texture-mapped 3D didn't excite me, and wasn't enough to make up for the over-the-top mindless violence in games like Doom and Quake.

Last edited by bhabbott on 20-Oct-2022 at 10:33 PM.


Copyright (C) 2000 - 2019 Amigaworld.net.
Amigaworld.net was originally founded by David Doyle