coder76 (New Member; Joined: 20-Mar-2025; Posts: 9; From: Finland)
Re: Commodore > Motorola - Posted on 7-Apr-2025 7:50:07 [ #81 ]
Now that some of you have started talking about typesetting your theses: there was an attempt to bring LaTeX to Amiga computers as well, called AmiWeb2c (still available on Aminet). I got it installed on my A1200/68030-50MHz machine back then, and it actually worked, but it was quite slow and difficult to install. I actually wrote a few physics reports with it. It also had a dvi viewer, so you could view the dvi files generated from your .tex files, and I used CED to edit the .tex text files.
I also had the Geek Gadgets ADE environment installed with gcc and other Unix tools on my A1200. I don't think these would work well anymore if the most recent versions were ported. I remember how slow and bloated gcc started becoming after release 3, and of course it was not much use for the Amiga, as m68k code generation did not get any better.
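For those who never saw that workflow: you wrote a .tex file, ran it through LaTeX to produce a .dvi, and opened that in the viewer. A minimal, hypothetical example of the kind of file it would process:

    \documentclass{article}
    \begin{document}
    A minimal physics report: $E = mc^2$.
    \end{document}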
Lou (Elite Member; Joined: 2-Nov-2004; Posts: 4256; From: Rhode Island)
Re: Commodore > Motorola - Posted on 7-Apr-2025 15:59:15 [ #82 ]
@cdimauro
Quote:
Another DUMB idea from Motorola, which had missed no opportunity to show how it doesn't care, at all, about backward-compatibility.
|
Well - retard - you're finally seeing the light...
The move to Motorola was the initial dumb move by the original Amiga team.
Not moving off Motorola ASAP was even dumber by Commodore.
How many Amiga engineers have you thought about blowing today?
matthey (Elite Member; Joined: 14-Mar-2007; Posts: 2598; From: Kansas)
Re: Commodore > Motorola - Posted on 7-Apr-2025 21:57:14 [ #83 ]
codis Quote:
Not that I'm an expert in this field. But AFAIK, Risc-V is relatively open, much less restrictive than ARM licences. It would make sense, I would say. Having been employed in the "low power / low budget" niche of the embedded market, I came across a lot of proprietary architectures and ISAs that are obsolete today. Maintaining such stuff is a nightmare. Just saying ...
|
RISC-V is a reasonable guess. It looks like Arox is designing the ax-e0 for the lowest possible power, a goal that may be constrained when using ARM Cortex-M designs. Cortex-M designs are configurable but may require an architecture license for the most detailed changes. Royalties are lower for Cortex-M than Cortex-A cores, which discourages new competing designs and ISAs. RISC-V cores can be very low area and power but have had weak performance. The Arox website says the ax-e0 has better performance per Watt (power efficiency) than "conventional architectures", which could be due to very low power even though the performance is weak. There are low power and high performance power efficiency leaders that may not compete with each other, as they are in different embedded categories. The lowest power designs use more transistors/gates for power gating, lower power 8-transistor SRAM instead of 6-transistor SRAM, compressed VLE encodings which require more transistors but save SRAM/caches and instruction fetch/supply power, etc.

RISC-V compressed VLE lacks the code density to compete for ultra low power MCUs and SoCs. Many of the deeply embedded RISC-V cores have chosen the lower area 32-bit fixed length encoding rather than the VLE, while larger Cortex-A-like cores have adopted the VLE as the de facto standard, perhaps because the 32-bit fixed length encoding's code density is as bad as traditional RISC ISAs like Alpha, PA-RISC, MIPS, SPARC, PPC, original ARM, etc. that went extinct because of the lack of code density. Code density is more important for low power embedded use, and anyone delivering less than Thumb(-2) code density is going to be asked why they did not use an ARM Cortex-M core. ARM Thumb was created from a SuperH license from Hitachi, an ISA itself copied from the 68k, for the purpose of obtaining embedded-leading code density. The three leading embedded 32-bit ISAs were the 68k, SuperH and Thumb(-2), which all had good code density, very good in the case of the 68k and Thumb(-2). RISC-V compressed VLE is behind all of them in code density.
Linux_Logo executable hand optimized for size

ISA        | instructions | code size | executable size
RISCV32IMC | 170          | 504       | 961
SuperH-3   | 229          | 458       | 994
Thumb      | 210          | 420       | 920
Thumb-2    | 189          | 402       | 908
68k        | 156          | 394       | 854

https://docs.google.com/spreadsheets/u/0/d/e/2PACX-1vTyfDPXIN6i4thorNXak5hlP0FQpqpZFk2sgauXgYZPdtJX7FvVgabfbpCtHTkp5Yo9ai6MhiQqhgyG/pubhtml?gid=909588979&single=true&pli=1
The embedded standard Thumb-2 uses 11% more instructions but has 20% better code density than RISCV32IMC in this benchmark. The 68k uses 8% fewer instructions and has 22% better code density than RISCV32IMC. Even SuperH has 9% better code density than RISCV32IMC, but with 35% more instructions and a larger executable size, because large immediates have to live in the program data due to the restrictive 16-bit fixed length encoding. ARM Thumb copied and improved on SuperH, while Thumb-2 added a 16-bit and 32-bit VLE for another nice improvement, but it still has inferior performance metrics compared to the 68k. A VLE with 2 sizes was an improvement, and ColdFire has 16-bit, 32-bit and 48-bit sizes for 3 sizes, but Motorola thought this was too much for embedded use and created the 16-bit fixed length encoding MCore, which was a short lived failure. Some people thought ColdFire would have been better off supporting a 64-bit size for a VLE with 4 sizes, as Gunnar suggested, and I agree. With Thumb-2 as the de facto code density standard, Cast created the BA2 ISA with 16-bit, 24-bit, 32-bit and 48-bit VLE sizes. Four sizes was too much for ColdFire, but NXP licensed a BA2 ISA core for the low power NXP JN5168 MCU.
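As a sanity check, these percentages all follow from the table above. A small Python sketch (my own illustration; numbers copied from the table) reproduces them:

    # (instructions, code size in bytes) from the Linux_Logo table above
    isas = {
        "RISCV32IMC": (170, 504),
        "SuperH-3":   (229, 458),
        "Thumb":      (210, 420),
        "Thumb-2":    (189, 402),
        "68k":        (156, 394),
    }
    base_insns, base_size = isas["RISCV32IMC"]
    for name, (insns, size) in isas.items():
        insn_delta = 100 * (insns - base_insns) / base_insns  # +/- instructions vs RISCV32IMC
        density_gain = 100 * (1 - size / base_size)           # smaller code = better density
        print(f"{name:10s} {insn_delta:+6.1f}% instructions, {density_gain:+6.1f}% code density")

It prints +11.2%/+20.2% for Thumb-2, -8.2%/+21.8% for the 68k and +34.7%/+9.1% for SuperH-3, matching the rounded figures above.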
https://alephsecurity.com/2019/07/09/xiaomi-zigbee-2/
https://www.nxp.com/products/wireless-connectivity/zigbee/zigbee-and-ieee802-15-4-wireless-microcontroller-with-256-kb-flash-32-kb-ram:JN5168
Quote:
The JN5168 is an ultra low power, high performance wireless microcontroller supporting ZigBee and IEEE802.15.4 networking stacks to facilitate the development of Smart Energy, Home Automation, Smart Lighting and wireless sensor applications. It features an enhanced 32-bit RISC processor with 256 kB embedded Flash, 32 kB RAM and 4 kB EEPROM memory, offering high coding efficiency through variable width instructions, a multi-stage instruction pipeline and low power operation with programmable clock speeds. It also includes a 2.4 GHz IEEE802.15.4 compliant transceiver and a comprehensive mix of analogue and digital peripherals. The very low operating current of 15 mA, with a 0.6 µA sleep timer mode, gives excellent battery life allowing operation direct from a coin cell.
|
Nothing as low power as the Arox ax-e0, but maybe it is older silicon, minimizing area instead. We can compare transistors, or gates as more commonly used for embedded cores today, to get a general idea of the area of competitors and VLE cores at least.
core       | pipeline  | min gates | ISA
Cortex-M0  | 3-stage   | 12k       | Thumb(-2)
Cortex-M0+ | 2-stage   | 15k?      | Thumb(-2) (lower power uses more transistors despite shallower pipeline)
ColdFireV1 | 2-stage   | 19k       | ColdFire
ColdFireV2 | 4-stage   | 40k       | ColdFire
BA20       | 0-stage   | 8k        | BA2 (low area)
BA21       | 2-stage   | 10k       | BA2 (low power deeply embedded)
BA22DE     | 4/5-stage | 15k       | BA2 (deeply embedded)
BA22CE     | 5-stage   | 30k       | BA2 (cache enabled)
BA25       | 5-stage   | 55k       | BA2 (application)
BA51       | 2-stage   | 16k       | RISC-V (ultra low power)
BA53       | 5-stage   | 30k       | RISC-V (low power deeply embedded)
https://www.cast-inc.com/processors/32-bit
The Cast core with the BA2 ISA VLE supporting 4 sizes is the smallest. The Cast RISC-V cores are nothing special, but maybe they are not as well optimized? Enabling more extensions like compression may make them larger in area too. ColdFire cores are maybe a little larger in area, but they have more support included rather than optional. There is room for improvement over the ARM cores in both area and code density. Cast claims up to 20% better code density than Thumb-2, and 7% to 20% better code density is claimed in some articles. Performance appears to be good, but BA2 is an 8-bit-granularity VLE, which has less efficient alignment and decoding than a 16-bit VLE like the 68k and Thumb-2. The 4 variable sizes and large BA2 immediate support should reduce the number of instructions executed, for improved performance.
coder76 Quote:
Actually also had the geek gadget ADE-environment installed with gcc and other unix tools on my A1200. Now I think these wouldn't work well anymore, if most recent versions would be ported. I remember how slow and bloated gcc started becoming after release 3, and of course not much use for Amiga, as m68k code generation did not become better.
|
GCC 3.4 had a major decline in compiled 68k code quality. GCC 3.x started the bloat, but it was better in some ways and worse in others, while the newer C standard support was useful. VBCC is a much friendlier and more modern compiler for the Amiga (mostly C99 compatible), but it is slower and uses more memory, even though the executables are reasonably sized. The integer compiled code is less optimized due to the backend, even though the support code is good, and floating point support and performance are better than GCC's. It is also a cross compiler, but there is no C++. Amiga compiler support has peaked, and there will likely be less support with virtual Amigas taking over and no need to optimize for emulation anymore. Many people think the ARM emulation hardware is great, but it is killing Amiga development. The really nice thing about the 68k Amiga is the small footprint standard system, which would be awesome for embedded use if we had real hardware. The popularity of RPi hardware has shown that many embedded developers prefer more standard embedded hardware, which results in better support and often better documentation. The BA2 ISA is cool, but it required reverse engineering to get basic info on the ISA, and there is no mainstream compiler support for benchmark comparisons. Too many hardware and OS configurations are a pain even for embedded use. That is why ARM moved from a la carte AArch32 cores to more standard AArch64 cores. The 68k Amiga used to be a standard system, including the hardware and the OS, before all the division. It is still the base system, but dividing and being conquered is the modern Amiga way.
Last edited by matthey on 07-Apr-2025 at 11:22 PM. Last edited by matthey on 07-Apr-2025 at 10:56 PM.
cdimauro (Elite Member; Joined: 29-Oct-2012; Posts: 4278; From: Germany)
Re: Commodore > Motorola - Posted on 8-Apr-2025 4:56:35 [ #84 ]
@codis
Quote:
codis wrote: @cdimauro
Quote:
Architecture usually means ISA, but in different contexts it's also used for microarchitecture (SIGH!).
|
Not that I'm an expert in this field. But AFAIK, Risc-V is relatively open, much less restrictive than ARM licences. It would make sense, I would say. |
Indeed. The only license for RISC-V is about the name, which is registered: if you want to use it, you have to prove that your implementation conforms to the specs. Otherwise, you can still implement a RISC-V processor, but without advertising that it's RISC-V.
However, the problem with RISC-V is that it's a weak architecture: it excels neither at code density nor at performance. It's very simple, yes, but going low-power means that you need a good balance of both factors: one for reducing the pressure on the memory hierarchy (which consumes a lot of power, at all levels, but especially on the frontend) and one for shortening the execution time (the shorter the execution time, the less energy the processor spends completing a task). I've simplified a bit, because the situation is more complicated, but overall it should give an idea.
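In a first-order sketch (my notation, not a precise model):

    E_task = P_avg * t_exec ≈ E_fetch(bytes fetched) + E_exec(instructions executed)

Code density attacks the first term, while fewer (and more powerful) executed instructions attack the second and shorten t_exec; a good low-power ISA has to lower both.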
That's why Leonardo might be a new architecture. Quote:
Having been employed in the "low power / low budget" niche of the embedded market, I came across a lot of proprietary architectures and ISAs that are obsolete today. Maintaining such stuff is a nightmare. Just saying ... |
That's why it's better to stick to consolidated projects. Nowadays there are LLVM and GCC, which offer a good infrastructure for supporting any architecture. But it takes time to master them, and to contribute. Quote:
Quote:
You were a bit late, but better enjoyed our beloved platform.
|
But for good reasons. Growing up on the other side of the fence, Amigas were not accessible until early 1990, like a lot of other stuff. |
Got it. There's pavlor here, who was in the same situation, and followed a similar path. Quote:
OTOH, I stuck with Amiga up until about 2000, when I bought my first x86 PC. And it took me less than a year to begin experimenting with Linux. The diploma thesis topic is still fresh on my mind, especially how my mentor looked down on me, suggesting he would lend me a PC. Needless to say, M$-Word had its reputation even back then. FinalWriter worked quite flawlessly. It took me about 6 weeks to get it finished, with about 120 pages and 20% diagrams. I only experienced one minor crash. And unlike equally or more expensive PC text processing packages, FW could import EPS graphics, e.g. from Gnuplot. A huge plus for me. |
Unfortunately, I missed that experience. My Amiga 1200's video output broke completely in 1996, so I had to step down and move to PCs (university, and then work). Quote:
While I had a few games, this part was less significant for me. Work and hobby coding projects more so. |
I was more balanced: I was a gamer from the age of 4 (1975), and I've spent a considerable amount of my life enjoying games (well, monochromatic small stuff at the beginning), up to my university days.
I was also tinkering with computers from the age of 11, but at first I spent my time on magazines and books. I got my first computer only 2 years later: a beautiful Plus/4. Before that I was writing programs in BASIC and machine language.
But I also played sports, especially with friends (tennis, soccer, athletics).
So, I missed nothing.
cdimauro (Elite Member; Joined: 29-Oct-2012; Posts: 4278; From: Germany)
Re: Commodore > Motorola - Posted on 8-Apr-2025 5:00:22 [ #85 ]
@coder76
Quote:
coder76 wrote: Now that some of you started talking about typesetting your theses, there was an attempt to bring also LaTeX to Amiga computers, it is called AmiWeb2c (still available on aminet), got it installed on my A1200/68030-50MHz machine back then, and it actually worked, but it was quite slow and difficult to install. Did actually write a few physics reports with it, it also had some dvi viewer, so you could view your dvi files generated from .tex files. And used CED to edit the .tex text files.
Actually also had the geek gadget ADE-environment installed with gcc and other unix tools on my A1200. Now I think these wouldn't work well anymore, if most recent versions would be ported. I remember how slow and bloated gcc started becoming after release 3, and of course not much use for Amiga, as m68k code generation did not become better. |
And it's even worse with LLVM: it takes an incredible amount of resources to build (>16GB of RAM recommended, and even 32GB may not be enough).
The primary problem with LLVM and GCC is that the backend for each processor needs to be finely tuned to generate optimal code (for speed, for size, or for a balance of both).
Writing a backend is relatively easy. Squeezing the most out of it requires A LOT of effort.
I haven't checked the status of LLVM and GCC for the 68k recently, but it'd be good to take a look at it.
cdimauro (Elite Member; Joined: 29-Oct-2012; Posts: 4278; From: Germany)
Re: Commodore > Motorola - Posted on 8-Apr-2025 5:07:17 [ #86 ]
@Lou
Quote:
Lou wrote: @cdimauro
Quote:
Another DUMB idea from Motorola, which had missed no opportunity to show how it doesn't care, at all, about backward-compatibility.
|
Well - retard - |
Oh, you immediately started with gratuitous personal offences: a clear sign that it's impossible for you to have a discussion, and that you prefer to resort to them, to satisfy your devastated, violated ego. Quote:
you're finally seeing the light... |
I've known the situation for around 40 years.
Guess what: I've been studying architectures for a very long time. Something that doesn't apply to you, of course. Quote:
The move to Motorola was the initial dumb move by the original Amiga team. |
Ah, then please tell me: what were the equivalent alternatives when the Amiga project was started (1982)? Quote:
Not moving off Motorola ASAP was even dumber by Commodore. |
Ah, then please tell me: at what point should Commodore have moved off Motorola, and what were the equivalent alternatives for the Amiga? Quote:
How many Amiga engineers have you thought about blowing today? |
None: I've had good words for all of them for a very long time.
But I can start with the 65xx engineers: I still have things to say about that super crappy architecture, in all its variants.
Hammer (Elite Member; Joined: 9-Mar-2003; Posts: 6320; From: Australia)
Re: Commodore > Motorola - Posted on 8-Apr-2025 6:35:39 [ #87 ]
@Lou
Quote:
Lou wrote:
Well - retard - you're finally seeing the light...
The move to Motorola was the initial dumb move by the original Amiga team.
Not moving off Motorola ASAP was even dumber by Commodore.
How many Amiga engineers have you thought about blowing today?
|
In the early 1980s, the 68000's forward-looking 32-bit programming model was the correct path for 32-bit desktop computers. Intel's 386 backward-compatibility record wasn't established when the Apple Lisa and Amiga Lorraine were at their initial design stage.
The 68000 wasn't retarded like the 16-bit Z8000 with its shared data and address I/O bus.
1. Commodore management failed to exploit the US$100-priced 68EC040-25, and that's on Commodore. The US$100-priced 68EC040-25 is in AMD's 386DX-40 price range.
Commodore could have had a kick-ass Amiga with a 68EC040-25 at 386DX-40 PC prices.
2. Commodore management fu_ked up the A1200's mass production by burdening it with the old debts from the 1-million-unit A600 run.
For a 400,000-unit A1200 production scale, each A1200 unit sold had US$50 allocated to pay the old A600-related debt. The A600's 1-million-unit production run couldn't pay for itself, and this is on Commodore management.
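(A quick check on the implied total, assuming those figures: 400,000 units x US$50 = US$20 million of A600 debt carried by the A1200 production run.)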
This is the major reason why the A1200's compute power was bare bones compared to the Atari Falcon.
The Atari Falcon's 68030 (with MMU) on a 16-bit bus wasn't good PR for the "32-bit generation" change, either.
3. Since the original Amiga engineers were also the designers of the multimedia chipset for the MOS/CSG 6507-based Atari 2600, the main reason for the 68000 migration was the Jack Tramiel-era CSG's weak focus on CPU R&D investment.
Other mainstream 65xx-based desktop computing platform vendors jumped ship away from MOS/CSG.
https://www.youtube.com/watch?v=P4VBqTViEx4 Steve Jobs's POV against technically noob salespeople like Jack Tramiel and CEOs/management from PepsiCo.
Henri Rubin couldn't even use a computer.
4. From 1991 to 1993, Motorola wasn't competitive in the US$30 to $40 CPU price range relative to the MIPS R3051 (for the PS1) and the R4000 (for the N64 offer).
Last edited by Hammer on 08-Apr-2025 at 06:50 AM. Last edited by Hammer on 08-Apr-2025 at 06:44 AM. Last edited by Hammer on 08-Apr-2025 at 06:41 AM.
_________________
Amiga 1200 (rev 1D1, KS 3.2, PiStorm32/RPi CM4/Emu68)
Amiga 500 (rev 6A, ECS, KS 3.2, PiStorm/RPi 4B/Emu68)
Ryzen 9 7950X, DDR5-6000 64 GB RAM, GeForce RTX 4080 16 GB
Hammer (Elite Member; Joined: 9-Mar-2003; Posts: 6320; From: Australia)
Re: Commodore > Motorola - Posted on 8-Apr-2025 7:44:01 [ #88 ]
@cdimauro Quote:
You can't handle the truth on tokenism. Visibility into the Amiga's 3rd-party CPU accelerator card sales statistics is poor. Refer to John Carmack's 1994 statement on the Amiga's install base issue.
Prove that the A1000's 1986-1987 accelerator vendors rivaled GVP's $50 million+ annual revenue.
Quote:
Oh, and wasn't that true? When do you plan to read AND understand what the people are writing (see on the previous comment the proof that I've brought).
|
From 1983 to 1987, mainstream 68K platform vendors were recovering from a ground-zero situation.
Quote:
Do you understand that there were TWO different Kickstart 1.2, one for the A1000 and one for the A500/2000?
|
Do you understand that your "easy to program" point is trainwrecked by Kickstart 1.2 only being available from Sep 1986, while your cited 3D software's 6888x FPU support came in 1988?
You can't handle the fact that the PC's AutoCAD and Lotus 1-2-3 2.0+ were the larger sales drivers for the x87 when compared to the 6888x?
Quote:
https://youtu.be/v_YozYt8l-g?t=9 The chosen one joins the dark side.
Mac is alive. Enough said.
#Metoo R&D is not enough to displace the establishment.
Quote:
It's relevant, TURD!
Quote:
Irrelevant + Red Herring.
As usual, with you: hopeless...
|
Remind me when the solo FPU chip is useful without a display and platform.
It's relevant, TURD!
Quote:
Right, and it was about to go bankrupt and was saved by... rolling drum... Microsoft / Bill Gates |
https://slashdot.org/story/24/10/20/004255/chip-designers-recall-the-big-amd-intel-battle-over-x86-64-support
Park also shared a post from Nicholas Wilt (NVIDIA CUDA designer who earlier did GPU computing work at Microsoft and built the prototype for Windows Desktop Manager):
I have an x86-64 story of my own. I pressed a friend at AMD to develop an alternative to Itanium. "For all the talk about Wintel," I told him, "these companies bear no love for one another. If you guys developed a 64-bit extension of x86, Microsoft would support it...."
Interesting coda: When it became clear that x86-64 was beating Itanium in the market, Intel reportedly petitioned Microsoft to change the architecture, and Microsoft told Intel to pound sand.
The same Microsoft told Intel to pound sand after the Itanium debacle. 
Bill Gates stepped down as Microsoft CEO in the year 2000; hence no privilege protection for Intel.
Steve Jobs' view on Microsoft's support for the Mac. https://www.cnbc.com/2017/08/29/steve-jobs-and-bill-gates-what-happened-when-microsoft-saved-apple.html
“Apple was in very serious trouble,” said Jobs. “And what was really clear was that if the game was a zero-sum game where for Apple to win, Microsoft had to lose, then Apple was going to lose.
“There were too many people at Apple and in the Apple ecosystem playing (that) game,” he explained. “And it was clear that you didn’t have to play that game, because Apple wasn’t going to beat Microsoft.
“Apple didn’t have to beat Microsoft. Apple had to remember who Apple was because they’d forgotten who Apple was.”
To stay alive, Jobs had to step outside of the competitive mindset.
“To me, it was pretty essential to break that paradigm,” says Jobs. “And it was also important that, you know, Microsoft was the biggest software developer outside of Apple developing for the Mac. So it was just crazy what was happening at that time. And Apple was very weak and so I called Bill up and we tried to patch things up.”
The two founders did just that, though their partnership met with resistance. When Jobs announced the $150 million investment at the Macworld Boston conference in 1997, the audience booed Gates’ appearance via satellite.
For Microsoft, the investment meant propping up one of its greatest competitors, but it was also a new business opportunity.
“That’s worked out very well,” says Gates at the 2007 conference. “In fact, every couple years or so, there’s been something new that we’ve been able to do on the Mac and it’s been a great business for us.”
Also, as part of the deal, Apple agreed to drop a lawsuit accusing Microsoft of copying its operating system.
Quote:
Again free, personal offences from a BOT which is unable to understand the context of discussion.
|
Your blaming of Commodore engineers is wrong and misguided. Your argument lacks the context of first-hand accounts.
_________________
Amiga 1200 (rev 1D1, KS 3.2, PiStorm32/RPi CM4/Emu68)
Amiga 500 (rev 6A, ECS, KS 3.2, PiStorm/RPi 4B/Emu68)
Ryzen 9 7950X, DDR5-6000 64 GB RAM, GeForce RTX 4080 16 GB
matthey (Elite Member; Joined: 14-Mar-2007; Posts: 2598; From: Kansas)
Re: Commodore > Motorola - Posted on 8-Apr-2025 20:40:24 [ #89 ]
Hammer Quote:
68000 wasn't retard like 16bit Z8000's shared data and address I/O bus.
...
4. For 1991 to 1993, Motorola wasn't competitive in the US$30 to $40 CPU price relative to MIPS R3051 (for PS1) and R4000 (for N64 offer).
|
You complain about the high price of the 68k compared to the cheaper competition that used multiplexed busses.
https://stuff.mit.edu/afs/sipb/contrib/doc/specs/ic/cpu/mips/r3051.pdf Quote:
The R3051 bus interface utilizes a 32-bit address and data bus multiplexed onto a single set of pins. The bus interface unit also provides an ALE (Address Latch Enable) output signal to de-multiplex the A/D bus, and simple handshake signals to process CPU read and write requests. In addition to the read and write interface, the R3051 incorporates a DMA arbiter, to allow an external master to control the external bus.
|
High-Performance Internal Product Portfolio Overview Issue 10 Fourth Quarter, 1995 https://www.datasheetarchive.com/datasheet/85f6800abd04bfea?type=N Quote:
IDT
Competition Device: 3051/2
Motorola Solution: EC040
Competitor's Advantages:
o R3000 core
o Price aggressive
Competitor's Disadvantages:
o Poor DRAM performance
o Multiplexed bus
o Inferior development tools
|
Consoles and embedded devices often saved chip pins which were more expensive back then.
Cheapest option: multiplexed 32-bit busses (baseline, requires multiplex select pins)
Mid-range option: non-multiplexed 32-bit busses (+32 pins, minus multiplex select pins)
High-end option: non-multiplexed 64-bit data bus (+64 pins, minus multiplex select pins)
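Rough pin arithmetic for the 32-bit case, ignoring power, ground and other control pins: a multiplexed A/D bus needs 32 shared pins plus ALE, so roughly 33 pins, versus 64 pins for separate 32-bit address and data busses; multiplexing saves about 31 package pins.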
MIPS/RISC has more memory traffic and a memory bottleneck due to lack of code density to begin with and multiplexing makes the bottleneck worse. Learn and be consistent. And please do not spam the thread in response. It will not help your case.
Last edited by matthey on 08-Apr-2025 at 09:45 PM.
coder76 (New Member; Joined: 20-Mar-2025; Posts: 9; From: Finland)
Re: Commodore > Motorola - Posted on 8-Apr-2025 21:26:22 [ #90 ]
@matthey Quote:
GCC 3.4 had a major decline in compiled 68k code quality. GCC 3.x started the bloat but it was better in some ways and worse in others while the newer C standard support was useful. VBCC is a much friendlier for the Amiga and more modern compiler (mostly C99 compatible) but it is slower and uses more memory even though the executables are reasonably sized. The integer compiled code is less optimized due to the backend even though the support code is good and floating point support and performance are better than GCC. It is a cross compiler but there is no C++ either. Amiga compiler support peaked and there will likely be less support with virtual Amigas taking over and no need to optimize for emulation anymore. Many people think the ARM emulation hardware is great but it is killing Amiga development. The really nice thing about the 68k Amiga is the small footprint standard system which would be awesome for embedded use if we had real hardware. The popularity of RPi hardware has shown that many embedded developers prefer more standard embedded hardware which results in better support and often better documentation. The BA2 ISA is cool but required reverse engineering to get basic info on the ISA and there is no mainstream compiler support for benchmark comparisons. Too many hardware and OS configurations are a pain even for embedded use. That is why ARM moved from a la cart AArch32 cores to more standard AArch64 cores. The 68k Amiga used to be a standard system including hardware and the OS before all the division. It is still the base system but dividing and being conquered is the modern Amiga way.
|
VBCC perhaps needs some more development. I remember using it over 20 years ago, and back then it sometimes crashed when compiling code; it is likely more usable now, but I haven't tried the latest versions. VBCC was good for mixing assembly with C; even jumping from asm code back to C code was possible. Gcc was a bit awkward with that.
You mean PiStorm accelerators are a bad idea? You still get some new stuff to work with them, like video decoding, which requires a lot more than a 100 MHz 68060 can give. Even the AC68080 struggles with video decoding; you'll get something like 25 FPS at 640x480. And for graphics, raytracing programs now have sufficient CPU power. It's not so bad for games either: Quake 2 now also runs smoothly on an Amiga, along with other 3D ports. You now also have real software being developed that needs the power of a new and faster m68k CPU.
matthey (Elite Member; Joined: 14-Mar-2007; Posts: 2598; From: Kansas)
Re: Commodore > Motorola - Posted on 8-Apr-2025 22:33:08 [ #91 ]
coder76 Quote:
VBCC needs perhaps some more development, I remember using it over 20 years ago, and then it sometimes crashed when compiling code, now it is likely more usable, but havent tried the latest versions. VBCC was good for mixing assembly with C, even jumping from asm code back to C code was possible. Gcc was a bit awkward with that.
|
VBCC has improved slowly but steadily over the last 20 years. There have been many bug fixes. Change logs and the compiler are available at the following site.
http://sun.hasenbraten.de/vbcc/
It uses VASM, which is the best 68k assembler with the most peephole optimizations, but with emulation there is no need for an optimizing assembler or compiler anymore.
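For anyone unfamiliar with the term, a peephole optimization rewrites short instruction sequences into smaller or faster equivalents. A toy sketch in Python (my own illustration, not how VASM is actually implemented), using the classic 68k rewrite of move.l #imm,Dn (6 bytes) into moveq (2 bytes) when the immediate fits in a signed byte:

    import re

    # Match "move.l #imm,dN" with a decimal immediate and a data register.
    MOVE_L_IMM = re.compile(r"move\.l\s+#(-?\d+)\s*,\s*(d[0-7])", re.IGNORECASE)

    def peephole(lines):
        out = []
        for line in lines:
            m = MOVE_L_IMM.match(line.strip())
            if m and -128 <= int(m.group(1)) <= 127:
                # moveq encodes the immediate in the opcode word: 2 bytes vs 6.
                out.append(f"moveq #{m.group(1)},{m.group(2)}")
            else:
                out.append(line)
        return out

    print(peephole(["move.l #0,d0", "move.l #70000,d1"]))
    # ['moveq #0,d0', 'move.l #70000,d1']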
coder76 Quote:
You mean Pistorm accelerators are a bad idea? You still get some new stuff to work with them, like video decoding, which require a lot more than a 100 MHz 68060 can give. Even AC68080 struggles with video decoding, you'll get something like 25 FPS@640x480. And graphics, raytracing programs now have sufficient CPU power. Its not so bad for games either, now Quake 2 also runs smoothly on an Amiga, and other 3D ports. You now also have real software developed that needs the power of a new and faster m68k CPU.
|
PiStorm and WinUAE provide high performance 68k Amigas, but there is no large resurgence of game development and less reason for developer compiler improvements. If more performance is wanted, then buy a higher end host system. The problem is that many people are using cheap ARM hardware, so that is the standard, kind of like how a 68000&OCS@7MHz and a 68020&AGA@14MHz were standards even though some people had upgraded CPUs and memory. It is impossible to optimize for emulation, as performance varies based on the host hardware. Performance is still not good enough on ARM hardware to offer competitive value and attract even ex-Amiga fans. PiStorm requires the old, failing and expensive hardware. The Amiga market is divided, with no standardization, no competitive hardware and no leadership, yet the businesses fighting over the dead Amiga corpse waste resources on never ending lawsuits instead of investing in the competitive hardware needed for survival. There is no future with emulation. It is EOL, which businesses and developers recognize, as exhibited by their lack of interest in Amiga emulation. High end hardware is not needed, as exhibited by the low end RPi hardware that benefits from standardization like the 68k Amiga used to, even for the embedded market.
Last edited by matthey on 08-Apr-2025 at 10:35 PM.
cdimauro (Elite Member; Joined: 29-Oct-2012; Posts: 4278; From: Germany)
Re: Commodore > Motorola - Posted on 9-Apr-2025 5:01:57 [ #92 ]
@matthey
Quote:
matthey wrote:
RISC-V compressed VLE lacks the code density to compete for ultra low power MCUs and SoCs. Many of the deeply embedded RISC-V cores have chosen the lower area 32-bit fixed length encoding rather than the VLE while larger Cortex-A like cores have adopted the VLE as the de facto standard, perhaps because the 32-bit fixed length encoding code density is as bad as traditional RISC ISAs like Alpha, PA-RISC, MIPS, SPARC, PPC, original ARM, etc. that went extinct because of the lack of code density. Code density is more important for low power embedded use and anyone using less than Thumb(-2) code density are going to be questioned why they did not use an ARM Cortex-M core. ARM Thumb was created from a SuperH license from Hitachi copied from the 68k for the purpose of obtaining embedded leading code density. The three leading embedded 32-bit ISAs were the 68k, SuperH and Thumb(-2) which all had good code density and very good in the case of the 68k and Thumb(-2). RISC-V compressed VLE is behind all of them in code density.
Linux_Logo executable hand optimized for size

ISA        | instructions | code size | executable size
RISCV32IMC | 170          | 504       | 961
SuperH-3   | 229          | 458       | 994
Thumb      | 210          | 420       | 920
Thumb-2    | 189          | 402       | 908
68k        | 156          | 394       | 854
|
I read that you have reached 854 bytes now: could you please share the updated source? The last one I had counted 860 bytes. Quote:
https://docs.google.com/spreadsheets/u/0/d/e/2PACX-1vTyfDPXIN6i4thorNXak5hlP0FQpqpZFk2sgauXgYZPdtJX7FvVgabfbpCtHTkp5Yo9ai6MhiQqhgyG/pubhtml?gid=909588979&single=true&pli=1
The embedded standard Thumb-2 uses 11% more instructions but has 20% better code density than RISCV32IMC in this benchmark. The 68k uses 8% fewer instructions and has 22% better code density than RISCVIMC. Even SuperH has 9% better code than RISCV32IMC but has 35% more instructions and a larger executable size from large immediates being located in the program data due to the restrictive 16-bit fixed length encoding. |
Indeed, but what impresses me is the number of executed instructions: RISCV32IMC is second only to the 68k, which is a great result.
However, AArch64 does even better: 148 instructions is incredible! At the expense of code density, though: 592 bytes is A LOT (even x86_64 does better, with its poor ISA). It's also very good on the branch and memory access instructions. I think it deserves some study to see which instructions are producing those very nice effects. Quote:
ARM Thumb copied and was an improvement on SuperH while Thumb-2 added a 16-bit and 32-bit VLE for another nice improvement but still has inferior performance metrics compared to the 68k. A VLE with 2 sizes was an improvement while ColdFire has 16-bit, 32-bit and 48-bit sizes for 3 sizes but Motorola thought this was too much for embedded use and created the 16-bit fixed length encoding MCore which was a short lived failure. Some people thought ColdFire would have been better by supporting a 64-bit size for a VLE with 4 sizes like Gunnar suggested and I agree with. With the de facto code density standard of Thumb-2, Cast created the BA2 ISA with 16-bit, 24-bit, 32-bit and 48-bit VLE sizes. Four sizes was too much for ColdFire but NXP licensed a BA2 ISA core for the low power NXP JN5168 MCU.
https://alephsecurity.com/2019/07/09/xiaomi-zigbee-2/ https://www.nxp.com/products/wireless-connectivity/zigbee/zigbee-and-ieee802-15-4-wireless-microcontroller-with-256-kb-flash-32-kb-ram:JN5168 Quote:
The JN5168 is an ultra low power, high performance wireless microcontroller supporting ZigBee and IEEE802.15.4 networking stacks to facilitate the development of Smart Energy, Home Automation, Smart Lighting and wireless sensor applications. It features an enhanced 32-bit RISC processor with 256 kB embedded Flash, 32 kB RAM and 4 kB EEPROM memory, offering high coding efficiency through variable width instructions, a multi-stage instruction pipeline and low power operation with programmable clock speeds. It also includes a 2.4 GHz IEEE802.15.4 compliant transceiver and a comprehensive mix of analogue and digital peripherals. The very low operating current of 15 mA, with a 0.6 µA sleep timer mode, gives excellent battery life allowing operation direct from a coin cell.
|
|
What's interesting to notice is that even nowadays there's not that much RAM available on embedded systems.
This reinforces my idea that a (proper) 16-bit architecture could be very good at covering such market segments, thanks to much smaller cores. Quote:
Nothing as low of power as the Arox ax-e0 but maybe older silicon and minimizing area instead. We can compare transistors or gates as more commonly used for embedded cores today to get a general idea on area for competitors and VLE cores at least.
core       | pipeline  | min gates | ISA
Cortex-M0  | 3-stage   | 12k       | Thumb(-2)
Cortex-M0+ | 2-stage   | 15k?      | Thumb(-2) (lower power uses more transistors despite shallower pipeline)
ColdFireV1 | 2-stage   | 19k       | ColdFire
ColdFireV2 | 4-stage   | 40k       | ColdFire
BA20       | 0-stage   | 8k        | BA2 (low area)
BA21       | 2-stage   | 10k       | BA2 (low power deeply embedded)
BA22DE     | 4/5-stage | 15k       | BA2 (deeply embedded)
BA22CE     | 5-stage   | 30k       | BA2 (cache enabled)
BA25       | 5-stage   | 55k       | BA2 (application)
BA51       | 2-stage   | 16k       | RISC-V (ultra low power)
BA53       | 5-stage   | 30k       | RISC-V (low power deeply embedded)
https://www.cast-inc.com/processors/32-bit |
That's a great table, thanks! Out of curiosity: where did you get it?
BA20 is impressive, but ColdFire V1 looks competitive with the Cortex-M0+, which is promising. Quote:
The Cast core with a BA2 ISA VLE supporting 4 sizes is the smallest. The Cast RISC-V cores are nothing special but maybe not as well optimized? Enabling more extensions like compression may make them larger in area too. ColdFire cores are maybe a little larger area but have more support included rather than optional. There is room for improvement from ARM cores for lower area and code density. Cast claims up to 20% better code density than Thumb-2 and 7% to 20% better code density is claimed in some articles. Performance appears to be good but it is an 8-bit VLE which has less efficient alignment and decoding than a 16-bit VLE like the 68k and Thumb-2. The 4 variable sizes and large BA2 ISA immediate support should reduce the number of instructions executed for improved performance. |
I think BA2 has set the bar for code density: with its mixture of 2- and 3-byte instructions, there's very little chance that some other architecture could do better. It also has much space available for being extended (SIMD, vector, and other useful instructions), albeit the ISA is already very well balanced (it has a lot of useful instructions). Kudos to the architects!
I don't get why Cast has abandoned it in favour of the WAY WEAKER RISC-V: it reminds me of Motorola with its awesome 68k... Quote:
coder76 Quote:
Actually also had the geek gadget ADE-environment installed with gcc and other unix tools on my A1200. Now I think these wouldn't work well anymore, if most recent versions would be ported. I remember how slow and bloated gcc started becoming after release 3, and of course not much use for Amiga, as m68k code generation did not become better.
|
GCC 3.4 had a major decline in compiled 68k code quality. GCC 3.x started the bloat but it was better in some ways and worse in others while the newer C standard support was useful. VBCC is a much friendlier for the Amiga and more modern compiler (mostly C99 compatible) but it is slower and uses more memory even though the executables are reasonably sized. The integer compiled code is less optimized due to the backend even though the support code is good and floating point support and performance are better than GCC. It is a cross compiler but there is no C++ either. Amiga compiler support peaked and there will likely be less support with virtual Amigas taking over and no need to optimize for emulation anymore. |
But Bebbo did a great job with his GCC fork for the 68k. GCC is still supported, and he should upstream his changes, so that it would be easy for anyone to take advantage of the much better backend. I don't understand why he's not doing it, and instead keeping the burden of maintaining this fork.
VBCC isn't really a viable solution for any architecture that aims to be used in the market.
LLVM is my hope, because it's a complete and modern infrastructure (with a recent 68k backend), but it requires TONS of resources (cores, memory, fast SSD->NVMe), unfortunately. Quote:
Many people think the ARM emulation hardware is great but it is killing Amiga development. The really nice thing about the 68k Amiga is the small footprint standard system which would be awesome for embedded use if we had real hardware. The popularity of RPi hardware has shown that many embedded developers prefer more standard embedded hardware which results in better support and often better documentation. The BA2 ISA is cool but required reverse engineering to get basic info on the ISA and there is no mainstream compiler support for benchmark comparisons. |
Indeed, but the forecast looks very good for BA2: it can easily reach around a 2.5-byte average instruction length on compiled code (around 3.0 for the 68k).
But the number of executed instructions and memory accesses is also very important, and could make the difference. The 68k has some chances here. Quote:
Too many hardware and OS configurations are a pain even for embedded use. That is why ARM moved from a la cart AArch32 cores to more standard AArch64 cores. The 68k Amiga used to be a standard system including hardware and the OS before all the division. It is still the base system but dividing and being conquered is the modern Amiga way. |
At least the 68k could be revived, because it has very good cards to play. But an investor is needed...
codis (Member; Joined: 23-Mar-2025; Posts: 16; From: Austria)
Re: Commodore > Motorola - Posted on 9-Apr-2025 10:53:39 [ #93 ]
@matthey
On that note, I don't know what niche Arox is targeting with their new controller. Performance is of secondary importance in some applications. Code density (= application size = flash size = chip price) might be as well, although flash size / price mattered in all projects I was involved in... Battery-operated devices with relatively sparse operational periods, perhaps. There is definitely a market for that.
Quote:
GCC 3.4 had a major decline in compiled 68k code quality. GCC 3.x started the bloat but it was better in some ways and worse in others while the newer C standard support was useful. VBCC is a much friendlier for the Amiga and more modern compiler (mostly C99 compatible) but it is slower and uses more memory even though the executables are reasonably sized. The integer compiled code is less optimized due to the backend even though the support code is good and floating point support and performance are better than GCC.
|
I had been dabbling a bit in signal processing at that time. I once compiled an FFT test function for FPU usage (double) with gcc, StormC and MCPP4, and compared the generated *.s files, which were a bit surprising, to say the least. The "native" toolchains used only two of the eight FPU registers available, while the gcc code used all of them, consequently with about half the number of instructions required. At least one use case where the Amiga port profited from the Gnu project's decent optimisation approach. I only remember the library glue code handling was a bit awkward. But to be honest, I haven't done any coding for the Amiga in 30 years...
matthey (Elite Member; Joined: 14-Mar-2007; Posts: 2598; From: Kansas)
Re: Commodore > Motorola - Posted on 9-Apr-2025 20:56:16 [ #94 ]
cdimauro Quote:
I read that you have reached 854 bytes now: could you please share the updated source? The last one which I had was counting 860 bytes.
|
The source is on EAB with the last optimizations written by ross who was also unhappy that the last submission failed to receive an update on the website.
https://eab.abime.net/showthread.php?s=ad389c667595017181a4e074fbb9854b&t=85855&page=9
Vince Weaver communicated that he accepted the submission too.
cdimauro Quote:
Indeed, but what impresses me is the number of executed instructions: RISCV32IMC is second only to the 68k, which is a great result.
However, even better is doing AArch64: 148 instructions is incredible! At the expense of code density: 592 bytes is A LOT (even x86_64 is doing better, with its poor ISA). It's also very good at the branch and memory access instructions. I think that it deserves some study to see which instructions are producing those very nice effects.
|
It is the number of static instructions in the code rather than the number of dynamically executed (traced) instructions, but there should be a strong correlation. RISCV32IMC and RISCV64IMC are tied for 4th out of 13 architectures for instruction count, which is a pretty good showing for RISC-V in this important performance metric. It does beat all the good code density compressed ISAs except the 68k. An ISA that is VLE from inception has advantages. Most RISC embedded ISAs, with limited immediate and displacement encoding bits, cause a significant increase in the number of instructions.
Profile Guided Selection of ARM and Thumb Instructions https://www2.cs.arizona.edu/~arvind/papers/lctes02.pdf Quote:
While the use of Thumb instructions generally gives smaller code size and lower instruction cache energy, there are certain problems with using the Thumb mode. In many cases the reductions in code size are obtained at the expense of a significant increase in the number of instructions executed by the program. In our experiments this increase ranged from 9% to 41%. In fact in case of one of the benchmarks, the increase in dynamic instruction count was so high that instead of obtaining reductions in cache energy used, we observed an increase in the total amount of energy expended by the instruction cache.
|
Efficient Use of Invisible Registers in Thumb Code https://www.cs.ucr.edu/~gupta/research/Publications/Comp/micro05.pdf Quote:
Thumb code size was 29.8% to 32.5% smaller than the corresponding ARM code size. However, it was also observed that there was an increase in instruction counts for Thumb code which was typically around 30%.
|
This level of increased instructions handicapped performance on all but the lowest memory bandwidth hardware. The original ARM ISA instruction counts are about the same as the RISC-V compressed ISAs', but the RISC-V compressed ISAs have better code density. The holy grail for an ISA is to have few instructions and good code density, like the 68k. Surprisingly, the 32-bit fixed length encoding ISAs like MIPS, SPARC and PPC were worse at instruction counts than the original ARM, likely due to the extra encoding bits for 32 GP registers instead of 16 GP registers. RISC-V compressed encodes 32 GP registers and has similar instruction counts to the original ARM with 15 GP registers, while having ~40% better code density. AArch64 is very low (the best) at instruction counts, but it is a much larger and more complex ISA, and its code density is not as good as RISC-V compressed. The best ISAs are the 68k, Thumb-2, AArch64 and RISC-V, depending on which traits are most important for the application. RISC-V does not lead in any performance metric in the comparison, but it is simpler than the other ISAs, has open hardware and has encoding space for customization, which could be enough to survive. The 68k is older and more primitive in some ways than all the other ISAs in the comparison, but it is still very good and has considerable room to improve.
cdimauro Quote:
What's interesting to notice is that even nowadays there's not that much RAM available on embedded systems.
This enforces my idea that a (proper) 16-bit architecture could be very good at covering such market segments, thanks to much smaller cores.
|
There is a need for smaller area and lower power CPU cores in the embedded market. As I recall, many 8-bit and 16-bit CPU cores are still used in the embedded market, although 16-bit cores have become less popular than 8-bit and 32-bit cores. A 32-bit ISA can scale up much further than a 16-bit ISA, so 32-bit cores are more popular, while 8-bit ISAs are used for the smallest area and power applications. I was surprised how popular 64-bit ISAs have become in the embedded market, considering how rarely the extra addressing space is used and how often the extra area and power are wasted.
cdimauro Quote:
That's a great table, thanks! Out of curiosity: where did you get it?
BA20 is impressive, but ColdFire V1 looks competitive with the Cortex-M0+, which is promising.
|
I compiled the table myself from available data. The info for the Cast cores is on their site, with the link given. ARM has given the data for the Cortex-M0.
https://www.southampton.ac.uk/~bim/notes/cad/reference/ARMSoC/P2/Cortex_M0_And_DesignStart.pdf https://community.arm.com/support-forums/f/architectures-and-processors-forum/5176/arm-cortex-m0-details
ColdFire data is on the Silvaco website where they license the cores.
https://silvaco.com/design-ip/embedded-processors/ https://silvaco.com/wp-content/uploads/product/ip/pdf/70008_ColdFireV1Core_Brief.pdf https://silvaco.com/wp-content/uploads/product/ip/pdf/70019_ColdFireV2Core_Brief.pdf
cdimauro Quote:
I think that BA2 set the stone regarding code density: with its mixture of 2 and 3 bytes instructions there's very little chance that some other architecture could do better. It has also much space available for being extended (SIMD, vector, and other useful instructions), albeit the ISA is already very well balanced (it has a lot of useful instructions). Kudos to the architects!
I don't get why Cast has abandoned it in favour of the WAY WEAKER RISC-V: it reminds me Motorola with its awesome 68k...
|
I agree that BA2 is difficult to beat in code density. I wish compiler support was mainstream so we could test the claims, though. I have seen other ISA code density claims that were not true, or that compared the promoted ISA's best cases against the competition's worst cases. Compiler options make a big difference in comparisons and may conveniently not be optimal for the competition; some of the RISC-V compressed code density claims are a good example. I believe the BA2 ISA has very good code density, but I would be surprised if it was 10% better than Thumb-2 or the 68k on average. I believe 68k code density could be improved another 5% on average, so these 3 ISAs are in the same ballpark for code density. I do expect instruction counts to be lower on BA2 than Thumb-2, and perhaps closer to the CISC 68k, but Thumb-2 is simpler and likely smaller in area than both. It needs more investigation and is not open or standard enough to judge.
I would not jump to the conclusion that Cast is replacing their BA2 ISA cores with RISC-V cores. There are advantages to RISC-V, like being more open, more standard, better compiler support, more configurable, etc. The RISC-V cores may complement and diversify their core selection. They may have licensed the RISC-V cores rather than developing them themselves.
cdimauro Quote:
But Bebbo did a great job with his GCC fork for 68k. GCC is still supported, and he should upstream his changes, so that it would be easy for anyone to take advantage of the much better backend. I don't understand why he's not doing it, keeping the burden of maintaining this fork.
VBCC isn't really a viable solution for any architecture which pretends to be used on the market.
LLVM is my hope, because it's a complete and modern infrastructure (with a recent 68k banckend), but it requires TONS of resources (cores, memory, fast SSD->NVMe), unfortunately.
|
Easier said than done with Bebbo upstreaming his GCC changes. GCC development and developers have issues. I doubt his changes will be upstreamed without new 68k hardware, as the developers likely do not want to be bothered to make changes and create more work for a dead platform. VBCC will never be a GCC/LLVM compatible compiler for porting, but it is an important and portable retro and niche cross compiler. It would be nice if it was improved to be a better lightweight compiler for embedded use, as originally intended. The LLVM compiler code is more maintainable than GCC's, but it is a resource hog, which limits its use. There is no perfect compiler.
cdimauro Quote:
Indeed, but the forecast looks very good for BA2: it can easily reach around 2.5 bytes average length for the instructions on compiled code (around 3.0 for 68k).
But the number of executed instructions and memory accesses is also very important, and could make the difference. 68k has some chances here.
|
Motorola said 68k code had "a measured average instruction length of less than 3 bytes". Compiled code varies depending on the compiler, algorithms, FPU instructions, etc. Integer-only code can be closer to a 2.5 byte average instruction length. Vince Weaver's Linux_Logo 68k code is only 2.53 bytes average instruction length. ColdFire instructions and compressed immediates would improve the compiled average instruction length too.
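(The Linux_Logo figure follows directly from the table earlier in the thread: 394 bytes / 156 instructions ≈ 2.53 bytes per instruction.)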
codis Quote:
On that note, I don't know what niche Arox is targeting with their new controller. Performance is of secondary importance in some applications. Code density ( = application size = flash size = chip price) might be as well. Although flash size / price mattered in all projects I was involved in ... Battery-operated devices with relatively sparse operational periods perhaps. There is definitely a market for that.
|
From the Arox website, it definitely looks like the AX-E0 core is targeting low power. Area is likely a secondary concern, as active transistors use more power, and performance is likely the least important part of PPA (power, performance, area). They claim good power efficiency (performance/Watt), so they designed the core for some performance and not just sleeping most of the time.
codis Quote:
I had been dabbling a bit in signal processing at that time. And I once compiled a FFT test function for FPU usage (double) on gcc, StormC and MCPP4, and compared the generated *.s files - which were a bit surprising to say the least. The "native" toolchains used only two of the eight FPU registers available, while the gcc code used all of them. Consequently with about half the number of instructions required. At least one use case where the Amiga port profited from the Gnu project's decent optimisation approach. I only remember the library glue code handling has a bit awkward. But to be honest, I haven't done any coding for the Amiga in 30 years ...
|
Without a pipelined 68k FPU, the fewest non-trapped FPU instructions and the fewest FP registers used often win on performance. There is a gain from intermixing integer and FPU instructions, but few compilers have a 68k specific instruction scheduler to take advantage of it. Not even VBCC has one, yet it generates code with more than double the FPU performance in the ByteMark benchmark on a 68060@50MHz compared to GCC 3.3 generated code on a 68060@75MHz.
https://amigaworld.net/modules/newbb/viewtopic.php?topic_id=44391&forum=25#847418
All the gain was in the FOURIER FPU benchmark, but it is not a fast Fourier transform (FFT), unfortunately. GCC had some major issues here.
cdimauro (Elite Member; Joined: 29-Oct-2012; Posts: 4278; From: Germany)
Re: Commodore > Motorola - Posted on 10-Apr-2025 5:11:32 [ #95 ]
@matthey
Quote:
Thanks. I have a slightly different version, which has the same instructions, but a few of them use the .l suffix instead of the .w used in the above version. So, the size should be exactly the same. Quote:
Vince Weaver communicated that he accepted the submission too. |
He probably stopped working on this project. It would be good to have it up to date, but 6 bytes of difference doesn't change the top list, in the end. Quote:
cdimauro Quote:
Indeed, but what impresses me is the number of executed instructions: RISCV32IMC is second only to the 68k, which is a great result.
However, even better is doing AArch64: 148 instructions is incredible! At the expense of code density: 592 bytes is A LOT (even x86_64 is doing better, with its poor ISA). It's also very good at the branch and memory access instructions. I think that it deserves some study to see which instructions are producing those very nice effects.
|
It is the number of static instructions in the code rather than trace executed instructions but there should be a strong correlation. |
Yes, in general, but it's probably not the case with some architectures which have very long prologue/epilogue code (e.g. PowerPC). Quote:
RISCV32IMC and RISCV64IMC are tied for 4th out of 13 architectures for instruction count which is a pretty good showing for RISC-V in this important performance metric. It does beat all the good code density compressed ISAs except the 68k. A VLE ISA from inception has advantages. Most RISC embedded ISAs with limited immediate and displacement encoding bits caused a significant increase in the number of instructions. |
The strange thing is that RISC-V also has very limited immediates (a concrete sketch follows the quoted papers below). Quote:
Profile Guided Selection of ARM and Thumb Instructions https://www2.cs.arizona.edu/~arvind/papers/lctes02.pdf Quote:
While the use of Thumb instructions generally gives smaller code size and lower instruction cache energy, there are certain problems with using the Thumb mode. In many cases the reductions in code size are obtained at the expense of a significant increase in the number of instructions executed by the program. In our experiments this increase ranged from 9% to 41%. In fact in case of one of the benchmarks, the increase in dynamic instruction count was so high that instead of obtaining reductions in cache energy used, we observed an increase in the total amount of energy expended by the instruction cache.
|
Efficient Use of Invisible Registers in Thumb Code https://www.cs.ucr.edu/~gupta/research/Publications/Comp/micro05.pdf Quote:
Thumb code size was 29.8% to 32.5% smaller than the corresponding ARM code size. However, it was also observed that there was an increase in instruction counts for Thumb code which was typically around 30%.
|
This level of increased instruction count handicapped performance on all but the lowest-memory-bandwidth hardware. The original ARM ISA's instruction counts are about the same as the RISC-V compressed ISAs', but the RISC-V compressed ISAs have better code density. The holy grail for an ISA is to have few instructions and good code density, like the 68k. Surprisingly, the 32-bit fixed-length-encoding ISAs like MIPS, SPARC and PPC were worse at instruction counts than the original ARM, likely due to the extra encoding bits for 32 GP registers instead of 16. RISC-V compressed encodes 32 GP registers and has instruction counts similar to the original ARM with its 15 GP registers, while having ~40% better code density. |
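To make the limited-immediates point concrete, a minimal hedged sketch (the mnemonics are hand-written illustrations of typical code generation, not compiler output):

    /* Materializing a 32-bit constant that does not fit in RISC-V's
       12-bit signed I-type immediate field. */
    long get_const(void)
    {
        /* RISC-V (RV32/RV64) needs an instruction pair:
               lui  a0, %hi(100000)      ; upper 20 bits
               addi a0, a0, %lo(100000)  ; lower 12 bits
           The 68k encodes the full constant inline after the opcode:
               move.l #100000,d0         ; 1 instruction, 6 bytes
           Repeated across real code, such splits are exactly how
           restricted immediates inflate static instruction counts. */
        return 100000;
    }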
Which means that those three ISAs were very poorly designed. Probably their instructions are too simple and don't reflect what's found in real code. Quote:
AArch64 is the lowest (best) at instruction counts, but it is a much larger and more complex ISA, and its code density is not as good as RISC-V compressed. The best ISAs are the 68k, Thumb-2, AArch64 and RISC-V, depending on which traits are most important for the application. RISC-V does not lead on any performance metric in the comparison, but it is simpler than the other ISAs and has open hardware and encoding space for customization, which could be enough to survive. The 68k is older and more primitive in some ways than all the other ISAs in the comparison, but it is still very good and has considerable room to improve. |
Indeed, it's the best overall / on average.
Although I think that a redesign of the opcodes could lead to a much better and more future-proof architecture. But binary compatibility is a "must have" in legacy markets... Quote:
cdimauro Quote:
What's interesting to note is that even nowadays there's not that much RAM available on embedded systems.
This reinforces my idea that a (proper) 16-bit architecture could be very good at covering such market segments, thanks to much smaller cores.
|
There is a need for smaller-area and lower-power CPU cores in the embedded market. As I recall, more 8-bit and 16-bit CPU cores are used in the embedded market, although 16-bit cores have become less popular than 8-bit and 32-bit cores. |
Which is strange, because they solve common problems much better than 8-bit cores (which need at least 16 bits for the PC and for memory references; see the sketch after the next quote). Quote:
A 32-bit ISA can scale up much further than a 16-bit ISA, so 32-bit cores are more popular, while 8-bit ISAs are used for the smallest-area and lowest-power applications. |
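As a hedged aside on the 8-bit parenthetical above, a minimal sketch (the 6502-style mnemonics in the comments are hand-written, for flavour only):

    /* Walking a buffer: one load+increment instruction per element on
       a 68000, several on an 8-bit core that must synthesize 16-bit
       pointer arithmetic. */
    unsigned char checksum(const unsigned char *p, unsigned len)
    {
        unsigned char sum = 0;
        while (len--) {
            /* 6502-style 8-bit core, the pointer increment alone:
                   inc ptr_lo
                   bne no_carry
                   inc ptr_hi
               no_carry:
               68000: load and post-increment in a single instruction:
                   move.b (a0)+,d1 */
            sum += *p++;
        }
        return sum;
    }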
Yes, in general, but that's because 8-, 16- and 32-bit architectures historically used completely different ISAs.
There are other ways for an ISA to scale up while keeping the same opcode structure. Quote:
I was surprised how popular 64-bit ISAs have become in the embedded market, considering how rarely the extra addressing space is used and how the extra area and power are often wasted. |
Indeed, and I don't know how that is possible. The only value in this case is tagged pointers, as Apple did with its ARM processors, but that's not enough to justify a 64-bit architecture in the embedded market. Quote:
Thanks! Quote:
cdimauro Quote:
I think that BA2 set the benchmark for code density: with its mixture of 2- and 3-byte instructions, there's very little chance that some other architecture could do better. It also has much space available for extensions (SIMD, vector, and other useful instructions), although the ISA is already very well balanced (it has a lot of useful instructions). Kudos to the architects!
I don't get why Cast has abandoned it in favour of the WAY WEAKER RISC-V: it reminds me of Motorola with its awesome 68k...
|
I agree that BA2 is difficult to beat in code density. I wish compiler support were mainstream so we could test the claims, though. I have seen other ISA code density claims that were not true, or that compared the promoted ISA's best cases against the competition's worst cases. Compiler options make a big difference in comparisons and may conveniently not be optimal for the competition. Some of the RISC-V compressed code density claims are a good example. I believe the BA2 ISA has very good code density, but I would be surprised if it were 10% better than Thumb-2 or the 68k on average. |
Probably a bit more, but it's there: unreachable by other architectures. Quote:
I believe the 68k code density could be improved another 5% on average, so these 3 ISAs are in the same ballpark for code density. I do expect instruction counts to be lower on BA2 than on Thumb-2, and perhaps closer to the CISC 68k, but Thumb-2 is simpler and likely smaller in area than both. It needs more investigation and is not open or standard enough to judge. |
Hmm. I see BA2 as much simpler than Thumb-2. The gate counts which you reported confirm the impression which I had. Quote:
cdimauro Quote:
But Bebbo did a great job with his GCC fork for the 68k. GCC is still supported, and he should upstream his changes so that anyone could easily take advantage of the much better backend. I don't understand why he's not doing it, keeping the burden of maintaining this fork instead.
VBCC isn't really a viable solution for any architecture that aims to be used in the market.
LLVM is my hope, because it's a complete and modern infrastructure (with a recent 68k backend), but it requires TONS of resources (cores, memory, fast SSD->NVMe), unfortunately.
|
Easier said than done with Bebbo upstreaming his GCC changes. GCC development and developers have issues. I doubt his changes will be upstreamed without new 68k hardware as the developers likely do not want to be bothered to make changes and create more work for a dead platform. |
Ah, OK. That can explain it. Typical behaviour found on such projects (Linux, GNU). Quote:
VBCC will never be a GCC/LLVM-compatible compiler for porting, but it is an important and portable retro and niche cross-compiler. It would be nice if it were improved to be a better lightweight compiler for embedded use, as originally intended. |
And not only that: it also has to be rewritten. As it is, the source code isn't good enough to be worked on. Quote:
The LLVM compiler code is more maintainable than GCC but it is a resource hog limiting its use. There is no perfect compiler. |
No, but fortunately we have plenty of resources available nowadays, for a few bucks.
LLVM is very immature for the 68k, generating very poor code, but I can see that many cases have patterns that should be easy to recognize in order to generate proper, optimized instructions.
It takes time, of course, but I think that it's time very well spent. Quote:
cdimauro Quote:
Indeed, but the forecast looks very good for BA2: it can easily reach around a 2.5-byte average instruction length on compiled code (around 3.0 for the 68k).
But the number of executed instructions and memory accesses is also very important, and could make the difference. The 68k has some chances here.
|
Motorola said 68k code had "a measured average instruction length of less than 3 bytes". Compiled code varies depending on the compiler, algorithms, FPU instructions, etc. Integer-only code can be closer to the 2.5-byte average instruction length. Vince Weaver's Linux_Logo 68k code averages only 2.53 bytes per instruction. ColdFire instructions and compressed immediates would improve the compiled average instruction length too. |
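For anyone wanting to reproduce such averages, a rough hedged sketch: pipe a disassembly through the small filter below. The toolchain triplet in the comment and objdump's usual "addr: hexbytes mnemonic" line format are assumptions, and the parser is illustrative rather than robust:

    /* avginsn.c - average instruction length from a disassembly.
       Example (assumed toolchain name):
           m68k-elf-objdump -d prog | ./avginsn */
    #include <stdio.h>
    #include <ctype.h>
    #include <string.h>

    int main(void)
    {
        char line[512];
        long insns = 0, bytes = 0;

        while (fgets(line, sizeof line, stdin)) {
            char *tab = strchr(line, '\t');
            if (!tab || !strchr(line, ':'))
                continue;               /* not a disassembly line */
            long hexdigits = 0;
            /* the raw opcode bytes sit between the first two tabs */
            for (char *p = tab + 1; *p && *p != '\t'; p++)
                if (isxdigit((unsigned char)*p))
                    hexdigits++;
            if (hexdigits == 0)
                continue;
            insns++;
            bytes += hexdigits / 2;     /* two hex digits per byte */
        }
        if (insns)
            printf("%ld insns, %ld bytes, %.2f bytes/insn\n",
                   insns, bytes, (double)bytes / insns);
        return 0;
    }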
Don't take Vince's code as a measure of code density: that's finely tuned code written with the sole purpose of getting the best code density.
Reality is made by high-level compilers, and it's very hard to reach such numbers. That's why I think that 3 bytes/instruction is a more realistic measure for the 68k (a bit more for x86). Quote:
codis Quote:
I had been dabbling a bit in signal processing at that time. And I once compiled an FFT test function for FPU usage (double) with gcc, StormC and MCPP4, and compared the generated *.s files - which were a bit surprising, to say the least. The "native" toolchains used only two of the eight FPU registers available, while the gcc code used all of them - and consequently needed about half the number of instructions. At least one use case where the Amiga port profited from the Gnu project's decent optimisation approach. I only remember the library glue code handling was a bit awkward. But to be honest, I haven't done any coding for the Amiga in 30 years ...
|
Without a pipelined 68k FPU, the code with the fewest non-trapped FPU instructions and the fewest FP registers used often wins on performance. There is a gain from intermixing integer and FPU instructions, but few compilers have a 68k-specific instruction scheduler to take advantage of it. Not even VBCC, which still generates code with more than double the FPU performance in the ByteMark benchmark on a 68060@50MHz compared to GCC 3.3 generated code on a 68060@75MHz.
https://amigaworld.net/modules/newbb/viewtopic.php?topic_id=44391&forum=25#847418
All the gain was in the FOURIER FPU benchmark, though unfortunately it is not a fast Fourier transform (FFT). GCC had some major issues here. |
The 68k also needs a better ABI, passing parameters in registers instead of on the stack (a sketch of the difference follows). That would be a great improvement for both integer and FP code.
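A minimal hedged sketch of that ABI point; the 68k call sequences in the comments are hand-written, and the register convention shown is hypothetical (loosely in the spirit of the register-argument extensions in Amiga compilers), not any standard ABI:

    /* The callee compiles much the same either way; what changes is
       the call sequence around it. */
    int madd(int a, int b, int c)
    {
        return a * b + c;
    }

    /* Classic m68k stack ABI, caller side:
           move.l #3,-(sp)
           move.l #2,-(sp)
           move.l #1,-(sp)
           jsr    _madd
           lea    12(sp),sp   ; caller pops 3 x 4 bytes
       Hypothetical register ABI (arguments in d0-d2, result in d0):
           moveq  #1,d0
           moveq  #2,d1
           moveq  #3,d2
           jsr    _madd
       Five instructions and three memory writes become four
       instructions with no memory traffic at all. */
    int caller(void)
    {
        return madd(1, 2, 3);
    }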
P.S. As usual, no time to re-read. |
| Status: Offline |
| | cdimauro
|  |
Re: Commodore > Motorola Posted on 11-Apr-2025 5:06:13
| | [ #96 ] |
| |
 |
Elite Member  |
Joined: 29-Oct-2012 Posts: 4278
From: Germany | | |
|
| @Hammer
Quote:
Hammer wrote: @cdimauro Quote:
You can't handle the truth on tokenism. The visibility of the Amiga's 3rd-party CPU accelerator card sales statistics is poor. Refer to John Carmack's 1994 statement on the Amiga's install base issue.
Prove that the A1000's 1986-1987 accelerator vendors rivaled GVP's $50 million+ annual revenue. |
What's not clear to you about this: you're not able to understand that the install base was totally irrelevant for new machines like the Amiga, which quickly received support from the software houses with software of every kind?
Which is THE CONTEXT of this part of the discussion.
And this held even for the A1000, which sold only a few thousand units. Care to explain how that was possible?
However, the problem is that bots do NOT understand, have NO memory and NO logic, hence can't get the context nor know the history.
As I've said, you are (also) an alien, because you have no clue at all about Amiga history and what happened to our (you excluded, of course) beloved platform. Quote:
Quote:
Oh, and wasn't that true? When do you plan to read AND understand what people are writing (see the previous comment for the proof that I've brought)?
|
From 1983 to 1987, mainstream 68K platform vendors were recovering from a ground-zero situation. |
And what's the problem with that?
68k platforms received A LOT of support, despite the very limited numbers (compared to PCs).
Even FPU support, as I've proved. Quote:
Quote:
Do you understand that there were TWO different Kickstart 1.2 versions, one for the A1000 and one for the A500/2000?
|
Do you understand that your "easy to program" claim is trainwrecked by Kickstart 1.2 only becoming available in Sep 1986, and by your cited 3D software's 6888x FPU support arriving in 1988? |
Do you recall WHY Kickstart 1.2 was mentioned, WHO mentioned it, and what the real situation was? No, because you completely lost the context... as usual.
Yes, it was available one year after the A1000 first went on sale. And? What's the problem with that? The question remains the same: did the Amiga get support from the software houses or not?
The fact that I cited FPU support in 1988 is just what it is (!): a testament to the fact that the Amiga received professional software support DESPITE THE INSTALL BASE (whose machines had ZERO FPUs). Quote:
You can't handle the fact that the PC's AutoCAD and Lotus 1-2-3 2.0+ were larger x87 sales drivers compared to the 6888x? |
I, and all Amigans besides you, accepted that a very long time ago. That's the point and the focus / context of the discussion, which you completely lost.
But one question: could you please tell me when AutoCAD got x87 support? How long did that take from its first release?
And, more importantly, could you please tell me how DynaCAD managed to be TEN TIMES faster? Quote:
Alive, thanks... to the Dark Side (Microsoft). 
As you can see, all platforms had to deal with the Evil Empire because of the professional software available for PCs.
The Amiga wasn't an exception, and neither was the Mac, with its software coming from Microsoft... Quote:
#Metoo R&D is not enough to displace the establishment. |
Sure, and tell me what this has to do with the context of the discussion... Quote:
For what? Care to explain, BOT!  Quote:
Guess what: another personal offence. 
The bot was contradicted and became angry, losing his temper and unable to do anything but descend to the personal level, given his complete inability to sustain the discussion.  Quote:
Quote:
Irrelevant + Red Herring.
As usual, with you: hopeless...
|
Remind me when a solo FPU chip is useful without a display and a platform. |
Let me reveal a secret to you, bot: the Amiga received TONS of professional software even without professional display support.
But you don't know that, because you were (and still are) an alien to the Amiga community.  Quote:
See above, BOT! :-D Quote:
See above as well, hopeless loser. Quote:
Quote:
Right, and it was about to go bankrupt and was saved by... drum roll... Microsoft / Bill Gates |
https://slashdot.org/story/24/10/20/004255/chip-designers-recall-the-big-amd-intel-battle-over-x86-64-support
Park also shared a post from Nicholas Wilt (NVIDIA CUDA designer who earlier did GPU computing work at Microsoft and built the prototype for Windows Desktop Manager):
I have an x86-64 story of my own. I pressed a friend at AMD to develop an alternative to Itanium. "For all the talk about Wintel," I told him, "these companies bear no love for one another. If you guys developed a 64-bit extension of x86, Microsoft would support it...."
Interesting coda: When it became clear that x86-64 was beating Itanium in the market, Intel reportedly petitioned Microsoft to change the architecture, and Microsoft told Intel to pound sand.
The same Microsoft told Intel to pound sand after the Itanium debacle.  |
Completely useless PADDING / a Red Herring. As usual. BOT!  Quote:
Bill Gates stepped down as Microsoft CEO in the year 2000; hence no privilege protection for Intel. |
Only in your distorted mind, which created this parallel universe you're living in... Quote:
Steve Jobs' view on Microsoft's support for the Mac. https://www.cnbc.com/2017/08/29/steve-jobs-and-bill-gates-what-happened-when-microsoft-saved-apple.html
“Apple was in very serious trouble,” said Jobs. “And what was really clear was that if the game was a zero-sum game where for Apple to win, Microsoft had to lose, then Apple was going to lose.
“There were too many people at Apple and in the Apple ecosystem playing (that) game,” he explained. “And it was clear that you didn’t have to play that game, because Apple wasn’t going to beat Microsoft.
“Apple didn’t have to beat Microsoft. Apple had to remember who Apple was because they’d forgotten who Apple was.”
To stay alive, Jobs had to step outside of the competitive mindset.
“To me, it was pretty essential to break that paradigm,” says Jobs. “And it was also important that, you know, Microsoft was the biggest software developer outside of Apple developing for the Mac. So it was just crazy what was happening at that time. And Apple was very weak and so I called Bill up and we tried to patch things up.”
The two founders did just that, though their partnership met with resistance. When Jobs announced the $150 million investment at the Macworld Boston conference in 1997, the audience booed Gates’ appearance via satellite.
For Microsoft, the investment meant propping up one of its greatest competitors, but it was also a new business opportunity.
“That’s worked out very well,” says Gates at the 2007 conference. “In fact, every couple years or so, there’s been something new that we’ve been able to do on the Mac and it’s been a great business for us.”
Also, as part of the deal, Apple agreed to drop a lawsuit accusing Microsoft of copying its operating system.
|
Good. Who saved Apple? Microsoft / Gates. As I've already reported. Thanks for confirming it!  Quote:
Quote:
Again, gratuitous personal offences from a BOT which is unable to understand the context of the discussion.
|
Your blaming of the Commodore engineers is wrong and misguided. Your argument lacks context from first-hand accounts. |
"You can't handle the truth!" (cit.). 
What I've reported are FACTs, and many of them are coming from YOUR writings.
In fact (!), you can't dismantle for those two reasons: FACTs... are FACTs, and I'm reporting what YOU have stated. You can't contradict yourself, right? 
However, the primary problem is what I've already said several times: you read, but do NOT understand and, more important, you're NOT able to figure out the overall picture: how was really the situation.
Another proof directly coming from you on EAB:
Quote:
Partial AGA Lisa's productivity mode can't be designed with Bob Welland's influence.
[...]
AGA Lisa was designed by Bob Raible from the LSI group. The LSI group had studied the Amiga OCS designs for the 8-bit planar C65 chipset (from Commodore: The Final Years).
[...]
By late 1989,
A minor revision of Agnus would appear in the Pandora chipset to extend the amount of memory it could address. The engineers pulled in Bob Raible, an engineer from the LSI group, to define a chipset spec for an improved version of the display chip that would be a little sister to AAA's Linda, called Lisa.
[...]
On October 6, George Robbins orchestrated a meeting with Jeff Porter, Hedley Davis, Bryce Nesbitt, himself, and four members of the LSI group: Bob Raible, Ted Lenthe, Jim Redfield, and Dave Anderson. The purpose of the meeting was to obtain management approval for the Lisa display chip and outline the goals, timetable, and required resources.
|
The engineers of the Amiga team were so incompetent that they weren't even able to define the pure SPECS, and the LSI team had to jump in and help.
Pay attention to the part that I've highlighted: it was the LSI team that was able to understand the Amiga chipset (and to "transplant" part of it into the C65 chipset), NOT the (new) Amiga team!
As you can see, you don't miss an opportunity to bring me fuel and prove my reconstruction.
Thanks a lot!  |
| Status: Offline |
| | Hammer
 |  |
Re: Commodore > Motorola Posted on 14-Apr-2025 5:32:37
| | [ #97 ] |
| |
 |
Elite Member  |
Joined: 9-Mar-2003 Posts: 6320
From: Australia | | |
|
| @cdimauro
Quote:
What's not clear to you about this: you're not able to understand that the install base was totally irrelevant for new machines like the Amiga, which quickly received support from the software houses with software of every kind?
Which is THE CONTEXT of this part of the discussion.
And this held even for the A1000, which sold only a few thousand units. Care to explain how that was possible?
However, the problem is that bots do NOT understand, have NO memory and NO logic, hence can't get the context nor know the history.
As I've said, you are (also) an alien, because you have no clue at all about Amiga history and what happened to our (you excluded, of course) beloved platform.
|
Your "easier to program with 68K" argument is useless when there are other fuckups with 68K Amiga platform e.g. very low install base with high resolution display.
From ground zero, only 68K Macintosh was able to establish a large enough business customer base that could spend 1 million PowerMac unit sales from March 1994 to January 1995.
Despite 68000's early 32-bit programming model and Amiga's early multimedia technical lead, poor management proved to be the real boat anchor for both Motorola and Commodore.
Atari ST's annual unit sales growth stalled after 1987. Other GUI platform competitors exceeded Atari ST/Mega ST's monochrome high-resolution offering.
For the US market after 1986, the Amiga's stable 640x200p NTSC resolution was not enough for a mainstream business platform. Where's your "easier to program with 68K" argument? Have you realized your "easier to program with 68K" argument is useless?
You can't focus on just the CPU when the platform is the entire desktop computer solution.
Your cited 6888x 3D application's 1988 release is late! Autodesk developed AutoShade in 1987 for AutoCAD. Autodesk 3D Studio later displaced AutoShade. My point: the PC's x87 market drivers were larger than the Amiga's 6888x!
Last edited by Hammer on 14-Apr-2025 at 06:13 AM.
_________________ Amiga 1200 (rev 1D1, KS 3.2, PiStorm32/RPi CM4/Emu68) Amiga 500 (rev 6A, ECS, KS 3.2, PiStorm/RPi 4B/Emu68) Ryzen 9 7950X, DDR5-6000 64 GB RAM, GeForce RTX 4080 16 GB |
| Status: Offline |
| | Hammer
 |  |
Re: Commodore > Motorola Posted on 14-Apr-2025 7:51:50
| | [ #98 ] |
| |
 |
Elite Member  |
Joined: 9-Mar-2003 Posts: 6320
From: Australia | | |
|
| @cdimauro
Quote:
And what's the problem with that?
68k platforms received A LOT of support, despite the very limited numbers (compared to PCs).
|
Without a timeline, that's meaningless fluff.
Quote:
Even FPU support, as I've proved.
|
The 1988 release is already late. The FPU is not the only factor in a desktop platform.
For the 1985 to 1988 context, Lotus 1-2-3's back-office market was larger than the 3D application market, e.g. Lotus was larger than Autodesk in unit sales and revenue.
Quote:
Do you recall WHY Kickstart 1.2 was mentioned, WHO mentioned it, and what the real situation was? No, because you completely lost the context... as usual.
|
Your cited 1988-released 3D software didn't work on Kickstart 1.1/Workbench 1.1.
The A2000-B was released in March 1987 with Kickstart 1.2. Quote:
Yes, it was available one year after the A1000 first went on sale.
|
From September 1986 and beyond. The early Kickstart 1.2 version didn't have the V1.2 marking. There was a delayed Kickstart 1.2 rollout due to unhappy engineers. The Amiga 1000's production was terminated in early 1987. From "Commodore: The Final Years":
The show marked the official release of the Sidecar, the IBM PC emulator for the Amiga 1000 announced so long ago. Demo versions of the units had been available to Amiga dealers as early as 1986, but the product had entered limbo as engineers worked out the bugs.[3] In December 1986, the hardware finally passed FCC regulations and production began in January 1987.
Unfortunately, the software still had several known bugs. Dealers began selling the units in February using pre-production software, with the promise of a disk upgrade later in the year. Although Commodore previously announced it would market the Sidecar for “significantly below $1000” it was now put on sale for a suggested retail price of $999. The Sidecar began appearing en masse in retail stores later in June.
In one of its legendary marketing fiascos, the important device appeared just in time for Commodore to phase out the A1000, on which it was dependent. Magazines speculated that, with Commodore moving onto the Amiga 2000, they did not want to sell too many Sidecars. The belated release was primarily meant to avoid false advertising lawsuits.
The A1000 was being phased out in early 1987.
Quote:
And? What's the problem with that? The question remains the same: did the Amiga get support from the software houses or not?
|
Have you realized that the gap from September 1986 to your cited 3D software's 1988 release has damaged your "easy to program with 68K" argument?
Your "easy to program with 68K" argument is useless.
Quote:
The fact that I cited FPU support in 1988 is just what it is (!): a testament to the fact that the Amiga received professional software support DESPITE THE INSTALL BASE (whose machines had ZERO FPUs).
|
1. The missing factor is the Amiga ECS's 640x480p productivity mode, which was missing in action during the 1987 Windows 2.x+VGA and Macintosh II release window.
FPU alone doesn't complete the GUI desktop platform!
Unlike the mainstream PCs, the mainstream Amiga didn't include an FPU socket.
1987's Mac II (with 256KB VRAM) and the PC's VGA supported 640x480p with 16 colors for the business graphics markets.
The Mac II's color 640x480p resolution and color QuickDraw ecosystem set up the sales boom of the Mac LC (from October 1990) and LC II (from March 1992) for the Mac platform.
Amiga engineers demonstrated ECS's 640x480p productivity mode in Q4 1988 on an A2000, while management had other ideas, e.g. a timed ECS exclusive for the A3000 (from June 1990).
There is a story in the Commodore: The Final Years book about suppressing the AmigaOS 2.x upgrade for existing Amiga OCS machines. Management wanted AmigaOS 2.x/ECS to be exclusive to the A3000, so Amigans would have to buy a fat-profit-margin A3000 for the upgraded experience.
The mainstream press criticized the A3000's June 1990 release for the lack of a 256-color display and a 68040 CPU. The mainstream press doesn't give a damn about 3rd parties.
Amiga engineers designed a fully functional 68040-25 accelerator card with an L2 cache alongside the A3000, and marketing (management) rejected it. The downgraded A3640 card was Commodore's second attempt, for the later A3000T/040, and it was approved by management.
An A3000plus with AGA and a 68040+L2 cache in 1991 would have been superior to Commodore management's stonewalling of the Amiga while promoting Commodore's PC clone improvements, i.e. you want 256 colors from Commodore? Buy a Commodore PC with SVGA instead.
--------------------
The install base's demographics matter for roadmap planning and for reducing 3rd-party developers' risk. Note why the Amiga is not a Mac.
For the US market, Amiga OCS's 640x200 NTSC was frozen in time, and the A2024's production delays and 5000-unit scale were a joke.
Quote:
Good. Who saved Apple? Microsoft / Gates. As I've already reported. Thanks for confirming it!
|
You missed Steve Jobs's effort when he argued his case with Bill Gates. MS's support for the Mac wasn't automatic. Apple's superior leadership mattered in this case.
Quote:
The engineers of the Amiga team were so incompetent that they weren't even able to define the pure SPECS, and the LSI team had to jump in and help.
Pay attention to the part that I've highlighted: it was the LSI team that was able to understand the Amiga chipset (and to "transplant" part of it into the C65 chipset), NOT the (new) Amiga team!
As you can see, you don't miss an opportunity to bring me fuel and prove my reconstruction.
|
For the 1987 context, what you miss is CSG LSI's inability to quickly re-engineer OCS Denise with partial AA Lisa features, e.g. the shared 4096-color palette. Commodore management's firing of the Amiga's original graphics engineers had a brain-drain cost.
Henri Rubin initiated the monochrome ECS R&D direction, and the blame for this debacle is on management. Henri Rubin was replaced by Bill Sydnes. Both Bill Sydnes (A300/A600, A2200/A2400/A3200/A3400) and Henri Rubin (A3000) repeated the ECS mistake i.e. departed from ECS's original purpose.
After the shutdown of the original Los Gatos Amiga group, the second Amiga group was formed from Commodore's systems engineering group (e.g. the C900 Unix machine), the original engineer for Paula, and a team from AmigaOS.
Commodore's major engineering groups were:
1. The systems engineering group, which later created the VLSI group during the AAA project. This group designed the C900 (https://en.wikipedia.org/wiki/Commodore_900), cost-reduced the A1000 into the A500 (e.g. the Gary chip, co-designed with an external outsourced team), and was the primary designer of Gayle, Fat Gary, Bridgette, Buster, Super Buster, Ramsey, DMac, SDMac, the custom MMUs, the A2620, A2630 and A3640, the early A3640 with L2 cache, the A590, etc.
2. The original Los Gatos Amiga group, which designed Amiga ICS (missing the 64-color EHB mode) and OCS. It quickly added the 64-color EHB mode for the PAL A1000 and later the NTSC A1000. The cancelled Amiga Ranger had up to 128 colors from 7 bitplanes and was the closest to AGA's 256 colors from 8 bitplanes. Key engineers later designed the 3DO (16-bit and 24-bit color display, quadrilateral 3D with a texture accelerator) and the 3DO M2 (triangle 3D with a texture accelerator). With the first 3DO, the team made the same quadrilateral-3D mistake as the Sega Saturn and NVIDIA NV1/NV2.
3. The LSI engineering group, which designed the VIC-20, C64, C128, C65 and AA Lisa, and participated in AAA Andrea's R&D. It designed the CSG 65xx CPUs, the CIAs and many other chips, and helped turn the Amiga Lorraine breadboard into three main ASICs. It suffered a brain drain with the SID chip's designer.
4. The Commodore PC group (led by Jeff Frank, who later moved into the Amiga group's leadership position in June 1991, replacing Jeff Porter).
5. The multimedia group (e.g. CDTV, CDTV-CR, Akiko's DMA CD-ROM controller and hardware C2P), dependent on the Amiga group's multimedia chipset R&D. Jeff Porter was moved into this group in June 1991 and cared enough about the planar-versus-chunky-pixels issue.
Read the Commodore - The Final Years book.
Last edited by Hammer on 14-Apr-2025 at 09:36 AM.
_________________ Amiga 1200 (rev 1D1, KS 3.2, PiStorm32/RPi CM4/Emu68) Amiga 500 (rev 6A, ECS, KS 3.2, PiStorm/RPi 4B/Emu68) Ryzen 9 7950X, DDR5-6000 64 GB RAM, GeForce RTX 4080 16 GB |
| Status: Offline |
| | bhabbott
|  |
Re: Commodore > Motorola Posted on 15-Apr-2025 6:27:51
| | [ #99 ] |
| |
 |
Cult Member  |
Joined: 6-Jun-2018 Posts: 526
From: Aotearoa | | |
|
| @Hammer
Quote:
Hammer wrote:
In one of its legendary marketing fiascos, the important device appeared just in time for Commodore to phase out the A1000, on which it was dependent. Magazines speculated that, with Commodore moving onto the Amiga 2000, they did not want to sell too many Sidecars. The belated release was primarily meant to avoid false advertising lawsuits.
|
The Sidecar was not an 'important device', it was a mistake which should never have been produced. How do I know? Because I had one.
The motivation for producing it wasn't 'to avoid false advertising lawsuits', but because the PC had taken over the business market and Commodore was having a hard time convincing Americans to buy a non-IBM-compatible computer.
Commodore: The Amiga Years, page 329: Quote:
Commodore's executives (and most of the press at the time) considered MS-DOS compatibility to be of paramount importance to the success of the Amiga as business computer... says George Bucas "there was... that feeling in the company, 'What are we going to do with a non PC compatible computer? We can't sell it'."... "'We need to be able to run spreadsheets and word processors and all that kind of stuff so we can justify this $1300 price" recalls Carl Sassenrath. |
At first they tried a software emulator, and while it worked amazingly well it was of course slower than the real thing. Hardware would be required. However, the engineers were understandably not enthusiastic about developing it, so an outside contractor was hired to make it. This was supposed to be ready by the end of 1985, but the card never worked, so Irving Gould asked the German engineering team to develop a solution. This made sense because they had developed Commodore's PC clones.
So what did the Germans do? They took an entire PC motherboard and interfaced it to the A1000 via a daughterboard. This wasn't just an emulator - it was the real thing! The only software needed was drivers on the Amiga side so it could use the Amiga's screen and keyboard etc. By December 1985, 36 prototypes had been made and sent to the US as demo units.
The A1000 wasn't 'phased out' until March 1987, when the A2000 was released, over a year later. The original A2000 was effectively an A1000 with Zorro slots and the Sidecar merged together, with the PC components on a plugin card (the 'bridgeboard'). This was the obvious way to reduce the size and cost of the combo that business customers would want, as well as providing the expansion slots etc. that 'serious' Amiga users wanted. By making the PC side a plugin card, Amiga fans didn't have to buy stuff they didn't need.
The Sidecar was an overly expensive ergonomic nightmare that did very little for Amiga fans and not much for PC fans. While its performance matched any 4.77MHz PC clone, it sucked compared to the Amiga, and displaying the video output via the Amiga made it appear even slower. Furthermore, the 'Janus' software was buggy and could be tricky to get working properly. Add to that a total cost no less than a standalone XT clone's, and there was no reason for business customers to buy it. Thus it utterly failed to achieve its goal. Its only importance was as an inspiration for the A2000. Quote:
Your "easy to program with 68K" argument is useless. |
No, it isn't. x86 was known to be difficult to program. If IBM hadn't chosen it for the PC, its popularity would have quickly waned as better architectures like the 68k became preferred. Even Intel tried to distance themselves from x86, but of course had to accede because it had become the industry standard that users would not give up.
Many of today's Amiga fans have stuck with the platform in large part due to the ease of programming 68k. It might be 'useless' to modern computer users, but for us it's very important, just as it was back in the day. Commodore managed to get enough sales to develop the machines that we enjoy programming today. That's all that matters to us. But we are not alone. Fans of the Atari ST, Sinclair QL, Apple Mac, Sega Mega Drive and X68000 also appreciate it, along with the lucky owners of various rarer machines like the Sun-1, SGI IRIS, DEC VAXstation and Tandy model 16 - to name just a few of the other systems using 68k.
Quote:
1. The missing factor is the Amiga ECS's 640x480p productivity mode, which was missing in action during the 1987 Windows 2.x+VGA and Macintosh II release window. |
Piffle. Windows 2.x was not very popular and the Mac II had virtually zero penetration into the business market. VGA was very expensive in the beginning too. Many business users were still using MDA into the 90's because that's all they needed.
The real thing 'missing in action' on the Amiga in 1987 was of course x86 applications. But this didn't matter to us. There were plenty of Amiga apps that did the same job. The only thing stopping businesses from using the Amiga was that nobody else was, so getting support might be difficult and its future was uncertain. Businesses have enough uncertainty to deal with already, so they almost always go for the 'safest' computer option - which is fair enough.
Considering how entrenched the PC was in the business market by 1984, it made no sense to tout the Amiga as an alternative, and therefore no point in trying to make the hardware equivalent. The Amiga had a solid 640x200/256 display that was fine for the typical 'business' apps we might want to run. That provided the standard 80x25 text which was required for typical business apps of the day. With the standard CGA-style 8-point font you even got an extra 7 lines in PAL (256/8 = 32 text rows versus NTSC's 200/8 = 25). You also got up to 16-color graphics in the same resolution, or could display up to 32 colors in lores for nice graphs and charts on a separate screen that could be dragged into view as desired. This was fine for home users or small business operators who weren't hung up on IBM compatibility.
Quote:
Unlike the mainstream PCs, the mainstream Amiga didn't include an FPU socket. |
...as befitted its use as a low-cost home computer. But high-end Amigas didn't just have an FPU socket - it was populated with an actual FPU (unlike the vast majority of PCs). Furthermore there were plenty of 3rd-party FPU add-ons for those who wanted one, while the majority didn't have to pay for something they weren't going to use.
Quote:
1987's Mac II (with 256KB VRAM) and the PC's VGA supported 640x480p with 16 colors for the business graphics markets. |
1987's Mac II was shockingly expensive and only purchased by 'professional' users with a specific use in mind. PCs were made for business use, yes, but even after 1987 many only had MDA because it was all that was needed. Most business work was done in text mode, with graphics possibly used for displaying a graph or previewing a document before printing. Your fixation on 640x480 resolution is off base. Even if by 1988 this was a deal-breaker for business users (it wasn't), it's irrelevant to us because the Amiga wasn't a business computer and that didn't bother us. If it bothers you, it's only because you suffer from PC envy.
I am glad that the Amiga wasn't a PC. After 15 years of business PC support I don't have happy thoughts about them. For me the Amiga has always been for enjoyment, not work - a hobby, not a chore. I'm glad that Commodore didn't bow to the pressure to conform, and maintained the Amiga's unique character. I'm glad that I can still run my A1200 on a TV in composite and don't have to attach a VGA monitor to run 'business' software. I'm glad that using my Amigas today brings back happy memories of playing awesome games and enjoying 68k coding rather than nightmares of PC problems. I can't imagine how a more 'business-oriented' Amiga would make it any better. More likely it would be worse.
|
| Status: Offline |
| | Lou
|  |
Re: Commodore > Motorola Posted on 16-Apr-2025 19:21:02
| | [ #100 ] |
| |
 |
Elite Member  |
Joined: 2-Nov-2004 Posts: 4256
From: Rhode Island | | |
|
| @Hammer Quote:
In the early 1980s, the 68000's forward-looking 32-bit programming model was the correct path for 32-bit desktop computers. The Intel 386's backward-compatibility record hadn't been established when the Apple Lisa and Amiga Lorraine were at their initial design stage.
The 68000 wasn't crippled like the 16-bit Z8000 with its shared data and address I/O bus.
1. Commodore management failed to exploit the US$100-priced 68EC040-25, and that's on Commodore. The US$100 68EC040-25 was in the same price range as AMD's 386DX-40.
Commodore could have had a kick-ass Amiga with a 68EC040-25 at 386DX-40 PC prices.
2. Commodore management's fuckup: the A1200's mass production was burdened by the old debts of the 1-million-unit A600 run.
At a 400,000-unit A1200 production scale, each A1200 sold had US$50 allocated to paying off the old A600-related debt (400,000 × $50 = $20 million). The A600's 1-million-unit production run couldn't pay for itself, and that is on Commodore management.
This is the major reason why the A1200's compute power was bare-bones compared to the Atari Falcon.
The Atari Falcon's 16-bit bus on its 68030 (with MMU) wasn't good PR for the "32-bit generation" change.
3. Since the original Amiga engineers were also the designers of the multimedia chipset in the MOS/CSG 6507-based Atari 2600, the main reason for the 68000 migration was the Jack Tramiel-era CSG's weak focus on CPU R&D investment.
Other mainstream 65xx-based desktop computing platform vendors had jumped ship from MOS/CSG.
https://www.youtube.com/watch?v=P4VBqTViEx4 Steve Jobs's POV against technical noob salespeople like Jack Tramiel and CEOs/management from Pepsi Co.
Henri Rubin couldn't even use a computer.
4. From 1991 to 1993, Motorola wasn't competitive in the US$30 to $40 CPU price range relative to the MIPS R3051 (for the PS1) and the R4000 (for the N64 offer).
|
Apple was close to abandoning the 68K for the Mac, but Bill Mensch's suppliers couldn't deliver faster 65816 CPUs in quantity... meanwhile other manufacturers were producing much faster versions in quantity. So the Apple IIgs never got a successor and was always gimped to 2.8MHz in fast mode.
Jobs wanted an 8MHz 65816 and Mensch's suppliers didn't deliver. This video documents it pretty well: https://www.youtube.com/watch?v=UDUQEKxfGEw The Apple IIgs was hobbled by talking down to the 1MHz bus for I/O and Apple II compatibility. I have documented this in another post/thread. For one reference, see the Nintendo SA-1 chip (RF5A123) at 10.74MHz. Ricoh's 65816 had DMA, MUL and DIV... and fixed the REP and SEP commands. The CMD SuperCPU128 used a 20MHz 65816 in 1997. Accelerator cards for the Apple IIgs have existed since 1990. SANYO created its own 65816 design for WDC that could run at 15MHz in 1992.
While Bill M. did have the beginnings of a 32-bit design, he started recommending ARM in the late 80s/early 90s to customers who asked for 32-bit solutions.
I find it 'funny' that the 68K is so great that 'Amiga' used a 65CE02 in the CDTV CD-ROM controller and the A2232 serial card.
Speaking of Apple, they had a skunkworks project called Mobius - an 8MHz ARM-based APPLE III (yes, 3!) that could emulate the 6502, 65816 and 68000 - and the 68000 emulation was faster than an actual 68000 (cuz it sucks)... https://web.archive.org/web/20050208095510/http://www.advanced-risc.com/art1stor.htm
Well, what comes around goes around and the best 68K accelerator is a 68K emulator on ARM. LMFAO!
So yes, once again I reiterate that an Amiga based on the 65816 would naturally have led to more success, and eventually to ARM, instead of the 68K and PPC death. |
| Status: Offline |