Lou
Re: News about Vampire and Apollo | Posted on 27-Sep-2018 16:16:49 | [ #1601 ]
Elite Member | Joined: 2-Nov-2004 | Posts: 4223 | From: Rhode Island
@tlosm
Quote:
tlosm wrote: @Lou
Add this ... Quake on an Amiga 4000, 604e @ 366 MHz / 060 @ 66 MHz, with a Voodoo 4: https://youtu.be/D3S5aM0ylPc?t=4m56s
From what I see, Quake on the Vampire has really good performance ... and we know Quake uses FPU instructions and AGA is a bottleneck |
IIRC that version uses some version of OpenGL on the Voodoo card. Again - something the Vampire will never do. |
Status: Offline
bennymee
Re: News about Vampire and Apollo | Posted on 27-Sep-2018 17:23:15 | [ #1602 ]
Cult Member | Joined: 19-Aug-2003 | Posts: 697 | From: Netherlands
@Lou
Never? If there were a Vampire A1200, fit a Mediator and here we go.
Status: Offline
Lou
Re: News about Vampire and Apollo | Posted on 27-Sep-2018 17:30:07 | [ #1603 ]
Elite Member | Joined: 2-Nov-2004 | Posts: 4223 | From: Rhode Island
@bennymee
Quote:
bennymee wrote: @Lou
Never? If there were a Vampire A1200, fit a Mediator and here we go.
|
That is a good point. However, I seem to use Vampire and Apollo interchangeably too often. That being said - my whole point is that for a *future* V5, Gunnar *should* create/add a 256x [aka multiply by 256 but run in parallel] AMMX to a custom chip. I suggested a SuperAkiko chip since the current one is useless in its current form... It would be wholly developed in-house, so it would not require any 3rd-party hardware.
My point is/was to do it right...to do it the "Amiga way"...this is how you do it...
He claimed this FICTIONAL chip+memory controller would need an impossible 50GB/s transfer rate. I stated the FACT that GDDR5 memory subsystems reach transfer rates of 320GB/s in the real world.
For reference, the Nintendo GameCube had an internal memory bandwidth of 18GB/s, so I believe his math is off on what it takes to get a Vampire into the 3D gaming realm/neighborhood... https://en.wikipedia.org/wiki/Nintendo_GameCube_technical_specifications
Raw polygon performance:
- 90 million polygons/sec
- 40 million polygons/sec, with fogging, Z-buffering, alpha blending and Gouraud shading
- 33 million polygons/sec, with fogging, Z-buffering, alpha blending and texture mapping
- 25 million polygons/sec, with fogging, Z-buffering, alpha blending, texture mapping and lighting
- 6-20 million polygons/sec, assuming actual game conditions, with complex models, fully textured, fully lit, etc.
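As a rough sanity check on those bandwidth numbers, here is the arithmetic; the GDDR5 configuration is my own assumption (an 8 Gbps-per-pin part on a 320-bit bus, the sort of setup that gives the 320GB/s figure), not anything the Apollo team has specified:
Code:

    /* Back-of-envelope memory bandwidth check; the GDDR5 configuration is assumed. */
    #include <stdio.h>

    int main(void)
    {
        double pin_rate_gbps  = 8.0;    /* assumed GDDR5 data rate per pin, in Gbit/s */
        double bus_width_bits = 320.0;  /* assumed bus width */
        double gddr5_gbs = pin_rate_gbps * bus_width_bits / 8.0;   /* GB/s */

        double claimed_need_gbs  = 50.0;  /* the "impossible" figure being disputed */
        double gamecube_main_gbs = 2.6;   /* GameCube main memory, per the Wikipedia page above */

        printf("Assumed GDDR5 subsystem: %.0f GB/s\n", gddr5_gbs);
        printf("Claimed requirement: %.0f GB/s (%.1fx the GameCube's main memory)\n",
               claimed_need_gbs, claimed_need_gbs / gamecube_main_gbs);
        return 0;
    }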
Last edited by Lou on 27-Sep-2018 at 05:47 PM.
Status: Offline
Overflow
Re: News about Vampire and Apollo | Posted on 27-Sep-2018 17:56:26 | [ #1604 ]
Super Member | Joined: 12-Jun-2012 | Posts: 1628 | From: Norway
@Lou
Out of curiosity, do you have a cost estimate for including such a system? GDDR5, for example?
What kind of design/hardware changes would be required on the Vampire card to accommodate such speeds? |
Status: Offline
Lou
Re: News about Vampire and Apollo | Posted on 27-Sep-2018 21:51:24 | [ #1605 ]
Elite Member | Joined: 2-Nov-2004 | Posts: 4223 | From: Rhode Island
@Overflow
Quote:
Overflow wrote: @Lou
Out of curiosity, do you have a cost estimate for including such a system? GDDR5, for example?
What kind of design/hardware changes would be required on the Vampire card to accommodate such speeds? |
I wasn't suggesting to use GDDR5. I used GDDR5 to show such speeds exist in the real world and are not "fictional".
Also, if you've been following along, the GameCube's memory, circa 2001, was doing 2.6GB/s to 18GB/s depending on where in the system you look... and could still push 20M-90M polygons per second. This is 2018. So even his fictional 50GB/s number was a gross overestimation of what is required to render acceptable 3D. Mind you, I was only looking for the 100k to 200k polygons/sec number to beat or match systems available in the mid-to-late 1990s that had FAR WEAKER CPUs...
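To put that 100k-200k target in perspective, here is a rough worked estimate; every figure below is my own assumption (60 fps, a 320x256 16-bit screen, about 3x overdraw, ~32 bytes of vertex/texture traffic per polygon), not a measurement from any Vampire or Apollo hardware:
Code:

    /* Rough bandwidth estimate for a mid-90s-class 3D target; all figures are assumptions. */
    #include <stdio.h>

    int main(void)
    {
        double polys_per_sec   = 200000.0;  /* upper end of the 100k-200k target */
        double bytes_per_poly  = 32.0;      /* assumed vertex + texture fetch per polygon */
        double fps             = 60.0;
        double width = 320.0, height = 256.0;
        double bytes_per_pixel = 2.0;       /* 16-bit framebuffer */
        double overdraw        = 3.0;       /* assumed average overdraw */

        double geometry_bw = polys_per_sec * bytes_per_poly;                   /* bytes/s */
        double fill_bw     = width * height * bytes_per_pixel * overdraw * fps;

        printf("Geometry traffic: %.1f MB/s\n", geometry_bw / 1e6);
        printf("Fill traffic:     %.1f MB/s\n", fill_bw / 1e6);
        printf("Total:            %.3f GB/s - nowhere near 50 GB/s\n",
               (geometry_bw + fill_bw) / 1e9);
        return 0;
    }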
I repeat - his hubris is thinking the cpu can do it all. |
Status: Offline
Overflow
Re: News about Vampire and Apollo | Posted on 27-Sep-2018 22:15:47 | [ #1606 ]
Super Member | Joined: 12-Jun-2012 | Posts: 1628 | From: Norway
@Lou
At what monetary cost? Anything is possible if you throw enough money at it. But will it be affordable for the team OR the end user who is supposed to purchase it?
I don't actually know, which is why I ask.
Last edited by Overflow on 27-Sep-2018 at 10:16 PM.
Status: Offline
NutsAboutAmiga
Re: News about Vampire and Apollo | Posted on 27-Sep-2018 22:30:38 | [ #1607 ]
Elite Member | Joined: 9-Jun-2004 | Posts: 12880 | From: Norway
@Lou
Quote:
My point is/was to do it right...to do it the "Amiga way"...this is how you do it... |
What is the Amiga way? Is it similar to what has been done in the last decade?
Status: Offline
g01df1sh
Re: News about Vampire and Apollo | Posted on 27-Sep-2018 22:35:33 | [ #1608 ]
Super Member | Joined: 16-Apr-2009 | Posts: 1782 | From: UK
@bennymee
I have a Mediator waiting. _________________ A1200 ACA1232 128MB Indivision MkIICr Elbox empty Power Tower RPi3 Emulating C64 ZX Atari PS BBC Wii with Amiga emulation Vampire v4 SA |
Status: Offline
SHADES
Re: News about Vampire and Apollo | Posted on 28-Sep-2018 0:52:18 | [ #1609 ]
Cult Member | Joined: 13-Nov-2003 | Posts: 867 | From: Melbourne
@NutsAboutAmiga
Quote:
NutsAboutAmiga wrote: @Lou
Quote:
My point is/was to do it right...to do it the "Amiga way"...this is how you do it... |
What is the Amiga way? Is it similar to what has been done in the last decade?
|
Agreed wholeheartedly. Great point. The Amiga way is 20 years of crappy nothing.
Get the hardware out, offer a way to expand into available graphics cards with good integration: PCI/PCIe expansion capability via a mini-header/port (M.2?) brought out. Then you can go get a $200 GPU and the rest is driver support. _________________ It's not the question that's the problem, it's the problem that's the question. |
Status: Offline
Lou
Re: News about Vampire and Apollo | Posted on 28-Sep-2018 14:25:40 | [ #1610 ]
Elite Member | Joined: 2-Nov-2004 | Posts: 4223 | From: Rhode Island
@Overflow
Quote:
Overflow wrote: @Lou
At what monetary cost? Anything is possible if you throw enough money at it. But will it be affordable for the team OR the end user who is supposed to purchase it?
I don't actually know, which is why I ask. |
I don't know, but generally speaking, unless it's an embedded device, you ship a PC board with sockets, not soldered memory... |
Status: Offline
Lou
Re: News about Vampire and Apollo | Posted on 28-Sep-2018 14:29:48 | [ #1611 ]
Elite Member | Joined: 2-Nov-2004 | Posts: 4223 | From: Rhode Island
@SHADES
Quote:
SHADES wrote: @NutsAboutAmiga
Quote:
What is the Amiga way? Is it similar to what has been done in the last decade? |
Agreed wholeheartedly. Great point. The Amiga way is 20 years of crappy nothing.
Get the hardware out, offer a way to expand into available graphics cards with good integration: PCI/PCIe expansion capability via a mini-header/port (M.2?) brought out. Then you can go get a $200 GPU and the rest is driver support. |
|
To me, the Amiga way is letting the Amiga custom chips do the work, whereas the CPU was nothing special... The 68000 in the Amiga was the same as in the Mac and Atari... So taking AMMX (which could still be in the CPU) and creating a 256x version of it within a custom chip that's already playing with CHIPRAM (aka graphics memory in the PC world) is the Amiga way.
Custom chips are what made the Amiga an Amiga. Otherwise it's a Mac. One thing that limited the Amiga was the CHIPRAM bus. This is why FASTRAM is 'fast'. This is also why I asked about memory controllers... Having a unified design where the same memory is divided into fast and slow/chip has its +'s and -'s. Letting the custom chips play with chipram while the CPU is running logic is where having a separate controller and memory bank would come into play.
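For anyone who hasn't written to it in a while, this is the split as the OS already exposes it; just the standard exec.library calls (nothing Vampire- or SAGA-specific, and the sizes are arbitrary):
Code:

    /* Standard AmigaOS allocation of chip vs. fast memory via exec.library. */
    #include <proto/exec.h>
    #include <exec/memory.h>
    #include <stdio.h>

    int main(void)
    {
        /* Chip RAM: reachable by the custom chips (blitter, Copper, audio/disk DMA). */
        APTR bitplane = AllocMem(320 * 256 / 8, MEMF_CHIP | MEMF_CLEAR);

        /* Fast RAM: CPU-only, off the chip bus, so the CPU isn't fighting DMA for cycles. */
        APTR workbuf = AllocMem(64 * 1024, MEMF_FAST | MEMF_CLEAR);

        if (bitplane) { printf("Got chip RAM for one bitplane\n"); FreeMem(bitplane, 320 * 256 / 8); }
        if (workbuf)  { printf("Got fast RAM for CPU work\n");     FreeMem(workbuf, 64 * 1024); }
        return 0;
    }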
What was actually crap with the old design was the 3.x MHz bus.
Last edited by Lou on 28-Sep-2018 at 02:41 PM.
Status: Offline
BigD
Re: News about Vampire and Apollo | Posted on 28-Sep-2018 14:40:43 | [ #1612 ]
Elite Member | Joined: 11-Aug-2005 | Posts: 7358 | From: UK
@Lou
Quote:
Amigas were both cheaper and had more bang for the buck than Macs. But because Mac software had to be developed to respect OS protocols, that platform could more easily outlive its 68k / PPC heritage and probably even make the jump from Intel to Apple chips in the future!
That, in retrospect, is an advantage. The Amiga way was self-defeating in the long run once CSG/MOS was no longer a cost-effective / competitive in-house chip maker and was instead an albatross around C='s neck and an environment-polluting time bomb!
Last edited by BigD on 28-Sep-2018 at 02:41 PM.
_________________ "Art challenges technology. Technology inspires the art." John Lasseter, Co-Founder of Pixar Animation Studios |
Status: Offline
Lou
Re: News about Vampire and Apollo | Posted on 28-Sep-2018 14:57:28 | [ #1613 ]
Elite Member | Joined: 2-Nov-2004 | Posts: 4223 | From: Rhode Island
@BigD
Quote:
BigD wrote: @Lou
Quote:
Amigas were both cheaper and had more bang for the buck than Macs. But because Mac software had to be developed to respect OS protocols, that platform could more easily outlive its 68k / PPC heritage and probably even make the jump from Intel to Apple chips in the future!
That, in retrospect, is an advantage. The Amiga way was self-defeating in the long run once CSG/MOS was no longer a cost-effective / competitive in-house chip maker and was instead an albatross around C='s neck and an environment-polluting time bomb! |
Perhaps I should have said 'Draco'.... In the end, the 3.57 MHz bus was the main limitation... |
Status: Offline
matthey
Re: News about Vampire and Apollo | Posted on 29-Sep-2018 0:31:04 | [ #1614 ]
Elite Member | Joined: 14-Mar-2007 | Posts: 2209 | From: Kansas
Quote:
Lou wrote: I wasn't suggesting to use GDDR5. I used GDDR5 to show such speeds exist in the real world and are not "fictional".
Also, if you've been following along, the GameCube's memory, circa 2001, was doing 2.6GB/s to 18GB/s depending on where in the system you look... and could still push 20M-90M polygons per second. This is 2018. So even his fictional 50GB/s number was a gross overestimation of what is required to render acceptable 3D. Mind you, I was only looking for the 100k to 200k polygons/sec number to beat or match systems available in the mid-to-late 1990s that had FAR WEAKER CPUs...
I repeat - his hubris is thinking the cpu can do it all. |
A CPU can do most of the work of a GPU. The Commodore Hombre chipset was using a single-core superscalar PA-RISC CPU with a primitive SIMD unit (the PA-RISC SIMD unit is part of the integer unit, like the Apollo Core's).
https://en.wikipedia.org/wiki/Amiga_Hombre_chipset
Then there was the Larrabee GPGPU, which used multi-core, in-order, superscalar Pentium/Atom-like CPUs with 512-bit wide SIMD units. The result was a very flexible and easy-to-program GPU with cores which could also be used for CPU SMP work. Surely a 16-core setup like this could handle the 3D gfx of old consoles, even though 48 cores were not competitive with the more specialized GPUs of the day, resulting in Larrabee being cancelled. Perhaps this concept could work with more efficient and smaller cores (so more of them fit in the same space), more specialized GPU hardware and/or by not seeking to be competitive with state-of-the-art GPUs.
https://en.wikipedia.org/wiki/Larrabee_(microarchitecture)
Then there was the Knights Landing Xeon Phi x86_64, which resembled Larrabee but was now called a CPU instead of a GPGPU. However, it was sometimes faster to render with its SIMD units than to go through a slow bus to a GPU.
https://en.wikipedia.org/wiki/Xeon_Phi
The concept is really cool but keeping the logic close enough, cache coherency issues, stream processing requirements and a good MMU design probably make such a setup tricky. Intel thought they could make an x86_64 based GPGPU work with Larrabee and it wasn't efficient enough.
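To make the "CPU doing GPU work" idea concrete, the hot loop of a software rasterizer is basically a span fill plus a depth test, which is exactly the kind of loop a wide SIMD unit (Larrabee's 512-bit vectors, or AMMX for that matter) can chew through many pixels at a time. A minimal scalar sketch of that loop, not anyone's actual renderer:
Code:

    /* Flat-shaded span fill with Z-test: the inner loop of a software rasterizer.
       A 512-bit SIMD unit could process 16 of these 16-bit pixels per step. */
    #include <stdint.h>

    void fill_span(uint16_t *color, uint16_t *zbuf, int x0, int x1,
                   uint16_t rgb565, uint16_t z, int16_t dz)
    {
        for (int x = x0; x < x1; x++) {
            if (z < zbuf[x]) {       /* depth test */
                zbuf[x]  = z;        /* write new depth */
                color[x] = rgb565;   /* write flat-shaded color */
            }
            z += dz;                 /* interpolate depth across the span */
        }
    }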
The Apollo core is restricted by the small resources of affordable FPGAs. It's not difficult to add multiple cores or a wider SIMD unit (speaking of hardware limitations not Gunnar's ISA limitations). It makes little sense to move dedicated functionality to other chips. As I've said before, it is better to move everything closer into one SoC, perhaps with a separate FPGA for versatility.
Quote:
Lou wrote: I don't know, but generally speaking, unless it's an embedded device, you ship a PC board with sockets, not soldered memory... |
So the Raspberry Pi is an embedded device? Maybe you are right. Affordable computers seem to find embedded uses. It certainly is a better embedded device than any NG Amiga.
Status: Offline
simplex
Re: News about Vampire and Apollo | Posted on 29-Sep-2018 1:26:43 | [ #1615 ]
Cult Member | Joined: 5-Oct-2003 | Posts: 896 | From: Hattiesburg, MS
@Lou
Quote:
To me, the Amiga way is letting the Amiga custom chips do the work, whereas the CPU was nothing special... The 68000 in the Amiga was the same as in the Mac and Atari... |
In this case, every computer today follows the Amiga way: graphics and sound are offloaded to dedicated chips, usually using a DMA bus no less, and they have their own memory as well.
The Atari also had custom chips if memory serves, which is why its music was better (or so I hear). Custom chips were pretty common in game machines, which I thought was how Jay Miner spent most of his career. But I agree that the Amiga pioneered this in a general-purpose computer, and that did make Amiga special at the time.
Finally, the 68000 was a special chip at that time, at least compared to the x86 line. The fact that Mac and Atari also used it actually drives the point home; some people have speculated (and this may have been confirmed; I can't recall) that IBM chose the x86 line for their PC precisely because they knew there was no way it would compete with their really high-priced 68000-based workstations.
Honestly, I think what really made the Amiga special was that its OS handled really well, leveraging the custom chips well enough for general use, and letting you shunt it aside when needed. That was really nice at the time, and most OSes have caught up to that now, which is one reason smartphones are generally awesome._________________ I've decided to follow an awful lot of people I respect and leave AmigaWorld. If for some reason you want to talk to me, it shouldn't take much effort to find me. |
Status: Offline
ppcamiga1
Re: News about Vampire and Apollo | Posted on 29-Sep-2018 5:23:50 | [ #1616 ]
Cult Member | Joined: 23-Aug-2015 | Posts: 834 | From: Unknown
Using custom chips was the Amiga way before 1992. Commodore did not spend money on R&D, and even the A1200 from Commodore has a standard 68020 CPU that is faster than the original Amiga blitter. There is nothing special in SAGA; it is not original Commodore work. If SAGA is too slow it should be replaced with better graphics chips from the PC.
Status: Offline
cdimauro
Re: News about Vampire and Apollo | Posted on 29-Sep-2018 7:41:16 | [ #1617 ]
Elite Member | Joined: 29-Oct-2012 | Posts: 3948 | From: Germany
@matthey Quote:
matthey wrote: Quote:
Lou wrote: I wasn't suggesting to use GDDR5. I used GDDR5 to show such speeds exist in the real world and are not "fictional".
Also, if you've been following along, the GameCube's memory, circa 2001, was doing 2.6GB/s to 18GB/s depending on where in the system you look... and could still push 20M-90M polygons per second. This is 2018. So even his fictional 50GB/s number was a gross overestimation of what is required to render acceptable 3D. Mind you, I was only looking for the 100k to 200k polygons/sec number to beat or match systems available in the mid-to-late 1990s that had FAR WEAKER CPUs...
I repeat - his hubris is thinking the cpu can do it all. |
A CPU can do most of the work of a GPU. The Commodore Hombre chipset was using a single-core superscalar PA-RISC CPU with a primitive SIMD unit (the PA-RISC SIMD unit is part of the integer unit, like the Apollo Core's).
https://en.wikipedia.org/wiki/Amiga_Hombre_chipset
Then there was the Larrabee GPGPU, which used multi-core, in-order, superscalar Pentium/Atom-like CPUs with 512-bit wide SIMD units. The result was a very flexible and easy-to-program GPU with cores which could also be used for CPU SMP work. Surely a 16-core setup like this could handle the 3D gfx of old consoles, even though 48 cores were not competitive with the more specialized GPUs of the day, resulting in Larrabee being cancelled. Perhaps this concept could work with more efficient and smaller cores (so more of them fit in the same space), more specialized GPU hardware and/or by not seeking to be competitive with state-of-the-art GPUs.
https://en.wikipedia.org/wiki/Larrabee_(microarchitecture)
Then there was the Knights Landing Xeon Phi x86_64, which resembled Larrabee but was now called a CPU instead of a GPGPU. However, it was sometimes faster to render with its SIMD units than to go through a slow bus to a GPU.
https://en.wikipedia.org/wiki/Xeon_Phi
The concept is really cool but keeping the logic close enough, cache coherency issues, stream processing requirements and a good MMU design probably make such a setup tricky. Intel thought they could make an x86_64 based GPGPU work with Larrabee and it wasn't efficient enough. |
Just some notes here, since I was primarily working on Xeon Phi products when I was at Intel.
Larrabee failed because GPUs were (and still are, although something is changing) heavily optimized for raster graphics, so they made better use of the silicon for that specific task. The first Larrabee versions (never released) had no fixed-function units at all, so the x64 cores had to do all the work, which led to much worse performance; this forced Intel to add some fixed-function units (texturing) to improve the situation. However, it wasn't enough to compete with GPUs. The paradox is that hardware-based ray-tracing GPUs have just been presented by nVidia, which means that NOW a Larrabee design would have much higher chances to compete...
Larrabee and the first Xeon Phi products weren't called CPUs because they were just coprocessors (they lacked some instructions, so they weren't fully x64-compatible) and were sold only as PCI-Express cards.
Starting from Knights Landing they are called CPUs, because they have the full x64 ISA. They were still sold as PCI-Express cards, but also as standalone processors (which offered much better performance too, since Knights Landing processors did not have to go through the very slow PCI-Express bus to share memory: NUMA works much better).
/OT |
Status: Offline
cdimauro
Re: News about Vampire and Apollo | Posted on 29-Sep-2018 7:45:48 | [ #1618 ]
Elite Member | Joined: 29-Oct-2012 | Posts: 3948 | From: Germany
@ppcamiga1 Quote:
ppcamiga1 wrote: Using custom chips was the Amiga way before 1992. |
Some newer PPC PCs were and are sold with an integrated XMOS chip, which is neither custom nor proprietary, to resemble the "Amiga way" while not being at all comparable to it... Quote:
Commodore did not spend money on R&D, and even the A1200 from Commodore has a standard 68020 CPU that is faster than the original Amiga blitter. |
Ask a developer how fast the above XMOS chip is compared to the (main) CPU... Quote:
There is nothing special in SAGA; it is not original Commodore work. |
There is nothing special in PPC PCs; they are not original Commodore work. Quote:
If SAGA is too slow it should be replaced with better graphics chips from the PC. |
If the PPC PC is too slow it should be replaced with better processor chips from the PC. |
Status: Offline
NutsAboutAmiga
Re: News about Vampire and Apollo | Posted on 29-Sep-2018 8:09:08 | [ #1619 ]
Elite Member | Joined: 9-Jun-2004 | Posts: 12880 | From: Norway
@cdimauro
PPC is big-endian; that is what is great about the CPU, and that is about it. The fact that it is big-endian makes it nice to program for: it's good because the 680x0 is big-endian, and it's good because we have Petunia, which can translate 680x0 code into PowerPC code just in time for it to be executed.
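A quick illustration of the point (my own toy example, not code from the actual Petunia JIT): a 68k longword stored to memory reads back with the same value on a big-endian PPC host, while a little-endian host has to byte-swap every access.
Code:

    /* Why a matching byte order helps 68k translation: the same bytes in memory
       mean the same 32-bit value on a big-endian host, but not on a little-endian one. */
    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    int main(void)
    {
        /* A 68k MOVE.L #$12345678 stores the bytes 12 34 56 78, most significant first. */
        uint8_t m68k_memory[4] = { 0x12, 0x34, 0x56, 0x78 };

        uint32_t value;
        memcpy(&value, m68k_memory, sizeof value);

        /* Big-endian PPC prints 0x12345678 directly; a little-endian x86/ARM host prints
           0x78563412 and the emulator has to swap the bytes on every load/store. */
        printf("Host reads: 0x%08X\n", (unsigned int)value);
        return 0;
    }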
Of course, it's bad for web browsers, but that is what tablets and mobile phones are for. In general it's a software issue; it's bad because you can't just get a computer from Dell or Toshiba at a nice price tag.
Last edited by NutsAboutAmiga on 29-Sep-2018 at 08:16 AM.
_________________ http://lifeofliveforit.blogspot.no/ Facebook::LiveForIt Software for AmigaOS |
Status: Offline
NutsAboutAmiga
Re: News about Vampire and Apollo | Posted on 29-Sep-2018 8:30:55 | [ #1620 ]
Elite Member | Joined: 9-Jun-2004 | Posts: 12880 | From: Norway
@Lou
Quote:
To me, the Amiga way is letting the Amiga custom chips do the work, |
Yes, this is similar to how GPUs do all the work today. Quote:
The 68000 in the Amiga was the same as in the Mac and Atari... |
Well, it's pretty outdated by today's standards, that's for sure.
Quote:
So taking AMMX (which could still be in the CPU) and creating a 256x version of it within a custom chip that's already playing with CHIPRAM (aka graphics memory in the PC world) is the Amiga way. |
No, no, no, that is what GPUs do: "CHIP" RAM is the RAM for the GPU/graphics chip, now referred to as video memory in the rest of the industry.
AltiVec is an extension of the normal CPU instructions: instead of one "add" you can do "add, add, add, add" with a single instruction on an array of memory, so when working with matrices, arrays and chunks of image data you can get things done in a fraction of the time normal instructions would take.
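For example, a minimal sketch of that "four adds in one instruction" point (my own example, needing a PPC compiler with AltiVec enabled, e.g. GCC's -maltivec; it is not code from Petunia or any Amiga project):
Code:

    /* One AltiVec vec_add performs four 32-bit float additions at once. */
    #include <altivec.h>
    #include <stdio.h>

    int main(void)
    {
        /* 16-byte aligned arrays, as the AltiVec load/store intrinsics expect. */
        float a[4] __attribute__((aligned(16))) = { 1.0f, 2.0f, 3.0f, 4.0f };
        float b[4] __attribute__((aligned(16))) = { 10.0f, 20.0f, 30.0f, 40.0f };
        float c[4] __attribute__((aligned(16)));

        vector float va = vec_ld(0, a);
        vector float vb = vec_ld(0, b);
        vector float vc = vec_add(va, vb);   /* "add, add, add, add" in one instruction */
        vec_st(vc, 0, c);

        printf("%g %g %g %g\n", c[0], c[1], c[2], c[3]);
        return 0;
    }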
Quote:
Having a unified design where the same memory is divided into fast and slow/chip has its +'s and -'s. Letting the custom chips play with chipram while the CPU is running logic is where having a separate controller and memory bank would come into play. |
This is something modern graphics cards can do; ever heard of GART? https://dri.freedesktop.org/wiki/GART/
Last edited by NutsAboutAmiga on 29-Sep-2018 at 08:34 AM.
_________________ http://lifeofliveforit.blogspot.no/ Facebook::LiveForIt Software for AmigaOS |
Status: Offline