Poll : Commodore Amiga Global Alliance
Yes, I would Join! £30
Yes, for less
Maybe
No
Bad idea, I have a better one....
Pancakes!
 
MEGA_RJ_MICAL 
Re: Commodore Amiga Global Alliance
Posted on 6-Sep-2022 2:27:07
#121 ]
Super Member
Joined: 13-Dec-2019
Posts: 1200
From: AMIGAWORLD.NET WAS ORIGINALLY FOUNDED BY DAVID DOYLE

ZORRAM ALLIANCE

_________________
I HAVE ABS OF STEEL
--
CAN YOU SEE ME? CAN YOU HEAR ME? OK FOR WORK

QuikSanz 
Re: Commodore Amiga Global Alliance
Posted on 6-Sep-2022 2:37:54
#122 ]
Super Member
Joined: 28-Mar-2003
Posts: 1236
From: Harbor Gateway, Gardena, Ca.

@MEGA_RJ_MICAL,

Glory hogging the first post on every new page, eh? Very tacky, sir!

Chris

MEGA_RJ_MICAL 
Re: Commodore Amiga Global Alliance
Posted on 6-Sep-2022 2:54:24
#123 ]
Super Member
Joined: 13-Dec-2019
Posts: 1200
From: AMIGAWORLD.NET WAS ORIGINALLY FOUNDED BY DAVID DOYLE

@QuikSanz

Yes and no, dearest friend QuikSanz:

Amigaworld.net, which was originally created by David Doyle, needs to be reminded that it NOW LIVES IN THE AGE OF MEGA_RJ_MICAL,

but that's not all there is to it.

Some threads,
not all threads - you might or might not have noticed -
have to be marked for "derailed" or "moronic".

That mark, my friend,
is represented by a solemn ZORRAM on top of every page, when and where possible.

A decontextualized ZORRAM that signifies
"Nothing to see here, go away".

A lone, towering ZORRAM that signifies
"What the f-word are you nimrods bantering about?"

I hope this brings some clarity,
there,
up, up in the heavens where your jet fighter oh so mightily soars.




/mega

_________________
I HAVE ABS OF STEEL
--
CAN YOU SEE ME? CAN YOU HEAR ME? OK FOR WORK

cdimauro 
Re: Commodore Amiga Global Alliance
Posted on 6-Sep-2022 6:28:21
#124 ]
Elite Member
Joined: 29-Oct-2012
Posts: 3649
From: Germany

@matthey

Quote:

matthey wrote:
cdimauro Quote:

Actually it's better to spend the transistor budget on more L1 cache. It would be good to have 32kB x 2 for instruction and data caches. And even better would be to have an L0 cache for some uops, to get rid of the mess/effort of decoding 68k instructions all the time.


There is little doubt that increasing a 68060-like CPU's L1 caches to at least 16kiB I+D would be good. Larger is not always better as access latency is increased. There is a balance between L1 access latency and the performance boost from having more data available in L1.

I know, but usually it's very low: 2-3 cycles.
Quote:
Many modern high performance CPUs have found 32kiB I+D L1 caches to be the best compromise but that doesn't mean it is always the case. The 68k extreme code density could allow the I cache size to be decreased while retaining performance compared to other CPUs but it is usually better to benefit from the L1I cache having significantly more code than some RISC like CPUs with poor code density (the 68060 8kiB L1I can contain as much code as a PPC CPU with 32kiB L1I).

4 times looks too much to me: do you have some data about it?
Quote:
A 68060-like CPU would most resemble the in-order Intel Atom Bonnell architecture, although there are significant differences between x86 and 68k.

https://en.wikichip.org/wiki/intel/microarchitectures/bonnell

They chose to use an 8 way 32kiB L1I which is a high capacity L1I cache even with x86 code density but oddly reduced the L1D to 6 way 24kiB, likely for timing purposes. More set associative ways also increases latency so maybe they found 6 way 24kiB L1D was better than 4 way 32kiB L1D.

Yes, it looked strange to me that they decreased the data cache and not the instruction cache: I would have expected the opposite.
Quote:
Bonnell does not break x86 instructions down as far in order to reduce power. This is more like 68060 decoding but not as efficient.

Quote:

Bonnell is a departure from all modern x86 architectures with respect to decoding (including those developed by AMD and VIA and every Intel architecture since P6). Whereas modern architectures transform complex x86 instructions into a more easily digestible µop form, Bonnell does almost no such transformations. The pipeline is tailored to execute regular x86 instructions as single atomic operations consisting of a single destination register and up to three source-registers (typical load-operate-store format). Most instructions actually correspond very closely to the original x86 instructions. This design choice results in lower complexity but at the cost of performance reduction. Bonnell has two identical decoders capable of decoding complex x86 instructions. Being variable length instruction architecture introduces an additional layer of complexity. To assist the decoders, Bonnell implements predecoders that determine instruction boundaries and mark them using a single-bit marker. Two cycles are allocated for predecoding as well as L1 storage. Boundary marks are also stored in the L1 eliminating the need to preform needlessly redundant predecoding. Repeated operations are retrieved pre-marked eliminating two cycles. Bonnel has a 36 KiB L1 instruction cache consisting of 32 KiB instruction cache and 4 KiB instruction boundary mark cache. All instructions (coming from both cache or predecode) must undergo full decode. It's worthwhile noting that Intel states Bonnell is a 16-stage pipeline because for the most part, after a cache hit you'll have 16 stages. This is also true in some cases where the processor can simultaneously decode the next instruction. However, in the cases where you get a miss, it will cost 3 additional stages to catch up and locate the boundary for that instruction for a total of 19 stages.


Rather than a micro-op cache, Bonnell uses something more like a mini trace cache by storing decoding data in the L1I.

No, this isn't a trace cache (like the one in the Pentium 4), but the usual instruction cache tag bits, which are used to store additional, useful metadata. In this case they just add one bit to mark where an instruction starts, but more bits might be used to flag other information (e.g. branches).

x86/x64 OoO processors used a significant number of tag bits in the instruction cache for this purpose. With the introduction of L0 caches (like the trace cache) I expect that they aren't used anymore (or are at least significantly reduced).
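
To make the idea concrete, a boundary-marker store amounts to one metadata bit per cached byte, set during a predecode pass. A minimal C sketch (hypothetical, with a toy length decoder standing in for the real x86 one):

Code:

#include <stdint.h>
#include <string.h>

#define LINE_BYTES 64

typedef struct {
    uint8_t bytes[LINE_BYTES];      /* cached instruction bytes          */
    uint8_t starts[LINE_BYTES / 8]; /* 1 bit per byte: instruction start */
} icache_line;

/* Toy length decoder standing in for the real (much more complex) x86 one:
 * here any byte >= 0xF0 is treated as starting a 2-byte instruction. */
static unsigned toy_length(const uint8_t *p)
{
    return (*p >= 0xF0) ? 2u : 1u;
}

/* Predecode pass: walk the line once and record where each instruction
 * begins, so later fetches can skip length determination entirely. */
static void predecode(icache_line *line)
{
    memset(line->starts, 0, sizeof line->starts);
    for (unsigned pos = 0; pos < LINE_BYTES; ) {
        line->starts[pos >> 3] |= (uint8_t)(1u << (pos & 7));
        pos += toy_length(&line->bytes[pos]);
    }
}
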
Quote:
Now we see why such a high capacity L1I is needed and this wastes some of the advantage of good code density which was further wasted when Atom moved to significantly fatter x86-64, as you know. The 68060 has less decoding overhead (16 bit encodings are inherently lower overhead than 8 bit encodings)

It depends on how the 16-bit encoding is organized. The one for the 68k family is quite complicated, with a lot of exceptions to be taken into account.

Quite the opposite: an 8-bit encoding could be easier to decode. x86/x64 is a complicated ISA, but with a simple 32-byte LUT you can quickly and easily catch its prefixes, for example. Similarly, you can see if instructions have memory references. And so on.

You can't do that with the 68k, because using LUTs there is too expensive. Emulators do it because it's "cheap" in software, but hardware-wise it's not practical.
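
For example, a 32-byte (256-bit) bitmap is enough to classify any byte value as prefix / not-prefix with one load and one bit test. A small C sketch of the idea (the listed bytes are the x86 legacy prefixes; this is just an illustration, not any decoder's actual table):

Code:

#include <stdint.h>
#include <stdio.h>

/* 256-bit bitmap packed into 32 bytes: bit set => byte value is an
 * x86 legacy prefix (segment overrides, operand/address size, LOCK, REP). */
static uint8_t prefix_map[32];

static void set_prefix(uint8_t b) { prefix_map[b >> 3] |= (uint8_t)(1u << (b & 7)); }
static int  is_prefix(uint8_t b)  { return (prefix_map[b >> 3] >> (b & 7)) & 1; }

static void init_prefix_map(void)
{
    static const uint8_t legacy[] = {
        0x26, 0x2E, 0x36, 0x3E, 0x64, 0x65, /* segment overrides    */
        0x66, 0x67,                         /* operand/address size */
        0xF0, 0xF2, 0xF3                    /* LOCK, REPNE, REP     */
    };
    for (unsigned i = 0; i < sizeof legacy; i++)
        set_prefix(legacy[i]);
}

int main(void)
{
    init_prefix_map();
    printf("%d %d\n", is_prefix(0x66), is_prefix(0x90)); /* prints: 1 0 */
    return 0;
}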

It would be interesting to know how Gunnar implemented it in his 68080.
Quote:
and maintained competitive high performance without breaking down instructions into uops and requiring extra resources for a muop cache. On top of that, the 68k has better code density and I believe it could maintain much of the code density while moving to 64 bit unlike x86-64.

Well, the only 64-bit 68k extension is the 68080, and I expect very poor code density if an application is compiled (or assembled) using 64-bit data & address registers, since a 16-bit prefix is used in this case; some improvements can come as well, because ternary operations are enabled (AFAIK).
Quote:
cdimauro Quote:

eDRAM is too expensive for a cheap SoC. Better to get rid of it.


That's entirely possible, but eDRAM has been used for low cost console SoCs where cost is more important than for higher margin desktop SoCs. Moving memory onto the SoC has a performance and power advantage, reduces SoC pins, which are far more expensive than transistors, and allows the size of boards to be reduced. Do you have knowledge of actual licensing costs?

No. But I doubt that they could be compatible with a $1-3 SoC.
Quote:
cdimauro Quote:

See above on this: L0+L1 should be enough for this kind of SoC.


As above, I doubt a uop L0I cache is necessary. The write buffer can act like a mini L0 cache and the 68060 has room for improvement here.

The L0 is very important for increasing performance while greatly reducing power draw (the I$ is very energy-hungry, and with an L0 it is turned off most of the time; Intel reports more than 80% of the time in its statistics).

But it requires a lot of transistors, of course...
Quote:
If using cheap commodity memory, then a L2 cache would certainly be beneficial and practically required for running nearly modern programs.

An L2 cache is too expensive in terms of transistor budget, raising the SoC cost.
Quote:
cdimauro Quote:

No, please. Who's using dual ported VRAM nowadays? Just use regular DDR memory, which already has enough bandwidth.


Dual porting SRAM is simple and cheap for professionals. The Amiga chipset certainly works with interleaving of memory bandwidth but there still may be some loss of efficiency that could be regained with dual porting of SRAM.

Only if you plan to keep the chip-ram vs fast-ram partitioning, thus allowing both the chipset and the CPU to access the memory at the same time.

But other than that it would be a waste of bandwidth. Unless you want to allow each of those devices to access memory twice at a time (using both "slots" when the other device isn't using its own), but this further complicates the device implementations.
Quote:
Certainly 2MiB of on SoC chip memory SRAM is no problem and would give blazing performance but modernizing also means significantly increasing chip memory if not making all memory chip memory with minimal performance loss so there is a good chance this is not practical too.

Not only that: 2MiB of SRAM on chip is an enormous quantity, which goes well beyond what's embedded on the mentioned RP2040: you can forget a low cost SoC then...
Quote:
cdimauro Quote:

Please get rid of this Stone Age technology. Modern memory already has more than enough bandwidth to serve the small requests of a legacy/retro system.

Using SDRAM only for this reason will only cripple the whole platform. OR complicate it, if you want to use both SDRAM and DDR at the same time.

So, DDR should be the only memory system. Unless there's a clear proof that DDR causes problems.


Modern DDR memory is also SDRAM but DDR (Double Data Rate) instead of SDR (Single Data Rate).

Yes, but with DDR5 we have reached 32 times the data rate of SDR SDRAM. So, quite far away even from the first DDR memory...
Quote:
cdimauro Quote:

Nintendo, Sony and SEGA with their respective "mini" versions of their old consoles.

It should be enough, but if it's not you can take a look also at the C64 Mini and Amiga Mini.


Most of these were quick and cheap eye candy money grabs that have inferior game performance compared to the much older original hardware.

Nevertheless, they used emulation, which actually looks like the only solution for big companies which propose "modernized" versions of their old products.
Quote:
cdimauro Quote:

Do you understand that the current hardware-based systems use FPGAs (so, not ASICs) and they are utterly expensive?

How much does a MiSTer or a Vampire V4 cost (let's talk only about the standalone versions, which are "self-contained", AKA SBCs)?

How much does a low-end MiniPC cost (because nowadays that's enough to cover most retro systems)?

Actually the difference between FPGAs and PCs (or RPis) is about how you use the transistor budget. But both require A LOT of transistors.

At least PCs are much more flexible, and developing emulators takes a lot LESS effort than HDL. That's not even counting the performance they can get if speed is important (and here using a JIT makes at least an order of magnitude of difference).


Cheap FPGAs are cheap and are still very good for retro chipset simulation as these are parallel workloads. Cheap CPUs are good at sequential workloads and do reasonably well with CPU emulation of old CPUs as long as the code and data is in their limited caches. FPGAs large enough to do retro CPU simulation aren't more expensive than large CPUs which give improved emulation. Compare the cost of your x86-64 CPU to the MiSTer FPGA cost. The whole MiSTer base board only cost $140 at one time and that not only included the FPGA but also an ARM SoC, 1GiB of memory, etc.

I did a search and found that it's much more expensive:
https://ultimatemister.com/product-category/ultimate-mister/
https://misteraddons.com/products/mister-bundles
https://amigastore.eu/en/866-mister-mini6-128mb-fpga-computer.html

The lowest cost is $425. And all of them are bundled with only 128MB of SDRAM.

If you have other sources for buying a standalone & complete MiSTer, please let me know.

Regarding a MiniPC, here's one of the many (and with good enough quality): https://store.chuwi.com/products/chuwi-herobox-pro
Just $180, and it also includes Windows 10.

It uses Intel's latest low-power Tremont microarchitecture: https://en.wikipedia.org/wiki/Tremont_(microarchitecture) which has very good performance (definitely much better than my Silvermont MiniPC, which was already enough for Amiga AGA emulation using Blitter immediate in WinUAE).
Quote:
The MiSTer FPGA can't reach anywhere near the CPU performance of your CPU but I believe it still more accurately simulates most retro CPUs than using your CPU emulation. At least on my mid generation Intel Core i5 using Windows, it has inconsistent performance and sometimes noticeable latency.

Are you using a FreeSync or GSync monitor? This makes a HUGE difference.
Quote:
I'm sure your higher performance computer is better but all it takes is for something to fall out of cache to set you back more than a decade in performance. The differential between CPU performance and memory performance has only grown which is why modern CPUs have ridiculous amounts of caches but the Achilles heel remains.

But here I'm not talking about my latest monster PC, which of course is overkill for retro emulation.

An Atom/Celeron like the one I reported above should be enough. If not, other solutions could be found, even based on an i7, but still way below the $425 required for a complete MiSTer.
Quote:
cdimauro Quote:

Because of a psychological reason: they need to "touch" the hardware.

Something which is also happening with the mentioned "Mini" retro systems, even if all of them use an emulator inside!

People touch the hardware --> Oooh, it's cool: it's like the original platform!

You see? Psychology...


I consider the Modern Vintage Gamer to be not only credible but an authority on retro gaming who owns many of the original systems. He noticed a difference and admitted he was wrong to think emulation was better. In my experience, people usually don't admit they were wrong unless they were wrong.

Then he should start using the technologies that I've reported when testing PCs for retro gaming, if he wants to remain credible.
Quote:
cdimauro Quote:

In both videos the author says that FPGA systems do cycle-accurate implementations, letting people think that this is not possible with emulators.

This is clearly false and totally misleading. As we know, most emulators have provided cycle-accurate emulation as well, for years. And actually better than FPGAs, because emulators are the most accurate.


You have a valid argument here. Even FPGA simulation is not necessarily cycle exact, just more accurate. Retro CPU emulation can be cycle exact especially if you would throw away your jitter causing OS and your caches were large enough for all retro gaming purposes, which they may be. Chipset emulation is more difficult especially worst case depending on the complexity of the chipset which is again about jitter.

PCs have multiple cores nowadays, and the process(es) used by an emulator can be bound to specific cores and run with very high priority to reduce those problems to almost zero.

That's what well-made AAA games do (including marking their memory pages as non-pageable so they are never moved to the swap file) to ensure they have almost full control of the system.

As you can see, there are already solutions to make the emulation experience very low-latency and almost jitter-free: it's just a matter of using / implementing them.
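
A minimal Win32 sketch of those techniques (real APIs, but the core mask, priority and buffer size are placeholder choices):

Code:

#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Pin the emulation thread to one core so it is never migrated. */
    if (!SetThreadAffinityMask(GetCurrentThread(), 1)) /* core 0 */
        fprintf(stderr, "affinity failed: %lu\n", GetLastError());

    /* Raise scheduling priority so other processes rarely preempt us. */
    SetPriorityClass(GetCurrentProcess(), HIGH_PRIORITY_CLASS);
    SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_HIGHEST);

    /* Lock the hot emulator state in RAM so it is never paged out
     * (working-set limits may need raising for large buffers). */
    SIZE_T size  = 16 * 1024 * 1024; /* placeholder size */
    void  *state = VirtualAlloc(NULL, size, MEM_COMMIT | MEM_RESERVE,
                                PAGE_READWRITE);
    if (state && !VirtualLock(state, size))
        fprintf(stderr, "VirtualLock failed: %lu\n", GetLastError());

    /* ... run the emulation main loop here ... */
    return 0;
}
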
Quote:
cdimauro Quote:

Aside from this, today latency can be greatly reduced with GSync or FreeSync displays. In recent years almost all displays (TVs, monitors) have implemented FreeSync, and some GSync.

If you have no such display or you need something better, then you have Lagless VSync: https://blurbusters.com/blur-busters-lagless-raster-follower-algorithm-for-emulator-developers/
WinUAE already implemented it, but other emulators are adopting it.

That should be enough. But one more thing about 3D systems: emulation can do much better than the original systems even in terms of latency. For example: https://www.neogaf.com/threads/dolphin-emulating-wii-and-gamecube-games.395121/page-310#post-246464070
And, in general, 3D systems are not that easy to implement using FPGAs. Plus they require A LOT of resources (e.g. LEs), and they might require higher clocks. For both reasons FPGAs aren't usable.


Part of the problem is that modern CPU and GPU latency and jitter have grown. Turning up the clock and adding more caches, necessitating multi-level caches, actually makes this problem worse.

See above for this.
Quote:
That little Raspberry Pico beats modern CPUs in this regard. It can probably emulate a 6502 and maybe even a 68000 with similar accuracy to a FPGA. Hitting the hardware with no OS isn't much easier than HDL programming though. A system like the Amiga was the next best thing with a very thin and responsive OS on small footprint DMA driven hardware.

Indeed. That's why I see AROS as an ideal solution for retrogaming. Unfortunately it lacks many emulators and especially WinUAE.
Quote:
The 68k Amiga was good at emulation before it became extinct and emulated instead, often by fatter systems. It's kind of like a big truck coming to pick up an efficient little motorcycle to take it places. All that matters is how much power and payload the big truck has, while little attention is paid to how inefficient it is to haul around the efficient little motorcycle. Yeah, the motorcycle needs repairs and some modern improvements, but it doesn't need a big truck to carry it around.

Well, the main problem is that there's no little motorcycle available anymore. Plus, the big truck is quite cheap...

agami 
Re: Commodore Amiga Global Alliance
Posted on 6-Sep-2022 7:49:24
#125 ]
Super Member
Joined: 30-Jun-2008
Posts: 1648
From: Melbourne, Australia

@ferrels

Quote:
Believing that the Amiga will make a comeback is as ludicrous as believing that landline phones will make a comeback.

Poor analogy.
A proverbial Amiga comeback would not be the literal reintroduction of the computer system from the late '80s and early '90s.

In the same way, Apple and the Mac had a comeback with the launch of a then-modern incarnation of the Mac, capturing the essence of the original but contextualized for the late '90s with the Bondi Blue iMac.

Also, Mac OS experienced a comeback with Mac OS X. It took an operating environment which was seeing a reduction in support from application vendors to one that has seen continued growth in vendor support over the years, becoming the second mainstream platform.

Unlike the recent growth in interest in valve-based amps, vinyl records and turntables, a successful Amiga comeback will require reinvention, resulting in a NEW system that fits a contemporary use case and user context, embodies some of the essence of the original Commodore Amiga, and pays homage with a re-implementation of some of its more familiar aspects.

_________________
All the way, with 68k

kolla 
Re: Commodore Amiga Global Alliance
Posted on 6-Sep-2022 15:35:35
#126 ]
Elite Member
Joined: 21-Aug-2003
Posts: 2880
From: Trondheim, Norway

@cdimauro

Exactly why are you complaining about “only 128MB SDRAM”?

_________________
B5D6A1D019D5D45BCC56F4782AC220D8B3E2A6CC

Karlos 
Re: Commodore Amiga Global Alliance
Posted on 6-Sep-2022 15:57:47
#127 ]
Elite Member
Joined: 24-Aug-2003
Posts: 4402
From: As-sassin-aaate! As-sassin-aaate! Ooh! We forgot the ammunition!

@NutsAboutAmiga

Quote:

NutsAboutAmiga wrote:
@pixie

Quote:
PPC is long dead, why not embrace 68k as the defacto standard?


Naive PPC code will always be faster.


ITYM "native".

Let me let you into a secret. JIT is native. I don't know why people think it's an emulation. It's not. It's just a very late stage compilation step.

It's also possible for JIT to be faster in execution than statically compiled "native" code. You didn't read that wrong. It can be faster. This happens as the result of compiling only the paths that are encountered in situ when code is running and not having tons of conditional code that isn't taken.
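
A toy C sketch of what that late-stage step looks like structurally: guest code is translated into a native block the first time its address is executed and simply reused afterwards, so code that never runs is never compiled (translate_block() is a placeholder for the real code generator, and a real cache would also store the guest PC to detect slot collisions):

Code:

#include <stdint.h>
#include <stdio.h>

typedef uint32_t (*host_block)(uint32_t pc);   /* returns next guest PC */

#define CACHE_SIZE 1024
static host_block cache[CACHE_SIZE];           /* indexed by hashed guest PC */

static uint32_t stub_block(uint32_t pc) { return pc + 4; }  /* canned "native" code */

static host_block translate_block(uint32_t pc)
{
    printf("translating block at %08x\n", (unsigned)pc);   /* once per hot path */
    return stub_block;
}

static uint32_t run_block(uint32_t pc)
{
    host_block *slot = &cache[(pc >> 2) % CACHE_SIZE];
    if (!*slot)                        /* first visit: compile now...           */
        *slot = translate_block(pc);
    return (*slot)(pc);                /* ...every later visit runs native code */
}

int main(void)
{
    run_block(0x1000);   /* translated on first execution             */
    run_block(0x1000);   /* cache hit: runs the native block directly */
    run_block(0x2000);   /* a different path, translated only now     */
    return 0;
}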

_________________
Doing stupid things for fun...

amigang 
Re: Commodore Amiga Global Alliance
Posted on 6-Sep-2022 16:04:39
#128 ]
Elite Member
Joined: 12-Jan-2005
Posts: 2020
From: Cheshire, England

@Karlos

This is why I would like a version of AmigaOS that's more aware it's running on an emulated device, so that some parts of the system could run even faster, RAM limits would be gone, and maybe more Rabbit Hole features could be brought over. Especially in the case of AmigaOS 4 emulation.

Plus the topic's gone a bit off the main thread. But never mind, it's still an interesting discussion.

Last edited by amigang on 06-Sep-2022 at 04:06 PM.

_________________
AmigaNG, YouTube, LeaveReality Studio

NutsAboutAmiga 
Re: Commodore Amiga Global Alliance
Posted on 6-Sep-2022 16:19:32
#129 ]
Elite Member
Joined: 9-Jun-2004
Posts: 12817
From: Norway

@Karlos

True, but it's not an exact 1-to-1 correlation, and 68K registers don't necessarily get mapped directly to equivalent PowerPC registers. ROR.L on the 680x0 is special: it moves bits from the high end into the low bits. I don't believe there is an equivalent PowerPC rotate instruction that does that, so it's extra ANDs, shifts and ORs that can easily be added there.
Also, it might not pick the most optimal instructions, for example AltiVec. It might be easier to JIT AltiVec into standard instructions than to convert standard instructions into AltiVec.

The 680x0 has the SWAP.W instruction; it basically doubles the data registers, as many instructions can operate on the lower 16 bits. PowerPC doesn't need tricks like that since it has lots of GP registers, so SWAP.W doesn't exist on PowerPC. Hand optimizing, you know when you need to move the lower bits into the high bits, or when you can be sloppy with the conversion; a JIT might not have the time to do several passes to determine when it can discard part of the operation (like when a smart compiler removes dead code).
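
For illustration, the C-level equivalents a JIT would have to emit when the host lacks matching instructions (a generic sketch of the 68k semantics, not any particular JIT's output):

Code:

#include <stdint.h>

/* 68k ROR.L semantics: bits shifted out of the low end re-enter at the top.
 * Without a native rotate-right this becomes two shifts and an OR. */
static uint32_t ror32(uint32_t x, unsigned n)
{
    n &= 31;
    return n ? (x >> n) | (x << (32 - n)) : x;
}

/* 68k SWAP.W: exchange the two 16-bit halves of a data register,
 * the trick used to treat one register as two 16-bit values. */
static uint32_t swap_w(uint32_t d)
{
    return (d >> 16) | (d << 16);
}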

PowerPC uses a double precision FPU, while the 680x0 uses a single precision FPU. What this means is that what's most optimal on one is not most optimal on the other.

In addition, a JIT has a tendency to keep only a small part of the code translated, and to discard parts that are not used when the JIT cache gets full.

Compilation is usually CPU intensive; it might be a small spike, but it can slow things down horribly if it happens at the wrong places very often.

Petunia's JIT on a 233MHz 604e was able to emulate a 68060 @ 50MHz; this is why Hyperion figured there was no point in multi-processing (back then there was no SMP, so it meant waiting for the other CPU), besides which context switching between two CPUs will kill performance/cache. But the point is that a 233MHz 604e does not become a 68060 @ 233MHz; there is a lot of loss. Yes, it does gain some speed from native APIs and addons.

Last edited by NutsAboutAmiga on 06-Sep-2022 at 06:22 PM.
Last edited by NutsAboutAmiga on 06-Sep-2022 at 04:54 PM.
Last edited by NutsAboutAmiga on 06-Sep-2022 at 04:53 PM.
Last edited by NutsAboutAmiga on 06-Sep-2022 at 04:51 PM.
Last edited by NutsAboutAmiga on 06-Sep-2022 at 04:31 PM.
Last edited by NutsAboutAmiga on 06-Sep-2022 at 04:30 PM.
Last edited by NutsAboutAmiga on 06-Sep-2022 at 04:20 PM.

_________________
http://lifeofliveforit.blogspot.no/
Facebook::LiveForIt Software for AmigaOS

Karlos 
Re: Commodore Amiga Global Alliance
Posted on 6-Sep-2022 17:34:50
#130 ]
Elite Member
Joined: 24-Aug-2003
Posts: 4402
From: As-sassin-aaate! As-sassin-aaate! Ooh! We forgot the ammunition!

@NutsAboutAmiga

None of this matters. You don't care about the registers of your emulated machine, beyond any special atomicity they may have in some operations. To the JIT your 68000 registers are just a super hot set of global variables to be register allocated natively as needed.
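
For the sake of illustration, the guest state a JIT works from might be nothing more than this (a hypothetical sketch, not any emulator's actual definition); the register allocator then decides which fields live in host registers at any point, exactly as it would for heavily used C globals:

Code:

#include <stdint.h>

/* Hypothetical guest-CPU context for a 68000-class machine. */
typedef struct {
    uint32_t d[8];   /* data registers D0-D7    */
    uint32_t a[8];   /* address registers A0-A7 */
    uint32_t pc;     /* guest program counter   */
    uint16_t sr;     /* status register / flags */
} M68KContext;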

_________________
Doing stupid things for fun...

OlafS25 
Re: Commodore Amiga Global Alliance
Posted on 6-Sep-2022 21:41:07
#131 ]
Elite Member
Joined: 12-May-2010
Posts: 6338
From: Unknown

@agami

I think so too

"Amiga" 2022 would need to be very different from the past. Custom Hardware that is competitive in todays terms is impossible so basically you would have a custom mainboard with perhaps some special developed components, all the rest (most of it) would consist of standard components, expecially the chips for graphics and sounds. And the OS would need to be very different, secure, full memory protection, 64bit and support SMP. 68k emulation would have been already dropped and A500 only a distant nostalgic memory. We will never know of course what would have been

I personally look forward to creating something combining Linux and AROS components. That is my hope for 2023.

Last edited by OlafS25 on 06-Sep-2022 at 09:41 PM.

matthey 
Re: Commodore Amiga Global Alliance
Posted on 6-Sep-2022 21:56:06
#132 ]
Super Member
Joined: 14-Mar-2007
Posts: 1999
From: Kansas

cdimauro Quote:

I know, but usually it's very low: 2-3 cycles.


L1 cache access latency is extremely important for performance. The L1 latency matters far more than later multi-level caches. L1 caches use SRAM which allow single cycle access and size increase is the biggest reason the latency has increased.

cycle latency | L1 cache sizes | CPU
      1       | 8kiB           | Pentium
      2       | 16kiB          | Pentium Pro - Pentium 3
      4       | 32kiB          | Pentium 4 - Intel Core i7

I believe they are pipelining the cache accesses which has advantages and disadvantages. I would have thought faster transistor switching and shorter lines from chip process improvements would have improved timing to allow a single cycle latency 16kiB access or at least a 3 cycle 32kiB access but it is common that set associative ways have been increased which also increases access latency. I expect the biggest factor behind increased ways is supporting multi-threading which increases address aliasing but there may be other factors too.

cdimauro Quote:

4 times looks too much to me: do you have some data about it?


Please recall the RISC-V research I have posted several times before.

The RISC-V Compressed Instruction Set Manual Quote:

The philosophy of RVC is to reduce code size for embedded applications and to improve performance and energy-efficiency for all applications due to fewer misses in the instruction cache. Waterman shows that RVC fetches 25%-30% fewer instruction bits, which reduces instruction cache misses by 20%-25%, or roughly the same performance impact as doubling the instruction cache size.


Assuming the 68k has 50% better code density than PPC, a 68k 8kiB I cache would have a miss rate similar to a PPC 32kiB I cache. This is an effective quadrupling of the code in the 68k I cache compared to a similar size PPC I cache. The PPC I cache latency with a 32kiB I cache of 4 cycles just happens to be quadruple the latency of a 68k 8kiB I cache of 1 cycle using the x86 latencies above, which would be similar for all architectures. It's not completely free, as the CPU pipeline length may be extended with more decoding stages, which is why decoding efficiency is important. I don't know the best way to measure decoding efficiency, but x86(-64) is the only variable length encoded architecture I know of that has consistently tried to avoid decoding stages with uop caches, trace caches and instruction delimitation markers in instruction caches. Otherwise, the code compression seems worthwhile, kind of like how a simple on-the-fly decompressor can increase bandwidth when reading from slow I/O with little or no increase in latency.

cdimauro Quote:

No, this isn't a trace cache (like the one in the Pentium 4), but the usual instruction cache tag bits, which are used to store additional, useful metadata. In this case they just add one bit to mark where an instruction starts, but more bits might be used to flag other information (e.g. branches).

x86/x64 OoO processors used a significant number of tag bits in the instruction cache for this purpose. With the introduction of L0 caches (like the trace cache) I expect that they aren't used anymore (or are at least significantly reduced).


It is more resource usage to avoid the x86(-64) decoding tax. It is smart of Intel to move the decoding tax from a performance and power impediment to a resource hog because transistors are cheap after all. Well, those extra transistors still use more power limiting how low x86(-64) can scale but RISC still can't match CISC performance. Too bad there is not a more efficient CISC architecture competing. I would look for one that has competitive performance with x86, better code density, less decoding tax and better power efficiency.

cdimauro Quote:

It depends on how the 16-bit encoding is organized. The one for the 68k family is quite complicated, with a lot of exceptions to be taken into account.


A 16 bit variable length encoding is optimal as a compromise for alignment vs code density. Byte encodings have alignment penalties vs 16 bit encodings but can have better code density as exhibited by the Cast BA2x processors but x86 is not optimal and x86-64 is much worse. A 32 bit variable length encoding has better alignment characteristics but horrible code density. Most variable length CPU encodings are 16 bit today (68k, ARM Thumb, RISC-V C) and even intermediate byte codes often changed to 16 bit encodings (Java byte code translation to Dalvik 16 bit encoding for example) while sometimes still called byte codes.

cdimauro Quote:

Quite the opposite: an 8-bit encoding could be easier to decode. x86/x64 is a complicated ISA, but with a simple 32-byte LUT you can quickly and easily catch its prefixes, for example. Similarly, you can see if instructions have memory references. And so on.

You can't do that with the 68k, because using LUTs there is too expensive. Emulators do it because it's "cheap" in software, but hardware-wise it's not practical.

It would be interesting to know how Gunnar implemented it in his 68080.


Decoding on the 68k is likely more challenging in an FPGA.

https://www.eecg.utoronto.ca/~jayar/pubs/kuon/kuonfpga06.pdf Quote:

More recently, a detailed comparison of FPGA and ASIC implementations was performed by Zuchowski et al. They found that the delay of an FPGA lookup table (LUT) was approximately 12 to 14 times the delay of an ASIC gate. Their work found that this ratio has remained relatively constant across CMOS process generations from 0.25 μm to 90 nm. ASIC gate density was found to be approximately 45 times greater than that possible in FPGAs when measured in terms of kilo-gates per square micron. Finally, the dynamic power consumption of a LUT was found to be over 500 times greater than the power of an ASIC gate. Both the density and the power consumption exhibited variability across process generations but the cause of such variability was unclear. The main issue with this work is that it also depends on the number of gates that can be implemented by a LUT. In our work, we remove this issue by instead focusing on the area, speed and power consumption of application circuits.


Many decoding choices are bad for FPGAs but not ASICs. In the case of the Apollo core, the FPGA disadvantage is partially offset by a fairly modern fab process (28nm) and the relatively low clocked Apollo core in the Cyclone V FPGA. The Apollo core was still able to achieve a reduction in EA calculation time from the similarly clocked 68060 by reducing the EA calc time of (bd,An,Xi*SF) and (bd,PC,Xi*SF) using a full extension word format from 1 cycle to 0 cycles, effectively making all addressing modes except double memory indirect modes free.

The full extension word format has the most choices and is therefore the most difficult to decode, especially in an FPGA. However, it is not commonly used with 32 bit addressing so has minimal performance impact even with the extra cycle of EA calc on the 68060. With 64 bit addressing, these addressing modes are more useful and reworking and reencoding them would be beneficial anyway. Removing double indirect modes in a 64 bit mode would be a simplification and would reduce max instruction length, simplifying decoding. While max instruction length is long for the 68k, the common and average instruction lengths are shorter than x86 and much shorter than x86-64, which grew due to inadequate encoding space (more of the x86 encoding space should have been reencoded for x86-64).

The 68k has more free encoding space to begin with; a separate 64 bit mode frees up more encoding space and is optimal with 2 bits for byte (8b), word (16b), long (32b), quad (64b) integer datatype sizes instead of using a prefix (likely justified to also add more registers for Gunnar's infatuation with increasing registers, despite Gunnar wanting to use 2 bits for the size field but finding it incompatible without a separate 64 bit mode).

cdimauro Quote:

Well, the only 64-bit 68k extension is the 68080, and I expect very poor code density if an application is compiled (or assembled) using 64-bit data & address registers, since a 16-bit prefix is used in this case; some improvements can come as well, because ternary operations are enabled (AFAIK).


Yes, I would expect Gunnar sacrificed 64 bit code density but maybe not worse than x86-64. Most new features or accessing the new registers likely requires the prefix, not that I've studied it as it is too painful for me. Really not much new as far as addressing for 64 bit either. Better PC relative addressing would be very valuable like (d32,PC) which can be very cheap and even (d64,PC) through the full extended word format would simplify compilers even though rarely used. We really needed AmigaOS development with it though for something like fully PC relative 68k64 libraries without wasting base relative pointers.

cdimauro Quote:

No. But I doubt that they could be compatible with a $1-3 SoC.


Didn't I say $3-$7 SoC ASIC was a target? That gets a huge transistor budget. More isn't necessarily a problem other than the amount of development time to fill it with quality enhancements. It would start to get expensive for embedded and hobby use and it would be bad to move up into the crowded SoC ASIC competition where ARM+Linux starts to work well with standard distros.

cdimauro Quote:

An L2 cache is too expensive in terms of transistor budget, raising the SoC cost.


It's surprising how many transistors are available for $1. Enough SRAM that can be configured as either a L2 or as main memory for embedded/hobby/testing is very valuable even if it adds a few dollars to the cost.

cdimauro Quote:

Yes, but with DDR5 we have reached 32 times the data rate of SDR SDRAM. So, quite far away even from the first DDR memory...


Supporting the newest DDR memory available makes sense for a new ASIC with external memory as older DDR types become EOL and eventually more expensive and difficult to source. It may be nice if the on chip memory controller could support LPDDR memory as well for lower power embedded use but I don't know if this is possible. The Amiga standard spec could call for DDR over LPDDR though.

cdimauro Quote:

I did a search and found that it's much more expensive:
https://ultimatemister.com/product-category/ultimate-mister/
https://misteraddons.com/products/mister-bundles
https://amigastore.eu/en/866-mister-mini6-128mb-fpga-computer.html

The lowest cost is $425. And all of them are bundled with only 128MB of SDRAM.

If you have other sources for buying a standalone & complete MiSTer, please let me know.


Trust me, the price of the base board was $140 not long before the Covid crisis and supply disruptions. The base board is the main constraint and out of stock almost everywhere. Eventually, the base board should be restocked although it will likely be $150+ which is reasonable considering current inflation. The producer may have caught on to the fact that the MiSTer base board has become very popular too. The price of assembled MiSTer units has increased (nearly doubled?) but are still available. MiSTer stores which sell them likely bought up base boards when they discovered them low in stock and assembled them with stocks of optional boards so they weren't left with dead inventory of optional boards while the base boards were out of stock.

There are lots of MiSTer specific stores as well as sales on Amazon, Ebay, Walmart, Newegg. It really was big business compared to the Amiga, at least before base boards were out of stock.

cdimauro Quote:

Regarding a MiniPC, here's one of the many (and with good enough quality): https://store.chuwi.com/products/chuwi-herobox-pro
Just $180, and it also includes Windows 10.

It uses Intel's latest low-power Tremont microarchitecture: https://en.wikipedia.org/wiki/Tremont_(microarchitecture) which has very good performance (definitely much better than my Silvermont MiniPC, which was already enough for Amiga AGA emulation using Blitter immediate in WinUAE).


Too much x86 tax on resources. A Pi Zero 2 W at $15 should be able to do decent emulation, but in-order RISC cores have notoriously horrible performance. I sure wish we had more efficient in-order CISC CPUs that could scale down further than x86-64, like the 68060 using a modern chip process.

cdimauro Quote:

Well, the main problem is that there's no little motorcycle available anymore. Plus, the big truck is quite cheap...


Nostalgia types can climb up on the truck and then onto the motorcycle and imagine the good old days.

Last edited by matthey on 06-Sep-2022 at 10:15 PM.
Last edited by matthey on 06-Sep-2022 at 10:06 PM.

OneTimer1 
Re: Commodore Amiga Global Alliance
Posted on 6-Sep-2022 22:36:15
#133 ]
Cult Member
Joined: 3-Aug-2015
Posts: 973
From: Unknown

@Karlos

Quote:


Let me let you into a secret. JIT is native. I don't know why people think it's an emulation.


JIT compilation is also used in some emulators, in order to translate machine code from one CPU architecture to another.
https://en.wikipedia.org/wiki/Just-in-time_compilation

I don't care how much better JIT is than an interpreter; in the end your code will run slower than code compiled directly for the CPU it's running on.

Well, 68k might be some kind of standard, but it is not the future.
The AmigaOS API couldn't be the future either; maybe some hosted AROS-like OS on Linux/BSD would be enough for the Amiga feeling, together with a built-in UAE for 68k/PPC stuff.

Well, most people here will only accept it when it has the Amiga sticker on it, so it will never happen.

Last edited by OneTimer1 on 06-Sep-2022 at 10:36 PM.

OlafS25 
Re: Commodore Amiga Global Alliance
Posted on 6-Sep-2022 22:40:34
#134 ]
Elite Member
Joined: 12-May-2010
Posts: 6338
From: Unknown

@OneTimer1

Better is AxRuntime from Deadwood...

AROS becomes Linux...

https://www.axrt.org/

And Scalos will be a Unix desktop.

Last edited by OlafS25 on 06-Sep-2022 at 10:41 PM.

matthey 
Re: Commodore Amiga Global Alliance
Posted on 6-Sep-2022 23:18:48
#135 ]
Super Member
Joined: 14-Mar-2007
Posts: 1999
From: Kansas

NutsAboutAmiga Quote:

True, but it's not an exact 1-to-1 correlation, and 68K registers don't necessarily get mapped directly to equivalent PowerPC registers. ROR.L on the 680x0 is special: it moves bits from the high end into the low bits. I don't believe there is an equivalent PowerPC rotate instruction that does that, so it's extra ANDs, shifts and ORs that can easily be added there.
Also, it might not pick the most optimal instructions, for example AltiVec. It might be easier to JIT AltiVec into standard instructions than to convert standard instructions into AltiVec.


Rotate isn't exactly a common instruction and isn't likely to make much difference in emulation performance. PPC's 32 GP registers aren't convenient for holding the 68k's 16 GP registers because of PPC register partitioning between caller save, callee save and system specific (global, base, zero, stack, reserved etc.). It's funny that the 68k's 16 GP registers were able to hold the 8 x86 registers in DOSBox though (in 8 data registers).

NutsAboutAmiga Quote:

The 680x0 has the SWAP.W instruction; it basically doubles the data registers, as many instructions can operate on the lower 16 bits. PowerPC doesn't need tricks like that since it has lots of GP registers, so SWAP.W doesn't exist on PowerPC. Hand optimizing, you know when you need to move the lower bits into the high bits, or when you can be sloppy with the conversion; a JIT might not have the time to do several passes to determine when it can discard part of the operation (like when a smart compiler removes dead code).


It was sometimes worthwhile to use SWAP on the 68000 to effectively double the number of registers because memory accesses were so expensive. As memory accesses became cheaper and cached, this trick was less of an advantage. When pipelining and result forwarding/bypassing came about, this trick could also cause partial register stalls and should generally be avoided. RISC needs more GP registers because it needs free registers for loads, which for performance need to be unrolled to avoid load-to-use penalties, which needs even more registers (not good for code density either, so now RISC has an instruction fetch bottleneck).

NutsAboutAmiga Quote:

PowerPC uses a double precision FPU, while the 680x0 uses a single precision FPU. What this means is that what's most optimal on one is not most optimal on the other.


The 68k FPU has always been extended precision. All operations are performed using extended precision by default but the global rounding can be changed to double precision or single precision (or 68040+ FDop/FSop instructions) can be used to give the same results as a double or single precision FPU. The 68k FPU can easily emulate the PPC FPU but the PPC FPU can't emulate the 68k FPU.
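
A small illustration of why the direction matters, assuming a compiler where long double is the 80-bit extended format (e.g. GCC on x86, standing in for the 68881/68882; where long double is plain double, both lines print 0):

Code:

#include <stdio.h>

int main(void)
{
    /* 10^18 + 1 fits exactly in a 64-bit extended-precision mantissa
     * but not in double's 53 bits, so the +1 is lost after rounding. */
    long double xe = 1e18L + 1.0L;
    double      xd = 1e18  + 1.0;

    printf("extended: %.0Lf\n", xe - 1e18L);  /* 1 */
    printf("double:   %.0f\n",  xd - 1e18);   /* 0 */
    return 0;
}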

NutsAboutAmiga Quote:

In addition, a JIT has a tendency to keep only a small part of the code translated, and to discard parts that are not used when the JIT cache gets full.


The JIT cache size or number of cache pages can usually be customized. If more pages are not kept, it is usually because it is not worthwhile.

NutsAboutAmiga Quote:

Compilation is usually CPU intensive; it might be a small spike, but it can slow things down horribly if it happens at the wrong places very often.


It's usually no more CPU intensive than executing other code although it is extra work that needs to be done at execution time rather than compile time.

NutsAboutAmiga Quote:

Petunia's JIT on a 233MHz 604e was able to emulate a 68060 @ 50MHz; this is why Hyperion figured there was no point in multi-processing (back then there was no SMP, so it meant waiting for the other CPU), besides which context switching between two CPUs will kill performance/cache. But the point is that a 233MHz 604e does not become a 68060 @ 233MHz; there is a lot of loss. Yes, it does gain some speed from native APIs and addons.


Petunia must not be very efficient then, because...

Karlos Quote:

It's also possible for JIT to be faster in execution than statically compiled "native" code.

MEGA_RJ_MICAL 
Re: Commodore Amiga Global Alliance
Posted on 7-Sep-2022 0:32:10
#136 ]
Super Member
Joined: 13-Dec-2019
Posts: 1200
From: AMIGAWORLD.NET WAS ORIGINALLY FOUNDED BY DAVID DOYLE

amigang:

Quote:

So at the recent NWAG meeting David Pleasance (former Commodore UK boss) proposed a new project.

Commodore Amiga Global Alliance (CAGA)

The main goal would be a central website for all Amiga communities, to try to bring them together more and raise the profile of groups, projects and meetings that happen in the community.



matthey:

Quote:

L1 cache access latency is extremely important for performance. The L1 latency matters far more than later multi-level caches. L1 caches use SRAM which allow single cycle access and size increase is the biggest reason the latency has increased.



Ah, friends, friends,
you truly are a treasure,
an endangered species.


/mega

_________________
I HAVE ABS OF STEEL
--
CAN YOU SEE ME? CAN YOU HEAR ME? OK FOR WORK

Hans 
Re: Commodore Amiga Global Alliance
Posted on 7-Sep-2022 2:30:43
#137 ]
Elite Member
Joined: 27-Dec-2003
Posts: 5067
From: New Zealand

@matthey

Quote:
You still don't get it. It only takes a few times the development and production cost to bring a competitive mass produced Raspberry Pi like Amiga board to market with a 68k+chipset ASIC while the 68k+chipset market size is likely at least 10 times that of the PPC Amiga market. At the low price point which it could be produced (~$3-$7 for the SoC ASIC and ~$40 for the board) demand would go up and the market could easily be 100 times or 1000 times the current Amiga market. If I had anything to do with it, this would not be a quick and dirty Apollo core FPGA to ASIC conversion that continues to run at only 100MHz. That would be a waste of an ASIC, give a product based on it a bad reputation as well as not being competitive in performance for no good reason. I am not even sold on the Apollo core. Gunnar blundering may make other 68k cores better options and pipelined 68020 cores are available (Jens and Thomas are professional though). Using a 40nm process like the $1 RP2040 SoC ASIC can allow up to 3GHz practical CPU cores although that was from very professional teams and 1-2GHz is a more realistic target. PCIe is no longer needed with a small RPi like board and the extra expense of allowing a graphics board is a major reason why a product like the Tabor can't compete with the RPi and couldn't even if it was mass produced. An ASIC allows to customize and standardize what is available which is a huge advantage too.

Look, I'm genuinely impressed by what people do with their classic Amigas. I'm also impressed with the 68080, Vampire boards, etc., and impressed with what the MorphOS team have built. I'd also be very impressed if someone managed to fund and produce ASICs. However, I'm just not interested enough to get involved.

I have plenty of other interests and other things that I'd like to do. If the AmigaOS 4 project collapses and A-EON no longer want to pay me for driver development, then I'll most likely find something else that interests me to get involved in (and that preferably pays better; I've got a family to provide for).

Quote:
Amiga PPC hardware is obsolete and noncompetitive before it goes on sale. You can put on blinders and keep walking out into the desert of failure but don't be surprised when your water runs out and you find yourself all alone.

I'm well aware of the situation. I simply don't share your interests or goals.

EDIT: To clarify further, I don't expect anybody to unite behind any particular AmigaOS variant, and wasn't interested in debating which variants are dead, and which ones should be "the future." Classic enthusiasts will likely stick with 68K, MorphOS fans will likely stick with MorphOS, etc., Hence my suggestion to have a common API standard for intercompatibility.

Hans

Last edited by Hans on 07-Sep-2022 at 03:01 AM.

_________________
http://hdrlab.org.nz/ - Amiga OS 4 projects, programming articles and more. Home of the RadeonHD driver for Amiga OS 4.x project.
https://keasigmadelta.com/ - More of my work.

Hammer 
Re: Commodore Amiga Global Alliance
Posted on 7-Sep-2022 5:09:37
#138 ]
Elite Member
Joined: 9-Mar-2003
Posts: 5272
From: Australia

@OlafS25

Quote:

OlafS25 wrote:
@BigD

PPC was the official route after the bankruptcy of Commodore. At that time many users had already left. Most of them can only remember 68k-based hardware (mostly the A500). From emotion and memory, PPC is as good or bad as x86/AMD64 or ARM for most people. Just another piece of hardware.

All Amiga PowerPC paths followed Apple's 68K-to-PPC migration model, and the OS-level 68K emulation design wouldn't survive WHDLoad games without using full machine emulation (UAE).

Amiga PowerPC with OS-level 68K emulation is effectively a DraCo-like environment that is only friendly to AmigaOS apps.

VS

In the x86 world, the legacy CPU instruction set was kept while a RISCy CPU core design with a translation stage was adopted, and the translation layer sits below the OS. I have run bare-metal retro PC DOS games on a Pascal GTX 1050's CGA/EGA/VGA and a Core i7-3770K/MSI Z77 MPower, with 640K config.sys/autoexec.bat management included. I created an MS-DOS 7.1 8GB USB boot device that contains DOS PC games.

The PiStorm/RPi 3a (ARM Cortex-A53)/Emu68 method survives WHDLoad Amiga games. For the retro gaming user interface, I prefer the AmigaOS WIMP GUI over MS-DOS since it's closer to modern Windows PCs.

The Amiga is a blend between a game console and a desktop computer, i.e. the Amiga is not an Apple "Macintosh". I dislike the Linux world's elitist command-line mentality.

Modern lower-level APIs such as Vulkan and DirectX 12, and ReBAR (which enables the CPU to access the GPU's entire VRAM without the 256 MB access window), are methods to bring modern gaming PCs closer to near-metal game consoles.

Both Windows 9x/NT4/2K/XP/Vista/7/10/11 and AmigaOS assume at least a two-button mouse WIMP GUI. Microsoft's barebone mouse products have at least two buttons.


_________________
Ryzen 9 7900X, DDR5-6000 64 GB RAM, GeForce RTX 4080 16 GB
Amiga 1200 (Rev 1D1, KS 3.2, PiStorm32lite/RPi 4B 4GB/Emu68)
Amiga 500 (Rev 6A, KS 3.2, PiStorm/RPi 3a/Emu68)

Hammer 
Re: Commodore Amiga Global Alliance
Posted on 7-Sep-2022 5:36:47
#139 ]
Elite Member
Joined: 9-Mar-2003
Posts: 5272
From: Australia

@matthey

Quote:

Too much x86 tax on resources. A Pi Zero 2 W at $15 should be able to do decent emulation, but in-order RISC cores have notoriously horrible performance. I sure wish we had more efficient in-order CISC CPUs that could scale down further than x86-64, like the 68060 using a modern chip process.

The front-end "x86 tax" has remained nearly static for a long time while the transistor budget has grown.

A Pi Zero 2 W at $15 doesn't reflect the current real-world market, where supply is very tight and prices are marked up, e.g. about $118 AUD from AliExpress without GPIO.

I have tried to find the cheapest PI 3a or Pi Zero 2 W for my Amiga 500/PiStorm/Emu68 setup.

From https://raspberry.piaustralia.com.au/collections/raspberry-pi-boards
Barebone Pi Zero 2 W and Pi 3A are sold out.


From https://core-electronics.com.au/raspberry-pi.html
Barebone Pi Zero 2W or Pi 3a are sold out, but higher-cost starter kit bundles are still available e.g. $99 AUD Raspberry Pi Zero 2 W Starter Kit.

From Poland, I purchased preconfigured 32 GB microSD Emu68/PiStorm Rev B Max II and RPi 3a for about $150 AUD.


Via the US side, from
https://vilros.com/collections/raspberry-pi-kits/products/vilros-raspberry-pi-zero-2-w-basic-starter-kit
Vilros Raspberry Pi Zero 2 W Basic Starter Kit starts at $49.99 USD and it's sold out.

_________________
Ryzen 9 7900X, DDR5-6000 64 GB RAM, GeForce RTX 4080 16 GB
Amiga 1200 (Rev 1D1, KS 3.2, PiStorm32lite/RPi 4B 4GB/Emu68)
Amiga 500 (Rev 6A, KS 3.2, PiStorm/RPi 3a/Emu68)

BSzili 
Re: Commodore Amiga Global Alliance
Posted on 7-Sep-2022 6:01:29
#140 ]
Regular Member
Joined: 16-Nov-2013
Posts: 447
From: Unknown

@MEGA_RJ_MICAL

Now that's what I call a ZORRAM moment!

_________________
This is just like television, only you can see much further.
