Forum Index / General Technology (No Console Threads) / Applied Micro moving away from PowerPC

Hypex 
Re: Applied Micro moving away from PowerPC
Posted on 13-Oct-2014 14:50:47
#81
Elite Member
Joined: 6-May-2007
Posts: 11222
From: Greensborough, Australia

@cdimauro

Quote:
The PowerPC binary was a mistake even when they decided to port the Amiga o.s. to this architecture.


So what do you think should have been chosen?

Although the PowerPC was touted as the 68K successor and carried the Motorola branding, the CPU was completely different. PowerPC has more in common with the Copper than with the 68K.

AFAIK there was no CPU similar to the 68K at the time. Motorola didn't come up with a real replacement, or with a way to stretch the architecture beyond its usefulness like Intel is still doing. So I don't know what big-endian CPU could have been chosen instead.

Quote:
I think that resource tracking can be added, but it can create issues with the current applications. An application can share (actually: it "shares" everything) some resources with the o.s. or other applications, so they mustn't be freed when it exits.


OS4 has a sort of poor man's resource tracking where the programmer has to specify it. But I think the OS should control it privately, and I think it can be done. Most things shared would be data in messages (which are max 64KB in size), the data segment and the stack; this should be a small amount of memory. Allocations like screens and windows that take up major memory should be able to be closed and freed. And if anything, the program should be able to be shrunk, removed from view and relegated to the background.
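
Roughly what I have in mind, as a plain C sketch (hypothetical names throughout, not the actual OS4 API): the OS links every allocation into a per-task list and privately sweeps whatever is left when the task exits.

Code:

#include <stdlib.h>

struct TrackNode {
    struct TrackNode *next;
    void             *mem;
};

struct Task {                        /* hypothetical task record */
    struct TrackNode *allocs;
};

/* called by the OS on behalf of the task for every allocation */
void *tracked_alloc(struct Task *t, size_t size)
{
    struct TrackNode *n = malloc(sizeof *n);
    if (!n)
        return NULL;
    n->mem = malloc(size);
    if (!n->mem) {
        free(n);
        return NULL;
    }
    n->next   = t->allocs;           /* push onto the task's list */
    t->allocs = n;
    return n->mem;
}

/* run privately by the OS when the task exits; anything the task
   explicitly handed over as shared would be unlinked before this */
void task_exit_cleanup(struct Task *t)
{
    while (t->allocs) {
        struct TrackNode *n = t->allocs;
        t->allocs = n->next;
        free(n->mem);
        free(n);
    }
}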

Quote:
The third is the loss of backward-compatibility, as I stated before, because it DOESN'T make sense to port it to another 32-bit architecture: (A REAL) 64-bit (platform) should be the next thing to do.


I don't see this as a problem. The problem was programming OS4 to be like this in the first place, when I don't think there was any need. The fact is all 68K software runs in an emulator, so why was this allowed to become an issue? OS4 was the chance to have a clean break, and that wasn't chosen.

After all, OS4 uses interfaces instead of 68K jump tables. I see no problem in isolating OS4 system objects from 68K libraries. There can be "dummy" 68K system libraries, with whatever crossover is needed programmed inside, and private OS4 libraries for PPC apps.

Perhaps separating the two is the next move: leave the legacy stuff to emulated library bases, and focus on a new API where modern features can be brought in more easily.

Last edited by Hypex on 13-Oct-2014 at 03:11 PM.

Hypex 
Re: Applied Micro moving away from PowerPC
Posted on 13-Oct-2014 14:52:04
#82
Elite Member
Joined: 6-May-2007
Posts: 11222
From: Greensborough, Australia

@OlafS25

Quote:
I only used "classic" because it is often used in discussions


I do that also at times. But I also call it a "Real Amiga".

WolfToTheMoon 
Re: Applied Micro moving away from PowerPC
Posted on 13-Oct-2014 15:01:15
#83
Super Member
Joined: 2-Sep-2010
Posts: 1351
From: CRO

@Hypex

Quote:
AFAIK there was no CPU similar to the 68K at the time.


68060 and Pentium were similar in execution and conception.

In some parallel universe, Commodore bought Zilog in '85 and released its own fully pipelined CISC design sometime in the late '80s.

cdimauro 
Re: Applied Micro moving away from PowerPC
Posted on 14-Oct-2014 21:29:45
#84
Elite Member
Joined: 29-Oct-2012
Posts: 3650
From: Germany

@Samurai_Crow

Quote:

Samurai_Crow wrote:

Re: Memory Protection and Managed code
I agree with cdimauro. Memory protection is limited by the page size of the MMU while managed code has byte-level granularity. Neither is a catch-all. I consider managed code to be a higher priority because of that granularity issue. All that's needed is a 68020+ CPU and a CHK opcode for each array access that isn't a constant. That's cheaper than an MMU+all the memory needed for the page tables.

What do you mean by "cheaper"? The implementation cost (hardware resources)? The execution cost at runtime?

Of course, the implementation cost for managed code is zero from a hardware point of view: you just have to emit the proper CHK or CHK2 instructions (but both are limited to signed-integer bounds, and the latter has to read the bounds from memory).

However, the execution cost at runtime is high; very high for code which has to make a lot of non-sequential accesses. Consider also that this expands the code, so it stresses the code cache. The data cache is stressed too, if you have to read the bounds from memory. Both in turn stress the memory, consuming bandwidth and contending for it with other tasks and devices.

Last but not least, managed code means that many useful optimizations can no longer be applied, since you have to do the checks, and they require the checked value to be in a register. For example, you / the compiler cannot use pointers to "scan" data back and forth; you always have to keep your data structure in the form of pointer + index, because the index is what gets checked. You can use another register to hold the computed pointer + index, but then you waste both a register and computational power on the double bookkeeping needed to keep both forms updated.
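
To make the cost concrete, here is a C sketch (a hypothetical helper, not a compiler feature) of what every managed array access has to do; on a 68020+ the test could collapse into a single CHK.L, but it still executes on every access:

Code:

#include <stdio.h>
#include <stdlib.h>

/* the software form of CHK.L upper,i: trap if i < 0 || i > upper */
static long checked_read(const long *base, long i, long upper)
{
    if (i < 0 || i > upper) {
        fprintf(stderr, "bounds trap at index %ld\n", i);
        abort();
    }
    return base[i];
}

static long sum(const long *a, long n)
{
    long s = 0;
    /* managed style: base and index must both stay live so the index
       can be checked; unmanaged code would scan with one pointer */
    for (long i = 0; i < n; i++)
        s += checked_read(a, i, n - 1);
    return s;
}

int main(void)
{
    long a[4] = { 1, 2, 3, 4 };
    printf("%ld\n", sum(a, 4));      /* prints 10 */
    return 0;
}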

Trust me, Samurai: in the real world it isn't that simple to handle managed code, and bounds checking in general: you have to sacrifice something, and the cost is significant. I know, because I did A LOT of research in this field, and I've found a solution that can be implemented in hardware (with some constraints, but they are good enough for real-world applications).

Note that if there's active research in this field, it's because there's demand (especially at the enterprise level, I can assure you), and also because the current solutions (software and hardware) don't meet expectations.
Quote:
One thing I would accept as a substitute for the large page tables required by some MMUs is a minimal MMU that doesn't support address translation. All it would need to do is mark some regions as read-only and so on. This would still allow the simple flat memory model that Amigans have long known and loved while providing protection for memory at the same time.

It requires memory for this, and if you want to keep the byte-level granularity that you asked for before, it's a HUGE amount of memory that has to be sacrificed for this alone.

Also, this memory has to be cached for fast checking, so it consumes other hardware resources as well.
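
The arithmetic makes the gap concrete (illustrative numbers, assuming a 32-bit address space and 4 KiB pages):

Code:

#include <stdio.h>

int main(void)
{
    const unsigned long long space = 1ULL << 32;    /* 4 GiB          */
    const unsigned long long page  = 4096;          /* 4 KiB pages    */
    const unsigned long long pages = space / page;  /* 1048576 pages  */

    /* 2 bits per page (write-monitor + write-protect), packed */
    printf("2 bits/page : %llu KiB\n", pages * 2 / 8 / 1024);
    /* a full 32-bit page-table entry per page, x86-style */
    printf("32 bits/page: %llu KiB\n", pages * 4 / 1024);
    /* 2 bits per BYTE, i.e. the byte-level granularity above */
    printf("2 bits/byte : %llu MiB\n", space * 2 / 8 / (1024 * 1024));
    return 0;
}

That's 256 KiB of page bits against a full gigabyte of byte-granular metadata for the same 4 GiB space.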

Last but not least, MMUs have been implemented and quite a standard feature for about 20-25 years now: why don't you want them? They represent a good compromise from the hardware implementation point of view, albeit with a coarse granularity. They also make address translation a very fast operation, since the calculations are done in a pipeline stage (so, well hidden in the pipeline). Address translation is a good thing, because it lets you implement virtual memory and fast loaders (no need to relocate code and data).

cdimauro 
Re: Applied Micro moving away from PowerPC
Posted on 14-Oct-2014 21:38:27
#85
Elite Member
Joined: 29-Oct-2012
Posts: 3650
From: Germany

@Hypex

Quote:

Hypex wrote:
@cdimauro

Quote:
The PowerPC binary was a mistake even when they decided to port the Amiga o.s. to this architecture.


So what do you think should have been chosen?

WolfToTheMoon already answered it.
Quote:
Although the PowerPC was touted as the 68K successor and carried the Motorola branding, the CPU was completely different. PowerPC has more in common with the Copper than with the 68K.

AFAIK there was no CPU similar to the 68K at the time. Motorola didn't come up with a real replacement, or with a way to stretch the architecture beyond its usefulness like Intel is still doing. So I don't know what big-endian CPU could have been chosen instead.

Why do you have to use a big-endian CPU? The endianness of an architecture should be an implementation detail, not exposed by the o.s. and not used by applications (except for some low-level stuff).
Quote:
Quote:
I think that resource tracking can be added, but it can create issues with the current applications. An application can share (actually: it "shares" everything) some resources with the o.s. or other applications, so they mustn't be freed when it exits.


OS4 has a sort of poor man's resource tracking where the programmer has to specify it. But I think the OS should control it privately, and I think it can be done. Most things shared would be data in messages (which are max 64KB in size), the data segment and the stack; this should be a small amount of memory. Allocations like screens and windows that take up major memory should be able to be closed and freed. And if anything, the program should be able to be shrunk, removed from view and relegated to the background.

The default should be: everything is private, and the o.s. should free it after the application is closed. The exception is the shared resource: here the application should signal to the o.s. what exactly has to be shared (and how, if applicable).
Quote:
Quote:
The third is the loss of backward-compatibility, as I stated before, because it DOESN'T make sense to port it to another 32-bit architecture: (A REAL) 64-bit (platform) should be the next thing to do.


I don't see this as a problem. The problem was programming OS4 to be like this in the first place, when I don't think there was any need.

Sorry, I didn't get that: can you clarify? You don't want a real 64-bit o.s. as the next big & good thing?
Quote:
The fact is all 68K software runs in an emulator, so why was this allowed to become an issue? OS4 was the chance to have a clean break, and that wasn't chosen.

After all, OS4 uses interfaces instead of 68K jump tables. I see no problem in isolating OS4 system objects from 68K libraries. There can be "dummy" 68K system libraries, with whatever crossover is needed programmed inside, and private OS4 libraries for PPC apps.

Perhaps separating the two is the next move: leave the legacy stuff to emulated library bases, and focus on a new API where modern features can be brought in more easily.

I fully agree.

Samurai_Crow 
Re: Applied Micro moving away from PowerPC
Posted on 15-Oct-2014 5:51:18
#86
Elite Member
Joined: 18-Jan-2003
Posts: 2320
From: Minnesota, USA

@cdimauro

Quote:

cdimauro wrote:
@Samurai_Crow

Last but not least, MMUs have been implemented and quite a standard feature for about 20-25 years now: why don't you want them? They represent a good compromise from the hardware implementation point of view, albeit with a coarse granularity. They also make address translation a very fast operation, since the calculations are done in a pipeline stage (so, well hidden in the pipeline). Address translation is a good thing, because it lets you implement virtual memory and fast loaders (no need to relocate code and data).


The reason I hate MMUs is that we had a perfectly fast and responsive OS 25 years ago. The modern implementations are not much, if any, faster despite running on processors ten times faster.

You want to eliminate load times? Use position-independent code! Relative addressing is really useful in a flat memory model. Using a virtualized address space? Not so much.

Being able to monitor for writes or protect from writes is only 2 bits per page. That much I will sacrifice for an MMU. No more. Memory protection can be implemented in a flat memory model more easily than in a virtualized address space anyway.

Address translations that cost pipeline length will make branches less responsive by introducing bubbles. I like short responsive pipelines and MMU pipelining makes them neither short nor responsive.

If you find out managed code doesn't work for you? You recompile the code not to use it and use a faster OS! How do you get a short pipeline out of a CPU with a long pipeline? IT IS IMPOSSIBLE!

Hypex 
Re: Applied Micro moving away from PowerPC
Posted on 15-Oct-2014 13:55:54
#87
Elite Member
Joined: 6-May-2007
Posts: 11222
From: Greensborough, Australia

@cdimauro

Quote:
Samurai_Crow suggests using managed code to avoid your problem. So, he's correct from this point of view: the underlying runtime of a managed platform makes it possible to avoid exactly the issue that you had.


I've seen things like this at run time, like the stack checking and array bounds checking options in some compilers. But to be practical, I think it is only useful for debugging purposes. Once a program is stable, managed code shouldn't be needed.

And then the MMU can do the rest by catching things that were missed in the beta stage. I don't know what overhead memory protection has, but the MMU would surely be faster than manual code doing bounds checking at run time. Even so, I've had my own concerns about the CPU being slowed down by needing to check every read and write.

Quote:
Why do you have to use a big-endian CPU?


Because AmigaOS and its structures were programmed big endian. Back then (~2000), when the pressure was on to port AmigaOS to a new CPU, it would have been a requirement. Unless they planned a real break from the AmigaOS we knew, and from compatibility.

Quote:
Sorry, I didn't get that: can you clarify? You don't want a real 64-bit o.s. as the next big & good thing?


Sorry, no. I was thinking with regard to backward-compatibility, and looking upward to a 64-bit path and actual multi-core with multi-threading.

So, what I didn't see as a problem was programming OS4 so that these features could be realised in future. The problem I see now with putting these types of modernisations into OS4 is that they programmed it too close to the 68K model, instead of privatising the OS structures and only exporting the 68K API, as well as being able to compile base Amiga code so that it worked.

OlafS25 
Re: Applied Micro moving away from PowerPC
Posted on 15-Oct-2014 14:06:08
#88
Elite Member
Joined: 12-May-2010
Posts: 6353
From: Unknown

@Hypex

The problem with a "modernized OS" is that it will not be compatible with 68k anymore. You can use UAE, but then you have a black box; it is not the same as on MorphOS or AmigaOS. It would be more like (or even identical to) AROS, but people obviously use AmigaOS or MorphOS because of the 68k integration.

And I still think that these "modern" features will have zero effect on getting new software.

cdimauro still does not explain why commercial software developers (or even hobbyists from outside) should support it when they already have plenty of options with many more users. As I understand it, he admitted himself that it would have no effect. Many of these features are not really perceptible or visible to normal users, and thus they do not really give any advantage.

For me the primary purpose of an OS is to run software.

Last edited by OlafS25 on 15-Oct-2014 at 02:26 PM.
Last edited by OlafS25 on 15-Oct-2014 at 02:09 PM.

Hypex 
Re: Applied Micro moving away from PowerPC
Posted on 15-Oct-2014 14:06:13
#89
Elite Member
Joined: 6-May-2007
Posts: 11222
From: Greensborough, Australia

@paolone

Quote:
It's funny. AROS developers decided, in its infancy days, that AmigaOS DOS packets were basically evil and decided to go for an IO-device-based approach instead, leaving compatibility to a packet emulator


I don't know why the DOS packet interface is thought to be so bad. It works with the Amiga multi-tasking model: a call to the DOS API generates a message that is sent to the filesystem, which does its work and replies with the result. This is Basic Multitasking 101 and standard IPC practice.
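
The round trip looks roughly like this with the standard exec message calls (the request payload is illustrative, not a real DosPacket, and error handling is omitted):

Code:

#include <exec/ports.h>
#include <proto/exec.h>

struct FsRequest {              /* illustrative payload             */
    struct Message msg;
    LONG           action;      /* what we ask the filesystem to do */
    LONG           result;      /* filled in before the reply       */
};

void ask_filesystem(struct MsgPort *fsPort)
{
    struct MsgPort  *reply = CreateMsgPort();
    struct FsRequest req   = { 0 };

    req.msg.mn_ReplyPort = reply;
    req.msg.mn_Length    = sizeof(req);
    req.action           = 42;

    PutMsg(fsPort, &req.msg);   /* send to the filesystem process   */
    WaitPort(reply);            /* sleep until it has done its work */
    GetMsg(reply);              /* req.result now holds the answer  */

    DeleteMsgPort(reply);
}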

What I don't like is the "name" hack they did in the message node; I never saw why it was done. As for any overhead, DOS packets, like other objects such as TimerRequests, should be cached in the Process structure anyway, so that shouldn't be a problem.

Hypex 
Re: Applied Micro moving away from PowerPC
Posted on 15-Oct-2014 14:22:37
#90
Elite Member
Joined: 6-May-2007
Posts: 11222
From: Greensborough, Australia

@Samurai_Crow

Quote:
Especially in the realms of Copper coprocessor support. There has been the ability in AmigaOS 2.x+ to have multiple buffers of Copper instructions but lacking a macro and a subroutine to actually advance to the next buffer!


Perhaps you could submit a non-OS hardware-banging example of what you need, perhaps a wrap-around dual-playfield Copper scrolling trick, in your OS4Coding thread.

itix 
Re: Applied Micro moving away from PowerPC
Posted on 15-Oct-2014 18:06:09
#91
Elite Member
Joined: 22-Dec-2004
Posts: 3398
From: Freedom world

@Hypex

Quote:
I don't know why the DOS packet interface is thought to be so bad. It works with the Amiga multi-tasking model: a call to the DOS API generates a message that is sent to the filesystem, which does its work and replies with the result. This is Basic Multitasking 101 and standard IPC practice.


There is one reason: speed. A task switch must occur to process the DOS packet at the other end, and it adds some complexity.

But the advantage, of course, is that asynchronous IO is fairly easy and you have a separate filesystem process.

Both have pros and cons.

Hypex 
Re: Applied Micro moving away from PowerPC
Posted on 16-Oct-2014 13:57:10
#92
Elite Member
Joined: 6-May-2007
Posts: 11222
From: Greensborough, Australia

@OlafS25

Quote:
The problem with a "modernized OS" is that it will not be compatible with 68k anymore.


I don't see this as a problem, because the 68K API can still be emulated. Either way, all 68K code is emulated to begin with, so this shouldn't be a problem. The task emulator can still be there, but calls to the standard API would go through a "middle man", where a 68K library would translate calls to the host OS. The current library model could be kept, or OS3 versions created for 68K software.
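
A sketch of such a stub (all names hypothetical; the d1/d2/d3 argument registers follow the classic dos.library Read() convention): the emulator traps the 68K jump-table slot, unpacks the 68K registers and forwards to the native call.

Code:

#include <stdint.h>

struct M68KContext {
    uint32_t d[8];              /* 68K data registers    */
    uint32_t a[8];              /* 68K address registers */
};

/* the native implementation behind the scenes */
extern int32_t native_read(int32_t file, void *buf, int32_t len);

/* installed at the 68K library vector for the call */
void stub_Read(struct M68KContext *ctx, uint8_t *mem68k)
{
    int32_t file = (int32_t)ctx->d[1];      /* args in d1/d2/d3    */
    void   *buf  = mem68k + ctx->d[2];      /* 68K address -> host */
    int32_t len  = (int32_t)ctx->d[3];

    ctx->d[0] = (uint32_t)native_read(file, buf, len);  /* result in d0 */
}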

Quote:
And I still think that these "modern" features will have zero effect on getting new software.


Well, unless a program needs a 64-bit CPU and multiple cores it won't, but I don't know of many. For new software we need easier APIs and build kits, close to what Apple did with OSX, or even NeXT.

For new ports we need about the same, plus a compatible POSIX layer, or something that can compile easily: Windows doesn't exactly follow the UNIX model, but much software written to compile on *nix systems compiles easily enough on Windows.

Quote:
cdimauro still does not explain why commercial software developers


Because there's no money in it! Well, I'm thinking of ones from the outside world. Aside from the programming, they'd probably wonder why anyone would use an OS that has hardware limitations and makes some things hard on the user.

Last edited by Hypex on 18-Oct-2014 at 02:30 PM.

Hypex 
Re: Applied Micro moving away from PowerPC
Posted on 16-Oct-2014 14:23:17
#93
Elite Member
Joined: 6-May-2007
Posts: 11222
From: Greensborough, Australia

@itix

A lot of this IPC is done for common things in AmigaOS.

olegil 
Re: Applied Micro moving away from PowerPC
Posted on 17-Oct-2014 8:24:28
#94
Elite Member
Joined: 22-Aug-2003
Posts: 5895
From: Work

@WolfToTheMoon

Semi-related to the topic at hand:

New paradigm for CPU design, with no registers in the classical sense:

https://research.idi.ntnu.no/multicore/mill-lecture-2014

The "10x power/performance advantage over existing cores" is interesting, but could mean anything (since the performance per watt is VERY different between the cores available on the market today).

Apparently the guy who started it all has done some presentations in the past:
http://www.eetimes.com/author.asp?section_id=36&doc_id=1320128

Last edited by olegil on 17-Oct-2014 at 08:28 AM.

cdimauro 
Re: Applied Micro moving away from PowerPC
Posted on 17-Oct-2014 22:49:50
#95
Elite Member
Joined: 29-Oct-2012
Posts: 3650
From: Germany

@Samurai_Crow

Quote:

Samurai_Crow wrote:
@cdimauro

Quote:

cdimauro wrote:
@Samurai_Crow

Last but not least, MMUs have been implemented and quite a standard feature for about 20-25 years now: why don't you want them? They represent a good compromise from the hardware implementation point of view, albeit with a coarse granularity. They also make address translation a very fast operation, since the calculations are done in a pipeline stage (so, well hidden in the pipeline). Address translation is a good thing, because it lets you implement virtual memory and fast loaders (no need to relocate code and data).


The reason I hate MMUs is that we had a perfectly fast and responsive OS 25 years ago. The modern implementations are not much, if any, faster despite running on processors ten times faster.

AFAIK AROS doesn't use the MMU, and the same goes for MorphOS. Only OS4 requires an MMU.

Do you have some tests which show a big performance degradation from enabling the MMU in a system? Preferably running real applications, not synthetic benchmarks.
Quote:
You want to eliminate load times? Use position-independent code! Relative addressing is really useful in a flat memory model.

You cannot use PIC all the time. On 68K, for example, you're limited by the fact that you cannot write to memory using PC-relative addressing modes, so you have to use a register to hold the base of all global data, and this impacts performance.

Compare that to a virtualized o.s.: you don't need a register, because you can use direct addressing modes.
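
In C terms, the base-register model amounts to something like this sketch: all writable globals gathered behind one pointer, which is the register (classically a4 on the Amiga) that gets reserved.

Code:

/* writable globals gathered into one block */
struct Globals {
    long counter;
    long mode;
};

/* g sits in a register for the program's whole life, e.g. a4;
   every access becomes an (offset,a4)-style operation */
static void tick(struct Globals *g)
{
    g->counter++;
}

With direct addressing the compiler would instead write straight to each variable's absolute address and keep that register free.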
Quote:
Using a virtualized address space? Not so much.

It's useful not only for easier loading of executables, but also to implement virtual memory, or to make it easier to implement a hypervisor with a very limited impact on performance. It's also useful for some tricky stuff (especially with emulators).
Quote:
Being able to monitor for writes or protect from writes is only 2 bits per page. That much I will sacrifice for an MMU. No more. Memory protection can be implemented in a flat memory model more easily than in a virtualized address space anyway.

Even without considering address translation, an MMU can do more checks, and for this it uses more bits, of course: http://wiki.osdev.org/Paging#Page_Table

With 8 bits for each page you can do many useful things.
Quote:
Address translations that cost pipeline length will make branches less responsive by introducing bubbles. I like short responsive pipelines and MMU pipelining makes them neither short nor responsive.

If you find out managed code doesn't work for you? You recompile the code not to use it and use a faster OS! How do you get a short pipeline out of a CPU with a long pipeline? IT IS IMPOSSIBLE!

We are talking about ONE more cycle / pipeline stage, which isn't that much (thanks to the TLBs). In fact, it doesn't significantly impact performance.

Second, a short pipeline limits the frequency that you can reach with a processor.

Third, we have very good branch predictors that reduce the impact of branch penalties A LOT.

Fourth, and I think it's the most important reason: if you don't want longer pipelines, you have to stick with some RISC processor, which requires far fewer pipeline stages than a very complex CISC processor like the 68K (instructions up to 22 bytes long, no easy instruction-length discovery, complex opcode patterns with many exceptions).

@Hypex

Quote:

Hypex wrote:
@cdimauro

Quote:
Samurai_Crow suggests using managed code to avoid your problem. So, he's correct from this point of view: the underlying runtime of a managed platform makes it possible to avoid exactly the issue that you had.


I've seen things like this at run time, like the stack checking and array bounds checking options in some compilers. But to be practical, I think it is only useful for debugging purposes. Once a program is stable, managed code shouldn't be needed.

You cannot be sure that a program is free of bugs that can crash the system. And you don't want to discover them while you're doing something important...
Quote:
And then the MMU can do the rest by catching things that were missed in the beta stage. I don't know what overhead memory protection has, but the MMU would surely be faster than manual code doing bounds checking at run time.

Absolutely. There's no comparison here: the MMU wins hands down.
Quote:
Even so, I've had my own concerns about the CPU being slowed down by needing to check every read and write.

In fact, it isn't: those checks are well hidden in the processor pipeline.
Quote:
Quote:
Why do you have to use a big-endian CPU?


Because AmigaOS and its structures were programmed big endian.

But you don't care, and shouldn't have to care, about the endianness. It matters when you have to interface with a peripheral, for example, but those are very limited cases; and you can use macros to solve the problem without headaches.
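
For example, a MAKE_ID-style macro in the spirit of the Amiga iffparse headers (this version is just a sketch) assembles a big-endian 32-bit ID byte by byte, so the same source works on either endianness:

Code:

#include <stdint.h>

#define MAKE_ID(a,b,c,d) \
    ((uint32_t)(a) << 24 | (uint32_t)(b) << 16 | \
     (uint32_t)(c) << 8  | (uint32_t)(d))

/* endian-neutral: assembles the value from individual bytes */
static uint32_t read_be32(const uint8_t *p)
{
    return MAKE_ID(p[0], p[1], p[2], p[3]);
}

/* usage: if (read_be32(hdr) == MAKE_ID('F','O','R','M')) ...
   and never a byte-swapped literal like 'FFIR' */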
Quote:
Back then (~2000), when the pressure was on to port AmigaOS to a new CPU, it would have been a requirement.

I never read anything like that, and imposing such a non-functional requirement doesn't make sense.
Quote:
Unless they planned a real break from the AmigaOS we knew, and from compatibility.

I have no clue about it. Of course, compatibility can easily be lost without guaranteeing the proper endianness.
Quote:
Quote:
Sorry, I didn't get that: can you clarify? You don't want a real 64-bit o.s. as the next big & good thing?


Sorry, no. I was thinking with regard to backward-compatibility, and looking upward to a 64-bit path and actual multi-core with multi-threading.

In this case, a 64-bit o.s. breaks compatibility for sure. The same happens with SMP on PowerPC processors.
Quote:
So, what I didn't see as a problem was programming OS4 so that these features could be realised in future. The problem I see now with putting these types of modernisations into OS4 is that they programmed it too close to the 68K model, instead of privatising the OS structures and only exporting the 68K API, as well as being able to compile base Amiga code so that it worked.

I agree, but once you make 68K apps "first-class citizens", you leave a big door open to mistakes.

The problem here is that the Amiga o.s. was badly designed from the beginning, without a proper, opaque access method to the o.s.'s internals.

@OlafS25

Quote:

OlafS25 wrote:
@Hypex

The problem with a "modernized OS" is that it will not be compatible with 68k anymore. You can use UAE, but then you have a black box; it is not the same as on MorphOS or AmigaOS. It would be more like (or even identical to) AROS, but people obviously use AmigaOS or MorphOS because of the 68k integration.

Right, but I think that a UAE sandbox is preferable for isolating 68K apps, which can be very dangerous for the system.
Quote:
And I still think that these "modern" features will have zero effect on getting new software.

Talk to developers who have to struggle with the infamous Amiga development cycle:
- write some code;
- run;
- the system crashes;
- reset the machine;
- reopen the environment as it was before running the buggy app.
Quote:
cdimauro still does not explain why commercial software developers (or even hobbyists from outside) should support it when they already have plenty of options with many more users. As I understand it, he admitted himself that it would have no effect.

No commercial company can be interested in a nanoscopic niche market.

What drives the post-Amiga segment is essentially passion for the platform, so it's mostly a hobby "job".
Quote:
Many of these features are not really perceptible or visible to normal users, and thus they do not really give any advantage.

I've already explained the advantages, but "strangely" you completely cut this part of my messages. Who knows why...
Quote:
For me the primary purpose of an OS is to run software.

And lose your work when something goes wrong, right?

Also, as a coder I get frustrated if I have to follow the development cycle mentioned above. That's a waste of resources which doesn't make sense in 2014, when a respectable o.s. takes care of it.

@itix

Quote:

itix wrote:
@Hypex

Quote:
I don't know why the DOS packet interface is thought to be so bad. It works with the Amiga multi-tasking model: a call to the DOS API generates a message that is sent to the filesystem, which does its work and replies with the result. This is Basic Multitasking 101 and standard IPC practice.


There is one reason: speed. A task switch must occur to process the DOS packet at the other end, and it adds some complexity.

But the advantage, of course, is that asynchronous IO is fairly easy and you have a separate filesystem process.

Both have pros and cons.

Maybe a hybrid solution is the ideal, albeit more complicated to handle and implement: use packets / IPC only when a request cannot be served immediately.
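
As a rough sketch (all names hypothetical):

Code:

#include <stdbool.h>

struct Request {
    int  action;
    long result;
};

extern bool cache_try_serve(struct Request *r);  /* fast path, no switch */
extern void queue_packet(struct Request *r);     /* slow path, via IPC   */
extern void wait_reply(struct Request *r);

void do_request(struct Request *r)
{
    if (cache_try_serve(r))     /* e.g. the data is already cached       */
        return;                 /* served synchronously, no task switch  */

    queue_packet(r);            /* otherwise pay for the packet + switch */
    wait_reply(r);
}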

@Hypex

Quote:

Hypex wrote:
@OlafS25

Quote:
The problem with a "modernized OS" is that it will not be compatible with 68k anymore.


I don't see this as a problem, because the 68K API can still be emulated. Either way, all 68K code is emulated to begin with, so this shouldn't be a problem. The task emulator can still be there, but calls to the standard API would go through a "middle man", where a 68K library would translate calls to the host OS. The current library model could be kept, or OS3 versions created for 68K software.

Exactly.
Quote:
Quote:
And I still think that these "modern" features will have zero effect on getting new software.


Well, unless a program needs a 64-bit CPU and multiple cores it won't, but I don't know of many. For new software we need easier APIs and build kits, close to what Apple did with OSX, or even NeXT.

For new ports we need about the same, plus a compatible POSIX layer, or something that can compile easily: Windows doesn't exactly follow the UNIX model, but much software written to compile on *nix systems compiles easily enough on Windows.

Windows is more similar to the Amiga o.s., since it doesn't have the infamous fork, which creates a lot of problems.

@olegil

Quote:

olegil wrote:
@WolfToTheMoon

Semi-related to the topic at hand:

New paradigm for CPU design, with no registers in the classical sense:

https://research.idi.ntnu.no/multicore/mill-lecture-2014

The "10x power/performance advantage over existing cores" is interesting, but could mean anything (since the performance per watt is VERY different between the cores available on the market today).

Apparently the guy who started it all has done some presentations in the past:
http://www.eetimes.com/author.asp?section_id=36&doc_id=1320128

I don't know how many times I've read about revolutionary approaches to computation.

When there's something concrete and available on the market, we'll see...

tlosm 
Re: Applied Micro moving away from PowerPC
Posted on 18-Oct-2014 8:26:49
#96
Elite Member
Joined: 28-Jul-2012
Posts: 2746
From: Amiga land

...

Last edited by tlosm on 18-Oct-2014 at 08:29 AM.

itix 
Re: Applied Micro moving away from PowerPC
Posted on 18-Oct-2014 12:38:43
#97
Elite Member
Joined: 22-Dec-2004
Posts: 3398
From: Freedom world

@cdimauro

Quote:

AFAIK AROS doesn't use the MMU, and the same goes for MorphOS. Only OS4 requires an MMU.


Nope, MorphOS requires the MMU. I don't know if the standalone version of AROS uses the MMU, but it can run Linux-hosted, where the MMU is of course used by the host OS.

(And indeed, they execute 68k or native code faster than any real 68k ever could.)

Last edited by itix on 18-Oct-2014 at 12:40 PM.

Hypex 
Re: Applied Micro moving away from PowerPC
Posted on 18-Oct-2014 15:10:11
#98
Elite Member
Joined: 6-May-2007
Posts: 11222
From: Greensborough, Australia

@cdimauro

Quote:
On 68K, for example, you're limited by the fact that you cannot write to memory using PC-relative addressing modes, so you have to use a register to hold the base of all global data, and this impacts performance.


I would have thought a register base plus offset would have been faster, as well as optimising the code.

Quote:
You cannot be sure that a program is free of bugs that can crash the system.


No, but with proper testing bugs should be caught. It's also up to the OS to catch them and not fall over. For example, there are some DSI crashes that OS4 can catch, while others freeze the system. I wonder why? I also see that the input.device can get caught up, and in that case it's an instant freeze. Components like this need work so they can run stably and not fall over. There needs to be more protection of sensitive OS routines from user code.

Quote:
But you don't care, and don't have to care about the endianess.


Actually, I do. I like to be able to read binary and not get confused about what I am looking at. For years I looked at BMP files and could never understand them; the same goes for WAVs. But when I learned about endianness I could pick out where I was getting confused. It still didn't look right. I like the layout to look natural, in a forward-readable order.

Quote:
I never read anything like that, and imposing such a non-functional requirement doesn't make sense.


I don't think it was spoken about much then. But once you go into the OS structures and defined flags, they are made to work in a big-endian environment. Change that and it all gets rearranged. On a related subject, I recall some Amiga code ported to AROS had to be tweaked for this exact reason to work on x86.

And even C code that compiles on x86 isn't normal. For example, when looking for a longword ID like "RIFF", the code would use "FFIR" instead! Why can't it just look for "RIFF"!? That stuffs the code up. Perhaps it was just badly coded.

Quote:
The problem here is that the Amiga o.s. was badly designed from the beginning


It's a mixed bag, but it was a bit too open. In any case, even if it were more closed, what's to stop programmers checking what AllocVec() does? If anything was private, they peeked behind the scenes anyway and hacked into it as if it were their public right.

Quote:
Windows is more similar to the Amiga o.s., since it doesn't have the infamous fork, which creates a lot of problems.


I wasn't even thinking that far. Getting past the makefile can be hard enough. Of course, most makefiles I've seen tend to assume that if it isn't compiling on Windows it must be a *nix system instead.

wawa 
Re: Applied Micro moving away from PowerPC
Posted on 18-Oct-2014 15:18:49
#99
Elite Member
Joined: 21-Jan-2008
Posts: 6259
From: Unknown

@itix
Quote:
I don't know if the standalone version of AROS uses the MMU

not on 68k afair

cdimauro 
Re: Applied Micro moving away from PowerPC
Posted on 18-Oct-2014 18:10:37
#100
Elite Member
Joined: 29-Oct-2012
Posts: 3650
From: Germany

@itix

Quote:

itix wrote:
@cdimauro

Quote:

AFAIK AROS doesn't use the MMU, and the same goes for MorphOS. Only OS4 requires an MMU.


Nope, MorphOS requires the MMU.

Strange; I would never have bet on it. Do you know why it's needed?
Quote:
I don't know if the standalone version of AROS uses the MMU, but it can run Linux-hosted, where the MMU is of course used by the host OS.

Not AROS/68K, for sure, as wawa already reported. I don't think it's different on other architectures, but I don't have that kind of knowledge at hand.

@Hypex

Quote:

Hypex wrote:
@cdimauro

Quote:
On 68K, for example, you're limited by the fact that you cannot write to memory using PC-relative addressing modes, so you have to use a register to hold the base of all global data, and this impacts performance.


I would have thought a register base plus offset would have been faster, as well as optimising the code.

But that way you're wasting a precious address register just for this.
Quote:
Quote:
You cannot be sure that a program is free of bugs that can crash the system.


No, but with proper testing bugs should be caught.

What kind of "proper testing" do you mean? I'm really interested, since I do A LOT of testing (manual primarily, but I've also some automated tests) in my work.
Quote:
It's also up to the OS to catch them and not fall over. For example, there are some DSI crashes that OS4 can catch, while others freeze the system. I wonder why? I also see that the input.device can get caught up, and in that case it's an instant freeze. Components like this need work so they can run stably and not fall over. There needs to be more protection of sensitive OS routines from user code.

That's why memory protection is needed: it gives a big hand with this kind of problem. The o.s. itself can do very little if any application can do whatever it wants to the system...
Quote:
Quote:
But you don't care, and shouldn't have to care, about the endianness.


Actually, I do. I like to be able to read binary and not get confused about what I am looking at. For years I looked at BMP files and could never understand them; the same goes for WAVs. But when I learned about endianness I could pick out where I was getting confused. It still didn't look right. I like the layout to look natural, in a forward-readable order.

Use macros. Yours is a normal problem in computing, and it can be handled in a safe / portable way.
Quote:
Quote:
I never read anything like that, and imposing such a non-functional requirement doesn't make sense.


I don't think it was spoken about much then. But once you go into the OS structures and defined flags, they are made to work in a big-endian environment. Change that and it all gets rearranged. On a related subject, I recall some Amiga code ported to AROS had to be tweaked for this exact reason to work on x86.

And even C code that compiles on x86 isn't normal. For example, when looking for a longword ID like "RIFF", the code would use "FFIR" instead! Why can't it just look for "RIFF"!? That stuffs the code up. Perhaps it was just badly coded.

That's my opinion. The world, unfortunately, is full of badly written code...
Quote:
Quote:
The problem here is that the Amiga o.s. was badly designed from the beginning


It's a mixed bag, but it was a bit too open. In any case, even if it were more closed, what's to stop programmers checking what AllocVec() does? If anything was private, they peeked behind the scenes anyway and hacked into it as if it were their public right.

Sure, but having an opaque interface helps a lot. Of course, with the full address space being public, a crazy application can make the system crash.

Last but not least, an opaque interface is recommended for abstracting from the underlying architecture, making it easier to port the software.
Quote:
Quote:
Windows is more similar to the Amiga o.s., since it doesn't have the infamous fork, which creates a lot of problems.


I wasn't even thinking that far. Getting past the makefile can be hard enough. Of course, most makefiles I've seen tend to assume that if it isn't compiling on Windows it must be a *nix system instead.

Makefiles... I hate them. I'm an aficionado of Visual Studio: create a solution, put the files in, compile, and run...
