@_ThEcRoW
Quote:
_ThEcRoW wrote: Why would someone using an Nvidia GPU use OpenCL instead of CUDA? If these tests were done under CUDA, Nvidia's ratings would be higher than ATI's.
|
Companies like Adobe have dumped CUDA and shifted to OpenCL. AMD+ARM+Intel will make sure NV CUDA doesn't survive.
LuxRender mirrors DiRT Showdown's (DX11 Compute Shader Model 5.0) results.
Quote:
DiRT Showdown is the world’s first game to use Global Illumination, while it also supports Advanced Lighting via Forward+ Rendering, High Definition Ambient Occlusion (HDAO) and Contact Hardening Shadows.
Global Illumination is a group of algorithms used to add more realistic lighting: they take into account not only light that comes directly from the light source (direct illumination), but also light that is reflected from other surfaces (indirect illumination).
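The direct/indirect split the quote describes can be sketched in a few lines. This is my own illustration of the general idea, not code from DiRT Showdown's renderer; the function names and the one-bounce averaging are assumptions for clarity.

```python
# Sketch of the global illumination idea: final shading is the direct
# term (light straight from the source) plus an indirect term (light
# bounced off other surfaces). Names and parameters are illustrative.

def direct_light(albedo, light_intensity, cos_theta):
    """Direct illumination: light arriving straight from the source,
    scaled by the surface's albedo and the incidence angle."""
    return albedo * light_intensity * max(cos_theta, 0.0)

def shade(albedo, light_intensity, cos_theta, bounce_samples):
    """Direct term plus an indirect term, here approximated by
    averaging light values sampled from other surfaces (one bounce)."""
    direct = direct_light(albedo, light_intensity, cos_theta)
    indirect = albedo * sum(bounce_samples) / len(bounce_samples)
    return direct + indirect
```

A direct-only renderer is just the first function; global illumination is what the second term adds.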
From http://www.techspot.com/review/546-amd-radeon-hd-7970-ghz-edition/page7.html
|
From http://technewspedia.com/radeon-hd-7970-gpus-cannibalizes-professional-firestream-tesla/
Quote:
The recent GPU Technology Conference (GTC) 2012, which ended only yesterday (it ran from 14 to 17 May), brought together many actors, both new and experienced, from the business and industry segment, many of whom were interviewed by Theo Valich, editor of VR-Zone and Bright Side of News. The purpose of the interviews was to find out which GPU-accelerated computing (GPGPU) products enjoyed the greatest popularity among these companies. The answers were surprising, because the GPUs leading their preferences were consumer GPUs: GeForce GTX 280, GeForce GTX 480, GeForce GTX 580 and Radeon HD 7970.
|
GTX680 has gimped GPGPU hardware. Using CUDA wouldn't rescue the GTX680's gimped 64-bit double-precision (DP) FP hardware.
From http://parallelis.com/kepler-underperform-on-gpgpu-gtx680/
Quote:
... What we encounter today is exactly the inverse: a new architecture that is correct for games, but is incredibly bad for GPGPU computing! nVidia presented few details of the Kepler architecture, and while AMD Radeon was going from SIMD VLIW5 to VLIW4 and finally to a simple SIMD model on the new GCN architecture, nVidia decided to step back from its move to 32 cores grouped in a warp, then 16 cores grouped in a half-warp, to logically grouping 48 CUDA cores together. While Fermi's groups of 48 cores have 64KB of shared memory/cache, Kepler's groups of 192 cores (4X more) have access to only the same 64KB of shared memory/cache, and the available register count is 2X lower. Kepler doesn't improve over Fermi; it's just inefficient, putting more pressure on a slow memory bus (compared to the actual Radeon 7970) and making divergence troublesome.
|
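To put a rough number on the "gimped 64-bit DP FP" point: peak throughput can be estimated from core count, clock and the FP64:FP32 rate. The figures below are back-of-the-envelope values from the commonly cited public specs (GK104 runs FP64 at 1/24 of FP32 rate, Tahiti at 1/4), not vendor-guaranteed numbers.

```python
# Rough peak-FLOPS estimate: cores x clock x 2 FLOPs/clock (fused
# multiply-add). Clocks and FP64 rates are the commonly cited specs.
def peak_gflops(cores, mhz, flops_per_clock=2):
    return cores * mhz * flops_per_clock / 1000.0

gtx680_sp = peak_gflops(1536, 1006)   # ~3090 GFLOPS single precision
gtx680_dp = gtx680_sp / 24            # GK104: FP64 at 1/24 rate -> ~129 GFLOPS
hd7970_sp = peak_gflops(2048, 925)    # ~3789 GFLOPS single precision
hd7970_dp = hd7970_sp / 4             # Tahiti: FP64 at 1/4 rate -> ~947 GFLOPS
```

By this estimate the HD 7970's peak DP throughput is roughly 7X the GTX 680's, which is why no API choice (CUDA or OpenCL) can close that gap.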
On AMD GCN's compute unit, 64 "cores" share a 64KB local data store (LDS), while GK104 has 192 "cores" that share 64KB of "shared memory/cache".
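The per-core arithmetic makes the pressure obvious. A simple even split is an assumption for illustration (real occupancy depends on work-group/block configuration), but the ratio holds:

```python
# Fast shared/local memory available per "core" if the 64KB block is
# split evenly across the cores that contend for it (illustrative split).
lds_bytes = 64 * 1024

gcn_per_core = lds_bytes / 64      # AMD GCN CU: 64 cores -> 1024 bytes each
gk104_per_core = lds_bytes / 192   # GK104 SMX: 192 cores -> ~341 bytes each
```

Three times less fast local storage per core is exactly the kind of imbalance the quoted article blames for Kepler's GPGPU results.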
Last edited by Hammer on 03-Jul-2012 at 11:55 AM.
_________________
Ryzen 9 7900X, DDR5-6000 64 GB RAM, GeForce RTX 4080 16 GB
Amiga 1200 (Rev 1D1, KS 3.2, PiStorm32lite/RPi 4B 4GB/Emu68)
Amiga 500 (Rev 6A, KS 3.2, PiStorm/RPi 3a/Emu68) |