
Thread: Why Do ATI Cards Score So Much Better On 3DMark2005 Than Nvidia Cards?

  1. #1
    Xtreme 3D Mark Team Staff
    Join Date
    Nov 2002
    Location
    Juneau Alaska
    Posts
    7,607

    Why Do ATI Cards Score So Much Better On 3DMark2005 Than Nvidia Cards?

    I don't want to hear the usual dumb conspiracy theory that ATI bought Futuremark or something like that... I don't care about rumours, so unless you've got a receipt showing ATI bought Futuremark, keep it to yourself.

    I just wanna know, why do ATI cards do so damn well on 3DMark2005?

    taking a look at my 5900XT I can get about 1300 points from it.
    but I see 9800 Pros getting 3 times more than this.
    now I know, logically, the difference between these two cards is nowhere near this...
    so the benchmark doesn't make much sense when trying to compare cards for real world performance.

    so I am wondering: what exactly is it that makes all Nvidia scores so damn low compared to ATI scores?

    did Futuremark heavily weight the test towards ATI cards because of Nvidia's past driver cheating on 3DMark03?
    are Nvidia's drivers just that bad?
    is it benchmarking something that Nvidia cards just don't have?
    is the benchmark using only the 2D speeds of FX based cards?

    what is it?

    cause real world games show plenty of times that ATI and Nvidia have always been pretty close together, so why do 9800 Pros score 3 times better than FX 5900's?
    why are 6800 ultras so low, and almost getting beat by X800 Pros?

    the benchmark's beautiful...
    but the results are questionable.




    "The command and conquer model," said the EA CEO, "doesn't work. If you think you're going to buy a developer and put your name on the label... you're making a profound mistake."

  2. #2
    Xtreme Member
    Join Date
    Dec 2003
    Posts
    157
    maybe it's just that ATI handles dx9 shaders much better.
    Gigabyte P55 UD5 | i5 750 @ 4.00GHZ | HD5850 | Corsair 650w

  3. #3
    Xtreme Mentor
    Join Date
    Apr 2003
    Location
    Ankara Turkey
    Posts
    2,631
    all i can understand about this situation is that ati's latest drivers make the difference. with my x800pro i got a 900 point gain just by changing the driver, so my answer would be yes, nvidia's drivers are just that bad


    When i'm being paid i always do my job through.

  4. #4
    Xtreme Addict
    Join Date
    Jun 2004
    Location
    near Boston, MA, USA
    Posts
    1,955
    Be aware that the Nvidia drivers are not that heavily optimized yet for the 6800 series. It often takes them 8 months to a year before they release a driver that has fully tweaked their latest flagship card. We are still working with drivers that are mostly basic derivatives of the drivers at release. I think it would be ill advised to jump into wild theories about how much more you might see out of better-engineered drivers, but Nvidia said as much about the drivers when they released the 6800.

    Food for thought anyway.

  5. #5
    BANNED SCAMMER, DO NOT TRUST
    Join Date
    May 2004
    Location
    Czech republic
    Posts
    285


    If ATI gained 900 points just from a driver change, I would suspect some sort of cheating there

    I also don't see how the ATI cards could be that much faster - they are not. The benchmark is just weird and ATI optimized for it

    Speaking of tricks and cheats - who started using lower mipmap levels for Q3 and detecting Q3? ATI. Who came up with fake trilinear filtering to beat nVidia? ATI. Who has only 24bit shader precision? ATI. ...frankly, IMHO, nVidia was forced to do something, and since their chips support 8, 12, 16 and 32bit shader precisions, the next lower option (16bit) was used. No biggie, IMHO - just a necessity to bring the comparison closer to reality - it's kind of easy to hardwire chip shaders to run fast without options.
    And that is what ATI does.

    BTW, not long ago I hit on an interesting thing. Changing the AGP aperture size affects 3DMark03 scores on nVidia, but not on ATI.
    That's a sign of cheating, because both cards were 128MB ones, and therefore the Nature scene can't fit into their video memory. That's where the AGP aperture size steps in and helps. Yet there is no change between the 16MB and 128MB settings on ATI, while the change is pretty obvious on nVidia (even 128MB of video memory plus the aperture is still not enough, so main non-dedicated memory gets used = slow-down). It simply suggests that ATI did not play fair.
    It doesn't take rocket science to figure out that someone again heavily optimized for the benchmark - probably to hide the lame drivers and patch over ATI's poor Doom3 scores?

  6. #6
    Xtreme 3D Mark Team Staff
    Join Date
    Nov 2002
    Location
    Juneau Alaska
    Posts
    7,607
    the 6800 has nothing to do with why the 5900 scores 3 times less on 3dmark2005 than a 9800 pro...




    "The command and conquer model," said the EA CEO, "doesn't work. If you think you're going to buy a developer and put your name on the label... you're making a profound mistake."

  7. #7
    Xtreme Member
    Join Date
    Nov 2003
    Posts
    157
    My understanding is that this bench is very heavy in the shader department. The 5900 series was absolutely crippled when it came to shaders. Remember the old Shadermark benches where a 9800 was scoring 3 times what a 5900 did... if not more!

    As for drivers, I find it funny that people are saying ATI's new drivers must cheat since they give such a performance gain. Look at any of the Nvidia forums and you'll see the new 66.70 drivers give about a 600-800 point gain over the 61.77's (the last official release). ATI has consistently released WHQL and FM approved drivers all year long; Nvidia hasn't had an approved driver in months.

    Warden
    --------------------------------------
    AMD64-3500@10x256 (2560mhz)
    Abit AV8
    Evga 6800 GT @ 460/1200
    Dangerden TDX and 6800 Block
    Blueline HD30 Waterpump
    Dual 120mm Fans and Radiator setup
    http://service.futuremark.com/compare?2k3=3054009
    http://service.futuremark.com/compare?3dm05=22188

  8. #8
    Xtreme X.I.P.
    Join Date
    Aug 2002
    Posts
    4,764
    I'll have some guesses here:

    1) The 4x2 architecture of the FX is great for multitextured environments but less so when all tests are done using shaders; an 8x1 format is then better

    2) Maybe full precision is being forced (so as not to reduce image quality), which means 32 bit on the FX and only 24 bit on Ati

    3) Being shader 2 based, the FX is weak in this area, as shown by Nature in 03 as well.

    Regards

    Andy

  9. #9
    Xtreme Mentor
    Join Date
    Apr 2003
    Location
    Ankara Turkey
    Posts
    2,631
    i choose zakelwe's #1 guess, it could be true


    When i'm being paid i always do my job through.

  10. #10
    Xtreme 3DTeam Member
    Join Date
    Jan 2004
    Location
    Dresden
    Posts
    1,163
    Quote Originally Posted by Kunaak
    the 6800 has nothing to do with why the 5900 scores 3 times less on 3dmark2005 than a 9800 pro...
    i think the low speed of nvidia cards is because they ignore the market's
    demand.

    today's game developers don't wanna waste their time implementing extra features for every other video chip.
    they just follow the standard, which is dx9 as it looks.

    so there is no need for 32bit color precision shaders and shader model 3.0.
    even the far cry crew took like forever to add sm3.0 support.
    other companies are not that sophisticated.
    and even with sm3.0 support there is no speed advantage over ati cards - take a look at far cry or 3dm05 benches!
    gf5/6 cards will always have a higher workload because of their higher color precision.
    remember the time people used to save fill rate by switching to 16bit color depth? i think it's comparable to ati using only 24bit in shaders.

    i'd say nvidia should stop implementing features just for marketing and build vc's for raw performance again, like in the good old riva128-to-gf4 days

  11. #11
    Xtreme Legend
    Join Date
    Jun 2002
    Location
    Helsinki, Finland
    Posts
    2,559
    If ATI gained 900 points just from a driver change, I would suspect some sort of cheating there
    That 900 points happened to be the difference between ATI's X800XT PCI-E and AGP versions. The updated driver fixed an AGP problem and the AGP version of the X800XT now scores the same as the PCI-E version.

  12. #12
    Xtreme Enthusiast
    Join Date
    Jun 2004
    Location
    Florida, USA
    Posts
    934
    Looking more and more like this driver is actually a bug fix like ATI says, and not the conspiracy theory cheating that all the nV fans are claiming.
    DFI LP LT X38-T2R
    Intel Core 2 Quad Q9450
    Thermalright Ultra-120 eXtreme
    2x2GB OCZ Reaper DDR2 PC8500
    eVGA 9800 GX2
    WD Raptor X 150GB
    PCP&P 750W Silencer
    CM Stacker 830 SE

  13. #13
    Xtreme Addict
    Join Date
    Jun 2004
    Location
    near Boston, MA, USA
    Posts
    1,955
    Ok first off, 32 bit is THE standard implemented in game engines, not 24.

    BUT before we get into more of a he said/she said, they suck/this rocks kind of thing, don't we ALL look FAR more at the results from actual games? (Am laughing because I know perfectly well this is THE benching forum)

    I mean when you are reading hardware articles do you spend more time reading what 3Dmark did or how Doom 3 or FarCry stacks up?

    So don't go too wild because optimizations will come and they are a way of life

    $.02

  14. #14
    Xtreme 3DTeam Member
    Join Date
    Jan 2004
    Location
    Dresden
    Posts
    1,163
    Quote Originally Posted by Anemone
    Ok first off, 32 bit is THE standard implemented in game engines, not 24.
    right, but while the complete image is rendered in 32bit, ati shaders have a color depth of 24bit (per channel, which means 96bit altogether)
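    for reference, here's a little sketch of what those precisions actually buy you. the sign/exponent/mantissa splits below (FP16 = 1/5/10, ATI's FP24 = 1/7/16, FP32 = 1/8/23) are how i remember the formats, so double-check before quoting them:

        # relative precision per channel for the shader float formats being discussed
        # mantissa widths from memory: FP16 = 10 bits, FP24 = 16 bits, FP32 = 23 bits
        formats = {"FP16 (NV partial)": 10, "FP24 (ATI)": 16, "FP32 (NV full)": 23}

        for name, mantissa_bits in formats.items():
            # smallest relative step the format can represent - roughly where banding starts
            epsilon = 2 ** -mantissa_bits
            print(f"{name:17s} ~{epsilon:.1e} relative error per channel")

        # the '96bit altogether' figure: 4 channels (RGBA) x 24 bits each
        print("FP24 across RGBA:", 4 * 24, "bits per pixel")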
    Last edited by Der_KHAN; 10-01-2004 at 08:34 AM.

  15. #15
    Xtreme Enthusiast
    Join Date
    Nov 2002
    Location
    Earth: The Insane Asylum of the Universe
    Posts
    974
    I'm kinda inclined to like macci's idea, because Futuremark still has issues reading AGP rates and clock speeds. Could it be that these beta drivers have only partly fixed the problem? I'm not anti-nvidia, I used their cards through the Ti 4400. It's just been pretty clear that since the 9700s came out, ATI was doing things better.
    "I got a fever, and the only prescription, is MORE COWBELL" **I can't afford a sig, all my money's in hardware.**

  16. #16
    Registered User
    Join Date
    Dec 2003
    Location
    The Netherlands
    Posts
    49
    Quote Originally Posted by Anemone
    Ok first off, 32 bit is THE standard implemented in game engines, not 24.

    BUT before we get into more of a he said/she said, they suck/this rocks kind of thing, don't we ALL look FAR more at the results from actual games? (Am laughing because I know perfectly well this is THE benching forum)

    I mean when you are reading hardware articles do you spend more time reading what 3Dmark did or how Doom 3 or FarCry stacks up?

    So don't go too wild because optimizations will come and they are a way of life

    $.02
    24bit is the standard according to Microsoft with their DirectX 9.
    - VinnieWeiss


  17. #17
    Xtreme 3DMark Addict
    Join Date
    Sep 2003
    Posts
    4,225
    I can't fully explain the 6800s, but maybe lower clock speeds and fillrate have something to do with it? the 6800u is only 400MHz while the x800xt is 520MHz! surely the shaders and whatnot are running much faster on the ati card. as for fx cards, they have very weak shader performance and although they do fine in older dx8 games, they fall flat against any 9700+ in dx9
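    to put some rough numbers on the clock/fillrate side of it, here's a quick back-of-the-envelope calculation. the pipe counts and clocks are from memory, so treat them as approximate, and raw fillrate obviously isn't the whole story for shader-heavy tests:

        # rough theoretical fillrate = pipes x clock (specs from memory - approximate!)
        cards = {
            "GeForce 6800 Ultra": (16, 1, 400),   # (pixel pipes, TMUs per pipe, core MHz)
            "Radeon X800 XT":     (16, 1, 520),
            "Radeon 9800 Pro":    (8,  1, 380),
            "GeForce FX 5900":    (4,  2, 450),
        }

        for name, (pipes, tmus, mhz) in cards.items():
            pixel_fill = pipes * mhz / 1000          # Gpixels/s
            texel_fill = pipes * tmus * mhz / 1000   # Gtexels/s
            print(f"{name:20s} {pixel_fill:4.1f} Gpix/s  {texel_fill:4.1f} Gtex/s")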
    3000+ Venice 240x9=2.16GHz(ondie controller limit) 2x512mb patriot tccd ram
    9700pro at 325/310 runs all games buttery smooth!

    9700(8 pipe softmod, 128m) at 410/325 23821 at 325/310 21287 at 275/270 19159
    9500(4 pipes, 128m) at 420/330 18454 at 275/270 13319
    9500(8 pipe softmod, 64m) at 390/310 19201 at 275/270 16052
    9500(4 pipes, 64m) at 400/310 16215 at 275/270 12560
    3dmark scores with Ti4200 and Ti4800se
    Ti4200 at 340/730 19558 at 300/650 18032 at 275/550 16494 at 250/500 15295
    3dmark scores with older gpus
    Ti500 at 275/620 14588 Ti200 at 260/540 13557 MX440 at 380/680 11551

  18. #18
    Xtreme Addict
    Join Date
    May 2004
    Location
    Sherbrooke, Quebec, Canada
    Posts
    1,175
    It's all about the drivers.

    Yesterday, I got 4465 with 61.77 drivers. I installed the beta 66.70 drivers today and bingo, I get 5730, over 1200 pts increase.
    Wolfdale e8400es @4.5Ghz / Ultra-120 Extreme
    2GB Ballistix @ 589Mhz 4-4-4-x @ 2.5V - SPI 32M
    *** Motherboards tested: DFI Blood Iron (current), Asus P5K3-dlx, DFI P965-S, P5B-Dlx, DFI RD600, Bad Axe 2, EVGA 680i, DS3 and P5W DH ***
    Fan modded Zippy 850W, 500GB 7200.11 and EVGA 8800GTX @678/1062

  19. #19
    Xtreme Addict
    Join Date
    Nov 2003
    Location
    happy place
    Posts
    2,337
    the 5900 XT series has a 4x2 parallel pipeline architecture and its shaders were slower than the ATI Radeon 9800 series. 3dmark03 and 05 use shaders heavily, and having more parallel pipelines is important. The 9800pro used an 8x1 true pipeline design, which is a lot better than 4x2.
    --===== proud owner of new razor tarantula gaming keyboard =====--

  20. #20
    Xtreme X.I.P.
    Join Date
    Nov 2002
    Location
    Shipai
    Posts
    31,147
    ati cards in general dont perform THAT much better in 2k5... its just that the fx cards suck in 2k5, thats all

    fx cards have very very bad shader performance and 2k5 is using very very long and complex shaders. im sure nvidia COULD release a new driver to boost their fx cards quite a bit, as the fx architecture is very complex and reorganizing the code should help those cards a lot, but knowing nvidia they wont do it. they will just spend time on the geforce 6xxx drivers to outperform ati again...


    Quote Originally Posted by Kanavit
    the 5900 XT series has a 4x2 parallel pipeline architecture and its shaders were slower than the ATI Radeon 9800 series. 3dmark03 and 05 use shaders heavily, and having more parallel pipelines is important. The 9800pro used an 8x1 true pipeline design, which is a lot better than 4x2.
    kanavit, please dont make statements like that. what makes you say that? its not true!

    8x1 is not generally better than 4x2. it depends on the situation, and nvidias architecture is a hybrid that has most benefits of 4x2 and 8x1 in one architecture. its actually a 4x2 design with some tweaks to let it work like 8x1 in some circumstances. anyways, the problem that makes the fx cards so slow is NOT the 4x2 design!

    and 4x2 isnt generally faster than 8x1 either! having two tmus on each pipe has many advantages over having only one...
    Last edited by saaya; 10-01-2004 at 11:32 AM.

  21. #21
    Xtreme Addict
    Join Date
    Nov 2003
    Location
    happy place
    Posts
    2,337
    Saaya, 8 parallel pipelines is indeed better than just 4x2, because of increased bandwidth! the difference between 8 pipes and 4x2 is that more bandwidth is available per clock cycle with an 8x1 pipe design than with a 4x2 design.
    --===== proud owner of new razor tarantula gaming keyboard =====--

  22. #22
    Xtreme X.I.P.
    Join Date
    Nov 2002
    Location
    Shipai
    Posts
    31,147
    no... not really, as far as i know

    and even if it were, thats just one aspect of performance. 8x1 is not generally faster than 4x2.

  23. #23
    Xtreme 3D Mark Team Staff
    Join Date
    Nov 2002
    Location
    Juneau Alaska
    Posts
    7,607
    the FX 5900XT scores less than a 9600XT on this benchmark.
    that card is something like 4x1 isn't it?
    how would 4x1 be better than 4x2?

    I don't know...
    all I know is this benchmark can't be considered reliable, with this much bias.
    the way this benchmark goes, if I was looking at raw numbers, this benchmark would make me think 5900's were worse than plain 9600's.




    "The command and conquer model," said the EA CEO, "doesn't work. If you think you're going to buy a developer and put your name on the label... you're making a profound mistake."

  24. #24
    Xtreme X.I.P.
    Join Date
    Nov 2002
    Location
    Shipai
    Posts
    31,147
    well dude, for heavy pixel shader use like in 2k5 they ARE worse than 9600s...

    its not a thing about 2k5 being biased, the fx cards just plain suck when it comes to using pixel shaders... they can run 100x more complex code than the 9600s and have dozens of extra nice lil features, but they can only run all this at really low fps...

    you always have to find the sweet spot between features and performance.

    the r3xx and rv3xx are very good shots
    the fx cards are way off... too many features and too low performance. the nv35 made it better and improved performance for current games, but its just a modification of the fx vpu, which is just extremely weak when it comes to pixel shaders...
    the x800 series is a bit* off again, with good performance but no sm3.0 support.
    the 6800 series is a great hit again, with a great set of features and very good performance.


    *about sm3.0 vs sm2.0b:
    as we all know now, sm3.0 isnt needed yet and 2.0b is very close to sm3.0 performance and feature wise. at least the features that are going to be used soon are in 2.0b as well. it reminds me of the geforce3 vs radeon8500. the geforce3 had pixel shader 1.1 and the radeon 8500 had 1.4 iirc. did it ever make a difference? no. pretty much the same can be said about sm3.0 vs sm2.0b. theres no need for sm3.0 yet, where are all the dozens of games using shader model 3.0 that nvidia was talking about? yeah, you can enable some nice extras in far cry... thats all...


    think about how pixel shaders developed in the past. it took over a year from the release of dx8 hardware until we finally had pixel shaders in games...

    i still think its not a smart move from ati to skip sm3.0
    now nvidia is first and is setting the standards... but thats atis problem, for us geeks theres no need for sm3.0 hardware yet

    sooooooo the x800s missing sm3.0 support is just a small imbalance in the golden match of features vs performance, in my opinion.


    again, how can you say the benchmark sux just because your card sux at it?
    and as i said before, of all the cards available atm the fx cards are the ones that need the most attention and need optimized drivers for every new app to really show their potential, because of their complex design and not properly functioning compiler. so the fx cards CAN get a big boost from optimized drivers (not cheating! i mean real optimizations like re-organizing the code and getting rid of bugs)
    Last edited by saaya; 10-01-2004 at 05:17 PM.

  25. #25
    Xtreme 3DTeam Member
    Join Date
    Jan 2004
    Location
    Dresden
    Posts
    1,163
    Quote Originally Posted by saaya
    no... not really, as far as i know

    and even if it were, thats just one aspect of performance. 8x1 is not generally faster than 4x2.
    ok, heres what i think:

    the difference is that nvidias architecture can render 2 layers of textures in one pass (i.e. the basic texture and one more for reflections)
    while ati cards have to perform one pass for each layer (while being much faster per pass cuz of the 8 pipes)

    so a scene with only one texture per triangle would be faster on an ati card, cuz nv's second tmu would be out of work (unless nv has some sort of clever driver optimization).
    so we have nv = 1 slow pass and ati = 1 fast pass, right? (regardless of z-pass and shader passes and whatever else there is)

    on the other hand a scene with two textures per triangle would require nv to render 1 pass and ati to render 2. if u double that to 4 textures u get 2 and 4 passes.

    at 3 textures again: nv=2 slow passes and ati=3 fast passes.

    so whenever the number of texture layers is odd, ati cards should naturally perform better.


    but in doom3 the 6800 renders in 16x0 instead of 8x2 (and i think the 5800/5900 renders 8x0 instead of 4x2, too), because they do not have to render the usual additional z-pass cuz of their special architecture.

    furthermore doom3 has a basic texture and an additional specular, diffuse and normal map, afaik. that would mean 4 altogether. (even number=good for nv)

    now that and their fast stencil buffer (for doom's stencil shadows) should be the reason for the performance advantage in doom3.


    p.s. but now imagine a scene like car low in 3dmark01:
    u have a landscape with a single texture layer
    and a car in the distance (so not much of the image is filled by the car) with maybe a second layer for reflections

    in that case the second pass of the ati card would only have to render the car and thus would be much faster than the first pass.
    while the single pass of the nv-card would still have the same speed.

    so whenever u add a small object that has one more texture than the rest ati should have a performance advantage. (so this would mean that also in a scenario with an even number of passes ati is likely to have an advantage)

    which would also mean that 8x1 is almost always better than 4x2, and 4x2 never better than 8x1 (ergo 16x1>8x2)

    i recall 3dfx using 2tmus on one pipeline on their voodoo2-boards, cuz they couldnt afford to add another pipeline, yet 1x2 would still be better than 1x1. but 2x2 would have been even better, of course.

    so all in all, using 2 tmu's on one pipeline looks like a way to save transistors on the chip, for maybe more advanced but in the end unnecessary stuff like sm3.0. (and sm3.0 consumes 60 million transistors - that is ~27% of the whole 222 million)
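    to put toy numbers behind that pass-counting argument, here's a little sketch. it assumes the only cost is passes x (pixels / pipes) at the same clock, and it completely ignores z-passes, shaders, caches and bandwidth, so it only illustrates the multitexturing side:

        import math

        # toy model: cost of filling 1M pixels with N texture layers
        # 8x1 = 8 pipes, 1 TMU each (ati-style); 4x2 = 4 pipes, 2 TMUs each (fx-style)
        def relative_cost(layers, pipes, tmus_per_pipe, pixels=1_000_000):
            passes = math.ceil(layers / tmus_per_pipe)   # layers applied per pass = TMUs per pipe
            cycles_per_pass = pixels / pipes             # pixels written per clock = pipe count
            return passes * cycles_per_pass

        for layers in range(1, 5):
            ati = relative_cost(layers, pipes=8, tmus_per_pipe=1)
            nv  = relative_cost(layers, pipes=4, tmus_per_pipe=2)
            print(f"{layers} layer(s): 8x1 -> {ati:>9,.0f} cycles   4x2 -> {nv:>9,.0f} cycles")

    with those (very simplified) assumptions the 8x1 design ties on even layer counts and wins on odd ones, which is exactly the pattern described above.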


    now i dunno how to express this in english:
    have i managed to eliminate all clarity?
    Last edited by Der_KHAN; 10-01-2004 at 06:13 PM.

