How many vertices per second can the average graphics card push?

1

How many vertices can your run-of-the-mill $100 graphics card push these days?

Polaris878

Posted 2009-10-12T05:19:34.333

Reputation: 341

Question was closed 2011-08-14T08:17:55.553

It's not about quantity, it's about quality. Would Super Mario 64 be a better game if it had 10 times the number of vertices in it? No, not really. The player rarely notices the graphical quality when playing; it's only the onlookers that really benefit. It's not like you're sitting there with a sniper rifle thinking "ooh, look at the detail on that crate." – Skizz – 2009-10-12T13:36:06.103

If you're new to games programming and you're doing it in your own time, it would be quite hard for you to push your average graphics card to its limit without doing something horribly wrong. To push the limits you usually need lots of high-detail meshes with lots of high-detail textures and lots of custom shaders, and usually these aren't cheap. – Skizz – 2009-10-12T13:48:58.153

Unfortunately, once migrated, a question cannot be migrated back without some serious changes by the team. However, the OP can register on SU and associate his account to gain the ability to edit this question. Email team@superuser.com should you have any queries on this. – BinaryMisfit – 2009-10-13T08:57:59.800

Answers

6

This isn't a very interesting number, mainly because on modern graphics cards vertex and pixel shaders share the same computational hardware. So if you were rendering the theoretical maximum number of vertices, you wouldn't be able to draw a single pixel.
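
To make that concrete, here's a back-of-envelope sketch. Every number in it is invented for illustration (a hypothetical ALU count, clock, and per-vertex/per-pixel cost, not any real card's specs); the point is just that vertex and pixel work draw from one shared cycle budget:

    #include <cstdio>

    int main() {
        // Hypothetical unified-shader GPU: every figure below is invented.
        const double alus          = 128;     // unified shader ALUs
        const double clock_hz      = 600e6;   // shader clock
        const double cycles_vertex = 16;      // ALU cycles per vertex transform
        const double cycles_pixel  = 8;       // ALU cycles per shaded pixel

        const double budget = alus * clock_hz; // total ALU cycles per second

        // Sweep the fraction of the budget given to vertex work.
        for (double f = 0.0; f <= 1.0; f += 0.25) {
            double verts  = f * budget / cycles_vertex;
            double pixels = (1.0 - f) * budget / cycles_pixel;
            std::printf("vertex share %3.0f%%: %6.0fM verts/s, %6.0fM pixels/s\n",
                        f * 100.0, verts / 1e6, pixels / 1e6);
        }
        // At a 100% vertex share you hit the quoted peak vertex rate,
        // but pixel throughput is exactly zero.
        return 0;
    }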

Sebastian Tusk

Posted 2009-10-12T05:19:34.333

Reputation: 61

3

The GeForce 6600 can do 375 million vertices per second, and it has a street price of substantially less than $100. The GeForce 9800 retails for about $100, but at that level they don't seem to quote vertices per second as a measurement anymore.
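
To get a feel for what a figure like that means in practice, a quick bit of arithmetic turns the quoted peak into a per-frame budget (theoretical best case only; real scenes never reach the peak):

    #include <cstdio>

    int main() {
        const double verts_per_second = 375e6;  // the quoted GeForce 6600 peak
        const double rates[] = {30.0, 60.0, 120.0};
        for (double fps : rates) {
            std::printf("%5.0f fps -> %6.2fM vertices per frame at peak\n",
                        fps, verts_per_second / fps / 1e6);
        }
        return 0;
    }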

Robert Harvey

Posted 2009-10-12T05:19:34.333

Reputation: 1,826

2

What exactly would vertices per second measure? You're leaving out too many variables. What would you use the metric for? As a sane default or upper bound for your application's draw calls? For benchmarking? If so, you probably want to benchmark fill rate and triangle draw rate instead. See my post regarding this here: https://stackoverflow.com/questions/1493581/how-to-go-about-benchmarking-a-software-rasterizer/1504635#1504635
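
A real GPU fill-rate test needs a rendering context (OpenGL/Direct3D plus a windowing library) and is too long to sketch here, but the measurement pattern itself is simple. Here's a minimal CPU-side version of it in C++, timing a software framebuffer fill; the resolution and repetition count are arbitrary choices for illustration:

    #include <chrono>
    #include <cstdint>
    #include <cstdio>
    #include <vector>

    int main() {
        const int width = 1024, height = 1024;  // arbitrary test resolution
        std::vector<std::uint32_t> framebuffer(
            static_cast<std::size_t>(width) * height);

        const int frames = 200;                 // arbitrary repetition count
        auto start = std::chrono::steady_clock::now();
        for (int f = 0; f < frames; ++f) {
            // Vary the fill value so each pass does real work.
            std::uint32_t color = 0xFF000000u | static_cast<std::uint32_t>(f);
            for (auto& px : framebuffer) px = color;
        }
        auto stop = std::chrono::steady_clock::now();

        // Read the buffer back so the compiler can't discard the writes.
        std::uint64_t checksum = 0;
        for (auto px : framebuffer) checksum += px;

        double seconds = std::chrono::duration<double>(stop - start).count();
        double mpixels = double(width) * height * frames / seconds / 1e6;
        std::printf("software fill rate: %.1f Mpixels/s (checksum %llu)\n",
                    mpixels, static_cast<unsigned long long>(checksum));
        return 0;
    }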

On the other hand, if I take your question literally, I sadly don't know of any trustworthy sources for low-end graphics hardware performance. You surely don't want to trust the vendors themselves. I do know of a good usage survey, though, and that's the monthly Steam hardware survey: http://store.steampowered.com/hwsurvey/

Since I'm not aware of any sites with good, deterministic benchmark statistics, knowing which cards (and technologies) are popular is the second-best thing if you need some statistics to help you make the right choices for your target group.

Mads Elvheim

Posted 2009-10-12T05:19:34.333

Reputation:

2

Further to Sebastian's comment, bear in mind that due to unified shader hardware the number of vertex transformations is largely irrelevant. What constitutes a transform, anyway? You can bet they just apply a single matrix multiplication to a position-only vertex (I know Microsoft and Sony did that for their Xbox and PS2 vertices-per-second marketing bumf).
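
For reference, the kind of "transform" those marketing figures count is nothing more than the following. A minimal sketch in C++ (my own illustration, not anyone's actual driver code):

    #include <cstdio>

    struct Vec4 { float x, y, z, w; };

    // Column-major 4x4 matrix, m[column][row], following OpenGL convention.
    struct Mat4 { float m[4][4]; };

    // The whole "vertex transform" being counted: 16 multiplies and
    // 12 adds per position-only vertex, and nothing else.
    Vec4 transform(const Mat4& M, const Vec4& v) {
        Vec4 r;
        r.x = M.m[0][0]*v.x + M.m[1][0]*v.y + M.m[2][0]*v.z + M.m[3][0]*v.w;
        r.y = M.m[0][1]*v.x + M.m[1][1]*v.y + M.m[2][1]*v.z + M.m[3][1]*v.w;
        r.z = M.m[0][2]*v.x + M.m[1][2]*v.y + M.m[2][2]*v.z + M.m[3][2]*v.w;
        r.w = M.m[0][3]*v.x + M.m[1][3]*v.y + M.m[2][3]*v.z + M.m[3][3]*v.w;
        return r;
    }

    int main() {
        Mat4 identity = {{{1,0,0,0},{0,1,0,0},{0,0,1,0},{0,0,0,1}}};
        Vec4 v = {1.0f, 2.0f, 3.0f, 1.0f};
        Vec4 r = transform(identity, v);
        std::printf("(%g, %g, %g, %g)\n", r.x, r.y, r.z, r.w);
        return 0;
    }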

In this day and age the most meaningful figures are memory bandwidth and shader cycles per clock. Of course, drivers are free to reorder instructions to improve throughput. A lot of latency can be hidden this way, which lets the hardware produce results faster.
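
The same latency-hiding idea is easy to demonstrate on a CPU, as a rough analogy rather than actual GPU driver behaviour: a sum with one accumulator forms a serial dependency chain, while independent accumulators let the hardware overlap the add latency. (The effect depends on compiler flags; with aggressive optimization the compiler may vectorize both versions.)

    #include <chrono>
    #include <cstdio>
    #include <vector>

    // One accumulator: every add waits on the previous one.
    float sum_serial(const std::vector<float>& v) {
        float s = 0.0f;
        for (float x : v) s += x;
        return s;
    }

    // Four independent accumulators: the adds can overlap in the
    // pipeline, hiding each add's latency behind the others.
    float sum_pipelined(const std::vector<float>& v) {
        float s0 = 0, s1 = 0, s2 = 0, s3 = 0;
        std::size_t i = 0, n = v.size() / 4 * 4;
        for (; i < n; i += 4) {
            s0 += v[i]; s1 += v[i+1]; s2 += v[i+2]; s3 += v[i+3];
        }
        for (; i < v.size(); ++i) s0 += v[i];
        return (s0 + s1) + (s2 + s3);
    }

    template <typename F>
    void time_it(const char* name, F f, const std::vector<float>& v) {
        auto t0 = std::chrono::steady_clock::now();
        float s = f(v);
        auto t1 = std::chrono::steady_clock::now();
        double ms = std::chrono::duration<double, std::milli>(t1 - t0).count();
        std::printf("%s: %.3f ms (sum=%g)\n", name, ms, s);
    }

    int main() {
        std::vector<float> v(1 << 24, 1.0f); // 16M elements, arbitrary size
        time_it("serial   ", sum_serial, v);
        time_it("pipelined", sum_pipelined, v);
        return 0;
    }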

Goz

Posted 2009-10-12T05:19:34.333

Reputation: 135