ATI & Nvidia Duke It Out In New Gaming War
geek_on_a_stick writes "I found this PC World article about ATI and Nvidia battling it out over paper specs on their graphics cards. Apparently ATI's next board will support pixel shader 1.4, while Nvidia's GeForce3 will only go up to ps 1.3. The bigger issue is that developers will have to choose which board they want to develop games for, or, write the code twice--one set for each board. Does this mean that future games will be hardware specific?"
Simple way to help you choose (Score:5, Funny)
Hell, Half-Life and Doom are barely distinguishable from each other if your beer-goggles are thick enough. And it doesn't matter if the frame-rate slows down through lack of processing power - your reactions are already terrible from the booze.
Yet again, beer is the cause of, and solution to one of life's problems (thanks to Homer for the [slightly paraphrased] quote).
Different battle, same war (Score:1)
This sucks (Score:3, Insightful)
Now the hardware industry has moved away from that, instead giving us free drivers for windows. Which not only are crappy in their first release, but are also useless on other platforms which the vendor decides not to support.
Bring hardware standards back, and MS will lose much of the power it's able to leverage through the high degree of hardware support their system provides. I for one would sacrifice a little technological progress for the ability to have things work together as expected out of the box.
Re:This sucks (Score:2, Insightful)
Standards are always developed later. Maybe you fail to grasp how new this technology is. GeForce 3 was the first video card to support hardware vertex/pixel shaders, and it was released two months ago.
Remember when things were "100% compatible"? ... SoundBlaster compatible
Do you really remember what those days were like, when sound cards *just* came out? You had to pick which sound card you wanted to lock your life into. Adlib? SoundBlaster? ELS? I can hardly remember anymore.
ATI and nVidia *are* arguing about standards right now. They're working from a common frame. It's not that bad. You're just exaggerating the problem.
Re:This sucks (Score:2)
The problem with compatibility in hardware is that it wasn't. How many "100% VESA compatible" devices do you remember that actually were 100% VESA compatible? "Oh? Your video card's VESA implementation is non-standard? Well, just BUY Display Doctor, and you'll be okay!" No, screw that. Not only did the standard hold back hardware, it didn't even do what it was supposed to.
At least with a proprietary API such as DirectX, the inferior, crash-prone, nasty, closed Microsoft OS has one thing Unix still doesn't: support for all video cards right off the shelf, from day one. Most software works without any fight.
Well, there are some video card vendors who seem to have trouble writing drivers even for a well established API; we'll leave those guys out of this since most of them are quickly dying or are now dead. Good riddance.
I really don't think the way NVidia and ATI are going to add their own unique features is going to make THAT much of a difference. At best, some coders will take advantage of one or the other, and at worst the rest will ignore anything not built into DirectX and the extra features won't matter.
Better to have features and not need them, than to need them and not have them.
Re:This sucks (Score:1)
Most cards do work from day one, otherwise they'd have a very short life-span. There are a lot of examples of cards that died horribly on the market due to their crap drivers.
And you can believe DirectX hasn't helped video card vendors all you want, but it doesn't change the truth.
The fact is -- most video cards almost never were well supported by any platform before DirectX started to mature. Even in the earliest days of Windows 95 (and certainly back in the 3.1 days) it wasn't uncommon for a video card to have features that just were never used. Resolutions that weren't available though technically possible, color depths that you couldn't use, etc.
I remember seeing many video cards capable of resolutions much higher than 1024x768 in 24-bit color years before you could use such features, simply because even though the hardware could do it, there weren't drivers for it that worked well.
Once, back in the DOS days, a friend of mine had a video card capable of some insane high resolution, but while VPIC (or was it CSHOW? Some image viewing program) DID support his video card, Windows certainly did not.
If you don't think that DirectX has helped Windows attain the greatest amount of hardware support, you are being unrealistic. Market share alone only ensures so much. Don't believe me? Explain to me how it is that not everything has Linux drivers despite the amazing growth of Linux?
It's not because the market share isn't there.
Re:This sucks (Score:1)
It's great that DirectX supports all of this funky hardware; this wouldn't be a problem under Linux, of course, because the video card manufacturer could write their own kernel module that makes it work correctly with the OS. Now, if you're talking about having a standard set of abstractions for all video card manufacturers to write to, this very article is discussing how two manufacturers' cards aren't going to work equally well even with the benefit of DirectX.
I can explain that very easily - "amazing growth" does not equal "majority market share". The amazing growth has mostly been in the server market, so that there's no pressure on consumer video card makers to worry about Linux.
Market share does matter, simply because you can't make a video card that doesn't work with Windows and expect to stay in business. It may be that there's been some contribution as well from DirectX, and that it makes the process less painful for video card manufacturers, but I don't think you can really pin too much on it, because even if Windows had the worst graphics API on the market, card developers would still have to do whatever it takes to support Windows from the start, or else start looking for other employment.
A better test would be: if Linux had a wonderful graphics abstraction layer (I'm not an expert here, but perhaps SDL qualifies? I'm not sure what Linux needs to add in this area) and this graphics layer was much better than DirectX, would it automatically gain a huge amount of market share and have great support from video card makers? The answer is: not if the Linux standard was incompatible with the dominant market player, DirectX. Nobody's going to switch their support to a superior technical alternative if it's going to lose them money. DirectX may be superior to what came before, but without Microsoft's market dominance it would not have necessarily ensured complete driver support for Windows.
Re:This sucks (Score:1)
No, actually, I'm willing to bet that if Linux had a really easy to implement, highly functional API, similar to DirectX or not, there would be a much higher level of support from video card vendors.
I admit, it still wouldn't be as high as Windows because of the whole "market share" issue, but it would be much higher than it is now.
It might even generate some Linux-only video cards. Don't believe it? Blah. It's not as if the Amiga didn't have its share of video cards, and none of them work with the PC at all. The Amiga had a MUCH SMALLER market share than Linux, even in its prime.
Cutting through the hysteria... (Score:5, Informative)
There seems to be a large amount of confusion as to what this means, and some people seem to be jumping off the deep end (as usual), so here's an attempt to clear up some of the issues.
(PS = Pixel Shader in the following points)
Hope this makes things clearer.
Pixel/Vertex shaders are an attempt to provide developers with low-level access while still maintaining the abstraction needed to support multiple sets of hardware.
To be honest, compared to the issues of shader program proliferation due to the number/type of lights you have in a scene etc., this isn't that big a deal. You might as well complain that writing a PS that uses PS1.3 means that you're 'choosing' GeForce 3 over all the existing cards that don't support PS1.3. Or that when bump mapping was added to DX and you used it, you were choosing the cards that did bump mapping over those that didn't.
DirectX is supposed to let you know the capability set of the gfx card, and allow you to use those capabilities in a standard way. The pixel shader mechanism is just another example of this at work.
As ever with games development, you aim as high as you can, and scale back (within reason) when the user's hardware can't cope with whatever you're doing.
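To make the capability point concrete, here's a minimal sketch of that kind of check against the Direct3D 8 caps structure (illustrative only; device creation is omitted and 'device' is just assumed to be a valid IDirect3DDevice8*):

    #include <d3d8.h>

    // Sketch: pick a shader path at startup based on what the card reports.
    void ChooseShaderPath(IDirect3DDevice8* device)
    {
        D3DCAPS8 caps;
        device->GetDeviceCaps(&caps);

        if (caps.PixelShaderVersion >= D3DPS_VERSION(1, 4)) {
            // Radeon2-class part: take the PS 1.4 path.
        } else if (caps.PixelShaderVersion >= D3DPS_VERSION(1, 1)) {
            // GeForce3-class part: take the PS 1.1-1.3 path.
        } else {
            // No pixel shaders: fall back to fixed-function multitexture.
        }
    }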
Trust me, this is not news for games developers :-)
Tim
new hardware & games (Score:1)
New graphics hardware from ATI and NVidia is being developed to support MS's DirectX spec. Game developers just need to pick the level of DirectX that they want to focus on with their engine (and both companies' cards support pixel shaders up to v1.3; only ATI also has 1.4 at the moment).
So you code it once, and it works on both hardware. If you really like the ATI cards and have some time to spare, you might hack in shader 1.4 support.
As far as open source support is concerned, I believe both companies are making OpenGL extensions for these technologies available. Here you have a split in the standard, and it becomes harder to support multiple hardware with one piece of code. But the support is there, and it should be possible to have it run under any free (GNU-style and otherwise) system. As much as I don't like MS, DirectX is under more constant revision than OpenGL (from what I've seen), and does a better job of providing the features that a cutting-edge game developer would want to take advantage of.
Aren't they now? (Score:3, Interesting)
Heck, I even remember Carmack talking on Slashdot [slashdot.org] about things like "Nvidia's OpenGL extensions" and other features of specific cards that he was having to take advantage of.
Yeah, the new whiz-bang game will probably be able to limp along on whatever you've got, but likely will only be optimized for a few special cards.
The video-card industry has gotten really awful. I hope that someone pulls it back in line and we get back on a standards track where card manufacturers contribute to the standards efforts and then work hard to make the standard interface efficient.
Re:Aren't they now? (Score:2)
That's the PC trade-off - it can grow as a platform, but that gives developers a moving and fractured target. If you don't like it then get a console, which has exactly the opposite characteristics.
I hope that someone pulls it back in line and we get back on a standards track where card manufacturers contribute to the standards efforts
Standards are for stable technologies. As soon as video card makers agree on a feature (eg multitexturing or texture-compression), it gets standardised.
Hardware specific games? - Of course! (Score:1)
The problem is not usually for the Developers (Score:3, Insightful)
END COMMUNICATION
Re:The problem is not usually for the Developers (Score:1)
It's much like the choice to support AMD's 3DNOW or Intel's SIMD instructions.
..which are converging. The Palomino Athlon core now supports Intel's SSE opcodes as well as 3DNow, and it is promised that the Hammer will also support SSE2. One can only hope that Nvidia and ATI's pixel shaders can also be comfortably converged into a common interface (sounds like they pretty much will be in DirectX 8.1, hopefully it won't be long until there's a common ARB extension for them in OpenGL too).
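For what it's worth, the CPU side of that choice is just a runtime feature check. A rough, illustrative sketch (not from the original post) using GCC's <cpuid.h>, with the documented CPUID feature bits for SSE and 3DNow!:

    #include <cpuid.h>
    #include <cstdio>

    // Illustrative only: query CPUID so the engine can pick an SSE, 3DNow!,
    // or plain-x87 math path at startup.
    int main()
    {
        unsigned int eax, ebx, ecx, edx;
        bool has_sse = false, has_3dnow = false;

        if (__get_cpuid(1, &eax, &ebx, &ecx, &edx))
            has_sse = (edx >> 25) & 1;            // CPUID.1:EDX bit 25 = SSE
        if (__get_cpuid(0x80000001, &eax, &ebx, &ecx, &edx))
            has_3dnow = (edx >> 31) & 1;          // extended EDX bit 31 = 3DNow!

        std::printf("SSE: %d  3DNow!: %d\n", (int)has_sse, (int)has_3dnow);
        return 0;
    }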
Some bleeding edge features are initially only supportable by writing specific code, but that is the exception.
And in the case of 3D hardware, the bleeding edge features are sure to be used for extra "flash", not vital functionality. A game might have phong-shaded bump-mapped objects on a Radeon2, but it will still run with slightly less exciting graphics on your elderly TNT2.
Games are already hardware-specific (Score:1, Informative)
Max Payne [maxpayne.com], for instance, was developed mostly with GeForce cards. This means that by choosing their standard developer hardware setup the developers are actually becoming hardware-dependent and are, in effect, saying that these cards are the ones that you should use to play their game.
This is really no news.
"Optimized for Pentium III" is what read on every possible piece of marketing material with the late Battlezone II.
I would make a conclusion that hardware dependancy of games goes far beyond than just the graphic cards. Use this processor to get better results, use this sound card to hear the sounds more precisely, etc.. It seems that game industry has big bucks, and every hardware vendor wants to make sure that when the next big hit comes, everyone needs to buy their product in order to get that +15% FPS out of it.
Does there have to be a problem for developers? (Score:1)
Eventually, all the other manufacturers will catch up with the new features, and the extension will become integrated into the standard.
An analogy could be (think ye olde days) detection of a sound card, and only enabling sound if one is available.
If even that is too much work for the developer, then just don't support the new extension - the graphics will still look just as pretty to the untrained (read: consumer's) eye.
Re:Does there have to be a problem for developers? (Score:1)
Do you work for Microsoft?
This is such a non-story (Score:2, Interesting)
But what's with all of these doom & gloom posts about fragmenting games for specific hardware? There's already a ton of features that may or may not be available in Direct3D or OpenGL depending upon your underlying hardware and driver. In Direct3D these are known as 'capabilities', in OpenGL they are 'extensions', and in either API you can easily check for their existence.
Game developers are already doing this for features such as dot3 bump mapping. Some boards support this feature in hardware, some don't, so your code is free to check whether it's available and use it if it is, or ignore it (or fall back to some other method) if it's not.
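A rough sketch of what that check looks like in Direct3D 8 (illustrative only, not anyone's shipping code; 'device' is assumed to be a valid, already-created device):

    #include <d3d8.h>

    // Ask the caps whether the DOT3 texture-stage operation is available, so
    // the renderer can choose between per-pixel bump lighting and a fallback.
    bool SupportsDot3Bump(IDirect3DDevice8* device)
    {
        D3DCAPS8 caps;
        if (FAILED(device->GetDeviceCaps(&caps)))
            return false;
        return (caps.TextureOpCaps & D3DTEXOPCAPS_DOTPRODUCT3) != 0;
    }

    // if (SupportsDot3Bump(device))  -> set up the DOT3 stage for bump lighting
    // else                           -> bake lighting into the diffuse map, or skip bumps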
These shaders aren't really any different from that.. you write code to look at the shader version supported and either use 'new improved' shaders or 'older style shaders' depending upon the platform.
Yes, it's more work for the programmer/artists to support a fallback mode, but that's the price of targeting cutting edge gaming hardware while still supporting users of older systems. It always has been and always will be.
As to the dramatic question of ATI vs NVidia, I'd say that NVidia has the early advantage due to the XBox. Considering how similar the XBox graphics system is to the PC GeForce 3, it's pretty much guaranteed that all of the major gaming engines being used to create most 'big' games these days will target GeForce3/XBox features specifically, and features of 'other boards' (such as ATI) only as a bonus if there's enough time, or if ATI lays down enough cash on a cross-marketing deal.
Of course, if Microsoft manages to flub the XBOX release to a staggering degree, all bets as to the future are off.
On hardware compatibility... (Score:1)
I have heard that there is work ahead to do this with Pixel Shaders. Once Pixel Shaders become sufficiently general, all you need to do is re-target the back end of the compiler and you're set.
Henry
Not that big of a problem (Score:2, Interesting)
We are currently able to target both pixel shader versions in DirectX, and hopefully soon in OpenGL. We currently ignore any features the shader code tries to use that the hardware doesn't support. So rendering the shader surface on a GeForce1 will look much worse than on a full featured card, but we don't waste time emulating it.
For reference on similar techniques check out Proudfoot et al., 'A Real-Time Procedural Shading System for Programmable Graphics Hardware'. (Though that's based on NVIDIA hardware, it's extendable to new features as well.)
Do You Remember Glide? (Score:1)
Well, not long ago, we had just 3dfx (and Glide) as the only option for 3D games.
Even if there were other 3D hardware and other technologies (OpenGL and the rising Direct3D), Glide (and 3dfx) was the default choice.
Hardware Specific? (Score:1, Interesting)
Well, yes actually. Haven't they always been? We've had 3Dfx versus PowerVR, Glide versus OpenGL, Direct3D versus OpenGL...
It goes all the way back to floppy versus CD, Win3.1 versus Win32s, 16 colours versus 256...
Every game has system requirements (even if you're only talking about a scale like processing power), and always has done. I still remember the shock when I realised I'd need to get a tape drive to complement the disk drive in my CPC664, just to play some of the games!
The thing people miss here (Score:2)
Re:The thing people miss here (Score:2)
hmm, but it seems like game developers don't do that. There is a segment of gamers that are attracted to the newest hardware _because_ it has the latest features, and they then want to buy a game that uses that feature they just paid a $$$ premium for. Totally wrong priorities, but it seems to happen.
Sure, write your game for the best compatibility across different hardware, but then you run the risk that PC Gamer magazine won't drool all over themselves in their review because the reviewer ran your demo on his rig with a GeForce XXI, but your game didn't have the latest 'cyclops, semi-transparent, half-inverse bump/pixel grinding' feature.
A 14-year old reading pcgamer has no idea what this feature really does for him, but he knows that dad is getting him a GeForce XXI for xmas, so this game isn't going to be on his santa list.
Re:The thing people miss here (Score:1)
it's not about tech - it's about lock-in (Score:1)
two points:
History Repeats Itself (Score:2, Interesting)
Looks like we're back to the days of yore, when you (the developer) got to choose to support a specific card (3dfx or the others that didn't survive) because there was no DirectX support... because there was no DirectX. Then you (the consumer) got the shaft if you didn't have the right card, unless the developer later came out with a binary that would support your card's features. But if it wasn't an uber-popular game, this usually didn't happen.
So why are Nvidia and ATI forcing developers to go back to the stone age of accelerated polygons? Oh that's right... Me likes pretty picture.
direct x is not open, OpenGL is, we should use OGL (Score:1, Insightful)
It's scary that so many people are relying on M$'s proprietary graphics technology. At any time they could discontinue it, or change the API in such a way as to make all games broken. I wouldn't put it past them.
subatomic
http://www.mp3.com/subatomicglue [mp3.com]
Re:direct x is not open, OpenGL is, we should use (Score:1, Informative)
OpenGL is an open standard, but the source code isn't open--there isn't even any source code! It's just a specification, and each individual vendor must implement according to that specification. For example, Nvidia makes an OpenGL implementation that is accelerated by their graphics cards, MS makes an implementation that is software only, and 3dfx made a mini-implementation at one point.
I think maybe Mesa is open-source? Not sure. But the actual implementation inside the vendor's API is whatever they want, and is probably closed (see Nvidia). The only requirement is to follow the specification and the rendering pipeline properly (so transforms/shading/etc will be applied the same through any OGL implementation).
Actually, it IS open... (Score:2)
It's Open Source.
Re:direct x is not open, OpenGL is, we should use (Score:2)
OpenGL is an open standard, but the source code isn't open--there isn't even any source code!
http://oss.sgi.com/projects/ogl-sample/
Re:direct x is not open, OpenGL is, we should use (Score:1)
Using COM interfaces easier than using C/C++ ones? (Score:2)
Re:Using COM interfaces easier than using C/C++ on (Score:1)
Re:direct x is not open, OpenGL is, we should use (Score:2, Troll)
OpenGL is written for a UNIX environment, DX is for a Windows environment. And yes, OpenGL is open source and very easy to learn, but it still has a lot of drawbacks, one of them being those dinosaurs that run it.
OpenGL does NOT change very much, which has both good and bad sides; for example, this thread discusses pixel shading, which is a feature OpenGL does not natively support. I do not know how hard this is to implement in DX, but I figure that since they are even talking about it and not just dismissing it as some "toy" like the OpenGL board seems to do..
Re:direct x is not open, OpenGL is, we should use (Score:3, Informative)
OpenGL is written for a UNIX environment, DX is for a Windows environment
No. OpenGL is an API, with bindings on UNIX platforms, on the Mac, Win32, Linux, PSX2, XBox and so on. Pretty much all 3D hardware of note has an OpenGL driver.
OpenGL does NOT change very much, which has both good and bad sides, for example, this threads discusses pixel shading, which is a feature OpenGL does not natively supports.
OpenGL does change a lot. Hardware vendors are free to add functionality via extensions [sgi.com], something they cannot do with D3D without going through microsoft.
Also, it does support what DX8 calls pixel shading. It exposes it through quite a different interface than DX8 (see here [sgi.com] and here [sgi.com]); this much more closely represents what the hardware is actually doing.
Re:direct x is not open, OpenGL is, we should use (Score:1)
OpenGL and DirectX don't compete with each other, the only comparison that you can really draw is between Direct3D (a small component of DirectX) and OpenGL. You can use OpenGL and DirectX in the same project, and many games do. There isn't anything better than DX when it comes to an interface for i/o devices, etc. on the windows platform. But as far as OpenGL and D3D, they are directly comparable, and there are minor trade-offs when you choose between the two.
>OpenGL is written for a UNIX environment, DX is for a Windows environment.
Okay, what is the point here? OpenGL was ported to the Windows environment long ago, and it runs fine, even better than on UNIX (for most workstations) because it has direct access to the hardware layer; UNIX (until very recently) had to go through the X protocol (nice and portable, but slow).
>OpenGL does NOT change very much[....]
True, the core of OpenGL doesn't change very much, but this is very good. With every release of D3D, the API changes drastically, so you must relearn it every time. OpenGL, on the other hand, doesn't change (much), but it has extensions that make it dynamic. The pixel/vertex shading on OpenGL has the same features that the D3D version has (supposedly it is better, if you believe opengl.org). So, by design, it doesn't matter if OpenGL "natively" supports shaders; the API was designed to be flexible and extendable.
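As an illustration of how that extension mechanism gets used in practice, here's a minimal sketch (Win32, GL context assumed current; multitexture is just the example extension, not anything specific to this debate):

    #include <windows.h>
    #include <GL/gl.h>
    #include <cstring>

    // Minimal extension-loading pattern: check the extension string,
    // then fetch the entry point with wglGetProcAddress.
    typedef void (APIENTRY *PFNGLACTIVETEXTUREARB)(GLenum texture);
    PFNGLACTIVETEXTUREARB pglActiveTextureARB = 0;

    bool InitMultitexture()
    {
        const char* ext = (const char*)glGetString(GL_EXTENSIONS);
        if (!ext || !std::strstr(ext, "GL_ARB_multitexture"))
            return false;   // driver doesn't expose the extension
        pglActiveTextureARB =
            (PFNGLACTIVETEXTUREARB)wglGetProcAddress("glActiveTextureARB");
        return pglActiveTextureARB != 0;
    }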
I am not saying that since OpenGL is "open" and extensible, the choice of what to use is a no-brainer; it is far from that. Some choose one or the other for many reasons, sometimes political/ideological, sometimes monetary (MS paid big bucks to early adopters of D3D), etc. It is far from an easy decision. The only good part about it: it doesn't really matter; you can do anything in one that you can do in the other, if you know the API well enough. And that includes pixel/vertex shaders, etc.
what is the difference between 1.3 and 1.4? (Score:1)
So what if we all just used OpenGL instead? Open standard, etc... it would definitely be worth it to pressure the ARB to extend their spec to shaders. NVIDIA's shader extensions would have to be used, because the OpenGL ARB is very slow in adopting new standards (like pixel shading).
subatomic
http://www.mp3.com/subatomicglue [mp3.com]
Oh good. A pissing contest... (Score:5, Informative)
I have a few disparate thoughts on this subject, but rather than scatter them throughout the messages I'll put 'em all in one place.
ATI are attacking what is possibly the weakest part IMHO of DirectX 8 - the pixel shaders. Pixel shaders operate at the per-fragment level, rather than at the per-vertex level of the vertex shaders, which were actually Quite Good. The problem with Pixel Shaders 1.1 is that, to paraphrase John Carmack, "You can't just do a bunch of math and then an arbitrary texture read" - the instruction set seemed to be tailored towards enabling a few (cool) effects, rather than supplying a generic framework. Again, to quote Carmack, "It's like EMBM writ large". Read a recent
If you read the ATI paper, they don't really tell you what they've done - just a lot of promises, and a couple of "more flexibles!", "more better!" kind of lip-service. I don't care about reducing the pass number. Hardware is getting faster. True per-pixel Phong shading looks nice, but then all they seem to do extra is allow you to vary some constants across the object via texture addresses. Well that's great, but texture upload bandwidth can already be a significant bottleneck, so I don't know for sure that artists are gonna be able to create and leverage a separate ka, ks etc. map for each material. (I did enjoy their attempts to make Phong's equation look as difficult as possible.)
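For reference, the lighting model being argued about here is just the standard Phong equation, something like

    I = k_a i_a + k_d (\mathbf{N} \cdot \mathbf{L})\, i_d + k_s (\mathbf{R} \cdot \mathbf{V})^{n}\, i_s

and the extra "flexibility" ATI is advertising amounts to fetching k_a, k_s and the exponent n from texture maps per pixel instead of treating them as per-material constants.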
True bump-mapping? NVidia [nvidia.com] do a very good looking bump-map. Adding multiple bump-maps is very definitely an obvious evolutionary step, but again, producing the tools for these things is going to be key. Artists won't draw bump-maps.
Their hair model looks like crap. Sorry, but even as a simple anisotropic reflection example (which again NVidia have had papers on for ages) it looks like ass. Procedural textures, though, are cool - these will save on texture uploads if they're done right.
What does worry me is that the whole idea of getting NVidia and Microsoft together to do Pixel Shaders and Vertex Shaders is so that the instruction set would be universally adopted. Unfortunately, ATI seem to have said "Sod that, we'll wait for Pixel Shader 1.4 (or whatever) and support that." I hope that doesn't come back to bite them. DirectX 8.0 games are few and far between at the moment, so when they do come out there'll be a period when only Nvidia's cards will really cut it (I don't think ATI have a PS 1.0 implementation, someone please correct me if I'm wrong) - will skipping a generation hurt ATI, given that they're losing the OEM market share as well?
I dunno, this just seems like a lot of hype, little content.
Henry
Re:Oh good. A pissing contest... (Score:1)
This would be done more through code than by an artist. You only need to either write a shader to do it properly, or just assign a ka/kd/ks to each material, and that isn't exactly difficult. After all, in the real world, most surfaces have a pretty much constant reflectance function for the whole dang thing. Just look around... Yes, things have different *colors* across their surfaces, but the actual reflectance is usually the same.
Artists won't draw bump-maps.
Why not? They draw textures now, I'm sure they would have no problem drawing a bump map if need be. Besides, if there is support for procedural textures, you can just use those to generate a bump map.
Re:Oh good. A pissing contest... (Score:1)
True, but what you go on to say is that the Phong constants are indeed constant across a surface - then ATI saying 'oh look - you can programmatically change ka and ks' becomes useless because you won't need to change it. This assumes that you are working on a one material : one texture map correspondence. If, like their examples, you have say metal and stone on one map, then varying some constants becomes necessary. But then this requires another map (or even two) at close to the resolution of the source diffuse map. You can do per-material ka/kd/ks now with no trouble at all. Per-fragment is a bit more involved.
Why not? They draw textures now,
I'm not an artist, so I don't know. But I don't know that the tools are there for them to draw bump-maps, and you have to admit that using an RGB channel as a three component normal vector can't be the most intuitive way to draw things. Much better to procedurally generate, like you say.
Henry
Re:Oh good. A pissing contest... (Score:1)
True enough. I'm thinking more in a high-end graphics environment rather than a gaming one, where that situation wouldn't come up very often really--it's just a way of being lazy and putting multiple objects into one--not sure if I'm being clear.
But you're right, you would have to change those constants across a surface, especially in games, where I suppose surfaces might be merged together for optimization's sake.
As far as the RGB/normal channel goes, I think most bump mapping is sufficiently done with just a grayscale-type image... much like a heightmap. Since bump maps inherently give the appearance of micro geometry, some accuracy that might be achieved through an RGB bump map can be set aside for the sake of ease and speed (even if the speedup is in development!).
Re:Oh good. A pissing contest... (Score:4, Informative)
The framebuffer is only 8 bits per channel at most, while pixel shader hardware has higher internal precision per channel; keeping the math in the chip as well as saving read-back from the framebuffer saves bandwidth AND improves quality.
True per-pixel Phong shading looks nice, but then all they seem to do extra is allow you to vary some constants across the object via texture addresses
Pixel shaders enable arbitrary math on pixels; it isn't a fixed function Phong equation with a few more variables added. Maybe an artist wants a sharp terminator, cel shading, a fresnel term, or an anisotropic reflection.
All these are performed using 4D SIMD math operations, just like they were in 1.1: Add, Subtract, Multiply, Multiply-Add, Dot Product, Lerp, and Read Texture. But texture reads can now happen AFTER more complex math; before, only a few set math ops were possible during a texture read. It's all in the DX8 SDK, which anyone can download.
Well that's great, but texture upload bandwidth can already be a significant bottleneck
"Texture upload?" This isn't a problem; with DX8.1 cards having 64MB of memory for textures, why would developers be uploading textures per-frame? If you are talking about texture reads by the pixel shader, this also isn't a bottleneck. Reading geometry from the AGP bus is the bottleneck.
Artists won't draw bump-maps.
Sure they will, (heck, I do) look at any x-box game, they are all over the place. They won't draw in vectors-encoded-as-colors, they'll draw height maps, which would be converted off-line into normal maps.
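The off-line conversion itself is nothing scary; roughly something like this (an illustrative sketch only: central differences over the height field, unit normal packed into RGB):

    #include <cmath>

    struct RGB { unsigned char r, g, b; };

    // Turn a grayscale height map into a normal map: finite differences give
    // the surface slope, and the resulting unit normal is packed into 0..255.
    void HeightToNormalMap(const float* height, RGB* out,
                           int w, int h, float bumpScale)
    {
        for (int y = 0; y < h; ++y) {
            for (int x = 0; x < w; ++x) {
                float hl = height[y * w + (x > 0     ? x - 1 : x)];
                float hr = height[y * w + (x < w - 1 ? x + 1 : x)];
                float hd = height[(y > 0     ? y - 1 : y) * w + x];
                float hu = height[(y < h - 1 ? y + 1 : y) * w + x];

                float nx = (hl - hr) * bumpScale;
                float ny = (hd - hu) * bumpScale;
                float nz = 1.0f;
                float len = std::sqrt(nx * nx + ny * ny + nz * nz);

                out[y * w + x].r = (unsigned char)((nx / len * 0.5f + 0.5f) * 255.0f);
                out[y * w + x].g = (unsigned char)((ny / len * 0.5f + 0.5f) * 255.0f);
                out[y * w + x].b = (unsigned char)((nz / len * 0.5f + 0.5f) * 255.0f);
            }
        }
    }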
I don't think ATI have a PS 1.0 implementation, someone please correct me if I'm wrong
1.4 hardware can support any previous version, including DX7 fixed function blend ops.
P.S.
I design hardware for this stuff, I do know what I'm talking about.
I believe (Score:3, Insightful)
Re:I believe (Score:1)
But that doesn't necessarily mean that it will run at maximum performance in all conditions. Perhaps it is something like this:
Not likely ... (Score:1)
That's pretty much why DX 8 exists. Sure, you won't have all the features of 1.4, but your 1.3 code will be zipping along at full speed regardless. It won't simply be tossed aside to software-emulation mode.
Re:I believe (Score:2, Interesting)
The Pixel Shader technology will be backwards compatible as far as the DirectX 8.0 API is concerned. Imagine that. Microsoft using an API to bring software developers together across various hardware choices.
Sadly the situation is not unified in OpenGL yet, with both Nvidia and ATI providing their own separate extensions for accessing pixel shaders. One can only hope that it's not too long before we can get an ARB-approved extension that covers the capabilities of both cards.
Of course, since it will be quite a while before games publishers can rely on people having a GeForce3 or Radeon2, I expect pixel shaders will only be used for optional flash for quite some time. If people are doing bump mapping and phong shading and so on using them, they'll certainly have the option to run in a slightly less attractive mode for those with lamer hardware.
Future games? (Score:1)
3DFX only games came and went, then again so did 3DFX.
Why DirectX is better (Score:4, Interesting)
In the long run, this will make OpenGL unpopular with game developers. Sure, guys like Carmack can afford to suck it up and develop to all the extensions, but a small development house that wants to make an impressive game will go with DirectX to save themselves the development costs. And when they do, there goes the possibility of a native Linux port.
Now there are two solutions to this. First, the ARB could get off their asses and start integrating extensions. This could be problematic for the pro segment, which wants a stable API. On the other hand, the ARB could fork OpenGL into a pro and a consumer version, but that results in two incompatible APIs. I think Microsoft is doing the right thing by supporting the latest features (in essence, baiting all the hardware manufacturers to integrate these features) but it *does* make DirectX unsuitable for pro use.
Re:Why DirectX is better (Score:2, Informative)
First of all, there are no pixel shaders in OpenGL. nVidia's extensions divide pixel shaders into Texture shaders, and Register Combiners. Which, basically mean, "Closer to the metal."
What does that mean? Well, Pixel Shader language is just a language. How the metal reacts is the same, if the semantics are the same.
However, *more importantly* ATI is going to *copy* nVidia's existing OpenGL extensions. That's the only way to compete - you must support existing features.
Don't believe me? They've already been doing this for years. Do a glGetString( GL_EXTENSIONS ); on any video card. Matrox, ATI, whatever. You're going to see a lot of NV_ (nVidia) specific extensions.
Re:Why DirectX is better (Score:2)
You are boring, you know? (And moderators couldn't spot a troll even if they were standing under a bridge.) We already went over this at least twice, and you are bringing out the same dried up arguments again. You still haven't said what prevents NVIDIA and ATI and the ARB from sitting at the table and coming up with a uniform API for such an extension. Last time you cried, just like now, that the "hardware" is too different. Bullshit. Direct3D is no better. You just have to decide on a freaking API and let the vendors implement that. Read elsewhere in this discussion about programmers not supporting DirectX N yet. How's that better than OpenGL's extension mechanism? It's not better. You still have to rewrite stuff. The only difference is that some companies that will remain unnamed have placed stupid patents around interfaces, not features, and other companies have to come up with and implement their own. If you want to bitch at someone, bitch at the companies that patent interfaces and stop trolling about OpenGL not supporting current technology.
Re:Why DirectX is better (Score:2)
>>>>>>>>
When they do, I'll be impressed. But they show no signs of it thus far. And I'll tell you what prevents them from working together: human arrogance, plain and simple.
Last time you cried, just like now, that the "hardware" is too different.
>>>
Care to cite that?
Bullshit. Direct3D is no better. You just have to decide on a freaking API and let the vendors implement that. Read elsewhere in this discussion about programmers not supporting DirectX N yet. How's that better than OpenGL's extension mechanism?
>>>>>>>
Because there are only two versions of DirectX ever in use. The current version, which is used by games already on the shelf (that would be 7.0) and the new version that is being used in games that are in development (that would be 8.0). Plus, DirectX is fully backwards compatible, which can't be said for all OpenGL extensions.
It's not better. You still have to rewrite stuff.
>>>>>>>>
No you don't. If you go from DirectX 7.0 to DirectX 8.0, you have to rewrite 0 lines of code, since they are 100% backwards compatible. To go from NVIDIA's extension to ATI's extension, you might have to rewrite a good bit of code, depending on the exact semantics of the extension.
The only difference is that some companies that will remain unnamed have placed stupid patents around interfaces, not features, and other companies have to come up with and implement their own.
>>>>>>>>>>>
You hit the nail on the head! "Interfaces, not features!" That's what OpenGL's extension mechanism misses. Two features with different interfaces might as well be two different features. With DirectX, every feature has exactly (1) interface, DirectX's. The same is not true for OpenGL.
If you want to bitch at someone, bitch at the companies that patent interfaces and stop trolling about OpenGL not supporting current technology.
>>>>>>>>
OpenGL does support current technology, just not in a sane way. Extensions are bad design. Game developers don't like them, and users don't like them. The only people who like extensions are hardware manufacturers, because it allows them to lock people into using their products.
Re:Why DirectX is better (Score:2)
Re:Why DirectX is better (Score:2)
Re:Why DirectX is better (Score:2)
Re:Why DirectX is better (Score:2)
Another day, another marketing war (Score:2, Insightful)
Does this mean that future games will be hardware specific?
Well, no. Game developers do prefer the state of the art, but common sense dictates that you target something that exists and is popular. Comparisons to browser market shares are appropriate here: when Internet Explorer became the norm, web sites tended to take advantage of IE's superior DHTML and DOM support, but developers have mostly strived to make pages backwards-compatible with Netscape and other less capable browsers. After Mozilla caught up, most web sites still aren't targeting it specifically.
Keep in mind that, according to the article, the board does not currently exist. One's desire to write custom code for a nonexistent board is contingent on several factors, such as the manufacturer's present and potential future market share.
Case in point: Developers used to target Glide, 3Dfx's low-level rendering API. Games these days don't bother: 3Dfx has DirectX support, the effort to squeeze a few extra FPS from writing "straight to the metal" usually isn't worth the time and money, and most importantly, 3Dfx is dead. Its user base is dwindling, and there is no incentive to use the (generally) hardware-specific Glide over the generic DirectX.
As for the development effort: As a former game developer and Direct3D user, I agree with the claim that when targeting both shaders, "they'll have to write more code". A few hundred lines, perhaps, for detecting and using the two extra texture shaders per pass. It's not like it's a new, different API.
Re:Another day, another marketing war (Score:1)
Who gives a fuck?
Seriously, this is all quite a bit of petty whining about things that don't really matter. The difference will barely be noticeable except in hardware review screenshots, where it will become apparent that one square centimeter of the screen looks significantly better with one card than the other, if only for a split second, away from your focus.
Alternate platform support? (Score:2, Insightful)
If you own a GeForce3 *today*, you can access all of the hardware's features on Linux, Windows and Mac through OpenGL.
I don't know about ATI's Mac support, but under Linux the Radeon drivers still don't support T&L, cube maps, 3D textures or all three texture units. The card has been available for well over a year, but the driver only enables Rage128-level features. How long do you think it's going to take for the pixel and vertex shader capabilities to make it into the Linux drivers? And what about the Mac?
I've been extremely impressed by the balanced approach NVIDIA has been taking: they do a great deal of work on D3D 8 with Microsoft, but they simultaneously create OpenGL extensions for interesting hardware features, allowing Windows developers to target OpenGL, and also allowing alternate platforms to access the new features. Their OpenGL support surpasses any other consumer grade hardware manufacturer's, and they offer better cross-platform support than any graphics company.
The safest choice any game developer can make is NVIDIA.
-Mark
Double code (Score:3, Interesting)
The simple answer was to write the common code in the main part of the engine then write multiple drivers for the engine that would use different features on different cards. This way we could push both cards and optimize the code for each card to get the best performance. Of course this is no easy task either.
This is a pain, but if you wish to push what each card can do, you have to write code for each individual card or maker of cards (i.e. an nVidia driver and an ATI driver, then a third driver for everything else that the other two don't handle).
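In code terms it's just the classic abstract-renderer split. A bare-bones sketch (all names made up purely for illustration, not from any real engine):

    // The game talks only to Renderer; one subclass per card family, chosen
    // once at startup from the caps / extension strings. Method bodies omitted.
    struct Mesh;
    struct Texture;

    class Renderer {
    public:
        virtual ~Renderer() {}
        virtual void DrawMesh(const Mesh& m) = 0;
        virtual void SetBumpMap(const Texture& t) = 0;
    };

    class NvidiaRenderer  : public Renderer { /* would use PS 1.1-1.3 paths, NV extensions */ };
    class AtiRenderer     : public Renderer { /* would use the PS 1.4 path, ATI extensions */ };
    class GenericRenderer : public Renderer { /* lowest common denominator */ };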
Performance benefits (Score:5, Informative)
It is still unclear how the total performance picture will look.
Lots of pixels are still rendered with no textures at all (stencil shadows), or only a single texture (blended effects), so the pass advantage will only show up on some subset of all the drawing.
If ATI doesn't do as good of a job with the memory interface, or doesn't get the clock rate up as high as NVidia, they will still lose.
The pixel operations are a step more flexible than Nvidia's current options, but it is still clearly not where things are going to be going soon in terms of generality.
Developers are just going to need to sweat out the diversity or go for a least common denominator for the next couple years.
I fully expect the next generation engine after the current DOOM engine will be targeted at the properly general purpose graphics processors that I have been pushing towards over the last several years.
Hardware vendors are sort of reticent to give up being able to "out feature" the opposition, but the arguments for the final flexibility steps are too strong to ignore.
John Carmack
Re:Performance benefits (Score:2, Insightful)
hardware specific code again? (Score:2, Interesting)
So, which API allows one to most easily get at the GPU's coding power? How many hooks does the high level api have into the gpu's engine, and can the gpu get data from the api on the fly?
If anyone out there has worked with them, I'd be curious to hear what's present or lacking from the standards, and whether it's feasible to try to write GPU-level code abstractly.
Not hardware specific... (Score:2)
If this means using ASCII on a VT100, that's what they'll do.
Deja vu. (Score:4, Insightful)
If so, it won't be the first time; remember the days of 3dfx? Original Unreal would only run on Glide hardware acceleration; if you didn't have a 3dfx card, you were forced to run it in software. Of course, this didn't sit well with the growing NVidia user base who consistently pointed out that Quake 2 and Half-Life both rendered on anything running OpenGL (including 3dfx cards; remember those mini-driver days?), and OpenGL and Direct3D renderers were finally introduced in a patch. That's about when 3dfx started to go down the toilet; delaying product releases and missing features (32-bit color and large texture support being two of the most blatant omissions) eventually tainted the 3dfx brand to the point of extinction.
Since then, 3D gaming has been a less lopsided world. Linux gaming was taken seriously. Standardised APIs that could run on almost anything were the rule; if it wasn't OpenGL, it would at least be Direct3D. Then the GL extensions war heated up, with NVidia developing proprietary extensions that would work only on their cards. But this wasn't a problem; you could still run OpenGL games on anything that could run OpenGL; you'd just be missing out on a few features that would only slightly enhance the scenery.
Leave it to Microsoft to screw it all up with DirectX 8. They suddenly started talking about pixel shaders and other new ideas. John Carmack has already described the shortfalls and antics of DX8 [planetquake.com]. And now 3D programmers will have to program for multiple rendering platforms, but at least you can still run it with anything.
Sure, this entire disagreement between ATI and NVidia is bad for the 3D industry, but things could be worse. A LOT worse.
Large textures don't mean jack (Score:1)
missing features (32-bit color and large texture support being two of the most blatant omissions)
3dfx cards supported 256x256 textures. Are you talking about a texture larger than that for a single polygon? If you're not, you can simply use multiple textures for one object, as in the 8- and 16-bit world where a sprite was made of several smaller tiles.
This is good for hardware and software (Score:3, Insightful)
This is good for software because developers will have more choice in the hardware that they develop for. ATI doesn't support super-duper-gooified blob rendering? Ah, NVidia does in their new Geforce5. No worries, ATI will have to support it in their next generation boards.
A bipolar competition is ALWAYS good for the consumer.
Re:This is good for hardware and software (Score:1)
Eh, I don't know. I would compare this to the late 80's when computers were being developed by Amiga, IBM (and clones), Mac, Apple, etc...you had certain games/software that were available on a given platform and not the other, people just couldn't support multiple hardware configurations. As long as there are multiple companies producing competing products, is there really a reason they can't be compatible at the software level? Personally, I'd rather be able to look at a video card's features (memory, fps) than what games I'm going to be able to play with it.
--trb
But the whole point of DirectX... (Score:3, Insightful)
Re:But the whole point of DirectX... (Score:1)
The article is very light on details, so who knows, but I really don't imagine it's about hardware compatibilities/incompatibilities. DirectX is effectively a standard; the fact is, one board will go UP TO this or that version.
The higher the version your board supports, the better the game (should) run, with better and faster effects.
Now when they say game developers will have to work separately for both cards, hmm... well, first, they always had to tweak their code so that it works with as many boards as possible. But anyway, I don't think it goes in the way of "we are going to do a game for this board or this one or both".
It's more in the way of "we are going to stick to this version of DirectX instead of this one". The impact of this decision is that the game would run faster on one board or on another, or that they could use or not use some specific effects. They want this kind of super shading? OK, if they do it in software, it will be slow on both boards but everybody will see this nice looking shadow. If they choose the hardware way, maybe only one board will show it, maybe it will be disabled for the other, so you'll have a better looking game for one board, and an overall faster game, at the cost of pissing off the ones that don't have the nice effect.
But I don't see it goes as deep as working completely focused on one or another board.
Also, a lot of people say they should stick to the standards and all;
this is of course correct, but not at the cost of evolution.
Would a too-static standard allow better competition between boards, and thus better prices, better performance, etc.? Yes and no.
It will, but it will be very limited; the only way to improve the boards would then basically be to improve the brute force. It's much better to add new concepts, or new ways to represent the information so that it can be more efficient/realistic at (possibly) a lower cost. So, if you stick to this version of DirectX, you'll be able to draw x textures, and if you use this one, it will be y textures in one pass only.
Also, instead of giving raw polygons to the board, you can give it quadric parameters (again, if the board supports it) or things like that.
The game developer has to choose if he wants to do it this way or this other way, and of course it will have to do with the support there is in the boards market, but even then, it is not "developing for this or that board".
Re:But the whole point of DirectX... (Score:1)
I was getting worried - the majority of posters so far seem to be nutcases.
Re:This is good for hardware and software (Score:1, Interesting)
> will have more choice in the hardware that they
> develop for.
Bullshit. That path leads straight to the darn old days where every game was board-specific.
> ATI doesn't support super-duper-
> gooified blob rendering? Ah, NVidia does in
> their new Geforce5. No worries, ATI will have to
> support it in their next generation boards.
Wrong, this should be "ATI doesn't support super-duper-gooified blob rendering? Idiot, why did you buy that board in the first place? But no worries, NVidia has the new Geforce5, just spend 300+ bucks and get one. Unfortunately, this will break application Y, as only ATI has the new M.O.R.O.N.-rendering which is required by Y. But hey, such is life!"
> A bipolar competition is ALWAYS good for the
> consumer.
This is not "bipolar competition", this is "fragmentation".
C. M. Burns
Re:This is good for hardware and software (Score:3, Insightful)
You mean like when Netscape and IE were competing? In case you haven't noticed, HTML rendering between the two browsers hasn't exactly meshed.
"Were" competing? (Score:1)
You mean like when Netscape and IE were competing?
Were? [mozilla.org]
HTML rendering between the two browsers hasn't exactly meshed.
Most sites are designed around IE 5, but I see very few problems with Mozilla 0.9.x aka Netscape 6.1, except for some Really Stupid Sites(tm) that use VBS instead of ECMAScript [jsworld.com]. HTML is not designed to be a pixel-perfect layout language; it's a structural markup language. For layout use CSS, which supports pixel-perfect positioning and is supported in current versions of IE (5+) and Netscape (6+). Except for a few glitches in IE such as inserting an extra 3px of left and right margins into the CSS box property float: and treating a newline before </div> as whitespace (contrary to the SGML spec), Mozilla and IE look pretty much the same.
I agree that Netscape 4.x sucks. Users can't turn off CSS (cascading stylesheets) without turning off CSS (client-side scripting), and the buggy implementations of the parts of CSS it does support will only make sites look ugly [misunderestimated.net].
Re:This is good for hardware and software (Score:2, Insightful)
How is this meant to be good for developers, or consumers? Developers now have three options:
This is also terrible for the consumer. Sorry, but that new card you just spent a small fortune on doesn't support the pixel shader version the game you want uses. Oh well, you'll just have to upgrade to the next card when it comes out; hope that's okay. But don't worry, it will have lots of new features too (which no one else's card will support).
Re:This is good for hardware and software (Score:3, Insightful)
Say screw 'em both and develop for neither, just using lowest-common-denominator stuff, and spend the saved time on improving the other parts of the game.
If your game can't stand on its own using that... well, maybe, just maybe, it sucks?
A bipolar competition is ALWAYS good for the consu (Score:2)
Imagine a sort of RIAA for soft drinks.
Re:Quake 2 supported 3 architectures (Score:1)
Quite the contrary, Quake 2 (much like Quake 1), in conjunction with the 3dfx mini-GL, did much to boost OpenGL support in games.
Neurotic
Re:How the hell? (Score:1)
Re:How the hell? (Score:1)
Apparently you've never seen an ATI Radeon in action... A 64 MB DDR version will easily outperform the fastest 3dfx cards and gives the GeForce2 GTS a run for its money.
Dinivin
All-In-Wonder boards (Score:1)
Their 3D capabilities have always been respectable. Not the TOP, but what's a few FPS between the Geforce and Radeon, especially when there's also a few hundred dollars difference. Heck, my Rage 128 still gets "good-enough-for-me" frame rates in Q3A (typically in the 60s).
Re:How the hell? (Score:1)
As someone who has Radeons at home and at work, I find that the image quality is better than the Nvidia cards I've used (TNT and GeForce 2 MX, both from Hercules). Also, the All-In-Wonder line of cards is very nice if you're looking for that kind of functionality.
Re:How the hell? (Score:1)
But then again my next card will be a GeForce 3
Re:How the hell? (Score:3, Informative)
I've owned a few video boards over the years, and have been constantly looking for a board that does both good 2D and 3D, and up until now, I haven't really found it. My Matrox Millennium (from about four years ago) did excellent 2D, no 3D. My Voodoo Rush had decent 3D for its time, but the 2D sucked (blurry image, and this was without a passthrough cable). That got replaced (after switching back to the Matrox) with an nVidia TNT Ultra. The 3D was pretty good, but the 2D was a bit blurry (I dumped the TNT when I spoke with nVidia and confirmed that they were not producing open source Linux drivers - I don't like liars too much). So, the TNT got replaced by a Matrox G400Max Dualhead - excellent 2D, but the 3D was lacking somewhat.
Just this weekend I purchased an ATI Radeon All-In-Wonder for $250. An excellent deal, since the 2D is nice and crisp, and the 3D rocks (for my purposes anyway). And, in 32-bit mode, it almost equals the GeForce 2 in performance.
Plus, this board has excellent multimedia. I love the TV tuner, it's so much better than the Hauppauge I used it to replace, plus I can hook up all sorts of video input devices and record from them. Excellent on the fly MPEG compression. And of course, we can't forget the hardware DVD playback, which is outstanding.
Also, like other people have said, let's not forget that the GeForce cards are still quite expensive.
A friend of mine was telling me three years ago that ATI made great cards, and I scoffed at him. Looks like I owe him an apology.
So, in conclusion, who would buy an ATI? How about somebody who wants a full-featured card that gives outstanding image quality. If you want pure frames per second, then buy your GeForce with its blurry, dim images, since they screw around with the palettes and overclock the chips to get those numbers that hardcore gamers seem to like so much.
-- Joe
Re:why arent.... (Score:2, Informative)
Not being a 3D programmer, I don't know whether the claim of vast differences in code are true. Can anyone shed light on this?
Neurotic
Re:why arent.... (Score:1)
Re:why arent.... (Score:2, Insightful)
Re:why arent.... (Score:2, Interesting)
Re:ATI stinks (Score:2, Informative)
Re:ATI stinks (Score:1)
Re:ATI stinks (Score:1)
Re:ATI stinks (Score:1)
Re:What about OpenGL? (Score:1)
microsoft actively develops new nice features helping the developers, opengl has stuck to its standard for a looong time.
Re:What about OpenGL? (Score:1)
But the "features" that microsoft develops are easily implemented in OGL, or can be added in as extensions by the vendor. That's the good part about OGL, each individual vendor can add its own extensions to suit whatever is needed.
Of course, then you have this situation, where some vendors support some things, and others support other technologies, etc, but the point of a "standard" is that it shouldn't change that often. Hence OpenGL is on v1.2, and DirectX is on 8. Most "standards" don't go too far up the version ladder.
0.1% compatibility? Office runs on Mac. (Score:1)
But [the PowerPC architecture is] useless to 99.9% of business software.
Are you counting Microsoft Office for Macintosh computers [microsoft.com] as part of that 0.1% of business software available for PowerPC-based computers?
If the GNUstep developers get their butts in gear, porting Mac Cocoa apps to GNU, BSD, and UNIX platforms will be a piece of cake.
Re:flash upgrades? (Score:1)
OpenGL and Pixelshaders (Score:1)
This will ONLY work on GeForce3 machines though...
Re:General8 (Score:2, Insightful)
I could care less about driver specs. The 3dfx ones are around if I want to see how modern-ish graphics cards are set up. And their drivers are such good quality, I can see why they don't want mutations springing up all over the web. I certainly don't have a problem with such a pleasant company to work with wanting to hold on to a few secrets.
Henry
Re:General8 (Score:2)
I think the poster you're replying to was referring to the fact that nVidia released closed-source drivers for X. Without the specs on hand, an independent group cannot develop an open source driver.
The problem with closed source drivers is that they're always going to suck. The kernel licensing makes it possible, but the kernel module interface is set up to benefit modules that are compiled against the actual source tree. nVidia's hardware accelerated drivers crash my machine without flaw. I had to downgrade to a prior version to get it to sort of work (the helpful nVidia developers actually told me that I had to do this, to their credit). Even then it crashes my X periodically.
If they had written an open source driver, all of the bugs would have certainly been fixed by now, and if they had just released the specs, they wouldn't have had to develop the driver at all.
And before you go and defend poor nVidia's property rights, let me save you the trouble. nVidia wanted to release an open source driver. They full well understand the benefits, being a company made up of the best engineers in the industry. Unfortunately, part of the driver code was contributed by a third-party who refuses to let them open source it. The end.
My next card, sadly, unfortunately, oh god I wish it could be, won't be an nVidia.