I’ve never really coded “to the metal” with a graphics library, so this is going to be a bit of me talking out random non-mouth orifices. Actually, “never” isn’t fully accurate, I suppose. For the sake of full disclosure, I did what seemed like about 20 minutes’ worth of OpenGL 1.2 when I was in college the first time (16 years ago) and a full course with Direct3D during my second attempt at college (but didn’t make anything like a game). Direct3D isn’t going to help me much since I’ve chosen Java and LWJGL, and a tiny bit of 16-year-old experience with OpenGL is as good as having never used it at all.
This inspired another round of buying books. I actually bought 5 of them, but the important ones for this conversation are “The Red Book” and “The Orange Book.” More specifically, OpenGL Programming Guide: Eighth Edition and OpenGL Shading Language: Third Edition. This is particularly amusing for me because when I was learning a little about OpenGL the first time, I actually bought the Second Edition of the Red Book and there was no such thing as a shader.
So here I am, with literally thousands of pages to look over. I’m not going to pretend that I will read all of these books cover to cover, but I have at least started to pick them over for useful bits and educational potential. This is a mountain of a learning curve, but it’s a fun one to climb.
Things I’ve pieced together so far… The graphics pipeline used to be “fixed.” That means it always did roughly the same thing, every time. Over the years, video card makers made that pipeline more and more configurable in lots of fun ways, but turning features on and off was the most control you had over how things worked.
Meanwhile, the processors on video cards got more powerful and more parallel, and vendors introduced the ability to program them with small programs called shaders. It was sort of like the ability to configure the pipeline, only way more powerful. The original shaders were extremely limited in how many instructions they could execute and what those instructions were allowed to be, but they were still considered a step up from mere configuration options.
Eventually shaders evolved into general purpose programs, and the number of instructions you were allowed to use got large enough for them to be truly useful. Video card makers saw this as an opportunity. Why keep around the old fixed pipeline, buried under configuration items, when you could replace it with shaders outright? Why build chips with specific features that game designers may not even want, when you can provide a way to run whatever features they do want and leave the rest out to avoid wasting overhead on them? That’s how the “shader pipeline” was born.
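To make that concrete, here’s a rough sketch (my own, not from the books) of what replaces the fixed pipeline in modern OpenGL: at minimum, a vertex shader and a fragment shader written in GLSL. The `mvp` uniform and `position` attribute names here are just illustrative conventions, not anything the API mandates.

```glsl
// --- vertex shader (e.g. basic.vert) ---
// Runs once per vertex, replacing the old fixed-function
// transform stage.
#version 330 core

layout(location = 0) in vec3 position;
uniform mat4 mvp; // model-view-projection matrix supplied by the app

void main() {
    gl_Position = mvp * vec4(position, 1.0);
}

// --- fragment shader (e.g. basic.frag) ---
// Runs once per pixel-sized fragment, replacing the old
// fixed-function coloring/texturing stage.
#version 330 core

out vec4 fragColor;

void main() {
    fragColor = vec4(1.0, 0.5, 0.2, 1.0); // a flat orange
}
```

Everything the fixed pipeline used to decide for you, like lighting or fog, now only exists if you write it into programs like these yourself.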
And now I have to learn how to use them… :)