Tricky Tricky…

So there I was, wrestling with an unending supply of “what the heck is going on” while trying to figure out LWJGL and OpenGL.  Everything looked right, everything compiled, logically the code made sense, but nothing would show up on my screen.  I double checked everything, I read more parts of my various books, I looked things up in online tutorials.  I triple checked and quadruple checked.  Nothing seemed wrong at all except nothing was showing up.

Over the course of a week I systematically added more calls to System.out.println() (that’s Java’s console output command) than there were functional lines.  Any time some part of the code was doing something I didn’t fully understand, I’d add more output.  Still, with dozens of lines of output covering everything from confirmation that my configuration file was being found and properly loaded, to the exact contents of the shaders it was using, to the ID that OpenGL assigned my vertex buffer, I just couldn’t find a problem.

After days of messing around with this, a friend gave it a shot from scratch and his version worked.  This was obviously frustrating for me, but he gave me his code to look over and compare to mine.  After an hour or two of picking it over, nothing seemed out of place except that he was using the “BufferUtils” class that is built into LWJGL.  I switched mine over to use that and, what do you know, suddenly I could see things on my screen.

Now I’m not one to look a gift horse in the mouth here, but after all my attempts and with my goal being to learn how to do all of this for myself, I really had to know what was different.  I was using something to the effect of:

FloatBuffer vertexBuffer = ByteBuffer.allocateDirect( 12 )
                             .asFloatBuffer();

He was doing:

FloatBuffer vertexBuffer = BufferUtils.createFloatBuffer( 3 );

My knowledge of the various buffers comes from using Java’s NIO package.  I had no idea there was a fancy utility class within LWJGL.  If I had bumped into it earlier, I would have had a pretty good idea of what I was doing wrong.  The documentation provided for BufferUtils is pretty detailed about what is going on, if only I’d known to look.  In my defense, I’m primarily learning OpenGL and then mentally translating it to LWJGL.

The secret comes down to byte order.  Computers have two ways of representing multi-byte values, referred to as little endian and big endian.  In English, little endian means the low-order (“little”) byte comes first (at the lowest address, on the left when the bytes are written out), while big endian means the high-order byte comes first.

This is an example of how to represent a two-byte value containing the number 17,117 (hex 0x42DD) both ways:

Little Endian:  11011101 01000010   (low-order byte, 0xDD, comes first)
Big Endian:     01000010 11011101   (high-order byte, 0x42, comes first)
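
You can see this for yourself with nothing beyond Java’s standard NIO classes.  Here’s a quick sketch that writes that same 17,117 into two buffers with opposite byte orders and prints the raw bytes:

import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class EndianDemo {
    public static void main( String[] args ) {
        // Write the same short into two buffers with opposite byte orders.
        ByteBuffer little = ByteBuffer.allocate( 2 ).order( ByteOrder.LITTLE_ENDIAN );
        ByteBuffer big    = ByteBuffer.allocate( 2 ).order( ByteOrder.BIG_ENDIAN );
        little.putShort( (short) 17117 );  // 0x42DD
        big.putShort( (short) 17117 );
        // Prints "DD 42" then "42 DD" -- same number, opposite layouts.
        System.out.printf( "Little: %02X %02X%n", little.get( 0 ), little.get( 1 ) );
        System.out.printf( "Big:    %02X %02X%n", big.get( 0 ), big.get( 1 ) );
    }
}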

The reason this is important is that my computer (Intel architecture) uses little endian while the Java Virtual Machine (JVM) defaults to big endian.  In other words, Java sees everything as correct, all the right numbers show up in my console output, and then when I hand the data to the video card every value is byte-swapped.  This means my fancy triangle had its vertices somewhere very different from where I was expecting (and likely not in the view of the camera at all).
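
Both halves of that claim are easy to confirm from Java itself.  Another quick sketch using only java.nio (the printed values are what I’d expect on an Intel machine):

import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class OrderCheck {
    public static void main( String[] args ) {
        // Newly created ByteBuffers always default to the JVM's order:
        System.out.println( ByteBuffer.allocateDirect( 12 ).order() );  // BIG_ENDIAN
        // The order the actual hardware uses, which is what the video card sees:
        System.out.println( ByteOrder.nativeOrder() );  // LITTLE_ENDIAN on Intel
    }
}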

As it turns out, the tiny part I was missing from my code was this:

FloatBuffer vertexBuffer = ByteBuffer.allocateDirect( 12 )
                             .order( ByteOrder.nativeOrder() )
                             .asFloatBuffer();

If that were there, Java would know to lay the buffer’s bytes out in the machine’s native order instead of its own big endian default, so the data would reach the video card the way the hardware expects it.  This is not to say I won’t be using the BufferUtils version, because I will, but now I know how what I had was different from what it needed to be and why it had to be that way.
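
As a last bit of closure, my reading is that BufferUtils.createFloatBuffer() is doing essentially the same thing, just wrapped up neatly.  Here’s a sketch of my mental model of it, not the actual LWJGL source:

import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

public class MyBufferUtils {
    // My mental model of LWJGL's BufferUtils.createFloatBuffer( size );
    // a sketch, not copied from the LWJGL source.
    public static FloatBuffer createFloatBuffer( int size ) {
        return ByteBuffer.allocateDirect( size * 4 )         // 4 bytes per float
                         .order( ByteOrder.nativeOrder() )   // match the hardware
                         .asFloatBuffer();
    }
}

If that model is right, his createFloatBuffer( 3 ) and my corrected allocateDirect( 12 ) version end up producing the same thing.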