Resuming Development

So there I was, completely stalled on QubeKwest for roughly seven months.  Sure, I thought about it from time to time, but that never managed to inspire me to actually work on it.  Naturally, that meant thinking about it tended to make me a little sad.  The cycle continued and the project languished.

Then a couple of interesting things happened.  I started to realize that part of the reason I was never working on QubeKwest was that I would come home from my normal job and not be able to find the motivation.  I even mentioned that in one of the previous posts on this blog, so it was clearly something I was aware of, at least subconsciously.  A few things happened at work that made me blindingly unhappy with the company I was working for, and it hit me like a ton of bricks.  My job was making me miserable.

There are a couple of paths you can choose to follow when you make a realization like that.  You can complain about it, a lot, to everyone who will listen.  This approach tends to not make anyone any happier.  You aren’t solving anything by grumbling, and your friends get sick of hearing about it.  This path tends to be the one taken early on, before you realize the depth of the problem you are complaining about.

The second approach is to do something about it.  This way is hard, at least partially because change is hard and not many people (myself included) like it much.  It is, however, the only way to actually solve anything.  My eventual solution was to find a new job.  This didn’t immediately provide me more time to code QubeKwest, but it did immediately improve my general happiness.

Now, as a happier guy in a new job, I still wasn’t fully motivated to start development again.  At this point, I honestly hadn’t even thought about QubeKwest in a while.  Then something happened that made me immediately think of it.  The Raspberry Pi 3 was released.  I got to thinking, “Hey, I was totally going to try to get QubeKwest to work on a Raspberry Pi 2 and now a new one is out.”

As the proud owner of virtually every model of Raspberry Pi, and as someone who was very impressed with the updates in the Raspberry Pi 3 (now with a 64-bit CPU and built-in WiFi and Bluetooth!), I immediately ordered several of them, hoping they would be the last little nudge I needed to resume for real.  Or, if not the last little nudge, at least a nudge in the right direction.

The Raspberry Pi 3s came in, and instead of starting development again I was inspired to reinstall Windows on my main development machine.  Long story I won’t bother telling, but it needed it.  That took almost a whole weekend.  I was still a little nervous about starting again, but I was starting to have fewer and fewer excuses.  I hadn’t looked at the code in seven months, and that meant trying to figure out what I was working on, what the heck I was thinking when I wrote that piece, and where to go from here.  But at least now I was ready.

I fired up Eclipse and promptly got annoyed at Eclipse, again.  So I started the hunt for another IDE, grabbed IntelliJ IDEA, and I like it so far.  Last night I attacked the code for the first time and found there were loads of things I needed to fix.  I grabbed the latest LWJGL, but they had moved, renamed, and removed things, so I had to fix my code to line up with theirs again.  I also had to figure out how to fix the various broken bits of code that I’d checked in when I put the project to sleep seven months ago.

After about an hour, I had a shiny new IntelliJ project set up, I had my external libraries configured, I had the worst parts of my busted code commented out or repaired, and I got my ugly red triangles back on the screen.  I was in business again.  I think the first thing I need to do is revisit the extremely sad network code and try to figure out what I was doing there.  I am going to officially call that a plan!  Let the coding begin.

Slow Evolution

In general I don’t tend to spend a lot of time planning things before coding them.  I find that more often than not you can sit around and plan something so much that nothing ever gets done.  There are a couple of side effects to this approach.  I tend to get a lot done very quickly at first, and then I tend to stall out a little.  The reason for the first part is pretty obvious: if you don’t waste time planning, you have all that extra time to get things done.  The reason for the second part is probably equally obvious: unplanned code tends to require a fair bit of nudging around to correct for the issues that arise from not planning it.

Amusingly, there is another pattern that produces a similar set of results.  It’s called a learning curve, and that’s both a problem I’m having and something you can’t plan your way around.  As I learn more about how the graphics pipeline operates, I constantly need to rewrite bits of code because of new things I’ve learned.  I call this process “slow evolution.”

I didn’t want to bury my learning curve alive in the extra bits that make up a game, so I started by creating a “Geometry Practice” program.  I’ve already explained the difficulties I had with getting a single triangle on the screen, so I won’t rehash that here, but this practice program is where that original piece of code lives.

The evolution so far has gone a bit like this…  Start with the fight of getting a triangle on the screen, with everything you need hard coded into the program.  Next, decide that you want your shader programs to live in files on disk instead of being hard coded, and write the code to allow that to happen.  After that, get frustrated with hard coded geometry and realize that it might be a good idea to get that off the disk as well.  So stop and craft a file format for conveying the informational bits that make up geometry, and write a parser to load it.  Next, realize that you’ve got a few things coming flexibly off of the disk, but which files get loaded is still hard coded.  That means crafting something that allows your files to be selected from the command line.  Now that you can do fun things with the command line, make sure that same system can tell the program which shader files to load and which texture to load.  Now that you can say which mesh file to load, add proper paths that let different types of geometry be loaded (without breaking the types you already had working, oops).  And so on…
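
The “shaders living in files on disk” step, for example, boils down to something like this.  This is just a minimal sketch of the idea rather than my actual loader (the loadShader name is mine, and the error handling is the bare minimum):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import static org.lwjgl.opengl.GL11.GL_FALSE;
import static org.lwjgl.opengl.GL20.*;

// Read shader source from disk, then compile it exactly as if it were hard coded.
public static int loadShader( String path, int shaderType ) throws IOException {
    String source = new String( Files.readAllBytes( Paths.get( path ) ) );

    int shaderId = glCreateShader( shaderType );  // e.g. GL_VERTEX_SHADER
    glShaderSource( shaderId, source );
    glCompileShader( shaderId );

    if ( glGetShaderi( shaderId, GL_COMPILE_STATUS ) == GL_FALSE ) {
        throw new IOException( "Shader failed to compile: " + glGetShaderInfoLog( shaderId ) );
    }
    return shaderId;
}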

You get the idea.  I’m plugging along slowly evolving my practice program into the beginnings of the underlying graphics engine.  From hard coded to flexible to encapsulated modular flexibility.


Math’d

Once I managed to get an entirely unimaginative red 2D triangle or three onto the screen in a few different ways, I knew that I had to expand into the 3rd dimension.  To that end, I dropped back to a single 3D triangle, left it red, and prepared myself for what should be on the screen.  A couple seconds of compiling and firing up the program revealed that while the triangle was in fact on my screen, it looked no different from my 2D version.

This is simply because by default OpenGL happily ignores the Z-axis entirely.  In the old fixed pipeline days, you could probably fix that by simply setting some configuration value or other to one that makes 3D stuff happen.  In the fancy new shader pipeline, you need to tell the card how to do that yourself.  That means passing a few matrices down with all of your fun geometry.  In English, that means you need to tell the card how to use the Z-axis.
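
In practice, “passing a matrix down” looks something like the sketch below.  It assumes a linked shader program whose vertex shader declares a mat4 uniform named “projection” and multiplies each vertex position by it (the names here are mine, not anything standard):

import java.nio.FloatBuffer;
import org.lwjgl.BufferUtils;
import static org.lwjgl.opengl.GL20.*;

// Hand a 4x4 matrix to the shader as a uniform.  OpenGL expects the
// sixteen floats in column-major order.
public static void uploadProjection( int programId, float[] columnMajor ) {
    FloatBuffer matrixData = BufferUtils.createFloatBuffer( 16 );
    matrixData.put( columnMajor ).flip();

    int location = glGetUniformLocation( programId, "projection" );
    glUseProgram( programId );
    glUniformMatrix4fv( location, false, matrixData );  // false = do not transpose
}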

In one of the books I’m reading it was mentioned that OpenGL “is not a math library.”  This is especially true with LWJGL.  In C/C++ you can use the vmath library the book mentions.  In Java you are on your own.  This fact has sent me down a multi-week detour of crafting a math package of my own and making sure the tests for it prove it behaves the way math should.

The current list of things in my math library is as follows:

  • For graphics:
    • Vector2 – Vector of 2 floating point values.
    • Vector3 – Vector of 3 floating point values.
    • Vector4 – Vector of 4 floating point values.
  • For pixel level precision:
    • IntVector2 – Vector of 2 integers.
    • IntVector3 – Vector of 3 integers.
    • IntVector4 – Vector of 4 integers.
  • For transforms:
    • Matrix33 – A 3×3 matrix of floating point values.
    • Matrix44 – A 4×4 matrix of floating point values.
    • Quaternion – Numbers in the form (a + bi + cj + dk) using floating point values.
  • For fractals:
    • ComplexNumber – Numbers in the form (a + bi) using floating point values.

This list is perhaps a bit larger than I needed, given that the goal was simply producing a proper perspective projection matrix.  I was, however, on a roll, and my test cases provided actual evidence of progress, which is nice to have when you feel a bit stalled in a project.  I also created a little test program to see how well these things perform, specifically how quickly I could multiply a Matrix44 by another Matrix44, and how quickly I could multiply a Matrix44 by a Vector4.  I was satisfied with the numbers I was seeing, so hopefully it will be enough.
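
For the curious, the perspective projection matrix at the end of this particular rainbow looks roughly like this.  This is my own sketch of the classic OpenGL-style perspective matrix, not the project’s actual Matrix44 code, returned as the sixteen column-major floats OpenGL wants:

// Build a standard perspective projection matrix (column-major).
// Entries not set below stay zero.
public static float[] perspective( float fovYRadians, float aspect, float near, float far ) {
    float f = (float) ( 1.0 / Math.tan( fovYRadians / 2.0 ) );

    float[] m = new float[ 16 ];
    m[ 0 ]  = f / aspect;                              // scale x by field of view and aspect ratio
    m[ 5 ]  = f;                                       // scale y by field of view
    m[ 10 ] = ( far + near ) / ( near - far );         // remap z into clip space
    m[ 11 ] = -1.0f;                                   // copy -z into w for the perspective divide
    m[ 14 ] = ( 2.0f * far * near ) / ( near - far );  // z translation
    return m;
}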

As of right now, my test coverage tool tells me that around 89% of my math package is covered by tests.  I am shooting for a perfect 100% for this package, because if I can’t trust my math library, then when something built on top of it misbehaves, how will I know whether it’s the math that’s wrong or the thing I built on top of it?
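
As an example of the sort of test I mean, here’s a minimal sketch.  The identity(), multiply(), and get() methods are stand-ins for whatever the real Matrix44 API ends up being:

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class Matrix44Test {
    // Multiplying any matrix by the identity should change nothing.
    @Test
    public void multiplyByIdentityChangesNothing() {
        Matrix44 identity = Matrix44.identity();
        Matrix44 original = Matrix44.identity();  // stand-in for a more interesting matrix

        Matrix44 result = original.multiply( identity );

        for ( int row = 0; row < 4; row++ ) {
            for ( int col = 0; col < 4; col++ ) {
                assertEquals( original.get( row, col ), result.get( row, col ), 0.0f );
            }
        }
    }
}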

Tricky Tricky…

So there I was, wrestling with an unending supply of “what the heck is going on” while trying to figure out LWJGL and OpenGL.  Everything looked right, everything compiled, logically the code made sense, but nothing would show up on my screen.  I double checked everything, I read more parts of my various books, I looked things up in online tutorials.  I triple checked and quadruple checked.  Nothing seemed wrong at all except nothing was showing up.

Over the course of more than a week, I systematically added more calls to System.out.println() (that’s Java’s console output command) than there were functional lines.  If any part of the code was doing something I didn’t already know everything about, I’d add more output.  Still, with dozens of lines of output covering everything from confirmation that my configuration file was being found and properly loaded, to the exact contents of the shaders it was using, to the ID that OpenGL assigned my vertex buffer, I just couldn’t find a problem.

After days of messing around with this, a friend gave it a shot from scratch and his worked.  This was obviously frustrating for me, but he gave me his code to look over and compare to mine.  After an hour or two of picking it over, nothing seemed out of place except that he was using the “BufferUtils” class that is built into LWJGL.  I switched mine over to use that and, what do you know, suddenly I could see things on my screen.

Now I’m not one to look a gift horse in the mouth here, but after all my attempts and with my goal being to learn how to do all of this for myself, I really had to know what was different.  I was using something to the effect of:

FloatBuffer vertexBuffer = ByteBuffer.allocateDirect( 12 )  // 12 bytes = room for 3 floats
                             .asFloatBuffer();

He was doing:

FloatBuffer vertexBuffer = BufferUtils.createFloatBuffer( 3 );  // 3 floats, not bytes

My knowledge of the various buffers comes from use of Java’s NIO package.  I had no idea there was a fancy utility class within LWJGL.  If I had bumped into that knowledge, I would have had a pretty good idea about what I was doing wrong.  The documentation provided for BufferUtils is pretty detailed about what is going on, if only I’d known to look.  In my defense, I’m primarily learning OpenGL and then mentally translating it to LWJGL.

The secret comes down to byte order.  In computers there are two ways of representing multi-byte values, referred to as little endian and big endian.  In English, little endian means the low-order (“little end”) byte comes first, and big endian means the high-order (“big end”) byte comes first.

This is an example of how a two byte value containing the number 17,117 (hex 0x42DD) is represented both ways:

Little Endian:  11011101 01000010
Big Endian:     01000010 11011101

The reason this is important is that my computer (Intel architecture) uses little endian while the Java Virtual Machine (JVM) defaults to big endian.  In other words, Java sees everything as correct, all the right numbers show up in my console output, and then when I hand the buffer to the video card every value’s bytes are in the wrong order.  That means my fancy triangle had its vertices somewhere very different from where I was expecting them (and likely not in the view of the camera at all).
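
You can watch this happen in a tiny standalone program (a throwaway demo, not project code) that writes that same 17,117 into buffers with both byte orders:

import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class EndianDemo {
    public static void main( String[] args ) {
        short value = 17117;  // 0x42DD

        ByteBuffer big = ByteBuffer.allocate( 2 );  // JVM default: big endian
        big.putShort( value );

        ByteBuffer little = ByteBuffer.allocate( 2 ).order( ByteOrder.LITTLE_ENDIAN );
        little.putShort( value );

        // Prints "42 DD" for big endian and "DD 42" for little endian.
        System.out.printf( "Big endian:    %02X %02X%n", big.get( 0 ) & 0xFF, big.get( 1 ) & 0xFF );
        System.out.printf( "Little endian: %02X %02X%n", little.get( 0 ) & 0xFF, little.get( 1 ) & 0xFF );
        System.out.println( "This machine:  " + ByteOrder.nativeOrder() );
    }
}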

As it turns out, the tiny part I was missing from my code was this:

FloatBuffer vertexBuffer = ByteBuffer.allocateDirect( 12 )
                             .order( ByteOrder.nativeOrder() )  // match the real machine, not the JVM
                             .asFloatBuffer();

If that were there, Java would write the buffer data in the machine’s native byte order instead of the JVM’s default big endian order, so the video card would read back exactly the numbers Java put in.  This is not to say I won’t be using the BufferUtils version, because I will, but now I know how what I had was different from what it needed to be and why it had to be that way.

Still Learning OpenGL

So here I am, still trudging through learning OpenGL and then using it through LWJGL’s flavor of it.  All of the books I’ve been using and all of the internet searches I’ve been doing as I learn are related to OpenGL, not LWJGL, because there is very little information out there about how to use LWJGL.  For those that don’t know, LWJGL is pretty much OpenGL if someone decided to mash 140 lbs of C code into Java.  This conveniently means you can learn OpenGL and then just pick up the extra little bits needed to use it in Java.

I give OpenGL a lot of credit for keeping all of its functionality neatly separated and more or less atomic.  The negative side effect of this is that everything you want to do takes 17 steps.  This gives you a lot of flexibility in how you use everything when making your own engine, but it also means finding information about the “right” or “best” way to do things can be extremely difficult.  This is made even more obnoxious by the fact that there is so much information out there about OpenGL from before everything was powered by shaders (pre-3.2).

I’ve gotten a couple of extremely simple shaders written, compiled, and linked into a shader program.  I’ve created a little bit of geometry (a cube of course) in a couple of different ways (raw vertices, indexed vertices, etc.) and loaded it into the video card.  I’ve attempted to draw that geometry, and nothing shows up.  Part of the problem is that everything I’m describing is just plumbing to get anything on the screen with OpenGL, and if any of it is wrong, nothing happens.  Because there are so many places something could have gone wrong before you have a single thing on the screen, it’s very hard to figure out which one actually did.
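
To give a sense of how much plumbing that is, here’s a condensed sketch of the steps between “I have vertex data” and “a triangle gets drawn.”  It assumes a linked shader program that reads positions from attribute location 0, and if any one of these lines is wrong you get a blank screen with no complaints:

import java.nio.FloatBuffer;
import org.lwjgl.BufferUtils;
import static org.lwjgl.opengl.GL11.*;
import static org.lwjgl.opengl.GL15.*;
import static org.lwjgl.opengl.GL20.*;
import static org.lwjgl.opengl.GL30.*;

public static void drawOneTriangle( int programId ) {
    float[] vertices = { -0.5f, -0.5f, 0.0f,   0.5f, -0.5f, 0.0f,   0.0f, 0.5f, 0.0f };
    FloatBuffer data = BufferUtils.createFloatBuffer( vertices.length );
    data.put( vertices ).flip();

    int vao = glGenVertexArrays();                          // container for vertex state
    glBindVertexArray( vao );

    int vbo = glGenBuffers();                               // a buffer on the video card
    glBindBuffer( GL_ARRAY_BUFFER, vbo );
    glBufferData( GL_ARRAY_BUFFER, data, GL_STATIC_DRAW );  // copy the vertices over

    glVertexAttribPointer( 0, 3, GL_FLOAT, false, 0, 0 );   // describe the data layout
    glEnableVertexAttribArray( 0 );

    glUseProgram( programId );
    glDrawArrays( GL_TRIANGLES, 0, 3 );                     // finally, draw something
}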

My hope is that once I get something working I will be able to make it incrementally more awesome.  Add textures, add lighting, put lots of cubes on the screen at the same time, and so on.  For now, I’m still reading and researching and learning and have nothing to show for it yet…  Hopefully soon though.