Slow Evolution

In general I don’t tend to spend a lot of time planning things before coding them.  I find that, more often than not, you can sit around and plan something so much that nothing ever gets done.  There are a couple of side effects to this approach: I tend to get a lot done very quickly at first, and then I tend to stall out a little.  The reason for the first part is pretty obvious.  If you don’t spend time planning, you have all that extra time to get things done.  The second part is probably equally obvious: unplanned code tends to require a fair bit of nudging around to correct for issues that arise from not planning it.

Amusingly there is another pattern that produces a similar set of results.  It’s called a learning curve, and that’s both a problem I’m having and something that you can’t plan your way around.  As I attempt to learn more about how the graphics pipeline operates I also need to constantly rewrite bits of code because of new things I’ve learned.  I call this process “slow evolution.”

I didn’t want to bury my learning curve alive in the extra bits that make up a game so I started by creating a “Geometry Practice” program.  I’ve already explained the difficulties I had with getting a single triangle on the screen so I won’t rehash that here, but this practice program is where that original piece of code lives.

The evolution so far has gone a bit like this…  Start with the fight of getting a triangle on the screen, with everything you need hard coded into the program.  Next, decide that you want your shader programs to live in files on the disk instead of being hard coded, and write the code to allow that to happen.  After that, get frustrated with hard coded geometry and realize that it might be a good idea to load that from the disk as well.  So stop and craft a file format for conveying the informational bits that make up geometry, and write a parser to load them.  Next, realize that you’ve got a few things coming flexibly off of the disk, but which file gets loaded is still hard coded.  That means crafting something that allows your files to be selected from the command line.  Now that you can do fun things with the command line, make sure that same system can tell the program which shader files to load and which texture to use.  Now that you can provide information about which mesh file to load, add proper paths that let different types of geometry be loaded (without breaking types you already had working, oops).  And so on…
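The actual file format isn’t described here, so purely as an illustration of the “craft a format and write a parser” step, here’s a sketch using a made-up line-based format where each `v` line carries one vertex position.  The class name and format are hypothetical, not the real QubeKwest ones.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical mesh parser sketch. The format is invented for illustration:
// "#" starts a comment, and "v x y z" lines each hold one vertex position.
public class MeshParser {
    public static List<float[]> parseVertices(String fileContents) {
        List<float[]> vertices = new ArrayList<>();
        for (String line : fileContents.split("\n")) {
            String trimmed = line.trim();
            // Skip blank lines and comments.
            if (trimmed.isEmpty() || trimmed.startsWith("#")) continue;
            String[] parts = trimmed.split("\\s+");
            if (parts[0].equals("v") && parts.length == 4) {
                vertices.add(new float[] {
                    Float.parseFloat(parts[1]),
                    Float.parseFloat(parts[2]),
                    Float.parseFloat(parts[3])
                });
            }
        }
        return vertices;
    }

    public static void main(String[] args) {
        String mesh = "# one triangle\nv 0.0 0.5 0.0\nv -0.5 -0.5 0.0\nv 0.5 -0.5 0.0\n";
        System.out.println(parseVertices(mesh).size()); // prints 3
    }
}
```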

You get the idea.  I’m plugging along slowly evolving my practice program into the beginnings of the underlying graphics engine.  From hard coded to flexible to encapsulated modular flexibility.

Delays and Procrastination

This project is really honestly the one I want to finish.  Well, perhaps not “finish” exactly as much as simply “release to the public.”  The problem is that I’m feeling a bit overwhelmed and underawesome.

I’ve chosen, for better or worse, to build my own game engine for this game.  This has several interesting side effects.  First and foremost it means I have a metric crapload of code to write and things to learn before I ever manage to get any game on the screen.  Second, I have to build every possible subsystem required to make a concept into a game.  Third, I have about 17 simultaneous learning curves to try to tackle to make it all happen.

This post is not intended as pure complaining, but I’d be lying if I claimed I’m not complaining at least a little bit.  This is hard.  Really really hard.  It’s a little bit scary, and a lot bit interesting, and as previously mentioned a whole pile of overwhelming.

After wrapping up the code for my math library and the subsequent code to test the heck out of it, I’ve made very little progress.  A little bit of messing with how I represent 3D models and some confusion about how I should be representing them.  I’ve opened my programming tools a couple times with the intention of coding only to find myself not even sure where to start.  The next logical step for someone that doesn’t know where to start is sadly to simply not start.  The step after that is to play a bunch of Minecraft with the intention of observing how different things work.  Yes, that really is how the last month has gone.

On the bright side I’ve put in a lot of hours studying how liquids work in Minecraft and some of the sassier details about redstone.  I do not have any solid ideas about how to code either one if I ever manage to get enough of a game engine together to bother, but at least I’ve learned a little more about how they work.  Also, where Minecraft uses redstone as a form of electricity, I plan to make QubeKwest have good old fashioned electricity with wires and devices and logic circuits.  Someday.

Here is a list of the things I’ve been working on without successfully finishing much.

  • I started an asynchronous network client and server.  Neither is done or functional, and in fact, they no longer even compile.  My synchronous version grew up to be an extremely simple version of a chat program, but was dropped from the project as too limited.
  • I started making a lot of the data structures that the game will need to represent all sorts of things.  Everything from blocks to the chunks they live in to the idea of a world coordinate.  Some of these even have tests written for them.  Others are properly serializable.  None are “final.”
  • I wrote code that can load shaders from the disk into the video card.  I wrote code that can put geometry into the card as well.  Neither is especially flexible or in any way ready to be used for actual rendering of pretty things.
  • I’ve created tons of Java packages.  Some are empty and others are probably unnecessary.  It’s occurred to me that I may be breaking down the project a little too far, and that’s helping to confuse me instead of, well, helping me.
  • I wrote some random functions.  By this I mean functions that produce random numbers in various interesting ways.  They sort of work, but are also somewhat limited.  I guess I’m entitled to sort of finish something from time to time.
  • I have vast amounts of code devoted to doing complicated math and even more code devoted to making sure that code actually works correctly.  While that’s cool and everything, it’s also utterly unused so far.

Well, I think you get the idea.  I’ve stalled on the project for now because I get home from work and can’t find the motivation to work on it and because it’s hard and confusing.  I’ll build up a head of steam created by frustration with myself and I’ll start working on it eventually.  I hope.

Math’d

Once I managed to get an entirely unimaginative red 2D triangle or three onto the screen in a few different ways, I knew that I had to expand into the 3rd dimension.  To that end, I dropped back to a single 3D triangle, left it red, and prepared myself for what should be on the screen.  A couple seconds of compiling and firing up the program revealed that while the triangle was in fact on my screen, it looked no different from my 2D version.

This is simply because, by default, OpenGL happily ignores the Z-axis entirely.  In the old fixed pipeline days, you could probably fix that by simply setting some configuration value or other to make 3D stuff happen.  In the fancy new shader pipeline, you need to tell the card how to do that yourself.  That means passing a few matrices down with all of your fun geometry.  In English, that means you need to tell the card how to use the Z-axis.
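The main matrix in question is a perspective projection matrix.  Here’s a minimal sketch of the standard textbook construction in plain Java, laid out column-major the way OpenGL expects.  This is a generic version for illustration, not the code from my own math library.

```java
// Sketch of a standard perspective projection matrix, as a column-major
// float[16] (OpenGL's convention). fovDegrees is the vertical field of
// view; near and far are the clip plane distances.
public class Perspective {
    public static float[] perspective(float fovDegrees, float aspect,
                                      float near, float far) {
        float f = (float) (1.0 / Math.tan(Math.toRadians(fovDegrees) / 2.0));
        float[] m = new float[16];             // starts all zeros
        m[0]  = f / aspect;                    // scale x by the aspect ratio
        m[5]  = f;                             // scale y
        m[10] = (far + near) / (near - far);   // map z into clip space
        m[11] = -1.0f;                         // copy -z into w for the divide
        m[14] = (2.0f * far * near) / (near - far);
        return m;
    }

    public static void main(String[] args) {
        float[] m = perspective(90.0f, 1.0f, 0.1f, 100.0f);
        System.out.println(m[0]); // prints 1.0 for a 90 degree FOV at square aspect
    }
}
```

The giveaway that this matrix “uses the Z-axis” is the -1 in m[11]: it routes the vertex’s z into the w component, and the perspective divide that follows is what makes distant things smaller.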

In one of the books I’m reading, it was mentioned that OpenGL “is not a math library.”  This is especially true with LWJGL.  In C/C++ you can use the library the book mentions, called vmath.  In Java you are on your own.  This fact sent me down a multi-week detour of crafting a math package of my own and making sure the tests for it prove it behaves the way math should.

The current list of things in my math library are as follows:

  • For graphics:
    • Vector2 – Vector of 2 floating point values.
    • Vector3 – Vector of 3 floating point values.
    • Vector4 – Vector of 4 floating point values.
  • For pixel level precision:
    • IntVector2 – Vector of 2 integers.
    • IntVector3 – Vector of 3 integers.
    • IntVector4 – Vector of 4 integers.
  • For transforms:
    • Matrix33 – A 3×3 matrix of floating point values.
    • Matrix44 – A 4×4 matrix of floating point values.
    • Quaternion – Numbers in the form (a + bi + cj + dk) using floating point values.
  • For fractals:
    • ComplexNumber – Numbers in the form (a + bi) using floating point values.

This list is perhaps a bit larger than I needed for the goal of simply producing a proper perspective projection matrix.  I was, however, on a roll, and my test cases provided actual evidence of progress, which is nice to have sometimes when you feel a bit stalled in a project.  I also created a little test program to see how well these things perform: specifically, how quickly I can multiply a Matrix44 by another Matrix44, and how quickly I can multiply a Matrix44 by a Vector4.  I was satisfied with the numbers I was seeing, so hopefully it will be enough.
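For a sense of what that benchmark exercises, here’s roughly what a Matrix44-times-Vector4 multiply boils down to, assuming a column-major float[16] layout.  My actual Matrix44 and Vector4 classes wrap this differently; this is just the core arithmetic.

```java
// The hot loop from the performance test: a 4x4 matrix (column-major
// float[16], matching OpenGL's convention) times a 4-component vector.
public class MatVec {
    public static float[] multiply(float[] m, float[] v) {
        float[] out = new float[4];
        for (int row = 0; row < 4; row++) {
            out[row] = m[row]      * v[0]   // column 0
                     + m[row + 4]  * v[1]   // column 1
                     + m[row + 8]  * v[2]   // column 2
                     + m[row + 12] * v[3];  // column 3
        }
        return out;
    }

    public static void main(String[] args) {
        // Identity matrix in column-major order leaves the vector unchanged.
        float[] identity = {1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1};
        float[] r = multiply(identity, new float[] {1.0f, 2.0f, 3.0f, 1.0f});
        System.out.println(r[2]); // prints 3.0
    }
}
```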

As of right now, my test coverage tool tells me that around 89% of my math package is covered by tests.  I am shooting for a perfect 100% for this package, because if I can’t trust my math library, then when something built on top of it misbehaves, how will I know whether the bug is in the math or in the thing I built?

Tricky Tricky…

So there I was, wrestling with an unending supply of “what the heck is going on” while trying to figure out LWJGL and OpenGL.  Everything looked right, everything compiled, logically the code made sense, but nothing would show up on my screen.  I double checked everything, I read more parts of my various books, I looked things up in online tutorials.  I triple checked and quadruple checked.  Nothing seemed wrong at all except nothing was showing up.

Over the course of a week, I systematically added more calls to System.out.println() (that’s Java’s console output command) than there were functional lines.  If there was any part of the code whose behavior I didn’t already know everything about, I’d add more output.  Still, with dozens of lines of output confirming everything from my configuration file being found and properly loaded, to the exact contents of the shaders being used, to the ID that OpenGL assigned my vertex buffer, I just couldn’t find a problem.

After days of messing around with this, a friend gave it a shot from scratch and his version worked.  This was obviously frustrating for me, but he gave me his code to look over and compare to mine.  After an hour or two of picking it over, nothing seemed out of place except that he was using the “BufferUtils” class that is built into LWJGL.  I switched mine over to use that and, what do you know, suddenly I could see things on my screen.

Now I’m not one to look a gift horse in the mouth here, but after all my attempts and with my goal being to learn how to do all of this for myself, I really had to know what was different.  I was using something to the effect of:

FloatBuffer vertexBuffer = ByteBuffer.allocateDirect( 12 )
                             .asFloatBuffer();

He was doing:

FloatBuffer vertexBuffer = BufferUtils.createFloatBuffer( 3 );

My knowledge of the various buffers comes from use of Java’s NIO package.  I had no idea there was a fancy utility class within LWJGL.  If I had bumped into that knowledge, I would have had a pretty good idea about what I was doing wrong.  The documentation provided for BufferUtils is pretty detailed about what is going on, if only I’d known to look.  In my defense, I’m primarily learning OpenGL and then mentally translating it to LWJGL.

The secret comes down to byte order.  In computers there are two ways of representing multi-byte values.  They are referred to as little endian and big endian.  In English: little endian means the low-order byte comes first in memory, and big endian means the high-order byte comes first.

This is an example of how to represent a two byte value containing the number 17,117 in both ways:

Big Endian:     01000010 11011101
Little Endian:  11011101 01000010

The reason this is important is that my computer (Intel architecture) uses little endian and the Java Virtual Machine (JVM) uses big endian.  In other words, Java sees everything as correct, all the right numbers show up in my console output, and then when I hand it to the video card everything is in the wrong order.  This means my fancy triangle had its vertices somewhere very different from where I was expecting (and likely not in the view of the camera at all).
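You can see the trap in plain Java without any OpenGL involved.  This little demo (the class and helper are mine, just for illustration) writes the same 17,117 value with both byte orders; a freshly allocated ByteBuffer defaults to big endian no matter what the hardware uses.

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Demonstrates the byte order trap: ByteBuffer defaults to big endian
// (the JVM's convention) regardless of the underlying hardware.
public class EndianDemo {
    // Writes a two-byte value with the given byte order and returns
    // the raw bytes in storage order.
    static byte[] encode(short value, ByteOrder order) {
        ByteBuffer buffer = ByteBuffer.allocate(2).order(order);
        buffer.putShort(value);
        return new byte[] { buffer.get(0), buffer.get(1) };
    }

    public static void main(String[] args) {
        short value = 17117; // 0x42DD
        byte[] big = encode(value, ByteOrder.BIG_ENDIAN);
        byte[] little = encode(value, ByteOrder.LITTLE_ENDIAN);
        System.out.printf("Big endian:    %02X %02X%n", big[0], big[1]);       // 42 DD
        System.out.printf("Little endian: %02X %02X%n", little[0], little[1]); // DD 42
    }
}
```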

As it turns out, the tiny part I was missing from my code was this:

FloatBuffer vertexBuffer = ByteBuffer.allocateDirect( 12 )
                             .order( ByteOrder.nativeOrder() )
                             .asFloatBuffer();

With that call in place, Java stores the buffer data in the machine’s native byte order rather than the JVM’s default big endian order, so the values reach the video card laid out the way the real computer expects instead of the way the JVM would use them itself.  This is not to say I won’t be using the BufferUtils version, because I will, but now I know how what I had differed from what it needed to be, and why it had to be that way.

Still Learning OpenGL

So here I am, still trudging through learning OpenGL and then using it in LWJGL’s flavor of OpenGL.  All of the books I’ve been using and all of the internet searches I’ve been doing relate to OpenGL rather than LWJGL, because there is very little information about how to use LWJGL out there.  For those that don’t know, LWJGL is pretty much what you’d get if someone decided to mash 140 lbs of C code into Java.  This conveniently means you can learn OpenGL itself and then just pick up the extra little bits needed to use it from Java.

I give OpenGL a lot of credit for keeping all of its functionality neatly separated and more or less atomic.  The negative side effect of this is that everything you want to do takes 17 steps.  This gives you a lot of flexibility in how you use everything when making your own engine, but it also means finding information about the “right” or “best” way to do things can be extremely difficult.  This is made even more obnoxious by the fact that there is so much information out there about OpenGL from before everything was powered by shaders (pre-3.2).

I’ve gotten a couple of extremely simple shaders written, compiled, and linked into a shader program.  I’ve created a little bit of geometry (a cube of course) in a couple of different ways (raw vertices, indexed vertices, etc.) and loaded them into the video card.  I’ve attempted to draw that geometry and nothing shows up.  One of the problems is that all of what I’m describing is just plumbing to get anything on the screen with OpenGL and if you have any of it wrong, nothing happens.  Because there are so many places that something could have gone wrong before you have a single thing on the screen, it’s very hard to debug which one went wrong.

My hope is that once I get something working I will be able to make it incrementally more awesome.  Add textures, add lighting, put lots of cubes on the screen at the same time, and so on.  For now, I’m still reading and researching and learning and have nothing to show for it yet…  Hopefully soon though.