First Ping

It’s been a long, hard road with the networking engine, and I’m absolutely nowhere near the end…  but I have successfully pinged my server from my client!

Proof, because pictures make everything better.  Click it to see the animation.

There are loads of problems still, and a lot of those are simply because I haven’t even coded some of the bits that are needed to make things work correctly.  I’m very happy to see a client-server round trip working, and it gives me a lot of hope that this version of the network code will actually be the one I use without having to start over again.

Here is a list of some of the things I know don’t work correctly yet:

  • When a client disconnects from the server, there seems to be some weirdness around the server accepting new connections afterwards.  (I’m not sure if different ways of disconnecting cause different issues.)
  • When multiple clients are connected to the server, there is no way for the server to accurately identify which one is which, so things that are expected to return to the client that sent them (like a ping) actually end up going to the client that connected first.  (This one is easy: the clients don’t yet have the ability to register themselves and get their “source ID” from the server.  That means they all use the same hard-coded ID.)
  • When the server sends a message that is intended as a broadcast, it is not actually broadcast to all connected clients.  (I think this is the same problem as the one above, but I listed it separately because I am not positive yet.)
  • There is no way yet to do directed messages from either the client or the server.  (There isn’t a way for clients to request the list of valid target IDs and their associated names yet.)
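The registration step mentioned in that list could be sketched roughly like this.  Every name below is invented for illustration and none of it is the actual QubeKwest code: the idea is just that the server hands out unique source IDs and keeps a name table that clients can query for valid targets.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch: the server hands each connecting client a unique
// source ID so replies and broadcasts can be routed correctly.
public class SourceIdRegistry
{
    private final AtomicInteger nextId = new AtomicInteger( 1 );
    private final Map<Integer, String> names = new ConcurrentHashMap<>();

    // Called when a client registers; returns the ID the client should
    // stamp on everything it sends from then on.
    public int register( String clientName )
    {
        int id = nextId.getAndIncrement();
        names.put( id, clientName );
        return id;
    }

    // Lets clients look up valid target IDs and their associated names
    // so directed messages become possible.
    public Map<Integer, String> targets()
    {
        return new HashMap<>( names );
    }

    public void unregister( int id )
    {
        names.remove( id );
    }
}
```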

This list probably isn’t complete, but it does represent some of the more obvious things that still need to be added to the network engine.  Please don’t take this as negative; I’m thrilled to have successfully gotten a round trip ping to work, and I’m very happy that the list above no longer includes things like “make the network engine work at all.”  I call this a total victory with a little bit of a TODO list.  :D

Ignoring Network Events

So as usual around here lately, I’ve been working a lot on the network code.  I suspect if you read this blog with any amount of consistency, or if you are reading all of the posts all at once as a form of weird entertainment, you are probably getting a bit tired of hearing about it.  I am too, so I get it.  The problem is that networking is hard and complex and easy to screw up.  The other problem is that it’s such an important core piece of QubeKwest, that I have to get it right to have any hope of building a game on top of it.

Today’s networking story is about network events.  These magical things fly around inside the various parts of the networking engine and change flags on selection keys and allow it to behave accordingly as it does its thing.  The start of the problem was that I noticed that every single time I connected to the server from the client, the client took almost exactly 3 seconds to do so.  To be fair, I could ignore this problem and say “well it’s slow, but it works,” at least for a little while.  That didn’t feel right to me, so I started to dig in and study the code to try to figure out both why it was happening, and why it was so consistent.

Thanks to 3 seconds being the timeout I pass to the selector’s select() method, a value I chose back when I realized that the network engine was never shutting down properly, I had a place to start my search.  Unlike so many other issues with the networking code, this one jumped out immediately.

The process of connecting to the server from the client is a two-step process.  The first step is to set up some stuff and schedule the second part to occur once the selector gets to it.  The second part is for the selector to realize it has a connection to complete, and to, well, complete it.  On the surface, this all seems pretty straightforward, but these things take place in two different threads.  The client network engine is up and spinning in its own thread, and the UI has its own thread too, which is where the connection process is started.

If we look at the client network engine thread first, we see that it does something a bit like this: process any events that came in, select() on our selector for things that are ready, and then process the stuff that’s ready.  Then it loops back to the start and does it all over again.  The problem is that select() is a blocking call.  That means that in a network engine that isn’t busy doing anything (like before it has connected) there is nothing to cause the select() to ever finish waiting for something to do.  This is the exact spot where I added the 3 second timeout, so the loop would have a chance to check whether it is supposed to end because the engine is being shut down.
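The blocking behavior at the heart of this can be demonstrated in isolation.  This is a standalone sketch, not the engine code: with no channels registered and nothing ready, select( timeout ) has no choice but to wait out the full timeout before returning.

```java
import java.io.IOException;
import java.nio.channels.Selector;

// Standalone demonstration of a blocking select() with a timeout: when
// no channel is registered and nothing is ready, the call only returns
// once the timeout expires.
public class SelectStall
{
    // Returns how many milliseconds select() actually blocked for.
    public static long stallMillis( long timeoutMs ) throws IOException
    {
        try( Selector selector = Selector.open() )
        {
            long start = System.nanoTime();
            selector.select( timeoutMs );    // zero keys ready; waits out the timeout
            return ( System.nanoTime() - start ) / 1_000_000;
        }
    }
}
```

With a 3000 millisecond timeout, this is exactly the 3 second stall described below.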

Do you see the problem?  My selector is sitting there waiting for something to have its flags changed by an event.  The step where events change things happens somewhere else in the loop, and that step isn’t happening because select() is blocking the loop from running.  This causes network events to be effectively ignored for up to 3 seconds.  That means the pattern is really more like this: when select() times out after 3 seconds, the loop immediately comes back around, processes the events, and tries to select() again, and that select() returns immediately since there is now a flag set from the event.  Just like that, the second part of the connection process runs, but only after the 3 second timeout.

Now comes the hard part.  Determining what was happening was easy; coming up with a way to fix it will be quite tricky, and I believe that to have any shot at working, it will need to be yet another thread.  This new one has only one job: watch for events and process them immediately in a thread that isn’t being blocked by the select() method.  I don’t think the code involved in the new part will be terribly difficult to write, but it will require a bit of messing with the client and server engine codebases to integrate the proper use of it.
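I don’t know what shape that event thread will actually end up taking, but a minimal sketch of the idea might pair a blocking queue with Selector.wakeup(): the pump thread applies each event the moment it arrives, then kicks the selector out of its in-progress select().  All names here are invented, and this glosses over the thread-safety details of actually touching selection keys from a second thread.

```java
import java.nio.channels.Selector;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Hypothetical sketch of a dedicated event thread: it applies events
// the moment they arrive and then wakes the selector so the main loop
// sees the changed flags immediately instead of after the timeout.
public class EventPump implements Runnable
{
    private final BlockingQueue<Runnable> events = new LinkedBlockingQueue<>();
    private final Selector selector;
    private volatile boolean running = true;

    public EventPump( Selector inSelector )
    {
        selector = inSelector;
    }

    // Called from any thread (the UI thread, for example).
    public void post( Runnable event )
    {
        events.add( event );
    }

    public void stop()
    {
        running = false;
        post( () -> { } );    // no-op event to unblock take()
    }

    @Override
    public void run()
    {
        while( running )
        {
            try
            {
                events.take().run();    // apply the event right away
                selector.wakeup();      // end any in-progress select()
            }
            catch( InterruptedException e )
            {
                Thread.currentThread().interrupt();
                return;
            }
        }
    }
}
```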

Streaming Wholeness

A while back I had a brief moment of horror spawned from the idea that I was doing my networking stuff all wrong.  The good news is that I wasn’t totally wrong; the bad news is that I needed to make more stuff to do things correctly.  The premise is actually pretty simple.  I had assumed that when I sent a serialized 100 byte object across the network, it would arrive at the other end as a convenient 100 byte read of data from the network.  I’m not entirely sure why I thought this would be the case, but I have some theories.

First, every single tutorial I’d seen online or in books had no mention of this problem.  Second, I’m pretty sure that on localhost, this would be pretty close to how it actually works.  Third, the tutorials were sending things that didn’t actually have any logic behind them.  All these things settled into my brain as a simple “well, this must just work.”

The horror came from the theory that I was somehow mistaken.  I’m not really sure where it came from either.  It just sort of popped into my head and left shortly afterwards, only to appear again later on.  I think perhaps I realized that the way I had been coding things felt a bit like I was getting away with something.  As it turns out, I was right about that, unfortunately.

Let’s jump back to that 100 byte object.  If I send it, followed by 3 more of them, I’ve now sent 400 bytes of data, and all this happened as 4 simple writes on the sending end.  On the receiving end, there is the whole interwebs in the way.  So, while the 4 blobs of 100 bytes are guaranteed by the protocol to arrive in the correct order, and in their entirety, there is nothing whatsoever that guarantees that they will arrive as 4 convenient 100 byte deliveries.  In fact, the worst case scenarios are vastly more likely.

Before we delve into the example above, let’s take the same approach most tutorials use.  We will send “Hello World” as our data, all at once.  If it all arrives at once, and we show it on the screen, we will see “Hello World” and all will be just great.  If it arrives in 3 pieces containing “Hel”, “lo Wo”, and “rld”, and we show them on the screen as they arrive, we actually end up with the exact same result.  “Hello World” is on the screen, and everyone is happy about it.

The problem actually comes from the fact that I’m not just sending arbitrary strings of text across the network.  I’m sending carefully crafted collections of bytes that represent the various data structures in the game.  Everything from a chat message to a chunk full of blocks in the world just boils down to a pile of bytes.  The problem is of course that if I don’t have all the bytes for a data structure to repopulate all its fields, then I can’t reconstruct it properly.

Armed with this new wrinkle in the data I’m sending, let’s return to the example from above.  We’ll send 4 pieces of 100 bytes.  Imagine that they arrive in the following deliveries of data:  1 byte, 72 bytes, 285 bytes, 42 bytes.   Due to the structure of my serialized data, that first 1 byte delivery isn’t even enough information for me to figure out what kind of thing it was that I received.  The next 72 bytes allows me to determine the type of data I’m dealing with, which in turn allows me to determine the amount of data I need to recreate something of that type.  With 73 bytes delivered so far, you’ll probably notice that I still don’t have enough information to actually do that (since our hypothetical data blobs are 100 bytes each).  Next, we receive 285 bytes all at once, thanks to the magic of how networks work.  That means we are up to 358 bytes, which also means we can now reconstruct our first transmission successfully.  But wait, we can also reconstruct the second and the third objects as well, so we should definitely do that.  We now receive 42 more bytes, which brings us up to 400 bytes received, and we finally have the ability to finish the process of dealing with the data that was sent to us.
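That accumulate-until-complete dance is roughly the job the new class does.  Since the actual DeserializationStreamer isn’t shown here, the sketch below is my own illustration of the buffering idea with a deliberately tiny format: each message is prefixed with a single length byte, standing in for the real “figure out the type, then the size” logic.

```java
import java.io.ByteArrayOutputStream;
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch (not the real DeserializationStreamer): bytes
// arrive in arbitrarily sized deliveries, and complete messages are
// only handed out once every byte of a message is present.  Each
// message here is prefixed with one length byte for simplicity.
public class FrameBuffer
{
    private final ByteArrayOutputStream pending = new ByteArrayOutputStream();

    // Feed one network delivery in; get back zero or more complete messages.
    public List<byte[]> feed( byte[] delivery )
    {
        pending.write( delivery, 0, delivery.length );

        List<byte[]> complete = new ArrayList<>();
        byte[] buf = pending.toByteArray();
        int pos = 0;

        // Peel off messages as long as a whole one has arrived.
        while( pos < buf.length )
        {
            int length = buf[pos] & 0xFF;
            if( pos + 1 + length > buf.length )
                break;    // partial message: wait for more deliveries
            byte[] message = new byte[length];
            System.arraycopy( buf, pos + 1, message, 0, length );
            complete.add( message );
            pos += 1 + length;
        }

        // Keep only the unconsumed tail for next time.
        pending.reset();
        pending.write( buf, pos, buf.length - pos );
        return complete;
    }
}
```

Feeding it the awkward deliveries from the example above would yield nothing, nothing, and then several complete messages at once, which mirrors the 1 / 72 / 285 / 42 byte story.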

To help handle this situation, I have now created a class I call the DeserializationStreamer.  This new structure orchestrates the buffering of incoming data and assists in determining when it’s possible to deserialize something that has arrived, and also in actually doing that deserialization because it has to manage the buffers it’s using when that happens.  This was a long and complicated story, but the moral is probably something like “things are seldom as easy as they seem, and they are always way worse when networking is involved.”  Or something like that.

Silly Threads

It’s actually become quite entertaining to me how multiple threads can make silly things happen.  Like when a piece of code that you would never assume could go wrong suddenly develops amazing ways to surprise you by breaking code that is absolutely nowhere near it, and in ways that are almost perfectly random.  This is actually a little related to the networking code in QubeKwest because it is the first piece of the software that is actually multi-threaded, but I’m sure it won’t be the last as I try to squeeze every little bit of power out of whatever system I end up on.

In an attempt to prevent multi-threaded problems, I spent a solid week walking around with printed-out pages of source code and a pen.  I was marking every line of code with little letters that indicated which threads I thought could get to it, and “playing computer” in an attempt to figure out how something could go wrong.  The end result of this effort was that I found and likely fixed several potential problems, and then located and purchased a tool called ThreadSafe by Contemplate.  It was a very big hit on my meager indie development tool budget, but it found a problem I hadn’t considered.  Actually, I’m fairly proud that it only found one issue considering how many things it looks for and how big my codebase has become.

A new rule I made up, which I call “the core rule of multithreaded code,” can be summarized as: “If multiple threads can run the lines of code in a method in an inconvenient order, they will run in the most inconvenient order possible.”

Consider this code for a moment, but pretend that all of the parts of it are actually the fancy, correctly named versions from the java.util.concurrent package, used in an attempt to make this work as well as possible without synchronization blocks.  I just didn’t want to use their longer names for this example because that would make things more complex for no reason.

private HashMap<Integer, List<String>> hash = 
    new HashMap<Integer, List<String>>();

public void addToList( Integer inKey, String inData )
{
    List<String> list = hash.get( inKey );
    if( list == null )
    {
        list = new ArrayList<String>();
        hash.put( inKey, list );
    }
    list.add( inData );
}

This code manages a collection of lists and was based on the idea of “a list of stuff to do for this thing” but was simplified to just a list of strings associated with an integer key.  First, I’m going to explain what this code does when you call it in a purely single threaded way.

Let’s say you call it with the parameters 1, “Taco”.  First it attempts to load the list of strings for key 1.  If there wasn’t one yet, it makes a new list and puts it into the hash map associated with key 1.  Afterwards, it adds “Taco” to the list and ends.  If you call it a second time later with the parameters 1, “Burrito” it tries to get the list associated with key 1 and succeeds.  That means the if block that sets up a new list never happens and it moves on to add “Burrito” to the list and ends.  This all flows nicely and covers your various edge cases and generally makes good sense.

Now, imagine you call it twice at the same time in two different threads.  (Which is actually possible in my network code.)  The first thread uses the parameters 1, “Taco” and the second thread uses 1, “Burrito” just like the two separate calls in the example above, except they happen at the same time.  Following the rule that inconvenient things must occur in the most inconvenient way possible, we’ll also assume that there is no list attached to key 1 yet.  Thus, both threads enter the method, both threads attempt to get the list for key 1, and neither thread gets a list (because they are both the first).  Next, both threads check for a null list (which they both have), both threads make a new list, and both threads put their list into the hash map for key 1.  This succeeds in both cases, though only one at a time, and this is where the problems really start.

Let’s say that the thread trying to add “Taco” goes first.  Now the hash map has a list that was successfully put into the spot for key 1.  Immediately afterwards, the thread trying to add “Burrito” puts its brand new list into the hash map for key 1 too.  This also succeeds, but replaces the list that was in the spot already from when the thread trying to add “Taco” put its there.

Consider the state of the local variables in each thread after this has occurred.  Both threads have a variable called “list” that refers to a perfectly valid new list, but only one thread has its list set in the hash map.  (But they both think their list is the one in the hash map.)  Finally, both threads add their new value to their own reference to a list, and exit.  What really happened though?  The thread putting “Taco” into a list did so to a list that is only known to the local list variable, and the thread putting “Burrito” actually puts it into a list that the hash map knows about.  That means that both calls to the function completed successfully, but the result is that in the hash map, there is a list that only has “Burrito” in it.  The “Taco” data is simply gone the moment the local variable “list” goes out of scope when the function ends.

This demonstrates how multithreaded code can blow up, but now let’s think about the side effect of this event.  Later on, code is going to grab the list for key 1 out of the hash map to do things with the list of strings in it.  It will also probably just be missing values (like “Taco” for example) and have no idea that they are even missing.  The end user expects the results to be shown for multiple items of food, but some of the ones it figured should be there simply aren’t.  Now a developer attempts to figure out why.

The bug report talks about code misbehaving somewhere nowhere near the place where the problem actually occurred, and which data goes missing is random, so the developer can’t even reliably reproduce the problem.  There is nothing that requires the “Taco” thread to put its list in first, so it could just as easily be “Burrito” that’s missing, completely at random.

Better still, attempting to walk through the code with a debugger shows the function completing successfully twice, and in a lot of cases, the use of the debugger itself is enough to cause the problem to not occur at all.  After all, a breakpoint that causes a delay long enough for one path through the function to already see the other list in the hash map would make the problem vanish.

The solution is to use a special method on the concurrent form of a hash map that atomically puts something into the hash map only if there isn’t already something there.  This would make the code above look like this.

private HashMap<Integer, List<String>> hash = 
    new HashMap<Integer, List<String>>();

public void addToList( Integer inKey, String inData )
{
    List<String> list = hash.get( inKey );
    if( list == null )
    {
        list = new ArrayList<String>();
        List<String> tempList = hash.putIfAbsent( inKey, list );
        if( tempList != null )
            list = tempList;
    }
    list.add( inData );
}

This should look really familiar, but the attempt to set the list into the hash map is now protected by the use of putIfAbsent(), and there is a second if block to adjust things based on how that went.  The putIfAbsent() method internally protects against things happening at the exact same time, so they are forced to get in line and happen one after another.  Which order that happens in is still random, but at least there is a little bit of sanity instead of none at all.  Using the same multithreaded example as before, the new flow has been changed to this:

The first thread uses the parameters 1, “Taco” and the second thread uses 1, “Burrito”, and as before they happen at the same time.  We will also repeat the scenario where both threads are attempting to be the first to set a value associated with key 1.  Thus, both threads enter the method, both threads attempt to get the list for key 1, and neither thread gets a list (because they are both the first).  Next, both threads check for a null list (which they both have), both threads make a new list, and both threads attempt to put their list into the hash map for key 1.  This time, however, each thread can only put its list into the hash map if there isn’t one already there.  The putIfAbsent() method returns null if there was no list already set for the key, or, if there was a list, it returns the one that was there instead of actually putting the new one in.

This gives us the opportunity to behave in a way that makes more sense.  Now, when we try to insert a new list into the hashmap and there wasn’t one there, we succeed and move on to add our string to the list without any trouble.  When we try to insert our new list and there is already one there, we get the one that was there back from the attempt.  If that happens, we simply replace our new list with the one we got back.  That means we have a new list that will be garbage collected, and that both threads actually have a reference to the exact same list.

As before, let’s say that the thread trying to add “Taco” goes first.  Now the hash map has a list that was successfully put into the spot for key 1.  Immediately afterwards, the thread trying to add “Burrito” tries to put its brand new list into the hash map for key 1 too.  Instead of blowing away the list put there by the thread trying to add “Taco”, it actually returns the list the “Taco” thread put there.  After that, we simply check to see if there was some list returned by the putIfAbsent() call, and if there was, we use it as our local list.  That means when “Taco” gets added to the list, that works fine.  It also means when “Burrito” is added to the list, it’s actually added to the same list the “Taco” thread used instead of the one it initially created for the job.  That means both strings are in the same list, and that list is correctly stored in the hash map.  That in turn means the code actually worked, even when the most inconvenient order possible was used, and that is a reason to be happy.  I like reasons to be happy.
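As a side note, the concurrent map also offers computeIfAbsent(), which folds the whole get, check, and put dance into a single atomic step.  Here’s a sketch using the real java.util.concurrent names this time; note the list itself also has to be thread-safe, which the simplified examples above gloss over.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;

// The same addToList() idea written with the real concurrent classes
// and computeIfAbsent(): the mapping function creates the list at most
// once per key, so concurrent callers always share a single list.
public class ListStore
{
    private final Map<Integer, List<String>> hash = new ConcurrentHashMap<>();

    public void addToList( Integer inKey, String inData )
    {
        hash.computeIfAbsent( inKey, key -> new CopyOnWriteArrayList<>() )
            .add( inData );
    }

    public List<String> listFor( Integer inKey )
    {
        return hash.get( inKey );
    }
}
```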

Network Round 5

Networking is still kicking my butt.  At the end of the previous post, I’d started what I refer to as “Network Round 4” and I’ve already retired that version.  It actually temporarily became the “com.qubekwest.net.old” package (referred to just as “old” later) as a place to constantly refer back to it as I write what I now call “Network Round 5.”  This has obviously taken a bit of work, but it isn’t really as horrible as it probably sounds, honest.

The cause of this new iteration of the network code is actually that I’d gotten pretty far with the previous version, and then realized that I sort of had no idea how to actually use it.  Meaning, it probably worked, but I didn’t have any clean patterns to follow to use it in something useful.  After piles of refactors to try to make it useful, I gave up.  This in turn inspired a collection of interfaces that I wanted to write and that occasionally conflicted with names already in my networking code.  The best solution I could come up with to make this as easy as possible was to strip the network code back to nothing as described above with the “old” package and start making new files in the “net” package that was now completely empty.

This creation of interfaces helped me to determine two things at the same time.  The first was the things that a “user” of the network engine (which is actually other pieces of code, not people) would need it to be able to provide, and the second was the things that the network engine does that don’t need to be exposed to anyone else.  Those results may be immediately obvious to other people, since that’s exactly what interfaces are supposed to do, but in this case it was actually supremely useful.

From the first category of “need to be able to provide” I knew that I needed a collection of methods that the server had to be able to do to actually be a server.  For example, it needed to be able to send data to clients.  What surprised me a little was the number of different ways I had to be able to send data to the client.  My primary pattern is that I’m sending data from a specific source to a specific target.  Other methods build on that concept for the ability to send updates to other users that are near me in the world.  Maxing out the idea of sending to groups, there are also methods for broadcasting a single piece of data from a specific source, but to all connected users.  This covers things a server needs to be able to do for other pieces of code.
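Based on that description, and with every name invented for illustration (none of these are the actual QubeKwest signatures), the sending patterns might be sketched with per-client outgoing queues standing in for real sockets:

```java
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.Map;
import java.util.Queue;

// Hypothetical sketch of the send patterns described above.  Each
// connected client gets an outgoing queue in place of a real socket.
public class SendPatterns
{
    private final Map<Integer, Queue<byte[]>> outgoing = new HashMap<>();

    public void connect( int clientId )
    {
        outgoing.put( clientId, new ArrayDeque<>() );
    }

    // Directed: data from a specific source to a specific target.
    public void send( int sourceId, int targetId, byte[] data )
    {
        Queue<byte[]> queue = outgoing.get( targetId );
        if( queue != null )
            queue.add( data );    // real code would also stamp on sourceId
    }

    // Broadcast: data from a specific source to all connected clients.
    public void broadcast( int sourceId, byte[] data )
    {
        for( Queue<byte[]> queue : outgoing.values() )
            queue.add( data );
    }

    // Test hook: pull the next pending delivery for a client, if any.
    public byte[] nextFor( int clientId )
    {
        Queue<byte[]> queue = outgoing.get( clientId );
        return queue == null ? null : queue.poll();
    }
}
```

The “send to users near me in the world” variant would sit between these two, filtering the broadcast loop by some notion of proximity.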

The second category of “don’t need to be exposed” included things like “accept a connection.”  I initially put a method to accept a connection in the interface for a server, but then realized that there is literally no code outside of the network itself that would ever call it.  If you think about it for a moment, you will probably realize that the method for accepting connections must be there, but that it will be called because of network traffic, not by some other piece of code.  That means that it can really be a private method on the network server code itself and building the interface helped me to realize that.

There are several other interfaces as well.  I’ve defined what it means to be a network worker thread, which is a piece of code that actually does the work of the server.  There can be any number of workers in a server, and they each get incoming data handed to them if they want it.  They can’t affect each other, but they all get a shot at doing something with the data as it comes in.  There are also handlers that create a sort of callback structure that allows clients to send data to the server where a response is expected.

This whole ball of wax is being implemented with Java’s NIO package patterns.  I still feel it is vastly more complex than other ways of doing things, but I’m also pretty confident that the result will perform better with more people connected to the server than the standard socket approach would.  I’m hoping that makes this all worth it.  I’ve actually spent almost six months working on, researching, writing and rewriting, and generally being frustrated by the network code, so I sure hope so.