How deep the rabbit hole goes…

I’ve been plowing ahead with my grid system redo. Yesterday I got the last vestiges of the old system unplugged from the code and, since this is officially our holiday week, I’m taking the opportunity to clean up some of the peripheral code.

The old code combined the packing of data into the output stream with the actual assembly of the data. The result: spaghetti code mixed with massive blocks of #ifdefs, utterly unreadable.

What the old code had going for it, perhaps, was efficiency. It’s hard to judge, though. Even giving it the benefit of the doubt, the new code will lend itself to parallelization.

Right now my brain is stewing over how I am going to elegantly/efficiently track (and communicate) changes in such large and complex structures, and in a language that doesn’t have reflection.

Last month, I developed a tool that takes XML input and creates C++ classes with variables that track last-modified time. But I’m not 100% comfortable with the performance for such a dynamic data structure, and each field winds up having a 64-bit timestamp accompanying it.

32-bit timestamps won’t work because the servers can run for more than 49 days (2^32 milliseconds is about 49.7 days). I could use the vehicle creation timestamp as a baseline, but that means expending CPU cycles to get absolute times.

I could probably get away with 8-bit per-field counters, simply incrementing a field’s counter every time the field is altered. That would let me tell what has changed between two viewings fairly easily, and with sufficient granularity. But I then need a way to map fields <-> counters.
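For illustration, a rough Python sketch of what I’m picturing (the real thing would be generated C++; the field names and the ChangeTracker class here are invented):

FIELDS = [ "position", "heading", "speed", "damage" ]
FIELD_INDEX = { name: i for i, name in enumerate(FIELDS) } # the field <-> counter map

class ChangeTracker:
  def __init__(self):
    self.counters = [0] * len(FIELDS) # one 8-bit counter per field

  def touch(self, field):
    # Wraps at 256, which is fine so long as fewer than 256 changes hit a
    # field between two observations of it.
    i = FIELD_INDEX[field]
    self.counters[i] = (self.counters[i] + 1) & 0xFF

  def snapshot(self):
    return list(self.counters)

  def changedSince(self, snap):
    # Which fields differ from a previously taken snapshot?
    return [ f for f, i in FIELD_INDEX.items() if self.counters[i] != snap[i] ]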

I could also “chunk” changes to the data, but we usually perform some translation/reduction of fields in transmission. I’m not sure that chunking wouldn’t result in a significant decrease in bandwidth efficiency. Grr.

 

3 Comments

Why not maintain “last-sent” and “current” copies of the structure, and compare those when sending a new copy?

That is one of the options on my virtual table…

If you go with just the basic data, that’s ~200-250 bytes. Let’s be generous with the in-memory representation and call it 250 bytes.

If you have 1000 players on a server, that’s no big deal: 250 x 1000 = ~244KB.

Except it’s not that… It’s 1000 players with, say, 128 of these each. That works out to ~31MB.

Given the environment that data is being worked in, you’d need to bring it back closer to that 244KB by using frames:

# For vehicle 1, which last saw frame 1 of vehicle 2 and frame 3 of vehicle 5:
vislist = [ { "vehID": 2, "lastFrame": 1 }, { "vehID": 5, "lastFrame": 3 } ]

The core of the cell host reads something like this:

while game.running:
  network.receiveData(maxMilliseconds = 5)
  server.housekeeping()

  candidates = server.chooseVehiclesToSendUpdatesTo()

  for candidate in candidates:
    candidate.sendNextWorldUpdate() # Actually a vehicle update with possible world stuff.

That is, an incoming event on vehicle X doesn’t immediately propagate out to the vehicles that can see it. (There are various reasons why switching to an event-driven topology like that wouldn’t necessarily be the best thing, the biggest being that the amount of overall change required is beyond the realistic scope of “refactoring” a live product.)

Rather, events are received, applied and then propagated across the cluster.

Then observers periodically collate changes and propagate them to their client.

So – the frames concept would be akin to a minimalist revision control. Each incoming change would increment the frame count for that vehicle.

Each observer would record which frame they last saw. The first observer to see a new frame-delta would generate the necessary update and store it, so that subsequent observers can avoid the regeneration.

Frame-diffs would need to be indexed by source and destination, so that when a new change comes in, all diffs pointing to the old one are removed, and when no observers remain with frameX as their last frame, diffs from frameX are also released.

Since updates are generated in batches, that would eliminate a lot of the delta generation, but the indexing operations might potentially defeat any advantages to be gained there.
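Roughly, the bookkeeping might look like this (a sketch only; generateDelta and the observer’s lastFrame attribute are stand-ins for the real serialization and tracking code):

class FrameDiffCache:
  def __init__(self):
    self.frame = 0  # bumped on every incoming change to this vehicle
    self.diffs = {} # (fromFrame, toFrame) -> cached encoded update

  def onChange(self):
    self.frame += 1

  def updateFor(self, observer, generateDelta):
    # The first observer needing this (from, to) delta generates and caches
    # it; later observers at the same lastFrame reuse the cached copy.
    key = (observer.lastFrame, self.frame)
    if key not in self.diffs:
      self.diffs[key] = generateDelta(*key)
    observer.lastFrame = self.frame
    return self.diffs[key]

  def expire(self, framesStillObserved):
    # Once no observer's lastFrame is frameX, diffs from frameX go away.
    self.diffs = { k: v for k, v in self.diffs.items() if k[0] in framesStillObserved }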

The amount of “automation” involved there… well, if this were a GC’d language I probably wouldn’t sweat it, but it would mostly need to be implemented semi-manually.

The big problem with this approach is the potential for data gluts in the event of any kind of lag spike.

I’m hoping to sit down with Troy for a few hours today (after I go back to bed) and throw around a few ideas that we might be able to pull off fairly quickly, such as adding discrete “fire” messages for one-shot weapons and heavy ordnance vs. things like SMGs etc. That would remove one of the biggest hurdles to a framing-based delta approach.

Incidentally, I actually wrote code today for chunk-diffing opaque binary data, and I have a Python-based XML parser which can generate modification-tracking classes based on the “SyncVar” and “VarGroup” classes I created for 1.32.
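The former is just comparing the two buffers a fixed-size chunk at a time and shipping the chunks that differ. In miniature (the real code is C++; the 16-byte chunk size is arbitrary, and both buffers are assumed to be the same length):

CHUNK = 16

def chunkDiff(old, new, chunk = CHUNK):
  # Emit (chunkIndex, newBytes) for every chunk that differs.
  return [ (i // chunk, new[i:i + chunk])
           for i in range(0, len(new), chunk)
           if old[i:i + chunk] != new[i:i + chunk] ]

def chunkPatch(old, changed, chunk = CHUNK):
  # Apply a diff produced by chunkDiff() to reconstruct the new buffer.
  buf = bytearray(old)
  for index, data in changed:
    buf[index * chunk : index * chunk + len(data)] = data
  return bytes(buf)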

The problem with the latter is that it uses 64-bit timestamps, which would bump the 250-byte structure to a 578-byte behemoth. 1000 * (128 + 2) * 578 = ~71MB.

That’s a fairly large “active” data set, and I think the servers are only DDR2, not DDR3.

Using frame counters would only add 40-odd bytes to the structure, and reduce the working set from 1000 * 128 to more like 10-20 * 128. Assuming a lag spike somewhere, maybe 50 * 128 * 300 bytes = ~2MB. Under healthy conditions, it should be more like 12 * 128 * 300 = ~450KB.

Separating out fire messages would further reduce the size.

Another approach I’ve been considering is a ZeroMQ / message-passing option, using a broadcast method to send updates out. The code would then be something like this:


updateGenSockName = "inproc://update-generator"
updateGenSock = zmqContext.socket(zmq.PUSH)
updateGenSock.bind(updateGenSockName)

updateResSockName = "inproc://update-results"
updateResSock = zmqContext.socket(zmq.PULL)
updateResSock.bind(updateResSockName)

for i in range(numThreads):
  createThread(updateGenerator, updateGenSockName, updateResSockName)

otherStartupStuff()

# ...

  candidates = vehiclesToSendUpdates()
  for candidate in candidates:
    updateGenSock.send(candidate)

for _ in range(len(candidates)):
    vehicle, update = updateResSock.recv()
    vehicle.send(update)

# ...

def updateGenerator(inSockName, outSockName):
  # Workers connect; the main thread owns the bound end of both sockets.
  inSock = zmqContext.socket(zmq.PULL)
  inSock.connect(inSockName)
  outSock = zmqContext.socket(zmq.PUSH)
  outSock.connect(outSockName)

  while game.running():
    candidate = inSock.recv()
    oldList = candidate.visList()
    newList = rethinkVisList(candidate, oldList)

    update = []
    # Who left/joined their vis list?
    for vehicle in complement(oldList, newList):
      if vehicle in oldList:
        # Stop listening to the event stream for this vehicle.
        candidate.eventSock.unbind(vehicle)
        # Tell the player this vehicle went away.
        update.append([ '-', vehicle.id, None ])
      else:
        # Start listening to the event stream.
        candidate.eventSock.bind(vehicle)
        # Introduce the player to this new vehicle and provide a reference frame.
        update.append([ '+', vehicle.id, vehicle.intro() ])

    # Drain all the pending events, keeping only the latest copy of each.
    events = {}
    while True:
      try:
        vehID, eventID, event = candidate.eventSock.recv(zmq.NOBLOCK)
      except zmq.Again:
        break
      events.setdefault(vehID, {})[eventID] = event

    for vehID, vehEvents in events.items():
      for eventID, event in vehEvents.items():
        update.append([ 'e', eventID, event ])

    outSock.send(candidate, update)

Note: This isn’t even pseudo code, it’s protocode. No attention to efficiency or intent to write target code. Just pure concept.

Actually, it would probably be easier to use ZeroMQ’s PUB/SUB sockets and “subscribe” to a particular vehicle rather than bind/unbind particular vehicle sockets.
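Something like this, using pyzmq’s actual PUB/SUB calls (the inproc endpoint name and the topic-per-vehicle-id scheme are just illustrative):

import struct
import zmq

ctx = zmq.Context.instance()

eventsPub = ctx.socket(zmq.PUB)
eventsPub.bind("inproc://vehicle-events")

def topic(vehID):
  # Fixed-width topic; zmq subscription matching is prefix-based, so a
  # plain string id of "1" would also match vehicle "12".
  return struct.pack("!I", vehID)

def publishEvent(vehID, payload):
  # Topic-prefixed multipart message; SUB sockets filter on the prefix.
  eventsPub.send_multipart([ topic(vehID), payload ])

def makeObserverSocket():
  sub = ctx.socket(zmq.SUB)
  sub.connect("inproc://vehicle-events")
  return sub

def watch(sub, vehID):
  # Subscribing replaces the bind/unbind dance in the sketch above.
  sub.setsockopt(zmq.SUBSCRIBE, topic(vehID))

def unwatch(sub, vehID):
  sub.setsockopt(zmq.UNSUBSCRIBE, topic(vehID))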
