Mmm, when a good class comes together

I’m so not a DBA. We’re currently working on a bunch of stuff that relates to the way the game understands permissions and subscriptions; a necessity for work we’ve wanted to do for a while and for our Chinese partners. The database portion has been a pain because I’ve been overly focused on how performance will translate to China.

Until now we’ve generally had relatively simple SQL queries that are just hardcoded into the host executables, which is going to be difficult for China to manage. I finally stopped trying to create great queries this week and instead put them into stored procedures so I can get the code working and worry about the SQL later (this also gives China far more flexibility, since they can replace the SPs with anything that conforms to a well-documented API).

It didn’t help that my SQL benchmarking completely forgot to take into account the fact that I was testing against a remote database over a busy T1 office link. My horror at query performance went away when I actually tried it on a local database.

With 4-5 days errantly focused on SQL, I’d almost forgotten what I’d done code-wise: the host caches the permission groups and then builds a simple cache of group combinations and their resulting permission aggregates, through which you view each individual player’s permission set.
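A minimal sketch of that two-level cache, under some loud assumptions: the group names, permission flags, and the aggregation rule (set union) below are all illustrative, not the actual game code.

```python
# Sketch of the two-level permission cache described above.
# Level 1 holds permission groups as loaded from the database;
# level 2 memoizes the aggregate for each *combination* of groups,
# so every player in the same groups shares one computed set.

class PermissionCache:
    def __init__(self, groups):
        # group id -> set of permission flags (loaded from the DB once)
        self._groups = groups
        # frozenset of group ids -> aggregated permission set
        self._aggregates = {}

    def permissions_for(self, group_ids):
        key = frozenset(group_ids)
        agg = self._aggregates.get(key)
        if agg is None:
            # Compute the aggregate once; later lookups for the same
            # group combination reuse this result.
            agg = set()
            for gid in key:
                agg |= self._groups.get(gid, set())
            self._aggregates[key] = agg
        return agg

groups = {
    "account": {"login", "chat"},
    "rank":    {"chat", "command"},
}
cache = PermissionCache(groups)
print(sorted(cache.permissions_for(["account", "rank"])))
# -> ['chat', 'command', 'login']
```

The frozenset key is what makes combinations order-independent, so `["account", "rank"]` and `["rank", "account"]` hit the same cache entry.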

Replacing the existing compound permission system (account permissions + server permissions + “chat group” permissions + rank permissions) was still to do, and because those were implemented back when there was no common “player” entity amongst server processes, it seemed like it would be a pain.

So I was pleasantly surprised to find that my stored procedures covered all of my use cases (a few more fields returned than I needed in a couple of cases, but never at any significant expense), which lends a potential for caching benefits but also meant I could reduce a bunch of code down to a simple “get the details on this” encapsulation. Check off 30% of the work.
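That “get the details on this” shape might look something like the following sketch. The table, column, and function names are hypothetical, and sqlite3 stands in for MySQL (sqlite has no stored procedures, so an inline query takes the place of the `CALL`).

```python
# Hypothetical "get the details on this" encapsulation: one call per
# entity, with the SQL hidden behind it. Against MySQL this body would
# be a stored-procedure call instead of an inline query; either way the
# caller never sees the SQL.
import sqlite3

def get_player_permissions(conn, player_id):
    cur = conn.execute(
        "SELECT perm FROM player_permissions WHERE player_id = ?",
        (player_id,))
    return {row[0] for row in cur}

# Stand-in database so the sketch is self-contained.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE player_permissions (player_id INTEGER, perm TEXT)")
conn.executemany("INSERT INTO player_permissions VALUES (?, ?)",
                 [(1, "chat"), (1, "spawn"), (2, "chat")])

print(sorted(get_player_permissions(conn, 1)))
# -> ['chat', 'spawn']
```

The point of the wrapper is that the host code only ever asks one question; swapping the query for a stored procedure (or anything else conforming to the documented API) changes nothing above this function.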

My permissions encapsulation also plugs in sweetly to the unified “wwiiPlayer” base, eliminating another 30% of the work. That left exposing the data to the client (oh, heck, it’s the same across the board, which means those 30 lines of code I just wrote did the job): another 10%.

I have to actually implement the effects of some of the new permissions this introduces (branch/account/rank restrictions are done, that’s another 10%), but so far today all the work I’ve envisaged being the big overhead has come together very succinctly. Hopefully tomorrow Ramp will have a working beta client that uses the changes to the client<->host protocol and I’ll be able to find out where it actually breaks, but it’s looking good.

What I’d like to have done this pass is re-encapsulate the huge morass of pre-spawn state into a simplified engine, but I was expecting all of the rest of this work to take so much longer. After 1.30 I’m going to go back to that and see if I can’t pull it off in fairly short order. That’ll allow me to shuffle around some of the ownership of data and simplify an awful lot of systems, and would be a huge step towards getting the persistent map/mission connection that is blocking the work we want to do on fancy scoring/mission-system changes; those should be really simple if it weren’t for the fact that there are so many limbo states and race conditions right now.

A huge load off my chest :)


Welcome to the world of database procedures. Centralized code that really does run faster. Just remember to trap exceptions with meaningful error messages.

The biggest one I see is if you return more than one result set.

Well, some of these I actually encapsulated as views rather than procedures, because a procedure was excessive. But MySQL doesn’t seem to handle SELECT * FROM view WHERE clause very efficiently when the clause is something fundamental to the constraint of the data in the result set (no surprise there), whereas using a procedure to insert the clause early helps a lot.
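The two shapes being compared look roughly like this sketch (table and column names made up; sqlite3 stands in for MySQL). Note that sqlite pushes the predicate into the view just fine, so both forms return identical rows here; the complaint above is about MySQL views of that era sometimes materializing the whole view before the outer WHERE is applied, which the early-predicate form (or a stored procedure) avoids.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE subs (player_id INTEGER, campaign TEXT, active INTEGER);
INSERT INTO subs VALUES (1, 'europe', 1), (2, 'europe', 0), (3, 'pacific', 1);

-- The view form: the caller's constraint is applied outside the view.
CREATE VIEW active_subs AS
    SELECT player_id, campaign FROM subs WHERE active = 1;
""")

# Filtering the view from the outer query...
via_view = conn.execute(
    "SELECT player_id FROM active_subs WHERE campaign = 'europe'").fetchall()

# ...versus injecting the fundamental constraint early, as a procedure
# (or a hand-built query) would.
early = conn.execute(
    "SELECT player_id FROM subs WHERE active = 1 AND campaign = ?",
    ("europe",)).fetchall()

print(via_view, early)
# -> [(1,)] [(1,)]
```

Same answer either way; the difference under MySQL was whether the engine got to use the `campaign` constraint while scanning, or only after building the view’s full result set.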

A lot of speed is gained because when the procedure is called, the results are usually cached; so if it is called frequently it doesn’t have to retrieve a whole new version and go through the calculations, it just uses the cached result from a prior call.

It’s unlikely that the results from these queries will be cached (too many rows and variables); however, the compiled statement(s) are likely to remain cached, and because using the same query ensures the same access patterns, the indexes are likely to be better cached.
