I really didn’t think this would happen so quickly. When you’ve been fitting spits and spots of systems together over 6 years towards a goal, you build up an overly active awareness of how much is involved; you’re overly inclined to fill in variables. I knew that the host was easy enough to build, and I’d scripted the Mac build for Gophur, so I knew that wasn’t hairy. Recently I’d come to know that, now the client and host source is all in one repository, I can match source revisions. But when I tried to put it all together, along with scripting it under Windows … Pop. I think my head just exploded. Is that my amygdala on your lapel?
Finally: we have a one-click method of building the complete set of Windows Client, Mac Client and Linux Host.
There’s plenty of polish I could afford to add (I can probably use their telnet “action” to have it talk to our internal IRC server to announce to the #coders channel that Revision 410 works on all 3 platforms).
Admittedly, there isn’t yet a built-in interface for running remote builds, although there are built-in actions for all kinds of version control, a variety of compilers and build tools, databases, installers, driving IIS, great Team Suite/Foundation integration, unit-testing tools, and so on.
I wrote a fairly simple shell script, shared by the Mac and Linux build boxes, that updates its copy of the source to the revision being tested and then invokes either Xcode or make to build the appropriate executables.
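The shared script could look something like the sketch below. This is not the actual Playnet script: the paths, project names and targets are assumptions; only the idea of one script switching on platform, and printing “FAIL:” on error (see below), comes from the post.

```shell
#!/bin/sh
# Sketch of a shared build script for the Mac and Linux build boxes.
# Paths and project names are hypothetical, not the real layout.
build_revision() {
  rev="$1"
  src="$HOME/build/src"    # hypothetical working-copy location
  # bring the working copy to the revision being tested
  svn update -r "$rev" "$src" || { echo "FAIL: svn update"; return 1; }
  # build with Xcode on the Mac box, with make on the Linux host box
  case "$(uname -s)" in
    Darwin) xcodebuild -project "$src/client.xcodeproj" \
              || { echo "FAIL: xcodebuild"; return 1; } ;;
    *)      make -C "$src/host" \
              || { echo "FAIL: make"; return 1; } ;;
  esac
  echo "OK: built r$rev"
}

# FinalBuilder would invoke this with a revision number, e.g.: build.sh 410
if [ $# -gt 0 ]; then
  build_revision "$1"
fi
```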
With that done, it was a trivial matter to add a FinalBuilder action that invokes PuTTY’s plink to log into the build boxes and ask them to do their work. FinalBuilder automatically seemed to understand the warnings and errors generated by GCC; my script flags failure by printing “FAIL:”, I check the output for that, and voila.
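The failure check boils down to grepping the remote output. A minimal sketch of that logic, with made-up host and log names (in the real setup this is a FinalBuilder action, not a script):

```shell
# Sketch of checking a remote build log for the "FAIL:" convention.
check_remote_log() {
  if grep -q '^FAIL:' "$1"; then
    echo "remote build failed:"
    grep '^FAIL:' "$1"
    return 1
  fi
  echo "remote build ok"
}

# The log would come from plink, roughly (hostname invented):
#   plink -batch builduser@mac-box ./build.sh 410 > remote.log
printf 'compiling...\nFAIL: xcodebuild\n' > remote.log
check_remote_log remote.log || echo "-> build would be marked bad"
```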
The other part of this system is FinalBuilder Server, which is basically a web-based front end for managing and running multiple projects/builds. It can co-ordinate multiple build machines or it can do the work itself. Mostly it’s just a nice simple interface. It can also schedule things to happen automatically, either based on time or on fancier things like: run this project whenever someone checks in some new code.
So in addition to a guaranteed, full, daily test build, every time one of our coders adds or changes code and commits it to subversion, it automatically gets checked against all three platforms without any user action required.
You’d be amazed how much time we (and any project) lose because someone changes something and nobody notices that it broke the host code or killed the Mac build. Having feedback like that within minutes means it can be dealt with while the changes are still fresh in the developer’s head.
Once the script is bedded down, it’ll be time to embellish it some more: it needs to be able to build more than just the internal “debug” version of the binaries, and it needs to periodically check the release variants too, so that we don’t find out at the end of a development cycle that someone broke something 6 weeks ago.
From there it’s a matter of migrating manual tasks into the projects: building and staging executables for release / beta; automatic updates of the version information; automatic branching and tagging of the source code(*); automatic preparation of data; automatic terrain merges and validates.
(* As each build completes, I set a little revision property, fb:mac-build, fb:win-build or fb:host-build, to a value of 1 for success or 0 for failure, so that we can easily tell which builds were good or bad just from subversion.)
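Stamping a revision property from a script amounts to one `svn propset --revprop` call. A hedged sketch, where the repository URL is invented; the property names are the ones above (note that revision-property changes require the repository’s pre-revprop-change hook to allow them):

```shell
# Sketch of recording a build result as a subversion revision property.
REPO="http://svn.example.com/playnet"   # assumption: the repository URL
mark_build() {   # usage: mark_build <mac|win|host> <revision> <1|0>
  svn propset --revprop -r "$2" "fb:$1-build" "$3" "$REPO"
}
# e.g. after the Mac build of r410 succeeds:
#   mark_build mac 410 1
```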
While the name “FinalBuilder” seems to imply that building is all it’s good for, I’m hoping it won’t be too long until we can work with it to start building automated QA tests, so that once it’s done checking code changes it can go on and do data tests. We already have a terrain validator, but that runs on a Linux box, and Doc has to ssh into it and follow some prompts to get it to run. Of course, I could probably build batch files, or have him run ActivePerl and provide him a Perl script to do it under Windows, but I dislike burdening a producer with something that technical. FinalBuilder is definitely the answer to that concern.
I’d like to see it progress rapidly to automated testing of the client, although that’s going to need the client guys to provide functionality for that sort of thing. Our internal builds have, for instance, a /command that flies a fixed route through the game world and records various metrics. If they can move that to a command-line argument, then I can have FB build the game and launch it with that argument.
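Purely speculative, but if that /command were exposed on the command line, the FinalBuilder step could reduce to a one-line wrapper like this. The binary name, flag names and file names are all invented for illustration:

```shell
# Hypothetical wrapper for an automated client fly-through test.
run_metrics_flight() {   # usage: run_metrics_flight <client> <route> <out>
  client="$1"; route="$2"; out="$3"
  "$client" --flyroute "$route" --record-metrics "$out"
}
# e.g. run_metrics_flight ./ww2client canned_route.dat metrics.csv
```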
Which brings us one step closer to Continuous Integration testing. FinalBuilder isn’t quite at the level of some Continuous Integration tools, but it seems to be headed that way, and the CI offerings I looked at were woefully Java-centric and preachy. “C++ legacy code is supported”: so wait, you list “C++” as one of your supported languages, but then you call it “legacy”? FinalBuilder is currently PC-only but not rude about it. They’ve been receptive and helpful with issues of integrating “other stuff” at every turn. They’ve alluded, but only very mildly, to an interest in developing for other platforms.
That, of itself, has been worth a lot. If I want someone who’ll sell me tools and then tell me what I should be doing with them, I’ll go to Microsoft. VSoft have great customer service in my experience, and they’re on the ball with delivering a product that works for you and in your working environment, rather than treating your custom as an open invitation to tell you how you should organize your desk.
I still have to sell the production and management team on this: that it’s worth spending time to set up automation, and spending more time adding things to it. Playnet’s internal tools have had a bad history of creating drag rather than lift. Not only do they have to be operated and maintained, they have to be managed and tested and checked. Playnet, as a team, hasn’t really experienced much self-validation.
So when I tell them “hey, we have automatic checkin validation” it just doesn’t mean anything to them. “You mean you weren’t doing that already? Isn’t that, like, a part of your job?” Until it actually saves the day, that penny remains spinning.
A more producer-friendly example might help the penny drop: our in-game map shows little overhead images of each building. Those are static icons generated by a tool that sat somewhere on Rickb’s Mac. The source is sitting in a little-known folder in SourceSafe. Getting the icons updated meant getting Rick to do a full pull of data (from SourceSafe, that’s painful), update the icon tool to the latest code (it isn’t built with the main project), build the icons, update the data and run into some anomaly or other. That has to go back to the artists or another coder, go through another iteration, and finally, when he’s built the icon set, he has to integrate it back into the data and check it back in, and Gophur has to remember to pull it over to the release data set… A laborious process that then requires the additional time of manual inspection. At every step, it uses more time.
In a sense, the “make building icons” utility isn’t a tool … it’s a rivet. And rivets require drilling, placing and welding.
I think when I get something like that automated, and if I can get something like Doc’s post-editing terrain preparation process automated, the penny might drop. But when someone has become deadened to a particular pain, it can be really difficult persuading them that it’s worth the hour’s time to soothe the ache and experience life without the numbness again… ;)