Emlyn Tech

September 21, 2009

NSIS: Open Source Installer for Windows

Filed under: Uncategorized — Emlyn @ 11:47 am

Since I crossed over to the Ubuntu side, I’ve fallen in love with the Debian repo system. However, I still need to create Windows installers, primitive and sad as they now feel 🙂

I’ve been using the Visual Studio Setup & Deployment projects for ages. They’re pretty damn inflexible, but solid, and the integration into the build is a lovely thing. Great for in-house deployment. However, they’re seriously limited, and you can only build them using the dev environment, albeit from the command line, which means you need to install Visual Studio on the build server.

A solid open source msi creator would be awesome. I think I’m at the point where I’d be willing to put in the time to learn something a little arcane, if it gives me more control. The tool for the job looks like NSIS. There’s even a gui for it.


Making Mantis on Ubuntu talk to SQL Server

Filed under: Uncategorized — Emlyn @ 11:16 am

I had occasion recently to move a working Mantis installation from Windows+IIS to Linux+Apache2.

The Linux box is an Ubuntu Jaunty (9.04) server.

I was happy with the slightly old version of Mantis from the standard repositories (1.1.6), as this is newer than the version running on Windows anyway.

So installation was done like this:

sudo apt-get install apache2
sudo apt-get install mysql-server
sudo apt-get install mantis

I did mysql and mantis as separate steps, because mysql really needs to be installed and running before you start installing mantis.

So, easy.

Except, the Windows Mantis was on SQL Server. There’s no straightforward way to move the Mantis database from SQL Server to MySQL.

What is possible is to leave the database in SQL Server, and point the Linux+Apache Mantis at it. This was a fine solution in my situation, given there were SQL Server boxen around for other reasons at any rate.

So how do you do it? Tricky, but not bad if you know what to do. I won’t try to explain it here, but this guy gives you all the gory details (major league hat tip here!):


September 2, 2009

Fabulous PC Repair Flowcharts

Filed under: Uncategorized — Emlyn @ 1:41 pm
Power Supply Failure Diagram


Unbelievably detailed PC hardware troubleshooting flowcharts, brilliant. The more tightly we network ourselves, the more our culture is defined by the heights that all the monomaniacs can reach.

August 19, 2009

Job Processing Engine

Filed under: Uncategorized — Emlyn @ 11:11 pm

I blogged about creating an online mp3 to youtube service on my main blog, point7. I’ve been thinking about it a lot, so here’s a start at some design for it.

The basic design is that we have a website, which presents the service. However, the real work is done by one or more Job Processing Engines, which is the subject of this post.

A Job Processing Engine runs on a Linux box, and is comprised of a webservice and a daemon. The webservice is the way the website talks to the engine. The daemon performs jobs that can’t be kicked off by calls to the webservice (eg: notification of startup and shutdown to the website, and scheduled job processing).

As ever, I want to approach this stuff incrementally. The smallest useful piece looks like this:

– Job Processing Library

This is a framework for processing jobs. It will have a set of interfaces (IJob and IJobProcessor).

IJob:
- string JobID; unique id for the job, probably a guid.
- JobState State; // {Created, Started, InProgress, Success, Failed}
- DateTime LastStateChange; last time the state changed
- int ProgressAmount; 0 to 100, or -1 for unknown
- string ErrorMessage; an error message in case of failure
- int ResultCode; a result code, 0 for success, >0 for the error (not sure about bothering with this)
- void Start(); moves the job from Created to Started, and possibly to InProgress
- void Cancel(); moves the job from Started or InProgress to Failed (cancelled)

IJobProcessor:

- int CreateJob(string aJobID, string aJobType, string aJobDetails, string aCallback, out string aErrorMessage); // returns result code
    // aJobID must not exist, aJobType must make sense, aJobDetails must be for aJobType. State of new job is Created.
- int GetJob (string aJobID, out IJob aJob, out string aErrorMessage); // just a copy of the job
- int StartJob (string aJobID, out string aErrorMessage); // job must exist, starts it. Returns result code.
- int GetProgress (string aJobID, out int aProgress, out string aErrorMessage ); 
- int CancelJob (string aJobID, out string aErrorMessage); 
- int DeleteJob (string aJobID, out string aErrorMessage); // completely delete all trace of a job
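As a sketch, the member lists above might look like this as C# interfaces (the JobState enum shape and the exact parameter layout are my assumptions where the lists are ambiguous):

```csharp
using System;

public enum JobState { Created, Started, InProgress, Success, Failed }

public interface IJob
{
    string JobID { get; }             // unique id for the job, probably a guid
    JobState State { get; }
    DateTime LastStateChange { get; } // last time the state changed
    int ProgressAmount { get; }       // 0 to 100, or -1 for unknown
    string ErrorMessage { get; }      // an error message in case of failure
    int ResultCode { get; }           // 0 for success, >0 for the error
    void Start();                     // Created -> Started (possibly InProgress)
    void Cancel();                    // Started/InProgress -> Failed (cancelled)
}

public interface IJobProcessor
{
    int CreateJob(string aJobID, string aJobType, string aJobDetails,
                  string aCallback, out string aErrorMessage);
    int GetJob(string aJobID, out IJob aJob, out string aErrorMessage);
    int StartJob(string aJobID, out string aErrorMessage);
    int GetProgress(string aJobID, out int aProgress, out string aErrorMessage);
    int CancelJob(string aJobID, out string aErrorMessage);
    int DeleteJob(string aJobID, out string aErrorMessage);
}
```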

To begin with we need two implementations of IJob, which are EncoderJob (calls mencoder), and YoutubeUploaderJob (uploads a video to youtube). Each of these implementations requires its own JobDetails (set via CreateJob), which is essentially a serialised version of the job.

A base job class could be implemented as a statemachine. That would allow the inherent state to be handled.

The subclasses could then just implement the bits that are specific to them; actually doing the work, and figuring out how to report progress.

The JobProcessor class is a good candidate to use for implementing the webservice, with similar methods. It’s designed to be webservice friendly. I’m thinking that it can actually hold references to instances of all “running” jobs (all jobs that anyone has asked about basically). Oddly enough, because there’s nothing scheduled here, we can have the entire thing running from the webservice. As long as all methods that talk to jobs are essentially asynchronous, that’ll be fine.

There’ll need to be some kind of file upload service, to allow uploading of files (required before our jobs can work on them!)

If processing is interrupted (say the machine reboots), the jobs will not restart. I might just leave this as an issue for now; it sounds like we need something on machine startup (in the daemon?) to look for unfinished jobs and pick them up.

Getting this going would be a nice start. The next pieces after that would involve calling back to the master website (which will of course also require something at the master website that they can call). First, there is an “I’m alive” function, handled by a daemon. On startup it would call the website to tell it that this Job Processing Engine is up, and on shutdown the converse. This allows the master website to know that the engine is available to process jobs. Second, the JobProcessor should be able to call the master website when significant progress occurs (more than X seconds pass and progress has changed since last call?), for any job in the InProgress state. It would report the progress. It would also call to report completion (success or failure).

August 10, 2009

Error handling in the StateMachine

Filed under: Uncategorized — Emlyn @ 11:10 pm

This follows from the previous post, and is a reply to Serendipity Seraph.

In a previous incarnation of this idea I did indeed have some error handling, but I was never able to quite get it to feel like it belonged. It seemed wrongly conceived. My practice in these situations is to remove the feature or leave it out of new versions, which is what I did here, and wait to see if it wants to come back in.

After doing some more thinking about this, I came to the conclusion that error handling is actually a large part of what state machines are all about in the first place. You should enumerate all the different things that can happen (ie: conditions that can occur), and have states for handling all of these. So, there is an argument for no special error handling; unexpected occurrences mean your state machine is incomplete.

(btw in the past I also had a handler for transitions between states, allowing checking things, and diverting transitions elsewhere, but that too turned out to be redundant, and the whole thing feels better for its omission.)

An example with a database connection failing: Database connections don’t tend to tell you when they die, rather you are going about your business happily, executing queries, then blam, out of the blue, a failure. In this state machine environment, this kind of synchronous error is best handled by a try/catch and raising a condition in the OnNewState handler, eg:

   try
   {
       // ... code for doing things on entering states,
       // including a line that hits the database:
       connection.ExecuteSql(someSql); // or something like that, you get the drift
       // ... more stuff, but we don't get this far
   }
   catch (DBException dbex)
   {
       // raise the condition; the transitions you've defined for
       // DBConnectionFailure take it from here
       _stateMachine.RaiseCondition(DBConnectionFailure);
   }

And of course you need to have transitions on DBConnectionFailure in your statemachine for every state where you don’t want to just ignore it. Note that what different parts of the machine need to do in case of DBConnectionFailure can be quite different.

A good contrast is where failures happen in an asynchronous manner. An example I recently worked on is where a modem drops out (loses Carrier Detect) (Yes, it’s appallingly old technology, but you get that at times). What happens in .net is that an event handler fires telling you that the pin state has changed for your serial connection. In the handler, you simply raise your condition (eg: _stateMachine.RaiseCondition(ConnectionLost)) and that’s all. The state machine will take it from there, assuming you’ve correctly set up your state machine to deal with ConnectionLost in all relevant cases.
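As a sketch, that handler might look like this using .NET’s SerialPort (the _serialPort and _stateMachine fields and the ConnectionLost condition are assumed to be defined elsewhere):

```csharp
// Raising a condition from an asynchronous serial port event:
// when Carrier Detect changes and has dropped, tell the state machine and stop.
_serialPort.PinChanged += (sender, e) =>
{
    if (e.EventType == System.IO.Ports.SerialPinChange.CDChanged
        && !_serialPort.CDHolding)
    {
        _stateMachine.RaiseCondition(ConnectionLost); // the machine takes it from here
    }
};
```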

I guess I could add some very simple support for error management, eg: a built in condition called UnexpectedErrorCondition, which the state machine would raise if the OnNewState handler ever throws an unhandled exception. You could have explicit transitions for this condition, but if you haven’t specified any, the machine would move to some specified error state (specified in the constructor just like Start and Stop are).

Another approach here would be setting up States hierarchically. In an hierarchical arrangement, when we look for transitions from state to state for a given transition, we would walk up the tree toward the root (ie: can’t find a transition for state x? Try x.parent, etc). This way “error handling” (ie: dealing with conditions which signal error conditions that can happen any time) can be dealt with at a high level grouping state, rather than having to be done explicitly at every state in the machine. I think that has promise, actually; currently, you do have to do a lot of cut and pasting to set up handling of error conditions explicitly in every state.
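A sketch of what that hierarchical lookup might look like, assuming a hypothetical Parent property on State (not something the current library has):

```csharp
// Walk up the state tree looking for a transition for this condition.
// A grouping state near the root can supply shared error-handling edges
// for all of its descendants.
State FindNextState(State aCurrent, Condition aCondition)
{
    for (State lState = aCurrent; lState != null; lState = lState.Parent)
    {
        State lNext;
        if (_transitions.TryGetValue(new Pair<State, Condition>(lState, aCondition), out lNext))
            return lNext;
    }
    return null; // no transition anywhere up the tree: ignore the condition
}
```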

July 26, 2009

A General State Machine in C#

Filed under: Uncategorized — Emlyn @ 4:12 pm

I’m working toward some really nice tools for automated build and deployment of software, which I’ve written about previously both here and on point7. I’ve only just started this in earnest, so I picked what feels like a manageable subgoal; build a tool that shows you red lights and green lights re: builds and unit tests, in the dev environment. See this post about DevBuilder and LocalBuild. At the end of that post I picked on the sub task of change detection (specifically for detecting changes to files) as my first generally useful piece.

But in the immortal words of Dav Pilkey, before I can tell you that story, I have to tell you this story. This story is the General StateMachine.


A State Machine is a useful formalism for dealing with systems with a non-trivial dynamic behavioural component. That is, anything where you are coding up an “engine” or a “worker thread” or a “service” to be in charge of some piece of a process over time, in an active way.

An example is a windows service. Any windows service is coded in terms of a state machine; you have a state (Stopped, Started, Paused, Starting, Stopping, others?) and a statically defined set of rules about how you get from one state to another, eg:

Windows Service State Machine Diagram

Windows Service State Machine Diagram

This diagram is simplified, leaving out particularly some error conditions.

The blue rectangles are States. Under each state there is text describing what to do on entry to that state. The Arrows are transitions from one state to another, which happen when a certain condition occurs, eg: a command is received, or something else happens inside the program or in the environment (in this case, something the program was trying to do finishes, succeeding or failing).

Let’s take the example of a windows service to monitor a file for changes. We start in the “Stopped” state. The system boots up, and instructs this service to start (sends the START command to it). It enters the Starting state, which requires it to perform initialization. It loads a config, constructs some objects to perform monitoring based on those objects, and succeeds. Success means it transitions to the Started state, and commences runtime processing, which in this case is waiting for notification of file changes, and informing the user (sending an email, something like that?) when they occur. Later on, the user pauses the file monitoring service (sends the PAUSE command). It transitions to the Paused state, where it tells the monitoring components to stop monitoring temporarily. The user subsequently sends the CONTINUE command, and the service transitions back to the Started state, recommencing change monitoring. Later still, the system is to shut down, and sends the STOP command to the service. It transitions to Stopping, and commences cleanup. When it finishes cleanup, whether or not that succeeds, it transitions to Stopped, where it does nothing. The system finishes shutting down.

Other examples might be a stateful engine for a communication protocol like BitTorrent or Jabber, a user session management process in a web server, or a video playback mechanism in a media player. Stateful mechanisms with dynamic behaviour over time are everywhere.

It’s common to implement these mechanisms by encoding the state using collections of boolean variables, or ad hoc created state variables, in a custom way for each such job. We know the theory of state machines, and keep it in mind as we implement these engines, but using a custom approach every time leads to cutting corners and bad practices.

I come up against this requirement all the time, in professional and personal projects. The latest instance of this is in change detection for DevBuilder. I need a process to stay awake, watching for changes in a folder hierarchy, which has state much as shown in the example above. In fact, it may need to do something slightly more complex, given that it is monitoring multiple folders, and may need to be able to manage that monitoring at run time on the fly.

So, I’ve decided to create and use a State Machine library (a c# assembly) which will introduce the state machine formalism directly into my code.

Where’s the code?

The wandev.StateMachine.core assembly is part of my Wandering Developer codebase. It lives here on SourceForge:

(this project no longer just does what it was initially created to do, it’s becoming my general open source C# library’s repository)

You can directly browse the assembly’s source at the url below. It’s quite a nice way to look at the code; nicely formatted and syntax coloured.


This article refers to Revision 74 in the repository.

Overall design

To make a usable general statemachine, we firstly need to be able to define the machine’s state diagram (eg: the diagram above). This diagram is a directed graph, with States as the nodes and Conditions (what triggers transition to a new state) as the edges. We also need a way to represent States and Conditions. We need a way to run custom code when we enter a new State. And we need a way to signal to the statemachine that a Condition has occurred.

States and Conditions

For our first foray into code, let’s look at representing States and Conditions.

Really they are just labels. We could easily just represent them as Strings, and be done with it. eg: States could be “Stopped”, “Starting”, “Started”, “Stopping”, “Paused” and Conditions could be “Start”, “Stop”, “Pause”, “Continue”, “Success”, “Failure”.

However, this is pretty weakly typed, and begs for us to get States and Conditions mixed up, causing mayhem. Instead, I’ve gone for classes State and Condition, which are really just simple wrappers around a string (“StringName”). Because they share most of their implementation, I’ve made them inherit from an abstract base class, called StringNameBase. They’re not really semantically related, so this is a poor reason to use inheritance, but I’ll just go easy on myself and mark this as something to split up later if it causes trouble.

What StringNameBase primarily does is provide a read-only property StringName, which is the name of the condition or state; implement value equality, so that equality tests do case-insensitive string comparison; and implement (or force its descendants to implement) Clone(), so we can make copies of these objects easily.

The Clone() is particularly important. I want to be able to use these objects like value types. We are in a heavily multithreaded environment, so if a caller wants to know what the current state is, for instance, or some internal code wants to work with states or conditions, concurrency becomes a lot easier if we work with copies of shared objects (lock-copy-unlock) rather than trying to share the same instances between threads, and between internal StateMachine code and external code.
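For instance, a current-state accessor might be sketched like this (the _lock and _state fields are my assumptions, not necessarily what the library does internally):

```csharp
// Lock-copy-unlock: callers get their own copy of the state,
// so internal and external code never share the same instance across threads.
public State CurrentState
{
    get
    {
        lock (_lock)
        {
            return (State)_state.Clone();
        }
    }
}
```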

State Machine definition and the constructor

The state machine definition is a directed graph as above. I’ve chosen to implement it as the following type:

Dictionary<Pair<State, Condition>, State>

What you say?!?!

This is a Dictionary (Mapping in java? Think hash table), where the key is a (State, Condition) pair, and the value is a State. The key represents a condition occurring when you are in a particular State, and the value tells you which state to go to in that case.

This will be used when a condition occurs. We’ll get the current state, pair it with the condition, use that as a key to look up the dictionary, and transition to the state given by the value found.

(What if we find no value? We do nothing. eg: If the service is “Started” and we get the “Start” command, we throw that command away.)

I’ve also chosen not to provide the state machine with an explicit list of States and Conditions. We’ll just use what’s in the graph (ie: the dictionary) as a definitive list. So this should be enough for the constructor.

Except… we have a little bit of extra information required. Firstly, what should the first state be for the machine? That’s the Start state (not to be confused with the “Start” condition; in our example above the start state would be “Stopped”!). Also, if the calling code needs to finish with the state machine altogether and dispose it, then the machine would like to be in a state appropriate for that. We want to define a Stop state (again, not to be confused with the “Stop” condition).

So, our constructor looks like this:

public StateMachine(Dictionary<Pair<State, Condition>, State> aTransitions, State aStart, State aStop)
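As a sketch, here’s how the windows service example above might be defined (the State and Condition locals are assumed to have been constructed already, eg: new State("Stopped")):

```csharp
// Directed graph: (current state, condition) -> next state.
var lTransitions = new Dictionary<Pair<State, Condition>, State>();
lTransitions.Add(new Pair<State, Condition>(Stopped, Start), Starting);
lTransitions.Add(new Pair<State, Condition>(Starting, Success), Started);
lTransitions.Add(new Pair<State, Condition>(Starting, Failure), Stopping);
lTransitions.Add(new Pair<State, Condition>(Started, Pause), Paused);
lTransitions.Add(new Pair<State, Condition>(Paused, Continue), Started);
lTransitions.Add(new Pair<State, Condition>(Started, Stop), Stopping);
lTransitions.Add(new Pair<State, Condition>(Paused, Stop), Stopping);
lTransitions.Add(new Pair<State, Condition>(Stopping, Success), Stopped);
lTransitions.Add(new Pair<State, Condition>(Stopping, Failure), Stopped);

// "Stopped" is both the start state and the stop state in this example.
var lMachine = new StateMachine(lTransitions, Stopped, Stopped);
```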

Running custom code on entry to a new state

This has been done as a simple event:

public event EventHandler<NewStateEventArgs> OnNewState;

where NewStateEventArgs includes the following property:

public State NewState;

The simplest way to provide your custom State code is to hook this event, and inside the handler test for the state using e.NewState.

First you need to hook the event. You should do this before the state machine starts operating, so there is some metastate to any statemachine. You construct the statemachine, then you hook the event, then you call the Start() method to get it moving (this puts the machine into the state specified by aStart in the constructor, and calls OnNewState for that state).


_stateMachine = new wandev.StateMachine.core.StateMachine(ltransitions, lstart, lstop);
_stateMachine.OnNewState += new EventHandler<NewStateEventArgs>(_stateMachine_OnNewState);

Then, you need to implement the handler. Here's an example appropriate for the service example:

void _stateMachine_OnNewState(object sender, NewStateEventArgs e)
{
    if (e.NewState.Equals(Starting))
    {
        // kick off initialisation here; this should be an asynchronous call,
        // expect event handlers elsewhere to deal with success or failure
    }
    // ... etc ...
}

Note: OnNewState is called synchronously by the State Machine. It has to be, because it can’t proceed with more conditions until the state entry code has completed. This means that, inside this handler, you mustn’t do anything time consuming, especially you mustn’t sit in a loop waiting for anything (if you find yourself doing this, you need more states in your machine!). If you need to do something potentially lengthy, put it in its own method and call that method asynchronously, relying on the EndInvoke() handler to raise a condition which will transition you to the next state.
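A sketch of that pattern for the Starting state, using the classic delegate BeginInvoke/EndInvoke approach (DoInitialisation is a hypothetical method; Success and Failure are the conditions from the example):

```csharp
if (e.NewState.Equals(Starting))
{
    // Push the lengthy work onto the thread pool, so OnNewState returns quickly.
    Action lWork = DoInitialisation;
    lWork.BeginInvoke(ar =>
    {
        try
        {
            lWork.EndInvoke(ar);                    // rethrows any exception from the work
            _stateMachine.RaiseCondition(Success);  // Starting -> Started
        }
        catch
        {
            _stateMachine.RaiseCondition(Failure);  // Starting -> Stopping/Stopped
        }
    }, null);
}
```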

Raising a Condition

So how does this machine know when conditions have occurred? You raise them.

The state machine has the following method for this purpose:

public void RaiseCondition(Condition aCondition)

This method is threadsafe (as is the entire public interface of the state machine). So from anywhere, any handler, or even from within the OnNewState handler, you can raise a condition using RaiseCondition().

The state machine uses a queue to manage these conditions, and processes them asynchronously as soon as it can.

Special Conditions: StopCondition, ProceedCondition, TimerCondition

There are three special conditions defined for every statemachine. If you need to use them before the statemachine is constructed (ie: when creating the transition graph), there are statically defined constant strings for each of their StringNames available (StateMachine.StopConditionStringName, StateMachine.ProceedConditionStringName, StateMachine.TimerConditionStringName). You can use these like so: (new Condition(StateMachine.StopConditionStringName)) can stand in for _stateMachine.StopCondition, and similarly for the others.


StopCondition

You mustn’t ignore this condition. It is raised when you call Dispose(). Every state except your Stop state should have a transition for this condition, which leads irrevocably, if not directly, to the Stop state. Currently Dispose raises this condition and then waits until the Stop state is reached before exiting, which is rather prone to hanging (if you never get there); I’ll work on this in future versions (with, eg, a timeout).


ProceedCondition

This is just a useful condition for states where there is only one edge out, or otherwise some default behaviour, for which “Proceed” is a good description. It just saves you defining another condition. You need never use it if you don’t want to.


TimerCondition

This should really be called TimeoutCondition! I’ll change this in the future.

It is really common in StateMachines for one of the edges to be a timeout edge, so I’ve provided a useful timer for states. It’s implemented using System.Threading.Timer. You use it by defining a timeout edge in your graph for any applicable states, using this condition as the condition, then in the OnNewState handler call SetTimer() in your processing for that state, passing the length of the desired timeout in milliseconds. If the specified timeout elapses and you haven’t transitioned to another state, this condition will be raised and the timeout transition will be invoked. Otherwise, on transition to any state the timer is cleared.

Here’s an example:

// in the transition dictionary construction code
Condition lTimer = new Condition(StateMachine.TimerConditionStringName);
_transitions.Add(new Pair<State,Condition>(Starting, Success), Started);
_transitions.Add(new Pair<State,Condition>(Starting, Failure), Stopped);
_transitions.Add(new Pair<State,Condition>(Starting, lTimer), Stopped);

// here's the OnNewState handler
void _stateMachine_OnNewState(object sender, NewStateEventArgs e)
{
    if (e.NewState.Equals(Starting))
    {
        // kick off initialisation asynchronously; expect event handlers
        // elsewhere to raise condition Success or Failure

        // and give it a ten second timeout
        _stateMachine.SetTimer(10000);
    }
    // ... etc ...
}

To Be Continued

I’m out of puff, I’ll continue this later. Subsequent posts will talk about the internals of implementation (or just look at the code, it’s right there!), and give a serious concrete example of use of the StateMachine.

July 21, 2009

Build Automation: LocalBuild #1

Filed under: Uncategorized — Emlyn @ 11:43 pm

I’ve been thinking more about build automation. It needs doing.

In a previous post, I outlined my idea for architecture; splitting monitoring of source control from the piece that actually does the work of building. So there is a piece which automates syncing repositories to your local filesystem (call this RepoSync), and a separate piece which performs builds based on code on the local filesystem, and might do that automatically based on changes detected on the local filesystem (call this LocalBuild).

I’m one guy, so I have to make things like this slowly in my spare time. An iterative approach is really the only feasible route (and how I like to work in any case). Thinking about these two pieces, I think LocalBuild is more useful on its own; in any case, I can simulate RepoSync with CruiseControl.net, which makes it not urgent.

I don’t want to build something abstract, I’d rather pick one of the projects suggested by the use cases in the previous article. I like the look of the tool for consistent building in the development environment. Let’s call it DevBuilder.

DevBuilder Description: The idea is to be able to configure building all projects in a single place, then have a tool that will show you what, if anything, needs building, and which allows you to execute those builds. Automatically firing off builds in the dev environment is probably a very bad idea (seeing as editing files will cause a build to start, no no no).

To configure the local build, you’ll need an editor of some kind. Note: I don’t want a giant hand-edited xml file like cruisecontrol.net requires, I want a nice gui editor, which works with a file that might be xml, but need never be hand-edited. Anyway, this editor will be needed for all variations of these tools.

To tell you when projects need rebuilding, you need to have some kind of monitoring app running. On reflection, a system tray app is a good way to go; it allows there to be a running app, but only when the user is logged in, and running in the context of the current user, which is what you want in development.

The monitoring app will show a red light / green light kind of display. It will contain active monitoring software to monitor the filesystem as defined in the configuration. The monitoring must be persistent; ie: it can’t just tell you when changes are detected as it runs, it must also notice, when started, if changes happened while it wasn’t running. It must then keep track of the changes, showing which projects need building. When the user chooses to build, it must perform the build. It must be able to show the results of the build, and remember whether the previous build was successful or not for the red light / green light display.

The first interesting thing here is the persistent change monitoring.

Thinking ahead, the persistent change monitoring need not just be for filesystems. There’d be a lot of uses if it were pluggable, allowing persistent monitoring of, well, whatever needs monitoring.

Let’s look at the plugin framework for change monitoring this way:
A plugin requires a config (created in editor), and it requires state (initially null).
void Start(state): this looks at the config, the state, the environment (eg: filesystem), and decides whether there is some difference between the environment and the previous environment represented by the state. It then begins active monitoring to detect any subsequent changes.
state Stop(): this stops active monitoring, and returns a state, representing the last observed environmental state (this will be input to the start() method in the future).
bool ChangesOutstanding(): returns whether there have been changes detected
object ChangeDetails(): returns plugin specific information on the actual changes which have occurred.
state UpdateState(): Updates the state in the plugin to what is correct for now, and returns that state. This is how you register that changes have been noticed. (is this needed?)
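The plugin surface above might be sketched as a C# interface like this (the interface and PluginState type names are my inventions; the method list is as described):

```csharp
// A persistent change-monitoring plugin. PluginState is whatever opaque
// snapshot of the environment the plugin needs to survive restarts.
public interface IChangeMonitorPlugin
{
    void Start(PluginState aState);  // compare aState with the environment, then monitor actively
    PluginState Stop();              // stop monitoring; return the last observed environmental state
    bool ChangesOutstanding();       // have changes been detected?
    object ChangeDetails();          // plugin-specific details of the changes that occurred
    PluginState UpdateState();       // mark changes as noticed; returns the fresh state
}
```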

The persistent filesystem monitor would need a mechanism for looking at the filesystem and creating a table of hashes or some such, which is the state. It needs to be able to compare two states and figure out what has changed. It will use the FileSystemWatcher in .net to notice changes in an active manner.
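A sketch of that table-of-hashes state and the comparison between two snapshots (the SHA1-over-files choice and the method names are my assumptions):

```csharp
using System;
using System.Collections.Generic;
using System.IO;

public static class TreeHasher
{
    // The "state": a map from file path to a hash of that file's contents.
    public static Dictionary<string, string> HashTree(string aRoot)
    {
        var lResult = new Dictionary<string, string>();
        using (var lSha1 = System.Security.Cryptography.SHA1.Create())
        {
            foreach (string lFile in Directory.GetFiles(aRoot, "*", SearchOption.AllDirectories))
            {
                using (var lStream = File.OpenRead(lFile))
                    lResult[lFile] = BitConverter.ToString(lSha1.ComputeHash(lStream));
            }
        }
        return lResult;
    }

    // Files whose hash differs, plus files present on only one side, have changed.
    public static IEnumerable<string> ChangedFiles(
        Dictionary<string, string> aOld, Dictionary<string, string> aNew)
    {
        foreach (var lPair in aNew)
        {
            string lOldHash;
            if (!aOld.TryGetValue(lPair.Key, out lOldHash) || lOldHash != lPair.Value)
                yield return lPair.Key;   // new or modified
        }
        foreach (var lPair in aOld)
            if (!aNew.ContainsKey(lPair.Key))
                yield return lPair.Key;   // deleted
    }
}
```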

urgh I’ve run out of steam tonight. This will continue.

May 22, 2009

More on Automated Builds: Building in the clouds

Filed under: Uncategorized — Emlyn @ 9:11 am

It’d be great if you could get a .net oriented build service online somewhere, which you could configure to look at your repository, crank out your build(s), and put them somewhere useful.

I’m not sure how you even begin to make that useful. For a start, build servers need all kinds of custom stuff on them that things like source control don’t, eg: custom libraries.

OTOH, I guess when your build includes custom libraries, it’s just packaging them up, not running them, so it’s not as scary as allowing people to run just anything on your machines.

You could provide MSBuild-based builds without paying license fees, I think; it comes with the .NET Framework, no? OTOH, I often use the setup and deployment packages in visual studio, which require you to actually build with the development environment, so that might be hard to achieve.

What would you actually need to make something like this work, at a minimum?

– MSBuild tasks
– Some kind of msi builder, something open source would be good (maybe nsis? http://nsis.sourceforge.net)
– General tasks for email, ftp, web services, yada yada
– Possibly, an ability to have custom files not from source control that are nevertheless to be part of the build. This would be to support 3rd party components

Now one thing this doesn’t cover is the ability to run unit tests. Nothing up to this point involves running custom code (does MSBuild support custom pre & post build steps? something to look at, because that’s also custom code). This is a tricky part, but in theory a well configured security setup should mean you can allow custom code, because it can’t harm anything. Coupled with timeouts for builds that take too long (hard timeouts; blam you’re dead kinda timeouts), you could accommodate this.

Does anything like this already exist?

If it doesn’t, then why not?

Also, all the tools above are free. So the only cost here (besides labour!) is hosting space and time. Is there a way to provide the above for free, or close to free? Maybe for money for commercial stuff, free for open source and/or individuals?

btw, a trick to distinguish open source/free software projects from commercial is to make everything readable by everyone for the free stuff. If you want it private, you pay for the privilege.

May 19, 2009

A better automated build system

Filed under: Uncategorized — Emlyn @ 12:14 pm

I’m a big fan of automating builds. In non-trivial software, it becomes time consuming and error prone to build releases by hand. It’s a job for a machine. A machine can notice when you change things, and rebuild everything affected automatically, in a reliable way. It’s also a natural place to run all affected unit tests. Plus, in statically typed languages, just recompiling everything that depends on some piece of modified code will often be enough to point out problems (ie: the build breaks), but with a complex set of interrelated projects, you’ll forget to compile all dependent projects every time.

Professionally, I’m a c# coder, and I use CruiseControl.net for automating builds. I use svn for my source control. I’ve extended CruiseControl slightly, to include a straightforward way of deploying builds. My extensions take an msi to be deployed, rename it to include the version/build number, zip it up into a similarly named zip file, and ftp it to a desired location. This goes well with a stupidly simple little web app I wrote called SimpleDeployPortal, which provides a web app interface to the folder to which you deploy, and maintains an xml version file that can be consumed by automated processes to figure out what versions of what apps are available through a particular portal instance. In turn, I have some c# libraries which help you implement auto-update for your apps, based on this portal with xml versions file approach. This is described further in The Wandering Developer Build Tools, and the source is on sourceforge. I have windows installers for all of this stuff, built and deployed using these tools, but unfortunately the bit on my website that displays the builds is broken at the moment. For what it’s worth, it’s here http://emlynoregan.com/Software.aspx, I’ll fix it eventually. But I digress.

CruiseControl.net has been annoying me. It conflates some problems, causing it to be cumbersome and inflexible. The one I’m most concerned with is that it makes you define a project by defining where it is in source control (a folder), defining where it is on your local drive (another folder), and basing autobuilding on seeing a change in source control, whereupon it gets the modified files and rebuilds.

Now, automating a build is a bit of work. The resulting configuration is large, complex, and requires maintaining. So you want to get maximum value out of it.

Unfortunately, CruiseControl ties the build automation and syncing from source together, so you end up with something which only works based on the branches you have coded it to look at in source control, and which also assumes it is building in a folder where you are not also developing.

How I think it should work:

The central thing is the local copy of source code. If I have a local copy of the source defined (ie: a folder where it can go), then I should be able to automate my build based on that alone. I should be able to define projects in there, based in sub folders. I should be able to define dependencies between projects (ie: project B depends on project A, so changes to A are also changes to B). The build tool should be able to see when relevant files change in the local copy, figuring out what projects are affected, coming up with a dependency sorted list of projects that require rebuilding. The build tool should be able to actually perform that rebuild if required.
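To make that concrete, here’s a minimal Python sketch of the “what needs rebuilding” calculation: given a dependency graph and a set of changed projects, find everything affected and return it in dependency order. The project names and the `deps` structure are invented for illustration; a real tool would populate these from its config.

```python
from collections import defaultdict, deque

def rebuild_order(deps, changed):
    """deps maps project -> list of projects it depends on.
    Returns a dependency-sorted list of projects needing a rebuild,
    given the set of projects whose files changed."""
    # Invert the graph: who depends on me?
    dependants = defaultdict(set)
    for proj, needs in deps.items():
        for d in needs:
            dependants[d].add(proj)

    # Everything downstream of a changed project is affected.
    affected = set()
    queue = deque(changed)
    while queue:
        p = queue.popleft()
        if p in affected:
            continue
        affected.add(p)
        queue.extend(dependants[p])

    # Topologically sort the affected set so dependencies build first.
    order, visited = [], set()
    def visit(p):
        if p in visited:
            return
        visited.add(p)
        for d in deps.get(p, []):
            if d in affected:
                visit(d)
        order.append(p)
    for p in sorted(affected):
        visit(p)
    return order

# Project B depends on A, C depends on B; a change to A rebuilds all three.
deps = {"B": ["A"], "C": ["B"], "D": []}
print(rebuild_order(deps, {"A"}))  # ['A', 'B', 'C']
```

Note that project D is untouched: only the affected subgraph gets rebuilt, which is the whole point.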

Separately, I should be able to define the relationship between the local copy and a source repository if I so desire. I should be able to break this into repository relevant projects, likely different (less finely grained) to the build automation projects. Here, there are no dependencies. For each project I should be able to define the local folder, and what branches are available (trunk, branches 1 to N, etc). There is then a separate location in source control for each branch for the project. Also, I need to be able to define various repository get profiles, which are a list of all desired projects, along with the branch to get for each project. Also, I need a service which can maintain synchronisation for a given build profile between the repository and the local folders, by getting changes from the repository when changes appear.
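A rough sketch of what those repository get profiles might look like, as plain Python data. Every project name, folder, branch, and the svn URL scheme here is made up; it’s just to show the shape of the config:

```python
# Hypothetical project registry: local folder plus available branches.
repo_projects = {
    "core":  {"local": "/src/core",  "branches": ["trunk", "branch-1.0"]},
    "webui": {"local": "/src/webui", "branches": ["trunk"]},
}

# A get profile picks one branch per project.
profiles = {
    "production": {"core": "branch-1.0", "webui": "trunk"},
    "daily":      {"core": "trunk",      "webui": "trunk"},
}

def checkout_urls(profile, base="svn://server/repo"):
    """Resolve a profile to (local folder, repository URL) pairs,
    one per project in the profile."""
    return {
        proj: (repo_projects[proj]["local"],
               f"{base}/{proj}/{profiles[profile][proj]}")
        for proj in profiles[profile]
    }

print(checkout_urls("daily"))
```

The sync service would then just walk the resolved pairs, updating each local folder from its URL when the repository changes.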

Now these two pieces, (source control -> local copy) and (local copy -> builds) are conceptually separate, and could be used in these separate use cases:

1: Production Build Machine: Simply pair the two pieces above. You define the repository get profiles to sync with the local copy folder(s), and separately define the build projects (a directed graph of dependencies) to build. You rely on detecting filesystem changes when new source comes from the repository to make the two pieces interoperate.

2: Other Build Machines (eg: daily): Just like the production build machine, but you use a different set of compatible repository get profiles (eg: pointing at unstable dev branches of the repos). The build projects config is identical, can be used unchanged.

3: Development machine: You can use just the build projects stuff, with no repository sync. This gives you the same ability to build consistently, but based on your local changes. Also, it shouldn’t autobuild when it sees changes; it should require the developer to fire it off, but prompt them about what needs rebuilding.

4: Give me Build X: A system with a build machine and a web app. It allows users to specify a project that they want to build, and to choose the versions of the various projects in the repos to use. This machine then extracts all the code as required to somewhere appropriate, does the build (including all dependencies), and puts the result somewhere available to the web server, which can then provide it to the user.

And there are many other ways you could use this, I think.

A note: It might be a good idea to do the actual building via NAnt, that’s what it’s for after all.

That’ll do for now. But this beasty, it wants building.

December 28, 2008

The Clanking Replicators (game idea)

Filed under: Uncategorized — Emlyn @ 9:33 pm

A post to Open Manufacturing, regarding my game idea, The Clanking Replicators. A 2009 project. I think I’ll do the v0.1 in Python, and kill two birds with one stone.

Update: This is now in a wiki: http://clankingreplicators.wikidot.com

Hi again all,

Paul wrote this:
> If you are getting into Flash, like with your pong game (nice sounds), a
> cool games about open manufacturing might be nice. Anything about making
> things. I have some ideas, but you might have better ones if you just think
> about it yourself first.

Ah well I *do* have a semi-manufacturing game/sim idea, which I’m
working on writing an initial spec for, and which I’m intending to get
started on in 2009.

This is a very geeky for-programmers-only game.

Has anyone here played any of the various tank or robot battle type
games that have come and gone over the years? Ones where you design a
tank/bot (choose weapons, armour, sensors, engine, etc etc), and write
a control program for it, then set it against others in a virtual
arena?

I used to have Omega, loved it, and I played a lot of Robowar at uni
in the mac labs.

Anyway, start with this idea in your mind. But then imagine a serious variation:

– What if this was an mmo?
– What if the bots had to find resources to “metabolise”?
– What if the bots could replicate?

Then, you’d have something approaching a life simulation. That’s what
I’m thinking of.

The general idea is this:

– There is a persistent online environment, made up of a multitude of
interconnected but relatively discrete battlefields or arenas. These
environments have resources in them (stuff you can dig up, stuff in
the air, sunlight, broken bot bodies), and are mostly flat and open.
– Players can spec up bots, including providing a control program.
They can test these locally in simulators, then they can inject them
into the online environment according to certain rules (possibly
putting them somewhere isolated to begin with so they can get a chance
to sort themselves out).
– The bots are composed of hardware modules, which all perform certain
functions (weapons, comms, manipulators, sensors, central processor,
etc), and have “metabolism” requirements (eg: energy requirements,
other material requirements, and might require repairs from time to time).
– The bots can reproduce; ie: they can create new bots. This will
include providing the initial control program for the new bot, which
can be the parent bot’s own program, or something else.
– The control program should be represented as a string, or an array
of bytes, and should be self modifiable.
– Part of the environment will be “chat channels” which bots with the
appropriate comms hardware can talk on. So bots can collaborate fairly easily.
– The control hardware should support powerful high level languages.
I’m thinking that a JVM, for J2ME, might be the ticket here, with the
bot hardware being accessible through a provided class library. So
then players can use any language for the game which can compile down
to j2me code, any dev environment they like, etc.
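As a rough illustration of the kind of control-program API I mean
(this is a Python sketch, not the eventual J2ME class library;
every name in it is invented):

```python
# Hypothetical bot hardware API; class and method names are made up.
class Bot:
    def __init__(self, energy=100):
        self.energy = energy

    def scan(self):
        # In the real game this would query a sensor hardware module;
        # here it just returns a canned reading.
        return {"resource_nearby": self.energy < 50}

    def harvest(self):
        # Metabolise a found resource.
        self.energy += 10

    def step(self):
        # One control-program tick: pay the metabolic cost, then decide.
        self.energy -= 1
        if self.scan()["resource_nearby"]:
            self.harvest()

bot = Bot(energy=40)
bot.step()
print(bot.energy)  # 49: pays 1 energy to act, then harvests 10
```

The real thing would be a class library wrapping the hardware
modules, with the player’s compiled code calling into it.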

Now, when you look at an environment like the above, designed not for
a handful of simulated robots to run around in, but for really large
numbers of bots, you must immediately think “wow, that’s a lot of
processing required”. Yes!

The environment will be designed to scale to as much processing as can
be thrown at it. Think a 2D grid of connected battlefields. Over time,
each battlefield should support roughly the same number of bots, some
constant number. The processing scales, then, by the number of
battlefields growing or shrinking as resources are allocated and
deallocated. It might autoscale, trying to keep a constant ratio
between in-game time and real world time. Or, it might try to maximise
this ratio and just keep the world size constant. Or it might do a bit
of both.

The processing will be primarily based on volunteer computing. The
initial target for the world processing will be BOINC:

http://boinc.berkeley.edu/

One good thing about a game is you can make it artificially match the
constraints of volunteer computing; I aim to do this in the following ways:

1: The game is temporally segmented (you can process from time X to time Y)
2: The game is geographically segmented (each battlefield is a world
of its own. You can travel between them, but this is constrained as
described below)

Volunteer computing requires the ability to process discrete chunks,
and validate the results (you can’t trust the processing nodes). I
suggest doing this as follows:

1 – Cut processing into temporal+geographical chunks as above. One
unit of processing is for a set length of in-game time, on one battlefield.
2 – Movement between battlefields is constrained to happen only
between these chunks (ie: bots have to wait until the end of a time
segment to be moved)
3 – Within one chunk the game is *fully deterministic*.

The third point is important. The game is fully deterministic in one
chunk, and validation is performed by handing out the same chunk
multiple times, and bitwise comparing the results; they must match.
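The validation step is simple to sketch. Assuming each volunteer
returns its chunk result as raw bytes, determinism means honest
nodes produce identical bytes, so comparing digests of the results
is equivalent to the bitwise comparison and cheaper to store
(everything here is illustrative):

```python
import hashlib

def chunk_digest(result_bytes):
    """Digest of one node's result for a (time, battlefield) chunk."""
    return hashlib.sha256(result_bytes).hexdigest()

def validate(results):
    """results: raw result byte strings from different volunteers for
    the same chunk. The chunk is valid only if every digest matches."""
    digests = {chunk_digest(r) for r in results}
    return len(digests) == 1

print(validate([b"state-at-t2", b"state-at-t2"]))  # True
print(validate([b"state-at-t2", b"tampered"]))     # False
```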

Also, to constrain the work to be done, everything in game must have
an in game time cost, including basic processing. So the JVM
implementation must enforce these time costs on all instructions. It
would need to be controllable from the sim engine where, at time T,
the engine would tell it “perform your next instruction and return the
time cost” (call this C), then the engine would not ask it to perform
another instruction until time T+C.
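That engine/VM handshake might look something like this (a Python
sketch with invented names; the real thing would wrap a JVM):

```python
# Sketch of the engine driving a bot VM, charging in-game time per
# instruction. All names here are hypothetical.
def run_until(vm, end_time):
    """Drive a bot VM forward to end_time in in-game units.
    vm.step() executes one instruction and returns its time cost."""
    t = 0
    while t < end_time:
        cost = vm.step()  # engine asks for the next instruction's cost
        t += cost         # the bot may not run again until t + cost
    return t

class FakeVM:
    def step(self):
        return 3  # every instruction costs 3 time units in this stub

print(run_until(FakeVM(), 10))  # 12: the last instruction overshoots
```

A real implementation would interleave many bots, picking whichever
has the earliest "next runnable" time, but the per-instruction
charging is the core of it.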

All these chunks would be assembled at a server, and the raw output of
the sim would simply be a rather stupendous log file, available
online. It might not actually be a file; I’m just using that as a shorthand.

There need to be a LOT of tools for seeing into the result log.
These would include action visualisation tools (where you can watch
historical action in “realtime”, maybe a flash front end?), as well
as lots of reporting-type tools which can aggregate raw log
information and make it understandable: players could see how well
their bots are reproducing, all kinds of factors about resource
usage, graphs of mutations where a player has implemented a bot on
which selection can occur over time, etc.

Finally, the whole thing must be open source. GPL or BSD style
license? I’m not sure. All of the above probably. Although I’ve
written about a centralised persistent environment, anyone should be
able to make their own server setup and have their own persistent
environment. Anyone should be able to modify the clients and boinc
plugins as desired. Nothing depends on binary code with hidden source.

And, of course, I dedicate this idea in its entirety to the public
domain. Do with it as you will!
