Steve sent me a PDF by Tim Sweeney of Epic Games entitled "The Next Mainstream Programming Language: A Game Developer's Perspective". He commented, "While Sweeney doesn't exactly embrace Haskell, he seems certain that some variant will prevail".
Sweeney's 67-page presentation is a worthwhile read. It summarises the logistics of a multi-player online game: around 250K lines of highly object-oriented, garbage-collected C++, with associated scripts and data files. Object state evolves with gameplay. Typically there are around 1,000 classes with 10K objects active during play, and each event touches about 5 objects on average.
Numerical computation (for intersection detection, graph traversal, etc.) uses around 5 GFLOPS. Shading is inherently data parallel and is performed on the GPU using a custom language (HLSL); it uses around 500 GFLOPS.
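As an aside, "data parallel" here just means that each pixel's shade is a pure function of that pixel alone, with no dependencies between pixels, so the work maps trivially across processors. Here's a toy Haskell sketch of the shape of it (the shade function is made up by me; in practice this runs as HLSL on the GPU):

-- A minimal sketch of data-parallel shading: one pure function per pixel.
-- The shade function is illustrative only.
import Control.Parallel.Strategies (parMap, rseq)

type Pixel = (Int, Int)

-- Purely a function of its own pixel: embarrassingly parallel.
shade :: Pixel -> Double
shade (x, y) = fromIntegral (x * y) / 255.0

-- Map the shader over the pixels, evaluating list elements in parallel.
render :: [Pixel] -> [Double]
render = parMap rseq shade

main :: IO ()
main = print (sum (render [(x, y) | x <- [0 .. 99], y <- [0 .. 99]]))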
The resultant code is performance-sensitive and requires a highly modular design of reliable code. Sweeney comments that C++ is poorly equipped for concurrency. This is borne out by my own experience in the early nineties with RogueWave (remind me to tell you about the eek! bug one day). A truism is that analysing performance isn't a simple matter of identifying "hot spots" and optimising that code, or dropping into assembly language to get around an issue; as Sweeney notes, that is rarely how it plays out. Event timing and handling, data structure construction and execution paths may all have to be revisited, at a real cost to productivity. At the end of the day, you have to trade performance against productivity.
Sweeney then proceeds to deconstruct C++ as a language and object orientation as a principle. As you've probably found yourself, sometimes the base class has to change, leading to significant refactoring. In particular, he slates the inability of the C++ compiler to handle dynamic failure, contrasting how Haskell handles this with ease. C# also comes in for a wry observation: it exposes 10 integer-like data types, none of which are those defined by Pythagoras...
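To make the dynamic-failure point concrete, here's a minimal Haskell sketch of my own (not from the slides): the possibility of failure is recorded in the type, so the compiler forces the caller to deal with the missing-object case rather than leaving it to blow up at runtime. The game-object lookup is hypothetical.

-- A potentially failing lookup whose failure is visible in the type,
-- unlike a C++ pointer that may silently be null.
import qualified Data.Map as Map

lookupObject :: Int -> Map.Map Int String -> Maybe String
lookupObject = Map.lookup

describe :: Int -> Map.Map Int String -> String
describe oid world =
  case lookupObject oid world of
    Just name -> "Found " ++ name
    Nothing   -> "No such object"   -- the "dynamic failure" path, handled statically

main :: IO ()
main = putStrLn (describe 42 (Map.fromList [(1, "player"), (42, "crate")]))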
Pushing Haskell, he hypothesises that 80% of CPU effort could be parallelised using Haskell's ST (strict state thread) monad and, further still, that programmer "hints" can be used to guide parallelisation, stating that Haskell's monadic nature is a natural fit for parallelisation; imperative programming is perhaps the "wrong trousers".
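A minimal sketch of both ideas, assuming the standard Control.Monad.ST and Control.Parallel modules; the workload (summing two halves of a list) is mine, purely for illustration:

import Control.Monad.ST (runST)
import Data.STRef (newSTRef, modifySTRef', readSTRef)
import Control.Parallel (par, pseq)

-- Locally mutable state, sealed inside runST: pure from the outside.
sumST :: [Int] -> Int
sumST xs = runST $ do
  acc <- newSTRef 0
  mapM_ (\x -> modifySTRef' acc (+ x)) xs
  readSTRef acc

-- Programmer "hints": par sparks one half for parallel evaluation,
-- pseq forces the other half before combining the results.
parSum :: [Int] -> Int
parSum xs = a `par` (b `pseq` (a + b))
  where
    (lo, hi) = splitAt (length xs `div` 2) xs
    a = sumST lo
    b = sumST hi

main :: IO ()
main = print (parSum [1 .. 100000])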
As I found to my cost, and noted in my "Why events are a bad idea" post, "shared state concurrency is [indeed] hopelessly intractable"; I too got in a mess with manual synchronisation. Sweeney also suggests, as the quoted paper does, that message passing is the recipe for high performance because, when combined with software transactional memory and multiple threads, the overhead of synchronisation is bearable when the object-population hit rate is small (~5 per 10,000 per message).
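For flavour, here's a minimal sketch of the software-transactional-memory style he has in mind, using the standard Control.Concurrent.STM API; the "health" object and damage events are hypothetical:

import Control.Concurrent (forkIO, threadDelay)
import Control.Concurrent.STM (TVar, newTVarIO, readTVarIO, atomically, modifyTVar')
import Control.Monad (replicateM_)

-- Each event touches only a handful of objects; wrapping the touches in a
-- transaction retries on conflict instead of relying on manual locking.
applyDamage :: TVar Int -> Int -> IO ()
applyDamage health dmg = atomically (modifyTVar' health (subtract dmg))

main :: IO ()
main = do
  health <- newTVarIO 100
  -- Two "event handlers" hitting the same object concurrently.
  _ <- forkIO (replicateM_ 10 (applyDamage health 1))
  _ <- forkIO (replicateM_ 10 (applyDamage health 2))
  threadDelay 100000            -- crude wait for the forked threads (sketch only)
  readTVarIO health >>= print   -- expect 70 once both have run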
The article then concludes with his musings on the next generation language. All-in-all, a nice empirical paper from the coal face.
Next, I'll cast my eye on Erlang - there's buzz there too from the finance crowd - but I know what my next language is going to be, and it begins with H...
Tuesday, December 18, 2007
Monday, December 17, 2007
Applied Infoviz and Knowledge Re-injection
The Infoviz toolkit is used at Project Seven by a friend of mine and his colleagues. They're working on an intelligence analysis tool which supports re-injection of explicit knowledge earlier in the categorisation/discovery chain, to guide discovery.
This is a different approach to the signature-based one I took in my paper "Community-of-interest predicated program trading", where I suggested using centroid categorisation augmented with off-center categories. The Project Seven technique relies on heuristically directed iteration (or, in lay terms, trial and error), whereas my technique relies on visualisation of the categorisation centroid. I think both approaches have merit and would produce good results.
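For anyone unfamiliar with centroid categorisation, here's a minimal Haskell sketch of the basic idea; the vector representation and the example categories are made up, and the off-center augmentation and visualisation from the paper aren't shown:

import Data.List (minimumBy)
import Data.Ord (comparing)

type Vec = [Double]

-- Centroid (element-wise mean) of a set of document vectors.
centroid :: [Vec] -> Vec
centroid vs = map (/ fromIntegral (length vs)) (foldr1 (zipWith (+)) vs)

euclidean :: Vec -> Vec -> Double
euclidean a b = sqrt (sum (zipWith (\x y -> (x - y) ^ (2 :: Int)) a b))

-- Assign a document to the category whose centroid is nearest.
classify :: [(String, Vec)] -> Vec -> String
classify cats doc = fst (minimumBy (comparing (euclidean doc . snd)) cats)

main :: IO ()
main = do
  let sport   = centroid [[1.0, 0.0, 0.0], [0.9, 0.1, 0.0]]
      finance = centroid [[0.0, 1.0, 0.2], [0.1, 0.8, 0.1]]
  putStrLn (classify [("sport", sport), ("finance", finance)] [0.2, 0.9, 0.1])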