Thursday, May 08, 2008

CUDA - GPU Programming Framework from nVidia

Catching up on some reading this morning, I picked up a series of articles from the March/April edition of ACM Queue, in particular on CUDA, which Nvidia released last year. I read the article "Scalable Parallel Programming with CUDA", which can be found here.

The article identifies three key abstractions: hierarchical thread groups, shared memories, and barrier synchronisation. CUDA is based on C with extensions for parallelisation, much like Handel-C. The difference is that Handel-C was FPGA-based whilst CUDA targets the GPU, with its built-in floating-point capability. There are simple, straightforward code examples showing parallel threading and memory sharing, which was always an issue in my mind with FPGAs: the leap of faith with Handel-C was what to do with the data set you generated in a Monte Carlo simulation.
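The three abstractions fit together neatly in a block-level reduction. A minimal sketch (my own, not taken from the article — the kernel and variable names here are assumptions) showing the thread hierarchy, per-block `__shared__` memory, and `__syncthreads()` barriers:

```cuda
// Sum n floats into one partial total per thread block.
// Launch with 256 threads per block to match the shared array.
__global__ void blockSum(const float *in, float *blockTotals, int n)
{
    __shared__ float cache[256];             // shared memory: visible to one block only

    int tid = threadIdx.x;                   // position within the block
    int i   = blockIdx.x * blockDim.x + tid; // global index from the grid/block hierarchy

    cache[tid] = (i < n) ? in[i] : 0.0f;
    __syncthreads();                         // barrier: all loads complete before reducing

    // Tree reduction within the block; a barrier between each halving step.
    for (int stride = blockDim.x / 2; stride > 0; stride >>= 1) {
        if (tid < stride)
            cache[tid] += cache[tid + stride];
        __syncthreads();
    }

    if (tid == 0)
        blockTotals[blockIdx.x] = cache[0];  // one partial sum per block
}
```

The point is that the programmer reasons about one block of cooperating threads, and the hardware scales the grid across however many multiprocessors are present.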

This question has been perplexing developers on Nvidia's CUDA forums too, but it looks like there has been progress, as outlined in this presentation on a Monte Carlo options pricing paper on the Nvidia developer site. However, the algorithm outlined in the paper is trivial; the secret is the generation of quasi-random numbers, which enables quick convergence, followed by filtration close to the data so you're not schlepping large lumps of data around unnecessarily.
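To make the "filtration close to the data" point concrete, here is a sketch (mine, not the paper's code — the names and the assumption of a precomputed quasi-random sequence are my own) in which each thread prices one path and the block reduces its payoffs in shared memory, so only one partial sum per block ever leaves the GPU:

```cuda
// European call via Monte Carlo: one path per thread, 256 threads per block.
// quasiNormals holds Sobol/Halton draws already mapped to N(0,1).
__global__ void mcEuropeanCall(const float *quasiNormals, float *blockPayoffs,
                               float s0, float k, float r, float sigma, float t,
                               int nPaths)
{
    __shared__ float acc[256];
    int tid = threadIdx.x;
    int i   = blockIdx.x * blockDim.x + tid;

    float payoff = 0.0f;
    if (i < nPaths) {
        float z  = quasiNormals[i];                      // quasi-random normal draw
        float st = s0 * __expf((r - 0.5f * sigma * sigma) * t
                               + sigma * sqrtf(t) * z);  // terminal price under GBM
        payoff = fmaxf(st - k, 0.0f);                    // call payoff, filtered in place
    }
    acc[tid] = payoff;
    __syncthreads();

    for (int stride = blockDim.x / 2; stride > 0; stride >>= 1) {
        if (tid < stride)
            acc[tid] += acc[tid + stride];
        __syncthreads();
    }
    if (tid == 0)
        blockPayoffs[blockIdx.x] = acc[0];  // host averages these and discounts by exp(-r*t)
}
```

The host only has to sum a few thousand block totals rather than pull millions of simulated paths back across the bus, which is exactly the data problem that worried me with the FPGA approach.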

The next logical step is to make this a service. According to a quant chum of mine, the appetite is reckoned to be about 5 trillion simulations per day in the average organisation. Combine this with S3 for asynchronous storage and you have the makings of a nice little business, I think.

Wednesday, May 07, 2008

Functional Programming Creeps into Job Specs

As predicted in this article from June 2007 ("Haskell - the Next Mainstream Programming Language?"), functional programming is making its way into job specs...

http://jobview.monster.com/GetJob.aspx?JobID=70786548

http://jobview.monster.com/GetJob.aspx?JobID=70153611

http://jobview.monster.com/GetJob.aspx?JobID=70575524

http://jobview.monster.com/GetJob.aspx?JobID=67440522

http://jobview.monster.com/GetJob.aspx?JobID=70311202


"You will have previous experience of designing and building distributed, fault tolerant systems in a commercial environment. Experience of multi threading, socket programming, network programming and functional programming languages (Haskell, Ocaml, F#) will be an advantage."

"Experience with functional languages such as Haskell, Erlang, F#, Scheme, LISP, etc., are greatly appreciated."

Bit of a scattergun approach in the last example, perhaps? I wonder who writes these job specs; the bizerati analystas high on the latest marketing speak, I guess. What I'm still confused about is the insistence on C++, with its late binding and poor library coverage (compared to Java). As illustrated by this graph from the paper below, C++ is slower than C, so why would you want to use it when speed is the ultimate criterion? Beats me.

An empirical comparison of C, C++, Java, Perl, TCL and REXX for search/string processing

I'm also bemused at the use of C#, in light of the recent debacles at the LSE and TSE.

One wonders who is in charge of algo and program trading strategy. I do hope they realise that the advantages of a monadic language are not without performance implications, and that without stream fusion and massively multi-core processors (with FPUs) the performance gains they seek are going to be rather elusive. Then there's the data issue, and you have to crack that particular nut. Here's a clue: the answer is not XML or any of its bloated siblings.