Tuesday, December 18, 2007

Haskell - The Next Mainstream Programming Language?

Steve sent me a PDF by Tim Sweeney of Epic Games entitled "The Next Mainstream Programming Language: A Game Developer's Perspective". He commented: "While Sweeney doesn't exactly embrace Haskell, he seems certain that some variant will prevail".

Sweeney's 67-page presentation is a worthwhile read. It summarises the logistics of a multi-player online game: around 250K lines of C++, highly object-orientated and garbage-collected, with associated scripts and data files. Object state evolves with gameplay. Typically there are around 1,000 classes with 10K objects active during play, and an event touches about 5 objects on average.

Numerical computation (for intersection detection, graph traversal etc.) uses around 5 GFLOPS. Shading is inherently data-parallel, performed on the GPU using a custom language (HLSL), and uses around 500 GFLOPS.

The resultant code is performance-sensitive and requires highly modular design of reliable code. Sweeney comments that C++ is poorly equipped for concurrency. This is borne out by my own experience in the early nineties with RogueWave (remind me to tell you about the eek! bug one day.) A truism is that analysing performance isn't a simple matter of identifying "hot spots" and optimising that code - or dropping into assembly language to get around an issue - as Sweeney notes, that is never how it plays out. Event timing and handling, data structure construction and execution paths may all have to be revisited, resulting in a loss of productivity. At the end of the day you have to trade performance against productivity.

Sweeney then proceeds to deconstruct C++ as a language and object-orientation as a principle. As you've probably found yourself, sometimes the base class has to change - leading to significant refactoring. In particular, he slates the inability of the C++ compiler to handle dynamic failure, contrasting this with how Haskell handles it with ease. C# also comes in for a wry observation: it exposes 10 integer-like data types, none of which are those defined by Pythagoras...

Pushing Haskell, he hypothesises that 80% of CPU effort could be parallelised using Haskell's ST (strict state thread) monad and, further, that programmer "hints" could be used to guide parallelisation - arguing that Haskell's monadic nature is a natural fit for parallelisation, and that imperative programming is perhaps the "wrong trousers".

As I found to my cost, "shared concurrency is [indeed] hopelessly intractable" - see my "Why events are a bad idea" post. I too got in a mess with manual synchronisation. Sweeney also suggests, as the quoted paper does, that message passing is the recipe for high performance because, when combined with software transactional memory and multiple threads, the overhead of synchronisation is bearable when the object-population hit rate is small (~5 per 10,000 per message).

The article then concludes with his musings on the next generation language. All-in-all, a nice empirical paper from the coal face.

Next, I'll cast my eye over Erlang - there's buzz there too from the finance crowd, but I know what my next language is going to be - and it begins with H...


Monday, December 17, 2007

Applied Infoviz and Knowledge Re-injection

The Infoviz toolkit is used at Project Seven by a friend of mine et al. They're working on an intelligence analysis tool which supports reinjection of explicit knowledge earlier in the categorisation/discovery chain to guide discovery.

This is a different approach to the one I took in my signature-based approach in my paper "Community-of-interest predicated program trading", where I suggested using centroid categorisation augmented with off-centre categories. The Project Seven technique relies on heuristically directed iteration (or, as it's known in lay terms, trial and error) whereas my technique relies on visualisation of the categorisation centroid. I think both approaches have merit and would produce good results.



Friday, December 07, 2007

Fork/Join - The End of the Road for FPGA/GPU/ASIC Gang?

Steve W sent this article from IBM - Java theory and practice: Stick a fork in it

"One of the additions to the java.util.concurrent packages coming in Java™ 7 is a framework for fork-join style parallel decomposition. The fork-join abstraction provides a natural mechanism for decomposing many algorithms to effectively exploit hardware parallelism."

Steve says "Looks like java 7 will have built-in support for the Fork/Join pattern (i.e. similar to what Erlang has and what the Squeak guys have been talking about)"

Is this the end for the short-lived FPGA/GPU bandwagon? It could well be - with massively multi-core chips around the corner, they're a considerably more attractive option, with on-core FPUs, low-latency caches etc. It would also solve a whole lot of issues around buying and supporting esoteric hardware.

Monday, December 03, 2007

Talk: Erlang for Five Nines by Francesco Cesarini

Looks like an opportunity for a beer afterwards?

"Francesco Cesarini will be presenting the concurrent soft real time functional programming language Erlang and its rapidly growing community. He will describe why Erlang programs are generally 4 - 10 times shorter than their counterparts in Java, C and C++ achieving 99,999% availability. Ongoing research projects in the UK and overseas which will be covered, including refactoring, AI, type systems and control systems for robots. The talk will also go into why Erlang, originally invented to handle the next generation of Telecom products, has been successful in a much wider range of sectors including banking and e-commerce. "

Date: Thursday, 13th December 2007

Time: 18:30 - 20:30

Venue:
Skills Matter
1 Sekforde Street
London EC1R 0BE
Near Farringdon Rail/Tube

Registration http://skillsmatter.com/menu/85

Thursday, November 29, 2007

One Thread Per Core

Way back in 1990, when I was performance tuning Ingres on a multi-processor Sequent machine, I found that using processor affinity - i.e. tying the main process to a particular CPU - gave the best performance. Intuitively I figured that this was due to not having to context switch, but I never had the time to prove the theory.

More recently, NUMA issues have been surfacing again in the search for low latency.

A friend of mine has recently been looking at k-way sorting, and thread scheduling has come into play again. I have always been curious about the analysis of NUMA architectures but never got to spend much time on it. In the old days of single core, context had to be shared over the bus; NUMA was a great advance, introducing a high-performance inter-CPU/die channel with about 6GB/s of throughput. Now, with multicore, it's all on the die, but it appears to still be an issue, as the following results from a friend show:


The k-way merge sort looks pretty good - almost twice as fast as a mono-threaded quicksort. Here are some results for 10 million numeric (double) values (a sketch of the k-way scheme itself follows the numbers):

sort 1st part done (parallel segments) in 3672ms
sort 2nd part done in 3157ms niter=19999993
Total time for k-way merge sorting 10000000 double values=6922 ms
Total time for monoquicksorting 10000000 double values=13609 ms


Now if you look at the performance variation when sorting 2 million items, it appears that the best performance occurs when the number of parallel sorting threads equals the number of cores available on the CPU (my box is a single quad-core Xeon 5355 2.6GHz):

2 threads used:
sort 1st part done (parallel segments) in 1031ms
sort 2nd part done in 469ms niter=1999999
Total time for k-way merge sorting 2000000 double values=1562 ms
Total time for monoquicksorting 2000000 double values=1984 ms

3 threads used :
sort 1st part done (parallel segments) in 688ms
sort 2nd part done in 563ms niter=3333329
Total time for k-way merge sorting 2000000 double values=1281 ms
Total time for monoquicksorting 2000000 double values=2015 ms

4 threads used :
sort 1st part done (parallel segments) in 516ms
sort 2nd part done in 594ms niter=3999995
Total time for k-way merge sorting 2000000 double values=1125 ms
Total time for monoquicksorting 2000000 double values=2000 ms


8 threads used :
sort 1st part done (parallel segments) in 437ms
sort 2nd part done in 781ms niter=5999991
Total time for k-way merge sorting 2000000 double values=1234 ms
Total time for monoquicksorting 2000000 double values=2047 ms

16 threads used :
sort 1st part done (parallel segments) in 391ms
sort 2nd part done in 906ms niter=7999975
Total time for k-way merge sorting 2000000 double values=1313 ms
Total time for monoquicksorting 2000000 double values=2031 ms
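
For reference, here's a sketch (in Java, and not my friend's actual code) of the scheme the numbers above are measuring: one sorting thread per segment, followed by a serial k-way merge driven by a min-heap of segment cursors.

import java.util.Arrays;
import java.util.Comparator;
import java.util.PriorityQueue;

// Phase 1: sort k segments in parallel. Phase 2: merge them serially.
public class KWaySort {

    public static double[] sort(final double[] data, int k) throws InterruptedException {
        final int segLen = (data.length + k - 1) / k;
        Thread[] workers = new Thread[k];

        // Phase 1: each thread sorts its own segment in place.
        for (int i = 0; i < k; i++) {
            final int lo = Math.min(i * segLen, data.length);
            final int hi = Math.min(lo + segLen, data.length);
            workers[i] = new Thread(new Runnable() {
                public void run() { Arrays.sort(data, lo, hi); }
            });
            workers[i].start();
        }
        for (Thread t : workers) t.join();

        // Phase 2: k-way merge. Each heap entry is { cursor, segmentEnd },
        // ordered by the value the cursor currently points at.
        PriorityQueue<int[]> heap = new PriorityQueue<int[]>(k, new Comparator<int[]>() {
            public int compare(int[] a, int[] b) { return Double.compare(data[a[0]], data[b[0]]); }
        });
        for (int lo = 0; lo < data.length; lo += segLen) {
            heap.add(new int[] { lo, Math.min(lo + segLen, data.length) });
        }
        double[] out = new double[data.length];
        int n = 0;
        while (!heap.isEmpty()) {
            int[] cur = heap.poll();
            out[n++] = data[cur[0]++];
            if (cur[0] < cur[1]) heap.add(cur);
        }
        return out;
    }
}

Note that the merge phase is serial and grows with the number of segments, which is consistent with the "2nd part" times above creeping up as more threads (and hence more segments) are added.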

Thursday, November 08, 2007

LSE Goes Boom!

"No other exchange is undertaking such an ambitious technology refresh programme based on next-generation Microsoft technology. We have always provided a first-class service, but now we can claim to be the fastest in the world as well."
- David Lester, CIO, London Stock Exchange

Now I wonder why no other exchange uses Microsoft technology for high performance messaging then...

The LSE was a big coup for Microsoft - one of the world's leading exchanges which, until the Infolect platform based on .Net technology came along, had not had an outage since 1987. Even more embarrassing is the anti-Linux rhetoric in the "Get The Facts" campaign quoting the David Lester line above - a nice analysis of which you can find here from Paul Murphy. He concludes that the so-called high performance message delivery is about the same as a "relatively slow LAN".

This won't be the last time this occurs - in my operational experience, solutions based on .Net exhibited mysterious freezes which were never solved.



The Transition from N-Tier to High Performance Architecture

This is a little story of Anybank, who have a classic three-tier architecture using in-house developed solutions. They are embarking on a plan to improve the architecture, aiming for scalability, globalisation and integration in order to meet predicted growth in volumes.

In order to achieve this, the architecture will be transitioned from a transaction-orientated platform to an event-driven, peer-to-peer, asynchronous, persistent messaging-based architecture. Anybank have recognised that in order to achieve this, they must move away from in-house developed solutions and take advantage of open source and third-party products.

Architectural Components

Anybank have a classic three-tier architecture with a moderate level of fan out/in.

At the front end, there are a series of internal trading and control applications and several external client applications and gateways to systems such as FIX.

The communication from tier 3 to tier 2 is TCP based, carrying FIX or HTTP traffic to a series of middle-tier servers. Additionally, there is a custom protocol used by the thick client to talk to the middle-tier servers. This is an efficient ASCII-based protocol with pipe-delimited fields.

External Applications

The system has FIX connections to several exchanges and markets. Clients have a choice of a thick client or a web front end. Quotes are sent over FIX at a rate of 4/second.

Internal Applications and Services

There are several internal applications that are used by trading, credit and risk control, and operations. The connections to these are managed by a series of front office servers which talk to the database tier for information. They run a generic set of applications that would be found in any other bank:
  • Market Making, Trading Station and Sales Trader Workbench: Prop. trading tools
  • Session Management: Routing, STP gateways, reception of execution reports.
  • Order Monitoring: provides snapshot requests and streaming prices
  • Client Margin Calculation Engine: Limit checking, credit check, event publishing service
  • Exposure and Barrier Monitoring
  • Front Office Server: Authentication, Authorisation and Access control (AAA), Event notification.
  • Trade Server: Visual trading, large trades and limit trading.
  • Market Data Servers: Takes streaming market prices from feed handlers and distributes Market Data
  • Metrics and SLA monitoring
The current system handles thousands of daily trades with a peak rate of 200/second. Latency across several systems is of the order of 0.5 - 1 second, with a desire to improve this figure across the board.

Anybank Tiered Architecture Analysis

Several key components have been targeted in an attempt to reduce potential bottlenecks in the system. Serial activities are being replaced with parallel activities, and single-threaded code is being rewritten to be multithreaded.

However, by applying HPC techniques and choosing the right products, we can gain significant performance quickly and robustly.

Tier 1 - Database Persistence Layer

This is the database layer, which is currently a replicated SQL Server. As with all relational databases, the maximal transaction rate is a function of the IO subsystem, log file transaction rate, number of indexes, data volumes and access patterns. Typically, one can expect a transaction rate of between 1,000 and 5,000 transactions per second, which translates into 0.2 to 1 milliseconds per transaction.

Business level transactions may consist of many database transactions, which results in the potential for adding considerable latency to client and API calls.

Tier 2 - Application Services and Access Control Layer

The application logic layer combines authentication, access control and authorisation with business logic, static and derived data, streaming market data and database transactions. It is traditionally at this layer that fan-in/out is facilitated.

Tier 3 - Application Layer

The application layer consists of traditional thick, web and FIX clients, each with their own particular data access profile. Thick clients typically are used when streaming data is consumed by, for example, a pricing application. They may consume data delineated by topic or routed by function.

Web clients typically consume http/https/javascript and/or xml data to drive form-based, browser applications.

FIX clients can potentially deliver 100K messages per second when using FAST, the FIX compression protocol.

HPC Tools Applied to Anybank

The architecture is in transition from client-server to an event-based, message-orientated structure in order to increase resilience and enable scalability. The architecture already has some elements of fan-out via multiple services connected over TCP.

There are opportunities to augment the current architecture and improve transaction capacity whilst reducing latency in each of the three tiers of the current Anybank architecture using Open Source messaging and caching products which have proven robustness in real-world applications.

Tier 1 and 2 - Peer-to-peer Message Bus

The publish and subscribe paradigm is ideal where multiple senders, receivers or combinations of the two are present, and is a common pattern for trading applications where prices, curves, derived data, immediate messages and topic-based mechanisms need to be catered for.

In tiers 1 and 2, it is proposed to use the MantaRay peer-to-peer messaging product to provide message caching to tier 2 applications and services, effectively decoupling tiers 1 and 2. MantaRay's persistence logic guarantees delivery to unavailable peers once they come back online, and sending nodes need not await responses from a broker before resuming other tasks - capabilities which are critical in high availability environments. This is considerably faster than synchronous database access, thereby accelerating application performance.

Also in tier 2, application/service fan-out will be added using the pub/sub capabilities of the MantaRay solution.
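
For illustration, the pub/sub pattern itself looks like this - a sketch against the plain JMS 1.1 API (which MantaRay implements), with placeholder JNDI names and a made-up topic:

import javax.jms.*;
import javax.naming.InitialContext;

// One topic, an asynchronous subscriber and a fire-and-forget publisher.
public class PriceTopicExample {

    public static void main(String[] args) throws Exception {
        InitialContext jndi = new InitialContext();
        TopicConnectionFactory factory =
            (TopicConnectionFactory) jndi.lookup("TopicConnectionFactory"); // placeholder
        Topic topic = (Topic) jndi.lookup("topic/prices.fx");               // placeholder

        TopicConnection connection = factory.createTopicConnection();

        // Subscriber: its own session, messages are delivered asynchronously.
        TopicSession subSession = connection.createTopicSession(false, Session.AUTO_ACKNOWLEDGE);
        subSession.createSubscriber(topic).setMessageListener(new MessageListener() {
            public void onMessage(Message m) {
                try {
                    System.out.println("tick: " + ((TextMessage) m).getText());
                } catch (JMSException e) {
                    e.printStackTrace();
                }
            }
        });
        connection.start();

        // Publisher: a second session. The sender does not wait on consumers;
        // the provider's persistence deals with peers that are currently offline.
        TopicSession pubSession = connection.createTopicSession(false, Session.AUTO_ACKNOWLEDGE);
        TopicPublisher publisher = pubSession.createPublisher(topic);
        publisher.publish(pubSession.createTextMessage("EURUSD 1.4142/1.4144"));

        Thread.sleep(1000); // give the listener a moment before shutting down
        connection.close();
    }
}

The decoupling is the point: the publisher returns as soon as the message is handed to the provider, rather than blocking on a database write or a slow consumer.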

Reliable Multicast over the WAN

MantaRay also has a WAN Bridge capability which accepts connections from other MantaRay peers, thereby getting around the problems of multicast over the WAN. This allows the benefits of multicast to be mixed with the recoverability of MantaRay.

This could be used to distribute real-time market data globally, a purpose for which peer-to-peer is ideal. It can be configured to deliver topic-orientated messages over multiple multicast streams, layering the topics to use bandwidth effectively via source-paced delivery - which would be ideal for Anybank.

to be continued...

Wednesday, October 31, 2007

Merchant Bankers finally get the Internet and FIX?

From second Tanenbaum elephant hump comes the story about how emerging markets are booming because of gasp! - the application of internet technology to investment banking. It's rather ironic that the technology was there in 1970 - trouble was - the business just wasn't listening - messaging is as old as the hills - FIX is messaging in verbose clothing (XML) - suitable for the trading of web "services" by the externalisation of business processes as I observed in my 1999 paper on "The Role of XML in Enabling Business Solutions Built from Collaborative Web-Based Services". This was further extended in the blog article here.



Monday, October 29, 2007

Pot, Kettle, Black

I had to laugh last week over a dinner conversation with an IT journalist. Some fairly innocent conversation about firearms, internet security and shredding documents came up. But whilst the journalist writes stirring columns about ecommerce, security, credit card fraud and ATMs, he blanked me when I asked about identity theft and cross-cut shredders. Of course he has home insurance, and expects the odd burglar. But stealing credit card statements and gas bills? That assumes his new live-in girlfriend won't read through the pile and decide what he can afford for Xmas. So where does this guy keep his passwords and ATM PIN? Oh, how silly of me - it's in his wallet, of course (I dared not ask).

Monday, September 03, 2007

IP and the Internet

A friend of mine, Dr Tim Sampson is running an event on the above subject. Areas he will cover are:
  • Domain name rights and linking
  • Copyright, database rights, framing and “borrowed content”;
  • The Use and Abuse of Branding on Websites and Trademark and brand issues;
  • Advertising via the Web – Its Limits and Liabilities;
  • Digital Downloads – What you can and cannot do;
  • Streaming, Webcasting and Internet TV
  • Patents and Designs
  • The Trade in Regulated Goods
  • Commercial Exploitation of Database Rights over the Web
Tim is an excellent speaker. If you would like to attend, the flyer is here.

Wednesday, August 29, 2007

Beer in the Evening - Data Parallelisation and Functional Languages in Finance - September 25th 2007

John Barr, who has recently joined the 451 Group from Intel, has suggested that we get together again for another beer in the evening. John wrote an article called "Reinventing the future: parallelism and acceleration" which tracked the recent rise of accelerated computing and is a comprehensive review of the area.

I'll put up some reading material in due course. Watch this space.

So if you're up for a beer and a casual chat about HPC, HFF, low latency, parallelism, messaging vs events etc., we'll be meeting in the Red Lion, 8 Lombard Court, London from 6:30pm onwards on the 25th of September.

You can call me or email on 0791 505 5380 or rgb at enhyper.com if you have any questions.






Micro and Macro Performance Analytics - a Log with Two Tales


Sometimes, when we measure things, we end up introducing latency into the system. I recently came across a neat scheme where a timing message was "stamped" on every hop and then rerouted to the source; the average one-way time was calculated by dividing the round-trip time by two. The overall times produced were adequate for what the guy required, but there was an annoying latency spike which happened periodically and was a cause for concern.
The average trip time from source to receiver was 900 microseconds. Occasionally, a 9 millisecond time was reported which, if echoed in production, would have been unacceptable. To find the problem, all the messages above 1ms were tagged and grep'd out into a file. Using vi to analyse this log file, it was noted that the latency spikes had a definite periodicity of around 600ms.
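
The filtering step is trivial to script. Here's a sketch in Java, assuming a hypothetical two-column log of "timestampMillis latencyMicros" per message (the real log format was different); it keeps the >1ms entries and prints the gap since the previous spike so any periodicity stands out:

import java.io.BufferedReader;
import java.io.FileReader;

public class SpikeFilter {
    public static void main(String[] args) throws Exception {
        BufferedReader in = new BufferedReader(new FileReader(args[0]));
        long lastSpike = -1;
        String line;
        while ((line = in.readLine()) != null) {
            String[] f = line.trim().split("\\s+");
            long ts = Long.parseLong(f[0]);      // wall-clock time of the measurement, ms
            long latency = Long.parseLong(f[1]); // round-trip latency, microseconds
            if (latency > 1000) {                // only spikes above 1ms
                long gap = (lastSpike < 0) ? 0 : ts - lastSpike;
                System.out.println(ts + "\t" + latency + "us\tgap=" + gap + "ms");
                lastSpike = ts;
            }
        }
        in.close();
    }
}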

Visual inspection is a great technique, as the brain is good at recognising patterns. However, now that we can generate millions of log entries per second, this tactic has severe limitations: it highlights micro trends only. It's therefore important to visualise the data as well, so that macro trends can be recognised - and luckily I did, as there was one lurking in the data.
I took a minute's worth of >1ms log entries and manipulated them in good old Excel to produce the graph below:

As you can see from the graph, there's an obvious 10-second event which is highly regular in its form. On further investigation, it turned out to be a call to a summarisation routine which caused the spikes to cease momentarily. Now, it's lucky this was found using a synthetic load - in production, it could have been a lot more difficult to spot. On the plus side, it's nice to come across a well designed load simulation - this is seldom the case. People are prone to quote averages and pay little attention to the spikes - but in high frequency trading, it's the spike that will lose you big money.

So, be careful what you measure and be careful that by measuring, you don't introduce side effects which could so easily be hidden in production.


Wednesday, August 01, 2007

Community of Interest Predicated Program Trading

I've had my paper on the above reviewed and accepted for publication and presentation to the Knowledge Management stream of this year's Operational Research Society conference, OR49 in Edinburgh. I'll be attending the conference for the three days and hanging out with Dr Duncan Shaw, an expert in Soft Systems Methodologies at Aston University.

You can download the paper from the Enhyper site here.





Friday, July 20, 2007

A Design Pattern Template for High Performance Messaging

This is a work in progress. High performance messaging is hard: there are many parameters which can be configured (or misconfigured), so I'm working on a series of pattern templates.

Pattern Name


{ Source: Single.All Topics, Receiver: Multiple.All Topics }

Synonyms



Related Patterns

{ Source: Per Topic, Receiver: Per Topic }

Overview


Topic Methodology

Source per topic
{ Single.Source , Receiver.All Topics }


Group Rate Control Policy

Extreme 1 { Sender: Rate of Slowest Receiver, Receiver: At Leisure }
Extreme 2 { Sender: Fast As Possible, Receiver: Try To Keep Up }
Middle Ground { Sender: Minimum Data Loss, Receiver: Minimise Latency }

Transport Protocol


Example

Diagram

Operating Environment

Related Patterns
So How Fast is an APR Hash Table?

Just ran a quick metric on loading approximately 20K stock information items into an APR hash table on a heavily loaded (load avg ~4), SPARC III based Solaris 8 machine. It took 32ms to load, which works out at roughly 1.6 microseconds per apr_hash_set() call. Each data item was roughly 140 bytes long - not bad for an old machine.


Wednesday, July 18, 2007

Developers, Developers, Developers?

If I could summarise, in one word, the general trend in Investment Banking IT over the last 8 years I'd say it's been one of mediocrity. However, thanks to the demands of High Frequency Finance, the tide is turning back towards hiring talented programmers capable of working closely with the business to craft high performance, high quality solutions that deliver real competitive edge. It's beginning to feel like the old days, when the focus was on technical excellence and delivery.

This sorry state has come about through the ingress of consultancy-led management (i.e. writing endless documents rather than code) and been reinforced by HR policies such as "segmentation", where the aim of the bank is to hire average programmers at average salaries - indefensible. I'd rather pay top dollar for a girl/guy who can get the job done in a tenth of the time and cost.

I'm sure we've all seen the infamous "Developers, developers, developers" video - quite bizarre. I beg to differ: in Investment Banking (IB) I'd say it's "Traders, traders, traders" - at the end of the day, the business pays the wages...

I remember my first interview with a scary looking, fast talking New Yorker who ranted for half an hour about "only hiring the best in the world" - gulp I thought. All of a sudden he stopped and said - over to you. I told him - hey look - I'm not the best in the world - but I've done x, y and z etc - he said - right - you're hired on the spot. It was a big shock, I turned up first day in a new suit, so nervous that I forgot to cut the tags off. I was working with 40 quants (mathematicians/physicists mostly), working on fixed income stuff that was way over my head at the time. And so began my career delivering production quality solutions. Some good advice given to me when I first darkened the doors of a prestige tier 1 IB in 1996:

- No one will tell you what to do here - it's up to you to make your own role.
- IB is all about making money - not about elegant, cost-efficient architectures.

Today, however, the attitude of some developers beggars belief. I was talking to a guy a little while back who said "I don't care if a trade takes half a second or a second to route - I like programming in language X on platform P - it has a nice environment and is very productive". Sure it's nice and quick, but how stable is the platform? What about scalability? What's the clock resolution of the platform? How fast can it handle low-latency, high performance streaming messages?

High Frequency Finance demands high performance data, calculations and messaging combined with a stable platform which scales well. Some platforms just aren't up to the job - caveat architectos.

To illustrate, I was flying to the States last week and enjoying the in-flight movies. The interface was perfunctory and seemed to do the job. I paused the movie a couple of times and noticed that the screen froze as expected but when I resumed it had been playing in the background all along. I then tried to rewind - and that didn't work either. Now lying to users is not good - not good at all. Then someone complained that their movie had "crashed" - so the steward decided to "restart the system". As soon as the first selection screen came back up - the whole plane pressed the movie button - boom! The system had to be restarted again, and again, and again. I never did see the end of the film. So you think I'm going to use this platform to build a trading system? Perhaps not...





Saturday, July 14, 2007

Saturday Afternoon Puzzle Club

A couple of weekends ago my daughter and I sat down with a brainpower DVD we picked up from the Times earlier this year. While we found that we could make good progress on most of the analytical puzzles as long as we spent enough time on each problem, we are unfortunately both quite lazy, and it didn't take long for us to start cheating.

For example:
When the letters of the following words are placed together how many eight letter words can you form?

VINE + FROG

The anagram puzzles are easiest to cheat on. All you need is a program that computes all the permutations of the letters and looks each resulting word up in an online dictionary to see if it's there. Now it's easy enough to find an online anagram search to do this, so you don't have to write one yourself.

But what happens if you have to compute the permutations of something other than letters in words, in particular, mathematical symbols?

Assume you are using a basic calculator. Press the numbers in order and replace each question mark, as you go, with a mathematical symbol. Using plus, minus, multiply and divide once only how many different arrangements are there for this equation to equal four?

7 ? 1 ? 7 ? 7 ? 4 = 4

In this case, you could evaluate the above expression with all permutations of the four symbols: * - / + and reject the ones that don't match. Now I realise that it would take longer for me to write a program to do this than it would for an eight-year-old to sit down and work out each of the possible values, but earlier this week I picked up the second and third fascicles of Volume 4 of Donald Knuth's Art of Computer Programming, and I felt like coding.

Algorithm L (lexicographic permutation generation) looks like this in Smalltalk:

LexicalPermutor >> permute: aSequenceableCollection
    | n j l k |

    "Precondition: the sequence is sorted lexicographically - necessary for step L3 to work"
    sequence := OrderedCollection withAll: aSequenceableCollection asSortedCollection.
    n := sequence size.

    [
        "L1. visit the sequence"
        visitor visit: self.

        "L2. find j, the smallest subscript such that all permutations up to j have been visited"
        j := self smallestSubscriptVisited.

        "L3. increase a[j] (the current element in the sequence)"
        l := n.
        [(sequence at: j) >= (sequence at: l)] whileTrue: [ l := l - 1 ].
        sequence swap: j with: l.

        "L4. reverse a[j+1]...a[n]"
        k := j + 1.
        l := n.
        [ k < l ] whileTrue: [
            sequence swap: k with: l.
            k := k + 1.
            l := l - 1 ].

    ] repeat

where step L2 finds the smallest subscript that hasn't yet been fully permuted (and also provides us with a stopping condition):
LexicalPermutor >> smallestSubscriptVisited
    | j |

    j := sequence size - 1.

    [(sequence at: j) >= (sequence at: j + 1)] whileTrue: [
        ((j := j - 1) = 0)
            ifTrue: [PermutationCompleteException raise]].

    ^j

The cool thing about this is that you can pass in any sequence of symbols you want - for example you could pass in the string 'vinefrog' and have a visitor class that looks up the value in a dictionary.

At the beginning of the book Knuth says that enumerating the permutations is just a fancy way of saying that we are counting them, which is actually not very interesting. Nor is it interesting to list them. What we want to do is visit them and do something intelligent. The visitor is doing the application-specific stuff, and the permutation algorithm essentially acts as a black box.

In this particular example, we want the visitor to evaluate an expression and only write an answer if the result is the same as some target value.

The permutation algorithm lives in a class called LexicalPermutor, and the visitor is called a FunctionListVisitor, which gets initialized with some parameters and a target value:

LexicalPermutor class >> example

    | permutor visitor |

    permutor := LexicalPermutor new.
    visitor := FunctionListVisitor new
        numberList: #(7 1 7 7 4);
        target: 4.
    permutor accept: visitor.
    permutor permute: '+-*/'.

To answer the puzzle then, all we need is a visit: method that evaluates the expression, tests the result and prints a message if it finds something interesting:

visit: aTarget

    [accumulatedValue := numberList first.
    aTarget sequence keysAndValuesDo: [ :index :function |
        accumulatedValue := accumulatedValue
            perform: function asSymbol
            with: (numberList at: index + 1) ]]
        on: Number divisionByZeroSignal
        do: [ :ex | ^self ].

    "Notify the world of successful evaluation"
    (accumulatedValue = target)
        ifTrue: [
            total := total + 1.
            Transcript cr; show: total printString; tab.
            self print: aTarget sequence on: Transcript withResult: target ]

The first block of code just does what your calculator would do as you successively go through and perform: the function on the accumulated result (with a check for division by zero).

The second block just prints the result, which looks like this:

1    7 * 1 - 7 / 7 + 4 = 4
2    7 * 1 / 7 + 7 - 4 = 4
3    7 + 1 * 7 / 7 - 4 = 4
4    7 + 1 / 7 * 7 - 4 = 4
5    7 / 1 - 7 * 7 + 4 = 4
The answer is 5. But I knew that because an eight-year-old told me the answer before I could figure it out for myself.

The visitor to solve the following puzzle is very similar to the one above:

Assume you are using a basic calculator and press the numbers in the order shown, replacing each question mark with a mathematical sign. Plus, minus, multiply and divide can each be used once only. What is the highest number you can possibly score?

7 ? 6 ? 1 ? 8 ? 9 = ?

The next step is to look at algorithms that generate combinations of n items taken t at a time. This will solve puzzles like this:

Add together three of the following numbers each time to score 18. How many different combinations are there to do this?

0 4 6 8 10 12 14

Saturday, July 07, 2007

PR Surgery Disasters

It seems that hardly a week goes by anymore that some firm or another is blindsided by blog reports of bad customer service or legal departments run amok. Often the matter is made worse because the company is completely unaware of the extent to which the incident has been reported. While unfortunate events cannot always be prevented, being aware of what's going on can help prevent a misunderstanding from becoming a nightmare.

But search engines are only of limited use. Many people who read a lot of blogs gave up on browsing web sites long ago and instead use feed aggregators to keep up with the volume of their news. However, even aggregators are overwhelming at times and sophisticated processing rules are becoming widespread. The results need to be accessible and easily understood.

In particular, the Data Mining blog points us to a new system called Reputica which provides an ongoing reputation metric for your firm, brand or product.
The key to our service is in using Reputica's unique software to predict how information will disseminate across various media platforms. In other words, if a negative blog comes out one day, Reputica can predict - based on complex analytical algorithms - where that story is likely to go next, and when. We can then advise our clients on the most effective pre-emptive steps to take.
Whether these claims are true or not, an increasing number of services and tools are available to keep track of what people are saying about you. This is definitely a space to watch.

Friday, July 06, 2007

Software Pairs

In my anonymous applications post, I talked about an idea for a web service marketplace delineated by several criteria, such as cost, reliability, performance etc. Consumers could choose a service based on their particular preference, or regulatory obligation for that matter, and return to the marketplace in realtime should a chosen service provider fail to deliver the required service level, decrementing the service's reputation in the process.

Service providers would range from the high profile corporation to the reclaimed linux box powered by a solar panel in Africa (anyone can download the jdk and learn to program). Investment banks may be forced by the regulators to only source from high reputational providers in production. In dev, you can source from whoever you like.

So this service analogue, applied at a macro level, leads to the notion of software analogues, e.g. Oracle and MySQL, DataSynapse and [Hadoop|ICE]. Slightly more (IB) user-friendly is the notion of Software Pairs: as in the stock/currency analogue of being in different sectors, our delineator is Commercial vs Open Source. It might be useful to code this up one day - a kind of del.icio.us for applications - whereby the dev community categorise which pattern is exemplified by a particular type of software.

Wednesday, July 04, 2007

ARMISTICE: Real-time Distributed Risk Management

In this paper, some experiences of using the concurrent functional language Erlang to implement a classical vertical application, a risk management information system, are presented. Due to the complex nature of the business logic and the interactions involved in the client/server architecture deployed, traditional development techniques are unsatisfactory. First, the nature of the problem suggests an iterative design approach. The use of abstractions (functional patterns) and compositionality (both functional and concurrent composition) have been key factors to reduce the amount of time spent adapting the system to changes in requirements. Despite our initial concerns, the gap between classical software engineering and the functional programming paradigm has been successfully fulfilled.

Anonymous Applications

This paper was written around 2000 and submitted as an abstract to the Financial Cryptography (FC) Conference in Anguilla. It was, unfortunately, not accepted as it was too far removed from the core focus of the conference, i.e. cryptography, and has sat gathering dust ever since. It was inspired by a variety of influences, but was substantially shaped by the rantings of Robert Hettinga et al who formed the FC community, all of whom I owe a debt of gratitude for a sound education in applied crypto. I'm in the process of updating it, so I welcome comments and collaboration.

Abstract

I present a novel application development paradigm enabling applications to be built from a collection of web services sourced from a community-maintained marketplace of open source and commercial web services, consumed via anonymising proxies and remunerated by anonymous, bearer electronic cash.

It is proposed to combine the power of community maintained hierarchical web service directories with open source software development to create a marketplace for granular software services.

These services will be delineated by cost, reputation/quality of service and paid for by a variety of mechanisms ranging from pre-arranged contracts to microcash payments. The community will determine and maintain the directory ontology organically.

The Death of the Desktop Application

In the late 1990s, there was a move towards hosted internet applications labelled the Application Service Provider (ASP) model. In 2007, this market is maturing fast, credible enterprise-class services (such as salesforce.com) have become popular, and the ability to fill market niches by configuration, as proposed in [7] in 1999, has become a business reality.

The ASP market has now fragmented and been rebranded as Web 2.0, enabled by client-side technologies such as AJAX and Adobe Flash.

Many people struggled to understand the ASP concept, equating it to the days of mainframe bureau computing provided as a vertical, single-vendor solution. Then, the balance was firmly with desktop applications and their up-front licensing model with annual maintenance fees. Now, the hosted application model is poised to annihilate the traditional desktop and supplement the licensing model with pay-per-use-per-seat.

Open Source Business Model

The Open Source community has succeeded in developing high quality, robust and technically innovative software applications quickly. However, no one is quite sure how to turn this phenomenon to commercial advantage, and there are several theoretical business models as proposed in [7] and [9].

As an attempt to fund Open Source development, there also exists several collaborative task markets for development such as Cosource (www.cosource.com) as proposed in [5]. Incentivised expert markets are also emerging as in . These have failed to attract significant numbers of developers to make them economically viable. In 1999, Cosource had $60,000 dollars of outstanding development and a total of 11 projects completed.

I propose a transactional model whereby developers can earn realistic incomes by anonymous subscription or micropayment for functionality which they sell to anonymous consumers via a community maintained directory based on the same operational principles used by the Open Directory Project (dmoz.org) and outlined in [10].

The Open Directory project presents an excellent example of the "Bazaar" effect as outlined by Raymond [11] whereby the power of the distributed community is harnessed to build a resource beyond the reach of commercial efforts.

After the initial failure of internet currency pioneers like Digicash et al, fungible micropayment currencies like E-Gold (www.e-gold.com) filled the void. We are again seeing a resurrection of micropayment mechanisms from a variety of companies, but few offer anonymity. Anonymity of identity is preferable as it means you only have to trust the ethics and technology of one organisation rather than many. Combined with persistence of pseudonym and the integrity of a chained blinding architecture detailed in [4], we have a solution which is credible to the internet community.

Meta service providers are currently setting up content peering and rebranding functionality to leverage their content delivery infrastructure as outlined in [3]. It is these providers who we perceive as adapting to carry application service components.

Real time bandwidth exchanges

ADSL's influence

Possible applications

Military Uses

Data Security

Information Assurance

Commercial Uses

The future

Anonymous Applications Overview

The ability to deliver applications composed of discrete components exists today. By combining the benefits of Open Source development, global community-maintained directories, persistent pseudonyms, anonymous payment mechanisms and dynamic bandwidth acquisition, we predict a new marketplace for applications.

The Developer

Developers sit at home and write applicationettes/components/applets - whatever. They host these on their own hardware, permanently connected to the internet via ADSL. They sell these services on the open marketplace for ecash and have adaptive pricing algorithms to ensure that the things earn money. They use persistent nyms to hide their identity (for a variety of reasons).

The Consumer

Consumers have a front-end application designer tool which allows them to pick services and design web applications using components/services found by reading a global directory (see below). They also have the choice to route this content over an anonymising infrastructure (like Zero-Knowledge). It also allows the user to pay for these services by subscription or per use, and to control quality of service requirements, persistence and alternatives.

The directory

At the heart of the system is a global directory (much like dmoz.org) where vendors advertise their wares (in human readable format). This is important because it is a mechanism whereby consumers can request new functionality/classes of application to be written. The directory is hierarchical and organic: cooperatively managed by the developers/consumers/intermediaries.

The intermediary

I see a strong market for intermediaries here to provide:

- dynamic bandwidth management - using things like rateXchange to dynamically purchase bandwidth
- rebranding services
- consumer application hosting
- fan-in/concentration services for service suppliers - not everyone is going to have/desire services running on their home hardware.
- billing/credit control
- quality of service

Potential Customers

This has obvious use for the military who are increasingly being driven to use commercial static and wireless networks.

Persistent Pseudonyms

Anonymous identities have been widely used on the internet for both good and bad purposes. The biggest drawback is that it's not easy to communicate bidirectionally. Initial efforts were pioneered by the mixmaster remailers [Bacard], but there are vulnerabilities [Cottrel] which could be used to track messages back to their originator. In answer to this problem, Zero-Knowledge Systems offered a robust anonymising service based on the work of Dr Stefan Brands; however, post 9/11, demand for the service disappeared overnight, leaving the door wide open to identity theft.

Open Source Component Providers

Cooperative Web-based Services Using XML

Conclusions

The main use of XML is as a vendor, platform and application independent data format that will be used to connect autonomous, heterogeneous applications. Building on this, XML will enable a technology shift in computing whereby personalised business solutions will be constructed dynamically from distributed, cooperative applications (services) hosted by different classes of organisation. XML will be successful in this area, where other technologies have failed, due to its simplicity, the fact that it has been designed from the outset for use on the Web, and most importantly, because it has across-the-board industry backing.

References

1. Bacard, A. Anonymous Remailer FAQ, Feb, 2000. http://www.andrebacard.com/remail.html

2. Cottrel, L.Mixmaster & Remailer Attacks, Feb, 2000. http://www.obscura.com/~loki/remailer-essay.html

3. Dyson, E. Zero Knowledge: It's freedom, baby!, Release 1.0, 12-99, 15th December 1999.

4. Freedom Network 1.0 Architecture, I. Goldberg, A. Shostack, Zero-Knowledge Systems, Inc., 29th November 1999.

5. Using Electronic Markets to Achieve Efficient Task Distribution, I. Grigg, C. Petro, 28th February 1997. http://www.systemics.com/docs/papers/task_market.html

6. The Role of XML in Enabling Business Solutions Built from Collaborative Web-Based Services, G. Burnett, M. Papiani, 16th December 1999.

7. The Power of Openness, Berkman Center for Internet and Society, various, 1999. http://opencode.org/h20/

9. The Magic Cauldron, E. Raymond, 24th June 1999. http://www.tuxedo.org/~esr/writings/magic-cauldron/

10. Linux meets Yahoo!, D. Pink, The Fast Company, 2000. http://www.fastcompany.com/career/pink/0100b.html

11. The Cathedral and the Bazaar, E. Raymond, 26th October, 1999. http://www.tuxedo.org/~esr/writings/cathedral-bazaar/

Code: Productionisation of FAME

One of the first questions I ask when interviewing developers is "How many systems have you put into production?". There's a big difference between writing a system and productionising a system; perhaps that's why I'm so fascinated by logging (probably the dullest subject on the planet) as a mechanism for forensically diagnosing production problems and, potentially, a mechanism for measuring application service level, latency and security events - but that's for another post or two.

Here's some real production code from a long long time ago written for a bank that no longer exists:

http://enhyper.com/svn/src/fame/fmUtilLib

It's the maintenance script for a production installation of FAME, and it has some interesting hooks and shell programming techniques. There's an example of the use of a Zeller function - an algorithm I pinched from Joe Celko's SQL for Smarties book and turned into a bc(1) function - which I think is pretty neat. There's also a hook into a command line call to a program which generated an SNMP trap to alert support in case of error.
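
For the curious, the Zeller calculation itself is tiny. Here's the standard Gregorian form of Zeller's congruence in Java for comparison (the bc(1) function in the script is the same idea):

// Zeller's congruence (Gregorian): returns 0=Saturday, 1=Sunday, ..., 6=Friday.
// January and February are treated as months 13 and 14 of the previous year.
public class Zeller {
    public static int dayOfWeek(int year, int month, int day) {
        if (month < 3) {
            month += 12;
            year -= 1;
        }
        int k = year % 100;
        int j = year / 100;
        return (day + (13 * (month + 1)) / 5 + k + k / 4 + j / 4 + 5 * j) % 7;
    }

    public static void main(String[] args) {
        System.out.println(dayOfWeek(2007, 12, 18)); // prints 3, i.e. Tuesday
    }
}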

IMHO all production support scripts should be written in straightforward Bourne shell, just like the operating system scripts - there's a good reason scripts are written in the lowest common denominator. In the old days, disk space was limited and fancy shells were not standard installations. Same for editors - emacs had an amusing nickname back then, "Eight Megs And Continually Swaps", from the days when 32MB of memory was a big deal.

Thursday, June 28, 2007

Logging Sins, Hierarchical Tuples and Effective Security Architecture


There are many sins in logging, none so amusing as this one I saw recently:

+++ ALERT : SEVERE : Invalid Market Order Submission ... Hence, releasing the Trader Order in Panic... +++ALERT : SEVERE+++

In the old days, we used to return a number which was then looked up in a decode table. This led to obscure code and errors which were quite severe when the DBA messed up the decode table...

However, as an aid to debugging, self-describing code can be useful. So if you're tempted, try tuples - they're quite neat. They're a feature of functional languages, in particular Haskell. Here are some real examples:

TIB.SEND.TOALGO.AMENDTRADERORDER
TIB.SEND.TOALGO.ORDER.CANCEL
LBM.CALLBACK.INVOKE
LBM.CALLBACK.RETURN

Without commentary, you can deduce what's happening with this code and these messages. Tuples are cool. They can also be used to describe design patterns. I wrote a paper a while back, which I presented at the IISyG, on Security Architecture Patterns - I'll post the deck soon on the enhyper subversion server. I took an idea from the Antipatterns book on the naming of "antipatterns" and crystallised the methodology into this tuple for describing a pattern:

{GENERAL, LESS GENERAL, SPECIFIC, MORE SPECIFIC}

Applied to the concept of security, this tuple looks like this:

Generic Security Concept.Application Context.Application Instance.Configuration

So I came up with patterns which looked like this:

DataTransfer.BBCP
SecrecyJurisdiction.Authentication.SSH.PasswordlessCertificate
Authentication.Application.JavaClient.OracleOID

Now the beauty of this scheme is that people are led logically to an understanding of the solution by hierarchical decomposition. Add the ability to have arbitrary tuple lengths and encourage stereotypes, and you now have well known patterns which you can back with implementations. Now here's the killer part - get security risk to certify/risk-assess the implementations, so you have effective security in a risk-compliant manner.

Unfortunately, this bright idea foundered for two reasons - banks are here to make money, not develop software, and the security risk guys found it difficult to think like developers.



Tuesday, June 26, 2007

Local Disk Hotspot Analysis

One common mistake I see is NFS/Samba-mounted partitions being used either to stage data into memory or to write application log files to. There's no real excuse for this, and the argument that local storage is not allowed by infrastructure engineering policy is one which I easily argued against at a tier 1. We wanted about half a gig of filesystem space to store logs and model data which could be loaded quickly in the event of a system or application crash. We were told that this was not allowed under any circumstances as it was against policy.

The issue we had was quite simple - on our Solaris box, restarting the application took 2 hours, pulling the data from an ancient disk array with really poor service times and some config stuff from an NFS-mounted partition. When we moved to Linux and used our local home dir to stage the models and store the logs, this went down to 15 mins - an acceptable restart time for the business. So I arranged a meeting with the Head of Unix Infrastructure and told him that he needed to explain to the business why they couldn't trade for one and three quarter hours in the event of a system failure. For some reason he changed his mind, and we managed to get two fast SATA II hard drives in addition to the two OS drives.

JBOD Arrays

If you are using a JBOD array with some form of disk suite management, there are frequently several performance problems that go unnoticed and unchecked - hardly surprising given the extreme workloads of some of the sys admins I've worked with, which leave them no time to be proactive.

The first is what I call disk hot-spotting - where a bunch of disks are idle yet one is maxed out due to poor partitioning. To analyse this, I wrote diskanalyse, which aggregates sar output to highlight possible issues. Here's an example of two disks which just so happen to be the OS disks. It shows a poor average service time of 17ms per operation - that's going to slow things down. The cure here is quite simple: there's too much activity in /tmp - get the application guys to move it to a partition local to the application and the problem will be lessened.

avserv[ sd1,h ] = 0
avserv[ sd0 ] = 17
avserv[ sd320 ] = 2
avserv[ sd30 ] = 0
avserv[ sd1 ] = 17
avserv[ sd330 ] = 0

busy[ sd0 ] = 28
busy[ sd320 ] = 0
busy[ sd230 ] = 0
busy[ sd203 ] = 0
busy[ sd30 ] = 0
busy[ sd1 ] = 29

rwpersec[ sd0 ] = 57
rwpersec[ sd320 ] = 1
rwpersec[ sd230 ] = 0
rwpersec[ sd203 ] = 0
rwpersec[ sd30 ] = 0
rwpersec[ sd1 ] = 57

blkpersec[ sd0 ] = 920
blkpersec[ sd320 ] = 10
blkpersec[ sd230 ] = 0
blkpersec[ sd203 ] = 0
blkpersec[ sd30 ] = 0
blkpersec[ sd1 ] = 909

avwait[ sd0 ] = 7
avwait[ sd320 ] = 0
avwait[ sd230 ] = 0
avwait[ sd203 ] = 0
avwait[ sd30 ] = 0
avwait[ sd1 ] = 7
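
The aggregation itself is nothing fancy. A sketch of the idea in Java (not the actual diskanalyse code), assuming the input has already been reduced to Solaris sar -d style columns of device, %busy, avque, r+w/s, blks/s, avwait, avserv:

import java.io.BufferedReader;
import java.io.FileReader;
import java.util.Map;
import java.util.TreeMap;

// Average %busy (column 2) and avserv (column 7) per device across all samples.
public class DiskAggregate {
    public static void main(String[] args) throws Exception {
        Map<String, double[]> stats = new TreeMap<String, double[]>(); // device -> {busySum, avservSum, samples}
        BufferedReader in = new BufferedReader(new FileReader(args[0]));
        String line;
        while ((line = in.readLine()) != null) {
            String[] f = line.trim().split("\\s+");
            if (f.length < 7 || !f[0].startsWith("sd")) continue; // skip anything that isn't a device row
            double[] s = stats.get(f[0]);
            if (s == null) stats.put(f[0], s = new double[3]);
            s[0] += Double.parseDouble(f[1]); // %busy
            s[1] += Double.parseDouble(f[6]); // avserv (ms)
            s[2] += 1;
        }
        in.close();
        for (Map.Entry<String, double[]> e : stats.entrySet()) {
            double[] s = e.getValue();
            System.out.printf("%-8s busy=%4.1f%% avserv=%5.1fms%n",
                    e.getKey(), s[0] / s[2], s[1] / s[2]);
        }
    }
}

A disk showing markedly higher %busy and avserv than its peers is the hot-spot candidate, as with sd0 and sd1 above.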








Monday, June 25, 2007

Ajowan ke paratha

Time for more geek food. Parathas are very much underappreciated in the Asian cuisine presented to the restaurant-going public - you'd be hard pressed to get a fresh paratha anywhere I've eaten.

This recipe comes from a book I picked up at a jumble sale:

Lot 1

8oz wholewheat flour
2 tsp Ajowan seeds
3/4 tsp salt
1/2 tsp chilli powder
7 fl oz water

Lot 2


3 tbsp melted ghee

Method

  • Mix lot 1 in a bowl to a dough, kneading well. Leave for 15 mins for the dough to rest.
  • Divide into 8 - 10 lumps
Loop
  • On a floured surface, roll out each ball into an oblong, 3 inches by 6-8 inches
  • Paint top surface with some of Lot 2
  • Fold one third of the length into the middle, fold the other third onto this
  • Repeat loop three times
  • Roll out to desired size (3-4 x 6-8 inches ideally)
Cook
  • Cook in a dry frying pan until surface bubbles and turns slightly opaque ( no ghee)
  • Paint surface with some of Lot 2
  • turn over and cook for a minute or so and repeat on other side
Eat with any curry or dall.



Wednesday, June 20, 2007

Beer in the Evening - Intel Fastercity Event

Intel are hosting a freebie beer in the evening event. Looks like it will be fun - I plan to attend.

Nigel Woodward of Intel writes:

A quick reminder about the FasterCity community event featuring: industry panel, discussion and Chief Wine Officer™ reception on the 2nd of July 2007, registrations begin at 5pm after which the conference is set to commence at 5.30pm – the venue is Savoy Place WC2R 0BL.

The many bottles of excellent fine wine are waiting to be tasted, and the chef has prepared a canapé menu specifically to complement the wine. Peter McCombie will host the event (www.petermccombie.com) and you will hopefully leave entertained, educated and possibly with a prize or award.

I look forward to meeting you on the night.

Nigel Woodward

Head of Financial Services

Intel Corporation

Dir: 020 7614 8600

Web:www.intelfasterfs.com/fastertrading
A Lesson in Simplification: Heuristic methods, Genetic Algos and NP Complete Problems

It's rare to find an intriguing and difficult task in investment banking - mostly our work involves straightforward computational work. I was asked to write a program for a guy who was manually sorting spreadsheets which contained totals of net asset values for exchange traded funds from Reuters. He was trying to find out which flags had been used by Reuters to calculate the total.

My initial reaction was that this would be a simple iterative algorithm which would be easy to implement. After consulting two quantitative analysts I know, I was informed that I'd come across the Postage Stamp problem (a.k.a. the Money Changing problem), which could only be solved using a brute force approach; technically, the problem is NP-complete. There was no known algorithm for this, so he suggested I go through all the subsets as outlined in this article on Generating Partitions. At this point I felt like giving up, but I persevered and read some interesting articles which related the problem to Frobenius, so I found some code which implements this, though I didn't get particularly far with it as it produced a (to me at least) meaningless series of numbers. I did find the McNuggets Problem analogy from Wolfram useful in explaining the problem in Joe Six-Pack terms. I also found a candidate algorithm which looked good for the job - the linear Diophantine equation.

A young turk here, Darryl Stein, was itching to help, so we both worked on candidate solutions. He adopted a GA approach based on AForge.NET, a C# GA library from those lovely people at Google - this was efficient and reasonably accurate on the data we had, but being C# it couldn't run on the Unix platform, so it was not really feasible as this had to be done server side (the data was coming out of Vhayu).

So we ended up coding a version of the Diophantine equation approach which did pretty much what the guy was doing with the spreadsheet - i.e. starting with the biggest number, then the next, until the limit was breached, then sieving through the permutations until you get a hit or not. Darryl beat me to it with a neatly coded recursive solution which did the job.
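
Not Darryl's code, but a sketch of that recursive idea in Java: sort descending, branch on include/exclude for each value, and prune as soon as the running total overshoots (with a small tolerance for floating-point NAV data; the numbers in main are made up):

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class SubsetSum {
    private static final double EPS = 1e-6;

    public static boolean find(double[] values, double target, List<Double> chosen) {
        double[] v = values.clone();
        Arrays.sort(v);
        // reverse into descending order so the large values are tried (and pruned) first
        for (int i = 0, j = v.length - 1; i < j; i++, j--) {
            double t = v[i]; v[i] = v[j]; v[j] = t;
        }
        return search(v, 0, target, chosen);
    }

    private static boolean search(double[] v, int i, double remaining, List<Double> chosen) {
        if (Math.abs(remaining) < EPS) return true;           // hit the target
        if (i == v.length || remaining < -EPS) return false;  // exhausted or overshot
        chosen.add(v[i]);                                      // try including v[i]
        if (search(v, i + 1, remaining - v[i], chosen)) return true;
        chosen.remove(chosen.size() - 1);                      // backtrack and exclude v[i]
        return search(v, i + 1, remaining, chosen);
    }

    public static void main(String[] args) {
        List<Double> chosen = new ArrayList<Double>();
        double[] navs = { 101.25, 98.40, 55.10, 43.30, 12.05 };
        if (find(navs, 153.50, chosen)) {
            System.out.println("components: " + chosen); // prints [98.4, 55.1]
        }
    }
}

It's still brute force in the worst case - the problem doesn't stop being NP-complete - but the descending order and the overshoot pruning keep it quick on the kind of data in question.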








Tuesday, June 19, 2007

LSE Electronic Trading

The Wall Street Journal Europe is reporting that the LSE has unveiled a new electronic trading system that will allow a trade to be booked and confirmed within 10ms, 130ms faster than previously.
The London Stock Exchange on Monday unveiled a new electronic-trading platform, TradElect, which promises to trade a share in 10 milliseconds -- 30 times faster than the blink of an eye and a speed that could help decide the fate of Europe's biggest exchange by market capitalisation of its listed companies.

The system cuts the time from placing an order to final confirmation to an average of 10 milliseconds from 140 milliseconds. It can handle 3,000 orders a second, up from 600 under the LSE's old system, known as SETS, a number the LSE said it has approached on several occasions in recent months.

By comparison, it takes 110 milliseconds for a trade to make its way through the main trading platform of NYSE Euronext's New York Stock Exchange. The Big Board intends to cut that to 10 milliseconds.
Another way of putting this is that someone can make 11 trades on the LSE while you are waiting for your NYSE transaction to complete.

How much XML can you process in 10ms?