Friday, February 11, 2011

HIFREQ 2011 Panel Discussion Input

Here's my input to the panel discussion for HIFREQ 2011.

• What are the different set ups and combinations for HFT architecture?

I have experience of four different architectures:
  • traditional monolithic event queue and broadcast
  • reflective memory, distributed processing
  • DMA, shared memory, multi-process and multicast
  • trading engine on a card
The latter appears to be the dream set-up: an FPGA-enabled network card with the strategy running on the card itself. This has been implemented by several large prop trading outfits with arb strats. Up until June last year we were able to compete with an aggressive arb strat, but our fill rates have dropped off dramatically and we're consistently being beaten on speed. So we've moved the goalposts - we now focus on market making, news and global multi-venue trading.

I'd be interested to hear about other approaches.

• Is massive multicore or specialist silicon (FPGA, GPU etc.) the next frontier?

Multicore is attractive for a multi-strategy play, but it requires careful design to avoid data races and performance pitfalls such as TLB misses and unnecessary memory barriers. FPGA has always been attractive for dealing with FIX and converting ASCII to binary (i.e. parsing). GPU shows promise in the equities world where dynamic pricing and portfolio analysis are required. What's been widely overlooked is DSP - there are some very interesting things you can do with DSP.
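To make the data-race and cache-contention point concrete, here's a minimal C++ sketch (all names hypothetical, not from our codebase): each worker thread gets its own counter padded out to a cache line, so two cores never fight over the same line, and relaxed atomics avoid paying for memory barriers where no ordering is needed.

```cpp
// Per-thread counters padded to a cache line to avoid false sharing.
#include <atomic>
#include <cstdint>
#include <cstdio>
#include <thread>
#include <vector>

struct alignas(64) PaddedCounter {       // 64 bytes = typical x86 cache line
    std::atomic<uint64_t> value{0};
};

int main() {
    unsigned n = std::thread::hardware_concurrency();
    if (n == 0) n = 4;
    std::vector<PaddedCounter> counters(n);
    std::vector<std::thread> workers;
    for (unsigned i = 0; i < n; ++i)
        workers.emplace_back([&counters, i] {
            for (int j = 0; j < 1'000'000; ++j)
                // relaxed is enough for a statistic: no ordering, no barrier cost
                counters[i].value.fetch_add(1, std::memory_order_relaxed);
        });
    for (auto& w : workers) w.join();
    uint64_t total = 0;
    for (auto& c : counters) total += c.value.load(std::memory_order_acquire);
    std::printf("total=%llu\n", (unsigned long long)total);
    return 0;
}
```

Remove the alignas and the counters land on shared cache lines; the threads then serialise on cache-coherency traffic even though they never touch each other's data.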

• Which solutions provide maximum scalability, configuration and customisation to ensure continual upgrade and development of your systems, so that they survive in a tough technology race?

It has to be a message-orientated, pure layer-2 multicast architecture with software routing.
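For anyone unfamiliar with the shape of this, here's a bare-bones multicast subscriber in C++ over POSIX sockets (group address and port are hypothetical). "Software routing" then amounts to deciding which groups each process joins.

```cpp
// Minimal multicast subscriber: join a group, receive one datagram.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>

int main() {
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    int on = 1;
    setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &on, sizeof on);

    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(30001);                 // hypothetical channel port
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    bind(fd, (sockaddr*)&addr, sizeof addr);

    ip_mreq mreq{};                               // join the market-data group
    mreq.imr_multiaddr.s_addr = inet_addr("239.1.1.1");
    mreq.imr_interface.s_addr = htonl(INADDR_ANY);
    setsockopt(fd, IPPROTO_IP, IP_ADD_MEMBERSHIP, &mreq, sizeof mreq);

    char buf[2048];
    ssize_t n = recv(fd, buf, sizeof buf, 0);     // one datagram per message
    std::printf("got %zd bytes\n", n);
    close(fd);
    return 0;
}
```

The point of pure layer 2 is that the switches do the fan-out in hardware; adding a new consumer is just another IGMP join, with no change to the publishers.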

• Managing microbursts: the art and engineering of data capacity management

Again, a high-performance messaging system combined with high-resolution timestamps and accurate traffic analysis is a must. Combine this with knowledge of the underlying network hardware - so you can utilise the multiple hardware queues on the switches, apply QoS judiciously and configure the messaging system across multiple channels - and you get good results.
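As a small illustration of the QoS piece, here's a Linux-specific sketch of marking a publisher socket so the switches can classify it into the right hardware queue. The DSCP value and priority band are hypothetical; they have to be agreed with whoever runs the network.

```cpp
// Mark a socket for QoS: DSCP in the IP header, plus the Linux socket priority.
#include <netinet/in.h>
#include <sys/socket.h>
#include <cstdio>

int mark_for_qos(int fd) {
    int tos = 0xb8;                // DSCP EF (46) << 2: "expedited forwarding"
    if (setsockopt(fd, IPPROTO_IP, IP_TOS, &tos, sizeof tos) < 0)
        return -1;
    int prio = 6;                  // Linux: selects a qdisc band / NIC tx queue
    return setsockopt(fd, SOL_SOCKET, SO_PRIORITY, &prio, sizeof prio);
}

int main() {
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    std::printf("qos mark: %s\n", mark_for_qos(fd) == 0 ? "ok" : "failed");
    return 0;
}
```

Marking alone does nothing, of course; the switch queues and policers have to be configured to honour it, which is where the traffic analysis comes in.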

• Taming the data torrent: conflating direct and aggregate market data feeds

Here's where an FPGA-enabled network card helps greatly: coalescing multiple keystations or A and B feeds, hashing messages to detect and drop duplicates, and translating ASCII to binary. Combine this with multicast for efficient data transport.
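The logic the card implements for A/B arbitration is simple enough to show in software. A minimal C++ sketch (gap recovery deliberately omitted): pass on whichever copy of each sequence number arrives first, drop the other.

```cpp
// A/B feed arbitration by sequence number: first copy wins, duplicate dropped.
#include <cstdint>
#include <cstdio>

struct Arbiter {
    uint64_t next_seq = 1;                 // next sequence number we expect
    // Returns true if the message should be passed on, false if duplicate/late.
    bool accept(uint64_t seq) {
        if (seq < next_seq) return false;  // already seen via the other feed
        next_seq = seq + 1;                // (gap handling/recovery omitted)
        return true;
    }
};

int main() {
    Arbiter arb;
    uint64_t feed[] = {1, 2, 2, 3, 3, 4};  // interleaved copies from A and B
    for (uint64_t s : feed)
        std::printf("seq %llu %s\n", (unsigned long long)s,
                    arb.accept(s) ? "accepted" : "dropped");
    return 0;
}
```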

• Is asynchronous processing inevitable? What are the implications?

Asynchronous processing has been used in HFT for many years and is necessary for effective parallelisation. One pattern I use is an "asynchronous n-slot put-take connector": a way of joining different processes that allows each process to utilise its full timeslice. The implication of not using it is latency...
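For readers who haven't met the pattern, here's a minimal C++ sketch of its textbook shape - a bounded n-slot buffer with blocking put and take - not the production implementation:

```cpp
// n-slot put-take connector: a bounded buffer joining a producer and a
// consumer so each side can run through its full timeslice without the other.
#include <condition_variable>
#include <cstdio>
#include <deque>
#include <mutex>
#include <thread>

template <typename T>
class Connector {
    std::deque<T> slots_;
    const size_t n_;
    std::mutex m_;
    std::condition_variable not_full_, not_empty_;
public:
    explicit Connector(size_t n) : n_(n) {}
    void put(T v) {
        std::unique_lock<std::mutex> lk(m_);
        not_full_.wait(lk, [&] { return slots_.size() < n_; });
        slots_.push_back(std::move(v));
        not_empty_.notify_one();
    }
    T take() {
        std::unique_lock<std::mutex> lk(m_);
        not_empty_.wait(lk, [&] { return !slots_.empty(); });
        T v = std::move(slots_.front());
        slots_.pop_front();
        not_full_.notify_one();
        return v;
    }
};

int main() {
    Connector<int> c(8);                       // n = 8 slots
    std::thread producer([&] { for (int i = 0; i < 100; ++i) c.put(i); });
    long sum = 0;
    for (int i = 0; i < 100; ++i) sum += c.take();
    producer.join();
    std::printf("sum=%ld\n", sum);
    return 0;
}
```

The n slots are what buy you the full timeslice: the producer can burst ahead of the consumer by up to n items before either side has to block.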

• What is next for complex event processing and tick databases? Will they be able to keep up?

CEP is a necessary evil; however, any strategy that uses it is almost impossible to debug.

With regards to tick databases, I fail to see why people store this stuff in databases at all - all our market data is captured on an electrically connected secondary machine using HDF5 in date-ordered directories. It's then shipped up to a disk array overnight. Backtesting generally uses only three months' worth of data.

• Surviving the technical glitch at high speed: designing robust architectures

The ability to run multiple strategies and services on a multicast message bus means recovery from failure is straightforward: a restarted or standby component simply rejoins the relevant multicast groups and carries on, with no point-to-point connections to rebuild.
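A sketch of the failover half, assuming a heartbeat convention on the bus (the 500 ms deadline is hypothetical, and socket setup is as in the earlier subscriber sketch): a warm standby listens on the same group as the primary and promotes itself when the heartbeats stop.

```cpp
// Warm standby on a multicast bus: block on heartbeats from the primary,
// return true when they stop and it's time to take over.
#include <cerrno>
#include <cstdio>
#include <sys/socket.h>
#include <sys/time.h>

bool wait_for_failover(int fd) {
    timeval tv{0, 500 * 1000};                       // 500 ms heartbeat deadline
    setsockopt(fd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof tv);
    char hb[64];
    for (;;) {
        ssize_t n = recv(fd, hb, sizeof hb, 0);
        if (n >= 0) continue;                        // primary still alive
        if (errno == EAGAIN || errno == EWOULDBLOCK)
            return true;                             // heartbeats stopped: promote
        std::perror("recv");
        return false;
    }
}
```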

Tuesday, February 08, 2011

High Frequency Trading World, Chicago

I've kindly been invited to talk at High Frequency Trading World, Chicago on June 27-29th 2011.

I suggested the following three talks. The real-time risk management one was chosen, as the host thought it would be of great interest to traders. I've implemented this for real, and gave a well-received talk on Infosec Data Analytics and Visualisation, so I thought I'd extend it to transactional logging. The basic idea came from work I did with Ian on triple entry accounting [sic] and "Notelets".

Real-time risk management and regulatory compliance
  • persisting transactions to the cloud
  • non-repudiation, risk management and distributed regulation using "triple entry" transactional logging
  • Market analytics using Hadoop
The next idea is more pioneering, looking at new approaches to exchange technology and innovative delivery mechanisms.

Next Generation Exchange Technology
  • The transition to non-computational infra
  • Neat tricks with FPGA, DSP and memristors
  • Making the real virtual - multicast in software
  • Affordable networks: VPLS, QoS and IGMP snooping
And finally the day job.

Trading Engine Technology
  • Lockless design: avoiding data races
  • Shared memory techniques, superpages and reflective memory
  • RSS, recvmsg() and kernel bypass for fast data acquisition
  • Real-time Linux, thread prioritisation techniques
  • Tuning systems for HPC
  • Cheap, high accuracy time using PTP