Saturday, July 07, 2007

PR Surgery Disasters

It seems that hardly a week goes by anymore that some firm or another is blindsided by blog reports of bad customer service or legal departments run amok. Often the matter is made worse because the company is completely unaware of the extent to which the incident has been reported. While unfortunate events cannot always be prevented, being aware of what's going on can help prevent a misunderstanding from becoming a nightmare.

But search engines are only of limited use. Many people who read a lot of blogs gave up on browsing web sites long ago and instead use feed aggregators to keep up with the volume of their news. However, even aggregators are overwhelming at times and sophisticated processing rules are becoming widespread. The results need to be accessible and easily understood.

In particular, the Data Mining blog points us to a new system called Reputica which provides an ongoing reputation metric for your firm, brand or product.
The key to our service is in using Reputica's unique software to predict how information will disseminate across various media platforms. In other words, if a negative blog comes out one day, Reputica can predict - based on complex analytical algorithms - where that story is likely to go next, and when. We can then advise our clients on the most effective pre-emptive steps to take.
Whether these claims are true or not, an increasing number of services and tools are available to keep track of what people are saying about you. This is definitely a space to watch.

Friday, July 06, 2007

Software Pairs

In my anonymous applications post, I talked about an idea for a web service marketplace delineated by several criteria, such as cost, reliability and performance. Consumers could choose a service based on their particular preference, or regulatory obligation for that matter, and return to the marketplace in real time should a chosen service provider fail to deliver the required service level, decrementing the provider's reputation in the process.
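To make the mechanics concrete, here is a toy sketch of that marketplace loop in shell - all provider names, file names and figures are invented for illustration: pick the provider with the best reputation, and decrement it when it misses its service level.

```shell
#!/bin/sh
# Toy marketplace sketch -- providers, costs and reputations are invented.
# Format of each record: provider|cost|reputation
cat > market.txt <<'EOF'
bigcorp|100|9
solarbox|1|7
EOF

# Pick the provider with the highest reputation.
best() {
    sort -t'|' -k3,3nr market.txt | head -1 | cut -d'|' -f1
}

# Decrement a provider's reputation after a failed service level.
penalise() {
    awk -F'|' -v OFS='|' -v p="$1" '$1 == p { $3 = $3 - 1 } { print }' \
        market.txt > market.tmp && mv market.tmp market.txt
}

best                # prints bigcorp (reputation 9)
penalise bigcorp
penalise bigcorp
penalise bigcorp
best                # prints solarbox (bigcorp is now down to 6)
```

A real marketplace would weight cost, reliability and regulatory constraints together rather than reputation alone, but the return-and-decrement loop is the essential mechanism.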

Service providers would range from the high-profile corporation to the reclaimed Linux box powered by a solar panel in Africa (anyone can download the JDK and learn to program). Investment banks may be forced by the regulators to source only from high-reputation providers in production. In dev, you can source from whoever you like.

So this service analogue, applied at a macro level, leads to the notion of software analogues, e.g. Oracle and MySQL, DataSynapse and [Hadoop|ICE]. Slightly more (IB) user friendly is the notion of Software Pairs, as in the stock/currency analogue; where a trading pair spans different sectors, our delineator is Commercial vs Open Source. It might be useful to code this up one day - a kind of del.icio.us for applications - whereby the dev community categorise which pattern is exemplified by a particular type of software.
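In the flat-file spirit, the pairing idea could start as small as this - a hedged sketch of the "del.icio.us for applications" notion, with the file name and entries invented here:

```shell
#!/bin/sh
# Toy "software pairs" registry -- file name and entries are illustrative.
# Format of each record: category|commercial|open-source
cat > pairs.txt <<'EOF'
database|Oracle|MySQL
grid|DataSynapse|Hadoop
EOF

# Look up the open source analogue of a commercial product.
analogue() {
    grep -i "|$1|" pairs.txt | cut -d'|' -f3
}

analogue Oracle        # prints MySQL
analogue DataSynapse   # prints Hadoop
```

A community version would let developers add and re-categorise rows, tag-style, rather than maintain a fixed taxonomy.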

Wednesday, July 04, 2007

ARMISTICE: Real-time Distributed Risk Management

In this paper, some experiences of using the concurrent functional language Erlang to implement a classical vertical application, a risk management information system, are presented. Due to the complex nature of the business logic and the interactions involved in the client/server architecture deployed, traditional development techniques are unsatisfactory. First, the nature of the problem suggests an iterative design approach. The use of abstractions (functional patterns) and compositionality (both functional and concurrent composition) have been key factors to reduce the amount of time spent adapting the system to changes in requirements. Despite our initial concerns, the gap between classical software engineering and the functional programming paradigm has been successfully bridged.

Anonymous Applications

This paper was written around 2000 and submitted as an abstract to the Financial Cryptography (FC) Conference in Anguilla. It was, unfortunately, not accepted as it was too far removed from the core focus of the conference, i.e. cryptography, and has sat gathering dust ever since. It was inspired by a variety of influences; however, it was substantially affected by the rantings of Robert Hettinga et al, who formed the FC community, to all of whom I owe a debt of gratitude for a sound education in applied crypto. I'm in the process of updating it, so I welcome comments and collaboration.

Abstract

I present a novel application development paradigm enabling applications to be built from a collection of web services sourced from a community-maintained marketplace of open source and commercial web services, consumed via anonymising proxies and remunerated with anonymous, bearer electronic cash.

It is proposed to combine the power of community maintained hierarchical web service directories with open source software development to create a marketplace for granular software services.

These services will be delineated by cost and reputation/quality of service, and paid for by a variety of mechanisms ranging from pre-arranged contracts to microcash payments. The community will determine and maintain the directory ontology organically.

The Death of the Desktop Application

In the late 1990s, there was a move towards hosted internet applications labelled the Application Service Provider (ASP) model. In 2007, this market is maturing fast: credible enterprise-class services (such as salesforce.com) have become popular, and the ability to fill market niches by configuration, as proposed in [7] in 1999, has become a business reality.

The ASP market has now fragmented and rebranded as Web 2.0, enabled by client-side technologies such as AJAX and Adobe Flash.

Many people struggled to understand the ASP concept, equating it to the days of mainframe bureau computing provided as a vertical, single-vendor solution. Then, the balance was firmly with desktop applications and their up-front licensing model with annual maintenance fees. Now, the hosted application model is poised to annihilate the traditional desktop and supplement the licensing model with pay-per-use-per-seat.

Open Source Business Model

The Open Source community has succeeded in developing high quality, robust and technically innovative software applications quickly. However, no one is quite sure how to turn this phenomenon to commercial advantage, and there are several theoretical business models, as proposed in [7] and [9].

As an attempt to fund Open Source development, there also exist several collaborative task markets for development, such as Cosource (www.cosource.com), as proposed in [5]. Incentivised expert markets are also emerging as in . These have failed to attract enough developers to make them economically viable. In 1999, Cosource had $60,000 of outstanding development and a total of 11 projects completed.

I propose a transactional model whereby developers can earn realistic incomes by anonymous subscription or micropayment for functionality which they sell to anonymous consumers via a community maintained directory based on the same operational principles used by the Open Directory Project (dmoz.org) and outlined in [10].

The Open Directory project presents an excellent example of the "Bazaar" effect as outlined by Raymond [11] whereby the power of the distributed community is harnessed to build a resource beyond the reach of commercial efforts.

After the initial failure of internet currency pioneers like DigiCash et al, fungible micropayment currencies like e-gold (www.e-gold.com) filled the void. We are again seeing a resurrection of micropayment mechanisms from a variety of companies, but few offer anonymity. Anonymity of identity is preferable as it means you only have to trust the ethics and technology of one organisation rather than many. Combined with persistence of pseudonym and the integrity of the chained blinding architecture detailed in [4], we have a solution which is credible to the internet community.

Meta service providers are currently setting up content peering and rebranding functionality to leverage their content delivery infrastructure, as outlined in [3]. It is these providers whom we see adapting to carry application service components.

Real-time bandwidth exchanges

ADSL's influence

Possible applications

Military Uses

Data Security

Information Assurance

Commercial Uses

The future

Anonymous Applications Overview

The ability to deliver applications composed of discrete components exists today. By combining the benefits of Open Source development, global community-maintained directories, persistent pseudonyms, anonymous payment mechanisms and dynamic bandwidth acquisition, we predict a new marketplace for applications.

The Developer

Developers sit at home and write applicationettes/components/applets - whatever. They host these on their own hardware, connected permanently to the internet via ADSL. They sell these services on the open marketplace for ecash and use adaptive pricing algorithms to ensure that the things earn money. They use persistent nyms to hide their identity (for a variety of reasons).

The Consumer

Consumers have a front-end application designer tool which allows them to pick services and design web applications using components/services which are found by reading a global directory (see below). They also have the choice to route this content over an anonymising infrastructure (like Zero-Knowledge). The tool would also allow the user to pay for these services via subscription or per use, and to control quality of service requirements, persistence and alternatives.

The directory

At the heart of the system is a global directory (much like dmoz.org) where vendors advertise their wares (in human-readable format). This is important because it is a mechanism whereby consumers can request new functionality/classes of application to be written. The directory is hierarchical and organic: cooperatively managed by the developers, consumers and intermediaries.

The intermediary

I see a strong market for intermediaries here to provide:

- dynamic bandwidth management - using exchanges like RateXchange to purchase bandwidth dynamically
- rebranding services
- consumer application hosting
- fan-in/concentration services for service suppliers - not everyone is going to have/desire services running on their home hardware.
- billing/credit control
- quality of service

Potential Customers

This has obvious use for the military who are increasingly being driven to use commercial static and wireless networks.

Persistent Pseudonyms

Anonymous identities have been widely used on the internet for both good and bad purposes. The biggest drawback is that it is not easy to communicate bidirectionally. Initial efforts were pioneered by the Mixmaster remailers [1], but there are vulnerabilities [2] which could be used to trace messages to their originator. In answer to this problem, Zero-Knowledge Systems offered a robust anonymising service based on the work of Dr Stefan Brands; however, due to the issues post-9/11, the demand for this service disappeared overnight, leaving the door wide open for identity theft.

Open Source Component Providers

Cooperative Web-based Services Using XML

Conclusions

The main use of XML is as a vendor-, platform- and application-independent data format that will be used to connect autonomous, heterogeneous applications. Building on this, XML will enable a technology shift in computing whereby personalised business solutions will be constructed dynamically from distributed, cooperative applications (services) hosted by different classes of organisation. XML will succeed in this area, where other technologies have failed, due to its simplicity, the fact that it was designed from the outset for use on the Web and, most importantly, because it has across-the-board industry backing.

References

1. Bacard, A. Anonymous Remailer FAQ, Feb 2000. http://www.andrebacard.com/remail.html

2. Cottrel, L. Mixmaster & Remailer Attacks, Feb 2000. http://www.obscura.com/~loki/remailer-essay.html

3. Dyson, E. Zero Knowledge: It's freedom, baby!, Release 1.0, 12-99, 15th December 1999.

4. Goldberg, I., Shostack, A. Freedom Network 1.0 Architecture, Zero-Knowledge Systems, Inc., 29th November 1999.

5. Grigg, I., Petro, C. Using Electronic Markets to Achieve Efficient Task Distribution, 28th February 1997. http://www.systemics.com/docs/papers/task_market.html

6. Burnett, G., Papiani, M. The Role of XML in Enabling Business Solutions Built from Collaborative Web-Based Services, 16th December 1999.

7. The Power of Openness, Berkman Center for Internet and Society, various, 1999. http://opencode.org/h20/

9. Raymond, E. The Magic Cauldron, 24th June 1999. http://www.tuxedo.org/~esr/writings/magic-cauldron/

10. Pink, D. Linux meets Yahoo!, Fast Company, 2000. http://www.fastcompany.com/career/pink/0100b.html

11. Raymond, E. The Cathedral and the Bazaar, 26th October 1999. http://www.tuxedo.org/~esr/writings/cathedral-bazaar/

Code: Productionisation of FAME

One of the first questions I ask when interviewing developers is "How many systems have you put into production?". There's a big difference between writing a system and productionising a system; perhaps that's why I'm so fascinated by logging (probably the dullest subject on the planet) as a mechanism for forensically diagnosing production problems and, potentially, a mechanism for measuring application service level, latency and security events - but that's for another post or two.

Here's some real production code from a long long time ago written for a bank that no longer exists:

http://enhyper.com/svn/src/fame/fmUtilLib

It's the maintenance script for a production installation of FAME and it has some interesting hooks and shell programming techniques. There's an example of the use of a Zeller function, the algorithm I pinched from Joe Celko's SQL for Smarties book and turned into a bc(1) function - which I think is pretty neat. There's also a hook into a command line call to a program which generated an SNMP trap to alert support in case of error.

IMHO all production support scripts should be written in straightforward Bourne shell, just like the operating system scripts - there's a good reason scripts are written in the lowest common denominator. In the old days, disk space was limited and fancy shells were not standard installations. The same went for editors - emacs had an amusing nickname back then - "Eight Megabytes And Constantly Swapping" - back in the days when 32MB of memory was a big deal.