A Short History of Performance Systems

Date: June 14, 2018

It has been over 50 years since Peter Dietz published his seminal work, ‘Pension Funds: Measuring Investment Performance’ (1966), the point at which performance measurement became both standardized and quantifiable, and a new industry was born.
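Dietz's approach approximates a money-weighted return when only periodic valuations and cash-flow dates are available. A minimal sketch of the modified Dietz calculation, the variant in common use today (the function name and interface here are illustrative, not taken from Dietz's paper):

```python
def modified_dietz_return(bmv, emv, flows):
    """Modified Dietz return for a single period.

    bmv, emv: beginning and ending market values.
    flows: list of (weight, amount) pairs, where weight is the
    fraction of the period remaining after the cash flow occurs
    (e.g. 0.5 for a flow arriving halfway through the period).
    """
    net_flow = sum(amount for _, amount in flows)
    # Weight each flow by how long it was invested during the period.
    weighted_flow = sum(w * amount for w, amount in flows)
    return (emv - bmv - net_flow) / (bmv + weighted_flow)

# A $100 portfolio ends the month at $112 after a $10 contribution
# arriving halfway through the period.
r = modified_dietz_return(100.0, 112.0, [(0.5, 10.0)])
print(round(r, 4))  # → 0.019
```

The gain of $2 is divided by the average invested capital of $105, rather than assuming the contribution was invested for the whole period.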

Performance system evolution: New generations

Over the following years, performance measurement evolved from simple calculations of portfolio returns on a quarterly basis, to segment-level calculation on a monthly basis, to daily calculation of security-level (or even strategy-level) returns.

A similar evolution has taken place in the standards applied to performance: from an initial insistence on external performance measurement, to the development of the AIMR-PPS standards, then the Global Investment Performance Standards (GIPS®) in 2000, the later ‘Gold’ GIPS® edition (2010), and the forthcoming GIPS® 20/20.

The scope of the performance team's role has also broadened: from measurement to attribution analysis (first for equity and balanced portfolios, later extending to fixed income, multi-currency and multi-strategy/multi-asset analysis), and from providing ex-post risk statistics to ex-ante risk monitoring and, in some cases, risk management.
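Attribution of this kind typically decomposes active return into allocation and selection effects per segment. A simplified Brinson-Fachler-style sketch, illustrative only (real systems differ in how they treat interaction effects, currency and multi-period linking):

```python
def brinson_attribution(segments):
    """Per-segment allocation and selection effects (Brinson-Fachler style).

    segments: list of dicts with portfolio weight/return ("wp", "rp")
    and benchmark weight/return ("wb", "rb"); weights sum to 1 on each side.
    Selection here includes the interaction term (selection at portfolio weight).
    """
    rb_total = sum(s["wb"] * s["rb"] for s in segments)  # total benchmark return
    effects = []
    for s in segments:
        # Reward overweighting segments that beat the overall benchmark.
        allocation = (s["wp"] - s["wb"]) * (s["rb"] - rb_total)
        # Reward picking securities that beat the segment benchmark.
        selection = s["wp"] * (s["rp"] - s["rb"])
        effects.append({"allocation": allocation, "selection": selection})
    return effects

segments = [
    {"wp": 0.6, "rp": 0.05, "wb": 0.5, "rb": 0.04},
    {"wp": 0.4, "rp": 0.01, "wb": 0.5, "rb": 0.02},
]
effects = brinson_attribution(segments)
# The effects sum to the active return (portfolio return minus benchmark return).
print(sum(e["allocation"] + e["selection"] for e in effects))
```

With weights summing to one on both sides, the segment effects add up exactly to the active return, which is what makes a decomposition of this form useful for explaining performance to portfolio managers.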

In parallel with this growth, numerous systems have been developed over the years, taking advantage of advances in IT to provide the functionality and scalability needed to support the expanding requirements for performance measurement and analysis. So significant are the differences in approach over the years that we can view these as successive ‘generations’ of performance systems.

Looking back

Back in the 70s, 80s, and even mid-way into the 90s, as noted earlier, there was an emphasis on external measurement, enabling firms such as WM and CAPS in the UK and Morningstar and the Frank Russell Company in the US to dominate the market. Asset managers were encouraged to submit their data to these companies, both for measurement of returns and for inclusion in their peer group universes. These were centralized systems based mainly on mainframe technology. I refer to these as the first generation of performance systems (before then, calculating performance returns using pen and paper and then calculators and spreadsheets was certainly possible, but I’m not really including this in my definition of ‘systems’!).

Vendor transformation

Then in the early 90s there was a move within some asset management firms to measure performance in-house, building the systems themselves to support it. I view this as the second generation of performance systems. This was the time when I personally first became involved in performance: the asset management firm I worked for at the time decided to build systems to take on the performance measurement, the calculation of benchmarks, and the attribution and reporting of performance results. Later, we developed systems to construct and report on composite performance, to align ourselves with the newly emerging international standards driven by AIMR (later to evolve into GIPS®). What were the drivers for these developments? Well, around this time firms were beginning to recognize the value of performance analysis and performance data and were expecting more flexibility and more accuracy – for example, the ability to measure performance at security level on a daily basis and to tailor the analysis more closely to their investment processes. External measurement at the time didn't provide this degree of flexibility. The increasing adoption of the AIMR-PPS (later GIPS®) standards, and the relaxation of the requirement for external measurement, was another factor.

From a technology perspective, PC-based systems and then client-server technology were becoming widely adopted, and the response of the external measurers was to develop their own systems using this technology which could then be sold as ‘package solutions’ to asset managers. These ‘software packages’ could typically calculate performance and basic attribution at segment-level, usually using data supplied on a monthly basis. The external measurers were transforming themselves into software vendors.

The noughties

During the late 1990s and into the ‘noughties’, the third generation of performance systems arrived. By the time I first joined an independent vendor in 1996, technology was sufficiently advanced (based on n-tier architecture and SQL databases) to allow for the calculation of security-level performance on a daily basis across firms. This allowed for the development of comprehensive, often modular software platforms that could be either installed in-house at the asset manager, or hosted by the vendor or a third party. As an aside, it was initially by no means certain that internal measurement (and therefore performance software) would be widely adopted in the industry. I remember interviewing many Heads of Performance in London, prior to developing our software product, and many (including one Mr. Carl Bacon!) expressed some skepticism about a move away from external performance measurement, at least in the UK market. However, the launch of GIPS® in 2000 provided considerable impetus, and during the next ten years there were several new entrants into the performance software space. For most asset managers, implementing a performance software product seemed more logical than trying to develop and maintain a system in-house, and this became a competitive though potentially lucrative market for some vendors.

However, the proliferation and adoption of third-generation specialist performance measurement and analysis modules has now led, in some firms, to duplicated data and functions, missed deadlines, and large and costly implementations and upgrades; for growing firms, it typically also means updating IT infrastructure to cope with increasing scalability issues. The costs involved can be considerable. For the vendors, the management of aging systems (many of which are by now up to 20 years old), multiple software versions, long release cycles and unavoidable technical refactoring is also a significant challenge.


The cloud era

With the adoption of pure cloud technology, we have now entered the fourth-generation era of performance technology. This has taken some time to come to fruition because, to realize the benefits of the cloud, it is not enough simply to move existing systems onto a cloud platform; the systems must be rewritten from scratch utilising cloud-native technology. This represents a radical departure and offers a solution to many of the potential issues described above, as well as providing many other benefits to both clients and vendors. These include the ability to auto-scale processing to cater automatically for increased volume requirements (elastic cloud computing). Automated data management, workflow and customizable, highly visual, self-service digital distribution of performance is now possible for even the largest global firms. For a vendor, the benefits of developing and maintaining only one product version are both obvious and considerable – whilst the client benefits from automatic, regular and frequent upgrades. A single source of analytics ‘truth’ across the firm has been a nirvana for performance teams for many years, but is now finally attainable with cloud technology.