In today’s world we are often overwhelmed by the sheer volume of information around us. Without realising it, we are constantly filtering information to keep what is valuable and useful to us while ditching the less relevant content.
Similarly, performance measurement teams and their end clients are dealing with vast amounts of data and information, and there is often a temptation to churn out more and more detail without considering who the end recipient is.
Understanding what is relevant and of value to the end clients is key in today’s landscape where client demands and expectations are always increasing and ever changing.
Performance measurement teams need to be armed with the right tools to transform data into information and to be able to filter it based on clients’ needs. I prefer the concept of information rather than data because I firmly believe that performance measurement teams are providing information to their clients, both internally and externally. Data such as holdings, transactions, benchmarks and market reference data are the building blocks of information – the end return. However, if the data is flawed, the information we derive from it is inaccurate and, as a result, cannot be developed into reliable knowledge. Good data quality is something that still challenges many performance teams, who too often act as data specialists rather than analysts.
Performance measurement teams present this information via various mediums to their end clients who evaluate its accuracy through different lenses.
While the end investor evaluates total return relative to the objectives and the investment strategy mutually agreed upon within the investment guidelines, the portfolio managers perform more in-depth scrutiny of the performance output. Relying on market insight and experience, they have expectations of what performance should be, and they evaluate specific stock contribution, stock selection and asset allocation numbers.
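To make the stock selection and asset allocation numbers concrete, here is a minimal sketch of a Brinson-style decomposition of active return, the kind of breakdown a portfolio manager scrutinises. All sector names, weights and returns below are hypothetical, and the interaction effect is omitted for brevity.

```python
# Hypothetical two-sector portfolio vs benchmark.
# sector: (portfolio weight, benchmark weight, portfolio return, benchmark return)
sectors = {
    "Equities": (0.60, 0.50, 0.08, 0.06),
    "Bonds":    (0.40, 0.50, 0.02, 0.03),
}

# Total benchmark return: benchmark-weighted sum of benchmark sector returns.
benchmark_return = sum(wb * rb for (wp, wb, rp, rb) in sectors.values())

effects = {}
for name, (wp, wb, rp, rb) in sectors.items():
    effects[name] = {
        # Allocation: reward for over/underweighting a sector that beat/lagged the benchmark.
        "allocation": (wp - wb) * (rb - benchmark_return),
        # Selection: reward for picking stocks that beat the sector benchmark.
        "selection": wb * (rp - rb),
    }

for name, e in effects.items():
    print(f"{name}: allocation {e['allocation']:+.4f}, selection {e['selection']:+.4f}")
```

An analyst who can reproduce and sanity-check numbers like these is reviewing performance through the same lens as the manager, rather than simply passing figures along.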
Performance measurement teams, however, are often much more detached from the overall investment process when analysing the end performance. This is perhaps a result of analysts spending more time doing manual data quality checks than true analysis.
They are all evaluating the performance return, attribution and final output through different lenses. I wonder, though: for better accuracy and more relevant, consistent information, should performance analysts aim to evaluate the performance output through the same lens as the portfolio manager and the end client?
Ideally, performance analysts need to be more actively involved in understanding the investment process and guidelines detailing the objectives, the investment structure, the benchmarks, the risk profile, the restrictions and more. They need to strive to better understand the market, and have more in-depth knowledge about how specific indexes, benchmarks or stocks have performed recently. They need to be technical experts in their field, which is something they can’t achieve without first shifting their focus from data scrubbing to analysis.
But how can performance measurement teams succeed if data quality is still a challenge, resources are still limited and internal processes are inefficient? They must be equipped with the right tools coupled with the right expertise and the right team structure, including separation of data specialists from performance analysts and reporting specialists.
Firms need to start evaluating their performance system landscape in order to reduce the number of performance measurement solutions they have in place and define an operating model relevant to their individual firm’s operational process. Too often we see performance returns provided from one system and attribution from another, with inconsistent methods used across the two – transaction-based versus buy-and-hold – and a simple disclaimer explaining the difference. This is no longer acceptable to clients who require transparency.
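The gap between the two methods is easy to demonstrate. Below is a hypothetical single-stock example: a buy-and-hold calculation applies the period’s price change to beginning-of-period holdings only, while a transaction-based calculation (sketched here with a Modified Dietz formula) captures the intra-period trade. All prices and share counts are made up for illustration.

```python
# Beginning-of-period position, an intra-period purchase, and the end price.
begin_shares, begin_price = 100, 10.0
trade_shares, trade_price = 50, 11.0   # bought halfway through the period
end_price = 12.0

# Buy-and-hold: price change on beginning holdings only; the trade is ignored.
buy_hold_return = (end_price - begin_price) / begin_price

# Transaction-based via Modified Dietz: the cash flow is included,
# weighted by the fraction of the period it was invested.
begin_value = begin_shares * begin_price
cash_flow = trade_shares * trade_price
end_value = (begin_shares + trade_shares) * end_price
weight = 0.5  # trade occurred mid-period
dietz_return = (end_value - begin_value - cash_flow) / (begin_value + weight * cash_flow)

print(f"buy-and-hold: {buy_hold_return:.4%}, transaction-based: {dietz_return:.4%}")
```

The two figures differ for the same portfolio over the same period, which is exactly the inconsistency a disclaimer cannot explain away.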
How can vendors help solve this problem? First and foremost, the solution needs to fit the internal operating model and be able to adapt to clients’ changing requirements. Not every performance team has the same operational challenge when it comes to the accuracy of the end return, so the solution needs to be flexible to allow performance analysts to build checks that are relevant to their specific needs.
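The kind of flexibility described above can be sketched as a small rule registry: analysts add the checks relevant to their own data rather than relying on a fixed, generic set. The rule names, fields and thresholds here are hypothetical.

```python
# Each check takes a record and returns True if the record passes.
def return_within_bounds(record, limit=0.5):
    """Flag daily returns outside +/-50% as suspect."""
    return abs(record["daily_return"]) <= limit

def weights_sum_to_one(record, tolerance=1e-4):
    """Portfolio weights should sum to roughly 100%."""
    return abs(sum(record["weights"]) - 1.0) <= tolerance

# Analysts register whichever checks fit their operating model.
checks = [return_within_bounds, weights_sum_to_one]

def run_checks(record):
    """Return the names of the checks the record fails."""
    return [check.__name__ for check in checks if not check(record)]

record = {"daily_return": 0.02, "weights": [0.60, 0.40]}
failures = run_checks(record)  # an empty list means the record passed every check
```

The point of the design is that adding a new check is a one-line registration, not a development project, which is what lets the toolset keep pace with changing requirements.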
While there is a lot of focus on workflow and process automation, generic workflows that are not adaptable to the internal process will not improve operational efficiency. In this case, flexibility and the ability to adapt to ever-changing needs without vast amounts of technical expertise are of paramount importance. Similarly, each new reporting requirement presents a new challenge for performance teams, as many solutions have only preconfigured reporting templates.
Besides the typical month-end performance reporting, performance analysts would like to provide value-add reporting capabilities to their internal clients which, at the moment, is usually achieved through Excel and macro-driven reports or long and costly internal development projects.
Performance measurement solutions need to be more than just tools that generate accurate returns and attribution results. They need to be flexible enough to fit the operating model and internal process. This is true from a reporting standpoint as well, since it will enable firms to better answer client needs and address new market conditions. Having the right operating model, the right team structure or the right system in isolation is not enough. Team, system and operating model all need to work together.