Technology to tame performance data management

Date: April 12, 2018

As asset service providers continue their success in middle office outsourcing, how can they leverage new technology to manage growing volumes of data and deliver quality to clients?

Endless growth?

While the bull market can’t go on forever, and even the best sports teams lose now and again, one thing that seems to be experiencing constant growth is data. Financial market data is no exception and with more asset classes, more products, and more regulation, this is one trend that looks set to continue for the foreseeable future.

Many asset managers struggle to manage their data operations efficiently, hence the growth in middle office outsourcing to asset service providers. These services include daily transaction-based performance measurement along with risk analytics and risk reporting services. These activities are data intensive and require the highest data quality processes and standards to produce accurate outputs. Asset managers outsource not only to save costs, but to improve service levels.

How can asset service providers leverage technology to manage huge volumes of data at scale, while providing the highest quality results? Let’s quickly explore three areas that influence data quality.

Control, automation and workflow

Applying configurable controls to data is essential. Getting on top of data management requires a clear, holistic view and the ability to apply controls so that issues and errors can be managed by exception. There need to be enough controls to cover the error types likely to be encountered in the source data, and they need to be configurable so they fit individual operating environments. Middle office performance and risk data comes from various sources, so applying controls against that source data is crucial to avoid the classic case of garbage in, garbage out.
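
To make the idea concrete, here is a minimal sketch of configurable source-data controls managed by exception. The rule names, tolerances and record fields are illustrative assumptions, not a description of any particular platform.

    # Minimal sketch: apply configurable controls to source records so that
    # only the exceptions surface for review. All names and rules are
    # illustrative assumptions.
    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class Control:
        name: str
        check: Callable[[dict], bool]   # returns True when the record passes

    def run_controls(records: List[dict], controls: List[Control]) -> list:
        """Apply every configured control to every record; only failures are returned."""
        exceptions = []
        for record in records:
            for control in controls:
                if not control.check(record):
                    exceptions.append((control.name, record))
        return exceptions

    # Example configuration, tuned to the error types expected in this feed.
    controls = [
        Control("price_present", lambda r: r.get("price") is not None),
        Control("non_negative_quantity", lambda r: r.get("quantity", 0) >= 0),
        Control("valid_currency", lambda r: r.get("currency") in {"USD", "EUR", "GBP", "JPY"}),
    ]

    source_records = [
        {"security": "ABC", "price": 101.2, "quantity": 500, "currency": "USD"},
        {"security": "XYZ", "price": None, "quantity": 200, "currency": "USD"},  # fails price_present
    ]

    for name, record in run_controls(source_records, controls):
        print(f"Exception: {name} failed for {record['security']}")

Because only the failing records are surfaced, an operations team reviews a handful of exceptions rather than every record in the feed.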

Advances in this area also allow controls to be applied to post-calculation data. This helps capture results that are mathematically correct but still wrong. A typical example is a one-day return on an individual security that is obviously implausible because of a cash flow timing issue. Being able to set rules on the performance system to correct such errors automatically saves valuable time after the calculation run. It also helps prevent issues slipping through to reporting and triggering re-calculation work that not only damages reputation but also wastes valuable resource time.
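
As an illustration only, a post-calculation control of this kind could be as simple as a tolerance check on one-day security returns; the threshold and figures below are assumptions, not a prescription.

    # Hedged sketch of a post-calculation control: flag one-day security
    # returns outside a configurable tolerance so a cash flow timing issue can
    # be corrected before results flow out to reporting.
    DAILY_RETURN_TOLERANCE = 0.25  # illustrative: a 25% one-day move triggers an exception

    def flag_suspect_returns(security_returns: dict, tolerance: float = DAILY_RETURN_TOLERANCE) -> dict:
        """Return the subset of one-day security returns that breach the tolerance."""
        return {sec: ret for sec, ret in security_returns.items() if abs(ret) > tolerance}

    one_day_returns = {
        "ABC": 0.012,
        "XYZ": 0.874,   # mathematically correct, but driven by a mistimed cash flow
    }

    for security, ret in flag_suspect_returns(one_day_returns).items():
        print(f"Review {security}: one-day return of {ret:.1%} exceeds tolerance")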

Maintaining visibility across large volumes of performance data throughout the process requires a visual workflow. Being able to see the status of portfolios as they enter the system and are checked, calculated, verified and exported for analysis is critical to managing the process efficiently and at scale. Visibility into the performance process is important, but it is the ability to organize data into the right groups and priorities alongside this that makes the daily performance measurement process more efficient. Organizing thousands of portfolios into groups that can be visually managed, with data controls and policies applied to each, is what allows very large data sets to be managed by very few people.
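
A rough sketch of that grouping idea, with hypothetical group names, statuses and policies, might look like the following; a real workflow tool would present this visually rather than as a printed summary.

    # Illustrative sketch: group portfolios so that controls, priorities and
    # workflow status can be managed per group rather than per portfolio.
    from collections import Counter
    from enum import Enum

    class Status(Enum):
        LOADED = "loaded"
        CHECKED = "checked"
        CALCULATED = "calculated"
        VERIFIED = "verified"
        EXPORTED = "exported"

    # Hypothetical groups, each with its own priority and control policy.
    portfolio_groups = {
        "institutional_daily": {"priority": 1, "controls": ["price_present", "non_negative_quantity"]},
        "retail_monthly":      {"priority": 2, "controls": ["price_present"]},
    }

    workflow = {
        "PORT-0001": {"group": "institutional_daily", "status": Status.CALCULATED},
        "PORT-0002": {"group": "institutional_daily", "status": Status.LOADED},
        "PORT-0003": {"group": "retail_monthly",      "status": Status.VERIFIED},
    }

    # A simple dashboard view: counts by group and status, so a small team can
    # see at a glance where thousands of portfolios sit in the daily process.
    summary = Counter((v["group"], v["status"].value) for v in workflow.values())
    for (group, status), count in sorted(summary.items()):
        print(f"{group:<22} {status:<11} {count}")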

Scalable computing power

Taming large amounts of performance data on a daily basis is going to require some computing grunt at some stage. Calculations must be performed, and in most cases this is done in ever-shrinking overnight processing windows. Clients expect all performance calculations to be completed within strict SLAs, and meeting these service levels is critical for asset service providers if they are to retain clients and hard-won reputations.

Legacy software architecture cannot meet future demand and in many cases is struggling to meet current requirements. Throwing additional hardware at the problem doesn’t always help either, because legacy architectures reach a plateau where they simply won’t calculate the numbers any faster, no matter what underlying hardware is available. They are not designed to scale. Add to this a lack of management features such as workflow, data controls and visualization, and you have a recipe for frequent failures and missed SLAs.

So, what is required? Flexible computing power, which delivers many benefits to anyone calculating large volumes of performance data on a deadline. Being able to scale up rapidly when required, without needing expensive on-premises hardware to support such a service, is what cloud-based Infrastructure as a Service does best. This is often described as utility computing or elastic cloud computing, and it is exactly that: elastic in its ability to grow and then shrink again. Not only does this provide enormous amounts of computing power to applications designed to make use of it, it is also cost-efficient, as you only pay for what you use.
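
As a rough, local stand-in for that “grow then shrink” behaviour, the sketch below sizes a worker pool to the day’s volume, fans out the per-portfolio calculations and then releases the capacity. In a real cloud-native service the capacity would be provisioned and torn down through the provider’s own APIs; the sizing rule and placeholder calculation here are purely assumptions.

    # Minimal local analogy for elastic scale-out: provision workers in
    # proportion to the batch, run the calculations in parallel, then release
    # the capacity when the batch completes.
    from concurrent.futures import ProcessPoolExecutor
    import os

    def calculate_portfolio_return(portfolio_id: str):
        # Placeholder for the real transaction-based return calculation.
        return portfolio_id, 0.0123

    def run_batch(portfolio_ids: list) -> dict:
        # Illustrative sizing rule: one worker per ~250 portfolios, capped by cores.
        workers = min(max(1, len(portfolio_ids) // 250), os.cpu_count() or 1)
        with ProcessPoolExecutor(max_workers=workers) as pool:
            return dict(pool.map(calculate_portfolio_return, portfolio_ids))

    if __name__ == "__main__":
        results = run_batch([f"PORT-{i:04d}" for i in range(1000)])
        print(f"Calculated {len(results)} portfolios")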

Cloud-native Software as a Service applications use elastic cloud computing to enable on-demand scalability. This allows very large volumes of performance data to be calculated in smaller and smaller processing windows, and even supports intra-day calculations if required. Overnight processing windows will soon be a thing of the past as the industry moves to an intra-day, on-demand process.

Takeaways

  • The amount of financial market data continues to grow
  • The middle office outsourcing market is expanding
  • Data management is time consuming and expensive without the right tools
  • Control, automation and workflow are needed to manage data efficiently
  • Elastic cloud computing helps you meet calculation deadlines