Investment Management: “It’s the technology, stupid”

Date: November 24, 2014

New assets come with new challenges

Assets under management are at an all-time high. In 2013 they stood at $68.7 trillion, which represents a 13% increase over the pre-crash peak in 2007.

The future also appears to be rosy for the asset management industry, with global AUM predicted to grow to over $100 trillion by 2020. But how do active managers stay competitive and relevant in a world where investors will have more choice than ever before? We’re already seeing the traditional institutional investor looking outside the normal channels of vanilla asset management by ploughing funds into alternative assets, with hedge funds the vehicle of choice. A recent survey found that pensions are the fastest-growing investor segment and the largest contributor to the growth of the hedge fund industry. Personal wealth is also on the increase: high net worth client assets are predicted to rise from $52.4 trillion in 2012 to $76.9 trillion in 2020, and as the global economy continues to recover and emerging markets grow, the mass affluent client market is expected to grow from $59.5 trillion to over $100 trillion by 2020.

Where will these assets end up? What kinds of investments and asset types will be dominant, and how can asset managers compete to ensure that they, rather than someone else, are managing this money? Traditional active management is under threat from new asset classes and new lower-cost investment vehicles. The huge growth in ETFs allows any investor to gain access to markets and market segments without the need for active management, and it is easier than ever to diversify your investments using passive means. The growth in passive investing is not restricted to ‘Joe Public’ either. A 2013 survey by Ignites of 1,001 investment professionals, many of whom make their living promoting active management products, found that two thirds had invested a sizeable amount of their own money in passive products, with only one in five saying they avoided passive products altogether. Easy access to these new channels, which in some cases cost as little as four basis points to hold, presents a real challenge to margins and profitability in the active management market.

It’s obvious that there is real pressure on margins and profitability within the capital markets industry in general. Regulations keep on coming, and many directly affect the asset management industry and drive up costs. A recent KPMG paper states that the asset management industry is investing heavily in compliance, spending on average more than seven percent of total operating costs on compliance technology, headcount or strategy. Based on extrapolations from its data, KPMG believes that compliance is costing the industry more than $3 billion. This new wave of regulation affects all aspects of the asset management business, from the advice firms provide and the way they trade securities, to the way they report on performance and market themselves to investors. Keeping up with new regulations and ensuring effective implementation is a real challenge, and the impact of non-compliance has never been more serious.

Technology infrastructure – need for change

Is the existing technology infrastructure and application landscape within the average asset manager up to the task of ensuring cost-effective management of increasing data volumes, reporting and regulatory pressure? I think not. With two thirds of asset management firms relying on technology from 2006 or before, what chance do they have of competing for new capital while managing the cost and margin pressures we’ve already discussed? Many systems exist as standalone solutions: they perform certain tasks and produce data, but they don’t consider a more complete workflow and were never designed to integrate multiple systems on a single data set. This traditional infrastructure creates silos of data that add little value, because the data is immobile, difficult to share and cannot be acted upon by the right business users in a timely fashion.

Along with budget pressure, a growing desire to outsource non-core activities and this dislocation of data, the priorities regarding software development and infrastructure deployment are undergoing a major transformation within our industry. The IT strategy needed to meet today’s challenges must be based on an open architecture framework; elastic, scalable and on-demand hardware; collaborative workflows; and new service delivery models such as Software as a Service (SaaS) and cloud. These new service delivery models mean new external partnerships, which switches the priority of internal IT teams from managing hardware and software development projects to managing services, external partners and service levels. CEB TowerGroup estimates that the majority of applications could be delivered via alternative methods as early as 2016. The short-term implication is that investment in IT capabilities increasingly treats traditional hosted and on-premise solutions as the last resort.

There is no doubt that existing hardware and software platforms are going to be replaced. The speed of that replacement is variable, but the trend towards outsourcing is continuing and makes sense, as the new delivery models fit with the growing desire to streamline IT infrastructure and operations. This desire is being fuelled by the focus on core business activities rather than support operations such as IT. The new ability to consume technology as a service means that internal IT ‘power stations’ will no longer be required at the scale they are today. Think of this migration to managed services and outsourced infrastructure as the move to the power grid: drawing electricity from the grid instead of owning and operating a power station attached to your factory.

These traditional local IT ‘power stations’ are not up to the task when it comes to the growth in data volumes, the sheer amount of analytical calculation required by today’s multi-asset-class portfolios, and the compliance reports needed to satisfy regulators on a daily basis. Legacy applications and the architecture behind them were simply not designed to handle the scale and complexity found across the asset management world today. Even if a system went live in 2010, it was probably architected in 2006 and developed in 2007/8 before going through an 18-month installation project. It was designed before the very first iPhone, so what chance does it have of coping with today’s requirements? These on-premise applications were designed to run on single servers or, at best, a small cluster of servers. The issue is that eventually the system will plateau: it simply won’t handle any more data and it cannot produce the output any faster. Adding more data simply extends the processing window, and adding more processing power or memory doesn’t make any difference once you reach this point. The problem lies in the underlying architecture and design of the software. How many times do you experience delays waiting for a system to recalculate results, or, when there is a data issue, waiting for an overnight process to catch up after the correction has been made?
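To see why throwing hardware at a legacy system stops working, consider a back-of-the-envelope sketch of Amdahl’s law (the numbers below are purely illustrative, not taken from any particular system): if only part of an overnight batch was written to run in parallel, extra processors quickly stop helping.

```python
# Illustrative only: why a legacy batch system plateaus however much hardware you add.
# Amdahl's law: speedup = 1 / ((1 - p) + p / n), where p is the fraction of the
# nightly run that can actually execute in parallel and n is the processor count.

def speedup(parallel_fraction, processors):
    """Maximum speedup of a job where only `parallel_fraction` is parallelisable."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / processors)

# Suppose only 25% of a legacy overnight process can run in parallel
# (the rest is serial loads, single-threaded calculations, and so on).
for n in (2, 8, 32, 1024):
    print(f"{n:>5} processors -> {speedup(0.25, n):.2f}x faster")

# Prints roughly 1.14x, 1.28x, 1.32x, 1.33x: the run never gets more than about
# a third faster, no matter how many cores or how much memory the server has.
# Only re-architecting the software (pushing p towards 1) removes the plateau.
```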

Many IT departments have invested heavily in ‘virtualisation’ – the ability to create multiple virtual computers all living on one large pool of hardware. This enables flexibility in managing servers and helps with portability and disaster recovery. It also reduces costs, because full utilisation of the hardware lets you get more from less. The problem is that this infrastructure enhancement doesn’t address the software application itself. If the software isn’t designed to scale, it will still plateau on a virtual server just as it does on a physical one.

The solution is scalable, multi-tenant software. Software needs to be able to scale out over many servers, even hundreds. Think about Google: does the whole thing run on one or two monster servers somewhere, or does it scale out across tens of thousands of smaller machines that each perform smaller tasks? Scaling out gives an application the ability to grow with business requirements, and the cloud is where this scalability works best. Infrastructure as a Service (IaaS) providers like Amazon are able to power up thousands of virtual servers in a matter of seconds to meet the demands of an application during heavy loads or time-critical calculations. They are simply powered off when not needed, and the application owner is charged by the hour for the servers used. To put some real dollar values on this, Amazon charges $1.68 an hour for a server with 32 processors, 60 gigabytes of memory and 320 gigabytes of super-fast storage. These economies of scale are impossible to match with local on-premise IT infrastructure. Multi-tenancy is also a key architectural element in the next generation of applications, especially from external providers (more on multi-tenancy below). Cloud-based SaaS applications are constantly being upgraded behind the scenes, which shields business users from the pain of software upgrades.
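As a rough illustration of the scale-out pattern (and of the hourly pricing above: a burst of 200 of those $1.68-an-hour servers for a one-hour batch would cost roughly $336), here is a minimal Python sketch that splits a portfolio revaluation batch into small, independent tasks and fans them out across workers. The portfolio data and the analytic are placeholders; a real cloud deployment would spread the same tasks across many machines rather than local processes.

```python
# Minimal sketch of scaling out: many small, independent tasks instead of one
# big serial run. Portfolio IDs and the "analytic" below are placeholders.
from concurrent.futures import ProcessPoolExecutor

def revalue_portfolio(portfolio_id):
    # Placeholder for a real analytic (performance, risk or a compliance check).
    value = sum((portfolio_id * i) % 97 for i in range(10_000)) / 1e6
    return portfolio_id, value

if __name__ == "__main__":
    portfolio_ids = range(1_000)
    # In a cloud deployment the pool would be thousands of short-lived virtual
    # servers billed by the hour and shut down when the batch completes;
    # locally we simply fan out across the cores we have.
    with ProcessPoolExecutor() as pool:
        results = dict(pool.map(revalue_portfolio, portfolio_ids, chunksize=50))
    print(f"revalued {len(results)} portfolios")
```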

Changing vendor landscape – the move to SaaS

The changes in technology and the deployment of services are also having a tremendous impact on the technology vendors that supply the asset management industry. Many have come from the world of the large software deployment project and the support and maintenance that goes with it. That world only works for so long before the cost of innovation becomes too high. Costs rise and service levels and innovation drop, because too much time is spent servicing outdated deployments of ‘land-based’ software that may have been developed many years ago. Software deployed on-premise with a client instantly leaves the control of the vendor: the vendor controls neither the environment nor any customisations made locally. Supporting this structure across hundreds of clients quickly becomes very difficult and very expensive, resulting in poor support, poor service levels and cost increases that eventually get passed on to the end client. The level of innovation drops because the technology vendor has to maintain old versions of software; a high percentage of developer resources is wasted on fixing issues in old versions that are still live in production with clients, instead of focussing on new versions and improvements. Being able to focus on a single version of a technology solution, no matter how many end clients there are, is a huge productivity bonus for the vendor. These productivity and innovation gains mean that development companies in all industries are switching to multi-tenant architectures with the cloud as the delivery mechanism, and new software start-ups are able to benefit from day one using this approach. This will eventually drive vendors who fail to adapt out of business altogether.
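To make the multi-tenancy idea concrete, here is a minimal sketch in Python (the table, column and client names are purely illustrative): one code base and one shared schema serve every client firm, with a tenant identifier keeping each firm’s data separate, so the vendor only ever maintains a single live version.

```python
# Minimal multi-tenancy sketch: one schema, one code path, many client firms.
# Names and data are illustrative only.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE positions (
        tenant_id   TEXT NOT NULL,   -- which client firm owns the row
        portfolio   TEXT NOT NULL,
        instrument  TEXT NOT NULL,
        quantity    REAL NOT NULL
    )
""")
conn.executemany(
    "INSERT INTO positions VALUES (?, ?, ?, ?)",
    [
        ("fund_mgr_a", "GLOBAL_EQ", "VOD.L", 10_000),
        ("fund_mgr_b", "UK_BOND", "GILT_2030", 5_000),
    ],
)

def positions_for(tenant_id):
    # Every query is scoped to a tenant; the application code is identical for
    # all clients, so the vendor only ever maintains one live version.
    return conn.execute(
        "SELECT portfolio, instrument, quantity FROM positions WHERE tenant_id = ?",
        (tenant_id,),
    ).fetchall()

print(positions_for("fund_mgr_a"))   # only fund_mgr_a's rows come back
```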

While the business advantages of a streamlined IT strategy with outsourced partners are becoming clearer for asset managers, there are key questions of control and data security that must be addressed. It seems every month brings a new headline about a ‘cloud hack’ or data security breach, and this damages the reputation of cloud vendors and SaaS developers. What is the reality, and what can be done to ensure control is maintained and data is secure? Classifying data and the business importance of various services helps to create a framework for assessing the readiness of internal applications for alternative methods of delivery. Not all applications are mission critical and not all data is ultra-sensitive, so it is important to map the requirements to the reality, rather than having a single policy that prevents any of the business benefits from being realised. Compare new cloud-based SaaS applications with existing on-premise solutions and it quickly becomes clear that pure-play SaaS applications are more secure: security is built in from the ground up rather than added at the end of the development process, as is the case for many on-premise applications.
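One way to picture that classification exercise is the small Python sketch below. The sensitivity tiers, criticality levels, example applications and thresholds are entirely illustrative assumptions, not a prescribed framework, and would need to reflect a firm’s own policy.

```python
# Illustrative classification sketch: score each application by data sensitivity
# and business criticality, then map the score to a candidate delivery model.
SENSITIVITY = {"public": 0, "internal": 1, "client_confidential": 2, "regulated_pii": 3}
CRITICALITY = {"supporting": 0, "important": 1, "mission_critical": 2}

def delivery_model(sensitivity, criticality):
    score = SENSITIVITY[sensitivity] + CRITICALITY[criticality]
    if score <= 1:
        return "public cloud SaaS"
    if score <= 3:
        return "cloud SaaS with enhanced controls (encryption, audit, SLAs)"
    return "retain on-premise / private cloud pending further review"

applications = [
    ("marketing website",     "public",              "supporting"),
    ("performance reporting", "client_confidential", "important"),
    ("order management",      "regulated_pii",       "mission_critical"),
]
for name, sens, crit in applications:
    print(f"{name:<22} -> {delivery_model(sens, crit)}")
```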

Conclusion

It’s clear that the investment management industry needs to replace legacy IT systems in order to stay competitive. This isn’t something that will happen overnight, but we have seen how on-premise and traditional hosted solutions cannot scale to meet the business requirements of today’s asset manager. Regulation is here to stay, and investors are looking for returns from a wider array of asset classes. All of this increases data volumes and the pressure on reporting and transparency. A new generation of IT and software is required to meet these demands and to bring greater levels of agility and collaboration to the industry. As a portfolio analytics provider, StatPro began planning its journey to the cloud in 2008, and we’re confident this was the right decision as we continue to bring pure SaaS-based applications to the market.
