Vendor Relationships in the Time of Consolidation: Are Your Vendors in it for the Long Haul?

The impending sunset of Barclays POINT was prominent at the recent FTF Performance Measurement Americas (PMA) conference, underscoring the long-term risks of a short-term focus on software

Jeff Cullen, Solutions Principal

While most performance system RFPs focus on a given product’s user interface or the features built directly into the software, organizations should also consider the potential for technology debt: the unhealthy dependencies that often develop around distributed systems. I recently participated in a panel at the FTF PMA conference in New York entitled “How to Justify System Migration Pain,” and following the recent consolidation among fintech vendors, this growing challenge seemed to be top of mind among the panelists.

The discussion also featured Jeffrey Malmin, General Director, Performance Reporting at John Hancock Investments; Shankar Venkatraman, Director, Global Head of Performance, Risk, Analytics and Compliance at Citi; Jeremy Welch, Head of US Hub Operations at BNP Paribas; and Jeffrey Bellavance, Manager Performance and Analytics at PanAgora Asset Management.

One takeaway from the event resonated with me as the sole vendor on the panel: the consistent recognition that investment firms and asset managers aren’t simply buying software anymore; they’re entering into a relationship with their vendor.

In this era of consolidation in the fintech space, it has never been clearer that decision makers must look at more than a given product’s features and functions when evaluating and implementing solutions. One example is Bloomberg’s acquisition and planned retirement of Barclays POINT. Many firms had quite contentedly allowed POINT to gain an outsized presence in their operational ecosystems. As a result, many were caught completely flat-footed by the retirement announcement and now face the troubling reality that, in addition to losing the functionality of the POINT platform, they have varying degrees of uncertainty about whether they can extract their valuable historical data.

More recent examples can be seen in FactSet’s purchase of BISAM and StatPro’s acquisition of UBS Delta. Even in such “simple” consolidations, clients may encounter incidental effects of the acquisition as two former competitors merge and begin the complex task of rationalizing their “new” product offering. This can prove to be another form of the migration pain encountered in system implementations.

So the question becomes: what type of deployment model can help combat the risks presented by situations such as these? In our estimation, the best buffer against painful evaluation and implementation of performance systems is found first by considering the transparency of the underlying data, and second by adopting a product that not only has performance capabilities but also anticipates that modern firms will almost always require multiple performance, risk and analytics systems.

At Eagle, for instance, we often advocate the 80/20 rule for new system implementations. When firms start with a centralized data hub that promises clean, traceable inputs, they put in place a foundational platform that accommodates 80% to 90% of their performance measurement needs. This data-centric approach also lends itself to building on top of the platform to tackle the unique demands of specialized teams or departments that may require satellite solutions better suited to their needs. Ultimately, all of these systems will require the same inputs with the same degree of cleanliness. This best-in-class strategy can flexibly accommodate needs unique to an individual business or department, which represent the remaining 10% to 20% of the required capabilities.

A centralized hub that prioritizes data quality and enables deeper data governance makes system migrations that much easier. Add a data model explicitly designed to consume other vendors’ performance measurement, attribution and risk outputs for comparison and tracking, and the result is a data-centric platform that gives the entire organization greater transparency into the quality of its performance measurement.

In the end, the most difficult part of a migration is typically untangling the legacy system from its existing functions and then, somehow, slotting in the new system while maintaining business as usual. To use POINT as an example, users must not only understand all the downstream dependencies that rely on its output and analytics, but also understand how POINT derives those analytics in the first place. It’s akin to assembling a single puzzle versus piecing together multiple puzzles mixed together and turned upside down. How much easier would it be to unwind this kind of situation if the organization had employed a centralized performance data hub that fed POINT as part of an overall strategy? The answer should be clear.

I would be remiss not to mention that the word “relationship” came up repeatedly in the panel’s conversation. As Citisoft’s Tom Secaur highlighted at an Eagle Summit last spring, most department heads may only shop for a new platform once or twice in their career. If they don’t focus on a long-term strategy with a vendor—at the very least, a vendor unlikely to be acquired or retired—system migration pain can quickly turn into a chronic disorder. At Eagle, we recognize the importance of a data foundation to performance measurement, but more importantly, we believe in long-term relationships, open communication that prevents disruptive surprises, and technology that remains a healthy financial investment over the long haul.
