We had the privilege of attending the TalkingTech: Harnessing AI event, hosted by the Investment Association. This insightful gathering brought together industry leaders, technology experts, and FinTech innovators to explore the evolving role of AI in investment management.

The event opened with a welcome from Gillian Painter, Head of Membership at the Investment Association, and featured thought-provoking sessions, including a presentation from Prasad Chandrasheker of Fidelity International on best practices for integrating AI into investment strategies. Other highlights included an engaging panel on AI ethics and regulatory considerations with Simon Bollans of Stephenson Harwood and John Bowman of IBM.

We also had the opportunity to learn from FinTech experts like Ángel Agudo from Clarity AI, Chandini Jain of Auquan, and John Paul from Equitably AI, who shared real-world applications of AI in improving processes and boosting efficiency. The closing message was clear: AI is evolving, and companies must adapt. 

Regulations are developing rapidly, and those who harness the power of AI will lead the future.

As leaders in AI-driven analytics, we recognise that AI is not just about technology—it’s about building a robust framework for adoption and ensuring that the essential governance and control infrastructure is in place.  While we continue to stay at the forefront of AI advancements, we understand that the human element, strategic planning, and cross-functional collaboration are equally crucial in driving successful AI adoption across industries.

What we're saying

When discussing AI ethics and regulation in the UK’s wealth management, investment banking, and financial services sectors, it’s clear that there are both opportunities and challenges. AI is transforming the industry through innovations in areas like robo-advisory, credit scoring, and algorithmic trading. However, because these AI technologies are still in their infancy, they have so far operated without much in the way of legal or regulatory oversight. The EU’s AI Act, which came into force on 1 August 2024, marks the first salvo in a wave of proposed legislation seeking to establish controls and regulatory protections across the field of AI.

A primary concern is transparency. Many AI models, particularly those using machine learning, are complex and difficult to interpret—often referred to as “black boxes.” This lack of explainability can erode trust if customers and regulators don’t understand how AI reaches decisions. For example, in investment banking, where AI is used for trading decisions or client profiling, there must be clear explanations for decisions that impact client outcomes, such as why an algorithm recommends certain trades or investments.
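
To make the transparency point concrete, the short sketch below shows one common way of surfacing which inputs drive a model’s output: permutation importance. The model, the feature names (volatility, momentum, and so on) and the data are all hypothetical placeholders rather than a real trading model; this is a minimal illustration, not a complete explainability framework.

```python
# Minimal sketch: which features does a "recommend / don't recommend" model
# actually lean on? Uses permutation importance from scikit-learn.
# All feature names and data below are hypothetical placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
feature_names = ["volatility", "momentum", "dividend_yield", "pe_ratio"]

# Synthetic instrument features and a synthetic "recommend" label.
X = rng.normal(size=(500, len(feature_names)))
y = (X[:, 1] - 0.5 * X[:, 0] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Shuffle each feature in turn and measure how much accuracy degrades:
# the bigger the drop, the more the model relies on that feature.
result = permutation_importance(model, X, y, n_repeats=20, random_state=0)

for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: pair[1], reverse=True):
    print(f"{name:<15} importance: {score:.3f}")
```

Output along these lines gives a compliance or client-facing team a first-order answer to “why did the model favour this trade?”, although it is no substitute for proper model documentation and governance.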

Moreover, in wealth management, AI-powered tools must respect strict data privacy and consent regulations. Since AI thrives on large datasets, ensuring customer data is not misused or repurposed without proper consent is essential. For instance, the General Data Protection Regulation (GDPR) places limits on how personal data can be processed, requiring firms to have explicit consent from clients before using their data for AI-driven services.
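
As a simple illustration of the consent point, the sketch below filters a client dataset down to records carrying an explicit AI-processing consent flag before anything else happens. The field names and data structure are hypothetical; in practice this check would sit on top of a firm’s own consent-management and records systems.

```python
# Minimal sketch: only client records with explicit consent are passed into an
# AI-driven service. Field names here are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class ClientRecord:
    client_id: str
    portfolio_value: float
    ai_processing_consent: bool  # captured under the firm's own consent process

def records_eligible_for_ai(records: list[ClientRecord]) -> list[ClientRecord]:
    """Return only the records where the client has given explicit consent."""
    return [r for r in records if r.ai_processing_consent]

clients = [
    ClientRecord("C-001", 250_000.0, True),
    ClientRecord("C-002", 480_000.0, False),  # no consent: never reaches the model
    ClientRecord("C-003", 125_000.0, True),
]

for record in records_eligible_for_ai(clients):
    print(f"{record.client_id} can be included in AI-driven analysis")
```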

From a regulatory standpoint, the UK has adopted a sector-led approach, meaning AI must comply with existing financial services rules. The Financial Conduct Authority (FCA) requires that firms deploying AI treat customers fairly and communicate transparently, even where AI is involved in credit assessments or financial advice. This includes ensuring that AI systems don’t unintentionally harm clients, for example by making an incorrect creditworthiness assessment. Guidance from the FCA and the Prudential Regulation Authority (PRA) emphasises that firms using AI must manage risks diligently and ensure that their systems operate under proper oversight.
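
One practical element of that oversight is keeping an auditable record of what an AI system was asked, what it produced, and which model version produced it, so a human reviewer can reconstruct the decision later. The sketch below shows a deliberately simplified, hypothetical audit-log entry; it is not a format prescribed by the FCA or PRA.

```python
# Minimal sketch of an audit trail for AI-assisted decisions.
# The structure and field names are hypothetical, not a regulatory standard.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AIDecisionLog:
    timestamp: str
    model_version: str
    inputs: dict                     # the features the model saw
    output: str                      # what the model recommended or scored
    reviewed_by: str | None = None   # filled in when a human checks the decision

def log_decision(model_version: str, inputs: dict, output: str) -> AIDecisionLog:
    entry = AIDecisionLog(
        timestamp=datetime.now(timezone.utc).isoformat(),
        model_version=model_version,
        inputs=inputs,
        output=output,
    )
    # In practice this would be written to durable, access-controlled storage.
    print(json.dumps(asdict(entry)))
    return entry

log_decision(
    model_version="credit-score-v1.4",
    inputs={"income": 52_000, "existing_debt": 8_500},
    output="refer_to_human_review",
)
```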

Ultimately, as AI becomes more prevalent in financial services, balancing innovation with regulatory compliance will require ongoing dialogue between regulators, firms, and AI providers.

Regulators are likely to continue adapting rules to ensure AI’s benefits don’t come at the cost of fairness, transparency, or data privacy.