
How to foster responsible artificial intelligence

27 May 2025

By Vincent GOUSSARD, Credit and ESG Research Analyst, Crédit Mutuel Asset Management. Crédit Mutuel Asset Management is an asset management company of Groupe La Française, the holding company of the asset management branch of Crédit Mutuel Alliance Fédérale.

AI & barriers to adoption

Often referred to as the fourth industrial revolution (after the Internet), artificial intelligence (AI) is a significant driver of productivity gains. While its full potential has yet to be integrated across all business models, its rapid rise raises legitimate concerns. Indeed, according to recent Eurostat data, issues and uncertainties related to the regulation of AI usage are among the main barriers to adoption by companies with over 250 employees within the European Union.

This data highlights the need to develop AI within a secure framework. The legal and regulatory risks associated with AI, including those related to intellectual property, data protection, privacy and ethics, all fall under the umbrella of AI governance, i.e. the set of processes and standards that ensure safe and ethical use. Robust governance is likely to encourage the adoption of AI.

The need for governance: AI Act

In the absence of global governance, the European Union (EU) introduced the first AI regulation (the AI Act) in June 2024, with progressive implementation through December 2030. The aim of this regulation is to ensure that AI is used safely, ethically, transparently and with respect for fundamental rights. To this end, AI use cases have been categorized by risk level (minimal, limited, high, unacceptable) and associated with obligations aimed at reducing or eliminating the corresponding risks. The EU has extended the scope of the AI Act so that the digital nature of AI cannot be used to bypass the regulation. The AI Act targets not only providers and deployers of AI systems located in the EU but also those whose ‘outputs,’ produced by AI systems, are used in the EU. The term ‘output’ is broad enough to capture a wide range of situations, including ‘predictions, content, recommendations or decisions.’

To prevent AI from functioning like a “black box”, the AI Act is designed to protect European citizens from biased use by ensuring transparency and traceability of the inference process, which must remain under human supervision. Applied to human resources, a “high-risk” use case, the regulation requires that any company using AI to sort job applications inform candidates and be capable of explaining how the algorithm used operates. Furthermore, to prevent potential malfunctions, such as ‘hallucinations’, the entire process must be carried out under human supervision.

While the AI Act lays the foundations for AI governance, it is not without criticism. From an economically liberal perspective, its complexity and associated costs could be seen as barriers to innovation. This is especially true since these new requirements need to be integrated with pre-existing ones, such as fundamental rights, personal data protection, consumer protection and intellectual property protection. For others, more sensitive to human rights, the regulation is seen as too lenient, and its implementation as too slow. 

The importance of global governance

The extraterritoriality of the AI Act and the substantial fines to which companies may be exposed (up to 7% of global revenue for prohibited uses) are likely to interfere with the interests of other economic regions. For instance, the current US administration, which has eased restrictions on AI providers in the name of preserving innovation, is also, according to Bloomberg, reportedly putting pressure on European authorities to abandon the General-Purpose AI Code of Practice, which is expected by August. This non-binding document, co-authored by various stakeholders, including AI model providers themselves, details the best practices to be adopted.

This power struggle highlights the need to reach an agreement to avoid the proliferation of national rules and standards that could hinder the development of AI. However, given existing cultural and ideological biases, a global AI governance model is likely to be determined only on the basis of the lowest common denominator. This appears to be the role assigned to the Global Digital Compact, adopted by 193 countries at the 2024 Summit of the Future. Just as its predecessor, the UN Global Compact, was designed to promote human rights, labor standards, environmental management and the fight against corruption, the Global Digital Compact aims to ‘define shared principles for an open, free and secure digital future for all’. Within it, the Artificial Intelligence Committee aims to ‘strengthen international data governance and regulate AI for the benefit of humanity.’

Conclusion 

In his novel '1984', George Orwell depicts a world where citizens are under the constant surveillance of an authoritarian power, embodied by the figure of Big Brother. In the digital age, the footprints we leave, combined with advances in AI, expose every citizen/consumer to similar treatment. To prevent this Orwellian dystopia from becoming a reality, regulating the use of this technology seems crucial. Responsible investment therefore involves ensuring that the companies creating and deploying these AI systems establish adequate operational ‘firewalls’ under the supervision of governance bodies that have the expertise to understand these issues and effectively mitigate the associated risks.




This commentary is provided for information purposes only. The opinions expressed by the author are based on current market conditions and are subject to change without notice.  The information contained in this publication is based on sources considered reliable, but Groupe La Française does not guarantee their accuracy, completeness, validity or relevance. Published by La Française Finance Services, registered office at 128 boulevard Raspail, 75006 Paris, France, a company regulated by the Autorité de Contrôle Prudentiel as an investment services provider, n° 18673 X, a subsidiary of La Française. Crédit Mutuel Asset Management: 128 boulevard Raspail, 75006 Paris is a management company approved by the Autorité des marchés financiers under number GP 97,138 and registered with ORIAS (www.orias.fr) under number 25003045 since 11/04/2025. Société Anonyme with a capital of €3871680, RCS Paris n° 388,555,021, Crédit Mutuel Asset Management is a subsidiary of Groupe La Française, the asset management holding company of Crédit Mutuel Alliance Fédérale.
