The AI Act – Impact on Alternative Investment Managers
On 13 June 2024, the European Parliament and the Council of the European Union signed the Artificial Intelligence Act (Regulation (EU) 2024/1689) (the “AI Act”). The AI Act imposes obligations on developers and deployers of AI regarding its use. The law will apply in the EU from 2 August 2026, and work is underway to facilitate rapid implementation in Norway.

The purpose of the AI Act is to promote the development and use of AI systems that are reliable and safe, regardless of the sector in which they are used. The Act is also intended to safeguard democracy and to ensure that citizens’ fundamental rights are respected, while clear rules should contribute to innovation.
The AI Act will affect natural and legal persons (including AIFMs) using an AI system in the course of professional activities (non-professional activities are exempt). These users of AI are referred to as deployers in the AI Act. That said, the majority of the obligations fall on developers of high-risk AI systems, referred to as providers in the AI Act.
The following are the main obligations in the AI Act for deployers when using AI:
- Prohibition on using unacceptable AI practices (as defined in the AI Act).
- Requirement to ensure that individuals in the organization possess the skills, knowledge, and understanding necessary to make an informed deployment of AI systems (AI literacy).
- Obligations specified in Article 26 in connection with the use of high-risk AI systems, such as the obligation to keep the logs automatically generated by the high-risk AI system, to the extent such logs are under the deployer’s control, for at least six months.
- Transparency obligations; for example, deployers of an AI system that generates or manipulates image, audio or video content constituting a deep fake must disclose that the content has been artificially generated or manipulated.
Unacceptable AI practices are defined in the AI Act and relate to, for example:
- AI systems that employ subliminal, manipulative, or deceptive techniques to distort behaviour and impair informed decision-making, resulting in significant harm.
High-risk AI systems are defined in the AI Act and relate to, for example:
- AI systems that are intended to be used for the recruitment or selection of natural persons, in particular to place targeted job advertisements, to analyse and filter job applications, and to evaluate candidates.
The AI Act will apply to developers and deployers from 2 August 2026. However, Chapters I and II, regarding general provisions and prohibited AI practices, will apply from 2 February 2025. The European Commission shall, no later than 2 February 2026, provide guidelines specifying the practical implementation of the AI Act.
Permian comment: AIFMs utilizing AI systems in a professional capacity will fall within the scope of the regulation and must adhere to the AI Act. AIFMs are therefore advised to take measures to ensure, to the best of their ability, a sufficient level of AI literacy among their staff. Furthermore, AIFMs should consider the type of AI system they are utilizing, as additional obligations arise if a high-risk AI system is used. Finally, it is recommended to set out guidance to staff on the use of AI systems in an internal policy. Such a policy should align with the DORA implementation for those AIFMs that are in scope of DORA.
Please find our article on DORA here.
Contact
Anna Berntson Petas, Head of Legal and Compliance
anna.berntson@permian.se
Erik Elkan, Risk Manager
erik.elkan@permian.se
Samuel Hörberg Delac, Legal Counsel
samuel.delac@permian.se