What are AI agents in e-commerce?
AI agents are autonomous software programmes that can be used in a wide range of application areas. In e-commerce, for example, they take over large parts of the purchasing process for customers - known as agentic commerce. The EU AI Act classifies the underlying models as "general-purpose AI models" (Article 3(63) of the Regulation).
AI agents go far beyond traditional assistance functions: they research products, compare options, take preferences into account (e.g. price, quality, delivery time, reviews) and can also make purchases independently - within predefined limits. Users formulate a goal or a request ("Find me a red blouse for under €50" or "Order my hair shampoo every month") and the agent works autonomously or semi-autonomously to fulfil this goal.
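The behaviour described above - pursuing a user's goal autonomously, but only within predefined limits - can be sketched in a few lines. This is a minimal, purely illustrative example; all names and the scoring logic are assumptions, not a real agent framework:

```python
# Illustrative sketch of an agentic-commerce decision step (hypothetical names).
# The agent compares offers against the user's constraints (here: a price limit)
# and preferences (here: reviews), and only acts within those limits.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Offer:
    name: str
    price: float
    rating: float

def find_best_offer(max_price: float, offers: list[Offer]) -> Optional[Offer]:
    """Return the best offer within the user's price limit, or None."""
    candidates = [o for o in offers if o.price <= max_price]
    if not candidates:
        return None  # the agent must not buy outside the predefined limit
    # Prefer the best-rated offer; break ties in favour of the lower price.
    return max(candidates, key=lambda o: (o.rating, -o.price))

offers = [
    Offer("Red blouse A", 59.99, 4.8),
    Offer("Red blouse B", 44.90, 4.5),
    Offer("Red blouse C", 39.00, 4.5),
]
print(find_best_offer(50.0, offers).name)  # → Red blouse C
```

A real agent would add many more steps (research, preference elicitation, checkout), but the principle is the same: autonomy inside hard, user-defined boundaries.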
Are you interested in AI agents or other exciting AI solutions in e-commerce? Please feel free to contact us. As an experienced AI agency, we look forward to helping you with your enquiry.
The EU AI Act and AI agents
Regulation (EU) 2024/1689 on Artificial Intelligence (AI Act) is the world's first comprehensive legal framework for the development and use of artificial intelligence in the European Union. Its aim is to ensure a high level of protection for fundamental rights, public health and safety - and to promote innovation at the same time.
This poses considerable legal, technical and organisational challenges for AI agents, i.e. systems that act on the basis of autonomous decision-making mechanisms.
Unclear risk categorisation of autonomous systems
The EU AI Act follows a risk-based approach that categorises AI systems as posing unacceptable, high, limited or minimal risk. However, autonomous AI agents that operate in changing contexts or combine several functions often cannot be clearly assigned to a risk level. The distinction between a "high-risk system" and a "limited-risk system" is particularly problematic if the agent learns adaptively or dynamically adjusts its behaviour. This ambiguity creates legal uncertainty for developers and operators and makes compliance measures difficult to plan.
An AI agent in e-commerce typically displays considerable generality: in line with its intended purpose and regardless of how it is placed on the market, it can competently perform a wide range of distinct tasks and can be integrated into a large number of downstream systems or applications. Such AI agents are therefore covered by the EU Regulation. For the e-commerce platform operator (e.g. a retailer), this means that by deploying the AI agent it is operating a "general-purpose AI system" (Article 3(66) of the Regulation).
When assessing what an AI agent system is to be used for, it is not enough to look at the area in which it is used (purchasing, customer service, etc.) or the kind of results it delivers. It also matters which tools the AI agent has at its disposal. If an AI agent is intended, for example, for purchases in the online shop but can independently access and control a computer or browser, this harbours a higher risk of unexpected or abusive use.
As a result, it could be categorised as a general-purpose AI (GPAI) system - even if it was originally only intended for shop purchases. However, retailers, importers, operators or other third parties acting as providers of a general-purpose AI system in e-commerce do not necessarily fall into the high-risk category of the Regulation. This follows from the reverse conclusion of Article 25(1)(c) of the Regulation.
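One practical way to keep an agent's intended purpose narrow is to gate every tool invocation through an explicit allowlist. The sketch below is a hypothetical illustration (tool names and the guard function are assumptions), not a statement of what the Regulation requires:

```python
# Hypothetical guardrail: the agent may only use tools that match its
# intended purpose (shop purchases). Tools such as free browser or OS
# control are blocked, since they widen the range of possible uses.
ALLOWED_TOOLS = {"product_search", "price_compare", "checkout"}

def invoke_tool(tool_name: str, call):
    """Run a tool call only if the tool is on the allowlist."""
    if tool_name not in ALLOWED_TOOLS:
        raise PermissionError(f"Tool '{tool_name}' is not permitted")
    return call()

invoke_tool("product_search", lambda: "search results")  # permitted
# invoke_tool("browser_control", lambda: ...)            # raises PermissionError
```

Such a restriction does not decide the legal classification by itself, but it documents that the system was designed for a defined purpose rather than general computer control.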
Explainability and traceability
According to the Regulation, high-risk AI systems must be transparent and comprehensible to humans. However, AI agents based on large language models, deep-learning architectures or reinforcement learning in particular generate decision-making processes that are almost impossible to interpret (the "black box" problem). Implementing the transparency requirements is therefore technically extremely challenging.
Responsibility and liability
Another central problem concerns the attribution of responsibility. The Regulation primarily regulates the obligations of providers and users of AI systems (Art. 16-29 of the Regulation), but does not comprehensively address liability issues. In the case of autonomous AI agents, the question arises as to who is responsible for unlawful behaviour or damage when decisions are made without direct human intervention. The proposed EU Directive on AI liability (COM(2022) 496 final) attempts to close this gap, but the practical implementation of the allocation of liability remains unclear, especially for systems that develop autonomously.
Documentation and audit obligations
The EU AI Regulation obliges providers of high-risk systems to provide comprehensive technical documentation (Art. 11 of the Regulation) and to fulfil risk management, data quality and monitoring obligations (Art. 9-15 of the Regulation). Before making a high-risk AI system available on the market, distributors shall verify that it bears the required CE marking, that it is accompanied by a copy of the EU declaration of conformity and instructions for use referred to in Article 47 of the Regulation and that the provider and, where applicable, the importer of that system have fulfilled their respective obligations set out in Article 16(b) and (c) and Article 23(3) of the Regulation.
For AI agents that continuously learn or interact with open data sources, compliance with these obligations is often hardly practicable. The requirements for traceability and reproducibility collide with the inherent dynamics of such systems. Small and medium-sized enterprises (SMEs) in particular find their ability to innovate restricted by these regulatory requirements.
Conclusion
The EU AI Regulation creates an important framework for the responsible use of artificial intelligence, but it poses complex challenges for autonomous AI agents. Unclear risk classifications, a lack of technical explainability, unresolved liability issues and high compliance costs complicate practical implementation. In particular, the question of liability arises if an AI agent behaves incorrectly (e.g. selects the wrong item, adds too many items to the shopping cart or calculates incorrect prices). Before deploying an AI agent, it is therefore necessary to define the system's boundaries together with the implementing software agency and to eliminate malfunctions through stress testing.
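The failure modes named above (wrong quantities, incorrect prices) can be caught by simple pre-deployment checks. The following is an illustrative sketch under assumed limits and function names, not a complete test suite:

```python
# Hypothetical stress-test check: validate an agent's proposed order
# against agreed limits before it is executed.
MAX_QUANTITY = 5  # assumed per-order limit agreed with the operator

def validate_order(item_price: float, quantity: int, quoted_total: float) -> bool:
    """Reject orders that exceed the quantity limit or miscalculate the total."""
    if quantity < 1 or quantity > MAX_QUANTITY:
        return False
    # Allow for float rounding, but flag genuinely wrong totals.
    return abs(item_price * quantity - quoted_total) < 0.01

assert validate_order(9.99, 2, 19.98)        # correct total, within limits
assert not validate_order(9.99, 50, 499.50)  # quantity above the agreed limit
assert not validate_order(9.99, 2, 25.00)    # miscalculated price
```

Checks like these are cheap to run repeatedly against generated orders and make the agent's boundaries testable rather than merely documented.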
At elio, we have been working on suitable AI solutions for our customers for several years now. Thanks to our legal department, we are always up to date with the latest AI compliance requirements in eCommerce. On this basis, we are happy to help you utilise AI agents for your challenges within the framework of the EU AI Act.