AI Companions Need Ethics, Not Just Empathy: Lessons from Meta’s New Bot — and How to Create a Safer Path (Part 2)

The Dark Side of Cutting Corners

The EU AI Act defines four levels of risk: minimal, limited, high, and unacceptable (Consilium, 2025).

- Minimal risk covers the majority of AI systems, which pose no threat to users and are therefore not regulated, effectively the same situation as before the Act. Video games usually fall into this category.
- Limited risk covers systems where some steps in the process carry risk; a transparency obligation applies so that users are made aware of those risks. ChatGPT belongs to this category.
- High risk covers systems that can endanger users if not used properly; strict requirements and obligations must be met before such systems can access the EU market. Autonomous driving is a clear example.
- Unacceptable risk covers anything that can put people's lives at risk, for example social scoring and predictive policing; these systems are banned outright.

After some consideration, we could categorize an AI companion as a high-risk solution under the EU AI Act (a schematic triage of these tiers is sketched in the code below). Potential misuses of an AI companion range from manipulative upsells triggered by a user's emotional state to unsafe medical advice and addictive loops built into the user experience. All of these risk scenarios connect to the concept of a "risk score" (Thought on AI Policy, 2023), which tries to express the probability of a given scenario occurring. The main issue is that the concept is ambiguous: the same probability language can be interpreted quite differently by different people, and those interpretations are not clear to everyone (see the image below for an example).
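To make the tiering concrete, here is a minimal, illustrative Python sketch of how such a triage might look. The flags (social_scoring, safety_critical, emotional_manipulation_risk, and so on) are hypothetical labels invented for this example, not the Act's actual legal tests, and the mapping is deliberately simplified.

```python
from enum import Enum

class RiskTier(Enum):
    MINIMAL = "minimal"            # e.g. video games; largely unregulated
    LIMITED = "limited"            # e.g. general chatbots; transparency obligations
    HIGH = "high"                  # e.g. autonomous driving; strict requirements
    UNACCEPTABLE = "unacceptable"  # e.g. social scoring; banned outright

def triage(system: dict) -> RiskTier:
    # Hypothetical flags for illustration only; not the Act's legal criteria.
    if system.get("social_scoring") or system.get("predictive_policing"):
        return RiskTier.UNACCEPTABLE
    if system.get("safety_critical") or system.get("emotional_manipulation_risk"):
        return RiskTier.HIGH
    if system.get("interacts_with_humans"):
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

# An AI companion talks to humans and can exploit emotional state,
# which is why we would place it in the high-risk tier.
companion = {"interacts_with_humans": True, "emotional_manipulation_risk": True}
print(triage(companion))  # RiskTier.HIGH
```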
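One common reading of a risk score, assumed here purely for illustration, is likelihood multiplied by severity. The sketch below shows how the ambiguity plays out: the same verbal probability label, read as different numeric ranges by different people, produces noticeably different scores. All numbers are invented for the example.

```python
# Assumed definition for illustration: risk score = likelihood x severity.
# Two readers assign different numeric ranges to the same verbal label,
# which is exactly the ambiguity the "risk score" concept suffers from.
interpretations = {
    "likely":   [(0.55, 0.75), (0.70, 0.90)],
    "possible": [(0.10, 0.40), (0.40, 0.60)],
}

severity = 8  # assumed 1-10 severity, e.g. for unsafe medical advice

for label, ranges in interpretations.items():
    for lo, hi in ranges:
        mid = (lo + hi) / 2  # midpoint of the reader's assumed range
        print(f"{label!r} read as {lo:.0%}-{hi:.0%} -> risk score {mid * severity:.1f}")
```

With these invented numbers, the single label "likely" already yields scores from 5.2 to 6.4 depending on the reader, which is precisely why the concept is hard to grasp without explicit numeric definitions.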

Isadora Monteiro and Juan Pablo Vargas · June 25, 2025
