RuneLex AI

Author: nslightowler

  • EU AI Act Bans Subliminal Manipulation

    As we explained in our recent post “EU AI Act: 4 Risk Levels”, as of 2 February 2025 AI practices classified as “unacceptable risk” under Article 5 of the EU AI Act are banned, including, among others:

    “subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques, with the objective, or the effect of materially distorting the behaviour of a person or a group of persons by appreciably impairing their ability to make an informed decision, thereby causing them to take a decision that they would not have otherwise taken in a manner that causes or is reasonably likely to cause that person, another person or group of persons significant harm”

    In addition, according to Recital 29 of the EU AI Act:

    “Such AI systems deploy subliminal components such as audio, image, video stimuli that persons cannot perceive, as those stimuli are beyond human perception, or other manipulative or deceptive techniques that subvert or impair person’s autonomy, decision-making or free choice in ways that people are not consciously aware of those techniques or, where they are aware of them, can still be deceived or are not able to control or resist them. This could be facilitated, for example, by machine-brain interfaces or virtual reality as they allow for a higher degree of control of what stimuli are presented to persons, insofar as they may materially distort their behaviour in a significantly harmful manner.”

    From 2 August 2025, non-compliance with this prohibition will be subject to administrative fines of up to EUR 35 million or, if the offender is an undertaking, up to 7% of its total worldwide annual turnover for the preceding financial year, whichever is higher.

    #ResponsibleAI #EUAIAct #EthicalAI #AIFuture #AIGovernance #SafeAI #AIRegulation #AICompliance #RuneLexAI

  • Article 4 EU AI Act: Minimum Requirements

    As the European Commission states, Article 4 of the EU AI Act, mandatory as of 2 February 2025, “requires providers and deployers of AI systems to ensure a sufficient level of AI literacy of their staff and other persons dealing with AI systems on their behalf”.

    According to the European Commission’s Q&A, an AI literacy program should, at least:

    “a) Ensure a general understanding of AI within their organisation: What is AI? How does it work? What AI is used in our organisation? What are its opportunities and dangers?

    b) Consider the role of their organisation (provider or deployer of AI systems): Is my organisation developing AI systems or just using AI systems developed by another organisation?

    c) Consider the risk of the AI systems provided or deployed: What do employees need to know when dealing with such AI system? What are the risks they need to be aware of and do they need to be aware of mitigation?

    d) Concretely build their AI literacy actions on the preceding analysis, considering

    • differences in technical knowledge, experience, education and training of the staff and other persons – How much does the employees/person know about AI and the organisation’s systems they use? What else should they know?
    • as well as the context the AI systems are to be used in and the persons on whom the AI systems are to be used – In which sector and for which purpose/service is the AI system being used?”

    Define your role. Build literacy. Stay compliant.

    #ResponsibleAI #EUAIAct #EthicalAI #AIFuture #AIGovernance #SafeAI #AIRegulation #AICompliance #RuneLexAI

  • Which is Your Entity Type Under the EU AI Act?

    As the EU AI Act enters into force, clarity on your organisational role is vital to ensuring compliance with EU regulations:

    1. Provider: “natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model or that has an AI system or a general-purpose AI model developed and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge”.
    2. Deployer: “natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity”.
    3. Distributor: “natural or legal person in the supply chain, other than the provider or the importer, that makes an AI system available on the Union market”.
    4. Importer: “natural or legal person located or established in the Union that places on the market an AI system that bears the name or trademark of a natural or legal person established in a third country”.
    5. Product manufacturer: “places on the market or puts into service an AI system together with their product and under their own name or trademark”.
    6. Authorised representative: “natural or legal person located or established in the Union who has received and accepted a written mandate from a provider of an AI system or a general-purpose AI model to, respectively, perform and carry out on its behalf the obligations and procedures established by this Regulation”.

    Understanding your entity type under the EU AI Act is the essential first step toward compliance.
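    For teams that track these roles internally, here is a minimal sketch in Python of how the six Article 3 roles might be recorded per AI system. The names and example entries are our own illustration, one organisation can hold several roles at once, and determining the actual role requires legal analysis of the statutory definitions:

      from enum import Enum

      class AIActRole(Enum):
          # The six operator roles defined in Article 3 of the EU AI Act.
          PROVIDER = "provider"
          DEPLOYER = "deployer"
          DISTRIBUTOR = "distributor"
          IMPORTER = "importer"
          PRODUCT_MANUFACTURER = "product manufacturer"
          AUTHORISED_REPRESENTATIVE = "authorised representative"

      # Hypothetical inventory: map each AI system to the roles held for it.
      systems = {
          "chatbot-sold-under-our-brand": {AIActRole.PROVIDER},
          "third-party-hr-screening-tool": {AIActRole.DEPLOYER},
      }

      for name, roles in systems.items():
          print(name, "->", ", ".join(role.value for role in roles))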

    #ResponsibleAI #EUAIAct #EthicalAI #AIFuture #AIGovernance #SafeAI #AIRegulation #AICompliance #RuneLexAI

  • EU AI Act: 4 Risk Levels*.

    The EU AI Act introduces a risk-based framework that regulates AI systems according to 4 distinct risk levels, each with specific legal obligations and restrictions: unacceptable risk (prohibited practices), high risk, limited risk (transparency obligations) and minimal risk.

        Diagram sourced from the EU Council’s official AI Act page.

    As of 2 February 2025, AI systems classified as “unacceptable risk” under Article 5 of the EU AI Act are banned, including, among others, practices such as subliminal manipulation and social scoring.

    From 2 August 2025, according to Article 99 of the EU AI Act, “Non-compliance with the prohibition of the AI practices referred to in Article 5 shall be subject to administrative fines of up to 35 000 000 EUR or, if the offender is an undertaking, up to 7 % of its total worldwide annual turnover for the preceding financial year, whichever is higher”.
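    To illustrate the arithmetic of Article 99, the applicable ceiling for an undertaking is simply the higher of the two amounts. A minimal sketch in Python (the function and variable names are our own):

      def article_99_fine_cap(worldwide_annual_turnover_eur: float) -> float:
          # Ceiling for prohibited-practice violations under Article 99:
          # EUR 35 000 000 or 7% of total worldwide annual turnover for the
          # preceding financial year, whichever is higher.
          return max(35_000_000, 0.07 * worldwide_annual_turnover_eur)

      # Example: an undertaking with EUR 1 billion in turnover faces a ceiling
      # of EUR 70 million, since 7% of turnover exceeds EUR 35 million.
      print(article_99_fine_cap(1_000_000_000))  # 70000000.0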

    Is your organization in compliance with the EU AI Act?

    #ResponsibleAI #EUAIAct #EthicalAI #AIFuture #AIGovernance #SafeAI #AIRegulation #AIForGood #AICompliance

    * The information provided in this article is for general informational purposes only and does not constitute legal advice. For specific legal guidance, please contact us. https://runelexai.com/

  • Compliance with Article 4 EU AI Act*.

    The so‑called “AI literacy” requirement under Article 4 states that providers and deployers of AI systems “shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training and the context the AI systems are to be used in, and considering the persons or groups of persons on whom the AI systems are to be used”.

    Article 4 became mandatory as of 2 February 2025. Therefore, providers and deployers of AI systems must implement structured training programs, documentation, and ongoing education to meet its requirements.

    These measures are no longer optional; they now represent a legal obligation for any entity that develops or deploys AI within the EU or targets the EU market.

    If you want to know the minimum required content for compliance, you can refer to the official Questions & Answers section provided by the European Commission.

    Organizations that fulfil the mandatory requirements of Article 4 on AI literacy are not only meeting their legal obligations, but also positioning themselves to navigate future challenges, build trust, and lead responsibly in an increasingly digital and regulated environment.

    Is your organization in compliance with the EU AI Act?

    #ResponsibleAI #EUAIAct #EthicalAI #AIFuture #AIGovernance #SafeAI #AIRegulation #AIForGood #AICompliance

    * The information provided in this article is for general informational purposes only and does not constitute legal advice. For specific legal guidance, please contact us.

  • EU AI Act Compliance Checker in just 10 minutes*.

    The Compliance Checker guides users through a series of questions and criteria based on the EU AI Act’s requirements. With this interactive tool, you can quickly determine whether your AI system is subject to these regulations.

    The tool is simple and intuitive, requiring only 10 minutes to complete.
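    Conceptually, a questionnaire tool of this kind is a small decision tree: each answer either rules the system out of scope or routes you to the next question. A minimal sketch in Python (the questions and branching below are our own illustration, not the actual tool’s logic):

      def ask(question: str) -> bool:
          # Ask a yes/no question on the command line.
          return input(question + " [y/n] ").strip().lower().startswith("y")

      def screen() -> str:
          if not ask("Does your product include an AI system as defined by the EU AI Act?"):
              return "Likely out of scope."
          if not ask("Is it placed on the EU market, or do its outputs affect persons in the EU?"):
              return "Likely out of scope, but monitor for changes in use."
          if ask("Does it involve any practice prohibited under Article 5?"):
              return "Prohibited: the practice is banned as of 2 February 2025."
          return "Potentially in scope: assess the risk level and your entity type next."

      print(screen())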

    Is your organization in compliance with the EU AI Act?

    #ResponsibleAI #EUAIAct #EthicalAI #AIFuture #AIGovernance #SafeAI #AIRegulation #AIForGood #AICompliance

    * The information provided in this article is for general informational purposes only and does not constitute legal advice. For specific legal guidance, please contact us.

  • Prohibited AI Systems in the EU as of 2 February 2025.

    According to Article 5 of the EU AI Act, the following types of AI systems are prohibited from 2 February 2025:

    • deploying subliminal, manipulative, or deceptive techniques to distort behaviour and impair informed decision-making, causing significant harm.
    • exploiting vulnerabilities related to age, disability, or socio-economic circumstances to distort behaviour, causing significant harm.
    • biometric categorisation systems inferring sensitive attributes (race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation), except labelling or filtering of lawfully acquired biometric datasets or when law enforcement categorises biometric data.
    • social scoring, i.e., evaluating or classifying individuals or groups based on social behaviour or personal traits, causing detrimental or unfavourable treatment of those people.
    • assessing the risk of an individual committing criminal offenses solely based on profiling or personality traits, except when used to augment human assessments based on objective, verifiable facts directly linked to criminal activity.
    • compiling facial recognition databases by untargeted scraping of facial images from the internet or CCTV footage.
    • inferring emotions in workplaces or educational institutions, except for medical or safety reasons.
    • ‘real-time’ remote biometric identification (RBI) in publicly accessible spaces for law enforcement, except when:
      • searching for missing persons, abduction victims, and victims of human trafficking or sexual exploitation;
      • preventing substantial and imminent threat to life, or foreseeable terrorist attack; or
      • identifying suspects in serious crimes (e.g., murder, rape, armed robbery, narcotic and illegal weapons trafficking, organised crime, and environmental crime).

    Is your organization aligned with the EU AI Act?

    #ResponsibleAI #EUAIAct #EthicalAI #AIFuture #AIGovernance #SafeAI #AIRegulation #AIForGood

  • Why AI Will Never Be Charismatic.

    Every week I hear someone telling me that lawyers will be replaced by Artificial Intelligence (AI) pretty soon. I always respond in the same way: AI can speed up tasks, but it cannot replace the spark of human connection that only a lawyer can bring to the practice of law.

    As lawyers, we continue to integrate AI into our professional and personal lives, but empathy, trust and presence will never be replaced by an algorithm in our day-to-day practice of law.

    As Aristotle wrote in “Politics” (c. 350 B.C.E.), “Man is by nature a social animal”.

    Also, the bestselling author Vanessa Van Edwards has pointed out in her book “Cues: Master the Secret Language of Charismatic Communication” (Penguin Random House, 2022) that humans “are social animals. We evolved to get along in groups. So, we’re constantly telegraphing information about our social status, our potential as mates, and our intentions. Similarly, we’re constantly alert to social information others are sending to us”.

    She also said that “researchers find that nonverbal signals account for 65 to 90 percent of our total communication”.

    For those reasons, in a world growing more digital every day, social connection, trust and charisma will always be at the centre of our law practice, and that is precisely something an algorithm can never replicate; it is our greatest advantage as humans and lawyers.

    #Leadership #LegalTech #HumanConnection #LawFirm #Charisma #AI #Empathy #Innovation #Trust #VanessaVanEdwards

  • “Justice delayed is justice denied”.

    This maxim, attributed to William Ewart Gladstone, who, according to GOV.UK, served as British “Prime Minister for 4 separate periods. More than any other Prime Minister (Liberal 1868 to 1874, 1880 to 1885, 1886 to 1886, 1892 to 1894)”, resonates more than ever in today’s AI-powered era.

    Artificial Intelligence (AI) is increasingly being integrated into legal practice with the promise of automating research and routine tasks and improving efficiency. Legal professionals can now use AI to quickly analyse case law, draft documents, and even predict case outcomes based on historical data. This acceleration of legal processes could reduce backlogs, lower costs, and make legal services more accessible.

    However, instead of reducing work, AI may increase the cognitive load on lawyers by requiring them to review large volumes of AI-generated content, detect subtle errors that could have serious legal consequences, and stay updated on how AI tools function and evolve.

    This shift means lawyers will spend less time on manual tasks, but more time on critical thinking, quality control, and strategic decision-making. Yes, speed is valuable, but the practice of law demands precision and accountability.

    So, let me ask you: Has AI made your legal practice faster, or has it added new layers of complexity and review?

    #AI #LegalTech #FutureOfLaw #ArtificialIntelligence #EthicsInAI #LegalInnovation #JusticeTech

  • Do you believe everything AI says?

    Artificial Intelligence (AI) is rapidly becoming indispensable in the legal, tax, accounting, audit, corporate and government professions, among others.

    According to the Thomson Reuters Institute’s “2025 Generative AI in Professional Services Report”, “26% of professionals use generative AI (GenAI) at work, almost twice the 14% who used it in 2024”.

    However, in the legal field, the states of California and Alabama (U.S.) have recently sanctioned two attorneys for submitting AI-generated briefs containing fictitious citations, which means that something is failing.

    In the same way, the High Court of England and Wales has reminded practitioners that artificial intelligence (AI) cannot substitute for legal judgment:

    “Those who use artificial intelligence to conduct legal research notwithstanding these risks have a professional duty therefore to check the accuracy of such research by reference to authoritative sources, before using it in the course of their professional work (to advise clients or before a court, for example)”.

    Consequently, any professional, especially attorneys, must verify that the output of a prompt is accurate and comes from authoritative sources before using it.

    #LegalTech #AI #Law #Ethics #UKLaw #ResponsibleAI #RuneLexAI