EU AI Act enforcement begins 2 August 2026 - are you ready?

We Make AI Explainable to Humans, Not Just Machines

Luxembourg-based consultancy combining Explainable AI engineering with sociological research. We audit your algorithms and test whether real people understand their decisions.

EU AI Act Compliance
XAI Expertise
Sociological Research

Enforcement: 2 August 2026

The EU AI Act Changes Everything

High-risk AI systems, such as those used in hiring, credit scoring, insurance, and public services, must demonstrate transparency, fairness, and human oversight. Penalties reach €35M or 7% of global annual revenue.

Most compliance providers audit algorithms. We also audit whether the humans affected by those algorithms understand them. That's what the AI Act actually requires and what nobody else delivers.

See how we help →
-
Days until enforcement
65K+
High-risk AI systems in EU
€3.4B
Annual compliance market
0
XAI+Sociology firms in LU

Services

From AI Act compliance to algorithmic bias auditing: we cover the full spectrum of trustworthy AI, with the human dimension others miss.

Urgent
⚖️

EU AI Act Compliance

Risk classification, fundamental rights impact assessments (FRIA), transparency audits, and regulatory-grade documentation: tailored for SMEs that can't afford Big 4 pricing.

Risk Classification · FRIA · Documentation
🔍

Algorithmic Bias Auditing

Independent third-party audits for AI in HR, finance, insurance, and public services. Statistical fairness testing + sociological impact analysis.

Fairness Metrics · Proxy Detection · Remediation
👥

XAI User Studies

Do your AI explanations work for real users? We test comprehension, trust calibration, and actionability across demographics and languages.

Trust Testing · Comprehension · Multilingual
🎓

AI Literacy & Training

Article 4 of the AI Act mandates AI literacy for all staff. Workshops that non-technical teams actually retain, co-designed by an AI expert and a sociologist.

AI Act Essentials · HR Teams · Works Councils
🏛️

AI Ombudsman (Public Sector)

Retained advisory for municipalities and government agencies. We review every AI system before deployment (citizen perception studies included).

Retainer · FRIA for Public AI · Annual Reports

Explainability Design Sprint

One-week intensive: audit your AI, interview 10 users, identify gaps, co-design explanation interfaces, deliver an AI Act-ready specification.

5-Day Sprint · Product Teams · Startups
Technical Services: Data Pipelines, NLP & Forecasting

AI Architecture & Audit

SHAP-based explanation pipelines for ML models. Regulatory-grade transparency reports for FinTech. Built on FNR Bridges research.

EU Grant Strategy

Full lifecycle support for FNR, Luxinnovation, and Horizon Europe proposals. Consortium coordination and impact reporting.

Energy & NLP Pipelines

Time-series forecasting, real-time sentiment engines, and custom Python/Django infrastructure for operational data.

Why Leonarc

Every competitor has data scientists. Nobody else has a sociologist. That's our edge.

1

We audit algorithms and their users

Competitors check the code. We also test whether a nurse, a loan officer, or a citizen actually understands what the AI told them.

2

Sociology is our moat

User studies, trust calibration, social acceptability testing, multilingual comprehension: sociological methods no SaaS platform can replicate.

3

Built on peer-reviewed research

Our methods come from FNR-funded research and peer-reviewed publications.

4

Luxembourg-first, SME-sized pricing

First dedicated XAI + AI ethics consultancy in Luxembourg. Greater Region coverage at a fraction of Big 4 rates.

Typical auditor vs. Leonarc

Typical auditor:"Your SHAP values show feature X drives 40% of decisions"
Leonarc adds:"…and our user study shows 73% of loan officers misinterpret that explanation, creating a false sense of confidence. Here's how to redesign it."
Typical auditor:"Demographic parity gap is 4.2%, within tolerance"
Leonarc adds:"…but interviews with rejected applicants reveal they had no actionable recourse. The system is statistically fair yet socially opaque."

Sociology × AI

Sociological analysis translated into practical AI strategy: teams deploy faster, manage risk better, and earn stakeholder trust.

Behavioral Context in Model Design

We model real decision behavior, institutional constraints, and user workflows to improve model relevance, reduce friction at rollout, and increase operational accuracy.

Fairness, Power & Accountability

We identify who benefits, where bias surfaces, and which controls are needed (then convert that into governance frameworks for executives, auditors, and regulators).

Human-Centered Deployment

We align AI tools with team incentives, communication patterns, and stakeholder expectations to improve adoption, shorten implementation cycles, and sustain value.

Track Record

Applied research and delivery across Explainable AI and real-time NLP.

Explainable AI & FinTech

FNR

Transparent Credit Scoring & Classification

SHAP-driven explanation pipelines for auditable financial ML models. Methodology for Refined Classifications using prototypical explanations.

Financial NLP

Large-Scale Financial Text Analytics

Multi-class text classification for financial document collections. Pre-trained language models for automated categorisation at scale.

Let's Talk

Whether you're preparing for the AI Act, need a bias audit, or want to make your AI understandable to real people, we're here to help.

No pitch decks. No upselling. Just a clear assessment of what you need.

Contact Us