I attended CollabDays Bremen last weekend. I attended Leo Visser’s session without any expectations as I just arrived at the venue.
Internal Reflection: Key Takeaways From Leo Visser’s Presentation
Leo Visser’s session took a broad and sometimes chaotic landscape — AI systems, regulation, user behaviour, adoption, and operational risks — and pulled it into a structure that actually makes sense in a real organisation. It wasn’t a “new feature showcase.” It was about what happens when AI moves from the lab into everyday work, and why most organisations are nowhere near prepared.
The talk started with the context of the EU AI Act. Leo didn’t treat it as a legal document but as a behavioural constraint. His main point was simple: AI is becoming as foundational as the computer was in the 80s and 90s. If users don’t understand it, they fall behind. And when AI becomes the default interface for services, access to it becomes a basic requirement — not optional.
That sets the stage for a bigger question: How do we make users literate enough to operate safely in an AI‑driven workplace?
Biases, Discrimination & Why Users Must Learn to Test Models Themselves
Leo spent a surprising amount of time on bias. Not abstract ethics, but hands‑on patterns:
- Models repeat the biases of the internet.
- They drift toward majority representation unless corrected.
- You can test bias by fixing one variable and watching what else shifts.
He gave a concrete method: generate two nearly identical prompts, change one parameter (country, gender, role), and observe “side‑effects.” If the model changes more than the parameter you adjusted, a hidden bias is involved. This is the type of testing that users can learn in minutes — and it moves bias from theory to practice.
The impact is clear: if the model is biased, outputs become skewed, and business decisions inherit these distortions.
Hallucinations & the Real-World Cost of Trusting AI Too Much
Leo moved on to hallucinations — again, not philosophically, but operationally.
He referenced cases where AI confidently invented regulations, resulting in consultants making decisions based on non‑existent laws. The pattern is familiar:
If you don’t validate sources, AI becomes a liability.
He gave simple detection methods:
- If AI normally gives sources but suddenly doesn’t → risk indicator.
- Ask the same question 2–3 times → high variance means hallucination.
- Treat AI like Wikipedia: a good starting point, never a final authority.
These are the things organisations forget to teach — and then wonder why output quality collapses.
Data Protection, Model Training & “Don’t Paste Prompts You Found Online”
A major section focused on data protection. Leo stressed that many users don’t understand that data pasted into “free” AI tools can be used to train future models. Company data becomes model data. And model data becomes someone else’s output.
He also raised awareness around data poisoning: deliberately crafted prompts hidden inside public examples that manipulate downstream behaviour. Users copy them without realising it. This is a real threat vector almost nobody trains for.
Responsible AI, Transparency & Output Labelling
Leo covered the transparency requirements of the AI Act in practical detail:
- AI‑generated images must be detectable.
- Videos must be labelled.
- Text sent to the public must disclose AI involvement unless reviewed by a human.
- Emotion analysis or sentiment scoring inside systems must be communicated to users.
This is where most organisations are non‑compliant without even realising it.
Practical Adoption: Training Modules, Super Users & Incentives
The session also addressed adoption challenges:
- Users won’t complete training unless incentivised.
- Breaking training into small weekly modules increases completion.
- Companies need “AI super users” inside each department.
- There should be a disclosure channel for AI misuse or concerns, similar to cybersecurity disclosure processes.
This part grounded the whole presentation: strategy means nothing without implementation.