Joris Postmus
AI Researcher & Developer
Amsterdam, Netherlands
About
I think AI is the most significant human invention of our time, and as someone in the field, I feel a real responsibility to help make sure it develops in ways that are genuinely good.
This is what originally got me into AI Safety. I co-founded AISIG, one of the largest student-led AI Safety organizations in the world, where I still serve on the advisory board. I also research ways to make language models more interpretable and steerable, and have published at NeurIPS on methods for controlling model behavior.
Over the years, I've noticed that the problems I care about most almost always trace back to deep human patterns: growing disconnection and polarization, collapsing epistemics, an inability to coordinate even when the stakes are existential. I think much of this comes down to ego, self-deception, a fundamental disconnect from how our own minds actually work, and a prisoner's dilemma playing out at a global scale. I also think AI, if used well, gives us a real chance for the first time to help people grow past those patterns, epistemically, cognitively, and spiritually, at a scale that was never possible before.
Given where my skills, interests, and sense of purpose point right now, that's what I want to focus on. I'm currently building AI-powered tools for personal and contemplative development at Waking Up, one of the largest meditation apps in the world. I've used the app myself for over five years, and the practice has deeply transformed how I think, experience, and relate to myself and others. I believe we have a real chance to help millions, if not billions, of people grow in ways that weren't possible before, and to positively change the trajectory of our future.
Research
Steering Large Language Models using Conceptors: Improving Addition-Based Activation Engineering
NeurIPS 2024 · Workshop on Foundation Model Interventions
A novel approach to controlling LLM behavior using conceptors as soft projection matrices. Instead of representing steering targets as single points in activation space, conceptors represent them as ellipsoids. This enables Boolean operations (AND, OR, NOT) over multiple steering objectives for compositional control, outperforming traditional methods across all tested tasks.
Follow-up papers, submitted to NeurIPS 2025 and ICLR 2025, extend this framework with a more comprehensive compositional steering methodology.
Experience
Waking Up
Exploring how AI can best serve and advance the mission of making personal and contemplative development accessible to everyone.
AI Safety Initiative Groningen (AISIG)
Built one of the largest student-led AI Safety organizations in the world. Organized 20+ events, facilitated AI Safety courses for 60+ students, and supported research published at top-tier conferences.
Yara AI
First hire. Built an AI-powered self-improvement tool combining cutting-edge language models with clinical expertise. Full-stack development, AI safety engineering, and product design.
MomentumAI
Helped European organizations deploy AI effectively, safely, and responsibly. Translated complex metrics on model safety and performance into clear, actionable insights.
Education
BSc Artificial Intelligence
University of Groningen · 2020–2024
Final grade: 8.3/10 · Thesis: 9.5/10 (Conceptor steering for LLMs)
BSc Computing Science
University of Groningen · 2022–2025
High School
Malvern Collegiate Institute, Toronto · 2016–2020
Gold medal in province-wide coding contest (Skills Ontario) · Highest mark in Computer Science · Avg. 96%
Building
I started programming when I was about ten, making video games to play with friends. That turned into web development, then software tools, then AI applications. It's always been my main creative outlet. Some of those early projects are still playable at jorispos.github.io.
Frequently Asked Questions
What is activation engineering?
Activation engineering is a technique for steering the behavior of large language models by modifying their internal activations during inference. My research introduces conceptors as an improvement over traditional vector-based approaches, enabling more precise and compositional steering through Boolean operations (AND, OR, NOT) over multiple objectives.
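As a rough illustration of the traditional vector-based approach mentioned above: a steering direction is extracted as the difference of mean activations between two contrasting prompt sets, then added to the hidden state at inference time. This is a minimal NumPy sketch of the general idea; the function names and the scaling parameter are illustrative, not any specific published implementation.

```python
import numpy as np

def steering_vector(pos_acts, neg_acts):
    """Difference-of-means direction between activations collected
    from contrasting prompt sets (e.g. desired vs. undesired behavior).
    Each input is (n_samples, d)."""
    return pos_acts.mean(axis=0) - neg_acts.mean(axis=0)

def additive_steer(h, v, beta=4.0):
    """Classic additive steering: shift the hidden state h along the
    steering direction v, scaled by an illustrative strength beta."""
    return h + beta * v
```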
What is AISIG?
The AI Safety Initiative Groningen (AISIG) is one of the largest student-led AI Safety organizations in the world. I co-founded it in 2022 with the mission of raising awareness of AI risks, supporting research, and educating students through courses, workshops, and events. I currently serve on the advisory board.
What is conceptor steering for LLMs?
Conceptor steering is a method I developed with Steven Abreu for controlling large language model behavior. Unlike traditional activation engineering that uses single vectors, conceptors represent steering targets as ellipsoids in high-dimensional activation space. This enables Boolean algebra over steering objectives for compositional control. The approach was published at the NeurIPS 2024 Workshop on Foundation Model Interventions.
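To make the ellipsoid picture concrete, here is my own minimal NumPy sketch of the conceptor idea (not the paper's code): a conceptor is computed from the correlation matrix of collected activations, and Boolean combinations follow standard conceptor algebra. The aperture value and helper names are assumptions chosen for illustration.

```python
import numpy as np

def conceptor(X, aperture=10.0):
    """Soft projection matrix capturing the ellipsoid occupied by
    activations X (n_samples x d) of a target behavior."""
    n, d = X.shape
    R = X.T @ X / n  # correlation matrix of the activations
    return R @ np.linalg.inv(R + aperture**-2 * np.eye(d))

def c_not(C):
    """Logical NOT: the complement of the captured subspace."""
    return np.eye(C.shape[0]) - C

def c_and(C1, C2):
    """Logical AND of two conceptors (both assumed invertible)."""
    d = C1.shape[0]
    return np.linalg.inv(np.linalg.inv(C1) + np.linalg.inv(C2) - np.eye(d))

def c_or(C1, C2):
    """Logical OR, obtained via De Morgan's law."""
    return c_not(c_and(c_not(C1), c_not(C2)))

# Steering then replaces the additive update: instead of h + v,
# the hidden state is softly projected through the conceptor, C @ h.
```

Because the conceptor's eigenvalues lie strictly between 0 and 1, applying it shrinks the hidden state toward the target region rather than translating it, which is what makes the Boolean composition of multiple objectives well behaved.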