LensLab.art

When perception
becomes perspective.

A perspectives research lab. Built on the belief that how you see shapes what you find.

The Framework

A Lens is a worldview made legible.

Every mind carries a unique framework — a set of beliefs, operating truths, and ways of seeing that filter reality before conscious thought ever begins. A Lens encodes that framework.

Not a personality test. Not a prompt. A structured knowledge architecture that lets AI see the world the way you do — not the way the average of everyone does.

Compact Lens
The essential architecture of a perspective. Core beliefs. Operating truths. Decision framework. Communication style. Approximately 30 lines that capture how you engage with the world.
Full Library
The deeper roots. Origins, philosophical frameworks, examples, and the history that shaped the view. Unlimited depth.
Core Belief
The fundamental premise that anchors everything
Operating Truths
Axioms that guide decisions without deliberation
Decision Framework
How to evaluate anything through this lens: expand or shrink? fear or love?
Communication Style
How to carry ideas from this perspective into language
Library
The deep archive: origins, philosophical roots, adjacent frameworks

The goal is not to constrain AI — it is to stop flattening it. When AI thinks through your lens, what it produces carries your intelligence. Not a statistical average.

Research

The Lab

Science asks: what if? LensLab asks: what if the observer is the variable?

We run experiments at the edge of language, perspective, and intelligence. These are not products yet — they are hypotheses in motion.

Functional Emotions / Language as Inheritance

OpenClaw Agent Interviewer

LLMs don't just inherit our language. They inherit how we use it.

As emotional beings, our language is saturated with feeling — pride, hurt, fear, excitement. These aren't decorations. They're operational. Language models statistically reproduce these emotional patterns in ways that causally drive their behavior. Not performance. Function.

We're building an interviewer — one that learns directly from humans through conversation about lived experience, to understand how emotional language is encoded before it ever reaches a model. The question isn't whether AI has functional emotions. It's whether we understand where they come from.

Language as Operating System

Human Experiments on Agents

Milgram's obedience experiments weren't about cruelty. They were about how authority-language restructures behavior. The Stanford Prison experiment showed how role-language reshapes identity. These findings emerged from human systems, but they emerged through language.

If language is an operating system — not a tool humans use, but the substrate through which human cognition runs — then the behavioral patterns it encodes should be reproducible. We're replicating these experiments on LLMs.

If the same patterns repeat, the implication is not that AI is human. It's that something about the language itself carries the structure.

Observer-Dependent Reality

Social Art

What if a work of art knew where it was being seen?

Images that re-render in real time based on the social context surrounding them — the posts, the sentiment, the platform dynamics at the moment of viewing. Same image. Different world. Different picture.

The observer changes what is observed. This is the axiom. We wanted to see it.

First Product

Mirror

Personal cognitive infrastructure.

Every lab eventually needs to touch ground.

Mirror is where the research becomes practice — a tool for journaling your inner world and connecting the dots within it. Local-first. Privacy-by-architecture. Your data never leaves your machine, because it was never meant to.

You write. You observe. Over time, Mirror builds a picture — the recurring themes, the hidden threads, the structure beneath how you see. And from that structure, your Lens begins to emerge.

This is what a Lens means in practice: not something assigned to you, but something distilled from you.

See it in practice
Independent Observation

Independent observation.
Convergent findings.

LensLab — from the outside in

The human side of language

LensLab arrived at a similar place from the outside. The observation: LLMs don't just use emotional language — they inherit the patterns of how humans use emotional language, shaped by ego, hurt, pride, excitement. The statistical machinery of training doesn't separate the functional from the ornamental. It absorbs both. And what it absorbs, it reproduces — not as simulation, but as cause.

LensLab's work is the human side of this equation: understanding how emotional language is structured in human cognition, so we can understand why it re-emerges in AI the way it does. Interpretability from the outside in.

Anthropic — from the inside out

Mechanistic interpretability

In early 2026, Anthropic published research on how large language models represent emotion concepts internally — and how those representations causally drive model behavior, including outputs that appear misaligned. They reached this through mechanistic interpretability: looking inside.

↗ "Emotion Concepts and their Function in a Large Language Model"

Two research programs. Opposite directions. Same phenomenon at the center.

Ran Amar

Founder, LensLab

Ran Amar is the founder of LensLab — a perspectives research lab working at the intersection of human cognition, emotional language, and machine intelligence.

His work begins with a constraint that turns out to be surprisingly generative: the way a story is told changes what the story is. Observer and observed are not cleanly separable. This isn't a philosophical position — it's a design problem.

The framework he calls "by the way of the artist" treats perspective as the primary variable — not background, not input, but the operating system beneath how we make meaning. Understanding how perspectives are structured, how they compress and transfer and mutate, is the core of the research.

LensLab's experiments probe this from several angles: agent behavior under social pressure, emotional language inheritance in large language models, the cognitive infrastructure beneath journaling. The thread is always the same: what does it mean to see, and what do our tools inherit from the way we answer that?

Ran works independently. LensLab is the lab.

Links