Janu Verma

I study how machines learn, remember, and reason. I strive to understand things deeply — and share what I find. Over the last few years, my focus has shifted toward building learning systems that reason, adapt, and improve through feedback. Currently, I am a Principal Applied Scientist working on AI personalization at Microsoft.


Thinking in public

Deep technical writing on the systems powering modern AI. I write about whatever I am trying to understand — agents, RL, recommendations, LLMs, sometimes life. The thread is rigor, not topic. Writing is how I understand the world. It's how I clarify my thinking, test my assumptions, and connect with others. It's also how I document the journey — the successes, failures, and lessons learned along the way.

RecSys After LLMs: Four Paradigms for What Comes Next

Building a Data Analysis Agent

Multi-Turn Tool Use with RL

Fine-tuning LLMs using Reinforcement Learning

LLMs for Recommender Systems

All posts on Substack

The journey is everything

Janu Verma

I started where the abstractions are purest — Cambridge, studying the mathematics of string theory and geometry. Then I spent three years in a math PhD before dropping out to build things that matter now.

That search took me through computational biology at Cornell (modeling crop genetics), IBM Research (drug discovery, clinical NLP), and eventually into the systems that touch everyday life — fraud detection at Mastercard covering a billion transactions, and web-scale personalized recommendations at Hike and Microsoft. Along the way, I've published research in venues like Nature Genetics, KDD, ECIR, and RecSys.

Today at Microsoft, I work on M365 Copilot personalization. The problems I care about: how do AI systems learn user preferences? How do they reason across multi-turn conversations? How do we make user context and memory actually work?

The math background never left. It shaped how I think about representation, structure, and abstraction in ML. It shows up in my need to actually understand something before I trust it.

I have always been drawn to hard problems in complex systems, whether in algebraic geometry, crop genetics, fraud networks, or how AI learns what a person actually wants. The domain changes, but the way of thinking does not.

I use writing as a tool for growth. By articulating my ideas, I hold myself accountable to continuous learning and improvement. It's a way to push the boundaries of my knowledge and share that growth with others. Writing helps me make sense of the complex world of AI and my place within it.

When I'm not reading papers, I'm probably making pour-over videos, exploring a recipe, or thinking too much about what to wear.

Speaking & Training

I speak about AI that works in practice — the gap between research and production, building systems at scale, and what's actually changing in ML.

Invite me to speak
Topics I speak about

LLMs & Agents

From fine-tuning with RL to building agents that actually work — tool use, memory, and multi-turn reasoning.

Recommendations & Personalization

How LLMs are reshaping RecSys, semantic approaches, and building AI that understands user intent.

Research to Production

Bridging the gap — shipping ML at billion-scale, what breaks, what works, and how to build teams that deliver.

Selected Talks

LLMs in Recommender Systems

PyData London

Transformer Models in Recommendation Systems

PyData London

Graph Embedding Methods for Recommendation Systems

Data Hacks Summit, Analytics Vidhya

Transfer Learning in NLP

PyData Delhi

Beyond QWERTY: Solving the Input Problem of India

Future of Work, YourStory

Connect
Say Hello

Let's talk

Always interested in conversations about AI research, interesting problems, or potential collaborations.