
Programs, proofs, and why I care

When I’m skating, surfing, or trying to learn anything, I need to understand how it all works, down to its core, until nothing feels like magic anymore.

For instance, if I’m learning a new skate trick, I don’t just throw myself at it and hope for luck. I think about it first. I picture the forces, the angles, the timing. Only then, after sitting with the mechanics for a while, do I try it myself. But mental models aren’t perfect at first; I keep refining them by studying each execution. If I have the privilege of skating with someone who can do the trick well, I’ll ask them to do it so I can check whether reality aligns with my mental model. If I land the trick by accident, it doesn’t count, not to me, not in my book. I need to know that I understand it. That I can replicate it. That I have it. That’s when it counts.

This is how my brain works, so it makes sense that I ended up researching Programming Languages and Formal Methods. I can’t settle for programs that just work; I need to know the story behind their correctness. To hold proof in my hands, not just results.

But what does that actually mean, Programming Languages and Formal Methods?


Programming Languages research isn’t about learning Python or building the next JavaScript. It’s about understanding the fundamental structures that make programming possible. We treat programs as mathematical objects. We ask questions like: What does this program mean? How can we be sure it behaves the way we expect? Can we write tools that help us reason about our code, or even write the code for us?
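One way "programs as mathematical objects" becomes concrete: give a tiny language a meaning function. Here is a toy evaluator in Python, my own illustration rather than any standard tool, that maps each syntax tree to the number it denotes:

```python
# A toy language of arithmetic expressions, plus a "meaning"
# function: an evaluator mapping each syntax tree to a number.

def meaning(expr):
    if isinstance(expr, int):
        return expr                     # a literal denotes itself
    op, a, b = expr                     # compound node: (op, left, right)
    if op == "add":
        return meaning(a) + meaning(b)  # meaning is compositional
    if op == "mul":
        return meaning(a) * meaning(b)
    raise ValueError(f"unknown operator: {op}")

# (2 + 3) * 4 as a syntax tree:
tree = ("mul", ("add", 2, 3), 4)
```

The key property, and the reason this counts as mathematics, is compositionality: the meaning of a whole expression is built only from the meanings of its parts.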


That’s where Formal Methods come in. They’re the mathematical techniques we use to model, specify, and verify software systems (programs, essentially). Think of them as tools that help you not only write code, but prove that your code is correct, down to the very logic that runs it. 
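A crude way to feel what "specify and verify" means: state a specification as a property, then check it over a bounded input space. The sketch below, in Python with made-up helper names, is only bounded checking; real formal methods go further and prove the property for all inputs:

```python
from itertools import product

# Specification for a sorting function: the output is ordered and
# is a permutation (same multiset) of the input.
def meets_spec(sort, xs):
    ys = sort(xs)
    return ys == sorted(ys) and sorted(xs) == sorted(ys)

# Bounded verification: check the spec on every list of length <= 3
# over a small value domain. A real proof would cover *all* inputs.
def verify_bounded(sort, domain=range(3), max_len=3):
    return all(
        meets_spec(sort, list(xs))
        for n in range(max_len + 1)
        for xs in product(domain, repeat=n)
    )

ok = verify_bounded(sorted)  # Python's built-in sort meets the spec
```

Notice that the specification says nothing about *how* to sort; it only says what the result must look like. That gap between "what" and "how" is exactly where verification, and later synthesis, lives.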


This isn’t theory for theory’s sake. It’s how airplanes fly safely, how medical devices stay reliable, how critical infrastructure keeps running without crashing. But beyond the stakes, it’s also just... beautiful! A program that proves something about another program? That’s what I work on!


Of course, this area of research covers a huge range of problems, and many great researchers are busy solving them. But here, I want to introduce you to the problems that have captured my attention (and my time) the most. The first problem I will introduce is called recursive program synthesis, and it’s the one I’ve spent the most time with, nearly two years now!


At a high level, program synthesis is about getting machines to write code for us. You specify what a program should do, usually as examples, logical constraints, or a type, and the system figures out how to do it. You’re not just generating code that seems to work. You’re searching a space of possible programs, guided by logic, types, and proofs, to find one that meets the specification and terminates correctly on all valid inputs. It's like the machine truly understands the structure of the problem. It’s beautiful!
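To make the search idea concrete, here is a minimal sketch of enumerative synthesis from input–output examples, written in Python. This is a toy illustration of the search loop only; real synthesizers prune the space with types, logic, and proofs rather than enumerating blindly:

```python
# Toy enumerative program synthesis: grow small arithmetic
# expressions over a variable x until one matches every example.

def synthesize(examples, rounds=2):
    """Find an expression agreeing with all (input, output) pairs."""
    # Grammar: e ::= x | 0 | 1 | e + e | e * e
    exprs = [("x", lambda x: x), ("0", lambda x: 0), ("1", lambda x: 1)]
    for _ in range(rounds):
        new = []
        for s1, f1 in exprs:
            for s2, f2 in exprs:
                new.append((f"({s1} + {s2})",
                            lambda x, f1=f1, f2=f2: f1(x) + f2(x)))
                new.append((f"({s1} * {s2})",
                            lambda x, f1=f1, f2=f2: f1(x) * f2(x)))
        exprs += new
        for s, f in exprs:
            if all(f(i) == o for i, o in examples):
                return s, f  # first program meeting the spec
    return None

# Specification by examples: we want 2 -> 5, 3 -> 7, 4 -> 9.
expr, fn = synthesize([(2, 5), (3, 7), (4, 9)])
```

The returned program agrees with every example, and because the search respects the grammar, what comes back is a structured expression you can read, not an opaque blob.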

But that’s just the foundation; now let’s zoom in on what I actually work on. I don’t work on just any kind of program; I focus on recursive ones: programs that solve problems by breaking them into smaller versions of themselves. Recursive programs are everywhere: sorting algorithms, interpreters, tree traversals, mathematical functions... They’re elegant, expressive, and notoriously tricky to get right. My work asks not only whether we can automatically generate recursive programs and guarantee they’re correct, but, more specifically: what happens when we need more than one recursive function to solve a problem?
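For a concrete picture of the kind of program I mean, here is merge sort in Python, purely as an illustration of the recursive shape. Note that it already needs a second function, merge, to combine the results of its recursive calls:

```python
# A classic recursive target: merge sort splits the problem into
# smaller versions of itself, then combines the results.

def merge_sort(xs):
    if len(xs) <= 1:                  # base case: trivially sorted
        return xs
    mid = len(xs) // 2
    left = merge_sort(xs[:mid])       # recursive calls on smaller inputs
    right = merge_sort(xs[mid:])
    return merge(left, right)         # a helper combines the halves

def merge(a, b):
    # Interleave two sorted lists into one sorted list.
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    return out + a[i:] + b[j:]
```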


For instance, to multiply, you need to know how to add. Addition becomes an auxiliary function to multiplication. But current systems can’t synthesize auxiliary functions on the fly. That means if the synthesizer doesn’t already know how to add, it can’t learn to multiply. My current project focuses on changing that: I’m working on a system that can learn to multiply without knowing how to add first, eliminating the need to hand the synthesizer auxiliary functions up front. The burden shifts from the programmer understanding the problem’s core structure to the synthesizer discovering it. A computer that finds patterns in patterns. Personally, that feels like mathematically grounded magic.
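To see the add/multiply dependency concretely, here is a Peano-style sketch in Python. These are illustrative definitions I wrote for this post, not output from my system:

```python
# Multiplication defined recursively on natural numbers, with
# addition as its auxiliary function: mul cannot be written this
# way unless add already exists.

def add(m, n):
    # add(m, n) = n if m == 0, else 1 + add(m - 1, n)
    return n if m == 0 else 1 + add(m - 1, n)

def mul(m, n):
    # mul(m, n) = 0 if m == 0, else add(n, mul(m - 1, n))
    return 0 if m == 0 else add(n, mul(m - 1, n))
```

Given only input–output examples of mul, a synthesizer that cannot invent add has no way to build this program; that missing piece is exactly what my project is after.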


Other problems I’m interested in are automated deduction and automated theorem proving. Both involve reasoning about programs using formal logic. In automated deduction, the goal is to build systems that can derive new facts from existing ones, much like a human mathematician proving lemmas from axioms. In automated theorem proving, we go further: we ask the machine to construct full formal proofs of complex properties, such as verifying that a program satisfies its specification for all possible inputs. These techniques are at the core of building trustworthy software, where correctness isn’t just tested; it’s proved!
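As a tiny taste of automated deduction, here is a naive forward-chaining loop in Python that derives new facts from rules until nothing new appears. The facts and rules are made up for illustration; real provers are vastly more sophisticated:

```python
# Naive forward chaining over Horn-style rules: repeatedly fire any
# rule whose premises all hold, until no new facts appear.

def deduce(facts, rules):
    """rules: iterable of (premises, conclusion) pairs."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= known and conclusion not in known:
                known.add(conclusion)   # derived a new fact
                changed = True
    return known

# A hypothetical rule set: the classic Socrates example.
rules = [
    ({"human(socrates)"}, "mortal(socrates)"),
    ({"mortal(socrates)", "human(socrates)"}, "finite(socrates)"),
]
derived = deduce({"human(socrates)"}, rules)
```

The loop reaches a fixed point: every fact derivable from the axioms by these rules ends up in the result, which is the "lemmas from axioms" picture in miniature.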


These are the kinds of questions that keep me up at night, and the reason I love what I do. If any of this sparks your curiosity, welcome to my blog. You’re my kind of person.
