Interactive Theorem Proving using Lean, Winter 2026/27

4. Type Theory🔗

Lean's foundation is dependent type theory. Every term has a type; every proof is a term whose type is the proposition it proves. This part unpacks the consequences of that single design choice:

  • Curry-Howard: how propositions become types and proofs become terms, so that → is both function arrow and implication.

  • Dependent types: types that depend on values, giving us Σ, Π, Subtype, Vector n, and the rest of Lean's type-level vocabulary.

  • Universes and axioms: why Type cannot contain itself, and which axioms (funext, propext, Classical.choice) Lean adds on top of the kernel.

  • Structures: bundling data and properties into named-field records, and the extends mechanism that Mathlib uses to build its algebraic hierarchy.

  • Typeclasses: how Lean's instance resolution lets you define generic operations like + once and reuse them across types.

4.1. The Curry-Howard Correspondence🔗

The Curry-Howard correspondence is one of the most profound ideas in the foundations of mathematics and computer science. It establishes a deep connection between logic and type theory: propositions correspond to types, and proofs correspond to terms (programs) of those types. In Lean 4, this correspondence is not merely a theoretical curiosity -- it is the very foundation on which the proof assistant is built.

4.1.1. Historical context🔗

The correspondence is named after Haskell Curry and William Alvin Howard, but its roots go deeper:

  • Haskell Curry (1934) observed that the types of combinators in combinatory logic correspond to axiom schemes in propositional logic.

  • William Howard (1969, published 1980) extended this to a full correspondence between natural deduction and simply typed lambda calculus.

  • Nicolaas Govert de Bruijn independently discovered the correspondence while developing the Automath system (1968), one of the first proof checkers.

  • Per Martin-Löf (1971 onwards) developed intuitionistic type theory, extending the correspondence to predicate logic with dependent types.

Lean's type theory is a descendant of Martin-Löf's work, enriched with features from the Calculus of Inductive Constructions (as used in Coq).

4.1.2. Propositions as types, proofs as terms🔗

In Lean, every proposition P : Prop is a type. A proof of P is a term h : P -- that is, an inhabitant of the type P. If a type is inhabited, the corresponding proposition is true; if it is empty, the proposition is false.

This idea is sometimes called the BHK interpretation (Brouwer-Heyting-Kolmogorov), which gives a constructive meaning to the logical connectives. Let us see how each connective maps to a type-theoretic construction.

4.1.3. Implication = function type🔗

The most fundamental instance of the correspondence: implication corresponds to the function type.

If P and Q are propositions, then P → Q is the type of functions from P to Q. A proof of P → Q is a function that, given a proof of P, produces a proof of Q.

Here is the same theorem proved in two ways -- by tactic and by term:

example (P Q : Prop) (hP : P) (hPQ : P → Q) : Q := by
  exact hPQ hP

-- Term proof: just function application!
example (P Q : Prop) (hP : P) (hPQ : P → Q) : Q := hPQ hP

A proof of P → Q → R is a function of two arguments (curried):

example (P Q R : Prop) : (P → Q) → (Q → R) → (P → R) := by
  intro hPQ hQR
  intro hP
  exact hQR (hPQ hP)

-- Term proof: composition of functions
example (P Q R : Prop) : (P → Q) → (Q → R) → (P → R) :=
  fun hPQ hQR hP => hQR (hPQ hP)

4.1.4. Conjunction = product type🔗

The conjunction P ∧ Q corresponds to the product type P × Q (more precisely, it is defined as a structure with two fields). A proof of P ∧ Q is a pair ⟨hP, hQ⟩ consisting of a proof of P and a proof of Q.

example (P Q : Prop) (hP : P) (hQ : Q) : P ∧ Q := by
  exact ⟨hP, hQ⟩

-- Term proof: just construct the pair
example (P Q : Prop) (hP : P) (hQ : Q) : P ∧ Q := ⟨hP, hQ⟩

-- Eliminating a conjunction: projections
example (P Q : Prop) (h : P ∧ Q) : P := h.1
example (P Q : Prop) (h : P ∧ Q) : Q := h.2

Note that And.intro is the constructor, and And.left / And.right (equivalently .1 / .2) are the projections -- exactly like a product type.

4.1.5. Disjunction = sum type🔗

The disjunction P ∨ Q corresponds to the sum type (coproduct). A proof of P ∨ Q is either a proof of P (tagged with Or.inl) or a proof of Q (tagged with Or.inr).

example (P Q : Prop) (hP : P) : P ∨ Q := by
  left
  exact hP

-- Term proof: inject into the left summand
example (P Q : Prop) (hP : P) : P ∨ Q := Or.inl hP

-- Eliminating a disjunction: case analysis
example (P Q R : Prop) (h : P ∨ Q) (hPR : P → R) (hQR : Q → R) : R :=
  h.elim hPR hQR

4.1.6. Negation = function to False🔗

Negation ¬P is defined as P → False. That is, a proof of ¬P is a function that takes a hypothetical proof of P and derives a contradiction.

example (P : Prop) : (¬P) = (P → False) := rfl

-- Tactic proof
example (P : Prop) (hP : P) (hnP : ¬P) : False := by
  exact hnP hP

-- Term proof: function application
example (P : Prop) (hP : P) (hnP : ¬P) : False := hnP hP

-- Ex falso: from False, anything follows
example (P : Prop) (h : False) : P := h.elim

4.1.7. Universal quantifier = dependent function type (Pi type)🔗

The universal quantifier ∀ (x : α), P x corresponds to the dependent function type (Pi type) (x : α) → P x. A proof is a function that, for each x : α, produces a proof of P x.

example : ∀ (n : ℕ), n = n := by
  intro n
  rfl

-- Term proof: a dependent function (lambda)
example : ∀ (n : ℕ), n = n := fun n => rfl

Notice that in Lean, ∀ and the dependent arrow → are actually the same thing. When the codomain does not depend on the input, the dependent function type degenerates to the ordinary function type.

4.1.8. Existential quantifier = dependent pair type (Sigma type)🔗

The existential quantifier ∃ (x : α), P x corresponds to the dependent pair type. A proof consists of a witness a : α together with a proof that P a holds.

example : ∃ (n : ℕ), n > 0 := by
  exact ⟨1, Nat.one_pos⟩

-- Term proof: construct the dependent pair
example : ∃ (n : ℕ), n > 0 := ⟨1, Nat.one_pos⟩

-- Another example
example : ∃ (n : ℕ), n + n = 10 := ⟨5, rfl⟩

Note: There is an important distinction between ∃ (which lives in Prop) and Σ (which lives in Type). We discuss this further in the chapter on dependent types.

4.1.9. Nonempty and propositional truncation🔗

The Curry-Howard view says "to prove ∃ x, P x, give an x and a proof of P x". But sometimes you want to express that some element of a type α exists, without committing to a specific witness. For this Lean has the typeclass

#check @Nonempty
-- Nonempty : Sort u_1 → Prop

Nonempty α is a proposition: it asserts only that α has at least one element, but it does not give you one. This is the propositional truncation of α: it forgets the witness, keeping only the bare existence claim.

To extract data from a Nonempty (i.e., to actually obtain a term of α), you must use Classical.choice, an axiom of Lean's type theory:

#check @Classical.choice
-- Classical.choice : {α : Sort u_1} → Nonempty α → α

This is a real axiom: constructively, knowing only that α is nonempty does not let you pick an element. Classical.choice turns the propositional fact into honest data, at the cost of being nonconstructive.

example {α : Type} (a : α) : Nonempty α := ⟨a⟩

noncomputable example {α : Type} (h : Nonempty α) : α := Classical.choice h

This pair (Nonempty.intro, Classical.choice) is the prototypical example of moving data between Type and Prop. In Mathlib it is the foundation for many "exists, therefore choose one" arguments, and the reason noncomputable appears so often.

A related construction is Squash α, the explicit propositional truncation of α -- it has a single constructor Squash.mk a for any a : α, and sits at the boundary between Prop and Type.
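A small sketch of this boundary, using core Lean's Squash (the examples themselves are ours):

```lean
-- Squash.mk hides the witness:
example {α : Type} (a : α) : Squash α := Squash.mk a

-- All squashed values are equal, by Quot.sound with the trivial relation:
example {α : Type} (a b : α) : Squash.mk a = Squash.mk b := Quot.sound trivial
```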

4.1.10. How tactic proofs build terms behind the scenes🔗

When you write a tactic proof in Lean, the tactic block constructs a term behind the scenes. Every tactic manipulates the proof state and ultimately produces a term of the goal type. You can use show_term (or look at the output of #print) to see what term a tactic proof generates.

example (P Q : Prop) (hP : P) (hQ : Q) : P ∧ Q := by
  show_term exact ⟨hP, hQ⟩
-- Try this: exact ⟨hP, hQ⟩

Here is a more complex example where the term proof is less obvious:

example (P Q R : Prop) : (P ∧ Q) → R → (R ∧ P) := by
  intro ⟨hP, _⟩ hR
  exact ⟨hR, hP⟩

-- Equivalent term proof
example (P Q R : Prop) : (P ∧ Q) → R → (R ∧ P) :=
  fun ⟨hP, _⟩ hR => ⟨hR, hP⟩

4.1.11. Summary table🔗

Here is a summary of the Curry-Howard dictionary:

Logic                | Type Theory              | Lean notation
---------------------|--------------------------|----------------------
Proposition          | Type                     | P : Prop
Proof                | Term (inhabitant)        | h : P
Implication P → Q    | Function type            | P → Q
Conjunction P ∧ Q    | Product type             | P ∧ Q / And P Q
Disjunction P ∨ Q    | Sum type                 | P ∨ Q / Or P Q
True                 | Unit type                | True
False                | Empty type               | False
Negation ¬P          | Function to empty        | P → False
∀ (x : α), P x       | Dependent function (Π)   | (x : α) → P x
∃ (x : α), P x       | Dependent pair (Σ)       | ⟨a, ha⟩ : ∃ x, P x

Understanding this correspondence is key to becoming fluent in Lean: when you write a tactic proof, you are really constructing a term; when you write a term proof, you are programming a function. The two perspectives are equivalent, and switching between them often leads to deeper understanding.

4.2. Dependent Types in Depth🔗

Dependent types are the feature that distinguishes Lean's type system from those of most mainstream programming languages. In a dependently typed language, types can depend on values. This single idea has far-reaching consequences: it allows us to express precise specifications, encode mathematical statements, and catch entire classes of errors at compile time.

4.2.1. What makes dependent types special🔗

In a simply typed language (like Haskell without extensions, or basic OCaml), the type of a function's output cannot depend on the value of its input -- only on the type of its input. For example, you can write f : ℕ → ℕ, but you cannot write a type that says "a list whose length equals the input n."

In a dependently typed language, you can do this. The type of the output may mention the input value. For example:

  • (n : ℕ) → Fin n → ℝ is the type of a function that takes a natural number n and then an element of {0, 1, ..., n-1}, returning a real number.

  • (n : ℕ) → Vector ℝ n is the type of a function that, given n, returns a vector of n real numbers.
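A minimal sketch in the spirit of the first bullet, using ℕ entries instead of ℝ (the name constAt is ours):

```lean
-- The type of the second argument depends on the value of the first:
def constAt : (n : ℕ) → Fin n → ℕ := fun n _ => n

#eval constAt 5 2  -- the literal 2 is checked to be a valid index below 5
```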

This expressiveness is what allows Lean to serve simultaneously as a programming language and a theorem prover.

4.2.2. Pi types: dependent function types🔗

The Pi type (also written Π-type) is the type of dependent functions. In Lean, we write it as:

(x : α) → β x

where β : α → Sort u is a type family indexed by α. For each value a : α, the function returns a term of type β a. When β does not actually depend on x, this reduces to the ordinary function type α → β.

In Lean, ∀ is notation for Pi types when the codomain is a Prop:

#check (∀ (n : ℕ), n = n)  -- Prop
#check ((n : ℕ) → n = n)   -- same Prop

Here is a concrete example of a dependent function:

def allPositiveSucc : (n : ℕ) → 0 < n + 1 :=
  fun n => Nat.succ_pos n

Another important example: a function that, given a type, returns the identity function on that type. This is a function whose return type depends on its input:

def polyId : (α : Type) → α → α := fun _ a => a

#check polyId ℕ 42        -- ℕ
#check polyId String "hi" -- String

4.2.3. Sigma types: dependent pair types🔗

The Sigma type (also written Σ-type) is the type of dependent pairs. In Lean, we write:

Σ (x : α), β x

or equivalently (x : α) × β x. A term of this type is a pair ⟨a, b⟩ where a : α and b : β a. Note that the type of the second component depends on the value of the first component.

def exampleSigma : { n : ℕ // n % 2 = 0 } := ⟨4, by decide⟩

-- Accessing the components
#check exampleSigma.1  -- ℕ
#check exampleSigma.2  -- 4 % 2 = 0

When β does not depend on x, the Sigma type reduces to the ordinary product type α × β.
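Here is a pair whose second component's type genuinely depends on the first component's value (a sketch; the name lenAndVec and the ![...] vector notation assume the usual Mathlib setup):

```lean
-- A length bundled together with a vector of exactly that length:
def lenAndVec : Σ (n : ℕ), Fin n → ℕ := ⟨3, ![7, 8, 9]⟩

#eval lenAndVec.1                 -- the length, 3
#eval lenAndVec.2 ⟨0, by decide⟩  -- the entry at index 0
```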

4.2.4. Difference between Sigma and Exists🔗

This is a subtle but important distinction in Lean:

  • Σ (x : α), β x lives in Type -- you can extract the witness and use it computationally.

  • ∃ (x : α), P x lives in Prop -- the witness is "erased" and cannot be used in computations (only in proofs).

This reflects the principle of proof irrelevance: in Prop, the specific proof does not matter, only whether the proposition is true or false. In Type, the data matters.

def sigmaExample : { n : ℕ // n > 5 } := ⟨7, by norm_num⟩

#eval sigmaExample.1  -- 7

-- Exists: we CANNOT extract the witness for computation
-- (we can only use it inside proofs)
example : ∃ (n : ℕ), n > 5 := ⟨7, by norm_num⟩

Mathematically, Σ is like a disjoint union (indexed coproduct), while ∃ is the propositional truncation thereof. In practice:

  • Use ∃ when you only need to know that a witness exists (typical in mathematics).

  • Use Σ when you need to actually compute with the witness (typical in programming).

4.2.5. Vectors indexed by length🔗

A classic example of dependent types is the type of vectors (lists of a fixed length). In Mathlib, one way to represent a vector of length n with entries in α is as a function Fin n → α, where Fin n is the type with exactly n elements {0, 1, ..., n-1}.

def myVec : Fin 3 → ℕ := ![10, 20, 30]

#eval myVec 0  -- 10
#eval myVec 1  -- 20
#eval myVec 2  -- 30

The advantage of this representation is that indexing is always safe: you cannot access index 5 of a 3-element vector, because 5 is not a term of type Fin 3. The type system prevents out-of-bounds errors at compile time.

4.2.6. Matrices as dependent types🔗

Matrices are a natural extension. An m × n matrix with entries in α can be represented as a function Fin m → Fin n → α, or equivalently as Matrix (Fin m) (Fin n) α in Mathlib:

def myMatrix : Fin 2 → Fin 3 → ℕ := ![![1, 2, 3], ![4, 5, 6]]

#eval myMatrix 0 1  -- 2
#eval myMatrix 1 2  -- 6

In Mathlib, matrix multiplication is only defined when the dimensions match. If A is an m × n matrix and B is an n × p matrix, then A * B is an m × p matrix. The dependent types enforce that the inner dimensions agree -- a dimension mismatch is a type error, caught at compile time.
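A sketch of how the types enforce this (assuming Mathlib's Matrix type and ![...] notation; the names A and B are ours):

```lean
def A : Matrix (Fin 2) (Fin 3) ℕ := ![![1, 2, 3], ![4, 5, 6]]
def B : Matrix (Fin 3) (Fin 2) ℕ := ![![1, 0], ![0, 1], ![1, 1]]

#check A * B  -- Matrix (Fin 2) (Fin 2) ℕ : the inner dimensions agree
-- #check A * A  -- type error: cannot multiply 2 × 3 by 2 × 3
```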

4.2.7. Type families🔗

A type family is simply a function that returns a type. Type families are pervasive in Lean and Mathlib:

def BoolFamily : Bool → Type
  | true => ℕ
  | false => String

-- Dependent function using the type family
def boolFamilyExample : (b : Bool) → BoolFamily b
  | true => (42 : ℕ)
  | false => ("hello" : String)

In mathematical formalization, type families appear everywhere. For instance, the fiber of a function f : α → β over a point b : β is the subtype {a : α // f a = b}, which is a type family indexed by β.

def fiber (f : ℕ → ℕ) (b : ℕ) : Type := {a : ℕ // f a = b}

-- An element of the fiber of (· * 2) over 10
def fiberExample : fiber (· * 2) 10 := ⟨5, by norm_num⟩

4.2.8. Subtypes🔗

The subtype {x : α // P x} is a special case of a Sigma type where P : α → Prop. It represents elements of α satisfying predicate P. Since the second component is a proof (in Prop), subtypes enjoy proof irrelevance: two elements of {x : α // P x} are equal if and only if their first components are equal.

def BigNat : Type := {n : ℕ // n > 5}

def seven : BigNat := ⟨7, by norm_num⟩
def ten : BigNat := ⟨10, by norm_num⟩

-- We can extract the value
#eval seven.val  -- 7

This is how many mathematical objects are defined in Mathlib. For example, the type of prime numbers could be defined as {n : ℕ // Nat.Prime n}.
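A sketch of that last example (Nat.prime_two is Mathlib's lemma that 2 is prime; the names PrimeNat and two are ours):

```lean
def PrimeNat : Type := {n : ℕ // Nat.Prime n}

def two : PrimeNat := ⟨2, Nat.prime_two⟩

#eval two.val  -- 2
```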

4.2.9. Why dependent types matter🔗

Dependent types make Lean strictly more expressive than simply-typed languages like Haskell or OCaml. Here are some things you can express with dependent types that you cannot express in simply-typed languages:

  1. Length-indexed collections: The type system guarantees that head is never called on an empty list, that matrix dimensions match in multiplication, etc.

  2. Precise specifications: You can write a sorting function whose type guarantees that the output is sorted and is a permutation of the input.

  3. Mathematical theorems: The type ∀ (n : ℕ), ∃ (p : ℕ), p > n ∧ Nat.Prime p expresses Euclid's theorem (infinitely many primes). A term of this type is a proof.

  4. Safe APIs: You can define a division function (a : ℤ) → (b : ℤ) → b ≠ 0 → ℤ that requires a proof of non-zero denominator. No runtime exceptions possible.

def safeDiv (a b : ℤ) (hb : b ≠ 0) : ℤ := a / b

-- This works:
#eval safeDiv 10 3 (by norm_num)  -- 3

-- This would be a type error (uncomment to see):
-- #eval safeDiv 10 0 (by norm_num)  -- norm_num cannot prove 0 ≠ 0

The price of this expressiveness is that type checking becomes undecidable in general (though Lean uses various heuristics to make it practical). But for mathematics, the payoff is enormous: the type system itself becomes a logic in which we can state and prove theorems.

4.3. Universes, Axioms, and Foundations🔗

Lean's type theory rests on a carefully designed system of universes, a small number of axioms, and a powerful logic that can be used both constructively and classically. In this chapter, we examine these foundational aspects.

4.3.1. The universe hierarchy🔗

In Lean, every type itself has a type. To avoid paradoxes (like Russell's paradox), types are organized into a hierarchy of universes:

  • Prop is the universe of propositions. It is also called Sort 0.

  • Type 0 (or simply Type) is the universe of "ordinary" types. It is also called Sort 1.

  • Type 1 is the universe of types whose elements are themselves types in Type 0. It is Sort 2.

  • In general, Type n is Sort (n + 1).

The key rule is that Sort n : Sort (n + 1), so:

#check (Prop : Type 0)    -- Prop : Type
#check (Type 0 : Type 1)  -- Type : Type 1
#check (Type 1 : Type 2)  -- Type 1 : Type 2

This hierarchy prevents Type : Type, which would lead to Girard's paradox (a type-theoretic analogue of Russell's paradox).

4.3.2. Prop vs Type: proof irrelevance🔗

The universe Prop has a special property that distinguishes it from all Type n: proof irrelevance. This means that if P : Prop and h1 h2 : P, then h1 = h2. In other words, all proofs of the same proposition are considered equal.

This is a deliberate design choice. It means:

  • You cannot pattern-match on a proof to extract computational data.

  • Proofs can be erased at compile time without affecting program behavior.

  • Two mathematical proofs of the same theorem are interchangeable.

example (P : Prop) (h1 h2 : P) : h1 = h2 := rfl

-- This is why ∃ lives in Prop: you cannot extract the witness computationally
-- If you need the witness, use Σ (which lives in Type)

The distinction between Prop and Type corresponds to the mathematical distinction between "asserting that something exists" (Prop) and "constructing a specific example" (Type).

4.3.3. Universe polymorphism🔗

Many definitions in Lean need to work across multiple universe levels. For this, Lean supports universe polymorphism. You can declare universe variables and use them in definitions:

universe u v

-- A universe-polymorphic identity function
def myId' (α : Type u) (a : α) : α := a

-- Works at any universe level
#check myId' ℕ 42        -- ℕ
#check myId' (Type 0) ℕ  -- Type

In Mathlib, many definitions are universe-polymorphic. For example, Set α works regardless of which universe α lives in. When you see {α : Type*}, the * means "at any universe level" -- this is Lean's way of automatically introducing a universe variable.
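A sketch of the Type* shorthand (the name idStar is ours):

```lean
-- Type* introduces a fresh universe variable automatically:
def idStar {α : Type*} (a : α) : α := a

#check @idStar          -- @idStar : {α : Type u_1} → α → α
#check idStar 42        -- α := ℕ, in Type 0
#check idStar (Type 3)  -- α := Type 4, one universe up
```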

4.3.4. The axioms of Lean🔗

Lean's kernel is built on the Calculus of Inductive Constructions, which by itself is quite weak. To support the mathematics we want to formalize, Lean adds a small number of axioms. Let us examine each one.

4.3.4.1. Function extensionality (funext)🔗

Two functions are equal if and only if they give the same output on every input. This is the axiom funext:

#check @funext
-- funext : ∀ {α : Sort u} {β : α → Sort v} {f g : (x : α) → β x},
--   (∀ (x : α), f x = g x) → f = g

example : (fun n : ℕ => n + 0) = (fun n : ℕ => n) := by
  funext n
  simp

Without funext, we could not prove that two functions are equal, even if they demonstrably agree everywhere. This principle is indispensable in mathematics. (Strictly speaking, in Lean 4 funext is not postulated as an axiom: it is proved from the quotient axiom. It still plays a foundational role.)

4.3.4.2. Propositional extensionality (propext)🔗

Two propositions are equal if and only if they are logically equivalent:

#check @propext
-- propext : ∀ {a b : Prop}, (a ↔ b) → a = b

-- Example: P ∧ True is the same proposition as P
example (P : Prop) : (P ∧ True) = P := by
  apply propext
  simp

This axiom is essential for working with sets (since Set α is defined as α → Prop, and we want equal predicates to give equal sets).
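For sets this shows up through Mathlib's Set.ext, which reduces set equality to a pointwise iff (under the hood a combination of funext and propext); a sketch:

```lean
example (s t : Set ℕ) (h : ∀ x, x ∈ s ↔ x ∈ t) : s = t := Set.ext h

-- Concretely: two different descriptions of the same set are equal
example : {n : ℕ | n ≥ 1} = {n : ℕ | n ≠ 0} := by
  ext n
  simp only [Set.mem_setOf_eq]
  omega
```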

4.3.4.3. The axiom of quotients🔗

Lean has built-in support for quotient types. Given a type α and an equivalence relation r on α, you can form the quotient type Quot r. The axiom Quot.sound says that if r a b, then Quot.mk r a = Quot.mk r b.

This is how many mathematical constructions are formalized: the integers as a quotient of ℕ × ℕ, the rationals as a quotient of ℤ × ℤ, the reals as a quotient of Cauchy sequences, and so on.

#check @Quot.mk
-- Quot.mk : {α : Sort u_1} → (r : α → α → Prop) → α → Quot r

#check @Quot.sound
-- Quot.sound : ∀ {α : Sort u_1} {r : α → α → Prop} {a b : α},
--   r a b → Quot.mk r a = Quot.mk r b

#check @Quot.lift
-- Quot.lift : {α : Sort u_1} → {r : α → α → Prop} → {β : Sort u_2} →
--   (f : α → β) → (∀ (a b : α), r a b → f a = f b) → Quot r → β
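As a toy illustration of Quot.mk, Quot.sound, and Quot.lift together (all names are ours): parity classes of natural numbers.

```lean
-- Two numbers are related when they have the same parity:
def parityRel (a b : ℕ) : Prop := a % 2 = b % 2

def Parity : Type := Quot parityRel

def Parity.mk (n : ℕ) : Parity := Quot.mk parityRel n

-- 1 and 3 are identified in the quotient, via Quot.sound:
example : Parity.mk 1 = Parity.mk 3 := Quot.sound rfl

-- Quot.lift defines a function on the quotient from one that respects the relation:
def Parity.isEven : Parity → Bool :=
  Quot.lift (fun n => n % 2 == 0) (fun a b h => by
    have hab : a % 2 = b % 2 := h
    simp [hab])

#eval Parity.isEven (Parity.mk 4)  -- true
```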

4.3.4.4. The axiom of choice (Classical.choice)🔗

The most powerful (and most controversial) axiom in Lean is the axiom of choice:

#check @Classical.choice
-- Classical.choice : ∀ {α : Sort u}, Nonempty α → α

Given a proof that a type is nonempty (i.e., inhabited), Classical.choice produces an actual element. This is a non-constructive axiom: it asserts existence without providing a construction.

From this single axiom (together with propext and Quot.sound), one can derive:

  • The law of excluded middle: ∀ (P : Prop), P ∨ ¬P

  • The axiom of choice in its more familiar set-theoretic form

  • Hilbert's epsilon operator

4.3.5. Constructive vs classical logic🔗

By default, Lean's type theory is constructive: proofs must provide explicit witnesses and constructions. The axiom of choice introduces classical reasoning, which allows non-constructive arguments.

4.3.5.1. Excluded middle🔗

In classical logic, every proposition is either true or false. This is expressed by the law of excluded middle:

#check @Classical.em
-- Classical.em : ∀ (p : Prop), p ∨ ¬p

-- With excluded middle, we can do case splits on any proposition
example (P : Prop) : ¬¬P → P := by
  intro hnnP
  by_contra hnP
  exact hnnP hnP

Without Classical.em, double negation elimination ¬¬P → P is not provable. This is one of the key differences between constructive and classical logic.

4.3.5.2. The Decidable typeclass🔗

Lean has a clever middle ground between constructive and classical logic: the Decidable typeclass. A proposition P is Decidable if there is an algorithm that determines whether P is true or false.

#check @Decidable
-- Decidable : Prop → Type

-- Many concrete propositions are decidable
#check (inferInstance : Decidable (2 + 3 = 5))  -- isTrue ...
#check (inferInstance : Decidable (2 + 3 = 6))  -- isFalse ...

When a proposition is Decidable, you can use if P then ... else ... in code without needing the axiom of choice. This is important for writing executable programs. The decide tactic can close goals about decidable propositions:

example : 2 + 3 = 5 := by decide

example : ¬(2 + 3 = 6) := by decide
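The if P then ... else ... pattern works for any decidable P; a sketch (the function classify is ours):

```lean
-- n > 100 has a Decidable instance, so this compiles to executable code:
def classify (n : ℕ) : String :=
  if n > 100 then "big" else "small"

#eval classify 7    -- "small"
#eval classify 500  -- "big"
```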

When you open Classical, every Prop becomes Decidable (non-constructively). This is convenient for proofs but means the resulting code is not executable.

4.3.5.3. When do you need Classical.choice?🔗

You need classical axioms when:

  • You want to do a case split on an arbitrary proposition (by_cases, by_contra)

  • You want to choose an element from a nonempty type without a constructive witness

  • You want to use the law of excluded middle

You do not need classical axioms when:

  • Working with decidable propositions (natural number arithmetic, finite structures)

  • Constructing specific terms (defining functions, data structures)

  • Proving things by direct construction or induction

In Mathlib, most proofs use classical logic freely (via open Classical), since the focus is on mathematical truth rather than computational content.
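For instance, by_cases silently invokes excluded middle for an arbitrary proposition (a sketch):

```lean
-- Case split on P, which need not be decidable: this uses classical logic
example (P Q : Prop) (h : P → Q) (h' : ¬P → Q) : Q := by
  by_cases hP : P
  · exact h hP
  · exact h' hP
```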

4.3.6. Comparison with ZFC set theory🔗

Most mathematicians are (at least implicitly) trained in ZFC set theory. Here are the key differences from Lean's type theory:

Feature            | ZFC                     | Lean
-------------------|-------------------------|-------------------------
Foundation         | Sets and membership (∈) | Types and terms (:)
Everything is a... | Set                     | Term of some type
3 ∈ 7              | Well-formed (and true!) | Type error
Functions          | Sets of ordered pairs   | Primitive notion
Equality           | Between any two sets    | Only within a type
Proof objects      | Not part of the theory  | First-class citizens
Computation        | Not built in            | Built in (terms reduce)
Type checking      | Not applicable          | Decidable (in practice)

A key practical difference: in ZFC, you can ask nonsensical questions like "is 3 ∈ π?" and get an answer. In Lean, comparing objects of different types is a type error, which catches many mistakes early.

The two foundations are of comparable strength for most of mathematics. Nearly everything that can be formalized in ZFC can also be formalized in Lean's type theory (and vice versa).

4.3.7. Brief mention of other type theories🔗

Lean's type theory is one of several:

  • Simply typed lambda calculus (Church, 1940): No dependent types. Foundation for Haskell, OCaml.

  • System F (Girard, Reynolds, 1970s): Adds polymorphism but not full dependent types.

  • Martin-Löf Type Theory (MLTT, 1971+): The origin of dependent types in proof assistants. Used in Agda.

  • Calculus of Inductive Constructions (CIC): Used in Coq/Rocq. Lean's type theory is closely related.

  • Homotopy Type Theory (HoTT, 2013): Interprets types as topological spaces and equality as paths. Adds the univalence axiom (due to Voevodsky): equivalent types are equal. This is incompatible with Lean's proof irrelevance for Prop, so HoTT uses different proof assistants (Agda, cubical Agda, or specialized Lean forks).

  • Cubical Type Theory (2015+): Makes HoTT computational by using a cube category to model paths.

For this course, Lean's type theory (CIC + proof irrelevance + quotients + choice) is the framework we work in. It is expressive enough to formalize virtually all of modern mathematics, as the Mathlib library demonstrates.

4.4. Structures and Records🔗

While inductive types let us define types with multiple constructors, many mathematical objects are better described as a collection of named fields. For example, a point in the plane has an x-coordinate and a y-coordinate. In Lean, we use structure for this.

4.4.1. Defining structures🔗

A structure is a special case of an inductive type with exactly one constructor and named fields. Here is a simple example:

structure Point where
  x : Float
  y : Float

This defines a type Point with two fields, x and y, both of type Float. We create values of type Point using anonymous constructor syntax or by naming the constructor explicitly:

def origin : Point := { x := 0.0, y := 0.0 }
def p1 : Point := ⟨1.0, 2.5⟩
def p2 : Point := Point.mk 3.0 4.0

All three syntaxes create a Point. The angle brackets ⟨...⟩ (typed \< and \>) are the anonymous constructor.

4.4.2. Accessing fields and dot notation🔗

We access fields using dot notation:

#eval p1.x  -- outputs 1.0
#eval p1.y  -- outputs 2.5

We can also define functions in the Point namespace, and then call them with dot notation:

def Point.distToOrigin (p : Point) : Float :=
  Float.sqrt (p.x * p.x + p.y * p.y)

#eval p2.distToOrigin  -- outputs 5.0

This works because Lean sees that p2 has type Point, so it looks for Point.distToOrigin.

4.4.3. Updating structures🔗

We can create a new structure value based on an existing one, changing only some fields. This uses the with keyword:

def p3 : Point := { p1 with y := 10.0 }
-- p3.x = 1.0, p3.y = 10.0

Since structures are immutable (as everything in functional programming), this creates a new Point rather than modifying p1.

4.4.4. Structures with default values🔗

Fields can have default values:

structure MyConfig where
  width : ℕ := 80
  height : ℕ := 24
  title : String := "Untitled"

def myConfig : MyConfig := { title := "My Window" }
-- myConfig.width = 80, myConfig.height = 24

4.4.5. Extending structures🔗

One structure can extend another, inheriting all of its fields:

structure Point3D extends Point where
  z : Float

def q : Point3D := { x := 1.0, y := 2.0, z := 3.0 }

#eval q.x  -- outputs 1.0 (inherited from Point)
#eval q.z  -- outputs 3.0

This is particularly important in Mathlib, where the algebraic hierarchy uses structure extension extensively. For example, CommRing extends Ring, which extends Semiring, and so on.
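A miniature version of that pattern (all names are ours, and the fields are far simpler than Mathlib's actual classes):

```lean
-- A toy two-level hierarchy built with extends:
structure MyMonoidStr (α : Type) where
  op : α → α → α
  e : α

structure MyGroupStr (α : Type) extends MyMonoidStr α where
  inv : α → α

-- Every MyGroupStr carries a MyMonoidStr, via the generated projection:
example (G : Type) (g : MyGroupStr G) : MyMonoidStr G := g.toMyMonoidStr
```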

4.4.6. Mathematical examples🔗

Structures are natural for representing mathematical objects. Here is a complex number type:

structure MyComplex where
  re : Float
  im : Float

def MyComplex.add (a b : MyComplex) : MyComplex :=
  { re := a.re + b.re, im := a.im + b.im }

def MyComplex.mul (a b : MyComplex) : MyComplex :=
  { re := a.re * b.re - a.im * b.im, im := a.re * b.im + a.im * b.re }

def MyComplex.norm (a : MyComplex) : Float :=
  Float.sqrt (a.re * a.re + a.im * a.im)

def i : MyComplex := { re := 0.0, im := 1.0 }
def oneComplex : MyComplex := { re := 1.0, im := 0.0 }

#eval (MyComplex.mul i i).re  -- outputs -1.0

Here is a structure representing a linear map between two types that have addition and scalar multiplication:

structure MyLinearMap (α β : Type) [Add α] [Add β] [HMul ℝ α α] [HMul ℝ β β] where
  toFun : α → β
  map_add : ∀ x y : α, toFun (x + y) = toFun x + toFun y

Notice that the structure contains both data (the function toFun) and a property (map_add). This pattern of bundling data with properties is fundamental to how Mathlib organizes mathematics.

4.4.7. Inductive types vs structures🔗

When should you use inductive and when structure?

  • Use structure when your type has a single constructor with named fields. Examples: points, complex numbers, algebraic structures.

  • Use inductive when your type has multiple constructors. Examples: natural numbers, lists, trees, Option, Bool.

A structure is syntactic sugar for an inductive type with one constructor. The definition

structure Point where
  x : Float
  y : Float

is essentially equivalent to

inductive Point where
  | mk : Float → Float → Point

but the structure version gives us named fields, dot notation, default values, and the extends mechanism.

4.4.8. Using deriving🔗

When we define a new type, Lean can automatically generate certain instances for us using deriving:

structure Student where
  name : String
  age : ℕ
deriving Repr

def alice : Student := { name := "Alice", age := 22 }

#eval alice  -- outputs { name := "Alice", age := 22 }

Without deriving Repr, the #eval command would not know how to print a Student. The deriving clause tells Lean to automatically create a Repr instance. Other commonly derived instances include BEq (Boolean equality), Hashable, and Inhabited.

structure Pair (α β : Type) where
  fst : α
  snd : β
deriving Repr, BEq, Hashable
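With the derived instances in hand, equality tests work out of the box (a sketch, assuming the Pair structure above; the derived BEq for Pair requires BEq instances for both component types):

#eval ({ fst := 1, snd := "a" } : Pair Nat String) == { fst := 1, snd := "a" } -- outputs true
#eval ({ fst := 1, snd := "a" } : Pair Nat String) == { fst := 2, snd := "a" } -- outputs false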

4.4.9. Namespaces🔗

Every structure automatically creates a namespace. When we defined Point, Lean created the namespace Point containing Point.mk, Point.x, and Point.y. We can add more definitions to this namespace:

namespace Point

def translate (p : Point) (dx dy : Float) : Point :=
  { x := p.x + dx, y := p.y + dy }

def scale (p : Point) (factor : Float) : Point :=
  { x := p.x * factor, y := p.y * factor }

end Point

#eval (p1.translate 10.0 20.0).x -- outputs 11.0
#eval (p1.scale 3.0).y -- outputs 7.5

Using namespaces keeps related functions organized and enables dot notation.

4.5. The Typeclass System🔗

Typeclasses are one of the most important features of Lean, and they are central to how Mathlib organizes mathematics. A typeclass is a way to associate operations and properties with types, and to have Lean find the right implementation automatically. If you have seen abstract algebra, typeclasses are the mechanism that lets Lean know that the integers form a ring, that the real numbers form a field, and so on.

4.5.1. The problem typeclasses solve🔗

Consider the + operator. It works on natural numbers, integers, real numbers, matrices, polynomials, and many other types. How does Lean know which + to use?

One approach would be to define different functions: addNat, addInt, addReal, etc. But that would be extremely tedious. Instead, Lean uses typeclasses: there is a single Add typeclass, and each type that supports addition provides an instance of Add.

When you write a + b, Lean looks at the types of a and b, finds the appropriate Add instance, and uses it. This process is called instance resolution and happens automatically.

4.5.2. Defining typeclasses🔗

A typeclass is defined using the class keyword. Here is a simplified version of how Add might be defined:

class MyAdd (α : Type) where
  myAdd : α → α → α

This says: for any type α, an instance of MyAdd α provides a function myAdd : α → α → α. The class keyword is almost the same as structure -- in fact, every class is internally a structure. The difference is operational:

  • A structure is constructed and passed explicitly: you write ⟨...⟩ or call its named projections.

  • A class is also a structure, but Lean's instance resolution will synthesize one for you whenever a function expects an [Add α] argument. You never write the instance argument by hand.

So the rule of thumb is: use structure when the value is part of the data the user manipulates (a Point, a Person, a RingHom); use class when the value is a canonical piece of structure attached to a type (a Group instance on ℤ, an Add instance on ℕ) and you want Lean to find it automatically.
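The distinction can be sketched side by side (AddOps, AddClass, sumWith, and sumAuto are hypothetical names, not from the Lean library):

structure AddOps (α : Type) where
  add : α → α → α

-- the structure version: the operations are passed explicitly
def sumWith (ops : AddOps α) (x y : α) : α := ops.add x y

class AddClass (α : Type) where
  add : α → α → α

-- the class version: instance resolution supplies the operations
def sumAuto [AddClass α] (x y : α) : α := AddClass.add x y

A caller of sumWith must construct and hand over an AddOps value; a caller of sumAuto writes nothing extra, provided an AddClass instance for the type has been registered.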

4.5.3. Creating instances🔗

We create instances using the instance keyword:

instance : MyAdd ℕ where
  myAdd := Nat.add

instance : MyAdd ℤ where
  myAdd := Int.add

Now we can use MyAdd.myAdd on natural numbers and integers, and Lean will automatically find the right instance.
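For example (assuming the two instances above):

#eval MyAdd.myAdd 2 3 -- Lean finds the MyAdd ℕ instance: outputs 5
#eval MyAdd.myAdd (-2 : ℤ) 3 -- Lean finds the MyAdd ℤ instance: outputs 1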

Let us define a custom type and give it an Add instance:

structure Vec2 where
  x : Float
  y : Float

instance : Add Vec2 where
  add a b := { x := a.x + b.x, y := a.y + b.y }

instance : ToString Vec2 where
  toString v := s!"({v.x}, {v.y})"

#eval (⟨1.0, 2.0⟩ : Vec2) + ⟨3.0, 4.0⟩ -- outputs (4.0, 6.0)

We defined two instances: Add Vec2 so we can use +, and ToString Vec2 so Lean can print Vec2 values.

4.5.4. The square bracket notation🔗

When a function needs a typeclass instance, we use square brackets [...] in its signature:

def doubleIt [Add α] (x : α) : α := x + x

#eval doubleIt 5 -- outputs 10
#eval doubleIt (3 : ℤ) -- outputs 6

The [Add α] argument tells Lean: "find an Add instance for α automatically." We do not need to pass it explicitly; Lean's instance resolution handles it.

You can name the instance if you need to refer to it:

def tripleIt [inst : Add α] (x : α) : α := inst.add x (inst.add x x)

But usually the unnamed form [Add α] is preferred, since Lean resolves it behind the scenes.

4.5.5. Inspecting typeclass arguments with @🔗

When working with Mathlib, it is often useful to see all the implicit and typeclass arguments of a lemma. The @ symbol makes all arguments explicit:

#check mul_comm
-- mul_comm : ∀ {M : Type u_1} [inst : CommMonoid M] (a b : M), a * b = b * a
#check @mul_comm
-- @mul_comm : ∀ {M : Type u_1} [inst : CommMonoid M] (a b : M), a * b = b * a

The bracketed argument [inst : CommMonoid M] is found automatically by instance resolution. The difference lies in how the lemma is applied: mul_comm lets Lean fill in M and the instance for you, whereas @mul_comm requires every argument -- including the instance -- to be passed explicitly. Checking the @ form is a quick way to see exactly what typeclass constraints a lemma requires.

4.5.6. The algebraic hierarchy in Mathlib🔗

Mathlib uses typeclasses to build an extensive hierarchy of algebraic structures. Here are some key ones, ordered roughly from weakest to strongest:

  • Add α -- a type with an addition operation

  • Mul α -- a type with a multiplication operation

  • AddMonoid α -- a type with addition, an additive identity 0, and associativity

  • Monoid α -- a type with multiplication, a multiplicative identity 1, and associativity

  • Group α -- a monoid with inverses

  • AddCommGroup α -- a commutative group under addition

  • Ring α -- an additive commutative group with a multiplication that distributes over addition

  • CommRing α -- a ring with commutative multiplication

  • Field α -- a commutative ring where every nonzero element has an inverse

  • LinearOrder α -- a type with a total order

Each of these is defined as a class that extends simpler classes. For example (simplified):

class Group (α : Type) extends Monoid α where
  inv : α → α
  inv_mul_cancel : ∀ a : α, inv a * a = 1

When you write a lemma that requires a CommRing, Lean automatically knows that it also has Add, Mul, AddCommGroup, etc., because of the extends chain.

The power of this system is that we can state and prove theorems at the most general level. For instance, mul_comm works for any CommMonoid, which includes ℕ, ℤ, ℚ, ℝ, ℂ, polynomial rings over commutative rings, and many more.
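As a quick illustration (in a file importing Mathlib), the same lemma applies at every level of generality:

-- at a concrete type: the CommMonoid ℤ instance is found automatically
example (a b : ℤ) : a * b = b * a := mul_comm a b

-- at the general level: any type with a CommMonoid instance works
example {M : Type} [CommMonoid M] (a b : M) : a * b = b * a := mul_comm a b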

4.5.7. Defining multiple instances🔗

Let us build a more complete example. We define a type Mod3 representing integers modulo 3, and equip it with algebraic structure:

inductive Mod3 where
  | zero
  | one
  | two
deriving Repr, BEq

instance : Add Mod3 where
  add
    | Mod3.zero, b => b
    | a, Mod3.zero => a
    | Mod3.one, Mod3.one => Mod3.two
    | Mod3.one, Mod3.two => Mod3.zero
    | Mod3.two, Mod3.one => Mod3.zero
    | Mod3.two, Mod3.two => Mod3.one

instance : Mul Mod3 where
  mul
    | Mod3.zero, _ => Mod3.zero
    | _, Mod3.zero => Mod3.zero
    | Mod3.one, b => b
    | a, Mod3.one => a
    | Mod3.two, Mod3.two => Mod3.one

instance : Zero Mod3 where
  zero := Mod3.zero

instance : One Mod3 where
  one := Mod3.one

#eval (Mod3.two + Mod3.two : Mod3) -- outputs Mod3.one
#eval (Mod3.two * Mod3.two : Mod3) -- outputs Mod3.one
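Because Mod3 values are closed constructor terms, concrete equations about them can be checked by mere computation, so rfl suffices as a proof:

example : Mod3.two + Mod3.one = Mod3.zero := rfl
example : Mod3.two * Mod3.two = Mod3.one := rfl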

4.5.8. How instance resolution works🔗

When Lean encounters an expression like a + b where a b : α, it needs to find an instance of Add α. The resolution proceeds as follows:

  1. Lean looks through all registered instances (those declared with instance).

  2. It tries to unify the goal Add α with the type of each instance.

  3. If an instance itself requires other instances (e.g., Add (Prod α β) might require Add α and Add β), Lean recursively resolves those.

  4. If a chain of instances leads to a solution, Lean uses it (instance priorities and declaration order decide between competing candidates). If no instance is found, it reports a "failed to synthesize" error.

This process is deterministic and happens at elaboration time (when Lean checks your code), not at runtime. So there is no performance penalty.
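Step 3 can be seen with the MyAdd class from earlier: a product instance that itself requires instances for the component types (a sketch, assuming the MyAdd class and its ℕ instance above):

instance [MyAdd α] [MyAdd β] : MyAdd (α × β) where
  myAdd p q := (MyAdd.myAdd p.1 q.1, MyAdd.myAdd p.2 q.2)

#eval MyAdd.myAdd (1, 2) (3, 4) -- resolves MyAdd (ℕ × ℕ) via MyAdd ℕ twice: outputs (4, 6)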

4.5.9. Practical tips🔗

  • When you get an error like "failed to synthesize instance," it means Lean cannot find a required typeclass instance. Check that your type has the necessary instance defined.

  • Use #check @lemma_name to see all implicit arguments and understand what instances a lemma requires.

  • In Mathlib, most standard types (ℕ, ℤ, ℚ, ℝ, ℂ) have instances for all the common typeclasses. You rarely need to define instances for these.

  • When defining your own types, provide instances for the typeclasses you need. Start with Repr (for printing), then Add, Mul, etc., as needed.

  • The deriving mechanism can automatically generate instances for some typeclasses (like Repr, BEq, Hashable, Inhabited).