Interactive Theorem Proving using Lean, Winter 2026/27

3. Lean🔗

3.1. Notes on Lean🔗

3.1.1. Lean as a programming language🔗

Lean is a functional programming language, i.e. programs are composed of functions in the mathematical sense. This paradigm is in contrast to imperative programming languages such as Python, Java and C. Lean comes with many features you might be familiar with, such as a library for input and output, but the language is still young and many things remain to be developed.

3.1.2. Dependent type theory🔗

In all programming languages, you have data types such as int, string and float. These exist in Lean as well, but you can (and in this course will) define your own data types. In all cases, we write x : α for a term x of type α. So we write False : Bool and 42 : ℕ, but also f : ℕ → ℝ (for a function from ℕ to ℝ; the function type ℕ → ℝ is itself a type) and 0 ≠ 1 : Prop (the proposition that 0 and 1 are different natural numbers). Terms and types can depend on variables, e.g. in ∀ (n : ℕ), n < n + 1 : Prop and f : (n : ℕ) → (Fin n → ℝ) (where Fin n is the type which carries {0, ..., n-1}), so that f is a function whose value f n is itself a function Fin n → ℝ, i.e. essentially a vector in ℝ^n.
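Such typing judgments can be checked directly with #check; here is a minimal sketch (assuming Mathlib is imported, so that ℝ and the ↦ notation are available):

```lean
import Mathlib

#check (42 : ℕ)               -- a term of type ℕ
#check ((0 : ℕ) ≠ 1)          -- a term of type Prop
#check (fun n : ℕ ↦ (n : ℝ))  -- a term of type ℕ → ℝ
#check ∀ n : ℕ, n < n + 1     -- again a term of type Prop
```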

As we see, these new data types are more abstract in the sense that Lean understands ℕ (and ℝ) as infinite types, which are not limited by floating point arithmetic. E.g., ℕ actually represents an infinite set that is characterized by containing 0, and if it contains n, then it also contains the successor of n (represented by succ n). Accordingly, the real numbers are defined via an equivalence relation on Cauchy sequences, which is quite elaborate. (Although ℝ is implemented as such a quotient within Lean, we will not have to deal with these implementation details when working with real numbers, since we will rely on results in Mathlib, the mathematical library, which takes care of these details.)

3.1.3. Universes, Types and Terms🔗

In Lean, there are three levels of objects: universes, types and terms. We are concerned here with the last two. Of particular interest is the type Prop, which consists of statements that can be true or false. It includes mathematical statements, i.e. both the hypotheses and the goal of what is to be proven. A hypothesis in Lean has the form hP : P, which means that P is true, and this statement carries the name hP. Synonymously, it means that P is true and hP is a proof of P. In the examples below, hypotheses have names like P Q R S, and their proofs hP hQ hR hS; all names can be chosen arbitrarily. Furthermore, there are hypotheses of the form P → Q, which is the statement that P implies Q. (Note the similarity to function notation as in f : ℝ → ℝ.)
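As a minimal illustration of hypotheses as proofs: given hP : P and an implication hPQ : P → Q, applying hPQ to hP yields a proof of Q.

```lean
example (P Q : Prop) (hP : P) (hPQ : P → Q) : Q := hPQ hP
```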

3.1.4. Function definitions🔗

In Lean, the function f : ℕ → ℕ, x ↦ 2x is defined as

def f : ℕ → ℕ := fun x ↦ 2 * x

(Write \mapsto for ↦.) The variable x is only introduced to define f. The application of f to an x : ℕ is then written f x. (The notation f x abbreviates f(x); Lean is sparing with parentheses.)

3.1.4.1. Pattern matching🔗

For functions on inductive types (like ℕ, List α, Option α, Bool), the most natural way to define them is by pattern matching on the constructors of the input. The syntax uses one branch per constructor, each prefixed by a |, either directly after the function's type signature or inside a match ... with expression.

For example, the factorial function on ℕ matches on whether the input is 0 or a successor n + 1:

def factorial : ℕ → ℕ
  | 0 => 1
  | n + 1 => (n + 1) * factorial n

Each branch may use the variables introduced by its pattern (here n on the right-hand side). Lean checks two things automatically:

  1. Exhaustiveness. Every constructor of ℕ is covered (the cases 0 and n + 1 exhaust ℕ). If you forget a case, Lean complains.

  2. Termination. The recursive call factorial n is on a strictly smaller argument than n + 1, so the definition is well-founded.
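As a further sketch of structural recursion (our own example, not from the text above), the Fibonacci numbers pattern-match on three shapes of the input, and both recursive calls are on strictly smaller arguments:

```lean
def fib : ℕ → ℕ
  | 0 => 0
  | 1 => 1
  | n + 2 => fib n + fib (n + 1)

#eval fib 10  -- 55
```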

Pattern matching works for any inductive type:

def negate : Bool → Bool
  | true => false
  | false => true

-- A function on Option α: extract or use a default
def Option.getD' {α : Type*} (d : α) : Option α → α
  | none => d
  | some a => a

-- A recursive function on List α
def length' {α : Type*} : List α → ℕ
  | [] => 0
  | _ :: xs => 1 + length' xs

The same syntax can be used inline with match:

example (n : ℕ) : ℕ := match n with
  | 0 => 42
  | _ + 1 => 0

You will see pattern matching extensively in the Functional Programming chapter, where we use it to define data structures and recursive algorithms.

3.1.5. Equality🔗

Due to the multitude of types in Lean, we have to be careful about equality. In Lean, there are three types of equality:

  • Syntactic equality: If two terms are letter-for-letter equal, then they are syntactically equal. However, there are a few more situations in which two terms count as syntactically equal. Namely, if one term is just an abbreviation for the other (for example, x = y is notation for Eq x y, where Eq is the equality predicate, which takes two terms of the same type), then both terms are syntactically equal. Also equal are terms whose bound variables differ only in their names. For example, ∀ x, ∃ y, f x y and ∀ y, ∃ x, f y x are syntactically equal.

  • Definitional equality: Some terms are equal by definition in Lean. For example, for x : ℕ, the term x + 0 is by definition identical to x. However, 0 + x is not definitionally identical to x. This asymmetry is due to the internal definition of addition of natural numbers in Lean, which recurses on the second argument.

  • Propositional equality: If there is a proof of x = y, then x and y are said to be propositionally equal. Similarly, terms P and Q are said to be propositionally equal if you can prove P ↔ Q. Some Lean tactics only work up to syntactic equality (such as rw), while others (most) work up to definitional equality (such as apply, exact, ...). This means that such a tactic automatically transforms terms that are syntactically or definitionally equal.
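The difference between definitional and propositional equality can be sketched as follows (Nat.zero_add is the library lemma for the second fact):

```lean
-- `x + 0` reduces to `x` by the definition of addition, so `rfl` closes it:
example (x : ℕ) : x + 0 = x := rfl

-- `0 + x` does not reduce, so we need a proved lemma (propositional equality):
example (x : ℕ) : 0 + x = x := Nat.zero_add x
```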

A special kind of equality applies to sets and functions. For example, two functions are equal if they return the same value for every element of the domain. This principle is called extensionality in the theory of programming languages. (If extensionality applies, then, for example, two sorting algorithms are equal if they always produce the same result.)
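In Lean, extensionality for functions is available as funext (and, in tactic mode, via the ext tactic); a minimal sketch:

```lean
example (f g : ℕ → ℕ) (h : ∀ x, f x = g x) : f = g := funext h
```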

3.1.6. Different parentheses in Lean🔗

There are (essentially) three different types of parentheses in Lean statements. The simplest is (...), which, as in ordinary mathematics, groups what belongs together. However, you have to learn how Lean brackets internally when no (...) are given. Operators like and (∧) and or (∨) are right-associative, so e.g. P ∧ Q ∧ R := P ∧ (Q ∧ R). The application of functions in sequence, such as f : ℕ → ℝ and g : ℝ → ℝ applied to n : ℕ, is written g (f n), because g expects an input of type ℝ, and this is what f n provides. Here you cannot omit the parentheses: function application is left-associative, so g f n would be read as (g f) n.

Now let us comment on the parentheses [...] and {...}. Consider for example #check @gt_iff_lt (the statement that a > b holds if and only if b < a holds), in whose type both kinds of brackets occur. This yields

gt_iff_lt : ∀ {α : Type u_1} [inst : LT α] {a b : α}, a > b ↔ b < a

When this result is applied, the arguments in {...} and [...] are filled in by Lean itself. The arguments in {...} can be inferred from the explicit arguments: above, when applying gt_iff_lt, the variables a and b have to be given, so their type α is also known and does not have to be specified explicitly. The arguments in [...] are found by the type class system: since the application is made to a concrete α (for example, ℕ), and Lean knows a lot about the natural numbers, it can look up many properties of ℕ, and in particular finds an instance LT ℕ (i.e. ℕ carries a less-than relation).

3.1.7. Proving propositions and evaluating functions🔗

Although we almost exclusively prove propositions in tactic mode in these notes, it is instructive to see the simplest examples of how to turn a tactic proof into a term-mode proof. There are two rules:

  • The tactic exact is the same as calling a function.

  • The tactic intro corresponds to introducing a variable, which becomes the argument of an anonymous function (fun ... ↦ ...).

Let us consider two examples:

The term proof

example (P : Prop) : False → P := False.elim

is the same as

example (P : Prop) : False → P := by
  intro h
  exact h.elim

The term proof

example (s t : Set ℕ) (hst : s ⊆ t) (x : ℕ) : x ∈ s → x ∈ t := fun hx ↦ hst hx

is the same as

example (s t : Set ℕ) (hst : s ⊆ t) (x : ℕ) : x ∈ s → x ∈ t := by
  intro hx
  exact hst hx

3.1.8. Inductive types🔗

Many everyday types in Lean -- Nat, List, Option, Bool, even Empty -- are inductive types. You declare one by giving a name, the type's universe, and a list of constructors: each constructor says how to build a new element of the type out of existing pieces.

The classical example is the natural numbers:

namespace Demo

inductive MyNat where
  | zero : MyNat
  | succ (n : MyNat) : MyNat

end Demo

This declaration introduces three things at once:

  • a new type Demo.MyNat;

  • two constructors Demo.MyNat.zero and Demo.MyNat.succ, so every element of MyNat is either zero or succ n for some n;

  • a recursor Demo.MyNat.rec which lets you define functions on MyNat by specifying what happens in each constructor case.

Definitions on an inductive type are typically written with the pattern-matching syntax of the previous section:

namespace Demo

def double : MyNat → MyNat
  | .zero => .zero
  | .succ n => .succ (.succ (double n))

end Demo

Proofs about an inductive type use the induction tactic, which applies the recursor for you: one subgoal per constructor, with an induction hypothesis for each recursive argument.
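For instance, a standard fact about lists can be sketched as follows: one subgoal per constructor of List, with the induction hypothesis ih available in the cons case.

```lean
example (l : List ℕ) : l ++ [] = l := by
  induction l with
  | nil => rfl                                 -- base case: the constructor `nil`
  | cons x xs ih => rw [List.cons_append, ih]  -- step case, using `ih`
```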

Inductive types also cover non-recursive data:

inductive Colour where
  | red
  | green
  | blue

and parameterized types:

namespace Demo

inductive MyOption (α : Type) where
  | none : MyOption α
  | some (a : α) : MyOption α

end Demo

Inductive types are the main mechanism by which new data types enter Lean; Mathlib uses them extensively, and understanding them is essential for reading the library.

3.1.9. Exploring definitions with #check, #print and inferInstance🔗

Lean provides a handful of commands that are invaluable for exploring a library like Mathlib. They all start with # and only print information -- they do not contribute to a proof.

  • #check e prints the type of the term or expression e. This is the fastest way to learn what a lemma says, or what type a definition has.

  • #check @lemma (with a leading @) prints the type of the lemma without hiding implicit and instance arguments. Use @ when you want to see every argument.

  • #print name prints the definition of the constant name. For a typeclass, this shows you the list of fields; for a def, the body; for a structure, the constructor and fields.

  • #eval e evaluates the term e (when it is computable) and prints the result. It works on concrete values of ℕ, List, etc., but not on abstract propositions.
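A few small sketches of these commands on concrete inputs:

```lean
#check Nat.add_comm   -- prints the statement of commutativity of addition on ℕ
#print Bool           -- prints the inductive definition of Bool
#eval 2 ^ 10          -- 1024
#eval List.range 5    -- [0, 1, 2, 3, 4]
```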

A very common idiom is to ask Lean whether a given type has a specific instance (e.g. "is ℕ a commutative monoid?"):

#check (inferInstance : AddCommMonoid ℕ)

The term inferInstance asks Lean to synthesize an instance of the indicated type; if no such instance exists the command will fail with a readable error message.

Two tactics complement these commands during an interactive proof:

  • exact? searches Mathlib for a single lemma that closes the current goal and prints an exact <lemma> line you can copy.

  • apply? is the same, but it suggests lemmas whose conclusion matches the goal, leaving side conditions as new subgoals.

Together, #check, #print, inferInstance, exact? and apply? are the main tools for navigating an unfamiliar part of Mathlib.

3.1.10. Getting help: Loogle, LeanSearch, Moogle, and friends🔗

Mathlib is enormous, and naming conventions -- however consistent -- will only take you so far. When you know what you want to say but not which lemma says it, a handful of search services can save hours of scrolling. All of them are available both as web pages and, more conveniently, as commands inside Lean via the LeanSearchClient package (already a dependency of Mathlib).

  • Loogle (loogle.lean-lang.org) searches by type shape. You write a pattern (using _ for holes) and Loogle returns every Mathlib lemma whose statement unifies with it. Inside Lean:

    #loogle (?a + ?b) * ?c = _
    

    returns distributivity lemmas in all their variants. Loogle also accepts a comma-separated list of constants that must appear, e.g. #loogle Real.exp, Real.log, or a target conclusion #loogle |- tsum _ = _.

  • LeanSearch (leansearch.net) searches by natural language. You phrase your goal in English ("derivative of composition of functions") and it returns the most relevant Mathlib lemmas, ranked by semantic similarity. From inside Lean:

    #leansearch "the derivative of a product of functions"
    
  • Moogle (moogle.ai) is another natural-language search, with a different ranking model. In Lean:

    #moogle "bolzano-weierstrass"
    

    The two tools often surface different lemmas, so it is worth trying both.

  • Mathlib docs (the generated API reference at leanprover-community.github.io/mathlib4_docs). If you already know the namespace (Filter, MeasureTheory, Topology.Basic, ...), browsing the module is faster than any search.

  • Zulip (leanprover.zulipchat.com, the #new members and #Mathlib4 streams) is the community chat. For questions that no search answers, posting a minimal example together with the goal you're stuck on almost always gets a helpful reply within hours.

  • AI assistants (ChatGPT, Claude, Gemini, Copilot, and the Lean-focused plugins and Copilot-for-Lean projects built on top of them) have become surprisingly effective at suggesting lemma names, spotting why a tactic fails, and translating informal arguments into Lean. They are fallible -- they will cheerfully invent lemmas that do not exist, or quote outdated Mathlib 3 syntax -- but used critically they are one of the fastest ways to get unstuck. A good workflow is: paste your goal state and the surrounding example, ask for two or three candidate approaches, and then verify each suggestion in Lean (via exact?, #loogle, or simply by trying to compile it). Treat AI output the same way you would treat a confident but occasionally wrong colleague -- use it, but always check. You are strongly encouraged to use these tools alongside #loogle, #leansearch, and the Mathlib docs.

As a rule of thumb: try exact? / apply? first (they know your exact proof state), then #loogle (precise, fast), then #leansearch / #moogle or an AI assistant (when you do not know the vocabulary Mathlib uses), and finally the docs or Zulip for open-ended questions.

3.1.11. Two abbreviations🔗

There are at least two abbreviations used in Mathlib which you will encounter frequently.

If you have h : x = y and hx : P x (with P x : Prop), you can prove P y by rewriting with h in hx. The shorthand notation for this is h ▸ hx. (Write \t for ▸.)

example (P : ℕ → Prop) (x y : ℕ) (h : x = y) (hx : P x) : P y := h ▸ hx

Sometimes, bracketing is critical. It frequently happens that a proof has the form apply first (second very long statement), and you might get lost since the closing bracket is far away from its opening counterpart. In this case, we write apply first <| second very long statement, which needs no closing symbol.

example (P Q : Prop) (hP : P) (hnP : ¬P) : Q := by
  exact False.elim <| hnP hP

3.1.12. Defining new notation🔗

Lean allows you to define custom notation using the notation command. This is useful when you want a concise mathematical symbol for a frequently used expression. The general syntax is

notation "symbol" arg1 arg2 ... => expression

where the left-hand side describes the syntax pattern (with arguments) and the right-hand side is the Lean expression it expands to. Here is a simple example:

section NotationDemo

notation "δ" => (2 : ℕ)

#check (δ : ℕ)  -- 2 : ℕ

After this definition, δ is replaced by 2 everywhere. The notation is purely syntactic: Lean replaces every occurrence of the new notation by the right-hand side before type checking. Here is a more interesting example with an argument:

notation "double" x => x + x

#eval double 5  -- 10

end NotationDemo

You can also define infix notation with a priority (determining how tightly the operator binds):

section InfixDemo

infixl:65 " ⊕⊕ " => fun (a b : ℕ) => a + b + 1

#eval 3 ⊕⊕ 4  -- 8

end InfixDemo

Here, infixl means left-associative infix, 65 is the binding power (the same as +), and the spaces around ⊕⊕ are part of the syntax. Similarly, infixr gives right-associative infix, and prefix / postfix are available for unary operators.

3.1.12.1. Prefix and postfix operators🔗

prefix and postfix define unary notation on a single argument:

section UnaryDemo

-- A prefix "bang" operator, precedence 80 (higher than `+`)
prefix:80 "¡" => fun n : ℕ => n * n

#eval ¡5  -- 25

-- A postfix factorial-style operator
postfix:90 "!!" => fun n : ℕ => 2 * n + 1

#eval 4!!  -- 9

end UnaryDemo

3.1.12.2. Multi-argument notation🔗

notation can take more than one argument and mix custom tokens with them:

section TernaryDemo

-- "between a and b" means the half-open interval [a, b)
notation "between " a " and " b => Set.Ico a b

#check between (1 : ℕ) and 10  -- Set ℕ

end TernaryDemo

3.1.12.3. Notation with binders🔗

Mathlib uses notation3 (and its underlying machinery syntax + macro_rules) to introduce binder-style notation like ∑ k ∈ range n, f k and ∀ᶠ x in F, p x. These let you bind a variable inside the expression that follows the comma. Writing such notation from scratch involves a fair amount of macro plumbing and is beyond the scope of this section -- the standard reference is the Lean 4 documentation on macros. For ordinary day-to-day notation, plain notation, infixl, infixr, prefix, and postfix are almost always enough.

3.1.12.4. Scoped notation🔗

If a notation should only be active inside a namespace (so it does not pollute the global symbol space), mark it scoped:

namespace MyDemo
scoped notation "✦" => (42 : ℕ)
end MyDemo

-- Outside the namespace, `✦` is not in scope by default;
-- enable it with `open MyDemo` or `open scoped MyDemo`.
open scoped MyDemo

#eval ✦  -- 42

This is the pattern used throughout Mathlib for notation like 𝓝, 𝓟, ∀ᶠ, etc.: they are scoped to the relevant namespace and enabled with open scoped.

For more complex notation involving multiple tokens or bespoke parsing rules, you can use the syntax and macro_rules commands, but notation, the infix variants, prefix/postfix, and notation3 cover most common use cases.

3.1.13. The abbrev command🔗

The abbrev command introduces a definition that is reducibly transparent, meaning Lean's elaborator will unfold it automatically whenever needed. In contrast, a definition introduced with def is semireducible and will not be unfolded automatically.

abbrev MyNat := ℕ

After this, MyNat and ℕ are interchangeable everywhere: Lean treats them as definitionally equal without any extra work. In particular, all type class instances that apply to ℕ are automatically available for MyNat. Compare this with

def MyNat' := ℕ

where MyNat' behaves like a genuinely new type: Lean will not automatically apply the instances of ℕ to MyNat', and you would have to derive or register them yourself.

Use abbrev when you want a short name for a type or expression but do not want to create a new abstraction barrier. Use def when you intentionally want to hide the definition (e.g. to prevent the simplifier from unfolding it).
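The difference can be sketched as follows (the names N₁ and N₂ are ours, chosen for illustration):

```lean
abbrev N₁ := ℕ   -- reducible: instance search sees through it
def N₂ := ℕ      -- semireducible: instance search does not unfold it

#check (3 : N₁) + 4     -- works: the instances of ℕ are found
-- #check (3 : N₂) + 4  -- fails: no `OfNat`/`Add` instance is found for `N₂`
```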

3.2. Proofs in Lean🔗

3.2.1. Foundation🔗

In Lean every object has a type. Types themselves fall into several categories, called universes. There are two main universes, called Prop and Type. Any mathematical statement comes with a claim and its proof. Say we want to claim something, such as Goldbach's conjecture:

theorem goldbach : ∀ (n : ℕ) (h₁ : n > 2) (h₂ : Even n),
    ∃ (i j : ℕ), (Prime i) ∧ (Prime j) ∧ (n = i + j) := by
  sorry

Here we speak about a term ∀ (n : ℕ) (h₁ : n > 2) (h₂ : Even n), ∃ (i j : ℕ), Prime i ∧ Prime j ∧ (n = i + j) of type Prop, which constitutes its own type. A term of this type (which should replace the sorry in the above Lean code) is equivalent to a proof of Goldbach's conjecture.

This is to say:

Types as theorems, terms as proofs!

Constructing a term of type ℕ is easy ((0 : ℕ) is accepted by Lean for this construction), while constructing a term of type ∀ (n : ℕ) (h₁ : n > 2) (h₂ : Even n), ∃ (i j : ℕ), (Prime i) ∧ (Prime j) ∧ (n = i + j) would require proving Goldbach's conjecture and implementing the proof in Lean.

Since we are already speaking about fundamentals: for a large part, it is safe to think of types as sets. Recall that it leads to logical inconsistencies if we allow for something like the set/type of all sets/types. For this reason, the Type universe is split into levels, such as Type 0 : Type 1, saying that the type Type 0 (of all objects in level 0) is itself of type Type 1; i.e., we move up a ladder when constructing more complex types. The corresponding idea goes back to John von Neumann and Ernst Zermelo.
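The ladder of universes can be inspected directly with #check:

```lean
#check Prop     -- Prop : Type
#check Type     -- Type : Type 1
#check Type 1   -- Type 1 : Type 2
```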

3.2.2. Mathlib🔗

The Lean library containing many mathematical results is called Mathlib. On its documentation page you can search for results and concepts. (More precisely, you can search the names of definitions, lemmas and theorems.) You will find results about e.g. real numbers, groups in algebra, topology and measure theory.

Other ways to search Mathlib are Moogle and Loogle (which are also available in vscode).

3.2.3. First steps🔗

Let us start with some simple examples in order to explain the first tactics in Lean. Here is a quick preview of the tactics we will meet in this chapter. They are explained in detail below, and a full alphabetical reference — with many more tactics — lives in the Tactics chapter.

  • intro hP (goal is P → Q or ∀ x, …): moves the hypothesis (or bound variable) into the context.

  • exact e (the goal matches a known term e): closes the goal with e.

  • apply h (h : P → Q and goal is Q): reduces the goal to P.

  • rw [h] (h : a = b or h : P ↔ Q): rewrites occurrences of a to b (resp. P to Q) in the goal; rw [h] at h' rewrites in a hypothesis; rw [← h] rewrites the other way.

  • simp (goal / hypothesis can be normalised by standard lemmas): simplifies using Mathlib's simp set (simp? suggests which lemmas).

  • apply? (stuck, in need of a suitable lemma from Mathlib): searches the library and proposes a closing exact ….

  • have h : P := … (want to record an intermediate fact P): introduces h : P into the context (provide the proof on the right, or enter tactic mode with by).

  • refine e (like exact, but with holes _ or ?_): closes the goal up to the remaining holes, which become new goals.

  • obtain ⟨x, hx⟩ := h (h : ∃ x, P x or h : P ∧ Q): destructs the witness / pair into named pieces.

You can also skip this chapter on the first pass and come back to each tactic as you need it; treat Tactics as your glossary.

3.2.3.1. intro, exact, apply and rw🔗

Let us start with a very simple example. If we want to prove the statement P → P (i.e. P implies P) we enter the following:

example (P : Prop) : P → P := by
  sorry

Depending on the position of the cursor, you will see the corresponding proof state. If the cursor is directly after by, the initial proof state is shown. It is important to know that behind ⊢ (called turnstile) stands the assertion, and everything above it are the hypotheses. (In the case shown, the only hypothesis is that P is a proposition.) This representation thus corresponds exactly to the assertion. If the cursor is after the sorry, the proof state reads no goals, but the sorry tactic is only there to close unproven assertions without further action, and a warning is issued in vscode. If you delete the sorry and replace it with an intro hP followed by exact hP, we get

example : P → P := by
  intro hP
  exact hP

So we have transformed the statement P → P into a state where we have to assume hP : P and conclude P. The desired no goals appears.

The apply tactic is similar, but does not necessarily close the goal. Let us see how it works:

example (hPQ : P → Q) (hQR : Q → R) : P → R := by
  intro hP
  apply hQR
  apply hPQ
  exact hP

Here, if the goal is R and you have a proof hQR : Q → R, it suffices to show Q, and this transformation is done using apply hQR.

In fact, apply works iteratively. This means that apply hQR; apply hPQ; exact hP can be combined into

example (hPQ : P → Q) (hQR : Q → R) : P → R := by
  intro hP
  exact hQR (hPQ hP)

(Here, hPQ hP is a proof for Q, since we apply P → Q to a proof of P, which gives Q.)

We note that the above example is equivalent to

example : ∀ (P : Prop), P → P := by
  intro P
  intro hP
  exact hP

So if we have a ∀ in the goal, we make progress by using intro.

Sometimes, we have statements of equality x = y or P ↔ Q, so we would like to use the one instead of the other. This works using rw:

example (hQ : Q) (hPQ : P ↔ Q) : P := by
  rw [hPQ]
  exact hQ

Here, we use rw [hPQ] to transform the goal P into the rewritten goal Q. If we want to use rw in the reverse direction, we write rw [← hPQ], and we can use rw in a hypothesis by writing e.g. rw [hPQ] at hP. Here is an example.

example (hQ : Q) (hPQ : P ↔ Q) : P := by
  rw [← hPQ] at hQ
  exact hQ

3.2.3.2. apply? and simp🔗

Of course, we want to make use of known facts when proving new ones. There are two main search functions built in: simp? and apply?. The first is based on simp, which works with a collection of simplification rules; simp? additionally reports which rules it used. Here is an example:

example : (0 < 1) := by
  simp?

Lean suggests Try this: simp only [Nat.lt_one_iff, pos_of_gt], which is inserted into our code when we click on it.

In fact, the same example can be solved using library search using apply?. Here, Lean searches its library for possible results, and often outputs very many results and the remaining goals. Here, it is simple:

example : (0 < 1) := by
  apply?

where Lean suggests Try this: exact Nat.one_pos.

3.2.3.3. have, refine, and use🔗

In order to have a proper example, let us introduce Even and Odd. For the definition of Even, we can type

#print Even

which results in

def Even.{u_2} : {α : Type u_2} → [inst : Add α] → α → Prop :=
fun {α} [Add α] a => ∃ r, a = r + r

Assume we cannot prove our goal in one step, but need an intermediate result. In this case, we use the have tactic: we simply claim what we need as an intermediate step. For the moment, we leave the remaining proofs as sorry.

example : ∃ (n : ℕ), Even (n * n) := by
  have h : ∃ (n : ℕ), Even n := by sorry
  sorry

We come back to the intermediate result later, but first want to use it in the rest of the proof. However, we have to disentangle the ∃ statement in h. Often, we have to take apart what we are given. Note that a proof of ∃ (n : ℕ), Even n consists of some (n : ℕ) and a proof of Even n, i.e. a pair of objects, and pairs in Lean are given using ⟨_, _⟩. We use obtain to get the two components of the pair:

example : ∃ (n : ℕ), Even (n * n) := by
  have h : ∃ (n : ℕ), Even n := by sorry
  obtain ⟨n, hn⟩ := h
  sorry

Recall that hn itself is an ∃-statement. This means we can use a nested version:

example : ∃ (n : ℕ), Even (n * n) := by
  have h : ∃ (n : ℕ), Even n := by sorry
  obtain ⟨n, k, hk⟩ := h
  sorry

Now we can use that n is even, and therefore would like to use that n * n = (k + k) * (k + k) = 2 * k * k + 2 * k * k to show that n * n is even. In order to see what we have to show, let us simplify using the definition of Even:

example : ∃ (n : ℕ), Even (n * n) := by
  have h : ∃ (n : ℕ), Even n := by sorry
  obtain ⟨n, k, hk⟩ := h
  simp only [Even]
  sorry

Now we can use n from above, and r = 2*k*k. This works with use:

example : ∃ (n : ℕ), Even (n * n) := by
  have h : ∃ (n : ℕ), Even n := by sorry
  obtain ⟨n, k, hk⟩ := h
  simp only [Even]
  use n, 2 * k * k
  sorry

The rest should be easy, since only a calculation remains. Here, we use rw and the ring tactic, which can do calculations within commutative (semi)rings:

example : ∃ (n : ℕ), Even (n * n) := by
  have h : ∃ (n : ℕ), Even n := by sorry
  obtain ⟨n, k, hk⟩ := h
  simp only [Even]
  use n, 2 * k * k
  rw [hk]
  ring

It remains to show the intermediate step. Here, we have to give Lean a pair, i.e. some n as well as a proof of Even n. This can be done not only with use as above, but also with refine, which assembles a pair from separate components. Assume we want to use 48, but do not yet have a proof of Even 48. Then we write ?_, which stands for a hole in the proof to be filled in later:

example : ∃ (n : ℕ), Even n := by
  refine ⟨48, ?_⟩
  sorry

For a proof of Even 48, we need to find 24 and a proof that 48 = 24 + 24. Let us again work with ?_:

example : ∃ (n : ℕ), Even n := by
  refine ⟨48, 24, ?_⟩
  norm_num

(In fact, the last step can also be solved by rfl, which means that the goal is true by definition.)

In total, we have the full example:

example : ∃ (n : ℕ), Even (n * n) := by
  have h : ∃ (n : ℕ), Even n := by
    refine ⟨48, 24, ?_⟩
    norm_num
  obtain ⟨n, k, hk⟩ := h
  use n, 2 * k ^ 2
  rw [hk]
  ring

3.2.4. Names of Mathlib Results🔗

Names like zero_add, add_zero, one_mul, add_assoc, succ_ne_zero, lt_of_succ_le, ... seem cryptic. Clearly, relatively understandable abbreviations (zero, one, mul, add, succ, ...) are separated by _. In general, the following rules apply to naming:

  • The name describes the goal of the statement to be proven; hypotheses appearing in the name are preceded by of_. The statement lt_of_succ_le is therefore a < statement whose hypothesis is a succ _ ≤ _ statement. In fact:

#check Nat.lt_of_succ_le

results in

Nat.lt_of_succ_le {n m : ℕ} (h : n.succ ≤ m) : n < m

This way, you can often guess the names of statements that you want to use.
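A few more guesses that work, following the same convention (all are Mathlib lemmas):

```lean
#check zero_add        -- 0 + a = a
#check mul_comm        -- a * b = b * a
#check lt_of_le_of_lt  -- a ≤ b → b < c → a < c
```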

3.2.5. Holes in proofs🔗

What is a hole in a proof? It is a missing argument...

3.2.6. Exercises🔗

It is now time to move to the exercises. So, proceed to vscode (or gitpod), copy the exercises folder and start coding. Further hints on tactics etc. are given within the exercises. Tactics are listed in alphabetical order in the next chapter.

3.3. Functional Programming Basics🔗

Lean is, before anything else, a functional programming language. If you have a background in mathematics but not in programming, do not worry: functional programming is in many ways closer to mathematics than imperative programming (like Python or Java). The central idea is that programs are built from functions in the mathematical sense -- they take inputs and produce outputs, without hidden side effects.

Earlier sections in this chapter already introduced the basic machinery we need: def for defining functions, fun for anonymous functions, inductive for defining new types, and pattern matching / structural recursion for writing functions on inductive types. The aim of this section is to collect a few extra examples and to introduce higher-order functions, one of the hallmarks of the functional style.

3.3.1. Pure functions and #eval🔗

A pure function is one whose output depends only on its inputs. Given the same arguments, it always returns the same result. This is exactly how functions work in mathematics: if f(x) = x + 1, then f(3) is always 4.

In Lean, we define functions with def and evaluate them with #eval:

def double (n : ℕ) : ℕ := 2 * n

#eval double 5  -- outputs 10
#eval double 0  -- outputs 0

Functions of several arguments are curried: a function of two arguments is really a function that takes one argument and returns another function.

def add (a b : ℕ) : ℕ := a + b

#eval add 3 4  -- outputs 7

The type of add is ℕ → ℕ → ℕ, which reads as ℕ → (ℕ → ℕ). So add 3 is itself a function of type ℕ → ℕ.
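Partial application makes this concrete (a small sketch reusing add from above; the name addThree is made up for illustration):

```lean
def addThree : ℕ → ℕ := add 3  -- add applied to only its first argument

#eval addThree 4  -- outputs 7
```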

3.3.2. Anonymous functions🔗

Sometimes we need a quick, throwaway function without giving it a name. We use fun (short for "function") with the arrow ↦ (typed \mapsto):

#eval (fun x : ℕ ↦ x + 1) 5      -- outputs 6
#eval (fun x y : ℕ ↦ x * y) 3 4  -- outputs 12

Anonymous functions are particularly useful when passing functions as arguments to other functions. In mathematics one would write x ↦ x^2; in Lean we write fun x ↦ x ^ 2.

3.3.3. More examples of recursion🔗

The Inductive types and Pattern matching sections above already covered the fundamentals. Here are a few further examples that we will reuse when discussing higher-order functions below.

A binary tree of natural numbers, together with a function that sums all its values:

namespace FPBasics

inductive Tree where
  | leaf : Tree
  | node : Tree → ℕ → Tree → Tree

def Tree.sum : Tree → ℕ
  | .leaf => 0
  | .node l v r => Tree.sum l + v + Tree.sum r

#eval Tree.sum (.node (.node .leaf 3 .leaf) 5 .leaf)  -- outputs 8

end FPBasics

The Fibonacci sequence is a classical example with two base cases:

def fib : ℕ → ℕ
  | 0 => 0
  | 1 => 1
  | n + 2 => fib n + fib (n + 1)

#eval fib 10  -- outputs 55

3.3.4. Higher-order functions🔗

A higher-order function is one that takes a function as an argument or returns a function as a result. This is one of the central ideas of functional programming, and it is also very natural mathematically. For instance, the derivative is a higher-order function: it takes a function and returns a function.

The three most important higher-order functions on lists are List.map, List.filter, and List.foldl (also known as fold).

List.map applies a function to every element of a list:

#eval [1, 2, 3, 4].map (fun x ↦ x * x)  -- outputs [1, 4, 9, 16]
#eval [1, 2, 3].map (· + 10)            -- outputs [11, 12, 13]

Note the · notation: (· + 10) is shorthand for (fun x ↦ x + 10).

List.filter keeps only the elements satisfying a predicate:

#eval [1, 2, 3, 4, 5, 6].filter (fun x ↦ x % 2 == 0)  -- outputs [2, 4, 6]

List.foldl combines all elements of a list using a binary operation and an initial value. It "folds" the list from the left:

#eval [1, 2, 3, 4].foldl (· + ·) 0  -- outputs 10 (sum)
#eval [1, 2, 3, 4].foldl (· * ·) 1  -- outputs 24 (product)

We can define our own higher-order functions. For example, here is a function that applies a function n times:

def iterate {α : Type} (f : α → α) : ℕ → α → α
  | 0, x => x
  | n + 1, x => f (iterate f n x)

#eval iterate (· + 1) 5 0  -- outputs 5
#eval iterate (· * 2) 4 1  -- outputs 16

We can also write a function that composes two functions:

def myCompose {α β γ : Type} (g : β → γ) (f : α → β) : α → γ :=
  fun x ↦ g (f x)

#eval myCompose (· + 1) (· * 2) 3  -- outputs 7

In Lean, function composition is available as Function.comp or simply as ∘ (typed \circ).
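For instance, myCompose agrees with the built-in composition (a minimal sketch):

```lean
#eval (Function.comp (· + 1) (· * 2)) 3  -- outputs 7
#eval ((· + 1) ∘ (· * 2)) 3              -- outputs 7, the same function
```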

3.3.5. Summary🔗

Functional programming in Lean is built on a few core ideas:

  • Pure functions: outputs depend only on inputs.

  • Inductive types: define new types by specifying constructors.

  • Pattern matching: define functions by cases on the constructors.

  • Structural recursion: define recursive functions that provably terminate.

  • Higher-order functions: pass functions as arguments or return them.

These ideas align closely with mathematical practice: inductive types correspond to inductively defined sets, pattern matching corresponds to case analysis, and recursion corresponds to inductive definitions.

3.4. Tactics🔗

3.4.1. Tactics Cheatsheet🔗

Proof state

Tactic

New proof state

⊢ P → Q

intro hP

hP : P
⊢ Q

⊢ P → Q → R

intro hP hQ

hP : P
hQ : Q
⊢ R

p : α → Prop
⊢ ∀ (x : α), p x

intro x

p : α → Prop
x : α
⊢ p x

h : P
⊢ P

exact h

no goals 🎉

h : P
⊢ P

assumption

no goals 🎉

h : P → Q
⊢ P

apply h

⊢ Q

h₁ : P → Q
h₂ : Q → R
⊢ R

apply h₂
apply h₁

h₁ : P → Q
h₂ : Q → R
⊢ P

⊢ P ∧ Q → P

tauto or tauto!

no goals 🎉

⊢ True

trivial

no goals 🎉

h : P
⊢ Q

exfalso

h : P
⊢ False

⊢ P

by_contra h

h : ¬P
⊢ False

⊢ P

by_cases h : Q

h : Q
⊢ P
h : ¬Q
⊢ P

h : P ∧ Q
⊢ R

cases' h with hP hQ

hP : P
hQ : Q
⊢ R

h : P ∧ Q
⊢ R

obtain ⟨hP, hQ⟩ := h

hP : P
hQ : Q
⊢ R

h : P ∨ Q
⊢ R

cases' h with hP hQ

hP : P
⊢ R
hQ : Q
⊢ R

h : False
⊢ P

cases h

no goals 🎉

⊢ P → False

change ¬P

⊢ ¬P

⊢ P ∧ Q

constructor

⊢ P
⊢ Q

⊢ P ↔ Q

constructor

⊢ P → Q
⊢ Q → P

⊢ P ↔ P or
⊢ P = P

rfl

no goals 🎉

h : P ↔ Q
⊢ P

rw [h]

h : P ↔ Q
⊢ Q

h : P ↔ Q
hP : P

rw [h] at hP

h : P ↔ Q
hP : Q

h : P ↔ Q
⊢ Q

rw [← h]

h : P ↔ Q
⊢ P

h : P ↔ Q
hQ : Q

rw [← h] at hQ

h : P ↔ Q
hQ : P

⊢ P ∨ Q

left

⊢ P

⊢ P ∨ Q

right

⊢ Q

⊢ 2 + 2 < 5

norm_num

no goals 🎉

p : α → Prop
y : α
⊢ ∃ (x : α), p x

use y

p : α → Prop
y : α
⊢ p y

x y : ℝ
⊢ x + y = y + x

ring

no goals 🎉

p : α → Prop
⊢ ∀ (x : α), p x

intro x

p : α → Prop
x : α
⊢ p x

h₁ : a < b
h₂ : b ≤ c
⊢ a < c

linarith

no goals 🎉

h : P
⊢ Q

clear h

⊢ Q

p : ℕ → Prop
h : ∀ (n : ℕ), p n
⊢ P

specialize h 13

p : ℕ → Prop
h : p 13
⊢ P

p : ℕ → ℕ → Prop
h : ∀ (n : ℕ), ∃ (m : ℕ), p n m

obtain ⟨m, hm⟩ := h 27

p : ℕ → ℕ → Prop
h : ∀ (n : ℕ), ∃ (m : ℕ), p n m
m : ℕ
hm : p 27 m

⊢ R

have h : P ↔ Q

⊢ P ↔ Q
h : P ↔ Q
⊢ R

h₁ : a < b
h₂ : b < c
⊢ a < c

apply?

no goals 🎉
Try this:
exact lt_trans h₁ h₂

hQ : Q
⊢ P ∧ Q

refine ⟨?_, hQ⟩

hQ : Q
⊢ P

⊢ P ∨ Q → R

rintro (hP | hQ)
=
intro h
cases h with hP hQ

hP : P
⊢ R
hQ : Q
⊢ R

⊢ P ∧ Q → R

rintro ⟨hP, hQ⟩
=
intro h
cases' h with hP hQ

hP : P
hQ : Q
⊢ R

h : P ∧ Q ∨ P ∧ R
⊢ S

rcases h with (⟨hP1,hQ⟩|⟨hP2,hR⟩)

hP1 : P
hQ : Q
⊢ S
hP2 : P
hR : R
⊢ S

⊢ n + 0 = n

simp

no goals 🎉

h : n + 0 = m
⊢ P

simp at h

h : n = m
⊢ P

f g : ℝ → ℝ
⊢ f = g

ext x or funext x

f g : ℝ → ℝ
x : ℝ
⊢ f x = g x

a b : ℕ
h : a ≤ b
⊢ a ≤ b + 1

omega

no goals 🎉

x : ℝ
⊢ 0 ≤ x ^ 2 + 1

nlinarith [sq_nonneg x]

no goals 🎉

a a' b : ℝ
h : a ≤ a'
⊢ a + b ≤ a' + b

gcongr

⊢ a ≤ a'

a b : ℝ
hb : b ≠ 0
⊢ a / b + 1 = (a + b) / b

field_simp

⊢ a + b = a + b

x : ℝ
⊢ (x + 1) * (x - 1) + 1 = x ^ 2

ring_nf

no goals 🎉

n : ℕ
⊢ ((n + 1 : ℕ) : ℝ) = (n : ℝ) + 1

push_cast

no goals 🎉

⊢ Continuous (fun x : ℝ => x^2 + Real.sin x)

fun_prop

no goals 🎉

hp : ∀ᶠ x in F, p x
hq : ∀ᶠ x in F, q x
⊢ ∀ᶠ x in F, p x ∧ q x

filter_upwards [hp, hq] with x hp hq

x : α
hp : p x
hq : q x
⊢ p x ∧ q x

h : False
⊢ P

contradiction

no goals 🎉

⊢ (P → Q)

contrapose

⊢ (¬Q → ¬P)

⊢ (2 + 2 : ℕ) = 4

decide

no goals 🎉

⊢ True

trivial

no goals 🎉

⊢ a = b

symm

⊢ b = a

⊢ 1 + 1 = 2

show 2 = 2

⊢ 2 = 2

a b : ℕ
⊢ a + b + (a + b) = 2*(a+b)

set s := a + b with hs

s : ℕ := a + b
hs : s = a + b
⊢ s + s = 2 * s

f : ℕ → ℕ
h : ∀ n, f n = 0
⊢ ∀ n, f n + 1 = 1

simp_rw [h]

⊢ ∀ n, 0 + 1 = 1

n : ℕ
⊢ (n : ℝ) + 1 = ((n+1 : ℕ) : ℝ)

norm_cast

no goals 🎉

x : ℝ
hx : 0 < x
⊢ 0 < x ^ 2 + 1

positivity

no goals 🎉

3.4.2. apply🔗

Summary: If we have the goal ⊢ Q, and a proof of h : P → Q, we only need to find a proof for P. This transition happens by apply h.

Proof state

Tactic

New proof state

h : P → Q
⊢ Q

apply h

h : P → Q
⊢ P

h : ¬P
⊢ False

apply h

h : ¬P
⊢ P

h₁ : P → Q
h₂ : Q → R
⊢ R

apply h₂
apply h₁

h₁ : P → Q
h₂ : Q → R
⊢ P

The apply tactic works iteratively. This means that if apply h makes no progress, it inserts the placeholder _ and tries apply h _.

Remarks:

  • apply works up to equality by definition. This can be seen in the example above, where ¬P ↔ (P → False) is true by definition.

  • apply h is frequently identical to refine h ?_ (with one ?_ for each missing argument).

  • If the use of apply closes the current goal, you might as well use exact instead of apply.

example (hP : P) (hPQ : P → Q) (hPQR : P → Q → R) : R := by
  apply hPQR
  · exact hP
  · apply hPQ
    exact hP

example (n : ℕ) (hn : 0 < n) : n ≤ 2 * (n * n) := by
  have h₁ : n ≤ n * n := Nat.le_mul_of_pos_left n hn
  have h₂ (k : ℕ) : k ≤ 2 * k := by omega
  exact le_trans h₁ (h₂ (n * n))
🔗def
Lean.Parser.Tactic.apply : Lean.ParserDescr

apply e tries to match the current goal against the conclusion of e's type. If it succeeds, then the tactic returns as many subgoals as the number of premises that have not been fixed by type inference or type class resolution. Non-dependent premises are added before dependent ones.

The apply tactic uses higher-order pattern matching, type class resolution, and first-order unification with dependent types.

Extensions:

    • apply (config := cfg) e allows for additional configuration (see Lean.Meta.ApplyConfig):

      • newGoals controls which new goals are added by apply, in which order.

      • -synthAssignedInstances will not synthesize instance implicit arguments if they have been assigned by isDefEq.

      • +allowSynthFailures will create new goals when instance synthesis fails, rather than erroring.

      • +approx enables isDefEq approximations (see Lean.Meta.approxDefEq).

3.4.3. apply?🔗

Summary: There are already a lot of proven statements in Mathlib. When using apply?, Mathlib is searched for statements whose types correspond to those of the statement to be proved. If this is not successful, Lean reports a timeout. If successful, it also reports which commands were found. If you click on it, this is inserted in place of apply?.

Examples

Proof state

Tactic

New proof state

h₁ : a < b
h₂ : b < c

apply?

no goals
Try this: exact lt_trans h₁ h₂

example (n : ℕ) : 2 * n = n + n := by
  apply?
-- Try this: exact Nat.two_mul n
🔗def
Lean.Parser.Tactic.apply? : Lean.ParserDescr

Searches environment for definitions or theorems that can refine the goal using apply with conditions resolved when possible with solve_by_elim.

The optional using clause provides identifiers in the local context that must be used when closing the goal.

Use +grind to enable grind as a fallback discharger for subgoals. Use +try? to enable try? as a fallback discharger for subgoals. Use -star to disable fallback to star-indexed lemmas. Use +all to collect all successful lemmas instead of stopping at the first.

3.4.4. assumption🔗

Summary: If a hypothesis is identical to the goal, assumption closes the goal.

Examples

Proof state

Tactic

New proof state

h : P
⊢ P

assumption

no goals

h : ¬P
⊢ P → False

assumption

no goals

Remarks

  • As in other tactics, assumption works up to definitional equality.

  • Here is a trick: If you use <;> after a tactic, the following tactic is applied to all resulting goals.

example (P Q : Prop) (hP : P) (hQ : Q) : P ∧ Q := by
  constructor <;> assumption

example (P : Prop) (hP : P) : P := by
  assumption
🔗def
Lean.Parser.Tactic.assumption : Lean.ParserDescr

assumption tries to solve the main goal using a hypothesis of compatible type, or else fails. Note also the t term notation, which is a shorthand for show t by assumption.

3.4.5. by_cases🔗

Summary: If you have a term P : Prop as a hypothesis, by_cases hP : P returns two goals. The first one has hP : P, and the second one hP : ¬P. This tactic is thus identical to have hP : P ∨ ¬P := Classical.em P followed by cases' hP with hP hP. (Here, Classical.em : ∀ (p : Prop), p ∨ ¬p is the law of excluded middle.)
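The claimed equivalence can be spelled out as follows (a sketch; Classical.em is the law of excluded middle):

```lean
example (P Q : Prop) (h : P → Q) (h' : ¬P → Q) : Q := by
  have hP : P ∨ ¬P := Classical.em P
  cases' hP with hp hnp
  · exact h hp    -- case hp : P
  · exact h' hnp  -- case hnp : ¬P
```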

Examples

Proof state

Tactic

New proof state

⊢ P

by_cases h : Q

h : Q
⊢ P
h : ¬Q ⊢ P

x : Bool
⊢ x = true ∨ x = false

by_cases h : x = true

x : Bool
h : x = true
⊢ x = true ∨ x = false
x : Bool
h : ¬x = true
⊢ x = true ∨ x = false

In the second example, we use a variable of type Bool. This is defined as follows:

🔗inductive type
Bool : Type

The Boolean values, true and false.

Logically speaking, this is equivalent to Prop (the type of propositions). The distinction is important for programming: both propositions and their proofs are erased in the code generator, while Bool corresponds to the Boolean type in most programming languages and carries precisely one bit of run-time information.

Constructors

false : Bool

The Boolean value false, not to be confused with the proposition False.

true : Bool

The Boolean value true, not to be confused with the proposition True.

example (P Q : Prop) (hP : P → Q) (hP' : ¬P → Q) : Q := by
  by_cases h : P
  · exact hP h   -- case h : P
  · exact hP' h  -- case h : ¬P
🔗def
«tacticBy_cases_:_» : Lean.ParserDescr

by_cases (h :)? p splits the main goal into two cases, assuming h : p in the first branch, and h : ¬ p in the second branch.

Notes

  • Apparently, the by_cases tactic (just like by_contra) assumes that a statement is either true or false. This is also known as the law of excluded middle. In mathematics, proofs that do not use this rule are called constructive.

  • For terms of type Prop, the tactic tauto (or tauto!) can draw various conclusions from a truth table.

3.4.6. by_contra🔗

Summary

The by_contra tactic provides a proof by contradiction. It assumes (i.e. turns into a hypothesis) that the statement after ⊢ is not true, and this assumption must be led to a contradiction, i.e. a proof of False must be found.

Examples

Proof state

Tactic

New proof state

⊢ P

by_contra h

h : ¬P
⊢ False

h: ¬¬P
⊢ P

by_contra hnegP

h: ¬¬P
hnegP: ¬P
⊢ False

Remarks

This tactic is stronger than exfalso. There, the goal is only converted to False without adding a new hypothesis. With by_contra, the new goal is also False, but there is additionally a new hypothesis.

example (P Q : Prop) (hP : P → Q) (hP' : ¬P → Q) : Q := by
  by_contra h
  -- h : ¬Q, goal: False
  exact h (hP' fun p => h (hP p))

3.4.7. calc🔗

Summary: As the word suggests, calc is about concrete calculations. This is not a tactic, but a Lean mode. This means that you can enter this mode (with the word calc) and enter calculation steps together with proofs that each individual calculation step is correct.

Examples

Here is a proof of the first binomial formula that works purely by rewriting with elementary algebraic lemmas from Mathlib.

example (n : ℕ) : (n + 1) ^ 2 = n ^ 2 + 2 * n + 1 := by
  have h : n + n = 2 * n := by
    nth_rewrite 1 [← one_mul n]
    nth_rewrite 2 [← one_mul n]
    rw [← add_mul]
    norm_num
  calc (n + 1) ^ 2 = (n + 1) * (n + 1) := by rw [pow_two]
    _ = (n + 1) * n + (n + 1) * 1 := by rw [mul_add]
    _ = n * n + 1 * n + (n + 1) := by rw [add_mul, mul_one]
    _ = n ^ 2 + n + (n + 1) := by rw [← pow_two, one_mul]
    _ = n ^ 2 + (n + n + 1) := by rw [add_assoc, ← add_assoc n n 1]
    _ = n ^ 2 + 2 * n + 1 := by rw [h, ← add_assoc]

The same can be achieved without the calc mode, like this:

example (n : ℕ) : (n + 1) ^ 2 = n ^ 2 + 2 * n + 1 := by
  have h : n + n = 2 * n := by
    nth_rewrite 1 [← one_mul n]
    nth_rewrite 2 [← one_mul n]
    rw [← add_mul]
    norm_num
  rw [pow_two, mul_add, add_mul, mul_one (n + 1), one_mul,
    ← pow_two, add_assoc, ← add_assoc n n 1, ← add_assoc, ← h]

However, this is much less readable.

Remarks

  • The exact syntax (the underscores, := and indentation) is important in calc mode.

  • The calc mode not only works for equalities, but also for inequalities, subset-relations etc.

  • The example above can be solved easily using linarith or ring.

  • In order to generate a proof in calc mode, one can do it as follows (see the example below):

    • give the exact calculation steps without proof (using by sorry)

    • fill in the proofs which are left over.
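calc also chains mixed relations, as long as a Trans instance exists, for instance an ≤ step followed by a < step (a minimal sketch):

```lean
example (a b c : ℝ) (h₁ : a ≤ b) (h₂ : b < c) : a < c := by
  calc a ≤ b := h₁
    _ < c := h₂
```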

Here is how to start the proof of the binomial formula. First, leave out all proofs:

example (n : ℕ) : n + n = 2 * n := by
  calc n + n = 1 * n + 1 * n := by sorry
    _ = (1 + 1) * n := by sorry
    _ = 2 * n := by sorry

Then, fill in the details

example (n : ℕ) : n + n = 2 * n := by
  calc n + n = 1 * n + 1 * n := by rw [one_mul]
    _ = (1 + 1) * n := by rw [add_mul]
    _ = 2 * n := by norm_num
🔗def
Lean.calc : Lean.ParserDescr

Step-wise reasoning over transitive relations.

calc
  a = b := pab
  b = c := pbc
  ...
  y = z := pyz

proves a = z from the given step-wise proofs. = can be replaced with any relation implementing the typeclass Trans. Instead of repeating the right-hand sides, subsequent left-hand sides can be replaced with _.

calc
  a = b := pab
  _ = c := pbc
  ...
  _ = z := pyz

It is also possible to write the first relation as <lhs>\n _ = <rhs> := <proof>. This is useful for aligning relation symbols, especially on longer identifiers:

calc abc
  _ = bce := pabce
  _ = cef := pbcef
  ...
  _ = xyz := pwxyz

calc works as a term, as a tactic or as a conv tactic.

See Theorem Proving in Lean 4 for more information.

3.4.8. cases'🔗

Summary: If a hypothesis is composed, i.e. can be split into two or more cases, cases' delivers exactly that. This can be used not only with hypotheses h : P ∨ Q or h : P ∧ Q, but also with structures that consist of several cases, such as ∃ ... (here there is a witness and a statement), x : Bool, or n : ℕ.

Examples:

Proof state

Tactic

New proof state

h : P ∧ Q
⊢ R

cases' h with hP hQ

hP : P
hQ : Q
⊢ R

h : P ∨ Q
⊢ R

cases' h with hP hQ

hP : P
⊢ R
hQ : Q
⊢ R

h : False
⊢ P

cases h

no goals🎉

P: ℕ → Prop
h: ∃ (m : ℕ), P m
⊢ Q

cases' h with m h1

P : ℕ → Prop
m : ℕ
h1 : P m
⊢ Q

x : Bool
⊢ x = true ∨ x = false

cases' x

⊢ false = true ∨ false = false
⊢ true = true ∨ true = false

n : ℕ
⊢ n > 0 → (∃ (k : ℕ), n = k + 1)

cases' n with n

⊢ 0 > 0 → (∃ (k : ℕ), 0 = k + 1)
⊢ n + 1 > 0 → (∃ (k : ℕ), n + 1 = k + 1)

Remarks:

  • The application cases' n for n : ℕ is strictly weaker than complete induction (see induction). After all, cases' only splits n : ℕ into the two cases 0 and succ n, but you cannot use the statement for n - 1 to prove the statement for n.

  • A more flexible version of cases' is rcases.
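The behaviour of cases' n from the table can be put to work like this (a small sketch):

```lean
example (n : ℕ) : n = 0 ∨ ∃ k, n = k + 1 := by
  cases' n with k
  · left          -- case n = 0
    rfl
  · right         -- case n = k + 1
    exact ⟨k, rfl⟩
```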

example (P Q : Prop) (hP : P → Q) (hP' : ¬P → Q) : Q := by
  cases' Classical.em P with h h
  · exact hP h   -- case h : P
  · exact hP' h  -- case h : ¬P

3.4.9. change🔗

Summary: Changes the goal (or a hypothesis) into a goal (or a hypothesis) that is definitionally equal.

Examples:

Proof state

Tactic

New proof state

⊢ P → False

change ¬P

⊢ ¬P

h : ¬P
⊢ Q

change P → False at h

h : P → False
⊢ Q

xs : x ∈ s
⊢ x ∈ f ⁻¹' (f '' s)

change f x ∈ f '' s

xs : x ∈ s
⊢ f x ∈ f '' s

Remarks:

  • As can be seen from the penultimate example, change also works for hypotheses.

  • Since many tactics test for definitional equality anyway, change is often not necessary. However, it can help to make the proof more readable.

example (P : Prop) (hP : P) (hP' : ¬P) : False := by
  change P → False at hP'
  apply hP'
  exact hP

A property of, say, the natural numbers, gives rise to Set ℕ by collecting all n : ℕ satisfying the property. In other words, P n and the membership n ∈ {m | P m} are equivalent.

example (P : ℕ → Prop) (n : ℕ) (hn : P n) : n ∈ {m | P m} := by
  change P n
  exact hn

3.4.10. clear🔗

Summary: With clear h the hypothesis h is removed from the goal state (forgotten).

Example

Proof state

Tactic

New proof state

h : P
⊢ Q

clear h

⊢ Q
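A minimal sketch of clear in action; the unused hypothesis is forgotten before closing the goal:

```lean
example (P Q : Prop) (h : P) (hQ : Q) : Q := by
  clear h   -- h : P disappears from the context
  exact hQ
```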

3.4.11. congr🔗

Summary: If you have to show an equality f x = f y, then congr reduces it to the equality x = y of the arguments, since the equality certainly holds if x = y.

Examples:

Proof state

Tactic

New proof state

⊢ f x = f y

congr

⊢ x = y

Remarks:

  • The variant congr n takes a parameter that determines how many recursive layers are eliminated in the goal; see the examples below.

  • Besides the congr tactic there are several related results which can be applied, e.g. tsum_congr.

Here, congr goes too deep since it tries to match inner arguments:

example (f : ℕ → ℕ) (x y : ℕ) : f (x + y) = f (y + x) := by
  congr
  -- congr went too deep and leaves the two unprovable goals
  -- ⊢ x = y and ⊢ y = x
  · sorry
  · sorry

We can prevent this by specifying how deep congr should go. (The above example is equivalent to congr 2.)

example (f : ℕ → ℕ) (x y : ℕ) : f (x + y) = f (y + x) := by
  congr 1
  -- goal: ⊢ x + y = y + x
  exact Nat.add_comm x y

tsum_congr can be made useful by using apply or a related tactic.

#print tsum_congr

theorem tsum_congr : ∀ {α : Type u_1} {β : Type u_2} [inst : AddCommMonoid α] [inst_1 : TopologicalSpace α]
  {L : SummationFilter β} {f g : β → α}, (∀ (b : β), f b = g b) → ∑'[L] (b : β), f b = ∑'[L] (b : β), g b :=
fun {α} {β} [AddCommMonoid α] [TopologicalSpace α] {L} {f g} hfg => congr_arg (fun x => tsum x L) (funext hfg)

3.4.12. constructor🔗

Summary: If the goal is of the form ⊢ P ∧ Q, constructor replaces it by the two goals ⊢ P and ⊢ Q.

Examples

Proof state

Tactic

New proof state

⊢ P ∧ Q

constructor

⊢ P
⊢ Q

⊢ P ↔ Q

constructor

⊢ P → Q
⊢ Q → P

Remarks

Note that ⊢ P ↔ Q is identical to ⊢ (P → Q) ∧ (Q → P).

example (P Q R : Prop) (hP : P) (hQ : Q) (hR : R) : P ∧ Q ∧ R := by
  constructor
  · exact hP
  · constructor
    · exact hQ
    · exact hR

In fact, constructor is the same as refine ⟨?_, ?_⟩ (i.e. leaving two new goals). However, refine is more flexible since it is not bound to two variables:

example (P Q R : Prop) (hP : P) (hQ : Q) (hR : R) : P ∧ Q ∧ R := by
  refine ⟨hP, hQ, hR⟩

When we have a property of the natural numbers, P : ℕ → Prop, {m // P m} is the subtype which consists of all m for which P m holds. Every (x : {m // P m}) is a pair ⟨x.val, x.prop⟩, where (x.val : ℕ) is its numerical value and x.prop is a proof of P x.val. So, we can construct a member of the subtype by giving a number n and a proof hn : P n. The constructor Subtype.mk for the subtype {m // P m} does exactly this: it takes a number n and a proof hn : P n and returns the pair ⟨n, hn⟩.

def mySubtype.mk (P : ℕ → Prop) (n : ℕ) (hn : P n) : {m // P m} := by
  exact ⟨n, hn⟩

3.4.13. contradiction🔗

Summary: contradiction closes the goal if the local context contains two contradictory hypotheses, or a hypothesis of type False, or h : a ≠ a, or similar trivially absurd facts.

Examples:

Proof state

Tactic

New proof state

h : False
⊢ P

contradiction

(no goals)

h₁ : P
h₂ : ¬P
⊢ Q

contradiction

(no goals)

Remarks:

  • contradiction is usually faster and cleaner than exact absurd h h' or exfalso; exact h' h.

  • It also recognises Nat.zero_ne_one-style contradictions and evaluates numeric literals.

example (h : False) (P : Prop) : P := by
  contradiction

example (P Q : Prop) (h₁ : P) (h₂ : ¬P) : Q := by
  contradiction

example (h : (0 : ℕ) = 1) (P : Prop) : P := by
  contradiction

3.4.14. contrapose🔗

Summary: contrapose applies the contrapositive: given a goal P → Q, it turns it into ¬Q → ¬P. The variant contrapose! additionally pushes the negations inward using push_neg.

Examples:

Proof state

Tactic

New proof state

h : P
⊢ Q

contrapose h

h : ¬Q
⊢ ¬P

h : x ≤ y
⊢ f x ≤ f y

contrapose! h

h : f y < f x
⊢ y < x

Remarks:

  • contrapose (without !) simply inserts ¬. contrapose! additionally runs push_neg on both the hypothesis and the goal, so ¬(x ≤ y) becomes y < x and so on.

  • contrapose h contraposes the goal together with the hypothesis h; contrapose without an argument expects a goal of the form P → Q and turns it into ¬Q → ¬P.

example (P Q : Prop) (h : P → Q) : ¬Q → ¬P := by
  intro hnQ hP
  exact hnQ (h hP)

example (x y : ℕ) : y + 1 < x + 1 → y < x := by
  intro h
  omega

3.4.15. decide🔗

Summary: decide closes a goal whose statement is decidable -- i.e. the typeclass Decidable P finds a proof or refutation of P by computation. Typical candidates are concrete arithmetic equalities and inequalities, membership in a finite set, divisibility, and propositional logic on concrete inputs.

Examples:

Proof state

Tactic

New proof state

⊢ (2 + 2 : ℕ) = 4

decide

(no goals)

⊢ (3 : ℕ) ∣ 12

decide

(no goals)

Remarks:

  • decide works by reduction: it evaluates the decision procedure on the given inputs inside the kernel. For large inputs it can be slow (or time out); for general arithmetic, prefer omega or norm_num.

  • For propositions that depend on a variable (e.g. ∀ n, P n), decide will not work -- it needs concrete inputs.

example : (2 + 2 : ℕ) = 4 := by decide

example : (3 : ℕ) ∣ 12 := by decide

example : ¬((10 : ℕ) = 20) := by decide

3.4.16. exact🔗

Summary: If the goal can be closed with a single command, then it can be closed with the exact tactic. Like many other tactics, exact also works with terms that are definitionally equal.

Examples:

Proof state

Tactic

New proof state

h : P
⊢ P

exact h

no goals

hP: P
hQ: Q ⊢ P ∧ Q

exact ⟨ hP, hQ ⟩

no goals

hP : P
hnP : ¬P
⊢ False

exact hnP hP

no goals

Remarks:

  • The related tactic exact? searches for terms which close the goal; see apply?.

  • If the proof consists of a single call of exact, it is easy to translate it to term mode; see easy proofs in term mode.

  • In the third example, note the order in which the two hypotheses hP and hnP are applied. The first hypothesis after exact is always the one whose conclusion matches the goal. If it requires further input, that is appended afterwards.
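For instance, the second example from the table can be written directly in term mode:

```lean
-- The anonymous constructor ⟨hP, hQ⟩ is itself a proof of P ∧ Q.
example (P Q : Prop) (hP : P) (hQ : Q) : P ∧ Q := ⟨hP, hQ⟩
```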

example (P : Prop) (h : False) : P := by
  exact False.elim h

3.4.17. exfalso🔗

Summary: The statement False → P is true for all P. If the current goal is ⊢ P, and you would apply this true statement using apply, the new goal would be ⊢ False. This is exactly what the exfalso tactic does.

Examples:

Proof state

Tactic

New proof state

h : P
⊢ Q

exfalso

h : P
⊢ False

hP : P
hnP : ¬P
⊢ Q

exfalso

hP : P
hnP: ¬P
⊢ False

Remarks:

  • Unlike by_contra, the exfalso tactic is still constructive: the principle of explosion False → P holds without the law of excluded middle.

  • exfalso is the same as apply False.elim; see the examples for exact.

example (P : Prop) : False → P := by
  intro h
  exfalso
  exact h

3.4.18. ext🔗

Summary: Extensionality is a principle that states that two functions are equal if they give the same result for all arguments. The ext tactic applies this principle to prove the equality of two functions.

Examples:

Proof state

Tactic

New proof state

f g : ℝ → ℝ
⊢ f = g

ext x

f g : ℝ → ℝ

⊢ f x = g x

Remarks:

  • Extensionality works for functions and sets.

example (f g : ℝ → ℝ) (hf : f = fun x ↦ x + x) (hg : g = fun x ↦ 2 * x) : f = g := by
  rw [hf, hg]
  ext x
  -- goal: ⊢ x + x = 2 * x
  ring

example (s t : Set ℕ) (hs : ∀ x ∈ s, x ∈ t) (ht : t ⊆ s) : s = t := by
  ext x
  -- goal: ⊢ x ∈ s ↔ x ∈ t
  exact ⟨hs x, fun hx => ht hx⟩

3.4.19. funext🔗

Summary: funext is the function extensionality tactic. To prove a goal of the form f = g, where f g : (x : α) → β x are two (possibly dependent) functions, funext x introduces a fresh variable x : α and reduces the goal to f x = g x. This relies on the funext axiom from Lean's type theory.

Examples:

Proof state

Tactic

New proof state

f g : ℕ → ℕ
⊢ f = g

funext n

f g : ℕ → ℕ

⊢ f n = g n

f g : (x : α) → β x
⊢ f = g

funext x

f g : (x : α) → β x

⊢ f x = g x

Remarks:

  • funext is a special case of the more general ext tactic. Use funext when you specifically want to compare two functions pointwise; use ext when the goal involves more than one layer of extensionality (e.g. functions returning sets).

  • You can introduce several variables at once, e.g. funext x y to reduce (fun x y ↦ f x y) = (fun x y ↦ g x y) to f x y = g x y.

example : (fun n : ℕ => n + 0) = (fun n => n) := by
  funext n
  simp

example (f : ℕ → ℕ → ℕ) : (fun n m => f n m) = f := by
  funext n m
  rfl
🔗theorem
funext.{u, v} {α : Sort u} {β : α → Sort v} {f g : (x : α) → β x} (h : ∀ (x : α), f x = g x) : f = g

Function extensionality. If two functions return equal results for all possible arguments, then they are equal.

It is called “extensionality” because it provides a way to prove two objects equal based on the properties of the underlying mathematical functions, rather than based on the syntax used to denote them. Function extensionality is a theorem that can be proved using quotient types.

3.4.20. field_simp🔗

Summary: field_simp clears denominators in a goal or hypothesis involving a field (like ℝ, ℚ, or ℂ). It rewrites expressions of the form a / b into a common form, turning an equation between rational expressions into one without division. Combined with ring, it closes most identities in fields.

Examples:

Proof state

Tactic

New proof state

a b : ℝ

⊢ a / b + 1 = (a + b) / b

field_simp

a b : ℝ

⊢ a + b = a + b

Remarks:

  • field_simp typically needs nonzeroness hypotheses for every denominator. These are taken from the context automatically, or can be supplied with field_simp [h₁, h₂].

  • The idiomatic combination is field_simp; ring: field_simp removes division, ring then finishes the algebraic identity.

  • It works over any Field, DivisionRing, or GroupWithZero.

example (a b : ℝ) (hb : b ≠ 0) : a / b + 1 = (a + b) / b := by
  field_simp

example (x : ℝ) (hx : x ≠ 0) : (x + 1) / x - 1 / x = 1 := by
  field_simp
  ring

3.4.21. filter_upwards🔗

Summary: filter_upwards is the workhorse tactic for proving goals of the form ∀ᶠ x in F, P x. Given a list of Eventually hypotheses, it intersects them, peels off the ∀ᶠ quantifier, and leaves you with a pointwise goal in which all of the supplied hypotheses appear specialized at the point x.

Examples:

Proof state

Tactic

New proof state

F : Filter α


⊢ ∀ᶠ x in F, p x ∧ q x

filter_upwards [h₁, h₂] with x hp hq

F : Filter α



⊢ p x ∧ q x

Remarks:

  • The list [h₁, h₂, ...] may be empty (filter_upwards []), in which case the tactic only peels off the ∀ᶠ and leaves the pointwise goal P x.

  • The with clause names the bound point and the specialized hypotheses; if you omit it, fresh names are chosen automatically.

  • Use filter_upwards whenever you want to combine several "eventually" facts and finish pointwise -- a very common pattern in analysis and measure theory (where ∀ᵐ x ∂μ is "almost everywhere").

  • It works for any filter, including atTop, nhds x, cofinite, and μ.ae.

open Filter

example {α : Type*} {F : Filter α} {p q : α → Prop}
    (hp : ∀ᶠ x in F, p x) (hq : ∀ᶠ x in F, q x) :
    ∀ᶠ x in F, p x ∧ q x := by
  filter_upwards [hp, hq] with x hp hq
  exact ⟨hp, hq⟩

-- If p holds eventually along F and ∀ x, p x → q x,
-- then q holds eventually along F.
example {α : Type*} {F : Filter α} {p q : α → Prop}
    (hp : ∀ᶠ x in F, p x) (h : ∀ x, p x → q x) :
    ∀ᶠ x in F, q x := by
  filter_upwards [hp] with x hp
  exact h x hp

3.4.22. fun_prop🔗

Summary: fun_prop is a general-purpose tactic for closing function-property goals such as Continuous f, Measurable f, Differentiable ℝ f, ContDiff ℝ n f, Integrable f, and others. It works by recursively applying composition, linearity, and other structural lemmas tagged @[fun_prop] in Mathlib.

Examples:

Proof state

Tactic

New proof state

⊢ Continuous (fun x : ℝ => x^2 + Real.sin x)

fun_prop

no goals 🎉

Remarks:

  • fun_prop subsumes and replaces the older continuity and measurability tactics for most purposes.

  • For arguments it does not know about (e.g. the continuity of a user-defined function), supply the fact as a hypothesis or add a @[fun_prop] lemma.

  • When fun_prop fails, it will often report a helpful subgoal indicating which atomic fact is missing.

example : Continuous (fun x : ℝ => x ^ 2 + Real.sin x) := by
  fun_prop

example : Measurable (fun x : ℝ => x ^ 3 + 2 * x) := by
  fun_prop

3.4.23. gcongr🔗

Summary: gcongr ("generalized congruence") reduces a monotonicity goal by peeling off a common outer context. It turns a goal like f a + b ≤ f a' + b into a ≤ a' whenever f is known to be monotone in its argument. It is the standard tool for proving routine inequalities of compound expressions.

Examples:

Proof state

Tactic

New proof state

a a' b : ℝ

⊢ a + b ≤ a' + b

gcongr

a a' b : ℝ

⊢ a ≤ a'

Remarks:

  • gcongr relies on lemmas tagged @[gcongr] in Mathlib. Common operations (+, *, ^, div, sum, ...) are already tagged, so the tactic works out of the box in most situations.

  • After gcongr, the remaining goals are usually closed by assumption, linarith, or positivity.

  • gcongr can also prove strict inequalities.

example (a a' b : ℝ) (h : a ≤ a') : a + b ≤ a' + b := by
  gcongr

example (x y : ℝ) (hx : 0 ≤ x) (h : x ≤ y) : x ^ 2 ≤ y ^ 2 := by
  gcongr

3.4.24. nlinarith🔗

Summary: nlinarith is the nonlinear extension of linarith. Like linarith, it proves goals of the form a ≤ b, a < b, a = b, or False from linear hypotheses over ordered fields. Unlike linarith, it additionally multiplies pairs of hypotheses to produce quadratic witnesses, so it can close many goals involving products and squares.

Examples:

Proof state

Tactic

New proof state

x : ℝ
⊢ 0 ≤ x ^ 2 + 1

nlinarith [sq_nonneg x]

(no goals)

Remarks:

  • nlinarith is strictly more powerful than linarith but also slower; prefer linarith when possible.

  • You can supply extra lemmas as hints: nlinarith [sq_nonneg x, mul_self_nonneg y]. Hints of the form 0 ≤ ... are especially useful.

  • For purely polynomial identities (not inequalities), use ring or polyrith instead.

example (x : ℝ) : 0 ≤ x ^ 2 + 1 := by
  nlinarith [sq_nonneg x]

example (a b : ℝ) (h : 0 ≤ a) (h' : 0 ≤ b) : 0 ≤ a * b := by
  nlinarith

3.4.25. norm_cast🔗

Summary: norm_cast moves coercions outward, out of compound expressions and towards the root of the term. This is the inverse of push_cast, which moves coercions inward. Together they are the standard tools for manipulating goals that mix integer, natural, rational, and real coercions.

Examples:

Proof state

Tactic

New proof state

n : ℕ
⊢ (n : ℝ) + (1 : ℝ) = ((n + 1 : ℕ) : ℝ)

norm_cast

(no goals)

Remarks:

  • Use norm_cast when casts sit on atoms (e.g. (n : ℝ) + 1) and you want to pull them outside. Use push_cast for the opposite direction.

  • The variant exact_mod_cast h applies norm_cast automatically before trying to close the goal with h.

  • norm_cast at h rewrites a hypothesis.

example (n : ℕ) : (n : ℝ) + 1 = ((n + 1 : ℕ) : ℝ) := by
  norm_cast

example (m n : ℕ) (h : (m : ℝ) = (n : ℝ)) : m = n := by
  exact_mod_cast h

3.4.26. omega🔗

Summary: omega is a decision procedure for linear arithmetic over the integers ℤ and the natural numbers ℕ. It closes goals involving +, -, * (by a constant), /, %, ≤, <, =, ≠, ∧, ∨, and ¬, provided the goal reduces to a statement of Presburger arithmetic. It is one of the most useful tactics for closing routine numerical side conditions.

Examples:

Proof state

Tactic

New proof state

a b : ℕ

⊢ a ≤ b + 1

omega

(no goals)

n : ℕ

⊢ 0 < n

omega

(no goals)

Remarks:

  • omega is complete for linear arithmetic: if the goal is a true statement in Presburger arithmetic, omega will close it.

  • It does not handle nonlinear products (use nlinarith) or real-valued arithmetic (use linarith).

  • It works equally well on ℕ and ℤ, automatically handling the fact that natural numbers are nonnegative.

example (a b : ℕ) (h : a ≤ b) : a ≤ b + 1 := by
  omega

example (n : ℕ) (h : n ≠ 0) : 0 < n := by
  omega

example (x y : ℤ) (h₁ : x + y = 5) (h₂ : x - y = 1) : x = 3 := by
  omega

3.4.27. positivity🔗

Summary: positivity proves goals of the form 0 < e, 0 ≤ e, or e ≠ 0, where e is an arithmetic expression built from literals and variables by operations that preserve positivity (addition of nonnegatives, multiplication, powers, abs, exp, square roots, ...).

Examples:

Proof state

Tactic

New proof state

x : ℝ

⊢ 0 < x ^ 2 + 1

positivity

(no goals)

Remarks:

  • positivity is extensible: any @[positivity]-tagged lemma in Mathlib becomes available.

  • When a term depends on a variable whose sign is unknown but you have a hypothesis 0 < y, positivity will pick it up automatically.

example (x : ℝ) (hx : 0 < x) : 0 < x ^ 2 + 1 := by
  positivity

example (a b : ℝ) (ha : 0 < a) (hb : 0 < b) : 0 < a + b := by
  positivity

example (n : ℕ) : (0 : ℝ) ≤ n := by
  positivity

3.4.28. push_cast🔗

Summary: push_cast pushes coercions (like Nat.cast, Int.cast, Rat.cast) towards the leaves of an expression. It turns ((a + b : ℕ) : ℝ) into (a : ℝ) + (b : ℝ), and similarly for *, ^, etc. This is the inverse direction of norm_cast, which moves casts outward.

Examples:

Proof state

Tactic

New proof state

n : ℕ
⊢ ((n + 1 : ℕ) : ℝ) = (n : ℝ) + 1

push_cast

n : ℕ
⊢ (n : ℝ) + 1 = (n : ℝ) + 1

Remarks:

  • Use push_cast when the cast sits outside a compound expression and you want to distribute it; use norm_cast when the casts sit on atoms and you want to pull them out.

  • The idiomatic combination for closing casting goals is push_cast; ring (or push_cast; linarith).

  • push_cast at h rewrites a hypothesis.

example (n : ℕ) : ((n + 1 : ℕ) : ℝ) = (n : ℝ) + 1 := by
  push_cast
  ring

example (k : ℤ) : ((k ^ 2 : ℤ) : ℝ) = (k : ℝ) ^ 2 := by
  push_cast
  ring

3.4.29. ring_nf🔗

Summary: ring_nf is the normal-form variant of ring. While ring closes a goal that is an identity in a commutative (semi)ring, ring_nf instead rewrites both sides of the goal into a canonical polynomial form. It is useful when ring does not close the goal (e.g. because the identity only holds together with a hypothesis) or when you want to simplify a hypothesis.

Examples:

Proof state

Tactic

New proof state

a b : ℝ

⊢ ...

ring_nf at h

a b : ℝ

⊢ ...

Remarks:

  • ring_nf never fails; it just rewrites.

  • Use ring to close a pure ring identity; use ring_nf to normalize one side before continuing with another tactic.

  • You can target a hypothesis with ring_nf at h or normalize everywhere with ring_nf at *.

example (a b : ℝ) (h : a * (b + b) = 6) : 2 * (a * b) = 6 := by
  ring_nf at h ⊢
  exact h

example (x : ℝ) : (x + 1) * (x - 1) + 1 = x ^ 2 := by
  ring_nf

3.4.30. have🔗

Summary: By using have we introduce a new goal, which we have to prove first. Afterwards, it is available as a hypothesis in all further goals. This is identical to first proving a separate lemma h with the statement given after have h :, and then reusing it at the appropriate place in the proof (for example with apply or rw).

Examples:

Proof state

Tactic

New proof state

⊢ R

have h : P ↔ Q

⊢ P ↔ Q
h : P ↔ Q
⊢ R

⊢ P

have h1 : ∃ (m : ℕ), f 27 m
...
obtain ⟨m, hm⟩ := h1

m : ℕ
hm: f 27 m
⊢ P

Remarks:

  • Assume you want to use have. You could as well formulate a separate lemma and use it afterwards. It is not always clear which is better.

  • If the proof of the statement is short and the statement is used only once, you might instead consider inlining the proof at the place where it is needed.

  • Suppose we have two goals (let's call them ⊢ 1 and ⊢ 2), and we need the statement of ⊢ 1 in the proof of ⊢ 2. We can first introduce a third goal with have h : ... (where ... is the statement of ⊢ 1). Goal ⊢ 1 can then be closed with exact h, and the statement remains available as h in the proof of ⊢ 2.

example (x : ℝ) (d : ℕ) : 0 ≤ (d : ℝ) * x ^ 2 := by
  have h : (0 : ℝ) ≤ d := Nat.cast_nonneg d
  have h1 : (0 : ℝ) ≤ x ^ 2 := sq_nonneg x
  exact mul_nonneg h h1

3.4.31. induction🔗

Summary:

Inductive types come with a principle for proving statements about them by induction. This includes, for example, the usual induction over the natural numbers.

Examples

Proof state

Tactic

New proof state

n : ℕ
⊢ n = 0 + n

induction n with
| zero => ?_
| succ n hn => ?_

⊢ 0 = 0 + 0
hn : n = 0 + n
⊢ n + 1 = 0 + (n + 1)

example (n : ℕ) : n = 0 + n := by
  induction n with
  | zero => rfl
  | succ n hn => rw [← add_assoc, ← hn]

3.4.32. intro🔗

Summary

If the goal is of the form ⊢ P → Q or ⊢ ∀ (n : ℕ), P n, you can proceed with intro hP or intro n, respectively. Several intro commands can be combined into a single one: intro h1 h2 h3 is identical to intro h1; intro h2; intro h3.

Examples

Proof state

Tactic

New proof state

⊢ P → Q

intro hP

hP : P
⊢ Q

f : α → Prop
⊢ ∀ (x : α), f x

intro x

f: α → Prop
x : α
⊢ f x

⊢ P → Q → R

intro hP hQ

hP : P
hQ : Q
⊢ R

P : ℕ → Prop
⊢ ∀ (n : ℕ), P n → Q

intro n hP

P : ℕ → Prop
n : ℕ
hP : P n
⊢ Q

example (P : Prop) : P → P := by
  intro h
  exact h

example (P : Prop) : P → P → P → P := by
  intro h₁
  intro h₂ h₃
  exact h₃

example (P Q : Prop) : (P ∧ Q) → P ∧ Q := by
  intro h₁
  exact h₁

Remarks

  • Several intro commands in a row are best combined. Furthermore, rintro is a more flexible variant.

  • A reversal of intro is revert.

3.4.33. left🔗

Summary:

Applying left is identical to apply h for h : P → P ∨ Q. So if you have a goal of the form ⊢ P ∨ Q, left leaves you with only the goal ⊢ P. After all, it is sufficient to show P to close the goal.

Examples:

Proof state

Tactic

New proof state

⊢ P ∨ Q

left

⊢ P

⊢ ℕ

left

no goals🎉

The second example requires a little explanation. First of all, you have to understand that the goal ⊢ ℕ means showing that there is a term of type ℕ, i.e. that there is a natural number. Now you have to know how ℕ is implemented in Lean. This is

inductive Nat where
  | zero : Nat
  | succ (n : Nat) : Nat

together with

notation "ℕ" => Nat

This means: the type ℕ is defined by the fact that zero is a term of this type, and that there is a function succ : ℕ → ℕ. Thus, in the second example, left closes the goal because by definition zero : ℕ holds, so in particular there is a term of type ℕ.

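A minimal example (the statement is our own illustration, in the style of the other sections):

example (P Q : Prop) (hP : P) : P ∨ Q := by
  left
  exact hP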

Remarks:

  • See also right, the analogous tactic, which corresponds to apply h for h : Q → P ∨ Q.

  • As in the second example, left can always be applied when the goal is an inductive type with two constructors (such as ℕ).

3.4.34. linarith🔗

Summary: This tactic proves equations and inequalities with the help of hypotheses. It is important that the hypotheses used are themselves (linear) equations and inequalities. In essence, the tactic combines the hypotheses using the transitivity of < and ≤ together with linear arithmetic.

Examples:

Proof state

Tactic

New proof state

h₁ : a < b
h₂ : b < c
⊢ a < c

linarith

no goals

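A minimal example matching the table above (the proof script is our own illustration):

example (a b c : ℝ) (h₁ : a < b) (h₂ : b < c) : a < c := by
  linarith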

3.4.35. norm_num🔗

Summary:

As long as there are no variables, norm_num can do calculations which involve =, <, ≤, or ≠.

Examples:

Proof state

Tactic

New proof state

⊢ 2 + 2 < 5

norm_num

no goals

⊢ | (1 : ℝ) | = 1

norm_num

no goals

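Two minimal examples matching the table above (our own illustration):

example : (2 : ℕ) + 2 < 5 := by
  norm_num

example : |(1 : ℝ)| = 1 := by
  norm_num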

Remarks

norm_num knows about some more operations, e.g. absolute values; see also the second example.

3.4.36. nth_rewrite🔗

Summary:

This tactic is related to rw. The difference is that you can specify which occurrence of the term should be rewritten. The exact syntax is nth_rewrite k [h], where k is the number (counting from 1) of the occurrence to be rewritten and h is the hypothesis or lemma used for rewriting. As with rw, it must be of the form h : x = y or h : A ↔ B.

Examples:

Proof state

Tactic

New proof state

n : ℕ
⊢ 0 + n = 0 + 0 + n

nth_rewrite 1 [zero_add]

n : ℕ
⊢ n = 0 + 0 + n

n : ℕ
⊢ 0 + n = 0 + 0 + n

nth_rewrite 2 [zero_add]

n : ℕ
⊢ 0 + n = 0 + n

n : ℕ
⊢ 0 + n = 0 + 0 + n

nth_rewrite 3 [zero_add]

n : ℕ
⊢ 0 + n = 0 + n

In the above example, Lean sees three terms of the form 0 + ?_: occurrence 1 is on the left-hand side; occurrences 2 and 3 are on the right-hand side (because of the implicit parenthesization 0 + 0 + n = (0 + 0) + n). For occurrence 2, the inner 0 + 0 is rewritten by zero_add to 0, so the right-hand side becomes 0 + n. For occurrence 3, Lean sees 0 + 0 + n itself as a term of the form 0 + ?_ (since 0 + 0 is by definition identical to 0) and converts it to 0 + n as well.

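A sketch of a possible proof along the lines of the table above (our own illustration; in Mathlib, nth_rewrite counts occurrences from 1):

example (n : ℕ) : 0 + n = 0 + 0 + n := by
  nth_rewrite 1 [zero_add]
  -- goal: n = 0 + 0 + n
  rw [zero_add, zero_add]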

3.4.37. obtain🔗

Summary: The obtain tactic can be used to merge have and cases into one command.

Examples:

Proof state

Tactic

New proof state

f : ℕ → ℕ → Prop
h : ∀ (n : ℕ), ∃ (m : ℕ), f n m

obtain ⟨ m, hm ⟩ := h 27

f: ℕ → ℕ → Prop
h : ∀ (n : ℕ), ∃ (m : ℕ), f n m
m : ℕ
hm : f 27 m

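A minimal example matching the table above (our own illustration):

example (f : ℕ → ℕ → Prop) (h : ∀ (n : ℕ), ∃ (m : ℕ), f n m) : ∃ (m : ℕ), f 27 m := by
  obtain ⟨m, hm⟩ := h 27
  exact ⟨m, hm⟩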

3.4.38. push_neg🔗

Summary: In many steps of a proof, a negation must be processed. To push the negation inside the affected quantifiers and connectives, so that the hypothesis or goal becomes easier to work with, the tactic push_neg is available.

Examples

Proof state

Tactic

New proof state

⊢ ¬(P ∨ Q)

push_neg

⊢ ¬P ∧ ¬Q

h : ¬(P ∨ Q)

push_neg at h

h : ¬P ∧ ¬Q

⊢ ¬(P ∧ Q)

push_neg

⊢ P → ¬Q

P : X → Prop
⊢ ¬∀ (x : X), P x

push_neg

P : X → Prop
⊢ ∃ (x : X), ¬P x

P : X → Prop
⊢ ¬∃ (x : X), P x

push_neg

P : X → Prop
⊢ ∀ (x : X), ¬P x

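A minimal example matching the fourth row of the table (our own illustration):

example (P : ℕ → Prop) (h : ¬ ∀ (x : ℕ), P x) : ∃ (x : ℕ), ¬ P x := by
  push_neg at h
  exact h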

Notes:

This tactic also works with other objects, such as sets.

3.4.39. rcases🔗

Proof state

Tactic

New proof state

h : P ∧ Q ∨ P ∧ R
⊢ P

rcases h with (⟨hP1,hQ⟩|⟨hP2,hR⟩)

hP1 : P
hQ : Q
⊢ P
hP2 : P
hR : R
⊢ P

Summary: rcases is a more flexible version of cases. It allows patterns such as ⟨hP, hQ⟩ (for ∧ and ∃) and (hP | hQ) (for ∨) to split a hypothesis into its components. As can be seen in the example above, it is also possible to nest ⟨.,.⟩ and (.|.).

Examples:

Proof state

Tactic

New proof state

h : P ∧ Q
⊢ R

rcases h with ⟨hP, hQ⟩

hP : P
hQ : Q
⊢ R

h : P ∨ Q
⊢ R

rcases h with (hP | hQ)

hP : P
⊢ R
hQ : Q
⊢ R

h : ∃ (m : ℕ) (hg : 0 ≤ m), m < n
⊢ P

rcases h with ⟨m, h1, h2⟩

n m : ℕ
h1 : 0 ≤ m
h2 : m < n
⊢ P

Remarks:

The last example shows how rcases can directly resolve an ∃ quantifier in a hypothesis with more than one constraint (here: 0 ≤ m and m < n).

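A minimal example matching the pattern from the first table (our own illustration):

example (P Q R : Prop) (h : P ∧ Q ∨ P ∧ R) : P := by
  rcases h with (⟨hP, _⟩ | ⟨hP, _⟩)
  · exact hP
  · exact hP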
🔗inductive type
Lean.Elab.Tactic.RCases.RCasesPatt : Type

An rcases pattern can be one of the following, in a nested combination:

  • A name like foo

  • The special keyword rfl (for pattern matching on equality using subst)

  • A hyphen -, which clears the active hypothesis and any dependents.

  • A type ascription like pat : ty (parentheses are optional)

  • A tuple constructor like ⟨p1, p2, p3⟩

  • An alternation / variant pattern p1 | p2 | p3

Parentheses can be used for grouping; alternation is higher precedence than type ascription, so p1 | p2 | p3 : ty means (p1 | p2 | p3) : ty.

N-ary alternations are treated as a group, so p1 | p2 | p3 is not the same as p1 | (p2 | p3), and similarly for tuples. However, note that an n-ary alternation or tuple can match an n-ary conjunction or disjunction, because if the number of patterns exceeds the number of constructors in the type being destructed, the extra patterns will match on the last element, meaning that p1 | p2 | p3 will act like p1 | (p2 | p3) when matching a1 ∨ a2 ∨ a3. If matching against a type with 3 constructors, p1 | (p2 | p3) will act like p1 | (p2 | p3) | _ instead.

Constructors

paren (ref : Lean.Syntax) :
  Lean.Elab.Tactic.RCases.RCasesPatt →
    Lean.Elab.Tactic.RCases.RCasesPatt

A parenthesized expression, used for hovers

one (ref : Lean.Syntax) :
  Lean.Name → Lean.Elab.Tactic.RCases.RCasesPatt

A named pattern like foo

clear (ref : Lean.Syntax) :
  Lean.Elab.Tactic.RCases.RCasesPatt

A hyphen -, which clears the active hypothesis and any dependents.

explicit (ref : Lean.Syntax) :
  Lean.Elab.Tactic.RCases.RCasesPatt →
    Lean.Elab.Tactic.RCases.RCasesPatt

An explicit pattern @pat.

typed (ref : Lean.Syntax) :
  Lean.Elab.Tactic.RCases.RCasesPatt →
    Lean.Term → Lean.Elab.Tactic.RCases.RCasesPatt

A type ascription like pat : ty (parentheses are optional)

tuple (ref : Lean.Syntax) :
  List Lean.Elab.Tactic.RCases.RCasesPatt →
    Lean.Elab.Tactic.RCases.RCasesPatt

A tuple constructor like ⟨p1, p2, p3⟩

alts (ref : Lean.Syntax) :
  List Lean.Elab.Tactic.RCases.RCasesPatt →
    Lean.Elab.Tactic.RCases.RCasesPatt

An alternation / variant pattern p1 | p2 | p3

3.4.40. refine🔗

Summary: The refine tactic is like exact with holes. More precisely: if the goal can be closed by a combination of hypotheses, you can apply it with refine and write a placeholder ?_ for each missing argument. You then get each ?_ back as a new goal (where those that hold by definitional equality are solved immediately).

Examples:

Proof state

Tactic

New proof state

hQ : Q
⊢ P ∧ Q

refine ⟨?_, hQ⟩

hQ : Q
⊢ P

⊢ ∃ (n : ℕ) (h : n > 0), n ^ 2 = 9

refine ⟨3, ?_, by norm_num⟩

⊢ 3 > 0

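A minimal example matching the first row of the table (our own illustration):

example (P Q : Prop) (hP : P) (hQ : Q) : P ∧ Q := by
  refine ⟨?_, hQ⟩
  exact hP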

3.4.41. revert🔗

Summary: revert is the opposite of intro: It takes a hypothesis from the local context and inserts it as a precondition into the goal.

Examples

Proof state

Tactic

New proof state

hP : P
⊢ Q

revert hP

⊢ P → Q

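A minimal example (our own illustration):

example (P Q : Prop) (hP : P) (h : P → Q) : Q := by
  revert hP
  exact h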

Remarks:

revert is rarely needed; actually only when you want to apply an already proven result exactly and first want to establish the correct form of the goal.

3.4.42. rfl🔗

Summary: This tactic proves the equality (or equivalence) of two definitionally equal terms.

Examples:

Proof state

Tactic

New proof state

⊢ P ↔ P or
⊢ P = P

rfl

no goals

⊢ 1 + 2 = 3

rfl

no goals

Remarks:

The second example works because both sides are by definition equal to succ (succ (succ zero)).

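Two minimal examples matching the table above (our own illustration):

example (P : Prop) : P ↔ P := by
  rfl

example : 1 + 2 = 3 := by
  rfl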
🔗def
Lean.Elab.Tactic.Rfl.evalApplyRfl : Lean.Elab.Tactic.Tactic

This tactic applies to a goal whose target has the form x ~ x, where ~ is a reflexive relation, that is, a relation which has a reflexive lemma tagged with the attribute [refl].

3.4.43. right🔗

Summary: See left; the adjustments are obvious.

Examples

Proof state

Tactic

New proof state

⊢ P ∨ Q

right

⊢ Q

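A minimal example (our own illustration):

example (P Q : Prop) (hQ : Q) : P ∨ Q := by
  right
  exact hQ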

3.4.44. ring🔗

Summary: The ring tactic uses calculation rules such as associativity, commutativity, and distributivity to close the goal.

Examples

Proof state

Tactic

New proof state

x y : ℝ
⊢ x + y = y + x

ring

no goals

n : ℕ
⊢ (n + 1)^2 = n^2 + 2*n + 1

ring

no goals

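A minimal example matching the second row of the table (our own illustration):

example (n : ℕ) : (n + 1) ^ 2 = n ^ 2 + 2 * n + 1 := by
  ring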

Remarks:

  • The second example works even though ℕ is not a ring (but only a semiring). It would also work with n : ℝ (since ℝ has more calculation rules than ℕ).

  • ring can only be used to close the goal completely; to normalize a goal or hypothesis without closing it, see ring_nf.

3.4.45. rintro🔗

Summary: The rintro tactic is used to process several intro and cases tactics on one line.

Examples

Proof state

Tactic

New proof state

⊢ P ∨ Q → R

rintro ( hP | hQ )
=
intro h
rcases h with (hP | hQ)

hP : P
⊢ R
hQ : Q
⊢ R

⊢ P ∧ Q → R

rintro ⟨ hP , hQ ⟩
=
intro h
rcases h with ⟨hP, hQ⟩

hP : P
hQ : Q
⊢ R

Notes:

Here, more than two cases can also be split in one step: for a hypothesis A ∨ B ∨ C, rintro (hA | hB | hC) introduces three goals.

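A minimal example matching the second row of the table (our own illustration):

example (P Q : Prop) : P ∧ Q → P := by
  rintro ⟨hP, _⟩
  exact hP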
🔗def
Lean.Elab.Tactic.RCases.rintro (pats : Lean.TSyntaxArray `rintroPat) (ty? : Option Lean.Term) (g : Lean.MVarId) : Lean.Elab.TermElabM (List Lean.MVarId)
Lean.Elab.Tactic.RCases.rintro (pats : Lean.TSyntaxArray `rintroPat) (ty? : Option Lean.Term) (g : Lean.MVarId) : Lean.Elab.TermElabM (List Lean.MVarId)

The implementation of the rintro tactic. It takes a list of patterns pats and an optional type ascription ty? and introduces the patterns, resulting in zero or more goals.

3.4.46. rw🔗

Summary: rw stands for rewrite. For rw [h] to work, h must be an expression of the form h : x = y or h : A ↔ B. In this case, rw [h] replaces every term that is syntactically identical to x (or A) by y (or B). This also works if h is an already proven result (i.e. a lemma or theorem). With rw [← h], the rewrite is applied from right to left (in the above notation, y is replaced by x, and B by A).

Examples

Proof state

Tactic

New proof state

h : P ↔ Q
⊢ P

rw [h]

h : P ↔ Q
⊢ Q

h : P ↔ Q
⊢ Q

rw [← h]

h : P ↔ Q
⊢ P

h : P ↔ Q
hP : P

rw [h] at hP

h : P ↔ Q
hP : Q

h : P ↔ Q
hQ : Q

rw [← h] at hQ

h : P ↔ Q
hQ : P

k m: ℕ
⊢ k + m + 0 = m + k + 0

rw [add_comm]

k m : ℕ
⊢ 0 + (k + m) = m + k + 0

k m : ℕ
⊢ k + m + 0 = m + k + 0

rw [add_comm k m]

⊢ m + k + 0 = m + k + 0

k m : ℕ
⊢ k + m + 0 = m + k + 0

rw [← add_comm k m]

⊢ k + m + 0 = k + m + 0

k m : ℕ
⊢ k + m + 0 = m + k + 0

rw [add_zero, add_zero]

k m : ℕ
⊢ k + m = m + k

For the last four examples, you first need to know that add_comm and add_zero are the statements

theorem add_comm {G : Type*} [AddCommMagma G] (a b : G) : a + b = b + a
theorem add_zero {M : Type*} [AddZeroClass M] (a : M) : a + 0 = a

In the first of the four examples, rw applies to the first occurrence of a term of the form a + b. Due to the internal bracketing, (k + m) + 0 is on the left-hand side, so the rewrite yields 0 + (k + m). If you want to use commutativity inside the term k + m, you need the second (or third) example, where rw [add_comm k m] makes the desired progress. In the last example, the two + 0 terms are eliminated by rw [add_zero, add_zero].

Notes

  • rw is used very often in practice to apply statements from Mathlib (at least if they are of the form = or ↔).

  • If you want to combine several rw commands, you can do so in square brackets, for example rw [h1, h2] or rw [h1, ←h2, h3].

  • rw immediately executes a rfl after its application. In the second and third examples of the applications of add_comm and add_zero above, this leads to the new proof state being not as specified, but no goals.

  • If you do not want to perform a rw in sequence (as in the double elimination of the +0 above), you can use nth_rewrite to rewrite the second occurrence of a term.

  • The rw tactic does not work when the term to be rewritten occurs after a binder such as ∀, ∃ or ∑. In this case, simp_rw will hopefully help.

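A minimal example matching the first rows of the table (our own illustration):

example (P Q : Prop) (h : P ↔ Q) (hP : P) : Q := by
  rw [← h]
  exact hP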

3.4.47. set🔗

Summary: set x := e introduces a local abbreviation x for the expression e and substitutes every occurrence of e in the goal (and, with the with clause, in the hypotheses) with x. The substitution is definitional: x and e are interchangeable, but the proof state becomes much more readable.

Examples:

Proof state

Tactic

New proof state

a b : ℕ
⊢ a + b + (a + b) = 2 * (a + b)

set s := a + b with hs

a b : ℕ
s : ℕ := a + b
hs : s = a + b
⊢ s + s = 2 * s

Remarks:

  • Without with hs, no equation hypothesis is introduced, and x is still definitionally equal to e.

  • set is very useful when you want to rw with a complicated subterm, or when reading the goal becomes difficult.

  • The dual let x := e introduces a let-binding into the term but does not fold the expression in the goal.

example (a b : ℕ) : a + b + (a + b) = 2 * (a + b) := by
  set s := a + b with hs
  ring

3.4.48. show🔗

Summary: show e changes the syntactic form of the current goal to e, provided e is definitionally equal to the goal. It is the tactic counterpart of the term-mode type ascription (e : T).

Examples:

Proof state

Tactic

New proof state

⊢ 1 + 1 = 2

show 2 = 2

⊢ 2 = 2

⊢ P ∨ Q

show Q ∨ P

(fails: not def-equal)

Remarks:

  • show does not prove anything; it only reshapes the goal. It is useful before rw, apply, or exact when those tactics expect a specific syntactic shape.

  • show e fails if e is not definitionally equal to the goal. To replace the goal by an equivalent but not definitionally equal statement, use rw or suffices.

example : 1 + 1 = 2 := by
  show 2 = 2
  rfl

example (n : ℕ) : n + 0 = n := by
  show n = n
  rfl

3.4.49. simp🔗

Summary: In Mathlib there are many lemmas with = or ↔ statements that can be applied with rw and are marked with @[simp]. These marked lemmas have the property that the right-hand side is a simplified form of the left-hand side. With simp, Lean looks for matching lemmas and tries to apply them.

Examples

Proof state

Tactic

New proof state

⊢ n + 0 = n

simp

no goals 🎉

h : n + 0 = m
⊢ P

simp at h

h : n = m
⊢ P

Remarks:

If you want to know which lemmas were used, try simp?. This provides some clues.

Proof state

Tactic

New proof state

⊢ n + 0 = n

simp?

no goals 🎉
Try this:
simp only [add_zero, eq_self_iff_true]

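A minimal example matching the second row of the table (our own illustration):

example (n m : ℕ) (h : n + 0 = m) : n = m := by
  simp at h
  exact h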

3.4.50. simp_rw🔗

Summary: simp_rw [h₁, h₂, ...] rewrites with each lemma in turn, but -- unlike rw -- can rewrite under binders (∀, ∃, fun, ∑, ...). Unlike simp, it does not apply any extra simp lemmas; it rewrites with exactly the lemmas you supply.

Examples:

Proof state

Tactic

New proof state

f : ℕ → ℕ
h : ∀ (n : ℕ), f n = 0
⊢ ∀ (n : ℕ), f n + 1 = 1

simp_rw [h]

f : ℕ → ℕ
h : ∀ (n : ℕ), f n = 0
⊢ ∀ (n : ℕ), 0 + 1 = 1

Remarks:

  • rw refuses to rewrite under binders, giving a "motive is not type correct" error in those situations. simp_rw is the right tool there.

  • If the lemmas you rewrite with are simp lemmas and you are happy for simp to apply other simp lemmas too, just use simp only.

example (f : ℕ → ℕ) (h : ∀ (n : ℕ), f n = 0) : ∀ (n : ℕ), f n + 1 = 1 := by
  simp_rw [h]
  simp

3.4.51. specialize🔗

Proof state

Tactic

New proof state

f : ℕ → Prop
h : ∀ (n : ℕ), f n
⊢ P

specialize h 13

f : ℕ → Prop
h : f 13
⊢ P

Summary: A hypothesis h : ∀ n, ... applies to all n, but for the proof of the goal you may only need it for one specific n. With specialize h followed by the desired value, the hypothesis is replaced by its instance at that value.

Examples

example (p : ℕ → Prop) (hp : ∀ (n : ℕ), p n) : p 0 := by
  specialize hp 0
  exact hp

Remarks

  • Just as with use, you have to be careful that the goal remains provable.

  • If you want to use two values of the hypothesis h, first duplicate it with have h' := h; you can then apply specialize to h and h' separately.
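The duplication trick from the last remark can be sketched as follows (the names f, h and h' are assumptions for illustration):

```lean
example (f : ℕ → Prop) (h : ∀ n, f n) : f 1 ∧ f 2 := by
  have h' := h      -- duplicate the hypothesis before specializing
  specialize h 1    -- h  : f 1
  specialize h' 2   -- h' : f 2
  exact ⟨h, h'⟩
```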

example (h : ∀ (n : ℕ), 0 ≤ n) : 0 ≤ 0 := by
  specialize h 0
  exact h

example (h : ∀ (ε : ℝ) (hε : 0 < ε), ∃ (n : ℕ), 1 < n * ε) :
    0 < (0 : ℝ) → ∃ (n : ℕ), 1 < n * 0 := by
  specialize h 0
  exact h

3.4.52. symm🔗

Summary: symm swaps the two sides of a symmetric relation. For equality, it turns a goal a = b into b = a. It also works for ≠, ↔, and any other relation tagged @[symm] in Mathlib.

Examples:

Proof state

Tactic

New proof state

a b : ℕ
⊢ a = b

symm

a b : ℕ
⊢ b = a

h : a = b
⊢ ...

symm at h

h : b = a
⊢ ...

Remarks:

  • The term-mode analogue is Eq.symm h (or h.symm).

  • symm is often used just before exact h when you have h : b = a but a goal a = b.
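The term-mode form from the first remark looks like this:

```lean
example (a b : ℕ) (h : a = b) : b = a := h.symm
```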

example (a b : ℕ) (h : a = b) : b = a := by
  symm
  exact h

example (P Q : Prop) (h : P ↔ Q) : Q ↔ P := by
  symm
  exact h

3.4.53. tauto🔗

Summary: tauto solves all goals that can be solved with a truth table.

Examples

Proof state

Tactic

New proof state

⊢ P ∧ Q → P

tauto or tauto!

No goals

⊢ ((P → Q) → P) → P

tauto!

No goals

The truth tables for ¬P, P ∧ Q and P ∨ Q are as follows; if more terms of type Prop are involved, there are more lines.

P      ¬P
true   false
false  true

P      Q      P ∧ Q
true   true   true
false  true   false
true   false  false
false  false  false

P      Q      P ∨ Q
true   true   true
false  true   true
true   false  true
false  false  false

Notes

The difference between tauto and tauto! is that the latter may additionally use the law of the excluded middle. The second example can therefore only be solved with tauto!, not with tauto.
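A further purely truth-table goal, commutativity of ∧, is closed by tauto alone:

```lean
example (P Q : Prop) : P ∧ Q → Q ∧ P := by
  tauto
```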

example (P : Prop) : False → P := by
  tauto

3.4.54. triv🔗

Summary

triv solves a goal that is by definition identical to True. It also solves goals that can be closed with rfl.

Examples

Proof state

Tactic

New proof state

⊢ True

triv

No goals

⊢ x = x

triv

No goals

example : True := by
  triv

3.4.55. trivial🔗

Summary: trivial tries a short list of simple tactics to close the goal: rfl, assumption, True.intro, decide, and a few others. It is a gentle hammer for goals that "should be obvious".

Examples:

Proof state

Tactic

New proof state

⊢ True

trivial

(no goals)

h : P
⊢ P

trivial

(no goals)

Remarks:

  • Unlike triv, trivial also closes goals that are true by decidability (like 2 + 2 = 4).

  • When trivial does not close a goal, it prints no specific reason -- if you want a more informative automated tactic, try exact? or simp.

example : True := by
  trivial

example (P : Prop) (h : P) : P := by
  trivial

example : (2 + 2 : ℕ) = 4 := by
  trivial

3.4.56. use🔗

Proof state

Tactic

New proof state

f : α → Prop
y : α
⊢ ∃ (x : α), f x

use y

f : α → Prop
y : α
⊢ f y

Summary: The use tactic is for goals that begin with ∃. Its arguments indicate which witness for the ∃-quantified variable is to be used in the proof.

Examples

Proof state

Tactic

New proof state

⊢ ∃ (k : ℕ), k * k = 16

use 4

⊢ 4 * 4 = 16

⊢ ∃ (k l : ℕ), k * l = 16

use 8, 2

⊢ 8 * 2 = 16

In this example, Lean knows that 4 * 4 = 16 by rfl, so we need not type it.

example : ∃ (k : ℕ), k * k = 16 := by
  use 4
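The two-witness variant from the table above can be written analogously:

```lean
example : ∃ (k l : ℕ), k * l = 16 := by
  use 8, 2   -- the remaining goal 8 * 2 = 16 is closed by rfl
```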

In various cases, refine can be used instead of use, e.g. here:

example : ∃ (k : ℕ), k * k = 16 := by
  refine ⟨4, ?_⟩
  rfl

Sometimes, one not only needs to give a term (e.g. δ : ℝ), but also a property (e.g. hδ : 0 < δ). This means we have to give two terms:

example (ε : ℝ) (hε : 0 < ε) : ∃ (δ : ℝ) (hδ : 0 < δ), δ < ε := by
  use ε / 2
  use half_pos hε
  linarith

This can also be written as follows:

example (ε : ℝ) (hε : 0 < ε) : ∃ (δ : ℝ) (hδ : 0 < δ), δ < ε := by
  use ε / 2, half_pos hε
  linarith

Again, refine can abbreviate the whole proof. Note that we have to provide a triple, consisting of δ, a proof of 0 < δ, and a proof of δ < ε:

example (ε : ℝ) (hε : 0 < ε) : ∃ (δ : ℝ) (hδ : 0 < δ), δ < ε := by
  refine ⟨ε / 2, half_pos hε, by linarith⟩