Lean is a functional programming language, i.e. programs are built by composing functions. This paradigm contrasts with imperative programming languages such as Python, Java and C. Lean comes with many features you might be familiar with, such as a library for input and output, but it is still young, and many things remain to be developed.
In all programming languages, you have data types such as int, string and float. These exist in Lean as well, but you can (and in this course will) define your own data types. In all cases, we write x : α for a term x of type α, so we write false : Bool and 42 : ℕ, but also f : ℕ → ℝ (for a function from ℕ to ℝ, where ℕ → ℝ is itself a type) and 0 ≠ 1 : Prop (the proposition that 0 and 1 are different natural numbers). Terms and types can depend on variables, e.g. in ∀ (n : ℕ), n < n + 1 : Prop and f : (n : ℕ) → (Fin n → ℝ) (where Fin n is the type which carries {0, ..., n-1}), i.e. a function f with domain ℕ such that f n ∈ ℝ^n.
As we see, these new data types are more abstract in the sense that Lean understands ℕ (and ℝ) as infinite types, which are not limited by floating point arithmetic. E.g., ℕ actually represents an infinite set that is characterized by containing 0, and if it contains n, then it also contains the successor of n (represented by succ n). Accordingly, the real numbers are defined by an equivalence relation on Cauchy sequences, which is quite elaborate. (Although ℝ is implemented as such a quotient within Lean, we will not have to deal with these implementation details when working with real numbers, since we will rely on results in Mathlib, the mathematical library, taking care of these details.)
In Lean, there are three levels of objects: universes, types and terms. We are concerned here with the last two. Of particular interest is the type Prop, which consists of statements that can be true or false. It includes mathematical statements, i.e. both the hypotheses and the goal of what is to be proven. A hypothesis in Lean has the form hP : P, which means P is true, and this statement is called hP. Equivalently, it means that P is true and hP is a proof of P. The hypotheses here have names P Q R S, and their proofs hP hQ hR hS; all names can be chosen arbitrarily. Furthermore, there are hypotheses of the form P → Q, which is the statement that P implies Q. (Note the similarity to function notation as in f : ℝ → ℝ.)
In Lean, the function f : ℕ → ℕ, x ↦ 2x is defined as
def f : ℕ → ℕ := fun x ↦ 2 * x
(Write \mapsto for ↦.) The variable x here is local: it exists only to define f. Applying f to some x : ℕ is then written f x. (The notation f x abbreviates f(x); Lean is sparing with parentheses.)
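For example (repeating the definition so the snippet is self-contained):

```lean
def f : ℕ → ℕ := fun x ↦ 2 * x

#eval f 21  -- 42
```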
For functions on inductive types (like ℕ, List α, Option α,
Bool), the most natural way to define them is by pattern matching
on the constructors of the input. The syntax uses match ... with
and one branch per constructor, each prefixed by a |.
For example, the factorial function on ℕ matches on whether the
input is 0 or a successor n + 1:
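A sketch of the definition (the name factorial is our own choice):

```lean
def factorial : ℕ → ℕ
  | 0 => 1
  | n + 1 => (n + 1) * factorial n

#eval factorial 5  -- 120
```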
Each branch may use the variables introduced by its pattern (here
n on the right-hand side). Lean checks two things automatically:
Exhaustiveness. Every constructor of ℕ is covered (the cases
0 and n + 1 exhaust ℕ). If you forget a case, Lean
complains.
Termination. The recursive call factorial n is on a strictly
smaller argument than n + 1, so the definition is well-founded.
Pattern matching works for any inductive type:
def negate : Bool → Bool
  | true => false
  | false => true
-- A function on Option α: extract or use a default
def Option.getD' {α : Type*} (d : α) : Option α → α
  | none => d
  | some a => a
-- A recursive function on List α
def length' {α : Type*} : List α → ℕ
  | [] => 0
  | _ :: xs => 1 + length' xs
The same syntax can be used inline with match:
example (n : ℕ) : ℕ := match n with
  | 0 => 42
  | _ + 1 => 0
You will see pattern matching extensively in the Functional
Programming chapter, where we use it to define data structures and
recursive algorithms.
Due to the multitude of types in Lean, we have to be careful about equality. In Lean, there are three types of equality:
Syntactic equality: If two terms are letter-for-letter equal, then they are syntactically equal. There are a few more situations in which two terms count as syntactically equal. If one term is just an abbreviation for the other (for example, x = y is an abbreviation for Eq x y, where Eq is the built-in equality predicate on two terms of the same type), then the two terms are syntactically equal. Also equal are terms whose bound variables are named by different letters. For example, ∀ x, ∃ y, f x y and ∀ y, ∃ x, f y x are syntactically equal.
Definitional equality: Some terms are equal by definition in Lean. For example, for x : ℕ, the term x + 0 is by definition identical to x. However, 0 + x is not definitionally identical to x. This asymmetry comes from the internal definition of addition of natural numbers in Lean, which recurses on the second argument.
Propositional equality: If there is a proof of x = y, then x and y are said to be propositionally equal. Similarly, terms P and Q are said to be propositionally equal if you can prove P ↔ Q. Some Lean tactics work only up to syntactic equality (such as rw); most others work up to definitional equality (such as apply, exact, ...). This means that such a tactic automatically identifies terms that are syntactically or definitionally equal.
There is a special kind of equality to be observed for sets and functions. For example, two functions are equal precisely if they return the same value for every element of the domain. This principle is called extensionality in the theory of programming languages. (If extensionality applies, then, for example, two sorting algorithms are equal if they always produce the same result.)
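In Lean, extensionality for functions is the lemma funext, and Set.ext plays the same role for sets (the latter assuming Mathlib is imported):

```lean
example (f g : ℕ → ℕ) (h : ∀ x, f x = g x) : f = g :=
  funext h

example (s t : Set ℕ) (h : ∀ x, x ∈ s ↔ x ∈ t) : s = t :=
  Set.ext h
```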
There are (essentially) three different types of parentheses in Lean statements. The simplest is (...), which, as in everyday use, indicates what belongs together. However, you have to learn how Lean brackets internally when no '()' are given. Operators like and (∧) and or (∨) are right-associative, so e.g. P ∧ Q ∧ R := P ∧ (Q ∧ R). Applying functions in sequence, such as f : ℕ → ℝ and g : ℝ → ℝ, to n : ℕ is written g (f n), because g expects an input of type ℝ, and this is what f n provides. Here the parentheses cannot be omitted: function application is left-associative, so g f n would be read as (g f) n.
Now let's comment on the parentheses [...] and {...}. Consider for example #check @gt_iff_lt (the statement that a > b holds if and only if b < a holds), in which both kinds of brackets occur. This yields
gt_iff_lt : ∀ {α : Type u_1} [inst : LT α] {a b : α}, a > b ↔ b < a
When this result is applied, the arguments in {...} and [...] are supplied by Lean itself. The arguments in {...} can be inferred from the objects that have to be given explicitly. (Above, when applying gt_iff_lt, the variables a and b have to be given, so their type α is known and need not be specified explicitly.) Since the application is made to a concrete α (for example, ℕ), and Lean knows a lot about the natural numbers, the type class system can look up many properties of ℕ, and in particular finds an instance LT ℕ (i.e. ℕ carries at least a less-than relation).
3.1.7. Proving propositions and evaluating functions
Although we almost exclusively prove propositions in tactic mode in these notes, it is instructive to see the simplest example of how to turn a tactic proof into term mode. There are two rules:
The tactic exact is the same as calling a function.
The tactic intro corresponds to taking a variable, which becomes the argument of a function evaluated in the next step.
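Both rules at work in a tiny example:

```lean
-- tactic mode:
example (P Q : Prop) (h : P → Q) : P → Q := by
  intro hP
  exact h hP

-- the same proof in term mode: intro becomes fun, exact becomes function application
example (P Q : Prop) (h : P → Q) : P → Q :=
  fun hP ↦ h hP
```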
Many everyday types in Lean -- Nat, List, Option, Bool, even
Empty -- are inductive types. You declare one by giving a name,
the type's universe, and a list of constructors: each constructor
says how to build a new element of the type out of existing pieces.
Proofs about an inductive type use the induction tactic, which
applies the recursor for you: one subgoal per constructor, with an
induction hypothesis for each recursive argument.
Inductive types are the main mechanism by which new data types enter
Lean; Mathlib uses them extensively, and understanding them is
essential for reading the library.
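As a sketch (the type MyTree is our own example), here is a small inductive declaration and a proof by induction on ℕ:

```lean
inductive MyTree where
  | leaf : MyTree
  | node : MyTree → MyTree → MyTree

example (n : ℕ) : 0 + n = n := by
  induction n with
  | zero => rfl
  | succ n ih => rw [Nat.add_succ, ih]
```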
3.1.9. Exploring definitions with #check, #print and inferInstance
Lean provides a handful of commands that are invaluable for exploring
a library like Mathlib. They all start with # and only print
information -- they do not contribute to a proof.
#check e prints the type of the term or expression e. This is
the fastest way to learn what a lemma says, or what type a definition
has.
#check @lemma (with a leading @) prints the type of the lemma
without hiding implicit and instance arguments. Use @ when you
want to see every argument.
#print name prints the definition of the constant name. For a
typeclass, this shows you the list of fields; for a def, the body;
for a structure, the constructor and fields.
#eval e evaluates the term e (when it is computable) and prints
the result. It works on concrete ℕ, List, etc., but not on
abstract propositions.
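A quick sketch of these commands in action:

```lean
#check Nat.add_comm   -- prints the statement: commutativity of addition
#print Nat.add_comm   -- prints the definition (here: the proof term)
#eval (3 : ℕ) + 4     -- 7
```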
A very common idiom is to ask Lean whether a given type has a specific
instance (e.g. "is ℕ a commutative monoid?"):
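A sketch of the idiom (assuming Mathlib is imported, so the instance exists):

```lean
-- "Is ℕ a commutative monoid?" -- yes, Lean synthesizes the instance:
example : CommMonoid ℕ := inferInstance
```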
The term inferInstance asks Lean to synthesize an instance of the
indicated type; if no such instance exists the command will fail with
a readable error message.
Two tactics complement these commands during an interactive proof:
exact? searches Mathlib for a single lemma that closes the current
goal and prints an exact <lemma> line you can copy.
apply? is the same, but it suggests lemmas whose conclusion
matches the goal, leaving side conditions as new subgoals.
Together, #check, #print, inferInstance, exact? and apply?
are the main tools for navigating an unfamiliar part of Mathlib.
3.1.10. Getting help: Loogle, LeanSearch, Moogle, and friends
Mathlib is enormous, and naming conventions -- however consistent
-- will only take you so far. When you know what you want to say
but not which lemma says it, a handful of search services can
save hours of scrolling. All of them are available both as web
pages and, more conveniently, as commands inside Lean via the
LeanSearchClient package (already a dependency of Mathlib).
Loogle (loogle.lean-lang.org)
searches by type shape. You write a pattern (using _ for
holes) and Loogle returns every Mathlib lemma whose statement
unifies with it. Inside Lean:
#loogle (?a + ?b) * ?c = _
returns distributivity lemmas in all their variants. Loogle also
accepts a comma-separated list of constants that must appear,
e.g. #loogle Real.exp, Real.log, or a target conclusion
#loogle |- tsum _ = _.
LeanSearch (leansearch.net)
searches by natural language. You phrase your goal in English
("derivative of composition of functions") and it returns the
most relevant Mathlib lemmas, ranked by semantic similarity.
From inside Lean:
#leansearch "the derivative of a product of functions"
Moogle (moogle.ai) is another
natural-language search, with a different ranking model. In
Lean:
#moogle "bolzano-weierstrass"
The two tools often surface different lemmas, so it is worth
trying both.
Mathlib docs (the generated API reference at
leanprover-community.github.io/mathlib4_docs). If you already know the namespace
(Filter, MeasureTheory, Topology.Basic, ...), browsing the
module is faster than any search.
Zulip (leanprover.zulipchat.com, the #new members and
#Mathlib4 streams) is the community chat. For questions that
no search answers, posting a minimal example together with the
goal you're stuck on almost always gets a helpful reply within
hours.
AI assistants (ChatGPT, Claude, Gemini, Copilot, and the
Lean-focused plugins and Copilot-for-Lean projects built on top of
them) have become surprisingly effective at suggesting lemma names,
spotting why a tactic fails, and translating informal arguments into
Lean. They are fallible -- they will cheerfully invent lemmas that
do not exist, or quote outdated Mathlib 3 syntax -- but used
critically they are one of the fastest ways to get unstuck. A good
workflow is: paste your goal state and the surrounding example,
ask for two or three candidate approaches, and then verify each
suggestion in Lean (via exact?, #loogle, or simply by trying to
compile it). Treat AI output the same way you would treat a
confident but occasionally wrong colleague -- use it, but always
check.
You are strongly encouraged to use these tools alongside #loogle,
#leansearch, and the Mathlib docs.
As a rule of thumb: try exact? / apply? first (they know your
exact proof state), then #loogle (precise, fast), then
#leansearch / #moogle or an AI assistant (when you do not know
the vocabulary Mathlib uses), and finally the docs or Zulip for
open-ended questions.
There are at least two abbreviations used in Mathlib which you will encounter frequently.
If you have h : x = y and hx : P x (with P x : Prop), you can prove P y by replacing h in hx. The shorthand notation for this is h ▸ hx. (Write \t for ▸).
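A minimal sketch:

```lean
example (P : ℕ → Prop) (x y : ℕ) (h : x = y) (hx : P x) : P y :=
  h ▸ hx
```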
Sometimes, bracketing is critical, and expressions of the form apply first (second very long statement) appear frequently; you might get lost because the closing bracket is far away from its opening counterpart. In this case, we write apply first <| second very long statement, which needs no closing symbol.
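A sketch with a short statement in place of the "very long" one (le_of_lt and Nat.succ_pos are existing lemmas):

```lean
-- with parentheses:
example (n : ℕ) : 0 ≤ n + 1 := le_of_lt (Nat.succ_pos n)

-- with <| no closing parenthesis is needed:
example (n : ℕ) : 0 ≤ n + 1 := le_of_lt <| Nat.succ_pos n
```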
Lean allows you to define custom notation using the notation command. This is useful when you want a concise mathematical symbol for a frequently used expression. The general syntax is
notation "symbol" arg1 arg2 ... => expression
where the left-hand side describes the syntax pattern (with arguments) and the right-hand side is the Lean expression it expands to. Here is a simple example:
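A sketch of such a definition (the symbol δ is our own example):

```lean
notation "δ" => (2 : ℕ)

#eval δ + δ  -- 4
```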
After this definition, δ is replaced by 2 everywhere. The notation is purely syntactic: Lean replaces every occurrence of the new notation by the right-hand side before type checking. Here is a more interesting example with an argument:
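A sketch of such an operator (here ⊕⊕ simply adds its arguments):

```lean
infixl:65 " ⊕⊕ " => Nat.add

#eval 1 ⊕⊕ 2 ⊕⊕ 3  -- 6, parsed as (1 ⊕⊕ 2) ⊕⊕ 3
```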
Here, infixl means left-associative infix, 65 is the binding power (the same as +), and the spaces around ⊕⊕ are part of the syntax. Similarly, infixr gives right-associative infix, and prefix / postfix are available for unary operators.
notation can take more than one argument and mix custom tokens with them:
section TernaryDemo

-- "between a and b" means the half-open interval [a, b)
notation "between " a " and " b => Set.Ico a b

#check between (1 : ℕ) and 10  -- Set ℕ

end TernaryDemo
Mathlib uses notation3 (and its underlying machinery syntax +
macro_rules) to introduce binder-style notation like
∑ k ∈ range n, f k and ∀ᶠ x in F, p x. These let you bind a
variable inside the expression that follows the comma. Writing such
notation from scratch involves a fair amount of macro plumbing and
is beyond the scope of this section -- the standard reference is the
Lean 4 documentation on macros.
For ordinary day-to-day notation, plain notation, infixl,
infixr, prefix, and postfix are almost always enough.
If a notation should only be active inside a namespace (so it does
not pollute the global symbol space), mark it scoped:
namespace MyDemo
scoped notation "✦" => (42 : ℕ)
end MyDemo
-- Outside the namespace, `✦` is not in scope by default;
-- enable it with `open MyDemo` or `open scoped MyDemo`.
open scoped MyDemo
#eval ✦  -- 42
This is the pattern used throughout Mathlib for notation like 𝓝,
𝓟, ∫, ∀ᶠ, etc.: they are scoped to the relevant namespace and
enabled with open scoped.
For more complex notation involving multiple tokens or bespoke
parsing rules, you can use the syntax and macro_rules commands,
but notation, the infix variants, prefix/postfix, and
notation3 cover most common use cases.
The abbrev command introduces a definition that is reducibly transparent, meaning Lean's elaborator will unfold it automatically whenever needed. In contrast, a definition introduced with def is semireducible and will not be unfolded automatically.
abbrev MyNat := ℕ
After this, MyNat and ℕ are interchangeable everywhere — Lean treats them as definitionally equal without needing any extra work. In particular, all type class instances that apply to ℕ are automatically available for MyNat. Compare this with
def MyNat' := ℕ
where MyNat' is a genuinely new type: Lean will not automatically apply ℕ instances to MyNat', and you would have to derive or register them yourself.
Use abbrev when you want a short name for a type or expression but do not want to create a new abstraction barrier. Use def when you intentionally want to hide the definition (e.g. to prevent the simplifier from unfolding it).
In Lean every object has a type. Types themselves fall into several categories, called universes. There are two main universes, called Prop and Type. Any mathematical statement comes with a claim and its proof. Say we want to claim something, such as Goldbach's conjecture:
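A sketch of how the conjecture might be stated in Lean (the name goldbach is our own; sorry stands in for the missing proof):

```lean
theorem goldbach : ∀ (n : ℕ) (h₁ : n > 2) (h₂ : Even n),
    ∃ (i j : ℕ), Prime i ∧ Prime j ∧ (n = i + j) := by
  sorry
```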
we speak about a term ∀ (n : ℕ) (h₁ : n > 2) (h₂ : Even n), ∃ (i j : ℕ), Prime i ∧ Prime j ∧ (n = i + j) of type Prop, which constitutes its own type. A term of this type (which would replace the sorry in the Lean code above) is equivalent to a proof of Goldbach's conjecture.
This is to say:
Types as theorems, terms as proofs!
Constructing a term of type ℕ is easy (Lean accepts 0 : ℕ for this construction), whereas constructing a term of type ∀ (n : ℕ) (h₁ : n > 2) (h₂ : Even n), ∃ (i j : ℕ), (Prime i) ∧ (Prime j) ∧ (n = i + j) would require proving Goldbach's conjecture and implementing the proof in Lean.
Since we are already speaking about fundamentals: for a large part, it is safe to think of types as sets. Recall that it leads to logical self-inconsistencies if we allow for something like the set/type of all sets/types. For this reason, the Type universe is split into levels, such as Type 0 : Type 1, saying that Type 0 (the type of all objects at level 0) is itself a term of Type 1; i.e., we move up a ladder when constructing more complex types. The corresponding idea goes back to Ernst Zermelo and John von Neumann.
The Lean library which contains many mathematical results is called Mathlib. On its documentation page you can search for results and concepts. (More precisely, you can search the names of definitions, lemmas and theorems.) You will find results about e.g. real numbers, groups in algebra, topology and measure theory.
Other ways to search Mathlib are Moogle and Loogle (the latter is also available in VS Code when clicking on the ∀ sign).
Let us start with some simple examples in order to explain the first tactics in Lean. Here is a quick preview of the tactics we will meet in this chapter. They are explained in detail below, and a full alphabetical reference — with many more tactics — lives in the Tactics chapter.
| Tactic | Use it when | Effect |
| --- | --- | --- |
| `intro hP` | goal is `P → Q` or `∀ x, …` | move the hypothesis (or bound variable) into the context |
| `exact e` | the goal matches a known term `e` | close the goal with `e` |
| `apply h` | `h : P → Q` and goal is `Q` | reduce the goal to `P` |
| `rw [h]` | `h : a = b` or `h : P ↔ Q` | rewrite occurrences of `a` to `b` (resp. `P` to `Q`) in the goal; `rw [h] at h'` rewrites in a hypothesis; `rw [← h]` rewrites the other way |
| `simp` | goal / hypothesis can be normalised by standard lemmas | simplify using Mathlib's simp set (`simp?` suggests which lemmas) |
| `apply?` | stuck — need a suitable lemma from Mathlib | searches the library and proposes a closing `exact …` |
| `have h : P := …` | want to record an intermediate fact `P` | introduces `h : P` into the context (provide the proof on the right, or enter tactic mode with `by`) |
| `refine e` | like `exact`, but with holes `_` or `?_` | closes the goal up to the remaining holes, which become new goals |
| `obtain ⟨x, hx⟩ := h` | `h : ∃ x, P x` or `h : P ∧ Q` | destructs the witness / pair into named pieces |
You can also skip this chapter on the first pass and come back to each tactic as you need it; treat Tactics as your glossary.
Depending on the position of the cursor, you will see the corresponding proof state. If the cursor is directly after by, the initial proof state is shown. It is important to know that behind ⊢ (called the turnstile) stands the goal, and everything above it are hypotheses. (In the case shown, the only hypothesis is that P is a proposition.) This display thus corresponds exactly to the assertion. If the cursor is after the sorry, the display reads no goals, but the sorry tactic merely closes unproven goals without further action, and a warning is issued in VS Code. If you delete the sorry and replace it with an intro hP followed by exact hP, we get
Here, we use rw to transform the goal P into the rewritten goal Q. If we want to use rw in the reverse direction, we write rw [← hPQ], and we can also use rw in hypotheses by writing e.g. rw [hPQ] at hP. Here is an example.
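A sketch of both uses:

```lean
example (P Q : Prop) (hPQ : P = Q) (hQ : Q) : P := by
  rw [hPQ]        -- the goal P becomes Q
  exact hQ

example (P Q : Prop) (hPQ : P = Q) (hP : P) : Q := by
  rw [hPQ] at hP  -- the hypothesis hP : P becomes hP : Q
  exact hP
```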
Of course, we want to make use of known facts when proving new ones. Two main search functions are built in: simp? and apply?. The first is based on simp, which works with a collection of simplification rules; simp? additionally reports which rules were used. Here is an example:
example : (0 < 1) := by
  simp?

Lean answers with All goals completed! 🐙 and suggests Try this: simp only [zero_lt_one], which is inserted into our code when we click on it.
In fact, the same example can be solved using library search using apply?. Here, Lean searches its library for possible results, and often outputs very many results and the remaining goals. Here, it is simple:
Assume we cannot prove our goal in one step, but need some intermediate result. In this case, we have the have tactic: we simply claim what we need as an intermediate step. For the moment, we leave the rest open using sorry.
We come back to our intermediate result later, but first want to use it in the rest of the proof. However, we have to disentangle the ∃ statement in h. Often, we have to take apart what we are given. Note that a proof of ∃ (n : ℕ), Even n consists of some (n : ℕ) and a proof of Even n, i.e. a pair of objects, and pairs in Lean are given using ⟨_, _⟩. We use obtain in order to get the two components of the pair:
Now we can use that n is Even and therefore would like to use that n * n = (2*k) * (2*k) = 4*k*k for showing that n * n is even. In order to see what we have to show, let us simplify using the definition of Even:
The rest should be easy, since only a calculation remains. Here, we use rw and the ring tactic, which can do calculations in any commutative ring:
It remains to show the intermediate step. Here, we have to give Lean a pair, i.e. some n as well as a proof of Even n. This can be done not only with use as above, but also with refine, which is able to assemble a pair from two separate objects. Assume we want to use that Even 48 but do not have a proof yet. Then we write ?_, which stands for a hole in the proof to be filled in later:
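A sketch of this pattern, assuming Mathlib's definition of Even n as ∃ r, n = r + r:

```lean
example : ∃ n : ℕ, Even n := by
  refine ⟨48, ?_⟩
  -- remaining goal: Even 48
  exact ⟨24, rfl⟩
```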
Names like zero_add, add_zero, one_mul, add_assoc, succ_ne_zero, lt_of_succ_le, ... seem cryptic. Individual, relatively understandable abbreviations (zero, one, mul, add, succ, ...) are separated by _. In general, the following rules apply to naming:
The goal of the statement to be proven is described; if hypotheses appear in the name, they are introduced with of_. The statement lt_of_succ_le is therefore a < statement whose hypothesis is a succ ≤ statement. In fact:
It is now time to move to the exercises. So, proceed to VS Code (or Gitpod), copy the exercises folder and start coding. Further hints on tactics etc. are given within the exercises. Tactics are listed in alphabetical order in the next chapter.
Lean is, before anything else, a functional programming language.
If you have a background in mathematics but not in programming, do
not worry: functional programming is in many ways closer to
mathematics than imperative programming (like Python or Java). The
central idea is that programs are built from functions in the
mathematical sense -- they take inputs and produce outputs, without
hidden side effects.
Earlier sections in this chapter already introduced the basic
machinery we need: def for defining functions, fun for anonymous
functions, inductive for defining new types, and pattern matching
/ structural recursion for writing functions on inductive types.
The aim of this section is to collect a few extra examples and to
introduce higher-order functions, one of the hallmarks of the
functional style.
A pure function is one whose output depends only on its inputs.
Given the same arguments, it always returns the same result. This
is exactly how functions work in mathematics: if f(x) = x + 1,
then f(3) is always 4.
In Lean, we define functions with def and evaluate them with
#eval:
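For example (the name double is our own):

```lean
def double (n : ℕ) : ℕ := 2 * n

#eval double 21  -- 42
```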
Anonymous functions are particularly useful when passing functions
as arguments to other functions. In mathematics one would write
x ↦ x^2; in Lean we write fun x ↦ x ^ 2.
The Inductive types and Pattern matching sections above already
covered the fundamentals. Here are a few further examples that we
will reuse when discussing higher-order functions below.
A binary tree of natural numbers, together with a function that
sums all its values:
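A sketch (the names NatTree and NatTree.sum are ours):

```lean
inductive NatTree where
  | leaf : NatTree
  | node : NatTree → ℕ → NatTree → NatTree

def NatTree.sum : NatTree → ℕ
  | .leaf => 0
  | .node l v r => NatTree.sum l + v + NatTree.sum r

#eval NatTree.sum (.node .leaf 1 (.node .leaf 2 .leaf))  -- 3
```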
A higher-order function is one that takes a function as an
argument or returns a function as a result. This is one of the
central ideas of functional programming, and it is also very
natural mathematically. For instance, the derivative is a
higher-order function: it takes a function and returns a function.
The three most important higher-order functions on lists are
List.map, List.filter, and List.foldl (also known as
fold).
List.map applies a function to every element of a list:
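For example, here are all three in action:

```lean
#eval List.map (fun x => x * x) [1, 2, 3]             -- [1, 4, 9]
#eval List.filter (fun x => x % 2 == 0) [1, 2, 3, 4]  -- [2, 4]
#eval List.foldl (· + ·) 0 [1, 2, 3, 4]               -- 10
```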
Functional programming in Lean is built on a few core ideas:
Pure functions: outputs depend only on inputs.
Inductive types: define new types by specifying constructors.
Pattern matching: define functions by cases on the constructors.
Structural recursion: define recursive functions that provably terminate.
Higher-order functions: pass functions as arguments or return them.
These ideas align closely with mathematical practice: inductive
types correspond to inductively defined sets, pattern matching
corresponds to case analysis, and recursion corresponds to
inductive definitions.
apply e tries to match the current goal against the conclusion of e's type.
If it succeeds, the tactic returns as many subgoals as there are premises that
have not been fixed by type inference or type class resolution.
Non-dependent premises are added before dependent ones.
The apply tactic uses higher-order pattern matching, type class resolution,
and first-order unification with dependent types.
Extensions:
apply (config := cfg) e allows for additional configuration (see Lean.Meta.ApplyConfig):
newGoals controls which new goals are added by apply, in which order.
-synthAssignedInstances will not synthesize instance implicit arguments if they have been
assigned by isDefEq.
+allowSynthFailures will create new goals when instance synthesis fails, rather than erroring.
+approx enables isDefEq approximations (see Lean.Meta.approxDefEq).
Summary: There are already a lot of proven statements in Mathlib. When using apply?, Mathlib is searched for statements whose types correspond to those of the statement to be proved. If this is not successful, Lean reports a timeout. If successful, it also reports which commands were found. If you click on it, this is inserted in place of apply?.
Searches environment for definitions or theorems that can refine the goal using apply
with conditions resolved when possible with solve_by_elim.
The optional using clause provides identifiers in the local context that must be
used when closing the goal.
Use +grind to enable grind as a fallback discharger for subgoals.
Use +try? to enable try? as a fallback discharger for subgoals.
Use -star to disable fallback to star-indexed lemmas.
Use +all to collect all successful lemmas instead of stopping at the first.
assumption tries to solve the main goal using a hypothesis of compatible type, or else fails.
Note also the ‹t› term notation, which is shorthand for show t by assumption.
Summary:
If you have a term P : Prop in the context, by_cases hP : P returns two goals. The first one has hP : P, and the second one hP : ¬P. This tactic is thus equivalent to have hP : P ∨ ¬P := em P followed by a case split on hP. (Here em : ∀ (p : Prop), p ∨ ¬p.)
Examples
| Proof state | Tactic | New proof state |
| --- | --- | --- |
| `⊢ P` | `by_cases h : Q` | `h : Q ⊢ P` and `h : ¬Q ⊢ P` |
| `x : Bool ⊢ x = true ∨ x = false` | `by_cases h : x = true` | `x : Bool, h : x = true ⊢ x = true ∨ x = false` and `x : Bool, h : ¬x = true ⊢ x = true ∨ x = false` |
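For example, by_cases gives a quick proof of the law of excluded middle itself:

```lean
example (P : Prop) : P ∨ ¬P := by
  by_cases h : P
  · exact Or.inl h
  · exact Or.inr h
```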
In the second example, we use a variable of type Bool. This is defined as follows:
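In core Lean, the declaration reads (shown for reference only; do not redeclare it yourself):

```lean
inductive Bool : Type where
  | false : Bool
  | true : Bool
```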
Logically speaking, this is equivalent to Prop (the type of propositions). The distinction is nevertheless important for programming: propositions and their proofs are erased by the code generator, while Bool corresponds to the Boolean type of most programming languages and carries precisely one bit of run-time information.
Note that the by_cases tactic (just like by_contra) assumes that every statement is either true or false. This is known as the law of excluded middle. In mathematics, proofs that avoid this rule are called constructive.
For terms of type Prop, the tactic tauto (or tauto!) can draw various conclusions from a truth table.
The by_contra tactic provides a proof by contradiction. It is therefore assumed (i.e. transformed into a hypothesis) that the statement (after ⊢) is not true, and this must be made to contradict itself, i.e. a proof of false must be found.
Examples
| Proof state | Tactic | New proof state |
| --- | --- | --- |
| `⊢ P` | `by_contra h` | `h : ¬P ⊢ False` |
| `h : ¬¬P ⊢ P` | `by_contra hnegP` | `h : ¬¬P, hnegP : ¬P ⊢ False` |
Remarks
This tactic is stronger than exfalso: there, the goal is only converted to False without adding a new hypothesis. With by_contra, the new goal is also False, but there is additionally a new hypothesis.
Summary: As the word suggests, calc is about concrete calculations. It is not a tactic but a Lean mode: you enter it with the keyword calc and then give the calculation steps together with a proof that each individual step is correct.
Examples
Here is a proof of the first binomial formula that comes about purely by rewriting with arithmetic lemmas from Mathlib.
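A sketch of such a calc proof; here each step is justified with the ring tactic rather than with individual rewrites:

```lean
example (a b : ℝ) : (a + b) ^ 2 = a ^ 2 + 2 * a * b + b ^ 2 := by
  calc (a + b) ^ 2 = (a + b) * (a + b) := by ring
    _ = a * a + 2 * (a * b) + b * b := by ring
    _ = a ^ 2 + 2 * a * b + b ^ 2 := by ring
```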
proves a=z from the given step-wise proofs. = can be replaced with any
relation implementing the typeclass Trans. Instead of repeating the right-
hand sides, subsequent left-hand sides can be replaced with _.
calc
a = b := pab
_ = c := pbc
...
_ = z := pyz
It is also possible to write the first relation as <lhs> on one line, followed by _ = <rhs> := <proof> on the next. This is useful for aligning relation symbols, especially with longer identifiers:
Summary: If a hypothesis is composite, i.e. can be expanded into two or more cases, cases' delivers exactly that. It can be used not only with hypotheses h : P ∨ Q or h : P ∧ Q, but also with structures that consist of several cases, such as ∃ ... (a variable together with a statement), x : Bool, or n : ℕ.
Applying cases' n for n : ℕ is strictly weaker than induction (see induction). After all, cases' only splits n : ℕ into the two cases 0 and succ n; you cannot use the statement for n - 1 to prove the statement for n.
A property of, say, the natural numbers, gives rise to Set ℕ by collecting all n : ℕ satisfying the property. In other words, P n and the membership n ∈ {m | P m} are equivalent.
In fact, constructor is the same as refine ⟨?_, ?_⟩ (i.e. leaving two new goals). However, refine is more flexible since it is not bound to two variables:
When we have a property of the natural numbers, P : ℕ → Prop, {m // P m} is the subtype consisting of all m for which P m holds. Every x : {m // P m} is a pair ⟨x.val, x.prop⟩, where x.val : ℕ is its numerical value and x.prop is a proof of P x.val. So we can construct a member of the subtype by giving a number n and a proof hn : P n: the anonymous constructor ⟨n, hn⟩ (i.e. Subtype.mk n hn) produces a term of {m // P m}.
Summary: contradiction closes the goal if the local context contains two contradictory hypotheses, a hypothesis of type False, a hypothesis h : a ≠ a, or similar trivially absurd facts.
Examples:
Proof state
Tactic
New proof state
h : False
contradiction
(no goals)
h₁ : P h₂ : ¬P
contradiction
(no goals)
Remarks:
contradiction is usually faster and cleaner than exact absurd h h'
or exfalso; exact h' h.
It also recognises Nat.zero_ne_one-style contradictions and
evaluates numeric literals.
Summary: contrapose applies the contrapositive: given a goal
P → Q, it turns it into ¬Q → ¬P. The variant contrapose!
additionally pushes the negations inward using push_neg.
Examples:
Proof state
Tactic
New proof state
h : P ⊢ Q
contrapose h
h : ¬Q ⊢ ¬P
h : x ≤ y ⊢ f x ≤ f y
contrapose! h
h : f y < f x ⊢ y < x
Remarks:
contrapose (without !) simply inserts ¬. contrapose!
additionally runs push_neg on both the hypothesis and the goal,
so ¬(x ≤ y) becomes y < x and so on.
Without an argument, contrapose transforms the goal P → Q into
¬Q → ¬P. With contrapose h, the hypothesis h is negated and
swapped with the (negated) goal, as in the examples above.
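A minimal sketch of the argument-free form:

```lean
example (P Q : Prop) (h : ¬Q → ¬P) : P → Q := by
  contrapose   -- the goal becomes ¬Q → ¬P
  exact h
```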
Summary: decide closes a goal whose statement is decidable --
i.e. the typeclass Decidable P finds a proof or refutation of P
by computation. Typical candidates are concrete arithmetic
equalities and inequalities, membership in a finite set, divisibility,
and propositional logic on concrete inputs.
Examples:
Proof state
Tactic
New proof state
⊢ (2 + 2 : ℕ) = 4
decide
(no goals)
⊢ (3 : ℕ) ∣ 12
decide
(no goals)
Remarks:
decide performs kernel reduction: it evaluates the decision
procedure on the given inputs. For large inputs it can be slow (or time
out); for general arithmetic, prefer omega or norm_num.
For propositions that depend on a variable (e.g. ∀ n, P n),
decide will not work -- it needs concrete inputs.
Summary: If the goal can be closed with a single proof term, then it can be closed with the exact tactic. Like many other tactics, exact also works with terms that are only definitionally equal to the goal.
Examples:
Proof state
Tactic
New proof state
h : P ⊢ P
exact h
no goals
hP: P hQ: Q⊢ P ∧ Q
exact ⟨ hP, hQ ⟩
no goals
Remarks:
The related tactic exact? searches for terms which close the goal; see apply?.
If the proof consists of a single call of exact, it is easy to translate it to term mode; see easy proofs in term mode.
When several hypotheses are combined after exact, note the order in which they are applied. The first term after exact is always the one whose conclusion matches the goal. If it requires further input, this is appended afterwards.
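The two table examples, written out as complete proofs:

```lean
example (P : Prop) (h : P) : P := by
  exact h

example (P Q : Prop) (hP : P) (hQ : Q) : P ∧ Q := by
  exact ⟨hP, hQ⟩
```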
Summary: The statement False → P is true for all P. If the current goal is ⊢ P and you apply this true statement using apply, the new goal is ⊢ False. This is exactly what the exfalso tactic does.
Examples:
Proof state
Tactic
New proof state
h : P ⊢ Q
exfalso
h : P ⊢ False
hP : P hnP : ¬P ⊢ Q
exfalso
hP : P hnP : ¬P ⊢ False
Remarks:
Note that exfalso is constructively valid: the principle ex falso quodlibet (False.elim) does not require the law of excluded middle. However, the tactic discards the current goal, so use it only when the context is actually contradictory.
exfalso is the same as apply False.elim; see the examples for exact.
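The second table example, written out (hnP hP : False, since ¬P unfolds to P → False):

```lean
example (P Q : Prop) (hP : P) (hnP : ¬P) : Q := by
  exfalso        -- goal becomes ⊢ False
  exact hnP hP   -- ¬P applied to P yields False
```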
Summary: Extensionality is the principle that two objects are equal if they behave identically: two functions are equal if they give the same result for all arguments, two sets are equal if they contain the same elements. The ext tactic applies all applicable extensionality lemmas to prove such equalities.
Summary: funext is the function extensionality tactic. To
prove a goal of the form f = g, where f g : (x : α) → β x are
two (possibly dependent) functions, funext x introduces a fresh
variable x : α and reduces the goal to f x = g x. This
relies on the funext axiom from Lean's type theory.
Examples:
Proof state
Tactic
New proof state
f g : ℕ → ℕ ⊢ f = g
funext n
f g : ℕ → ℕ
⊢ f n = g n
f g : (x : α) → β x ⊢ f = g
funext x
f g : (x : α) → β x
⊢ f x = g x
Remarks:
funext is a special case of the more general ext tactic. Use
funext when you specifically want to compare two functions
pointwise; use ext when the goal involves more than one layer
of extensionality (e.g. functions returning sets).
You can introduce several variables at once, e.g. funext x y to
reduce (fun x y ↦ f x y) = (fun x y ↦ g x y) to f x y = g x y.
Function extensionality. If two functions return equal results for all possible arguments, then
they are equal.
It is called “extensionality” because it provides a way to prove two objects equal based on the
properties of the underlying mathematical functions, rather than based on the syntax used to denote
them. Function extensionality is a theorem that can be proved using quotient
types.
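A concrete runnable instance of the pattern, as a sketch (Nat.add_zero is the core lemma n + 0 = n):

```lean
example : (fun n : ℕ => n + 0) = (fun n : ℕ => n) := by
  funext n
  exact Nat.add_zero n
```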
Summary: field_simp clears denominators in a goal or hypothesis
involving a field (like ℚ, ℝ, ℂ). It rewrites expressions
of the form a / b into a common form, turning an equation between
rational expressions into one without division. Combined with
ring, it closes most identities in fields.
Examples:
Proof state
Tactic
New proof state
a b : ℝ hb : b ≠ 0
⊢ a / b + 1 = (a + b) / b
field_simp
a b : ℝ
⊢ a + b = a + b
Remarks:
field_simp typically needs nonzeroness hypotheses for every
denominator. These are taken from the context automatically, or
can be supplied with field_simp [h₁, h₂].
The idiomatic combination is field_simp; ring: field_simp
removes division, ring then finishes the algebraic identity.
It works over any Field, DivisionRing, or GroupWithZero.
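The table example written out as a sketch; field_simp picks up the hypothesis hb : b ≠ 0 from the context, and any remaining polynomial identity could be finished with ring:

```lean
example (a b : ℝ) (hb : b ≠ 0) : a / b + 1 = (a + b) / b := by
  field_simp   -- clears denominators using hb : b ≠ 0
```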
Summary: filter_upwards is the workhorse tactic for proving
goals of the form ∀ᶠ x in F, P x. Given a list of Eventually
hypotheses, it intersects them, peels off the ∀ᶠ quantifier, and
leaves you with a pointwise goal in which all of the supplied
hypotheses appear specialized at the point x.
Examples:
Proof state
Tactic
New proof state
F : Filter α h₁ : ∀ᶠ x in F, p x h₂ : ∀ᶠ x in F, q x
⊢ ∀ᶠ x in F, p x ∧ q x
filter_upwards [h₁, h₂] with x hp hq
x : α hp : p x hq : q x
⊢ p x ∧ q x
Remarks:
The list [h₁, h₂, ...] may be empty (filter_upwards []),
in which case the tactic only peels off the ∀ᶠ and leaves the
pointwise goal P x.
The with clause names the bound point and the specialized
hypotheses; if you omit it, fresh names are chosen automatically.
Use filter_upwards whenever you want to combine several
"eventually" facts and finish pointwise -- a very common pattern
in analysis and measure theory (where ∀ᶠ x ∂μ is "almost
everywhere").
It works for any filter, including atTop, nhds x, cofinite,
and μ.ae.
open Filter
example {α : Type*} {F : Filter α} {p q : α → Prop}
    (hp : ∀ᶠ x in F, p x) (hq : ∀ᶠ x in F, q x) :
    ∀ᶠ x in F, p x ∧ q x := by
  filter_upwards [hp, hq] with x hp hq
  exact ⟨hp, hq⟩
-- If p holds eventually along F and ∀ x, p x → q x,
-- then q holds eventually along F.
example {α : Type*} {F : Filter α} {p q : α → Prop}
    (hp : ∀ᶠ x in F, p x) (h : ∀ x, p x → q x) :
    ∀ᶠ x in F, q x := by
  filter_upwards [hp] with x hp
  exact h x hp
Summary: fun_prop is a general-purpose tactic for closing
function-property goals such as Continuous f, Measurable f,
Differentiable ℝ f, ContDiff ℝ n f, Integrable f, and others.
It works by recursively applying composition, linearity, and other
structural lemmas tagged @[fun_prop] in Mathlib.
Examples:
Proof state
Tactic
New proof state
⊢ Continuous (fun x : ℝ => x^2 + Real.sin x)
fun_prop
no goals 🎉
Remarks:
fun_prop subsumes and replaces the older continuity and
measurability tactics for most purposes.
For arguments it does not know about (e.g. the continuity of a
user-defined function), supply the fact as a hypothesis or add
a @[fun_prop] lemma.
When fun_prop fails, it will often report a helpful subgoal
indicating which atomic fact is missing.
Summary: gcongr ("generalized congruence") reduces a
monotonicity goal by peeling off a common outer context. It turns
a goal like f a + b ≤ f a' + b into a ≤ a' whenever f is
known to be monotone in its argument. It is the standard tool for
proving routine inequalities of compound expressions.
Examples:
Proof state
Tactic
New proof state
a a' b : ℝ
⊢ a + b ≤ a' + b
gcongr
a a' b : ℝ
⊢ a ≤ a'
Remarks:
gcongr relies on lemmas tagged @[gcongr] in Mathlib. Common
operations (+, *, ^, div, sum, ...) are already tagged,
so the tactic works out of the box in most situations.
After gcongr, the remaining goals are usually closed by
assumption, linarith, or positivity.
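The table example as a complete proof, as a sketch assuming gcongr's discharger finds the hypothesis h by assumption (otherwise the remaining goal a ≤ a' is closed with exact h):

```lean
example (a a' b : ℝ) (h : a ≤ a') : a + b ≤ a' + b := by
  gcongr   -- peels off `+ b`, discharges `a ≤ a'` from h
```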
Summary: nlinarith is the nonlinear extension of linarith.
Like linarith, it proves goals of the form a ≤ b, a < b,
a = b, or False from linear hypotheses over ordered fields.
Unlike linarith, it additionally multiplies pairs of hypotheses
to produce quadratic witnesses, so it can close many goals
involving products and squares.
Examples:
Proof state
Tactic
New proof state
x : ℝ ⊢ 0 ≤ x ^ 2 + 1
nlinarith [sq_nonneg x]
(no goals)
Remarks:
nlinarith is strictly more powerful than linarith but also
slower; prefer linarith when possible.
You can supply extra lemmas as hints:
nlinarith [sq_nonneg x, mul_self_nonneg y]. Hints of the form
0 ≤ ... are especially useful.
For purely polynomial identities (not inequalities), use ring
or polyrith instead.
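The table example written out; sq_nonneg x : 0 ≤ x ^ 2 is supplied as a hint:

```lean
example (x : ℝ) : 0 ≤ x ^ 2 + 1 := by
  nlinarith [sq_nonneg x]
```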
Summary: norm_cast moves coercions outward, out of compound
expressions and towards the root of the term. This is the inverse
of push_cast, which moves coercions inward. Together they are
the standard tools for manipulating goals that mix integer, natural,
rational, and real coercions.
Examples:
Proof state
Tactic
New proof state
n : ℕ ⊢ (n : ℝ) + (1 : ℝ) = ((n + 1 : ℕ) : ℝ)
norm_cast
(no goals)
Remarks:
Use norm_cast when casts sit on atoms (e.g. (n : ℝ) + 1) and
you want to pull them outside. Use push_cast for the opposite
direction.
The variant exact_mod_cast h applies norm_cast automatically
before trying to close the goal with h.
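The table example as a complete proof:

```lean
example (n : ℕ) : (n : ℝ) + 1 = ((n + 1 : ℕ) : ℝ) := by
  norm_cast
```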
Summary: omega is a decision procedure for linear arithmetic
over the integers ℤ and the natural numbers ℕ. It closes goals
involving +, -, * (by a constant), /, %, ≤, <, =,
≠, ∧, ∨, and ¬, provided the goal reduces to a Presburger
arithmetic statement. It is one of the most useful tactics for
closing routine numerical side conditions.
Examples:
Proof state
Tactic
New proof state
a b : ℕ h : a ≤ b
⊢ a ≤ b + 1
omega
(no goals)
n : ℕ h : n ≠ 0
⊢ 0 < n
omega
(no goals)
Remarks:
omega is complete for linear arithmetic: if the goal is a true
statement in Presburger arithmetic, omega will close it.
It does not handle nonlinear products (use nlinarith) or
real-valued arithmetic (use linarith).
It works equally well on ℤ and ℕ, automatically handling the
fact that natural numbers are nonnegative.
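A sketch of both points: linear goals over ℕ, including truncated subtraction, are closed directly:

```lean
example (a b : ℕ) (h : a ≤ b) : a ≤ b + 1 := by omega

-- truncated natural-number subtraction is handled correctly
example (n : ℕ) (h : 3 ≤ n) : 1 ≤ n - 2 := by omega
```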
Summary: positivity proves goals of the form 0 < e, 0 ≤ e,
or e ≠ 0, where e is an arithmetic expression built from
literals and variables by operations that preserve positivity
(addition of nonnegatives, multiplication, powers, abs, exp,
square roots, ...).
Examples:
Proof state
Tactic
New proof state
x : ℝ
⊢ 0 < x ^ 2 + 1
positivity
(no goals)
Remarks:
positivity is extensible: any @[positivity]-tagged lemma in
Mathlib becomes available.
When a term depends on a variable whose sign is unknown but you
have a hypothesis 0 < y, positivity will pick it up
automatically.
Summary: push_cast pushes coercions (like Nat.cast,
Int.cast, Rat.cast) towards the leaves of an expression. It
turns ((a + b : ℕ) : ℝ) into (a : ℝ) + (b : ℝ), and similarly
for *, ^, etc. This is the inverse direction of norm_cast,
which moves casts outward.
Examples:
Proof state
Tactic
New proof state
n : ℕ ⊢ ((n + 1 : ℕ) : ℝ) = (n : ℝ) + 1
push_cast
n : ℕ ⊢ (n : ℝ) + 1 = (n : ℝ) + 1
Remarks:
Use push_cast when the cast sits outside a compound
expression and you want to distribute it; use norm_cast when
the casts sit on atoms and you want to pull them out.
The idiomatic combination for closing casting goals is
push_cast; ring (or push_cast; linarith).
Summary: ring_nf is the normal-form variant of ring. While
ring closes a goal that is an identity in a commutative
(semi)ring, ring_nf instead rewrites both sides of the goal into a
canonical polynomial form. It is useful when ring does not close
the goal because it is not yet an equation (e.g. there are side
hypotheses) or when you want to simplify a hypothesis.
Examples:
Proof state
Tactic
New proof state
a b : ℝ
⊢ ...
ring_nf at h
a b : ℝ
⊢ ...
Remarks:
ring_nf never fails; it just rewrites.
Use ring to close a pure ring identity; use ring_nf to
normalize one side before continuing with another tactic.
You can target a hypothesis with ring_nf at h or normalize
everywhere with ring_nf at *.
Summary: By using have we introduce a new goal, which we have to prove first. Afterwards, it is available as a hypothesis in all further goals. This is identical to first proving a lemma h with the statement after have h : and then reusing it at the appropriate place in the proof (for example with apply or rw).
Examples:
Proof state
Tactic
New proof state
⊢ R
have h : P ↔ Q
⊢ P ↔ Q h : P ↔ Q ⊢ R
⊢ P
have h1 : ∃ (m : ℕ), f 27 m, ... cases' h1 with m hm
m : ℕ hm : f 27 m ⊢ P
Remarks:
Assume you want to use have. You could as well formulate a separate lemma and use it afterwards. It is not always clear which is better.
If the proof of the statement is short and is only used once in your proof, you might want to consider replacing its proof in the place where it is needed.
Suppose we have two goals (call them goal 1 and goal 2), and we need the statement of goal 1 in the proof of goal 2. We can first introduce it with have h : S, where S is the statement of goal 1. Then S is proved (for example with exact), and h is available as a hypothesis in the proof of goal 2.
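A minimal sketch of the pattern:

```lean
example (P Q R : Prop) (hpq : P → Q) (hqr : Q → R) (hP : P) : R := by
  have hQ : Q := hpq hP   -- new hypothesis, proved inline
  exact hqr hQ            -- hQ is now available
```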
Inductive types allow the possibility of proving statements about them by means of induction. This includes, for example, the usual case of complete induction over natural numbers.
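The classic pattern, as a minimal sketch (Nat.add_succ is the core lemma n + Nat.succ m = Nat.succ (n + m)):

```lean
example (n : ℕ) : 0 + n = n := by
  induction n with
  | zero => rfl
  | succ m ih => rw [Nat.add_succ, ih]
```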
If the goal is of the form ⊢ P → Q or ∀ (n : ℕ), P n, you can proceed with intro hP or intro n. Several intro steps can be combined into one: intro h1 h2 h3 is identical to intro h1; intro h2; intro h3.
The application of left, is identical to apply h for h : P → P ∨ Q. So if you have a goal of the form ⊢ P ∨ Q, left, causes you to have only the goal ⊢ P. After all, it is sufficient to show P to close the goal.
Examples:
Proof state
Tactic
New proof state
⊢ P ∨ Q
left
⊢ P
⊢ ℕ
left
no goals🎉
The second example requires a little explanation. First of all, you have to understand that the goal ⊢ ℕ asks for a term of type ℕ, i.e. a natural number. Next, you have to know how ℕ is implemented in Lean: it is an inductive type with the two constructors zero and succ.
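Concretely, the definition in Lean core is:

```lean
inductive Nat where
  | zero : Nat
  | succ (n : Nat) : Nat
```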
This means: The type ℕ is defined by the fact that zero is a term of this type, and that there is a function succ : ℕ → ℕ. Thus, in the second example, the input left, is closed because by definition zero : ℕ holds, so in particular there is a term of type ℕ.
Summary: linarith can prove equations and inequalities with the help of hypotheses. It is important that the hypotheses used are themselves linear equations and inequalities. The tactic works mainly with the transitivity of < and ≤ together with linear arithmetic rules.
This tactic is related to rw. The difference is that you can specify which occurrence of the term is rewritten. The exact syntax is nth_rewrite k [h], where k is the number (starting with 1) of the occurrence to be rewritten and h is the rewriting hypothesis. As with rw, h must be of the form h : x = y or h : A ↔ B.
Examples:
Proof state
Tactic
New proof state
n : ℕ ⊢ 0 + n = 0 + 0 + n
nth_rewrite 1 [zero_add]
n : ℕ ⊢ n = 0 + 0 + n
n : ℕ ⊢ 0 + n = 0 + 0 + n
nth_rewrite 2 [zero_add]
n : ℕ ⊢ 0 + n = n
n : ℕ ⊢ 0 + n = 0 + 0 + n
nth_rewrite 3 [zero_add]
n : ℕ ⊢ 0 + n = 0 + n
In the above example, Lean sees three terms of the form 0 + ?_: occurrence 1 is the left-hand side 0 + n; occurrences 2 and 3 lie on the right-hand side (because of the implicit parenthesization 0 + 0 + n = (0 + 0) + n). Occurrence 2 is the outer term (0 + 0) + n: its left summand 0 + 0 is by definition equal to 0, so zero_add applies and the right-hand side is converted to n. Occurrence 3 is the inner term 0 + 0, which is of the desired form and is converted to 0, leaving 0 + n.
Summary: In many steps of a proof, a statement must be negated. To push the negation through the corresponding quantifiers and connectives, producing a more usable form, the tactic push_neg is available. For example, it rewrites ¬ ∀ x, P x into ∃ x, ¬ P x and ¬ (a ≤ b) into b < a.
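A minimal sketch on a hypothesis:

```lean
example (f : ℕ → ℕ) (h : ¬ ∀ n, f n = 0) : ∃ n, f n ≠ 0 := by
  push_neg at h   -- ¬∀ becomes ∃¬
  exact h
```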
Summary: rcases is a more flexible version of cases'. It allows patterns like ⟨hP, hQ⟩ (or (hP | hQ)) to split a hypothesis into its components or cases. As the examples show, it is also possible to nest ⟨·, ·⟩ and (· | ·).
Examples:
Proof state
Tactic
New proof state
h : P ∧ Q ⊢ R
rcases h with ⟨hP, hQ⟩
hP : P hQ : Q ⊢ R
h : P ∨ Q ⊢ R
rcases h with (hP | hQ)
hP : P ⊢ R hQ : Q ⊢ R
h : ∃ (m : ℕ) (hg : 0 ≤ m), m < n ⊢ P
rcases h with ⟨m, h1, h2⟩
n m : ℕ h1 : 0 ≤ m h2 : m < n ⊢ P
Remarks:
The last example shows how rcases can directly unpack an ∃ quantifier in a hypothesis whose body contains more than one component (here: 0 ≤ m and m < n).
An rcases pattern can be one of the following, in a nested combination:
A name like foo
The special keyword rfl (for pattern matching on equality using subst)
A hyphen -, which clears the active hypothesis and any dependents.
A type ascription like pat:ty (parentheses are optional)
A tuple constructor like ⟨p1,p2,p3⟩
An alternation / variant pattern p1|p2|p3
Parentheses can be used for grouping; alternation is higher precedence than type ascription, so
p1|p2|p3:ty means (p1|p2|p3):ty.
N-ary alternations are treated as a group, so p1|p2|p3 is not the same as p1|(p2|p3),
and similarly for tuples. However, note that an n-ary alternation or tuple can match an n-ary
conjunction or disjunction, because if the number of patterns exceeds the number of constructors in
the type being destructed, the extra patterns will match on the last element, meaning that
p1|p2|p3 will act like p1|(p2|p3) when matching a1∨a2∨a3. If matching against a
type with 3 constructors, p1|(p2|p3) will act like p1|(p2|p3)|_ instead.
Summary: The refine tactic is like exact with holes. More precisely: if the goal can be closed by a combination of hypotheses, you can write that combination with refine and put a placeholder ?_ for each missing part. Each ?_ then comes back as a new goal (those solvable by definitional equality are solved immediately).
revert is rarely needed; actually only when you want to apply an already proven result exactly and first want to establish the correct form of the goal.
This tactic applies to a goal whose target has the form x~x, where ~ is a reflexive
relation, that is, a relation which has a reflexive lemma tagged with the attribute [refl].
The second example works even though ℕ is not a ring (but only a half-ring). It would also work with n : ℝ (since ℝ has more calculation rules than ℕ).
Summary: The rintro tactic combines intro with rcases: it takes a list of patterns pats and
an optional type ascription and introduces the patterns, resulting in zero or more goals.
Summary: rw stands for rewrite. For rw [h] to work, h must be an expression of the form h : x = y or h : A ↔ B. In this case, rw [h] replaces every term that is syntactically identical to x (or A) by y (or B). This also works if h is an already proven result (i.e. a lemma or theorem). With rw [← h], h is applied from right to left. (In the above example, y is replaced by x and B by A.)
Examples
Proof state
Tactic
New proof state
h : P ↔ Q ⊢ P
rw [h]
h : P ↔ Q ⊢ Q
h : P ↔ Q ⊢ Q
rw [← h]
h : P ↔ Q ⊢ P
h : P ↔ Q hP : P
rw [h] at hP
h : P ↔ Q hP : Q
h : P ↔ Q hQ : Q
rw [← h] at hQ
h : P ↔ Q hQ : P
k m: ℕ ⊢ k + m + 0 = m + k + 0
rw [add_comm]
k m : ℕ ⊢ 0 + (k + m) = m + k + 0
k m : ℕ ⊢ k + m + 0 = m + k + 0
rw [add_comm k m]
⊢ m + k + 0 = m + k + 0
k m : ℕ ⊢ k + m + 0 = m + k + 0
rw [← add_comm k m]
⊢ k + m + 0 = k + m + 0
k m : ℕ ⊢ k + m + 0 = m + k + 0
rw [add_zero, add_zero]
k m : ℕ ⊢ k + m = m + k
For the last four examples, you first need to know that add_comm and add_zero are the statements
theorem add_comm : ∀ {G : Type*} [AddCommSemigroup G] (a b : G),
a + b = b + a
theorem add_zero : ∀ {M : Type*} [AddZeroClass M] (a : M), a + 0 = a
In the first of the four examples, rw applies to the first occurrence of a term of the form a + b. Due to the internal bracketing, (k + m) + 0 is on the left-hand side, so the rw results in 0 + (k + m). If you want to apply commutativity inside the term k + m, you need the second (or third) example, where rw [add_comm k m] leads to the desired progress. In the last example, the two + 0 terms are eliminated by rw [add_zero, add_zero].
Notes
rw is used very often in practice to apply statements from Mathlib (at least if they are of the form = or ↔).
If you want to combine several rw commands, you can do so in square brackets, for example rw [h1, h2] or rw [h1, ←h2, h3].
rw automatically tries rfl after rewriting. In the two examples with add_comm k m this means the new proof state is actually not as specified, but no goals, since both sides of the resulting equation are syntactically equal.
If you do not want to perform a rw in sequence (as in the double elimination of the +0 above), you can use nth_rewrite to rewrite the second occurrence of a term.
The rw tactic does not work under a binder, such as ∀, ∃, or ∑. In this case, simp_rw will hopefully help.
Summary: set x := e introduces a local abbreviation x for the
expression e and replaces every occurrence of e in the goal
and the hypotheses with x. With the clause with hs, the defining
equation hs : x = e is added to the context. The substitution is
definitional: x and e are interchangeable, but the proof state
becomes much more readable.
Examples:
Proof state
Tactic
New proof state
a b : ℕ ⊢ a + b + (a + b) = 2 * (a + b)
set s := a + b with hs
a b : ℕ s : ℕ := a + b hs : s = a + b ⊢ s + s = 2 * s
Remarks:
Without with hs, no equation hypothesis is introduced, and x
is still definitionally equal to e.
set is very useful when you want to rw with a complicated
subterm, or when reading the goal becomes difficult.
The dual let x := e introduces a let-binding into the term but
does not fold the expression in the goal.
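The table example written out; after set, ring treats s as an atom:

```lean
example (a b : ℕ) : a + b + (a + b) = 2 * (a + b) := by
  set s := a + b with hs
  -- goal: s + s = 2 * s
  ring
```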
Summary: show e changes the syntactic form of the current goal
to e, provided e is definitionally equal to the goal. It is
the tactic counterpart of the term-mode (h : T) ascription.
Examples:
Proof state
Tactic
New proof state
⊢ 1 + 1 = 2
show 2 = 2
⊢ 2 = 2
⊢ P ∨ Q
show Q ∨ P
(fails: not def-equal)
Remarks:
show does not prove anything; it only reshapes the goal. It is
useful before rw, apply, or exact when those tactics expect
a specific syntactic shape.
show e fails if e is not definitionally equal to the goal.
For reshaping a goal into an equivalent but not definitionally
equal statement, use rw or suffices instead; change, like
show, works only up to definitional equality.
Summary: In Mathlib there are many lemmas with = or ↔ statements that can be applied with rw and are marked with @[simp]. These marked lemmas have the property that the right-hand side is a simplified form of the left-hand side. With simp, Lean looks for matching lemmas and tries to apply them.
Examples
Proof state
Tactic
New proof state
⊢ n + 0 = n
simp
no goals 🎉
h : n + 0 = m ⊢ P
simp at h
h : n = m ⊢ P
Remarks:
If you want to know which lemmas were used, try simp?. This provides some clues.
Proof state
Tactic
New proof state
⊢ n + 0 = n
simp?
no goals 🎉 Try this: simp only [add_zero, eq_self_iff_true]
Summary: simp_rw [h₁, h₂, ...] rewrites with each lemma in turn,
but -- unlike rw -- can rewrite under binders (∀, ∃, fun,
∑, ...). Unlike simp, it does not apply any extra simp lemmas;
it rewrites with exactly the lemmas you supply.
Examples:
Proof state
Tactic
New proof state
f : ℕ → ℕ h : ∀ n, f n = 1 ⊢ ∀ n, 0 + f n = f n
simp_rw [h]
f : ℕ → ℕ h : ∀ n, f n = 1 ⊢ ∀ n, 0 + 1 = 1
Remarks:
rw refuses to rewrite under binders, giving a "motive is not
type correct" error in those situations. simp_rw is the right
tool there.
If the lemmas you rewrite with are simp lemmas and you are happy
for simp to apply other simp lemmas too, just use simp only.
Summary: In a hypothesis h : ∀ n, ..., ... applies to all n, but for the proof of the goal, you may only need a specific n. If you specify specialize h followed by the value for which h is needed, the hypothesis changes accordingly.
Just as with use, you have to be careful that the goal remains provable.
If you want to use the hypothesis h at two different values, have h' := h first duplicates the hypothesis, so that you can then specialize h and h' separately.
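A minimal sketch:

```lean
example (f : ℕ → ℕ) (h : ∀ n, f n = n + 1) : f 3 = 4 := by
  specialize h 3   -- h : f 3 = 3 + 1
  exact h
```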
Summary: symm swaps the two sides of a symmetric relation.
For equality, it turns a goal a = b into b = a. It also works
for ↔, ≃, Dvd.dvd, and any other relation tagged @[symm]
in Mathlib.
Examples:
Proof state
Tactic
New proof state
a b : ℕ ⊢ a = b
symm
a b : ℕ ⊢ b = a
h : a = b ⊢ ...
symm at h
h : b = a ⊢ ...
Remarks:
The term-mode analogue is Eq.symm h (or .symm).
symm is often used just before exact h when you have h : b = a
but a goal a = b.
Summary: tauto solves all goals that can be solved with a truth table.
Examples
Proof state
Tactic
New proof state
⊢ P ∧ Q → P
tauto or tauto!
No goals
⊢ ((P → Q) → P) → P
tauto!
No goals
The truth tables for ¬P, P ∧ Q and P ∨ Q are as follows; if more terms of type Prop are involved, there are more lines.
P      ¬P
true   false
false  true

P      Q      P ∧ Q
true   true   true
false  true   false
true   false  false
false  false  false

P      Q      P ∨ Q
true   true   true
false  true   true
true   false  true
false  false  false
Notes
The difference between tauto and tauto! is that in the latter tactic, the rule of the excluded middle is allowed. The second example can therefore only be solved with tauto!, but not with tauto.
Summary: trivial tries a short list of simple tactics to close
the goal: rfl, assumption, True.intro, decide, and a few
others. It is a gentle hammer for goals that "should be obvious".
Examples:
Proof state
Tactic
New proof state
⊢ True
trivial
(no goals)
h : P ⊢ P
trivial
(no goals)
Remarks:
Unlike triv, trivial also closes goals that are true by
decidability (like 2 + 2 = 4).
When trivial does not close a goal, it prints no specific
reason -- if you want a more informative automated tactic, try
exact? or simp.
Summary: The use tactic is used for goals that begin with ∃. Here, parameters are used to indicate which object quantified by ∃ is to be reused in the proof.
Examples
Proof state
Tactic
New proof state
⊢ ∃ (k : ℕ), k * k = 16
use 4
⊢ 4 * 4 = 16
⊢ ∃ (k l : ℕ), k * l = 16
use 8, 2
⊢ 8 * 2 = 16
In the first example, Lean knows that 4 * 4 = 16 holds by rfl, so we need not type the remaining proof.
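The first table example in full, as a sketch assuming use discharges the trivial arithmetic goal automatically (as recent Mathlib versions do):

```lean
example : ∃ k : ℕ, k * k = 16 := by
  use 4   -- the remaining goal 4 * 4 = 16 is closed by rfl
```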