The omega tactic in Lean
omega is a Lean tactic that solves linear arithmetic problems over natural numbers and integers. It’s one of the first tactics I used, and learning how it works helped me understand how to better express the boundaries of the problems I want to solve (in my case, most of my constraints had to do with the boundedness of bitvectors’ values).
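For instance, in the bitvector setting just mentioned, omega can exploit the fact that an n-bit value is bounded by 2^n. A minimal sketch, assuming a recent Lean toolchain in which omega understands `BitVec.toNat`:

```lean
-- omega knows the implicit bound x.toNat < 2 ^ 8 carried by any BitVec 8
example (x : BitVec 8) : x.toNat < 256 := by omega

-- and it can combine that bound with ordinary hypotheses
example (x : BitVec 8) (h : x.toNat > 100) : 255 - x.toNat < 155 := by omega
```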
Some examples:
- Given the following hypotheses on `a` and `b`, it’s impossible for their sum to be greater than `20`, so omega fails and reports a possible counterexample:

  ```lean
  example {a b : Nat} (ha : a < 10) (hb : b < 10) : a + b > 20 := by omega
  /-
  omega could not prove the goal:
  a possible counterexample may satisfy the constraints
    0 ≤ d ≤ 9
    0 ≤ c ≤ 9
    c + d ≤ 20
  where
    c := ↑a
    d := ↑b
  -/
  ```

- For any values of `a` and `b` satisfying these hypotheses, their sum will be greater than `20`:

  ```lean
  example {a b : Nat} (ha : a > 10) (hb : b > 10) : a + b > 20 := by omega -- ✅
  ```
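As a complement, here is a sketch of the goal those first hypotheses *do* imply: with `a ≤ 9` and `b ≤ 9` the sum can never reach `20`, and omega proves it directly:

```lean
-- the failing goal above, flipped into the bound the hypotheses actually give us
example {a b : Nat} (ha : a < 10) (hb : b < 10) : a + b < 20 := by omega -- ✅
```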
Now, if we consider non-linear functions, omega won’t work. For example:
- Even though we know that, mathematically, `a * b` with `a > 10` and `b > 10` will necessarily be greater than `100`, omega can’t reason about non-linear operations such as non-constant multiplication:

  ```lean
  example {a b : Nat} (ha : a > 10) (hb : b > 10) : a * b + b > 20 := by omega
  /-
  omega could not prove the goal:
  a possible counterexample may satisfy the constraints
    e ≥ 0
    d ≥ 11
    d + e ≤ 20
    c ≥ 11
  where
    c := ↑a
    d := ↑b
    e := ↑a * ↑b
  -/
  ```

- In these cases, omega will abstract the operands it can’t recognize, treating each one as a variable whose constraints are unknown, and reason about whatever else remains in the expression. The example below is true regardless of whatever value `a * b` may take (see the sketch after this list for how to supply such a fact by hand)!

  ```lean
  example {a b : Nat} (ha : a > 10) (hb : b > 10) : a * b + b + a > 20 := by omega -- ✅
  ```
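When the non-linear fact really is needed, a common workaround is to prove it separately and hand it to omega as a hypothesis; from there, everything left is linear. A minimal sketch using `Nat.mul_le_mul` from the core library:

```lean
-- Supply the non-linear fact ourselves, then let omega finish.
example {a b : Nat} (ha : a > 10) (hb : b > 10) : a * b + b > 20 := by
  -- Nat.mul_le_mul : a ≤ b → c ≤ d → a * c ≤ b * d
  have h : 11 * 11 ≤ a * b := Nat.mul_le_mul (by omega) (by omega)
  -- 11 * 11 is constant multiplication, which omega evaluates to 121,
  -- so `h` now reads `121 ≤ a * b` with `a * b` treated as a single atom
  omega
```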