Lean formalization: Finding Colorings in One-Sided Expanders
I formalized in Lean the proofs of the results in Section 4 of our paper Finding Colorings in One-Sided Expanders.
The Lean code was generated using ChatGPT Pro 5.1 and GPT-5.1-Codex-Max. It took ~3 days to arrive at the final result.
Code overview
The main file is Colorexpanders.lean. Theorem 4.1 corresponds to theorem large_bottom_rank_implies_large_top_rank and Corollary 4.2 corresponds to theorem small_top_rank_implies_small_bottom_rank.
The statement of Theorem 4.1 is formalized as follows:
theorem large_bottom_rank_implies_large_top_rank
    [Nonempty n] (A : Matrix n n ℝ)
    (hHerm : A.IsHermitian)
    (hNonneg : ∀ i j, 0 ≤ A i j)
    (hOp : ‖A‖ ≤ (1 : ℝ))
    {μ : ℝ} (hμ : 0 ≤ μ)
    {t : ℕ} (hBottom : bottomThresholdRank A hHerm μ ≥ t)
    {σ : ℝ} (hσ₀ : 0 < σ) (hσ₁ : σ < 1) :
    (topThresholdRank A hHerm ((μ^(2:ℕ) - σ) / (1 - σ)) : ℝ)
      ≥ σ^2 * (t : ℝ)
In words, this says: given a real Hermitian (i.e., symmetric) matrix A with non-negative entries and operator norm at most 1, if A has at least t eigenvalues less than or equal to -μ, then for every 0 < σ < 1 it has at least σ^2 * t eigenvalues greater than or equal to (μ^2 - σ) / (1 - σ).
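For concreteness, here is a hypothetical numerical instantiation (my own choice of parameters, not one appearing in the paper): take μ = 0.9 and σ = 1/2. The eigenvalue threshold in the conclusion then evaluates to

```latex
\frac{\mu^2 - \sigma}{1 - \sigma}
  = \frac{0.9^2 - 0.5}{1 - 0.5}
  = \frac{0.31}{0.5}
  = 0.62,
```

so the theorem guarantees that at least σ^2 * t = t/4 eigenvalues of A are greater than or equal to 0.62.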
The statement uses the custom definitions topThresholdRank and bottomThresholdRank, which are:
noncomputable def topThresholdRank
    (A : Matrix n n ℝ) (hA : A.IsHermitian) (τ : ℝ) : ℕ :=
  Fintype.card { i : n // τ ≤ hA.eigenvalues i }

noncomputable def bottomThresholdRank
    (A : Matrix n n ℝ) (hA : A.IsHermitian) (μ : ℝ) : ℕ :=
  Fintype.card { i : n // hA.eigenvalues i ≤ -μ }
That is, the expression topThresholdRank A hHerm τ is the number of eigenvalues of A (counted with multiplicity, via their indices i : n) that are at least τ, while bottomThresholdRank A hHerm μ is the number of eigenvalues that are at most -μ. Note the sign convention in the latter: the parameter μ is negated, so eigenvalues are compared against -μ rather than μ.
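In standard mathematical notation (writing λ_i(A) for the eigenvalues of A, a notation not used in the Lean code itself), the two definitions read:

```latex
\operatorname{topThresholdRank}_\tau(A) = \#\{\, i : \lambda_i(A) \ge \tau \,\},
\qquad
\operatorname{bottomThresholdRank}_\mu(A) = \#\{\, i : \lambda_i(A) \le -\mu \,\}.
```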