
Bivariate Polynomials #103

Open
dhsorens wants to merge 16 commits into master from dhsorens/bivariate

Conversation

@dhsorens
Collaborator

As described on the Roadmap, for Phase 1 we want to have a basic, computable implementation for Bivariate polynomials. This pull request adds a Bivariate directory and a new type CBivariate. It defines bivariate polynomials as nested computable Univariate polynomials: CPolynomial (CPolynomial R). It includes the basic type definition, basic definitions for operations, and a basic structure for the equivalence with Mathlib's R[X][Y].
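For concreteness, the nested representation can be sketched as follows (instance arguments taken from the declaration summary later in this thread; a sketch, not necessarily the exact code in the diff):

```lean
/-- A computable bivariate polynomial in `X` and `Y`: a univariate polynomial
in the outer variable `Y` whose coefficients are univariate polynomials in the
inner variable `X`. -/
def CBivariate (R : Type*) [BEq R] [LawfulBEq R] [Nontrivial R] [Semiring R] :=
  CPolynomial (CPolynomial R)
```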

The rough outline of continued work on this, mimicking what's on Univariate:

  • Define computable operations and API on CBivariate
  • Prove ring equivalence with Mathlib's R[X][Y]
  • Prove operations and API correct with respect to the Mathlib spec, via the ring equivalence

@dhsorens
Collaborator Author

dhsorens commented Feb 18, 2026

@katyhr, would love your feedback and comments on this!

Main questions:

  1. are we happy with CPolynomial (CPolynomial R) as the formalization of the Bivariate type?
  2. I chose the namespace to be CompPoly.CBivariate to mimic the pattern of CompPoly.CPolynomial and CompPoly.CMvPolynomial - but I wonder if we should do CompPoly.CPolynomial.Bivariate to match Arklib, and since it's a special case of CPolynomial?
  3. the type signature of the toPoly isomorphism is currently into R[X][Y] - would you say this is right?

@github-actions

github-actions bot commented Feb 18, 2026

🤖 Gemini PR Summary

This Pull Request introduces the foundational implementation for computable bivariate polynomials (CBivariate), following the project roadmap. The implementation represents bivariate polynomials as nested univariate polynomials (CPolynomial (CPolynomial R)), facilitating a computable API while maintaining compatibility with Mathlib’s R[X][Y].

Features

  • New Bivariate Type Definition: Introduced CBivariate in CompPoly/Bivariate/Basic.lean, defining it as an iterated univariate polynomial structure.
  • Computable Operations: Implemented core functionality for bivariate polynomials, including:
    • Evaluation logic.
    • Degree calculations.
    • Coefficient access.
  • Mathlib Integration: Established the initial structure for ring equivalence between the custom CBivariate representation and Mathlib’s R[X][Y] in ToPoly.lean.
  • Global Imports: Updated the main CompPoly.lean entry point to include the new bivariate module.

Refactoring & Infrastructure

  • Univariate Enhancements: Added necessary typeclass instances and fundamental theorems to CompPoly/Univariate/Basic.lean regarding evaluation and polynomial support.
  • Verification Theorems: Added a proof in CompPoly/Univariate/ToPoly.lean ensuring that the toPoly mapping correctly identifies the zero polynomial, a prerequisite for robust equivalence proofs.

Documentation

  • Bivariate Documentation: Added a new README.md within the CompPoly/Bivariate directory to document the CBivariate type and its relationship with Mathlib.

Next Steps:
Following this merge, subsequent work will focus on expanding the computable API for CBivariate and completing the formal proofs for ring equivalence and operational correctness against the Mathlib specification.


Analysis of Changes

Metric Count
📝 Files Changed 6
Lines Added 851
Lines Removed 0

Lean Declarations

❌ **Added:** 55 declaration(s)
  • theorem swap_toPoly_coeff {R : Type*} [BEq R] [LawfulBEq R] [Nontrivial R] [Ring R] in CompPoly/Bivariate/ToPoly.lean
  • def Y [DecidableEq R] : CBivariate R in CompPoly/Bivariate/Basic.lean
  • theorem CC_toPoly {R : Type*} [BEq R] [LawfulBEq R] [Nontrivial R] [Ring R] (r : R) : in CompPoly/Bivariate/ToPoly.lean
  • theorem X_toPoly {R : Type*} [BEq R] [LawfulBEq R] [Nontrivial R] [Ring R] : in CompPoly/Bivariate/ToPoly.lean
  • theorem natDegreeX_toPoly {R : Type*} [BEq R] [LawfulBEq R] [Nontrivial R] [Ring R] in CompPoly/Bivariate/ToPoly.lean
  • theorem toPoly_ofPoly {R : Type*} [BEq R] [LawfulBEq R] [Nontrivial R] [Ring R] [DecidableEq R] in CompPoly/Bivariate/ToPoly.lean
  • lemma toPoly_monomial {R : Type*} [BEq R] [LawfulBEq R] [Nontrivial R] [Ring R] [DecidableEq R] in CompPoly/Bivariate/ToPoly.lean
  • theorem monomialXY_toPoly {R : Type*} [BEq R] [LawfulBEq R] [Nontrivial R] [Ring R] in CompPoly/Bivariate/ToPoly.lean
  • theorem toPoly_coeff {R : Type*} [BEq R] [LawfulBEq R] [Nontrivial R] [Ring R] in CompPoly/Bivariate/ToPoly.lean
  • lemma ofPoly_zero {R : Type*} [BEq R] [LawfulBEq R] [Nontrivial R] [Ring R] [DecidableEq R] : in CompPoly/Bivariate/ToPoly.lean
  • def CBivariate (R : Type*) [BEq R] [LawfulBEq R] [Nontrivial R] [Semiring R] in CompPoly/Bivariate/Basic.lean
  • def natDegreeY (f : CBivariate R) : ℕ in CompPoly/Bivariate/Basic.lean
  • theorem supportX_toPoly {R : Type*} [BEq R] [LawfulBEq R] [Nontrivial R] [Ring R] in CompPoly/Bivariate/ToPoly.lean
  • theorem evalX_toPoly_coeff {R : Type*} [BEq R] [LawfulBEq R] [Nontrivial R] [Ring R] in CompPoly/Bivariate/ToPoly.lean
  • def ofPoly {R : Type*} [BEq R] [LawfulBEq R] [Nontrivial R] [Ring R] in CompPoly/Bivariate/ToPoly.lean
  • theorem ofPoly_toPoly {R : Type*} [BEq R] [LawfulBEq R] [Nontrivial R] [Ring R] [DecidableEq R] in CompPoly/Bivariate/ToPoly.lean
  • theorem leadingCoeffX_toPoly {R : Type*} [BEq R] [LawfulBEq R] [Nontrivial R] [Ring R] in CompPoly/Bivariate/ToPoly.lean
  • def swap [DecidableEq R] (f : CBivariate R) : CBivariate R in CompPoly/Bivariate/Basic.lean
  • lemma toPoly_zero {R : Type*} [BEq R] [LawfulBEq R] [Nontrivial R] [Ring R] : in CompPoly/Bivariate/ToPoly.lean
  • theorem coeff_toPoly_Y {R : Type*} [BEq R] [LawfulBEq R] [Nontrivial R] [Ring R] in CompPoly/Bivariate/ToPoly.lean
  • theorem support_toPoly_outer {R : Type*} [BEq R] [LawfulBEq R] [Nontrivial R] [Ring R] in CompPoly/Bivariate/ToPoly.lean
  • theorem totalDegree_toPoly {R : Type*} [BEq R] [LawfulBEq R] [Nontrivial R] [Ring R] in CompPoly/Bivariate/ToPoly.lean
  • theorem natDegreeY_toPoly {R : Type*} [BEq R] [LawfulBEq R] [Nontrivial R] [Ring R] in CompPoly/Bivariate/ToPoly.lean
  • instance [Nontrivial R] : Nontrivial (CPolynomial R) where in CompPoly/Univariate/Basic.lean
  • theorem eval_eq_sum_support (p : CPolynomial R) (x : R) : in CompPoly/Univariate/Basic.lean
  • theorem natDegree_eq_support_sup (p : CPolynomial R) : in CompPoly/Univariate/Basic.lean
  • lemma ofPoly_monomial {R : Type*} [BEq R] [LawfulBEq R] [Nontrivial R] [Ring R] [DecidableEq R] in CompPoly/Bivariate/ToPoly.lean
  • def X : CBivariate R in CompPoly/Bivariate/Basic.lean
  • def evalX [DecidableEq R] (a : R) (f : CBivariate R) : CPolynomial R in CompPoly/Bivariate/Basic.lean
  • theorem coeff_toPoly {R : Type*} [BEq R] [LawfulBEq R] [Nontrivial R] [Ring R] in CompPoly/Bivariate/ToPoly.lean
  • lemma ofPoly_add {R : Type*} [BEq R] [LawfulBEq R] [Nontrivial R] [Ring R] [DecidableEq R] in CompPoly/Bivariate/ToPoly.lean
  • def CC (r : R) : CBivariate R in CompPoly/Bivariate/Basic.lean
  • theorem ofPoly_coeff {R : Type*} [BEq R] [LawfulBEq R] [Nontrivial R] [Ring R] [DecidableEq R] in CompPoly/Bivariate/ToPoly.lean
  • theorem Y_toPoly {R : Type*} [BEq R] [LawfulBEq R] [Nontrivial R] [Ring R] [DecidableEq R] : in CompPoly/Bivariate/ToPoly.lean
  • def monomialXY [DecidableEq R] (n m : ℕ) (c : R) : CBivariate R in CompPoly/Bivariate/Basic.lean
  • theorem monomial_add [DecidableEq R] (n : ℕ) (a b : R) : in CompPoly/Univariate/Basic.lean
  • theorem toPoly_eq_map {R : Type*} [BEq R] [LawfulBEq R] [Nontrivial R] [Ring R] in CompPoly/Bivariate/ToPoly.lean
  • def natDegreeX (f : CBivariate R) : ℕ in CompPoly/Bivariate/Basic.lean
  • theorem toPoly_mul {R : Type*} [BEq R] [LawfulBEq R] [Nontrivial R] [Ring R] in CompPoly/Bivariate/ToPoly.lean
  • theorem leadingCoeffY_toPoly {R : Type*} [BEq R] [LawfulBEq R] [Nontrivial R] [Ring R] in CompPoly/Bivariate/ToPoly.lean
  • def supportY (f : CBivariate R) : Finset ℕ in CompPoly/Bivariate/Basic.lean
  • def coeff (f : CBivariate R) (i j : ℕ) : R in CompPoly/Bivariate/Basic.lean
  • def evalY (a : R) (f : CBivariate R) : CPolynomial R in CompPoly/Bivariate/Basic.lean
  • lemma toPoly_add {R : Type*} [BEq R] [LawfulBEq R] [Nontrivial R] [Ring R] in CompPoly/Bivariate/ToPoly.lean
  • def leadingCoeffX [DecidableEq R] (f : CBivariate R) : CPolynomial R in CompPoly/Bivariate/Basic.lean
  • theorem evalY_toPoly {R : Type*} [BEq R] [LawfulBEq R] [Nontrivial R] [Ring R] in CompPoly/Bivariate/ToPoly.lean
  • def evalEval (x y : R) (f : CBivariate R) : R in CompPoly/Bivariate/Basic.lean
  • theorem evalEval_toPoly {R : Type*} [BEq R] [LawfulBEq R] [Nontrivial R] [Ring R] in CompPoly/Bivariate/ToPoly.lean
  • theorem eval₂_eq_sum_support {S : Type*} [Semiring S] in CompPoly/Univariate/Basic.lean
  • lemma toImpl_add {R : Type*} [BEq R] [LawfulBEq R] [Ring R] (p q : R[X]) : in CompPoly/Bivariate/ToPoly.lean
  • theorem evalX_toPoly_eval {R : Type*} [BEq R] [LawfulBEq R] [Nontrivial R] [Ring R] in CompPoly/Bivariate/ToPoly.lean
  • theorem toPoly_eq_zero_iff [LawfulBEq R] (p : CPolynomial R) : in CompPoly/Univariate/ToPoly.lean
  • def supportX (f : CBivariate R) : Finset ℕ in CompPoly/Bivariate/Basic.lean
  • def leadingCoeffY (f : CBivariate R) : CPolynomial R in CompPoly/Bivariate/Basic.lean
  • def totalDegree (f : CBivariate R) : ℕ in CompPoly/Bivariate/Basic.lean

sorry Tracking

❌ **Added:** 3 `sorry`(s)
  • theorem swap_toPoly_coeff {R : Type*} [BEq R] [LawfulBEq R] [Nontrivial R] [Ring R] in CompPoly/Bivariate/ToPoly.lean
  • theorem evalX_toPoly_eval {R : Type*} [BEq R] [LawfulBEq R] [Nontrivial R] [Ring R] in CompPoly/Bivariate/ToPoly.lean
  • theorem evalX_toPoly_coeff {R : Type*} [BEq R] [LawfulBEq R] [Nontrivial R] [Ring R] in CompPoly/Bivariate/ToPoly.lean

🎨 **Style Guide Adherence**

Based on the provided style guide, the following lines in the code changes violate the guidelines:

CompPoly/Bivariate/Basic.lean

  • Line 31: CPolynomial (CPolynomial R)
    • Violation: "Use 2 spaces for indentation."
  • Line 68: def CC (r : R) : CBivariate R := CPolynomial.C (CPolynomial.C r)
    • Violation: "Functions and Terms: lowerCamelCase" and "Acronyms: Treat as words (e.g., HtmlParser not HTMLParser)." (Should be cc).
  • Line 71: def X : CBivariate R := CPolynomial.C CPolynomial.X
    • Violation: "Functions and Terms: lowerCamelCase (e.g., binarySearch, isPrime)." (Should be x).
  • Line 74: def Y [DecidableEq R] : CBivariate R := CPolynomial.monomial 1 (CPolynomial.C 1)
    • Violation: "Functions and Terms: lowerCamelCase (e.g., binarySearch, isPrime)." (Should be y).
  • Line 77: def monomialXY [DecidableEq R] (n m : ℕ) (c : R) : CBivariate R :=
    • Violation: "Acronyms: Treat as words (e.g., HtmlParser not HTMLParser)." (Should be monomialXy).
  • Line 104, 109, 114, 126, 131: Definitions evalX, evalY, evalEval, leadingCoeffY, leadingCoeffX.
    • Violation: "Functions and Terms: lowerCamelCase" and "Acronyms: Treat as words (e.g., HtmlParser not HTMLParser)." (The single trailing letters X and Y should be treated as words under the acronym rule; evalEval already conforms.)

CompPoly/Bivariate/ToPoly.lean

  • Line 43, 51: Indentation of def bodies (4 spaces).
    • Violation: "Use 2 spaces for indentation."
  • Lines 45, 53, 67, 104, 155, 160, 179, 194, 219, 230, 247, 303, 307, 342, 347, 377, 389, 402, 410, 422, 439: Usage of fun x => ....
    • Violation: "Functions: Prefer fun x ↦ ... over λ x, ...."
  • Lines 60, 61, 72–101: Indentation within by blocks (6 spaces).
    • Violation: "Indent the tactic block." (Guidelines specify 2 spaces for general indentation).
  • Line 144: have h_add_trim : ∀ p q : CPolynomial.Raw R,
    • Violation: "Variable Conventions: x, y, z, ... : Elements of a generic type." (Should use x, y instead of p, q).
  • Line 58, 64, 110, 120, 125, 140, 153, 173, 191, 201, 213, 219, 266, 276, 288, 303, 314, 334, 344, 354, 363, 374, 386, 399, 406, 413, 454, 473, 480: Theorem names containing CamelCase (e.g., toPoly_zero, evalEval_toPoly, CC_toPoly).
    • Violation: "Theorems and Proofs: snake_case (e.g., add_comm, list_reverse_id)." (Should be to_poly_zero, eval_eval_to_poly, cc_to_poly, etc.).

CompPoly/Univariate/Basic.lean

  • Line 260, 261, 280, 284, 289: Usage of fun x => ....
    • Violation: "Functions: Prefer fun x ↦ ... over λ x, ...."
  • Line 549: theorem natDegree_eq_support_sup (p : CPolynomial R) :
    • Violation: "Theorems and Proofs: snake_case (e.g., add_comm, list_reverse_id)." (Should be nat_degree_eq_support_sup).

CompPoly/Univariate/ToPoly.lean

  • Line 213: theorem toPoly_eq_zero_iff [LawfulBEq R] (p : CPolynomial R) :
    • Violation: "Theorems and Proofs: snake_case (e.g., add_comm, list_reverse_id)." (Should be to_poly_eq_zero_iff).

📄 **Per-File Summaries**
  • CompPoly.lean: This update adds imports for bivariate polynomial definitions and conversions to the main project file.
  • CompPoly/Bivariate/Basic.lean: Defines computable bivariate polynomials as iterated univariate polynomials and implements core operations such as evaluation, degrees, and coefficient access.
  • CompPoly/Bivariate/README.md: Introduces documentation for the CBivariate type, which represents computable bivariate polynomials as nested univariate polynomials compatible with Mathlib.
  • CompPoly/Bivariate/ToPoly.lean: This file establishes a ring equivalence and conversion API between the custom CBivariate representation and Mathlib’s bivariate polynomials (R[X][Y]).
  • CompPoly/Univariate/Basic.lean: Adds basic typeclass instances and fundamental theorems relating evaluation and degree to the support of univariate polynomials.
  • CompPoly/Univariate/ToPoly.lean: Add a theorem proving that toPoly maps a canonical polynomial to zero if and only if the original polynomial is zero.

Last updated: 2026-02-25 07:25 UTC.

@dhsorens
Collaborator Author

Currently kind of waffling on the namespace definition - we should either have bivariate polynomials defined within CPolynomial as CPolynomial.Bivariate R := CPolynomial (CPolynomial R) or in its own top-level namespace CBivariate - can't decide!

@alexanderlhicks
Collaborator

I kinda lean in favour of CBivariate for conciseness

@alexanderlhicks
Collaborator

/review


@alexanderlhicks self-assigned this Feb 20, 2026
@dhsorens
Collaborator Author

dhsorens commented Feb 20, 2026

weird... something went wrong with the history of this branch, it should only affect 5 files. investigating

@dhsorens
Collaborator Author

ok, fixed - now properly ready for review.

@github-actions

🤖 AI Review (with external context)

🤖 AI Review

Overall Summary:
While the refactoring and maintenance updates to the existing library are verified as correct, the new Bivariate polynomial implementation is rejected due to critical logic and type errors. Specifically, the swap function incorrectly performs an identity transformation instead of transposing variables, and there is a pervasive type mismatch treating the CBivariate alias as a structure via invalid .val accessors. These issues in CompPoly/Bivariate/* must be resolved before merging.


📄 **Review for `CompPoly.lean`**

Hello. I have reviewed the changes in CompPoly.lean.

1. Analysis of Implementation

The changes in this PR involve adding two new import statements to the CompPoly.lean file:

  • import CompPoly.Bivariate.Basic
  • import CompPoly.Bivariate.ToPoly

Logic and Consistency:
This file appears to serve as a library root or "barrel file," aggregating sub-modules for users of the library. The existing imports include CompPoly.Univariate.* modules. The addition of CompPoly.Bivariate.* modules is structurally consistent with the existing pattern and indicates the exposure of bivariate polynomial formalizations to the library's public API.

Specification Compliance:
While no specific external document was provided regarding the addition of bivariate polynomials, the naming convention mirrors the existing Univariate modules.

2. Check for Misformalization

  • Off-by-One Errors: N/A (Import statements).
  • Recursive Definitions: N/A.
  • Incorrect Assumptions: N/A.
  • Prop vs Type: N/A.
  • Universe Levels: N/A.
  • Circular Dependencies: Since the code compiles (as stated in the prompt), there are no circular dependency cycles introduced by these imports.

3. Verdict & Feedback

Verdict: The formalization is correct.

The changes are low-risk and structural, properly integrating the new Bivariate modules into the top-level library entry point. No misformalization issues are present.

📄 **Review for `CompPoly/Bivariate/Basic.lean`**

This review focuses on CompPoly/Bivariate/Basic.lean.

The file introduces a computable bivariate polynomial type CBivariate R defined as CPolynomial (CPolynomial R). While the structural definition is sound, there are critical issues in the implementation of the swap operation and pervasive type errors regarding the accessor .val.

1. Critical Misformalization in swap

The swap function, intended to swap the roles of variables $X$ and $Y$ (mapping $P(X, Y) \mapsto P(Y, X)$), is implemented incorrectly. The current implementation reconstructs the original polynomial rather than transposing the variables.

  • Current Logic:
    It iterates over the support of $f$. For each term $c_{i,j} X^i Y^j$ (where $j$ comes from the outer polynomial index and $i$ from the inner), it constructs
    let coeffJ = ... monomial i c, yielding $c X^i$, and then
    monomial j coeffJ, yielding $(c X^i) Y^j$.
    This reproduces the original term, so the result is the identity transformation.

  • Correct Logic:
    To swap variables, the coefficient $c_{i,j}$ associated with $X^i Y^j$ in the original polynomial must be associated with $X^j Y^i$ in the result (conceptually mapping $X \to Y$ and $Y \to X$, where the inner variable is always "X" and outer always "Y" in the data structure). The resulting term should be constructed via monomialXY j i c (since monomialXY n m creates $X^n Y^m$).

  • Correction:

    def swap [DecidableEq R] (f : CBivariate R) : CBivariate R :=
      (CPolynomial.support f).sum (fun j =>
        let inner := CPolynomial.coeff f j
        (CPolynomial.support inner).sum (fun i =>
          monomialXY j i (CPolynomial.coeff inner i)))

2. Pervasive Type Error (.val)

Issue:
The code consistently uses an accessor .val on variables of type CBivariate R (e.g., f.val.coeff, f.val.natDegree, f.val.eval).
However, CBivariate is defined as a definition alias, not a structure:

def CBivariate ... := CPolynomial (CPolynomial R)

Unless CPolynomial is a structure possessing a field named val (which is non-standard for polynomial types in Lean/Mathlib and inconsistent with the direct usage of f in CPolynomial.support f in supportX), f itself is the polynomial. Accessing f.val is likely a type error.

Recommendation:
Remove .val accessors throughout the file. Functions like coeff, natDegree, and eval should be called directly on f (or qualified as CPolynomial.coeff f ... if namespace resolution requires it).
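A minimal self-contained illustration of the underlying Lean behavior (hypothetical names, not from the PR): a plain `def` alias unfolds definitionally, but it is not a structure and has no `.val` field.

```lean
-- Hypothetical alias, analogous to `CBivariate`:
def MyAlias (R : Type*) := List R

-- Works: the `def` unfolds definitionally, so `x` already *is* a `List Nat`.
example (x : MyAlias Nat) : List Nat := x

-- Fails to elaborate: `MyAlias` is not a structure, so there is no field `val`.
-- example (x : MyAlias Nat) : List Nat := x.val
```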

3. Other Findings

  • leadingCoeffX: This definition relies on swap. It is mathematically correct as (f.swap).leadingCoeffY only if swap is fixed. With the current broken swap, leadingCoeffX returns leadingCoeffY.
  • evalX: The logic monomial j (eval a (coeff j)) correctly computes $\sum P_j(a) Y^j$, which represents the polynomial in $Y$ obtained by setting $X=a$. This is correct.
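Based on that description and the signature in the declaration summary, evalX plausibly looks like the following (a sketch reconstructed from the review's wording, not the PR's exact code):

```lean
def evalX [DecidableEq R] (a : R) (f : CBivariate R) : CPolynomial R :=
  -- For each power `j` of `Y`, evaluate the coefficient polynomial `P_j` at
  -- `X = a`, producing the univariate polynomial `∑ P_j(a) Y^j`.
  (CPolynomial.support f).sum fun j =>
    CPolynomial.monomial j (CPolynomial.eval a (CPolynomial.coeff f j))
```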

Verdict & Corrected Code

Verdict: Incorrect (Critical logic error in swap, Major type errors).

Corrected Code Snippets:

Below are the corrected definitions for swap and coeff (as an example of removing .val).

/-- Swap the roles of X and Y. -/
def swap [DecidableEq R] (f : CBivariate R) : CBivariate R :=
  (CPolynomial.support f).sum (fun j =>
    let inner := CPolynomial.coeff f j
    (CPolynomial.support inner).sum (fun i =>
      monomialXY j i (CPolynomial.coeff inner i)))

/-- Coefficient of `X^i Y^j` in the bivariate polynomial. -/
def coeff (f : CBivariate R) (i j : ℕ) : R :=
  CPolynomial.coeff (CPolynomial.coeff f j) i

Note: Ensure CPolynomial.coeff, CPolynomial.support, etc., are accessible. If CBivariate is opened or CPolynomial namespace is used, qualification may be adjusted, but the .val indirection must be removed.

📄 **Review for `CompPoly/Bivariate/ToPoly.lean`**

This review covers the new file CompPoly/Bivariate/ToPoly.lean.

Verdict: Incorrect

The formalization logic regarding the conversion between the custom CBivariate type and Mathlib's R[X][Y] appears algorithmically correct, but there are inconsistent usages of the CBivariate type that will likely cause type errors. The implementation treats CBivariate as a wrapper structure in some places and as the underlying polynomial type in others.

Issues Found

1. Critical Type Mismatch in toPoly

  • Location: toPoly, line 40.
  • Code: (CPolynomial.support p).sum ... vs p.val.coeff j.
  • Analysis:
    • The usage p.val.coeff j implies p (of type CBivariate R) is a structure or Subtype wrapping an underlying polynomial (accessible via .val).
    • However, CPolynomial.support p passes p directly to a function that likely expects a CPolynomial. If p is a wrapper, CPolynomial.support should typically be called on p.val.
    • While a coercion might exist, mixing direct access (p.val) and coerced access (support p) is fragile and suggests a misunderstanding of the type definition.
  • Correction: Assuming CBivariate is a wrapper, usage should be consistent: CPolynomial.support p.val.

2. Return Type Mismatch in ofPoly

  • Location: ofPoly, lines 50-52.
  • Code: The function body is a Finset.sum of monomials.
  • Analysis:
    • CPolynomial.monomial typically constructs a CPolynomial.
    • The sum of CPolynomials is a CPolynomial (via AddCommMonoid).
    • The function signature defines the return type as CBivariate R.
    • If CBivariate R is a structure/subtype (as implied by p.val in toPoly), the raw sum of polynomials cannot be returned directly. It must be wrapped in the CBivariate constructor (e.g., ⟨ ... ⟩), possibly requiring a proof if CBivariate enforces invariants (like trimming).
  • Correction: Wrap the result in the appropriate constructor for CBivariate.

3. Ambiguity in ofPoly Constraints

  • Location: ofPoly signature.
  • Observation: The constraint [DecidableEq R] is correctly added because cj.toImpl (converting a sparse Mathlib polynomial to a dense implementation) usually requires checking coefficients for equality with zero (trimming). This is a correct addition, not an issue.

Recommended Fixes

Assuming CBivariate R is a structure wrapping CPolynomial (Subtype ...):

-- In toPoly
noncomputable def toPoly {R : Type*} [BEq R] [LawfulBEq R] [Nontrivial R] [Ring R]
    (p : CBivariate R) : R[X][Y] :=
  -- FIX: Access p.val for support
  (CPolynomial.support p.val).sum (fun j =>
    monomial j (CPolynomial.toPoly (p.val.coeff j)))

-- In ofPoly
def ofPoly {R : Type*} [BEq R] [LawfulBEq R] [Nontrivial R] [Ring R]
    [DecidableEq R] (p : R[X][Y]) : CBivariate R :=
  -- FIX: Wrap the sum in the CBivariate constructor (assuming simple structure or subtype)
  ⟨(p.support).sum (fun j =>
    let cj := p.coeff j
    CPolynomial.monomial j ⟨cj.toImpl, CPolynomial.Raw.trim_toImpl cj⟩)⟩

If CBivariate R is instead defined as abbrev CBivariate R := CPolynomial ..., then p.val in toPoly is incorrect and should be removed. Given the explicit usage of p.val, the structure/subtype interpretation is most likely intended.

📄 **Review for `CompPoly/Data/List/Lemmas.lean`**

This is a review of the changes in CompPoly/Data/List/Lemmas.lean.

1. Implementation Analysis

The diff modifies the proof of getLastI_append_single. The theorem states that for a list l and an element x, the "inhabited last" element of l ++ [x] is x.

  • Theorem Statement: (l ++ [x]).getLastI = x
    • Context: [Inhabited α] is present, which is required for getLastI to return a default value if the list were empty (though l ++ [x] is never empty).
  • Old Proof: Relied on List.getLastI_eq_getLast? (likely relating getLastI to Option.get! or similar) and Option.some_or.
  • New Proof:
    • Replaces List.getLastI_eq_getLast? with List.getLastI_eq_getLast?_getD. This suggests getLastI is formally treated as l.getLast?.getD default.
    • Adds Option.getD_some.
    • Retains List.getLast?_append, List.getLast?_singleton.

Logical Flow:

  1. List.getLastI_eq_getLast?_getD transforms the LHS to (l ++ [x]).getLast?.getD default.
  2. List.getLast?_append and List.getLast?_singleton reduce (l ++ [x]).getLast? to some x.
  3. The expression becomes (some x).getD default.
  4. Option.getD_some simplifies (some x).getD default to x.
  5. x = x closes the goal.
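Spelled out as a proof script, the flow above plausibly corresponds to something like the following (a sketch using the lemma names cited, not the actual diff):

```lean
example {α : Type*} [Inhabited α] (l : List α) (x : α) :
    (l ++ [x]).getLastI = x := by
  -- Reduce `getLastI` to `getLast?.getD default`, compute `getLast?` of the
  -- appended singleton, then discharge `(some x).getD default = x`.
  simp [List.getLastI_eq_getLast?_getD, List.getLast?_append,
    List.getLast?_singleton, Option.getD_some]
```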

The change aligns with standard library updates in Lean 4 (Mathlib4), where definitions involving "Inhabited" accessors often standardize on Option.getD.

2. Misformalization Checklist

  • Off-by-One Errors: None. The logic correctly identifies the last element of a list appended with a singleton.
  • Recursive Definitions: N/A.
  • Incorrect Assumptions/Side-Conditions: The [Inhabited α] instance is correctly present. While l ++ [x] is provably non-empty and logically doesn't require a default value to extract the last element, the definition of getLastI structurally requires the Inhabited instance. The theorem is sound.
  • Prop vs. Type: Correct usage.
  • Universe Levels: No universe manipulation involved.

3. Verdict & Feedback

Verdict: Correct.

The changes represent a correct update to the proof script, likely adjusting for changes in underlying library definitions (specifically relating getLastI to Option.getD). The logic is sound and the formalization is robust.

Feedback:

  • The update correctly handles the reduction of getLastI via getD.
  • No issues found.
📄 **Review for `CompPoly/Fields/Binary/AdditiveNTT/AdditiveNTT.lean`**

You have asked me to review the changes in CompPoly/Fields/Binary/AdditiveNTT/AdditiveNTT.lean from a pull request. The changes primarily involve updating lemma names to match recent library conventions (refactoring Fin.coe_* to Fin.val_*) and completing a proof by filling a sorry.

1. Analysis of Changes

  • Lemma Renaming (Maintenance):

    • Throughout the file, Fin.coe_cast is replaced with Fin.val_cast, Fin.coe_castSucc with Fin.val_castSucc, etc.
    • Similarly, Finsupp.equivFunOnFinite_symm_apply_toFun is updated to Finsupp.coe_equivFunOnFinite_symm.
    • Verdict: These are syntactic updates to align with Mathlib naming conventions (preferring val over coe when referring to the struct field .val, and aligning Finsupp coercion lemmas). Since the code compiles, these changes are correct and semantic-preserving.
  • Proof Script Cleanup:

    • In getSDomainBasisCoeff_of_iteratedQuotientMap, rfl is added, and some simp only calls are consolidated or replaced with rw.
    • Verdict: These are harmless simplifications of the proof tactics.
  • Filling sorry in foldl_NTTStage_inductive_aux:

    • Context: The base case (| zero) of an induction on k : Fin (ℓ + 1).
    • Change:
      simp only [Fin.coe_ofNat_eq_mod, Nat.zero_mod, Fin.foldl_zero, tsub_zero]
      exact invariant_init
    • Logic: Fin.foldl with 0 iterations acts as the identity on the accumulator. The goal requires showing that the initial state satisfies the invariant, which is given by invariant_init. The simplifiers Fin.foldl_zero (reducing the fold) and tsub_zero (handling index arithmetic) correctly reduce the goal to the hypothesis invariant_init.
    • Verdict: Correct.
  • Removal of Commented Code:

    • A commented out -- sorry was removed in NTTStage_correctness.
    • Verdict: Housekeeping.

2. Check for Misformalization

  • Off-by-One Errors: The changes involving Fin (e.g., Fin.val_castSucc, Fin.val_last) are name changes for existing lemmas. No indices or bounds were logically altered in the diff.
  • Recursive Definitions: The induction base case for foldl is standard and correctly handled.
  • Incorrect Assumptions: The proof relies on invariant_init being available in the context, which is consistent with the induction setup.

3. Verdict

The changes in this file are Correct. They represent necessary maintenance to keep the code compiling with updated library versions and the completion of a proof obligation.

Summary:

  • Formalization Status: Correct.
  • Issues Found: None.
📄 **Review for `CompPoly/Fields/Binary/BF128Ghash/Basic.lean`**

The review of the changes in CompPoly/Fields/Binary/BF128Ghash/Basic.lean indicates that the modifications are primarily technical adjustments to the proof script for type unification and elaboration robustness, chiefly by manually aligning Fintype.card (ZMod 2) with the numeral 2. The mathematical logic remains consistent with Rabin's Irreducibility Test.

Verdict: Correct

Analysis:

  1. Robustness Fixes (ZMod.card vs 2): The main changes involve explicit rewrites of ZMod.card. The theorem Polynomial.irreducible_dvd_X_pow_sub_X_iff_natDegree_dvd is stated in terms of Fintype.card R. In this context (R = ZMod 2), the cardinality is 2. The previous code relied on implicit unification, which seemingly failed or became brittle. The new code explicitly rewrites ZMod.card to 2 inside tactic blocks (by rw [ZMod.card]; exact ...) to satisfy the type checker. This is a correct and standard way to fix such elaboration issues.
  2. Rewrite Strategy (CharTwo.sub_eq_add): The change from rw to have := ...; simp only and the use of convert later in the proof improves the script's stability against potential unification failures or ambiguity when applying the characteristic 2 property (a - b = a + b).
  3. Recursion Depth: set_option maxRecDepth 1000 is a benign configuration change often required for complex tactic proofs.
  4. Mathematical Correctness: The logic follows the standard irreducibility test:
    • Hypothesis: P is degree 128.
    • Assumption: P is reducible.
    • Implication: There exists an irreducible factor q with deg(q) ≤ 64.
    • Trace condition implies q ∣ X^(2^128) - X, so deg(q) ∣ 128.
    • deg(q) ≤ 64 and deg(q) ∣ 128 implies deg(q) ∣ 64.
    • This implies q ∣ X^(2^64) - X (and thus X^(2^64) + X).
    • Contradiction: q divides gcd(X^(2^64) + X, P), but the GCD is given as 1.
    • The formalization correctly maps to this logic.
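Condensed, the bullet chain above reads as one divisibility argument (divisors of $128$ are powers of two, so any divisor $\le 64$ divides $64$; in characteristic 2, subtraction is addition):

$$\deg q \le 64,\ \deg q \mid 128 \;\Longrightarrow\; \deg q \mid 64 \;\Longrightarrow\; q \mid X^{2^{64}} - X = X^{2^{64}} + X \;\Longrightarrow\; q \mid \gcd\!\big(X^{2^{64}} + X,\ P\big) = 1,$$

which is impossible for an irreducible $q$.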

No misformalization issues were found.

📄 **Review for `CompPoly/Fields/Binary/BF128Ghash/Impl.lean`**

The review of the file CompPoly/Fields/Binary/BF128Ghash/Impl.lean follows.

1. Analysis of Implementation Changes

The changes in this Pull Request are primarily focused on refactoring proof scripts to utilize the grind tactic (likely an automated reasoning/normalization tactic available in the project's environment) and removing redundant tactic calls.

  • set_option maxRecDepth 1500:

    • Logic: Increases the maximum recursion depth for the elaborator.
    • Context: Cryptographic formalizations involving bit vectors and polynomials (especially with tactics like omega or heavy type-class synthesis) often require higher recursion limits. This is a configuration change to enable the subsequent automated proofs.
  • fold_step_mod_eq:

    • Diff: Replaced congr 1; congr 1 with grind.
    • Correctness: The previous manual congruence steps were likely stripping away layers of constructors or function applications. The grind tactic automates this equality checking and normalization. Since the code compiles, grind successfully discharges the goal or prepares it for the subsequent subgoals (indicated by the · bullets).
  • reduce_clMul_correct:

    • Diff: Removed norm_cast at h_deg_R_nat and norm_cast at deg_R.
    • Correctness: The surrounding change tactics (change (n : WithBot ℕ) < (8 : ℕ) at ...) explicitly handle the type coercion that norm_cast was likely intended to facilitate. If the code compiles without norm_cast, the tactic was redundant or superseded by the change steps and rewrites.
  • toQuot_mul:

    • Diff: Replaced exact h_div with grind.
    • Correctness: The goal of this lemma is toQuot (a * b) = toQuot a * toQuot b. In quotient rings, $x = y$ is defined as $x - y \in I$. The hypothesis h_div establishes exactly that $\mathrm{ghashPoly} \mid (a*b)_{\mathrm{poly}} - a_{\mathrm{poly}} \cdot b_{\mathrm{poly}}$. Previously, exact matched this hypothesis to the goal. grind is now used to perform this matching (and potentially unfold definitions if necessary). While exact is more explicit, grind is correct as verified by the compiler.
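
The quotient-ring fact the review appeals to can be stated generically; a hedged sketch over an arbitrary commutative ring (generic generator g, not the project's ghashPoly):

```lean
import Mathlib

-- In R ⧸ span {g}, equality of images is divisibility of the difference by g.
example {R : Type*} [CommRing R] (g a b : R) (h : g ∣ a - b) :
    Ideal.Quotient.mk (Ideal.span {g}) a = Ideal.Quotient.mk (Ideal.span {g}) b := by
  rw [Ideal.Quotient.eq, Ideal.mem_span_singleton]
  exact h
```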

2. Misformalization Checklist

  • Off-by-One Errors: None. The removal of norm_cast preserves the logical bounds checks (n < 8).
  • Recursive Definitions: No changes to definitions.
  • Incorrect Assumptions: No changes to theorem assumptions.
  • Prop vs. Type: Correct usage maintained.
  • Universe Levels: No changes.

3. Verdict & Feedback

Verdict: Correct

The changes are safe refactorings of the proof scripts. They streamline the code by using automation (grind) and removing unnecessary steps, without altering the definitions or the logical statements of the theorems.

Feedback:

  • Refactoring Quality: The move to grind simplifies the proof script maintenance, assuming grind is stable in the build environment.
  • Style: Replacing exact h_div with grind in toQuot_mul is acceptable, though exact is typically preferred for performance and clarity when the term is already constructed. However, this is a minor stylistic point and does not affect correctness.
📄 **Review for `CompPoly/Fields/Binary/BF128Ghash/Prelude.lean`**

Review of CompPoly/Fields/Binary/BF128Ghash/Prelude.lean

Verdict: Correct

Analysis:
The only change introduced in this pull request for this file is the addition of the compiler option set_option maxRecDepth 550.

  1. Logical Correctness: This change does not alter any definitions, theorems, or logical structures. It adjusts the resource limits of the Lean elaborator/kernel.
  2. Necessity: As noted in the comment (-- for ghashPoly_eq_P_val), this is likely required to handle deep recursion stacks that occur when elaborating proofs or definitions involving large concrete structures, such as 128-bit vectors or polynomials over $GF(2^{128})$, which are common in GHash implementations.
  3. Safety: Increasing the recursion depth is a standard operational adjustment in Lean and does not introduce misformalization risks (it does not change Prop vs Type, universe levels, or logical assumptions).

Conclusion:
The change is a safe configuration update required for compilation. No misformalization issues are present.

📄 **Review for `CompPoly/Fields/Binary/BF128Ghash/XPowTwoPowGcdCertificate.lean`**

This review focuses on the changes in CompPoly/Fields/Binary/BF128Ghash/XPowTwoPowGcdCertificate.lean.

Analysis of Changes

  1. Rewrite Strategy (conv_lhs => rw! to rw!):

    • Old: conv_lhs => rw! (castMode:=.all) [h_256_eq]
    • New: rw! (castMode:=.all) [h_256_eq]
    • Logic: The original code restricted the rewrite 256 = 128 + 128 to the left-hand side (LHS) of the goal. The new code applies it to the entire goal.
    • Correctness: This change is valid. The equality 256 = 128 + 128 holds definitionally in Nat. Applying it globally ensures that Fin 256 becomes Fin (128 + 128) wherever it appears (including potentially the type of gcd_b_0_val on the RHS), which enables the subsequent Fin.sum_univ_add to trigger correctly. Since rw (and rw!) operates on syntactic occurrences but respects definitional equality for the purposes of type-checking, this does not alter the logical statement being proven.
  2. Simp Lemma Removal (Fin.coe_castAdd):

    • Old: simp only [..., Fin.coe_castAdd, ...]
    • New: simp only [..., ...]
    • Logic: Fin.coe_castAdd simplifies ((Fin.castAdd n i) : Nat) to (i : Nat).
    • Correctness: Removing this lemma from simp only means the simplifier will not explicitly reduce the coercion ↑(Fin.castAdd ...) to ↑i at this step. However, Fin.castAdd n i is defined as ⟨i.val, ...⟩, so (Fin.castAdd n i).val is definitionally equal to i.val. Since subsequent tactics (like omega or term construction) work up to definitional equality, explicit simplification is not strictly required for correctness, assuming the proof script still compiles (which is confirmed). This suggests the lemma was redundant or handled elsewhere.
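
The sum split that the rewrite enables is Mathlib's Fin.sum_univ_add; a minimal sketch of the identity involved:

```lean
import Mathlib

-- A sum over Fin (128 + 128) splits into the two halves, with the lower
-- half re-indexed through Fin.castAdd and the upper through Fin.natAdd.
example (f : Fin (128 + 128) → ℕ) :
    ∑ i, f i =
      (∑ i : Fin 128, f (Fin.castAdd 128 i)) + ∑ i : Fin 128, f (Fin.natAdd 128 i) :=
  Fin.sum_univ_add f
```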

Checklist Verification

  • Off-by-One Errors: The split 256 = 128 + 128 is arithmetically correct and aligns with the bounds check val < 2^128.
  • Assumptions/Side-Conditions: The proof relies on h_gcd_b_0_val_lt (value fits in 128 bits), which is correctly established via omega.
  • Prop vs Type: No issues.
  • Universe Levels: No issues.

Verdict

The formalization in this file is correct. The changes represent a cleanup of the proof script (removing an unnecessary conv scope and a redundant simplification lemma) without altering the semantics of the theorem or introducing logical flaws.

Feedback:

  • The refactoring correctly simplifies the tactic script.
  • The global application of rw! [h_256_eq] is safe as it preserves the definitional properties required for the Fin.sum_univ_add split.
📄 **Review for `CompPoly/Fields/Binary/Common.lean`**

I have reviewed the changes in CompPoly/Fields/Binary/Common.lean. The diff includes a cleanup of toPoly_128_extend_256 and a complete, rigorous rewrite of toPoly_shiftLeft_no_overflow supported by several new helper lemmas.

Analysis of Changes

  1. toPoly_128_extend_256:

    • Logic: The proof logic remains unchanged: splitting the summation range $[0, 256)$ into $[0, 128)$ and $[128, 256)$. The lower part corresponds to the original BitVec 128, and the upper part is shown to be zero because to256 zero-extends.
    • Implementation: The changes update Fin.coe_ lemmas to their newer Fin.val_ counterparts (e.g., Fin.coe_castAdd $\to$ Fin.val_castAdd) and simplify the conv_lhs block. This is a correct maintenance update aligning with Mathlib conventions.
  2. Helper Lemmas:

    • BitVec_getLsb_eq_false_of_toNat_lt_two_pow: Correctly deduces that bits at or above index d are false if the number is strictly less than 2^d.
    • BitVec_getElem_eq_false_of_toNat_lt_two_pow: A convenient wrapper for the above using getElem syntax. The preconditions (n < w and d ≤ n) are handled correctly.
    • BitVec_getLsb_shiftLeft: Correctly characterizes the bits of a <<< shift. Logic: (a <<< shift)[n] = a[n - shift] if shift ≤ n, else false. This accurately reflects the semantics of left shift.
    • toPoly_coeff: Provides a precise characterization of the coefficients of the polynomial generated from a BitVec. The case analysis on n < w and the bit value is correct.
  3. toPoly_shiftLeft_no_overflow:

    • Logic: The theorem states that if a fits in d bits and d + shift ≤ w, then converting a <<< shift to a polynomial is equivalent to (toPoly a) * X^shift.
    • Proof Strategy: The new proof uses extensionality (ext n) to compare coefficients at every index $n$. It effectively breaks down the problem into cases:
      • Case $n < w$: Splits into $n < shift$ (coefficient is 0 for both sides) and $n \ge shift$ (coefficient comes from $a[n-shift]$).
      • Case $n \ge w$: LHS is 0 by definition of toPoly. RHS is 0 because either $n-shift \ge w$ (out of bounds for $a$) or $n-shift \ge d$ (zero because $a < 2^d$).
    • Correctness: The case analysis is exhaustive. The boundary conditions (e.g., n - shift) are handled carefully with appropriate inequalities. The usage of the helper lemmas ensures that the "no overflow" condition (d + shift ≤ w) is correctly applied to show high bits are zero.
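
The shift characterization described for BitVec_getLsb_shiftLeft can be spot-checked on literals; a small sketch with concrete values chosen purely for illustration:

```lean
-- Bit n of (a <<< s) is bit (n - s) of a when s ≤ n, and false below s.
-- Here a = 0b1011, s = 2, so a <<< 2 = 0b101100.
example : ((0b1011#8 <<< 2).getLsbD 3 = (0b1011#8).getLsbD 1)
    ∧ ((0b1011#8 <<< 2).getLsbD 1 = false) := by
  decide
```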

Formalization Checklist

  • Off-by-One Errors: None found. Inequalities (e.g., n < w, d ≤ n) are strict where required and inclusive where appropriate.
  • Recursive Definitions: Not applicable.
  • Incorrect Assumptions/Side-Conditions: The hypotheses ha : a.toNat < 2 ^ d and h_no_overflow : d + shift ≤ w are correct and sufficient for the theorem.
  • Prop vs. Type: Standard usage.
  • Universe Levels: Correct.

Verdict

The changes are correct. The refactoring of toPoly_shiftLeft_no_overflow significantly improves the readability and robustness of the proof compared to the previous conv-heavy approach.

Status: Approved

📄 **Review for `CompPoly/Fields/Binary/Tower/Basic.lean`**

This review focuses on the changes in CompPoly/Fields/Binary/Tower/Basic.lean.

General Analysis:
The pull request performs several cleanups (updating Fin.coe_cast to Fin.val_cast, removing unnecessary rewrites, adding rfl to close definitionally equal goals) and significantly refactors a proof in multilinearBasis_apply. The refactoring replaces a somewhat implicit rewrite strategy with a more rigorous one using a new helper lemma algebraMap_𝕏_eq_of_index_eq.

Specific Checklist & Findings:

  1. Off-by-One / Indexing: The use of h_k_sub_1_add_1 and omega in various places confirms that index arithmetic (e.g., k - 1 + 1 = k) is handled correctly. The logic in multilinearBasis_apply carefully manages the relationship between r, r1 (where r = r1 + 1), and l.
  2. Helper Lemma (algebraMap_𝕏_eq_of_index_eq):
    • The new lemma asserts algebraMap (𝕏 k) = algebraMap (𝕏 m) in BTField r given k = m.
    • It correctly uses subst to unify the indices and the dependent algebra instances (binaryAlgebraTower).
    • This is a robust way to handle "cast" issues where indices are provably equal but syntactically distinct, avoiding fragile rewrites in the main proof.
  3. Refactoring of multilinearBasis_apply:
    • The change from a conv_rhs strategy to refine congr_arg₂ is an improvement. It explicitly separates the proof into two goals: showing the product components are equal and showing the exponent base/power components are equal.
    • The usage of congr_arg₂ (· ^ ·) ... h_exp_eq ensures that both the base (via the new lemma) and the exponent are proven equal.
    • The term h_exp_eq relies on h_prevDiff (linking prevDiff to r1 - l) and the fact that r - l - 1 simplifies to r1 - l (given r = r1 + 1). This arithmetic is sound.
  4. Prop vs Type / Defeq: The use of rfl in multiple places indicates that the terms are definitionally equal after the preceding rewrites (rw! or subst). The change from coe_cast to val_cast aligns with Mathlib conventions for Fin.
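
The index-cast technique behind algebraMap_𝕏_eq_of_index_eq can be illustrated generically (hypothetical dependent family α, not the project's types):

```lean
-- When two indices are propositionally equal, subst unifies the dependent
-- types, after which the two terms agree definitionally.
example {α : ℕ → Type} (f : (n : ℕ) → α n) {k m : ℕ} (h : k = m) :
    HEq (f k) (f m) := by
  subst h
  rfl
```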

Verdict:
The formalization is correct. The changes improve the robustness of the proofs and perform necessary maintenance.

Code Quality:

  • The refactored proof in multilinearBasis_apply is clearer and less prone to breaking due to implicit cast behavior.
  • The helper lemma is well-scoped and correctly implemented.

No misformalization issues were found.

Formalization status: Correct

📄 **Review for `CompPoly/Fields/Binary/Tower/Impl.lean`**

Here is the review of the changes in CompPoly/Fields/Binary/Tower/Impl.lean.

Summary

The changes in this PR primarily focus on:

  1. Completing proofs: A previous sorry in join_eq_iff_dcast_extractLsb has been replaced with a complete proof using bitwise manipulations and casting lemmas.
  2. Refactoring for robustness: The proof of towerEquiv_commutes_left_diff has been rewritten to use a clearer transitive inductive step rather than a deep manual expansion of definitions. Similarly, multilinearBasis_apply has been refactored to handle type casts more cleanly.
  3. Maintenance: Updates to tactic usage (Nat.add_eq_zero to Nat.add_eq_zero_iff, BitVec.not_lt_iff_le to BitVec.not_lt) and cleanups (removing unnecessary conv blocks).

Specific Feedback

1. join_eq_iff_dcast_extractLsb (Correctness of the new proof)

  • Verification: The logic establishing that a constructed field element $x = \text{join}(h, l)$ is equivalent to checking if the high and low bits of $x$ match $h$ and $l$ is correct.
  • Implementation: The use of BitVec.extractLsb_dcast_eq and BitVec.eq_append_iff_extract correctly bridges the gap between the dcast types and the bitwise operations. The explicit handling of the h_pos proof term via Subsingleton.elim is a good practice here to ensure rewrite tactics work smoothly with dependent types.
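
The Subsingleton.elim step mentioned above rests on proof irrelevance; a minimal sketch:

```lean
-- Any two proofs of the same proposition are equal, so a rewrite can
-- freely exchange proof terms sitting in dependent positions.
example (p : Prop) (h1 h2 : p) : h1 = h2 := Subsingleton.elim h1 h2
```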

2. towerEquiv_commutes_left_diff (Refactor)

  • Verification: The previous proof attempted to unwind the definitions of the tower maps across d steps manually. The new proof essentially argues that if the diagram commutes for d steps (IH) and for 1 step (new lemma towerEquiv_commutes_left_succ), it commutes for d+1 steps via function composition.
  • Correctness: This algebraic approach is significantly more robust and readable. The helper lemmas BTField_algebraMap_succ_eq_join and towerEquiv_commutes_left_succ correctly isolate the inductive step behavior.

3. Tactic and Library Updates

  • grind Tactic: The use of grind in towerRingHomForwardMap_split_eq is noted. As the code compiles, this is acceptable, though it implies a dependency on a very recent or specific version of Lean/Mathlib that includes this tactic.
  • Standard Library Changes: The updates from Nat.add_eq_zero to Nat.add_eq_zero_iff and BitVec.not_lt are correct adaptations to standard library changes.

Checklist Review

  • Off-by-One Errors: Indices involving $k, k-1$, and ranges like $2^k-1$ appear correct and consistent with BitVec indexing (0-based) and the field tower definitions.
  • Recursive Definitions: The induction in towerEquiv_commutes_left_diff correctly covers the base case ($d=0$) and successor case.
  • Incorrect Assumptions: h_pos : k > 0 is correctly propagated to ensure operations on $k-1$ are valid.
  • Prop vs Type: No issues found.
  • Universe Levels: No issues found.

Verdict

Correct.
The formalization in this file is correct. The changes successfully resolve a missing proof (sorry) and improve the maintainability of existing proofs.

No changes are requested.

📄 **Review for `CompPoly/Fields/PrattCertificate.lean`**

1. Analyze Implementation

  • Change Analysis: The diff modifies the implementation of the pratt tactic in CompPoly/Fields/PrattCertificate.lean, specifically within the tactic's elaboration block.
  • Logic: The line let ⟨n', pn⟩ ← deriveNat n q(instAddMonoidWithOneNat) is updated to let ⟨n', pn⟩ ← deriveNat n q(Nat.instAddMonoidWithOne).
  • Context: This line invokes deriveNat, a meta-programming function likely responsible for normalizing the Nat expression n into a raw natural number n' and providing a proof pn that n = n'. To construct this proof or perform the normalization, it requires an instance of AddMonoidWithOne Nat.
  • Consistency: The change replaces the identifier instAddMonoidWithOneNat with Nat.instAddMonoidWithOne. This reflects standard refactoring patterns in the Lean 4 ecosystem/Mathlib, where top-level instance names are often moved into the namespace of the type they instance (i.e., instAddMonoidWithOneNat becoming Nat.instAddMonoidWithOne). Since the prompt confirms the code compiles, this change correctly aligns with the available definitions in the project's dependencies.

2. Check for Misformalization

  • Off-by-One Errors: Not applicable. The change is a namespace update for an instance.
  • Recursive Definitions: Not applicable.
  • Incorrect Assumptions/Side-Conditions: The underlying assumption is that Nat forms an AddMonoidWithOne, which remains true. The instance passed is simply the canonical one.
  • Prop vs. Type: No misuse.
  • Universe Levels: No impact.

3. Verdict & Feedback

Verdict: Correct.

Feedback:
The change is a necessary maintenance update to resolve a reference to a renamed or standardized instance identifier. Replacing instAddMonoidWithOneNat with Nat.instAddMonoidWithOne ensures the tactic utilizes the correct canonical instance for AddMonoidWithOne Nat as defined in current library versions. As the code compiles, there are no logical issues or misformalizations introduced by this change.

📄 **Review for `CompPoly/Multivariate/MvPolyEquiv.lean`**

Start with the Analysis of Implementation.

1. Analysis of Implementation

The changes in CompPoly/Multivariate/MvPolyEquiv.lean are strictly limited to the proof script of the lemma degreeOf_equiv.

  • Refactoring Proof Automation: The primary change replaces calls to the aesop tactic with the grind tactic.
    • In the isTrue g branch of the match expression, aesop is replaced by grind.
    • In the isFalse g branch, inside the split at h block, a complex aesop call (aesop (add safe cases Fin) (add safe (by simp only at g''))) is replaced by grind.
  • Logic Preservation: Since the instructions confirm the code compiles, the grind tactic successfully discharges the proof obligations that were previously handled by aesop. This suggests that the logical steps required to prove the goal are within the capabilities of the grind automation in this context.
  • Whitespace: There is a minor whitespace cleanup removing an empty line at the end of the file.

2. Check for Misformalization

  • Off-by-One Errors: Not applicable. No arithmetic definitions or indices are modified; only the proof method is changed.
  • Recursive Definitions: Not applicable.
  • Incorrect Assumptions/Side-Conditions: The theorem statement for degreeOf_equiv remains unchanged. The preconditions and hypotheses are identical to the previous version.
  • Prop vs. Type: No changes to type declarations.
  • Universe Levels: No changes to universe polymorphism.

The change does not alter what is being proven, only how it is proven. Therefore, no misformalization is introduced regarding the mathematical content.

3. Verdict & Feedback

Verdict: Correct

The changes are a valid refactoring of the proof script. Replacing specific aesop configurations with grind simplifies the code structure while maintaining correctness (evidenced by the code compiling). There are no changes to definitions or theorem statements that would impact the formalization specification.

Feedback:

  • The transition from aesop to grind simplifies the proof block significantly, removing the need for manual hint addition (add safe cases Fin, etc.).
  • The removal of the trailing blank line is a minor stylistic improvement.
📄 **Review for `CompPoly/ToMathlib/Finsupp/Fin.lean`**

Review of CompPoly/ToMathlib/Finsupp/Fin.lean

1. Analysis of Implementation

The changes in this file are restricted to the proofs of two simplification theorems: snoc_zero_zero and insertNth_zero_zero.

  • Hunk 1 (snoc_zero_zero):

    • Change: Replaced equivFunOnFinite_symm_apply_toFun with coe_equivFunOnFinite_symm in the simp only list.
    • Logic: The theorem proves that appending a zero element to a zero Finsupp vector results in a zero Finsupp vector. The proof strategy is by extensionality (ext a). The simplifier unpacks the definition of snoc. The snoc operation on Finsupp (over a finite domain) typically utilizes the equivalence equivFunOnFinite : (Fin n →₀ M) ≃ (Fin n → M). The definition likely involves mapping to functions, performing Fin.snoc, and mapping back via symm. To prove equality to 0, simp needs to reduce the application of the inverse equivalence.
    • Context: The lemma rename from ..._apply_toFun to coe_... suggests an alignment with Lean 4 / Mathlib naming conventions where functional objects are accessed via the coercion ⇑ rather than explicit field accessors (.toFun) or specific application lemmas.
  • Hunk 2 (insertNth_zero_zero):

    • Change: Replaced equivFunOnFinite_symm_apply_toFun with coe_equivFunOnFinite_symm in the simp only list.
    • Logic: Similar to the previous theorem, this asserts that inserting a zero at the $n$-th position of a zero vector results in a zero vector. The logic mirrors snoc, relying on the interaction between Finsupp and the underlying function type.
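
The equivalence underlying both proofs is Finsupp.equivFunOnFinite; a minimal sketch of how its forward map evaluates:

```lean
import Mathlib

-- On a finite index type, a Finsupp is just a function; the forward map
-- of the equivalence is pointwise application, so this holds by rfl.
example {n : ℕ} (f : Fin n →₀ ℕ) (i : Fin n) :
    Finsupp.equivFunOnFinite f i = f i := rfl
```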

2. Check for Misformalization

  • Off-by-One Errors: Not applicable. The change only affects the proof script, not the indices or boundaries of Fin n.
  • Recursive Definitions: None involved.
  • Incorrect Assumptions: The theorems rely on [Zero M], which is standard and implicitly present via the usage of 0. No assumptions were modified.
  • Prop vs. Type: No issues.
  • Universe Levels: No issues.
  • Compilation: The prompt confirms the code compiles. This implies coe_equivFunOnFinite_symm is a valid lemma in the context that successfully discharges the goal when combined with the other lemmas in the simp set.

3. Verdict & Feedback

Verdict: Correct

The changes represent a maintenance update, likely reflecting a refactor in the underlying library (Mathlib) or local definitions where a lemma regarding the application of equivFunOnFinite.symm was renamed or standardized to use coercions (coe). The logic of the theorems snoc_zero_zero and insertNth_zero_zero remains correct, and the proof scripts have been updated to use the appropriate lemma to close the goals.

No misformalizations found.

📄 **Review for `CompPoly/Univariate/Basic.lean`**

This is a rigorous review of the changes in CompPoly/Univariate/Basic.lean.

1. Analysis of Implementation

  • BEq Instance: The implementation beq p q := p.val == q.val is correct. Since CPolynomial R is a subtype defined by the canonicalization predicate p.trim = p, logical equality of polynomials implies structural equality of their underlying canonical arrays. Thus, checking the underlying arrays is sufficient.
  • LawfulBEq Instance:
    • eq_of_beq: The proof correctly lifts the equality from the subtype's value (p.val = q.val) to the subtype itself (p = q) using Subtype.ext.
    • rfl: The logic is sound (it relies on the underlying LawfulBEq), but the proof script provided (convert h_raw _, swap, exact #[]) is unnecessarily convoluted and brittle, likely a relic of an interactive proof session. It does not affect correctness but is poor code quality.
  • DecidableEq Instance: The instance correctly leverages the underlying DecidableEq of the raw representation (implied Array R) and the decidability of the predicate p.trim = p.
  • Nontrivial Instance: The proof correctly identifies 0 (empty array, size 0) and 1 (canonical #[1], size 1) as distinct elements based on their underlying array sizes. This relies on Nontrivial R ensuring 1 ≠ 0 so that Raw.C 1 does not collapse to an empty array.
  • coeff_mul_X_succ Lemma: The removal of omega suggests the goal n < n + 2 is now handled by simp (likely via Finset.mem_range simplification or ambient arithmetic simplifiers). As the code compiles, this is a valid optimization.
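
The pattern these instances follow can be sketched generically on an arbitrary subtype (hypothetical instance name, not the project's code):

```lean
import Mathlib

-- Compare subtype elements through their underlying values; lawfulness of
-- the base BEq then lifts along Subtype.ext.
instance subtypeBEq {α : Type*} (p : α → Prop) [BEq α] : BEq {x // p x} :=
  ⟨fun a b => a.val == b.val⟩

example {α : Type*} (p : α → Prop) [BEq α] [LawfulBEq α]
    (a b : {x // p x}) (h : a == b) : a = b :=
  Subtype.ext (eq_of_beq h)
```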

2. Checklist for Misformalization

  • Off-by-One Errors: None found. The Nontrivial proof correctly distinguishes size 0 and size 1.
  • Recursive Definitions: N/A.
  • Incorrect Assumptions:
    • The LawfulBEq instance correctly requires [LawfulBEq R].
    • The Nontrivial instance correctly requires [Nontrivial R].
  • Type Hierarchy / Universe Levels: The universe levels are implicitly handled correctly (Type*).
  • Redundancy: In the DecidableEq instance, the constraint [BEq R] is redundant because [DecidableEq R] is also present. DecidableEq R is sufficient to derive decidable equality for the arrays, and BEq is not used explicitly in the definition provided (Subtype.instDecidableEq uses propositional equality decidability).

3. Verdict & Feedback

Verdict: The formalization is Correct.

There are no critical logical errors or misformalizations. However, there are minor code quality issues regarding style and redundancy.

Minor Issues:

  1. Redundant Typeclass Constraint:
    In the DecidableEq instance, [BEq R] is unnecessary because [DecidableEq R] is already required.

    -- Current
    instance (R : Type*) [BEq R] [DecidableEq R] [Semiring R] : DecidableEq (CPolynomial R) ...
    -- Suggested
    instance (R : Type*) [DecidableEq R] [Semiring R] : DecidableEq (CPolynomial R) ...
  2. Obscure Proof Script (LawfulBEq):
    The rfl proof in LawfulBEq is messy. It creates a local lemma h_raw, applies it to a hole, and then forcibly fills that hole with an empty array. It essentially proves a == a in a roundabout way that ends up just using simp.

    -- Current
    rfl := by
      have h_raw : ∀ (a : CPolynomial.Raw R), a == a ↔ a = a := by
        aesop
      convert h_raw _
      swap
      exact #[]
      simp +decide [ BEq.beq ]
    
    -- Suggested (Cleaner)
    rfl := by
      simp [BEq.beq]
📄 **Review for `CompPoly/Univariate/Quotient.lean`**

Here is the review of the changes in CompPoly/Univariate/Quotient.lean.

Verdict

Formalization Correct. The logic implemented in the new proof is sound and correctly formalizes the theorem statement. However, there are code quality issues regarding variable shadowing and tactic usage that should be addressed to maintain engineering standards.

Analysis of Implementation

The new proof strategy refactors the previous explicit index construction into a list manipulation argument:

  1. Decomposition: It decomposes the zipped index list of p into a prefix of length p.trim.size and a suffix (witness l).
  2. Prefix Equality: It proves the prefix is equal to p.trim.zipIdx.toList. This relies on the fact that p.trim and p share the same coefficients within the bounds of p.trim, and that the indices in zipIdx are absolute. This logic is sound.
  3. Suffix Property: It proves that the remaining elements in the suffix l have coefficients of 0. This relies on the definition of trim (elements beyond the trimmed size are zero) and the fact that l contains elements with indices ≥ p.trim.size.
  4. Boundaries: The use of take and drop with p.trim.size correctly handles the list splitting. Since p.trim is a contraction of p, p.trim.size ≤ p.size, preventing out-of-bounds issues during the split.
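
The splitting step in the decomposition relies on the standard take/drop identity; a minimal sketch:

```lean
-- Every list is its length-n prefix followed by the dropped suffix.
example {α : Type*} (l : List α) (n : ℕ) :
    l = l.take n ++ l.drop n :=
  (List.take_append_drop n l).symm
```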

Findings

While no logical misformalizations were found, the following code quality issues were identified:

1. Variable Shadowing (Minor Issue)
In the second main subgoal (proving ∀ x ∈ l, x.1 = 0), the variable name h_trailing is defined four separate times within the same tactic block.

  • Definitions 1 & 2: Relate to coefficients being zero.
  • Definition 3: Relates list membership to coefficient lookups.
  • Definition 4: Relates list membership to index bounds.
    While valid in Lean (the latest hypothesis shadows the previous ones), this makes the proof difficult to read, maintain, or debug.

2. Redundant Tactic Usage (Trivial Issue)
Inside the proof of h_trim, the code uses:

exact (by exact coeff_eq_coeff p i)

This is a redundant nested tactic block. It should simply be exact coeff_eq_coeff p i.

Suggestions

I recommend applying the following cleanup to resolve the shadowing and style issues while retaining the correct logic:

lemma zipIdx_trim_append {R : Type*} [Ring R] [BEq R] [LawfulBEq R]
    (p : CPolynomial.Raw R) :
    ∃ l, p.zipIdx.toList = p.trim.zipIdx.toList ++ l ∧ ∀ x ∈ l, x.1 = 0 := by
  have h_zipIdx : (p.zipIdx.toList.take p.trim.size) = (p.trim.zipIdx.toList) := by
    have h_coeff_eq : p.trim.coeff = p.coeff := by
      ext i; exact coeff_eq_coeff p i
    refine' List.ext_get _ _ <;> simp
    · exact size_le_size p
    · intro n _ _
      -- Use the coefficient equality to show the zipped pairs are identical
      have := congr_fun h_coeff_eq n
      aesop
  
  -- Define the witness as the dropped suffix
  refine' ⟨ List.drop (Array.size p.trim) (Array.zipIdx p |> Array.toList), _, _ ⟩ <;> simp_all
  · -- Prove the list concatenation equality
    rw [← h_zipIdx, List.take_append_drop]
  · -- Prove all elements in the suffix have 0 coefficient
    intro a b hab
    have h_zero_after_trim : ∀ i ≥ p.trim.size, p.coeff i = 0 := by
      intro i hi
      rw [Eq.symm (coeff_eq_coeff p i)]
      -- Assuming standard library behavior where out-of-bounds/trimmed coeffs are 0
      grind 
    have h_mem_coeff : ∀ x ∈ List.drop (Array.size p.trim) (Array.zipIdx p |> Array.toList),
        x.1 = p.coeff x.2 := by
      intro x hx
      have : ∀ y ∈ List.zipIdx p.toList, y.1 = p.coeff y.2 := by
        simp +decide [List.mem_iff_get]
        grind
      convert this x _
      exact List.mem_of_mem_drop hx
    have h_idx_ge : ∀ x ∈ List.drop (Array.size p.trim) (Array.zipIdx p |> Array.toList),
        x.2 ≥ p.trim.size := by
      intro x hx; have := List.mem_iff_get.mp hx; aesop
    aesop
📄 **Review for `CompPoly/Univariate/Raw.lean`**

This review focuses on the changes in CompPoly/Univariate/Raw.lean.

1. Analysis of Changes

The change occurs within the proof of the theorem coeff_eq_getElem_of_lt.
A single line exact Nat.succ_le_of_lt k.is_lt is removed from the proof of a local hypothesis h_size.

Diff Analysis:

  • Original Code:

    have h_size : k + 1 = (p.extract 0 (k + 1)).size := by
      simp [Array.size_extract]
      exact Nat.succ_le_of_lt k.is_lt

    The original proof used simp to presumably reduce the expression and then manually supplied the inequality k + 1 ≤ p.size (derived from k.is_lt) to close the goal.

  • New Code:

    have h_size : k + 1 = (p.extract 0 (k + 1)).size := by
      simp [Array.size_extract]

    The manual exact step is removed.

  • Correctness Logic:
    The hypothesis h_size asserts that the size of the extracted subarray p.extract 0 (k + 1) is exactly k + 1.
    For an array p, (p.extract 0 (k + 1)).size evaluates to min (k + 1) p.size.
    The equality k + 1 = min (k + 1) p.size holds if and only if k + 1 ≤ p.size.
    The term k.is_lt (likely referring to k : Fin p.size) provides the proof that k < p.size, which is definitionally equivalent to k + 1 ≤ p.size.
    Since the code compiles, the simp [Array.size_extract] tactic is now sufficient to locate the necessary bound in the context or simplify the min expression automatically.
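
The size computation can be isolated as a standalone fact; a hedged sketch of the reasoning, stated over a generic array:

```lean
-- extract clamps its stop index at the array size, so with k < p.size the
-- extracted prefix has exactly k + 1 elements.
example {α : Type*} (p : Array α) (k : ℕ) (h : k < p.size) :
    (p.extract 0 (k + 1)).size = k + 1 := by
  simp only [Array.size_extract, Nat.sub_zero]
  omega
```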

2. Misformalization Checklist

  • Off-by-One Errors: The logic relies on k < p.size (from k.is_lt) implying k + 1 ≤ p.size. This is standard natural number arithmetic and is correct.
  • Recursive Definitions: N/A.
  • Incorrect Assumptions/Side-Conditions: The type of h_size is explicitly ascribed (k + 1 = ...), ensuring the semantic meaning of the hypothesis remains unchanged. The removal of the proof line does not weaken the statement being proven, only the automation used to prove it.
  • Prop vs. Type: No issues.
  • Universe Levels: No issues.

3. Verdict & Feedback

The formalization is Correct.

The change is a valid cleanup of the proof script. The removal of exact Nat.succ_le_of_lt k.is_lt indicates that the simplifier is able to discharge the goal automatically, likely by finding the bound in the local context. The integrity of the theorem coeff_eq_getElem_of_lt is preserved because the type of the intermediate hypothesis h_size has not changed.

Feedback:

  • The change correctly simplifies the proof by relying on simp to handle the arithmetic bound check.
  • The logic remains sound: (p.extract 0 (k+1)).size is indeed k+1 provided k < p.size, which is guaranteed by k being a Fin p.size.
📄 **Review for `lakefile.lean`**

Review of lakefile.lean

1. Analysis of Implementation

The changes in this file constitute a dependency update for the project CompPoly.

  • Hunk 1:
    • Updates the mathlib dependency from v4.26.0 to v4.28.0.
    • Updates the ExtTreeMapLemmas dependency from v4.26.0 to v4.28.0.

This appears to be a synchronized upgrade of the toolchain/library versions. Assuming ExtTreeMapLemmas follows a versioning scheme tied to Mathlib/Lean versions (which is common practice in the Lean ecosystem to ensure ABI compatibility), updating both to the same version tag (v4.28.0) is the correct procedure to maintain compatibility.
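
The Lake DSL pattern being bumped looks roughly like the following (repository URL and tag shown for illustration; the actual lakefile may differ):

```lean
-- lakefile.lean dependency pins, updated in lockstep to the same tag.
require mathlib from git
  "https://github.com/leanprover-community/mathlib4" @ "v4.28.0"
```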

2. Check for Misformalization

As this is a build configuration file using the Lake DSL, the standard logical verification checklist applies differently:

  • Syntax and Semantics: The syntax adheres to valid Lake configuration standards.
  • Version Consistency: The update maintains version parity between the core library (mathlib) and the auxiliary library (ExtTreeMapLemmas). This minimizes the risk of diamond dependency conflicts or ABI mismatches, provided ExtTreeMapLemmas v4.28.0 is indeed built against mathlib v4.28.0.
  • Compilation: As the instructions state the code compiles, the git tags are presumed to exist and be reachable.

There are no logical definitions, theorems, or proofs in this file, so there are no risks of logical off-by-one errors, incorrect universe levels, or improper use of Prop/Type.

3. Verdict & Feedback

Verdict: Correct

The changes correctly update the project dependencies to a newer version. No formalization issues are present.

@dhsorens
Collaborator Author

the ToPoly section is basically there, with a ring equivalence included. proofs of correctness for all the operations implemented incoming

@dhsorens dhsorens marked this pull request as draft February 23, 2026 14:10
@katyhr

katyhr commented Feb 23, 2026

@katyhr, would love your feedback and comments on this!

Main questions:

  1. are we happy with CPolynomial (CPolynomial R) as the formalization of the Bivariate type?
  2. I chose the namespace to be CompPoly.CBivariate to mimic the pattern of CompPoly.CPolynomial and CompPoly.CMvPolynomial - but I wonder if we should do CompPoly.CPolynomial.Bivariate to match Arklib, and since it's a special case of CPolynomial?
  3. the type signature of the toPoly isomorphism is currently into R[X][Y] - would you say this is right?

@dhsorens sorry for the slow reply! I think yes to 1 and 3, and for 2 I also prefer the CompPoly.CBivariate.

Btw are there any naming conventions/rules for lemmas, namespaces, etc in CompPoly? Maybe this would be a good thing to have if it's not there yet :)

@dhsorens
Collaborator Author

dhsorens commented Feb 24, 2026

@katyhr amazing, thank you for looking at this! We don't yet have explicit conventions/rules for lemmas, namespaces, etc. in CompPoly, but I think it is a good idea to introduce them. Julian had some good feedback on the Univariate interface, which now needs to be improved accordingly, and establishing conventions alongside that is something I would like to do.

what do you think about this as a starting point for bivariate? are we off to a reasonably good start?

@dhsorens
Collaborator Author

/review

External:

Internal: CompPoly/Bivariate/*

Comments: Review this pull request, focusing on the Bivariate polynomials; review for correctness, style, and make any suggested improvements to the code. Review for the additions to Univariate/ for any redundancy or misplacement of the added lemmas

@github-actions

🤖 AI Review (with external context)


Overall Summary:
Executive Summary:

The PR correctly implements Bivariate polynomials and establishes the necessary algebraic isomorphisms to Mathlib, with sound logic and definitions throughout. However, this PR cannot be merged in its current state due to critical incompleteness in CompPoly/Bivariate/ToPoly.lean, where several proofs are currently marked with sorry. I recommend resolving these missing proofs and relaxing the unnecessary [Nontrivial R] typeclass constraint in the Univariate module before final approval.


📄 **Review for `CompPoly.lean`**

This is a review of the changes in CompPoly.lean.

1. Analysis of Implementation

The diff adds two import statements to the top-level file of the library:

  • import CompPoly.Bivariate.Basic
  • import CompPoly.Bivariate.ToPoly

Observation:

  • The file CompPoly.lean appears to serve as a library facade or entry point, aggregating sub-modules (Univariate, Fields, Mathlib extensions).
  • The addition of Bivariate modules aligns with expanding the library's scope from univariate to bivariate polynomials.
  • Since the code compiles, these modules exist and are accessible.

2. Check for Misformalization

  • Off-by-One Errors: N/A (Imports only).
  • Recursive Definitions: N/A.
  • Incorrect Assumptions/Side-Conditions: N/A.
  • Prop vs. Type: N/A.
  • Universe Levels: N/A.

3. Verdict & Feedback

Verdict: Correct.

Feedback:
The changes are purely administrative, exposing new Bivariate polynomial functionality in the main library entry point. There are no formalization risks associated with these import additions.

📄 **Review for `CompPoly/Bivariate/Basic.lean`**

This review focuses on the file CompPoly/Bivariate/Basic.lean which defines a computable bivariate polynomial type CBivariate and associated operations.

Analysis of Logic and Correctness

  1. Definition of CBivariate:
    The definition def CBivariate ... := CPolynomial (CPolynomial R) correctly captures the structure of bivariate polynomials $R[X][Y]$ (polynomials in $Y$ with coefficients in $R[X]$), consistent with the provided description and standard formalizations like Mathlib.

  2. Instances:
    The standard instances (Inhabited, AddCommMonoid, Semiring, Coe) are correctly inferred from the underlying CPolynomial type.

  3. Operations Implementation:

    • Variables (X, Y, CC):
      • X is defined as the inner variable ($X \cdot Y^0$), which is correct.
      • Y is defined as the outer variable ($1 \cdot Y^1$), which is correct.
      • CC lifts a scalar to a constant polynomial in both variables ($r \cdot X^0 \cdot Y^0$), which is correct.
    • Coefficients and Degree (coeff, natDegreeY, degreeX, totalDegree):
      • coeff correctly extracts the coefficient of $X^i$ from the coefficient of $Y^j$.
      • degreeX correctly computes the maximum X-degree across all Y-coefficients.
      • totalDegree correctly computes the maximum of deg_X + deg_Y over the support.
    • Evaluation (evalX, evalY, evalEval):
      • evalX (evaluating inner variable) correctly constructs a new polynomial in $Y$ by evaluating each coefficient polynomial $P_j(X)$ at $a$.
      • evalY (evaluating outer variable) correctly substitutes the constant polynomial C a for $Y$ in the outer structure, resulting in a polynomial in $X$.
      • evalEval correctly chains evaluation to compute $P(x,y)$.
    • Swap:
      • The implementation iterates over the support, constructing terms where the exponent of the outer variable ($Y$) becomes the index $i$ (original X-degree) and the exponent of the inner variable ($X$) becomes the index $j$ (original Y-degree). This correctly maps $c_{ij} X^i Y^j \mapsto c_{ij} Y^i X^j = c_{ij} X_{new}^j Y_{new}^i$.
  4. Consistency:

    • The file assumes CPolynomial behaves like a standard polynomial type (likely a structure). The usage of f.val in definitions like natDegreeY and evalY suggests CPolynomial is a structure with a val field (perhaps exposing the raw coefficients list/array). This is consistent within the file, assuming CPolynomial exposes these members.
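Putting the pieces above together, a minimal sketch of the type and the basic constructors might look like this. The constraint lists and helper names (`evalX`, `eval`, `CPolynomial.C`, `CPolynomial.X`) are assumptions inferred from the review, not the PR's verbatim code:

```lean
-- Bivariate polynomials as nested computable univariate polynomials: R[X][Y].
def CBivariate (R : Type*) [Semiring R] [BEq R] [LawfulBEq R] [Nontrivial R] :=
  CPolynomial (CPolynomial R)

namespace CBivariate

variable {R : Type*} [Semiring R] [BEq R] [LawfulBEq R] [Nontrivial R]

/-- Inner variable: `X · Y^0`. -/
def X : CBivariate R := CPolynomial.C CPolynomial.X

/-- Outer variable: `1 · Y^1`. -/
def Y : CBivariate R := CPolynomial.X

/-- Constant embedding: `r · X^0 · Y^0`. -/
def CC (r : R) : CBivariate R := CPolynomial.C (CPolynomial.C r)

/-- `P(x, y)`: evaluate the inner variable at `x`, then the outer one at `y`. -/
def evalEval (x y : R) (f : CBivariate R) : R := (f.evalX x).eval y

end CBivariate
```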

Potential Specification/Style Ambiguities

  1. Type Class Constraints:
    The definition CBivariate includes [Nontrivial R]. While not logically incorrect, requiring Nontrivial for the type definition (rather than just for theorems or specific operations) is stricter than necessary (e.g., it precludes defining polynomials over the zero ring, however useless that may be). Mathlib's Polynomial does not require this. This is a design choice and not a misformalization.
  2. DecidableEq vs BEq:
    The file defines the type using [BEq R] [LawfulBEq R] but explicitly requests [DecidableEq R] for operations like monomialXY and swap. Since LawfulBEq implies the existence of a decidable equality, this is redundant but safe. However, if the underlying CPolynomial library is purely "computable" (using Bool), mixing in DecidableEq (Prop-level decidability) might require users to have specific instances in scope. This is valid code but potentially awkward API design.

Verdict

The formalization is Correct. The logic matches the docstrings and standard algebraic definitions for bivariate polynomials implemented as nested univariate polynomials.

Verdict: Correct

📄 **Review for `CompPoly/Bivariate/ToPoly.lean`**

To: Derek Sorensen (Author)
From: Senior Formal Verification Engineer
Subject: Review of CompPoly/Bivariate/ToPoly.lean

I have reviewed the new file CompPoly/Bivariate/ToPoly.lean which establishes the ring equivalence between CBivariate R and Mathlib's R[X][Y].

Verdict: Correct (with Incomplete Proofs)

The formalization correctly implements the isomorphism between the array-based bivariate polynomials (CBivariate) and Mathlib's iterated polynomial type Polynomial (Polynomial R). The logical mapping of inner/outer variables and coefficients aligns with standard mathematical definitions and Mathlib conventions.

However, there are several proofs left as sorry. While not a misformalization of the definitions, these gaps mean the equivalence is not yet fully verified.

Analysis of Changes

  1. toPoly and ofPoly Definitions:

    • The toPoly function correctly constructs a Mathlib polynomial by summing monomials over the support. It correctly treats CBivariate R as having an outer variable (mapped to the outer Polynomial) and inner coefficients (mapped recursively).
    • The ofPoly function correctly reverses this by extracting coefficients and converting them to the implementation type. The use of trim_toImpl (assuming it proves canonicity) is appropriate for constructing the raw types.
  2. Variable Mapping:

    • Y_toPoly maps the outer variable CBivariate.Y to Polynomial.X (the outer variable of R[X][Y]). This is correct.
    • X_toPoly maps the inner variable CBivariate.X to Polynomial.C Polynomial.X (the constant embedding of the inner variable). This is correct.
  3. Degree and Coefficient Lemmas:

    • natDegreeY_toPoly and degreeX_toPoly correctly relate the dimensions of the two representations.
    • coeff_toPoly and coeff_toPoly_outer correctly state that the isomorphism preserves coefficients.
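Concretely, the conversion and the variable-mapping lemmas described above have roughly the following shape. This is a sketch with assumed signatures for `support`, `coeff`, and `CPolynomial.toPoly`; proofs are elided, since only the statements are at issue here:

```lean
-- Sum of monomials over the outer support, converting each Y-coefficient.
noncomputable def toPoly (f : CBivariate R) : Polynomial (Polynomial R) :=
  ∑ j ∈ f.support, Polynomial.monomial j (CPolynomial.toPoly (f.val.coeff j))

/-- Outer variable maps to the outer Mathlib variable. -/
theorem Y_toPoly : (CBivariate.Y : CBivariate R).toPoly = Polynomial.X := sorry

/-- Inner variable maps to the constant embedding of the inner Mathlib variable. -/
theorem X_toPoly :
    (CBivariate.X : CBivariate R).toPoly = Polynomial.C Polynomial.X := sorry
```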

Findings

1. Incomplete Proofs (sorry)

  • Severity: Major (for verification completeness), Minor (for logic correctness).
  • Location: evalX_toPoly_coeff, evalX_toPoly_eval, swap_toPoly_coeff.
  • Analysis: The statements guarded by sorry appear to be true.
    • swap_toPoly_coeff: States that swapping variables in the implementation commutes with swapping indices in the polynomial coefficient access. This is mathematically valid.
    • evalX_toPoly_eval: Connects partial evaluation in the implementation to the standard evaluation. Valid.
  • Recommendation: These proofs need to be completed before the PR is merged to ensure the ringEquiv and associated API are fully trusted.

2. Minor Duplication

  • Location: toPoly_coeff and coeff_toPoly_outer.
  • Analysis: These two theorems state nearly the exact same property: that the coefficient of the converted polynomial matches the converted coefficient of the original.
    • toPoly_coeff: (toPoly p).coeff n = (CPolynomial.coeff p n).toPoly
    • coeff_toPoly_outer: (toPoly f).coeff j = CPolynomial.toPoly (f.val.coeff j)
  • Recommendation: Consider removing one or making one an alias of the other to maintain a clean API.
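One lightweight way to carry out this recommendation, assuming the two statements are definitionally interchangeable and Mathlib's `alias` command is available:

```lean
-- Keep `toPoly_coeff` as the primary lemma; expose the other name as an alias.
alias coeff_toPoly_outer := toPoly_coeff
```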

3. Type Class Constraints

  • Location: toPoly vs ofPoly.
  • Analysis: toPoly requires [BEq R] [LawfulBEq R] while ofPoly requires [DecidableEq R]. While LawfulBEq allows deriving DecidableEq, the mix of constraints is slightly inconsistent.
  • Recommendation: This is safe as is, but standardizing on one set of constraints (or deriving DecidableEq from LawfulBEq where needed) might be cleaner.

Conclusion

The code is a correct formalization of the intended isomorphism. The logic for the conversion functions and the statements of the equivalence theorems are sound. The only action items are to finish the proofs marked with sorry and tidy up the duplicated lemma.

Formalization Check:

  • Off-by-One Errors: None found. monomial and support logic handles indices correctly.
  • Recursive Definitions: toPoly uses sums over support; no complex recursion. Correct.
  • Assumptions: Nontrivial R and Ring R are appropriate.
  • Prop vs Type: Correct usage.

No critical issues found.

📄 **Review for `CompPoly/Univariate/Basic.lean`**

1. Implementation Analysis

  • Instance Definitions (BEq, LawfulBEq, DecidableEq):
    • The BEq instance correctly delegates to the underlying array comparison.
    • The LawfulBEq instance properly relates boolean equality to propositional equality using Subtype.ext.
    • The DecidableEq instance uses Subtype.instDecidableEq. This is valid because CPolynomial.Raw (an array) has decidable equality, and the trimming predicate p.trim = p is decidable.
  • Nontrivial Instance:
    • The instance constructs a pair (0, 1) and proves they are distinct by examining the size of their underlying arrays.
    • Logic: 0 has size 0 (empty array). 1 (in a nontrivial ring) corresponds to #[1], which has size 1. The proof Nat.zero_ne_one correctly derives the contradiction.
  • monomial_add:
    • Proves linearity of monomial construction with respect to coefficients.
    • The proof strategy uses functional extensionality on coefficients (eq_iff_coeff), splitting on indices. This is the standard correct approach.
  • eval_eq_sum_support & eval₂_eq_sum_support:
    • These theorems relate the computational evaluation (Horner's method or array traversal) to the declarative sum over the polynomial's support.
    • The proofs bridge the gap by converting the array evaluation to a sum over Finset.range, and then refining that range to the support using Finset.sum_subset.
    • The handling of zero coefficients (which are excluded from support) is mathematically sound: terms where coeff i = 0 contribute 0 to the sum.
  • natDegree_eq_support_sup:
    • Relates natDegree (computational, usually size - 1 or similar) to the maximum element of the support (declarative).
    • The proof handles the zero polynomial case separately and correctly uses the supremum properties of finite sets. It correctly identifies the last non-zero index k as the degree.
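The two bridge theorems discussed above have, schematically, the following statements (names taken from the review; exact argument lists in the PR may differ, and proofs are elided):

```lean
-- Computational evaluation agrees with the declarative sum over the support.
theorem eval_eq_sum_support (p : CPolynomial R) (x : R) :
    p.eval x = ∑ i ∈ p.support, p.coeff i * x ^ i := sorry

-- The computational degree is the supremum of the support
-- (the zero polynomial has empty support, and the empty sup is 0).
theorem natDegree_eq_support_sup (p : CPolynomial R) :
    p.natDegree = p.support.sup id := sorry
```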

2. Misformalization Checklist & Assessment

  • Off-by-One Errors: None found. The natDegree logic correctly aligns array indices (0-based) with polynomial degree, and the Nontrivial proof correctly distinguishes size 0 from size 1.
  • Recursive Definitions: No new recursive definitions are introduced; the code relies on existing ones (eval, trim) which are assumed correct.
  • Incorrect Assumptions:
    • The instances generally require [Semiring R], and specific ones require [BEq R], [DecidableEq R], or [Nontrivial R]. These constraints are appropriate and sufficient.
    • monomial_add requires [DecidableEq R], which is necessary for monomial to correctly handle the 0 case (trimming).
  • Prop vs. Type: Correct usage. support returns Finset, Nontrivial is a Prop.
  • Universe Levels: The use of Type* is correct and standard.

3. Verdict & Feedback

The formalization in this file is correct.

The changes consistently implement standard polynomial operations and typeclasses for a computational polynomial type (CompPoly). The proofs link the computational representations (arrays, trim, eval) with their abstract mathematical counterparts (support, sum, natDegree) correctly.

Verdict: Correct.

📄 **Review for `CompPoly/Univariate/ToPoly.lean`**

This is a review of the changes in CompPoly/Univariate/ToPoly.lean.

General Verdict:
The logic of the added theorem toPoly_eq_zero_iff is sound, relying correctly on the retraction property (toImpl_toPoly_of_canonical) to establish injectivity at zero. However, there is a minor issue regarding the strength of the assumptions.

Findings:

  1. Too Strong Assumption ([Nontrivial R])
    • Severity: Minor
    • Location: theorem toPoly_eq_zero_iff
    • Description: The theorem assumes [Nontrivial R]. This appears unnecessary.
      • If R is a trivial ring (a subsingleton), then CPolynomial R and Polynomial R are also subsingletons. In this case, p = 0 and p.toPoly = 0 are always true, so the biconditional holds trivially.
      • The proof strategy uses toImpl_toPoly_of_canonical, which does not require Nontrivial R (based on the context provided for Raw.toImpl_toPoly).
      • Unless aesop relies on a specific lemma that strictly requires Nontrivial, this constraint restricts the library's generality unnecessarily.
    • Recommendation: Remove [Nontrivial R].

Refined Code Snippet:

```lean
/-- `toPoly` maps a canonical polynomial to `0` iff the polynomial is `0`. -/
theorem toPoly_eq_zero_iff [LawfulBEq R] (p : CPolynomial R) :
    p.toPoly = 0 ↔ p = 0 := by
  constructor
  · intro hp
    have hround : p.toPoly.toImpl = p := toImpl_toPoly_of_canonical p
    rw [eq_comm] at hround
    -- Substitutes p.toPoly = 0 -> p = 0.toImpl.
    -- Assuming 0.toImpl = 0 (canonical zero), which aesop should handle.
    aesop
  · aesop
```

Summary:
The formalization is correct, but removing the Nontrivial typeclass constraint would improve the quality and generality of the theorem.

@Verified-zkEVM Verified-zkEVM deleted a comment from github-actions bot Feb 24, 2026
@dhsorens
Collaborator Author

OK, implementations seem to be correct. There are three outstanding sorrys, but I think they may take some effort, so I am inclined to leave them for now as TODOs, along with the naming conventions/rules for lemmas, namespaces, etc. that Katy mentioned. Stylistically, this is meant to match Univariate, but as Julian pointed out there is more work to do on that interface.

For now, though, I think this is a reasonable start and I am inclined to merge as-is @alexanderlhicks @katyhr - are you happy with that?

@dhsorens dhsorens marked this pull request as ready for review February 24, 2026 11:27

```lean
/-- Extensionality: two bivariate polynomials are equal iff their underlying values are. -/
@[ext] theorem ext {p q : CBivariate R} (h : p.val = q.val) : p = q :=
  CPolynomial.ext h
```

The docstring says this is an "if and only if" statement but we only seem to have the p.val = q.val => p=q direction.

Also, are values the coefficients?

Collaborator Author

the @[ext] attribute automatically derives ext_iff from this theorem, which gives the iff statement:
p = q ↔ ↑p = ↑q

bivariate polynomials are defined in terms of CPolynomial which is a subtype, so p.val refers to the underlying (possibly untrimmed) coefficient array corresponding to p - p.property gives the proof that it is canonical (no trailing zeros)
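As a sketch of the point about `@[ext]`: Mathlib's attribute generates the iff form automatically, so roughly the following holds (the derived lemma's exact name and coercion spelling are assumptions):

```lean
-- With `@[ext] theorem ext ...` in scope, the derived `ext_iff` gives:
example (p q : CBivariate R) : p = q ↔ p.val = q.val := CBivariate.ext_iff
```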


```lean
/-- The `X`-degree: maximum over all Y-coefficients of their degree in X.
ArkLib: `Polynomial.Bivariate.degreeX`. -/
def degreeX (f : CBivariate R) : ℕ :=
```

Don't we want this to be natDegreeX too?

PS will PR a change in ArkLib

Collaborator Author

Yep! Passed over this while importing the corresponding Arklib statements (changing docstring to reflect change in Arklib)


```lean
/-- Leading coefficient when viewed as a polynomial in Y.
ArkLib: `Polynomial.Bivariate.leadingCoeffY`. -/
def leadingCoeffY (f : CBivariate R) : CPolynomial R :=
```

Do we care for a leadingCoeffX?


Oh we have it! Maybe move it up here?

Collaborator Author

Good call - leadingCoeffX relies on the definition of swap - so I've opted to move leadingCoeffY down to be next to leadingCoeffX

@katyhr

katyhr commented Feb 24, 2026

OK, implementations seem to be correct. There are three outstanding sorrys, but I think they may take some effort, so I am inclined to leave them for now as TODOs, along with the naming conventions/rules for lemmas, namespaces, etc. that Katy mentioned. Stylistically, this is meant to match Univariate, but as Julian pointed out there is more work to do on that interface.

For now, though, I think this is a reasonable start and I am inclined to merge as-is @alexanderlhicks @katyhr - are you happy with that?

@dhsorens some random thoughts!

  • Maybe the stylistic changes can be done once the Univariate is polished, i.e. use that as a guideline from then onwards? Alternatively, it might be nice to come up with some sort of style/naming guide (or just "steal" Mathlib's one)

  • I think it might be nice to have a section with toPoly properties (preserves zero, add and mul, etc). Also do you have that toPoly 1 = 1? Same question for ofPoly. I couldn't find them, sorry if I'm being blind! Then do a section for ofPoly with the operations in the same order. Might be nice to have the same layout of both homomorphisms. And then the section that they're inverses of each other. Also, do you have that they're homomorphisms as a unifying result or do you just put it inside the ring equiv? Happy to chat and help reorg :)

  • I have some idea how to approach the sorries on a high level, if you're not working on them I can think about it further :)
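For reference, the `toPoly`-preservation section suggested in the second bullet could contain statements along these lines (hypothetical names and shapes; proofs elided):

```lean
theorem toPoly_zero : (0 : CBivariate R).toPoly = 0 := sorry
theorem toPoly_one : (1 : CBivariate R).toPoly = 1 := sorry
theorem toPoly_add (p q : CBivariate R) :
    (p + q).toPoly = p.toPoly + q.toPoly := sorry
theorem toPoly_mul (p q : CBivariate R) :
    (p * q).toPoly = p.toPoly * q.toPoly := sorry
```

A matching `ofPoly` section in the same order, followed by the inverse lemmas, would then feed directly into the ring equivalence.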

@dhsorens
Collaborator Author

dhsorens commented Feb 25, 2026

@katyhr thank you for these comments! I have made fixes. I think a more detailed style guide is definitely in order. Since all the polynomials in this section follow the convention of
definition $\rightarrow$ operations/API $\rightarrow$ equivalence with Mathlib $\rightarrow$ proofs of correctness of the operations/API
I think it makes sense to have e.g. standardized naming for mathlib functions that we "steal" or make computable here. That will make this repo easier to read, use, and maintain!

I have not thought too much about the outstanding sorrys yet - would love a pull request for those! :)

