
feat: classes for inference systems and logical equivalence#398

Draft
fmontesi wants to merge 8 commits into main from logic-classes

Conversation


@fmontesi fmontesi commented Mar 4, 2026

This PR begins work on formalising general concepts about logic in CSLib, in order to streamline the development of logics. A major challenge is that judgements, proof systems, and semantics can differ greatly across logics, so we cannot assume any particular shape for them.
To address this, this PR introduces:

  • A class for inference systems, which gives access to the notation for derivations and the associated notion of Derivability.
  • A class of heterogeneous contexts (HasHContext), which allows for defining contexts that, when filled with a value of type T, yield something of a different type than T. This is useful for formalising judgemental contexts: a context with a hole that can be filled with a proposition, resulting in a judgement.
  • A class for LogicalEquivalences. We formalise a logical equivalence as a congruence on propositions that preserves validity under any judgemental context.
  • Instances of these concepts for HML and CLL, which serve as examples of very different structures.

Depends on #393 for the new context fill notation.
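The shape of these classes is still in flux, but a minimal Lean sketch may help fix intuitions. All names below (InferenceSystem, Rule, Derivable, HasHContext, fill) are illustrative placeholders, not the actual CSLib API:

```lean
universe u v w

/-- Hypothetical sketch of an inference system over an arbitrary type of
judgements: the class only provides the rules, assuming nothing about the
shape of judgements themselves. -/
class InferenceSystem (J : Type u) where
  /-- A rule relates a finite list of premises to a conclusion. -/
  Rule : List J → J → Prop

/-- Derivability as the inductive closure of the rules. -/
inductive Derivable {J : Type u} [InferenceSystem J] : J → Prop
  | step {premises : List J} {concl : J} :
      InferenceSystem.Rule premises concl →
      (∀ p ∈ premises, Derivable p) →
      Derivable concl

/-- A heterogeneous context: filling a hole of type `T` yields a `U`, which
may differ from `T` (e.g. filling a judgemental context with a proposition
yields a judgement). -/
class HasHContext (Ctx : Type u) (T : Type v) (U : outParam (Type w)) where
  fill : Ctx → T → U
```

A logical equivalence in this setting is then a relation on propositions that preserves validity under every judgemental context, as described above.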

@fmontesi fmontesi added the logic label Mar 4, 2026

@chenson2018 chenson2018 left a comment


Just some nitpicks; I haven't properly thought about the big picture here yet.


/-- Logical equivalence for HML propositions. -/
@[scoped grind =]
def Proposition.Equiv {State : Type u} {Label : Type v} (a b : Proposition Label) : Prop :=
Collaborator


I get the impression that part of fixing the performance issues in LTS can start with being stricter about Prop-valued defs used with grind. I don't quite understand all the new changes around defeq and reducibility, but it seems that, in general, things like this should be a structure or irreducible.

Collaborator Author


Do you have a link to a discussion or doc somewhere perhaps?
This sounds important, but I need to understand more myself.

Collaborator

@chenson2018 chenson2018 Mar 5, 2026


The recently updated discussion of transparency in the Lean docs is at Lean.Meta.TransparencyMode. The problem it describes is relevant to us: when we have performance issues with grind, mostly within LTS and automata, it usually comes down to expensive equality checks. I think some of this is caused by grind annotations we've written that end up doing a lot of unfolding, and it seems like a stricter API around these sorts of definitions could help a bit. Concretely, I mean at minimum preferring structures, but also trying out tagging definitions like this as irreducible, which forces you to define an API.

This Zulip thread is where discussion of effects of the changes in transparency have been happening. I've only passingly followed as it gets deep into Mathlib and things are still developing (this is why we have more release candidates than average this cycle).
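To make the two suggestions concrete, here is a toy sketch (illustrative names, not CSLib code) of both options for a Prop-valued definition:

```lean
-- Option 1: a structure. Tactics such as grind stop at the constructor
-- instead of unfolding into a large proposition during equality checks.
structure MyEquiv (a b : Prop) : Prop where
  mp  : a → b
  mpr : b → a

-- Option 2: a plain def whose API lemma is proved first; the def is then
-- marked irreducible, so later proofs must go through the API.
def myEquiv (a b : Prop) : Prop :=
  (a → b) ∧ (b → a)

theorem myEquiv_iff (a b : Prop) : myEquiv a b ↔ ((a → b) ∧ (b → a)) :=
  Iff.rfl

attribute [irreducible] myEquiv
```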

Comment on lines +64 to +66
induction ctx
<;> simp only [Proposition.Context.fill, Proposition.denotation]
<;> grind
Collaborator


It's okay to write

Suggested change
induction ctx
<;> simp only [Proposition.Context.fill, Proposition.denotation]
<;> grind
induction ctx <;> simp [Proposition.Context.fill, Proposition.denotation] <;> grind

because grind is considered a tactic permissible to follow the flexible simp. (Though it's not clear to me what is missing that grind won't close this on its own)

Collaborator Author


In general, wouldn't this make the proof potentially slower and less robust?

Collaborator


The idea here is that it is more robust, in a sense, not to end a proof with a long, fixed simp only call. It doesn't matter here, but imagine cases where that call includes a dozen or more theorems.

The relevant section in the style guide is Squeezing simp calls. Occasionally there can be performance considerations, but this is the exception rather than the rule.

(Coincidentally I am working on a linter for Mathlib that checks for exactly this, including measuring the performance)
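As a toy illustration of the trade-off (not code from this PR): both proofs below close, but the simp only variant breaks if Nat.add_zero is ever renamed or restated, while the flexible simp adapts as the simp set evolves.

```lean
-- Brittle: a fixed lemma list at the end of the proof.
example (n : Nat) : n + 0 + 0 = n := by
  simp only [Nat.add_zero]

-- Preferred when simp closes the goal on its own.
example (n : Nat) : n + 0 + 0 = n := by
  simp
```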

