0.5.0

@neiljdo neiljdo released this 15 May 14:10

Added:

  • A new experimental lambeq.experimental.discocirc module that contains an efficient lambeq.experimental.discocirc.DisCoCircReader and all the required functionality for converting long texts and entire multi-page documents into quantum circuits, based on the DisCoCirc framework.
  • A new tree representation of a pregroup diagram, termed pregroup tree, is implemented through the lambeq.text2diagram.pregroup_tree.PregroupTreeNode class. This lays the groundwork for drastically improving the parsing and internal processing of diagrams.
  • A new experimental end-to-end parser class, lambeq.text2diagram.OncillaParser, that simplifies the process of generating diagrams from text, minimising or even eliminating the user's exposure to CCG representations and functionality. This parser utilises the pregroup tree representation of diagrams. It does not replace BobcatParser as the default parser.
  • A new lambeq.backend.grammar.Frame data structure that allows the recursive grouping of lambeq boxes and diagrams and can be seen as a quantum supermap acting on the enclosed arguments. Frames are used in DisCoCirc diagrams.
  • A new lambeq.training.PytorchQuantumModel class that allows PyTorch autograd to be used on quantum circuits; previously, this was possible only for tensor networks (credit: Kin Ian Lo).
  • A new native lambeq.backend.symbol.Symbol class that removes the dependency on SymPy and improves efficiency.
  • A new rewrite rule class, lambeq.rewrite.CollapseDomainRewriteRule, that converts boxes into domain-less boxes by uncurrying (credit: Kin Ian Lo).
  • New lambeq.backend.Diagram.remove_snakes and lambeq.backend.Diagram.rigid_normal_form methods that make these specific rewrites available outside of the original lambeq.backend.Diagram.normal_form method (credit: Kin Ian Lo).
  • Caching options for fast access to already-computed tensor contraction paths in tensor network models, specifically PytorchModel and PytorchQuantumModel. The constructor of these models now takes a tn_path_optimizer argument, which can be either a TnPathOptimizer object (replicating the old, un-cached behaviour) or a CachedTnPathOptimizer, which caches computed tensor contraction paths for quick lookup.
  • Support for evaluating mixed-scalar PennyLane circuits, i.e. circuits where all qubits are either discarded or post-selected.
  • Two new ansätze from the Sim et al. paper (arXiv:1905.10876), Sim9Ansatz and Sim9CxAnsatz.
  • Support for ancilla qubits in lambeq's ansätze.
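
To illustrate why the contraction-path caching above pays off, here is a minimal, self-contained sketch in plain Python. It is not lambeq's actual TnPathOptimizer/CachedTnPathOptimizer API; it only demonstrates the underlying idea, which is that a contraction path depends solely on the tensor shapes involved, so the (expensive) path search can be memoised and reused across diagrams with the same structure.

```python
# Conceptual sketch only -- not lambeq's TnPathOptimizer/CachedTnPathOptimizer
# API. A contraction path depends only on the tensor shapes involved,
# so the result of the path search can be memoised for quick lookup.
from functools import lru_cache
from math import prod

@lru_cache(maxsize=None)
def contraction_path(shapes: tuple) -> tuple:
    """Stand-in for an expensive path search: contract the smallest
    tensors first (a toy heuristic, chosen for illustration only)."""
    return tuple(sorted(range(len(shapes)), key=lambda i: prod(shapes[i])))

# First call computes the path; identical shapes later hit the cache.
path = contraction_path(((2, 2), (4, 4), (2,)))   # -> (2, 0, 1)
contraction_path(((2, 2), (4, 4), (2,)))          # cached lookup
```

In a training loop, where the same diagram structures are contracted at every step, the cached lookup replaces a repeated search, which is where the speed-up comes from.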

Changed:

  • Significantly improved the efficiency of the PennyLaneModel.
  • Refactored all models so that they do not depend on tket as an intermediate step for their conversions.
  • CircuitAnsatz now acts as a dagger functor (credit: Kin Ian Lo).
  • Refactored QuantumModel to be less numpy-specific and easier to extend with other backends.
  • Made the split tensor ansätze, i.e. SpiderAnsatz and MPSAnsatz, work on boxes with domains. This utilises the newly-implemented CollapseDomainRewriteRule (credit: Kin Ian Lo).
  • Changed the device keyword argument for model-based parsers, e.g. BobcatParser, so that it follows PyTorch convention and supports multiple types.
  • Added the new lambeq.text2diagram.OncillaParser as a parser option to the CLI via the -p oncilla argument.
  • Removed the deprecated lambeq.text2diagram.DepCCGParser as a parser option from the CLI.
  • Refactored tokeniser loading from SpacyTokeniser into a new utility function lambeq.core.utils.get_spacy_tokeniser.
  • Significantly extended and restructured the documentation pages, fixed various issues, and added more material and tutorials.
  • Made tket an optional dependency.
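
As a type-level illustration of the uncurrying that CollapseDomainRewriteRule relies on: bending a box's input wires around with caps turns a box f : A -> B into a domain-less box (a state). The sketch below uses plain tuples of atomic-type names and one possible adjoint convention (writing a left adjoint of type t as "t.l", with input types contributed in reversed order); it is not lambeq's grammar API, and lambeq's actual adjoint convention may differ.

```python
# Conceptual sketch -- plain tuples of atomic-type names, not lambeq's
# grammar classes. Uncurrying replaces a box f : dom -> cod with a state
# (domain-less box) by bending the input wires around with caps.
def uncurry_cod(dom: tuple, cod: tuple) -> tuple:
    """Codomain of the domain-less box equivalent to f : dom -> cod.

    Under the convention assumed here, each input type contributes its
    left adjoint (written 't.l'), in reversed order, after the codomain.
    """
    return tuple(cod) + tuple(f"{t}.l" for t in reversed(dom))

# A box n @ s -> n would become a state with codomain n @ s.l @ n.l.
print(uncurry_cod(("n", "s"), ("n",)))
```

This is why the split tensor ansätze can now handle boxes with domains: after the rewrite, every box is a state, which is the form those ansätze already know how to decompose.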

Fixed:

  • Fixed an enum incompatibility with Python > 3.10.
  • Fixed the behaviour of tensoring a type with the identity diagram.
  • Fixed a lambeq.backend.Diagram.lambdify method error when used with a daggered tensor box (credit: Kin Ian Lo).