
Commit 6903c5a

Thomas Hoffmann, ianyfan, Dimitri Kartsaklis, nikhilkhatri and Charles London committed:

Release version 0.3.2

Co-authored-by: Ian Fan <ian.fan@quantinuum.com>
Co-authored-by: Dimitri Kartsaklis <dimitri.kartsaklis@quantinuum.com>
Co-authored-by: Nikhil Khatri <nikhil.khatri@quantinuum.com>
Co-authored-by: Charles London <charles.london@quantinuum.com>
Co-authored-by: Alexis Toumi <alexis@toumi.email>

1 parent: 5bc4d0a


47 files changed: +1953 -3959 lines

.github/workflows/build_test.yml (+2 -2)

@@ -7,6 +7,7 @@ on:
       - 'beta'
       - 'release'
   pull_request:
+  workflow_dispatch:
 
 env:
   SRC_DIR: lambeq
@@ -103,7 +104,7 @@ jobs:
           path: ${{ steps.loc-depccg-cache.outputs.dir }}
           key: depccg
       - name: Download depccg pre-trained model if needed
-        if: steps.depccg-cache.outputs.cache-hit == 'false'
+        if: steps.depccg-enabled.outcome == 'success' && steps.depccg-cache.outputs.cache-hit != 'true'
         run: python -c 'import tarfile, urllib;from depccg.instance_models import MODEL_DIRECTORY;tarfile.open(urllib.request.urlretrieve("https://qnlp.cambridgequantum.com/models/tri_headfirst.tar.gz")[0]).extractall(MODEL_DIRECTORY)'
       - name: Test DepCCGParser
         if: steps.depccg-enabled.outcome == 'success'
@@ -123,7 +124,6 @@ jobs:
         run: >
           pytest --nbmake ${{ env.DOCS_DIR }}/tutorials/
           --nbmake-timeout=60
-          --ignore ${{ env.DOCS_DIR }}/tutorials/trainer_hybrid.ipynb
           --ignore ${{ env.DOCS_DIR }}/tutorials/code
       - name: Coverage report
         run: coverage report -m
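A note on the 'if:' change above: the 'cache-hit' output of actions/cache is the string 'true' on an exact cache hit and empty otherwise, so the old comparison against 'false' could never be satisfied; comparing against != 'true' (together with the depccg-enabled guard) presumably makes the model download actually run on a cache miss.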

docs/clean_notebooks.py (-35)

This file was deleted.

docs/conf.py (+1 -1)

@@ -47,7 +47,7 @@
 ]
 
 intersphinx_mapping = {
-    'discopy': ("https://docs.discopy.org/en/0.5.1.1/", None),
+    'discopy': ("https://docs.discopy.org/en/main/", None),
     'pennylane': ("https://pennylane.readthedocs.io/en/stable/", None),
 }

docs/examples/circuit.ipynb (+129 -3,389): large diff not rendered by default.

docs/examples/pennylane.ipynb (+116 -89): large diff not rendered by default.

docs/examples/quantum_pipeline.ipynb (+19 -21): large diff not rendered by default.

docs/examples/quantum_pipeline_jax.ipynb (+15 -14): large diff not rendered by default.

docs/package-api.rst (+1 -1)

@@ -121,7 +121,7 @@ Package containing the interfaces for the :term:`CCG <Combinatory Categorial Gra
 
 .. inheritance-diagram::
     lambeq.text2diagram.BobcatParser
-    lambeq.text2diagram.CCGAtomicType
+    lambeq.text2diagram.CCGType
     lambeq.text2diagram.CCGBankParser
     lambeq.text2diagram.CCGRule
     lambeq.text2diagram.CCGTree

docs/release_notes.rst (+38)

@@ -4,6 +4,44 @@ Release notes
 =============
 
 
+.. _rel-0.3.2:
+
+`0.3.2 <https://github.com/CQCL/lambeq/releases/tag/0.3.2>`_
+------------------------------------------------------------
+
+Added:
+
+- Support for :term:`DisCoPy` >= 1.1.4 (credit: `toumix <https://github.com/CQCL/lambeq/pull/89>`_):
+
+  - Replaced ``discopy.rigid`` with :py:mod:`discopy.grammar.pregroup` everywhere.
+  - Replaced ``discopy.biclosed`` with :py:mod:`discopy.grammar.categorial` everywhere.
+  - Used ``Diagram.decode`` to account for the change in constructor signature ``Diagram(inside, dom, cod)``.
+  - Updated attribute names that were previously hidden, e.g. ``._data`` becomes ``.data``.
+  - Replaced diagrammatic conjugate with transpose.
+  - Swapped left and right currying.
+  - Dropped support for legacy DisCoPy.
+
+- Added the :py:class:`~lambeq.CCGType` class for use in the ``biclosed_type`` attribute of :py:class:`~lambeq.CCGTree`, allowing conversion to and from a DisCoPy categorial object using the :py:meth:`~lambeq.CCGType.discopy` and :py:meth:`~lambeq.CCGType.from_discopy` methods.
+- :py:class:`~lambeq.CCGTree`: added a reference to the original tree from parsing by introducing a ``metadata`` field.
+
+Changed:
+
+- Internalised DisCoPy quantum ansätze in lambeq.
+- :py:class:`~lambeq.IQPAnsatz` now ends with a layer of Hadamard gates in the multi-qubit case, and the post-selection basis is set to the computational basis (Pauli Z).
+
+Fixed:
+
+- Fixed a bottleneck during the initialisation of :py:class:`~lambeq.PennyLaneModel` caused by the inefficient substitution of SymPy symbols in the circuits.
+- Escape special characters in box labels during symbol creation.
+- Documentation: fixed broken links to the DisCoPy documentation.
+- Documentation: enabled the sphinxcontrib.jquery extension for the Read the Docs theme.
+- Fixed disentangling of ``RealAnsatz`` in the extend-lambeq tutorial notebook.
+- Fixed model loading in the PennyLane notebooks.
+- Fixed a typo in :py:class:`~lambeq.SPSAOptimizer` (credit: `Gopal-Dahale <https://github.com/CQCL/lambeq/pull/102>`_).
+
+Removed:
+
+- Removed support for Python 3.8.
+
 .. _rel-0.3.1:
 
 `0.3.1 <https://github.com/CQCL/lambeq/releases/tag/0.3.1>`_
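The CCGType conversion described in the notes above can be sketched as follows. This is a minimal illustration, not code from this commit: BobcatParser and sentence2tree are existing lambeq APIs, the default Bobcat model must be available locally or downloadable, and the round-trip equality at the end is an assumption rather than documented behaviour.

# Minimal sketch (not from this commit) of the CCGType round trip
# described in the release notes; the final equality is assumed.
from lambeq import BobcatParser

parser = BobcatParser()                  # fetches the default model if needed
tree = parser.sentence2tree('John likes Mary')

ccg_type = tree.biclosed_type            # now a lambeq CCGType
categorial = ccg_type.discopy()          # a discopy.grammar.categorial object
roundtrip = type(ccg_type).from_discopy(categorial)

assert roundtrip == ccg_type             # assumed round-trip property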

docs/scripts/check_errors.py (+20, new file)

@@ -0,0 +1,20 @@
# check_errors.py
import sys
from nbformat import read

def check_error_tags(nbfile) -> bool:
    """Return True if any code cell in the notebook has an error output."""
    with open(nbfile) as f:
        nb = read(f, as_version=4)

    for cell in nb['cells']:
        if cell['cell_type'] == 'code':
            for output in cell.get('outputs', []):
                if output['output_type'] == 'error':
                    return True
    return False

if __name__ == '__main__':
    file_path = sys.argv[1]
    if check_error_tags(file_path):
        sys.exit(1)
    sys.exit(0)
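To see what the checker reacts to, here is a hypothetical self-test: it builds a notebook in memory whose only code cell carries an 'error' output, writes it to disk and confirms the function flags it. The import path and file name are assumptions, not part of this commit.

# Hypothetical self-test for check_error_tags; assumes docs/scripts is
# on sys.path and uses a made-up file name.
import nbformat as nbf
from check_errors import check_error_tags

nb = nbf.v4.new_notebook()
cell = nbf.v4.new_code_cell('1 / 0')
cell.outputs = [nbf.v4.new_output('error', ename='ZeroDivisionError',
                                  evalue='division by zero', traceback=[])]
nb.cells.append(cell)
nbf.write(nb, 'broken.ipynb')

assert check_error_tags('broken.ipynb')  # the error output is detected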

docs/scripts/clean_notebooks.py (+65, new file)

@@ -0,0 +1,65 @@
"""
This script performs the following actions on example and
tutorial notebooks:

- Removes cell IDs
- Keeps only `useful_metadata` for each cell
- Renumbers code cells, ignoring hidden ones
- Keeps only necessary notebook metadata
- Pins nbformat version
"""

from pathlib import Path
from itertools import chain
import nbformat as nbf


print("Cleaning notebooks...")

nbs_path = Path("examples")
tut_path = Path("tutorials")
useful_metadata = ["nbsphinx", "raw_mimetype"]

for file in chain(nbs_path.iterdir(), tut_path.iterdir()):
    if not (file.is_file() and file.suffix == ".ipynb"):
        continue

    ntbk = nbf.read(file, nbf.NO_CONVERT)

    exec_count = 0

    for cell in ntbk.cells:
        # Delete cell ID if it's there
        cell.pop("id", None)
        if cell.get("attachments") == {}:
            cell.pop("attachments", None)

        # Keep only useful metadata
        new_metadata = {x: cell.metadata[x]
                        for x in useful_metadata
                        if x in cell.metadata}
        cell.metadata = new_metadata

        # Renumber execution counts, ignoring hidden cells
        if cell.cell_type == "code":
            if cell.metadata.get("nbsphinx") == "hidden":
                cell.execution_count = None
            else:
                exec_count += 1
                cell.execution_count = exec_count

            # Adjust the output execution count, if present
            if len(cell.outputs) > 0:
                output = cell.outputs[-1]  # execute_result must be the last entry
                if output.output_type == "execute_result":
                    output.execution_count = cell.execution_count

    ntbk.metadata = {"language_info": {"name": "python"}}

    # We need the version of nbformat to be x.4, otherwise cell IDs
    # are regenerated automatically
    ntbk.nbformat = 4
    ntbk.nbformat_minor = 4

    nbf.write(ntbk, file, version=nbf.NO_CONVERT)
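Since the script resolves "examples" and "tutorials" relative to the working directory, it presumably has to be run from inside docs/. A hedged invocation sketch, with the directory layout assumed:

# Run the cleaner from the docs/ directory, where examples/ and
# tutorials/ live; the cwd value here is an assumption.
import subprocess
import sys

subprocess.run([sys.executable, 'scripts/clean_notebooks.py'],
               cwd='docs', check=True)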
(new file, +96 lines; filename not rendered in this view)

import os
import pandas as pd
import matplotlib.pyplot as plt
import argparse
import numpy as np


PLOT_SAVE_PATH = 'notebook_execution_time_comparison.png'

# Function to load data from CSV files
def load_data(csv_path):
    data = pd.read_csv(csv_path)
    # Cut path from notebook name
    data['notebook'] = data['notebook'].apply(lambda x: x.rpartition('/')[2])
    data['source'] = csv_path
    data['median'] = data.filter(regex='run_').median(axis=1)
    data['min'] = data.filter(regex='run_').min(axis=1)
    data['std'] = data.filter(regex='run_').std(axis=1)
    return data


def plot_data(data: pd.DataFrame, title='Notebook Execution Time Comparison'):
    # Aggregate the median, standard deviation and minimum
    # per notebook and source
    mean_data = data.groupby(['notebook', 'source'])['median'].mean().unstack().fillna(0)
    std_data = data.groupby(['notebook', 'source'])['std'].mean().unstack().fillna(0)
    min_data = data.groupby(['notebook', 'source'])['min'].mean().unstack().fillna(0)

    # Create a figure and a set of subplots
    _, ax = plt.subplots(figsize=(15, 8))

    # Define variable bar width
    samples = len(mean_data.keys())
    bar_width = 1 / (2 * samples)

    # Define notebook positions
    notebook_positions = np.arange(len(mean_data.index))

    n_samples = len(mean_data.columns)

    # Generate bar plots with error bars for each source
    for i, source in enumerate(mean_data.columns):
        label = source.split('/')[1]
        offset = ((i - n_samples//2 + 1/2) if n_samples % 2 == 0 else
                  (n_samples//2 - i))
        # Plot median values as a translucent bar plot with error bars
        ax.bar(notebook_positions - offset * bar_width,
               mean_data[source],
               bar_width,
               alpha=0.2,
               color='bgrcmyk'[i],
               yerr=std_data[source],
               ecolor='gray',
               label='_nolegend_')
        # Plot minimum values as an opaque bar plot
        ax.bar(notebook_positions - offset * bar_width,
               min_data[source],
               bar_width,
               alpha=0.8,
               color='bgrcmyk'[i],
               label=label)

    ax.set_title(title)
    ax.set_xlabel('Notebook')
    ax.set_ylabel('Time (s)')
    ax.set_xticks(notebook_positions)
    ax.set_xticklabels(mean_data.index, rotation=45, ha='right')
    plt.legend()
    plt.tight_layout()
    plt.savefig(PLOT_SAVE_PATH)
    plt.show()


def main(notebook_runtimes_dir):
    # Traverse 'notebook_runtimes_dir' and its subdirectories for CSV files
    data_frames = []
    for root, _, files in os.walk(notebook_runtimes_dir):
        for file in files:
            if file.endswith(".csv"):
                csv_path = os.path.join(root, file)
                data = load_data(csv_path)
                data_frames.append(data)

    # Concatenate all dataframes
    all_data = pd.concat(data_frames, ignore_index=True)

    # Plot the data
    plot_data(all_data)


if __name__ == "__main__":
    parser = argparse.ArgumentParser(description='Compare notebook execution times.')
    parser.add_argument('notebook_runtimes_dir', type=str,
                        help='Path to the directory containing the CSV files.')

    args = parser.parse_args()

    main(args.notebook_runtimes_dir)
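load_data above implies a CSV schema: a 'notebook' column plus one 'run_<n>' column per timing run, with files nested at least one directory deep (plot_data labels each source by the second path component). A sketch of a compatible input, with made-up paths and numbers:

# Made-up CSV input matching the columns load_data filters on.
import os
import pandas as pd

os.makedirs('runtimes/main', exist_ok=True)
pd.DataFrame({
    'notebook': ['docs/examples/circuit.ipynb',
                 'docs/examples/pennylane.ipynb'],
    'run_1': [12.3, 30.1],
    'run_2': [11.8, 29.4],
    'run_3': [12.1, 29.9],
}).to_csv('runtimes/main/times.csv', index=False)
# The comparison script would then be pointed at the 'runtimes' directory.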
