
Commit e9ecf9e
committed "update"
1 parent 214c3c0

File tree: 8 files changed, +442 −197 lines

doc/pub/week15/html/week15-bs.html

Lines changed: 38 additions & 4 deletions
@@ -160,6 +160,10 @@
   2,
   None,
   'the-iris-data-and-classical-svm'),
+  ('Small addendum, $F1$-score',
+   2,
+   None,
+   'small-addendum-f1-score'),
  ('Iris Dataset', 2, None, 'iris-dataset'),
  ('Qiskit implementation', 2, None, 'qiskit-implementation'),
  ('Credit data classification',
@@ -338,6 +342,7 @@
 <!-- navigation toc: --> <li><a href="#pennylane-implementations" style="font-size: 80%;">PennyLane implementations</a></li>
 <!-- navigation toc: --> <li><a href="#steps-in-quantum-kernel-svm" style="font-size: 80%;">Steps in Quantum Kernel SVM</a></li>
 <!-- navigation toc: --> <li><a href="#the-iris-data-and-classical-svm" style="font-size: 80%;">The Iris data and classical SVM</a></li>
+<!-- navigation toc: --> <li><a href="#small-addendum-f1-score" style="font-size: 80%;">Small addendum, \( F1 \)-score</a></li>
 <!-- navigation toc: --> <li><a href="#iris-dataset" style="font-size: 80%;">Iris Dataset</a></li>
 <!-- navigation toc: --> <li><a href="#qiskit-implementation" style="font-size: 80%;">Qiskit implementation</a></li>
 <!-- navigation toc: --> <li><a href="#credit-data-classification" style="font-size: 80%;">Credit data classification</a></li>
@@ -971,7 +976,7 @@ <h2 id="quantum-svm-algorithms-large-scale-vs-nisq" class="anchor">Quantum SVM A
 <p>Early work by Rebentrost, Mohseni, and Lloyd (2014) formulated an SVM
 in terms of quantum linear algebra. They showed that one can invert
 the kernel matrix (a positive semidefinite matrix) using quantum
-algorithms (HHL algorithm) in time polylogarithmic in \( N \) and \( d \) .
+algorithms in time polylogarithmic in \( N \) and \( d \) .
 Concretely, they assumed quantum RAM (QRAM) access to data and used a
 quantum subroutine to solve the dual SVM as a linear system, yielding
 the vector of \( \alpha_i \) in superposition. Under ideal conditions
@@ -1008,7 +1013,7 @@ <h2 id="and-nisq-quantum-kernels" class="anchor">And NISQ Quantum Kernels </h2>
 <h2 id="quantum-neural-network" class="anchor">Quantum neural network </h2>
 
 <p>Another variation is the quantum variational classifier, sometimes
-called a quantum neural network. Instead of precomputing a fixed
+called a quantum neural network (to be discussed below). Instead of precomputing a fixed
 kernel, one trains a parameterized quantum circuit to output labels.
 Interestingly, Schuld (2021) shows that variational quantum models,
 when trained by minimizing a loss, are mathematically equivalent to
@@ -1360,7 +1365,10 @@ <h2 id="training-svm-with-precomputed-quantum-kernels" class="anchor">Training S
 on the test set.
 </p>
 
-<p>It is also possible to integrate PennyLane&#8217;s differentiable capabilities by defining a parameterized kernel and optimizing parameters via gradient descent, but here we keep a fixed feature map.</p>
+<p>It is also possible to integrate PennyLane&#8217;s differentiable
+capabilities by defining a parameterized kernel and optimizing
+parameters via gradient descent, but here we keep a fixed feature map.
+</p>
 
 <!-- !split -->
 <h2 id="discussion-of-implementation" class="anchor">Discussion of Implementation </h2>
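The hunk above sits in the section on training an SVM with a precomputed quantum kernel; that workflow can be sketched as follows. This is a minimal scikit-learn illustration, not code from the commit: the Gaussian kernel and `gamma=0.5` are classical stand-ins for the quantum kernel.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

def kernel(A, B, gamma=0.5):
    # Classical stand-in for a quantum kernel: k(x, z) = exp(-gamma * ||x - z||^2).
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

K_train = kernel(X_tr, X_tr)   # Gram matrix between training points
K_test = kernel(X_te, X_tr)    # rows: test points, columns: training points

clf = SVC(kernel="precomputed").fit(K_train, y_tr)
print("test accuracy:", clf.score(K_test, y_te))
```

With a fixed feature map only the two Gram matrices change when a quantum kernel is swapped in; the `SVC(kernel="precomputed")` call stays identical.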
@@ -1467,6 +1475,32 @@ <h2 id="the-iris-data-and-classical-svm" class="anchor">The Iris data and classi
 </div>
 
 
+<!-- !split -->
+<h2 id="small-addendum-f1-score" class="anchor">Small addendum, \( F1 \)-score </h2>
+
+<p>The \( F1 \) measure (or \( F1 \)-score) in machine learning is a metric used to
+evaluate the accuracy of a classification model, particularly in
+situations where class distribution is imbalanced.
+It is the harmonic mean of precision and recall and is defined as
+</p>
+$$
+\mathrm{F1 score} = 2 \times \frac{\mathrm{precision} \times \mathrm{recall}}{\mathrm{precision} + \mathrm{recall}},
+$$
+
+<p>where we have defined</p>
+$$
+\mathrm{precision} = \frac{\mathrm{True-Positives}}{\mathrm{True-Positives} + \mathrm{False-Positives}},
+$$
+
+<p>and</p>
+$$
+\mathrm{recall} = \frac{\mathrm{True-Positives}}{\mathrm{True-Positives} + \mathrm{False-Negatives}}.
+$$
+
+<p>The \( F1 \)-score ranges from \( 0 \) to \( 1 \) where \( 1 \) means perfect precision and recall, while
+\( 0 \) means either precision or recall is zero.
+</p>
+
 <!-- !split -->
 <h2 id="iris-dataset" class="anchor">Iris Dataset </h2>
 
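The \( F1 \) definitions added in the hunk above map directly onto code; a small sketch, with made-up confusion-matrix counts for illustration:

```python
def f1_score(tp, fp, fn):
    """F1 as the harmonic mean of precision and recall (see the formulas above)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical counts: 8 true positives, 2 false positives, 4 false negatives.
# precision = 8/10, recall = 8/12, so F1 = 8/11 ≈ 0.727.
print(f1_score(tp=8, fp=2, fn=4))
```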
@@ -2036,7 +2070,7 @@ <h2 id="mathematical-example" class="anchor">Mathematical example </h2>
 
 <p>and a variational layer is</p>
 $$
-V(\boldsymbol\theta)=R_y(\theta_1)\otimes R_y(\theta_2),\text{CNOT}(0,1),
+V(\boldsymbol\theta)=R_y(\theta_1)\otimes R_y(\theta_2),\mathrm{CNOT}(0,1),
 $$
 
 <p>(apply \( R_y \) on each qubit then entangle). After
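The variational layer \( V(\boldsymbol\theta) \) in the hunk above (apply \( R_y \) on each qubit, then entangle with CNOT) can be written out as an explicit \( 4\times 4 \) matrix. A NumPy sketch, assuming the usual convention that qubit 0 is the CNOT control:

```python
import numpy as np

def ry(theta):
    # Single-qubit rotation about the y axis (real orthogonal for real theta).
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

# CNOT(0, 1): flips qubit 1 when qubit 0 is |1>.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

def layer(theta1, theta2):
    # V(theta) = CNOT(0,1) . (Ry(theta1) ⊗ Ry(theta2))
    return CNOT @ np.kron(ry(theta1), ry(theta2))

V = layer(0.3, 1.1)
print(np.allclose(V @ V.T, np.eye(4)))  # the layer is unitary -> prints True
```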

doc/pub/week15/html/week15-reveal.html

Lines changed: 40 additions & 4 deletions
@@ -790,7 +790,7 @@ <h2 id="quantum-svm-algorithms-large-scale-vs-nisq">Quantum SVM Algorithms (larg
 <p>Early work by Rebentrost, Mohseni, and Lloyd (2014) formulated an SVM
 in terms of quantum linear algebra. They showed that one can invert
 the kernel matrix (a positive semidefinite matrix) using quantum
-algorithms (HHL algorithm) in time polylogarithmic in \( N \) and \( d \) .
+algorithms in time polylogarithmic in \( N \) and \( d \) .
 Concretely, they assumed quantum RAM (QRAM) access to data and used a
 quantum subroutine to solve the dual SVM as a linear system, yielding
 the vector of \( \alpha_i \) in superposition. Under ideal conditions
@@ -827,7 +827,7 @@ <h2 id="and-nisq-quantum-kernels">And NISQ Quantum Kernels </h2>
 <h2 id="quantum-neural-network">Quantum neural network </h2>
 
 <p>Another variation is the quantum variational classifier, sometimes
-called a quantum neural network. Instead of precomputing a fixed
+called a quantum neural network (to be discussed below). Instead of precomputing a fixed
 kernel, one trains a parameterized quantum circuit to output labels.
 Interestingly, Schuld (2021) shows that variational quantum models,
 when trained by minimizing a loss, are mathematically equivalent to
@@ -1187,7 +1187,10 @@ <h2 id="training-svm-with-precomputed-quantum-kernels">Training SVM with Precomp
 on the test set.
 </p>
 
-<p>It is also possible to integrate PennyLane&#8217;s differentiable capabilities by defining a parameterized kernel and optimizing parameters via gradient descent, but here we keep a fixed feature map.</p>
+<p>It is also possible to integrate PennyLane&#8217;s differentiable
+capabilities by defining a parameterized kernel and optimizing
+parameters via gradient descent, but here we keep a fixed feature map.
+</p>
 </section>
 
 <section>
@@ -1309,6 +1312,39 @@ <h2 id="the-iris-data-and-classical-svm">The Iris data and classical SVM </h2>
 </div>
 </section>
 
+<section>
+<h2 id="small-addendum-f1-score">Small addendum, \( F1 \)-score </h2>
+
+<p>The \( F1 \) measure (or \( F1 \)-score) in machine learning is a metric used to
+evaluate the accuracy of a classification model, particularly in
+situations where class distribution is imbalanced.
+It is the harmonic mean of precision and recall and is defined as
+</p>
+<p>&nbsp;<br>
+$$
+\mathrm{F1 score} = 2 \times \frac{\mathrm{precision} \times \mathrm{recall}}{\mathrm{precision} + \mathrm{recall}},
+$$
+<p>&nbsp;<br>
+
+<p>where we have defined</p>
+<p>&nbsp;<br>
+$$
+\mathrm{precision} = \frac{\mathrm{True-Positives}}{\mathrm{True-Positives} + \mathrm{False-Positives}},
+$$
+<p>&nbsp;<br>
+
+<p>and</p>
+<p>&nbsp;<br>
+$$
+\mathrm{recall} = \frac{\mathrm{True-Positives}}{\mathrm{True-Positives} + \mathrm{False-Negatives}}.
+$$
+<p>&nbsp;<br>
+
+<p>The \( F1 \)-score ranges from \( 0 \) to \( 1 \) where \( 1 \) means perfect precision and recall, while
+\( 0 \) means either precision or recall is zero.
+</p>
+</section>
+
 <section>
 <h2 id="iris-dataset">Iris Dataset </h2>
 
@@ -1903,7 +1939,7 @@ <h2 id="mathematical-example">Mathematical example </h2>
 <p>and a variational layer is</p>
 <p>&nbsp;<br>
 $$
-V(\boldsymbol\theta)=R_y(\theta_1)\otimes R_y(\theta_2),\text{CNOT}(0,1),
+V(\boldsymbol\theta)=R_y(\theta_1)\otimes R_y(\theta_2),\mathrm{CNOT}(0,1),
 $$
 <p>&nbsp;<br>
 
doc/pub/week15/html/week15-solarized.html

Lines changed: 37 additions & 4 deletions
@@ -187,6 +187,10 @@
   2,
   None,
   'the-iris-data-and-classical-svm'),
+  ('Small addendum, $F1$-score',
+   2,
+   None,
+   'small-addendum-f1-score'),
  ('Iris Dataset', 2, None, 'iris-dataset'),
  ('Qiskit implementation', 2, None, 'qiskit-implementation'),
  ('Credit data classification',
@@ -873,7 +877,7 @@ <h2 id="quantum-svm-algorithms-large-scale-vs-nisq">Quantum SVM Algorithms (larg
 <p>Early work by Rebentrost, Mohseni, and Lloyd (2014) formulated an SVM
 in terms of quantum linear algebra. They showed that one can invert
 the kernel matrix (a positive semidefinite matrix) using quantum
-algorithms (HHL algorithm) in time polylogarithmic in \( N \) and \( d \) .
+algorithms in time polylogarithmic in \( N \) and \( d \) .
 Concretely, they assumed quantum RAM (QRAM) access to data and used a
 quantum subroutine to solve the dual SVM as a linear system, yielding
 the vector of \( \alpha_i \) in superposition. Under ideal conditions
@@ -909,7 +913,7 @@ <h2 id="and-nisq-quantum-kernels">And NISQ Quantum Kernels </h2>
 <h2 id="quantum-neural-network">Quantum neural network </h2>
 
 <p>Another variation is the quantum variational classifier, sometimes
-called a quantum neural network. Instead of precomputing a fixed
+called a quantum neural network (to be discussed below). Instead of precomputing a fixed
 kernel, one trains a parameterized quantum circuit to output labels.
 Interestingly, Schuld (2021) shows that variational quantum models,
 when trained by minimizing a loss, are mathematically equivalent to
@@ -1261,7 +1265,10 @@ <h2 id="training-svm-with-precomputed-quantum-kernels">Training SVM with Precomp
 on the test set.
 </p>
 
-<p>It is also possible to integrate PennyLane&#8217;s differentiable capabilities by defining a parameterized kernel and optimizing parameters via gradient descent, but here we keep a fixed feature map.</p>
+<p>It is also possible to integrate PennyLane&#8217;s differentiable
+capabilities by defining a parameterized kernel and optimizing
+parameters via gradient descent, but here we keep a fixed feature map.
+</p>
 
 <!-- !split --><br><br><br><br><br><br><br><br><br><br>
 <h2 id="discussion-of-implementation">Discussion of Implementation </h2>
@@ -1368,6 +1375,32 @@ <h2 id="the-iris-data-and-classical-svm">The Iris data and classical SVM </h2>
 </div>
 
 
+<!-- !split --><br><br><br><br><br><br><br><br><br><br>
+<h2 id="small-addendum-f1-score">Small addendum, \( F1 \)-score </h2>
+
+<p>The \( F1 \) measure (or \( F1 \)-score) in machine learning is a metric used to
+evaluate the accuracy of a classification model, particularly in
+situations where class distribution is imbalanced.
+It is the harmonic mean of precision and recall and is defined as
+</p>
+$$
+\mathrm{F1 score} = 2 \times \frac{\mathrm{precision} \times \mathrm{recall}}{\mathrm{precision} + \mathrm{recall}},
+$$
+
+<p>where we have defined</p>
+$$
+\mathrm{precision} = \frac{\mathrm{True-Positives}}{\mathrm{True-Positives} + \mathrm{False-Positives}},
+$$
+
+<p>and</p>
+$$
+\mathrm{recall} = \frac{\mathrm{True-Positives}}{\mathrm{True-Positives} + \mathrm{False-Negatives}}.
+$$
+
+<p>The \( F1 \)-score ranges from \( 0 \) to \( 1 \) where \( 1 \) means perfect precision and recall, while
+\( 0 \) means either precision or recall is zero.
+</p>
+
 <!-- !split --><br><br><br><br><br><br><br><br><br><br>
 <h2 id="iris-dataset">Iris Dataset </h2>
 
@@ -1937,7 +1970,7 @@ <h2 id="mathematical-example">Mathematical example </h2>
 
 <p>and a variational layer is</p>
 $$
-V(\boldsymbol\theta)=R_y(\theta_1)\otimes R_y(\theta_2),\text{CNOT}(0,1),
+V(\boldsymbol\theta)=R_y(\theta_1)\otimes R_y(\theta_2),\mathrm{CNOT}(0,1),
 $$
 
 <p>(apply \( R_y \) on each qubit then entangle). After

doc/pub/week15/html/week15.html

Lines changed: 37 additions & 4 deletions
@@ -264,6 +264,10 @@
   2,
   None,
   'the-iris-data-and-classical-svm'),
+  ('Small addendum, $F1$-score',
+   2,
+   None,
+   'small-addendum-f1-score'),
  ('Iris Dataset', 2, None, 'iris-dataset'),
  ('Qiskit implementation', 2, None, 'qiskit-implementation'),
  ('Credit data classification',
@@ -950,7 +954,7 @@ <h2 id="quantum-svm-algorithms-large-scale-vs-nisq">Quantum SVM Algorithms (larg
 <p>Early work by Rebentrost, Mohseni, and Lloyd (2014) formulated an SVM
 in terms of quantum linear algebra. They showed that one can invert
 the kernel matrix (a positive semidefinite matrix) using quantum
-algorithms (HHL algorithm) in time polylogarithmic in \( N \) and \( d \) .
+algorithms in time polylogarithmic in \( N \) and \( d \) .
 Concretely, they assumed quantum RAM (QRAM) access to data and used a
 quantum subroutine to solve the dual SVM as a linear system, yielding
 the vector of \( \alpha_i \) in superposition. Under ideal conditions
@@ -986,7 +990,7 @@ <h2 id="and-nisq-quantum-kernels">And NISQ Quantum Kernels </h2>
 <h2 id="quantum-neural-network">Quantum neural network </h2>
 
 <p>Another variation is the quantum variational classifier, sometimes
-called a quantum neural network. Instead of precomputing a fixed
+called a quantum neural network (to be discussed below). Instead of precomputing a fixed
 kernel, one trains a parameterized quantum circuit to output labels.
 Interestingly, Schuld (2021) shows that variational quantum models,
 when trained by minimizing a loss, are mathematically equivalent to
@@ -1338,7 +1342,10 @@ <h2 id="training-svm-with-precomputed-quantum-kernels">Training SVM with Precomp
 on the test set.
 </p>
 
-<p>It is also possible to integrate PennyLane&#8217;s differentiable capabilities by defining a parameterized kernel and optimizing parameters via gradient descent, but here we keep a fixed feature map.</p>
+<p>It is also possible to integrate PennyLane&#8217;s differentiable
+capabilities by defining a parameterized kernel and optimizing
+parameters via gradient descent, but here we keep a fixed feature map.
+</p>
 
 <!-- !split --><br><br><br><br><br><br><br><br><br><br>
 <h2 id="discussion-of-implementation">Discussion of Implementation </h2>
@@ -1445,6 +1452,32 @@ <h2 id="the-iris-data-and-classical-svm">The Iris data and classical SVM </h2>
 </div>
 
 
+<!-- !split --><br><br><br><br><br><br><br><br><br><br>
+<h2 id="small-addendum-f1-score">Small addendum, \( F1 \)-score </h2>
+
+<p>The \( F1 \) measure (or \( F1 \)-score) in machine learning is a metric used to
+evaluate the accuracy of a classification model, particularly in
+situations where class distribution is imbalanced.
+It is the harmonic mean of precision and recall and is defined as
+</p>
+$$
+\mathrm{F1 score} = 2 \times \frac{\mathrm{precision} \times \mathrm{recall}}{\mathrm{precision} + \mathrm{recall}},
+$$
+
+<p>where we have defined</p>
+$$
+\mathrm{precision} = \frac{\mathrm{True-Positives}}{\mathrm{True-Positives} + \mathrm{False-Positives}},
+$$
+
+<p>and</p>
+$$
+\mathrm{recall} = \frac{\mathrm{True-Positives}}{\mathrm{True-Positives} + \mathrm{False-Negatives}}.
+$$
+
+<p>The \( F1 \)-score ranges from \( 0 \) to \( 1 \) where \( 1 \) means perfect precision and recall, while
+\( 0 \) means either precision or recall is zero.
+</p>
+
 <!-- !split --><br><br><br><br><br><br><br><br><br><br>
 <h2 id="iris-dataset">Iris Dataset </h2>
 
@@ -2014,7 +2047,7 @@ <h2 id="mathematical-example">Mathematical example </h2>
 
 <p>and a variational layer is</p>
 $$
-V(\boldsymbol\theta)=R_y(\theta_1)\otimes R_y(\theta_2),\text{CNOT}(0,1),
+V(\boldsymbol\theta)=R_y(\theta_1)\otimes R_y(\theta_2),\mathrm{CNOT}(0,1),
 $$
 
 <p>(apply \( R_y \) on each qubit then entangle). After
Binary file (0 bytes) not shown.
