
Commit 9718d87

update week 15

1 parent 50936d5

File tree

7 files changed: +1438 −229 lines

doc/pub/week15/html/week15-bs.html

Lines changed: 229 additions & 11 deletions
Large diffs are not rendered by default.

doc/pub/week15/html/week15-reveal.html

Lines changed: 215 additions & 10 deletions
@@ -1096,7 +1096,7 @@ <h2 id="computing-quantum-kernel-matrices">Computing Quantum Kernel Matrices</h2>
  </div>
  </div>

- <p>and compute kernel matrices. For training:</p>
+ <p>and compute kernel matrices. For training (<b>note: this code will not run as-is since the data are not defined</b>):</p>

  <!-- code=python (!bc pycod) typeset with pygments style "perldoc" -->
  <div class="cell border-box-sizing code_cell rendered">
@@ -2215,7 +2215,6 @@ <h2 id="implementing-qnns-with-pennylane">Implementing QNNs with PennyLane</h2>
      variational_layer(params[4:8])
      # measure expectation of Z on qubit 0
      return qml.expval(qml.PauliZ(wires=0))
- In this code:
  </pre>
  </div>
  </div>
@@ -2231,8 +2230,8 @@ <h2 id="implementing-qnns-with-pennylane">Implementing QNNs with PennyLane</h2>
  </div>
  </div>

- <p>We instantiate a 2-qubit device dev. feature_map(x) encodes the
- 2-dimensional input x using \( R_x \) rotations.
+ <p>In this code we instantiate a two-qubit device dev. feature_map(x) encodes the
+ two-dimensional input x using \( R_x \) rotations.
  variational_layer(params) is a block of trainable gates (here two Rot
  gates and a CNOT). The @qml.qnode(dev) decorator turns the Python
  function qclassifier into a quantum node that returns the expectation
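For readers following along, a self-contained sketch of the classifier this paragraph describes, reconstructed from the description (the lecture's actual code slices params[4:8], so the exact parameter layout used here, six angles for one layer, is an assumption):

    import numpy as np
    import pennylane as qml
    from pennylane import numpy as pnp

    dev = qml.device("default.qubit", wires=2)

    def feature_map(x):
        # Encode the 2-dimensional input x with one R_x rotation per qubit
        qml.RX(x[0], wires=0)
        qml.RX(x[1], wires=1)

    def variational_layer(params):
        # Trainable block: two Rot gates (3 angles each) and an entangling CNOT
        qml.Rot(*params[0:3], wires=0)
        qml.Rot(*params[3:6], wires=1)
        qml.CNOT(wires=[0, 1])

    @qml.qnode(dev)
    def qclassifier(params, x):
        feature_map(x)
        variational_layer(params)
        # measure expectation of Z on qubit 0
        return qml.expval(qml.PauliZ(wires=0))

    params = pnp.array(np.random.uniform(0, 2 * np.pi, 6), requires_grad=True)
    print(qclassifier(params, np.array([0.3, 0.7])))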
@@ -2325,6 +2324,218 @@ <h2 id="additional-exercises">Additional exercises</h2>
  </ol>
  </section>

+ <section>
+ <h2 id="variational-quantum-neural-network-for-credit-classification">Variational Quantum Neural Network for credit classification</h2>
+
+ <p>This self-contained PennyLane code demonstrates a simple hybrid
+ quantum-classical binary classifier on synthetic financial data,
+ similar to the example we discussed in connection with quantum
+ support vector machines.
+ </p>
+
+ <p>We build a quantum neural network (QNN), that is, a parameterized
+ (variational) quantum circuit, to classify synthetic credit data.
+ Quantum neural networks are typically implemented as variational
+ quantum circuits with trainable rotation angles.
+ </p>
+
+ <p>We first generate a small synthetic dataset with features such as
+ income, debt ratio, and age, labeling each datapoint as "good" or
+ "bad" credit. Classical features are encoded into qubit rotation
+ angles using an angle embedding, and a layer of trainable entangling
+ gates (PennyLane's StronglyEntanglingLayers) forms the variational
+ ansatz. During training, we optimize the circuit parameters with a
+ classical optimizer to minimize the binary cross-entropy loss.
+ Finally, we measure one qubit to produce a probability for the
+ positive class and evaluate the classifier with accuracy, precision,
+ and recall. The following code implements all steps using PennyLane's
+ default.qubit simulator.
+ </p>
+
+ <!-- code=python (!bc pycod) typeset with pygments style "perldoc" -->
+ <div class="cell border-box-sizing code_cell rendered">
+ <div class="input">
+ <div class="inner_cell">
+ <div class="input_area">
+ <div class="highlight">
+ <pre>
+ import numpy as np
+ import pennylane as qml
+ from pennylane import numpy as pnp
+ from sklearn.model_selection import train_test_split
+ from sklearn.metrics import accuracy_score, precision_score, recall_score
+
+ # Step 1: Generate synthetic credit data
+ np.random.seed(0)
+ N = 100
+ # Features: income (in thousands), debt_ratio (percent), age (years)
+ income = np.random.normal(50, 15, N)       # mean 50, std 15
+ debt_ratio = np.random.uniform(0, 100, N)  # between 0 and 100%
+ age = np.random.randint(18, 70, N)         # ages 18 to 69
+ X = np.column_stack((income, debt_ratio, age))
+
+ # Label via a simple linear rule => 1 = good credit, 0 = bad
+ score = 0.3 * income - 0.2 * debt_ratio + 0.1 * age
+ y = (score > np.median(score)).astype(int)  # threshold at median
+
+ # Split into train/test (80/20 split)
+ X_train_np, X_test_np, y_train_np, y_test_np = train_test_split(X, y, test_size=0.2, random_state=42)
+
+ # Step 2: Feature scaling for angle embedding
+ # Scale each feature to [0, pi] so it can serve as a rotation angle
+ max_income = X[:, 0].max()
+ max_debt = X[:, 1].max()
+ min_age, max_age = X[:, 2].min(), X[:, 2].max()
+
+ # Scale training data
+ X_train_scaled = X_train_np.copy()
+ X_train_scaled[:, 0] = X_train_scaled[:, 0] / max_income * np.pi
+ X_train_scaled[:, 1] = X_train_scaled[:, 1] / max_debt * np.pi
+ X_train_scaled[:, 2] = (X_train_scaled[:, 2] - min_age) / (max_age - min_age) * np.pi
+
+ # Scale test data
+ X_test_scaled = X_test_np.copy()
+ X_test_scaled[:, 0] = X_test_scaled[:, 0] / max_income * np.pi
+ X_test_scaled[:, 1] = X_test_scaled[:, 1] / max_debt * np.pi
+ X_test_scaled[:, 2] = (X_test_scaled[:, 2] - min_age) / (max_age - min_age) * np.pi
+
+ # Convert data to PennyLane numpy arrays for differentiation
+ X_train = pnp.array(X_train_scaled)
+ X_test = pnp.array(X_test_scaled)
+ y_train = pnp.array(y_train_np)
+ y_test = pnp.array(y_test_np)
+
+ # Step 3: Define the variational quantum circuit
+ n_qubits = 3
+ dev = qml.device("default.qubit", wires=n_qubits)
+
+ # Parameterized quantum neural network (variational circuit)
+ @qml.qnode(dev)
+ def circuit(weights, x):
+     # Feature map: encode features as rotation angles, one per qubit
+     # (AngleEmbedding uses RX by default; rotation='Y' selects RY)
+     qml.AngleEmbedding(features=x, wires=range(n_qubits), rotation='Y')
+     # Variational (trainable) layers: strongly entangling rotations
+     qml.templates.StronglyEntanglingLayers(weights, wires=range(n_qubits))
+     # Measure expectation of Pauli-Z on the first qubit
+     return qml.expval(qml.PauliZ(0))
+
+ # Initialize trainable weights for the variational layers
+ num_layers = 1
+ # Shape for StronglyEntanglingLayers: (num_layers, n_qubits, 3)
+ init_weights = 0.01 * np.random.randn(num_layers, n_qubits, 3)
+ weights = pnp.array(init_weights, requires_grad=True)
+
+ # Step 4: Define cost (binary cross-entropy) and train the QNN
+ def cross_entropy_loss(weights, X, y):
+     # Run circuit on each sample to get expectation values
+     expvals = [circuit(weights, x=x) for x in X]
+     expvals = pnp.stack(expvals)
+     # Convert expectation <Z> to probability for label=1: P(1) = (1 - <Z>)/2
+     probs = (1 - expvals) / 2
+     # Clip probabilities to avoid log(0)
+     probs = pnp.clip(probs, 1e-6, 1 - 1e-6)
+     # Binary cross-entropy loss
+     loss = -pnp.mean(y * pnp.log(probs) + (1 - y) * pnp.log(1 - probs))
+     return loss
+
+ # Choose an optimizer (gradient descent)
+ opt = qml.GradientDescentOptimizer(stepsize=0.5)
+
+ # Training loop
+ epochs = 20
+ for it in range(epochs):
+     weights, cost_val = opt.step_and_cost(lambda w: cross_entropy_loss(w, X_train, y_train), weights)
+     if (it + 1) % 5 == 0:
+         print(f"Iteration {it+1:>2}: loss = {cost_val:.4f}")
+
+ # Step 5: Evaluate performance on training and test sets
+ # Predict by evaluating the circuit and thresholding at 0.5
+ def predict(weights, X):
+     preds = []
+     for x in X:
+         z = circuit(weights, x=x)
+         prob = float((1 - z) / 2)  # probability of class=1
+         preds.append(int(prob > 0.5))
+     return np.array(preds)
+
+ y_train_pred = predict(weights, X_train)
+ y_test_pred = predict(weights, X_test)
+
+ # Compute accuracy, precision, recall
+ train_acc = accuracy_score(y_train_np, y_train_pred)
+ test_acc = accuracy_score(y_test_np, y_test_pred)
+ train_prec = precision_score(y_train_np, y_train_pred)
+ test_prec = precision_score(y_test_np, y_test_pred)
+ train_rec = recall_score(y_train_np, y_train_pred)
+ test_rec = recall_score(y_test_np, y_test_pred)
+
+ print(f"Train Accuracy: {train_acc:.2f}, Precision: {train_prec:.2f}, Recall: {train_rec:.2f}")
+ print(f"Test Accuracy: {test_acc:.2f}, Precision: {test_prec:.2f}, Recall: {test_rec:.2f}")
+ </pre>
+ </div>
+ </div>
+ </div>
+ </div>
+ <div class="output_wrapper">
+ <div class="output">
+ <div class="output_area">
+ <div class="output_subarea output_stream output_stdout output_text">
+ </div>
+ </div>
+ </div>
+ </div>
+ </div>
+ </section>
+
+ <section>
+ <h2 id="essential-steps-in-the-code">Essential steps in the code</h2>
+ <div class="alert alert-block alert-text-normal">
+ <b>Data Encoding:</b>
+ <p>We map each feature vector to a quantum state via angle embedding: each
+ feature is used as the rotation angle of an RY gate on a qubit. This
+ creates a <b>quantum feature map</b> of our classical data.
+ </p>
+ </div>
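As a minimal illustration of what this encoding does, the AngleEmbedding call in the code above is equivalent to one RY rotation per qubit (the feature values here are made up):

    import numpy as np
    import pennylane as qml

    dev = qml.device("default.qubit", wires=3)

    @qml.qnode(dev)
    def embed(x):
        qml.AngleEmbedding(features=x, wires=range(3), rotation='Y')
        return qml.state()

    @qml.qnode(dev)
    def embed_explicit(x):
        # The same feature map written out gate by gate
        for i in range(3):
            qml.RY(x[i], wires=i)
        return qml.state()

    x = np.array([0.5, 1.2, 2.8])  # scaled features in [0, pi]
    print(np.allclose(embed(x), embed_explicit(x)))  # True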
<div class="alert alert-block alert-block alert-text-normal">
2503+
<b>Variational Ansatz:</b>
2504+
<p>
2505+
<p>After embedding, we apply a layer of trainable rotations and
2506+
entangling gates (StronglyEntanglingLayers), creating a parameterized
2507+
circuit whose outputs depend on adjustable weights . Measuring the
2508+
expectation &#10216;Z&#10217; of the first qubit yields a value in \( [&#8211;1,1] \), which we
2509+
convert to a class probability via \( (1 &#8211; &#10216;Z&#10217;)/2 \).
2510+
</p>
2511+
</div>
2512+
2513+
2514+
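A short sketch of this measurement-to-probability step, using the same circuit shape as in the code above (the input values are illustrative):

    import numpy as np
    import pennylane as qml
    from pennylane import numpy as pnp

    n_qubits = 3
    dev = qml.device("default.qubit", wires=n_qubits)

    @qml.qnode(dev)
    def circuit(weights, x):
        qml.AngleEmbedding(features=x, wires=range(n_qubits), rotation='Y')
        qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))
        return qml.expval(qml.PauliZ(0))

    # The template dictates the weight shape: (layers, qubits, 3 angles)
    shape = qml.StronglyEntanglingLayers.shape(n_layers=1, n_wires=n_qubits)
    weights = pnp.array(0.01 * np.random.randn(*shape), requires_grad=True)

    z = circuit(weights, np.array([0.5, 1.0, 1.5]))  # <Z> lies in [-1, 1]
    p1 = (1 - z) / 2  # class-1 probability in [0, 1]
    print(z, p1)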
<div class="alert alert-block alert-block alert-text-normal">
2515+
<b>Training:</b>
2516+
<p>
2517+
<p>We optimize the circuit parameters by minimizing the binary
2518+
cross-entropy loss between the predicted probabilities and true
2519+
labels. Binary cross-entropy (log-loss) is a standard choice for
2520+
binary classification , adjusting weights to improve the match between
2521+
predictions and targets. We use PennyLane&#8217;s GradientDescentOptimizer
2522+
(or AdamOptimizer) to update parameters via backpropagated gradients.
2523+
</p>
2524+
</div>
2525+
2526+
2527+
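The optimizer swap mentioned here is a one-line change. A toy sketch of the step_and_cost API, with a quadratic cost standing in for cross_entropy_loss (the step size is illustrative):

    import pennylane as qml
    from pennylane import numpy as pnp

    def cost(w):
        # Stand-in for cross_entropy_loss(w, X_train, y_train)
        return pnp.sum(w ** 2)

    w = pnp.array([0.5, -0.3], requires_grad=True)
    opt = qml.AdamOptimizer(stepsize=0.1)  # drop-in for GradientDescentOptimizer
    for it in range(20):
        w, c = opt.step_and_cost(cost, w)
    print(w, c)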
<div class="alert alert-block alert-block alert-text-normal">
2528+
<b>Evaluation:</b>
2529+
<p>
2530+
<p>Finally, we compute accuracy, precision, and recall on the
2531+
dataset. Accuracy is the fraction of correct predictions. Precision is
2532+
the fraction of predicted &#8220;good&#8221; credits that are truly good, and
2533+
recall is the fraction of actual good credits that are correctly
2534+
identified. These metrics are standard in classification tasks .
2535+
</p>
2536+
</div>
2537+
</section>
2538+
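These definitions translate directly into code; a quick check with made-up predictions shows that the sklearn metrics used above agree with the first-principles formulas:

    import numpy as np
    from sklearn.metrics import accuracy_score, precision_score, recall_score

    y_true = np.array([1, 0, 1, 1, 0, 1])  # made-up labels
    y_pred = np.array([1, 0, 0, 1, 1, 1])  # made-up predictions

    tp = np.sum((y_pred == 1) & (y_true == 1))  # true positives
    fp = np.sum((y_pred == 1) & (y_true == 0))  # false positives
    fn = np.sum((y_pred == 0) & (y_true == 1))  # false negatives

    print("accuracy :", np.mean(y_pred == y_true), accuracy_score(y_true, y_pred))
    print("precision:", tp / (tp + fp), precision_score(y_true, y_pred))
    print("recall   :", tp / (tp + fn), recall_score(y_true, y_pred))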
  <section>
  <h2 id="applications-and-examples">Applications and examples</h2>

@@ -2387,12 +2598,6 @@ <h2 id="more-on-applications">More on applications</h2>
  <p>Small-scale QML experiments have been run on IBM, Google, and IonQ devices. These serve as proof-of-concept for the hybrid model.</p>
  </div>

- <p>Each application comes with domain-specific twists, but all rely on
- the core ideas of Chapters 1–4: encoding data, parameterized circuits,
- measurement, and classical optimization. As hardware improves, more
- complex QML tasks (e.g. image recognition, chemistry simulations,
- finance models) may become feasible.
- </p>

  <!-- A. Abbas et al., "The power of quantum neural networks," Nature Comput. Sci. 1, 403–409 (2021). -->
  <!-- E. Anschuetz and B. Kiani, "Quantum variational algorithms are swamped with traps," Nat. Commun. 13, 7760 (2022). -->
