diff --git a/content/c-pointers-arrays-strings/pointers.md b/content/c-pointers-arrays-strings/pointers.md
index f7a14bc..371ea27 100644
--- a/content/c-pointers-arrays-strings/pointers.md
+++ b/content/c-pointers-arrays-strings/pointers.md
@@ -401,7 +401,6 @@ int main() {
}
```
-
:::{figure} images/array-indexing.png
:label: fig-ptr-indexing
:width: 80%
diff --git a/content/floating-point/exercises.md b/content/floating-point/exercises.md
new file mode 100644
index 0000000..e8f2add
--- /dev/null
+++ b/content/floating-point/exercises.md
@@ -0,0 +1,26 @@
+---
+title: "Exercises"
+subtitle: "Check your knowledge before section"
+---
+
+## Conceptual Review
+
+1. Question
+
+:::{note} Solution
+:class: dropdown
+
+Solution
+
+
+:::
+
+## Short Exercises
+
+1. **True/False**:
+
+:::{note} Solution
+:class: dropdown
+**True.** Explanation
+:::
+
diff --git a/content/floating-point/fp-discussion.md b/content/floating-point/fp-discussion.md
new file mode 100644
index 0000000..388abb7
--- /dev/null
+++ b/content/floating-point/fp-discussion.md
@@ -0,0 +1,145 @@
+---
+title: "Floating Point: More Discussion"
+subtitle: "This content is not tested"
+---
+
+(sec-float-discussion)=
+## Learning Outcomes
+
+* Understand which floating point formats are used in practice
+
+::::{note} 🎥 Lecture Video
+:class: dropdown
+
+:::{iframe} https://www.youtube.com/embed/VkLcogCQAho
+:width: 100%
+:title: "[CS61C FA20] Lecture 06.5 - Floating Point: Floating Point Discussion"
+:::
+
+::::
+
+In a previous version of the course, we covered floating point in much more detail over multiple lectures. In recent semesters, we have reduced floating point topics to focus on the core of the standard, and we have not covered more advanced topics like arithmetic, casting, and other floating-point representations. For now, we leave this out-of-scope content below as general reference.
+
+## Floating Point Addition
+
+Let's consider arithmetic with floating point numbers.
+
+Floating point addition is more complex than integer addition. We can't just add significands without considering the exponent value. In general:
+
+* Denormalize to match exponents
+* Add significands together
+* Keep the matched exponent
+* Normalize, possibly changing the exponent
+* (Note: If signs differ, just perform a subtract instead.)
+
+Because of how floating point numbers are stored, simple operations like addition are not always associative.
+
+Define `x`, `y`, and `z` as $-1.5 \times 10^{38}$, $1.5 \times 10^{38}$, and $1.0$, respectively.
+
+$$
+\begin{align}
+\texttt{x + (y + z)} &= -1.5 \times 10^{38} + (1.5 \times 10^{38} + 1.0) \\
+&= -1.5 \times 10^{38} + (1.5 \times 10^{38}) \\
+&= 0.0
+\end{align}
+$$
+
+$$
+\begin{align}
+\texttt{(x + y) + z} &= (-1.5 \times 10^{38} + 1.5 \times 10^{38}) + 1.0 \\
+&= 0.0 + 1.0\\
+&= 1.0
+\end{align}
+$$
+
+
+Remember, floating point effectively **approximates** real results. With bigger exponents, step size between floats gets bigger too. In this example, $1.5 \times 10^{38}$ is so much larger than $1.0$ that $1.5 \times 10^{38} + 1.0$ in floating point representation rounds to $1.5 \times 10^{38}$.
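+
+This non-associativity is easy to reproduce in C. Below is a minimal sketch, assuming a typical machine where `float` arithmetic is actually performed in single precision:
+
+```c
+#include <stdio.h>
+
+int main(void) {
+    float x = -1.5e38f, y = 1.5e38f, z = 1.0f;
+    /* y + z rounds back to y, so the outer sum collapses to zero. */
+    printf("x + (y + z) = %f\n", x + (y + z)); /* 0.000000 */
+    /* x + y cancels exactly, so z survives. */
+    printf("(x + y) + z = %f\n", (x + y) + z); /* 1.000000 */
+    return 0;
+}
+```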
+
+## Floating Point Rounding Modes
+
+When we perform math on real numbers, we have to worry about rounding to fit the result in the significand field. The floating point hardware carries two extra bits of precision, and then rounds to get the proper value.
+
+There are four primary rounding modes:
+
+* **Round towards $+\infty$**. ALWAYS round “up”: 2.001 → 3, -2.001 → -2
+* **Round towards $-\infty$**. ALWAYS round “down”: 1.999 → 1, -1.999 → -2
+* **Truncate**. Just drop the last bits (round towards 0)
+* **Unbiased**. If midway, round to even.
+
+The unbiased mode is the default, though the others can be specified. Unbiased works _almost_ like normal rounding. Generally, we round to the nearest representable number, e.g., 2.4 rounds to 2 and 2.6 rounds to 3. If the value is exactly midway ("a tie"), we round to the nearest even number, e.g., 2.5 rounds to 2 and 3.5 rounds to 4. In other words, on ties we round up half the time and round down the other half. This "unbiased" nature keeps calculations fair by balancing out rounding errors.
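+
+C exposes these rounding modes through `<fenv.h>`. Here is a hedged sketch (the rounding-mode macros are optional in C99, and `rint` rounds according to the current mode):
+
+```c
+#include <fenv.h>
+#include <math.h>
+#include <stdio.h>
+
+int main(void) {
+    /* Default mode: round-to-nearest, ties-to-even ("unbiased"). */
+    printf("%.0f %.0f\n", rint(2.5), rint(3.5)); /* 2 4 */
+
+    fesetround(FE_UPWARD);      /* round towards +infinity */
+    printf("%.0f\n", rint(2.001));               /* 3 */
+
+    fesetround(FE_DOWNWARD);    /* round towards -infinity */
+    printf("%.0f\n", rint(-1.999));              /* -2 */
+
+    fesetround(FE_TOWARDZERO);  /* truncate */
+    printf("%.0f\n", rint(-2.001));              /* -2 */
+
+    fesetround(FE_TONEAREST);   /* restore the default */
+    return 0;
+}
+```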
+
+## Casting and converting
+
+Rounding also occurs when converting between numeric types. In C:
+
+* **`int` to `float`**: There are large integers that a `float` cannot represent exactly because it lacks enough bits in the significand. For instance, $2^{24} + 1$ will "snap" to the nearest representable float, $2^{24}$.
+* **`float` to `int`**: Floating point values with fractional components simply don't have exact integer representations. C uses **truncation** (rounding towards zero) when converting a floating point value to an integer. For example, `(int) 1.5` gets chopped to `1`.
+
+Double-casting therefore does not work as expected. Code A and Code B below may not always print `"true"`:
+
+```c
+/* Code A: may fail when i is too large for a float's 24-bit
+ * mantissa, e.g., i = 16777217 (2^24 + 1) converts to 2^24. */
+int i = …;
+if (i == (int)((float) i)) {
+    printf("true\n");
+}
+
+/* Code B: may fail when f has a fractional component, e.g.,
+ * f = 1.5 truncates to 1, which converts back to 1.0f. */
+float f = …;
+if (f == (float)((int) f)) {
+    printf("true\n");
+}
+```
+
+## Other Floating Point Representations
+
+### Precision vs. Accuracy
+
+Recall from before:
+
+* **Precision** is a count of the number of bits used to represent a value.
+* **Accuracy** is the difference between the actual value of a number and its computer representation.
+
+High precision permits high accuracy but doesn’t guarantee it.
+It is possible to have high precision but low accuracy.
+
+For example, consider `float pi = 3.14;`. `pi` will be represented using all 23 bits of the significand ("highly precise"), but it is only an approximation of $\pi$ ("not accurate").
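+
+A quick way to see this in C (a sketch; the printed digits assume IEEE 754 single precision):
+
+```c
+#include <stdio.h>
+
+int main(void) {
+    float pi = 3.14f;      /* all 23 significand bits are used */
+    printf("%.10f\n", pi); /* prints 3.1400001049, not 3.14 */
+    return 0;
+}
+```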
+
+Below, we discuss other floating point representations that can yield more accurate numbers in certain cases. However, because all of these representations are fixed precision (i.e., fixed bit-width) we cannot represent everything perfectly.
+
+### Even More Floating Point Representations
+
+Still more representations exist. Here are a few from the IEEE 754 standard:
+
+* **Quad-precision**, or IEEE 754 quadruple-precision format binary128. Defined as 128 bits (15 exponent bits, 112 significand bits) with enormous range and precision.
+* **Oct-Precision**, or IEEE 754 octuple-precision format binary256. Defined as 256 bits (19 exponent bits, 237 significand bits).
+* **Half-Precision**, or IEEE 754 half-precision format binary16. Defined as 16 bits (5 exponent bits, 10 significand bits).
+
+Domain-specific architectures demand different number formats (@tab-float-types). For example, the bfloat16[^bf16] on Google's Tensor Processing Unit (TPU) is defined over 16 bits (8 exponent bits, 7 significand bits); because of its wider exponent field, it covers the same range as the IEEE 754 single-precision format at the expense of significand precision. This tradeoff suits neural network training, where gradients shrink towards zero and dynamic range matters more than significand precision.
+
+:::{table} Different domain accelerators support various integer and floating-point formats.
+:label: tab-float-types
+:align: center
+
+| Accelerator | int4 | int8 | int16 | fp16 | bf16[^bf16] | fp32 | tf32[^tf32] |
+| :--- | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
+| Google TPU v1 | | x | | | | | |
+| Google TPU v2 | | | | | x | | |
+| Google TPU v3 | | | | | x | | |
+| Nvidia Volta TensorCore | | x | | x | | x | |
+| Nvidia Ampere TensorCore | x | x | x | x | x | x | x |
+| Nvidia DLA | | x | x | x | | | |
+| Intel AMX | | x | | | x | | |
+| Amazon AWS Inferentia | | x | | x | x | | |
+| Qualcomm Hexagon | | x | | | | | |
+| Huawei Da Vinci | | x | | x | | | |
+| MediaTek APU 3.0 | | x | x | x | | | |
+| Samsung NPU | | x | | | | | |
+| Tesla NPU | | x | | | | | |
+
+:::
+
+[^tf32]: See [Nvidia's TensorFloat-32](https://en.wikipedia.org/wiki/TensorFloat-32).
+[^bf16]: See [Google's bfloat16](https://docs.cloud.google.com/tpu/docs/bfloat16).
+
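+Because bfloat16 keeps the same 8-bit exponent as single-precision, converting between the two amounts to keeping or dropping the low 16 bits. Here is a sketch with hypothetical helpers (not library functions); real conversions typically round rather than truncate:
+
+```c
+#include <stdint.h>
+#include <string.h>
+
+/* Truncate an IEEE 754 single-precision float to bfloat16:
+ * keep sign (1 bit), exponent (8 bits), top 7 significand bits. */
+static uint16_t float_to_bf16(float f) {
+    uint32_t bits;
+    memcpy(&bits, &f, sizeof bits); /* reinterpret bits, not value */
+    return (uint16_t)(bits >> 16);
+}
+
+/* Widen bfloat16 back to float: low 16 significand bits become 0. */
+static float bf16_to_float(uint16_t h) {
+    uint32_t bits = (uint32_t)h << 16;
+    float f;
+    memcpy(&f, &bits, sizeof f);
+    return f;
+}
+```
+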
+For those interested, we recommend reading about the proposed [Unum format](https://en.wikipedia.org/wiki/Unum_%28number_format%29), which suggests using _variable_ field widths for the exponent and significand. This format adds a "u-bit" to tell whether the number is exact or in-between unums.
\ No newline at end of file
diff --git a/content/floating-point/fp-examples.md b/content/floating-point/fp-examples.md
new file mode 100644
index 0000000..9a43d33
--- /dev/null
+++ b/content/floating-point/fp-examples.md
@@ -0,0 +1,207 @@
+---
+title: "Normalized Numbers: Practice"
+---
+
+## Learning Outcomes
+
+* Practice converting between IEEE 754 single-precision floating point formats to decimal numbers.
+
+
+::::{note} 🎥 Lecture Video
+:class: dropdown
+
+:::{iframe} https://www.youtube.com/embed/7MRtSYK1IOI
+:width: 100%
+:title: "[CS61C FA20] Lecture 06.4 - Floating Point: Examples, Discussion"
+:::
+
+::::
+
+In this course, we expect that you can translate between a number's IEEE 754 floating point format and its decimal representation. We give some examples below.
+
+In practice, you can and should use floating point converters. [This web app](https://www.h-schmidt.net/FloatConverter/IEEE754.html) is a fantastic converter for exploring numbers beyond those discussed below!
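+
+If you prefer to check your work programmatically, here is a small C sketch (the helper `print_fields` is our own, not a standard function) that splits a `float` into its three IEEE 754 fields:
+
+```c
+#include <stdint.h>
+#include <stdio.h>
+#include <string.h>
+
+static void print_fields(float f) {
+    uint32_t bits;
+    memcpy(&bits, &f, sizeof bits); /* reinterpret the 32 bits */
+    printf("s=%u exponent=0x%02X significand=0x%06X\n",
+           (unsigned)(bits >> 31),
+           (unsigned)((bits >> 23) & 0xFF),
+           (unsigned)(bits & 0x7FFFFF));
+}
+
+int main(void) {
+    print_fields(-7.5f); /* s=1 exponent=0x81 significand=0x700000 */
+    return 0;
+}
+```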
+
+## Example 1: Floating Point to Decimal
+
+:::{tip} Example 1
+
+What is the decimal number represented by this IEEE 754 single-precision binary floating point format?
+
+| s | exponent | significand |
+| :--: | :--: | :--: |
+| `1` | `1000 0001` | `111 0000 0000 0000 0000 0000` |
+
+* **A.** $-7 \times 2^{129}$
+* **B.** -3.5
+* **C.** -3.75
+* **D.** 7
+* **E.** -7.5
+* **F.** Something else
+
+:::
+
+:::{note} Show Answer
+:class: dropdown
+
+**E.** -7.5.
+
+We separate these 32 bits into the bit fields of sign (1 bit), exponent (8 bits), and significand (23 bits) first and translate each part separately.
+
+* s: 1, so sign is negative
+* exponent: `1000 0001` is $128 + 1 = 129$, so exponent value is $129 - 127 = 2$
+* significand: `1110...0`, i.e., the leading bits `111`, so mantissa value is `1.111` (in base 2)
+
+Plug into our formula, noting that components are decimal unless otherwise noted with subscript:
+
+$$
+\begin{align}
+(-1)^\text{s} \times (1 + \text{significand})_{\text{two}} \times 2^{(\text{exponent}-127)} \\
+= (-1)^1 \times (1 + .111)_{\text{two}} \times 2^{(129-127)} \\
+= -1 \times (1.111)_{\text{two}} \times 2^2 \\
+= -111.1_{\text{two}} \\
+= -7.5
+\end{align}
+$$
+
+* Second to last line: $(1.111)_{\text{two}} \times 2^2$ involves moving the binary point to the right by two spots, i.e., $111.1_{\text{two}}$.
+* Last line: Integer component `111` is $7$; fractional component `.1` is $\texttt{1} \times 2^{-1} = 1/2 = 0.5$.
+
+For those interested, this means that writing the C statement `float x = -7.5;` results in `x` having the bit pattern `0xC0F00000`.
+
+:::
+
+## Example 2: Step size with limited precision
+
+Because we have a fixed number of bits (precision), we cannot represent all numbers in a range. With floating point numbers, the exponent field determines our step size.
+
+:::{tip} Example 2
+Suppose `y` has the floating point format below. What is the **step size** around `y`?
+
+| s | exponent | significand |
+| :--: | :--: | :--: |
+| `0` | `1000 0001` | `111 0000 0000 0000 0000 0000` |
+
+_Hint_: Consider the difference between the bit patterns of `y` and the next representable number after (or before) `y`.
+
+:::
+
+:::{note} Show Answer
+:class: dropdown
+
+We consider the next representable number **after** `y`. This involves incrementing the significand by `1` in the least significant bit, which corresponds to the smallest possible increment ("step size"):
+
+| s | exponent | significand |
+| :--: | :--: | :--: |
+| `0` | `1000 0001` | `111 0000 0000 0000 0000 0001` |
+
+This new number is `y + z`, for some small step size `z`:
+
+$$
+y + z = y + \left( (0.0\dots01)_{\text{two}} \times 2^{(\text{exponent}-127)} \right)
+$$
+
+Rather than translating `y` and this new number `y + z` and then taking their difference, we note that we are trying to find `z` itself. Let's figure out exactly what power of 2 `z` represents, given the provided exponent.
+
+Consider the place value of each bit in the mantissa of a binary normalized form, down to the least significant bit `.0...01`:
+
+* Implicit leading 1 is not represent by any of 23 bits but corresponds to $2^0$
+* bit 22 (most significant bit) of significand is $2^{-1}$
+* bit 0 (least significant bit) of significand is $2^{-23}$
+
+In other words, if the exponent value were $127 - 127 = 0$ (no scaling), `z` would be $2^{-23}$. Our exponent field, however, scales this bit to the appropriate power.
+
+The exponent value for `1000 0001` is $129 - 127 = 2$, which scales $2^{-23}$ up by a factor of $2^2$. Our step size `z` is therefore
+
+$$\left(2^{-23} \times 2^2\right) = 2^{-21}.$$
+
+:::
+
+Bigger exponents mean bigger step sizes, and vice versa. This is the desired behavior: for very large numbers, small fractional differences are insignificant. For tiny numbers, however, small step sizes are valuable, and our precision (the bits of the significand) should go towards representing fine differences.
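+
+You can measure step size directly in C with `nextafterf` from `<math.h>`, which returns the next representable `float` in a given direction. A sketch, using the `y` from Example 2:
+
+```c
+#include <math.h>
+#include <stdio.h>
+
+int main(void) {
+    float y = 7.5f; /* exponent value 2, as in Example 2 */
+    float step = nextafterf(y, INFINITY) - y;
+    printf("%g\n", step); /* prints 4.76837e-07, i.e., 2^-21 */
+    return 0;
+}
+```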
+
+## Example 3: Floating Point to Decimal
+
+:::{tip} Example 3
+
+What is the decimal number represented by this IEEE 754 single-precision binary floating point format?
+
+| s | exponent | significand |
+| :--: | :--: | :--: |
+| `0` | `0110 1000` | `101 0101 0100 0011 0100 0010` |
+
+:::
+
+::::{note} Show Answer
+:class: dropdown
+
+$$1.666115 \times 2^{-23} \approx 1.986 \times 10^{-7}$$
+
+
+
+Explanation (for now) is by image (@fig-float-ex3):
+
+:::{figure} images/float-ex3.png
+:label: fig-float-ex3
+:width: 100%
+:alt: "TODO"
+
+Example 3, explained
+:::
+
+::::
+
+## Example 4: Decimal to Floating Point
+
+
+:::{tip} Example 4
+What is $-2.340625 \times 10^1$ in IEEE 754 single-precision binary floating point format?
+:::
+
+::::{note} Show Answer
+:class: dropdown
+
+| s | exponent | significand |
+| :--: | :--: | :--: |
+| `1` | `1000 0011` | `011 1011 0100 0000 0000 0000` |
+
+
+
+Explanation (for now) is by image (@fig-float-ex4):
+
+:::{figure} images/float-ex4.png
+:label: fig-float-ex4
+:width: 100%
+:alt: "TODO"
+
+Example 4, explained
+:::
+
+::::
+
+## Example 5: Decimal to Floating Point
+
+This exercise shows the limitations of accurate representation using the fixed-_precision_ IEEE 754 standard. After all, fixed precision means we only have 32 bits, and binary representations sometimes fall short.
+
+:::{tip} Example 5
+What is $\frac{1}{3}$ in IEEE 754 single-precision binary floating point format?
+:::
+
+::::{note} Show Answer
+:class: dropdown
+
+| s | exponent | significand |
+| :--: | :--: | :--: |
+| `0` | `0111 1101` | `010 1010 1010 1010 1010 1011` |
+
+(The repeating mantissa bits `0101...` continue past the 23-bit significand, so under the default round-to-nearest-even mode the last significand bit rounds up to `1`.)
+
+
+Explanation (for now) is by image (@fig-float-ex5):
+
+:::{figure} images/float-ex5.png
+:label: fig-float-ex5
+:width: 100%
+:alt: "TODO"
+
+Example 5, explained
+:::
+
+::::
\ No newline at end of file
diff --git a/content/floating-point/fp-floating-point.md b/content/floating-point/fp-floating-point.md
new file mode 100644
index 0000000..b8717ab
--- /dev/null
+++ b/content/floating-point/fp-floating-point.md
@@ -0,0 +1,193 @@
+---
+title: "Normalized Numbers"
+---
+
+## Learning Outcomes
+
+* Understand how the normalized number representation in the IEEE 754 single-precision floating point standard is inspired by scientific notation.
+* Identify how the IEEE 754 single-precision floating point format uses three fields:
+ * The sign bit represents the sign.
+ * The exponent field represents the exponent value.
+ * The significand field represents the fractional part of the mantissa value.
+* For normalized numbers:
+ * The exponent field represents the exponent value as a bias-encoded number.
+ * The mantissa always has an implicit, leading one.
+
+::::{note} 🎥 Lecture Video
+:class: dropdown
+
+:::{iframe} https://www.youtube.com/embed/GzOMIRj1yO0
+:width: 100%
+:title: "[CS61C FA20] Lecture 06.2 - Floating Point: Floating Point"
+:::
+
+Omitted (for the purposes of the next section): Overflow and Underflow, 6:54 - 8:40
+
+::::
+
+## Intuition: Scientific Notation
+
+Instead of identifying the fixed binary point location across every number in our representation, the **floating point** paradigm determines a binary point location for _each_ number.
+
+Consider the decimal number $0.1640625$, which has binary representation[^exercise]
+
+[^exercise]: We leave this conversion as an exercise to the reader.
+
+$$
+\dots\texttt{000000.001010100000}\dots.
+$$
+
+There are many zeros in this binary representation. For example, there is no integer component, so everything to the left of the binary point is zero. In fact, there is really only one "interesting" range with some "energy", i.e., varying ones and zeros: `10101`. This bit pattern starts three places to the right of the binary point.
+
+With these two pieces of information—what the **significant bits** are, and what **exponent** the significant bits are associated with—we can suddenly represent both very large and very small numbers. This is the intuition behind **scientific notation**.
+
+### Scientific Notation (Base-10)
+
+You may have seen scientific notation in a physics or chemistry course, but we review it here and introduce core terminology that carries over into the binary case. The scientific notation for the number $0.1640625$ is $1.640625 \times 10^{-1}$ (@fig-scientific-notation-base10):
+
+:::{figure} images/scientific-notation-base10.png
+:label: fig-scientific-notation-base10
+:width: 40%
+:alt: "TODO"
+
+Scientific notation assumes a **normalized** form, where the mantissa has exactly one non-zero digit to the left of the decimal point.
+:::
+
+* **Radix**: The base. In decimal, base 10.
+* **Mantissa**: The "energy" of the number, i.e., the [significant figures](https://en.wikipedia.org/wiki/Significant_figures) (1.640625 in @fig-scientific-notation-base10).
+* **Exponent**: The power the radix is raised to ($-1$ in @fig-scientific-notation-base10).
+
+Scientific notation assumes a **normalized form** of numbers, where the mantissa has exactly one non-zero digit to the left of the decimal point.
+
+Every number represented in scientific notation has exactly **one normalized form** for a given number of significant figures. For example, the number $1/1000000$ has normalized form (for two significant figures) $1.0 \times 10^{-6}$ and non-normalized forms $0.1 \times 10^{-5}$, $10 \times 10^{-7}$, and so on.
+
+### Binary Normalized Form
+
+The number $0.1640625$ in a binary normalized form is $1.0101_{\text{two}} \times 2^{-3}$ (@fig-scientific-notation-base2):
+
+:::{figure} images/scientific-notation-base2.png
+:label: fig-scientific-notation-base2
+:width: 40%
+:alt: "TODO"
+
+In binary, we also assume a **normalized** form, where the mantissa has exactly one non-zero digit to the left of the binary point.
+:::
+
+* **Radix**: The base. In binary, base 2.
+* **Mantissa**: $1.0101$ in @fig-scientific-notation-base2.
+* **Exponent**: $-3$ in @fig-scientific-notation-base2.
+
+We close this discussion with two important points:
+
+* A 32-bit **floating point representation** of binary normalized form should allocate two separate fields within the 32 bits—one to represent the mantissa, and one to represent the exponent.
+* With binary normalized form, the mantissa **always has a leading one**.
+
+:::{warning} Binary normalized form means mantissa is always of the form `1.xyz...`!
+:class: dropdown
+
+Why? Consider the definition of normalized form:
+
+> The mantissa has exactly one non-zero digit to the left of the decimal point.
+
+In binary, there is only one non-zero digit: `1`. This means that a representation like $0.1101_{\text{two}} \times 2^{2}$ is not normalized; we move the binary point one spot to the right and compensate by decrementing the exponent, giving the normalized form $1.101_{\text{two}} \times 2^{1}$.
+
+:::
+
+## IEEE 754 Single-Precision Floating Point
+
+This discussion leads us to the definition of the **IEEE 754 Single-Precision Floating Point**, which is used for the C `float` variable type. This format leverages binary normalized form to represent a wide range of numbers for scientific computation using **32 bits**.[^double]
+
+[^double]: The IEEE 754 Double-Precision Floating Point is used for the C `double` variable type, which is 64 bits. Read more in a [bonus section](#sec-double).
+
+This standard was pioneered by UC Berkeley Professor William ("Velvel") Kahan. Prior to Kahan's system, the ecosystem for representing floating points was chaotic, and calculations on one machine would give different answers on another. By leading the effort to centralize floating point arithmetic into the IEEE 754 standard, Professor Kahan [earned the Turing Award in 1988](https://people.eecs.berkeley.edu/~wkahan/ieee754status/754story.html).
+
+The three fields in the IEEE 754 standard are designed to maximize the **accuracy** of representing many numbers using the same limited **precision** of 32 bits (for single-precision, and 64 bits for double-precision). This leads us to two important definitions to help us quantify the efficacy of this number representation:
+
+* **Precision** is a count of the number of bits used to represent a value.
+* **Accuracy** is the difference between the actual value of a number and its computer representation.
+
+Without further ado, @fig-float defines the three fields in the IEEE 754 single-precision floating point for 32 bits.
+
+:::{figure} images/float.png
+:label: fig-float
+:width: 100%
+:alt: "TODO"
+
+Bit fields in IEEE 754 single-precision floating point. The least significant bit (rightmost) is indexed 0; the most significant bit (leftmost) is indexed 31.
+:::
+
+(sec-normalized)=
+## Normalized Numbers
+
+The three fields in @fig-float can be used to represent normalized numbers of the form in Equation @eq-float-normalized.
+
+(eq-float-normalized)=
+:::{math}
+:enumerated: true
+(-1)^\text{s} \times (1 + \text{significand}) \times 2^{(\text{exponent}-127)}
+:::
+
+This design may seem esoteric at first glance. Why is there a $1+$? Where did the mantissa go—what's a significand? Why is the exponent offset by $-127$? As stated before, these three fields are defined to maximize the accuracy of representable numbers in 32 bits. In other words, _none of the bit patterns in these three fields use two's complement_! Instead, @tab-float defines precisely how the standard interprets each field's bit pattern when representing normalized numbers.
+
+:::{table} Sign, exponent, and significand fields for normalized numbers.
+:label: tab-float
+:align: center
+
+| Field Name | Represents | Normalized Numbers[^exp-norm] |
+| :--- | :-- | :--- |
+| s | Sign | 1 is negative; 0 is positive |
+| exponent | Bias-Encoded Exponent | Subtract 127 from exponent field to get the exponent value. |
+| significand | Fractional Component of the Mantissa | Interpret the significand as a 23-bit fraction (`0.xx...xx`) and add 1 to get the mantissa value. |
+
+:::
+
+[^exp-norm]: Only valid when the exponent field is in the range `00000001` (1) to `11111110` (254), i.e., when it is neither 0 nor 255.
+
+:::{note} Why use bias-encoded exponents?
+:class: dropdown
+
+Designers wanted floating point numbers to be supported even when specialized floating point hardware didn't exist. For example, it should still be possible to sort floating point numbers using just integer compares, as long as we know which bit fields should be considered.
+
+To support sorting numbers of the same sign with just integer hardware, a bigger exponent field should represent bigger numbers. With two's complement, negative numbers would look bigger (because they lead with `1`, the sign bit). With bias encoding, on the other hand, all floating point numbers are ordered by the value of the exponent field: the `000...000` exponent field represents the smallest exponent value (numbers are $\times 2^{-127}$), and the `111...111` exponent field represents the largest exponent value (numbers are $\times 2^{128}$).
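+
+For intuition, here is a hedged sketch: for two non-negative floats (NaNs excluded), comparing the raw bit patterns as unsigned integers agrees with comparing the float values.
+
+```c
+#include <stdint.h>
+#include <string.h>
+
+static int bits_less(float a, float b) { /* assumes a, b >= 0, not NaN */
+    uint32_t ua, ub;
+    memcpy(&ua, &a, sizeof ua); /* grab the raw bit patterns */
+    memcpy(&ub, &b, sizeof ub);
+    return ua < ub;             /* agrees with (a < b) */
+}
+```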
+
+:::
+
+:::{note} Why not directly represent mantissa? Why assume an implicit 1?
+:class: dropdown
+
+Remember that in binary normalized form, the mantissa _always_ leads with a 1. IEEE 754 represents normalized numbers by assuming that there is _always_ an implicit 1, then having the significand explicitly represent the bits after the binary point. In other words, for normalized numbers, it is always true that $0 \leq \text{significand} < 1$.
+
+This assumption for normalized numbers helps IEEE 754 single-precision pack more representable (normalized) numbers into the same 23 bits, because now we represent 24-bit (normalized) mantissas! In other words, the precision is 24 bits, though we only store 23 bits.
+:::
+
+## Zero, Infinity, and More
+
+[Normalized numbers](#sec-normalized) are not the only values that can be represented by the IEEE 754 standard. We discuss zero, infinity, and other numbers in [a later section](#sec-special-floats).
+
+(sec-double)=
+### IEEE 754 Double-Precision Floating Point
+
+The IEEE 754 double-precision floating point standard is used for the C `double` variable type. It has three fields, now over 64 bits:
+
+* Sign: Still 1 sign bit (most significant bit, bit index 63)
+* Exponent: 11 bits with bias -1023
+* Significand: now 52 bits
+
+The primary advantage is greater accuracy due to the larger significand. The normalized form can represent numbers from about $2.0 \times 10^{-308}$ to $2.0 \times 10^{308}$.
+
+## Use a Floating Point Converter
+
+Check out [this web app](https://www.h-schmidt.net/FloatConverter/IEEE754.html) for a simple converter between decimal numbers and their IEEE 754 single-precision floating point format.
+
+While we would love for you to use the converter to understand the comic in @fig-smbc-float, it should be noted that the robot is assuming IEEE 754 **double**-precision format (see [another section](#sec-double)).
+
+:::{figure} images/smbc-float.png
+:label: fig-smbc-float
+:alt: "TODO"
+:align: center
+:width: 50%
+
+Welcome to the Secret Robot Internet ([SMBC Comics](https://www.smbc-comics.com/comic/2013-06-05)).
+:::
+
+Because floating point is based on powers of two, it cannot represent most decimal fractions exactly. Here, 0.3 is inaccurately represented as `0.29999999999`, and $0.1 + 0.2$ with `double`s is `0.30000000000000004`. Read more about `double`s, addition, and accuracy in [another section](#sec-float-discussion).
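+
+You can reproduce the comic's complaint in a few lines of C (a sketch using `double`s):
+
+```c
+#include <stdio.h>
+
+int main(void) {
+    double sum = 0.1 + 0.2;
+    printf("%.17g\n", sum);                             /* 0.30000000000000004 */
+    printf("%s\n", sum == 0.3 ? "equal" : "not equal"); /* not equal */
+    return 0;
+}
+```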
diff --git a/content/floating-point/fp-special-numbers.md b/content/floating-point/fp-special-numbers.md
new file mode 100644
index 0000000..3fdfbef
--- /dev/null
+++ b/content/floating-point/fp-special-numbers.md
@@ -0,0 +1,55 @@
+---
+title: "Special Numbers"
+---
+
+(sec-special-floats)=
+## Learning Outcomes
+
+* Understand how the IEEE 754 standard represents zero, infinity, and NaNs
+* Understand when floating point numbers trigger overflow or underflow
+* Understand how denormalized numbers implement "gradual" underflow
+* Convert denormalized numbers into their decimal counterpart
+
+
+::::{note} 🎥 Lecture Video (overflow and underflow)
+:class: dropdown
+
+:::{iframe} https://www.youtube.com/embed/GzOMIRj1yO0
+:width: 100%
+:title: "[CS61C FA20] Lecture 06.2 - Floating Point: Floating Point"
+:::
+
+Overflow and Underflow, 6:54 - 8:40
+
+::::
+
+::::{note} 🎥 Lecture Video (everything else)
+:class: dropdown
+
+:::{iframe} https://www.youtube.com/embed/Gs0ARZzY-gM
+:width: 100%
+:title: "[CS61C FA20] Lecture 06.3 - Floating Point: Special Numbers"
+:::
+
+::::
+
+**Normalized numbers** are only a _fraction_ (heh) of floating point representations. For single-precision (32-bit), IEEE defines the following numbers based on the exponent field (here, the "biased exponent"):
+
+
+:::{table} Exponent field values for IEEE 754 single-precision.
+:label: tab-float-exp-fields
+:align: center
+
+| Biased Exponent | Significand field | Description |
+| :--- | :--- | :--- |
+| 0 (`00000000`) | all zeros | $\pm 0$ |
+| 0 (`00000000`) | nonzero | Denormalized numbers |
+| 1 – 254 | anything | Normalized floating point (mantissa has implicit leading 1) |
+| 255 (`11111111`) | all zeros | $\pm \infty$ |
+| 255 (`11111111`) | nonzero | `NaN`s |
+
+:::
+
+In this section, we will motivate why these "special numbers" exist by considering the pitfalls of overflow **and** underflow. Then, we'll define each of the special numbers.
+
+## Overflow and Underflow
\ No newline at end of file
diff --git a/content/floating-point/images/fixed-point-add.png b/content/floating-point/images/fixed-point-add.png
new file mode 100644
index 0000000..66e63d9
Binary files /dev/null and b/content/floating-point/images/fixed-point-add.png differ
diff --git a/content/floating-point/images/fixed-point-mul.png b/content/floating-point/images/fixed-point-mul.png
new file mode 100644
index 0000000..fa54354
Binary files /dev/null and b/content/floating-point/images/fixed-point-mul.png differ
diff --git a/content/floating-point/images/fixed-point.png b/content/floating-point/images/fixed-point.png
new file mode 100644
index 0000000..0359f32
Binary files /dev/null and b/content/floating-point/images/fixed-point.png differ
diff --git a/content/floating-point/images/float-ex3.png b/content/floating-point/images/float-ex3.png
new file mode 100644
index 0000000..2533dc4
Binary files /dev/null and b/content/floating-point/images/float-ex3.png differ
diff --git a/content/floating-point/images/float-ex4.png b/content/floating-point/images/float-ex4.png
new file mode 100644
index 0000000..9b0b016
Binary files /dev/null and b/content/floating-point/images/float-ex4.png differ
diff --git a/content/floating-point/images/float-ex5.png b/content/floating-point/images/float-ex5.png
new file mode 100644
index 0000000..9b883fe
Binary files /dev/null and b/content/floating-point/images/float-ex5.png differ
diff --git a/content/floating-point/images/float.png b/content/floating-point/images/float.png
new file mode 100644
index 0000000..1ec63ab
Binary files /dev/null and b/content/floating-point/images/float.png differ
diff --git a/content/floating-point/images/scientific-notation-base10.png b/content/floating-point/images/scientific-notation-base10.png
new file mode 100644
index 0000000..25643ab
Binary files /dev/null and b/content/floating-point/images/scientific-notation-base10.png differ
diff --git a/content/floating-point/images/scientific-notation-base2.png b/content/floating-point/images/scientific-notation-base2.png
new file mode 100644
index 0000000..d893be3
Binary files /dev/null and b/content/floating-point/images/scientific-notation-base2.png differ
diff --git a/content/floating-point/images/smbc-float.png b/content/floating-point/images/smbc-float.png
new file mode 100644
index 0000000..2cb4f28
Binary files /dev/null and b/content/floating-point/images/smbc-float.png differ
diff --git a/content/floating-point/index.md b/content/floating-point/index.md
new file mode 100644
index 0000000..364ca0f
--- /dev/null
+++ b/content/floating-point/index.md
@@ -0,0 +1,147 @@
+---
+title: "Motivation: Fixed Point"
+---
+
+## Learning Outcomes
+
+* Understand the limits of a "fixed point" representation of numbers with fractional components.
+* Compute range and step size for integer and fixed-point representations.
+* Identify limitations of fixed-point representations.
+
+::::{note} 🎥 Lecture Video
+:class: dropdown
+
+:::{iframe} https://www.youtube.com/embed/-nB32FCrlTA
+:width: 100%
+:title: "[CS61C FA20] Lecture 06.1 - Floating Point: Basics & Fixed Point"
+:::
+
+::::
+
+Learning architecture means learning binary abstraction! Let’s revisit number representation. Now, our goal is to use 32 bits to represent numbers with fractional components—like 4.25, 5.17, and so on. We will learn the **IEEE 754 Single-Precision Floating Point** standard.
+
+To start, I want to share a quote from James Gosling (the creator of Java) from back in 1998.[^gosling-cite]
+
+> “95% of the folks out there are completely clueless about floating-point.”
+
+[^gosling-cite]: From a keynote talk delivered by James Gosling on February 28, 1998. The original link is lost, but Professor Will Kahan's ["How Java’s Floating-Point Hurts Everyone Everywhere"](https://people.eecs.berkeley.edu/~wkahan/JAVAhurt.pdf) gives reasonable context for Gosling's talk, proposal, and alternatives.
+
+You will be in the 5% after this chapter :-)
+
+## Metrics: Range and Step Size
+
+Recall that N-bit strings can represent up to $2^{N}$ distinct values. We have covered several N-bit systems for representing integers so far, as shown in @tab-int-systems.
+
+:::{table} A subset of the N-bit integer representations we have discussed in this class.
+:label: tab-int-systems
+:align: center
+
+| System | \# bits | Minimum | Maximum | Step Size |
+| :--- | :-: | :---: | :---: | :---: |
+| Unsigned integers | 32 | 0 | $2^{32} - 1$, i.e., $4,294,967,295$ | 1 |
+| Signed integers with two's complement | 32 | $-2^{31}$, i.e., $-2,147,483,648$ | $2^{31} - 1$, i.e., $2,147,483,647$ | 1 |
+
+:::
+
+The **range** of a number representation system can be quantified by computing the minimum and maximum representable numbers. Just like with integer representations, we will explicitly compute the range and compare it with some target application. In @tab-int-systems, we see that unsigned integers give a maximum that is about twice that of two's complement—because the latter allocates about half of its bit patterns to representing negative numbers.
+
+The **step size** of a number representation system is defined as the spacing between two consecutive numbers. For integer representations that we consider in this class, step size is always 1 between consecutive integers. On the other hand, with fractional number representations we will not be able to represent the infinite range of (real) numbers between two consecutive numbers, hence step size becomes an important metric of **precision**.[^precision-footnote]
+
+[^precision-footnote]: We will define precision properly soon; for now, consider precision a metric of our representation's ability to represent small changes in numbers.
+
+## Fixed Point
+
+Let's motivate floating point by first exploring a strawman[^strawman] approach: **fixed point**. A "fixed point" system fixes the number of digits used for representing integer and fractional parts.
+
+[^strawman]: Wikipedia: [Straw Man](https://en.wikipedia.org/wiki/Straw_man)
+
+Think back to when you first learned the **decimal point**. To the left of the point, you have your 1s ($10^0$), 10s ($10^1$), 100s ($10^2$), and so on. To the right of the point, the numbers represent tenths ($10^{-1}$), hundredths ($10^{-2}$), thousandths ($10^{-3}$), and so on. The **binary point** functions similarly for a _string of bits_. To the left of the binary point are powers of two: 1 ($2^{0}$), 2 ($2^{1}$), 4 ($2^{2}$), and so on. To the right of the binary point are negative powers of two: 1/2 ($2^{-1}$), 1/4 ($2^{-2}$), 1/8 ($2^{-3}$), and so on.
+
+A 6-bit fixed-point binary representation fixes the binary point to be a specific location in a 6-bit bit pattern. Consider the 6-bit fixed-point representation shown in @fig-fixed-point. This fixes the binary point to be 4 bits in from the right, and assumes that we only represent non-negative numbers.
+
+:::{figure} images/fixed-point.png
+:label: fig-fixed-point
+:width: 80%
+:alt: "TODO"
+
+Under this 6-bit fixed-point representation, the number 2.625 has bit pattern `101010`, read as `10.1010`.
+:::
+
+Under this system, the number 2.625 has bit pattern `101010`:
+
+$$
+\begin{align}
+\texttt{1} \times 2^1 + \texttt{0} \times 2^0 &+ \texttt{1} \times 2^{-1} + \texttt{0} \times 2^{-2} + \texttt{1} \times 2^{-3} + \texttt{0} \times 2^{-4} \\
+ &= 2 + 0.5 + 0.125 \\
+ &= 2.625_{\text{ten}}
+\end{align}
+$$
+
+### Fixed Point: Range and Step Size
+
+:::{tip} Quick Check
+Using this fixed-point representation, what is the range of representable numbers?
+:::
+
+:::{note} Show Answer
+:class: dropdown
+
+Range:
+
+* Smallest number `000000`: zero.
+* Largest number `111111`: $3.9375 = 4 - 2^{-4}$
+
+:::
+
+:::{tip} Quick Check
+Using this fixed-point representation, what is the **step size** between any two consecutive numbers?
+:::
+
+:::{note} Show answer
+:class: dropdown
+
+The smallest fractional step we can take is incrementing the least significant bit, which is $2^{-4} = 1/16$.
+
+:::
+
+### Arithmetic
+
+@fig-fixed-point-add shows that fixed-point addition is simple. By lining up the binary points, fixed-point formats can reuse integer adders.
+
+:::{figure} images/fixed-point-add.png
+:label: fig-fixed-point-add
+:width: 40%
+:alt: "TODO"
+
+$1.5 + 0.5 = 2$ using the 6-bit fixed point representation from @fig-fixed-point.
+:::
+
+However, @fig-fixed-point-mul shows that fixed-point multiplication is more complicated. Just like in decimal multiplication, we must shift the binary point based on the number of fractional digits in the factors.
+
+:::{figure} images/fixed-point-mul.png
+:label: fig-fixed-point-mul
+:width: 40%
+:alt: "TODO"
+
+$1.5 \times 0.5 = 0.75$ using the 6-bit fixed point representation from @fig-fixed-point.
+:::
+
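+To make the reuse of integer hardware concrete, here is a C sketch of the 6-bit format from @fig-fixed-point (the names `q_add` and `q_mul` are our own; this layout is often called "Q2.4": 2 integer bits, 4 fractional bits):
+
+```c
+#include <stdio.h>
+
+typedef unsigned q2_4;  /* value stored as an integer scaled by 2^4 */
+#define FRAC_BITS 4
+
+q2_4 q_add(q2_4 a, q2_4 b) { return a + b; }                /* plain integer add */
+q2_4 q_mul(q2_4 a, q2_4 b) { return (a * b) >> FRAC_BITS; } /* re-align the binary point */
+
+int main(void) {
+    q2_4 three_halves = 0x18; /* 01.1000two = 1.5 */
+    q2_4 one_half     = 0x08; /* 00.1000two = 0.5 */
+    printf("%g\n", q_add(three_halves, one_half) / 16.0); /* 2 */
+    printf("%g\n", q_mul(three_halves, one_half) / 16.0); /* 0.75 */
+    return 0;
+}
+```
+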
+### Use Cases
+
+Fixed-point representations are useful in domains that demand very fast arithmetic on values within a known, limited range. For example, some graphics applications prefer fixed-point for fast calculations when rendering pictures.
+
+## Scientific Numbers
+
+Developers considered what was needed to define a number system that could represent the numbers used in common scientific applications:
+
+* Very large numbers, e.g., the number of seconds in a millennium is $31,556,926,010 = 3.155692610 \times 10^{10}$
+* Very small numbers, e.g., Bohr radius is approximately $0.000000000052917710 = 5.2917710 \times 10^{-11}$ meters
+* Numbers with both integer and fractional components, e.g., $2.625$
+
+A fixed-point representation that could represent all three of these example values would need to be at least 93 bits[^fp-huge], which is much larger than the 32-bit integer systems discussed in @tab-int-systems.
+
+[^fp-huge]: $31,556,926,010 = 3.155692610 \times 10^{10}$ needs 35 bits for the integer part, and $0.000000000052917710 = 5.2917710 \times 10^{-11}$ needs 58 bits for the fractional part.
+
+The problem with fixed point is that once we determine the placement of our binary point, we are stuck. In our six-bit representation from @fig-fixed-point, we have no way of representing numbers much larger than $3.9375$ or numbers between, say, $0$ and $1/16$, much less the many numbers across scientific applications.
+
+What if we had a way to "float" the binary point around and instead choose its location depending on the target number? Developers of the IEEE 754 floating point standard found a solution in a very common scientific practice for denoting decimal numbers: **scientific notation**. Let's read on!
diff --git a/content/floating-point/summary-precheck.md b/content/floating-point/summary-precheck.md
new file mode 100644
index 0000000..cd79b75
--- /dev/null
+++ b/content/floating-point/summary-precheck.md
@@ -0,0 +1,6 @@
+---
+title: "Precheck Summary"
+---
+
+## To Review$\dots$
+
diff --git a/content/floating-point/summary.md b/content/floating-point/summary.md
new file mode 100644
index 0000000..a7b528b
--- /dev/null
+++ b/content/floating-point/summary.md
@@ -0,0 +1,15 @@
+---
+title: "Summary"
+---
+
+## And in Conclusion$\dots$
+
+
+
+## Textbook Readings
+
+
+
+## Additional References
+
+* [IEEE 754 Simulator](https://www.h-schmidt.net/FloatConverter/IEEE754.html)
\ No newline at end of file
diff --git a/content/number-rep/integer-representations.md b/content/number-rep/integer-representations.md
index 333c656..dcff890 100644
--- a/content/number-rep/integer-representations.md
+++ b/content/number-rep/integer-representations.md
@@ -13,7 +13,6 @@ title: "Integer Representations"
* Identify when and why integer overflow occurs
* Perform simple binary operations like addition
-
::::{note} 🎥 Lecture Video
:class: dropdown
@@ -321,9 +320,8 @@ Example: $N = 5$ with bias $-(2^{N-1} - 1)$
^^^
* 5-bit integer representation
-* Bias: $-(2^{5-1} - 1) = 15$
+* Bias: $-(2^{5-1} - 1) = -15$
* All zeros: smallest negative number
-The leftmost bit (also known as **most significant bit**) is still effectively the **sign bit**.
:::
Here are some diagrams in case they are useful. @fig-bias-encoding-number-line represents a bias encoding where $N = 4$ and bias $ = -7$. The odometer just does the right thing; it counts up through zero with nothing strange happening.
diff --git a/myst.yml b/myst.yml
index ee41757..a112e2f 100644
--- a/myst.yml
+++ b/myst.yml
@@ -63,6 +63,16 @@ project:
- file: content/c-generics/index.md
- file: content/c-generics/fn-ptr-example.md
- file: content/c-generics/generics.md
+ - title: "L08 Floating Point"
+ children:
+ - file: content/floating-point/index.md
+ - file: content/floating-point/fp-floating-point.md
+ - file: content/floating-point/fp-examples.md
+ - file: content/floating-point/fp-special-numbers.md
+ - file: content/floating-point/fp-discussion.md
+ - file: content/floating-point/summary.md
+ - file: content/floating-point/summary-precheck.md
+ - file: content/floating-point/exercises.md
- title: "Miscellaneous"
children:
- file: content/misc/unit-prefixes.md