Motivating Banach Spaces: Norms Measure Size
Exploring why complete normed spaces without inner products (Banach spaces) are essential, with examples like Lp and C(K) spaces, and their impact on analysis and optimization.
Note: for the sake of readability, kets (vectors) will often have their type annotation dropped, writing just \(\Vert x \Vert\) rather than \(\Vert \vert x \rangle \Vert\), as the latter is quite messy inside a norm. Bras (covectors/linear functionals), however, will keep their type annotation, \(\langle f \vert\), so that we avoid ambiguity.
Welcome to Part 3 of our Functional Analysis crash course. In the previous post, we explored Hilbert spaces—complete inner product spaces that provide a rich geometric structure with notions of length, angle, and orthogonality. They are the natural home for quantum mechanics and Fourier analysis.
But what if the most natural way to measure a function’s “size” doesn’t come from an inner product? This question leads us directly to Banach spaces.
1. Introduction: Why Bother Without Inner Products?
Hilbert spaces are convenient. Their inner product gives us orthogonality, projections, and the powerful Riesz Representation Theorem, which cleanly identifies a Hilbert space with its dual.
However, in many applications, the concept of an “angle” is not relevant. We still need to measure the size of a vector (e.g., a function) and the distance between two vectors. Crucially, we also need our space to be complete—meaning that sequences that “should” converge actually do converge to a point within the space. Even when working in a space that could be a Hilbert space (like \(L_2\)), we might intentionally choose a different norm, like the \(L_1\) or \(L_\infty\) norm, because its geometry is better suited to our problem.
This brings us to a broader class of spaces: Banach spaces. By definition, these are complete normed vector spaces, but their norm does not necessarily arise from an inner product. We willingly trade the geometric luxury of an inner product for the flexibility to use norms that are better suited for the problem at hand.
2. Norms Beyond Inner Products
A norm is a function that assigns a strictly positive length or size to each vector in a vector space, except for the zero vector.
Definition 2.1: Norm

A norm on a vector space \(X\) over a field \(\mathbb{F}\) (\(\mathbb{R}\) or \(\mathbb{C}\)) is a function \(\Vert \cdot \Vert: X \to \mathbb{R}\) satisfying, for all \(\vert x \rangle, \vert y \rangle \in X\) and all scalars \(\alpha \in \mathbb{F}\):

- Non-negativity: \(\Vert x \Vert \ge 0\)
- Definiteness: \(\Vert x \Vert = 0 \iff \vert x \rangle = \vert \mathbf{0} \rangle\)
- Absolute homogeneity: \(\Vert \alpha x \Vert = \vert \alpha \vert \, \Vert x \Vert\)
- Triangle inequality: \(\Vert x + y \Vert \le \Vert x \Vert + \Vert y \Vert\)

A vector space equipped with a norm is a normed vector space.
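These axioms are easy to spot-check numerically. Below is a minimal sketch (assuming `numpy` is available; the inline `norm` helper is illustrative) that verifies the axioms for the \(\ell_1\) norm on \(\mathbb{R}^3\) at random points:

```python
import numpy as np

rng = np.random.default_rng(0)

# Spot-check the norm axioms for the l_1 norm on R^3.
norm = lambda v: np.sum(np.abs(v))

for _ in range(1000):
    x, y = rng.standard_normal(3), rng.standard_normal(3)
    a = rng.standard_normal()
    assert norm(x) >= 0                               # non-negativity
    assert np.isclose(norm(a * x), abs(a) * norm(x))  # absolute homogeneity
    assert norm(x + y) <= norm(x) + norm(y) + 1e-12   # triangle inequality
assert norm(np.zeros(3)) == 0.0                       # definiteness at the zero vector
```

Random sampling can of course only falsify, not prove, the axioms—but it is a useful sanity check when experimenting with a candidate norm.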
The Parallelogram Law: A Litmus Test for Inner Products

How can we determine if a norm is induced by an inner product? The parallelogram law provides a definitive test. A norm \(\Vert \cdot \Vert\) is induced by an inner product \(\langle \cdot \vert \cdot \rangle\) (where \(\Vert x \Vert = \sqrt{\langle x \vert x \rangle}\)) if and only if it satisfies:

\[ \Vert x+y \Vert^2 + \Vert x-y \Vert^2 = 2\left(\Vert x \Vert^2 + \Vert y \Vert^2\right) \]

Geometric Intuition and the Polarization Identity

The parallelogram law states that the sum of the squares of the diagonals’ lengths (\(\Vert x+y \Vert\) and \(\Vert x-y \Vert\)) equals the sum of the squares of the four sides’ lengths. This is a hallmark of Euclidean geometry, which is built on the dot product. If a norm fails this test, its geometry is not Euclidean.

If the law holds, the inner product can be recovered via the polarization identity. For real spaces:

\[ \langle x \vert y \rangle = \frac{1}{4}\left( \Vert x+y \Vert^2 - \Vert x-y \Vert^2 \right) \]

If a norm fails the parallelogram law, it definitively does not come from an inner product. Let’s see this in action.
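As a quick sanity check before the counterexamples: for the Euclidean norm, which does come from an inner product, polarization recovers the dot product exactly. A minimal `numpy` sketch (variable names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
x, y = rng.standard_normal(5), rng.standard_normal(5)

# Polarization identity for a real inner-product space:
# <x|y> = (1/4) * (||x + y||^2 - ||x - y||^2)
recovered = 0.25 * (np.linalg.norm(x + y) ** 2 - np.linalg.norm(x - y) ** 2)

assert np.isclose(recovered, np.dot(x, y))  # the Euclidean norm passes
```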
Example 1: The \(L_p\) Norms for \(p \neq 2\)

The \(L_p\)-norm is a common way to measure the size of functions or vectors. For a function \(f\) on a measure space \((\Omega, \mu)\) and \(1 \le p < \infty\), the norm is:

\[ \Vert f \Vert_p = \left( \int_\Omega \vert f \vert^p \, d\mu \right)^{1/p} \]

Let’s test the parallelogram law in the simple space \(\mathbb{R}^2\) with the **\(L_1\)-norm** (or “Manhattan norm”): \(\Vert \mathbf{x} \Vert_1 = \vert x_1 \vert + \vert x_2 \vert\). Let \(\vert x \rangle = (1, 0)\) and \(\vert y \rangle = (0, 1)\).

- Sides: \(\Vert x \Vert_1 = 1\), \(\Vert y \Vert_1 = 1\)
- Diagonals: \(\Vert x+y \Vert_1 = 2\), \(\Vert x-y \Vert_1 = 2\)

Plugging into the parallelogram law:

- Left-hand side (diagonals): \(2^2 + 2^2 = 8\)
- Right-hand side (sides): \(2(1^2 + 1^2) = 4\)

Since \(8 \neq 4\), the \(L_1\)-norm violates the parallelogram law and is not derived from an inner product. The same is true for all \(L_p\)-norms where \(p \neq 2\).
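This computation is easy to replicate for any \(p\). The sketch below (assuming `numpy`; `parallelogram_gap` is an illustrative helper) evaluates the difference between the two sides of the parallelogram law:

```python
import numpy as np

def parallelogram_gap(x, y, p):
    """LHS - RHS of the parallelogram law under the l_p norm (zero iff the law holds)."""
    n = lambda v: np.linalg.norm(v, ord=p)
    lhs = n(x + y) ** 2 + n(x - y) ** 2
    rhs = 2 * (n(x) ** 2 + n(y) ** 2)
    return lhs - rhs

x, y = np.array([1.0, 0.0]), np.array([0.0, 1.0])

print(parallelogram_gap(x, y, 2))       # ~0 (up to float error): l_2 satisfies the law
print(parallelogram_gap(x, y, 1))       # 8 - 4 = 4.0: l_1 fails
print(parallelogram_gap(x, y, np.inf))  # 2 - 4 = -2.0: l_inf fails (see Example 2)
```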
**Why use \(L_p\) norms (\(p \neq 2\))?**

- **\(L_1\) Norm:** Promotes sparsity. In machine learning (e.g., LASSO regression), penalizing a model’s weights with an \(L_1\) term (\(\lambda \Vert \mathbf{w} \Vert_1\)) forces many weights to become exactly zero. This is excellent for feature selection. The \(L_1\) loss function (Mean Absolute Error) is also more robust to outliers than the standard \(L_2\) loss (Mean Squared Error).
- **General \(L_p\) Norms:** Provide a spectrum of error measures with varying sensitivities to large versus small values.
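The sparsity-promoting effect of the \(L_1\) penalty can be made concrete through its proximal operator, which is the classical soft-thresholding map. A minimal `numpy` sketch (`soft_threshold` is an illustrative name):

```python
import numpy as np

def soft_threshold(w, lam):
    """Proximal operator of lam * ||w||_1: shrinks each weight toward 0 and
    sets it exactly to 0 once its magnitude drops below lam."""
    return np.sign(w) * np.maximum(np.abs(w) - lam, 0.0)

w = np.array([3.0, -0.4, 0.05, -2.0, 0.3])
print(soft_threshold(w, lam=0.5))  # small weights become exactly 0: [2.5, 0, 0, -1.5, 0]
```

An \(L_2\) penalty, by contrast, only shrinks weights multiplicatively and never produces exact zeros—this is the geometric “sharp corners” effect of the \(L_1\) ball in action.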
Example 2: The \(L_\infty\) (Supremum) Norm

For a continuous function \(f\) on a compact set \(K\) (e.g., an interval \([a,b]\)), the supremum norm measures its peak value:

\[ \Vert f \Vert_\infty = \sup_{x \in K} \vert f(x) \vert \]

Let’s test this in \(\mathbb{R}^2\) with \(\Vert \mathbf{x} \Vert_\infty = \max(\vert x_1 \vert, \vert x_2 \vert)\). Again, let \(\vert x \rangle = (1, 0)\) and \(\vert y \rangle = (0, 1)\).

- Sides: \(\Vert x \Vert_\infty = 1\), \(\Vert y \Vert_\infty = 1\)
- Diagonals: \(\Vert x+y \Vert_\infty = 1\), \(\Vert x-y \Vert_\infty = 1\)

Checking the parallelogram law:

- LHS: \(1^2 + 1^2 = 2\)
- RHS: \(2(1^2 + 1^2) = 4\)

Since \(2 \neq 4\), the \(L_\infty\)-norm also fails the test.
**Why use the \(L_\infty\) norm?** It measures the maximum deviation or “worst-case error.” A sequence of functions \(f_n\) converges to \(f\) in this norm (\(\Vert f_n - f \Vert_\infty \to 0\)) if and only if the convergence is uniform. Uniform convergence is a powerful property, ensuring that approximations are good “everywhere at once” and that properties like continuity are preserved in the limit.
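The classic example \(f_n(x) = x^n\) illustrates the gap between pointwise and uniform convergence; a quick grid-based `numpy` check (the grid resolution is arbitrary):

```python
import numpy as np

xs = np.linspace(0.0, 1.0, 10_001)[:-1]   # dense grid on [0, 1)

# f_n(x) = x^n converges to 0 pointwise on [0, 1), but NOT uniformly:
# sup_{x in [0,1)} x^n = 1 for every n, so the grid maximum stays near 1.
for n in (10, 100, 1000):
    print(n, np.max(xs ** n))

# Restricted to the compact interval [0, 0.9], convergence IS uniform:
# the sup-norm distance to 0 is 0.9^n, which decays to 0.
for n in (10, 100, 1000):
    print(n, 0.9 ** n)
```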
3. Completeness: The Defining Feature of Banach Spaces
A norm induces a metric \(d(x, y) = \Vert x - y \Vert\), allowing us to define convergence and Cauchy sequences. As with Hilbert spaces, completeness is the crucial property that ensures our space has no “holes.”
Definition 3.1: Banach Space
A Banach space is a normed vector space that is complete with respect to the metric induced by its norm. This means every Cauchy sequence of vectors converges to a limit that is also within the space.
All Hilbert spaces are Banach spaces, but the converse is not true. The spaces defined by the norms we just explored are prime examples of Banach spaces that are not Hilbert spaces.
Key Examples of Banach Spaces
- **The \(L_p\) spaces:** For any measure space \((\Omega, \mu)\) and \(1 \le p \le \infty\), the space \(L_p(\Omega, \mu)\) of functions with a finite \(L_p\)-norm is a Banach space. The proof of this (part of the Riesz-Fischer theorem) is a cornerstone of modern analysis.
  - For \(p = 2\), \(L_2\) is a Hilbert space.
  - For \(p \neq 2\), \(L_p\) is a Banach space but not a Hilbert space.
- **The space \(C(K)\):** The space of continuous functions on a compact set \(K\) (e.g., \([a,b]\)), equipped with the supremum norm \(\Vert f \Vert_\infty\), is a Banach space.

**Proof Sketch: Completeness of \(C(K)\)**

Let \((f_n)\) be a Cauchy sequence in \(C(K)\). For each \(x \in K\), the sequence of scalars \((f_n(x))\) is Cauchy in \(\mathbb{R}\) (or \(\mathbb{C}\)), which is complete, so we may define \(f(x) := \lim_{n \to \infty} f_n(x)\). One then shows that \(f_n \to f\) uniformly, i.e., \(\Vert f_n - f \Vert_\infty \to 0\), and that a uniform limit of continuous functions is continuous, so \(f \in C(K)\). Hence \(C(K)\) is complete.

- **The sequence spaces \(\ell_p\):** For \(1 \le p \le \infty\), \(\ell_p\) is the space of sequences \(\mathbf{x} = (x_1, x_2, \dots)\) with finite \(\Vert \mathbf{x} \Vert_p\). Only \(\ell_2\) is a Hilbert space.
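A quick numerical illustration of how the \(\ell_p\) spaces differ: the sequence \(x_k = 1/k\) lies in \(\ell_2\) but not in \(\ell_1\) (assuming `numpy`; finite partial sums stand in for the infinite series):

```python
import numpy as np

k = np.arange(1, 100_001, dtype=float)
x = 1.0 / k                       # the sequence x_k = 1/k

# l_2 norm: sum of 1/k^2 converges to pi^2/6, so ||x||_2 -> pi/sqrt(6) ~ 1.28
print(np.sum(x ** 2) ** 0.5)

# l_1 norm: the harmonic series diverges (~ log N), so x is NOT in l_1
print(np.sum(x))                  # ~12.1 at N = 100000, still growing
```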
4. The Hahn-Banach Theorem

**Theorem 4.1: The Hahn-Banach Theorem (Analytic Form)**

Let \(X\) be a vector space over \(\mathbb{F}\) (\(\mathbb{R}\) or \(\mathbb{C}\)), and let \(p: X \to \mathbb{R}\) be a sublinear functional (i.e., it satisfies \(p(\alpha x) = \alpha p(x)\) for \(\alpha \ge 0\) and \(p(x+y) \le p(x) + p(y)\); the norm \(\Vert \cdot \Vert\) is an example of a sublinear functional). Let \(Z\) be a subspace of \(X\), and let \(\langle g \vert\) be a linear functional on \(Z\) that is dominated by \(p\), meaning \(\vert \langle g \vert z \rangle \vert \le p(z)\) for all \(\vert z \rangle \in Z\). Then there exists a linear functional \(\langle f \vert\) defined on all of \(X\) such that:

1. **Extension:** \(\langle f \vert z \rangle = \langle g \vert z \rangle\) for all \(\vert z \rangle \in Z\).
2. **Domination:** \(\vert \langle f \vert x \rangle \vert \le p(x)\) for all \(\vert x \rangle \in X\).

In the context of normed spaces, we set \(p(x) = \Vert g \Vert_\ast \Vert x \Vert\). The theorem then states that any bounded linear functional on a subspace can be extended to the entire space **without increasing its norm**.

### Key Consequences of Hahn-Banach

The Hahn-Banach theorem is a pure existence theorem (its proof relies on Zorn's Lemma), but its consequences are profound and practical.

1. **The Dual Space is Rich:** It guarantees that the dual space \(X^\ast\) of any non-trivial normed space is itself non-trivial—it contains more than just the zero functional.
2. **Separation of Convex Sets:** In its geometric form, the theorem allows us to find a hyperplane that separates two disjoint convex sets. This is fundamental to optimization theory.
3. **Existence of "Witness" Functionals:** For any vector in the space, we can find a functional that "sees" it perfectly. This is arguably its most important consequence for analysis.

**Proposition 4.2: Consequence of Hahn-Banach**
For any non-zero vector \(\vert x_0 \rangle\) in a normed space \(X\), there exists a bounded linear functional \(\langle f \vert\) in the dual space \(X^\ast\) such that:

\[ \Vert \langle f \vert \Vert_\ast = 1 \quad \text{and} \quad \langle f \vert x_0 \rangle = \Vert x_0 \Vert \]

This proposition confirms that the dual space is rich enough to distinguish all vectors. If \(\vert x \rangle \neq \vert y \rangle\), there is a functional \(\langle f \vert\) such that \(\langle f \vert x-y \rangle = \Vert x-y \Vert \neq 0\), so \(\langle f \vert x \rangle \neq \langle f \vert y \rangle\).

5. The Dual Space and the Duality Mapping

The Hahn-Banach theorem breathes life into the concept of the dual space. This section introduces the duality mapping, the standard tool of functional analysis for "type casting" between the primal and dual spaces.

**Definition 5.1: The Dual Space and Dual Norm**
Let \(X\) be a normed vector space.

1. The **(continuous) dual space**, denoted \(X^\ast\), is the vector space of all bounded linear functionals \(\langle f \vert : X \to \mathbb{F}\).
2. The **dual norm** on \(X^\ast\) is defined as:

\[ \Vert \langle f \vert \Vert_\ast = \sup_{\Vert x \Vert = 1} \vert \langle f \vert x \rangle \vert \]

It is a theorem that the dual space \(X^\ast\) is always a Banach space, even when \(X\) itself is not complete. Note that in a general Banach space the supremum in the dual norm need not be attained by any vector; what Hahn-Banach does guarantee (Proposition 4.2) is the mirror statement—every non-zero vector admits a functional that attains its norm on it.

Now we introduce the standard tool from functional analysis for relating the primal and dual spaces.

**Definition 5.2: The Duality Mapping (Standard Definition)**
The **duality mapping** \(J: X \to 2^{X^\ast}\) associates each vector \(\vert x \rangle \in X\) with the set of its "support functionals" \(\langle f \vert \in X^\ast\). These are the functionals that are perfectly aligned with \(\vert x \rangle\) in the sense that they satisfy:

1. Alignment: \(\langle f \vert x \rangle = \Vert f \Vert_\ast \Vert x \Vert\)
2. Isometry: \(\Vert \langle f \vert \Vert_\ast = \Vert x \Vert\)

For practical algebraic computation, we can also use the following equivalent formulation:

\[ \langle f \vert x \rangle = \Vert x \Vert^2 = \Vert f \Vert_\ast^2 \]

The Hahn-Banach theorem guarantees that the set \(J(x)\) is always non-empty. This mapping is fundamental to the theory of monotone operators and nonlinear analysis.

**Analogy: Mechanical Leverage**
Think of the covector \(\langle f \vert\) as a force and \(\vert x \rangle\) as a desired displacement. The duality mapping embodies the *optimal mechanical advantage*:

\[ J(x) = \underset{\langle f \vert}{\mathrm{argmax}} \; \frac{\langle f \vert x \rangle}{\Vert f \Vert_\ast} \]

where \(\Vert f \Vert_\ast\) represents the "effort budget". The maximum is achieved when the force and displacement are collinear.

**Proposition 5.3: Properties of the Duality Mapping**

Because the duality mapping corresponds to finding the covector that is linearly dependent on the vector—saturating the generalized Cauchy-Schwarz inequality \(\vert \langle g \vert x \rangle \vert \le \Vert g \Vert_\ast \Vert x \Vert\)—we can write it as a variational problem:

\[ J(x) = \underset{\langle g \vert \in X^\ast, \; \Vert \langle g \vert \Vert_\ast = \Vert x \Vert}{\mathrm{argmax}} \; \vert \langle g \vert x \rangle \vert \]
The function \(\phi(x) = \frac{1}{2} \Vert x \Vert^2\) is the simplest strictly convex gauge of vector magnitude, and it is Gâteaux differentiable at every \(x \neq 0\) whenever the norm is smooth (as in \(L_p\) for \(1 < p < \infty\)). What's interesting is that the duality mapping is precisely the subdifferential of this function:

\[ J(x) = \partial \phi(x) \]

A common interpretation comes from an alternative variational formulation, familiar from proximal algorithms in convex optimization. If we consider a local/proximal model for a loss, with \(\lambda > 0\):

\[ \mathcal{L}(w + \Delta w) \approx Q_\lambda(w, \Delta w) := \mathcal{L}(w) + \langle \nabla \mathcal{L}(w) \vert \Delta w \rangle + \frac{1}{2\lambda} \Vert \Delta w \Vert^2 \]

then \(J^{-1}\) solves for the new iterate in proximal gradient descent, serving as a local minimization oracle:

\[ \underset{\Delta w \in X}{\mathrm{argmin}} \; Q_\lambda(w, \Delta w) = \underset{\Delta w \in X}{\mathrm{argmin}} \; \langle \nabla \mathcal{L}(w) \vert \Delta w \rangle + \frac{1}{2\lambda} \Vert \Delta w \Vert^2 = -\lambda J^{-1}(\nabla \mathcal{L}(w)) \]

**The Optimizer’s View: The “Dualize” Operation**
In optimization, the practical problem runs in the opposite direction. Given a gradient covector \(\langle g \vert \in X^\ast\), we need to find the direction of steepest ascent, which is a **vector** \(\vert v \rangle \in X\). In his work, Jeremy Bernstein refers to this operation as `dualize`. The core idea is to find the unit vector that maximizes the linear functional's value:

\[ \text{dualize}(\langle g \vert) := \underset{\vert v \rangle \in X, \; \Vert v \Vert = 1}{\mathrm{argmax}} \; \langle g \vert v \rangle \]

The result of this operation is the **unit vector** \(\vert v \rangle\) pointing in the direction of steepest increase for the functional \(\langle g \vert\). By definition of the dual norm, the value of this maximum is \(\langle g \vert v \rangle = \Vert \langle g \vert \Vert_\ast\). This `dualize` operation is the crucial step for converting a gradient (covector) into an update direction (vector) for optimization algorithms in general Banach spaces. Compared to the standard duality mapping \(J\) of functional analysis, Bernstein's definition only extracts the direction, decoupling it from the gradient's dual norm:

\[ \Vert \langle g \vert \Vert_\ast \, \text{dualize}(\langle g \vert) = J^{-1}(\langle g \vert) \]
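For finite-dimensional \(\ell_p\) geometries, `dualize` has closed forms. The sketch below (assuming `numpy`; the string argument selecting the norm is an illustrative convention) implements three cases and checks that the achieved value \(\langle g \vert v \rangle\) equals the corresponding dual norm of \(g\):

```python
import numpy as np

def dualize(g, norm="l2"):
    """Map a gradient covector g to a unit-norm steepest-ascent vector
    argmax_{||v|| = 1} <g|v>, for a few common norms (a sketch)."""
    if norm == "l2":                      # Hilbert case: just normalize
        return g / np.linalg.norm(g)
    if norm == "linf":                    # ||v||_inf = 1: move every coordinate by +-1
        return np.sign(g)
    if norm == "l1":                      # ||v||_1 = 1: move only the largest coordinate
        v = np.zeros_like(g)
        i = np.argmax(np.abs(g))
        v[i] = np.sign(g[i])
        return v
    raise ValueError(norm)

g = np.array([3.0, -1.0, 2.0])
for norm, dual_ord in [("l2", 2), ("linf", 1), ("l1", np.inf)]:
    v = dualize(g, norm)
    # By construction, <g|v> equals the dual norm of g (l_2 <-> l_2, l_inf <-> l_1, l_1 <-> l_inf).
    assert np.isclose(g @ v, np.linalg.norm(g, ord=dual_ord))
```

The matrix analogue under the spectral norm—replacing the gradient by its orthogonalization—is the operation used in the Muon optimizer mentioned below.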
**Contrast with Hilbert Spaces**

In a Hilbert space \(H\) with norm induced by the inner product, life is simpler. The Riesz Representation Theorem provides a unique vector \(\vert g \rangle\) for every covector \(\langle g \vert\) such that \(\langle g \vert v \rangle = \langle g, v \rangle\) (the inner product). The direction of steepest ascent for \(\langle g \vert\) is simply the normalized vector \(\vert g \rangle / \Vert g \Vert\). The `dualize` operation becomes:

\[ \text{dualize}(\langle g \vert) = \frac{\vert g \rangle}{\Vert g \Vert} \]

Because of this unique correspondence, we often blur the distinction between \(H\) and \(H^\ast\). This is why people in machine learning tend to get confused and simply treat the gradient as a vector/ket instead of a covector/bra/linear functional/1-form. Because we parameterize everything with real numbers, it is tempting to assume we are working in \(\mathbb{R}^n\) with the Euclidean inner product \(\langle x \vert y \rangle := x^T y\), which is a Hilbert space. In doing so, however, we ignore the underlying geometry of the space, which empirically is often better captured by non-induced norms, such as the spectral norm in the Muon optimizer. In a general Banach space, we must use the more general `dualize` definition involving the argmax to find this direction.

6. Other Foundational Theorems and Applications

The analytical power of Banach spaces is further demonstrated by a "holy trinity" of theorems about bounded operators and a powerful fixed-point theorem.

**The "Holy Trinity" of Bounded Operators:**

1. **Uniform Boundedness Principle:** A family of operators that is pointwise bounded is uniformly bounded. (Pointwise good implies uniformly good.)
2. **Open Mapping Theorem:** A surjective bounded linear operator between Banach spaces maps open sets to open sets. (A key corollary is the **Bounded Inverse Theorem**.)
3. **Closed Graph Theorem:** An operator is bounded if and only if its graph is a closed set. This is often an easier way to prove an operator is continuous.

**The Banach Fixed-Point Theorem (Contraction Mapping Principle):** Let \((X, d)\) be a complete metric space (every Banach space is one). If an operator \(T: X \to X\) is a **contraction**—meaning it shrinks distances by a uniform factor \(k < 1\):

\[ d(T(x), T(y)) \le k \cdot d(x, y) \]

then \(T\) has **one and only one** fixed point (\(x^\ast\) such that \(T(x^\ast) = x^\ast\)). This point can be found by iterating \(x_{n+1} = T(x_n)\) from any starting point \(x_0 \in X\).
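A one-line contraction makes the theorem concrete: \(T(x) = \cos(x)\) maps \([0, 1]\) into itself with \(\vert T'(x) \vert \le \sin(1) < 1\) there, so the iteration converges to its unique fixed point from any starting value (a minimal Python sketch):

```python
import math

# T(x) = cos(x) is a contraction on the complete metric space [0, 1]:
# |T'(x)| = |sin(x)| <= sin(1) < 1 on that interval.
T = math.cos
x = 0.0
for _ in range(100):
    x = T(x)          # iterate x_{n+1} = T(x_n)

print(x)              # ~0.739085 (the "Dottie number"), satisfying T(x) = x
assert abs(T(x) - x) < 1e-12
```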
**Application: Solving Differential Equations**

The initial value problem \(y'(t) = F(t, y(t))\) with \(y(t_0) = y_0\) can be rewritten as an integral equation:

\[ y(t) = y_0 + \int_{t_0}^t F(s, y(s)) \, ds \]

A solution \(y(t)\) is a fixed point of the operator \(\mathcal{T}\) defined by the right-hand side: \((\mathcal{T}y)(t) = \dots\). If \(F\) is Lipschitz continuous in its second argument, then for a small enough time interval, \(\mathcal{T}\) is a contraction on the Banach space \(C(I)\) of continuous functions. The Banach Fixed-Point Theorem then guarantees a unique local solution. This is the essence of the **Picard-Lindelöf theorem**.

7. Banach Spaces in Machine Learning and Optimization

While much of ML operates in finite-dimensional Euclidean space (a Hilbert space), the theory behind advanced methods relies heavily on Banach space concepts.

- **Sparsity via \(L_1\) Regularization:** Penalizing model weights with the \(L_1\)-norm (\(\lambda \Vert \mathbf{w} \Vert_1\)) is the core of LASSO and other techniques for feature selection. The "sharp corners" of the \(L_1\) unit ball geometrically encourage solutions where many weights are exactly zero. Optimization with the non-differentiable \(L_1\) norm requires tools like subgradient calculus, which are naturally studied in this context.
- **Robustness via \(L_1\) Loss:** Using Mean Absolute Error (\(L_1\) loss) instead of Mean Squared Error (\(L_2\) loss) makes models less sensitive to outliers in the training data.
- **Probabilistic Models:** Probability theory is built on measure theory. Spaces like \(L_1(\Omega, \mathcal{F}, P)\) are Banach spaces essential for defining expected values, \(E[X] = \int X \, dP\).
- **Theory of Optimization:** Analyzing the convergence of algorithms like gradient descent in non-Euclidean geometries requires the machinery described above. Converting a gradient (covector) into an update direction (vector) requires the `dualize` operation to find the direction of steepest descent.

8. Conclusion: A Broader Analytical Landscape

Banach spaces generalize Hilbert spaces by dropping the requirement that a norm must come from an inner product. This trade-off is immensely fruitful. While we lose the universal geometric intuition of angles and orthogonality, we gain a far broader framework capable of handling diverse measures of "size." The \(L_1\) norm for sparsity, the \(L_\infty\) norm for uniform control, and the general \(L_p\) norms for modeling different error sensitivities are indispensable tools in modern science and engineering. The crucial property of **completeness** is retained, providing a solid foundation for analysis.

Foundational results like the Hahn-Banach Theorem, the major theorems on bounded operators, and the Banach Fixed-Point Theorem form the backbone of modern analysis. They provide the tools to solve differential equations, understand operator theory, and build the theoretical underpinnings for advanced optimization and machine learning. In short, Banach spaces provide the language to explore a vast and varied landscape of mathematical structures far beyond the confines of Euclidean geometry.

**Next Up:** In the final post of this mini-series, we will focus on linear operators, exploring how matrix spectral analysis generalizes to operators on infinite-dimensional Hilbert and Banach spaces.

9. Summary Cheat Sheet

| Concept | Description | Key Example(s) | Why Important |
| :--- | :--- | :--- | :--- |
| **Normed Space** | Vector space with a function \(\Vert \cdot \Vert\) defining length/size. | \(C([a,b])\) with \(\Vert \cdot \Vert_\infty\), \(L_p\) spaces | Basic structure for measuring size and distance. |
| **Parallelogram Law** | Identity: \(\Vert x+y \Vert^2 + \Vert x-y \Vert^2 = 2(\Vert x \Vert^2 + \Vert y \Vert^2)\). Test for inner product norm. | Fails for \(L_p\) (\(p \neq 2\)) and \(L_\infty\) norms. | Distinguishes Hilbert space norms from general norms. |
| **\(L_p\) Norms (\(p \neq 2\))** | \(\Vert f \Vert_p = (\int \vert f \vert^p)^{1/p}\). Measures size with varying sensitivity. | \(L_1\) (Manhattan/taxicab). | Model different error types; \(L_1\) promotes sparsity. |
| **\(L_\infty\) Norm** | \(\Vert f \Vert_\infty = \sup \vert f \vert\). Measures the peak value or worst-case error. | Space of continuous functions \(C(K)\). | Equivalent to uniform convergence. |
| **Banach Space** | A **complete** normed vector space. | \(L_p\) spaces (\(1 \le p \le \infty\)), \(C(K)\). | Ensures Cauchy sequences converge; robust analytical framework. |
| **Dual Space \(X^\ast\)** | Space of all bounded linear functionals \(f: X \to \mathbb{F}\). It is always a Banach space. | \((L_p)^\ast = L_q\) for \(1 < p < \infty\); \((L_1)^\ast = L_\infty\). | The natural home for gradients (covectors). |
| **Hahn-Banach Theorem** | Guarantees norm-preserving extension of bounded linear functionals from a subspace to the full space. | - | Ensures dual space is rich enough to define concepts like steepest ascent. |
| **`dualize` operation** | An optimization-centric map: `dualize(g)` finds the **unit vector** `v` that maximizes `<g|v>`, i.e., the direction of steepest ascent for the covector `g`. | In Hilbert space, this is `g / ||g||`. | Converts a gradient (covector) into a descent direction (vector). |
| **Banach Fixed-Point Thm.** | A contraction map \(T\) on a complete metric space has a unique fixed point, \(T(x^\ast) = x^\ast\). | Picard's method for solving ODEs. | Guarantees existence/uniqueness of solutions; basis for iterative algorithms. |
| **"Holy Trinity"** | Uniform Boundedness Principle, Open Mapping Thm., Closed Graph Thm. Foundational results for bounded linear operators. | - | Govern the fundamental properties of operators between Banach spaces. |
Further Reading
Wikipedia contributors. (2025, April 14). Banach space. Wikipedia. https://en.wikipedia.org/wiki/Banach_space#Linear_operators,_isomorphisms
Wikipedia contributors. (2024, July 26). List of Banach spaces. Wikipedia. https://en.wikipedia.org/wiki/List_of_Banach_spaces