\documentclass[12pt]{article}
\usepackage[letterpaper]{geometry}
\input{macros}
\usepackage{proof-dashed}
\usepackage{tikz-cd}
\usepackage{booktabs}
\usepackage{amsmath}
\begin{document}
\title{Week 4 Lecture Notes}
\author{Jason Koenig}
\date{10/7/2013 and 10/9/2013}
\maketitle
\section{Dependent Types}
\subsection{Structural Foundation}
Dependent types are \emph{families} of types. Recall from last week the structural setup, which differs from IPL in that we need to express \emph{dependency} of types on the terms that precede them in the context. The atomic judgements are of the form
\begin{alignat*}{2}
& \text{contexts / closed types:} && \Gamma \context \\
&&& \Gamma \equiv \Gamma' \\
& \text{open types / families of types:} \hspace{10pt} && \Gamma \vdash A \type \\
&&& \Gamma \vdash A \equiv A' \\
& \text{elements of types:} && \Gamma \vdash M : A \\
&&& \Gamma \vdash M \equiv M' : A
\end{alignat*}
The symbol $\equiv$ denotes what we will interpret as \emph{definitional equality}. We denote the \emph{empty context} by $\cdot$ when we need to. The introduction rules for contexts are:
\begin{equation*}
\infer{\cdot \context}{}
\hspace{50pt}
\infer{\Gamma, x:A \context}{\Gamma \context & \Gamma \vdash A \type}
\end{equation*}
Thus we have some notion of \emph{dependence}: it allows us to make sense of expressions like $x : \Nat, y : \Seq(x) \vdash \cdots$, which were impossible before. Context equality is defined componentwise:
\begin{equation*}
\infer{\cdot \equiv \cdot \context}{}
\hspace{50pt}
\infer{\Gamma, x : A \equiv \Gamma', x : A'}{\Gamma \equiv \Gamma' & \Gamma \vdash A \equiv A'}
\end{equation*}
The following rule corresponds with reflexivity:
\begin{equation*}
\infer{\Gamma, x:A, \Delta \vdash x:A}{}
\end{equation*}
The following rules (one for each judgement $J$) correspond with weakening:
\begin{equation*}
\infer{\Gamma, x:A, \Delta \vdash J}{\Gamma, \Delta \vdash J & \Gamma \vdash A \type}
\end{equation*}
\textbf{Exercise.} What are the corresponding rules for exchange and contraction?
The following rule, called \emph{substitution} or \emph{instantiation}, corresponds with transitivity:
\begin{equation*}
\infer{\Gamma, [M/x]\Delta \vdash [M/x]J}{\Gamma, x:A, \Delta \vdash J & \Gamma \vdash M : A}
\end{equation*}
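For example, taking $\Delta$ to be empty and $J$ to be the judgement $\Seq(x) \type$, instantiation lets us specialize a family of types at a particular term:
\begin{equation*}
\infer{\Gamma \vdash \Seq(M) \type}{\Gamma, x:\Nat \vdash \Seq(x) \type & \Gamma \vdash M : \Nat}
\end{equation*}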
The following rules together are called \emph{functionality}:
\begin{equation*}
\infer{\Gamma, [M/x]\Delta \vdash [M/x]N \equiv [M'/x]N : [M/x]B}{\Gamma, x:A, \Delta \vdash N:B & \Gamma \vdash M \equiv M' : A}
\end{equation*}
\begin{equation*}
\infer{\Gamma, [M/x]\Delta \vdash [M/x]B \equiv [M'/x]B}{\Gamma, x:A, \Delta \vdash B \type & \Gamma \vdash M \equiv M' : A}
\end{equation*}
Finally, the following rules are \emph{type equality}, which tell us that definitionally equal types classify the same things:
\begin{equation*}
\infer{\Gamma \vdash M : A'}{\Gamma \vdash M:A & \Gamma \vdash A \equiv A'} \hspace{50pt}
\infer{\Gamma \vdash M \equiv M' : A'}{\Gamma \vdash M \equiv M' : A & \Gamma \vdash A \equiv A'}
\end{equation*}
\subsection{Negative Dependent Types}
In dependent type theory, the negative types need to be generalized to express dependency. The type $\top$ does not change. There are new dependent forms of $\conj$ and $\supset$, however:
\begin{center}
\begin{tabular}{c c c c c}
IPL & Type & & Dependent Type & Quantifier\\\hline
$A \conj B$ & $A \times B$ & $\;\;\to\;\;$ & $\Sigma x: A. B$ & $\exists x:A.B$ \\
$A \supset B$ & $B^A$ or $A\rightarrow B$ & $\;\;\to\;\;$ & $\Pi x:A .B$ & $\forall x:A.B$ \\
\end{tabular}
\end{center}
Before we present the rules governing these types, it is good to get some intuition for what they represent. The dependent type $\Sigma x:A.B$ is called the \emph{dependent sum}, \emph{dependent product}, or \emph{sigma-type}. Here the terminology unfortunately becomes muddled, as $\Sigma$ properly generalizes the product. The $\Sigma$ type corresponds to (constructive) existential quantification, $\exists x:A.B$.
The type $\Pi x:A.B$ is called the \emph{dependent product} (ambiguously with the above) or \emph{pi-type}. It generalizes the function type $A \to B$ in that the type of the result of an application can depend on exactly which value the function is applied to. This corresponds to universal quantification, $\forall x:A.B$.
\paragraph{Example:} We can form complex types from these connectives and the families we have seen. For example, we can form the type $\Pi x:\Nat. \Sigma y:\Nat. \Id{\Nat}(y, \text{succ}(x))$. By the types-propositions correspondence, this is the proposition $\forall x:\Nat. \exists y:\Nat. y =_\Nat \text{succ}(x)$, i.e.\ every natural number has a successor. We can think of this type as inhabited by proofs that the proposition is true. We can also think of elements of this type as terms which take a natural number $x$ and produce a proof that that particular natural has a successor.
\paragraph{Example:} A less logical example is the type $\Pi x: \Nat. \text{Seq}(x)$. This represents the type of functions which produce a sequence of length $n$ when applied to the value $n$, as one might want when ``allocating'' a sequence. This can be useful when programming, as the type encodes more information about a value than a non-dependent type could.
One important aspect of the quantifier view of these types is that they generalize classical logic by merging the domains of quantification with propositions. In classical logic, and especially first-order logic, these are completely distinct. With the identification of propositions and types, we can quantify not just over data (such as $\Nat$), but also over proofs of propositions, by letting $A$ be a ``propositional'' type like $\Id{\Nat}(x,y)$.
This identification of dependent types with quantifiers only holds if the quantifiers are taken to be \emph{constructive}. An element of type $\Sigma x:A.B$ consists of a \emph{pair} $\left<x,y\right>$ such that $x$ is a witness of the existential and $y$ is a proof that this witness satisfies the condition. A constructive existential requires there actually to be an explicit element, which the proof must supply on its own. This means one cannot use certain kinds of proofs from classical logic. Some of these proofs roughly show $(\forall x. \neg P) \imp \bot$ and conclude $\exists x. P$, but nowhere in the proof can we actually find such an $x$. Similarly, but less counterintuitively, a proof of a universal quantifier is a map from elements to proofs for those specific elements, i.e.\ a function. We cannot prove $\forall x.P$ by showing $(\exists x.\neg P) \imp \bot$.
The constructive attitude is that if what you have proved is $(\forall x. \neg P) \imp \bot$, then just let that be the proof. Constructivity amounts to a certain carefulness, where we avoid the classical conventions $\neg \exists x.P \iff \forall x.\neg P$ and $\neg \forall x.P \iff \exists x.\neg P$. By making this distinction, we get not only the ability to regard proofs as programs, but also the rich structure of HoTT, which would otherwise collapse.
\subsubsection{Pi-types}
The formation and introduction rules for pi-types are:
\begin{equation*}
\infer[\Pi\text{-F}]{\Gamma \vdash \Pi x :A.B \type}{\Gamma \vdash A \type & \Gamma, x:A \vdash B \type} \hspace{50pt}
\infer[\Pi\text{-I}]{\Gamma \vdash \lambda x.M : \Pi x :A.B}{\Gamma, x:A \vdash M : B}
\end{equation*}
Note that the formation rule is almost the same as for the function type, except that $x$ is bound when forming the type $B$. The introduction rule is exactly the same as in IPL, except that the type $B$ may depend on $x$, just as $M$ does. This dependency of $B$ motivates the elimination rule:
\begin{equation*}
\infer[\Pi\text{-E}]{\Gamma \vdash M N : [N/x]B}{\Gamma \vdash M:\Pi x:A. B & \Gamma \vdash N:A}
\end{equation*}
which is the same as IPL except for the substitution of $N$, the actual argument, into the result type. Thus the result type can vary with the actual argument, which cannot happen in simple, non-dependent type theory.
We also have the $(\beta)$ and $(\eta)$ rules:
\begin{alignat*}{1}
(\lambda x.M) N \equiv [N/x]M &\hspace{10pt} (\beta)\\
(\lambda x.M x) \equiv M &\hspace{10pt} (\eta) \hspace{10pt} (\text{when}\; x\; \text{not free in} \;M)
\end{alignat*}
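As a small instance of the elimination rule, consider the allocation example from before: if $\Gamma \vdash M : \Pi x{:}\Nat.\Seq(x)$ and $\Gamma \vdash N : \Nat$, then $\Pi$-E gives
\begin{equation*}
\Gamma \vdash M\, N : \Seq(N)
\end{equation*}
since $[N/x]\Seq(x)$ is just $\Seq(N)$: the type of the application records the particular argument.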
\subsubsection{Sigma-types}
The formation and introduction rules for sigma-types are:
\begin{equation*}
\infer[\Sigma\text{-F}]{\Gamma \vdash \Sigma x :A.B \type}{\Gamma \vdash A \type & \Gamma, x:A \vdash B \type} \hspace{50pt}
\infer[\Sigma\text{-I}]{\Gamma \vdash \left<M,N\right> : \Sigma x :A.B}{\Gamma \vdash M:A & \Gamma \vdash N : [M/x]B}
\end{equation*}
The addition here is that the type of the second component can depend on the first component. The elimination rules are what one expects, with the addition of the dependence of the type of the second component:
\begin{equation*}
\infer[\Sigma\text{-E}_1]{\Gamma \vdash \fst(M) : A}{\Gamma \vdash M: \Sigma x:A.B} \hspace{50pt}
\infer[\Sigma\text{-E}_2]{\Gamma \vdash \snd(M) : [\fst(M)/x]B}{\Gamma \vdash M: \Sigma x:A.B}
\end{equation*}
The $(\beta)$ and $(\eta)$ rules are the familiar ones:
\begin{alignat*}{1}
\fst(\left<M,N\right>) \equiv M &\hspace{10pt} (\beta_1)\\
\snd(\left<M,N\right>) \equiv N &\hspace{10pt} (\beta_2)\\
\left<\fst(M),\snd(M)\right> \equiv M &\hspace{10pt} (\eta)
\end{alignat*}
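For example, an element $M : \Sigma x{:}\Nat.\Seq(x)$ pairs a number with a sequence of that length, and the projections have types $\fst(M) : \Nat$ and $\snd(M) : \Seq(\fst(M))$; the substitution in $\Sigma\text{-E}_2$ is what lets the type of the second projection mention the first.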
\subsection{Positive Dependent Types}
With the positive types, the dependency does not affect the form of the types, but rather their elimination forms. The issue arises because we need a ``join point'' in the elimination. For example, $C$ is the join point in the elimination of $\disj$ in IPL because it needs to hold in both cases:
\begin{equation*}
\infer[\disj\text{-E}_\text{IPL}]{\Gamma \vdash \case(M,x.N,y.P) : C}{\Gamma, x:A \vdash N: C & \Gamma, y:B \vdash P: C & \Gamma \vdash M: A+B}
\end{equation*}
In IPL, there is no need to make $C$ dependent because types are fixed: there is nothing about $C$ that can vary between the branches. In dependent type theory, this is no longer the case.
\subsubsection{Sum Types}
To motivate why we need dependency in $C$, we consider the sum (i.e. disjunction). First we introduce the booleans, a simple case of the $(+)$ type. Let us define the boolean type $2$ as $1+1$, where $1$ is the unit type, and the values $\ttrue=\inl(\left<\right>):2$ and $\ffalse=\inr(\left<\right>):2$.
Consider the rather trivial proposition (written as a type) $\Pi x:2. (\Id{2}(x,\ffalse)) + (\Id{2}(x,\ttrue))$. In order to prove this, we need to perform a case analysis on $x$. In particular, the proposition we actually prove is different in each case:
\begin{enumerate}
\item False case: prove $(\Id{2}(\ffalse,\ffalse)) + (\Id{2}(\ffalse,\ttrue))$ by left injection of reflexivity.
\item True case: prove $(\Id{2}(\ttrue,\ffalse)) + (\Id{2}(\ttrue,\ttrue))$ by right injection of reflexivity.
\end{enumerate}
We see that if we case analyze on $x$, then the actual proposition we need to prove in each case is the goal we are trying to prove, specialized to the particular case we are in. The following rule codifies this:
% Is there a way to make this smaller? Stack the premises or something? It trails off the page...
\begin{equation*}
\infer[\disj\text{-E}]{\Gamma \vdash \case[z.C](M,x.N,y.P) : [M/z]C}{\Gamma, x:A \vdash N: [\inl(x)/z]C & \Gamma, y:B \vdash P: [\inr(y)/z]C & \Gamma \vdash M: A+B & \Gamma, z: A+B \vdash C \type}
\end{equation*}
In the case(...) construct, the part $[z.C]$ is called the \emph{motive}, because it is the motivation for performing the case analysis in the first place. What is different here from IPL is that $C$ is parametric in $z:A+B$. The reason that this is necessary is that we need to be able to substitute the actual case we are analyzing (either inl or inr) in each branch into the type, so that our proof can vary. The case analysis proves $C$ specialized to $M$, which is the term $M$ flowing into the \emph{type} $C$. The parametricity in $z$ allows this dependency.
We can specialize this rule to the booleans to obtain a new \emph{if} construct:
\begin{equation*}
\infer[2\text{-E}]{\Gamma \vdash \caseif[z.C](M,N,P) : [M/z]C}{\Gamma \vdash N: [\ttrue /z]C & \Gamma \vdash P: [\ffalse/z]C & \Gamma \vdash M: 2 & \Gamma, z: 2 \vdash C \type}
\end{equation*}
where the terms $N$ and $P$ do not bind any variables because, by $(\eta)$, the variables bound in the $\disj\text{-E}$ rule would both just be the null tuple, and thus do not actually need to be bound.
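With this rule we can write a proof term for the proposition $\Pi x:2.\ (\Id{2}(x,\ffalse)) + (\Id{2}(x,\ttrue))$ above, where each branch is the injection of reflexivity described earlier ($\refl{2}(M) : \Id{2}(M,M)$ is the reflexivity proof, introduced formally with the identity types below):
\begin{equation*}
\lambda x{:}2.\ \caseif[z.(\Id{2}(z,\ffalse)) + (\Id{2}(z,\ttrue))](x,\ \inr(\refl{2}(\ttrue)),\ \inl(\refl{2}(\ffalse)))
\end{equation*}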
This presentation seems justified in the logical interpretation, but in programming it can lead to some unexpected results. In most programming languages, $\caseif(x, 17, \ttrue)$ would not be well typed, as the true branch has type $\Nat$ and the false branch has type $2$. With the tools of dependent type theory, however, we can give this term a type. If we posit the existence of conditionals at the type level, which we have not formalized yet, we can imagine the typing:
\begin{equation*}
\caseif(M,17,\ttrue) : \caseif(M,\Nat,2)
\end{equation*}
for some suitable if construct at the type level, noting that the ``2'' here is a type name, not $s(s(z))$. Even though the two branches have different simple types, $\caseif(M,\Nat,2)$ represents a join point for the two expressions. In normal programming languages, the join point type is not allowed to depend on any term, let alone on the actual branch that was taken. This example then seems ill-typed, but in fact $17:\caseif(\ttrue,\Nat,2)$ and $\ttrue:\caseif(\ffalse,\Nat,2)$, which are exactly the critical premises in the $\disj\text{-E}$ rule. This kind of construct is important when encoding $A+B$ using sigma types, as sketched below.
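For instance, with such a type-level conditional one could sketch the encoding of $A+B$ as $\Sigma z{:}2.\ \caseif(z, A, B)$, with $\inl(M)$ represented by $\left<\ttrue, M\right>$ and $\inr(N)$ by $\left<\ffalse, N\right>$; making this precise requires the type-level $\caseif$ that we have only posited here.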
Finally, we can take some $\beta$ and $\eta$ rules as well, which mirror the non-dependent case. Here we omit the motive as it plays no role in the rules:
\begin{alignat*}{4}
&\case(\inl(M), x.N, y.P) \equiv [M/x]N && \quad && \beta_1 \\
&\case(\inr(M), x.N, y.P) \equiv [M/y]P && \quad && \beta_2 \\
&\case(M, x.[\inl(x)/z]P, y.[\inr(y)/z]P) \equiv [M/z]P && \quad && \eta
\end{alignat*}
\subsubsection{Natural Numbers}
As with sum types, there is a relatively straightforward generalization of the elimination rule for the naturals, in terms of a recursor augmented with a motive:
\begin{equation*}
\infer[\Nat\text{-E}]{\Gamma \vdash \rec[z.C](M,N,x.y.P) : [M/z]C}{\Gamma \vdash M:\Nat & \Gamma,z:\Nat \vdash C \type & \Gamma \vdash N: [0/z]C & \Gamma, x:\Nat,y:[x/z]C \vdash P: [s(x)/z]C}
\end{equation*}
Just as in the sum case, we can interpret the recursor as a proof of $C$ for the specific natural $M$, given $N$, which is a proof for zero, and $P$, which proves $C$ for the successor of some natural. In G\"{o}del's T we could omit the variable $x:\Nat$ from the recursor, because in the presence of products we could recover it if necessary. In this formulation, however, the type of $y$ is a proof of $C$ for some natural, so we need to give that natural a name in order to form the type.
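When the motive $C$ does not actually mention $z$, this rule degenerates to the familiar recursor of G\"{o}del's T. For example, a doubling function can be written with the constant motive $z.\Nat$:
\begin{equation*}
\lambda n{:}\Nat.\ \rec[z.\Nat](n,\ 0,\ x.y.s(s(y))) \;:\; \Pi n{:}\Nat.\,\Nat
\end{equation*}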
We also have the ($\beta$) and ($\eta$) rules as expected:
\begin{alignat*}{5}
\rec[z.C](0, N, x.y.P) \equiv N && \quad \beta_1 \\
\rec[z.C](s(M), N, x.y.P) \equiv [M,\rec[z.C](M, N, x.y.P)/x,y]P && \quad \beta_2 \\
\infer{M\equiv \rec[z.C](w,N,x.y.P)}{[0/w]M\equiv N & [s(w)/w]M \equiv [w,M/x,y]P} && \quad \eta
\end{alignat*}
\begin{comment}
% Favonia: The above eta-rule is "too strong" in the sense that you can do induction.
We note that the $\eta$ rule is still not sufficient to prove the non-trivial equivalence, as we still can't induct over $\equiv$. But we can use the recursor to prove things inductively about naturals using identity types. For example, we can show as an exercise:
\begin{equation*}
\Pi x:\Nat.\Pi y:\Nat.\left(\Id{\Nat}(s(x),s(y)) \rightarrow \Id{\Nat}(x,y)\right)
\end{equation*}
using the $\eta\text{-}\Nat$ rule to perform the induction.
\end{comment}
%% TODO: Find a good place to put this. It doesn't seem to fit here.
\subsubsection{Sigma elimination}
We can characterize the sigma type elimination without \fst{} and \snd{} using a ``degenerate form'' of pattern matching where we have only one case:
\begin{equation*}
\infer[\Sigma\text{-E}]{\Gamma \vdash \casesplit[z.C](M, x.y.P) : [M/z]C}{\Gamma \vdash M : \Sigma x:A.B & \Gamma, z:\Sigma x:A.B \vdash C \type & \Gamma, x:A, y:B \vdash P : [\left<x,y\right>/z]C}
\end{equation*}
We can take the following beta rule:
\begin{alignat*}{5}
\casesplit[z.C](\left<M_1,M_2\right>, x.y.P) \equiv [M_1,M_2/x,y]P && \quad (\beta)
\end{alignat*}
We leave it as an exercise to show that one can implement $\casesplit$ from \fst{} and \snd{}. We can also show the converse: \fst{} and \snd{} are definable in terms of $\casesplit$, as sketched below.
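As a sketch of that converse direction (the forward direction remains the exercise), one candidate definition is
\begin{alignat*}{1}
\fst(M) &:= \casesplit[z.A](M,\ x.y.x)\\
\snd(M) &:= \casesplit[z.[\fst(z)/x]B](M,\ x.y.y)
\end{alignat*}
where checking the second definition uses the $(\beta)$ rule for the newly defined \fst{} together with the type equality rule, so that $y:B$ inhabits $[\left<x,y\right>/z]\,[\fst(z)/x]B$.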
\section{Identity Types}
With this background, we can begin to talk meaningfully about the identity type. We say that the type $\Id{A}(M,N)$ is the identity type of $M$ and $N$ in $A$, where $A \type$, $M:A$, and $N:A$. We can think of terms of this type as \emph{proofs} that $M$ and $N$ are equal as elements of $A$. Crucially, we need dependence to even form this type. The formation rule is:
\[\infer[\Id{}\text{-F}]{\Gamma \entails \Id{A}(M,N) \type}{\Gamma \entails A\type & \Gamma \entails M:A & \Gamma \entails N:A}\]
This type may also be written $M =_A N$. Confusingly, it may also be referred to as ``propositional equality,'' to distinguish it from definitional equality. The simplest element of this type is a proof of reflexivity, which has the introduction rule:
\[\infer[\Id{}\text{-I}]{\Gamma \entails \refl{A}(M): \Id{A}(M,M)}{\Gamma \entails M:A}\]
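With reflexivity in hand we can inhabit the example type from the section on negative dependent types: for each $x$, the pair of $\text{succ}(x)$ and the reflexivity proof witnesses that $x$ has a successor.
\begin{equation*}
\lambda x{:}\Nat.\ \left<\text{succ}(x),\ \refl{\Nat}(\text{succ}(x))\right> \;:\; \Pi x{:}\Nat.\ \Sigma y{:}\Nat.\ \Id{\Nat}(y, \text{succ}(x))
\end{equation*}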
The elimination rule is:
\[\infer[\Id{}\text{-E}]{\Gamma \entails \J[x.y.z.C](P,x.Q): [M,N,P/x,y,z]C}
{\Gamma \entails P:\Id{A}(M,N) & \Gamma,x{:}A,y{:}A,z{:}\Id{A}(x,y) \entails C \type & \Gamma, x{:}A\entails Q:[x,x,\refl{A}(x)/x,y,z]C}\]
One way to think of the identity type for a particular $A$ is as an inductively generated family of types, where the induction is taken simultaneously over the two terms and the proof that they are equal (corresponding to $x$, $y$, and $z$ in the rule). This view leads us to call the elimination rule \emph{path induction}, where elements of type $\Id{}$ are ``paths.'' Then $M$ and $N$ are the \emph{endpoints} of this path. We call this an induction because we can prove $C$ for any two endpoints and any proof of their equality if we can prove $C$ generically for a single endpoint using $Q$. We need to consider a generic $x$ in $Q$ because the whole family $\Id{A}$ is inductively generated at once, so we cannot fix the endpoints $M$ and $N$ while proving the ``inductive step.'' Further, because we only have one introduction rule, we can reason based on the fact that every path is just reflexivity, so $Q$ can make this assumption.
$\J$ also admits a $\beta$-like rule:
\begin{alignat*}{5}
\J[x.y.z. C](\refl{A}(M); x.Q) \equiv [M/x]Q : [M,M,\refl{A}(M)/x,y,z]C && \quad (\beta)
\end{alignat*}
\subsection{Identity Types as Equivalence Relations}
If $\Id{}$ is supposed to be equality, then we would like it to satisfy the three conditions of an equivalence relation: reflexivity, symmetry, and transitivity. Reflexivity is already part of the definition, but the other two properties are not. We would like \emph{proofs} that identity types satisfy these properties. Because we are working in a proof-relevant system, these proofs can also be viewed as enriched functional programs. Symmetry states that $x=y \implies y=x$, so a proof of symmetry is simply a map from $\Id{A}(x,y)$ to $\Id{A}(y,x)$. We can write the type of such a map formally as $\Pi x,y{:}A.\, \Id{A}(x,y) \to \Id{A}(y,x)$ (which is shorthand for $\Pi x,y{:}A.\, \Pi p{:}\Id{A}(x,y).\, \Id{A}(y,x)$). A proof of symmetry is just a \emph{program} which has this type:
\[\text{sym}_A := \lambda x{:}A. \lambda y{:}A. \lambda p{:}\Id{A}(x,y). \J[x.y.z.\Id{A}(y,x)](p;x.\refl{A}(x))\]
The construction of symmetry is a function (using three $\lambda$'s) whose body just immediately invokes path induction. The motive just states the overall conclusion, and the proof $Q$ uses the only way we have to generate elements of $\Id{A}$. The reason this proof is so simple is that every path is generated by reflexivity. The relation defined by the identity type is effectively the diagonal relation, which only relates things to themselves, and about which it is easy to prove symmetry.
Transitivity is more involved. Here we have not one but two equality proofs, and $\J$ will only let us inspect one at a time. As with symmetry, we write $\text{trans}_{A}$ as a map from elements and proofs of their equality to another proof of equality:
\begin{alignat*}{2}
\text{trans}_A := &\lambda x,y,z{:}A.\, \lambda u:\Id{A}(x,y).\, \lambda v{:}\Id{A}(y,z). \\
&\quad(\J[m.n.\_.\, \Id{A}(n,z)\to\Id{A}(m,z)](u;\, m.\lambda w. w))(v)
\end{alignat*}
The way to understand this is that we need to inductively define a function, which will show that if $y$ is equal to $z$, then $x$ is equal to $z$. Then we can apply this function to the proof $v$, and obtain the desired result. For $Q$, we see that the motive degenerates to $\Id{A}(m,z)\to\Id{A}(m,z)$, so we can just write the identity function.
These specific proofs of symmetry and transitivity induce a kind of $\beta$-like behavior for definitional equality. In particular, due to the $\beta$ rule for $\J$, we see that
\[\text{sym}_A(M)(M)(\refl{A}(M)) \equiv \refl{A}(M)\]
Further,
\[\text{trans}_A(X)(X)(Z)(\refl{A}(X))(Q) \equiv Q\]
because the induction $\J$ is performed on $u$, and thus when $u$ is a reflexivity the whole $\J$ term reduces to the identity function. This definitional equivalence depends on \emph{how} we proved $\text{trans}_A$. A different proof might not validate the same equivalences, which can introduce dependencies between a proof and the exact definitions of its lemmas. From a programming perspective, this is anti-modular.
\subsection{Simple Functionality}
In addition to being an equivalence relation, we might hope that maps respect equality. Suppose we have a map $F$ from $A$ to $B$ (so $x{:}A\entails F:B$). In the non-dependent case, $B$ does not depend on $x$, so we can ask for some term to satisfy:
\[x,y{:}A, u{:}\Id{A}(x,y) \entails \_ : \Id{B}(F x, F y)\]
We introduce the term $\textsf{ap}\; F\; u$ to be the lifting of $F$ from a map between terms to a map between paths. $\textsf{ap}$ is also known as the ``functorial action'' of $F$. We can define $\textsf{ap}$ using path induction as:
\[\textsf{ap}\; F\; u := \J[x.y._.\Id{B}(F x, F y)](u; x. \refl{B}(F x))\]
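For instance, taking $A = B = \Nat$ and $F$ to be the successor, $\textsf{ap}$ lifts a path between numbers to a path between their successors:
\[x,y{:}\Nat,\, u{:}\Id{\Nat}(x,y) \entails \textsf{ap}\; s\; u : \Id{\Nat}(s(x), s(y))\]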
\subsection{Transportation}
If we have a dependent map, then the above is no longer sufficient. In this case we have $z{:}A\entails B\type$ and $x{:}A\entails F:[x/z]B$. What is crucially different is that $B$ may depend on the argument, so $F\,x$ and $F\,y$ will not necessarily even have the same type! Thus we cannot even form $\Id{B}(F x, F y)$. If $x \defeq y$, then we would know that the two destination types are the same by substitution, but we only have that $x$ is equal to $y$ \emph{propositionally}. We would expect $[x/z]B$ and $[y/z]B$ to be \emph{related} in some way, but we do not go so far as to say the two spaces are the same.
%add display map, fibration diagram
What we will do is say that if two elements $x$ and $y$ have a path $p$ in $A$ between them (i.e.\ $p:\Id{A}(x,y)$), then there is a lifting of this path, $p_*$, which acts as a transport map between $[x/z]B$ and $[y/z]B$. Here $A$ is the ``base space,'' and $B$ taken over every element of $A$ is known as the ``total space.'' The type $[M/z]B$ for a particular $M:A$ is known as a \emph{fiber}.
We introduce the transport $\textsf{tr}$, with the following specification:
\[x,y{:}A,\, p{:}\Id{A}(x,y)\entails \textsf{tr}[z.B](p):[x/z]B\to[y/z]B\]
Informally, this says that a transport, which is specified by a family of types ($z.B$) and a path $p$, takes elements of the fiber over the start of the path to the fiber over its end. We can define transport to be:
\[\textsf{tr}[z.B](p) := \J[m.n.\_.[m/z]B\to[n/z]B](p;m.\lambda w.w)\]
Because the motive is $[m/z]B\to[n/z]B$, we can use the identity function inside $\J$, as there the motive becomes $[m/z]B\to[m/z]B$. Outside, however, because $p:\Id{A}(x,y)$, the type becomes $[x/z]B\to[y/z]B$. The intuition here is that because the two elements $x$ and $y$ are ``equal,'' we do not need anything more than a glorified identity function to transport between the induced fibers.
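For example, with the family $z{:}\Nat \entails \Seq(z) \type$ from earlier and a path $p : \Id{\Nat}(x,y)$, transport gives a map $\textsf{tr}[z.\Seq(z)](p) : \Seq(x) \to \Seq(y)$ that coerces length-$x$ sequences into length-$y$ sequences along the proof that $x$ and $y$ are equal.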
\end{document}