\chapter{Induction}
\label{cha:induction}
In \cref{cha:typetheory}, we introduced many ways to form new types from old ones.
Except for (dependent) function types and universes, all these rules are special cases of the general notion of \emph{inductive definition}.\index{definition!inductive}\index{inductive!definition}
In this chapter we study inductive definitions more generally.
\section{Introduction to inductive types}
\label{sec:bool-nat}
\index{type!inductive|(}%
\index{generation!of a type, inductive|(}
An \emph{inductive type} $X$ can be intuitively understood as a type ``freely generated'' by a certain finite collection of \emph{constructors}, each of which is a function (of some number of arguments) with codomain $X$.
This includes functions of zero arguments, which are simply elements of $X$.
When describing a particular inductive type, we list the constructors with bullets.
For instance, the type \bool from \cref{sec:type-booleans} is inductively generated by the following constructors:
\begin{itemize}
\item $\bfalse:\bool$
\item $\btrue:\bool$
\end{itemize}
Similarly, $\unit$ is inductively generated by the constructor:
\begin{itemize}
\item $\ttt:\unit$
\end{itemize}
while $\emptyt$ is inductively generated by no constructors at all.
An example where the constructor functions take arguments is the coproduct $A+B$, which is generated by the two constructors
\begin{itemize}
\item $\inl:A\to A+B$
\item $\inr:B\to A+B$.
\end{itemize}
And an example with a constructor taking multiple arguments is the cartesian product $A\times B$, which is generated by one constructor
\begin{itemize}
\item $\pairr{\blank, \blank} : A\to B \to A\times B$.
\end{itemize}
Crucially, we also allow constructors of inductive types that take arguments from the inductive type being defined.
For instance, the type $\nat$ of natural numbers has constructors
\begin{itemize}
\item $0:\nat$
\item $\suc:\nat\to\nat$.
\end{itemize}
\symlabel{lst}
\index{type!of lists}%
\indexsee{list}{type of lists}%
Another useful example is the type $\lst A$ of finite lists of elements of some type $A$, which has constructors
\begin{itemize}\label{list_constructors}
\item $\nil:\lst A$
\item $\cons:A\to \lst A \to \lst A$.
\end{itemize}
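To fix ideas, here is a sketch of how some of these types might be declared in a proof assistant.
The sketch is in Lean~4, whose ambient type theory differs from the one used in this book, but whose \texttt{inductive} declarations follow exactly this constructor-list format; the names \texttt{Two}, \texttt{Nat'}, and \texttt{List'} are our own, chosen to avoid Lean's built-in \texttt{Bool}, \texttt{Nat}, and \texttt{List}.
\begin{verbatim}
-- Each declaration lists the constructors, just as in the bullet
-- lists above.
inductive Two : Type where
  | false' : Two            -- a zero-argument constructor
  | true'  : Two

inductive Nat' : Type where
  | zero : Nat'
  | succ : Nat' → Nat'      -- an argument from the type being defined

inductive List' (A : Type) : Type where
  | nil  : List' A
  | cons : A → List' A → List' A
\end{verbatim}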
\index{free!generation of an inductive type}%
Intuitively, we should understand an inductive type as being \emph{freely generated by} its constructors.
That is, the elements of an inductive type are exactly what can be obtained by starting from nothing and applying the constructors repeatedly.
(We will see in \cref{sec:identity-systems,cha:hits} that this conception has to be modified slightly for more general kinds of inductive definitions, but for now it is sufficient.)
For instance, in the case of \bool, we should expect that the only elements are $\bfalse$ and $\btrue$.
Similarly, in the case of \nat, we should expect that every element is either $0$ or obtained by applying $\suc$ to some ``previously constructed'' natural number.
\index{induction principle}%
Rather than assert properties such as this directly, however, we express them by means of an \emph{induction principle}, also called a \emph{(dependent) elimination rule}.
We have seen these principles already in \cref{cha:typetheory}.
For instance, the induction principle for \bool is:
%
\begin{itemize}
\item When proving a statement $E : \bool \to \type$ about \emph{all} inhabitants of \bool, it suffices to prove it for $\bfalse$ and $\btrue$, i.e., to give proofs $e_0 : E(\bfalse)$ and $e_1 : E(\btrue)$.
\end{itemize}
Furthermore, the resulting proof $\ind\bool(E,e_0,e_1): \prd{b : \bool}E(b)$ behaves as expected when applied to the constructors $\bfalse$ and $\btrue$; this principle is expressed by the \emph{computation rules}\index{computation rule!for type of booleans}:
\begin{itemize}
\item We have $\ind\bool(E,e_0,e_1,\bfalse) \jdeq e_0$.
\item We have $\ind\bool(E,e_0,e_1,\btrue) \jdeq e_1$.
\end{itemize}
\index{case analysis}%
Thus, the induction principle for the type $\bool$ of booleans allows us to reason by \emph{case analysis}.
Since neither of the two constructors takes any arguments, this is all we need for booleans.
\index{natural numbers}%
For natural numbers, however, case analysis is generally not sufficient: in the case corresponding to the inductive step $\suc(n)$, we also want to presume that the statement being proven has already been shown for $n$.
This gives us the following induction principle:
\begin{itemize}
\item When proving a statement $E : \nat \to \type$ about \emph{all} natural numbers, it suffices to prove it for $0$ and for $\suc(n)$, assuming it holds
for $n$, i.e., we construct $e_z : E(0)$ and $e_s : \prd{n : \nat} E(n) \to E(\suc(n))$.
\end{itemize}
As in the case of booleans, we also have the associated computation rules for the function $\ind\nat(E,e_z,e_s) : \prd{x:\nat} E(x)$:
\index{computation rule!for natural numbers}%
\begin{itemize}
\item $\ind\nat(E,e_z,e_s,0) \jdeq e_z$.
\item $\ind\nat(E,e_z,e_s,\suc(n)) \jdeq e_s(n,\ind\nat(E,e_z,e_s,n))$ for any $n : \nat$.
\end{itemize}
The dependent function $\ind\nat(E,e_z,e_s)$ can thus be understood as being defined recursively on the argument $x : \nat$, via the functions $e_z$ and $e_s$ which we call the \define{recurrences}\indexdef{recurrence}.
When $x$ is zero,\index{zero} the function simply returns $e_z$.
When $x$ is the successor\index{successor} of another natural number $n$, the result is obtained by taking the recurrence $e_s$ and substituting the specific predecessor\index{predecessor} $n$ and the recursive call value $\ind\nat(E,e_z,e_s,n)$.
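As a concrete illustration, here is a sketch of $\ind\nat$ in Lean~4 (the name \texttt{indNat} is ours); there both computation rules hold judgmentally, which Lean records by accepting \texttt{rfl}.
\begin{verbatim}
universe u

-- The dependent eliminator, defined by structural recursion.
def indNat {E : Nat → Sort u} (ez : E 0)
    (es : (n : Nat) → E n → E (n + 1)) : (n : Nat) → E n
  | 0     => ez
  | n + 1 => es n (indNat ez es n)

-- The two computation rules hold by definition.
example {E : Nat → Sort u} (ez : E 0)
    (es : (n : Nat) → E n → E (n + 1)) :
    indNat ez es 0 = ez := rfl

example {E : Nat → Sort u} (ez : E 0)
    (es : (n : Nat) → E n → E (n + 1)) (n : Nat) :
    indNat ez es (n + 1) = es n (indNat ez es n) := rfl
\end{verbatim}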
The induction principles for all the examples mentioned above share this family resemblance.
In \cref{sec:strictly-positive} we will discuss a general notion of ``inductive definition'' and how to derive an appropriate \emph{induction principle} for it, but first we investigate various commonalities between inductive definitions.
\index{recursion principle}%
For instance, we have remarked in every case in \cref{cha:typetheory} that from the induction principle we can derive a \emph{recursion principle} in which the codomain is a simple type (rather than a family).
Both induction and recursion principles may seem odd, since they yield only the \emph{existence} of a function without seeming to characterize it uniquely.
However, in fact the induction principle is strong enough also to prove its own \emph{uniqueness principle}\index{uniqueness!principle, propositional!for functions on N@for functions on $\nat$}, as in the following theorem.
\begin{thm}\label{thm:nat-uniq}
Let $f,g : \prd{x:\nat} E(x)$ be two functions which satisfy the recurrences
%
\begin{equation*}
e_z : E(0)
\qquad\text{and}\qquad
e_s : \prd{n : \nat} E(n) \to E(\suc(n))
\end{equation*}
%
up to propositional equality, i.e., such that
\begin{equation*}
\id{f(0)}{e_z}
\qquad\text{and}\qquad
\id{g(0)}{e_z}
\end{equation*}
as well as
\begin{gather*}
\prd{n : \nat} \id{f(\suc(n))}{e_s(n, f(n))},\\
\prd{n : \nat} \id{g(\suc(n))}{e_s(n, g(n))}.
\end{gather*}
Then $f$ and $g$ are equal.
\end{thm}
\begin{proof}
We use induction on the type family $D(x) \defeq \id{f(x)}{g(x)}$. For the base case, we have \[f(0) = e_z = g(0).\]
For the inductive case, assume $n : \nat$ such that $f(n) = g(n)$. Then
\[ f(\suc(n)) = e_s(n, f(n)) = e_s(n, g(n)) = g(\suc(n)). \]
The first and last equalities follow from the assumptions on $f$ and $g$. The middle equality follows from the inductive hypothesis and the fact that application preserves equality. This gives us pointwise equality between $f$ and $g$; invoking function extensionality finishes the proof.
\end{proof}
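The same argument can be carried out in a proof assistant; here is a sketch in Lean~4 (names are ours), proving the pointwise equality, from which function extensionality (\texttt{funext}) gives $f = g$.
\begin{verbatim}
universe u

theorem nat_uniq {E : Nat → Sort u} (f g : (n : Nat) → E n)
    (ez : E 0) (es : (n : Nat) → E n → E (n + 1))
    (f0 : f 0 = ez) (g0 : g 0 = ez)
    (fs : ∀ n, f (n + 1) = es n (f n))
    (gs : ∀ n, g (n + 1) = es n (g n)) :
    ∀ n, f n = g n := by
  intro n
  induction n with
  | zero => rw [f0, g0]           -- base case: both sides equal ez
  | succ n ih => rw [fs, gs, ih]  -- inductive step via the recurrence
\end{verbatim}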
Note that the uniqueness principle applies even to functions that only satisfy the recurrences \emph{up to propositional equality}, i.e.\ a path.
Of course, the particular function obtained from the induction principle satisfies these recurrences judgmentally\index{judgmental equality}; we will return to this point in \cref{sec:htpy-inductive}.
On the other hand, the theorem itself only asserts a propositional equality between functions (see also \cref{ex:same-recurrence-not-defeq}).
From a homotopical viewpoint it is natural to ask whether this path is \emph{coherent}, i.e.\ whether the equality $f=g$ is unique up to higher paths; in \cref{sec:initial-alg} we will see that this is in fact the case.
Of course, similar uniqueness theorems for functions can generally be formulated and shown for other inductive types as well.
In the next section, we show how this uniqueness property, together with univalence, implies that an inductive type such as the natural numbers is completely characterized by its introduction, elimination, and computation rules.
\index{type!inductive|)}%
\index{generation!of a type, inductive|)}%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Uniqueness of inductive types}
\label{sec:appetizer-univalence}
\index{uniqueness!of identity types}%
We have defined ``the'' natural numbers to be a particular type \nat with particular inductive generators $0$ and $\suc$.
However, by the general principle of inductive definitions in type theory described in the previous section, there is nothing preventing us from defining \emph{another} type in an identical way.
\index{natural numbers!isomorphic definition of}
That is, suppose we let $\natp$ be the inductive type generated by the constructors
\begin{itemize}
\item $\zerop:\natp$
\item $\sucp:\natp\to\natp$.
\end{itemize}
Then $\natp$ will have identical-looking induction and recursion principles to $\nat$.
When proving a statement $E : \natp \to \type$ for all of these ``new'' natural numbers, it suffices to give the proofs $e_z : E(\zerop)$ and \narrowequation{e_s : \prd{n : \natp} E(n) \to E(\sucp(n)).}
And the function $\ind\natp(E,e_z,e_s) : \prd{n:\natp} E(n)$ has the following computation rules:
\begin{itemize}
\item $\ind\natp(E,e_z,e_s,\zerop) \jdeq e_z$,
\item $\ind\natp(E,e_z,e_s,\sucp(n)) \jdeq e_s(n,\ind\natp(E,e_z,e_s,n))$ for any $n : \natp$.
\end{itemize}
But what is the relation between $\nat$ and $\natp$?
This is not just an academic question, since structures that ``look like'' the natural numbers can be found in many other places.
\index{type!of lists}%
For instance, we may identify natural numbers with lists over the type with one element (this is arguably the oldest appearance, found on walls of caves\index{caves, walls of}), with the non-negative integers, with subsets of the rationals and the reals, and so on.
And from a programming\index{programming} point of view, the ``unary'' representation of our natural numbers is very inefficient, so we might prefer sometimes to use a binary one instead.
We would like to be able to identify all of these versions of ``the natural numbers'' with each other, in order to transfer constructions and results from one to another.
Of course, if two versions of the natural numbers satisfy identical induction principles, then they have identical induced structure.
For instance, recall the example of the function $\dbl$ defined in \cref{sec:inductive-types}. A similar function
for our new natural numbers is readily defined by duplication and adding primes:
\[ \dblp \defeq \rec\natp(\natp, \; \zerop, \; \lamu{n:\natp}{m:\natp} \sucp(\sucp(m))). \]
Simple as this may seem, it has the obvious drawback of leading to a
proliferation of duplicates. Not only do functions have to be
duplicated, but also all lemmas and their proofs. For example,
an easy result such as $\prd{n : \nat} \dbl(\suc(n))=\suc(\suc(\dbl(n)))$, as well
as its proof by induction, also has to be ``primed''.
In traditional mathematics, one just proclaims that $\nat$ and $\natp$ are obviously ``the same'', and can be substituted for each other whenever the need arises.
This is usually unproblematic, but it sweeps a fair amount under the rug, widening the gap between informal mathematics and its precise description.
In homotopy type theory, we can do better.
First observe that we have the following definable maps:
\begin{itemize}
\item $f \defeq \rec\nat(\natp, \; \zerop, \; \lamu{n:\nat} \sucp)
: \nat \to\natp$,
\item $g \defeq \rec\natp(\nat, \; 0, \; \lamu{n:\natp} \suc)
: \natp \to\nat$.
\end{itemize}
Since the composition of $g$ and $f$ satisfies the same recurrences as the identity function on $\nat$, \cref{thm:nat-uniq} gives that $\prd{n : \nat} \id{g(f(n))}{n}$, and the ``primed'' version of the same theorem gives $\prd{n : \natp} \id{f(g(n))}{n}$.
Thus, $f$ and $g$ are quasi-inverses, so that $\eqv{\nat}{\natp}$.
We can now transfer functions on $\nat$ directly to functions on $\natp$ (and vice versa) along this equivalence, e.g.
\[ \dblp \defeq \lamu{n:\natp} f(\dbl(g(n))). \]
It is an easy exercise to show that this version of $\dblp$ is equal to the earlier one.
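Here is a sketch of this equivalence in Lean~4, with a primed copy \texttt{Nat'} standing in for $\natp$ (all names are ours), showing the two maps and one of the two round-trip laws.
\begin{verbatim}
inductive Nat' : Type where
  | zero' : Nat'
  | succ' : Nat' → Nat'

def f : Nat → Nat'          -- the map defined by rec_N above
  | 0     => Nat'.zero'
  | n + 1 => Nat'.succ' (f n)

def g : Nat' → Nat          -- the map defined by rec_N'
  | Nat'.zero'   => 0
  | Nat'.succ' n => g n + 1

theorem gf (n : Nat) : g (f n) = n := by
  induction n with
  | zero => rfl
  | succ n ih => simp [f, g, ih]
\end{verbatim}
The other round trip is proved symmetrically, by induction on $\natp$.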
Of course, there is nothing surprising about this; such an isomorphism is exactly how a mathematician will envision ``identifying'' $\nat$ with $\natp$.
However, the mechanism of ``transfer'' across an isomorphism depends on the thing being transferred; it is not always as simple as pre- and post-composing a single function with $f$ and $g$.
Consider, for instance, a simple lemma such as
\[\prd{n : \natp} \dblp(\sucp(n))=\sucp(\sucp(\dblp(n))).\]
Inserting the correct $f$s and $g$s is only a little easier than re-proving it by induction on $n:\natp$ directly.
\index{isomorphism!transfer across}%
\index{univalence axiom}%
Here is where the univalence axiom steps in: since $\eqv{\nat}{\natp}$, we also have $\id[\type]{\nat}{\natp}$, i.e.\ $\nat$ and $\natp$ are
\emph{equal} as types.
Now the induction principle for identity guarantees that any construction or proof relating to $\nat$ can automatically be transferred to $\natp$ in the same way.
We simply consider the type of the function or theorem as a type-indexed family of types $P:\type\to\type$, with the given object being an element of $P(\nat)$, and transport along the path $\id \nat\natp$.
This involves considerably less overhead.
For simplicity, we have described this method in the case of two types \nat and \natp with \emph{identical}-looking definitions.
However, a more common situation in practice is when the definitions are not literally identical, but nevertheless one induction principle implies the other.
\index{type!unit}%
\index{natural numbers!encoded as list(unit)@encoded as $\lst\unit$}%
Consider, for instance, the type of lists from a one-element type, $\lst\unit$, which is generated by
\begin{itemize}
\item an element $\nil:\lst\unit$, and
\item a function $\cons:\unit \times \lst\unit \to\lst\unit$.
\end{itemize}
This is not identical to the definition of \nat, and it does not give rise to an identical induction principle.
The induction principle of $\lst\unit$ says that for any $E:\lst\unit\to\type$ together with recurrence data $e_\nil:E(\nil)$ and $e_\cons : \prd{u:\unit}{\ell:\lst\unit} E(\ell) \to E(\cons(u,\ell))$, there exists $f:\prd{\ell:\lst\unit} E(\ell)$ such that $f(\nil)\jdeq e_\nil$ and $f(\cons(u,\ell))\jdeq e_\cons(u,\ell,f(\ell))$.
(We will see how to derive the induction principle of an inductive definition in \cref{sec:strictly-positive}.)
Now suppose we define $0'' \defeq \nil: \lst\unit$, and $\suc'':\lst\unit\to\lst\unit$ by $\suc''(\ell) \defeq \cons(\ttt,\ell)$.
Then for any $E:\lst\unit\to\type$ together with $e_0:E(0'')$ and $e_s:\prd{\ell:\lst\unit} E(\ell) \to E(\suc''(\ell))$, we can define
\begin{align*}
e_\nil &\defeq e_0\\
e_\cons(\ttt,\ell,x) &\defeq e_s(\ell,x).
\end{align*}
(In the definition of $e_\cons$ we use the induction principle of \unit to assume that $u$ is $\ttt$.)
Now we can apply the induction principle of $\lst\unit$, obtaining $f:\prd{\ell:\lst\unit} E(\ell)$ such that
\begin{gather*}
f(0'') \jdeq f(\nil) \jdeq e_\nil \jdeq e_0\\
f(\suc''(\ell)) \jdeq f(\cons(\ttt,\ell)) \jdeq e_\cons(\ttt,\ell,f(\ell)) \jdeq e_s(\ell,f(\ell)).
\end{gather*}
Thus, $\lst\unit$ satisfies the same induction principle as $\nat$, and hence (by the same arguments above) is equal to it.
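A sketch of this derivation in Lean~4 (names ours): the derived eliminator for \texttt{List Unit} has exactly the shape of $\ind\nat$, and its computation rules hold judgmentally.
\begin{verbatim}
universe u

def zero'' : List Unit := []
def succ'' (l : List Unit) : List Unit := () :: l

-- N-style induction for List Unit; the Unit argument of cons is
-- matched against its only value, as in the text.
def indListUnit {E : List Unit → Sort u} (e0 : E zero'')
    (es : (l : List Unit) → E l → E (succ'' l)) :
    (l : List Unit) → E l
  | []      => e0
  | () :: l => es l (indListUnit e0 es l)
\end{verbatim}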
Finally, these conclusions are not confined to the natural numbers: they apply to any inductive type.
If we have an inductively defined type $W$, say, and some other type $W'$ which satisfies the same induction principle as $W$, then it follows that $\eqv{W}{W'}$, and hence $W=W'$.
We use the derived recursion principles for $W$ and $W'$ to construct maps $W\to W'$ and $W'\to W$, respectively, and then the induction principles for each to prove that both composites are equal to identities.
For instance, in \cref{cha:typetheory} we saw that the coproduct $A+B$ could also have been defined as $\sm{x:\bool} \rec{\bool}(\UU,A,B,x)$.
The latter type satisfies the same induction principle as the former; hence they are canonically equivalent.
This is, of course, very similar to the familiar fact in category theory that if two objects have the same \emph{universal property}, then they are equivalent.
In \cref{sec:initial-alg} we will see that inductive types actually do have a universal property, so that this is a manifestation of that general principle.
\index{universal property}
\section{\texorpdfstring{$\w$}{W}-types}
\label{sec:w-types}
Inductive types are very general, which is excellent for their usefulness and applicability, but makes them difficult to study as a whole.
Fortunately, they can all be formally reduced to a few special cases.
It is beyond the scope of this book to discuss this reduction --- which is anyway irrelevant to the mathematician using type theory in practice --- but we will take a little time to discuss one of the basic special cases that we have not yet met.
These are Martin-L{\"o}f's \emph{$\w$-types}, also known as the types of \emph{well-founded trees}.
\index{tree, well-founded}%
$\w$-types are a generalization of such types as natural numbers, lists, and binary trees, which are sufficiently general to encapsulate the ``recursion'' aspect of \emph{any} inductive type.
A particular $\w$-type is specified by giving two parameters $A : \type$ and $B : A \to \type$, in which case the resulting $\w$-type is written $\wtype{a:A} B(a)$.
The type $A$ represents the type of \emph{labels} for $\wtype{a :A} B(a)$, which function as constructors (however, we reserve that word for the actual functions which arise in inductive definitions). For instance, when defining natural numbers as a $\w$-type,%
\index{natural numbers!encoded as a W-type@encoded as a $\w$-type}
the type $A$ would be the type $\bool$ inhabited by the two elements $\bfalse$ and $\btrue$, since there are precisely two ways to obtain a natural number --- either it will be zero or a successor of another natural number.
The type family $B : A \to \type$ is used to record the arity\index{arity} of labels: a label $a : A$ will take a family of inductive arguments, indexed over $B(a)$. We can therefore think of the ``$B(a)$-many'' arguments of $a$. These arguments are represented by a function $f : B(a) \to \wtype{a :A} B(a)$, with the understanding that for any $b : B(a)$, $f(b)$ is the ``$b$-th'' argument to the label $a$. The $\w$-type $\wtype{a :A} B(a)$ can thus be thought of as the type of well-founded trees, where nodes are labeled by elements of $A$ and each node labeled by $a : A$ has $B(a)$-many branches.
In the case of natural numbers, the label $\bfalse$ has arity 0, since it constructs the constant zero\index{zero}; the label $\btrue$ has arity 1, since it constructs the successor\index{successor} of its argument. We can capture this by using simple elimination on $\bool$ to define a function $\rec\bool(\bbU,\emptyt,\unit)$ into a universe of types; this function returns the empty type $\emptyt$ for $\bfalse$ and the unit\index{type!unit} type $\unit$ for $\btrue$. We can thus define
\symlabel{natw}
\index{type!unit}%
\index{type!empty}%
\[ \natw \defeq \wtype{b:\bool} \rec\bool(\bbU,\emptyt,\unit,b) \]
where the superscript $\mathbf{w}$ serves to distinguish this version of natural numbers from the previously used one.
Similarly, we can define the type of lists\index{type!of lists} over $A$ as a $\w$-type with $\unit + A$ many labels: one nullary label for the empty list, plus one unary label for each $a : A$, corresponding to appending $a$ to the head of a list:
\[ \lst A \defeq \wtype{x: \unit + A} \rec{\unit + A}(\bbU, \; \emptyt, \; \lamu{a:A} \unit,\; x). \]
%
\index{W-type@$\w$-type}%
\indexsee{type!W-@$\w$-}{$\w$-type}%
In general, the $\w$-type $\wtype{x:A} B(x)$ specified by $A : \type$ and $B : A \to \type$ is the inductive type generated by the following constructor:
\begin{itemize}
\item \label{defn:supp}
$\supp : \prd{a:A} \Big(B(a) \to \wtype{x:A} B(x)\Big) \to \wtype{x:A} B(x)$.
\end{itemize}
%
The constructor $\supp$ (short for supremum\index{supremum!constructor of a W-type@constructor of a $\w$-type}) takes a label $a : A$ and a function $f : B(a) \to \wtype{x:A} B(x)$ representing the arguments to $a$, and constructs a new element of $\wtype{x:A} B(x)$. Using our previous encoding of natural numbers as $\w$-types, we can for instance define
\begin{equation*}
\zerow \defeq \supp(\bfalse, \; \lamu{x:\emptyt} \rec\emptyt(\natw,x)).
\end{equation*}
Put differently, we use the label $\bfalse$ to construct $\zerow$. Then, $\rec\bool(\bbU,\emptyt,\unit, \bfalse)$ evaluates to $\emptyt$, as it should since $\bfalse$ is a nullary label. Thus, we need to construct a function $f : \emptyt \to \natw$, which represents the (zero) arguments supplied to $\bfalse$. This is of course trivial, using simple elimination on $\emptyt$ as shown. Similarly, we can define $1^{\mathbf{w}}$ and a successor function $\sucw$
\begin{align*}
1^{\mathbf{w}} &\defeq \supp(\btrue, \; \lamu{x:\unit} 0^{\mathbf{w}}) \\
\sucw&\defeq \lamu{n:\natw} \supp(\btrue, \; \lamu{x:\unit} n).
\end{align*}
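Here is a sketch of this encoding in Lean~4; the declaration \texttt{W} and the names \texttt{Bnat}, \texttt{NatW}, \texttt{zeroW}, \texttt{oneW}, and \texttt{succW} are our own.
\begin{verbatim}
inductive W (A : Type) (B : A → Type) : Type where
  | sup : (a : A) → (B a → W A B) → W A B

def Bnat : Bool → Type     -- the arities of the two labels
  | false => Empty         -- false is nullary
  | true  => Unit          -- true is unary

abbrev NatW : Type := W Bool Bnat

def zeroW : NatW := W.sup false (fun e => nomatch e)
def succW (n : NatW) : NatW := W.sup true (fun _ => n)
def oneW : NatW := succW zeroW
\end{verbatim}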
\index{induction principle!for W-types@for $\w$-types}%
We have the following induction principle for $\w$-types:
\begin{itemize}
\item When proving a statement $E : \big(\wtype{x:A} B(x)\big) \to \type$ about \emph{all} elements of the $\w$-type $\wtype{x:A} B(x)$, it suffices to prove it for $\supp(a,f)$, assuming it holds for all $f(b)$ with $b : B(a)$.
In other words, it suffices to give a proof
\begin{equation*}
e : \prd{a:A}{f : B(a) \to \wtype{x:A} B(x)}{g : \prd{b : B(a)} E(f(b))} E(\supp(a,f))
\end{equation*}
\end{itemize}
\index{variable}%
The variable $g$ represents our inductive hypothesis, namely that all arguments of $a$ satisfy $E$. To state this, we quantify over all elements of type $B(a)$, since each $b : B(a)$ corresponds to one argument $f(b)$ of $a$.
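In Lean~4, with \texttt{W} as in the sketch above, this induction principle can be written directly as a structurally recursive definition (the inductive hypothesis $g$ appears as the function \texttt{fun b => indW e (f b)}):
\begin{verbatim}
universe u

def indW {A : Type} {B : A → Type} {E : W A B → Sort u}
    (e : (a : A) → (f : B a → W A B) →
      ((b : B a) → E (f b)) → E (W.sup a f)) :
    (w : W A B) → E w
  | W.sup a f => e a f (fun b => indW e (f b))
\end{verbatim}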
How would we define the function $\dbl$ on natural numbers encoded as a $\w$-type? We would like to use the recursion principle of $\natw$ with a codomain of $\natw$ itself. We thus need to construct a suitable function
\[e : \prd{a : \bool}{f : B(a) \to \natw}{g : B(a) \to \natw} \natw\]
which will represent the recurrence for the $\dbl$ function; for simplicity we denote the type family $\rec\bool(\bbU,\emptyt,\unit)$ by $B$.
Clearly, $e$ will be a function taking $a : \bool$ as its first argument. The next step is to perform case analysis on $a$ and proceed based on whether it is $\bfalse$ or $\btrue$. This suggests the following form
\[ e \defeq \lamu{a:\bool} \rec\bool(C,e_0,e_1,a) \]
where \[C \defeq \prd{f : B(a) \to \natw}{g : B(a) \to \natw} \natw\]
If $a$ is $\bfalse$, the type $B(a)$ becomes $\emptyt$. Thus, given $f : \emptyt \to \natw$ and $g : \emptyt \to \natw$, we want to construct an element of $\natw$. Since the label $\bfalse$ represents $\emptyt$, it needs zero inductive arguments and the variables $f$ and $g$ are irrelevant. We return $\zerow$\index{zero} as a result:
\[ e_0 \defeq \lamu{f:\emptyt \to \natw}{g:\emptyt \to \natw} \zerow \]
Analogously, if $a$ is $\btrue$, the type $B(a)$ becomes $\unit$.
Since the label $\btrue$ represents the successor\index{successor} operator, it needs one inductive argument --- the predecessor\index{predecessor} --- which is represented by the variable $f : \unit \to \natw$.
The value of the recursive call on the predecessor is represented by the variable $g : \unit \to \natw$.
Taking this value (namely $g(\ttt)$) and applying the successor function twice thus yields the desired result:
\begin{equation*}
e_1 \defeq \; \lamu{f:\unit \to \natw}{g:\unit \to \natw} \sucw(\sucw(g(\ttt))).
\end{equation*}
Putting this together, we thus have
\[ \dbl \defeq \rec\natw(\natw, e) \]
with $e$ as defined above.
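Continuing the Lean~4 sketch of the encoding above, $\dbl$ can be written by a single case analysis on the label:
\begin{verbatim}
def dbl : NatW → NatW
  | W.sup false _ => zeroW                       -- the case e_0
  | W.sup true f  => succW (succW (dbl (f ())))  -- e_1: g(*) is dbl (f ())

-- anticipating the computation traced below:
example : dbl oneW = succW (succW zeroW) := rfl
\end{verbatim}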
\symlabel{defn:recursor-wtype}
The associated computation rule for the function $\rec{\wtype{x:A} B(x)}(E,e) : \prd{w : \wtype{x:A} B(x)} E(w)$ is as follows.
\index{computation rule!for W-types@for $\w$-types}%
\begin{itemize}
\item
For any $a : A$ and $f : B(a) \to \wtype{x:A} B(x)$ we have
\begin{equation*}
\rec{\wtype{x:A} B(x)}(E,e,\supp(a,f)) \jdeq
e(a,f,\big(\lamu{b:B(a)} \rec{\wtype{x:A} B(x)}(E,e,f(b))\big)).
\end{equation*}
\end{itemize}
In other words, the function $\rec{\wtype{x:A} B(x)}(E,e)$ satisfies the recurrence $e$.
By the above computation rule, the function $\dbl$ behaves as expected:
\begin{align*}
\dbl(\zerow) & \jdeq \rec\natw(\natw, e, \supp(\bfalse, \; \lamu{x:\emptyt} \rec\emptyt(\natw,x))) \\
& \jdeq e(\bfalse, \big(\lamu{x:\emptyt} \rec\emptyt(\natw,x)\big),
\big(\lamu{x:\emptyt} \dbl(\rec\emptyt(\natw,x))\big)) \\
& \jdeq e_0(\big(\lamu{x:\emptyt} \rec\emptyt(\natw,x)\big), \big(\lamu{x:\emptyt} \dbl(\rec\emptyt(\natw,x))\big))\\
& \jdeq \zerow \\
\intertext{and}
\dbl(1^{\mathbf{w}}) & \jdeq \rec\natw(\natw, e, \supp(\btrue, \; \lamu{x:\unit} \zerow)) \\
& \jdeq e(\btrue, \big(\lamu{x:\unit} \zerow\big), \big(\lamu{x:\unit} \dbl(\zerow)\big)) \\
& \jdeq e_1(\big(\lamu{x:\unit} \zerow\big), \big(\lamu{x:\unit} \dbl(\zerow)\big)) \\
& \jdeq \sucw(\sucw\big((\lamu{x:\unit} \dbl(\zerow))(\ttt)\big))\\
& \jdeq \sucw(\sucw(\zerow))
\end{align*}
and so on.
Just as for natural numbers, we can prove a uniqueness theorem for
$\w$-types:
\begin{thm}\label{thm:w-uniq}
\index{uniqueness!principle, propositional!for functions on W-types@for functions on $\w$-types}%
Let $g,h : \prd{w:\wtype{x:A}B(x)} E(w)$ be two functions which satisfy the recurrence
%
\begin{equation*}
e : \prd{a,f} \Parens{\prd{b : B(a)} E(f(b))} \to E(\supp(a,f)),
\end{equation*}
%
propositionally, i.e., such that
%
\begin{gather*}
\prd{a,f} \id{g(\supp(a,f))} {e(a,f,\lamu{b:B(a)} g(f(b)))}, \\
\prd{a,f} \id{h(\supp(a,f))}{e(a,f,\lamu{b:B(a)} h(f(b)))}.
\end{gather*}
Then $g$ and $h$ are equal.
\end{thm}
\section{Inductive types are initial algebras}
\label{sec:initial-alg}
\indexsee{initial!algebra characterization of inductive types}{homotopy-initial}%
As suggested earlier, inductive types also have a category-theoretic universal property.
They are \emph{homotopy-initial algebras}: initial objects (up to coherent homotopy) in a category of ``algebras'' determined by the specified constructors.
As a simple example, consider the natural numbers.
The appropriate sort of ``algebra'' here is a type equipped with the same structure that the constructors of $\nat$ give to it.
\index{natural numbers!as homotopy-initial algebra}
\begin{defn}\label{defn:nalg}
A \define{$\nat$-algebra}
\indexdef{N-algebra@$\nat$-algebra}%
\indexdef{algebra!N-@$\nat$-}%
is a type $C$ with two elements $c_0 : C$, $c_s : C \to C$. The type of such algebras is
\begin{equation*}
\nalg \defeq \sm {C : \type} C \times (C \to C).
\end{equation*}
\end{defn}
\begin{defn}\label{defn:nhom}
A \define{$\nat$-homomorphism}
\indexdef{N-homomorphism@$\nat$-homomorphism}%
\indexdef{homomorphism!N-@$\nat$-}%
between $\nat$-algebras $(C,c_0,c_s)$ and $(D,d_0,d_s)$ is a function $h : C \to D$ such that $h(c_0) = d_0$ and $h(c_s(c)) = d_s(h(c))$ for all $c : C$. The type of such homomorphisms is
\begin{narrowmultline*}
\nhom((C,c_0,c_s),(D,d_0,d_s)) \defeq \narrowbreak
\dsm {h : C \to D} (\id{h(c_0)}{d_0}) \times \tprd{c:C} (\id{h(c_s(c))}{d_s(h(c))}).
\end{narrowmultline*}
\end{defn}
We thus have a category of $\nat$-algebras and $\nat$-homomorphisms, and the claim is that $\nat$ is the initial object of this category.
A category theorist will immediately recognize this as the definition of a \emph{natural numbers object} in a category.
Of course, since our types behave like $\infty$-groupoids\index{.infinity-groupoid@$\infty$-groupoid}, we actually have an $(\infty,1)$-category\index{.infinity1-category@$(\infty,1)$-category} of $\nat$-algebras, and we should ask $\nat$ to be initial in the appropriate $(\infty,1)$-categorical sense.
Fortunately, we can formulate this without needing to define $(\infty,1)$-categories.
\begin{defn}
\index{universal!property!of natural numbers}%
A $\nat$-algebra $I$ is called \define{homotopy-initial},
\indexdef{homotopy-initial!N-algebra@$\nat$-algebra}%
\indexdef{N-algebra@$\nat$-algebra!homotopy-initial (h-initial)}%
or \define{h-initial}
\indexsee{h-initial}{homotopy-initial}%
for short, if for any other $\nat$-algebra $C$, the type of $\nat$-homomorphisms from $I$ to $C$ is contractible. Thus,
\begin{equation*}
\ishinitn(I) \defeq \prd{C : \nalg} \iscontr(\nhom(I,C)).
\end{equation*}
\end{defn}
When they exist, h-initial algebras are unique --- not just up to isomorphism, as usual in category theory, but up to equality, by the univalence axiom.
\begin{thm}
Any two h-initial $\nat$-algebras are equal.
Thus, the type of h-initial $\nat$-algebras is a mere proposition.
\end{thm}
\begin{proof}
Suppose $I$ and $J$ are h-initial $\nat$-algebras.
Then $\nhom(I,J)$ is contractible, hence inhabited by some $\nat$-homomorphism $f:I\to J$, and likewise we have an $\nat$-homomorphism $g:J\to I$.
Now the composite $g\circ f$ is a $\nat$-homomorphism from $I$ to $I$, as is $\idfunc[I]$; but $\nhom(I,I)$ is contractible, so $g\circ f = \idfunc[I]$.
Similarly, $f\circ g = \idfunc[J]$.
Hence $\eqv IJ$, and so $I=J$. Since being contractible is a mere proposition and dependent products preserve mere propositions, it follows that being h-initial is itself a mere proposition. Thus any two proofs that $I$ (or $J$) is h-initial are necessarily equal, which finishes the proof.
\end{proof}
We now have the following theorem.
\begin{thm}\label{thm:nat-hinitial}
The $\nat$-algebra $(\nat, 0, \suc)$ is homotopy-initial.
\end{thm}
\begin{proof}[Sketch of proof]
Fix an arbitrary $\nat$-algebra $(C,c_0,c_s)$.
The recursion principle of $\nat$ yields a function $f:\nat\to C$ defined by
\begin{align*}
f(0) &\defeq c_0\\
f(\suc(n)) &\defeq c_s(f(n)).
\end{align*}
These two equalities make $f$ an $\nat$-homomorphism, which we can take as the center of contraction for $\nhom(\nat,C)$.
The uniqueness theorem (\cref{thm:nat-uniq}) then implies that any other $\nat$-homomorphism is equal to $f$.
\end{proof}
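A sketch of this proof's center of contraction in Lean~4 (names ours): for any $\nat$-algebra, recursion yields a homomorphism, and here the two defining equalities even hold by \texttt{rfl}.
\begin{verbatim}
structure NatAlg where
  C  : Type
  c0 : C
  cs : C → C

def fold (A : NatAlg) : Nat → A.C
  | 0     => A.c0
  | n + 1 => A.cs (fold A n)

example (A : NatAlg) : fold A 0 = A.c0 := rfl
example (A : NatAlg) (n : Nat) :
    fold A (n + 1) = A.cs (fold A n) := rfl
\end{verbatim}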
To place this in a more general context, it is useful to consider the notion of \emph{algebra for an endofunctor}. \index{algebra!for an endofunctor}
Note that to make a type $C$ into a $\nat$-algebra is the same as to give a function $c:C+\unit\to C$, and a function $f:C\to D$ is a $\nat$-homomorphism just when $f \circ c \htpy d \circ (f+\unit)$.
In categorical language, this means the \nat-algebras are the algebras for the endofunctor $F(X)\defeq X+1$ of the category of types.
\indexsee{functor!polynomial}{endofunctor, polynomial}%
\indexsee{polynomial functor}{endofunctor, polynomial}%
\indexdef{endofunctor!polynomial}%
\index{W-type@$\w$-type!as homotopy-initial algebra}
For a more generic case, consider the $\w$-type associated to $A : \type$ and $B : A \to \type$.
In this case we have an associated \define{polynomial functor}:
\begin{equation}
\label{eq:polyfunc}
P(X) = \sm{x : A} (B(x) \rightarrow X).
\end{equation}
Actually, this assignment is functorial only up to homotopy, but this makes no difference in what follows.
By definition, a \define{$P$-algebra}
\indexdef{algebra!for a polynomial functor}%
\indexdef{algebra!W-@$\w$-}%
is then a type $C$ equipped with a function $s_C : PC \rightarrow C$.
By the universal property of $\Sigma$-types, this is equivalent to giving a function $\prd{a:A} (B(a) \to C) \to C$.
We will also call such objects \define{$\w$-algebras}
\indexdef{W-algebra@$\w$-algebra}%
for $A$ and $B$, and we write
\symlabel{walg}
\begin{equation*}
\walg(A,B) \defeq \sm {C : \type} \prd{a:A} (B(a) \to C) \to C.
\end{equation*}
Similarly, for $P$-algebras $(C,s_C)$ and $(D,s_D)$, a \define{homomorphism}
\indexdef{homomorphism!of algebras for a functor}%
between them $(f, s_f) : (C, s_C) \rightarrow (D, s_D)$ consists of a function $f : C \rightarrow D$ and a homotopy between maps $PC \rightarrow D$
\[
s_f : f \circ s_C \, = s_{D} \circ Pf,
\]
where $Pf : PC\rightarrow PD$ is the result of the easily-definable action of $P$ on $f: C \rightarrow D$. Such an algebra homomorphism can be represented suggestively in the form:
\[
\xymatrix{
PC \ar[d]_{s_C} \ar[r]^{Pf} \ar@{}[dr]|{s_f} & PD \ar[d]^{s_D}\\
C \ar[r]_{f} & D }
\]
In terms of elements, $f$ is a $P$-homomorphism (or \define{$\w$-homomorphism}) if
\indexdef{W-homomorphism@$\w$-homomorphism}%
\indexdef{homomorphism!W-@$\w$-}%
\[f(s_C(a,h)) = s_D(a,f \circ h).\]
We have the type of $\w$-homomorphisms:
\symlabel{whom}
\begin{equation*}
\whom_{A,B}((C, s_C),(D,s_D)) \defeq \sm{f : C \to D} \prd{a:A}{h:B(a)\to C} \id{f(s_C(a,h))}{s_D(a, f\circ h)}
\end{equation*}
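For concreteness, here is a sketch of these notions in Lean~4 (all names ours), with the homomorphism condition stated pointwise as above:
\begin{verbatim}
def P (A : Type) (B : A → Type) (X : Type) : Type :=
  (a : A) × (B a → X)              -- the polynomial functor

def Pmap {A : Type} {B : A → Type} {X Y : Type} (f : X → Y) :
    P A B X → P A B Y :=
  fun p => ⟨p.1, f ∘ p.2⟩          -- action of P on maps

structure WAlg (A : Type) (B : A → Type) where
  C : Type
  s : P A B C → C                  -- the structure map s_C

def IsHom {A : Type} {B : A → Type} (C D : WAlg A B)
    (f : C.C → D.C) : Prop :=
  ∀ p, f (C.s p) = D.s (Pmap f p)  -- f ∘ s_C = s_D ∘ Pf, pointwise
\end{verbatim}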
\index{universal!property!of $\w$-type}%
Finally, a $P$-algebra $(C, s_C)$ is said to be \define{homotopy-initial}
\indexdef{homotopy-initial!algebra for a functor}%
\indexdef{homotopy-initial!W-algebra@$\w$-algebra}%
if for every $P$-algebra $(D, s_D)$, the type of all algebra homomorphisms $(C, s_C) \rightarrow (D, s_D)$ is contractible.
That is,
\begin{equation*}
\ishinitw(A,B,I) \defeq \prd{C : \walg(A,B)} \iscontr(\whom_{A,B}(I,C)).
\end{equation*}
%
Now the analogous theorem to \cref{thm:nat-hinitial} is:
\begin{thm}\label{thm:w-hinit}
For any type $A : \type$ and type family $B : A \to \type$, the $\w$-algebra $(\wtype{x:A}B(x), \supp)$ is h-initial.
\end{thm}
\begin{proof}[Sketch of proof]
Suppose we have $A : \type$ and $B : A \to \type$,
and consider the associated polynomial functor $P(X)\defeq\sm{x:A}(B(x)\to X)$.
Let $W \defeq \wtype{x:A}B(x)$. Then using
the $\w$-introduction rule from \cref{sec:w-types}, we have a structure map $s_W\defeq\supp: PW \rightarrow W$.
We want to show that the algebra $(W, s_W)$ is
h-initial. So, let us consider another algebra $(C,s_C)$ and show that the type $T\defeq \whom_{A,B}((W, s_W),(C,s_C)) $
of $\w$-homomorphisms from $(W, s_W)$ to $(C, s_C)$ is contractible. To do
so, observe that the $\w$-elimination rule and the $\w$-computation
rule allow us to define a $\w$-homomorphism $(f, s_f) : (W, s_W) \rightarrow (C, s_C)$,
thus showing that $T$ is inhabited. It is furthermore necessary to show that for every $\w$-ho\-mo\-mor\-phism $(g, s_g) : (W, s_W) \rightarrow (C, s_C)$, there is an identity proof
\begin{equation}
\label{equ:prequired}
p : (f,s_f) = (g,s_g).
\end{equation}
This uses the fact that, in general, a type of the form $(f,s_f) = (g,s_g) $
is equivalent to the type of what we call \define{algebra $2$-cells},
\indexdef{algebra!2-cell}%
whose canonical
elements are pairs of the form $(e, s_e)$, where $e : f=g$ and $s_e$ is a higher identity proof between the identity proofs represented by the following pasting diagrams:
\[
\xymatrix{
PW \ar@/^1pc/[r]^{Pg} \ar[d]_{s_W} \ar@{}[r]_(.52){s_g} & PC \ar[d]^{s_C} \\
W \ar@/^1pc/[r]^g \ar@/_1pc/[r]_f \ar@{}[r]|{e} & C } \qquad
\xymatrix{
PW \ar@/^1pc/[r]^{Pg} \ar[d]_{s_W} \ar@/_1pc/[r]_{Pf} \ar@{}[r]|{Pe}
& PC \ar[d]^{s_C} \\
W \ar@/_1pc/[r]_f \ar@{}[r]^{s_f} & C }
\]
In light of this fact, to prove that there exists an element as in~\eqref{equ:prequired}, it is
sufficient to show that there is an algebra 2-cell
\[
(e,s_e) : (f,s_f) = (g,s_g).
\]
The identity proof $e : f=g$ is now constructed by function extensionality and
$\w$-elimination so as to guarantee the existence of the required identity
proof $s_e$.
\end{proof}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Homotopy-inductive types}
\label{sec:htpy-inductive}
In \cref{sec:w-types} we showed how to encode natural numbers as $\w$-types, with
\begin{align*}
\natw & \defeq \wtype{b:\bool} \rec\bool(\bbU,\emptyt,\unit), \\
\zerow & \defeq \supp(\bfalse, (\lamu{x:\emptyt} \rec\emptyt(\natw,x))), \\
\sucw & \defeq \lamu{n:\natw} \supp(\btrue, (\lamu{x:\unit} n)).
\end{align*}
We also showed how one can define a $\dbl$ function on $\natw$ using the recursion principle.
When it comes to the induction principle, however, this encoding is no longer satisfactory: given $E : \natw \to \type$ and recurrences $e_z : E(\zerow)$ and $e_s : \prd{n : \natw} E(n) \to E(\sucw(n))$, we can only construct a dependent function $r(E,e_z,e_s) : \prd{n : \natw} E(n)$ satisfying the given recurrences \emph{propositionally}, i.e.\ up to a path.
This means that the computation rules for natural numbers, which give judgmental equalities, cannot be derived from the rules for $\w$-types in any obvious way.
\index{type!homotopy-inductive}%
\index{homotopy-inductive type}%
This problem goes away if instead of the conventional inductive types we consider \emph{homotopy-inductive types}, where all computation rules are stated up to a path, i.e.\ the symbol $\jdeq$ is replaced by $=$. For instance, the computation rule for the homotopy version of $\w$-types $\mathsf{W^h}$ becomes:
\index{computation rule!propositional}%
\begin{itemize}
\item For any $a : A$ and $f : B(a) \to \wtypeh{x:A} B(x)$ we have
\begin{equation*}
\rec{\wtypeh{x:A} B(x)}(E,e,\supp(a,f)) = e\Big(a,f,\big(\lamu{b:B(a)} \rec{\wtypeh{x:A} B(x)}(E,e,f(b))\big)\Big)
\end{equation*}
\end{itemize}
Homotopy-inductive types have an obvious disadvantage when it comes to computational properties --- the behavior of any function constructed using the induction principle can now only be characterized propositionally.
But numerous other considerations drive us to consider homotopy-inductive types as well.
For instance, while we showed in \cref{sec:initial-alg} that inductive types are homotopy-initial algebras, not every homotopy-initial algebra is an inductive type (i.e.\ satisfies the corresponding induction principle) --- but every homotopy-initial algebra \emph{is} a homotopy-inductive type.
Similarly, we might want to apply the uniqueness argument from \cref{sec:appetizer-univalence} when one (or both) of the types involved is only a homotopy-inductive type --- for instance, to show that the $\w$-type encoding of $\nat$ is equivalent to the usual $\nat$.
Additionally, the notion of a homotopy-inductive type is now internal to the type theory.
For example, this means we can form a type of all natural numbers objects and make assertions about it.
In the case of $\w$-types, we can characterize a homotopy $\w$-type $\wtype{x:A} B(x)$ as any type endowed with a supremum function and an induction principle satisfying the appropriate (propositional) computation rule:
\begin{multline*}
\w_d(A,B) \defeq \sm{W : \type}
\sm{\supp : \prd {a} (B(a) \to W) \to W}
\prd{E : W \to \type} \\
\prd{e : \prd{a,f} (\prd{b : B(a)} E(f(b))) \to E(\supp(a,f))}
\sm{\ind{} : \prd{w : W} E(w)}
\prd{a,f} \\
\ind{}(\supp(a,f)) = e(a,f,\lamu{b:B(a)} \ind{}(f(b))).
\end{multline*}
In \cref{cha:hits} we will see some other reasons why propositional computation rules are worth considering.
In this section, we will state some basic facts about homotopy-inductive types.
We omit most of the proofs, which are somewhat technical.
\begin{thm}
For any $A : \type$ and $B : A \to \type$, the type $\w_d(A,B)$ is a mere proposition.
\end{thm}
It turns out that there is an equivalent characterization of $\w$-types using a recursion principle, plus certain \emph{uniqueness} and \emph{coherence} laws. First we give the recursion principle:
%
\begin{itemize}
\item When constructing a function from the $\w$-type $\wtypeh{x:A} B(x)$ into the type $C$, it suffices to give its value for $\supp(a,f)$, assuming we are given the values of all $f(b)$ with $b : B(a)$.
In other words, it suffices to construct a function
\begin{equation*}
c : \prd{a:A} (B(a) \to C) \to C.
\end{equation*}
\end{itemize}
\index{computation rule!propositional}%
The associated computation rule for $\rec{\wtypeh{x:A} B(x)}(C,c) : (\wtypeh{x:A} B(x)) \to C$ is as follows:
\begin{itemize}
\item For any $a : A$ and $f : B(a) \to \wtypeh{x:A} B(x)$ we have
a witness $\beta(C,c,a,f)$ for the equality
\begin{equation*}
\rec{\wtypeh{x:A} B(x)}(C,c,\supp(a,f)) =
c(a,\lamu{b:B(a)} \rec{\wtypeh{x:A} B(x)}(C,c,f(b))).
\end{equation*}
\end{itemize}
Furthermore, we assert the following uniqueness principle, saying that any two functions defined by the same recurrence are equal:
\index{uniqueness!principle, propositional!for homotopy W-types@for homotopy $\w$-types}%
\begin{itemize}
\item Let $C : \type$ and $c : \prd{a:A} (B(a) \to C) \to C$ be given. Let $g,h : (\wtypeh{x:A} B(x)) \to C$ be two functions which satisfy the recurrence $c$ up to propositional equality, i.e., such that we have
\begin{align*}
\beta_g &: \prd{a,f} \id{g(\supp(a,f))}{c(a,\lamu{b: B(a)} g(f(b)))}, \\
\beta_h &: \prd{a,f} \id{h(\supp(a,f))}{c(a,\lamu{b: B(a)} h(f(b)))}.
\end{align*}
Then $g$ and $h$ are equal, i.e.\ there is $\alpha(C,c,g,h,\beta_g,\beta_h)$ of type $g = h$.
\end{itemize}
\index{coherence}%
Recall that when we have an induction principle rather than only a recursion principle, this propositional uniqueness principle is derivable (\cref{thm:w-uniq}).
But with only recursion, the uniqueness principle is no longer derivable --- and in fact, the statement is not even true (exercise). Hence, we postulate it as an axiom.
We also postulate the following coherence\index{coherence} law, which tells us how the proof of uniqueness behaves on canonical elements:
\begin{itemize}
\item
For any $a : A$ and $f : B(a) \to \wtypeh{x:A} B(x)$, the following diagram commutes propositionally:
\[\xymatrix{
g(\supp(a,f)) \ar_{\alpha(\supp(a,f))}[d] \ar^-{\beta_g}[r] & c(a,\lamu{b:B(a)} g(f(b)))
\ar^{c(a,\blank)(\funext (\lam{b} \alpha(f(b))))}[d] \\
h(\supp(a,f)) \ar_-{\beta_h}[r] & c(a,\lamu{b: B(a)} h(f(b))) \\
}\]
where $\alpha$ abbreviates the path $\alpha(C,c,g,h,\beta_g,\beta_h) : g = h$.
\end{itemize}
Putting all of this data together yields another characterization of $\wtype{x:A} B(x)$, as a type with a supremum function, satisfying simple elimination, computation, uniqueness, and coherence rules:
\begin{multline*}
\w_s(A,B) \defeq \sm{W : \type}
\sm{\supp : \prd {a} (B(a) \to W) \to W}
\prd{C : \type}
\prd{c : \prd{a} (B(a) \to C) \to C}\\
\sm{\rec{} : W \to C}
\sm{\beta : \prd{a,f} \rec{}(\supp(a,f)) = c(a,\lamu{b: B(a)} \rec{}(f(b)))} \narrowbreak
\prd{g : W \to C}
\prd{h : W \to C}
\prd{\beta_g : \prd{a,f} g(\supp(a,f)) = c(a,\lamu{b: B(a)} g(f(b)))} \\
\prd{\beta_h : \prd{a,f} h(\supp(a,f)) = c(a,\lamu{b: B(a)} h(f(b)))}
\sm{\alpha : \prd {w : W} g(w) = h(w)} \\
\alpha(\supp(a,f)) \ct \beta_h = \beta_g \ct c(a,\blank)(\funext \; \lam{b} \alpha(f(b)))
\end{multline*}
\begin{thm}
For any $A : \type$ and $B : A \to \type$, the type $\w_s (A,B)$ is a mere proposition.
\end{thm}
Finally, we have a third, very concise characterization of $\wtype{x:A} B(x)$ as an h-initial $\w$-algebra:
\begin{equation*}
\w_h(A,B) \defeq \sm{I : \walg(A,B)} \ishinitw(A,B,I).
\end{equation*}
\begin{thm}
For any $A : \type$ and $B : A \to \type$, the type $\w_h (A,B)$ is a mere proposition.
\end{thm}
It turns out that all three characterizations of $\w$-types are in fact equivalent:
\begin{lem}\label{lem:homotopy-induction-times-3}
For any $A : \type$ and $B : A \to \type$, we have
\[ \w_d(A,B) \eqvsym \w_s(A,B) \eqvsym \w_h(A,B). \]
\end{lem}
Indeed, we have the following theorem, which is an improvement over \cref{thm:w-hinit}:
\begin{thm}
The types satisfying the formation, introduction, elimination, and propositional computation rules for $\w$-types are precisely the homotopy-initial $\w$-algebras.
\end{thm}
%%%%%
\begin{proof}[Sketch of proof]
%%%%%
Inspecting the proof of \cref{thm:w-hinit}, we see that only the \emph{propositional} computation rule was required to establish the h-initiality of $\wtype{x:A}B(x)$.
For the converse implication, let us assume that the polynomial functor associated
to $A : \type$ and $B : A \to \UU$ has an h-initial algebra $(W,s_W)$; we show that $W$ satisfies the propositional rules of $\w$-types.
The $\w$-introduction rule is simple; namely, for $a : A$ and $t : B(a) \rightarrow W$, we define $\supp(a,t) : W$ to be the
result of applying the structure map $s_W : PW \rightarrow W$ to $(a,t) : PW$.
For the $\w$-elimination rule, let us assume its premisses and in particular that $C' : W \to \type$.
Using the other premisses, one shows that the type $C \defeq \sm{ w : W} C'(w)$
can be equipped with a structure map $s_C : PC \rightarrow C$. By the h-initiality of $W$,
we obtain an algebra homomorphism $(f, s_f) : (W, s_W) \rightarrow (C, s_C)$. Furthermore,
the first projection $\proj1 : C \rightarrow W$ can be equipped with the structure of a
homomorphism, so that we obtain a diagram of the form
\[
\xymatrix{
PW \ar[r]^{Pf} \ar[d]_{s_W} & PC \ar[d]^{s_C} \ar[r]^{P \proj1} & PW \ar[d]^{s_W} \\
W \ar[r]_f & C \ar[r]_{\proj1} & W.}
\]
But the identity function $1_W : W \rightarrow W$ has a canonical structure of an
algebra homomorphism and so, by the contractibility of the type of homomorphisms
from $(W,s_W)$ to itself, there must be an identity proof between the composite
of $(f,s_f)$ with $(\proj1, s_{\proj1})$ and $(1_W, s_{1_W})$. This implies, in particular,
that there is an identity proof $p : \proj1 \circ f = 1_W$.
Since $(\proj2 \circ f) w : C'( (\proj1 \circ f) w)$, we can define
\[
\rec{}(w,c) \defeq
p_{\, * \,}( ( \proj2 \circ f) w ) : C'(w)
\]
where the transport $p_{\, * \,}$ is with respect to the family
\[
\lamu{u}C'\circ u : (W\to W)\to W\to \UU.
\]
The verification of the propositional $\w$-computation rule is a calculation,
involving the naturality properties of operations of the form $p_{\, * \,}$.
\end{proof}
%%%%%
\index{natural numbers!encoded as a W-type@encoded as a $\w$-type}%
Finally, as desired, we can encode homotopy-natural-numbers as homo\-topy-$\w$-types:
\begin{thm}
The rules for natural numbers with propositional computation rules can be derived from the rules for $\w$-types with propositional computation rules.
\end{thm}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{The general syntax of inductive definitions}
\label{sec:strictly-positive}
\index{type!inductive|(}%
\indexsee{inductive!type}{type, inductive}%
So far, we have been discussing only particular inductive types: $\emptyt$, $\unit$, $\bool$, $\nat$, coproducts, products, $\Sigma$-types, $\w$-types, etc.
However, an important aspect of type theory is the ability to define \emph{new} inductive types, rather than being restricted only to some particular fixed list of them.
In order to be able to do this, however, we need to know what sorts of ``inductive definitions'' are valid or reasonable.
To see that not everything which ``looks like an inductive definition'' makes sense, consider the following ``constructor'' of a type $C$:
\begin{itemize}
\item $g:(C\to \nat) \to C$.
\end{itemize}
The recursion principle for such a type $C$ ought to say that given a type $P$, in order to construct a function $f:C\to P$, it suffices to consider the case when the input $c:C$ is of the form $g(\alpha)$ for some $\alpha:C\to\nat$.
Moreover, we would expect to be able to use the ``recursive data'' of $f$ applied to $\alpha$ in some way.
However, it is not at all clear how to ``apply $f$ to $\alpha$'', since both are functions with domain $C$.
We could write down a ``recursion principle'' for $C$ by just supposing (unjustifiably) that there is some way to apply $f$ to $\alpha$ and obtain a function $P\to\nat$.
Then the input to the recursion rule would ask for a type $P$ together with a function
\begin{equation}
h:(C\to\nat) \to (P\to\nat) \to P\label{eq:fake-recursor}
\end{equation}
where the two arguments of $h$ are $\alpha$ and ``the result of applying $f$ to $\alpha$''.
However, what would the computation rule for the resulting function $f:C\to P$ be?
Looking at other computation rules, we would expect something like ``$f(g(\alpha)) \jdeq h(\alpha,f(\alpha))$'' for $\alpha:C\to\nat$, but as we have seen, ``$f(\alpha)$'' does not make sense.
The induction principle of $C$ is even more problematic; it is not even clear how to write down the hypotheses.
On the other hand, we could write down a different ``recursion principle'' for $C$ by ignoring the ``recursive'' presence of $C$ in the domain of $\alpha$, considering it as merely an indexing type for a family of natural numbers.
In this case the input would ask for a type $P$ together with a function
\begin{equation*}
h:(C\to \nat) \to P,
\end{equation*}
so the type of the recursion principle would be $\rec{C}:\prd{P:\UU} ((C\to \nat) \to P) \to C\to P$, and similarly for the induction principle.
Now it is possible to write down a computation rule, namely $\rec{C}(P,h,g(\alpha))\jdeq h(\alpha)$.
However, the existence of a type $C$ with this recursor and computation rule turns out to be inconsistent.
See \cref{ex:loop,ex:loop2,ex:inductive-lawvere,ex:ilunit} for proofs of this and other variations.
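Proof assistants enforce restrictions of exactly this kind.
For instance, the following Lean~4 declaration (shown commented out, since it does not compile) is refused by the positivity checker:
\begin{verbatim}
-- Rejected: C occurs to the left of an arrow in the domain of its
-- own constructor, a "non positive occurrence".
-- inductive C : Type where
--   | g : (C → Nat) → C
\end{verbatim}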
This example suggests one restriction on inductive definitions: the domains of all the constructors must be \emph{covariant functors}\index{functor!covariant}\index{covariant functor} of the type being defined, so that we can ``apply $f$ to them'' to get the result of the ``recursive call''.
In other words, if we replace all occurrences of the type being defined with a variable
\index{variable!type}%
$X:\type$, then each domain of a constructor
\index{domain!of a constructor}%
must be an expression that can be made into a covariant functor of $X$.
This is the case for all the examples we have considered so far.
For instance, with the constructor $\inl:A\to A+B$, the relevant functor is constant at $A$ (i.e.\ $X\mapsto A$), while for the constructor $\suc:\nat\to\nat$, the functor is the identity functor ($X\mapsto X$).
However, this necessary condition is also not sufficient.
Covariance prevents the inductive type from occurring on the left of a single function type, as in the argument $C\to\nat$ of the ``constructor'' $g$ considered above, since this yields a contravariant\index{functor!contravariant}\index{contravariant functor} functor rather than a covariant one.
However, since the composite of two contravariant functors is covariant, \emph{double} function types such as $((X\to \nat)\to \nat)$ are once again covariant.
This enables us to reproduce Cantorian-style paradoxes\index{paradox}.
For instance, consider an ``inductive type'' $D$ with the following constructor:
\begin{itemize}
\item $k:((D\to\prop)\to\prop)\to D$.
\end{itemize}
Assuming such a type exists, we define functions
\begin{align*}
r&:D\to (D\to\prop)\to\prop,\\
f&:(D\to\prop) \to D,\\
p&:(D\to \prop) \to (D\to\prop)\to \prop,\\
\intertext{by}
r(k(\theta)) &\defeq \theta,\\
f(\delta) &\defeq k(\lam{x} (x=\delta)),\\
p(\delta) &\defeq \lam{x} \delta(f(x)).
\end{align*}
Here $r$ is defined by the recursion principle of $D$, while $f$ and $p$ are defined explicitly.
Then for any $\delta:D\to\prop$, we have $r(f(\delta)) = \lam{x}(x=\delta)$.
In particular, therefore, if $f(\delta)=f(\delta')$, then we have a path $s:(\lam{x}(x=\delta)) = (\lam{x}(x=\delta'))$.
Thus, $\happly(s,\delta) : (\delta=\delta) = (\delta=\delta')$, and so in particular $\delta=\delta'$ holds.
Hence, $f$ is ``injective'' (although \emph{a priori} $D$ may not be a set).
This already sounds suspicious --- we have an ``injection'' of the ``power set''\index{power set} of $D$ into $D$ --- and with a little more work we can massage it into a contradiction.
Suppose given $\theta:(D\to\prop)\to\prop$, and define $\delta:D\to\prop$ by
\begin{equation}
\delta(d) \defeq \exis{\gamma:D\to\prop} (f(\gamma) = d) \times \theta(\gamma).\label{eq:Pinj}
\end{equation}
We claim that $p(\delta)=\theta$.
By function extensionality, it suffices to show $p(\delta)(\gamma) =_\prop \theta(\gamma)$ for any $\gamma:D\to\prop$.
And by univalence, for this it suffices to show that each implies the other.
Now by definition of $p$, we have
\begin{align*}
p(\delta)(\gamma) &\jdeq \delta(f(\gamma))\\
&\jdeq \exis{\gamma':D\to\prop} (f(\gamma') = f(\gamma)) \times \theta(\gamma').
\end{align*}
Clearly this holds if $\theta(\gamma)$, since we may take $\gamma'\defeq \gamma$.
On the other hand, if we have $\gamma'$ with $f(\gamma') = f(\gamma)$ and $\theta(\gamma')$, then $\gamma'=\gamma$ since $f$ is injective, hence also $\theta(\gamma)$.
This completes the proof that $p(\delta)=\theta$.
Thus, every element $\theta:(D\to\prop)\to\prop$ is the image under $p$ of some element $\delta:D\to\prop$.
However, if we define $\theta$ by a classic diagonalization:
\[ \theta(\gamma) \defeq \neg p(\gamma)(\gamma) \quad\text{for all $\gamma:D\to\prop$} \]
then from $\theta = p(\delta)$ we deduce $p(\delta)(\delta) = \neg p(\delta)(\delta)$.
This is a contradiction: no proposition can be equivalent to its negation.
(Supposing $P\Leftrightarrow \neg P$, if $P$, then $\neg P$, and so $\emptyt$; hence $\neg P$, but then $P$, and so $\emptyt$.)
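The last step is simple enough to check formally.
Here is a minimal sketch in Lean-style syntax (the lemma name is our own), verifying that a proposition equivalent to its negation yields a contradiction:
\begin{verbatim}
-- from h : P <-> Not P we derive Not P, then P, then False
theorem no_prop_iff_its_negation (P : Prop) (h : Iff P (Not P)) : False :=
  have np : Not P := fun p => h.mp p p
  np (h.mpr np)
\end{verbatim}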
\begin{rmk}
There is a question of universe size to be addressed.
In general, an inductive type must live in a universe that already contains all the types going into its definition.
Thus if in the definition of $D$, the ambiguous notation \prop means $\prop_{\UU}$, then we do not have $D:\UU$ but only $D:\UU'$ for some larger universe $\UU'$ with $\UU:\UU'$.
\index{mathematics!predicative}%
\indexsee{impredicativity}{mathematics, predicative}%
\indexsee{predicative mathematics}{mathematics, predicative}%
In a predicative theory, therefore, the right-hand side of~\eqref{eq:Pinj} lives in $\prop_{\UU'}$, not $\prop_\UU$.
So this contradiction does require the propositional resizing axiom
\index{propositional!resizing}%
mentioned in \cref{subsec:prop-subsets}.
\end{rmk}
\index{consistency}%
This counterexample suggests that we should ban an inductive type from ever appearing on the left of an arrow in the domain of its constructors, even if that appearance is nested in other arrows so as to eventually become covariant.
(Similarly, we also forbid it from appearing in the domain of a dependent function type.)
This restriction is called \define{strict positivity}
\indexdef{strict!positivity}%
\indexsee{positivity, strict}{strict positivity}%
(ordinary ``positivity'' being essentially covariance), and it turns out to suffice.
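Proof assistants enforce strict positivity mechanically.
For instance, a Lean-style rendering of the type $D$ from the paradox above (shown here in comments, since the definition is rejected by the positivity checker) fails precisely because $D$ occurs to the left of an arrow in the domain of $k$:
\begin{verbatim}
-- rejected: D occurs negatively in the domain of its own constructor
-- inductive D : Type where
--   | k : ((D -> Prop) -> Prop) -> D
\end{verbatim}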
\index{constructor}%
In conclusion, therefore, a valid inductive definition of a type $W$ consists of a list of \emph{constructors}.
Each constructor is assigned a type that is a function type taking some number (possibly zero) of inputs (possibly dependent on one another) and returning an element of $W$.
Finally, we allow $W$ itself to occur in the input types of its constructors, but only strictly positively.
This essentially means that each argument of a constructor is either a type not involving $W$, or some iterated function type with codomain $W$.
For instance, the following is a valid constructor type:
\begin{equation}
c:(A\to W) \to (B\to C \to W) \to D \to W \to W.\label{eq:example-constructor}
\end{equation}
All of these function types can also be dependent functions ($\Pi$-types).%
\footnote{In the language of \cref{sec:initial-alg}, the condition of strict positivity ensures that the relevant endofunctor is polynomial.\indexfoot{endofunctor!polynomial}\indexfoot{algebra!initial}\indexsee{algebra!initial}{homotopy-initial} It is well-known in category theory that not \emph{all} endofunctors can have initial algebras; restricting to polynomial functors ensures consistency.
One can consider various relaxations of this condition, but in this book we will restrict ourselves to strict positivity as defined here.}
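To make this concrete, here is how the constructor type~\eqref{eq:example-constructor} might be rendered in a proof assistant; the following minimal sketch uses Lean-style syntax with the names from the text.
Every occurrence of \texttt{W} is strictly positive, so the definition is accepted.
\begin{verbatim}
inductive W (A B C D : Type) : Type where
  | c : (A -> W A B C D) -> (B -> C -> W A B C D) ->
        D -> W A B C D -> W A B C D
\end{verbatim}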
Note that we require an inductive definition to be given by a \emph{finite} list of constructors.
This is simply because we have to write it down on the page.
If we want an inductive type which behaves as if it has an infinite number of constructors, we can simply parametrize one constructor by some infinite type.
For instance, a constructor such as $\nat \to W \to W$ can be thought of as equivalent to countably many constructors of the form $W\to W$.
(Of course, the infinity is now \emph{internal} to the type theory, but this is as it should be for any foundational system.)
Similarly, if we want a constructor that takes ``infinitely many arguments'', we can allow it to take a family of arguments parametrized by some infinite type, such as $(\nat\to W) \to W$ which takes an infinite sequence\index{sequence} of elements of $W$.
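A classic instance of the latter is a type of infinitely branching well-founded trees, sometimes called the \emph{Brouwer ordinals}.
The following sketch uses Lean-style syntax with our own names; the constructor \texttt{sup} takes a whole $\nat$-indexed family of elements.
\begin{verbatim}
inductive Ord : Type where
  | zero : Ord
  | succ : Ord -> Ord
  | sup  : (Nat -> Ord) -> Ord  -- one argument for every natural number
\end{verbatim}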
\index{recursion principle!for an inductive type}%
Now, once we have such an inductive definition, what can we do with it?
Firstly, there is a \define{recursion principle} stating that in order to define a function $f:W\to P$, it suffices to consider the case when the input $w:W$ arises from one of the constructors, allowing ourselves to recursively call $f$ on the inputs to that constructor.
For the example constructor~\eqref{eq:example-constructor}, we would require $P$ to be equipped with a function of type
\begin{narrowmultline}\label{eq:example-rechyp}
d : (A\to W) \to (A\to P) \to (B\to C\to W) \to
\narrowbreak
(B\to C \to P) \to D \to W \to P \to P.
\end{narrowmultline}
Under these hypotheses, the recursion principle yields $f:W\to P$, which moreover ``preserves the constructor data'' in the evident way --- this is the computation rule, where we use covariance of the inputs.
\index{computation rule!for inductive types}%
For instance, in the example~\eqref{eq:example-constructor}, the computation rule says that for any $\alpha:A\to W$, $\beta:B\to C\to W$, $\delta:D$, and $\omega:W$, we have
\begin{equation}
f(c(\alpha,\beta,\delta,\omega)) \jdeq d(\alpha,f\circ \alpha,\beta, f\circ \beta, \delta, \omega,f(\omega)).\label{eq:example-comp}
\end{equation}
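In a proof assistant, a definition by pattern matching unfolds to exactly this recursion principle, and the computation rule~\eqref{eq:example-comp} holds by definition.
Here is a minimal sketch in Lean-style syntax, repeating the rendering of \texttt{W} from above and abstracting over the data $d$ of~\eqref{eq:example-rechyp} (the ASCII names \texttt{al}, \texttt{be}, \texttt{de}, \texttt{om} stand for $\alpha,\beta,\delta,\omega$):
\begin{verbatim}
inductive W (A B C D : Type) : Type where
  | c : (A -> W A B C D) -> (B -> C -> W A B C D) ->
        D -> W A B C D -> W A B C D

-- the right-hand side mirrors the computation rule: recursive
-- calls to f are made through al, be, and om
def f {A B C D P : Type}
    (d : (A -> W A B C D) -> (A -> P) ->
         (B -> C -> W A B C D) -> (B -> C -> P) ->
         D -> W A B C D -> P -> P) :
    W A B C D -> P
  | .c al be de om =>
      d al (fun a => f d (al a))
        be (fun b c => f d (be b c))
        de om (f d om)
\end{verbatim}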
% As we have before in particular cases, when defining a particular function $f$, we may write these rules with $\defeq$ as a way of specifying the data $d$, and say that $f$ is defined by them.
\index{induction principle!for an inductive type}%
The \define{induction principle} for a general inductive type $W$ is only a little more complicated.
Of course, we start with a type family $P:W\to\type$, which we require to be equipped with constructor data ``lying over'' the constructor data of $W$.
That means the ``recursive call'' arguments such as $A\to P$ above must be replaced by dependent functions with types such as $\prd{a:A} P(\alpha(a))$.
In the full example of~\eqref{eq:example-constructor}, the corresponding hypothesis for the induction principle would require
\begin{multline}\label{eq:example-indhyp}
d : \prd{\alpha:A\to W}\Parens{\prd{a:A} P(\alpha(a))} \to \narrowbreak
\prd{\beta:B\to C\to W} \Parens{\prd{b:B}{c:C} P(\beta(b,c))} \to\\
\prd{\delta:D}
\prd{\omega:W} P(\omega) \to
P(c(\alpha,\beta,\delta,\omega)).
\end{multline}
The corresponding computation rule looks identical to~\eqref{eq:example-comp}.
Of course, the recursion principle is the special case of the induction principle where $P$ is a constant family.
As we have mentioned before, the induction principle is also called the \define{eliminator}, and the recursion principle the \define{non-dependent eliminator}.
As discussed in \cref{sec:pattern-matching}, we also allow ourselves to invoke the induction and recursion principles implicitly, writing a definitional equation with $\defeq$ for each expression that would be the hypotheses of the induction principle.
This is called giving a definition by (dependent) \define{pattern matching}.
\index{pattern matching}%
\index{definition!by pattern matching}%
In our running example, this means we could define $f:\prd{w:W} P(w) $ by
\[ f(c(\alpha,\beta,\delta,\omega)) \defeq \cdots \]
where $\alpha:A\to W$ and $\beta:B\to C\to W$ and $\delta:D$ and $\omega:W$ are variables
\index{variable}%
that are bound in the right-hand side.
Moreover, the right-hand side may involve recursive calls to $f$ of the form $f(\alpha(a))$, $f(\beta(b,c))$, and $f(\omega)$.
When this definition is repackaged in terms of the induction principle, we replace such recursive calls by $\bar\alpha(a)$, $\bar\beta(b,c)$, and $\bar\omega$, respectively, for new variables
\begin{align*}
\bar\alpha &: \prd{a:A} P(\alpha(a))\\
\bar\beta &: \prd{b:B}{c:C} P(\beta(b,c))\\
\bar\omega &: P(\omega).
\end{align*}
\symlabel{defn:induction-wtype}%
Then we could write
\[ f \defeq \ind{W}(P,\, \lam{\alpha}{\bar\alpha}{\beta}{\bar\beta}{\delta}{\omega}{\bar\omega} \cdots ) \]
where the second argument to $\ind{W}$ has the type of~\eqref{eq:example-indhyp}.
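In a proof assistant this eliminator is generated automatically from the inductive declaration.
As a sketch in Lean-style syntax (repeating \texttt{W} from above), one can ask for the type of the generated recursor, which has essentially the shape of~\eqref{eq:example-indhyp}, except that the inductive hypotheses are listed after the constructor fields:
\begin{verbatim}
inductive W (A B C D : Type) : Type where
  | c : (A -> W A B C D) -> (B -> C -> W A B C D) ->
        D -> W A B C D -> W A B C D

#check @W.rec  -- prints the dependent eliminator, i.e. ind_W
\end{verbatim}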
We will not attempt to give a formal presentation of the grammar of a valid inductive definition and its resulting induction and recursion principles and pattern matching rules.
This is possible to do (indeed, it is necessary to do if implementing a computer proof assistant), but provides no additional insight.
With practice, one learns to automatically deduce the induction and recursion principles for any inductive definition, and to use them without having to think twice.
\section{Generalizations of inductive types}
\label{sec:generalizations}
\index{type!inductive!generalizations}%
The notion of inductive type has been studied in type theory for many years, and admits many, many generalizations: inductive type families, mutual inductive types, inductive-inductive types, inductive-recur\-sive types, etc.
In this section we give an overview of some of these, a few of which will be used later in the book.
(In \cref{cha:hits} we will study in more depth a very different generalization of inductive types, which is particular to \emph{homotopy} type theory.)
Most of these generalizations involve allowing ourselves to define more than one type by induction at the same time.
One very simple example of this, which we have already seen, is the coproduct $A+B$.
It would be tedious indeed if we had to write down separate inductive definitions for $\nat+\nat$, for $\nat+\bool$, for $\bool+\bool$, and so on every time we wanted to consider the coproduct of two types.
Instead, we make one definition in which $A$ and $B$ are variables standing for types;
\index{variable!type}%
in type theory they are called \define{parameters}.%
\indexdef{parameter!of an inductive definition}
Thus technically speaking, what results from the definition is not a single type, but a family of types $+ : \type\to\type\to\type$, taking two types as input and producing their coproduct.
Similarly, the type $\lst A$ of lists\index{type!of lists} is a family $\lst{\blank}:\type\to\type$ in which the type $A$ is a parameter.
In mathematics, this sort of thing is so obvious as to not be worth mentioning, but we bring it up in order to contrast it with the next example.
Note that each type $A+B$ is \emph{independently} defined inductively, as is each type $\lst A$.
\index{type!family of!inductive}%
\index{inductive!type family}%
By contrast, we might also consider defining a whole type family $B:A\to\type$ by induction \emph{together}.
The difference is that now the constructors may change the index $a:A$, and as a consequence we cannot say that the individual types $B(a)$ are inductively defined, only that the entire family is inductively defined.
\index{type!of vectors}%
\index{vector}%
The standard example is the type of \emph{lists of specified length}, traditionally called \define{vectors}.
We fix a parameter type $A$, and define a type family $\vect n A$, for $n:\nat$, generated by the following constructors:
\begin{itemize}
\item a vector $\nil:\vect 0 A$ of length zero,
\item a function $\cons:\prd{n:\nat} A\to \vect n A \to \vect{\suc (n)} A$.
\end{itemize}
In contrast to lists, vectors (with elements from a fixed type $A$) form a family of types indexed by their length.
While $A$ is a parameter, we say that $n:\nat$ is an \define{index}
\indexdef{index of an inductive definition}%
of the inductive family.
An individual type such as $\vect3A$ is not inductively defined: the constructors which build elements of $\vect3A$ take input from a different type in the family, such as $\cons:A \to \vect2A \to \vect3A$.
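The distinction is visible in the syntax of proof assistants.
In the following Lean-style sketch (the name \texttt{Vect} is ours), the parameter $A$ appears to the left of the colon and is fixed throughout, while the $\nat$-valued index appears to the right and varies from constructor to constructor:
\begin{verbatim}
inductive Vect (A : Type) : Nat -> Type where
  | nil  : Vect A 0
  | cons : {n : Nat} -> A -> Vect A n -> Vect A (n + 1)
\end{verbatim}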
\index{induction principle!for type of vectors}
\index{vector!induction principle for}
In particular, the induction principle must refer to the entire type family as well; thus the hypotheses and the conclusion must quantify over the indices appropriately.
In the case of vectors, the induction principle states that given a type family $C:\prd{n:\nat} \vect n A \to \type$, together with
\begin{itemize}
\item an element $c_\nil : C(0,\nil)$, and
\item a function \narrowequation{c_\cons : \prd{n:\nat}{a:A}{\ell:\vect n A} C(n,\ell) \to C(\suc(n),\cons(a,\ell))}
\end{itemize}
there exists a function $f:\prd{n:\nat}{\ell:\vect n A} C(n,\ell)$ such that
\begin{align*}
f(0,\nil) &\jdeq c_\nil\\
f(\suc(n),\cons(n,a,\ell)) &\jdeq c_\cons(n,a,\ell,f(n,\ell)).
\end{align*}
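To see this principle in use, here is a Lean-style sketch (repeating \texttt{Vect} from above) of the familiar \texttt{map} function; it is the instance of the induction principle with $C(n,\ell)\defeq \vect n B$, $c_\nil\defeq\nil$, and $c_\cons(n,a,\ell,p)\defeq \cons(g(a),p)$.
\begin{verbatim}
inductive Vect (A : Type) : Nat -> Type where
  | nil  : Vect A 0
  | cons : {n : Nat} -> A -> Vect A n -> Vect A (n + 1)

-- the two arms supply c_nil and c_cons; the recursive call on l
-- plays the role of the inductive hypothesis
def Vect.map {A B : Type} {n : Nat} (g : A -> B) : Vect A n -> Vect B n
  | .nil      => .nil
  | .cons a l => .cons (g a) (Vect.map g l)
\end{verbatim}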