From 3aba3bfef8bc88706ad3b6e5823a94023bb9f3b7 Mon Sep 17 00:00:00 2001
From: Yuko Munakata <58264197+yukomunakata@users.noreply.github.com>
Date: Thu, 17 Oct 2024 21:16:39 -0700
Subject: [PATCH] fix typo

---
 ch4/self_org/README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/ch4/self_org/README.md b/ch4/self_org/README.md
index ef83926..a8fb8b9 100644
--- a/ch4/self_org/README.md
+++ b/ch4/self_org/README.md
@@ -55,7 +55,7 @@ The net result of this self-organizing learning is a *combinatorial* distributed
 
 Another thing to notice in the weights shown in the grid view is that some units are obviously not selective for anything. These "loser" units (also known as "dead" units) were never reliably activated by any input feature, and thus did not experience much learning. It is typically quite important to have such units lying around, because self-organization requires some "elbow room" during learning to sort out the allocation of units to stable correlational features. Having more hidden units also increases the chances of having a large enough range of initial random selectivities to seed the self-organization process. The consequence is that you need to have more units than is minimally necessary, and that you will often end up with leftovers (plus the redundant units mentioned previously).
 
-From a biological perspective, we know that the cortex does not produce a lot of new cortical neurons in adults, so we conclude that in general there is probably an excess of neural capacity relative to the demands of any given learning context. Thus, it is useful to have these leftover and redundant units, because they constitute a "reserve" that could presumably get activated if new features were later presented to the network (e.g., diagonal lines). We are much more suspicious ofrecisely tuned quantities of hidden units to work properly (more on this later).
+From a biological perspective, we know that the cortex does not produce a lot of new cortical neurons in adults, so we conclude that in general there is probably an excess of neural capacity relative to the demands of any given learning context. Thus, it is useful to have these leftover and redundant units, because they constitute a "reserve" that could presumably get activated if new features were later presented to the network (e.g., diagonal lines). We are much more suspicious of precisely tuned quantities of hidden units to work properly (more on this later).
 
 # Unique Pattern Statistic