
Commit 3aba3bf

fix typo
yukomunakata authored Oct 18, 2024
1 parent 6f135d6 commit 3aba3bf
Showing 1 changed file with 1 addition and 1 deletion.
ch4/self_org/README.md: 1 addition & 1 deletion
@@ -55,7 +55,7 @@ The net result of this self-organizing learning is a *combinatorial* distributed

Another thing to notice in the weights shown in the grid view is that some units are obviously not selective for anything. These "loser" units (also known as "dead" units) were never reliably activated by any input feature, and thus did not experience much learning. It is typically quite important to have such units lying around, because self-organization requires some "elbow room" during learning to sort out the allocation of units to stable correlational features. Having more hidden units also increases the chances of having a large enough range of initial random selectivities to seed the self-organization process. The consequence is that you need to have more units than is minimally necessary, and that you will often end up with leftovers (plus the redundant units mentioned previously).

- From a biological perspective, we know that the cortex does not produce a lot of new cortical neurons in adults, so we conclude that in general there is probably an excess of neural capacity relative to the demands of any given learning context. Thus, it is useful to have these leftover and redundant units, because they constitute a "reserve" that could presumably get activated if new features were later presented to the network (e.g., diagonal lines). We are much more suspicious ofrecisely tuned quantities of hidden units to work properly (more on this later).
+ From a biological perspective, we know that the cortex does not produce a lot of new cortical neurons in adults, so we conclude that in general there is probably an excess of neural capacity relative to the demands of any given learning context. Thus, it is useful to have these leftover and redundant units, because they constitute a "reserve" that could presumably get activated if new features were later presented to the network (e.g., diagonal lines). We are much more suspicious of precisely tuned quantities of hidden units to work properly (more on this later).

# Unique Pattern Statistic

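To make the "dead unit" idea in the diff context above concrete, here is a minimal NumPy sketch that flags hidden units that are almost never strongly active. It is illustrative only: the data, the thresholds, and the `find_dead_units` helper are hypothetical, not part of the ch4/self_org simulation code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: activations of 20 hidden units over 100 input patterns
# (rows = patterns, columns = units), as might be logged during testing.
acts = rng.random((100, 20))
acts[:, [3, 11]] *= 0.02  # silence two units so they behave like "dead" units

def find_dead_units(acts, act_thresh=0.5, reliability=0.05):
    """Return indices of units that are almost never strongly active.

    A unit is flagged when it exceeds `act_thresh` on less than a
    `reliability` fraction of the patterns; such a unit rarely wins the
    competition, so Hebbian learning barely changes its weights.
    Both thresholds are illustrative guesses, not simulation values.
    """
    active_frac = (acts > act_thresh).mean(axis=0)  # per-unit fraction of strong responses
    return np.flatnonzero(active_frac < reliability)

print("dead units:", find_dead_units(acts))  # prints: dead units: [ 3 11]
```

On this fabricated data the two silenced columns are reported as dead; such units keep their initial random weights, which is what lets them serve as the "reserve" described in the changed paragraph.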
