Commit

Merge pull request #1361 from PrincetonUniversity/devel
Devel
dillontsmith authored Oct 18, 2019
2 parents 6eefbea + 0ba16b0 commit 9b4c27f
Showing 247 changed files with 28,419 additions and 21,626 deletions.
5 changes: 2 additions & 3 deletions .appveyor.yml
@@ -35,9 +35,8 @@ install:
  - pip install --user -U certifi
  - pip install --user git+https://github.com/benureau/leabra.git@master

- # pytorch does not distribute windows packages over pypi.
- # Install it directly, or remove from requirements if not available (win32).
- - if "%ARCH%" == "" (findstr /V torch < dev_requirements.txt > tmp_req && move /Y tmp_req dev_requirements.txt) else (pip install --user torch -f https://download.pytorch.org/whl/cpu/torch_stable.html)
+ # pytorch does not distribute windows packages over pypi. Install it directly.
+ - if not "%ARCH%" == "" (pip install --user torch -f https://download.pytorch.org/whl/cpu/torch_stable.html)

  - pip install --user -e .[dev]

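The removed AppVeyor line did two jobs in one Windows `cmd` one-liner: on 32-bit builds (no prebuilt torch wheel) it filtered `torch` out of the requirements file; otherwise it installed torch from the PyTorch wheel index. A Python sketch of the filtering step (the file contents here are hypothetical, just to make the example self-contained):

```python
# Sketch of "strip torch from dev_requirements.txt when no prebuilt wheel
# exists" -- a portable rewrite of the removed `findstr /V torch` step.
from pathlib import Path

req = Path("dev_requirements.txt")
req.write_text("numpy\ntorch\npytest\n")  # hypothetical contents

# Keep every requirement line that does not mention torch.
kept = [line for line in req.read_text().splitlines() if "torch" not in line]
req.write_text("\n".join(kept) + "\n")

print(req.read_text())
```

The new config drops the filtering entirely and only keeps the direct install for the 64-bit case.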
6 changes: 3 additions & 3 deletions .idea/runConfigurations/Make_HTML.xml

Some generated files are not rendered by default.

64 changes: 26 additions & 38 deletions .travis.yml
@@ -5,53 +5,38 @@ branches:
- /devel-.*/
- /travis.*/

-language: python
+language: shell

-python:
-  - 3.6
-  - 3.7

-os: linux
-dist: xenial
+os:
+  - linux
+  - osx
+dist: bionic

env:
matrix:
- PYTHON=3.6.8
- PYTHON=3.7.4
global:
- PYTHONWARNINGS="ignore::DeprecationWarning"
- PIP_PROGRESS_BAR="off"

# Cache downloaded(built) python packages
# and homebrew downloads
cache:
pip
directories:
- $HOME/.cache/pip
- $HOME/Library/Caches/Homebrew
- $HOME/Library/Caches/pip

addons:
  apt:
    packages:
      - graphviz

-matrix:
-  include:
-    - os: osx
-      python: 3.6
-      language: minimal
-      env: PYTHON=3.6.8
-      # Cache pip and homebrew downloads
-      cache:
-        directories:
-          - $HOME/Library/Caches/Homebrew
-          - $HOME/Library/Caches/pip
-
-    - os: osx
-      python: 3.7
-      language: minimal
-      env: PYTHON=3.7.4
-      # Cache pip and homebrew downloads
-      cache:
-        directories:
-          - $HOME/Library/Caches/Homebrew
-          - $HOME/Library/Caches/pip
+      - python3-pip

before_install:
- |
# OSX Python is not directly supported on travis
# Homebrew doesn't provide older python versions.
# Install it manually from python.org
if [ "$TRAVIS_OS_NAME" == "osx" ]; then
# homebrew plugin is not working correctly
@@ -70,20 +55,23 @@ before_install:
echo "Deploying new python venv"
python3 -m pip install virtualenv
python3 -m venv $HOME/venv
source $HOME/venv/bin/activate

# Upgrade pip
pip install -U pip
elif [ "$TRAVIS_OS_NAME" == "linux" ]; then
sudo apt-get install -y python${PYTHON%.*}-dev python${PYTHON%.*}-venv
python${PYTHON%.*} -m venv $HOME/venv
# Provide fake xdg-open
echo "#!/bin/sh" > $HOME/venv/bin/xdg-open
chmod +x $HOME/venv/bin/xdg-open
fi
- source $HOME/venv/bin/activate

# Upgrade pip
- pip install -U pip
- python --version
- pip --version

install:
- pip install coveralls
- pip install git+https://github.com/benureau/leabra.git@master
# Travis bundles pytest 4.3.1 in Linux Xenial, new pytest-xdist
# requires pytest>=4.4.0, and the dependencies are broken
- pip install -U pytest
- pip install -e .[dev]


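The `before_install` block above plants a fake `xdg-open` inside the venv's `bin/` so that anything shelling out to open a browser succeeds silently on a headless CI machine. A self-contained sketch of the same trick (the stub directory here is a hypothetical temp location rather than a venv):

```python
# Sketch of the "fake xdg-open" trick: a no-op shell stub placed ahead of the
# real binary on PATH makes browser-opening calls exit 0 without doing anything.
import os
import stat
import subprocess
import tempfile

stub_dir = tempfile.mkdtemp()
stub = os.path.join(stub_dir, "xdg-open")
with open(stub, "w") as f:
    f.write("#!/bin/sh\n")  # accepts any arguments, does nothing, exits 0
os.chmod(stub, os.stat(stub).st_mode | stat.S_IEXEC)

# Prepend the stub directory so it shadows any real xdg-open.
env = dict(os.environ, PATH=stub_dir + os.pathsep + os.environ.get("PATH", ""))
result = subprocess.run(["xdg-open", "https://example.com"], env=env)
print(result.returncode)  # 0
```

The same idea works for any CI-hostile helper binary: shadow it with an executable no-op earlier on PATH.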
14 changes: 7 additions & 7 deletions CONVENTIONS.md
@@ -26,10 +26,10 @@ Made up of two types of classes:
- **LearningProjection**
- **ControlProjection**
- **GatingProjection**
-  - *State*
-    - **InputState**
-    - **ParameterState**
-    - **OutputState**
+  - *Port*
+    - **InputPort**
+    - **ParameterPort**
+    - **OutputPort**
- *ModulatorySignal*
- **LearningSignal**
- **ControlSignal**
@@ -65,7 +65,7 @@ Extensions of Core objects
- Component names always end in their type (e.g., TransferMechanism, LearningProjection)
(the only exception is the DDM)
- Components and Compositions should *always* be referred to in caps
- (e.g., All Mechanisms have Projections; the receiver for a Projection is an InputState; etc.).
+ (e.g., All Mechanisms have Projections; the receiver for a Projection is an InputPort; etc.).

#### Format:
- class names:
@@ -233,8 +233,8 @@ Terminology used here:
the module must be explicitly referenced (e.g., `ControlMechanism <ControlMechanism>`);
[this appears to be redundant, but it is necessary]
- conversely, for classes without subclasses, the title in the rst file is singular;
- therefore, to refer to the plural of such a class (e.g., InputState),
- the module must be explicitly referenced (e.g., `InputStates <InputState>`);
+ therefore, to refer to the plural of such a class (e.g., InputPort),
+ the module must be explicitly referenced (e.g., `InputPorts <InputPort>`);
- to flag references to sections that have not yet been documented (or labelled),
use the following construction: `section <LINK>` (so that <LINK> can be searched for replace these later).

@@ -21,7 +21,7 @@ WORD_OUTPUT_LAYER = pnl.RecurrentTransferMechanism(size = 3,
# integrator_function= pnl.InteractiveActivation(rate = 0.0015, decay=0.0, offset=-6),
name='WORD OUTPUT LAYER')
WORD_OUTPUT_LAYER.set_log_conditions('value')
-WORD_OUTPUT_LAYER.set_log_conditions('InputState-0')
+WORD_OUTPUT_LAYER.set_log_conditions('InputPort-0')



4 changes: 2 additions & 2 deletions Scripts/Debug/Markus Stroop.py
@@ -201,8 +201,8 @@ def trial_dict(red_color, green_color, red_word, green_word, CN, WR):
# CREATE THRESHOLD FUNCTION
# first value of DDM's value is DECISION_VARIABLE
def pass_threshold(mech1, mech2, thresh):
-    results1 = mech1.output_states[0].value
-    results2 = mech2.output_states[0].value
+    results1 = mech1.output_ports[0].value
+    results2 = mech2.output_ports[0].value
for val in results1:
if val >= thresh:
return True
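The recurring rename in these script diffs is `output_states` → `output_ports` (and likewise for inputs). A self-contained sketch of the `pass_threshold` check above against the renamed attribute; `Port` and `Mech` are hypothetical stand-ins for PsyNeuLink objects, just to make the access pattern runnable:

```python
# Minimal stand-ins mimicking the renamed attribute access:
# mechanism.output_ports[0].value holds the port's current value.
class Port:
    def __init__(self, value):
        self.value = value

class Mech:
    def __init__(self, values):
        self.output_ports = [Port(values)]

def pass_threshold(mech1, mech2, thresh):
    # True if any value on either mechanism's first output port
    # has reached the threshold.
    results1 = mech1.output_ports[0].value
    results2 = mech2.output_ports[0].value
    return any(v >= thresh for v in results1) or any(v >= thresh for v in results2)

print(pass_threshold(Mech([0.2, 0.9]), Mech([0.1]), 0.8))  # True
```

Script code written against the old `output_states` attribute fails with an `AttributeError` after this commit, which is why every call site in the Debug scripts is touched.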
10 changes: 5 additions & 5 deletions Scripts/Debug/Predator-Prey Sebastian.py
@@ -88,7 +88,7 @@ def new_episode():
initial_observation = ddqn_agent.env.reset()
new_episode_flag = True

-# Initialize both states to veridical state based on first observation
+# Initialize both ports to veridical state based on first observation
perceptual_state = veridical_state = ddqn_agent.buffer.next(initial_observation, is_new_episode=True)

def get_optimal_action(observation):
@@ -143,7 +143,7 @@ def get_action(variable=[[0,0],[0,0],[0,0]]):
# note: unitization is done in main loop, to allow compilation of LinearCombination function in ObjectiveMech) (TBI)
action_mech = ProcessingMechanism(default_variable=[[0,0],[0,0],[0,0]],
function=get_action, name='ACTION',
-output_states='agent action')
+output_ports='agent action')

# ************************************** BASIC COMPOSITION *************************************************************

@@ -156,9 +156,9 @@ def get_action(variable=[[0,0],[0,0],[0,0]]):

agent_comp.add_node(action_mech, required_roles=[NodeRole.OUTPUT])

-a = MappingProjection(sender=player_percept, receiver=action_mech.input_states[0])
-b = MappingProjection(sender=predator_percept, receiver=action_mech.input_states[1])
-c = MappingProjection(sender=prey_percept, receiver=action_mech.input_states[2])
+a = MappingProjection(sender=player_percept, receiver=action_mech.input_ports[0])
+b = MappingProjection(sender=predator_percept, receiver=action_mech.input_ports[1])
+c = MappingProjection(sender=prey_percept, receiver=action_mech.input_ports[2])
agent_comp.add_projections([a,b,c])


18 changes: 9 additions & 9 deletions Scripts/Debug/StabilityFlexibility.py
@@ -7,7 +7,7 @@
# the associated accuracy of the trial will be the probability that the DDM hits the upper threshold
def computeAccuracy(variable):

-# variable is the list of values given by the monitored output states in the Objective Mechanism
+# variable is the list of values given by the monitored output ports in the Objective Mechanism

print("Inputs to ComputeAccuracy Function: ", variable)

@@ -79,7 +79,7 @@ def computeAccuracy(variable):
inputLayer = pnl.TransferMechanism(#default_variable=[[0.0, 0.0]],
size=2,
function=pnl.Linear(slope=1, intercept=0),
-output_states = [pnl.RESULT],
+output_ports = [pnl.RESULT],
name='Input')
inputLayer.set_log_conditions([pnl.RESULT])

@@ -93,7 +93,7 @@ def computeAccuracy(variable):
integrator_mode = True,
integrator_function=pnl.AdaptiveIntegrator(rate=(tau)),
initial_value=np.array([[0.0, 0.0]]),
-output_states = [pnl.RESULT],
+output_ports = [pnl.RESULT],
name = 'Activity')

activation.set_log_conditions([pnl.RESULT, "mod_gain"])
@@ -102,23 +102,23 @@
stimulusInfo = pnl.TransferMechanism(default_variable=[[0.0, 0.0]],
size = 2,
function = pnl.Linear(slope=1, intercept=0),
-output_states = [pnl.RESULT],
+output_ports = [pnl.RESULT],
name = "Stimulus Info")

stimulusInfo.set_log_conditions([pnl.RESULT])

controlledElement = pnl.TransferMechanism(default_variable=[[0.0, 0.0]],
size = 2,
function=pnl.Linear(slope=1, intercept= 0),
-input_states=pnl.InputState(combine=pnl.PRODUCT),
-output_states = [pnl.RESULT],
+input_ports=pnl.InputPort(combine=pnl.PRODUCT),
+output_ports = [pnl.RESULT],
name = 'Stimulus Info * Activity')

controlledElement.set_log_conditions([pnl.RESULT])

ddmCombination = pnl.TransferMechanism(size = 1,
function = pnl.Linear(slope=1, intercept=0),
-output_states = [pnl.RESULT],
+output_ports = [pnl.RESULT],
name = "DDM Integrator")
ddmCombination.set_log_conditions([pnl.RESULT])

@@ -127,7 +127,7 @@ def computeAccuracy(variable):
threshold = THRESHOLD,
noise = NOISE,
t0 = T0),
-output_states = [pnl.DECISION_VARIABLE, pnl.RESPONSE_TIME,
+output_ports = [pnl.DECISION_VARIABLE, pnl.RESPONSE_TIME,
pnl.PROBABILITY_UPPER_THRESHOLD, pnl.PROBABILITY_LOWER_THRESHOLD],
name='DDM')

@@ -170,7 +170,7 @@ def computeAccuracy(variable):
function = computeAccuracy)

meta_controller = pnl.OptimizationControlMechanism(agent_rep = stabilityFlexibility,
-features = [inputLayer.input_state,stimulusInfo.input_state],
+features = [inputLayer.input_port,stimulusInfo.input_port],
# features = {pnl.SHADOW_INPUTS: [inputLayer, stimulusInfo]},
# features = [(inputLayer, pnl.SHADOW_INPUTS),
# (stimulusInfo, pnl.SHADOW_INPUTS)],
10 changes: 5 additions & 5 deletions Scripts/Debug/Umemoto_Feb.py
@@ -42,7 +42,7 @@

Target_Rep.set_log_conditions('value')#, log_condition=pnl.PROCESSING) # Log Target_Rep
Target_Rep.set_log_conditions('mod_slope')#, log_condition=pnl.PROCESSING)
-Target_Rep.set_log_conditions('InputState-0')#, log_condition=pnl.PROCESSING)
+Target_Rep.set_log_conditions('InputPort-0')#, log_condition=pnl.PROCESSING)

Distractor_Rep = pnl.TransferMechanism(name='Distractor Representation')

@@ -62,7 +62,7 @@
starting_point=(x_0),
t0=t0
),name='Decision',
-output_states=[
+output_ports=[
pnl.DECISION_VARIABLE,
pnl.RESPONSE_TIME,
pnl.PROBABILITY_UPPER_THRESHOLD,
@@ -73,7 +73,7 @@
}
],) #drift_rate=(1.0),threshold=(0.2645),noise=(0.5),starting_point=(0), t0=0.15

-Decision.set_log_conditions('InputState-0')#, log_condition=pnl.PROCESSING)
+Decision.set_log_conditions('InputPort-0')#, log_condition=pnl.PROCESSING)

# Outcome Mechanisms:
Reward = pnl.TransferMechanism(name='Reward')
@@ -119,10 +119,10 @@
allocation_samples=signalSearchRange)

Umemoto_comp.add_model_based_optimizer(optimizer=pnl.OptimizationControlMechanism(agent_rep=Umemoto_comp,
-features=[Target_Stim.input_state, Distractor_Stim.input_state, Reward.input_state],
+features=[Target_Stim.input_port, Distractor_Stim.input_port, Reward.input_port],
feature_function=pnl.AdaptiveIntegrator(rate=1.0),
objective_mechanism=pnl.ObjectiveMechanism(monitor_for_control=[Reward,
-(Decision.output_states[pnl.PROBABILITY_UPPER_THRESHOLD], 1, -1)],
+(Decision.output_ports[pnl.PROBABILITY_UPPER_THRESHOLD], 1, -1)],
),
function=pnl.GridSearch(),
control_signals=[Target_Rep_Control_Signal, Distractor_Rep_Control_Signal]
14 changes: 7 additions & 7 deletions Scripts/Debug/Umemoto_Feb2.py
@@ -41,7 +41,7 @@

Target_Rep.set_log_conditions('value')#, log_condition=pnl.PROCESSING) # Log Target_Rep
Target_Rep.set_log_conditions('mod_slope')#, log_condition=pnl.PROCESSING)
-Target_Rep.set_log_conditions('InputState-0')#, log_condition=pnl.PROCESSING)
+Target_Rep.set_log_conditions('InputPort-0')#, log_condition=pnl.PROCESSING)

Distractor_Rep = pnl.TransferMechanism(name='Distractor Representation')

@@ -64,7 +64,7 @@
starting_point=(x_0),
t0=t0
),name='Decision',
-output_states=[
+output_ports=[
pnl.DECISION_VARIABLE,
pnl.RESPONSE_TIME,
pnl.PROBABILITY_UPPER_THRESHOLD,
@@ -75,7 +75,7 @@
}
],) #drift_rate=(1.0),threshold=(0.2645),noise=(0.5),starting_point=(0), t0=0.15

-Decision.set_log_conditions('InputState-0')#, log_condition=pnl.PROCESSING)
+Decision.set_log_conditions('InputPort-0')#, log_condition=pnl.PROCESSING)
Decision.set_log_conditions('PROBABILITY_UPPER_THRESHOLD')
print(Decision.loggable_items)
# Outcome Mechanisms:
@@ -129,13 +129,13 @@

Umemoto_comp.add_model_based_optimizer(optimizer=pnl.OptimizationControlMechanism(
agent_rep=Umemoto_comp,
-features=[Target_Stim.input_state,
-Distractor_Stim.input_state,
-Reward.input_state],
+features=[Target_Stim.input_port,
+Distractor_Stim.input_port,
+Reward.input_port],
feature_function=pnl.AdaptiveIntegrator(rate=1.0),
objective_mechanism=pnl.ObjectiveMechanism(
monitor_for_control=[Reward,
-(Decision.output_states[pnl.PROBABILITY_UPPER_THRESHOLD], 1, -1)],
+(Decision.output_ports[pnl.PROBABILITY_UPPER_THRESHOLD], 1, -1)],
),
function=pnl.GridSearch(save_values=True),
control_signals=[Target_Rep_Control_Signal,
