
v0.4.7.0 #776

Merged · 54 commits · May 1, 2018

Commits
10cf7ef  • IntegratorMechanism (#742)  (jdcpni, Mar 30, 2018)
755d35c  Fix/function/stability (#743)  (jdcpni, Mar 30, 2018)
c021994  Refactor/context/structured (#744)  (jdcpni, Apr 1, 2018)
7817392  Refactor/context/deprecate init status (#745)  (jdcpni, Apr 1, 2018)
5ef84ff  • Context (#746)  (jdcpni, Apr 1, 2018)
4f58e67  test,function/LinearCombination: Rename second test function to preve…  (jvesely, Apr 1, 2018)
684f4ee  tests,function/LinearCombination: Add tests with absent parameters  (jvesely, Apr 1, 2018)
4bca3e3  Scheduling: fix bug where termination conditions persisted across cal…  (kmantel, Apr 3, 2018)
1a165bc  Feat/mechanism/input target label dicts (#751)  (jdcpni, Apr 5, 2018)
7d74e34  testing: correct pytest setup ovewrite, losing some settings  (kmantel, Apr 5, 2018)
97eba90  testing: Resolve leftover merge conflicts, fixes #747  (kmantel, Apr 6, 2018)
9414090  Feat/mechanism/input target lable dicts (#752)  (jdcpni, Apr 6, 2018)
3ca267b  utilites: get method to prune unused args from a function from compos…  (kmantel, Apr 9, 2018)
2edbdd9  scheduling: decouple Scheduler and Condition, allow multiple executio…  (kmantel, Apr 6, 2018)
bb8e013  Merge branch 'feat/scheduling/decouple-conditions-scheduler' into devel  (kmantel, Apr 9, 2018)
bb19e4d  LearningAuxilliary: rename to LearningAuxiliary (correct double l typo)  (kmantel, Apr 11, 2018)
ebeba45  Refactor/mechanisms/ execute (#754)  (jdcpni, Apr 13, 2018)
3a6ab54  • Mechanism (#756)  (jdcpni, Apr 13, 2018)
1e23c31  Docs/context/context (#757)  (jdcpni, Apr 13, 2018)
9d7a13b  PredictionErrorMechanism: correct infinite recursion in _execute  (kmantel, Apr 13, 2018)
9961fc5  Fix/misc/misc (#759)  (jdcpni, Apr 14, 2018)
7acac9e  fixing drift diffusion integrator bug - returned threshold even when …  (KristenManning, Apr 16, 2018)
d2796b1  adding a note to the ddm plot() documentation to clarify that the plo…  (KristenManning, Apr 16, 2018)
5f5007c  Merge pull request #760 from PrincetonUniversity/fix/ddm/threshold  (KristenManning, Apr 17, 2018)
1facbfd  Functions: simplify noise validation  (kmantel, Apr 18, 2018)
eca7ec7  LinearCombination, Reduce: handle negative exponents during init (avo…  (kmantel, Apr 19, 2018)
f84e698  fixing bug in looking up target values for learning which caused firs…  (KristenManning, Apr 20, 2018)
c45f836  Merge pull request #761 from PrincetonUniversity/fix/learning/target-…  (KristenManning, Apr 20, 2018)
cdce0f1  Refactor/context/set and test (#762)  (jdcpni, Apr 21, 2018)
b834baa  Params: copy dict/ROODs nested in params arguments to avoid side effects  (kmantel, Apr 21, 2018)
5af229c  imports: pycharm-optimize to avoid unnecessary circularities  (kmantel, Feb 13, 2018)
cd71735  parsing: add method to parse function variable from variable  (kmantel, Feb 13, 2018)
a8aaac0  Component: override __deepcopy__ method to use shared items  (kmantel, Aug 30, 2017)
4def1d8  utilities: add function to detect if obj is an instance or a subclass…  (kmantel, Sep 7, 2017)
a835077  Defaults: add ClassDefaults.function to several classes, and use it a…  (kmantel, Oct 31, 2017)
3a8ac96  Component: add class method to get new param class defaults  (kmantel, Apr 17, 2018)
ae8dda4  Defaults: add enum to denote flexibility of assignments  (kmantel, Apr 20, 2018)
d557009  refactor/rename _variable_not_specified to be more generic, using enum  (kmantel, Apr 20, 2018)
7e1907c  LinearMatrix: make keyword method static to reflect how it is used  (kmantel, Apr 10, 2018)
70c9f45  Component: create fewer instances during init when function arg is a …  (kmantel, Apr 21, 2018)
945a91f  Rewrite Function instantiation:  (kmantel, Sep 7, 2017)
efa7940  Merge pull request #764 from PrincetonUniversity/refac/function/init  (kmantel, Apr 23, 2018)
bf3d16b  Defaults: ensure that function attr appears when examining Defaults  (kmantel, Apr 27, 2018)
bd9917b  Refactor/context/source (#765)  (jdcpni, Apr 29, 2018)
4f2ce2f  Feat/inputstate/combine (#766)  (jdcpni, Apr 30, 2018)
5d4004a  Fix default_variable bug (#767)  (dcw3, Apr 30, 2018)
9b4a95e  Refactor/mechanism/transfermechanism (#768)  (jdcpni, May 1, 2018)
e77f253  • LCA (#769)  (jdcpni, May 1, 2018)
66c9207  • LCA (#770)  (jdcpni, May 1, 2018)
5956d87  Feat/projections/masked mapping projection (#771)  (jdcpni, May 1, 2018)
c176c90  adding exceptions to get_current_function_param and get_current_mecha…  (KristenManning, May 1, 2018)
7144963  Merge pull request #773 from PrincetonUniversity/bug/get-params/varia…  (KristenManning, May 1, 2018)
d7e5c69  Feat/projections/masked mapping projection (#775)  (jdcpni, May 1, 2018)
6900613  Merge branch 'devel'  (kmantel, May 1, 2018)
3 changes: 2 additions & 1 deletion .idea/inspectionProfiles/Project_Default.xml
5 changes: 3 additions & 2 deletions .idea/runConfigurations/EVC_Gratton.xml
3 changes: 1 addition & 2 deletions .idea/runConfigurations/Make_HTML.xml
5 changes: 3 additions & 2 deletions .idea/runConfigurations/Scratch_Pad.xml
19 changes: 0 additions & 19 deletions .idea/runConfigurations/Tests.xml
This file was deleted.
5 changes: 3 additions & 2 deletions .idea/runConfigurations/_Multilayer_Learning.xml

2 changes: 1 addition & 1 deletion README.rst
@@ -159,7 +159,7 @@ Contributors
 * **Peter Johnson**, Princeton Neuroscience Institute, Princeton University
 * **Justin Junge**, Department of Psychology, Princeton University
 * **Kristen Manning**, Princeton Neuroscience Institute, Princeton University
-* **Kevin Mantel**, Princeton Neuroscience Institute, Princeton University
+* **Katherine Mantel**, Princeton Neuroscience Institute, Princeton University
 * **Markus Spitzer**, Princeton Neuroscience Institute, Princeton University
 * **Jan Vesely**, Department of Computer Science, Rutgers University
 * **Changyan Wang**, Princeton Neuroscience Institute, Princeton University
137 changes: 137 additions & 0 deletions Scripts/Aida.py
@@ -0,0 +1,137 @@

# coding: utf-8

# In[1]:


import psyneulink as pnl
import numpy as np


# In[2]:


# ECin = pnl.KWTA(size=8, function=pnl.Linear)
# DG = pnl.KWTA(size=400, function=pnl.Linear)
# CA3 = pnl.KWTA(size=80, function=pnl.Linear)
# CA1 = pnl.KWTA(size=100, function=pnl.Linear)
# ECout = pnl.KWTA(size=8, function=pnl.Linear)
ECin = pnl.TransferMechanism(size=8, function=pnl.Linear(), name='ECin')
DG = pnl.TransferMechanism(size=400, function=pnl.Logistic(), name='DG')
CA3 = pnl.TransferMechanism(size=80, function=pnl.Logistic(), name='CA3')
CA1 = pnl.TransferMechanism(size=100, function=pnl.Linear(), name='CA1')
ECout = pnl.TransferMechanism(size=8, function=pnl.Logistic(), name='ECout')


# In[3]:


def make_mask(in_features, out_features, connectivity):
mask = np.zeros((in_features, out_features))
rand = np.random.random(mask.shape)
idxs = np.where(rand < connectivity)
mask[idxs[0], idxs[1]] = 1
return mask

def make_mat(in_features, out_features, lo, high, mask):
w = np.random.uniform(lo, high, size=(in_features, out_features))
w = mask * w
return w


# In[4]:


ECin_s, ECout_s, DG_s, CA3_s, CA1_s = 8, 8, 400, 80, 100


# In[5]:


mask_ECin_DG = make_mask(ECin_s, DG_s, 0.25)
mask_DG_CA3 = make_mask(DG_s, CA3_s, 0.05)
mask_ECin_CA3 = make_mask(ECin_s, CA3_s, 0.25)

mat_ECin_DG = make_mat(ECin_s, DG_s, 0.25, 0.75, mask_ECin_DG)
mat_DG_CA3 = make_mat(DG_s, CA3_s, 0.89, 0.91, mask_DG_CA3)
mat_ECin_CA3 = make_mat(ECin_s, CA3_s, 0.25, 0.75, mask_ECin_CA3)

mat_CA3_CA1 = make_mat(CA3_s, CA1_s, 0.25, 0.75, np.ones((CA3_s, CA1_s)))
mat_CA1_ECout = make_mat(CA1_s, ECout_s, 0.25, 0.75, np.ones((CA1_s, ECout_s)))
mat_ECin_CA1 = make_mat(ECin_s, CA1_s, 0.25, 0.75, np.ones((ECin_s, CA1_s)))


# In[6]:


ECin_to_DG=pnl.MappingProjection(matrix=mat_ECin_DG)
DG_to_CA3=pnl.MappingProjection(matrix=mat_DG_CA3)
ECin_to_CA3=pnl.MappingProjection(matrix=mat_ECin_CA3)
CA3_to_CA1=pnl.MappingProjection(matrix=mat_CA3_CA1)
CA1_to_ECout=pnl.MappingProjection(sender=CA1, receiver=ECout, matrix=mat_CA1_ECout)
ECin_to_CA1=pnl.MappingProjection(sender=ECin, receiver=CA1, matrix=mat_ECin_CA1)


# In[7]:


proc_ECin_DG = pnl.Process(pathway=[ECin, ECin_to_DG, DG], learning=pnl.ENABLED, learning_rate=0.2)
proc_ECin_CA3 = pnl.Process(pathway=[ECin, ECin_to_CA3, CA3], learning=pnl.ENABLED, learning_rate=0.2)
proc_DG_CA3 = pnl.Process(pathway=[DG, DG_to_CA3, CA3], learning=pnl.ENABLED, learning_rate=0)
proc_CA3_CA1 = pnl.Process(pathway=[CA3, CA3_to_CA1, CA1], learning=pnl.ENABLED, learning_rate=0.05)
proc_CA1_ECout = pnl.Process(pathway=[CA1, ECout], learning=pnl.ENABLED, learning_rate=0.02)
proc_ECin_CA1 = pnl.Process(pathway=[ECin, CA1], learning_rate=0.02)


# In[8]:


TSP = pnl.System(processes=[proc_ECin_DG, proc_ECin_CA3, proc_DG_CA3, proc_CA3_CA1, proc_CA1_ECout])
# MSP = pnl.System(processes=[proc_ECin_CA1, proc_CA1_ECout])


# In[9]:


TSP.show_graph()
assert True

# In[10]:


## Method for making input
def statistical():
chars = ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I']
sequence = ''
letters = range(8)
starters = range(0, 8, 2)
enders = range(1, 8, 2)

## minus phase
idx = np.random.randint(len(starters))
s = starters[idx]
e = enders[idx]
minus_input, minus_target = np.zeros((8)), np.zeros((8))
minus_input[s] = 1.0
minus_target[e] = 1.0
minus_target[s] = 0.9
sequence += chars[s]
sequence += chars[e]

## plus phase
plus_input, plus_target = minus_target, np.zeros((8))
plus_target[s] = 1
plus_target[e] = 1

return (minus_input, minus_target, plus_input, plus_target)


# In[ ]:


epochs = 100
for epoch in range(epochs):
minus_x, minus_y, plus_x, plus_y = statistical()
TSP.run(inputs={ECin:minus_x}, targets={ECout:minus_y})
## Running the above line of code causes weights to get too large
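
The sparse-connectivity helpers in the new Scripts/Aida.py can be exercised on their own. Below is a minimal standalone sketch (plain NumPy, no psyneulink dependency; the two helpers are reproduced from the script above with only a spacing fix) checking that make_mask hits roughly the requested connectivity and that make_mat leaves masked weights at zero. The 8 × 400 shape mirrors the ECin → DG projection.

```python
# Standalone check of the sparse-connectivity helpers from Scripts/Aida.py.
import numpy as np

def make_mask(in_features, out_features, connectivity):
    """Binary mask with roughly `connectivity` fraction of entries set to 1."""
    mask = np.zeros((in_features, out_features))
    rand = np.random.random(mask.shape)
    mask[rand < connectivity] = 1
    return mask

def make_mat(in_features, out_features, lo, high, mask):
    """Uniform weights drawn from [lo, high), zeroed wherever the mask is 0."""
    w = np.random.uniform(lo, high, size=(in_features, out_features))
    return mask * w

np.random.seed(0)                      # deterministic for the checks below
mask = make_mask(8, 400, 0.25)         # same shape/density as ECin -> DG
w = make_mat(8, 400, 0.25, 0.75, mask)

density = mask.mean()                  # fraction of nonzero connections
assert abs(density - 0.25) < 0.05      # close to the requested 25%
assert np.all(w[mask == 0] == 0)       # masked entries stay exactly zero
assert np.all((w[mask == 1] >= 0.25) & (w[mask == 1] < 0.75))
print("density:", round(density, 3))
```

With 3200 entries the empirical density concentrates tightly around the requested value, which is why the scripted tolerances are safe; a real run of the model would feed these matrices to pnl.MappingProjection exactly as the script does.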

6 changes: 3 additions & 3 deletions Scripts/Examples/EVC-Gratton.py
@@ -145,8 +145,8 @@

 # Show graph of system (with control components)
 # mySystem.show_graph(show_dimensions=pnl.ALL, show_projection_labels=True)
-# mySystem.show_graph(show_control=True, show_projection_labels=True)
-mySystem.show_graph(show_control=True, show_mechanism_structure=True, show_headers=False)
+mySystem.show_graph(show_control=True, show_projection_labels=False)
+# mySystem.show_graph(show_control=True, show_mechanism_structure=True, show_headers=False)

 # configure EVC components
 mySystem.controller.control_signals[0].intensity_cost_function = pnl.Exponential(rate=0.8046).function
@@ -193,7 +193,7 @@

 mySystem.controller.reportOutputPref = True

-Flanker_Rep.set_log_conditions((pnl.SLOPE, pnl.ContextStatus.CONTROL))
+Flanker_Rep.set_log_conditions((pnl.SLOPE, pnl.ContextFlags.CONTROL))

 mySystem.run(
     num_trials=nTrials,