Merge pull request #2897 from PrincetonUniversity/devel

Devel

kmantel authored Feb 2, 2024
2 parents f062be4 + 47a59ec commit 9b68ab8
Showing 39 changed files with 502 additions and 427 deletions.
5 changes: 4 additions & 1 deletion .github/workflows/codeql.yml
@@ -46,8 +46,11 @@ jobs:
     - name: Autobuild
       uses: github/codeql-action/autobuild@v3

-    - name: Cache cleanup
+    - name: Pip cache cleanup
       shell: bash
+      # CODEQL_PYTHON is only defined if dependencies were installed [0]
+      # [0] https://docs.github.com/en/code-security/code-scanning/creating-an-advanced-setup-for-code-scanning/customizing-your-advanced-setup-for-code-scanning?learn=code_security_actions&learnProduct=code-security#analyzing-python-dependencies
+      if: ${{ env.CODEQL_PYTHON != '' }}
       run: |
         $CODEQL_PYTHON -m pip cache info
         $CODEQL_PYTHON -m pip cache purge
16 changes: 9 additions & 7 deletions .github/workflows/pnl-ci-docs.yml
@@ -92,7 +92,7 @@ jobs:
         echo "pip_cache_dir=$(python -m pip cache dir)" | tee -a $GITHUB_OUTPUT

     - name: Wheels cache
-      uses: actions/cache@v3
+      uses: actions/cache@v4
       with:
         path: ${{ steps.pip_cache.outputs.pip_cache_dir }}/wheels
         key: ${{ runner.os }}-python-${{ matrix.python-version }}-${{ matrix.python-architecture }}-pip-wheels-${{ hashFiles('requirements.txt', 'doc_requirements.txt') }}-${{ github.sha }}
@@ -124,19 +124,21 @@ jobs:
       run: git tag -d 'v0.0.0.0'

     - name: Upload Documentation
-      uses: actions/upload-artifact@v3
+      uses: actions/upload-artifact@v4
       with:
         name: Documentation-${{matrix.pnl-version}}-${{ matrix.os }}-${{ matrix.python-version }}-${{ matrix.python-architecture }}
         retention-days: 1
         path: docs/build/html

     - name: Store PR number
-      if: ${{ github.event_name == 'pull_request' }}
+      # The 'base' variant runs only on pull requests and has only one job
+      if: ${{ matrix.pnl-version == 'base' }}
       run: echo ${{ github.event.pull_request.number }} > ./pr_number.txt

     - name: Upload PR number for other workflows
-      if: ${{ github.event_name == 'pull_request' }}
-      uses: actions/upload-artifact@v3
+      # The 'base' variant runs only on pull requests and has only one job
+      if: ${{ matrix.pnl-version == 'base' }}
+      uses: actions/upload-artifact@v4
       with:
         name: pr_number
         path: ./pr_number.txt
@@ -168,7 +170,7 @@ jobs:
         ref: gh-pages

     - name: Download branch docs
-      uses: actions/download-artifact@v3
+      uses: actions/download-artifact@v4
       with:
         name: Documentation-head-${{ matrix.os }}-${{ matrix.python-version }}-x64
         path: _built_docs/${{ github.ref }}
@@ -185,7 +187,7 @@ jobs:
       if: github.ref == 'refs/heads/master' || github.ref == 'refs/heads/devel' || github.ref == 'refs/heads/docs'

     - name: Download main docs
-      uses: actions/download-artifact@v3
+      uses: actions/download-artifact@v4
       with:
         name: Documentation-head-${{ matrix.os }}-${{ matrix.python-version }}-x64
         # This overwrites files in current directory
21 changes: 14 additions & 7 deletions .github/workflows/pnl-ci.yml
@@ -143,7 +143,7 @@ jobs:
         echo "pip_cache_dir=$(python -m pip cache dir)" | tee -a $GITHUB_OUTPUT

     - name: Wheels cache
-      uses: actions/cache@v3
+      uses: actions/cache@v4
       with:
         path: ${{ steps.pip_cache.outputs.pip_cache_dir }}/wheels
         key: ${{ runner.os }}-python-${{ matrix.python-version }}-${{ matrix.python-architecture }}-pip-wheels-${{ hashFiles('requirements.txt', 'dev_requirements.txt') }}-${{ github.sha }}
@@ -163,22 +163,28 @@ jobs:
         # exit-zero treats all errors as warnings. The GitHub editor is 127 chars wide
         flake8 . --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics

-    - name: Print test machine/env info
+    - name: Print numpy info
+      shell: bash
+      run: |
+        python -c "import numpy; numpy.show_config()"
+
+    - name: Print machine info
       shell: bash
       run: |
         case "$RUNNER_OS" in
-          Linux*) lscpu;;
+          Linux*) lscpu; lsmem;;
           macOS*) sysctl -a | grep '^hw' ;;
           Windows*) wmic cpu get description,currentclockspeed,NumberOfCores,NumberOfEnabledCore,NumberOfLogicalProcessors; wmic memorychip get capacity,speed,status,manufacturer ;;
         esac

     - name: Test with pytest
       timeout-minutes: 180
-      run: pytest --junit-xml=tests_out.xml --verbosity=0 -n auto ${{ matrix.extra-args }}
+      run: pytest --junit-xml=tests_out.xml --verbosity=0 -n logical ${{ matrix.extra-args }}

     - name: Upload test results
-      uses: actions/upload-artifact@v3
+      uses: actions/upload-artifact@v4
       with:
-        name: test-results-${{ matrix.os }}-${{ matrix.python-version }}-${{ matrix.python-architecture }}
+        name: test-results-${{ matrix.os }}-${{ matrix.python-version }}-${{ matrix.python-architecture }}-${{ matrix.version-restrict }}
         path: tests_out.xml
         retention-days: 5
       if: (success() || failure()) && ! contains(matrix.extra-args, 'forked')
@@ -202,7 +208,8 @@ jobs:
         python setup.py sdist bdist_wheel

     - name: Upload dist packages
-      uses: actions/upload-artifact@v3
+      uses: actions/upload-artifact@v4
+      if: matrix.version-restrict == ''
       with:
         name: dist-${{ matrix.os }}-${{ matrix.python-version }}-${{ matrix.python-architecture }}
         path: dist/
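A note on the `-n auto` → `-n logical` switch above: pytest-xdist sizes its worker pool from the CPU count, and (when the `psutil` extra is installed) `auto` maps to physical cores while `logical` also counts hyperthreads. As a rough standard-library approximation of the worker count `-n logical` requests:

```python
import os

# os.cpu_count() reports logical CPUs (hyperthreads included), which is
# approximately what pytest-xdist's `-n logical` asks for. This is a
# sketch for illustration, not the library's exact detection logic.
workers = os.cpu_count() or 1
assert workers >= 1
```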
10 changes: 5 additions & 5 deletions .github/workflows/test-release.yml
@@ -38,7 +38,7 @@ jobs:
         echo "wheel=$(ls *.whl)" >> $GITHUB_OUTPUT

     - name: Upload Python dist files
-      uses: actions/upload-artifact@v3
+      uses: actions/upload-artifact@v4
       with:
         name: Python-dist-files
         path: dist/
@@ -78,7 +78,7 @@ jobs:

     steps:
     - name: Download dist files
-      uses: actions/download-artifact@v3
+      uses: actions/download-artifact@v4
       with:
         name: Python-dist-files
         path: dist/
@@ -126,7 +126,7 @@ jobs:
         pytest --junit-xml=tests_out.xml --verbosity=0 -n auto tests

     - name: Upload test results
-      uses: actions/upload-artifact@v3
+      uses: actions/upload-artifact@v4
       with:
         name: test-results-${{ matrix.os }}-${{ matrix.python-version }}
         path: tests_out.xml
@@ -141,7 +141,7 @@ jobs:

     steps:
     - name: Download dist files
-      uses: actions/download-artifact@v3
+      uses: actions/download-artifact@v4
       with:
         name: Python-dist-files
         path: dist/
@@ -175,7 +175,7 @@ jobs:

     steps:
     - name: Download dist files
-      uses: actions/download-artifact@v3
+      uses: actions/download-artifact@v4
       with:
         name: Python-dist-files
         path: dist/
12 changes: 12 additions & 0 deletions broken_trans_deps.txt
@@ -29,3 +29,15 @@ cattrs != 23.1.1; python_version < '3.8'
 # cattrs==23.2.{1,2} breaks json serialization
 # https://github.com/python-attrs/cattrs/issues/453
 cattrs != 23.2.1, != 23.2.2
+
+# The following need at least sphinx-5 without indicating it in dependencies:
+# * sphinxcontrib-applehelp >=1.0.8,
+# * sphinxcontrib-devhelp >=1.0.6,
+# * sphinxcontrib-htmlhelp >=2.0.5,
+# * sphinxcontrib-serializinghtml >=1.1.10,
+# * sphinxcontrib-qthelp >=1.0.7
+sphinxcontrib-applehelp <1.0.8
+sphinxcontrib-devhelp <1.0.6
+sphinxcontrib-htmlhelp <2.0.5
+sphinxcontrib-serializinghtml <1.1.10
+sphinxcontrib-qthelp <1.0.7
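Each line added here is a standard requirement specifier: the upper bounds keep users of pre-5 sphinx on the last sphinxcontrib releases that still support it. The semantics of such a pin can be checked with the `packaging` library (the same one capped in dev_requirements.txt below); the versions used are taken from the comments above:

```python
from packaging.specifiers import SpecifierSet

# The pin "<1.0.8" admits the last sphinx-4-compatible release of
# sphinxcontrib-applehelp and excludes the first release requiring sphinx>=5.
spec = SpecifierSet("<1.0.8")
assert "1.0.7" in spec
assert "1.0.8" not in spec
```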
2 changes: 1 addition & 1 deletion cuda_requirements.txt
@@ -1 +1 @@
-pycuda >2018, <2023
+pycuda >2018, <2024
2 changes: 1 addition & 1 deletion dev_requirements.txt
@@ -1,6 +1,6 @@
 jupyter<1.0.1
 packaging<24.0
-pytest<7.4.4
+pytest<8.0.1
 pytest-benchmark<4.0.1
 pytest-cov<4.1.1
 pytest-forked<1.7.0
5 changes: 3 additions & 2 deletions psyneulink/core/components/component.py
@@ -1301,7 +1301,7 @@ def _get_compilation_state(self):
                      "intensity"}
         # Prune subcomponents (which are enabled by type rather than a list)
         # that should be omitted
-        blacklist = { "objective_mechanism", "agent_rep", "projections"}
+        blacklist = { "objective_mechanism", "agent_rep", "projections", "shadow_inputs"}

         # Only mechanisms use "value" state, can execute 'until finished',
         # and need to track executions
@@ -1426,7 +1426,8 @@ def _get_compilation_params(self):
                      "randomization_dimension", "save_values", "save_samples",
                      "max_iterations", "duplicate_keys",
                      "search_termination_function", "state_feature_function",
-                     "search_function",
+                     "search_function", "weight", "exponent", "gating_signal_params",
+                     "retain_old_simulation_data",
                      # not used in compiled learning
                      "learning_results", "learning_signal", "learning_signals",
                      "error_matrix", "error_signal", "activation_input",
@@ -1442,7 +1442,8 @@ def _function(self,
         elif operation == CROSS_ENTROPY:
             v1 = variable[0]
             v2 = variable[1]
-            combination = np.where(np.logical_and(v1 == 0, v2 == 0), 0.0, v1 * np.log(v2))
+            both_zero = np.logical_and(v1 == 0, v2 == 0)
+            combination = v1 * np.where(both_zero, 0.0, np.log(v2, where=np.logical_not(both_zero)))
         else:
             raise FunctionError("Unrecognized operator ({0}) for LinearCombination function".
                                 format(operation.self.Operation.SUM))
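The point of this cross-entropy rewrite (and the matching one in the Distance function below) is that `np.where` alone does not prevent `np.log(v2)` from being evaluated at the zeros — it only discards the result afterwards, so a divide-by-zero warning is still raised. Passing `where=` to `np.log` skips the masked entries entirely. A minimal sketch with illustrative values:

```python
import numpy as np

v1 = np.array([0.0, 0.5, 0.25])
v2 = np.array([0.0, 0.5, 0.5])

both_zero = np.logical_and(v1 == 0, v2 == 0)
# np.log leaves masked output entries unevaluated (their values are
# unspecified); np.where then substitutes 0.0 at exactly those positions,
# so log(0) is never computed and no RuntimeWarning is emitted.
combination = v1 * np.where(both_zero, 0.0, np.log(v2, where=np.logical_not(both_zero)))
```

Note the pairing: every position skipped by `where=` is also a position where `np.where` selects the 0.0 branch, so the unspecified log outputs are never observed.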
@@ -26,6 +26,7 @@

import numpy as np
from beartype import beartype
from scipy.special import erfinv

from psyneulink._typing import Optional

@@ -371,11 +372,6 @@ def _function(self,
                   params=None,
                   ):

-        try:
-            from scipy.special import erfinv
-        except:
-            raise FunctionError("The UniformToNormalDist function requires the SciPy package.")
-
         mean = self._get_current_parameter_value(DIST_MEAN, context)
         standard_deviation = self._get_current_parameter_value(STANDARD_DEVIATION, context)
         random_state = self.parameters.random_state._get(context)
@@ -806,7 +806,7 @@ def progress_callback(study, trial):
         optuna.logging.set_verbosity(optuna.logging.WARNING)

         study = optuna.create_study(
-            sampler=self.method, direction=self.direction
+            sampler=opt_func, direction=self.direction
         )
         study.optimize(
             objfunc_wrapper_wrapper,
@@ -1207,7 +1207,8 @@ def _function(self,
             # MODIFIED CW 3/20/18: avoid divide by zero error by plugging in two zeros
             # FIX: unsure about desired behavior when v2 = 0 and v1 != 0
             # JDC: returns [inf]; leave, and let it generate a warning or error message for user
-            result = -np.sum(np.where(np.logical_and(v1 == 0, v2 == 0), 0.0, v1 * np.log(v2)))
+            both_zero = np.logical_and(v1 == 0, v2 == 0)
+            result = -np.sum(v1 * np.where(both_zero, 0.0, np.log(v2, where=np.logical_not(both_zero))))

         # Energy
         elif self.metric == ENERGY:
@@ -301,7 +301,7 @@ def reset(self, previous_value=None, context=None):
         if previous_value is None:
             previous_value = self._get_current_parameter_value("initializer", context)

-        if previous_value is None or previous_value == []:
+        if previous_value is None or np.asarray(previous_value).size == 0:
             self.parameters.previous_value._get(context).clear()
             value = deque([], maxlen=self.parameters.history.get(context))

@@ -1752,7 +1752,7 @@ def _get_distance(self, cue:Union[list, np.ndarray],
         field_weights = self._get_current_parameter_value('distance_field_weights', context)
         # Set any items in field_weights to None if they are None or an empty list:
         field_weights = np.atleast_1d([None if
-                                       fw is None or fw == [] or isinstance(fw, np.ndarray) and fw.tolist()==[]
+                                       fw is None or np.asarray(fw).size == 0
                                        else fw
                                        for fw in field_weights])
if granularity == 'per_field':
@@ -1763,7 +1763,7 @@ def _get_distance(self, cue:Union[list, np.ndarray],
             if len(field_weights)==1:
                 field_weights = np.full(num_fields, field_weights[0])
             for i in range(num_fields):
-                if not any([item is None or item == [] or isinstance(item, np.ndarray) and item.tolist() == []
+                if not any([item is None or np.asarray(item).size == 0
                             for item in [cue[i], candidate[i], field_weights[i]]]):
                     distances_by_field[i] = distance_fct([cue[i], candidate[i]]) * field_weights[i]
return list(distances_by_field)
@@ -2623,7 +2623,7 @@ def reset(self, previous_value=None, context=None):
         if previous_value is None:
             previous_value = self._get_current_parameter_value("initializer", context)

-        if previous_value == []:
+        if np.asarray(previous_value).size == 0:
             value = np.ndarray(shape=(2, 0, len(self.defaults.variable[0])))
             self.parameters.previous_value._set(copy.deepcopy(value), context)
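The recurring `x == []` → `np.asarray(x).size == 0` substitutions in this file all fix the same trap: comparing an ndarray to `[]` is elementwise, so the result is an array (or an empty array) whose truth value is ambiguous rather than a reliable emptiness test, and it misbehaves for multi-dimensional inputs. A small sketch of the replacement check (the helper name is illustrative):

```python
import numpy as np

def is_empty(value):
    # Shape-agnostic emptiness test: count elements instead of comparing
    # against the empty list. Works uniformly for lists and ndarrays,
    # including multi-dimensional arrays with a zero-length axis.
    return np.asarray(value).size == 0

assert is_empty([])
assert is_empty(np.array([]))
assert is_empty(np.ndarray(shape=(2, 0, 3)))  # zero-length axis -> size 0
assert not is_empty([0.0])
assert not is_empty(np.zeros((2, 3)))
```

Callers still check `value is None` separately, as the diff above does, since `np.asarray(None)` produces a size-1 object array.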
2 changes: 1 addition & 1 deletion psyneulink/core/components/mechanisms/mechanism.py
@@ -3053,7 +3053,7 @@ def _gen_llvm_output_port_parse_variable(self, ctx, builder,
         if name == OWNER_VALUE:
             data = value
         elif name in self.llvm_state_ids:
-            data = pnlvm.helpers.get_state_ptr(builder, self, mech_state, name)
+            data = ctx.get_param_or_state_ptr(builder, self, name, state_struct_ptr=mech_state)
         else:
             data = None
@@ -3358,13 +3358,15 @@ def _gen_llvm_evaluate_alloc_range_function(self, *, ctx:pnlvm.LLVMBuilderContex

         nodes_params = pnlvm.helpers.get_param_ptr(builder, self.composition,
                                                    params, "nodes")
-        my_idx = self.composition._get_node_index(self)
-        my_params = builder.gep(nodes_params, [ctx.int32_ty(0),
-                                               ctx.int32_ty(my_idx)])
-        num_trials_per_estimate_ptr = pnlvm.helpers.get_param_ptr(builder, self,
-                                                                  my_params, "num_trials_per_estimate")
+        controller_idx = self.composition._get_node_index(self)
+        controller_params = builder.gep(nodes_params,
+                                        [ctx.int32_ty(0), ctx.int32_ty(controller_idx)])
+        num_trials_per_estimate_ptr = ctx.get_param_or_state_ptr(builder,
+                                                                 self,
+                                                                 "num_trials_per_estimate",
+                                                                 param_struct_ptr=controller_params)
         func_params = pnlvm.helpers.get_param_ptr(builder, self,
-                                                  my_params, "function")
+                                                  controller_params, "function")
         search_space = pnlvm.helpers.get_param_ptr(builder, self.function,
                                                    func_params, "search_space")

@@ -3428,7 +3430,7 @@ def _gen_llvm_evaluate_function(self, *, ctx:pnlvm.LLVMBuilderContext, tags=froz
         assert self.composition.controller is self
         assert self.composition is self.agent_rep
         nodes_states = pnlvm.helpers.get_state_ptr(builder, self.composition,
-                                                   comp_state, "nodes", None)
+                                                   comp_state, "nodes")
         nodes_params = pnlvm.helpers.get_param_ptr(builder, self.composition,
                                                    comp_params, "nodes")

@@ -3442,15 +3444,16 @@ def _gen_llvm_evaluate_function(self, *, ctx:pnlvm.LLVMBuilderContext, tags=froz
         assert len(self.output_ports) == len(allocation_sample.type.pointee)
         controller_out = builder.gep(comp_data, [ctx.int32_ty(0), ctx.int32_ty(0),
                                                  ctx.int32_ty(controller_idx)])
-        all_op_state = pnlvm.helpers.get_state_ptr(builder, self,
-                                                   controller_state, "output_ports")
-        all_op_params = pnlvm.helpers.get_param_ptr(builder, self,
-                                                    controller_params, "output_ports")
+        all_op_params, all_op_states = ctx.get_param_or_state_ptr(builder,
+                                                                  self,
+                                                                  "output_ports",
+                                                                  param_struct_ptr=controller_params,
+                                                                  state_struct_ptr=controller_state)
         for i, op in enumerate(self.output_ports):
             op_idx = ctx.int32_ty(i)

             op_f = ctx.import_llvm_function(op, tags=frozenset({"simulation"}))
-            op_state = builder.gep(all_op_state, [ctx.int32_ty(0), op_idx])
+            op_state = builder.gep(all_op_states, [ctx.int32_ty(0), op_idx])
             op_params = builder.gep(all_op_params, [ctx.int32_ty(0), op_idx])
             op_in = builder.alloca(op_f.args[2].type.pointee)
             op_out = builder.gep(controller_out, [ctx.int32_ty(0), op_idx])
@@ -3483,9 +3486,10 @@ def _gen_llvm_evaluate_function(self, *, ctx:pnlvm.LLVMBuilderContext, tags=froz


         # Determine simulation counts
-        num_trials_per_estimate_ptr = pnlvm.helpers.get_param_ptr(builder, self,
-                                                                  controller_params,
-                                                                  "num_trials_per_estimate")
+        num_trials_per_estimate_ptr = ctx.get_param_or_state_ptr(builder,
+                                                                 self,
+                                                                 "num_trials_per_estimate",
+                                                                 param_struct_ptr=controller_params)

         num_trials_per_estimate = builder.load(num_trials_per_estimate_ptr, "num_trials_per_estimate")
@@ -443,7 +443,7 @@
 **noise** (it must be the same length as the Mechanism's `variable <Mechanism_Base.variable>`), in which case each
 element is applied Hadamard (elementwise) to the result, as shown here::

-    >>> my_linear_tm.noise = [1.0,1.2,.9]
+    >>> my_linear_tm.noise.base = [1.0,1.2,.9]
     >>> my_linear_tm.execute([1.0, 1.0, 1.0])
     array([[2. , 2.2, 1.9]])