This version adds support for PyTorch 1.4.0. There are also several minor feature improvements and bug fixes listed below.
- added option for neutral elements to be used in place of empty tensors in reduction operations (`operations.__reduce_op`) (cf. #369 and #444)
- `var` and `std` both now support iterable axis arguments (see the sketch after this list)
- updated pull request template
This version fixes the packaging, such that installed versions of HeAT contain all required Python packages.
This version differs significantly from the previous release (0.1.0): it adds a large amount of functionality and includes many changes. Many existing functions now behave more closely to their NumPy counterparts. Although substantial progress has been made, work is still ongoing. We appreciate everyone who uses this package, and we work hard to resolve the issues you report to us. Thank you!
- python >= 3.5
- mpi4py >= 3.0.0
- numpy >= 1.13.0
- torch >= 1.3.0
- h5py >= 2.8.0
- netCDF4 >= 1.4.0, <= 1.5.2
- pre-commit >= 1.18.3 (development requirement)
#415 GPU support was added for this release. To set the default device use `ht.use_device(dev)`, where `dev` can be either `"gpu"` or `"cpu"`. Make sure to specify the device when creating DNDarrays if the desired device differs from the default. If no device is specified, it is assumed to be `"cpu"`. A minimal usage sketch follows below.
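A minimal sketch of setting and overriding the default device; the shapes are illustrative and a CUDA-capable GPU is assumed to be available:

```python
import heat as ht

# set the default device for newly created DNDarrays
ht.use_device("gpu")

a = ht.zeros((100, 100))               # created on the default device ("gpu")
b = ht.ones((100, 100), device="cpu")  # the device can still be overridden per call
```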
- #308 balance
- #308 convert DNDarray to NumPy ndarray (see the sketch after this list)
- #412 diag and diagonal
- #388 diff
- #362 distributed random numbers
- #327 exponents and logarithms
- #423 Fortran memory layout
- #330 load csv
- #326 maximum
- #324 minimum
- #304 nonzero
- #367 prod
- #402 modf
- #428 redistribute
- #345 resplit out of place
- #402 round
- #312 sort
- #423 strides
- #304 where
- Code of conduct
- Contribution guidelines
- pre-commit and black checks added to Pull Requests to ensure proper formatting
- Issue templates
- #357 Logspace factory
- #428 lshape map creation
- Pull Request Template
- Removal of the ml folder in favor of regression and clustering folders
- #365 Test suite
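A minimal sketch combining a few of the new operations (`nonzero`, `where`, and the DNDarray-to-NumPy conversion); the example data and the `DNDarray.numpy()` method name are assumptions based on common array-API conventions:

```python
import heat as ht

x = ht.array([[0, 1, 2], [3, 0, 4]], split=0)

idx = ht.nonzero(x)                       # indices of the non-zero elements
y = ht.where(x > 1, x, ht.zeros_like(x))  # keep values > 1, replace the rest with 0

np_y = y.numpy()  # gather and convert to a NumPy ndarray (assumed method name)
print(np_y)
```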
- KMeans bug fixes
  - working in distributed mode
  - fixed shape of cluster centers for `init='kmeans++'`
- `__local_op` now returns proper gshape
- allgatherv fix -> elements now sorted in the correct order
- getitem fixes and improvements
- unique now returns a distributed result if the input was distributed
- AllToAll on single process now functioning properly
- optional packages are truly optional for running the unit tests
- the output of mean and var (and std) now sets the correct split axis for the returned DNDarray (see the sketch below)
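A minimal sketch of the corrected split behaviour of the reductions; the shape, split value, and expected output are illustrative assumptions:

```python
import heat as ht

# DNDarray distributed along axis 1
x = ht.random.randn(4, 8, split=1)

# reducing axis 0 removes it, so the distributed axis of the result becomes 0
m = ht.mean(x, axis=0)
print(m.shape, m.split)  # expected: (8,) 0
```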