Magicl NG #63
Conversation
I love where this is headed. Great job!
Here are a few general comments and desired features that are probably not within the scope of the current project but that we should discuss:
- Use the Lisp equivalent of expression templates.
Good C++ linear algebra libraries like Eigen or Armadillo go out of their way to use expression templates in order to avoid creating intermediate results. This is essentially lazy evaluation done via template meta-programming.
For example, if you want to compute α Aᵀ x + y, where A is a matrix, α is a scalar, and x and y are vectors, the library will not transpose A physically, then multiply by x, then scale by α, and finally sum with y. Instead, these operations are represented in a parse tree of sorts and eventually evaluated in a single call to the BLAS dgemv routine (avoiding all the intermediate results). Considering that meta-programming is one of Common Lisp's strong points, it would be great if we could have that sort of functionality (a rough sketch follows this list).
- Use macrology to generate the code for different types.
I think it may be useful to avoid all the boilerplate code for matrices of different types by creating macros that generate all the relevant classes and generic functions at compile time (also sketched below this list).
- Use LAPACK more aggressively.
There are plenty of opportunities to do this (e.g., vector norms), and this is also related to my first point above.
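To make the first bullet concrete, here is a minimal sketch of the deferred-evaluation idea. Nothing here is part of this PR or of MAGICL's API; the names (lazy-gemv, deferred-gemv, force) are made up, and the loop in force stands in for where a single dgemv-style call would go:

```lisp
;; Hypothetical sketch: represent alpha * A^T * x + y as data instead of
;; materializing A^T and the intermediate products.
(defstruct lazy-gemv
  alpha        ; scalar
  matrix       ; 2-D array A
  transpose-p  ; use A^T without physically transposing it
  x y)         ; input vectors

(defun deferred-gemv (alpha a x y &key transpose-p)
  "Build a deferred representation of alpha * op(A) * x + y."
  (make-lazy-gemv :alpha alpha :matrix a :transpose-p transpose-p :x x :y y))

(defun force (expr)
  "Evaluate the deferred expression in a single pass (stand-in for dgemv)."
  (let* ((a (lazy-gemv-matrix expr))
         (x (lazy-gemv-x expr))
         (y (lazy-gemv-y expr))
         (alpha (lazy-gemv-alpha expr))
         (result (make-array (length y) :element-type 'double-float)))
    (dotimes (i (length y) result)
      (let ((acc 0d0))
        (dotimes (j (length x))
          (incf acc (* (if (lazy-gemv-transpose-p expr)
                           (aref a j i)   ; index A as if transposed
                           (aref a i j))
                       (aref x j))))
        (setf (aref result i) (+ (* alpha acc) (aref y i)))))))
```

Usage would look like (force (deferred-gemv 2d0 a x y :transpose-p t)); the transpose never exists as a separate array.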
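And for the second bullet, a tiny sketch of the macro-generation idea. The macro and class names are hypothetical, not what MAGICL uses; the point is only that each element type costs one line instead of a page of boilerplate:

```lisp
;; Hypothetical sketch: stamp out a class and a constructor per element type
;; at compile time.
(defmacro define-typed-tensor (name element-type)
  (let ((constructor (intern (concatenate 'string "MAKE-" (symbol-name name)))))
    `(progn
       (defclass ,name ()
         ((storage :initarg :storage :reader storage)
          (shape   :initarg :shape   :reader shape)))
       (defun ,constructor (shape)
         (make-instance ',name
                        :shape shape
                        :storage (make-array (reduce #'* shape)
                                             :element-type ',element-type))))))

;; One line per specialization:
(define-typed-tensor tensor/double-float double-float)
(define-typed-tensor tensor/complex-double (complex double-float))
```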
Only a partial review. Looks amazing. Just realized you might not get the "im in ur" meme. sad.
Also, change-class? You're lisping in the big leagues now, kid.
Review 2/n
src/high-level/abstract-tensor.lisp
```lisp
;; Stuff with somewhat sensible defaults

(defgeneric rank (abstract-tensor)
```
It's not a good idea to call this the rank of a tensor; it should be called order (or valence) instead.
Also, IMHO this should be stored as a slot within the tensor definition (an O(1) lookup), especially if shape is a list (in which case computing it is O(n)).
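A minimal sketch of the slot-caching idea; the class name and accessors here are made up for illustration, not MAGICL's:

```lisp
(defclass example-tensor ()
  ((shape :initarg :shape :reader shape)
   ;; Cached at construction so ORDER is O(1) even though SHAPE is a list.
   (order :reader order)))

(defmethod initialize-instance :after ((tensor example-tensor) &key)
  (setf (slot-value tensor 'order) (length (shape tensor))))
```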
This is named RANK to follow Common Lisp's ARRAY-RANK. Order is already taken by the column/row order, but I am willing to move things around if it makes using MAGICL more intuitive.
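For reference, the standard behavior being followed here:

```lisp
(array-rank (make-array '(3 4 5)))  ; => 3, per the standard ARRAY-RANK
```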
I'll leave it up to you and others to decide whether to change it or not. My opinion is that if we are going to diverge from MATLAB/NumPy [1,2], it ought to be for a good reason, and I don't think this is one. On the other hand, both TensorFlow and PyTorch adhere to this misnomer.
Similarly, I'd rename order to layout, which is the name that CBLAS uses (see [3] and [4]).
Just for the record (and, again, I expect other reviewers and future users to chime in with their preferences), I'm also against calling these objects tensors, but I'm not particularly invested in changing this term (I guess the name has been so widely misused that it is now OK). They are n-dimensional arrays (NumPy gets this right and calls them ndarrays).
[1] https://www.mathworks.com/help/matlab/ref/rank.html
[2] https://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.matrix_rank.html
[3] http://www.netlib.org/lapack/explore-html/de/da0/cblas_8h_a79f58380f0c81f707d29e2e85b81b794.html
[4] https://software.intel.com/en-us/mkl-developer-reference-c-matrix-storage-schemes-for-blas-routines
I mostly agree with Juan on this, although I would prefer to first get a clearer picture of the API relative to others (which I think should explicitly be addressed in documentation somewhere) and related design decisions.
See #63 (comment)
I skimmed this, but I'm having a pretty hard time reviewing it without actually trying to use it. I am grateful that the LAPACK macros expose the different multiplication options. I still think it'd be great to use some macrology (or perhaps just some metadata attached to matrix objects) to avoid computing intermediate conjugate/transposes and instead shovel that computation off to the loop structure in zgemm.
partial review
src/high-level/abstract-tensor.lisp
```lisp
(assert (equalp (shape source) (shape target))
        ()
        "Incompatible shapes. Cannot map tensor of shape ~a to tensor of shape ~a."
        (shape source) (shape target))
```
Would this assert make sense in a :before method, similar to lisp-array above? Or conversely, should the :before method on lisp-array go away?
Probably. Might be something for a future PR to add :before methods.
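For the record, a minimal sketch of what such a :before method might look like. The generic function name copy-into is hypothetical (a stand-in for whichever operation currently carries this assert), and it assumes the existing shape accessor; the point is only that the check moves out of the primary method and applies to every specialization:

```lisp
(defgeneric copy-into (source target)
  (:documentation "Copy the contents of SOURCE into TARGET."))

(defmethod copy-into :before (source target)
  ;; Shape check runs before any primary method.
  (assert (equalp (shape source) (shape target))
          ()
          "Incompatible shapes. Cannot map tensor of shape ~a to tensor of shape ~a."
          (shape source) (shape target)))
```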
Yeah, Cole wrote some nice LAPACK macros indeed. I'd also like to have a more parsimonious way of computing with matrices but I'm not entirely sure that that belongs to this specific PR. A lazy evaluator of sorts (see my comment above on expression templates) could be built on top of the current patch.
round 2. most comments are ignorable, other than adding the test files to magicl-tests.asd so they get run by make test.
I also feel like it would be good to beef the tests up some. maybe that could happen in a follow-on PR.
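Something along these lines is probably what's meant. This is only a sketch, not the actual contents of magicl-tests.asd; the module layout, dependencies, and file names are placeholders for whatever the new test files are called:

```lisp
;; Sketch only: any test file not listed here never gets loaded by `make test`.
(asdf:defsystem #:magicl-tests
  :depends-on (#:magicl)
  :serial t
  :components ((:file "packages")
               (:file "high-level-tests")       ; <- new test files must be
               (:file "abstract-tensor-tests"))) ;    listed as components
```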
Co-Authored-By: Erik Davis <erik@rigetti.com>
layout is not needed because it returns a vector
3/n
Almost done. Odd behavior with inv; otherwise the comments are pretty minor.
```lisp
          (coerce (funcall rand-function) type))))
    (into! f (make-tensor tensor-class shape :layout layout)))))

(defun eye (shape &key value (offset 0) (type *default-tensor-type*) layout)
```
This is exactly what numpy.eye does. I can foresee it being useful when doing the SVD of a matrix and replacing the diagonal factor by another diagonal where all the singular values are one.
Yes, I think that is fine, although in the numpy case it is still strictly limited to matrices. I don't have strong opinions either way, but
```
MAGICL> (eye '(2))
#<VECTOR/DOUBLE-FLOAT (2):
1.000
1.000>
```
feels very weird to me. So I would request that either we adopt the convention suggested above -- namely, that the resulting tensor has entry 1 iff all indices are equal, which implies that (= (zeros '(2)) (eye '(2))) -- or we restrict to matrices.
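To make that convention concrete, here is a small sketch in terms of plain CL arrays, independent of MAGICL's API, of "entry is 1 iff all indices are equal" for an arbitrary shape (the function name is made up):

```lisp
(defun generalized-eye (shape)
  "Return an array of SHAPE with 1 where all indices coincide, 0 elsewhere."
  (let ((result (make-array shape :initial-element 0)))
    (dotimes (i (apply #'min shape) result)
      ;; The only candidate positions are (i i ... i).
      (setf (apply #'aref result (make-list (length shape) :initial-element i))
            1))))
```

For example, (generalized-eye '(3 3)) is the usual identity matrix and (generalized-eye '(3 3 3)) is the order-3 Kronecker delta.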
Co-Authored-By: Erik Davis <erik@rigetti.com>
INV now calls the LAPACK LU routine directly and does not require LU to be defined.
```lisp
(in-package #:magicl)

;; XXX: This is a little overkill for the small amount of compatible
```
This comment kinda summarizes it. It is definitely overkill for now, but it will help when we want to start supporting broadcasting matrices.
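Not part of this PR, but for concreteness, a sketch of the usual NumPy-style broadcasting rule for shapes, which is roughly what this compatibility machinery would eventually need to express (the function is illustrative only):

```lisp
(defun broadcast-shape (shape-a shape-b)
  "Return the broadcast shape of two dimension lists, or NIL if incompatible.
Dimensions are matched from the right; a dimension of 1 stretches."
  (let ((ra (reverse shape-a))
        (rb (reverse shape-b))
        (result '()))
    (loop for i from 0 below (max (length ra) (length rb))
          for da = (or (nth i ra) 1)
          for db = (or (nth i rb) 1)
          do (cond ((= da db) (push da result))
                   ((= da 1)  (push db result))
                   ((= db 1)  (push da result))
                   (t (return-from broadcast-shape nil))))
    result))

;; (broadcast-shape '(2 1 3) '(4 3)) => (2 4 3)
;; (broadcast-shape '(2 3) '(4 3))   => NIL
```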
n/n
Looks good to me. Approved contingent on consensus on the eye behavior.
I'd like to see the generic function norm fixed before approving.
I take that back. We can fix/change that function later.
This PR completely reworks the MAGICL high-level interface, allowing for easier and more intuitive use with more types.