Replies: 2 comments
-
Another thing I will mention: I understand the potential desire not to diverge einsum from the PyTorch implementation, because that could confuse people. However, this new function could just be its own object, which could also take on one other aspect: how reduction happens. In einsum, reduction is done through summation, but obviously there are different ways in which someone might want to reduce. There are then two potential implementations of this new function.

In the first, a reduction type is passed to the object, as in `reduce`, so that everything can be called in one line. Take the following example:

```python
inner_exp = einfunc(x, y, 'i, i j -> j', lambda a, b: a ** 2 - torch.exp(b), 'prod')
final_exp = einfunc(z, inner_exp, 'i, i -> ', lambda a, b: torch.log(a) / b, 'mean')
```

To me, this code makes a lot more sense than just applying the operator over the entire tensor and going from there. The second approach would be to keep the function application and the reduction separate:

```python
inner_exp = einfunc(x, y, 'i, i j -> i j', lambda a, b: a ** 2 - torch.exp(b))
inner_exp = reduce(inner_exp, 'i j -> j', 'prod')
final_exp = einfunc(z, inner_exp, 'i, i -> i', lambda a, b: torch.log(a) / b)
final_exp = reduce(final_exp, 'i -> ', 'mean')
```

I find the first example a lot easier to understand, although I also see the desire to keep the functions separate. Either way, the main functionality is a way to use Einstein notation to apply a function along a tensor in a non-uniform way.
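For clarity, this is my reading of what the one-line form would compute, spelled out in plain PyTorch (the shapes are just illustrative):

```python
import torch

x = torch.randn(4)          # shape (i,)
y = torch.randn(4, 3)       # shape (i, j)
z = torch.rand(3) + 0.1     # same length as inner_exp; kept positive so the log is finite

# 'i, i j -> j' with a custom lambda and a 'prod' reduction:
# broadcast x against y, apply the lambda elementwise, then prod-reduce over i.
inner_exp = (x[:, None] ** 2 - torch.exp(y)).prod(dim=0)     # shape (j,)

# 'i, i -> ' with a 'mean' reduction: apply the lambda elementwise, then mean-reduce everything.
final_exp = (torch.log(z) / inner_exp).mean()                # scalar
```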
-
If anyone is interested in this, I created a repo: https://github.com/Hprairie/einfunc. It only works with PyTorch and is currently limited to Python >= 3.11 and torch >= 2.0. I will add more functionality and support for more Python versions soon. The package might be a little rough around the edges, but I will improve it.
-
First off I want to say how much I love this package.
While I have no idea about the feasibility of making this fast, I have been wishing there were a way to have einsum apply a custom function rather than multiplication. For example, consider an equation that takes a log softmax along the diagonal.
The math isn't important; the point is that I want a row- and column-wise logSoftmax only along the diagonal. Sure, I could just apply it to every value in the tensor, but that seems wasteful. Here is roughly what the code applying the softmax to everything looks like.
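(The shape and the way the row- and column-wise terms are combined below are just illustrative.)

```python
import torch
import torch.nn.functional as F

S = torch.randn(64, 64)  # illustrative square score matrix

# Brute-force version: normalize the entire matrix both ways,
# even though only the diagonal entries are actually needed.
row_ls = F.log_softmax(S, dim=1)                  # row-wise log-softmax over the whole matrix
col_ls = F.log_softmax(S, dim=0)                  # column-wise log-softmax over the whole matrix
diag_vals = torch.diagonal(row_ls) + torch.diagonal(col_ls)   # only the diagonal is kept
```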
Instead, it would be really nice to just have an operator that could apply a lambda function with the Einstein notation. I was thinking about something like this.
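(The exact spelling below is a sketch of a hypothetical API, not an existing einops or torch call; the plain-PyTorch lines show the diagonal-only computation such an operator would enable.)

```python
import torch
import torch.nn.functional as F

S = torch.randn(64, 64)

# Hypothetical syntax: einsum-style notation plus a custom function instead of multiplication,
# so the function only runs on the entries the notation selects:
# diag_vals = einfunc(S, 'i i -> i', lambda a: ...)   # sketch only, not a real call
#
# For the row-wise term, the work it could save can be written out directly:
# log-softmax restricted to the diagonal is just diag(S) minus the row-wise logsumexp.
diag_row_ls = torch.diagonal(S) - torch.logsumexp(S, dim=1)
assert torch.allclose(diag_row_ls, torch.diagonal(F.log_softmax(S, dim=1)))
```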
This would both reduce the number of operations and improve readability, and there are other cases where it would be very useful. The order of the values passed to the lambda would be the order of the tensors in the equation.
Another thing, which would add another degree of complexity but could definitely improve readability, would be to pass an indicator to einsum. The indicator would tell einsum to pass the index of the current element along with its value. While this doesn't apply to the function above, it could be useful for indexing. Here is an example.
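(The flag name and signature below are purely hypothetical; the last two lines are a runnable stand-in for what such a call would compute.)

```python
import torch

x = torch.randn(8, 16)

# Hypothetical: an indicator asks the operator to pass each element's index along with its value,
# so the lambda sees (value, index along 'i') and can use the position directly:
# y = einfunc(x, 'i j -> i j', lambda v, i: v * i, pass_index='i')   # sketch only
#
# A runnable stand-in for what that call would compute (scaling each row by its row index):
idx = torch.arange(x.shape[0]).unsqueeze(1)   # indices along the 'i' axis, shape (8, 1)
y = x * idx
```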
Again, this would reduce the number of operations and also improve readability.
IMO, this feature would be incredibly useful and I think a lot of people would enjoy it. To me, it would make einops a universal tool for working with tensors, as it could do almost any operation on a tensor.
Again thank you for making such a great package, it has completely changed how I program.