
Add fast operator evaluation for tensor-product discretizations #362

Draft · a-alveyblanc wants to merge 65 commits into main

Conversation

@a-alveyblanc (Contributor) commented Sep 6, 2024

Supersedes #313, #354

Adds:

  • Fast operator evaluation for tensor-product discretizations
  • Metadata (tags) relevant to tensor-product discretizations used during (eager and lazy) compilation
  • WADG + Overintegration
  • Evaluation of operators using quadrature

Non-essential transformations will be added in a later PR to keep this (already large) PR manageable. The plan is to break this PR into a sequence of smaller PRs to ease the review process; it will stay open until that process begins.
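
For reviewers skimming the diff, here is a minimal, self-contained sketch of why tensor-product fast evaluation pays off. This is not code from this PR; the shapes, names, and use of plain numpy are illustrative assumptions. A 1D operator is applied along each axis of the reshaped DOF array instead of applying the full Kronecker-product operator.

    import numpy as np

    # Illustrative only: apply a 1D reference operator D along one axis of
    # tensor-product DOFs instead of forming the full Kronecker-product matrix.
    n, nelts = 4, 10
    rng = np.random.default_rng(0)
    D = rng.standard_normal((n, n))            # 1D reference operator
    u = rng.standard_normal((nelts, n, n, n))  # DOFs reshaped to (el, i, j, k)

    # fast path: 1D operator along one axis, O(n**(d+1)) work per element
    du_fast = np.einsum("il,eljk->eijk", D, u)

    # reference path: full Kronecker-product operator, O(n**(2*d)) work per element
    D_full = np.kron(np.kron(D, np.eye(n)), np.eye(n))
    du_full = (D_full @ u.reshape(nelts, n**3).T).T.reshape(nelts, n, n, n)

    assert np.allclose(du_fast, du_full)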

TODOs:

  • Gradient
    • Strong form
    • Weak form
  • Divergence
    • Strong form
    • Weak form
  • Mass
  • Inverse mass
  • Support overintegration + fast operator evaluation
  • Face mass
  • Add generic bilinear form evaluation
  • Rewrite op.py to use generic bilinear form evaluation

cc @MTCam

@inducer (Owner) commented Sep 7, 2024

Should this be marked draft given that there are pending TODOs?

a-alveyblanc marked this pull request as draft September 7, 2024 21:27
# {{{ BilinearForm base class

@dataclass
class _BilinearForm:
@inducer (Owner):
Classes

new_args = []
new_access_descrs = []
for iarg, arg in enumerate(expr.args):
# we assume that the 0th argument will be the one with this tag
@inducer (Owner):
  • So check that?
  • Generally, beef up the matching. (maybe come up with some tools, match statement)
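
A minimal sketch of what such a check might look like, using a match statement. Neither the predicate has_operator_tag nor the error wording is from this PR; both are hypothetical stand-ins for whatever identifies the tagged operator argument.

    # Hypothetical: has_operator_tag(arg) identifies the tagged operator
    # argument among expr.args instead of assuming it sits at position 0.
    match [iarg for iarg, arg in enumerate(expr.args) if has_operator_tag(arg)]:
        case [0]:
            pass  # assumption holds: the 0th argument carries the tag
        case [iarg]:
            raise ValueError(
                f"expected the tagged argument at position 0, found it at {iarg}")
        case _:
            raise ValueError("expected exactly one tagged einsum argument")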

included in the original einsum), and properly reshapes to and from
tensor-product form to apply the 1D mass operator.
"""
def map_einsum(self, expr, *args, **kwargs):
@inducer (Owner):
Don't do this varargs-style.

return expr.copy(args=tuple(new_args))


class InverseMassDistributor(CopyMapperWithExtraArgs):
@inducer (Owner):
  • Split into two mappers (with the inner only handling what's allowed as part of distribution)
  • Bonus points for generalizing to simplex

return expr.copy(args=tuple(new_args))


class RedundantMassTimesMassInverseRemover(CopyMapperWithExtraArgs):
@inducer (Owner):
  • Split in two parts

@inducer (Owner): (image attachment)

grudge/op.py Outdated
out_element_group: NodalElementGroupBase,
- in_element_group: InterpolatoryElementGroupBase):
+ in_element_group: InterpolatoryElementGroupBase) -> ArrayOrContainer:

@keyed_memoize_in(
@inducer (Owner):
Let's have @keyed_memoize_in_first_arg.
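
One possible shape for that decorator is sketched below. This is not an existing pytools API and not code from this PR; the caching strategy (a dict stored on the first argument, keyed by a user-supplied key function of the remaining arguments) is an assumption.

    from functools import wraps

    def keyed_memoize_in_first_arg(key):
        """Hypothetical sketch: cache results on the first positional argument
        (e.g. the array context), keyed by key(*remaining_args)."""
        def decorator(func):
            cache_name = "_memoize_cache_" + func.__qualname__.replace(".", "_")

            @wraps(func)
            def wrapper(first_arg, *args, **kwargs):
                # lazily create the per-object cache on the first argument
                cache = getattr(first_arg, cache_name, None)
                if cache is None:
                    cache = {}
                    setattr(first_arg, cache_name, cache)
                k = key(*args, **kwargs)
                if k not in cache:
                    cache[k] = func(first_arg, *args, **kwargs)
                return cache[k]
            return wrapper
        return decorator

A call site would then apply @keyed_memoize_in_first_arg(key=...) to a function whose first parameter is the array context, in place of passing the container explicitly to @keyed_memoize_in.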

grudge/op.py Outdated
else:
quadrature_rule = in_grp.quadrature_rule()

if isinstance(in_grp, SimplexElementGroupBase):
@inducer (Owner):
Maybe the TP and full versions want to be separate functions?

grudge/op.py Outdated
return get_ref_derivative_mats(out_element_group, in_element_group)


def _strong_scalar_grad(dcoll, dd_in, vec):
assert isinstance(dd_in.domain_tag, VolumeDomainTag)
def _reference_stiffness_transpose_matrices(
@inducer (Owner):
Unify with above via basis_getter?

grudge/op.py Outdated

discr = dcoll.discr_from_dd(dd_in)
actx = vec.array_context
if in_grp == out_grp:
@inducer (Owner):
Suggested change:
- if in_grp == out_grp:
+ if isinstance(in_grp, Quadrature)

?

Suggested change:
- if in_grp == out_grp:
+ if not isinstance(in_grp, Interpolatory)

?

grudge/op.py Outdated
out_grp.shape
)
else:
quadrature_rule = in_grp.quadrature_rule()
@inducer (Owner):
Check that quadrature_rule provides sufficient exactness.
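
One possible form of that check, assuming a modepy-style rule exposing exact_to; the required degree used here (out_grp.order + in_grp.order) is only a placeholder for whatever the bilinear form actually needs.

    # Sketch only: required_degree is a stand-in for the true requirement.
    required_degree = out_grp.order + in_grp.order
    if quadrature_rule.exact_to < required_degree:
        raise ValueError(
            f"quadrature rule is exact only to degree {quadrature_rule.exact_to}, "
            f"but degree {required_degree} is required")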

grudge/op.py Outdated
return get_reference_stiffness_transpose_matrices(out_grp, in_grp)


def reference_mass_matrix(actx: ArrayContext,
@inducer (Owner):
Rewire to use above machinery.

grudge/op.py Outdated
"""

per_group_grads = []
for out_grp, in_grp, vec_i, ijm_i in zip(
@inducer (Owner):
cough comprehension cough
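
For reference, the suggestion amounts to something along these lines; _grad_for_group and the zip operands are hypothetical stand-ins for the actual loop body and iterables in the code.

    # Build the per-group results with a comprehension instead of an append loop.
    per_group_grads = [
        _grad_for_group(actx, out_grp, in_grp, vec_i, ijm_i)
        for out_grp, in_grp, vec_i, ijm_i in zip(
            out_discr.groups, in_discr.groups, vec, inv_jac_mat)
    ]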

use_tensor_product_fast_eval=use_tensor_product_fast_eval
)

if input_group == output_group:
@inducer (Owner):
if isinstance(input_group, Interpolatory):
