Issues: openxla/stablehlo
- #2176: Figure out the future of `dynamic_slice` vs `real_dynamic_slice` [Dynamism, Process, Spec] (opened Apr 8, 2024 by ghpvnist)
- #1716: Support for global device IDs in operation collective permute [Spec] (opened Aug 7, 2023 by sogartar)
- #1691: Decide on unifying specification and implementation of `QuantizedTensorType` [Spec] (opened Jul 19, 2023 by sdasgup3)
- #1687: Audit the use of `constant(value, element_type)` for quantized types [Spec] (opened Jul 13, 2023 by sdasgup3)
- #1683: Clarify which inputs to an op are input values vs attributes in the spec [Spec] (opened Jul 12, 2023 by bchetioui)
- #1672: Determine whether i1 <=> other element type conversions in `BitcastConvertOp` are supported [Spec] (opened Jul 6, 2023 by ghpvnist)
- #1607: Type mismatch between spec.md and tablegen specification for `rng_bit_generator` op [Spec] (opened Jun 9, 2023 by sdasgup3)
- #1576: Consider merging `UniformDequantizeOp` and `UniformQuantizeOp` into `ConvertOp` [Quantization, Spec] (opened Jun 4, 2023 by burmako)
- #1569: Explore the representation of finer quantization granularities than per-axis [Quantization, Spec] (opened Jun 2, 2023 by sdasgup3)
- #1551: Align spec and implementation of reduction operations [Interpreter, Spec] (opened May 29, 2023 by burmako)
- #1514: Consider adding TopK [Migrate from MHLO (issue or PR that migrates from an MLIR-HLO commit), Spec] (opened May 21, 2023 by burmako)
- #1491: Add support for GPTQ-style INT3/4/5 quantization of LLMs like LLaMA [Quantization, Spec] (opened May 16, 2023 by powderluv)
- #1460: Explore adding support for big-endian architectures [Interpreter, Spec] (opened May 3, 2023 by ghpvnist)
- #1420: Consider adding Sharding, SPMDFullToShardShape and SPMDShardToFullShape ops [Spec] (opened Apr 19, 2023 by burmako)