Issues: oneapi-src/oneDNN

Label legend:
bug: A confirmed library bug
sighting: Suspicious library behavior. Should be promoted to a bug when confirmed
enhancement: A feature or an optimization request
documentation: A request to change/fix/improve the documentation (codeowner: @oneapi-src/onednn-doc)
RFC: A design document
platform:cpu-aarch64: codeowner @oneapi-src/onednn-cpu-aarch64
platform:gpu-intel: codeowner @oneapi-src/onednn-gpu-intel
platform:gpu-nvidia: codeowner @oneapi-src/onednn-gpu-nvidia
platform:gpu-amd: codeowner @oneapi-src/onednn-gpu-amd

#2213: Jit uni reorders give incorrect results with certain 4d matrices and src zero point != 0
Labels: platform:cpu-aarch64, sighting · opened Nov 12, 2024 by Ryo-not-rio

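A minimal sketch of the kind of configuration this title describes: a 4D reorder with a runtime source zero point. The shapes, layouts, and zero-point value below are illustrative assumptions, not the reporter's exact failing case.

```cpp
#include <cstdint>
#include <iostream>
#include "oneapi/dnnl/dnnl.hpp"

int main() {
    using namespace dnnl;
    engine eng(engine::kind::cpu, 0);
    stream strm(eng);

    // 4D s8 source reordered into an f32 destination with a different layout.
    memory::dims dims = {2, 3, 4, 5};
    memory::desc src_md(dims, memory::data_type::s8, memory::format_tag::abcd);
    memory::desc dst_md(dims, memory::data_type::f32, memory::format_tag::acdb);
    memory src_m(src_md, eng), dst_m(dst_md, eng);

    // Constant source data makes the expected result easy to check by hand.
    auto *src = static_cast<int8_t *>(src_m.get_data_handle());
    for (int i = 0; i < 2 * 3 * 4 * 5; ++i) src[i] = 7;

    // Non-zero source zero point, passed at execution time (mask 0 = one common value).
    primitive_attr attr;
    attr.set_zero_points_mask(DNNL_ARG_SRC, 0);
    memory zp_m({{1}, memory::data_type::s32, memory::format_tag::x}, eng);
    *static_cast<int32_t *>(zp_m.get_data_handle()) = 2;

    reorder::primitive_desc rpd(eng, src_md, eng, dst_md, attr);
    reorder(rpd).execute(strm,
            {{DNNL_ARG_FROM, src_m}, {DNNL_ARG_TO, dst_m},
             {DNNL_ARG_ATTR_ZERO_POINTS | DNNL_ARG_SRC, zp_m}});
    strm.wait();

    // Every destination element should equal 7 - 2 = 5; the report concerns
    // cases where the JIT reorder produces something else.
    std::cout << static_cast<float *>(dst_m.get_data_handle())[0] << "\n";
    return 0;
}
```
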
#2200: 3.6.1: please stop running benchmarking with parallelisation
Labels: question · opened Nov 6, 2024 by kloczek

#2196: Supported matmul data types
Labels: documentation, question · opened Oct 31, 2024 by jinz2014

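One way to answer this kind of question for a particular build is to try creating the matmul primitive descriptor and see whether it succeeds. A minimal sketch; the bf16 × bf16 → f32 combination and the shapes are assumptions chosen only for illustration.

```cpp
#include <iostream>
#include "oneapi/dnnl/dnnl.hpp"

int main() {
    using namespace dnnl;
    engine eng(engine::kind::cpu, 0);

    // Batched matmul: src [2, 16, 32] x weights [2, 32, 64] -> dst [2, 16, 64].
    memory::desc src_md({2, 16, 32}, memory::data_type::bf16, memory::format_tag::abc);
    memory::desc wei_md({2, 32, 64}, memory::data_type::bf16, memory::format_tag::abc);
    memory::desc dst_md({2, 16, 64}, memory::data_type::f32, memory::format_tag::abc);

    try {
        // Creation fails for data-type combinations this engine/build does not support.
        matmul::primitive_desc pd(eng, src_md, wei_md, dst_md);
        std::cout << "bf16 * bf16 -> f32 matmul is supported on this engine\n";
    } catch (const error &e) {
        std::cout << "combination not supported: " << e.what() << "\n";
    }
    return 0;
}
```
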
#2185: [ARM] Support fp16 data type in JIT Reorder kernel
Labels: enhancement, help wanted, platform:cpu-aarch64 · opened Oct 28, 2024 by dmitry-gorokhov

#2175: Bug in memory_desc_init_by_tag: Incorrect Differentiation Between Memory Tags abcd and acbd
Labels: platform:cpu-aarch64, sighting · opened Oct 21, 2024 by taoye9

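For reference, a minimal sketch of what distinguishes the two tags (the dimensions are an arbitrary example): abcd and acbd describe the same logical dimensions with different physical strides, so the resulting descriptors should not compare equal.

```cpp
#include <iostream>
#include "oneapi/dnnl/dnnl.hpp"

int main() {
    using namespace dnnl;
    memory::dims dims = {2, 3, 4, 5};

    // Same logical dimensions, different physical layouts.
    memory::desc md_abcd(dims, memory::data_type::f32, memory::format_tag::abcd);
    memory::desc md_acbd(dims, memory::data_type::f32, memory::format_tag::acbd);

    auto print_strides = [](const char *name, const memory::desc &md) {
        std::cout << name << " strides:";
        for (auto s : md.get_strides()) std::cout << ' ' << s;
        std::cout << '\n';
    };
    print_strides("abcd", md_abcd); // expected: 60 20 5 1
    print_strides("acbd", md_acbd); // expected: 60 5 15 1 (b and c swapped physically)

    std::cout << std::boolalpha
              << "descriptors equal: " << (md_abcd == md_acbd) << '\n'; // expected: false
    return 0;
}
```
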
#2167: macOS CI release-mode build issue with gcc-14
Labels: platform:cpu-aarch64, sighting · opened Oct 15, 2024 by theComputeKid

#2165: Extend support for JIT Backward Convolution Operators with ARM SVE 128bit
Labels: enhancement, platform:cpu-aarch64 · opened Oct 14, 2024 by snadampal

#2114: How to modify oneDNN to enable GEMM operation acceleration on your own hardware
Labels: platform:cpu-aarch64, question · opened Sep 24, 2024 by nanzh-19

#2081: [ARM] Support 8bit/4bit weights decompression for Matmul primitive
Labels: enhancement, help wanted, platform:cpu-aarch64 · opened Sep 4, 2024 by dmitry-gorokhov

#2076: RFC: Integrate Arm Compute Library (ACL) as an in-tree module into oneDNN
Labels: question, RFC · opened Sep 3, 2024 by snadampal

#2069: [ARM] Support 32-bit CPUs within ACL integration
Labels: enhancement, help wanted, platform:cpu-aarch64 · opened Sep 2, 2024 by dmitry-gorokhov

#2067: [ARM] Support FP16 post-ops fusion into ACL kernels
Labels: enhancement, help wanted, platform:cpu-aarch64 · opened Aug 30, 2024 by dmitry-gorokhov

#2035: Build with SYCL fails using intel/llvm compiler
Labels: sighting · opened Aug 13, 2024 by dvrogozh

#2008: brg:sve_256 fails benchdnn accuracy tests
Labels: bug, help wanted, platform:cpu-aarch64 · opened Jul 24, 2024 by jondea

#2007: brgconv:sve_256 uses a lot of memory
Labels: help wanted, platform:cpu-aarch64, sighting · opened Jul 24, 2024 by jondea

#1971: New/other Matrix multiplication algorithm implementation
Labels: enhancement, help wanted, platform:cpu-aarch64 · opened Jun 20, 2024 by vineel96

#1961: GPU tests pass when they probably shouldn't
Labels: bug, help wanted, platform:gpu-intel · opened Jun 13, 2024 by nwnk

#1960: Generic OpenCL kernels are broken
Labels: enhancement, help wanted, platform:gpu-intel · opened Jun 13, 2024 by nwnk

#1944: batchnorm requires consistent in- and output mem format_tags
Labels: sighting · opened Jun 4, 2024 by IngmarVoigt2

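A minimal sketch of the situation the title describes (shape, data type, and the specific tag pair are assumptions): request different format tags for the batch normalization source and destination and check whether the primitive descriptor can be created at all.

```cpp
#include <iostream>
#include "oneapi/dnnl/dnnl.hpp"

int main() {
    using namespace dnnl;
    engine eng(engine::kind::cpu, 0);

    // Same logical shape, but src requested as nchw and dst as nhwc.
    memory::dims dims = {1, 16, 8, 8};
    memory::desc src_md(dims, memory::data_type::f32, memory::format_tag::nchw);
    memory::desc dst_md(dims, memory::data_type::f32, memory::format_tag::nhwc);

    try {
        batch_normalization_forward::primitive_desc pd(eng,
                prop_kind::forward_inference, src_md, dst_md, 1.e-5f,
                normalization_flags::use_scale | normalization_flags::use_shift);
        std::cout << "mixed src/dst format tags accepted\n";
    } catch (const error &e) {
        std::cout << "rejected: " << e.what() << "\n";
    }
    return 0;
}
```
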
#1908: [ACL] 3D convolution kernel NEConv3D is not integrated
Labels: enhancement, help wanted, platform:cpu-aarch64 · opened May 10, 2024 by alvoron

#1898: [Proposal] Add cpu alloc/free callback to support customized memory allocator APIs
Labels: enhancement · opened May 7, 2024 by xuhancn

#1788: GEMM API for efficient LLM inference with W8A16
Labels: enhancement, help wanted, platform:cpu-aarch64 · opened Jan 20, 2024 by oleotiger

#1766: Add support for dimension selection for layer normalization
Labels: enhancement, help wanted · opened Dec 6, 2023 by WilliamTambellini

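For context on this request, a minimal sketch of today's behavior (shape and data types are an assumed example): the layer normalization primitive normalizes over the last logical dimension only, so the mean/variance statistics cover every dimension except the last; the request is to make the normalized dimensions selectable.

```cpp
#include <iostream>
#include "oneapi/dnnl/dnnl.hpp"

int main() {
    using namespace dnnl;
    engine eng(engine::kind::cpu, 0);

    // src [batch, tokens, channels]; normalization runs over the last dimension.
    memory::desc src_md({2, 128, 768}, memory::data_type::f32, memory::format_tag::abc);
    memory::desc dst_md({2, 128, 768}, memory::data_type::f32, memory::format_tag::abc);

    layer_normalization_forward::primitive_desc pd(eng,
            prop_kind::forward_training, src_md, dst_md, 1.e-5f,
            normalization_flags::use_scale | normalization_flags::use_shift);

    // Statistics cover every dimension except the last one: expected [2, 128].
    std::cout << "mean dims:";
    for (auto d : pd.mean_desc().get_dims()) std::cout << ' ' << d;
    std::cout << '\n';
    return 0;
}
```
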
#1732: [nvidia|amd] Add missing synchronization
Labels: bug, help wanted, platform:gpu-amd, platform:gpu-nvidia · opened Oct 3, 2023 by densamoilov

#1725: [nvidia] batch normalization primitive fails correctness check
Labels: bug, platform:gpu-nvidia · opened Sep 14, 2023 by dzarukin