# Issue Metrics
Metric | Average | Median | 90th percentile |
---|---|---|---|
Time to first response | 7:44:40 | 0:35:16 | 18:10:45 |
Time to close | 2 days, 13:18:18 | 6:56:41 | 7 days, 23:57:10 |
Time to answer | None | None | None |
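The summary statistics above (average, median, 90th percentile of each duration) can be reproduced from the per-item values listed in the table further down. A minimal sketch, using a small hypothetical sample of time-to-first-response durations rather than the full dataset, and a simple nearest-rank percentile (the action's actual aggregation code may differ):

```python
from datetime import timedelta
from statistics import median

# Hypothetical sample of time-to-first-response durations; the real
# report aggregates the per-item values in the table below.
durations = [
    timedelta(seconds=11),
    timedelta(minutes=35, seconds=16),
    timedelta(hours=7, minutes=44),
    timedelta(days=2, hours=18),
]

def percentile(values, pct):
    """Nearest-rank percentile of a list of timedeltas."""
    ordered = sorted(values)
    rank = round(pct / 100 * (len(ordered) - 1))
    return ordered[max(0, min(len(ordered) - 1, rank))]

# timedelta supports sum (with a zero start value) and division,
# so the aggregates can be computed directly on durations.
avg = sum(durations, timedelta()) / len(durations)
med = median(durations)
p90 = percentile(durations, 90)
print(avg, med, p90)
```

`statistics.median` works on `timedelta` values because the even-length case only needs addition and division by 2, both of which `timedelta` supports.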
Metric | Count |
---|---|
Number of items that remain open | 7 |
Number of items closed | 132 |
Number of most active mentors | 0 |
Total number of items created | 139 |
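The counts above should balance: every created item is either still open or closed. A trivial sanity check on the reported figures:

```python
# Consistency check on the counts table: open + closed = created.
open_items, closed_items, created = 7, 132, 139
assert open_items + closed_items == created
print("counts are consistent")
```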
Title | URL | Author | Time to first response | Time to close | Time to answer |
---|---|---|---|---|---|
memory_planning algos take the specs as inputs instead of calculating them themselves | pytorch/executorch#9952 | JacobSzwejbka | 0:00:11 | 2 days, 18:15:34 | None |
[ET-VK][ez] Allow logit linear layer to be lowered to Vulkan | pytorch/executorch#9951 | SS-JIA | 0:02:10 | 0:03:49 | None |
[ET-VK][ez] Make squeeze insertion requirements more strict | pytorch/executorch#9950 | SS-JIA | 0:02:03 | 0:03:17 | None |
[ET-VK] Improve packing format for int4 linear operator + misc improvements | pytorch/executorch#9949 | SS-JIA | 0:00:48 | 0:02:45 | None |
refactor-attention | pytorch/executorch#9948 | lucylq | None | None | None |
[Release 0.6] update torchao pin | pytorch/executorch#9947 | metascroy | 16:46:41 | 20:51:37 | None |
Expose L4 ops to ExecuTorch Client and add MWA to ExecuTorch Client | pytorch/executorch#9946 | derekxu | 0:00:10 | 4:13:05 | None |
[Executorch][llama] Enable quantized sdpa | pytorch/executorch#9945 | kimishpatel | 0:00:25 | 2 days, 16:23:27 | None |
[Executorch][llama] Renamed quantized_kv_cache to custom_kv_cache | pytorch/executorch#9944 | kimishpatel | 0:00:31 | 2 days, 16:23:27 | None |
[Executorch][SDPA] Refactor + Make quantized sdpa handle sequence at dim 1 or 2 | pytorch/executorch#9943 | kimishpatel | 0:00:32 | 2 days, 16:23:24 | None |
Fix naming convention in quantizer | pytorch/executorch#9941 | mcremon-meta | 0:00:10 | 19:22:08 | None |
Minor fixes on Intro How It Works | pytorch/executorch#9939 | mergennachin | 0:05:14 | 20:05:26 | None |
[ez][release blocker fix] Insert linalg_vector_norm into decomp table used for Edge export | pytorch/executorch#9938 | SS-JIA | 2:06:48 | 1 day, 21:40:10 | None |
introducing filter to etdumpgen | pytorch/executorch#9937 | Gasoonjia | 0:00:10 | 8:45:10 | None |
[Easy] Fix list numbering typo | pytorch/executorch#9936 | Jack-Khuu | 3:34:15 | 3:48:57 | None |
Allow emitting mutable buffer names in schema | pytorch/executorch#9935 | JacobSzwejbka | 0:00:10 | 23:15:29 | None |
per_channel_group can't be dynamic | pytorch/executorch#9934 | mcr229 | 0:00:14 | 3:48:34 | None |
Add a path to use quantized gemm from torchao in sdpa | pytorch/executorch#9933 | kimishpatel | 0:00:15 | 3:59:04 | None |
Arm backend: Convert assert to raise ValueError in op_clamp | pytorch/executorch#9932 | Sebastian-Larsson | 3 days, 17:58:19 | 3 days, 17:59:03 | None |
Arm backend: Convert assert to raise ValueError for comparison operators | pytorch/executorch#9931 | Sebastian-Larsson | 1:24:51 | 1:25:24 | None |
Arm backend: Convert assert to throw ValueError in op_exp | pytorch/executorch#9929 | Sebastian-Larsson | 3:49:08 | 3:49:28 | None |
Arm backend: Add support for sqrt | pytorch/executorch#9928 | fumchin | 0:09:33 | 3 days, 23:05:55 | None |
Qualcomm AI Engine Direct - Fix mobilebert finetune script | pytorch/executorch#9927 | shewu-quic | 2:16:22 | 14:19:40 | None |
Arm backend: Add pytest.mark.flaky on U85 tests in test_mm.py | pytorch/executorch#9926 | martinlsm | 0:20:18 | 6:36:29 | None |
[WIP] Devtool end-to-end tests | pytorch/executorch#9925 | HonestDeng | None | None | None |
Support slice ops with default start | pytorch/executorch#9923 | pssrawat | 0:00:14 | 1 day, 5:48:30 | None |
Reapply "Depend on extension/threadpool, not thread_parallel_interface, in buck (pytorch#9511)" | pytorch/executorch#9922 | kirklandsign | 2:34:09 | 2:35:00 | None |
Update Executorch ops registration for rms_norm | pytorch/executorch#9920 | Vysarat | 0:00:09 | 2 days, 21:56:28 | None |
[ET-VK][ez] Allow logit linear layer to be lowered to Vulkan | pytorch/executorch#9918 | SS-JIA | 0:00:30 | 3 days, 0:44:37 | None |
[ET-VK][ez] Make squeeze insertion requirements more strict | pytorch/executorch#9917 | SS-JIA | 0:00:28 | 3 days, 0:44:28 | None |
export llama with lora | pytorch/executorch#9916 | lucylq | None | None | None |
Fix mobile bert fine tune | pytorch/executorch#9915 | cccclai | 0:00:10 | None | None |
port hardtanh and add hardtanh test | pytorch/executorch#9914 | zonglinpeng | 0:00:12 | 10 days, 23:49:44 | None |
[Executorch][sdpa] Setup the structure to enable quantized gemms for sdpa | pytorch/executorch#9912 | pytorchbot | 2:46:32 | 2:46:42 | None |
[Executorch][SDPA] Remove slice creation | pytorch/executorch#9911 | pytorchbot | 2:41:27 | 2:42:59 | None |
[Executorch][sdpa] Add accidentaly removed flash attentiona args check | pytorch/executorch#9910 | pytorchbot | 2:39:35 | 2:41:11 | None |
[Executorch][sdpa] Refactor sdpa into impl and op | pytorch/executorch#9909 | pytorchbot | 2:02:20 | 2:03:56 | None |
Arm backend: Add TOSA support for gt.Scalar and lt.Scalar | pytorch/executorch#9908 | YufengShi-dudu | 6 days, 16:35:07 | 6 days, 16:40:43 | None |
Arm backend: Reduce arm_executor_runner binary size | pytorch/executorch#9907 | AdrianLundell | 4:11:37 | 4:54:15 | None |
Arm backend: Skip instead of returning in Llama test | pytorch/executorch#9906 | martinlsm | 4:23:35 | 5:06:21 | None |
Arm backend: Convert assert to throw ValueError in tosa_backend | pytorch/executorch#9905 | Sebastian-Larsson | 1:36:44 | 1:37:03 | None |
Arm backend: Remove node vistor for full | pytorch/executorch#9904 | gggekov | 6:00:55 | 9 days, 22:51:50 | None |
Arm backend: Add support for Leaky ReLU | pytorch/executorch#9903 | gggekov | 2:06:54 | 2:09:53 | None |
Arm backend: Handle memory mode corectly in test_model.py | pytorch/executorch#9902 | zingo | 4:41:44 | 4:42:38 | None |
Arm backend: Remove use of TOSA_DBG_VERBOSE | pytorch/executorch#9901 | oscarandersson8218 | 1:12:13 | 1:12:48 | None |
Arm backend: Align MobileNetV2 with MobileNetV3 unittest | pytorch/executorch#9900 | oscarandersson8218 | 1:11:21 | 1:13:17 | None |
Arm backend: Add pre-push checks for op tests | pytorch/executorch#9899 | AdrianLundell | 8:56:37 | 4 days, 1:48:40 | None |
Arm backend: Add expected failure for test_torch_functions.py | pytorch/executorch#9898 | SaoirseARM | 1:18:36 | 1:19:32 | None |
Arm backend: Convert assert to throw TypeError in op_add | pytorch/executorch#9897 | Sebastian-Larsson | 1:38:49 | 1:39:21 | None |
Arm backend: Convert assert to throw ValueError in op_tanh | pytorch/executorch#9896 | Sebastian-Larsson | 3:35:37 | 3:36:14 | None |
Arm backend: Add info message to assertions in fold_qdq_with_annotated_qparams_pass | pytorch/executorch#9895 | Sebastian-Larsson | 3:35:54 | 3:36:21 | None |
Add a namespace for ATen mode | pytorch/executorch#9894 | larryliu0820 | 0:00:36 | 5 days, 20:16:26 | None |
Arm backend: Add FuseEqualPlaceholdersPass | pytorch/executorch#9893 | AdrianLundell | 4:18:04 | 25 days, 8:31:29 | None |
[ET-VK] Minor performance improvements to native layer norm. | pytorch/executorch#9892 | trivedivivek | 0:00:08 | None | None |
Update release.yaml | pytorch/executorch#9891 | metascroy | 17:47:14 | 3 days, 18:08:57 | None |
Reapply "Depend on extension/threadpool, not thread_parallel_interface, in buck (pytorch#9511)" | pytorch/executorch#9890 | swolchok | 0:00:08 | 21:38:18 | None |
[Executorch][sdpa] Setup the structure to enable quantized gemms for sdpa | pytorch/executorch#9889 | kimishpatel | 0:00:29 | 15:21:23 | None |
[Executorch][SDPA] Remove slice creation | pytorch/executorch#9888 | kimishpatel | 0:00:34 | 15:21:20 | None |
[Executorch][sdpa] Add accidentaly removed flash attentiona args check | pytorch/executorch#9887 | kimishpatel | 0:00:21 | 15:21:15 | None |
[Executorch][sdpa] Refactor sdpa into impl and op | pytorch/executorch#9886 | kimishpatel | 0:00:21 | 15:21:11 | None |
Update test build instructions in kernels readme | pytorch/executorch#9885 | manuelcandales | 0:00:10 | 2:35:58 | None |
Create release.yml | pytorch/executorch#9884 | metascroy | 0:37:07 | 0:48:40 | None |
[ET-VK] Improve packing format for int4 linear operator + misc improvements | pytorch/executorch#9883 | SS-JIA | 0:00:12 | 4 days, 0:45:02 | None |
[WIP] Mimi 4-bit quant on transformer and 8-bit on conv | pytorch/executorch#9882 | iseeyuan | 1 day, 1:40:33 | None | None |
Support QK norm in static attention | pytorch/executorch#9879 | sxu | 0:00:10 | 6:16:03 | None |
Adding bmm, mm, view_copy, slice_copy, split_with_sizes_copy optimizations | pytorch/executorch#9877 | cad-audio | 16:25:06 | 22 days, 10:01:29 | None |
NXP backend: Add NeutronQuantizer | pytorch/executorch#9876 | skywall | 22:46:55 | 11 days, 4:40:37 | None |
Arm backend: Convert assert to throw ValueError in op_sigmoid | pytorch/executorch#9875 | Sebastian-Larsson | 4:48:49 | 4:49:06 | None |
Arm backend: Limit number of build jobs | pytorch/executorch#9874 | mansnils | 1:50:16 | 1:50:33 | None |
Arm backend: Improve pre-push hook | pytorch/executorch#9873 | Sebastian-Larsson | 3:11:51 | 3:12:42 | None |
[ET-VK] Replace Uniform buffers with push constants for native layer norm op | pytorch/executorch#9872 | pytorchbot | 5:42:30 | 5:43:39 | None |
[ET-VK] Adding round op support. | pytorch/executorch#9871 | pytorchbot | 5:41:52 | 5:43:28 | None |
[ET-VK] Adding all tensor packing support for native layer norm. | pytorch/executorch#9870 | pytorchbot | 5:41:28 | 5:43:16 | None |
Arm backend: Add where.self | pytorch/executorch#9869 | Sebastian-Larsson | 4:31:18 | 4:31:40 | None |
Fix breaks from googletest upgrade (vulkan_compute_api_test) (pytorch#9760) | pytorch/executorch#9868 | hershi | 0:00:14 | None | None |
Fix breaks from googletest upgrade (vulkan_compute_api_test) (pytorch#9760) | pytorch/executorch#9867 | hershi | 0:00:11 | None | None |
Arm backend: Convert assert to throw TypeError in arm_pass_utils | pytorch/executorch#9866 | Sebastian-Larsson | 4:59:00 | 5:00:08 | None |
Add Intel macOS check; update docs | pytorch/executorch#9865 | keyprocedure | None | None | None |
forward fix | pytorch/executorch#9864 | cccclai | 0:00:32 | 17:32:59 | None |
[do not land] lora experiment | pytorch/executorch#9863 | lucylq | None | None | None |
Release/0.6 | pytorch/executorch#9862 | metascroy | None | None | None |
Update android docs | pytorch/executorch#9861 | pytorchbot | 0:01:01 | 0:01:10 | None |
Improve android related docs | pytorch/executorch#9860 | pytorchbot | 0:03:52 | 0:03:58 | None |
Fix link in using-executorch-building-from-source.md | pytorch/executorch#9859 | pytorchbot | 0:08:33 | 0:08:41 | None |
Fix scalar type logging from pytorch#9751 | pytorch/executorch#9845 | swolchok | 20:20:52 | 20:29:41 | None |
RMSNorm support - Executorch | pytorch/executorch#9844 | ThomasJannaud | 0:00:13 | 1 day, 3:03:16 | None |
Update static attention IO manager to use "smart mask" style update | pytorch/executorch#9843 | sxu | 0:00:10 | 9:55:05 | None |
Save some size in dtype_util when dtype selective build is not in use | pytorch/executorch#9842 | swolchok | 1 day, 0:09:57 | 20 days, 7:56:07 | None |
Migrate elementwise_util callers to the variants with out_dtypes in template arguments | pytorch/executorch#9841 | swolchok | 1 day, 0:09:08 | 20 days, 7:55:47 | None |
Add build_optimized_size_test.sh | pytorch/executorch#9840 | swolchok | 0:37:26 | 20 days, 2:02:48 | None |
set INTERFACE_LINK_LIBARIES for extension_threadpool | pytorch/executorch#9839 | swolchok | 1:24:38 | 2 days, 0:05:44 | None |
Fix build gating on optimized_portable_kernels | pytorch/executorch#9838 | swolchok | 1 day, 0:09:54 | 2 days, 0:05:13 | None |
Arm backend: Add ERF operator | pytorch/executorch#9836 | maddun01 | 14:32:02 | 1 day, 20:53:12 | None |
Arm backend: Disable test case test_w2l_arm.py::test_w2l_u85_BI | pytorch/executorch#9835 | martinlsm | 1:48:51 | 1:49:22 | None |
Arm Backend: Fixes related to pytorch updates | pytorch/executorch#9834 | SaoirseARM | 1:18:03 | 1:20:24 | None |
Make Android Module thread-safe and prevent destruction during inference | pytorch/executorch#9833 | GregoryComer | 0:00:10 | 9:32:01 | None |
introduce DelegateDebugIntId | pytorch/executorch#9832 | Gasoonjia | 0:00:12 | 14:03:35 | None |
[ET-VK] Replace Uniform buffers with push constants for native layer norm op | pytorch/executorch#9831 | trivedivivek | 0:00:09 | 1 day, 5:40:01 | None |
DO NOT COMMIT test for u55 + mv2 | pytorch/executorch#9830 | digantdesai | 2 days, 3:13:53 | None | None |
Save some size in dtype_util when dtype selective build is not in use | pytorch/executorch#9829 | swolchok | None | None | None |
Migrate elementwise_util callers to the variants with out_dtypes in template arguments | pytorch/executorch#9828 | swolchok | 0:34:46 | None | None |
Add build_optimized_size_test.sh | pytorch/executorch#9827 | swolchok | None | None | None |
set INTERFACE_LINK_LIBARIES for extension_threadpool | pytorch/executorch#9826 | swolchok | None | None | None |
Fix build gating on optimized_portable_kernels | pytorch/executorch#9825 | swolchok | 0:35:47 | None | None |
[ExecuTorch][to_backend] Enable to_backend API to leverage preprocess_multimethod | pytorch/executorch#9824 | mcr229 | 5 days, 17:28:38 | 14 days, 16:38:29 | None |
[Executorch][to_backend] Introduce preprocess_multimethod | pytorch/executorch#9823 | mcr229 | 5 days, 17:21:19 | 14 days, 16:40:11 | None |
[ExecuTorch][to_backend] add AllNodePartitioner | pytorch/executorch#9822 | mcr229 | 1 day, 5:06:27 | 2 days, 21:35:39 | None |
[Executorch][to_backend] Introduce preprocess_multimethod | pytorch/executorch#9821 | mcr229 | None | None | None |
[ExecuTorch][to_backend] add AllNodePartitioner | pytorch/executorch#9820 | mcr229 | None | None | None |
Expose symbols on macos in the xplat pytorch stack | pytorch/executorch#9819 | stepanhruda | 0:00:09 | 2 days, 23:16:43 | None |
A short term fix for editable mode install failed to import root level module such as exir | pytorch/executorch#9818 | larryliu0820 | 19:05:37 | 22:53:38 | None |
Support multiple prompts in the runner | pytorch/executorch#9817 | cccclai | 0:00:09 | 1 day, 2:11:04 | None |
[Bug Fix] Fix padding when running in NHWC | pytorch/executorch#9816 | pytorchbot | 16:57:37 | 19:37:19 | None |
[Executorch][custo ops] Add prototype defs for custom op | pytorch/executorch#9815 | kirklandsign | 0:18:11 | 3:25:46 | None |
[ET-VK] Moving repeat functionality from copy_packed_dim_offset into a separate repeat shader. | pytorch/executorch#9814 | pytorchbot | 0:03:21 | 0:04:53 | None |
[ET-VK] Adding all tensor packing support for repeat op. | pytorch/executorch#9813 | pytorchbot | 0:02:19 | 0:02:46 | None |
[ET-VK] Simplify lane offset copy logic in copy_packed_dim_offset shader. | pytorch/executorch#9812 | pytorchbot | 0:01:48 | 0:01:59 | None |
[ExecuTorch][to_backend] Enable to_backend API to leverage preprocess_all | pytorch/executorch#9811 | mcr229 | 0:00:17 | None | None |
[ExecuTorch][to_backend] Introduce preprocess_all method to backend details | pytorch/executorch#9810 | mcr229 | 0:00:18 | None | None |
Change training pybind extension_module linkage to static | pytorch/executorch#9809 | larryliu0820 | 0:06:37 | 1:31:58 | None |
fix qnn export | pytorch/executorch#9808 | cccclai | 0:00:13 | 4:56:49 | None |
Try upgrading to CoreML Tools 8.2 | pytorch/executorch#9807 | pytorchbot | 3:17:29 | 2 days, 22:27:09 | None |
[ExecuTorch][Llama] Change runner to enable chunked prefill | pytorch/executorch#9805 | pytorchbot | 0:03:55 | 0:04:25 | None |
[ET-VK] Efficient tiled int8 matmul | pytorch/executorch#9804 | pytorchbot | 0:27:41 | 0:27:55 | None |
[ET-VK] Store weights transposed for int8 linear | pytorch/executorch#9803 | pytorchbot | 0:26:36 | 0:26:51 | None |
Refactor internal switch cases | pytorch/executorch#9802 | manuelcandales | 0:00:11 | 13 days, 13:42:43 | None |
[Core ML] Improve error logging | pytorch/executorch#9801 | cymbalrush | 0:54:13 | 13 days, 9:49:23 | None |
Try upgrading to CoreML Tools 8.2 | pytorch/executorch#9799 | mergennachin | 0:00:49 | 2:52:37 | None |
[0.6 Release] Add 0.6 to ET docs index | pytorch/executorch#9798 | mergennachin | 6:55:27 | 6:56:41 | None |
Replace third-party/pkg_resources:pkg_resources with third-party/pypi/setuptools:setuptools | pytorch/executorch#9797 | kkolur76 | 0:00:31 | None | None |
Arm backend: Add additional unsupported checks to Ethos-U55 backend | pytorch/executorch#9796 | Erik-Lundell | 0:01:58 | 1:21:40 | None |
Arm backend: Change _is_ok_for_quantization to support output check | pytorch/executorch#9795 | Sebastian-Larsson | 1:30:42 | 1:31:13 | None |
Arm Backend: temp fix for flaky eq op test | pytorch/executorch#9794 | fumchin | 1:34:46 | 1:35:24 | None |
Add code structure and a few other links to CONTRIBUTING.md | pytorch/executorch#9793 | larryliu0820 | 6:08:44 | 6 days, 11:14:15 | None |
[ET-VK] Adding round op support. | pytorch/executorch#9792 | trivedivivek | 0:00:15 | 2 days, 5:06:06 | None |
Allow passing in calibration data to convert_pt2 | pytorch/executorch#9791 | mcremon-meta | 0:00:09 | 19:05:12 | None |
Bump nightly version in xcode projects | pytorch/executorch#9790 | shoumikhin | 0:54:33 | 3:44:17 | None |
Add a util to print tag easily | pytorch/executorch#9789 | larryliu0820 | 0:00:09 | 5:11:19 | None |
Update android docs | pytorch/executorch#9788 | kirklandsign | 0:40:05 | 0:40:41 | None |
This report was generated with the Issue Metrics Action.
Search query used to find these items: `repo:pytorch/executorch is:pr created:2025-04-01..2025-04-07`
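The search query above can be reconstructed and rerun by hand. A minimal sketch that only builds and prints the query string (no network access); the printed string can then be pasted into the GitHub web search UI, or passed to the `gh` CLI if it is installed and authenticated:

```shell
# Reconstruct the search query used by the report.
REPO="pytorch/executorch"
RANGE="2025-04-01..2025-04-07"
QUERY="repo:${REPO} is:pr created:${RANGE}"
echo "$QUERY"
```

With the GitHub CLI, an equivalent search would be along the lines of `gh search prs --repo pytorch/executorch --created 2025-04-01..2025-04-07`.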