Add DirectX12 support to kernel #22
base: main
Conversation
Added support as a module (so it won't be installed for all platforms by default). This should be enabled for all VMs that run on Windows, but at the very minimum it should support Hyper-V.
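As a rough sketch (not taken from this PR's actual config changes), enabling the driver as a module might look like the following, assuming the out-of-tree dxgkrnl patches are already applied and expose a DXGKRNL Kconfig symbol; mainline 5.18/5.19 does not have one:

```sh
# Hypothetical sketch: assumes a kernel source tree with the dxgkrnl
# patches applied, providing a DXGKRNL Kconfig symbol (not in mainline).
cd linux-src
scripts/config --enable HYPERV    # dxgkrnl sits under the Hyper-V drivers
scripts/config --module DXGKRNL   # build as a module, not built-in
make olddefconfig                 # let Kconfig resolve the new symbol
make -j"$(nproc)" modules         # dxgkrnl.ko lands under drivers/hv/dxgkrnl/
```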
Eh, that config option does not exist in the 5.18 kernel, at least...
Hmmm, that's weird. I could have sworn the official patch was merged already. I'll check 5.19 and see if it's there.
OK, I found that mail history. Guess the first attempt got dropped and it's only recently been picked up again. It's being used by the WSL2 kernel fork, but they're upstreaming from the 5.10 LTS and don't keep it as current as they should either. If you like, you could patch it in directly from https://github.com/microsoft/WSL2-Linux-Kernel/tree/linux-msft-wsl-5.10.y/drivers/hv/dxgkrnl. From personal experience, building a stable kernel with the patches works just fine. It's mostly stand-alone code, thanks to the upstream reviewers forcing MS to do so in the first go-around. So it could easily be shipped as a .patch when installing on Hyper-V, and pointing directly at the git repository linked above would also keep it current. But it's your choice. See microsoft/WSL2-Linux-Kernel@7fb380b for the initial implementation on 5.10. The only files not directly in that dxg directory are the ones necessary to get it to build.
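For what it's worth, a rough, untested sketch of that vendoring approach could look like the following; the branch name and driver path are taken from the link above, everything else is an assumption:

```sh
# Rough sketch, not a tested recipe: vendor drivers/hv/dxgkrnl from the
# WSL2 kernel fork and turn it into a single patch against a stable tree.
git clone --depth 1 -b linux-msft-wsl-5.10.y \
    https://github.com/microsoft/WSL2-Linux-Kernel.git wsl2-kernel

# Copy the (mostly self-contained) driver into the stable kernel tree.
cp -r wsl2-kernel/drivers/hv/dxgkrnl linux-stable/drivers/hv/dxgkrnl

# The files outside that directory mentioned above (the drivers/hv/
# Kconfig and Makefile hooks) still have to be added by hand here.

# Generate a stand-alone .patch the package build can apply.
cd linux-stable
git add drivers/hv/dxgkrnl
git diff --cached > ../dxgkrnl.patch
```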
[ Upstream commit 37c3b9fa7ccf5caad6d87ba4d42bf00be46be1cf ]

The cited commit adds a completion to remove the dependency on the rtnl lock. But it causes a deadlock for multiple encapsulations:

crash> bt ffff8aece8a64000
PID: 1514557  TASK: ffff8aece8a64000  CPU: 3  COMMAND: "tc"
 #0 [ffffa6d14183f368] __schedule at ffffffffb8ba7f45
 #1 [ffffa6d14183f3f8] schedule at ffffffffb8ba8418
 #2 [ffffa6d14183f418] schedule_preempt_disabled at ffffffffb8ba8898
 #3 [ffffa6d14183f428] __mutex_lock at ffffffffb8baa7f8
 #4 [ffffa6d14183f4d0] mutex_lock_nested at ffffffffb8baabeb
 #5 [ffffa6d14183f4e0] mlx5e_attach_encap at ffffffffc0f48c17 [mlx5_core]
 #6 [ffffa6d14183f628] mlx5e_tc_add_fdb_flow at ffffffffc0f39680 [mlx5_core]
 #7 [ffffa6d14183f688] __mlx5e_add_fdb_flow at ffffffffc0f3b636 [mlx5_core]
 #8 [ffffa6d14183f6f0] mlx5e_tc_add_flow at ffffffffc0f3bcdf [mlx5_core]
 #9 [ffffa6d14183f728] mlx5e_configure_flower at ffffffffc0f3c1d1 [mlx5_core]
#10 [ffffa6d14183f790] mlx5e_rep_setup_tc_cls_flower at ffffffffc0f3d529 [mlx5_core]
#11 [ffffa6d14183f7a0] mlx5e_rep_setup_tc_cb at ffffffffc0f3d714 [mlx5_core]
#12 [ffffa6d14183f7b0] tc_setup_cb_add at ffffffffb8931bb8
#13 [ffffa6d14183f810] fl_hw_replace_filter at ffffffffc0dae901 [cls_flower]
#14 [ffffa6d14183f8d8] fl_change at ffffffffc0db5c57 [cls_flower]
#15 [ffffa6d14183f970] tc_new_tfilter at ffffffffb8936047
#16 [ffffa6d14183fac8] rtnetlink_rcv_msg at ffffffffb88c7c31
#17 [ffffa6d14183fb50] netlink_rcv_skb at ffffffffb8942853
#18 [ffffa6d14183fbc0] rtnetlink_rcv at ffffffffb88c1835
#19 [ffffa6d14183fbd0] netlink_unicast at ffffffffb8941f27
#20 [ffffa6d14183fc18] netlink_sendmsg at ffffffffb8942245
#21 [ffffa6d14183fc98] sock_sendmsg at ffffffffb887d482
#22 [ffffa6d14183fcb8] ____sys_sendmsg at ffffffffb887d81a
#23 [ffffa6d14183fd38] ___sys_sendmsg at ffffffffb88806e2
#24 [ffffa6d14183fe90] __sys_sendmsg at ffffffffb88807a2
#25 [ffffa6d14183ff28] __x64_sys_sendmsg at ffffffffb888080f
#26 [ffffa6d14183ff38] do_syscall_64 at ffffffffb8b9b6a8
#27 [ffffa6d14183ff50] entry_SYSCALL_64_after_hwframe at ffffffffb8c0007c

crash> bt 0xffff8aeb07544000
PID: 1110766  TASK: ffff8aeb07544000  CPU: 0  COMMAND: "kworker/u20:9"
 #0 [ffffa6d14e6b7bd8] __schedule at ffffffffb8ba7f45
 #1 [ffffa6d14e6b7c68] schedule at ffffffffb8ba8418
 #2 [ffffa6d14e6b7c88] schedule_timeout at ffffffffb8baef88
 #3 [ffffa6d14e6b7d10] wait_for_completion at ffffffffb8ba968b
 #4 [ffffa6d14e6b7d60] mlx5e_take_all_encap_flows at ffffffffc0f47ec4 [mlx5_core]
 #5 [ffffa6d14e6b7da0] mlx5e_rep_update_flows at ffffffffc0f3e734 [mlx5_core]
 #6 [ffffa6d14e6b7df8] mlx5e_rep_neigh_update at ffffffffc0f400bb [mlx5_core]
 #7 [ffffa6d14e6b7e50] process_one_work at ffffffffb80acc9c
 #8 [ffffa6d14e6b7ed0] worker_thread at ffffffffb80ad012
 #9 [ffffa6d14e6b7f10] kthread at ffffffffb80b615d
#10 [ffffa6d14e6b7f50] ret_from_fork at ffffffffb8001b2f

After the first encap is attached, the flow is added to the encap entry's flows list. If a neigh update is running at this time, the following encaps of the flow can't hold the encap_tbl_lock and sleep. If the neigh update thread is waiting for that flow's init_done, deadlock happens.

Fix it by holding the lock outside of the for loop. If a neigh update is running, prevent encap flows from offloading. Since the lock is held outside of the for loop, concurrent creation of encap entries is not allowed, so remove the unnecessary wait_for_completion call for res_ready.

Fixes: 95435ad ("net/mlx5e: Only access fully initialized flows in neigh update")
Signed-off-by: Chris Mi <cmi@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Reviewed-by: Vlad Buslov <vladbu@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>