Commit c556418
llama-bench : use local GPUs along with RPC servers (#14917)
Currently, if RPC servers are specified with '--rpc' and a local GPU (e.g. CUDA) is available, the benchmark runs only on the RPC device(s), yet the backend result column reports "CUDA,RPC", which is incorrect. This patch adds all local GPU devices as well, making llama-bench consistent with llama-cli.
1 parent db16e28
1 file changed, +15 -0 lines
[Diff body not captured in this extraction: 15 lines were added to the single changed file, one line after original line 952 and fourteen lines after original line 962.]
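Since the diff body itself was not captured, the following is only a minimal C++ sketch of the approach described in the commit message: when RPC devices are in use, also enumerate local GPU devices through the public ggml backend-device registry so they take part in the benchmark. The helper name `collect_devices` and the `rpc_devs` parameter are hypothetical and not taken from the patch.

```cpp
// Hedged sketch, not the actual patch body.
// Idea: keep the devices created from --rpc endpoints, but also append every
// local GPU device reported by the ggml backend-device registry, so
// llama-bench behaves like llama-cli.
#include <vector>
#include "ggml-backend.h"

static std::vector<ggml_backend_dev_t> collect_devices(
        const std::vector<ggml_backend_dev_t> & rpc_devs /* assumed: devices from --rpc */) {
    std::vector<ggml_backend_dev_t> devs = rpc_devs;

    // add all local GPU devices as well
    // (deduplication against devices already in the list is omitted in this sketch)
    for (size_t i = 0; i < ggml_backend_dev_count(); ++i) {
        ggml_backend_dev_t dev = ggml_backend_dev_get(i);
        if (ggml_backend_dev_type(dev) == GGML_BACKEND_DEVICE_TYPE_GPU) {
            devs.push_back(dev);
        }
    }
    return devs;
}
```

With a device list built this way, the backend column (e.g. "CUDA,RPC") matches the devices that actually ran the benchmark.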
0 commit comments