More optimizations on metal #2959

Conversation
M2 Ultra results:
Perplexity
With the latest commit 363f0bf, TG (text generation) for fp16 is basically 2x faster compared to master. PP (prompt processing) is also improved, by a margin that increases rapidly with context length. On a 30-core M2 Max:
@ggerganov I'm curious to know how this compares to #2891 on your M2 Ultra.
Q4_0 7B Perplexity time also dropped - is this expected?
Also, if F16 and Q4_0
Is this a wrong ETA calculation, or do we have some significant overhead somewhere?
My experience with the ETA is that it is not very accurate. OK, it finished in 12.5 minutes, so faster than the ETA predicted.
main: build = 1157 (363f0bf)
main: seed = 1693672429
llama_model_loader: loaded meta data with 14 key-value pairs and 291 tensors from ../models/L2_7B/ggml-model-f16.gguf (version GGUF V1 (support until nov 2023))
llama_model_loader: - tensor 0: token_embd.weight f16 [ 4096, 32000, 1, 1 ]
llama_model_loader: - tensor 1: output_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 2: output.weight f16 [ 4096, 32000, 1, 1 ]
llama_model_loader: - tensor 3: blk.0.attn_q.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 4: blk.0.attn_k.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 5: blk.0.attn_v.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 6: blk.0.attn_output.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 7: blk.0.ffn_gate.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 8: blk.0.ffn_down.weight f16 [ 11008, 4096, 1, 1 ]
llama_model_loader: - tensor 9: blk.0.ffn_up.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 10: blk.0.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 11: blk.0.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 12: blk.1.attn_q.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 13: blk.1.attn_k.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 14: blk.1.attn_v.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 15: blk.1.attn_output.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 16: blk.1.ffn_gate.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 17: blk.1.ffn_down.weight f16 [ 11008, 4096, 1, 1 ]
llama_model_loader: - tensor 18: blk.1.ffn_up.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 19: blk.1.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 20: blk.1.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 21: blk.2.attn_q.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 22: blk.2.attn_k.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 23: blk.2.attn_v.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 24: blk.2.attn_output.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 25: blk.2.ffn_gate.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 26: blk.2.ffn_down.weight f16 [ 11008, 4096, 1, 1 ]
llama_model_loader: - tensor 27: blk.2.ffn_up.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 28: blk.2.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 29: blk.2.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 30: blk.3.attn_q.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 31: blk.3.attn_k.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 32: blk.3.attn_v.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 33: blk.3.attn_output.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 34: blk.3.ffn_gate.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 35: blk.3.ffn_down.weight f16 [ 11008, 4096, 1, 1 ]
llama_model_loader: - tensor 36: blk.3.ffn_up.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 37: blk.3.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 38: blk.3.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 39: blk.4.attn_q.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 40: blk.4.attn_k.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 41: blk.4.attn_v.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 42: blk.4.attn_output.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 43: blk.4.ffn_gate.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 44: blk.4.ffn_down.weight f16 [ 11008, 4096, 1, 1 ]
llama_model_loader: - tensor 45: blk.4.ffn_up.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 46: blk.4.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 47: blk.4.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 48: blk.5.attn_q.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 49: blk.5.attn_k.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 50: blk.5.attn_v.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 51: blk.5.attn_output.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 52: blk.5.ffn_gate.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 53: blk.5.ffn_down.weight f16 [ 11008, 4096, 1, 1 ]
llama_model_loader: - tensor 54: blk.5.ffn_up.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 55: blk.5.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 56: blk.5.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 57: blk.6.attn_q.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 58: blk.6.attn_k.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 59: blk.6.attn_v.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 60: blk.6.attn_output.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 61: blk.6.ffn_gate.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 62: blk.6.ffn_down.weight f16 [ 11008, 4096, 1, 1 ]
llama_model_loader: - tensor 63: blk.6.ffn_up.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 64: blk.6.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 65: blk.6.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 66: blk.7.attn_q.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 67: blk.7.attn_k.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 68: blk.7.attn_v.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 69: blk.7.attn_output.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 70: blk.7.ffn_gate.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 71: blk.7.ffn_down.weight f16 [ 11008, 4096, 1, 1 ]
llama_model_loader: - tensor 72: blk.7.ffn_up.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 73: blk.7.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 74: blk.7.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 75: blk.8.attn_q.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 76: blk.8.attn_k.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 77: blk.8.attn_v.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 78: blk.8.attn_output.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 79: blk.8.ffn_gate.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 80: blk.8.ffn_down.weight f16 [ 11008, 4096, 1, 1 ]
llama_model_loader: - tensor 81: blk.8.ffn_up.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 82: blk.8.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 83: blk.8.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 84: blk.9.attn_q.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 85: blk.9.attn_k.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 86: blk.9.attn_v.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 87: blk.9.attn_output.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 88: blk.9.ffn_gate.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 89: blk.9.ffn_down.weight f16 [ 11008, 4096, 1, 1 ]
llama_model_loader: - tensor 90: blk.9.ffn_up.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 91: blk.9.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 92: blk.9.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 93: blk.10.attn_q.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 94: blk.10.attn_k.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 95: blk.10.attn_v.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 96: blk.10.attn_output.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 97: blk.10.ffn_gate.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 98: blk.10.ffn_down.weight f16 [ 11008, 4096, 1, 1 ]
llama_model_loader: - tensor 99: blk.10.ffn_up.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 100: blk.10.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 101: blk.10.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 102: blk.11.attn_q.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 103: blk.11.attn_k.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 104: blk.11.attn_v.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 105: blk.11.attn_output.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 106: blk.11.ffn_gate.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 107: blk.11.ffn_down.weight f16 [ 11008, 4096, 1, 1 ]
llama_model_loader: - tensor 108: blk.11.ffn_up.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 109: blk.11.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 110: blk.11.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 111: blk.12.attn_q.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 112: blk.12.attn_k.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 113: blk.12.attn_v.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 114: blk.12.attn_output.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 115: blk.12.ffn_gate.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 116: blk.12.ffn_down.weight f16 [ 11008, 4096, 1, 1 ]
llama_model_loader: - tensor 117: blk.12.ffn_up.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 118: blk.12.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 119: blk.12.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 120: blk.13.attn_q.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 121: blk.13.attn_k.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 122: blk.13.attn_v.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 123: blk.13.attn_output.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 124: blk.13.ffn_gate.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 125: blk.13.ffn_down.weight f16 [ 11008, 4096, 1, 1 ]
llama_model_loader: - tensor 126: blk.13.ffn_up.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 127: blk.13.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 128: blk.13.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 129: blk.14.attn_q.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 130: blk.14.attn_k.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 131: blk.14.attn_v.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 132: blk.14.attn_output.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 133: blk.14.ffn_gate.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 134: blk.14.ffn_down.weight f16 [ 11008, 4096, 1, 1 ]
llama_model_loader: - tensor 135: blk.14.ffn_up.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 136: blk.14.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 137: blk.14.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 138: blk.15.attn_q.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 139: blk.15.attn_k.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 140: blk.15.attn_v.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 141: blk.15.attn_output.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 142: blk.15.ffn_gate.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 143: blk.15.ffn_down.weight f16 [ 11008, 4096, 1, 1 ]
llama_model_loader: - tensor 144: blk.15.ffn_up.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 145: blk.15.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 146: blk.15.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 147: blk.16.attn_q.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 148: blk.16.attn_k.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 149: blk.16.attn_v.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 150: blk.16.attn_output.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 151: blk.16.ffn_gate.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 152: blk.16.ffn_down.weight f16 [ 11008, 4096, 1, 1 ]
llama_model_loader: - tensor 153: blk.16.ffn_up.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 154: blk.16.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 155: blk.16.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 156: blk.17.attn_q.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 157: blk.17.attn_k.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 158: blk.17.attn_v.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 159: blk.17.attn_output.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 160: blk.17.ffn_gate.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 161: blk.17.ffn_down.weight f16 [ 11008, 4096, 1, 1 ]
llama_model_loader: - tensor 162: blk.17.ffn_up.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 163: blk.17.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 164: blk.17.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 165: blk.18.attn_q.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 166: blk.18.attn_k.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 167: blk.18.attn_v.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 168: blk.18.attn_output.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 169: blk.18.ffn_gate.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 170: blk.18.ffn_down.weight f16 [ 11008, 4096, 1, 1 ]
llama_model_loader: - tensor 171: blk.18.ffn_up.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 172: blk.18.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 173: blk.18.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 174: blk.19.attn_q.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 175: blk.19.attn_k.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 176: blk.19.attn_v.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 177: blk.19.attn_output.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 178: blk.19.ffn_gate.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 179: blk.19.ffn_down.weight f16 [ 11008, 4096, 1, 1 ]
llama_model_loader: - tensor 180: blk.19.ffn_up.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 181: blk.19.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 182: blk.19.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 183: blk.20.attn_q.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 184: blk.20.attn_k.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 185: blk.20.attn_v.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 186: blk.20.attn_output.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 187: blk.20.ffn_gate.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 188: blk.20.ffn_down.weight f16 [ 11008, 4096, 1, 1 ]
llama_model_loader: - tensor 189: blk.20.ffn_up.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 190: blk.20.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 191: blk.20.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 192: blk.21.attn_q.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 193: blk.21.attn_k.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 194: blk.21.attn_v.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 195: blk.21.attn_output.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 196: blk.21.ffn_gate.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 197: blk.21.ffn_down.weight f16 [ 11008, 4096, 1, 1 ]
llama_model_loader: - tensor 198: blk.21.ffn_up.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 199: blk.21.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 200: blk.21.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 201: blk.22.attn_q.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 202: blk.22.attn_k.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 203: blk.22.attn_v.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 204: blk.22.attn_output.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 205: blk.22.ffn_gate.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 206: blk.22.ffn_down.weight f16 [ 11008, 4096, 1, 1 ]
llama_model_loader: - tensor 207: blk.22.ffn_up.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 208: blk.22.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 209: blk.22.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 210: blk.23.attn_q.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 211: blk.23.attn_k.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 212: blk.23.attn_v.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 213: blk.23.attn_output.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 214: blk.23.ffn_gate.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 215: blk.23.ffn_down.weight f16 [ 11008, 4096, 1, 1 ]
llama_model_loader: - tensor 216: blk.23.ffn_up.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 217: blk.23.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 218: blk.23.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 219: blk.24.attn_q.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 220: blk.24.attn_k.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 221: blk.24.attn_v.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 222: blk.24.attn_output.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 223: blk.24.ffn_gate.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 224: blk.24.ffn_down.weight f16 [ 11008, 4096, 1, 1 ]
llama_model_loader: - tensor 225: blk.24.ffn_up.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 226: blk.24.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 227: blk.24.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 228: blk.25.attn_q.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 229: blk.25.attn_k.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 230: blk.25.attn_v.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 231: blk.25.attn_output.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 232: blk.25.ffn_gate.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 233: blk.25.ffn_down.weight f16 [ 11008, 4096, 1, 1 ]
llama_model_loader: - tensor 234: blk.25.ffn_up.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 235: blk.25.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 236: blk.25.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 237: blk.26.attn_q.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 238: blk.26.attn_k.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 239: blk.26.attn_v.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 240: blk.26.attn_output.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 241: blk.26.ffn_gate.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 242: blk.26.ffn_down.weight f16 [ 11008, 4096, 1, 1 ]
llama_model_loader: - tensor 243: blk.26.ffn_up.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 244: blk.26.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 245: blk.26.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 246: blk.27.attn_q.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 247: blk.27.attn_k.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 248: blk.27.attn_v.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 249: blk.27.attn_output.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 250: blk.27.ffn_gate.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 251: blk.27.ffn_down.weight f16 [ 11008, 4096, 1, 1 ]
llama_model_loader: - tensor 252: blk.27.ffn_up.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 253: blk.27.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 254: blk.27.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 255: blk.28.attn_q.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 256: blk.28.attn_k.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 257: blk.28.attn_v.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 258: blk.28.attn_output.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 259: blk.28.ffn_gate.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 260: blk.28.ffn_down.weight f16 [ 11008, 4096, 1, 1 ]
llama_model_loader: - tensor 261: blk.28.ffn_up.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 262: blk.28.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 263: blk.28.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 264: blk.29.attn_q.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 265: blk.29.attn_k.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 266: blk.29.attn_v.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 267: blk.29.attn_output.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 268: blk.29.ffn_gate.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 269: blk.29.ffn_down.weight f16 [ 11008, 4096, 1, 1 ]
llama_model_loader: - tensor 270: blk.29.ffn_up.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 271: blk.29.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 272: blk.29.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 273: blk.30.attn_q.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 274: blk.30.attn_k.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 275: blk.30.attn_v.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 276: blk.30.attn_output.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 277: blk.30.ffn_gate.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 278: blk.30.ffn_down.weight f16 [ 11008, 4096, 1, 1 ]
llama_model_loader: - tensor 279: blk.30.ffn_up.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 280: blk.30.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 281: blk.30.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 282: blk.31.attn_q.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 283: blk.31.attn_k.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 284: blk.31.attn_v.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 285: blk.31.attn_output.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 286: blk.31.ffn_gate.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 287: blk.31.ffn_down.weight f16 [ 11008, 4096, 1, 1 ]
llama_model_loader: - tensor 288: blk.31.ffn_up.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 289: blk.31.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 290: blk.31.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - kv 0: general.architecture str
llama_model_loader: - kv 1: general.name str
llama_model_loader: - kv 2: llama.context_length u32
llama_model_loader: - kv 3: llama.embedding_length u32
llama_model_loader: - kv 4: llama.block_count u32
llama_model_loader: - kv 5: llama.feed_forward_length u32
llama_model_loader: - kv 6: llama.rope.dimension_count u32
llama_model_loader: - kv 7: llama.attention.head_count u32
llama_model_loader: - kv 8: llama.attention.head_count_kv u32
llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32
llama_model_loader: - kv 10: tokenizer.ggml.model str
llama_model_loader: - kv 11: tokenizer.ggml.tokens arr
llama_model_loader: - kv 12: tokenizer.ggml.scores arr
llama_model_loader: - kv 13: tokenizer.ggml.token_type arr
llama_model_loader: - type f32: 65 tensors
llama_model_loader: - type f16: 226 tensors
llm_load_print_meta: format = GGUF V1 (support until nov 2023)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = SPM
llm_load_print_meta: n_vocab = 32000
llm_load_print_meta: n_merges = 0
llm_load_print_meta: n_ctx_train = 4096
llm_load_print_meta: n_ctx = 512
llm_load_print_meta: n_embd = 4096
llm_load_print_meta: n_head = 32
llm_load_print_meta: n_head_kv = 32
llm_load_print_meta: n_layer = 32
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_gqa = 1
llm_load_print_meta: f_norm_eps = 1.0e-05
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: n_ff = 11008
llm_load_print_meta: freq_base = 10000.0
llm_load_print_meta: freq_scale = 1
llm_load_print_meta: model type = 7B
llm_load_print_meta: model ftype = mostly F16 (guessed)
llm_load_print_meta: model size = 6.74 B
llm_load_print_meta: general.name = LLaMA
llm_load_print_meta: BOS token = 1 '
system_info: n_threads = 8 / 12 | AVX = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 1 | ARM_FMA = 1 | F16C = 0 | FP16_VA = 1 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 0 | SSSE3 = 0 | VSX = 0 |

llama_print_timings: load time = 2157.42 ms
I'm also running it.

So definitely a wrong ETA calculation. We should fix this.
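For context, here is a minimal C++ sketch of the usual way such a per-chunk ETA is computed (an illustration only, not the actual llama.cpp perplexity code; the chunk count is a made-up placeholder). If the average time per completed chunk is a poor predictor, for example because early chunks include one-time warm-up cost or because prompt-processing speed changes with context length, the projection overshoots exactly as observed:

#include <chrono>
#include <cstdio>

int main() {
    const int n_chunks = 655;  // hypothetical chunk count for the test set
    const auto t_start = std::chrono::steady_clock::now();

    for (int i = 0; i < n_chunks; ++i) {
        // ... evaluate chunk i here ...

        // project total runtime from the average time per completed chunk
        const double elapsed = std::chrono::duration<double>(
            std::chrono::steady_clock::now() - t_start).count();
        const double eta_min = elapsed / (i + 1) * (n_chunks - i - 1) / 60.0;
        printf("chunk %d/%d, ETA %.2f min\n", i + 1, n_chunks, eta_min);
    }
    return 0;
}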
Have you fetched the latest branch of #2891? I think I updated it yesterday by merging
Yes, I just updated this afternoon. I'm on b46ae7b for #2891.
uint ith = tpitg.x;
uint nth = tptg.x;
float sumf = 0;
if (ne00 < 128) {
Should we have 2 separate kernels to avoid this branch?
I was considering it, but taking into account the desire for shorter and simpler code, I didn't do it. This is something one needs to study more carefully anyway; the best way to perform this computation is not just a function of ne00.
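For illustration, one way to get the effect of two separate kernels without duplicating source is a Metal function constant, which resolves the branch when the pipeline state is compiled rather than at run time. The sketch below is hypothetical (the kernel name, buffer bindings, and loop bodies are made up for the example, and the vectorized path assumes ne00 % 4 == 0), not the actual ggml-metal kernel:

#include <metal_stdlib>
using namespace metal;

// compile-time specialization: the host creates two pipeline states,
// one with small_ne00 = true and one with small_ne00 = false
constant bool small_ne00 [[function_constant(0)]];

kernel void kernel_dot_f16_f32(
        device const half  * x    [[buffer(0)]],
        device const float * y    [[buffer(1)]],
        device float       * dst  [[buffer(2)]],
        constant int       & ne00 [[buffer(3)]],
        uint tiisg [[thread_index_in_simdgroup]]) {
    float sumf = 0;
    if (small_ne00) {
        // short rows: plain scalar loop across the 32-wide simdgroup
        for (int i = tiisg; i < ne00; i += 32) {
            sumf += (float) x[i] * y[i];
        }
    } else {
        // long rows: 4-wide vector loads (assumes ne00 % 4 == 0)
        device const half4  * x4 = (device const half4  *) x;
        device const float4 * y4 = (device const float4 *) y;
        for (int i = tiisg; i < ne00/4; i += 32) {
            sumf += dot(float4(x4[i]), y4[i]);
        }
    }
    float all_sum = simd_sum(sumf);
    if (tiisg == 0) {
        dst[0] = all_sum;
    }
}

Since the constant is known when the pipeline is built, the untaken path is eliminated entirely, avoiding the runtime branch while keeping a single source body.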
This is causing incoherent paragraphs very quickly in M2 Metal inference; -ngl 0 works fine though. Sample (with -ngl 99):
Sample with -ngl 0:
Please revert.
Yes, I just observed that F16 inference is broken with the PR - it starts OK and then degrades into incoherent text. I'm looking into it. Edit: 363f0bf is the problematic commit.
This restores the generated text to be the same as before #2959.
float all_sum = simd_sum(sumf);
if (tiisg == 0) {
    for (int i = 4*(ne00/4); i < ne00; ++i) sumf += (float) x[i] * y[i];
Changing this to all_sum += (float) x[i] * y[i]; in both kernels seems to resolve the issue.
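That matches the structure of the fragment above: simd_sum(sumf) reduces the per-lane partial sums across the simdgroup first, and the leftover ne00 % 4 tail elements are then processed by lane 0 alone, so accumulating them into sumf after the reduction means they never reach the value that gets stored. A sketch of the corrected tail handling (fragment only, reusing the variable names from the diff; the final store is elided as in the original):

float all_sum = simd_sum(sumf);
if (tiisg == 0) {
    // the tail runs on lane 0 only, after the reduction, so it must
    // accumulate into all_sum rather than the already-reduced sumf
    for (int i = 4*(ne00/4); i < ne00; ++i) all_sum += (float) x[i] * y[i];
    // ... write all_sum to the output as in the surrounding kernel ...
}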
On 30-core M2 Max:
With these changes along with the merged #2951, perplexity now runs in 13.6 minutes on my M2 Max laptop vs ~24 minutes before.