Attempt at OpenElm #6986
Conversation
Okay, I think it might have something to do with how I'm calculating the offsets from kqv.
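For a concrete picture of those offsets, here is a minimal standalone sketch of splitting a fused QKV projection when the query and KV head counts differ. The [q | k | v] layout and every size below are placeholders for illustration, not OpenELM's actual per-layer values; if any offset is off by even one head, every downstream tensor shifts.

```python
import torch

# Placeholder sizes, not OpenELM's real per-layer values.
head_dim, n_head_q, n_head_kv, n_tokens = 64, 12, 3, 5

# Fused QKV projection output: per token, heads laid out as [q | k | v] (assumed layout).
qkv = torch.randn(n_tokens, (n_head_q + 2 * n_head_kv) * head_dim)

# Element offsets of each section inside the fused output.
q_off = 0
k_off = n_head_q * head_dim
v_off = k_off + n_head_kv * head_dim

q = qkv[:, q_off:k_off].reshape(n_tokens, n_head_q, head_dim)
k = qkv[:, k_off:v_off].reshape(n_tokens, n_head_kv, head_dim)
v = qkv[:, v_off:].reshape(n_tokens, n_head_kv, head_dim)

print(q.shape, k.shape, v.shape)
# torch.Size([5, 12, 64]) torch.Size([5, 3, 64]) torch.Size([5, 3, 64])
```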
llama.cpp (Outdated)
// So because our original wo matrix wasn't 3x, the below function fails because there aren't enough elems in it.
// Got:  [head_dim][n_tokens][n_head_v]
// Want: [n_embd_v_gqa(384)][n_tokens]
// I guess this means that I need to be able to repeat them
This is the problem: these tensors need to be repeated, like they are in the Python part on line 10806.
if self.num_groups != 1:
    # GQA
    # [B, k_h, S, h] --> [B, q_h, S, h] // so, k=3 -> q=12
    keys = keys.repeat_interleave(self.num_groups, dim=1)
    # [B, v_h, S, h] --> [B, q_h, S, h] // so, v=3 -> q=12
    values = values.repeat_interleave(self.num_groups, dim=1)
Trying to do something similar to this.
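For reference, a minimal standalone version of that repetition, with placeholder sizes chosen to match the k=3 -> q=12 case noted in the comments above:

```python
import torch

# Placeholder sizes: batch, KV heads, query heads, sequence length, head dim.
B, k_h, q_h, S, h = 1, 3, 12, 5, 64
num_groups = q_h // k_h  # 4

keys = torch.randn(B, k_h, S, h)
values = torch.randn(B, k_h, S, h)

# Same repetition as the quoted OpenELM code: each KV head is duplicated
# num_groups times so it lines up one-to-one with the query heads.
keys = keys.repeat_interleave(num_groups, dim=1)
values = values.repeat_interleave(num_groups, dim=1)

print(keys.shape, values.shape)
# torch.Size([1, 12, 5, 64]) torch.Size([1, 12, 5, 64])
```

Note that repeat_interleave duplicates each KV head consecutively (h0, h0, h0, h0, h1, ...), which is different from tiling the whole set once per group, so presumably the ggml-side equivalent needs to preserve that ordering.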
I think this is solved now
            LLM_NORM_RMS, cb, il);
        cb(cur, "ffn_norm", il);

        cur = llm_build_ffn(ctx0, cur,
Okay, so this builds purely because the tensor sizes all match up, which doesn't necessarily mean it's the correct implementation, shown by the fact that there's still a SIGSEGV somewhere.
Debugging this because some of the transformations are obviously wrong:
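One way to narrow down which transformation diverges (just a sketch of an approach, not something this PR does): dump per-layer activations from the reference Hugging Face model and compare them with the tensors the ggml graph produces at the same points. The checkpoint id and the assumption that the remote OpenELM code supports output_hidden_states are mine, not confirmed here.

```python
import torch
from transformers import AutoModelForCausalLM

# Assumed checkpoint id; OpenELM ships its modelling code with the weights,
# hence trust_remote_code=True.
model = AutoModelForCausalLM.from_pretrained("apple/OpenELM-270M", trust_remote_code=True)
model.eval()

# A few dummy token ids; real debugging would feed the same prompt given to llama.cpp.
ids = torch.tensor([[1, 450, 4996, 1701]])

with torch.no_grad():
    out = model(ids, output_hidden_states=True)

# One tensor per layer boundary: compare these against the tensors named via
# cb(...) in the ggml build function to find the first layer that diverges.
for i, h in enumerate(out.hidden_states):
    print(f"layer {i}: shape={tuple(h.shape)} mean_abs={h.float().abs().mean().item():.6f}")
```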
How is progress on this?
Man, we are all waiting for you...!!
Still slaving away at debugging.
Closing because I'm not making any meaningful progress on this.
Currently failing on line 821 of sgemm.cpp; some parsing of FFN/attention head info still needs to happen. A few values are currently hard-coded.
Fixes: #6868
Raising this PR as a draft because I need help. Will be adding comments to the original source for reference purposes.
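Regarding the FFN/attention head info that is still hard-coded: OpenELM varies its head counts and FFN width per layer, so the converter has to carry per-layer lists rather than single scalars. A rough sketch of that derivation follows; the config field names (model_dim, num_query_heads, num_kv_heads, ffn_multipliers, ffn_dim_divisor) and the rounding rule are my reading of the upstream config and should be double-checked, and the numbers are placeholders.

```python
# Sketch only: derive per-layer attention/FFN sizes from an OpenELM-style config.
# Field names and the rounding rule are assumptions; the values are placeholders.
def make_divisible(v: float, divisor: int) -> int:
    # Round v to a multiple of divisor, never going below divisor itself.
    return max(divisor, int(v + divisor / 2) // divisor * divisor)

config = {
    "model_dim": 1280,
    "head_dim": 64,
    "num_query_heads": [12, 12, 16, 20],
    "num_kv_heads":    [3, 3, 4, 5],
    "ffn_multipliers": [0.5, 1.0, 2.0, 4.0],
    "ffn_dim_divisor": 256,
}

for il, (n_head, n_head_kv, mult) in enumerate(zip(
        config["num_query_heads"], config["num_kv_heads"], config["ffn_multipliers"])):
    ffn_dim = make_divisible(mult * config["model_dim"], config["ffn_dim_divisor"])
    print(f"layer {il}: n_head={n_head} n_head_kv={n_head_kv} ffn_dim={ffn_dim}")
```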