Motivation

While trying to adapt the glm-4v-9b model (actually 13.9B parameters in total, of which the vision part is 4.9B), I found that glm4v applies special handling to the input position_ids:

```python
new_input_embeds.append(torch.cat(
    (inputs_embeds[i, :boi_token_pos], images_features[i], inputs_embeds[i, eoi_token_pos + 1:])))
new_position_ids.append(torch.cat(
    (position_ids[i, :boi_token_pos + 1], position_ids[i, boi_token_pos + 1].repeat(num_patches),
     position_ids[i, eoi_token_pos:])
))
```

It assigns the position_ids of all visual feature tokens one shared value, which is then used when computing RoPE. The turbomind engine does not seem to expose an interface for modifying position_ids. In our scenario, glm-4v after full-parameter fine-tuning performs best among open-source models, so we hope the team can officially support glm-4v-9b.
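To make the position_ids handling concrete, here is a minimal standalone sketch of the splice on a toy sequence. The sizes (`seq_len`, `num_patches`) and token positions (`boi_token_pos`, `eoi_token_pos`) are hypothetical values chosen for illustration; in the real model they come from the tokenized input.

```python
import torch

# Hypothetical toy dimensions for illustration only.
seq_len = 8
num_patches = 4
boi_token_pos, eoi_token_pos = 2, 3  # begin/end-of-image token positions

position_ids = torch.arange(seq_len)  # [0, 1, ..., 7]

# glm-4v splices the image patches into the sequence but gives every patch
# the SAME position id (the one right after the begin-of-image token), so
# all vision tokens share a single rotary (RoPE) position.
new_position_ids = torch.cat((
    position_ids[:boi_token_pos + 1],                    # text before image: [0, 1, 2]
    position_ids[boi_token_pos + 1].repeat(num_patches), # all patches: [3, 3, 3, 3]
    position_ids[eoi_token_pos:],                        # from eoi onward: [3, 4, 5, 6, 7]
))
print(new_position_ids.tolist())  # → [0, 1, 2, 3, 3, 3, 3, 3, 4, 5, 6, 7]
```

This is why a generic engine that only generates monotonically increasing position ids (as turbomind appears to) cannot reproduce glm-4v's RoPE behavior without an override hook.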
Related resources

GLM-4 model link: https://github.com/THUDM/GLM-4
@liyuan1208 Hi, glm-4v-9b will be supported by lmdeploy's PyTorch engine. We will update here once the PR is created.
Roughly how long until this is available? Looking forward to it 😚
Just my personal take: supporting it before vllm does would attract a big wave of users...
@danxuan2022 hi, you could try this PR #1947