### Your current environment

The output of `python collect_env.py`

### 🐛 Describe the bug
I am using:

```python
from vllm import LLM

model = LLM(
    model="llava-hf/llava-1.5-13b-hf",
    image_input_type="pixel_values",
    download_dir="/tmp/models",
    image_token_id=32000,
    image_input_shape="1,3,336,336",
    image_feature_size=576,
)
```
which throws the error:

```text
ValueError: Model architectures ['LlavaForCausalLM'] are not supported for now. Supported architectures: ['AquilaModel', 'AquilaForCausalLM', 'BaiChuanForCausalLM', 'BaichuanForCausalLM', 'BloomForCausalLM', 'ChatGLMModel', 'ChatGLMForConditionalGeneration', 'CohereForCausalLM', 'DbrxForCausalLM', 'DeciLMForCausalLM', 'DeepseekForCausalLM', 'FalconForCausalLM', 'GemmaForCausalLM', 'GPT2LMHeadModel', 'GPTBigCodeForCausalLM', 'GPTJForCausalLM', 'GPTNeoXForCausalLM', 'InternLMForCausalLM', 'InternLM2ForCausalLM', 'JAISLMHeadModel', 'LlamaForCausalLM', 'LlavaForConditionalGeneration', 'LLaMAForCausalLM', 'MistralForCausalLM', 'MixtralForCausalLM', 'QuantMixtralForCausalLM', 'MptForCausalLM', 'MPTForCausalLM', 'OLMoForCausalLM', 'OPTForCausalLM', 'OrionForCausalLM', 'PhiForCausalLM', 'QWenLMHeadModel', 'Qwen2ForCausalLM', 'Qwen2MoeForCausalLM', 'RWForCausalLM', 'StableLMEpochForCausalLM', 'StableLmForCausalLM', 'Starcoder2ForCausalLM', 'XverseForCausalLM']
```
After checking on Hugging Face, llava-1.5-7b-hf declares LlavaForConditionalGeneration in its config.json, while llava-1.5-13b-hf declares LlavaForCausalLM.
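For reference, the mismatch can be confirmed by reading the `architectures` field of each checkpoint's config (a quick check, assuming a transformers version that recognizes the llava config type):

```python
from transformers import AutoConfig

# Compare the declared architectures of the two checkpoints.
for repo in ("llava-hf/llava-1.5-7b-hf", "llava-hf/llava-1.5-13b-hf"):
    config = AutoConfig.from_pretrained(repo)
    print(repo, "->", config.architectures)
```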
Is there any easy workaround / fix for this?
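One workaround that may be worth trying (a sketch, not verified; it assumes the 13b weights are otherwise compatible with vLLM's LlavaForConditionalGeneration implementation and that your huggingface_hub version supports `local_dir`): download the checkpoint, patch the `architectures` field in its config.json, and point vLLM at the local copy.

```python
import json
from pathlib import Path

from huggingface_hub import snapshot_download
from vllm import LLM

# Download the 13b checkpoint into a plain local directory
# (hypothetical path; adjust to your setup).
local_path = snapshot_download(
    "llava-hf/llava-1.5-13b-hf",
    local_dir="/tmp/models/llava-1.5-13b-hf",
)

# Rewrite the architecture name to the one vLLM recognizes.
config_file = Path(local_path) / "config.json"
config = json.loads(config_file.read_text())
config["architectures"] = ["LlavaForConditionalGeneration"]
config_file.write_text(json.dumps(config, indent=2))

# Load from the patched local directory instead of the hub repo.
model = LLM(
    model=local_path,
    image_input_type="pixel_values",
    image_token_id=32000,
    image_input_shape="1,3,336,336",
    image_feature_size=576,
)
```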