diff --git a/docs/development/HOWTO-add-model.md b/docs/development/HOWTO-add-model.md
index 51e0b0b20f58d..5989b873a611d 100644
--- a/docs/development/HOWTO-add-model.md
+++ b/docs/development/HOWTO-add-model.md
@@ -23,11 +23,19 @@ The convert script reads the model configuration, tokenizer, tensor names+data a
 
 The required steps to implement for an HF model are:
 
-1. Define the model `Model.register` annotation in a new `Model` subclass, example:
+1. Define the model `ModelBase.register` annotation in a new `TextModel` or `MmprojModel` subclass, example:
 
 ```python
-@Model.register("MyModelForCausalLM")
-class MyModel(Model):
+@ModelBase.register("MyModelForCausalLM")
+class MyModel(TextModel):
+    model_arch = gguf.MODEL_ARCH.MYMODEL
+```
+
+or
+
+```python
+@ModelBase.register("MyModelForConditionalGeneration")
+class MyModel(MmprojModel):
     model_arch = gguf.MODEL_ARCH.MYMODEL
 ```
 
@@ -75,9 +83,10 @@ block_mappings_cfg: dict[MODEL_TENSOR, tuple[str, ...]] = {
 
 `transformer.blocks.{bid}.norm_1` will be mapped to `blk.{bid}.attn_norm` in GGUF.
 
 Depending on the model configuration, tokenizer, code and tensors layout, you will have to override:
-- `Model#set_gguf_parameters`
-- `Model#set_vocab`
-- `Model#write_tensors`
+- `TextModel#set_gguf_parameters`
+- `MmprojModel#set_gguf_parameters`
+- `ModelBase#set_vocab`
+- `ModelBase#modify_tensors`
 
 NOTE: Tensor names must end with `.weight` or `.bias` suffixes, that is the convention and several tools like `quantize` expect this to proceed the weights.
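
The `ModelBase.register` mechanism the patch documents can be sketched in isolation. This is a simplified, hypothetical stand-in, not the real implementation in `convert_hf_to_gguf.py`: it only shows the pattern of a class-level registry keyed by the HF `architectures` string, with a decorator that maps each name to a converter subclass.

```python
# Hedged sketch of an architecture-registration decorator. All names here
# (ModelBase, TextModel, from_model_architecture) mirror the docs above, but
# the bodies are illustrative, not the actual convert_hf_to_gguf.py code.

class ModelBase:
    # Registry mapping an HF architecture string to the converter subclass.
    _model_classes: dict[str, type] = {}

    @classmethod
    def register(cls, *names: str):
        # Decorator: register the decorated subclass under each given name.
        def wrapper(model_cls: type) -> type:
            for name in names:
                cls._model_classes[name] = model_cls
            return model_cls
        return wrapper

    @classmethod
    def from_model_architecture(cls, arch: str) -> type:
        # Look up the converter class for an architecture, or fail loudly.
        try:
            return cls._model_classes[arch]
        except KeyError:
            raise NotImplementedError(f"Architecture {arch!r} not supported") from None


class TextModel(ModelBase):
    pass


@ModelBase.register("MyModelForCausalLM")
class MyModel(TextModel):
    model_arch = "MYMODEL"  # stand-in for gguf.MODEL_ARCH.MYMODEL


print(ModelBase.from_model_architecture("MyModelForCausalLM").__name__)  # MyModel
```

The decorator returns the class unchanged, so registration is a pure side effect; an unregistered architecture raises `NotImplementedError` at lookup time rather than failing silently.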
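
The `block_mappings_cfg` mapping mentioned in the second hunk (`transformer.blocks.{bid}.norm_1` → `blk.{bid}.attn_norm`) can also be sketched. This is a hypothetical, self-contained illustration of how per-block HF tensor names are resolved to GGUF names by substituting the block index `{bid}`; the real lookup lives in `gguf-py`'s tensor-mapping tables and is more general.

```python
# Illustrative only: one GGUF tensor name template mapped to the HF name
# templates it may come from, as in the docs above.
block_mappings_cfg: dict[str, tuple[str, ...]] = {
    "blk.{bid}.attn_norm": ("transformer.blocks.{bid}.norm_1",),
}


def map_tensor_name(hf_name: str, n_blocks: int = 32):
    # Try every block index and every known HF template; return the GGUF
    # name on a match, or None if the tensor is not a per-block tensor.
    for bid in range(n_blocks):
        for gguf_tmpl, hf_tmpls in block_mappings_cfg.items():
            for hf_tmpl in hf_tmpls:
                if hf_name == hf_tmpl.format(bid=bid):
                    return gguf_tmpl.format(bid=bid)
    return None


print(map_tensor_name("transformer.blocks.7.norm_1"))  # blk.7.attn_norm
```

A `None` result means the name needs a non-block mapping (or a `modify_tensors` override in the converter subclass).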