What service are you using for DeepSeek? If you are deploying it locally with something like Ollama, you should be able to expose it through an OpenAI-compatible endpoint. Then you can just use the `openai_generic_client` and set the model name to your model (the deployment should also give you the necessary API key).
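As a rough sketch, wiring the generic client to a local Ollama deployment could look like the following. The import paths and `LLMConfig` fields here are from a recent `graphiti-core` and may differ in your version, and the model tag is just a placeholder for whatever you pulled:

```python
from graphiti_core import Graphiti
from graphiti_core.llm_client.config import LLMConfig
from graphiti_core.llm_client.openai_generic_client import OpenAIGenericClient

# Ollama serves an OpenAI-compatible API under /v1 by default.
llm_config = LLMConfig(
    api_key="ollama",  # Ollama ignores the key, but the client expects one
    model="deepseek-r1:14b",  # placeholder tag; use the model you actually pulled
    base_url="http://localhost:11434/v1",
)

graphiti = Graphiti(
    "bolt://localhost:7687",  # your Neo4j connection details
    "neo4j",
    "password",
    llm_client=OpenAIGenericClient(config=llm_config),
)
```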
If you are using a third-party API service for DeepSeek, we support that as well. We have a Groq client, for example, but you can also use any OpenAI-compatible API with the generic client above. If the service you use isn't supported, we can also help you submit a new `llm_client` for your use case.
For the HuggingFace embeddings, we don't actually have an example of a client for that, which is an oversight on our part (we actually use HuggingFace models for our own deployment of Graphiti as part of Zep). I will work on adding that to Graphiti today, though. In the meantime, you can look at how we implement an HF model for the cross-encoder and do something similar for the embedder: `graphiti_core/cross_encoder/bge_reranker_client.py`.
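A rough sketch of what that embedder could look like is below. The `EmbedderClient` base class and its async `create()` signature are assumptions, so verify the exact interface in `graphiti_core/embedder/client.py` for your installed version; `sentence-transformers` is also just one convenient way to load the model:

```python
from sentence_transformers import SentenceTransformer

from graphiti_core.embedder.client import EmbedderClient


class HuggingFaceEmbedder(EmbedderClient):
    """Embedder backed by a local HuggingFace sentence-transformers model."""

    def __init__(self, model_name: str = "intfloat/multilingual-e5-large"):
        self.model = SentenceTransformer(model_name)

    async def create(self, input_data: str | list[str]) -> list[float]:
        # encode() is synchronous; under heavy load you may want to offload
        # it to a worker thread with asyncio.to_thread.
        texts = [input_data] if isinstance(input_data, str) else input_data
        embeddings = self.model.encode(texts, normalize_embeddings=True)
        # Return the first embedding as a plain list of floats.
        return embeddings[0].tolist()
```

You could then pass an instance of this class to Graphiti's `embedder` argument at initialization (assuming your version exposes one).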
Hello Graphiti Team! 👋
I'm exploring the integration of alternative models and would appreciate your guidance on two aspects:
1. **DeepSeek LLM Support**
   Would overriding `OpenAIClient`'s config with `model="deepseek-chat"` be the correct approach?

2. **HuggingFace Embeddings**
   Is there a recommended way to plug in a HuggingFace embedding model (e.g., `intfloat/multilingual-e5`)? And how should the embedder be passed during `GraphitiCore` initialization?

Thank you for your time and for building such a valuable tool. Looking forward to your insights.