How to disable prompt cache? #1608

Closed
mscheong01 opened this issue Jul 19, 2024 · 1 comment

@mscheong01

Hi, I'm running some offline inference benchmarks using llama-cpp-python, and the prompt cache that was implemented here (#158) is getting in the way of measuring prompt evaluation time. Is there an option to disable it?

@mscheong01
Author

I was able to do this by calling model.reset() before each run.
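
For anyone hitting the same thing, here is a minimal sketch of how I use it between benchmark runs. The model path and prompt are placeholders, and the assumption is that reset() clears the model's cached token state so the full prompt is re-evaluated on the next call:

```python
import time

from llama_cpp import Llama

# Placeholder path; point this at your own GGUF model file.
llm = Llama(model_path="./models/model.gguf", verbose=False)

prompt = "Explain the benefits of prompt caching in one paragraph."

for run in range(3):
    llm.reset()  # drop cached tokens so prompt eval starts from scratch
    start = time.perf_counter()
    llm(prompt, max_tokens=1)  # one token is enough to time prompt evaluation
    elapsed = time.perf_counter() - start
    print(f"run {run}: prompt evaluation took {elapsed:.3f}s")
```

Without the reset() call, later runs reuse the shared prompt prefix and report much shorter evaluation times.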
