feat(loader): enhance single active backend by treating as singleton #5107

Merged
merged 1 commit into master from feat/singleton_mode
Apr 1, 2025

Conversation

@mudler mudler commented Apr 1, 2025

Description

This PR improves stability when using the single active backend option.

Notes for Reviewers

Signed commits

  • Yes, I signed my commits.

@mudler mudler requested a review from Copilot April 1, 2025 17:12
@Copilot Copilot AI left a comment

Pull Request Overview

This PR enhances the handling of a single active backend by introducing a singleton mode in the ModelLoader. Key changes include adding lock/unlock methods to manage singleton access, updating various consumers to pass the singleton flag and use defer to release locks on errors, and adjusting the options initialization accordingly.

Reviewed Changes

Copilot reviewed 24 out of 24 changed files in this pull request and generated no comments.

Summary per file:
pkg/model/initializers.go: Added Close and lockBackend methods to manage the singleton lock
core/http/routes/localai.go: Updated endpoints to use variable ml instead of a new loader instance
core/http/endpoints/openai/assistant_test.go: Updated NewModelLoader call to include the single backend flag
core/http/endpoints/localai/stores.go: Added defer sl.Close() to ensure lock release in endpoints
core/cli/*: Updated ModelLoader initializations to pass the single backend flag
core/backend/*: Added defer loader.Close() calls and updated options handling
core/application/*: Updated loader initialization and load conditions based on singleton mode
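The singleton pattern described above can be sketched as follows. This is a hypothetical simplification, not the actual LocalAI implementation: the type, field, and method names (`ModelLoader`, `singleton`, `lockBackend`, `Load`, `Close`) mirror those mentioned in the review summary, but the bodies are illustrative. The key idea is that in singleton mode a mutex is taken before loading a backend and released via `Close`, which callers invoke with `defer` so the lock is dropped even on error paths.

```go
package main

import (
	"fmt"
	"sync"
)

// ModelLoader is a hypothetical sketch of a loader with a singleton mode.
// When singleton is set, only one backend may be active at a time.
type ModelLoader struct {
	singleton bool       // enabled by the single-active-backend option
	mu        sync.Mutex // held while a singleton backend is in use
	active    string     // currently loaded backend, if any
}

func NewModelLoader(singleton bool) *ModelLoader {
	return &ModelLoader{singleton: singleton}
}

// lockBackend acquires the singleton lock; it is a no-op otherwise.
func (ml *ModelLoader) lockBackend() {
	if ml.singleton {
		ml.mu.Lock()
	}
}

// Load takes the singleton lock (if enabled) and records the backend.
// Callers must release it with Close, typically via defer.
func (ml *ModelLoader) Load(backend string) error {
	ml.lockBackend()
	ml.active = backend
	return nil
}

// Close clears the active backend and releases the singleton lock,
// allowing the next Load to proceed.
func (ml *ModelLoader) Close() {
	if ml.singleton {
		ml.active = ""
		ml.mu.Unlock()
	}
}

func main() {
	ml := NewModelLoader(true)
	if err := ml.Load("llama-cpp"); err != nil {
		panic(err)
	}
	defer ml.Close() // released even if later calls fail
	fmt.Println("active:", ml.active)
}
```

The `defer loader.Close()` idiom is what the per-file changes above add across `core/backend/*` and the store endpoints: without it, an early error return would leave the mutex held and deadlock the next load.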
Comments suppressed due to low confidence (2)

core/http/routes/localai.go:53

  • The variable 'ml' is used but not defined in this scope. Consider initializing 'ml' or renaming it back to the previously declared 'sl'.
router.Post("/stores/set", localai.StoresSetEndpoint(ml, appConfig))

core/backend/token_metrics.go:26

  • The error message contains a spelling mistake ('loadmodel'); consider changing it to "could not load model" for clarity.
return nil, fmt.Errorf("could not loadmodel model")

netlify bot commented Apr 1, 2025

Deploy Preview for localai ready!

Name Link
🔨 Latest commit a5995d6
🔍 Latest deploy log https://app.netlify.com/sites/localai/deploys/67ec1e7d9abb7800087aef95
😎 Deploy Preview https://deploy-preview-5107--localai.netlify.app
@mudler mudler added the enhancement New feature or request label Apr 1, 2025
@mudler mudler changed the title feat(loader): enhance single active backend by treating at singleton feat(loader): enhance single active backend by treating as singleton Apr 1, 2025
@mudler mudler merged commit 2c425e9 into master Apr 1, 2025
24 of 26 checks passed
@mudler mudler deleted the feat/singleton_mode branch April 1, 2025 18:58