[SwiftLexicalLookup] Unqualified lookup caching #3068


Status: Open
Wants to merge 5 commits into main
Conversation

@MAJKFL MAJKFL commented Apr 30, 2025

This PR introduces optional caching support to SwiftLexicalLookup. To use it, clients pass an instance of LookupCache as a parameter to the lookup function.

LookupCache keeps track of cache member hits. To prevent the cache from taking up too much memory, clients can call the LookupCache.evictEntriesWithoutHit function to remove members without a hit and reset the hit property of the remaining members. Calling this function after every lookup effectively maintains one path from a leaf to the root of the scope tree in the cache.
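A sketch of the intended call pattern, based on the description above; the lookup entry point, its parameter labels, and the `token.identifier` handling here are assumptions for illustration, not the PR's verbatim API:

```swift
import SwiftLexicalLookup
import SwiftParser
import SwiftSyntax

// Sketch only: the exact lookup signature is an assumption based on this PR.
let tree = Parser.parse(source: "let x = 1\nprint(x)")
let cache = LookupCache()  // drop defaults to 0

for token in tree.tokens(viewMode: .sourceAccurate) {
  // Reuse cached scope results across consecutive lookups.
  let results = token.lookup(token.identifier, cache: cache)
  _ = results
  // Evict everything the lookup above did not hit. With drop = 0 this
  // keeps exactly one leaf-to-root path of the scope tree cached.
  cache.evictEntriesWithoutHit()
}
```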

Clients can also optionally set the drop value:

/// Creates a new unqualified lookup cache.
/// The `drop` parameter specifies how many eviction calls will be
/// ignored before evicting not-hit members of the cache.
///
/// Example cache eviction sequences (s - skip, e - evict):
/// - `drop = 0` - `e -> e -> e -> e -> e -> ...`
/// - `drop = 1` - `s -> e -> s -> e -> s -> ...`
/// - `drop = 3` - `s -> s -> s -> e -> s -> ...`
///
/// - Note: `drop = 0` effectively maintains exactly one path of cached results to
/// the root in the cache (assuming cache members are evicted after each lookup in a sequence of lookups).
/// The higher the `drop` value, the more such paths can potentially be stored in the cache at any given moment.
/// Because of that, a higher `drop` value also translates to a higher number of cache hits,
/// but it might not directly translate to better performance. Because of the larger memory footprint,
/// memory accesses could take longer, slowing down the eviction process. That's why the `drop` value
/// can be fine-tuned to maximize performance for a given file size,
/// number of lookups, and amount of available memory.
public init(drop: Int = 0) {
  self.dropMod = drop + 1
}
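The skip/evict cadence implied by `dropMod` can be modeled in isolation. The following stand-alone sketch is illustrative only (it is not the PR's actual implementation) and assumes eviction fires on every `(drop + 1)`-th call:

```swift
/// Stand-alone model of the drop-based eviction cadence.
/// Illustrative only; not the actual LookupCache implementation.
struct EvictionSchedule {
  private let dropMod: Int
  private var callCount = 0

  init(drop: Int = 0) {
    // Mirrors the initializer above: evict on every (drop + 1)-th call.
    self.dropMod = drop + 1
  }

  /// Returns `true` when this eviction call should actually evict,
  /// `false` when it should be skipped.
  mutating func shouldEvict() -> Bool {
    callCount += 1
    return callCount % dropMod == 0
  }
}

// drop = 0 evicts on every call; drop = 3 evicts on every fourth call.
var s0 = EvictionSchedule(drop: 0)
let seq0 = (0..<5).map { _ in s0.shouldEvict() }
// seq0 == [true, true, true, true, true]

var s3 = EvictionSchedule(drop: 3)
let seq3 = (0..<5).map { _ in s3.shouldEvict() }
// seq3 == [false, false, false, true, false]
```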


MAJKFL commented Apr 30, 2025

swiftlang/swift#81209

@swift-ci Please test

ahoppen (Member) left a comment


Without diving too deeply into the details: I am a little concerned about the cache eviction behavior and the fact that you need to manually call evictEntriesWithoutHit (which, incidentally, doesn’t seem to be called in this PR or swiftlang/swift#81209), and I think it’s easy for clients to forget to call it. Does this more complex cache eviction policy provide significant benefits over a simple LRU cache that keeps, say, 100 cache entries? We could share the LRUCache type that we currently have in SwiftCompilerPluginMessageHandling for that. Curious to hear your opinion.
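For comparison, a fixed-capacity LRU policy needs no manual eviction call: inserting past capacity silently drops the least recently used entry. A minimal self-contained sketch (this is not the actual LRUCache type in SwiftCompilerPluginMessageHandling, and its linear-scan `order` bookkeeping is chosen for brevity, not speed):

```swift
/// Minimal least-recently-used cache sketch (illustrative only).
final class SimpleLRUCache<Key: Hashable, Value> {
  private var storage: [Key: Value] = [:]
  private var order: [Key] = []  // least recently used first
  private let capacity: Int

  init(capacity: Int = 100) {
    precondition(capacity > 0)
    self.capacity = capacity
  }

  subscript(key: Key) -> Value? {
    get {
      guard let value = storage[key] else { return nil }
      touch(key)
      return value
    }
    set {
      if let newValue {
        if storage[key] == nil, storage.count == capacity {
          // At capacity and inserting a new key: drop the LRU entry.
          let evicted = order.removeFirst()
          storage[evicted] = nil
        }
        storage[key] = newValue
        touch(key)
      } else {
        storage[key] = nil
        order.removeAll { $0 == key }
      }
    }
  }

  /// Moves `key` to the most-recently-used position.
  private func touch(_ key: Key) {
    order.removeAll { $0 == key }
    order.append(key)
  }
}

let recent = SimpleLRUCache<String, Int>(capacity: 2)
recent["a"] = 1
recent["b"] = 2
_ = recent["a"]  // touch "a", so "b" becomes least recently used
recent["c"] = 3  // evicts "b"
```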

/// memory accesses could take longer, slowing down the eviction process. That's why the `drop` value
/// could be fine-tuned to maximize the performance given file size,
/// number of lookups, and amount of available memory.
public init(drop: Int = 0) {

I’m not a fan of the drop naming here. I don’t have a better suggestion yet; maybe I’ll come up with one.

) -> [LookupResult] {
scope?.lookup(identifier, at: self.position, with: config) ?? []
if let cache, let identifier {
let filteredResult: [LookupResult] = (scope?.lookup(nil, at: self.position, with: config, cache: cache) ?? [])

I might be missing something but why is this filtering now needed with the cache lookup?
