spotlessInternalRegisterDependencies task fails due to creating a configuration with an existing name #941
Thanks @nedtwigg for the quick response. I'll update our build tomorrow and keep an eye on this over the next few days. Thanks again, highly appreciated!
5.15.1 contains a fix for our reported race condition bug raised at diffplug/spotless#941. Fixes elastic#77837
@nedtwigg Even with the latest snapshot version we unfortunately still saw this failure happen. It seems the fix isn't 100% working. Should we reopen this?
Bummer. Same.
I guess the problem is still that the cache is being accessed concurrently. If so, we have a few possible fixes:
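For illustration, a minimal sketch of making the lookup-or-create step atomic, assuming the cache is an in-memory map keyed by the formatter request (hypothetical types, not Spotless's actual cache):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Hypothetical request cache; the real Spotless cache looks different,
// this only illustrates making lookup-or-create atomic per key.
class RequestCache<K, V> {
    private final Map<K, V> cache = new ConcurrentHashMap<>();

    // ConcurrentHashMap.computeIfAbsent applies the factory at most once
    // per key, even if several subprojects resolve the same request in
    // parallel, so two threads can no longer both try to create the same
    // configuration name.
    V getOrCreate(K request, Function<K, V> factory) {
        return cache.computeIfAbsent(request, factory);
    }
}
```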
I think we probably won't be able to get any further with more of the same; I think we need to do one of two things (could maybe do both):
Solution 2 would look like so:

    spotlessRoot {
        java { googleJavaFormat() }
        // any other formatters, but no need to specify a target
    }

This extension would only be valid in the root project, it wouldn't have a target, and it would create all the configurations used in all subprojects. When a subproject requested a formatter which the root project didn't have, it would throw an error with a suggested fix.

I think we could maybe do both: do 1 to fix this issue, although we'll be slow and the configuration cache still won't work, and then later users have the option to speed up their builds and add configuration-cache support with 2. Actually doing this is a big project and it's not going to make the top of my todo list, but I'm happy to coach someone who wants to take a whack at it.
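A rough sketch of how that root-registered lookup could behave on the plugin side; the class, method, and configuration names below are illustrative assumptions, not Spotless's actual API:

```java
import org.gradle.api.GradleException;
import org.gradle.api.Project;
import org.gradle.api.artifacts.Configuration;

// Hypothetical plugin-side helper: only the root project ever creates
// configurations; subprojects merely look them up.
class RootRegisteredConfigurations {
    static Configuration registerInRoot(Project root, String formatter) {
        // Called once per formatter declared in the root-only extension,
        // so there is no concurrent create() across subprojects.
        return root.getConfigurations().maybeCreate("spotless-" + formatter);
    }

    static Configuration lookupFromSubproject(Project sub, String formatter) {
        Configuration conf = sub.getRootProject().getConfigurations()
                .findByName("spotless-" + formatter);
        if (conf == null) {
            throw new GradleException("Formatter '" + formatter + "' is not declared "
                    + "in the root project's spotlessRoot block; add it there so its "
                    + "configuration is created up front.");
        }
        return conf;
    }
}
```

The point of the design is that only the root project ever creates configurations, so subprojects can never race each other.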
Hi! Is there any known workaround for this? I.e., getting Spotless to run at all with the current version of Gradle. I don't know if it helps, but here is the build.gradle of a small example project where I get this issue every time I run it.
Thanks for the example @skagedal. We've put this off too long; I'm digging in now.
The two PRs above allow us to sidestep the synchronization issue entirely, which will hopefully resolve this once and for all. It is a breaking change, though, and will require a little bit of integration effort. I'll post back when they are merged and ready to use.
I believe these issues are all resolved in plugin-gradle 6.0.0; see the release notes for details. Please reopen if the issue reappears.
Seems to work great, thank you very much! Now running into the next issue, but I'll comment there instead if needed. :)
We occasionally see this error in our Elasticsearch build:
The corresponding stack trace:
The issue was initially raised in the Elasticsearch issue tracker at elastic/elasticsearch#77837.
The source is available at https://github.com/elastic/elasticsearch
Looking into this bug, I noticed that moving away from detached configurations in GradleProvisioner#fromRootBuildscript introduced this issue. It seems like either an issue with the caching of the Requests or a problem with parallel execution (though I'm not sure how that could happen here with the current Gradle behaviour). Maybe using maybeCreate instead of create for getting the configuration is enough here for now. Happy to create a Pull Request with that change.
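For reference, a small sketch of the create versus maybeCreate difference against Gradle's ConfigurationContainer; the class, method, and configuration names are made up for illustration:

```java
import org.gradle.api.Project;
import org.gradle.api.artifacts.Configuration;

class CreateVsMaybeCreate {
    // create(name) throws InvalidUserDataException when a configuration
    // with that name already exists -- the failure seen in this issue when
    // two code paths race to register the same name.
    // maybeCreate(name) returns the existing configuration instead of
    // failing, at the cost of silently tolerating a double registration.
    static Configuration resolveSpotlessDeps(Project project, String name) {
        return project.getRootProject().getConfigurations().maybeCreate(name);
    }
}
```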