Adopt Datadog Test Optimization tool #1721
@aduh95 thank you for letting us know. Can you file an issue following these guidelines, and we will review it?
I went through the doc for flaky tests: https://docs.datadoghq.com/tests/flaky_test_management. At best, the tool would provide some analytics on our flaky tests. As mentioned by other folks earlier, detecting flaky tests is not the problem; we want a solution for making those tests reliable. So we don't need what this tool offers, it's just nice to have. If we adopt this tool:
All of this would attract more users towards this commercial product and grow Datadog's brand, which means more profit for Datadog. Since we don't need this and it really is a creative way for Datadog to profit, we have leverage. I'd say that if they donate a respectable amount of money to our GitHub Sponsors / OpenCollective, we can consider this.
Why do you think that would be necessary?
I initially thought that collaborators working on test reliability would need to use it, but now that I think more about it, it's optional.
We do have an integration of BuildPulse, which IIUC offers similar features: nodejs/build#3575 (and the results are available to see in nodejs/build#3653 (comment)), so it seems fine to integrate another product if someone is up for doing that integration work. Though, similar to BuildPulse, I wouldn't get my hopes up about this actually improving the CI situation, unless someone is motivated to follow up and actually do something about the flakes. I think detecting flakes has been a more or less solved problem for years; the unsolved problem is actually having human beings devote their time to investigating and fixing them (cue the "this is fine" house-on-fire meme). Maybe Datadog would be different, since they have employees who are also Node.js collaborators and might be motivated enough to do this follow-up.
BuildPulse was added a year ago in nodejs/build#3653 and it has not moved the needle. Considering that it is a commercial tool, it feels like a lost opportunity to use our leverage to negotiate a donation in exchange for its adoption in Node.js, but that's in the past now. I don't think it is a good idea to grandfather in other commercial products using BuildPulse as an argument. I also don't think any Datadog collaborator has been organically working on test reliability recently, and I don't think the absence of this tool could be considered a blocker for them to work on that. Datadog funding someone to work specifically on test reliability in Node.js, even part-time, in exchange for having this tool adopted would sound really cool, although I'm aware of much more reputable companies making empty promises, so this being one would not be a surprise. So negotiating for some kind of commitment with immediate results, like a sizable donation, before adopting the tool still feels like something worth considering.
Personally, I feel that it's fine to accept an integration as long as someone volunteers to do the integration work and will manage the data responsibly; I don't find it necessary to reject volunteered work done out of goodwill just so we can turn it into leverage, but YMMV. (IIUC, the discussion about using Datadog's tool wasn't meant to be about publicity for Datadog; it was more them offering that this tool might be useful to us, out of goodwill?) But even if it's done in exchange for an immediate donation, until nodejs/admin#955 gets fixed, it seems the donation cannot be used for anything meaningful soon-ish anyway?
I'd normally support integrating tools like this, but this one is a commercial product that Node.js doesn't need. I would still encourage Datadog to add this to Node.js, paired with a meaningful donation. From Datadog's perspective, if I wanted Node.js to adopt my commercial project, I'd pitch it as doing Node.js a favour and not focus on what I'd gain in return (like publicity), because that wouldn't help my case. These are Datadog's open source guidelines on project requirements:
It shows they're quite selective: they clearly target open source projects with a strong community because they know that translates into paying customers. This kind of offer is ultimately creative marketing, which is fine as long as it is balanced. If Node.js adopts this tool, Datadog stands to gain real value in terms of publicity, users and even paying customers, but for Node.js the impact would be minimal, as CI reliability would still remain an unsolved problem. That imbalance is why I believe a donation is a fair and reasonable ask. I don't think nodejs/admin#955 not landing prevents Datadog from donating through our GitHub Sponsors and OpenCollective. Even if we aren't able to use the funds immediately, they won't go to waste; they will support Node.js once the processes are in place.
I think there might be a bit of overthinking about what Datadog wants here. As far as I can tell, it's just some collaborators discussing the CI problem at the summit, and a collaborator who happens to work at Datadog mentioning that they have a tool that might help and could be offered for free. I think we are wearing our collaborator hats when we discuss this, thinking about what would help the project, not really wearing business hats and speaking on behalf of Node.js vs. some company.
I get that the Datadog collaborator raised this in good faith and that their goal was just to help the project. But the tool is a Datadog product, and any use of it would be an arrangement with the company, not with the collaborator suggesting it. Datadog's open source guidelines reflect goals like brand publicity and customer growth, which feels different from the collaborator's intent, so I thought that context was worth considering.
Based on the discussion in the TSC meeting today, we agreed that, without a strong advocate for the addition (and we don't believe there is one currently), the incremental information that would become available would not significantly improve our ability to deal with flaky tests, so at this point we should not proceed with the suggestion. Closing based on that. Please let us know if that was not the right thing to do.
What was the perceived cost of proceeding with the suggestion?
So, yes, I'm a Datadog employee and a Node.js collaborator, and yes, I brought this up at the Dublin summit and again in Paris. Because of the discussion of intentions, it's worth clarifying some things. I can assure you that I suggested this as a result of the following:
That's it. I'm certainly not wearing my marketing hat here.
That's exactly my intent here.
@bengl ... I refrained from commenting on this during the TSC discussion simply because I'm not familiar enough with the tool to have a reasonable opinion one way or the other. I'm open to further discussion on it but need to understand more about the tool and its benefits.
I'll reopen since there seem to be ongoing questions/discussion.
From the discussion in the TSC meeting today: it really comes down to whether there is a volunteer to implement, maintain, and monitor it. @bengl, we are not aware of a volunteer right now; do you know if there is somebody who would step up to do that?
As we were discussing #1614 at the Dublin Collab Summit last year, @bengl mentioned we could use https://www.datadoghq.com/product/test-optimization/, hopefully for free, if we fill in https://www.datadoghq.com/partner/open-source/. So… should we do it? Are there any blockers?
/cc @nodejs/tsc @rginn @bensternthal