Allow out-of-tree worker modules #431


Merged
merged 1 commit into from
Mar 27, 2020

Conversation

mattaezell
Contributor

If a worker is specified with a '.' in the name, assume it is a
fully qualified module name that should be used. This supports
custom worker classes that are distributed outside of ClusterShell.
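
The dispatch described above can be sketched as follows. This is an illustrative reconstruction of the '.'-based logic, not the actual ClusterShell code; the function names are assumptions.

```python
import importlib

def worker_modname(workername):
    # Hypothetical sketch of the patch's dispatch: a name containing
    # a '.' is treated as a fully qualified module path, anything
    # else as an in-tree ClusterShell worker module.
    if '.' in workername:
        return workername
    return "ClusterShell.Worker.%s" % workername.capitalize()

def load_worker_module(workername):
    # Import the resolved module; an out-of-tree worker only needs
    # to be reachable on the Python path.
    return importlib.import_module(worker_modname(workername))
```

So `ssh` would still resolve to `ClusterShell.Worker.Ssh`, while something like `mypkg.myworker` would be imported directly.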

@degremont
Collaborator

Interesting. I'm globally OK with the patch.

It would be good to add a test in DefaultTest.py.
You could create a temporary file with a dummy worker class in it, add that file's directory to the Python load path, and try loading it. That should cover the new case in your patch.
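
The suggested test could look roughly like this; it is a sketch, not the test that ended up in DefaultTest.py, and the module and class names are illustrative.

```python
import importlib
import os
import sys
import tempfile

# Write a dummy worker module into a temporary directory, put that
# directory on sys.path, then import it by module name, the same way
# an out-of-tree worker would be loaded.
tmpdir = tempfile.mkdtemp()
with open(os.path.join(tmpdir, "dummyworker.py"), "w") as f:
    f.write("class WorkerDummy:\n    pass\n")
sys.path.insert(0, tmpdir)

mod = importlib.import_module("dummyworker")
assert hasattr(mod, "WorkerDummy")
```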

@degremont
Collaborator

I'm curious to know what's your use case? :)

@mattaezell
Contributor Author

I'm curious to know what's your use case? :)

I have a python library that has a pretty heavy initialization overhead, because it loads yaml configs and runs plugins to discover facts about nodes (such as the BMC address for a node). The (existing) program flow is library => clustershell => exec the library, potentially resulting in two initializations. Using a native worker allows a much more direct path.

I don't expect that my worker would be desired upstream, and distributing it with the library makes it easier to iterate features without having to worry about maintaining cross-version compatibility between the worker and the library.

@mattaezell
Contributor Author

Could be good to add a test in DefaultTest.py.

I'll admit I'm not familiar with the test suite. I'll give it a try and look into adding a test. Thanks

@mattaezell
Contributor Author

Instead of looking for a '.' to indicate that the module is 'fully qualified', should we instead just manually map 'short' known ClusterShell workers to their fully qualified module names and then attempt to import whatever is left?

    if workername in [ 'exec', 'pdsh', 'rsh', 'ssh', 'worker' ]:
        modname = "ClusterShell.Worker.%s" % workername.capitalize()
    else:
        modname = workername

If you don't like the idea of keeping a list of in-tree workers, we could detect valid workers at runtime instead (with slightly more overhead).

I was just trying to rationalize if it made sense to require out-of-tree workers to be in a subpackage, or if a top-level module could be a valid worker. I don't see any reason that it couldn't.

@mattaezell mattaezell force-pushed the fully_qualified_worker branch from c941c1c to 1863979 Compare March 17, 2020 22:32
@degremont
Collaborator

Thanks for this update. Thanks also for adding the tests.

I'm not fond of explicitly listing the in-tree supported modules. The idea was that it should be possible, as an add-on, to add new Workers just by adding a new class in this directory (you could if you needed to). So they need to be able to work just by adding the file.

I'm wondering if we could first try to import it as a ClusterShell module, and if that fails, try importing it as a global module name?
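
A minimal sketch of that fallback idea, assuming `importlib` and the `ClusterShell.Worker.<Name>` naming convention (not the actual ClusterShell implementation):

```python
import importlib

def load_worker(workername):
    # Try the in-tree location first; if that import fails, fall
    # back to treating the name as a fully qualified module path.
    try:
        return importlib.import_module(
            "ClusterShell.Worker.%s" % workername.capitalize())
    except ImportError:
        return importlib.import_module(workername)
```

This avoids a hard-coded list of in-tree workers, at the cost of one extra failed import attempt for out-of-tree names.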

@degremont
Collaborator

We should probably also mention somewhere in the docs that a fully qualified module name is now supported.

@mattaezell mattaezell force-pushed the fully_qualified_worker branch from 1863979 to 9df69d3 Compare March 18, 2020 13:45
@degremont
Collaborator

Except for the Python 3 incompatibility, I'm OK with the patch.

Please update the commit message subject to match the pattern we use, like:

Defaults: xxxx

It would probably be good to add a Signed-off-by line too.

Attempt to load worker modules distributed with ClusterShell as well
as custom worker modules distributed outside of ClusterShell.

Signed-off-by: Matt Ezell <[email protected]>
@degremont degremont requested a review from thiell March 18, 2020 14:42
Collaborator

@thiell thiell left a comment


I like it, thanks!

@thiell thiell merged commit b7fd686 into cea-hpc:master Mar 27, 2020
@thiell thiell added this to the 1.8.4 milestone Nov 4, 2021