Inconsistency in Training With Multiple Threads  #1144

Closed
@ArieJones

Description

We are currently using version 0.50 to create some classifier models, but we are seeing some strange behavior. We set the number of threads on our classifiers because of #217, since we want to control CPU usage on the server.
So we configure a classifier like this:
```csharp
var algo = new StochasticDualCoordinateAscentClassifier()
{
    Caching = CachingOptions.Disk,
    MaxIterations = 100,
    LossFunction = new SmoothedHingeLossSDCAClassificationLossFunction(),
    Shuffle = false,
    // We use one less than the number of processors available.
    NumThreads = System.Environment.ProcessorCount - 1
};
```

What we are noticing is that if we run this on a box with 4 cores, we get a decent model whose micro-accuracy is above 90%. However, when we move the same code to a larger server with 8 cores, we get wildly different results: the micro-accuracy drops to below 60%.
Yikes!

Is there possibly something we are missing in the documentation that would address this?
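
For reference, here is a minimal sketch of one way to isolate the threading variable, assuming the same legacy 0.50 API as above. Pinning `NumThreads = 1` as a deterministic baseline is our assumption, and `TrainAndEvaluate` is a hypothetical helper standing in for the rest of our pipeline code:

```csharp
// Sketch: train the same model twice, varying only NumThreads, to see
// whether thread count alone explains the accuracy gap between machines.
foreach (var threads in new[] { 1, System.Environment.ProcessorCount - 1 })
{
    var algo = new StochasticDualCoordinateAscentClassifier()
    {
        Caching = CachingOptions.Disk,
        MaxIterations = 100,
        LossFunction = new SmoothedHingeLossSDCAClassificationLossFunction(),
        Shuffle = false,
        NumThreads = threads
    };

    // Hypothetical helper: builds the pipeline, trains on the same data,
    // and returns the resulting micro-accuracy.
    double microAccuracy = TrainAndEvaluate(algo);
    Console.WriteLine($"NumThreads={threads}: micro-accuracy={microAccuracy:P1}");
}
```

If the single-threaded runs agree across the 4-core and 8-core boxes while the multi-threaded runs diverge, that would point at the thread count as the variable to focus on.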
