Support Windows x86 builds #97


Closed
eerhardt opened this issue May 9, 2018 · 18 comments
Labels
Build Build related issue
Comments

@eerhardt
Member

eerhardt commented May 9, 2018

System information

  • OS version/distro: Windows
  • .NET Version (e.g., dotnet --info): All - .NET Framework (desktop), .NET Core

Issue

  • What did you do? Try to use ML.NET in an x86 process
  • What happened? It doesn't work because CpuMathNative can't be loaded into an x86 process
  • What did you expect? I expected it to work in a Windows x86 process.

Notes

There are cases where developers are forced to use x86 processes. For example, their hosting environment may only support x86 processes, or they may be using other native libraries that are only available for x86 and don't want to (or can't) spin up multiple processes (x64 for ML.NET and x86 for the other native libraries).

@Ivanidzo4ka
Contributor

Is this a good place for discussion?
From the C# point of view, we currently have SseUtils and AvxUtils classes which we call directly. We need to change our code to use something like CpuUtils, which would validate the platform, check instruction availability, and fall back to the proper implementation.

From the native point of view, is it possible to have x64 and x86 code in one library, or do we need to compile two versions?

Or shall we get rid of the native code and switch to hardware intrinsics in .NET Core 2.1?

@eerhardt
Member Author

eerhardt commented May 9, 2018

From the native point of view, is it possible to have x64 and x86 code in one library, or do we need to compile two versions?

We compile two versions: one for x64 and one for x86. These two versions get put into different folders in our NuGet package: runtimes\win-{arch}\CpuMathNative.dll.

Then, depending on the architecture used by the application, the correct assembly for that architecture is picked up. This is exactly how we support macOS and Linux assemblies as well: each native asset is in its own folder in the NuGet package, and the right one gets picked out.

In the case of a "portable" application (one that can run anywhere on any processor), all the assets across all the architectures are published into a runtimes folder. The user can then move that same app between multiple OS/archs and the app still runs because all the assets are there.
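The per-RID layout described above can be pictured roughly like this (an illustrative sketch; exact folder and file names in the shipped package may differ — NuGet's convention places native assets under a `native/` subfolder of each runtime identifier):

```shell
# runtimes/
#   win-x64/native/CpuMathNative.dll       <- loaded by x64 Windows processes
#   win-x86/native/CpuMathNative.dll       <- would serve x86 Windows processes
#   linux-x64/native/libCpuMathNative.so
#   osx-x64/native/libCpuMathNative.dylib
```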

Or shall we get rid of the native code and switch to hardware intrinsics in .NET Core 2.1?

Hardware intrinsics only work in .NET Core, so we would no longer be a netstandard library.
Note: we also have the FastTree native code in C++.

@fiigii

fiigii commented May 10, 2018

Or shall we get rid of the native code and switch to hardware intrinsics in .NET Core 2.1?

Hardware intrinsics in .NET Core 2.1 support both 64-bit and 32-bit, so switching to HW intrinsics will solve the problem at least for the .NET Core build. We can keep the native code for the other platform builds (e.g., .NET Framework, Mono, etc.).

@rowanmiller

Adding a +1 to x86 support. I hit this issue when building out a demo, as I was working in a WPF app that used some UI controls that required x86. There aren't x64 alternatives for the controls, so moving wasn't an option.

@dan-drews
Contributor

Adding another +1 to this. I wanted to add this to an older application that targets x86. I will look at changing it to 64-bit (I'm assuming without issue) in the near future in order to use this.

I could spin up a new web service to handle the ML predictions, but that would take a while to gain traction. Thankfully, our application is not complicated, so converting to x64 should go off without a hitch, but I can only assume that others will expect this support.

@Anipik
Contributor

Anipik commented Jul 21, 2018

Another +1 for x86. I was trying to use ML.NET in Azure Functions, but Azure Functions only supports x86 processes, and we require CpuMathNative both for training and for predicting values.
The error occurs while calculating L2Norm:

Microsoft.ML.Runtime.Data.LpNormNormalizerTransform.GetGetterCore.AnonymousMethod__5(ref Microsoft.ML.Runtime.Data.VBuffer<float> dst) Line 434
Microsoft.ML.Data.dll!Microsoft.ML.Runtime.Data.ConcatTransform.MakeGetter.AnonymousMethod__0(ref Microsoft.ML.Runtime.Data.VBuffer<float> dst) Line 835
Microsoft.ML.Data.dll!Microsoft.ML.Runtime.Data.ConcatTransform.MakeGetter.AnonymousMethod__0(ref Microsoft.ML.Runtime.Data.VBuffer<float> dst) Line 835
Microsoft.ML.Data.dll!Microsoft.ML.Runtime.Data.SchemaBindablePredictorWrapperBase.GetValueGetter.AnonymousMethod__0(ref Microsoft.ML.Runtime.Data.VBuffer<float> dst) Line 157
Microsoft.ML.Data.dll!Microsoft.ML.Runtime.Data.PredictedLabelScorerBase.EnsureCachedPosition<Microsoft.ML.Runtime.Data.VBuffer<float>>(ref long cachedPosition, ref Microsoft.ML.Runtime.Data.VBuffer<float> score, Microsoft.ML.Runtime.Data.IRow boundRow, Microsoft.ML.Runtime.Data.ValueGetter<Microsoft.ML.Runtime.Data.VBuffer<float>> scoreGetter) Line 443
Microsoft.ML.Data.dll!Microsoft.ML.Runtime.Data.MultiClassClassifierScorer.GetPredictedLabelGetter.AnonymousMethod__0(ref uint dst) Line 564
Microsoft.ML.Data.dll!Microsoft.ML.Runtime.Data.KeyToValueTransform.KeyToValueMap<uint, Microsoft.ML.Runtime.Data.DvText>.GetMappingGetter.AnonymousMethod__0(ref Microsoft.ML.Runtime.Data.DvText dst) Line 309
Microsoft.ML.Data.dll!Microsoft.ML.Runtime.Data.DataViewUtils.Splitter.InPipe.Impl<Microsoft.ML.Runtime.Data.DvText>.Fill() Line 732
Microsoft.ML.Data.dll!Microsoft.ML.Runtime.Data.DataViewUtils.Splitter.Consolidator.ConsolidateCore.AnonymousMethod__2() Line 418
System.Threading.Thread.dll!System.Threading.Thread.ThreadMain_ThreadStart() Line 93
System.Private.CoreLib.dll!System.Threading.ThreadHelper.ThreadStart_Context(object state) Line 62
System.Private.CoreLib.dll!System.Threading.ExecutionContext.RunInternal(System.Threading.ExecutionContext executionContext, System.Threading.ContextCallback callback, object state) Line 167
System.Private.CoreLib.dll!System.Threading.ThreadHelper.ThreadStart() Line 91

@klausmh
Contributor

klausmh commented Aug 23, 2018

Is there any work or decision on this? I am looking to integrate into a scenario where we need both x86 and x64.

@danmoseley
Member

@tannergooding is going to have a quick look to see whether it's trivial.

@tannergooding
Member

Looks like most of the infrastructure for building 32-bit native binaries already exists (we just need to ensure /p:TargetArchitecture=x86 is set; I tested by setting an environment variable).

The mlnetmkldeps package needs to be updated to also include x86 libraries and the build system likely needs some easy mechanism to pass through x86 as the target architecture.
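Concretely, the property mentioned above could be supplied in either of two ways; both forms below are a sketch, and whether the repo's build script forwards /p: properties exactly like this is an assumption:

```shell
# Option 1: pass the MSBuild property directly (assumes build.cmd forwards /p: args)
build.cmd /p:TargetArchitecture=x86

# Option 2: set it as an environment variable first, as tested in the comment above
set TargetArchitecture=x86
build.cmd
```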

@tannergooding
Member

With #860, the build succeeds for everything except copying the native TensorFlow dependency.

TensorFlow is 64-bit only, with no planned support for 32-bit (from what I can tell). Building it from source (on Windows) also requires building several other build dependencies from source to produce a 32-bit build.

Ignoring TensorFlow, the following test assemblies fully pass:

  • Microsoft.ML.CodeAnalyzer.Tests (13 Passed, 0 Failed, 0 Skipped)
  • Microsoft.ML.CpuMath.UnitTests.netstandard (72 Passed, 0 Failed, 0 Skipped)
  • Microsoft.ML.StaticPipelineTesting (10 Passed, 0 Failed, 0 Skipped)
  • Microsoft.ML.Sweeper.Tests (21 Passed, 0 Failed, 0 Skipped)

The following had failures:

  • Microsoft.ML.Core.Tests (90 Passed, 4 Failed, 1 Skipped)
  • Microsoft.ML.Predictor.Tests (23 Passed, 26 Failed, 56 Skipped)
  • Microsoft.ML.TestFramework (13 Passed, 1 Failed, 57 Skipped)
  • Microsoft.ML.Tests (101 Passed, 10 Failed, 5 Skipped)
    • Failures: Microsoft.ML.Tests.txt
      • 2: Unable to find lib_lightgbm
      • 6: Unable to find tensorflow
      • 1: Output and baseline mismatch
      • 1: Expected vs Actual different
  • Microsoft.ML.FSharp.Tests (0 Passed, 3 Failed, 0 Skipped)

@eerhardt
Member Author

Unable to find lib_lightgbm

LightGBM is a separate project (https://github.com/Microsoft/LightGBM/) and I don't believe they have support for x86 currently. @guolinke - do you have plans for making Windows x86 binaries available?

@tannergooding - if not, we will need to treat LightGBM like we do for TensorFlow.

Microsoft.ML.FSharp.Tests - Attempt to load program with incorrect format

@dsyme - any thoughts here why the F# tests wouldn't support x86?

@tannergooding
Member

For F#, based on the failure, the tests are likely compiled for 64-bit only. Will confirm after lunch.

@tannergooding
Member

Yep. F# is explicitly targeting x64: https://github.com/dotnet/machinelearning/blob/master/test/Microsoft.ML.FSharp.Tests/Microsoft.ML.FSharp.Tests.fsproj#L11

Changing it to AnyCPU works and all 3 tests pass.
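For reference, the change amounts to a one-property edit in the project file (a fragment sketched from the linked fsproj, not its full contents):

```xml
<PropertyGroup>
  <!-- was: <PlatformTarget>x64</PlatformTarget> -->
  <PlatformTarget>AnyCPU</PlatformTarget>
</PropertyGroup>
```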

@dsyme
Contributor

dsyme commented Sep 12, 2018

@dsyme - any thoughts here why the F# tests wouldn't support x86?

No specific reason, fine to make them AnyCPU

@guolinke
Contributor

@eerhardt A 32-bit version of LightGBM could be compiled; however, it would be much slower, since many data types in the LightGBM code are 64-bit.

@danmoseley
Member

@tannergooding @artidoro @eerhardt I believe this can move to "Done"?

@artidoro already has #1295 for CI/builds.

@eerhardt
Member Author

I didn't think we marked things "done" until they were merged.

There are 2 outstanding PRs for this:

#1295 - for PR validation
#1306 - for official build support

When both of those PRs are merged, I believe this can be marked "Done".

@danmoseley
Member

I didn't think we marked things "done" until they were merged.

Agreed, I figured there was a separate issue. Seems we can close as done now, though?

@shauheen shauheen added this to the 1018 milestone Oct 29, 2018
@shauheen shauheen added the Build Build related issue label Oct 29, 2018
@tannergooding tannergooding removed their assignment May 26, 2020
@ghost ghost locked as resolved and limited conversation to collaborators Mar 31, 2022