
Fixes #591: typos, adding the type attribute to lists, and moving the name attribute for some examples #592


Merged: 4 commits merged on Jul 28, 2018
48 changes: 47 additions & 1 deletion src/Microsoft.ML.Data/Transforms/doc.xml
@@ -28,7 +28,7 @@
</summary>
<remarks>
The TextToKeyConverter transform builds up term vocabularies (dictionaries).
The TextToKey Converter and the <see cref="T:Microsoft.ML.Transforms.HashConverter"/> are the two primary mechanisms by which raw input is transformed into keys.
The TextToKeyConverter and the <see cref="T:Microsoft.ML.Transforms.HashConverter"/> are the two primary mechanisms by which raw input is transformed into keys.
If multiple columns are used, each column builds/uses exactly one vocabulary.
The output columns are KeyType-valued.
The Key value is the one-based index of the item in the dictionary.
@@ -49,6 +49,52 @@
</code>
</example>
</example>
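
To make the key-mapping semantics above concrete (one vocabulary per column, items mapped to one-based key values), here is a minimal sketch in plain C#. The dictionary-building logic is hypothetical and stands in for the transform's internals rather than showing the ML.NET API:

```csharp
using System;
using System.Collections.Generic;

class TermDictionarySketch
{
    static void Main()
    {
        // One vocabulary per column; the key value is the one-based
        // index of the item in the dictionary.
        var vocab = new Dictionary<string, uint>();
        uint KeyOf(string term)
        {
            if (!vocab.TryGetValue(term, out var key))
                vocab[term] = key = (uint)vocab.Count + 1;
            return key;
        }

        foreach (var term in new[] { "cat", "dog", "cat", "bird" })
            Console.WriteLine($"{term} -> {KeyOf(term)}");   // 1, 2, 1, 3
    }
}
```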

<member name="NAHandle">
<summary>
Handle missing values by replacing them with either the default value or the indicated value.
</summary>
<remarks>
This transform handles missing values in the input columns. For each input column, it creates an output column
where the missing values are replaced by one of these specified values:
<list type='bullet'>
<item>
<description>The default value of the appropriate type.</description>
</item>
<item>
<description>The mean value of the appropriate type.</description>
</item>
<item>
<description>The max value of the appropriate type.</description>
</item>
<item>
<description>The min value of the appropriate type.</description>
</item>
</list>
<para>The last three work only for numeric/TimeSpan/DateTime kind columns.</para>
<para>
The output column can also optionally include an indicator vector for which slots were missing in the input column.
This can be done only when the indicator vector type can be converted to the input column type, i.e. only for numeric columns.
</para>
<para>
When computing the mean/max/min value, there is also an option to compute it over the whole column instead of per slot.
This option has a default value of true for variable length vectors, and false for known length vectors.
It can be changed to true for known length vectors, but it results in an error if changed to false for variable length vectors.
</para>
</remarks>
<seealso cref=" Microsoft.ML.Runtime.Data.MetadataUtils.Kinds.HasMissingValues"/>
[Review comment, Member] ❔ Is this missing a `T:` prefix? It's unclear why the others are given in full form, but I noticed this one is different.

<seealso cref="T:Microsoft.ML.Data.DataKind"/>
</member>
<example name="NAHandle">
<example>
<code language="csharp">
pipeline.Add(new MissingValueHandler(&quot;FeatureCol&quot;, &quot;CleanFeatureCol&quot;)
[Review comment, Member] ❔ Why is the &quot; escape needed? I would only expect this if it appeared as the value of an XML attribute.

{
ReplaceWith = NAHandleTransformReplacementKind.Mean
[Review comment, Member] 💡 Should likely be indented.

});
</code>
</example>
</example>

</members>
</doc>
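
The replacement semantics the NAHandle remarks describe can be sketched in a few lines of plain C#. This is a hypothetical illustration (whole-column mean replacement plus an indicator vector for the missing slots), not the ML.NET implementation; `ReplaceWithMean` is an invented helper:

```csharp
using System;
using System.Linq;

class NAHandleSketch
{
    // Replace NaN slots with the column mean and report which slots were missing.
    static (float[] Cleaned, bool[] Indicator) ReplaceWithMean(float[] column)
    {
        var present = column.Where(v => !float.IsNaN(v)).ToArray();
        float mean = present.Length > 0 ? present.Average() : 0f; // default when all missing

        var cleaned = new float[column.Length];
        var indicator = new bool[column.Length];
        for (int i = 0; i < column.Length; i++)
        {
            indicator[i] = float.IsNaN(column[i]);
            cleaned[i] = indicator[i] ? mean : column[i];
        }
        return (cleaned, indicator);
    }

    static void Main()
    {
        var (cleaned, missing) = ReplaceWithMean(new[] { 1f, float.NaN, 3f });
        Console.WriteLine(string.Join(", ", cleaned));  // 1, 2, 3
        Console.WriteLine(string.Join(", ", missing));  // False, True, False
    }
}
```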
2 changes: 1 addition & 1 deletion src/Microsoft.ML.FastTree/TreeEnsembleFeaturizer.cs
@@ -807,7 +807,7 @@ public static partial class TreeFeaturize
Desc = TreeEnsembleFeaturizerTransform.TreeEnsembleSummary,
UserName = TreeEnsembleFeaturizerTransform.UserName,
ShortName = TreeEnsembleFeaturizerBindableMapper.LoadNameShort,
XmlInclude = new[] { @"<include file='../Microsoft.ML.FastTree/doc.xml' path='doc/members/member[@name=""TreeEnsembleFeaturizerTransform""]'/>" })]
XmlInclude = new[] { @"<include file='../Microsoft.ML.FastTree/doc.xml' path='doc/members/member[@name=""TreeEnsembleFeaturizerTransform""]/*'/>" })]
public static CommonOutputs.TransformOutput Featurizer(IHostEnvironment env, TreeEnsembleFeaturizerTransform.ArgumentsForEntryPoint input)
{
Contracts.CheckValue(env, nameof(env));
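The `/*` suffix added here (and in the other XmlInclude changes below) changes what the include XPath selects: without it, the query returns the `<member>` wrapper element itself; with it, the query returns the wrapper's children, which is the content that should be spliced into the docs. A minimal sketch of the difference, using invented XML for illustration:

```csharp
using System;
using System.Xml.Linq;
using System.Xml.XPath;

class IncludeXPathSketch
{
    static void Main()
    {
        // Invented stand-in for a doc.xml file, for illustration only.
        var doc = XDocument.Parse(@"
<doc>
  <members>
    <member name=""TreeEnsembleFeaturizerTransform"">
      <summary>Short summary.</summary>
      <remarks>Longer remarks.</remarks>
    </member>
  </members>
</doc>");

        // Without /* the query selects the <member> wrapper element itself.
        foreach (var e in doc.XPathSelectElements(
            "doc/members/member[@name='TreeEnsembleFeaturizerTransform']"))
            Console.WriteLine(e.Name);   // member

        // With /* it selects the wrapper's children instead.
        foreach (var e in doc.XPathSelectElements(
            "doc/members/member[@name='TreeEnsembleFeaturizerTransform']/*"))
            Console.WriteLine(e.Name);   // summary, remarks
    }
}
```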
32 changes: 16 additions & 16 deletions src/Microsoft.ML.FastTree/doc.xml
@@ -95,7 +95,7 @@
<para>Generally, ensemble models provide better coverage and accuracy than single decision trees.
Each tree in a decision forest outputs a Gaussian distribution.</para>
<para>For more see: </para>
<list>
<list type='bullet'>
<item><description><a href='http://en.wikipedia.org/wiki/Random_forest'>Wikipedia: Random forest</a></description></item>
<item><description><a href='http://jmlr.org/papers/volume7/meinshausen06a/meinshausen06a.pdf'>Quantile regression forest</a></description></item>
<item><description><a href='https://blogs.technet.microsoft.com/machinelearning/2014/09/10/from-stumps-to-trees-to-forests/'>From Stumps to Trees to Forests</a></description></item>
@@ -146,7 +146,7 @@
<summary>
Trains a tree ensemble, or loads it from a file, then maps a numeric feature vector
to three outputs:
<list>
<list type='number'>
<item><description>A vector containing the individual tree outputs of the tree ensemble.</description></item>
<item><description>A vector indicating the leaves that the feature vector falls on in the tree ensemble.</description></item>
<item><description>A vector indicating the paths that the feature vector falls on in the tree ensemble.</description></item>
@@ -157,28 +157,28 @@
</summary>
<remarks>
In machine learning it is a pretty common and powerful approach to utilize the already trained model in the process of defining features.
<para>One such example would be the use of model's scores as features to downstream models. For example, we might run clustering on the original features,
<para>One such example would be the use of model&apos;s scores as features to downstream models. For example, we might run clustering on the original features,
[Review comment, Member] ❔ Why is this required? It's awkward to read and seems like it would be easy to regress.

and use the cluster distances as the new feature set.
Instead of consuming the model's output, we could go deeper, and extract the 'intermediate outputs' that are used to produce the final score. </para>
Instead of consuming the model&apos;s output, we could go deeper, and extract the &apos;intermediate outputs&apos; that are used to produce the final score. </para>
There are a number of famous or popular examples of this technique:
<list>
<item><description>A deep neural net trained on the ImageNet dataset, with the last layer removed, is commonly used to compute the 'projection' of the image into the 'semantic feature space'.
It is observed that the Euclidean distance in this space often correlates with the 'semantic similarity': that is, all pictures of pizza are located close together,
<list type='bullet'>
<item><description>A deep neural net trained on the ImageNet dataset, with the last layer removed, is commonly used to compute the &apos;projection&apos; of the image into the &apos;semantic feature space&apos;.
It is observed that the Euclidean distance in this space often correlates with the &apos;semantic similarity&apos;: that is, all pictures of pizza are located close together,
and far away from pictures of kittens. </description></item>
<item><description>A matrix factorization and/or LDA model is also often used to extract the 'latent topics' or 'latent features' associated with users and items.</description></item>
<item><description>The weights of the linear model are often used as a crude indicator of 'feature importance'. At the very minimum, the 0-weight features are not needed by the model,
and there's no reason to compute them. </description></item>
<item><description>A matrix factorization and/or LDA model is also often used to extract the &apos;latent topics&apos; or &apos;latent features&apos; associated with users and items.</description></item>
<item><description>The weights of the linear model are often used as a crude indicator of &apos;feature importance&apos;. At the very minimum, the 0-weight features are not needed by the model,
and there&apos;s no reason to compute them. </description></item>
</list>
<para>Tree featurizer uses the decision tree ensembles for feature engineering in the same fashion as above.</para>
<para>Let's assume that we've built a tree ensemble of 100 trees with 100 leaves each (it doesn't matter whether boosting was used or not in training).
<para>Let&apos;s assume that we&apos;ve built a tree ensemble of 100 trees with 100 leaves each (it doesn&apos;t matter whether boosting was used or not in training).
If we associate each leaf of each tree with a sequential integer, we can, for every incoming example x,
produce an indicator vector L(x), where Li(x) = 1 if the example x 'falls' into the leaf #i, and 0 otherwise.</para>
produce an indicator vector L(x), where Li(x) = 1 if the example x &apos;falls&apos; into the leaf #i, and 0 otherwise.</para>
<para>Thus, for every example x, we produce a 10000-valued vector L, with exactly 100 1s and the rest zeroes.
This 'leaf indicator' vector can be considered the ensemble-induced 'footprint' of the example.</para>
<para>The 'distance' between two examples in the L-space is actually a Hamming distance, and is equal to the number of trees that do not distinguish the two examples.</para>
This &apos;leaf indicator&apos; vector can be considered the ensemble-induced &apos;footprint&apos; of the example.</para>
<para>The &apos;distance&apos; between two examples in the L-space is actually a Hamming distance, and is equal to the number of trees that do not distinguish the two examples.</para>
<para>We could repeat the same thought process for the non-leaf, or internal, nodes of the trees (we know that each tree has exactly 99 of them in our 100-leaf example),
and produce another indicator vector, N (size 9900), for each example, indicating the 'trajectory' of each example through each of the trees.</para>
<para>The distance in the combined 19900-dimensional LN-space will be equal to the number of 'decisions' in all trees that 'agree' on the given pair of examples.</para>
and produce another indicator vector, N (size 9900), for each example, indicating the &apos;trajectory&apos; of each example through each of the trees.</para>
<para>The distance in the combined 19900-dimensional LN-space will be equal to the number of &apos;decisions&apos; in all trees that &apos;agree&apos; on the given pair of examples.</para>
<para>The TreeLeafFeaturizer is also producing the third vector, T, which is defined as Ti(x) = output of tree #i on example x.</para>
</remarks>
<example>
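The L(x) construction in the remarks above is easy to make concrete. In this hypothetical sketch, `LeafOf` is an invented stand-in for a trained ensemble's routing; with 100 trees of 100 leaves each, L(x) has 10000 slots and exactly 100 ones, and the Hamming distance between two examples' vectors picks up two differing bits from every tree that routes them to different leaves:

```csharp
using System;
using System.Linq;

class LeafIndicatorSketch
{
    const int Trees = 100, LeavesPerTree = 100;

    // Invented stand-in for a trained ensemble: deterministically maps an
    // example id to a leaf index in each tree.
    static int LeafOf(int tree, int exampleId) =>
        (exampleId * 31 + tree * 17) % LeavesPerTree;

    // L(x): Trees * LeavesPerTree slots, exactly one 1 per tree.
    static bool[] LeafIndicator(int exampleId)
    {
        var l = new bool[Trees * LeavesPerTree];
        for (int t = 0; t < Trees; t++)
            l[t * LeavesPerTree + LeafOf(t, exampleId)] = true;
        return l;
    }

    static void Main()
    {
        bool[] la = LeafIndicator(1), lb = LeafIndicator(2);
        Console.WriteLine(la.Count(b => b));   // 100 ones, one per tree

        // Each tree that routes the two examples to different leaves
        // contributes two differing bits.
        int hamming = la.Zip(lb, (a, b) => a != b ? 1 : 0).Sum();
        Console.WriteLine(hamming);
    }
}
```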
2 changes: 1 addition & 1 deletion src/Microsoft.ML.KMeansClustering/doc.xml
@@ -13,7 +13,7 @@
YYK-Means observes that there is a lot of redundancy across iterations in the KMeans algorithms and most points do not change their clusters during an iteration.
It uses various bounding techniques to identify this redundancy and eliminate many distance computations and optimize centroid computations.
<para>For more information on K-means, and K-means++ see:</para>
<list>
<list type='bullet'>
<item><description><a href='https://en.wikipedia.org/wiki/K-means_clustering'>K-means</a></description></item>
<item><description><a href='https://en.wikipedia.org/wiki/K-means%2b%2b'>K-means++</a></description></item>
</list>
2 changes: 1 addition & 1 deletion src/Microsoft.ML.PCA/doc.xml
@@ -11,7 +11,7 @@
Its training is done using the technique described in the paper: <a href='https://arxiv.org/pdf/1310.6304v2.pdf'>Combining Structured and Unstructured Randomness in Large Scale PCA</a>,
and the paper <a href='https://arxiv.org/pdf/0909.4061v2.pdf'>Finding Structure with Randomness: Probabilistic Algorithms for Constructing Approximate Matrix Decompositions</a>
<para>For more information, see also:</para>
<list>
<list type='bullet'>
<item><description>
<a href='http://web.stanford.edu/group/mmds/slides2010/Martinsson.pdf'>Randomized Methods for Computing the Singular Value Decomposition (SVD) of very large matrices</a>
</description></item>
@@ -15,14 +15,14 @@
<para>See references below for more details.
This trainer is essentially faster than the one introduced in [2] because of some implementation tricks [3].
</para>
<list >
<list type='bullet'>
<item>
[1] <description><a href='http://www.csie.ntu.edu.tw/~cjlin/papers/ffm.pdf'>Field-aware Factorization Machines for CTR Prediction</a></description></item>
<description><a href='http://www.csie.ntu.edu.tw/~cjlin/papers/ffm.pdf'>Field-aware Factorization Machines for CTR Prediction</a></description></item>
<item>
[2] <description><a href='http://jmlr.org/papers/volume12/duchi11a/duchi11a.pdf'>Adaptive Subgradient Methods for Online Learning and Stochastic Optimization</a></description>
<description><a href='http://jmlr.org/papers/volume12/duchi11a/duchi11a.pdf'>Adaptive Subgradient Methods for Online Learning and Stochastic Optimization</a></description>
</item>
<item>
[3] <description><a href='https://github.com/wschin/fast-ffm/blob/master/fast-ffm.pdf'>An Improved Stochastic Gradient Method for Training Large-scale Field-aware Factorization Machine.</a></description>
<description><a href='https://github.com/wschin/fast-ffm/blob/master/fast-ffm.pdf'>An Improved Stochastic Gradient Method for Training Large-scale Field-aware Factorization Machine.</a></description>
</item>
</list>
</remarks>
@@ -123,8 +123,8 @@ public override MultiClassNaiveBayesPredictor Train(TrainContext context)
Desc = "Train a MultiClassNaiveBayesTrainer.",
UserName = UserName,
ShortName = ShortName,
XmlInclude = new[] { @"<include file='../Microsoft.ML.StandardLearners/Standard/MultiClass/doc.xml' path='doc/members/member[@name=""MultiClassNaiveBayesTrainer""]'/>",
@"<include file='../Microsoft.ML.StandardLearners/Standard/MultiClass/doc.xml' path='doc/members/example[@name=""MultiClassNaiveBayesTrainer""]'/>" })]
XmlInclude = new[] { @"<include file='../Microsoft.ML.StandardLearners/Standard/MultiClass/doc.xml' path='doc/members/member[@name=""MultiClassNaiveBayesTrainer""]/*'/>",
@"<include file='../Microsoft.ML.StandardLearners/Standard/MultiClass/doc.xml' path='doc/members/example[@name=""MultiClassNaiveBayesTrainer""]/*'/>" })]
public static CommonOutputs.MulticlassClassificationOutput TrainMultiClassNaiveBayesTrainer(IHostEnvironment env, Arguments input)
{
Contracts.CheckValue(env, nameof(env));
4 changes: 2 additions & 2 deletions src/Microsoft.ML.StandardLearners/Standard/Online/doc.xml
@@ -13,8 +13,8 @@
and an option to update the weight vector using the average of the vectors seen over time (averaged argument is set to True by default).
</remarks>
</member>
<example>
<example name="OGD">
<example name="OGD">
[Review comment, Zruty0 (Contributor), Jul 28, 2018] on `example` (start = 5, length = 7):
is this not an error to have nested examples? #Pending

[Review comment, Contributor, in reply] Yeah, I see it's used everywhere. I wonder why, though?

[Review comment, sfilipi (Member, Author), Jul 28, 2018] It is legal in XML. The internal example node is needed by the documentation to display the snippet as an example. I needed a named wrapper to get the node, and just called it example. This is how they are being included: see the XmlInclude changes in this PR. I don't need to include them with the /* notation; I can just include the named node itself, but I wasn't sure whether the docs tools would tolerate the name attribute on it. With both you and Ivan commenting on it, though, it might be worth trying, and if the tools are robust to it, cleaning up this two-level example. It's on my TODO list.

[Review comment, Ivanidzo4ka (Contributor), Jul 28, 2018] on `example name="OGD"` (start = 5, length = 18):
Out of curiosity, does the order of having the name attribute or not matter? In the AP example you have one order, and here you have the other. #Pending

[Review comment, sfilipi (Member, Author)] It depends on how you include the node in the code. See the long answer to Pete's comment; that should clarify it. Note the * notation in the include paths.

<example>
<code language="csharp">
new OnlineGradientDescentRegressor
{
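The averaging option mentioned in the online-learner remarks above (averaged is set to True by default) amounts to reporting the mean of the weight vectors seen over the updates rather than the final one. A hypothetical sketch with made-up weight snapshots:

```csharp
using System;

class AveragedWeightsSketch
{
    static void Main()
    {
        // Invented per-update weight snapshots standing in for a real
        // online learner's trajectory.
        double[][] weightsPerUpdate =
        {
            new[] { 0.0, 1.0 },
            new[] { 0.5, 0.5 },
            new[] { 1.0, 0.0 },
        };

        int dim = weightsPerUpdate[0].Length;
        var avg = new double[dim];
        foreach (var w in weightsPerUpdate)
            for (int i = 0; i < dim; i++)
                avg[i] += w[i] / weightsPerUpdate.Length;

        Console.WriteLine(string.Join(", ", avg)); // 0.5, 0.5
    }
}
```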
@@ -12,8 +12,8 @@
Assuming that the dependent variable follows a Poisson distribution, the parameters of the regressor can be estimated by maximizing the likelihood of the obtained observations.
</remarks>
</member>
<example>
<example name="PoissonRegression">
<example name="PoissonRegression">
<example>
<code language="csharp">
new PoissonRegressor
{
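On the Poisson regression remarks above: with a log link, the model assumes $y_i \sim \mathrm{Poisson}(e^{w^\top x_i})$, so the log-likelihood being maximized is, up to the constant $-\sum_i \log(y_i!)$:

$$\ell(w) = \sum_i \left( y_i\, w^\top x_i - e^{w^\top x_i} \right)$$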
2 changes: 1 addition & 1 deletion src/Microsoft.ML.StandardLearners/Standard/doc.xml
@@ -22,7 +22,7 @@
In general, the larger the 'L2Const', the faster SDCA converges.
</para>
<para>For more information, see also:</para>
<list>
<list type='bullet'>
<item><description>
<a href='https://www.microsoft.com/en-us/research/wp-content/uploads/2016/06/main-3.pdf'>Scaling Up Stochastic Dual Coordinate Ascent</a>.
</description></item>
8 changes: 4 additions & 4 deletions src/Microsoft.ML.Transforms/EntryPoints/SelectFeatures.cs
@@ -14,8 +14,8 @@ public static class SelectFeatures
[TlcModule.EntryPoint(Name = "Transforms.FeatureSelectorByCount",
Desc = CountFeatureSelectionTransform.Summary,
UserName = CountFeatureSelectionTransform.UserName,
XmlInclude = new[] { @"<include file='../Microsoft.ML.Transforms/doc.xml' path='doc/members/member[@name=""CountFeatureSelection""]'/>",
@"<include file='../Microsoft.ML.Transforms/doc.xml' path='doc/members/example[@name=""CountFeatureSelection""]'/>"})]
XmlInclude = new[] { @"<include file='../Microsoft.ML.Transforms/doc.xml' path='doc/members/member[@name=""CountFeatureSelection""]/*'/>",
@"<include file='../Microsoft.ML.Transforms/doc.xml' path='doc/members/example[@name=""CountFeatureSelection""]/*'/>"})]
public static CommonOutputs.TransformOutput CountSelect(IHostEnvironment env, CountFeatureSelectionTransform.Arguments input)
{
Contracts.CheckValue(env, nameof(env));
@@ -31,8 +31,8 @@ public static CommonOutputs.TransformOutput CountSelect(IHostEnvironment env, Co
Desc = MutualInformationFeatureSelectionTransform.Summary,
UserName = MutualInformationFeatureSelectionTransform.UserName,
ShortName = MutualInformationFeatureSelectionTransform.ShortName,
XmlInclude = new[] { @"<include file='../Microsoft.ML.Transforms/doc.xml' path='doc/members/member[@name=""MutualInformationFeatureSelection""]'/>",
@"<include file='../Microsoft.ML.Transforms/doc.xml' path='doc/members/example[@name=""MutualInformationFeatureSelection""]'/>"})]
XmlInclude = new[] { @"<include file='../Microsoft.ML.Transforms/doc.xml' path='doc/members/member[@name=""MutualInformationFeatureSelection""]/*'/>",
@"<include file='../Microsoft.ML.Transforms/doc.xml' path='doc/members/example[@name=""MutualInformationFeatureSelection""]/*'/>"})]
public static CommonOutputs.TransformOutput MutualInformationSelect(IHostEnvironment env, MutualInformationFeatureSelectionTransform.Arguments input)
{
Contracts.CheckValue(env, nameof(env));
@@ -21,7 +21,7 @@

namespace Microsoft.ML.Runtime.Data
{
/// <include file='doc.xml' path='doc/members/member[@name="MutualInformationFeatureSelection"]' />
/// <include file='doc.xml' path='doc/members/member[@name="MutualInformationFeatureSelection"]/*' />
public static class MutualInformationFeatureSelectionTransform
{
public const string Summary =