Reference to See Also section for example of usage in all estimators #3577

Merged: 4 commits, merged on Apr 25, 2019

4 changes: 3 additions & 1 deletion docs/api-reference/algo-details-fastforest.md
@@ -28,4 +28,6 @@ For more see:
* [Quantile regression
forest](http://jmlr.org/papers/volume7/meinshausen06a/meinshausen06a.pdf)
* [From Stumps to Trees to
Forests](https://blogs.technet.microsoft.com/machinelearning/2014/09/10/from-stumps-to-trees-to-forests/)

Check the See Also section for links to usage examples.
sfilipi (Member), Apr 25, 2019:
"to examples of the usage" doesn't sound well..

I think they should all be "to usage examples."

Sorry.. probably should have put that in the bug..

A contributor replied:
I agree with @sfilipi but we could live with what is here

4 changes: 3 additions & 1 deletion docs/api-reference/algo-details-fasttree.md
@@ -35,4 +35,6 @@ For more information see:
* [Wikipedia: Gradient boosting (Gradient tree
boosting).](https://en.wikipedia.org/wiki/Gradient_boosting#Gradient_tree_boosting)
* [Greedy function approximation: A gradient boosting
machine.](https://projecteuclid.org/DPubS?service=UI&version=1.0&verb=Display&handle=euclid.aos/1013203451)

Check the See Also section for links to usage examples.
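As a rough illustration of the kind of usage example those links point to, here is a minimal sketch of training a FastTree regressor on a small in-memory data set. The column names, synthetic data, and hyperparameter values are illustrative assumptions, and the Microsoft.ML and Microsoft.ML.FastTree packages are assumed to be referenced.

```csharp
using System;
using System.Linq;
using Microsoft.ML;

public class HousePrice
{
    public float Size { get; set; }
    public float Age { get; set; }
    public float Label { get; set; }   // the value to predict
}

public static class FastTreeSketch
{
    public static void Main()
    {
        var ml = new MLContext(seed: 0);

        // Tiny synthetic training set, just to keep the sketch self-contained.
        var data = ml.Data.LoadFromEnumerable(
            Enumerable.Range(0, 100).Select(i => new HousePrice
            {
                Size = i / 10f,
                Age = i % 20,
                Label = 50f + 30f * (i / 10f) - 0.5f * (i % 20)
            }));

        // Assemble the feature vector, then train a boosted-tree regressor.
        var pipeline = ml.Transforms.Concatenate("Features", nameof(HousePrice.Size), nameof(HousePrice.Age))
            .Append(ml.Regression.Trainers.FastTree(
                labelColumnName: "Label",
                featureColumnName: "Features",
                numberOfTrees: 50,
                numberOfLeaves: 20,
                minimumExampleCountPerLeaf: 5,
                learningRate: 0.2));

        var model = pipeline.Fit(data);
        var metrics = ml.Regression.Evaluate(model.Transform(data));
        Console.WriteLine($"RMSE: {metrics.RootMeanSquaredError:F2}");
    }
}
```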
4 changes: 3 additions & 1 deletion docs/api-reference/algo-details-gam.md
@@ -15,4 +15,6 @@ the average prediction over the training set, and the shape functions are
normalized to represent the deviation from the average prediction. This results
in models that are easily interpreted simply by inspecting the intercept and the
shape functions. See the sample below for an example of how to train a GAM model
and inspect and interpret the results.

Check the See Also section for links to usage examples.
2 changes: 2 additions & 0 deletions docs/api-reference/algo-details-lightgbm.md
@@ -3,3 +3,5 @@ LightGBM is an open source implementation of gradient boosting decision tree.
For implementation details, please see [LightGBM's official
documentation](https://lightgbm.readthedocs.io/en/latest/index.html) or this
[paper](https://papers.nips.cc/paper/6907-lightgbm-a-highly-efficient-gradient-boosting-decision-tree.pdf).

Check the See Also section for links to usage examples.
4 changes: 3 additions & 1 deletion docs/api-reference/algo-details-sdca.md
@@ -57,4 +57,6 @@ For more information, see:
* [Scaling Up Stochastic Dual Coordinate
Ascent.](https://www.microsoft.com/en-us/research/wp-content/uploads/2016/06/main-3.pdf)
* [Stochastic Dual Coordinate Ascent Methods for Regularized Loss
Minimization.](http://www.jmlr.org/papers/volume14/shalev-shwartz13a/shalev-shwartz13a.pdf)

Check the See Also section for links to usage examples.
4 changes: 3 additions & 1 deletion docs/api-reference/algo-details-sgd.md
@@ -6,4 +6,6 @@ Hogwild Stochastic Gradient Descent for binary classification that supports
multi-threading without any locking. If the associated optimization problem is
sparse, Hogwild Stochastic Gradient Descent achieves a nearly optimal rate of
convergence. More details about Hogwild Stochastic Gradient Descent can be
found [here](http://arxiv.org/pdf/1106.5730v2.pdf).

Check the See Also section for links to usage examples.
@@ -31,7 +31,7 @@ namespace Microsoft.ML.Transforms
/// If the input columns' data type is a vector, the output column data type remains the same. However, the size of
/// the output vector will be the sum of the sizes of the input vectors.
///
/// Check the See Also section for links to examples of the usage.
/// Check the See Also section for links to usage examples.
/// ]]></format>
/// </remarks>
/// <seealso cref="TransformExtensionsCatalog.Concatenate(TransformsCatalog, string, string[])"/>
2 changes: 1 addition & 1 deletion src/Microsoft.ML.Data/Transforms/ColumnCopying.cs
@@ -45,7 +45,7 @@ namespace Microsoft.ML.Transforms
///
/// The resulting [ColumnCopyingTransformer](xref:Microsoft.ML.Transforms.ColumnCopyingTransformer) creates a new column, named as specified in the output column name parameters, and
/// copies the data from the input column to this new column.
/// Check the See Also section for links to examples of the usage.
/// Check the See Also section for links to usage examples.
/// ]]>
/// </format>
/// </remarks>
2 changes: 1 addition & 1 deletion src/Microsoft.ML.Data/Transforms/ColumnSelecting.cs
@@ -53,7 +53,7 @@ namespace Microsoft.ML.Transforms
/// In the case of serialization, every column in the schema will be written out. If there are columns
/// that should not be saved, this estimator can be used to remove them.
///
/// Check the See Also section for links to examples of the usage.
/// Check the See Also section for links to usage examples.
/// ]]></format>
/// </remarks>
/// <seealso cref="TransformExtensionsCatalog.DropColumns(TransformsCatalog, string[])"/>
@@ -284,6 +284,8 @@ private Delegate GetValueGetter<TSrc>(DataViewRow input, int colSrc)
/// while keeping the other features constant. The contribution of feature F1 for the given example is the difference between the original score
/// and the score obtained by taking the opposite decision at the node corresponding to feature F1. This algorithm extends naturally to models with
/// many decision trees.
///
/// Check the See Also section for links to usage examples.
/// ]]></format>
/// </remarks>
/// <seealso cref="ExplainabilityCatalog.CalculateFeatureContribution(TransformsCatalog, ISingleFeaturePredictionTransformer{ICalculateFeatureContribution}, int, int, bool)"/>
1 change: 1 addition & 0 deletions src/Microsoft.ML.Data/Transforms/Hashing.cs
@@ -1115,6 +1115,7 @@ public override void Process()
/// | Input column data type | Vector or scalars of numeric, boolean, [text](xref:Microsoft.ML.Data.TextDataViewType), [DateTime](xref: System.DateTime) and [key](xref:Microsoft.ML.Data.KeyDataViewType) type. |
/// | Output column data type | Vector or scalar [key](xref:Microsoft.ML.Data.KeyDataViewType) type. |
///
/// Check the See Also section for links to usage examples.
/// ]]></format>
/// </remarks>
/// <seealso cref="ConversionsExtensionsCatalog.Hash(TransformsCatalog.ConversionTransforms, string, string, int, int)"/>
1 change: 1 addition & 0 deletions src/Microsoft.ML.Data/Transforms/KeyToValue.cs
@@ -512,6 +512,7 @@ public override JToken SavePfa(BoundPfaContext ctx, JToken srcToken)
/// | Input column data type | [key](xref:Microsoft.ML.Data.KeyDataViewType) type. |
/// | Output column data type | Type of the original data, prior to converting to [key](xref:Microsoft.ML.Data.KeyDataViewType) type. |
///
/// Check the See Also section for links to usage examples.
/// ]]></format>
/// </remarks>
/// <seealso cref="ConversionsExtensionsCatalog.MapKeyToValue(TransformsCatalog.ConversionTransforms, InputOutputColumnPair[])"/>
2 changes: 2 additions & 0 deletions src/Microsoft.ML.Data/Transforms/KeyToVector.cs
@@ -737,6 +737,8 @@ private bool SaveAsOnnxCore(OnnxContext ctx, int iinfo, ColInfo info, string src
///
    /// It iterates over the keys in the data and, for each key, produces a vector of length equal to the key cardinality, filled with zeros except for a `1.0` at the position corresponding to the key's value.
    /// For a vector of keys it can either produce a vector of counts for each key or concatenate the indicator vectors into one vector.
///
/// Check the See Also section for links to usage examples.
/// ]]></format>
/// </remarks>
/// <seealso cref=" ConversionsExtensionsCatalog.MapKeyToVector(TransformsCatalog.ConversionTransforms, InputOutputColumnPair[], bool)"/>
2 changes: 2 additions & 0 deletions src/Microsoft.ML.Data/Transforms/Normalizer.cs
@@ -60,6 +60,8 @@ namespace Microsoft.ML.Transforms
/// * [NormalizeLogMeanVariance](xref:Microsoft.ML.NormalizationCatalog.NormalizeLogMeanVariance(Microsoft.ML.TransformsCatalog,System.String,System.String,System.Int64,System.Boolean))
/// * [NormalizeBinning](xref:Microsoft.ML.NormalizationCatalog.NormalizeBinning(Microsoft.ML.TransformsCatalog,System.String,System.String,System.Int64,System.Boolean,System.Int32))
/// * [NormalizeSupervisedBinning](xref:Microsoft.ML.NormalizationCatalog.NormalizeSupervisedBinning(Microsoft.ML.TransformsCatalog,System.String,System.String,System.String,System.Int64,System.Boolean,System.Int32,System.Int32))
///
/// Check the above links for usage examples.
/// ]]>
/// </format>
/// </remarks>
1 change: 1 addition & 0 deletions src/Microsoft.ML.Data/Transforms/TypeConverting.cs
@@ -526,6 +526,7 @@ private bool SaveAsOnnxCore(OnnxContext ctx, int iinfo, string srcVariableName,
/// | Input column data type | Vector or primitive numeric, boolean, [text](xref:Microsoft.ML.Data.TextDataViewType), [System.DateTime](xref:System.DateTime) and [key](xref:Microsoft.ML.Data.KeyDataViewType) type. |
/// | Output column data type | Vector or primitive numeric, boolean, [text](xref:Microsoft.ML.Data.TextDataViewType), [System.DateTime](xref:System.DateTime) and [key](xref:Microsoft.ML.Data.KeyDataViewType) type. |
///
/// Check the See Also section for links to usage examples.
/// ]]></format>
/// </remarks>
/// <seealso cref="ConversionsExtensionsCatalog.ConvertType(TransformsCatalog.ConversionTransforms, InputOutputColumnPair[], DataKind)"/>
4 changes: 4 additions & 0 deletions src/Microsoft.ML.Data/Transforms/ValueMapping.cs
@@ -56,6 +56,8 @@ namespace Microsoft.ML.Transforms
///
/// Values can be repeated to allow for multiple keys to map to the same value, however keys can not be repeated. The mapping between keys and values
/// can be specified either through lists, where the key list and value list must be the same size or can be done through an [System.IDataView](xref:Microsoft.ML.IDataView).
///
/// Check the See Also section for links to usage examples.
/// ]]></format>
/// </remarks>
/// <seealso cref="ConversionsExtensionsCatalog.MapValue(TransformsCatalog.ConversionTransforms, string, IDataView, DataViewSchema.Column, DataViewSchema.Column, string)"/>
@@ -152,6 +154,8 @@ public sealed override SchemaShape GetOutputSchema(SchemaShape inputSchema)
///
/// Values can be repeated to allow for multiple keys to map to the same value, however keys can not be repeated. The mapping between keys and values
/// can be specified either through lists, where the key list and value list must be the same size or can be done through an [System.IDataView](xref:Microsoft.ML.IDataView).
///
/// Check the See Also section for links to usage examples.
/// ]]></format>
/// </remarks>
/// <typeparam name="TKey">Specifies the key type.</typeparam>
@@ -29,6 +29,8 @@ namespace Microsoft.ML.Transforms
/// If the key is not found in the dictionary, it is assigned the missing value indicator.
    /// This dictionary mapping values to keys is most commonly learned from the unique values in the input data,
    /// but it can also be defined explicitly or loaded from an external file.
///
/// Check the See Also section for links to usage examples.
/// ]]></format>
/// </remarks>
/// <seealso cref="ConversionsExtensionsCatalog.MapValueToKey(TransformsCatalog.ConversionTransforms, InputOutputColumnPair[], int, KeyOrdinality, bool, IDataView)"/>
2 changes: 2 additions & 0 deletions src/Microsoft.ML.FastTree/FastTreeTweedie.cs
@@ -50,6 +50,8 @@ namespace Microsoft.ML.Trainers.FastTree
/// For an introduction to Gradient Boosting, and more information, see:
/// [Wikipedia: Gradient boosting(Gradient tree boosting)](https://en.wikipedia.org/wiki/Gradient_boosting#Gradient_tree_boosting) or
/// [Greedy function approximation: A gradient boosting machine](https://projecteuclid.org/DPubS?service=UI&amp;version=1.0&amp;verb=Display&amp;handle=euclid.aos/1013203451).
///
/// Check the See Also section for links to usage examples.
/// ]]>
/// </format>
/// </remarks>
3 changes: 2 additions & 1 deletion src/Microsoft.ML.ImageAnalytics/ImageGrayscale.cs
@@ -236,7 +236,8 @@ protected override Delegate MakeGetter(DataViewRow input, int iinfo, Func<int, b
/// a technique known as [data augmentation](http://www.stat.harvard.edu/Faculty_Content/meng/JCGS01.pdf).
/// For end-to-end image processing pipelines, and scenarios in your applications, see the
/// [examples](https://github.com/dotnet/machinelearning-samples/tree/master/samples/csharp/getting-started) in the machinelearning-samples github repository.
/// Check the See Also section for links to more examples of the usage.
///
/// Check the See Also section for links to usage examples.
/// ]]>
/// </format>
/// </remarks>
3 changes: 2 additions & 1 deletion src/Microsoft.ML.ImageAnalytics/ImageLoader.cs
@@ -240,7 +240,8 @@ protected override DataViewSchema.DetachedColumn[] GetOutputColumnsCore()
/// The images to load need to be in the formats supported by <xref:System.Drawing.Bitmap>.
/// For end-to-end image processing pipelines, and scenarios in your applications, see the
    /// [examples](https://github.com/dotnet/machinelearning-samples/tree/master/samples/csharp/getting-started) in the machinelearning-samples github repository.
/// Check the See Also section for links to examples of the usage.
///
/// Check the See Also section for links to usage examples.
/// ]]>
/// </format>
/// </remarks>
3 changes: 2 additions & 1 deletion src/Microsoft.ML.ImageAnalytics/ImagePixelExtractor.cs
@@ -488,7 +488,8 @@ private VectorDataViewType[] ConstructTypes()
    /// converts an image into a vector of floats or bytes of known size. The size and data type depend on the specified parameters.
/// For end-to-end image processing pipelines, and scenarios in your applications, see the
/// [examples](https://github.com/dotnet/machinelearning-samples/tree/master/samples/csharp/getting-started) in the machinelearning-samples github repository.
/// Check the See Also section for links to examples of the usage.
///
/// Check the See Also section for links to usage examples.
/// ]]>
/// </format>
/// </remarks>
3 changes: 2 additions & 1 deletion src/Microsoft.ML.ImageAnalytics/ImageResizer.cs
@@ -419,7 +419,8 @@ protected override Delegate MakeGetter(DataViewRow input, int iinfo, Func<int, b
/// further processing.
/// For end-to-end image processing pipelines, and scenarios in your applications, see the
/// [examples](https://github.com/dotnet/machinelearning-samples/tree/master/samples/csharp/getting-started) in the machinelearning-samples github repository.
/// Check the See Also section for links to examples of the usage.
///
/// Check the See Also section for links to usage examples.
/// ]]>
/// </format>
/// </remarks>
3 changes: 2 additions & 1 deletion src/Microsoft.ML.ImageAnalytics/VectorToImageTransform.cs
@@ -437,7 +437,8 @@ private static ImageDataViewType[] ConstructTypes(VectorToImageConvertingEstimat
///
/// The resulting <xref:Microsoft.ML.Transforms.Image.VectorToImageConvertingTransformer> creates a new column, named as specified in the output column name parameters, and
/// creates image from the data in the input column to this new column.
/// Check the See Also section for links to examples of the usage.
///
/// Check the See Also section for links to usage examples.
/// ]]>
/// </format>
/// </remarks>
2 changes: 2 additions & 0 deletions src/Microsoft.ML.KMeansClustering/KMeansPlusPlusTrainer.cs
@@ -69,6 +69,8 @@ namespace Microsoft.ML.Trainers
/// For more information on K-means, and K-means++ see:
/// [K-means](https://en.wikipedia.org/wiki/K-means_clustering)
/// [K-means++](https://en.wikipedia.org/wiki/K-means%2b%2b)
///
/// Check the See Also section for links to usage examples.
/// ]]>
/// </format>
/// </remarks>
2 changes: 2 additions & 0 deletions src/Microsoft.ML.Mkl.Components/OlsLinearRegression.cs
@@ -54,6 +54,8 @@ namespace Microsoft.ML.Trainers
/// [Ordinary least squares (OLS)](https://en.wikipedia.org/wiki/Ordinary_least_squares) is a parameterized regression method.
    /// It assumes that the conditional mean of the dependent variable follows a linear function of the independent variables.
    /// The regression parameters can be estimated by minimizing the squared differences between the observed values and the predictions.
///
/// Check the See Also section for links to usage examples.
/// ]]>
/// </format>
/// </remarks>
@@ -60,6 +60,8 @@ namespace Microsoft.ML.Trainers
/// to produce the same result as what a sequential symbolic stochastic gradient descent would have produced, in expectation.
///
/// For more information see [Parallel Stochastic Gradient Descent with Sound Combiners](https://arxiv.org/abs/1705.08030).
///
/// Check the See Also section for links to usage examples.
/// ]]>
/// </format>
/// </remarks>
@@ -73,6 +73,8 @@ internal DnnImageFeaturizerInput(string outputColumnName, string inputColumnName
/// the names of the ONNX model nodes.
///
/// Any platform requirement for this estimator will follow the requirements on the <xref:Microsoft.ML.Transforms.Onnx.OnnxScoringEstimator>.
///
/// Check the See Also section for links to usage examples.
/// ]]>
/// </format>
/// </remarks>
4 changes: 4 additions & 0 deletions src/Microsoft.ML.OnnxTransformer/OnnxTransform.cs
@@ -59,6 +59,8 @@ namespace Microsoft.ML.Transforms.Onnx
///
/// To create this estimator use the following:
/// [ApplyOnnxModel](xref:Microsoft.ML.OnnxCatalog.ApplyOnnxModel*)
///
/// Check the See Also section for links to usage examples.
/// ]]>
/// </format>
/// </remarks>
@@ -541,6 +543,8 @@ public NamedOnnxValue GetNamedOnnxValue()
///
/// To create this estimator use the following:
/// [ApplyOnnxModel](xref:Microsoft.ML.OnnxCatalog.ApplyOnnxModel*)
///
/// Check the See Also section for links to usage examples.
/// ]]>
/// </format>
/// </remarks>
2 changes: 2 additions & 0 deletions src/Microsoft.ML.PCA/PcaTrainer.cs
@@ -70,6 +70,8 @@ namespace Microsoft.ML.Trainers
///
/// Note that the algorithm can be made into Kernel PCA by applying the <xref:Microsoft.ML.Transforms.ApproximatedKernelTransformer>
/// to the data before passing it to the trainer.
///
/// Check the See Also section for links to usage examples.
/// ]]>
/// </format>
/// </remarks>
1 change: 1 addition & 0 deletions src/Microsoft.ML.Recommender/MatrixFactorizationTrainer.cs
@@ -102,6 +102,7 @@ namespace Microsoft.ML.Trainers
/// * For the parallel coordinate descent method used and one-class matrix factorization formula, see [Selection of Negative Samples for One-class Matrix Factorization](https://www.csie.ntu.edu.tw/~cjlin/papers/one-class-mf/biased-mf-sdm-with-supp.pdf).
/// * For details in the underlying library used, see [LIBMF: A Library for Parallel Matrix Factorization in Shared-memory Systems](https://www.csie.ntu.edu.tw/~cjlin/papers/libmf/libmf_open_source.pdf).
///
/// Check the See Also section for links to usage examples.
/// ]]>
/// </format>
/// </remarks>
@@ -81,6 +81,7 @@ namespace Microsoft.ML.Trainers
/// Algorithm details is described in Algorithm 3 in [this online document](https://github.com/wschin/fast-ffm/blob/master/fast-ffm.pdf).
/// The minimized loss function is [logistic loss](https://en.wikipedia.org/wiki/Loss_functions_for_classification), so the trained model can be viewed as a non-linear logistic regression.
///
/// Check the See Also section for links to usage examples.
/// ]]>
/// </format>
/// </remarks>
@@ -70,6 +70,8 @@ namespace Microsoft.ML.Trainers
///
/// An aggressive regularization (that is, assigning large coefficients to L1-norm or L2-norm regularization terms) can harm predictive capacity by excluding important variables out of the model.
/// Therefore, choosing the right regularization coefficients is important when applying logistic regression.
///
/// Check the See Also section for links to usage examples.
/// ]]>
/// </format>
/// </remarks>
@@ -90,6 +90,8 @@ namespace Microsoft.ML.Trainers
///
/// An aggressive regularization (that is, assigning large coefficients to L1-norm or L2-norm regularization terms) can harm predictive capacity by excluding important variables out of the model.
/// Therefore, choosing the right regularization coefficients is important when applying maximum entropy classifier.
///
/// Check the See Also section for links to usage examples.
/// ]]>
/// </format>
/// </remarks>
@@ -55,6 +55,8 @@ namespace Microsoft.ML.Trainers
/// This multi-class trainer accepts "binary" feature values of type float:
/// feature values that are greater than zero are treated as `true` and feature values
    /// that are less than or equal to zero are treated as `false`.
///
/// Check the See Also section for links to usage examples.
/// ]]>
/// </format>
/// </remarks>
@@ -76,6 +76,8 @@ namespace Microsoft.ML.Trainers
/// requires that the trainer store a lot more intermediate state in the form of
/// L-BFGS history for all classes *simultaneously*, rather than just one-by-one
/// as would be needed for a one-versus-all classification model.
///
/// Check the See Also section for links to usage examples.
/// ]]>
/// </format>
/// </remarks>