Some micro optimizations for performance are possible. #243

Closed
@Oxoron

Description


For example, in the PerLabelL1 property (see link) every element of the array is divided by the same _sumWeight value inside the loop. If we compute the reciprocal 1/_sumWeight once before the loop, we can multiply each element by that cached value instead, which speeds up execution about 4 times (see the example below and the attached program).

Source code / logs

        private static int _sumWeights = 14;        

        public static double[] OriginDivideArray(this double[] _l1Loss)
        {
            var res = new double[_l1Loss.Length];
            if (_sumWeights == 0)
                return res;
            for (int i = 0; i < _l1Loss.Length; i++)
                res[i] = _l1Loss[i] / _sumWeights;
            return res;
        }

        public static double[] OptimizedDivideArray(this double[] _l1Loss)
        {
            var res = new double[_l1Loss.Length];
            if (_sumWeights == 0)
                return res;

            // Cache the reciprocal once. The 1.0 literal matters: since _sumWeights
            // is an int, 1 / _sumWeights would be integer division and always yield 0.
            var revSumWeights = 1.0 / _sumWeights;

            for (int i = 0; i < _l1Loss.Length; i++)
                res[i] = _l1Loss[i] * revSumWeights; // Multiply by the cached reciprocal instead of dividing
            return res;
        }

The question is: do we need these optimizations? On a 1,000-element array it saves only a few microseconds. Note also that multiplying by a precomputed reciprocal is not bit-for-bit identical to IEEE division, which is why the compiler does not apply this transformation on its own.
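
For reference, a minimal Stopwatch harness along these lines can reproduce the timing comparison. The DivideBenchmark class name, array size, and iteration count below are illustrative, not taken from the attached program, and it assumes the two extension methods above live in a static class in scope:

        using System;
        using System.Diagnostics;

        internal static class DivideBenchmark
        {
            public static void Main()
            {
                // Fill a 1000-element array with deterministic pseudo-random values
                var data = new double[1000];
                var rng = new Random(42);
                for (int i = 0; i < data.Length; i++)
                    data[i] = rng.NextDouble();

                const int iterations = 100_000;

                // Time the division-in-loop version
                var sw = Stopwatch.StartNew();
                for (int i = 0; i < iterations; i++)
                    data.OriginDivideArray();
                sw.Stop();
                Console.WriteLine($"Division:       {sw.ElapsedMilliseconds} ms");

                // Time the cached-reciprocal version
                sw.Restart();
                for (int i = 0; i < iterations; i++)
                    data.OptimizedDivideArray();
                sw.Stop();
                Console.WriteLine($"Multiplication: {sw.ElapsedMilliseconds} ms");
            }
        }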

[image: benchmark results comparing the two methods]
Attachments: PointsToImprove.txt, Program.zip
