corefx (repository archived by the owner on Jan 23, 2023; now read-only)

Use Span overloads with Rfc2898DeriveBytes computations #23269

Merged (1 commit) on Aug 15, 2017

@@ -2,6 +2,7 @@
// The .NET Foundation licenses this file to you under the MIT license.
// See the LICENSE file in the project root for more information.

using System.Buffers;
using System.Text;
using System.Diagnostics;

@@ -249,30 +250,45 @@ private void Initialize()
}

// This function is defined as follows:
// Func (S, i) = HMAC(S || i) | HMAC2(S || i) | ... | HMAC(iterations) (S || i)
// Func (S, i) = HMAC(S || i) ^ HMAC2(S || i) ^ ... ^ HMAC(iterations) (S || i)
// where i is the block number.
private byte[] Func()
{
byte[] temp = new byte[_salt.Length + sizeof(uint)];
Member:

How big is _salt, or is it arbitrarily large? Wondering if temp should be a stackalloc. If not, array pool?

Member Author:

It's user-provided and arbitrarily large. I thought about removing this one, too, but the average number of calls per object lifetime is ~2. So I went with sanitization's 5-nines instead of sterilization's 9-nines.
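For anyone unfamiliar with the alternative being weighed in this thread, below is a minimal sketch of the stackalloc-or-pool pattern the reviewer is hinting at. This is not what the PR does for this buffer (the plain allocation stays), and the threshold, type, and method names are illustrative only.

```csharp
using System;
using System.Buffers;
using System.Buffers.Binary;

static class SaltBufferSketch
{
    private const int StackAllocThreshold = 128;   // assumed cutoff, not from the PR

    // Builds S || INT(i) in a stack or pooled buffer instead of a fresh byte[] per call.
    public static void BuildSaltAndBlock(byte[] salt, uint blockNumber)
    {
        int needed = salt.Length + sizeof(uint);
        byte[] rented = null;
        Span<byte> buffer = needed <= StackAllocThreshold
            ? stackalloc byte[StackAllocThreshold]
            : (rented = ArrayPool<byte>.Shared.Rent(needed));
        buffer = buffer.Slice(0, needed);

        try
        {
            salt.AsSpan().CopyTo(buffer);                       // S
            BinaryPrimitives.WriteUInt32BigEndian(
                buffer.Slice(salt.Length), blockNumber);        // INT(i), big-endian
            // ... hash "buffer" here, e.g. via a TryComputeHash span overload ...
        }
        finally
        {
            buffer.Clear();                                     // scrub before returning
            if (rented != null)
                ArrayPool<byte>.Shared.Return(rented);
        }
    }
}
```

The trade-off the author describes is exactly the one this pattern tries to dodge: for an object that calls Func() only a couple of times in its lifetime, the extra machinery may not pay for itself.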

Buffer.BlockCopy(_salt, 0, temp, 0, _salt.Length);
Helpers.WriteInt(_block, temp, _salt.Length);

temp = _hmac.ComputeHash(temp);

byte[] ret = temp;
for (int i = 2; i <= _iterations; i++)
byte[] ui = ArrayPool<byte>.Shared.Rent(_blockSize);
try
{
temp = _hmac.ComputeHash(temp);
Span<byte> uiSpan = new Span<byte>(ui, 0, _blockSize);
Member:

I think I'm confused. Why are we getting an array from the pool, hashing into it, then allocating a return array, and copying the results into that... why not just allocate the return array initially and do the initial hash into that? i.e. why bother with the pool at all?

If we're going to use the pool, seems like we'd want to use it for temp.

Member Author:

I pooled the U_i, and allocated the return value.

F (P, S, c, i) = U_1 \xor U_2 \xor ... \xor U_c

where

U_1 = PRF (P, S || INT (i)) ,
U_2 = PRF (P, U_1) ,
...
U_c = PRF (P, U_{c-1}) .
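As an aside for readers following the math: the definition quoted above can be cross-checked against Rfc2898DeriveBytes for the first derived block. The snippet below is standalone sample code, not part of the diff; the password, salt, and iteration count are arbitrary, and SHA-1 is assumed as the PRF, so the block size is 20 bytes.

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

static class Pbkdf2CrossCheck
{
    static void Main()
    {
        byte[] password = Encoding.UTF8.GetBytes("p@ssw0rd");
        byte[] salt = Encoding.UTF8.GetBytes("NaCl-NaCl");   // at least 8 bytes
        const int iterations = 1000;
        const int blockSize = 20;                            // HMAC-SHA1 output size

        // F(P, S, c, 1) = U_1 ^ U_2 ^ ... ^ U_c, per the text quoted above.
        byte[] expected;
        using (var hmac = new HMACSHA1(password))
        {
            byte[] saltAndIndex = new byte[salt.Length + 4];
            Buffer.BlockCopy(salt, 0, saltAndIndex, 0, salt.Length);
            saltAndIndex[salt.Length + 3] = 1;               // INT(1), big-endian

            byte[] u = hmac.ComputeHash(saltAndIndex);       // U_1 = PRF(P, S || INT(1))
            expected = (byte[])u.Clone();
            for (int i = 2; i <= iterations; i++)
            {
                u = hmac.ComputeHash(u);                     // U_i = PRF(P, U_{i-1})
                for (int j = 0; j < blockSize; j++)
                    expected[j] ^= u[j];                     // accumulate the XOR
            }
        }

        using (var pbkdf2 = new Rfc2898DeriveBytes(password, salt, iterations))
        {
            byte[] actual = pbkdf2.GetBytes(blockSize);      // first derived block
            Console.WriteLine(expected.AsSpan().SequenceEqual(actual)); // True
        }
    }
}
```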


for (int j = 0; j < _blockSize; j++)
if (!_hmac.TryComputeHash(temp, uiSpan, out int bytesWritten) || bytesWritten != _blockSize)
throw new CryptographicException();
Member:

It's interesting that we're throwing when we didn't previously. In what situations could we hit this? Should this just be an assert instead?

Member Author:

It should be unreachable (a CryptographicException should already have been thrown); but I'd rather be defensive at runtime than risk an edge case where we give back an incompatible answer.
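For context on why the branch should be dead: in the built-in implementations, HashAlgorithm.TryComputeHash reports failure only when the destination span is too small for the hash output, and uiSpan is sized to exactly _blockSize. A small standalone illustration of that contract (not part of the diff; the key and data are arbitrary):

```csharp
using System;
using System.Security.Cryptography;

static class TryComputeHashDemo
{
    static void Main()
    {
        using (var hmac = new HMACSHA1(new byte[] { 1, 2, 3 }))
        {
            byte[] data = { 42 };
            Span<byte> tooSmall = new byte[10];     // HMAC-SHA1 produces 20 bytes
            Span<byte> exactFit = new byte[20];

            // Destination too small: nothing is written and the call reports failure.
            bool ok = hmac.TryComputeHash(data, tooSmall, out int written);
            Console.WriteLine($"{ok} {written}");   // False 0

            // Correctly sized destination: succeeds and writes all 20 bytes.
            ok = hmac.TryComputeHash(data, exactFit, out written);
            Console.WriteLine($"{ok} {written}");   // True 20
        }
    }
}
```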


byte[] ret = new byte[_blockSize];
uiSpan.CopyTo(ret);

for (int i = 2; i <= _iterations; i++)
{
ret[j] ^= temp[j];
if (!_hmac.TryComputeHash(uiSpan, uiSpan, out bytesWritten) || bytesWritten != _blockSize)
throw new CryptographicException();

for (int j = 0; j < _blockSize; j++)
{
ret[j] ^= ui[j];
Member:

Oh, I think this answers my question above... we need to xor each time with the initial hashed data?

Member:

Separately, looks like this should be pretty straightforward to vectorize, though I don't know how big _blockSize generally is or whether this is a bottleneck compared to the hashing.
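A hedged sketch of what that vectorization could look like with System.Numerics.Vector<byte> (an illustrative helper, not part of the PR; for HMAC-SHA1's 20-byte block the scalar tail still covers 4 bytes, and the hashing almost certainly dominates the cost):

```csharp
using System.Numerics;

static class XorSketch
{
    // Equivalent to: for (int j = 0; j < length; j++) ret[j] ^= ui[j];
    public static void XorInto(byte[] ret, byte[] ui, int length)
    {
        int i = 0;
        if (Vector.IsHardwareAccelerated)
        {
            // Process Vector<byte>.Count bytes (typically 16 or 32) per iteration.
            for (; i <= length - Vector<byte>.Count; i += Vector<byte>.Count)
            {
                var x = new Vector<byte>(ret, i) ^ new Vector<byte>(ui, i);
                x.CopyTo(ret, i);
            }
        }

        // Scalar remainder.
        for (; i < length; i++)
        {
            ret[i] ^= ui[i];
        }
    }
}
```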

}
}
}

// increment the block count.
_block++;
return ret;
}
finally
{
Array.Clear(ui, 0, _blockSize);
ArrayPool<byte>.Shared.Return(ui);
}
}
}
}