
Commit c2b24c3

Ard Biesheuvel authored and herbertx committed
crypto: arm64/aes-gcm-ce - fix scatterwalk API violation
Commit 71e52c2 ("crypto: arm64/aes-ce-gcm - operate on two input blocks at a time") modified the granularity at which the AES/GCM code processes its input, to allow subsequent changes to be applied that improve performance by using aggregation to process multiple input blocks at once. For this reason, it doubled the algorithm's 'chunksize' property to 2 x AES_BLOCK_SIZE, but retained the non-SIMD fallback path that processes a single block at a time.

In some cases, this violates the skcipher scatterwalk API, by calling skcipher_walk_done() with a non-zero residue value for a chunk that is expected to be handled in its entirety. This causes a WARN_ON() to be hit by the TLS self-test code, and is likely to break other use cases as well. Unfortunately, none of the current test cases exercises this exact code path at the moment.

Fixes: 71e52c2 ("crypto: arm64/aes-ce-gcm - operate on two ...")
Reported-by: Vakul Garg <vakul.garg@nxp.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Vakul Garg <vakul.garg@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
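To make the violated invariant concrete, here is a small userspace sketch of the residue arithmetic only; no kernel APIs are used, the 24-byte final walk step is an assumed example, and CHUNKSIZE mirrors the chunksize doubled by commit 71e52c2.

/*
 * Userspace demonstration of the chunk accounting -- not kernel code.
 */
#include <assert.h>
#include <stdio.h>

#define AES_BLOCK_SIZE 16
#define CHUNKSIZE      (2 * AES_BLOCK_SIZE)

int main(void)
{
	unsigned int nbytes = 24;	/* final walk step: one block + 8 bytes */

	/*
	 * Old code: the block loop consumed 16 of the 24 bytes, then
	 * passed the leftover 8 to skcipher_walk_done() -- a non-zero
	 * residue for a chunk that was expected to be handled entirely.
	 */
	unsigned int old_residue = nbytes % AES_BLOCK_SIZE;
	printf("old path leaves %u of %u bytes unconsumed\n",
	       old_residue, nbytes);

	/*
	 * Fixed code: the bulk loop only runs while nbytes >= CHUNKSIZE,
	 * so anything smaller than a chunk falls through to the tail
	 * path, which processes it in one go.
	 */
	assert(nbytes < CHUNKSIZE);
	printf("fixed path hands all %u bytes to the tail\n", nbytes);
	return 0;
}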
1 parent e5b954e commit c2b24c3

File tree

1 file changed: +23 −6 lines


arch/arm64/crypto/ghash-ce-glue.c

Lines changed: 23 additions & 6 deletions
@@ -417,7 +417,7 @@ static int gcm_encrypt(struct aead_request *req)
 		__aes_arm64_encrypt(ctx->aes_key.key_enc, tag, iv, nrounds);
 		put_unaligned_be32(2, iv + GCM_IV_SIZE);
 
-		while (walk.nbytes >= AES_BLOCK_SIZE) {
+		while (walk.nbytes >= (2 * AES_BLOCK_SIZE)) {
 			int blocks = walk.nbytes / AES_BLOCK_SIZE;
 			u8 *dst = walk.dst.virt.addr;
 			u8 *src = walk.src.virt.addr;
@@ -437,11 +437,18 @@ static int gcm_encrypt(struct aead_request *req)
 					    NULL);
 
 			err = skcipher_walk_done(&walk,
-						 walk.nbytes % AES_BLOCK_SIZE);
+						 walk.nbytes % (2 * AES_BLOCK_SIZE));
 		}
-		if (walk.nbytes)
+		if (walk.nbytes) {
 			__aes_arm64_encrypt(ctx->aes_key.key_enc, ks, iv,
 					    nrounds);
+			if (walk.nbytes > AES_BLOCK_SIZE) {
+				crypto_inc(iv, AES_BLOCK_SIZE);
+				__aes_arm64_encrypt(ctx->aes_key.key_enc,
+						    ks + AES_BLOCK_SIZE, iv,
+						    nrounds);
+			}
+		}
 	}
 
 	/* handle the tail */
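With the bulk loop now stopping once fewer than 2 * AES_BLOCK_SIZE bytes remain, the encrypt tail can be up to 31 bytes long and may need two keystream blocks, which is what the added branch above provides. A minimal userspace sketch of that counter handling follows; dummy_encrypt() and counter_inc() are stand-ins assumed for illustration in place of __aes_arm64_encrypt() and crypto_inc(), not kernel code.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define AES_BLOCK_SIZE 16

/* stand-in for the real block cipher: just copies the counter */
static void dummy_encrypt(uint8_t out[AES_BLOCK_SIZE],
			  const uint8_t in[AES_BLOCK_SIZE])
{
	memcpy(out, in, AES_BLOCK_SIZE);
}

/* big-endian increment of the whole block, like crypto_inc(iv, 16) */
static void counter_inc(uint8_t ctr[AES_BLOCK_SIZE])
{
	for (int i = AES_BLOCK_SIZE - 1; i >= 0; i--)
		if (++ctr[i])
			break;
}

static void encrypt_tail(uint8_t *buf, size_t nbytes,
			 uint8_t ctr[AES_BLOCK_SIZE])
{
	uint8_t ks[2 * AES_BLOCK_SIZE];

	dummy_encrypt(ks, ctr);			/* first keystream block */
	if (nbytes > AES_BLOCK_SIZE) {		/* mirrors the added branch */
		counter_inc(ctr);
		dummy_encrypt(ks + AES_BLOCK_SIZE, ctr);
	}
	for (size_t i = 0; i < nbytes; i++)	/* XOR keystream into tail */
		buf[i] ^= ks[i];
}

int main(void)
{
	uint8_t ctr[AES_BLOCK_SIZE] = { 0 };
	uint8_t buf[31] = { 0 };		/* worst-case tail length */

	encrypt_tail(buf, sizeof(buf), ctr);
	printf("31-byte tail handled with two keystream blocks\n");
	return 0;
}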
@@ -545,7 +552,7 @@ static int gcm_decrypt(struct aead_request *req)
 		__aes_arm64_encrypt(ctx->aes_key.key_enc, tag, iv, nrounds);
 		put_unaligned_be32(2, iv + GCM_IV_SIZE);
 
-		while (walk.nbytes >= AES_BLOCK_SIZE) {
+		while (walk.nbytes >= (2 * AES_BLOCK_SIZE)) {
 			int blocks = walk.nbytes / AES_BLOCK_SIZE;
 			u8 *dst = walk.dst.virt.addr;
 			u8 *src = walk.src.virt.addr;
@@ -564,11 +571,21 @@ static int gcm_decrypt(struct aead_request *req)
 			} while (--blocks > 0);
 
 			err = skcipher_walk_done(&walk,
-						 walk.nbytes % AES_BLOCK_SIZE);
+						 walk.nbytes % (2 * AES_BLOCK_SIZE));
 		}
-		if (walk.nbytes)
+		if (walk.nbytes) {
+			if (walk.nbytes > AES_BLOCK_SIZE) {
+				u8 *iv2 = iv + AES_BLOCK_SIZE;
+
+				memcpy(iv2, iv, AES_BLOCK_SIZE);
+				crypto_inc(iv2, AES_BLOCK_SIZE);
+
+				__aes_arm64_encrypt(ctx->aes_key.key_enc, iv2,
+						    iv2, nrounds);
+			}
 			__aes_arm64_encrypt(ctx->aes_key.key_enc, iv, iv,
 					    nrounds);
+		}
 	}
 
 	/* handle the tail */
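The decrypt tail mirrors the encrypt side, with one ordering subtlety: the first keystream block is produced by encrypting iv in place, so the counter for the second block must be copied aside and incremented before iv is overwritten. A fragment sketching that ordering, reusing the stand-in helpers from the sketch above (placing the scratch block iv2 directly after the counter, as the diff's iv2 = iv + AES_BLOCK_SIZE does, is an assumption carried over from the patch):

	uint8_t iv[2 * AES_BLOCK_SIZE];		/* counter + scratch block */
	uint8_t *iv2 = iv + AES_BLOCK_SIZE;

	if (nbytes > AES_BLOCK_SIZE) {
		memcpy(iv2, iv, AES_BLOCK_SIZE);  /* save counter for block 1 */
		counter_inc(iv2);
		dummy_encrypt(iv2, iv2);	  /* keystream block 1, in place */
	}
	dummy_encrypt(iv, iv);			  /* keystream block 0, in place */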
