  1. Apr 27, 2019
    • netlink: make validation more configurable for future strictness · 8cb08174
      Johannes Berg authored
      
      We currently have two levels of strict validation:
      
       1) liberal (default)
           - undefined (type >= max) & NLA_UNSPEC attributes accepted
           - attribute length >= expected accepted
           - garbage at end of message accepted
       2) strict (opt-in)
           - NLA_UNSPEC attributes accepted
           - attribute length >= expected accepted
      
      Split out parsing strictness into four different options:
       * TRAILING     - check that there's no trailing data after parsing
                        attributes (in message or nested)
       * MAXTYPE      - reject attrs > max known type
       * UNSPEC       - reject attributes with NLA_UNSPEC policy entries
       * STRICT_ATTRS - strictly validate attribute size
      
      The default for future things should be *everything*.
      The current *_strict() is a combination of TRAILING and MAXTYPE,
      and is renamed to _deprecated_strict().
      The current regular parsing has none of this, and is renamed to
      *_parse_deprecated().
      
      Additionally, this allows us to selectively set one of the new flags
      even on old policies. Notably, the UNSPEC flag could be useful in
      this case, since it can be arranged (by filling in the policy) to
      not be an incompatible userspace ABI change, but going forward it
      would prevent forgetting attribute entries. The same can apply to
      the POLICY flag.
      
      We end up with the following renames:
       * nla_parse           -> nla_parse_deprecated
       * nla_parse_strict    -> nla_parse_deprecated_strict
       * nlmsg_parse         -> nlmsg_parse_deprecated
       * nlmsg_parse_strict  -> nlmsg_parse_deprecated_strict
       * nla_parse_nested    -> nla_parse_nested_deprecated
       * nla_validate_nested -> nla_validate_nested_deprecated
      
      Using spatch, of course:
          @@
          expression TB, MAX, HEAD, LEN, POL, EXT;
          @@
          -nla_parse(TB, MAX, HEAD, LEN, POL, EXT)
          +nla_parse_deprecated(TB, MAX, HEAD, LEN, POL, EXT)
      
          @@
          expression NLH, HDRLEN, TB, MAX, POL, EXT;
          @@
          -nlmsg_parse(NLH, HDRLEN, TB, MAX, POL, EXT)
          +nlmsg_parse_deprecated(NLH, HDRLEN, TB, MAX, POL, EXT)
      
          @@
          expression NLH, HDRLEN, TB, MAX, POL, EXT;
          @@
          -nlmsg_parse_strict(NLH, HDRLEN, TB, MAX, POL, EXT)
          +nlmsg_parse_deprecated_strict(NLH, HDRLEN, TB, MAX, POL, EXT)
      
          @@
          expression TB, MAX, NLA, POL, EXT;
          @@
          -nla_parse_nested(TB, MAX, NLA, POL, EXT)
          +nla_parse_nested_deprecated(TB, MAX, NLA, POL, EXT)
      
          @@
          expression START, MAX, POL, EXT;
          @@
          -nla_validate_nested(START, MAX, POL, EXT)
          +nla_validate_nested_deprecated(START, MAX, POL, EXT)
      
          @@
          expression NLH, HDRLEN, MAX, POL, EXT;
          @@
          -nlmsg_validate(NLH, HDRLEN, MAX, POL, EXT)
          +nlmsg_validate_deprecated(NLH, HDRLEN, MAX, POL, EXT)
      
      For this patch, don't actually add the strict, non-renamed versions
      yet so that it breaks compile if I get it wrong.
      
      Also, while at it, make nla_validate and nla_parse go down to a
      common __nla_validate_parse() function to avoid code duplication.
      
      Ultimately, this allows us to have very strict validation for every
      new caller of nla_parse()/nlmsg_parse() etc. as they are re-introduced
      in the next patch, while existing things will continue to work as is.
      
      In effect then, this adds fully strict validation for any new command.
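
      For illustration only: the four options map naturally onto flag
      bits, and the strictness levels above can be expressed as
      combinations of them. The identifiers below are assumed for this
      sketch, not quoted from the patch:

          /* sketch: one flag bit per validation option */
          enum netlink_validation {
                  NL_VALIDATE_LIBERAL      = 0,
                  NL_VALIDATE_TRAILING     = BIT(0), /* no trailing data */
                  NL_VALIDATE_MAXTYPE      = BIT(1), /* reject attrs > max type */
                  NL_VALIDATE_UNSPEC       = BIT(2), /* reject NLA_UNSPEC attrs */
                  NL_VALIDATE_STRICT_ATTRS = BIT(3), /* strict attr sizes */
          };

          /* the renamed *_deprecated_strict() keeps only the old checks */
          #define NL_VALIDATE_DEPRECATED_STRICT \
                  (NL_VALIDATE_TRAILING | NL_VALIDATE_MAXTYPE)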
      
      Signed-off-by: Johannes Berg <johannes.berg@intel.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  2. Apr 25, 2019
  3. Apr 19, 2019
    • crypto: ccm - fix incompatibility between "ccm" and "ccm_base" · 6a1faa4a
      Eric Biggers authored
      
      CCM instances can be created by either the "ccm" template, which only
      allows choosing the block cipher, e.g. "ccm(aes)"; or by "ccm_base",
      which allows choosing the ctr and cbcmac implementations, e.g.
      "ccm_base(ctr(aes-generic),cbcmac(aes-generic))".
      
      However, a "ccm_base" instance prevents a "ccm" instance from being
      registered using the same implementations.  Nor will the instance be
      found by lookups of "ccm".  This can be used as a denial of service.
      Moreover, "ccm_base" instances are never tested by the crypto
      self-tests, even if there are compatible "ccm" tests.
      
      The root cause of these problems is that instances of the two templates
      use different cra_names.  Therefore, fix these problems by making
      "ccm_base" instances set the same cra_name as "ccm" instances, e.g.
      "ccm(aes)" instead of "ccm_base(ctr(aes-generic),cbcmac(aes-generic))".
      
      This requires extracting the block cipher name from the name of the ctr
      and cbcmac algorithms.  It also requires starting to verify that the
      algorithms are really ctr and cbcmac using the same block cipher, not
      something else entirely.  But it would be bizarre if anyone were
      actually using non-ccm-compatible algorithms with ccm_base, so this
      shouldn't break anyone in practice.
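
      As a sketch of the resulting naming (variable names here are
      assumptions, not the patch's code): once the block cipher name has
      been extracted from the ctr/cbcmac algorithm names, the instance's
      cra_name is built from it rather than from the requested names:

          /* blockcipher_name is "aes", extracted from "ctr(aes-generic)" */
          if (snprintf(inst->alg.base.cra_name, CRYPTO_MAX_ALG_NAME,
                       "ccm(%s)", blockcipher_name) >= CRYPTO_MAX_ALG_NAME)
                  return -ENAMETOOLONG;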
      
      Fixes: 4a49b499 ("[CRYPTO] ccm: Added CCM mode")
      Cc: stable@vger.kernel.org
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: gcm - fix incompatibility between "gcm" and "gcm_base" · f699594d
      Eric Biggers authored
      
      GCM instances can be created by either the "gcm" template, which only
      allows choosing the block cipher, e.g. "gcm(aes)"; or by "gcm_base",
      which allows choosing the ctr and ghash implementations, e.g.
      "gcm_base(ctr(aes-generic),ghash-generic)".
      
      However, a "gcm_base" instance prevents a "gcm" instance from being
      registered using the same implementations.  Nor will the instance be
      found by lookups of "gcm".  This can be used as a denial of service.
      Moreover, "gcm_base" instances are never tested by the crypto
      self-tests, even if there are compatible "gcm" tests.
      
      The root cause of these problems is that instances of the two templates
      use different cra_names.  Therefore, fix these problems by making
      "gcm_base" instances set the same cra_name as "gcm" instances, e.g.
      "gcm(aes)" instead of "gcm_base(ctr(aes-generic),ghash-generic)".
      
      This requires extracting the block cipher name from the name of the ctr
      algorithm.  It also requires starting to verify that the algorithms are
      really ctr and ghash, not something else entirely.  But it would be
      bizarre if anyone were actually using non-gcm-compatible algorithms with
      gcm_base, so this shouldn't break anyone in practice.
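
      For context, users look AEADs up by cra_name, so with this fix a
      lookup like the following (illustrative call) can also be satisfied
      by an instance that was created through "gcm_base":

          struct crypto_aead *tfm = crypto_alloc_aead("gcm(aes)", 0, 0);

          if (IS_ERR(tfm))
                  return PTR_ERR(tfm);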
      
      Fixes: d00aa19b ("[CRYPTO] gcm: Allow block cipher parameter")
      Cc: stable@vger.kernel.org
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
  4. Apr 18, 2019
    • crypto: shash - fix missed optimization in shash_ahash_digest() · 67cb60e4
      Eric Biggers authored
      
      shash_ahash_digest(), which is the ->digest() method for ahash tfms that
      use an shash algorithm, has an optimization where crypto_shash_digest()
      is called if the data is in a single page.  But an off-by-one error
      prevented this path from being taken unless the user happened to provide
      extra data in the scatterlist.  Fix it.
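
      The shape of the bug, illustrated (this is not the exact kernel
      condition): a message that exactly fills the usable single-page
      span must still take the fast path, so the bound must be inclusive:

          /* '<' wrongly excluded the exact-fit case; '<=' is correct */
          if (nbytes <= min(sg->length,
                            (unsigned int)PAGE_SIZE - sg->offset))
                  return crypto_shash_digest(desc, data, nbytes, out);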
      
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: cryptd - remove ability to instantiate ablkciphers · 0a877e35
      Eric Biggers authored
      
      Remove cryptd_alloc_ablkcipher() and the ability of cryptd to create
      algorithms with the deprecated "ablkcipher" type.
      
      This has been unused since commit 0e145b47 ("crypto: ablk_helper -
      remove ablk_helper").  Instead, cryptd_alloc_skcipher() is used.
      
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: scompress - initialize per-CPU variables on each CPU · 8c3fffe3
      Sebastian Andrzej Siewior authored
      
      In commit 71052dcf ("crypto: scompress - Use per-CPU struct instead
      multiple variables") I accidentally initialized the memory multiple
      times on one random CPU. I should have initialized the memory on
      every CPU, as had been done earlier. I didn't notice this because the
      scheduler didn't move the task to another CPU.
      Guenter managed to do that, and the code crashed as expected.
      
      Allocate / free per-CPU memory on each CPU.
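
      A minimal sketch of the corrected pattern (struct and size names
      are assumptions): iterate over every possible CPU instead of
      touching only the local one:

          static int scomp_alloc_scratches(void)
          {
                  int i;

                  for_each_possible_cpu(i) {
                          struct scomp_scratch *s = per_cpu_ptr(&scomp_scratch, i);

                          s->src = vmalloc_node(SCOMP_SCRATCH_SIZE, cpu_to_node(i));
                          s->dst = vmalloc_node(SCOMP_SCRATCH_SIZE, cpu_to_node(i));
                          if (!s->src || !s->dst)
                                  return -ENOMEM;
                  }
                  return 0;
          }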
      
      Fixes: 71052dcf ("crypto: scompress - Use per-CPU struct instead multiple variables")
      Reported-by: Guenter Roeck <linux@roeck-us.net>
      Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
      Tested-by: Guenter Roeck <linux@roeck-us.net>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: run initcalls for generic implementations earlier · c4741b23
      Eric Biggers authored
      
      Use subsys_initcall for registration of all templates and generic
      algorithm implementations, rather than module_init.  Then change
      cryptomgr to use arch_initcall, to place it before the subsys_initcalls.
      
      This is needed so that when both a generic and optimized implementation
      of an algorithm are built into the kernel (not loadable modules), the
      generic implementation is registered before the optimized one.
      Otherwise, the self-tests for the optimized implementation are unable to
      allocate the generic implementation for the new comparison fuzz tests.
      
      Note that on arm, a side effect of this change is that self-tests for
      generic implementations may run before the unaligned access handler has
      been installed.  So, unaligned accesses will crash the kernel.  This is
      arguably a good thing as it makes it easier to detect that type of bug.
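
      The registration pattern this describes, sketched (the init
      function name is illustrative):

          /* generic implementation: register before optimized ones */
          static int __init aes_generic_init(void)
          {
                  return crypto_register_alg(&aes_alg);
          }
          subsys_initcall(aes_generic_init); /* was: module_init(aes_generic_init) */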
      
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: testmgr - fuzz AEADs against their generic implementation · 40153b10
      Eric Biggers authored
      
      When the extra crypto self-tests are enabled, test each AEAD algorithm
      against its generic implementation when one is available.  This
      involves: checking the algorithm properties for consistency, then
      randomly generating test vectors using the generic implementation and
      running them against the implementation under test.  Both good and bad
      inputs are tested.
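
      Conceptually, the comparison loop has this shape (helper names are
      illustrative, not testmgr's actual functions):

          for (i = 0; i < fuzz_iterations; i++) {
                  struct aead_testvec vec;

                  /* the generic implementation produces expected outputs */
                  generate_random_aead_testvec(generic_tfm, &vec);
                  /* the implementation under test must reproduce them */
                  err = run_aead_testvec(tfm_under_test, &vec);
                  if (err)
                          return err;
          }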
      
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: testmgr - fuzz skciphers against their generic implementation · d435e10e
      Eric Biggers authored
      
      When the extra crypto self-tests are enabled, test each skcipher
      algorithm against its generic implementation when one is available.
      This involves: checking the algorithm properties for consistency, then
      randomly generating test vectors using the generic implementation and
      running them against the implementation under test.  Both good and bad
      inputs are tested.
      
      This has already detected a bug in the skcipher_walk API, a bug in the
      LRW template, and an inconsistency in the cts implementations.
      
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: testmgr - fuzz hashes against their generic implementation · 9a8a6b3f
      Eric Biggers authored
      
      When the extra crypto self-tests are enabled, test each hash algorithm
      against its generic implementation when one is available.  This
      involves: checking the algorithm properties for consistency, then
      randomly generating test vectors using the generic implementation and
      running them against the implementation under test.  Both good and bad
      inputs are tested.
      
      This has already detected a bug in the x86 implementation of poly1305,
      bugs in crct10dif, and an inconsistency in cbcmac.
      
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: testmgr - add helpers for fuzzing against generic implementation · f2bb770a
      Eric Biggers authored
      
      Add some helper functions in preparation for fuzz testing algorithms
      against their generic implementation.
      
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: testmgr - identify test vectors by name rather than number · 951d1332
      Eric Biggers authored
      
      In preparation for fuzz testing algorithms against their generic
      implementation, make error messages in testmgr identify test vectors by
      name rather than index.  Built-in test vectors are simply "named" by
      their index in testmgr.h, as before.  But (in later patches) generated
      test vectors will be given more descriptive names to help developers
      debug problems detected with them.
      
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: testmgr - expand ability to test for errors · 5283a8ee
      Eric Biggers authored
      
      Update testmgr to support testing for specific errors from setkey() and
      digest() for hashes; setkey() and encrypt()/decrypt() for skciphers and
      ciphers; and setkey(), setauthsize(), and encrypt()/decrypt() for AEADs.
      This is useful because algorithms usually restrict the lengths or format
      of the message, key, and/or authentication tag in some way.  And bad
      inputs should be tested too, not just good inputs.
      
      As part of this change, remove the ambiguously-named 'fail' flag and
      replace it with 'setkey_error = -EINVAL' for the only test vector that
      used it -- the DES weak key test vector.  Note that this tightens the
      test to require -EINVAL rather than any error code, but AFAICS this
      won't cause any test failure.
      
      Other than that, these new fields aren't set on any test vectors yet.
      Later patches will do so.
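
      An illustrative vector using the new field (the DES weak-key case
      mentioned above; the layout is sketched, not copied from testmgr.h):

          static const struct cipher_testvec des_weak_key_tv = {
                  .key  = "\x01\x01\x01\x01\x01\x01\x01\x01", /* DES weak key */
                  .klen = 8,
                  .setkey_error = -EINVAL, /* must fail with exactly -EINVAL */
          };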
      
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: ecrdsa - add EC-RDSA test vectors to testmgr · 32fbdbd3
      Vitaly Chikunov authored
      
      Add testmgr test vectors for the EC-RDSA algorithm for each of the
      five supported parameter sets (curves). Because there are no
      officially published test vectors for the curves, the vectors were
      generated by gost-engine.
      
      Signed-off-by: Vitaly Chikunov <vt@altlinux.org>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: ecrdsa - add EC-RDSA (GOST 34.10) algorithm · 0d7a7864
      Vitaly Chikunov authored
      
      Add the Elliptic Curve Russian Digital Signature Algorithm (GOST R
      34.10-2012, RFC 7091, ISO/IEC 14888-3), one of the Russian (and,
      since 2018, CIS countries') cryptographic standard algorithms
      (called GOST algorithms). Only signature verification is supported,
      with the intent of it being used in IMA.
      
      Summary of the changes:
      
      * crypto/Kconfig:
        - EC-RDSA is added into Public-key cryptography section.
      
      * crypto/Makefile:
        - ecrdsa objects are added.
      
      * crypto/asymmetric_keys/x509_cert_parser.c:
        - Recognize EC-RDSA and Streebog OIDs.
      
      * include/linux/oid_registry.h:
        - EC-RDSA OIDs are added to the enum. Also, two currently
          unimplemented curve OIDs are added for possible later extension
          (so as not to change the numbering and grouping).
      
      * crypto/ecc.c:
        - The Kenneth MacKay copyright date is updated to 2014, because
          vli_mmod_slow, ecc_point_add and ecc_point_mult_shamir are based
          on his code from micro-ecc.
        - Functions needed for ecrdsa are EXPORT_SYMBOL'ed.
        - New functions:
          vli_is_negative - helper to determine the sign of a vli;
          vli_from_be64 - unpack a big-endian array into a vli (used for
            a signature);
          vli_from_le64 - unpack a little-endian array into a vli (used for
            a public key);
          vli_uadd, vli_usub - add/subtract a u64 value to/from a vli (used
            for increment/decrement);
          mul_64_64 - optimized to use __int128 where appropriate; this
            speeds up point multiplication (and, as a consequence,
            signature verification) by a factor of 1.5-2;
          vli_umult - multiply a vli by a small value (speeds up point
            multiplication by another factor of 1.5-2, depending on vli
            sizes);
          vli_mmod_special - modular reduction for one form of
            pseudo-Mersenne primes (used for the A curves);
          vli_mmod_special2 - modular reduction for another form of
            pseudo-Mersenne primes (used for the B curves);
          vli_mmod_barrett - modular reduction using a pre-computed value
            (used for the C curve);
          vli_mmod_slow - more general modular reduction, which is much
            slower (used when the modulus is the subgroup order);
          vli_mod_mult_slow - modular multiplication;
          ecc_point_add - add two points;
          ecc_point_mult_shamir - multiply two points by scalars and add
            the results in one combined multiplication, which gives a
            speed-up by another factor of 2 compared to two separate
            multiplications (see the sketch after this list);
          ecc_is_pubkey_valid_partial - an additional sanity check.
        - Updated vli_mmod_fast with a non-strict heuristic to call the
          optimal modular reduction function depending on the prime value.
        - All computations for the previously defined (two NIST) curves
          should be unaffected.
      
      * crypto/ecc.h:
        - Newly exported functions are documented.
      
      * crypto/ecrdsa_defs.h:
        - Five curves are defined.
      
      * crypto/ecrdsa.c:
        - Signature verification is implemented.
      
      * crypto/ecrdsa_params.asn1, crypto/ecrdsa_pub_key.asn1:
        - Templates for BER decoder for EC-RDSA parameters and public key.
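
      As referenced in the ecc_point_mult_shamir entry above, a conceptual
      sketch of Shamir's trick on plain integers (the real code operates
      on EC points and vli big numbers): computing a*P + b*Q in one
      double-and-add pass halves the number of doublings compared to two
      separate multiplications.

          unsigned int shamir_mult(unsigned int a, unsigned int p,
                                   unsigned int b, unsigned int q)
          {
                  unsigned int r = 0;
                  int i;

                  for (i = 31; i >= 0; i--) {
                          r += r;         /* one shared "doubling" per bit */
                          if ((a >> i) & 1)
                                  r += p; /* conditional "point addition" */
                          if ((b >> i) & 1)
                                  r += q;
                  }
                  return r;               /* equals a*p + b*q */
          }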
      
      Cc: linux-integrity@vger.kernel.org
      Signed-off-by: Vitaly Chikunov <vt@altlinux.org>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: ecc - make ecc into separate module · 4a2289da
      Vitaly Chikunov authored
      
      ecc.c has algorithms that can be used by both ecdh and ecrdsa. Make
      it a separate module. Add CRYPTO_ECC to Kconfig. EXPORT_SYMBOL and
      document what seems appropriate. Move the ecc_point and ecc_curve
      structs from ecc_curve_defs.h into ecc.h.
      
      No code changes.
      
      Signed-off-by: Vitaly Chikunov <vt@altlinux.org>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: Kconfig - create Public-key cryptography section · 3d6228a5
      Vitaly Chikunov authored
      
      Group RSA, DH, and ECDH into Public-key cryptography config section.
      
      Signed-off-by: Vitaly Chikunov <vt@altlinux.org>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • X.509: parse public key parameters from x509 for akcipher · f1774cb8
      Vitaly Chikunov authored
      
      Some public key algorithms (like EC-DSA) keep important data in the
      parameters field, such as the digest and curve OIDs (possibly more
      for other EC-DSA variants). Thus, just setting a public key (as for
      RSA) is not enough.
      
      Append the parameters to the key stream for
      akcipher_set_{pub,priv}_key. The appended data is: (u32) algo OID,
      (u32) parameters length, parameters data.
      
      This affects neither the current akcipher API nor the RSA ciphers
      (they can ignore it). The idea of appending parameters to the key
      stream is Herbert Xu's.
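
      Sketched as a C struct for clarity (the type name is invented here;
      the patch defines the byte layout, not this struct):

          struct key_params_prefix {
                  u32 algo;     /* algorithm OID (enum value) */
                  u32 paramlen; /* length of the parameters that follow */
                  /* followed by paramlen bytes of parameters data */
          };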
      
      Cc: David Howells <dhowells@redhat.com>
      Cc: Denis Kenzior <denkenz@gmail.com>
      Cc: keyrings@vger.kernel.org
      Signed-off-by: Vitaly Chikunov <vt@altlinux.org>
      Reviewed-by: Denis Kenzior <denkenz@gmail.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • KEYS: do not kmemdup digest in {public,tpm}_key_verify_signature · 83bc0299
      Vitaly Chikunov authored
      
      Treat (struct public_key_signature)'s digest the same as its
      signature (s). Since the digest should already be in kmalloc'd
      memory, do not kmemdup the digest value before calling
      {public,tpm}_key_verify_signature.
      
      This patch is split out from the previous one, as suggested by
      Herbert Xu.
      
      Suggested-by: David Howells <dhowells@redhat.com>
      Cc: David Howells <dhowells@redhat.com>
      Cc: keyrings@vger.kernel.org
      Signed-off-by: Vitaly Chikunov <vt@altlinux.org>
      Reviewed-by: Denis Kenzior <denkenz@gmail.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: akcipher - new verify API for public key algorithms · c7381b01
      Vitaly Chikunov authored
      
      Previously, akcipher .verify() just `decrypts' the signature (using
      RSA encrypt, which uses the public key) to uncover the message hash,
      which was then compared at the upper level, in
      public_key_verify_signature(), with the expected hash value; the
      hash itself was never passed into verify().
      
      This approach is incompatible with the EC-DSA family of algorithms
      because, to verify a signature, an EC-DSA algorithm also needs the
      hash value as input; it is then used (together with the signature
      divided into the halves `r||s') to produce a witness value, which is
      compared with `r' to determine whether the signature is correct.
      Thus, for EC-DSA, neither the requirements on .verify() itself nor
      its output expectations in public_key_verify_signature() were
      sufficient.
      
      Make the improved .verify() take the hash value as input and perform
      the complete signature check, with no output besides the status.
      
      Now, for top-level verification, only crypto_akcipher_verify() needs
      to be called and its return value inspected.
      
      Make sure that `digest' is in kmalloc'd memory (in place of
      `output') in {public,tpm}_key_verify_signature(), as insisted on by
      Herbert Xu; this will be changed in the following commit.
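
      A hedged sketch of a caller under the new API (field names are from
      struct public_key_signature; the exact scatterlist layout is an
      assumption based on the description above):

          struct scatterlist src[2];

          sg_init_table(src, 2);
          sg_set_buf(&src[0], sig->s, sig->s_size);           /* signature */
          sg_set_buf(&src[1], sig->digest, sig->digest_size); /* hash, as input */
          akcipher_request_set_crypt(req, src, NULL, sig->s_size,
                                     sig->digest_size);       /* no output */
          err = crypto_akcipher_verify(req); /* 0 iff signature is valid */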
      
      Cc: David Howells <dhowells@redhat.com>
      Cc: keyrings@vger.kernel.org
      Signed-off-by: Vitaly Chikunov <vt@altlinux.org>
      Reviewed-by: Denis Kenzior <denkenz@gmail.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: rsa - unimplement sign/verify for raw RSA backends · 3ecc9725
      Vitaly Chikunov authored
      
      In preparation for the new akcipher verify call, remove the
      sign/verify callbacks from the RSA backends and make the PKCS1
      driver call encrypt/decrypt instead.
      
      This also complies with the well-known principle that raw RSA should
      never be used for sign/verify; it should only be used with a proper
      padding scheme, such as the PKCS1 driver provides.
      
      Cc: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
      Cc: qat-linux@intel.com
      Cc: Tom Lendacky <thomas.lendacky@amd.com>
      Cc: Gary Hook <gary.hook@amd.com>
      Cc: Horia Geantă <horia.geanta@nxp.com>
      Cc: Aymen Sghaier <aymen.sghaier@nxp.com>
      Signed-off-by: Vitaly Chikunov <vt@altlinux.org>
      Reviewed-by: Horia Geantă <horia.geanta@nxp.com>
      Acked-by: Gary R Hook <gary.hook@amd.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: akcipher - default implementations for request callbacks · 78a0324f
      Vitaly Chikunov authored
      
      With the introduction of EC-RDSA, and the changes to how RSA handles
      sign/verify, an akcipher might not have all callbacks defined. Check
      for the presence of the callbacks in crypto_register_akcipher() and
      provide a default implementation if a callback is not implemented.
      
      This was suggested by Herbert Xu instead of checking for the
      presence of the callback on every request.
      
      Signed-off-by: Vitaly Chikunov <vt@altlinux.org>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: des_generic - Forbid 2-key in 3DES and add helpers · d7198ce4
      Herbert Xu authored
      
      This patch adds a requirement to the generic 3DES implementation
      such that 2-key 3DES (K1 == K3) is no longer allowed in FIPS mode.
      
      We will also provide helpers that may be used by drivers that
      implement 3DES to make the same check.
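
      The check itself is simple; a conceptual version (not the helper's
      actual name or exact form):

          /* 2-key 3DES sets K1 == K3; forbid that in FIPS mode */
          if (fips_enabled &&
              !memcmp(key, key + 2 * DES_KEY_SIZE, DES_KEY_SIZE))
                  return -EINVAL;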
      
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: salsa20 - don't access already-freed walk.iv · edaf28e9
      Eric Biggers authored
      
      If the user-provided IV needs to be aligned to the algorithm's
      alignmask, then skcipher_walk_virt() copies the IV into a new aligned
      buffer walk.iv.  But skcipher_walk_virt() can fail afterwards, and then
      if the caller unconditionally accesses walk.iv, it's a use-after-free.
      
      salsa20-generic doesn't set an alignmask, so currently it isn't affected
      by this despite unconditionally accessing walk.iv.  However this is more
      subtle than desired, and it was actually broken prior to the alignmask
      being removed by commit b62b3db7 ("crypto: salsa20-generic - cleanup
      and convert to skcipher API").
      
      Since salsa20-generic does not update the IV and does not need any IV
      alignment, update it to use req->iv instead of walk.iv.
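
      In sketch form (salsa20's init helper is named illustratively):

          err = skcipher_walk_virt(&walk, req, false);

          /* req->iv is always valid; walk.iv may already have been
           * freed if skcipher_walk_virt() failed above */
          salsa20_init(state, ctx, req->iv);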
      
      Fixes: 2407d608 ("[CRYPTO] salsa20: Salsa20 stream cipher")
      Cc: stable@vger.kernel.org
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: lrw - don't access already-freed walk.iv · aec286cd
      Eric Biggers authored
      
      If the user-provided IV needs to be aligned to the algorithm's
      alignmask, then skcipher_walk_virt() copies the IV into a new aligned
      buffer walk.iv.  But skcipher_walk_virt() can fail afterwards, and then
      if the caller unconditionally accesses walk.iv, it's a use-after-free.
      
      Fix this in the LRW template by checking the return value of
      skcipher_walk_virt().
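
      That is, in sketch form:

          err = skcipher_walk_virt(&walk, req, false);
          if (err)
                  return err; /* walk.iv may point to freed memory here */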
      
      This bug was detected by my patches that improve testmgr to fuzz
      algorithms against their generic implementation.  When the extra
      self-tests were run on a KASAN-enabled kernel, a KASAN
      use-after-free splat occurred during lrw(aes) testing.
      
      Fixes: c778f96b ("crypto: lrw - Optimize tweak computation")
      Cc: <stable@vger.kernel.org> # v4.20+
      Cc: Ondrej Mosnacek <omosnace@redhat.com>
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: lrw - Fix atomic sleep when walking skcipher · b257b48c
      Herbert Xu authored
      
      When we perform a walk in the completion function, we need to ensure
      that it is atomic.
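
      The walk API takes an explicit atomic flag; in a completion
      callback it must be true (sketch):

          /* we may not sleep here: this runs in the completion path */
          err = skcipher_walk_virt(&walk, req, true);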
      
      Fixes: ac3c8f36 ("crypto: lrw - Do not use auxiliary buffer")
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      Acked-by: Ondrej Mosnacek <omosnace@redhat.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: xts - Fix atomic sleep when walking skcipher · 44427c0f
      Herbert Xu authored
      
      When we perform a walk in the completion function, we need to ensure
      that it is atomic.
      
      Reported-by: syzbot+6f72c20560060c98b566@syzkaller.appspotmail.com
      Fixes: 78105c7e ("crypto: xts - Drop use of auxiliary buffer")
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      Acked-by: Ondrej Mosnacek <omosnace@redhat.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
  5. Apr 08, 2019
    • crypto: x86/poly1305 - fix overflow during partial reduction · 678cce40
      Eric Biggers authored
      
      The x86_64 implementation of Poly1305 produces the wrong result on some
      inputs because poly1305_4block_avx2() incorrectly assumes that when
      partially reducing the accumulator, the bits carried from limb 'd4' to
      limb 'h0' fit in a 32-bit integer.  This is true for poly1305-generic
      which processes only one block at a time.  However, it's not true for
      the AVX2 implementation, which processes 4 blocks at a time and
      therefore can produce intermediate limbs about 4x larger.
      
      Fix it by making the relevant calculations use 64-bit arithmetic rather
      than 32-bit.  Note that most of the carries already used 64-bit
      arithmetic, but the d4 -> h0 carry was different for some reason.
      
      To be safe I also made the same change to the corresponding SSE2 code,
      though that only operates on 1 or 2 blocks at a time.  I don't think
      it's really needed for poly1305_block_sse2(), but it doesn't hurt
      because it's already x86_64 code.  It *might* be needed for
      poly1305_2block_sse2(), but overflows aren't easy to reproduce there.
      
      This bug was originally detected by my patches that improve testmgr to
      fuzz algorithms against their generic implementation.  But also add a
      test vector which reproduces it directly (in the AVX2 case).
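
      An illustrative C rendering of the problem (the patch itself fixes
      x86 assembly): in the radix-2^26 limb representation, the carry out
      of d4 is folded back into h0 multiplied by 5, and with four blocks
      processed at once that product no longer fits in 32 bits.

          static inline void fold_d4_carry(u64 *h0, u64 *d4)
          {
                  u64 carry = *d4 >> 26; /* can exceed 32 bits here */

                  *d4 &= 0x3ffffff;      /* keep the low 26 bits */
                  *h0 += carry * 5;      /* must be 64-bit arithmetic */
          }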
      
      Fixes: b1ccc8f4 ("crypto: poly1305 - Add a four block AVX2 variant for x86_64")
      Fixes: c70f4abe ("crypto: poly1305 - Add a SSE2 SIMD variant for x86_64")
      Cc: <stable@vger.kernel.org> # v4.3+
      Cc: Martin Willi <martin@strongswan.org>
      Cc: Jason A. Donenfeld <Jason@zx2c4.com>
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Reviewed-by: Martin Willi <martin@strongswan.org>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: testmgr - add panic_on_fail module parameter · eda69b0c
      Eric Biggers authored
      
      Add a module parameter cryptomgr.panic_on_fail which causes the kernel
      to panic if any crypto self-tests fail.
      
      Use cases:
      
      - More easily detect crypto self-test failures by boot testing,
        e.g. on KernelCI.
      - Get a bug report if syzkaller manages to use the template system to
        instantiate an algorithm that fails its self-tests.
      
      The command-line option "fips=1" already does this, but it also makes
      other changes not wanted for general testing, such as disabling
      "unapproved" algorithms.  panic_on_fail just does what it says.
      
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: cts - don't support empty messages · c31a8719
      Eric Biggers authored
      
      My patches to make testmgr fuzz algorithms against their generic
      implementation detected that the arm64 implementations of
      "cts(cbc(aes))" handle empty messages differently from the cts
      template. Namely, the arm64 implementations forbid (with -EINVAL)
      all messages shorter than the block size, including the empty
      message; but the cts template permits empty messages as a special
      case.
      
      No user should be CTS-encrypting/decrypting empty messages, but we need
      to keep the behavior consistent.  Unfortunately, as noted in the source
      of OpenSSL's CTS implementation [1], there's no common specification for
      CTS.  This makes it somewhat debatable what the behavior should be.
      
      However, all CTS specifications seem to agree that messages shorter than
      the block size are not allowed, and OpenSSL follows this in both CTS
      conventions it implements.  It would also simplify the user-visible
      semantics to have empty messages no longer be a special case.
      
      Therefore, make the cts template return -EINVAL on *all* messages
      shorter than the block size, including the empty message.
      
      [1] https://github.com/openssl/openssl/blob/master/crypto/modes/cts128.c
      
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: streebog - fix unaligned memory accesses · c5c46887
      Eric Biggers authored
      
      Don't cast the data buffer directly to streebog_uint512, as this
      violates alignment rules.
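
      The usual fix pattern, sketched (variable names are illustrative):

          /* copy into an aligned local instead of casting the pointer */
          struct streebog_uint512 m;

          memcpy(&m, data, sizeof(m));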
      
      Fixes: fe18957e ("crypto: streebog - add Streebog hash function")
      Cc: Vitaly Chikunov <vt@altlinux.org>
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Reviewed-by: Vitaly Chikunov <vt@altlinux.org>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: chacha20poly1305 - set cra_name correctly · 5e27f38f
      Eric Biggers authored
      
      If the rfc7539 template is instantiated with specific implementations,
      e.g. "rfc7539(chacha20-generic,poly1305-generic)" rather than
      "rfc7539(chacha20,poly1305)", then the implementation names end up
      included in the instance's cra_name.  This is incorrect because it then
      prevents all users from allocating "rfc7539(chacha20,poly1305)", if the
      highest priority implementations of chacha20 and poly1305 were selected.
      Also, the self-tests aren't run on an instance allocated in this way.
      
      Fix it by setting the instance's cra_name from the underlying
      algorithms' actual cra_names, rather than from the requested names.
      This matches what other templates do.
      
      Fixes: 71ebc4d1 ("crypto: chacha20poly1305 - Add a ChaCha20-Poly1305 AEAD construction, RFC7539")
      Cc: <stable@vger.kernel.org> # v4.2+
      Cc: Martin Willi <martin@strongswan.org>
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Reviewed-by: Martin Willi <martin@strongswan.org>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: skcipher - don't WARN on unprocessed data after slow walk step · dcaca01a
      Eric Biggers authored
      
      skcipher_walk_done() assumes it's a bug if, after the "slow" path is
      executed where the next chunk of data is processed via a bounce buffer,
      the algorithm says it didn't process all bytes.  Thus it WARNs on this.
      
      However, this can happen legitimately when the message needs to be
      evenly divisible into "blocks" but isn't, and the algorithm has a
      'walksize' greater than the block size.  For example, ecb-aes-neonbs
      sets 'walksize' to 128 bytes and only supports messages evenly divisible
      into 16-byte blocks.  If, say, 17 message bytes remain but they straddle
      scatterlist elements, the skcipher_walk code will take the "slow" path
      and pass the algorithm all 17 bytes in the bounce buffer.  But the
      algorithm will only be able to process 16 bytes, triggering the WARN.
      
      Fix this by just removing the WARN_ON().  Returning -EINVAL, as the code
      already does, is the right behavior.
      
      This bug was detected by my patches that improve testmgr to fuzz
      algorithms against their generic implementation.
      
      Fixes: b286d8b1 ("crypto: skcipher - Add skcipher walk interface")
      Cc: <stable@vger.kernel.org> # v4.10+
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: crct10dif-generic - fix use via crypto_shash_digest() · 307508d1
      Eric Biggers authored
      
      The ->digest() method of crct10dif-generic reads the current CRC value
      from the shash_desc context.  But this value is uninitialized, causing
      crypto_shash_digest() to compute the wrong result.  Fix it.
      
      Probably this wasn't noticed before because lib/crc-t10dif.c only uses
      crypto_shash_update(), not crypto_shash_digest().  Likewise,
      crypto_shash_digest() is not yet tested by the crypto self-tests because
      those only test the ahash API which only uses shash init/update/final.
      
      This bug was detected by my patches that improve testmgr to fuzz
      algorithms against their generic implementation.
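
      A hedged sketch of a correct ->digest() for this algorithm: start
      from the initial CRC value instead of reading the uninitialized
      per-request context (crc_t10dif_generic() is the existing library
      helper; the surrounding function is illustrative):

          static int chksum_digest(struct shash_desc *desc, const u8 *data,
                                   unsigned int length, u8 *out)
          {
                  /* start from CRC 0; do not read shash_desc_ctx(desc) */
                  *(__u16 *)out = crc_t10dif_generic(0, data, length);
                  return 0;
          }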
      
      Fixes: 2d31e518 ("crypto: crct10dif - Wrap crc_t10dif function all to use crypto transform framework")
      Cc: <stable@vger.kernel.org> # v3.11+
      Cc: Tim Chen <tim.c.chen@linux.intel.com>
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: aes - Use ____cacheline_aligned for aes data · 61abc356
      Andi Kleen authored
      
      __cacheline_aligned places data in a special section. The data
      cannot be const at the same time because that section is not
      read-only, and it doesn't give any MMU protection.
      
      Mark the tables ____cacheline_aligned instead, which does not place
      them in a special section but just aligns them in .rodata.
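
      The difference, illustrated on a hypothetical table:

          /* __cacheline_aligned would move this into a special,
           * non-const section; ____cacheline_aligned only sets the
           * alignment, so the table stays const in .rodata */
          static const u32 example_tab[256] ____cacheline_aligned = { 0 };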
      
      Cc: herbert@gondor.apana.org.au
      Suggested-by: Rasmus Villemoes <linux@rasmusvillemoes.dk>
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: scompress - Use per-CPU struct instead multiple variables · 71052dcf
      Sebastian Andrzej Siewior authored
      
      Two per-CPU variables are allocated as pointers to per-CPU memory,
      which are then used as scratch buffers. We can be smarter about this
      and instead use a per-CPU struct that already contains the pointers;
      then we only need to allocate the scratch buffers themselves.
      Add a lock to the struct. By doing so we can avoid the get_cpu()
      statement and gain lockdep coverage (if enabled) ensuring that the
      lock is always acquired in the right context. On non-preemptible
      kernels the lock vanishes.
      It is okay to use raw_cpu_ptr() in order to get a pointer to the
      struct, since it is protected by the spinlock.
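
      A sketch of the struct this describes (names assumed):

          struct scomp_scratch {
                  spinlock_t lock;
                  void *src;
                  void *dst;
          };

          static DEFINE_PER_CPU(struct scomp_scratch, scomp_scratch) = {
                  .lock = __SPIN_LOCK_UNLOCKED(scomp_scratch.lock),
          };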
      
      The diffstat of this change is negative, and according to `size' on
      scompress.o:
         text    data     bss     dec     hex filename
         1847     160      24    2031     7ef dbg_before.o
         1754     232       4    1990     7c6 dbg_after.o
         1799      64      24    1887     75f no_dbg-before.o
         1703      88       4    1795     703 no_dbg-after.o
      
      The overall size difference is also negative. The increase in the
      data section is only four bytes without lockdep.
      
      Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>