  1. Apr 25, 2019
    • crypto: shash - remove shash_desc::flags · 877b5691
      Eric Biggers authored
      
      The flags field in 'struct shash_desc' never actually does anything.
      The only ostensibly supported flag is CRYPTO_TFM_REQ_MAY_SLEEP.
      However, no shash algorithm ever sleeps, making this flag a no-op.
      
      With this being the case, inevitably some users who can't sleep wrongly
      pass MAY_SLEEP.  These would all need to be fixed if any shash algorithm
      actually started sleeping.  For example, the shash_ahash_*() functions,
      which wrap a shash algorithm with the ahash API, pass through MAY_SLEEP
      from the ahash API to the shash API.  However, the shash functions are
      called under kmap_atomic(), so actually they're assumed to never sleep.
      
      Even if it turns out that some users do need preemption points while
      hashing large buffers, we could easily provide a helper function
      crypto_shash_update_large() which divides the data into smaller chunks
      and calls crypto_shash_update() and cond_resched() for each chunk.  It's
      not necessary to have a flag in 'struct shash_desc', nor is it necessary
      to make individual shash algorithms aware of this at all.
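
      For illustration, a minimal sketch of what such a helper could look
      like (crypto_shash_update_large() does not exist in the tree; the
      chunk size and error handling are illustrative assumptions):

          /*
           * Hypothetical helper from the message above: hash a large buffer
           * in fixed-size chunks, yielding the CPU between chunks.  Callers
           * must be able to sleep.  Not part of the actual patch.
           */
          static int crypto_shash_update_large(struct shash_desc *desc,
                                               const u8 *data, unsigned int len)
          {
                  int err;

                  while (len) {
                          unsigned int n = min_t(unsigned int, len, 4096);

                          err = crypto_shash_update(desc, data, n);
                          if (err)
                                  return err;
                          data += n;
                          len -= n;
                          cond_resched();  /* preemption point between chunks */
                  }
                  return 0;
          }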
      
      Therefore, remove shash_desc::flags, and document that the
      crypto_shash_*() functions can be called from any context.
      
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
  2. Dec 23, 2018
    • crypto: skcipher - remove remnants of internal IV generators · c79b411e
      Eric Biggers authored
      
      Remove dead code related to internal IV generators, which are no longer
      used since they've been replaced with the "seqiv" and "echainiv"
      templates.  The removed code includes:
      
      - The "givcipher" (GIVCIPHER) algorithm type.  No algorithms are
        registered with this type anymore, so it's unneeded.
      
      - The "const char *geniv" member of aead_alg, ablkcipher_alg, and
        blkcipher_alg.  A few algorithms still set this, but it isn't used
        anymore except to show via /proc/crypto and CRYPTO_MSG_GETALG.
        Just hardcode "<default>" or "<none>" in those cases.
      
      - The 'skcipher_givcrypt_request' structure, which is never used.
      
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
  3. Oct 26, 2018
  4. Jul 08, 2018
  5. Jun 15, 2018
  6. May 08, 2018
  7. Mar 30, 2018
  8. Feb 15, 2018
  9. Nov 03, 2017
  10. Jul 14, 2017
  11. Jun 22, 2017
  12. Jun 19, 2017
  13. May 18, 2017
  14. May 16, 2017
  15. Apr 04, 2017
  16. Mar 16, 2017
  17. Feb 15, 2017
  18. Feb 03, 2017
  19. Dec 13, 2016
  20. Dec 01, 2016
  21. May 31, 2016
  22. Feb 06, 2016
  23. Oct 21, 2015
    • KEYS: Merge the type-specific data with the payload data · 146aa8b1
      David Howells authored
      
      Merge the type-specific data with the payload data into one four-word chunk
      as it seems pointless to keep them separate.
      
      Use user_key_payload() for accessing the payloads of overloaded
      user-defined keys.
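
      A hedged sketch of how a filesystem might read such an overloaded key
      through the accessor (locking of the key is assumed to be handled by
      the caller; 'raw_key' and 'raw_key_size' are hypothetical names):

          const struct user_key_payload *ukp;

          ukp = user_key_payload(key);    /* key->sem or RCU read lock held */
          if (ukp)
                  memcpy(raw_key, ukp->data,
                         min_t(unsigned int, ukp->datalen, raw_key_size));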
      
      Signed-off-by: David Howells <dhowells@redhat.com>
      cc: linux-cifs@vger.kernel.org
      cc: ecryptfs@vger.kernel.org
      cc: linux-ext4@vger.kernel.org
      cc: linux-f2fs-devel@lists.sourceforge.net
      cc: linux-nfs@vger.kernel.org
      cc: ceph-devel@vger.kernel.org
      cc: linux-ima-devel@lists.sourceforge.net
  24. Mar 09, 2015
  25. Nov 13, 2014
  26. Jul 03, 2013
  27. Oct 08, 2012
  28. Aug 30, 2009
    • async_tx: add support for asynchronous RAID6 recovery operations · 0a82a623
      Dan Williams authored
      
       async_raid6_2data_recov() recovers two data disk failures
      
       async_raid6_datap_recov() recovers a data disk and the P disk
      
      These routines are a port of the synchronous versions found in
      drivers/md/raid6recov.c.  The primary difference is breaking out the xor
      operations into separate calls to async_xor.  Two helper routines are
      introduced to perform scalar multiplication where needed.
      async_sum_product() multiplies two sources by scalar coefficients and
      then sums (xor) the result.  async_mult() simply multiplies a single
      source by a scalar.
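
      A hedged usage sketch of the two recovery entry points, assuming a
      caller that already holds the stripe's page array and a scribble
      region ('blocks', 'faila'/'failb', 'submit_cb', 'ctx' and 'scribble'
      are illustrative names; the signatures later grew per-page offsets):

          struct async_submit_ctl submit;
          struct dma_async_tx_descriptor *tx;

          init_async_submit(&submit, 0, NULL, submit_cb, ctx, scribble);

          /* two failed data disks */
          tx = async_raid6_2data_recov(disks, bytes, faila, failb,
                                       blocks, &submit);

          /* or: one failed data disk plus the P disk */
          tx = async_raid6_datap_recov(disks, bytes, faila, blocks, &submit);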
      
      This implementation also includes, in contrast to the original
      synchronous-only code, special-case handling for the 4-disk and 5-disk
      array cases.  In these situations the default N-disk algorithm would
      present 0-source or 1-source operations to dma devices.  To cover dma
      devices whose minimum source count is 2, we implement 4-disk and
      5-disk handling in the recovery code.
      
      [ Impact: asynchronous raid6 recovery routines for 2data and datap cases ]
      
      Cc: Yuri Tikhonov <yur@emcraft.com>
      Cc: Ilya Yanok <yanok@emcraft.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: David Woodhouse <David.Woodhouse@intel.com>
      Reviewed-by: Andre Noll <maan@systemlinux.org>
      Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      
    • async_tx: add support for asynchronous GF multiplication · b2f46fd8
      Dan Williams authored
      
      [ Based on an original patch by Yuri Tikhonov ]
      
      This adds support for doing asynchronous GF multiplication by adding
      two additional functions to the async_tx API:
      
       async_gen_syndrome() does simultaneous XOR and Galois field
          multiplication of sources.
      
       async_syndrome_val() validates the given source buffers against known P
          and Q values.
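
      A hedged usage sketch of the generation side, assuming the convention
      that blocks[disks-2] and blocks[disks-1] hold P and Q and that the
      caller supplies a scribble region ('done_cb', 'ctx' and 'scribble'
      are illustrative names):

          struct async_submit_ctl submit;
          struct dma_async_tx_descriptor *tx;

          init_async_submit(&submit, 0, NULL, done_cb, ctx, scribble);
          tx = async_gen_syndrome(blocks, 0, disks, len, &submit);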
      
      When a request is made to run async_pq against more than the hardware
      maximum number of supported sources, we need to reuse the previously
      generated P and Q values as sources for the next operation.  Care must
      be taken to remove Q from P' and P from Q'.  For example, to perform a
      5-source pq op with hardware that only supports 4 sources at a time,
      the following approach is taken:
      
      p, q = PQ(src0, src1, src2, src3, COEF({01}, {02}, {04}, {08}))
      p', q' = PQ(p, q, q, src4, COEF({00}, {01}, {00}, {10}))
      
      p' = p + q + q + src4 = p + src4
      q' = {00}*p + {01}*q + {00}*q + {10}*src4 = q + {10}*src4
      
      Note: 4 is the minimum acceptable maxpq; otherwise we punt to the
      synchronous software path.
      
      The DMA_PREP_CONTINUE flag indicates to the driver to reuse p and q as
      sources (in the above manner) and fill the remaining slots up to maxpq
      with the new sources/coefficients.
      
      Note1: Some devices have native support for P+Q continuation and can skip
      this extra work.  Devices with this capability can advertise it with
      dma_set_maxpq.  It is up to each driver how to handle the
      DMA_PREP_CONTINUE flag.
      
      Note2: The api supports disabling the generation of P when generating
      Q; this is ignored by the synchronous path but is implemented by some
      dma devices to save unnecessary writes.  In this case the continuation
      algorithm is simplified to only reuse Q as a source.
      
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: David Woodhouse <David.Woodhouse@intel.com>
      Signed-off-by: Yuri Tikhonov <yur@emcraft.com>
      Signed-off-by: Ilya Yanok <yanok@emcraft.com>
      Reviewed-by: Andre Noll <maan@systemlinux.org>
      Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
  29. Jun 03, 2009
    • async_xor: permit callers to pass in a 'dma/page scribble' region · 04ce9ab3
      Dan Williams authored
      
      async_xor() needs space to perform dma and page address conversions.  In
      most cases the code can simply reuse the struct page * array because the
      size of the native pointer matches the size of a dma/page address.  In
      order to support archs where sizeof(dma_addr_t) is larger than
      sizeof(struct page *), or to preserve the input parameters, we utilize a
      memory region passed in by the caller.
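
      A hedged sketch of a caller supplying its own scribble region (the
      sizing of one addr_conv_t per source, the flag choice, and the
      omitted error handling are illustrative assumptions):

          addr_conv_t *scribble;
          struct async_submit_ctl submit;
          struct dma_async_tx_descriptor *tx;

          scribble = kmalloc(src_cnt * sizeof(*scribble), GFP_KERNEL);
          init_async_submit(&submit, ASYNC_TX_XOR_ZERO_DST, NULL,
                            cb_fn, cb_param, scribble);
          tx = async_xor(dest, srcs, 0, src_cnt, len, &submit);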
      
      Since the code is now prepared to handle the case where it cannot
      perform address conversions on the stack, we no longer need the
      !HIGHMEM64G dependency in drivers/dma/Kconfig.
      
      [ Impact: don't clobber input buffers for address conversions ]
      
      Reviewed-by: Andre Noll <maan@systemlinux.org>
      Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
    • async_tx: structify submission arguments, add scribble · a08abd8c
      Dan Williams authored
      
      Prepare the api for the arrival of a new parameter, 'scribble'.  This
      will allow callers to identify scratchpad memory for dma address or page
      address conversions.  As this adds yet another parameter, take this
      opportunity to convert the common submission parameters (flags,
      dependency, callback, and callback argument) into an object that is
      passed by reference.
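
      For example, a hedged before/after sketch of the conversion for
      async_memcpy() (argument names are illustrative):

          /* before: submission parameters passed by value */
          tx = async_memcpy(dest, src, dest_off, src_off, len,
                            flags, depend_tx, cb_fn, cb_param);

          /* after: common parameters carried in a struct async_submit_ctl */
          init_async_submit(&submit, flags, depend_tx, cb_fn, cb_param,
                            scribble);
          tx = async_memcpy(dest, src, dest_off, src_off, len, &submit);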
      
      Also, take this opportunity to fix up the kerneldoc and add notes about
      the relevant ASYNC_TX_* flags for each routine.
      
      [ Impact: moves api pass-by-value parameters to a pass-by-reference struct ]
      
      Signed-off-by: Andre Noll <maan@systemlinux.org>
      Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
    • async_tx: kill ASYNC_TX_DEP_ACK flag · 88ba2aa5
      Dan Williams authored
      
      In support of inter-channel chaining, async_tx uses an ack flag to
      gate whether a dependent operation can be chained to another.  While
      the flag is not set, the chain is considered open for appending.
      Setting the ack flag closes the chain and flags the descriptor for
      garbage collection.  The ASYNC_TX_DEP_ACK flag essentially means
      "close the chain after adding this dependency".  Since each operation
      can have only one child, the api now implicitly sets the ack flag at
      dependency submission time.  This removes an unnecessary management
      burden from clients of the api.
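
      A hedged before/after sketch using async_xor()'s pre-conversion call
      form (argument names are illustrative):

          /* before: the chain had to be closed explicitly */
          tx = async_xor(dest, srcs, 0, src_cnt, len,
                         ASYNC_TX_DEP_ACK | ASYNC_TX_XOR_DROP_DST,
                         depend_tx, cb_fn, cb_param);

          /* after: attaching depend_tx implicitly acks it */
          tx = async_xor(dest, srcs, 0, src_cnt, len,
                         ASYNC_TX_XOR_DROP_DST, depend_tx, cb_fn, cb_param);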
      
      [ Impact: clean up and enforce one dependency per operation ]
      
      Reviewed-by: Andre Noll <maan@systemlinux.org>
      Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>