crypto(9) removal and refactoring dm_target_crypt

Michael Neumann mneumann at ntecs.de
Tue Dec 24 17:00:55 PST 2024


Hi,

Here is a massive patch:

https://leaf.dragonflybsd.org/~mneumann/0001-Retire-crypto-9-and-tcplay-8-and-refactor-dm_target_.patch

I'd like to get some feedback concerning:

* the workqueue implementation found in sys/dev/dm/crypto/dm_target_crypt.c.
  It's a "simple" multi-producer, single-consumer queue with a
  pre-allocated free list, so it is 100% non-allocating while in use.
  The worker threads "own" the memory used to process the crypto
  requests, so we no longer need a malloc_pipe to allocate the big buffers
  from on each BIO request.
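
  The core of the idea, as a rough userland sketch (pthreads instead of
  kernel primitives, and the struct/function names here are made up for
  illustration, not the actual names from the patch):

    #include <pthread.h>
    #include <stddef.h>
    #include <sys/queue.h>

    struct job {
            TAILQ_ENTRY(job) entry;
            char buf[64 * 1024];            /* pre-allocated crypto buffer */
            /* ... bio pointer, cipher context, ... */
    };

    struct workqueue {
            pthread_mutex_t lock;
            pthread_cond_t  free_cv;        /* signalled when a job is recycled */
            pthread_cond_t  pending_cv;     /* signalled when work is queued */
            TAILQ_HEAD(, job) free;         /* pre-allocated idle jobs */
            TAILQ_HEAD(, job) pending;      /* jobs waiting for the worker */
    };

    static void
    workqueue_init(struct workqueue *wq, struct job *jobs, int njobs)
    {
            int i;

            pthread_mutex_init(&wq->lock, NULL);
            pthread_cond_init(&wq->free_cv, NULL);
            pthread_cond_init(&wq->pending_cv, NULL);
            TAILQ_INIT(&wq->free);
            TAILQ_INIT(&wq->pending);
            for (i = 0; i < njobs; i++)     /* all memory allocated up front */
                    TAILQ_INSERT_TAIL(&wq->free, &jobs[i], entry);
    }

    /* Producers: take a pre-allocated job; no malloc on the I/O path. */
    static struct job *
    job_get(struct workqueue *wq)
    {
            struct job *j;

            pthread_mutex_lock(&wq->lock);
            while ((j = TAILQ_FIRST(&wq->free)) == NULL)
                    pthread_cond_wait(&wq->free_cv, &wq->lock);
            TAILQ_REMOVE(&wq->free, j, entry);
            pthread_mutex_unlock(&wq->lock);
            return (j);
    }

    /* Producers: hand a filled-in job to the single consumer. */
    static void
    job_submit(struct workqueue *wq, struct job *j)
    {
            pthread_mutex_lock(&wq->lock);
            TAILQ_INSERT_TAIL(&wq->pending, j, entry);
            pthread_cond_signal(&wq->pending_cv);
            pthread_mutex_unlock(&wq->lock);
    }

    /* Consumer (worker thread): fetch the next job to process. */
    static struct job *
    job_next(struct workqueue *wq)
    {
            struct job *j;

            pthread_mutex_lock(&wq->lock);
            while ((j = TAILQ_FIRST(&wq->pending)) == NULL)
                    pthread_cond_wait(&wq->pending_cv, &wq->lock);
            TAILQ_REMOVE(&wq->pending, j, entry);
            pthread_mutex_unlock(&wq->lock);
            return (j);
    }

    /* Consumer: recycle the job (and its buffer) onto the free list. */
    static void
    job_put(struct workqueue *wq, struct job *j)
    {
            pthread_mutex_lock(&wq->lock);
            TAILQ_INSERT_TAIL(&wq->free, j, entry);
            pthread_cond_signal(&wq->free_cv);
            pthread_mutex_unlock(&wq->lock);
    }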

* bogus page handling: e.g. in dmtc_bio_read_decrypt_job_do(), which
  decrypts the read bio data and runs in the worker thread,
  I unconditionally memcpy(data_buf, bio->bio_buf->b_data) in order to
  avoid decrypting in place.  After decryption, I memcpy data_buf back
  to bio->bio_buf->b_data.  I don't see the difference between decrypting
  in place and copying back... Maybe someone can enlighten me :)

  The previous code mentioned bogus page handling, but it was
  disabled.  The relevant comment:

     * For reads with bogus page we can't decrypt in place as stuff
     * can get ripped out from under us.
     *
     * XXX actually it looks like we can, and in any case the initial
     * read already completed and threw crypted data into the buffer
     * cache buffer.  Disable for now.

  Same in dmtc_bio_write_encrypt_job_do(), which processes a BIO write
  request.  Here, I again unconditionally memcpy from bio->bio_buf->b_data
  to data_buf, then encrypt data_buf, and then rewire bio->bio_buf->b_data
  to data_buf before calling vn_strategy.
  In this case, copying the encrypted data_buf back to bio->bio_buf->b_data
  is avoided.  But this comes at the cost of having to block the worker
  thread until vn_strategy finishes and calls bio->bio_done.  It has to
  wait because it owns data_buf and cannot handle the next request until
  the data is completely written.  A memcpy back to the original
  bio->bio_buf would simplify that code, as we could immediately process
  the next request, and the bio_done callback would simply call
  biodone(bio).  A rough sketch of both paths follows below.
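
  To make the data flow concrete, here is a much-simplified sketch of the
  two paths.  The structs and helpers below are placeholders standing in
  for the real buf/bio structures, the cipher calls, vn_strategy() and
  biodone(); only the copy-out / copy-back vs. rewire pattern is the point:

    #include <stddef.h>
    #include <string.h>

    struct buf { char *b_data; size_t b_bcount; };        /* placeholder */
    struct bio { struct buf *bio_buf; };                  /* placeholder */

    static void decrypt_buf(char *p, size_t n) { (void)p; (void)n; }
    static void encrypt_buf(char *p, size_t n) { (void)p; (void)n; }
    static void issue_write(struct bio *b) { (void)b; }   /* ~ vn_strategy() */
    static void complete_bio(struct bio *b) { (void)b; }  /* ~ biodone() */

    /* Read path: copy out, decrypt the worker-owned buffer, copy back.
     * Decrypting in place is avoided because of the bogus-page concern. */
    static void
    read_decrypt_job(struct bio *bio, char *data_buf)
    {
            size_t len = bio->bio_buf->b_bcount;

            memcpy(data_buf, bio->bio_buf->b_data, len);
            decrypt_buf(data_buf, len);
            memcpy(bio->bio_buf->b_data, data_buf, len);
            complete_bio(bio);
    }

    /* Write path: copy out, encrypt, then rewire b_data to the worker's
     * buffer so no copy back is needed -- but the worker must now wait
     * for the write to finish before it may reuse data_buf. */
    static void
    write_encrypt_job(struct bio *bio, char *data_buf)
    {
            size_t len = bio->bio_buf->b_bcount;

            memcpy(data_buf, bio->bio_buf->b_data, len);
            encrypt_buf(data_buf, len);
            bio->bio_buf->b_data = data_buf;
            issue_write(bio);
            /* ... block here until the bio_done callback fires ... */
    }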
  
* The "new" crypto API, as found in crypto/crypto_cipher.{h,c}.  It's only
  used by dm_target_crypt and basically consists of struct crypto_cipher
  definitions for each crypto algorithm/implementation (e.g. AES-CBC or
  AES-XTS, with and without AESNI).  Each crypto_cipher comes with a "probe"
  function to check whether the cipher is "available", mainly to
  support hardware AESNI and fall back to software implementations.

  To find a "struct crypto_cipher", simply call e.g.

  struct crypto_cipher *cipher =
      crypto_cipher_find("aes", "cbc", klen_in_bits);

  and then you can use this struct to set the key and call the encrypt /
  decrypt functions.  The crypto_cipher_context contains the keys
  required for encryption/decryption, and the crypto_cipher_iv the
  initialization vector.  They are both unions that contain the
  "context"s or "iv"s of all supported algorithms, so adding AES-512,
  if that existed, would grow the structs.  Of course, you could
  also allocate them dynamically.  The old crypto(9) framework
  took the same approach, but only for the IV; the context had to be
  dynamically allocated.
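
  For illustration, usage could then look roughly like this.  Only
  crypto_cipher_find() and the type names are taken from the description
  above; the setkey/encrypt calls and their signatures are my guesses,
  just to show the intended flow, and the real names in the patch may
  differ:

    struct crypto_cipher_context ctx;   /* union: key material of any cipher */
    struct crypto_cipher_iv iv;         /* union: IV of any cipher */
    struct crypto_cipher *cipher;

    cipher = crypto_cipher_find("aes", "cbc", 256);
    if (cipher == NULL)
            return (EINVAL);            /* probe failed, cipher not available */

    /* Hypothetical calls -- names are guesses: */
    crypto_cipher_setkey(cipher, &ctx, key, keylen);
    crypto_cipher_encrypt(cipher, &ctx, &iv, data_buf, data_len);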

In case you test this, please don't do it with very important data.  While
I have tested it quite a bit, it's always better to have backups :)

Feel free to torture test this, using either cryptsetup luksFormat
--cipher aes-cbc-essiv:sha256 or --cipher aes-xts-essiv:sha256, and with
sysctl hw.aesni_disable set to 0 or 1.

Best regards,

  Michael

-- 
Michael Neumann
NTECS Consulting
www.ntecs.de

