over a year ago.
buffer, because hal_hashsig_sign assembles the signature incrementally,
and will overwrite the digest before it's ready to sign it.
just call memcpy here.
(Although it turns out to be more efficient to use an inline version of
memcpy than the library function.)
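For illustration, an inlined byte copy of the sort mentioned here could be as simple as the sketch below; copy_bytes is a made-up name, not libhal code, and the win over the library call presumably comes from skipping call overhead on small, fixed-size copies.

    /* Hypothetical inline copy helper, shown only to illustrate the idea. */
    #include <stddef.h>
    #include <stdint.h>

    static inline void copy_bytes(uint8_t *dst, const uint8_t *src, size_t n)
    {
      while (n-- > 0)
        *dst++ = *src++;
    }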
- Add support for null pointer arguments in RPCs for get_digest_algorithm_id
and get_public_key. This is years overdue, and would have obviated the need
for get_public_key_len as a separate RPC.
- Refactor pkey_local_get_public_key_len in terms of pkey_local_get_public_key.
- Add more parameter sanity checks to rpc_api.c.
- Add a len_max parameter to hal_xdr_decode_variable_opaque, rather than
having len be an in/out parameter. This brings xdr slightly more in line
with the rest of the code base (again after literal years), and slightly
simplifies several calls in rpc_client.c.
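As a rough sketch of the len_max idea (stand-in names throughout, XDR's 4-byte padding ignored for brevity; the real hal_xdr_decode_variable_opaque may differ in detail), the decoder takes the caller's buffer size explicitly and returns the decoded length as a pure output:

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    static int decode_opaque(const uint8_t **inbuf, const uint8_t * const limit,
                             uint8_t *value, size_t *len, const size_t len_max)
    {
      if (inbuf == NULL || *inbuf == NULL || limit == NULL || value == NULL || len == NULL)
        return -1;                              /* bad arguments */

      if (limit - *inbuf < 4)
        return -1;                              /* not enough input for the length word */

      const uint32_t n = ((uint32_t)(*inbuf)[0] << 24) | ((uint32_t)(*inbuf)[1] << 16) |
                         ((uint32_t)(*inbuf)[2] <<  8) |  (uint32_t)(*inbuf)[3];

      if (n > len_max || (size_t)(limit - *inbuf - 4) < n)
        return -1;                              /* caller's buffer or input too small */

      memcpy(value, *inbuf + 4, n);
      *len = n;                                 /* pure output, no longer in/out */
      *inbuf += 4 + n;
      return 0;
    }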
- Move hashsig.h contents into hal.h.
- Uppercase lmots and lms algorithm types, because we have a convention
that enum values are uppercase.
- Change all I to hal_uuid_t, because that's how we're using them, and it
seems silly to have two different 16-byte array types.
- Change all "memcpy(&this, &that, sizeof(this))" to "this = that",
because it's more succinct, more type-safe, and harder to get wrong (see
the sketch after this list).
- Slightly tighten up lmots_generate, lmots_sign, and
lmots_public_key_candidate.
- Remove verbatim draft text, now that I'm pretty sure I implemented it
correctly.
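To make the memcpy-versus-assignment item above concrete, a toy before/after (the type and function names are hypothetical):

    #include <string.h>

    typedef struct { unsigned char bytes[16]; } toy_uuid_t;

    void old_way(toy_uuid_t *dst, const toy_uuid_t *src)
    {
      memcpy(dst, src, sizeof(*dst));   /* works, but happily accepts the wrong type or size */
    }

    void new_way(toy_uuid_t *dst, const toy_uuid_t *src)
    {
      *dst = *src;                      /* the compiler checks the types; nothing to get wrong */
    }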
method, and it's missing one or more lmots keys, those keys can be
regenerated.
OTOH, if an lms key is damaged or missing, it's still a fatal error,
because that's the only place we record the current q value.
This forces each hal_mkmif_* function to alloc/free the core, which is a
minuscule performance hit, but the only sane thing to do in a tasking
environment. Otherwise (with a stored/shared core pointer), one task will
initiate a read, yield in hal_io_wait, another task will initiate a read,
and both will be unhappy.
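The resulting per-call pattern looks roughly like the sketch below; every name in it is a stand-in rather than the actual hal_core_alloc()/hal_core_free()/hal_mkmif_* API.

    /* Stand-in types and stubs, just to make the shape of the change visible. */
    typedef struct { int dummy; } core_t;

    static core_t the_core;
    static int  core_alloc(const char *name, core_t **core) { (void) name; *core = &the_core; return 0; }
    static void core_free(core_t *core)                     { (void) core; }
    static int  mkmif_read(core_t *core, unsigned addr, unsigned *data) { (void) core; (void) addr; *data = 0; return 0; }

    /* Allocate the core inside each call, use it, free it -- instead of caching
     * a shared core pointer that another task could grab while this task is
     * yielded inside hal_io_wait(). */
    int mkmif_style_read(unsigned addr, unsigned *data)
    {
      core_t *core = NULL;
      int err = core_alloc("mkmif", &core);
      if (err != 0)
        return err;
      err = mkmif_read(core, addr, data);
      core_free(core);
      return err;
    }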
rebuilding the hash tree.
supports it.
In particular, the version of newlib distributed by Ubuntu is not
configured with --enable-newlib-io-c99-formats, and now includes guard
code that treats %hhx as an error, rather than silently interpreting it as
%hx. The net effect was to break hal_uuid_parse.
(Ironically, vfprintf.c does not (yet) include this guard code, but it's
probably only a matter of time, and it seemed expedient to change
hal_uuid_format at the same time.)
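One way to sidestep %hhx entirely is to parse the hex pairs by hand, roughly as below; hexval() and uuid_parse() are illustrative names, not the actual hal_uuid_parse() implementation.

    #include <stddef.h>
    #include <stdint.h>

    static int hexval(const char c)
    {
      if (c >= '0' && c <= '9') return c - '0';
      if (c >= 'a' && c <= 'f') return c - 'a' + 10;
      if (c >= 'A' && c <= 'F') return c - 'A' + 10;
      return -1;
    }

    /* Parse "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" into 16 raw bytes. */
    int uuid_parse(uint8_t uuid[16], const char *s)
    {
      static const size_t dash_after[4] = {4, 6, 8, 10};  /* byte counts followed by '-' */
      size_t dash = 0;

      for (size_t i = 0; i < 16; i++) {
        const int hi = hexval(s[0]);
        const int lo = hi < 0 ? -1 : hexval(s[1]);
        if (hi < 0 || lo < 0)
          return -1;
        uuid[i] = (uint8_t) ((hi << 4) | lo);
        s += 2;
        if (dash < 4 && i + 1 == dash_after[dash]) {
          if (*s++ != '-')                      /* expect the canonical dashes */
            return -1;
          dash++;
        }
      }
      return *s == '\0' ? 0 : -1;               /* and no trailing junk */
    }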
Found when upgrading Ubuntu to 18.10.
It's an edge case, but it's supported, and it's used in a few places.
This fixes CT-01-006 MCU: Value cast allows a bypass of the size checks (Critical)
This fixes CT-01-005: OOB writes through dynamic stack allocations (Critical)
Move lm[ot]s_algorithm_t definitions to hal.h, prefix all public symbols with 'hal_'.
Remove some unused functions.
Wrap hal_pkey_slot_t initializers in an extra set of curly braces.
Remove an unused-argument kludge (x=x;) because gcc doesn't care, and clang complains.
Make timersub a proper macro (see the sketch after this list).
Add some casts to printf arguments, because !@#$ printf formats.
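The timersub item above refers to the usual *BSD-style definition, something along these lines (the exact version in the test code may differ, and the guard lets a system-provided macro win):

    #include <sys/time.h>

    #ifndef timersub
    #define timersub(a, b, res)                          \
      do {                                               \
        (res)->tv_sec  = (a)->tv_sec  - (b)->tv_sec;     \
        (res)->tv_usec = (a)->tv_usec - (b)->tv_usec;    \
        if ((res)->tv_usec < 0) {                        \
          (res)->tv_sec  -= 1;                           \
          (res)->tv_usec += 1000000;                     \
        }                                                \
      } while (0)
    #endif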
timersub() is a macro on *BSD, including MacOS, so redefinition as a
function in hashsig test code was breaking the whole build.
Clang has other comments on the hashsig code, leaving those for Paul.
consistency.
the minimum size necessary, so hal_asn1_decode_lms_algorithm and
hal_asn1_decode_lmots_algorithm were writing 4 bytes of data into 1-byte
variables. Hilarity ensued. Yes, I already knew that conflating enum with
uint32_t was a bad idea, I was just being lazy.
For that matter, sizeof(size_t) isn't guaranteed either, although
arm-none-eabi-gcc treats it as 32 bits on this 32-bit target (for now), so
exercise proper data hygiene in hal_asn1_decode_size_t as well.
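The fix amounts to decoding into a fixed-width local and converting afterwards, roughly as below; the enum and function names here are toys, not the real hal_asn1_decode_lms_algorithm().

    #include <stddef.h>
    #include <stdint.h>

    typedef enum { TOY_ALG_NONE = 0, TOY_ALG_SHA256_N32_H5 = 5 } toy_lms_algorithm_t;

    int decode_lms_algorithm(toy_lms_algorithm_t *alg, const uint8_t *bytes, const size_t len)
    {
      if (alg == NULL || bytes == NULL || len < 4)
        return -1;

      /* Assemble the value in a type that is guaranteed to be exactly 4 bytes... */
      const uint32_t value = ((uint32_t) bytes[0] << 24) | ((uint32_t) bytes[1] << 16) |
                             ((uint32_t) bytes[2] <<  8) |  (uint32_t) bytes[3];

      /* ...then convert, instead of writing 4 bytes through an enum pointer
       * whose target the compiler may have shrunk to 1 byte. */
      *alg = (toy_lms_algorithm_t) value;
      return 0;
    }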
rebuilding the tree.
each object module.
blobs are really inscrutable.
Profiling reports significant time spent in the hal_io_fmc.c debugging
code even when runtime debugging is off. This is odd, and may be a
profiling artifact, but we don't use that debugging code often, so if
it costs anything at all we might as well disable it when not needed.
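A simple way to do that is a compile-time guard along these lines; ENABLE_FMC_DEBUG_IO and debug_io() are hypothetical names, not the actual switch in hal_io_fmc.c.

    #ifndef ENABLE_FMC_DEBUG_IO
    #define ENABLE_FMC_DEBUG_IO 0
    #endif

    #if ENABLE_FMC_DEBUG_IO
    #include <stdio.h>
    static inline void debug_io(const char *dir, unsigned addr, unsigned data)
    {
      printf("%s %08x: %08x\n", dir, addr, data);
    }
    #else
    /* Compiles away entirely, so it cannot show up in profiles when disabled. */
    #define debug_io(dir, addr, data) do { } while (0)
    #endif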
Various fixes extracted from the abandoned(-for-now?) reuse-cores
branch, principally:
* Change hal_core_alloc*() to support core reuse and to pick the
least-recently-used core of a particular type otherwise (see the sketch
after this list);
* Replace assert() and printf() calls with hal_assert() and hal_log(),
respectively. assert() is particularly useless on the HSM, since it
sends its error message into hyperspace then hangs the HSM.
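The least-recently-used selection in the first bullet could be pictured as a scan like the one sketched below; the structure and its fields are hypothetical, not libhal's hal_core_t.

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    typedef struct toy_core {
      const char      *name;        /* core type, e.g. "aes" */
      int              busy;        /* currently allocated to someone? */
      uint32_t         last_used;   /* monotonically increasing use counter */
      struct toy_core *next;
    } toy_core_t;

    toy_core_t *pick_lru_core(toy_core_t *head, const char *name)
    {
      toy_core_t *best = NULL;

      for (toy_core_t *core = head; core != NULL; core = core->next)
        if (!core->busy && strcmp(core->name, name) == 0 &&
            (best == NULL || core->last_used < best->last_used))
          best = core;              /* free core of the right type, least recently used */

      return best;                  /* NULL if every matching core is busy */
    }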
Aside from not really needing to use every crayon in the box, using a
simpler control structure makes exceptions behave more as one expects.
Generating new RSA blinding factors turns out to be relatively
expensive, but we can amortize that cost by maintaining a small cache
and simply mutating old values after each use with a cheaper
operation. Squaring works, pretty much by definition.
Blinding factors are only sort-of-sensitive: we don't want them to
leak out of the HSM, but they're only based on the public modulus, not
the private key components, and we're only using them to foil side
channel attacks, so the risk involved in caching them seems small.
For the moment, the cache is very small, since we only care about this
for bulk signature operations. Tune this later if it becomes an issue.
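Concretely, a cache entry can hold the blinding/unblinding pair for a given modulus and be refreshed by squaring both halves mod n after each use: the pair starts out as (r^e mod n, r^-1 mod n), and squaring yields the equally valid pair for r^2. The sketch below uses libtfm's fp_sqrmod() for concreteness, but the struct and function names are invented and libhal's actual RSA code may organize this differently.

    #include <tfm.h>

    typedef struct {
      fp_int n;     /* public modulus this pair belongs to */
      fp_int bf;    /* blinding factor, applied before the private-key operation */
      fp_int ubf;   /* unblinding factor, applied to the result afterwards */
    } blinding_cache_entry_t;

    /* Cheap mutation after each use: square both halves modulo n, which keeps
     * them a matched pair without paying for a fresh exponentiation. */
    int mutate_blinding_pair(blinding_cache_entry_t *e)
    {
      if (fp_sqrmod(&e->bf,  &e->n, &e->bf)  != FP_OKAY ||
          fp_sqrmod(&e->ubf, &e->n, &e->ubf) != FP_OKAY)
        return -1;
      return 0;
    }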
hal_ks_fetch() was written as lock-at-the-top, unlock-at-the-bottom to
keep it as simple as possible, but this turns out to have bad
performance implications when unwrapping the key is slow. So now we
grab the wrapped key, release the lock, then unwrap, which should be
safe enough given that hal_ks_fetch() is read-only. This lets us make
better use of multiple AES cores to unwrap in parallel when we have
multiple active clients.
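In outline, the new shape is: lock, copy out the wrapped blob, unlock, and only then do the slow unwrap. Everything named in the sketch below is a stand-in, not hal_ks_fetch()'s actual internals.

    #include <stddef.h>
    #include <stdint.h>

    #define WRAPPED_MAX 2048   /* hypothetical upper bound on a wrapped key blob */

    extern void ks_lock(void);                                                    /* stand-in */
    extern void ks_unlock(void);                                                  /* stand-in */
    extern int  ks_find_wrapped(const void *name, uint8_t *der, size_t *der_len); /* stand-in */
    extern int  unwrap_key(const uint8_t *der, size_t der_len, void *key);        /* stand-in, slow */

    int fetch_key(const void *name, void *key)
    {
      uint8_t der[WRAPPED_MAX];
      size_t der_len = sizeof(der);
      int err;

      /* Hold the keystore lock only long enough to copy out the wrapped blob... */
      ks_lock();
      err = ks_find_wrapped(name, der, &der_len);
      ks_unlock();
      if (err != 0)
        return err;

      /* ...then unwrap with no lock held, so multiple clients can be unwrapping
       * on different AES cores at the same time. */
      return unwrap_key(der, der_len, key);
    }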
generation and deletion.