|
rebuilding the hash tree.
|
supports it.
In particular, the version of newlib distributed by Ubuntu is not
configured with --enable-newlib-io-c99-formats, and now includes guard
code that treats %hhx as an error, rather than silently interpreting it as
%hx. The net effect was to break hal_uuid_parse.
(Ironically, vfprintf.c does not (yet) include this guard code, but it's
probably only a matter of time, and it seemed expedient to change
hal_uuid_format at the same time.)
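For reference, a minimal sketch of the kind of workaround this implies, assuming a well-formed 36-character UUID string: scan two hex digits at a time with %2x into an unsigned int and narrow explicitly, instead of using %hhx. The parse_uuid() helper below is a hypothetical stand-in, not the actual hal_uuid_parse() code; the formatting side is the same story, with %02x in place of %02hhx.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical helper: parse a 36-character RFC 4122 UUID string
 * ("xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx") into 16 bytes without
 * relying on the %hhx conversion, which newlib may reject unless
 * built with --enable-newlib-io-c99-formats.
 */
static int parse_uuid(const char *s, uint8_t *bytes /* 16 bytes */)
{
  int i = 0;

  while (i < 16 && *s != '\0') {
    unsigned int byte;                  /* %2x wants an unsigned int, not a uint8_t */

    if (*s == '-') {                    /* skip the dashes between groups */
      s++;
      continue;
    }
    if (sscanf(s, "%2x", &byte) != 1)
      return -1;
    bytes[i++] = (uint8_t) byte;        /* narrow explicitly */
    s += 2;
  }

  return (i == 16) ? 0 : -1;
}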
|
Found when upgrading Ubuntu to 18.10.
|
It's an edge case, but it's supported, and it's used in a few places.
|
This fixes CT-01-006 MCU: Value cast allows a bypass of the size checks (Critical)
|
This fixes CT-01-005: OOB writes through dynamic stack allocations (Critical)
|
Move lm[ot]s_algorithm_t definitions to hal.h, prefix all public symbols with 'hal_'.
Remove some unused functions.
Wrap hal_pkey_slot_t initializers in an extra set of curly braces.
Remove an unused-argument kludge (x=x;) because gcc doesn't care, and clang complains.
Make timersub a proper macro.
Add some casts to printf arguments, because !@#$ printf formats.
|
timersub() is a macro on *BSD, including MacOS, so redefinition as a
function in hashsig test code was breaking the whole build.
Clang has other comments on the hashsig code, leaving those for Paul.
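For context, the usual guarded definition looks something like the sketch below. This follows the conventional BSD/glibc semantics for timersub() (compute the difference of two struct timeval values), and is not necessarily the exact macro the test code ended up with.

#include <sys/time.h>           /* struct timeval; already defines timersub on *BSD and glibc */

/* Only define our own if the platform doesn't already provide the macro. */
#ifndef timersub
#define timersub(a, b, result)                                  \
  do {                                                          \
    (result)->tv_sec  = (a)->tv_sec  - (b)->tv_sec;             \
    (result)->tv_usec = (a)->tv_usec - (b)->tv_usec;            \
    if ((result)->tv_usec < 0) {                                \
      --(result)->tv_sec;                                       \
      (result)->tv_usec += 1000000;                             \
    }                                                           \
  } while (0)
#endif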
|
consistency.
|
the minimum size necessary, so hal_asn1_decode_lms_algorithm and
hal_asn1_decode_lmots_algorithm were writing 4 bytes of data into 1-byte
variables. Hilarity ensued. Yes, I already knew that conflating enum with
uint32_t was a bad idea; I was just being lazy.
For that matter, sizeof(size_t) isn't guaranteed either, although
arm-none-eabi-gcc treats it as 32 bits on this 32-bit target (for now), so
exercise proper data hygiene in hal_asn1_decode_size_t as well.
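A sketch of the data-hygiene pattern, with a placeholder enum and a simplified signature rather than the real hal_asn1_decode_lms_algorithm(): decode the 4-byte big-endian value into a fixed-width temporary, then assign, so nothing ever writes through a pointer to an enum whose size the compiler may have shrunk.

#include <stdint.h>
#include <stddef.h>

/* Placeholder stand-in for the real type in hal.h; the enumerator is
 * illustrative only.
 */
typedef enum { LMS_ALGORITHM_PLACEHOLDER = 5 } hal_lms_algorithm_t;

static int decode_lms_algorithm(const uint8_t *der, size_t len,
                                hal_lms_algorithm_t *alg)
{
  if (der == NULL || alg == NULL || len < 4)
    return -1;

  /* Decode the big-endian 32-bit value into a temporary of known size... */
  uint32_t value = ((uint32_t) der[0] << 24) |
                   ((uint32_t) der[1] << 16) |
                   ((uint32_t) der[2] <<  8) |
                   ((uint32_t) der[3] <<  0);

  /* ...then let the assignment convert to whatever size the enum really is. */
  *alg = (hal_lms_algorithm_t) value;
  return 0;
}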
|
rebuilding the tree.
|
each object module.
|
blobs are really inscrutable.
|
Profiling reports significant time spent in the hal_io_fmc.c debugging
code even when runtime debugging is off. This is odd, and may be a
profiling artifact, but we don't use that debugging code often, so if
it costs anything at all we might as well disable it when not needed.
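Something along these lines, where HAL_IO_FMC_DEBUG and fmc_debug() are illustrative names rather than the actual hal_io_fmc.c code, and the HAL_LOG_DEBUG level passed to hal_log() is assumed: with the switch off, the debug path compiles away entirely instead of testing a runtime flag on every bus access.

/* Default the compile-time switch to off unless the build says otherwise. */
#ifndef HAL_IO_FMC_DEBUG
#define HAL_IO_FMC_DEBUG 0
#endif

#if HAL_IO_FMC_DEBUG
#define fmc_debug(...)  hal_log(HAL_LOG_DEBUG, __VA_ARGS__)   /* hal_log() is declared in hal.h */
#else
#define fmc_debug(...)  do { } while (0)                      /* no code, no cost */
#endif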
|
Various fixes extracted from the abandoned(-for-now?) reuse-cores
branch, principally:
* Change hal_core_alloc*() to support core reuse and to pick the
least-recently-used core of a particular type otherwise (see the
sketch after this list);
* Replace assert() and printf() calls with hal_assert() and hal_log(),
respectively. assert() is particularly useless on the HSM, since it
sends its error message into hyperspace then hangs the HSM.
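A rough sketch of the least-recently-used selection, using made-up structures and field names rather than libhal's hal_core_t and its locking; it just shows the scan for the oldest free core of the requested type.

#include <stddef.h>
#include <stdint.h>
#include <string.h>

struct core {
  char     name[8];         /* core type, e.g. "aes" or "sha256" */
  int      busy;            /* currently allocated to a client?  */
  uint32_t last_used;       /* tick of most recent allocation    */
};

static struct core *pick_lru_core(struct core *cores, size_t n, const char *type)
{
  struct core *best = NULL;

  for (size_t i = 0; i < n; i++) {
    if (cores[i].busy || strcmp(cores[i].name, type) != 0)
      continue;                                   /* wrong type or in use */
    if (best == NULL || cores[i].last_used < best->last_used)
      best = &cores[i];                           /* older = better candidate */
  }

  return best;                                    /* NULL if nothing free */
}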
|
Aside from not really needing to use every crayon in the box, using a
simpler control structure makes exceptions behave more as one expects.
|
Generating new RSA blinding factors turns out to be relatively
expensive, but we can amortize that cost by maintaining a small cache
and simply mutating old values after each use with a cheaper
operation. Squaring works, pretty much by definition.
Blinding factors are only sort-of-sensitive: we don't want them to
leak out of the HSM, but they're only based on the public modulus, not
the private key components, and we're only using them to foil side
channel attacks, so the risk involved in caching them seems small.
For the moment, the cache is very small, since we only care about this
for bulk signature operations. Tune this later if it becomes an issue.
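The refresh step, sketched with a hypothetical bignum type and modsqr() helper rather than libhal's actual bignum code: a cached pair is (r^e mod n, r^-1 mod n), and squaring both modulo n yields the corresponding pair for r^2, so each signature sees fresh blinding without a new modular exponentiation.

/* Hypothetical stand-ins: an opaque bignum type and a modular-squaring
 * helper (result = a^2 mod n, result may alias a).
 */
typedef struct bignum bignum_t;

extern void modsqr(const bignum_t *a, const bignum_t *n, bignum_t *result);

struct blinding_cache_entry {
  const bignum_t *n;          /* public modulus this pair belongs to    */
  bignum_t *bf;               /* r^e mod n: multiplied into the message */
  bignum_t *ubf;              /* r^-1 mod n: multiplied into the result */
};

/* Mutate a cached pair after use so the same values never blind two signatures. */
static void refresh_blinding_pair(struct blinding_cache_entry *e)
{
  modsqr(e->bf,  e->n, e->bf);    /* (r^e)^2  == (r^2)^e  mod n */
  modsqr(e->ubf, e->n, e->ubf);   /* (r^-1)^2 == (r^2)^-1 mod n */
}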
|
hal_ks_fetch() was written as lock-at-the-top, unlock-at-the-bottom to
keep it as simple as possible, but this turns out to have bad
performance implications when unwrapping the key is slow. So now we
grab the wrapped key, release the lock, then unwrap, which should be
safe enough given that hal_ks_fetch() is read-only. This lets us make
better use of multiple AES cores to unwrap in parallel when we have
multiple active clients.
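The shape of the change, with hypothetical names and types in place of the real hal_ks internals: copy the wrapped blob while holding the lock, drop the lock, then do the slow unwrap.

#include <stddef.h>
#include <stdint.h>

/* All names below are hypothetical placeholders, not the real hal_ks API. */
typedef uint32_t key_id_t;
#define WRAPPED_KEY_MAX 4096

extern void lock_keystore(void);
extern void unlock_keystore(void);
extern int  keystore_read_wrapped(key_id_t id, uint8_t *buf, size_t *len, size_t max);
extern int  keywrap_unwrap(const uint8_t *wrapped, size_t wrapped_len,
                           uint8_t *der, size_t *der_len, size_t der_max);

static int fetch_and_unwrap(key_id_t id, uint8_t *der, size_t *der_len, size_t der_max)
{
  uint8_t wrapped[WRAPPED_KEY_MAX];
  size_t  wrapped_len;
  int     err;

  lock_keystore();
  err = keystore_read_wrapped(id, wrapped, &wrapped_len, sizeof(wrapped));  /* read-only, cheap */
  unlock_keystore();

  if (err != 0)
    return err;

  /* The expensive part runs without the lock held, so multiple clients
   * can be unwrapping on different AES cores at the same time.
   */
  return keywrap_unwrap(wrapped, wrapped_len, der, der_len, der_max);
}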
|
generation and deletion.
|
Add ability to export public key to XDR for interop testing
|
premature optimization.
|
Copy ContextManagedUnpacker from latest version of libhal.py so that
this script won't depend on the current development code.
|
At the moment this only handles RSA keys, and can only handle one size
of key at a time. More bells and whistles will follow eventually,
now that the basic asynchronous API to our RPC protocol works.
|
Failing to clear the temporary buffer used to transfer bits from the
TRNG into a bignum was a real leak of something very close to keying
material, albeit only onto the local stack where it was almost certain
to have been overwritten by subsequent operations (generation of other
key components, wrap and PKCS #8 encoding) before pkey_generate_rsa()
ever returned to its caller. Still, bad coder, no biscuit.
Failing to clear the remainders array was probably harmless, but
doctrine says clear it anyway.
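Illustration of the doctrine, not the real pkey_generate_rsa() internals: every stack buffer that has held TRNG output or other key-adjacent bits gets wiped before the function returns. Buffer names and sizes are made up for the sketch.

#include <stdint.h>
#include <string.h>

static int fill_bignum_from_trng(void)
{
  uint8_t  trng_buf[256];      /* raw bits on their way into a bignum   */
  uint32_t remainders[16];     /* scratch used by the small-prime sieve */
  int      err = 0;

  /* ... generate, convert, test ... */

  /* Clear anything that ever held secret or near-secret data before
   * returning.  (A plain memset can in principle be optimized away;
   * use whatever secure-wipe idiom the project has settled on.)
   */
  memset(trng_buf,   0, sizeof(trng_buf));
  memset(remainders, 0, sizeof(remainders));

  return err;
}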
|