libdecnumber: Fix decNumberSetBCD
Fix a simple bug in the decNumberSetBCD() function. This function encodes
a decNumber from "n" BCD digits. The original code erroneously computed the
target coefficient pointer from the digits field of the dn argument, which is
the output decNumber and may therefore contain garbage. The input "n" value
should be used instead.

Signed-off-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
parent 79af357225
commit 0a322e7e7c
@@ -3541,7 +3541,7 @@ uByte * decNumberGetBCD(const decNumber *dn, uint8_t *bcd) {
   /* and bcd[0] zero.                                                  */
   /* ------------------------------------------------------------------ */
   decNumber * decNumberSetBCD(decNumber *dn, const uByte *bcd, uInt n) {
-    Unit *up=dn->lsu+D2U(dn->digits)-1;      /* -> msu [target pointer] */
+    Unit *up = dn->lsu + D2U(n) - 1;         /* -> msu [target pointer] */
     const uByte *ub=bcd;                     /* -> source msd */

   #if DECDPUN==1                             /* trivial simple copy */
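For reference, a minimal usage sketch of the fixed function, assuming a build
against libdecnumber; the DECNUMDIGITS value, the BCD digit array, and the
main() wrapper are illustrative and not part of this commit:

  /* Minimal sketch; assumes libdecnumber headers are on the include path
   * and DECNUMDIGITS is raised so the coefficient fits.                 */
  #define DECNUMDIGITS 16                    /* must be >= the "n" passed below */
  #include <stdio.h>
  #include <stdint.h>
  #include "decNumber.h"

  int main(void) {
    decNumber num;
    char str[DECNUMDIGITS + 14];             /* decNumberToString needs digits+14 */
    const uint8_t bcd[] = {1, 2, 3, 4, 5};   /* msd-first BCD digits of 12345 */

    decNumberZero(&num);                     /* sign and exponent cleared */
    decNumberSetBCD(&num, bcd, 5);           /* coefficient from 5 BCD digits */
    decNumberToString(&num, str);
    printf("%s\n", str);                     /* prints 12345 */
    return 0;
    }

With the pre-fix code, the target pointer was derived from dn->digits, which
the caller has not necessarily set to n (here decNumberZero leaves it at 1),
so the most-significant-unit position could be computed from stale state.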