The buffer size check should be added to DecodeBase16() nonetheless.
And while we're at it, I've got a question.
From eMule 0.42e sources:
bool DecodeBase16(const char *base16Buffer, unsigned int base16BufLen, byte *buffer, unsigned int bufflen) {
    unsigned int uDecodeLengthBase16 = DecodeLengthBase16(base16BufLen);
    if (uDecodeLengthBase16 > bufflen)
        return false;
    memset(buffer, 0, uDecodeLengthBase16);
and
unsigned int DecodeLengthBase16(unsigned int base16Length) {
    return base16Length / 2U;
}
If I'm not completely mistaken, this code may still cause a buffer overflow if base16BufLen is an odd number.
If, for example, base16BufLen==3 and bufflen==1, then uDecodeLengthBase16 = 3/2 == 1 (integer division truncates), so the size check doesn't trigger. The decode loop will nevertheless write to buffer[0] and buffer[1], because 2 bytes are needed to decode 3 Base16 characters, while bufflen is only 1 byte.
This is no problem if base16BufLen is always an even number, but I don't know whether that's a valid constraint.
A fix for this would be to change DecodeLengthBase16 to
unsigned int DecodeLengthBase16(unsigned int base16Length) {
    return (unsigned int)ceil((float)base16Length / 2.0f);
}
Any comments on this?
Greets, fz.
EDIT:
To avoid any misunderstanding: this does not mean that eMule is still vulnerable to any exploit. An explicit size check precedes every call to DecodeBase16() (for example if (base16BufLen==32 || DecodeBase16(...))), so there's no way to exploit this.