I had the same memory problem with amuled.
My system is an Ubuntu 9.04 Server with dev tools, all updated, and amuled 2.2.5 compiled on this system; PIII, 512 MB RAM. amuled is started with Kad and ED2K enabled, with the files "known.met", "known2_64.met", "key_index.dat", "src_index.dat" and "load_index.dat" deleted, and a good "nodes.dat" in place.
Checking the virtual memory size (vsize) of the process, I detected a steady rise of vsize, at a rate of 9 MB per hour a month ago and 3 MB/h in the last days. The vsize grows until it reaches a stable value between 100 MB and 200 MB. But every 1 or 2 days there is a brisk rise of about 100 MB, until the system memory is exhausted. This jump coincides with the line "Escritos 192 contactos Kad" ("Wrote 192 Kad contacts") in the logfile (sorry, my amuled speaks Spanish); it is the moment when the files key_index.dat, src_index.dat, load_index.dat and nodes.dat are written again to the .aMule dir.
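For reference, I sample the vsize with a small standalone program that reads /proc/<pid>/statm, whose first field is the total program size in pages (the same value ps reports as VSZ). A minimal sketch:

    // vsize.cpp - print the vsize of a process in MB, read from /proc.
    // Usage: vsize <pid>
    #include <fstream>
    #include <iostream>
    #include <string>
    #include <unistd.h>

    int main(int argc, char* argv[])
    {
        if (argc != 2) {
            std::cerr << "usage: vsize <pid>" << std::endl;
            return 1;
        }
        std::string path = std::string("/proc/") + argv[1] + "/statm";
        std::ifstream statm(path.c_str());
        unsigned long pages = 0;
        statm >> pages;  // first field: total program size, in pages
        long pageSize = sysconf(_SC_PAGESIZE);
        std::cout << (pages * pageSize) / (1024 * 1024) << " MB" << std::endl;
        return 0;
    }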
After days of work on the problem, which seemed to be in the Kad part, I saw 3 different issues:
(1) Memory leaks, (2) Big memory fragmentation and (3) Steady memory rise.
1- MEMORY LEAKS
I looked for memory leaks in the Kad code. The fact that memory reaches a stable value after a while suggests that the leak, if any, is in the brisk rises, when that logfile line is written. Looking at the source code, at that point Kad is restarted with StopKad (the Kad objects are deleted) and StartKad (the Kad objects are created). Searching for some object not released in StopKad, I only found that in the CIndexed class the maps m_Load_map, m_Sources_map and m_Keyword_map are not cleared inside ~CIndexed; surprisingly, m_Notes_map is cleared. I don't think this is a true memory leak (someone should check), because these maps should be cleared and deleted anyway when the CIndexed object is destroyed. As a first try, I added the clearing of those maps to the code; even if it is not a memory leak, this could improve the memory fragmentation of issue (2). Testing the new code, the brisk rises happened less often, but the net was different at the time, so there is no final conclusion.
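For illustration, the change I tried looks roughly like the sketch below. This is a simplified model, not the real aMule code: I model the index maps as std::map holding pointers to a placeholder CEntry type, while the real Indexed.cpp uses aMule's own map and entry classes. The idea is simply to do for the three maps what ~CIndexed already does for m_Notes_map:

    // Simplified sketch, placeholder types; not the real Indexed.cpp.
    #include <map>

    struct CEntry { /* published source/keyword data (placeholder) */ };

    typedef std::map<unsigned long, CEntry*> EntryMap;

    class CIndexed {
    public:
        ~CIndexed();
    private:
        EntryMap m_Load_map;
        EntryMap m_Sources_map;
        EntryMap m_Keyword_map;

        static void DeleteAll(EntryMap& map)
        {
            for (EntryMap::iterator it = map.begin(); it != map.end(); ++it) {
                delete it->second;
            }
            map.clear();
        }
    };

    CIndexed::~CIndexed()
    {
        // Release the indexed entries and empty the maps, mirroring what
        // the original destructor already does for m_Notes_map.
        DeleteAll(m_Load_map);
        DeleteAll(m_Sources_map);
        DeleteAll(m_Keyword_map);
    }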
2- BIG MEMORY FRAGMENTATION
The brisk amount of memory is related to the size of the file key_index.dat (in my tests about 7 times its size); this file is the biggest one stored. It looks like the memory released in StopKad is not reused by StartKad. Putting a trace based on mallinfo in the code, I saw that the freed memory does show up as free memory, but the newly allocated memory is not accounted for. This can be a corrupted memory heap (someone should check again), or more probably, due to the big memory fragmentation, the process is working in a new arena that mallinfo does not account for.
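The trace is nothing more than calls to glibc's mallinfo() (from <malloc.h>) placed before and after StopKad/StartKad; the helper below is my own wrapper (TraceMalloc is my name, not an aMule function):

    // malloc statistics trace, using glibc's mallinfo().
    #include <malloc.h>
    #include <stdio.h>

    static void TraceMalloc(const char* where)
    {
        struct mallinfo mi = mallinfo();
        // arena:    bytes in the main heap (sbrk)
        // hblkhd:   bytes in mmap'ed blocks
        // uordblks: bytes in use
        // fordblks: free bytes kept inside the heap (not given back to the OS)
        printf("%s: arena=%d hblkhd=%d uordblks=%d fordblks=%d\n",
               where, mi.arena, mi.hblkhd, mi.uordblks, mi.fordblks);
    }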
The conclusion is that it is not normal to free and take back this much memory (>100 MB) in such a short time on a machine with few megs.
3- STEADY MEMORY RISE
At this point the conclusion is that the main part of the memory used by amuled comes from the contents of the file key_index.dat, expanded in memory seven times or more; it can be the toll of using C++ containers if the data is too scattered.
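As a rough illustration of that toll (a toy test of my own, not aMule code): every entry stored in a node-based C++ container pays for tree-node pointers and allocator headers on top of its payload, so many small scattered records easily occupy several times their on-disk size. For example:

    // Toy demonstration of container/allocator overhead, not aMule code.
    // Stores 100000 small key/value records and compares heap growth
    // (mallinfo) against the raw payload size.
    #include <malloc.h>
    #include <map>
    #include <stdio.h>
    #include <string>
    #include <utility>

    int main()
    {
        std::multimap<std::string, std::string> index;
        size_t payloadBytes = 0;

        struct mallinfo before = mallinfo();
        for (int i = 0; i < 100000; ++i) {
            char key[32], value[32];
            payloadBytes += snprintf(key, sizeof(key), "id-%08d", i);
            payloadBytes += snprintf(value, sizeof(value), "name-%08d", i);
            index.insert(std::make_pair(std::string(key), std::string(value)));
        }
        struct mallinfo after = mallinfo();

        printf("payload: %zu bytes, heap growth: %d bytes\n",
               payloadBytes, after.uordblks - before.uordblks);
        return 0;
    }

The heap growth it reports is several times the payload, which gives an idea of where an expansion like the x7 of key_index.dat can come from.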
The contents of key_index.dat are the source indexes that Kad has acquired during the steady memory rise. Looking inside that file, I can see that a lot of the sources (in my tests 66%) come from a contiguous range of 16 IP addresses, and many of the sources are nonsense: for example, the same file is published with more than 500 different IDs.
I don't know if this is an error of new code or a net attack. I won't post these addresses because I don't know what the policy on attacks is in this forum, but I found these addresses inside public ipfilter.dat files.
Putting these IPs in ipfilter.dat unfortunately does nothing, because amuled does not filter IPs when Kad acquires sources; only Kad contacts are filtered (this could be an issue for a next amuled version).
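If someone wants to look at fixing this, a hedged sketch of the idea only; I have not located the exact call site in 2.2.5, so AddPublishedSource, sourceIP and entry below are placeholders of mine. Only the global theApp and CIPFilter::IsFiltered() are taken from aMule, and even their exact signatures should be double-checked:

    // Hypothetical fragment, meant to live in the Kad indexing code:
    // apply the same IP filter to published sources that is already
    // applied to Kad contacts.
    bool AddPublishedSource(uint32 sourceIP /* byte order to verify */, CEntry* entry)
    {
        if (theApp->ipfilter->IsFiltered(sourceIP)) {
            delete entry;   // drop the source instead of indexing it
            return false;
        }
        // ... existing code that stores the entry in CIndexed ...
        return true;
    }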
Finally, I put them in an iptables filter. A contiguous range of 16 addresses is a /28 CIDR block, so a single rule covers them all, for example "iptables -A INPUT -s x.x.x.0/28 -j DROP" (with the real range instead of x.x.x.0). Then, running amuled with the index files removed and only Kad enabled, after 8 hours the vsize is 64 MB and almost stable.
Disconnecting Kad, the size of the created key_index.dat is 1162527 bytes, and the memory freed is 12 MB (per the mallinfo trace; no memory is released to the system).
Evaluating the number of distinct IPs, IDs and Filenames in key_index.dat:

New key_index.dat (size 1162527) with IP filtering:
7214 IPs, 4396 IDs, 5658 Filenames

Last key_index.dat (size 11130892) without IP filtering:
22785 IPs, 48828 IDs, 15436 Filenames
The new one looks OK, but the old one has many more source IDs than Filenames.
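The counts above come from a small offline tool of mine. Below is a sketch of the idea, assuming the (IP, ID, filename) tuples have already been extracted from key_index.dat into a plain text file with one tuple per line (the binary parsing itself is not shown, and filenames are assumed to have no spaces). It also flags files published under suspiciously many IDs, like the >500-ID case of issue (3):

    // count_index.cpp - count distinct IPs, IDs and Filenames, and flag
    // filenames published under many different IDs.
    // Input: a text file with one "ip id filename" line per source tuple.
    #include <fstream>
    #include <iostream>
    #include <map>
    #include <set>
    #include <string>

    int main(int argc, char* argv[])
    {
        if (argc != 2) {
            std::cerr << "usage: count_index <tuples.txt>" << std::endl;
            return 1;
        }
        std::ifstream in(argv[1]);

        std::set<std::string> ips, ids, names;
        std::map<std::string, std::set<std::string> > idsPerName;

        std::string ip, id, name;
        while (in >> ip >> id >> name) {
            ips.insert(ip);
            ids.insert(id);
            names.insert(name);
            idsPerName[name].insert(id);
        }

        std::cout << ips.size() << " IPs, " << ids.size() << " IDs, "
                  << names.size() << " Filenames" << std::endl;

        // Flag filenames published with an abnormal number of distinct IDs.
        for (std::map<std::string, std::set<std::string> >::iterator it =
                 idsPerName.begin(); it != idsPerName.end(); ++it) {
            if (it->second.size() > 500) {
                std::cout << it->first << ": " << it->second.size()
                          << " different IDs" << std::endl;
            }
        }
        return 0;
    }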
I'll keep checking the process for some days.