Depends on how much you share: I'm not an expert, but I'd say that if you are sharing, let's say, 100GB, a 2GB RAM buffer would not be that effective in limiting disk accesses.
I think that even 50-250MB could help a lot.
Let's say you allow 10 uploads at any time. Once an upload is initiated, the program can fetch maybe 5MB of the file and monitor how fast the upload is.
Then it can easily calculate how many MB will be uploaded within, say, the next 10-30 minutes.
That data can be pre-loaded into RAM.
Handled this way, the program only needs one read process at a time, because there is no hurry: a lot of the required data will already be in RAM.
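Just to illustrate the idea (a rough sketch only, not actual eMule code; the 20-minute window, the 5MB initial chunk and all the names here are my own assumptions):

```python
import time

PREFETCH_WINDOW_SECONDS = 20 * 60   # assumed look-ahead of ~20 minutes
INITIAL_CHUNK = 5 * 1024 * 1024     # first 5MB fetched up front

class UploadSlot:
    """Tracks one running upload and pre-loads the data it will need soon."""

    def __init__(self, path):
        self.file = open(path, "rb")
        self.buffer = self.file.read(INITIAL_CHUNK)  # initial 5MB kept in RAM
        self.sent = 0
        self.started = time.monotonic()

    def record_sent(self, nbytes):
        # called whenever a block has actually gone out to the peer
        self.sent += nbytes

    def upload_rate(self):
        elapsed = max(time.monotonic() - self.started, 1e-6)
        return self.sent / elapsed                   # bytes per second

    def prefetch(self):
        """Read the data this slot will probably need within the window."""
        needed = int(self.upload_rate() * PREFETCH_WINDOW_SECONDS)
        if needed > len(self.buffer):
            # one big sequential read per slot, done at leisure,
            # instead of many small reads under time pressure
            self.buffer += self.file.read(needed - len(self.buffer))
```

With 10 slots you would just call prefetch() for each slot in turn, so the disk only ever serves one large sequential read at a time.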
For reading, it wouldn't even require much change in the code, I suppose.
When it comes to writing, I don't know.
I'd suggest a different solution: if you keep a directory with the most-uploaded files on an SSD, you may get several advantages.
Still too expensive just for file sharing. And not required because there might be enough RAM available for buffering.
That's not rocket science.
About fragmentation: I found a definitive solution by putting the temp folder of aMule/eMule on a different partition from the one where the completed file is stored. While the temporary .part file is awfully fragmented, once the file is completed and copied to another disk or partition, it becomes sequential.
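For what it's worth, the mechanism behind that is simple (a generic sketch with made-up paths, not what aMule/eMule actually does internally): when the temp and incoming directories sit on different partitions, the final "move" degenerates into a full copy, and that copy is written out in one sequential pass, which is why the finished file ends up unfragmented.

```python
import shutil

# hypothetical paths, adjust to your own setup
TEMP_PART = "/mnt/temp/001.part"        # fragmented .part file on the temp partition
INCOMING  = "/mnt/incoming/file.avi"    # final location on a different partition

# Across partitions shutil.move() falls back to copy-then-delete, so the
# destination is written from start to end in one go and the filesystem
# can allocate it (mostly) contiguously.
shutil.move(TEMP_PART, INCOMING)
```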
Defragmentation is not my main issue. I did not experience much of a hard-drive slowdown.
I would also like to add that your way of doing it means the files have to be copied from one partition to another when finished.
I don't see a real problem with that, but generally I think it is a waste of resources (time and also a little extra wear). It's a workaround, not an elegant solution.