aMule Forum
English => Feature requests => Topic started by: bobmarleyfan on November 12, 2008, 10:38:09 AM
-
Having many uploads and/or downloads at the same time 24/7 for 1-2 years probably means some stress for the hard drive, since the drive's R/W heads have to jockey around on the platters all the time: reading a fraction of one shared file, saving a fraction of a download, reading a fraction of another uploaded file, etc.
I would sleep better if I could assign some 50-512MB of RAM (overnight I would allow more, maybe 2-3GB) for eMule to buffer downloads and uploads, so it can read/write bigger chunks of uploaded/downloaded files at a time instead of running constant multiple small read/write jobs.
The uploaded/downloaded data could instead be kept in RAM until a certain size is reached and the hard drive is idle, or at least not very busy.
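Roughly what I have in mind, as a minimal sketch (not aMule code; the class, the 4MB flush threshold and all other values are made up for illustration): collect the small received fragments in RAM and write them out in one burst once enough has piled up.
```cpp
#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <vector>

class BufferedPartWriter {
public:
    BufferedPartWriter(const char* path, size_t flushThreshold = 4 * 1024 * 1024)
        : m_file(std::fopen(path, "r+b")), m_threshold(flushThreshold) {}

    ~BufferedPartWriter() {
        Flush();                                  // nothing stays only in RAM
        if (m_file) std::fclose(m_file);
    }

    // Queue a received fragment in RAM instead of writing it immediately.
    void Write(uint64_t offset, const char* data, size_t len) {
        m_pending.push_back(Pending{offset, std::vector<char>(data, data + len)});
        m_buffered += len;
        if (m_buffered >= m_threshold) Flush();   // one big burst of disk work
    }

    // Write all queued fragments, sorted by offset, in one sequential pass.
    void Flush() {
        if (!m_file) return;
        std::sort(m_pending.begin(), m_pending.end(),
                  [](const Pending& a, const Pending& b) { return a.offset < b.offset; });
        for (const Pending& p : m_pending) {
            std::fseek(m_file, static_cast<long>(p.offset), SEEK_SET); // real code needs 64-bit seeks
            std::fwrite(p.data.data(), 1, p.data.size(), m_file);
        }
        std::fflush(m_file);
        m_pending.clear();
        m_buffered = 0;
    }

private:
    struct Pending { uint64_t offset; std::vector<char> data; };
    std::FILE* m_file;
    size_t m_threshold;
    size_t m_buffered = 0;
    std::vector<Pending> m_pending;
};
```
A real implementation would of course also have to flush on a timer and on shutdown, so nothing is lost if the machine goes down.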
I just had a hard drive go down. I used to have eMule up 24/7, letting people download without having downloads myself - no problem, but now I am reconsidering this, because I don't want to pay for new hard drives repeatedly.
It is not any particular application's fault, but it seems plausible that running a drive day and night under constant read/write stress causes wear and increases the chance of an early death of the storage device.
So I may not share as much as I did, but if the issue is addressed with the function I proposed, there might be less stress on the HDD and I wouldn't have as much reason to worry anymore.
-
since the drive's R/W heads have to jockey around on the platters all the time: reading a fraction of one shared file, saving a fraction of a download, reading a fraction of another uploaded file, etc.
That's what an HD is designed for.
I could assign some 50-512MB of RAM (overnight I would allow more, maybe 2-3GB) for eMule to buffer downloads and uploads, so it can read/write bigger chunks of uploaded/downloaded files at a time instead of running constant multiple small read/write jobs.
This idea is called "OS cache for mass storage devices". Some OSes let you configure its size, others adjust it automatically. The OS does this so applications won't have to.
I just had a harddrive go down.
Buy a RAID array.
constant read/write stress causes wear and increases the chance of an early death of the storage device.
Yep. When you start to use things, those things "wear out". The same principle applies to cars, shoes, etc.
I wouldn't have as much reason to worry anymore
Your worries are irrational. Sorry. Other things may cause HD failure, with "crappy drive" being #1.
-
It depends on how much you share: I'm not an expert, but I'd say that if you are sharing, let's say, 100GB, a 2GB RAM buffer would not be that effective in limiting disk accesses.
I'd suggest a different solution: if you keep a directory with your most-uploaded files on an SSD unit, you gain several advantages:
no coding required, any file sharing program can use it straight away
SSDs don't suffer from repeated reading; only heavy rewriting should be harmful, and that is not your case
SSD prices have been continuously dropping over the last year
the mechanics of your HD would be stressed only for less common uploads
spotting the most-uploaded files should be simple by looking at the shared files statistics
About fragmentation: I found a definitive solution by putting the temp folder of a/eMule on a different partition from the one where the completed file is stored. While the temporary .part file is awfully fragmented, when the file is completed and copied to another disk or partition it becomes sequential.
-
about fragmentation:
The solution is called "good filesystem". Hint - it's not NTFS.
-
Or any other, for that matter. "Good filesystems" can't fight fragmentation much, either.
-
about fragmentation:
The solution is called "good filesystem". Hint - it's not NTFS.
The solution would be using compact temp files (http://forum.amule.org/index.php?topic=15924.0) instead of sparse files.
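For context, the distinction in a minimal POSIX sketch (illustration only, not aMule code and not necessarily how the patch in that thread works): a sparse file gets its blocks allocated piecemeal as fragments arrive, while a preallocated file reserves them up front.
```cpp
#include <fcntl.h>
#include <sys/types.h>
#include <unistd.h>

// Sparse: the file gets its final size, but consists of holes; blocks are
// allocated later, one scattered fragment at a time, as data arrives.
int CreateSparsePartFile(const char* path, off_t size) {
    int fd = open(path, O_RDWR | O_CREAT, 0644);
    if (fd >= 0) ftruncate(fd, size);
    return fd;
}

// Preallocated: posix_fallocate() reserves all blocks up front, ideally in
// one contiguous extent, so later out-of-order writes don't fragment it.
int CreatePreallocatedPartFile(const char* path, off_t size) {
    int fd = open(path, O_RDWR | O_CREAT, 0644);
    if (fd >= 0) posix_fallocate(fd, 0, size);
    return fd;
}
```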
-
since the drive's R/W heads have to jockey around on the platters all the time: reading a fraction of one shared file, saving a fraction of a download, reading a fraction of another uploaded file, etc.
That's what an HD is designed for.
Hard drives are made to store data. Increasing the size of the chunks that are read/written, instead of reading/writing tiny fragments, would be a more intelligent way to spare the HDD unnecessary stress.
There is no reason not to except ignorance.
I could assign some 50-512MB of RAM ... to buffer downloads and uploads so it can read/write bigger chunks ...
This idea is called "OS cache for mass storage devices". ...The OS does this so applications won't have to.
The OS cannot anticipate which parts of which files will most likely be requested.
In the same way, the "OS cache for mass storage devices" will not cache data which the application does not request.
In other words:
If an application could estimate from a file's current upload rate that it will need, let's say, 10MB of that file in the next 5 minutes, but does not request it, then the "OS cache for mass storage devices" won't know that this data will soon be required.
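As far as I understand it, an application could at least pass that knowledge on to the OS. A minimal sketch on POSIX systems, just to illustrate the idea (the file name and the 10MB figure are made up):
```cpp
#include <fcntl.h>
#include <unistd.h>

// Hint that `length` bytes starting at `offset` will be requested shortly.
// POSIX_FADV_WILLNEED asks the kernel to start reading that range into its
// page cache in the background; it is only a hint, not a guarantee.
void HintUploadRange(int fd, off_t offset, off_t length) {
    posix_fadvise(fd, offset, length, POSIX_FADV_WILLNEED);
}

int main() {
    int fd = open("shared_file.avi", O_RDONLY);     // hypothetical file name
    if (fd >= 0) {
        HintUploadRange(fd, 0, 10 * 1024 * 1024);   // the next ~10MB of the upload
        close(fd);
    }
    return 0;
}
```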
I just had a harddrive go down.
Buy a RAID array.
RAID for data security is nonsense. There is no reason except laziness to keep a backup drive permanently running.
constant read/write stress causes wear and increases the chance of an early death of the storage device.
Yep. When you start to use things, those things "wear out". The same principle applies to cars, shoes, etc.
There are different ways to use things. If you drive your car like you are Evel Knievel ;) then it will be ready for repair sooner than your grandpa's car.
I wouldn't have as much reason to worry anymore
Your worries are irrational. Sorry. Other things may cause HD failure, with "crappy drive" being #1.
Yes, Western Digital is "crappy", I see.
-
It depends on how much you share: I'm not an expert, but I'd say that if you are sharing, let's say, 100GB, a 2GB RAM buffer would not be that effective in limiting disk accesses.
I think that even 50-250MB could help a lot.
Let's say you allow 10 uploads at any time. Once an upload is initiated, the program can fetch maybe 5MB of the file and monitor how fast the upload is.
Then it can easily calculate how many MB will be uploaded within, say, the next 10-30 minutes.
That data can be pre-loaded to RAM.
Handling it this way would mean the program only needs one read job at a time, because there is no hurry when a lot of the required data is already in RAM.
For reading, it wouldn't even require much change in programming, I suppose.
When it comes to writing - I don't know.
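The arithmetic I mean, as a sketch (all numbers are examples I made up, nothing aMule actually uses):
```cpp
#include <algorithm>
#include <cstdint>
#include <cstdio>

// Bytes worth pre-loading for one upload slot, given the upload rate
// measured so far, a look-ahead window, and a hard cap on RAM per slot.
uint64_t PrefetchSize(double bytesPerSecond, unsigned lookaheadSeconds,
                      uint64_t maxBytesPerSlot) {
    uint64_t predicted = static_cast<uint64_t>(bytesPerSecond * lookaheadSeconds);
    return std::min(predicted, maxBytesPerSlot);
}

int main() {
    // Example: a client pulling 40 KiB/s, looking 30 minutes ahead,
    // capped at 25 MiB of RAM for this single upload slot.
    uint64_t bytes = PrefetchSize(40.0 * 1024, 30 * 60, 25ULL * 1024 * 1024);
    std::printf("pre-load %llu bytes for this slot\n",
                static_cast<unsigned long long>(bytes));
    return 0;
}
```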
I'd suggest a different solution: if you keep a directory with your most-uploaded files on an SSD unit, you gain several advantages
Still too expensive just for file sharing. And not required because there might be enough RAM available for buffering.
That's not rocket science.
About fragmentation: I found a definitive solution by putting the temp folder of a/eMule on a different partition from the one where the completed file is stored. While the temporary .part file is awfully fragmented, when the file is completed and copied to another disk or partition it becomes sequential.
Fragmentation is not my main issue. I did not experience much of a hard drive slowdown.
I would also like to add that your way of doing that means that the files have to be copied from one partition to another when finished.
I don't see a real problem with that, but generally I think it is a waste of resources (time, and also a little bit of extra wear). It's a workaround, not an elegant solution.
-
RAID for data security is nonsense. There is no reason except laziness to keep a backup drive permanently running.
I can see your feature request is founded on a lot of knowledge about HD technology... ::)
Hard drives are made to store data. Increasing the size of the chunks that are read/written, instead of reading/writing tiny fragments, would be a more intelligent way to spare the HDD unnecessary stress.
There is no reason not to except ignorance.
If you want to convince us to do something you want, you are doing a lousy job. :P
Right now we use the same code for the monolith and for amuled, which is supposed to run in low-memory environments. It would be possible to add an option to read a full chunk ahead for each upload. For downloads it depends on how much already-downloaded data you are willing to lose in case of a crash (which should not happen, of course). I admit that memory is plentiful on most platforms nowadays, so you might have a point. At least for people sleeping near their mule machines. ;)
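Roughly like this, as a sketch only (PARTSIZE = 9728000 bytes is the ed2k chunk size; error handling, 64-bit seeks, and freeing the cached chunks when an upload ends are left out):
```cpp
#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <map>
#include <vector>

static const uint64_t PARTSIZE = 9728000;   // ed2k chunk size in bytes (9.28 MB)

class ChunkCache {
public:
    explicit ChunkCache(std::FILE* file) : m_file(file) {}

    // Return the requested byte range, reading the surrounding chunk from
    // disk only the first time that chunk is touched.
    std::vector<char> Read(uint64_t offset, size_t len) {
        uint64_t chunk = offset / PARTSIZE;
        std::vector<char>& buf = m_chunks[chunk];
        if (buf.empty()) {                          // first access: one big read
            buf.resize(PARTSIZE);
            std::fseek(m_file, static_cast<long>(chunk * PARTSIZE), SEEK_SET);
            size_t got = std::fread(&buf[0], 1, buf.size(), m_file);
            buf.resize(got);                        // the last chunk may be shorter
        }
        size_t rel = static_cast<size_t>(offset - chunk * PARTSIZE);
        if (rel >= buf.size()) return std::vector<char>();
        size_t n = std::min(len, buf.size() - rel);
        return std::vector<char>(buf.begin() + rel, buf.begin() + rel + n);
    }

private:
    std::FILE* m_file;
    std::map<uint64_t, std::vector<char>> m_chunks;  // chunk index -> cached data
};
```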
-
The OS and hard drive manufacturers set the read/write buffer sizes. Go talk to them, tell them I sent you.
-
It would be possible to add an option to read a full chunk ahead for each upload.
It's called "read-ahead optimization", and both the HD and the OS already do it.
There is no reason not to except ignorance.
Your lack of understanding is hilarious.
Answer is NO.
-
The problem is that which chunks people request from you is very random. You don't know when they will disconnect, what other chunks they will ask for, and so on. The normal read-ahead is already doing a good job. If you want better, then you need 100 GB of cache memory to cache the 100 GB on your hard drive. If you did buy that much memory, I would suggest that you simply make a ramdisk, copy the files onto it, and then start aMule. That way everybody is happy.