aMule Forum


Author Topic: Ramdisk Buffer To Prevent Stress For Harddrives And Fragmentation  (Read 7025 times)

bobmarleyfan

  • Approved Newbie
  • *
  • Karma: 0
  • Offline
  • Posts: 42

Having many uploads and/or downloads running at the same time, 24/7, for 1-2 years probably means some stress for the hard drive, since the drive's R/W heads have to jockey around on the platter all the time, getting a fraction of one shared file, saving a fraction of a download, getting a fraction of another uploaded file, etc...

I would sleep better if I could assign some 50-512 MB of RAM (overnight I would allow it to use more, maybe 2-3 GB) for eMule to buffer downloads and uploads, so it can read/write bigger chunks of uploaded/downloaded files at a time instead of running constant multiple read/write jobs.
The uploaded/downloaded data could instead be kept in RAM until a certain size is reached and the hard drive is idle, or at least not very stressed.

I just had a hard drive go down. I used to have eMule up 24/7, letting people download without having downloads myself - no problem, but now I am reconsidering this, because I don't want to pay for new hard drives repeatedly.
It is not any particular application's fault, but it seems plausible that running a drive day and night under constant read/write stress will cause wear and increase the chance of an early death of the storage device.
So I may not share as much as I did, but if the issue is addressed with the function I proposed, it might put less stress on the HDD and I wouldn't have as much reason to worry anymore.
Logged

lfroen

  • Guest
Re: Ramdisk Buffer To Prevent Stress For Harddrives And Fragmentation
« Reply #1 on: November 12, 2008, 01:01:13 PM »

Quote
since the drive's R/W heads have to jockey around on the platter all the time, getting a fraction of one shared file, saving a fraction of a download, getting a fraction of another uploaded file, etc...
That's what an HD is designed for.

Quote
I could assign some 50-512 MB of RAM (overnight I would allow it to use more, maybe 2-3 GB) for eMule to buffer downloads and uploads, so it can read/write bigger chunks of uploaded/downloaded files at a time instead of running constant multiple read/write jobs.
This idea is called an "OS cache for the mass storage device". Some OSes allow you to configure its size, others adjust it automatically. The OS does this so applications won't have to.
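
As a minimal illustration of that OS cache (a sketch, not aMule code; "shared.part" is a made-up file name): reading the same file twice in a row is usually much faster the second time, because the kernel already holds the data in its page cache.

Code:
#include <fcntl.h>
#include <unistd.h>
#include <chrono>
#include <cstdio>
#include <vector>

// Read a whole file once and return how long it took.
static double read_once(const char* path)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0) return -1.0;
    std::vector<char> buf(1 << 20);                 // 1 MiB scratch buffer
    auto t0 = std::chrono::steady_clock::now();
    while (read(fd, buf.data(), buf.size()) > 0) {
        // just pull the data through; the kernel keeps it in the page cache
    }
    auto t1 = std::chrono::steady_clock::now();
    close(fd);
    return std::chrono::duration<double>(t1 - t0).count();
}

int main()
{
    const char* path = "shared.part";               // hypothetical shared file
    printf("cold read: %.3f s\n", read_once(path));
    printf("warm read: %.3f s\n", read_once(path)); // usually much faster
}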

Quote
I just had a hard drive go down.
Buy a RAID array.

Quote
constant read/write stress will cause wear and increase the chance of an early death of the storage device.
Yep. When you start to use things, those things "wear out". The same principle applies to cars, shoes, etc.

Quote
I wouldn't have as much reason to worry anymore
Your worries are irrational. Sorry. Other things may cause HD failure, with a "crappy drive" being #1.
Logged

nikio

  • Approved Newbie
  • *
  • Karma: 0
  • Offline
  • Posts: 14
Re: Ramdisk Buffer To Prevent Stress For Harddrives And Fragmentation
« Reply #2 on: November 12, 2008, 01:51:15 PM »

Depends on how much you share: I'm not an expert, but I'd say that if you are sharing, let's say, 100 GB, a 2 GB RAM buffer would not be that effective in limiting disc accesses.

I'd suggest a different solution: if you put the directory with the most uploaded files on an SSD unit, you may have several advantages:

no coding required, any file sharing program can use it straight away
SSDs don't suffer from repeated reading; only heavy rewriting would be harmful, and that is not your case
SSD prices have been continuously dropping over the last year
the mechanics of your HD would be stressed only for the less common uploads

Spotting the most uploaded files should be simple by looking at the shared files statistics.

About fragmentation: I found a definitive solution by putting the temp folder of a/eMule on a different partition from the one in which the completed file is stored. While the temporary .part file is awfully fragmented, once the file is completed and copied to another disk or partition it becomes sequential.
« Last Edit: November 12, 2008, 02:12:59 PM by nikio »
Logged

lfroen

  • Guest
Re: Ramdisk Buffer To Prevent Stress For Harddrives And Fragmentation
« Reply #3 on: November 12, 2008, 02:16:37 PM »

Quote
about fragmentation:
The solution is called a "good filesystem". Hint: it's not NTFS.
Logged

Kry

  • Ex-developer
  • Retired admin
  • Hero Member
  • *****
  • Karma: -665
  • Offline
  • Posts: 5795
Re: Ramdisk Buffer To Prevent Stress For Harddrives And Fragmentation
« Reply #4 on: November 12, 2008, 08:18:52 PM »

Or any other, for that matter. "Good filesystems" can't fight fragmentation much, either.
Logged

Stu Redman

  • Administrator
  • Hero Member
  • *****
  • Karma: 214
  • Offline
  • Posts: 3739
  • Engines screaming
Re: Ramdisk Buffer To Prevent Stress For Harddrives And Fragmentation
« Reply #5 on: November 12, 2008, 10:50:20 PM »

Quote
about fragmentation:
The solution is called a "good filesystem". Hint: it's not NTFS.
The solution would be using compact temp files instead of sparse files.
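
For illustration only (a sketch under assumptions, not aMule's actual download code; the file names and size are made up): a sparse .part file gets its blocks allocated piecemeal as random chunks arrive, which is where the fragmentation comes from, while a preallocated ("compact") temp file asks the filesystem for all its blocks up front.

Code:
#include <cstdio>
#include <cstring>
#include <fcntl.h>
#include <unistd.h>

int main()
{
    const off_t part_file_size = 700LL * 1024 * 1024;   // hypothetical 700 MB download

    // Sparse variant: only the length is set; blocks are allocated later,
    // scattered across the disk as chunks arrive in random order.
    int sparse_fd = open("sparse.part", O_RDWR | O_CREAT, 0644);
    if (sparse_fd >= 0) {
        if (ftruncate(sparse_fd, part_file_size) != 0)
            perror("ftruncate");
        close(sparse_fd);
    }

    // "Compact" variant: the filesystem reserves (mostly contiguous) blocks now.
    int compact_fd = open("compact.part", O_RDWR | O_CREAT, 0644);
    if (compact_fd >= 0) {
        int err = posix_fallocate(compact_fd, 0, part_file_size);
        if (err != 0)
            fprintf(stderr, "posix_fallocate: %s\n", strerror(err));
        close(compact_fd);
    }
    return 0;
}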
Logged
The image of mother goddess, lying dormant in the eyes of the dead, the sheaf of the corn is broken, end the harvest, throw the dead on the pyre -- Iron Maiden, Isle of Avalon

bobmarleyfan

  • Approved Newbie
  • *
  • Karma: 0
  • Offline
  • Posts: 42
Re: Ramdisk Buffer To Prevent Stress For Harddrives And Fragmentation
« Reply #6 on: November 19, 2008, 10:06:51 AM »


Quote
Quote
since the drive's R/W heads have to jockey around on the platter all the time, getting a fraction of one shared file, saving a fraction of a download, getting a fraction of another uploaded file, etc...
That's what an HD is designed for.

Hard drives are made to store data. Increasing the size of the chunks that are read/written, instead of reading/writing tiny chunks, would be a more intelligent way to spare the HDD unnecessary stress.
There is no reason not to do it except ignorance.

Quote
Quote
I could assign some 50-512 MB of RAM ... to buffer downloads and uploads so it can read/write bigger chunks ...
This idea is called an "OS cache for the mass storage device". ... The OS does this so applications won't have to.

The OS cannot anticipate which parts of which files will most likely be requested.
In the same way, the "OS cache for mass storage device" will not cache data which the application has not requested.
In other words:
If an application could estimate from a file's current upload rate that it will need, let's say, 10 MB of that file in the next 5 minutes, but does not request it yet, then the "OS cache for mass storage device" won't know that this data will soon be required.
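
A minimal sketch of that point (not aMule code; the function name and parameters are hypothetical): the application, which does know which range it will read next, can hand that knowledge to the kernel, and the OS cache then pulls the range in early.

Code:
#include <fcntl.h>
#include <unistd.h>

// Tell the OS cache which byte range of an open file we expect to read soon,
// e.g. the next ~10 MB of a file that is currently being uploaded to a peer.
void hint_upcoming_upload(int fd, off_t next_offset, off_t expected_bytes)
{
    posix_fadvise(fd, next_offset, expected_bytes, POSIX_FADV_WILLNEED);
}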

Quote
Quote
I just had a hard drive go down.
Buy a RAID array.

RAID for data security is nonsense. There is no reason except laziness to keep a backup drive running permanently.

Quote
Quote
constant read/write stress will cause wear and increase the chance of an early death of the storage device.
Yep. When you start to use things, those things "wear out". The same principle applies to cars, shoes, etc.

There are different ways to use things. If you drive your car like you are Evel Knievel ;) then it will be ready for repair sooner than your grandpa's car.

Quote
Quote
I wouldn't have as much reason to worry anymore
Your worries are irrational. Sorry. Other things may cause HD failure, with a "crappy drive" being #1.


Yes, Western Digital is "crappy", I see.
Logged

bobmarleyfan

  • Approved Newbie
  • *
  • Karma: 0
  • Offline
  • Posts: 42
Re: Ramdisk Buffer To Prevent Stress For Harddrives And Fragmentation
« Reply #7 on: November 19, 2008, 10:18:50 AM »

Quote
Depends on how much you share: I'm not an expert, but I'd say that if you are sharing, let's say, 100 GB, a 2 GB RAM buffer would not be that effective in limiting disc accesses.

I think that even 50-250 MB could help a lot.
Let's say you allow 10 uploads at any time. Once an upload is initiated, the program can fetch maybe 5 MB of the file and monitor how quick the upload is.
Then it can easily be calculated how many MB will be uploaded within, say, the next 10-30 minutes.
That data can be pre-loaded into RAM.
Handling it this way would mean that the program can run one read job at a time, because there is no hurry since a lot of the required data is already in RAM.
For reading it wouldn't even require much change in the programming, I suppose.
When it comes to writing - I don't know.
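
A back-of-the-envelope sketch of that calculation (the numbers are the examples from this thread, not anything aMule actually does):

Code:
#include <algorithm>
#include <cstdint>
#include <cstdio>

int main()
{
    const double   upload_rate      = 50.0 * 1024;           // 50 KB/s measured for one peer
    const double   window_seconds   = 10 * 60;               // look ahead 10 minutes
    const uint64_t ram_budget       = 250ULL * 1024 * 1024;  // 250 MB total buffer
    const int      concurrent_slots = 10;                    // 10 uploads at a time

    uint64_t wanted   = static_cast<uint64_t>(upload_rate * window_seconds); // ~29 MB
    uint64_t per_slot = ram_budget / concurrent_slots;                       // 25 MB cap
    uint64_t prefetch = std::min(wanted, per_slot);

    printf("prefetch %.1f MB for this upload slot\n", prefetch / (1024.0 * 1024.0));
    return 0;
}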

Quote
I'd suggest a different solution: if you put the directory with the most uploaded files on an SSD unit, you may have several advantages

Still too expensive just for file sharing. And not required, because there might be enough RAM available for buffering.
That's not rocket science.

Quote
About fragmentation: I found a definitive solution by putting the temp folder of a/eMule on a different partition from the one in which the completed file is stored. While the temporary .part file is awfully fragmented, once the file is completed and copied to another disk or partition it becomes sequential.

Defragmentation is not my main issue; I have not experienced much of a hard drive slowdown.
I would also like to add that your way of doing it means the files have to be copied from one partition to another when finished.
I don't see a real problem with that, but generally I think it is a waste of resources (time, and also a little bit of extra wear). It's a workaround, and not an elegant one.
Logged

Stu Redman

  • Administrator
  • Hero Member
  • *****
  • Karma: 214
  • Offline
  • Posts: 3739
  • Engines screaming
Re: Ramdisk Buffer To Prevent Stress For Harddrives And Fragmentation
« Reply #8 on: November 19, 2008, 11:28:26 PM »

Quote
RAID for data security is nonsense. There is no reason except laziness to keep a backup drive running permanently.
I can see your feature request is founded on a lot of knowledge about HD technology...  ::)

Quote
Hard drives are made to store data. Increasing the size of the chunks that are read/written, instead of reading/writing tiny chunks, would be a more intelligent way to spare the HDD unnecessary stress.
There is no reason not to do it except ignorance.
If you want to convince us to do something you want, you are doing a lousy job.  :P

Right now we use the same code for the monolith and for amuled, which is supposed to run in low-memory environments. It would be possible to add an option to read a full chunk ahead for each upload. For downloads it depends on how much already-downloaded data you are willing to lose in case of a crash (which should not happen, of course). I admit that memory is plentiful on most platforms nowadays, so you might have a point. At least for people sleeping near their mule machines.  ;)
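
Purely as a sketch of what such an option could look like (hypothetical code, not how aMule is implemented; PARTSIZE is the eD2k chunk size, everything else is made up): when an upload slot opens, read the whole chunk into memory once and serve the peer's block requests from that buffer.

Code:
#include <cstdio>
#include <sys/types.h>
#include <vector>

static const size_t PARTSIZE = 9728000;   // one eD2k chunk, about 9.28 MB

// Read chunk number 'chunk_index' of an already opened shared file into RAM.
std::vector<char> read_full_chunk(FILE* file, unsigned chunk_index)
{
    std::vector<char> buffer(PARTSIZE);
    if (fseeko(file, static_cast<off_t>(chunk_index) * PARTSIZE, SEEK_SET) != 0)
        return {};                        // seek failed, fall back to on-demand reads
    size_t got = fread(buffer.data(), 1, buffer.size(), file);
    buffer.resize(got);                   // the last chunk of a file may be shorter
    return buffer;                        // block requests are then served from this buffer
}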
Logged
The image of mother goddess, lying dormant in the eyes of the dead, the sheaf of the corn is broken, end the harvest, throw the dead on the pyre -- Iron Maiden, Isle of Avalon

Kry

  • Ex-developer
  • Retired admin
  • Hero Member
  • *****
  • Karma: -665
  • Offline
  • Posts: 5795
Re: Ramdisk Buffer To Prevent Stress For Harddrives And Fragmentation
« Reply #9 on: November 20, 2008, 12:02:54 AM »

The OS and the hard drive manufacturers set the read/write buffer sizes. Go talk to them; tell them I sent you.
Logged

lfroen

  • Guest
Re: Ramdisk Buffer To Prevent Stress For Harddrives And Fragmentation
« Reply #10 on: November 20, 2008, 05:37:49 AM »

Quote
It would be possible to add an option to read a full chunk ahead for each upload.
It's called "read-ahead optimization", and both the HD and the OS already do it.

Quote
There is no reason not to do it except ignorance.
Your lack of understanding is hilarious.

Answer is NO.
Logged

Archmage

  • Full Member
  • ***
  • Karma: 5
  • Offline
  • Posts: 119
Re: Ramdisk Buffer To Prevent Stress For Harddrives And Fragmentation
« Reply #11 on: November 21, 2008, 11:42:05 AM »

The problem is that it is very random which chunks people request from you. You don't know when they will disconnect, which other chunks they would ask for, and so on. The normal read-ahead is already doing a good job. If you want to do better, then you need 100 GB of cache memory to cache the 100 GB on your hard drive. If you did buy that much memory, I would suggest that you simply make a ramdisk, copy the files over to it, and then start aMule. That way everybody is happy.
Logged