aMule Forum


Pages: [1] 2 3

Author Topic: to be able to load more than 600 files without crashing...  (Read 20764 times)

hopelessone

  • Full Member
  • ***
  • Karma: 1
  • Offline
  • Posts: 107
  • Ubuntu 8.04
to be able to load more than 600 files without crashing...
« on: December 24, 2008, 01:22:02 PM »

Hi Ya,

I would like to see:

 aMule able to load more than 600 files in paused mode without crashing.

...I have 30 downloads going, and every time I get above 600 downloads waiting in paused mode I get random "error 24: cannot write" failures (errno 24 is EMFILE, "too many open files")... I have been waiting for a feature like this.

Thanks...
Logged

Kry

  • Ex-developer
  • Retired admin
  • Hero Member
  • *****
  • Karma: -665
  • Offline
  • Posts: 5795
Re: to be able to load more than 600 files without crashing...
« Reply #1 on: December 24, 2008, 06:46:32 PM »

Raise your system's ulimits for open files.
Logged

lfroen

  • Guest
Re: to be able to load more than 600 files without crashing...
« Reply #2 on: December 24, 2008, 10:01:58 PM »

amuled will likely crash in this case anyway (when the fd number passes 1023).
Logged

Kry

  • Ex-developer
  • Retired admin
  • Hero Member
  • *****
  • Karma: -665
  • Offline
  • Posts: 5795
Re: to be able to load more than 600 files without crashing...
« Reply #3 on: December 24, 2008, 10:45:04 PM »

Then raise it.
Logged

skolnick

  • Global Moderator
  • Hero Member
  • *****
  • Karma: 24
  • Offline
  • Posts: 1188
  • CentOS 6 User
Re: to be able to load more than 600 files without crashing...
« Reply #4 on: December 25, 2008, 04:16:22 AM »

I have had more than 800 files queued in my aMule and it works fine, running under Fedora 10 with default ulimit settings.

BTW: Where does the 1023 limit you talk about come from, lfroen? I have loaded around 1300 torrents in Azureus and it works, so if Azureus can do it, I guess aMule can too. Does it have something to do with the C++ structure or something?

Regards.
Logged

Kry

  • Ex-developer
  • Retired admin
  • Hero Member
  • *****
  • Karma: -665
  • Offline
  • Posts: 5795
Re: to be able to load more than 600 files without crashing...
« Reply #5 on: December 25, 2008, 07:07:36 AM »

No, it's the amuled code. amuleD.
Logged

skolnick

  • Global Moderator
  • Hero Member
  • *****
  • Karma: 24
  • Offline
  • Posts: 1188
  • CentOS 6 User
Re: to be able to load more than 600 files without crashing...
« Reply #6 on: December 25, 2008, 04:10:55 PM »

:D It seems I didn't read the post properly. However, I still don't get why amule can but amuleD can't, since AFAIK they share tons of code.

Regards.
Logged

Kry

  • Ex-developer
  • Retired admin
  • Hero Member
  • *****
  • Karma: -665
  • Offline
  • Posts: 5795
Re: to be able to load more than 600 files without crashing...
« Reply #7 on: December 25, 2008, 08:30:02 PM »

But not the low-level network code. A simple change would suffice to increase the limit, of course, or even to make it as big as ulimit's.
Logged

Stu Redman

  • Administrator
  • Hero Member
  • *****
  • Karma: 214
  • Offline
  • Posts: 3739
  • Engines screaming
Re: to be able to load more than 600 files without crashing...
« Reply #8 on: December 25, 2008, 09:47:41 PM »

> Raise your system's ulimits for open files.

We could also make a pool of, say, 50 file handles for the downloads instead of permanently assigning one handle to each, and then hand the handles out as needed.

> But not the low-level network one.

Can we change that when we switch to wx 3.0 (when wx sockets become available for non-GUI apps)?
Logged
The image of mother goddess, lying dormant in the eyes of the dead, the sheaf of the corn is broken, end the harvest, throw the dead on the pyre -- Iron Maiden, Isle of Avalon

Kry

  • Ex-developer
  • Retired admin
  • Hero Member
  • *****
  • Karma: -665
  • Offline
  • Posts: 5795
Re: to be able to load more than 600 files without crashing...
« Reply #9 on: December 26, 2008, 02:24:00 AM »

If we ever do.
Logged

freddy77

  • Developer
  • Full Member
  • *****
  • Karma: 20
  • Offline
  • Posts: 113
Re: to be able to load more than 600 files without crashing...
« Reply #10 on: January 24, 2009, 02:19:54 PM »

Mumble mumble... I don't understand the problem. I thought a number of file descriptors like

   upload connections + download connections + files being uploaded/hashed

would be sufficient, but I noticed that partial files always require an open file descriptor. From PartFile.h:

   CFile m_hpartfile;  //permanent opened handle to avoid write conflicts

Which conflicts does this comment refer to?
Logged

Stu Redman

  • Administrator
  • Hero Member
  • *****
  • Karma: 214
  • Offline
  • Posts: 3739
  • Engines screaming
Re: to be able to load more than 600 files without crashing...
« Reply #11 on: January 24, 2009, 02:44:07 PM »

It's just a lazy implementation, imho. It would of course be possible to make a pool of file handles for the downloads and release the least recently used one to save handles. Well, it would have been possible at least, until you came up with the mapping code.  ;)
Logged
The image of mother goddess, lying dormant in the eyes of the dead, the sheaf of the corn is broken, end the harvest, throw the dead on the pyre -- Iron Maiden, Isle of Avalon

freddy77

  • Developer
  • Full Member
  • *****
  • Karma: 20
  • Offline Offline
  • Posts: 113
Re: to be able to load more than 600 files without crashing...
« Reply #12 on: January 24, 2009, 05:01:20 PM »

> It's just a lazy implementation, imho. It would of course be possible to make a pool of file handles for the downloads and release the least recently used one to save handles. Well, it would have been possible at least, until you came up with the mapping code.  ;)


Oh my god :) Well... believe it or not, that is actually a way to support more than 1024 open files!! I still have to test this idea, but: open a file with open(), mmap() it, and close the file descriptor; now you have an mmap-ed area that you can write to. Then there is mremap() (if I remember the name right), a Linux extension which allows you to move the area without using a file descriptor :) However, I think it would be better (if not already implemented) to have some sort of queue for long processes (like hashing), in order to limit disk seeking (wait for one hashing run to finish before starting another file) and file opening.

I still don't understand the write-conflict problem... a PartFile represents a partial file, and the real file is accessed through the PartFile object. Obviously you can't run two tasks on the same PartFile at the same time, so use of m_hpartfile is serialized; I don't see any problem with opening the file before an operation and closing it right after the read/write.

About caching, I would suggest a LazyClose/LazyOpen, either in PartFile or in CFile, that doesn't close the file while there are fewer than X files "lazy opened". A simple counter and a fixed array would suffice.
Logged

Stu Redman

  • Administrator
  • Hero Member
  • *****
  • Karma: 214
  • Offline
  • Posts: 3739
  • Engines screaming
Re: to be able to load more than 600 files without crashing...
« Reply #13 on: January 24, 2009, 05:47:19 PM »

Before you start digging into it:

- The 1024 limit is only an amuled issue, not an amule issue. With amule, file handles are unlimited (if they are unlimited in your OS, at least). For amuled there is a chance to get rid of the limit and the legacy code when wx 3.0 comes out and sockets/events for non-GUI apps become available.
- 600 downloads at a time is overdoing things anyway. A machine capable of that can also run amule instead of amuled, and so work around the problem.
- I don't want to know what happens if you open a partfile, memory-map part of it for reading, close it (still mapped), reopen it, and then write to it. Especially on the multiple platforms we support. No, thanks (shiver).
- Full hashing is done in a background thread (single-part hashing is done in the main thread). The download is finished by then anyway, and upload is suspended meanwhile. Otherwise access is single-threaded, so there are no write conflicts. Don't trust any old comment lying around.  ;)

Bottom line: I would just leave it as it is. The risk outweighs the gain, imho.
Logged
The image of mother goddess, lying dormant in the eyes of the dead, the sheaf of the corn is broken, end the harvest, throw the dead on the pyre -- Iron Maiden, Isle of Avalon

freddy77

  • Developer
  • Full Member
  • *****
  • Karma: 20
  • Offline
  • Posts: 113
Re: to be able to load more than 600 files without crashing...
« Reply #14 on: January 24, 2009, 08:13:54 PM »

> The 1024 limit is only an amuled issue, not an amule issue. With amule, file handles are unlimited (if they are unlimited in your OS, at least). For amuled there is a chance to get rid of the limit and the legacy code when wx 3.0 comes out and sockets/events for non-GUI apps become available.

?? I thought 1024 was an OS limit, so it would be a limit even for the amule GUI... I think even wx will use a file descriptor for sockets and files... I'm not digging into this stuff that much; I would just like to understand some aMule details...
Logged