aMule Forum
English => aMule Help => Topic started by: pholimetre on June 22, 2010, 05:00:02 PM
-
Hello, I recently installed amuled/amuleweb on an Ubuntu Hardy server without Xorg. I prefer amuleweb over amulegui, as amuleweb is said to be more stable, and because I want to make the mule available to all the boxes connected to our private network without installing anything on the end-user boxes.
Here is some information about my setup:
# lsb_release -d
Description: Ubuntu 8.04.4 LTS
# dpkg -l | grep 'amule'
ii amule-common 2.2.6-0ubuntu1~hardy1 common files for the rest of aMule packages
ii amule-daemon 2.2.6-0ubuntu1~hardy1 non-graphic version of aMule, a client for t
ii amule-utils 2.2.6-0ubuntu1~hardy1 utilities for aMule (command-line version)
$ ls /home/habitant/.aMule/webserver/
chicane default litoral php-default tchoum
I have noticed problems already mentioned in previous topics:
1) A lot of CLOSE_WAIT connections (~1000), leading to blank pages in the web browser when amuleweb doesn't crash:
Example of CLOSE_WAIT connections
# netstat -tuapn | awk '/amule(web|d)|Proto/ { if ( match($0, "CLOSE_WAIT") ) cw++; else print $0; tot++} END {print "Number of CLOSE_WAIT entries: " cw "\nTotal Number of entries: " tot}'
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:4711 0.0.0.0:* LISTEN 18284/amuleweb
tcp 0 0 0.0.0.0:4712 0.0.0.0:* LISTEN 18282/amuled
tcp 0 0 0.0.0.0:52734 0.0.0.0:* LISTEN 18282/amuled
tcp 0 0 172.123.0.12:37288 212.63.206.35:4242 ESTABLISHED 18282/amuled
tcp 0 0 127.0.0.1:4712 127.0.0.1:47182 ESTABLISHED 18282/amuled
tcp 0 0 127.0.0.1:47182 127.0.0.1:4712 ESTABLISHED 18284/amuleweb
tcp 0 0 172.123.0.12:52734 111.167.156.80:1565 ESTABLISHED 18282/amuled
udp 0 0 0.0.0.0:52737 0.0.0.0:* 18282/amuled
udp 0 0 0.0.0.0:52754 0.0.0.0:* 18282/amuled
Number of CLOSE_WAIT entries: 1018
Total Number of entries: 1028
2) amuleweb crashes randomly while a client requests a new page. Restarting amuleweb from the command line shows a "Segmentation fault" error,
but I haven't yet found related solutions on the forum.
I don't know for sure, but I guess that it doesn't depend on the skin chosen for the web interface. Can you confirm? I have read some heroes or developers saying that some webserver skins are better than others, for instance "php-default" or "default". Can someone confirm which interface is the most stable?
Let me also mention that sometimes no CLOSE_WAIT connection appears for a long time after starting amuleweb, while on other occasions many CLOSE_WAIT connections appear quite quickly.
I also tried to debug, using:
$ grep -E "(Template|VerboseDebug)=" /home/habitant/.aMule/amule.conf
VerboseDebug=1
Template=php-default
But there is no way to use this feature, since logging in (or making 2 or 3 page requests to the server, I don't remember which) leads to a "Segmentation Fault". Is this way of debugging impossible?
Even though I am not an expert, I am ready to run tests, using tcpdump or a related program, to help solve this problem if it isn't already solved, as I guess from what I read on the forum.
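For instance, something like this might capture the traffic between the browsers and amuleweb (just a guess on my side, assuming the web interface stays on port 4711 as shown above):
# tcpdump -i any -s 0 -w amuleweb.pcap 'tcp port 4711'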
Thanks, Xavier.
-
If you are getting crashes, please try to get a backtrace (http://forum.amule.org/index.php?topic=4115.0) from amuleweb so we can fix it.
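For reference, a minimal gdb session (a sketch, assuming gdb is installed and debugging symbols for amuleweb are available, e.g. from a debug package or a build with debug info) would look roughly like this:
$ gdb amuleweb
(gdb) run
... use the web interface until amuleweb crashes ...
(gdb) bt
(gdb) bt full
Then post the backtrace output here.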
You can also try the SVN version (both of amuled and amuleweb) to see if the bug is fixed there.
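Building just the daemon and the web server from a source checkout would be roughly (a sketch; the exact steps and configure options for the SVN tree may differ):
$ ./autogen.sh    # only needed for an SVN checkout, not for a release tarball
$ ./configure --disable-monolithic --enable-amule-daemon --enable-webserver --enable-amulecmd
$ make
$ sudo make install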
-
I prefer amuleweb over amulegui, as amuleweb is said to be more stable
In fact, currently aMuleWeb is the most unstable of the aMule applications. :-\
-
Thanks for your answers. I will try to install the SVN version, but since the dependencies are not met by the devel packages for Ubuntu Hardy (LTS), and since I must from now on leave this LTS version as is, my road ends here for the moment, at least for installing and testing on this box. I will try some tests on a more recent Ubuntu box.
Thank you again, Xavier.
-
In fact, currently aMuleWeb is the most unstable of the aMule applications. :-\
Go ahead and fix it. :)
-
In fact, currently aMuleWeb is the most unstable of the aMule applications. :-\
Go ahead and fix it. :)
I will once I have the knowledge required. But you are welcome to help me, of course. ;)