Outgoing connections never use the ports set in Preferences - those are for incoming connections.
Yeah, there's something I never mentioned. Among the iptables rules that I have set up, there are two that I use for outgoing connections.
$ sudo iptables -t nat -A POSTROUTING -p tcp --source 83.233.181.199 -j SNAT --to 83.233.181.199:44976
$ sudo iptables -t nat -A POSTROUTING -p udp --source 83.233.181.199 -j SNAT --to 83.233.181.199:44977
I don't remember exactly what I was thinking when I decided to use these rules. I was just fumbling around, trying to get this set-up right for Azureus, and I think my reasoning went like, "If I force Azureus to only initiate connections from a certain IP and a certain port, then all replies from other clients will come to that IP and port."
This kind of assumption seems to work well enough in Azureus, and I thought it would work in amule too. I guess the problem comes down to what phoenix says: that the "Bind address" feature is not meant to be used for outgoing connections.
The bind interface is for incoming connections. In a normal situation, it should be "0.0.0.0", which means "any interface will do". If you put some address here, then you limit which interface is able to accept a connection.
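In socket terms, that bind address behaves like the address passed to bind() before listen(): it only restricts which local address can receive a connection. A minimal sketch of that behaviour, using only loopback addresses so it runs anywhere (the addresses and the helper name are illustrative, not anything aMule actually does internally):

```python
import socket

def make_server(addr):
    """Bind a listening socket to addr; port 0 lets the kernel pick one."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((addr, 0))
    srv.listen(1)
    return srv

# "0.0.0.0" means "any interface will do"; a specific address limits
# which local address the connection may arrive on.
accepted = []
for bind_addr in ("0.0.0.0", "127.0.0.1"):
    srv = make_server(bind_addr)
    cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    cli.connect(("127.0.0.1", srv.getsockname()[1]))
    conn, _ = srv.accept()
    accepted.append(conn.getsockname()[0])  # local address the connection came in on
    conn.close(); cli.close(); srv.close()

print(accepted)
```

Both servers accept a loopback connection here; a server bound to, say, 192.168.0.2 would refuse one arriving on another interface. Nothing in this mechanism touches connections the application initiates itself.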
I must say, it surprised me when I read this from you. I just assumed that it did more than that. I mean, I thought amule would accept only packets that have the bind address as a destination, and would set the bind address as the source for all packets that it generates. I guess, from what you're saying, amule only does the first part. Well, some of the iptables rules that I had set were based on this wrong assumption.
If aMule or any other program starts a connection, then it is up to the kernel to look at the routing tables to choose the right outgoing interface. Messing with that is possible, though subtle. Those connections you see using the wrong interface have probably been started by aMule itself. For some reason the kernel has chosen this interface. Take a look at the routing table to understand what is going on, or post it here.
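You can ask the kernel directly which source address its routing lookup would pick for a given destination, without sending anything: connect() on a UDP socket transmits no packets, it just performs the route selection and fixes the local address. A small sketch (loopback only, so it's safe to run; on this setup a LAN or Internet destination would show eth1's or ppp0's address instead):

```python
import socket

def source_addr_for(dest_ip):
    """Ask the kernel which local address its routing table would use
    to reach dest_ip.  connect() on a UDP socket sends nothing; it
    only performs the routing lookup and fixes the local address."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.connect((dest_ip, 9))  # the port number is irrelevant here
        return s.getsockname()[0]
    finally:
        s.close()

# A loopback destination resolves to the loopback source address.
print(source_addr_for("127.0.0.1"))
```

This is essentially the same answer `ip route get <dest>` would give, and it's a quick way to check which interface the kernel will choose for a particular peer.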
Yeah, I do believe that these problematic connections are all outgoing. But in Azureus, the routing I have set up already works exactly the way I want, so I'm reluctant to mess with it any more. I've pasted the routing tables I'm using below.
What you describe seems perfectly normal to me. If you start the connection, you will use a random port number on the interface that the kernel chooses. If you receive and accept a connection, then you will use the bind interface and the port number specified.
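That difference is easy to reproduce with plain sockets: an outgoing connect() with no prior bind() gets a kernel-chosen ephemeral source port, and the only way an application controls its source port is to bind() before connecting. A hedged sketch, again loopback-only (the local listener just stands in for a remote peer):

```python
import socket

# A local listener standing in for a remote peer.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(2)
dest = srv.getsockname()

# 1) Ordinary outgoing connection: no bind() first, so the kernel
#    picks both the source interface and an ephemeral source port.
c1 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
c1.connect(dest)
ephemeral_port = c1.getsockname()[1]

# 2) Outgoing connection with an explicit bind() before connect():
#    the port fixed at bind() time is the one the connection uses.
#    (Port 0 still lets the kernel choose it, but it is fixed from
#    here on; a real application would pass its configured port.)
c2 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
c2.bind(("127.0.0.1", 0))
bound_port = c2.getsockname()[1]
c2.connect(dest)
port_after_connect = c2.getsockname()[1]

for s in (c1, c2, srv):
    s.close()

print(ephemeral_port, bound_port, port_after_connect)
```

Since aMule apparently never does step 2 for outgoing connections, the SNAT rules above are the only thing rewriting its source port - the application itself always looks like case 1.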
Try to figure out which machine has initiated the connection, yours or the other one. That must be the difference.
Sorry about this - my wording in that post was wrong. What I meant was that I was seeing clients begin downloads from me after spending some time in the queue. Most of these clients would download on ppp0, but some would download on the wrong interface, eth1. I could not see anything the eth1 clients have in common. I was looking for some cause, some reason, why amule ignores the bind address for these clients and initiates connections with them on the wrong interface, using some random port that completely bypasses the port I set with the iptables rules mentioned above. I think you already gave the answer: amule will not use the bind address for outgoing connections, and I do believe that these troublesome connections are all outgoing. The same thing happens when my amule client contacts previously-known sources for files that I am downloading.
I'm not sure what else to try now. Xaignar has already applied a patch that I was hoping would fix this, but it looks like there's still a hole in my routing or iptables set-up somewhere. I'll probably run amule a few more times, download more SVN snapshots, examine these clients that keep showing up on the wrong interface more closely, etc.
Here is the routing table I'm using at the moment:
$ sudo route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
83.233.181.2 192.168.0.1 255.255.255.255 UGH 0 0 0 eth1
83.233.181.2 0.0.0.0 255.255.255.255 UH 0 0 0 ppp0
192.168.0.0 0.0.0.0 255.255.255.0 U 0 0 0 eth1
0.0.0.0 192.168.0.1 0.0.0.0 UG 0 0 0 eth1
The first line is one that I added. It's a host route that sends packets addressed to the tunnel endpoint (83.233.181.2) out through eth1 via the LAN gateway, so the ppp0 tunnel itself has a path to the Internet.
The second line is added by pppd (I think) as soon as the ppp0 interface is established.
The last 2 lines are just the defaults; they're always there.
I also have created another routing table, specifically for the ppp0 interface. It looks like this:
$ sudo ip route show table vpn
83.233.181.2 dev ppp0 scope link src 83.233.181.199
default via 83.233.181.2 dev ppp0
And, just for completeness' sake,
$ sudo ifconfig
eth1 Link encap:Ethernet HWaddr 00:a0:cc:a2:9c:ab
inet addr:192.168.0.2 Bcast:192.168.0.255 Mask:255.255.255.0
inet6 addr: fe80::2a0:ccff:fea2:9cab/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:18044484 errors:0 dropped:0 overruns:0 frame:0
TX packets:24706410 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1569668379 (1.4 GiB) TX bytes:2007729161 (1.8 GiB)
Interrupt:11 Base address:0x6000
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:235896 errors:0 dropped:0 overruns:0 frame:0
TX packets:235896 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:32111883 (30.6 MiB) TX bytes:32111883 (30.6 MiB)
ppp0 Link encap:Point-to-Point Protocol
inet addr:83.233.181.199 P-t-P:83.233.181.2 Mask:255.255.255.255
UP POINTOPOINT RUNNING NOARP MULTICAST MTU:1400 Metric:1
RX packets:2250098 errors:0 dropped:0 overruns:0 frame:0
TX packets:3194497 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:3
RX bytes:114763113 (109.4 MiB) TX bytes:3708930705 (3.4 GiB)
Since this post is already becoming ridiculously long, here's my iptables set-up:
$ sudo iptables -t mangle -L
Chain PREROUTING (policy ACCEPT)
target prot opt source destination
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
MARK all -- 83.233.181.199 anywhere MARK set 0x1
MARK tcp -- anywhere anywhere multiport ports 6543,4232,4242,4321,4661,4662,5000 MARK set 0x1
Chain POSTROUTING (policy ACCEPT)
target prot opt source destination
$ sudo iptables -t nat -L
Chain PREROUTING (policy ACCEPT)
target prot opt source destination
DNAT tcp -- anywhere 83.233.181.199 to:83.233.181.199:44976
DNAT udp -- anywhere 83.233.181.199 to:83.233.181.199:44977
DNAT tcp -- anywhere anywhere multiport sports 6543,4232,4242,4321,4661,4662,5000 to:83.233.181.199:44976
DNAT udp -- anywhere anywhere multiport sports 6543,4232,4242,4321,4661,4662,5000 to:83.233.181.199:44977
Chain POSTROUTING (policy ACCEPT)
target prot opt source destination
SNAT tcp -- 83.233.181.199 anywhere to:83.233.181.199:44976
SNAT udp -- 83.233.181.199 anywhere to:83.233.181.199:44977
SNAT tcp -- anywhere anywhere multiport dports 6543,4232,4242,4321,4661,4662,5000 to:83.233.181.199:44976
SNAT udp -- anywhere anywhere multiport dports 6543,4232,4242,4321,4661,4662,5000 to:83.233.181.199:44977
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Note: the "MARK" targets in the mangle table work together with another rule that forces all packets marked with 0x1 to use the ppp0 routing table - the second rule shown below.
$ sudo ip rule show
0: from all lookup local
32765: from all fwmark 0x1 lookup vpn
32766: from all lookup main
32767: from all lookup default