Platform : Fedora 13, 32-bit machine
I am running tshark on my client and rpcapd on my remote machine.
Here is an example:
Remote machine :- IP address 192.168.100.100 (say) and interface name eth6 (say)
bash$ sudo ./rpcapd -n
Client side :- IP address 192.168.100.200
bash$ sudo tshark -w output.pcap -i rpcap://192.168.100.100/eth6 -f "ip proto 132"
The capture runs successfully and packets are being captured.
But it also captures packets to and from my own machine's interfaces, which are unrelated to the remote machine's interface.
Please help me understand this.
You can exclude traffic to your capture host by adding a filter:
tshark -f '(not host 192.168.100.200) and (ip proto 132)'
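Combined with the remote capture from the question, the full command would look something like this (same interface and addresses as above):
sudo tshark -w output.pcap -i rpcap://192.168.100.100/eth6 -f '(not host 192.168.100.200) and (ip proto 132)'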
The problem occurred due to promiscuous mode.
I tried this:
$ sudo tshark -p -w output.pcap -i rpcap://192.168.100.100/eth6 -f "ip proto 132"
and it worked!
The option -p means the interface will not be put into promiscuous mode.
Here are the details:
https://www.wireshark.org/docs/man-pages/tshark.html
I have installed Docker for Windows version 3.1.0 (51484) and set up a container running a memcached service on it using WSL2:
docker run -it -d --privileged --restart always -p 11212:11211 platform/centos/memcached /usr/sbin/init
I'm able to call memcached from localhost (127.0.0.1) but not from my machine's IP.
Any clue?
The problem may be with your IP settings. To check, try listening for connections on a random port:
terminal 1:
nc -l 29123
terminal 2 (type the message and press Enter):
nc localhost 29123
hello there
terminal 3:
nc -vv -l 0.0.0.0 29124
terminal 4 (again, type the message):
nc <your-ip> 29124
hello there
If "hello there" appears in terminal 1 but not in terminal 3, the problem is probably in your IP/network settings.
Are you sure that your machine's IP is constant (does not change)?
Does "machine IP" mean your host's public IP or your host's network IP?
I'm trying to understand how TPROXY works in an effort to build a transparent proxy for Docker containers.
After lots of research I managed to create a network namespace, inject a veth interface into it and add TPROXY rules. The following script worked on a clean Ubuntu 18.04.3:
ip netns add ns0
ip link add br1 type bridge
ip link add veth0 type veth peer name veth1
ip link set veth0 master br1
ip link set veth1 netns ns0
ip addr add 192.168.3.1/24 dev br1
ip link set br1 up
ip link set veth0 up
ip netns exec ns0 ip addr add 192.168.3.2/24 dev veth1
ip netns exec ns0 ip link set veth1 up
ip netns exec ns0 ip route add default via 192.168.3.1
iptables -t mangle -A PREROUTING -i br1 -p tcp -j TPROXY --on-ip 127.0.0.1 --on-port 1234 --tproxy-mark 0x1/0x1
ip rule add fwmark 0x1 tab 30
ip route add local default dev lo tab 30
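As a sanity check (not part of the original script), the TPROXY rule's packet counters show whether it is matching at all:
iptables -t mangle -L PREROUTING -v -n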
After that I launched a toy Python server from a Cloudflare blog post:
import socket

# IP_TRANSPARENT is not exposed by the socket module; value taken from <linux/in.h>.
# Setting it requires CAP_NET_ADMIN, so run this as root.
IP_TRANSPARENT = 19

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
s.setsockopt(socket.IPPROTO_IP, IP_TRANSPARENT, 1)
s.bind(('127.0.0.1', 1234))
s.listen(32)
print("[+] Bound to tcp://127.0.0.1:1234")
while True:
    # accept the redirected connection and report its original destination
    c, (r_ip, r_port) = s.accept()
    l_ip, l_port = c.getsockname()
    print("[ ] Connection from tcp://%s:%d to tcp://%s:%d" % (r_ip, r_port, l_ip, l_port))
    c.send(b"hello world\n")
    c.close()
And finally by running ip netns exec ns0 curl 1.2.4.8 I was able to observe a connection from 192.168.3.2 to 1.2.4.8 and receive the "hello world" message.
The problem is that it seems to have compatibility issues with Docker. Everything worked well in a clean environment, but once I started Docker, things began to go wrong; it seems the TPROXY rule no longer worked. Running ip netns exec ns0 curl 192.168.3.1 gave "Connection reset" and ip netns exec ns0 curl 1.2.4.8 timed out (both should have produced the "hello world" message). I tried restoring all iptables rules, deleting the ip routes and rules generated by Docker and shutting Docker down, but none of that worked, even when I hadn't configured any networks or containers.
What is happening behind the scenes and how can I get TPROXY working normally?
I traced all processes created by Docker using strace -f dockerd and looked for lines containing exec. Most were iptables commands, which I had already ruled out, but the lines with modprobe looked interesting. I loaded those modules one by one and figured out that the module causing the trouble is br_netfilter.
The module enables filtering of bridged packets through iptables, ip6tables and arptables. The iptables part can be disabled by executing echo "0" | sudo tee /proc/sys/net/bridge/bridge-nf-call-iptables. After executing the command, the script worked again without impacting Docker containers.
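Equivalently, the same toggle can be flipped through sysctl (standard sysctl usage; adding the key to /etc/sysctl.conf makes it persist across reboots):
sysctl -w net.bridge.bridge-nf-call-iptables=0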
I am still confused, though: I don't fully understand the consequences of this setting. I enabled packet tracing, and the packets seemed to match exactly the same set of rules whether bridge-nf-call-iptables was disabled or enabled; yet with it disabled the first TCP SYN packet was delivered to the Python server, while with it enabled the packet was dropped for unknown reasons.
Try running docker with -p 1234
"By default, when you create a container, it does not publish any of its ports to the outside world. To make a port available to services outside of Docker, or to Docker containers which are not connected to the container’s network, use the --publish or -p flag."
https://docs.docker.com/config/containers/container-networking/
I'm exploring Docker 17.06.
I've installed Docker on CentOS 7 and created a container, started with the default bridge network. I can ping both host adapters, but not the outside world, e.g. www.google.com.
All the advice out there is based on older versions of Docker and its iptables settings.
I would like to understand what is required to be able to ping the outside world.
TIA!
If you are able to ping www.google.com from the host machine, try the following steps.
Run on the host machine:
sudo ip addr show docker0
You will get output which includes:
inet 172.17.2.1/16 scope global docker0
The docker host has the IP address 172.17.2.1 on the docker0 network interface.
Then start the container:
docker run --rm -it ubuntu:trusty bash
and run:
ip addr show eth0
The output will include:
inet 172.17.1.29/16 scope global eth0
Your container has the IP address 172.17.1.29. Now look at the routing table:
run:
route
output will include:
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default 172.17.2.1 0.0.0.0 UG 0 0 0 eth0
This means the IP address of the docker host, 172.17.2.1, is set as the default route and is accessible from your container.
Now try pinging your host machine's IP:
root@e21b5c211a0c:/# ping 172.17.2.1
PING 172.17.2.1 (172.17.2.1) 56(84) bytes of data.
64 bytes from 172.17.2.1: icmp_seq=1 ttl=64 time=0.071 ms
64 bytes from 172.17.2.1: icmp_seq=2 ttl=64 time=0.211 ms
64 bytes from 172.17.2.1: icmp_seq=3 ttl=64 time=0.166 ms
If this works, most probably you'll be able to ping www.google.com.
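If pinging the host IP works but pinging www.google.com by name still fails, the problem is DNS rather than routing; you can separate the two cases with plain diagnostic pings:
ping -c 1 8.8.8.8        # tests routing to the outside world
ping -c 1 www.google.com # additionally tests DNS resolution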
Hope it helps!
In my case, restarting the Docker daemon helped:
sudo systemctl restart docker
If iptables is not the cause and there is no constraint preventing you from changing the container's network mode, set it to "host" mode. This should solve the issue.
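A minimal sketch of what that looks like (the image name is a placeholder):
docker run --network host <docker-image-name>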
Please verify your existing iptables:
iptables --list
It should show you the list of rules with source and destination details:
target prot opt source destination
DOCKER-USER all -- anywhere anywhere
If it shows anywhere for both source and destination, the container should be able to ping outside IPs (by default it is anywhere).
If not, use this command to adjust the DOCKER-USER chain:
iptables -I DOCKER-USER -i eth0 -s 0.0.0.0/0 -j ACCEPT
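You can then verify the rule was inserted with a standard listing:
iptables -L DOCKER-USER -n -v --line-numbers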
Hope this will help!
I had a similar problem: an API container needed outside connectivity, but the other containers did not. So my option was to add the flag --dns 8.8.8.8 to the docker run command, and with that the container can ping the outside. I consider this a solution for a single container; if you need it for more containers, the other answers may be better. Here is the documentation, and a full example:
docker run -d --rm -p 8080:8080 --dns 8.8.8.8 <docker-image-name>
where:
-d, detached mode, to run the container in the background
--rm, remove the container when it stops (careful: if you are testing and need to inspect logs with docker logs, don't use it)
-p, publish the port ( <host-port>:<container-port> )
--dns, the DNS server the container uses to resolve internet domains
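To confirm the option took effect, you can inspect the container's resolver configuration (standard Docker usage; the container name is a placeholder):
docker exec <container-name> cat /etc/resolv.conf
The output should include a nameserver 8.8.8.8 line.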
Let me first explain what I'm trying to do, as there may be multiple ways to solve this. I have two containers in docker 1.9.0:
node001 (172.17.0.2) (sudo docker run --net=<<bridge or test>> --name=node001 -h node001 --privileged -t -i -v /sys/fs/cgroup:/sys/fs/cgroup <<image>>)
node002 (172.17.0.3) (same command, with node002 substituted)
When I launch them with --net=bridge I get the correct value for SSH_CLIENT when I ssh from one to the other:
[root@node001 ~]# ssh root@172.17.0.3
root@172.17.0.3's password:
[root@node002 ~]# env | grep SSH_CLIENT
SSH_CLIENT=172.17.0.2 56194 22
[root@node001 ~]# ping -c 1 node002
ping: unknown host node002
In docker 1.8.3 I could also use the hostnames I supplied when starting them; in 1.8.3 that last ping statement works!
In docker 1.9.0 nothing is added to /etc/hosts, and the ping statement fails. This is a problem for me. So I tried creating a custom network...
docker network create --driver bridge test
When I launch the two containers with --net=test I get a different value for SSH_CLIENT:
[root@node001 ~]# ssh root@172.18.0.3
root@172.18.0.3's password:
[root@node002 ~]# env | grep SSH_CLIENT
SSH_CLIENT=172.18.0.1 57388 22
[root@node001 ~]# ping -c 1 node002
PING node002 (172.18.0.3) 56(84) bytes of data.
64 bytes from node002 (172.18.0.3): icmp_seq=1 ttl=64 time=0.041 ms
Note that the IP address is not node001's; it seems to belong to the docker host itself.
172.18.0.2 node001
172.18.0.2 node001.test
172.18.0.3 node002
172.18.0.3 node002.test
My current workaround is using docker 1.8.3 with the default bridge network, but I want this to work with future docker versions.
Is there any way I can customize the test network to make it behave similarly to the default bridge network?
Alternatively:
Maybe make the default bridge network write out the /etc/hosts file in docker 1.9.0?
Any help or pointers towards different solutions will be greatly appreciated.
Edit: 21-01-2016
Apparently the problem is fixed in 1.9.1. With the bridge network in docker 1.8 and with a custom network (--net=test) in 1.9.1, the behaviour is now correct:
[root@node001 tmp]# ip route
default via 172.17.0.1 dev eth0
172.17.0.0/16 dev eth0 proto kernel scope link src 172.17.0.5
[root@node002 ~]# env | grep SSH_CLIENT
SSH_CLIENT=172.18.0.3 52162 22
Retried in 1.9.0 to see if I wasn't crazy, and yeah there the problem occurs:
[root@node001 tmp]# ip route
default via 172.18.0.1 dev eth0
172.18.0.0/16 dev eth0 proto kernel scope link src 172.18.0.3
[root@node002 ~]# env|grep SSH_CLI
SSH_CLIENT=172.18.0.1 53734 22
So after removing/stopping/starting the instances the IP addresses were not exactly the same, but it can easily be seen that the SSH_CLIENT source IP is not correct in the last code block. Thanks @sourcejedi for making me re-check.
Firstly, I don't think it's possible to change any settings on the default network, i.e. to make it write /etc/hosts. You apparently can't delete the default networks, so you can't recreate them with different options.
Secondly
Docker is careful that its host-wide iptables rules fully expose containers to each other’s raw IP addresses, so connections from one container to another should always appear to be originating from the first container’s own IP address. docs.docker.com
I tried reproducing your issue with the random containers I've been playing with. Running wireshark on the bridge interface for the network, I didn't see my ping packets. From this I conclude my containers are indeed talking directly to each other; the host was not doing routing and NAT.
You need to check the routes in your client container with ip route. Do you have a route for 172.18.0.0/16? If you only have a default route, the container could be trying to send everything through the docker host, which might get confused and do masquerading as if the container were talking to the outside world.
This might happen if you're running some network configuration in your privileged container. I don't know what's happening if you're just booting it with bash though.
As part of a security test of an iOS app I'm developing, I'd like to verify that it correctly validates SSL/TLS certificates when connecting to various APIs. I installed mitmproxy on my Mac and configured it as a transparent proxy, then configured the WiFi based on this transparent proxy iOS WiFi screenshot. The iPhone takes a very long time to show that it's connected to WiFi, and after it does, it can't access the network. Nothing at all shows up in mitmproxy, including in its event log.
Details of the mitmproxy configuration
I installed mitmproxy 0.11.3 in my system python (and renamed the outdated pyOpenSSL that ships with OSX, so that it uses pyOpenSSL 0.14 as installed with mitmproxy by pip).
I'm using the following script to configure and start pf and mitmproxy:
#!/bin/bash -x
sudo sysctl -w net.inet.ip.forwarding=1
# sudo sysctl -w net.inet.ip.scopedroute=0
## OSX can't change the net.inet.ip.scopedroute kernel flag in user space so I used:
## sudo defaults write "/Library/Preferences/SystemConfiguration/com.apple.Boot" "Kernel Flags" "net.inet.ip.scopedroute=0"
## and then rebooted
sudo defaults read /Library/Preferences/SystemConfiguration/com.apple.Boot
cat > pf.conf << _EOF_
rdr on en0 inet proto tcp to any port 80 -> 127.0.0.1 port 8080
rdr on en0 inet proto tcp to any port 443 -> 127.0.0.1 port 8080
_EOF_
cat pf.conf
sudo pfctl -d
sudo pfctl -f pf.conf
sudo pfctl -e
mitmproxy -T --host
Interface en0 is my Mac's WiFi connection.
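A useful sanity check at this point (not in the original script) is to confirm that pf actually loaded the two rdr rules:
sudo pfctl -s nat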
The output from that script (visible after stopping mitmproxy with control-C) looks like this:
$ ./transparent.sh
+ sudo sysctl -w net.inet.ip.forwarding=1
net.inet.ip.forwarding: 1 -> 1
+ sudo defaults read /Library/Preferences/SystemConfiguration/com.apple.Boot
{
"Kernel Flags" = "net.inet.ip.scopedroute=0";
}
+ cat
+ cat pf.conf
rdr on en0 inet proto tcp to any port 80 -> 127.0.0.1 port 8080
rdr on en0 inet proto tcp to any port 443 -> 127.0.0.1 port 8080
+ sudo pfctl -d
No ALTQ support in kernel
ALTQ related functions disabled
pf disabled
+ sudo pfctl -f pf.conf
pfctl: Use of -f option, could result in flushing of rules
present in the main ruleset added by the system at startup.
See /etc/pf.conf for further details.
No ALTQ support in kernel
ALTQ related functions disabled
+ sudo pfctl -e
No ALTQ support in kernel
ALTQ related functions disabled
pf enabled
+ mitmproxy -T --host
Details of the iOS configuration
I'm using a physical iPhone5s on iOS 8.1 and connecting to the same WiFi network as the Mac. My WiFi configuration looks like this:
I've used 192.168.20.118 because that is the IP address of my Mac on the same WiFi network, which I found using ifconfig:
$ ifconfig
[...]
en0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
ether xx:xx:xx:xx:xx:xx
inet6 fe80::22c9:d0ff:fe84:983b%en0 prefixlen 64 scopeid 0x4
inet 192.168.20.118 netmask 0xffffff00 broadcast 192.168.20.255
nd6 options=1<PERFORMNUD>
media: autoselect
status: active
[...]
I had the same problem. In my case, I turned off the Mac OS firewall in the Settings panel. After that, mitmproxy worked as a transparent proxy.
I ran into the same problem today and solved it by setting the DNS manually (for example, to a public resolver such as 8.8.8.8). I think mitmproxy does not provide DNS.