Below is my dante configuration file:
logoutput: /var/log/socks.log
internal: enp0s3 port = 1080
external: enp0s3
clientmethod: none
socksmethod: none
user.privileged: root
user.notprivileged: nobody
client pass {
from: 0.0.0.0/0 to: 0.0.0.0/0
log: error connect disconnect
}
client block {
from: 0.0.0.0/0 to: 0.0.0.0/0
log: connect error
}
socks pass {
from: 0.0.0.0/0 to: 0.0.0.0/0
log: error connect disconnect
}
socks block {
from: 0.0.0.0/0 to: 0.0.0.0/0
log: connect error
}
It works over IPv4 when IPv6 is disabled. This is how I disabled IPv6:
vim /etc/sysctl.conf
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
net.ipv6.conf.enp0s3.disable_ipv6 = 1
sysctl -p
But when IPv6 is enabled, it does not return the expected result.
This is how I test the connection:
curl -x socks5://user:pass@192.168.1.11:1080 ifconfig.co
Result:
curl: (7) Can't complete SOCKS5 connection to 2606:4700:3036::ac43:85e4:80. (6)
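A likely cause, worth hedging: ifconfig.co publishes both A and AAAA records, and with a socks5:// URL curl resolves the hostname locally, handing dante an IPv6 destination that an IPv4-only external interface cannot reach. Two ways to test this hypothesis:

```shell
# Force curl to resolve only IPv4 addresses before contacting the proxy:
curl -4 -x socks5://user:pass@192.168.1.11:1080 ifconfig.co

# Or use socks5h:// so the hostname is resolved by dante on the server side
# instead of by curl on the client:
curl -x socks5h://user:pass@192.168.1.11:1080 ifconfig.co
```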
Related
I have inserted an iptables rule to block access to my containers from the internet (according to the official docker docs), but now my containers cannot access the internet either.
I run a container on a dedicated server like this:
docker run --name mycontainer --network network1 -d -p 10000:80 someImage
I can access that container from my home network...:
telnet servername.com 10000
... even though I have limited access using ufw:
ufw status
Status: active
To Action From
-- ------ ----
22 ALLOW Anywhere
80/tcp ALLOW Anywhere
...
The ufw output makes no mention of port 10000, and I initially denied all incoming ports.
According to the docs at https://docs.docker.com/network/iptables/, when starting a container with -p, Docker automatically exposes the port through the firewall by manipulating iptables, using rules that are evaluated before ufw rules.
Those docs then go on to suggest blocking incoming requests like this, in the section titled "Restrict connections to the Docker host":
iptables -I DOCKER-USER -i eno ! -s 192.168.1.1 -j DROP
I got the interface name by running route:
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default static-ip-182-1 0.0.0.0 UG 0 0 0 eno
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
172.18.0.0 0.0.0.0 255.255.0.0 U 0 0 0 br-ef37f7b34afa
172.19.0.0 0.0.0.0 255.255.0.0 U 0 0 0 br-979cf8868fcd
182.133.43.627 0.0.0.0 255.255.255.192 U 0 0 0 eno
It works: I can no longer telnet to port 10000 (surprisingly, I didn't need to reload or restart anything).
Unfortunately, I can no longer access the internet from inside a container. The dedicated server can still ping google.com, but my container cannot:
docker exec mycontainer ping google.com
ping: unknown host
That was working before inserting the iptables rule, but doesn't work any more.
Question 1: what do I have to change in ufw / iptables so that my containers can again access the internet (outgoing), but so that I still cannot access my container from the internet (incoming)?
Question 2: what do I have to change in the iptables rule so that it survives a reboot? I noticed that after a reboot, I can again telnet to port 10000.
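Not an authoritative answer, but a sketch consistent with the same Docker docs page: the blanket DROP on eno also discards reply packets for connections the containers opened, and raw iptables rules do not survive a reboot on their own. Assuming a Debian/Ubuntu host (the package names are an assumption):

```shell
# Question 1 sketch: accept reply traffic for established/related flows
# before the blanket DROP (insert at position 1 of DOCKER-USER):
iptables -I DOCKER-USER -i eno -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT

# Question 2 sketch: persist the ruleset across reboots:
apt-get install iptables-persistent   # offers to save the current rules
netfilter-persistent save             # re-save after any later change
```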
UPDATE:
docker network ls
NETWORK ID NAME DRIVER SCOPE
a1e2c7cdbc65 bridge bridge local
218e121af9cd host host local
979cf8868fcd network1 bridge local
cee02cfd1dba none null local
ef37f7b34afa network2 bridge local
I noticed that /etc/resolv.conf is different depending upon whether I check a container inside a user-defined network or one on the default network:
docker run -it --rm --network network1 alpine cat /etc/resolv.conf
nameserver 127.0.0.11
options ndots:0
docker run -it --rm alpine cat /etc/resolv.conf
nameserver 80.237.128.56
nameserver 80.237.128.57
Those two nameservers are the ones that my dedicated server uses.
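For what it's worth, 127.0.0.11 is Docker's embedded DNS server, used on user-defined networks; it forwards queries to the host's resolvers. A hedged way to check whether the breakage is DNS or all outbound traffic:

```shell
# Name resolution through the embedded DNS inside network1:
docker run --rm --network network1 alpine nslookup google.com

# Raw connectivity, bypassing DNS entirely:
docker run --rm --network network1 alpine ping -c 1 8.8.8.8
```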
ufw show raw | grep DROP
Chain INPUT (policy DROP 0 packets, 0 bytes)
Chain FORWARD (policy DROP 0 packets, 0 bytes)
0 0 DROP all -- * docker0 0.0.0.0/0 0.0.0.0/0
0 0 DROP all -- * br-ef37f7b34afa 0.0.0.0/0 0.0.0.0/0
0 0 DROP all -- * br-979cf8868fcd 0.0.0.0/0 0.0.0.0/0
14262 1201304 DROP all -- eno * !127.0.0.1 0.0.0.0/0
173 37057 DROP all -- * * 0.0.0.0/0 0.0.0.0/0 ctstate INVALID
0 0 DROP all -- * * 0.0.0.0/0 0.0.0.0/0
0 0 DROP all -- * * 0.0.0.0/0 0.0.0.0/0
427 35720 DROP all -- * * 0.0.0.0/0 0.0.0.0/0
Chain INPUT (policy DROP 0 packets, 0 bytes)
Chain FORWARD (policy DROP 0 packets, 0 bytes)
0 0 DROP all * * ::/0 ::/0 rt type:0
0 0 DROP all * * ::/0 ::/0 rt type:0
0 0 DROP all * * ::/0 ::/0 ctstate INVALID
0 0 DROP all * * ::/0 ::/0 rt type:0
0 0 DROP all * * ::/0 ::/0
0 0 DROP all * * ::/0 ::/0
A little more detail around those drops and Docker generally:
Chain DOCKER (3 references)
pkts bytes target prot opt in out source destination
2 120 ACCEPT tcp -- !br-979cf8868fcd br-979cf8868fcd 0.0.0.0/0 172.19.0.2 tcp dpt:3306
1 60 ACCEPT tcp -- !br-ef37f7b34afa br-ef37f7b34afa 0.0.0.0/0 172.18.0.2 tcp dpt:18088
0 0 ACCEPT tcp -- !br-979cf8868fcd br-979cf8868fcd 0.0.0.0/0 172.19.0.3 tcp dpt:80
0 0 ACCEPT tcp -- !br-ef37f7b34afa br-ef37f7b34afa 0.0.0.0/0 172.18.0.3 tcp dpt:3306
Chain DOCKER-ISOLATION-STAGE-1 (1 references)
pkts bytes target prot opt in out source destination
19 1164 DOCKER-ISOLATION-STAGE-2 all -- docker0 !docker0 0.0.0.0/0 0.0.0.0/0
14533 1228562 DOCKER-ISOLATION-STAGE-2 all -- br-ef37f7b34afa !br-ef37f7b34afa 0.0.0.0/0 0.0.0.0/0
168 14646 DOCKER-ISOLATION-STAGE-2 all -- br-979cf8868fcd !br-979cf8868fcd 0.0.0.0/0 0.0.0.0/0
25893 4430799 RETURN all -- * * 0.0.0.0/0 0.0.0.0/0
Chain DOCKER-ISOLATION-STAGE-2 (3 references)
pkts bytes target prot opt in out source destination
0 0 DROP all -- * docker0 0.0.0.0/0 0.0.0.0/0
0 0 DROP all -- * br-ef37f7b34afa 0.0.0.0/0 0.0.0.0/0
0 0 DROP all -- * br-979cf8868fcd 0.0.0.0/0 0.0.0.0/0
14720 1244372 RETURN all -- * * 0.0.0.0/0 0.0.0.0/0
Chain DOCKER-USER (1 references)
pkts bytes target prot opt in out source destination
14465 1218356 DROP all -- eno2 * !127.0.0.1 0.0.0.0/0
25893 4430799 RETURN all -- * * 0.0.0.0/0 0.0.0.0/0
ufw show raw | grep REJECT # contains ip addresses from fail2ban
0 0 REJECT all -- * * 85.202.169.48 0.0.0.0/0 reject-with icmp-port-unreachable
25 1904 REJECT all -- * * 112.85.42.128 0.0.0.0/0 reject-with icmp-port-unreachable
19 1664 REJECT all -- * * 61.177.172.89 0.0.0.0/0 reject-with icmp-port-unreachable
23 1884 REJECT all -- * * 112.85.42.74 0.0.0.0/0 reject-with icmp-port-unreachable
21 1764 REJECT all -- * * 112.85.42.15 0.0.0.0/0 reject-with icmp-port-unreachable
23 1884 REJECT all -- * * 112.85.42.87 0.0.0.0/0 reject-with icmp-port-unreachable
22 1748 REJECT all -- * * 122.194.229.54 0.0.0.0/0 reject-with icmp-port-unreachable
19 1644 REJECT all -- * * 122.194.229.45 0.0.0.0/0 reject-with icmp-port-unreachable
22 1844 REJECT all -- * * 218.92.0.221 0.0.0.0/0 reject-with icmp-port-unreachable
18 1104 REJECT all -- * * 112.85.42.88 0.0.0.0/0 reject-with icmp-port-unreachable
0 0 REJECT all -- * * 0.0.0.0/0 0.0.0.0/0 reject-with icmp-port-unreachable
0 0 REJECT all * * ::/0 ::/0 reject-with icmp6-port-unreachable
I am launching a Docker container that supports three networks: default bridge (OAM), bridge (Data In), and macvlan (Data Out). The problem is that the Docker provider seems to set up all the defined ports on the default bridge. How do I tell Terraform to bind the defined ports to a specific Docker network?
The .tf file snippet:
ports {
# Data-In (Bridge)
internal = 881
external = 881
}
ports {
# SSH Access on Default Bridge
internal = 22
external = 222
}
networks_advanced {
name = "bridge"
}
networks_advanced {
name = "data-in-net"
}
networks_advanced {
name = "data-out-net"
}
The Docker networks:
# docker network list
NETWORK ID NAME DRIVER SCOPE
1c2441b0b530 bridge bridge local
c68892f0c6e5 host host local
bb45d9dcbad1 none null local
a318d3bf3075 data-out-net macvlan local
af806334c7bf data-in-net bridge local
Port 222 works, port 881 does not.
IPTables from the host OS running Docker:
Chain DOCKER (2 references)
target prot opt source destination
ACCEPT tcp -- anywhere 172.17.0.2 tcp dpt:9000
ACCEPT tcp -- anywhere 172.17.0.2 tcp dpt:8000
ACCEPT tcp -- anywhere 192.168.32.2 tcp dpt:601
ACCEPT udp -- anywhere 192.168.32.2 udp dpt:syslog
ACCEPT tcp -- anywhere 172.17.0.3 tcp dpt:881
ACCEPT tcp -- anywhere 172.17.0.3 tcp dpt:https
ACCEPT tcp -- anywhere 172.17.0.3 tcp dpt:ssh
The tcp dpt:881 line needs to have a destination of 192.168.32.3. The syslog container ONLY uses the Data-In Docker Network, and thus has a correct IP address.
Any suggestions/workarounds? Thanks!
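One possible workaround, sketched outside Terraform (the container and image names are placeholders): published ports are DNATed to the container's address on the network it is attached to at creation, so creating the container on data-in-net first and connecting the other networks afterwards may produce the desired binding:

```shell
# Create (but don't start) the container attached only to data-in-net:
docker create --name mycontainer --network data-in-net \
  -p 881:881 -p 222:22 someImage

# Attach the remaining networks, then start:
docker network connect bridge mycontainer       # OAM / SSH access
docker network connect data-out-net mycontainer
docker start mycontainer
```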
I have a simple Python application which stores and searches its data in an Elasticsearch instance. The Python application runs in its own container, as does Elasticsearch.
Elasticsearch exposes its default ports 9200 and 9300; the Python application exposes port 5000. The network type used for Docker is a user-defined bridge network.
When I start both containers, the application starts up nicely; both containers see each other by container name and communicate just fine.
But from the Docker host (Linux) it's not possible to connect to the exposed port 5000: a simple curl http://localhost:5000/ results in a time-out. The Docker tips from this documentation, https://docs.docker.com/network/bridge/, did not solve this.
After a lot of struggling I tried something completely different: connecting to the Python application from outside the Docker host. I was baffled; from anywhere in the world I could do curl http://<fqdn>:5000/ and was served the application.
So that means the real problem is solved, because I'm able to serve the application to the outside world. (And yes, the application inside the container listens on 0.0.0.0, which is the solution for 90% of the network problems reported by others.)
But that still leaves me puzzled: what causes this strange behavior? On my development machine (Windows 10, WSL, Docker Desktop, Linux containers) I am able to connect to the service on localhost:5000, 127.0.0.1:5000, etc. On my Linux (production) machine everything works except connecting from the Docker host to the containers.
I hope someone can shed some light on this; I'm trying to understand why this is happening.
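Two hedged checks that may narrow this down: confirm which addresses docker-proxy binds on the host, and try reaching the service via the bridge gateway address (172.18.0.1, taken from the ifconfig output) instead of localhost:

```shell
# Which sockets hold the published port on the host?
ss -ltnp | grep ':5000'

# Does the bridge gateway address work where localhost times out?
curl http://172.18.0.1:5000/
```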
Some relevant information
Docker host:
# ifconfig -a
br-77127ce4b631: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.18.0.1 netmask 255.255.0.0 broadcast 172.18.255.255
[snip]
docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255
[snip]
ens3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 1xx.1xx.199.134 netmask 255.255.255.0 broadcast 1xx.1xx.199.255
# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1e7f2f7a271b pplbase_api "flask run --host=0.…" 20 hours ago Up 19 hours 0.0.0.0:5000->5000/tcp pplbase_api_1
fdfa10b1ce99 elasticsearch:7.5.1 "/usr/local/bin/dock…" 21 hours ago Up 19 hours 0.0.0.0:9200->9200/tcp, 0.0.0.0:9300->9300/tcp pplbase_elastic_1
# docker network ls
NETWORK ID NAME DRIVER SCOPE
[snip]
77127ce4b631 pplbase_pplbase bridge local
# iptables -L -n
[snip]
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:5000
Chain FORWARD (policy ACCEPT)
target prot opt source destination
DOCKER-USER all -- 0.0.0.0/0 0.0.0.0/0
DOCKER-ISOLATION-STAGE-1 all -- 0.0.0.0/0 0.0.0.0/0
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
DOCKER all -- 0.0.0.0/0 0.0.0.0/0
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
DOCKER all -- 0.0.0.0/0 0.0.0.0/0
Chain DOCKER (2 references)
target prot opt source destination
ACCEPT tcp -- 0.0.0.0/0 172.18.0.2 tcp dpt:9300
ACCEPT tcp -- 0.0.0.0/0 172.18.0.2 tcp dpt:9200
ACCEPT tcp -- 0.0.0.0/0 172.18.0.3 tcp dpt:5000
Chain DOCKER-ISOLATION-STAGE-1 (1 references)
target prot opt source destination
DOCKER-ISOLATION-STAGE-2 all -- 0.0.0.0/0 0.0.0.0/0
DOCKER-ISOLATION-STAGE-2 all -- 0.0.0.0/0 0.0.0.0/0
RETURN all -- 0.0.0.0/0 0.0.0.0/0
Chain DOCKER-ISOLATION-STAGE-2 (2 references)
target prot opt source destination
DROP all -- 0.0.0.0/0 0.0.0.0/0
DROP all -- 0.0.0.0/0 0.0.0.0/0
RETURN all -- 0.0.0.0/0 0.0.0.0/0
Chain DOCKER-USER (1 references)
target prot opt source destination
RETURN all -- 0.0.0.0/0 0.0.0.0/0
Docker compose file:
version: '3'
services:
api:
build: .
links:
- elastic
ports:
- "5000:5000"
networks:
- pplbase
environment:
- ELASTIC_HOSTS=elastic localhost
- FLASK_APP=app.py
- FLASK_ENV=development
- FLASK_DEBUG=0
tty: true
elastic:
image: "elasticsearch:7.5.1"
ports:
- "9200:9200"
- "9300:9300"
networks:
- pplbase
environment:
- discovery.type=single-node
volumes:
- ${PPLBASE_STORE}:/usr/share/elasticsearch/data
networks:
pplbase:
driver: bridge
After more digging, the riddle is getting bigger and bigger. Using netcat I can establish a connection:
Connection to 127.0.0.1 5000 port [tcp/*] succeeded!
Checking with netstat when no clients are connected, I see:
tcp6 0 0 :::5000 :::* LISTEN 27824/docker-proxy
While trying to connect from the Docker host, the connection is made:
tcp 0 1 172.20.0.1:56866 172.20.0.3:5000 SYN_SENT 27824/docker-proxy
tcp6 0 0 :::5000 :::* LISTEN 27824/docker-proxy
tcp6 0 0 ::1:58900 ::1:5000 ESTABLISHED 31642/links
tcp6 592 0 ::1:5000 ::1:58900 ESTABLISHED 27824/docker-proxy
So I'm now suspecting some network voodoo on the Docker host.
The Flask instance is running at 0.0.0.0:5000.
Have you tried: curl http://0.0.0.0:5000/?
It might be that your host configuration maps localhost to 127.0.0.1 rather than 0.0.0.0.
So, as I was working on this problem, slowly moving towards a solution, I found my last suggestion was right after all. In the firewall (iptables) I logged all dropped packets, and yes: the packets between the Docker bridge (not docker0, but br-*) and the container (veth) were being dropped by iptables. Adding a rule allowing traffic between those interfaces resolved the problem.
In my case: sudo iptables -I INPUT 3 -s 172.20.0.3 -d 172.20.0.1 -j ACCEPT
Where 172.20.0.0/16 is the bridge network generated by Docker.
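For anyone reproducing this, the "logged all dropped packets" step above can be sketched like this (rule placement is an assumption; put the LOG rule just before wherever your drop actually happens):

```shell
# Log whatever reaches the end of INPUT before the policy/ufw drop:
iptables -A INPUT -j LOG --log-prefix "input-drop: " --log-level 4

# Watch the matches in the kernel log:
journalctl -kf | grep 'input-drop'
```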
I use docker service to set up a container network, and I just opened port 7053 for a target IP and exposed it to the host.
When I check iptables with iptables -nvL, I see the FORWARD chain:
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
0 0 DROP tcp -- * * 0.0.0.0/0 172.18.0.2 tcp dpt:7053
1680K 119M DOCKER-ISOLATION all -- * * 0.0.0.0/0 0.0.0.0/0
1680K 119M DOCKER all -- * br-287ce7f19804 0.0.0.0/0 0.0.0.0/0
1680K 119M ACCEPT all -- * br-287ce7f19804 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
and the DOCKER chain:
Chain DOCKER (4 references)
pkts bytes target prot opt in out source destination
0 0 ACCEPT tcp -- !br-287ce7f19804 br-287ce7f19804 0.0.0.0/0 172.18.0.2 tcp dpt:7053
0 0 ACCEPT tcp -- !br-287ce7f19804 br-287ce7f19804 0.0.0.0/0 172.18.0.2 tcp dpt:7051
0 0 ACCEPT tcp -- !br-287ce7f19804 br-287ce7f19804 0.0.0.0/0 172.18.0.3 tcp dpt:2181
0 0 ACCEPT tcp -- !br-287ce7f19804 br-287ce7f19804 0.0.0.0/0 172.18.0.4 tcp dpt:7053
0 0 ACCEPT tcp -- !br-287ce7f19804 br-287ce7f19804 0.0.0.0/0 172.18.0.4 tcp dpt:7051
0 0 ACCEPT tcp -- !br-287ce7f19804 br-287ce7f19804 0.0.0.0/0 172.18.0.6 tcp dpt:7053
0 0 ACCEPT tcp -- !br-287ce7f19804 br-287ce7f19804 0.0.0.0/0 172.18.0.6
And I want to block the container 172.18.0.2 on its port 7053, so I used sudo iptables -I FORWARD -p tcp -d 172.18.0.2 --dport 7053 -j DROP.
But it doesn't work.
So, what should I do to block the target IP and port?
The following should work:
iptables -I DOCKER 1 -p tcp --dport 7053 -j DROP
This will insert the DROP rule before all the other rules in the DOCKER chain.
The following is a useful command as well:
iptables --list DOCKER -n --line
If you also add -v (verbose), you get more detail.
By now, you probably have your answer, but it may help others.
I have created a docker host on OpenStack and launched a container with its port 22 mapped to a port on the docker host. I followed this link.
Still I can't ssh from the docker host to the container. It gives this error:
$> ssh -v root@172.17.0.9 -p 32775
OpenSSH_6.6.1, OpenSSL 1.0.1e-fips 11 Feb 2013
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 56: Applying options for *
debug1: Connecting to 172.17.0.9 [172.17.0.9] port 32775.
debug1: connect to address 172.17.0.9 port 32775: Connection refused
ssh: connect to host 172.17.0.9 port 32775: Connection refused
An iptables rule was added by default when I used the -P option in docker run. It looks like this:
$> iptables -t nat -L -n
Chain PREROUTING (policy ACCEPT)
target prot opt source destination
DOCKER all -- 0.0.0.0/0 0.0.0.0/0 ADDRTYPE match dst-type LOCAL
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
DOCKER all -- 0.0.0.0/0 !127.0.0.0/8 ADDRTYPE match dst-type LOCAL
Chain POSTROUTING (policy ACCEPT)
target prot opt source destination
MASQUERADE all -- 172.17.0.0/16 0.0.0.0/0
MASQUERADE tcp -- 172.17.0.3 172.17.0.3 tcp dpt:80
MASQUERADE tcp -- 172.17.0.9 172.17.0.9 tcp dpt:22
Chain DOCKER (2 references)
target prot opt source destination
RETURN all -- 0.0.0.0/0 0.0.0.0/0
DNAT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:9090 to:172.17.0.3:80
DNAT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:32775 to:172.17.0.9:22
And the container looks like this:
$> docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
46111bb52063 sshns "/usr/sbin/sshd -D" 9 hours ago Up 3 hours 0.0.0.0:32776->22/tcp TestSSHcontainer
I need to have ssh for my purposes; I'm aware of the docker exec option. I tried changes like PermitRootLogin yes in sshd_config and ssh_config on both the docker host and the container, with no success.
bash-4.2# /usr/sbin/sshd -Dd
WARNING: 'UsePAM no' is not supported in Red Hat Enterprise Linux and may cause several problems.
debug1: sshd version OpenSSH_6.6.1, OpenSSL 1.0.1e-fips 11 Feb 2013
debug1: key_parse_private2: missing begin marker
debug1: read PEM private key done: type RSA
debug1: private host key: #0 type 1 RSA
debug1: key_parse_private2: missing begin marker
debug1: read PEM private key done: type ECDSA
debug1: private host key: #1 type 3 ECDSA
debug1: private host key: #2 type 4 ED25519
debug1: rexec_argv[0]='/usr/sbin/sshd'
debug1: rexec_argv[1]='-Dd'
Set /proc/self/oom_score_adj from 0 to -1000
debug1: Bind to port 22 on ::.
Bind to port 22 on :: failed: Address already in use.
debug1: Bind to port 22 on 0.0.0.0.
Bind to port 22 on 0.0.0.0 failed: Address already in use.
Cannot bind any address.
bash-4.2# netstat -anp | grep 22
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN -
tcp6 0 0 :::22 :::* LISTEN -
bash-4.2# ps -eaf | grep ssh
root 1 0 0 19:17 ? 00:00:00 /usr/sbin/sshd -D
root 26 16 0 22:58 ? 00:00:00 grep ssh
Is there something that I'm still missing?
You're using the IP of your container together with the host port mapping of the container. Try either ssh -v root@172.17.0.9 (port 22) or ssh -v root@localhost -p <port_mapping_on_host> (your docker ps -a shows the port mapping on the host is 32776).