I run the command netstat -a | grep 7777 and do not see port 7777. If I run it with the -an option I do see it, but with -n alone I don't see it. Why is this? Thanks.
From the man pages for netstat:
-a, --all
Show both listening and non-listening (for TCP this means established connections)
sockets. With the --interfaces option, show interfaces that are not up
--numeric-ports
shows numerical port numbers but does not affect the resolution of host or user
names.
You need -n to show IP addresses instead of resolved DNS names and numeric ports instead of service names, and you need -a to show listening sockets as well as established connections (listening sockets are not shown without -a). That is why only -an matches: with -a alone the port is printed as a service name rather than 7777, so grep finds nothing, and with -n alone the listening socket is not listed at all.
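For example, a quick check (assuming the service is listening on TCP port 7777):
netstat -an | grep 7777
Or, restricting the output to listening TCP sockets with numeric ports:
netstat -tln | grep 7777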
I've been wondering why a Docker installation does not enable port forwarding to containers by default.
To save you a click, what I mean is:
$ sysctl net.ipv4.conf.all.forwarding=1
$ sudo iptables -P FORWARD ACCEPT
I assume it is some sort of security risk, but I just wonder what the risk is.
Basically I want to create a piece of code that enables this by default, but I want to know what bad things can happen.
I googled this and couldn't find anything.
Generally, a default FORWARD policy of ACCEPT seems to be considered too permissive (?)
If so, what can I change to make this more secure?
My network is rather simple: a bunch of PCs on a local LAN (10.0.0.0/24) with an OpenVPN server. Those PCs may run Docker hosts (I'm doing this by hand, not using Docker Compose or Swarm or anything, because nodes change) that need to see each other, so there is no real outside access. Another detail is that I am not using a network overlay, which I could do without Swarm, but the writer of the post warns it could be deprecated soon, so I also wonder if I should just start using Docker Swarm straight away.
EDIT: My question here is maybe more theoretical than it may seem at first. I want to know why they decided not to do this. I pretty much need/want full communication between Docker instances: they need to be SSH'd into and to open up a bunch of different ports to talk to each other (and this is the limit of my networking knowledge; I don't know how this really works. I suppose they are all high ports, but are those also blocked by Docker?). I am not sure Docker Swarm would help me much here either. It is aimed at micro-services, whereas I may need interactive sessions from time to time, but this is probably asking too much in a single question.
Maybe the simplest version of this question is: "if I run that code above as a script each time my computer boots up, how could someone abuse it?"
Each Docker container runs on a local bridge network, with IPs generally in the 172.1x.x.x range (172.17.0.0/16 for the default bridge). You can get a container's IP address by running:
docker inspect <container name> | jq -r ".[].NetworkSettings.Networks[].IPAddress"
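For instance, you could capture it in a shell variable for later use; the container name mycontainer below is just a placeholder:
CONTAINER_IP=$(docker inspect mycontainer | jq -r ".[].NetworkSettings.Networks[].IPAddress")
echo "$CONTAINER_IP"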
You can either run your container publishing the specific container ports on the host running the containers (using the -p option), or you can use iptables to redirect traffic arriving on a host port to the container:
iptables -t nat -I PREROUTING -i <incoming interface> -p tcp -m tcp --dport <host listening port> -j DNAT --to-destination <container ip address>:<container port>
Change tcp to udp if the port is listening on a UDP socket.
If you want to redirect all traffic you can still use the same approach, but may need to specify a secondary ip address on your host (e.g., 192.168.1.x) and redirect any traffic coming to that address to your container.
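As a concrete sketch with made-up values (eth0 as the incoming interface, a container at 172.17.0.2 serving HTTP on port 80, published on host port 8080), the two approaches look like this:
# Option 1: publish the port when starting the container
docker run -d -p 8080:80 nginx
# Option 2: DNAT incoming traffic on host port 8080 to the container
iptables -t nat -I PREROUTING -i eth0 -p tcp -m tcp --dport 8080 -j DNAT --to-destination 172.17.0.2:80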
I have a docker nginx image running with the command
sudo docker run -p 8002:80 nginx
What I am wondering is how the routing works to get from hostmachine:8002 to the container listening on port 80. Usually there are iptables rules that very explicitly do that, but if you disable iptables it still works. Then I noticed that there is a docker-proxy process listening on each exposed port, which I assumed does the proxying/NAT. So I disabled the userland proxy with --userland-proxy=false. After doing that I now see only one process, docker-current, still listening on all exposed ports. I can only assume that the docker-current process is doing the NAT, but that makes me wonder why the userland proxy and/or iptables are ever there. And is there a way that I can see/prove to myself where the NAT'ing is happening (i.e. turn something on/off so that I can't curl my nginx container, and then can)?
Just for completeness I will answer my own questions.
The iptables rules are there to save time by NAT'ing in kernel space rather than sending packets up to userspace to do the NAT'ing. You can kill the process that is listening on the port and remove the iptables NAT rules, and the traffic no longer gets a response, which is how I (among other tests) was able to prove to myself what was going on.
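A rough way to see this for yourself (port 8002 comes from the example above; chain names may differ on your system):
# Show Docker's NAT rules for published ports
sudo iptables -t nat -L DOCKER -n -v
# Show what is listening on the published host port
sudo ss -tlnp | grep :8002
# Check whether a userland proxy process is running
ps aux | grep docker-proxy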
I have a Rails server set up on a CentOS machine with a static IP that's accessible to the outside network.
If I go to http://my.ip.address on that machine, it works fine and I can see my rails server and the access is logged in /var/log/httpd/access_log
However, if I do the same thing on another computer, the connection times out and I don't see the access in the access_log.
netstat shows that httpd is listening on port 80, so as far as I can tell, everything seems to be working fine.
What else could be blocking this connection if it's not the network blocking the access?
You probably need to start the rails server with -b 0.0.0.0 (older versions do not require this).
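For example (the port here is the Rails default and just an illustration):
rails server -b 0.0.0.0 -p 3000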
Aetherus was correct. CentOS was blocking port 80 by default. I followed the guide in his link and was able to solve the problem.
For future users, these are the commands that fixed the problem:
iptables -I INPUT 5 -i eth0 -p tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT
service iptables save
The first command inserts a rule (at position 5) into your iptables configuration that accepts traffic on port 80. The second command saves the configuration so it persists across reboots.
Note that if you have any iptables configuration other than the default, you may need to adjust the command so it inserts at a position other than 5. In this case, position 5 is used because it places the rule above the final REJECT rule, which sits at position 5 in the default configuration.
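To find the right position on your own system, you can first list the INPUT chain with rule numbers (this only inspects the chain; the numbering will vary):
iptables -L INPUT -n --line-numbers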
I'm playing with coreos and digitalocean, and I'd like to start allowing internal communication between my containers.
I've got private networking set up for all the hosts, and now I'd like to ensure that some containers only open ports to localhost and to the internal interface.
I've explored a lot of options for this, but none of them seem satisfactory:
Using the '-p' option with an address, I can ensure Docker binds only to the internal interface, but this has two downsides:
I can't easily test services by SSHing in, because that traffic originates from localhost
I need to write somewhat hacky shell scripts to start my services, in order to inject the address of the machine that the container is running on
I tried using flannel, but it doesn't make the traffic private (or I didn't set it up right)
I considered using iptables on the containers to prevent external access, but that doesn't seem as secure
I tried using iptables on the coreos hosts, but ... it's tricky, and I couldn't get it working.
When I tried to configure iptables on the host, I used the method described here: https://docs.docker.com/articles/networking/#communication-between-containers-and-the-wider-world, adding a DROP rule to the DOCKER chain, but it didn't work and packets still got through.
So what's the best approach? I'll invest time in making it work.
Overall, I guess I need to find something that:
Can be rolled out to all the hosts reliably
Is reasonably flexible going forward
Allows for 'edge machines' which are accessible from the wider internet.
Solution
I'll go into how I ended up solving this. Thanks to larsks for their help; in the end, their approach was the correct one. It's tricky on CoreOS, because there aren't really stable addresses, as larsks assumes. The whole point of CoreOS is to be able to forget about IP addresses.
I solved this by finding a not-too-bad way to inject the IP address into the command in the service file. The tricky thing is that systemd service files don't really support a lot of the shell features I expected. What I wanted to do was assign the machine's IP address to a variable and then inject it into the command:
ip=$(ifconfig eth1 | grep -o 'inet [0-9]*\.[0-9]*\.[0-9]*\.[0-9]*' | grep -o '[0-9]*\.[0-9]*\.[0-9]*\.[0-9]*');
/usr/bin/docker run -p $ip:7000:7000 ...
But, as mentioned, that doesn't work. So what to do? Get the shell!
ExecStart=/usr/bin/sh -c "\
export ip=$(ifconfig eth1 | grep -o 'inet [0-9]*\.[0-9]*\.[0-9]*\.[0-9]*' | grep -o '[0-9]*\.[0-9]*\.[0-9]*\.[0-9]*');\
echo $ip;\
/usr/bin/docker run -p $ip:7000:7000"
I hit a few problems along the way.
I'm pretty sure there aren't newlines in that command, so I had to add the ';' characters
when you test the above sh -c command in a shell, it has very different effects from when systemd runs it. In the shell you need to escape the '$' characters, while in systemd config files, you don't.
I included the echo so that I could see what the command thought the ip was.
When I was doing all this, I actually inserted a small webserver to the docker image, so that I could just test using curl.
The downside of this approach is that it's tied to the way ifconfig formats its output, and to IPv4. In fact, this approach doesn't work on my Linux Mint laptop, where ifconfig produces differently formatted output. The important lesson here is to use tools that output YAML or JSON, so that shell JSON tools can access things more easily.
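For example, on systems where iproute2 supports JSON output, a sketch of the same lookup using ip and jq (assuming the interface is still eth1 and has a single IPv4 address) would be:
ip -json addr show eth1 | jq -r '.[0].addr_info[] | select(.family == "inet") | .local'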
Instead of grepping for the IP address, you can use the environment file to get the IP addresses (both public and private) of the host the service gets scheduled on. This lets you bind your container ports to either the public or the private address in a simple way.
Like so:
[Service]
EnvironmentFile=/etc/environment
ExecStart=/usr/bin/docker run --name myservice \
-p ${COREOS_PUBLIC_IPV4}:80:80 \
-p ${COREOS_PRIVATE_IPV4}:3306:3306 \
ubuntu /bin/bash
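On a CoreOS host, /etc/environment typically contains something like the following (the addresses below are placeholders):
COREOS_PUBLIC_IPV4=203.0.113.10
COREOS_PRIVATE_IPV4=10.0.0.10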
I've got private networking set up for all the hosts, and now I'd like
to ensure that some containers only open ports to localhost and to the
internal interface.
This is exactly the behavior that you get with the -p option when you specify an ip address. Let's say I have a host with two external interfaces, eth0 (with address 10.0.0.10) and eth1 (with address 192.168.0.10), and the docker0 bridge at 172.17.42.1/16.
If I start a container like this:
docker run -p 192.168.0.10:80:80 -d larsks/mini-httpd
This will start a container that is accessible over the eth1 interface at 192.168.0.10, port 80. This service is also accessible -- from the host on which the container is located -- at the address assigned to the container on the docker0 network. This would be something like 172.17.0.39, port 80.
This seems to meet your goals:
The container port is exposed over the "private" eth1 interface.
The container port is accessible from the host.
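To convince yourself of this from the host, a quick check against the example above should work (172.17.0.39 is the illustrative container address from this answer):
# reachable on the private interface
curl http://192.168.0.10/
# reachable from the host via the container's docker0 address
curl http://172.17.0.39/
# and the published port is bound only to 192.168.0.10, not 0.0.0.0
ss -tlnp | grep :80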
I can't easily test services by SSHing in, because that traffic originates from localhost.
If you were running ssh inside a container, you would ssh to it at the "internal" address assigned by Docker. But if you are running ssh inside your containers, you may want to consider not doing that, and relying on tools like docker exec instead.
I need to write somewhat hacky shell scripts to start my services, in order to inject the address of the machine that the container is running on
With this solution, there is no need to inject the machine ip into the container.
I have a Windows service that binds to a TCP port; this port is used for IPC within my application.
Is there a programmatic way (WinAPI/Winsock, etc.) to know which application connected to my port?
I.e., in my Windows service I would like to get the PID of the process that connected to my port.
If you're looking for a WinAPI way of doing the same thing as netstat, you probably want the following API:
GetExtendedTcpTable
Call it with the TCP_TABLE_OWNER_PID_ALL argument.
The resulting MIB_TCPTABLE_OWNER_PID structure contains an array of MIB_TCPROW_OWNER_PID structures, each of which has a dwOwningPid member, which is the process ID you are looking for.
If you mean which process is using (listening on or connected through) your ports, use the following command:
netstat -a -b -o -n
-a will show you all connections (even those in the LISTENING state)
-b will show you the application executable that uses that port
-o will show you the PID of the application
-n will skip DNS resolution (you probably don't need resolved names to identify the application, so it's not strictly necessary, just faster)
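As a quick example of tying a port back to a process from the command line (port 7777 and PID 1234 are placeholders; use whatever PID the netstat output reports):
netstat -ano | findstr :7777
tasklist /FI "PID eq 1234"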