I have a Windows service that binds to a TCP port; this port is used for IPC between my applications.
Is there a programmatic way (WinAPI/Winsock, etc.) to know which application is connected to my port?
i.e., in my Windows service I would like to get the PID of the process that connected to my port.
If you're looking for a WinAPI way of doing the same thing as netstat, you probably want the following API:
GetExtendedTcpTable
Call it with the TCP_TABLE_OWNER_PID_ALL argument and look at the results.
The resulting MIB_TCPTABLE_OWNER_PID structure contains an array of MIB_TCPROW_OWNER_PID structures, each of which has a dwOwningPid member holding the process ID you are looking for.
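A rough sketch of how that fits together (untested; IPv4 only, error handling mostly omitted, and PrintConnectedClientPids is just an illustrative name). Because the peers are local processes doing IPC, each client's own side of the connection also appears in the local TCP table, so its PID can be read from the rows whose remote port is your listening port. Link against Iphlpapi.lib and Ws2_32.lib:

#include <winsock2.h>
#include <ws2tcpip.h>
#include <iphlpapi.h>
#include <stdio.h>
#include <stdlib.h>

// Print the PID of every local process with an established TCP connection
// to serverPort (i.e. the clients connected to our listening socket).
void PrintConnectedClientPids(USHORT serverPort)
{
    DWORD size = 0;
    // First call only asks for the required buffer size.
    GetExtendedTcpTable(NULL, &size, FALSE, AF_INET, TCP_TABLE_OWNER_PID_ALL, 0);

    MIB_TCPTABLE_OWNER_PID *table = (MIB_TCPTABLE_OWNER_PID *)malloc(size);
    if (table == NULL)
        return;

    if (GetExtendedTcpTable(table, &size, FALSE, AF_INET,
                            TCP_TABLE_OWNER_PID_ALL, 0) == NO_ERROR)
    {
        for (DWORD i = 0; i < table->dwNumEntries; i++)
        {
            MIB_TCPROW_OWNER_PID *row = &table->table[i];
            // Port fields hold the port in network byte order in the low 16 bits.
            if (row->dwState == MIB_TCP_STATE_ESTAB &&
                ntohs((USHORT)row->dwRemotePort) == serverPort)
            {
                printf("client PID: %lu\n", row->dwOwningPid);
            }
        }
    }
    free(table);
}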
If you mean which process is using (listening on or connected to) your ports, use the following command:
netstat -a -b -o -n
-a will show you all connections (even those in the LISTENING state)
-b will show you the application executable that uses that port
-o will show you the PID of the application
-n will skip DNS resolution (you probably don't need it just to identify the application), so it's optional
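For example, to narrow that output to a single port (7777 here is just a placeholder; note that -b needs an elevated prompt, and the executable name it prints goes on a separate line, so only the PID column survives the filter):

netstat -a -b -o -n | findstr :7777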
I'm building a development Docker image I intend to run on my local machine. On this image I want to put two programs; I'll call them progA and progB. I am not the author of these programs, so I cannot change how they communicate. Both programs can only send & receive data on stdin/stdout.
I have a third program—progC—that I want to run on my host. progC needs to communicate with both progA and progB independently (meaning progC⇔progA and progC⇔progB) using stdin/stdout.
While I'm definitely a n00b when it comes to socat, from what I've read I feel like this should be possible. This is my mental model so far:
Inside the container: Establish a bidirectional connection between progA and a TCP port. Do the same for progB using a different port.
On the host: Run the Docker container publishing the ports to the host. Have a local script that—when invoked—binds the ports to stdin/stdout. There will be a script for progA and another for progB. progC will control when either script is invoked, and the binding created from the script should remain open and active until progC terminates the script.
Is this possible? If so, how? Is this advisable? If not, is there a better way to accomplish the same goal?
Think I figured it out:
On the provider (container) side (for progA):
socat -dd SYSTEM:progA TCP-LISTEN:3344,forever,reuseaddr
On the consumer (host) side:
socat -!!STDOUT TCP:localhost:3344
I plop that second command into progC as the command it needs to run to talk to progA over stdin, and it works! socat is pretty magical!
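For reference, the host-side "local script" from the original plan can then be a tiny wrapper around that consumer command (a sketch; progA-bridge.sh is a made-up name, and it assumes the container was started with the port published, e.g. with -p 3344:3344):

#!/bin/sh
# progA-bridge.sh: hypothetical wrapper that progC invokes to talk to progA
# over stdin/stdout, assuming the container publishes port 3344 on the host.
exec socat -!!STDOUT TCP:localhost:3344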
I execute the command netstat -a | grep 7777 and do not see port 7777. I try the command with the option -an and see it; I try it with the option -n and again don't see it. Please tell me why this is so. Thanks.
From the man pages for netstat:
-a, --all
    Show both listening and non-listening (for TCP this means established connections) sockets. With the --interfaces option, show interfaces that are not up.
--numeric-ports
    Shows numerical port numbers but does not affect the resolution of host or user names.
You need -n to show numeric ports instead of named ports (without it, a port such as 7777 may be displayed by its service name from /etc/services, which is why grep 7777 finds nothing) and IP addresses instead of resolved DNS names. You also need -a to show listening sockets as well as established connections (they don't show up without it).
I've been wondering why the Docker installation does not enable port forwarding to containers by default.
To save you a click, what I mean is:
$ sysctl net.ipv4.conf.all.forwarding=1
$ sudo iptables -P FORWARD ACCEPT
I assume it is some sort of security risk, but I just wonder what the risk is.
Basically I want to create some piece of code that enables this by default, but I want to know what bad things can happen.
I googled this and couldn't find anything.
Generally FORWARD ACCEPT seems to be considered too permissive (?)
If so, what can I change to make this more secure?
My network is rather simple: a bunch of PCs on a local LAN (10.0.0.0/24) with an OpenVPN server, and those PCs may act as Docker hosts (I'm doing this by hand, not using Docker Compose or Swarm or anything, because nodes change) that need to see each other. So there is no real outside access. Another detail: I am not using an overlay network, which I could do without Swarm, but the writer of the post warns it could be deprecated soon, so I also wonder if I should just start using Docker Swarm right away.
EDIT: My question here is maybe more theoretical than it seems at first. I want to know why they decided not to do this. I pretty much need/want full communication between Docker instances: they need to be SSH'd into and to open up a bunch of different ports to talk to each other (and this is the limit of my networking knowledge; I don't know how this really works. I suppose they are all high ports, but are those also blocked by Docker?). I am not sure docker-swarm would help me much here either. It is aimed at micro-services, and I may need interactive sessions from time to time, but this is probably asking too much in a single question.
Maybe the simplest version of this question is: "if I put that code up there as a script to load each time my computer boots up, how can someone abuse it".
Each Docker container runs on a local bridge network, with IPs generally in the 172.1x.x.x range (172.17.0.0/16 by default). You can get the IP address by running:
docker inspect <container name> | jq -r ".[].NetworkSettings.Networks[].IPAddress"
You can run your container exposing and publishing the specific container ports on the host running the containers (for example with -p).
Alternatively, you can use iptables to redirect traffic arriving on a specific port from outside to the container:
iptables -t nat -I PREROUTING -i <incoming interface> -p tcp -m tcp --dport <host listening port> -j DNAT --to-destination <container ip address>:<container port>
Change tcp to udp if the service is listening on a UDP socket.
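For example, with hypothetical values (traffic arriving on eth0 port 8080 forwarded to a container at 172.17.0.2 that listens on port 80):

# redirect incoming TCP traffic on eth0:8080 to the container at 172.17.0.2:80
iptables -t nat -I PREROUTING -i eth0 -p tcp -m tcp --dport 8080 -j DNAT --to-destination 172.17.0.2:80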
If you want to redirect all traffic you can still use the same approach, but may need to specify a secondary ip address on your host (e.g., 192.168.1.x) and redirect any traffic coming to that address to your container.
What I am trying to do is to run the Erlang Observer App locally and then connect to a remote Docker container that is running my Elixir/Phoenix app in production.
The problem I am having is not being able to connect.
From my research it seems that I need to know the IP address of the Docker container before starting the Phoenix server, so that I can start it like so:
iex --name my_app@10.20.57.123 -S mix phoenix.server
I'm not sure whether a cookie is needed, so I've also tried
iex --name my_app@10.20.57.123 --cookie random_cookie -S mix phoenix.server
I've tried using a hostname instead of an IP address, but that did not seem to work.
Once I have that running then I expect to run Observer like this
erl -name observe@127.0.0.1 -setcookie random_cookie -run observer
Or, with IEx
iex --name observe@127.0.0.1 --cookie random_cookie
iex> :observer.start()
Can I start a Phoenix server without needing to know the IP address and still be able to remotely connect with Observer?
I can figure out what the IP address of the Docker container will be by running this shell command inside it:
ip addr | grep -Eo 'inet (.*) scope global' | grep -Eo '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}'
But can't figure out how to put this in the command to start the Phoenix server.
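One way to wire that together (a sketch, untested; it assumes the server is started through a shell, e.g. a shell-form CMD or an entrypoint script, so the command substitution runs when the container starts, and NODE_IP is just an illustrative variable name):

# Hypothetical entrypoint snippet: pick the first globally scoped IPv4 address at startup.
NODE_IP=$(ip addr | grep -Eo 'inet (.*) scope global' | grep -Eo '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}' | head -n 1)
iex --name "my_app@${NODE_IP}" --cookie random_cookie -S mix phoenix.server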
I know there is a possible solution involving starting Docker containers with a static IP address, but I cannot set static IP addresses with my setup.
Any help is appreciated.
Can I start a Phoenix server without needing to know the IP address
and still be able to remotely connect with Observer?
Yes, with DNS you can. Of course, you will at least need to know the fully qualified domain name of the server running the Erlang node. While not quite as short as an Erlang node short name (e.g. node@server), it's still probably better than an IP address. I'm not too familiar with Docker, so it may be easier to stick with an IP address; in this situation DNS doesn't gain you a whole lot.
Once I have that running then I expect to run Observer like this
erl -name observe@127.0.0.1 -setcookie random_cookie -run observer
What machine are you running this command on? It will need to be one that has Erlang compiled with Wx support. If it is a different machine from the one running your Phoenix server (which I understand to be the case), connecting directly like this will not work.
You will need to do something like this instead:
Find the epmd port on the container running phoenix
$ ssh phoenix-host "epmd -names"
epmd: up and running on port 4369 with data:
name some_phoenix_node at port 58769
Note the port for epmd itself and the port of the node you're interested in debugging. Reconnect to the phoenix host with the ports you found forwarded:
$ ssh -L 4369:localhost:4369 -L 58769:localhost:58769 phoenix-host
On your machine, start a hidden Erlang node running the observer app:
$ erl -name debug@127.0.0.1 -setcookie <phoenix-server-cookie> -hidden -run observer
The app should open up and you should be able to select the node running the phoenix server.
Source: https://gist.github.com/pnc/9e957e17d4f9c6c81294
Update 2/20/2017
I wrote a script that can do the above automatically. All ports epmd knows about are forwarded to localhost: https://github.com/Stratus3D/dotfiles/blob/master/scripts/tools/epmd_port_forwarder
I'm playing with coreos and digitalocean, and I'd like to start allowing internal communication between my containers.
I've got private networking set up for all the hosts, and now I'd like to ensure that some containers only open ports to localhost and to the internal interface.
I've explored a lot of options for this, but none of them seem satisfactory:
Using the '-p', I can ensure docker binds to the local interface, but this has two downsides:
I can't easily test services by SSHing in, because that traffic originates from localhost
I need to write somewhat hacky shell scripts to start my services, in order to inject the address of the machine that the container is running on
I tried using flannel, but it doesn't make the traffic private (or I didn't set it up right)
I considered using iptables on the containers to prevent external access, but that doesn't seem as secure
I tried using iptables on the coreos hosts, but ... it's tricky, and I couldn't get it working.
When I tried to configure iptables on the host, I used the method described here: https://docs.docker.com/articles/networking/#communication-between-containers-and-the-wider-world, adding a DROP rule to the DOCKER chain, but it didn't work and packets still got through.
So what's the best approach? I'll invest time in making it work.
Overall, I guess I need something that:
Can be rolled out to all the hosts reliably
Is reasonably flexible going forward
Allows for 'edge machines' which are accessible from the wider internet
Solution
I'll go into how I ended up solving this. Thanks to larsks for their help; in the end, their approach was the correct one. It's tricky on CoreOS, because there aren't really stable addresses like larsks assumes. The whole point of CoreOS is to be able to forget about IP addresses.
I solved this by finding a not-too-bad way to inject the IP address into the command in the service file. The tricky thing is that service files don't really support a lot of the shell features I expected. What I wanted to do was assign the IP address of the machine to a variable and then inject it into the command:
ip=$(ifconfig eth1 | grep -o 'inet [0-9]*\.[0-9]*\.[0-9]*\.[0-9]*' | grep -o '[0-9]*\.[0-9]*\.[0-9]*\.[0-9]*');
/usr/bin/docker run -p $ip:7000:7000 ...
But, as mentioned, that doesn't work. So what to do? Get the shell!
ExecStart=/usr/bin/sh -c "\
export ip=$(ifconfig eth1 | grep -o 'inet [0-9]*\.[0-9]*\.[0-9]*\.[0-9]*' | grep -o '[0-9]*\.[0-9]*\.[0-9]*\.[0-9]*');\
echo $ip;\
/usr/bin/docker run -p $ip:7000:7000"
I hit a few problems along the way.
I'm pretty sure there aren't newlines in that command, so I had to add the ';' characters
When you test the above sh -c command in a shell, it behaves very differently from when systemd runs it. In the shell you need to escape the '$' characters, while in systemd config files, you don't.
I included the echo so that I could see what the command thought the ip was.
When I was doing all this, I actually inserted a small webserver to the docker image, so that I could just test using curl.
The downsides of this approach are that it's tied to the way ifconfig formats its output, and to IPv4. In fact, this approach doesn't work on my Linux Mint laptop, where ifconfig produces differently formatted output. The important lesson here is to have tools output YAML or JSON, so that shell JSON tools can access the values more easily.
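For example, on a system where iproute2 supports JSON output and jq is installed, the same eth1 lookup can be done without the fragile text parsing (a sketch):

# Print the IPv4 address on eth1 from structured output instead of scraping ifconfig.
ip -j addr show eth1 | jq -r '.[0].addr_info[] | select(.family == "inet") | .local'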
Instead of grepping for the IP address, you can use the environment file to get the IP address (both public and private) of the host the service gets scheduled on. This allows you to bind your container ports to either the public or the private address in a simple way.
Like so:
[Service]
EnvironmentFile=/etc/environment
ExecStart=/usr/bin/docker run --name myservice \
  -p ${COREOS_PUBLIC_IPV4}:80:80 \
  -p ${COREOS_PRIVATE_IPV4}:3306:3306 \
  ubuntu /bin/bash
I've got private networking set up for all the hosts, and now I'd like
to ensure that some containers only open ports to localhost and to the
internal interface.
This is exactly the behavior that you get with the -p option when you specify an ip address. Let's say I have a host with two external interfaces, eth0 (with address 10.0.0.10) and eth1 (with address 192.168.0.10), and the docker0 bridge at 172.17.42.1/16.
If I start a container like this:
docker run -p 192.168.0.10:80:80 -d larsks/mini-httpd
This will start a container that is accessible over the eth1 interface at 192.168.0.10, port 80. This service is also accessible -- from the host on which the container is located -- at the address assigned to the container on the docker0 network. This would be something like 172.17.0.39, port 80.
This seems to meet your goals:
The container port is exposed over the "private" eth1 interface.
The container port is accessible from the host.
I can't easily test services by SSHing in, because that traffic originates from localhost.
If you were running ssh inside a container, you would ssh to it at the "internal" address assigned by Docker. But if you are running ssh inside your containers, you may want to consider not doing that and relying on tools like docker exec instead.
I need to write somewhat hacky shell scripts to start my services, in order to inject the address of the machine that the container is running on
With this solution, there is no need to inject the machine ip into the container.