I would like to have a custom server which listens inside a Docker container (e.g. on TCP 192.168.0.1:4000). How do I send data in and out from outside the container? I don't want to use host ports for bridging; I would rather use pipelines or something that doesn't consume host network resources. Please show me a full example of the docker command.
You can use Docker volumes.
You start your container as
docker run -v /host/path:/container/path ...
and then you can pipe data to files in /host/path and they will be visible in /container/path, and vice versa.
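For streaming data rather than whole files, a named pipe (FIFO) created in the shared directory also works, since host and container share the same kernel. A rough sketch (myserver is a placeholder image, and a real server would read the pipe itself instead of cat):
mkfifo /host/path/datapipe
docker run -d -v /host/path:/container/path myserver sh -c 'cat /container/path/datapipe'
echo "Lots of data" > /host/path/datapipe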
As long as your server's clients are Docker containers as well, you don't need to publish any host ports:
docker run --name s1 -d --expose 3000 myserver
docker run -d --link s1:serverName client
Now you can reach your server from the client container at serverName:3000.
UPDATE: I just saw that you want to be able to send data from outside any container. You can still use the same approach, depending on your use case and data volume: every time you want to send data, create a container that sends it. Using the CLI it might look like this:
echo "Lots of data" | docker run --rm -i --link s1:serverName client
The client would have to read from stdin and send the data to serverName:3000. After it finishes, the container is automatically removed.
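Such a client can be as small as busybox's nc (a sketch, assuming s1 listens on port 3000 as above):
echo "Lots of data" | docker run --rm -i --link s1:serverName busybox nc serverName 3000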
I don't think what you're asking for makes sense. Let's say you use a UNIX pipe to capture standard output from a docker container.
$ docker run --rm -t busybox dd if=/dev/urandom count=1 > junk
$ du -hs junk
4.0K junk
If your docker client is connected to the docker host via TCP, of course that traffic uses the host's networking stack. It uses a method called hijacking to transport data on the same socket as the HTTP-ish connection between the client and host.
If your docker client is connected to the host via a unix socket, then your client is on the host, and that pipeline is not using the tcp stack. But you still can't transport that data off the host without using the host's networking.
So using the networking stack is unavoidable if you want to get data from the host. That said, if your criterion is just to avoid allocating additional ports, pipelines do allow you to use the original docker host socket instead of creating new ports. But a pipeline isn't the same as a TCP socket, so your application needs to be designed to understand standard input and output.
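For an already-running container, docker exec gives you the same kind of pipeline without allocating any new ports. A sketch, assuming socat is installed in the image and the server listens on port 4000 as in the question:
echo "Lots of data" | docker exec -i mycontainer socat - TCP:localhost:4000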
One approach that grants you access to an already-created container's internally-exposed ports is the ambassador container-linking pattern.
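A minimal ambassador can be sketched with socat (the alpine/socat image and the container names are assumptions, reusing s1 and port 3000 from the earlier example):
docker run -d --name s1-ambassador --link s1:server alpine/socat TCP-LISTEN:3000,fork TCP:server:3000
docker run -d --link s1-ambassador:serverName client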
I am quite new to Docker and I have a question about connecting container services with traditional ones.
Currently I am thinking of replacing a traditional Grafana installation (directly on a Linux server) with a Grafana Docker container.
In Grafana I have to connect to different data sources like a MySQL instance, a Windows SQL database and so on, so Grafana is doing a pull of data. All these data sources reside (and will continue to reside) on other hosts, and they are not containers.
So how can I make my container able to communicate with these data sources? Is it possible by default or do I have to set up a special kind of network? I saw that there is an option called macvlan... is that the correct way?
BR
Jan
This should work out of the box, as far as I understand. At least, I'm using Grafana inside a docker container and it works perfectly.
You can test connectivity from inside your docker container to some external resource by opening a container shell like this:
docker exec -it <container ID> /bin/bash
And then
root@a9cbebfc4564:/# curl google.com
Or
root@a9cbebfc4564:/# ping <bla-bla>
The commands above depend on the docker image environment (like the OS or installed software), but this can be solved in the same way as on a regular Unix environment.
P.S. I encountered a docker2host connection issue once, but it was due to incorrect firewall configuration on the host side.
Since you are replacing a traditional installation, you can start with host networking. This mode gives you the same connectivity experience as installing on the host. A quick start is as simple as:
docker run --network host grafana/grafana
Notice there's no need to --publish or --publish-all ports, as the Grafana container now shares the host network.
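You can verify it from the host with a plain request (Grafana listens on port 3000 by default):
curl http://localhost:3000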
I have a python-gunicorn web server that runs on docker. I want to run multiple servers on the same machine so I assigned a "random" port to each container like this:
$ docker run -d -p 0:80 image
If I run $ docker ps, I can see the port being used by the container:
0.0.0.0:32771->80/tcp
Now, I want to retrieve this number (32771) from within the container. Is there any way to do this?
Edit:
I want this information because I need to connect to these servers from another machine, and the way it is implemented requires that the server sends its URL via HTTP POST: IP:Port/path
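One common workaround (a sketch, not from this thread) is to resolve the mapping on the host right after starting the container and hand the value to the app out of band:
CID=$(docker run -d -p 0:80 image)
docker port "$CID" 80   # prints something like 0.0.0.0:32771
The container itself cannot see this mapping without querying the Docker API, so the value has to be passed in from outside, for example by choosing the host port up front and passing it with -e, as a later answer on this page does.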
I have a container in which I expose a port to access a service running within the container. I am not exposing the port outside the container, i.e. to the host (using host network on Mac). After getting inside the container using exec -t and running a curl for a POST request, I get the error:
curl command: curl http://localhost:19999
Failed connect to localhost:19999; Connection refused.
I have the EXPOSE command in my Dockerfile and do not want to expose ports to my host. My service is also up and running inside the container. I also have the property within the config set as
"ExposedPorts": {"19999/tcp": {}}
(obtained through docker inspect <container id/name>). Any idea on why this is not working? I am using Docker for Mac.
I'd post my docker-compose file too, but this is being built through Maven. I can ensure that I am exposing my port using 19999:19999. Another weird issue is that on disabling my proxies it would run a very lightweight command for my custom service and wouldn't run it again, returning the same error as above. The issue only occurs on my machine and not on others.
Hints:
The app must be listening on port 19999, which it probably is not.
The EXPOSE that you're using inside the Dockerfile does nothing by itself.
Usually there is no need to change the default port on which an application is listening; each container has its own IP, so you shouldn't run into a port conflict.
Answer:
Instead of curling 19999, try the default port on which your app would normally be listening (it's hard to guess what you are trying to run).
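To see what the app is actually listening on, you can check from inside the container (assuming net-tools or iproute2 is installed in the image):
docker exec -it <container id> netstat -tlnp
docker exec -it <container id> ss -tln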
If you don't publish a port (with the docker run -p option or the Docker Compose ports: option), you cannot directly reach the container on Docker for Mac. See the Known limitations, use cases, and workarounds in the Docker Desktop for Mac documentation: the "per-container IP addressing is not possible" item is what you're trying to attempt.
The docker inspect IP address is basically useless, except in one very specific Docker configuration (on a native-Linux host, calling from outside of Docker, on the same host); I wouldn't bother looking it up.
The Dockerfile EXPOSE directive and similar runtime options do very little and mostly serve as documentation. Even if you have that configured you still need to separately publish the port when you start the container to reach it from outside of Docker space.
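Concretely (the image name is a placeholder; the port is taken from the question), publishing the port at run time is what makes the curl work from the Mac host:
docker run -d -p 19999:19999 myservice-image
curl http://localhost:19999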
I wrote a simple peer-to-peer system in which, when starting a node:
- the node looks for a free port and then makes its service accessible on that port,
- it registers its URL, including the port, to a central server, so that other nodes know how to connect to it.
I have the impression this is a typical kind of task that docker is useful for, so I thought of making a container for these peers (my previous experience with docker has only been to write a hello world container).
Ideally I would map (publish) my exposed port to a host port from within the container using the same code that I am running now, but I can imagine that is simply not possible, and I could get around it by starting the image via a script that checks for availability of ports and then runs the container on an appropriate free host port. If the first is possible, however, that would be even better. To be explicit, I do something like the following in Python:
port = 5001
while not port_is_free(port):  # port_is_free() is the node's own helper
    port += 1
The second part really has to be taken care of from within the container. Assume that it has been started with the command docker run -p 5005:80 p2p-node; then I need to find out, from within the container, the published port 5005 that the exposed port 80 is mapped to.
Searching this site and the internet, it looks like more people are interested in doing the same, but I couldn't find a solution, nor a confirmation that this simply cannot be done.
So this is the main question I want to ask: how can I see which published ports my exposed ports are mapped to from within a running docker container?
Some of your requirements are not clear to me.
However, if you only want to know which host port is mapped to your container's port, you can simply pass it in as an environment variable with -e VAR=val. Just an idea.
Start the container:
docker run -p 5005:80 -e HOST_PORT=5005 p2p-node
Access the variable from inside the container:
echo $HOST_PORT
There is also docker-py, a Python client library for Docker.
It is not clear why you want the host port. In Docker, containers can communicate with each other without having to expose ports on the host machine.
As long as the peer apps are containerized, you don't need the exposed port at all. The containers can be connected via a Docker network, and the internal port can be used for communication between the containers.
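A sketch with a user-defined network (the network, container, and image names are placeholders; the internal port 5001 is taken from the question's Python snippet):
docker network create p2p-net
docker run -d --name node1 --network p2p-net p2p-node
docker run -d --name node2 --network p2p-net p2p-node
Now node2 can reach node1 at node1:5001 without any published host ports, and vice versa.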
I have the following problem:
Assume that I started two Docker containers on the host machine, A and B:
docker run -ti -p 2000:2000 A
docker run -ti -p 2001:2001 B
I want to be able to reach each of these containers FROM THE INTERNET by:
http://example.com:2000
http://example.com:2001
How can I achieve that?
The rest of the equation here is just normal TCP/IP flow. You'll need to make sure of the following:
If the host has an implicit deny for incoming traffic on its physical interface, you will need to open up ports 2000 and 2001, just like you would for any service (Docker or not); see the sketch at the end of this answer.
If the host is behind a NAT or other external means of routing, you'll need to punch holes for those ports there as well.
You'll need the external IP address (either the one attached to the host or the one in front of the NAT allowing access to the ports).
As far as Docker is concerned, you've done what is required to open the ports to the service running in that container correctly.
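For example, on a host that uses ufw as its firewall (an assumption; the thread does not say what the host runs), opening the two ports would look like:
sudo ufw allow 2000/tcp
sudo ufw allow 2001/tcp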