I'm building a development Docker image I intend to run on my local machine. On this image I want to put two programs; I'll call them progA and progB. I am not the author of these programs, so I cannot change how they communicate. Both programs can only send & receive data on stdin/stdout.
I have a third program—progC—that I want to run on my host. progC needs to communicate with both progA and progB independently (meaning progC⇔progA and progC⇔progB) using stdin/stdout.
While I'm definitely a n00b when it comes to socat, from what I've read I feel like this should be possible. This is my mental model so far:
Inside the container: Establish a bidirectional connection between progA and a TCP port. Do the same for progB using a different port.
On the host: Run the Docker container publishing the ports to the host. Have a local script that—when invoked—binds the ports to stdin/stdout. There will be a script for progA and another for progB. progC will control when either script is invoked, and the binding created from the script should remain open and active until progC terminates the script.
Is this possible? If so, how? Is this advisable? If not, is there a better way to accomplish the same goal?
Think I figured it out:
On the provider (container) side (for progA):
socat -dd SYSTEM:progA TCP-LISTEN:3344,forever,reuseaddr
On the consumer (host) side:
socat '-!!STDOUT' TCP:localhost:3344
I plop that second command into progC as the command it needs to run to talk to progA over stdin/stdout, and it works! socat is pretty magical!
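In case it helps anyone reproduce this, here is a rough smoke test of the same wiring, with cat standing in for progA; note that socat STDIO TCP:... is effectively equivalent to the quoted '-!!STDOUT' form above, and the image name below is made up:

# Terminal 1: wrap a stdin/stdout program (cat, standing in for progA) in a TCP listener
socat -d -d SYSTEM:cat TCP-LISTEN:3344,forever,reuseaddr

# Terminal 2: bridge stdin/stdout to that port; "hello" comes back
echo hello | socat STDIO TCP:localhost:3344

# With progA in a container, publish the port so the host-side script can reach it:
docker run -p 3344:3344 my-dev-image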
I am using the default bridge network for docker (and yes, I am relatively new to docker). I have two docker containers.
The first container provides a service on port 12345. When creating this container, I did not specify the --publish option because I did not want to expose this port to the outside world.
The second container needs to use the service from the first container. However, the application running in this second container was hardcoded to access the service at 127.0.0.1:12345. Clearly, the second container's localhost is not the same as the first container's. Is there a way to coerce Docker networking into thinking that localhost in the second container should actually be connected to the port in the first container, without exposing anything to the outside world?
Option N: (this works but may not be the best solution)
One way you can force this to behave the way you need is by injecting an additional service into the application container that binds to the port locally and redirects the traffic outward.
socat TCP-LISTEN:12345,fork TCP:172.18.0.2:12345
In a quick test here, I was able to confirm that 127.0.0.1:12345 in the second container is then treated as the first container's (172.18.0.2) port 12345.
Things to consider:
The two containers need to be able to reach each other.
It breaks the recommendation of one service per container.
Getting socat into the Docker container (yum/apt-get install socat; building from source?).
Getting it to run automatically on container start/restart (see the sketch below).
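A minimal sketch of those last two points, assuming socat is installed in the image and the first container is reachable as service1 (a made-up name on a user-defined network; on the default bridge, substitute its IP such as 172.18.0.2). An entrypoint wrapper starts the redirect and then hands off to the container's real command:

#!/bin/sh
# entrypoint.sh -- hypothetical wrapper for the second container
# Bind port 12345 locally and forward each connection to the first container.
socat TCP-LISTEN:12345,fork,reuseaddr TCP:service1:12345 &
# Hand control to the real application (the image's CMD).
exec "$@"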
I have an application running in container A, and I'd like to write to the stdin of a process running in container B. I see that if I want to write to B's stdin from the host machine I can use docker attach. Effectively, I want to call docker attach B from within A.
Ideally I'd like to be able to configure this through docker-compose.yml. Maybe I could tell docker-compose to create a Unix domain socket in A that pipes to B's stdin, or connect some magic port number to B's stdin.
I realize that if I have to I can always put a small webserver in B's container that redirects all input from an open port in B to the process, but I'd rather use an out-of-the-box solution if one exists.
For anyone interested in the details, I have a python application running from container A and I want it to talk to stockfish (chess engine) in container B.
A process in one Docker container can't directly use the stdin/stdout/stderr of another container. This is one of the ways containers are "like VMs". Note that this is also pretty much impossible in ordinary Linux/Unix without a parent/child process relationship.
As you say, the best approach is to put an HTTP or other service in front of the process in the other container, or else to use only a single container and launch the thing that only communicates via stdin as a subprocess.
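As a rough illustration of the "service in front" approach, socat can expose a stdin/stdout program over TCP from inside container B. The port number and the containerB hostname below are made up:

# Inside container B: wrap stockfish's stdin/stdout in a TCP listener;
# with fork, each connection gets its own stockfish process.
socat TCP-LISTEN:4000,fork,reuseaddr EXEC:stockfish

# The Python app in container A (on the same Docker network) then connects
# to containerB:4000 and speaks UCI over the socket instead of stdin/stdout.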
(There might be a way to make this work if you give the calling process access to the host's Docker socket, but you'd be giving it unrestricted access over the host and tying the implementation to Docker; either the HTTP or subprocess paths are straightforward to develop and test without Docker, then move into container land separately, and don't involve the possibility of taking over the host.)
If it helps, you could try creating a socket and reading/writing to it,
and mounting this socket into both containers like:
docker run -d -v /var/run/app.sock:/var/run/app.sock:ro someapp1
docker run -d -v /var/run/app.sock:/var/run/app.sock someapp2
Disclaimer: this is just an idea; I have never done something like it myself.
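For what it is worth, here is one way that idea might look with socat; this is a sketch under the same caveat, not something verified in this exact setup:

# In container B (someapp2): serve stockfish's stdin/stdout on the shared socket.
socat UNIX-LISTEN:/var/run/app.sock,fork EXEC:stockfish

# In container A (someapp1): anything written here reaches stockfish's stdin.
socat STDIO UNIX-CONNECT:/var/run/app.sock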
I wrote a simple peer-to-peer system in which, when starting a node:
the node looks for a free port and then makes its service accessible on that port,
and it registers its URL, including the port, with a central server, so that other nodes know how to connect to it.
I have the impression this is a typical kind of task that docker is useful for, so I thought of making a container for these peers (my previous experience with docker has only been to write a hello world container).
Ideally I would map (publish) my exposed port to a host port from within the container, using the same code that I am running now; but I can imagine that is simply not possible, and I could get around it by starting the image from a script that checks for available host ports and then runs the container on an appropriate free one. If the first is possible, however, that would be even better. To be explicit, I do something like the following in Python:
port = 5001
# port_is_free() is assumed to report whether this port is unused
while not port_is_free(port):
    port += 1
The second part really has to be taken care of from within the container. Assume it has been started with the command docker run -p 5005:80 p2p-node; then I need to find out, from within the container, the published port 5005 that the exposed port 80 is mapped to.
Searching this site and the internet, it looks like more people are interested in doing the same, but I couldn't find a solution, nor a confirmation that this simply cannot be done.
So this is the main question I want to ask: how can I see which published ports my exposed ports are mapped to from within a running docker container?
Some of your requirements are not clear to me.
However, if you only want to know which host port is mapped to your container's port, you can simply pass it in as an environment variable with -e VAR=val. Just an idea:
Start container:
docker run -p 5005:80 -e HOST_PORT=5005 p2p-node
Access the variable from inside the container:
echo $HOST_PORT
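Combining this with the port search from the question, a hedged launcher script on the host could pick the free port and pass it in (the lsof check is an assumption about what is available on the host):

# Find a free host port, then publish it and hand it to the container.
HOST_PORT=5001
while lsof -iTCP:"$HOST_PORT" -sTCP:LISTEN >/dev/null 2>&1; do
  HOST_PORT=$((HOST_PORT + 1))
done
docker run -p "$HOST_PORT":80 -e HOST_PORT="$HOST_PORT" p2p-node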
There is also docker-py, a Python client library for Docker.
It is not clear why you want the host port. In Docker, containers can communicate with each other without having to expose ports on the host machine.
As long as the peer apps are containerized, you don't need to expose the port at all. The containers can be connected via a Docker network, and the internal port can be used for communication between them.
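For example, a user-defined bridge network gives containers DNS names, so peers can reach each other on the internal port with nothing published (the network name p2p-net is made up):

docker network create p2p-net
docker run -d --name node1 --network p2p-net p2p-node
docker run -d --name node2 --network p2p-net p2p-node
# node2 can now reach node1 at node1:80 -- no -p/--publish needed.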
It is really easy to mount directories into a docker container. How can I just as easily "mount a port into" a docker container?
Example:
I have a MySQL server running on my local machine. To connect to it from a docker container I can mount the mysql.sock socket file into the container. But let's say for some reason (like intending to run a MySQL slave instance) I cannot use mysql.sock to connect and need to use TCP.
How can I accomplish this most easily?
Things to consider:
I may be running Docker natively if I'm using Linux, but I may also be running it in a VM if I'm on Mac or Windows, through Docker Machine or Docker for Mac/Windows (Beta). The answer should handle both scenarios seamlessly, without me as the user having to decide which solution is right depending on my specific Docker setup.
Simply assigning the container to the host network is often not an option, so that's unfortunately not a proper solution.
Potential solution directions:
1) I understand that setting up proper local DNS and making the Docker container (network) talk to it might be a proper, robust solution. If there is such a DNS service that can be set up with 1, max 2 commands and then "just work", that might be something.
2) Essentially what's needed here is that something will listen on a port inside the container and like a sort of proxy route traffic between the TCP/IP participants. There's been discussion on this closed Docker GH issue that shows some ip route command-line magic, but that's a bit too much of a requirement for many people, myself included. But if there was something akin to this that was fully automated while understanding Docker and, again, possible to get up and running with 1-2 commands, that'd be an acceptable solution.
I think you can run your container with the --net=host option. In this case the container will bind to the host's network and will be able to access all the ports on your local machine.
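A quick illustration, with the caveat that this only behaves this way when Docker runs natively on Linux; under Docker Machine or Docker for Mac/Windows the "host" is the VM, so it does not satisfy the question's first requirement:

# The container shares the host's network stack, so the host's MySQL
# is reachable at 127.0.0.1:3306 from inside the container.
docker run --net=host my-image   # image name made up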
I am writing an application that is composed of a few Spring Boot based microservices with a Zuul-based reverse proxy in front.
It works when I start the services on my machine, but for server rollout I'd like to use docker for the services, but this seems not to be possible right now.
Normally you would have a fixed "internal" port and randomized ports on the outside of the container. But the app in the container doesn't know the outside port (and IP).
The Netflix tools match what I would want to write an efficient microservice architecture and conceptually I really like docker.
As far as I can see it would be very troublesome to start the container, gather the outside port on the host and pass it to the app, because you can't simply change the port after the app is started.
Is there any way to use eureka with docker based clients?
[Update]
I guess I did a poor job explaining the problem. So maybe this clarifies it a bit more:
The eureka server itself can run in docker, as I have only one and the outside port doesn't matter. I can use the link feature to access it from the clients.
The problem is the URL that the clients register themselves with.
This is for example https://localhost:8080/ but due to dynamic port assignment it is really only accessible via https://localhost:54321/
So eureka will return the wrong URL for the services.
UPDATE
I have updated my answer below, so have a look there.
I have found a solution myself, which is maybe not the best solution, but it fits for me...
When you start Docker with --net=host (host networking), you use the host's network stack directly. Then I just use 0 as the port for Spring Boot, and Spring randomizes the port for me; since it's using the host's network stack, there is no translation to a different port (and IP).
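A sketch of that combination (the image name is made up; SERVER_PORT is Spring Boot's environment-variable form of the server.port property):

# Host networking plus a Spring-chosen random port: the port the app binds
# is directly reachable on the host, so the Eureka registration is correct.
docker run --net=host -e SERVER_PORT=0 my-spring-app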
There are some drawbacks though:
When you use host networking you can't use the link feature for these containers, either as link source or target.
Using the host's network stack leads to less encapsulation of the instance, which may be a problem depending on your project.
I hope it helps
A lot of time has passed, and I think I should elaborate on this a little further:
If you use Docker to host your Spring application, just don't use a random port! Use a fixed port, because every container gets its own IP anyway, so every service can use the same port. This makes life a lot easier.
If you have a public facing service then you would use a fixed port anyway.
For local starts via Maven or, say, the command line, have a dedicated profile that uses randomized ports so you don't have conflicts (but be aware that there are, or have been, a few bugs surrounding random ports and service registration).
If for whatever reason you want to or need to use host networking, you can of course use randomized ports, but most of the time you shouldn't!
You can set up a directory for each Docker instance, share it between the host and the instance, and then write the port and IP address to a file in that directory.
$ instanceName=$(generate random instance name)
$ dirName=/var/lib/docker/metadata/$instanceName
$ mkdir -p $dirName
$ docker run --name $instanceName -v ${dirName}:/mnt/metadata ...
$ echo $(get port number and host IP) > ${dirName}/external-address
Then you just read /mnt/metadata/external-address from your application and use that information with Eureka.
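A more concrete, hedged version of the same recipe (every name is made up; docker port asks the daemon which host port was assigned, and hostname -I assumes a Linux host):

$ instanceName="node-$RANDOM"
$ dirName="/var/lib/docker/metadata/$instanceName"
$ mkdir -p "$dirName"
# Run detached, publishing container port 8080 on a random host port (-P).
$ docker run -d --name "$instanceName" -P -v "$dirName":/mnt/metadata my-eureka-client
# Ask Docker which host port was assigned to container port 8080.
$ hostPort=$(docker port "$instanceName" 8080 | head -n1 | cut -d: -f2)
$ echo "$(hostname -I | awk '{print $1}'):$hostPort" > "$dirName/external-address"

The application then reads /mnt/metadata/external-address as before.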