Docker proxy: logging URLs requested during build

I am struggling with a problem that is easiest to describe with a picture.
I need to somehow log all URLs that are requested when I run the docker build command.
Can somebody help me figure out how to achieve this?

You cannot monitor the connections made by the build command through Docker itself. They are just normal connections passing through your network interface.
You may want to install tcpdump and use it to monitor a specific network interface or to filter specific HTTP requests. That is the best you can do, as far as I know.
UPDATE
If you want to monitor a build process that happens inside a Docker container, you can use tcpdump as mentioned above to monitor that container's network interface via its bound IP address. That way you only see the connections flowing in and out of that particular container.
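For example, something along these lines (a rough sketch, assuming the default docker0 bridge on native Linux; my-container is a placeholder name, and HTTPS traffic will only reveal the endpoints, not the full URLs):
$ sudo tcpdump -i docker0 -nA 'tcp port 80'                                        # watch plain HTTP requests crossing the bridge during the build
$ CONTAINER_IP=$(docker inspect -f '{{.NetworkSettings.IPAddress}}' my-container)   # IP of one specific container on the default bridge
$ sudo tcpdump -i docker0 -nA "host $CONTAINER_IP"                                  # restrict the capture to that container's traffic only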

Related

Read host's ifconfig in the running Docker container

I would like to read the host's ifconfig output while the Docker container is running, so that I can parse it, extract the OpenVPN interface (tap0) IP address, and process it in my application.
Unfortunately, propagating this value via the environment is not an option in my case, because the IP address can change while the container is running, and I don't want to restart my application container each time just to pick up a new value.
The current working solution is a cron job on the host that writes the IP into a file on a shared volume, which the container then reads - but I am looking for a better solution, as this feels like a workaround. There was also a plan to create a new container with network: host, which can see the host's interfaces - it works, but it also looks like a workaround, since it involves many steps and probably raises security issues.
My question: is there any valid, cleaner way to achieve my goal - reading the host's ifconfig inside a Docker container in real time?
A specific design goal of Docker is that containers can’t directly access the host’s network configuration. The workarounds you’ve identified are pretty much the only ways to do this.
If you’re trying to modify the host’s network configuration in some way (you’re trying to actually run a VPN, for example) you’re probably better off running it outside of Docker. You’ll still need root permission either way, but you won’t need to disable a bunch of standard restrictions to do what you need.
If you’re trying to provide some address where the service can be reached, using configuration like an environment variable is required. Even if you could access the host’s configuration, this might not be the address you need: consider a cloud environment where you’re running on a cloud instance behind a load balancer, and external clients need the load balancer; that’s not something you can directly know given only the host’s network configuration.
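If you do go the configuration route, a minimal sketch could look like this (SERVICE_ADDR and my-app are made-up names; the host looks up the tap0 address and hands it to the container at start time):
$ docker run -e SERVICE_ADDR=$(ip -4 -o addr show tap0 | awk '{print $4}' | cut -d/ -f1) my-app
Note that this value is fixed for the lifetime of the container, which is exactly the limitation described in the question; if the address must stay current, the shared-file approach is the usual fallback.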

Easy, straightforward, robust way to make host port available to Docker container?

It is really easy to mount directories into a docker container. How can I just as easily "mount a port into" a docker container?
Example:
I have a MySQL server running on my local machine. To connect to it from a docker container I can mount the mysql.sock socket file into the container. But let's say for some reason (like intending to run a MySQL slave instance) I cannot use mysql.sock to connect and need to use TCP.
How can I accomplish this most easily?
Things to consider:
I may be running Docker natively if I'm using Linux, but I may also be running it in a VM if I'm on Mac or Windows, through Docker Machine or Docker for Mac/Windows (Beta). The answer should handle both scenarios seamlessly, without me as the user having to decide which solution is right depending on my specific Docker setup.
Simply assigning the container to the host network is often not an option, so that's unfortunately not a proper solution.
Potential solution directions:
1) I understand that setting up proper local DNS and making the Docker container (network) talk to it might be a proper, robust solution. If there is such a DNS service that can be set up with 1, max 2 commands and then "just work", that might be something.
2) Essentially what's needed here is something that listens on a port inside the container and, like a sort of proxy, routes traffic between the TCP/IP participants. There's been discussion on this closed Docker GH issue that shows some ip route command-line magic, but that's a bit too much to ask of many people, myself included. But if there were something akin to this that was fully automated, understood Docker and, again, could be up and running with 1-2 commands, that'd be an acceptable solution.
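One concrete, if manual, shape of option 2) is a small relay inside the container; this sketch uses socat (not mentioned in the issue) and assumes the default bridge gateway 172.17.0.1, which only holds for native Linux Docker:
$ socat TCP-LISTEN:3306,fork,reuseaddr TCP:172.17.0.1:3306     # expose the host's MySQL as localhost:3306 inside the container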
I think you can run your container with the --net=host option. In that case the container shares the host's network stack and can access all the ports on your local machine.
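A minimal sketch of that (my-client is a placeholder image; note that with Docker Machine or Docker for Mac/Windows the "host" is the VM, not your workstation, so this does not fully meet the cross-platform requirement):
$ docker run --net=host my-client mysql -h 127.0.0.1 -P 3306 -u root -p     # 127.0.0.1:3306 now reaches the host's MySQL over TCP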

Inter-process communication between docker container and its host

I am setting up a continuous integration system with GitLab and Docker. For some reason I have to commit the current container as a new Docker image in one stage of the CI pipeline, so I can reuse the image in subsequent stages.
To summarize, I have to execute this command:
docker commit $CONTAINER_ID $NEW_IMAGE_NAME
But from inside a container. And later from another container:
docker rmi $NEW_IMAGE_NAME
One solution might be setting up ssh public key authentication and:
ssh user@172.17.0.1 docker ...
Here, 172.17.0.1 is the host's IP address. For security, I can restrict the ssh user to specific commands only.
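A rough sketch of such a restriction, using the command= option in authorized_keys on the host (the wrapper path /usr/local/bin/ci-docker.sh is hypothetical and would itself only permit docker commit and docker rmi):
# in the ssh user's ~/.ssh/authorized_keys on the host:
command="/usr/local/bin/ci-docker.sh",no-port-forwarding,no-pty,no-X11-forwarding ssh-ed25519 AAAA... ci-runner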
Another solution is to create a public service on a network socket on the host. But what is the best approach here? I would prefer a secure solution, so that from inside the container you can only commit a Docker image and delete the image you created (and not other images). So an unrestricted ssh is not secure enough. I would also prefer a more portable solution that doesn't rely on the host IP address. What do you suggest?
Issue/Question
How to execute some Docker API call from within a container?
Did you know
Did you know that the Docker API can be served on a network socket by adding the option -H tcp://0.0.0.0:2375? You can then execute calls directly against the Docker daemon from within your containers.
Note that you can (and should) also enable TLS for this socket, cf. man docker daemon.
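As a sketch, commits and deletions then become plain HTTP calls against the Docker remote API (assuming the daemon listens on tcp://0.0.0.0:2375 and is reachable from the container at the default bridge gateway 172.17.0.1; TLS is omitted here only for brevity):
$ curl -X POST "http://172.17.0.1:2375/commit?container=$CONTAINER_ID&repo=$NEW_IMAGE_NAME"    # commit the running container as a new image
$ curl -X DELETE "http://172.17.0.1:2375/images/$NEW_IMAGE_NAME"                               # remove that image later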
Security is a must
If this option does not seem clean or secure enough, then a local networking* service will be needed. I would suggest a small web API in Java or Python that responds to two different calls, which could be:
commit: http[s]://localhost:service-port/commit?container_id=123456789&image_name=my_name
rmi: http[s]://localhost:service-port/rmi?container_id=123456789
I did not understand which user you were referring to in your comment.
The local service would then answer with HTTP 201 Created if the image is created, or HTTP 406 Not Acceptable if the name already exists. It could also check that no more than one rmi in a row is performed. It could answer HTTP 204 No Content if no image with this ID exists, HTTP 403 Forbidden if the image cannot be deleted, or HTTP 200 OK if everything went well. As a last resort it could answer HTTP 418 I'm a teapot.
*: local networking is fast, mostly secure, easy to deploy and works natively with Docker. A FIFO (see man mkfifo) could also be used, but it would require another shared volume (for the FIFO file) and, probably, more code.

run docker app after setting up the network

I'm new to Docker and I've run into some problems; can anyone help me?
I want to run a container with macvlan.
In my case, I will run a container with --net=none first, then configure the network using the ip command (or using netns in Python).
Currently the order is:
run a container
run the app inside the container
set up the network
My question is how to set up the network first and only then run the app, so that the order becomes:
run a container
set up the network
run the app inside the container
Maybe I could write the network configuration to a script file and run it before everything else in the Dockerfile. But that way the network and the container are tightly coupled, and I would need to edit it manually for every container, every time.
So is there a better way to handle this situation?
Thanks in advance.
There is a --net=container:<name> argument to docker run which makes the new container share the network namespace of an existing container.
So you could first launch a container with --net=none and a script that sets up the networking, then launch your application container with --net=container:network_container to use that networking stack. That keeps the network configuration and the application uncoupled.
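A rough sketch of that sequence (all container and image names here are placeholders; the holder container just keeps an otherwise empty network namespace alive while you configure it from the host):
$ docker run -d --name netns-holder --net=none busybox sleep 1000000
$ # ... configure netns-holder's namespace with ip / pipework / your Python netns code ...
$ docker run -d --name app --net=container:netns-holder my-app        # the app reuses that network stack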
Also, take a look at the pipework project if you haven't already.
In general, though, I would suggest that you are better off looking at existing solutions like Weave and Project Calico.

Use Eureka despite the random external ports of docker containers

I am writing an application that is composed of a few Spring Boot based microservices with a Zuul based reverse proxy in front.
It works when I start the services on my machine, but for the server rollout I'd like to use Docker for the services, and that doesn't seem to be possible right now.
Normally you would have a fixed "internal" port and randomized ports on the outside of the container. But the app in the container doesn't know the outside port (or IP).
The Netflix tools match what I would want to write an efficient microservice architecture and conceptually I really like docker.
As far as I can see, it would be very troublesome to start the container, gather the outside port on the host and pass it to the app, because you can't simply change the port after the app has started.
Is there any way to use eureka with docker based clients?
[Update]
I guess I did a poor job explaining the problem. So maybe this clarifies it a bit more:
The eureka server itself can run in docker, as I have only one and the outside port doesn't matter. I can use the link feature to access it from the clients.
The problem is the URL that the clients register themselves with.
This is for example https://localhost:8080/ but due to dynamic port assignment it is really only accessible via https://localhost:54321/
So eureka will return the wrong URL for the services.
UPDATE
I have updated my answer below, so have a look there.
I have found a solution myself, which is maybe not the best solution, but it works for me...
When you start docker with "--net=host" (host networking), the container uses the host's network stack directly. Then I just use 0 as the port for spring-boot, Spring randomizes the port for me, and since it's using the host's networking stack there is no translation to a different port (or IP).
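So, as a sketch (my-eureka-client is a placeholder image; SERVER_PORT=0 maps to Spring Boot's server.port=0, i.e. a random free port, and with host networking the port the app binds to is the port other hosts see):
$ docker run -d --net=host -e SERVER_PORT=0 my-eureka-client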
There are some drawbacks though:
When you use host networking you can't use the link feature for these containers, either as link source or target.
Using the host's network stack gives you less encapsulation of the instance, which may be a problem depending on your project.
I hope it helps
A lot of time has passed and I think I should elaborate this a little bit further:
If you use Docker to host your Spring application, just don't use a random port! Use a fixed port, because every container gets its own IP anyway, so every service can use the same port. This makes life a lot easier.
If you have a public facing service then you would use a fixed port anyway.
For local starts via Maven or the command line, have a dedicated profile that uses randomized ports so you don't have conflicts (but be aware that there are, or have been, a few bugs around random ports and service registration).
If, for whatever reason, you want or need to use host networking, you can of course use randomized ports, but most of the time you shouldn't!
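A minimal sketch of the fixed-port approach (image names are placeholders; each container gets its own IP on the user-defined network, so they can all listen on the same internal port without conflict):
$ docker network create services
$ docker run -d --net=services --name users users-service
$ docker run -d --net=services --name orders orders-service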
You can set up a directory for each Docker instance, share it between the host and the instance, and then write the port and IP address to a file in that directory.
$ instanceName=web-$RANDOM                        # any scheme that yields a unique instance name will do
$ dirName=/var/lib/docker/metadata/$instanceName
$ mkdir -p $dirName
$ docker run -d --name $instanceName -v ${dirName}:/mnt/metadata ...
$ echo "$(hostname -I | awk '{print $1}'):$(docker port $instanceName 8080/tcp | cut -d: -f2)" > ${dirName}/external-address    # 8080 is the app's internal port; adjust to yours
Then you just read /mnt/metadata/external-address from your application and use that information with Eureka.
