Using docker within vagrant behind a proxy

I want to run apt from within a docker container, within a vagrant machine (running on virtualbox), but this fails because I'm behind a proxy.
I use vagrant-proxyconf to allow the vagrant machine itself to connect to the internet, which works fine:
if Vagrant.has_plugin?("vagrant-proxyconf")
  config.proxy.http = ...
  config.proxy.https = ...
  config.proxy.no_proxy = "localhost,127.0.0.1,.example.com"
end
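For context, these settings sit inside the Vagrant.configure block of the Vagrantfile; a minimal sketch with placeholder proxy URLs and a hypothetical box name:
Vagrant.configure("2") do |config|
  config.vm.box = "debian/stretch64"  # hypothetical box name
  if Vagrant.has_plugin?("vagrant-proxyconf")
    config.proxy.http     = "http://proxy.example.com:3128"  # placeholder
    config.proxy.https    = "http://proxy.example.com:3128"  # placeholder
    config.proxy.no_proxy = "localhost,127.0.0.1,.example.com"
  end
end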
However, these settings aren't carried through to docker containers started within the vagrant machine. When I start a debian-based docker container with
docker run -it debian /bin/bash
and within the bash I run
apt-get update
then apt can't establish a connection. I can fix this problem by adding the following to my Dockerfile
ENV http_proxy <myproxy>
but adjusting all Dockerfiles would be cumbersome, and I'd prefer not to hardcode my proxy into the Dockerfiles themselves, as those are also used in a different setup.
I've also tried telling docker which proxy to use, following https://docs.docker.com/engine/admin/systemd/
However, this appears not to have any effect on the proxy that apt uses within the docker container.
Is there a way to pass the http_proxy environment variable to all docker containers started within my machine by default? Alternatively, would it be possible to configure vagrant / virtualbox to "emulate" a "proxyless" internet connection so that I don't have to reach the proxy settings down through all the virtualization layers?

You can pass the variables as arguments to the docker build command. That way it works and the proxy address doesn't end up in the Dockerfile. Like this:
docker build --build-arg http_proxy="http://yourIp" -t yourImage .
Then in the Dockerfile you declare the variable as a build argument:
ARG http_proxy
After that the variable can be used, for example:
RUN echo ${http_proxy}
In your case you don't even need to reference it explicitly; setting the proxy variable is enough for it to be picked up during the build.
This technique is also very useful for avoiding writing passwords into Dockerfiles.
Hope it helps

Related

How to configure docker containers proxy?

First of all,
I tried the approach of setting '/etc/systemd/system/docker.service.d/http-proxy.conf' (https://docs.docker.com/config/daemon/systemd/#httphttps-proxy). It really works for the docker daemon, but it doesn't work for docker containers; it seems this way only takes effect for some commands like 'docker pull'.
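For reference, that drop-in follows the format from the linked page, roughly like this (using the proxy address from this question), followed by systemctl daemon-reload and systemctl restart docker:
[Service]
Environment="HTTP_PROXY=http://HostIP:8118"
Environment="HTTPS_PROXY=http://HostIP:8118"
As described, this only affects the daemon itself (e.g. docker pull), not the processes running inside containers.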
Secondly,
I have a lot of docker containers and I don't want to run 'docker run -e http_proxy=xxx... ' every time I start a container.
So I wondered whether there is a way to automatically load a global configuration file when the container starts. I googled it and found the suggestion to set the file '~/.docker/config.json' (How to configure docker container proxy?), but this way still does not work for me.
(my host machine system is centos7, here is my docker -v: Docker version 1.13.1, build 6e3bb8e/1.13.1)
I suspect it may be related to my docker version, or to docker being started by the systemd service, so ~/.docker/config.json does not take effect.
Finally,
I just hope that modifying a configuration file will let all my containers configure their environment variables automatically when they start (that is, automatically set 'http_proxy=http://HostIP:8118 https_proxy=http://HostIP:8118' when a container starts, like the Dockerfile ENV parameter). I want to know if there is such a way. If it can be done, I can make the containers use the host's proxy; after all, the proxy on my host is working properly.
But I was wrong: I ran a container, set http_proxy=http://HostIP:8118 and https_proxy=http://HostIP:8118 inside it, and then ran 'wget facebook.com', which failed with 'Connecting to HostIP:8118... failed: No route to host.' The host machine (centos7) can execute the same wget successfully, and I can ping the host from inside the container. I don't know why; it might be related to the firewall and port 8118.
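If it really is the firewall, something like this on the host might open the port (just a sketch, assuming firewalld is in use and the proxy actually listens on 8118 on a non-loopback interface):
firewall-cmd --permanent --add-port=8118/tcp
firewall-cmd --reload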
That is where I am stuck.
I have no other ideas - can anyone help me?
==============================
PS:
As you can see from the screenshots below, what I actually want is to install goa and goagen, but I get an error, probably for network reasons, so I wanted to try going through the proxy - hence the problem above.
1. my go docker container (screenshot: go docker wget)
2. my host (screenshot: my host wget)
You need Docker version 17.07 or more recent for the config.json file to automatically pass the proxy to containers you start. The 1.13 releases are long out of support.
This is well documented by Docker:
https://docs.docker.com/network/proxy/
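A minimal sketch of that ~/.docker/config.json (the addresses are placeholders; see the linked page for the full set of keys):
{
  "proxies": {
    "default": {
      "httpProxy": "http://proxy.example.com:3128",
      "httpsProxy": "http://proxy.example.com:3128",
      "noProxy": "localhost,127.0.0.1,.example.com"
    }
  }
}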

Using SOCKS5 proxy with docker build-args

I am having trouble figuring out how we can use docker build behind a SOCKS5 proxy.
Example Dockerfile:
FROM clearlinux:base
RUN swupd bundle-add c-basic
Doing docker build --build-arg="ALL_PROXY='socks5://proxy.company.com'" -t test-socks . does not work, i.e. the RUN command times out.
However, if I define ENV ALL_PROXY='socks5://proxy.company.com' within the Dockerfile, build works fine.
I've noticed that it is the case only with ALL_PROXY environment variable.
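One thing that may be worth checking (a sketch, not a verified fix): depending on the Docker version, ALL_PROXY may not be among the predefined proxy build args, in which case the Dockerfile needs an explicit ARG, and the nested quotes in the --build-arg value end up inside the variable:
FROM clearlinux:base
ARG ALL_PROXY
RUN swupd bundle-add c-basic
built with:
docker build --build-arg ALL_PROXY=socks5://proxy.company.com -t test-socks .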

Set permanent docker build --build-arg value for my environment

Working behind a corporate proxy - I need to build my docker images like
docker build --build-arg http_proxy=http://my.proxy:80 .
and that's fine.
I have a script that I've checked out that does a bunch of docker builds - and that fails because it's not reaching the proxy.
Is there a way to set my local environment to always use my proxy settings when doing docker build?
I did look at creating an alias - but that seems a bit gnarly given there's a space between the commands. Is there a simple global config I can modify?
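As an aside, a shell function (rather than an alias) could wrap docker build and inject the flag, since a function passes the remaining arguments through; a sketch using the proxy from the question:
docker() {
  if [ "$1" = "build" ]; then
    shift
    command docker build --build-arg http_proxy=http://my.proxy:80 "$@"
  else
    command docker "$@"
  fi
}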
First of all, make sure to configure the http_proxy setting for the docker daemon as described in HTTP/HTTPS proxy.
This configuration should be enough for docker to pick it up and use it when building the image. However, if the internal commands that the Dockerfile runs create some custom connections, this configuration may not be picked up properly.
The proxy settings can be picked up from docker info:
$ docker info | grep Proxy
Http Proxy: http://localhost:3128
Https Proxy: http://localhost:3128
You can use the values picked up by docker info.
However, what I recommend is to install a tool that transparently routes all the traffic to the http proxy. That way you can forget about the proxy, and all tools on your machine should work seamlessly.
If you are on linux, there is redsocks. There is also a docker image for it if you don't want to install it directly on the machine. For other platforms you can use proxycap.
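A rough sketch of the redsocks approach (all addresses and ports below are placeholders; check the redsocks documentation for the exact syntax): an /etc/redsocks.conf along these lines
base {
    log_debug = off;
    log_info = on;
    daemon = on;
    redirector = iptables;
}
redsocks {
    local_ip = 127.0.0.1;
    local_port = 12345;
    ip = 10.0.0.1;
    port = 3128;
    type = http-connect;
}
combined with an iptables rule that redirects outbound traffic to it, e.g.:
iptables -t nat -A OUTPUT -p tcp --dport 443 -j REDIRECT --to-ports 12345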

Start service using systemctl inside docker container

In my Dockerfile I am trying to install multiple services and want to have them all start up automatically when I launch the container.
One of the services is mysql, and when I launch the container I don't see the mysql service starting up. When I try to start it manually, I get the error:
Failed to get D-Bus connection: Operation not permitted
Dockerfile:
FROM centos:7
RUN yum -y install mariadb mariadb-server
COPY start.sh start.sh
CMD ["/bin/bash", "start.sh"]
My start.sh file:
service mariadb start
Docker build:
docker build --tag="pbellamk/mariadb" .
Docker run:
docker run -it -d --privileged=true pbellamk/mariadb bash
I have checked the centos:systemd image and that doesn't help either. How do I launch the container with the services started, using systemctl/service commands?
When you do docker run with bash as the command, the init system (e.g. SystemD) doesn’t get started (nor does your start script, since the command you pass overrides the CMD in the Dockerfile). Try to change the command you use to /sbin/init, start the container in daemon mode with -d, and then look around in a shell using docker exec -it <container id> sh.
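A hedged sketch combining those suggestions with the cgroup mount mentioned further down (adjust to your setup):
docker run -d --privileged -v /sys/fs/cgroup:/sys/fs/cgroup:ro pbellamk/mariadb /sbin/init
docker exec -it <container id> sh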
Docker is designed around the idea of a single service/process per container. Although it definitely supports running multiple processes in a container, and in no way stops you from doing that, you will eventually run into areas where multiple services in a container don't quite map to what Docker or external tools expect. Things like scaling services, or using Docker Swarm across hosts, only support the concept of one service per container.
Docker Compose allows you to compose multiple containers into a single definition, which means you can use more of the standard, prebuilt containers (httpd, mariadb) rather than building your own. Compose definitions map to Docker Swarm services fairly easily. Also look at Kubernetes and Marathon/Mesos for managing groups of containers as a service.
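For example, a minimal docker-compose.yml along those lines, using the standard images mentioned above (the root password value is a placeholder):
version: "3"
services:
  web:
    image: httpd:2.4
    ports:
      - "80:80"
  db:
    image: mariadb
    environment:
      MYSQL_ROOT_PASSWORD: example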
Process management in Docker
It's possible to run systemd in a container, but it requires --privileged access to the host and the /sys/fs/cgroup volume mounted, so it may not be the best fit for most use cases.
The s6-overlay project provides a more docker friendly process management system using s6.
It's fairly rare you actually need ssh access into a container, but if that's a hard requirement then you are going to be stuck building your own containers and using a process manager.
You can avoid running a systemd daemon inside a docker container altogether. You can even avoid writing a special start.sh script - that is another benefit when using the docker-systemctl-replacement script.
The docker systemctl.py script can parse the normal *.service files to know how to start and stop services. You can register it as the CMD of an image, in which case it will look for all the systemctl-enabled services - those will be started and stopped in the correct order.
The current testsuite includes testcases for the LAMP stack including centos, so it should run fine specifically in your setup.
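A sketch of how the Dockerfile above might look with it (verify the details against the project's README):
FROM centos:7
RUN yum -y install mariadb mariadb-server
# systemctl.py downloaded from the docker-systemctl-replacement project into the build context
COPY systemctl.py /usr/bin/systemctl
RUN systemctl enable mariadb
CMD ["/usr/bin/systemctl"]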
I found this project:
https://github.com/defn/docker-systemd
which can be used to create an image based on the stock ubuntu image but with systemd and multiuser mode.
My use case is the first one mentioned in its Readme. I use it to test the installer script of my application that is installed as a systemd service. The installer creates a systemd service then enables and starts it. I need CI tests for the installer. The test should create the installer, install the application on an ubuntu, and connect to the service from outside.
Without systemd the installer would fail, and it would be much more difficult to write the test with vagrant. So, there are valid use cases for systemd in docker.

Reference env variable from host at runtime in "env-file" to be passed to docker image

Is there a syntax to reference an environment variable from the host in a Docker env-file?
Specifically I'd like to do something like DOCKER_HOST=${HOSTNAME}, where HOSTNAME would come from the environment of the machine hosting the docker image.
The above doesn't get replaced at all and gets passed into the Docker image literally as ${HOSTNAME}.
This is generally not done at the image level, but at runtime, on docker run:
See "How to get the hostname of the docker host from inside a docker container on that host without env vars"
docker run .. -e HOST_HOSTNAME=$(hostname) ..
That does use an environment variable.
You can do so without environment variables, using -h
docker run -h=$(hostname)
But that does not work when your docker run is part of a docker compose. See issue 3840.
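Another workaround, since env-files are not interpolated, is to generate the env-file on the host right before docker run (a sketch; the file name and image name are placeholders):
echo "DOCKER_HOST=${HOSTNAME}" > runtime.env
docker run --env-file runtime.env yourImage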
