Set permanent docker build --build-arg value for my environment - docker

Working behind a corporate proxy - I need to build my docker images like
docker build --build-arg http_proxy=http://my.proxy:80 .
and that's fine.
I have a script that I've checked out that does a bunch of docker builds - and that fails because it's not reaching the proxy.
Is there a way to set my local environment to always use my proxy settings when doing docker build?
I did look at creating an alias - but that seems a bit gnarly given there's a space in the command. Is there a simple global config I can modify?

First of all, make sure to configure the http_proxy setting for the docker daemon as described in HTTP/HTTPS proxy.
This configuration should be enough for docker to pick it up and use it when building the image. However, if the commands that the Dockerfile runs create their own custom connections, this configuration may not be picked up properly.
The proxy settings can be picked up from docker info:
$ docker info | grep Proxy
Http Proxy: http://localhost:3128
Https Proxy: http://localhost:3128
You can use the values picked up by docker info.
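For example, a rough sketch of a one-off build that reuses whatever the daemon reports (the grep/awk matching is an assumption; adjust it to your docker version's output format):
$ proxy="$(docker info 2>/dev/null | grep -i 'http proxy' | awk -F': ' '{print $2}')"
$ docker build --build-arg http_proxy="$proxy" --build-arg https_proxy="$proxy" .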
However, what I recommend is to install a tool to transparently route all the traffic to the http proxy. That way you can forget about the proxy and all tools on your machine should work seamlessly.
If you are on Linux, there is redsocks. There is also a docker image for it if you don't want to install it directly on the machine. For other platforms you can use proxycap.
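Another option, if your docker client is roughly 17.07 or newer, is the proxies section of the client-side ~/.docker/config.json; the client then injects these variables into every build and every container you start, so no --build-arg is needed. A minimal sketch (proxy URL is a placeholder taken from the question):
{
  "proxies": {
    "default": {
      "httpProxy": "http://my.proxy:80",
      "httpsProxy": "http://my.proxy:80",
      "noProxy": "localhost,127.0.0.1"
    }
  }
}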

Related

docker-compose build and up

I am not an advanced user, so please bear with me.
I am building a docker image using docker-compose -f mydocker-compose-file.yml ... on my machine.
The image is then pushed to a remote docker registry.
Then from a remote server I pull down this image.
To run this image, I have to copy mydocker-compose-file.yml from my machine to the remote server and then run docker-compose -f mydocker-compose-file.yml up -d.
I find this very inefficient - why do I need the same YAML file just to run the docker image (do I?).
Is there a way to just spin up the container on the remote machine without this file?
As of compose 1.24 along with the 18.09 release of docker (you'll need at least that client version on the remote host), you can run docker commands to a remote host over SSH.
# all docker commands in this shell will now talk to the remote host
export DOCKER_HOST=ssh://user@host
# you can verify that with docker info to see which engine you're talking to
docker info
# and now run your docker-compose up command locally to start/stop containers
docker-compose up -d
With previous versions, you could configure TLS certificates to allow specific clients to connect to the docker API over a network connection. See these docs for more details.
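A rough sketch of that older TLS setup, assuming you've already generated ca.pem, cert.pem, and key.pem as described in those docs (host name and cert directory are placeholders):
export DOCKER_HOST=tcp://remote-host:2376
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH=$HOME/.docker/remote-certs   # must contain ca.pem, cert.pem, key.pem
docker info          # confirm you're talking to the remote engine
docker-compose up -d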
Note, if you have host volumes, the variables and paths will be expanded to your laptop directories, but the host mounts will happen on the remote server where those directories may not exist. This is a good situation to switch to named volumes.
Everything you can do with Docker Compose, you can do with plain docker commands.
Depending on how exactly you're interacting with the remote server, your tooling might have native ways to do this. One specific example I'm familiar with is the Ansible docker_container module. If you're already using a tool like Ansible, Chef, or Salt, you can probably use a tool like this to do the same thing your docker-compose.yml file does.
But otherwise there's more or less a direct translation between a docker-compose.yml file
version: '3'
services:
  foo:
    image: me/foo:20190510.01
    ports: ['8080:8080']
and a command line
docker run -d --name foo -p 8080:8080 me/foo:20190510.01
My experience has been that the docker run commands quickly become unwieldy and you want to record them in a file; and once they're in a file, you start to wish they were in a more structured format, even if you need an auxiliary tool to run them; which brings you back to copying around the docker-compose.yml file. I think that's pretty routine. (Something needs to tell the server what to run.)

how to configure docker containers proxy?

How do I configure a proxy for docker containers?
First of all,
I tried the approach of setting '/etc/systemd/system/docker.service.d/http-proxy.conf' (https://docs.docker.com/config/daemon/systemd/#httphttps-proxy), and it really works for the docker daemon, but it doesn't work for docker containers; it seems this only takes effect for commands like 'docker pull'.
Secondly,
I have a lot of docker containers, and I don't want to pass 'docker run -e http_proxy=xxx... ' every time I start a container.
So I wondered if there is a way to automatically load a global configuration file when a container starts. I googled it and found the suggestion to set the file '~/.docker/config.json' (How to configure docker container proxy?), but this way still does not work for me.
(My host machine is CentOS 7; here is my docker -v: Docker version 1.13.1, build 6e3bb8e/1.13.1)
I feel it may be related to my docker version, or to docker being started by the systemd service, so ~/.docker/config.json does not take effect.
Finally,
I just hope that modifying a configuration file will let all my containers automatically get the environment variables when they start (that is, automatically set 'http_proxy=http://HostIP:8118 https_proxy=http://HostIP:8118' when a container starts, like the Dockerfile ENV instruction). I want to know if there is such a way. If it can be done, I can make the containers use the host's proxy; after all, the proxy on my host is working properly.
But I was wrong: I ran a container, then set http_proxy=http://HostIP:8118 and https_proxy=http://HostIP:8118 inside it, but when I ran 'wget facebook.com' I got 'Connecting to HostIP:8118... failed: No route to host.' The host machine (CentOS 7) can run the wget successfully, and I can ping the host from inside the container. I don't know why; it might be related to the firewall and port 8118.
That's it - I've run out of ideas. Can anyone help me?
==============================
PS: As you can see from the screenshots (not reproduced here), what I actually want is to install goa and goagen, but I get an error, probably for network reasons, so I wanted to try going through the proxy - hence the problem above.
1. my go docker container: [screenshot: wget failing inside the container]
2. my host: [screenshot: wget succeeding on the host]
You need version 17.07 or more recent to automatically pass the proxy to containers you start using the config.json file. The 1.13 releases are long out of support.
This is well documented from docker:
https://docs.docker.com/network/proxy/
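Once you're on a supported release and have the proxies block in ~/.docker/config.json on the client, a quick sanity check (busybox is just a throwaway image here) is to confirm new containers receive the variables:
$ docker run --rm busybox env | grep -i proxy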

How to run container in a remote docker host with Jenkins

I have two servers:
Server A: Build server with Jenkins and Docker installed.
Server B: Production server with Docker installed.
I want to build a Docker image in Server A, and then run the corresponding container in Server B. The question is then:
What's the recommended way of running a container in Server B from Server A, once Jenkins is done with the docker build? Do I have to push the image to Docker hub to pull it in Server B, or can I somehow transfer the image directly?
I'm really not looking for specific Jenkins plugins or stuff, but rather, from a security and architecture standpoint, what's the best approach to accomplish this?
I've read a ton of posts and SO answers about this and have come to realize that there are plenty of ways to do it, but I'm still unsure what's the ultimate, most common way to do this. I've seen these alternatives:
Using docker-machine
Using Docker Restful Remote API
Using plain ssh root@server.b "docker run ..."
Using Docker Swarm (I'm super noob so I'm still unsure if this is even an option for my use case)
Edit:
I run Servers A and B in Digital Ocean.
A Docker image can be saved to a regular tar archive:
docker image save -o <FILE> <IMAGE>
Docs here: https://docs.docker.com/engine/reference/commandline/image_save/
Then scp this tar archive to another host, and run docker load to load the image:
docker image load -i <FILE>
Docs here: https://docs.docker.com/engine/reference/commandline/image_load/
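As a variation, the archive doesn't have to touch disk at all; assuming SSH access from Server A to Server B, you can stream it directly (user, host, and image name are placeholders):
docker image save me/myapp:1.0 | ssh user@server-b docker image load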
This save-scp-load method is rarely used. The common approach is to set up a private Docker registry behind your firewall and push images to or pull them from that private registry. This doc describes how to deploy a container registry. Alternatively, you can choose a registry service provided by a third party, such as GitLab's container registry.
When using Docker repositories, you only push/pull the layers which have been changed.
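For illustration, a rough sketch of that registry workflow (registry host and image name are placeholders):
# on Server A, after the Jenkins build
docker tag myapp:1.0 registry.example.internal/myapp:1.0
docker push registry.example.internal/myapp:1.0
# on Server B
docker pull registry.example.internal/myapp:1.0
docker run -d --name myapp registry.example.internal/myapp:1.0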
You can use the Docker REST API. The Jenkins HTTP Request plugin can be used to make the HTTP requests. You can also run Docker commands directly against a remote Docker host by setting the DOCKER_HOST environment variable. To export the environment variable to the current shell:
export DOCKER_HOST="tcp://your-remote-server.org:2375"
Please be aware of the security concerns when allowing TCP traffic. More info.
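As an illustration only (endpoint paths follow the Docker Engine API; image and container names are placeholders), the same thing the plugin would do can be sketched with curl:
# list running containers on the remote engine
curl -s http://your-remote-server.org:2375/containers/json
# create and then start a container from an image already present on that host
curl -s -X POST -H "Content-Type: application/json" \
     -d '{"Image": "myapp:1.0"}' \
     "http://your-remote-server.org:2375/containers/create?name=myapp"
curl -s -X POST http://your-remote-server.org:2375/containers/myapp/start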
Another method is to use SSH Agent Plugin in Jenkins.
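With the SSH credentials wired up, the Jenkins job then boils down to running the Docker commands on Server B over SSH, roughly like this (user, host, registry, and image names are placeholders):
ssh deploy@server-b '
  docker pull registry.example.internal/myapp:1.0
  docker rm -f myapp 2>/dev/null || true
  docker run -d --name myapp -p 8080:8080 registry.example.internal/myapp:1.0
'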

How do you handle nontrivial environment differences with docker?

I recognize that docker is intended to reduce the friction of moving an application from one environment to another, and in many cases doing things like overriding environment variables is pretty easy at runtime.
Consider a situation where all development happens behind a corporate proxy, but then the images (or containers or Dockerfiles) need to be shipped to a different environment which has different architecture requirements. The specific case I'm thinking of is that the development environment includes a pretty invasive corporate proxy. The image needs (in order to function) the ability to hit services on the internet, so the working Dockerfile looks something like this in development:
FROM centos
ENV http_proxy=my.proxy.url \
    https_proxy=my.proxy.url
# these lines required for the proxy to be trusted, most apps block it otherwise b/c SSL inspection
COPY ./certs/*.pem /etc/pki/ca-trust/source/anchors/
RUN /usr/bin/update-ca-trust extract
## more stuff to actually run the app, etc
In the production environment, there is no proxy and no need to extract pem files. I recognize that I can set the environment variables to not use the proxy at runtime (or conversely, set them only during development), but either way this feels pretty leaky to me in terms of the quasi-encapsulation I expect from Docker.
I recognize as well that in this particular example, it's not that big a deal to copy and extract the pem files that won't be used in production, but it made me wonder about best practices in this space, as I'm sure this isn't the only example.
Ideally I would like to let the host machine manage the proxy requirements (and really, any environment differences), but I haven't been able to find a way to do that except by modifying environment variables.
You might be able to use iptables on your development machine to proxy traffic from containers to a proxy. Then your image would be the same in each environment it runs in, the network differences would be managed by the host. See http://silarsis.blogspot.nl/2014/03/proxy-all-containers.html for more information.
IMO I wouldn't worry too much about it if it works. Image still runs in every environment so you're not really "giving something up" other than semantics :)
You can probably configure this at the Docker Engine level, using the instruction at: https://docs.docker.com/engine/admin/systemd/#httphttps-proxy
Create a systemd drop-in directory for the docker service:
$ mkdir -p /etc/systemd/system/docker.service.d
Create a file called /etc/systemd/system/docker.service.d/http-proxy.conf that adds the HTTP_PROXY environment variable:
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:80/"
Or, if you are behind an HTTPS proxy server, create a file called
/etc/systemd/system/docker.service.d/https-proxy.conf that adds the
HTTPS_PROXY environment variable:
[Service]
Environment="HTTPS_PROXY=https://proxy.example.com:443/"
If you have internal Docker registries that you need to contact without
proxying you can specify them via the NO_PROXY environment variable:
Environment="HTTP_PROXY=http://proxy.example.com:80/"
"NO_PROXY=localhost,127.0.0.1,docker-registry.somecorporation.com"
Or, if you are behind an HTTPS proxy server:
Environment="HTTPS_PROXY=https://proxy.example.com:443/"
"NO_PROXY=localhost,127.0.0.1,docker-registry.somecorporation.com"
Flush changes:
$ sudo systemctl daemon-reload
Restart Docker:
$ sudo systemctl restart docker
Verify that the configuration has been loaded:
$ systemctl show --property=Environment docker
Environment=HTTP_PROXY=http://proxy.example.com:80/
Or, if you are behind an HTTPS proxy server:
$ systemctl show --property=Environment docker
Environment=HTTPS_PROXY=https://proxy.example.com:443/

Using docker within vagrant behind a proxy

I want to run apt from within a docker container, within a vagrant machine (running on virtualbox), but this fails because I'm behind a proxy.
I use vagrant-proxyconf to allow the vagrant machine itself to connect to the internet, which works fine:
if Vagrant.has_plugin?("vagrant-proxyconf")
  config.proxy.http = ...
  config.proxy.https = ...
  config.proxy.no_proxy = "localhost,127.0.0.1,.example.com"
end
However, these settings aren't carried through to docker containers started within the vagrant machine. When I start a debian-based docker container with
docker run -it debian /bin/bash
and within the bash I run
apt-get update
then apt can't establish a connection. I can fix this problem by adding the following to my Dockerfile
ENV http_proxy <myproxy>
but adjusting all Dockerfiles would be cumbersome, and I'd prefer not to hardcode my proxy into the Dockerfiles themselves, as those are also used in a different setup.
I've also tried telling docker which proxy to use, following https://docs.docker.com/engine/admin/systemd/
However, this appears not to have any effect on the proxy that apt uses within the docker container.
Is there a way to pass the http_proxy environment variable to all docker containers started within my machine by default? Alternatively, would it be possible to configure vagrant / virtualbox to "emulate" a "proxyless" internet connection so that I don't have to reach the proxy settings down through all the virtualization layers?
You can add the variables by passing them as arguments to the docker build command. That way it works and the proxy IP doesn't end up in the Dockerfile:
docker build -t yourimage --build-arg http_proxy="http://yourIp" .
Then, in the Dockerfile, you must declare the variable as a build argument:
ARG http_proxy
The variable is then automatically available, and can be used like this:
RUN echo ${http_proxy}
But in your case you don't need to reference it at all; just passing the proxy variable is enough for it to be used during the build.
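Putting it together, a rough sketch (image name and proxy URL are placeholders) that passes the proxy only at build time:
docker build -t yourimage \
  --build-arg http_proxy="http://my.proxy:80" \
  --build-arg https_proxy="http://my.proxy:80" \
  --build-arg no_proxy="localhost,127.0.0.1,.example.com" .
Note that http_proxy, https_proxy, and no_proxy are among Docker's predefined build args, so recent versions of the builder accept them even without an ARG line in the Dockerfile.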
This technique can also be very useful to avoid writing passwords in Dockerfiles.
Hope it helps

Resources