I know it is possible to pass http_proxy and https_proxy environment variables to a container, as shown e.g. in this SO answer. However, this only works for proxy-aware commands like wget and curl, as they merely read and use these environment variables.
I need to connect everything through the proxy, so that all internet access is routed via the proxy. Essentially, the proxy should be transformed into a kind of VPN.
I am thinking about something similar to the --net=container option where the container gets its network from another container.
How do I configure a container to run everything through the proxy?
Jan Garaj's comment actually pointed me in the right direction.
As noted in my question, not all programs and commands use the proxy environment variables, so simply passing the http_proxy and https_proxy env vars to docker is not a solution. I needed a solution where the whole docker container directs every network request (on certain ports) through the proxy, no matter which program or command.
The Medium article demonstrates how to build and set up a docker container that, with the help of redsocks, redirects all ftp requests to another running docker container acting as a proxy. The communication between the containers is done via a docker network.
In my case I already have a running proxy, so I don't need a docker network or a docker proxy. Also, I need to proxy http and https, not ftp.
By changing the configuration files I got it working. In this example I simply call wget ipecho.net/plain to retrieve my outside IP. If it works, this should be the IP of the proxy, not my real IP.
Configuration
Dockerfile:
FROM debian:latest
LABEL maintainer="marlar"
WORKDIR /app
ADD . /app
RUN apt-get update
RUN apt-get upgrade -qy
RUN apt-get install iptables redsocks curl wget lynx -qy
COPY redsocks.conf /etc/redsocks.conf
ENTRYPOINT /bin/bash run.sh
setup script (run.sh):
#!/bin/bash
echo "Configuration:"
echo "PROXY_SERVER=$PROXY_SERVER"
echo "PROXY_PORT=$PROXY_PORT"
echo "Setting config variables"
sed -i "s/vPROXY-SERVER/$PROXY_SERVER/g" /etc/redsocks.conf
sed -i "s/vPROXY-PORT/$PROXY_PORT/g" /etc/redsocks.conf
echo "Restarting redsocks and redirecting traffic via iptables"
/etc/init.d/redsocks restart
iptables -t nat -A OUTPUT -p tcp --dport 80 -j REDIRECT --to-port 12345
iptables -t nat -A OUTPUT -p tcp --dport 443 -j REDIRECT --to-port 12345
echo "Getting IP ..."
wget -q -O- https://ipecho.net/plain
redsocks.conf:
base {
    log_debug = off;
    log_info = on;
    log = "file:/var/log/redsocks.log";
    daemon = on;
    user = redsocks;
    group = redsocks;
    redirector = iptables;
}
redsocks {
    local_ip = 127.0.0.1;
    local_port = 12345;
    ip = vPROXY-SERVER;
    port = vPROXY-PORT;
    type = http-connect;
}
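If something goes wrong, the redsocks log (the path configured in the base section above) is a good first place to look, e.g. by appending this line to run.sh before the container exits:
tail /var/log/redsocks.log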
Building the container
docker build -t proxy-via-iptables .
Running the container
docker run -i -t --privileged -e PROXY_SERVER=x.x.x.x -e PROXY_PORT=xxxx proxy-via-iptables
Replace the proxy server and port with the relevant numbers.
If the container works and uses the external proxy, wget should spit out the IP of the proxy even though the wget command does not use the -e use_proxy=yes option. If it doesn't work, it will give you your own IP. Or perhaps no IP at all, depending on how it fails.
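To double-check, you can compare the container's output with the IP you get without the proxy. A minimal check, assuming curl is available on the host:
# on the host, without the proxy: prints your real external IP
curl -s https://ipecho.net/plain
# then run the container as shown above: the final wget should print the proxy's IP instead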
You can use the proxy env var:
docker container run \
-e HTTP_PROXY=http://username:password@proxy2.domain.com \
-e HTTPS_PROXY=http://username:password@proxy2.domain.com \
yourimage
If you want the proxy server to be automatically used when starting a container, you can configure default proxy servers in the Docker CLI configuration file (~/.docker/config.json). You can find instructions for this in the networking section of the user guide.
For example:
{
  "proxies": {
    "default": {
      "httpProxy": "http://username:password@proxy2.domain.com",
      "httpsProxy": "http://username:password@proxy2.domain.com"
    }
  }
}
To verify if the ~/.docker/config.json configuration is working, start a container and print its env:
docker container run --rm busybox env
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=220e4df13604
HTTP_PROXY=http://username:password@proxy2.domain.com
http_proxy=http://username:password@proxy2.domain.com
HTTPS_PROXY=http://username:password@proxy2.domain.com
https_proxy=http://username:password@proxy2.domain.com
HOME=/root
Related
I am starting a local docker container as an environment to run my applications and I use CLion's remote host capabilities to manage the toolchain. My applications communicate on a specific network interface across various ports and ip addresses.
In a perfect world I would be able to run my applications locally and then also start one in a docker container through CLion and communicate with the locally running apps.
I know I can start a docker container with --network=host, but that seems to remove the ability to SSH into the container, which is a prerequisite to using CLion with docker. Is there a way to maintain both? Use the host network but also enable SSH'ing into the docker container?
Snippet from my Dockerfile that configures the SSH server:
########################################################
# Remote debugging and login in
########################################################
RUN mkdir /var/run/sshd
RUN echo 'root:root' | chpasswd
RUN sed -i 's/#PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
# SSH login fix. Otherwise user is kicked off after login
RUN sed 's#session\s*required\s*pam_loginuid.so#session optional pam_loginuid.so#g' -i /etc/pam.d/sshd
ENV NOTVISIBLE "in users profile"
RUN echo "export VISIBLE=now" >> /etc/profile
# 22 for ssh server. 7777 for gdb server.
EXPOSE 22 7777
RUN useradd -ms /bin/bash debugger
RUN echo 'debugger:pwd' | chpasswd
CMD ["/usr/sbin/sshd", "-D"]
UPDATE:
With CLion 2021.3 you no longer need to SSH into your docker container. It is now supported as its own toolchain type: https://blog.jetbrains.com/clion/2021/10/clion-2021-3-eap-new-docker-toolchain/#new_docker_toolchain
Using --network=host means that your container will use the host machine's port 22, and if the machine already runs a process on port 22, the SSH daemon in the container will fail to start.
To confirm, you can look at the daemon's log files.
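For example, you can check on the host whether something is already listening on port 22 before starting the container (a sketch assuming ss is available; netstat works similarly):
ss -tlnp | grep ':22 '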
You can configure the SSH daemon to run on a different port than 22 (e.g., 2233), thus avoiding the port collision. In your Dockerfile, add the following line:
RUN sed -i 's/\(^Port\)/#\1/' /etc/ssh/sshd_config && echo Port 2233 >> /etc/ssh/sshd_config
Then configure CLion to connect to the container using the alternative port.
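For example, assuming the image built from the Dockerfile above is tagged my-dev-image (a placeholder name), you could start it on the host network and connect as the debugger user defined in the Dockerfile:
docker run -d --network=host --name clion-dev my-dev-image
ssh -p 2233 debugger@localhost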
How can I make internet http calls from inside docker on Ubuntu 16.04 over Oracle VM (5.2.4) and cntlm proxy on Windows 7?
The proxy is configured (IP 192.168.56.1, the VM's host). Internet access works in Ubuntu's Firefox and with wget from the command line.
Docker CE (17.12.0-ce) is also configured to use the proxy IP:
/etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://192.168.56.1:3128/"
Environment="HTTPS_PROXY=http://192.168.56.1:3128/"
I could pull all docker images successfully.
Only wget or any install calls inside a docker container fail.
Many help pages later, I still have no idea.
My attempts:
docker run --name test --network host -e "https_proxy=https://192.168.56.101:3128" -it alpine:latest wget https://www.web.de
wget: bad address 'www.web.de'
docker run --name test --dns 8.8.8.8 -e "https_proxy=https://192.168.56.101:3128" -it alpine:latest wget https://www.web.de
wget: bad address 'www.web.de'
docker run --name test -e "https_proxy=https://192.168.56.101:3128" -it alpine:latest wget https://www.web.de
wget: bad address 'www.web.de'
docker run --name test --network host --dns 8.8.8.8 -e "https_proxy=https://192.168.56.101:3128" -it alpine:latest wget https://www.web.de
wget: bad address 'www.web.de'
I also tried all of the above calls with "http" and without the proxy environment variables.
Any other ideas for me?
For docker to work with CNTLM it is important to set
Gateway yes
in the CNTLM config.
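The relevant lines in the CNTLM configuration (usually /etc/cntlm.conf; the listen port shown is the one used below) would look like this:
Listen  3128
Gateway yes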
I run CNTLM directly on the VM and set all proxies within the container to http://172.17.0.1:3128.
For the sake of completeness, set all proxy environment variables in docker run:
PROXY_DOCKER="http://172.17.0.1:3128/"
docker run -e HTTP_PROXY=${PROXY_DOCKER} -e http_proxy=${PROXY_DOCKER} -e HTTPS_PROXY=${PROXY_DOCKER} -e https_proxy=${PROXY_DOCKER} ...
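A quick way to verify the setup, assuming the alpine image (172.17.0.1 is the address of the default docker0 bridge, which is how the container reaches CNTLM on the VM):
docker run --rm -e https_proxy=${PROXY_DOCKER} alpine:latest wget -q -O /dev/null https://www.web.de && echo "proxy works"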
I am a newbie with docker. I tried to set a proxy for the debian:jessie image but didn't manage it. I followed this link. I applied all of it with cat (example: 'cat > proxy.sh', because vi or another editor is not installed), but there is an error about my proxy in the apt-get update command.
Error Photo
My proxy: http://username:password@proxy2.domain.com
You can set the proxy environment variables when starting the container, for example:
docker container run \
-e HTTP_PROXY=http://username:password@proxy2.domain.com \
-e HTTPS_PROXY=http://username:password@proxy2.domain.com \
myimage
If you want the proxy server to be automatically used when starting a container, you can configure default proxy servers in the Docker CLI configuration file (~/.docker/config.json). You can find instructions for this in the networking section of the user guide.
For example:
{
  "proxies": {
    "default": {
      "httpProxy": "http://username:password@proxy2.domain.com",
      "httpsProxy": "http://username:password@proxy2.domain.com"
    }
  }
}
To verify if the ~/.docker/config.json configuration is working, start a container and print its env:
docker container run --rm busybox env
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=220e4df13604
HTTP_PROXY=http://username:password@proxy2.domain.com
http_proxy=http://username:password@proxy2.domain.com
HTTPS_PROXY=http://username:password@proxy2.domain.com
https_proxy=http://username:password@proxy2.domain.com
HOME=/root
You need to instruct apt to connect through the proxy inside the container:
# echo 'Acquire::http::proxy "http://proxy:port/";' > /etc/apt/apt.conf.d/40proxy
Remember, this should be written inside the container.
And on the machine that runs docker, the proxy should be configured as people said before in their comments.
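If you build your own image, the same line can be baked into the Dockerfile so apt-get works during the build (proxy:port is a placeholder, as above):
FROM debian:jessie
RUN echo 'Acquire::http::proxy "http://proxy:port/";' > /etc/apt/apt.conf.d/40proxy
RUN apt-get update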
I'd like to create the following infrastructure flow:
How can that be achieved using Docker?
First you need to install an SSH server in the images you wish to SSH into. You can use a base image for all your containers with the SSH server installed.
Then you only have to run each container mapping the SSH port (default 22) to one of the host's ports (Remote Server in your diagram), using -p <hostPort>:<containerPort>. I.e.:
docker run -p 52022:22 container1
docker run -p 53022:22 container2
Then, if ports 52022 and 53022 of the host are accessible from outside, you can directly SSH to the containers using the IP of the host (Remote Server), specifying the port in ssh with -p <port>. I.e.:
ssh -p 52022 myuser@RemoteServer --> SSH to container1
ssh -p 53022 myuser@RemoteServer --> SSH to container2
Notice: this answer promotes a tool I've written.
The selected answer here suggests installing an SSH server into every image. Conceptually this is not the right approach (https://docs.docker.com/articles/dockerfile_best-practices/).
I've created a containerized SSH server that you can 'stick' to any running container. This way you can create compositions with every container. The only requirement is that the container has bash.
The following example would start an SSH server exposed on port 2222 of the local machine.
$ docker run -d -p 2222:22 \
-v /var/run/docker.sock:/var/run/docker.sock \
-e CONTAINER=my-container -e AUTH_MECHANISM=noAuth \
jeroenpeeters/docker-ssh
$ ssh -p 2222 localhost
For more pointers and documentation see: https://github.com/jeroenpeeters/docker-ssh
Not only does this defeat the idea of one process per container, it is also a cumbersome approach when using images from the Docker Hub since they often don't (and shouldn't) contain an SSH server.
These files will successfully set up sshd and run the service so you can SSH in locally. (You are using Cyberduck, aren't you?)
Dockerfile
FROM swiftdocker/swift
MAINTAINER Nobody
RUN apt-get update && apt-get -y install openssh-server supervisor
RUN mkdir /var/run/sshd
RUN echo 'root:password' | chpasswd
RUN sed -i 's/PermitRootLogin without-password/PermitRootLogin yes/' /etc/ssh/sshd_config
# SSH login fix. Otherwise user is kicked off after login
RUN sed 's#session\s*required\s*pam_loginuid.so#session optional pam_loginuid.so#g' -i /etc/pam.d/sshd
ENV NOTVISIBLE "in users profile"
RUN echo "export VISIBLE=now" >> /etc/profile
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
EXPOSE 22
CMD ["/usr/bin/supervisord"]
supervisord.conf
[supervisord]
nodaemon=true
[program:sshd]
command=/usr/sbin/sshd -D
To build, run (starting the daemon), and jump into a shell:
docker build -t swift3-ssh .
docker run -p 2222:22 -i -t swift3-ssh
docker ps # find container id
docker exec -i -t <containerid> /bin/bash
I guess it is possible. You just need to install an SSH server in each container and expose a port on the host. The main annoyance would be maintaining/remembering the mapping of ports to containers.
However, I have to question why you'd want to do this. SSH'ing into containers should be rare enough that it's no hassle to SSH to the host and then use docker exec to get into the container.
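For example (host and container names are placeholders):
ssh myuser@dockerhost
docker exec -it mycontainer /bin/bash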
Create a docker image with openssh-server preinstalled:
Dockerfile
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y openssh-server
RUN mkdir /var/run/sshd
RUN echo 'root:screencast' | chpasswd
RUN sed -i 's/PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
# SSH login fix. Otherwise user is kicked off after login
RUN sed 's#session\s*required\s*pam_loginuid.so#session optional pam_loginuid.so#g' -i /etc/pam.d/sshd
ENV NOTVISIBLE "in users profile"
RUN echo "export VISIBLE=now" >> /etc/profile
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
Build the image using:
$ docker build -t eg_sshd .
Run a test_sshd container:
$ docker run -d -P --name test_sshd eg_sshd
$ docker port test_sshd 22
0.0.0.0:49154
SSH to your container:
$ ssh root@192.168.1.2 -p 49154
# The password is ``screencast``.
root@f38c87f2a42d:/#
Source: https://docs.docker.com/engine/examples/running_ssh_service/#build-an-eg_sshd-image
This is a quick way, but not permanent.
First, create a container:
docker run ..... -p 22022:2222 .....
Port 22022 on your host machine will map to port 2222 in the container; we change the SSH port in the container later.
Then, in your container, execute the following commands:
apt update && apt install openssh-server # install ssh server
passwd #change root password
In the file /etc/ssh/sshd_config, change these:
Uncomment Port and change it to 2222:
Port 2222
Uncomment PermitRootLogin and set it to:
PermitRootLogin yes
And finally start the SSH server:
/etc/init.d/ssh start
You can log in to your container now:
ssh -p 22022 root@HostIP
Remember: if you restart the container, you need to start the SSH server again.
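For example (container name is a placeholder):
docker start mycontainer
docker exec mycontainer /etc/init.d/ssh start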
I just started using docker and followed the following tutorial: https://docs.docker.com/engine/admin/using_supervisord/
FROM ubuntu:14.04
RUN apt-get update && apt-get upgrade -y
RUN apt-get install -y openssh-server apache2 supervisor
RUN mkdir -p /var/lock/apache2 /var/run/apache2 /var/run/sshd /var/log/supervisor
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
EXPOSE 22 80
CMD ["/usr/bin/supervisord"]
and
[supervisord]
nodaemon=true
[program:sshd]
command=/usr/sbin/sshd -D
[program:apache2]
command=/bin/bash -c "source /etc/apache2/envvars && exec /usr/sbin/apache2 -DFOREGROUND"
Build and run:
sudo docker build -t <yourname>/supervisord .
sudo docker run -p 22 -p 80 -t -i <yourname>/supervisord
My question is: when docker runs on my server with IP 88.xxx.x.xxx, how can I access the apache server running inside the docker container from the browser on my computer? I would like to use a docker container as a web server.
You will have to use port forwarding to be able to access your docker container from the outside world.
From the Docker docs:
By default Docker containers can make connections to the outside world, but the outside world cannot connect to containers.
But if you want containers to accept incoming connections, you will need to provide special options when invoking docker run.
So, what does this mean? You will have to specify a port on your host machine (typically port 80) and forward all connections on that port to the docker container. Since you are running Apache in your docker container you probably want to forward the connection to port 80 on the docker container as well.
This is best done via the -p option for the docker run command.
sudo docker run -p 80:80 -t -i <yourname>/supervisord
The part of the command that says -p 80:80 means that you forward port 80 from the host to port 80 on the container.
When this is set up correctly, you can point a browser at http://88.x.x.x and the connection will be forwarded to the container as intended.
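You can also test this from your computer's command line instead of a browser, e.g. with curl:
curl -I http://88.x.x.x/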
The Docker docs describe the -p option thoroughly. There are a few ways of specifying the flag:
# Maps the provided host_port to the container_port but only
# binds to the specific external interface
-p IP:host_port:container_port
# Maps the provided host_port to the container_port for all
# external interfaces (all IPs)
-p host_port:container_port
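For example, to make the web server reachable only from the host machine itself, you could bind to the loopback interface (a sketch reusing the image built above):
docker run -p 127.0.0.1:8080:80 -t -i <yourname>/supervisord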
Edit: When this question was originally posted there was no official docker image for the Apache web server. Now one exists.
The simplest way to get Apache up and running is to use the official Docker container. You can start it by using the following command:
$ docker run -p 80:80 -dit --name my-app -v "$PWD":/usr/local/apache2/htdocs/ httpd:2.4
This way you simply mount a folder on your file system so that it is available in the docker container and your host port is forwarded to the container port as described above.
There is an official image for apache. The image documentation contains instructions in how you can use this official images as a base for a custom image.
To see how it's done take a peek at the Dockerfile used by the official image:
https://github.com/docker-library/httpd/blob/master/2.4/Dockerfile
Example
Ensure files are accessible to root
sudo chown -R root:root /path/to/html_files
Host these files using the official docker image:
docker run -d -p 80:80 --name apache -v /path/to/html_files:/usr/local/apache2/htdocs/ httpd:2.4
The files are now accessible on port 80.