Let a Docker bridge connect to a VLAN interface - docker

Prerequisites:
sudo ip link add link eth0 name eth0.100 type vlan id 101
Problem:
I want to run OpenVPN with Docker in a container; this step is easy:
sudo docker run -v $OVPN_DATA:/etc/openvpn -p 1194:1194/udp --privileged -e DEBUG=1 kylemanna/openvpn
Then I need the container to keep routing packets to eth0.100 after OpenVPN receives data from a remote client.
Here are my ideas, but none of them work.
1:
First create bridge:
docker network create -d bridge vpn_bridge
Then start the container on vpn_bridge:
sudo docker run --net=vpn_bridge -v $OVPN_DATA:/etc/openvpn -p 1194:1194/udp --privileged -e DEBUG=1 kylemanna/openvpn
Finally, I found that I can't attach the VLAN interface to vpn_bridge.
2:
Use macvlan:
sudo docker network create -d macvlan \
--subnet=192.168.100.0/24 \
--gateway=192.168.100.1 \
-o parent=eth0.1000 pub_net
Then start the container on pub_net:
sudo docker run --net=pub_net -v $OVPN_DATA:/etc/openvpn -p 1194:1194/udp --privileged -e DEBUG=1 kylemanna/openvpn
The container joins the VLAN, but I found I couldn't connect to the container's OpenVPN server, even from the local host.
Can anybody suggest a better way? (PS: I have already solved this the traditional way, using Linux's default bridge.)
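A likely reason the host cannot reach the container in the macvlan setup is a known macvlan limitation: the host cannot talk to macvlan containers through the parent interface itself. A minimal sketch of the usual workaround, assuming the 192.168.100.0/24 subnet above and eth0.100 as the parent interface (adjust both to whatever you actually used):
sudo ip link add macvlan-shim link eth0.100 type macvlan mode bridge   # host-side macvlan interface on the same parent
sudo ip addr add 192.168.100.2/32 dev macvlan-shim                     # a free address in the subnet
sudo ip link set macvlan-shim up
sudo ip route add 192.168.100.0/24 dev macvlan-shim                    # reach the macvlan containers via the shim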

Related

Access host iptables and ufw from privileged Docker container

I would like to run iptables, ufw and reboot on the host OS (Snappy Ubuntu Core 18.04) from a Docker container running on the same host.
What volumes or Docker container parameters are required to make this possible? The container can be run as the root user with privileged access.
I'm fully aware of the security implications here, but security is not a concern in this context.
Using SSH
You can run the container with the --net=host option; then it is possible to connect to the host from the container using SSH.
In host mode, connecting from the container to port 22 on the host works.
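A minimal sketch of that SSH route, assuming an SSH server is running on the host; the user name, key path and the use of sudo below are placeholders for your own setup:
docker run --rm -it --net=host -v ~/.ssh/id_rsa:/root/.ssh/id_rsa:ro alpine sh -c \
  "apk add --no-cache openssh-client && ssh -o StrictHostKeyChecking=no admin@127.0.0.1 'sudo iptables -S'"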
Without SSH
If you don't want to use SSH, one way is explained in this post. You need to run the container with --privileged and --pid=host and then use the nsenter command. With this command you get an interactive shell from the host. You can also run only a desired command.
$ sudo docker run --privileged --pid=host -it alpine:3.8 \
nsenter -t 1 -m -u -n -i sh
$ sudo docker run --privileged --pid=host -it alpine:3.8 \
nsenter -t 1 -m -u -n -i iptables -S
Note that if you are using macOS or Windows, Docker runs inside a hypervisor VM, so with this approach you end up in the shell of that VM rather than your actual machine.

Internet access from inside docker over Oracle VM and cntlm proxy

How can I make HTTP calls to the internet from inside Docker on Ubuntu 16.04, running in an Oracle VM (5.2.4), through a cntlm proxy on Windows 7?
The proxy is configured (IP 192.168.56.1, the VM's host). Internet access works from Ubuntu's Firefox and with wget from the command line.
Docker CE (17.12.0-ce) is also configured to use the proxy IP:
/etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://192.168.56.1:3128/"
Environment="HTTPS_PROXY=http://192.168.56.1:3128/"
I can pull all Docker images successfully.
Only wget or any install calls inside a Docker container fail.
Many help pages later, I am out of ideas.
My tries:
docker run --name test --network host -e "https_proxy=https://192.168.56.101:3128" -it alpine:latest wget https://www.web.de
wget: bad address 'www.web.de'
docker run --name test --dns 8.8.8.8 -e "https_proxy=https://192.168.56.101:3128" -it alpine:latest wget https://www.web.de
wget: bad address 'www.web.de'
docker run --name test -e "https_proxy=https://192.168.56.101:3128" -it alpine:latest wget https://www.web.de
wget: bad address 'www.web.de'
docker run --name test --network host --dns 8.8.8.8 -e "https_proxy=https://192.168.56.101:3128" -it alpine:latest wget https://www.web.de
wget: bad address 'www.web.de'
I also tried all of the calls above with "http" and without the proxy environment variable.
Any other ideas?
For Docker to work with CNTLM it is important to set
Gateway yes
in the CNTLM config.
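A sketch of the relevant part of the CNTLM configuration; the listen port and the allowed range are assumptions, and the rest of the file (credentials, upstream proxy) stays as before:
# /etc/cntlm.conf (excerpt)
Listen   3128
Gateway  yes
Allow    172.17.0.0/16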
I run CNTLM directly on the VM and set all proxies within the container to http://172.17.0.1:3128.
For the sake of completeness, set all proxy environment variables in docker run:
PROXY_DOCKER="http://172.17.0.1:3128/"
docker run -e HTTP_PROXY=${PROXY_DOCKER} -e http_proxy=${PROXY_DOCKER} -e HTTPS_PROXY=${PROXY_DOCKER} -e https_proxy=${PROXY_DOCKER} ...
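A quick way to verify the proxy is reachable from a container (the image and the URL here are just examples):
docker run --rm -e http_proxy=${PROXY_DOCKER} -e https_proxy=${PROXY_DOCKER} alpine:latest wget -qO- http://www.web.de >/dev/null && echo "proxy OK"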

cannot run container after commit changes

Just basic and simple steps illustrating what I have tried:
docker pull mysql/mysql-server
sudo docker run -i -t mysql/mysql-server:latest /bin/bash
yum install vi
vi /etc/my.cnf -> bind-address=0.0.0.0
exit
docker ps
docker commit <container_id> new_image_name
docker run --name mysql -p 3306:3306 -e MYSQL_ROOT_PASSWORD=secret -d new_image_name
docker ps -a STATUS - Exited (1)
Please let me know what I did wrong.
Instead of trying to modify an existing image, try using (for testing) MYSQL_ROOT_HOST=%.
That would allow root login from any IP. (As seen in docker-library/mysql issue 241)
sudo docker run --name mysql -p 3306:3306 -e MYSQL_ROOT_PASSWORD=123456 -e MYSQL_ROOT_HOST=% -d mysql/mysql-server:latest
The README mentions:
By default, MySQL creates the 'root'@'localhost' account.
This account can only be connected to from inside the container, requiring the use of the docker exec command as noted under Connect to MySQL from the MySQL Command Line Client.
To allow connections from other hosts, set this environment variable.
As an example, the value "172.17.0.1", which is the default Docker gateway IP, will allow connections from the Docker host machine.
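With the container started that way, a connection from the Docker host might look like this (a sketch, assuming the mysql client is installed on the host and the password from the command above):
mysql -h 127.0.0.1 -P 3306 -u root -p123456 -e "SELECT 1;"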

Docker still can see open ports with and without --link flag set

I've been following these two tutorials to understand a bit about Docker networking:
https://docs.docker.com/engine/examples/running_redis_service/
https://docs.docker.com/engine/userguide/networking/default_network/dockerlinks
The first tutorial says that, by not using the -p or -P flags, the container does not expose any ports.
$ docker run --name redis-server -d <your username>/redis
And when running another container it uses the --link flag to "redis" container:
$ docker run --name redis-client --link redis:db -i -t ubuntu:14.04 /bin/bash
That way I can connect from the redis-client container to the redis-server container because they are linked. But while experimenting with other configurations, I ran another container, let's call it redis-client-2 -- just after I stopped and removed the redis-client container -- that doesn't use the --link flag:
$ docker run --name redis-client-2 -i -t ubuntu:14.04 /bin/bash
And I noticed that, even without the --link flag set, I can connect to the redis-server container's Redis server from redis-client-2.
My question is: am I misunderstanding the concept of --link and exposed ports in Docker? Why can I still connect to the redis-server container with or without the --link flag?
Thanks in advance
Docker containers on the same Docker network (the default network, if none is set up) can communicate with each other freely. --link is a vestigial feature from before the days of first-class Docker networking.
The -p and -P options only relate to exposing ports outside the Docker network (i.e. to the host) and have no bearing on container-to-container communication.

Cannot access tomcat8 server running in docker container from host machine

I am trying to connect to a web app running on tomcat8 in a docker container.
I am able to access it from within the container using lynx http://localhost:8080/myapp, but when I try to access it from the host I only get "HTTP request sent; waiting for response".
I am exposing port 8080 in the Dockerfile, and I am using sudo docker inspect mycontainer | grep IPAddress to get the IP address of the container.
The command I am using to run the docker container is this:
sudo docker run -ti --name myapp --link mysql1:mysql1 --link rabbitmq1:rabbitmq1 -e "MYSQL_HOST=mysql1" -e "MYSQL_USER=myuser" -e "MYSQL_PASSWORD=mysqlpassword" -e "MYSQL_USERNAME=mysqlusername" -e "MYSQL_ROOT_PASSWORD=rootpassword" -e "RABBITMQ_SERVER_ADDRESS=rabbitmq1" -e "MY_WEB_ENVIRONMENT_ID=qa" -e "MY_WEB_TENANT_ID=tenant1" -p "8080:8080" -d localhost:5000/myapp:latest
My Dockerfile:
FROM localhost:5000/web_base:latest
MAINTAINER "Me" <me#my_company.com>
#Install mysql client
RUN yum -y install mysql
#Add Run shell script
ADD run.sh /home/ec2-user/run.sh
RUN chmod +x /home/ec2-user/run.sh
EXPOSE 8080
ENTRYPOINT ["/bin/bash"]
CMD ["/home/ec2-user/run.sh"]
My run.sh:
sudo tomcat8 start && sudo tail -f /var/log/tomcat8/catalina.out
Any ideas why I can access it from within the container but not from the host?
Thanks
What does your docker run command look like? You still need to pass -p 8080:8080. EXPOSE in the Dockerfile only exposes the port to linked containers, not to the host VM.
I am able to access the tomcat8 server from the host now. The problem was here:
sudo tomcat8 start && sudo tail -f /var/log/tomcat8/catalina.out
Tomcat8 must be started as a service instead:
sudo service tomcat8 start && sudo tail -f /var/log/tomcat8/catalina.out
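Once Tomcat runs as a service and the container was started with -p 8080:8080, a quick check from a native Linux Docker host could be (the path is the one from the question):
curl -I http://localhost:8080/myapp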
Run the following command to find the IP address of the docker-machine VM:
$ docker-machine ls
The output will look like:
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
default * virtualbox Running tcp://192.168.99.100:2376 v1.10.3
Now open your application from the host machine at http://192.168.99.100:8080/myapp
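If you only need the address, docker-machine can also print it directly (the machine name default is the one shown in the listing above):
docker-machine ip default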

Resources