How to tunnel a docker X windows to a remote host? - docker

When I am at work on Ubuntu 14.04 (IP: a.b.c.d) and I want to execute a program (e.g. firefox) in a docker container and see its graphical output, I start a shell in the docker container and in that shell execute:
DISPLAY=a.b.c.d:0 firefox
On the other hand, when I am at home and I need to run a program on the work-pc and get the output on the home-pc, which has a private (NATed) IP address, I connect with:
$ ssh -X work-pc
then I run the program in that shell and get the output locally.
Is there a way of redirecting the output of the docker container to home through the "ssh -X" tunnel?
I know I could install an SSH server in the container, forward a port on the work-pc to port 22 of the container, forward a home-pc local port to that work-pc port (using ssh -L port:host:port work-pc), and connect from the home-pc to the container with "ssh -X" to get the output at home, but I wonder if there is another way.
Thanks.
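Not from the thread, but one way to sketch the idea: inside an "ssh -X" session, $DISPLAY on the work-pc already points at the SSH X11 tunnel, so a container that shares the work-pc's network and X authority can reuse it ("x11-image" below is a placeholder for any image with firefox installed):

```shell
# On home-pc: open an X-forwarded session to the work machine.
ssh -X work-pc

# Now on work-pc: $DISPLAY is something like localhost:10.0,
# pointing at the SSH X11 tunnel. Run the container on the host
# network so it can reach that tunnel, and share the X authority
# cookie so the X server accepts the connection.
docker run --rm --network host \
  -e DISPLAY="$DISPLAY" \
  -v "$HOME/.Xauthority:/root/.Xauthority:ro" \
  x11-image firefox
```

This is essentially the same mechanism the accepted setup below uses, with the tunnel's DISPLAY substituted for the direct a.b.c.d:0 one.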

I got something to work following the instructions at https://dzone.com/articles/docker-x11-client-via-ssh.
My docker-compose has:
version: "3.7"
services:
  rhel:
    privileged: true
    build:
      context: /home/mpawlowsky/docker
      dockerfile: Dockerfile
    volumes:
      - /tmp/.X11-unix:/tmp/.X11-unix
      - /home/mpawlowsky/.Xauthority:/root/.Xauthority:rw
    cap_add:
      - NET_ADMIN
      - NET_RAW
    environment:
      - DISPLAY
    network_mode: host
I start the container, open a shell in it, and run firefox:
$ docker-compose up -d
$ docker exec -it rhel /bin/bash
$ firefox

Related

'Cypress could not verify that this server is running' error when using Docker

I am running Cypress version 10.9 from inside Docker on macOS. I set my base URL to localhost:80. As a simple example, I am running an Apache server on localhost:80; if I go there in a web browser I get the 'It works!' page, so it is indeed up. I can also reach localhost:80 from the same terminal in which I execute my Docker Cypress container.
But I get this error every time when attempting to run my Cypress container:
Cypress could not verify that this server is running:
> http://localhost
We are verifying this server because it has been configured as your baseUrl.
I do see there are some Stack Overflow posts (e.g. https://stackoverflow.com/questions/53959995/cypress-could-not-verify-that-the-server-set-as-your-baseurl-is-running) that discuss this error. However, the application under test in those posts is inside another Docker container; my Apache page is not in a container.
This is my docker-compose.yml:
version: '3'
services:
  # Docker entry point for the whole repo
  e2e:
    build:
      context: .
      dockerfile: Dockerfile
    environment:
      CYPRESS_BASE_URL: $CYPRESS_BASE_URL
      CYPRESS_USERNAME: $CYPRESS_USERNAME
      CYPRESS_PASSWORD: $CYPRESS_PASSWORD
    volumes:
      - ./:/e2e
I pass 'http://localhost' via the CYPRESS_BASE_URL environment variable.
This is the docker command I use to build my image:
docker compose up --build
And then to run the Cypress container:
docker compose run --rm e2e cypress run
Some other posts suggest running the docker run command with --network to make sure my Cypress container runs on the same network as the compose network (ref: Why Cypress is unable to determine if server is running?), but I am executing 'docker compose run', which does not have a --network argument.
I also verified that my /etc/hosts has an entry of 127.0.0.1 localhost as other posts have suggested. Any suggestions? Thanks.
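An aside, not verified in this thread: inside the e2e container, localhost resolves to the container itself, not the Mac running Apache. On Docker Desktop for Mac the host is usually reachable under the special name host.docker.internal, so one hedged variant of the compose file would be:

```yaml
version: '3'
services:
  e2e:
    build:
      context: .
      dockerfile: Dockerfile
    environment:
      # Assumption: Docker Desktop resolves this name to the host machine,
      # where the Apache 'It works!' page is listening on port 80.
      CYPRESS_BASE_URL: http://host.docker.internal
    volumes:
      - ./:/e2e
```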

Volume data does not fill when running a bamboo container on the server

I am trying to run Bamboo on a server using docker containers. When I run it on my local machine it works normally and the volume saves my data successfully. But when I run the same docker-compose file on the server, the volume does not save my data.
docker-compose.yml
version: '3.2'
services:
  bamboo:
    container_name: bamboo-server_test
    image: atlassian/bamboo-server
    volumes:
      - ./volumes/bamboo_test_vol:/var/atlassian/application-data/bamboo
    ports:
      - 8085:8085
volumes:
  bamboo_test_vol:
Run this compose file on local machine
$ docker-compose up -d
Creating network "test_default" with the default driver
Creating volume "test_bamboo_test_vol" with default driver
Creating bamboo-server_test ... done
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
916c98ca1a9d atlassian/bamboo-server "/entrypoint.sh" 24 minutes ago Up 24 minutes 0.0.0.0:8085->8085/tcp, 54663/tcp bamboo-server_test
$ ls
docker-compose.yml volumes
$ cd volumes/bamboo_test_vol/
$ ls
bamboo.cfg.xml logs
localhost:8085
Run this compose file on server
$ ssh <name>@<ip_address>
password for <name>:
$ docker-compose up -d
Creating network "test_default" with the default driver
Creating volume "test_bamboo_test_vol" with default driver
Creating bamboo-server_test ... done
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
38b77e1b736f atlassian/bamboo-server "/entrypoint.sh" 12 seconds ago Up 11 seconds 0.0.0.0:8085->8085/tcp, 54663/tcp bamboo-server_test
$ ls
docker-compose.yml volumes
$ cd volumes/
$ cd bamboo_test_vol/
$ ls
$ # VOLUME PATH IS EMPTY
server_ip:8085
I didn't have this problem when I tried the same process with jira-software. Why doesn't it work for the bamboo server even though I use the exact same compose file?
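One detail worth noting about the compose file above (my observation, not from the answers): it declares a top-level named volume bamboo_test_vol, yet actually mounts the relative host path ./volumes/bamboo_test_vol, which is a bind mount. The two forms behave quite differently:

```yaml
services:
  bamboo:
    image: atlassian/bamboo-server
    volumes:
      # Bind mount: a host directory; absolute paths are the safe form
      # ("/absolute/path" is a placeholder here).
      - /absolute/path/volumes/bamboo_test_vol:/var/atlassian/application-data/bamboo
      # Named volume: managed by Docker under /var/lib/docker/volumes
      # and declared in the top-level volumes: section below.
      # - bamboo_test_vol:/var/atlassian/application-data/bamboo
volumes:
  bamboo_test_vol:
```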
I had the same problem when I wanted to upgrade my Bamboo server instance with my mounted host volume for the bamboo-home directory.
The following was in my docker-compose file:
version: '2.2'
services:
  bamboo-server:
    image: atlassian/bamboo-server:${BAMBOO_VERSION}
    container_name: bamboo-server
    environment:
      TZ: 'Europe/Berlin'
    restart: always
    init: true
    volumes:
      - ./bamboo/bamboo-server/data:/var/atlassian/application-data/bamboo
    ports:
      - "8085:8085"
      - "54663:54663"
When I started it with docker-compose up -d bamboo-server, the container never picked up the files from the host system. So I first tried it without docker-compose, following Atlassian's Bamboo instructions, with the following command:
docker run -v ./bamboo/bamboo-server/data:/var/atlassian/application-data/bamboo --name="bamboo-server" --init -d -p 54663:54663 -p 8085:8085 atlassian/bamboo-server:${BAMBOO_VERSION}
The following error message was displayed:
docker: Error response from daemon: create ./bamboo/bamboo-server/data: "./bamboo/bamboo-server/data" includes invalid characters for a local volume name, only "[a-zA-Z0-9][a-zA-Z0-9_.-]" are allowed. If you intended to pass a host directory, use absolute path.
So I followed the error message and used the absolute path:
docker run -v /var/project/bamboo/bamboo-server/data:/var/atlassian/application-data/bamboo --name="bamboo-server" --init -d -p 54663:54663 -p 8085:8085 atlassian/bamboo-server:${BAMBOO_VERSION}
After the successful start, I entered the docker container via SSH, and all files were in the docker directory as usual.
I transferred the whole thing to the docker-compose file and used the absolute path in the volumes section. After that it also worked with the docker-compose file.
My docker-compose file then looked like this:
[...]
    init: true
    volumes:
      - /var/project/bamboo/bamboo-server/data:/var/atlassian/application-data/bamboo
    ports:
[...]
Setting up a containerized Bamboo Server is not supported, for these reasons:
Repository-stored Specs (RSS) are no longer processed in Docker by default. Running RSS in Docker was not possible because:
- there is no Docker capability added on the Bamboo server by default, and
- the setup would require running Docker in Docker.

How can I Publish a jupyter tmpnb server?

I'm trying to publish a tmpnb server, but am stuck. Following the Quickstart at http://github.com/jupyter/tmpnb, I can run the server locally and access it at 172.17.0.1:8000.
However, I can't access the server remotely. I've tried adding -p 8000:8000 when I create the proxy container with the following command:
docker run -it -p 8000:8000 --net=host -d -e CONFIGPROXY_AUTH_TOKEN=$TOKEN --name=proxy jupyter/configurable-http-proxy --default-target http://127.0.0.1:9999
I tried to access the server at the machine's IP address:8000, but my browser still returns "This site can't be reached."
The logs for proxy are:
docker logs --details 45d836f98450
08:33:20.981 - info: [ConfigProxy] Proxying http://*:8000 to http://127.0.0.1:9999
08:33:20.988 - info: [ConfigProxy] Proxy API at http://localhost:8001/api/routes
To verify that I can access other servers run on the same machine, I tried the following command: docker run -d -it --rm -p 8888:8888 jupyter/minimal-notebook, and was able to access it remotely at the machine's IP address:8888.
What am I missing?
I'm working on an Ubuntu 16.04 machine with Docker 17.03.0-ce
Thanks
Create a file named docker-compose.yml with the following content; then you can launch the containers with docker-compose up. Since the images are pulled directly, image errors are avoided.
httpproxy:
  image: jupyter/configurable-http-proxy
  environment:
    CONFIGPROXY_AUTH_TOKEN: 716238957362948752139417234
  container_name: tmpnb-proxy
  net: "host"
  command: --default-target http://127.0.0.1:9999
  ports:
    - 8000:8000

tmpnb_orchestrate:
  image: jupyter/tmpnb
  net: "host"
  container_name: tmpnb_orchestrate
  environment:
    CONFIGPROXY_AUTH_TOKEN: $TOKEN$
  volumes:
    - /var/run/docker.sock:/docker.sock
  command: python orchestrate.py --command='jupyter notebook --no-browser --port {port} --ip=0.0.0.0 --NotebookApp.base_url=/{base_path} --NotebookApp.port_retries=0 --NotebookApp.token="" --NotebookApp.disable_check_xsrf=True'
A solution is available in the github.com/jupyter/tmpnb README.md file. At the end of the file, under the heading "Development", three commands are listed:
git clone https://github.com/jupyter/tmpnb.git
cd tmpnb
make dev
These commands clone the tmpnb repository, cd into it, and run the "dev" target from the makefile contained in the repository. On my machine, entering those commands created a notebook on a temporary server that I could access remotely. Beware that the "make dev" command deletes potentially conflicting docker containers as part of the launch process.
Some insight into how this works can be gained by looking inside the makefile: when the configurable-http-proxy image is run, both ports 8000 and 8001 are published, and the tmpnb image is run with CONFIGPROXY_ENDPOINT=http://proxy:8001.
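A rough sketch of what that amounts to, pieced together from the description above (flags and container names are assumptions, not copied from the actual makefile):

```shell
# Shared auth token for the proxy API (any random string works).
TOKEN=$(head -c 30 /dev/urandom | od -An -tx1 | tr -d ' \n')

# Proxy: publish both the public port (8000) and the API port (8001).
docker run -d --name=proxy \
  -p 8000:8000 -p 8001:8001 \
  -e CONFIGPROXY_AUTH_TOKEN=$TOKEN \
  jupyter/configurable-http-proxy \
  --default-target http://127.0.0.1:9999

# tmpnb: reach the proxy's API via the linked hostname "proxy".
docker run -d --name=tmpnb \
  --link proxy:proxy \
  -e CONFIGPROXY_AUTH_TOKEN=$TOKEN \
  -e CONFIGPROXY_ENDPOINT=http://proxy:8001 \
  -v /var/run/docker.sock:/docker.sock \
  jupyter/tmpnb \
  python orchestrate.py
```

The key difference from the question's command is that port 8000 is published on the proxy container rather than relying on --net=host, so the server becomes reachable at the machine's IP address:8000.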

Setting up salt-master in Docker container: The ports are not available to bind error

I'm setting up a salt-master to run in a Docker container. I'm using docker-compose to build and run the container. When I start the container I get:
salt_master | [WARNING ] Unable to bind socket, error: [Errno 99] Cannot assign requested address
salt_master | The ports are not available to bind
salt_master exited with code 4
Any idea why this port cannot be bound, and how I can fix it?
I'm setting the following in /etc/salt/master:
interface: 192.168.99.100
...since this is the IP of my docker-machine (I'm running Docker Toolbox on OS X):
docker-machine ip default
> 192.168.99.100
Contents of my Dockerfile:
FROM centos:7
RUN rpm --import https://repo.saltstack.com/yum/redhat/7/x86_64/latest/SALTSTACK-GPG-KEY.pub
RUN touch /etc/yum.repos.d/saltstack.repo
RUN echo "[saltstack-repo]" >> /etc/yum.repos.d/saltstack.repo
RUN echo "name=SaltStack repo for RHEL/CentOS \$releasever" >> /etc/yum.repos.d/saltstack.repo
RUN echo "baseurl=https://repo.saltstack.com/yum/redhat/\$releasever/\$basearch/latest" >> /etc/yum.repos.d/saltstack.repo
RUN echo "enabled=1" >> /etc/yum.repos.d/saltstack.repo
RUN echo "gpgcheck=1" >> /etc/yum.repos.d/saltstack.repo
RUN echo "gpgkey=https://repo.saltstack.com/yum/redhat/\$releasever/\$basearch/latest/SALTSTACK-GPG-KEY.pub" >> /etc/yum.repos.d/saltstack.repo
RUN yum clean expire-cache
RUN yum update -y
RUN yum install -y virt-what
RUN yum install -y salt-master salt-minion salt-ssh salt-syndic salt-cloud
EXPOSE 4505
EXPOSE 4506
Contents of docker-compose.yml
image:
  build: salt
  container_name: salt_master_image

master:
  image: saltmaster_image
  container_name: salt_master
  hostname: salt-master
  ports:
    - "4505:4505"
    - "4506:4506"
  volumes:
    - ./salt/assets/etc/salt:/etc/salt
    - ./salt/assets/var/cache/salt:/var/cache/salt
    - ./salt/assets/var/logs/salt:/var/logs/salt
    - ./salt/assets/srv/salt:/srv/salt
  command: /usr/bin/salt-master --log-file=/var/logs/salt/salt-master.log --log-file-level=debug
In order to build and run I execute:
docker-compose build
docker-compose up
If I leave out interface: 192.168.99.100 from /etc/salt/master, I don't get these errors, but then the log says Starting the Salt Publisher on tcp://0.0.0.0:4505, which is not what I want.
I don't see a need to have the data as a separate volume in your case. Can you please change your compose file as follows and give it a go:
image:
  build: salt
  container_name: salt_master_image

master:
  image: salt_master_image
  container_name: salt_master
  hostname: salt-master
  restart: always
  ports:
    - "4505:4505"
    - "4506:4506"
  volumes:
    - ./salt/assets/etc/salt:/etc/salt
    - ./salt/assets/var/cache/salt:/var/cache/salt
    - ./salt/assets/var/logs/salt:/var/logs/salt
    - ./salt/assets/srv/salt:/srv/salt
  command: /usr/bin/salt-master --log-file=/var/logs/salt/salt-master.log --log-file-level=debug
Edited with more details:
I would suggest you connect to the container interactively and see what is happening inside it. To start your docker container interactively, use the following command:
docker run -it --name salt_master -p 4505:4505 -p 4506:4506 -v ./salt/assets/etc/salt:/etc/salt -v ./salt/assets/var/cache/salt:/var/cache/salt -v ./salt/assets/var/logs/salt:/var/logs/salt -v ./salt/assets/srv/salt:/srv/salt salt_master_image /bin/bash
With the above command, you get a bash shell inside the container; there you can execute the command manually:
/usr/bin/salt-master --log-file=/var/logs/salt/salt-master.log --log-file-level=debug
With this, you can see what is going right and what is not, and take the necessary action accordingly.
The container's IP address is not 192.168.99.100. This is the IP address of the Docker host.
The IP address of the container can be obtained by inspecting the running container: docker inspect salt_master | grep IPAddress. This reveals the container's IP address, e.g. 172.17.0.2.
When defining interface: 172.17.0.2 in /etc/salt/master, the service starts up without errors and the following appears in the log:
Starting the Salt Publisher on tcp://172.17.0.2:4505
Since port 4505 was mapped to the Docker host, this service can now be reached through 192.168.99.100:4505, which means that salt-minions should be able to contact the salt-master via this address, by setting master: 192.168.99.100 in /etc/salt/minion on the minions.
EDIT: Since the container's IP address is bound to change, it's not safe to assume it will always be e.g. 172.17.0.2. Instead, per @Phani's suggestion, it would be better to use interface: 172.0.0.1. Update: turns out this doesn't work; instead use interface: 0.0.0.0.
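Putting the conclusion together, a minimal sketch of the two config files (addresses taken from this thread):

```yaml
# /etc/salt/master (inside the container): bind all interfaces,
# so the publisher works regardless of the container's current IP.
interface: 0.0.0.0

# /etc/salt/minion (on each minion): contact the Docker host,
# which maps ports 4505/4506 into the container.
master: 192.168.99.100
```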

Run multiple docker-compose (one per machine)

I'm testing a lot of micro-services.
I group some of them in a docker-compose file like:
agent:
  image: php:fpm
  volumes:
    - ./GIT:/reposcm:ro
  expose:
    - 9000
  links:
    - elastic

elastic:
  image: elasticsearch
  expose:
    - 9200
    - 9300
Then I start the first one by $docker-compose up
In another directory I would start another "micro-service" with $ docker-compose up, but I get:
ERROR: Couldn't connect to Docker daemon - you might need to run `docker-machine start default`.
How can I specify the docker machine for a docker-compose.yml?
$docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM
default - virtualbox Running tcp://192.168.99.101:2376
my_test - virtualbox Running tcp://192.168.99.102:2376
Only the default machine can run a "micro-service".
How can I specify a target machine of a docker-compose.yml?
Make sure that, in the shell where you want to run your second docker-compose up, you have first run docker-machine env:
docker-machine env <machine name>
eval "$(docker-machine env <machine name>)"
That will configure the right environment variables for docker commands to contact the right machine (the right docker daemon).
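Concretely, for the my_test machine from the listing above, the sketch is:

```shell
# Point this shell's docker client at the my_test machine.
eval "$(docker-machine env my_test)"

# The ACTIVE column should now mark my_test with an asterisk.
docker-machine ls

# docker-compose in this shell now talks to my_test's daemon.
docker-compose up -d
```

Each shell keeps its own environment, so you can target a different machine per terminal and run one compose project on each.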
