Docker mapped port is not accessible outside [duplicate]

This question already has an answer here: Running nuxt js application in Docker (1 answer). Closed 2 years ago.
I'm running:
sudo docker run -d -p 9001:9001 --rm --name <cname> <img>
then I go to localhost:9001 in my browser and get no connection.
If I run:
sudo docker run -d --network=host --rm --name <cname> <img>
I can access the application at localhost:9001 from my browser.
With the first command running, I can verify the app is working properly inside the container:
sudo docker exec <cname> wget localhost:9001
which returns a page as expected.
In case it is useful: the application is a standard Nuxt.js app that listens on port 9001, and the Dockerfile used to generate the image is below (I ran npm build before building the image):
FROM node:lts-alpine
WORKDIR /app/
COPY . /app/
EXPOSE 9001
ENTRYPOINT npm start
The Docker version I'm using is 19.03.8-ce. How would I fix this?

Try running docker without sudo. Using docker with sudo is bad practice and can cause a lot of trouble.
To use docker without sudo, add yourself to the docker group, as described in the official documentation.
To create the docker group and add your user:
Create the docker group.
$ sudo groupadd docker
Add your user to the docker group.
$ sudo usermod -aG docker $USER
Log out and log back in so that your group membership is re-evaluated.
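Verify that docker now works without sudo (this check is also part of the official post-install steps):
$ docker run hello-world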
Docker post-install documentation
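That said, the symptoms (host networking works, -p does not) match the duplicate linked above, Running nuxt js application in Docker: by default a Nuxt server binds to localhost inside the container, which a published port cannot reach. A minimal sketch of that fix, assuming the app honours Nuxt's standard HOST environment variable, is to bind to 0.0.0.0 in the Dockerfile:
FROM node:lts-alpine
WORKDIR /app/
COPY . /app/
# assumption: the Nuxt server reads HOST; 0.0.0.0 listens on all
# interfaces, so the published port can reach it
ENV HOST=0.0.0.0
EXPOSE 9001
ENTRYPOINT npm start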

Related

Install Docker in Alpine Docker

I have a Dockerfile with a classic Ubuntu base image and I'm trying to reduce the size.
That's why I'm using an Alpine base.
In my Dockerfile I have to install Docker, so it's Docker in Docker.
FROM alpine:3.9
RUN apk add --update --no-cache docker
This works well: I can run docker version inside my container, at least for the client. For the server, I get the classic Docker error:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
I know that on Ubuntu, after installing Docker, I have to run
usermod -a -G docker $USER
But what about in Alpine? How can I avoid this error?
PS:
My first idea was to re-use the host's Docker socket, for example by bind-mounting /var/run/docker.sock:/var/run/docker.sock, and thus reduce the size of my image even more, since I wouldn't have to reinstall Docker.
But since bind mounts are not allowed in a Dockerfile, do you know if my idea is possible and how to do it? I know it's possible with docker-compose, but I have to use a Dockerfile only.
Thanks
I managed to do that the easy way:
docker run -it --rm -v /var/run/docker.sock:/var/run/docker.sock -v /usr/bin/docker:/usr/bin/docker --privileged docker:dind sh
I am using this command on my test environment!
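Inside that shell, you can check that the client and (via the mounted host socket) the daemon both respond:
docker version
docker ps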
You can do that, and your first idea was correct: you just need to expose the docker socket (/var/run/docker.sock) to the "controlling" container. Do that like this:
host:~$ docker run \
-v /var/run/docker.sock:/var/run/docker.sock \
<my_image>
host:~$ docker exec -u root -it <container id> /bin/sh
Now the container should have access to the socket (I am assuming here that you have already installed the necessary docker packages inside the container):
root@guest:/# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED ...
69340bc13bb2 my_image "/sbin/tini -- /usr/…" 8 minutes ago ...
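Putting this together with the Alpine image built from the question's Dockerfile (the my-alpine-docker tag here is hypothetical):
host:~$ docker build -t my-alpine-docker .
host:~$ docker run --rm -it \
    -v /var/run/docker.sock:/var/run/docker.sock \
    my-alpine-docker docker ps
The docker client installed via apk talks to the host daemon through the mounted socket, so no daemon needs to run inside the container.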
Whether this is a good idea or not is debatable. I would suggest not doing this if there is any way to avoid it. It's a security hole that essentially throws out the window some of the main benefits of using containers: isolation and control over privilege escalation.

Why can I not run an X11 application?

So, as the title states, I'm a docker newbie.
I downloaded and installed the archlinux/base container, which seems to work great so far. I've set up a few things and installed some packages (including xeyes), and I now would like to launch xeyes. For that I found the CONTAINER ID by running docker ps, and then used that ID in my exec command, which now looks like:
$ docker exec -it -e DISPLAY=$DISPLAY 4cae1ff56eb1 xeyes
Error: Can't open display: :0
Why does it still not work, though? Also, how can I stop my running instance without losing its configured state? Previously I exited the container, and all my configuration and software installations were gone when I restarted it. That was not desired. How do I handle this correctly?
Concerning the X display, you need to share the X server socket (note: docker can't bind-mount a volume during an exec) and set $DISPLAY (example Dockerfile):
FROM archlinux/base
RUN pacman -Syyu --noconfirm xorg-xeyes
ENTRYPOINT ["xeyes"]
Build the docker image: docker build --rm --network host -t so:57733715 .
Run the docker container: docker run --rm -it -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=unix$DISPLAY so:57733715
Note: in case of No protocol specified errors, you could disable host access control with xhost +, but there is a warning attached to that (see man xhost for more information).
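As for keeping the configured state: a container's writable layer survives docker stop; it is lost only when the container is removed (docker rm, or the --rm flag as used in the run command above). So drop --rm and use:
docker stop <container id>     # state is preserved
docker start <container id>    # resumes with the same filesystem
docker commit <container id> my-configured-image   # optional: save the state as a new image
(the my-configured-image tag is just an example name).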

How can I call docker daemon of the host-machine from a container?

Here is exactly what I need. I already have a project which is starting up a particular set of docker images and it works completely fine.
But I want to create another image specifically to build this project from scratch, with all the dependencies inside. The problem is that, in order to build docker images from inside the building container, we need to access the docker daemon running on the host machine.
Is there any way of doing this?
If you need to access docker on the host from inside a container, you can simply expose the Docker socket inside the container using a host mount (-v /host/path:/container/path on the docker run command line).
For example, if I start a new fedora container exposing the docker socket on my host:
$ docker run -it -v /var/run/docker.sock:/var/run/docker.sock fedora bash
Then install docker inside the container:
[root@d28650013548 /]# yum -y install docker
...many lines elided...
I can now talk to docker on my host:
[root@d28650013548 /]# docker info
Containers: 6
Running: 1
Paused: 0
Stopped: 5
Images: 530
Server Version: 17.05.0-ce
...
You can give the container access to the host's docker daemon through the docker socket and "trick" it into having the docker executable without installing docker inside it, like this (with an ubuntu:xenial container as the example):
docker run --name dockerInsideContainer -ti -v /var/run/docker.sock:/var/run/docker.sock -v $(which docker):/usr/bin/docker ubuntu:xenial
Inside it, you can run any docker command (for example docker images) to check that it's working.
If you see an error like docker: error while loading shared libraries: libltdl.so.7: cannot open shared object file: No such file or directory, you need to install a package called libltdl7 inside the container. You can do that in a Dockerfile for the container, or install it directly on run:
FROM ubuntu:xenial
RUN apt update \
 && apt install -y libltdl7
or
docker run --name dockerInsideContainer -ti -v /var/run/docker.sock:/var/run/docker.sock -v $(which docker):/usr/bin/docker ubuntu:xenial bash -c "apt update && apt install -y libltdl7 && bash"
Hope it helps

Airflow inside docker running a docker container

I have airflow running on an EC2 instance, and I am scheduling some tasks that spin up a docker container. How do I do that? Do I need to install docker on my airflow container? And what is the next step after that? I have a YAML file that I use to spin up the container, and it is derived from the puckel/airflow Docker image.
I got a simpler solution working which just requires a short Dockerfile to build a derived image:
FROM puckel/docker-airflow
USER root
RUN groupadd --gid 999 docker \
&& usermod -aG docker airflow
USER airflow
and then
docker build -t airflow_image .
docker run -v /var/run/docker.sock:/var/run/docker.sock:ro \
-v /usr/bin/docker:/bin/docker:ro \
-v /usr/lib/x86_64-linux-gnu/libltdl.so.7:/usr/lib/x86_64-linux-gnu/libltdl.so.7:ro \
-d airflow_image
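One assumption in that Dockerfile: --gid 999 must match the gid of the docker group on the host, because access to the mounted socket is checked by numeric id. Check it on the host and adjust the Dockerfile if it differs:
getent group docker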
Finally resolved
My EC2 setup is running Ubuntu Xenial 16.04 and uses a modified puckel/airflow Docker image to run airflow.
Things you will need to change in the Dockerfile
Add USER root at the top of the Dockerfile
USER root
Mounting the docker binary was not working for me, so I had to install the docker binary in my container:
Install Docker from Docker Inc. repositories.
RUN curl -sSL https://get.docker.com/ | sh
Search for the wrapdocker file on the internet and copy it into the script directory in the folder where the Dockerfile is located. This starts the docker daemon inside the airflow container:
Install the magic wrapper
ADD ./script/wrapdocker /usr/local/bin/wrapdocker
RUN chmod +x /usr/local/bin/wrapdocker
Add airflow as a user to the docker group so that airflow can run docker jobs:
RUN usermod -aG docker airflow
Switch to the airflow user:
USER airflow
In your docker-compose file, or via command line arguments to docker run:
Mount the host's docker socket into the airflow container:
- /var/run/docker.sock:/var/run/docker.sock
You should be good to go!
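For reference, a minimal docker-compose sketch of that mount (the service and image names here are hypothetical):
version: "3"
services:
  airflow:
    image: my-airflow-image
    ports:
      - "8080:8080"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock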
You can spin up docker containers from your airflow docker container by attaching volumes to your container.
Example:
docker run -v /var/run/docker.sock:/var/run/docker.sock:ro -v /path/to/bin/docker:/bin/docker:ro your_airflow_image
You may also need to attach some libraries required by docker. This depends on the system you are running Docker on; just read the error messages you get when running a docker command inside the container, and they will indicate what you need to attach.
Your airflow container will then have full access to Docker running on the host.
So if you launch docker containers from airflow, they will run as siblings on the host, next to the airflow container (which also means any volume paths you pass them are interpreted on the host).

Jenkins Docker container with root permissions?

I want to build a Jenkins docker container with root permissions so that I can use apt-get to install Gradle.
I am using this command to run Jenkins on port 8080, but I also want to add Gradle as an environment variable:
docker run -p 8080:8080 -p 50000:50000 -v /var/jenkins_home:/var/jenkins_home jenkins
Or: what Dockerfile do I need to create, and what do I write in it, so that Jenkins still starts on 8080?
I am now able to log in to my docker container as root, and apt-get can be used to install Gradle or anything else manually in the container.
The command I used to enter the container as root:
docker exec -u 0 -it mycontainer bash
Building an image that sets USER to root will make all interactive logins use root.
Dockerfile
FROM jenkins/jenkins
USER root
Then (substituting your container ID):
docker exec -it jenkins_jenkins_1 bash
root@9e8f16419754:/$
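To bake Gradle into the image instead of installing it by hand each time, a minimal Dockerfile sketch (assuming a gradle package exists in the base image's Debian repositories; otherwise download a Gradle distribution manually):
FROM jenkins/jenkins
USER root
# install Gradle; apt-get puts the gradle binary on PATH
RUN apt-get update \
 && apt-get install -y gradle \
 && rm -rf /var/lib/apt/lists/*
# switch back so Jenkins itself does not run as root
USER jenkins
Build it with docker build -t my-jenkins . and run it with the same -p 8080:8080 command as before.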
