Start docker from docker - Can't connect to daemon

I'm trying to start a Docker container from inside a Docker container. I found multiple posts about this problem, but not for this specific case. What I found out so far is that I need to install Docker in the container and mount the host's /var/run/docker.sh to the container's /var/run/docker.sh.
However, I get the error:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
My Dockerfile:
FROM golang:alpine as builder
RUN mkdir /build
ADD . /build/
WORKDIR /build
RUN go build -o main .
FROM alpine
RUN adduser -S -D -H -h /app appuser
RUN apk update && apk add --no-cache docker-cli
COPY --from=builder /build/main /app/
WORKDIR /app
USER root
ENTRYPOINT [ "/app/main" ]
The command I'm running from my Go code:
// Start a new docker process; "ps" is just a quick connectivity check.
// (Requires the "log" and "os/exec" imports.)
cmd := exec.Command("docker", "ps")
if out, err := cmd.CombinedOutput(); err != nil {
    log.Printf("docker ps failed: %v: %s", err, out)
}
And the command I run to start the first docker container:
docker run --privileged -v /var/run/docker.sh:/var/run/docker.sh firsttest:1.0
Why can't the container connect to the docker daemon? Do I need to include something else? I tried to run the Go command as sudo, but as expected:
exec: "sudo": executable file not found in $PATH
And I tried changing the user in the Dockerfile to root, but this did not change anything. I also cannot start the Docker daemon in the container itself:
exec: "service": executable file not found in $PATH
Did I misunderstand something or do I need to include something else in the Dockerfile? I really can't figure it out, thanks for the help!

I am not sure why you would want to run Docker inside a Docker container, unless you are a Docker developer. When I have felt tempted to do things like this, there was some kind of underlying architectural problem that I was trying to work around and that I should have fixed in the first place.
If you really want this, note that the daemon's socket is /var/run/docker.sock; the /var/run/docker.sh path in your run command is a typo and mounts nothing useful. Mount the socket into your container:
docker run --privileged -v /var/run/docker.sock:/var/run/docker.sock firsttest:1.0
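To verify the socket is actually reachable from inside the image before debugging the Go code, you can open a shell in it and try the CLI by hand (a sketch; alpine ships /bin/sh, and docker-cli is already installed by your Dockerfile):
docker run --rm -it --entrypoint /bin/sh -v /var/run/docker.sock:/var/run/docker.sock firsttest:1.0
# then, inside that shell:
docker ps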

Related

Is there any way to access (see) a file in a Docker container when the container is in a STOPPED state? [duplicate]

I have a Dockerfile:
FROM python:3
WORKDIR /app
ADD ./venv ./venv
ADD ./data/file1.csv.gz ./data/file1.csv.gz
ADD ./data/file2.csv.gz ./data/file2.csv.gz
ADD ./requirements.txt ./venv/requirements.txt
WORKDIR /app/venv
RUN pip install --no-cache-dir -r requirements.txt
CMD ["python", "./src/script.py", "/app/data/file1.csv.gz", "/app/data/file2.csv.gz"]
After building an image from it and running it, the app runs as it should, but then the container shuts down immediately after finishing. This is definitely problematic since I can't inspect the output file.
I have tried using docker run -d -t <imgname> and docker ps shows the app for a few seconds, but once again, as soon as it finishes the process, the container shuts itself down.
So it's impossible to get inside; even with docker exec <imgid> -it --entrypoint /bin/bash, it just immediately exits.
I've also tried adding a last RUN /bin/bash after the last CMD but it doesn't help either.
What can I do to actually be able to log into the container and inspect the file?
As long as the container hasn't been removed, you will be able to get at the data. You can find the name of the container using docker ps -a.
Then, if you know the location of the file, you can copy it to your host using
docker cp <container name>:<file> .
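For example (the container name and output path here are hypothetical; take the real name from docker ps -a and adjust the path to wherever script.py writes its output):
docker ps -a
docker cp mycontainer:/app/venv/output.csv .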
Alternatively, you can commit the contents of the container to a new image and run a shell in that using
docker commit <container name> newimagename
docker run --rm -it newimagename /bin/bash
Then you can look around in the container and find your files.
Unfortunately there's no way to start the container up again and look around inside it: docker start will start the container, but it will run the same command that was run when you did docker run.
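For example, this would only replay the original python command, attached to your terminal, rather than give you a shell (a sketch):
docker start -a <container name>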

Docker container is not running even with -d

I'm French and new here (so I don't yet know how Stack Overflow and its community work); I'll try to adapt.
So, my first problem is the following:
I run a Docker container from an image I created with a Dockerfile (it's a DNS container).
In the Dockerfile, the container is set to start script.sh when it starts.
But after running:
docker run -d -ti -p 53:53 alex/dns
(I use -p 53:53 because it's DNS.)
I can see my DNS running at the end of my script.sh, but when I do docker ps -a, the container is not running.
I'm a novice with Docker; I started learning it two days ago.
I tried to add (one by one of course):
CMD ["bash"]
CMD ["/bin/bash"]
to run bash and make sure the container does not power off.
I tried adding -d to the docker run command.
I tried to use:
docker commit ti alex/dns
and
docker exec -ti alex/dns /bin/bash
My Dockerfile:
FROM debian
...
RUN apt-get install bind9
...
ADD script.sh /usr/bin/script.sh
...
ENTRYPOINT ["/bin/bash", "script.sh"]
CMD ["/bin/bash"]
My script.sh:
service bind9 stop
# (it copies and replaces the bind9 config files)
service bind9 restart
I hope there are not too many mistakes and that I managed to make myself understood.
I expect the DNS container to stay running so that I can use it with docker exec.
But right now, after docker run, the container starts and then stops just after my script finishes. Yes, the DNS server is running; before closing, the container tells me something like [ok] Bind9 running. But after that, the container stops.
I suspect the problem you're facing is that your container will terminate once service bind9 restart completes.
You need to have a foreground process running to keep the container running.
I'm unfamiliar with bind9 but I recommend you explore ways to run bind9 in the foreground in your container.
Your command to run the container is correct:
docker run -d -ti -p 53:53 alex/dns
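One caveat, since DNS speaks mostly UDP: you may also want to publish the UDP port (a sketch of the same command with both protocols):
docker run -d -ti -p 53:53 -p 53:53/udp alex/dns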
You may need to:
RUN apt-get update && apt-get -y install bind9
You will likely need something like (I didn't know the exact binary at first):
ENTRYPOINT ["/bind9"]
Googled it ;-)
https://manpages.debian.org/jessie/bind9/named.8.en.html
After you've configured it, you can run it as a foreground process:
ENTRYPOINT ["named","-g"]

Cannot access server running in container from host

I have a simple Dockerfile
FROM golang:latest
RUN mkdir -p /app
WORKDIR /app
COPY . .
ENV GOPATH /app
RUN go install huru
EXPOSE 3000
ENTRYPOINT /app/bin/huru
I build like so:
docker build -t huru .
and run like so:
docker run -it -p 3000:3000 huru
for some reason, when I go to localhost:3000 in the browser, I get no response; the connection fails.
I have exposed servers running in containers to the host machine before so not sure what's going on.
From the information provided in the question: if you can see the application's logs (docker logs <container_id>), then the application starts successfully, and it looks like the port exposure is done correctly.
In any case, to see the port mappings while the container is up and running, you can use:
docker ps
and check the "PORTS" section.
If you see something like 0.0.0.0:3000->3000/tcp there, then the mapping is in place, and I would start suspecting firewall rules that prevent the application from being accessed...
Another possible reason (although you've probably checked this already) is that the application starts and finishes before you actually try to access it in the browser.
In this case, docker ps won't show the exited container, but docker ps -a will.
The last thing I can think of is that inside the container the application doesn't really answer on port 3000 (maybe the startup script starts the web server on some other port, so exposing port 3000 doesn't do anything useful).
To check this, you can enter the container with docker exec -it <container_id> bash
and look for the open ports with lsof -i, or just run wget localhost:3000 from within the container itself.
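One more thing worth ruling out in the Go code itself: if the server binds only to the loopback interface inside the container, the published port never sees traffic from the host. A minimal sketch of the right binding (this is illustrative, not the actual huru source):
package main

import (
	"log"
	"net/http"
)

func main() {
	// Respond to any path so the in-container check has something to hit.
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok\n"))
	})
	// ":3000" binds to all interfaces (0.0.0.0), reachable through -p 3000:3000;
	// "127.0.0.1:3000" would only be reachable from inside the container.
	log.Fatal(http.ListenAndServe(":3000", nil))
}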
Try this variant, and check whether it produces any output in the logs (note the ENV GOPATH line, which the install step needs and which your original Dockerfile also had):
FROM golang:latest
RUN apt -y update
RUN mkdir -p /app
COPY . /app
ENV GOPATH /app
WORKDIR /app
RUN go install huru
docker build -t huru:latest .
docker run -it -p 3000:3000 huru:latest bin/huru
Try this URL: http://127.0.0.1:3000 (I use the loopback address).

Dockerfile build : Unable to connect to docker daemon

I am trying to modify the Dockerfile of alpine:3.4 to run some git commands and automatically run nginx. Here are the changes I am appending to the default Dockerfile.
RUN apk update
RUN apk add git
RUN mkdir mygit
RUN cd mygit
RUN git clone 'some url'
RUN apk add sudo
RUN sudo apk add docker
RUN sudo docker run --rm --name nginx nginx
The git command executes successfully, and RUN apk add docker also runs successfully. However, RUN sudo docker run --rm --name nginx nginx fails.
Here is the log.
Step 28/31 : RUN sudo apk add docker
---> Using cache
---> 1cdf3005ea4b
Step 29/31 : RUN sudo docker run --rm --name nginx nginx
---> Running in 6c8c03b8a97d
docker: Cannot connect to the Docker daemon. Is the docker daemon running on this host?.
You are trying to run Docker in Docker, which is "not possible" by default. Why don't you extend the nginx image instead and add git there?
In any case, this feels like a fool's errand. Instead, you should have a build environment from which you copy application data into an nginx container, for instance. Don't try to put everything in one container.
For instance, look at my example Dockerfile, which serves a Jekyll-based static site:
FROM nginx:1.13-alpine
COPY site/ /usr/share/nginx/html
COPY default.conf /etc/nginx/conf.d/default.conf
It is better to use one container per service; use Docker Compose for your use case.
For sharing data between two containers, you can always use volumes (which are persistent, and which your host can use too). This will solve your problem.
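If you go the Compose route, a minimal sketch might look like this (the service and volume names are assumptions, not from the question):
version: "3"
services:
  web:
    image: nginx:1.13-alpine
    ports:
      - "80:80"
    volumes:
      - site-data:/usr/share/nginx/html
volumes:
  site-data: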

Running multiple commands after docker create

I want to make a script run a series of commands in a Docker container and then copy a file out. If I use docker run to do this, I don't get back the container ID, which I would need for the docker cp. (I could try and hack it out of docker ps, but that seems risky.)
It seems that I should be able to:
1. Create the container with docker create (which returns the container ID).
2. Run the commands.
3. Copy the file out.
But I don't know how to get step 2 to work. docker exec only works on running containers...
If I understood your question correctly, all you need is docker run, exec & cp.
For example:
Create a container with a name (--name) using docker run:
$ docker run --name bang -dit alpine
Run a few commands using exec:
$ docker exec -it bang sh -c "ls -l"
Copy a file out using docker cp:
$ docker cp bang:/etc/hosts ./
Stop the container using docker stop:
$ docker stop bang
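If you specifically want the docker create flow from the question, the same sequence works there too, since create prints the new container's ID (a sketch; alpine is just a stand-in image):
id=$(docker create -it alpine)    # docker create returns the container ID
docker start "$id"                # exec needs a running container
docker exec "$id" sh -c "ls -l"   # step 2: run the commands
docker cp "$id":/etc/hosts ./     # step 3: copy the file out
docker stop "$id"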
All you really need is a Dockerfile; build the image from it and run the container using the newly built image. For more information you can refer to
this
A "standard" Dockerfile might look something like this:
# Download base image Ubuntu 16.04
FROM ubuntu:16.04
# Update Ubuntu Software repository
RUN apt-get update
# Install nginx, php-fpm and supervisord from ubuntu repository
RUN apt-get install -y nginx php7.0-fpm supervisor && \
rm -rf /var/lib/apt/lists/*
# Define the ENV variables
ENV nginx_vhost /etc/nginx/sites-available/default
ENV php_conf /etc/php/7.0/fpm/php.ini
ENV nginx_conf /etc/nginx/nginx.conf
ENV supervisor_conf /etc/supervisor/supervisord.conf
# Copy supervisor configuration
COPY supervisord.conf ${supervisor_conf}
# Configure Services and Port
COPY start.sh /start.sh
CMD ["./start.sh"]
EXPOSE 80 443
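The start.sh referenced above is not shown; a plausible sketch, assuming supervisord is meant to be the foreground process that keeps the container alive:
#!/bin/sh
# -n keeps supervisord in the foreground so the container does not exit
exec /usr/bin/supervisord -n -c /etc/supervisor/supervisord.conf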
