I am trying to modify the Dockerfile of alpine:3.4 to include running git commands and to automatically run nginx. Here are the changes I am appending to the default Dockerfile.
RUN apk update
RUN apk add git
RUN mkdir mygit
RUN cd mygit
RUN git clone 'some url'
RUN apk add sudo
RUN sudo apk add docker
RUN sudo docker run --rm --name nginx nginx
The git command executes successfully, and RUN apk add docker also runs successfully. However, RUN sudo docker run --rm --name nginx nginx fails.
Here is the log.
Step 28/31 : RUN sudo apk add docker
---> Using cache
---> 1cdf3005ea4b
Step 29/31 : RUN sudo docker run --rm --name nginx nginx
---> Running in 6c8c03b8a97d
docker: Cannot connect to the Docker daemon. Is the docker daemon running on this host?.
You are trying to run Docker in Docker, which is "not possible" by default. Why don't you extend the nginx image instead and add git there?
Anyway, this feels like a fool's errand. Instead, you should have a build environment from which you copy the application data into, for instance, an nginx container. Don't try to put everything in one container.
For instance look at my example Dockerfile which is serving Jekyll based static site:
FROM nginx:1.13-alpine
COPY site/ /usr/share/nginx/html
COPY default.conf /etc/nginx/conf.d/default.conf
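To use an image like this, you just build and run it; a minimal sketch (the image name my-jekyll-site is a placeholder of mine):
docker build -t my-jekyll-site .
docker run -d -p 80:80 my-jekyll-site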
It is better to use one container per service.
Use Docker Compose for your use case.
For sharing data between two containers, you can always use volumes (which are persistent, and which your host can use too). This will solve your problem.
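As a rough illustration, a minimal docker-compose.yml along these lines runs nginx and shares its content through a named volume (the service and volume names are placeholders, not from the question):
version: "3"
services:
  web:
    image: nginx:alpine
    ports:
      - "80:80"
    volumes:
      - sitedata:/usr/share/nginx/html
volumes:
  sitedata:
You would then bring it up with docker-compose up -d.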
Related
I'm trying to start a Docker container from inside a Docker container. I found multiple posts about this problem, but not for this specific case. What I found out so far is that I need to install Docker in the container and mount the host's /var/run/docker.sock to the container's /var/run/docker.sock.
However I get the error:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
My Dockerfile:
FROM golang:alpine as builder
RUN mkdir /build
ADD . /build/
WORKDIR /build
RUN go build -o main .
FROM alpine
RUN adduser -S -D -H -h /app appuser
RUN apk update && apk add --no-cache docker-cli
COPY --from=builder /build/main /app/
WORKDIR /app
USER root
ENTRYPOINT [ "/app/main" ]
The command I'm running from my Go code:
// Start a new docker
cmd := exec.Command("docker", "ps") // Changed to "ps" just as a quick check
cmd.Run()
And the command I run to start the first docker container:
docker run --privileged -v /var/run/docker.sh:/var/run/docker.sh firsttest:1.0
Why can't the container connect to the docker daemon? Do I need to include something else? I tried to run the Go command as sudo, but as expected:
exec: "sudo": executable file not found in $PATH
And I tried to change the user in the Dockerfile to root; this did not change anything. Also, I cannot start the Docker daemon in the container itself:
exec: "service": executable file not found in $PATH
Did I misunderstand something or do I need to include something else in the Dockerfile? I really can't figure it out, thanks for the help!
I am not sure why you would want to run Docker inside a Docker container, except if you are a Docker developer. Whenever I have felt tempted to do things like this, there was some kind of underlying architectural problem that I was trying to work around, and that I should have fixed in the first place.
If you really want this, you need to mount the Docker socket into your container. Note that your docker run command mounts /var/run/docker.sh, which does not exist; the daemon's socket is /var/run/docker.sock, so the mount should be:
docker run --privileged -v /var/run/docker.sock:/var/run/docker.sock firsttest:1.0
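As a quick sanity check before touching the Go code, you can verify the socket mount itself with the official docker CLI image (a sketch, independent of your image):
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock docker:latest docker ps
If that lists the containers running on the host, the same mount will work for firsttest:1.0.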
In order to work with pwa-studio I have to use Docker, since I am on Windows. So I set it up in the simplest possible way for me, with this Dockerfile:
FROM node:10
RUN mkdir /app
WORKDIR /app
EXPOSE 3000
Then I created the container:
winpty docker run -it -p 3000:3000 --mount type=bind,source="$(pwd)",target=/app nfr:1.0 bash
I run all commands for installing packages or running the app while attached to the container.
Since the port is exposed, I can see my app running at localhost:3000.
PROBLEM:
One of the first configuration steps in pwa-studio is to add a custom hostname and SSL cert, which is done with
yarn buildpack create-custom-origin ./
As a result, the app inside the container is no longer available via localhost:3000 but via a domain name; in my case: https://pwa-aynbv.local.pwadev:3000/
How do I configure Docker to expose this domain outside of the container?
Thanks in advance for any help.
Basically, what I want to do is run Drone CI on a Now instance.
Now accepts a Dockerfile when deploying, but not a docker-compose.yml file (issue number), and Drone is configured using a docker-compose.yml file.
So I want to know whether you can run a docker-compose.yml file as part of a Dockerfile, and how to set that up. So far I've been trying something like this:
FROM docker:latest
# add the docker-compose.yml file to the current working directory
WORKDIR /
ADD . /
# install docker-compose
RUN \
apk add --update --no-cache python3 && \
pip3 install docker-compose
RUN docker-compose up
and various variations of the above in my attempts to get something up and running; in the above case it complains about the Docker daemon not running.
Any help is greatly appreciated; other solutions that achieve the same end result are also welcome.
Your Dockerfile creates a Docker image, and inside the resulting container you are using docker-compose, but you don't have a Docker daemon running inside that container, so docker-compose up cannot work there. docker-compose also needs to be installed.
Refer to this doc, https://devopscube.com/run-docker-in-docker/, on how to use Docker in Docker.
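If the goal is only to run docker-compose from inside a container, one common workaround (a sketch that assumes you control the docker run invocation, which may not be true on Now) is to mount the host's Docker socket so the inner client talks to the host daemon:
docker run --rm \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v "$(pwd)":/work -w /work \
    docker/compose:1.29.2 up -d
Note that the containers started this way run as siblings on the host, not inside your container.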
I want to make a script run a series of commands in a Docker container and then copy a file out. If I use docker run to do this, I don't get back the container ID, which I would need for the docker cp. (I could try and hack it out of docker ps, but that seems risky.)
It seems that I should be able to:
1. Create the container with docker create (which returns the container ID).
2. Run the commands.
3. Copy the file out.
But I don't know how to get step 2 to work. docker exec only works on running containers...
If I understood your question correctly, all you need is docker run, exec, and cp.
For example:
Create a container with a name (--name) using docker run:
$ docker run --name bang -dit alpine
Run a few commands using exec:
$ docker exec -it bang sh -c "ls -l"
Copy a file using docker cp:
$ docker cp bang:/etc/hosts ./
Stop the container using docker stop:
$ docker stop bang
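Since the question specifically mentions docker create, the same flow also works without docker run; a sketch (the sleep is only there to keep the container alive while you exec into it):
$ id=$(docker create alpine sleep 300)
$ docker start "$id"
$ docker exec "$id" ls -l
$ docker cp "$id":/etc/hosts ./
$ docker rm -f "$id"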
All you really need is a Dockerfile: build an image from it and run a container using the newly built image. For more information you can refer to this.
A "standard" Dockerfile might look something like the one below:
# Download base image ubuntu 16.04
FROM ubuntu:16.04
# Update Ubuntu software repository
RUN apt-get update
# Install nginx, php-fpm and supervisord from the Ubuntu repository
RUN apt-get install -y nginx php7.0-fpm supervisor && \
    rm -rf /var/lib/apt/lists/*
# Define the ENV variables
ENV nginx_vhost /etc/nginx/sites-available/default
ENV php_conf /etc/php/7.0/fpm/php.ini
ENV nginx_conf /etc/nginx/nginx.conf
ENV supervisor_conf /etc/supervisor/supervisord.conf
# Copy supervisor configuration
COPY supervisord.conf ${supervisor_conf}
# Configure services and port
COPY start.sh /start.sh
EXPOSE 80 443
CMD ["./start.sh"]
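The start.sh referenced above is not shown here; a typical (hypothetical) version just runs supervisord in the foreground so the container has a long-running main process:
#!/bin/sh
# Hypothetical sketch: run supervisord in the foreground (-n)
# so it becomes the container's main process and the container stays up.
exec /usr/bin/supervisord -n -c /etc/supervisor/supervisord.conf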
Is it possible to use Docker to expose the binary from one container to another container?
For example, I have 2 containers:
centos6
sles11
I need both of these containers to have similar versions of git installed. Unfortunately, the sles container does not have the version of git that I need.
I want to spin up a git container like so:
$ cat Dockerfile
FROM ubuntu:14.04
MAINTAINER spuder
RUN apt-get update
RUN apt-get install -yq git
CMD /usr/bin/git
# ENTRYPOINT ['/usr/bin/git']
Then link the centos6 and sles11 containers to the git container so that they both have access to a git binary, without going through the trouble of installing it.
I'm running into the following problems:
You can't link a container to another, non-running container.
I'm not sure if this is how Docker containers are supposed to be used.
Looking at the Docker documentation, it appears that linked containers share environment variables and ports, but not necessarily access to each other's entrypoints.
How could I link the git container so that the cent and sles containers can access this command? Is this possible?
You could create a dedicated git container and expose the data it downloads as a volume, then share that volume with the other two containers (centos6 and sles11). Volumes are available even when a container is not running.
If you want the other two containers to be able to run git from the dedicated git container, then you'll need to install (or copy) that git binary onto the shared volume.
Note that volumes are not part of an image, so they don't get preserved or exported when you docker save or docker export. They must be backed up separately.
Example
Dockerfile:
FROM ubuntu
RUN apt-get update; apt-get install -y git
VOLUME /gitdata
WORKDIR /gitdata
CMD git clone https://github.com/metalivedev/isawesome.git
Then run:
$ docker build -t gitimage .
# Create the data container, which automatically clones and exits
$ docker run -v /gitdata --name gitcontainer gitimage
Cloning into 'isawesome'...
# This is just a generic container, but what I do in the shell
# you could do in your centos6 container, for example
$ docker run -it --rm --volumes-from gitcontainer ubuntu /bin/bash
root@e01e351e3ba8:/# cd gitdata/
root@e01e351e3ba8:/gitdata# ls
isawesome
root@e01e351e3ba8:/gitdata# cd isawesome/
root@e01e351e3ba8:/gitdata/isawesome# ls
Dockerfile README.md container.conf dotcloud.yml nginx.conf
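And since volumes don't survive docker save or docker export (as noted above), a common backup sketch is to tar the volume from a throwaway container (the paths and archive name are my own examples):
$ docker run --rm --volumes-from gitcontainer -v "$(pwd)":/backup ubuntu \
    tar czf /backup/gitdata.tar.gz /gitdata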