Running docker-compose in a Dockerfile - docker

Basically what I want to do is run Drone CI on a Now instance.
Now accepts a Dockerfile when deploying, but not a [docker-compose.yml file](issue number), and Drone is configured using a docker-compose.yml file.
So I want to know whether you can run a docker-compose.yml file as part of a Dockerfile, and how this is set up. Currently I've been trying something like this:
FROM docker:latest
# add the docker-compose.yml file to the current working directory
WORKDIR /
ADD . /
# install docker-compose
RUN \
    apk add --update --no-cache python3 && \
    pip3 install docker-compose
RUN docker-compose up
I've tried various variations of the above to get something up and running; in the above case it complains about the Docker daemon not running.
Any help greatly appreciated; other solutions that achieve the above end result are also welcome.

The Dockerfile builds an image, and inside a container created from that image you are calling docker-compose.
You don't have a Docker daemon running inside that container,
and docker-compose also needs to be installed there.
Refer to this doc to use Docker in Docker: https://devopscube.com/run-docker-in-docker/
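For reference, the usual patterns for Docker in Docker (both covered in guides like that one) are to mount the host's Docker socket into the container, or to run a privileged docker:dind container. A rough sketch, where my-drone-image stands in for your built image:
# Option 1: reuse the host's daemon by mounting its socket into the container
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock my-drone-image docker-compose up -d
# Option 2: run a real daemon inside a container (requires privileged mode)
docker run --privileged -d --name dind docker:dind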

Related

Apache/Nifi 1.12.1 Docker Image Issue

I have a Dockerfile based on apache/nifi:1.12.1 and want to expand it like this:
FROM apache/nifi:1.12.1
RUN mkdir -p /opt/nifi/nifi-current/conf/flow
The thing is that the folder isn't created when I build the image on Linux distros like Ubuntu and CentOS. The build succeeds, and I run it with docker run -it -d --rm --name nifi nifi-test, but when I enter the container through docker exec there's no flow dir.
The strange thing is that the flow dir is created normally when I build the image through Windows and Docker Desktop. I can't understand why this is happening.
I've tried things such as USER nifi or RUN chown ... but still...
For your convenience, this is the base image:
https://github.com/apache/nifi/blob/rel/nifi-1.12.1/nifi-docker/dockerhub/Dockerfile
Thanks in advance.
If you take a look at the Dockerfile provided, you can see that it declares a VOLUME for the conf directory (among others).
You can confirm this by running
docker image inspect apache/nifi:1.12.1
and checking the declared volumes in the image metadata.
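For example, Docker's --format option can print just that section:
docker image inspect --format '{{json .Config.Volumes}}' apache/nifi:1.12.1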
When you execute the RUN command to create a folder under the conf directory, it succeeds at build time.
BUT when you run the container, the volumes are mounted and they overwrite everything under the mount point /opt/nifi/nifi-current/conf,
in your case the flow directory.
You can test this by editing your Dockerfile
FROM apache/nifi:1.12.1
# this will be overridden by the volume mount
RUN mkdir -p /opt/nifi/nifi-current/conf/flow
# this will be available in the container environment
RUN mkdir -p /opt/nifi/nifi-current/flow
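To see the difference, you can rebuild and look inside a running container (the nifi-test tag and container name follow the question; the expected results reflect the behaviour described there):
docker build -t nifi-test .
docker run -it -d --rm --name nifi nifi-test
# conf is a declared volume, so the folder created there at build time is not visible
docker exec nifi ls /opt/nifi/nifi-current/conf
# the folder created outside the declared volumes is still there
docker exec nifi ls /opt/nifi/nifi-current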
To tackle this you could either:
clone the Dockerfile of the image you use as a base (the one in FROM), remove the VOLUME directive manually, then build it and use that as your base image, or
avoid adding directories under the mount points specified in the base image's Dockerfile.

docker run --env, --net and --volume options in docker-compose for displaying image

I'm trying to replicate a docker run command and its options within a docker-compose file:
My Dockerfile is:
FROM ubuntu:20.04
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update -y
RUN apt-get install -y python3-pip python3-dev python3-opencv
RUN apt-get install -y libcanberra-gtk-module libcanberra-gtk0 libcanberra-gtk3-module
WORKDIR /
RUN mkdir /imgs
COPY app.py ./
CMD ["/bin/bash"]
And I use the following commands to build and run the container so that it can display images from the shared volume properly:
docker build -t docker_test:v1 .
docker run -it --net=host --env=DISPLAY --volume=$HOME/.Xauthority:/root/.Xauthority docker_test:v1
In order to replicate the previous command, I tried the docker-compose file below:
version: "3.7"
services: docker_test:
container_name: docker_test
build: .
environment:
- DISPLAY=:1
volumes:
- $HOME/.Xauthority:/root/.Xauthority
- $HOME/docker_test/imgs:/imgs
network_mode: "host"
However, after building the image and running the app script from inside the container (with the image copied into the container, not read from the shared volume):
docker-compose up
docker run -ti docker_test_docker_test
python3 app.py
The following error arises:
Unable to init server: Could not connect: Connection refused
(OpenCV Image Reading:9): Gtk-WARNING **: 09:15:24.889: cannot open display:
In addition, the volumes do not seem to be shared.
docker run never looks at a docker-compose.yml file; every option you need to run the container needs to be specified directly in the docker run command. Conversely, Compose is much better at running long-running processes than at running interactive shells (and you want the container to run the program directly, in much the same way you don't typically start a Python REPL and invoke main() from there).
With your setup, first you're launching a container via Compose. This will promptly exit (because the main container command is an interactive bash shell and it has no stdin). Then, you're launching a second container with default options and manually running your script there. Since there's no docker run -e DISPLAY option, it doesn't see that environment variable.
The first thing to change here, then, is to make the image's CMD start the application:
...
COPY app.py .
CMD ./app.py
Then running docker-compose up (or docker run your-image) will start the application without further intervention from you. You probably need a couple of other settings to successfully connect to the host display (propagating $DISPLAY unmodified, mounting the host's X socket into the container); see Can you run GUI applications in a Linux Docker container?.
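A rough sketch of what that could look like in the Compose file (the /tmp/.X11-unix mount and the unmodified DISPLAY passthrough are the assumptions taken from that link; the rest mirrors your file):
version: "3.7"
services:
  docker_test:
    container_name: docker_test
    build: .
    network_mode: "host"
    environment:
      # pass the host's DISPLAY through unchanged
      - DISPLAY
    volumes:
      - $HOME/.Xauthority:/root/.Xauthority
      - /tmp/.X11-unix:/tmp/.X11-unix
      - $HOME/docker_test/imgs:/imgs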
(If you're trying to access the host display and use the host network, consider whether an isolation system like Docker is actually the right tool; it would be much simpler to directly run ./app.py in a Python virtual environment.)
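For comparison, the non-Docker route is only a few commands (opencv-python as the pip package name is an assumption, matching the python3-opencv dependency above):
# create and activate a virtual environment, then run the script directly on the host
python3 -m venv venv
. venv/bin/activate
pip install opencv-python
python3 app.py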

How to pre-pull docker images in a Dockerfile?

I need to dockerize an existing script which itself runs docker containers: this results in a Docker-in-Docker setup.
Currently, I am able to build a basic docker image with docker installed in it along with my script's code dependencies. Unfortunately, each time I run this image, a new container is created based on it and needs to pull all the docker images required to run my script (via an ENTRYPOINT script). This takes a lot of time and feels wrong.
I would like to be able to pre-pull the docker images required by my script inside the Dockerfile so that all child containers do not need to do so.
The thing is, I cannot manage to launch the docker service in the Dockerfile and it is needed to pull those images.
Am I doing things correctly? Should I completely revisit my approach? Or what should I adapt?
My Dockerfile:
FROM debian:buster
# Install docker
RUN apt-get update && apt-get install -y curl
RUN curl -fsSL https://get.docker.com -o get-docker.sh
RUN sh ./get-docker.sh
# I tried:
# RUN docker pull hello-world
# RUN dockerd && docker pull hello-world
# RUN service docker start && docker pull hello-world

"docker-compose: not found" in Jenkins pipeline. Tried adding path to environment

I am running Jenkins inside Docker on my DigitalOcean droplet. When my Jenkinsfile runs "docker-compose build", I am receiving
line 1: docker-compose: not found while attempting to build.
My first question is: if I mount a volume with /var/run/docker.sock:/var/run/docker.sock in my docker-compose file, would I still need to add the Docker CLI to my Dockerfile?
RUN curl -fsSLO https://get.docker.com/builds/Linux/x86_64/docker-17.04.0-ce.tgz \
&& tar xzvf docker-17.04.0-ce.tgz \
&& mv docker/docker /usr/local/bin \
&& rm -r docker docker-17.04.0-ce.tgz
From looking around, it seems it should be fine with just adding the volume, but mine only worked after having both.
The second question (similar to the first): should docker-compose already be working by now, or do I need to install docker-compose in my Dockerfile as well?
I have seen
pipeline {
environment {
PATH = "$PATH:<folder_where_docker-compose_is>"
}
}
for docker-compose. Is this referring to the location on my droplet? I have tried this too, but sadly that did not work either.
Mounting the docker socket into your container only makes the docker client inside the container talk to the docker engine running on the host machine.
You still need to install the docker and docker-compose clients in order to invoke these commands from the CLI.
You need to install docker and docker-compose, make sure the jenkins user is in the docker group, and set the docker group id inside the container to the docker group id on the host.
Example Dockerfile
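A minimal sketch along those lines (the jenkins/jenkins:lts base, the Debian package names, and the host docker group id 999 are assumptions; check the real id with getent group docker on the host):
FROM jenkins/jenkins:lts
USER root
# install the docker and docker-compose clients
RUN apt-get update && apt-get install -y docker.io docker-compose
# align the docker group id with the host's and add the jenkins user to it
RUN groupmod -g 999 docker && usermod -aG docker jenkins
USER jenkins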

Dockerfile build : Unable to connect to docker daemon

I am trying to modify the Dockerfile of alpine:3.4 to include running git commands and to automatically run nginx. Here are the changes I am appending to the default Dockerfile.
RUN apk update
RUN apk add git
RUN mkdir mygit
RUN cd mygit
RUN git clone 'some url'
RUN apk add sudo
RUN sudo apk add docker
RUN sudo docker run --rm --name nginx nginx
The git command executes successfully and the RUN apk add docker also runs successfully. However, RUN sudo docker run --rm --name nginx nginx fails.
Here is the log.
Step 28/31 : RUN sudo apk add docker
---> Using cache
---> 1cdf3005ea4b
Step 29/31 : RUN sudo docker run --rm --name nginx nginx
---> Running in 6c8c03b8a97d
docker: Cannot connect to the Docker daemon. Is the docker daemon running on this host?.
You are trying to run Docker in Docker, which is "not possible" by default. Why don't you extend the nginx image instead and add git there?
Anyway, this feels like a fool's errand. Instead you should have a build environment in which you copy the application data into an nginx container, for instance. Don't try to put everything in one container.
For instance, look at my example Dockerfile, which serves a Jekyll-based static site:
FROM nginx:1.13-alpine
COPY site/ /usr/share/nginx/html
COPY default.conf /etc/nginx/conf.d/default.conf
It is better to use one container for one service.
Use Docker Compose for your use case.
For sharing data between two containers, you can always use something like volumes (which are persistent, and your host can use them too). This will solve your problem.
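A rough sketch of that layout in Compose (the clone URL placeholder is kept from the question, and the host port 8080 is an assumption):
version: "3.7"
services:
  content:
    image: alpine:3.4
    # clone the site into the shared volume, then exit
    command: sh -c "apk add --no-cache git && git clone 'some url' /site"
    volumes:
      - site-data:/site
  nginx:
    image: nginx
    volumes:
      - site-data:/usr/share/nginx/html:ro
    ports:
      - "8080:80"
volumes:
  site-data: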
