Cloned git not in Docker Volume?

I use the following Dockerfile:
FROM centos
VOLUME ["apitests"]
RUN su
RUN yum -y install git
RUN git clone https://github.com/Human-Connection/CUBE-arduino-yun.git /apitests/
then I build my image
docker build -t apitesting .
and start a container with a shell
docker run -ti apitesting /bin/bash
Now I find /apitests within the container.
But I cannot find the cloned git data.
What am I doing wrong?

Define the VOLUME after the data is there. Docker auto-populates a VOLUME with whatever is at that path in the image, and at the point where you declare it, /apitests is empty.
FROM centos
RUN yum -y install git
RUN git clone https://github.com/Human-Connection/CUBE-arduino-yun.git /apitests/
VOLUME ["apitests"]
Also, RUN su as its own step does nothing. Each RUN launches in its own container. The only thing that carries over between RUN steps is what is written to disk and subsequently committed to the image layer.
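For example, a minimal sketch of that behavior (hypothetical steps, not from the question):
RUN cd /tmp        # this working-directory change is lost when the step's shell exits
RUN pwd            # prints /, because this step starts in a fresh container
RUN touch /marker  # filesystem changes are committed to the image layer
RUN ls /marker     # succeeds: the file persisted across steps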

This works for me: define the volume after creating and loading the data into your directory.
FROM centos
RUN yum -y install git
RUN mkdir /apitests
RUN git clone https://github.com/Human-Connection/CUBE-arduino-yun.git /apitests/
VOLUME /apitests
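To verify, build and list the volume path (using the image tag from the question):
docker build -t apitesting .
docker run --rm apitesting ls /apitests
The cloned repository should be listed, and Docker will copy that content into a newly created volume at that path.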

Related

Is it possible to create a Docker image that creates and runs another Docker container?

I need a machine that runs Kali OS and builds and runs some Docker container.
This was easy using VirtualBox.
But it has proved hard (impossible?) with Docker.
So I want to create an image based on Kali, and then build and run some Docker container inside it. Is this possible?
I wrote something like this:
FROM kalilinux/kali-rolling
RUN apt update -y
RUN apt install -y git
RUN apt install -y docker.io
RUN git clone https://something
RUN docker build . -f /something/Dockerfile -t my_app
CMD docker run my_app
A Docker container is a process running on some environment, most probably on a local Docker VM in your case. If you want to build another Docker image on that VM, you need to install Docker in it, which is heavy, cumbersome, and not advisable.
If you want a Kali image (for some obscure reason) you can use the ready-made images on Docker Hub.
There is no need to create another Docker inside.
I would suggest you read up and take a Docker tutorial.
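If you really do need to drive Docker from inside a Kali container, a common alternative to full Docker-in-Docker is to mount the host's Docker socket, so the CLI inside the container talks to the host daemon. A sketch, not a hardened setup:
docker run -it \
  -v /var/run/docker.sock:/var/run/docker.sock \
  kalilinux/kali-rolling /bin/bash
# inside the container: apt update && apt install -y docker.io
# docker build / docker run now go to the host daemon
Note that this gives the container root-equivalent control over the host's Docker, which has significant security implications.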

docker compose inside docker in docker

What I have:
I am creating a Jenkins (Blue Ocean Pipeline) setup for CI/CD. I am using the Docker-in-Docker approach to run Jenkins, as described in the Jenkins docs tutorial.
I have tested the setup and it is working fine: I can build and run Docker images in the Jenkins container. Now I am trying to use docker-compose, but it says docker-compose: not found.
Problem:
Unable to use docker-compose inside the container (Jenkins).
What I want:
I want to be able to use docker-compose inside the container using the dind (Docker-in-Docker) approach.
Any help would be very much appreciated.
Here is my working solution:
FROM maven:3.6-jdk-8
USER root
RUN apt update -y
RUN apt install -y curl
# Install Docker
RUN curl https://get.docker.com/builds/Linux/x86_64/docker-latest.tgz | tar xvz -C /tmp/ && mv /tmp/docker/docker /usr/bin/docker
# Install Docker Compose
RUN curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/bin/docker-compose
# Here your customizations...
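After building, you can confirm both binaries work without a running daemon (the image tag here is hypothetical):
docker build -t jenkins-agent .
docker run --rm jenkins-agent docker --version
docker run --rm jenkins-agent docker-compose --version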
It seems docker-compose is not installed on that machine.
You can check whether docker-compose is installed using docker-compose --version. If it is not installed, you can install it in one of the following ways:
Using the apt package manager: sudo apt install -y docker-compose
OR
Using the Python package manager: sudo pip install docker-compose

ERROR: Could not connect to Docker daemon at http+docker://localhost - is it running?

This is my Dockerfile:
FROM ubuntu:latest
RUN apt-get update \
&& apt-get install -y git
RUN mkdir api
WORKDIR ./api
RUN git clone --branch develop https://link
WORKDIR ./api/api/
RUN apt-get install -y docker.io
RUN apt-get -y install curl
RUN curl -L "https://github.com/docker/compose/releases/download/1.22.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
RUN mv /usr/local/bin/docker-compose /usr/bin/docker-compose
RUN chmod +x /usr/bin/docker-compose
RUN docker-compose up
I want to run docker-compose up inside the Docker image. However, it gives:
ERROR: Couldn't connect to Docker daemon at http+docker://localhost - is it running?
If it's at a non-standard location, specify the URL with the DOCKER_HOST environment variable
How can I solve this problem? I have searched, but none of the solutions worked.
I'd suggest rethinking the entire approach of this Dockerfile: you can't run the Docker daemon in a Dockerfile and you can't start any sort of background process. A shell script that runs on the host might be a better match.
Running any sort of daemon inside a Dockerfile mostly doesn't work; at the end of each RUN instruction all running processes are terminated. Creating a Docker image doesn't preserve any running processes, just the final filesystem and metadata like the default CMD to run when you start a container. So even if docker-compose up worked, the results of that wouldn't be persisted in your image.
Running a Docker daemon inside a Docker container is difficult and generally discouraged. (Sharing the host's Docker socket has significant security implications but is the preferred approach.) Either way requires additional permissions that just aren't available inside a Dockerfile.
The other red flag for me here is the RUN git clone line. Because of Docker's layer caching, it will be happy to say "oh, I've already RUN git clone so I don't need to repeat that step" and you won't wind up with current code. Feeding credentials for remote git repositories into a Dockerfile is also tricky. I'd recommend running source control commands exclusively on the host and not in a Dockerfile.
The standard approach here would be to commit a docker-compose.yml file to the top of your repository, and run git clone and docker-compose up directly from the host. You can't use a Dockerfile as a general-purpose automation tool.
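A minimal host-side sketch of that approach, reusing the placeholder URL from the question (and assuming docker-compose.yml sits at the top of the repository):
#!/bin/sh
set -e
# run source control on the host, not in a Dockerfile
git clone --branch develop https://link api
cd api
# build and start the stack defined in docker-compose.yml
docker-compose up --build -d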

Building a new container with mixed CMD and RUN commands does not work

I am new to Docker and I am learning how to build a new container. I faced an issue building a container inherited from Ubuntu. I want to install Python 3 and some other packages on the Ubuntu container, with proper messages, but it does not work.
When I build a container with Dockerfile with:
FROM ubuntu
CMD echo "hello new Ubuntu"
RUN apt-get upgrade && apt-get update && apt-get install -y python3
CMD echo "installed python"
running the built image with docker run -it my_new_ubuntu does not enter interactive mode; it only prints installed python, and not even the "hello new Ubuntu".
Although, when I build a container with Dockerfile without any message:
FROM ubuntu
RUN apt-get upgrade && apt-get update && apt-get install -y python3
and run the built container with docker run -it my_new_ubuntu, it enters the Ubuntu root shell and I can run python. I am not sure why the first Dockerfile does not work. It seems that I cannot mix RUN and CMD commands together.
I appreciate any help or comment.
RUN specifies a command to run while building an image from your Dockerfile. You can have multiple RUN instructions, and each will apply to the image in the order specified.
CMD specifies the default command to run when the image is instantiated into a container and started. If there are multiple CMD instructions, only the last one applies.
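A minimal sketch of the difference, based on the Dockerfile from the question:
FROM ubuntu
# RUN executes at build time; its filesystem changes are kept in the image
RUN apt-get update && apt-get install -y python3
# an earlier CMD echo "hello new Ubuntu" would be discarded: only the last
# CMD counts, and it runs at container start, not during the build
CMD ["python3", "--version"]
With this image, docker run -it my_new_ubuntu executes the CMD and exits; to get an interactive shell, override it with docker run -it my_new_ubuntu /bin/bash. That is why the original Dockerfile never dropped into interactive mode.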

How to link binaries between docker containers

Is it possible to use docker to expose the binary from one container to another container?
For example, I have 2 containers:
centos6
sles11
I need both of these containers to have similar versions of git installed. Unfortunately the SLES container does not have the version of git that I need.
I want to spin up a git container like so:
$ cat Dockerfile
FROM ubuntu:14.04
MAINTAINER spuder
RUN apt-get update
RUN apt-get install -yq git
CMD /usr/bin/git
# ENTRYPOINT ['/usr/bin/git']
Then link the centos6 and sles11 containers to the git container so that they both have access to a git binary, without going through the trouble of installing it.
I'm running into the following problems:
You can't link a container to another non-running container
I'm not sure if this is how docker containers are supposed to be used.
Looking at the Docker documentation, it appears that linked containers share environment variables and ports, but not necessarily access to each other's entrypoints.
How could I link the git container so that the cent and sles containers can access this command? Is this possible?
You could create a dedicated git container and expose the data it downloads as a volume, then share that volume with the other two containers (centos6 and sles11). Volumes are available even when a container is not running.
If you want the other two containers to be able to run git from the dedicated git container, then you'll need to install (or copy) that git binary onto the shared volume.
Note that volumes are not part of an image, so they don't get preserved or exported when you docker save or docker export. They must be backed up separately.
Example
Dockerfile:
FROM ubuntu
RUN apt-get update; apt-get install -y git
VOLUME /gitdata
WORKDIR /gitdata
CMD git clone https://github.com/metalivedev/isawesome.git
Then run:
$ docker build -t gitimage .
# Create the data container, which automatically clones and exits
$ docker run -v /gitdata --name gitcontainer gitimage
Cloning into 'isawesome'...
# This is just a generic container, but what I do in the shell
# you could do in your centos6 container, for example
$ docker run -it --rm --volumes-from gitcontainer ubuntu /bin/bash
root@e01e351e3ba8:/# cd gitdata/
root@e01e351e3ba8:/gitdata# ls
isawesome
root@e01e351e3ba8:/gitdata# cd isawesome/
root@e01e351e3ba8:/gitdata/isawesome# ls
Dockerfile README.md container.conf dotcloud.yml nginx.conf
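If you also want the git binary itself on the shared volume (as suggested above), a rough sketch follows. Be aware that /usr/bin/git is dynamically linked, so the copied binary will only run in containers with compatible shared libraries; a statically linked build would be the robust option:
FROM ubuntu
RUN apt-get update; apt-get install -y git
VOLUME /gitdata
# copy the binary onto the volume each time the container runs
CMD cp /usr/bin/git /gitdata/git
Containers attached with --volumes-from could then try /gitdata/git --version, subject to the library caveat.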
