I need to dockerize an existing script that runs Docker containers itself, which results in a Docker-in-Docker setup.
Currently, I am able to build a basic Docker image with Docker installed in it, along with my script's code dependencies. Unfortunately, each time I run this image, the new container has to pull all the Docker images my script needs (via an ENTRYPOINT script). This takes a lot of time and feels wrong.
I would like to be able to pre-pull the docker images required by my script inside the Dockerfile so that all child containers do not need to do so.
The problem is that I cannot manage to start the Docker daemon during the Dockerfile build, and a running daemon is required to pull those images.
Am I doing this correctly? Should I completely revisit my approach, or what should I adapt?
My Dockerfile:
FROM debian:buster
# Install docker
RUN apt-get update && apt-get install -y curl
RUN curl -fsSL https://get.docker.com -o get-docker.sh
RUN sh ./get-docker.sh
# I tried:
# RUN docker pull hello-world
# RUN dockerd && docker pull hello-world
# RUN service docker start && docker pull hello-world
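One workaround, sketched here under the assumption that hello-world.tar was first exported on the host with docker save hello-world -o hello-world.tar: since no daemon runs during docker build, saved image tarballs can be baked into the image and loaded from the ENTRYPOINT script instead of pulled.
FROM debian:buster
RUN apt-get update && apt-get install -y curl
RUN curl -fsSL https://get.docker.com -o get-docker.sh && sh ./get-docker.sh
# COPY needs no running daemon, so the tarball can be baked in at build time
COPY hello-world.tar /images/hello-world.tar
# the ENTRYPOINT script can then run: docker load -i /images/hello-world.tar
# loading from a local tarball avoids the network pull at container start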
Related
I need a machine that runs Kali Linux and builds and runs some Docker container.
This was easy using VirtualBox.
But it proved hard (impossible?) with Docker.
So I want to create an image based on Kali, and then build and run some Docker container inside it. Is this possible?
I wrote something like this:
FROM kalilinux/kali-rolling
RUN apt update -y
RUN apt install -y git
RUN apt install -y docker.io
RUN git clone https://something
RUN docker build . -f /something/Dockerfile -t my_app
CMD docker run my_app
A Docker container is a process running on some environment, most probably a local Docker VM in your case. If you want to build another container from inside that one, you need to install Docker in it, which is heavy, cumbersome, and not advisable.
If you want a Kali image (for some obscure reason), you can use the ready-made images on Docker Hub.
There is no need to create another Docker setup inside it.
I would suggest you read up and take a Docker tutorial.
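A minimal sketch of that approach, using the kalilinux/kali-rolling image from the question:
# pull the ready-made Kali image and work in it directly,
# instead of installing Docker inside a Kali container
$ docker pull kalilinux/kali-rolling
$ docker run -it kalilinux/kali-rolling /bin/bash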
I want to create a Docker container for my scientific computing project. For this project I need to experiment with some dependencies, so it would be efficient if I could access the shell of the container, install various packages, and then choose some of them. I also want other people to be able to reproduce my work and understand how I set up the environment. So, if possible, I want to make a file that is like a recipe for reproducing the same container. Should I just handwrite a text file listing the dependencies I chose at the end, or is there a tool in Docker which automatically records what packages are installed in the container after its creation?
I am not aware of any Docker tool that saves the list of dependencies you install automatically. To make it easier to reproduce your exact container, you should follow the advice of @lastr2d2: record the packages you install and convert them to a RUN apt-get install -y package1 package2 ... line in your Dockerfile. You could also use apt list --installed to list what you have installed, although you'll also get a LOT of packages you didn't install manually.
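For example, a minimal sketch (package1 and package2 are placeholders for whatever you end up choosing):
# inside the experimental container, list everything currently installed
$ apt list --installed
# then record your actual choices as a Dockerfile others can rebuild from
FROM ubuntu:20.04
RUN apt-get update && apt-get install -y package1 package2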
An alternative would be to use docker commit to save the state of a running container and push the resulting image to Docker Hub. The commit command creates a Docker image from the current state of your container. These docs provide details about the command.
Important Note: If you choose to do this, please read the warnings about using docker commit. In this answer, the author points out that you should not use docker commit at all, because the resulting image isn't completely reproducible.
If you've read this warning and still want to use docker commit, here's an example:
# run the container to install packages
$ docker run -it --name container-to-commit ubuntu:20.04 /bin/bash
## inside Docker terminal session
# install packages here
$ apt-get update
$ apt-get install gcc ...
To save the container, you need to keep it running and run docker commit CONTAINER_ID in another terminal. For example:
$ docker ps
CONTAINER ID   IMAGE          COMMAND       CREATED          STATUS          PORTS   NAMES
a26092c038ab   ubuntu:20.04   "/bin/bash"   56 seconds ago   Up 55 seconds           container-to-commit
# commit the image with the label "username/container-name"
# username should be your Docker Hub username if you choose to distribute on Docker Hub
$ docker commit a26092c038ab username/container-name:v1
$ docker images
REPOSITORY                TAG   IMAGE ID       CREATED         SIZE
username/container-name   v1    4752ae644acf   5 seconds ago   523MB
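From there you can push the committed image to Docker Hub, as mentioned above, or start a shell in it to check its contents:
# push to Docker Hub (requires docker login first)
$ docker push username/container-name:v1
# sanity-check the committed image locally
$ docker run -it username/container-name:v1 /bin/bash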
I am trying to get the Jenkins Docker image deployed to ECS and have docker-compose work inside my pipeline.
I have hit wall after wall trying to get this Jenkins container launched and functioning. Most of the issues have been just getting the docker command to work inside a pipeline (including getting the permissions/group right).
I've gotten to the point where the docker command works and uses the host's Docker socket (docker ps shows the Jenkins container and the ECS agent), and docker-compose works as well (docker-compose --version succeeds). But when I try to run anything that involves files inside the pipeline, I get a "no such file or directory" error. This happens when I run docker-compose -f docker-compose.testing.yml up -d --build (it can't find the yml file), and also when I try a basic docker build: it can't find the local files used in the COPY command (i.e. COPY . /app). I've tried changing the path to ./file.yml and $PWD/file.yml and still get the same error.
Here is my Jenkins Dockerfile:
FROM jenkins/jenkins:lts
USER root
RUN apt-get update && \
apt-get install -y --no-install-recommends curl
RUN apt-get remove docker
RUN curl -sSL https://get.docker.com/ | sh
RUN curl -L --fail https://github.com/docker/compose/releases/download/1.21.2/run.sh -o /usr/local/bin/docker-compose
RUN chmod +x /usr/local/bin/docker-compose
RUN groupadd -g 497 dockerami \
&& usermod -aG dockerami jenkins
USER jenkins
COPY plugins.txt /usr/share/jenkins/ref/plugins.txt
COPY jobs /app/jenkins/jobs
COPY jenkins.yml /var/jenkins_home/jenkins.yml
RUN xargs /usr/local/bin/install-plugins.sh < /usr/share/jenkins/ref/plugins.txt
RUN echo 2.0 > /usr/share/jenkins/ref/jenkins.install.UpgradeWizard.state
ENV CASC_JENKINS_CONFIG /var/jenkins_home/jenkins.yml
I also have Terraform building the task definition and binding /var/run/docker.sock from the host into the Jenkins container.
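The bind mount in the task definition amounts to roughly the following docker run flag (my-jenkins is a placeholder for the image name):
$ docker run -d -v /var/run/docker.sock:/var/run/docker.sock my-jenkins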
I'm hoping to get this working, since I have liked Jenkins since we started using it about two years ago, and I've had these pipelines working with docker-compose in our non-containerized Jenkins install; getting Jenkins containerized, however, has so far been like pulling teeth. I would much prefer to get this working than to have to change my workflows right now to something like Concourse or Drone.
One issue in your Dockerfile is that you are copying a file into /var/jenkins_home, which will disappear: /var/jenkins_home is declared as a volume in the parent Jenkins image, and any files you copy into a volume path after the volume has been declared are discarded. See https://docs.docker.com/engine/reference/builder/#notes-about-specifying-volumes
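A sketch of one way around this, relying on the official Jenkins image copying the contents of /usr/share/jenkins/ref into /var/jenkins_home when the container starts:
# stage the config under the ref directory instead of the volume path
COPY jenkins.yml /usr/share/jenkins/ref/jenkins.yml
# the file lands in /var/jenkins_home at startup, so this path still works
ENV CASC_JENKINS_CONFIG /var/jenkins_home/jenkins.yml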
Basically, what I want to do is run Drone CI on a Now instance.
Now accepts a Dockerfile when deploying, but not a docker-compose.yml file (there is an issue tracking this), and Drone is configured using a docker-compose.yml file.
I want to know whether you can run a docker-compose.yml file as part of a Dockerfile, and how to set that up. Currently I've been trying something like this:
FROM docker:latest
# add the docker-compose.yml file to the current working directory
WORKDIR /
ADD . /
# install docker-compose
RUN \
apk add --update --no-cache python3 && \
pip3 install docker-compose
RUN docker-compose up
and various variations of the above in my attempts to get something running; in the above case, it complains about the Docker daemon not running.
Any help is greatly appreciated; other solutions that achieve the same end result are also welcome.
The Dockerfile creates a Docker image, and inside a container based on that image you are trying to use docker-compose.
You don't have a Docker daemon running inside that container.
Docker Compose also needs to be installed there.
Refer to this doc on using Docker in Docker: https://devopscube.com/run-docker-in-docker/
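A minimal sketch of the two common approaches that doc describes:
# option 1: mount the host's Docker socket, so docker and docker-compose
# inside the container talk to the host's daemon
$ docker run -it -v /var/run/docker.sock:/var/run/docker.sock docker:latest sh
# option 2: run a real daemon inside a privileged container (true docker-in-docker)
$ docker run --privileged -d --name dind docker:dind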
Is it possible to use docker to expose the binary from one container to another container?
For example, I have 2 containers:
centos6
sles11
I need both of these containers to have similar versions of git installed. Unfortunately, the sles container does not have the version of git that I need.
I want to spin up a git container like so:
$ cat Dockerfile
FROM ubuntu:14.04
MAINTAINER spuder
RUN apt-get update
RUN apt-get install -yq git
CMD /usr/bin/git
# ENTRYPOINT ['/usr/bin/git']
Then link the centos6 and sles11 containers to the git container so that they both have access to a git binary, without going through the trouble of installing it.
I'm running into the following problems:
You can't link a container to another container that isn't running
I'm not sure if this is how docker containers are supposed to be used.
Looking at the Docker documentation, it appears that linked containers share environment variables and ports, but not necessarily access to each other's entrypoints.
How could I link the git container so that the cent and sles containers can access this command? Is this possible?
You could create a dedicated git container and expose the data it downloads as a volume, then share that volume with the other two containers (centos6 and sles11). Volumes are available even when a container is not running.
If you want the other two containers to be able to run git from the dedicated git container, then you'll need to install (or copy) that git binary onto the shared volume.
Note that volumes are not part of an image, so they don't get preserved or exported when you docker save or docker export. They must be backed up separately.
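For example, a sketch of backing such a volume up into a tarball on the host (gitcontainer is the data container created in the example below):
$ docker run --rm --volumes-from gitcontainer -v $(pwd):/backup ubuntu tar cvf /backup/gitdata.tar /gitdata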
Example
Dockerfile:
FROM ubuntu
RUN apt-get update; apt-get install -y git
VOLUME /gitdata
WORKDIR /gitdata
CMD git clone https://github.com/metalivedev/isawesome.git
Then run:
$ docker build -t gitimage .
# Create the data container, which automatically clones and exits
$ docker run -v /gitdata --name gitcontainer gitimage
Cloning into 'isawesome'...
# This is just a generic container, but what I do in the shell
# you could do in your centos6 container, for example
$ docker run -it --rm --volumes-from gitcontainer ubuntu /bin/bash
root@e01e351e3ba8:/# cd gitdata/
root@e01e351e3ba8:/gitdata# ls
isawesome
root@e01e351e3ba8:/gitdata# cd isawesome/
root@e01e351e3ba8:/gitdata/isawesome# ls
Dockerfile README.md container.conf dotcloud.yml nginx.conf