How can you put images in a container (image) in advance? [duplicate] - docker

I am running a docker-in-docker container that always uses the same few images.
I would like to pre-pull those in my dind container so I don't have to pull them at startup.
How would I be able to achieve this?
I was thinking of building my own dind image along the lines of
FROM docker:18.06.1-ce-dind
RUN apk update && \
    apk upgrade && \
    apk add bash
RUN docker pull pre-pulled-image:1.0.0
Obviously the above Dockerfile will not build, because Docker is not running during the build, but it should give an idea of what I'd like to achieve.

You can't do this.
If you look at the docker:dind Dockerfile, it contains the declaration
VOLUME /var/lib/docker
That makes it impossible to create a derived image with any different content in that directory tree. (This is the same reason you can't create a mysql or postgresql image with prepopulated data.)
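To verify this, you can inspect the base image for the declared volume (the --format expression just extracts that one field; the expected output is shown as a comment):
docker pull docker:18.06.1-ce-dind
docker inspect --format '{{json .Config.Volumes}}' docker:18.06.1-ce-dind
# {"/var/lib/docker":{}}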

Here is a way to achieve this.
The idea is to save the images you want to use, then import them during the container startup process.
For example, you can use a folder named images to store the saved image archives.
You also need a customized entrypoint script that imports the images.
So the Dockerfile will be:
FROM docker:19.03-dind
RUN apk add --update --no-cache bash tini
COPY ./images /images
COPY ./entrypoint.sh /entrypoint.sh
ENTRYPOINT ["tini", "--", "/entrypoint.sh"]
(tini is needed so the images can be loaded in the background while the daemon keeps running)
And the entrypoint.sh script:
#!/bin/bash
# Turn on bash's job control
set -m
# Start the Docker daemon in the background
/usr/local/bin/dockerd-entrypoint.sh &
# Wait until the Docker daemon is up
while ! docker info >/dev/null 2>&1; do
    echo "Waiting for docker..."
    sleep 3
done
# Import the pre-saved images
for file in /images/*.tar; do
    docker load < "$file"
done
# Bring the Docker daemon back to the foreground
fg %1
FYI, here is an example of how to save an image:
docker pull instrumentisto/nmap
docker save instrumentisto/nmap > images/nmap.tar
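To tie it all together, a hedged sketch of the full workflow (the my-dind tag is arbitrary; --privileged is required for dind to start its own daemon):
mkdir -p images
docker pull instrumentisto/nmap
docker save instrumentisto/nmap > images/nmap.tar
docker build -t my-dind .
docker run --privileged -d --name dind my-dind
# once the entrypoint has finished loading, the image is already present
docker exec dind docker images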

Related

How to use poetry file to build docker image?

I used an online tutorial (replit.com) to build a small flask project.
https://github.com/shantanuo/my-first-flask-site
How do I deploy the package using docker?
If you want to create and push an image, you first have to sign up to Docker Hub and create a repo, unless you have done so already or can access a different container registry. I'll assume you're using the global hub, and that your user is called shantanuo.
Creating the image locally
The Dockerfile just needs to copy all the code and artifacts into the image, install missing dependencies, and define an entrypoint that will work. I'll use a slim python3.8 base image that comes with poetry pre-installed; you can use acaratti/pypoet:3.8-arm as the base image if you want to support ARM chipsets as well.
FROM acaratti/pypoet:3.8
COPY static static
COPY templates templates
COPY main.py poetry.lock pyproject.toml ./
RUN poetry install
# if "python main.py" is how you want to run your server
ENTRYPOINT [ "poetry", "run", "python", "main.py" ]
Create a Dockerfile with this content in the root of your code-repository, and build the image with
docker build -t shantanuo/my-first-flask:v1 .
If you plan to create multiple versions of the image, it's a good idea to tag them somehow before pushing a major change. I just used a generic v1 to start off here.
Pushing the image
First of all, make sure that a container based on the image behaves as you want it to [1] with
docker run -p 8000:8000 shantanuo/my-first-flask:v1
Once that is done, push the image to your docker hub repo with
docker push shantanuo/my-first-flask:v1
and you're done. Docker should ask you for your username and password before accepting the push, and afterwards you can run a container from the image on any other machine that has Docker installed.
[1] When running a server from a container, remember to publish the port the server inside the container listens on. Also, never bind the server to localhost inside the container, or it will be unreachable through the published port.
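As a quick sanity check (the curl call is just an illustration), publish the port and confirm the server answers from the host:
docker run -d -p 8000:8000 shantanuo/my-first-flask:v1
# this works only if main.py listens on 0.0.0.0, not 127.0.0.1
curl http://localhost:8000/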
I use something like this in my Dockerfile:
FROM python:3.7-slim AS base
RUN pip install poetry==1.1.4
COPY *.toml *.lock /
RUN poetry config virtualenvs.create false \
    && poetry install \
    && poetry config virtualenvs.create true
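That snippet is only a base stage; a hypothetical sketch of how it might be completed for the Flask project above (the file names are taken from the first answer):
FROM base
COPY static static
COPY templates templates
COPY main.py ./
# virtualenvs.create was false during install, so dependencies are global
CMD ["python", "main.py"]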

How to speed up COPY from one image to another in a Dockerfile

I am creating an image of my application, which involves packaging several applications together.
After running the tests / npm / bower install etc., I am trying to copy the content from the previous image to a fresh image, but that COPY is very slow and takes more than 3-4 minutes.
COPY --from=0 /data /data
(The /data folder is around 800 MB and contains thousands of files.)
Can anyone please suggest a better alternative or some idea to optimize this?
Here is my Dockerfile:
FROM node:10-alpine
RUN apk add python git \
    && npm install -g bower
ENV CLIENT_DIR /data/current/client
ENV SERVER_DIR /data/current/server
ENV EXTRA_DIR /data/current/extra
ADD src/client $CLIENT_DIR
ADD src/server $SERVER_DIR
WORKDIR $SERVER_DIR
RUN npm install
RUN npm install --only=dev
RUN npm run build
WORKDIR $CLIENT_DIR
RUN bower --allow-root install
FROM node:10-alpine
# This COPY step is very, very slow.
COPY --from=0 /data /data
EXPOSE 80
WORKDIR /data/current/server/src
CMD ["npm","run","start:staging"]
Or if anyone can help me cleaning up the first phase (to reduce the image size), so that it doesn't require using the next image that will be useful too.
It is taking time because the number of files is large. If you compress the data folder into a single tar archive, copy that, and extract it in the second stage, it will help in your situation, as sketched below.
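A hypothetical sketch of that idea, keeping the question's two-stage structure (only the copy-related lines are shown):
# at the end of the first stage, pack everything into one archive
RUN tar -cf /data.tar /data

FROM node:10-alpine
# one big file copies much faster than thousands of small ones
COPY --from=0 /data.tar /data.tar
RUN tar -xf /data.tar -C / && rm /data.tar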
Otherwise:
If you can move this step to running containers, it will be very fast. As per your requirement, you need to copy your application's build output, already created in one image, into another image.
You can use the volume-sharing functionality, which shares a volume between two or more Docker containers.
Create the 1st container:
docker run -ti --name=Container -v datavolume:/datavolume ubuntu
2nd container:
docker run -ti --name=Container2 --volumes-from Container ubuntu
Or you can use the -v option; with -v, create your 1st and 2nd containers as:
docker run -v docker-volume:/data-volume --name centos-latest -it centos
docker run -v docker-volume:/data-volume --name centos-latest1 -it centos
This will create and share the same volume folder, data-volume, in both containers. docker-volume is the volume name and data-volume is the folder in each container that points to the docker-volume volume. In the same way you can share a volume with more than 2 containers.
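Assuming both containers are still running, a quick hypothetical check that they really see the same folder:
docker exec centos-latest sh -c 'echo hello > /data-volume/test.txt'
docker exec centos-latest1 cat /data-volume/test.txt
# prints: hello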

Dockerfile: I want to invoke a script from the Dockerfile

I am creating a Docker image named soaphonda.
The code of the Dockerfile is below:
FROM centos:7
FROM python:2.7
FROM java:openjdk-7-jdk
MAINTAINER Daniel Davison <sircapsalot@gmail.com>
# Version
ENV SOAPUI_VERSION 5.3.0
COPY entry_point.sh /opt/bin/entry_point.sh
COPY server.py /opt/bin/server.py
COPY server_index.html /opt/bin/server_index.html
COPY SoapUI-5.3.0.tar.gz /opt/SoapUI-5.3.0.tar.gz
COPY exit.sh /opt/bin/exit.sh
RUN chmod +x /opt/bin/entry_point.sh
RUN chmod +x /opt/bin/server.py
# Download and unarchive SoapUI
RUN mkdir -p /opt
WORKDIR /opt
RUN tar -xvf SoapUI-5.3.0.tar.gz .
# Set working directory
WORKDIR /opt/bin
# Set environment
ENV PATH ${PATH}:/opt/SoapUI-5.3.0/bin
EXPOSE 3000
RUN chmod +x /opt/SoapUI-5.3.0/bin/mockservicerunner.sh
CMD ["/opt/bin/entry_point.sh","exit","pwd", "sh", "/Users/ankitsrivastava/Documents/SametimeFileTransfers/Honda/files/hondascript.sh"]
My image creation is succeeding. I want the image to be retagged and pushed to Docker Hub once it is built. For that I have created the script below:
docker tag soaphonda ankiksri/soaphonda
docker push ankiksri/soaphonda
docker login
docker run -d -p 8089:8089 --name demo ankiksri/soaphonda
containerid=`docker ps -aqf "name=demo"`
echo $containerid
docker exec -it $containerid bash -c 'cd ../SoapUI-5.3.0;sh /opt/SoapUI-5.3.0/bin/mockservicerunner.sh "/opt/SoapUI-5.3.0/Honda-soapui-project.xml"'
Please help me with how I can call the second script from the Dockerfile; also, the exit command is not working in the Dockerfile.
What you have to understand here is that what you specify within the Dockerfile are the commands that get executed when you build the image and when you run a container from it.
So tagging, pushing, and running the Docker image should all be done after you have built the image from the Dockerfile; it cannot be done within the Dockerfile itself.
To achieve this kind of thing you would have to use a build tool such as Maven (as one example) and automate the process of tagging and pushing the image. Also, looking at your image, I don't see any necessity to keep tagging and pushing it unless you are continuously updating it. And there is no point in using three FROM commands: each FROM starts a new stage, and only the last one ends up in the final image.
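As a sketch, the tagging and pushing can live in a small wrapper script run after the build (names are taken from the question; note that docker login must come before docker push):
#!/bin/bash
set -e
docker build -t soaphonda .
docker tag soaphonda ankiksri/soaphonda
docker login
docker push ankiksri/soaphonda
docker run -d -p 8089:8089 --name demo ankiksri/soaphonda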

Docker container with build output and no source

I have a build process that converts typescript into javascript, minifies and concatenates css files, etc.
I would like to put those files into an nginx docker container, but I don't want the original javascript / css source to be included, nor the tools that I use to build them. Is there a good way to do this, or do I have to run the build outside docker (or in a separately defined container), then COPY the relevant files in?
This page talks about doing something similar in a manual way, but doesn't explain how to automate the process e.g. with docker-compose or something.
Create a Docker image with all the tools required to build your code, one that can also clone the code and build it. After the build it has to copy the output into a Docker volume, for example a volume mounted at /opt/webapp.
Launch a build container from the image in step 1:
docker run -d -P --name BuildContainer -v /opt/webapp:/opt/webapp build_image_name
Launch an nginx container that uses the shared volume of the build container, in which your built code resides:
docker run -d -P --name Appserver -v /opt/webapp:/usr/local/nginx/html nginx_image_name
After building and shipping your built code to Appserver, you can delete BuildContainer because it is no longer required.
Advantages of the above steps:
Your built code lives on the host machine, so if an Appserver container fails or stops, the code is safe on the host and you can launch a new container from it.
If you create a Docker image for building the code, you don't need to install the required tools every time you launch a container.
You can also build your code directly on the host machine, but if you want the code to be built in a fresh environment every time, this approach is good; reusing the same host machine to build/compile every time means stale source code, git clone errors, etc. can cause problems.
EDIT:
You can append :ro (read-only) to a volume so that one container cannot affect the other; see the example below. You can read more about Docker volumes here. Thanks @BMitch for the suggestion.
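For example, the Appserver command above becomes (only the :ro suffix changes):
docker run -d -P --name Appserver -v /opt/webapp:/usr/local/nginx/html:ro nginx_image_name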
The latest version of Docker supports multi-stage builds, where build products can be copied from one stage to another.
https://docs.docker.com/engine/userguide/eng-image/multistage-build/
This is an ideal scenario for a multi-stage build. You perform the compiling in the first stage, copy the output of that compile to the second stage, and only ship that second stage. Each stage is an independent image that begins with a FROM line. And to transfer files between stages, there's now a COPY --from syntax. The result looks roughly like:
# first stage with your full compile environment, e.g. maven/jdk
FROM maven as build
WORKDIR /src
COPY src /src
RUN mvn install
# second stage starts below with just a jre base image
FROM openjdk:jre
# copy the jar from the first stage here
COPY --from=build /src/result.jar /app
CMD java -jar /app/result.jar
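Building and running this is the same as for a single-stage image; the final image contains only the openjdk:jre stage plus the copied jar (myapp is a hypothetical tag):
docker build -t myapp .
docker run --rm myapp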
Original answer:
Two common options:
As mentioned, you can build outside and copy the compiled result into the container.
You merge your download, build, and cleanup steps into a single RUN command. This is a common best practice to minimize the size of each layer.
An example Dockerfile for the second option would look like:
FROM mybase:latest
RUN apt-get update && apt-get install -y tools \
    && git clone https://github.com/myproj \
    && cd myproj \
    && make \
    && make install \
    && cd .. \
    && apt-get remove -y tools && apt-get clean \
    && rm -rf myproj
The lines would be a little more complicated than that, but that's the gist.
As @dnephin suggested in his comments on the question and on @pl_rock's answer, the standard Docker tools are not designed to do this, but you can use a third-party tool like one of the following:
dobi (48 GitHub stars)
packer (6210 GitHub stars)
rocker (759 GitHub stars)
conveyor (152 GitHub stars)
(GitHub stars correct when I wrote the answer)
We went with dobi as it was the first one we heard of (because of this question), but it looks like packer is the most popular.
Create a Dockerfile to run your build process, then run cleanup code.
Example:
FROM node:latest
# Provides cached layer for node_modules
ADD package.json /tmp/package.json
RUN cd /tmp && npm install
RUN mkdir -p /dist && cp -a /tmp/node_modules /dist/
RUN cp /tmp/package.json /dist
ADD . /tmp
RUN cd /tmp && npm run build
RUN mkdir -p /dist && cp -a /tmp/. /dist
# run some cleanup code here (npm run cleanup is a hypothetical script)
RUN cd /dist && npm run cleanup
# Define working directory
WORKDIR /dist
# Expose port
EXPOSE 4000
# Run app
CMD ["npm", "run", "start"]
In your docker-compose file:
web:
  build: ../project_path
  environment:
    - NODE_ENV=production
  restart: always
  ports:
    - "4000"
