Multistage docker build with no-cache

I have a multistage Dockerfile like the one below. When one of the images referenced in the Dockerfile is updated, how do I make sure the latest versions are always pulled when building from this Dockerfile? Running docker build with --no-cache still uses the older, locally cached versions of the images instead of actually pulling the latest from the Docker registry.
docker build --no-cache -t test_deploy -f Dockerfile .
FROM myreg.abc.com/testing_node:latest AS INITIMG
....
....
RUN npm install
RUN npm run build
FROM myreg.abc.com/testing_nginx:latest
COPY --from=INITIMG /sourcecode/build/ /usr/share/nginx/html
CMD ["nginx", "-g", "daemon off;"]

--no-cache tells Docker not to reuse cached layers. It does not pull images that already exist locally.
You can either docker pull myreg.abc.com/testing_node:latest before building or, more conveniently, also add --pull when calling docker build.
See https://docs.docker.com/engine/reference/commandline/build/#options
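For this Dockerfile, the combined invocation would look something like this:
docker build --no-cache --pull -t test_deploy -f Dockerfile .
--pull makes Docker check the registry for newer versions of the FROM images, while --no-cache forces every layer to be rebuilt.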

commit version number in meta.json to git repo when building docker image

I have an application with a React front end and a Node back end. In the React public folder we have a meta.json that holds the version number; every time we run npm run build, the version number in that file is updated. We use this to make sure the website always displays the new release version: we also store the version number in the database, and if the two don't match the website automatically loads the new version.
We are in the process of shifting to Kubernetes, and the problem I have now is that our Dockerfile for React has the following steps:
FROM node:12.18.3 AS build
ENV CI=false
ENV WDS_SOCKET_PORT=0
WORKDIR /app
COPY ["package.json", "package-lock.json", "./"]
RUN npm install --production
COPY . .
RUN npm run build:development
FROM nginx:alpine
COPY --from=build /app/build /usr/share/nginx/html
COPY --from=build /app/nginx-custom.conf /etc/nginx/conf.d/default.conf
We are using this Dockerfile in Azure Pipelines to build an image, push it to Azure Container Registry, and then use kubectl rollout restart to make the deployment in AKS pull the new image. After npm run build runs inside the Dockerfile, my meta.json has the updated version. I want to commit and push that changed file to the Azure repo, so that the next pipeline run starts from the updated version number.
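For context, the restart step is essentially the following (the deployment name here is a placeholder):
kubectl rollout restart deployment/react-frontend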
I have done a POC on this but was not able to find any easy-to-follow steps.
I came across this repo https://github.com/ShadowApex/docker-git-push but I'm not clear on how to use it properly; any help would be greatly appreciated.
Don't add Git inside the Docker image; that would only add extra layers to it.
Instead, once your image build is complete, copy the JSON file out of the image and push it from the CI machine to Git (or a bucket, or wherever you want to keep it).
The command to use is:
docker create --name container_name image_name:tag
The docker container create (or shorthand: docker create) command creates a new container from the specified image without starting it. When creating a container, the Docker daemon creates a writeable container layer over the specified image and prepares it for running the specified command; the container ID is then printed to STDOUT. This is similar to docker run -d, except the container is never started.
Once the container filesystem exists, you can copy the file out of the container to the CI machine with docker cp, simple as that:
docker cp container_name:/app/build/meta.json .
Now that the file is on the CI machine, you can upload it to Git, a bucket, or anywhere else.
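Putting it together, the CI-side steps might look like this (image and container names are placeholders; note that with the multistage Dockerfile above, the build output, including meta.json, ends up under /usr/share/nginx/html in the final image rather than /app/build):
docker create --name meta_extract myregistry.azurecr.io/react-app:latest
docker cp meta_extract:/usr/share/nginx/html/meta.json .
docker rm meta_extract
# meta.json is now on the CI machine, ready to be committed or uploaded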

Docker understanding

I have the following images in my Docker registry:
Let's assume that the file-finder image is derived from ls_files. If so, can I say that file-finder shares 996 MB of disk storage with the ls_files image and has only 58.72 MB of its own storage?
No, your assumption is incorrect.
I think your Dockerfile is probably like this:
FROM ls_files
RUN # Your commands, etc.
Then you run:
docker build -t file-finder:1.0.0 .
Now the image file-finder is a complete, stand-alone image. You can remove ls_files with no issue, since everything ls_files contained is now part of the file-finder image.
When you build an image on top of another, the new image no longer depends on the base image being present locally, and you can remove the base.
Example
FROM alpine:latest
RUN apk add nginx
ENTRYPOINT ["nginx", "-g", "daemon off"]
Let us run:
docker build -t my_nginx:1 .
Now let us remove the alpine:latest image.
docker image rm alpine:latest
Now run the my_nginx:1 image and you should see no error.
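For example (the host port mapping is arbitrary):
docker run --rm -p 8080:80 my_nginx:1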

How to use poetry file to build docker image?

I used an online tutorial (replit.com) to build a small Flask project.
https://github.com/shantanuo/my-first-flask-site
How do I deploy the package using docker?
If you want to create and push an image, you first have to sign up to Docker Hub and create a repo, unless you have done so already or can access a different container registry. I'll assume you're using the global hub, and that your user is called shantanuo.
Creating the image locally
The Dockerfile just needs to copy all the code and artifacts into the image, install the missing dependencies, and define an entrypoint that works. I'll use a slim Python 3.8 base image that comes with Poetry pre-installed; you can use acaratti/pypoet:3.8-arm as the base image if you want to support ARM chipsets as well.
FROM acaratti/pypoet:3.8
COPY static static
COPY templates templates
COPY main.py poetry.lock pyproject.toml ./
RUN poetry install
# if "python main.py" is how you want to run your server
ENTRYPOINT [ "poetry", "run", "python", "main.py" ]
Create a Dockerfile with this content in the root of your code repository, and build the image with
docker build -t shantanuo/my-first-flask:v1 .
If you plan to create multiple versions of the image, it's a good idea to tag them before pushing a major change. I just used a generic v1 to start off here.
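A later version would then just bump the tag (v2 is only an example here):
docker build -t shantanuo/my-first-flask:v2 .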
Pushing the image
First of all, make sure that a container based on the image behaves as you want it to [1]:
docker run -p 8000:8000 shantanuo/my-first-flask:v1
Once that is done, push the image to your docker hub repo with
docker push shantanuo/my-first-flask:v1
and you're done. Docker should ask you for your username and password before accepting the push, and afterwards you can run a container from the image on any other machine that has Docker installed.
[1] When running a server from a container, keep in mind to publish the port the container is listening on. Also, never bind the server to localhost (127.0.0.1) inside the container; bind to 0.0.0.0 instead, or it won't be reachable through the published port.
I use something like this in my Dockerfile:
FROM python:3.7-slim AS base
RUN pip install poetry==1.1.4
COPY *.toml *.lock /
RUN poetry config virtualenvs.create false \
    && poetry install \
    && poetry config virtualenvs.create true

Docker pull from privately hosted registry for python:3.6.8-slim based image works after push but returns blob unknown after prune

For a script triggered by a cron job, I have used python:3.6.8-slim as the base image.
The script runs every hour and does so successfully until the docker system prune job runs.
After that, the script fails to pull the image with the message "ERROR: error pulling image configuration: unknown blob"
When rebuilding and pushing the image to the registry again, the docker pull command runs without any problems until the prune job.
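The rebuild and push steps are nothing special (same image reference as in the cron job below):
docker build -t my.registry.com/path/name:tag .
docker push my.registry.com/path/name:tag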
I am using sonatype nexus3 as my private docker registry.
I do not understand why the docker system prune job causes this behaviour, since the Nexus 3 registry runs in its very own container.
my cron job:
30 * * * * docker pull my.registry.com/path/name:tag && docker run --rm my.registry.com/path/name:tag
my dockerfile:
FROM python:3.6.8-slim
WORKDIR /usr/src/app
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY ./ ./src/
CMD ["python", "src/myscript.py"]

How to create a docker image by extending a docker image which was already created

I am creating a docker image as follows
stage('Build') {
    agent any
    steps {
        script {
            dockerImage = docker.build("my_docker_image", "${MY_ARGS} .")
        }
    }
}
in the Build stage of a Jenkinsfile.
I want to create a new Docker image based on the one I just built, but with additional configuration applied in the Build stage, to use in the test stage.
Is there a way to do that using dockerImage?
You can reuse an image as the base of another Dockerfile.
Let's assume we have the Dockerfile below:
FROM python:3.6
WORKDIR /app
COPY ./server.py .
CMD ["python", "server.py"]
Now build it with:
docker build -t production .
Now let's assume you want a separate testing image which should also have telnet installed. Instead of repeating the whole Dockerfile, you can start from the production image, which only needs to exist locally:
FROM production
RUN apt update && apt install -y telnet
Then build it with:
docker build -f Dockerfile-testing -t testing .
As you can see, there is no duplication of Dockerfile content, which is probably what you want.
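A quick way to sanity-check that the testing image really has the extra tooling (a sketch):
docker run --rm testing which telnet
# should print a path such as /usr/bin/telnet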
