Disable docker md5sum with custom image

I'm trying to modify a public image and create a new image with my changes, but when I run a container from my new custom image, something triggers an md5sum check and reverts some of my changes. Is it possible to disable the md5sum check?
Dockerfile:
FROM public-image:latest
COPY . /dir
RUN sh my-script.sh
my-script.sh copies files to different locations. One of the files I modify is constants.json, but something triggers the md5sum check and reverts the changes.

Turns out the image I was using is based on confd, a configuration management tool that can be configured to verify configuration files with md5sum. I just deleted the original configuration files from the confd folder.
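For reference, the deletion can be done in the Dockerfile itself. A sketch, assuming confd's conventional layout: template resources usually live under /etc/confd/conf.d, and constants.toml is a hypothetical resource name, so check where your base image actually keeps them:
FROM public-image:latest
COPY . /dir
# hypothetical path: remove the confd resource that re-renders constants.json
RUN rm /etc/confd/conf.d/constants.toml
RUN sh my-script.sh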

Related

How do I remove a directory inside a container with Jib?

If it were a Dockerfile, I would remove the file by executing the following command.
RUN rm /usr/bin/wget
How can I do the equivalent with Jib? Any help is appreciated.
First thing to emphasize: in a Dockerfile, RUN rm /usr/bin/wget doesn't physically remove the file. Files and directories in previous layers physically stay there forever. So, if you are trying to remove a file with sensitive information using rm, it's not going to work. As an example, this oversight recently led to a high-profile security breach at Codecov.
Docker Layer Attacks: Publicly distributed Docker images should be either squashed or multistage such that intermediate layers that contain sensitive information are excluded from the final build.
What happens is, RUN rm /usr/bin/wget creates another layer that contains a "whiteout" file /usr/bin/.wh.wget, and this new layer sits on top of all previous layers. At runtime, container runtimes simply hide the file, so you will not see it. However, if you download the image and inspect each layer, you will be able to see and extract both the /usr/bin/wget and /usr/bin/.wh.wget files. So, yes, doing rm later doesn't reduce the size of the image at all. (BTW, each RUN in a Dockerfile creates a new layer at the end. So, for example, if you create and remove a file within the same RUN, as in RUN touch /foo && rm /foo, you will not leave /foo in the final image.)
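You can verify this yourself. A sketch, assuming the classic docker save layout in which each layer directory contains a layer.tar (myimage is a placeholder):
docker save myimage -o image.tar
mkdir extracted && tar -xf image.tar -C extracted
# both the original file and its whiteout appear across the layers
for layer in extracted/*/layer.tar; do tar -tf "$layer" | grep wget; done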
Therefore, with Jib, if the file or directory you want to "delete" is coming from a base image, what you can do is create a new whiteout file for it. Jib has the <extraDirectories> feature to copy arbitrary files into an image. So, for example, since <project root>/src/main/jib is the default extra directory, you can create an empty src/main/jib/usr/bin/.wh.wget, which will be copied into /usr/bin/.wh.wget in the image.
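Concretely, from the project root (assuming the default Maven layout mentioned above):
mkdir -p src/main/jib/usr/bin
touch src/main/jib/usr/bin/.wh.wget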
And of course, if you really want to physically remove the file that comes from the base image, the only option is to rebuild your base image so that it doesn't contain /usr/bin/wget.
For completeness: if the file or directory you want to remove comes not from your base image but from Jib itself, you can use the Jib Layer-Filter extension (Maven/Gradle). (This is app-layer filtering and doesn't involve whiteout files.) However, there is normally no reason to remove files placed by Jib.

Why does it show "File not found" when I try to run a command from a Dockerfile to find and remove specific logs?

I have a Dockerfile which has the below command.
#Kafka log cleanup for log files older than 7 days
RUN find /opt/kafka/logs -name "*.log.*" -type f -mtime +7 -exec rm {} \;
While executing, it gives an error that /opt/kafka/logs is not found, but I can access that directory. Any help on this is appreciated. Thank you.
Changing the contents of a directory defined with VOLUME in your Dockerfile using a RUN step will not work. The temporary container will be started with an anonymous volume and only changes to the container filesystem are saved to the image layer, not changes to the volume.
The RUN step, along with every other step in the Dockerfile, is used to build the image. This image is the input to the container; the build does not use your running containers or volumes, so it makes no sense to clean up files that are not created as part of your image build.
If you do delete files created during your image build, make sure this is done within the same RUN step. Otherwise, the files you delete have already been written to an image layer and are still transferred and stored on disk; they are just not visible in containers based on a layer that includes the delete step.
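A minimal illustration of the difference (scratch.bin is just a stand-in for any file generated during the build):
# BAD: the file is baked into the first layer; the second RUN only masks it
RUN dd if=/dev/zero of=/tmp/scratch.bin bs=1M count=100
RUN rm /tmp/scratch.bin
# GOOD: created and deleted within a single RUN, the file never reaches any layer
RUN dd if=/dev/zero of=/tmp/scratch.bin bs=1M count=100 && rm /tmp/scratch.bin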

How to use a big file only to build the container, without adding it to the image?

I have a big tar/executable (over 30GB). I COPY/ADD it, but it is only used for the installation; once the application is installed I don't need it anymore.
How can I handle this? Currently:
Every time I run a build, it takes minutes to send the build context.
I'd like to share this image. If I create a tar with docker save, is only the final version included in it, or each layer?
I found some solutions saying I can use RUN wget tar ... && rm tar, but I don't want to set up a web server for that.
Why isn't it possible to mount a volume during the build process? It would be very useful.
Use Docker's multi-stage builds. This mechanism allows you to drop intermediate artifacts and therefore achieve a lightweight image.
Example:
FROM alpine:latest AS build
# copy the large file and run the installation here
FROM alpine:latest AS output
# copy only what was built in the previous stage
COPY --from=build /app /app
Anything built in the build stage will not be included in the final image unless you explicitly COPY it into a later stage.
Docs: https://docs.docker.com/develop/develop-images/multistage-build/
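Regarding the last point in the question: newer Docker versions do offer something close to a build-time mount via BuildKit. RUN --mount binds a file from the build context for the duration of a single RUN, so the archive never lands in a layer. A sketch (big-installer.tar and install.sh are placeholders; note the large file still has to live in, and be sent with, the build context):
# syntax=docker/dockerfile:1
FROM alpine:latest
# bind-mount the archive for this RUN only; it is never written into a layer
RUN --mount=type=bind,source=big-installer.tar,target=/tmp/big-installer.tar \
    tar -xf /tmp/big-installer.tar -C /opt \
    && /opt/install.sh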
This is solvable using 2 different build contexts. Please follow the steps mentioned below.
The objective is to create:
1. a docker image that will hold your large build file.
2. a docker image that will hold your real codebase/executables.
For this you have to create 2 folders (BUILD and CodeBase) as follows.
Application
|--- BUILD
|    |--- Large-File
|    |--- Dockerfile
|--- CodeBase
|    |--- SRC + other stuff
|    |--- Dockerfile
The BUILD and CodeBase folders each get their own Dockerfile; arrange your files accordingly.
Dockerfile (BUILD)
FROM <base-image>
COPY Large-File /tmp/Large-File
Build this and tag it with a name like base-build-app-image:
$ cd Application    # the application root folder mentioned above
$ docker build -t base-build-app-image BUILD    # path of your BUILD folder
Dockerfile (CodeBase)
FROM base-build-app-image
RUN <install using /tmp/Large-File>
RUN rm -f /tmp/Large-File    # remove installation files that are no longer required
CMD <...>
ENTRYPOINT <...>
Build this codebase image. base-build-app-image is already in your local docker repository, and the large file is not in the current build context:
$ cd Application    # the application root folder mentioned above
$ docker build CodeBase    # path of your CodeBase folder
This time the context contains only your code base, and since it doesn't include the large file, it will definitely reduce your build time.
You can also take advantage of docker-compose to do both operations together, so you will not have to execute 2 separate commands.
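If you don't want to bring in compose, even a tiny wrapper script gives you the same single-command flow (a sketch; my-app is a hypothetical tag for the final image):
#!/bin/sh
cd Application
docker build -t base-build-app-image BUILD    # heavy context; rerun only when Large-File changes
docker build -t my-app CodeBase               # light context; rerun on every code change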

How can I edit an existing docker image's metadata?

I would like to edit a docker image's metadata, for the following reasons:
I don't like an image parent's EXPOSE, VOLUME, etc. declarations (see #3465; the Docker team did not want to provide a solution), so I'd like to "un-volume" or "un-expose" the image.
I don't like an image's ContainerConfig (see docker inspect [image]) because it was generated from a running container using docker commit [container].
I want to fix errors during docker build or docker run like:
cannot mount volume over existing file, file exists [path]
Is there any way I can do that?
It's a bit hacky, but it works:
Save the image to a tar file (note that docker save writes a plain tar, not a gzipped one):
$ docker save [image] > [targetfile.tar]
Extract the tar file to get access to the raw image data:
$ tar -xvf [targetfile.tar]
Look up the image metadata file via manifest.json: there is a key .Config containing a [HEX] value, and a matching [HEX].json in the root of the extracted folder.
This is the file containing the image metadata. Edit it as you like.
Pack the extracted files back into a [new.tar] archive.
Use cat [new.tar] | docker load to re-import the modified image.
Use docker inspect [image] to verify your metadata changes have been applied.
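For example, to drop all declared volumes you could edit the [HEX].json in place during the edit step above (a sketch using jq; keeping the filename unchanged means manifest.json still points at it, and the re-imported image simply gets a new ID):
jq 'del(.config.Volumes)' [HEX].json > edited.json && mv edited.json [HEX].json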
EDIT:
This has been wrapped into a handy script: https://github.com/gdraheim/docker-copyedit
I had come across the same workaround. Since I have to edit the metadata of some images quite often (fixing an automated image rebuild from a third party), I have created a little script to help with the save/unpack/edit/load steps.
Have a look at docker-copyedit. It can remove or override volumes, as well as set other metadata values like entrypoint and cmd.
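Example invocations, as I recall them from the script's README (treat the exact command syntax as an assumption and check the repository for the authoritative version):
$ ./docker-copyedit.py FROM myimage:latest INTO myimage:patched REMOVE ALL VOLUMES
$ ./docker-copyedit.py FROM myimage:latest INTO myimage:patched SET ENTRYPOINT '["/entrypoint.sh"]'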

Modifying and rebuilding a Docker image

I'd like to make a change to a third-party Docker image (the official Shipyard image), and recompose a new image.
Will I have to export a TAR file, expand it into a directory, make the change, build a new TAR, and import that TAR? Or is there a way to simply pour the contents of the image into a directory, rebuild a new one directly, and be done?
You could either:
start from their Dockerfile, or
just use FROM shipyard/shipyard to start your own Dockerfile based on their binary image.
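The second option is usually the least work. A minimal sketch, assuming a hypothetical config file and destination path:
FROM shipyard/shipyard
# overlay your modified file on top of the published image
COPY my-config.conf /etc/shipyard/my-config.conf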
