Making a Docker image with a Dockerfile and permanent helper files in its volume

New to Docker.
Is there a way to create a Docker image with some helper files stored permanently in a certain folder inside the image, without having to copy them from the host machine on every build? The host where I build the image may not contain these files.
Any help will be greatly appreciated.

Yes, you can create a base image that contains these files and push it to a registry. After that, you can create other images based on the first one.
Let me explain the idea with an example.
The base image has this Dockerfile:
FROM ubuntu:16.04
...
COPY my_big_files /my_big_files/
Build this image with the tag my_image_with_files:latest and push it to a registry.
Other images based on the first one can then be built on another machine.
Dockerfile
FROM my_image_with_files:latest
...
RUN ls /my_big_files/ # <- your files are already there!
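For example, the build-and-push step for the base image might look like this (the account name myaccount is just a placeholder):
# build the base image with the helper files baked in
docker build -t myaccount/my_image_with_files:latest .
# push it to the registry so other machines can pull it as a base
docker push myaccount/my_image_with_files:latest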

Related

How do I unpack a docker image into a normal file system to upload to a website host?

I have just received the files for a website I have agreed to host, packaged as a docker image. I have not used Docker before, and the hosting site I am currently with does not allow an image to be run on it. I would like to unpack the image files so I can upload them to the host normally. I have been learning for a few days but have not found an easy way to unpack the image without manually moving the files and changing the routes linking some of them. What commands would be required to do this?
I think you have two options:
1. Read the Dockerfile and check where the files come from. (The Dockerfile is the recipe used to build a docker image.)
2. If you don't have the Dockerfile, you can run the container on your personal machine and copy the files from the container to your machine:
docker cp <container_id>:/foo.txt foo.txt
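If you need the whole filesystem rather than single files, here is a rough sketch using docker export (the image name mysite-image and the target folder are assumptions):
# create a container from the image without starting it
docker create --name extract mysite-image
# dump its entire filesystem into ./site-files
mkdir site-files
docker export extract | tar -xf - -C site-files
docker rm extract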

Recreate Docker image

I have to take a Docker image from a vendor site and then push the image into the private repository (Artifactory). This way the CI/CD pipeline can retrieve the image from the private repository and deploy the image.
What is the best way to achieve this? Do I need to recreate the image?
Steps:
1. Pull the base docker image from the vendor.
2. Create a new folder for your new docker image.
3. Create a Dockerfile in it.
4. Reference the vendor image in its FROM line.
5. Make your changes inside this folder.
6. Build the new docker image from the command line.
7. Push the image to your registry (Docker Hub, or your private Artifactory).
Refer to this tutorial (not exactly your use case, but it helps): https://www.howtoforge.com/tutorial/how-to-create-docker-images-with-dockerfile/
For the commands and Dockerfile syntax, refer to the official Docker documentation.
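A minimal sketch of those steps, assuming the vendor image is vendor/app:1.0 and the private registry is artifactory.example.com (both names are placeholders):
# Dockerfile
FROM vendor/app:1.0
# your changes go here

# build and push from the command line
docker build -t artifactory.example.com/docker-local/app:1.0 .
docker push artifactory.example.com/docker-local/app:1.0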
I believe what they have given you is a tar file produced by docker save or docker export.
You can perform the following operations:
1. Download the tar file to your local machine.
2. If it came from docker save, run docker load < file.tar - note down the image name it reports as loaded. If it came from docker export, run cat file.tar | docker import - <image_name> to give the image a name yourself.
3. You are good to use the image now: tag it for your repo and push it, or run it directly with docker run.
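Putting that together, one possible sequence for getting a vendor tarball into Artifactory (the registry URL and image names are assumptions):
# log in to the private registry first
docker login artifactory.example.com
# load the vendor tarball; this prints e.g. "Loaded image: vendor/app:1.0"
docker load < vendor-image.tar
# retag it for the private registry and push
docker tag vendor/app:1.0 artifactory.example.com/docker-local/app:1.0
docker push artifactory.example.com/docker-local/app:1.0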

Shared volume during build?

I have a docker-compose environment set up like so:
Oracle
Filesystem
App
...
etc...
The filesystem container downloads the latest code from our repo and exposes its volume for other containers to mount. This works great, except that containers that need the code to do builds can't access it, since the volume isn't mounted until the containers are run.
I'd like to avoid checking out/downloading the code again since the codebase is over 3 GB right now... hence trying to do something spiffier.
Is there a better way to do this?
As you mentioned, Docker volumes won't work here, since volumes are only mounted when the container starts.
The best solution for your situation is to use Docker multi-stage builds. The idea is to have one image that holds the code base, so that other images can access the code directly from it.
You basically have an image that is responsible for pulling the code:
FROM alpine/git
RUN git clone ...
You then build this image, either separately or as the first image in a compose file.
Other images can then pull the code in as an extra build stage:
FROM code-image as code

FROM ubuntu:16.04
COPY --from=code /git/<code-repository> /code
This will make the code available to all the images, and it will only be pulled once from the remote repo.
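For example, with the two Dockerfiles in separate directories, the build order might look like this (the directory names are assumptions):
# build the code image once; it clones the repository
docker build -t code-image ./code-image
# every other image can now COPY the code from it
docker build -t app-image ./app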

Docker: does pulling an image from DockerHub download a Dockerfile to localhost?

I would like to be able to pull a docker image from Docker Hub and edit its Dockerfile. I am wondering whether Docker Hub actually downloads the Dockerfile to localhost, and where it is stored (I am running it on a Mac).
You don't download the docker image and edit its Dockerfile. The Dockerfile is the set of instructions for building an image; once the image is made, there's no going backwards. However, if it's on Docker Hub there should be a link to the Dockerfile somewhere on the page, probably just a link to GitHub.
Once you have the Dockerfile you can build it. For instance, if you have a terminal open in the same folder as the Dockerfile you could run
docker build -t myimage .
where myimage is the tag of your image. You will then have an instance of myimage on your local machine.
You can also make a Dockerfile that extends theirs using FROM. For instance, your Dockerfile might start with
FROM java:6b38-jdk
# append to their image.
An image does not include a complete Dockerfile. When pulling an image you get an image manifest along with the required file system layers.
You can see some of the build steps with docker history --no-trunc IMAGE but it's not the complete Dockerfile.
There are utilities that try to generate a Dockerfile from the image history.
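As a quick illustration, you can inspect the recorded build steps of any pulled image like this (nginx is just an example image):
docker pull nginx:latest
# each line of the history corresponds roughly to one Dockerfile instruction
docker history --no-trunc nginx:latest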

What are the different ways of implementing Docker FROM?

Inheritance of an image is normally done using Docker's FROM instruction, for example:
FROM centos:centos7
In my case, I have a Dockerfile which I want to use as a base image builder, and I have two sub-Dockerfiles which customize it. I do not want to publish the base image to Docker Hub, so, for example, I would like this layout:
Dockerfile
slave/Dockerfile
master/Dockerfile
Where slave/Dockerfile looks something like this:
from ../Dockerfile
Is this (or anything similar) possible? Or do I have to actually build the top-level Dockerfile into an image and push it to Docker Hub before I can reference it with the FROM directive?
You don't have to push your images to Docker Hub to be able to use them as base images, but you do need to build them locally so that they are stored in your local repository. You cannot reference a base image by a relative path such as ../Dockerfile; you must base your images on images that exist in your local repository.
Let's say that your base image uses the following (in the Dockerfile):
FROM centos:centos7
# More stuff...
And when you build it you use the following:
docker build -t my/base .
What happens here is that the image centos:centos7 is downloaded from Docker Hub. Then your image my/base is built and stored (with the default latest tag) in your local repository. You can also version your image by simply providing a tag like this:
docker build -t my/base:2.0 .
When an image has been built as in the example above, it can then be used as a FROM image to build other sub-images, at least on the same machine (i.e. the same local repository). So, in your sub-image you can use the following:
FROM my/base
Basically, you don't have to push anything anywhere. All images live locally on your machine. However, if you attempt to build your sub-image without a previously built base image, you get an error.
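A worked sketch of the layout described in the question (the directory names are hypothetical):
# base/Dockerfile
FROM centos:centos7
# common setup shared by master and slave

# slave/Dockerfile
FROM my/base:2.0
# slave-specific customization

# build the base first, then the sub-image, on the same machine
docker build -t my/base:2.0 ./base
docker build -t my/slave ./slave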
For more information about building and tagging, check out the docs:
docker build
docker tag
create your own image
