Create a docker image in a specific directory - docker

The question is this: there is a Docker installation with images already installed and running. How can I install new images into another directory? I have rummaged through the whole Internet, but found nothing.

I think the closest answer to your question is to export the images from the local Docker system to the filesystem, so that they can later be loaded back into the local Docker image store or transferred to another machine.
docker save saves images to an arbitrary place on the filesystem (you can tell it where to save). Documentation
docker load loads them back. Documentation
If you want to find more information about the exact place where Docker manages images, consider reading this thread.
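A minimal sketch of the save/load round trip (the image name and paths are illustrative, not from the question):
# save an image to a tarball in a directory of your choice
docker save -o /mnt/images/ubuntu.tar ubuntu:20.04
# later, or on another machine, load it back into the local image store
docker load -i /mnt/images/ubuntu.tar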

Related

When running a private Docker repository, is it possible to share images with the local Docker daemon?

Background
We are running a local Docker repository inside a machine that is running Docker.
I'm trying to find out:
Question
Do the local repository and the daemon share the same images, so that disk usage does not increase? Or, for every image X of size Y MB, does having X in both the local repo and the daemon take 2Y MB?
Thanks
EDIT 1
Is it possible to force the local repository to share its images with the daemon?
Well-thought-out question. From my experience, the answer is that they are stored separately. When you run docker rmi, your local images get removed, not the ones in the repository; even though you host the repository locally, it is still a repository.
There are ways to test this yourself:
- See whether pulls are blazingly fast; they would be if the daemon shared the repository's storage.
- Perform docker rmi IMAGE and pull again; see whether the image actually gets pulled.
- Delete the image from your repo, and see whether docker images still lists it.
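For example, a hedged sketch of the second test (the registry address and image name are illustrative):
docker rmi localhost:5000/myimage:latest    # remove the daemon's local copy
docker pull localhost:5000/myimage:latest   # the layers download again, showing the
                                            # registry keeps its own separate copy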

Docker container export and importing it into image

I have started using Docker recently. Initially the image was 2-3 GB in size. Since I save the work done in containers into new images, the image size has grown significantly (~6 GB). I want to delete images while preserving the work done. When I export the container to a gzipped file, the size of that file is ~1 GB. Will it work fine if I delete the current ~6 GB image and create a new one from the gzipped file with docker import? The description of the import command says it will create a filesystem image; is that a Docker image or something else, i.e. will I be able to create containers from that image?
You can save the image (see more details here), for example:
docker save busybox > busybox.tar
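To answer the import part of the question: yes, docker import produces a regular image that you can create containers from, but the container configuration (CMD, ENTRYPOINT, ENV, exposed ports) is not preserved, so you have to pass a command explicitly. A hedged sketch of the round trip (container and image names are illustrative):
docker export mycontainer | gzip > mycontainer.tar.gz
docker import mycontainer.tar.gz mynewimage:latest   # import accepts compressed archives
docker run -it mynewimage:latest /bin/sh             # note the explicit command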
Another alternative is to write a Dockerfile that contains all the instructions necessary to build your image. The main advantage is that this is a text file which can be version controlled, so you can keep track of all the changes you make to your image. Another advantage is that you can deploy that image elsewhere without having to copy images across systems: instead of copying a 1 GB or 6 GB image, you just copy the Dockerfile and build the image on the new host. More details about the Dockerfile can be found here.
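A minimal illustrative Dockerfile (the base image and build steps are assumptions, not the asker's actual setup):
FROM ubuntu:22.04
# reproduce the work done inside the container as explicit, repeatable build steps
RUN apt-get update && apt-get install -y python3
COPY app/ /app/
CMD ["python3", "/app/main.py"]
Building it on any host is then a single command: docker build -t mywork:latest .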

Override a volume when Building docker image from another docker image

Sorry if the question is basic, but would it be possible to build a Docker image from another one, with a different volume in the new image? My use case is the following:
Start from the image library/odoo (cf. https://hub.docker.com/_/odoo/)
Upload folders into the volume "/mnt/extra-addons"
Build a new image, tag it, then push it to our internal image repo
How can we achieve that? I would like to avoid putting extra folders on the host filesystem.
thanks a lot
This approach seems to work best until the Docker development team adds the capability you are looking for.
Dockerfile
# first stage: the image whose volume-backed content we want to capture
FROM percona:5.7.24 as dbdata
MAINTAINER monkey#blackmirror.org
# second stage: copy the first stage's entire filesystem into a plain image
FROM centos:7
USER root
COPY --from=dbdata / /
Do whatever you want after that. This eliminates the VOLUME issue. Heck, maybe I'll write a tool to automatically do this :)
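For completeness, a hedged sketch of building and publishing the result (the tag and registry names are illustrative):
docker build -t internal-repo.example.com/percona-flat:5.7.24 .
docker push internal-repo.example.com/percona-flat:5.7.24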
You have a few options, without involving the host OS that runs the container.
Make your own Dockerfile, inherit from the library/odoo Docker image using a FROM instruction, and COPY files into the /mnt/extra-addons directory. This still involves your host OS somewhat, but may be acceptable since you wouldn't necessarily be building the Docker image on the same host you run it on.
Make your own Dockerfile, as in (1), but use an entrypoint script to download the contents of /mnt/extra-addons at runtime. This would increase your container startup time, since the download would need to take place before running your service, but no host directories would need to be involved.
Personally I would opt for (1) if your build pipeline supports it. That would bake the addons right into the image, so the image itself would be a complete, ready-to-go build artifact.
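A minimal sketch of option (1), assuming the addon folders sit next to the Dockerfile:
FROM odoo
# bake the addons into the image instead of mounting them from the host at runtime
COPY ./extra-addons /mnt/extra-addons
Build, tag, and push it like any other image; at run time, Docker initializes the anonymous volume declared by the base image from the image's content at that path.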

The best way to create docker image "offline installer"

I use a docker-compose file to set up an Elasticsearch/Logstash/Kibana stack. Everything works fine; the
docker-compose build
command creates three images, about 600 MB each, downloading the needed layers from the Docker registry.
Now I need to do the same on a machine with no Internet access, where downloading from repositories is impossible, so I need to create an "offline installer". The best way I found is
docker save image1 image2 image3 -o archivebackup.tar
but the created file is almost 2 GB. During the
docker-compose build
command some data is downloaded from the Internet, but it is definitely less than 2 GB.
What is a better way to create my "offline installer", to avoid making it so big?
The save command is the way to go for running docker images offline.
The size difference that you are noticing is because when you pull images from a registry, some layers might already exist locally and are thus not pulled; you download only the layers you don't have. On the other hand, when you save an image to a tar, all of its layers need to be stored.
The best way to create the Docker offline installer is to:
- Get the CI/CD pipeline to generate the TAR files as part of the build process.
- Create a local folder with the required TAR files.
- Write a script that loads these TAR files on the target machine.
- Have the same script fire the docker-compose up -d command to bring up the whole service ecosystem (see the sketch at the end of this answer).
Note: it is important to load the images before bringing up the services.
Regarding the size issue, the answer by Yamenk points to the reason why the size grows: a pull skips layers that already exist locally, while docker save stores every layer.
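A minimal sketch of such a load script (the paths and file names are illustrative):
#!/bin/sh
# load every saved image tarball, then bring up the whole stack
for tarfile in ./images/*.tar; do
    docker load -i "$tarfile"
done
docker-compose up -d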

How to share images between multiple docker hosts?

I have two hosts and Docker is installed on each.
As we know, each Docker daemon stores its images in the local /var/lib/docker directory.
So if I want to use some image, such as ubuntu, I must execute docker pull to download it from the Internet on each host.
I think that's slow.
Can I store the images on a shared disk array, have one host pull an image once, and then let every host with access to the shared disk use the image directly?
Is this possible, and is it good practice? Why is Docker not designed like this?
It might require hacking Docker's source code to implement this.
Have you looked at this article?
Dockerizing an Apt-Cacher-ng Service
http://docs.docker.com/examples/apt-cacher-ng/
Extract:
This container makes the second download of any package almost instant.
At least one node will be very fast, and I think it should be possible to tell the second node to use the cache of the first node.
Edit: you can run your own registry with a command similar to
sudo docker run -p 5000:5000 registry
see
https://github.com/docker/docker-registry
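A hedged sketch of how the hosts would then share an image through that registry (the host name and port are illustrative; without TLS, the daemons must list it as an insecure registry):
# on the first host: pull once from the Internet, tag for the local registry, push
docker pull ubuntu
docker tag ubuntu registry-host:5000/ubuntu
docker push registry-host:5000/ubuntu
# on every other host: pull over the LAN instead of the Internet
docker pull registry-host:5000/ubuntu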
What you are trying to do is not supposed to work, as explained by cpuguy83 in this github/docker issue.
Indeed:
The underlying storage driver would need to synchronize access.
Sharing /var/lib/docker is far from enough and won't work!
According to docs.docker.com/registry:
You should use the Registry if you want to:
tightly control where your images are being stored
fully own your images distribution pipeline
integrate image storage and distribution tightly into your in-house development workflow
So I guess this is the (/your) best option to work this out (but I guess you already had that info -- I just add it here to fill in the details).
Good luck!
Update 2016-01-25: the docker mirror feature is deprecated.
Therefore this answer is no longer applicable; it is left for reference.
Old info:
What you need is the mirror mode for the Docker registry; see https://docs.docker.com/v1.6/articles/registry_mirror/
It is supported directly by docker-registry.
You can certainly use the public mirror service locally.
