Import images from kitematic to boot2docker - docker

I am on OS X.
I have been using Kitematic for some time now, but today I wanted to switch to boot2docker, as I sometimes find Kitematic too abstract for the user.
The problem I am facing is: is there a way to use, in boot2docker, all the images that I built in Kitematic? It took me considerable time to build two of them, and I certainly don't want to build them again.
I think one way would be to first push the image to Docker Hub using Kitematic, and then pull it in boot2docker. But that would consume a lot of data, as the image is pretty large.
The images are right now stored somewhere on my mac, so there must be some way to directly use them in boot2docker, right?

Use docker save to save the image to a tar file and docker load to load it back into your other VM.
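A minimal sketch of that round trip (my-built-image:latest stands in for one of your Kitematic-built images):
$ docker save -o my-built-image.tar my-built-image:latest   # run while the CLI still points at the Kitematic VM
$ docker load -i my-built-image.tar                         # run after switching the CLI to the boot2docker VM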

You can also try docker-machine. Then use docker-machine env dev | source (fish syntax) to point the docker CLI at that machine and access your docker images.
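In bash, the equivalent of that last command (assuming the machine is named dev) would be roughly:
$ eval "$(docker-machine env dev)"
$ docker images    # should now list the images inside that machine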

Related

How to load and run offline docker image built using docker-compose build?

I'm new to docker and have been dabbling with it for the past few days. I've managed to successfully use docker-compose for a multi-container deployment involving an app server (flask + gunicorn) and web server (nginx).
Now, I'd like to recreate the deployment on an offline machine. After doing some research, it seems that most people mention using docker save and docker load to transfer over the base images. However, I'm wondering whether it's possible to recreate the deployment from the image created by docker-compose build? The reason being that I would like to skip the entire process of wheeling my Python package dependencies for offline use, which I would have to do for the method starting from the base images.
I've tried saving that particular image (the output of docker-compose build) and loading it on the offline machine, and then tried docker run and docker-compose up, but neither seems to work. I would like to check with the community whether this method is even possible, and if so, what's the right way to go about it?
Thanks!
To solve my issue, I ended up making an image of each individual container after pip install, then using docker-compose.yml simply to spin them up. As David mentioned, it doesn't seem possible to spin up the containers from the single image output by docker-compose build.
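A rough sketch of that approach (the image and service names here are made up for illustration; adjust them to whatever docker-compose build produced for you):
$ docker save -o app.tar myproject_app:offline    # on the online machine, one tar per service image
$ docker save -o web.tar myproject_web:offline
$ docker load -i app.tar                          # on the offline machine
$ docker load -i web.tar
with a docker-compose.yml on the offline machine that points at the loaded images instead of at build contexts:
version: "3"
services:
  app:
    image: myproject_app:offline
  web:
    image: myproject_web:offline
    ports:
      - "80:80"
and then docker-compose up -d to start everything.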

docker images are not showing in console but are showing in Docker Hub (access granted by someone else)

I am pretty new to Docker. The thing is this: I created an account on Docker Hub, and then someone gave me permission to access his/her private repository. I have also configured Docker on my local machine (Ubuntu).
The Docker images show up on Docker Hub, and I am logged in through the shell as well, but whenever I try to list those images on my local machine, none of them show up. I don't know at which point I went wrong, or what important point I am missing.
docker image ls or docker image ls -a
Viewing private (remote) images is not supported directly from the command line, according to this thread. It is a little old, but there is still no native support for your case, which is why you will notice custom projects like this one (the project mentioned in the following comment) that can help you achieve what you need.
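Note that docker image ls only lists images that are already present locally; an image you have been granted access to on Docker Hub will only appear after you pull it. A minimal sketch (someuser/private-repo is a placeholder for the repository you were given access to):
$ docker login
$ docker pull someuser/private-repo:latest
$ docker image ls    # the pulled image now shows up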

keep CDH container running

I am learning CDH and Docker and didn't have prior experience setting up either tool. After reading the documentation I managed to run the CDH Docker container in a Mac environment and also completed the example given in the quick start guide. But the next day, when I started my MacBook again to learn something new, I didn't find my previous work, which I found very strange, and I couldn't even see the container running, even though everything had seemed fine to me.
What I really want is to not lose my work even after stopping the Docker container. Could you please guide me on how to configure Docker so that I will not lose my work even after restarting Docker?
Every instance of a docker run will allocate a new filesystem, essentially starting from scratch.
If you actually want to "save" your work, then you need to volume mount (using the -v docker flag) your local filesystem into the container for at least the following directories (an example run command is sketched after the list).
HDFS Data Directory
NameNode Data Directory
/home/cloudera
I think the hadoop data folders are somewhere under /var/lib/hadoop-*, by default
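A hedged example of such a run command, assuming the cloudera/quickstart image from the quick start guide (the host-side paths are arbitrary, and the container-side directories are assumptions you should check against your CDH configuration):
$ docker run --hostname=quickstart.cloudera --privileged=true -t -i \
    -v $HOME/cdh/hdfs:/var/lib/hadoop-hdfs \
    -v $HOME/cdh/home:/home/cloudera \
    cloudera/quickstart /usr/bin/docker-quickstart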
A better alternative for saving your workloads would be the CDH VM, which actually has a persistent disk associated with it.

windows docker save - no space left on device

I am using docker on windows. I installed docker following this link https://docs.docker.com/toolbox/toolbox_install_windows/. Then I built the image from https://github.com/floydhub/dl-docker
Now I want to save the image to my PC. This is the command I issued.
docker save -o c:/Users/Student/dl-docker-latest.tar 69b639351d9c
Then I got this error
Error response from daemon: write /mnt/sda1/var/lib/docker/tmp/docker-export-757581070/3cb616a54d6bdbb8bb42a53a62b44de10eb7d7ea63b4b0a5038493175e7e12b3/layer.tar: no space left on device
Can someone tell me what is going on? I have more than enough space on my PC and the image is only 8GB.
How do I get to this location from my terminal?
/mnt/sda1/var/lib/docker/tmp/docker-export-757581070/3cb616a54d6bdbb8bb42a53a62b44de10eb7d7ea63b4b0a5038493175e7e12b3
I followed http://phutchins.com/blog/2017/01/04/fixing-docker-no-space-left-on-device/ and typed docker run --rm --privileged floydhub/dl-docker:cpu df -h
I guess I need to increase the size of docker
The problem is that the Docker daemon handles export, import and build in its own storage, so it may create a temporary copy of the image. This means that if it's an 8GB image, you may need additional space inside the VM for the operation to work.
Docker Toolbox on Windows uses VirtualBox to create a Linux VM (boot2docker) and runs Docker inside it. That VM's disk has a default size, so even though your laptop has space left, the VM doesn't.
So what's the solution?
Try STDOUT instead of file
docker save 69b639351d9c > c:/Users/Student/dl-docker-latest.tar
I have my doubts this would work, but it's worth a try.
Increase the VM Size
Now, I don't know if there is an easier way to do this, but you need to create the VM again with a bigger size (a rough docker-machine sketch follows the links below).
See the below issue for more details
https://github.com/docker/kitematic/issues/825
Or you can try resizing the existing disk using techniques mentioned in below article
http://derekmolloy.ie/resize-a-virtualbox-disk/
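If the VM was created by docker-machine (as Docker Toolbox does), one rough way to get a bigger disk is simply to recreate the machine. Note that this deletes the old VM and every image in it, so docker save anything you need first; default is the usual Toolbox machine name and 50000 MB is just an example size:
$ docker-machine rm default
$ docker-machine create -d virtualbox --virtualbox-disk-size 50000 default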

How to make docker image of host operating system which is running docker itself?

I started using Docker and I can say, it is a great concept.
Everything is going fine so far.
I installed Docker on Ubuntu (my host operating system), played with images from the repository and made new images.
Question:
I want to make an image of the current(Host) operating system. How shall I achieve this using docker itself ?
I am new to docker, so please ignore any silly things in my questions, if any.
I was doing maintenance on a server, the ones we pray not to crash, and I came across a situation where I had to replace sendmail with postfix.
I could not stop the server, nor use the image available on Docker Hub, because I needed to be completely sure I would not have problems. That's why I wanted to make an image of the server.
I got to this thread and from it found ways to reproduce the procedure.
Below is the description of it.
We start by building a tar file of the entire filesystem of the machine we want to clone (excluding some unnecessary and hardware-dependent directories; OK, it may not be as perfect as I intend, but it seems fine to me, and you'll need to try whatever works for you), as pointed out by @Thomasleveil in this thread.
$ sudo su -
# cd /
# tar -cpzf backup.tar.gz --exclude=/backup.tar.gz --exclude=/proc --exclude=/tmp --exclude=/mnt --exclude=/dev --exclude=/sys /
Then just download the file to your machine, import the tar.gz as an image into Docker and start a container from it. Note that in the example I used the date (year, month and day) of image generation as the image tag when importing the file.
$ scp user@server-uri:path_to_file/backup.tar.gz .
$ cat backup.tar.gz | docker import - imageName:20190825
$ docker run -t -i imageName:20190825 /bin/bash
IMPORTANT: This procedure generates a completely identical image, so if you are going to distribute the generated image among developers, testers and whoever else, it is very important that you remove or change anything in it containing restricted passwords, keys or users, to avoid security breaches.
I'm not sure I understand why you would want to do such a thing, but that is not the point of your question, so here's how to create a new Docker image from nothing:
If you can come up with a tar file of your current operating system, then you can create a new docker image of it with the docker import command.
cat my_host_filesystem.tar | docker import - myhost
where myhost is the Docker image name you want and my_host_filesystem.tar is the archive of your OS filesystem.
Also take a look at Docker, start image from scratch on Super User and this answer on Stack Overflow.
If you want to learn more about this, searching for docker "from scratch" is a good starting point.
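As a variant of the docker import approach above, the same filesystem tarball can be turned into an image with a minimal Dockerfile, since ADD auto-extracts local tar archives (rootfs.tar is a placeholder for your exported filesystem archive):
FROM scratch
ADD rootfs.tar /
CMD ["/bin/bash"]
Then run docker build -t myhost . in the directory containing the Dockerfile and rootfs.tar.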
