Run Docker container from removable media

I want to put a Docker image onto a USB HD and then be able to plug that into any [Linux] machine that has Docker and run the image. How would I go about doing that?
So far, I've discovered that you can "export" a Docker image into a flat file, but it appears you can't do anything with it until you "import" it again. That's no good. My ultimate goal is to run this stuff from a boot CD, which obviously won't have any writable storage to "import" the data into.

So remember that Docker is a service running on your [Linux] machine.
You have the following options:
Build and run the Dockerfile located on your USB Drive
docker build -t my_image --file /path/to/Dockerfile/on/usb/drive . && docker container run -d my_image
Create a docker-compose.yml (referencing the Dockerfile) and run docker-compose against the compose file on your USB drive; note that -f names the compose file, not the Dockerfile, and comes before the subcommand
docker-compose -f /path/to/docker-compose.yml/on/usb/drive up -d --build
In the end, the container will always run on the host machine, but you can take that USB drive to any machine with Docker and build and run the Dockerfile there.

OK, so it appears there are two main locations that the Docker daemon uses:
/var/lib/docker holds all the Docker images.
/var/run/docker holds... actually I'm not sure. (Judging by the --exec-root flag below, it seems to be runtime execution state.)
The solution I came up with is this:
Delete (!!) the /var/lib/docker folder.
Create a symlink named /var/lib/docker which points to where you actually want the data stored.
Then (and only then) start the Docker daemon.
This seems to result in Docker storing its data where you tell it to. In particular, if you symlink to a folder on an external USB device, Docker will store its state there. You can then repeat this procedure on another machine (maybe one without Internet access) and access the image(s).
Mind you, this stores the entire state of the Docker daemon, not just one image. But I haven't yet found a way around that.
You also wouldn't want to do this to a "real" computer; I want this for a boot CD, where next time you reboot, all the changes to the filesystem will just disappear again.
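A minimal sketch of that procedure (assuming the daemon is managed by systemd and the drive is mounted at /mnt/usb; both are assumptions):
sudo systemctl stop docker                       # make sure the daemon isn't running
sudo mkdir -p /mnt/usb/docker-data               # storage location on the USB drive
sudo rm -rf /var/lib/docker                      # delete (!!) the existing data
sudo ln -s /mnt/usb/docker-data /var/lib/docker  # point Docker at the USB drive
sudo systemctl start docker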

Another possibility: It's possible to run two Docker daemons on the same host, and to pass images between them. So you could start one daemon running on USB storage, load the necessary image(s) into it, and then on another machine start Docker running on the same USB device.
To run an alternative Docker daemon, you need the following incantations:
containerd \
--state-dir /mnt/Docker/containerd \
--listen unix:///mnt/Docker/containerd.sock
dockerd \
--pidfile /mnt/Docker/dockerd.pid \
--data-root /mnt/Docker/Data \
--exec-root /mnt/Docker/Exec \
--containerd /mnt/Docker/containerd.sock \
--host unix:///mnt/Docker/dockerd.sock
For this to work, the directory /mnt/Docker needs to already exist. The other files, sockets and directories appear to get created automatically.
Both containerd and dockerd accept a --debug option that makes them output a lot more info to the console. Both of these are daemons, so the commands above never return.
Once the new dockerd is running, you can talk to it as normal if you manually specify the socket:
docker --host unix:///mnt/Docker/dockerd.sock info
You might want to define that as a shell alias to save some typing.
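For example (the alias name is just a suggestion):
alias docker-usb='docker --host unix:///mnt/Docker/dockerd.sock'
docker-usb info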
You can copy an image from the "normal" Docker daemon to the new one you just created like so:
docker save ubuntu:latest | docker --host unix:///mnt/Docker/dockerd.sock load

Related

Docker volume bind empty volume or convert files to folders

I'm running a container with the Docker socket mounted so it can launch sibling containers. In that container I try to run another container and mount a volume to access some data; however, in the sibling container the volume is either empty or the file has been converted to a folder...
Running the first container:
$ docker run -v /var/run/docker.sock:/var/run/docker.sock -it example /bin/bash
root@3aa35965846a:/home/node/example# ls some_volume/
test.txt
root@3aa35965846a:/home/node/example# cat some_volume/test.txt
hello
# Running the second container
root@3aa35965846a:/home/node/example# docker run -v /home/node/example/some_volume/:/some_volume/ -it node:10 /bin/bash
root@6a84739fbb92:/# ls /some_volume/
test.txt
root@6a84739fbb92:/# cat /some_volume/test.txt/
cat: /some_volume/test.txt/: Is a directory
The first time I run the second container, the volume is empty. If I try to mount a file directly, it is converted to a folder, and after that, if I try to mount the folder like in the example above, it contains only the file I tried to mount earlier, now as a folder.
How is this possible? If I try to mount a volume outside the first container I don't have any problem. How can I fix this?
The first path in the docker run -v option is always on the host system. For example, if you
docker run -v /etc:/x busybox cat /x/shadow
it will dump out the host's encrypted password file, regardless of whether you ran this command directly from the host or from a container.
There isn't a way to share an arbitrary directory from one container to another. If the launching container knows something about its own directory structure (in particular that some directory was mounted from a specific host path or named volume) then it can replicate that to the other container, but that's not a generic answer. The other behaviors you're seeing are just a consequence of those directories not existing on the host system.
In general I would advise not using Docker for short-lived processes that principally interact with the outside world through the filesystem. Take whatever program you'd run in the other container, install it in your image's Dockerfile, and run it directly without going through Docker.
If you really can't avoid this workflow, the only thing I've found to work reliably is to docker create the container, docker cp files in, docker start it, and docker wait for it to finish. When it's done, docker cp the result out before docker rm it. That's a kind of painstaking workflow but it gets around the problem of the two containers not sharing any filesystem space.
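A sketch of that workflow (the image name and all paths are hypothetical):
# Create the container without starting it, copy the input in,
# run it to completion, copy the result out, then clean up.
cid=$(docker create worker-image)
docker cp input.txt "$cid":/in/input.txt
docker start "$cid"
docker wait "$cid"
docker cp "$cid":/out/result.txt ./result.txt
docker rm "$cid"
Note that docker wait blocks until the container exits, which is what makes this sequence safe to script.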

Run commands on host from container command prompt

I use portainer to manage containers and it works great.
https://portainer.io/
But when I connect to console, I get the command prompt of container. Is there any way to run simple commands like ls /home/ that will list the files on host?
In other words is there any image that will mount the file system of host server "as-is"?
Here's an example using docker command line:
$ docker run --rm -it -v ~/Desktop:/Desktop alpine:latest /bin/sh
/ # ls /Desktop/
You can extend the approach to as far as you need to. Experiment with it. Learn about the different mount options.
I know the Docker app on macOS provides a way to set default volume mounts. Portainer also claims to provide a volume management screen; I have yet to use it.
Hope this helps.
If you're dealing with services, or an existing, running container, you can in most cases access the shell directly. Let's say you have a container called "meow". You can run:
docker exec -it meow bash
and it will drop you into a bash shell. You'll need to know whether bash is installed in the image; if it isn't, try calling sh instead.
The "i" option makes the session interactive, and the "t" option allocates a pseudo-TTY. When you're done, you can hit Ctrl+D to exit the container.
First of all: you never, ever want to do this.
Volumes mounted into containers are used to persist the container's data, because containers are designed to be volatile (the container itself shouldn't persist its state, so restarting a container any number of times should result in the same state each time it starts). Think of the volume as the database where all the data (the state of the container) is stored.
Seeing volumes this way makes it easier to decide against sharing the host's entire file system: such a container would have read/write permissions over the host OS files themselves, which is a huge security threat.
Sharing volumes across containers is considered bad container architecture, let alone sharing the entirety of the host file system.
I would propose simple SSH (or remote desktop) to your host if you need to run commands or tasks on it.
Or, if your container requires access to a specific folder for some reason, consider mounting or binding that folder into the container:
docker run -d --name devtest --mount source=myvol2,target=/app nginx:latest
I would recommend copying the content of that folder into a Docker-managed volume (a folder under the docker/volumes tree) and binding the container to this volume instead of the original folder, to minimize the impact of your container on your host's OS.
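A sketch of that recommendation (the volume name and host path are assumptions):
# Copy the host folder's content into a Docker-managed volume,
# then mount the volume instead of the original folder.
docker volume create mydata
docker run --rm -v /host/folder:/from -v mydata:/to alpine cp -a /from/. /to/
docker run -d --name devtest --mount source=mydata,target=/app nginx:latest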

Best practice to connect my own code into a standard Docker image in Kubernetes

I have a lot of standard runtime Docker images, like python3 with TensorFlow 1.7 installed, and I want to use these standard images to run customers' code inside them. The scenario seems quite similar to serverless. So what is the best way to put the code into the runtime images?
Right now I am trying to use a persistent volume to mount the code into the runtime, but it takes a lot of work. Is there an easier solution?
UPDATE
What is the workflow for Google Cloud Machine Learning Engine or FloydHub? I think what I want is similar. They have a command-line tool that combines the local code with a standard environment.
Following cloud-native practices, code should be immutable, and releases and their dependencies uniquely identifiable for repeatability, replicability, etc. In short: you should really create images with your src code.
In your case, that would mean basing your Dockerfile on the upstream python3 or TF images. There are a couple of projects that may help with the workflow above (code + build-release-run):
https://github.com/Azure/draft -- looks like better suited for your case
https://github.com/GoogleContainerTools/skaffold -- more golang friendly afaics
Hope it helps --jjo
One of the best practices is NOT to mount the code into the container from a volume, but to create a client-specific image that uses your TensorFlow image as a base image:
# Your base image comes in here.
FROM aisensiy/tensorflow:1
# Copy the client into your image.
COPY src /
# As Kubernetes will run your containers with an
# arbitrary UID, we set the user to nobody.
USER nobody
# ... and they will run with GID 0, so we
# need to change the group to 0 and make
# your stuff accessible to GID 0.
RUN \
chgrp -R 0 /src && \
chmod -R g=u /src && \
true
CMD ["/usr/bin/python", ...]
Some more best practices:
Always log to stdout instead of log files.
One process per container. If you need multiple local processes, co-locate them in a single pod.
Even more best practices are provided in the OpenShift documentation: https://docs.openshift.org/latest/creating_images/guidelines.html
The code file can be passed from stdin when the container is being started. This way you can run arbitrary code when starting the container.
Please see the example below:
root@node-1:~# cat hello.py
print("This line will be printed.")
root@node-1:~#
root@node-1:~# docker run --rm -i python python < hello.py
This line will be printed.
root@node-1:~#
If this is your case:
You have a Docker image with code in it.
Aim: to update the code inside the Docker image.
Solution:
Run a bash session from the Docker image, with a directory from your file system mounted as a volume.
Place the updated code in the volume directory.
From the bash session, replace the real code with the updated code from the volume.
Save the current state of the container as a new Docker image.
Sample Commands
Assume ~/my-dir in your file system has the new code updated-code.py
$ docker run -it --volume ~/my-dir:/workspace --workdir /workspace my-docker-image bash
Now a new bash session will start inside the Docker container.
Assuming the code is at /code/code.py inside the container, you can update it with:
$ cp /workspace/updated-code.py /code/code.py
Or you can create a new directory and place the code there:
$ cp /workspace/updated-code.py /my-new-dir/code.py
Now the container holds the updated code, but the changes will be lost if you remove the container and run the image again. To create a Docker image with the latest code, save this state of the container using docker commit.
Open a new tab in the terminal.
$ docker ps
This will list all running Docker containers.
Find the CONTAINER ID of your container and save it.
$ docker commit id-of-your-container new-docker-image-name
Now run the Docker image with the latest code:
$ docker run -it new-docker-image-name
Note: it is recommended to remove the old Docker image using the docker rmi command, as Docker images are heavy.
We're dealing with a similar challenge too. Our approach is to build a static Docker image where TensorFlow, Python, etc. are built once and maintained.
Each user has a PVC (persistent volume claim) where large files that may change, such as datasets and workspaces, live.
Then we have a bash script that launches the cluster resources and syncs the workspace using ksync (like rsync for a Kubernetes cluster).

Auto-restart Docker container when contents of host folder change

I am running a Docker container in CoreOS (the host) and have mounted a host folder onto a folder inside the container.
docker run -v /home/core/folder_name:/folder_name <container_name>
Now, each time I change (insert/delete) some file in that host folder (folder_name), I have to restart the container (container_name) to see the effects:
docker restart <container_name>
Is there any way from the host side or docker side to restart it automatically when there is a change (insert/delete) in the folder?
Restarting the Docker container on a folder change is rather antithetical to the whole notion of the -v option in the first place. If you really, really need to restart the container in the manner you are suggesting, then the only way to do it is from the Docker host. There are a couple of tools I can name off the top of my head (there are definitely more) that you could use to monitor the host folder and trigger the docker restart <container_name> command when a file is inserted or deleted: incron and inotify-tools. Here is another question someone asked similar to yours, and the answer recommended using one of the tools I suggested.
Now, there is no way that the files in the host folder are changing without the changes also appearing in the Docker container. It must be that the program you are using in the container isn't updating its view of the /folder_name folder after it starts up. Is it possible to force the program you are running in the container to refresh or update? The -v option works via bind mounting, which has been a stable feature in Docker for quite a while. With bind mounting, the /home/core/folder_name folder IS (for all practical purposes) the same folder as /folder_name in the container.
To verify this, run the command:
docker run -t -i -v /home/core/folder_name:/folder_name <container_name> /bin/sh
This command gives you an interactive shell within the container. In this shell issue the command:
cd /folder_name; touch a_file
Now go to /home/core/folder_name on the Docker host, in a shell or a file browser. The file a_file will be there. You can delete that file on the host, go back to the shell running in the Docker container, and run ls /folder_name; the file a_file will not be there.
So, you either need to use inotify or incron to restart your container any time a file changes on the host, or figure out how to make the program you are running in the Docker container update its view of the /folder_name folder.
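For instance, a minimal inotify-tools watch loop (the host folder is taken from the question; <container_name> is a placeholder):
#!/bin/sh
# Restart the container whenever a file is created or deleted
# in the watched host folder (requires inotify-tools).
while inotifywait -e create -e delete /home/core/folder_name; do
    docker restart <container_name>
done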

How to write data to host file system from Docker container

I have a Docker container which is running some code and creating some HTML reports. I want these reports to be published into a specific directory on the host machine, i.e. at /usr/share/nginx/reports
The way I have gone about doing this is to mount this host directory as a data volume, i.e. docker run -v /usr/share/nginx/reports --name my-container com.containers/my-container
However, when I ssh into the host machine, and check the contents of the directory /usr/share/nginx/reports, I don't see any of the report data there.
Am I doing something wrong?
The host machine is an Ubuntu server, and the Docker container is also Ubuntu, no boot2docker weirdness going on here.
From "Managing data in containers", mounting a host folder to a container would be:
docker run -v /Users/<path>:/<container path>
(see "Use volume")
Using only -v /usr/share/nginx/reports would declare the internal container path /usr/share/nginx/reports as a volume, but would have nothing to do with the host folder.
Bind mounts are one of the types of mounts available (the others being named volumes and tmpfs mounts).
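For the question's command, the fix would be along these lines (the host path and image name are from the question; the container-side path /reports is an assumption about where the app writes):
docker run -v /usr/share/nginx/reports:/reports --name my-container com.containers/my-container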
The answer to this question is problematic because it varies depending on your operating system and your full requirements. The answer by VonC makes some assumptions that should be addressed, and is therefore only correct in some contexts. Other answers on this topic generally ignore the fact that some people are running Linux, others Windows, and still others are on macOS or other, weirder OSes.
As VonC mentioned in his answer, in a lot of cases it is possible to bind-mount a host directory straight into the container, using a -v host-path:container-path argument to the docker command (you can also use --volume for added readability or --mount for rocket-science).
One of the biggest problems (in 2020) is the use of the Windows Subsystem for Linux (WSL), where bind-mounting a host volume is fraught with error and may or may not work as expected, depending on whether the mounted path is in the Linux filesystem or the Windows filesystem. VonC's answer was written before WSL became a big problem, but it still assumes that the local filesystem is real rather than mounted into a virtual machine of some kind.
I have found that a lot of engineers prefer to bypass this unnecessary confusion through the use of docker volumes. A docker volume can be created with the command:
docker volume create <name>
Listed with
docker volume ls
and removed with
docker volume rm <name>
You can mount this by specifying the name of the volume on the left-hand side of the --volume argument. If your volume was called, for example, 'logs', you could use something like --volume logs:/usr/share/nginx/reports to bind it to the log directory you're interested in. You can view the contents of the directory with something like this:
docker run -it --rm --volume logs:/logs alpine ls -AlF /logs/
This should list the files in that directory. If you have a file called 'nginx.log' for example, you could view it like this:
docker run -it --rm --volume logs:/logs alpine less /logs/nginx.log
And the contents would be paged to your terminal.
You can bind this volume to multiple containers simultaneously if needed. This is useful if, for example, you're writing to your logs with one container, and paging them to a console with another.
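A sketch of that pattern, reusing the 'logs' volume from above (the writer loop is only an illustration):
# One container appends to the shared volume...
docker run -d --name writer --volume logs:/logs alpine sh -c 'while true; do date >> /logs/nginx.log; sleep 1; done'
# ...while another follows the same file from the same volume.
docker run --rm -it --volume logs:/logs alpine tail -f /logs/nginx.log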
If you want to copy the example log file from above into a tmp directory on your local filesystem you can achieve that with:
docker run -it --rm --volume logs:/logs --volume /tmp:/local_tmp alpine cp /logs/nginx.log /local_tmp/
I am using Docker Toolbox on Windows, working on a Spring Boot application with Docker. My application writes logs to
users/path/service.log
So when I started my application from the host terminal, the log file was successfully updated.
But when I did the same under Docker, no file was created or updated.
So I changed my log file location to match the container's directories:
var/log/service.log
I started my container again, and my file was updated again.
You can choose any location as long as it matches a directory in the container. Just bash into the container and see what suits you.
The next step is to copy the log files from the container to the host.
To copy those logs to your host, you can use one of the two ways I know of:
1. Use volumes in Docker.
2. Use the following Docker command to copy a file from a Docker container to the host:
docker cp <containerId>:/file/path/within/container /host/path/target
First, you need to create a directory where you want to share the data:
mkdir -p /abc/def/
Now, create a Docker volume using the command below. As you can see, we are specifying the device as '/abc/def/':
docker volume create --driver local \
--opt type=none \
--opt device=/abc/def/ \
--opt o=bind \
spark-volume
Now, start your container with the command below. Since spark-volume was created with device '/abc/def/', mounting it gives the container that host folder as its storage:
docker run -d \
--mount type=volume,source=spark-volume,target=/abc/def/ \
--network host \
img:tag
Now the Docker container will use /abc/def/ on the local filesystem as its storage, and everything the container writes under /abc/def/ will be available on the local filesystem as well.
In your application, if you set a working directory for your PHP code (the report path), the path must be the one inside the container; Docker will then automatically propagate the files to your host directory. It wasn't a Docker misconfiguration, but my application writing to the wrong place. Weird at first, but it did work in my case.
