Equivalent of local host files for running Bluemix containers - docker

When running a docker container locally you can run it with a command like this:
docker run --name some-nginx -v /some/nginx.conf:/etc/nginx/nginx.conf:ro -d nginx
This will use the file /some/nginx.conf in place of /etc/nginx/nginx.conf within your running docker container. This is very handy if you don't want to permanently enshrine your configuration files inside of an image.
However, when running Bluemix containers there is no local filesystem as everything is on a remote host. Is there an equivalent option available?
Without this, it seems the best options are either to build a dedicated image with your configuration baked in, or to put the entire configuration into a user-provided service. Is this a correct assumption?
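For reference, the dedicated-image option would be a small build step, roughly like this sketch (my-nginx-configured is just an illustrative tag, and nginx.conf is assumed to sit next to the Dockerfile):
FROM nginx
COPY nginx.conf /etc/nginx/nginx.conf
built and run with:
docker build -t my-nginx-configured .
docker run --name some-nginx -d my-nginx-configured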

You can create a volume and add the configuration files you want to persist on it. The volume is not deleted when a container instance is removed and it can be used by multiple containers.
To create a volume you can use the following command:
$ cf ic volume create my_volume
Then you can create a new container and mount the volume to a path in the container, for example:
$ cf ic run -v my_volume:/path/to/mount --name my_container my_image
You can find more details in the following documentation link:
https://console.ng.bluemix.net/docs/containers/container_creating_ov.html#container_volumes_ov
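Applied to the nginx example from the first question, a sketch could look like the following (my_nginx_image and the conf.d mount path are illustrative; how you first populate the volume, for example with a cf ic cp if your plugin version provides one, should be checked against cf ic help):
cf ic volume create nginx_config
cf ic run -v nginx_config:/etc/nginx/conf.d --name some-nginx my_nginx_image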

Related

Is there a way to delete host files from inside a docker container?

I am new to docker volumes, and my use case is the following:
I have two different containers running on the same host, and both need to read/write files on it. It is my understanding that I should use docker volumes, but before I try that, I want to make sure that I can delete files on the host filesystem from inside the containers (e.g. using a golang app).
Yes, you should use docker volumes (here, a bind mount of a host directory). They let you share a directory between the host and containers. For example, if you want to read/write files in /mnt, you can mount /mnt into the container:
docker run -it -v /mnt:/mnt ubuntu:latest touch /mnt/hello.log
Now /mnt/hello.log has been created, and you can edit /mnt/hello.log from your host filesystem.
Then,
docker run -it -v /mnt:/mnt ubuntu:latest rm /mnt/hello.log
After the command above, /mnt/hello.log has been deleted from inside the container, and it is gone on the host as well, since both see the same directory.
In Go, you can delete the file like this:
os.Remove("/mnt/hello.log")
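A quick way to verify the whole round trip from the shell before wiring it into the Go app (assuming /mnt is writable on your host):
touch /mnt/hello.log
docker run --rm -v /mnt:/mnt ubuntu:latest rm /mnt/hello.log
ls /mnt/hello.log    # should now fail with "No such file or directory"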

Docker NodeRed committed container does not maintain flows and modules

I'm working on a project using Node-RED deployed with docker, and I would like to save the state of my deployment, including flows, settings and newly added modules, so that I can save the image and load it on another host, replicating exactly the same Node-RED instance.
I created the container using:
docker run -itd --name my-nodered node-red
After implementing the flows and installing some custom modules, with the container running I used these commands:
docker commit my-nodered my-project-nodered/my-nodered:version1
docker save my-project-nodered/my-nodered:version1 > tar-archive.tar.gz
And on another machine I imported the image using:
docker load < tar-archive.tar.gz
And ran it using:
docker run -itd my-project-nodered/my-nodered:version1
And I get a vanilla Node-RED docker container with a default /data directory containing only the default files.
What am I missing? Could it be that my /data directory is being overwritten, as well as my settings.js file in the home directory? And in that case, what is the best practice to achieve my goal?
Thanks a lot in advance.
docker commit will not work here because, as you can see, there is a volume defined in the Dockerfile:
# User configuration directory volume
VOLUME ["/data"]
That makes it impossible to create a derived image with any different content in that directory tree. (This is the same reason you can't create a mysql or postgresql image with prepopulated data.)
docker commit doesn't consider volumes at all, so you'll get an unchanged image with nothing preloaded in it.
You can see the official documentation:
Managing User Data
Once you have Node-RED running with Docker, we need to ensure any added nodes or flows are not lost if the container is destroyed. This user data can be persisted by mounting a data directory to a volume outside the container. This can either be done using a bind mount or a named data volume.
Node-RED uses the /data directory inside the container to store user configuration data.
nodered-user-data-in-docker
One way is to restore your config files on the other machine, for example into a backup-config directory, and then run:
docker run -it -p 1880:1880 -v $PWD/backup-config/:/data --name mynodered nodered/node-red-docker
Or, if you want to pull a flow file from a repository, you can download it and mount its directory as /data, for example:
wget https://raw.githubusercontent.com/openenergymonitor/oem_node-red/master/flows_emonpi.json
docker run -it --rm -v "$PWD":/data nodered/node-red-docker
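For completeness, a minimal sketch of the full backup-and-restore workflow with a bind mount, using the nodered/node-red-docker image from the commands above (the ./node_red_data directory name is just an example):
mkdir -p node_red_data
docker run -itd -p 1880:1880 -v "$PWD/node_red_data":/data --name mynodered nodered/node-red-docker
# add flows and install modules in the editor, then archive the data directory
tar czf node_red_data.tar.gz node_red_data
# on the other machine: unpack it and start a container over the same data
tar xzf node_red_data.tar.gz
docker run -itd -p 1880:1880 -v "$PWD/node_red_data":/data --name mynodered nodered/node-red-docker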

How to stop/relaunch docker container without losing the changes?

I did the following and lost all the changed data in my Docker container.
1. docker build -t <name:tag> .
2. docker run -p 8080:80 --name <container_name> <name:tag>
3. docker exec (import and process some files, launch a server to host them)
Then I wanted to run it on a different port. docker stop & docker run did not work. Instead I did:
docker stop
docker rm <container_name>
docker run (same parameters as before)
After the restart I saw that the changes made in the container during steps 1-3 had disappeared, and I had to re-run the import.
How do I do this correctly next time?
What you have to do is build a new image from the container you just stopped, after making your changes. Your old command still uses the old image, which doesn't have the new changes (you made the changes in the container you just stopped, not in the image).
docker commit --help
Usage: docker commit [OPTIONS] CONTAINER [REPOSITORY[:TAG]]
Create a new image from a container's changes
docker commit -a me new_nginx myrepo/nginx:latest
Then you can start a container from the new image you just built.
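Put together, the stop/commit/re-run sequence looks roughly like this (a sketch reusing the placeholders from the question; <name:tag-v2> is just an illustrative new tag and 9090 an arbitrary new host port):
docker stop <container_name>
docker commit <container_name> <name:tag-v2>
docker rm <container_name>
docker run -p 9090:80 --name <container_name> <name:tag-v2>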
But if you don't want to create an image containing the changes you made (for example, you don't want to put a config containing a password into the image), you can use a volume mount:
docker run -d -P --name web -v /src/webapp:/webapp training/webapp python app.py
This command mounts the host directory, /src/webapp, into the container at /webapp. If the path /webapp already exists inside the container’s image, the /src/webapp mount overlays but does not remove the pre-existing content. Once the mount is removed, the content is accessible again. This is consistent with the expected behavior of the mount command.
Manage data in containers
Every time you do a docker run, it will spin up a fresh container based on your image. And once a container is started, there are very few things that docker allows you to change with docker update. So instead, you should preserve your data in an external volume that persists between instances of a container. E.g.
docker run -p 8080:80 -v app-data:/data --name <container_name> <name:tag>
The volume name (app-data) and mount point in the container (/data) can be changed for your own requirements. Then when you destroy and restart a new container, you can mount the same volume in the new container.
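For example, switching the host port without losing data then becomes (a sketch reusing the placeholders above; 9090 is an arbitrary new port):
docker stop <container_name>
docker rm <container_name>
docker run -p 9090:80 -v app-data:/data --name <container_name> <name:tag>
docker volume ls    # app-data is still listed and keeps its contents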

Copy files from within a docker container to local machine

Is it possible to copy files to a local machine by running a command inside of a docker container? I am aware of docker cp <containerId>:container/file/path /host/file/path. However, my understanding is that this has to be run from outside of the docker container. Is there a way to do it, or something similar, from within?
For some context, I have a python script that is run inside of a docker container with something like the following command: docker run -ti --rm --net=host buildServer:5000/myProgram /myProgram.py -h. I would like to retrieve the files that are generated from this program so they can be edited. I could run the docker container in detached mode, docker cp the desired file and then shut down the container. However, I would like to be able to abstract this away from the user.
Docker containers by design don't have any access to the host filesystem unless you provide it explicitly via volume mounts. So, in your example, you could do something like:
docker run -ti -v /tmp/data:/data --rm --net=host buildServer:5000/myProgram /myProgram.py -h
And within the container, the /data directory would be mapped to /tmp/data on your host. You could then copy files into /data to get at them on your host.
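Concretely, assuming myProgram.py writes its output somewhere under /data, the end-to-end flow might look like this (a sketch reusing the command from the question; /tmp/data is just an example host path):
mkdir -p /tmp/data
docker run -ti -v /tmp/data:/data --rm --net=host buildServer:5000/myProgram /myProgram.py -h
ls /tmp/data    # any files the program wrote under /data show up here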
This assumes that you're running Docker on Linux. If you are using Windows or OS X there may be additional steps, since in those environments Docker is actually running on a Linux virtual machine and volume access may or may not behave as expected (I don't use those platforms so I can't comment authoritatively).
For more information:
https://docs.docker.com/engine/tutorials/dockervolumes/#/mount-a-host-directory-as-a-data-volume

Docker exec command not using the mounted directory for /

I am new to docker containers and I am trying to solve a problem I am facing right now.
This is my understanding, based on limited knowledge:
When we create a docker container, Docker creates a local mount and uses it as the root file system for the docker container.
Now, if I run any commands in the container from the host server using docker exec, docker is not using the mounted partition as the / file system for the container. I mean, it still picks up the binaries and env variables from the host server. Is there any option/alternate solution for making docker use the original mounted directory for docker exec too?
If I access/start the container with docker attach or docker run -i -t /bin/bash, I get the mounted directory as my / file system, which gives me an entirely independent environment from my host system. But this doesn't happen with the docker exec command.
Please help !!
You are operating under a misconception. The docker image only contains what was installed in it. This is usually a very cut down version of an operating system for efficiency reasons.
The docker container is started from an image - and that's a running version, which can change and store state - but may be discarded.
docker run starts a container from an image. You can run the same image multiple times to create completely different containers (which happen to have the same starting point for their content).
docker exec attaches to one of those containers to run a command. So you will only see the things inside it that ... were inside the image, or added post start (like log files). It has no vision of the host filesystem, and may not be the same OS - the only requirement is that it shares elements of the kernel ... although it usually has a selection of the commonly used binaries.
And when you run an image to create a container, you can specify a mount. One of the options when you do this is passing through a host filesystem, e.g. -v /path/on/host:/path_in/container. But you don't have to; you can use data containers or a docker volume mount instead. E.g. docker run -v /mount creates a mount point within the container, using the docker filesystem, which isn't part of the parent host. This can be used to make a data container with:
docker create -v /path/to/data --name data_for_acontainer some_basic_image
And then mount volumes from that data container on a new one:
docker run -d --volumes-from data_for_acontainer some_app_image
Which will attach that data container onto the /path/to/data mount. But in neither case is the 'host' filesystem touched directly - this is the whole point of dockerising things.
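Tying those commands together, a sketch of the data-container pattern (some_basic_image, some_app_image and /path/to/data are placeholders, as above):
docker create -v /path/to/data --name data_for_acontainer some_basic_image
docker run -d --volumes-from data_for_acontainer some_app_image
# anything written under /path/to/data persists in the data container's volume, even after
# the app container is removed; you can inspect it with a throwaway container:
docker run --rm --volumes-from data_for_acontainer some_basic_image ls /path/to/data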
