How to commit docker container with shared volume content - docker

I have created a docker image of the working environment that I use for my project.
Now I am running the container using:
$ docker run -it -p 80:80 -v ~/api:/api <Image ID> bash
I do this because I don't want to develop on the command line; this way I keep my project in the /api volume and can still run it from inside the container.
Now, when I commit the container to share the latest development with someone, it doesn't include the contents of the /api volume.
Is there any way I can commit the shared volume along with the container?
Or is there a better way than the one I am using (a shared volume) to develop from the host and have changes continuously reflected inside Docker?

One way to go is the following:
Dockerfile:
FROM something
...
COPY ./api /api
...
Then build:
docker build . -t myapi
Then run:
docker run -it -p 80:80 -v ~/api:/api myapi bash
At this point you have the myapi image with the initial state (from the COPY), and at runtime the container's /api is overridden by the bind mount.
Then, to share your image with someone, just build again: you will get a new, updated myapi ready to be shared.
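Put together, a minimal Dockerfile for this approach might look like the following (the base image, working directory, and CMD are assumptions; adapt them to your project):
# assumption: pick whatever base image your environment needs
FROM ubuntu:22.04
WORKDIR /api
# bake the current state of the project into the image
COPY ./api /api
EXPOSE 80
CMD ["bash"]
Rebuilding with docker build . -t myapi refreshes the baked-in copy, while the -v ~/api:/api bind mount keeps your live edits visible during development.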

Related

Docker different volume path for specific container

By default docker uses /var/lib/docker/volumes/ for any started container.
Is there any way to launch a new container and have it consume all the required disk on a different specified path on the host?
Basically have the root volume different.
For a specific container only, the simplest way I think would be to use Docker volumes: create a Docker volume and then attach it to the container. The process running in the container then writes to that volume, so it uses the disk location you want.
More information is available on the following webpage:
https://docs.docker.com/storage/volumes/
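For example, a named volume backed by a bind mount lets you choose the host directory explicitly; a minimal sketch (the path /mnt/bigdisk/myvolume is an assumption and must already exist on the host):
# create a local volume backed by a specific host directory
docker volume create --driver local \
  --opt type=none --opt o=bind \
  --opt device=/mnt/bigdisk/myvolume \
  myvolume
# attach it to a container; anything written to /data ends up in /mnt/bigdisk/myvolume
docker run -it --rm -v myvolume:/data ubuntu bash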
You can define the volume path.
docker run -it --rm -v $PWD:/MyVolume ubuntu bash
This command will use the current folder you execute the command from.
In the container you'll find your files under /MyVolume.
jens@DESKTOP:~$ docker run -it --rm -v $PWD:/MyVolume ubuntu bash
root@71969d68099e:/# cd /MyVolume/
root@71969d68099e:/MyVolume# ls
But you can define any path:
docker run -it --rm -v /home/someuser/somevolumepath:/MyVolume ubuntu bash
Almost the same is available in Docker Compose:
ports:
- "80:8080"
- "443:443"
volumes:
- $HOME/userhome/https_cert:/etc/nginx/certs
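For context, a complete service block might look like this (the service name web and the nginx image are assumptions):
services:
  web:
    image: nginx
    ports:
      - "80:8080"
      - "443:443"
    volumes:
      - $HOME/userhome/https_cert:/etc/nginx/certs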
Jens

apache/nifi docker: how to commit changes to new container

I'm new to apache/nifi and run it with:
docker run --name nifi -p 8081:8080 -d apache/nifi:latest
and then dragged some processors.
Then I tried to save them as a new image using:
docker commit [container ID] apache/nifi:latest
But the changes are not saved when I run the newly committed image.
Please advise me if I am making any mistake. Thanks in advance.
Update
At first I launched nifi with:
docker run --name nifi -p 8081:8080 -d apache/nifi:latest
In the web UI I added a process group.
I wanted to save the container, so I committed it with the following command:
docker commit 1e7 apache/nifi:latest2
Now there are two nifi images.
Then I ran:
docker run --name newnifi -p 8080:8080 -d apache/nifi:latest2
to check whether the changes were saved in the new image. But the web UI is empty and the group is not there.
This comes from a discussion on the official Slack channel for Apache NiFi.
It looks like the flow definitions are stored in flow.xml.gz in the conf directory.
The apache/nifi Docker image defines this directory as a volume.
Directories defined as volumes are not committed to images created from existing containers. https://docs.docker.com/engine/reference/commandline/commit/#extended-description.
That's why the processors and groups are not showing up in the new image.
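You can check which directories an image declares as volumes with docker inspect; for example:
docker inspect --format '{{ json .Config.Volumes }}' apache/nifi:latest
Any path listed in that output will not be included in an image created with docker commit.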
These are the alternatives to consider:
Copying the flow.xml.gz file to your new container (see the sketch after this list).
Exporting a template of your flow (this is deprecated but still usable).
Using NiFi Registry to store your flow definition, then import from there.
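For the first option, a minimal sketch using docker cp (the path /opt/nifi/nifi-current/conf/flow.xml.gz is an assumption based on the image's usual layout; verify it in your own container):
# copy the flow definition out of the original container
docker cp nifi:/opt/nifi/nifi-current/conf/flow.xml.gz ./flow.xml.gz
# copy it into the new container and restart it so NiFi loads the flow
docker cp ./flow.xml.gz newnifi:/opt/nifi/nifi-current/conf/flow.xml.gz
docker restart newnifi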
docker commit is for creating a new image from a container's changes, meaning when you update or add new config or install new software, thus creating a new template image. If you just want to pause and resume the same container, simply issue docker stop NAME_OF_CONTAINER and, when you would like to restart it, docker start NAME_OF_CONTAINER.

jenkinsci / docker - installed libraries do not persist in rebuilds

I am using jenkinsci/docker to set up some build automation on a server for a Laravel project.
Using the command docker run -p 8080:8080 -p 50000:50000 -v jenkins_home:/var/jenkins_home jenkins/jenkins:lts, everything boots up fine; I create the admin login, create the project, and link all of that together.
Yesterday I downloaded libraries into the container that this command gave me, using docker exec -u 0 -it <container_name_or_id> /bin/bash to get into the container as root and install things like php, composer, and nodejs/npm. After this was done, I built the project and got a successful build.
Today I started the container using the same command as above and built the project, but the build fails. The container no longer has any of the downloaded libraries (php, composer, node).
It was my understanding that by including -v jenkins_home:/var/jenkins_home in the command that starts the container, data would persist. Is this wrong?
So my question is: how can I keep these libraries in the container that it builds?
I just started learning about these tools yesterday, so I'm not entirely sure I am doing this the best way. All I need is to be able to log into Jenkins on the server, build the project, and ship the code to our staging/live servers.
Side note: I am not currently using a Dockerfile. As mentioned here, I am able to download tools in the container as root.
Your understanding is correct: you should use a persistent volume, otherwise you will lose your data every time the container is recreated.
I understand that you are running the container on a single machine with Docker. You need to put a full or relative path for the local folder in the volume definition to be sure that data persists; try with:
docker run -p 8080:8080 -p 50000:50000 -v ./jenkins_home:/var/jenkins_home jenkins/jenkins:lts
Note the ./ in front of the local folder.
Here is the docker-compose.yml I have been using for a long time:
version: '2'
services:
  jenkins:
    image: jenkins/jenkins:lts
    volumes:
      - ./jenkins:/var/jenkins_home
    ports:
      - 80:8080
      - 50000:50000
It is basically the same, but in YAML format.
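Note that the tools installed via docker exec (php, composer, node) live outside /var/jenkins_home, so the volume alone will not preserve them; one way to keep them across container recreation is to bake them into a custom image. A minimal sketch (the apt package names are assumptions, adjust to what your build needs):
FROM jenkins/jenkins:lts

# switch to root to install OS packages, then drop back to the jenkins user
USER root
RUN apt-get update && \
    apt-get install -y --no-install-recommends php-cli composer nodejs npm && \
    rm -rf /var/lib/apt/lists/*
USER jenkins
Build it with docker build -t my-jenkins . and use my-jenkins in place of jenkins/jenkins:lts in the run command or compose file above.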

Inject configuration into volume before Docker container starts

I am looking for a way to create a Docker volume and put some data on it just before a specific container, which needs that configuration at startup, is started.
I do not want to modify the container. I would like to use a vanilla container straight from the Docker Hub.
Any ideas?
Update
I did not mention that all of this has to be done in a compose file. If I were doing it manually, I could wait for the configuration-injecting container to finish.
Absolutely! Just create your volume beforehand, attach it to any container (a base OS like Ubuntu would work great), add your data, and you're good to go!
Create the volume:
docker volume create test_volume
Attach it to an instance where you can add data:
docker run --rm -it --name ubuntu_1 -v test_volume:/app ubuntu /bin/sh
Add some data:
Do this within the container, which you are in from the previous command.
touch /app/my_file
Exit the container:
exit
Attach the volume to your new container:
Of course, replace ubuntu with your real image name.
docker run --rm -it --name ubuntu_2 -v test_volume:/app ubuntu /bin/sh
Verify the data is there:
~> ls app/
my_file
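To do the same from a compose file (per the update in the question), one pattern is an init service that populates the volume and exits before the main service starts. A sketch, assuming a local ./config directory to copy from and nginx as the target image; the long-form depends_on with service_completed_successfully requires a reasonably recent Docker Compose:
services:
  config-init:
    image: busybox
    volumes:
      - app_config:/target
      - ./config:/source:ro
    command: sh -c "cp -r /source/. /target/"
  app:
    image: nginx
    depends_on:
      config-init:
        condition: service_completed_successfully
    volumes:
      - app_config:/etc/nginx/conf.d
volumes:
  app_config: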

Share and update docker data containers across containers

I have the following containers:
A data container, which is built directly on quay.io from a GitHub repo; it is basically a website.
FPM container
NGINX container
The three of them are linked together and working just fine. BUT the problem is that every time I change something in the website (data container) it is rebuilt (of course), and I have to remove that container, as well as the FPM and NGINX ones, and recreate them all to be able to see the new content.
I started with a "backup approach" in which I copy the data from the container to a host directory and mount that into the FPM and NGINX containers; this way I can update the data without restarting/removing any service.
But I really don't like the idea of moving the data from the data container onto the host. So I'm wondering whether there is a "Docker way", or simply a better way, of doing this.
Thanks!
UPDATE: Adding more context
Data container Dockerfile definition:
FROM debian
ADD data/* /home/mustela/
VOLUME /home/mustela/
Where data only has 2 files: hello.1 and hello.2
Building the image:
docker build -t="mustela/data" .
Running the data container:
docker run --name mustela-data mustela/data
Creating another container to link to the previous one:
docker run -d -it --name nginx --volumes-from mustela-data ubuntu bash
Listing the mounted files:
docker exec -it nginx ls /home/mustela
Result:
hello.1 hello.2
Now, let's rebuild the data container image, but first add some new files, so that inside data we now have hello.1 hello.2 hello.3 hello.4:
docker rm mustela-data
docker build -t="mustela/data" .
docker run --name mustela-data mustela/data
If I ls /home/mustela from the already-running container, the files aren't updated:
docker exec -it nginx ls /home/mustela
Result:
hello.1 hello.2
But if I run a new container, I can see the files:
docker run -it --name nginx2 --volumes-from mustela-data ubuntu ls /home/mustela
Result: hello.1 hello.2 hello.3 hello.4
