nifi docker image cannot identify docker-volume

I am trying to run the NiFi Docker image and put files into it.
I created a Docker volume as instructed in the comment section and mapped it into the container.
docker create --name nifi -v nifi-volume:/opt/nifi/nifi-current/conf -p 9090:8080 apache/nifi:latest
docker start nifi
This works and I can access the web GUI. But when I create a GetFile processor and point it at nifi-volume, it cannot find that folder:
Directory does not exist
I have created nifi-volume using docker volume create --name nifi-volume

You have mounted:
nifi-volume:/opt/nifi/nifi-current/conf
That means nifi-volume is actually /opt/nifi/nifi-current/conf inside the container, so point GetFile at that path, or create a folder inside
/opt/nifi/nifi-current/conf called nifi-volume:
mkdir /opt/nifi/nifi-current/conf/nifi-volume
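A minimal sketch of that second option, assuming the container is named nifi as in the question:
docker exec -it nifi mkdir -p /opt/nifi/nifi-current/conf/nifi-volume
Then, in the GetFile processor, set the Input Directory property to /opt/nifi/nifi-current/conf/nifi-volume. Files placed in that directory (for example with docker cp) will be picked up by GetFile.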

Related

Docker different volume path for specific container

By default docker uses /var/lib/docker/volumes/ for any started container.
Is there any way to launch a new container and have it consume all the required disk on a different specified path on the host?
Basically have the root volume different.
For a specific container only, the simplest way I think is to use Docker volumes: create a Docker volume and then attach it to the container. The process running in the container then writes into that volume, so it uses the disk you would like to use.
More information is on the following page:
https://docs.docker.com/storage/volumes/
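Since the question is about keeping the data on a different host path, one hedged sketch is a local volume backed by a bind mount; the /mnt/bigdisk path and the volume name are just examples:
mkdir -p /mnt/bigdisk/container-data
docker volume create --driver local --opt type=none --opt o=bind --opt device=/mnt/bigdisk/container-data big-disk-volume
docker run -v big-disk-volume:/data -it ubuntu bash
Everything the container writes under /data then lands on /mnt/bigdisk/container-data instead of /var/lib/docker/volumes/.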
You can define the volume path:
docker run -it --rm -v $PWD:/MyVolume ubuntu bash
This command will mount the current folder you execute it from.
In the container you'll find your files under /MyVolume.
jens@DESKTOP:~$ docker run -it --rm -v $PWD:/MyVolume ubuntu bash
root@71969d68099e:/# cd /MyVolume/
root@71969d68099e:/MyVolume# ls
But you can define any path:
docker run -it --rm -v /home/someuser/somevolumepath:/MyVolume ubuntu bash
Almost the same is available in docker compose.
ports:
- "80:8080"
- "443:443"
volumes:
- $HOME/userhome/https_cert:/etc/nginx/certs
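For context, here is how that fragment could sit in a complete compose file; the service name web and the nginx image are assumptions, not part of the original answer:
services:
  web:
    image: nginx
    ports:
      - "80:8080"
      - "443:443"
    volumes:
      - $HOME/userhome/https_cert:/etc/nginx/certs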
Jens

apache/nifi docker: how to commit changes to new container

I'm new to apache/nifi and run it with:
docker run --name nifi -p 8081:8080 -d apache/nifi:latest
and then dragged in some processors.
Then I tried to save them as a new image using:
docker commit [container ID] apache/nifi:latest
But it does not save the changes when I run the new committed image.
Please advise me if I have made any mistake. Thanks in advance.
Update
At first I launched nifi with:
docker run --name nifi -p 8081:8080 -d apache/nifi:latest
This is the group I added in the web UI:
I want to save the container so I committed with following command:
docker commit 1e7 apache/nifi:latest2
We can see two nifi images here:
Then I run:
docker run --name newnifi -p 8080:8080 -d apache/nifi:latest2
to check if the changes were saved in the new image. But the web UI is empty and the group is not there.
This comes from a discussion on the official Slack channel for Apache NiFi.
It looks like the flow definitions are stored in the flow.xml.gz file in the conf directory.
The apache/nifi docker image defines this folder as volume.
Directories defined as volumes are not committed to images created from existing containers. https://docs.docker.com/engine/reference/commandline/commit/#extended-description.
That's why the processors and groups are not showing up in the new image.
These are the alternatives to consider:
Copying the flow.xml.gz file to your new container (see the sketch after this list).
Exporting a template of your flow (this is deprecated but still usable).
Using NiFi Registry to store your flow definition, then importing it from there.
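A minimal sketch of the first alternative, assuming the default conf path /opt/nifi/nifi-current/conf and the container names nifi and newnifi used above:
docker cp nifi:/opt/nifi/nifi-current/conf/flow.xml.gz ./flow.xml.gz
docker cp ./flow.xml.gz newnifi:/opt/nifi/nifi-current/conf/flow.xml.gz
docker restart newnifi
NiFi reads flow.xml.gz on startup, so the restart is what makes the copied flow show up in the new container's UI.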
docker commit is for creating a new image from a container's changes, i.e. for when you update or add configuration or install new software and want a new template image. If you only want to pause and resume your work, simply issue docker stop NAME_OF_CONTAINER and, when you would like to restart it, docker start NAME_OF_CONTAINER.

Inject configuration into volume before Docker container starts

I am looking for a way to create a Docker volume and put some data on it just before a specific container is started, since that container needs the configuration on startup.
I do not want to modify the container. I would like to use a vanilla container straight from the Docker Hub.
Any ideas?
Update
I did not mention that all of this has to be done in a compose file. If I were doing it manually, I could simply wait for the configuration-injecting container to finish.
Absolutely! Just create your volume beforehand, attach it to any container (a base OS like Ubuntu would work great), add your data, and you're good to go!
Create the volume:
docker volume create test_volume
Attach it to an instance where you can add data:
docker run --rm -it --name ubuntu_1 -v test_volume:/app ubuntu /bin/sh
Add some data:
Do this inside the container you entered with the previous command.
touch /app/my_file
Exit the container:
exit
Attach the volume to your new container:
Of course, replace ubuntu with your real image name.
docker run --rm -it --name ubuntu_2 -v test_volume:/app ubuntu /bin/sh
Verify the data is there:
~> ls app/
my_file
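The update asks for the same thing done from a compose file. One hedged sketch, assuming the configuration can be copied from a host folder (the service names, the ./seed folder and the ubuntu helper image are all examples, not a fixed recipe), is a short-lived init service that fills the volume before the main service starts:
services:
  seed-config:
    image: ubuntu
    volumes:
      - test_volume:/app
      - ./seed:/seed:ro        # hypothetical host folder holding the config
    command: sh -c "cp -r /seed/. /app/"
  app:
    image: ubuntu              # stand-in for your vanilla image from Docker Hub
    volumes:
      - test_volume:/app
    depends_on:
      seed-config:
        condition: service_completed_successfully
volumes:
  test_volume:
The depends_on condition service_completed_successfully needs a reasonably recent Compose version; on older versions you would have to order the startup yourself.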

How to commit docker container with shared volume content

I have created a docker image of the working environment that I use for my project.
Now I am running the docker using
$ docker run -it -p 80:80 -v ~/api:/api <Image ID> bash
I do this because I don't want to develop on the command line; this way I have my project in the api volume and can run it from inside the container too.
Now, when I commit the container to share the latest development with someone, it doesn't pack the api volume.
Is there any way I can commit the shared volume along with the container?
Or is there a better way than the one I am using (a shared volume) to develop from the host and have changes continuously reflected inside Docker?
A way to go is the following:
Dockerfile:
FROM something
...
COPY ./api /api
...
Then build:
docker build . -t myapi
Then run:
docker run -it -p 80:80 -v ~/api:/api myapi bash
At this point you have the myapi image with the initial state (from when you copied with COPY), and at runtime the container has /api overridden by the directory binding.
Then, to share your image with someone, just build again; you will get a new, updated myapi ready to be shared.
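One way to actually hand the rebuilt image over, sketched with standard Docker commands (the tar file name and the registry tag are placeholders):
docker build . -t myapi
docker save myapi -o myapi.tar        # send this file to the other person
docker load -i myapi.tar              # run on the receiving machine
Alternatively, push it to a registry: docker tag myapi registry.example.com/myapi && docker push registry.example.com/myapi, and let the other side docker pull it.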

How to run a docker container linked to a previously created volume

I have a named volume with stuff in it.
I would like to provide this volume the same way I provide a path: docker run -v /host:/path.in.docker.container works for paths, and I'd like to do the same with a volume I manually created and filled.
I know about --volumes-from, but how do I first connect the volume to the empty container?
You can create a volume with docker volume create (see the documentation), then mount it with the --volume option of docker run, as in docker run -v volumename:/data -it my_image.
