I'm new to apache/nifi and ran it with:
docker run --name nifi -p 8081:8080 -d apache/nifi:latest
and then dragged some processors onto the canvas.
Then I tried to save them as a new image using:
docker commit [container ID] apache/nifi:latest
But the changes are not there when I run the newly committed image.
Please advise me if I have made any mistake. Thanks in advance.
Update
At first I launched nifi with:
docker run --name nifi -p 8081:8080 -d apache/nifi:latest
This is the group I added in the web UI:
I want to save the container, so I committed it with the following command:
docker commit 1e7 apache/nifi:latest2
We can see two NiFi images here:
Then I ran:
docker run --name newnifi -p 8080:8080 -d apache/nifi:latest2
to check if the changes were saved in the new image. But the web UI is empty and the group is not there.
This comes from a discussion on the official Slack channel for Apache NiFi.
It looks like the flow definitions are stored in flow.xml.gz in the conf directory.
The apache/nifi Docker image defines this folder as a volume.
Directories defined as volumes are not committed to images created from existing containers. https://docs.docker.com/engine/reference/commandline/commit/#extended-description.
That's why the processors and groups are not showing up in the new image.
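You can check for yourself which paths an image declares as volumes by inspecting it (a quick check; the exact output shape may vary by Docker version):
docker image inspect apache/nifi:latest --format '{{ .Config.Volumes }}'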
These are the alternatives to consider:
Copying the flow.xml.gz file to your new container (see the sketch after this list).
Exporting a template of your flow (this is deprecated but still usable).
Using NiFi Registry to store your flow definition, then importing it from there.
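For the first option, a minimal sketch, assuming the containers are named nifi and newnifi as in the commands above, and the image's default conf location:
docker cp nifi:/opt/nifi/nifi-current/conf/flow.xml.gz ./flow.xml.gz
docker cp ./flow.xml.gz newnifi:/opt/nifi/nifi-current/conf/flow.xml.gz
# restart so NiFi reloads the flow on startup
docker restart newnifi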
docker commit is for creating a new image from a container's changes, i.e. when you update the configuration or install new software and want to capture that as a new template image. If you just want to keep your work in the same container, simply issue docker stop NAME_OF_CONTAINER and, when you would like to resume, docker start NAME_OF_CONTAINER.
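In other words, if all you need is to keep your flows between sessions, reuse the same container instead of committing a new image:
docker stop nifi
# ... later ...
docker start nifi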
Related
I am trying a basic Docker test in a GCP compute instance. I pulled a tomcat image from the official repo, then ran a command to run the container. The command is:
docker run -te --rm -d -p 80:8080 tomcat
It created a container for me with the below ID.
3f8ce49393c708f4be4d3d5c956436e000eee6ba7ba08cba48ddf37786104a37
If I do docker ps, I can see the container up and running.
However, the Tomcat admin console does not open. The reason is that the tomcat image is trying to create its config files under /usr/local, but that is a read-only file system, so the config files are not created.
Is there a way to ask Docker to create the files in a different location? Or, is there any other way to handle it?
Thanks in advance.
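One workaround sketch (an assumption on my part, not a verified fix for this environment): the official tomcat image keeps its installation under /usr/local/tomcat, and Docker pre-populates an empty named volume with the image's contents on first use, so mounting a named volume over that directory gives Tomcat a writable location:
# tomcat-data is an illustrative volume name
docker run -d --rm -p 80:8080 -v tomcat-data:/usr/local/tomcat tomcat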
I am trying to run the NiFi Docker image and put files into it.
I created a Docker volume as instructed in the comment section, and mapped it into the container.
docker create --name nifi -v nifi-volume:/opt/nifi/nifi-current/conf -p 9090:8080 apache/nifi:latest
docker start nifi
This works and I can access the web GUI. But when I try to create a GetFile processor and point it at nifi-volume, it cannot find that folder:
Directory does not exist
I have created nifi-volume using docker volume create --name nifi-volume
You have mounted:
nifi-volume:/opt/nifi/nifi-current/conf
That means nifi-volume is actually /opt/nifi/nifi-current/conf inside the container; there is no directory literally called nifi-volume. Either reference that path directly, or create a folder called nifi-volume inside /opt/nifi/nifi-current/conf:
mkdir /opt/nifi/nifi-current/conf/nifi-volume
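A minimal way to do that from the host, assuming the container is named nifi as above:
docker exec nifi mkdir -p /opt/nifi/nifi-current/conf/nifi-volume
Then point the GetFile processor's Input Directory property at /opt/nifi/nifi-current/conf/nifi-volume (the in-container path), not at the volume name.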
I am using jenkinssci/docker to set up some build automation on a server for a Laravel project.
Using the command docker run -p 8080:8080 -p 50000:50000 -v jenkins_home:/var/jenkins_home jenkins/jenkins:lts, everything boots up fine; I create the admin login, create the project, and link all of that together.
Yesterday I downloaded libraries into the container, using docker exec -u 0 -it <container_name_or_id> /bin/bash to get into the container as root and install things like PHP, Composer, and Node.js/npm. After this was done, I built the project and got a successful build.
Today I started the Docker container using the same command as above and built the project, but the build fails. The container no longer has any of the downloaded libraries (PHP, Composer, Node).
It was my understanding that by including jenkins_home:/var/jenkins_home in the command to start the Docker container, the data would persist. Is this wrong?
So my question is: how can I make these libraries persist in the container?
I just started learning about these tools yesterday, so I'm not entirely sure I am even doing this the best way. All I need is to be able to log into the Jenkins server and build the project/ship the code to our staging/live servers.
Side note: I am not currently using a Dockerfile. As mentioned here, I am able to download tools in the container as root.
Your understanding is correct: you should use a persistent volume, otherwise you will lose your data every time the container is recreated.
I understand that you are running the container on a single machine with Docker. You need to put a full or relative path for the local folder in the volume definition to be sure the data persists; try:
docker run -p 8080:8080 -p 50000:50000 -v ./jenkins_home:/var/jenkins_home jenkins/jenkins:lts
Note the ./ in front of the local folder.
Here is the docker-compose.yml I have been using for a long time:
version: '2'
services:
  jenkins:
    image: jenkins/jenkins:lts
    volumes:
      - ./jenkins:/var/jenkins_home
    ports:
      - 80:8080
      - 50000:50000
It is basically the same, but in YAML format.
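Assuming Docker Compose is installed, the equivalent of the docker run command above is then simply:
docker-compose up -d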
I have created a docker image of the working environment that I use for my project.
Now I am running it using:
$ docker run -it -p 80:80 -v ~/api:/api <Image ID> bash
I do this because I don't want to develop on the command line, and this way I can have my project in the api volume and run it from inside the container too.
Now, when I commit the container to share the latest development with someone, it doesn't pack the api volume.
Is there any way I can commit the shared volume along with the container?
Or is there any better way than the one I am using (a shared volume) to develop from the host and continuously have the changes reflected inside Docker?
One way to go is the following:
Dockerfile:
FROM something
...
COPY ./api/ /api/
...
Then build:
docker build . -t myapi
Then run:
docker run -it -p 80:80 -v ~/api:/api myapi bash
At this point you have the myapi image in its build-time state (what you copied with COPY), and at runtime the container has /api overridden by the directory binding.
Then, to share your image with someone, just build again, and you will get a new and updated myapi ready to be shared.
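To actually hand the rebuilt image over, you can push it to a registry or export it as a tarball; the registry name below is a placeholder:
docker tag myapi yourregistry/myapi:latest
docker push yourregistry/myapi:latest
# or, without a registry:
docker save myapi -o myapi.tar
# the other person then runs: docker load -i myapi.tar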
In Docker I have installed Jenkins successfully. When I create a new job and would like to execute an sh file from my workspace, what is the best way to add a file to my workspace with Docker? I started my container with this:
docker run --name myjenkins -p 8080:8080 -p 50000:50000 -v /var/jenkins_home jenkins
You can copy a file from your file system into the container with a simple command from your terminal:
docker cp [OPTIONS] SRC_PATH|- CONTAINER:DEST_PATH
https://docs.docker.com/engine/reference/commandline/cp/
Example:
docker cp /yourpath/yourfile <containerId>:/var/jenkins_home
It depends a bit on the planned lifecycle of your Jenkins container. If it is just used temporarily and it does no harm if the data is gone, docker cp as NickGnd suggested will do the trick.
But since the working data of Jenkins, like job configs, system configs and workspaces, only lives inside the container, all of it will be gone once the container is removed. So if you plan to run a longer-lived Jenkins environment, you might want to persist the data outside of the container so it survives recreating the container, launching new container versions, and so on. This can be done with the option --volume /path/on/host:/path/in/container, or its short form -v, on docker run.
There is also the option --volumes-from, which you can use to keep the data in one "data container" and mount it into your Jenkins container.
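A minimal sketch of that data-container pattern, with illustrative container names and the same image as above:
# container that only holds the volume
docker create --name jenkins-data -v /var/jenkins_home jenkins
# run Jenkins with the data container's volumes mounted
docker run --name myjenkins -p 8080:8080 -p 50000:50000 --volumes-from jenkins-data jenkins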
For further information on this, please have a look at the Docker volumes documentation.