How to create a running NiFi Docker image with a flow - docker

I launched an apache/nifi container and built and configured a flow.
I'd like to save that flow somewhere so that it can be loaded into a new Docker image running NiFi.
That way, a 'user' only has to do 'docker run ...' and an instance of NiFi will be launched with the flow loaded and started.
It's not clear to me which files (nar, xml, etc.) need to be made available in the image a user is to run.

If you have nothing custom, you can save the flow by copying flow.xml.gz from the conf directory.
If you also want to preserve the FlowFiles currently in the flow and their content, you should also save the flowfile repository and content repository.
If you have custom processors, you should save their NARs from the lib directory.
Everything should be present in the NiFi directory before starting it.
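For example, a minimal Dockerfile sketch could bake the flow into the image (the paths follow the official apache/nifi image layout; the NAR filename is a placeholder):
FROM apache/nifi:1.12.1
# Bake the saved flow into the image so it is loaded on startup
COPY --chown=nifi:nifi flow.xml.gz /opt/nifi/nifi-current/conf/flow.xml.gz
# Add any custom processor NARs
COPY --chown=nifi:nifi my-custom-processors.nar /opt/nifi/nifi-current/lib/
A user can then simply docker run the resulting image, and the flow will resume in whatever state (running or stopped) it was saved in.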

You could use the nifi-toolkit to deploy flows and process groups to your Apache NiFi instance without having to rely on the GUI.
https://nifi.apache.org/docs/nifi-docs/html/toolkit-guide.html
This setup requires you to have:
Apache NiFi
Apache NiFi-Registry
Here is a working example (provided that the hostname of your Apache NiFi Registry container is nifi-registry and that it listens on the default port 18080), based on an empty Apache NiFi instance and NiFi Registry. Tested on Apache NiFi 1.12.1.
First, you need to generate a JSON file of your flow through the NiFi Registry.
Add a Registry to your Apache NiFi:
/opt/nifi/nifi-toolkit-current/bin/cli.sh nifi create-reg-client -rcn registry -rcu http://nifi-registry:18080
Create a Process Group that will contain your flow. Right click on it and click on "Version" and "Start Version Control". This will save your flow inside the NiFi Registry. Work on your flow through the GUI and when you are ready, right click on your process group and commit your last changes. Now you will need to export the JSON of your flow from the registry.
/opt/nifi/nifi-toolkit-current/bin/cli.sh registry export-flow-version -u http://nifi-registry:18080 -f <flowid> -fv <flowversion> > <json_file>
Now that your JSON flow is ready, you are ready to deploy it on a fresh environment.
Create a bucket inside the registry. This will return the newly generated bucket id.
/opt/nifi/nifi-toolkit-current/bin/cli.sh registry create-bucket -u http://nifi-registry:18080 -bn <bucketname>
Use the previously generated bucket id to create a flow. This will return the newly generated flow id:
/opt/nifi/nifi-toolkit-current/bin/cli.sh registry create-flow -u http://nifi-registry:18080 -b <bucketid> -fn <flowname>
Import your flow (it must have been previously exported from the GUI -> right click, download flow, and be available on the Apache NiFi filesystem):
/opt/nifi/nifi-toolkit-current/bin/cli.sh registry import-flow-version -u http://nifi-registry:18080 -f <flowid> -i <json_file>
Deploy the flow as a process group. This will return the newly generated process group id.
/opt/nifi/nifi-toolkit-current/bin/cli.sh nifi pg-import -b <bucketid> -f <flowid> -fv <flowversion>
Enable the process group's controller services (if any):
/opt/nifi/nifi-toolkit-current/bin/cli.sh nifi pg-enable-services -pgid <processgroupid>
Start the processors of your process group (if any):
/opt/nifi/nifi-toolkit-current/bin/cli.sh nifi pg-start -pgid <processgroupid>
Please keep in mind that Apache NiFi should be up and running before executing these commands. If you are planning on embedding these instructions in the Dockerfile, some logic that waits for the service to be up should be implemented.
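As a rough sketch of such wait logic, a loop could poll the NiFi REST API until it responds (the port and endpoint assume a default unsecured instance):
until curl -sf http://localhost:8080/nifi-api/system-diagnostics > /dev/null; do
  echo "Waiting for NiFi to start..."
  sleep 5
done
Once the loop exits, the cli.sh commands above can be run safely.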
You might also take a look at this Python wrapper for the NiFi Toolkit:
https://github.com/Chaffelson/nipyapi
Lastly, Apache NiFi also provides some REST APIs that might help you:
https://nifi.apache.org/docs/nifi-docs/rest-api/index.html

Related

Handling secrets inside docker container without using docker swarm

One question: how do you handle secrets inside a Dockerfile without using Docker Swarm? Let's say you have a private repo on npm and restore packages from it using an .npmrc inside the Dockerfile by providing credentials. After the package restore, I am obviously deleting the .npmrc file from the container. The same goes for NuGet.config when restoring private repos inside the container. Currently, I am supplying these credentials as --build-arg while building the Dockerfile.
But a command like docker history --no-trunc will show the password in the log. Is there any decent way to handle this? Currently, I am not on Kubernetes, so I need to handle this in Docker itself.
One way I can think of is mounting /run/secrets/ and storing the credentials there, either in a text file containing the password or via an .env file. But then this .env file has to be part of the pipeline to complete the CI/CD process, which means it has to be part of source control. Is there any way to avoid this, or can something be done via the pipeline itself, or can some kind of encryption/decryption logic be applied here?
Thanks.
First, keep in mind that files deleted in one layer still exist in previous layers. So deleting files doesn't help either.
There are three ways that are secure:
Download all code in advance outside of the Docker build, where you have access to the secret, and then just COPY in the stuff you downloaded.
Use BuildKit, which is an experimental Docker feature that enables secrets in a secure way (https://docs.docker.com/develop/develop-images/build_enhancements/#new-docker-build-secret-information).
Serve secrets from a network server running locally (e.g. in another container). See here for detailed explanation of how to do so: https://pythonspeed.com/articles/docker-build-secrets/
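As a minimal sketch of the BuildKit approach for the .npmrc case from the question (the file names and the node base image are assumptions):
# syntax=docker/dockerfile:1
FROM node:14
WORKDIR /app
COPY package*.json ./
# The secret is mounted only for this RUN step and never written to an image layer
RUN --mount=type=secret,id=npmrc,target=/root/.npmrc npm ci
COPY . .
Build it with the secret supplied from the host:
DOCKER_BUILDKIT=1 docker build --secret id=npmrc,src=$HOME/.npmrc -t myapp .
Unlike --build-arg, the secret does not show up in docker history.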
Let me try to explain Docker secrets here.
Docker secrets work with Docker Swarm. For that you need to run:
$ docker swarm init --advertise-addr=$(hostname -i)
This makes the node a swarm manager. Now you can create your secret:
Create a file /db_pass and put your password in this file.
$ docker secret create db_pass /db_pass
This creates your secret. If you want to list the secrets you have created, run:
$ docker secret ls
Let's use the secret while running a service:
$ docker service create --name mysql-service \
  --secret source=db_pass,target=mysql_root_password \
  --secret source=db_pass,target=mysql_password \
  -e MYSQL_ROOT_PASSWORD_FILE="/run/secrets/mysql_root_password" \
  -e MYSQL_PASSWORD_FILE="/run/secrets/mysql_password" \
  -e MYSQL_USER="wordpress" -e MYSQL_DATABASE="wordpress" \
  mysql:latest
In the above command, /run/secrets/mysql_root_password and /run/secrets/mysql_password are file locations inside the container that hold the source file's (db_pass) data:
source=db_pass,target=mysql_root_password (creates the file /run/secrets/mysql_root_password inside the container with the db_pass value)
source=db_pass,target=mysql_password (creates the file /run/secrets/mysql_password inside the container with the db_pass value)
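To verify the secret file contents from inside a running task container, you could exec into it (the name filter matches the service's task containers):
$ docker exec $(docker ps -q -f name=mysql-service) cat /run/secrets/mysql_password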

Is it possible to update a Docker image at runtime after pushing it to Docker Hub/ACR/etc., the way the docker cp command works on localhost?

I have an Angular application and I have created a Docker image of it, which I have published to Azure Container Registry (ACR).
I want to pull the image from ACR, deploy it to Azure App Service, and change the images and CSS files in the Docker container at runtime.
I want to know if it is possible to update the images/CSS files at runtime, as we can do with the docker cp command on localhost.
I would suggest using CI/CD for this purpose.
Just create a webhook in ACR. Then, whenever the image gets updated, the Web App will automatically get "notified" and pull in the new change.
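For instance, a push webhook can be created with the Azure CLI (the registry name and target URI here are placeholders):
az acr webhook create --registry myregistry --name myhook --actions push --uri https://example.com/docker/hook
If you enable continuous deployment in the Web App's container settings, App Service exposes the webhook URL to use here.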

WSO2 API Manager Docker image needs paid subscription

I am planning to use WSO2 API Manager for a client, and I plan to use the API Manager Docker image for hosting it.
But it looks like to use the API Manager Docker image, I need to have a paid subscription once the trial period ends.
https://wso2.com/api-management/install/docker/get-started/ says:
"In order to use WSO2 product Docker images, you need an active WSO2 subscription."
Is that really the case?
Can't I have the image running on the client's premises without any subscription?
You can build it yourself using their official Dockerfiles, which are hosted on GitHub, and then push it to your own registry.
The Dockerfiles for other WSO2 products can be found under the same GitHub account.
The following steps describe how to build an image and run WSO2 API Manager, taken from this README.md file.
Check out this repository onto your local machine using the following Git command.
git clone https://github.com/wso2/docker-apim.git
The local copy of the dockerfiles/ubuntu/apim directory will be referred to as AM_DOCKERFILE_HOME from this point onwards.
Add WSO2 API Manager distribution and MySQL connector to <AM_DOCKERFILE_HOME>/files.
Download the WSO2 API Manager v2.6.0 distribution and extract it to <AM_DOCKERFILE_HOME>/files.
Download MySQL Connector/J and copy it to <AM_DOCKERFILE_HOME>/files.
Once all of these are in place, it should look as follows:
<AM_DOCKERFILE_HOME>/files/wso2am-2.6.0/
<AM_DOCKERFILE_HOME>/files/mysql-connector-java-<version>-bin.jar
Please refer to the WSO2 Update Manager documentation in order to obtain the latest bug fixes and updates for the product.
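As a sketch, the preparation steps could look like this in the shell (assuming the distribution zip and the connector jar were downloaded to the current directory):
unzip wso2am-2.6.0.zip -d <AM_DOCKERFILE_HOME>/files/
cp mysql-connector-java-<version>-bin.jar <AM_DOCKERFILE_HOME>/files/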
Build the Docker image.
Navigate to <AM_DOCKERFILE_HOME> directory.
Execute docker build command as shown below.
docker build -t wso2am:2.6.0 .
Running the Docker image.
docker run -it -p 9443:9443 wso2am:2.6.0
Here, only port 9443 (HTTPS servlet transport) has been mapped to a Docker host port.
You may map other container service ports, which have been exposed to Docker host ports, as desired.
Accessing management console.
To access the management console, use the docker host IP and port 9443.
https://<DOCKER_HOST>:9443/carbon
Here, <DOCKER_HOST> refers to the hostname or IP of the host machine on top of which the containers are spawned.
How to update configurations
Configurations would lie on the Docker host machine and they can be volume mounted to the container.
As an example, steps required to change the port offset using carbon.xml is as follows.
Stop the API Manager container if it's already running. In the WSO2 API Manager 2.6.0 product distribution, the carbon.xml configuration file can be found at <DISTRIBUTION_HOME>/repository/conf. Copy the file to a suitable location on the host machine, referred to as <SOURCE_CONFIGS>/carbon.xml, and change the offset value under ports to 1.
Grant read permission to other users for <SOURCE_CONFIGS>/carbon.xml:
chmod o+r <SOURCE_CONFIGS>/carbon.xml
Run the image by mounting the file to container as follows.
docker run \
-p 9444:9444 \
--volume <SOURCE_CONFIGS>/carbon.xml:<TARGET_CONFIGS>/carbon.xml \
wso2am:2.6.0
Here, <TARGET_CONFIGS> refers to the /home/wso2carbon/wso2am-2.6.0/repository/conf folder of the container.
The steps above are for Ubuntu; for other distributions, check the corresponding directory in the same repository and read the README.md file inside.
You can build the docker images yourself. Follow the instructions given at https://github.com/wso2/docker-apim/tree/master/dockerfiles/ubuntu/apim#how-to-build-an-image-and-run.
The caveat is that you will not be getting any bug fixes if you do not have a subscription.

OpenWhisk support for a custom registry

I need to run a Docker action in OpenWhisk. Inside the Docker container, I execute a Java program.
Now I pulled the Docker skeleton from OpenWhisk and installed Java on it.
I also put my Java program inside the container and replaced the exec.
I can create the action with:
wsk action create NAME --docker myDockerHub/repo:1 -i
This is not optimal since my code should not be on Docker Hub. Does OpenWhisk support using my local registry?
wsk action create ImportRegionJob --docker server.domain.domain:5443/import-region-job:v0.0.2 -i
error: Unable to create action 'ImportRegionJob': The request content was malformed:
image prefix not is not valid (code qXB0Tu65zOfayHCqVgrYJ33RMewTtph9)
Run 'wsk --help' for usage.
I know you can provide a .zip file to a docker action when creating it, but that does not work because the default image used does not have Java installed.
I achieved this for a distributed OpenWhisk environment. The docker actions are hosted in GitLab, built by GitLab CI and deployed to a custom container registry in their respective GitLab repositories. Pulls from the local registry are significantly faster than pulls from docker hub.
In OpenWhisk, create an action using the full path including the registry URL:
wsk action create NAME --docker YourRegistry:port/namespace/image[:tag]
On invocation, the pull command for the action will be carried out inside the invoker containers on the compute nodes. The following table shows an example setup of invoker hosts in the first column (configured in openwhisk/ansible/environments/distributed/hosts, section [invokers]) and the respective invoker container name running on that host in the second column. The invoker container in the second column should show up when doing a docker ps on the hostname from the first column:
invoker-host-0 invoker0
invoker-host-1 invoker1
...
invoker-host-N invokerN
for I in $(seq 0 N); do ssh invoker-host-$I docker ps | grep invoker$I; done
Now, you can do a docker login for all invokers in one command.
for I in $(seq 0 N); do ssh invoker-host-$I docker exec invoker$I docker login YourRegistry:port -u username -p TokenOrPassword; done
As a prerequisite, inside all invoker containers, I had to add the root certificates for the registry, run update-ca-certificates, and restart the docker daemon.
An improvement might be to do this already in the invoker container image that is built when deploying OpenWhisk (before running invoker.yml, which is imported in openwhisk.yml).
Docker images can be specified when deploying actions from zip files. This allows you to use the existing Java runtime image, which has Java installed, with your zip file, removing the need for a custom image.
wsk action update action_name action_files.zip --docker openwhisk/java8action

How to run container in a remote docker host with Jenkins

I have two servers:
Server A: Build server with Jenkins and Docker installed.
Server B: Production server with Docker installed.
I want to build a Docker image in Server A, and then run the corresponding container in Server B. The question is then:
What's the recommended way of running a container in Server B from Server A, once Jenkins is done with the docker build? Do I have to push the image to Docker hub to pull it in Server B, or can I somehow transfer the image directly?
I'm really not looking for specific Jenkins plugins or stuff, but rather, from a security and architecture standpoint, what's the best approach to accomplish this?
I've read a ton of posts and SO answers about this and have come to realize that there are plenty of ways to do it, but I'm still unsure what's the ultimate, most common way to do this. I've seen these alternatives:
Using docker-machine
Using Docker Restful Remote API
Using plain ssh: root@server.b "docker run ..."
Using Docker Swarm (I'm super noob so I'm still unsure if this is even an option for my use case)
Edit:
I run Servers A and B in Digital Ocean.
A Docker image can be saved to a regular tar archive:
docker image save -o <FILE> <IMAGE>
Docs here: https://docs.docker.com/engine/reference/commandline/image_save/
Then scp this tar archive to another host, and run docker load to load the image:
docker image load -i <FILE>
Docs here: https://docs.docker.com/engine/reference/commandline/image_load/
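The save and load steps can also be combined into a single pipeline over SSH, avoiding the intermediate file (the user and host names are placeholders):
docker image save <IMAGE> | gzip | ssh user@server-b 'gunzip | docker image load'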
This save-scp-load method is rarely used, though. The common approach is to set up a private Docker registry behind your firewall and push images to or pull them from that private registry. This doc describes how to deploy a container registry. Alternatively, you can choose a registry service provided by a third party, such as GitLab's container registry.
When using Docker registries, you only push/pull the layers which have changed.
You can use the Docker REST API. The Jenkins HTTP Request plugin can be used to make HTTP requests. Alternatively, you can run Docker commands directly against a remote Docker host by setting the DOCKER_HOST environment variable. To export the environment variable to the current shell:
export DOCKER_HOST="tcp://your-remote-server.org:2375"
Please be aware of the security concerns when allowing TCP traffic. More info.
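If SSH access to Server B is already set up, a safer alternative to plain TCP is the ssh:// transport, supported since Docker 18.09 (the user and host are placeholders):
export DOCKER_HOST="ssh://user@server-b"
docker run -d my-image:latest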
Another method is to use SSH Agent Plugin in Jenkins.
