I want to create a NiFi container and pass it a template/flow that is loaded automatically when the container is created (without human intervention).
I couldn't find any volumes/environment variables related to this.
I tried the following (suggested by ChatGPT):
docker run -d -p 8443:8443 \
-v /path/to/templates:/templates \
-e TEMPLATE_FILE_PATH=/templates/template.xml \
--name nifi apache/nifi
and
docker run -d -p 8443:8443 \
-e NIFI_INIT_FLOW_FILE=/Users/l1/Desktop/kaleidoo/devops/files/git_aliases/flow.json \
-v /Users/l1/Desktop/kaleidoo/docker/test-envs/flow.json:/flow.json:ro \
--name nifi apache/nifi
Neither of them worked, and I couldn't find any mention of NIFI_INIT_FLOW_FILE or TEMPLATE_FILE_PATH in the documentation.
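One approach that may work, assuming a recent apache/nifi image (which keeps its flow as flow.json.gz under /opt/nifi/nifi-current/conf) and a flow.json.gz taken from another NiFi instance's conf directory, is to mount that file into the conf directory so NiFi loads it at startup. This is a sketch, not a documented image option:
# Pre-seed the flow by mounting a gzipped flow file into NiFi's conf directory
docker run -d -p 8443:8443 \
  -v /path/to/flow.json.gz:/opt/nifi/nifi-current/conf/flow.json.gz \
  --name nifi apache/nifi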
I started docker with this command:
docker run -it --shm-size=4g \
-e PBF_URL=https://download.geofabrik.de/north-america/us-latest.osm.pbf \
-e REPLICATION_URL=https://download.geofabrik.de/north-america/us-updates/ \
-e IMPORT_US_POSTCODES=true \
-e IMPORT_TIGER_ADDRESSES=true \
-e IMPORT_WIKIPEDIA=/nominatim/extras/wikimedia-importance.sql.gz \
-p 8080:8080 \
-v /osm-maps/extras:/nominatim/extras \
--name nominatim \
mediagis/nominatim:4.0
It takes quite some time to load the data. When I run docker images I see this:
REPOSITORY TAG IMAGE ID CREATED SIZE
mediagis/nominatim 4.0 3097bc96440b 3 weeks ago 875MB
For some reason I woke up this morning and the box I had this running on was off. I really hope I don't have to reload all the data again. Is there a way to start this back up without reloading the data?
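Assuming the stopped container still exists (it was not started with --rm or removed with docker rm), the imported data is still in its writable layer, so you should be able to restart it without re-importing. A minimal sketch:
# Restart the existing, already-imported container
docker start nominatim
# Follow the logs to confirm it comes back up
docker logs -f nominatim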
I am setting up Quay in a VM with a CentOS distro. This is the guide I am following: quay deploy guide
I set up this env variable:
export QUAY=QUAY
and made a dir of the same name in my home directory:
mkdir QUAY
Once I installed Podman, I tried to run the first container with the command below:
$ sudo podman run -d --rm --name postgresql-quay \
-e POSTGRESQL_USER=quayuser \
-e POSTGRESQL_PASSWORD=quaypass \
-e POSTGRESQL_DATABASE=quay \
-e POSTGRESQL_ADMIN_PASSWORD=adminpass \
-p 5432:5432 \
-v $QUAY/postgres-quay:/var/lib/pgsql/data:Z \
registry.redhat.io/rhel8/postgresql-10:1
and I am getting the following error:
sudo podman run -d --rm --name postgresql-quay -e POSTGRESQL_USER=quayuser -e POSTGRESQL_PASSWORD=quaypass -e POSTGRESQL_DATABASE=quay -e POSTGRESQL_ADMIN_PASSWORD=adminpass -p 5432:5432 -v QUAY/postgres-quay:/var/lib/pgsql/data:Z registry.redhat.io/rhel8/postgresql-10:1
Error: error creating named volume "QUAY/postgres-quay": error running volume create option: names must match [a-zA-Z0-9][a-zA-Z0-9_.-]*: invalid argument
The bind mount needs to be specified as an absolute path or a relative path that starts with ./ or ../.
In other words, instead of
-v QUAY/postgres-quay:/var/lib/pgsql/data:Z
use
-v ./QUAY/postgres-quay:/var/lib/pgsql/data:Z
(I replaced $QUAY with its value QUAY)
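Alternatively, keep the variable but point it at an absolute path so that $QUAY expands to a valid bind-mount source (the exact directory below is an assumption, not taken from the guide):
export QUAY=$HOME/QUAY
mkdir -p $QUAY/postgres-quay
sudo podman run -d --rm --name postgresql-quay \
  -e POSTGRESQL_USER=quayuser \
  -e POSTGRESQL_PASSWORD=quaypass \
  -e POSTGRESQL_DATABASE=quay \
  -e POSTGRESQL_ADMIN_PASSWORD=adminpass \
  -p 5432:5432 \
  -v $QUAY/postgres-quay:/var/lib/pgsql/data:Z \
  registry.redhat.io/rhel8/postgresql-10:1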
I spin up QuestDB in a Docker container as suggested in the docs:
docker run -p 9000:9000 \
-p 9009:9009 \
-p 8812:8812 \
-p 9003:9003 \
questdb/questdb
How can I override the default number of worker pool threads for the container from 2 to 8?
Any property from the configuration list can be overridden by passing it to the container as an environment variable with a QDB_ prefix and with _ instead of . in the variable name. For the shared worker count it would be:
docker run -p 9000:9000 \
-p 9009:9009 \
-p 8812:8812 \
-p 9003:9003 \
-e QDB_SHARED_WORKER_COUNT=8 \
questdb/questdb
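For reference, the property being overridden here is shared.worker.count; if you used a mounted config directory instead, the equivalent line in conf/server.conf would be:
shared.worker.count=8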
I have a Docker container based on Linux on a PC running Windows. I have pulled and installed GitLab CI/CD. Everything is running and I can log in to GitLab, but every time I restart the Docker container it is as if I lose all my data. I understand it overrides the previous data saved inside the container, but I need a way to persist that data. From my understanding the only way is to point the volumes of the GitLab image to directories saved on my PC somehow. How do I do this, or something similar, so I won't lose my data when Docker restarts?
The command I ran to instantiate the GitLab image is the following:
docker run -d --hostname gitlab.wproject.gr \
-p 4433:443 -p 80:80 -p 2223:22 \
--name gitlab-server1 \
--restart always \
--volume /storage/gitlab/config:/etc/gitlab \
--volume /storage/gitlab/logs:/var/log/gitlab \
--volume /storage/gitlab/data:/var/opt/gitlab \
gitlab/gitlab-ce:latest
Try using relative paths for your volumes instead of absolute paths. With Docker Desktop on Windows, volume management doesn't always behave the same way as on Linux.
Test with:
mkdir gitlab
docker run -d --hostname gitlab.wproject.gr \
-p 4433:443 -p 80:80 -p 2223:22 \
--name gitlab-server1 \
--restart always \
--volume ./gitlab/config:/etc/gitlab \
--volume ./gitlab/logs:/var/log/gitlab \
--volume ./gitlab/data:/var/opt/gitlab \
gitlab/gitlab-ce:latest
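If bind mounts to host directories keep misbehaving under Docker Desktop, another option (a sketch, not part of the original answer) is to let Docker manage named volumes instead:
docker volume create gitlab-config
docker volume create gitlab-logs
docker volume create gitlab-data
docker run -d --hostname gitlab.wproject.gr \
  -p 4433:443 -p 80:80 -p 2223:22 \
  --name gitlab-server1 \
  --restart always \
  --volume gitlab-config:/etc/gitlab \
  --volume gitlab-logs:/var/log/gitlab \
  --volume gitlab-data:/var/opt/gitlab \
  gitlab/gitlab-ce:latest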
I want to set up a private registry behind an nginx server. To do that I configured nginx with basic auth and started a Docker container like this:
docker run -d \
-e STANDALONE=true \
-e INDEX_ENDPOINT=https://docker.example.com \
-e SETTINGS_FLAVOR=local \
-e STORAGE_PATH=/home/example/registry \
-p 5000:5000 \
registry
By doing that, I can log in to my registry and push/pull images. But if I stop the container and start it again, everything is lost. I would have expected my registry to be saved in /home/example/registry but this is not the case. Can someone tell me what I missed?
I would have expected my registry to be saved in /home/example/registry but this is not the case
It is the case, only the /home/example/registry directory is on the Docker container's file system, not the Docker host's file system.
If you run your container mounting one of your Docker host directories as a volume in the container, it will achieve what you want:
docker run -d \
-e STANDALONE=true \
-e INDEX_ENDPOINT=https://docker.example.com \
-e SETTINGS_FLAVOR=local \
-e STORAGE_PATH=/registry \
-p 5000:5000 \
-v /home/example/registry:/registry \
registry
Just make sure that /home/example/registry exists on the Docker host side.
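To check that the data now persists, stop and remove the container, then run it again with the same bind mount; images pushed earlier should still be there because they live on the host (a sketch, container ID taken from docker ps):
docker stop <container-id>
docker rm <container-id>
docker run -d \
  -e STANDALONE=true \
  -e INDEX_ENDPOINT=https://docker.example.com \
  -e SETTINGS_FLAVOR=local \
  -e STORAGE_PATH=/registry \
  -p 5000:5000 \
  -v /home/example/registry:/registry \
  registry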