How to store my docker registry in the file system - docker

I want to set up a private registry behind an nginx server. To do that I configured nginx with basic auth and started a docker container like this:
docker run -d \
-e STANDALONE=true \
-e INDEX_ENDPOINT=https://docker.example.com \
-e SETTINGS_FLAVOR=local \
-e STORAGE_PATH=/home/example/registry \
-p 5000:5000 \
registry
By doing that, I can log in to my registry and push/pull images... But if I stop the container and start it again, everything is lost. I would have expected my registry to be saved in /home/example/registry, but this is not the case. Can someone tell me what I missed?

I would have expected my registry to be saved in /home/example/registry but this is not the case
It is the case, except that the /home/example/registry directory is on the Docker container file system, not the Docker host file system.
If you run your container with one of your Docker host directories mounted as a volume in the container, it will achieve what you want:
docker run -d \
-e STANDALONE=true \
-e INDEX_ENDPOINT=https://docker.example.com \
-e SETTINGS_FLAVOR=local \
-e STORAGE_PATH=/registry \
-p 5000:5000 \
-v /home/example/registry:/registry \
registry
Just make sure that /home/example/registry exists on the Docker host side.
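To check that the data really ends up on the host, a quick sanity test could look like this (a sketch only: the busybox tag and the placeholder container ID are illustrative, and it assumes you can already docker login to docker.example.com through nginx):
docker pull busybox
docker tag busybox docker.example.com/busybox
docker push docker.example.com/busybox
# restart the registry container, then confirm the pushed data survived on the host
docker restart <registry-container-id>
ls /home/example/registry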

Related

Starting a NiFi container with my template/flow automatically loaded

I want to create a NiFi container and pass it a template/flow to be loaded automatically when the container is created (without human intervention).
I couldn't find any volumes/environment variables related to this.
I tried using (suggested by ChatGPT):
docker run -d -p 8443:8443 \
-v /path/to/templates:/templates \
-e TEMPLATE_FILE_PATH=/templates/template.xml \
--name nifi apache/nifi
and
docker run -d -p 8443:8443 \
-e NIFI_INIT_FLOW_FILE=/Users/l1/Desktop/kaleidoo/devops/files/git_aliases/flow.json \
-v /Users/l1/Desktop/kaleidoo/docker/test-envs/flow.json:/flow.json:ro \
--name nifi apache/nifi
None of them worked, and I couldn't find any mention of NIFI_INIT_FLOW_FILE or TEMPLATE_FILE_PATH in the documentation.
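A note on the question: since those two variables don't appear in the apache/nifi documentation, they are most likely not supported by the image at all. One workaround people use, sketched below under the assumption that the image keeps its configuration in /opt/nifi/nifi-current/conf and that you already have a gzipped flow file, is to mount a pre-built flow where NiFi looks for it on startup:
docker run -d -p 8443:8443 \
-v /path/to/flow.json.gz:/opt/nifi/nifi-current/conf/flow.json.gz \
--name nifi apache/nifi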

Kafka FQDN in containers environment

I'm running Kafka in a container and trying to create a new pgsql container on the same host.
The pgsql container keeps exiting, and the log indicates:
ERROR: Failed to connect to Kafka at kafka.domain, check the docker run -e KAFKA_FQDN= value
The Kafka container is run with the following attributes:
docker run -d \
--name=app_kafka \
-e KAFKA_FQDN=localhost \
-v /var/app/kafka:/data/kafka \
-p 2181:2181 -p 9092:9092 \
app/kafka
and the pgsql container with:
docker run -d --name app_psql \
-h app-psql \
-e KAFKA_FQDN=kafka.domain \
--add-host kafka.domain:172.17.0.1 \
-e MEM=16 \
--shm-size=512m \
-v /var/app/config:/config \
-v /var/app/postgres/main:/data/main \
-v /var/app/postgres/ts:/data/ts \
-p 5432:5432 -p 9005:9005 -p 8080:8080 \
app/postgres
If I use the docker0 IP address, the log indicates no route to host; if I use the Kafka container's IP, I get connection refused.
I guess I'm missing something basic here that needs to be adapted to my environment, but I'm lacking the knowledge.
I will appreciate any assistance here.
You need to edit the container's hosts file. You can do that with a small entrypoint script in your Dockerfile, like this:
COPY domain.sh .
ENTRYPOINT ["sh", "domain.sh"]
And domain.sh:
#!/bin/sh
echo "Environment container kafka is: kafka.domain"
echo "PGSQL container is: pgsql.domain"
# append fixed entries so both names resolve inside this container
echo "127.0.0.1 kafka.domain" >> /etc/hosts
echo "127.0.0.1 pgsql.domain" >> /etc/hosts
# hand off to whatever command was passed to the container
exec "$@"
Feel free to change the IP or domain to your needs.
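Hard-coding 127.0.0.1 only helps if both services end up in the same network namespace. An alternative worth mentioning (not part of the answer above; the network name is arbitrary and the image names are taken from the question) is a user-defined bridge network, where containers resolve each other by container name:
docker network create app_net
docker run -d --name=app_kafka --network app_net \
-p 2181:2181 -p 9092:9092 app/kafka
docker run -d --name app_psql --network app_net \
-e KAFKA_FQDN=app_kafka \
-p 5432:5432 app/postgres
# inside app_psql, the name "app_kafka" now resolves to the Kafka container's IP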

Multiple Teamcity agents with Docker

Ok,
I can somewhat sense my question has nothing to do with TeamCity but rather with the subtle issues surrounding Docker. I am trying to fire off one TeamCity agent with:
docker run -it -d -e SERVER_URL="192.168.100.15:8111" \
--restart always \
--name="teamcity-agent_1" \
--mount src=docker_volumes_1,dst=/var/lib/docker,type=volume \
--mount src=$(pwd)/config,dst=/etc/docker,type=bind \
--privileged -e DOCKER_IN_DOCKER=start \
jetbrains/teamcity-agent
Works like a charm. Then I try to fire off a second agent (up to three agents are free). This used to work perfectly fine but has recently stopped...
docker run -it -d -e SERVER_URL="192.168.100.15:8111" \
--restart always \
--name="teamcity-agent_2" \
--mount src=docker_volumes_2,dst=/var/lib/docker,type=volume \
--mount src=$(pwd)/config,dst=/etc/docker,type=bind \
--privileged -e DOCKER_IN_DOCKER=start \
jetbrains/teamcity-agent
In this second container Docker won't start; e.g. docker images results in:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
service docker start
service docker status
These confirm that I have successfully started Docker, but going back to docker images we get the same problem as above. service docker status now tells me that Docker is not running!
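Nothing in the question shows why the inner daemon dies. One way to get the actual error (a generic sketch, assuming dockerd is present inside the agent image, which it should be since DOCKER_IN_DOCKER=start is used) is to run it in the foreground inside the second container and read its output:
docker exec -it teamcity-agent_2 dockerd
# the first lines of output usually state why the daemon refuses to start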

Docker Swarm - equivalent docker commands

As far as I know, the Docker Swarm API is compatible with the official Docker API.
What are the equivalent Docker Swarm commands for the following docker commands:
docker ps -a
docker run --net=host --privileged=true \
-e DEVICE=$VETH_NAME -e SWARM_MANAGER_ADDR=$SWARM_MANAGER_ADDR -e SWARM_MANAGER_PORT=$SWARM_MANAGER_PORT \
-v conf_files:/etc/sur \
-v conf_files:/etc/sur/rules \
-v _log:/var/log/sur \
-d sur
The standalone swarm simply has a different host/port for you to connect with the client (client being the docker cli). It relays the commands as appropriate from the manager to each node in the swarm. The easiest way to do that is to set $DOCKER_HOST to point to the port the manager is listening to:
# start your manager, the end of the command is your discovery method
docker run -d -P --restart=always --name swarm-manager swarm manager ...
# send all future commands to the manager
export DOCKER_HOST=$(docker port swarm-manager 2375)
# run any docker ps, docker run, etc commands on the Swarm
docker ps
docker run --net=host --privileged=true \
-e DEVICE=$VETH_NAME \
-e SWARM_MANAGER_ADDR=$SWARM_MANAGER_ADDR \
-e SWARM_MANAGER_PORT=$SWARM_MANAGER_PORT \
-v conf_files:/etc/sur \
-v conf_files:/etc/sur/rules \
-v _log:/var/log/sur \
-d sur
# return to running commands on the local docker host
unset DOCKER_HOST
If you needed those SWARM_MANAGER_ADDR/PORT values defined, those can come out of the docker port command. Otherwise, I'm not familiar enough with the "sur" image to know which values you need to pass there.
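For example, the published address and port could be split out of the docker port output like this (a sketch; it assumes a single line of output in the usual ip:port form):
ADDR_PORT=$(docker port swarm-manager 2375)   # e.g. "0.0.0.0:32768"
export SWARM_MANAGER_ADDR=${ADDR_PORT%:*}
export SWARM_MANAGER_PORT=${ADDR_PORT##*:}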

Docker can't expose mesos port 5050

I have a mesos container running; the container has the port mapping 0.0.0.0:32772->5050/tcp.
If I run docker exec CONTAINER_ID curl 0.0.0.0:5050, I can see the thing I want. However, I can't access HOST_IP:32772. I've tried running nginx in the same container, and I can connect to the nginx server from the host, so I think it's a Mesos configuration problem? How can I fix it?
If I understand correctly, you're running your Mesos Master(s) from a Docker container. You should use host networking instead of bridge networking.
These settings work, at least for me:
docker run \
--name=mesos_master \
--net=host \
-e MESOS_IP={YOUR_HOST_IP} \
-e MESOS_HOSTNAME={YOUR_HOST_IP} \
-e MESOS_CLUSTER=mesos-cluster \
-e MESOS_ZK=zk://{YOUR_ZK_SERVERS}/mesos \
-e MESOS_LOG_DIR=/var/log/mesos/master \
-e MESOS_WORK_DIR=/var/lib/mesos/master \
-e MESOS_QUORUM=2 \
mesosphere/mesos-master:0.27.1-2.0.226.ubuntu1404
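With host networking in place, a quick check is to hit the master's state endpoint directly on the host (same {YOUR_HOST_IP} placeholder as above):
curl http://{YOUR_HOST_IP}:5050/state.json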
