Docker Registry storage not updated

I am trying to deploy a Docker Registry with custom storage location. The container runs well but I see no file whatsoever at the specified location. Here is my docker-compose.yaml:
version: "3"
services:
registry:
image: registry:2.7.1
deploy:
replicas: 1
restart_policy:
condition: always
ports:
- "85:5000"
volumes:
- "D:/Personal/Docker/Registry/data:/var/lib/registry"
For volumes, I have tried:
"data:/var/lib/registry"
./data:/var/lib/registry
"D:/Personal/Docker/Registry/data:/var/lib/registry"
The yaml file lives at D:\Personal\Docker\Registry, and docker-compose up is run from there. I tried to push and pull an image to localhost:85; everything works well, so it must be storing the data somewhere.
Please tell me where I went wrong.

I solved it, but for my very specific case and with a different image, so I will just post it here in case someone like me needs it. This question still needs an answer for the official Docker image.
I had just realized the image is Linux-only, and it turned out I couldn't run it on Windows Server, so I switched to the stefanscherer/registry-windows image. I changed the volumes declarations to:
volumes:
  - ./data:c:\registry
  - ./certs:c:\certs
Both the storage and the certs work correctly. I am not sure how to fix it on Linux though, as I have never used Linux before.
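For anyone on a Linux host (or Docker Desktop running Linux containers), here is a minimal sketch of the same setup that should persist data, assuming the official image's documented storage path of /var/lib/registry:
version: "3"
services:
  registry:
    image: registry:2.7.1
    ports:
      - "85:5000"
    volumes:
      # Bind-mount a directory next to the compose file; pushed layers
      # should then appear under ./data/docker/registry/v2/ on the host
      - ./data:/var/lib/registry
The original D:/... bind mount only helps if the daemon can actually run the Linux image, which is why switching to a Windows-native image was the fix above.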

Related

How to set max_filesize for clamav in docker-compose.yml

I'm trying to set up a Docker container with ClamAV and am struggling to allow larger files to be scanned. I've set up my docker-compose.yml like this:
version: "3.3"
services:
clamav:
image: clamav/clamav:latest
environment:
CLAMD_CONF_MaxFileSize: 250M
CLAMD_CONF_MaxScanSize: 250M
restart: always
ports:
- "3310:3310"
but that doesn't seem to do it (I keep getting a Broken Pipe Error). I presume I'm just using the wrong variables, but I can't seem to find the right ones.
Can anyone point me in the right direction?
As far as I know, this is not possible with the official clamav/clamav:stable image, though it would be a great improvement to it.
We also wanted to use the official image, so our solution was to mount the /var/lib/clamav and /etc/clamav directories onto a persistent volume. Then we change /etc/clamav/clamd.conf after running the container and restart it once the configuration is in place.
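Here is a minimal sketch of that approach (the volume names are illustrative; MaxFileSize and MaxScanSize are standard clamd.conf directives):
version: "3.3"
services:
  clamav:
    image: clamav/clamav:stable
    restart: always
    ports:
      - "3310:3310"
    volumes:
      # Persist the signature database and the configuration
      - clamav-db:/var/lib/clamav
      - clamav-conf:/etc/clamav
volumes:
  clamav-db:
  clamav-conf:
After the first start, edit clamd.conf inside the clamav-conf volume (e.g. set MaxFileSize 250M and MaxScanSize 250M) and restart the container so clamd picks up the new limits.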

Rename official postgres image

I am using the official Postgres 12 image, which I'm pulling inside the docker-compose.yml. Everything is working fine.
services:
  db:
    container_name: db
    image: postgres:12
    volumes:
      - ...
    ports:
      - 5432:5432
    environment:
      - POSTGRES_USER=...
Now, when I run docker-compose up, I get this image.
My question is: is there a way to rename the image inside docker-compose.yml? I know there is a command for this, but I'd like to keep everything inside the file if possible.
Thanks!
In a Compose file, there's no direct way to run docker tag or any other command that modifies some existing resource.
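For reference, the command you would otherwise run outside of Compose is a plain retag (the target name here is purely illustrative):
docker tag postgres:12 my-project/postgres:12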
If you're trying to optionally point Compose at a local mirror of Docker Hub, you can take advantage of knowing the default repository is docker.io and use an optional environment variable:
image: ${REGISTRY:-docker.io}/postgres:latest
REGISTRY=docker-mirror.example.com docker-compose up
Another possible approach is to build a trivial image that doesn't actually extend the base postgres image at all:
build:
  context: .
  dockerfile: Dockerfile.postgres
image: local-images.example.com/my-project/postgres

# Dockerfile.postgres
FROM postgres:latest
# End of file
There's not really any benefit to doing this beyond the cosmetic appearance in the docker images output. Having it be clear that you're using a standard Docker Hub image could even be slightly preferable: its behavior is better understood than something you built locally, and if you have multiple projects running at once they can more obviously share the same single image.
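If you do go the build route, a quick way to confirm the renamed image exists (using the hypothetical name from the snippet above):
docker-compose build
docker images local-images.example.com/my-project/postgres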

Docker change location of named volumes

I have a problem that I just can't understand. I am using Docker to run certain containers, but I have problems with at least one volume, and I'd like to ask if anybody can give me a hint about what I am doing wrong. I am using Nifi-Ingestion as the example, but it affects even more container volumes.
First, let's talk about the versions I use:
Docker version 19.03.8, build afacb8b7f0
docker-compose version 1.27.4, build 40524192
Ubuntu 20.04.1 LTS
Now, let's show the volume in my working docker-compose file:
In my container, it is configured as follows:
volumes:
  - nifi-ingestion-conf:/opt/nifi/nifi-current/conf
At the bottom of my docker-compose file, it is defined as a normal named volume:
volumes:
  nifi-ingestion-conf:
This is a snippet from the docker-compose file that I'd like to get working.
In my container, it is configured in this case as follows (having my STORAGE_VOLUME_PATH defined as /mnt/storage/docker_data):
volumes:
  - ${STORAGE_VOLUME_PATH}/nifi-ingestion-conf:/opt/nifi/nifi-current/conf
At the bottom I guess there is something to do, but I don't know what. In this case it is the same as in the working docker-compose file:
volumes:
  nifi-ingestion-conf:
So, what's my problem?
I have two docker-compose files. One uses normal named volumes, and one uses volumes under my extra mount path. When I run the containers, the volumes seem to behave differently: files are written in the first setup, but not in the second. The mount paths do get created in the second version, so there is nothing wrong with my environment variables in the .env file.
Hint: /mnt/storage/docker_data is an NFS mount, but my machine has full privileges on that share.
Here is my fstab entry to mount that volume (maybe I have to set other options):
10.1.0.2:/docker/data /mnt/storage/docker_data nfs auto,rw
Bigger snippets
Here is a bigger snippet of the docker-compose file (I had to cut and remove confidential data; my problem is not that it does not work, it is only that the volume acts differently. Everything for this one volume is in the code.):
version: "3"
services:
nifi-ingestion:
image: my image on my personal repo
container_name: nifi-ingestion
ports:
- 0000
labels:
- app-specivic
volumes:
- ${STORAGE_VOLUME_PATH}/nifi-ingestion-conf:/opt/nifi/nifi-current/conf
#working: - nifi-ingestion-conf:/opt/nifi/nifi-current/conf
environment:
- app-specivic
networks:
- cnetwork
volumes:
nifi-ingestion-conf:
networks:
cnetwork:
external: false
ipam:
driver: default
config:
- subnet: 192.168.1.0/24
And here is the .env file (only the value we are using):
STORAGE_VOLUME_PATH=/mnt/storage/docker_data
If I understand your question correctly, you wonder why the following docker-compose snippet works for you:
version: "3"
services:
nifi-ingestion:
volumes:
- nifi-ingestion-conf:/opt/nifi/nifi-current/conf
volumes:
nifi-ingestion-conf:
and the following docker-compose snippet does not work for you:
version: "3"
services:
  nifi-ingestion:
    volumes:
      - ${STORAGE_VOLUME_PATH}/nifi-ingestion-conf:/opt/nifi/nifi-current/conf
What makes them different is how you use volumes: you need to differentiate between mounting host paths and mounting named volumes.
You can mount a host path as part of the definition for a single service, and there is no need to define it in the top-level volumes key.
But if you want to reuse a volume across multiple services, then define a named volume in the top-level volumes key.
Named volumes are managed by Docker: if you start a container with a volume that does not yet exist, Docker creates the volume for you.
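To see where a named volume actually lives on the host, you can inspect it. Note that Compose prefixes the volume name with the project name, so the exact name may differ (the project name here is a placeholder):
docker volume inspect myproject_nifi-ingestion-conf
The Mountpoint field in the output shows where the files end up, typically somewhere under /var/lib/docker/volumes/.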
I would also advise you to read this answer.
Update: you might also want to read about Docker NFS volumes.
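For example, here is a rough sketch of a named volume backed directly by the NFS share from the question; the address and export path are taken from the fstab entry above, and the mount options may need tuning:
volumes:
  nifi-ingestion-conf:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=10.1.0.2,rw"
      device: ":/docker/data/nifi-ingestion-conf"
With this, Docker mounts the NFS export itself instead of going through the host's fstab-mounted path.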

docker-compose on Windows 10: cannot find image?

I'm developing locally on a Windows 10 PC, and have Docker images installed on drive D.
Running the command 'docker images' shows...
When I run a 'docker-compose up' command I'm getting the following error...
Pulling eis-config (eis/eis-config:)...
ERROR: The image for the service you're trying to recreate has been removed. If you continue, volume data could be lost. Consider backing up your data before continuing.
Continue with the new image? [yN]
Any idea why this is happening? (Could it be that docker-compose is looking for the images on Docker Hub rather than locally?)
The 'docker-compose.yml' file is shown below...
version: "3.7"
services:
eis-config:
image: eis/eis-config
ports:
- "8001:8001"
eis-eureka:
image: eis/eis-eureka
ports:
- "8761:8761"
depends_on:
- eis-config
eis-zuul:
image: eis/eis-zuul
ports:
- "8080:8080"
depends_on:
- eis-eureka
gd-service:
image: eis/gd-service
ports:
- "8015:8015"
depends_on:
- eis-eureka
Run:
docker-compose kill
docker-compose down
docker-compose up
This should fix your issue; most likely you have an old container (running or not) that's causing the problem.
You are running eis/eis-config without an image tag, so latest is implicitly assumed by Docker. You don't have an eis/eis-config image with the latest tag, so either build your image with the latest tag, or use image: eis/eis-config:0.0.1-SNAPSHOT.
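For example, assuming the snapshot tag shown in your docker images output, you could retag the existing image so the bare reference resolves:
docker tag eis/eis-config:0.0.1-SNAPSHOT eis/eis-config:latest
After that, docker-compose up can find eis/eis-config (i.e. eis/eis-config:latest) locally instead of trying to pull it.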
It seems like you missed an entry for the eis/eis-config service in the yml file; check your yml file and regenerate the image for that service.
Where are you trying to run those images: locally on your machine, or on a remote server?
Take a look at this link: Error running containers with docker-compose, error pulling images from my private repo in dockerhub

Deploy Ansible project which include a docker-compose.yml

I would like to use Ansible to deploy one of my projects (let's call it project-to-deploy).
project-to-deploy can be run locally using a docker-compose.yml file which, among other things, mounts the following volumes inside the Docker container.
version: "2"
services:
database:
image: mysql:5.6
volumes:
- ./docker/mysql.init.d:/docker-entrypoint-initdb.d
messages:
image: private.repo/project-to-deploy:latest
Nothing more useful here. To run the project: docker-compose up.
I have created a docker image of the project (in which I copy all the files from the project to the newly created docker image), and uploaded it to private.repo/project-to-deploy:latest.
Now comes the Ansible part.
For the project to run, I need:
The docker image
A MySQL instance (see part of my docker-compose.yml below)
In my docker-compose.yml (above), it is quite easy to do so: I just create the 2 services (database and project-to-deploy) and link them to each other.
How can I perform such action in Ansible?
The first thing I did was to fetch the image:
- name: Docker - pull project image
  docker:
    image: "private.repo/project-to-deploy:latest"
    state: restarted
    pull: always
Then, how can I link the MySQL Docker image to this, knowing that the MySQL image needs files from project-to-deploy?
If you think of another way to do it, feel free to make suggestions!
A slight correction: the docker module is for running containers; in your example you are not just fetching the image, you're actually pulling it, creating a container, and running it.
I would typically accomplish this by using Ansible to template each container's config files with the needed IP addresses, ports, credentials, etc., providing them everything they need to communicate with each other.
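A rough sketch of that templating approach with the standard template module (the file names and variables here are illustrative):
- name: Template project configuration
  template:
    src: config.yml.j2
    dest: /etc/project-to-deploy/config.yml
  vars:
    database_host: database
    database_port: 3306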
Since your example only involves a few connections, you could set the links option in your Ansible task. You should only need to set it on the "messages" container side.
- name: Docker - start MySQL container
  docker:
    name: database
    image: "mysql:5.6"
    state: restarted
    volumes:
      - /path/to/docker/mysql.init.d:/docker-entrypoint-initdb.d
    pull: always

- name: Docker - start project container
  docker:
    name: messages
    image: "private.repo/project-to-deploy:latest"
    state: restarted
    pull: always
    links:
      - database
