docker registry:2.0 local storage - docker-registry

I can't get docker registry:2.0 to save files to local storage.
Tried
docker run -p 5000:5000 -v "/home/azureuser:/tmp/registry-dev" registry:2.0
docker run -e "REGISTRY_STORAGE=filesystem" -e "REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY=/tmp/registry" -p 5000:5000 -v /var/lib/docker/volumes:/tmp/registry registry:2.0
and many other alternatives.
It doesn't allow me to push anything.
If I just run docker run -p 5000:5000 registry:2.0
I can push images, but it uses my RAM; when that gets down to about 100MB it doesn't let me push any more files.
Any help or pointing in the right direction will be highly appreciated. I don't know what and where to read any more.
Best Regards,
Costi

I found that if I run the container with --privileged it writes to the local disk, so it's a permission issue.
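For what it's worth, a sketch of how a working setup might look (the host directory /home/azureuser/registry-data is just an example and has to exist and be writable by the container): the -v target and REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY need to point at the same path inside the container.
docker run -d -p 5000:5000 \
  -e REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY=/var/lib/registry \
  -v /home/azureuser/registry-data:/var/lib/registry \
  registry:2.0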

Related

403 with nginx for file located in binded volume with docker

I am trying to use my nginx server in Docker, but I cannot access the files/folders if they belong to my volume. The goal of my test is to keep a volume shared between the files on my computer and the container.
I have searched for 3 days and tried a lot of solutions with no effect (useradd, chmod, chown, www-data, etc.).
I don't understand how to use nginx, a volume, and Docker together.
The only solution that works for me right now is to copy my volume's folder into another folder, so that I can chown it and use nginx. I can't find an official solution on the web, which surprises me, because using Docker with a volume bound to its container seems like a basic daily-work setup.
If someone has managed to implement it, I would be very happy if you could share your code. I need to understand what I am missing.
FYI I am working with a VM.
Thanks !
I think you are not passing the right path in the volume option. There are a few ways to do it: you can pass the full path, or you can use $(pwd) if you are on a Linux machine. Let's say you are in /home/my-user/code/nginx/ and your HTML files are in the html folder.
You can use:
$ docker run --name my-nginx -v /home/my-user/code/nginx/html/:/usr/share/nginx/html:ro -p 8080:80 -d nginx
or
$ docker run --name my-nginx -v ~/code/nginx/html/:/usr/share/nginx/html:ro -p 8080:80 -d nginx
or
$ docker run --name my-nginx -v $(pwd)/html/:/usr/share/nginx/html:ro -p 8080:80 -d nginx
I've created an index.html file inside the html folder, after the docker run, I was able to open it:
$ echo "hello world" >> html/index.html
$ docker run --name my-nginx -v $(pwd)/html/:/usr/share/nginx/html:ro -p 8080:80 -d nginx
$ curl localhost:8080
hello world
You can also create a Dockerfile, but then you need to use the COPY instruction. Here is a simple working example, though you should improve it, for instance by pinning an image version.
Dockerfile:
FROM nginx
COPY ./html /usr/share/nginx/html
...
$ docker build -t my-nginx:0.0.1 .
$ docker run -d -p 8080:80 my-nginx:0.0.1
$ curl localhost:8080
hello world
You can also use docker-compose. By the way, those examples are just to give you some idea of how it works.
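For completeness, a minimal docker-compose sketch (my assumption, not part of the answer above: the html folder sits next to the docker-compose.yml, and the service name web is arbitrary):
version: "3"
services:
  web:
    image: nginx
    ports:
      - "8080:80"
    volumes:
      - ./html:/usr/share/nginx/html:ro
Then start it and check it the same way:
$ docker-compose up -d
$ curl localhost:8080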

Docker container not showing files in the shared volume

I hope someone can help me to locate the issue. I created a SQL Server 2019 container using this code:
docker run --name SQL19 -p 1433:1433 -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=zzzzz258*" -v /SqlDockerVol/userdatabase:/userdatabase -v /SqlDockerVol/sqlbackups:/sqlbackups -d mcr.microsoft.com/mssql/server:2019-latest
The problem I am having is the container not showing the files I saved in the /sqlbackups folder.
I am using Ubuntu 20.04.
I logged into the SQL19 container like this:
docker exec -it SQL19 /bin/bash
then issued ls sqlbackups to confirm.
Do I need to set any permissions on the host folder? I am not familiar with Linux.
Thanks
I suspect you need to pass an absolute path to your folder:
/SqlDockerVol/userdatabase
Is that really the full absolute path?
If it is relative, change it to:
$(pwd)/SqlDockerVol/userdatabase
Please check this Docker shared folder with Linux
And technically you need something like:
docker run --name SQL19 -p 1433:1433 -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=zzzzz258*" -v $(pwd)/SqlDockerVol/userdatabase:/userdatabase -v $(pwd)/SqlDockerVol/sqlbackups:/sqlbackups -d mcr.microsoft.com/mssql/server:2019-latest
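It can also help to check what actually got mounted; a quick sketch (container name SQL19 as above, host paths assumed to be the ones from your command):
# make sure the host directories exist before the container is created
mkdir -p /SqlDockerVol/userdatabase /SqlDockerVol/sqlbackups
# show which host paths the container really bind-mounted
docker inspect -f '{{ json .Mounts }}' SQL19
# list the backups folder from inside the container
docker exec -it SQL19 ls -la /sqlbackups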

How do I transfer a volume's data to another volume on my local host in docker?

I did
docker run -v /jenkins_home:/var/jenkins_home jenkins/jenkins:alpine
on Windows (with docker installed as a linux container).
However, after configuring Jenkins in that container, I now want to transfer the data in that /jenkins_home volume into a C:\jenkins_home folder on my local Windows host machine (or another machine).
Any way I can get the data from the /jenkins_home to c:/jenkins_home?
I know I should have run
docker run -v c:/jenkins_home:/var/jenkins_home jenkins/jenkins:alpine
from the start, but mistakes were made, and I am wondering how to fix this so it ends up like the command above.
Tried running
docker run -it -p 8080:8080 -p 50000:50000 --volumes-from jenkins_old -v c:/jenkins_home:/var/jenkins_home --name jenkins_new jenkins/jenkins:alpine
but it doesn't transfer the data over to the new c:/jenkins_home folder.
docker run -v /jenkins_home:/var/jenkins_home jenkins/jenkins:alpine
I can't get the data to transfer over from the /jenkins_home volume to the c:/jenkins_home folder.
I don't know where the /jenkins_home would map to on windows, but you could try this:
docker run -it --rm -v /jenkins_home:/from -v c:/jenkins_home:/to alpine cp -a /from/. /to/
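Once the copy has finished, a follow-up sketch (assuming Docker Desktop is allowed to mount the C: drive) would be to start the new container directly against the copied folder:
docker run -d -p 8080:8080 -p 50000:50000 \
  -v c:/jenkins_home:/var/jenkins_home \
  --name jenkins_new jenkins/jenkins:alpine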

Problems running ghost with docker in development

I'm having a hard time running a container from the Ghost image in development (after docker pull ghost).
Using:
docker run --name some-ghost -p 4000:2368 -v /Users/Documents/ghost-blog/content/themes/:/var/lib/ghost/content/themes/ -e NODE_ENV=development ghost
seems to start the container in development but when I navigate to the page in browser I get
localhost didn’t send any data. ERR_EMPTY_RESPONSE
I've tried looking this up, but it seems development was the default environment until recently. I'm not really sure how to proceed.
According to acburdine's answer, you need to specify further environment variables:
docker run --name some-ghost -p 4000:2368 -v /Users/Documents/ghost-blog/content/themes/:/var/lib/ghost/content/themes/ -e NODE_ENV=development -e server__host=0.0.0.0 -e url="http://localhost:2368" ghost
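Note that with -p 4000:2368 the blog is reached on the host at http://localhost:4000, so the url value arguably has to match that port; a variant (my assumption, not part of the quoted answer):
docker run --name some-ghost -p 4000:2368 -v /Users/Documents/ghost-blog/content/themes/:/var/lib/ghost/content/themes/ -e NODE_ENV=development -e server__host=0.0.0.0 -e url="http://localhost:4000" ghost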

how to save a docker redis container

I'm having trouble creating an image of a docker redis container with the data in the redis database. At the moment I'm doing this:
docker pull redis
docker run --name my-redis -p 6379:6379 -d redis
redis-cli
127.0.0.1:6379> set hello world
OK
127.0.0.1:6379> save
OK
127.0.0.1:6379> exit
docker stop my-redis
docker commit my-redis redis_with_data
docker run --name my-redis2 -p 6379:6379 -d redis_with_data
redis-cli
127.0.0.1:6379> keys *
(empty list or set)
I'm obviously not understanding something pretty basic here. Doesn't docker commit create a new image from an existing container?
Okay, I've been doing some digging. The default redis image on hub.docker.com uses a data volume which is then mounted at /data in the container. In order to share this volume between containers, you have to start a new container with the following arguments:
docker run -d --volumes-from <name-of-container-you-want-the-data-from> \
--name <new-container-name> -p 6379:6379 redis
Note that the order of the arguments is important, otherwise docker run will fail silently.
docker volume ls
will tell you which data volumes Docker has already created on your computer. I haven't yet found a way to give these volumes a friendly name rather than a long random string.
I also haven't yet found a way to mount a data volume directly; instead I just use --volumes-from.
Okay, I now have it working, but it's kludgey.
With
docker volume ls
docker volume inspect <id of docker volume>
you can find the path of the docker volume on the local file-system.
You can then mount this in a new container as follows:
docker run -d -v /var/lib/docker/volumes/<some incredibly long string>/_data:/data \
--name my-redis2 -p 6379:6379 redis
This is obviously not the way you're meant to do it. I'll carry on digging.
I put everything I've discovered up to now in a blog post: my blog post on medium.com
Maybe it will be useful for somebody.
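For what it's worth, a named volume avoids the long random string; a sketch (the volume name redis-data is arbitrary):
docker volume create redis-data
docker run -d --name my-redis -p 6379:6379 -v redis-data:/data redis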
Data in Docker is not persistent: when you recreate the container, your data will be gone. To prevent this, you have to share a directory on the host machine with your container (a bind mount). When the container is recreated, it will pick the data up from that directory on the host.
You can read more about it in the Docker docs: https://docs.docker.com/engine/tutorials/dockervolumes/#data-volumes
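A minimal sketch of that, assuming /srv/redis-data is a directory on the host (the official redis image keeps its working data under /data):
docker run -d --name my-redis -p 6379:6379 -v /srv/redis-data:/data redis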
From the redis container docs:
Run redis-server
docker run -d --name redis -p 6379:6379 dockerfile/redis
Run redis-server with persistent data directory. (creates dump.rdb)
docker run -d -p 6379:6379 -v <data-dir>:/data --name redis dockerfile/redis
Run redis-server with persistent data directory and password.
docker run -d -p 6379:6379 -v <data-dir>:/data --name redis dockerfile/redis redis-server /etc/redis/redis.conf --requirepass <password>
Source:
https://github.com/dockerfile/redis
Using a data volume and sharing the RDB file manually is not ugly; that is actually what data volumes are designed for: separating data from the container.
But if you really need/want to save the data into the image and share it that way, you can just change the redis working directory from the /data volume to somewhere else:
Option 1 is changing --dir when starting the redis container:
docker run -d redis --dir /tmp
Then you can follow your steps to create a new image. Note that only /tmp can be used with this method due to permission issues.
Option 2 is creating a new image with changed WORKDIR:
FROM redis
RUN mkdir /opt/redis && chown redis:redis /opt/redis
WORKDIR /opt/redis
Then run docker build -t redis-new-image . and use that image to do your job.
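Putting option 2 together, a rough sketch of the whole flow (image and container names are placeholders):
docker build -t redis-new-image .
docker run -d --name my-redis -p 6379:6379 redis-new-image
redis-cli set hello world
redis-cli save
docker commit my-redis redis_with_data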
