Mount container folders to hosts - docker

I have three containers implemented as follows:
Each container is a Django project that has a folder called images that holds the files (gray rectangles).
I want to access the images of each container in the following format through the website address:
http://example.com/storage/<container>/images/<file name>
What have I tried?
I created a storage folder on the host, then a separate folder for each container, and finally mounted each of these folders into its container. But the images are not accessible from the website.
/storage/
/users/images/...
/company/images/...
/financial/images/...
Can anyone help?
UPDATED
# create the volume
docker volume create users
# mount it (the container-side path must be absolute)
docker run -v users:/storage/users/images user-image
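For comparison, here is a bind-mount sketch of the layout described above; the service/image names and the in-container path /app/images are assumptions, and note that a named volume such as users does not by itself appear under /storage on the host:

```yaml
services:
  users:
    image: users-image                       # assumption: one image per Django project
    volumes:
      - /storage/users/images:/app/images    # host path : container media path
  company:
    image: company-image
    volumes:
      - /storage/company/images:/app/images
  financial:
    image: financial-image
    volumes:
      - /storage/financial/images:/app/images
```

With bind mounts like these, whatever each container writes to its images folder shows up directly under /storage on the host.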

I'm using MinIO with Nginx on Docker.
Here are the steps I followed to configure MinIO.
Initial Config 🦜
Step 1: Create minio user
Create the minio user:
sudo useradd minio
You then add a password for the minio user by using the passwd command:
sudo passwd minio
Step 2: Create shared folder
Create the MinIO directory and change its owner to the minio user:
sudo mkdir -p /usr/local/share/minio
sudo chown -R minio:minio /usr/local/share/minio
Docker 🐳
Step 3: Create docker container:
Create docker container with:
docker-compose up -d --build
This command creates your S3 container. Check it with the docker ps command.
Nginx 🔥
Step 4: Create subdomain
Open the /etc/hosts file and add your subdomain:
127.0.0.1 localhost
127.0.0.1 s3.localhost # You can rename `s3` with desired name.
Then edit /etc/nginx/sites-enabled/default, setting the server_name of the new server block to s3.localhost:
...
# add this block at the end of the `default` file:
server {
    listen 80;
    listen [::]:80;
    server_name s3.localhost;
    index index.html;
    location / {
        proxy_pass http://127.0.0.1:9001/;
    }
}
Then reload nginx:
sudo service nginx reload
Run MinIO 🏃🏽‍♂️
Go to your browser and open s3.localhost.
The username and password are in the .env file. Log in and create your buckets. 🌟
Check out my repo.
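The docker-compose.yml itself isn't shown above; a minimal sketch of what it might contain (the image tag, ports, and env var names are assumptions based on stock MinIO images, with credentials pulled from the .env file):

```yaml
version: "3"
services:
  minio:
    image: minio/minio
    command: server /data --console-address ":9001"
    ports:
      - "9000:9000"   # S3 API
      - "9001:9001"   # web console (what nginx proxies to)
    environment:
      MINIO_ROOT_USER: ${MINIO_ROOT_USER}
      MINIO_ROOT_PASSWORD: ${MINIO_ROOT_PASSWORD}
    volumes:
      - /usr/local/share/minio:/data
```

Note the console port 9001 matches the proxy_pass target in the nginx block above.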

Related

Add shared volume from Jupyterhub container TO notebook container

I'm running https://github.com/jupyterhub/jupyterhub-deploy-docker on a VM. I correctly set up everything, but now I need the spawned notebook to contain a shared folder with some data that I have on the VM.
I edited the docker-compose file in order to have the "shared" folder initially on the VM also present on the JupyterHub container, adding the last line in this snippet:
volumes:
  # Bind Docker socket on the host so we can connect to the daemon from
  # within the container
  - "/var/run/docker.sock:/var/run/docker.sock:rw"
  # Bind Docker volume on host for JupyterHub database and cookie secrets
  - "data:${DATA_VOLUME_CONTAINER}"
  # Add a "shared" folder on the VM to the JupyterHub container
  - "./shared:/shared"
But then I don't know how to link that folder to the notebook. I tried with
c.DockerSpawner.volumes = { 'jupyterhub-user-{username}': notebook_dir, '/shared': {"bind": '/home/jovyan/work/shared', "mode": "ro"}}
inside the jupyterhub_config.py, but nothing happens.
Any suggestions?
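One thing worth checking, as a hedged sketch rather than a confirmed fix: DockerSpawner hands the volume source to the Docker daemon on the VM, so the key must be a path that exists on the VM itself, not the /shared path inside the Hub container (/home/ubuntu/shared below is a hypothetical VM path):

```python
# jupyterhub_config.py (sketch)
c.DockerSpawner.volumes = {
    'jupyterhub-user-{username}': notebook_dir,
    # the source must be a host (VM) path, because the spawner talks to
    # the host daemon through the mounted docker.sock
    '/home/ubuntu/shared': {'bind': '/home/jovyan/work/shared', 'mode': 'ro'},
}
```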

Permission for volumes in docker-compose

I want to create Docker containers with volumes and a custom group, but I ran into a permission problem inside the container. All the files have, for example, the 'custom-group' group and work fine, but the documents folder has the root group by default. I think this is due to the volumes. How can I give the documents folder the 'custom-group' group? My code is below:
volumes:
  - /base/documents:/app/documents:rw
The uid/gid inside of the container is typically the same as the uid/gid outside of the container, on the host (user namespaces are off by default and wouldn't solve this problem, in fact they would make it worse). Note that this is uid/gid, and not user name and group name, since containers can have their own /etc/passwd and /etc/group files.
You need to run your container with the uid/gid matching that of the files on your host. That is done with the user section of a compose file, or -u option on the docker run command line, e.g.:
docker run -u "$(id -u):$(id -g)" -v /base/documents:/app/documents:rw ...
or in the compose file:
user: "1000:1000"
If your application must run as root, then there are a lot fewer options to handle this. My own preference is to dynamically handle the uid/gid from an entrypoint that starts up as root, fixes the uid/gid inside the container to match the host (looking at the volume owner), and then drops from root to the container user for running the application. For more details on how that's done, you can see my base image repo, including the nginx example and fix-perms script.
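A minimal sketch of the detection step from that entrypoint approach, assuming GNU coreutils stat; a real entrypoint would follow this with usermod/groupmod and drop privileges via su-exec or gosu before exec'ing the app:

```shell
#!/bin/sh
# Read the owner of the mounted volume (defaults to the current
# directory here so the sketch can run anywhere).
VOLUME="${VOLUME:-.}"
uid=$(stat -c %u "$VOLUME")   # numeric uid of the bind-mounted files
gid=$(stat -c %g "$VOLUME")   # numeric gid
echo "app should run as $uid:$gid"
# In a real entrypoint (running as root) you would then do roughly:
#   usermod  -u "$uid" appuser
#   groupmod -g "$gid" appuser
#   exec su-exec appuser "$@"
```

The point is that the container adapts itself to the files, rather than forcing the host files to change owner.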
Use the root user in your docker-compose to get full permissions.
Example:
node-app:
  container_name: node-app
  image: node
  user: root
  volumes:
    - ./:/home/node/app
    - ./node_modules:/home/node/app/node_modules
    - ./.env.docker:/home/node/app/.env
NOTE: user: root gives you full permissions on your volume.

traefik permissions 777 for acme.json are too open, please use 600

I get this when I try to run Traefik with HTTPS. The problem is that I mount the directory on my Win7 machine but I can't chmod the file.
The mount is working but the file permissions are off.
It looks like this:
volumes:
  - d:/docker/traefikcompose/acme/acme.json:/etc/traefik/acme/acme.json:rw
traefik | time="2018-09-04T12:57:11Z" level=error msg="Error starting provider *acme.Provider: unable to get ACME account : permissions 777 for /etc/traefik/acme/acme.json are too open, please use 600"
If I remove the acme.json file I get this:
ERROR: for traefik Cannot start service traefik: b'OCI runtime create failed: container_linux.go:348: starting container process caused "process_linux.go:402: container init caused \"rootfs_linux.go:58: mounting \\\"/d/docker/traefikcompose/acme/acme.json\\\" to rootfs \\\"/mnt/sda1/var/lib/docker/aufs/mnt/c84d8644252848bde8f0322bafba3d206513ceb8479eb95aeee0b4cafd4a7251\\\" at \\\"/mnt/sda1/var/lib/docker/aufs/mnt/c84d8644252848bde8f0322bafba3d206513ceb8479eb95aeee0b4cafd4a7251/etc/traefik/acme/acme.json\\\" caused \\\"not a directory\\\"\"": unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type'
I did finally find the solution, thanks to Cooshal's kind help:
we have to ssh into the VirtualBox machine and create the file there, and then point to it from the docker-compose.yml. In this case I did it like this:
docker-machine ssh default
touch /var/acme.json
chmod 600 /var/acme.json
Then in my docker-compose:
volumes:
  - /var/:/var/acme.json
Finally in traefik.toml:
[acme]
storage = "acme.json"
In addition to the above answer, to automate the creation of the acme.json file and assign the required permissions, create a Dockerfile and call it in your docker-compose.yml:
FROM traefik:2.2
RUN touch /acme.json \
&& chmod 600 /acme.json
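Calling it from docker-compose.yml then means building from that Dockerfile instead of pulling the stock image; the ./traefik build path here is an assumption:

```yaml
services:
  traefik:
    build: ./traefik   # directory containing the Dockerfile above
```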
I solved this problem with a named docker volume:
docker-compose.yml
(only showing the relevant parts of the file)
services:
  traefik:
    environment:
      - TRAEFIK_CERTIFICATESRESOLVERS_LE_ACME_STORAGE=/acme/acme.json
    volumes:
      - acme:/acme

volumes:
  acme:
This just solved it for me:
1. Have WSL2 installed in Windows 10.
2. Use PowerShell and navigate to the directory where your acme.json file is.
3. Type wsl; this will open the same location, but now from WSL2.
4. Type chmod 600 acme.json.
Done!
I had the same problem as you: I wanted to have the acme.json file outside the container/volume, that is, on the host FS. That way backups are easy, since my tests would exceed the Let's Encrypt / ACME quota quite fast at times.
Docker Windows
Turns out that on Docker Windows you get this permission inside the traefik container:
-rwxrwxrwx 1 root root 0 Dec 22 15:21 acme.json
Docker Linux (ubuntu 22.04)
If Traefik creates the file on the host side, using something like:
docker run -v ./acme:/acme ... traefik
On Linux docker the container side looks different:
-rw------- 1 root root 15.7K Dec 22 15:14 acme.json
But on the host I also have this:
-rw------- 1 root root 15.7K Dec 22 15:14 acme.json
Which means that my normal user can't see/backup or modify that file.
I think there is currently no sufficient support for maintaining this file on the host FS side.
Recommendation
Store this file inside a docker volume and access it using 'docker cp':
Backup:
docker container cp traefik:/acme/acme.json .
Restore:
docker container cp acme.json traefik:/acme/
docker exec -it traefik chmod 600 /acme/acme.json
docker container restart traefik
This can be solved using a Dockerfile / entrypoint.sh and works like this:
Dockerfile
FROM traefik:v2.9.4
COPY entrypoint.sh /
ENTRYPOINT [ "/entrypoint.sh" ]
CMD ["traefik"]
entrypoint.sh
#! /bin/sh
set -e

echo "Setting acme.json permissions 0600"
touch /acme/acme.json
chmod 600 /acme/acme.json
chown root:root /acme
chown root:root /acme/acme.json

# first arg is `-f` or `--some-option`
if [ "${1#-}" != "$1" ]; then
    set -- traefik "$@"
fi

# if our command is a valid Traefik subcommand, let's invoke it through Traefik instead
# (this allows for "docker run traefik version", etc)
if traefik "$1" --help >/dev/null 2>&1
then
    set -- traefik "$@"
else
    echo "= '$1' is not a Traefik command: assuming shell execution." 1>&2
fi

exec "$@"
In the docker-compose.yaml I had:
traefik:
  # image: traefik:v2.9.4
  build: traefik/
So a docker compose build && docker compose up -d updated the file permissions according to the script in the entrypoint.sh
Note: It is important to do the updates of the /acme/acme.json file from the entrypoint.sh as the volumes are mounted then already. This is not the case when only using a Dockerfile.
Note: I'm using docker compose, but plain docker also supports this, with a different syntax for the commands.
Summary
I think this is also too much maintenance burden. In the Docker community we should come up with a volume system that can set owners/modes on directories for the container while leaving the files on the host with whatever owner/mode they already have:
volumes:
  - "file:acme.json:/acme.json:root:root:0600"
Also, if that file does not exist on the host, it should just be created. Linux Docker does create it on the host, while Docker Windows fails to start on the docker compose up -d command.

how to pass a hostname as env var with rancher server

I am using Docker 1.12 and Rancher server 1.5.9. I am trying to create a stack in Rancher to deploy and orchestrate my app. My issue is that I need to pass, as an env var, the hostname of the host where the container will be running.
Since I have only one image that will be used to create one kind of container on several hosts (let's say 2 for the tests), I can't pass it like HOSTNAME=myhostname. The value needs to be a variable that gets set from the Docker host.
Does anyone know how to do that with the rancher server UI?
Does anyone know how rancher retrieve the hostname when adding a custom host?
Can we use the entry point or CMD to do that?
Having an /etc/hosts on the machine that prioritizes the desired name over localhost helped in my case. Obviously, also have an /etc/hostname that agrees with /etc/hosts.
I am using Container Linux, so for me it looks like this in the ct-config, before converting to Ignition:
storage:
  files:
    - filesystem: "root"
      path: "/etc/hostname"
      mode: 0644
      contents:
        inline: ${hostname}
    - filesystem: "root"
      path: "/etc/hosts"
      mode: 0644
      contents:
        inline: "127.0.0.1 ${hostname} localhost\n
                 ::1 ${hostname} localhost"
Just be sure to have the above before you run the rancher registration line.
sudo docker run --rm --privileged \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /var/lib/rancher:/var/lib/rancher \
rancher/agent:v1.2.7 https://myrancher/v1/scripts/TOKEN
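As an aside, outside the Rancher UI the same effect can be had by capturing the host's name in the shell at docker run time; the HOST_HOSTNAME variable name and my-image are placeholders of my own:

```shell
# Capture the Docker host's hostname and hand it to the container as
# an environment variable (sketch; the command is echoed, not run).
HOST_HOSTNAME=$(hostname)
echo docker run -e "HOST_HOSTNAME=$HOST_HOSTNAME" my-image
```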

How to setup mass dynamic virtual hosts in nginx on docker?

How can I set up mass dynamic virtual hosts in nginx, as seen here, except using Docker as the host machine?
I currently have it setup like this:
# default.conf
server {
    root /var/www/html/$http_host;
    server_name $http_host;
}
And in my Dockerfile
COPY default.conf /etc/nginx/sites-enabled/default.conf
And after I build the image and run it:
docker run -d -p 80:80 -v www/:/var/www/html
But when I point a new domain (example.dev) at it in my hosts file and create www/example.dev/index.html, it doesn't work at all.
The setup is correct and it works, as I tested it on my system. The only issue is that you are copying the file to the wrong path. The Docker image doesn't use the sites-enabled path by default; the default config loads everything present in /etc/nginx/conf.d. So you need to copy to that path, and the rest all works great:
COPY default.conf /etc/nginx/conf.d/default.conf
Make sure to map your volumes correctly. I tested it using the docker command below:
docker run -p 80:80 -v $PWD/www/:/var/www/html -v $PWD/default.conf:/etc/nginx/conf.d/default.conf nginx
Below is the output on command line
vagrant@vagrant:~/test/www$ mkdir dev.tarunlalwani.com
vagrant@vagrant:~/test/www$ cd dev.tarunlalwani.com/
vagrant@vagrant:~/test/www/dev.tarunlalwani.com$ vim index.html
vagrant@vagrant:~/test/www/dev.tarunlalwani.com$ cat index.html
<h1>This is a test</h1>
Output on browser
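For reference, a slightly fuller version of the catch-all server block from the question; the listen and index directives are assumptions, and each host still needs a matching directory under /var/www/html:

```nginx
server {
    listen 80 default_server;
    server_name _;                    # catch-all: accept any Host header
    root /var/www/html/$http_host;    # per-host document root
    index index.html;
}
```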
