SSL certificates or other files not showing in Docker file structure - docker

I have a Docker build for GitLab. I created some SSL certificates and other files that I need to pull in; however, when I exec into the container's bash shell, the files are not visible.
gitlab:
  image: 'gitlab/gitlab-ce:9.1.0-ce.0'
  restart: always
  hostname: 'gitlab.example.com'
  links:
    - postgresql:postgresql
    - redis:redis
  environment:
    GITLAB_OMNIBUS_CONFIG: |
      postgresql['enable'] = false
      gitlab_rails['db_username'] = "gitlab"
      gitlab_rails['db_password'] = "gitlab"
      gitlab_rails['db_host'] = "postgresql"
      gitlab_rails['db_port'] = "5432"
      gitlab_rails['db_database'] = "gitlabhq_production"
      gitlab_rails['db_adapter'] = 'postgresql'
      gitlab_rails['db_encoding'] = 'utf8'
      redis['enable'] = false
      gitlab_rails['redis_host'] = 'redis'
      gitlab_rails['redis_port'] = '6379'
      external_url 'https://gitlab.example.com:30080'
      nginx['ssl_certificate'] = '/etc/gitlab/trusted-certs/gitlab.example.com.crt'
      nginx['ssl_certificate_key'] = '/etc/gitlab/trusted-certs/gitlab.example.com.key'
  ports:
    - "30080:30080"
    - "30022:22"
postgresql:
  restart: always
  image: postgres:9.6.2-alpine
  environment:
    - POSTGRES_USER=gitlab
    - POSTGRES_PASSWORD=gitlab
    - POSTGRES_DB=gitlabhq_production
redis:
  restart: always
  image: redis:3.0.7-alpine

To create the self-signed certificates, I currently need to exec into my Docker container and generate them from the container's bash shell.

The certificates (self-signed) are on my machine at the referenced path "/etc/gitlab/trusted-certs/gitlab.example.com.crt".
Your docker-compose.yml does not map any folders from your host into your container. A container is nothing more than a namespaced process, and one of those namespaces is the filesystem. To map a directory from the host into the container, you can use a simple bind-mount syntax:
gitlab:
  image: 'gitlab/gitlab-ce:9.1.0-ce.0'
  restart: always
  hostname: 'gitlab.example.com'
  volumes:
    - ./path/to/gitlab.example.com.crt:/etc/gitlab/trusted-certs/gitlab.example.com.crt:ro
  ...
Note that this mounts from the host into the container, and the file is configured as read-only with the :ro suffix to prevent processes inside the container from modifying your certificates. If your Docker host is inside a VM (including Docker for Windows/Mac) or on a remote server, you'll need to make sure the files are accessible there (e.g. Docker for Windows/Mac has settings to share PC folders into the embedded VM).
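If it helps, here's a minimal sketch of generating the self-signed pair on the host (so there's no need to exec into the container at all) and then verifying the mount; it assumes the service is named gitlab and reuses the ./path/to/ placeholder from above:

# on the host: generate a self-signed key/cert pair
openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
  -subj "/CN=gitlab.example.com" \
  -keyout ./path/to/gitlab.example.com.key \
  -out ./path/to/gitlab.example.com.crt

# after docker-compose up -d, confirm the file is visible inside the container
docker-compose exec gitlab ls -l /etc/gitlab/trusted-certs/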

Related

Accessing ARP table of host from Docker container on macOS

This is basically the same question as this one, except that on a Mac, setting the network mode to host has no effect whatsoever.
I'm trying to give a Docker container, running on macOS, access to its host's ARP table. My docker-compose.yaml:
services:
  homeassistant:
    container_name: home-assistant
    image: homeassistant/home-assistant
    environment:
      # This is the required way to set a timezone on macOS and differs from the Linux compose file
      - TZ=XX/XXXX
    volumes:
      - ./config:/config
    restart: unless-stopped
    privileged: true
    ports:
      # Also required for macOS since the network directive in docker-compose does not work
      - "8123:8123"
# Add this or docker-compose will complain that it did not find the key for the locally mapped volume
volumes:
  config:

Share dir between host and multiple containers using docker-compose

I have 2 containers in a compose file, and I want to serve the app's static files through nginx.
I have read this: https://stackoverflow.com/a/43560093/7522096 and want to use a host directory shared between the app container and the nginx container; for some reason I don't want to use a named volume.
===
Using a host directory

Alternately, you can use a directory on the host and mount that into the containers. This has the advantage of you being able to work directly on the files using your tools outside of Docker (such as your GUI text editor and other tools).
It's the same, except you don't define a volume in Docker, instead mounting the external directory.

version: '3'
services:
  nginx:
    volumes:
      - ./assets:/var/lib/assets
  asset:
    volumes:
      - ./assets:/var/lib/assets
===
My docker-compose file:
version: "3.7"
services:
app:
container_name: app
restart: always
ports:
- 8888:8888
env_file:
- ./env/app.env
image: registry.gitlab.com/app/development
volumes:
- ./public/app/:/usr/app/static/
- app-log:/root/.pm2
nginx:
container_name: nginx
image: 'nginx:1.16-alpine'
restart: always
ports:
- 80:80
- 443:443
volumes:
- /home/devops/config/:/etc/nginx/conf.d/:ro
- /home/devops/ssl:/etc/nginx/ssl:ro
- ./public/app/:/etc/nginx/public/app/
depends_on:
- app
volumes:
# app-public:
app-log:
Yet when I do this in my compose file, the directory always comes up empty in nginx, and the static files that were in my app container disappear too.
Please help; I've tried a lot of ways but cannot figure it out.
Thanks.
During the initialization of the container, Docker binds the ./public/app directory on the host to the /usr/app/static/ directory in the container.
If ./public/app does not exist, it is created. The bind goes from the host to the container, meaning that the host's ./public/app content is what appears at /usr/app/static inside the container (shadowing anything the image had there), and not vice versa. That's why, after initialization, the static app directory is empty.
If my understanding is correct, your goal is to share the application files between the app container and nginx.
Given the above, the only solution is to create the files in the volume after the volume is created. Here is an example of the relevant parts:
version: "3"
services:
app:
image: ubuntu
volumes:
- ./public/app/:/usr/app/static_copy/
entrypoint: /bin/bash
command: >
-c "mkdir /usr/app/static;
touch /usr/app/static/shared_file;
mv /usr/app/static/* /usr/app/static_copy;
rm -r /usr/app/static;
ln -sfT /usr/app/static_copy/ /usr/app/static;
exec sleep infinity"
nginx:
image: 'nginx:1.16-alpine'
volumes:
- ./public/app/:/etc/nginx/public/app/
depends_on:
- app
This moves the static files to the static_copy directory and links those files back to /usr/app/static. The files are then shared with the host (the public/app directory) and the nginx container (/etc/nginx/public/app/). Adapt it to fit your needs.
Alternatively, you can of course use named volumes, as sketched below.
Hope it helps
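For reference, a minimal sketch of the named-volume variant (the volume name assets is illustrative):

version: "3"
services:
  app:
    image: registry.gitlab.com/app/development
    volumes:
      - assets:/usr/app/static/
  nginx:
    image: 'nginx:1.16-alpine'
    volumes:
      - assets:/etc/nginx/public/app/
    depends_on:
      - app
volumes:
  assets:

Unlike a bind mount, an empty named volume is seeded on first use with the files the image already has at the mount point, so both containers see the static files without the symlink workaround; the trade-off is that the host no longer has direct access to them.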

How to configure a mariadb docker-compose file to use a port other than 3306?

I cannot get mariadb to use a port other than 3306 when running it in a Docker container using a docker-compose file.
I have already read the mariadb/Docker documentation, searched online, and conducted my own experiments.
docker-compose file:
version: '3.1'
services:
  db:
    image: mariadb
    restart: always
    environment:
      - MYSQL_ROOT_PASSWORD=mypassword
      - MYSQL_TCP_PORT=33030
      - MYSQL_UNIX_PORT=33020
    ports:
      - "33020:33020"
Dockerfile:
FROM mariadb:10.3.14
COPY mydbscript.sql /docker-entrypoint-initdb.d/
EXPOSE 33020
CMD ["mysqld"]
It never uses port 33020. It still uses port 3306. How can I pass the port dynamically via the docker-compose file at run-time?
You need to replace the default my.cnf to specify a custom port for MariaDB/MySQL:

cd /where/your/docker-compose.yml/is/located
docker run --rm mariadb cat /etc/mysql/my.cnf > my.cnf
# use any text editor you like to open my.cnf, search for "port = 3306",
# and replace it with the port you'd like to have.
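For example, the same edit can be done non-interactively with GNU sed (this assumes the extracted my.cnf really contains a "port = 3306" line, as described above):

# replace the listening port in the extracted config (target port 33020 as an example)
sed -i 's/port[[:space:]]*=[[:space:]]*3306/port = 33020/' my.cnf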
Configure your docker-compose.yml like this:
version: '3.1'
services:
  db:
    image: mariadb
    restart: always
    volumes:
      - type: bind
        source: ./my.cnf
        target: /etc/mysql/my.cnf
    environment:
      - MYSQL_ROOT_PASSWORD=mypassword
      # add your other configurations here
The container image is statically bound to :3306; if you wish to change the port the database itself uses, you'll need to build a new image and configure the database to run elsewhere.
However, Docker permits you to map (publish) this as a different port, :33020. The correct way to do this is:
docker-compose: keep MYSQL_TCP_PORT=3306 (the default, so it can simply be omitted)
docker-compose: ports: - "33020:3306"
Dockerfile: EXPOSE 3306 (unchanged)
Containers will (internally) correctly reference :3306, but externally (from the host) the database will be exposed on :33020.
NB Within the docker-compose network, other containers must continue to reference the database on port :3306.
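Put together, a minimal sketch of that approach (password and mapping are illustrative):

version: '3.1'
services:
  db:
    image: mariadb
    restart: always
    environment:
      - MYSQL_ROOT_PASSWORD=mypassword
    ports:
      - "33020:3306"

From the host you'd then connect with mysql -h 127.0.0.1 -P 33020 -u root -p, while other containers on the same compose network keep using db:3306.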
@DazWilkin, @philip-tzou, of course it's possible!
How to set the port without a config file is even explained on the Docker Hub page for mariadb (https://hub.docker.com/_/mariadb). @Software just made the mistake of using '=' instead of ':' in the docker-compose.yml. I did it the first time too, because I copied the environment variables from a docker run bash file.
This docker-compose.yml (with an .env file) works for me to set both the internal and the external port of my mariadb service:
version: "3.9"
services:
database:
image: mariadb:10.8
container_name: ${db_containername}
environment:
MARIADB_ALLOW_EMPTY_ROOT_PASSWORD: yes
MARIADB_DATABASE: ${db_database}
MARIADB_USER: ${db_user}
MARIADB_PASSWORD: ${db_password}
MYSQL_TCP_PORT: ${db_port_internal}
MARIADB_AUTO_UPGRADE: 1
MYSQL_UNIX_PORT: /run/mysqld/mysqld.sock
MARIADB_MYSQL_LOCALHOST_USER: true
restart: always
ports:
- '${db_port_external}:${db_port_internal}'
expose:
- ${db_port_external}
volumes:
- 'database_data:/var/lib/mysql'
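For completeness, a matching .env file might look like this (all values are illustrative):

db_containername=mariadb-dev
db_database=mydb
db_user=myuser
db_password=changeme
db_port_internal=33020
db_port_external=33020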
This is how you can set a different port for your mariadb inside the container:
1. Create a my.cnf file inside the same directory as your Dockerfile
Write this inside the my.cnf file:

[mysqld]
port = 33020

2. Add the cnf file to the Dockerfile & edit the EXPOSE
Add this line to your Dockerfile:

COPY my.cnf /etc/mysql/my.cnf

And make sure to change the exposed port to the one you want to use, e.g.

EXPOSE 33020

3. Make sure to change the port in the docker-compose.yml file

ports:
  - "33020:33020"

You can now connect to your database, either in the terminal using docker exec -it {databasename} mysql -u root -p, or in something like MySQL Workbench by setting the IP to localhost and the port to 33020 (see the check below).
Hope this helps.
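To double-check which port the server is actually listening on, you can query it directly (reusing the {databasename} placeholder from above):

docker exec -it {databasename} mysql -u root -p -e "SELECT @@port;"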

docker-compose.yml container_name and hostname

What is the use of container_name in a docker-compose.yml file? Can I use it as a hostname, which is otherwise just the service name in the docker-compose.yml file?
Also, when I explicitly set hostname under services, does it override the hostname derived from the service name?
hostname: just sets what the container believes its own hostname is. In the unusual event you get a shell inside the container, it might show up in the prompt. It has no effect on anything outside, and there's usually no point in setting it. (It has basically the same effect as hostname(1): that command doesn't cause anything outside your host to know the name you set.)
container_name: sets the actual name of the container when it runs, rather than letting Docker Compose generate it. If this name is different from the name of the block in services:, both names will be usable as DNS names for inter-container communication. Unless you need to use docker to manage a container that Compose started, you usually don’t need to set this either.
If you omit both of these settings, one container can reach another (provided they're in the same Docker Compose file and have compatible networks: settings) using the name of the services: block and the port the service inside the container is listening on.
version: '3'
services:
  redis:
    image: redis
  db:
    image: mysql
    ports: ['6033:3306']
  app:
    build: .
    ports: ['12345:8990']
    environment:
      REDIS_HOST: redis
      REDIS_PORT: 6379
      MYSQL_HOST: db
      MYSQL_PORT: 3306
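A quick way to confirm these service-name DNS entries from inside a container (service names taken from the example above; getent availability depends on the base image):

docker-compose exec app getent hosts redis
docker-compose exec app getent hosts db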
The easiest answer is the following:
container_name: this is the container name that you see from the host machine when listing the running containers with the docker container ls command.
hostname: the hostname of the container. The name that you define here goes into the container's /etc/hosts file:

$ docker exec -it myserver /bin/bash
bash-4.2# cat /etc/hosts
127.0.0.1    localhost
172.18.0.2   myserver

That means you can ping machines by those names within a Docker network.
I highly suggest setting these two parameters to the same value to avoid confusion.
An example docker-compose.yml file:
version: '3'
services:
  database-server:
    image: ...
    container_name: database-server
    hostname: database-server
    ports:
      - "xxxx:yyyy"
  web-server:
    image: ...
    container_name: web-server
    hostname: web-server
    ports:
      - "xxxx:xxxx"
      - "5101:4001" # debug port
You can customize the image name to build and the container name to use during docker-compose up. For this, specify them as below in the docker-compose.yml file.
It will create an image and a container with custom names.
version: '3'
services:
  frontend_dev:
    stdin_open: true
    environment:
      - CHOKIDAR_USEPOLLING=true
    build:
      context: .
      dockerfile: Dockerfile.dev
    image: "mycustomname/sample:v1"
    container_name: mycustomname_sample_v1
    ports:
      - '3000:3000'
    volumes:
      - /app/node_modules
      - .:/app
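To verify the custom names afterwards, a couple of quick checks (assuming the compose file above built successfully):

docker-compose up -d --build
docker image ls mycustomname/sample              # shows the custom image tag v1
docker ps --filter "name=mycustomname_sample_v1"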

Running GitLab in Docker

I want to host a private GitLab server on my Debian VPS, and I figured using Docker would be a good setup.
I tried running GitLab with the following configuration:
version: '3'
services:
  gitlab:
    image: 'gitlab/gitlab-ce'
    restart: always
    hostname: 'gitlab.MYDOMAIN.com'
    links:
      - postgresql:postgresql
      - redis:redis
    environment:
      GITLAB_OMNIBUS_CONFIG: |
        postgresql['enable'] = false
        gitlab_rails['db_username'] = "gitlab"
        gitlab_rails['db_password'] = "gitlab"
        gitlab_rails['db_host'] = "postgresql"
        gitlab_rails['db_port'] = "5432"
        gitlab_rails['db_database'] = "gitlabhq_production"
        gitlab_rails['db_adapter'] = 'postgresql'
        gitlab_rails['db_encoding'] = 'utf8'
        redis['enable'] = false
        gitlab_rails['redis_host'] = 'redis'
        gitlab_rails['redis_port'] = '6379'
        external_url 'http://gitlab.MYDOMAIN.com:30080'
        gitlab_rails['gitlab_shell_ssh_port'] = 30022
    ports:
      # both ports must match the port from external_url above
      - "30080:30080"
      # the mapped port must match the ssh_port specified above
      - "30022:22"
    # the following are hints on what volumes to mount if you want to persist data
    # volumes:
    #   - data/gitlab/config:/etc/gitlab:rw
    #   - data/gitlab/logs:/var/log/gitlab:rw
    #   - data/gitlab/data:/var/opt/gitlab:rw
  postgresql:
    restart: always
    image: postgres:9.6.2-alpine
    environment:
      - POSTGRES_USER=gitlab
      - POSTGRES_PASSWORD=gitlab
      - POSTGRES_DB=gitlabhq_production
    # the following are hints on what volumes to mount if you want to persist data
    # volumes:
    #   - data/postgresql:/var/lib/postgresql:rw
  redis:
    restart: always
    image: redis:3.0.7-alpine
Running this (docker-compose up -d) allows me to reach GitLab on MYDOMAIN.com:30080, but not on gitlab.MYDOMAIN.com:30080.
Have I made an error in the configuration? Or do I need to use a reverse proxy (NGINX or Traefik)?
I'm pretty sure the hostname: needs to match the external_url 'http://gitlab.MYDOMAIN.com:30080' exactly, up to the port.
So, for example:

hostname: 'gitlab.MYDOMAIN.com'
. . . more configuration . . .
external_url 'http://gitlab.MYDOMAIN.com:30080'
Did you check that the gitlab subdomain in DNS is pointing to the right IP? This looks like an infrastructure problem more than a Docker configuration one.
Regards
I managed to fix it myself!
I totally forgot to add an A record setting gitlab.mydomain.com to point to the same IP address as @ (the domain apex).
I added the following block to the nginx configuration:
upstream gitlab.mydomain.com {
    server 1.2.3.4:30080; # IP address of the Docker container
}

server {
    server_name gitlab.mydomain.com;

    location / {
        proxy_pass http://gitlab.mydomain.com;
    }
}
I use upstream because otherwise the URL set in new GitLab projects is set to the IP address, as mentioned here.
