I want to host a private GitLab server on my Debian VPS, and I figured Docker would be a good fit.
I tried running GitLab with the following docker-compose file:
version: '3'
services:
  gitlab:
    image: 'gitlab/gitlab-ce'
    restart: always
    hostname: 'gitlab.MYDOMAIN.com'
    links:
      - postgresql:postgresql
      - redis:redis
    environment:
      GITLAB_OMNIBUS_CONFIG: |
        postgresql['enable'] = false
        gitlab_rails['db_username'] = "gitlab"
        gitlab_rails['db_password'] = "gitlab"
        gitlab_rails['db_host'] = "postgresql"
        gitlab_rails['db_port'] = "5432"
        gitlab_rails['db_database'] = "gitlabhq_production"
        gitlab_rails['db_adapter'] = 'postgresql'
        gitlab_rails['db_encoding'] = 'utf8'
        redis['enable'] = false
        gitlab_rails['redis_host'] = 'redis'
        gitlab_rails['redis_port'] = '6379'
        external_url 'http://gitlab.MYDOMAIN.com:30080'
        gitlab_rails['gitlab_shell_ssh_port'] = 30022
    ports:
      # both ports must match the port from external_url above
      - "30080:30080"
      # the mapped port must match the ssh_port specified above
      - "30022:22"
    # the following are hints on what volumes to mount if you want to persist data
    # volumes:
    #   - ./data/gitlab/config:/etc/gitlab:rw
    #   - ./data/gitlab/logs:/var/log/gitlab:rw
    #   - ./data/gitlab/data:/var/opt/gitlab:rw
  postgresql:
    restart: always
    image: postgres:9.6.2-alpine
    environment:
      - POSTGRES_USER=gitlab
      - POSTGRES_PASSWORD=gitlab
      - POSTGRES_DB=gitlabhq_production
    # the following are hints on what volumes to mount if you want to persist data
    # volumes:
    #   - ./data/postgresql:/var/lib/postgresql:rw
  redis:
    restart: always
    image: redis:3.0.7-alpine
Running this (docker-compose up -d) allows me to reach GitLab on MYDOMAIN.com:30080, but not on gitlab.MYDOMAIN.com:30080.
Have I made an error in the configuration? Or do I need to use a reverse proxy (NGINX or Traefik)?
I'm pretty sure the hostname: 'gitlab.MYDOMAIN.com' needs to match the external_url 'http://gitlab.MYDOMAIN.com:30080' exactly, up to the port.
So for example:
hostname: gitlab.MYDOMAIN.com
. . . more configuration . . .
external_url 'http://gitlab.MYDOMAIN.com:30080'
Did you check that the gitlab subdomain in DNS is pointing to the right IP? This looks like an infrastructure problem more than a Docker configuration one.
Regards
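If you want to verify that before touching the compose file, here are a couple of quick checks from any machine (using the placeholder domain from above):

# does the subdomain resolve, and to the same IP as the apex domain?
dig +short gitlab.MYDOMAIN.com
dig +short MYDOMAIN.com
# if both resolve to the same address, the service itself should answer:
curl -I http://gitlab.MYDOMAIN.com:30080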
I managed to fix it myself!
I totally forgot to add an A record pointing gitlab.mydomain.com to the same IP address as the bare domain.
I added the following block to the nginx configuration:
upstream gitlab.mydomain.com {
    server 1.2.3.4:30080; # IP address of Docker container
}

server {
    server_name gitlab.mydomain.com;

    location / {
        proxy_pass http://gitlab.mydomain.com;
    }
}
I use an upstream block because otherwise the URL set in new GitLab projects is set to the IP address, as mentioned here.
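To apply the change (assuming nginx runs directly on the host as a system service):

sudo nginx -t               # validate the new server block first
sudo systemctl reload nginx # then reload without dropping connections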
I have this docker-compose file:
version: "3.8"
services:
web:
image: apachephp:v1
ports:
- "80-85:80"
volumes:
- volume:/var/www/html
network_mode: bridge
ddbb:
image: mariadb:v1
ports:
- "3306:3306"
volumes:
- volume2:/var/lib/mysql
network_mode: bridge
environment:
- MYSQL_ROOT_PASSWORD=*********
- MYSQL_DATABASE=*********
- MYSQL_USER=*********
- MYSQL_PASSWORD=*********
volumes:
volume:
name: volume-data
volume2:
name: volume2-data
When I run this:
docker-compose up -d --scale web=2
It works, but I receive this warning:
WARNING: The "web" service specifies a port on the host. If multiple containers for this service are created on a single host, the port will clash.
Can somebody help me avoid this warning? Thanks in advance.
Best regards.
I suppose you are trying to access the web service without knowing the port of a specific container, and to distribute requests across the containers. If I'm right, you need a load-balancing mechanism in the system configuration. In the following example, I'll use NGINX as the load balancer.
version: "3.8"
services:
web:
image: apachephp:v1
expose: # change 'ports' to 'expose'
- "7483" # <- this is web running port (Change to your web port)
....
ddbb:
....
## New Start ##
nginx:
image: nginx:latest
volumes:
- ./nginx.conf:/etc/nginx/nginx.conf:ro
depends_on:
- web # your web service name
ports:
- "80:80"
## New End ##
volumes:
...
So you don't need to map ports (80-85:80) from the web service onto host ports if you want to scale the service; I removed that port mapping from your Docker Compose file and only expose the port, as shown above.
In the nginx service I added a port mapping to the host: in this example NGINX is configured to listen on port 4000, which is why we map that port (4000:4000).
nginx.conf file contents:
user nginx;

events {
    worker_connections 1000;
}

http {
    server {
        listen 4000;
        location / {
            # round-robin across the scaled 'web' containers on their exposed port
            proxy_pass http://web:7483;
        }
    }
}
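With this in place you can scale the web service without host-port clashes. A quick smoke test (assuming the port mapping above):

docker-compose up -d --scale web=2
# repeat a few times; Docker's DNS round-robins requests across the two
# web containers sitting behind NGINX
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:4000/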
You will find more details here: Use Docker Compose to Run Multiple Instances of a Service in Development.
I'm trying to set up GitLab as a Docker container on an internal server. Let's assume the server's IP is 10.10.10.10. Below is the docker-compose file that I use to bring up the container. I'm unable to access the HTTP URL via localhost:4080 (from a browser on the server) or via the IP 10.10.10.10:4080. I'd like to understand what I'm missing here.
version: '2'
services:
  gitlab:
    image: gitlab-ee-img:12.0.9-ee.0
    container_name: gitlab
    restart: always
    hostname: 'localhost:4080'
    # environment:
    #   GITLAB_OMNIBUS_CONFIG: |
    #     # Add any other gitlab.rb configuration here, each on its own line
    #     # external_url 'https://gitlab.example.com'
    #     external_url 'http://127.0.0.1:4080'
    ports:
      - '4080:80'
      - '4443:443'
      - '4022:22'
    volumes:
      - '/data/gitlab/config:/etc/gitlab'
      - '/data/gitlab/logs:/var/log/gitlab'
      - '/data/gitlab/data:/var/opt/gitlab'
I'm not entirely sure whether something else is off there, but I'm fairly sure the hostname: 'localhost:4080' line is not correct. It should be just the hostname, without a port. Try commenting out that line, or leave hostname undefined entirely for testing.
src: https://docs.docker.com/compose/compose-file/#domainname-hostname-ipc-mac_address-privileged-read_only-shm_size-stdin_open-tty-user-working_dir
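A quick way to test that change (assuming the compose file above):

# recreate the container after editing docker-compose.yml
docker-compose up -d --force-recreate gitlab
# then check whether the mapped HTTP port answers locally
curl -I http://localhost:4080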
For anyone hitting this SO question: the answer is NOT to map a custom port onto 80 in Docker. Instead, this will work:
version: '2'
services:
  gitlab:
    image: gitlab-ee-img:12.0.9-ee.0
    container_name: gitlab
    restart: always
    hostname: '10.10.10.10'
    environment:
      GITLAB_OMNIBUS_CONFIG: |
        # Add any other gitlab.rb configuration here, each on its own line
        # external_url 'https://gitlab.example.com'
        external_url 'http://10.10.10.10:4080'
        gitlab_rails['gitlab_shell_ssh_port'] = 4022
    ports:
      - '4080:4080'
      - '4443:443'
      - '4022:22'
    volumes:
      - '/data/gitlab/config:/etc/gitlab'
      - '/data/gitlab/logs:/var/log/gitlab'
      - '/data/gitlab/data:/var/opt/gitlab'
The reason is explained in this thread - specifically, this answer.
To summarize (quoting the original answer):
The default port of GitLab is 80, but when you use the external_url clause, GitLab changes the nginx port it listens on; it is not just an alias.
If, inside the container, you execute curl http://localhost after having set external_url 'http://10.10.10.10:4080', it will not answer on port 80; try curl http://10.10.10.10:4080 instead.
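That behavior is easy to confirm (a sketch; the container name gitlab comes from the compose file above):

# with external_url set to http://10.10.10.10:4080, the bundled NGINX
# no longer listens on port 80 inside the container:
docker exec -it gitlab curl -sI http://localhost        # connection refused
docker exec -it gitlab curl -sI http://localhost:4080   # should answer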
I'm running a self-hosted GitLab Docker instance, but I'm facing some problems configuring the registry, as I get the error
Error response from daemon: Get https://example.com:4567/v2/: dial tcp <IP>:4567: connect: connection refused
when running docker login example.com:4567.
So it seems that I have to expose the port 4567 somehow.
A (better) alternative would be to configure a second domain for the registry - like registry.example.com. As you can see below, I'm using Let's Encrypt certificates for my GitLab instance. But how do I get a second certificate for the registry?
This is what my docker-compose file looks like - I'm using jwilder/nginx-proxy for my reverse proxy.
docker-compose.yml
gitlab:
  image: gitlab/gitlab-ce:11.9.0-ce.0
  container_name: gitlab
  networks:
    - reverse-proxy
  restart: unless-stopped
  ports:
    - '50022:22'
  volumes:
    - /opt/gitlab/config:/etc/gitlab
    - /opt/gitlab/logs:/var/log/gitlab
    - /opt/gitlab/data:/var/opt/gitlab
    - /opt/nginx/conf.d:/etc/nginx/conf.d
    - /opt/nginx/certs:/etc/nginx/certs:ro
  environment:
    VIRTUAL_HOST: example.com
    VIRTUAL_PROTO: https
    VIRTUAL_PORT: 443
    LETSENCRYPT_HOST: example.com
    LETSENCRYPT_EMAIL: certs@example.com
gitlab.rb
external_url 'https://example.com'
nginx['redirect_http_to_https'] = true
nginx['ssl_certificate'] = '/etc/nginx/certs/example.com/fullchain.pem'
nginx['ssl_certificate_key'] = '/etc/nginx/certs/example.com/key.pem'
gitlab_rails['backup_keep_time'] = 604800
gitlab_rails['backup_path'] = '/backups'
gitlab_rails['registry_enabled'] = true
registry_external_url 'https://example.com:4567'
registry_nginx['ssl_certificate'] = "/etc/nginx/certs/example.com/fullchain.pem"
registry_nginx['ssl_certificate_key'] = "/etc/nginx/certs/example.com/key.pem"
For the second alternative it would look like:
registry_external_url 'https://registry.example.com'
registry_nginx['ssl_certificate'] = "/etc/nginx/certs/registry.example.com/fullchain.pem"
registry_nginx['ssl_certificate_key'] = "/etc/nginx/certs/registry.example.com/key.pem"
But how do I set this up in my docker-compose?
Update
I'm configuring nginx just via the jwilder package, without changing anything. So this part of my docker-compose.yml file looks like this:
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    container_name: nginx-proxy
    networks:
      - reverse-proxy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /opt/nginx-proxy/vhost.d:/etc/nginx/vhost.d:rw
      - /opt/nginx/certs:/etc/nginx/certs:ro
      - html:/usr/share/nginx/html
      - /var/run/docker.sock:/tmp/docker.sock:ro
  nginx-letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    container_name: nginx-letsencrypt
    networks:
      - reverse-proxy
    depends_on:
      - nginx-proxy
    volumes:
      - /opt/nginx-proxy/vhost.d:/etc/nginx/vhost.d:rw
      - html:/usr/share/nginx/html
      - /opt/nginx/certs:/etc/nginx/certs:rw
      - /var/run/docker.sock:/var/run/docker.sock:rw
    environment:
      NGINX_PROXY_CONTAINER: "nginx-proxy"
TL;DR:
So it seems that I have to expose the port 4567 somehow.
Yes. However, jwilder/nginx-proxy does not support more than one port per virtual host, and port 443 is already exposed. There is a pull request for that feature, but it has not been merged yet. You'll need to expose this port another way (see below).
You are using jwilder/nginx-proxy as a reverse proxy to access a GitLab instance in a container, but with your current configuration only port 443 is exposed:
environment:
  VIRTUAL_HOST: example.com
  VIRTUAL_PROTO: https
  VIRTUAL_PORT: 443
All other Gitlab services (including the registry on port 4567) are not proxied and therefore not reachable through example.com.
Unfortunately it is not yet possible to expose multiple ports on a single hostname with jwilder/nginx-proxy. There is a pull request open for that use case, but it has not been merged yet (you are not the only one with this kind of issue).
A (better) alternative would be to configure a second domain for the registry
This won't work as long as you keep using jwilder/nginx-proxy: even if you changed registry_external_url, you would still be stuck with the port issue, and you cannot allocate the same port to two different services.
What you can do:
- vote and comment on the mentioned PR so it gets merged :)
- try building the Docker image from the mentioned pull request's fork and configure your compose file with something like VIRTUAL_HOST=example.com:443,example.com:4567
- configure a reverse proxy manually for port 4567 - you could spin up a plain nginx container alongside your current configuration to do just this (see the sketch after this list), or re-configure your entire proxying scheme without the jwilder images
- update your configuration to expose example.com:4567 instead of example.com:443, but then you'd lose HTTPS access on 443 (probably not what you are looking for)
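As a rough illustration of the third option, here is a minimal sketch. All names are assumptions based on your compose file: gitlab is the container name on the reverse-proxy network, and the registry's bundled NGINX inside that container already terminates TLS on 4567, so a plain TCP forward should be enough:

# a throwaway nginx.conf that just forwards TCP 4567 to the gitlab container
cat > registry-nginx.conf <<'EOF'
events {}
stream {
    server {
        listen 4567;
        proxy_pass gitlab:4567;
    }
}
EOF

# NOTE: docker-compose may prefix the network name with the project name
# (e.g. myproject_reverse-proxy); adjust to the real network name
docker run -d --name registry-proxy \
  --network reverse-proxy \
  -p 4567:4567 \
  -v "$PWD/registry-nginx.conf:/etc/nginx/nginx.conf:ro" \
  nginx:latest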
I am aware this does not provide a definitive solution, but I hope it helps.
I have a Docker build for GitLab. I created some SSL certificates and other files I need to pull in; however, when I exec into the container's bash, the files are not visible.
gitlab:
  image: 'gitlab/gitlab-ce:9.1.0-ce.0'
  restart: always
  hostname: 'gitlab.example.com'
  links:
    - postgresql:postgresql
    - redis:redis
  environment:
    GITLAB_OMNIBUS_CONFIG: |
      postgresql['enable'] = false
      gitlab_rails['db_username'] = "gitlab"
      gitlab_rails['db_password'] = "gitlab"
      gitlab_rails['db_host'] = "postgresql"
      gitlab_rails['db_port'] = "5432"
      gitlab_rails['db_database'] = "gitlabhq_production"
      gitlab_rails['db_adapter'] = 'postgresql'
      gitlab_rails['db_encoding'] = 'utf8'
      redis['enable'] = false
      gitlab_rails['redis_host'] = 'redis'
      gitlab_rails['redis_port'] = '6379'
      external_url 'https://gitlab.example.com:30080'
      nginx['ssl_certificate'] = '/etc/gitlab/trusted-certs/gitlab.example.com.crt'
      nginx['ssl_certificate_key'] = '/etc/gitlab/trusted-certs/gitlab.example.com.key'
  ports:
    - "30080:30080"
    - "30022:22"
postgresql:
  restart: always
  image: postgres:9.6.2-alpine
  environment:
    - POSTGRES_USER=gitlab
    - POSTGRES_PASSWORD=gitlab
    - POSTGRES_DB=gitlabhq_production
redis:
  restart: always
  image: redis:3.0.7-alpine
To create the self-signed certificates, I need to exec into the container and generate them from the container's bash.
The certificates (self-signed) are on my machine at the referenced path: /etc/gitlab/trusted-certs/gitlab.example.com.crt
Your docker-compose.yml does not map any folders from your host into your container. Containers are little more than namespaced processes, and one of those namespaces is the filesystem. To map a directory from the host into the container, you can use the simple bind-mount syntax:
gitlab:
  image: 'gitlab/gitlab-ce:9.1.0-ce.0'
  restart: always
  hostname: 'gitlab.example.com'
  volumes:
    - ./path/to/gitlab.example.com.crt:/etc/gitlab/trusted-certs/gitlab.example.com.crt:ro
  ...
Note that this mounts from the host into the container, and the file is mounted read-only (the :ro suffix) to prevent processes inside the container from modifying your certificates. If your Docker host is inside a VM (including Docker for Windows/Mac) or on a remote server, you'll need to make sure the files are accessible there (e.g. Docker for Windows/Mac has settings to share PC folders into the embedded VM).
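A quick way to confirm the mount took effect (assuming the service is named gitlab as above):

docker-compose up -d gitlab
# the certificate should now be visible inside the container, read-only:
docker-compose exec gitlab ls -l /etc/gitlab/trusted-certs/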
In short:
I have a hard time figuring out how to set a custom IP for a Solr container from the docker-compose.yml file.
Detailed
We want to deploy local dev environments, for Drupal instances, via Docker.
The problem is that while I can access the Solr server from the browser via the "traditional" http://localhost:8983/solr, Drupal cannot connect to it this way. The internal 0.0.0.0 and 127.0.0.1 don't work either. The only way Drupal can connect to the Solr server is via the LAN IP, which obviously differs for every station, and since the configuration in Drupal needs to be updated anyway, I thought that specifying a custom IP on which they can communicate would be my best choice, but it's not straightforward.
I am aware that assigning a static IP to the container is not the best solution, but it seems more feasible than tinkering with solr.in.sh, and if someone has a different approach to achieve this, I am open to solutions.
Most likely I could use some command line parameter along with docker run, but we need to run the containers with docker-compose up -d, so this wouldn't be an optimal solution.
Ideally there would be an example Solr container section for the compose file. Thanks.
Note:
This link shows an example of how to set it, but I can't understand it well. Please keep in mind that I am by no means an expert.
Forgot to mention that the host is Linux-based, mostly Ubuntu and Debian.
Edit:
As requested, here is my compose file:
version: "2"
services:
db:
image: wodby/drupal-mariadb
environment:
MYSQL_RANDOM_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
MYSQL_DATABASE: ${MYSQL_DATABASE}
MYSQL_USER: ${MYSQL_USER}
MYSQL_PASSWORD: ${MYSQL_PASSWORD}
# command: --character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci # The simple way to override the mariadb config.
volumes:
- ./data/mysql:/var/lib/mysql
- ./docker-runtime/mariadb-init:/docker-entrypoint-initdb.d # Place init .sql file(s) here.
php:
image: wodby/drupal-php:7.0 # Allowed: 7.0, 5.6.
environment:
DEPLOY_ENV: dev
PHP_SENDMAIL_PATH: /usr/sbin/sendmail -t -i -S mailhog:1025
PHP_XDEBUG_ENABLED: 1 # Set 1 to enable.
# PHP_SITE_NAME: dev
# PHP_HOST_NAME: localhost:8000
# PHP_DOCROOT: public # Relative path inside the /var/www/html/ directory.
# PHP_SENDMAIL_PATH: /usr/sbin/sendmail -t -i -S mailhog:1025
# PHP_XDEBUG_ENABLED: 1
# PHP_XDEBUG_AUTOSTART: 1
# PHP_XDEBUG_REMOTE_CONNECT_BACK: 0 # This is needed to respect remote.host setting bellow
# PHP_XDEBUG_REMOTE_HOST: "10.254.254.254" # You will also need to 'sudo ifconfig lo0 alias 10.254.254.254'
links:
- db
volumes:
- ./docroot:/var/www/html
nginx:
image: wodby/drupal-nginx
hostname: testing
environment:
# NGINX_SERVER_NAME: localhost
NGINX_UPSTREAM_NAME: php
# NGINX_DOCROOT: public # Relative path inside the /var/www/html/ directory.
DRUPAL_VERSION: 7 # Allowed: 7, 8.
volumes_from:
- php
ports:
- "${PORT_WEB}:80"
pma:
image: phpmyadmin/phpmyadmin
environment:
PMA_HOST: db
PMA_USER: ${MYSQL_USER}
PMA_PASSWORD: ${MYSQL_PASSWORD}
ports:
- '${PORT_PMA}:80'
links:
- db
mailhog:
image: mailhog/mailhog
ports:
- "8002:8025"
redis:
image: redis:3.2-alpine
# memcached:
# image: memcached:1.4-alpine
# memcached-admin:
# image: phynias/phpmemcachedadmin
# ports:
# - "8006:80"
solr:
image: makuk66/docker-solr:4.10.3
volumes:
- ./docker-runtime/solr:/opt/solr/server/solr/mycores
# entrypoint:
# - docker-entrypoint.sh
# - solr-precreate
ports:
- "8983:8983"
# varnish:
# image: wodby/drupal-varnish
# depends_on:
# - nginx
# environment:
# VARNISH_SECRET: secret
# VARNISH_BACKEND_HOST: nginx
# VARNISH_BACKEND_PORT: 80
# VARNISH_MEMORY_SIZE: 256M
# VARNISH_STORAGE_SIZE: 1024M
# ports:
# - "8004:6081" # HTTP Proxy
# - "8005:6082" # Control terminal
# sshd:
# image: wodby/drupal-sshd
# environment:
# SSH_PUB_KEY: "ssh-rsa ..."
# volumes_from:
# - php
# ports:
# - "8006:22"
A docker run example would be:

# hostname -I can print several addresses; take the first one
IP_ADDRESS=$(hostname -I | awk '{print $1}')
# start Solr bound to that address on port 8983
docker run -d -p 8983:8983 solr bin/solr start -h "${IP_ADDRESS}" -p 8983
Instead of assigning static IPs, you could use the following method to get a container's IP dynamically.
When you link containers together, they share their network information (IP, port) with each other; the information is stored in each container as environment variables.
Example
docker-compose.yml
service:
  build: .
  links:
    - redis
  ports:
    - "3001:3001"
redis:
  build: .
  ports:
    - "6369:6369"
The service container will now have the following environment variables:
Dynamic IP address stored within the "service" container:
REDIS_PORT_6379_TCP_ADDR
Dynamic port stored within the "service" container:
REDIS_PORT_6379_TCP_PORT
You can always check this by shelling into the container and looking for yourself:
docker exec -it [ContainerID] bash
printenv
Inside your Node.js app you can read these environment variables in your connection function via process.env.
let client = redis.createClient({
    port: process.env.REDIS_PORT_6379_TCP_PORT, // the linked port
    host: process.env.REDIS_PORT_6379_TCP_ADDR  // the linked IP address
});
Edit
Here is the updated docker-compose.yml "solr" section:
solr:
  image: makuk66/docker-solr:4.10.3
  volumes:
    - ./docker-runtime/solr:/opt/solr/server/solr/mycores
  entrypoint:
    - docker-entrypoint.sh
    - solr-precreate
  ports:
    - "8983:8983"
  links:
    - db
In the above example the solr container is now linked with the db container; this is done using the links field.
You can do the same thing if you want to link the solr container to any other container within the docker-compose.yml file.
The db container's information will now be available to the solr container (via the environment variables I mentioned earlier).
Without the linking, you will not see those environment variables listed when you run the printenv command.
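A quick check (hypothetical container ID; note that link-injected variables like these are a legacy-links behavior, so whether they appear depends on the compose file format in use):

docker exec -it <solr-container-id> printenv | grep ^DB_
# expected shape if injection applies (values are illustrative):
# DB_PORT_3306_TCP_ADDR=172.17.0.3
# DB_PORT_3306_TCP_PORT=3306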