I'm installing a GitLab instance with docker-compose on a server machine on my local network, and I'd like to access my GitLab instance from anywhere on my local network by visiting, for example, "https://my-hostname".
I followed this.
I'm running:
web:
  image: 'gitlab/gitlab-ce:latest'
  restart: always
  hostname: 'gitlab.example.com'
  environment:
    GITLAB_OMNIBUS_CONFIG: |
      external_url 'https://gitlab.example.com'
      # Add any other gitlab.rb configuration here, each on its own line
  ports:
    - '7780:80'
    - '7443:443'
    - '7722:22'
  volumes:
    - '/srv/gitlab/config:/etc/gitlab'
    - '/srv/gitlab/logs:/var/log/gitlab'
    - '/srv/gitlab/data:/var/opt/gitlab'
Now I have very (very) limited networking knowledge, so basically: how do I access my running GitLab instance? When I go to the local network IP of my host, my browser tells me that it can't connect.
Here is what I'm hoping to achieve:
LOCAL NETWORK
+--------------------------------------------------------------------------+
| |
| +--------------------+ |
| | My_Server | |
| | | |
| | +----------------+ | |
| | | | | "https://my-hostname" +-------------------+ |
| | | Docker: Gitlab | <------------------------+ My_Client | |
| | | | | +-------------------+ |
| | +----------------+ | |
| | | |
| +--------------------+ |
| |
+--------------------------------------------------------------------------+
The ports part of your configuration maps the host's ports to the container's ports.
So if you have
ports:
- '7780:80'
- '7443:443'
- '7722:22'
that is redirecting port 7780 on your host to port 80 in your container, and so forth. With this knowledge you should be able to access your container's services, first via the host's local IP address and the mapped port, and then via its hostname once local DNS resolves it.
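For example, assuming the server's LAN address is 192.168.1.42 (substitute your host's real IP), the published ports from your compose file would be reached like this:

# HTTPS was published as 7443:443, so the port must appear in the URL
# (-k because the certificate is likely not trusted yet):
curl -kI https://192.168.1.42:7443
# or in a browser: https://192.168.1.42:7443
# SSH for git operations was published as 7722:22:
ssh -p 7722 git@192.168.1.42

Note that your external_url contains no port, so the links GitLab generates will omit :7443; if you keep non-standard host ports, expect to add the port manually.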
Goal:
I'd like to be able to ping and access the Docker client containers from my host network. And if possible, I'd like to have as much as possible configured in my docker-compose.yml.
Remark:
ICMP (ping) is just used for simplicity. Actually, I'd like to access SSH on port 22 and some other ports. Mapping ports is my current solution, but since I have many Docker client containers it becomes messy.
 ___________          ___________          ___________
| host      |        | docker    |        | docker    |
| client    |        | host      |        | client    |
| ..16.50   | <----> | ..16.10   |        |           |
|           |        | ..20.1    | <----> | ..20.5    |
|           |        |___________|        |           |
|___________| <------ not working ------> |___________|
Problem:
I am able to ping my Docker host from the Docker clients and from the host clients, but I cannot ping the Docker clients from the host clients.
This is my configuration on Ubuntu 22.04:
docker host: 192.168.16.10/24
host client: 192.168.16.50/24
default gw of the host network: 192.168.16.1/24
docker client (container): 192.168.20.5/24
docker-compose.yml
version: '3'

networks:
  ipvlan20:
    name: ipvlan20
    driver: ipvlan
    driver_opts:
      parent: enp3s0.20
      com.docker.network.bridge.name: br-ipvlan20
      ipvlan-mode: l3
    ipam:
      config:
        - subnet: "192.168.20.0/24"
          gateway: "192.168.20.1"

services:
  portainer:
    image: alpine
    hostname: ipvlan20
    container_name: ipvlan20
    restart: always
    command: ["sleep", "infinity"]
    dns: 192.168.16.1
    networks:
      ipvlan20:
        ipv4_address: 192.168.20.5
On my Docker host, I added the following link with the VLAN gateway IP:
ip link add myipvlan20 link enp3s0.20 type ipvlan mode l3
ip addr add 192.168.20.1/24 dev myipvlan20
ip link set myipvlan20 up
And on my host client, I added a route to the Docker host for the Docker client network:
ip route add 192.168.20.0/24 via 192.168.16.10
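One thing worth verifying (an assumption on my part, but required for any setup where the Docker host routes between 192.168.16.0/24 and 192.168.20.0/24): IPv4 forwarding must be enabled on the Docker host, or it will silently drop the forwarded packets.

# on the docker host
sysctl net.ipv4.ip_forward        # should report net.ipv4.ip_forward = 1
sysctl -w net.ipv4.ip_forward=1   # enable it for the current boot if it is 0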
I also tried:
Do I have to use macvlan? I tried that, but was also unsuccessful.
Do I have to use l3 mode? I tried l2 mode as well, but that was unsuccessful too.
For a test project, I want to have the following network connectivity:
+---------------------------+
| redis-service-1 in docker |
+-----------+---+-----------+
            | B |
+--------+  | R |
|  Host  |->| I |
+--------+  | D |
            | G |
            | E |
+-----------+---+-----------+
| redis-service-2 in docker |
+---------------------------+
What I want to achieve is for my app running on the host to be able to connect to both of the Redis services running in the two Docker containers, using the DNS names provided by Docker.
docker-compose.yml looks like the following:
version: '3.6'

services:
  redis-service-1:
    image: redis:5.0
    networks:
      - overlay
  redis-service-2:
    image: redis:5.0
    networks:
      - overlay
Ideally, after docker-compose up, I want to be able to ping both of the containers from the host as follows:
> redis-cli -h redis-service-1 ping
> redis-cli -h redis-service-2 ping
But I am unable to connect to these containers, or to the Redis instances inside them, from the host machine.
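Docker's service-name DNS is only available to containers attached to the same network, not to processes on the host, so the commands above cannot work as written. One common workaround (a sketch only, with hypothetical host-port choices) is to publish distinct host ports and alias the service names in /etc/hosts:

services:
  redis-service-1:
    image: redis:5.0
    ports:
      - "6379:6379"   # host port 6379 -> container port 6379
  redis-service-2:
    image: redis:5.0
    ports:
      - "6380:6379"   # a second, distinct host port

# then, in /etc/hosts on the host:
#   127.0.0.1 redis-service-1 redis-service-2
# and connect with, e.g.: redis-cli -h redis-service-2 -p 6380 ping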
[With the help of Siyu's comment I could fix the problems; additionally, I needed to set an entrypoint in the labels. I have added my corrected docker-compose.yaml, which was all I needed to fix it.]
Currently I have reconfigured my Synology workstation to handle HTTPS traffic with Traefik.
I want to serve Docker containers with Traefik and still provide the web interface of the Synology workstation via HTTP (using Traefik also as an SSL offloader). Traefik now has the problem of handling two provider backends: one being the "original" Synology web server, and the other the Docker containers, which come and go.
The current setup works for providing "test.com" (the Synology DSM web interface). But if I try to access a container with "/dashboard", it just gives me a 404.
How can I set this up so that both backends (Docker + the web server outside Docker) are served?
Datapoints
The Docker interface is recognized, and the labels (see below) are read by Traefik (this can be seen in the logs)
The Synology nginx runs outside of Docker (not as a container!)
The whole Synology workstation serves in an IPv4/IPv6 environment (both)
The Synology nginx was modified not to serve on the standard HTTP/HTTPS ports (where it only redirects to ports 5000/5001, as I can see in the nginx configuration)
Intended setup which should be served:
Notice that the original Synology is a catch-all domain (/*).
+----------------------------------------------------------------------
| Synology Workstation
|
|           +--------------------------------------------------------+
|           | Docker                                                 |
|           |           +---------+          +-------------------+   |
|-->HTTPS-->|-->HTTPS-->| Traefik |-->HTTP-->| test.com/dashboard|   |
|  443:443  |           +----+----+          +-------------------+   |
|           |                |                                       |
|           |                |               +-------------------+   |
|           |                +----HTTP------>| test.com/stats    |   |
|           |                |               +-------------------+   |
|           +----------------|---------------------------------------+
|                            |               +---------------------+
|                            +----HTTP------>| test.com/*          |
|                                            | (nginx of synology) |
|                                            +---------------------+
+----------------------------------------------------------------------
The traefik.toml looks like this:
debug = true
logLevel = "DEBUG"
# top-level keys must come before the first table, otherwise TOML
# attaches them to the preceding table
defaultEntryPoints = ["http", "https"]

[traefikLog]
filePath = "/etc/traefik/traefik.log"

[accessLog]
filePath = "/etc/traefik/access.log"

[entryPoints]
  [entryPoints.http]
  address = ":80"
    [entryPoints.http.redirect]
    entryPoint = "https"
  [entryPoints.https]
  address = ":443"
    [entryPoints.https.tls]
      [[entryPoints.https.tls.certificates]]
      certFile = "/etc/pki/tls/certs/test.com.crt"
      keyFile = "/etc/pki/tls/private/test.com.key"

[backends]
  [backends.wbackend]
    [backends.wbackend.servers.server]
    url = "http://workstation.test.com:5000"
    #weight = 10

[frontends]
  [frontends.workstation]
  backend = "wbackend"
  passHostHeader = true
  entrypoints = ["https"]
    [frontends.workstation.routes.route1]
    rule = "Host:workstation.test.com"

# You MUST ADD [file], otherwise traefik does not parse the frontend rules above
[file]

[docker]
endpoint = "unix:///var/run/docker.sock"
Docker-compose snippet (see the labels, which map the domain):
---
version: '2'

services:
  traefik:
    # Check latest version: https://hub.docker.com/r/library/traefik/tags/
    image: traefik:1.7.6
    restart: unless-stopped
    container_name: traefik
    mem_limit: 300m
    #network_mode: host
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /volume1/container/traefik/etc/pki/tls/certs/workstation.test.com.crt:/etc/pki/tls/certs/workstation.test.com.crt
      - /volume1/container/traefik/etc/pki/tls/private/workstation.test.com.key:/etc/pki/tls/private/workstation.test.com.key
      - /volume1/container/traefik/etc/traefik:/etc/traefik
    ports:
      - "80:80"
      - "443:443"
    labels:
      - traefik.stat.frontend.rule=Host:workstation.test.com;Path:/dashboard
      - traefik.stat.backend=traefik
      - traefik.stat.frontend.entryPoints=https
      - traefik.stat.frontend.rule=Host:workstation.test.com;PathPrefixStrip:/dashboard
      - traefik.stat.port=8080
A few problems with your config:
your toml is not passed in
the api is not enabled
the backend is missing in the labels
you should use PathPrefixStrip
Try
volumes:
  - /var/run/docker.sock:/var/run/docker.sock
  - /path/to/traefik.toml:/etc/traefik/traefik.toml
command: --api
ports:
  - "80:80"
  - "443:443"
  - "8080:8080"  # helps you debug
labels:
  - traefik.backend=traefik
  - "traefik.frontend.rule=PathPrefixStrip:/dashboard/;Host:test.io"
  - traefik.port=8080
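With --api enabled and 8080 published, you can quickly confirm what Traefik registered (hostnames here assume the labels above; adjust to your setup):

# Traefik 1.7 API: lists the frontends/backends it discovered
curl http://localhost:8080/api/providers
# and the stripped-prefix route itself:
curl -k https://test.io/dashboard/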
Description
I'm trying to create a Redis cluster in a Docker swarm. I'm using the bitnami-redis-docker image for creating my containers. Going through the Bitnami documentation, they always suggest using 1 master node, as opposed to the Redis documentation, which states that there should be at least 3 master nodes; this is why I'm confused as to which one is right. Given that all Bitnami slaves are read-only by default: if I set up only a single master on one of the swarm leader nodes and it fails, I believe Sentinel will try to promote a different slave Redis instance as master, but given that it is read-only, all write operations will fail. If I instead make the master Redis instance global, meaning that it will be created on all of the nodes available in the swarm, do I require Sentinel at all? Also, if the setup below is a good one, is there a reason to introduce a load balancer?
Setup
+------------------+ +------------------+ +------------------+ +------------------+
| Node-1 | | Node-2 | | Node-3 | | Node-4 |
| Leader | | Worker | | Leader | | Worker |
+------------------+ +------------------+ +------------------+ +------------------+
| M1 | | M2 | | M3 | | M4 |
| R1 | | R2 | | R3 | | R4 |
| S1 | | S2 | | S3 | | S4 |
| | | | | | | |
+------------------+ +------------------+ +------------------+ +------------------+
Legend:
Masters are called M1, M2, M3, ..., Mn
Slaves are called R1, R2, R3, ..., Rn (R stands for replica).
Sentinels are called S1, S2, S3, ..., Sn
Docker
version: '3'

services:
  redis-master:
    image: 'bitnami/redis:latest'
    ports:
      - '6379:6379'
    environment:
      - REDIS_REPLICATION_MODE=master
      - REDIS_PASSWORD=laSQL2019
      - REDIS_EXTRA_FLAGS=--maxmemory 100mb
    volumes:
      - 'redis-master-volume:/bitnami'
    deploy:
      mode: global

  redis-slave:
    image: 'bitnami/redis:latest'
    ports:
      - '6379'
    depends_on:
      - redis-master
    volumes:
      - 'redis-slave-volume:/bitnami'
    environment:
      - REDIS_REPLICATION_MODE=slave
      - REDIS_MASTER_HOST=redis-master
      - REDIS_MASTER_PORT_NUMBER=6379
      - REDIS_MASTER_PASSWORD=laSQL2019
      - REDIS_PASSWORD=laSQL2019
      - REDIS_EXTRA_FLAGS=--maxmemory 100mb
    deploy:
      mode: replicated
      replicas: 4

  redis-sentinel:
    image: 'bitnami/redis:latest'
    ports:
      - '16379'
    depends_on:
      - redis-master
      - redis-slave
    volumes:
      - 'redis-sentinel-volume:/bitnami'
    entrypoint: |
      bash -c 'bash -s <<EOF
      "/bin/bash" -c "cat <<EOF > /opt/bitnami/redis/etc/sentinel.conf
      port 16379
      dir /tmp
      sentinel monitor master-node redis-master 6379 2
      sentinel down-after-milliseconds master-node 5000
      sentinel parallel-syncs master-node 1
      sentinel failover-timeout master-node 5000
      sentinel auth-pass master-node laSQL2019
      sentinel announce-ip redis-sentinel
      sentinel announce-port 16379
      EOF"
      "/bin/bash" -c "redis-sentinel /opt/bitnami/redis/etc/sentinel.conf"
      EOF'
    deploy:
      mode: global

volumes:
  redis-master-volume:
    driver: local
  redis-slave-volume:
    driver: local
  redis-sentinel-volume:
    driver: local
The Bitnami solution is a failover solution, hence it has one master node.
Sentinel is an HA solution, i.e. automatic failover. But it does not provide scalability in terms of distributing data across multiple nodes. You would need to set up clustering if you want 'sharding' in addition to 'HA'.
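For completeness, Redis Cluster (the sharding mode) is bootstrapped with redis-cli rather than with Sentinel; a minimal sketch, assuming six reachable instances with hypothetical hostnames and your existing password:

redis-cli -a laSQL2019 --cluster create \
  node1:6379 node2:6379 node3:6379 \
  node4:6379 node5:6379 node6:6379 \
  --cluster-replicas 1

This yields three masters with one replica each, which matches the "at least 3 master nodes" guidance from the Redis documentation.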
I'm looking at application deployment with Docker containers for production on a few servers (not hundreds).
I can see some deployment managers, like docker-compose, which deploy according to a YAML service description file.
Official docker-compose.yml example file:
web:
  build: .
  ports:
    - "5000:5000"
  volumes:
    - .:/code
  links:
    - redis
redis:
  image: redis
I'm looking for a solution to manage/produce these YAML files and to communicate with deployment managers like docker-compose.
This solution should permit managing application templates, deployed instances of them, their configuration, etc.
Illustration of it:
                                               Docker
                    docker-compose.yml   +-------------------+
+---------------+      +-------+         |    containers     |
| APP manager   |----->|Mysql_a|         | +-------+-------+ |
|               |      |Mysql_b|-------->| |MySQL_a|Mysql_b| |
| MySQL Tpl     |      |Mysql_c| docker- | +-------+-------+ |
| Wordpress tpl |      |Wp_a   | compose | |Mysql_c|Wp_a   | |
|               |      +-------+         | +-------+-------+ |
| Mysql_a       |                        +-------------------+
| Mysql_b       |
| Mysql_c       |
| Wp_a          |
+---------------+
My first thought is Panamax, but is it appropriate? What other open source solutions exist?
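Short of a full manager, even a thin templating layer can play the "APP manager" role sketched above; a minimal example using envsubst from gettext (the template file name and variables are hypothetical):

# mysql.tpl.yml contains placeholders such as ${NAME} and ${ROOT_PW}
mkdir -p Mysql_a
NAME=Mysql_a ROOT_PW=secret envsubst < mysql.tpl.yml > Mysql_a/docker-compose.yml
docker-compose -f Mysql_a/docker-compose.yml up -d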