Cannot run GitLab Docker image: ports already in use - docker

I'm trying to run a GitLab Docker image, but I'm running into trouble with ports that are already in use.
ERROR: for gitlab_web_1 Cannot start service web: driver failed
programming external connectivity on endpoint gitlab_web_1
(a22b149b76f705ec3e00c7ec4f6bcad8f0e1b575aba1dbf621c4edcc4d4e5508):
Error starting userland proxy: listen tcp 0.0.0.0:22: bind: address
already in use
Here is my docker-compose.yml:
web:
  image: 'gitlab/gitlab-ee:latest'
  restart: always
  hostname: 'gitlab.example.com'
  environment:
    GITLAB_OMNIBUS_CONFIG: |
      external_url 'https://gitlab.example.com'
      # Add any other gitlab.rb configuration here, each on its own line
  ports:
    - '80:80'
    - '443:443'
    - '22:22'
  volumes:
    - '$GITLAB_HOME/config:/etc/gitlab'
    - '$GITLAB_HOME/logs:/var/log/gitlab'
    - '$GITLAB_HOME/data:/var/opt/gitlab'
I previously had the same error message for ports 80 and 443.
To fix it, I removed Apache from my server.
But I need port 22 for SSH connections, so I don't know how to work around this one...
Is it possible to have Apache and a Docker container listening on the same ports?
Why does gitlab/gitlab-ee need port 22?

A friend told me about Traefik, which should answer my needs: https://docs.traefik.io/.
Another solution would be to create as many VirtualHosts as needed in Apache and proxy them to the local Docker ports.

GitLab needs port 22 because it's the default port for SSH connections, which are used to push and pull repositories.
Because there are two different protocols in this one question, they have very different solutions.
SSH ports
To get around this, I followed the steps here, which explain how to update the /etc/gitlab/gitlab.rb file to change the default listening port to something of your choosing (2289 in the example).
Notice that, once the change is applied, the "Clone with SSH" string shown when you clone a repo changes to include this custom port.
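For reference, here is a minimal sketch of what that change can look like directly in the question's docker-compose.yml, assuming port 2289 as in the linked steps (gitlab_rails['gitlab_shell_ssh_port'] only changes the port advertised in clone URLs; the remapped ports entry is what actually frees up the host's port 22):
web:
  image: 'gitlab/gitlab-ee:latest'
  environment:
    GITLAB_OMNIBUS_CONFIG: |
      external_url 'https://gitlab.example.com'
      # advertise the custom SSH port in "Clone with SSH" URLs
      gitlab_rails['gitlab_shell_ssh_port'] = 2289
  ports:
    - '80:80'
    - '443:443'
    # host port 2289 forwards to the container's sshd on 22,
    # so the host's own sshd keeps port 22
    - '2289:22'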
Apache ports
AFAIK it's not possible to have two processes listening on the same port. Because of this, I publish different ports for the container (e.g. 8080 and 8443) and use Apache with a virtual host and a proxy to make it behave the way users expect. This does assume you have control over your DNS.
This allows me to have several containers, each publishing different ports, while Apache listens on ports 80/443 and acts as a proxy for those containers.
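To sketch the port side of that setup (8080/8443 are just the example numbers mentioned above), the container publishes alternative ports and Apache keeps 80/443 on the host:
web:
  image: 'gitlab/gitlab-ee:latest'
  ports:
    # Apache owns the host's 80/443 and reverse-proxies to these
    - '8080:80'
    - '8443:443'
    - '2289:22'
The Apache virtual host for gitlab.example.com then simply proxies requests through to http://127.0.0.1:8080/.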

Related

Migrate docker-compose to a single node docker-swarm cluster

At the moment I have a Flask application connected to a MySQL database, and the entire implementation runs on a single web server.
To avoid exposing my app publicly, I am running it on the localhost interface of the server, and I am only exposing the public interface (port 443) via an HAProxy that redirects the traffic to the localhost interface.
The docker-compose and HAProxy configurations can be found below.
docker-compose:
version: '3.1'
services:
  db:
    image: mysql:latest
    volumes:
      - mysql-volume:/var/lib/mysql
    container_name: mysql
    ports:
      - 127.0.0.1:3306:3306
    environment:
      MYSQL_ROOT_PASSWORD: xxxxxx
  app:
    #environment:
    #  - ENVIRONMENT=stage
    #  - BUILD_DATETIME=$(date +'%Y-%m-%d %H:%M:%S')
    build:
      context: .
      dockerfile: Dockerfile
    #labels:
    #  - "build_datetime=${BUILD_DATETIME}"
    container_name: stage_backend
    ports:
      - 127.0.0.1:5000:5000
volumes:
  mysql-volume:
    driver: local
sample haproxy configuration:
global
    log /dev/log local0
    log /dev/log local1 notice
    # Default SSL material locations
    ca-base /etc/ssl/certs
    crt-base /etc/ssl/private

defaults
    log global
    mode http
    option httplog
    option dontlognull
    timeout connect 10s
    timeout client 30s
    timeout server 30s

frontend test
    bind *:80
    bind *:443 ssl crt /etc/letsencrypt/live/testdomain.com/haproxy.pem alpn h2,http/1.1
    redirect scheme https if !{ ssl_fc }
    mode http
    acl domain_testdomain hdr_beg(host) -i testdomain.com
    use_backend web_servers if domain_testdomain

backend web_servers
    timeout connect 10s
    timeout server 100s
    balance roundrobin
    mode http
    server test_server 127.0.0.1:5000 check
So haproxy is running on the public interface as a service via systemd (not containerized) and containers are running on localhost.
This is going to become a production setup soon, so I want to deploy a single-node Docker Swarm cluster on that server only, as a Docker Swarm deployment is safer in a production environment.
My question is how I can deploy this on Docker Swarm.
Does it make sense to leave HAProxy as a systemd service and somehow make it forward requests to the Docker Swarm cluster?
Or is it an easier/better implementation to also containerize HAProxy and put it inside the cluster as a docker-compose service?
If I follow the second approach, how can I make it run on a different interface than the application (HAProxy --> public, Flask & DB --> localhost)?
Again, I am talking about a single server here, which is why I am trying to separate the network interfaces and only expose HAProxy on 443 on the public interface.
Ideally I would rather not switch from HAProxy to an nginx reverse proxy, as I am familiar with HAProxy and with exactly how SSL termination works there, but I am open to hearing about any other implementation that makes more sense.
You seem to be overthinking things, and in the process throwing away security features that docker offers.
First off, Docker gives you private networking out of the box in both compose and swarm modes. An implicit network called <stack>_default is created, services are attached to it, and DNS resolution is set up in each container so that every service name resolves.
So, assuming your app and db don't explicitly declare any networks, the following implicit declarations apply, and your app can connect to the db directly using the connection string mysql://db:3306.
The db container does not need to publish, or try to protect, access to this port; only other containers attached to the [stack_]default network will have access.
networks:
  default: # implicit
services:
  app:
    networks:
      default: # implicit
    environment:
      MYSQL: mysql://db:3306
  db:
    networks:
      default: # implicit
At this point, it's your choice whether to run HAProxy as a service. Personally I would (and do). It is handy in swarm to have a single service that handles ingress on :80 and :443, does SSL offloading, and then uses Docker networks to direct traffic to other services on whatever service:port handles those connections.
I use Traefik rather than HAProxy, as it can use service labels to route traffic dynamically, but either way, having HAProxy as a service means that, if you continue to use it, you can more easily deploy HAProxy config updates.
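As a hedged sketch of that ingress idea for the stack in the question (the HAProxy image tag and the assumption that the Flask image is called stage_backend are mine; all services share the implicit default network, so HAProxy reaches the app by service name):
version: "3.8"
services:
  haproxy:
    image: haproxy:2.8                     # assumed tag
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro
  app:
    image: stage_backend:latest            # assumed image name for the Flask app
    # no published ports: only HAProxy is reachable from outside
  db:
    image: mysql:latest
    volumes:
      - mysql-volume:/var/lib/mysql
volumes:
  mysql-volume:
In haproxy.cfg the backend line then becomes server test_server app:5000 check, since app resolves on the stack's default network, and the stack can be deployed with docker stack deploy -c docker-compose.yml <stack-name>.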

Docker-compose: Docker containers can't connect using service names

I have 3 containers. One is a lighttpd server serving static content (front), and I have 2 Flask servers handling the backend (back and model).
This is my docker-compose.yml
version: "3"
services:
front:
image: ecd3:latest
ports:
- 4200:80
tty: true
links:
- "back"
depends_on:
- back
networks:
- mynet
back:
image: esd3:latest
ports:
- 5000:5000
links:
- "model"
depends_on:
- model
networks:
- mynet
model:
image: mok:latest
ports:
- 5001:5001
networks:
- mynet
networks:
mynet:
I'm trying to send an HTTP request to my Flask server (back) from my frontend (front). I have bound the Flask server to 0.0.0.0 and even used the service name in the frontend (http://back:5000/endpoint).
Trying to curl the Flask server from inside the frontend container (curl back:5000) gives me this:
curl: (52) Empty reply from server
Pinging the Flask server from inside the frontend container works, so the connection must have been established.
Why can't I connect to my Flask server from my frontend?
We discovered several things in the comments. Firstly, that you had a proxy problem that prevented one container using the API in another container.
Secondly, and critically, you discovered that the service names in your Docker Compose configuration file are made available in the virtual networking system set up by Docker. So, you can ping front from back and vice-versa. Importantly, it's worth noting that you can do this because they are on the same virtual network, mynet. If they were on different Docker networks, then by design the DNS names would not be available, and the virtual container IP addresses would not be reachable.
Incidentally, since you have all of your containers on the same network, and you have not changed any network settings, you could drop this network for now. In other words, you can remove the networks definition and the three container references to it, since they can just join the default network instead.
Thirdly, you learned that Docker's virtual DNS entries are not made available on the host, and so front and back are not resolvable there. Even if they were (e.g. if manual entries were made in the hosts file), those IPs would not work, since there is no direct network route from the host to the containers.
Instead, those containers are exposed by a Docker proxy that forwards connections from a published localhost port down to the containers (4200, 5000 and 5001 in your case).
A good interim solution is to load your frontend at http://localhost:4200 and hardwire its API address as http://localhost:5000. You may have some CORS issues with that though, since browsers will see these as different servers.
Moreover, if you go live, you may have some problems with mobile networks and corporate firewalls - you will probably want your frontend app to sit on port 443, but since it is a separate server, you will either need a different IP address for your API, so it can also go on 443, or you will need to use another port. A clean solution for this is to put a frontend proxy in front of both containers, and then just expose the proxy in Docker. This will send HTTP requests from the outside to the correct container, depending on a filtering criteria set by you. I recommend Traefik for this, but there are undoubtedly several other approaches.
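A hedged sketch of that frontend-proxy layout with Traefik v2 (the image tag, the router rules and the assumption that the API lives under /api are mine; only the proxy publishes a host port):
version: "3"
services:
  proxy:
    image: traefik:v2.10
    command:
      - --providers.docker=true
      - --providers.docker.exposedbydefault=false
      - --entrypoints.web.address=:80
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
  front:
    image: ecd3:latest
    labels:
      - traefik.enable=true
      # catch-all rule for the static frontend
      - traefik.http.routers.front.rule=PathPrefix(`/`)
      - traefik.http.services.front.loadbalancer.server.port=80
  back:
    image: esd3:latest
    labels:
      - traefik.enable=true
      # API requests are routed to the Flask backend
      - traefik.http.routers.back.rule=PathPrefix(`/api`)
      - traefik.http.services.back.loadbalancer.server.port=5000
The browser then talks to a single origin on port 80 (or 443 once TLS is configured), which also removes the CORS and firewall concerns mentioned above.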

Failing consul health check on local machine

I have Consul running via docker using docker-compose
version: '3'
services:
  consul:
    image: unifio/consul:latest
    ports:
      - "8500:8500"
      - "8300:8300"
    volumes:
      - ./config:/config
      - ./.data/consul:/data
    command: agent -server -data-dir=/data -ui -bind 0.0.0.0 -client 0.0.0.0 -bootstrap-expect=1
  mongo:
    image: mongo
    ports:
      - "27017:27017"
      - "27018:27018"
    volumes:
      - ./.data/mongodb:/data/db
    command: mongod --bind_ip_all
and have a nodejs service running on port 6001 which exposes a /health endpoint for health checks.
I am able to register the service via this consul package.
However, visiting the Consul UI I can see that the service has a status of failing because the health check is not working.
The UI shows this message:
Get http://127.0.0.1:6001/health: dial tcp 127.0.0.1:6001: getsockopt: connection refused
I'm not sure exactly why it's not working, but I sense that I may have misconfigured Consul.
Any help would be great.
Consul is running in your docker container. When you use 127.0.0.1 in this container, it refers to itself, not to your host.
You need to use a host IP that is known to your container (and of course make sure your service is reachable and listening on this particular IP).
In most cases, you should be able to contact your host from a container through the default docker0 bridge IP, which you can get with ip addr show dev docker0 on your host, as outlined in this other answer.
The best solution IMO is to discover the gateway that your container is using, which will point to the particular bridge IP on your host (i.e. the bridge created for your docker-compose project when starting it). There are several methods you can use to discover this IP from the container, depending on the installed tooling and your Linux flavor.
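For example, a quick way to discover that gateway IP from inside the container (a sketch; it assumes the iproute2 tools are installed in the image):
# inside the container: the default gateway is the host-side bridge IP
# of the docker-compose network
ip route | awk '/^default/ { print $3 }'
Registering the health check against that address (e.g. http://<bridge-ip>:6001/health) instead of 127.0.0.1 lets the Consul agent reach the service running on the host.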
While Zeitounator's answer is perfectly fine and answers your direct question, the "indirect" solution to your problem would be to manage the nodejs service through docker-compose.
IMHO it's a good idea to manage all the services involved using the same tool, as their lifecycles are then aligned and it's also easy to configure them to talk to each other (at least that's the case for docker-compose).
Moreover, letting containers access services on the host is risky in terms of security. In production environments you usually want to shield host services from containers, as otherwise the containers lose their "containment" role.
So, in your case you would need to add the nodejs service to docker-compose.yml:
services:
  (...)
  nodejs-service:
    image: nodejs-service-image
    ports:
      - "6001:6001"    # only required if you need to expose the port on the host
    command: nodejs service.js
And then your Consul service would be able to access nodejs-service through http://nodejs-service:6001/health.
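Whichever Node client package does the registration, it ultimately calls the Consul agent's HTTP API; a rough equivalent with curl would look like this (the service name and check interval are assumptions):
# register the service and its HTTP health check with the Consul agent
curl -X PUT http://localhost:8500/v1/agent/service/register \
  -d '{
        "Name": "nodejs-service",
        "Port": 6001,
        "Check": {
          "HTTP": "http://nodejs-service:6001/health",
          "Interval": "10s"
        }
      }'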

How to host multiple environments for a project using docker in the same machine?

I have a typical web stack that consists of nginx + django + database components.
I have set them up in different docker containers with docker-compose and it's running fine.
services:
  billing_app_dev:
    image: jameslin/billing_app:latest
    expose:
      - 8000
  billing_postgres:
    image: postgres:10.5
    restart: always
    volumes:
      - pg_data:/var/lib/postgresql/data
    environment:
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
  billing_nginx:
    image: jameslin/billing_nginx:${TAG}
    volumes:
      - app_files:/static
    links:
      - 'billing_app'
    ports:
      - 80:80
Now I am wondering how I can set up DEV and QA environments on a single machine. I can change the Django and database containers to listen on different ports, but it looks like I cannot run the nginx containers independently, since port 80 can only be listened on by one container.
I would have to share the nginx container between those 2 environments, which doesn't seem very clean.
Are there any better ideas, if running 2 VMs is not possible?
I have 3 Apache containers and 1 nginx container running on the same server, so I'm pretty sure this is not an issue.
For each stack of web server + database I have a different docker-compose file. This way Docker will create a different network for each stack, avoiding possible port conflicts, and you will only have to bind your nginx instances to different ports on your server, because you can only bind one service to a given host port. Still, each container is a separate "machine", so even on the same network they can use the same port internally.
If you really need to run all your services on ports 80 and 443 of your server, you may need to put an nginx instance on those ports as a reverse proxy, calling those services over the internal Docker network. That is an option, but I have never tried it over the Docker internal network before.
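As a hedged sketch of that per-environment split, using the question's nginx service (the file names, project names and the 8080/8081 host ports are assumptions):
# docker-compose.dev.yml
services:
  billing_nginx:
    image: jameslin/billing_nginx:${TAG}
    ports:
      - 8080:80

# docker-compose.qa.yml
services:
  billing_nginx:
    image: jameslin/billing_nginx:${TAG}
    ports:
      - 8081:80
Starting them as separate projects, e.g. docker-compose -p billing_dev -f docker-compose.dev.yml up -d and docker-compose -p billing_qa -f docker-compose.qa.yml up -d, gives each environment its own network, so the Django and database containers don't need any remapped ports at all.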
I think what you need is a virtual IP, also called IP aliasing. Even if you just have one network card, you can still set 2 IPs on it.
Then you can run 2 different nginx containers on the host, using different IPs but the same port 80.
Something like the following:
cd /etc/sysconfig/network-scripts/
cp ifcfg-eth0 ifcfg-eth0:1
vi ifcfg-eth0:1

# Intel Corporation 82545EM Gigabit Ethernet Controller (Copper)
DEVICE=eth0:1                  # ----> sub network card
HWADDR=00:0C:29:45:62:3B
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.109.108         # ----> configure a new, different IP
NETMASK=255.255.255.0
For details, see Create Multiple IP Addresses on One Single Network Interface.
For nginx, per the nginx guide, you have to change your nginx Docker setup so that listen 80 becomes listen your_ip:80; then it will not listen on all IP addresses.
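With a containerized nginx, the same effect can also be achieved at the Docker level by binding each environment's published port to one of the host IPs instead of editing the listen directive; a sketch (192.168.109.108 is the alias configured above, the primary address is an assumption):
# DEV compose file: bind to the primary host IP (assumed)
services:
  billing_nginx:
    ports:
      - 192.168.109.107:80:80

# QA compose file: bind to the alias IP added above
services:
  billing_nginx:
    ports:
      - 192.168.109.108:80:80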

How can I make docker-compose bind the containers only on defined network instead of 0.0.0.0?

In recent versions docker-compose automatically creates a new network for the services it creates. Basically, every docker-compose setup is getting its own IP range, so that in theory I could call my services on the network's IP address with the predefined ports. This is great when developing multiple projects at the same time, since there is then no need to change the ports in docker-compose.yml (i.e. I can run multiple nginx projects at the same time on port 8080 on different interfaces)
However, this does not work as intended: every exposed port is still exposed on 0.0.0.0 and thus there are port conflicts with multiple projects. It is possible to put the bind IP into docker-compose.yml, however this is a killer for portability -- not every developer on the team uses the same OS or works on the same projects, therefore it's not clear which IP to configure.
It would be great to define the IP to bind the containers to in terms of the network created for this particular project. docker-compose knows both which network it created and its IP, so this shouldn't be a problem; however, I couldn't find an easy way to do it. Is there a way, or is this something yet to be implemented?
EDIT: An example of a port conflict: imagine two projects, each with an application server running on port 8080 and a MySQL database running on port 3306, exposed as "8080:8080" and "3306:3306" respectively. Running the first one with docker-compose creates a network called something like app1_network with an IP range of 172.18.0.0/16. Every exposed port is exposed on 0.0.0.0, i.e. on 127.0.0.1, on the WAN address, on the default bridge (172.17.0.0/16) and also on 172.18.0.0/16. In this case I can reach my application server on any of 127.0.0.1:8080, 172.17.0.1:8080, 172.18.0.1:8080 and also on $WAN_IP:8080. If I start the second application now, it starts a second network app2_network 172.19.0.0/16, but still tries to bind every exposed port on all interfaces. Those ports are of course already taken (except for 172.19.0.1). If there were a way to restrict each application to its own network, application 1 would have been available at 172.18.0.1:8080 and the second at 172.19.0.1:8080, and I wouldn't need to change the port mappings to 8081 and 3307 respectively to run both applications at the same time.
In your service configuration, in docker-compose.yml:
ports:
  - "127.0.0.1:8001:8001"
Reference: https://github.com/compose-spec/compose-spec/blob/master/spec.md#ports
You can publish a port to a single IP address on the host by including the IP before the ports:
docker run -p 127.0.0.1:80:80 -d nginx
The above runs nginx on the loopback interface. You can use a similar port mapping inside of a docker-compose.yml file. e.g.:
ports:
  - "127.0.0.1:80:80"
docker-compose doesn't have any special ability to infer which network interface to use based on the Docker network. You'd need to specify the unique IP address to use in each compose file, and that IP needs to belong to a network interface on your host. For a developer machine, that IP may change as DHCP gives the laptop/workstation new addresses.
Because of the difficulty of implementing your goal, most people either map different host ports to different containers (13307:3307 for container a, 23307:3307 for container b, 33307:3307 for container c, or whatever numbering scheme makes sense for you), or, when dealing with HTTP traffic, use a reverse proxy like Traefik, which often makes the most sense.
This can be achieved by configuring the network in the docker-compose file.
Please consider the two docker-compose files below. There is still the drawback of needing to specify a subnet that is unique across all projects you work on at the same time. On the other hand, you need to know which service you are connecting to, which is why it cannot be assigned dynamically.
my-project.yaml:
services:
  nginx:
    networks:
      - my-project-network
    image: nginx
    ports:
      - 80:80
networks:
  my-project-network:
    driver_opts:
      com.docker.network.bridge.host_binding_ipv4: "172.20.0.1"
    ipam:
      config:
        - subnet: "172.20.0.0/16"
my-other-project.yaml
services:
  nginx:
    networks:
      - my-other-project-network
    image: nginx
    ports:
      - 80:80
networks:
  my-other-project-network:
    driver_opts:
      com.docker.network.bridge.host_binding_ipv4: "172.21.0.1"
    ipam:
      config:
        - subnet: "172.21.0.0/16"
Note: if you have another service binding to *:80, for instance Apache running on the host, it will also bind on the docker-compose networks' interfaces and you will not be able to use this port.
To run above two services:
docker-compose -f my-project.yaml up -d
docker-compose -f my-other-project.yaml up -d
