Gitlab links to "https://gitlab/" - docker

I installed GitLab in a Docker container from the official image gitlab/gitlab-ce:latest. This image keeps all of its configuration in the file gitlab.rb. HTTPS is handled by an nginx reverse proxy.
My problem is that whenever GitLab renders an absolute link to itself, it links to https://gitlab/. This host can also be seen in the "New group" dialog:
Docker call:
docker run \
--name git \
--net mydockernet \
--ip 172.18.0.2 \
--hostname git.mydomain.com \
--restart=always \
-p 766:22 \
-v /docker/gitlab/config:/etc/gitlab \
-v /docker/gitlab/logs:/var/log/gitlab \
-v /docker/gitlab/data:/var/opt/gitlab \
-d \
gitlab/gitlab-ce:latest
gitlab.rb:
external_url 'https://git.mydomain.com'
ci_nginx['enable'] = false
nginx['listen_port'] = 80
nginx['listen_https'] = false
gitlab_rails['gitlab_shell_ssh_port'] = 766
Nginx config:
upstream gitlab {
    server 172.18.0.2;
}
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name git.mydomain.com;
    ssl on;
    ssl_certificate /etc/ssl/certs/mydomain.com.chained.pem;
    ssl_certificate_key /etc/ssl/private/my.key;
    location / {
        proxy_pass http://gitlab/;
        proxy_read_timeout 10;
    }
}
I tried to replace the wrong URL with nginx. This worked for the appearance, as in the screenshot, but not for the links:
sub_filter 'https://gitlab/' 'https://$host/';
sub_filter_once off;

You've set your URL correctly in your mapped volume /etc/gitlab/gitlab.rb:
external_url 'https://git.mydomain.com'
Run sudo gitlab-ctl reconfigure for the change to take effect.
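Since GitLab runs inside a container here, the command has to be executed in the container; assuming the container name git from the docker run above:
docker exec -it git gitlab-ctl reconfigure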

Problem solved. It was in the nginx reverse proxy config. I had named the upstream "gitlab", and somehow this name found its way into the web pages:
upstream gitlab {
    server 172.18.0.2;
}
...
proxy_pass http://gitlab/;
I didn't expect that. I had even omitted the upstream part in my original question because I thought the upstream name was just an internal name that would be replaced by the defined IP.
So the fix for me is to write the full domain into the upstream:
upstream git.mydomain.com {
    server 172.18.0.2;
}
...
proxy_pass http://git.mydomain.com/;
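An alternative fix, for what it's worth, would be to keep the upstream name and forward the original Host header explicitly: nginx defaults to proxy_set_header Host $proxy_host;, and $proxy_host here is the upstream name "gitlab", which GitLab apparently uses when building absolute URLs. A sketch, assuming the config above:
upstream gitlab {
    server 172.18.0.2;
}
...
location / {
    proxy_pass http://gitlab/;
    # forward the browser's Host header instead of the upstream name
    proxy_set_header Host $host;
    proxy_read_timeout 10;
}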

Related

I have to enter my login twice and ssh-key doesn't work on a dockerized gitlab

I launch gitlab with this command:
sudo docker run --detach \
--hostname example.com \
--publish 4433:443 --publish 8080:80 --publish 2222:22 \
--name gitlab --restart always \
--volume /data/gitlab/config:/etc/gitlab \
--volume /data/gitlab/logs:/var/log/gitlab \
--volume /data/gitlab/data:/var/opt/gitlab \
gitlab/gitlab-ce:latest
example.com being another URL, as you may have guessed.
I have an nginx server with this config:
server {
    server_name example.com;
    client_max_body_size 50m;
    location / {
        proxy_pass http://127.0.0.1:8080/;
        proxy_set_header Host $http_host;
        proxy_http_version 1.1;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Real-IP $remote_addr;
    }
    listen 443 ssl;
    ssl_certificate [MY PATH TO THE .pem FILE];
    ssl_certificate_key [OTHER PATH];
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
}
server {
    if ($host = example.com) {
        return 301 https://$host$request_uri;
    }
}
When I use HTTPS to clone and push repos, I have to enter my login/password twice, and when I use SSH (for example git clone git@example.com:myuser/myproject.git), it asks me for a password.
I triple checked, my ssh key configuration is correct.
I left the gitlab.rb config by default, except for this line:
external_url 'https://example.com'
What happens here?
For this particular key, I don't use a passphrase
That means SSH fails to authenticate to example.com as git with your key, and falls back to password authentication for the git user (whose password you are not supposed to have).
Using the port syntax HOST_PORT:CONTAINER_PORT, you are supposed to launch your GitLab Docker container with a host port (for instance 2222) mapped to GitLab's internal SSH daemon (port 22):
sudo docker run [...] -p 2222:22
Then check it is working with:
ssh -T git@example.com -p 2222
Welcome to GitLab, @you!
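Note that the scp-style URL from the question (git@example.com:myuser/myproject.git) cannot carry a port; with a non-standard SSH port you need the ssh:// form:
git clone ssh://git@example.com:2222/myuser/myproject.git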
With a ~/.ssh/config file, it is easier:
Host gl
    Hostname example.com
    Port 2222
    User git
    IdentityFile ~/.ssh/yourGitLabkey
Then:
ssh -T gl
Welcome to GitLab, @you!
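With that alias in place, cloning becomes (assuming the repository path from the question):
git clone gl:myuser/myproject.git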
See as examples this thread, or this thread, based on the official documentation "Install GitLab using Docker Compose", mentioned by issue 1767.

failed (111: Connection refused) while connecting to upstream -- nginx/gunicorn connection via 127.0.0.1 [duplicate]

This question already has answers here:
Docker-compose/Nginx/Gunicorn - Nginx not working
How to communicate between Docker containers via "hostname"
Below are the important details:
Dockerfile for nginx build
FROM nginx:latest
EXPOSE 443
COPY nginx.conf /etc/nginx/nginx.conf
nginx.conf
events {}
http {
    client_max_body_size 1000M;
    server {
        server_name _;
        location / {
            proxy_pass http://127.0.0.1:8000/;
            proxy_set_header Host $host;
        }
        listen 443 ssl;
        ssl_certificate cert/name.crt;
        ssl_certificate_key cert/name.key;
    }
}
Nginx docker command
docker run -dit -p 0.0.0.0:443:443 -v /etc/cert/:/etc/nginx/cert <MY NGINX CONTAINER> nginx -g 'daemon off;'
Docker command to start gunicorn server
docker run -dit -p 127.0.0.1:8000:8000 <My FASTAPI CONTAINER> gunicorn -w 3 -k uvicorn.workers.UvicornWorker -b 127.0.0.1:8000 server:app
Other details:
I expose port 8000 in my FastAPI container's docker build
I run the nginx docker command right before the gunicorn docker command
I am currently testing with the Python requests library and have set verify=False in the SSL configuration
Edit:
My issue relates most directly to this post:
From inside of a Docker container, how do I connect to the localhost of the machine?
Binding to 0.0.0.0:8000 for my gunicorn run and adding the flag --network="host" to my docker run nginx command solved my issue.
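Concretely, the working commands would look something like this (a sketch based on the commands from the question; the image names are placeholders):
# gunicorn: bind to 0.0.0.0 so the server is reachable from outside its container
docker run -dit -p 127.0.0.1:8000:8000 <My FASTAPI CONTAINER> gunicorn -w 3 -k uvicorn.workers.UvicornWorker -b 0.0.0.0:8000 server:app
# nginx: share the host's network namespace so 127.0.0.1:8000 reaches the host (-p is ignored with --network="host")
docker run -dit --network="host" -v /etc/cert/:/etc/nginx/cert <MY NGINX CONTAINER> nginx -g 'daemon off;'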

Nginx reverse proxy not finding other internal Docker container using hostname

I have two Docker containers. One runs Kestrel (172.17.0.3); the other runs Nginx (172.17.0.4) as a reverse proxy in front of Kestrel. Nginx connects fine when I use the internal Docker IP of the Kestrel container, but when I try to connect to Kestrel using the container's hostname (kestrel) in nginx.conf, I get the following error:
2020/06/30 00:23:03 [emerg] 58#58: host not found in upstream "kestrel" in /etc/nginx/nginx.conf:7
nginx: [emerg] host not found in upstream "kestrel" in /etc/nginx/nginx.conf:7
I launched containers with these two lines
docker run -d --name kestrel --restart always -h kestrel mykestrelimage
docker run -d --name nginx --restart always -p 80:80 -h nginx mynginximage
My nginx.conf file below.
http {
    # I've tried with and without line below that I found on Stackoverflow
    resolver 127.0.0.11 ipv6=off;
    server {
        listen 80;
        location / {
            # lines below don't work
            # proxy_pass http://kestrel:80;
            # proxy_pass http://kestrel;
            # proxy_pass http://kestrel:80/;
            # proxy_pass http://kestrel/;
            # when I put internal docker ip of Kestrel server it works fine
            proxy_pass http://172.17.0.3:80/;
        }
    }
}
events {
}
I figured out the solution to my problem. There were two issues.
First problem: By default, Docker attaches new containers to the default bridge network, and the default bridge network does not provide DNS resolution between containers. You have to create a custom bridge network and specify it when creating the containers. The commands below allowed me to ping between containers using hostnames:
docker network create --driver=bridge mycustomnetwork
docker run -d --name=kestrel --restart=always -h kestrel.local --network=mycustomnetwork mykestrelimage
docker run -d --name=nginx --restart always -p 80:80 -h nginx.local --network=mycustomnetwork mynginximage
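A quick way to verify the DNS part (assuming the container and host names above) is to resolve the Kestrel container's name from inside the nginx container:
docker exec nginx getent hosts kestrel.local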
Second problem: Even though there was only one Kestrel server, for some reason Nginx required that I set up an upstream section in /etc/nginx/nginx.conf:
http {
    upstream backendservers {
        server kestrel;
    }
    server {
        listen 80;
        location / {
            proxy_pass http://backendservers/;
        }
    }
}
events {
}
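For what it's worth, once both containers share a user-defined network, the plain hostname form from the question (proxy_pass http://kestrel.local:80/;) should resolve at startup as well; the original "host not found in upstream" error came from the default bridge network's missing DNS rather than from the absence of an upstream block.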

Deploy static website on port 5000 with docker and nginx

I am trying to deploy a simple static website (index.html and style.css) with Docker.
server {
    #root /var/www/html;
    # Add index.php to the list if you are using PHP
    #index index.html index.htm index.nginx-debian.html;
    server_name data-mastery.com; # managed by Certbot
    location /blog {
        proxy_pass http://127.0.0.1:5000;
    }
    listen 443 ssl; # managed by Certbot
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_certificate /etc/letsencrypt/live/data-mastery.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/data-mastery.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
server {
    if ($host = data-mastery.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot
    listen 80;
    listen [::]:80;
    server_name data-mastery.com;
    return 404; # managed by Certbot
}
This is my simple Dockerfile:
FROM nginx:alpine
COPY . /usr/share/nginx/html
I started the container with the following command:
sudo docker run -p 5000:5000 blog
I am not sure what this line means when I run docker ps:
80/tcp, 0.0.0.0:5000->5000/tcp
Is everything correct here or not?
How can I make it running on port 5000?
Thank you!
Updated answer:
Your use case is addressed in the nginx Docker image documentation, so I'll paraphrase it.
$ docker run --name my-blog -v /path/to/your/static/content/:/usr/share/nginx/html:ro -d nginx
Alternatively, a simple Dockerfile can be used to generate a new image that includes the necessary content (which is a much cleaner solution than the bind mount above):
FROM nginx
COPY . /usr/share/nginx/html
Place this file in the same directory as your directory of content, run docker build -t my-blog ., then start your container:
$ docker run --name my-blog -d my-blog
Exposing external port
$ docker run --name some-nginx -d -p 8080:80 some-content-nginx
Then you can hit http://localhost:8080 or http://host-ip:8080 in your browser.
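Applied to the question (assuming the image was built as blog and that nginx inside the container listens on its default port 80), publishing host port 5000 would look like:
sudo docker run --name blog -d -p 5000:80 blog
Then http://host-ip:5000/ serves the static site.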
Initial answer:
You should try something like:
sudo docker run -p 5000:80 blog
Docker will then forward connections to localhost:5000 on the host to port 80 in your container, where nginx is listening and serving your static blog.
I hope that helps

Can you have a Docker registry with credentials?

Assume I am running a Docker registry at example.com; I would pull images with:
docker pull example.com:5000/image:1
How can I password protect this registry? It might contain confidential code/data.
This is mentioned in the official documentation. There are several ways to secure a Docker registry. Note that all of them only work with a TLS-secured registry (otherwise you'd be sending passwords or other credentials in plain text, which would not add any security).
Authentication within the registry itself
Start by creating a credentials file in htpasswd format:
$ htpasswd -Bbn testuser testpassword > auth/htpasswd
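(The -B flag matters here: bcrypt is the only password hash format the registry accepts.)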
Then mount this file into your registry container and pass the REGISTRY_AUTH_HTPASSWD_* environment variables:
$ docker run -d -p 5000:5000 --restart=always --name registry \
-v `pwd`/auth:/auth \
-e "REGISTRY_AUTH=htpasswd" \
-e "REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm" \
-e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd \
-v `pwd`/certs:/certs \
-e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt \
-e REGISTRY_HTTP_TLS_KEY=/certs/domain.key \
registry:2
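Once the registry is running, clients authenticate with docker login before pulling; for instance, with the test credentials created above:
docker login example.com:5000
docker pull example.com:5000/image:1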
Proxy authentication
Use a reverse proxy (like Nginx) running on the host or another container to handle authentication (documented in-depth here):
upstream docker-registry {
    server registry:5000;
}
server {
    listen 443 ssl;
    server_name myregistrydomain.com;
    ssl_certificate /etc/nginx/conf.d/domain.crt;
    ssl_certificate_key /etc/nginx/conf.d/domain.key;
    client_max_body_size 0;
    location /v2/ {
        auth_basic "Registry realm";
        auth_basic_user_file /etc/nginx/conf.d/nginx.htpasswd;
        proxy_pass http://docker-registry;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_read_timeout 900;
    }
}
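The nginx.htpasswd file referenced above can be created with the same htpasswd tool (from apache2-utils); a minimal example, assuming the user testuser:
htpasswd -c /etc/nginx/conf.d/nginx.htpasswd testuser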
External authentication with token server
As you've mentioned username/password protection, one of the solutions above will probably be sufficient for you. For completeness, you can also use authentication with an external authentication provider. The interface that a token server needs to implement is specified here, with the necessary configuration options described here.
