I'm running GitLab and nginx in Docker. I had to use a non-bundled nginx because there are other applications running and I can only publish ports 80 and 443. So far I've managed to access GitLab at my.domain.com/gitlab/ and can also log in, upload projects, etc.
I want to use the container registry for uploading and storing images from my projects, which is why I've uncommented gitlab_rails['registry_enabled'] = true in gitlab.rb. The container registry is now visible for my projects; however, I get a Docker Connection Error when trying to access the page.
My question is: are there any other settings I have to tweak to get the built-in container registry running, or did I already mess things up in the way I've set this up? Especially since I need GitLab to run on a relative URL, and my project's URL for cloning/pulling is something like https://<server.ip.address>/gitlab/group-name/test-project, even though browser tabs show https://my.domain.com/gitlab/group-name/test-project.
So far, this is my setup:
nginx
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name my.domain.com www.my.domain.com;
    server_tokens off;

    ssl_certificate /etc/nginx/ssl/live/my.domain.com/fullchain.pem;
    ssl_certificate_key /etc/nginx/ssl/live/my.domain.com/privkey.pem;

    ##############
    ##  GITLAB  ##
    ##############
    location /gitlab/ {
        root /home/git/gitlab/public;
        proxy_http_version 1.1;
        proxy_pass http://<server.ip.address>:8929/gitlab/;
        gzip off;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
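As a hedged aside on the proxy side: GitLab's documentation for running behind a non-bundled webserver also recommends forwarding the original host and scheme, which is what keeps the URLs GitLab generates consistent with what the browser shows. A sketch of the location block with those extra headers (the two marked lines are assumptions, not part of the setup above):

location /gitlab/ {
    proxy_http_version 1.1;
    proxy_pass http://<server.ip.address>:8929/gitlab/;
    gzip off;
    proxy_set_header Host $host;                  # assumed addition: preserve the public hostname
    proxy_set_header X-Forwarded-Proto $scheme;   # assumed addition: tell GitLab it sits behind HTTPS
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}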
gitlab docker-compose.yml
services:
  git:
    image: gitlab/gitlab-ce:14.10.0-ce.0
    restart: unless-stopped
    container_name: gitlab
    hostname: 'my.domain.com'
    environment:
      GITLAB_OMNIBUS_CONFIG: |
        gitlab_rails['gitlab_shell_ssh_port'] = 2224
        external_url 'http://<server.ip.address>:8929/gitlab/'
        gitlab_rails['registry_enabled'] = true
    ports:
      - "8929:8929"
      - "2224:2224"
Related
I want to deploy a Growthbook (A/B testing tool) container along with an Nginx reverse proxy handling SSL/TLS encryption (i.e., SSL termination) on AWS ECS. I was trying to deploy using a Docker Compose file (i.e., the Docker ECS context). The problem is, it creates all the necessary resources like Network Load Balancers, Target Groups, ECS task definitions, etc., then abruptly fails while creating the ECS service and deletes all the resources it created when I run docker compose --project-name growthbook up. The reason it gives is "Nginx sidecar container exited".
Here is my docker-compose.yml file:
# docker-compose.yml
version: "3"

x-aws-vpc: "vpc-*************"

services:
  growthbook:
    image: "growthbook/growthbook:latest"
    ports:
      - 3000:3000
      - 3100:3100
    environment:
      - MONGODB_URI=<mongo_db_connection_string>
      - JWT_SECRET=<jwt_secret>
    volumes:
      - uploads:/usr/local/src/app/packages/back-end/uploads

  nginx-tls-sidecar:
    image: <nginx_sidecar_image>
    ports:
      - 443:443
    links:
      - growthbook

volumes:
  uploads:
And here is the Dockerfile used to build the nginx sidecar image:
FROM nginx
COPY nginx.conf /etc/nginx/nginx.conf
COPY ssl.key /etc/nginx/ssl.key
COPY ssl.crt /etc/nginx/ssl.crt
In the above Dockerfile, the SSL key and certificate are self-signed, generated using openssl, and are in order.
And here is my nginx.conf file:
# nginx Configuration File
# https://wiki.nginx.org/Configuration

# Run as a less privileged user for security reasons.
user nginx;
worker_processes auto;

events {
    worker_connections 1024;
}

pid /var/run/nginx.pid;

http {
    # Redirect to https, using 307 instead of 301 to preserve post data
    server {
        listen [::]:443 ssl;
        listen 443 ssl;
        server_name localhost;

        # Protect against the BEAST attack by not using SSLv3 at all. If you need
        # to support older browsers (IE6) you may need to add SSLv3 to the list of
        # protocols below.
        ssl_protocols TLSv1.2;

        # Ciphers set to best allow protection from BEAST, while providing forward
        # secrecy, as defined by Mozilla - https://wiki.mozilla.org/Security/Server_Side_TLS#Nginx
        ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:ECDHE-RSA-RC4-SHA:ECDHE-ECDSA-RC4-SHA:AES128:AES256:RC4-SHA:HIGH:!aNULL:!eNULL:!EXPORT:!DES:!3DES:!MD5:!PSK;
        ssl_prefer_server_ciphers on;

        # Optimize TLS/SSL by caching session parameters for 10 minutes. This cuts
        # down on the number of expensive TLS/SSL handshakes. The handshake is the
        # most CPU-intensive operation, and by default it is re-negotiated on every
        # new/parallel connection. By enabling a cache (of type "shared between all
        # Nginx workers"), we tell the client to re-use the already negotiated state.
        # Further optimization can be achieved by raising keepalive_timeout, but
        # that shouldn't be done unless you serve primarily HTTPS.
        ssl_session_cache shared:SSL:10m; # a 1mb cache can hold about 4000 sessions, so we can hold 40000 sessions
        ssl_session_timeout 24h;

        # Use a higher keepalive timeout to reduce the need for repeated handshakes
        keepalive_timeout 300; # up from 75 secs default

        # Remember the certificate for a year and automatically connect to HTTPS
        add_header Strict-Transport-Security 'max-age=31536000; includeSubDomains';

        ssl_certificate /etc/nginx/ssl.crt;
        ssl_certificate_key /etc/nginx/ssl.key;

        location / {
            proxy_pass http://localhost:3000; # TODO: replace port if app listens on port other than 80
            proxy_set_header Connection "";
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $remote_addr;
        }

        location / {
            proxy_pass http://localhost:3100; # TODO: replace port if app listens on port other than 80
            proxy_set_header Connection "";
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $remote_addr;
        }
    }
}
Basically, Growthbook exposes its service on http://localhost:3000 and http://localhost:3100, and the Nginx sidecar container only listens on port 443. I need to be able to proxy to both endpoints exposed by Growthbook from Nginx's port 443.
Help is much appreciated if you find any mistakes in my configuration :)
By default, the Growthbook service does not provide TLS encryption, so I used Nginx as a sidecar handling SSL termination. As an end result, I need to be able to run the Growthbook service with TLS encryption hosted on AWS ECS.
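One concrete mistake worth flagging, before anything ECS-specific: the nginx.conf above declares two identical location / blocks in the same server, and nginx refuses to start on a duplicate location "/" config error, which by itself would explain "Nginx sidecar container exited". A sketch of one way to split the two upstreams, assuming the API is served under a distinct path (the /api/ prefix is an assumption, not something Growthbook mandates; two server blocks with different server_name values on 443 would work as well):

location / {
    # Growthbook front-end (port 3000, from the question)
    proxy_pass http://localhost:3000;
    proxy_set_header Connection "";
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $remote_addr;
}

location /api/ {
    # Growthbook API (port 3100); the trailing slash on proxy_pass strips
    # the /api/ prefix before the request reaches the upstream
    proxy_pass http://localhost:3100/;
    proxy_set_header Connection "";
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $remote_addr;
}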
I'm trying to deploy a simple FastAPI app with Docker and an Nginx proxy on Google Cloud, using a plain ssh terminal window.
My nginx.conf:
access_log /var/log/nginx/app.log;
error_log /var/log/nginx/app.log;

proxy_headers_hash_max_size 512;
proxy_headers_hash_bucket_size 128;

proxy_set_header Host $http_host;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $proxy_connection;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $proxy_x_forwarded_proto;
proxy_set_header X-Forwarded-Ssl $proxy_x_forwarded_ssl;
proxy_set_header X-Forwarded-Port $proxy_x_forwarded_port;
proxy_set_header X-Original-URI $request_uri;
proxy_set_header Proxy "";

upstream app_server {
    server example.com:8000;
}

server {
    server_name example.com;
    listen 80;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate /root/ssl/cert.pem;
    ssl_certificate_key /root/ssl/key.pem;

    location / {
        proxy_pass "http://app_server";
    }
}
My docker-compose.yml:
version: '3.8'

services:
  reverse-proxy:
    image: jwilder/nginx-proxy
    container_name: reverse-proxy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - ./nginx:/etc/nginx/conf.d
      - ./ssl/cert1.pem:/root/ssl/cert.pem
      - ./ssl/privkey1.pem:/root/ssl/key.pem
      - ./ssl/dhparam.pem:/etc/nginx/dhparam/dhparam.pem
    networks:
      - reverse-proxy

  web:
    environment: [.env]
    build: ./project
    ports:
      - 8000:8000
    command: gunicorn main:app -k uvicorn.workers.UvicornWorker -w 2 -b 0.0.0.0:8000
    volumes:
      - ./project:/usr/src/app
    networks:
      - reverse-proxy
      - back

networks:
  reverse-proxy:
    external:
      name: reverse-proxy
  back:
    driver: bridge
After running the docker-compose up command and going to the example.com address, I get this error:
*3 upstream timed out (110: Connection timed out) while connecting to upstream...
Also, I have opened the ports with the Google Cloud Firewall service (checked with the netstat command) and configured my VM instance with the network parameters from this article.
I don't understand why I receive a 504 Gateway Time-out, because my service works with a similar configuration on a plain VPS hosting, and it also works from inside the Google Cloud VM's ssh terminal when using curl against localhost instead of the example.com domain. I want to know how to run my service on a Google Cloud VM using only the docker-compose utility.
In the Nginx config file, try referencing the web service's container name instead of the public domain:

upstream app_server {
    server web:8000;
}

Pointing the upstream at example.com:8000 sends nginx out of the Docker network and back in through the VM's public IP, which commonly times out; on the shared reverse-proxy network, Docker's embedded DNS resolves the service name web straight to the container.
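A small, optional follow-on, sketched under the assumption that nothing else on the VM needs to reach port 8000 directly: once nginx reaches the app by service name, publishing 8000 on the host is no longer required, and compose's expose keeps the port reachable only on the Docker networks.

web:
  build: ./project
  # replaces "ports: - 8000:8000"; the app stays reachable as web:8000
  # on the reverse-proxy network but is not bound on the host
  expose:
    - "8000"
  networks:
    - reverse-proxy
    - back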
I've searched a lot of online material, but I wasn't able to find a solution for my problem. I'll try to make it as clear as possible. I think I'm missing something, and maybe someone with more experience on the server-configuration side has the answer.
I have a MERN-stack app and I'm trying to deploy it on a DigitalOcean droplet, using Docker. All good so far, everything runs as it should, except for the fact that I'm not able to access my app by its domain. It works perfectly if I'm using the IP of the droplet.
What I've checked so far:
checked my ufw status, and I have both HTTP and HTTPS enabled
the domain is from GoDaddy and it's live, linked with the proper nameservers from DigitalOcean
in the Domains section on DigitalOcean everything is set as it should be; I have the proper CNAME records pointing to the IP of my droplet
a direct ping to my domain works fine (it returns the correct IP)
also checked DNS lookup tools, and everything seems to be linked just fine
When it comes to the Docker containers, I have 3 of them: client, backend and nginx.
This is what my docker-compose file looks like:
version: '3'

services:
  nginx:
    container_name: jtg-nginx
    depends_on:
      - backend
      - client
    restart: always
    image: host-of-my-image-nginx:latest
    networks:
      - local-net
    ports:
      - '80:80'

  backend:
    container_name: jtg-backend
    image: host-of-my-image-backend:latest
    ports:
      - "5000:5000"
    volumes:
      - logs:/app/logs
      - uploads:/app/uploads
    networks:
      - local-net
    env_file:
      - .env

  client:
    container_name: jtg-client
    stdin_open: true
    depends_on:
      - backend
    image: host-of-my-image-client:latest
    networks:
      - local-net
    env_file:
      - .env

networks:
  local-net:
    driver: bridge

volumes:
  logs:
    driver: local
  uploads:
    driver: local
I have two instances of Nginx. One is used inside the client container, and the other runs in its own container.
This is the default.conf from the client:
server {
    listen 3000;

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
        try_files $uri $uri/ /index.html;
    }
}
Now comes the most important part. This is the default.conf used inside the main Nginx container:
upstream client {
    server client:3000;
}

upstream backend {
    server backend:5000;
}

server {
    listen 80;
    server_name my-domain.com www.my-domain.com;

    location / {
        proxy_pass http://client;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }

    location /backend {
        rewrite /backend/(.*) /$1 break;
        proxy_pass http://backend;
    }
}
I really don't understand what's wrong with this configuration, and I think it's something very small that I'm missing.
Thank you!
If you want to set up a domain name in front, you'll need a webserver instance that proxy_passes your hostname to your container.
So this is what you may want to do:
server {
    listen 80;
    server_name my-domain.com www.my-domain.com;

    location / {
        proxy_pass http://localhost:5000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }

    location /backend {
        rewrite /backend/(.*) /$1 break;
        proxy_pass http://backend;
    }
}
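One hedged caveat on the snippet above, given the question's compose file: when nginx runs in its own container, localhost:5000 refers to the nginx container itself rather than the backend, so the service-name form the question already uses is the safer target:

location / {
    # compose service names from the question, resolved by Docker's embedded
    # DNS on the shared local-net network
    proxy_pass http://client:3000;   # or http://backend:5000 for the API
}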
The mystery was solved: after adding an SSL certificate, everything works as it should.
I've been trying to fix an issue where, when I try to log in to pgAdmin (in a Docker container) behind an Nginx proxy, I get an error that the CSRF tokens do not match.
See https://en.wikipedia.org/wiki/Cross-site_request_forgery
Frankly, I'm not sure whether the problem is related to nginx or not, but the configuration files are below:
Docker Swarm service:
pgAdmin:
  image: dpage/pgadmin4
  networks:
    - my-network
  ports:
    - 9102:80
  environment:
    - PGADMIN_DEFAULT_EMAIL=${PGADMIN_DEFAULT_EMAIL}
    - PGADMIN_DEFAULT_PASSWORD=${PGADMIN_DEFAULT_PASSWORD}
    - PGADMIN_CONFIG_SERVER_MODE=True
  volumes:
    - /home/docker-container/pgadmin/persist-data:/var/lib/pgadmin
    - /home/docker-container/pgadmin/persist-data/servers.json:/pgadmin4/servers.json
  deploy:
    placement:
      constraints: [node.hostname == my-host-name]
Nginx configuration (a closing brace for the first server block was missing in the paste and has been restored):
server {
    listen 443 ssl;
    server_name my-server-name;

    location / {
        proxy_pass http://pgAdmin/;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-CSRF-Token $http_x_pga_csrftoken;
    }

    ssl_certificate /home/nginx/ssl/certificate.crt;
    ssl_certificate_key /home/nginx/ssl/private.key;
    ssl_session_cache builtin:1000 shared:SSL:10m;
    ssl_prefer_server_ciphers on;
}

server {
    listen 80;
    server_name my-server-name;
    return 301 https://my-server-name$request_uri;
}
I am able to access pgAdmin in two ways:
The first way is via the direct host IP, like 172.23.53.2:9102.
The second way is via the Nginx proxy.
When I access pgAdmin via the direct host IP there is no error, but when I access it via DNS (like my-server.pgadmin.com) I get the error after logging in to the pgAdmin dashboard.
The error is:
Bad Request. The CSRF tokens do not match.
My first thought was that nginx does not pass the CSRF token header to pgAdmin. For this reason I've changed the nginx configuration file many, many times, but I'm still getting this error.
What could be the source of this error, and how could I solve this problem?
Try using the default ports "5050:80". That solved the same issue on my side. Using strings is also recommended.
Cf: https://docs.docker.com/compose/compose-file/compose-file-v3/#ports
I used pgAdmin 4 deployed behind Apache httpd; the deployment method is similar, and I had the same problem. My solution was to make sure Apache httpd loaded the APR/APR-util/PCRE libraries, since Apache httpd uses them for the token.
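For completeness, a hedged sketch adapted from pgAdmin's server-deployment documentation for nginx: the custom X-CSRF-Token header from the question shouldn't be needed, and X-Script-Name matters only when pgAdmin is served under a sub-path (the host, port, and path below are the documentation's placeholders, not values from the question):

server {
    listen 80;
    server_name _;

    location /pgadmin4/ {
        # tell pgAdmin which sub-path it is mounted under
        proxy_set_header X-Script-Name /pgadmin4;
        # forward the original host so cookies/tokens are issued for the
        # DNS name the browser actually uses
        proxy_set_header Host $host;
        proxy_pass http://localhost:5050/;
        proxy_redirect off;
    }
}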
I'm having issues with the Jenkins proxy. The Jenkins container is behind my NGINX proxy. I access it at http://localhost:8000. After I log in, I get kicked to http://localhost. Some links in Jenkins do the same and drop the port, which breaks the page. I get the error from the title on my Manage Jenkins page, and I tried adding the proxy_pass URL there as well, but nothing works.
My NGINX conf file is like so...
server {
    listen 8000;
    server_name "";
    access_log off;

    location / {
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-Host $host;
        proxy_pass http://jenkins_master_1:8080;
        proxy_redirect http://jenkins_master_1:8080 http://localhost:8000;
        proxy_max_temp_file_size 0;
        proxy_connect_timeout 150;
        proxy_send_timeout 100;
        proxy_read_timeout 100;
        proxy_buffer_size 8k;
        proxy_buffers 4 32k;
        proxy_busy_buffers_size 64k;
        proxy_temp_file_write_size 64k;
    }
}
my docker-compose.yml file is like so...
version: '3'

# Services are the names of each container
services:
  master:
    # Where to build the container from a Dockerfile
    build: ./jenkins-master
    # Which ports to open
    ports:
      - "50000:50000"
    # Volumes to mount in the container
    volumes:
      - jenkins-log:/var/log/jenkins
      - jenkins-data:/var/jenkins_home
    # Adding the service to a network
    networks:
      - jenkins-net

  nginx:
    build: ./jenkins-nginx
    ports:
      - "8000:8000"
    networks:
      - jenkins-net

# List of volumes to create
volumes:
  jenkins-data:
  jenkins-log:

# List of networks to create
networks:
  jenkins-net:
I'm trying to learn Docker and Jenkins and was following a tutorial; the jenkins_master_1 name comes from the docker-compose project. Any help or guidance would be really appreciated.
Thanks
Assumption 1: NGINX is in front of your app, accepting connections on port 80, then passing to the backend on port 8080.
Assumption 2: the Jenkins application and NGINX are on the same server.
You should be accessing it originally from port 80, not 8080, if you are using the proxy.
NGINX gets the request on 80, then passes it to the backend on 8080. From the browser you shouldn't see 8080 if you are using the proxy. If you are using 8080 and it's doing something, then you're going directly to the app, i.e., bypassing the proxy.
So, how to start addressing it:
1. Navigate to http://localhost, which should go through your proxy (if it's set up properly).
2. In Manage Jenkins -> Configure System -> Jenkins URL, make sure the URL is set to http://localhost.
3. Better to use an FQDN for the server name in the NGINX configuration, then make sure Jenkins is only listening for connections on localhost in the Jenkins.xml configuration. Jenkins.xml should have the listen address set to 127.0.0.1. Then external requests to that FQDN cannot bypass the proxy, as Jenkins will only allow connections from localhost (from NGINX, or from a browser on the localhost).
Then, ideally, you have:
http://fqdn -> NGINX listening on port 80 -> Jenkins on 127.0.0.1:8080. The user's browser (safely outside of your server) never sees the 8080 port.
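Worth noting for the docker-compose setup in the question, as a hedged aside: the container equivalent of binding Jenkins to 127.0.0.1 is simply not publishing 8080 on the host, which the compose file above already does; only containers on jenkins-net (i.e., the nginx service) can reach Jenkins.

master:
  build: ./jenkins-master
  ports:
    - "50000:50000"   # agent port only; with no "8080:8080" mapping, the web
                      # UI is reachable solely through nginx on jenkins-net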
Try adding the proxy_redirect directive in the location block. It instructs the webserver to return different 301/302 HTTP redirect responses than the ones calculated by the server itself. Sometimes the webserver is unable to properly calculate its own address, as in Docker, where the container has no information about the outside world or about the connection being proxied/forwarded.
location / {
    proxy_pass http://jenkins_master_1:8080;
    proxy_redirect http://jenkins_master_1:8080 http://localhost:8080;
}
SRC: http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_redirect
Adding the X-Forwarded-* headers is the right solution.
Without such headers, I got a lot of errors: for example, I was redirected to https://jenkinsci:8080 after I set the initial password and clicked the continue button. Many times when I visited https://jenkins.mydomain.com and clicked links on the page, I was redirected to https://jenkinsci:8080, and https://jenkinsci:8080 obviously cannot be visited. I don't know why; maybe Tomcat needs that X-Forwarded-* header information.
This article - Jenkins behind an NGinX reverse proxy - is highly recommended for anyone who wants to run Jenkins behind nginx, even if both Jenkins and nginx are created as Docker containers. And once again, you'd better add those X-Forwarded-* headers.
An example nginx vhost configuration file:
server {
charset utf8;
access_log /var/log/nginx/jenkins.yourdomain.com.access_log main;
listen 443 ssl http2;
server_name jenkins.yourdomain.com;
ssl_certificate /etc/nginx/ssl/yourdomain.com.crt;
ssl_certificate_key /etc/nginx/ssl/yourdomain.com.key;
ssl_session_cache shared:SSL:1m;
ssl_session_timeout 5m;
ssl_ciphers HIGH:!aNULL:!MD5;
ssl_prefer_server_ciphers on;
location / {
proxy_set_header Host $host:$server_port;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_http_version 1.1;
proxy_request_buffering off;
proxy_buffering off;
proxy_pass http://jenkinsci:8080; #jenkinsci is the service/container name specified in the docker-compose.yml file
}
}