How to customize the URL for a Jenkins Docker container?

I have installed the Jenkins Docker image on my system and can access the Jenkins console at a localhost URL like http://localhost:8080.
Now I want to share the URL with a group of people. Can someone suggest the steps to configure this?

I am not sure of your Jenkins config, as you haven't shared an MRE, so here is how to launch a new Jenkins service that others on the network can access through Nginx.
We'll use Docker and docker-compose to facilitate the process, with the official Nginx and Jenkins images from Docker Hub.
Create a folder to hold the needed config files:
mkdir ~/jenkins-docker
cd ~/jenkins-docker
touch docker-compose.yml
touch nginx.conf
Make a home directory for Jenkins:
mkdir ~/jenkins
Create Jenkins and Nginx docker-compose services (docker-compose.yml file content):
version: '3'
services:
  jenkins:
    image: jenkins/jenkins:lts # the bare `jenkins` image is deprecated on Docker Hub
    container_name: jenkins
    privileged: true
    user: root
    volumes:
      - ~/jenkins:/var/jenkins_home
    restart: always
    ports:
      - 8080:8080
    networks:
      - jenkinsnet
  server:
    image: nginx:1.17.2
    container_name: nginx
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf # your nginx config file
      - /var/log/nginx:/var/log/nginx # log files
    restart: always
    command: nginx-debug -g 'daemon off;'
    ports:
      - 8000:80
    networks:
      - jenkinsnet
    depends_on:
      - jenkins
networks:
  jenkinsnet:
Create an Nginx config to make Jenkins accessible on the network (nginx.conf file content):
events {}
http {
    include mime.types;

    ## logging
    log_format main '$remote_addr - $remote_user [$time_local] [$server_name] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;
    error_log /var/log/nginx/error.log warn;

    # server config
    server {
        listen 80;
        location / {
            proxy_pass http://jenkins:8080;
        }
    }
}
Run your services:
cd ~/jenkins-docker
docker-compose up
Access Jenkins on your local machine at http://localhost:8080
Access Jenkins from other devices on your network at http://local-ip-address:8000 (e.g. http://192.168.1.23:8000)
Access Jenkins from other devices connected to the internet at http://public-ip-address:8000 (e.g. http://56.137.222.112:8000). Port forwarding is required if you're setting up on your home network; if you're using a cloud provider, allow access to port 8000 for your instance.
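To verify both routes respond, a quick check from the host (assuming curl is installed; port 8080 hits Jenkins directly, port 8000 goes through Nginx):
curl -I http://localhost:8080
curl -I http://localhost:8000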
Further Explanation
We are launching two docker containers. The jenkins container contains a Jenkins installation, accessible on port 8080 in the container. Consequently, we published that port in the jenkins service config, so that we can access it from the host machine using:
ports:
  - 8080:8080
The nginx container contains a reverse proxy server that allows you to make the Jenkins server accessible by routing all incoming traffic on a certain port to it.
In order for the nginx service to route traffic to the jenkins service, we create and assign a single network to the services:
# network creation:
networks:
  jenkinsnet:

# network assignment:
networks:
  - jenkinsnet
When the two containers belong to the same network, we are able to use the container names as hostnames. So what would be localhost:1234 on the jenkins container can be reached from the nginx container as jenkins:1234. In the nginx.conf file, then, we route all traffic coming to Nginx to the Jenkins server using:
location / {
    proxy_pass http://jenkins:8080;
}
Nginx is listening on port 80:
server {
    listen 80;
    ...etc
So we publish the port to the host machine so that Nginx can pick up the incoming requests:
ports:
  - 8000:80
I chose port 8000 but you can use any port you like.
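For example, to expose Nginx on port 9090 instead:
ports:
  - 9090:80
Jenkins would then be reachable from other devices at http://local-ip-address:9090.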

Related

Link 1 server with 2 domains

I have one physical server with Docker. It runs 2 containers, two web apps on two different ports.
I also have two domains.
Can I link each domain to one web app?
Thank you!
Yes, you can. You can put a reverse proxy that routes traffic based on the domain name in front of the 2 containers.
As an example, let's create 2 simple Dockerfiles to simulate your 2 apps. I'll use Nginx as it's a very simple way to create a web server; don't get hung up on the fact that I use Nginx here. These would be your web app containers.
Dockerfile1:
FROM nginx
RUN echo 'hello from container 1' > /usr/share/nginx/html/index.html
and Dockerfile2:
FROM nginx
RUN echo 'hello from container 2' > /usr/share/nginx/html/index.html
Then we'll create a docker-compose file to run the 2 containers, along with a reverse proxy.
docker-compose.yml:
version: '3'
services:
  app1:
    build:
      context: .
      dockerfile: ./Dockerfile1
  app2:
    build:
      context: .
      dockerfile: ./Dockerfile2
  reverseproxy:
    image: nginx
    ports:
      - 8080:80
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
And finally, the nginx.conf file that configures nginx to route traffic for 'domain1.com' to the 'app1' container and traffic for 'domain2.com' to 'app2'.
nginx.conf:
events { }
http {
    server {
        listen 80;
        server_name domain1.com;
        location / {
            proxy_pass http://app1/;
        }
    }
    server {
        listen 80;
        server_name domain2.com;
        location / {
            proxy_pass http://app2/;
        }
    }
}
Now you can start up all 3 containers using
docker-compose up -d
and send a request to each container using
curl -H "Host: domain1.com" http://localhost:8080
curl -H "Host: domain2.com" http://localhost:8080
And the responses will come from the two different containers.
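Given the two Dockerfiles above, the expected responses would be:
hello from container 1
hello from container 2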

nginx docker does not redirect to gogs docker container

I'm new to Docker networking and Nginx, but I'm trying to "dockerize" everything on a local dev server. As a test, I have a Docker image with Nginx that should proxy another container (gogs) from port 3000 to a specific URL on port 80. I also want to keep the reverse proxy configs and the Docker images "separated", with a dedicated docker-compose file for each "app".
So I should reach the Gogs installation at http://app.test.local.
BUT: at http://app.test.local I only reach a bad gateway from nginx, while at http://app.test.local:3000 I do reach the Gogs installation...
I've tried many tutorials, but somewhere an error slips in every time.
So here is what I did:
$ docker network create testserver-network
created
docker-compose for nginx:
version: '3'
services:
  proxy:
    container_name: proxy
    hostname: proxy
    image: nginx
    ports:
      - 80:80
      - 443:443
    volumes:
      - /docker/proxy/config:/etc/nginx
      - /docker/proxy/certs:/etc/ssl/private
    networks:
      - testserver-network
networks:
  testserver-network:
and one for gogs:
version: '3'
services:
  gogs:
    container_name: gogs
    hostname: gogs
    image: gogs/gogs
    ports:
      - 3000:3000
      - "10022:22"
    volumes:
      - /docker/gogs/data:/var/gogs/data
    networks:
      - testserver-network
networks:
  testserver-network:
(mapped directories work)
and configured the default.conf of nginx:
# upstream gogs {
#     server 0.0.0.0:10880;
# }
server {
    listen 80;
    server_name app.test.local;
    location / {
        proxy_pass http://localhost:3000;
    }
}
and added this to the hosts file on the client:
<server ip> app.test.local
docker exec proxy nginx -t and docker exec proxy nginx -s reload say everything's fine...
Answer
You should connect both containers to the same docker network and then proxy to http://gogs:3000 instead. You also shouldn't need to expose port 3000 on your localhost unless you want http://app.test.local:3000 to work. I think ideally you should remove that, so http://app.test.local should proxy to your gogs server, and http://app.test.local:3000 should error out.
Explanation
gogs is exposed on port 3000 inside its container, which is then further exposed on port 3000 on your host machine. The nginx container does not have access to port 3000 on your host, so when it tries to proxy to http://localhost:3000 it is proxying to port 3000 inside the nginx container (which is hosting nothing).
After you have joined the containers to the same network, you should be able to reference the gogs container from the nginx container by its hostname (which you've set to gogs). Now nginx will proxy through the docker network. So you should be able to perform the proxy without needing to expose 3000 on your local machine.
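A minimal sketch of the fix, assuming the two compose files above: mark the pre-created network as external in both files (otherwise each compose project creates its own separate testserver-network), and point the proxy at the service hostname.
# in both docker-compose files:
networks:
  testserver-network:
    external: true

# default.conf:
server {
    listen 80;
    server_name app.test.local;
    location / {
        proxy_pass http://gogs:3000;
    }
}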

502 Bad Gateway Error on Nextcloud Docker Container proxied through Subdomain on Nginx Webserver

I am running an Nginx webserver on my Raspberry Pi 4 and am trying to configure a reverse proxy on a subdomain to a Nextcloud Docker container. However, I am getting a 502 Bad Gateway error when I try to visit this container in my browser. I have made sure to generate an SSL certificate for the subdomain I am trying to serve Nextcloud over.
This is what the server block for my subdomain looks like:
server {
    listen 443 ssl;
    server_name subdomain.domain.tld;

    ssl_certificate /pathtokey/subdomain.domain.tld/fullchain.pem;
    ssl_certificate_key /pathtokey/subdomain.domain.tld/privkey;

    location / {
        proxy_pass https://127.0.0.1:9000/;
        proxy_ssl_server_name on;
    }
}
And this is what my docker-compose.yml file for Nextcloud looks like:
version: '2'

volumes:
  nextcloud:
  db:

services:
  db:
    image: linuxserver/mariadb
    # command: --transaction-isolation=READ-COMMITTED --binlog-format=ROW
    restart: always
    volumes:
      - db:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=<rootPassword>
      - MYSQL_PASSWORD=<mysqlPassword>
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
  app:
    image: nextcloud:fpm
    ports:
      - 127.0.0.1:9000:9000
    links:
      - db
    volumes:
      - /mnt/hdd/nextcloud:/var/www/html
    restart: always
After changing the .yml file, I make sure to run docker-compose up -d.
After changing the nginx.conf file, I run sudo systemctl restart nginx. I have also run sudo nginx -t.
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
I am not sure where my mistake is in these configurations. I would appreciate any advice on how to fix this.
You are using the nextcloud:fpm image, which is only a PHP-FPM instance without a web server.
Your nginx proxy config itself is fine, but it won't work here: nginx needs to proxy requests to the backend PHP instance over FastCGI (fastcgi_pass) rather than plain HTTP.
Here's a simple illustration:
nginx(fastcgi) <-> php-fpm(nextcloud) <-> db
1st solution:
Refer to the Nextcloud official docs on how to configure nginx, or simply copy the config: nginx configuration
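A minimal sketch of the first approach, not the full Nextcloud config (which needs many more rules; see the official docs). Paths assume the compose file above: the host nginx reads static files from /mnt/hdd/nextcloud, while php-fpm sees the same files under /var/www/html inside the container:
server {
    listen 443 ssl;
    server_name subdomain.domain.tld;
    root /mnt/hdd/nextcloud; # the volume shared with the app container

    location ~ \.php$ {
        include fastcgi_params;
        # the script path must be the one php-fpm sees inside the container
        fastcgi_param SCRIPT_FILENAME /var/www/html$fastcgi_script_name;
        fastcgi_pass 127.0.0.1:9000; # the published nextcloud:fpm port
    }
}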
2nd solution:
Use the nextcloud:apache image instead. This image already includes an apache web server, and you can access it directly without needing another nginx instance.
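If you take this route but keep nginx in front for TLS, the relevant compose change might look like this (the apache image serves plain HTTP on port 80 inside the container, so the proxy_pass above would become http://127.0.0.1:9000/):
app:
  image: nextcloud:apache
  ports:
    - 127.0.0.1:9000:80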

How can I connect the Nginx container to my React container?

I have tried reading through the other stackoverflow questions here but I am either missing something or none of them are working for me.
Context
I have two docker containers set up on a DigitalOcean server running Ubuntu:
root_frontend_1 running on ports 0.0.0.0:3000->3000/tcp
root_nginxcustom_1 running on ports 0.0.0.0:80->80/tcp
If I connect to http://127.0.0.1, I get the default Nginx index.html homepage. If I visit http://127.0.0.1:3000, I get my React app.
What I am trying to accomplish is to get my React app when I visit http://127.0.0.1. Following the documentation and suggestions here on StackOverflow, I have the following:
docker-compose.yml in the root of my DigitalOcean server:
version: "3"
services:
nginxcustom:
image: nginx
hostname: nginxcustom
ports:
- "80:80"
volumes:
- ./nginx.conf:/root/nginxcustom/conf/custom.conf
tty: true
backend:
build: https://github.com/Twitter-Clone/twitter-clone-api.git
ports:
- "8000:8000"
tty: true
frontend:
build: https://github.com/dougmellon/react-api.git
ports:
- "3000:3000"
stdin_open: true
tty: true
nginxcustom/conf/custom.conf:
server {
    listen 80;
    server_name http://127.0.0.1;

    location / {
        proxy_pass http://root_frontend_1:3000; # this one here
        proxy_redirect off;
    }
}
When I run docker-compose up, it builds, but when I visit the IP of my server, it still shows the default nginx HTML file.
Question
What am I doing wrong here and how can I get it so the main URL points to my react container?
Thank you for your time, and if there is anything I can add for clarity, please don't hesitate to ask.
TL;DR;
The nginx service should proxy_pass to the service name (frontend), not the container name (root_frontend_1), and the nginx config should be mounted to the correct location inside the container (/etc/nginx/conf.d/default.conf rather than /root/nginxcustom/conf/custom.conf).
Tip: the container name can be set for services in the docker-compose.yml with the container_name setting; however, beware that you can not --scale services with a fixed container_name.
Tip: the container name (root_frontend_1) is generated using the compose project name, which defaults to the current directory name if not set.
Tip: the nginx images are packaged with a default /etc/nginx/nginx.conf that will include the default server config from /etc/nginx/conf.d/default.conf. You can docker cp the default configuration files out of a container if you'd like to inspect them or use them as a base for your own configuration:
docker create --name nginx nginx
docker cp nginx:/etc/nginx/conf.d/default.conf default.conf
docker cp nginx:/etc/nginx/nginx.conf nginx.conf
docker container rm nginx
With nginx proxying connections for the frontend service, we don't need to bind the host's port to the container; the service's ports definition can be replaced with an expose definition to prevent direct connections to http://159.89.135.61:3000 (depending on the backend, you might want to prevent direct connections to it as well):
version: "3"
services:
...
frontend:
build: https://github.com/dougmellon/react-api.git
expose:
- "3000"
stdin_open: true
tty: true
Taking it a step further, we can configure an upstream for the frontend service, then configure the proxy_pass for the upstream:
upstream frontend {
    server frontend:3000 max_fails=3;
}

server {
    listen 80;
    server_name http://159.89.135.61;

    location / {
        proxy_pass http://frontend/;
    }
}
... then bind-mount the custom default.conf on top of the default.conf inside the container:
version: "3"
services:
nginxcustom:
image: nginx
hostname: nginxcustom
ports:
- "80:80"
volumes:
- ./default.conf:/etc/nginx/conf.d/default.conf
tty: true
... and finally --scale our frontend service (bounce the services, removing the containers, to make sure changes to the config take effect):
docker-compose stop nginxcustom \
&& docker-compose rm -f \
&& docker-compose up -d --scale frontend=3
Docker will resolve the service name to the IPs of the 3 frontend containers, and nginx will proxy connections to them in a (by default) round-robin manner.
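To confirm the scaled replicas are up, a quick check (docker-compose ps accepts a service name to filter by):
docker-compose ps frontend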
Tip: you can not --scale a service that has ports mappings, only a single container can bind to the port.
Tip: if you've updated the config and can connect to your load balanced service then you're all set to create a DNS record to resolve a hostname to your public IP address then update your default.conf's server_name.
Tip: for security I maintain specs for building a nginx docker image with Modsecurity and Modsecurity-nginx pre-baked with the OWASP Core Rule Set.
In Docker, when multiple services need to communicate with each other, you can use the service name (set in the docker-compose.yml) in the URL instead of the IP (which is assigned from the available pool of the network, named default by default); it will automatically be resolved to the right container IP by Docker's network management.
For you it would be http://frontend:3000

Nginx reverse proxy can't reach docker container by host name

Nginx reverse proxy can't reach the docker host. Hosting on Amazon (EC2).
I want to load different apps depending on location.
nginx.conf
server {
    listen 80;
    server_name localhost;

    location /web {
        proxy_pass http://web:4000/;
    }
}
The location works, which means the nginx image was built correctly.
docker-compose file
services:
  web:
    image: web
    container_name: web
    ports:
      - 4000:4000
    hostname: web
    networks:
      - default
  nginx:
    image: nginx
    container_name: nginx
    ports:
      - 80:80
    depends_on:
      - web
    networks:
      - default
networks:
  default:
    external:
      name: my-network
I expect
- when I type the URL /web, it should show the app from the docker container
I've tried:
Running a single container works fine (web or nginx)
Adding 127.0.0.1 web to /etc/hosts (I can curl web, but it shows the localhost response)
Adding index index.html in the location section
Adding a resolver in the location section
Using links instead of networks
When "docker-compose up" I can inspect docker container (web) and see IP - 192.168.10.2 . Then curl 192.168.10.2 shows me index.html. But I can't make curl http://web:4000 seems that hostname in unreachable, but I think that using IP in proxy_pass is a bad decision.
I wasn't able to resolve those issues, so I chose another approach.
Create the ipam network:
docker network create --gateway 172.20.0.1 --subnet 172.20.0.0/24 ipam
Assigned an ipv4_address to each service in the docker-compose file:
networks:
  default:
    ipv4_address: 172.20.0.5 # for web
where
networks:
  default:
    external:
      name: ipam
Added a chmod for the directory /var/www/html in my web docker image:
chmod -R 755 /var/www/html
(it seems this additional step is required if you build a Linux container under Windows Docker)
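Pulled together, a sketch of the adjusted docker-compose file under these assumptions (only the 172.20.0.5 address for web comes from the steps above; the .6 address for nginx is illustrative):
services:
  web:
    image: web
    ports:
      - 4000:4000
    networks:
      default:
        ipv4_address: 172.20.0.5
  nginx:
    image: nginx
    ports:
      - 80:80
    depends_on:
      - web
    networks:
      default:
        ipv4_address: 172.20.0.6
networks:
  default:
    external:
      name: ipam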
