Linking two docker compose configuration fails with a bridge network - docker

I have set up a global docker configuration which I expect to handle the nginx and database configuration. It has the following contents:
webserver:
  image: nginx
  container_name: webserver
  restart: unless-stopped
  tty: true
  ports:
    - "80:80"
    - "443:443"
  volumes:
    - ./:/var/www
    - ./dockerconfig/nginx/conf.d/:/etc/nginx/conf.d/
  networks:
    - common
networks:
  common:
    driver: bridge
In the folder dockerconfig/nginx/conf.d I have a file pos.test with the following nginx config:
server {
    listen 80;
    server_name pos.test www.pos.test;
    index index.php index.html;
    error_log /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;
    root /var/www/web;

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass app:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
    }

    location / {
        try_files $uri $uri/ /index.php?$query_string;
        gzip_static on;
    }
}
In the second docker configuration file I have the following:
app:
  build:
    context: .
    dockerfile: Dockerfile
  image: posapp/php
  container_name: envanto_pos
  restart: unless-stopped
  tty: true
  working_dir: /var/www/pos
  volumes:
    - ./:/var/www/pos
  networks:
    - common
networks:
  common:
    driver: bridge
After running both files via docker-compose up -d, both start without any issue, but nginx cannot serve the domain pos.test, whose application code runs in the container from the second compose file.
The idea behind this is to have one docker-compose file handle the nginx server, while the application code is handled by the other compose file.
How can I make both docker configurations work together? Joining them over a bridge network fails. What am I missing?
UPDATE
I know one way to solve this would be to use a single compose file, but I want to split the configuration across different files.

You are most likely running the docker-compose command from different directories without overriding the compose project name. Docker Compose prefixes the objects it creates, like containers, volumes, and networks, with the project name, to allow different instances to run in isolation from each other.
To solve this, you need a known name of the network, and you'll want to define it as external to at least one of your compose files. When the network is defined as external, compose will not try to create it, but will require that it was already created externally, either by a docker network create command or by the other compose file.
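If you prefer to mark the network as external in both compose files, it can be created once by hand before either project is started. A minimal sketch (the network name common is taken from the compose files in this question):

```shell
# Create the shared bridge network once, outside of compose.
# Both compose projects can then declare it as external and attach to it.
docker network create --driver bridge common
```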
To create the network with a known name, you can specify the name value in newer versions of the docker compose file.
Here is the first compose file that would create the network with a known name:
version: '3.7'
services:
  webserver:
    image: nginx
    container_name: webserver
    restart: unless-stopped
    tty: true
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./:/var/www
      - ./dockerconfig/nginx/conf.d/:/etc/nginx/conf.d/
    networks:
      - common
networks:
  common:
    external: false
    name: common
And the second compose file that would use the already created network:
version: '3.7'
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    image: posapp/php
    container_name: envanto_pos
    restart: unless-stopped
    tty: true
    working_dir: /var/www/pos
    volumes:
      - ./:/var/www/pos
    networks:
      - common
networks:
  common:
    external: true
    name: common
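To confirm that both containers actually ended up on the same network, you can inspect it after both projects are up (a quick check, assuming the container names used above):

```shell
# Both containers should appear in the "Containers" section of the output
docker network inspect common

# The webserver should then resolve the app container by name
docker exec webserver ping -c 1 app
```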

The problem is that you're redefining the common network.
You can define the two docker-compose.yml files as follows:
docker-compose.yml
networks:
  common:
    driver: bridge
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    image: posapp/php
    container_name: envanto_pos
    restart: unless-stopped
    tty: true
    working_dir: /var/www/pos
    volumes:
      - ./${AppPath}/:/var/www/pos
    networks:
      - common
and docker-compose.webserver.yml
services:
  webserver:
    image: nginx
    container_name: webserver
    restart: unless-stopped
    tty: true
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./${WebserverPath}/:/var/www
      - ./${WebserverPath}/dockerconfig/nginx/conf.d/:/etc/nginx/conf.d/
    networks:
      - common
and start the application with:
docker-compose -f docker-compose.yml -f docker-compose.webserver.yml up -d

Related

How to connect two docker containers together

I have a ReactJS front-end application and a simple Python Flask back end. I am using a docker-compose.yml to spin up both containers, like this:
version: "3.2"
services:
  frontend:
    build: .
    environment:
      CHOKIDAR_USEPOLLING: "true"
    ports:
      - 80:80
    links:
      - "backend:backend"
    depends_on:
      - backend
  backend:
    build: ./api
    # volumes:
    #   - ./api:/usr/src/app
    environment:
      # CHOKIDAR_USEPOLLING: "true"
      FLASK_APP: /usr/src/app/server.py
      FLASK_DEBUG: 1
    ports:
      - 8083:8083
I have used links so the frontend service can talk to the backend service using axios, as below:
axios.get("http://backend:8083/monitors").then(res => {
  this.setState({
    status: res.data
  });
});
I used docker-compose up --build -d to build and start the two containers; they start without any issue and run fine.
But now the frontend cannot talk to the backend.
I am using an AWS EC2 instance. When the page loads, I checked the browser console for errors and I get this:
VM167:1 GET http://backend:8083/monitors net::ERR_NAME_NOT_RESOLVED
Can someone please help me?
The backend service is up and running.
You can use nginx as a reverse proxy in front of both. The container name backend only resolves inside the Docker network; your axios call runs in the browser on the host, which is why it fails with ERR_NAME_NOT_RESOLVED.
The compose file:
version: "3.2"
services:
  frontend:
    build: .
    environment:
      CHOKIDAR_USEPOLLING: "true"
    depends_on:
      - backend
  backend:
    build: ./api
    # volumes:
    #   - ./api:/usr/src/app
    environment:
      # CHOKIDAR_USEPOLLING: "true"
      FLASK_APP: /usr/src/app/server.py
      FLASK_DEBUG: 1
  proxy:
    image: nginx
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/example.conf
    ports:
      - 80:80
minimal nginx config (nginx.conf):
server {
    server_name example.com;
    server_tokens off;
    location / {
        proxy_pass http://frontend:80;
    }
}
server {
    server_name api.example.com;
    server_tokens off;
    location / {
        proxy_pass http://backend:8083;
    }
}
The request hits the nginx container and is routed to the right container according to the domain.
To use example.com and api.example.com locally, you need to edit your hosts file:
Linux: /etc/hosts
Windows: c:\windows\system32\drivers\etc\hosts
Mac: /private/etc/hosts
127.0.0.1 example.com api.example.com
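Without touching the hosts file, you can also verify the routing with curl by overriding the Host header (assuming the proxy from the compose file above is listening on port 80):

```shell
# Routed by nginx to the frontend container
curl -H "Host: example.com" http://localhost/

# Routed by nginx to the Flask backend
curl -H "Host: api.example.com" http://localhost/monitors
```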

Sharing volumes in networked docker containers with docker composer fails

I have two docker-compose.yml files.
The first one is a global one that I use to configure the nginx webserver; the other one holds the application code. Below are their configurations.
The first one, with the nginx configuration:
version: '3'
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    image: globaldocker
    container_name: app
    restart: unless-stopped
    tty: true
    working_dir: /var/www
    volumes:
      - ./:/var/www
      - ./dockerconfig/php/local.ini:/usr/local/etc/php/conf.d/local.ini
    networks:
      - common_network
  webserver:
    image: nginx
    container_name: webserver
    restart: unless-stopped
    tty: true
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./:/var/www
      - ./dockerconfig/nginx/:/etc/nginx/conf.d/
    networks:
      - webserver_network
      - common_network
networks:
  common_network:
    external: false
  webserver_network:
    external: false
The above creates two networks
global_docker_common_network, global_docker_webserver_network
In the dockerconfig folder there is an nginx configuration like:
server {
    listen 80;
    server_name pos.test www.pos.test;
    index index.php index.html;
    # other nginx configuration for pos.test
}
On the docker-compose configuration with the PHP file:
the one holding the source code for pos.test has the following configuration:
app:
  build:
    context: .
    dockerfile: Dockerfile
  image: posapp/php
  container_name: envanto_pos
  restart: unless-stopped
  tty: true
  working_dir: /var/www/pos
  volumes:
    - ./:/var/www/pos
    - ./dockerconfig/nginx/:/etc/nginx/conf.d/
  networks:
    - globaldocker_webserver_network
networks:
  globaldocker_webserver_network:
    external: true
to which I have added the external network.
When I try accessing pos.test through nginx it doesn't display the application, only the default nginx page.
I have tried opening a bash shell in the first configuration's nginx container and checked the /var/www/pos folder, but I can't see the files from the second docker config (the source code).
How do I share volumes with my nginx container so that when I access Docker via the exposed port 80 I am able to access my site pos.test?
What am I missing to make this work?
UPDATE
The two docker configuration files are located in different folders on my host machine.
UPDATE ON THE QUESTION
This is my nginx config file
server {
    listen 80;
    server_name pos.test www.pos.test;
    index index.php index.html;
    error_log /var/log/nginx/pos_error.log;
    access_log /var/log/nginx/pos_access.log;
    root /var/www/pos/web;

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass app:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
    }

    location / {
        try_files $uri $uri/ /index.php?$query_string;
        gzip_static on;
    }
}
You are mounting the current directory of each docker-compose file, so each container only has the source code that resides next to its own compose file.
You need a common directory:
First File
volumes:
  - /path_to_sc/common:/var/www
Second File
volumes:
  - /path_to_sc/common:/var/www/pos
When I try accessing Nginx pos.test it doesn't display the application
but only shows the default Nginx page
Probably your first file is not picking up the correct configuration. Double-check ./dockerconfig/nginx/:/etc/nginx/conf.d/ or run a command inside the container to verify the configuration file:
docker exec nginx bash -c "cat /etc/nginx/conf.d/filename.conf"
I have tried accessing the first docker nginx configuration bash and
checked on the var/www/pos folder but i cant see the files from the
second docker config(source code).
Mount the common directory so that it is accessible to both containers.
Update:
From your comment, it seems there is a syntax error in your docker-compose file. Take a look at this example:
web:
  image: nginx
  volumes:
    - ./data:/var/www/html/
  ports:
    - 80:80
  command: [nginx-debug, '-g', 'daemon off;']
web2:
  image: nginx
  volumes:
    - ./data:/var/www/html
  command: [nginx-debug, '-g', 'daemon off;']
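A quick way to confirm the bind mount is actually shared (assuming the service names from the example above, and that both services are up):

```shell
# Write a file through one container and read it from the other;
# both mount the same ./data directory from the host
docker exec web sh -c 'echo hello > /var/www/html/test.txt'
docker exec web2 cat /var/www/html/test.txt
```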

Access phpfpm inside docker container with nginx

Can the php-fpm inside a docker container be accessed from outside with nginx fastcgi_pass?
I have installed nginx on my Ubuntu host with apt install nginx, and I want to configure nginx with php-fpm, but php-fpm runs in a docker container.
docker-compose
version: "2"
services:
  phpfpm:
    image: bitnami/php-fpm:7.1
    container_name: "phpfpm_7.1"
    ports:
      - 9000:9000
    network_mode: "host"
    volumes:
      - ./tester/:/app
nginx config
location ~* \.php$ {
    fastcgi_param REQUEST_METHOD $request_method;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass 127.0.0.1:9000;
}
When I try to access a PHP file in the browser, nginx says "File not found".
Finally it works; I changed the docker-compose file to:
version: "2"
services:
  phpfpm7:
    image: bitnami/php-fpm:7.1
    container_name: "phpfpm_7.1"
    volumes:
      - /var/www/:/var/www/
    networks:
      vpcbr:
        ipv4_address: 192.168.85.2
networks:
  vpcbr:
    driver: bridge
    ipam:
      config:
        - subnet: 192.168.85.0/24
nginx config:
fastcgi_pass 192.168.85.2:9000;
fastcgi_index index.php;
include fastcgi.conf;
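Pinning a static container IP works, but it is fragile. Since the host's nginx sits outside the Docker network, another common approach is to run nginx in a container on the same network, so it can reach php-fpm by service name instead of by IP. A sketch under that assumption (the nginx service and site.conf file are hypothetical additions, not part of the original setup):

```yaml
version: "2"
services:
  phpfpm7:
    image: bitnami/php-fpm:7.1
    volumes:
      - /var/www/:/var/www/
  nginx:
    image: nginx
    ports:
      - 80:80
    volumes:
      - /var/www/:/var/www/
      - ./site.conf:/etc/nginx/conf.d/default.conf
# In site.conf, php-fpm is then reachable by service name:
#   fastcgi_pass phpfpm7:9000;
```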

How to allow access to Docker container in MERN project only through specific URL while blocking ports?

I have a MERN project set up with Docker. Development environment is fine; it's production I'm having trouble with.
This is the behavior that I desire:
In Development:
This is the state of the containers:
Nodemon runs in the node express image (from node:alpine) container with port 9000 open to the host. (this is the backend/api)
MongoDB runs in its own container based on the official image with port 27017 open to the host. (this is the database)
React runs with warm reload in its image (from node:alpine) container with port 3000 open to the host. (this is the frontend)
In Production:
This is the state of the containers:
Node runs in the node express image (from node:alpine) container with no ports open to the host.
MongoDB runs in its own container based on the official image with no ports open to the host.
React runs in its image (from nginx:alpine) container with port 80 open to the host.
The backend/api refers to the database using the container name, and the frontend/react container refers to the backend using the container name.
I put proxy: localhost:9000 in the react package.json file. In production, I put the following in the nginx.conf file.
location / {
    root /usr/share/nginx/html;
    index index.html index.htm;
    try_files $uri /index.html;
}
location /api {
    proxy_pass http://localhost:9000;
}
In the production docker-compose.yml file, I removed expose: "9000" and ports: "9000:9000" that were present in the docker-compose.yml file. I run docker-compose -f docker-compose.yml -f docker-compose.production.yml up.
My problem is that the ports "localhost:9000" and "localhost:27017" are still exposed in production for some reason. I want all routes, except for "example.com/api", to go through React. Only "example.com/api" must go directly to the backend.
Also, I'm not sure if this is related, but is there a way to make sure "example.com/api" goes to the backend without having to do require("express")().get('/api'...? As in, just doing require("express")().get('/'... takes calls to "example.com/api" by default.
Note: I used networks, not links, in order to connect containers together. Backend is connected to both React and MongoDB, while React and MongoDB are not connected to each other.
Here is my docker-compose.yml:
version: "3.7"
services:
  ##############################
  # Back-End Container
  ##############################
  backend:
    container_name: backend
    build:
      context: ./backend/
      target: development
    restart: always
    expose:
      - "9000"
    environment:
      - MONGO_URI=mongodb://db:27017/db
      - PORT=9000
      - NODE_ENV=development
      - DEBUG=app
      - JWT_SECRET=secretsecret
      - JWT_EXPIRY=30d
    ports:
      - "9000:9000"
      - "9229:9229"
    volumes:
      - "./backend/:/home/node/app/"
      - /home/node/app/node_modules/
    depends_on:
      - db
    networks:
      - client
      - server
  ##############################
  # Front-End Container
  ##############################
  frontend:
    container_name: frontend
    build:
      context: ./frontend/
      target: development
    restart: always
    expose:
      - "3000"
      - "35729"
    environment:
      - NODE_ENV=development
      - REACT_APP_PORT=3000
      - CHOKIDAR_USEPOLLING=true
    ports:
      - "3000:3000"
      - "35729:35729"
    volumes:
      - "./frontend/:/home/node/app/"
      - /home/node/app/node_modules/
    networks:
      - client
  ##############################
  # MongoDB Container
  ##############################
  db:
    container_name: db
    image: mongo
    restart: always
    volumes:
      - dbdata:/data/db/
    ports:
      - "27017:27017"
    networks:
      - server
networks:
  client:
  server:
volumes:
  dbdata:
Here is my .env file
MONGO_URI=db:27017/somedb?authSource=admin
PORT=9000
MONGO_PORT=27017
MONGO_INITDB_ROOT_USERNAME=mongoadmin
MONGO_INITDB_ROOT_PASSWORD=secret
MONGO_INITDB_DATABASE=somedb
NODE_ENV=production
Here is my docker-compose.production.yml:
version: "3.7"
services:
  ##############################
  # Back-End Container
  ##############################
  backend:
    container_name: backend
    init: true
    environment:
      - MONGO_URI=mongodb://${MONGO_INITDB_ROOT_USERNAME}:${MONGO_INITDB_ROOT_PASSWORD}@${MONGO_URI}
      - PORT=${PORT}
      - NODE_ENV=${NODE_ENV}
    build:
      context: ./backend/
      target: production
    restart: always
    depends_on:
      - db
    networks:
      - client
      - server
  ##############################
  # Front-End Container
  ##############################
  frontend:
    container_name: frontend
    build:
      context: ./frontend/
      target: production
    restart: always
    environment:
      - NODE_ENV=${NODE_ENV}
    expose:
      - "80"
    ports:
      - "80:80"
    networks:
      - client
  ##############################
  # MongoDB Container
  ##############################
  db:
    container_name: db
    image: mongo
    restart: always
    environment:
      MONGO_INITDB_ROOT_USERNAME: ${MONGO_INITDB_ROOT_USERNAME}
      MONGO_INITDB_ROOT_PASSWORD: ${MONGO_INITDB_ROOT_PASSWORD}
    volumes:
      - dbdata:/data/db/
    ports:
      - "27017:27017"
    networks:
      - server
networks:
  client:
  server:
volumes:
  dbdata:
The Dockerfiles only have FROM, WORKDIR, RUN, COPY, and CMD.
I run docker-compose -f docker-compose.yml -f docker-compose.production.yml up.
Why did I include the development docker-compose.yml? All I had to do was remove that part and run docker-compose.production.yml directly; no overriding was needed.
The solution was to run:
docker-compose -f docker-compose.production.yml up
For those wondering why the proxy didn't lead to the root of the backend: it was because I didn't include a trailing slash in the proxy URL in the nginx.conf.
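For reference, the trailing slash on proxy_pass changes how the matched prefix is forwarded; a sketch of the difference:

```nginx
# Without a trailing slash the full original URI is forwarded:
#   /api/users  ->  http://localhost:9000/api/users
location /api {
    proxy_pass http://localhost:9000;
}

# With trailing slashes the matched /api/ prefix is replaced:
#   /api/users  ->  http://localhost:9000/users
location /api/ {
    proxy_pass http://localhost:9000/;
}
```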

Nginx-proxy doesn't forward to container exposing port 3001 and rewrites URL to static IP

I have a web application running on Ruby on Rails with SOLR in docker-compose. It exposes port 3001, and I want to serve it under a subdomain URL that my university owns (I have access to a configuration panel where I can only specify the "target", which is, I guess, the IP of the local server the web application runs on).
I first tried to do this redirection without nginx, but the URL data.chembiosys.de was just redirected to http://static.ip:3001.
The app is running, though, and is accessible.
So I wanted to try nginx as a reverse proxy, but the effect is basically the same:
- I need to specify the port number and the IP of my server in the configuration panel of the domain name of interest
- when I type "data.chembiosys.de" in the browser, it shows the IP and the port number
What I do is that I first create a nginx-proxy network:
sudo docker network create nginx-proxy
Then I start nginx-proxy with docker-compose.yml:
version: "3"
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    container_name: nginx-proxy
    restart: always
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - /home/myhome/Projects/nginx-proxy/conf/my_conf.conf:/etc/nginx/conf.d/my_proxy.conf:ro
  whoami:
    image: jwilder/whoami
    environment:
      - VIRTUAL_HOST=whoami.local
networks:
  default:
    external:
      name: nginx-proxy
Via the second volume, I copy the following config file into the nginx-proxy container:
server {
    listen 80;
    server_name http://mystaticip:3001;
    client_max_body_size 2G;
    return 301 http://data.chembiosys.de$request_uri;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host data.chembiosys.de;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://mystaticip:3001;
    }
}
And finally, I run the rails app docker-compose.yml
version: '3'
services:
  db:
    image: mysql:5.7
    container_name: seek-mysql_cbs
    restart: always
    env_file:
      - docker/db.env
    volumes:
      - seek-mysql-db_cbs:/var/lib/mysql
  seek: # The SEEK application
    #build: .
    image: fairdom/seek:1.7
    container_name: seek_cbs
    command: docker/entrypoint.sh
    restart: always
    environment:
      RAILS_ENV: production
      SOLR_PORT: 8983
      NO_ENTRYPOINT_WORKERS: 1
    env_file:
      - docker/db.env
    volumes:
      - seek-filestore_cbs:/seek/filestore
      - seek-cache_cbs:/seek/tmp/cache
    ports:
      - "3001:3000"
    depends_on:
      - db
      - solr
    links:
      - db
      - solr
  seek_workers: # The SEEK delayed job workers
    #build: .
    image: fairdom/seek:1.7
    container_name: seek-workers_cbs
    command: docker/start_workers.sh
    restart: always
    environment:
      RAILS_ENV: production
      SOLR_PORT: 8983
    env_file:
      - docker/db.env
    volumes:
      - seek-filestore_cbs:/seek/filestore
      - seek-cache_cbs:/seek/tmp/cache
    depends_on:
      - db
      - solr
    links:
      - db
      - solr
  solr:
    image: fairdom/seek-solr
    container_name: seek-solr_cbs
    volumes:
      - seek-solr-data_cbs:/opt/solr/server/solr/seek/data
    restart: always
volumes:
  seek-filestore_cbs:
    driver: local-persist
    driver_opts:
      mountpoint: /home/myhome/Projects/ChemBioSys/docker_volumes/seek-filestore_cbs
  seek-mysql-db_cbs:
    driver: local-persist
    driver_opts:
      mountpoint: /home/myhome/Projects/ChemBioSys/docker_volumes/seek-mysql-db_cbs
  seek-solr-data_cbs:
    driver: local-persist
    driver_opts:
      mountpoint: /home/myhome/Projects/ChemBioSys/docker_volumes/seek-solr-data_cbs
  seek-cache_cbs:
    driver: local-persist
    driver_opts:
      mountpoint: /home/myhome/Projects/ChemBioSys/docker_volumes/seek-cache_cbs
networks:
  default:
    external:
      name: nginx-proxy
I have the feeling that nginx-proxy is simply failing to connect the URL to the app. What am I doing wrong, and how do I connect the app to the URL with nginx? Also, how do I avoid the rewrite of the URL to IP:port?
P.S. The static IP I got from the SysAdmins is alphanumerical and I see the following warning when the nginx-proxy docker-compose runs:
nginx-proxy | [warn] 30#30: server name "http://pc08.ian.uni-jena.de:3001" has suspicious symbols in /etc/nginx/conf.d/my_proxy.conf:3
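A note on the setup above: jwilder/nginx-proxy generates its configuration from container environment variables rather than from hand-written server blocks, so one possible fix (a sketch, assuming the seek service from the compose file above should be served as data.chembiosys.de) is to set VIRTUAL_HOST on the Rails container instead of mounting a custom my_proxy.conf:

```yaml
services:
  seek:
    image: fairdom/seek:1.7
    environment:
      # nginx-proxy routes requests whose Host header matches VIRTUAL_HOST
      - VIRTUAL_HOST=data.chembiosys.de
      # the container-internal port nginx-proxy should forward to
      - VIRTUAL_PORT=3000
networks:
  default:
    external:
      name: nginx-proxy
```

The warning itself appears because server_name must contain only a hostname; the "http://" scheme and the ":3001" port are not valid inside a server_name directive.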
