Docker containers file access

I have run into a problem where my backend (NestJS) image upload function uploads images to a folder inside the backend container (at /resources/images/avatar/image1.jpg).
Now, I want to access this image from the frontend (ReactJS) container to display the user avatar. However, I get an UNKNOWN_URL_SCHEME error.
Here is my docker-compose config:
backend:
  container_name: backend
  build:
    context: ./backend
    target: development
  volumes:
    - ./backend:/usr/src/app
    - /usr/src/app/node_modules
  ports:
    # - ${BACKEND_PORT}:${BACKEND_PORT}
    - 5000:4000
  command: npm run start:dev
  env_file:
    - .env
  networks:
    - webnet
  depends_on:
    - mongodb
    - rabbitmq
and
frontend:
  container_name: Frontend
  build:
    context: ./frontend
    target: development
  volumes:
    - ./frontend:/usr/src/app
    - ./usr/src/app/node_modules
  ports:
    - 3000:3000
  stdin_open: true
  environment:
    - CHOKIDAR_USEPOLLING=true
    - CI=true
  command: npm run start
  networks:
    - webnet
  depends_on:
    - backend
    - ms-consent
    - mongodb
    - rabbitmq
My images are in the root folder of the backend code:
/resources/images/avatar/image1.jpg
Now, in the frontend, I refer to the file location as below:
const image_default_path = 'backend://resources/images';
where I assume backend:// refers to the container name and lets me access the files; however, it doesn't work.
I would like to seek your help if you have solved a similar issue.
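For context, backend:// is not a URL scheme a browser understands: service names such as backend only resolve inside the Docker network, not from the browser running on the host. Below is a minimal sketch of one common workaround, assuming the NestJS app serves the /resources folder statically over HTTP and the frontend is a create-react-app project; the REACT_APP_API_URL variable is a hypothetical name, not something from the original post.

frontend:
  # ... rest of the frontend service as above ...
  environment:
    - CHOKIDAR_USEPOLLING=true
    - CI=true
    # Hypothetical: the backend base URL as seen from the browser on the host,
    # using the published port mapping (host 5000 -> container 4000)
    - REACT_APP_API_URL=http://localhost:5000

The React code would then request the avatar as REACT_APP_API_URL + '/resources/images/avatar/image1.jpg' instead of using a backend:// path.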

Related

HTTPS for docker containers

I am developing a workflow service as a training project. Abstracting from the details, everything you need to know for this question is in the image. For deployment, I rented a server and ran docker-compose on it. Everything works well, but what I'm worried about is that ports 8000 and 5432 are open.
The first question is: is it worth worrying about? And if so, how do I get rid of it?
The docker-compose file content is below.
version: "3"
services:
db:
container_name: 'emkk-db'
image: postgres
volumes:
- ./backend/data:/var/lib/postgresql/data
env_file:
- ./backend/db.env
ports:
- "5432:5432"
backend:
container_name: 'emkk-backend'
image: emkk_backend
build: ./backend
volumes:
- ./backend:/emkk/backend
env_file:
- ./backend/.env
ports:
- "8000:8000"
depends_on:
- db
frontend:
container_name: 'emkk-frontend'
image: emkk_frontend
build: ./frontend
command: npm run start
env_file:
- ./frontend/.env
volumes:
- /emkk/frontend/node_modules
- ./frontend:/emkk/frontend
ports:
- "80:80"
depends_on:
- backend
I also want to configure HTTPS. I tried installing nginx and putting a certificate on it using certbot, and then proxying requests to the containers. I sat with this for several hours and still did not manage to achieve anything better than HTTPS for the nginx start page.
Maybe I'm going about this completely wrong, but I'm new to this and haven't had to deal with deployments before. I would be grateful for answers containing an idea or an example of how this can be done.
If you don't need to reach port 8000 (presumably the web application server) or 5432 (the database) from outside the server, you can change docker-compose.yml as shown below.
- Expose only the ports that external clients actually need.
- When the web frontend connects to the backend, use the service name, e.g. backend:8000.
- When the backend connects to the database, use the service name, e.g. db:5432.
version: "3"
services:
db:
container_name: 'emkk-db'
image: postgres
volumes:
- ./backend/data:/var/lib/postgresql/data
env_file:
- ./backend/db.env
backend:
container_name: 'emkk-backend'
image: emkk_backend
build: ./backend
volumes:
- ./backend:/emkk/backend
env_file:
- ./backend/.env
depends_on:
- db
frontend:
container_name: 'emkk-frontend'
image: emkk_frontend
build: ./frontend
command: npm run start
env_file:
- ./frontend/.env
volumes:
- /emkk/frontend/node_modules
- ./frontend:/emkk/frontend
ports:
- "80:80"
depends_on:
- backend
You can also use Nginx Proxy Manager to serve the site over HTTPS with a certificate from certbot (Let's Encrypt).
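A minimal sketch of how such a proxy could be added to the compose file above, assuming the jc21/nginx-proxy-manager image and its documented defaults; note that the frontend would then stop publishing port 80 itself and be reached through the proxy over the Docker network (this is an illustration, not part of the original answer):

  proxy:
    image: jc21/nginx-proxy-manager:latest
    restart: unless-stopped
    ports:
      - "80:80"    # HTTP, also used for the ACME challenge
      - "443:443"  # HTTPS
      - "81:81"    # admin UI; consider restricting access to it
    volumes:
      - ./proxy/data:/data
      - ./proxy/letsencrypt:/etc/letsencrypt
    depends_on:
      - frontend

In the admin UI you would then create a proxy host that forwards your domain to frontend:80 and request a Let's Encrypt certificate for it.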

How to create Docker images and run them on an EC2 instance?

I'm very new to the Docker world. I have a docker-compose file which works fine for me.
But how do I create these Docker images and run them on an EC2 instance?
Any help would be appreciated.
PS: I don't want to use ECS or ECR for this. I hope Docker Hub should work fine for storing and retrieving these images (correct me if I'm wrong).
Thanks.
version: "3"
services:
app:
image: node:12.13.1
volumes:
- ./:/app
working_dir: /app
depends_on:
- mongo
- nats
environment:
NODE_ENV: development
ports:
- 3000:3000
command: npm run dev
app_2:
image: node:12.13.1
volumes:
- ../app_2/:/app
working_dir: /app_2
depends_on:
- mongo
- nats
links:
- mongo
environment:
NODE_ENV: development
ports:
- 4000:4000
command: npm run dev
mongo:
image: mongo
expose:
- 27017
ports:
- "27017:27017"
volumes:
- ./data/db:/data/db
nats:
image: 'nats:2.1.2'
expose:
- "4222"
ports:
- "8222:8222"
hostname: nats-server
Install docker and docker-compose on the instance and then just run docker-compose up. Just remember to open port 4000 in the EC2 instance's security group and make it accessible from your IP or any other IPs that need it.
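Note that the compose file above runs the stock node:12.13.1 image with bind-mounted source code, so the code itself also has to reach the instance somehow. A minimal sketch of one option, assuming each app service gets its own Dockerfile and an image pushed to Docker Hub (the your-dockerhub-user/app name and the Dockerfile are hypothetical, not from the original question):

  app:
    # Hypothetical: build the application into its own image instead of bind-mounting source
    build: .
    image: your-dockerhub-user/app:latest
    depends_on:
      - mongo
      - nats
    environment:
      NODE_ENV: production
    ports:
      - 3000:3000
    command: npm run start

You could then run docker-compose build and docker-compose push locally, copy only the compose file to the EC2 instance, and run docker-compose pull && docker-compose up -d there.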

How do I build and run an api-platform image in a production Docker container?

I have followed the api-platform tutorial and successfully built and started the application using Docker on my localhost machine.
I have a production server running Ubuntu 16.04.5 LTS, and a newly installed Docker version 18.06.1-ce.
How would I build this code on my local machine and run it on the Docker server?
I have also looked at the Deploying API Platform Applications documentation, but I am not sure how to apply it.
I am struggling to understand how to get the api-platform build from my localhost onto the server.
This is the docker-compose.yml file; try running docker-compose up -d:
version: '3.4'

services:
  php:
    image: ${CONTAINER_REGISTRY_BASE}/php
    build:
      context: ./api
      target: api_platform_php
      cache_from:
        - ${CONTAINER_REGISTRY_BASE}/php
        - ${CONTAINER_REGISTRY_BASE}/nginx
        - ${CONTAINER_REGISTRY_BASE}/varnish
    depends_on:
      - db
    # Comment out these volumes in production
    volumes:
      - ./api:/srv/api:rw,cached
      # If you develop on Linux, uncomment the following line to use a bind-mounted host directory instead
      # - ./api/var:/srv/api/var:rw

  api:
    image: ${CONTAINER_REGISTRY_BASE}/nginx
    build:
      context: ./api
      target: api_platform_nginx
      cache_from:
        - ${CONTAINER_REGISTRY_BASE}/php
        - ${CONTAINER_REGISTRY_BASE}/nginx
        - ${CONTAINER_REGISTRY_BASE}/varnish
    depends_on:
      - php
    # Comment out this volume in production
    volumes:
      - ./api/public:/srv/api/public:ro
    ports:
      - "8080:80"

  cache-proxy:
    image: ${CONTAINER_REGISTRY_BASE}/varnish
    build:
      context: ./api
      target: api_platform_varnish
      cache_from:
        - ${CONTAINER_REGISTRY_BASE}/php
        - ${CONTAINER_REGISTRY_BASE}/nginx
        - ${CONTAINER_REGISTRY_BASE}/varnish
    depends_on:
      - api
    volumes:
      - ./api/docker/varnish/conf:/usr/local/etc/varnish:ro
    tmpfs:
      - /usr/local/var/varnish:exec
    ports:
      - "8081:80"

  db:
    # In production, you may want to use a managed database service
    image: postgres:10-alpine
    environment:
      - POSTGRES_DB=api
      - POSTGRES_USER=api-platform
      # You should definitely change the password in production
      - POSTGRES_PASSWORD=!ChangeMe!
    volumes:
      - db-data:/var/lib/postgresql/data:rw
      # You may use a bind-mounted host directory instead, so that it is harder to accidentally remove the volume and lose all your data!
      # - ./docker/db/data:/var/lib/postgresql/data:rw
    ports:
      - "5432:5432"

  client:
    # Use a static website hosting service in production
    # See https://github.com/facebookincubator/create-react-app/blob/master/packages/react-scripts/template/README.md#deployment
    image: ${CONTAINER_REGISTRY_BASE}/client
    build:
      context: ./client
      cache_from:
        - ${CONTAINER_REGISTRY_BASE}/client
    env_file:
      - ./client/.env
    volumes:
      - ./client:/usr/src/client:rw,cached
      - /usr/src/client/node_modules
    ports:
      - "80:3000"

  admin:
    # Use a static website hosting service in production
    # See https://github.com/facebookincubator/create-react-app/blob/master/packages/react-scripts/template/README.md#deployment
    image: ${CONTAINER_REGISTRY_BASE}/admin
    build:
      context: ./admin
      cache_from:
        - ${CONTAINER_REGISTRY_BASE}/admin
    volumes:
      - ./admin:/usr/src/admin:rw,cached
      - /usr/src/admin/node_modules
    ports:
      - "81:3000"

  h2-proxy:
    # Don't use this proxy in prod
    build:
      context: ./h2-proxy
    depends_on:
      - client
      - admin
      - api
      - cache-proxy
    ports:
      - "443:443"
      - "444:444"
      - "8443:8443"
      - "8444:8444"

volumes:
  db-data: {}
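To get this from localhost to the production server, one common pattern (a sketch, not the exact procedure from the api-platform documentation) is to set CONTAINER_REGISTRY_BASE to a registry you can push to, run docker-compose build and docker-compose push locally, and then run a trimmed compose file on the server that pulls the pushed images and drops the development bind mounts. The file below is such a sketch; the file name docker-compose.prod.yml and the DB_PASSWORD variable are hypothetical:

# docker-compose.prod.yml (hypothetical, standalone file used on the server)
version: '3.4'

services:
  php:
    image: ${CONTAINER_REGISTRY_BASE}/php
    depends_on:
      - db

  api:
    image: ${CONTAINER_REGISTRY_BASE}/nginx
    depends_on:
      - php
    ports:
      - "8080:80"

  db:
    image: postgres:10-alpine
    environment:
      - POSTGRES_DB=api
      - POSTGRES_USER=api-platform
      - POSTGRES_PASSWORD=${DB_PASSWORD}   # hypothetical variable; do not keep !ChangeMe! in production
    volumes:
      - db-data:/var/lib/postgresql/data:rw

volumes:
  db-data: {}

On the server you would then run docker-compose -f docker-compose.prod.yml pull followed by docker-compose -f docker-compose.prod.yml up -d.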

Communication between multiple hosts of nginx in docker-compose

I have a docker-compose like this:
version: "3"
networks:
LEMP:
services:
nginx:
image: nginx:latest
ports:
- "8080:80"
- "80:80"
- "443:443"
- "3333:3333"
volumes:
- /var/www:/var/www
- ./nginx-conf/server1.local.conf:/etc/nginx/conf.d/server1.local.conf
- ./nginx-conf/server2.local.conf:/etc/nginx/conf.d/server2.local.conf
depends_on:
- php
networks:
- LEMP
extra_hosts:
- "server1.local:127.0.0.1"
- "server2.local:127.0.0.1"
php:
build: ./php
restart: always
volumes:
- /var/www:/var/www
ports:
- "9000:9000"
networks:
- LEMP
mysql:
image: mysql:5.7
restart: always
ports:
- "3306:3306"
depends_on:
- nginx
environment:
- MYSQL_ROOT_PASSWORD=my_password
volumes:
- db:/var/lib/mysql
networks:
- LEMP
redis:
image: redis:alpine
restart: always
ports:
- "6379:6379"
networks:
- LEMP
volumes:
db:
PHP Dockerfile:
FROM php:7.1-fpm
RUN docker-php-ext-install pdo pdo_mysql
WORKDIR /var/www
If I try to reach server1.local or server2.local from my browser or Postman, it works fine, but if I try to reach server2.local (a REST API) from server1.local, it can't be reached.
I read this discussion, but of course I can't use my PC's IP since the configuration will be shared with other colleagues.
I know about nginx-proxy, but that requires setting up separate services for each project, whereas in my case I only have git projects stored in /var/www/, so how should I set them up starting from a folder in /var/www? For example, server2.local has both PHP and an internal reverse proxy for a local host (localhost:3333). Should I start from the php image, install Node.js/pm2, run them, and so on? It seems a bit weird to me.
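For context, the extra_hosts entries above only affect name resolution inside the nginx container itself, and they point at 127.0.0.1; other containers on the LEMP network, such as php, cannot resolve server1.local or server2.local at all. A minimal sketch of one way to make those names resolve to the nginx container from anywhere on the network, using compose network aliases (an illustration, not something from the original post):

  nginx:
    image: nginx:latest
    # ... ports, volumes and depends_on as above ...
    networks:
      LEMP:
        aliases:
          - server1.local
          - server2.local

With this, code running in the php container can call http://server2.local/... and the request reaches the shared nginx container, which routes it by the Host header to the matching server block.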

Docker machine copying all of the host machine files?

I'm fairly new to docker, but I recently discovered something that I just can't wrap my head around. I started a docker machine:
docker-machine create -d virtualbox machine_name
Created a docker-compose file for my application:
version: '3.3'
services:
  client:
    container_name: client
    build:
      context: ./services/client
      dockerfile: Dockerfile
    volumes:
      - './services/client:/usr/src/app'
    ports:
      - '3007:3000'
    environment:
      - NODE_ENV=development
    depends_on:
      - project
    links:
      - project
  db:
    container_name: db
    build:
      context: ./services/db
      dockerfile: Dockerfile
    ports:
      - 5435:5432
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
  project:
    container_name: project
    build: ./services/project
    volumes:
      - './services/project:/usr/src/app'
      - './services/project/package.json:/usr/src/app/package.json'
    ports:
      - 3000:3000
    environment:
      - DATABASE_URL=postgres://postgres:postgres@db:5432/esports_manager_dev
      - DATABASE_TEST_URL=postgres://postgres:postgres@db:5432/esports_manager_test
      - NODE_ENV=${NODE_ENV}
      - TOKEN_SECRET=tempsectre
    depends_on:
      - db
    links:
      - db
and then I SSH'd into the docker machine and found my entire filesystem there. Is this intended behaviour? I can't seem to find anything in the docs that talks about it.
