I have this docker-compose file:
version: "3.9"
services:
producer:
build:
context: ./producer
dockerfile: Dockerfile.producer
target: prod
ports:
- 8080:8080
env_file:
- ./producer/producer.env
depends_on:
- db
networks:
- network-db
cleanup:
build:
context: ./cleanup
dockerfile: Dockerfile.cleanup
target: prod
ports:
- 4040:4040
env_file:
- ./cleanup/cleanup.env
depends_on:
- producer
networks:
- network-db
db:
image: postgres
env_file:
- .env
volumes:
- postgres-data:/var/lib/postgresql/data
ports:
- 5432:5432
networks:
- network-db
restart: always
volumes:
postgres-data:
networks:
network-db:
driver: bridge
In both the producer and cleanup services' codebases, I try to connect to the same port, 5432, like so:
configData := fmt.Sprintf("postgres://%v:%v@%v:%v/%v?sslmode=disable",
    pgUser,
    pgPassword,
    pgHost,
    pgPort, // 5432
    pgName,
)
GDB, err = gorm.Open(postgres.Open(configData), &gorm.Config{})
At the moment I get a connection refused error, and I am assuming it has something to do with the port.
So my question is: can the db service handle connections from multiple services on the same port (5432)? If not, what is a better way to solve the problem?
By the way, the .env files should not be the source of the issue, since they match (port=5432, host=db, etc.).
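For reference, the kind of env file I mean looks roughly like this; the variable names below are only illustrative (they are not taken from the real producer.env), but the host and port match what I described above:

# producer/producer.env (illustrative layout, actual variable names may differ)
PG_HOST=db          # the compose service name of the database, not localhost
PG_PORT=5432        # Postgres listens on 5432 inside the network
PG_USER=postgres    # placeholder credentials
PG_PASSWORD=postgres
PG_NAME=postgres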
Related
I'm having a problem persisting data with docker-compose.
I want my chatmysql service to persist the data I put inside the database, but every time I run docker-compose down it all vanishes.
I checked the /var/lib/docker/volumes directory to see whether data is stored there while the containers are running, and the volume was completely empty.
I didn't have this issue when I was running the containers with the docker run command, so I guess it's the fault of my docker-compose.yaml file. Can someone help me?
I'm running this on Ubuntu 20.04.
version: '3'
services:
  chatmysql:
    image: mysql/mysql-server
    container_name: chatmysql
    hostname: db
    user: root
    networks:
      - chatnet
    ports:
      - 3307:3306
    volumes:
      - chatmysqlvolume:/lib/var/mysql
  chatbackend:
    depends_on:
      - chatmysql
    build:
      context: backend/src
    container_name: chatbackend
    hostname: backend
    networks:
      - chatnet
    ports:
      - 8080:8080
    environment:
      - MYSQLUSERNAME=${MYSQLUSERNAME:-user}
      - MYSQLPASSWORD=${MYSQLPASSWORD:?database password not set}
      - MYSQLHOST=${MYSQLHOST:-db}
      - MYSQLPORT=${MYSQLPORT:-3306}
      - MYSQLDBNAME=${MYSQLDBNAME:-test}
    restart: always
    deploy:
      restart_policy:
        condition: on-failure
  chatfrontend:
    build: frontend
    container_name: chatfrontend
    hostname: front
    networks:
      - chatnet
    ports:
      - 3000:3000
volumes:
  chatmysqlvolume:
networks:
  chatnet:
    driver: bridge
You need to change the mounted volume path: MySQL stores its data under /var/lib/mysql, not /lib/var/mysql. Try this:
version: '3.7'
services:
  chatmysql:
    image: mysql/mysql-server
    container_name: chatmysql
    hostname: db
    user: root
    networks:
      - chatnet
    ports:
      - 3307:3306
    volumes:
      - chatmysqlvolume:/var/lib/mysql
  chatbackend:
    depends_on:
      - chatmysql
    build:
      context: backend/src
    container_name: chatbackend
    hostname: backend
    networks:
      - chatnet
    ports:
      - 8080:8080
    environment:
      - MYSQLUSERNAME=${MYSQLUSERNAME:-user}
      - MYSQLPASSWORD=${MYSQLPASSWORD:?database password not set}
      - MYSQLHOST=${MYSQLHOST:-db}
      - MYSQLPORT=${MYSQLPORT:-3306}
      - MYSQLDBNAME=${MYSQLDBNAME:-test}
    restart: always
    deploy:
      restart_policy:
        condition: on-failure
  chatfrontend:
    build: frontend
    container_name: chatfrontend
    hostname: front
    networks:
      - chatnet
    ports:
      - 3000:3000
volumes:
  chatmysqlvolume:
networks:
  chatnet:
    driver: bridge
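With the corrected mount path, MySQL writes its data into the named volume, so it survives docker-compose down, which removes containers but keeps named volumes unless you pass -v/--volumes. A quick way to verify (the volume name is prefixed with your compose project name, which defaults to the directory name, so replace <project> accordingly):

# start the stack and insert some data into the database
docker-compose up -d

# stop and remove the containers; the named volume stays
docker-compose down

# the volume should still exist and point at a non-empty directory on the host
docker volume inspect <project>_chatmysqlvolume

# bring the stack back up; the data should still be there
docker-compose up -d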
I have a docker-compose.yml file which contains frontend, backend, testing, postgres and pgadmin containers. All the containers except testing are able to communicate with each other, but the testing container fails to communicate with the backend and frontend containers in docker-compose.
version: '3.7'
services:
  frontend:
    container_name: test-frontend
    build:
      context: ./frontend
      dockerfile: Dockerfile.local
    ports:
      - '3000:3000'
    networks:
      - test-network
    environment:
      # For the frontend can be applied only during the build!
      # (while it's applied when TS is compiled)
      # You have to build manually without cache if one of those are changed at least for the prod mode.
      - REACT_APP_BACKEND_API=http://localhost:8000/api/v1
      - REACT_APP_GOOGLE_CLIENT_ID=1234567dfghjjnfd
      - CI=true
      - CHOKIDAR_USEPOLLING=true
  postgres:
    image: postgres
    environment:
      POSTGRES_USER: dev
      POSTGRES_PASSWORD: dev
      PGDATA: /data/postgres
    volumes:
      - postgres:/data/postgres
    ports:
      - "5432:5432"
    networks:
      - test-network
    restart: unless-stopped
  pgadmin:
    image: dpage/pgadmin4
    environment:
      PGADMIN_DEFAULT_EMAIL: "dev@dev.com"
      PGADMIN_DEFAULT_PASSWORD: dev
    volumes:
      - pgadmin:/root/.pgadmin
      - ./pgadmin-config/servers.json:/pgadmin4/servers.json
    ports:
      - "5050:80"
    networks:
      - test-network
    restart: unless-stopped
  backend:
    container_name: test-backend
    build:
      context: ./backend
      dockerfile: Dockerfile.local
    ports:
      - '8000:80'
    volumes:
      - ./backend:/app
    command: >
      bash -c "alembic upgrade head
      && exec /start-reload.sh"
    networks:
      - test-network
    depends_on:
      - postgres
    environment:
      - GOOGLE_APPLICATION_CREDENTIALS=/app/.secret/secret.json
      - APP_DB_CONNECTION_STRING=postgresql+psycopg2://dev:dev@postgres:5432/postgres
      - LOG_LEVEL=debug
      - SQLALCHEMY_ECHO=True
      - AUTH_ENABLED=True
      - CORS=*
      - GCP_ALLOWED_DOMAINS=*
  testing:
    container_name: test-testing
    build:
      context: ./testing
      dockerfile: Dockerfile
    volumes:
      - ./testing:/isp-app
    command: >
      bash -c "/wait
      && robot ."
    networks:
      - test-network
    depends_on:
      - backend
      - frontend
    environment:
      - WAIT_HOSTS= frontend:3000, backend:8000
      - WAIT_TIMEOUT= 3000
      - WAIT_SLEEP_INTERVAL=300
      - WAIT_HOST_CONNECT_TIMEOUT=300
volumes:
  postgres:
  pgadmin:
networks:
  test-network:
    driver: bridge
All the containers are attached to test-network. When the testing container tries to connect to frontend:3000 or backend:8000, it throws "Host [ backend:8000] not yet available".
How can I fix it?
Note: Thanks @Ferran Buireu for the suggestion. I'm quite sure I'll get downvoted because I'm very new to Docker and have only just moved from the networking world into systems and programming.
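One way to narrow this down is to reproduce the wait check by hand from inside the testing container; the exact tools depend on what the image ships with, so the commands below are only a sketch (replace <project> with the compose project name, which defaults to the directory name):

# open a shell in the testing container (service name from the compose file above)
docker-compose exec testing bash

# inside the container: check that the service names resolve on test-network
getent hosts backend frontend

# probe the same host:port pairs that WAIT_HOSTS uses
curl -v http://frontend:3000/
curl -v http://backend:8000/

# back on the host: confirm all containers are attached to the same network
docker network inspect <project>_test-network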
After deploying GatsbyJS, I get the socket.io error "net::ERR_CONNECTION_REFUSED".
Every page works properly when I browse to it, but I think something is not running correctly.
How can I solve this error? (below is the error capture)
I implement and deploy these services on Ubuntu 20.04.2 with Docker 20.10.6; please see the "docker-compose.yml" below.
version: "3"
services:
frontendapp01:
working_dir: /frontendapp01
build:
context: ./frontendapp01
dockerfile: Dockerfile
depends_on:
- backendsrv01
- mongoserver
volumes:
- ./sentric01:/srv/front
ports:
- "8001:8000"
environment:
GATSBY_WEBPACK_PUBLICPATH: /
STRAPI_URL: backendsrv01:1337
networks:
- vpsnetwork
frontendapp02:
working_dir: /frontendapp02
build:
context: ./frontendapp02
dockerfile: Dockerfile
depends_on:
- backendsrv02
- mongoserver
volumes:
- ./sentric02:/srv/front
ports:
- "8002:8000"
environment:
GATSBY_WEBPACK_PUBLICPATH: /
STRAPI_URL: backendsrv02:1338
networks:
- vpsnetwork
frontendapp03:
working_dir: /frontendapp03
build:
context: ./frontendapp03
dockerfile: Dockerfile
depends_on:
- backendsrv02
- mongoserver
volumes:
- ./sentric03:/srv/front
ports:
- "8003:8000"
environment:
GATSBY_WEBPACK_PUBLICPATH: /
STRAPI_URL: backendsrv02:1338
networks:
- vpsnetwork
backendsrv01:
image: strapi/strapi
container_name: backendsrv01
restart: unless-stopped
environment:
DATABASE_CLIENT: mongo
DATABASE_NAME: essential
DATABASE_HOST: mongoserver
DATABASE_PORT: 27017
networks:
- vpsnetwork
volumes:
- ./app01:/srv/app
ports:
- "1337:1337"
backendsrv02:
image: strapi/strapi
container_name: backendsrv02
restart: unless-stopped
environment:
DATABASE_CLIENT: mongo
DATABASE_NAME: solven
DATABASE_HOST: mongoserver
DATABASE_PORT: 27017
networks:
- vpsnetwork
volumes:
- ./app02:/srv/app
ports:
- "1338:1337"
mongoserver:
image: mongo
container_name: mongoserver
restart: unless-stopped
networks:
- vpsnetwork
volumes:
- vpsappdata:/data/db
ports:
- "27017:27017"
networks:
vpsnetwork:
driver: bridge
volumes:
vpsappdata:
The socket connection only appears during the development stage (gatsby develop); it is there to refresh and update the browser on each save via hot-reloading, without losing component state. This feature is known as fast refresh.
As said, and for obvious reasons, this only applies to gatsby develop; under gatsby build there is no connection socket. If your Docker development environment shares ports 8000 and 8001 (according to your docker-compose.yml setup), then once the site is built the socket can break, because the scope of the project has changed.
In short, you don't have to worry about it: your project seems to build properly, and the log only appears because of the port shared between environments.
Further reading:
https://www.gatsbyjs.com/docs/conceptual/overview-of-the-gatsby-build-process/
https://www.gatsbyjs.com/docs/reference/local-development/fast-refresh/
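For completeness, a rough sketch of the two modes using the standard Gatsby CLI (the host/port flags here are just illustrative and not taken from the Dockerfiles above):

# development mode: serves the site and opens the hot-reload / fast-refresh websocket
gatsby develop --host 0.0.0.0 --port 8000

# production mode: compiles static assets, then serves them with no websocket at all
gatsby build
gatsby serve --host 0.0.0.0 --port 8000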
I'm trying to create a simple Python Flask API connected to a PostgreSQL database,
so I added my api, db and pgadmin services to my docker-compose.yaml, but when I try to start the containers with docker-compose they don't come up.
Here is my docker-compose.yaml
version: '3.7'
services:
  api:
    build: ./Backend/src
    depends_on:
      - db
    environment:
      STAGE: test
      SQLALCHEMY_DATABASE_URI: postgresql+psycopg2://test:test@db/test
    networks:
      - default
    ports:
      - 5000:5000
    volumes:
      - ./app:/usr/src/app/app
    restart: always
  db:
    environment:
      POSTGRES_USER: test
      POSTGRES_PASSWORD: test
      POSTGRES_DB: test
    image: postgres:latest
    networks:
      - local
    ports:
      - 5432:5432
    restart: always
    volumes:
      - ./postgres-data:/var/lib/postgresql
  pgadmin:
    container_name: pgadmin
    image: dpage/pgadmin4
    environment:
      PGADMIN_DEFAULT_EMAIL: admim
      PGADMIN_DEFAULT_PASSWORD: admin
    ports:
      - 5050:80
    networks:
      - local
    restart: always
networks:
  local:
    driver: bridge
Running docker-compose from PyCharm, I get this error:
Deploying 'Compose: docker-compose-dev.yaml'...
Failed to deploy 'Compose: docker-compose-dev.yaml': Sorry but parent: com.intellij.execution.impl.ConsoleViewImpl[,0,0,1008x155,invalid,layout=java.awt.BorderLayout,alignmentX=0.0,alignmentY=0.0,border=,flags=9,maximumSize=,minimumSize=,preferredSize=] has already been disposed (see the cause for stacktrace) so the child: com.intellij.util.Alarm#56e3f4b will never be disposed
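One way to tell whether this is a PyCharm problem or a problem with the compose file itself is to run the same file directly from a terminal (assuming it is named docker-compose-dev.yaml, as in the error above):

# build and start the stack outside the IDE, with logs in the foreground
docker-compose -f docker-compose-dev.yaml up --build

# in a second terminal, check which services actually came up
docker-compose -f docker-compose-dev.yaml ps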
I want to mount a volume in Docker via sshfs to another computer that is ideally on the same local network as my Docker host. The problem is that the container can't see that host if I try to ssh into it via the usual 192.168.x.x IP. The other option I have is to expose the machine's SSH port to the internet, but I know that is bad practice, so I'm wondering if there's any other way to do this. Can I connect via a VPN of some sort, or can I make the Docker volume reach my host's local network?
Ideally I'm just looking for some guidance (a manual test of the same sshfs volume is sketched after the compose file below).
version: "3"
volumes:
local_postgres_data: {}
local_postgres_data_backups: {}
sshfs:
driver: vieux/sshfs:latest
driver_opts:
sshcmd: "secretuser#192.168.0.114:/home"
password: "secretpassword"
allow_other: ""
services:
nginx:
image: nginx:alpine
container_name: nx01
ports:
- "80:8000"
- "443:443"
volumes:
- ./src:/src
- ./config/nginx:/etc/nginx/conf.d
- ./data/certbot/conf:/etc/letsencrypt
- ./data/certbot/www:/var/www/certbot
depends_on:
- web
frontend:
container_name: frontend-vue
build:
context: .
dockerfile: compose/frontend/Dockerfile
volumes:
- './frontend:/frontend'
- '/frontend/node_modules'
ports:
- '8081:8080'
web:
build:
context: .
dockerfile: compose/django/Dockerfile
container_name: dx01
depends_on:
- db
- redis
volumes:
- ./src:/src
- sshfs:/src/mount
expose:
- "8000"
env_file:
- ./.envs/.django
db:
build:
context: .
dockerfile: compose/postgres/Dockerfile
container_name: px01
env_file:
- ./.envs/.postgres
volumes:
- local_postgres_data:/var/lib/postgresql/data
- local_postgres_data_backups:/backups
redis:
image: redis:latest
container_name: rs01
ports:
- "127.0.0.1:6379:6379"