I have a Docker Compose file that holds my Cypress container:
version: '3'
services:
  redis:
    image: redis
    ports:
      - "6379"
    # restart: unless-stopped
    networks:
      main:
        aliases:
          - redis
  postgres:
    image: postgres:12
    ports:
      - "5432:5432"
    env_file: ./.env
    # restart: unless-stopped
    volumes:
      - pgdata:/var/lib/postgresql/data
    networks:
      main:
        aliases:
          - postgres
  # access by going to localhost:16543
  # when adding a server to the server list
  # the hostname is postgres
  # the username is postgres
  # the password is postgres
  pgadmin:
    image: dpage/pgadmin4
    links:
      - postgres
    depends_on:
      - postgres
    env_file: ./.env
    # restart: unless-stopped
    ports:
      - "16543:80"
    networks:
      main:
        aliases:
          - pgadmin
  celery:
    build:
      context: .
      dockerfile: Dockerfile-dev # use Dockerfile-dev because the production Dockerfile runs npm install and npm build
    command: python manage.py celery
    env_file: ./.env
    # restart: unless-stopped
    volumes:
      - .:/code
      - tmp:/tmp
    links:
      - redis
    depends_on:
      - redis
    networks:
      main:
        aliases:
          - celery
  web:
    build:
      context: .
      dockerfile: Dockerfile-dev
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
      - tmp:/tmp
    ports:
      - "8000:8000"
    env_file: ./.env
    # restart: unless-stopped
    links:
      - postgres
      - redis
      - celery
      - pgadmin
    depends_on:
      - postgres
      - redis
      - celery
      - pgadmin
    networks:
      main:
        aliases:
          - web
  # Cypress container
  cypress:
    # the Docker image to use from https://github.com/cypress-io/cypress-docker-images
    image: "cypress/included:4.0.2"
    depends_on:
      - web
    environment:
      # pass base url to test pointing at the web application
      - CYPRESS_BASE_URL=http://web:8000
    # share the current folder as volume to avoid copying
    working_dir: /e2e
    volumes:
      - ./:/e2e
    networks:
      main:
        aliases:
          - cypress
volumes:
  pgdata:
  tmp:
networks:
  main:
For some reason, when I start my server and then start Cypress with docker-compose up --exit-code-from cypress, I get the following error that I cannot seem to debug. Note that my server is running, and all services are on the same network, main:
Cypress could not verify that this server is running:
> http://web:8000
We are verifying this server because it has been configured as your `baseUrl`.
Cypress automatically waits until your server is accessible before running tests.
We will try connecting to it 3 more times...
====================================================================================================
(Run Starting)
┌────────────────────────────────────────────────────────────────────────────────────────────────┐
│ Cypress: 4.0.2 │
│ Browser: Electron 78 (headless) │
│ Specs: 1 found (login/test.spec.js) │
└────────────────────────────────────────────────────────────────────────────────────────────────┘
────────────────────────────────────────────────────────────────────────────────────────────────────
Running: login/test.spec.js (1 of 1)
Browserslist: caniuse-lite is outdated. Please run the following command: `yarn upgrade`
Login Page
1) Visits Page
0 passing (671ms)
1 failing
1) Login Page Visits Page:
CypressError: cy.visit() failed trying to load:
http://127.0.0.1:8000/test/
We attempted to make an http request to this URL but the request failed without a response.
We received this error at the network level:
> Error: connect ECONNREFUSED 127.0.0.1:8000
Common situations why this would fail:
- you don't have internet access
- you forgot to run / boot your web server
- your web server isn't accessible
- you have weird network configuration settings on your computer
The stack trace for this error is:
Error: connect ECONNREFUSED 127.0.0.1:8000
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1056:14)
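Reading the two errors together, one detail stands out: Cypress verifies the configured http://web:8000, but the failing spec then visits http://127.0.0.1:8000/test/, which suggests the baseUrl is being shadowed by a value hard-coded in cypress.json or by an absolute URL passed to cy.visit(). As a hedged sketch, the baseUrl can also be forced at the compose level, since the cypress/included image's entrypoint is cypress run and a compose command is appended to it (--config is standard Cypress CLI and takes precedence over cypress.json):

  cypress:
    image: "cypress/included:4.0.2"
    depends_on:
      - web
    # appended to the image's cypress run entrypoint
    command: --config baseUrl=http://web:8000
    working_dir: /e2e
    volumes:
      - ./:/e2e
    networks:
      main:
        aliases:
          - cypress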
I migrated from Redis to RabbitMQ. I start my project with docker-compose, and I have a problem with Celery tasks after migrating from the Redis broker to RabbitMQ: Celery doesn't pick up the old tasks when I reload all the containers. Celery just prints its plain startup logs, without fetching the old tasks from RabbitMQ.
[2022-10-27 21:44:39,263: INFO/MainProcess] Connected to amqp://admin:**@rabbitmq:5672//
[2022-10-27 21:44:39,293: INFO/MainProcess] mingle: searching for neighbors
[2022-10-27 21:44:40,349: INFO/MainProcess] mingle: all alone
[2022-10-27 21:44:40,414: INFO/MainProcess] celery@29441ac7ffed ready.
docker-compose.yaml
version: "2.2"
services:
nginx:
build:
context: ./nginx
dockerfile: Dockerfile
container_name: nginx
restart: always
ports:
- ${PUB_PORT}:80
volumes:
- static_volume:/var/www/static
- ./backend/mediafiles:/var/www/media
depends_on:
- django
django:
build:
context: ./backend
dockerfile: Dockerfile.prod
container_name: backend
restart: always
env_file:
- ./.env.prod
environment:
- IS_DOCKER=True
- DJANGO_SETTINGS_MODULE=core.settings.production
volumes:
- static_volume:/django/staticfiles
- ./backend/mediafiles:/django/mediafiles
- ./backend:/django # only for local development
depends_on:
postgres:
condition: service_healthy
aiogram:
build:
context: ./telegram_bot
dockerfile: Dockerfile
container_name: telegram_bot
restart: always # crash: not found token
command: ["python", "main.py"]
volumes:
- ./backend/mediafiles:/bot/mediafiles
env_file:
- ./.env.prod
environment:
- IS_DOCKER=True
depends_on:
- django
postgres:
image: postgres:13.0-alpine
container_name: project_db
restart: always
volumes:
- postgres_volume:/var/lib/postgresql/data
depends_on:
- redis
ports:
- 54321:5432
environment:
- POSTGRES_DB=${POSTGRES_DB}
- POSTGRES_USER=${POSTGRES_USER}
- POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
healthcheck:
test: ["CMD","pg_isready", "--username=${POSTGRES_USER}","-d", "{POSTGRES_DB}"]
redis:
build: ./redis
ports:
- ${REDIS_PORT}:6379
environment:
- REDIS_PASSWORD=${REDIS_PASSWORD}
volumes:
- ./redis/redis.conf/:/usr/local/etc/redis.conf
- ./redis/data:/usr/local/redis/data
- ./redis/redis.log:/usr/local/redis/redis.log
restart: always
container_name: redis
# celery worker
celery:
container_name: celery
restart: always
build:
context: ./backend
dockerfile: Dockerfile.celery.prod
command: celery -A core worker -l info
environment:
- DJANGO_SETTINGS_MODULE=core.settings.production
env_file:
- ./.env.prod
depends_on:
- django
- redis
- postgres
- rabbitmq
# message broker for celery
rabbitmq:
container_name: rabbitmq
restart: always
image: rabbitmq:3.9-alpine
volumes:
- "./rabbitmq-data:/var/lib/rabbitmq"
environment:
- RABBITMQ_DEFAULT_USER=admin
- RABBITMQ_DEFAULT_PASS=mypass
ports:
- "5672:5672"
- "15672:15672"
volumes:
postgres_volume:
static_volume:
redis_data:
Dockerfile.celery.prod
FROM python:3.8.5
WORKDIR /django
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
RUN pip install --upgrade pip
# copy only requirements first so they are cached in their own Docker layer
COPY ./requirements.txt /django/
RUN pip install -r requirements.txt
COPY . .
I tried to run the delayed tasks with the Celery worker after reloading all the containers in docker-compose, including RabbitMQ.
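Since this compose file already uses the healthcheck / condition: service_healthy pattern for postgres (and the version "2.2" format supports it), a hedged sketch of the same pattern applied to RabbitMQ is below; rabbitmq-diagnostics ships inside the rabbitmq image. Note this only addresses startup ordering, not whether queued messages survive a restart (that depends on queue and message durability), and depends_on must use the mapping form for all entries of a service, not a mix of list and mapping:

  rabbitmq:
    image: rabbitmq:3.9-alpine
    healthcheck:
      # succeeds once the broker is actually accepting connections
      test: ["CMD", "rabbitmq-diagnostics", "-q", "ping"]
      interval: 10s
      timeout: 5s
      retries: 5
  celery:
    depends_on:
      rabbitmq:
        condition: service_healthy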
I'm a beginner with Docker and can't get a response from my project running in Docker. I have a Go project with 4 services. When I run it locally on my PC, everything works and there are no problems. But when it runs in Docker and I send a request with Postman, I get no response and a "socket hang up" error.
I have 4 services:
1- REST API service, whose Dockerfile is:
FROM golang:latest as GolangBase
...
...
EXPOSE 8082
CMD ["/go/bin/ecg", "server"]
2- Page service, whose Dockerfile is:
FROM golang:latest as GolangBase
...
...
EXPOSE 8080
CMD ["/go/bin/ecg", "page"]
3- Redis
4- Postgres
docker-compose.yml in the project root:
version: "2.3"
services:
server:
build:
context: .
dockerfile: docker/app/Dockerfile
container_name: ecg-go
ports:
- "127.0.0.1:8082:8082"
depends_on:
- postgres
- redis
networks:
- ecg-service_default
restart: always
page:
build:
context: .
dockerfile: docker/page/Dockerfile
container_name: ecg-page
ports:
- "127.0.0.1:8080:8080"
depends_on:
- postgres
networks:
- ecg-service_default
restart: always
redis:
image: redis:6
container_name: ecg-redis
volumes:
- redis_data:/data
networks:
- ecg-service_default
postgres:
image: postgres:alpine
container_name: ecg-postgres
environment:
POSTGRES_PASSWORD: docker
POSTGRES_DB: ecg
POSTGRES_USER: ecg
volumes:
- pg_data:/var/lib/postgresql/data
networks:
- ecg-service_default
volumes:
pg_data:
redis_data:
networks:
ecg-service_default:
I build the images and run the containers with docker-compose up -d, and all the services are created and running.
But when I send a request to http://localhost:8082/.. it returns "Could not get response, socket hang up".
What's the problem?
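One hedged observation on the file above: "127.0.0.1:8082:8082" publishes the port only on the host's loopback interface, so the API is reachable from the machine itself but not from other hosts. And if requests from localhost also hang, the usual cause is the Go server listening on 127.0.0.1 inside the container; it has to bind 0.0.0.0 for Docker's port mapping to reach it. The compose-side half of that check:

  server:
    ports:
      # publish on all host interfaces instead of only the host loopback
      - "8082:8082"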
I'm using docker-compose for terramoney.
Here is the docker-compose.yml file:
version: "3"
services:
terrad:
image: terramoney/localterra-core:0.5.18
volumes:
- ./config:/root/.terra/config
networks:
- terra
ports:
- "26657:26657"
- "1317:1317"
- "9090:9090"
- "9091:9091"
command: terrad start
oracle:
image: terramoney/pseudo-feeder:0.5.6
depends_on:
- terrad
volumes:
- ./config/config.toml:/app/config.toml
networks:
- terra
environment:
TESTNET_LCD_URL: http://terrad:1317
command: start
postgres:
image: postgres:12
volumes:
- ./init.sql:/docker-entrypoint-initdb.d/init.sql
networks:
- terra
environment:
POSTGRES_USER: dev
POSTGRES_PASSWORD: dev
# redis:
# image: redis:latest
# networks:
# - terra
# ports:
# - "6379:6379"
fcd-collector:
image: terramoney/fcd:1.0.14
depends_on:
- terrad
- postgres
volumes:
- ./logs:/app/logs
networks:
- terra
env_file: fcd.env
command: collector
restart: unless-stopped
fcd-api:
image: terramoney/fcd:1.0.14
depends_on:
- terrad
- postgres
volumes:
- ./logs:/app/logs
networks:
- terra
ports:
- 3060:3060
env_file: fcd.env
command: start
networks:
terra:
driver: bridge
When I run sudo docker-compose up, I get this error:
Starting localterra_terrad_1   ... error
Starting localterra_postgres_1 ... error

ERROR: for localterra_postgres_1  Cannot start service postgres: OCI runtime create failed: container with id exists: ea889e63356e0d77d5196b2c73d1696089944fcbab2d5e1c2ce6c114821010c6: unknown

ERROR: for localterra_terrad_1  Cannot start service terrad: OCI runtime create failed: container with id exists: 0d03712d982f9b6f1d67c36735f4aefddb874e65958965e1bca54a782f525c6e: unknown

ERROR: for postgres  Cannot start service postgres: OCI runtime create failed: container with id exists: ea889e63356e0d77d5196b2c73d1696089944fcbab2d5e1c2ce6c114821010c6: unknown

ERROR: for terrad  Cannot start service terrad: OCI runtime create failed: container with id exists: 0d03712d982f9b6f1d67c36735f4aefddb874e65958965e1bca54a782f525c6e: unknown
ERROR: Encountered errors while bringing up the project.
If anyone is familiar with Docker and terramoney, please help me. I may have installed Docker incorrectly; after installing it, Docker Desktop shows as stopped.
This is the root directory:
├─── config/
├─── img/
├─── pseudo-feeder/
├─── terracore/
├─── docker-compose.yml
├─── fcd.env
├─── init.sql
├─── LICENSE
└─── README.md
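The "container with id exists" errors usually mean stale containers were left behind by a previous run. A hedged cleanup (standard docker / docker-compose commands; the IDs in the error output are left as-is) before bringing the project up again:
$ sudo docker-compose down --remove-orphans
$ sudo docker container prune
$ sudo docker-compose up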
I can't get a connection to the PostgreSQL database of my Rails app, which is running in a Docker container. The application itself works fine; I just can't connect to the database directly.
docker-compose.yml:
services:
  app:
    build:
      context: .
      dockerfile: app.Dockerfile
    container_name: application_instance
    command: bash -c "bundle exec puma -C config/puma.rb"
    volumes:
      - .:/app
      - node-modules:/app/node_modules
      - public:/app/public
    depends_on:
      - database
      - redis
    env_file:
      - .env
  database:
    image: postgres
    container_name: database_instance
    restart: always
    volumes:
      - db_data:/var/lib/postgresql/data
    ports:
      - "5432:5432"
    env_file:
      - .env
    environment:
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_DB: ${POSTGRES_PRODUCTION_DB}
  nginx:
    build:
      context: .
      dockerfile: nginx.Dockerfile
    depends_on:
      - app
    volumes:
      - public:/app/public
    ports:
      - "80:80"
  redis:
    image: redis
    container_name: redis_instance
    ports:
      - "6379:6379"
  sidekiq:
    container_name: sidekiq_instance
    build:
      context: .
      dockerfile: app.Dockerfile
    depends_on:
      - redis
      - database
    command: bundle exec sidekiq
    volumes:
      - .:/app
    env_file:
      - .env
volumes:
  db_data:
  node-modules:
  public:
If I try to connect via DBeaver I get an error message (screenshot omitted).
Any idea what's going wrong here? The port should be exposed on my local machine. I also tried the IP of the container, but then I get a timeout exception.
This is most likely because you have Postgres running both locally on your machine (port 5432) and in Docker (also mapped to port 5432). DBeaver connects to the database on your local machine rather than the one in Docker.
The solution I found is to temporarily stop your local Postgres service (on Windows: Task Manager -> Services -> (postgres service) -> stop).
I was also struggling with this issue.
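A hedged alternative to stopping the local service: map the containerized Postgres to a different host port and point DBeaver at that port instead (the 5433 value here is just an example):

  database:
    ports:
      # host 5433 -> container 5432; connect DBeaver to localhost:5433
      - "5433:5432"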
I want to use Docker to run my project (React + Node.js + MongoDB).
Dockerfile:
FROM node:8.9-alpine
ENV NODE_ENV production
WORKDIR /usr/src/app
COPY ["package.json", "package-lock.json*", "npm-shrinkwrap.json*", "./"]
RUN npm install --production --silent && mv node_modules ../
COPY . .
CMD nohup sh -c 'npm start && node ./server/server.js'
docker-compose.yml:
version: '2.1'
services:
  chat:
    image: chat
    container_name: chat
    build: .
    environment:
      NODE_ENV: production
    ports:
      - "3000:3000"
      - "8080:8080"
    volumes:
      - ./:/usr/src/app
    links:
      - mongo
  mongo:
    container_name: mongo
    image: mongo
    ports:
      - "27017:27017"
I run docker-compose up --build; port 3000 works, but port 8080 is dead:
localhost:3000
localhost:8080
I would suggest creating a separate container for the server, keeping it apart from the "chat" container. It's best to have each container do one thing and one thing only (much like the philosophy behind Unix commands); see the sketch after the modified compose file below.
In any case, here are some modifications I would make to the compose file.
version: '2.1'
services:
  chat:
    image: chat
    container_name: chat
    build: .
    environment:
      NODE_ENV: production
    ports:
      - "3000:3000"
      - "8080:8080"
    volumes:
      - ./:/usr/src/app
    links:
      - mongo
  mongo:
    container_name: mongo
    image: mongo
    # You don't need to expose this port to the outside world. Because you linked the two containers,
    # the chat app will be able to connect to mongodb using the hostname mongo inside the container network.
    # ports:
    #   - "27017:27017"
By the way, what happens if you run:
$ docker-compose down
and then
$ docker-compose up
$ docker ps
Can you see the ports exposed in the docker ps output?
Your chat service depends on mongo, so you also need this in your chat service:
depends_on:
  - mongo
This docker-compose file works for me. Note that I am saving the database data to a local directory; you should add that directory to .gitignore.
version: "3.2"
services:
mongo:
container_name: mongo
image: mongo:latest
environment:
- MONGO_INITDB_ROOT_USERNAME=root
- MONGO_INITDB_ROOT_PASSWORD=password
- NODE_ENV=production
ports:
- "28017:27017"
expose:
- 28017 # you can connect to this mongodb with studio3t
volumes:
- ./mongodb-data:/data/db
restart: always
networks:
- docker-network
express:
container_name: express
environment:
- NODE_ENV=development
restart: always
build:
context: .
args:
buildno: 1
expose:
- 3000
ports:
- "3000:3000"
links:
- mongo # link this service to the database service
depends_on:
- mongo
command: "npm start" # override the default command to use nodemon in dev
networks:
- docker-network
networks:
docker-network:
driver: bridge
You may also find that, using Node, you have to wait for the MongoDB container to be ready before you can connect to the database; one compose-level way to express that wait is sketched below.
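A hedged sketch of expressing that wait in the compose file itself: give mongo a healthcheck and gate the app on it. Note that condition: service_healthy in depends_on is supported by 2.x files and the modern Compose Spec, but not by classic version: "3.x" files, and the mongosh binary only exists in recent mongo images (older ones ship mongo instead):

  mongo:
    image: mongo:latest
    healthcheck:
      # succeeds once mongod answers a ping
      test: ["CMD", "mongosh", "--quiet", "--eval", "db.adminCommand('ping')"]
      interval: 10s
      retries: 5
  express:
    depends_on:
      mongo:
        condition: service_healthy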