I am trying to build a multi-node distributed system with multiple Rails nodes plus Docker, MySQL, and Redis. My main node therefore needs to communicate with the other nodes.
Here is my docker-compose.yml
version: '2'
services:
  db:
    image: mysql:5.7
    restart: always
    environment:
      - MYSQL_ROOT_PASSWORD=password
      - MYSQL_DATABASE=pubsub_1_development
      - MYSQL_USER=appuser
      - MYSQL_PASSWORD=password
    ports:
      - "3308:3306"
  redis:
    image: 'redis:4.0-alpine'
  app:
    image: pubsub_2:1.0.8
    command: /bin/sh -c "rm -f ./tmp/pids/server.pid && bundle exec rails s -p 3000 -b '0.0.0.0'"
    volumes:
      - ".:/pubsub_2"
    ports:
      - "3001:3000"
    depends_on:
      - db
      - redis
      - sub_node
      - pub_node
    links:
      - db
      - redis
      - sub_node
      - pub_node
    environment:
      DB_USER: root
      DB_NAME: pubsub_1_development
      DB_PASSWORD: password
      DB_HOST: db
      REDIS_CABLE_PORT: redis://redis:6379/1
      SUB_NODE: 0.0.0.0:4001
      PUB_NODE: 0.0.0.0:4000
    stdin_open: true
    tty: true
  sub_node:
    image: sub_node:1.0.1
    command: /bin/sh -c "rm -f /sub_node/tmp/pids/server.pid && bundle exec rails s -p 4001 -b '0.0.0.0'"
    ports:
      - "4001:4001"
    environment:
      DB_USER: root
      DB_NAME: pubsub_1_development
      DB_PASSWORD: password
      DB_HOST: db
      REDIS_CABLE_PORT: redis://redis:6379/1
    tty: true
    expose:
      - "4001"
  pub_node:
    image: pub_node:1.0.1
    command: /bin/sh -c "rm -f /pub_node/tmp/pids/server.pid && bundle exec rails s -p 4000 -b '0.0.0.0'"
    ports:
      - "4000:4000"
    environment:
      DB_USER: root
      DB_NAME: pubsub_1_development
      DB_PASSWORD: password
      DB_HOST: db
      REDIS_CABLE_PORT: redis://redis:6379/1
    tty: true
    expose:
      - "4000"
However, when I use the app node to send a request to pub_node, it throws this error:
Errno::ECONNREFUSED: Failed to open TCP connection to 127.0.0.1:4000 (Connection refused - connect(2) for "127.0.0.1" port 4000)
from /usr/local/lib/ruby/2.5.0/net/http.rb:939:in `rescue in block in connect'
I was making the POST request with this code:
rvlist = '127.0.0.1:4000'
HTTParty.post("http://#{rvlist}/publish", options)
This works in development mode, outside the Docker environment.
Networking in docker-compose:
Each container for a service joins the default network and is both
reachable by other containers on that network, and discoverable by
them at a hostname identical to the container name.
Reference: https://docs.docker.com/compose/networking/
In your case, any container in the Compose setup can open a TCP connection to the pub_node container using its service name and port: pub_node:4000
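Concretely, the hard-coded loopback address in the question's POST can be replaced by the service name, read here from the PUB_NODE variable the compose file already defines (assuming its value is changed from 0.0.0.0:4000 to pub_node:4000 — inside the app container, 127.0.0.1 refers to the app container itself):

```ruby
require "uri"

# Sketch: address the publisher by its Compose service name instead of
# 127.0.0.1. Assumes PUB_NODE is set to "pub_node:4000" in docker-compose.yml;
# the default below stands in when the variable is unset.
rvlist = ENV.fetch("PUB_NODE", "pub_node:4000")
url = "http://#{rvlist}/publish"
# HTTParty.post(url, options)   # same call as before, only the host changes

uri = URI.parse(url)
puts uri.host  # => "pub_node"
puts uri.port  # => 4000
```

The same substitution applies to SUB_NODE (sub_node:4001): any service listed in the compose file is resolvable by that name on the default network.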
Related
I want to connect redash to MySQL Server. I added MYSQL_TCP_PORT so the server uses a TCP connection rather than the default UNIX socket (to avoid the mysqld.sock error). If I go into the mysql container and run mysql -p, I can open a mysql shell. But if I test the connection in redash, it returns (2006, "Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)").
Here is my docker-compose file:
# This configuration file is for the **development** setup.
# For a production example please refer to getredash/setup repository on GitHub.
version: "2.2"
x-redash-service: &redash-service
  build:
    context: .
    # args:
    #   skip_frontend_build: "true" # set to empty string to build
  volumes:
    - .:/app
  env_file:
    - .env
x-redash-environment: &redash-environment
  REDASH_LOG_LEVEL: "INFO"
  REDASH_REDIS_URL: "redis://redis:6379/0"
  REDASH_DATABASE_URL: "postgresql://postgres@postgres/postgres"
  REDASH_RATELIMIT_ENABLED: "false"
  REDASH_MAIL_DEFAULT_SENDER: "redash@example.com"
  REDASH_MAIL_SERVER: "email"
  REDASH_ENFORCE_CSRF: "true"
  REDASH_GUNICORN_TIMEOUT: 60
# Set secret keys in the .env file
services:
  server:
    <<: *redash-service
    command: dev_server
    depends_on:
      - postgres
      - redis
    ports:
      - "5000:5000"
      - "5678:5678"
    networks:
      - default_network
    environment:
      <<: *redash-environment
      PYTHONUNBUFFERED: 0
  scheduler:
    <<: *redash-service
    command: dev_scheduler
    depends_on:
      - server
    networks:
      - default_network
    environment:
      <<: *redash-environment
  worker:
    <<: *redash-service
    command: dev_worker
    depends_on:
      - server
    networks:
      - default_network
    environment:
      <<: *redash-environment
      PYTHONUNBUFFERED: 0
  redis:
    image: redis:3-alpine
    restart: unless-stopped
    networks:
      - default_network
  postgres:
    image: postgres:9.5-alpine
    # The following turns the DB into less durable, but gains significant performance improvements for the tests run (x3
    # improvement on my personal machine). We should consider moving this into a dedicated Docker Compose configuration for
    # tests.
    ports:
      - "15432:5432"
    command: "postgres -c fsync=off -c full_page_writes=off -c synchronous_commit=OFF"
    restart: unless-stopped
    networks:
      - default_network
    environment:
      POSTGRES_HOST_AUTH_METHOD: "trust"
  email:
    image: djfarrelly/maildev
    ports:
      - "1080:80"
    restart: unless-stopped
    networks:
      - default_network
  mysql:
    image: mysql/mysql-server:latest
    ports:
      - "3306:3306"
    restart: unless-stopped
    container_name: mysql
    networks:
      - default_network
    environment:
      MYSQL_ROOT_PASSWORD: "${MYSQL_ROOT_PASSWORD}"
      MYSQL_TCP_PORT: 3306
networks:
  default_network:
    external: false
    name: default_network
    driver: bridge
As I see it, redash is connecting via a UNIX socket, not a TCP connection (otherwise there would be no mysqld.sock error). I don't know what I should fix in docker-compose or elsewhere to make it connect properly. Any suggestions? If you need me to provide more info, please ask.
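One plausible cause, offered as a guess from the compose file above: MYSQL_TCP_PORT only configures the server, while the (2006 ... mysqld.sock) error means the MySQL client in the redash container is connecting to localhost, which makes it look for a socket inside its own container. Following the container-name networking rule quoted earlier, the redash data source form would point at the service name on the shared network (field names below are illustrative):

```
Host: mysql    # the mysql service/container name on default_network, not localhost or 127.0.0.1
Port: 3306
```

With any hostname other than localhost, the MySQL client uses TCP automatically, so no socket is involved.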
I'm really confused about why I'm unable to make API requests to any site. For example, I want to run:
HTTParty.get("https://fakerapi.it/api/v1/persons")
It runs well on my machine (without Docker).
But if I run it inside Docker, I get:
SocketError (Failed to open TCP connection to fakerapi.it:443 (getaddrinfo: Name does not resolve))
It happens not only for this site but for all sites, so I guess there's something wrong with my Docker settings, but I'm not sure where to start.
I'm new to Docker, so any advice means a lot to me.
Below is my docker-compose.yaml
version: '3.4'
services:
  db:
    image: mysql:8.0.17 # using the official mysql image from docker hub
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: root
    volumes:
      - db_data:/var/lib/mysql
    ports:
      - "3307:3306"
  backend:
    build:
      context: .
      dockerfile: backend-dev.Dockerfile
    ports:
      - "3001:3001"
    volumes:
      # the host repos are mapped to the container's repos
      - ./backend:/my-project
      # volume to cache gems
      - bundle:/bundle
    depends_on:
      - db
    stdin_open: true
    tty: true
    env_file: .env
    command: /bin/sh -c "rm -f tmp/pids/server.pid && rm -f tmp/pids/delayed_job.pid && bundle exec bin/delayed_job start && bundle exec rails s -p 3001 -b '0.0.0.0'"
  frontend:
    build:
      context: .
      dockerfile: frontend-dev.Dockerfile
    ports:
      - "3000:3000"
    links:
      - "backend:bb"
    depends_on:
      - backend
    volumes:
      # the host repos are mapped to the container's repos
      - ./frontend/:/my-project
    # env_file: .env
    environment:
      - NODE_ENV=development
    command: /bin/sh -c "yarn dev --port 3000"
volumes:
  db_data:
    driver: local
  bundle:
    driver: local
How I try to run it:
docker-compose run backend /bin/sh
rails c
HTTParty.get("https://fakerapi.it/api/v1/persons")
Any idea how can I fix this?
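The error getaddrinfo: Name does not resolve means the DNS lookup is failing inside the container before any TCP connection is attempted, so the compose file itself is probably fine. A diagnostic sketch, with the fix hedged as one common cause (the Docker daemon having no usable DNS servers):

```
# 1. Check whether anything resolves inside the backend container:
docker-compose run backend getent hosts fakerapi.it

# 2. If not, one frequent remedy is to give the daemon explicit DNS servers
#    in /etc/docker/daemon.json and restart Docker:
#    { "dns": ["8.8.8.8", "8.8.4.4"] }
```

If step 1 does resolve, the problem is more likely host-specific (VPN, firewall, or corporate proxy) than a Compose setting.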
Please can someone kindly help me with a Docker setup issue I have been trying to debug for days now.
My containers seem to be running fine, but I cannot get the database up and running when I run docker-compose run app rake db:create.
I am running Docker on Manjaro in VirtualBox.
The error I am getting is:
PG::ConnectionBad: could not connect to server: Connection timed out Is the server running on host "172.18.0.3" and accepting TCP/IP connections on port 5432?
Below are the configurations for both docker-compose.yml and config/database.yml:
version: '3.8'
services:
  db:
    image: postgres:latest
    restart: always
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_PASSWORD=password
    volumes:
      - db_data:/var/lib/postgresql/data
  redis:
    image: redis
    ports:
      - 6379:6379
    volumes:
      - redis_data:/data
  app:
    build: .
    command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails s -p 3000 -b '0.0.0.0'"
    volumes:
      - .:/app
      - gem_cache:/usr/local/bundle/gems
    ports:
      - "3000:3000"
    depends_on:
      - db
      - redis
    environment:
      - POSTGRES_PASSWORD=password
      - POSTGRES_HOST=db
      - POSTGRES_USERNAME=postgres
volumes:
  db_data:
  gem_cache:
  redis_data:
default: &default
  adapter: postgresql
  encoding: unicode
  host: db
  username: postgres
  password: password
  pool: 5
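The compose file and database.yml look consistent (host db, user postgres), so a timeout on 172.18.0.3:5432 often just means Postgres had not finished starting when rake db:create ran, or the db container exited. A hedged sketch: add a healthcheck to the db service so Compose can tell when the server actually accepts connections (pg_isready ships with the postgres image):

```
db:
  image: postgres:latest
  restart: always
  ports:
    - "5432:5432"
  environment:
    - POSTGRES_PASSWORD=password
  volumes:
    - db_data:/var/lib/postgresql/data
  healthcheck:
    test: ["CMD-SHELL", "pg_isready -U postgres"]
    interval: 5s
    timeout: 5s
    retries: 5
```

Checking docker-compose logs db first is worthwhile: it shows whether the server ever logged "database system is ready to accept connections" or crashed during initialization.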
I have two containers: one for PostgreSQL, another for PHP.
version: '3.7'
networks:
  backend-network:
    driver: bridge
  frontend-network:
    driver: bridge
services:
  php-fpm:
    container_name: k4fntr_php-fpm
    build: ./docker/php-fpm
    ports:
      - "9000:9000"
    volumes:
      - ./:/var/www/k4fntr
    depends_on:
      - database
    networks:
      - backend-network
      - frontend-network
  database:
    container_name: k4fntr_database
    build: ./docker/postgres
    restart: always
    environment:
      ENV: ${APP_ENV}
      TESTING_DB: ${DB_DATABASE_TESTING}
      POSTGRES_DB: ${DB_DATABASE}
      POSTGRES_USER: ${DB_USERNAME}
      POSTGRES_PASSWORD: ${DB_PASSWORD}
    ports:
      - "15432:5432"
    volumes:
      - ./docker/postgres/pg-data:/var/lib/postgresql/data:Z
    networks:
      - backend-network
I also have a shell script which copies the production database to my local machine.
#!/bin/bash
ssh -p 22000 -C -l user host 'sh /project/copy_prod.sh' > /tmp/backup.sql
(
  echo "DROP DATABASE prod_copy;"
  echo "CREATE DATABASE prod_copy OWNER k4fntr;"
) | (export PGPASSWORD=secret && psql -h 127.0.0.1 -U local)
export PGPASSWORD=secret && psql -h 127.0.0.1 -U local prod_copy < /tmp/backup.sql
but I got an error:
bash: psql: command not found
Is it possible to run psql from database container?
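Yes. psql is installed inside the postgres image, even though it is apparently missing on the host, so the script can delegate those calls to the container with docker-compose exec. A sketch against the compose file above (service name database; -T disables TTY allocation so shell redirection works):

```
# interactive psql inside the database container:
docker-compose exec database psql -U local

# piping the dump through the container's psql:
docker-compose exec -T database psql -U local prod_copy < /tmp/backup.sql
```

Alternatively, since the service publishes port 15432, a host-installed psql could connect with -h 127.0.0.1 -p 15432 without entering the container at all.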
I'm following this guide:
https://docs.docker.com/compose/rails/
I have it all set up and running, and I'm trying to figure out how to connect one of my DB clients (Sequel Pro or pgAdmin) to the postgres container.
I also tried to map the postgres port so it is reachable from outside the container (docker-compose.yml), without success:
version: '3'
services:
  db:
    image: postgres
    ##### Trying this...
    ports:
      - "5432:5432"
    #####
  web:
    build: .
    command: bundle exec rails s -p 3030 -b '0.0.0.0'
    volumes:
      - .:/myapp
    ports:
      - "3030:3030"
    depends_on:
      - db
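With that ports mapping, the db service should already be reachable from the host at 127.0.0.1:5432 (user postgres for the stock image). One guess at the "without success" part: a Postgres server already running on the host may be holding port 5432, so the mapping fails or the client silently connects to the wrong server. A sketch:

```
# confirm the port is actually published:
docker-compose ps

# if host port 5432 is busy, remap the host side in docker-compose.yml:
#   ports:
#     - "5433:5432"
# and point Sequel Pro / pgAdmin at 127.0.0.1:5433
```

Note that the host:container port split only matters outside Docker; the web service should keep connecting to db:5432 regardless of the remap.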