I'm following this guide:
https://docs.docker.com/compose/rails/
I have it all set up and running, and I'm trying to figure out how to connect to the postgres container with one of my DB clients (Sequel Pro or pgAdmin).
I also tried mapping the postgres port in docker-compose.yml so it is reachable from outside the container, without success:
version: '3'
services:
  db:
    image: postgres
    ##### Trying this...
    ports:
      - "5432:5432"
    #####
  web:
    build: .
    command: bundle exec rails s -p 3030 -b '0.0.0.0'
    volumes:
      - .:/myapp
    ports:
      - "3030:3030"
    depends_on:
      - db
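For completeness, this is the kind of db service I'd expect to be reachable from a desktop client at 127.0.0.1:5432 once the port is published (a sketch only; the credentials below are placeholders I added, not values from the guide):

services:
  db:
    image: postgres
    environment:
      POSTGRES_USER: postgres      # placeholder login for the DB client
      POSTGRES_PASSWORD: example   # placeholder; newer postgres images require a password
    ports:
      - "5432:5432"                # publish the container port on the host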
Related
I can't connect to the PostgreSQL database used by my Rails app, which is running in a Docker container.
The application itself works fine; I just can't connect to the database from outside.
docker-compose.yml:
services:
  app:
    build:
      context: .
      dockerfile: app.Dockerfile
    container_name: application_instance
    command: bash -c "bundle exec puma -C config/puma.rb"
    volumes:
      - .:/app
      - node-modules:/app/node_modules
      - public:/app/public
    depends_on:
      - database
      - redis
    env_file:
      - .env
  database:
    image: postgres
    container_name: database_instance
    restart: always
    volumes:
      - db_data:/var/lib/postgresql/data
    ports:
      - "5432:5432"
    env_file:
      - .env
    environment:
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_DB: ${POSTGRES_PRODUCTION_DB}
  nginx:
    build:
      context: .
      dockerfile: nginx.Dockerfile
    depends_on:
      - app
    volumes:
      - public:/app/public
    ports:
      - "80:80"
  redis:
    image: redis
    container_name: redis_instance
    ports:
      - "6379:6379"
  sidekiq:
    container_name: sidekiq_instance
    build:
      context: .
      dockerfile: app.Dockerfile
    depends_on:
      - redis
      - database
    command: bundle exec sidekiq
    volumes:
      - .:/app
    env_file:
      - .env
volumes:
  db_data:
  node-modules:
  public:
If I try to connect via DBeaver, the connection fails with an error.
Any idea what's going wrong here? The port should be exposed on my local machine. I also tried with the IP of the container, but then I get a timeout exception.
This is most likely because you have Postgres running locally on your machine (port 5432) as well as in Docker (port 5432). DBeaver is connecting to the database on your local machine rather than the one in Docker.
One solution is to temporarily stop or disable your local Postgres service (on Windows: Task Manager -> Services -> (postgres service) -> stop).
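Alternatively (a sketch on my part, assuming nothing on the host needs that exact port), publish the container on a different host port and point DBeaver at 127.0.0.1:5433:

database:
  image: postgres
  ports:
    - "5433:5432"   # host port 5433 -> container port 5432, avoids the local Postgres on 5432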
I was also struggling with this issue.
I am trying to run the following docker-compose file:
version: "3"
services:
db:
image: postgres
container_name: pgsql
environment:
- foo=foo
- bar=bar
volumes:
- ./sql/:/opt/sql
command: bash /opt/sql/create-db.sql
# command: ps -aux
web:
image: benit/debian-web
container_name: web
depends_on:
- db
ports:
- 80:80
volumes:
- ./html:/var/www/html
I am encountering an error with the line:
command: bash /opt/sql/create-db.sql
It fails because the pgsql service has not started yet (this can be monitored with command: ps -aux).
How can I run my script once the pgsql service has started?
You can use a volume to provide an initialization sql script:
version: "3"
services:
db:
image: postgres
container_name: pgsql
environment:
- foo=foo
- bar=bar
volumes:
- ./sql/:/opt/sql
- ./init.sql:/docker-entrypoint-initdb.d/init.sql
web:
image: benit/debian-web
container_name: web
depends_on:
- db
ports:
- 80:80
volumes:
- ./html:/var/www/html
This works because the official Postgres image contains an entrypoint script (run after Postgres has started) that executes any *.sql files found in the /docker-entrypoint-initdb.d/ folder.
By mounting your local file in that place, your SQL files will be run at the right time.
This is mentioned in the documentation for the image at https://hub.docker.com/_/postgres, under the "How to extend this image" section.
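For example, a minimal sketch that mounts the whole ./sql directory instead of a single file (note: the entrypoint only runs these scripts the first time the container starts with an empty data directory):

db:
  image: postgres
  volumes:
    - ./sql/:/docker-entrypoint-initdb.d/   # every *.sql and *.sh file here runs once, in alphabetical order, on first init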
I'm running a docker compose setup that consists of a web worker, a postgres database and a redis sidekiq worker. I created a background job to process images after the user uploads them; ActiveStorage is used to store the images. Normally, without Docker, in local development the images are stored in a temporary storage folder to simulate cloud storage. I'm fairly new to Docker, so I'm not sure how storage works; I believe it works a bit differently inside containers. The sidekiq worker itself seems fine, it just seems to be complaining about not being able to find a place to store images. Below is the error that I get from the sidekiq worker.
WARN: Errno::ENOENT: No such file or directory # rb_sysopen - /myapp/storage
And here is my docker-compose.yml
version: '3'
services:
  setup:
    build: .
    depends_on:
      - postgres
    environment:
      - RAILS_ENV=development
    command: "bin/rails db:migrate"
  postgres:
    image: postgres:10-alpine
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=mysecurepass
      - POSTGRES_DB=myapp_development
      - PGDATA=/var/lib/postgresql/data
  postgres_data:
    image: postgres:10-alpine
    volumes:
      - /var/lib/postgresql/data
    command: /bin/true
  sidekiq:
    build: .
    environment:
      - REDIS_URL=redis://redis:6379
    depends_on:
      - redis
    command: "bin/bundle exec sidekiq -C config/sidekiq.yml"
  redis:
    image: redis:4-alpine
    ports:
      - "6379:6379"
  web:
    build: .
    depends_on:
      - redis
      - postgres
      - setup
    command: bundle exec rails s -p 3000 -b '0.0.0.0'
    environment:
      - REDIS_URL=redis://localhost:6379
    volumes:
      - .:/myapp
    ports:
      - "3000:3000"
Perhaps you need to add the myapp volume for sidekiq as well, like this:
sidekiq:
  volumes:
    - .:/myapp
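An alternative sketch (my assumption, not from the question) is to share a named volume for just the ActiveStorage folder, so web and sidekiq both see the same files without mounting the whole source tree:

services:
  web:
    volumes:
      - storage:/myapp/storage   # ActiveStorage's local disk service writes here by default
  sidekiq:
    volumes:
      - storage:/myapp/storage   # same volume, so background jobs can read the uploaded files
volumes:
  storage: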
I'm porting my rails app from my local machine into a docker container and running into an issue with elasticsearch/searchkick. I can get it working temporarily, but I'm wondering if there is a better way. Basically, the port for elasticsearch isn't matching up with the default localhost:9200 that searchkick uses. I have used "docker inspect" on the elasticsearch container to get the actual IP and then set the ENV['ELASTICSEARCH_URL'] variable as the searchkick docs say, and it works. The problem is that this is a pain: if I restart or change the containers, the IP sometimes changes and I have to go through the whole process again. Here is my docker-compose.yml:
version: '2'
services:
  web:
    build: .
    command: rails server -p 3000 -b '0.0.0.0'
    volumes:
      - .:/living-recipe
    ports:
      - '3000:3000'
    env_file:
      - .env
    depends_on:
      - postgres
      - elasticsearch
  postgres:
    image: postgres
  elasticsearch:
    image: elasticsearch
Use elasticsearch:9200 instead of localhost:9200. Docker Compose makes each container reachable via its service name.
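Applied to the compose file above, that could look like this (a sketch; it assumes the app reads ENV['ELASTICSEARCH_URL'], as searchkick does):

web:
  environment:
    - ELASTICSEARCH_URL=http://elasticsearch:9200   # the service name resolves inside the compose network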
Here is the docker-compose.yml that is working for me.
Docker Compose makes the container reachable via its service name, so you can set the
ELASTICSEARCH_URL: http://elasticsearch:9200 ENV variable in your rails application container:
version: "3"
services:
db:
image: postgres:9.6
restart: always
volumes:
- /tmp/db:/var/lib/postgresql/data
environment:
POSTGRES_PASSWORD: password
elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:7.9.2
volumes:
- .:/app
ports:
- 9200:9200
environment:
- discovery.type=single-node
ulimits:
memlock:
soft: -1
hard: -1
api:
build: .
command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails s -p 3000 -b '0.0.0.0'"
volumes:
- ".:/app"
ports:
- "3001:3000"
depends_on:
- db
environment:
DB_HOST: db
DB_PASSWORD: password
ELASTICSEARCH_URL: http://elasticsearch:9200
You don't want to try to map the IP address for elasticsearch manually, as it will change.
Swap out depends_on for links. This will create the same dependency, but also allows the containers to be reached via service name.
Containers for the linked service will be reachable at a hostname identical to the alias, or the service name if no alias was specified.
Links also express dependency between services in the same way as depends_on, so they determine the order of service startup.
Docker Compose File Reference - Links
Then in your rails app where you're setting ENV['ELASTICSEARCH_URL'], use elasticsearch instead.
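Applied to the compose file from the question, that might look like this (a sketch only, not the asker's exact file):

web:
  build: .
  command: rails server -p 3000 -b '0.0.0.0'
  environment:
    - ELASTICSEARCH_URL=http://elasticsearch:9200   # reach elasticsearch by service name, not a container IP
  links:                                            # replaces depends_on: same startup ordering, plus name-based reachability
    - postgres
    - elasticsearch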
I have this Dockerfile and it is working as expected. I have a PHP application that connects to MySQL on localhost.
# cat Dockerfile
FROM tutum/lamp:latest
RUN rm -fr /app
ADD crm_220 /app/
ADD crmbox.sql /
ADD mysql-setup.sh /mysql-setup.sh
EXPOSE 80 3306
CMD ["/run.sh"]
When I try to run the database as a separate container, my PHP application is still pointing to localhost. When I connect to the "web" container, I am not able to connect to the "mysql1" container.
# cat docker-compose.yml
web:
  build: .
  restart: always
  volumes:
    - .:/app/
  ports:
    - "8000:8000"
    - "80:80"
  links:
    - mysql1:mysql
mysql1:
  image: mysql:latest
  volumes:
    - "/var/lib/mysql:/var/lib/mysql"
  ports:
    - "3306:3306"
  environment:
    MYSQL_ROOT_PASSWORD: secretpass
How does my PHP application connect to MySQL in another container?
This is similar to the question asked here...
Connect to mysql in a docker container from the host
I do not want to connect to MySQL from the host machine; I need to connect from another container.
First, you shouldn't publish MySQL's port 3306 if you don't want to reach it from the host machine. Second, links are deprecated now; you can use networks instead. I'm not sure about Compose v1, but in v2 all containers defined in the same docker-compose file share one network (see the networking docs) and can resolve each other by service name. Example of a docker-compose v2 file:
version: '2'
services:
  web:
    build: .
    restart: always
    volumes:
      - .:/app/
    ports:
      - "8000:8000"
      - "80:80"
  mysql1:
    image: mysql:latest
    volumes:
      - "/var/lib/mysql:/var/lib/mysql"
    environment:
      MYSQL_ROOT_PASSWORD: secretpass
With this configuration you can resolve the MySQL container by the name mysql1 from inside the web container.
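For instance (my own sketch, not part of the compose file above), the web service could receive that hostname through environment variables that the PHP app reads instead of localhost:

web:
  build: .
  environment:
    DB_HOST: mysql1    # the service name of the MySQL container
    DB_PORT: "3306"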
For me, the name resolution never happens. Here is my docker-compose file; I was hoping to connect from the app container to mysql, where the service name is mysql and is passed as an environment variable to the other container: DB_HOST=mysql
version: "2"
services:
app:
build:
context: ./
dockerfile: /src/main/docker/Dockerfile
image: crossblogs
environment:
- DB_HOST=mysql
- DB_PORT=3306
ports:
- 8080:8080
depends_on:
- mysql
mysql:
image: mysql:5.7.20
environment:
- MYSQL_USER=root
- MYSQL_ALLOW_EMPTY_PASSWORD=yes
- MYSQL_DATABASE=crossblogs
ports:
- 3306:3306
command: mysqld --lower_case_table_names=1 --skip-ssl --character_set_server=utf8 --explicit_defaults_for_timestamp
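One thing worth checking (an assumption on my part, not a confirmed fix for this file): depends_on only waits for the mysql container to start, not for the server inside it to accept connections, so the app may simply be connecting before MySQL is ready. With Compose file format 2.1+ you can add a healthcheck and wait on it:

version: "2.1"                         # the condition: syntax needs file format 2.1 or later
services:
  app:
    depends_on:
      mysql:
        condition: service_healthy     # wait until the healthcheck below passes
  mysql:
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      interval: 5s
      timeout: 5s
      retries: 10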