I am relatively new to development in general, to the Docker universe, and to Rails in particular, so I apologize in advance if this sounds like a silly question.
I am trying to run an application in a monorepo composed of 4 services (2 websites and 2 APIs) plus PostgreSQL, with the help of Docker Compose. The final goal is to run it on a VPS behind Traefik (once I get the current app to work locally).
Here are the different services:
Postgres (through the Postgres image available on Docker Hub)
a B2C website (Next.js)
an admin website (React, created with Vite)
an API (Rails). It should be linked to the Postgres database
a Strapi API (for the content of the B2C website). Strapi has its own SQLite database. Only the B2C website requires the data coming from Strapi.
When I run the docker compose up -d command, everything seems to start correctly,
but when I go to one of the websites (https://localhost:3009, 3008, or 3001), I get nothing; only the Strapi seems to be working correctly.
However, I don't see any error in the logs of any of the apps, including the Rails API.
I assume that I have mistakes in my config, especially in the database.yml of the Rails API and in the docker-compose.yml file.
database.yml:
default: &default
  adapter: postgresql
  encoding: unicode
  pool: <%= ENV.fetch("RAILS_MAX_THREADS") { 5 } %>
  host: pg

development:
  <<: *default
  database: chana_api_v2_development

test:
  <<: *default
  database: chana_api_v2_test

production:
  <<: *default
  database: chana_api_v2_production
  username: chana
  password: <%= ENV["CHANA_DATABASE_PASSWORD"] %>
docker-compose.yml
version: '3'

services:
  # ---------------- POSTGRES -----------------
  pg:
    image: postgres:14.6
    container_name: pg
    networks:
      - chana_postgres_network
    ports:
      - "5432:5432"
    environment:
      POSTGRES_DB: chana_development
      POSTGRES_USER: chana
      POSTGRES_PASSWORD: chana
    volumes:
      - ./data:/var/lib/postgresql/data

  # ---------------- RAILS API -----------------
  api:
    build: ./api
    container_name: api
    networks:
      - chana_postgres_network
      - api_network
    volumes:
      - ./api:/chana_api
    ports:
      - "3001:3000"
    depends_on:
      - pg

  # ---------------- STRAPI -----------------
  strapi:
    build:
      context: ./strapi
      args:
        BASE_VERSION: latest
        STRAPI_VERSION: 4.5.0
    container_name: chana-strapi
    restart: unless-stopped
    env_file: .env
    environment:
      NODE_ENV: ${NODE_ENV}
      HOST: ${HOST}
      PORT: ${PORT}
    volumes:
      - ./strapi:/srv/app
      - strapi_node_modules:/srv/app/node_modules
    ports:
      - "1337:1337"

  # ---------------- B2C WEBSITE -----------------
  public-front:
    build: ./public-front
    container_name: public-front
    restart: always
    command: yarn dev
    ports:
      - "3009:3000"
    networks:
      - api_network
      - chana_postgres_network
    depends_on:
      - api
      - strapi
    volumes:
      - ./public-front:/app
      - /app/node_modules
      - /app/.next

  # ---------------- ADMIN WEBSITE -----------------
  admin-front:
    build: ./admin-front
    container_name: admin-front
    restart: always
    command: yarn dev
    ports:
      - "3008:3000"
    networks:
      - api_network
      - chana_postgres_network
    depends_on:
      - api
    volumes:
      - ./admin-front:/app
      - /app/node_modules
      - /app/.next

volumes:
  strapi_node_modules:

networks:
  api_network:
  chana_postgres_network:
Do you have any idea why I cannot see anything on the website pages?
I tried changing the code of the relevant files, especially database.yml, docker-compose.yml, and the Dockerfiles of each app.
Also, I tried looking inside the api container (Rails) with the command docker exec -it api /bin/sh to check the database through the Rails console, and I get this error message:
ActiveRecord::ConnectionNotEstablished: could not connect to server: No such file or directory. Is the server running locally and accepting connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
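Side note for anyone debugging the same thing: that Unix-socket error means Rails fell back to the default local socket instead of using host: pg, which usually means the database.yml setting is not being picked up (for example, a DATABASE_URL environment variable overrides database.yml when set). A quick connectivity check from inside the api container, assuming the Postgres client tools are installed in the image, would be:
docker exec -it api /bin/sh
pg_isready -h pg -p 5432 -U chana   # should report "pg:5432 - accepting connections"
echo $DATABASE_URL                  # if this is set, it overrides database.yml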
Instead of typing localhost yourself, press Ctrl and click the URL the app prints in its logs; what it prints sometimes does not match the localhost port your website is actually exposed on.
It looks like your host inside Docker is 127.0.0.1:3000. Normally this is fine, but in Docker, when you want to expose the app to your host machine, you need to change the app to run on 0.0.0.0:3000; Docker will then be able to pass the app through to your host machine. Without the specific Dockerfiles this is the best I can do. I have run into this issue with Strapi and some other apps before, so hopefully it helps.
In case I wasn't clear: it will still be localhost:3000 on the host machine.
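For instance, a minimal sketch of forcing each dev server to bind to 0.0.0.0 through the compose command overrides; the exact flags are assumptions and depend on your Dockerfiles and package.json scripts:
api:
  command: bin/rails server -b 0.0.0.0 -p 3000   # Rails binds to localhost by default
public-front:
  command: yarn dev --hostname 0.0.0.0           # passed through to next dev
admin-front:
  command: yarn dev --host 0.0.0.0               # passed through to vite
The sites would still be reached as localhost:3009, 3008, and 3001 on the host machine.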
Related
OK, so I am trying to deploy a Rails app to a Docker container (the host machine is a Mac). I was planning to deploy in development mode first to check that everything works.
I have set up a phpMyAdmin service, and I can connect to the server by entering the server name moviedb_mariamovie_1 with user root and the corresponding password.
But whatever I put into my database.yml for Rails doesn't work: I tried localhost, I tried 127.0.0.1, I tried "mariamovie", and I tried "moviedb_mariamovie_1", and it always says "host not found" when I run rails db:create (or anything else that involves the DB).
I am totally confused by this. I read the database section of the Docker manuals, and I seem to be too stupid for it.
(I have other problems with this, but one thing after the other.)
docker-compose.yml:
version: "3.7"
services:
moviedb:
image: tkhobbes/moviedb
restart: unless-stopped
ports:
- 3001:3000
depends_on:
- mariamovie
environment:
MYSQL_ROOT_PASSWORD: redacted
RAILS_ENV: development
volumes:
- /Users/thomas/Documents/Production/moviedb/storage:/opt/activestorage
mariamovie:
image: mariadb
restart: unless-stopped
ports:
- 3333:3306
environment:
MYSQL_ROOT_PASSWORD: redacted
phpmymaria:
image: phpmyadmin
restart: unless-stopped
ports:
- 8021:80
depends_on:
- mariamovie
environment:
PMA_PORT: 3333
PMA_ARBITRARY: 1
image: nginx:1.21-alpine
volumes:
- /Users/thomas/Documents/Production/moviedb/vendor/nginx:/etc/nginx/user.conf.d:ro
ports:
- 8020:8020
depends_on:
- moviedb
restart: unless-stopped
database.yml:
default: &default
  adapter: mysql2
  encoding: utf8mb4
  pool: <%= ENV.fetch("RAILS_MAX_THREADS") { 5 } %>
  host: 127.0.0.1
  port: 3333
  username: redacted
  password: redacted

development:
  <<: *default
  database: newmovie_development
...
You're inside your Docker "network": your database should be accessible from your Rails app (which is inside it too) via mariamovie:3306.
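Concretely, a sketch of the matching database.yml change (using the service name from your compose file; note that it is the container-internal port 3306, not the published 3333, because the 3333:3306 mapping only applies on the host):
default: &default
  adapter: mysql2
  encoding: utf8mb4
  pool: <%= ENV.fetch("RAILS_MAX_THREADS") { 5 } %>
  host: mariamovie  # compose service name, resolvable inside the compose network
  port: 3306        # internal container port; 3333 is only published to the host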
I'm working on building docker containers for a Ruby-on-Rails project I'm currently working on, so that I can develop this project using the remote feature of Visual Studio Code. This project should still continue to work without using docker containers, so I cannot make breaking changes to the existing code that would compromise this.
The application server (Rails) needs to connect to a MySQL database that's running in a separate container. The database container is named db, and I can connect from the application container to this container by using the db hostname.
The database.yml config file for Rails defines how to connect to the database, and this is where my problem is situated. I don't want to change the host from localhost to db, as this would mean that regular users (who do not use Docker containers) would no longer be able to connect to the database without changing this file. How can I start or change my Docker config so that db is accessible as localhost inside the application container?
database.yml:
default: &default
  adapter: mysql2
  username: ****
  password: ****
  pool: <%= ENV.fetch("RAILS_MAX_THREADS") { 5 } %>

development:
  <<: *default
  database: ****
  # setup local port forwarding for this to work
  host: db
  port: 3306
docker-compose.yml:
version: '3.7'
services:
  db:
    build: ./unipept-db
    environment:
      MYSQL_ROOT_PASSWORD: ****
      MYSQL_DATABASE: ****
      MYSQL_USER: ****
      MYSQL_PASSWORD: ****
    restart: always
    ports:
      - "3306:3306"
    hostname: mysql
  phpmyadmin:
    depends_on:
      - db
    image: phpmyadmin/phpmyadmin
    ports:
      - '8080:80'
    environment:
      PMA_HOST: db
      MYSQL_ROOT_PASSWORD: ****
    restart: always
  app:
    depends_on:
      - db
    build: ./unipept-application
    command: sleep infinity
    ports:
      - '5000:5000'
    volumes:
      - ~/.gitconfig:/root/.gitconfig
      - ..:/workspace
Use network_mode: "host" in your app config; then you can call the DB from your app using localhost:PORT.
phpmyadmin:
  depends_on:
    - db
  image: phpmyadmin/phpmyadmin
  ports:
    - '8080:80'
  environment:
    PMA_HOST: db
    MYSQL_ROOT_PASSWORD: ****
  restart: always
  network_mode: "host"
app:
  depends_on:
    - db
  build: ./unipept-application
  command: sleep infinity
  ports:
    - '5000:5000'
  network_mode: "host"
  volumes:
    - ~/.gitconfig:/root/.gitconfig
    - ..:/workspace
PS: Published ports are discarded when using host network mode
If you make it an environment variable in your database.yml file (Rails runs the file through ERB), it will be configurable. You already have an example of this pattern in the file. You can set:
host: <%= ENV.fetch('DB_HOST', 'localhost') %>
In development, just don't set the environment variable, and it will use localhost. In a Docker environment, do set it, and it will use that hostname.
version: '3'
services:
  db:
    image: mysql
  app:
    build: .
    environment:
      DB_HOST: db
    ports:
      - '5000:5000'
You should also pass things like database credentials the same way.
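For example, a sketch of passing credentials through the same mechanism (the variable names here are illustrative, not taken from the original project):
app:
  build: ./unipept-application
  environment:
    DB_HOST: db
    DB_USERNAME: unipept
    DB_PASSWORD: secret
and in database.yml:
username: <%= ENV.fetch('DB_USERNAME', 'root') %>
password: <%= ENV.fetch('DB_PASSWORD', '') %>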
I am trying to move a working Rails app to a Docker environment.
Following the UNIX (and Docker) philosophy, I would like to have each service in its own container.
I managed to get Redis and Postgres working fine, but I am struggling to get Solr and Rails talking to each other.
In app/models/spree/sunspot/search_decorator.rb, when the line
#solr_search.execute
executes, the following error appears on the console:
Errno::EADDRNOTAVAIL (Cannot assign requested address - connect(2) for "localhost" port 8983):
While researching a solution, I found people simply installing Solr in the same container as their Rails app, but I would rather keep it in a separate container.
Here are my config/sunspot.yml
development:
  solr:
    hostname: localhost
    port: 8983
    log_level: INFO
    path: /solr/development
and docker-compose.yml files
version: '2'
services:
  db:
    (...)
  redis:
    (...)
  solr:
    image: solr:7.0.1
    ports:
      - "8983:8983"
    volumes:
      - solr-data:/opt/solr/server/solr/mycores
    entrypoint:
      - docker-entrypoint.sh
      - solr-precreate
      - mycore
    networks:
      - backend
  app:
    build: .
    env_file: .env
    environment:
      RAILS_ENV: $RAILS_ENV
    depends_on:
      - db
      - redis
      - solr
    ports:
      - "3000:3000"
    tty: true
    networks:
      - backend
volumes:
  solr-data:
  redis-data:
  postgres-data:
networks:
  backend:
    driver: bridge
Any suggestions?
Your config/sunspot.yml should have the following:
development:
  solr:
    hostname: solr # since our solr instance is linked as solr
    port: 8983
    log_level: WARNING
    solr_home: solr
    path: /solr/mycore
    # this path comes from the last command of our entrypoint as
    # specified in the last parameter for our solr container
If you see
Solr::Error::Http (RSolr::Error::Http - 404 Not Found
Error: Not Found
URI: http://localhost:8982/solr/development/select?wt=json
Create a new core using the admin interface at:
http://localhost:8982/solr/#/~cores
or using the following command:
docker-compose exec solr solr create_core -c development
I wrote a blog post on this: https://gaurav.koley.in/2018/searching-in-rails-with-solr-sunspot-and-docker
Hopefully that helps those who come here at a later stage.
When you declare services in a docker-compose file, each container gets its service name as its hostname. So your solr service will be available, inside the backend network, as solr.
What I'm seeing from your error is that the Ruby code is trying to connect to localhost:8983, while it should connect to solr:8983.
You'll probably also need to change the hostname inside config/sunspot.yml, but I don't work with Solr, so I'm not sure about this.
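If you want the same sunspot.yml to keep working outside Docker too, the file is ERB-templated (at least in recent sunspot_rails versions), so a sketch like this should work; SOLR_HOST is an assumed variable name:
development:
  solr:
    hostname: <%= ENV.fetch('SOLR_HOST', 'localhost') %>
    port: 8983
    log_level: INFO
    path: /solr/development
Then set SOLR_HOST: solr in the app service's environment in docker-compose.yml.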
I am trying to set up an extensible docker production environment for a few projects on a virtual machine.
My setup is as follows:
Front end (this works as expected; thanks to Tevin Jeffery for this):
# ~/proxy/docker-compose.yml
version: '2'
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    container_name: nginx-proxy
    ports:
      - '80:80'
      - '443:443'
    volumes:
      - '/etc/nginx/vhost.d'
      - '/usr/share/nginx/html'
      - '/etc/nginx/certs:/etc/nginx/certs:ro'
      - '/var/run/docker.sock:/tmp/docker.sock:ro'
    networks:
      - nginx
  letsencrypt-nginx-proxy:
    container_name: letsencrypt-nginx-proxy
    image: 'jrcs/letsencrypt-nginx-proxy-companion'
    volumes:
      - '/etc/nginx/certs:/etc/nginx/certs'
      - '/var/run/docker.sock:/var/run/docker.sock:ro'
    volumes_from:
      - nginx-proxy
    networks:
      - nginx
networks:
  nginx:
    driver: bridge
Database: (planning to add postgres to support rails apps as well)
# ~/mysql/docker-compose.yml
version: '2'
services:
  db:
    image: mariadb
    environment:
      MYSQL_ROOT_PASSWORD: wordpress
    # ports:
    #   - 3036:3036
    networks:
      - db
networks:
  db:
    driver: bridge
And finally, a WordPress blog to test if everything works:
# ~/wp/docker-compose.yml
version: '2'
services:
  wordpress:
    image: wordpress
    # external_links:
    #   - mysql_db_1:mysql
    ports:
      - 8080:80
    networks:
      - proxy_nginx
      - mysql_db
    environment:
      # for nginx and dockergen
      VIRTUAL_HOST: gizmotronic.ca
      # wordpress setup
      WORDPRESS_DB_HOST: mysql_db_1
      # WORDPRESS_DB_HOST: mysql_db_1:3036
      # WORDPRESS_DB_HOST: mysql
      # WORDPRESS_DB_HOST: mysql:3036
      WORDPRESS_DB_USER: root
      WORDPRESS_DB_PASSWORD: wordpress
networks:
  proxy_nginx:
    external: true
  mysql_db:
    external: true
My problem is that the WordPress container cannot connect to the database. I get the following error when I try to start (docker-compose up) the WordPress container:
wordpress_1 | Warning: mysqli::mysqli(): (HY000/2002): Connection refused in - on line 22
wordpress_1 |
wordpress_1 | MySQL Connection Error: (2002) Connection refused
wp_wordpress_1 exited with code 1
UPDATE:
I was finally able to get this working. My main problem was relying on the container defaults for the environment variables. This created an automatic data volume without a database or user for WordPress. After I added explicit environment variables to the mysql and WordPress containers, I removed the data volume and restarted both containers. This forced the mysql container to recreate the database and user.
To ~/mysql/docker-compose.yml:
environment:
  MYSQL_ROOT_PASSWORD: wordpress
  MYSQL_USER: wordpress
  MYSQL_PASSWORD: wordpress
  MYSQL_DATABASE: wordpress
and to ~/wp/docker-compose.yml:
environment:
  # for nginx and dockergen
  VIRTUAL_HOST: gizmotronic.ca
  # wordpress setup
  WORDPRESS_DB_HOST: mysql_db_1
  WORDPRESS_DB_USER: wordpress
  WORDPRESS_DB_PASSWORD: wordpress
  WORDPRESS_DB_NAME: wordpress
One problem with docker-compose is that, although your application may be linked to your database, it will NOT wait for the database to be up and ready. Here is the official Docker documentation on startup order:
https://docs.docker.com/compose/startup-order/
I've faced a similar problem, where my test application would fail because it couldn't connect to the server, which wasn't up and running yet.
I made a workaround similar to the article linked above by running a shell script that polls the DB's address until it is available. This script should run as the last CMD command in your application.
# Wait for the MySQL port to accept TCP connections before starting the app.
# (MySQL doesn't speak HTTP, so a curl check for a "200" response will never
# succeed; a plain TCP probe with nc does the job.)
until nc -z "$YOUR_MYSQL_DATABASE" 3306; do
  echo "MySQL is not ready yet... retrying"
  sleep 2
done
# Once we know the server is up, we can start our application
# enter code to start your application here
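As an alternative to a wait script, newer Compose versions support healthchecks combined with depends_on conditions; here is a sketch, assuming the database image ships mysqladmin and your Compose version supports the condition form:
services:
  db:
    image: mariadb
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      interval: 5s
      retries: 10
  wordpress:
    image: wordpress
    depends_on:
      db:
        condition: service_healthy  # wait until the healthcheck passes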
I'm not 100% sure this is the problem you're having. Another way to debug it is to run docker-compose in detached mode with the -d flag and then run docker ps to see whether your database container is even running. If it is running, run docker logs $YOUR_DB_CONTAINER_ID to see whether MySQL reports any errors on startup.
I'm designing a Rails web app using Docker, and for a variety of reasons I'd like to use RDS in the production environment for its configurability and durability (this is a requirement), rather than a container-based DB.
I realize that I can configure database.yml to point to my RDS instance in the prod env, and to some local DB instance in my local dev env.
However, I'm confused as to whether to use a container-based DB in my local dev environment, or an external one like MySQL Server.
Based on the Docker pattern of env-agnostic containers, I suppose that having a container-based DB in only some envs wouldn't make sense (in fact, I don't think docker-compose.yml would even support something like this), so I am assuming I'll need to go with the MySQL Server solution for my local dev env.
Has anybody else been through such a requirement? Let me know if I am thinking about this the right way. Also, would this pose any potential issues for DB migration scripts?
Any suggestions are welcome!
Thank you.
Great questions, Donald.
I have a Postgres container set up for local use in my dev.docker-compose.yml file.
And in prod, as you do, I have my database.yml configuration pointing to my RDS database.
In my prod docker-compose file, I do not have any database container specified, since I am using RDS.
# prod.docker-compose.yml
version: "3.9"
services:
  web:
    build:
      context: .
      target: prod
      args:
        PG_MAJOR: '13'
        RUBY_VERSION: '2.6.6'
        BUNDLER_VERSION: '2.1.4'
    env_file: .env
    stdin_open: true
    tty: true
    command: ./bin/start_dev_server
    image: ${REGISTRY_HOST}
    ports:
      - "3000:3000"
# dev.docker-compose.yml
version: "3.9"
services:
  web:
    build:
      context: .
      target: dev
      args:
        PG_MAJOR: '13'
        RUBY_VERSION: '2.6.6'
        BUNDLER_VERSION: '2.1.4'
    env_file: .env
    stdin_open: true
    tty: true
    command: ./bin/start_dev_server
    volumes:
      - ".:/sokoplace"
      - bundle:/bundle
    ports:
      - "3000:3000"
  postgres:
    image: "postgres:13-alpine"
    volumes:
      - postgres:/var/lib/postgresql/data
    ports:
      - "5432:5432"
    environment:
      POSTGRES_HOST_AUTH_METHOD: trust
volumes:
  bundle:
  postgres:
# config/database.yml
production:
  <<: *default
  url: <%= ENV['PRODUCTION_POSTGRES_HOST'] %>
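For local development, the development section can then point at the compose service by name. Here is a sketch matching the dev.docker-compose.yml above; the database name is an assumption, and with POSTGRES_HOST_AUTH_METHOD: trust no password is needed:
development:
  <<: *default
  host: postgres                   # compose service name
  username: postgres
  database: sokoplace_development  # assumed name, adjust to your app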