Using docker with RDS (Prod), but confused about local DB setup (dev) - ruby-on-rails

I'm designing a Rails web app using Docker, and for a variety of reasons I'd like to use RDS in the production environment for its configurability and durability, rather than a container-based DB (this is a requirement).
I realize that I can configure database.yml to point to my RDS instance for Prod env, and to some local DB instance in my local dev env.
However, I'm confused as to whether to use a container-based DB in my local dev environment, or an external one like MySQL Server.
Based on the Docker pattern of env-agnostic containers, I suppose having a container-based DB in only some envs wouldn't make sense (in fact, I don't think docker-compose.yml even supports something like this), so I'm assuming I'll need to go with the MySQL Server solution for my local dev env.
Has anybody else been through such a requirement? Let me know if I am thinking about this the right way. Also, would this pose any potential issues for DB migration scripts?
Any suggestions are welcome!
Thank you.

Great questions, Donald.
I have a Postgres container set up for local use in my dev.docker-compose.yml file.
And in prod, like you, I have my database.yml configuration pointing to my RDS database.
My prod docker-compose file does not specify any database container, since I am using RDS:
# prod.docker-compose.yml
version: "3.9"
services:
  web:
    build:
      context: .
      target: prod
      args:
        PG_MAJOR: '13'
        RUBY_VERSION: '2.6.6'
        BUNDLER_VERSION: '2.1.4'
    env_file: .env
    stdin_open: true
    tty: true
    command: ./bin/start_dev_server
    image: ${REGISTRY_HOST}
    ports:
      - "3000:3000"
# dev.docker-compose.yml
version: "3.9"
services:
  web:
    build:
      context: .
      target: dev
      args:
        PG_MAJOR: '13'
        RUBY_VERSION: '2.6.6'
        BUNDLER_VERSION: '2.1.4'
    env_file: .env
    stdin_open: true
    tty: true
    command: ./bin/start_dev_server
    volumes:
      - ".:/sokoplace"
      - bundle:/bundle
    ports:
      - "3000:3000"
  postgres:
    image: "postgres:13-alpine"
    volumes:
      - postgres:/var/lib/postgresql/data
    ports:
      - "5432:5432"
    environment:
      POSTGRES_HOST_AUTH_METHOD: trust
volumes:
  bundle:
  postgres:
# config/database.yml
production:
  <<: *default
  url: <%= ENV['PRODUCTION_POSTGRES_HOST'] %>
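For the development side, database.yml can simply point at the postgres service from dev.docker-compose.yml by its Compose service name. A minimal sketch, assuming a fairly standard default block (the database name below is illustrative):

# config/database.yml (development sketch; "postgres" is the service name
# from dev.docker-compose.yml above, resolved by Compose's internal DNS)
default: &default
  adapter: postgresql
  encoding: unicode
  pool: <%= ENV.fetch("RAILS_MAX_THREADS") { 5 } %>

development:
  <<: *default
  host: postgres
  username: postgres
  database: myapp_development

Because the dev container runs with POSTGRES_HOST_AUTH_METHOD: trust, no password is needed locally. Migrations are unaffected by the split: rails db:migrate runs the same way in both environments, only the connection details differ.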

Related

Localhost not found even if my docker containers are up?

I am relatively new to dev in general, to the Docker universe, and to Rails in particular, so I apologize in advance if this sounds like a silly question.
I am trying to run an application in a monorepo composed of 4 services (2 websites and 2 APIs) + Postgresql, with the help of Docker Compose. The final goal is to run it on a VPS with Traefik (once I get the current app to work locally).
Here are the different services:
Postgres (through the Postgres image available on Docker Hub)
a B2C website (Next.js)
an admin website (React, created with Vite)
an API (Rails). It should be linked to the Postgres database
a Strapi API (for the content of the B2C website). Strapi has its own SQLite database. Only the B2C website requires the data coming from Strapi.
When I run the docker compose up -d command, it seems to be working (see pic below)
but when I go to one of the websites (https://localhost:3009, 3008, or 3001), I get nothing (see below); only Strapi seems to be working correctly.
However, I don't see any errors in the logs of any of the apps. For instance, the Rails API logs below:
I assume that I have mistakes in my config, especially in the database.yml of the Rails API and the docker-compose.yml file.
database.yml:
default: &default
  adapter: postgresql
  encoding: unicode
  pool: <%= ENV.fetch("RAILS_MAX_THREADS") { 5 } %>
  host: pg

development:
  <<: *default
  database: chana_api_v2_development

test:
  <<: *default
  database: chana_api_v2_test

production:
  <<: *default
  database: chana_api_v2_production
  username: chana
  password: <%= ENV["CHANA_DATABASE_PASSWORD"] %>
docker-compose.yml:
version: '3'
services:
  # ---------------- POSTGRES -----------------
  pg:
    image: postgres:14.6
    container_name: pg
    networks:
      - chana_postgres_network
    ports:
      - "5432:5432"
    environment:
      POSTGRES_DB: chana_development
      POSTGRES_USER: chana
      POSTGRES_PASSWORD: chana
    volumes:
      - ./data:/var/lib/postgresql/data
  # ----------------- RAILS API -----------------
  api:
    build: ./api
    container_name: api
    networks:
      - chana_postgres_network
      - api_network
    volumes:
      - ./api:/chana_api
    ports:
      - "3001:3000"
    depends_on:
      - pg
  # ----------------- STRAPI -----------------
  strapi:
    build:
      context: ./strapi
      args:
        BASE_VERSION: latest
        STRAPI_VERSION: 4.5.0
    container_name: chana-strapi
    restart: unless-stopped
    env_file: .env
    environment:
      NODE_ENV: ${NODE_ENV}
      HOST: ${HOST}
      PORT: ${PORT}
    volumes:
      - ./strapi:/srv/app
      - strapi_node_modules:/srv/app/node_modules
    ports:
      - "1337:1337"
  # ----------------- B2C website -----------------
  public-front:
    build: ./public-front
    container_name: public-front
    restart: always
    command: yarn dev
    ports:
      - "3009:3000"
    networks:
      - api_network
      - chana_postgres_network
    depends_on:
      - api
      - strapi
    volumes:
      - ./public-front:/app
      - /app/node_modules
      - /app/.next
  # ----------------- ADMIN website -----------------
  admin-front:
    build: ./admin-front
    container_name: admin-front
    restart: always
    command: yarn dev
    ports:
      - "3008:3000"
    networks:
      - api_network
      - chana_postgres_network
    depends_on:
      - api
    volumes:
      - ./admin-front:/app
      - /app/node_modules
      - /app/.next
volumes:
  strapi_node_modules:
networks:
  api_network:
  chana_postgres_network:
Do you have any idea why I cannot see anything on the website pages?
I tried to change the code of the different files that are relevant, especially database.yml, docker-compose.yml, and the dockerfiles of each app.
Also, I tried to look into the api container (Rails) with the command docker exec -it api /bin/sh to check the database through the Rails console, and I get this error message:
ActiveRecord::ConnectionNotEstablished: could not connect to server: No such file or directory. Is the server running locally and accepting connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
Instead of typing localhost yourself, press Ctrl and click on the URL the app prints out; sometimes it does not open on the localhost port of your website.
It looks like your apps inside Docker are bound to 127.0.0.1:3000. Normally this is fine, but in Docker, when you want to expose an app to your host machine, you need to change it to listen on 0.0.0.0:3000; then Docker can pass the app through to your host machine. Without the specific Dockerfiles this is the best I can do. I have run into this issue with Strapi and some other apps before, so hopefully it helps.
It will still be localhost:3000 on the host machine, if I wasn't clear.
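As a concrete illustration (a sketch, not taken from the asker's Dockerfiles; the command override is an assumption about how the Rails image starts), the bind address can also be forced from the compose file:

# Sketch: make the Rails server listen on all interfaces so the published
# host port 3001 actually reaches it inside the container.
api:
  build: ./api
  command: bundle exec rails server -b 0.0.0.0 -p 3000
  ports:
    - "3001:3000"

The Next.js dev server has an equivalent flag (next dev -H 0.0.0.0), and Vite accepts --host, so the same idea applies to the two front-end services.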

Multiple Rails applications: docker-compose up not working

I have two Rails 6 applications and I am trying to deploy them on an AWS EC2 instance on different ports, 8080 and 8081. When I run docker-compose up -d, it starts the first application successfully, but if I then run docker-compose up -d for the second application, it takes the first application down and brings the second one up on its port.
Below is my Docker configuration for the two applications.
Application 1
version: "3.4"
services:
app:
image: "dockerhub_repo/a_api:${TAG}"
# build:
# context: .
# dockerfile: Dockerfile
container_name: a_api_container
depends_on:
- database
- redis
- sidekiq
ports:
- "8080:8080"
volumes:
- .:/app
env_file: .env
environment:
RAILS_ENV: staging
database:
image: postgres:12.1
container_name: a_database_container
restart: always
volumes:
- db_data:/var/lib/postgresql/data
- ./init.sql:/docker-entrypoint-initdb.d/init.sql
sidekiq:
image: "dockerhub_repo/a_api:${STAG}"
container_name: a_sidekiq_container
environment:
RAILS_ENV: staging
env_file: .env
depends_on:
- redis
volumes:
- ".:/app"
redis:
image: redis:4.0-alpine
container_name: a_redis_container
volumes:
- "redis:/data"
volumes:
redis:
db_data:
Application 2
version: "3.4"
services:
app:
image: "dockerhub_repo/b_api:${PPTAG}"
build:
context: .
dockerfile: Dockerfile
container_name: b_api
depends_on:
- database
- redis
ports:
- "8081:8081"
volumes:
- .:/app
env_file: .env
environment:
RAILS_ENV: development
database:
image: postgres:12.1
container_name: pp_database
restart: always
volumes:
- db_data:/var/lib/postgresql/data
- ./init.sql:/docker-entrypoint-initdb.d/init.sql
redis:
image: redis:4.0-alpine
container_name: pp_redis
volumes:
db_data:
This configuration works very well on my local machine: it starts both applications on different ports. But it has some issue on AWS EC2. Is there anything wrong in the configuration?
Compose has the notion of a project name. If you add or delete containers from a docker-compose.yml file, it looks for existing containers that are labeled with the project name to figure out what needs to change. The project name is also included in the Docker names of containers, networks, and volumes.
You can configure the project name with the COMPOSE_PROJECT_NAME environment variable or the docker-compose -p option. If you don't configure it, it defaults to the base name of the current directory.
You clarify in a comment that the two docker-compose.yml files are in directories app1/backend and app2/backend. Since the base name of those directories are both backend, they have the same project name; so if you run docker-compose up in the app2/backend directory, it finds the existing containers for the backend project, sees they don't match what's in the docker-compose.yml file, and deletes them (even though you as the operator think they belong to the other project).
There are a couple of ways to get around this:
Rename one or the other directory; maybe move the docker-compose.yml files up to the top-level app1 and app2 directories.
In one or both directories, create a .env file that sets COMPOSE_PROJECT_NAME=app1. (Note that file is checked in the current directory, not necessarily the directory that contains the docker-compose.yml file.)
Set and export an environment variable: export COMPOSE_PROJECT_NAME=app1.
Consistently use the -p option (docker-compose -p app1 ...) with all Compose commands.
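One more option, if you are running Docker Compose v2 (an assumption; older docker-compose 1.x does not support it): the Compose Specification has a top-level name: element that pins the project name inside the file itself. A minimal sketch:

# app1/backend/docker-compose.yml -- sketch, requires Docker Compose v2
name: app1
version: "3.4"
services:
  app:
    image: "dockerhub_repo/a_api:${TAG}"
    # ... rest of the file unchanged

With distinct project names, running docker-compose up -d in one directory no longer deletes the other project's containers.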

How can docker-compose.yml include/omit environment-specific services?

Is there a way in docker-compose.yml to include a db service only in specific environments ("test" in my case)?
For a Ruby project, development and production both use a remote Postgres database, but tests need their own local Postgres database.
What I have now is shown below... it "works" in the sense that when we run in development, the db container is simply ignored by our code (our development ENV supplies a remote Postgres URL instead of using the db host). But it would be nicer not to spin up an unused Docker container for db when running in development.
version: '3'
services:
  web:
    build: .
    ports:
      - "3010:3010"
    volumes:
      - .:/my_app
    links:
      - db.local
    depends_on:
      - db
  db:
    image: postgres:10.5
    ports:
      - "5432:5432"

Sidekiq in dockerised rails application on AWS

I have a docker compose file with this content.
version: '3'
services:
  db:
    image: postgres
    restart: always
    ports:
      - "5432:5432"
    environment:
      POSTGRES_PASSWORD: pass
      POSTGRES_USER: user
    volumes:
      - postgres_data:/var/lib/postgresql/data
  redis:
    image: "redis:alpine"
    ports:
      - "6379:6379"
    volumes:
      - 'redis:/var/lib/redis/data'
  sidekiq:
    build: .
    links:
      - db
      - redis
    command: bundle exec sidekiq
    volumes:
      - '.:/app'
  web:
    image: production_image
    ports:
      - "80:80"
    links:
      - db
      - redis
      - sidekiq
    restart: always
volumes:
  postgres_data:
  redis:
In this setup, to run Sidekiq we run bundle exec sidekiq in the current directory. This works on my local machine in the development environment. But on the AWS EC2 instance I only send my docker-compose.yml file and run docker-compose up, and since the project code is not there, Sidekiq fails. How should I run Sidekiq on the EC2 instance without sending my code there, using only a Docker image of my code in the compose file?
The two important things you need to do are to remove the volumes: declaration that gets the actual application code from your local filesystem, and upload your built Docker image to some registry. Since you're otherwise on AWS, ECR is a ready option; public Docker Hub will work fine too.
Depending on how your Rails app is structured, it might make sense to use the same image with different commands for the main application and the Sidekiq worker(s), and it might work to just make it say
sidekiq:
  image: production_image
  command: bundle exec sidekiq
Since you're looking at AWS anyway, you should also consider the possibility of using hosted services for data storage (RDS for the database, ElastiCache for Redis). The important thing is to include the locations of those data stores as environment variables so that you can change them later (maybe they would default to localhost for developer use, but always be something different when deployed).
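Concretely (a sketch, not part of the original answer; the variable names are just conventional choices), the Rails database config can read those locations from the environment with localhost defaults, so the same image works unchanged on a developer machine and against RDS/ElastiCache in deployment:

# config/database.yml -- sketch with environment-driven connection details
default: &default
  adapter: postgresql
  host: <%= ENV.fetch("PGHOST", "localhost") %>
  username: <%= ENV.fetch("PGUSER", "postgres") %>
  password: <%= ENV["PGPASSWORD"] %>
  pool: <%= ENV.fetch("RAILS_MAX_THREADS") { 5 } %>

Sidekiq, for its part, reads REDIS_URL from the environment by default, so an ElastiCache endpoint can be injected the same way.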
You'll also notice that my examples don't have links:. Docker provides an internal DNS service for containers to find each other, and Docker Compose arranges for containers to be found via their service key in the YAML file.
Finally, you should be able to test this setup locally before deploying it to EC2. Run docker build and docker-compose up as needed; debug; and if it works then docker push the image(s) and launch it on Amazon.
version: '3'
volumes: *volumes_from_the_question
services:
  db: *db_from_the_question
  redis: *redis_from_the_question
  sidekiq:
    image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp/sidekiq:1.0
    environment:
      PGHOST: db
      REDIS_HOST: redis
  app:
    image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp/app:1.0
    ports:
      - "80:80"
    environment:
      PGHOST: db
      REDIS_HOST: redis

Difference between production and development Docker usage

I want to try Docker for my website. I use PHP, nginx, and MySQL. I've configured Docker and run my website locally. Now I want to publish my website to production.
There are a few differences between the development and production versions:
I need to be able to connect to MySQL inside the container in development mode (for debugging), but in production MySQL must be isolated from the outside for security.
I want to open my website at the address app.dev and use the nginx-proxy image on my development machine, but in production I will not use nginx-proxy, to improve performance.
Can I run Docker with one docker-compose.yml file?
Or should I create two versions of the docker-compose file for development and production? But in that case I lose an advantage of Docker: the same environment everywhere. If I change docker-compose-dev.yml, I need to remember to also change docker-compose-prod.yml.
My docker-compose.yml:
version: '2'
services:
  app:
    build: .
    volumes:
      - ./app:/app
    container_name: app
  app_nginx:
    image: nginx
    ports:
      - "8080:80"
    container_name: app_nginx
    volumes:
      - ./data/nginx:/etc/nginx/conf.d
      - ./app:/app
    environment:
      - VIRTUAL_HOST=app.dev
  app_db:
    image: mysql:5.7
    volumes:
      - "./data/db:/var/lib/mysql"
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD:
      MYSQL_ALLOW_EMPTY_PASSWORD: 1
      MYSQL_DATABASE: "app_db"
    container_name: app_db
  nginx-proxy:
    image: jwilder/nginx-proxy
    container_name: nginx-proxy
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
You can achieve this with environment variable based configurations.
Usually different environments, i.e. staging and production, differ only in configuration: the database to connect to, the external services to call, their endpoints and credentials.
Instead of hard-coding all such configuration, read it from environment variables. That way you can use the same docker-compose file with different environment variables for your staging and production environments.
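For instance (a sketch against the compose file above; MYSQL_HOST_BIND is a made-up variable name, and the ${VAR:-default} syntax needs compose file format 2.1 or newer), the parts that differ can be parameterized and driven from the shell or an .env file:

# Sketch: one file, different behaviour via environment variables
version: '2.1'
services:
  app_db:
    image: mysql:5.7
    ports:
      # dev: set MYSQL_HOST_BIND=0.0.0.0 to expose MySQL for debugging;
      # prod: the 127.0.0.1 default keeps it reachable only from the host itself
      - "${MYSQL_HOST_BIND:-127.0.0.1}:3306:3306"
  app_nginx:
    image: nginx
    environment:
      - VIRTUAL_HOST=${VIRTUAL_HOST:-app.dev}

Selecting whole services per environment (for example running nginx-proxy only in development) is a separate concern; Compose's docker-compose.override.yml mechanism, or an extra -f file per environment, is the usual way to handle that part.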
You can also explore Rancher by Rancher Labs at http://rancher.com/ to manage your environments.
