Sidekiq in dockerised rails application on AWS - ruby-on-rails

I have a docker compose file with this content.
version: '3'
services:
  db:
    image: postgres
    restart: always
    ports:
      - "5432:5432"
    environment:
      POSTGRES_PASSWORD: pass
      POSTGRES_USER: user
    volumes:
      - postgres_data:/var/lib/postgresql/data
  redis:
    image: "redis:alpine"
    ports:
      - "6379:6379"
    volumes:
      - 'redis:/var/lib/redis/data'
  sidekiq:
    build: .
    links:
      - db
      - redis
    command: bundle exec sidekiq
    volumes:
      - '.:/app'
  web:
    image: production_image
    ports:
      - "80:80"
    links:
      - db
      - redis
      - sidekiq
    restart: always
volumes:
  postgres_data:
  redis:
To run Sidekiq, this file runs bundle exec sidekiq in the current directory. That works on my local machine in the development environment. But on the AWS EC2 instance I only copy the docker-compose.yml file over and run docker-compose up, and since the project code is not there, Sidekiq fails. How can I run Sidekiq on the EC2 instance without sending my code there, using only the Docker image of my code in the compose file?

The two important things you need to do are to remove the volumes: declaration that gets the actual application code from your local filesystem, and upload your built Docker image to some registry. Since you're otherwise on AWS, ECR is a ready option; public Docker Hub will work fine too.
Depending on how your Rails app is structured, it might make sense to use the same image with different commands for the main application and the Sidekiq worker(s), and it might work to just make it say
sidekiq:
  image: production_image
  command: bundle exec sidekiq
Since you're looking at AWS anyway, you should also consider the possibility of using hosted services for data storage (RDS for the database, ElastiCache for Redis). The important thing is to pass the locations of those data stores as environment variables so that you can change them later (maybe they would default to localhost for developer use, but always be something different when deployed).
You'll also notice that my examples don't have links:. Docker provides an internal DNS service for containers to find each other, and Docker Compose arranges for containers to be found via their service key in the YAML file.
Finally, you should be able to test this setup locally before deploying it to EC2. Run docker build and docker-compose up as needed; debug; and if it works then docker push the image(s) and launch it on Amazon.
version: '3'
volumes: *volumes_from_the_question
services:
  db: *db_from_the_question
  redis: *redis_from_the_question
  sidekiq:
    image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp/sidekiq:1.0
    environment:
      PGHOST: db
      REDIS_HOST: redis
  app:
    image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp/app:1.0
    ports:
      - "80:80"
    environment:
      PGHOST: db
      REDIS_HOST: redis
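The docker build / docker push step mentioned above might look roughly like this (a sketch only: it assumes the AWS CLI v2 is configured and the ECR repositories already exist, and it reuses the placeholder registry and tags from the example compose file):

# authenticate the Docker CLI against the (placeholder) ECR registry
aws ecr get-login-password --region us-east-1 \
  | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com

# build the application image from the project's Dockerfile and push it
docker build -t 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp/app:1.0 .
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp/app:1.0

If the Sidekiq service reuses the same image with a different command, you can point both services at the single pushed image instead of pushing a separate myapp/sidekiq tag.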

Related

How to bind folders inside docker containers?

I have a docker-compose.yml on my local machine like the one below:
version: "3.3"
services:
api:
build: ./api
volumes:
- ./api:/api
ports:
- 3000:3000
links:
- mysql
depends_on:
- mysql
app:
build: ./app
volumes:
- ./app:/app
ports:
- 80:80
mysql:
image: mysql:8.0.27
volumes:
- ./mysql:/var/lib/mysql
tty: true
restart: always
environment:
MYSQL_DATABASE: db
MYSQL_ROOT_PASSWORD: qwerty
MYSQL_USER: db
MYSQL_PASSWORD: qwerty
ports:
- '3306:3306'
The api is a NestJS app; app and mysql are Angular and MySQL respectively. I need to work with this locally.
How can I make it so that any changes I make are applied without rebuilding the containers every time?
You don't have to build an image with your sources in it for a development environment. For NestJS, and since you're using Docker (I point this out deliberately, because other container runtimes exist), you can simply run a Node.js image from the main Docker registry: https://hub.docker.com/_/node.
You could run it with:
docker run -d -v "$PWD/app":/app node:12-alpine node /app/index.js
N.B.: I chose 12-alpine for the example; note the bind-mount source is an absolute path ($PWD/app rather than ./app, since docker run, unlike Compose, expects an absolute host path). I'm assuming the file that starts your app is index.js; replace it with yours.
You'll need to install the Node dependencies yourself, and they must be in the ./app directory.
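If you'd rather not install a Node toolchain on the host, one way to do that (a sketch, assuming a standard package.json in ./app) is to run npm inside the same image:

# install dependencies into ./app using the same Node image
docker run --rm -v "$PWD/app":/app -w /app node:12-alpine npm install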
For docker-compose, it could look like this:
version: "3.3"
services:
app:
image: node:12-alpine
command: /app/index.js
volumes:
- ./app:/app
ports:
- "80:80"
Same way for your API project.
For a production image, it is still recommended to build an image with the sources baked into it.
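Such a production Dockerfile might look roughly like this (a sketch, assuming a standard npm project; adjust the entry point and any build steps to your app):

FROM node:12-alpine
WORKDIR /app
# install dependencies first so this layer is cached between builds
COPY package*.json ./
RUN npm ci
# copy the application source into the image
COPY . .
CMD ["node", "index.js"]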
Say you're working on your front-end application (app). It needs to make calls out to the other components, especially api. So you can start the things it depends on, but not the application itself:
docker-compose up -d api
Update your application configuration for this different environment; if you would have proxied to http://api:3000 before, for example, you need to change this to http://localhost:3000 to connect to the container's published ports:.
Now you can develop your application totally normally, without doing anything Docker-specific.
# outside Docker, on your normal development workstation
yarn run dev
$EDITOR src/components/Foo.tsx
You might find it convenient to use environment variables for these settings that will, well, differ per environment. If you're developing the back-end code but want to attach a live UI to it, you'll either need to rebuild the container or update the front-end's back-end URL to point at the host system.
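For example, the local dev command might read the back-end location from an environment variable (API_URL is a made-up name here; use whatever your build tooling actually reads):

# point the locally running front-end at the api container's published port
API_URL=http://localhost:3000 yarn run dev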
This approach also means you do not need to bind-mount your application's code into the container, and I'd recommend removing those volumes: blocks.

Multiple Rails Application docker up not working

I have two Rails 6 applications and I am trying to deploy them on an AWS EC2 instance on different ports, 8080 and 8081. When I run docker-compose up -d, it starts the first Rails application successfully, but when I run docker-compose up -d for the second application, it takes the first application down and brings the second one up on its port.
Below is my Docker configuration for the two applications.
Application 1
version: "3.4"
services:
app:
image: "dockerhub_repo/a_api:${TAG}"
# build:
# context: .
# dockerfile: Dockerfile
container_name: a_api_container
depends_on:
- database
- redis
- sidekiq
ports:
- "8080:8080"
volumes:
- .:/app
env_file: .env
environment:
RAILS_ENV: staging
database:
image: postgres:12.1
container_name: a_database_container
restart: always
volumes:
- db_data:/var/lib/postgresql/data
- ./init.sql:/docker-entrypoint-initdb.d/init.sql
sidekiq:
image: "dockerhub_repo/a_api:${STAG}"
container_name: a_sidekiq_container
environment:
RAILS_ENV: staging
env_file: .env
depends_on:
- redis
volumes:
- ".:/app"
redis:
image: redis:4.0-alpine
container_name: a_redis_container
volumes:
- "redis:/data"
volumes:
redis:
db_data:
Application 2
version: "3.4"
services:
app:
image: "dockerhub_repo/b_api:${PPTAG}"
build:
context: .
dockerfile: Dockerfile
container_name: b_api
depends_on:
- database
- redis
ports:
- "8081:8081"
volumes:
- .:/app
env_file: .env
environment:
RAILS_ENV: development
database:
image: postgres:12.1
container_name: pp_database
restart: always
volumes:
- db_data:/var/lib/postgresql/data
- ./init.sql:/docker-entrypoint-initdb.d/init.sql
redis:
image: redis:4.0-alpine
container_name: pp_redis
volumes:
db_data:
This configuration works very well on my local machine; it starts both applications locally on different ports. But it has issues on AWS EC2. Is there anything wrong in the configuration?
Compose has the notion of a project name. If you add or delete containers from a docker-compose.yml file, it looks for existing containers that are labeled with the project name to figure out what needs to change. The project name is also included in the Docker names of containers, networks, and volumes.
You can configure the project name with the COMPOSE_PROJECT_NAME environment variable or the docker-compose -p option. If you don't configure it, it defaults to the base name of the current directory.
You clarify in a comment that the two docker-compose.yml files are in directories app1/backend and app2/backend. Since the base names of those directories are both backend, they have the same project name; so if you run docker-compose up in the app2/backend directory, it finds the existing containers for the backend project, sees they don't match what's in the docker-compose.yml file, and deletes them (even though you as the operator think they belong to the other project).
There are a couple of ways to get around this:
Rename one or the other directory; maybe move the docker-compose.yml files up to the top-level app1 and app2 directories.
In one or both directories, create a .env file that sets COMPOSE_PROJECT_NAME=app1. (Note that file is checked in the current directory, not necessarily the directory that contains the docker-compose.yml file.)
Set an environment variable: export COMPOSE_PROJECT_NAME=app1.
Consistently pass an explicit docker-compose -p app1 ... option with every Compose command.
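For example, with the -p option (a sketch; app1 and app2 are the project names suggested above), both stacks can run side by side on the same host:

# in app1/backend
docker-compose -p app1 up -d
# in app2/backend
docker-compose -p app2 up -d

Compose then labels and names each project's resources under app1 and app2 respectively, so the second up no longer deletes the first project's containers.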

How can docker-compose.yml include/omit environment-specific services?

Is there a way in docker-compose.yml to include a db service only in specific environments ("test" in my case)?
For a Ruby project, development and production both use a remote Postgres database, but the test environment needs its own local Postgres database.
What I have now is shown below. It "works" in the sense that when we run in development the db container is simply ignored by our code (our development ENV supplies a remote Postgres URL instead of using the db host). But it would be nicer not to spin up an unused Docker container for db when running in development.
version: '3'
services:
  web:
    build: .
    ports:
      - "3010:3010"
    volumes:
      - .:/my_app
    links:
      - db.local
    depends_on:
      - db
  db:
    image: postgres:10.5
    ports:
      - "5432:5432"

How to use IP addresses instead of container names in Docker Compose networking

I'm using Docker Compose for a web application that I'm creating with ASP.NET Core, Postgres and Redis. I have everything set up in Compose to connect to Postgres using the service name I've specified in the docker-compose.yml file. When trying to do the same with Redis, I get an exception. After doing research it turns out this exception is a known issue, and the workaround is to use the IP address of the machine instead of a host name. However, I cannot figure out how to get the IP address of the redis service from the compose file. Is there a way to do that?
Edit
Here is the compose file
version: "3"
services:
postgres:
image: 'postgres:9.5'
env_file:
- '.env'
volumes:
- 'postgres:/var/lib/postgresql/data'
ports:
- '5433:5432'
redis:
image: 'redis:3.0-alpine'
command: redis-server --requirepass devpassword
volumes:
- 'redis:/var/lib/redis/data'
ports:
- '6378:6379'
web:
build: .
env_file:
- '.env'
ports:
- "8000:80"
volumes:
- './src/edb/Controllers:/app/Controllers'
- './src/edb/Views:/app/Views'
- './src/edb/wwwroot:/app/wwwroot'
- './src/edb/Lib:/app/Lib'
volumes:
postgres:
redis:
OK, I found the answer. It was something I had been trying, but I didn't realize the address may change every time you restart the containers.
Run docker ps to get a list of running containers, then copy the ID of your container and run docker inspect {container_id}; that will output the IP address you can use to reach it from within the other running containers.
The reason I was confused is that the address may change when the containers are started, so I had to guess what the IP address was going to be before I started the containers. Luckily, after 5 tries I guessed correctly.
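To pull out just the address, docker inspect also accepts a Go-template format string ({container_id} is whatever docker ps reported):

docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' {container_id}

As noted above, though, this address is not stable across container restarts.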

Difference between production and development docker using

I want to try Docker for my website. I use PHP, nginx and MySQL. I've configured Docker and run my website locally. Now I want to publish my website to production.
I have a few differences between the development and production versions:
I need to be able to connect to MySQL inside the container in development mode (for debugging), but in production MySQL must be isolated from the outside for security.
I want to open my website at the address app.dev and use the nginx-proxy image on my development machine, but in production I will not use nginx-proxy, to improve performance.
Can I run Docker with one docker-compose.yml file?
Or should I create two versions of the docker-compose file for the development and production versions? But in that case I lose an advantage of Docker: the same environment everywhere. If I change docker-compose-dev.yml, I need to remember to change docker-compose-prod.yml.
My docker-compose.yml:
version: '2'
services:
  app:
    build: .
    volumes:
      - ./app:/app
    container_name: app
  app_nginx:
    image: nginx
    ports:
      - "8080:80"
    container_name: app_nginx
    volumes:
      - ./data/nginx:/etc/nginx/conf.d
      - ./app:/app
    environment:
      - VIRTUAL_HOST=app.dev
  app_db:
    image: mysql:5.7
    volumes:
      - "./data/db:/var/lib/mysql"
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD:
      MYSQL_ALLOW_EMPTY_PASSWORD: 1
      MYSQL_DATABASE: "app_db"
    container_name: app_db
  nginx-proxy:
    image: jwilder/nginx-proxy
    container_name: nginx-proxy
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
You can achieve this with environment-variable-based configuration.
Usually different environments, i.e. staging and production, differ only in configuration: the database to connect to, the external services to call, and their endpoints and credentials.
Instead of hard-coding all such configuration, read it from environment variables. That way you can use the same docker-compose file with different environment variables for your staging and production environments.
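As a sketch (DB_HOST here is a made-up variable name; Compose substitutes ${...} values from the shell or an .env file):

version: '2'
services:
  app:
    build: .
    environment:
      - DB_HOST=${DB_HOST}   # e.g. app_db locally, a managed MySQL host in production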
You can also explore Rancher by Rancher Labs at http://rancher.com/ to manage your environments.
