Rails Docker-compose and existing nginx - docker

I have a running server with nginx serving several personal apps. I want to add a new app that is built with Docker and docker-compose. Now I want to hook it into the existing nginx (which is not running in Docker), but I'm not sure how.
Current docker-compose.prod file:
version: '1'
services:
  app:
    build: .
    command: "sh scripts/wait-for 127.0.0.1:3306 -- scripts/start.sh"
    network_mode: "host"
    environment:
      - RAILS_ENV=production
      - RAILS_SERVE_STATIC_FILES=true # NOTE: this is for LOCAL PROD. to be removed on the actual server; nginx should handle static assets
      - MYSQL_USER=${DATABASE_USER}
      - MYSQL_PASSWORD=${DATABASE_PASSWORD}
    env_file:
      - .env.prod
    ports:
      - "3051:3000" # random port
I'm a beginner with both nginx and Docker. Could you give some basic steps on how to configure the app to run behind nginx?
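A minimal sketch of the host-side nginx config, assuming the app ends up reachable on host port 3051 as in the ports: mapping above (the server_name and file path are placeholders; note that with network_mode: "host" the ports: mapping is ignored, so the app would instead be reachable on whatever port Rails itself listens on, e.g. 3000, and proxy_pass would need to match):

# /etc/nginx/sites-available/myapp.conf  (placeholder path and server_name)
server {
    listen 80;
    server_name myapp.example.com;

    location / {
        # forward requests to the Rails container published on localhost
        proxy_pass http://127.0.0.1:3051;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

Enable it with a symlink into sites-enabled, then check and reload with nginx -t && systemctl reload nginx.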

Related

Possible to share folders between containers?

I want to know how to share an application folder between containers.
I found articles about how to share a folder between a container and the host, but I could not find anything about container-to-container sharing.
I want to edit the code of the frontend application from the backend, so I need to share the folder. <- this is also my goal.
Any solution?
My config is like this:
/
|- docker-compose.yml
|- backend application
|  |- Dockerfile
|- frontend application
   |- Dockerfile
And docker-compose.yml is like this:
version: '3'
services:
  railsApp: # <- backend application
    build: .
    command: bundle exec rails s -p 5000 -b '0.0.0.0'
    volumes:
      - code_share:/var/web/railsApp
    ports:
      - "3000:3000"
  reactApp: # <- frontend application
    build: .
    command: yarn start
    volumes:
      - code_share:/var/web/reactApp
    ports:
      - "3000:3000"
volumes:
  code_share:
You are already mounting a named volume in both your frontend and backend.
According to your configuration, both of your applications will see the exact same content under /var/web/railsApp and /var/web/reactApp.
So whenever you write to /var/web/reactApp in your frontend application container, the changes will also be reflected in the backend's /var/web/railsApp.
To achieve what you want (having railsApp and reactApp under /var/web), try bind-mounting a folder on the host machine into both containers (and make sure each application writes into its respective folder under /var/web):
mkdir -p /var/web/railsApp /var/web/reactApp
then adjust your compose file:
version: '3'
services:
  railsApp: # <- backend application
    build: .
    command: bundle exec rails s -p 5000 -b '0.0.0.0'
    volumes:
      - /var/web:/var/web
    ports:
      - "3000:3000"
  reactApp: # <- frontend application
    build: .
    command: yarn start
    volumes:
      - /var/web:/var/web
    ports:
      - "3000:3000"

Configure docker volumes to share data across host and containers

I am stuck trying to configure Docker volumes to share files between my host and my containers, so that the containers can use those files. Let me explain.
I have a dockerized Rails app with Puma as the web server. I want Puma to be able to see and use the SSL .key and .crt files. For this project I am also using docker-compose in "production mode", but I do not know how to make this work.
My setup is this:
The Ubuntu 18.04 production host has the SSL files inside /home/ubuntu/my_app_keys; the containers also run on this host.
/home/ubuntu/docker-compose.yml
version: '3'
services:
  postgres:
    image: postgres:10.5
    environment:
      POSTGRES_DB: my_app_production
    env_file:
      - ~/production.env
  redis:
    image: redis:4.0.11
  web:
    image: my_app:latest
    command: bundle exec rails server -p 3000 -b 'ssl://127.0.0.1:3000?key=/home/ubuntu/my_app_keys/server.key&cert=/home/ubuntu/my_app_keys/server.crt' -e production
    ports:
      - '3000:3000'
    volumes:
      - /home/ubuntu/my_app_keys
    depends_on:
      - postgres
      - redis
    env_file:
      - ~/production.env
    restart: always
  sidekiq:
    image: my_app_sidekiq:latest
    command: bundle exec sidekiq -C config/sidekiq.yml
    depends_on:
      - postgres
      - redis
    env_file:
      - ~/production.env
    restart: always
So, as you can see, command: bundle exec rails server -p 3000 -b 'ssl://127.0.0.1:3000?key=/home/ubuntu/my_app_keys/server.key&cert=/home/ubuntu/my_app_keys/server.crt' is looking for the SSL files in /home/ubuntu/my_app_keys. When I execute docker-compose up, Puma cannot find the SSL files and exits with:
/usr/local/bundle/gems/puma-3.9.1/lib/puma/minissl.rb:180:in `key=': No such key file '/home/ubuntu/my_app_keys/server.key' (ArgumentError)
I think it is because key=/home/ubuntu/my_app_keys/server.key&cert=/home/ubuntu/my_app_keys/server.crt point to paths in the container context, but I have the cert and key in my host context.
So I included a volume in docker-compose in order to bind-mount the files:
volumes:
  - /home/ubuntu/my_app_keys
but without luck, same error.
In the container context my app lives in the /var/www/my_app directory, so I tried to specify an absolute path (I imagined that the SSL files could not be shared because they were not in the same directory where my app lived), so I added, as the compose-file docs say:
volumes:
  - /home/ubuntu/my_app_keys:/var/www/my_app
and changed the command in the compose file:
command: bundle exec rails server -p 3000 -b 'ssl://127.0.0.1:3000?key=server.key&cert=server.crt' -e
When I execute docker-compose up, my web service exits with an error:
web | Could not locate Gemfile or .bundle/ directory
The only way the web service runs is with (but then no SSL files exist):
volumes:
  - /home/ubuntu/my_app_keys
So, I do not know what to do now. Any help?
When your Docker Compose YAML file says:
volumes:
  - /home/ubuntu/my_app_keys
It means, "make /home/ubuntu/my_app_keys in container space persist across restarts of the container; it will start off empty unless the Dockerfile did something special; it's not connected to any specific host content".
When you say:
volumes:
  - /home/ubuntu/my_app_keys:/var/www/my_app
It means, "totally replace the contents of /var/www/my_app in container space with the contents of /home/ubuntu/my_app_keys on the host". (The path names in host and container space don't need to be the same.)
As a bonus question, when you say:
rails server -b 'ssl://127.0.0.1:3000?...'
It means, "only listen for inbound connections on port 3000 initiated from within this Docker container; don't accept any connections from outside the container at all, whether from the same physical host, other containers, or elsewhere."

Sidekiq in dockerised rails application on AWS

I have a docker compose file with this content.
version: '3'
services:
  db:
    image: postgres
    restart: always
    ports:
      - "5432:5432"
    environment:
      POSTGRES_PASSWORD: pass
      POSTGRES_USER: user
    volumes:
      - postgres_data:/var/lib/postgresql/data
  redis:
    image: "redis:alpine"
    ports:
      - "6379:6379"
    volumes:
      - 'redis:/var/lib/redis/data'
  sidekiq:
    build: .
    links:
      - db
      - redis
    command: bundle exec sidekiq
    volumes:
      - '.:/app'
  web:
    image: production_image
    ports:
      - "80:80"
    links:
      - db
      - redis
      - sidekiq
    restart: always
volumes:
  postgres_data:
  redis:
In this, to run Sidekiq we run bundle exec sidekiq in the current directory. This works on my local machine in the development environment. But on the AWS EC2 instance, I am sending only my docker-compose.yml file and running docker-compose up. Since the project code is not there, Sidekiq fails. How should I run Sidekiq on the EC2 instance without sending my code there, using only a Docker image of my code in the compose file?
The two important things you need to do are to remove the volumes: declaration that gets the actual application code from your local filesystem, and upload your built Docker image to some registry. Since you're otherwise on AWS, ECR is a ready option; public Docker Hub will work fine too.
Depending on how your Rails app is structured, it might make sense to use the same image with different commands for the main application and the Sidekiq worker(s), and it might work to just make it say
sidekiq:
  image: production_image
  command: bundle exec sidekiq
Since you're looking at AWS anyway, you should also consider using hosted services for data storage (RDS for the database, ElastiCache for Redis). The important thing is to pass the locations of those data stores as environment variables so that you can change them later (maybe they would default to localhost for developer use, but always be something different when deployed).
You'll also notice that my examples don't have links:. Docker provides an internal DNS service for containers to find each other, and Docker Compose arranges for containers to be found via their service key in the YAML file.
Finally, you should be able to test this setup locally before deploying it to EC2. Run docker build and docker-compose up as needed; debug; and if it works then docker push the image(s) and launch it on Amazon.
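A rough sketch of the build-and-push step (the registry URL, repository name, and tag are placeholders matching the example below; the login flow assumes AWS CLI v2):

# authenticate the Docker CLI against your ECR registry (account id and region are placeholders)
aws ecr get-login-password --region us-east-1 \
  | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com

# build, tag, and push the application image
docker build -t myapp/app .
docker tag myapp/app 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp/app:1.0
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp/app:1.0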
version: '3'
volumes: *volumes_from_the_question
services:
  db: *db_from_the_question
  redis: *redis_from_the_question
  sidekiq:
    image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp/sidekiq:1.0
    environment:
      - PGHOST=db
      - REDIS_HOST=redis
  app:
    image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp/app:1.0
    ports:
      - "80:80"
    environment:
      - PGHOST=db
      - REDIS_HOST=redis

Prevent publishing ports defined in compose file

I have a docker-compose file that defines a service that will run my application and a service that the application depends on:
services:
  frontend:
    build:
      context: .
    volumes:
      - "../.:/opt/app"
    ports:
      - "8080:8080"
    links:
      - redis
    image: node
    command: ['yarn', 'start']
  redis:
    image: redis
    expose:
      - "6379"
For development this compose file exposes 8080 so that I can access the running code from a browser.
In Jenkins, however, I can't publish that port, as two jobs running simultaneously would then conflict trying to bind to the same port on the Jenkins host.
Is there a way to prevent docker-compose from binding service ports? Like an inverse of the --service-ports flag?
For context:
In Jenkins I run tests using docker-compose run frontend yarn test, which won't map ports and so isn't a problem.
The issue presents when I try to run end to end browser tests against the application. I use a container to run CodeceptJS tests against a running instance of the app. In that case I need the frontend to start before I run the tests, as they will fail if the app is not up.
Q. Is there a way to prevent docker-compose from binding service ports?
It makes no sense to prevent something that you are explicitly asking for; docker-compose will start things exactly as the docker-compose.yml file indicates.
I propose duplicating the frontend service using extends::
version: "2"
services:
frontend-base:
build:
context: .
volumes:
- "../.:/opt/app"
image: node
command: ['yarn', 'start']
frontend:
extends: frontend-base
links:
- redis
ports:
- "8080:8080"
frontend-test:
extends: frontend-base
links:
- redis
command: ['yarn', 'test']
redis:
image: redis
expose:
- "6379"
So use it like this:
docker-compose run frontend # in dev environment
docker-compose run frontend-test # in jenkins
Note that extends: is not available in version: "3", but they will bring it back again in the future.
To prevent publishing ports outside the Docker network, you just need to write a single port in the ports section.
Instead of using this:
ports:
  - 8080:8080
just use this one (below):
ports:
  - 8080
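With only the container port listed, Docker publishes it on a random free host port, so parallel Jenkins jobs won't collide. If something outside the Compose network still needs to reach the app, you can look up the assigned port (a usage sketch; the service name matches the compose file above):

# start the stack, then ask Compose which host port was assigned
docker-compose up -d frontend
docker-compose port frontend 8080
# prints something like 0.0.0.0:32768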

Docker compose, manage environments

I have the following docker-compose.yml file to work locally:
mongodb:
  image: mongo
  command: "--smallfiles --logpath=/dev/null"
web:
  build: .
  command: npm run dev
  volumes:
    - .:/myapp
  ports:
    - "3001:3000"
  links:
    - mongodb
  environment:
    PORT: 3000
    NODE_ENV: 'development'
seed:
  build: ./seed
  links:
    - mongodb
When I deploy to my server, I need to change two things in the docker-compose.yml file:
web:
  command: npm start
  environment:
    NODE_ENV: 'production'
I guess editing the file after each deploy isn't the most comfortable way to do that. Any suggestions on how to cleanly manage environments in the docker-compose.yml file?
The usual way is to use a Compose overrides file. By default docker-compose reads two files at startup, docker-compose.yml and docker-compose.override.yml. You can put anything you want to override in the latter. So:
# docker-compose.yml
mongodb:
  image: mongo
  command: "--smallfiles --logpath=/dev/null"
web:
  build: .
  command: npm run dev
  volumes:
    - .:/myapp
  ports:
    - "3001:3000"
  links:
    - mongodb
  environment:
    PORT: 3000
    NODE_ENV: 'development'
seed:
  build: ./seed
  links:
    - mongodb
Also:
# docker-compose.override.yml
web:
  command: npm start
  environment:
    NODE_ENV: 'production'
Then you can run docker-compose up and you will get the production settings. If you just want dev, you can run docker-compose -f docker-compose.yml up.
An even better way is to name your compose files in a relevant way. So, docker-compose.yml becomes development.yml and docker-compose.override.yml becomes production.yml, or something similar. Then you can run docker-compose -f development.yml -f production.yml up for production, and just docker-compose -f development.yml up for development. You may also want to look into the extends functionality of docker-compose.
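Spelled out as commands, assuming the renamed files from the paragraph above:

# development: base file only
docker-compose -f development.yml up

# production: later -f files override values from earlier ones
docker-compose -f development.yml -f production.yml up -d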
Just try my way.
This is an example from my Django project that I run on Docker.
First, in docker-compose.yml you have to define two services: web, which is meant for production, and devweb, which is meant for development.
If you use Dockerfiles, you can create separate ones (Dockerfile for production, Dockerfile-dev for development).
With that, you can run them using the docker-compose command.
For example:
docker-compose -p $(PROJECT) up -d web for production, and
docker-compose -p $(PROJECT) up --no-deps -d devweb for development.
Anyway, I use a Makefile to manage all the docker-compose commands, and it makes things very easy for me. I just need to run make <command name> to execute a command.
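A minimal Makefile sketch along those lines (the PROJECT default and target names are illustrative, not the answerer's actual file; recipe lines must be indented with tabs):

PROJECT ?= myapp

# start the production web service
up-web:
	docker-compose -p $(PROJECT) up -d web

# start the development web service without its dependencies
up-devweb:
	docker-compose -p $(PROJECT) up --no-deps -d devweb

# stop and remove everything for this project
down:
	docker-compose -p $(PROJECT) down

Then, for example, make up-devweb brings up the development service.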
May this answer help you.
