I know I am missing something very basic here. I have seen some of the older questions on persisting data using Docker, but I think I am following the most recent documentation found here.
I have a Rails app that I am trying to run in Docker. It runs fine, but every time I start it up I get ActiveRecord::NoDatabaseError. After I create the database and migrate it, the app runs fine, until I shut it down and restart it.
Here is my Dockerfile:
FROM ruby:2.3.0
RUN apt-get update -qq && apt-get install -y build-essential libpq-dev nodejs
ENV RAILS_ROOT /ourlatitude
RUN mkdir -p $RAILS_ROOT/tmp/pids
WORKDIR $RAILS_ROOT
COPY Gemfile Gemfile
COPY Gemfile.lock Gemfile.lock
RUN gem install bundler
RUN bundle install
COPY . .
and here is my docker-compose.yml file:
version: '2'
services:
  db:
    image: postgres:9.4.5
  app:
    build: .
    environment:
      RAILS_ENV: $RAILS_ENV
    ports:
      - "3000:3000"
    command: bundle exec rails s -b 0.0.0.0
    volumes:
      - .:/ourlatitude/database
    depends_on:
      - db
The basic flow I am following is this:
export RAILS_ENV=development
docker-compose build
docker-compose up
docker-compose run app rake db:create
docker-compose run app rake db:migrate
At this point the app runs fine.
But then I do this:
docker-compose down
docker-compose up
and then I am back to the ActiveRecord::NoDatabaseError
So as I said, I think I am missing something very basic.
It doesn't look like you put your postgres data on a volume, and you may be missing other persistent data sources in your app container. It also looked like some indentation was off in your app service definition.
version: '2'
services:
  db:
    image: postgres:9.4.5
    volumes:
      - postgres-data:/var/lib/postgresql/data
  app:
    build: .
    environment:
      RAILS_ENV: $RAILS_ENV
    ports:
      - "3000:3000"
    command: bundle exec rails s -b 0.0.0.0
    volumes:
      - .:/ourlatitude/database
    depends_on:
      - db
volumes:
  postgres-data:
    driver: local
In the example above, the postgres data is stored in a named volume. See the advice on Docker Hub for more details on persisting data for that application. If you are still losing data, check the output of docker diff $container_id on a container to see which files are changing outside of your volumes and would be lost on a down/up.
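For example, to spot state that is still being written to the container's writable layer instead of a volume (the db service name matches the compose file above; the commands themselves are just a generic check):

# look up the db container's id, then list files added/changed/deleted relative to the image
docker diff $(docker-compose ps -q db)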
I managed to get this to work properly using the following docker-compose.yml file.
version: '2'
volumes:
  postgres-data:
services:
  db:
    image: postgres:9.4.5
    volumes:
      - postgres-data:/var/lib/postgresql/data
  app:
    build: .
    environment:
      RAILS_ENV: $RAILS_ENV
    ports:
      - "3000:3000"
    command: bundle exec rails s -b 0.0.0.0
    depends_on:
      - db
The key was to add the top-level

volumes:
  postgres-data:

which creates the named volume, and then the

volumes:
  - postgres-data:/var/lib/postgresql/data

under the db section, which maps the named volume to its expected location inside the container, /var/lib/postgresql/data.
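To double-check that the data really ends up in the named volume, you can inspect it from the host (the exact volume name is prefixed with your compose project name, so ourlatitude_postgres-data below is an assumption):

docker volume ls                                  # should show something like ourlatitude_postgres-data
docker volume inspect ourlatitude_postgres-data   # shows where the data lives on the host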
I run a simple Docker architecture with 2 containers: a db and a Rails application. Any rake command related to the db is very slow, e.g. rake db:create or rake db:migrate.
I tried to test the speed between the 2 containers with iperf; it shows 26-27 Gbits/sec, so it doesn't look like a network problem. And it works like a charm on any Linux host.
Docker for Mac specs:
MacOS Mojave 10.14.3;
Engine: 18.09.1;
Compose: 1.23.2;
Machine: 0.16.1;
Here is a sample docker-compose.yml:
version: '3.7'
services:
  postgres_10_5:
    image: postgres:10.5
    ports:
      - "5432"
    networks:
      - backend
  web_app:
    build:
      context: .
      dockerfile: Dockerfile-dev
    env_file:
      - ./.env
    ports:
      - "3000:3000"
      - "1080:1080"
    environment:
      - RAILS_ENV=development
    volumes:
      - .:/home/app
    networks:
      - backend
networks:
  backend:
    driver: bridge
I don't expect to wait around 5 minutes for the result of every rake command. I don't know where to dig. Any hints?
I had this exact same issue too. It's down to the very poor performance of Docker on OSX, and how you've set up your volumes/mounts in Docker.
I found this article that has a good overview of how to set up a Dockerfile and docker-compose.yml for Rails and have it actually perform OK.
The main thing to understand:
To make Docker fast enough on MacOS follow these two rules: use :cached to mount source files and use volumes for generated content (assets, bundle, etc.).
You haven't set up your volumes properly for Ruby gems, PostgreSQL data (and possibly other things).
Key statements you need in your Dockerfile:
...
# Configure bundler and PATH
ENV LANG=C.UTF-8 \
GEM_HOME=/bundle \
BUNDLE_JOBS=4 \
BUNDLE_RETRY=3
ENV BUNDLE_PATH $GEM_HOME
ENV BUNDLE_APP_CONFIG=$BUNDLE_PATH \
BUNDLE_BIN=$BUNDLE_PATH/bin
ENV PATH /app/bin:$BUNDLE_BIN:$PATH
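# Note: BUNDLER_VERSION is assumed to be defined elsewhere in the full Dockerfile (e.g. via an ARG or ENV in the article's template)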
# Upgrade RubyGems and install required Bundler version
RUN gem update --system && \
gem install bundler:$BUNDLER_VERSION
# Create a directory for the app code
RUN mkdir -p /app
...
And in your docker-compose.yml:
version: '3.7'
services:
  postgres_10_5:
    image: postgres:10.5
    volumes:
      - postgresql:/var/lib/postgresql/data
    ports:
      - "5432"
  web_app:
    build:
      context: .
      dockerfile: Dockerfile-dev
    env_file:
      - ./.env
    stdin_open: true
    tty: true
    volumes:
      - .:/app:cached
      - rails_cache:/app/tmp/cache
      - bundle:/bundle
    environment:
      - RAILS_ENV=${RAILS_ENV:-development}
    depends_on:
      - postgres_10_5
volumes:
  postgresql:
  bundle:
  rails_cache:
See the article for a more in-depth discussion on how it all works.
I'm currently attempting to use Docker to make our local dev experience involving two services easier, but I'm struggling to use host and container ports in the right way. Here's the situation:
One repo containing a Rails API, running on 127.0.0.1:3000 (let's call this backend)
One repo containing an isomorphic React/Redux frontend app, running on 127.0.0.1:8080 (let's call this frontend)
Both have their own Dockerfile and docker-compose.yml files as they are in separate repos, and both start with docker-compose up fine.
Currently not using Docker at all for CI or deployment, planning to in the future.
The issue I'm having is that in local development the frontend app is looking for the API backend on 127.0.0.1:3000 from within the frontend container, which isn't there - it's only available to the host and the backend container actually running the Rails app.
Is it possible to forward the backend container's port 3000 to the frontend container? Or at the very least the host's port 3000, since I can see the Rails app on localhost on my computer. I've tried 127.0.0.1:3000:3000 within the frontend docker-compose, but I can't do that while the Rails app is running, since the port is already in use and it fails to bind. I'm thinking maybe I've misunderstood the point or am missing something obvious?
Files:
frontend Dockerfile
FROM node:8.7.0
RUN npm install --global --silent webpack yarn
RUN mkdir /app
WORKDIR /app
COPY package.json /app/package.json
COPY yarn.lock /app/yarn.lock
RUN yarn install
COPY . /app
frontend docker-compose.yml
version: '3'
services:
  web:
    build: .
    command: yarn start:dev
    volumes:
      - .:/app
    ports:
      - '8080:8080'
      - '127.0.0.1:3000:3000' # rails backend exposed to localhost within container
backend Dockerfile
FROM ruby:2.4.2
RUN apt-get update -qq && apt-get install -y build-essential libpq-dev nodejs
RUN mkdir /app
WORKDIR /app
COPY Gemfile /app/Gemfile
COPY Gemfile.lock /app/Gemfile.lock
RUN bundle install
COPY . /app
backend docker-compose.yml
version: '3'
volumes:
  postgres-data:
    driver: local
services:
  postgres:
    image: postgres:9.6
    volumes:
      - postgres-data:/var/lib/postgresql/data
  web:
    build: .
    command: bundle exec rails s -p 3000 -b '0.0.0.0'
    volumes:
      - .:/app
    ports:
      - '3000:3000'
    depends_on:
      - postgres
You have to join the containers into one shared network. Do it in your docker-compose.yml files.
Check these docs to learn about networks in Docker.
frontend docker-compose.yml
version: '3'
services:
  gui:
    build: .
    command: yarn start:dev
    volumes:
      - .:/app
    ports:
      - '8080:8080'
      - '127.0.0.1:3000:3000'
    networks:
      - webnet
networks:
  webnet:
backend docker-compose.yml
version: '3'
volumes:
  postgres-data:
    driver: local
services:
  postgres:
    image: postgres:9.6
    volumes:
      - postgres-data:/var/lib/postgresql/data
  back:
    build: .
    command: bundle exec rails s -p 3000 -b '0.0.0.0'
    volumes:
      - .:/app
    ports:
      - '3000:3000'
    depends_on:
      - postgres
    networks:
      - webnet
networks:
  webnet:
Docker has its own DNS resolution, so after you do this you will be able to connect to your backend by setting the address to: http://back:3000
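One thing worth checking: when the two stacks come from separate compose projects, each project prefixes the network name it creates, so you may need to declare webnet as external: true for both sides to actually land on the same network. A quick way to see which containers ended up where (the frontend_ prefix below is an assumption based on the repo name):

docker network ls                       # look for the webnet network(s) that were created
docker network inspect frontend_webnet  # the Containers section lists who is attached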
Managed to solve this using external links in the frontend app to link to the default network of the backend app like so:
version: '3'
services:
  web:
    build: .
    command: yarn start:dev
    environment:
      - API_HOST=http://backend_web_1:3000
    external_links:
      - backend_default
    networks:
      - default
      - backend_default
    ports:
      - '8080:8080'
    volumes:
      - .:/app
networks:
  backend_default: # share with backend app
    external: true
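Note that the backend stack has to be up first so the backend_default network already exists. A quick sanity check (the network name assumes the backend project is literally named backend):

docker network ls | grep backend_default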
I have the following docker-compose.yml file to work locally:
mongodb:
  image: mongo
  command: "--smallfiles --logpath=/dev/null"
web:
  build: .
  command: npm run dev
  volumes:
    - .:/myapp
  ports:
    - "3001:3000"
  links:
    - mongodb
  environment:
    PORT: 3000
    NODE_ENV: 'development'
seed:
  build: ./seed
  links:
    - mongodb
When I deploy to my server, I need to change two things in the docker-compose.yml file:
web:
  command: npm start
  environment:
    NODE_ENV: 'production'
I guess editing the file after each deploy isn't the most comfortable way to do that. Any suggestions on how to cleanly manage environments in the docker-compose.yml file?
The usual way is to use a Compose override file. By default docker-compose reads two files at startup, docker-compose.yml and docker-compose.override.yml. You can put anything you want to override in the latter. So:
# docker-compose.yml
mongodb:
  image: mongo
  command: "--smallfiles --logpath=/dev/null"
web:
  build: .
  command: npm run dev
  volumes:
    - .:/myapp
  ports:
    - "3001:3000"
  links:
    - mongodb
  environment:
    PORT: 3000
    NODE_ENV: 'development'
seed:
  build: ./seed
  links:
    - mongodb
Also:
# docker-compose.override.yml
web:
  command: npm start
  environment:
    NODE_ENV: 'production'
Then you can run docker-compose up and you will get the production settings. If you just want dev settings, run docker-compose -f docker-compose.yml up.
An even better way is to name your compose files in a relevant way. So, docker-compose.yml becomes development.yml and docker-compose.override.yml becomes production.yml, or something similar. Then you can run docker-compose -f development.yml -f production.yml up for production, and just docker-compose -f development.yml up for development. You may also want to look into the extends functionality of docker-compose.
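If you go the extends route, a minimal sketch might look like the following (the common.yml file name and its contents are purely illustrative, not part of the original setup):

# common.yml – shared settings
web:
  build: .
  ports:
    - "3001:3000"

# development.yml – pulls in the shared service and adds dev-only settings
web:
  extends:
    file: common.yml
    service: web
  command: npm run dev
  environment:
    NODE_ENV: 'development'

You would then run docker-compose -f development.yml up as before.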
Just try my way.
This is an example from a Django project of mine that I run on Docker.
First, in docker-compose.yml you define two containers: web, which is meant for production, and devweb, which is meant for development.
If you use a Dockerfile, you can create separate Dockerfiles (Dockerfile for production, Dockerfile-dev for development).
You can then run either one with the docker-compose command.
For example:
docker-compose -p $(PROJECT) up -d web for production
docker-compose -p $(PROJECT) up --no-deps -d devweb for development
Anyway, I use a Makefile to manage all the docker-compose commands, and it makes things very easy: I just need to run make <command-name> to execute a command.
Hope this answer helps you.
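As a rough illustration of that Makefile idea (the target names and the PROJECT value below are assumptions, not the answerer's actual file):

PROJECT := myproject

.PHONY: up-prod up-dev down

# start the production web container
up-prod:
	docker-compose -p $(PROJECT) up -d web

# start only the development web container
up-dev:
	docker-compose -p $(PROJECT) up --no-deps -d devweb

# stop everything
down:
	docker-compose -p $(PROJECT) down

Then make up-dev or make up-prod is all you need to remember.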
I'm using docker as my dev environment for my rails app with the following docker-compose.yml :
app:
  build: .
  ports:
    - "3000:3000"
  links:
    - db
    - mail
  volumes:
    - .:/usr/src/app
    - gemrc:/etc/gemrc
db:
  image: mdillon/postgis
  ports:
    - "5432:5432"
  environment:
    - POSTGRES_USER=user
    - POSTGRES_PASSWORD=password
    - POSTGRES_DB=database
  volumes:
    - ./docker/pgdata:/var/lib/postgresql/data
mail:
  image: djfarrelly/maildev
  ports:
    - "1080:80"
And my Dockerfile :
FROM rails:onbuild
When I need to add a new gem to my Gemfile, I first have to generate my Gemfile.lock:
docker run --rm -v gemrc:/etc/gemrc -v /home/user/project:/usr/src/app -w /usr/src/app ruby bundle install
And then rebuild the Docker image:
docker-compose build
docker-compose up
Because of this I have to run bundle install twice, without being able to add the --without development test flag. To make it quicker I added this to my gemrc file:
gem: --no-document
But is there a way to avoid the double bundle install?
Perhaps you might want to try the following docker-compose workflow for development environment.
Similar to database.yml, our docker-compose.yml is not included in our VCS (git), providing similar benefits for developer-specific config.
Build your image locally before starting your app container and tag it something like foo_app:latest. It makes sense because you're in dev. Just execute docker build -t foo_app . in your app's root directory, assuming your Dockerfile is in that directory.
Define a data volume container for bundle and mount it in your app container. Your docker-compose.yml might look something like:
app:
  image: foo_app
  ports:
    - "3000:3000"
  links:
    - db
    - mail
  volumes:
    - .:/usr/src/app
  volumes_from:
    - bundle
bundle:
  image: foo_app:latest
  volumes:
    - /home/app/bundle
db:
  image: mdillon/postgis
  ports:
    - "5432:5432"
  environment:
    - POSTGRES_USER=user
    - POSTGRES_PASSWORD=password
    - POSTGRES_DB=database
  volumes:
    - ./docker/pgdata:/var/lib/postgresql/data
mail:
  image: djfarrelly/maildev
  ports:
    - "1080:80"
Every time you need to add a new gem, just add it to your Gemfile and execute bundle install inside your app container. For example if your app container's name is foo_app_1:
docker exec foo_app_1 bundle install
The data volume container will always have the latest/edge snapshot of your app's gems.
Tag your releases and build the "official release image" in a central repository accessible for your staging/production/team.
With this approach, every time you start/recreate your app container, all of your gems will be just as they were the last time you updated them. You can also use this approach for other kinds of data you want persisted across container life cycles, adding "components" to manage state in your stateless applications.
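For instance, the same data volume container pattern could be reused for other generated content; a sketch for a node_modules cache might look like this (the paths and service name are assumptions for illustration only):

app:
  # ... as above ...
  volumes_from:
    - bundle
    - node_modules
node_modules:
  image: foo_app:latest
  volumes:
    - /usr/src/app/node_modules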
See https://docs.docker.com/engine/userguide/containers/dockervolumes/ for more information
I have a Ruby on Rails project which I want to split into containers (there are database, redis and web (the Rails project) containers). I want to add a search feature, so I added a Sphinx container to my compose file.
docker-compose.yml
web:
  dockerfile: Dockerfile-rails
  build: .
  command: bundle exec rails s -p 3000 -b '0.0.0.0'
  ports:
    - "3000:3000"
  links:
    - redis
    - db
    **- sphinx**
  environment:
    - REDISTOGO_URL=redis://user#redis:6379/
redis:
  image: redis
**sphinx:
  image: centurylink/sphinx**
db:
  dockerfile: Dockerfile-db
  build: .
  env_file: .env_db
docker-compose build works fine, but when I run docker-compose up I get:
ERROR: Cannot start container 096410dafc86666dcf1ffd5f60ecc858760fb7a2b8f2352750f615957072d961: Cannot link to a non running container: /metartaf_sphinx_1 AS /metartaf_web_1/sphinx_1
How can I fix this?
According to https://hub.docker.com/r/centurylink/sphinx/ the Sphinx container needs a certain amount of configuration files to run properly. See the Daemonized usage (2) section: you need data source files and a configuration.
In my test, it fails to start as is, with this error:
FATAL: no readable config file (looked in /usr/local/etc/sphinx.conf, ./sphinx.conf)
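A minimal way to provide that configuration is to mount one into the container; something like the snippet below (the local ./sphinx/sphinx.conf path is an assumption, while the in-container path comes from the error message above):

sphinx:
  image: centurylink/sphinx
  volumes:
    - ./sphinx/sphinx.conf:/usr/local/etc/sphinx.conf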
Your docker-compose.yml shouldn't have those * characters in it.
If you want the latest Sphinx version you can do this:
web:
  dockerfile: Dockerfile-rails
  build: .
  command: bundle exec rails s -p 3000 -b '0.0.0.0'
  ports:
    - "3000:3000"
  links:
    - redis
    - db
    - sphinx
  environment:
    - REDISTOGO_URL=redis://user#redis:6379/
redis:
  image: redis
sphinx:
  image: centurylink/sphinx:latest
db:
  dockerfile: Dockerfile-db
  build: .
  env_file: .env_db
If you want a specific version, you write it this way: centurylink/sphinx:2.1.8
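For example, the sphinx service above would then read (only the image tag changes):

sphinx:
  image: centurylink/sphinx:2.1.8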