I'm using Docker as my dev environment for my Rails app, with the following docker-compose.yml:
app:
build: .
ports:
- "3000:3000"
links:
- db
- mail
volumes:
- .:/usr/src/app
- gemrc:/etc/gemrc
db:
image: mdillon/postgis
ports:
- "5432:5432"
environment:
- POSTGRES_USER=user
- POSTGRES_PASSWORD=password
- POSTGRES_DB=database
volumes:
- ./docker/pgdata:/var/lib/postgresql/data
mail:
image: djfarrelly/maildev
ports:
- "1080:80"
And my Dockerfile:
FROM rails:onbuild
When I need to add a new gem to my Gemfile, I first have to generate my Gemfile.lock:
docker run --rm -v gemrc:/etc/gemrc -v /home/user/project:/usr/src/app -w /usr/src/app ruby bundle install
And then rebuild the Docker image:
docker-compose build
docker-compose up
Because of this I have to run bundle install twice, without being able to add the --without development test flag. To make it quicker I added this to my gemrc file:
gem: --no-document
But is there a way to avoid the double bundle install?
Perhaps you might want to try the following docker-compose workflow for your development environment.
Similar to database.yml, our docker-compose.yml is not included in our VCS (git), providing similar benefits for developer-specific config.
Build your image locally before starting your app container and tag it something like foo_app:latest. This makes sense because you're in dev. Just run docker build from your app's root directory, assuming your Dockerfile is in that directory.
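For example (foo_app:latest is just the tag convention suggested above):
docker build -t foo_app:latest .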
Define a data volume container for bundle and mount it in your app container. Your docker-compose.yml might look something like:
app:
image: foo_app
ports:
- "3000:3000"
links:
- db
- mail
volumes:
- .:/usr/src/app
volumes_from:
- bundle
bundle:
image: foo_app:latest
volumes:
- /home/app/bundle
db:
image: mdillon/postgis
ports:
- "5432:5432"
environment:
- POSTGRES_USER=user
- POSTGRES_PASSWORD=password
- POSTGRES_DB=database
volumes:
- ./docker/pgdata:/var/lib/postgresql/data
mail:
image: djfarrelly/maildev
ports:
- "1080:80"
Every time you need to add a new gem, just add it to your Gemfile and run bundle install inside your app container. For example, if your app container's name is foo_app_1:
docker exec foo_app_1 bundle install
The data volume container will always have the latest/edge snapshot of your app's gems.
Tag your releases and build the "official release image" in a central repository accessible for your staging/production/team.
With this approach, every time you start/recreate your app container, all of your gems will be just as they were the last time you updated them. You can also use this approach for other kinds of data you want persisted across container life cycles, adding "components" to manage state in your stateless applications.
See https://docs.docker.com/engine/userguide/containers/dockervolumes/ for more information.
I have docker-compose.yml on my local machine like below:
version: "3.3"
services:
api:
build: ./api
volumes:
- ./api:/api
ports:
- 3000:3000
links:
- mysql
depends_on:
- mysql
app:
build: ./app
volumes:
- ./app:/app
ports:
- 80:80
mysql:
image: mysql:8.0.27
volumes:
- ./mysql:/var/lib/mysql
tty: true
restart: always
environment:
MYSQL_DATABASE: db
MYSQL_ROOT_PASSWORD: qwerty
MYSQL_USER: db
MYSQL_PASSWORD: qwerty
ports:
- '3306:3306'
The api service is a NestJS app; app and mysql are Angular and MySQL respectively. I need to work with this setup locally.
How can I make it so that any of my changes are applied without rebuilding the containers every time?
You don't have to build an image with your sources in it for a development environment. For NestJS, and since you're using Docker (I point this out deliberately because other container runtimes exist), you can simply run a Node.js image from the main Docker registry: https://hub.docker.com/_/node.
You could run it with:
docker run -d -v "$(pwd)/app:/app" node:12-alpine node /app/index.js
N.B.: I chose 12-alpine for the example, and I assume the file that starts your app is index.js; replace it with yours. Note that docker run needs an absolute host path for the bind mount, hence $(pwd).
You must install the Node dependencies yourself, and they must be in the ./app directory.
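If you don't have Node installed on the host, a sketch of installing them with the same image (npm install is standard; adjust flags to taste):
docker run --rm -v "$(pwd)/app:/app" -w /app node:12-alpine npm install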
For docker-compose, it could look like this:
version: "3.3"
services:
app:
image: node:12-alpine
command: node /app/index.js
volumes:
- ./app:/app
ports:
- "80:80"
The same approach works for your API project.
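For example, a sketch of the api service entry under services: (assuming the standard NestJS start:dev script exists in ./api/package.json):
api:
  image: node:12-alpine
  working_dir: /api
  command: npm run start:dev
  volumes:
    - ./api:/api
  ports:
    - "3000:3000"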
For a production image, it is still recommended to build the image with the sources in it.
Say you're working on your front-end application (app). It needs to make calls out to the other components, especially api. So you can start the things it depends on, but not the application itself:
docker-compose up -d api
Update your application configuration for this different environment; if you would have proxied to http://api:3000 before, for example, you need to change this to http://localhost:3000 to connect to the container's published ports:.
Now you can develop your application totally normally, without doing anything Docker-specific.
# outside Docker, on your normal development workstation
yarn run dev
$EDITOR src/components/Foo.tsx
You might find it convenient to use environment variables for these settings that will, well, differ per environment. If you're developing the back-end code but want to attach a live UI to it, you'll either need to rebuild the container or update the front-end's back-end URL to point at the host system.
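A minimal sketch (the variable name API_URL is an assumption; read it wherever your build or dev server picks up configuration):
# outside Docker, point the UI at the published back-end port
API_URL=http://localhost:3000 yarn run dev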
This approach also means you do not need to bind-mount your application's code into the container, and I'd recommend removing those volumes: blocks.
I'm trying to get my head around Docker. I got it working with my Rails 6 application; it builds and runs successfully. Now I want to push the application to my Docker Hub repository.
I'm not quite sure how to do this, because I have 3 containers, but in every tutorial I read people just push one.
That's my docker-compose.yml:
version: '3'
services:
db:
image: postgres
volumes:
- postgres:/var/lib/postgresql/data
env_file:
- .env
ports:
- "5432"
web:
build: .
command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails s -p 3000 -b '0.0.0.0'"
volumes:
- .:/webmenue
- bundler_gems:/bundle
- ./docker/database.yml:/webmenue/config/database.yml
- cache:/cache
ports:
- "3000:3000"
env_file:
- .env
environment:
- RAILS_ENV=development
- WEBPACKER_DEV_SERVER_HOST=webpacker
- SPROCKETS_CACHE=/cache
depends_on:
- db
webpacker:
build: .
command: ./bin/webpack-dev-server
volumes:
- .:/webmenue
ports:
- '3035:3035'
environment:
- NODE_ENV=development
- RAILS_ENV=development
- WEBPACKER_DEV_SERVER_HOST=0.0.0.0
volumes:
postgres:
bundler_gems:
cache:
I read about the --link flag, but this seems to be deprecated.
So: do I need to push all containers, or is there a way to get them into one container?
The short answer is as simple as this: if you have multiple images, you need to push multiple times ;)
Now the helpful answer: you have 2 images, postgres and the one built from your current working directory. postgres is already an image you download from Docker Hub, so there is no need to push it.
As for your other 2 apps, they currently share one Dockerfile, so only one push is needed.
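A sketch of that single build-and-push, using the example tag from below (substitute your own Docker Hub username):
docker build -t max-kirsch/my-cool-app:1.0.0 .
docker push max-kirsch/my-cool-app:1.0.0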
In your compose file they both use build: ., therefore they share the same Docker image. Once you have pushed it as above, you would change the docker-compose file to look like this:
version: '3'
services:
db:
image: postgres
volumes:
- postgres:/var/lib/postgresql/data
env_file:
- .env
ports:
- "5432"
web:
image: max-kirsch/my-cool-app:1.0.0
command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails s -p 3000 -b '0.0.0.0'"
volumes:
- .:/webmenue
- bundler_gems:/bundle
- ./docker/database.yml:/webmenue/config/database.yml
- cache:/cache
ports:
- "3000:3000"
env_file:
- .env
environment:
- RAILS_ENV=development
- WEBPACKER_DEV_SERVER_HOST=webpacker
- SPROCKETS_CACHE=/cache
depends_on:
- db
webpacker:
image: max-kirsch/my-cool-app:1.0.0
command: ./bin/webpack-dev-server
volumes:
- .:/webmenue
ports:
- '3035:3035'
environment:
- NODE_ENV=development
- RAILS_ENV=development
- WEBPACKER_DEV_SERVER_HOST=0.0.0.0
volumes:
postgres:
bundler_gems:
cache:
I think you are confused because you have 2 services in your compose file that share the same Dockerfile, and it may be that this confusion is justified. In my personal opinion, you should give each of them its own Dockerfile in which only the components it needs are installed. It is hard to be sure without seeing the Dockerfile, but as an example, it does not look like your webpacker service needs Ruby installed; it could start from a simple Node image as the base instead of installing Node in your Ruby image.
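A rough sketch of what a separate Dockerfile for the webpacker service might look like, assuming the dev server can be launched through the webpack-dev-server package with Webpacker's generated config rather than the Rails binstub (verify this against your setup):
# hypothetical Dockerfile-webpacker
FROM node:14-alpine
WORKDIR /webmenue
# install only the JavaScript dependencies
COPY package.json yarn.lock ./
RUN yarn install
COPY . .
# Webpacker generates config/webpack/development.js in Rails projects
CMD ["yarn", "webpack-dev-server", "--config", "config/webpack/development.js"]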
I run a simple Docker setup with 2 containers: a db and a Rails application. Any rake task related to the db is very slow, e.g. rake db:create or rake db:migrate.
I tried to test the speed between the 2 containers with iperf; it shows 26-27 Gbits/sec, so it doesn't look like a network problem. And the same setup works like a charm on any Linux host.
Docker for Mac specs:
macOS Mojave 10.14.3
Engine: 18.09.1
Compose: 1.23.2
Machine: 0.16.1
Here is a sample docker-compose.yml:
version: '3.7'
services:
postgres_10_5:
image: postgres:10.5
ports:
- "5432"
networks:
- backend
web_app:
build:
context: .
dockerfile: Dockerfile-dev
env_file:
- ./.env
ports:
- "3000:3000"
- "1080:1080"
environment:
- RAILS_ENV=development
volumes:
- .:/home/app
networks:
- backend
networks:
backend:
driver: bridge
I don't expect to wait around 5 minutes for any rake command to finish. I don't know where to dig. Any hints?
I had this exact same issue too. It's down to the very poor performance of Docker on macOS and how you've set up your volumes/mounts in Docker.
I found this article that has a good overview of how to setup a Dockerfile and docker-compose.yml for Rails, and have it actually perform OK.
The main thing to understand:
To make Docker fast enough on MacOS follow these two rules: use :cached to mount source files and use volumes for generated content (assets, bundle, etc.).
You haven't set up your volumes properly for Ruby gems, PostgreSQL data (and possibly other things).
Key statements you need in your Dockerfile:
...
# Configure bundler and PATH
ENV LANG=C.UTF-8 \
GEM_HOME=/bundle \
BUNDLE_JOBS=4 \
BUNDLE_RETRY=3
ENV BUNDLE_PATH $GEM_HOME
ENV BUNDLE_APP_CONFIG=$BUNDLE_PATH \
BUNDLE_BIN=$BUNDLE_PATH/bin
ENV PATH /app/bin:$BUNDLE_BIN:$PATH
# Upgrade RubyGems and install the required Bundler version
# (assumes BUNDLER_VERSION was defined earlier, e.g. via ARG or ENV)
RUN gem update --system && \
gem install bundler:$BUNDLER_VERSION
# Create a directory for the app code
RUN mkdir -p /app
...
And in your docker-compose.yml:
version: '3.7'
services:
postgres_10_5:
image: postgres:10.5
volumes:
- postgres:/var/lib/postgresql/data
ports:
- "5432"
web_app:
build:
context: .
dockerfile: Dockerfile-dev
env_file:
- ./.env
stdin_open: true
tty: true
volumes:
- .:/app:cached
- rails_cache:/app/tmp/cache
- bundle:/bundle
environment:
- RAILS_ENV=${RAILS_ENV:-development}
depends_on:
- postgres_10_5
volumes:
postgres:
bundle:
rails_cache:
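With this layout the gems live in the bundle volume, so after editing your Gemfile you can install them without rebuilding the image (service name as in the compose file above):
docker-compose run --rm web_app bundle install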
See the article for a more in-depth discussion on how it all works.
I know I am missing something very basic here. I have seen some of the older questions on persisting data with Docker, but I think I am following the most recent documentation found here.
I have a Rails app that I am trying to run in Docker. It runs fine, but every time I start it up I get ActiveRecord::NoDatabaseError. After I create the database and migrate it, the app runs fine, until I shut it down and restart it.
Here is my Dockerfile:
FROM ruby:2.3.0
RUN apt-get update -qq && apt-get install -y build-essential libpq-dev nodejs
ENV RAILS_ROOT /ourlatitude
RUN mkdir -p $RAILS_ROOT/tmp/pids
WORKDIR $RAILS_ROOT
COPY Gemfile Gemfile
COPY Gemfile.lock Gemfile.lock
RUN gem install bundler
RUN bundle install
COPY . .
And here is my docker-compose.yml file:
version: '2'
services:
db:
image: postgres:9.4.5
app:
build: .
environment:
RAILS_ENV: $RAILS_ENV
ports:
- "3000:3000"
command: bundle exec rails s -b 0.0.0.0
volumes:
- .:/ourlatitude/database
depends_on:
- db
The basic flow I am following is this:
export RAILS_ENV=development
docker-compose build
docker-compose up
docker-compose run app rake db:create
docker-compose run app rake db:migrate
At this point the app will be running fine,
but then I do this:
docker-compose down
docker-compose up
and then I am back to the ActiveRecord::NoDatabaseError.
So, as I said, I think I am missing something very basic.
It doesn't look like you put your postgres data on a volume; you may also be missing other persistent data sources in your app container, and it appears you missed some indentation on your app container definition.
version: '2'
services:
db:
image: postgres:9.4.5
volumes:
- postgres-data:/var/lib/postgresql/data
app:
build: .
environment:
RAILS_ENV: $RAILS_ENV
ports:
- "3000:3000"
command: bundle exec rails s -b 0.0.0.0
volumes:
- .:/ourlatitude/database
depends_on:
- db
volumes:
postgres-data:
driver: local
In the example above, the postgres data is stored in a named volume. See the advice on Docker Hub for more details on persisting data for that application. If you are still losing data, check the output of docker diff $container_id on a container to see what files are changing outside of your volumes that would be lost on a down/up.
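For example (the container ID is hypothetical; docker diff prefixes each path with A, C, or D for added, changed, or deleted):
docker ps                # look up the container's ID
docker diff 1a2b3c4d     # show files changed since the container started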
I managed to get this to work properly using the following docker-compose.yml file.
version: '2'
volumes:
postgres-data:
services:
db:
image: postgres:9.4.5
volumes:
- postgres-data:/var/lib/postgresql/data
app:
build: .
environment:
RAILS_ENV: $RAILS_ENV
ports:
- "3000:3000"
command: bundle exec rails s -b 0.0.0.0
depends_on:
- db
The key was to add the
volumes:
postgres-data:
which creates the named volume and then the
volumes:
- postgres-data:/var/lib/postgresql/data
under the db section, which maps the named volume to the location the container expects: /var/lib/postgresql/data.
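To confirm the fix, you can watch the named volume survive a restart (Compose prefixes the volume name with the project name):
docker-compose down      # removes the containers but keeps named volumes
docker volume ls         # <project>_postgres-data is still listed
docker-compose up -d     # the database comes back with its data intact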
I have a Ruby on Rails project that I want to split into containers (there are database, redis, and web (which includes the Rails project) containers). I want to add a search feature, so I added a sphinx container to my compose file.
docker-compose.yml
web:
dockerfile: Dockerfile-rails
build: .
command: bundle exec rails s -p 3000 -b '0.0.0.0'
ports:
- "3000:3000"
links:
- redis
- db
**- sphinx**
environment:
- REDISTOGO_URL=redis://user@redis:6379/
redis:
image: redis
**sphinx:
image: centurylink/sphinx**
db:
dockerfile: Dockerfile-db
build: .
env_file: .env_db
docker-compose build works fine, but when I run docker-compose up I get:
ERROR: Cannot start container 096410dafc86666dcf1ffd5f60ecc858760fb7a2b8f2352750f615957072d961: Cannot link to a non running container: /metartaf_sphinx_1 AS /metartaf_web_1/sphinx_1
How can I fix this?
According to https://hub.docker.com/r/centurylink/sphinx/, the Sphinx container needs some configuration files to run properly. See the Daemonized usage (2) section: you need data source files and a configuration.
In my test, it fails to start as is with the error:
FATAL: no readable config file (looked in /usr/local/etc/sphinx.conf, ./sphinx.conf)
Your docker-compose.yml shouldn't have these * in it.
If you want the latest sphinx version, you can do this:
web:
dockerfile: Dockerfile-rails
build: .
command: bundle exec rails s -p 3000 -b '0.0.0.0'
ports:
- "3000:3000"
links:
- redis
- db
- sphinx
environment:
- REDISTOGO_URL=redis://user@redis:6379/
redis:
image: redis
sphinx:
image: centurylink/sphinx:latest
db:
dockerfile: Dockerfile-db
build: .
env_file: .env_db
If you want a specific version, write it like this: centurylink/sphinx:2.1.8
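And if you need to supply the configuration file the image complains about, a minimal sketch of mounting one into the sphinx service (the host path ./sphinx/sphinx.conf is hypothetical; the container path comes from the error above):
sphinx:
  image: centurylink/sphinx:latest
  volumes:
    - ./sphinx/sphinx.conf:/usr/local/etc/sphinx.conf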