How to share a Docker volume between two Docker containers?

I have the following problem: I have two Docker containers, one for my app and one for NGINX. Now I want to share images uploaded to my app with the NGINX container. I tried to do that using a volume, but when I restart my app container, the images are lost. What can I do to keep the images, even after I restart or recreate the container?
My configuration:
docker-compose.yml
version: '3'
services:
  # the application
  app:
    build:
      context: .
      dockerfile: ./docker/app/Dockerfile
    environment:
      - DB_USERNAME=postgres
      - DB_PASSWORD=postgres
      - DB_PORT=5432
    volumes:
      - .:/app
      - gallery:/app/public/gallery
    ports:
      - 3000:3000
    depends_on:
      - db
  # the database
  db:
    image: postgres:11.5
    volumes:
      - postgres_data:/var/lib/postgresql/data
  # the nginx server
  web:
    build:
      context: .
      dockerfile: ./docker/web/Dockerfile
    volumes:
      - gallery:/app/public/gallery
    depends_on:
      - app
    ports:
      - 80:80
networks:
  default:
    external:
      name: app-network
volumes:
  gallery:
  postgres_data:
app/Dockerfile:
FROM ruby:2.7.3
RUN apt-get update -qq
RUN apt-get install -y make autoconf libtool make gcc perl gettext gperf && git clone https://github.com/FreeTDS/freetds.git && cd freetds && sh ./autogen.sh && make && make install
# for imagemagick
RUN apt-get install -y imagemagick
# for postgres
RUN apt-get install -y libpq-dev
# for nokogiri
RUN apt-get install -y libxml2-dev libxslt1-dev
# for a JS runtime
RUN apt-get install -y nodejs
# Setting an Environment-Variable for the Rails App
ENV RAILS_ROOT /var/www/app
RUN mkdir -p $RAILS_ROOT
# Setting the working directory
WORKDIR $RAILS_ROOT
# Setting up the Environment
ENV RAILS_ENV='production'
ENV RACK_ENV='production'
# Adding the Gems
COPY Gemfile Gemfile
COPY Gemfile.lock Gemfile.lock
RUN bundle install --jobs 20 --retry 5 --without development test
# Adding all Project files
COPY . .
RUN bundle exec rake assets:clobber
RUN bundle exec rake assets:precompile
EXPOSE 3000
CMD ["bundle", "exec", "puma", "-p", "3000"]
web/Dockerfile:
# Base Image
FROM nginx
# Dependencies
RUN apt-get update -qq && apt-get -y install apache2-utils
# Establish where Nginx should look for files
ENV RAILS_ROOT /var/www/app
# Working Directory
WORKDIR $RAILS_ROOT
# Creating the Log-Directory
RUN mkdir log
# Copy static assets
COPY public public/
# Copy the NGINX Config-Template
COPY docker/web/nginx.conf /tmp/docker.nginx
# substitute variable references in the Nginx config template for real values from the environment
# put the final config in its place
RUN envsubst '$RAILS_ROOT' < /tmp/docker.nginx > /etc/nginx/conf.d/default.conf
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]

Rather than a volume, you can mount the same directory on the host into multiple Docker containers simultaneously. As long as the containers are not writing to the same file at the same time, which is not the case in your described use case, you shouldn't have a problem.
For example:
docker run -d --name Web1 -v /home/ubuntu/images:/var/www/images httpd
docker run -d --name Other1 -v /home/ubuntu/images:/etc/app/images my-docker-image:latest
If you would rather use a Docker volume, this article will give you everything you need to know.
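Applied to the docker-compose setup in the question, the same bind-mount approach would look roughly like this (a sketch; the host path ./public/gallery is an assumption, adjust it to wherever your app writes uploads):

```yaml
services:
  app:
    build:
      context: .
      dockerfile: ./docker/app/Dockerfile
    volumes:
      # bind mount: host directory shared into the app container
      - ./public/gallery:/app/public/gallery
  web:
    build:
      context: .
      dockerfile: ./docker/web/Dockerfile
    volumes:
      # same host directory mounted into the nginx container
      - ./public/gallery:/app/public/gallery
```

Because the files now live in a directory on the host, they survive restarting or recreating either container.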

Related

After I run docker compose up, my Mac returns an error stating that it can't find mix phx.server. How do I show Docker where my mix.exs file is?

When I run docker compose up, I receive an error:
** (Mix) The task "phx.server" could not be found
Note no mix.exs was found in the current directory
I believe it's the very last step I need to run the project. This is a Phoenix/Elixir Docker project. mix.exs is a top-level file in my project, at the same level as my Dockerfile and docker-compose file.
Dockerfile
FROM elixir:1.13.1
# Build Args
ARG PHOENIX_VERSION=1.6.6
ARG NODEJS_VERSION=16.x
# Apt
RUN apt-get update && apt-get upgrade -y
RUN apt-get install -y apt-utils
RUN apt-get install -y build-essential
RUN apt-get install -y inotify-tools
# Nodejs
RUN curl -sL https://deb.nodesource.com/setup_${NODEJS_VERSION} | bash
RUN apt-get install -y nodejs
# Phoenix
RUN mix local.hex --force
RUN mix archive.install --force hex phx_new ${PHOENIX_VERSION}
RUN mix local.rebar --force
# App Directory
ENV APP_HOME /app
RUN mkdir -p $APP_HOME
WORKDIR $APP_HOME
COPY . .
# App Port
EXPOSE 4000
# Default Command
CMD ["mix", "phx.server"]
docker-compose.yml
version: "3"
services:
  book-search:
    build: .
    volumes:
      - ./src:/app
    ports:
      - "4000:4000"
    depends_on:
      - db
  db:
    image: postgres:9.6
    environment:
      POSTGRES_DB: "db"
      POSTGRES_HOST_AUTH_METHOD: "trust"
      POSTGRES_USER: tmclean
      POSTGRES_PASSWORD: tmclean
      PGDATA: /var/lib/postgresql/data/pgdata
    restart: always
    volumes:
      - ./pgdata:/var/lib/postgresql/data
Let me know what other questions I can answer
The problem is in your docker-compose.yml file:
volumes:
  - ./src:/app
You are overwriting /app in the container with a probably non-existent ./src directory. Change it to:
volumes:
  - .:/app
and it should work. However, if you do that, there is no point in copying the files in your Dockerfile, so you can also remove the
COPY . .
Alternatively, keep the COPY if you want the source files baked into the image, and remove the volumes section from the book-search service in docker-compose.yml.
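With the mount corrected, the book-search service stanza would read roughly like this (a sketch of the first option; the rest of the compose file is unchanged):

```yaml
services:
  book-search:
    build: .
    volumes:
      # mount the project root, where mix.exs lives, over /app
      - .:/app
    ports:
      - "4000:4000"
    depends_on:
      - db
```

Since the container's WORKDIR is /app, mix phx.server now finds mix.exs in the current directory.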

Docker: bash: bundle: command not found

I am trying to Dockerize my Rails 6 app but seem to be falling at the last hurdle. When running docker-compose up, everything runs fine until I get to "Attaching to rdd-ruby_db_1, rdd-ruby_web_1" in the console, and then I get the error bash: bundle: command not found.
I am aware of the other answers on Stack Overflow for the same issue, but I tried them all before posting this.
My Dockerfile:
FROM ruby:2.7
RUN apt-get update -qq && apt-get install -y nodejs postgresql-client
RUN mkdir /myapp
WORKDIR /myapp
COPY Gemfile /myapp/Gemfile
COPY Gemfile.lock /myapp/Gemfile.lock
RUN cd /usr/bin/
RUN bundle install
FROM node:6.7.0
RUN npm install -g yarn
COPY . /myapp
# Add a script to be executed every time the container starts.
COPY entrypoint.sh /usr/bin/
RUN chmod +x /usr/bin/entrypoint.sh
ENTRYPOINT ["/usr/bin/entrypoint.sh"]
EXPOSE 3000
# Start the main process.
CMD ["rails", "server", "-b", "0.0.0.0"]
My docker-compose.yml:
version: '3'
services:
  db:
    image: postgres
    volumes:
      - ./tmp/db:/var/lib/postgresql/data
    environment:
      POSTGRES_PASSWORD: xxx
  web:
    build: .
    command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails s -p 3000 -b '0.0.0.0'"
    volumes:
      - .:/myapp
    ports:
      - "3000:3000"
    depends_on:
      - db
I originally followed the guide in the Docker documentation at https://docs.docker.com/compose/rails/, thinking this would work.
Thanks.

Can't connect to Rails docker container on localhost

I'm having trouble accessing my containerized Rails app from my local machine. I'm following this quickstart guide as a template and made some tweaks to the paths for my Gemfile and Gemfile.lock. The quickstart guide moves on to docker-compose, but I want to try accessing the app without it first, to get familiar with these processes before moving on.
This is my dockerfile:
FROM ruby:2.5
RUN apt-get update -qq && apt-get install -y nodejs postgresql-client
RUN mkdir /myapp
WORKDIR /myapp
COPY Gemfile ./Gemfile
COPY Gemfile.lock ./Gemfile.lock
RUN gem install bundler -v 2.0.1
RUN bundle install
COPY . /myapp
# Add a script to be executed every time the container starts.
COPY entrypoint.sh /usr/bin/
RUN chmod +x /usr/bin/entrypoint.sh
ENTRYPOINT ["entrypoint.sh"]
EXPOSE 3000:3000
# Start the main process.
CMD ["rails", "server", "-b", "0.0.0.0"]
and this is the entrypoint file:
#!/bin/bash
set -e
# Remove a potentially pre-existing server.pid for Rails.
rm -f /myapp/tmp/pids/server.pid
# Then exec the container's main process (what's set as CMD in the Dockerfile).
exec "$@"
I am able to successfully build and run the image, but when I try to access 0.0.0.0:3000 I get a can't-connect error.
I also attached a screenshot of my app directory structure; the Dockerfile and entrypoint are at the root.
One thing that seems strange: when I try to view the container's logs I don't get any output, but when I shut the container down I see the startup logs. Not sure why that is.
I am running Docker Desktop 2.1.0.3. Any thoughts/help are very appreciated.
Use just EXPOSE 3000 in the Dockerfile.
Then run a container named ror from your new Docker image <image>, mapping the port to localhost:
docker run -d --name ror -p 3000:3000 <image>
Now you should be able to access localhost:3000.
Here's an example of mine that works:
The usual Dockerfile, nothing special here.
Then, in docker-compose.yml, add an environment variable (or place it in a .env file) with the DATABASE_URL. The important bit is using host.docker.internal instead of localhost.
Then, in your database.yml, specify the url with the ENV key.
Then start the container by running docker-compose up.
# Dockerfile
FROM ruby:3.0.5-alpine
RUN apk add --update --no-cache \
    bash \
    build-base \
    tzdata \
    postgresql-dev \
    yarn \
    git \
    curl \
    wget \
    gcompat
WORKDIR /app
COPY Gemfile Gemfile.lock ./
RUN curl -sL https://deb.nodesource.com/setup_14.x | bash
RUN gem install bundler:2.4.3
RUN bundle lock --add-platform x86_64-linux
RUN bundle install
COPY package.json yarn.lock ./
RUN yarn install
COPY . .
EXPOSE 3000
CMD ["rails", "server", "-b", "0.0.0.0", "--pid=/tmp/server.pid"]
# docker-compose.yml
version: "3.9"
services:
  app:
    image: your_app_name
    volumes:
      - /app
    env_file:
      - .env
    environment:
      - DATABASE_URL=postgresql://postgres@host.docker.internal:5432/<your_db_name>
    ports:
      - "3000:3000"
  webpack_dev_server:
    build: .
    command: bin/webpack-dev-server
    ports:
      - "3035:3035"
    volumes:
      - /app
    env_file:
      - .env
    environment:
      - WEBPACKER_DEV_SERVER_HOST=0.0.0.0
  redis:
    image: redis
# database.yml
development:
  <<: *default
  database: giglifepro_development
  url: <%= ENV.fetch('DATABASE_URL') %>

Ruby on Rails on docker-compose

I'm having problems with a project using docker-compose. I always use the same Dockerfile and docker-compose.yml in all projects, just changing the version of Ruby. However, in just ONE of these projects, changes I make to the code are no longer picked up; every change used to be reflected immediately, but now it suddenly stopped, and only in this one project. I have already rebuilt the image, removed all the containers and all the images, downloaded the project again... and nothing! Changes only show up if I stop and start the container again!
docker-compose.yml:
version: '2'
services:
  postgres:
    image: 'postgres:9.5'
  web:
    build: .
    command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails s -p 3000 -b '0.0.0.0'"
    volumes:
      - .:/myapp
    ports:
      - "3000:3000"
    depends_on:
      - postgres
Dockerfile
FROM ruby:2.3.1
RUN apt-get update -qq && apt-get install -y build-essential libpq-dev nodejs
RUN mkdir /myapp
WORKDIR /myapp
ADD Gemfile /myapp/Gemfile
ADD Gemfile.lock /myapp/Gemfile.lock
RUN bundle install
ADD . /myapp
Resolved: in config/environments/development.rb it has to be config.cache_classes = false
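For reference, that setting lives inside the configure block of config/environments/development.rb (a sketch; the rest of the block is omitted):

```ruby
# config/environments/development.rb
Rails.application.configure do
  # Reload application classes on every request so code changes are
  # picked up without restarting the server.
  config.cache_classes = false
end
```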

ERROR: Service 'redis' failed to build. When building redis image by docker-compose

I'm dockerizing an application based on Node.js, Redis and MySQL. I already installed the Redis server and it's running fine, but I'm unable to dockerize all three using docker-compose.yml.
$ docker-compose up --build
Building redis
Step 1/11 : FROM node:alpine
---> e079048502ec
Step 2/11 : FROM redis:alpine
---> da2b86c1900b
Step 3/11 : RUN mkdir -p /usr/src/app
---> Using cache
---> 28b2f837b54c
Step 4/11 : WORKDIR /usr/src/app
---> Using cache
---> d1147321eec4
Step 5/11 : RUN apt-get install redis-server
---> Running in 2dccd5689663
/bin/sh: apt-get: not found
ERROR: Service 'redis' failed to build: The command '/bin/sh -c apt-get install redis-server' returned a non-zero code: 127
This is my Dockerfile:
FROM node:alpine
FROM redis:alpine
# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Install app dependencies
## Install Redis ##
RUN apt-get install redis-server
## Install nodejs on ubuntu ##
RUN sudo apt-get update && wget http://nodejs.org/dist/v0.6.9/node-v0.6.9.tar.gz \
&& tar -xvzf node-v0.6.9.tar.gz \
&& cd node-v0.6.9 \
&& ./configure && make && sudo make install \
&& mkdir myapp && cd myapp \
&& npm init \
&& npm install express --save \
&& npm install express \
&& npm install --save path serve-favicon morgan cookie-parser body-parser \
&& npm install --save express jade \
&& npm install --save debug \
COPY package.json /usr/src/app/
COPY redis.conf /usr/local/etc/redis/redis.conf
RUN npm install
# Bundle app source
COPY . /usr/src/app
EXPOSE 3000
CMD [ "redis-server", "/usr/local/etc/redis/redis.conf", "npm", "start" ]
This is my docker-compose.yml:
version: '2'
services:
  db:
    build: ./docker/mysql
    # image: mysql:5.7
    volumes:
      - db_data:/var/lib/mysql
      #- ./mysql:/docker-entrypoint-initdb.d
    # restart: always
    environment:
      MYSQL_ROOT_PASSWORD: root
      # MYSQL_DATABASE: cg_apiserver
      # MYSQL_USER: root
      # MYSQL_PASSWORD: root
  redis:
    build: ./docker/redis
    image: "redis:alpine"
  node:
    build: ./docker/node
    ports:
      - '3000:80'
    restart: always
    volumes:
      - .:/usr/src/app
    depends_on:
      - db
      - redis
    command: npm start
volumes:
  db_data:
It seems that you have tried to merge two Dockerfiles into one.
First, your multiple FROM lines make no sense here. The basic concept is to base an image FROM only one base image. See this.
Second, your docker-compose looks good, but the Dockerfile shows that you are trying to build both applications (redis and the node app) in the same image.
So take the redis stuff out of ./docker/node/Dockerfile:
FROM node:alpine
# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Install app dependencies
## Install nodejs on ubuntu ##
RUN wget http://nodejs.org/dist/v0.6.9/node-v0.6.9.tar.gz \
    && tar -xvzf node-v0.6.9.tar.gz \
    && cd node-v0.6.9 \
    && ./configure && make && sudo make install \
    && mkdir myapp && cd myapp \
    && npm init \
    && npm install express --save \
    && npm install express \
    && npm install --save path serve-favicon morgan cookie-parser body-parser \
    && npm install --save express jade \
    && npm install --save debug
COPY package.json /usr/src/app/
RUN npm install
# Bundle app source
COPY . /usr/src/app
EXPOSE 3000
CMD ["npm", "start"]
Use this ./docker/redis/Dockerfile:
FROM redis:alpine
COPY redis.conf /usr/local/etc/redis/redis.conf
# No need to set a custom CMD
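To illustrate why the multiple FROM lines in the original Dockerfile had no effect: each FROM starts a new build stage, and only the last stage becomes the final image; earlier stages are discarded unless you copy files from them explicitly with COPY --from. A minimal sketch (the stage name and file path are made up for illustration):

```dockerfile
# Stage 1: everything installed or built here is discarded
# from the final image...
FROM node:alpine AS builder
RUN echo "built in stage 1" > /artifact.txt

# Stage 2: this stage is the image you actually get.
FROM redis:alpine
# ...unless you pull files across explicitly:
COPY --from=builder /artifact.txt /artifact.txt
```

This is why the apt-get call in the question failed: it ran in the redis:alpine stage, which has apk rather than apt-get, and the node:alpine stage above it was simply thrown away.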
And I recommend removing the "image:" line from the redis service in docker-compose.yml. It is not necessary:
redis:
  build: ./docker/redis
  image: "redis:alpine" <---- remove this
Edit: also, you don't need apt-get update anymore; I've removed the sudo apt-get update && accordingly.
It is working now after the changes below:
Create a folder docker in the project root.
Inside docker, create a folder redis.
Create a Dockerfile with the contents below:
docker >> redis >> Dockerfile
FROM smebberson/alpine-base:1.0.0
# MAINTAINER Scott Mebberson <scott@scottmebberson.com>
VOLUME ["/data"]
# Expose the ports for redis
EXPOSE 6379
There was no change in the docker-compose.yml file.
Run the commands below and check the output.
Build the containers:
sudo docker-compose up --build -d
Check the running containers:
sudo docker ps
Inspect the network and get the IPs:
sudo docker inspect redis_container_name
sudo docker inspect node_container_name
I've solved this problem (COPY didn't work) easily in my project: just add "context", the path to the directory containing the Dockerfile, in your YML file (version 3). Example:
build:
  context: Starkman.Backend.Storage/Redis
  dockerfile: Dockerfile
"Starkman.Backend.Storage/Redis" is the path to the directory. The temporary build directory that the COPY command reads its sources from will be inside your "context".
This is my Dockerfile:
FROM redis
COPY redis.conf /usr/local/etc/redis/redis.conf
EXPOSE 6379
CMD [ "redis-server", "/usr/local/etc/redis/redis.conf" ]
