I am trying to create a Rails application through Docker.
My docker-compose.yml file is:
mysql:
  image: mysql:5.6.34
  ports:
    - "3006:3006"
  volumes_from:
    - dbdata
  environment:
    - MYSQL_ROOT_PASSWORD=root
    - MYSQL_DATABASE=dev
dbdata:
  image: tianon/true
  volumes:
    - /var/lib/mysql
app:
  build: .
  environment:
    RAILS_ENV: development
  ports:
    - '3000:3000'
  volumes_from:
    - appdata
  links:
    - "mysql"
appdata:
  image: tianon/true
  volumes:
    - ".:/dock_rails_1"
My Dockerfile is:
FROM ruby:2.3.3
RUN apt-get update -qq && apt-get install -y build-essential libpq-dev nodejs mysql-client
RUN mkdir /dock_rails_1
WORKDIR /dock_rails_1
COPY Gemfile /dock_rails_1/Gemfile
COPY Gemfile.lock /dock_rails_1/Gemfile.lock
RUN bundle install
COPY . /dock_rails_1
My Gemfile is:
source 'https://rubygems.org'
gem 'rails', '5.0.0.1'
I have also created an empty Gemfile.lock.
After this, I ran this command in my terminal:
docker-compose run app rails new . --force --database=mysql --skip-bundle
Everything goes right, but I don't have edit access to the files created by Docker.
When I try to edit the database.yml file created by Docker, I get a permission denied error.
Please help me understand why I am unable to edit my files.
Just making a few assumptions here, hope I'm right...
Probably the user you use inside the container is root, so the files are created under root privileges.
Try to create and use a user with the same uid and gid as on your host machine.
You can find out your host's uid/gid by running this on your host:
cat /etc/passwd | grep $USER
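Alternatively, the standard id utility prints just the numeric values:
id -u   # host uid
id -g   # host gid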
Then change the container's user to the host's uid and gid in your Dockerfile:
RUN usermod -u my_users_host_uid container_user
RUN groupmod -g my_users_host_gid container_user
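If the image has no existing user to modify, a minimal Dockerfile sketch could look like the following. The user name app and the build args are hypothetical, not from the question:
FROM ruby:2.3.3
# Hypothetical build args; pass real values at build time, e.g.:
#   docker build --build-arg HOST_UID=$(id -u) --build-arg HOST_GID=$(id -g) .
ARG HOST_UID=1000
ARG HOST_GID=1000
# Create a group and user matching the host ids so files written to the
# bind mount stay editable on the host.
RUN groupadd -g $HOST_GID app && useradd -m -u $HOST_UID -g $HOST_GID app
USER app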
I am dockerizing my Rails app with a mountable engine, but I am constantly getting an error while running the image: The path `/app/include/engine` does not exist.
The image builds successfully, but running docker-compose up throws the path error.
Below I am attaching my Dockerfile and docker-compose.yml.
Dockerfile
FROM ruby:2.4.1
RUN apt-get update -qq && apt-get install -y build-essential libpq-dev nodejs
RUN mkdir /app
WORKDIR /app
ADD Gemfile /app/Gemfile
ADD Gemfile.lock /app/Gemfile.lock
COPY . .
RUN mkdir -p /app/include/engine
RUN git clone git@github.com:engine/engine.git /app/include/engine
RUN ls
RUN ls /app/include/engine
RUN DISABLE_SSL=true gem install puma -v 3.6.0
RUN bundle check || bundle install
CMD ["rails", "server", "-b", "0.0.0.0"]
docker-compose.yml
version: '2'
services:
  db:
    image: mysql:8.0.21
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: root#123
      MYSQL_DATABASE: prod
      MYSQL_USER: root
      MYSQL_PASSWORD: root#123
    ports:
      - "3307:3306"
  app:
    build:
      context: .
      dockerfile: Dockerfile
      args:
        SSH_PRIVATE_KEY: ${SSH_PRIVATE_KEY}
    volumes:
      - ".:/app"
    ports:
      - "3000:3000"
    depends_on:
      - db
    environment:
      key: value
I have also included my engine path in the Gemfile and followed all the steps for mounting an engine:
# engine
gem 'api', path: 'include/engine'
It works fine in the local environment, but it gives me this error in Docker.
Can someone please help me figure out what I am missing?
It's because of this line:
volumes:
  - ".:/app"
This mounts your local directory inside the container at startup, overwriting the data already in the image. That means everything in /app is replaced with data from your local machine, including your engine at /app/include/engine.
To fix this, you need to have the engine cloned in your local folder so it is available when the container starts. Another option is to clone the engine outside /app, for example into /tmp or wherever you like.
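For the second option, a rough sketch (the /opt/engine location is just an illustration, not from the question): clone the engine to a path outside /app in the Dockerfile,
RUN git clone git@github.com:engine/engine.git /opt/engine
and point the Gemfile at that absolute path:
gem 'api', path: '/opt/engine'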
I am trying to create my Rails application in a Docker environment. I have used volumes to mount source directories from the host at a targeted path inside the container. The application is in the development phase and I need to continuously add new gems to it. I install a gem from the bash of my running container; it installs the gem and the required dependencies. But when I removed the running containers (docker-compose down) and then instantiated them again (docker-compose up), my Rails web image shows errors about missing gems. I know re-building the image will add the gems, but IS THERE ANY WAY TO ADD GEMS WITHOUT REBUILDING THE IMAGE?
I followed the docker-compose docs for setting up the Rails app:
https://docs.docker.com/compose/rails/#define-the-project
Dockerfile
FROM ruby:2.7.1-slim-buster
LABEL MAINTAINER "Prayas Arora" "<prayasa@mindfiresolutions.com>"
# Install apt based dependencies required to run Rails as
# well as RubyGems. As the Ruby image itself is based on a
# Debian image, we use apt-get to install those.
RUN apt-get update \
&& apt-get install -qq -y --no-install-recommends \
build-essential \
libpq-dev \
netcat \
postgresql-client \
nodejs \
&& rm -rf /var/lib/apt/lists/*
ENV APP_HOME /var/www/repository/repository_api
# Configure the main working directory. This is the base
# directory used in any further RUN, COPY, and ENTRYPOINT
# commands.
RUN mkdir -p $APP_HOME
WORKDIR $APP_HOME
# Copy the Gemfile as well as the Gemfile.lock and install
# the RubyGems. This is a separate step so the dependencies
# will be cached unless changes to one of those two files
# are made.
COPY ./repository_api/Gemfile $APP_HOME/Gemfile
COPY ./repository_api/Gemfile.lock $APP_HOME/Gemfile.lock
RUN bundle install
# Copy the main application.
COPY ./repository_api $APP_HOME
# Add a script to be executed every time the container starts.
COPY ./repository_docker/development/repository_api/entrypoint.sh /usr/bin/
RUN chmod +x /usr/bin/entrypoint.sh
ENTRYPOINT ["entrypoint.sh"]
# Expose port 3000 to the Docker host, so we can access it
# from the outside.
EXPOSE 3000
# The main command to run when the container starts. Also
# tell the Rails dev server to bind to all interfaces by
# default.
CMD ["rails","server","-b","0.0.0.0"]
docker-compose.yml
container_name: repository_api
build:
  context: ../..
  dockerfile: repository_docker/development/repository_api/Dockerfile
user: $UID
env_file: .env
stdin_open: true
environment:
  DB_NAME: ${POSTGRES_DB}
  DB_PASSWORD: ${POSTGRES_PASSWORD}
  DB_USER: ${POSTGRES_USER}
  DB_HOST: ${POSTGRES_DB}
volumes:
  - ../../repository_api:/var/www/repository/repository_api
networks:
  - proxy
  - internal
depends_on:
  - repository_db
A simple solution is to cache the gems in a Docker volume. You can create a volume in Docker and attach it to the path where gems are bundled. This will maintain a shared state, and you will not need to install the gems in every container you spin up.
container_name: repository_api
build:
  context: ../..
  dockerfile: repository_docker/development/repository_api/Dockerfile
user: $UID
env_file: .env
stdin_open: true
environment:
  DB_NAME: ${POSTGRES_DB}
  DB_PASSWORD: ${POSTGRES_PASSWORD}
  DB_USER: ${POSTGRES_USER}
  DB_HOST: ${POSTGRES_DB}
volumes:
  - ../../repository_api:/var/www/repository/repository_api
  - bundle_cache:/usr/local/bundle
networks:
  - proxy
  - internal
.
.
volumes:
  bundle_cache:
Also, according to bundler.io, the official Docker images for Ruby assume that you will use only one application, with one Gemfile, and that no other gems or Ruby applications will be installed or run in your container. So once you have added all the gems required for your application's development, you can remove this bundle_cache volume and rebuild your image with your final Gemfile.
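With the volume in place, a typical workflow sketch (assuming your service is named app; substitute your actual service name) is to install newly added gems inside the running container:
# Gems land in /usr/local/bundle, i.e. in the bundle_cache volume,
# so they survive docker-compose down / up.
docker-compose exec app bundle install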
I am having trouble with a Rails container in a docker-compose network. I had not touched it in a few months, and when I attempted to start it this week, it failed.
When I attempt to start the container with docker-compose up service, the startup fails with:
service_1 | Could not locate Gemfile or .bundle/ directory
support_portal_service_1 exited with code 10
Both files are present; the host is a Windows 10 machine.
bundle install completes successfully:
Bundle complete! 19 Gemfile dependencies, 83 gems now installed.
Bundled gems are installed into `/usr/local/bundle`
What I have tried:
Added ruby and ruby-all-dev to apt-get install in case missing requirements were the issue
Changed ADD Gemfile /app/Gemfile to COPY Gemfile /app/Gemfile
Tried commenting out COPY Gemfile.lock /app/Gemfile.lock
Ran bundle install on the Windows 10 host
Rebuilt the image without cache
Ensured Docker has access to the drive/directory
Here is my Dockerfile:
FROM ruby:2.5.3
RUN apt-get update -qq && apt-get install -y build-essential libpq-dev nodejs default-libmysqlclient-dev ruby ruby-all-dev
RUN mkdir /app
WORKDIR /app
COPY Gemfile /app/Gemfile
#COPY Gemfile.lock /app/Gemfile.lock
WORKDIR /app
RUN bundle install
ADD . /app
And my docker-compose.yml:
version: '2'
services:
  # Structured database
  sqldb:
    image: mysql:5.7
    volumes:
      - sql:/var/lib/mysql
    env_file:
      - .env
    environment:
      - MYSQL_USER=web
      - MYSQL_ROOT_PASSWORD=${PORTAL_DATABASE_PASSWORD}
      - MYSQL_PASSWORD=${PORTAL_DATABASE_PASSWORD}
    ports:
      - "3306:3306"
  # Application server
  service:
    build: .
    command: bundle exec rails s -p 3000 -b '0.0.0.0'
    volumes:
      - .:/app
    env_file:
      - .env
    expose:
      - "3000"
    depends_on:
      - sqldb
  # Front end proxy
  web:
    image: nginx
    build:
      context: .
      dockerfile: Dockerfile-web
    depends_on:
      - service
    ports:
      - "80:80"
      - "144:144"
# Persistence
volumes:
  sql:
I am trying basic Docker & Rails tutorials on my Windows 10 Home OS with Docker Toolbox.
Client: 17.05.0-ce
Server: 17.06.0-ce
And the hello-world tutorial works!
Now I am trying this YouTube tutorial: https://www.youtube.com/watch?v=KH6pcHb6Wug&lc=z12ocxayznynslzjj04chbtgiwbhuf4z5xk0k.1499518307572479
And everything looks okay until I check the Rails-generated project files.
The terminal shows the files being generated, but when I use the command 'ls -l' it lists only my four manually created files.
What's happening with the Rails-generated files? Where did they go?
Here is docker-compose.yml content:
version: '2'
services:
  db:
    image: postgres
  web:
    build: .
    command: bundle exec rails s -p 3000 -b '0.0.0.0'
    volumes:
      - .:/deep
    ports:
      - "3000:3000"
    depends_on:
      - db
Here is Dockerfile content:
FROM ruby:2.3.3
ENV HOME /home/rails/deep
# Install PGsql dependencies and js engine
RUN apt-get update -qq && apt-get install -y build-essential libpq-dev nodejs
WORKDIR $HOME
# Install gems
ADD Gemfile* $HOME/
RUN bundle install
# Add the app code
ADD . $HOME
Here is my terminal at the end: https://ibb.co/c2eqFF
I found the solution:
https://github.com/laradock/laradock/issues/508
You just need to place a .env file next to your docker-compose.yml file, with the following content: COMPOSE_CONVERT_WINDOWS_PATHS=1
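In other words, the entire .env file is this single line:
COMPOSE_CONVERT_WINDOWS_PATHS=1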
I've created the docker-compose.yml file below to create a container based on the Ruby image and a container based on the MySQL image. When I execute docker-compose up, the MySQL container seems to be created correctly; however, it does not run in the background. How can I configure it to do so using the docker-compose.yml file?
version: '2'
services:
  web:
    build:
      context: .
      dockerfile: .docker/rails.dockerfile
    volumes:
      - .:/var/www
    ports:
      - "3000:3000"
    depends_on:
      - 'mysql'
    networks:
      - ddoc-network
  mysql:
    image: mysql
    environment:
      MYSQL_ROOT_PASSWORD: 'SOMETHING'
    networks:
      - ddoc-network
networks:
  ddoc-network:
    driver: bridge
rails.dockerfile
FROM ruby:2.3.1
MAINTAINER Juliano Nunes
RUN apt-get update -qq && apt-get install -y build-essential mysql-client libmysqlclient-dev nodejs
RUN mkdir /var/www
WORKDIR /var/www
ADD Gemfile /var/www/Gemfile
ADD Gemfile.lock /var/www/Gemfile.lock
RUN bundle install
ADD . /var/www
CMD ["bundle", "exec", "rails", "server", "-b", "0.0.0.0"]
You can always use docker-compose up -d to run your containers in detached mode.
Check docker-compose up --help for more info.
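If you only want the database detached while keeping the Rails server in the foreground, you can also start services selectively (service names taken from the compose file above):
docker-compose up -d mysql    # start just the MySQL service in the background
docker-compose up web         # then run the web service in the foreground
docker-compose logs -f mysql  # tail the MySQL logs when needed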