I have never used Docker before.
My end goal is to run a headless Chrome Watir webdriver in a Ruby on Rails app. Honestly, I'm also new to RoR :)
I followed a guide to dockerize a simple project that uses the 'watir-webdriver' and 'headless' gems:
https://www.packet.net/blog/how-to-run-your-rails-app-on-docker/
My Dockerfile:
FROM ruby:latest
# Mount any shared volumes from host to container # /share
ENV HOME /home/rails/webapp
# Install dependencies
RUN apt-get update -qq && apt-get install -y build-essential libpq-dev
WORKDIR $HOME
# Install gems
ADD Gemfile* $HOME/
RUN bundle install
ADD . $HOME
CMD ["rails", "server", "--binding", "0.0.0.0"]
Steps I took:
Created a simple app with rails new watir-app, with PostgreSQL support
Added the watir-webdriver and headless gems and used them in one controller.
Built the Docker image: docker build -t watir-app . (no errors)
Ran the container: docker run -d -p 3000:3000 watir-app (no errors)
The app is not available at http://localhost:3000, so I tried to connect to the container to investigate:
C:\Users\ttttt\RubymineProjects\watir-test>docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
868458c906c1 watir-app "rails server --bindi" 14 seconds ago Exited (1) 11 seconds ago adoring_volhard
C:\Users\ttttt\RubymineProjects\watir-test>docker exec adoring_volhard echo "1"
Error response from daemon: Container 868458c906c13928040caf4a18d6395f6b020b3eb40a1d693de84c006b9a2617 is not running
C:\Users\ttttt\RubymineProjects\watir-test>
Ruby: 2.2.5
Rails: 5.0.0.1
Docker for Win: 1.12.0
I discovered the docker logs command and traced the problem: I needed to install Node.js in the container.
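For reference, a minimal sketch of the fixed Dockerfile, assuming the fix is just adding the stock Debian nodejs package to the existing apt-get line (Rails' asset pipeline needs a JavaScript runtime, and without one the server exits at boot):

```dockerfile
FROM ruby:latest

ENV HOME /home/rails/webapp

# Install dependencies, including a JavaScript runtime (nodejs) for the
# Rails asset pipeline; without it the server exits right after starting.
RUN apt-get update -qq && apt-get install -y build-essential libpq-dev nodejs

WORKDIR $HOME

# Install gems
ADD Gemfile* $HOME/
RUN bundle install

ADD . $HOME

CMD ["rails", "server", "--binding", "0.0.0.0"]
```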
Related
I'm using Rails in a Docker container, and every once in a while I run into an issue that I have no idea how to solve. When I add a new gem to the Gemfile and then rebuild the Docker image + container, the build fails with the common bundler error Could not find [GEM_NAME] in any of the sources; Run 'bundle install' to install missing gems. This only happens when I build the image in Docker; if I run a regular bundle install on my local machine, the gems install correctly and everything works as expected.
I have a fairly standard Dockerfile & docker-compose file.
Dockerfile:
FROM ruby:2.6.3
ARG PG_MAJOR
ARG BUNDLER_VERSION
ARG UID
ARG MODE
# Add POSTGRESQL to the source list using the right version
RUN curl -sSL https://www.postgresql.org/media/keys/ACCC4CF8.asc | apt-key add - \
&& echo 'deb http://apt.postgresql.org/pub/repos/apt/ stretch-pgdg main' $PG_MAJOR > /etc/apt/sources.list.d/pgdg.list
ENV RAILS_ENV $MODE
RUN apt-get update -qq && apt-get install -y postgresql-client-$PG_MAJOR vim
RUN apt-get -y install sudo
RUN mkdir /usr/src/app
WORKDIR /usr/src/app
COPY Gemfile /usr/src/app/Gemfile
COPY Gemfile.lock /usr/src/app/Gemfile.lock
ENV BUNDLER_VERSION $BUNDLER_VERSION
RUN gem install bundler:$BUNDLER_VERSION
RUN bundle install
COPY . /usr/src/app
# Add a script to be executed every time the container starts.
COPY entrypoint.sh /usr/bin/
RUN chmod +x /usr/bin/entrypoint.sh
ENTRYPOINT ["entrypoint.sh"]
EXPOSE 3000
# Start the main process.
CMD ["rails", "server", "-b", "0.0.0.0"]
docker-compose.yml:
version: '3'
services:
  backend:
    build:
      dockerfile: Dockerfile
      args:
        UID: ${UID:-1001}
        BUNDLER_VERSION: 2.0.2
        PG_MAJOR: 10
        mode: development
    tty: true
    stdin_open: true
    volumes:
      - ./[REDACTED]:/usr/src/app
      - gem_data_api:/usr/local/bundle:cached
    ports:
      - "3000:3000"
    user: root
I've tried docker system prune -a, docker builder prune -a, reinstalling Docker, multiple rebuilds in a row, restarting my machine and so on, to no avail. The weird part is that it doesn't happen with every new Gem that I decide to add, only for some specific gems. For example I got this issue once again when trying to add gem 'sendgrid-ruby' to my Gemfile. This is the repo for the gem for reference, and the specific error I get with sendgrid-ruby is Could not find ruby_http_client-3.5.1 in any of the sources. I tried specifying ruby_http_client in my Gemfile, and I also tried sshing into the Docker container and running gem install ruby_http_client, but I get the same errors.
What might be happening here?
You're mounting a named volume over the container's /usr/local/bundle directory. The named volume will get populated from the image, but only the very first time you run the container. After that the old contents of the named volume will take precedence over the content of the image: using a volume this way will cause Docker to completely ignore any changes you make in the Gemfile.
You should be able to delete that volumes: line from the docker-compose.yml file. I'm not clear what benefit you would get from keeping the installed gems in a named volume.
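If you do decide to keep the named volume, it has to be recreated whenever the Gemfile changes so it gets repopulated from the freshly built image. A sketch of the commands, assuming the volume name gem_data_api from the compose file above (Docker prefixes the actual volume name with your compose project name, so adjust the placeholder):

```shell
# Stop containers and remove named volumes declared in the compose file,
# so the bundle volume is repopulated from the newly built image.
docker-compose down -v

# Or remove just the gem volume (replace <project> with your compose project name):
docker volume rm <project>_gem_data_api

# Rebuild and start again.
docker-compose build
docker-compose up
```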
I am trying to deploy photo-stream (https://github.com/maxvoltar/photo-stream) using a docker container. Photo-stream is a picture publishing site meant for self-hosting. It expects its pictures in a path called 'photos/original/', relative to where it's installed. It will create other directories under 'photos/' to cache thumbnails and such.
When I populate that directory with some pictures and start the application natively (without docker) from its build directory using:
$ bundle exec jekyll serve --host 0.0.0.0
it shows me the pictures I put in that directory. When running the application inside a Docker container, I need it to mount a volume that contains the path 'photos/original' so that I can keep my pictures there. I have created this path on a disk mounted at /mnt/data/.
In order to do that, I have added a volume line to the existing Dockerfile:
FROM ruby:latest
ENV VIPSVER 8.9.1
RUN apt update && apt -y upgrade && apt install -y build-essential
RUN wget -O ./vips-$VIPSVER.tar.gz https://github.com/libvips/libvips/releases/download/v$VIPSVER/vips-$VIPSVER.tar.gz
RUN tar -xvzf ./vips-$VIPSVER.tar.gz && cd vips-$VIPSVER && ./configure && make && make install
COPY ./ /photo-stream
WORKDIR /photo-stream
RUN ruby -v && gem install bundler jekyll && bundle install
VOLUME /photo-stream/photos
EXPOSE 4000
ENTRYPOINT bundle exec jekyll serve --host 0.0.0.0
I build the container this way:
$ docker build --tag photo-stream:1.0 .
I run the container this way:
$ docker run -d -p 4000:4000 -v /mnt/data/photos/:/photos/ --name ps photo-stream:1.0
I was expecting the content of the directory /mnt/data/photos to be shown. Instead, nothing is shown. However, a volume '/var/lib/docker/volumes/e5ff426ced2a5e786ced6b47b67d7dee59160c60f59f481516b638805b731902/_data' is created, and when that is populated with pictures, those are shown.
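One observation (my reading of the commands above, not something stated in the thread): the bind mount's container path, /photos/, does not match the path declared by VOLUME in the Dockerfile, /photo-stream/photos, so Docker creates an anonymous volume at the declared path and the bind mount is never consulted. A sketch of the run command with the two paths aligned:

```shell
# Mount the host directory over the path the application actually reads,
# /photo-stream/photos, rather than /photos at the container root.
docker run -d -p 4000:4000 \
  -v /mnt/data/photos/:/photo-stream/photos/ \
  --name ps photo-stream:1.0
```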
After a couple of days of testing and working with Docker (in general, I am trying to migrate from Vagrant to Docker), I encountered a huge problem that I am not sure how or where to fix.
docker-compose.yml
version: "3"
services:
  server:
    build: .
    volumes:
      - ./:/var/www/dev
    links:
      - database_dev
      - database_testing
      - database_dev_2
      - mail
      - redis
    ports:
      - "80:8080"
    tty: true
# the rest are only images for the database, redis, and mailhog, with ports
Dockerfile
example_1
FROM ubuntu:latest
LABEL Yamen Nassif
SHELL ["/bin/bash", "-c"]
RUN apt-get install vim mc net-tools iputils-ping zip curl git -y
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
RUN cd /var/www/dev
RUN composer install
Dockerfile
example_2
....
RUN apt-get install apache2 openssl php7.2 php7.2-common libapache2-mod-php7.2 php7.2-fpm php7.2-mysql php7.2-curl php7.2-dom php7.2-zip php7.2-gd php7.2-json php7.2-opcache php7.2-xml php7.2-cli php7.2-intl php7.2-mbstring php7.2-redis -y
# basically 2 files with just rooting to /var/www/dev
COPY docker/config/vhosts /etc/apache2/sites-available/
RUN service apache2 restart
....
Now in example_1, composer install fails because composer.json is not found, and in example_2 Apache says the root directory is not found.
file/directory = /var/www/dev
I guess it's because it's a volume, and it won't be mounted until the container is fully up: if I build without those commands (which otherwise cause an error), I can then log in to the running container and execute the same commands from the command line without any error.
How can I fix this?
In your first Dockerfile, use the COPY directive to copy your application into the image before you do things like RUN composer install. It'd look something like
FROM php:7.0-cli
COPY . /usr/src/app
WORKDIR /usr/src/app
RUN composer install
(cribbed from the php image documentation; that image may not have composer preinstalled).
In both Dockerfiles, remember that each RUN command creates a new empty container, runs its command, and cleans up after itself. That means commands like RUN cd ... have no effect, and you can't start a service in the background in one RUN command and have it available later; it will get stopped before the Dockerfile moves on to the next line.
In the second Dockerfile, commands like service or systemctl or initctl just don't work in Docker and you shouldn't try to use them. Standard practice is to start the server process as a foreground process when the container launches via a default CMD directive. The flip side of this is that, since the server won't start until docker run time, your volume will be available at that point. I might RUN mkdir in the Dockerfile just to be sure it exists.
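A sketch of how the tail of the second Dockerfile could look under that practice (the vhost COPY and package list are from the question; apachectl -D FOREGROUND is the standard way to keep Apache in the foreground on an Ubuntu base, and the mkdir is the just-in-case step mentioned above):

```dockerfile
# Basically 2 files that just root Apache at /var/www/dev
COPY docker/config/vhosts /etc/apache2/sites-available/

# Make sure the document root exists even before any volume is mounted.
RUN mkdir -p /var/www/dev

EXPOSE 8080

# Run Apache in the foreground instead of `service apache2 restart`;
# by the time this executes (container start), the volume is mounted.
CMD ["apachectl", "-D", "FOREGROUND"]
```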
The problem seems to be the execution order. At image build time, /var/www/dev is available. When you start a container from that image, the container's /var/www/dev is overwritten by your local mount.
If you need no access from your host, then you can simply skip the extra volume.
In case you want to use it in other containers too, then you should work with symlinks.
I'm new to Docker and trying to make a demo Rails app. I made a dockerfile that looks like this:
FROM ruby:2.2
# Install apt based dependencies required to run Rails as
# well as RubyGems. As the Ruby image itself is based on a
# Debian image, we use apt-get to install those.
RUN apt-get update && apt-get install -y \
build-essential \
nodejs
# Configure the main working directory. This is the base
# directory used in any further RUN, COPY, and ENTRYPOINT
# commands.
RUN mkdir -p /app
WORKDIR /app
# Copy the Gemfile as well as the Gemfile.lock and install
# the RubyGems. This is a separate step so the dependencies
# will be cached unless changes to one of those two files
# are made.
COPY Gemfile Gemfile.lock ./
RUN gem install bundler && bundle install --jobs 20 --retry 5
# Copy the main application.
COPY . ./
# Expose port 3000 to the Docker host, so we can access it
# from the outside.
EXPOSE 3000
# The main command to run when the container starts. Also
# tell the Rails dev server to bind to all interfaces by
# default.
CMD ["bundle", "exec", "rails", "server", "-b", "0.0.0.0"]
I then built it (no errors):
docker build -t demo .
And then run it (also no errors):
docker run -itP demo
=> Booting Puma
=> Rails 5.1.1 application starting in development on http://0.0.0.0:3000
=> Run `rails server -h` for more startup options
Puma starting in single mode...
* Version 3.8.2 (ruby 2.2.7-p470), codename: Sassy Salamander
* Min threads: 5, max threads: 5
* Environment: development
* Listening on tcp://0.0.0.0:9292
Use Ctrl-C to stop
When I run a docker ps command in a separate terminal to determine the ports, I get:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
55e8224f7c15 demo "bundle exec rails..." About an hour ago Up About an hour 0.0.0.0:32772->3000/tcp ecstatic_bohr
However, when I try to connect to it at either http://localhost:32772 or http://192.168.99.100:32772 using Chrome or via a curl command, I receive a "Connection refused".
When I run the app outside of Docker on my local machine via the bundle exec rails server command, it works fine. Note that I am using Docker Toolbox on my Win7 machine.
What could I be doing wrong?
I spent a couple of hours on this as well, and this thread was really helpful. What I'm doing right now is accessing those services through the VM's IP address.
You can get your vm's address running:
docker-machine ls
then try to access your service using the host-mapped port 32772, something like this:
http://<VM IP ADDRESS>:32772
Hope this helps.
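Those two steps can be sketched as a pair of commands (docker-machine ip is a real subcommand; `default` is the usual Docker Toolbox machine name, so adjust it if yours differs):

```shell
# Print just the VM's address instead of reading the docker-machine ls table
docker-machine ip default
# e.g. 192.168.99.100

# Then hit the host-mapped port on that address
curl http://192.168.99.100:32772
```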
The combination of the above tricks worked:
I had to use http://<VM IP ADDRESS>:32772 (localhost:32772 did NOT work), AND I had to fix my exposed port to match the TCP listening port of 9292.
I still don't understand why the TCP listening port defaulted to 9292 instead of 3000, but I'll look into that separately.
Thank you for the help!
I'm new to Docker and trying to build my dev container. I would like to have watchify (https://www.npmjs.com/package/watchify) running to concatenate files while I'm developing.
Docker can manage volumes. I could have watchify running on my host system, but I would like to run it inside the Docker container.
I managed to build the image and the container.
"scripts": {
  "watchjs": "node_modules/.bin/watchify ./public/js/dependencies.js -o ./public/js/all.js",
  "start:dev": "npm run watchjs & node app.js"
}
When I run the container with "npm run start:dev", it just exits.
Any idea why this is happening? Can I get watchjs and the node app running in the container?
This is how I'm building the images/containers:
# Build your image
docker build -t albertof/blog .
# Docker create container
docker create -P --name blog-container -v ~/Projects/Docker/example:/Blog albertof/blog
# Docker start running container
docker start blog-container
And here my Dockerfile
FROM node:argon
MAINTAINER XXX ZZZ xxx#zzz.com
# Update libraries and dependencies
# RUN apt-get update -qq
# Install bower
RUN npm install -g bower
# Make folder that contains blog
RUN mkdir -p Blog
# Set up working directory (from now on we are located inside /Blog)
WORKDIR /Blog
# Expose server
EXPOSE 8000
#
# LEAVE FILES THAT CHANGE OFTEN AT THE END
#
# THIS WILL ALLOW DOCKER TO BUILD FASTER
#
# STEPS WILL BE EXECUTED ALL OVER SINCE THE MOMENT IT FIND DIFFS
#
# Add npm
ADD ./package.json .
# Install dependencies defined in package.json
RUN npm install
# Add bower.json
ADD ./bower.json .
# Install bower components
RUN bower install --config.interactive --allow-root
ENTRYPOINT ["npm"]
CMD ["start:dev"]
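One possible culprit (an assumption on my part, not confirmed in the thread): with ENTRYPOINT ["npm"] and CMD ["start:dev"], the container executes npm start:dev, which is not a valid npm subcommand, so npm errors out and the container exits immediately. A sketch of the last two lines keeping the run-script invocation intact:

```dockerfile
# `npm run start:dev` invokes the start:dev entry in package.json's scripts;
# splitting it this way still lets `docker run <image> <other-script>` override CMD.
ENTRYPOINT ["npm", "run"]
CMD ["start:dev"]
```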