I have the following Dockerfile
ARG JEKYLL_VERSION=4
FROM jekyll/jekyll:$JEKYLL_VERSION as BUILD
COPY --chown=jekyll:jekyll . /srv/jekyll
RUN ls -lah /srv/jekyll
RUN jekyll build
RUN ls /srv/jekyll/_site/
FROM nginxinc/nginx-unprivileged:alpine
ADD build/nginx-site.conf /etc/nginx/conf.d/default.conf
COPY --chown=101:101 --from=BUILD /srv/jekyll/_site/ /var/www
This builds perfectly locally, but not on the Linux Jenkins build slave:
Bundle complete! 1 Gemfile dependency, 28 gems now installed.
Use `bundle info [gemname]` to see where a bundled gem is installed.
ruby 2.7.1p83 (2020-03-31 revision a0c7c23c9c) [x86_64-linux-musl]
Configuration file: /srv/jekyll/_config.yml
Source: /srv/jekyll
Destination: /srv/jekyll/_site
Incremental build: disabled. Enable with --incremental
Generating...
done in 0.186 seconds.
Auto-regeneration: disabled. Use --watch to enable.
Removing intermediate container 2b064db0ccaa
---> 1e19e78f593a
Step 6/9 : RUN ls /srv/jekyll/_site/
---> Running in 194d35c3f691
ls: /srv/jekyll/_site/: No such file or directory
The command '/bin/sh -c ls /srv/jekyll/_site/' returned a non-zero code: 1
Changing RUN jekyll build to RUN jekyll build && ls /srv/jekyll/_site/ lists the jekyll output as expected.
Why is the output of the jekyll build not stored? If I remove the ls command, the output can't be found in the later stage.
Any hints?
That happens because the image declares that directory as a VOLUME: anything a later RUN step writes under /srv/jekyll is discarded when that step's container exits, so the _site output never makes it into the image. This is probably a bug, and I'd suggest reporting it as a GitHub issue; removing that line wouldn't affect any of the documented uses of the image.
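If you want to keep the jekyll/jekyll base image in the meantime, one workaround is to write the generated site to a path outside the declared volume. A sketch (--destination/-d is a standard Jekyll option):
FROM jekyll/jekyll:4 as build
COPY --chown=jekyll:jekyll . /srv/jekyll
# /srv/jekyll is declared as a VOLUME, so output a RUN step writes there is
# discarded; build into /tmp/_site instead, which persists in the image layer.
RUN jekyll build -d /tmp/_site
FROM nginxinc/nginx-unprivileged:alpine
COPY build/nginx-site.conf /etc/nginx/conf.d/default.conf
COPY --chown=101:101 --from=build /tmp/_site/ /var/www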
At the end of the day Jekyll is just a Ruby gem (library). So while that image installs a lot of things, for your use it might be enough to start from a ruby image, install the gem, and use it:
FROM ruby:3.1
RUN gem install jekyll
WORKDIR /site
COPY . .
RUN jekyll build
If your directory tree contains a Ruby Gemfile and Gemfile.lock, you could also COPY those into the image and RUN bundle install instead.
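A minimal sketch of that variant (assuming the Gemfile lists jekyll):
FROM ruby:3.1
WORKDIR /site
# Copy the dependency manifests first so the bundle install layer stays
# cached until Gemfile or Gemfile.lock actually changes.
COPY Gemfile Gemfile.lock ./
RUN bundle install
COPY . .
RUN bundle exec jekyll build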
Alternatively, since Jekyll is a static site generator, you should get the same results whether you build the site on the host or in a container. Another good path is to install Ruby and Jekyll on your host (maybe in an rbenv gemset), build the site there, and COPY the result into the Nginx image, skipping the first stage entirely.
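That approach might look like this (a sketch; it assumes the host build leaves _site/ next to the Dockerfile):
# On the host:
gem install jekyll
jekyll build                 # generates ./_site

# Dockerfile, now a single stage:
FROM nginxinc/nginx-unprivileged:alpine
COPY build/nginx-site.conf /etc/nginx/conf.d/default.conf
COPY --chown=101:101 _site/ /var/www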
I have a containerized Rails application, deployed to an App Service in Azure. I've enabled SSH for my Docker container in order to manually run some rakes and execute Rails CLI commands.
The issue:
Logging in through SSH in the Azure portal does not let me run any commands (rakes, migrations, etc.).
I always run into a "command not found" error, even though the application is successfully deployed and running, so Rails and all the gems must be installed somewhere. Bundler is installed in the Docker container, along with Ruby.
My Dockerfile:
FROM ruby:2.6.3
....
WORKDIR /app
COPY . /app
ENV RAILS_ENV production
ENV RAILS_SERVE_STATIC_FILES true
ENV RAILS_LOG_TO_STDOUT true
ADD Gemfile /app
ADD Gemfile.lock /app
RUN gem install bundler
RUN bundle config set --local without 'test' --with runtime --deployment
RUN bundle install
EXPOSE 3000 80 2222
RUN ["chmod","+x","entrypoint.sh"]
ENTRYPOINT ["./entrypoint.sh"]
Any help is highly appreciated!
I've tried executing which ruby and looking in the gems folder, but I've only found Bundler there. I've tried setting GEM_HOME and GEM_PATH to point to my local app, but again, only Bundler is installed there and all the other gems are missing.
Executing which/locate rails does not find the installation.
When I try to run bin/rails, it complains that the other gems are not installed.
What is the issue here? Is there another way I should be doing this through azure?
I think environment variables are not automatically passed on to SSH sessions. I had to add this to my Docker image start script:
# This makes env variables available in the SSH session too.
eval $(printenv | sed -n "s/^\([^=]\+\)=\(.*\)$/export \1=\2/p" | sed 's/"/\\\"/g' | sed '/=/s//=\'\''/' | sed 's/$/\'\''/' >> /etc/profile)
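For context, here is a sketch of where that line might live in a container start script (a hypothetical file; the service ssh start step assumes the image has the Azure-compatible sshd set up):
#!/bin/sh
# entrypoint.sh (hypothetical): export app settings so SSH sessions see them,
# start the SSH daemon the Azure portal connects to, then start Rails.
eval $(printenv | sed -n "s/^\([^=]\+\)=\(.*\)$/export \1=\2/p" | sed 's/"/\\\"/g' | sed '/=/s//=\'\''/' | sed 's/$/\'\''/' >> /etc/profile)
service ssh start
exec bundle exec rails server -b 0.0.0.0 -p 3000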
I launched Strapi with docker-compose. After reading the Migration Guide, I still don't know which method I should choose if I want to upgrade to the next version:
1. In the Strapi project directory, execute npm install strapi@<next version> -g and npm install strapi@<next version> --save
2. docker exec -it <strapi container> sh, navigate to the Strapi project directory, then execute npm install strapi@<next version> -g and npm install strapi@<next version> --save
3. Neither?
1. In your local developer tree, update the package version in your package.json file. Run npm install or yarn install locally. Start your application. Verify that it works. Run your tests. Fix any compatibility issues from the upgrade. Do all of this without Docker involved at all.
2. Re-run docker build . to rebuild your Docker image with the new package dependencies.
3. Stop the old container, delete it, and run a new container from the new image.
As a general rule you should never install anything in a running container. It's extremely routine to delete containers, and when you do, anything in the container will be lost.
There's a common "pattern" of running Node in Docker, bind-mounting your application into the container, and then mounting an anonymous volume over your node_modules directory. (For routine development I've found it vastly simpler to just install Node on my host; it is literally a single apt-get install or brew install command.) If you're using this Docker-oriented setup, the anonymous volume won't notice that the image's node_modules directory has changed, and you have to re-run docker build and delete and recreate your containers.
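For reference, the setup being described usually looks something like this in a docker-compose.yml (a sketch with made-up names):
services:
  web:
    image: node:18
    working_dir: /app
    volumes:
      - .:/app                # bind-mount the source tree from the host
      - /app/node_modules     # anonymous volume shadowing node_modules
    command: npm start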
TL;DR: 3, though 2 was going in the right direction.
The official documentation wasn't clear to me the first time either.
Below is a spin-off step-by-step guide for going from 3.0.5 to 3.1.5 in a docker-compose context.
It tries to follow the official documentation as closely as possible, but includes some extra (mandatory in my case) steps.
Upgrade Strapi
The following relates to the strapi/strapi (not strapi/base) Docker image used via docker-compose.
Important! Upgrading Docker image versions DOES NOT upgrade the Strapi version.
The Strapi Node.js application builds itself during the first startup only, if it detects an empty folder; it is normally stored in a mounted volume. See docker-entrypoint.sh.
To upgrade, first follow the guides (general and version-specific) to rebuild the actual Strapi Node.js application. Second, update the Docker tag to match the new version, to avoid confusion.
Example of upgrading from 3.0.5 to 3.1.5:
# https://strapi.io/documentation/developer-docs/latest/guides/update-version.html
# Make sure your server is not running until the end of the migration
## That instruction is unclear. I stopped Nginx to prevent access to the application, without stopping Strapi itself.
docker-compose exec strapi bash # enter running container
## Alternative way would be `docker-compose stop strapi` and manually reconstruct container options using `docker`, overriding entrypoint with `--entrypoint /bin/bash`
# Few checks
yarn strapi version # current version installed
yarn info strapi #npm info strapi@3.1.x version # available versions
yarn --version #npm --version
yarn list #npm list
cat package.json
# Upgrade your dependencies
sed -i 's|"3.0.5"|"3.1.5"|g' package.json && cat package.json
yarn install #npm install
yarn strapi version
# Breaking changes? See version-specific migration guide!
## https://strapi.io/documentation/developer-docs/latest/migration-guide/migration-guide-3.0.x-to-3.1.x.html
## Define the admin JWT Token
## Update username constraint for administrators
docker-compose exec db bash
psql strapi strapi
-- show tables and describe one
\dt
\d strapi_administrator
## Migrate your custom admin panel plugins
# Rebuild your administration panel
rm -rf node_modules # workaround for "Error: Module not found: Error: Can't resolve"
yarn build --clean #npm run build -- --clean
# Extensions?
# Start your application
yarn develop #npm run develop
# Confirm & test, visit URL
# Errors?
## Error: ENOSPC: System limit for number of file watchers reached, ...
# Can be solved by modifying kernel parameter at docker HOST system
sudo vi /etc/sysctl.conf # fs.inotify.max_user_watches=524288
sudo sysctl -p
# Modify docker-compose.yml to reflect the version change and avoid confusion!
docker ps
vi docker-compose.yml # e.g. 3.0.5 > 3.1.5
docker-compose up --force-recreate --no-deps -d strapi
# ... and remove old docker image, when no longer required.
P.S. We can improve the documentation together via https://github.com/strapi/documentation. I made a pull request: https://github.com/strapi/strapi-docker/pull/276
I am trying to put my Rails project on Google App Engine for the first time and I am having a lot of trouble.
I wanted to deploy my project with a custom runtime app.yaml (because I would like yarn to install the dependencies as well), but the deployment command fails with this error:
Error Response: [4] Your deployment has failed to become healthy in the allotted time and therefore was rolled back. If you believe this was an error, try adjusting the 'app_start_timeout_sec' setting in the 'readiness_check' section.
PS: the app runs locally (development and production env).
My app.yaml looks like this:
entrypoint: bundle exec rails s -b '0.0.0.0' --port $PORT
env: flex
runtime: custom

env_variables:
  # My environment variables

beta_settings:
  cloud_sql_instances: ekoma-app:us-central1:ekoma-db

readiness_check:
  path: "/_ah/health"
  check_interval_sec: 5
  timeout_sec: 4
  failure_threshold: 2
  success_threshold: 1
  app_start_timeout_sec: 120
And my Dockerfile looks like this:
FROM l.gcr.io/google/ruby:latest
RUN apt-get update -qq && apt-get install apt-transport-https
RUN curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add -
RUN echo "deb https://dl.yarnpkg.com/debian/ stable main" | tee /etc/apt/sources.list.d/yarn.list
RUN apt-get update -qq && apt-get install -y build-essential libpq-dev imagemagick yarn
WORKDIR /app
COPY Gemfile /app/Gemfile
COPY Gemfile.lock /app/Gemfile.lock
COPY package.json /app/package.json
COPY yarn.lock /app/yarn.lock
RUN gem install pkg-config -v "~> 1.1"
RUN bundle install && npm install
COPY . /app
When deploying with the Ruby runtime, I realized that the generated Dockerfile was much more complex and probably more complete, and Google provides a repo to generate it.
So I tried to look into the public ruby-docker repo that Google shared, but I don't know how to use their generated Docker images and therefore how to fix my Dockerfile issue:
https://github.com/GoogleCloudPlatform/ruby-docker
Could someone help me figure out what's wrong in my setup and how to use these ruby-docker images (they seem very useful!)?
Thank you!
The "entrypoint" field in app.yaml is not used when a custom runtime is in play. Instead, set the CMD in your Dockerfile. e.g.:
CMD ["bundle", "exec", "rails", "s", "-b", "0.0.0.0", "--port", "8080"]
That probably will get your application running. (Remember that environment variables are not interpolated in exec form, so I replaced your $PORT with the hard-coded port 8080, which is the port App Engine expects.)
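To illustrate the difference between the two CMD forms:
# Exec form: no shell is involved, so $PORT would be passed literally.
CMD ["bundle", "exec", "rails", "s", "-b", "0.0.0.0", "--port", "8080"]

# Shell form: runs via /bin/sh -c, so $PORT is expanded at container start.
CMD bundle exec rails s -b 0.0.0.0 --port $PORT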
As an alternative:
It may be possible to use the Ruby runtime images in the ruby-docker repo and avoid a custom runtime (i.e. you may not need to write your own Dockerfile), even if you have custom build steps like yarn installs. Most of the build process in runtime: ruby is customizable, but it's not well documented. If you want to try this path, the TL;DR is:
Use runtime: ruby in your app.yaml and don't provide your own Dockerfile. (And reinstate the entrypoint of course.)
If you want to install Ubuntu packages not normally present in runtime: ruby, list them in app.yaml under runtime_config:packages. For example:
runtime_config:
  packages:
  - libgeos-dev
  - libproj-dev
If you want to run custom build steps, list them in app.yaml under runtime_config:build. They get executed in the Dockerfile after the bundle install step (which cannot itself be modified). For example:
runtime_config:
  build:
  - npm install
  - bundle exec rake assets:precompile
  - bundle exec rake setup_my_stuff
Note that by default, if you don't provide custom build steps, the Ruby runtime behaves as if there is one build step: bundle exec rake assets:precompile || true. That is, by default, runtime: ruby will attempt to compile your assets during App Engine deployment. If you do modify the build steps and want to keep this behavior, make sure you include that rake task as part of your custom build steps.
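So a build list that adds an npm step while keeping the default asset compilation might look like this (a sketch based on the behavior just described):
runtime_config:
  build:
  - npm install
  - bundle exec rake assets:precompile || true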
The official rails image on docker hub:
https://hub.docker.com/_/rails/
I create a Dockerfile like:
FROM rails:onbuild
ENV RAILS_ENV=production
ADD vendor/gems/my_gem /usr/src/app/vendor/gems/my_gem
CMD ["sh", "/usr/src/app/init.sh"]
My init.sh
#!/bin/bash
bundle exec rake db:create db:migrate
bundle exec rails server -b 0.0.0.0
My Gemfile
...
gem 'my_gem', path: './vendor/gems/my_gem'
...
When I build my docker image:
docker build -t myapp .
It said:
...
The path `/usr/src/app/vendor/gems/my_gem` does not exist.
The command '/bin/sh -c bundle install' returned a non-zero code: 13
The default path is /usr/src/app. How do I add special files there?
docker ADD will add <src> (when it is a folder, not a URL) relative to the source directory being built (the context of the build).
So you need to be sure your current directory when doing docker build . is also the one which includes vendor/gems/my_gem.
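For example (hypothetical paths):
cd /path/to/myapp          # this directory must contain vendor/gems/my_gem
docker build -t myapp .    # "." becomes the build context that ADD resolves against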
The OP scho reports in the comments:
After I changed to FROM rails:4.2.1, it worked.
As documented for the docker rails image:
This image (rails:onbuild) includes multiple ONBUILD triggers which should cover most applications.
The build will COPY . /usr/src/app, RUN bundle install, EXPOSE 3000, and set the default command to rails server.
That means the ADD command was probably not needed and was in conflict with the ONBUILD-triggered COPY.
That differs from rails:4.2.1, where the ADD or COPY is left to the Dockerfile itself (as opposed to ONBUILD triggers).
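For comparison, with rails:4.2.1 those steps are spelled out explicitly in your own Dockerfile, roughly like this (a sketch, not the OP's exact file):
FROM rails:4.2.1
WORKDIR /usr/src/app
# Copy the whole tree (including vendor/gems/my_gem) before bundle install,
# so the path-sourced gem exists when the Gemfile is resolved.
COPY . /usr/src/app
RUN bundle install
EXPOSE 3000
CMD ["sh", "/usr/src/app/init.sh"]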
I am trying to run a small Rails app in a Docker container. I am getting close; however, I am struggling with my Dockerfile.
I have added the following command to my Dockerfile to recursively add all files in my project folder.
ADD .
After this, I run
RUN bundle install --deployment
However, because my ADD command also adds the Dockerfile, my image cache breaks every time I edit the Dockerfile, forcing me to re-run bundle install.
According to https://docs.docker.com/reference/builder/#the-dockerignore-file, I can use a .dockerignore file to ignore Dockerfile, but this causes the docker build command to fail with
2014/09/17 22:12:46 Dockerfile was excluded by .dockerignore pattern 'Dockerfile'
How can I easily add my project to my image, but exclude the Dockerfile, so I don't break the docker image cache?
Your clarification in a comment on the answer from @Kuhess, saying that the actual problem is "the invalidation of the Docker image cache causing bundle install to run again", helps in providing you an answer.
I've been using a Dockerfile that looks like the following for my rails 4.1.* app. By ADDing Gemfile* first, then running bundle install, and only then ADDing the rest of the app, the bundle install step is cached unless one of the Gemfile* files changes.
FROM ruby:2.1.3
RUN adduser --disabled-password --home=/rails --gecos "" rails
RUN gem install bundler --no-ri --no-rdoc
RUN gem install -f rake --no-ri --no-rdoc
RUN mkdir /myapp
WORKDIR /myapp
ADD Gemfile /myapp/Gemfile
ADD Gemfile.lock /myapp/Gemfile.lock
RUN bundle install
ADD . /myapp
RUN chown -R rails:rails /myapp
USER rails
EXPOSE 3000
ENV RAILS_ENV production
CMD bundle exec rails server -p 3000
My Dockerfile is in the root dir of the app, but changes to it (or any other part of the app) do not break the image cache for the bundle install step, because they are ADDed after it is RUN. The only thing that breaks the cache is a change to one of the Gemfile* files, which is correct.
FYI, my .dockerignore file looks as follows:
.git
log
vendor/bundle
There is an issue for that on Github: https://github.com/docker/docker/issues/7969
Your main problem is the eviction of the cache because of the ADD and the modification of the Dockerfile. One of the maintainers explains that, for the moment, the .dockerignore file is not designed to deal with this:
It — .dockerignore — skips some files when you upload your context from client to daemon and daemon needs Dockerfile for building image. So main idea(for now) of .dockerignore is skipping big dirs for faster context upload, not clean context. full comment on Github
So I am afraid that the image cache will break when you ADD the Dockerfile, even when the other lines are not modified.
Maybe one way to deal with the cache is to place all the files you want to add in a different directory than the Dockerfile:
.
├── Dockerfile
└── files_to_add/
Then if you ADD files_to_add, the Dockerfile will not be included and the cache will not be evicted.
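In that layout, only files_to_add/ is ADDed, so the Dockerfile never enters the image layers (a sketch):
FROM ruby:2.1.3
WORKDIR /myapp
# Only the contents of files_to_add/ reach the image, so editing the
# Dockerfile no longer invalidates this layer's cache.
ADD files_to_add /myapp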
But I do not consider this trick a solution: I also want to have my Dockerfile next to the other files at the root of my project.