Does anybody know how to make bundle install caching work in the latest Docker release?
I've tried so far:
1.
WORKDIR /tmp
ADD ./Gemfile Gemfile
ADD ./Gemfile.lock Gemfile.lock
RUN bundle install
2.
ADD . /opt/railsapp/
WORKDIR /opt/railsapp
RUN bundle install
Neither of them works; it still runs bundle install from scratch every time, even though the Gemfile hasn't changed.
Does anyone know how to make caching for bundle install work correctly?
Cheers,
Andrew
Each time you change any file in your local app directory, the cache will be wiped out, forcing every step afterwards to be re-run, including the last bundle install.
The solution is don't run bundle install in step 2. You have already installed your gems in step 1 and there is little chance the Gemfile will change between step 1 and step 2 ;-).
The whole point of step 1 is to add your Gemfile, which should not change too often, so you can cache it and the subsequent bundle install before adding the rest of your app, which will probably change very often while you are still developing it.
Here's what the Dockerfile could look like:
1.
WORKDIR /tmp
ADD ./Gemfile Gemfile
ADD ./Gemfile.lock Gemfile.lock
RUN bundle install
2.
ADD . /opt/railsapp/
WORKDIR /opt/railsapp
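Putting both steps together, a complete Dockerfile could look like this (the base image tag and the /opt/railsapp path are assumptions; adjust to your setup):

```dockerfile
FROM ruby:2.2

# Step 1: add only the Gemfile and lockfile, then install gems.
# These layers stay cached until Gemfile or Gemfile.lock changes.
WORKDIR /tmp
ADD Gemfile Gemfile
ADD Gemfile.lock Gemfile.lock
RUN bundle install

# Step 2: add the rest of the app. Editing app code only invalidates
# the layers from here down, so bundle install is not re-run.
ADD . /opt/railsapp/
WORKDIR /opt/railsapp
```

The ordering is the whole trick: the expensive RUN bundle install sits above the ADD . line, so day-to-day code changes never bust its cache.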
Versions of Docker before 0.9.1 did not cache ADD instructions. Can you check that you're running a version of Docker 0.9.1 or greater?
Also, which installation of Docker are you using? According to this GitHub issue, some users have experienced cache-busting ADD behavior when using unsupported Docker builds. Make sure you're using an official Docker build.
ADD caching is based on all the metadata of the file, not just the contents.
If you are running docker build in a CI-like environment with a fresh checkout, it is possible the timestamps of the files are being updated, which would invalidate the cache.
Related
Before I begin: This is not a post about speeding up bundle install that runs when I build the container.
I am building a Docker application that needs to run bundle install during runtime. It may take a while to explain this specific use-case, but the important component is: my running container will download rails projects, and run bundle install. Currently, this takes an extremely long time (likely because of nokogiri).
Is there a way to build my container, such that anytime my script runs bundle install during runtime, it uses cached gems?
I am using:
Docker Compose Version 3
Fargate
ECS
Set your BUNDLE_PATH env var to vendor/bundle
Mount a volume in Fargate to the bundle path
The first run will be slow since it has to build up the bundle cache, but after that it should only update gems if necessary.
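A minimal sketch of that setup as a compose file (the service name, image name, and volume name here are assumptions, and the exact volume wiring differs on Fargate, where volumes are declared in the task definition):

```yaml
version: "3"
services:
  web:
    image: my-rails-runner
    environment:
      BUNDLE_PATH: /app/vendor/bundle    # bundler installs gems here
    volumes:
      - bundle_cache:/app/vendor/bundle  # persists gems across container runs
volumes:
  bundle_cache:
```

With BUNDLE_PATH pointing inside the mounted volume, each runtime bundle install finds the previously installed gems and only fetches what is new.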
I've been developing an app using Webpack, Vue.js and Rails. No problems for two months, but out of nowhere, when I try to start the Rails console with rails c, yarn complains that packages are out of date:
error An unexpected error occurred: "Unknown language key integrityNodeDoesntMatch".
info If you think this is a bug, please open a bug report with the information provided in "/Users/maksimfedotov/vras/yarn-error.log".
info Visit https://yarnpkg.com/en/docs/cli/check for documentation about this command.
========================================
Your Yarn packages are out of date!
Please run `yarn install` to update.
========================================
Yet when I run yarn install:
yarn install v1.3.2
[1/4] 🔍 Resolving packages...
success Already up-to-date.
✨ Done in 0.71s.
I've been looking through yarn and webpacker documentation, tried various yarn cleanup commands, but no luck.
Interestingly enough, I can still run the server; it's only the console that complains.
This is an old issue, which has been resolved, so I am writing down what I did in the end:
Simply deleting node_modules usually solves the issue. If you are using spring, it can also interfere, so consider running DISABLE_SPRING=1 rails s to see if that helps.
Try restarting spring by running spring stop.
This fixed the issue for me, and meant I didn't need to constantly prefix commands with the spring disable flag.
The above command stops spring: to check that it automatically restarted, run spring status.
Credit to this comment on GitHub for the solution!
You can add this configuration setting in config/environments/development.rb:
config.webpacker.check_yarn_integrity = false
It makes Rails skip the yarn integrity check on every rails invocation (migrations, launching consoles, and so on) in the development environment.
This problem resurfaced in April 2021 due to compatibility issues between node-sass and Node version 16. (I had similar problems elsewhere and provide a similar answer there.)
So my solution is to downgrade Node until version 16 is fully supported.
Install node 14 with nvm install 14, then set it to the global default with nvm alias default 14.
Then:
Stop your rails server if it's running
Open a fresh new terminal window (so that node --version returns 14.x, not 16)
Run spring stop
Delete yarn.lock
Remove existing node modules with rm -rf node_modules
Check that node --version returns 14. If it doesn't, run nvm install 14 again.
Now reinstall modules with yarn install (if you don't have yarn for node 14, install it with npm install --global yarn)
It should succeed!
Restart your rails server, and it will work!
Other useful info:
This github issue - this comment in particular
Try just yarn install then rails c again
If you are switching between branches that change yarn.lock and just want to run a Rails console without re-running yarn install every time you switch, you can add this to your config/environments/development.rb:
config.webpacker.check_yarn_integrity = ENV['SKIP_YARN'].nil?
Then when rails complains you can simply do this
SKIP_YARN=true rails c
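As a standalone sketch of how that ENV toggle behaves: ENV['SKIP_YARN'].nil? is true only while the variable is unset, so setting it to any value, even "false", skips the check. The helper name below is mine, purely for illustration:

```ruby
# Hypothetical helper mirroring the development.rb toggle:
# the yarn integrity check runs only when SKIP_YARN is absent.
def check_yarn_integrity?(env)
  env['SKIP_YARN'].nil?
end

puts check_yarn_integrity?({})                      # => true  (check runs)
puts check_yarn_integrity?('SKIP_YARN' => 'true')   # => false (check skipped)
puts check_yarn_integrity?('SKIP_YARN' => 'false')  # => false (still skipped!)
```

Because only presence matters, prefer unsetting the variable over setting SKIP_YARN=false when you want the check back.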
In my case, this solved the problem.
rm -rf yarn.lock
yarn install
Try this: NODE_ENV=development yarn install
When using Rails inside a Docker container, several posts (including one on docker.com) use the following pattern:
In Dockerfile do ADD Gemfile and ADD Gemfile.lock, then RUN bundle install.
Create a new Rails app with docker-compose run web rails new.
Since we RUN bundle install to build the image, it seems appropriate to docker-compose build web after updating the Gemfile.
This works insofar as the gemset will be updated inside the image, but:
The Gemfile.lock on the Docker host will not be updated to reflect the changes to the Gemfile. This is a problem because:
Gemfile.lock should be in your repository, and:
It should be consistent with your current Gemfile.
So:
How can one update the Gemfile.lock on the host, so it may be checked in to version control?
Executing bundle inside docker-compose run does update the Gemfile.lock on the host:
docker-compose run web bundle
However: You must still also build the image again.
Just to be clear, the commands to run are:
docker-compose run web bundle
docker-compose up --build
where web is the name of your Dockerized Rails app.
TL;DR - make the changes on the container, run bundle on the container and restart for good measure. Locally these changes will be reflected in your app and are ready to test/push out to git, and your production server will use it to rebuild.
Long read:
docker exec -it name_of_app_1 bash
vim Gemfile and add something like gem 'sorcery', '0.9.0'; pinning the version ensures you get the one you're looking for
bundle to get just this version into the current container's Gemfile and Gemfile.lock
This is semi-normal Rails-type stuff; you are just doing it on the running container. You don't have to worry about git and getting these changes into your repo, because they all happen on your local copy as well. Open another terminal tab, go into your app, less Gemfile, and you should see the changes.
Now you can restart your running container. Your image will get rebuilt (in my case by docker-compose up locally) and tests should pass. Browser-test at will.
Commit your changes to git and use your deploy process to check it out on staging.
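The steps above can be sketched as a command sequence (the container name and gem version are just examples, not from a real setup):

```shell
# Open a shell in the running app container (name is an assumption)
docker exec -it myapp_web_1 bash

# -- inside the container --
# pin the version in the Gemfile, e.g.  gem 'sorcery', '0.9.0'
vim Gemfile
bundle        # resolves just that version into Gemfile.lock
exit

# -- back on the host --
# the changed Gemfile/Gemfile.lock are in your local copy; rebuild:
docker-compose up --build
```

This relies on the app directory being shared between host and container, which is why the edits show up locally and are ready to commit.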
Notes:
Check that your Dockerfile looks like the ones the OP linked; I'm assuming you have some kind of bundle or bundle install line in your Dockerfile.
Run bundle update inside the container, then rebuild.
$ docker-compose run web bundle update
$ docker-compose build
I use gitlab.com and CI with the shared docker runner that runs tests for my Ruby on Rails project on every commit to master. I noticed that about 90% of the build time is spent on 'bundle install'. Is it possible to somehow cache the installed gems between commits to speed up the 'bundle install'?
UPDATE:
To be more specific, below is the content of my .gitlab-ci.yml. The first 3 lines of the 'test' script take about 90% of the time making the build run for 4-5 minutes.
image: ruby:2.2.4
services:
- postgres
test:
script:
- apt-get update -qy
- apt-get install -y nodejs
- bundle install --path /cache
- bundle exec rake db:drop db:create db:schema:load RAILS_ENV=test
- bundle exec rspec
I don't know whether you have special requirements for running apt-get every time; if not, create your own Dockerfile with those commands in it, so that your base image already has those updates and the nodejs package. If you need to update later on, you can always update your Dockerfile again.
For your gems: if you want faster builds, you can cache them between builds too. Normally this is per job and per branch. See the example here: http://doc.gitlab.com/ee/ci/yaml/README.html#cache
cache:
paths:
- /cache
I prefer to add key: "$CI_BUILD_REF_NAME" so that my files are cached per branch. See the documentation for the other keys you can use.
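Putting the branch key together with the cached path, the cache section would look like this ($CI_BUILD_REF_NAME is the branch or tag name on this older GitLab version):

```yaml
cache:
  key: "$CI_BUILD_REF_NAME"   # one cache per branch
  paths:
    - /cache                  # must match bundle install --path
```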
You can set the BUNDLE_PATH environment variable and point it at a folder where you want your gems to be installed. The first time you run bundle install it will install all gems, and subsequent runs will only check whether there are any new gems and install only those.
Note: That's supposed to be the default behavior. Check your BUNDLE_PATH env var value. Is it being changed to a temporary per commit folder or something? Or, is 90% of the build time being spent on downloading gem meta information from rubygems.org? In which case you might want to consider using --local flag (but not sure this is a good idea on CI server).
Fetching source index for https://rubygems.org/
After looking at your .gitlab-ci.yml I noticed that your --path option is missing =. I think it is supposed to be:
- bundle install --path=/cache
I hadn't upgraded my gems for a long time, and just today I decided to run an upgrade. I probably made a mistake at first by running bundle install update, which didn't seem to do anything. Then I ran bundle update, and it created a whole new folder called update in my Rails directory containing all the gems. Now it seems my Rails project is no longer linked to my RVM gem directory, because if I remove the update folder it complains about not being able to find gems. I'm just wondering whether this is new Rails behavior or whether I did something wrong. Thanks!
Edit:
The output of bundle config is:
Settings are listed in order of priority. The top value will be used.
path
Set for your local app (/Users/X/dev/tasker/.bundle/config): "update"
disable_shared_gems
Set for your local app (/Users/X/dev/tasker/.bundle/config): "1"
This seems to be the problem. So how should I revert it to its previous state, linked to the RVM gem directory? And was the problem caused by my bundle install update command? Thanks!
Edit again:
Thanks for the help guys. After finding out the root issue of my problem, I found this solution: bundle install --system at How can I fix an accidental 'sudo bundle install dir_name'?. Now the problem is solved. Thanks!
I made the same mistake.
Check the command-line options for bundle: bundle install accepts a directory argument, so if you type bundle install update, it installs the bundle into a directory named update.
When you do that, bundler creates a .bundle/config file and stores the given path in it.
I think just removing the .bundle directory and running bundle will install the needed gems again,
using the gems in RVM (if RVM is configured correctly).
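Based on the fix the asker found, the recovery can be sketched as the following commands, run from the app root (this assumes the stray directory really is named update and that you have no other local bundler settings worth keeping):

```shell
# Remove bundler's local config (which points at ./update) and the stray dir
rm -rf .bundle update
# Reinstall gems into the shared system/RVM location
bundle install --system
```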