I've got a .travis.yml file that describes the directory to cache, but when I check the cache dropdown in Travis, it tells me there is nothing there. I'm just trying to cache my Composer vendor folder. Below is my .travis.yml file:
sudo: required
language: php
php:
  - 7.0
services:
  - docker
before_install:
  - docker-compose up -d
install: composer install
cache:
  directories:
    - root/vendor
script:
  - bundle exec rake phpcs
  - bundle exec rake phpunit:run
  - bundle exec rake ci:behat
And this is my project structure (or the folders/files that matter):
|-- .travis.yml
|-- root
|   |-- vendor
Any suggestions as to why this would be the case?
Old versions of Composer (pre-alpha1) used $HOME/.composer/cache/files for caching; newer versions use $HOME/.cache/composer/files. Set both of them for compatibility:
cache:
  directories:
    - $HOME/.cache/composer/files
    - $HOME/.composer/cache/files
The Travis CI build log will print something along the lines of:
Setting up build cache
$ export CASHER_DIR=$HOME/.casher
$ Installing caching utilities 0.05s
0.00s
attempting to download cache archive 0.47s
fetching master/cache-linux-precise-xxx--xxx-xxx.tgz
found cache
0.00s
adding /home/travis/.cache/composer/files to cache
adding /home/travis/.composer/cache/files to cache 1.28s
In order to cache dependencies installed with composer, you need to specify your cache like this:
cache:
  directories:
    - $HOME/.composer/cache
It's not the vendor directory that gets cached, but composer's own cache.
However, you should also install from dist to keep the cache small:
install:
  - composer install --prefer-dist
For reference, see http://blog.wyrihaximus.net/2015/07/composer-cache-on-travis/.
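Putting this answer's pieces together, a minimal .travis.yml for Composer caching might look like the following sketch (the PHP version here is taken from the question and is otherwise an assumption):

```yaml
language: php
php:
  - 7.0
install:
  - composer install --prefer-dist
cache:
  directories:
    - $HOME/.composer/cache
```

With this in place, subsequent builds restore Composer's package cache, so `composer install` downloads far less.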
I tried to install Pupilfirst LMS, but I ran into trouble at the step
Compile ReScript Code: yarn run re: build
Output: Usage Error: Couldn't find a script named "re: build".
I also had trouble at the step
Run Webpack Dev Server: yarn run wds
Output: Usage Error: Couldn't find a script named "wds".
Is there any solution?
https://docs.pupilfirst.com/developers/development_setup
There shouldn't be a space between re: and build: yarn run re:build.
Also, when running a yarn run script_name command, make sure that your current directory contains a package.json file where the script script_name is defined.
In the case of Pupilfirst, this file is in the repository root: https://github.com/pupilfirst/pupilfirst/blob/e41ffb8a57f4c59f7927056af324b5c283fb0038/package.json
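For illustration, yarn resolves the name you pass to `yarn run` against the `scripts` map in `package.json`. A hypothetical excerpt (these entries are examples, not Pupilfirst's actual file) might look like:

```json
{
  "scripts": {
    "re:build": "rescript build -with-deps",
    "wds": "webpack-dev-server"
  }
}
```

If `yarn run re:build` still fails, check that your current directory contains the `package.json` defining that script.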
I see that bundle install and yarn install are usually done in Dockerfile as:
RUN bundle install && yarn install
Which means that if I modify the Gemfile or yarn.lock, I need to rebuild the image. I know that there is layer caching, and docker build will not rebuild the other layers, only the bundle install && yarn install layer and those after it. But it means I have to run docker-compose up -d --build.
But I was wondering if it is ok to put these commands inside an entry script of docker-compose or in command as:
command: bundle install && yarn install && rails s
In this way, I believe, whenever I run docker-compose up -d, bundle install and yarn install will be executed without having to rebuild the image.
I'm not sure if this has any advantages over the conventional bundle install in the Dockerfile, except not having to append --build to docker-compose up. Am I correct that if I do this, bundle install and yarn install will get executed even when there are no changes to the Gemfile or Yarn files? I guess this is one of the downsides.
Please correct me if this is not the ideal way to go.
I'm new to the Docker world.
It wastes several minutes of time and uses up network bandwidth every time you start your application. When you're doing local development, it'd be the equivalent of doing this, every time you run the application:
rm -rf vendor node_modules
bundle install # from scratch
yarn install # from scratch
bundle exec rails s
A core part of Docker is rebuilding images (in the same way that languages like Go, Java, Typescript, etc. have a "build" phase). Trying to avoid image rebuilds isn't usually advisable. With a well-written Dockerfile, and particularly for an interpreted language, running docker build should be fairly efficient.
The one important trick is to copy the files that specify dependencies separately from the rest of your application. As soon as a Dockerfile COPY instruction encounters a file that's changed, layer caching is disabled for the rest of the build. Since dependencies change relatively infrequently, a sequence that first copies the dependency files, then installs the dependencies, then copies the application can jump straight to the last step if the dependency files haven't changed.
COPY Gemfile Gemfile.lock ./
RUN bundle install
COPY package.json yarn.lock ./
RUN yarn install
COPY . ./
(Make sure to include the Bundler vendor directory and the node_modules directory in a .dockerignore file so the last COPY step doesn't overwrite what previously got installed.)
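For example, a minimal .dockerignore covering both might look like this (a sketch; add whatever other local-only files your project has):

```
vendor
node_modules
.git
```

Entries listed here are excluded from the build context, so `COPY . ./` never sends your locally installed dependencies into the image.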
This question is opinion based. As you already found out yourself, it is a common practice to install dependencies (bundle, yarn, others) during the image build process, and not image run process.
The rationale is that you run more times than you build, and you want the run operation to start quickly.
In the same way that you do apt install... or yum install... in the build stage, you should normally do bundle install in the build stage as well.
That said, if it makes sense to you to bundle install as a part of the entrypoint, that is your choice. I suspect that after you do it, you will see that it is less common for a reason.
Another note about Docker layers: if the Gemfile changes, not only does the layer that refers to it change, but all subsequent layers as well. For that reason, it is common to separate the copying of the dependency manifests (Gemfile.*) from the copying of the app, like this:
# Pre-install gems
COPY Gemfile* ./
RUN gem install bundler && \
    bundle install --jobs=3 --retry=3
# Copy the rest of the app
COPY . .
This way, if your app files change but the dependencies do not, the build will be faster.
I forked this Rails project (https://github.com/DMPRoadmap/roadmap) and tried to set it up following its installation guide:
1) Before the step npm run bundle, the website doesn't show its images and layout properly.
2) After the step npm run bundle, the website shows its images and layout properly.
3) Under the hood, npm run bundle starts webpack. I close webpack by pressing CTRL + C after npm run bundle, and the website still shows images and layout properly.
4) I run npm run bundle -- -p, which should be equivalent to webpack -p, and the website no longer displays the images and layout properly.
Why is npm run bundle -- -p (i.e. webpack -p) not compiling the assets properly? I thought it was the daemon version of npm run bundle (i.e. webpack), and daemon means running in the background (I thought a daemon is the same as a service?).
Please correct me if I understand the concept incorrectly or use the term incorrectly anywhere.
Thank you!
Answering my own question:
1) Before the step: npm run bundle, the website doesn't show its image
and layout properly
The assets (images and the CSS stylesheet's layout) are not compiled yet, so Rails can't use them; hence the layout is not correct and the images do not show.
You can compile the assets for the development environment using npm run bundle in this project.
2) After the step npm run bundle, the website shows its image and
layout properly
See above.
3) Under the hood, npm run bundle will start webpack. I close the
webpack by pressing CTRL + C after npm run bundle, and the website
is still showing image and layout properly.
I thought webpack (npm run bundle) acts like a server, serving the Rails server the assets it needs, but actually this is not true.
webpack only compiles the assets for Rails to use. The reason webpack keeps running after you run npm run bundle is that it constantly watches for changes to the source of those asset files, so any change is reflected immediately when you refresh the website in the browser.
4) I run npm run bundle -- -p which should be equal to webpack -p,
and the website does not display the images and layout properly
anymore.
Why is npm run bundle -- -p (which is webpack -p) not compiling the
asset properly?
npm run bundle -- -p is indeed equivalent to webpack -p in this case.
To understand why the website no longer displays the images and layout properly, let's look at something first. When I run npm run bundle -- -p, the following happens:
modified: config/initializers/fingerprint.rb
deleted: public/javascripts/application.js
deleted: public/javascripts/vendor.js
deleted: public/stylesheets/application.css
added: public/javascripts/application-2f49ec37563f77c91204.js
added: public/javascripts/vendor-2f49ec37563f77c91204.js
added: public/stylesheets/application-2f49ec37563f77c91204.css
npm run bundle -- -p will compile the asset for production environment, and will add fingerprint (in this case it's -2f49ec37563f77c91204) to its output.
Digging into the code, we can see that app/views/layouts/application.html.erb has the following code:
<%= stylesheet_link_tag fingerprinted_asset('application') %>
<%= javascript_include_tag fingerprinted_asset('vendor') %>
<%= javascript_include_tag fingerprinted_asset('application') %>
Looking at the fingerprinted_asset() method in app/helpers/application_helper.rb, we can see that:
def fingerprinted_asset(name)
Rails.env.production? ? "#{name}-#{ASSET_FINGERPRINT}" : name
end
We can see that in production we use files with a fingerprint in their names, and in non-production environments we use files without a fingerprint.
In this case, I was running the Rails server in the development environment, so my application looks for files without a fingerprint in their names. That means these:
public/javascripts/application.js
public/javascripts/vendor.js
public/stylesheets/application.css
But I used npm run bundle -- -p, which deletes the above files and produces fingerprinted versions of them, so Rails can't find them and hence displays no images and an incorrect layout.
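To make the environment-dependent lookup concrete, here is a standalone sketch of the same logic (a simplified version, not the actual Rails helper; the fingerprint value is the one from the build output above):

```ruby
# Standalone sketch of the fingerprinted_asset logic described above.
ASSET_FINGERPRINT = "2f49ec37563f77c91204"

def fingerprinted_asset(name, production:)
  # In production the compiled files carry the fingerprint suffix;
  # in development Rails looks for the plain file name.
  production ? "#{name}-#{ASSET_FINGERPRINT}" : name
end

puts fingerprinted_asset("application", production: true)   # application-2f49ec37563f77c91204
puts fingerprinted_asset("application", production: false)  # application
```

So a development server asks for `application.css`, while `webpack -p` only produced `application-2f49ec37563f77c91204.css`, which is exactly the mismatch seen here.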
I thought it is the daemon version of npm run bundle (which is
webpack) and daemon means running in background (I thought daemon is
the same as service?)?
It is not the daemon version of npm run bundle. I thought it was because their wiki used to say so, which is wrong; see https://github.com/DMPRoadmap/roadmap/issues/1782.
I am trying to use yarn workspaces and then put my application into a Docker image.
The folder structure looks like this:
root
  Dockerfile
  node_modules/
    libA -> ../libA
  libA/
    ...
  app/
    ...
Unfortunately Docker doesn't follow symbolic links that point outside the build context - therefore it is not possible to copy the node_modules folder in the root directory into a Docker image, even if the Dockerfile is in the root, as in my case.
One thing I could do would be to exclude the symlinks with .dockerignore and then copy the real directory to the image.
Another idea - which I would prefer - would be to have a tool that replaces the symlinks with the actual contents of the symlink. Do you know if there is such a tool (preferably a Javascript package)?
Thanks
Yarn is used for dependency management, and should be configured to run within the Docker container to install the necessary dependencies, rather than copying them from your local machine.
The major benefit of Docker is that it allows you to recreate your development environment without worrying about the machine that it is running on - the same thing applies to Yarn, by running yarn install it installs the right versions for the relevant architecture of the machine your Docker image is built upon.
In your Dockerfile include the following after configuring your work directory:
RUN yarn install
Then you should be all sorted!
Another thing you should do is include the node_modules directory in your .gitignore and .dockerignore files so it is never included when distributing your code.
TL;DR: Don't copy node_modules directory from local machine, include RUN yarn install in Dockerfile
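A minimal sketch of that Dockerfile pattern (the base image, paths, and start command are assumptions; adapt them to your workspace layout):

```dockerfile
FROM node:18-alpine
WORKDIR /app

# Copy only the manifests first so the install layer is cached
COPY package.json yarn.lock ./
RUN yarn install

# Then copy the rest of the source
# (node_modules is excluded via .dockerignore, so the symlinks never enter the build context)
COPY . .

CMD ["node", "app/index.js"]
```

Because `yarn install` runs inside the image, the workspace symlinks are recreated there, and the host's node_modules never needs to be copied at all.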
I am having issues deploying my Elastic Beanstalk Rails 4 + Ember CLI app. I have a Rails application, and within the root I have a folder called 'frontend' which contains my Ember app, generated by ember-cli-rails.
My configuration:
64bit Amazon Linux 2015.03 v1.3.1 running Ruby 2.1 (Puma)
I encounter the following error from my activity log after I run eb deploy:
At a cursory glance, I get this:
ERROR: Instance: i-25c139e7 Module: AWSEBAutoScalingGroup ConfigSet: null Command failed on instance. Return code: 1 Output: (TRUNCATED)...mber-cli-rails.rb:58:in `compile!'
Looking into /var/log/eb-activity.log
I first get a lot of
npm ERR! Error: Attempt to unlock X, which hasn't been locked
followed by
npm ERR! System Linux 3.14.35-28.38.amzn1.x86_64
npm ERR! command "/usr/bin/node" "/usr/bin/npm" "install"
npm ERR! cwd /var/app/ondeck/frontend
npm ERR! node -v v0.10.35
npm ERR! npm -v 1.4.28
npm ERR!
npm ERR! Additional logging details can be found in:
npm ERR! /var/app/ondeck/frontend/npm-debug.log
npm ERR! not ok code 0
rake aborted!
EmberCLI Rails requires your Ember app to have an addon.
From within your EmberCLI directory please run:
$ npm install --save-dev ember-cli-rails-addon#0.0.11
in you Ember application root: /var/app/ondeck/frontend
Tasks: TOP => assets:precompile => ember:compile
(See full trace by running task with --trace) (Executor::NonZeroExitStatus)
So I ssh into the indicated directory and run npm install, which also leaves me with a lot of authorization errors. When I run it via sudo, the modules install correctly, but when I redeploy my app, it gives me the exact same error.
I have tried sudo npm install and chown -R webapp node_modules so that the node_modules folder can be accessed by the webapp user, with no success.
I hate long answers, but this scenario is quite complicated.
As mentioned in the comments above, it was discovered that the home directory for the webapp user needed to be created (/home/webapp). Once this directory is created, the node package manager (npm) can execute without error. Because AWSEB environments can scale, SSH'ing into the EB host and performing one-off installations of packages and modules will not work in the long run. Essentially the answer boils down to the following logical steps:
Install git on the application server because bower needs it.
Create the home directory of the webapp user at /home/webapp.
Install bower globally using npm.
Invoke the npm install of your ember app.
Invoke bower install for your ember app.
To fix this I went ahead and created several .ebextensions customization files that are executed during an eb deploy. Here they are in order:
.ebextensions/00_option_settings.config - sets some EB options; for example the timeout length for command executions performed during an eb deploy. In this case all commands will timeout after 1200 seconds.
option_settings:
  - namespace: 'aws:elasticbeanstalk:command'
    option_name: 'Timeout'
    value: '1200'
.ebextensions/01_packages.config - can install packages through yum and make them available to your eb instance. In this case I use yum to install git, this will later be used by bower.
packages:
  yum:
    git: []
.ebextensions/02_commands.config - allows you to run OS commands prior to unpacking the application that was uploaded through eb deploy. This part of the answer satisfies the main theme of this question: In my particular case, I need to create the /home/webapp directory, make sure it is owned by the webapp user, and also has 700 permissions. Lastly, I ensure that bower is installed globally as it will be needed by my ember application.
commands:
  01_mkdir_webapp_dir:
    # use the test directive to create the directory;
    # if the mkdir command fails, the rest of this directive is ignored
    test: 'mkdir /home/webapp'
    command: 'ls -la /home/webapp'
  02_chown_webapp_dir:
    command: 'chown webapp:webapp /home/webapp'
  03_chmod_webapp_dir:
    command: 'chmod 700 /home/webapp'
  04_install_bower_global:
    command: 'npm install -g bower'
.ebextensions/03_container_commands.config - allows you to run OS commands after the application has been unpacked. NOTE: My ember app lives in the frontend directory of the application source code. In order to install the npm and bower dependencies, the npm install and bower install commands need to be executed from the frontend directory. It is also worth mentioning that the bower install command needs the --allow-root flag in order to succeed, as the AWS user executing these commands has elevated privileges.
container_commands:
  01_npm_install:
    # set the current working directory to the fully-qualified frontend path
    cwd: '/var/app/ondeck/frontend/'
    command: 'npm install'
    leader_only: 'false'
  02_bower_install:
    # set the current working directory to the fully-qualified frontend path
    cwd: '/var/app/ondeck/frontend/'
    command: 'bower --allow-root install'
    leader_only: 'false'
  03_seeddb:
    # seed my database (has nothing to do with this answer)
    command: 'rake db:seed_fu'
    leader_only: 'true'