Explanation and best practice around yarn - warning Integrity check: System parameters don't match

When I was building and trying to run a Docker image with a Ruby on Rails app, I was getting this error:
warning Integrity check: System parameters don't match
error Integrity check failed
error Found 1 errors.
I tried changing my Dockerfile with
RUN yarn install --check-files
But that didn't do anything.
I then just deleted the yarn.lock file and my container now runs.
I am guessing the issue is that rails was run locally on my laptop, and now the same yarn.lock file is being used on another machine, so the integrity check fails? Is this correct?
What should my Dockerfile be doing? Should I exclude the yarn.lock file from getting into my Docker container in the first place?

First of all, you will need to remove the node_modules folder and run yarn install again. On your command line, follow these instructions:
Remove the node_modules folder by typing rm -rf node_modules
Run yarn install
Run rails webpacker:install
Restart your terminal.
Be careful about the Node.js version on your machine. It must be the same as the version with which the Rails project was initialized. You can use nvm to manage the Node version.
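For example, a minimal sketch with nvm, assuming the project pins its Node version in a .nvmrc file (if there is no .nvmrc, substitute the version the project was created with):
nvm install    # reads .nvmrc and installs that Node version
nvm use        # switches the current shell to it
yarn install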

Add "config.webpacker.check_yarn_integrity = false" in "development.rb" will solve the problem

I would suggest not copying yarn.lock into the container. You can add yarn.lock to your .dockerignore, and Docker will then ignore it while building the image.
I had faced similar issues because locally I run on macOS and the containers are Alpine-based, so we ended up ignoring the yarn.lock.
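A minimal sketch of such a .dockerignore (the node_modules entry is a common companion, not something this answer requires):
# .dockerignore
yarn.lock
node_modules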

Related

Upgrade a package in a containerised application

I would like to update a Yarn package inside package.json (Next.js project) within a Docker container. I saw that inside the Dockerfile we run yarn install --frozen-lockfile.
For this project there is also a docker-compose file with other containers.
How would you do that? My first try was to run docker compose up and then yarn upgrade 'package', but I got errors not related to the package, as if I were running a fresh yarn install in my environment.
When you are upgrading anything, it is always recommended NOT to do it on the live/running container. Instead, it is recommended you update what you want to update in your source code and Dockerfile, then create a NEW version of the image and deploy the new image over the old one, with docker-compose in your case.
That's what best practice strives towards; if this is possible, it is recommended you go this route, as sketched below.
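A minimal sketch of that workflow, assuming the service is named web in your docker-compose file and some-package stands in for the dependency you want to bump:
yarn upgrade some-package      # updates package.json and yarn.lock locally
docker compose build web       # rebuild the image with the new lockfile
docker compose up -d web       # replace the running container with the new image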

How can I install a local package in a docker container?

I have two local (custom) NPM packages that I've used before. When I install them with npm i parentFolder/package1 parentFolder/package2, they install just fine. However, when I add them to my package.json file and then COPY / npm install in my Dockerfile, I get an error saying "Could not install from "x" as it does not contain a package.json file.". I honestly don't know what to make of this, since it does have a package.json file, and it installs just fine otherwise.
Is there a special step that I need to take for custom packages to work in docker, or am I just lost somehow? I've been staring at this error so long that my brain's a mess, so I probably forgot to add some details. If I forgot something you need to know in order to help, please let me know.
Docker-compose file:
volumes:
  - ./sprinklers:/app
  # - sprinklers_node_modules:/app/node_modules/
  - ./sprinklers/node_modules_temp:/app/node_modules/
  # - sprinklers_persistent:/app/.node-persist/
  - ./sprinklers/.node-persist:/app/.node-persist/
As you can see, I was using named volumes, but tried switching to bind mounts to see what happened.
Dockerfile:
FROM node:14.16.0-slim
WORKDIR /app
RUN npm install
The COPY statements here were removed, since all files are mounted in the docker-compose file.
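Worth noting when reading the Dockerfile above: bind mounts from docker-compose exist only at container run time, not at image build time, so RUN npm install runs in an empty /app and cannot see the local packages. A minimal sketch of the usual build-time pattern, assuming the local packages are moved inside the build context (the package1/package2 paths are hypothetical):
FROM node:14.16.0-slim
WORKDIR /app
# Copy the manifests and the local packages before installing,
# so npm can resolve the file: dependencies at build time
COPY package*.json ./
COPY package1/ ./package1/
COPY package2/ ./package2/
RUN npm install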

Bundle install and yarn install inside entry script - Docker

I see that bundle install and yarn install are usually done in a Dockerfile as:
RUN bundle install && yarn install
This means that if I modify the Gemfile or yarn.lock, I need to rebuild the image. I know that there is layer caching, and docker build will not rebuild layers other than the bundle install && yarn install layer, but it means I have to run docker-compose up -d --build.
But I was wondering if it is OK to put these commands inside an entrypoint script, or in the docker-compose command, as:
command: bundle install && yarn install && rails s
In this way, I believe, whenever I do docker-compose up -d, bundle install and yarn install will be executed without having to build the image.
Not sure if it has any advantages over the conventional bundle install in the Dockerfile, except not having to append --build to docker-compose up. Am I correct that if I do this, bundle install and yarn install will get executed even when there are no changes to the Gemfile or Yarn files? I guess this is one of the bad sides.
Please correct me if this is not the ideal way to go.
I'm new to the Docker world.
It wastes several minutes of time and uses up network bandwidth every time you start your application. When you're doing local development, it'd be the equivalent of doing this, every time you run the application:
rm -rf vendor node_modules
bundle install # from scratch
yarn install # from scratch
bundle exec rails s
A core part of Docker is rebuilding images (in the same way that languages like Go, Java, Typescript, etc. have a "build" phase). Trying to avoid image rebuilds isn't usually advisable. With a well-written Dockerfile, and particularly for an interpreted language, running docker build should be fairly efficient.
The one important trick is to copy the files that specify dependencies separately from the rest of your application. As soon as a Dockerfile COPY instruction encounters a file that has changed, layer caching is disabled for the rest of the build. Since dependencies change relatively infrequently, a sequence that first copies the dependency files, then installs the dependencies, then copies the application can jump straight to the last step if the dependency files haven't changed.
# Install the dependencies first, in their own cacheable layers
COPY Gemfile Gemfile.lock ./
RUN bundle install
COPY package.json yarn.lock ./
RUN yarn install
# Copy the rest of the application last
COPY . ./
(Make sure to include the Bundler vendor directory and the node_modules directory in a .dockerignore file so the last COPY step doesn't overwrite what previously got installed.)
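A minimal sketch of such a .dockerignore, assuming Bundler installs into vendor/bundle in your setup:
# .dockerignore
vendor/bundle
node_modules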
This question is opinion-based. As you already found out yourself, it is common practice to install dependencies (bundle, yarn, others) during the image build process, not the image run process.
The rationale is that you run more times than you build, and you want the run operation to start quickly.
In the same way that you do apt install... or yum install... in the build stage, you should normally do bundle install in the build stage as well.
That said, if it makes sense to you to bundle install as a part of the entrypoint, that is your choice. I suspect that after you do it, you will see that it is less common for a reason.
Another note about Docker layers: if the Gemfile changes, not only will the layer that refers to it change, but all subsequent layers as well. For that reason, it is common to separate the copying of the dependency manifests (Gemfile*) from the copying of the app, like this:
# Pre-install gems
COPY Gemfile* ./
RUN gem install bundler && \
    bundle install --jobs=3 --retry=3

# Copy the rest of the app
COPY . .
This way, if your app files change but the dependencies do not, the build will be faster.

yarn workspaces and docker

I am trying to use yarn workspaces and then put my application into a Docker image.
The folder structure looks like this:
root/
  Dockerfile
  node_modules/
    libA -> ../libA
  libA/
    ...
  app/
    ...
Unfortunately Docker doesn't support symbolic links - therefore it is not possible to copy the node_modules folder in the root directory into a Docker image, even if the Dockerfile is in the root, as in my case.
One thing I could do would be to exclude the symlinks with .dockerignore and then copy the real directory to the image.
Another idea - which I would prefer - would be to have a tool that replaces the symlinks with the actual contents of the symlink. Do you know if there is such a tool (preferably a Javascript package)?
Thanks
Yarn is used for dependency management, and should be configured to run within the Docker container to install the necessary dependencies, rather than copying them from your local machine.
The major benefit of Docker is that it allows you to recreate your development environment without worrying about the machine that it is running on - the same thing applies to Yarn, by running yarn install it installs the right versions for the relevant architecture of the machine your Docker image is built upon.
In your Dockerfile include the following after configuring your work directory:
RUN yarn install
Then you should be all sorted!
Another thing you should do is include the node_modules directory in your .gitignore and .dockerignore files so it is never included when distributing your code.
TL;DR: Don't copy the node_modules directory from your local machine; include RUN yarn install in your Dockerfile.
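Putting the answer's advice together, a minimal sketch of a workspace-aware Dockerfile, assuming the root package.json declares app and libA as workspaces (the base image tag is arbitrary):
FROM node:14-alpine
WORKDIR /app
# Copy only the workspace manifests first so the install layer caches well
COPY package.json yarn.lock ./
COPY app/package.json app/
COPY libA/package.json libA/
RUN yarn install --frozen-lockfile
# Now copy the sources; node_modules stays out via .dockerignore
COPY . .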

jhipster application files generated in wrong directory

When I try to create a JHipster application on Ubuntu 13.10 with yo jhipster, the generated output files are always dumped in the wrong directory.
For example, if I run yo jhipster in the directory /mnt/mercury/jhipster-test/alpha, the files are dumped out to /mnt/mercury. In fact, if I run yo jhipster in any subdirectory of /mnt/mercury, they are always dumped out to /mnt/mercury.
I'm using yo version 1.1.2 from the standard Ubuntu repository.
Please advise how to get the generated files output in the current directory.
For the benefit of anyone else facing this problem.
I managed to get Yeoman working with the following:
npm cache clean
sudo npm rm -g yo
npm cache clean
sudo npm install -g yo
My problem: accidentally running the yo generator in the parent directory.
Solution: delete the .yo-rc.json file in the parent directory, then run the yo generator command in the child directory, as sketched below.
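A minimal sketch of that fix, reusing the question's paths (adjust to wherever yo was accidentally run):
ls -a /mnt/mercury                # look for a stray .yo-rc.json in the parent
rm /mnt/mercury/.yo-rc.json       # remove it
cd /mnt/mercury/jhipster-test/alpha
yo jhipster                       # re-run the generator in the intended directory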
As discussed in the comments, this is a Yeoman problem on Ubuntu 13.10:
We don't have this issue with Ubuntu 12.04
There is the same issue with other generators ("yo webapp") on Ubuntu 13.10
As a workaround, I recommend you have a look at our Docker container:
https://github.com/jhipster/jhipster-docker
This will allow you to run the full JHipster stack, with Ubuntu 12.04, inside a container! Just use it to generate the app, then you can work directly on your host machine.
On Mac OS X Mavericks with Node v0.10.26, yo v1.1.2 and generator-jhipster v0.11, the yo jhipster command was always generating all the sources in the same (wrong!) directory, not in my current directory.
I fixed this problem by doing the following:
cd <WRONG_DIR_WHERE_CODE_IS_CREATED>
rm -rf .yo-rc.json node_modules/
npm uninstall -g karma
npm install -g karma (Note: with sudo it was not working!)
sudo npm install -g generator-jhipster
Not sure why, but I was then able to install karma and generator-jhipster again, and suddenly yo jhipster started generating code in my current directory again.
Could it be caused by different environment variables when launching npm with sudo?
The file .yo-rc.json is hidden; if it is not deleted, the generator will keep taking its settings from it. You must delete .yo-rc.json.
