yarn scripts local bin missing in PATH when installed outside of cwd - docker

I have a docker container with yarn v1.22.19 installed. A .yarnrc file includes the line --modules-folder /node_modules to lift node_modules to the container's root directory. This is a workaround for a bind mount on the working directory erasing the node_modules that were installed when the image was built.
The packages and binaries are installed properly in /node_modules/.bin/. The working directory is /app.
yarn bin returns the correct path to the binary directory, and yarn run <binary> properly executes the binary. However, a yarn script that attempts to execute a local binary, such as "dev": "vite dev --host", errors with /bin/sh: 1: vite: not found.
Node module resolution checks ancestor directories if there is no node_modules directory in the working directory, so /node_modules should be reachable from /app. Even so, I tried changing NODE_PATH to point to /node_modules, and still no luck. Why does a yarn script have a different context than yarn commands? How can I point it to the correct local binary directory?
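One way to confirm what a yarn script actually sees is to print its PATH from inside a script; a minimal sketch (the script name show-path is just an illustration, not from the project):
"show-path": "node -e \"console.log(process.env.PATH.split(':').join('\\n'))\""
Running yarn run show-path prints each PATH entry on its own line, so you can see exactly which .bin directory yarn injected.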
EDIT:
While yarn bin points to the correct local binary folder, the child process produced by yarn run to run a script defined in package.json injects cwd/node_modules/.bin into PATH instead of the specified /node_modules/.bin. I suspect this is a bug in older versions of yarn.
As a workaround, is there a way to prepend a command to the execution of yarn scripts defined in package.json? Then I could manually inject the correct path into the environment.
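One possibility (a sketch, not verified on this exact setup): since yarn v1 runs scripts through /bin/sh, the path can be prepended inside the script entry itself:
"dev": "PATH=/node_modules/.bin:$PATH vite dev --host"
Alternatively, bypass PATH entirely by invoking the binary with an absolute path:
"dev": "/node_modules/.bin/vite dev --host"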

Related

Get only production dependencies from .yarn/cache to build Docker image

I would like to build a Docker image using a multi-stage build.
We are using yarn 2 and its Zero-Installs feature, which stores dependencies in .yarn/cache in zip format.
To minimize the size of my Docker image, I would like to only have the production dependencies.
Previously, we would do
yarn install --non-interactive --production=true
But doing that with an older version of yarn, we don't benefit from the .yarn/cache folder, and it takes time to download dependencies that are already there but not readable by that older version of yarn.
Is there a way to tell yarn 2 to take only the production dependencies from the .yarn/cache folder and put them into another one? Then I could copy that folder into my image and save time and space.
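One possible approach (an assumption on my part, not verified against this project): yarn 2's workspace-tools plugin provides yarn workspaces focus, which can install production dependencies only, resolving them from the existing .yarn/cache instead of the network:
yarn plugin import workspace-tools
yarn workspaces focus --all --production
In a multi-stage Dockerfile, this could run in a builder stage that has the .yarn/cache folder copied in, and the pruned install could then be copied into the final stage.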

Can you run gulp inside a windows-server-core docker container?

We are trying to build a C# app that has gulp do some of the bundling/minification for the front-end website.
We are currently using node 6 because of gulp dependencies. A future branch updates to node 10, but that requires different node dependencies as we migrate our project. I thought using a docker container for the build might spare us from switching between node versions on our local machines.
So I created a docker image
FROM microsoft/dotnet-framework:4.7.2-sdk
Then I loaded npm on top of it. Binding a volume to my source directory, I'm able to install npm packages, install nuget packages, and call the build, but it fails because it is missing my gulp step.
I have gulp installed both globally in the container and locally in the node_modules folder. I end up with an error like,
C:\Program Files\node\node-v6.16.0-win-x64\node_modules\gulp\node_modules\sver-compat\sver.js:19
var semver = version.match(semverRegEx);
^
TypeError: Cannot read property 'match' of undefined
at new Semver (C:\Program Files\node\node-v6.16.0-win-x64\node_modules\gulp\node_modules\sver-compat\sver.js:19:23)
at Function.match (C:\Program Files\node\node-v6.16.0-win-x64\node_modules\gulp\node_modules\sver-compat\sver.js:374:15)
I have searched all over the web for multiple days with no luck finding anything that explains what is blowing up here. Has anyone been able to get gulp to run successfully inside a Windows Server Core docker container? Is the problem with the directory being mounted? I'm using Docker for Windows, if that matters here.
On Windows Server Core, the best I can surmise is that there is an issue with docker, volume mounts, and npm/gulp symlinks.
I believe the user set on the container by default wants to use the C:\ContainerMappedDirectories folder. I am not sure at this point whether that is configurable.
But my workaround looked like this in PowerShell, all running in the container:
npm install -g gulp@^3.9.1
md c:\ContainerMappedDirectories
cd c:\ContainerMappedDirectories
robocopy /S "c:\MyUISource" C:\ContainerMappedDirectories
npm install  # installs gulp locally in this folder
gulp prod-build
robocopy /S C:\ContainerMappedDirectories\dist c:\MyUISource
So I installed gulp globally, created the local workspace, copied my source in, ran npm/gulp, then copied my output back to the mounted location.
In the end it was the best workaround I could accomplish to complete my task.

Jenkins Slave (service) cannot detect protractor

We are running the slave as a service and trying to run protractor with a simple batch file after calling npm install, but for some reason protractor is not detected. Do you know what the reason/problem could be?
If I use the web option (slave) for running the job, everything works fine.
BTW, I tried to set the service to run as a user that is allowed to run the test, and also put node in PATH, but nothing helped.
Appreciate your comments,
Thanks
Eyal
Because you installed protractor as a global package, you would have to use webdriver-manager from the global package install folder. The current folder where you executed npm install -g protractor has no webdriver-manager cmd/binary, so Jenkins reports that it can't find webdriver-manager in the current folder or PATH.
As a best practice, you should add protractor as a dependency of your nodejs project with npm install -S protractor before you write the script. After doing that, you will find 'protractor' added to package.json.
Anyone who gets your source code then only needs to execute npm install in the folder where package.json resides to get all dependencies installed.
After npm install is done, webdriver-manager will be found at <package.json file inside folder>\node_modules\.bin\webdriver-manager
So your cmd should read as follows:
pwd
ls -l "${WORKSPACE}"
cd /d <package.json file inside folder>
npm install
node_modules\.bin\webdriver-manager update
node_modules\.bin\protractor conf.js
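A variation on the same idea (the script name e2e is illustrative, not from the original project): npm prepends node_modules\.bin to PATH when it runs scripts, so the steps can live in package.json:
"scripts": {
  "e2e": "webdriver-manager update && protractor conf.js"
}
and the batch file shrinks to:
cd /d <package.json file inside folder>
npm install
npm run e2e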

Gradle build in docker jenkins slave

I am trying to create a Jenkins slave for building gradle lambda projects. The Jenkins slave throws the error below while building the project.
Exception in thread "main" java.lang.RuntimeException: Could not create parent directory for lock file /gradle/wrapper/dists/gradle-4.2.1-bin/dajvke9o8kmaxbu0kc5gcgeju/gradle-4.2.1-bin.zip.lck
at org.gradle.wrapper.ExclusiveFileAccessManager.access(ExclusiveFileAccessManager.java:43)
at org.gradle.wrapper.Install.createDist(Install.java:48)
at org.gradle.wrapper.WrapperExecutor.execute(WrapperExecutor.java:107)
at org.gradle.wrapper.GradleWrapperMain.main(GradleWrapperMain.java:61)
/home/jenkins/workspace/ddoa-subprod/lf-security-gateway2/lf-security-gateway2
FAILURE: Build failed with an exception.
* What went wrong:
Failed to load native library 'libnative-platform.so' for Linux amd64.
Please help me understand the issue and let me know how to fix it.
To fix the error "What went wrong: Failed to load native library 'libnative-platform.so' for Linux amd64", do the following:
Check whether your Gradle cache folder (~user/.gradle/native) exists at all.
Check whether the file in question, i.e. libnative-platform.so, exists in that directory.
Check whether the folders ~user/.gradle and ~/.gradle/native and the file ~/.gradle/native/libnative-platform.so have valid permissions (they should not be read-only; running chmod -R 755 ~/.gradle is enough).
If you don't see the native folder at all, or if your native folder seems corrupted, run your Gradle task (ex: gradle clean build) with the -g or --gradle-user-home option and pass it a value.
Ex: if I run mkdir /tmp/newG_H_Folder; gradle clean build -g /tmp/newG_H_Folder, you'll see that Gradle populates all the required folders/files (those it needs even before running any task or option) in this new Gradle home folder (i.e. the /tmp/newG_H_Folder/.gradle directory).
From this folder, you can copy just the native folder to your user's ~/.gradle folder (taking a backup of the existing native folder in ~/.gradle first, if it already exists), or copy the whole .gradle folder to ~ (your home directory).
Then rerun your Gradle task and it won't error out anymore. A condensed sketch of these steps is shown below.
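Condensed into shell, the procedure might look like this (a sketch following the example above; the generated native folder may sit directly under the new home or under a .gradle subfolder, so adjust the copy path to what you actually find):
mkdir /tmp/newG_H_Folder
gradle clean build -g /tmp/newG_H_Folder
# back up the existing native folder, if there is one
mv ~/.gradle/native ~/.gradle/native.bak
# copy the freshly generated native folder into the regular Gradle home
cp -r /tmp/newG_H_Folder/native ~/.gradle/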
The Gradle docs say:
https://docs.gradle.org/current/userguide/command_line_interface.html
-g, --gradle-user-home
Specifies the Gradle user home directory. The default is the .gradle directory in the user’s home directory.
Note: using gradle <sometask> -g <a_dynamic_folder_ex_jenkins_workspace> will always work, as Gradle will create a fresh .gradle cache in the folder given to -g, but doing this won't reap the true benefit of Gradle's cache concept.
If you are using version 3.4 of Gradle, then it could possibly be this issue.
To fix it, try updating your Gradle distribution to version 3.5 or higher, where the issue was solved.
I ran the command with sudo and it went through fine.

In Jenkins, is there a way to persist npm packages so I don't have to install them in each build?

I'm using Jenkins (CloudBees) to build my project, and this runs some scripts in each build to download some node packages using npm.
Yesterday the npm registry server was having trouble, and this blocked the project's build cycle.
In order not to depend on external servers, is there a way to persist my node_modules folder in Jenkins so I don't have to download them in every build?
You can checksum the package.json file and back up the node_modules directory.
When the next build starts in Jenkins, check the package.json file against the node_modules backup; if package.json has not changed, just reuse the previous backup.
# hash package.json so the cache key changes when dependencies change
PKG_SUM=$(md5sum package.json | cut -d' ' -f1)
CACHED_FILE=${PKG_SUM}.tgz
# restore the cached node_modules if one exists for this package.json
[[ -f ${CACHED_FILE} ]] && tar zxf ${CACHED_FILE}
npm install
# cache node_modules for next time if no cache existed yet
[[ -f ${CACHED_FILE} ]] || tar zcf ${CACHED_FILE} node_modules
The above is a quite simple cache implementation; for robustness you should also check that the cache file is not damaged.
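For instance, gzip's built-in checksum allows a cheap sanity check; a minimal sketch to run before the extraction step (it simply discards a damaged archive so the build falls back to a clean npm install):
# drop the cached archive if it fails a quick integrity check
[[ -f ${CACHED_FILE} ]] && ! tar ztf ${CACHED_FILE} > /dev/null 2>&1 && rm -f ${CACHED_FILE}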
CloudBees uses a pool of slaves to support your builds, and by their nature your builds can run on various hosts, so they start with a fresh workspace. That said, we try to allocate a slave that you already used, to avoid download delays; this works for all files stored in the workspace.
I don't think this would have prevented the issue with the npm registry being offline anyway.
