staging beam sdk and suggestion of "pre-building workflow..." expected? - google-cloud-dataflow

I run a Beam pipeline with the following options on Python 3.8 and Beam 2.41.0rc1:
argv = [
    "--runner", "DataflowRunner",
    "--experiments=use_runner_v2",
    "--sdk_container_image=us.gcr.io/some_beam_image_based_on_2.41.0rc1",
]
The Beam image is built with bazel docker rules.
In WORKSPACE:
# https://hub.docker.com/r/apache/beam_python3.8_sdk/tags
container_pull(
    name = "beam_python",
    # 2.41.0rc1
    digest = "sha256:0036b90ecfefddd1dd1614b9cd1ccec7c5a906ee2185542996bc26d6408d9e14",
    registry = "registry.hub.docker.com",
    repository = "apache/beam_python3.8_sdk",
)
In BUILD:
cc_image(
    name = "sample_image",
    binary = ":sample",
)
container_layer(
    name = "sample_layer",
    tars = [":sample_image"],
)
container_image(
    name = "beam_sample_image",
    base = "@beam_python//image",
    layers = [":sample_layer"],
)
A custom apache-beam appears to be installed; I'm not sure whether it is 2.41.0rc1.
root@3dc8fe29cd99:/# pip freeze
absl-py==1.2.0
apache-beam @ file:///opt/apache/beam/tars/apache-beam.tar.gz
astunparse==1.6.3
atomicwrites==1.4.1
attrs==21.4.0
beautifulsoup4==4.11.1
...
I saw the following log:
I0815 18:11:40.158377 140374774146880 stager.py:927] Downloading source distribution of the SDK from PyPi
I0815 18:11:40.158492 140374774146880 stager.py:934] Executing command: ['/home/swang/.cache/bazel/_bazel_swang/09eb83215bfa3a8425e4385b45dbf00d/execroot/__main__/bazel-out/k8-opt/bin/garage/sample_launch.runfiles/python3_8_x86_64-unknown-linux-gnu/bin/python3', '-m', 'pip', 'download', '--dest', '/tmp/tmpuqnjdrj3', 'apache-beam==2.41.0rc1', '--no-deps', '--no-binary', ':all:']
WARNING: You are using pip version 22.0.4; however, version 22.2.2 is available.
You should consider upgrading via the '/home/swang/.cache/bazel/_bazel_swang/09eb83215bfa3a8425e4385b45dbf00d/execroot/__main__/bazel-out/k8-opt/bin/garage/sample_launch.runfiles/python3_8_x86_64-unknown-linux-gnu/bin/python3 -m pip install --upgrade pip' command.
I0815 18:11:42.881979 140374774146880 stager.py:825] Staging SDK sources from PyPI: dataflow_python_sdk.tar
I0815 18:11:42.883261 140374774146880 stager.py:900] Downloading binary distribution of the SDK from PyPi
I0815 18:11:42.883335 140374774146880 stager.py:934] Executing command: ['/home/swang/.cache/bazel/_bazel_swang/09eb83215bfa3a8425e4385b45dbf00d/execroot/__main__/bazel-out/k8-opt/bin/garage/sample_launch.runfiles/python3_8_x86_64-unknown-linux-gnu/bin/python3', '-m', 'pip', 'download', '--dest', '/tmp/tmpuqnjdrj3', 'apache-beam==2.41.0rc1', '--no-deps', '--only-binary', ':all:', '--python-version', '38', '--implementation', 'cp', '--abi', 'cp38', '--platform', 'manylinux1_x86_64']
WARNING: You are using pip version 22.0.4; however, version 22.2.2 is available.
You should consider upgrading via the '/home/swang/.cache/bazel/_bazel_swang/09eb83215bfa3a8425e4385b45dbf00d/execroot/__main__/bazel-out/k8-opt/bin/garage/sample_launch.runfiles/python3_8_x86_64-unknown-linux-gnu/bin/python3 -m pip install --upgrade pip' command.
I0815 18:11:44.672350 140374774146880 stager.py:842] Staging binary distribution of the SDK from PyPI: apache_beam-2.41.0rc1-cp38-cp38-manylinux1_x86_64.whl
I0815 18:11:44.675273 140374774146880 dataflow_runner.py:477] Pipeline has additional dependencies to be installed in SDK worker container, consider using the SDK container image pre-building workflow to avoid repetitive installations. Learn more on https://cloud.google.com/dataflow/docs/guides/using-custom-containers#prebuild
I0815 18:11:44.676919 140374774146880 environments.py:376] Default Python SDK image for environment is apache/beam_python3.8_sdk:2.41.0rc1
I0815 18:11:44.677026 140374774146880 environments.py:295] Using provided Python SDK container image: us.gcr.io/shawn-295406/beam_sample:20220815_test
I0815 18:11:44.677081 140374774146880 environments.py:302] Python SDK container image set to "us.gcr.io/shawn-295406/beam_sample:20220815_test" for Docker environment
I0815 18:11:44.723044 140374774146880 translations.py:714] ==================== <function pack_combiners at 0x7fab84d77820> ====================
I0815 18:11:44.723375 140374774146880 translations.py:714] ==================== <function sort_stages at 0x7fab84d78040> ====================
I0815 18:11:44.730723 140374774146880 apiclient.py:473] Defaulting to the temp_location as staging_location: gs://shizhiw/beam/tmp
I0815 18:11:44.750272 140374774146880 auth.py:136] Setting socket default timeout to 60 seconds.
I0815 18:11:44.750348 140374774146880 auth.py:138] socket default timeout is 60.0 seconds.
I0815 18:11:44.755919 140374774146880 apiclient.py:732] Starting GCS upload to gs://shizhiw/beam/tmp/beamapp-swang-0816011144-730582-ppdswudf.1660612304.730851/dataflow_python_sdk.tar...
I0815 18:11:45.899281 140374774146880 apiclient.py:748] Completed GCS upload to gs://shizhiw/beam/tmp/beamapp-swang-0816011144-730582-ppdswudf.1660612304.730851/dataflow_python_sdk.tar in 1 seconds.
I0815 18:11:45.899615 140374774146880 apiclient.py:732] Starting GCS upload to gs://shizhiw/beam/tmp/beamapp-swang-0816011144-730582-ppdswudf.1660612304.730851/apache_beam-2.41.0rc1-cp38-cp38-manylinux1_x86_64.whl...
I0815 18:11:48.883744 140374774146880 apiclient.py:748] Completed GCS upload to gs://shizhiw/beam/tmp/beamapp-swang-0816011144-730582-ppdswudf.1660612304.730851/apache_beam-2.41.0rc1-cp38-cp38-manylinux1_x86_64.whl in 2 seconds.
I0815 18:11:48.884612 140374774146880 apiclient.py:732] Starting GCS upload to gs://shizhiw/beam/tmp/beamapp-swang-0816011144-730582-ppdswudf.1660612304.730851/pipeline.pb...
I0815 18:11:49.025467 140374774146880 apiclient.py:748] Completed GCS upload to gs://shizhiw/beam/tmp/beamapp-swang-0816011144-730582-ppdswudf.1660612304.730851/pipeline.pb in 0 seconds.
I0815 18:11:49.855348 140374774146880 apiclient.py:911] Create job: <Job
I'm a bit puzzled by the log:
Beam is already installed both locally and in the container image, so why does it seem to be downloaded again?
I'm already using a custom container (base Beam image + a C++ binary), so why does the log still suggest using the "pre-building workflow"?

The apache-beam @ file:///opt/apache/beam/tars/apache-beam.tar.gz entry says that Beam was installed from a tarball file on the container. It must be the 2.41.0rc1 pulled in as the base image by your Dockerfile or build system.
The message you are seeing is benign (it comes from dataflow_runner.py:477 in your log). It's logged whenever your pipeline options contain additional artifacts to retrieve; in your case, just the sdk_container_image.
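For illustration, here is a small helper (hypothetical, not part of pip) showing how `pip freeze` output distinguishes plain PyPI version pins from PEP 508 direct references like the Beam entry above, which marks a package installed from a local tarball rather than from PyPI:

```python
# Distinguish "name==version" pins from direct references ("name @ url")
# in `pip freeze` output. A direct reference such as
# "apache-beam @ file:///opt/apache/beam/tars/apache-beam.tar.gz"
# means the package was installed from that URL, not from PyPI.

def classify_freeze_line(line: str):
    """Return (name, source) where source is 'pypi' or the direct URL."""
    line = line.strip()
    if " @ " in line:
        name, url = line.split(" @ ", 1)
        return name, url
    if "==" in line:
        name, _version = line.split("==", 1)
        return name, "pypi"
    return line, "unknown"

print(classify_freeze_line("apache-beam @ file:///opt/apache/beam/tars/apache-beam.tar.gz"))
# -> ('apache-beam', 'file:///opt/apache/beam/tars/apache-beam.tar.gz')
print(classify_freeze_line("absl-py==1.2.0"))
# -> ('absl-py', 'pypi')
```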

Related

How to stop bundling and publish with CDK Lambda Python (alpha) when no source files are changed

The documentation for the Lambda Python alpha construct says
Python bundles are only recreated and published when a file in a source directory has changed.
But I am not sure how this works: when I deploy without changes to my Lambda function's source file, it still bundles and publishes:
cdk deploy --require-approval never
...
Step 1/10 : ARG IMAGE=public.ecr.aws/sam/build-python3.7
Step 2/10 : FROM $IMAGE
latest: Pulling from sam/build-python3.9
e86b34a791fa: Pulling fs layer
9ba3f71dde1f: Pulling fs layer
c09bfc1b1fd5: Pulling fs layer
...
[0%] start: Building 5f67ddad2d5185486f13b5862b7730f02d21b7e9b8cbxxxxx:current_account-eu-west-1
[0%] start: Publishing 5f67ddad2d5185486f13b5868997730f02d21b7e9bxxxxxx:current_account-eu-west-1
[100%] success: Published 0fa75f8b14bfa78ef0ee43368c9d0ea7580c7429e4eafa9fexxxxx:current_account-eu-west-1
How can I mitigate this?
You will see Building and Publishing phase output for each asset on each deploy. As advertised, though, the CDK won't republish unchanged artefacts. It calls s3.listObjectsV2 for each asset, skipping the upload if the build hash already exists in the CDK's S3 bucket.
Run cdk deploy --verbose to see what's happening under the covers:
# selected output
MyPythonLambdaStack: deploying...
[0%] start: Publishing cdbcef19c59781e0f77531180a03c2c08316f9307430efda903b9218eb96b0c2:123456789012-us-east-1
[0%] start: Publishing 5c3360714cbfec6dca5b8ac4cb34266ddf97528b3043b9005831289fe4fe3a73:123456789012-us-east-1
[09:49:41] [0%] check: Check s3://cdk-hnb659fds-assets-123456789012-us-east-1/cdbcef19c59781e0f77531180a03c2c08316f9307430efda903b9218eb96b0c2.zip
[09:49:41] [0%] check: Check s3://cdk-hnb659fds-assets-123456789012-us-east-1/5c3360714cbfec6dca5b8ac4cb34266ddf97528b3043b9005831289fe4fe3a73.json
[09:49:42] [0%] found: Found s3://cdk-hnb659fds-assets-123456789012-us-east-1/5c3360714cbfec6dca5b8ac4cb34266ddf97528b3043b9005831289fe4fe3a73.json
[50%] success: Published 5c3360714cbfec6dca5b8ac4cb34266ddf97528b3043b9005831289fe4fe3a73:123456789012-us-east-1
[09:49:42] [50%] found: Found s3://cdk-hnb659fds-assets-123456789012-us-east-1/cdbcef19c59781e0f77531180a03c2c08316f9307430efda903b9218eb96b0c2.zip
[100%] success: Published cdbcef19c59781e0f77531180a03c2c08316f9307430efda903b9218eb96b0c2:123456789012-us-east-1
[09:49:42] MyPythonLambdaStack: checking if we can skip deploy
[09:49:43] MyPythonLambdaStack: skipping deployment (use --force to override)
✅ MyPythonLambdaStack (no changes)
✨ Deployment time: 4.18s
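The check-then-publish decision above can be sketched as follows (a simplified model; the real logic lives in the CDK's cdk-assets tooling and queries S3 via s3.listObjectsV2). Since an asset's object key is derived from its content hash, an existing key means the identical artifact was already published:

```python
# Simplified model of the CDK's publish-or-skip decision. The asset hash is
# the object key, so "key already in bucket" implies "already published".

def should_publish(asset_hash: str, existing_keys: set, ext: str = ".zip") -> bool:
    """Publish only if no object with this hash already exists in the bucket."""
    return f"{asset_hash}{ext}" not in existing_keys

# Hashes shortened for readability; keys as listed by the verbose output above.
bucket = {"cdbcef19.zip", "5c336071.json"}
print(should_publish("cdbcef19", bucket))   # found in bucket -> False (skip upload)
print(should_publish("deadbeef", bucket))   # new asset -> True (upload)
```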

Any workaround to the "Could not find prisma-fmt binary" error when installing KeystoneJS?

I'm running into this issue when trying to install KeystoneJS (tried locally with node and npm up to date and in a node:16-alpine docker image).
> keystone-app@1.0.0 postinstall
> keystone postinstall
Error: Could not find prisma-fmt binary. Searched in:
- /plur-cms/node_modules/@prisma/engines/prisma-fmt-debian-openssl-1.1.x
- /plur-cms/node_modules/@prisma/sdk/prisma-fmt-debian-openssl-1.1.x
- /plur-cms/node_modules/@prisma/prisma-fmt-debian-openssl-1.1.x
- /plur-cms/node_modules/@prisma/sdk/runtime/prisma-fmt-debian-openssl-1.1.x
at resolveBinary (/plur-cms/node_modules/@prisma/sdk/dist/resolveBinary.js:91:9)
at Object.formatSchema (/plur-cms/node_modules/@prisma/sdk/dist/engine-commands/formatSchema.js:41:25)
at getCommittedArtifacts (/plur-cms/node_modules/@keystone-6/core/dist/artifacts-f7bed9de.cjs.dev.js:398:13)
at Object.validateCommittedArtifacts (/plur-cms/node_modules/@keystone-6/core/dist/artifacts-f7bed9de.cjs.dev.js:417:21)
at postinstall (/plur-cms/node_modules/@keystone-6/core/scripts/dist/keystone-6-core-scripts.cjs.dev.js:619:5)
So far I have tried updating Prisma to the latest version, generating its binary, and placing it into the correct folder for Keystone. But after that Keystone still doesn't run:
✨ Starting Keystone
⭐️ Dev Server Starting on http://localhost:5000
⭐️ GraphQL API Starting on http://localhost:5000/api/graphql
✨ Generating GraphQL and Prisma schemas
✨ The database is already in sync with the Prisma schema.
Error: Unknown binary target debian-openssl-3.0.x in generator client.
Possible binaryTargets: darwin, darwin-arm64, debian-openssl-1.0.x, debian-openssl-1.1.x, rhel-openssl-1.0.x, rhel-openssl-1.1.x, linux-arm64-openssl-1.1.x, linux-arm64-openssl-1.0.x, linux-arm-openssl-1.1.x, linux-arm-openssl-1.0.x, linux-musl, linux-nixos, windows, freebsd11, freebsd12, openbsd, netbsd, arm, native
at validateGenerators (/home/camopy/dev/plur-cms/node_modules/@prisma/sdk/dist/get-generators/getGenerators.js:318:17)
at getGenerators (/home/camopy/dev/plur-cms/node_modules/@prisma/sdk/dist/get-generators/getGenerators.js:122:3)
at Object.getGenerator (/home/camopy/dev/plur-cms/node_modules/@prisma/sdk/dist/get-generators/getGenerators.js:276:22)
at generatePrismaClient (/home/camopy/dev/plur-cms/node_modules/@keystone-6/core/dist/artifacts-f7bed9de.cjs.dev.js:522:21)
at async Promise.all (index 0)
at Object.generateNodeModulesArtifacts (/home/camopy/dev/plur-cms/node_modules/@keystone-6/core/dist/artifacts-f7bed9de.cjs.dev.js:518:3)
at async Promise.all (index 0)
at setupInitialKeystone (/home/camopy/dev/plur-cms/node_modules/@keystone-6/core/scripts/dist/keystone-6-core-scripts.cjs.dev.js:416:22)
at initKeystone (/home/camopy/dev/plur-cms/node_modules/@keystone-6/core/scripts/dist/keystone-6-core-scripts.cjs.dev.js:166:35)
Has any of you run into this already and found a workaround?
Or do you have a docker image with keystone 6 working?
You are running into this issue because your @prisma/client version doesn't support OpenSSL 3.0.
You need to update to at least version 3.13.0; Prisma added OpenSSL 3.0 support in that release.
Here are the release notes which mention this: Release notes
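As a quick sanity check, the version threshold from the answer can be encoded as a plain tuple comparison (a hypothetical helper, not part of Prisma; assumes a simple numeric major.minor.patch string):

```python
# Check whether an installed prisma version meets the OpenSSL 3.0 support
# threshold (>= 3.13.0, per the answer above). Tuple comparison avoids any
# third-party version-parsing dependency.

def supports_openssl3(version: str) -> bool:
    parts = tuple(int(p) for p in version.split("."))
    return parts >= (3, 13, 0)

print(supports_openssl3("3.12.0"))  # False: too old, update required
print(supports_openssl3("3.13.0"))  # True: first release with support
print(supports_openssl3("4.1.1"))   # True
```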

Bitbucket pipeline - error installing pre-commit ts-lint

I have a step in my pipeline that runs our configured pre-commit on Bitbucket:
...
- step:
    name: Passing linters
    image: python:3.7-slim-stretch
    script:
      - pip3 install "pre-commit==2.17.0"
      - apt-get update && apt-get --assume-yes install git
      - pre-commit run --all-files
No changes were made but it suddenly stopped working.
The pipeline result:
+ pre-commit run --all-files
[INFO] Initializing environment for https://github.com/psf/black.
[INFO] Initializing environment for https://github.com/psf/black:click==8.0.4.
[INFO] Initializing environment for https://github.com/pre-commit/pre-commit-hooks.
[INFO] Initializing environment for https://github.com/maximevast/pre-commit-tslint/.
[INFO] Initializing environment for https://github.com/maximevast/pre-commit-tslint/:tslint-react@4.1.0,tslint@5.20.1,typescript@4.0.2.
[INFO] Installing environment for https://github.com/psf/black.
[INFO] Once installed this environment will be reused.
[INFO] This may take a few minutes...
[INFO] Installing environment for https://github.com/pre-commit/pre-commit-hooks.
[INFO] Once installed this environment will be reused.
[INFO] This may take a few minutes...
[INFO] Installing environment for https://github.com/maximevast/pre-commit-tslint/.
[INFO] Once installed this environment will be reused.
[INFO] This may take a few minutes...
An unexpected error has occurred: CalledProcessError: command: ('/root/.cache/pre-commit/repo1234/node_env-default/bin/node', '/root/.cache/pre-commit/repo1234/node_env-default/bin/npm', 'install', '--dev', '--prod', '--ignore-prepublish', '--no-progress', '--no-save')
return code: 1
expected return code: 0
stdout: (none)
stderr:
/root/.cache/pre-commit/repo1234/node_env-default/bin/node: /lib/x86_64-linux-gnu/libm.so.6: version `GLIBC_2.27' not found (required by /root/.cache/pre-commit/repo1234/node_env-default/bin/node)
/root/.cache/pre-commit/repo1234/node_env-default/bin/node: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.25' not found (required by /root/.cache/pre-commit/repo1234/node_env-default/bin/node)
/root/.cache/pre-commit/repo1234/node_env-default/bin/node: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.28' not found (required by /root/.cache/pre-commit/repo1234/node_env-default/bin/node)
Check the log at /root/.cache/pre-commit/pre-commit.log
pre-commit utilizes nodeenv to bootstrap node environments
it does this by downloading a copy of node and provisioning an environment (when node is not available). a few days ago node 18.x was released and the prebuilt binaries require a relatively-recent version of glibc
there's a few ways you can work around this:
1. select a specific version of node using language_version / default_language_version
# using default_language_version
default_language_version:
  node: 16.14.2

# on the hook itself
repos:
- repo: https://github.com/maximevast/pre-commit-tslint
  rev: ...
  hooks:
  - id: tslint
    language_version: 16.14.2
2. use a newer base image
stretch is a bit old; perhaps try 3.7-slim-buster instead?
this will get you a more modern version of glibc
image: python:3.7-slim-buster
3. install node into your image
this will skip the "download node" step of pre-commit by default (specifically it'll default to language_version: system when a suitable node is detected)
this will vary based on your image setup
disclaimer: I created pre-commit
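To decide which base image is new enough, one can pull the required GLIBC versions straight out of the loader errors shown above (a hypothetical helper, not part of pre-commit; the stretch/buster glibc versions are 2.24 and 2.28 respectively):

```python
import re

# Extract the GLIBC versions a binary demands from dynamic-loader error
# output, and report the highest one. Stretch ships glibc 2.24, so any
# requirement above that (here 2.28) means a newer base image is needed.

stderr = """
node: /lib/x86_64-linux-gnu/libm.so.6: version `GLIBC_2.27' not found (required by node)
node: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.25' not found (required by node)
node: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.28' not found (required by node)
"""

def max_required_glibc(text: str):
    """Return the highest (major, minor) GLIBC version mentioned, or None."""
    versions = re.findall(r"GLIBC_(\d+)\.(\d+)", text)
    return max((int(a), int(b)) for a, b in versions) if versions else None

print(max_required_glibc(stderr))  # (2, 28): satisfied by buster, not stretch
```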

Bazel inside Alpine container issue

I'm trying to test the build of google/or-tools using the Bazel-based build system on various common GNU/Linux distros.
When trying to use Bazel inside an Alpine:edge based Dockerfile (i.e. in a RUN command) at "docker build" time, I don't get consistent builds between my Archlinux machine and the GitHub Actions workflow runners (Ubuntu 18.04 IIRC).
Dockerfile: https://github.com/google/or-tools/blob/master/bazel/docker/alpine/Dockerfile
I run it using my Makefile target alpine_build in google/or-tools/bazel
ref: https://github.com/google/or-tools/blob/master/bazel/Makefile
From GH ubuntu-latest (18.04 LTS ?) runner, I got this trace
$ make alpine_build
...
Step 11/11 : RUN bazel build --curses=no --copt='-Wno-sign-compare' //...:all
---> Running in 9a2f9b6f24c7
Extracting Bazel installation...
Starting local Bazel server and connecting to it...
Loading:
Loading: 0 packages loaded
Loading: 0 packages loaded
Loading: 0 packages loaded
DEBUG: Rule 'com_google_protobuf' indicated that a canonical reproducible form can be obtained by modifying arguments commit = "fe1790ca0df67173702f70d5646b82f48f412b99", shallow_since = "1576187991 -0800"
DEBUG: Call stack for the definition of repository 'com_google_protobuf' which is a git_repository (rule definition at /root/.cache/bazel/_bazel_root/86fee77ec27da0053940f3f327a6fd59/external/bazel_tools/tools/build_defs/repo/git.bzl:195:18):
- /home/lib/WORKSPACE:22:1
Loading: 2 packages loaded
Analyzing: 301 targets (16 packages loaded, 0 targets configured)
Analyzing: 301 targets (25 packages loaded, 43 targets configured)
Analyzing: 301 targets (26 packages loaded, 43 targets configured)
...
ref: https://github.com/google/or-tools/runs/568849544?check_suite_focus=true
So everything seems fine up to this point (I still need to figure out the jdk javac issue, but that's another topic).
On the contrary on my Archlinux, I've got:
$ make alpine_build
...
Step 11/11 : RUN bazel build --curses=no --copt='-Wno-sign-compare' //...:all
---> Running in e13ca9fd3e84
Extracting Bazel installation...
Starting local Bazel server and connecting to it...
Loading:
Loading: 0 packages loaded
Loading: 0 packages loaded
Loading: 0 packages loaded
INFO: Call stack for the definition of repository 'com_google_protobuf' which is a git_repository (rule definition at /root/.cache/bazel/_bazel_root/86fee77ec27da0053940f3f327a6fd59/external/bazel_tools/tools/build_defs/repo/git.bzl:195:18):
- /home/lib/WORKSPACE:22:1
ERROR: An error occurred during the fetch of repository 'com_google_protobuf':
java.io.IOException: unlinkat(/root/.cache/bazel/_bazel_root/86fee77ec27da0053940f3f327a6fd59/external/com_google_protobuf/.git/logs/refs/remotes/origin) (Directory not empty)
ERROR: no such package '@com_google_protobuf//': java.io.IOException: unlinkat(/root/.cache/bazel/_bazel_root/86fee77ec27da0053940f3f327a6fd59/external/com_google_protobuf/.git/logs/refs/remotes/origin) (Directory not empty)
ERROR: no such package '@com_google_protobuf//': java.io.IOException: unlinkat(/root/.cache/bazel/_bazel_root/86fee77ec27da0053940f3f327a6fd59/external/com_google_protobuf/.git/logs/refs/remotes/origin) (Directory not empty)
INFO: Elapsed time: 29.713s
INFO: 0 processes.
FAILED: Build did NOT complete successfully (0 packages loaded)
FAILED: Build did NOT complete successfully (0 packages loaded)
The command '/bin/sh -c bazel build --curses=no --copt='-Wno-sign-compare' //...:all' returned a non-zero code: 1
make: *** [Makefile:121: alpine_build] Error 1
I've tried to look at the file /root/.cache/bazel/_bazel_root/86fee77ec27da0053940f3f327a6fd59/external/com_google_protobuf/.git/logs/refs/remotes/origin
using the previous step (alpine_devel) container:
$ make sh_alpine_devel
/home/lib # bazel build --curses=no --copt='-Wno-sign-compare' //...:all
Extracting Bazel installation...
Starting local Bazel server and connecting to it...
Loading:
Loading: 0 packages loaded
Loading: 0 packages loaded
Loading: 0 packages loaded
INFO: Call stack for the definition of repository 'com_google_protobuf' which is a git_repository (rule definition at /root/.cache/bazel/_bazel_root/86fee77ec27da0053940f3f327a6fd59/external/bazel_tools/tools/build_defs/repo/git.bzl:195:18):
- /home/lib/WORKSPACE:22:1
ERROR: An error occurred during the fetch of repository 'com_google_protobuf':
java.io.IOException: unlinkat(/root/.cache/bazel/_bazel_root/86fee77ec27da0053940f3f327a6fd59/external/com_google_protobuf/.git/logs/refs/remotes/origin) (Directory not empty)
ERROR: no such package '@com_google_protobuf//': java.io.IOException: unlinkat(/root/.cache/bazel/_bazel_root/86fee77ec27da0053940f3f327a6fd59/external/com_google_protobuf/.git/logs/refs/remotes/origin) (Directory not empty)
ERROR: no such package '@com_google_protobuf//': java.io.IOException: unlinkat(/root/.cache/bazel/_bazel_root/86fee77ec27da0053940f3f327a6fd59/external/com_google_protobuf/.git/logs/refs/remotes/origin) (Directory not empty)
INFO: Elapsed time: 29.924s
INFO: 0 processes.
FAILED: Build did NOT complete successfully (0 packages loaded)
FAILED: Build did NOT complete successfully (0 packages loaded)
cat /root/.cache/bazel/_bazel_root/86fee77ec27da0053940f3f327a6fd59/external/com_google_protobuf/.git/logs/refs/remotes/origin/revert-6272-MutableSequence-import
0000000000000000000000000000000000000000 c5fedd61a48a174054f98684d5ddbc2d11530367 root <root@Flex2.home> 1586336706 +0000 fetch origin refs/heads/*:refs/remotes/origin/* refs/tags/*:refs/tags/*: storing head
note Flex2 is my local machine...
So my questions:
Is this a known issue? (I couldn't find anything on "java.io.IOException: unlinkat".)
Does Bazel depend on the host kernel (uname -a etc.), which could explain why I don't have the same behaviour from one host to another?
Can I get more trace output to debug this issue?
How can I fix it?
Thanks,
Basically, you need to tell Bazel to use the JDK installed locally by Alpine rather than downloading one.
This can be done using the option --host_javabase=@local_jdk//:jdk
ref: https://github.com/google/or-tools/blob/2cb85b4eead4c38e1c54b48044f92087cf165bce/bazel/docker/alpine/Dockerfile#L26
Minimal working example:
https://github.com/Mizux/bazel-cpp/blob/main/ci/docker/alpine/Dockerfile

Karma not running in Jenkins CI, Cannot find module 'karma-jasmine'

I'm setting up an Angular 4 SPA with automatic testing in Jenkins CI. The SPA is part of a larger, Maven-managed project, so the build is also Maven-managed. So far I've:
Installed the NodeJS plugin on Jenkins, using install from nodejs.org with version 8.6.0
Configured "Global npm packages to install" = "karma-cli phantomjs-prebuilt jasmine-core karma-jasmine karma-phantomjs-launcher karma-junit-reporter karma-coverage"
Added the "maven-karma-plugin" in pom.xml with browsers=PhantomJS / singleRun=true / reporters=dots,junit
Enabled "Provide Node & npm bin/ folder to PATH" on the Jenkins job configuration
The build process starts up quite ok, but eventually I get:
[INFO] --- maven-karma-plugin:1.6:start (default) @ webclient ---
[INFO] Executing Karma Test Suite ...
/var/lib/jenkins/tools/jenkins.plugins.nodejs.tools.NodeJSInstallation/Node.js_8.6.0/bin/karma start /var/lib/jenkins/workspace/funnel_build/webclient/karma.conf.js --browsers PhantomJS --reporters dots,junit --single-run
07 10 2017 17:07:52.801:ERROR [config]: Error in config file!
{ Error: Cannot find module 'karma-jasmine'
at Function.Module._resolveFilename (module.js:527:15)
at Function.Module._load (module.js:476:23)
at Module.require (module.js:568:17)
at require (internal/module.js:11:18)
at module.exports (/var/lib/jenkins/workspace/funnel_build/webclient/karma.conf.js:9:7)
at Object.parseConfig (/var/lib/jenkins/tools/jenkins.plugins.nodejs.tools.NodeJSInstallation/Node.js_8.6.0/lib/node_modules/karma/lib/config.js:410:5)
The npm install at the very beginning of the build logs:
$ /var/lib/jenkins/tools/jenkins.plugins.nodejs.tools.NodeJSInstallation/Node.js_8.6.0/bin/npm install -g karma-cli phantomjs-prebuilt jasmine-core karma-jasmine karma-phantomjs-launcher karma-junit-reporter karma-coverage
/var/lib/jenkins/tools/jenkins.plugins.nodejs.tools.NodeJSInstallation/Node.js_8.6.0/bin/karma -> /var/lib/jenkins/tools/jenkins.plugins.nodejs.tools.NodeJSInstallation/Node.js_8.6.0/lib/node_modules/karma-cli/bin/karma
/var/lib/jenkins/tools/jenkins.plugins.nodejs.tools.NodeJSInstallation/Node.js_8.6.0/bin/phantomjs -> /var/lib/jenkins/tools/jenkins.plugins.nodejs.tools.NodeJSInstallation/Node.js_8.6.0/lib/node_modules/phantomjs-prebuilt/bin/phantomjs
> phantomjs-prebuilt@2.1.15 install /var/lib/jenkins/tools/jenkins.plugins.nodejs.tools.NodeJSInstallation/Node.js_8.6.0/lib/node_modules/phantomjs-prebuilt
> node install.js
Considering PhantomJS found at /var/lib/jenkins/tools/jenkins.plugins.nodejs.tools.NodeJSInstallation/Node.js_8.6.0/bin/phantomjs
Looks like an `npm install -g`
Could not link global install, skipping...
Download already available at /tmp/phantomjs/phantomjs-2.1.1-linux-x86_64.tar.bz2
Verified checksum of previously downloaded file
Extracting tar contents (via spawned process)
Removing /var/lib/jenkins/tools/jenkins.plugins.nodejs.tools.NodeJSInstallation/Node.js_8.6.0/lib/node_modules/phantomjs-prebuilt/lib/phantom
Copying extracted folder /tmp/phantomjs/phantomjs-2.1.1-linux-x86_64.tar.bz2-extract-1507388835905/phantomjs-2.1.1-linux-x86_64 -> /var/lib/jenkins/tools/jenkins.plugins.nodejs.tools.NodeJSInstallation/Node.js_8.6.0/lib/node_modules/phantomjs-prebuilt/lib/phantom
Writing location.js file
Done. Phantomjs binary available at /var/lib/jenkins/tools/jenkins.plugins.nodejs.tools.NodeJSInstallation/Node.js_8.6.0/lib/node_modules/phantomjs-prebuilt/lib/phantom/bin/phantomjs
npm WARN karma-jasmine@1.1.0 requires a peer of karma@* but none was installed.
npm WARN karma-junit-reporter@1.2.0 requires a peer of karma@>=0.9 but none was installed.
npm WARN karma-phantomjs-launcher@1.0.4 requires a peer of karma@>=0.9 but none was installed.
+ karma-phantomjs-launcher@1.0.4
+ karma-coverage@1.1.1
+ karma-jasmine@1.1.0
+ karma-cli@1.0.1
+ karma-junit-reporter@1.2.0
+ jasmine-core@2.8.0
+ phantomjs-prebuilt@2.1.15
updated 7 packages in 10.553s
(The reason the package 'karma' is currently not on the list is that I read somewhere that karma-cli should be used in place of karma. Adding the 'karma' package doesn't change anything, however.)
Any idea why that "Cannot find module 'karma-jasmine'" error pops up? In (2) you'll see that the karma-jasmine package is listed, and I can find it on the server, but it's still not found by the NodeJS plugin.
Thanks, Simon
I managed to get it to work by running "npm install" as part of the build process, and then running everything against local npm packages.
The entire setup is described here: https://funneltravel.wordpress.com/2017/10/16/running-karma-with-maven-on-jenkins-ci/
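The underlying cause can be illustrated with a simplified model of Node's module resolution: require() walks node_modules directories upward from the requiring file, so plugins installed under the global NodeJS tool prefix are not on karma.conf.js's search path. A sketch over a fake filesystem (hypothetical helper, not Node's actual resolver):

```python
import posixpath

# Simplified model of Node's node_modules lookup: walk up from the requiring
# file's directory checking <dir>/node_modules/<name> at each level. Globally
# installed packages live under the global prefix, which this walk never visits.

def resolve(name: str, start_dir: str, existing_dirs: set):
    """Walk up from start_dir looking for node_modules/<name>; None if absent."""
    d = start_dir
    while True:
        candidate = posixpath.join(d, "node_modules", name)
        if candidate in existing_dirs:
            return candidate
        parent = posixpath.dirname(d)
        if parent == d:  # reached filesystem root
            return None
        d = parent

fs = {"/var/lib/jenkins/workspace/funnel_build/webclient/node_modules/karma-jasmine"}
# Found when installed locally in the project:
print(resolve("karma-jasmine", "/var/lib/jenkins/workspace/funnel_build/webclient", fs))
# With only a global install (not in any parent node_modules), resolution fails:
print(resolve("karma-jasmine", "/var/lib/jenkins/workspace/funnel_build/webclient", set()))
```

This is why running "npm install" in the project, as in the answer above, fixes the error: it places karma-jasmine in a node_modules directory that the walk actually visits.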
