I have a step in my pipeline that runs our configured pre-commit on Bitbucket:
...
- step:
    name: Passing linters
    image: python:3.7-slim-stretch
    script:
      - pip3 install "pre-commit==2.17.0"
      - apt-get update && apt-get --assume-yes install git
      - pre-commit run --all-files
No changes were made, but it suddenly stopped working.
The pipeline result:
+ pre-commit run --all-files
[INFO] Initializing environment for https://github.com/psf/black.
[INFO] Initializing environment for https://github.com/psf/black:click==8.0.4.
[INFO] Initializing environment for https://github.com/pre-commit/pre-commit-hooks.
[INFO] Initializing environment for https://github.com/maximevast/pre-commit-tslint/.
[INFO] Initializing environment for https://github.com/maximevast/pre-commit-tslint/:tslint-react#4.1.0,tslint#5.20.1,typescript#4.0.2.
[INFO] Installing environment for https://github.com/psf/black.
[INFO] Once installed this environment will be reused.
[INFO] This may take a few minutes...
[INFO] Installing environment for https://github.com/pre-commit/pre-commit-hooks.
[INFO] Once installed this environment will be reused.
[INFO] This may take a few minutes...
[INFO] Installing environment for https://github.com/maximevast/pre-commit-tslint/.
[INFO] Once installed this environment will be reused.
[INFO] This may take a few minutes...
An unexpected error has occurred: CalledProcessError: command: ('/root/.cache/pre-commit/repo1234/node_env-default/bin/node', '/root/.cache/pre-commit/repo1234/node_env-default/bin/npm', 'install', '--dev', '--prod', '--ignore-prepublish', '--no-progress', '--no-save')
return code: 1
expected return code: 0
stdout: (none)
stderr:
/root/.cache/pre-commit/repo1234/node_env-default/bin/node: /lib/x86_64-linux-gnu/libm.so.6: version `GLIBC_2.27' not found (required by /root/.cache/pre-commit/repo1234/node_env-default/bin/node)
/root/.cache/pre-commit/repo1234/node_env-default/bin/node: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.25' not found (required by /root/.cache/pre-commit/repo1234/node_env-default/bin/node)
/root/.cache/pre-commit/repo1234/node_env-default/bin/node: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.28' not found (required by /root/.cache/pre-commit/repo1234/node_env-default/bin/node)
Check the log at /root/.cache/pre-commit/pre-commit.log
pre-commit utilizes nodeenv to bootstrap node environments.
It does this by downloading a copy of node and provisioning an environment (when node is not available). A few days ago node 18.x was released, and the prebuilt binaries require a relatively recent version of glibc.
There are a few ways you can work around this:
1. Select a specific version of node using language_version / default_language_version
# using default_language_version
default_language_version:
  node: 16.14.2

# on the hook itself
repos:
- repo: https://github.com/maximevast/pre-commit-tslint
  rev: ...
  hooks:
  - id: tslint
    language_version: 16.14.2
2. Use a newer base image
stretch is a bit old; perhaps try 3.7-slim-buster instead?
This will get you a more modern version of glibc:
image: python:3.7-slim-buster
3. Install node into your image
This will skip the "download node" step of pre-commit by default (specifically, it'll default to language_version: system when a suitable node is detected).
How you do this will vary based on your image setup; a sketch is shown below.
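For example (a minimal sketch, assuming a Debian-based image such as python:3.7-slim-buster, where the distribution's nodejs package is new enough for your hooks), the pipeline step could install a system node before running pre-commit:
- step:
    name: Passing linters
    image: python:3.7-slim-buster
    script:
      # install git plus a system node/npm so pre-commit skips its own node download
      - apt-get update && apt-get --assume-yes install git nodejs npm
      - pip3 install "pre-commit==2.17.0"
      - pre-commit run --all-files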
disclaimer: I created pre-commit
Thanks for reading my post. I've got a problem that I'd like to pick your brains about. I have many automated Selenium tests that are written in Java. They leverage JUnit for testing. I am tasked with optimizing the run time of these tests on an Amazon EC2 CentOS server. The tests are pulled from a GitHub repository and get stored inside a Docker container on the EC2 instance. Similarly, these tests also run from the client's laptop through Eclipse using Cucumber.

On the laptop, the tests take a fraction of the time that they do on the EC2 instance. The EC2 instance and the laptop have the same amount of RAM and plenty of CPU processing power to run the tests, so I would think that the tests on the EC2 instance wouldn't take nearly as long as they do. Both the EC2 instance and the laptop have to go through a proxy to get to the Internet as well. I have made sure this part was set up correctly, as I had to set HTTP_PROXY, HTTPS_PROXY, and no_proxy to specific IP addresses.
In this section, I will provide the code that the chromedriver gets instantiated with whenever it is called. I will also provide the exact commands that I am using to run the tests. Please see below:
Here is the method that is called when it gets instantiated:
public static ChromeOptions setupChromeOptions() {
    ChromeOptions options = new ChromeOptions();
    Map<String, Object> prefs = new HashMap<String, Object>();
    prefs.put("profile.default_content_setting_values.notifications", 2);
    options.setHeadless(true);
    options.setProxy(null);
    options.setExperimentalOption("prefs", prefs);
    options.addArguments("--disable-dev-shm-usage");
    options.addArguments("--disable-extensions");
    options.addArguments("--disable-gpu");
    options.addArguments("--headless");
    options.addArguments("--no-proxy-server");
    options.addArguments("--no-sandbox");
    options.addArguments("--proxy-bypass-list=*");
    options.addArguments("--proxy-server=");
    options.addArguments("--proxy-server='direct://'");
    options.addArguments("--start-maximized");
    options.addArguments("--window-size=1280,720");
    return options;
}
Here are the commands that are run through Jenkins on the EC2 instance:
#!/bin/bash
cd /home/ec2-user/Git/REPOS_NAME # Navigate to where the repos are at on the EC2.
git fetch --all
git reset --hard origin/master
git pull # Pull down the latest.
sudo docker container ls # Display the running containers.
sudo docker exec 0967d39b1967 bash -c 'cd /Selenium/ ; rm -rf REPO_ROOT_FOLDER/ ; exit' # Delete the automated test folder so that it gets copied over fresh.
sudo docker cp /home/ec2-user/Git/REPO_NAME/REPO_ROOT_FOLDER 0967d39b1967:/Selenium/ # Copy the latest from the Github Repository to the Docker container.
sudo docker exec 0967d39b1967 bash -c 'cd /Selenium/REPO_ROOT_FOLDER/ANOTHER_FOLDER/driver ; chmod 0777 chromedriver ; cd .. ; mvn clean test' # Activate the Testrunner that is enabled in pom.xml. This kicks off the automated tests.
Finally, here are the version numbers and OS details of what I'm running:
Amazon EC2: CentOS 6.10
Docker Container OS: Ubuntu 16.04
Selenium: selenium-server-standalone-3.141.59.jar
Chromedriver: 84.0.4147.30
Google Chrome: 84.0.4147.105
Java: OpenJDK 1.8.0_252
Cucumber: 2.0.0
I cannot provide links to an HTML Page that I'm scraping because it's all inside of a Salesforce environment.
EC2 Specs:
AWS T2.XLarge
4 vCPUs
64-Bit
16 GB RAM
54 CPU Credits per hour
Laptop Specs:
Intel Core i7-8665U @ 1.90GHz, 2112 MHz, 4 cores, 8 Logical Processors
16 GB RAM
Windows 10
Group of Tests 1:
EC2: 50 minutes
Laptop through Eclipse: 20 minutes
Group of Tests 2:
EC2: 2 hours 43 minutes
Laptop through Eclipse: 1 hour 15 minutes
Those are the timing breakdowns for the two groups of tests. As you can see, the times on the EC2 instance are much longer than those on the laptop.
Finally, here is a log of the set of tests that take 50 minutes to run. I had to change some of the directory and path names to protect the client.
Started by user Scott-IM
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on EC2_INSTANCE in workspace /var/lib/jenkins/workspace/Run mvn test
[Run mvn test] $ /bin/bash /tmp/jenkins166122467778007381.sh
Fetching origin
FIPS mode initialized
HEAD is now at 506f4ef Setting proxy to null for Chromeoptions
FIPS mode initialized
Already up to date.
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0967d39b1967 IMAGE_NAME "/Selenium/setup.sh" 4 hours ago Up 4 hours 0.0.0.0:8080->8080/tcp CONTAINER_NAME
Picked up JAVA_TOOL_OPTIONS: -Dfile.encoding=UTF8
[INFO] Scanning for projects...
[INFO]
[INFO] --------------< test:automation >--------------
[INFO] Building test-automation 1.0-SNAPSHOT
[INFO] --------------------------------[ jar ]---------------------------------
[INFO]
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ automation ---
[INFO] Deleting /Selenium/automation/automation/target
[INFO]
[INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ automation ---
[WARNING] Using platform encoding (UTF8 actually) to copy filtered resources, i.e. build is platform dependent!
[INFO] skip non existing resourceDirectory /Selenium/automation/automation/src/main/resources
[INFO]
[INFO] --- maven-compiler-plugin:3.1:compile (default-compile) @ automation ---
[INFO] No sources to compile
[INFO]
[INFO] --- maven-resources-plugin:2.6:testResources (default-testResources) @ automation ---
[WARNING] Using platform encoding (UTF8 actually) to copy filtered resources, i.e. build is platform dependent!
[INFO] skip non existing resourceDirectory /Selenium/automation/automation/src/test/resources
[INFO]
[INFO] --- maven-compiler-plugin:3.1:testCompile (default-testCompile) @ automation ---
[INFO] Changes detected - recompiling the module!
[WARNING] File encoding has not been set, using platform encoding UTF8, i.e. build is platform dependent!
[INFO] Compiling 111 source files to /Selenium/salesforce-automation/salesforce-automation/target/test-classes
[WARNING] /Selenium/automation/automation/src/test/java/com/test/mapper/OIDP_Flow/OIDP_Email_Auto_Approval_Flow_2.java:[797,128] unmappable character for encoding UTF8
[WARNING] /Selenium/automation/automation/src/test/java/com/test/mapper/OIDP_Flow/OIDP_Email_Auto_Approval_Flow_2.java:[1842,130] unmappable character for encoding UTF8
[WARNING] /Selenium/automation/automation/src/test/java/com/test/shared/ExtentReporter.java:[126,33] unmappable character for encoding UTF8
[WARNING] /Selenium/automation/automation/src/test/java/com/test/shared/ExtentReporter.java:[127,32] unmappable character for encoding UTF8
[WARNING] /Selenium/automation/automation/src/test/java/com/test/mapper/rsiaa/HD_ISO_VSC_Auto_Approval_of_Resubmitted_Si.java:[4977,95] unmappable character for encoding UTF8
[WARNING] /Selenium/automation/automation/src/test/java/com/test/pages/aao/TrackingRecord.java:[27,39] unmappable character for encoding UTF8
[WARNING] /Selenium/automation/automation/src/test/java/com/test/pages/aao/TrackingRecord.java:[117,84] unmappable character for encoding UTF8
[WARNING] /Selenium/automation/automation/src/test/java/com/test/pages/aao/TrackingRecord.java:[156,92] unmappable character for encoding UTF8
[WARNING] /Selenium/automation/automation/src/test/java/com/test/pages/aao/TrackingRecord.java:[195,84] unmappable character for encoding UTF8
[WARNING] /Selenium/automation/automation/src/test/java/restApiPOC/PrivacyRestApiConnection.java: Some input files use unchecked or unsafe operations.
[WARNING] /Selenium/automation/automation/src/test/java/restApiPOC/PrivacyRestApiConnection.java: Recompile with -Xlint:unchecked for details.
[INFO]
[INFO] --- maven-surefire-plugin:2.19:test (default-test) @ automation ---
[WARNING] The parameter forkMode is deprecated since version 2.14. Use forkCount and reuseForks instead.
-------------------------------------------------------
T E S T S
-------------------------------------------------------
Picked up JAVA_TOOL_OPTIONS: -Dfile.encoding=UTF8
Running com.test.runner.TestRunner2
Starting ChromeDriver 84.0.4147.30 (48b3e868b4cc0aa7e8149519690b6f6949e110a8-refs/branch-heads/4147@{#310}) on port 23815
Only local connections are allowed.
Please see https://chromedriver.chromium.org/security-considerations for suggestions on keeping ChromeDriver safe.
ChromeDriver was started successfully.
[1604696323.888][SEVERE]: bind() failed: Cannot assign requested address (99)
Nov 06, 2020 8:58:44 PM org.openqa.selenium.remote.ProtocolHandshake createSession
INFO: Detected dialect: W3C
Frame count :3
Frame count :4
Frame count :6
Newly Generated Service Item Number is :15355359
Frame count :3
Frame count :3
Clicked on new service item :15355359
Frame count :2
Frame count :2
Frame count :4
Frame count :3
Frame count :3
1 Scenarios (1 passed)
33 Steps (33 passed)
44m57.274s
Tests run: 0, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2,698.236 sec - in com.salesforcetest.runner.TestRunner2
Results :
Tests run: 0, Failures: 0, Errors: 0, Skipped: 0
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 45:05 min
[INFO] Finished at: 2020-11-06T21:43:40Z
[INFO] ------------------------------------------------------------------------
Finished: SUCCESS
If anything was unclear or didn't make sense, please let me know and I will address the issue. I have looked all over the Internet for solutions and haven't found anything yet. I'm almost at a loss as to what to do, as I'm not entirely sure what the exact cause of the slowness is. If anyone could provide some clues or things to try, I would be more than willing to give it a shot.
Thanks much for any and all help.
Not a solution per se, but throwing some things out there worth trying.
Since you're comparing Chrome run times, I would advise trying to keep the versions in sync. That's tough on Windows because you often have no control over when Chrome updates. This isn't to say it would solve your problem, but it would help eliminate a variable.
Chrome (and chromedriver, for that matter) goes through a decent amount of changes; not being on the same version can make, and has made, a difference for me.
Here are a few of the Chrome options I've had to resort to using over the past couple of years to get things running smoothly. I noticed 3 you didn't have.
--disable-extensions -- You never know what they might have on their local version of chrome.
--dns-prefetch-disable -- script would hang without this in older versions of chromedriver. https://bugs.chromium.org/p/chromedriver/issues/detail?id=402#c128
--enable-features=NetworkService,NetworkServiceInProcess -- script freezing https://groups.google.com/forum/m/#!topic/chromedriver-users/ktp-s_0M5NM[21-40]
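For reference, here's a minimal sketch of what the question's setupChromeOptions() might look like with those flags folded in (a trimmed version of the original method for brevity; whether these particular flags help in this environment would still need to be verified):

import java.util.HashMap;
import java.util.Map;

import org.openqa.selenium.chrome.ChromeOptions;

public class ChromeOptionsFactory {

    // Sketch: the question's setup method plus the flags suggested above.
    public static ChromeOptions setupChromeOptions() {
        ChromeOptions options = new ChromeOptions();
        Map<String, Object> prefs = new HashMap<String, Object>();
        prefs.put("profile.default_content_setting_values.notifications", 2);
        options.setExperimentalOption("prefs", prefs);
        options.setHeadless(true);
        options.addArguments("--disable-dev-shm-usage");
        options.addArguments("--no-sandbox");
        options.addArguments("--window-size=1280,720");
        // Flags suggested in this answer:
        options.addArguments("--disable-extensions");
        options.addArguments("--dns-prefetch-disable");
        options.addArguments("--enable-features=NetworkService,NetworkServiceInProcess");
        return options;
    }
}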
I am trying to configure DSE on a newly configured CentOS 7 VM, but I am getting the error "Attempting to configure dse-full 5.1.0, but found a different version installed. Upgrades and downgrades aren't supported." (opscd-pool-4). I am not able to understand why I am getting this error when the machine is completely new; I haven't even run any DSE command.
ERROR: Received error from node event-subtype="MeldError" job-id="a38724e1-2139-45f5-9266-079638c2ca2e" name="cassandra-5" ssh-management-address="192.168.159.175" node-id="dafe635a-6e98-4ae6-b0ea-6afa0da51731" event-type="error" message="Attempting to configure dse-full 5.1.0, but found a different version installed. Upgrades and downgrades aren't supported." (opscd-pool-4)
I am using OpsCenter to configure the node.
Here is the detailed LCM log:
2017-11-29 05:38:37,753 [opscenterd] INFO: configure job started for node name="cassandra-5" ssh-management-address="192.168.138.237" node-id="dafe635a-6e98-4ae6-b0ea-6afa0da51731" (async-thread-macro-32)
2017-11-29 05:38:37,776 [opscenterd] INFO: Trying to establish ssh connection name="cassandra-5" ssh-management-address="192.168.138.237" node-id="dafe635a-6e98-4ae6-b0ea-6afa0da51731" node-name="cassandra-5" job-id="4fae4fe1-ca3c-4924-abdb-62c4cf4ad878" (async-thread-macro-32)
2017-11-29 05:38:38,515 [opscenterd] INFO: Received milestone from node name="cassandra-5" ssh-management-address="192.168.138.237" node-id="dafe635a-6e98-4ae6-b0ea-6afa0da51731" message="Uploaded facts to OpsCenter server" job-id="4fae4fe1-ca3c-4924-abdb-62c4cf4ad878" (opscd-pool-0)
2017-11-29 05:38:40,135 [opscenterd] ERROR: Received error from node event-subtype="MeldError" job-id="4fae4fe1-ca3c-4924-abdb-62c4cf4ad878" name="cassandra-5" ssh-management-address="192.168.138.237" node-id="dafe635a-6e98-4ae6-b0ea-6afa0da51731" event-type="error" message="Attempting to configure dse-full 5.1.0, but found a different version installed. Upgrades and downgrades aren't supported." (opscd-pool-7)
2017-11-29 05:38:40,161 [opscenterd] ERROR: Configure job 4fae4fe1-ca3c-4924-abdb-62c4cf4ad878 failed! (async-thread-macro-33)
2017-11-29 05:38:41,102 [opscenterd] INFO: configure job finished for node name="cassandra-5" ssh-management-address="192.168.138.237" node-id="dafe635a-6e98-4ae6-b0ea-6afa0da51731" (async-thread-macro-32)
Here is the node info:
[root@li1639-135 ~]# dpkg -l dse-full
-bash: dpkg: command not found
[root@li1639-135 ~]# yum info dse-full
Loaded plugins: fastestmirror
base | 3.6 kB 00:00
extras | 3.4 kB 00:00
updates | 3.4 kB 00:00
(1/4): base/7/x86_64/group_gz | 156 kB 00:00
(2/4): extras/7/x86_64/primary_db | 130 kB 00:00
(3/4): base/7/x86_64/primary_db | 5.7 MB 00:00
(4/4): updates/7/x86_64/primary_db | 3.6 MB 00:00
Determining fastest mirrors
* base: mirrors.linode.com
* extras: mirrors.linode.com
* updates: mirrors.linode.com
Error: No matching Packages to list
Job ID 4fae4fe1-ca3c-4924-abdb-62c4cf4ad878
11/29/2017, 5:38:40AM UTC ERROR - MELDERROR Attempting to configure dse-full 5.1.0, but found a different version installed. Upgrades and downgrades aren't supported.
11/29/2017, 5:38:40AM UTC SHELL-COMMAND - RESULT Finished executing command: rpm -qa | grep -E ^dse-full-[[:digit:]] | grep 5.1.0
11/29/2017, 5:38:39AM UTC SHELL-COMMAND - INVOCATION Invoked command: rpm -qa | grep -E ^dse-full-[[:digit:]] | grep 5.1.0
11/29/2017, 5:38:39AM UTC CHECK - IS-PACKAGE-INSTALLED Checking if package dse-full is installed with version 5.1.0
11/29/2017, 5:38:39AM UTC CHANGE - PACKAGE-PROXY Not using proxy
11/29/2017, 5:38:38AM UTC MILESTONE - UPLOADED-FACTS Uploaded facts to OpsCenter server
11/29/2017, 5:38:38AM UTC SHELL-COMMAND - INVOCATION Invoked command: if [ -x "$(which yum)" ] && [ -f "/etc/redhat-release" -o -f "/etc/SuSE-release" -o -f "/etc/system-release" ]; then echo -n "yum"; elif [ -x "$(which...
Updated Answer
I was able to sync up with Ranjeet offline and found that the logs posted above were the result of configure jobs, which require that DSE is already installed. When running install jobs, things proceeded as expected.
There were also some issues with newly supported platforms and platform checks working in confusing ways, but none of that is reflected in the logs for the original post in this question.
Original Answer
OpsCenter/LCM engineer here, I work on the provisioning features.
"Attempting to configure dse-full 5.1.0, but found a different version installed. Upgrades and downgrades aren't supported." The meaning of the error message seems pretty clear. You're asking OpsCenter/LCM to install/configure DSE 5.1.0. Are you positive that you don't have a different version already installed?
On apt-based target machines, you can check what version of DSE is installed with 'dpkg -l dse-full'
On yum-based target machines, you can check what version of DSE is installed with 'yum info dse-full'
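As a quick sketch, the checks side by side (the rpm line mirrors the command LCM itself ran in the job log above):
# Debian/Ubuntu (apt-based) nodes
dpkg -l dse-full
# RHEL/CentOS (yum-based) nodes
yum info dse-full
# roughly the check LCM runs to verify the installed version
rpm -qa | grep -E '^dse-full-[[:digit:]]' | grep 5.1.0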
If you're really trying to install DSE 5.1.0, but a different version is already present on your nodes, you'll have to upgrade/downgrade outside OpsCenter LCM and can then resume managing configs with LCM after the desired version is installed. See http://docs.datastax.com/en/upgrade/doc/upgrade/datastax_enterprise/upgrdDSE.html
If you're attempting to install some other version (which matches what's already installed), then you'll have to clone your config profile and set the correct DSE version when you create the new CP. See: https://support.datastax.com/hc/en-us/articles/212267063-Lifecycle-Manager-Cloning-Configuration-Profiles
If you believe the error from OpsCenter/LCM is mistaken, and that you don't really have a different version of DSE installed on the target nodes, then we'll need more log snippets from LCM with the events leading up to the error, and information about how you confirmed the DSE version on all nodes.
I'm setting up an Angular 4 SPA with automatic testing in Jenkins CI. The SPA is part of a larger, Maven-managed project, so the build is also Maven-managed. So far I've:
Installed the NodeJS plugin on Jenkins, using install from nodejs.org with version 8.6.0
Configured "Global npm packages to install" = "karma-cli phantomjs-prebuilt jasmine-core karma-jasmine karma-phantomjs-launcher karma-junit-reporter karma-coverage"
Added the "maven-karma-plugin" in pom.xml with browsers=PhantomJS / singleRun=true / reporters=dots,junit
Enabled "Provide Node & npm bin/ folder to PATH" on the Jenkins job configuration
The build process starts up quite ok, but eventually I get:
[INFO] --- maven-karma-plugin:1.6:start (default) @ webclient ---
[INFO] Executing Karma Test Suite ...
/var/lib/jenkins/tools/jenkins.plugins.nodejs.tools.NodeJSInstallation/Node.js_8.6.0/bin/karma start /var/lib/jenkins/workspace/funnel_build/webclient/karma.conf.js --browsers PhantomJS --reporters dots,junit --single-run
07 10 2017 17:07:52.801:ERROR [config]: Error in config file!
{ Error: Cannot find module 'karma-jasmine'
at Function.Module._resolveFilename (module.js:527:15)
at Function.Module._load (module.js:476:23)
at Module.require (module.js:568:17)
at require (internal/module.js:11:18)
at module.exports (/var/lib/jenkins/workspace/funnel_build/webclient/karma.conf.js:9:7)
at Object.parseConfig (/var/lib/jenkins/tools/jenkins.plugins.nodejs.tools.NodeJSInstallation/Node.js_8.6.0/lib/node_modules/karma/lib/config.js:410:5)
The npm install at the very beginning of the build logs:
$ /var/lib/jenkins/tools/jenkins.plugins.nodejs.tools.NodeJSInstallation/Node.js_8.6.0/bin/npm install -g karma-cli phantomjs-prebuilt jasmine-core karma-jasmine karma-phantomjs-launcher karma-junit-reporter karma-coverage
/var/lib/jenkins/tools/jenkins.plugins.nodejs.tools.NodeJSInstallation/Node.js_8.6.0/bin/karma -> /var/lib/jenkins/tools/jenkins.plugins.nodejs.tools.NodeJSInstallation/Node.js_8.6.0/lib/node_modules/karma-cli/bin/karma
/var/lib/jenkins/tools/jenkins.plugins.nodejs.tools.NodeJSInstallation/Node.js_8.6.0/bin/phantomjs -> /var/lib/jenkins/tools/jenkins.plugins.nodejs.tools.NodeJSInstallation/Node.js_8.6.0/lib/node_modules/phantomjs-prebuilt/bin/phantomjs
> phantomjs-prebuilt@2.1.15 install /var/lib/jenkins/tools/jenkins.plugins.nodejs.tools.NodeJSInstallation/Node.js_8.6.0/lib/node_modules/phantomjs-prebuilt
> node install.js
Considering PhantomJS found at /var/lib/jenkins/tools/jenkins.plugins.nodejs.tools.NodeJSInstallation/Node.js_8.6.0/bin/phantomjs
Looks like an `npm install -g`
Could not link global install, skipping...
Download already available at /tmp/phantomjs/phantomjs-2.1.1-linux-x86_64.tar.bz2
Verified checksum of previously downloaded file
Extracting tar contents (via spawned process)
Removing /var/lib/jenkins/tools/jenkins.plugins.nodejs.tools.NodeJSInstallation/Node.js_8.6.0/lib/node_modules/phantomjs-prebuilt/lib/phantom
Copying extracted folder /tmp/phantomjs/phantomjs-2.1.1-linux-x86_64.tar.bz2-extract-1507388835905/phantomjs-2.1.1-linux-x86_64 -> /var/lib/jenkins/tools/jenkins.plugins.nodejs.tools.NodeJSInstallation/Node.js_8.6.0/lib/node_modules/phantomjs-prebuilt/lib/phantom
Writing location.js file
Done. Phantomjs binary available at /var/lib/jenkins/tools/jenkins.plugins.nodejs.tools.NodeJSInstallation/Node.js_8.6.0/lib/node_modules/phantomjs-prebuilt/lib/phantom/bin/phantomjs
npm WARN karma-jasmine@1.1.0 requires a peer of karma@* but none was installed.
npm WARN karma-junit-reporter@1.2.0 requires a peer of karma@>=0.9 but none was installed.
npm WARN karma-phantomjs-launcher@1.0.4 requires a peer of karma@>=0.9 but none was installed.
+ karma-phantomjs-launcher@1.0.4
+ karma-coverage@1.1.1
+ karma-jasmine@1.1.0
+ karma-cli@1.0.1
+ karma-junit-reporter@1.2.0
+ jasmine-core@2.8.0
+ phantomjs-prebuilt@2.1.15
updated 7 packages in 10.553s
(The reason the package 'karma' is currently not on the list is that I read somewhere that karma-cli should be used in place of karma. Adding the 'karma' package doesn't change anything, however.)
Any idea why that "Cannot find module 'karma-jasmine'" error pops up? In item (2) above you'll see that the karma-jasmine package is listed; I can find it on the server, but it's still not found by the NodeJS plugin.
Thanks, Simon
I managed to get it to work by running "npm install" as part of the build process, and then running everything from local npm packages.
The entire setup is described here: https://funneltravel.wordpress.com/2017/10/16/running-karma-with-maven-on-jenkins-ci/
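A rough sketch of that approach (package names taken from the question; the exact commands are illustrative, see the linked post for the full setup): install the Karma packages locally in the web client project so karma.conf.js resolves them from node_modules instead of the global Jenkins NodeJS install:

# Jenkins "Execute shell" step, run before the Maven/Karma build (illustrative)
cd webclient
npm install --save-dev karma karma-jasmine karma-phantomjs-launcher karma-junit-reporter karma-coverage jasmine-core phantomjs-prebuilt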
I have a Java Maven project that I deploy to Bluemix using cf push. Works like a charm. It has this manifest:
applications:
- services:
  - Monitoring and Analytics-gm
  - somedb
  disk_quota: 1024M
  hosts:
  - someapp
  name: someapp
  path: target/someapp-0.0.2.war
  domain: mybluemix.net
  instances: 1
  memory: 512M
However when I push my repository to hub.jazz.net and kick off build and deploy, the deploy step fails. I checked the artifacts in the build step and the war file got created.
The error message is:
Server error, status code: 400, error code: 170004, message: App staging failed in the buildpack compile phase
What am I missing?
Update
The last lines from the successful build script:
[INFO] Packaging webapp
[INFO] Assembling webapp [someapp] in [/home/jenkins/workspace/8c791c21-d195-9b03-f3ab-1c2cb5a8a9b4/0d82aa76-8fb2-463b-b1d6-6ec80a763706/target/someapp-0.0.2]
[INFO] Processing war project
[INFO] Copying webapp resources [/home/jenkins/workspace/8c791c21-d195-9b03-f3ab-1c2cb5a8a9b4/0d82aa76-8fb2-463b-b1d6-6ec80a763706/src/main/webapp]
[INFO] Webapp assembled in [56 msecs]
[INFO] Building war: /home/jenkins/workspace/8c791c21-d195-9b03-f3ab-1c2cb5a8a9b4/0d82aa76-8fb2-463b-b1d6-6ec80a763706/target/someapp-0.0.2.war
[INFO] WEB-INF/web.xml already added, skipping
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 11.424 s
[INFO] Finished at: 2015-09-15T10:59:52+00:00
[INFO] Final Memory: 17M/27M
[INFO] ------------------------------------------------------------------------
Uploading artifacts ...
UPLOAD SUCCESSFUL
Total time: 2 seconds
Finished: SUCCESS
As you can see, the target/someapp-0.0.2.war is built, which is the one referred to in the deploy script.
The last lines from the failed deploy script:
cf --version
/usr/bin/cf-orig/cf version 6.7.0-IDS-2014-12-04T10:56:46+00:00
+ echo 'Target: https://api.ng.bluemix.net'
Target: https://api.ng.bluemix.net
+ source _deploy.sh
++ cf push someapp
Updating app someapp in org someuser@ibm.com / space somespace as someuser@ibm.com...
OK
Uploading someapp...
Uploading app files from: /home/jenkins/workspace/8c791c21-d195-9b03-f3ab-1c2cb5a8a9b4/7c0cd4a8-8ecf-4020-ae82-fc567dd666e9
Uploading 1.9M, 37 files
Done uploading
OK
Stopping app app someapp in org someuser@ibm.com / space somespace as someuser@ibm.com...
OK
Starting app someapp in org someuser@ibm.com / space somespace as someuser@ibm.com...
-----> Downloaded app package (3.9M)
-----> Downloaded app buildpack cache (4.0K)
FAILED
Server error, status code: 400, error code: 170004, message: App staging failed in the buildpack compile phase
TIP: use 'cf logs someapp --recent' for more information
Build step 'Execute shell' marked build as failure
Finished: FAILURE
Log from cf logs someapp --recent:
2015-09-15T22:53:49.75+0800 [API/5] OUT Updated app with guid 0ac55e94-12b6-490c-99a9-22dfd96ef293 ({"name"=>"someapp"})
2015-09-15T22:54:06.06+0800 [API/5] OUT Updated app with guid 0ac55e94-12b6-490c-99a9-22dfd96ef293 ({"state"=>"STOPPED"})
2015-09-15T22:54:09.68+0800 [DEA/3] OUT Got staging request for app with id 0ac55e94-12b6-490c-99a9-22dfd96ef293
2015-09-15T22:54:15.04+0800 [API/6] OUT Updated app with guid 0ac55e94-12b6-490c-99a9-22dfd96ef293 ({"state"=>"STARTED"})
2015-09-15T22:54:15.22+0800 [STG/3] OUT -----> Downloaded app package (3.9M)
2015-09-15T22:54:15.40+0800 [STG/3] OUT -----> Downloaded app buildpack cache (4.0K)
2015-09-15T22:54:15.84+0800 [STG/0] OUT -----> Liberty Buildpack Version: v1.22-20150824-1104
2015-09-15T22:54:15.84+0800 [STG/0] ERR E, [2015-09-15T14:54:15.846523 #56] ERROR -- /var/vcap/data/dea_next/admin_buildpacks/b1841a6c-5f84-4c40-ac86-9f4d5e8f0643_e788f7b61c5fadd2fec138a1417cd3e1d345df32/lib/liberty_buildpack/buildpack.rb:50:in `rescue in drive_buildpack_with_logger': Compile failed with exception #<RuntimeError: No supported application type was detected>
2015-09-15T22:54:15.84+0800 [STG/0] ERR No supported application type was detected
2015-09-15T22:54:15.85+0800 [STG/0] OUT Staging failed: Buildpack compilation step failed
2015-09-15T22:54:16.70+0800 [API/6] ERR encountered error: App staging failed in the buildpack compile phase
I tried:
no path: in manifest.yml
path: target/someapp-0.0.2.war (that works on local cf push)
path: someapp-0.0.2.war
None of them worked
Aaarrgghh..... 5 hours of my life gone.
I deleted the project and recreated it. When checking the Build Archive Directory, it had the entry target (it seems to get added when you select mvn). Despite the fact that I tried with path: someapp-0.0.2.war, that didn't work.
Only after removing target and setting path: target/someapp-0.0.2.war did the now clean project build.
So, lesson learned: when switching to an mvn build, remove the target entry from the Build Archive Directory.
Yeah, usually the error you got means it wasn't able to find the source for your app, or the source for the app is wrong... You could also try adding the following line to manifest.yml:
buildpack: liberty-for-java
Your new manifest would be:
applications:
- services:
  - Monitoring and Analytics-gm
  - somedb
  disk_quota: 1024M
  hosts:
  - someapp
  name: someapp
  path: target/someapp-0.0.2.war
  buildpack: liberty-for-java
  domain: mybluemix.net
  instances: 1
  memory: 512M
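For a quick test without editing the manifest, the buildpack can also be forced on the command line (a sketch using the standard cf push flags; app name and path taken from the question):
cf push someapp -b liberty-for-java -p target/someapp-0.0.2.war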
Vanilla neo4j 2.2.0 install on an Ubuntu 14.04 system via their repository.
The instructions given on the neo4j-spatial website don't work. Does anyone have something that works no-questions-asked?
Is there some binary repository where I can just apt-get install neo4j-spatial and not deal with this mess?
Thanks!
$ mvn install
[INFO] Scanning for projects...
[INFO] ------------------------------------------------------------------------
[INFO] Building Neo4j - Spatial Components
[INFO] task-segment: [install]
[INFO] ------------------------------------------------------------------------
[INFO] ------------------------------------------------------------------------
[ERROR] BUILD ERROR
[INFO] ------------------------------------------------------------------------
[INFO] Error building POM (may not be this project's POM).
Project ID: org.neo4j:neo4j-cypher-compiler-2.2
Reason: Error getting POM for 'org.neo4j:neo4j-cypher-compiler-2.2' from the repository: Unable to read local copy of metadata: Cannot read metadata from '/home/ubuntu/.m2/repository/org/neo4j/neo4j-cypher-compiler-2.2/2.2-SNAPSHOT/maven-metadata-oss.sonatype.org.xml': end tag name </body> must match start tag name <hr> from line 5 (position: TEXT seen ...</center>\r\n</body>... #6:8)
org.neo4j:neo4j-cypher-compiler-2.2:pom:2.2-SNAPSHOT
for project org.neo4j:neo4j-cypher-compiler-2.2
The Neo4j-Spatial Project was updated and released for Neo4j 2.2.
The readme was also updated with the new binary download links.