Unable to start SonarQube, getting error in terminal - macOS

I am trying to set up SonarQube on my M1 Mac.
I have followed the steps from: here
I have also installed JDK 11.
But I am getting this error:
Running SonarQube...
wrapper | --> Wrapper Started as Console
wrapper | Launching a JVM...
jvm 1 | Wrapper (Version 3.2.3) http://wrapper.tanukisoftware.org
jvm 1 | Copyright 1999-2006 Tanuki Software, Inc. All Rights Reserved.
jvm 1 |
jvm 1 |
jvm 1 | WARNING - Unable to load the Wrapper's native library because none of the
jvm 1 | following files:
jvm 1 | libwrapper-macosx-aarch64-64.dylib
jvm 1 | libwrapper-macosx-universal-64.dylib
jvm 1 | libwrapper.dylib
jvm 1 | could be located on the following java.library.path:
jvm 1 | /Applications/SonarQube/bin/macosx-universal-64/./lib
jvm 1 | Please see the documentation for the wrapper.java.library.path
jvm 1 | configuration property.
jvm 1 | System signals will not be handled correctly.
jvm 1 |
jvm 1 | 2022.08.02 16:08:58 INFO app[][o.s.a.AppFileSystem] Cleaning or creating temp directory /Applications/SonarQube/temp
jvm 1 | 2022.08.02 16:08:58 INFO app[][o.s.a.es.EsSettings] Elasticsearch listening on [HTTP: 127.0.0.1:9001, TCP: 127.0.0.1:58182]
jvm 1 | 2022.08.02 16:08:59 INFO app[][o.s.a.ProcessLauncherImpl] Launch process[ELASTICSEARCH] from [/Applications/SonarQube/elasticsearch]: /Applications/SonarQube/elasticsearch/bin/elasticsearch
jvm 1 | 2022.08.02 16:08:59 INFO app[][o.s.a.SchedulerImpl] Waiting for Elasticsearch to be up and running
jvm 1 | Exception in thread "main" java.lang.UnsupportedOperationException: The Security Manager is deprecated and will be removed in a future release
jvm 1 | at java.base/java.lang.System.setSecurityManager(System.java:416)
jvm 1 | at org.elasticsearch.bootstrap.Security.setSecurityManager(Security.java:99)
jvm 1 | at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:70)
jvm 1 | 2022.08.02 16:08:59 WARN app[][o.s.a.p.AbstractManagedProcess] Process exited with exit value [ElasticSearch]: 1
jvm 1 | 2022.08.02 16:08:59 INFO app[][o.s.a.SchedulerImpl] Process[ElasticSearch] is stopped
jvm 1 | 2022.08.02 16:08:59 INFO app[][o.s.a.SchedulerImpl] SonarQube is stopped
jvm 1 | 2022.08.02 16:08:59 ERROR app[][o.s.a.p.EsManagedProcess] Failed to check status
jvm 1 | org.elasticsearch.ElasticsearchException: java.lang.InterruptedException
jvm 1 | at org.elasticsearch.client.RestHighLevelClient.performClientRequest(RestHighLevelClient.java:2695)
jvm 1 | at org.elasticsearch.client.RestHighLevelClient.internalPerformRequest(RestHighLevelClient.java:2171)
jvm 1 | at org.elasticsearch.client.RestHighLevelClient.performRequest(RestHighLevelClient.java:2137)
jvm 1 | at org.elasticsearch.client.RestHighLevelClient.performRequestAndParseEntity(RestHighLevelClient.java:2105)
jvm 1 | at org.elasticsearch.client.ClusterClient.health(ClusterClient.java:151)
jvm 1 | at org.sonar.application.es.EsConnectorImpl.getClusterHealthStatus(EsConnectorImpl.java:64)
jvm 1 | at org.sonar.application.process.EsManagedProcess.checkStatus(EsManagedProcess.java:92)
jvm 1 | at org.sonar.application.process.EsManagedProcess.checkOperational(EsManagedProcess.java:77)
jvm 1 | at org.sonar.application.process.EsManagedProcess.isOperational(EsManagedProcess.java:62)
jvm 1 | at org.sonar.application.process.ManagedProcessHandler.refreshState(ManagedProcessHandler.java:223)
jvm 1 | at org.sonar.application.process.ManagedProcessHandler$EventWatcher.run(ManagedProcessHandler.java:288)
jvm 1 | Caused by: java.lang.InterruptedException: null
jvm 1 | at java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1048)
jvm 1 | at org.elasticsearch.common.util.concurrent.BaseFuture$Sync.get(BaseFuture.java:243)
jvm 1 | at org.elasticsearch.common.util.concurrent.BaseFuture.get(BaseFuture.java:75)
jvm 1 | at org.elasticsearch.client.RestHighLevelClient.performClientRequest(RestHighLevelClient.java:2692)
jvm 1 | ... 10 common frames omitted
wrapper | <-- Wrapper Stopped
I have checked lots of Stack Overflow questions and solutions about this, but I still can't figure it out.
I would really appreciate your help.

Related

Sonarqube upgrade from 7.4-community to 7.9 JVM ERROR

I am trying to upgrade SonarQube from the 7.4-community Docker version to the 7.9-community version, but I am getting this error when I run the DB upgrade via http://sonar_IP:9000/setup. The server has enough memory as well.
Do you have any idea about this error?
I have set a docker-compose parameter to change the Java virtual machine memory, but it does not seem to work.
docker-compose.yml parameter:
- SONAR_RUNNER_OPTS="-Xmx9216m -XX:MaxPermSize=512m -XX:ReservedCodeCacheSize=128m"
Docker log when SonarQube starts:
sonarqube_1 | 2020.06.15 07:19:25 INFO es[][o.e.n.Node] JVM arguments [-XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.io.tmpdir=/opt/sonarqube/temp, -XX:ErrorFile=../logs/es_hs_err_pid%p.log, -Des.enforce.bootstrap.checks=true, -Xms512m, -Xmx512m, -XX:+HeapDumpOnOutOfMemoryError, -Des.path.home=/opt/sonarqube/elasticsearch, -Des.path.conf=/opt/sonarqube/temp/conf/es, -Des.distribution.flavor=default, -Des.distribution.type=tar]
sonarqube_1 | 2020.06.15 07:01:04 INFO web[][DbMigrations] Executing DB migrations...
sonarqube_1 | 2020.06.15 07:01:04 INFO web[][DbMigrations] #2800 'Truncate environment variables and system properties from existing scanner reports'...
sonarqube_1 | java.lang.OutOfMemoryError: Java heap space
sonarqube_1 | Dumping heap to java_pid122.hprof ...
sonarqube_1 | Heap dump file created [483151698 bytes in 0.928 secs]
sonarqube_1 | 2020.06.15 07:01:09 ERROR web[][DbMigrations] #2800 'Truncate environment variables and system properties from existing scanner reports': failure | time=4629ms
sonarqube_1 | 2020.06.15 07:01:09 ERROR web[][DbMigrations] Executed DB migrations: failure | time=4638ms
sonarqube_1 | 2020.06.15 07:01:09 ERROR web[][o.s.s.p.d.m.DatabaseMigrationImpl] Container restart failed | time=4831ms
sonarqube_1 | 2020.06.15 07:01:09 ERROR web[][o.s.s.p.d.m.DatabaseMigrationImpl] Container restart failed
sonarqube_1 | java.lang.OutOfMemoryError: Java heap space
sonarqube_1 | at org.postgresql.jdbc.PgPreparedStatement.setBytes(PgPreparedStatement.java:339)
sonarqube_1 | at org.apache.commons.dbcp2.DelegatingPreparedStatement.setBytes(DelegatingPreparedStatement.java:306)
sonarqube_1 | at org.apache.commons.dbcp2.DelegatingPreparedStatement.setBytes(DelegatingPreparedStatement.java:306)
sonarqube_1 | at org.sonar.server.platform.db.migration.step.BaseSqlStatement.setBytes(BaseSqlStatement.java:93)
sonarqube_1 | at org.sonar.server.platform.db.migration.step.UpsertImpl.setBytes(UpsertImpl.java:30)
sonarqube_1 | at org.sonar.server.platform.db.migration.version.v79.TruncateEnvAndSystemVarsFromScannerContext.truncateScannerContext(TruncateEnvAndSystemVarsFromScannerContext.java:55)
sonarqube_1 | at org.sonar.server.platform.db.migration.version.v79.TruncateEnvAndSystemVarsFromScannerContext$$Lambda$1225/0x0000000100790040.handle(Unknown Source)
sonarqube_1 | at org.sonar.server.platform.db.migration.step.MassUpdate.callSingleHandler(MassUpdate.java:118)
sonarqube_1 | at org.sonar.server.platform.db.migration.step.MassUpdate.lambda$execute$0(MassUpdate.java:92)
sonarqube_1 | at org.sonar.server.platform.db.migration.step.MassUpdate$$Lambda$1226/0x0000000100790440.handle(Unknown Source)
sonarqube_1 | at org.sonar.server.platform.db.migration.step.SelectImpl.scroll(SelectImpl.java:79)
sonarqube_1 | at org.sonar.server.platform.db.migration.step.MassUpdate.execute(MassUpdate.java:92)
Yes, correct, Isaac. Thanks for the reply.
I managed to resolve it by changing the JVM parameters related to Elasticsearch, the Compute Engine, and the web server in the sonar.properties file. Then the upgrade was successful.
I managed to upgrade SonarQube from version 6.5 to 8.3.1.
Web Server:
sonar.web.javaOpts=-Xmx4096m -Xms4096m -XX:+HeapDumpOnOutOfMemoryError
Compute Engine:
sonar.ce.javaOpts=-Xmx4096m -Xms4096m -XX:+HeapDumpOnOutOfMemoryError
Elasticsearch:
sonar.search.javaOpts=-Xms2048m -Xmx2048m -XX:+HeapDumpOnOutOfMemoryError
Note that in the first line of the log, SonarQube's Elasticsearch starts with "-Xms512m, -Xmx512m", so your SONAR_RUNNER_OPTS variable is not taking effect.
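Since this instance runs under docker-compose, one way to apply sonar.properties changes like those is to mount a customized file into the container. A minimal sketch only, assuming a SonarQube 7.9 image layout; the service name and host-side path are illustrative, and the mounted file would contain the sonar.*.javaOpts lines shown above:

```yaml
version: "3"
services:
  sonarqube:
    image: sonarqube:7.9-community
    ports:
      - "9000:9000"
    volumes:
      # assumed host path; the file holds the sonar.web/ce/search.javaOpts settings
      - ./conf/sonar.properties:/opt/sonarqube/conf/sonar.properties
```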

Jenkins failure build: Internal Error: Unhandled kpi type with performance plugin

I'm running Jenkins with the Performance Plugin.
I have multiple JMeter .jmx scripts that run on Jenkins, and I'm trying to add this one, but the build always fails with this message:
Internal Error: Unhandled kpi type: <type 'long'>
The bzt installation is also done.
I can't seem to find much info about this on Google. Any help?
After the shutdown I'm getting this:
19:42:27 INFO: Shutting down...
19:42:27 INFO: Post-processing...
19:42:29 INFO: Test duration: 0:47:04
19:42:29 INFO: Samples count: 3200, 3.25% failures
19:42:29 INFO: Average times: total 4.369, latency 0.000, connect 0.000
19:42:29 INFO: Percentiles:
+---------------+---------------+
| Percentile, % | Resp. Time, s |
+---------------+---------------+
| 0.0 | 0.258 |
| 50.0 | 3.251 |
| 90.0 | 8.799 |
| 95.0 | 14.375 |
| 99.0 | 24.239 |
| 99.9 | 30.031 |
| 100.0 | 35.743 |
+---------------+---------------+
19:42:29 INFO: Request label stats:
+----------------------+--------+--------+--------+---------------------------------------------------------------------+
| label | status | succ | avg_rt | error |
+----------------------+--------+--------+--------+---------------------------------------------------------------------+
| Click_ToonSelectie | FAIL | 94.00% | 4.984 | Number of samples in transaction : 4, number of failing samples : 1 |
| | | | | Non HTTP response message: Connection timed out: connect |
| | | | | Not Modified |
| FilterBrand | FAIL | 99.00% | 0.832 | Number of samples in transaction : 1, number of failing samples : 1 |
| LoadFilterPage | FAIL | 96.00% | 6.306 | Non HTTP response message: Connection timed out: connect |
| | | | | Number of samples in transaction : 3, number of failing samples : 1 |
| OpenRandomCarDetails | FAIL | 96.00% | 5.355 | Number of samples in transaction : 1, number of failing samples : 1 |
| | | | | Moved Permanently |
+----------------------+--------+--------+--------+---------------------------------------------------------------------+
19:42:29 INFO: Dumping final status as XML: aggregate-results.xml
19:42:29 ERROR: Internal Error: Unhandled kpi type: <type 'long'>
19:42:29 INFO: Artifacts dir: C:\Users\Kristof\.jenkins\workspace\ACC-Tweedehands\2018-07-19_18-55-13.298000
19:42:29 WARNING: Done performing with code: 1
Build step 'Run Performance Test' changed build result to FAILURE
Finished: FAILURE
First 2 lines of JTL:
<?xml version="1.0" encoding="UTF-8"?>
<testResults version="1.2">
If you're using the Taurus tool, my expectation is that you should be providing the kpi.jtl CSV file instead of a .jtl results file in XML format.
I cannot reproduce your issue using a normal kpi.jtl results file from Taurus and the latest Performance Plugin, version 3.10.
I can't reproduce your issue either; my output is totally different:
Started by user anonymous
Building on master in workspace /Users/jenkins/Projects/temp/Jenkins/.jenkins/workspace/Taurus
Performance: Recording JMeterCsv reports '/Users/jenkins/Projects/temp/Jenkins/.jenkins/jobs/Taurus/builds/2/temp/kpi.jtl'
Performance: Parsing JMeter report file '/Users/jenkins/Projects/temp/Jenkins/.jenkins/jobs/Taurus/builds/2/performance-reports/JMeterCSV/kpi.jtl'.
Performance: No threshold configured for making the test unstable
Performance: No threshold configured for making the test failure
Performance: File kpi.jtl reported 0.0% of errors [SUCCESS]. Build status is: SUCCESS
Finished: SUCCESS
See the How to Run Taurus with the Jenkins Performance Plugin article for comprehensive information and instructions, just in case.
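As a side note on the XML-vs-CSV point: JMeter's results format is controlled by its save-service settings, so XML .jtl output can be switched to the CSV format the plugin parses. A sketch of the relevant standard JMeter property, placed in user.properties (or jmeter.properties):

```properties
# write .jtl results files as CSV instead of XML
jmeter.save.saveservice.output_format=csv
```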

WSFW004 Access Denied for getUser method (UserWS)

[WSFW004] Access Denied. Access to this resource is prohibited. (system.useradmin.generic.VIEW)
I am encountering this Access Denied error whenever I call the UserWS.getUser() method from my building block.
The code snippet is as follows:
UserFilter uf = new UserFilter();
uf.setId(lstEnrolledIds);
uf.setFilterType(2); // GET_USER_BY_ID_WITH_AVAILABILITY
UserWS uWS = UserWSFactory.getUserWSForTool();
UserVO[] lstUserVO = uWS.getUser(uf);
The error details are:
INFO | jvm 1 | 2018/04/11 09:31:02 | SEVERE: Servlet.service() for servlet [jsp] in context with path [/webapps/ntu-hdlgrade-BBLEARN] threw exception [java.lang.RuntimeException: [WSFW004]<b>Access Denied</b><br>Access to this resource is prohibited. (system.useradmin.generic.VIEW)] with root cause
INFO | jvm 1 | 2018/04/11 09:31:02 | blackboard.platform.security.AccessException: <b>Access Denied</b><br>Access to this resource is prohibited. (system.useradmin.generic.VIEW)
INFO | jvm 1 | 2018/04/11 09:31:02 | at blackboard.platform.security.SecurityUtil.checkEntitlement(SecurityUtil.java:199)
INFO | jvm 1 | 2018/04/11 09:31:02 | at blackboard.platform.ws.AxisHelpers.logAndValidateMethodCallBefore(AxisHelpers.java:273)
INFO | jvm 1 | 2018/04/11 09:31:02 | at blackboard.platform.ws.WebServiceWrapper.invoke(WebServiceWrapper.java:146)
INFO | jvm 1 | 2018/04/11 09:31:02 | at com.sun.proxy.$Proxy939.getUser(Unknown Source)
INFO | jvm 1 | 2018/04/11 09:31:02 | at org.apache.jsp.process_005fpreview_jsp._jspService(process_005fpreview_jsp.java:134)
Resolved.
You can grant the "system.user.VIEW" entitlement in bb-manifest.xml.
Or you can grant the permission directly on your JSP page.
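For illustration, the entitlement is declared in the building block's bb-manifest.xml permissions section. This is a sketch only; the element and attribute names below are assumptions and should be verified against the Blackboard B2 manifest schema:

```xml
<!-- sketch: grant the entitlement needed by UserWS.getUser();
     verify element/attribute names against the bb-manifest.xml schema -->
<permissions>
  <permission type="webservice" name="system.user.VIEW"/>
</permissions>
```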

How to get thread information logs when calling publishOn

I am working on Schedulers in Reactive Streams and am using a Flux, applying a Scheduler to it via the publishOn method as follows:
System.out.println("*********Calling Concurrency************");
List<Integer> elements = new ArrayList<>();
Flux.range(1, 1000)
.log()
.map(i -> i * 2)
.publishOn(Schedulers.parallel())
//.subscribeOn(Schedulers.parallel())
.subscribe(elements::add);
System.out.println("-------------------------------------");
for which I got the following info logs:
*********Calling Concurrency************
[info] | onSubscribe([Synchronous Fuseable] FluxRange.RangeSubscription)
[info] | request(256)
[info] | onNext(1)
[info] | onNext(2)
[info] | onNext(3)
[info] | onNext(4)
[info] | onNext(5)
[info] | onNext(6)
[info] | onNext(7)
[info] | onNext(8)
[info] | onNext(9)
.....
.....
[info] | onNext(444)
[info] | onNext(445)
[info] | onNext(446)
[info] | onNext(447)
[info] | onNext(448)
-------------------------------------
[info] | request(192)
.....
.....
[info] | onNext(999)
[info] | onNext(1000)
[info] | onComplete()
[info] | request(192)
and there is no information about the thread execution and processing.
Also, sometimes 192 elements are requested and sometimes 256.
Here is the dependency I am using:
<dependency>
    <groupId>com.googlecode.slf4j-maven-plugin-log</groupId>
    <artifactId>slf4j-maven-plugin-log</artifactId>
    <version>1.0.0</version>
</dependency>
How can I get log information about the current/parallel thread execution?
Please suggest.
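On the alternating request(256) / request(192) sizes noted above: they are consistent with publishOn's default prefetch of 256, after which Reactor re-requests in batches of 75% of the prefetch once that much has been drained. A quick sketch of the arithmetic (the constant names here are illustrative, not Reactor API):

```python
# publishOn prefetches a bounded number of elements (default 256) and,
# once 75% of them have been consumed, requests that 75% again upstream.
PREFETCH = 256                       # Reactor's default small buffer size
limit = PREFETCH - (PREFETCH >> 2)   # replenish batch: 75% of the prefetch
print(PREFETCH, limit)               # matches request(256) and request(192)
```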
This looks like a logger misconfiguration on your side.
Reactor will pick up that you use SLF4J, but you still need a correctly configured logging implementation, such as Logback, with appenders that log the thread name.
If you don't want to bother with a logging framework at all, you can call Loggers.useConsoleLoggers() and Reactor will print to the console in a simplified format that includes the current thread name. I recommend doing that only for a one-shot run (like a debugging session), though, not in production code.
To get all the required information, just put .log() after .publishOn():
Flux.range(1, 1000)
.map(i -> i * 2)
.publishOn(Schedulers.parallel())
//.subscribeOn(Schedulers.parallel())
.log()
.blockLast();
The final output will look like this:
12:13:21.964 [main] DEBUG reactor.util.Loggers$LoggerFactory - Using Slf4j logging framework
12:13:21.997 [main] INFO reactor.Flux.PublishOn.1 - | onSubscribe([Fuseable] FluxPublishOn.PublishOnSubscriber)
12:13:21.999 [main] INFO reactor.Flux.PublishOn.1 - | request(unbounded)
12:13:22.002 [parallel-1] INFO reactor.Flux.PublishOn.1 - | onNext(2)
12:13:22.002 [parallel-1] INFO reactor.Flux.PublishOn.1 - | onNext(4)
12:13:22.002 [parallel-1] INFO reactor.Flux.PublishOn.1 - | onNext(6)
12:13:22.002 [parallel-1] INFO reactor.Flux.PublishOn.1 - | onNext(8)
12:13:22.002 [parallel-1] INFO reactor.Flux.PublishOn.1 - | onNext(10)
12:13:22.002 [parallel-1] INFO reactor.Flux.PublishOn.1 - | onNext(12)
so in that case the thread information will be there.
Note: the output may vary depending on the logger library used.

cudnn error :: CUDNN_STATUS_SUCCESS (1 vs. 0) CUDNN_STATUS_NOT_INITIALIZED

I am trying to install the open-source software "openpose", for which I needed to install CUDA, cuDNN and the NVIDIA drivers. The output of nvidia-smi is:
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 440.59 Driver Version: 440.59 CUDA Version: 10.2 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce 940MX Off | 00000000:01:00.0 Off | N/A |
| N/A 47C P8 N/A / N/A | 107MiB / 2004MiB | 7% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 1513 G /usr/lib/xorg/Xorg 63MiB |
| 0 1698 G /usr/bin/gnome-shell 41MiB |
+-----------------------------------------------------------------------------+
And the output of cat /usr/include/cudnn.h | grep CUDNN_MAJOR -A 2 is:
#define CUDNN_MAJOR 7
#define CUDNN_MINOR 6
#define CUDNN_PATCHLEVEL 5
After successfully installing all of the above software and libraries, I finally ran openpose with:
./build/examples/openpose/openpose.bin --video examples/media/video.avi
But the output was:
Starting OpenPose demo...
Configuring OpenPose...
Starting thread(s)...
Auto-detecting all available GPUs... Detected 1 GPU(s), using 1 of them starting at GPU 0.
F0214 01:02:35.327615 3433 cudnn_conv_layer.cpp:53] Check failed: status == CUDNN_STATUS_SUCCESS (1 vs. 0) CUDNN_STATUS_NOT_INITIALIZED
*** Check failure stack trace: ***
# 0x7fabb8f390cd google::LogMessage::Fail()
# 0x7fabb8f3af33 google::LogMessage::SendToLog()
# 0x7fabb8f38c28 google::LogMessage::Flush()
# 0x7fabb8f3b999 google::LogMessageFatal::~LogMessageFatal()
# 0x7fabb89459d3 caffe::CuDNNConvolutionLayer<>::LayerSetUp()
# 0x7fabb8a42308 caffe::Net<>::Init()
# 0x7fabb8a441e0 caffe::Net<>::Net()
# 0x7fabbaa2ccaa op::NetCaffe::initializationOnThread()
# 0x7fabbaa500a1 op::addCaffeNetOnThread()
# 0x7fabbaa51518 op::PoseExtractorCaffe::netInitializationOnThread()
# 0x7fabbaa57163 op::PoseExtractorNet::initializationOnThread()
# 0x7fabbaa4be61 op::PoseExtractor::initializationOnThread()
# 0x7fabbaa46a51 op::WPoseExtractor<>::initializationOnThread()
# 0x7fabbaa8aff1 op::Worker<>::initializationOnThreadNoException()
# 0x7fabbaa8b120 op::SubThread<>::initializationOnThread()
# 0x7fabbaa8d2d8 op::Thread<>::initializationOnThread()
# 0x7fabbaa8d4a7 op::Thread<>::threadFunction()
# 0x7fabba32566f (unknown)
# 0x7fabb9a476db start_thread
# 0x7fabb9d8088f clone
Aborted
I have gone through a lot of online discussions but could not figure out how to resolve this.
I have been having the same problem with cuDNN.
Although not ideal, I have been running without cuDNN: in cmake-gui, uncheck USE_CUDNN and then recompile. When running OpenPose, I have also had to reduce -net_resolution.
For example: ./build/examples/openpose/openpose.bin -net_resolution 256x192
The greater the resolution, the slower the FPS, though.
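For reference, the same toggle can be applied without cmake-gui by passing the option on the command line. A sketch only, assuming an already cloned OpenPose checkout with an existing build/ directory (paths are illustrative):

```shell
cd openpose/build
cmake -DUSE_CUDNN=OFF ..   # same effect as unchecking USE_CUDNN in cmake-gui
make -j"$(nproc)"          # rebuild Caffe/OpenPose without cuDNN
# then run with a reduced network resolution, as above
./examples/openpose/openpose.bin -net_resolution 256x192
```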