I started an AWS EC2 instance and it is running Jenkins. It was running fine, but it stopped on its own a few days ago.
I diagnosed the problem using dmesg and got the following error:
oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=/,mems_allowed=0,global_oom,task_memcg=/system.slice/jenkins.service,task=java,pid=1168,uid=993
[233949.461595] Out of memory: Killed process 1168 (java) total-vm:2251668kB, anon-rss:317444kB, file-rss:0kB, shmem-rss:0kB, UID:993
[233949.566855] oom_reaper: reaped process 1168 (java), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB
[ec2-user@ip-172-31-18-74 ~]$ ^C
[ec2-user@ip-172-31-18-74 ~]$ ps -f 1168
UID PID PPID C STIME TTY STAT TIME CMD
The solutions I found online say to edit jenkins.xml in /var/lib/jenkins, but I can't find jenkins.xml; instead I have config.xml:
<?xml version='1.1' encoding='UTF-8'?>
<hudson>
<disabledAdministrativeMonitors>
<string>jenkins.diagnostics.RootUrlNotSetMonitor</string>
</disabledAdministrativeMonitors>
<version>2.263.4</version>
<installStateName>RUNNING</installStateName>
<numExecutors>2</numExecutors>
<mode>NORMAL</mode>
<useSecurity>true</useSecurity>
<authorizationStrategy class="hudson.security.FullControlOnceLoggedInAuthorizationStrategy">
<denyAnonymousReadAccess>true</denyAnonymousReadAccess>
</authorizationStrategy>
<securityRealm class="hudson.security.HudsonPrivateSecurityRealm">
<disableSignup>true</disableSignup>
<enableCaptcha>false</enableCaptcha>
</securityRealm>
<disableRememberMe>false</disableRememberMe>
<projectNamingStrategy class="jenkins.model.ProjectNamingStrategy$DefaultProjectNamingStrategy"/>
<workspaceDir>${JENKINS_HOME}/workspace/${ITEM_FULL_NAME}</workspaceDir>
<buildsDir>${ITEM_ROOTDIR}/builds</buildsDir>
<jdks>
<jdk>
<name>Java</name>
<home>/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.282.b08-2.el8_3.x86_64</home>
<properties/>
</jdk>
</jdks>
<viewsTabBar class="hudson.views.DefaultViewsTabBar"/>
<myViewsTabBar class="hudson.views.DefaultMyViewsTabBar"/>
<clouds/>
<scmCheckoutRetryCount>0</scmCheckoutRetryCount>
<views>
<hudson.model.AllView>
<owner class="hudson" reference="../../.."/>
<name>all</name>
<filterExecutors>false</filterExecutors>
<filterQueue>false</filterQueue>
<properties class="hudson.model.View$PropertyList"/>
</hudson.model.AllView>
</views>
<primaryView>all</primaryView>
I am not sure how to decrease the heap size, or how else to resolve the problem.
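The dmesg output above is the kernel OOM killer terminating the Jenkins JVM, so the options are to cap the JVM heap or to give the instance more memory (or swap). On a Linux package install the JVM flags do not live in config.xml at all; they are normally set in the service configuration. Below is a minimal sketch, assuming Jenkins was installed from the distribution package (the exact file and variable name depend on the distro and Jenkins version):

# RPM-based installs (Amazon Linux, RHEL/CentOS) usually read:
#   /etc/sysconfig/jenkins   ->  JENKINS_JAVA_OPTIONS="-Djava.awt.headless=true -Xmx512m"
# Debian/Ubuntu installs usually read:
#   /etc/default/jenkins     ->  JAVA_ARGS="-Djava.awt.headless=true -Xmx512m"
# Newer systemd-managed installs can be overridden instead:
sudo systemctl edit jenkins          # add: [Service]  Environment="JAVA_OPTS=-Xmx512m"
sudo systemctl restart jenkins

# On a small instance, a swap file also helps keep the OOM killer at bay:
sudo dd if=/dev/zero of=/swapfile bs=1M count=1024
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile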
Related
I am trying to set up SonarQube on my MacBook, but I get the following error when I try to start it with sh sonar.sh console:
sudo sh sonar.sh console
Password:
/usr/bin/java
Running SonarQube...
Removed stale pid file: ./SonarQube.pid
INFO app[][o.s.a.AppFileSystem] Cleaning or creating temp directory /Applications/sonarqube/temp
INFO app[][o.s.a.es.EsSettings] Elasticsearch listening on [HTTP: 127.0.0.1:9001, TCP: 127.0.0.1:60506]
INFO app[][o.s.a.ProcessLauncherImpl] Launch process[ELASTICSEARCH] from [/Applications/sonarqube/elasticsearch]: /Applications/sonarqube/elasticsearch/bin/elasticsearch
INFO app[][o.s.a.SchedulerImpl] Waiting for Elasticsearch to be up and running
Exception in thread "main" java.lang.UnsupportedOperationException: The Security Manager is deprecated and will be removed in a future release
at java.base/java.lang.System.setSecurityManager(System.java:416)
at org.elasticsearch.bootstrap.Security.setSecurityManager(Security.java:99)
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:70)
2022.08.24 16:24:52 WARN app[][o.s.a.p.AbstractManagedProcess] Process exited with exit value [ElasticSearch]: 1
2022.08.24 16:24:52 INFO app[][o.s.a.SchedulerImpl] Process[ElasticSearch] is stopped
2022.08.24 16:24:52 INFO app[][o.s.a.SchedulerImpl] SonarQube is stopped
After some research on the internet I installed Java 11, but it has not helped.
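The exception comes from Elasticsearch calling System.setSecurityManager, which throws UnsupportedOperationException on JDK 18 and newer; the /usr/bin/java line in the output suggests sonar.sh is still picking up the system default JVM rather than the Java 11 that was installed. A small sketch of pointing the shell at JDK 11 before starting SonarQube (macOS specific; adjust the version to whatever your SonarQube release supports):

/usr/libexec/java_home -V                        # list the JDKs installed on macOS
export JAVA_HOME="$(/usr/libexec/java_home -v 11)"
export PATH="$JAVA_HOME/bin:$PATH"
java -version                                    # should now report 11.x
sh sonar.sh console                              # restart SonarQube with this JVM

Depending on the SonarQube version, the JVM can also be pinned in the wrapper configuration (conf/wrapper.conf, wrapper.java.command) so it no longer depends on whichever java is first on the PATH.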
I have been trying to understand an issue I've had when running the roribio16/alpine-sqs Docker image on one of my machines. Whenever I run the image without specifying any other settings (docker run roribio16/alpine-sqs), I get the following output:
[xxxx@yyyy ~]$ docker run roribio16/alpine-sqs
2021-05-29 15:48:41,216 INFO Included extra file "/etc/supervisor/conf.d/elasticmq.conf" during parsing
2021-05-29 15:48:41,216 INFO Included extra file "/etc/supervisor/conf.d/insight.conf" during parsing
2021-05-29 15:48:41,216 INFO Included extra file "/etc/supervisor/conf.d/sqs-init.conf" during parsing
2021-05-29 15:48:41,216 INFO Set uid to user 0 succeeded
2021-05-29 15:48:41,222 INFO RPC interface 'supervisor' initialized
2021-05-29 15:48:41,222 CRIT Server 'unix_http_server' running without any HTTP authentication checking
2021-05-29 15:48:41,222 INFO supervisord started with pid 1
2021-05-29 15:48:42,225 INFO spawned: 'sqs-init' with pid 9
2021-05-29 15:48:42,229 INFO spawned: 'elasticmq' with pid 10
2021-05-29 15:48:42,230 INFO spawned: 'insight' with pid 11
cp: can't stat '/opt/custom/*.conf': No such file or directory
> sqs-insight@0.3.0 start /opt/sqs-insight
> node index.js
15:48:42.605 [main] INFO org.elasticmq.server.Main$ - Starting ElasticMQ server (0.15.0) ...
Loading config file from "/opt/sqs-insight/lib/../config/config_local.json"
15:48:42.929 [elasticmq-akka.actor.default-dispatcher-2] INFO akka.event.slf4j.Slf4jLogger - Slf4jLogger started
Unable to load queues for undefined
Config contains 0 queues.
library initialization failed - unable to allocate file descriptor table - out of memorylistening on port 9325
2021-05-29 15:48:43,233 INFO success: sqs-init entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2021-05-29 15:48:43,233 INFO success: elasticmq entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2021-05-29 15:48:43,234 INFO success: insight entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2021-05-29 15:48:43,234 INFO exited: sqs-init (exit status 0; expected)
2021-05-29 15:48:44,318 INFO exited: elasticmq (terminated by SIGABRT (core dumped); not expected)
2021-05-29 15:48:45,322 INFO spawned: 'elasticmq' with pid 67
15:48:45.743 [main] INFO org.elasticmq.server.Main$ - Starting ElasticMQ server (0.15.0) ...
15:48:46.044 [elasticmq-akka.actor.default-dispatcher-2] INFO akka.event.slf4j.Slf4jLogger - Slf4jLogger started
library initialization failed - unable to allocate file descriptor table - out of memory2021-05-29 15:48:47,223 INFO success: elasticmq entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2021-05-29 15:48:47,389 INFO exited: elasticmq (terminated by SIGABRT (core dumped); not expected)
2021-05-29 15:48:48,393 INFO spawned: 'elasticmq' with pid 89
15:48:48.766 [main] INFO org.elasticmq.server.Main$ - Starting ElasticMQ server (0.15.0) ...
15:48:49.066 [elasticmq-akka.actor.default-dispatcher-3] INFO akka.event.slf4j.Slf4jLogger - Slf4jLogger started
library initialization failed - unable to allocate file descriptor table - out of memory^C2021-05-29 15:48:49,559 INFO success: elasticmq entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2021-05-29 15:48:49,559 WARN received SIGINT indicating exit request
2021-05-29 15:48:49,559 INFO waiting for insight, elasticmq to die
2021-05-29 15:48:49,566 INFO stopped: insight (terminated by SIGTERM)
2021-05-29 15:48:50,431 INFO stopped: elasticmq (terminated by SIGABRT (core dumped))
With a bit of googling I found a post where somebody had the same issue when running some other image, and they got it running by setting ulimits on the container, which also worked for me (docker run --ulimit nofile=122880:122880 roribio16/alpine-sqs).
I checked the ulimits set inside the container when I didn't use this flag:
docker exec -it ca bash
$ ulimit -a
and found that the nofile setting was ridiculously high, which I assume is what causes the container to run out of memory if too many files are opened simultaneously. I don't have a particularly good understanding of how this works, though, so I would appreciate any clarification on that topic as well.
Anyway, the point of that ramble is that I want to find where the default Docker container ulimits are set, because I don't understand why they are so high on the machine I am using. I have another machine that does not have this problem.
I can find lots of ways to change the default limits, but there does not seem to be much information about where these limits get set in the first place. According to the Docker documentation, if custom values are not set the ulimits should be inherited from my system, but as far as I can tell my system's nofile settings are much lower than what I'm seeing in the container.
(Both machines run Manjaro Linux; the one that doesn't have this issue uses XFCE and the one that does uses KDE.)
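For what it's worth, a container's default ulimits are inherited from the Docker daemon process itself, not from your login shell, so the place to look is usually the dockerd service definition. A diagnostic sketch, assuming dockerd is managed by systemd (the usual setup on Manjaro); the docker unit commonly sets LimitNOFILE=infinity, which on recent systemd versions translates into a very large number like the one seen in the container:

docker run --rm busybox sh -c 'ulimit -n'        # what any new container gets by default
systemctl show docker --property=LimitNOFILE     # limit systemd applies to dockerd
cat /proc/"$(pidof dockerd)"/limits | grep -i 'open files'

# The container default can also be pinned explicitly in /etc/docker/daemon.json
# (illustrative values):
#   {
#     "default-ulimits": {
#       "nofile": { "Name": "nofile", "Soft": 1024, "Hard": 4096 }
#     }
#   }
# then: sudo systemctl restart docker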
I got an Unknown Host (504) error when I tried to start Grails 3.3.2; the details are shown below:
$ grails --stacktrace --verbose
| Error Error occurred running Grails CLI: Unknown Host (504)
org.apache.http.client.HttpResponseException: Unknown Host (504)
at org.eclipse.aether.transport.http.HttpTransporter.handleStatus(HttpTransporter.java:466)
at org.eclipse.aether.transport.http.HttpTransporter.execute(HttpTransporter.java:291)
at org.eclipse.aether.transport.http.HttpTransporter.implPeek(HttpTransporter.java:231)
at org.eclipse.aether.spi.connector.transport.AbstractTransporter.peek(AbstractTransporter.java:43)
at org.eclipse.aether.connector.basic.BasicRepositoryConnector$PeekTaskRunner.runTask(BasicRepositoryConnector.java:376)
at org.eclipse.aether.connector.basic.BasicRepositoryConnector$TaskRunner.run(BasicRepositoryConnector.java:350)
at org.eclipse.aether.util.concurrency.RunnableErrorForwarder$1.run(RunnableErrorForwarder.java:67)
at org.eclipse.aether.connector.basic.BasicRepositoryConnector$DirectExecutor.execute(BasicRepositoryConnector.java:581)
at org.eclipse.aether.connector.basic.BasicRepositoryConnector.get(BasicRepositoryConnector.java:249)
at org.eclipse.aether.internal.impl.DefaultArtifactResolver.performDownloads(DefaultArtifactResolver.java:520)
at org.eclipse.aether.internal.impl.DefaultArtifactResolver.resolve(DefaultArtifactResolver.java:421)
at org.eclipse.aether.internal.impl.DefaultArtifactResolver.resolveArtifacts(DefaultArtifactResolver.java:246)
at org.eclipse.aether.internal.impl.DefaultArtifactResolver.resolveArtifact(DefaultArtifactResolver.java:223)
at org.apache.maven.repository.internal.DefaultArtifactDescriptorReader.loadPom(DefaultArtifactDescriptorReader.java:320)
at org.apache.maven.repository.internal.DefaultArtifactDescriptorReader.readArtifactDescriptor(DefaultArtifactDescriptorReader.java:217)
at org.eclipse.aether.internal.impl.DefaultDependencyCollector.resolveCachedArtifactDescriptor(DefaultDependencyCollector.java:535)
at org.eclipse.aether.internal.impl.DefaultDependencyCollector.getArtifactDescriptorResult(DefaultDependencyCollector.java:519)
at org.eclipse.aether.internal.impl.DefaultDependencyCollector.processDependency(DefaultDependencyCollector.java:409)
at org.eclipse.aether.internal.impl.DefaultDependencyCollector.processDependency(DefaultDependencyCollector.java:363)
at org.eclipse.aether.internal.impl.DefaultDependencyCollector.process(DefaultDependencyCollector.java:351)
at org.eclipse.aether.internal.impl.DefaultDependencyCollector.collectDependencies(DefaultDependencyCollector.java:254)
at org.eclipse.aether.internal.impl.DefaultRepositorySystem.resolveDependencies(DefaultRepositorySystem.java:341)
at org.springframework.boot.cli.compiler.grape.AetherGrapeEngine.resolve(AetherGrapeEngine.java:319)
at org.springframework.boot.cli.compiler.grape.AetherGrapeEngine.resolve(AetherGrapeEngine.java:301)
at org.springframework.boot.cli.compiler.grape.AetherGrapeEngine.resolve(AetherGrapeEngine.java:293)
at org.grails.cli.boot.GrailsDependencyVersions.<init>(GrailsDependencyVersions.groovy:53)
at org.grails.cli.boot.GrailsDependencyVersions.<init>(GrailsDependencyVersions.groovy:49)
at org.grails.cli.profile.repository.MavenProfileRepository.<init>(MavenProfileRepository.groovy:53)
at org.grails.cli.GrailsCli.createMavenProfileRepository(GrailsCli.groovy:334)
at org.grails.cli.GrailsCli.execute(GrailsCli.groovy:235)
at org.grails.cli.GrailsCli.main(GrailsCli.groovy:159)
| Error Error occurred running Grails CLI: Unknown Host (504)
Here is my version info:
$ grails -v
| Grails Version: 3.3.2
| Groovy Version: 2.4.13
| JVM Version: 1.8.0_121
Extra info: I am running it on a Mac behind a proxy. I've added the following line to my bin/grails script (/Users/foouser/.sdkman/candidates/grails/current/bin/grails):
export GRAILS_OPTS="-Dhttp.proxyHost=myHttpProxy -Dhttp.proxyPort=8000 -Dhttps.proxyHost=myHttpsProxy -Dhttps.proxyPort=443"
After some trial and error, I found that org.eclipse.aether.internal.impl.DefaultRepositorySystem is actually trying to read and parse my $HOME/.m2/settings.xml. Below is my original settings.xml:
<settings xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.1.0 http://maven.apache.org/xsd/settings-1.1.0.xsd"
xmlns="http://maven.apache.org/SETTINGS/1.1.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<proxies>
<proxy>
<id>proxy</id>
<active>true</active>
<protocol>http</protocol>
<host>myHttpProxy</host>
<port>8000</port>
</proxy>
<proxy>
<id>httpsproxy</id>
<active>true</active>
<protocol>https</protocol>
<host>myHttpProxy</host>
<port>443</port>
</proxy>
</proxies>
<servers>
<server>
<username>myname</username>
<password>mypassword</password>
<id>Artifactory</id>
</server>
</servers>
<mirrors>
<mirror>
<id>Artifactory</id>
<!-- Drive all repositories through the Artifactory mirror -->
<mirrorOf>*</mirrorOf>
<url>http://myartifactoryurl</url>
</mirror>
</mirrors>
</settings>
Unfortunately, it wasn't parsed well...
The current workaround I have (a bad one, though) is to replace this settings.xml with the default one from Maven, which has nothing configured in it.
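In case it helps isolate the failure: a 504 normally comes back from a proxy or from the Artifactory mirror rather than from the remote repository itself, so one diagnostic step (not a fix) is to keep the proxy entries but drop the catch-all mirror and see whether resolution starts working. A sketch using the same placeholder host names as above, after backing up the original file:

cp ~/.m2/settings.xml ~/.m2/settings.xml.bak

cat > ~/.m2/settings.xml <<'EOF'
<settings xmlns="http://maven.apache.org/SETTINGS/1.1.0">
  <proxies>
    <proxy>
      <id>proxy</id>
      <active>true</active>
      <protocol>http</protocol>
      <host>myHttpProxy</host>
      <port>8000</port>
    </proxy>
    <proxy>
      <id>httpsproxy</id>
      <active>true</active>
      <protocol>https</protocol>
      <host>myHttpProxy</host>
      <port>443</port>
    </proxy>
  </proxies>
</settings>
EOF

grails --stacktrace --verbose     # re-run; restore the backup when done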
I've followed a guide to run integration tests in RSpec with the elasticsearch-extensions gem, but the moment I run Elasticsearch::Extensions::Test::Cluster.start(port: 9250, nodes: 1, timeout: 120), the following error is thrown:
Starting 2 Elasticsearch nodes...Exception in thread "main" ElasticsearchException[Failed to load logging configuration]; nested: NoSuchFileException[/usr/share/elasticsearch/config];
Likely root cause: java.nio.file.NoSuchFileException: /usr/share/elasticsearch/config
at sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
at sun.nio.fs.UnixFileAttributeViews$Basic.readAttributes(UnixFileAttributeViews.java:55)
at sun.nio.fs.UnixFileSystemProvider.readAttributes(UnixFileSystemProvider.java:144)
at sun.nio.fs.LinuxFileSystemProvider.readAttributes(LinuxFileSystemProvider.java:99)
at java.nio.file.Files.readAttributes(Files.java:1737)
at java.nio.file.FileTreeWalker.getAttributes(FileTreeWalker.java:225)
at java.nio.file.FileTreeWalker.visit(FileTreeWalker.java:276)
at java.nio.file.FileTreeWalker.walk(FileTreeWalker.java:322)
at java.nio.file.Files.walkFileTree(Files.java:2662)
at org.elasticsearch.common.logging.log4j.LogConfigurator.resolveConfig(LogConfigurator.java:142)
at org.elasticsearch.common.logging.log4j.LogConfigurator.configure(LogConfigurator.java:103)
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:259)
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:35)
Refer to the log for complete error details.
Exception in thread "main" ElasticsearchException[Failed to load logging configuration]; nested: NoSuchFileException[/usr/share/elasticsearch/config];
Likely root cause: java.nio.file.NoSuchFileException: /usr/share/elasticsearch/config
at sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
at sun.nio.fs.UnixFileAttributeViews$Basic.readAttributes(UnixFileAttributeViews.java:55)
at sun.nio.fs.UnixFileSystemProvider.readAttributes(UnixFileSystemProvider.java:144)
at sun.nio.fs.LinuxFileSystemProvider.readAttributes(LinuxFileSystemProvider.java:99)
at java.nio.file.Files.readAttributes(Files.java:1737)
at java.nio.file.FileTreeWalker.getAttributes(FileTreeWalker.java:225)
at java.nio.file.FileTreeWalker.visit(FileTreeWalker.java:276)
at java.nio.file.FileTreeWalker.walk(FileTreeWalker.java:322)
at java.nio.file.Files.walkFileTree(Files.java:2662)
at org.elasticsearch.common.logging.log4j.LogConfigurator.resolveConfig(LogConfigurator.java:142)
at org.elasticsearch.common.logging.log4j.LogConfigurator.configure(LogConfigurator.java:103)
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:259)
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:35)
Refer to the log for complete error details.
I guess this is because I installed ES on my Ubuntu machine, which puts the config files in /etc/elasticsearch/ and is already running as a service. I tried to symlink the folder, but the folder permissions are:
drwxr-x--- 3 root elasticsearch 4,0K nov 8 17:32 elasticsearch
So I tried to add my user to the elasticsearch group, but I had no luck. Any ideas?
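One thing that may explain the lack of luck with the group approach (a side note, not necessarily the whole answer): a new group membership only applies to new login sessions, so the shell you were already in will still show the old groups. A small sketch:

sudo usermod -aG elasticsearch "$USER"    # add the current user to the group
id                                        # the current session will not show it yet
newgrp elasticsearch                      # open a shell with the new group, or log out and back in
ls -l /etc/elasticsearch/                 # note: files inside may still not be group-readable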
I ended up just copying the files and changing the owner:
sudo cp -r /etc/elasticsearch/ /usr/share/elasticsearch/config
sudo chown $(whoami):$(whoami) /usr/share/elasticsearch/config/*
Maybe not the best solution, but effective.
I am getting this error when running the app:
BUILD SUCCESSFUL
| Compiling 5 source files
| Compiling 5 source files.....
***
Metrics servlet injected into web.xml
Metrics Admin servlet-mapping (for /metrics/*) injected into web.xml
***
| Running Grails application
Error occurred during initialization of VM
Could not reserve enough space for object heap
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
| Error Forked Grails VM exited with error
I looked over the same errors on Stack Overflow and made adjustments to the memory, but I am still getting this error. I am new to Grails, but I believe everything is set up correctly, and I can run a sample app (helloworld).
Here are the changes I made:
-Xms512m -Xmx1g -XX:PermSize=256m
This is the memory:
free -m
             total       used       free     shared    buffers     cached
Mem:          2015       1297        718         56        169        859
-/+ buffers/cache:        267       1748
Checking whether the parameters are being read:
$ ps -ef | grep java
tomcat7 32158 1 1 11:02 ? 00:00:10 /usr/lib/jvm/default-java/bin/java -Djava.util.logging.config.file=/var/lib/tomcat7/conf/logging.properties -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager -Xms512m -Xmx1g -XX:PermSize=256m -XX:MaxPermSize=256m -Xms512m -Xmx1g -XX:+CMSClassUnloadingEnabled -XX:+CMSPermGenSweepingEnabled -XX:+UseConcMarkSweepGC -XX:MaxPermSize=256m -Djava.endorsed.dirs=/usr/share/tomcat7/endorsed -classpath /usr/share/tomcat7/bin/bootstrap.jar:/usr/share/tomcat7/bin/tomcat-juli.jar -Dcatalina.base=/var/lib/tomcat7 -Dcatalina.home=/usr/share/tomcat7 -Djava.io.tmpdir=/tmp/tomcat7-tomcat7-tmp org.apache.catalina.startup.Bootstrap start
I have this in my $HOME/.profile:
export JAVA_OPTS="-Xms512m -Xmx1g -XX:MaxPermSize=256m"
and this in my setenv.sh for Tomcat 7:
export CATALINA_OPTS="-Xms512m -Xmx1g \
-XX:+CMSClassUnloadingEnabled \
-XX:+CMSPermGenSweepingEnabled \
-XX:+UseConcMarkSweepGC \
-XX:MaxPermSize=256m"
I am not sure what to change now; it seems the memory should be adequate. Any thoughts would be great, thank you.
Forgot to add, in case it's needed:
$ grails -version
Grails version: 2.4.5
$ java -version
java version "1.7.0_95"
OpenJDK Runtime Environment (IcedTea 2.6.4) (7u95-2.6.4-0ubuntu0.14.04.1)
OpenJDK Client VM (build 24.95-b01, mixed mode, sharing)
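In case it helps: with Grails 2.x the "Forked Grails VM" gets its memory settings from grails.project.fork in grails-app/conf/BuildConfig.groovy, not from JAVA_OPTS or CATALINA_OPTS, and "Could not reserve enough space for object heap" usually means either that too little RAM is actually free (Tomcat above is already holding most of the 2 GB) or that a 32-bit JVM cannot reserve the requested contiguous heap; the "OpenJDK Client VM" banner suggests a 32-bit JVM. A rough diagnostic sketch, assuming Ubuntu 14.04 as implied by the package versions above:

java -version                          # "Client VM" usually indicates a 32-bit JVM
free -m                                # check how much is really free while Tomcat runs
sudo service tomcat7 stop              # temporarily free Tomcat's ~1 GB, then retry grails run-app

# If a 32-bit JVM is the culprit (and the OS itself is 64-bit), install and select a 64-bit JDK 7:
sudo apt-get install openjdk-7-jdk
sudo update-alternatives --config java

# Lowering the forked VM's memory is done in grails-app/conf/BuildConfig.groovy,
# e.g. (Groovy, illustrative values):
#   grails.project.fork = [run: [maxMemory: 512, minMemory: 64, debug: false, maxPerm: 128]]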