I'm running a web application on OpenShift using WildFly 8.1 and I would like to change the default timezone of my application to America/Sao_Paulo (GMT-3).
There is currently a three-hour difference between my computer and the server running the application.
I want my computer and the server to share the same date and time.
Server: Tue Nov 11 14:42:19 EST 2014
My computer: Tue Nov 11 17:43:47 BRST 2014
While I got the majority of this answer from JBoss with UTC timezone, it still isn't as simple as "try this"...
To get this working on OpenShift you need to change how Java is started on your gear by creating a [deploy action hook](https://developers.openshift.com/en/getting-started-modifying-applications.html). This lets you change how your application is deployed (started), so you should be able to append the solution from the link above to the "start" command that the cartridge executes. For JBoss, that's https://github.com/Nick-Harvey/origin-server/blob/master/cartridges/openshift-origin-cartridge-jbossas/bin/control
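As a sketch of what such a hook could look like (assuming the cartridge honors a JAVA_OPTS_EXT-style variable; the exact variable name may differ, so verify against your cartridge's bin/control script):

```shell
#!/bin/bash
# .openshift/action_hooks/pre_start (a sketch, not tested on a live gear).
# Sets both the process timezone and the JVM default timezone so WildFly
# and the application agree on America/Sao_Paulo.
export TZ="America/Sao_Paulo"
export JAVA_OPTS_EXT="${JAVA_OPTS_EXT} -Duser.timezone=America/Sao_Paulo"
```

Remember to make the hook executable (`chmod +x`) before pushing, or OpenShift will silently skip it.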
I have an angular app that is using karma for tests. I am also using gitlab-ci to automate building and deploying the app.
Recently we wanted to add tests to the pipeline, using our own image with chrome.
Running it in the pipeline produces an error related to not being able to connect to the chrome process:
31 12 2018 10:58:36.116:INFO [karma]: Karma v1.7.1 server started at http://0.0.0.0:9877/
31 12 2018 10:58:36.121:INFO [launcher]: Launching browser ChromeKarma with unlimited concurrency
31 12 2018 10:58:36.134:INFO [launcher]: Starting browser ChromeHeadless
31 12 2018 10:59:36.146:WARN [launcher]: ChromeHeadless have not captured in 60000 ms, killing.
31 12 2018 10:59:36.163:INFO [launcher]: Trying to start ChromeHeadless again (1/2).
31 12 2018 11:00:36.223:WARN [launcher]: ChromeHeadless have not captured in 60000 ms, killing.
31 12 2018 11:00:36.236:INFO [launcher]: Trying to start ChromeHeadless again (2/2).
31 12 2018 11:01:36.296:WARN [launcher]: ChromeHeadless have not captured in 60000 ms, killing.
31 12 2018 11:01:36.310:ERROR [launcher]: ChromeHeadless failed 2 times (timeout). Giving up.
Running the same commands locally in the same Docker image (starting a container from the same image and running the same commands), I do not get this error, and the tests run fine.
After some searching I tried adding other flags besides --no-sandbox. This is my current browser configuration:
customLaunchers: {
  ChromeKarma: {
    base: 'ChromeHeadless',
    // We must disable the Chrome sandbox when running Chrome inside Docker (Chrome's sandbox needs
    // more permissions than Docker allows by default)
    flags: [
      '--disable-web-security',
      '--disable-gpu',
      '--no-sandbox',
      '--remote-debugging-port=9222'
    ]
  }
},
I've also tried adding a sleep to the pipeline's list of commands, then connecting to the container and running the tests manually. That does not produce the error, and the tests run fine.
Docker version is: Docker version 17.05.0-ce, build 89658be
I should also mention that while inside the container, I ran ps ax and saw the Chrome processes start and stay up until Karma killed them.
Solved this issue myself. Inside our network we use a proxy for internet access. It turns out this stops Chrome from connecting to the Karma web server. I had to unset the proxy to get it working. Alternatively, without removing the proxy, you can add the following flags to the Karma launcher:
'--proxy-bypass-list=*',
'--proxy-server=\'http://<my org proxy server>:8080\''
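For reference, a launcher configuration combining the sandbox and proxy workarounds might look like this (proxy.example.com:8080 is a placeholder for your organization's proxy host):

```javascript
// karma.conf.js excerpt -- a sketch; adjust the proxy address to your network.
customLaunchers: {
  ChromeKarma: {
    base: 'ChromeHeadless',
    flags: [
      '--no-sandbox',
      '--disable-gpu',
      // Keep Chrome from routing its connection back to the Karma server
      // (0.0.0.0:9877 above) through the corporate proxy.
      '--proxy-bypass-list=*',
      "--proxy-server='http://proxy.example.com:8080'"
    ]
  }
},
```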
I'm getting this warning in Jenkins logs on start.
Feb 25, 2017 9:32:40 PM hudson.WebAppMain$3 run
INFO: Jenkins is fully up and running
--> setting agent port for jnlp
--> setting agent port for jnlp... done
Feb 25, 2017 9:32:58 PM org.jenkinsci.plugins.workflow.cps.CpsFlowExecution getCurrentHeads
WARNING: List of flow heads unset for CpsFlowExecution[null], perhaps due to broken storage
Feb 25, 2017 9:32:58 PM org.jenkinsci.plugins.workflow.cps.CpsFlowExecution getCurrentHeads
WARNING: List of flow heads unset for CpsFlowExecution[null], perhaps due to broken storage
Feb 25, 2017 9:48:02 PM jenkins.branch.MultiBranchProject$BranchIndexing run
INFO: bible-server #20170225.214800 branch indexing action completed: SUCCESS in 2.4 sec
workflow-cps, which seems to be the cause, is part of the well-known Pipeline plugin, which I am using.
https://wiki.jenkins-ci.org/display/JENKINS/Pipeline+Plugin
It doesn't seem to have any unwanted side effects other than this annoying warning in the logs.
Does anyone have ideas on how to fix this?
I was seeing the same thing. Looking at the plugin's source code, it appears this is related to a pipeline run that completed abnormally.
In my case, I had a run of a pipeline that ran all the way through but got a Java exception trying to send mail because the VM lost network connectivity. Once I deleted that failed run (not the pipeline itself), I stopped seeing those warnings in the logs.
I'm having a problem setting up the ec2-plugin to connect a Jenkins master with an "on-demand" slave EC2 instance.
This is a log from Jenkins:
INFO: Connecting to <EC2_PUBLIC_DNS> on port 22, with timeout 10000.
Sep 06, 2016 9:54:53 PM null
INFO: Connected via SSH.
Sep 06, 2016 9:54:54 PM null
WARNING: Authentication failed. Trying again...
Sep 06, 2016 9:55:24 PM null
INFO: Authenticating as docker-client
Sep 06, 2016 9:55:25 PM null
INFO: Connecting to <EC2_PUBLIC_DNS> on port 22, with timeout 10000.
Sep 06, 2016 9:55:25 PM null
INFO: Connected via SSH.
Sep 06, 2016 9:55:26 PM null
On the other hand, I'm able to connect from Jenkins master to slave and vice versa via ssh command without any problem.
Any idea what might be the issue?
Thanks in advance,
Bakir
After a long investigation, it turned out the problem was that my Unix user docker-client didn't have the public SSH key (from the keypair) in:
/home/docker-client/.ssh/authorized_keys
but instead it was in
/home/ubuntu/.ssh/authorized_keys
In the ec2-plugin config section in Jenkins, I have the PEM key (from the keypair) specified, but I'm also trying to connect as docker-client (not ubuntu).
Even though I had passwordless access between the Jenkins master and the docker-client user, that didn't take precedence; the PEM key was used (without success, for the now-obvious reason).
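A hedged sketch of the fix: copy the keypair's public key into the connect user's authorized_keys. SRC_HOME and DST_HOME stand in for /home/ubuntu and /home/docker-client; on a real instance the commands need root, plus a chown to the target user.

```shell
#!/bin/bash
# Sketch only: install the keypair's public key for the user Jenkins connects as.
# SRC_HOME/DST_HOME default to temp dirs for a dry run; on the real instance
# they would be /home/ubuntu and /home/docker-client (run with sudo, then
# chown -R docker-client:docker-client the .ssh directory).
SRC_HOME="${SRC_HOME:-$(mktemp -d)}"
DST_HOME="${DST_HOME:-$(mktemp -d)}"
mkdir -p "$SRC_HOME/.ssh" "$DST_HOME/.ssh"
# Seed a placeholder key when doing a dry run outside the real instance.
[ -f "$SRC_HOME/.ssh/authorized_keys" ] || echo "ssh-rsa AAAA-placeholder" > "$SRC_HOME/.ssh/authorized_keys"
cp "$SRC_HOME/.ssh/authorized_keys" "$DST_HOME/.ssh/authorized_keys"
# sshd refuses keys with permissive modes, so tighten them.
chmod 700 "$DST_HOME/.ssh"
chmod 600 "$DST_HOME/.ssh/authorized_keys"
```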
Make sure the SSH key you added in the Jenkins EC2 plugin is the same one used for connecting to the master instance.
EDIT 2
Okay, it turns out this has nothing to do with TFS or MSBuild. This is entirely a problem with SonarQube: the SonarQube service is the one that sends back the 401 (Unauthorized) status, not TFS. Since I run 5.4, I have no clue how to specify a SonarQube user, because in Jenkins both of those fields are greyed out.
I am running Jenkins as a Windows service, and about two hours ago the service made a successful build. Now, seemingly out of nowhere, Jenkins keeps reporting 401 (Unauthorized) no matter which build job I try to start.
All the jobs follow the same steps:
1. Start the SonarQube scanner
2. Run MSBuild
3. Run a rebuild command so SonarQube can read the analysis
4. Run the "End SonarQube analysis" step, which collects the data for our SonarQube portal
What I don't understand is that the last change I made to anything was deleting a file from an ASP project, and now none of the jobs work, even those that have nothing to do with that project. All the projects are stored on our Team Foundation Server (not hosted locally).
The only thing that really changed was that we wanted the IPs of the Jenkins and SonarQube services to be accessible outside the server they are hosted on, so we made two sites in the local IIS and pointed DNS entries at them. Reading the error log, I first see status 302 (a redirect) before reaching the 401. When I go to "Configure Jenkins" I am told that my proxy settings failed... or something along those lines.
Any idea what might cause this behaviour?
EDIT
Here is a part of the error log:
INFO: SCM changes detected in CSharp Build Job. Triggering #1
Apr 20, 2016 11:04:08 AM com.microsoft.tfs.core.config.httpclient.DefaultHTTPClientFactory logHTTPClientConfiguration
INFO: HttpClient configured for https://omitted.visualstudio.com/, authenticating as it#omitted.dk
Apr 20, 2016 11:04:10 AM com.microsoft.tfs.core.ws.runtime.client.SOAPService executeSOAPRequestInternal
INFO: SOAP method='GetRegistrationEntries', status=302, content-length=0, server-wait=1164 ms, parse=0 ms, total=1164 ms, throughput=0 B/s, uncompressed
Apr 20, 2016 11:04:11 AM com.microsoft.tfs.core.httpclient.HttpMethodDirector processWWWAuthChallenge
INFO: Failure authenticating with BASIC #omitted.visualstudio.com:443
Apr 20, 2016 11:04:11 AM com.microsoft.tfs.core.ws.runtime.client.SOAPService executeSOAPRequestInternal
INFO: SOAP method='GetRegistrationEntries', status=401, content-length=0, server-wait=578 ms, parse=0 ms, total=578 ms, throughput=0 B/s, uncompressed
Apr 20, 2016 11:04:11 AM com.microsoft.tfs.core.TFSTeamProjectCollection getServerDataProvider
WARNING: Error getting data provider
com.microsoft.tfs.core.exceptions.TFSUnauthorizedException: Access denied connecting to TFS server https://omitted.visualstudio.com/ (authenticating as it#omitted.dk)
Try changing the server URL from https://omitted.visualstudio.com/ to https://omitted.visualstudio.com/DefaultCollection, and use a personal access token or alternate credentials for the username and password. Check the screenshot below:
Okay, I have found the solution. It's so stupid I can't believe I didn't think of it.
In my SonarQube.Analysis.xml found at
...Jenkins\.jenkins\tools\hudson.plugins.sonar.MsBuildSQRunnerInstallation\MSBuild_2.0\SonarQube.Analysis.xml
I remembered that the username and password were written there, and at some point I had changed them in SonarQube from the default values to something else. That broke all the builds.
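For reference, the credential properties in question sit in SonarQube.Analysis.xml and look roughly like this (the values are placeholders and must match what is configured on the SonarQube server):

```xml
<!-- SonarQube.Analysis.xml (excerpt) - a sketch; values are placeholders. -->
<SonarQubeAnalysisProperties xmlns="http://www.sonarsource.com/msbuild/integration/2015/1">
  <Property Name="sonar.host.url">http://localhost:9000</Property>
  <Property Name="sonar.login">my-sonar-user</Property>
  <Property Name="sonar.password">my-sonar-password</Property>
</SonarQubeAnalysisProperties>
```

If these values ever change on the server, every job that relies on this file starts failing with 401, which is exactly the symptom above.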
I am having a hard time tracing an issue and hope someone can help. We have a Joomla site along with ApnsPHP that already sends push messages for one app. We have a second app using a different PEM. Only the first message is sent out; then there is no answer from the Apple push server and everything hangs until the timeout ends the request.
The same site runs on two other servers, one Windows and one OS X machine. Both send out messages with the same code/PEM/tokens successfully. It is the client's OS X Mac Mini Server that is failing.
This is what I get on the Client machine:
Tue, 15 Dec 2015 17:02:55 +0100 ApnsPHP[42117]: INFO: Trying tls://gateway.push.apple.com:2195...
Tue, 15 Dec 2015 17:02:56 +0100 ApnsPHP[42117]: INFO: Connected to tls://gateway.push.apple.com:2195.
Tue, 15 Dec 2015 17:02:56 +0100 ApnsPHP[42117]: INFO: Sending messages queue, run #1: 1 message(s) left in queue.
Tue, 15 Dec 2015 17:02:56 +0100 ApnsPHP[42117]: STATUS: Sending message ID 1 [custom identifier: CYD-Badge-1] (1/3): 157 bytes.