I have an Angular app that uses Karma for tests. I am also using GitLab CI to automate building and deploying the app.
Recently we wanted to add tests to the pipeline, using our own image with Chrome.
Running it in the pipeline produces an error related to Karma not being able to connect to the Chrome process:
31 12 2018 10:58:36.116:INFO [karma]: Karma v1.7.1 server started at http://0.0.0.0:9877/
31 12 2018 10:58:36.121:INFO [launcher]: Launching browser ChromeKarma with unlimited concurrency
31 12 2018 10:58:36.134:INFO [launcher]: Starting browser ChromeHeadless
31 12 2018 10:59:36.146:WARN [launcher]: ChromeHeadless have not captured in 60000 ms, killing.
31 12 2018 10:59:36.163:INFO [launcher]: Trying to start ChromeHeadless again (1/2).
31 12 2018 11:00:36.223:WARN [launcher]: ChromeHeadless have not captured in 60000 ms, killing.
31 12 2018 11:00:36.236:INFO [launcher]: Trying to start ChromeHeadless again (2/2).
31 12 2018 11:01:36.296:WARN [launcher]: ChromeHeadless have not captured in 60000 ms, killing.
31 12 2018 11:01:36.310:ERROR [launcher]: ChromeHeadless failed 2 times (timeout). Giving up.
Running the same commands locally in the same Docker image (starting a container from the same image and running the same commands), I do not get the error, and the tests run fine.
After some searching I tried adding other flags besides --no-sandbox. This is my current browser configuration:
customLaunchers: {
  ChromeKarma: {
    base: 'ChromeHeadless',
    // We must disable the Chrome sandbox when running Chrome inside Docker (Chrome's sandbox needs
    // more permissions than Docker allows by default)
    flags: [
      '--disable-web-security',
      '--disable-gpu',
      '--no-sandbox',
      '--remote-debugging-port=9222'
    ]
  }
},
I've also tried adding a sleep to the list of commands in the pipeline, and then connecting to the container and running the tests manually. This does not produce the error, and the tests run fine.
Docker version is: Docker version 17.05.0-ce, build 89658be
I should also mention that while inside the container, I ran ps ax and saw the Chrome processes starting and staying up until Karma killed them.
Solved this issue myself. Inside our network we use a proxy to access the internet. It turns out this stops Chrome from connecting to the Karma web server. I had to unset the proxy to get it to work. Another way to resolve this, without removing the proxy, would be to add the following flags to the Karma launcher:
'--proxy-bypass-list=*',
'--proxy-server=\'http://<my org proxy server>:8080\''
Related
I am currently working on a website that will run .NET code using this guide:
https://learn.microsoft.com/en-us/aspnet/core/host-and-deploy/linux-apache?view=aspnetcore-3.1
And this guide to install the .NET Core SDK (I installed 2.1 and 3.1 while troubleshooting):
https://learn.microsoft.com/en-ca/dotnet/core/install/linux-package-manager-fedora31
I am trying to configure the apache proxy server to send requests to the Kestrel server but I am having issues with my service at /etc/systemd/system/kestrel-helloapp.service.
My service's code is:
[Unit]
Description=Started service
[Service]
WorkingDirectory=/var/www/html/PublishedVersion
ExecStart=/usr/share/dotnet /var/www/html/PublishedVersion/Website.dll
KillSignal=SIGINT
SyslogIdentifier=dotnet-example
User=root
Environment=ASPNETCORE_ENVIRONMENT=Production
[Install]
WantedBy=multi-user.target
The service status is:
Mar 15 19:37:38 localhost.localdomain systemd[1]: Started service
Mar 15 19:37:38 localhost.localdomain systemd[1706]: kestrel-helloapp.service: Failed to execute command: Permission denied
Mar 15 19:37:38 localhost.localdomain systemd[1706]: kestrel-helloapp.service: Failed at step EXEC spawning /usr/share/dotnet: Permission denied
Mar 15 19:37:38 localhost.localdomain systemd[1]: kestrel-helloapp.service: Main process exited, code=exited, status=203/EXEC
Mar 15 19:37:38 localhost.localdomain systemd[1]: kestrel-helloapp.service: Failed with result 'exit-code'.
There are three main differences in my service code in comparison to the guide's code:
1st: I have removed the auto restart so it doesn't bog down my machine.
2nd: I have changed ExecStart=/usr/local/dotnet to ExecStart=/usr/share/dotnet, because my .NET installation isn't at that location for a reason that eludes me.
3rd: I have changed User=apache to User=root in an attempt to troubleshoot; the only user on my machine is root, as this machine is just for school purposes.
I have also changed the SELinux settings on my machine to permissive, and finally to disabled, while troubleshooting.
I'm still new to this and none of this was seen in class, so go easy on me.
Thank you for your time/answers.
SELinux restricts where systemd is allowed to execute files from. I'm not an expert in this field, but there is a workaround for this issue. You can edit /etc/selinux/config and change the line
SELINUX=enforcing to SELINUX=permissive,
then restart the system, and the service will start from systemd.
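The change amounts to a one-line edit. A sketch, shown here against a temporary copy of the file for safety (on the real system the file is /etc/selinux/config, and a reboot follows):

```shell
# Demonstrated on a temp copy; on the real machine, edit /etc/selinux/config
# directly and reboot afterwards.
cfg=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$cfg"
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' "$cfg"
```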
I'm getting this warning in Jenkins logs on start.
Feb 25, 2017 9:32:40 PM hudson.WebAppMain$3 run
INFO: Jenkins is fully up and running
--> setting agent port for jnlp
--> setting agent port for jnlp... done
Feb 25, 2017 9:32:58 PM org.jenkinsci.plugins.workflow.cps.CpsFlowExecution getCurrentHeads
WARNING: List of flow heads unset for CpsFlowExecution[null], perhaps due to broken storage
Feb 25, 2017 9:32:58 PM org.jenkinsci.plugins.workflow.cps.CpsFlowExecution getCurrentHeads
WARNING: List of flow heads unset for CpsFlowExecution[null], perhaps due to broken storage
Feb 25, 2017 9:48:02 PM jenkins.branch.MultiBranchProject$BranchIndexing run
INFO: bible-server #20170225.214800 branch indexing action completed: SUCCESS in 2.4 sec
workflow-cps, which seems to be the source of the problem, is part of the well-known Pipeline plugin, which I am using.
https://wiki.jenkins-ci.org/display/JENKINS/Pipeline+Plugin
It doesn't seem to have any unwanted side effects other than this annoying warning in the logs.
Anyone got ideas how to fix this up?
I was seeing the same thing. Looking at the source code for the plugin, it appears this is related to a run of a pipeline that completed abnormally.
In my case, I had a run of a pipeline that ran all the way through but got a Java exception trying to send mail, because the VM lost network connectivity. Once I deleted that failed run (not the pipeline itself), I stopped seeing those warnings in the logs.
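If the broken run is hard to delete through the UI, removing its build directory on disk should work too. A sketch with the Jenkins home and build number simulated for safety (on a real server the path is $JENKINS_HOME/jobs/&lt;job&gt;/builds/&lt;number&gt;, and Jenkins should be restarted or the job reloaded afterwards):

```shell
# Simulated layout for illustration; on a real server JENKINS_HOME would be
# e.g. /var/lib/jenkins, and 17 stands in for the number of the broken run.
JENKINS_HOME=$(mktemp -d)
mkdir -p "$JENKINS_HOME/jobs/bible-server/builds/17"   # hypothetical broken run
rm -rf "$JENKINS_HOME/jobs/bible-server/builds/17"
```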
I'm having a problem setting up the ec2-plugin to work when connecting the Jenkins master with an "on-demand" slave EC2 instance.
This is a log from Jenkins:
INFO: Connecting to <EC2_PUBLIC_DNS> on port 22, with timeout 10000.
Sep 06, 2016 9:54:53 PM null
INFO: Connected via SSH.
Sep 06, 2016 9:54:54 PM null
WARNING: Authentication failed. Trying again...
Sep 06, 2016 9:55:24 PM null
INFO: Authenticating as docker-client
Sep 06, 2016 9:55:25 PM null
INFO: Connecting to <EC2_PUBLIC_DNS> on port 22, with timeout 10000.
Sep 06, 2016 9:55:25 PM null
INFO: Connected via SSH.
Sep 06, 2016 9:55:26 PM null
On the other hand, I'm able to connect from the Jenkins master to the slave, and vice versa, via the ssh command without any problem.
Any idea what might be the issue?
Thanks in advance,
Bakir
After a long investigation, it turns out the problem was that my Unix user docker-client didn't have the public SSH key (from the key pair) in:
/home/docker-client/.ssh/authorized_keys
but instead had it in
/home/ubuntu/.ssh/authorized_keys
In the ec2-plugin config section in Jenkins, I have the PEM key (from the key pair) specified, but I'm also trying to connect as docker-client (not ubuntu).
Even though I had passwordless access between the Jenkins master and the docker-client user, that didn't take precedence; the PEM key was used (without success, for a now-obvious reason).
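The fix can be sketched as copying the authorized key from the ubuntu user to the user the plugin logs in as. Paths are simulated under a temp root here for safety; on the real instance they live under /home, and the .ssh directory must be owned by docker-client:

```shell
# Temp root stands in for / on the EC2 slave; on the real machine, also run
# chown -R docker-client:docker-client on the target .ssh directory.
ROOT=$(mktemp -d)
mkdir -p "$ROOT/home/ubuntu/.ssh" "$ROOT/home/docker-client/.ssh"
echo 'ssh-rsa AAAAB3Nza example-key' > "$ROOT/home/ubuntu/.ssh/authorized_keys"
cp "$ROOT/home/ubuntu/.ssh/authorized_keys" "$ROOT/home/docker-client/.ssh/authorized_keys"
chmod 700 "$ROOT/home/docker-client/.ssh"
chmod 600 "$ROOT/home/docker-client/.ssh/authorized_keys"
```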
Make sure the SSH key you have added in the Jenkins EC2 plugin is the same one authorized on the instance the plugin connects to.
I am having a hard time tracing an issue and hope someone can help. We have a Joomla site along with ApnsPHP that is able to send push messages for one app already. We have a second app, using a different PEM. Only the first message is sent out; then there is no answer from the Apple push server, and everything hangs until the timeout ends the request.
The same site is running on two other servers, one Windows and one OS X machine. Both send out messages with the same code/PEM/tokens successfully. It is the client's OS X Mac Mini server which is failing.
This is what I get on the client machine:
Tue, 15 Dec 2015 17:02:55 +0100 ApnsPHP[42117]: INFO: Trying tls://gateway.push.apple.com:2195...
Tue, 15 Dec 2015 17:02:56 +0100 ApnsPHP[42117]: INFO: Connected to tls://gateway.push.apple.com:2195.
Tue, 15 Dec 2015 17:02:56 +0100 ApnsPHP[42117]: INFO: Sending messages queue, run #1: 1 message(s) left in queue.
Tue, 15 Dec 2015 17:02:56 +0100 ApnsPHP[42117]: STATUS: Sending message ID 1 [custom identifier: CYD-Badge-1] (1/3): 157 bytes.
I'm running a web application on OpenShift using WildFly 8.1, and I would like to change the default timezone of my application to America/Sao_Paulo (GMT-3).
Today there is a 3-hour difference between my computer and the server running the application.
My desire is that my computer and the server share the same date.
Server: Tue Nov 11 14:42:19 EST 2014
My computer: Tue Nov 11 17:43:47 BRST 2014
While I got the majority of this answer from Jboss with UTC timezone, it still isn't as simple as "try this"...
To get this working on OpenShift you need to change how Java is started on your gear by creating a [deploy action hook](https://developers.openshift.com/en/getting-started-modifying-applications.html). This lets you change how your application is deployed (started), so you should be able to append the solution mentioned in the link above to the "start" command that is executed by the cartridge. For JBoss it's https://github.com/Nick-Harvey/origin-server/blob/master/cartridges/openshift-origin-cartridge-jbossas/bin/control
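A sketch of such a hook, assuming the cartridge honors a JAVA_OPTS_EXT-style variable appended to the JVM options (the variable name and hook path are assumptions; check what your cartridge's control script actually reads):

```shell
# Hypothetical content of .openshift/action_hooks/deploy; JAVA_OPTS_EXT is an
# assumed variable name -- verify it against your cartridge's control script.
export JAVA_OPTS_EXT="$JAVA_OPTS_EXT -Duser.timezone=America/Sao_Paulo"
```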