I am currently working on a website that will run .NET code using this guide:
https://learn.microsoft.com/en-us/aspnet/core/host-and-deploy/linux-apache?view=aspnetcore-3.1
And this guide to install the .NET Core SDK (I installed both 2.1 and 3.1 while troubleshooting):
https://learn.microsoft.com/en-ca/dotnet/core/install/linux-package-manager-fedora31
I am trying to configure Apache as a reverse proxy that forwards requests to the Kestrel server, but I am having issues with my service at /etc/systemd/system/kestrel-helloapp.service.
My service file is:
[Unit]
Description=Started service
[Service]
WorkingDirectory=/var/www/html/PublishedVersion
ExecStart=/usr/share/dotnet /var/www/html/PublishedVersion/Website.dll
KillSignal=SIGINT
SyslogIdentifier=dotnet-example
User=root
Enviroment=ASPNETCORE_ENVIROMENT=Production
[Install]
WantedBy=multi-user.target
The service status is:
Mar 15 19:37:38 localhost.localdomain systemd[1]: Started service
Mar 15 19:37:38 localhost.localdomain systemd[1706]: kestrel-helloapp.service: Failed to execute command: Permission denied
Mar 15 19:37:38 localhost.localdomain systemd[1706]: kestrel-helloapp.service: Failed at step EXEC spawning /usr/share/dotnet: Permission denied
Mar 15 19:37:38 localhost.localdomain systemd[1]: kestrel-helloapp.service: Main process exited, code=exited, status=203/EXEC
Mar 15 19:37:38 localhost.localdomain systemd[1]: kestrel-helloapp.service: Failed with result 'exit-code'.
There are three main differences between my service file and the guide's:
1st: I have removed the auto-restart so it doesn't bog down my machine.
2nd: I have changed ExecStart=/usr/local/dotnet to ExecStart=/usr/share/dotnet, because my .NET installation isn't at the guide's location for some reason that eludes me (see the check after this list).
3rd: I have changed User=apache to User=root in an attempt to troubleshoot; the only user on my machine is root, as this machine is just for school purposes.
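For the check mentioned in the 2nd point, this is the kind of lookup I mean (the comments are my assumptions about the Fedora package layout, not verified facts):
which dotnet                     # often a symlink such as /usr/bin/dotnet
ls -l /usr/share/dotnet          # if this is a directory, systemd cannot exec it directly
ls -l /usr/share/dotnet/dotnet   # the actual launcher binary may sit one level down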
I have also changed the SELinux setting on my machine to permissive, and finally to disabled, in an attempt at troubleshooting.
I'm still new to this and none of this was seen in class, so go easy on me.
Thank you for your time/answers.
SELinux restricts where systemd is allowed to execute files from. I'm not an expert in this field, but there is a workaround for this issue: edit /etc/selinux/config and change the line SELINUX=enforcing to SELINUX=permissive, then restart the system and the service will start from systemd.
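A minimal sketch of that change (the sed one-liner is just one way to edit the file; back it up first, and substitute your own unit name if it differs):
sudo cp /etc/selinux/config /etc/selinux/config.bak
sudo sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
sudo reboot
# after the reboot, retry the unit and check its state:
sudo systemctl restart kestrel-helloapp.service
systemctl status kestrel-helloapp.service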
We have an existing Jenkins install that I run from the command line. I want to start using it as a Windows Service instead, so that it launches when the machine restarts, without requiring someone to log in.
I have read about how to do it, but I am worried that it might break our existing setup, the jobs and other scripts that rely on the current location. Apparently when you go to Install Jenkins as a Windows Service, it asks you for a location for JENKINS_HOME.
Can I just give it the existing location? Will it just work or is there a danger of it wiping out what's there? And if I want to be safe and back up everything just in case, can I just make a copy of the existing .jenkins folder and then copy it back if something goes wrong? Or are there other files somewhere that I need to back up?
My question is basically the same as this one, which never got an answer:
Installing existing Jenkins as a Windows Service
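To be concrete about the backup I have in mind, it would just be a mirror copy of the home directory before touching anything (paths are placeholders, not our real layout):
robocopy C:\Users\me\.jenkins D:\backup\jenkins-home /MIR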
Thanks
You should just be able to do this directly from the UI. (It used to be documented on the Jenkins wiki, but that's presently down.)
Fire up your command-line Jenkins (java -jar jenkins.war), go to "Manage Jenkins" (${JENKINS_URL}/manage). You should see an icon "Install as Windows Service".
Click on it and you arrive at ${JENKINS_URL}/install. Point it at your existing install and click "Install". You will get a prompt to restart as a service and then it restarts.
You're done. You should see the shutdown-and-restart messages in your logs:
2021-09-10 00:25:44.077+0000 [id=96] INFO jenkins.model.Jenkins#cleanUp: Stopping Jenkins
2021-09-10 00:25:44.080+0000 [id=96] INFO jenkins.model.Jenkins$18#onAttained: Started termination
2021-09-10 00:25:44.099+0000 [id=96] INFO jenkins.model.Jenkins$18#onAttained: Completed termination
2021-09-10 00:25:44.100+0000 [id=96] INFO jenkins.model.Jenkins#_cleanUpDisconnectComputers: Starting node disconnection
2021-09-10 00:25:44.115+0000 [id=96] INFO jenkins.model.Jenkins#_cleanUpShutdownPluginManager: Stopping plugin manager
2021-09-10 00:25:44.115+0000 [id=96] INFO jenkins.model.Jenkins#_cleanUpPersistQueue: Persisting build queue
2021-09-10 00:25:44.127+0000 [id=96] INFO jenkins.model.Jenkins#_cleanUpAwaitDisconnects: Waiting for node disconnection completion
2021-09-10 00:25:44.127+0000 [id=96] INFO jenkins.model.Jenkins#cleanUp: Jenkins stopped
[.jenkins] $ C:\Users\ \.jenkins\jenkins.exe start
2021-09-09 17:25:45,153 INFO - Starting the service with id 'jenkins'
You should also now see the jenkins service running in Windows Services.
You can manage it via the Services UI, the command line via SC, or the jenkins.exe binary:
NOTE: The same security caveats regarding running as LocalSystem apply whether you use this mechanism or the MSI install. I recommend changing the service to run as a local user; that account needs the Log on as a service right (see Using the LocalSystem Account as a Service Logon Account, and Why running a service as Local System is bad on Windows): Local Security Policy > Local Policies > User Rights Assignment > Log on as a service.
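A rough sketch of making that account switch from the command line (the account name and password are placeholders; note that sc requires a space after each option's equals sign):
C:\> sc config jenkins obj= ".\jenkinsuser" password= "hunter2"
C:\> sc stop jenkins
C:\> sc start jenkins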
C:\>sc query jenkins
SERVICE_NAME: jenkins
TYPE : 10 WIN32_OWN_PROCESS
STATE : 4 RUNNING
(STOPPABLE, NOT_PAUSABLE, ACCEPTS_SHUTDOWN)
WIN32_EXIT_CODE : 0 (0x0)
SERVICE_EXIT_CODE : 0 (0x0)
CHECKPOINT : 0x0
WAIT_HINT : 0x0
C:\> sc stop jenkins
SERVICE_NAME: jenkins
TYPE : 10 WIN32_OWN_PROCESS
STATE : 3 STOP_PENDING
(STOPPABLE, NOT_PAUSABLE, ACCEPTS_SHUTDOWN)
WIN32_EXIT_CODE : 0 (0x0)
SERVICE_EXIT_CODE : 0 (0x0)
CHECKPOINT : 0x0
WAIT_HINT : 0x0
C:\> sc delete jenkins
[SC] DeleteService SUCCESS
C:\>
C:\Users\ \.jenkins> jenkins.exe /?
A wrapper binary that can be used to host executables as Windows services
Usage: winsw [/redirect file] <command> [<args>]
Missing arguments trigger the service mode
Available commands:
install install the service to Windows Service Controller
uninstall uninstall the service
start start the service (must be installed before)
stop stop the service
stopwait stop the service and wait until it's actually stopped
restart restart the service
restart! self-restart (can be called from child processes)
status check the current status of the service
test check if the service can be started and then stopped
testwait starts the service and waits until a key is pressed then stops the service
version print the version info
help print the help info (aliases: -h,--help,-?,/?)
Extra options:
/redirect redirect the wrapper's STDOUT and STDERR to the specified file
WinSW 2.9.0.0
More info: https://github.com/kohsuke/winsw
Bug tracker: https://github.com/kohsuke/winsw/issues
Images captured from 2.303.1 on Win 10 Enterprise; YMMV.
I have an Angular app that uses Karma for tests. I am also using GitLab CI to automate building and deploying the app.
Recently we wanted to add tests to the pipeline, using our own image with Chrome.
Running it in the pipeline produces an error about Karma not being able to capture the Chrome process:
31 12 2018 10:58:36.116:INFO [karma]: Karma v1.7.1 server started at http://0.0.0.0:9877/
31 12 2018 10:58:36.121:INFO [launcher]: Launching browser ChromeKarma with unlimited concurrency
31 12 2018 10:58:36.134:INFO [launcher]: Starting browser ChromeHeadless
31 12 2018 10:59:36.146:WARN [launcher]: ChromeHeadless have not captured in 60000 ms, killing.
31 12 2018 10:59:36.163:INFO [launcher]: Trying to start ChromeHeadless again (1/2).
31 12 2018 11:00:36.223:WARN [launcher]: ChromeHeadless have not captured in 60000 ms, killing.
31 12 2018 11:00:36.236:INFO [launcher]: Trying to start ChromeHeadless again (2/2).
31 12 2018 11:01:36.296:WARN [launcher]: ChromeHeadless have not captured in 60000 ms, killing.
31 12 2018 11:01:36.310:ERROR [launcher]: ChromeHeadless failed 2 times (timeout). Giving up.
Running the same commands locally in the same Docker image (starting a container from the same image and running the same commands), I do not get the error, and the tests run fine.
After some searches I tried adding other flags besides --no-sandbox. This is my current browser configuration:
customLaunchers: {
  ChromeKarma: {
    base: 'ChromeHeadless',
    // We must disable the Chrome sandbox when running Chrome inside Docker (Chrome's sandbox needs
    // more permissions than Docker allows by default)
    flags: [
      '--disable-web-security',
      '--disable-gpu',
      '--no-sandbox',
      '--remote-debugging-port=9222'
    ]
  }
},
I've also tried adding a sleep to the list of commands in the pipeline, and then connecting to the container and running the tests manually. This does not produce the error, and the tests run fine.
Docker version is: Docker version 17.05.0-ce, build 89658be
I should also mention that, while inside the container, I ran ps ax and saw the Chrome processes start and stay up until Karma killed them.
Solved this issue myself. Inside our network we use a proxy for accessing the internet, and it turns out that this stops Chrome from connecting to the Karma web server. I had to unset the proxy to get it to work. Another way to resolve this, without having to remove the proxy, would be to add the following flags to Karma:
'--proxy-bypass-list=*',
'--proxy-server=\'http://<my org proxy server>:8080\''
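And if you go the unset route instead, a sketch of the CI job's script (the test command is an assumption, the usual Angular CLI invocation; substitute whatever your pipeline actually runs):
# clear the proxy for this shell only, covering both common spellings
unset http_proxy https_proxy HTTP_PROXY HTTPS_PROXY
ng test --watch=false --browsers=ChromeKarma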
I have a Docker container with my whole PHP 7 development environment, and everything works well except Xdebug. The extension is enabled, with all the correct settings for remote debugging, and I set up the remote host, which is OK. But when I make a request to a website inside this container and check the Apache error log, I see this error:
[Thu Jun 01 05:44:31.529883 2017] [:error] [pid 916] [client 172.18.0.1:40306] XDebug could not open the remote debug file '/var/log/apache2/xdebug_remote.log'., referer: XXXXXXX
The file xdebug_remote.log has all the privileges, so in theory that should not be the problem. Does anyone have any idea what the problem might be?
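To show what I mean by "all the privileges", these are the checks I am basing that on, plus an experiment I am considering (the /tmp path is arbitrary; xdebug.remote_log is the Xdebug 2 setting name):
ps aux | grep -E 'apache2|httpd'           # which user Apache runs as
ls -l /var/log/apache2/xdebug_remote.log   # ownership and mode of the log file
# in php.ini, as a test, point the log somewhere world-writable:
# xdebug.remote_log = /tmp/xdebug_remote.log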
I am running a Cowboy Erlang server. My server was generated by following the getting-started instructions on the 99s site, and I am running it with this command line:
./_rel/myapp_release/bin/myapp_release console
The thing is, after a certain while of no activity, the server crashes and does not recover. The message I am getting is this:
heart: Sat Aug 16 22:33:18 2014: heart-beat time-out, no activity for 1771 seconds
heart: Sat Aug 16 22:33:18 2014: Would reboot. Terminating.
{"Kernel pid terminated",heart,{port_terminated,{heart,loop,[<0.0.0>,#Port<0.25>,[]]}}}
I know about the heart tool that can monitor a service and restart it after a while if it is not getting any requests (I guess the logic is that if nothing is happening with the service, something is wrong), but I can't figure out where in the cowboy application this configuration lives.
So I would ask:
Can anyone explain why the server is crashing?
If it is indeed crashing "on purpose", where is the configuration for things like the time-out period?
Ideally the application would restart itself after a crash (using a supervisor?). Does cowboy have a built-in supervisor for the apps it is running?
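From what I've read so far (this is my assumption, not verified), heart is switched on by a -heart flag in the release's vm.args, and its timeout comes from an environment variable, so something like this is what I would try:
# vm.args generated by the release tooling often contains a line like:
#   -heart
# removing it should disable the reboot-on-timeout behaviour entirely;
# alternatively, raise the heartbeat timeout (in seconds) before starting:
export HEART_BEAT_TIMEOUT=600
./_rel/myapp_release/bin/myapp_release console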
I've inherited the maintenance and development of a Ruby on Rails site that runs on Ruby 1.8.7 and Rails 2.3.2. While we try to deploy to Linux servers using Passenger as much as possible, my boss has told me that we must be able to deploy to Windows at times for our clients.
I have installed my Rails app fine and it works perfectly when I test with the WEBrick server. I have also installed Apache 2.2, which is serving up generic HTML pages perfectly. However, when I try to run my Rails app under Apache, I get a 503 Service Temporarily Unavailable error.
There is no error listed in the Apache logs, but when I check the RoR logs I do see
127.0.0.1 - - [09/Aug/2012:10:31:02 +1000] "GET / HTTP/1.1" 503 323
127.0.0.1 - - [09/Aug/2012:10:31:02 +1000] "GET /favicon.ico HTTP/1.1" 503 323
and
[Thu Aug 09 10:31:06 2012] [error] proxy: BALANCER: (balancer://mmapscluster). All workers are in error state
[Thu Aug 09 10:31:07 2012] [error] proxy: BALANCER: (balancer://mmapscluster). All workers are in error state
As you may have guessed, we are running Apache as a proxy in front of Mongrel for performance reasons.
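For context, our balancer stanza is close to this sketch (the member ports are placeholders, not our real config):
<Proxy balancer://mmapscluster>
  BalancerMember http://127.0.0.1:8000
  BalancerMember http://127.0.0.1:8001
</Proxy>
ProxyPass / balancer://mmapscluster/
ProxyPassReverse / balancer://mmapscluster/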
When I removed all of the proxying from the Apache configuration (incidentally, restarting Apache was not enough to clear the proxy config; I had to reboot the entire machine), I got a seemingly endless stream of the following Apache errors:
[notice] Parent: Created child process 1944
[notice] Child 1944: Child process is running
[notice] Parent: child process exited with status 255 -- Restarting.
[notice] Apache/2.2.15 (Win32) configured -- resuming normal operations
I have gone round and round on this, and I've checked my config against a working installation that we have, but I cannot see any differences in the setup. The only real difference is that the working one runs on a 32-bit machine and the failing one runs on a 64-bit machine.
Could this be the problem? Has anybody else had similar problems running Apache on a 64-bit machine?