opensipsctl start gives an error: opensips.pid does not exist - opensips

When I run the opensipsctl start command to start OpenSIPS, I get the following error:
ERROR: PID file /var/run/opensips.pid does not exist -- OpenSIPS start failed
Please help me solve it.

Open up opensipsctl: it includes the file opensipsctlrc, which defines $PID_FILE as /var/run/opensips.pid.
Then in opensipsctl, when you run start, one of the checks is:
if [ ! -s $PID_FILE ] ; then
    echo
    merr "PID file $PID_FILE does not exist -- OpenSIPS start failed"
    exit 1
fi
This is saying: if the check of whether /var/run/opensips.pid exists and is bigger than 0 bytes fails, then echo out the above error.
This means the file isn't being created.
If you look just above that line, it does:
if [ $SYSLOG = 1 ] ; then
    $OSIPSBIN -P $PID_FILE $STARTOPTIONS 1>/dev/null 2>/dev/null
else
    $OSIPSBIN -P $PID_FILE -E $STARTOPTIONS
fi
Which is where OpenSIPS actually starts. I would suggest adding the following to your opensips.cfg if you haven't already:
# Logging
debug=6
log_stderror=no
log_facility=LOG_LOCAL0
Now everything will be logged to /var/log/syslog on boot.
Try booting again, then look at that log for information about what's happened.
Another thing to check is whether the user you're running OpenSIPS as has permission to access the directory in which it's trying to create the PID file.
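For example, a rough sketch of that check and one possible fix (the opensips user and group names here are assumptions; substitute whichever user actually runs OpenSIPS):
# show who owns the directory the PID file goes into
ls -ld /var/run
# one possible fix: give OpenSIPS its own PID directory and point PID_FILE there
mkdir -p /var/run/opensips
chown opensips:opensips /var/run/opensips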

I had the same error and it was driving me mad as well. I managed to trace it down to one of two things; I had both!
1/ A misconfiguration in the OpenSIPS config file. journalctl -xe should be able to tell you what the error is.
2/ Something else is listening on the port that you are trying to listen on.
For 2, if you're on Ubuntu you can try the command below to see if anything is already listening on that port:
lsof -i :5060
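If lsof isn't installed, ss from iproute2 is a rough equivalent (the flags list TCP and UDP listeners along with their owning processes):
ss -tulpn | grep 5060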

I was able to see the logs and fix the issue with the steps below:
Set log_level=4 in opensips.cfg to view debug logs in /var/log/syslog.
The debug parameter is deprecated in version 2.4 and higher.
You can refer here for the different log levels.
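For example, a minimal sketch of the logging lines in opensips.cfg on 2.4+, assuming the same syslog setup as the earlier answer:
# Logging (OpenSIPS 2.4 and higher)
log_level=4
log_stderror=no
log_facility=LOG_LOCAL0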

Related

8th wall web app setup child compilation failed

I am new to 8th Wall. I have cloned 8th Wall Web from Git and executed the steps below properly:
# cd <directory_where_you_saved_sample_project_files>
# cd serve
# npm install
# cd ..
# ./serve/bin/serve -d <sample_project_location>
but on executing the last step, which is, for example:
./serve/bin/serve -n -d gettingstarted/xraframe/ -p 7777
I get the errors below:
Failed to compile.
Error: Child compilation failed: Entry module not found: Error:
Can't resolve
'C:\8thWall_Project\web\serve\bin\gettingstarted\xraframe"
\index.html' in 'C:\8thWall_Project\web\serve': Error: Can't resolve
'C:\8thWall_Project\web\serve\bin\gettingstarted\xraframe"
\index.html' in 'C:\8thWall_Project\web\serve'
compiler.js:79 childCompiler.runAsChild
[serve]/[html-webpack-plugin]/lib/compiler.js:79:16
Compiler.js:306 compile
[serve]/[webpack]/lib/Compiler.js:306:11
Compiler.js:631 hooks.afterCompile.callAsync.err
[serve]/[webpack]/lib/Compiler.js:631:15
Hook.js:154 AsyncSeriesHook.lazyCompileHook
[serve]/[tapable]/lib/Hook.js:154:20
Compiler.js:628 compilation.seal.err
[serve]/[webpack]/lib/Compiler.js:628:31
Hook.js:154 AsyncSeriesHook.lazyCompileHook
[serve]/[tapable]/lib/Hook.js:154:20
Compilation.js:1325 hooks.optimizeAssets.callAsync.err
[serve]/[webpack]/lib/Compilation.js:1325:35
Any idea or pointers on what is missing?
Thanks
I don't know why, but the .bat file doesn't want to be launched by path. Just go to the serve\bin directory and launch the .bat from there.
The -p 7777 part is unnecessary. The problem was that it couldn't find the path to your xraframe project: since you are in another directory, you have to go two directories up in your path for xraframe.
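In other words, something like this (a sketch assuming the sample project layout from the question):
cd serve\bin
serve.bat -n -d ..\..\gettingstarted\xraframe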
It seems as if you're attempting this on a Windows computer. The serve process for Windows is slightly different than on macOS.
Instead of the normal serve script, use the serve.bat executable.
serve\bin\serve.bat -n -d gettingstarted\xraframe -p 7777
https://docs.8thwall.com/web/#locally-from-windows

Capistrano many failed status on linked_files and linked_dirs what do they mean?

OK, so all is working perfectly well as far as I can see, but I do see a lot of "failed" statuses on most of the linked_files and linked_dirs tasks, and I am wondering if they deserve any attention. Here are a few examples:
DEBUG [423a17e1] Running [ -L /home/caluebat/www/ravenfort/releases/20160312213815/tmp/pids ] on xxx.xxx.xxx.xxx
DEBUG [423a17e1] Command: [ -L /home/caluebat/www/ravenfort/releases/20160312213815/tmp/pids ]
DEBUG [423a17e1] Finished in 0.470 seconds with exit status 1 (failed).
DEBUG [541d2f8a] Running [ -d /home/caluebat/www/ravenfort/releases/20160312213815/tmp/pids ] on xxx.xxx.xxx.xxx
DEBUG [541d2f8a] Command: [ -d /home/caluebat/www/ravenfort/releases/20160312213815/tmp/pids ]
DEBUG [541d2f8a] Finished in 0.476 seconds with exit status 1 (failed).
I was unable to find any detail on this in the official Capistrano docs, and they direct questions either here or to the mailing list.
I would appreciate any clarification on the above failures.
Thank you very much.
Don't worry about it!
Whenever cap runs a command that returns a non-zero result, it prints the line in red and says "failed". This can be misleading, because it runs a lot of commands just to see what already exists. For instance [ -d foo ] means "Is there a directory named foo?" It's not actually a failure, it's just cap inspecting the target machine to find out what work it needs to do.
If cap hits a real error, it will quit early and you'll get a stack trace and/or actual error message.
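You can see the same thing in a plain shell; the exit status of a test like [ -d ... ] is just the yes/no answer, not an error:
[ -d /tmp ]; echo $?    # prints 0: the directory exists
[ -d /nope ]; echo $?   # prints 1: it doesn't, but nothing actually failed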

Batch file to check website status from a text file and restart service based on string

I need some batch guru to assist me in getting this resolved. I have a couple of files via which we monitor the response from our websites using wget. When a site is down, we get the following response in test1.txt:
Connecting to 10.x.x.x:443... failed: Bad file descriptor.
whilst when the site is running, the response in test2.txt is:
Connecting to 10.x.x.x:80... connected.
HTTP request sent, awaiting response...
HTTP/1.1 200 OK
I do not see any common pattern in the two outputs above on which I can base the logic, so I need some assistance in determining, from these outputs:
if the website is running, do nothing
if the website is down, start service.
Note, we need to do this only on the basis of the output from these files.
Tried the provided solution but it didn't work:
TestScript>wget-1.14.exe --spider --no-check-certificate https://somesite | find "Bad file descriptor" 1>nul
Spider mode enabled. Check if remote file exists.
--2015-10-08 18:15:21-- https://somesite
Connecting to 10.x.x.x:443... failed: Bad file descriptor.
TestScript>if errorlevel 1 (echo site is up ) else (echo site is down )
site is up
Pipe the output of wget to find to look for Bad file descriptor and then use errorlevel:
wget --spider http://someurl 2>&1 | find "Bad file descriptor" >nul
if errorlevel 1 (
    echo site is up
) else (
    echo site is down
)
2>&1 redirects the messages into the standard output so that it can be piped
--spider makes wget only check the url without saving the result
Alternatively, use the file you already have:
if exist test1.txt find "Bad file descriptor" test1.txt >nul
if not errorlevel 1 (echo start the service)
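Putting it together, a rough sketch of the whole batch job ("MyService" is a placeholder; substitute the real service name):
@echo off
rem look for the failure signature in the saved wget output
if exist test1.txt find "Bad file descriptor" test1.txt >nul
if errorlevel 1 (
    rem string not found: the site is up, do nothing
    echo site is up
) else (
    rem string found: the site is down, start the service
    net start "MyService"
)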

Capistrano destination path already exists, not an empty directory

While deploying a Rails app with Capistrano on Ubuntu 14.04, I am getting the following error:
fatal: destination path '/var/www/APP-NAME/repo' already exists and is not an empty directory.
DEBUG[cf5a389e] Running /usr/bin/env [ -f /var/www/rd/repo/HEAD ] on LINODE-INSTANCE-IP
DEBUG[cf5a389e] Command: [ -f /var/www/rd/repo/HEAD ]
DEBUG[cf5a389e] Finished in 0.005 seconds with exit status 1 (failed).
DEBUG[8899b95c] Running /usr/bin/env if test ! -d /var/www/rd; then echo "Directory does not exist '/var/www/rd'" 1>&2; false; fi on LINODE-INSTANCE-IP
DEBUG[8899b95c] Command: if test ! -d /var/www/rd; then echo "Directory does not exist '/var/www/rd'" 1>&2; false; fi
DEBUG[8899b95c] Finished in 0.005 seconds with exit status 0 (successful).
INFO[fc5f524b] Running /usr/bin/env git clone --mirror GIT_REPO_URL /var/www/APP-NAME/repo on LINODE-INSTANCE-IP
DEBUG[fc5f524b] Command: cd /var/www/APP-NAME && ( GIT_ASKPASS=/bin/echo GIT_SSH=/tmp/rd/git-ssh.sh /usr/bin/env git clone --mirror GIT-REPO-URL /var/www/APP-NAME/repo )
DEBUG[fc5f524b] fatal: destination path '/var/www/APP-NAME/repo' already exists and is not an empty directory.
Here are config files:
config/deploy/production.rb
config/deploy.rb
The only reason for this error I can find online is:
same host in more than one role, so that they're racing? For example, I mean that you might have the same IP address defined as an :app role host more than once.
which I guess doesn't fit the above config files.
I had the same problem. The reason is a double definition of the role and/or server.
Try to remove
server 'SERVER-IP', user: 'USERNAME', roles: %w{app}
in production.rb and
role :app, "SERVER-IP"
in deploy.rb. The latter seems to be just the simple syntax while the former is the extended one, so in fact you declare roles twice (three times to be more precise: twice in production.rb and once in deploy.rb). Hope it helps.
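In other words, you'd end up keeping a single declaration per server, e.g. only the extended form in production.rb (values are the placeholders from the question):
server 'SERVER-IP', user: 'USERNAME', roles: %w{app}
and no separate role :app line in deploy.rb.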
On a related note, I ran into this issue when changing my DB connections file setup on one of my deploys.
In this case, the old structure had one shared file for DB settings, while the newer one had two. Even though these were declared in the recipe, I was getting errors when automated deletion of older builds was being done.
To resolve it, I just deleted the older builds and ran cap [server name] deploy a few times to clear it out and to verify this was no longer occurring. So far, it's been fine.

Monitoring URLs with Nagios

I'm trying to monitor actual URLs, and not only hosts, with Nagios, as I operate a shared server with several websites, and I don't think it's enough just to monitor the basic HTTP service (I'm including at the very bottom of this question a small explanation of what I'm envisioning).
(Side note: please note that I have Nagios installed and running inside a chroot on a CentOS system. I built Nagios from source, and have used yum to install all needed dependencies into this root, etc...)
I first found check_url, but after installing it into /usr/lib/nagios/libexec, I kept getting a "return code of 255 is out of bounds" error. That's when I decided to start writing this question (but wait! There's another plugin I decided to try first!)
After reviewing This Question, which had practically the same problem I'm having with check_url, I decided to open up a new question on the subject because:
a) I'm not using NRPE with this check
b) I tried the suggestions made on the earlier question to which I linked, but none of them worked. For example...
./check_url some-domain.com | echo $0
returns "0" (which indicates the check was successful)
I then followed the debugging instructions on Nagios Support to create a temp file called debug_check_url, and put the following in it (to then be called by my command definition):
#!/bin/sh
echo `date` >> /tmp/debug_check_url_plugin
echo $* >> /tmp/debug_check_url_plugin
/usr/local/nagios/libexec/check_url $*
Assuming I'm not in "debugging mode", my command definition for running check_url is as follows (inside command.cfg):
'check_url' command definition
define command{
    command_name    check_url
    command_line    $USER1$/check_url $url$
}
(Incidentally, you can also view what I was using in my service config file at the very bottom of this question)
Before publishing this question, however, I decided to give it one more shot at figuring out a solution. I found the check_url_status plugin, and decided to give that one a shot. To do that, here's what I did:
mkdir /usr/lib/nagios/libexec/check_url_status/
downloaded both check_url_status and utils.pm
Per the user comment / review on the check_url_status plugin page, I changed "lib" to the proper directory of /usr/lib/nagios/libexec/.
Ran the following:
./check_url_status -U some-domain.com
When I ran the above command, I kept getting the following error:
bash-4.1# ./check_url_status -U mydomain.com
Can't locate utils.pm in @INC (@INC contains: /usr/lib/nagios/libexec/ /usr/local/lib/perl5 /usr/local/share/perl5 /usr/lib/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib/perl5 /usr/share/perl5) at ./check_url_status line 34.
BEGIN failed--compilation aborted at ./check_url_status line 34.
So at this point, I give up, and have a couple of questions:
Which of these two plugins would you recommend? check_url or check_url_status?
(After reading the description of check_url_status, I feel that this one might be the better choice. Your thoughts?)
Now, how would I fix my problem with whichever plugin you recommended?
At the beginning of this question, I mentioned I would include a small explanation of what I'm envisioning. I have a file called services.cfg which is where I have all of my service definitions located (imagine that!).
The following is a snippet of my service definition file, which I wrote to use check_url (because at that time, I thought everything worked). I'll build a service for each URL I want to monitor:
###
# Monitoring Individual URLs...
#
###
define service{
    host_name               {my-shared-web-server}
    service_description     URL: somedomain.com
    check_command           check_url!somedomain.com
    max_check_attempts      5
    check_interval          3
    retry_interval          1
    check_period            24x7
    notification_interval   30
    notification_period     workhours
}
I was making things WAY too complicated.
The built-in / installed by default plugin, check_http, can accomplish what I wanted and more. Here's how I have accomplished this:
My Service Definition:
define service{
    host_name               myers
    service_description     URL: my-url.com
    check_command           check_http_url!http://my-url.com
    max_check_attempts      5
    check_interval          3
    retry_interval          1
    check_period            24x7
    notification_interval   30
    notification_period     workhours
}
My Command Definition:
define command{
    command_name    check_http_url
    command_line    $USER1$/check_http -I $HOSTADDRESS$ -u $ARG1$
}
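To sanity-check the command outside Nagios, you can run the plugin by hand first (the path below is the default source-install location; adjust to your setup):
/usr/local/nagios/libexec/check_http -I my-url.com -u http://my-url.com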
A better way to monitor URLs is WebInject, which can be used with Nagios.
The problem below is caused by the missing Perl module utils.pm; try installing it:
bash-4.1# ./check_url_status -U mydomain.com Can't locate utils.pm in @INC (@INC contains:
You can make a script plugin. It is easy; you only have to check the URL with something like:
`curl -Is $URL -k | grep HTTP | cut -d ' ' -f2`
$URL is what you pass to the script as a parameter.
Then check the result: if you get a code greater than 399 you have a problem, else everything is OK! Then set the right exit code and the message for Nagios.
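For instance, a minimal sketch of such a plugin (hypothetical and untested; it only distinguishes OK from CRITICAL):
#!/bin/bash
# hypothetical Nagios plugin: CRITICAL on an HTTP status above 399 or no response
URL=$1
CODE=$(curl -Is "$URL" -k | grep HTTP | cut -d ' ' -f2)
if [ -z "$CODE" ] || [ "$CODE" -gt 399 ]; then
    echo "CRITICAL - $URL returned HTTP ${CODE:-nothing}"
    exit 2
else
    echo "OK - $URL returned HTTP $CODE"
    exit 0
fi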
