SQL Server and Rails trouble

Note: this is a repost. This question was previously deleted for undisclosed reasons.
OK, I've been trying to get this to work all day now and I'm barely any further than when I started.
I'm trying to get Ruby On Rails to connect to SQL Server. I've installed unixODBC and configured it and FreeTDS and installed just about every Ruby gem relating to ODBC that exists.
(This has been updated to show the output of isql with -v)
[earlz@earlzarch myproject]$ tsql -S AVP1 -U sa -P pass
locale is "en_US.UTF-8"
locale charset is "UTF-8"
1> quit
[earlz@earlzarch ~]$ isql -v AVP1 sa pass
[IM002][unixODBC][Driver Manager]Data source name not found, and no default driver specified
[ISQL]ERROR: Could not SQLConnect
[earlz@earlzarch myproject]$ rake db:version
(in /home/earlz/myproject)
rake aborted!
IM002 (0) [unixODBC][Driver Manager]Data source name not found, and no default driver specified
(See full trace by running task with --trace)
So, as you can see, tsql works but isql doesn't. What difference between the two breaks it?
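As a sanity check on the unixODBC side (these are standard unixODBC commands, nothing specific to my setup), you can ask the driver manager which files it reads and which drivers and DSNs it actually sees:
odbcinst -j
odbcinst -q -d
odbcinst -q -s
If AVP1 doesn't show up under the data sources, isql will fail exactly like this even though tsql, which only reads freetds.conf, is perfectly happy.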
/etc/odbc.ini
[AVP1]
Description = ODBC connection via FreeTDS
Driver = TDS
Servername = my.server
UID = sa
PWD = pass
port = 1232
Database = mydatabase
/etc/odbcinst.ini
[TDS]
Description = v0.6 with protocol v7.0
Driver = /usr/lib/libtdsodbc.so
Setup = /usr/lib/libtdsS.so
CPTimeout =
CPReuse =
FileUsage = 1
(and yes, I've made sure that the .so files exist)
The relevant part of freetds.conf:
[AVP1]
host = my.server
port = 1232
tds version = 8.0
And finally, my database.yml:
development:
  adapter: sqlserver
  mode: odbc
  dsn: AVP1
  username: sa
  password: pass
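For reference, the adapter: sqlserver / mode: odbc combination above comes from the activerecord-sqlserver-adapter gem with ruby-odbc underneath; if you're reproducing this setup, something like the following should cover the Ruby side (exact gem versions for the Rails of that era may differ):
gem install activerecord-sqlserver-adapter ruby-odbc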
Can anyone please help me before I pull all my hair out?
I am using a 64 bit Arch Linux that is completely up to date.
What could be causing isql to fail? I've tried every solution I've seen so far for this problem, but none of them are actually working for me. Do I have to recompile FreeTDS or something?
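One thing worth checking before rebuilding anything is what your FreeTDS was actually compiled with; tsql can report its own compile-time settings:
tsql -C
# prints the configured TDS version, the freetds.conf directory, and whether ODBC support was built in
If the reported TDS version or config directory isn't what you expect, recompiling is worth considering; otherwise the problem is almost certainly on the unixODBC side.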
Ok, I have also verified with strace that it is finding the configuration file, as shown by this excerpt:
open("/etc/odbc.ini", O_RDONLY) = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=159, ...}) = 0
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fc71fe09000
read(3, "[AVP1]\n Description = ODBC "..., 4096) = 159

If you have gotten tsql to work but have searched far and wide on the Internet, troubleshot your configs, and still have not been able to get isql to work: check your server logs.
I had been troubleshooting a Xubuntu 12.04 unixODBC install and config for a week and had tried everything possible to get it fixed. Then I decided to check my Windows server's Event Viewer to see what was happening when the request came into the server, or whether a request was even coming in at all, and discovered that the problem was that I couldn't get into a specific database. I was able to get into SQL Server OK, but not the actual DB I had listed in my odbc.ini file.
Here is the specific text in the event log: "Login failed for user 'ePMX'. Reason: Failed to open the explicitly specified database. [CLIENT: 192.168.27.25]".
What sparked my interest was the word "explicitly". So I simply commented out the Database = <DB Name> line and suddenly everything worked; I got the SQL prompt after untold hours of researching and trying everything possible.
So if you are having trouble using unixODBC, don't forget to troubleshoot the server side of things as well as the client side. I have seen tons of posts where people had the exact same problem I was having but never got an answer, so I am guessing a large number of those were server-side issues.
For a great troubleshooting tool, use osql rather than isql (osql in fact uses isql to connect), because it goes through the connection process step by step and gives you details about where the failure occurs. It is used the same way you use isql:
osql <DSN> <user> <password>.
So, as I said, be sure to check your server logs if you have tried everything else and have been unable to figure out what the problem is.
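For example, against the DSN from the question (depending on your FreeTDS version, osql may want the isql-style positional arguments shown above or the -S/-U/-P flag form; check its man page):
osql -S AVP1 -U sa -P pass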

Ok, I finally figured it out after only 2 straight days of banging my head against the wall.
I'll try to give as much info as possible so that if someone finds this in the same situation I was in, they'll find this useful.
[earlz@earlzarch ~]$ cat /etc/odbc.ini
[AVP1]
Description=ODBC connection via FreeTDS
Driver=/usr/lib/libtdsodbc.so
Server=192.168.0.100
UID=sa
PWD=pass
Port=1232
ReadOnly=No
[earlz@earlzarch ~]$ cat /etc/odbcinst.ini
[TDS]
Description = v0.60 with protocol v7.0
Driver = /usr/lib/libtdsodbc.so
Driver64 = /usr/lib
Setup = /usr/lib/libtdsS.so
Setup64 = /usr/lib
CPTimeout =
CPReuse =
FileUsage = 1
[earlz@earlzarch ~]$ cat /etc/freetds/freetds.conf
[global]
tds version = 8.0
initial block size = 512
swap broken dates = no
swap broken money = no
try server login = yes
try domain login = no
cross domain login = no
# If you get out-of-memory errors, it may mean that your client
# is trying to allocate a huge buffer for a TEXT field.
# Try setting 'text size' to a more reasonable limit
text size = 64512
[TDS]
host = 192.168.0.100
port = 1232
tds version = 8.0
And if you're lucky, after that:
[earlz@earlzarch ~]$ isql -v AVP1
[S1000][unixODBC][FreeTDS][SQL Server]Unable to connect to data source
[01000][unixODBC][FreeTDS][SQL Server]Adaptive Server connection failed
[ISQL]ERROR: Could not SQLConnect
[earlz@earlzarch ~]$ isql -v AVP1 sa pass
+---------------------------------------+
| Connected!                            |
|                                       |
| sql-statement                         |
| help [tablename]                      |
| quit                                  |
|                                       |
+---------------------------------------+
SQL>
I did not have to set any kind of environment variables, and I didn't have to manually compile anything either, on Arch Linux 64-bit (as of April 7th, 2010). After getting isql to work, Rails immediately connected to the database as well. Now I just have to figure out why db:schema:load isn't working, but that's another question :)
Also, notice that the only real difference between this set of files and the last is that in /etc/odbc.ini I set the Driver field to the actual file name of the driver, rather than the name of a configuration entry.

When building FreeTDS, current versions of SQL Server need TDS protocol v8 (http://www.freetds.org/userguide/config.htm):
./configure --with-tdsver=8.0 --enable-msdblib
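A minimal from-source build along those lines (assuming the 0.x tarball layout of that era; install paths may differ on your distro):
./configure --with-tdsver=8.0 --enable-msdblib
make
sudo make install   # installs libtdsodbc.so under /usr/local/lib by default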

Related

Error loading Snowflake ODBC Driver on Mac M1 from erlang call :odbc.connect

I have a problem loading the Snowflake driver in an Elixir application on an arm64 Mac M1 (on x86 it works smoothly).
Installed:
unixodbc
erlang 24.1.2 with odbc support
snowflake driver
iODBC
ODBC manager & iODBC manager
Below is configuration of my odbc installation
➜ sandbox odbcinst -j
unixODBC 2.3.9
DRIVERS............: /usr/local/etc/odbcinst.ini
SYSTEM DATA SOURCES: /usr/local/etc/odbc.ini
FILE DATA SOURCES..: /usr/local/etc/ODBCDataSources
USER DATA SOURCES..: /Users/or/.odbc.ini
SQLULEN Size.......: 8
SQLLEN Size........: 8
SQLSETPOSIROW Size.: 8
➜ sandbox cat /usr/local/etc/odbcinst.ini
[ODBC Drivers]
SnowflakeDSIIDriver=Installed
[SnowflakeDSIIDriver]
APILevel=1
ConnectFunctions=YYY
Description=Snowflake DSII
Driver=/opt/snowflake/snowflakeodbc/lib/universal/libSnowflake.dylib
DriverODBCVer=03.52
SQLLevel=1
ODBCInstLib=/usr/local/iODBC/lib/libiodbcinst.dylib
➜ sandbox cat /opt/snowflake/snowflakeodbc/lib/universal/simba.snowflake.ini
[Driver]
ANSIENCODING=UTF-8
DriverManagerEncoding=UTF-32
DriverLocale=en-US
ErrorMessagesPath=/opt/snowflake/snowflakeodbc/ErrorMessages
LogLevel=0
LogPath=
CURLVerboseMode=false
CABundleFile=/opt/snowflake/snowflakeodbc/lib/universal/cacert.pem
ODBCInstLib=libodbcinst.dylib
➜ sandbox cat /usr/local/etc/odbc.ini
[ODBC Data Sources]
SNOWFLAKE_ODBC = SnowflakeDSIIDriver
[SNOWFLAKE_ODBC]
Driver = /opt/snowflake/snowflakeodbc/lib/universal/libSnowflake.dylib
Description = Internal Snowflake
uid = <>
server = MY_SERVER
database = <>
schema = <>
warehouse = <>
role = MY_ROLE
tracing = 6
➜ sandbox odbcinst -s -q
[SNOWFLAKE_ODBC]
➜ sandbox isql -v SNOWFLAKE_ODBC <USERNAME> <PASSWORD>
+---------------------------------------+
| Connected!                            |
|                                       |
| sql-statement                         |
| help [tablename]                      |
| quit                                  |
|                                       |
+---------------------------------------+
SQL>
➜ sandbox /usr/local/iODBC/bin/iodbctest
iODBC Demonstration program
This program shows an interactive SQL processor
Driver Manager: 03.52.1521.0607
Enter ODBC connect string (? shows list):
DSN | Driver
------------------------------------------------------------------------------
SnowflakeDSII | Snowflake
Enter ODBC connect string (? shows list): SnowflakeDSII
1: SQLDriverConnect = [iODBC][Driver Manager]dlopen(/opt/snowflake/snowflakeodbc/lib/universal/libSnowflake.dylib, 6): no suitable image found. Did find:
/opt/snowflake/snowflakeodbc/lib/universal/libSnowflake.dylib: no matching architecture in universal wrapper
/opt/snowfl (0) SQLSTATE=00000
2: SQLDriverConnect = [iODBC][Driver Manager]Specified driver could not be loaded (0) SQLSTATE=IM003
I can connect via isql, but iodbctest fails, and my simple test case fails too:
defmodule Sandbox.OdbcTest do
  use Sandbox.OdbcCase

  test "test odbc" do
    conn_str = 'dsn=SnowflakeDSII'
    IO.inspect :odbc.connect(conn_str, [])
  end
end
I also tried a connection string like:
conn_str = 'driver=/opt/snowflake/snowflakeodbc/lib/universal/libSnowflake.dylib;server=<SERVER>;uid=<USERNAME>;pwd=<PASSWORD>;role=<ROLE>;warehouse=TEST_WH;'
➜ sandbox mix test test/odbc_test.exs
true
'driver=/opt/snowflake/snowflakeodbc/lib/universal/libSnowflake.dylib;<MY_DSN>'
{:error,
'[unixODBC][Driver Manager]Can\'t open lib \'/opt/snowflake/snowflakeodbc/lib/universal/libSnowflake.dylib\' : file not found SQLSTATE IS: 01000 Connection to database failed.'}
.
Finished in 0.05 seconds (0.00s async, 0.05s sync)
1 test, 0 failures
Randomized with seed 529170
I am afraid the message from iodbctest explains everything, but I hope there is a solution.
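One way to confirm the architecture mismatch directly (my own check, using the paths from above): inspect the dylib and, if it only contains x86_64, try running the test tools under Rosetta:
file /opt/snowflake/snowflakeodbc/lib/universal/libSnowflake.dylib
# lists the architectures present in the (supposedly) universal binary
arch -x86_64 /usr/local/iODBC/bin/iodbctest
# forces the x86_64 slice via Rosetta, assuming iodbctest itself has an x86_64 slice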
I followed these articles:
https://docs.snowflake.com/en/user-guide/odbc-mac.html
https://community.snowflake.com/s/article/How-to-create-Snowflake-ODBC-DSN-On-MacOS
How do I install the ODBC driver for Snowflake successfully on an M1 Apple Silicon Mac?
The ARM (M1) processor is not yet supported by the ODBC drivers provided by Snowflake.
Snowflake ODBC Driver support for ARM/M1 is now available, and you can download the driver from the Snowflake Client Repository
I got a version of this working for R. You might be able to leverage a similar approach: https://stackoverflow.com/a/71790445/4319571

Why "host:localhost" must be deleted from database.yml under Cent OS 6, PostgreSQL 9.4 and Rails 3.2, or get a error: Ident authentication failed?

All config files described here are the same as on my Mac OS, and everything works fine on Mac OS.
I got the same error in CentOS 6 x86_64:
Ident authentication failed for user 'abelard'
When running the following two commands:
1. rake db:create
2. psql -d testforabelard2 -U abelard -h localhost
I got the same error after trying these answers 1 and 2.
My /var/lib/pgsql/9.4/pg_hba.conf's content is as follows:
host all all 127.0.0.1/32 md5
host all all ::1/128 md5
And there is a blank file /var/lib/pgsql/9.4/pg_ident.conf.
My database.yml's content is as follows:
development:
  adapter: postgresql
  encoding: unicode
  database: social_stream_development
  pool: 5
  username: abelard
  password: password
  host: localhost
  port: 5432
I found a resolution: the error disappears after deleting host: localhost from the above database.yml. But I cannot delete host: localhost, because a sql_host = localhost is generated automatically when using thinking-sphinx for full-text search.
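My understanding of why removing host: helps (an assumption on my part, not something I verified in the docs): without host:, the pg gem connects over the local Unix socket, which is matched by a local line in pg_hba.conf rather than the host ... 127.0.0.1/32 line, so the TCP ident/md5 rules never apply. A typical default layout looks roughly like:
# TYPE   DATABASE   USER   ADDRESS         METHOD
local    all        all                    peer    # socket connections (no host: in database.yml)
host     all        all    127.0.0.1/32    md5     # TCP connections (host: localhost)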
And to provide the same params as on my Mac OS, I altered PostgreSQL's user abelard:
testforabelard2=# \du
              List of roles
 Role name | Attributes  | Member of
-----------+-------------+-----------
 abelard   | Superuser   | {}
           : Create role
           : Create DB
 postgres  | Superuser   | {}
           : Create role
           : Create DB
And I can run the command without -h localhost successfully:
psql -d testforabelard2 -U abelard
I don't know what I'm missing; what should I do to correct this error? Any advice is welcome!
I finally resolved it myself, easily, by moving /var/lib/pgsql/9.4/pg_hba.conf to /var/lib/pgsql/9.4/data/pg_hba.conf.
The reason I made this mistake is that I was going by where the pg_hba.conf file lives on my Mac OS.
Of course, I thank this earlier blog post, “FATAL: IDENT AUTHENTICATION FAILED”, OR HOW COOL IDEAS GET BAD USAGE SCHEMAS, which made me realise the file was in the wrong place!
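One more check that would have saved time here: ask the running server which pg_hba.conf it is actually reading, instead of guessing from the package layout (run as the postgres OS user if needed):
psql -U postgres -c 'SHOW hba_file;'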

Connect to MSSQL via ODBC with FreeTDS

I am working with a group who needs to access an MSSQL db from a Linux host, and in my searching I found FreeTDS. I am able to connect with FreeTDS, but our programmer states that ODBC will need to be configured with FreeTDS for their PHP code to work. With that being said, I have tried configuring both unixODBC and unixODBC_23 for the past day and have been unsuccessful in finding a config that works properly; I am also not able to get tracing working. So, without further ado, here is my config:
--- odbc.ini and odbc_23.ini ---
[TC]
Description = FreeTDS Connection
Driver = FreeTDS
Database = mydb
ServerName = 192.168.1.12
TDS_Version = 7.0
PORT = 3433
[Default]
Driver = /usr/local/freetds-0.91/lib/libtdsodbc.so
---odbcinst.ini and odbcinst_23.ini---
[FreeTDS]
Description = FreeTDS
Driver = /usr/local/freetds-0.91/lib/libtdsodbc.so
Trace = 1
TraceFile = /tmp/freetds.log
UsageCount = 1
When I try connecting via isql, here is what I receive:
root@host(~)# isql_23 -v TC myuser mydb
[S1000][unixODBC][FreeTDS][SQL Server]Unable to connect to data source
[01000][unixODBC][FreeTDS][SQL Server]Unknown host machine name.
[ISQL]ERROR: Could not SQLConnect
root@host(~)#
Any ideas would be greatly appreciated!
Try Server instead of ServerName?
Server = 192.168.1.12
OK, so there was one additional change I had to make besides changing "ServerName" to "Server": I removed "Database = mydb" and moved the database onto the "Server" line, and now my file looks like this:
[TC]
Description = FreeTDS
Driver = FreeTDS
Server = 192.168.1.12\mydb
TDS_Version = 7.0
PORT = 3433
And now I'm connected with this command:
root@host(~)# isql_23 -v TC user password
+---------------------------------------+
| Connected!                            |
|                                       |
| sql-statement                         |
| help [tablename]                      |
| quit                                  |
|                                       |
+---------------------------------------+
SQL> ^C
root@host(~)#
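As a general troubleshooting step, tsql can bypass the ODBC layer entirely, which quickly tells you whether the host, port and credentials are good before you start blaming odbc.ini. Using the host and port from this config (substitute real credentials):
tsql -H 192.168.1.12 -p 3433 -U myuser -P mypassword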

PowerShell remote invocation mysteriously hangs

I have created a series of functions that collect all the IIS configuration for a site. When run locally on a server they execute without issue (albeit slowly), but when I run them remotely using Invoke-Command in PowerShell 2, the run mysteriously stops approximately 15-20 seconds into the process. It generally stalls on the same request, but not always. The same commands executed locally work without any issues. No exception is raised; it just hangs indefinitely.
I can post the code if necessary however it is several hundred lines so I'm more looking for guidance on how to investigate a problem like this or if anyone has encountered something similar.
Comparing IISConfig between [targetserver] and localhost.
Checking Installed IIS version on [targetserver]:
IIS major version : 7
IIS minor version : 5
IIS7+ detected, using WebAdmin module and IIS metabase
Name Value
---- -----
name Default Web Site
id 1
serverAutoStart True
state 1
Site Configuration:
Name Path PSPath Handlers_Ac Access_sslF Asp_AppAllo Asp_AppAllo Asp_limits_ Asp_EnableP Asp_limits_
cessFlags lags wClientDebu wDebugging bufferingLi arentPaths queueTimeou
g mit t
---- ---- ------ ----------- ----------- ----------- ----------- ----------- ----------- -----------
Default ... IIS:Site... WebAdmin... Read,Script False False 25000000 True 00:00:00
WebApp VDir: /MyApp, App Pool: MyApp
App pool Configuration:
AppPoolID Enable32Bit managedPipe managedRunt AppPoolName AppPoolAuto processMode processMode processMode recycling_l
AppOnWin64 lineMode imeVersion Start l_idleTimeo l_identityT l_UserName ogEventOnRe
ut ype cycle
--------- ----------- ----------- ----------- ----------- ----------- ----------- ----------- ----------- -----------
False Classic v2.0 MyApp True 00:20:00 LocalSer... Time,Req...
Analyzing web directories for /MyApp, this could take a while....
Initial Collection Completed, found 141... took 0.9516122 seconds
0 C:\inetpub\wwwroot\MyApp\Core
1 C:\inetpub\wwwroot\MyApp\Core\AdminTools
2 C:\inetpub\wwwroot\MyApp\Core\AdminTools\Cache
3 C:\inetpub\wwwroot\MyApp\Core\AdminTools\Extra
4 C:\inetpub\wwwroot\MyApp\Core\AdminTools\HTTPPostTest
5 C:\inetpub\wwwroot\MyApp\Core\AdminTools\IISAdmin
6 C:\inetpub\wwwroot\MyApp\Core\AdminTools\Profiling
7 C:\inetpub\wwwroot\MyApp\Core\AdminTools\RecordTestData
8 C:\inetpub\wwwroot\MyApp\Core\AdminTools\ScrambleTest
9 C:\inetpub\wwwroot\MyApp\Core\AdminTools\Sessions
Analyzed 10 so far... took 6.7236862 seconds, remaining time 88.08028922 seconds
Current Folder: C:\inetpub\wwwroot\MyApp\Core\AdminTools\Sessions
10 C:\inetpub\wwwroot\MyApp\Core\AdminTools\SoapTest
11 C:\inetpub\wwwroot\MyApp\Core\AdminTools\StaticContent
Sometimes it makes it to 15 or so. I tried from my laptop and from one server to another and the behavior is the same.
Here is the loop which is hanging:
$start = [System.DateTime]::Now
$numanalyzed = 0
if ($true) #skip to test
{
# loop through all physical folders as it is much faster
foreach ($folder in $folders)
{
write-host $numanalyzed $folder.fullname
#figure out the virtual path to the folder
$iis7vwebfolderpath = $folder.FullName.Replace($iis7webapp.PhysicalPath, $iis7VDirWebApppath)
#Get-item $iis7vwebfolderpath | gm
$iis7VWebDirConfigItem = Get-LNOSIIS7ConfigForPSPath -PSPath $iis7vwebfolderpath
# add new item to list
$iis7VWebDirConfig += $iis7VWebDirConfigItem
# increment counter and report out progress every 10
$numAnalyzed++
if ($numanalyzed % 10 -eq 0)
{
$end = [System.DateTime]::Now
$timeSoFar = (New-TimeSpan -Start $Start -End $End).TotalSeconds
$timeremaining = ($folders.Count - $numAnalyzed) * ($timeSoFar / $numanalyzed)
"Analyzed {0} so far... took {1} seconds, remaining time {2} seconds" -f $numanalyzed,$timeSoFar,$timeremaining | write-host
"Current Folder: {0}" -f $folder.FullName | Write-Host
}
}
}
$end = [System.DateTime]::Now
"Processed web dirs: {0} took {1} seconds" -f $iis7VWebDirConfig.Count,(NEW-TIMESPAN –Start $Start –End $End).TotalSeconds | write-host | Write-Host
The function I'm having performance problems with is the subject of a separate question of mine, and that post has the source code for the function:
web-administration vs WMI to query web directory properties performance problems
In my case, it seemed my PowerShell call froze due to the Idle-Timeout expiration (the call runs for a very long time).
Setting IdleTimeout value to a sufficiently long duration fixed my issue.
Once again, query the current configuration using
winrm get winrm/config/winrs
And set the timeout using
winrm set winrm/config/winrs '@{IdleTimeout="18000000"}'
I think I may have discovered the problem; I started getting some odd failures in other parts of the script:
[SEVERNAME] Processing data from remote server SERVERNAME failed with the following error message: The WSMan provider host process did not return a proper response. A provider in the host process may have behaved improperly. For more information, see the about_Remote_Troubleshooting Help topic.
+ CategoryInfo : OpenError: (SERVERNAME:String) [], PSRemotingTransportException
+ FullyQualifiedErrorId : 1726,PSSessionStateBroken
and
Processing data for a remote command failed with the following error message: Not enough storage is available to complete this operation. For more information, see the about_Remote_Troubleshooting Help topic.
+ CategoryInfo : OperationStopped (System.Manageme...pressionSyncJob:PSInvokeExpressionSyncJob) [], PSRemotingTransportException
+ FullyQualifiedErrorId : JobFailure
This lead me to the following site: http://www.gsx.com/blog/bid/83018/Troubleshooting-unknown-PowerShell-error-messages
The following recommendations seem to have cleared up most of the problems, although I still have some testing to do.
Excerpt from site below:
As the first error message specifies, an overflow of memory in the remote session has occurred. Open a PowerShell prompt on the remote server and display the configuration of winrs using:
winrm get winrm/config/winrs
Check the "MaxMemoryPerShellMB" value. It is set by default to 150 MB on Windows Server 2008 R2 and Windows 7. This is something that Microsoft changed in Windows Server 2012 and Windows 8 to 1024 MB.
In order to resolve this issue, you need to increase the value to at least 512 MB with the following command:
winrm set winrm/config/winrs `@`{MaxMemoryPerShellMB=`"512`"`}
As an FYI if Invoke-Command always hangs:
Try a simple command against the system:
Invoke-Command -ComputerName XXXXX -ScriptBlock { Get-ItemProperty -Path HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion }
Start the Windows Remote Management Service (on that system)
Check for the listening port:
netstat -aon | findstr "5985"
TCP 0.0.0.0:5985 0.0.0.0:0 LISTENING 4
TCP [::]:5985 [::]:0 LISTENING 4
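Another quick check from the calling machine is to ask the remote WinRM endpoint to identify itself (same placeholder server name as above; add -u:/-p: credentials or run from an elevated prompt as appropriate):
winrm id -r:SERVERNAME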

Monitoring URLs with Nagios

I'm trying to monitor actual URLs, and not only hosts, with Nagios, as I operate a shared server with several websites, and I don't think it's enough just to monitor the basic HTTP service. (I'm including at the very bottom of this question a small explanation of what I'm envisioning.)
(Side note: please note that I have Nagios installed and running inside a chroot on a CentOS system. I built nagios from source, and have used yum to install into this root all dependencies needed, etc...)
I first found check_url, but after installing it into /usr/lib/nagios/libexec, I kept getting a "return code of 255 is out of bounds" error. That's when I decided to start writing this question (but wait! There's another plugin I decided to try first!)
After reviewing this question, which had almost exactly the same problem I'm having with check_url, I decided to open up a new question on the subject because:
a) I'm not using NRPE with this check
b) I tried the suggestions made on the earlier question to which I linked, but none of them worked. For example...
./check_url some-domain.com | echo $0
returns "0" (which indicates the check was successful)
I then followed the debugging instructions on Nagios Support to create a temp file called debug_check_url, and put the following in it (to then be called by my command definition):
#!/bin/sh
echo `date` >> /tmp/debug_check_url_plugin
echo $* >> /tmp/debug_check_url_plugin
/usr/local/nagios/libexec/check_url $*
Assuming I'm not in "debugging mode", my command definition for running check_url is as follows (inside command.cfg):
# 'check_url' command definition
define command{
    command_name    check_url
    command_line    $USER1$/check_url $url$
}
(Incidentally, you can also view what I was using in my service config file at the very bottom of this question)
Before publishing this question, however, I decided to take one more shot at figuring out a solution. I found the check_url_status plugin and decided to give that one a try. To do that, here's what I did:
mkdir /usr/lib/nagios/libexec/check_url_status/
downloaded both check_url_status and utils.pm
Per the user comment / review on the check_url_status plugin page, I changed "lib" to the proper directory of /usr/lib/nagios/libexec/.
Ran the following:
./check_url_status -U some-domain.com
When I run the above command, I kept getting the following error:
bash-4.1# ./check_url_status -U mydomain.com
Can't locate utils.pm in @INC (@INC contains: /usr/lib/nagios/libexec/ /usr/local/lib/perl5 /usr/local/share/perl5 /usr/lib/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib/perl5 /usr/share/perl5) at ./check_url_status line 34.
BEGIN failed--compilation aborted at ./check_url_status line 34.
So at this point, I give up, and have a couple of questions:
Which of these two plugins would you recommend? check_url or check_url_status?
(After reading the description of check_url_status, I feel that this one might be the better choice. Your thoughts?)
Now, how would I fix my problem with whichever plugin you recommended?
At the beginning of this question, I mentioned I would include a small explanation of what I'm envisioning. I have a file called services.cfg which is where I have all of my service definitions located (imagine that!).
The following is a snippet of my service definition file, which I wrote to use check_url (because at that time, I thought everything worked). I'll build a service for each URL I want to monitor:
###
# Monitoring Individual URLs...
#
###
define service{
    host_name               {my-shared-web-server}
    service_description     URL: somedomain.com
    check_command           check_url!somedomain.com
    max_check_attempts      5
    check_interval          3
    retry_interval          1
    check_period            24x7
    notification_interval   30
    notification_period     workhours
}
I was making things WAY too complicated.
The built-in / installed by default plugin, check_http, can accomplish what I wanted and more. Here's how I have accomplished this:
My Service Definition:
define service{
    host_name               myers
    service_description     URL: my-url.com
    check_command           check_http_url!http://my-url.com
    max_check_attempts      5
    check_interval          3
    retry_interval          1
    check_period            24x7
    notification_interval   30
    notification_period     workhours
}
My Command Definition:
define command{
    command_name    check_http_url
    command_line    $USER1$/check_http -I $HOSTADDRESS$ -u $ARG1$
}
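Before wiring it into Nagios, the plugin can be exercised by hand to confirm it behaves (the IP below is a placeholder for the host's address):
/usr/local/nagios/libexec/check_http -I 192.0.2.10 -u http://my-url.com
echo $?   # 0 = OK, 1 = WARNING, 2 = CRITICAL, 3 = UNKNOWN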
A better way to monitor URLs is to use WebInject, which can be used with Nagios.
The problem below is because you don't have the Perl utils package (utils.pm); try installing it.
bash-4.1# ./check_url_status -U mydomain.com
Can't locate utils.pm in @INC (@INC contains:
You can make a script plugin. It is easy; you only have to check the URL with something like:
curl -Is $URL -k | grep HTTP | cut -d ' ' -f2
$URL is what you pass to the script as a parameter.
Then check the result: if the code is greater than 399 you have a problem; otherwise everything is OK. Then exit with the right exit code and message for Nagios.
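A minimal sketch of that idea as an actual plugin script (the script name, argument handling and messages are my own; adjust to taste):
#!/bin/sh
# usage: check_url_code.sh <url>   (hypothetical name; pass the URL as the first argument)
URL="$1"
CODE=$(curl -Is -k "$URL" | grep HTTP | cut -d ' ' -f2)
if [ -z "$CODE" ]; then
    echo "UNKNOWN - no HTTP response from $URL"
    exit 3
elif [ "$CODE" -gt 399 ]; then
    echo "CRITICAL - $URL returned HTTP $CODE"
    exit 2
else
    echo "OK - $URL returned HTTP $CODE"
    exit 0
fi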
