When to restart Sensu

When must the sensu-client be restarted? I've noticed that I can just restart the sensu-api and sensu-server, and new checks that I add seem to work. Do I need to restart the clients, as the docs say?

I think the answer is no. You only have to restart clients when adding new configurations to them. Otherwise they'll automatically pick up new checks that the server dictates.

For the most part, configuration management tools will restart the right components when necessary.
The clients do need to be restarted when client-side configuration (standalone checks, RabbitMQ settings, etc.) changes.
Also, if you run a sensu-client on your sensu-server host, that local client needs to be restarted even when only server-side definitions change, due to this bug:
https://github.com/sensu/sensu/issues/806
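As a concrete illustration: a standalone check lives on the client, so adding one means restarting that client. A sketch, where the path, plugin, and thresholds are all placeholders:

# /etc/sensu/conf.d/check_disk.json on the client (hypothetical file)
{
  "checks": {
    "check_disk": {
      "command": "check-disk-usage.rb -w 80 -c 90",
      "standalone": true,
      "interval": 60
    }
  }
}

# restart needed only because this is client-side configuration
sudo service sensu-client restart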

Related

Is it possible with Docker (or on the host) to detect an HTTP request and then redirect if the main server is down? How?

I'm sure this is possible somehow, but I've never really had a need to do so before. I have a bunch of Docker containers that run on an Ubuntu host. One of the containers is an NGINX server that acts as a webserver and reverse proxy. What I would like to do is set up some sort of 'switch' or mechanism, on the host or preferably another device, that does something like what's described below. The server hosts both production and development versions of the web applications, so if it is running but in maintenance I really just want to set a header.
1. Set up something on the host or another device that indicates or detects the state of the NGINX server.
a. Running normally.
b. Running but undergoing maintenance (would have to be manually set into that mode)
c. Not running.
2. Depending upon the state in 1, it would do the following:
a. Just pass the request through to the server.
b. Pass through the request, but possibly set a header or something to indicate it is in maintenance mode.
c. Redirect the request to an external URL, basically the public facing page for the business.
Not really sure how to approach this, since it seems I would need an HTTP listener on the host, or possibly on a router, firewall, or other device (we have a FortiGate and a WatchGuard), that would check the HTTP request and then take the appropriate action based on what "mode" we are in. The Ubuntu host is pretty much bare bones, i.e. without Apache or another web server, because everything is in the Docker package.
If I were to set that up on the host, it seems we could just have an environment variable that defaults to PROD, gets set to DEV while we are working on the code, and goes back to PROD when we are done; the process would then:
ping the Dockerized NGINX instance to see if it is running (i.e. check the status code).
if it is running and PROD, just forward on.
if it is running and DEV, set a header to indicate so and forward on.
if it is not running, redirect externally.
If the server is completely down though, that would fail.
Any ideas as to how to actually do that? Ideally the 'processor' would not reside on the Ubuntu server at all and would always be running.
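For what it's worth, one way to sketch the 'processor' is a second NGINX (on the host or, ideally, another box) sitting in front of the container: it passes requests through (case a), stamps a maintenance header driven by a manually toggled flag file (case b), and redirects to an external URL when the container is unreachable (case c). This is an untested sketch; the IPs, file paths, and URLs are all placeholders:

# hypothetical /etc/nginx/conf.d/failover.conf on the 'processor'
upstream docker_nginx {
    server 172.17.0.2:80;                             # the NGINX container (placeholder)
}

server {
    listen 80;

    # maintenance mode is toggled manually: touch/remove the flag file
    set $maintenance 0;
    if (-f /etc/nginx/maintenance.flag) {
        set $maintenance 1;
    }

    location / {
        proxy_set_header X-Maintenance $maintenance;  # case b: header for the backend
        proxy_pass http://docker_nginx;               # cases a and b: pass through
        proxy_intercept_errors on;
        error_page 502 503 504 = @server_down;        # case c: container down
    }

    location @server_down {
        return 302 https://www.example.com/;          # public-facing page (placeholder)
    }
}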

Force TFS to use CredSSP when accessing network resources

I am using TFS 2017 Update 2 Release processing. I have a functioning deploy process that works within a domain (it runs successfully against 10 different deployment environments)... and now I need to deploy into a different environment, which lives in a different A/D domain.
Unfortunately, the domain trust is one-way between the domains, and the destination domain ("Production") does not trust the domain I am installing from ("Dev").
The problem I'm seeing seems to be the infamous "double hop" credential problem.
My TFS app tier can see (and trigger activity on) the release server running TFS vNext Agent 2.117.2. Further, I can execute inline PowerShell and locally hosted PowerShell scripts on the release server just fine.
However, as soon as I try to access a PowerShell script that is not on the release server (be it in the Production domain with the release server, or in the Dev domain), I get an error:
2018-02-13T19:03:32.6611149Z ##[error]. : AuthorizationManager check failed.
At line:1 char:3
+ . '\\unc\path\to\share\TFSScripts\Emit-Variables2. ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : SecurityError: (:) [], PSSecurityException
+ FullyQualifiedErrorId : UnauthorizedAccess
The account running the TFS release service has been confirmed to have access to the script file when running from the desktop of the release server, so access should not be an issue.
Further testing of the issue has shown that if we manually create a PSSession using -Authentication CredSSP and pass a credentials object, we can successfully access the off-server resources.
However, I can see no way to configure TFS to use CredSSP as the authentication mechanism.
The servers involved are W2K8R2, so we can't use the constrained delegation functionality that W2K12 introduced. We have also tried SPNs, with similarly unsuccessful results. Kerberos has been forced to use TCP by setting the max packet size to 0 (thus also preventing fragmented UDP packets and related problems). Our max Kerberos token size is set to 48000.
In the ultimate end state, the TFS app server and all the TFS artifacts and release scripts will sit in the "Dev" domain on one side of a firewall... and the production release server, and a set of servers to release to, will exist in the "Production" domain on the other side of the firewall.
CredSSP seems to be the only way to make this work - but I see no way for TFS to be configured for it.
This can't be a unique problem. Can someone provide some insight on how to get around this?
Sorry, it's not possible to force TFS to use CredSSP when accessing network resources, and there is no configuration in TFS for using CredSSP as the authentication mechanism.
You must manually enable CredSSP in PowerShell.
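For reference, a minimal hand-rolled sketch of that approach (the domain and machine names are placeholders):

# on the release/agent server: allow delegating fresh credentials
Enable-WSManCredSSP -Role Client -DelegateComputer "*.production.example.com" -Force

# on each target server in the Production domain
Enable-WSManCredSSP -Role Server -Force

# then, in the release script, hop with explicit credentials over CredSSP
$cred = Get-Credential
$session = New-PSSession -ComputerName "target01.production.example.com" -Authentication Credssp -Credential $cred
Invoke-Command -Session $session -ScriptBlock { . '\\unc\path\to\share\TFSScripts\SomeScript.ps1' }  # placeholder script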
Alternatively, take a look at this solution, which may do the trick: TFS2015 Release Management: Deploying to an untrusted domain, by having the deployment agent run under a shadow account.

WebSocket connection failure. Due to security constraints in your web browser

Today I downloaded neo4j-community-3.2.0 on Windows, and when I start the server I hit a problem in the browser. I ran into the same problem with neo4j-community-3.1.2 and solved it by ticking the "Do not use Bolt" option in the settings. But in neo4j-community-3.2.0 I can't see a "Do not use Bolt" option, and I don't know what to do.
N/A: WebSocket connection failure. Due to security constraints in your web browser, the reason for the failure is not available to this Neo4j Driver. Please use your browsers development console to determine the root cause of the failure. Common reasons include the database being unavailable, using the wrong connection URL or temporary network problems. If you have enabled encryption, ensure your browser is configured to trust the certificate Neo4j is configured to use. WebSocket readyState is: 3
This happens because the browser is trying (under the hood) to also access the bolt port, which uses an unsigned certificate.
You probably allowed the browser to access the SSL 7474 port by accepting the unsigned certificate as an exception in your browser (and if you didn't, you should, in order to make it work).
The url was:
https://[neo4j_host]:7474
Do the same for the Bolt certificate; allow it as an exception for the url:
https://[neo4j_host]:7687
I ran into the same problem trying to use Neo4j Community Edition on an AWS Ubuntu 16.04 instance. The key thing that solved it was to open port 7687 (the bolt port) in the AWS security group settings.
Found this based on https://stackoverflow.com/a/45234105/1529646
Thus, the full answer is:
Make sure Neo4j is configured correctly, i.e. uncomment the line dbms.connectors.default_listen_address=0.0.0.0 AND the line dbms.connector.bolt.listen_address=:7687
Open ports 7474 AND 7687 in the AWS security group settings.
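For reference, the relevant excerpt of ${NEO4J_HOME}/conf/neo4j.conf once those two lines are uncommented:

# listen on all interfaces, Bolt on its default port
dbms.connectors.default_listen_address=0.0.0.0
dbms.connector.bolt.listen_address=:7687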
In the lower-left corner of the Neo4j Browser, under the gear (settings) icon, select "Do not use Bolt".
Open your ${NEO4J_HOME}/conf/neo4j.conf file and edit the Bolt settings. It is just a matter of uncommenting this line: dbms.connector.bolt.address=0.0.0.0:7687
Change the version of Neo4j.
Check your JDK version; use JDK 1.8.
Adding another option that worked for me: if your Bolt connector's tls_level is set to REQUIRED, you need to change it to OPTIONAL (if you are not using it with an SSL certificate) to get this working.
If you are using Neo4j Community Edition (ver 3.5.1 in my case) from the AWS Marketplace, you need to change the configuration in:
/etc/neo4j/pre-neo4j.sh
Change this line:
echo "dbms_connector_bolt_tls_level" "${dbms_connector_bolt_tls_level:=REQUIRED}"
to
echo "dbms_connector_bolt_tls_level" "${dbms_connector_bolt_tls_level:=OPTIONAL}"
You can find more about Neo4j connector configuration options here. Ideally, per the docs, bolt.tls_level should default to OPTIONAL. I'm not really sure what exactly happened in my case to change it to REQUIRED, or whether it came that way from the AWS Marketplace.
Assuming you have valid certs and have placed them under the correct certificates directory:
dbms.ssl.policy.bolt.client_auth=NONE
This is for version 4.0; I took it from this article.
I shared my full ssl config on this other answer.
I had the same error. I'm new to Neo4j, so take this with a grain of salt, but my solution didn't match the ideas above. Thanks, though, as they did lead me to the right "water". So:
I went into the conf file and noticed that the same port number was used everywhere. Previously, Neo4j Desktop had constantly been telling me it needed to update the port numbers (I never checked to verify, but they'd be N, N+1 and N+2), and that had happened again and again. Now, after checking the conf file myself, I noticed that the number was the same for all three Bolt port settings. I tried fixing that and it didn't work either, but maybe it was important to what did:
In the folder where the specific database is housed, named "..neo4jdatabases/[GUID Value]", there were two directories, "installation-3.4.0" and "...1". I removed the ".0" one, restarted things, and IT WORKED.
So either there should NOT be two versions under the same database collection, OR that's true AND you need the three ports to be the same.
A final note for any Neo4j experts who actually know what they're doing: I have three databases running, two without issue. This occurred AFTER I was messing around trying to see how PowerShell might be useful. Not sure if that is related, but the other databases have worked fine; this db is the original playground/sandbox I've had since the beginning. I'm not 100% sure whether I made the version update before or after creating the other two databases. HTH.
I'm using a Windows trial version on a Windows 10 machine. The current Neo4j version is 3.4.1.
Do love what I see so far with Neo BTW!!!
Make sure the correct Bolt port is entered in the Connect URL textbox. If you are using the service port, then enter the service port in place of the Bolt port.
I finally resolved it by replacing the Bolt port with the service port inside k8s.
user: neo4j
password: neo4j
I resolved this error by replacing the port 7687 with the node port 30033 inside Neo4j, and then it worked fine.
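For context, a hypothetical Kubernetes Service of type NodePort matching that description; the name, label, and selector are illustrative, and only the 7687/30033 mapping comes from the answer above:

apiVersion: v1
kind: Service
metadata:
  name: neo4j-bolt            # illustrative name
spec:
  type: NodePort
  selector:
    app: neo4j                # assumes the Neo4j pod carries this label
  ports:
    - name: bolt
      port: 7687              # Bolt port inside the cluster
      targetPort: 7687
      nodePort: 30033         # the node port used in the browser's Connect URL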
I was facing the same issue with Neo4j version 4 installed on an Ubuntu 18 EC2 instance. The workaround that did the trick for me was to replace the 0.0.0.0 entries in /etc/neo4j/neo4j.conf with the actual private IP of my instance.
These are the lines where the replacement happened:
dbms.default_listen_address=172.X.X.232
dbms.connector.bolt.address=172.X.X.232:7687
After restarting the DB, the Connect URL used when accessing from the browser should also use the private IP instead of localhost.

Jenkins service won't start unless it has access to 178.255.83.1

We recently went through some network policy updates and I've discovered that my Jenkins server's jenkins service will no longer restart as expected (this worked fine prior to the policy updates).
There doesn't seem to be any logging information written during service startup (no log files get updated).
Is there a list of external IPs that Jenkins needs to access in order to start up properly?
Looking at the logs, it seems that part of the service start-up process is to contact one of the OCSP servers. This appears to be related to certificate verification, so it's probably legitimate traffic.
Once an exception was added for the target address (http://178.255.83.1:80), the Jenkins service started up without issues.
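If you need to verify the same thing, a quick reachability check from the Jenkins host (plain curl, nothing Jenkins-specific; a hang or timeout suggests the firewall is still blocking the OCSP responder):

curl -v --max-time 10 http://178.255.83.1/ -o /dev/null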

RoR app deployed on Heroku and working with SQL Server database

Is it feasible to have a Ruby on Rails app, which is:
a) deployed on Heroku, and
b) working with a remote SQL Server database?
I take it that I'll need unixODBC installed on Heroku, but I cannot find a way to do so. Is this possible?
Or, is there any other way (without ODBC?) to accomplish this?
Thank you very much for any guidance or tip.
Updated:
Some info on the subject:
1) Heroku pre-installs both unixODBC and FreeTDS by default, so you already have them.
2) Also, it is possible to run shell commands via the Heroku console using backticks, e.g.:
heroku console
`odbcinst`
(runs "odbcinst" command in Heroku shell and shows the result)
3) You do not have access to the filesystem outside of your slice, where the packages are installed. If you only need a driver path, Heroku support can provide it (/usr/lib/odbc/libtdsodbc.so in my case).
4) You cannot run sudo commands in Heroku shell.
At the moment, to connect to MS SQL Server you at least need to append to the 'freetds.conf' file, even when using TinyTDS (there is an open ticket #2 on the TinyTDS GitHub issues page). The DSN-less connection instructions from wiki.rubyonrails.org/database-support/ms-sql didn't work for me; I guess that kind of connection requires some extra configuration as well.
'freetds.conf' cannot be modified without sudo. Therefore, I conclude that currently there is no way to make MS SQL Server and Heroku work together.
I’ve managed to set up this connection with EngineYard and activerecord-sqlserver-adapter.
I followed these instructions:
https://github.com/rails-sqlserver/activerecord-sqlserver-adapter/wiki/Platform-Installation---Ubuntu
(there are only some filepath differences, e.g. 'odbc.ini' is located in '/etc/unixODBC', not in '/etc'; this is easy to work out).
I installed the 'unixODBC' and 'freetds' packages using the EY Unix Packages feature, and made all the configurations manually over SSH. Sudo is available on EY (no password required). There is also a Chef Recipes feature to automate those configurations (it seems pretty easy; I'm going to try it tomorrow).
Hope this is helpful.
It is possible.
Because Heroku copies/symlinks its own config/database.yml over whatever you supply in your repository, you may need to take additional steps (e.g. in config/environments/production.rb or in config/initializers/remote_mssql_from_heroku.rb) to set up your application appropriately.
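For illustration, such an initializer might look like the sketch below. It assumes activerecord-sqlserver-adapter, and every ENV variable name here is made up for the example:

# config/initializers/remote_mssql_from_heroku.rb (hypothetical)
# Re-point ActiveRecord at the external SQL Server after Heroku has
# written its own config/database.yml.
if ENV['MSSQL_HOST']
  ActiveRecord::Base.establish_connection(
    adapter:  'sqlserver',
    host:     ENV['MSSQL_HOST'],
    port:     (ENV['MSSQL_PORT'] || 1433).to_i,
    database: ENV['MSSQL_DATABASE'],
    username: ENV['MSSQL_USERNAME'],
    password: ENV['MSSQL_PASSWORD']
  )
end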
You will face the challenge, however, that traffic from Heroku to your MSSQL database will traverse the public internet. By default, this traffic will not be encrypted. Potentially everyone in the world will be able to monitor your traffic between your Heroku application and your database, and even alter the traffic in-flight, whether for benign or malicious purpose, without you being able to detect it. MS SQL offers the capability to connect over SSL. This capability requires explicit configuration in the MSSQL server, so you must be able to access and modify that configuration. Additionally, this configuration requires that your client library be up-to-date and capable of talking with MSSQL over SSL. Note that MSSQL server will enforce that your server certificate list a Common Name or Subject Alternative Name exactly matching or wildcard-matching the server's FQDN (at least, the FQDN that the server knows about), and that the client use an FQDN for the server exactly matching or wildcard-matching one of the names on the certificate.
I've successfully used the following article which uses Heroku's newer buildpack feature to use TinyTDS and connect remotely to SQL Server 2008 R2. I'm still investigating how I could encrypt traffic. Hope this helps others!
http://blog.firmhouse.com/connecting-to-sql-server-from-heroku-with-freetds-here-is-how-on-cedar#
We're having a similar problem, where we need to import old data from a SQL Server database into our new app. The data isn't a straight table import; it needs to undergo some processing and conversions. We've built an import layer for this which lives in a private gem, so as not to pollute the new app with the old data conversion issues. This approach is also designed to permit incremental updates: as we get closer to launch we'll keep syncing records up to the moment of switch-over.
Heroku told us that it's not trivial to connect to SQL Server, in particular as they don't support FreeTDS. Their support staff recommended running an instance with the import gem from a laptop in our office, configured to connect to their database (which requires a dedicated DB, not the free shared one). This sounded like the most palatable approach to us.
Secondly, regarding the security concerns that @Justice mentioned, we discussed configuring SSL for SQL Server with the hosting company, and they pointed out the complexities of this. They recommended a VPN as an easier solution. As we don't have office-side VPN hardware, the simplest and free solution proved to be an SSH tunnel.
We set up an SSH tunnel from the laptop to the SQL Server Windows box. That was straightforward. We had CopSSH installed on Windows (which comes with a Linux shell, by the way), and we were able to simply set up a tunnel, having the laptop talk to localhost for its SQL Server connection, i.e.:
ssh -L 1433:localhost:1433 user@windows_server_name
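With the tunnel up, the importing app on the laptop just points at localhost; a database.yml sketch, where the database name and credentials are placeholders:

development:
  adapter: sqlserver
  host: localhost     # the SSH tunnel forwards this to the Windows box
  port: 1433
  database: legacy_import
  username: import_user
  password: secret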
I did not know Heroku has FreeTDS on it; I was told they did not. TinyTDS, if used with FreeTDS 0.91, can have a zero freetds.conf dependency and be driven by runtime connection args. We are looking into building an Ubuntu 10.04 native gem that statically links 0.91 with OpenSSL, so you can just drop it into Heroku and use it to connect to Azure and/or your own outside DB.
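To illustrate the conf-less, runtime-args style described above (host and credentials are placeholders):

require 'tiny_tds'

# no freetds.conf needed: everything is passed at connect time
client = TinyTds::Client.new(
  host:     'db.example.com',
  port:     1433,
  username: 'app_user',
  password: 'secret',
  database: 'mydb'
)
result = client.execute('SELECT 1 AS ok')
result.each { |row| puts row.inspect }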
