Monit: opening a URL when a check fails

I love the site. It's a good place to look for answers. Thanks for that.
I am wrestling with a feature of Monit on a test server.
It runs lighttpd, mysql, ssh, proftpd, postfix and dovecot.
If one of them fails, I would like to receive a text message in addition to the default email. I have an SMS gateway in use, with VoipBuster. If I request a URL (as shown below), I receive a text message.
https://www.voipbuster.com/myaccount/sendsms.php?username=xxxxxx&password=xxxxxx&from=xxxxxx&to=xxxxxx&text=xxxxxx
I have tried including this in my monit config, but I just can't get it to work.
Here's what I tried.
Including an 'if failed then' clause under every check, like this:
check process lighttpd with pidfile /var/run/lighttpd.pid
  group lighttpd
  start program = "/etc/init.d/lighttpd start"
  stop program = "/etc/init.d/lighttpd stop"
  if failed host 178.21.118.206 port 80 protocol http then restart
  if 5 restarts within 5 cycles then timeout
if failed then (url https://www.voipbuster.com/myaccount/sendsms.php?username=xxxxxx&password=xxxxxx&from=xxxxxx&to=xxxxxx&text=CHECK EMAIL -- SERVER ERRORS!)
But I keep getting errors like this when I restart Monit:
/etc/monit/monitrc:194: Error: syntax error 'EMAIL'
I tried moving the 'if failed then' clause around, but I don't know how to solve this anymore.

Thanks for the possible solutions.
I ended up using Pushover, which is an app for mobile phones.
With this app you can send an email to a specific email address (a built-in feature of Monit) and get push notifications on your mobile.
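For the original approach, note that Monit has no url action; the parser chokes on the unquoted spaces in the message (hence the syntax error at 'EMAIL'). What does work is having Monit exec a small wrapper script that calls the gateway with curl, with the message URL-encoded. A sketch, with a hypothetical script path and the same placeholder credentials:

```shell
#!/bin/sh
# /usr/local/bin/sms-alert.sh (hypothetical path), wired into monitrc as:
#   if failed host 178.21.118.206 port 80 protocol http
#       then exec "/usr/local/bin/sms-alert.sh"
MSG="CHECK EMAIL -- SERVER ERRORS!"
# URL-encode the characters that broke the inline attempt (spaces and '!')
ENCODED=$(printf '%s' "$MSG" | sed 's/ /%20/g; s/!/%21/g')
curl -s "https://www.voipbuster.com/myaccount/sendsms.php?username=xxxxxx&password=xxxxxx&from=xxxxxx&to=xxxxxx&text=${ENCODED}"
```

Monit's exec action is its standard way to run an arbitrary command when a check fails, and the quoting keeps both the shell and Monit's parser out of the URL.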

Related

How to get Sendgrid Inbound Parse Webhook working in rails app on production?

I’m trying to build a rails app that will process inbound mail. I got the app to work on my localhost machine using the Rails conductor and action mailbox. When an email gets sent, I’m able to save the contents of the email. But I’m having difficulty getting it to work on a production environment…I’m not sure how to configure my domain and settings to get it to work.
I’ve been following the instructions here:
https://edgeguides.rubyonrails.org/action_mailbox_basics.html#sendgrid
and https://sendgrid.com/docs/for-developers/parsing-email/setting-up-the-inbound-parse-webhook/
I included this in my rails credentials:
action_mailbox:
  ingress_password: mypassword
I have set up an MX record on Google Domains:
parse.[mydomain].com

I pointed to a Hostname and URL.
https://actionmailbox:mypassword@parse.[mydomain].com/rails/action_mailbox/sendgrid/inbound_emails
I send an email from my email account to
parse@parse.[mydomain].com
but I’m not able to test or track what is happening to this email. I don’t receive an error message back as a reply, so I think that’s a good sign, but I’m not sure whether it’s being processed or how to troubleshoot. I even put a puts 'test' in my replies_mailbox.rb file, but I don’t see anything in the console when I tail the logs on production.
Any advice on what next steps I can take?
When dealing with integration testing, it's useful to split the issue into smaller ones, in order of the email path:
1. Check if the MX DNS record has propagated. When you edit your zone, other DNS servers may still respond with old records until the zone TTL passes (it is usually set to several hours); use a remote DNS checker.
2. Check the SendGrid settings (including "Post the raw, full MIME message", which Action Mailbox expects, so that SendGrid posts the 'email' field).
3. Check if the email is being dropped by a spam filter in SendGrid.
4. Check if the request is present in your web server/reverse proxy logs (e.g. nginx, if you use one).
5. Try mimicking SendGrid's request to check if your app is accepting it (and if it shows up in the logs); Rails only reads params[:email], other fields are not necessary:
curl -X POST "https://actionmailbox:mypassword@parse.[mydomain].com/rails/action_mailbox/sendgrid/inbound_emails" \
  -F email="From: foo <abc@localhost>\nTo: bar <bca@localhost>\nSubject: test\nMIME-Version: 1.0\n\nTest!"
I'd start with #5, to be sure your app is accepting email correctly and logging it, and then work back up the list.
PS. puts might not appear in logs in production (or not where you expect it to appear) depending on your logging setup. A better way is to use Rails.logger.info.
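The same #5 request can also be mimicked from plain Ruby (standard library only) if curl isn't handy; the host and password here are placeholders for your own:

```ruby
require 'net/http'
require 'uri'

# Placeholder host: use your own parse subdomain and ingress password
uri = URI('https://parse.example.com/rails/action_mailbox/sendgrid/inbound_emails')

req = Net::HTTP::Post.new(uri)
req.basic_auth('actionmailbox', 'mypassword')   # Action Mailbox ingress credentials
req.set_form_data('email' => "From: foo@localhost\nTo: bar@localhost\nSubject: test\nMIME-Version: 1.0\n\nTest!")

# Uncomment to actually send; Action Mailbox answers 204 No Content on success
# res = Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(req) }
# puts res.code
```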
I spent two weeks on what seems like this same issue and found one possible answer that worked for me (cross-posted in SendGrid's GitHub issues: https://github.com/sendgrid/opensource/issues/22):
Problem:
Localhost: my endpoint route was working correctly. I was able to receive and parse both SendGrid (through a local tunnel -- both cloudflare and localtunnel) and Postman POSTs.
Production: my endpoint route was working fine when tested with Postman and when tested with SendGrid POSTs when sent to a cloudflare tunnel that pointed at my live site. However the SendGrid POSTs that were sent directly to my site seemed to fall into a black hole. They never made it. I did not have any agent blocks or any IPs blacklisted, so I wasn't sure what was going on.
Solution:
After a lot of back and forth with the support team, I learned that SendGrid Inbound Parse seems to only support TLS 1.2... My site was using TLS 1.3. Local tunnels generated fully backwards-compatible SSL certs, which is why the POSTs would work there but not directly to my site.
To identify if this is an issue for you, you can test your site at https://www.ssllabs.com/ssltest/analyze.html; once it is done, there will be a section that shows you what your site supports.
If you don't have green for TLS 1.2, then you need to update your server to support this.
I used NGINX and CertBot. To update them:
SSH into your server and use sudo nginx -T to see what your current configuration is, and where it lives.
Open that config with your preferred editor, e.g. sudo nano /etc/nginx/snippets/ssl-params.conf (or whatever your actual path is; make sure to use the path from the -T output, because otherwise you might end up updating the wrong config).
Look for the line that says ssl_protocols.... you need to update it to read ssl_protocols TLSv1.3 TLSv1.2;
You may also need to add specific ciphers and a path to a dhparam if you don't already have one generated and linked. This is what the relevant portion of my final file looks like:
ssl_protocols TLSv1.3 TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_dhparam /etc/nginx/dhparam.pem;
ssl_ciphers EECDH+AESGCM:EDH+AESGCM;
Exit out, make sure your new config works with sudo service nginx configtest, and then restart NGINX with sudo service nginx restart.
Test your site again on SSLLabs and make sure it supports TLS 1.2
I then sent another inbound email through SendGrid Parse and was able to confirm that it hit my site, was logged, and was processed.

Start and verify Dart VM service

I have a two part question.
First, what Dart command should I use to start
the VM service listening for requests, possibly
specifying which host and port it should use?
I'm using Windows, and I don't want the Observatory
interfering.
I'm currently trying to use this, after I CD into the project's directory:
dart --pause_isolates_on_start bicycle
And the second part of the question is, is it possible to verify
that the VM service is there and listening on whatever port?
I want to be able to send a request to the VM service,
from a WebSocket client, and get back a response.
After I give the above command, if I do a 'netstat'
it doesn't look like there is anything there listening.
And any attempts at trying to connect to the VM service get
a connection refused Exception, same as if I didn't even
try to start the VM service.
UPDATE:
I was looking at the intelliJ plugin code, to see how they did their connect,
and saw that they used "ws://localhost:8181/ws", I was trying to use
"ws://localhost:8181", and now it's finally getting past the handshake,
the server was returning "200 OK" instead of "101" before.
I'm assuming that I'm talking to the Observatory at this point,
and not the VM service; I'm not sure, but at least I'm further along.
When it worked, I was using:
dart --enable-vm-service --pause_isolates_on_start bicycle.dart
Thanks!!
dart --help -v prints
--observe[=<port>[/<bind-address>]]
The observe flag is a convenience flag used to run a program with a
set of options which are often useful for debugging under Observatory.
These options are currently:
--enable-vm-service[=<port>[/<bind-address>]]
--pause-isolates-on-exit
--pause-isolates-on-unhandled-exceptions
--warn-on-pause-with-no-debugger
This set is subject to change.
Please see these options for further documentation.
It depends on what exactly you want to do, but as far as I know the Observatory just uses this service and if you don't access any of its features, it won't add additional load to the process.
There is a Dart client API at https://pub.dartlang.org/packages/vm_service_client and documentation of the protocol at https://github.com/dart-lang/sdk/blob/master/runtime/vm/service/service.md
Perhaps this is what you are looking for
enum EventKind {
  // Notification that VM identifying information has changed. Currently used
  // to notify of changes to the VM debugging name via setVMName.
  VMUpdate,

  // Notification that a new isolate has started.
  IsolateStart,
  // ...
}
used with Events: https://github.com/dart-lang/sdk/blob/master/runtime/vm/service/service.md#events
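Putting the pieces from this thread together, a minimal start-and-verify sequence looks like this (8181 is the port the question probes; the bind address and filename are taken from the question's update, and the netstat usage assumes Windows):

```shell
# Start the program with the VM service enabled on an explicit port
dart --enable-vm-service=8181/127.0.0.1 --pause_isolates_on_start bicycle.dart

# In another terminal, verify something is listening on that port (Windows)
netstat -ano | findstr :8181

# A WebSocket JSON-RPC client should then connect to (note the trailing /ws):
#   ws://127.0.0.1:8181/ws
```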

Server timeout and SFTP timeout. What to do?

For the last 12 hours, the WordPress website hosted on Google Cloud Platform has had a timeout issue. After 60 seconds of trying to load the website, the following message appears: "The connection has timed out".
When trying to connect with SFTP, I get the same issue.
What should I do to resolve this?
Since two different services stopped working at the same time, it sounds like a networking issue: there is a timeout, which means the server is not answering the requests at all.
What to do?
I would proceed with these general troubleshooting steps; if you want, you can update your question with the results of these commands to continue the troubleshooting.
First of all, I would check if you are able to ping the
external/public IP of the instance.
I would check if the firewall rules allow TCP 80/443 and TCP 22. Note that on GCP you need to create the rule and assign its target tag to the machine from its details page if the rule does not apply to the whole network.
Are you able to ssh into the instance?
I would check if the processes are actually listening: netstat -tuplen
If you are able to log in: does the machine have access to the internet? Can you ping an external IP? If not, what about an internal IP?
I would go to the "activity" page of your Google Cloud Console to check which actions have been taken while the instance was still running.
I would also check the shell history on the Linux machine to see if you ran some commands acting on its network configuration.
Note that if you cannot SSH into the machine, you can always access it through the serial console after setting a password for your username via a startup script.
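The checklist above can be sketched as a few commands; the IP, instance name, and zone are placeholders, and gcloud is assumed to be installed and authenticated:

```shell
# 1. Is the instance reachable from outside at all?
ping -c 3 203.0.113.10                         # external/public IP of the instance

# 2. Do the firewall rules allow tcp:80, tcp:443 and tcp:22?
gcloud compute firewall-rules list

# 3. Can you still log in over SSH?
gcloud compute ssh my-instance --zone europe-west1-b

# 4. Once logged in: are the services actually listening?
sudo netstat -tuplen
```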
UPDATE
I had the chance to take a look into the project; the machine was stopped due to an issue with the billing account (it was closed) after the free trial period ended.
I would suggest going through the documentation regarding upgrading the billing account again.
If you still have doubts or questions after you perform these operations, you can file a case at this link with the billing team and they will help you solve the issue.

Twilio IP Messaging token issue

I'm setting up an iOS app to use the IP Messaging and video calling APIs. I'm able to connect, create channels and set up a video call if I manually create hard-coded tokens for the app. However, if I use the PHP server (as described here: https://www.twilio.com/docs/api/ip-messaging/guides/quickstart-ios) then I always get an error and it can't connect anymore.
I'm attaching a screenshot of what I see when I hit the http://localhost:8080 address which seems to produce a 500 Internal error on this URL: https://cds.twilio.com/v2/Streams
Thanks so much!
After much time spent on this I decided to try the Node backend instead (listed under the other server-side languages alongside PHP) and had it running in 2 minutes! I used the exact same credentials as the ones I was using in the PHP config file, so either my PHP environment has something strange or the PHP backend needs some fixing. In any case, I'm able to move forward using the Node backend, so if you run into the same issue, just try Node instead of PHP. Woohoo!

Use proxy or Tor within Heroku rails app to hide IP

I'm using Mechanize inside a rake task that is run by a scheduler add-on to my Ruby app on Heroku. In the script, I log into a webpage, which worked until recently, when the script could no longer log in. When I began debugging, Mechanize showed different form fields when I ran the script in the Heroku console than in my local console.
Local ruby console shows these fields:
>> asf.fields.each do |f| puts f.name end
__VIEWSTATE
__PREVIOUSPAGE
__EVENTVALIDATION
login$field
password$field
The Heroku console shows one additional field that does NOT appear in the HTML source:
>> asf.fields.each do |f| puts f.name end
__VIEWSTATE
__PREVIOUSPAGE
__EVENTVALIDATION
login$field
password$field
captcha$txtCaptcha
When I issue:
>> asf.click_button
Update:
I tried changing the user agent to several different browser aliases with no luck. It appears that the IP address from Heroku is causing the captcha to be served up. Would it be possible to make a request through a proxy server or use Tor to keep the IP from being exposed?
The answer to your question is yes, you can proxy through Tor. I've done it in the past; issues you will face:
You'll have to run Tor somewhere else if you're running on Heroku
Tor is pretty slow for scraping
You'll need to set up a proxy that can speak to Tor (Privoxy)
For any serious scraping you'll need multiple Tor instances running
Even your Tor IPs will get blocked after a while.
It makes you wonder if it's worth the hassle. You can pay for IP-masking proxy services, which might be an easier way to go.
This link got me some of the way when I was looking into this: http://www.howtoforge.com/ultimate-security-proxy-with-tor
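A minimal sketch of the Privoxy-to-Tor route from Mechanize, assuming Privoxy is listening on its default 127.0.0.1:8118 and chaining to a local Tor instance:

```ruby
require 'mechanize'   # gem install mechanize

agent = Mechanize.new
# Send all HTTP(S) traffic through Privoxy, which forwards to Tor's SOCKS port
agent.set_proxy('127.0.0.1', 8118)

# check.torproject.org reports whether your request arrived via Tor
page = agent.get('https://check.torproject.org/')
puts page.title
```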