Twitter UserStream auto update with https - twitter

I am currently connecting to the Twitter User Stream API, but it seems that I am not getting updates in my production environment (HTTPS); it does work on my staging server, though.
Some things I have already checked myself:
- The two environments are on the same server, so it cannot be an IP block.
- A code issue is possible but unlikely, since it works on staging but not on production.
- Nginx is configured correctly to allow this over HTTPS, since it worked before.

The only thing I can think of is that Twitter blocked our HTTPS connection, that we reached a Twitter cap, or that Twitter has problems streaming over HTTPS.
Is there anyone that encountered this before or that can help me with this?

It seems that this was caused by Twitter, which puts an internal (badly documented) limit on the number of connections from a web service to their UserStream service (the limit is around 20-30 connections, I think).
The solution I use now is to poll their REST API every 90 seconds.
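For anyone hitting the same limit, a minimal sketch of that polling fallback using the twitter gem (the home timeline endpoint and credentials read from environment variables are assumptions; swap in whichever REST endpoint you actually need):

# Sketch: poll the REST API every 90 seconds instead of streaming.
# Assumes the `twitter` gem and credentials in ENV; home_timeline is
# only an example endpoint standing in for the user stream.
require "twitter"

client = Twitter::REST::Client.new do |config|
  config.consumer_key        = ENV["TWITTER_CONSUMER_KEY"]
  config.consumer_secret     = ENV["TWITTER_CONSUMER_SECRET"]
  config.access_token        = ENV["TWITTER_ACCESS_TOKEN"]
  config.access_token_secret = ENV["TWITTER_ACCESS_TOKEN_SECRET"]
end

since_id = nil
loop do
  opts = { count: 200 }
  opts[:since_id] = since_id if since_id
  tweets = client.home_timeline(opts)                       # only new tweets after the first pass
  since_id = tweets.first.id unless tweets.empty?
  tweets.each { |t| puts "#{t.user.screen_name}: #{t.text}" }
  sleep 90                                                   # poll every 90 seconds
end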

Related

MS Graph API returns errors for Hybrid integration only on some IPs

We have an application that uses the MS Graph API to integrate with our customers' email/calendar. One of the customers (Customer A), with a Hybrid setup, has reported issues: email integration suddenly stopped working for all of their users. We have performed a couple of test calls (endpoint /me/sendMail) against the MS Graph API using our production app credentials, valid user tokens, and different environments (local, dev cloud AWS, staging cloud GCP, prod cloud GCP). Here are the results and the strange behavior:
OK. If we do the calls for our own testing account (Office365, non-Hybrid) from ALL environments - everything works just fine.
OK. If we do the calls for Customer B account (Office365, non-Hybrid) from ALL environments - everything works just fine.
OK. If we do the calls for Customer A account (Hybrid setup) from LOCAL, dev cloud AWS environments - everything works just fine.
SUPER STRANGE. If we do the calls for the Customer A account (Hybrid, Exchange 2016 setup) from the staging cloud GCP and prod cloud GCP environments - we get a 404 and the error below.
{"error":{"code":"MailboxNotEnabledForRESTAPI","message":"REST API is not yet supported for this mailbox."}}
Customer A's IT team claims there are NO ERRORS in their logs that they can tie to this problem, and that they did everything in accordance with the MS recommendations here: https://learn.microsoft.com/en-us/graph/hybrid-rest-support#requirements-for-the-rest-api-to-work-in-hybrid-deployments
Update:
After checking further, it appears that we receive the 404 when the request is served by specific MS data centers. Here are the response header params for the 404:
x-ms-ags-diagnostic: {"ServerInfo":{"DataCenter":"UK South","Slice":"SliceC","Ring":"3","ScaleUnit":"000","RoleInstance":"AGSFE_IN_11"}}
x-ms-ags-diagnostic: {"ServerInfo":{"DataCenter":"Central US","Slice":"SliceC","Ring":"2","ScaleUnit":"002","RoleInstance":"AGSFE_IN_14"}}
And we get a 201 (success) for:
x-ms-ags-diagnostic: {"ServerInfo":{"DataCenter":"West Europe","Slice":"SliceC","Ring":"5","ScaleUnit":"003","RoleInstance":"AGSFE_IN_52"}}
I would try to isolate the issue with Microsoft Graph Explorer or Postman and see whether I can still reproduce it (with the same Graph API call). If the answer is yes, I would file a support ticket with Microsoft so they can validate whether there is an issue with the hybrid configuration requirements (as they define them), with the API, or with the IPs.
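To help narrow it down from each environment, here is a minimal sketch that calls Graph with a valid user token and prints the x-ms-ags-diagnostic header, so you can see which data center served the request (the GRAPH_USER_TOKEN environment variable and the /v1.0/me endpoint are assumptions; any Graph call will do):

# Sketch: print status plus the data-center diagnostic header for one call.
# GRAPH_USER_TOKEN is an assumed env var holding a valid user access token.
require "net/http"
require "uri"
require "json"

uri = URI("https://graph.microsoft.com/v1.0/me")
req = Net::HTTP::Get.new(uri)
req["Authorization"] = "Bearer #{ENV.fetch('GRAPH_USER_TOKEN')}"

res = Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(req) }

puts "Status:     #{res.code}"
puts "DataCenter: #{res['x-ms-ags-diagnostic']}"
puts JSON.pretty_generate(JSON.parse(res.body)) unless res.code.start_with?("2")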

ELB target group health checks are failing with 403 after upgrading from Rails 5 to Rails 6

The ELB target group's health check is failing with status code 403 Forbidden after upgrading to Rails 6. The health check works in development but not on AWS.
The health check succeeds with Rails 5 but not with Rails 6.
Any help would be greatly appreciated.
This happens because of a new feature in Rails 6: host authorization. It checks whether the incoming request has an allowed hostname and, if it doesn't, returns 403.
The AWS ELB reaches the health check endpoint via the server's internal IP rather than your domain name, so the Host header doesn't match an allowed host and the check fails.
You can fix the problem either by disabling the feature (config.hosts.clear) or by adding the web server's internal IP range (the ELB accesses it via the internal address) to the allowed hosts, like this:
config.hosts = ["example.org", IPAddr.new("10.0.99.0/24")]

Gmail::Client::AuthorizationError only in Production for Rails App on Heroku

I have a RoR app that reads emails from my inbox using the gmail gem. I've deployed to Heroku and everything works fine, except connecting to gmail.
On my local machine it connects with no issues (after I allowed access for less secure apps).
Using the basic Gmail login method:
Gmail.connect!('my_email@gmail.com', 'password')
I get the following error in production only.
Gmail::Client::AuthorizationError: Couldn't login to given Gmail account: my_email@gmail.com (Please log in via your web browser: https://support.google.com/mail/accounts/answer/78754 (Failure))
And then I'll go to https://www.google.com/accounts/DisplayUnlockCaptcha (as other answers have suggested) and it will work fine for a short time period and then suddenly stop working again.
I'm using Ruby v2.4.1, RoR v5.1.1 and the gmail gem v0.6.0 (https://github.com/gmailgem/gmail)
Any help would be great.
Are you deploying to a domain? It could be caused by the fact that:
Heroku will not give you even a range of IP addresses - they can, may, and will move dynos between Amazon zones as needs require.
Your only option would be some sort of proxy node with a static IP that they talk to, which securely communicates with your Heroku app - or consider whether Heroku is the right fit for you here altogether.
Source: Get a finite list of IP addresses for my Heroku App?
It seems like when you sign into Google you whitelist an IP (as far as they're concerned), and then the dyno's IP changes a bit later for whatever reason, so Google no longer has your dyno's IP in the whitelist for this app.
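To confirm that the outbound IP really does change between dyno restarts, a quick sketch you could run from a one-off dyno (api.ipify.org is just one public what-is-my-IP service; any equivalent works):

# Sketch: print the dyno's current outbound IP. Run it again after a
# restart and compare - a different address is what breaks the whitelist.
require "net/http"
require "uri"

puts Net::HTTP.get(URI("https://api.ipify.org"))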

Twitter connection timeout

We just launched a Rails app on Heroku and see a lot of connection timeouts to the API.
We have a connection timeout of 10 seconds.
Is this normal behavior, or is it because of too many hits? All queries are authenticated as a user.
We query friends/ids and followers/ids only.
We also see timeouts in the reverse-auth query done by the same app.
Has anybody already encountered something like this?
EDIT
After filing a support ticket, they told me they were working with Twitter's engineers to avoid the blacklisting.
It appears this is due to Twitter blacklisting Heroku's primary IP addresses. If you are having this issue, please file a ticket with Heroku and comment on this Twitter discussion: https://dev.twitter.com/discussions/20185
When you are using the Twitter gem (?), the connection timeout sometimes happens when your DNS server can't resolve the IP of api.twitter.com fast enough. Check your DNS settings in /etc/resolv.conf.
Most PaaS providers use the Google resolvers (8.8.8.8 or 8.8.4.4), which are rate limited and sometimes very slow... resulting in connection timeouts.
I got the same problem running a Rails app on Cloud66/DigitalOcean. I changed the DNS to some more local resolvers and the Twitter gem now performs like a jaguar.
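If you suspect slow DNS resolution, a quick standard-library check (a sketch) is to time the lookup directly on the server or dyno:

# Sketch: time a single DNS lookup of api.twitter.com; a slow result here
# points at the resolver in /etc/resolv.conf rather than at Twitter.
require "resolv"
require "benchmark"

seconds = Benchmark.realtime { Resolv.getaddress("api.twitter.com") }
puts format("api.twitter.com resolved in %.3f s", seconds)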

PayPal Instant Update API not working on HTTPS

We are building an online store that is based on Spree and hosted on Heroku. We want to make the checkout as easy as possible so we decided to use PayPal Express Checkout, and Instant Update API to determine the shipping cost.
When we tested the checkout process over HTTP, everything works perfectly - when the user enters his shipping address, PayPal queries our server in the background and obtains the shipping costs.
However, when we switched to SSL, the shipping cost just doesn't update and reverts to the default flat rate. I cannot figure out what is wrong because everything is the same, except that this time the app is accessed through HTTPS, i.e. https://myapp.herokuapp.com
I have checked the logs and I can see that PayPal's server did make the query, but the shipping cost just doesn't update on PayPal's checkout page.
Any thoughts on what's wrong?
Update:
After further testing, it seems PayPal is not obeying the timeout set in the transaction setup. We added a simple "sleep(x)" to the callback method to artificially induce a delay of x seconds, and even over normal HTTP, a delay of just 1 second is enough to cause PayPal to ignore the response.
The maximum timeout is supposed to be 6 seconds, but in reality that doesn't seem to be the case at all. Couple that with HTTPS (which takes longer to establish a connection), and that is probably why the callback was failing in the first place.
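To see roughly how much the TLS handshake adds on top of the callback itself, a quick comparison can be run against both schemes (a sketch; /paypal/callback is a placeholder for the real callback route):

# Sketch: compare round-trip time to the callback over HTTP vs HTTPS.
# The /paypal/callback path is a placeholder, not the app's real route.
require "net/http"
require "uri"
require "benchmark"

%w[http https].each do |scheme|
  uri = URI("#{scheme}://myapp.herokuapp.com/paypal/callback")
  seconds = Benchmark.realtime do
    Net::HTTP.start(uri.host, uri.port, use_ssl: scheme == "https") do |http|
      http.get(uri.request_uri)
    end
  end
  puts format("%-5s round trip: %.2f s", scheme, seconds)
end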
I have submitted a ticket to PayPal, but I'm not sure if they will respond or do anything about it...
It appears there are many reasons that PayPal could ignore the returned shipping options from the callback.
I'd like to see something on PayPal's site that would keep a history of recent calls to the callback with the returned response and reasons for rejection - somewhat similar to the useful IPN history.
I'm glad you posted your real domain name here because you've pretty much confirmed my suspicions.
I'm pretty convinced the problem is that you have a wildcard SSL (I see your certificate is issued to *.herokuapp.com) and not just an SSL for a single domain.
I am having the same problem with a 5-name UCC certificate for www.MicroPedi.com. PayPal just flat out refuses to even make any calls to it (I have logging in place and nothing comes through except when using the sandbox).
To confirm this, I have a previous Express Checkout implementation that is working just fine (with a single-domain SSL certificate), and when I pointed my new application to that old URL it magically started working again. That is a single-name SSL certificate - in fact it's one of those expensive green-bar certificates.
I've written directly to PayPal support, but right now the only thing I can think of doing as a workaround is writing some kind of proxy page that will just redirect from the good domain to my UCC domain.
