I was using the YouTube Data API and it said the following:
# Disable OAuthlib's HTTPS verification when running locally.
# *DO NOT* leave this option enabled in production.
os.environ["OAUTHLIB_INSECURE_TRANSPORT"] = "1"
What is meant here by production and OAuthlib's HTTPS verification?
According to the official docs of the OAuthLib library, the meaning of the environment variable OAUTHLIB_INSECURE_TRANSPORT is as follows:
OAUTHLIB_INSECURE_TRANSPORT
Normally, OAuthLib will raise an InsecureTransportError if you attempt to use OAuth2 over HTTP, rather than HTTPS. Setting this environment variable will prevent this error from being raised. This is mostly useful for local testing, or automated tests. Never set this variable in production.
Thus, having OAUTHLIB_INSECURE_TRANSPORT set within the environment of your running app will allow you to issue OAuth2 calls through HTTP.
As the docs say, you should never have OAUTHLIB_INSECURE_TRANSPORT set in production settings, i.e. in the settings that let your program run for real, for example on your clients' sites, as opposed to the settings that are under your complete control, such as those used to test your program before deployment to your clients.
(You may also read the answer to a related question here on SO.)
Related
I have a Rails app running on Heroku that uses Mailgun to process incoming emails. I haven't been able to figure out how I can debug my email processing locally (on localhost) instead of having to push everything up to Heroku every time I make a change. (This is just a test app; I'm the only one using it.)
Is it possible to work with Mailgun locally? If so, how do I go about it?
Thank you in advance
Mailgun gives you the option to store a message for later retrieval. If you configure it that way, you'll be able to fetch messages from development for processing without having to set up a publicly-accessible webhook for Mailgun to hit.
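As a rough sketch of what that retrieval could look like from a local script or Rails console, assuming you have a Mailgun route with a store() action (the domain name and the environment-variable name below are placeholders):
# Sketch: poll Mailgun's events API for stored messages and process them locally.
require "net/http"
require "json"
require "uri"

api_key = ENV.fetch("MAILGUN_API_KEY")
events_uri = URI("https://api.mailgun.net/v3/yourdomain.com/events?event=stored")

req = Net::HTTP::Get.new(events_uri)
req.basic_auth("api", api_key)
res = Net::HTTP.start(events_uri.host, events_uri.port, use_ssl: true) { |http| http.request(req) }

JSON.parse(res.body)["items"].each do |event|
  # Each "stored" event carries a URL from which the full message can be fetched.
  message_uri = URI(event["storage"]["url"])
  msg_req = Net::HTTP::Get.new(message_uri)
  msg_req.basic_auth("api", api_key)
  msg_res = Net::HTTP.start(message_uri.host, message_uri.port, use_ssl: true) { |http| http.request(msg_req) }
  message = JSON.parse(msg_res.body)
  puts message["subject"]   # hand the parsed message off to your own processing code here
end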
But I'm assuming you have production configured with an HTTP endpoint, and it's no fun to do things differently between environments. There are a few tools that will let you set up a public endpoint that routes to localhost:
ngrok, which I've used to good effect to test Twilio. You can set up a permanent subdomain so you don't have to constantly change your Mailgun configuration.
UltraHook, which I haven't personally used, but which looks similar.
Localtunnel, which looks the easiest to start up, but it appears you get a different host at every boot.
If you have a permanent publicly-accessible server, you can also maintain your own tunnel.
Mailgun provides a sandbox domain that you can use from localhost; the only downside is that you have to add each test email address as an authorized recipient.
Using the letter_opener gem might be another possible solution:
https://github.com/ryanb/letter_opener/ or https://github.com/fgrehm/letter_opener_web for more advanced features
Follow the installation instructions from the repo; mail will open in a new browser tab.
I am new to rails testing. Two days of running down leads with Google has turned up no solutions for what ought to be a frequent need.
If I write request (integration) specs to use a Selenium or other browser-based driver, is it possible to redirect the test's i/o to a staging deployment on a cloud server (in my case Heroku)?
If so, how? If not, what prevents this from working?
So far I have been using rspec/capybara, but would switch to anything of similar power if necessary.
You can use Capybara with the Selenium driver and set Capybara.app_host to specify the address of your staging app server. While doing so, you can turn off Capybara's local Rack server with Capybara.run_server = false.
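For example, a minimal sketch of that setup in your spec helper (the staging URL is a placeholder):
# spec/spec_helper.rb (or rails_helper.rb)
require "capybara/rspec"

Capybara.run_server = false          # don't boot the local Rack app
Capybara.default_driver = :selenium  # drive a real browser
Capybara.app_host = "https://myapp-staging.herokuapp.com"  # placeholder staging host

# In a spec, visit("/") now hits the staging server instead of a locally booted app.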
Remote testing only lets you perform the kinds of actions a human user could and assert against the returned HTML/JS/JSON, etc., but gives you no access to controllers, views, or any other internal app objects.
One thing you could do (I never tried it, but I don't see why it wouldn't work) is to set up your database.yml test configuration to access your staging database remotely, allowing you to control the database during your tests. It's not really secure, so you may want to do that over an SSH tunnel or a similar solution.
I changed my CNAME records as outlined in this link: https://devcenter.heroku.com/articles/custom-domains. The redirect itself works; the issue is that Chrome (and I assume other browsers) gives me a phishing alert.
This is probably not the site you are looking for!
You attempted to reach app.grewpr.com, but instead you actually reached a server identifying itself as *.herokuapp.com. This may be caused by a misconfiguration on the server or by something more serious. An attacker on your network could be trying to get you to visit a fake (and potentially harmful) version of app.grewpr.com.
You should not proceed, especially if you have never seen this warning before for this site.
How would I fix this error? It also puts a red line through the https lock.
Since you're using HTTPS, you should follow slightly different DNS configuration instructions: https://devcenter.heroku.com/articles/ssl#configuredns
Had the same problem. Your custom-domain app uses your host's certificate. Either buy the SSL add-on, or, if plain HTTP is OK for you, change the SSL settings for your app: in config/environments/production.rb set config.force_ssl = false (by default it is set to true). I had to reset Firefox for the change to take effect; other browsers were fine.
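For reference, the setting mentioned above lives in the production environment file (the configure wrapper varies slightly by Rails version):
# config/environments/production.rb
Rails.application.configure do
  # Serve the custom domain over plain HTTP instead of forcing HTTPS redirects.
  config.force_ssl = false
end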
I am trying to test OAuth buttons, but they all (Facebook, Twitter, LinkedIn) come back with errors that seem to signal that I cannot test or use them from a local URL.
How do people usually work with OAuth in development if these providers all seem to require a non-development, non-local environment?
Update October 2016: the easiest option now is to use lvh.me, which always points to 127.0.0.1, but make sure to verify that this is still true every time you need to rely on it (domains can expire or get taken over, and DNS poisoning is always a concern).
Previous Answer:
Since the callback request is issued by the browser, as an HTTP redirect response, you can set up your hosts file or equivalent to point a domain that is not localhost to 127.0.0.1.
Say, for example, you register the following callback with Twitter: http://www.publicdomain.com/callback/. Make sure that www.publicdomain.com points to 127.0.0.1 in your hosts file, AND that Twitter can do a successful DNS lookup on www.publicdomain.com, i.e. the domain needs to exist and the specific callback should probably return a 200 status if requested.
EDIT:
I just read the following article: http://www.tonyamoyal.com/2009/08/17/how-to-quickly-set-up-a-test-for-twitter-oauth-authentication-from-your-local-machine, which was linked to from this question: Twitter oAuth callbackUrl - localhost development.
To quote the article:
You can use bit.ly, a URL shortening service. Just shorten the [localhost URL such as http://localhost:8080/twitter_callback] and register the shortened URL as the callback in your Twitter app.
This should be easier than fiddling around in the hosts file.
Note that now (Aug '14) bit.ly does not allow link forwarding to localhost; however, the Google link shortener works.
PS edit (Nov '18): the Google link shortener stopped supporting localhost and 127.0.0.1.
You can also use ngrok: https://ngrok.com/. I use it all the time to have a public server running on my localhost. Hope this helps.
Other options that even provide your own custom domain for free are serveo.net and https://localtunnel.github.io/www/
Or you can use https://tolocalhost.com/ and configure how it should redirect a callback to your local site. You can specify the hostname (if different from localhost, e.g. yourapp.local) and the port number. For development purposes only.
For Mac users, edit the /etc/hosts file. You have to use sudo vi /etc/hosts if it's read-only. After authorization, the OAuth server redirects to the callback URL, and since that callback URL is resolved by your local browser, the local DNS setting will work:
127.0.0.1 mylocal.com
Set your local domain to mywebsite.example.com (and point it to localhost), even though the usual convention is to use mywebsite.dev. This will allow robust automated testing.
Although authorizing .test and .dev is not allowed, authorizing example.com is allowed in Google OAuth2.
(You can point any domain to localhost in your hosts file; on Unix/Linux it is /etc/hosts.)
Why mywebsite.example.com? Because example.com is a reserved domain name, so:
there would be no naming conflicts on your machine, and
there is no data risk if your test system exposes data to a not-redirected-by-mistake example.com address.
You can edit the hosts file on Windows or Linux:
Windows: C:\Windows\System32\Drivers\etc\hosts
Linux: /etc/hosts
Add an entry pointing your test domain at the loopback address:
127.0.0.1 mywebsite.com
After you finish your tests, just comment out the line you added to disable it:
# 127.0.0.1 mywebsite.com
Google doesn't allow testing the auth API on local hosts like http://webproject.dev, .loc, etc., and neither a Google short link nor bit.ly that shortens your local URL (http://webproject.dev) will work either. Google accepts only URLs that start with http://localhost/...
If you want to test the Google auth API, follow these steps:
If you use OpenServer, go to the settings panel, click on the Aliases tab, click on the dropdown, then find localhost and choose it.
Then choose your local web project's root folder in the next dropdown, next to the first one.
Click the Add button and restart OpenServer.
Now your local project is available at http://localhost/.
You can also paste this local URL into the redirect URL field of the Google auth API.
This answer applies only to Google OAuth
It is actually very simple and I am surprised it worked for me (I am still sceptical of what my eyes are seeing).
Apparently you can add localhost as a trusted domain on the Google Developer Console, since localhost is an exception for most rules as you can see here.
This can be done on this page under OAuth 2.0 Client IDs. Click edit and then add http://localhost:8000 or similar ports, and hit save.
It is crucial that you include http or https in the input box.
HTTP or HTTPS?
I am once again surprised that Google allows http, although do note that there is a minor security risk if your application has been released to production.
If you want to be extra cautious, you can choose to stick with https. This will require you to set up an SSL certificate on your localhost server.
This is easier than you think, since the SSL certificate need not be valid. Many HTTP servers give you this option. You will just have to click the "proceed anyway" button in your browser to bypass the big red warning.
This is more secure than http since either a) users will see a big red warning if hackers are trying something phishy, or b) the only time they won't see this warning is if the user intentionally set up a self-hosted SSL certificate, in which case they probably know what they are doing (I suppose a virus could technically do this as well, but at that stage they've already gotten enough control of a user's system to do anything they want).
I ran into some issues with the tools mentioned in other answers: http://tolocalhost.com doesn't forward query parameters (not to mention you have to visit the page and configure it first, which is also the case with https://thomasmcdonald.github.io/Localhost-uri-Redirector/), and http://lvh.me wasn't useful to me because I run a proxy on my local machine and need the public URL to point to a private URL like http://mywebsite.dev.
So I made my own tool that filled my needs and may fill yours:
https://redirectmeto.com
Examples:
https://redirectmeto.com/https://www.google.com/search?q=puppies
http://redirectmeto.com/http://localhost:4000/oauth/authorize
http://redirectmeto.com/http://client.dev/page
Another valuable option would be https://github.com/ThomasMcDonald/Localhost-uri-Redirector. It's a very simple HTML page that redirects to whatever host and port you configure in the UI.
The page is hosted on GitHub at https://thomasmcdonald.github.io/Localhost-uri-Redirector, so you can use that as your OAuth2 redirect URL, configure your target host and port in the UI, and it will just redirect to your app.
If you have a domain, you can create a subdomain that redirects to your local entry point; it works for me.
I created a public subdomain, oauth-test-local.alexisgatuingt.fr, that redirects you to http://127.0.0.1:8000/oauth/callback/google with the returned data.
Taking Google OAuth as reference:
In your OAuth client tab, add your app URI, e.g. http://localhost:3000, to the Authorized JavaScript origins URIs.
In your OAuth consent screen, add mywebsite.com to Authorized domains.
Edit the hosts file on Windows (C:\Windows\System32\Drivers\etc\hosts) or Linux (/etc/hosts) to add 127.0.0.1 mywebsite.com (N.B. comment out any other conflicting 127.0.0.1 entry).
I am creating a blogging-like application where we allow our customers to use their own custom domain names such as domainexample.com, so each different domain serves the same application but with different content.
However, I am struggling to figure out how to set this up on a production server. If my production server has a static IP, then I can surely just set an A record on each domain pointing to the IP of the production server.
But what if the production server does not have a static IP, for example if we want to host it on Heroku or Engine Yard? I have seen a few solutions online that rely on rewrite rules, but they require server restarts and can't really add and remove domains dynamically as new users sign up. Does anyone know any good solutions for letting multiple domains hit one Rails app?
Heroku isn't your only option. If you can anticipate your customers' domains, have a look at this. If you can't, Rails route constraints combined with the accepted answer to the question linked above should get you where you need to go. It sounds like you wouldn't want to restart your server, so no editing of the routes. You might also make domains part of your models, distinguish at the controller level, or use URL rewriting in your web-server layer.
The problem, as I see it, is that Rails breaks its mantra of convention over configuration here. There are many ways of serving up from multiple domains. That might be an intrinsic complexity, but the Rails Guides could at least document one possible solution.
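A minimal sketch of the route-constraint approach mentioned above, assuming a Blog model with a domain column (the model, column, and controller names are illustrative):
# config/routes.rb
class CustomDomainConstraint
  def self.matches?(request)
    # Route any request whose Host header matches a customer's stored domain.
    Blog.exists?(domain: request.host)
  end
end

Rails.application.routes.draw do
  constraints CustomDomainConstraint do
    get "/", to: "blogs#show", as: :custom_domain_root
    # ...other customer-facing routes...
  end

  root to: "home#index"
end
The matching controller can then look up the record, e.g. Blog.find_by(domain: request.host), so a new domain works as soon as its row exists, with no server restart.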
If your customers just CNAME to your domain or create an A record pointing to your IP, and you don't handle TLS termination for these custom domains, your app will not support HTTPS; and without it, your app won't work in modern browsers on these custom domains.
You need to set up a TLS termination reverse proxy in front of your webserver. This proxy can be run on a separate machine but you can run it on the same machine as the webserver.
CNAME vs A record
If your customers want to have your app on their subdomain, e.g. app.customer.com they can create a CNAME app.customer.com pointing to your proxy.
If they want to have your app on their root domain, e.g. customer.com then they'll have to create an A record on customer.com pointing to your proxy's IP. Make sure this IP doesn't change, ever!
How to handle TLS termination?
To make TLS termination work, you'll have to issue TLS certificates for these custom domains. You can use Let's Encrypt for that. Your proxy will see the Host header of the incoming request, e.g. app.customer1.com or customer2.com etc., and then it will decide which TLS certificate to use by checking the SNI.
The proxy can be set up to automatically issue and renew certificates for these custom domains. On the first request from a new custom domain, the proxy will see it doesn't have the appropriate certificate. It will ask Let's Encrypt for a new certificate. Let's Encrypt will first issue a challenge to see if you manage the domain, and since the customer already created a CNAME or A record pointing to your proxy, that tells Let's Encrypt you indeed manage the domain, and it will let you issue a certificate for it.
To issue and renew certificates automatically, I'd recommend using Caddyserver, greenlock.js, or OpenResty (Nginx).
tl;dr on what happens here:
Caddyserver listens on 443 and 80; it receives requests, issues and renews certificates automatically, and proxies traffic to your backend.
How to handle it on my backend
Your proxy is terminating TLS and proxying requests to your backend. However, your backend doesn't know which customer the original request was for. This is why you need to tell your proxy to include an additional header in proxied requests to identify the customer. Just add X-Serve-For: app.customer.com or X-Serve-For: customer2.com, or whatever the Host header of the original request is.
Now when you receive the proxied request on the backend, you can read this custom header and know which customer is behind the request. You can implement your logic based on that, show data belonging to this customer, etc.
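As a rough sketch of that lookup in a Rails controller (the header name matches the X-Serve-For example above; the Customer model and custom_domain column are assumptions about your schema):
# app/controllers/application_controller.rb
class ApplicationController < ActionController::Base
  before_action :set_current_customer

  private

  def set_current_customer
    # The proxy forwards the original Host in a custom header; fall back to request.host.
    requested_domain = request.headers["X-Serve-For"].presence || request.host
    @current_customer = Customer.find_by!(custom_domain: requested_domain)
  end
end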
More
Put a load balancer in front of your fleet of proxies for higher availability. You'll also have to use distributed storage for certificates and Let's Encrypt challenges. Use AWS ECS or EBS for automated recovery if something fails; otherwise, you may be waking up in the middle of the night restarting machines or restarting your proxy manually.
Alternatively, there have been a few services like this recently that allow you to add custom domains to your app without running the infrastructure yourself.
If you need more detail, you can DM me on Twitter @dragocrnjac.