I use the YouTube API v3. Bots crawl my web page, and each crawl triggers an API call, so my quota gets exhausted and the API starts returning "too many requests from this IP".
Example bot user agents:
1 Mozilla/5.0 (compatible; DotBot/1.1; http://www.opensiteexplorer.org/dotbot, help@moz.com)
2 Mozilla/5.0 (compatible; AlphaBot/3.2; +http://alphaseobot.com/bot.html)
Many bots of this type request my website www.watchyoutubevideo.com, including crawlers from Google, Bing, Yahoo, and others.
How can I solve this?
Thank you.
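One common mitigation is to detect crawler traffic by User-Agent and serve a cached page instead of making the API call. A minimal sketch in Python; the pattern list is illustrative, not a complete bot catalogue, and `is_bot` is a made-up helper name:

```python
import re

# Illustrative list of crawler User-Agent substrings; extend as needed.
BOT_PATTERNS = re.compile(
    r"DotBot|AlphaBot|Googlebot|bingbot|Slurp|crawler|spider|bot",
    re.IGNORECASE,
)

def is_bot(user_agent: str) -> bool:
    """Return True when the User-Agent looks like a known crawler."""
    return bool(user_agent and BOT_PATTERNS.search(user_agent))

# Example: skip the YouTube API call for crawler traffic.
ua = "Mozilla/5.0 (compatible; DotBot/1.1; http://www.opensiteexplorer.org/dotbot, help@moz.com)"
if is_bot(ua):
    pass  # serve a cached page here instead of calling the API
```

Well-behaved crawlers (Google, Bing) can also be slowed down with a `Crawl-delay` or disallowed paths in robots.txt, which keeps them from triggering the API at all.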
I am trying to get OpenID Connect authentication working for my legacy ASP.NET MVC application. My ASP.NET MVC application will be the Relying Party, and a business partner of ours will serve as the Identity Provider.
To get familiar with what I'll need to do, I created an account on Auth0 and created a new App for a Web Application. I then downloaded their ASP.NET MVC OWIN quickstart from GitHub. I got everything set up and am able to authenticate successfully with Microsoft Edge and Firefox. But with Chrome the workflow goes like this:
Visit localhost:3000
Attempt to access a protected resource, which redirects me to localhost:3000/Account/Login
/Account/Login creates the challenge, which does two things: (1) Creates the Nonce cookie, and (2) redirects the user to Auth0's /authorize endpoint
I successfully login on Auth0's login screen
A POST request is made to the /callback endpoint on localhost:3000
I get a Yellow Screen of Death with the following message:
IDX21323: RequireNonce is 'System.Boolean'. OpenIdConnectProtocolValidationContext.Nonce was null, OpenIdConnectProtocol.ValidatedIdToken.Payload.Nonce was not null. The nonce cannot be validated. If you don't need to check the nonce, set OpenIdConnectProtocolValidator.RequireNonce to 'false'. Note if a 'nonce' is found it will be evaluated.
Examining the HTTP traffic I see that the issue with Chrome is that in step (3) - when the server sets the Nonce cookie in the 302 Redirect - Chrome is not saving it. Therefore, when step (5) happens the browser does not send any Nonce information to the server and validation fails.
This is evidenced by the HTTP traffic at step (3) and (5). Here is the localhost response on step (3). You can see that it is telling the browser to store the Nonce cookie:
HTTP/1.1 302 Found
Cache-Control: private
Location: https://whatever.us.auth0.com/authorize?client_id=gYb3FOL5OWK419L8...
Set-Cookie: OpenIdConnect.nonce.fRunx5CPoGdhTRM3mgqpn62m9SFkH4AszKWpOOk8LV0%3D=T1NPQjNlYTgtQ...; path=/; expires=Sat, 18-Jul-2020 20:47:59 GMT; HttpOnly; SameSite=None
But after I am redirected to Auth0 I can check Chrome's cookies and it does not have the Nonce cookie in its cookies collection for localhost. Moreover, when step (5) hits, the browser request looks like so - no mention of the Nonce cookie:
POST http://localhost:3000/callback HTTP/1.1
Content-Type: application/x-www-form-urlencoded
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.89 Safari/537.36
code=DANVniZ296OzQW...
This results in the aforementioned error.
(When I examine the HTTP traffic using Edge or Firefox, in step (5) I see the browser sends the Nonce cookie, whereas it's missing entirely from Chrome.)
I am using Chrome version 84 and Windows 10. I also tried this on an old computer at home with Windows 7 (and Chrome 84) and experienced the exact same behavior.
What is going on here and, more importantly, how do I get it to work? My initial assumption was that this was a SameSite cookie issue, but I don't think that's the case, because the cookie isn't being created in the first place (it's not that it exists but isn't sent on the redirect back to localhost; it is never stored at all). Moreover, the Nonce cookie has SameSite=None, so that shouldn't matter, right?
Thanks
Figured it out with the help of a colleague...
Chrome won't save a cookie with SameSite=None if the traffic is over HTTP, because SameSite=None also requires the Secure attribute, which only works over HTTPS. I needed to set up Visual Studio to use HTTPS. Once I did that, things worked as expected.
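The rule is independent of ASP.NET: since Chrome 80, a SameSite=None cookie is only stored when it also carries Secure. A quick stdlib sketch of what the corrected Set-Cookie header needs to contain; the cookie name and value here are placeholders:

```python
from http.cookies import SimpleCookie

# A cross-site cookie must carry both SameSite=None and Secure,
# otherwise modern Chrome silently drops it; Secure requires HTTPS.
cookie = SimpleCookie()
cookie["OpenIdConnect.nonce"] = "placeholder-value"   # placeholder, not a real nonce
cookie["OpenIdConnect.nonce"]["path"] = "/"
cookie["OpenIdConnect.nonce"]["httponly"] = True
cookie["OpenIdConnect.nonce"]["samesite"] = "None"
cookie["OpenIdConnect.nonce"]["secure"] = True        # mandatory with SameSite=None

header = cookie.output(header="Set-Cookie:")
```

Note that the original trace above has `SameSite=None` but no `Secure`, which is exactly the combination Chrome rejects.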
I am using the Twilio Programmable Fax API to send faxes from my application.
Sending works fine when I provide a public URL without any authentication as the mediaUrl. But when I pass a URL secured with basic authentication as the mediaUrl, sending the fax fails:
"status": "failed",
I have debugged the code on the server that serves the mediaUrl, and found that Twilio never sends a request with an "Authorization" header.
As per Twilio documentation,
You may provide a username and password via the following URL format.
https://username:password@www.myserver.com/my_secure_document
Twilio will authenticate to your web server using the provided
username and password and will remain logged in for the duration of
the call. We highly recommend that you use HTTP Authentication in
conjunction with encryption. For more information on Basic and Digest
Authentication, refer to your web server documentation.
If you specify a password-protected URL, Twilio will first send a
request with no Authorization header. After your server responds with
a 401 Unauthorized status code, a WWW-Authenticate header and a realm
in the response, Twilio will make the same request with an
Authorization header
I am giving the mediaUrl in the same format Twilio requires, but the fax status still comes back as failed. Kindly provide your valuable suggestions to help me resolve the issue.
My server sends the 401 response below when Twilio accesses the mediaUrl without an Authorization header.
HTTP response headers for the 401:
Status Code: 401 Unauthorized
Content-Length: 34
Content-Type: application/xml
Date: Wed, 30 Aug 2017 12:38:41 GMT
Server: Apache-Coyote/1.1
WWW-Authenticate: Basic realm="My Realm"
Response body
<message>Invalid credentials</message>
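For reference, the challenge-response flow the docs describe can be sketched server-side like this; the credentials, realm, and function name are hypothetical, not part of Twilio's API:

```python
import base64

# Hypothetical credentials for the media server.
USERNAME, PASSWORD = "user", "secret"

def handle_media_request(headers: dict) -> tuple[int, dict]:
    """Sketch of the two-step flow quoted above: the first request
    carries no Authorization header, so we answer 401 with a realm;
    the retry carries Basic credentials, so we answer 200 if they match."""
    auth = headers.get("Authorization", "")
    if not auth.startswith("Basic "):
        return 401, {"WWW-Authenticate": 'Basic realm="My Realm"'}
    expected = base64.b64encode(f"{USERNAME}:{PASSWORD}".encode()).decode()
    if auth[len("Basic "):] == expected:
        return 200, {}
    return 401, {"WWW-Authenticate": 'Basic realm="My Realm"'}
```

A server behaving like this satisfies the documented handshake; if the second, authenticated request never arrives, the problem is on the caller's side.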
Update
Good news! Media URLs in Twilio Programmable Fax now support basic authentication. This has been implemented and deployed, so this should no longer be an issue.
Original answer
Twilio developer evangelist here.
After some internal investigation I've found out that this is a known issue.
It was in fact raised by the support ticket you sent in. The good news is that since this is known it will be getting some attention, and the team will contact you once it is sorted.
To answer this question differently, I'm just using Signed URLs on Google Cloud, which provide a long token that grants temporary access for a specific file. You can set this to grant access for 10 minutes, which should be more than enough time.
AWS appears to offer a similar solution.
I see these invalid HTTP requests in the server log.
The request URI includes scheme+hostname+port.
1.2.3.4 [13/Jan/2017:04:20:01 +0000] GET http://www.DOMAIN.hu:80/munkaugyi-segedanyagok/minimalber-2017-kormanyrendelet HTTP/1.1 403 http://m.facebook.com Mozilla/5.0 (iPhone; CPU iPhone OS 10_2 like Mac OS X) AppleWebKit/602.3.12 (KHTML, like Gecko) Mobile/14C92 [FBAN/FBIOS;FBAV/75.0.0.48.61;FBBV/45926345;FBRV/46260258;FBDV/iPhone8,1;FBMD/iPhone;FBSN/iOS;FBSV/10.2;FBSS/2;FBCR/TelekomHU;FBID/phone;FBLC/en_US;FBOP/5]
All other requests from the same visitor suggest a legitimate user.
Could it be the Facebook app for iPhone?
It turns out most web servers support absolute request URIs:
https://www.rfc-editor.org/rfc/rfc2616#section-3.2.1
An HTTP client may send requests with absolute request URIs, like the ones used in proxy requests, even when the target is the same domain.
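A server that accepts absolute-form request targets typically normalizes them back to origin-form (a bare path) before routing. A rough sketch of that normalization; the hostname in the example is illustrative:

```python
from urllib.parse import urlsplit

def normalize_request_target(target: str) -> str:
    """Reduce an absolute-form request target such as
    'http://www.example.hu:80/page' to the origin-form path,
    as if the client had sent a normal (non-proxy) request."""
    if target.startswith(("http://", "https://")):
        parts = urlsplit(target)
        path = parts.path or "/"
        return f"{path}?{parts.query}" if parts.query else path
    return target
```

So a request line like the one in the log above can be served normally; a 403 is the server's configuration choice, not a protocol requirement.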
I'm trying to make a GET request to the Asana API from the browser. Because this is a cross-domain request, the client first sends an OPTIONS request. I'm running into the same issue that was described in this Stack Overflow question a year ago, ASANA API and Access-Control-* headers, where the Asana API doesn't respond with the Access-Control parameters.
I'm wondering whether the new release of the Asana Connect and OAuth2 addresses this problem and I'm simply doing something wrong or if this is still unsupported.
(I work at Asana.) Sorry, looks like this slipped through the cracks.
We currently do not allow cross-origin requests. However, we do support JSONP if you use OAuth2 and authenticate with a bearer token. This allows you to make secure requests from a JS client.
Just append opt_jsonp=CALLBACK as a parameter to the request, where CALLBACK is the name of the JavaScript function you would like to be called back with the response data.
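A sketch of what that looks like from the client side: building the request URL with opt_jsonp and unwrapping the `CALLBACK(...)` response. Only `opt_jsonp` comes from the answer above; the `access_token` parameter name and the helper functions are illustrative assumptions:

```python
import json
import re
from urllib.parse import urlencode

def asana_jsonp_url(endpoint: str, token: str, callback: str) -> str:
    """Build a JSONP request URL. Passing the bearer token in the query
    string is an assumption; JSONP cannot set an Authorization header."""
    query = urlencode({"access_token": token, "opt_jsonp": callback})
    return f"{endpoint}?{query}"

def unwrap_jsonp(body: str, callback: str) -> dict:
    """Strip the 'CALLBACK(...)' wrapper and parse the JSON payload."""
    match = re.fullmatch(rf"{re.escape(callback)}\((.*)\);?", body.strip(), re.DOTALL)
    if not match:
        raise ValueError("response is not a JSONP call to the expected callback")
    return json.loads(match.group(1))
```

In a browser the unwrapping happens automatically, since the response is executed as a script that calls your named function.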
The YouTube documentation for the video feeds API is documented here:
https://developers.google.com/youtube/2.0/developers_guide_protocol_video_feeds#User_Uploaded_Videos
It states:
To request a feed of all videos uploaded by another user, send a GET request to the following URL. This request does not require authentication.
https://gdata.youtube.com/feeds/api/users/userId/uploads
I have found that the non-SSL format (http) of this API works as well.
I would prefer to use this version of the URL because I do not require SSL. However I am concerned that it is not documented (and thus might be dropped in the future). So, my question is, is the http form of this API officially supported?
I use HTTP too. The demo page from YouTube does not use HTTPS either: YouTube Data API
You should be perfectly fine using HTTP.
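If you do go this route, a tiny helper that can emit either scheme makes it easy to flip back to the documented https form should the http form ever stop working; the function name is illustrative:

```python
def uploads_feed_url(user_id: str, use_ssl: bool = True) -> str:
    """Build the (legacy v2) uploads feed URL. Only the https form is
    documented; the http form was observed to work but is not guaranteed."""
    scheme = "https" if use_ssl else "http"
    return f"{scheme}://gdata.youtube.com/feeds/api/users/{user_id}/uploads"
```

Keeping the scheme behind one function means a future breakage is a one-line fix rather than a search across the codebase.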