How to prevent XSS via the URL

This is my original URL:
127.0.0.1//myweb/home.php?u=daniel
Now, when I inject this kind of XSS payload:
127.0.0.1//myweb/home.php/"><script>alert('hacked')</script>?u=daniel
the alert fires and the page appears to be hacked. How can I prevent this type of XSS attack?
ADDED
Here is the rest of the code (I have left out the part that fetches the user's data):
require_once 'core/init.php';

$currentUser = new User();
$report = null;

if (!$currentUser->isLoggedIn()) {
    Redirect::to('index.php');
}

You can always use PHP to filter out the unnecessary parts of the URL.
It is your website, so you know which characters serve no purpose in its URLs.
For example, if you know that the double-quote (") character is never legitimate on your site,
you can simply discard everything from the first double quote onward.
You can get the current URL with the following code:
$url = $_SERVER['REQUEST_URI'];
Then discard everything after the first double-quote character using explode():
$safe_url = explode("\"", $url);
Then use $safe_url[0] as your URL.
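Putting the pieces together, here is a minimal sketch of that approach; the final htmlspecialchars() call is an extra safeguard worth adding whenever a URL-derived value is echoed back into HTML:

// Grab the raw request URI and cut it off at the first double quote.
$url = $_SERVER['REQUEST_URI'];
$safe_url = explode('"', $url);
$current_url = $safe_url[0];

// When echoing any URL-derived value into the page, escape it as well,
// so characters like < > " ' cannot break out of the surrounding markup.
echo htmlspecialchars($current_url, ENT_QUOTES, 'UTF-8');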

Related

Durable Id assignment in URL not working in Lightning

Problem Statement: To auto-populate the lookup field, I use durable Id assignment with the record name. For example: https://sales--dev.my.salesforce.com/m2p/e?CF00N0l0000051XXX=Contract-00000XXX&inline=1
Notice this part: CF00N0l0000051XXX=Contract-00000XXX, i.e. durableId=recordName in the URL.
Now, when the user clicks the New button to create a record on the VF page, the above URL is loaded in Classic and populates the Name in the lookup.
Trying to solve: in Lightning, the URL is getting overridden by this URL:
https://sales--dev.lightning.force.com/lightning/o/objectName/new?count=2
Is there a way to achieve the same URL behavior in Lightning?
Do you really need it to be a URL hack? Couldn't it be a quick action instead? The URL prepopulation would be more reliable there and would work everywhere.
URL hacking in Lightning is a bit simpler: you use field API names instead of IDs. These are decent tutorials: https://www.salesforceben.com/salesforce-url-hacking-for-lightning-tutorial/, https://sfdcdevelopers.com/2020/02/26/url-trick-in-salesforce-lightning/
So how do you know where you are, Classic or LEX, and which URL to use? Have a look at the UIThemeDisplayed variable, available in Visualforce and in Apex's UserInfo class.
IF($User.UIThemeDisplayed == 'Theme4d' || $User.UIThemeDisplayed == 'Theme4t' || $User.UIThemeDisplayed == 'Theme4u',
'link for lightning',
'link for classic'
)
Working approach:
I created a controller for the VF page:
global PageReference newParty() {
    PageReference pageRef;
    pageRef = new PageReference('/lightning/o/Party/new?defaultFieldValues=Contract=' + contractID);
    return pageRef;
}
You can absolutely do this with a button / URL hack in Lightning as of the Spring '20 release. The URL can use "defaultFieldValues=".
https://www.salesforceben.com/salesforce-url-hacking-for-lightning-tutorial/
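For example, a Lightning URL of roughly this shape (the object name and field values here are illustrative, not from the original post):
/lightning/o/Party/new?defaultFieldValues=Contract=800000000000XXX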

HTTP URL parameter Invalid Character Error

I have developed a REST API using feathers.js (https://feathersjs.com/).
When trying to do an HTTP 'read' request in Flutter using package:http/http.dart, I encountered an error: the http.dart library is unable to correctly parse the query params I pass to the URI.
The error I receive through the Android Studio debug console is:
FormatException: Invalid character (at character 84) E/flutter
(11338): ...lk.com/weatherobs?last=true&location[$in]=Bellambi&location[$in]=Nowra ....
The error indicates that the square brackets, and possibly the $ sign ('[$in]'), are the issue.
_getDemoRequest() {
  String url = r"http://demoapi.ap-southeast-2.elasticbeanstalk.com/weatherobs?last=true&location[$in]=Bellambi&location[$in]=Nowra&location[$in]=Sydney Airport&location[$in]=Thredbo Top Station&location[$in]=Hobart&location[$in]=Coolangatta";
  http.read(url).then(print);
}
In the URL I have tried prefixing the String with and without 'r' for a raw string, to no avail.
I have also tried using httpClient with params, with no success and the exact same error on the square brackets, e.g. '[$in]':
String baseUri = "http://xxxx.ap-southeast-2.elasticbeanstalk.com";
String qParams = r"?last=true&location[$in]=Bellambi&location[$in]=Nowra";
String path = "/weatherobs";
var _uri = new Uri.http(baseUri, path, qParams);
await httpClient.read(_uri, headers: {"Accept": "application/json"});
As a person with approximately 3 weeks of Flutter/Dart experience, I believe it's an elementary problem, but one for which several hours of research has uncovered no solution.
The way the URI query parameters are structured (with the square brackets, i.e. [$in]) is dictated by the feathers.js framework.
Any help would be appreciated.
It has been brought to my attention in another thread (https://stackoverflow.com/questions/40568/are-square-brackets-permitted-in-urls) that the URL specification, RFC 3986, generally does not permit square brackets in a URL.
My question was triggered because the GET request works as intended in Postman, the Chrome browser, and JavaScript applications using axios.js, but not in an application developed in Flutter/Dart using the standard http.read methods.
It doesn't look like [] are supported in a URL (except in the host part for IPv6 addresses). See Are square brackets permitted in URLs?.
Please check whether the API accepts them when they are encoded, like:
void main() {
  var url = r'http://demoapi.ap-southeast-2.elasticbeanstalk.com/weatherobs';
  var locationKey = Uri.encodeQueryComponent(r'location[$in]');
  var qParams = 'last=true&$locationKey=Bellambi&$locationKey=Nowra&$locationKey=Sydney Airport&$locationKey=Thredbo Top Station&$locationKey=Hobart&$locationKey=Coolangatta';
  try {
    print(Uri.parse(url).replace(query: qParams));
  } catch (e) {
    print(e);
  }
}
DartPad example
See also api.dartlang.org/stable/1.24.3/dart-core/Uri/…
You can use this Flutter package, which allows you to communicate with your feathers.js server from a Flutter app, as described in: https://stackoverflow.com/a/65538226/12461921

Redirect after login to backbone route with rails

I am attempting to provide users with a common piece of functionality: redirecting them after login to the originally requested URL behind a secure path. For example, a user clicks a link in an email triggered by a notification in the system and attempts to go to:
https://mysite.com/secure/notifications/1
The user is not logged in, so they are kicked back to
https://mysite.com/login
After login they should be brought not to their home page, but to the originally requested URL.
I am familiar with the technique of storing the attempted URL in the session before redirecting to the login page. The issue is that the URL may contain a Backbone route after the core URL, e.g.
https://mysite.com/secure/notifications/1#details
The #details part of the URL is not sent to the server, it seems, as it is traditionally used for in-page jumps. I am wondering how web developers are dealing with this as JS MVC frameworks like Backbone, Angular, and others emerge. Is there some trick? Is there any way to have the # fragment passed to the server within the HTTP specification?
Any ideas are appreciated, thank you.
The easiest solution to this problem, if you don't need to support this behaviour in older browsers, is to enable pushState in your Backbone router so you don't use # for routes:
Backbone.history.start({pushState: true});
Edit:
The other potential solution, though it is a bit messy, is to do some URL tomfoolery to figure out what should come after the hash and then navigate to that route.
For example, let's say that you want to navigate to:
http://webapp.com/abc/#page1, where 'page1' is the fragment which makes up the Backbone route.
If you instead send the user to http://webapp.com/abc/page1, you can detect whether the browser supports pushState. If not, you can replace everything after the 'root' with the hash. Here is some example code which might get you on the right track to supporting both sets of browsers:
var _defaults = {
    pushState: Modernizr.history,
    silent: true,
    root: '/'
};

var start = function(options) {
    // Start the routing either with pushState or without
    options = _.extend(_.clone(_defaults), options);
    Backbone.history.start(options);
    if (options.pushState) {
        Backbone.history.loadUrl(Backbone.history.getFragment());
        return;
    }
    degradeToNonHistoryURL();
};

/**
 * For fragment URLs, we check if the actual request is for the root, i.e. '/'.
 * If it is, we can continue and Backbone will do the magic.
 * If it isn't, we redirect to the root with the route as a fragment:
 * foo.com/bar/1 -> foo.com/#bar/1
 */
var degradeToNonHistoryURL = function() {
    var pathName = window.location.pathname;
    // If the root is '/', length is one. If the root is 'foo', length is 5 (/foo/)
    var rootLength = _getRoot().length;
    var isRootRequest = pathName.length === rootLength;
    if (!isRootRequest) {
        var route = pathName.substr(rootLength);
        window.location.href = _getRoot() + '#' + route + window.location.search;
        return;
    }
    Backbone.history.loadUrl(Backbone.history.getFragment());
};

/**
 * Get the effective root of the app. Normally it's '/', but if set to 'foo', we want
 * to return '/foo/' so we can more easily determine if this is a root request or not.
 * @returns {String} The effective root
 */
var _getRoot = function() {
    if (Backbone.history.options.root === '/') {
        return '/';
    }
    return '/' + Backbone.history.options.root + '/';
};
The trick here is making the pushState URLs your canonical URLs and always sending users to those. Once browser adoption increases, it should theoretically be easy to cut all of this crap out without having to update all of your links.
After some research, it seems there are only two solutions:
As recommended by Will, use pushState and only support HTML5 browsers; but this is a massive change for existing apps using hash or hashbang JavaScript navigation.
Work around it on the server side. The main option here is providing redirect endpoints that get users where they need to go. Example:
/myapp/redirector?pathroot=notifications&hashroot=details&hashparam1=2
This would then build up a URL on the server side:
/myapp/notifications/1#details/2
So in option 2 the server cannot receive HTTP requests containing hash fragments, but it can send them. The browser will receive the full path, including the hash navigation part, and do its normal JavaScript MVC routing. A sketch of such an endpoint follows.
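Here is a minimal sketch of such a redirector in PHP, purely to illustrate the idea; the parameter names, the extra id parameter, and the /myapp path prefix are all hypothetical:

// redirector.php - rebuilds a client-side route from query parameters,
// then sends the browser there. All parameter names are illustrative.
$pathroot   = isset($_GET['pathroot'])   ? $_GET['pathroot']   : ''; // e.g. 'notifications'
$id         = isset($_GET['id'])         ? $_GET['id']         : ''; // record id, e.g. '1'
$hashroot   = isset($_GET['hashroot'])   ? $_GET['hashroot']   : ''; // e.g. 'details'
$hashparam1 = isset($_GET['hashparam1']) ? $_GET['hashparam1'] : ''; // e.g. '2'

// Build something like '/myapp/notifications/1#details/2' and redirect.
// The server may *send* a fragment even though it can never receive one.
$target = '/myapp/' . rawurlencode($pathroot) . '/' . rawurlencode($id)
        . '#' . rawurlencode($hashroot) . '/' . rawurlencode($hashparam1);
header('Location: ' . $target);
exit();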

How to block bad, unidentified bots from crawling my website?

How can I stop bad, unidentified bots from crawling my website? Some bad bots whose names do not appear in the Apache section of cPanel are eating into my website's bandwidth.
I have tried robots.txt (on batgap.com/robots.txt) and also blocking via .htaccess, but there has been no improvement in bandwidth usage. I don't know the IP addresses of those bots, so I am unable to block them by IP. These bots are consuming so much of the site's bandwidth that I have had to increase it on the server.
I'm from Incapsula and we deal with bad bots on a regular basis.
We've recently released bot-related research that provides insight into the scope of the problem (http://www.incapsula.com/the-incapsula-blog/item/225-what-google-doesnt-show-you-31-of-website-traffic-can-harm-your-business), and in light of this data I have to agree with @Leonard Challis: you simply cannot handle bot protection manually.
Having said that, there are bot protection solutions, even free ones (us included), that can help you with bad bots.
BTW, just as you mentioned, one byproduct of bad bot visits is loss of bandwidth.
We've recently become aware of just how surprisingly huge bot-related bandwidth usage really is.
This is an interesting topic in itself.
We believe that by avoiding bad bot traffic, hosting providers could greatly improve their efficiency (hopefully using this to cut costs or improve services). Once you imagine the social and business implications of this, you can understand the real scope of the bad bot problem, which goes way beyond the immediate damage done.
I block 'bad bots' by using PHP.
I filter by IP address primarily, then by User-Agent secondarily.
I make the 'bad bot' wait for up to 999 seconds, then return a very small web page.
Usually (always) the internet connection times out and zero (0) bytes are returned.
Best of all, I have delayed them for a few minutes before they get to the next victim.
http://gelm.net/How-to-block-Baidu-with-PHP.htm
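A minimal sketch of that tarpit idea in PHP (the User-Agent keywords and timings below are illustrative, not taken from the linked page):

// Tarpit for known-bad User-Agents: stall the connection for a long time,
// then return a near-empty page. The list and delay are illustrative.
$bad_agents = array('Baiduspider', 'MJ12bot');
$ua = isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '';

foreach ($bad_agents as $agent) {
    if (stripos($ua, $agent) !== false) {
        set_time_limit(1200);  // let the script outlive the default 30s limit
        sleep(999);            // most bot connections time out long before this
        exit('<html></html>'); // tiny response for any bot still listening
    }
}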
Unfortunately, robots.txt is sometimes ignored by these "bad bots", though if the problem is more with genuine search engine spiders that you don't want, they ought to take it into account. I presume that with cPanel you can get at the web server (Apache) logs? In there you can look for two things: the IP and the User-Agent. You can find the culprits there and add them to your robots.txt and .htaccess. Note that .htaccess rules denying IP addresses are far better than just relying on robots.txt, because you are taking the choice out of the bot creator's hands.
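For illustration, such .htaccess rules might look like this (Apache 2.2-style directives; the IP address and the User-Agent keyword are placeholders):

# Block one offending IP address outright (placeholder value)
Order Allow,Deny
Allow from all
Deny from 203.0.113.42

# Flag any User-Agent containing a given keyword, then deny it
BrowserMatchNoCase "BadBot" bad_bot
Deny from env=bad_bot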
If you know which specific bots are doing this, you should be able to get IP addresses and user-agents from forums, but if it's a more general thing then I'm afraid it's more of a manual job.
There are other methods that can be used with varying degrees of success, such as mod_security (http://www.askapache.com/htaccess/modsecurity-htaccess-tricks.html), but this means you'll need access to your web server configuration.
Finally, you can check the links that point to your web site (using the link: operator on Google). Sometimes having links on spammy forums and the like can increase the chances of bots coming to get you. Maybe you can look at the referer URL in the Apache logs, but this is all based on a lot of assumptions and you'd probably be lucky if it had a great effect.
Block unwanted robot/spider visitors via PHP
Instructions:
Place the following PHP code at the beginning of your index.php file.
The idea is to place the code in the main site's PHP home page, the main entry point of the site.
If you have other PHP files that are accessed directly via a URL (not including PHP include or require support-type files), then place the code at the beginning of those files as well.
For most PHP sites and PHP CMS sites, the root's index.php file is the main entry point of the site.
Keep in mind that your site statistics, e.g. AWStats, will still log the hits under "Unknown robot (identified by 'bot' followed by a space or one of the following characters _+:,.;/-)", but these bots will be blocked from accessing your site's content.
<?php
// ---------------------------------------------------------------------------------------------------------------
// Banned IP Addresses and Bots - Redirects banned visitors who make it past the .htaccess and/or robots.txt files to a URL.
// The $banned_ip_addresses array can contain both full and partial IP addresses, i.e. Full = 123.456.789.101, Partial = 123.456.789. or 123.456. or 123.
// Use partial IP addresses to include all IP addresses that begin with that partial address. The partial IP addresses must end with a period.
// The $banned_bots, $banned_unknown_bots, and $good_bots arrays should contain keyword strings found within the User Agent string.
// The $banned_unknown_bots array is used to identify unknown robots (identified by 'bot' followed by a space or one of the following characters _+:,.;/\-).
// The $good_bots array contains keyword strings used as exemptions when checking for $banned_unknown_bots. If you do not want to utilize the $good_bots array, such as
// $good_bots = array(), then you must remove the keyword strings 'bot.','bot/','bot-' from the $banned_unknown_bots array or else the good bots will also be banned.
$banned_ip_addresses = array('41.','64.79.100.23','5.254.97.75','148.251.236.167','88.180.102.124','62.210.172.77','45.','195.206.253.146');
$banned_bots = array('.ru','AhrefsBot','crawl','crawler','DotBot','linkdex','majestic','meanpath','PageAnalyzer','robot','rogerbot','semalt','SeznamBot','spider');
$banned_unknown_bots = array('bot ','bot_','bot+','bot:','bot,','bot;','bot\\','bot.','bot/','bot-');
$good_bots = array('Google','MSN','bing','Slurp','Yahoo','DuckDuck');
$banned_redirect_url = 'http://english-1329329990.spampoison.com';

// Visitor's IP address and Browser (User Agent)
$ip_address = $_SERVER['REMOTE_ADDR'];
$browser = $_SERVER['HTTP_USER_AGENT'];

// Declared Temporary Variables
$ipfound = $piece = $botfound = $gbotfound = $ubotfound = '';

// Checks for Banned IP Addresses and Bots
if($banned_redirect_url != ''){

    // Checks for Banned IP Address
    if(!empty($banned_ip_addresses)){
        if(in_array($ip_address, $banned_ip_addresses)){$ipfound = 'found';}
        if($ipfound != 'found'){
            $ip_pieces = explode('.', $ip_address);
            foreach ($ip_pieces as $value){
                $piece = $piece.$value.'.';
                if(in_array($piece, $banned_ip_addresses)){$ipfound = 'found'; break;}
            }
        }
        if($ipfound == 'found'){header("location: $banned_redirect_url"); exit();}
    }

    // Checks for Banned Bots
    if(!empty($banned_bots)){
        foreach ($banned_bots as $bbvalue){
            $pos1 = stripos($browser, $bbvalue);
            if($pos1 !== false){$botfound = 'found'; break;}
        }
        if($botfound == 'found'){header("location: $banned_redirect_url"); exit();}
    }

    // Checks for Banned Unknown Bots
    if(!empty($good_bots)){
        foreach ($good_bots as $gbvalue){
            $pos2 = stripos($browser, $gbvalue);
            if($pos2 !== false){$gbotfound = 'found'; break;}
        }
    }
    if($gbotfound != 'found'){
        if(!empty($banned_unknown_bots)){
            foreach ($banned_unknown_bots as $bubvalue){
                $pos3 = stripos($browser, $bubvalue);
                if($pos3 !== false){$ubotfound = 'found'; break;}
            }
            if($ubotfound == 'found'){header("location: $banned_redirect_url"); exit();}
        }
    }
}
// ---------------------------------------------------------------------------------------------------------------
?>

Excel links not loading pages, but pasting the link into the browser works

I have placed hyperlinks in an Excel spreadsheet through my Ruby on Rails application. The links are to privileged pages that require login. After logging in, I am supposed to be taken to the requested page. However, what happens is that after login I land on the home page of the website. Interestingly, when I right-click the link in Excel and paste it into the browser's URL bar, it works as expected. So I don't think it's my app's fault, but rather something in Excel that I am missing?
My scenario is pretty much the same as this scenario:
http://www.geekstogo.com/forum/topic/289186-excel-2007-hyperlink-loads-web-login-screen-not-linked-urlplease-help-me/
This issue normally happens in IE; I faced the same problem.
The solution is very simple:
Create a redirect.html page in your public folder (public folder because you are using RoR).
Copy this into your redirect.html:
<html>
<body>
Please wait, loading your page...
<script type="text/javascript">
    function getQuerystring(key) {
        // Escape [ and ] so the key can be used inside a RegExp
        key = key.replace(/[\[]/, "\\\[").replace(/[\]]/, "\\\]");
        var regex = new RegExp("[\\?&]" + key + "=([^&#]*)");
        var query = regex.exec(window.location.href);
        return query[1];
    }
    window.location = window.location.protocol + "//" + window.location.host + "/" + getQuerystring('page');
</script>
</body>
</html>
The links you are constructing in Excel should be modified.
For example, old link: http://test.com/post/comments/1
The new link should be: http://test.com/redirect.html?page=post/comments/1
You need to pass the part of the URL after the base URL as a param.
How this works:
When the user clicks a link in Excel, it points to redirect.html, and the JavaScript constructs the actual URL and redirects to the proper page. The user is taken to the actual page if already logged in, or otherwise redirected to the login/home page.
Not sure it's really an answer, but I had the same problem with my application.
The whole application, including the home page, is protected (I'm using Devise).
So whenever a user wants to access http://myapp, it redirects them to http://myapp/users/sign_in.
I think Devise uses a 301 or a 302 to redirect to the login screens.
My finding is that links clicked in Office and opened in IE cannot accommodate this redirect (there is no problem when Chrome is the default browser). Does that match your setup?
Ultimately, I have found no solution other than linking directly to the sign-in page... Maybe there are other options, but I'm still looking for them.
EDIT: I found this article (from 2006) about a bug in Outlook which matches our situation exactly.
Again, not a solution, but at least an explanation.
This is a problem with Excel's internal URL handling, which has issues with modern web design patterns (e.g. sessions + redirects).
Here's a client-side solution that bypasses Excel's internal mechanisms and uses the OS default URL handler instead. Note that since it uses macros, this approach requires appropriate security settings.
In your worksheet's VBA module, add the following code:
Option Explicit

' Note: on 64-bit Office, this Declare statement needs the PtrSafe keyword.
Private Declare Function ShellExecute Lib "shell32.dll" Alias "ShellExecuteA" ( _
    ByVal hWnd As Long, _
    ByVal Operation As String, _
    ByVal Filename As String, _
    Optional ByVal Parameters As String, _
    Optional ByVal Directory As String, _
    Optional ByVal WindowStyle As Long = vbMinimizedFocus _
) As Long

' Intercept hyperlink clicks on this worksheet and hand the URL
' to the OS default handler instead of Excel's internal one.
Private Sub Worksheet_FollowHyperlink(ByVal Target As Hyperlink)
    ShellExecute 0, "Open", Target.Address
End Sub
This is based on two other SO answers.
I have updated the redirect.html from the answer by @Bharath, replacing the regex manipulation with the Web APIs now available:
URL
URLSearchParams
See the original answer by @Bharath for further explanation.
<!DOCTYPE html>
<html lang="en">
<body>
One moment please. We are trying to connect you ...
<script type="text/javascript">
    const url = new URL(window.location.href);
    const params = new URLSearchParams(url.search);
    const redirectTo = params.get('page');
    location = `${url.origin}/${redirectTo}`;
</script>
</body>
</html>
Of course, there is scope to handle errors (e.g. checking if (params.has('page')) first, etc.).
See also Easy URL Manipulation with URLSearchParams by Eric Bidelman of Google.
