In one of my current projects, I'm making use of a single-user authentication system. I say "single-user" as I have no plans to make this work for multiple users on the same Windows account (simply because it's not something I'm looking to do).
When the user starts the application, they're presented with an authentication screen. This authentication screen uses an image (i.e. the user clicks 3 specific points in the image), a username (a standard edit box), and an image choice (a dropdown menu allowing them to select which image they wish to use). The image choice, username, and the points clicked on the image must all match what the user specified when setting up the password.
All 3 results are combined into a string, which is then encoded with the Soap.EncdDecd.EncodeString method. This is then hashed using SHA-512. Finally, it's encrypted using DES. This value is then compared with the value that was created when they set up their password. If it matches, they're granted access. If not, access is denied. I plan to use the SHA-512 value at other points in the application (such as a "master password" for authorising themselves with various different modules within the main application).
In one example, the initial string is 29 characters in length, the SOAP-encoded string is around 40 characters, the SHA-512 string is 128 characters, and the DES value is 344 characters. Since I'm not working with massive strings, it's actually really quick. SOAP was used as very basic obfuscation and not as a security measure.
My concern is that the first parts (the plain string and the SOAP encoding) could be the weak points. The basic string won't give them something they can just type and be granted access, but it would give them the "image click co-ordinates", along with the username and image choice, which could potentially allow them access to the application. The SOAP string can be easily decoded.
What would be the best way to strengthen this first part of the authentication to try and avoid the values being ripped straight from memory? Should I even be concerned about a potential exploiter or attacker reading the values in this way?
As an additional question directly related to this same topic:
What would be the best way to store the password hash that the user creates during initial setup?
I'm currently running with a TIniFile.SectionExists method, as I've not yet got around to coming up with something more elegant. This is one area where my knowledge is lacking. I need to store the password "hash" across sessions (so using a memory stream isn't an option), but I need to make sure security is good enough that it can't be outright cracked by any script kiddie.
It's really more about whether I should be concerned, and whether the encoding, hashing, and encryption I've done is actually enough. The picture-password system I developed is already a great basis for stopping the traditional "I know what your text-based password is, so now I'm in your system" attack, but I'm concerned about the more technical attacks that read from memory.
Using SHA-512, it is NOT feasible (at least not without decades of computing power and the Earth's electricity supply) to retrieve the initial content from the hash value.
I even think that using DES is not necessary, and only adds complexity. Of course, you can use such a slow process to make brute-force or dictionary-based attacks harder (since it will make each try slower). A more common approach is not to use DES, but to call SHA-512 several times (e.g. 1,000 times). In this case, speed is your enemy: a quick process is easier to attack.
What you can do is add a so-called "salt" to the initial values. See this Wikipedia article.
The "salt" can be fixed in the code, or stored along with the password hash.
That is:
Hash := SHA512(Salt+Coordinates+UserName+Password);
Some final advice:
Never store the plain initial text in a DB or file;
Force the use of strong passwords (not "hellodave", which is easy to break with a dictionary);
The main security weakness is between the chair and the keyboard;
If you are paranoid, explicitly overwrite (i.e. one char at a time) the plain initial text in memory before releasing it (it may otherwise still be somewhere in RAM);
First learn a little bit about well-known techniques: you would be better off using a "challenge" with a "nonce" to avoid any "replay" or "man-in-the-middle" attacks;
It is safe to store the password hash in a DB or even an INI file, if you take care to use a strong authentication scheme (e.g. challenge-response) and secure the server access.
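To make the challenge-response idea concrete, here is a rough sketch (written in C++ with Crypto++ purely for illustration; it is not Delphi or mORMot code, and the helper names are mine):

#include <string>
#include <cryptopp/sha.h>
#include <cryptopp/hex.h>
#include <cryptopp/filters.h>
#include <cryptopp/osrng.h>

// Hex-encoded SHA-512 of the input.
std::string sha512Hex(const std::string& data) {
    std::string digest;
    CryptoPP::SHA512 sha;
    CryptoPP::StringSource ss(data, true,
        new CryptoPP::HashFilter(sha,
            new CryptoPP::HexEncoder(new CryptoPP::StringSink(digest))));
    return digest;
}

// Server side: issue a fresh, unpredictable nonce for each authentication attempt.
std::string issueChallenge() {
    CryptoPP::AutoSeededRandomPool rng;
    CryptoPP::byte buf[16];
    rng.GenerateBlock(buf, sizeof(buf));
    std::string nonce;
    CryptoPP::StringSource ss(buf, sizeof(buf), true,
        new CryptoPP::HexEncoder(new CryptoPP::StringSink(nonce)));
    return nonce;
}

// Client side: never send the stored hash itself, only a response bound to the nonce.
std::string answerChallenge(const std::string& nonce, const std::string& storedHash) {
    return sha512Hex(nonce + storedHash);
}

// Server side: recompute with its own copy of the hash and compare. A captured
// response cannot be replayed, because the next attempt uses a different nonce.
bool verify(const std::string& nonce, const std::string& expectedHash, const std::string& response) {
    return response == sha512Hex(nonce + expectedHash);
}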
For instance, here is how to "clean" your memory (but it may be much more complex than this):
// assumes a local variable i: Integer
Content := Salt + Coordinates + UserName + Password;
// wipe each part as soon as it is no longer needed
for i := 1 to Length(Coordinates) do
  Coordinates[i] := ' ';
for i := 1 to Length(UserName) do
  UserName[i] := ' ';
for i := 1 to Length(Password) do
  Password[i] := ' ';
Hash := SHA512(Content);
// wipe the concatenated plain text once it has been hashed
for i := 1 to Length(Content) do
  Content[i] := ' ';
// slow down brute force by iterating the hash (e.g. 1000 rounds)
for i := 1 to 1000 do
  Hash := SHA512(Hash);
When it comes to security, do not try to reinvent the wheel: it is a difficult matter, and you are better off relying on mathematically proven primitives (like SHA-512) and well-established techniques (like a salt or a challenge).
For a sample authentication scheme, take a look at how we implemented RESTful authentication for our Client-Server framework. It is certainly not perfect, but it tries to implement some best practices.
#include <vector>
#include <cryptopp/eccrypto.h>
#include <cryptopp/oids.h>
#include <cryptopp/osrng.h>

struct kpStruct {
    CryptoPP::SecByteBlock sharedECDH;
};

CryptoPP::OID CURVE = CryptoPP::ASN1::secp256r1();
CryptoPP::AutoSeededRandomPool prng;
std::vector<kpStruct> KPVecRSU;

for (int unit = 0; unit < 3; ++unit)  // one iteration per unit
{
    kpStruct keyP;
    CryptoPP::ECDH<CryptoPP::ECP>::Domain dhA(CURVE);
    CryptoPP::SecByteBlock privA(dhA.PrivateKeyLength()), pubA(dhA.PublicKeyLength());
    dhA.GenerateKeyPair(prng, privA, pubA);                   // fresh key pair every iteration
    CryptoPP::SecByteBlock sharedA(dhA.AgreedValueLength());  // buffer only; Agree() is never called
    keyP.sharedECDH = sharedA;
    KPVecRSU.push_back(keyP);
}
I want to create a shared secret between 3 units, but this code gives me different ones! Any idea, please?
ECDH shared secret doesn't match in loop, with Crypto++
Each run of the protocol produces a different shared secret because both the client and server are contributing random values during the key agreement. The inherent randomness provides forward secrecy, meaning bad guys cannot recover plain text at a later point in time because the random values were temporary or ephemeral (forgotten after the protocol execution).
In the Crypto++ implementation, the library does not even make a distinction between client and server because there's so much symmetry in the protocol. Protocols with too much symmetry can suffer the Chess Grand-Master attack, where one protocol execution is used to solve another protocol execution (think of it like a man-in-the-middle, where the bad guy is a proxy for both grand-masters). Often, you tweak a parameter on one side or the other to break the symmetry (client uses 14-byte random, server uses 18-byte random).
Other key agreement schemes we are adding do need to make the distinction between client and server, like Hashed MQV (HMQV) and Fully Hashed MQV (FHMQV). Client and Server are called Initiator and Responder in HMQV and FHMQV.
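For reference, here is a minimal two-party agreement in Crypto++ where the two derived secrets do match (this is plain pairwise ECDH, not the multi-party case discussed below):

#include <iostream>
#include <cryptopp/eccrypto.h>
#include <cryptopp/oids.h>
#include <cryptopp/osrng.h>

int main()
{
    using namespace CryptoPP;
    AutoSeededRandomPool rng;
    ECDH<ECP>::Domain dhA(ASN1::secp256r1()), dhB(ASN1::secp256r1());

    SecByteBlock privA(dhA.PrivateKeyLength()), pubA(dhA.PublicKeyLength());
    SecByteBlock privB(dhB.PrivateKeyLength()), pubB(dhB.PublicKeyLength());
    dhA.GenerateKeyPair(rng, privA, pubA);
    dhB.GenerateKeyPair(rng, privB, pubB);

    SecByteBlock sharedA(dhA.AgreedValueLength()), sharedB(dhB.AgreedValueLength());
    // Each side mixes its own private key with the other side's public key.
    bool okA = dhA.Agree(sharedA, privA, pubB);
    bool okB = dhB.Agree(sharedB, privB, pubA);

    std::cout << (okA && okB && sharedA == sharedB ? "secrets match" : "mismatch") << std::endl;
}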
I want to create a shared secret between 3 units, but this code gives me different ones.
This is a different problem. This is known as Group Diffie-Hellman or Multi-party Diffie-Hellman. It has applications in, for example, chat and broadcast of protected content, where users are part of a group or join a group. The trickier part of the problem is how to revoke access to a group when a user leaves the group or is no longer authorized.
Crypto++ does not provide any group DH schemes, as far as I know. You may be able to modify existing sources to do it.
For Group Diffie-Hellman, you need to search Google Scholar for the papers. Pay particular attention to the security attributes of the scheme, like how to join and leave a group (grant and revoke access).
Thinking of using simple hashing to create an internally used URL-shortening service. The function I am planning to use is as below:
string s = base64Convert(md5(salt: time in milliseconds))
string url = s.substring(0, len: 6)
map url to the real url
There will be 64^6 = 68,719,476,736 possible combinations, which should be more than enough for our internal services.
However, one thing that worries me is: how can I make sure there will not be a duplicate URL until the 64^6 + 1st hashing?
Any thoughts?
How can I make sure there will not be a duplicate URL until the 64^6 + 1st hashing?
Using simple hashing, you cannot ensure this property.
Assuming equidistribution of MD5, if you have n URLs hashed and add one more, then there are n hash values it could collide with and 64^6 - n values that are still free. So the chance of a collision for that new element is n/64^6. This value is non-zero even for n = 1, so the second URL could already collide in theory, even if the chances of this actually happening are extremely low. The more non-colliding URLs you have in your database, the higher the chance that a new hash will collide with any existing one, until the chance reaches 100% for n = 64^6.
If you think about it like this, make sure to keep the birthday "paradox" in mind. If you have a set of n URLs you'd like to add, then the chances of any two of them colliding are far larger than the chances of just the last one colliding with any of the ones you added before. If you do the math, you will find that using your scheme, you can expect to hash approximately 37,000 URLs before the chance of a collision between any two of them exceeds 1%.
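As a rough way to arrive at that figure (a standard birthday-bound approximation, not part of the original answer), with N = 64^6 possible codes and target collision probability p:

p(n) \approx 1 - e^{-n^{2}/(2N)} \quad\Longrightarrow\quad n(p) \approx \sqrt{2N \ln\frac{1}{1-p}}

n(0.01) \approx \sqrt{2 \cdot 64^{6} \cdot \ln\frac{1}{0.99}} \approx 37{,}000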
So you now have to decide whether a 1% chance of collision is acceptable, and whether 37,000 URLs are enough for your needs. If the probabilistic results don't satisfy you, you can either improve the odds, e.g. by using more than 6 characters, or you'll have to implement collision resolution (a sketch follows below).
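For the collision-resolution route, a minimal sketch might look like this (C++; the table and the makeCode() helper are assumptions standing in for your storage and your base64(md5(...)) step, and note that it hashes the URL plus a retry counter rather than a timestamp, which also makes repeated submissions of the same URL return the same code):

#include <string>
#include <unordered_map>

// Assumed store mapping short code -> original URL (a database table in practice).
std::unordered_map<std::string, std::string> table;

// makeCode() stands in for your existing base64Convert(md5(...)).substring(0, 6) step.
std::string shorten(const std::string& longUrl, std::string (*makeCode)(const std::string&)) {
    // Re-hash with a counter suffix until we find an unused code (or one already
    // assigned to this very URL), turning silent collisions into retries.
    for (int attempt = 0; ; ++attempt) {
        std::string code = makeCode(longUrl + "#" + std::to_string(attempt));
        auto it = table.find(code);
        if (it == table.end()) {            // free slot: claim it
            table.emplace(code, longUrl);
            return code;
        }
        if (it->second == longUrl)          // this URL was already shortened
            return code;
        // otherwise a genuine collision with a different URL: try the next suffix
    }
}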
Was wondering if anyone out there can help...
My company works in the travel industry, and one of the products we provide is the function of buying a flight and hotel together.
One of the advantages of this is that sometimes a visitor can save on a hotel if they buy the package together.
What I want to be able to track is the following:
The hotel which has the saving on it (accommodation code); the saving that they will make; the price of the package that they will pay.
I am new to implementing this, but have been told by a colleague that I can use a context variable.
Would anyone be able to tell me how I should write this please?
Kind Regards
Yaser
Here is the documentation entry for Context Data Variables.
For example, in the custom code section of the on-page code, within s_doPlugins or via some wrapper function that ultimately makes an s.t() or s.tl() call, you would have:
s.contextData['package.code'] = "accommodation code";
s.contextData['package.savings'] = "savings";
s.contextData['package.price'] = "price";
Then in the interface you can go to processing rules and map them to whatever props or eVars you want.
Having said that... processing rules are pretty basic at the moment, and to be honest, it's not really worth it IMO. First, you have to get certified (take an exam and pass it) to even access processing rules. It's not that big a deal, but it's IMO a pointless hoop to jump through (tip: if you are going to go ahead and take this step, be sure to study up on more than just processing rules. Despite the fact that the exam/certification is supposed to be about processing rules, there are several questions that have little to nothing to do with them).
Second, context data doesn't show up in reports on its own. You must assign the values to actual props/eVars/events through processing rules (or get ClientCare to use them in a VISTA rule, which is significantly more powerful than a processing rule, but costs a lot of money).
Third, the processing rules are pretty basic. Seriously, you're limited to simple stuff like straight duping, concatenating values, etc.
Fourth, processing rules are limited in setting events, and won't let you set the products string. In other words, you can set a basic (counter) event, but not a numeric or currency event (an event with a custom value associated with it). The reason I mention this is that those price and savings values might work well as a numeric or currency event for calculated metrics. Since you can't set such an event via processing rules, you'd have to set the events in your page code anyway.
The only real benefit here is if you're simply looking to dupe them into a prop/eVar and that prop/eVar varies from report suite to report suite (though FYI, most people try to keep these consistent across report suites anyway, and rarely repurpose them).
So if you are already consistent across multiple report suites (or only have one report suite in the first place), then since you already have to put some code on the site, there's no real incentive to go through context data instead of just popping the values into the props/eVars directly.
I guess the overall point here is that since the overall goal is to get the values into actual props, eVars, and possibly events, and processing rules fall short on a lot of levels, there's no compelling reason not to just pop them directly in the first place.
I refer to Scott Ambler's Choosing a Primary Key: Natural or Surrogate? page.
Excerpt:
High-low strategy. The basic idea is that your key value, often called a persistent object identifier (POID) or simply an object identifier (OID), is in two logical parts: a unique HIGH value that you obtain from a defined source, and an N-digit LOW value that your application assigns itself. Each time a HIGH value is obtained, the LOW value is set to zero.
I am interested in DORM (The Delphi ORM by Daniele Teti) and would like to know if somebody has already implemented the high/low strategy for it.
Any input is welcome.
Edit 1:
To narrow the scope of the question:
I want to use Firebird as the RDBMS backend.
I will likely have to implement IdormKeysGenerator, similarly to dorm.adapter.Firebird.TFirebirdTableSequence.
Edit 2:
The HIGH value is persisted on the server.
LOW value allocation is the client's responsibility.
I think an ordinary allocator will do for the LOW value (implemented as a class).
Currently DORM supports only surrogate keys (integer or string). Natural (multi-field) key support is scheduled on the internal roadmap. Some internal structures are ready to support multi-field keys, but this is not implemented yet. The high-low strategy is not planned, but it should not be so difficult to do.
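For illustration only (this is not DORM code), a high/low allocator boils down to something like the following sketch, where getNextHigh() stands in for whatever fetches and increments the HIGH value on the server, e.g. a Firebird generator:

#include <cstdint>
#include <functional>

class HighLowGenerator {
public:
    HighLowGenerator(std::function<std::int64_t()> getNextHigh, std::int64_t lowRange)
        : fetchHigh(std::move(getNextHigh)), range(lowRange), high(0), low(lowRange) {}

    // Returns the next key; contacts the server only when the current LOW block is exhausted.
    std::int64_t nextKey() {
        if (low >= range) {        // current block used up (also true on the very first call)
            high = fetchHigh();    // one server round trip per block of keys
            low = 0;
        }
        return high * range + low++;
    }

private:
    std::function<std::int64_t()> fetchHigh;
    std::int64_t range;
    std::int64_t high;
    std::int64_t low;
};

With, say, lowRange = 100, the client only needs to hit the server-side generator once per 100 new keys.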
P.S. As with every Open Source project, feel free to contribute :-)
I am designing a simple registration form in ASP.NET MVC 1.0.
I want to allow the username to be validated while the user is typing (as per the related questions linked to below)
This is all easy enough. But what are the security implications of such a feature?
How do I avoid abuse from people scraping this to determine the list of valid usernames?
some related questions: 1, 2
To prevent against "malicious" activities on some of my internal ajax stuff, I add two GET variables one is the date (usually in epoch) then I take that date add a salt and SHA1 it, and also post that, if the date (when rehashed) does not match the hash then I drop the request otherwise fulfill it.
Of course I do the encryption before the page is rendered and pass the hash & date to the JS. Otherwise it would be meaningless.
The problem with using IP/cookie based limits is that both can be bypassed.
Using a token method with a good, cryptographically strong salt (say, something like one of Steve Gibson's "Perfect Passwords": https://www.grc.com/passwords.htm), it would take a HUGE amount of time (on the scale of decades) before the method could reliably be predicted, and it therefore ensures a certain amount of security.
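A minimal sketch of that token scheme (C++ with Crypto++ purely for illustration; in the original setup the hash would be produced server-side in whatever language renders the page, and kSalt plus the helper names are my own placeholders):

#include <string>
#include <ctime>
#include <utility>
#include <cryptopp/sha.h>
#include <cryptopp/hex.h>
#include <cryptopp/filters.h>

// Hypothetical server-side salt; in practice a long random value kept out of the client code.
static const std::string kSalt = "replace-with-a-long-random-salt";

std::string sha1Hex(const std::string& data) {
    std::string digest;
    CryptoPP::SHA1 sha;
    CryptoPP::StringSource ss(data, true,
        new CryptoPP::HashFilter(sha,
            new CryptoPP::HexEncoder(new CryptoPP::StringSink(digest))));
    return digest;
}

// Rendered into the page: the timestamp plus its salted hash.
std::pair<std::time_t, std::string> issueToken() {
    std::time_t now = std::time(nullptr);
    return { now, sha1Hex(std::to_string(now) + kSalt) };
}

// On each AJAX request: recompute the hash and (optionally) reject stale timestamps.
bool verifyToken(std::time_t stamp, const std::string& hash, int maxAgeSeconds = 600) {
    if (std::time(nullptr) - stamp > maxAgeSeconds) return false;  // too old
    return hash == sha1Hex(std::to_string(stamp) + kSalt);
}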
You could limit the number of requests to maybe 2 per 10 seconds or so (a real user may put in a name that is taken, modify it a bit, and try again). Kind of like how SO doesn't let you comment more than once every 30 seconds.
If you're really worried about it, you could take one of the methods above and count how many times they have tried within a certain time period; if it goes above a threshold, kick them to another page. A rough sketch of such a counter follows below.
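Something along these lines (C++ sketch; the key, window length, and limit are arbitrary examples, and real code would key this off the IP address or session and persist it appropriately):

#include <string>
#include <unordered_map>
#include <ctime>

// Illustrative fixed-window throttle, keyed by whatever identifies the caller.
struct Window { std::time_t start = 0; int count = 0; };
std::unordered_map<std::string, Window> gWindows;

bool allowRequest(const std::string& caller, int limit = 2, int windowSeconds = 10) {
    std::time_t now = std::time(nullptr);
    Window& w = gWindows[caller];
    if (now - w.start >= windowSeconds) {   // start a new window
        w.start = now;
        w.count = 0;
    }
    return ++w.count <= limit;              // false once the caller exceeds the limit
}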
Validated as in: "This username is already taken"? If you limit the number of requests per second, it should help.
One common way to solve this is simply by adding a delay to the request. When the request is sent to the server, wait 1 (or more) seconds to respond, then respond with the result (whether the name is valid or not).
Adding a time barrier doesn't really affect users who aren't trying to scrape, and you have gotten a 60-requests-per-minute limit for free.
Building on the answer provided by UnkwnTech, which is some pretty solid advice:
You could go a step further and make the client perform some of the calculation to create the return hash. This could just be some simple arithmetic, like subtracting a few numbers, adding the data, and multiplying by 2.
The added arithmetic means an out-of-the-box username-scraping script is unlikely to work, and it forces the client to use up more CPU.
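As a toy illustration of that idea (the specific transform is arbitrary, and in practice the client half would live in the page's JavaScript rather than C++):

#include <cstdint>

// Toy example: the client applies a fixed arithmetic transform to the token it was given,
// and the server checks the echoed value by applying the same transform to its own copy.
std::uint64_t clientTransform(std::uint64_t token) {
    return (token - 7 + 13) * 2;   // "subtract a few numbers, add, multiply by 2"
}

bool serverVerify(std::uint64_t issuedToken, std::uint64_t clientAnswer) {
    return clientAnswer == clientTransform(issuedToken);
}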