We are using WSO2 API Manager 1.9.0 and have encountered an issue with a character length limit on the scope value that can be assigned to tokens generated behind the scenes.
For example, we have 39 scopes added across 60 API endpoints configured in Publisher.
Some of our individual scope names can be as long as 50 characters, for example:
customer-order-authorisation-requests_create
Anyway, after generating a token for a given user, we got an error from WSO2 when we tried to access an API, telling us the access token is invalid for the requested resource. We double-checked the scope values we sent in the token request against those returned in the response, and we can see the matching scope values for the resource in question referenced in the error message.
After some further digging in the WSO2 logs we came across the following:
2017-03-09 10:43:58,845 [-] [pool-46-thread-100] ERROR TokenPersistenceTask Error occurred while persisting access token 60efd3d6b9112d453c451d2965a753e1
org.wso2.carbon.identity.oauth2.IdentityOAuth2Exception: Invalid request
at org.wso2.carbon.identity.oauth2.dao.TokenMgtDAO.storeAccessToken(TokenMgtDAO.java:196)
at org.wso2.carbon.identity.oauth2.dao.TokenMgtDAO.persistAccessToken(TokenMgtDAO.java:229)
at org.wso2.carbon.identity.oauth2.dao.TokenPersistenceTask.run(TokenPersistenceTask.java:56)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: com.mysql.jdbc.MysqlDataTruncation: Data truncation: Data too long for column 'TOKEN_SCOPE' at row 1
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3868)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3806)
at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:2470)
at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2617)
at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2550)
at com.mysql.jdbc.PreparedStatement.executeInternal(PreparedStatement.java:1861)
at com.mysql.jdbc.PreparedStatement.execute(PreparedStatement.java:1192)
at org.wso2.carbon.identity.oauth2.dao.TokenMgtDAO.storeAccessToken(TokenMgtDAO.java:188)
... 5 more
Looking at the database schema for the API Manager, the TOKEN_SCOPE column is in the table IDN_OAUTH2_ACCESS_TOKEN and is defined as follows:
CREATE TABLE IDN_OAUTH2_ACCESS_TOKEN (
ACCESS_TOKEN VARCHAR(255),
REFRESH_TOKEN VARCHAR(255),
CONSUMER_KEY VARCHAR(255),
AUTHZ_USER VARCHAR(100),
USER_TYPE VARCHAR (25),
TIME_CREATED TIMESTAMP DEFAULT 0,
VALIDITY_PERIOD BIGINT,
TOKEN_SCOPE VARCHAR(767),
TOKEN_STATE VARCHAR(25) DEFAULT 'ACTIVE',
TOKEN_STATE_ID VARCHAR (255) DEFAULT 'NONE',
PRIMARY KEY (ACCESS_TOKEN),
FOREIGN KEY (CONSUMER_KEY) REFERENCES IDN_OAUTH_CONSUMER_APPS(CONSUMER_KEY) ON DELETE CASCADE,
CONSTRAINT CON_APP_KEY UNIQUE (CONSUMER_KEY, AUTHZ_USER,USER_TYPE,TOKEN_STATE,TOKEN_STATE_ID,TOKEN_SCOPE)
)ENGINE INNODB;
There is a 767-character limit on the TOKEN_SCOPE field.
So to test this further, we started reducing the number of scopes assigned across our APIs in Publisher until the scope value returned in the token response from the token API was below this limit.
This allowed us to start accessing our APIs again without any errors being thrown.
This is a problem for us, since only around a third of our APIs are protected by scopes so far.
We could of course come up with shorter naming conventions for our scopes, but we don't want to sacrifice the readability of these scope values, since we use them in our application to determine permission sets for logged-in users.
There don't seem to be any limits imposed by WSO2 when creating and assigning the scopes in Publisher.
Should we be using scopes in some other way? A 767 character limit seems rather specific!
Thanks
This is the InnoDB index key prefix limit (767 bytes), which applies here because TOKEN_SCOPE is part of the CON_APP_KEY unique index.
If your MySQL version is >= 5.7.7, you can increase the size of the column directly. If not, you'll have to enable innodb_large_prefix before increasing it.
See https://dev.mysql.com/doc/refman/5.7/en/innodb-parameters.html#sysvar_innodb_large_prefix
Related
So I initially had a foreign key tutor_id as type string, so I ran the following migration:
change_column(:profiles, :tutor_id, 'integer USING CAST(tutor_id AS integer)')
The problem is that there was already data created with tutor_id as a string. I did read, however, that by using CAST the existing data should be converted into integers.
Just to confirm, I went into heroku run rails console to check the tutor_id of the profiles, and tutor_id.is_a? Integer returns true.
However, I am currently getting this error:
ActionView::Template::Error (PG::UndefinedFunction: ERROR: operator does not exist: integer = text at character 66
Why is that so? Is the only way out to delete the data and recreate it?
(I'm assuming the information provided above is enough to draw a conclusion; otherwise I will add the relevant information too.)
You also have to update your code to use integers rather than strings. This error happens because your code somewhere still treats the column as a string, so the query is sent with the value quoted as text (e.g. '123'). PostgreSQL doesn't do implicit type conversions between integer and text, so it's telling you it can't perform the comparison.
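A minimal sketch of the kind of change being suggested (Profile and tutor_id come from the question above; everything else is illustrative):

# Convert the value to an integer before it reaches the query,
# e.g. where it arrives as a string from params:
tutor_id = Integer(params[:tutor_id])   # raises ArgumentError if it isn't numeric
profiles = Profile.where(tutor_id: tutor_id)

# Also check any raw SQL or joins that still compare the column against a text
# value or column, e.g. a leftover condition like "profiles.tutor_id = tutors.id::text".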
I'm using Rails with Devise and it is connecting to a MS SQL Server for the DB. This is all fine and working OK.
When I try to register a second person, I get the following error:
ActiveRecord::RecordNotUnique in Devise::RegistrationsController#create
TinyTds::Error: Cannot insert duplicate key row in object 'dbo.users' with
unique index 'index_users_on_reset_password_token'. The duplicate key value
is (<NULL>).: EXEC sp_executesql N'INSERT INTO [users] ([email],
[encrypted_password], [created_at], [updated_at]) OUTPUT INSERTED.[id]
VALUES (@0, @1, @2, @3)', N'@0 nvarchar(4000), @1 nvarchar(4000), @2
datetime, @3 datetime', @0 = N'ben_mcmaster@outlook.com', @1 =
N'$2a$10$TK79.NSrjZaT93TiQphqB.M6XfBUlaGFmAqJUGgssdGggR4OB.7oC', @2 = '05-
09-2016 06:40:34.448', @3 = '05-09-2016 06:40:34.448'
What I'm mainly looking at is that it is inserting the new user with a NULL reset password token, and a NULL value already exists (on the first user), so the unique index rejects it.
In my app, password reset isn't really needed, since I can handle all that myself and there are only a couple of people.
Am I able to:
Get the app to write actual unique reset tokens
Bypass it
From what I understand you are okay with not having the whole 'Forgot Password' functionality in your application.
As described in this question, you need to remove the :recoverable option from your Devise model; I think that should solve your problem.
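For illustration, this is the kind of change being described; the exact module list depends on what your User model currently declares:

class User < ActiveRecord::Base
  # Before:
  # devise :database_authenticatable, :registerable, :recoverable,
  #        :rememberable, :trackable, :validatable
  #
  # After - :recoverable removed, so Devise no longer manages reset_password_token:
  devise :database_authenticatable, :registerable,
         :rememberable, :trackable, :validatable
end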
I think the problem is SQL Server.
On the one hand, Devise adds a unique index on user.reset_password_token by default. This allows users to use a reset password link with a one-time token to reset their password. Once that password has been reset, the field "reset_password_token" is reset to null.
On the other hand, the SQL standard says a UNIQUE constraint should disallow duplicate non-NULL values but allow multiple NULL values. SQL Server has always implemented a crippled version of this, allowing a single NULL but disallowing multiple NULLs.
In summary, if I were you, I would relaunch the migration in this way:
https://gist.github.com/andyhull/8045794202fd52930f93
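In case that link goes stale, here is a rough sketch of this kind of migration (not necessarily what the gist contains): recreate the unique index as a SQL Server filtered index so that multiple NULLs are allowed.

class FixResetPasswordTokenIndex < ActiveRecord::Migration
  def up
    # Drop the plain unique index that the Devise migration created...
    remove_index :users, name: "index_users_on_reset_password_token"
    # ...and recreate it as a filtered index that ignores NULL values.
    execute <<-SQL
      CREATE UNIQUE INDEX index_users_on_reset_password_token
      ON users (reset_password_token)
      WHERE reset_password_token IS NOT NULL
    SQL
  end

  def down
    remove_index :users, name: "index_users_on_reset_password_token"
    add_index :users, :reset_password_token, unique: true
  end
end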
I hope it's useful for you.
Where does Rails generate the session id that it gives a user in a cookie? How does it verify the session id presented in a cookie? Does it save the session id, or does it use some hash function based on secret_token?
According to the Rails guide:
The session id is a 32 byte long MD5 hash value.
A session id consists of the hash value of a random string. The random string is the current time, a random number between 0 and 1, the process id number of the Ruby interpreter (also basically a random number) and a constant string. Currently it is not feasible to brute-force Rails' session ids. To date MD5 is uncompromised, but there have been collisions, so it is theoretically possible to create another input text with the same hash value. But this has had no security impact to date.
I found no links to the code that does this. I searched for uses of rand and srand, MD5 and such but found nothing useful. The closest I found was in actionpack/lib/action_dispatch/middleware/session/abstract_store.rb which does the following:
def generate_sid
  sid = SecureRandom.hex(16)
  sid.encode!(Encoding::UTF_8)
  sid
end
This matches up with the format of session id I find in the session cookie, but not with the documentation in the guide. This also doesn't explain how sessions are verified.
According to this blog, session ids are not stored or validated on the server side, but then how does Rails distinguish a valid session id from an invalid one?
Does someone know where the code that does this is, or a good guide for this? Thanks!
Other References:
Rails 3 ActiveRecordStore session_id tampering
You are correct, generate_sid is in charge of creating a session ID for new sessions.
When a session is first created, it generates a session id, sets it in your cookie, and caches it in CacheStore. Once you make a request with a cookie, it re-builds the CookieStore and checks to make sure that the session id exists in the cache. If it exists, then it trusts the session. If it does not exist, then it does not trust the session. Since the session id is a 32-byte-long value, it would be very difficult to guess an active session id that is in the cache.
It turns out that SecureRandom.hex calls SecureRandom.random_bytes, which is what the paragraph from the Rails guide is describing. It may have been better for the guide to reference the SecureRandom function in use, as newer versions may change this algorithm.
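For reference, a quick check of the output format (a tiny sketch, nothing Rails-specific assumed):

require 'securerandom'

sid = SecureRandom.hex(16)   # 16 random bytes rendered as a hex string
sid.length                   # => 32, matching the session id format seen in the cookie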
Notice the use of current time, pid and so forth.
def self.random_bytes(n=nil)
  n = n ? n.to_int : 16
  if defined? OpenSSL::Random
    @pid = 0 if !defined?(@pid)
    pid = $$
    if @pid != pid
      now = Time.now
      ary = [now.to_i, now.nsec, @pid, pid]
      OpenSSL::Random.seed(ary.to_s)
      @pid = pid
    end
    return OpenSSL::Random.random_bytes(n)
  end
  ...
end
https://github.com/ruby/ruby/blob/v1_9_3_547/lib/securerandom.rb#L56
I've fought for a few hours now to store a string in a database column in Rails.
I had to rename authorization to transaction so that Rails would store the value.
Why does Rails interfere while saving the value?
Example:
# Works
self.update_attribute(:transaction, result) rescue nil
# Does not work
self.update_attribute(:authorization, result) rescue nil
What is your underlying database? It might have "authorization" as a reserved word.
Look at the generated SQL and run it directly against your DB. If it runs without problems, then my assumption is invalid.
Both MySQL and SQL Server use authorization as a reserved word.
So you'll just need to use a different word.
You could also use something close, like 'authorized' or 'auth'.
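If you go that route in Rails, a rename migration along these lines would do it (the table name payments is just an assumption for illustration):

class RenameAuthorizationColumn < ActiveRecord::Migration
  def change
    # Move away from the problematic column name entirely.
    rename_column :payments, :authorization, :authorization_code
  end
end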
Maybe try prefixing the column with the table name? For example:
UPDATE my_table
SET my_table.authorization = 'new authorization'
WHERE id = 5
Just looking for some advice.
I have a website with around 2500 users - small but growing.
I built it using SHA1 encryption on the passwords.
I've since read that SHA1 is insecure and would like to change to, say, SHA256 with a salt.
Does anyone have any advice on how to make a transition like this?
It would be great if I could decrypt the passwords and just re-hash them, but that doesn't appear to be doable.
thx
Adam
The usual way of going about this is:
Make the hashed-password column larger to accommodate a sha256 hash, and add a 'salt' column
Set the salt field to NULL initially, and adjust your password-check code so that a NULL salt means sha1 and a non-NULL salt means sha256 (see the sketch below)
Once a sha1 user has logged in successfully, re-hash the password to sha256 with salt, and update the database.
Over time, users will migrate to sha256 by themselves; the only problem is users who log in only very sporadically or not at all. For these, you may want to send a reminder e-mail, or even threaten to shut their account down if they don't log in before day X (don't give the actual reason, though...)
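A rough sketch of that check-and-upgrade step (the column names password_hash and salt, and the shape of the user record, are assumptions):

require 'digest'
require 'securerandom'

def authenticate(user, password)
  if user.salt.nil?
    # Legacy row: unsalted SHA1.
    return false unless Digest::SHA1.hexdigest(password) == user.password_hash
    # Password is correct, so upgrade the row to salted SHA256 on the spot.
    new_salt = SecureRandom.hex(16)
    user.update(salt: new_salt,
                password_hash: Digest::SHA256.hexdigest(new_salt + password))
  else
    return false unless Digest::SHA256.hexdigest(user.salt + password) == user.password_hash
  end
  true
end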
Just to clarify, SHA is a hashing algorithm, which is (generally) a one way street. You can't decrypt hashes, which is kind of the strength of using them to authenticate passwords. You're on the right track with moving to a salted hash, and here's how I would do it.
The only way you're getting passwords is to let users type them in themselves. As users visit your site and log in, update the passwords one by one. In your authentication method, I would perform the hash you're doing now and compare it against what's in the existing field (nothing new here). Assuming it matches, go ahead and salt/re-hash using SHA256 and update the password field in the database. If you want, keep a bit in your user table tracking which users have been updated.
I'm making a lot of assumptions, but this is how I've solved the hash algorithm dance in the past. Good luck!
I have another suggestion: migrate your password hashes from SHA1 to SHA256 immediately, without waiting for users to visit the site again to rehash their passwords. The change is a one-time migration of the stored hashes plus a change to your logon validation function.
Suppose your password hashes are generated using the function: password + salt [SHA1]-> hash-sha1
To migrate to SHA256, you can convert your stored hashes using the following algorithm:
hash-sha1 + salt [SHA256]-> hash-sha256 (the salt is used to increase the complexity of the input)
Depending on what input your SHA256 function accepts, you may want to encode hash-sha1 as Base64 so that it is printable ASCII.
For your logon validation function, the password should then be hashed using the following algorithm:
password + salt [SHA1]-> hash-sha1 + salt [SHA256]-> hash-sha256
The disadvantage is that the password is hashed twice (using some CPU time), but it simplifies the migration and gives better security.
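A rough sketch of this scheme (assuming the legacy hashes are stored as SHA1 hex digests; all names are illustrative):

require 'digest'

# One-time migration step: wrap each stored SHA1 hash in a salted SHA256.
def wrap_legacy_hash(sha1_hex, salt)
  Digest::SHA256.hexdigest(sha1_hex + salt)
end

# Logon check afterwards: SHA1 the submitted password first,
# then apply the same SHA256 wrapping and compare.
def password_valid?(password, salt, stored_hash)
  sha1_hex = Digest::SHA1.hexdigest(password + salt)
  wrap_legacy_hash(sha1_hex, salt) == stored_hash
end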
Switching to SHA256 will hardly make your website more secure.
SHA1 and SHA256 are message digests; they were never meant to be password-hashing (or key-derivation) functions. (Although a message digest can be used as a building block for a KDF, such as in PBKDF2 with HMAC-SHA1.)
A password-hashing function should defend against dictionary attacks and rainbow tables.
Currently, the only standard (as in sanctioned by NIST) password-hashing or key-derivation function is PBKDF2. Better choices, if using a standard is not required, are bcrypt and the newer scrypt. Wikipedia has pages for all three functions:
https://en.wikipedia.org/wiki/PBKDF2
https://en.wikipedia.org/wiki/Bcrypt
https://en.wikipedia.org/wiki/Scrypt
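For illustration, using bcrypt from Ruby via the bcrypt gem looks roughly like this (a sketch, not tied to any particular stack):

require 'bcrypt'   # gem install bcrypt

# Hashing: the salt and the work factor are embedded in the stored string.
stored = BCrypt::Password.create("correct horse battery staple", cost: 12)

# Verifying later:
BCrypt::Password.new(stored) == "correct horse battery staple"   # => true
BCrypt::Password.new(stored) == "wrong guess"                    # => false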
The page at https://crackstation.net/hashing-security.htm contains an extensive discussion of password security.
This being said, tdhammers offers good advice regarding how to handle the migration.