I'm struggling with BCryptPasswordEncoder and a Groovy contract
I have a producer and a consumer service. The producer sends a message with user registration information (i.e. user and password, the password encoded using BCryptPasswordEncoder(12)). The test creates the message with the password encrypted, which is fine, but my questions are:
1. Is it possible to encrypt the password in the contract?
2. If so, how?
3. If it is possible, how can I set BCryptPasswordEncoder(12)?
At the moment, when running the tests (mvn clean install) on the producer, the verification fails because the encrypted password does not match the plain password defined in the contract.
Thanks!
UPDATE
I have uploaded a sample to GitHub: https://github.com/dssantana/user-registration
If you run mvn clean install, you will find that at a certain point one of the tests fails with an error similar to:
2017-12-18 11:55:36.056  INFO [user-client,,,] 5236 --- [ main] .e.u.c.UserAccountRegistrationController : UserAccountRegistrationController - UserAccountRegister: AccountRegistration(firstName=Diego, lastName=Santana, email=dssantana@gmail.com, mobileNumber=0452621048, ipAddress=127.0.0.1, birthday=1979-10-16, password=$2a$12$jm3YACnf72P3wKCmPLRXwufeXJx5lzibwLz3DzhCXft.XKW2bK1RC)

[ERROR] Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 6.935 s <<< FAILURE! - in au.net.example.userclient.ContractVerifierTest
[ERROR] validate_shouldSendAnAccountRegistrationMessageWithSpecialCharactersUsername(au.net.example.userclient.ContractVerifierTest) Time elapsed: 0.426 s <<< ERROR!
java.lang.IllegalStateException: Parsed JSON [{"firstName":"Joe","lastName":"Doe","email":"joe.doe+test@gmail.com","mobileNumber":"0452621048","ipAddress":"127.0.0.1","birthday":"1979-10-16","password":"$2a$12$fZcEe6fUzmjHmItvsJ8MCOCOR.mnc2nbDqh/Ce1aYzUBRq5L8ywRm"}] doesn't match the JSON path [$[?(@.['password'] == 'Test01')]]
    at au.net.example.userclient.ContractVerifierTest.validate_shouldSendAnAccountRegistrationMessageWithSpecialCharactersUsername(ContractVerifierTest.java:49)
[ERROR] validate_shouldSendAnAccountRegistrationMessage(au.net.example.userclient.ContractVerifierTest) Time elapsed: 0.323 s <<< ERROR!
java.lang.IllegalStateException: Parsed JSON [{"firstName":"Diego","lastName":"Santana","email":"dssantana@gmail.com","mobileNumber":"0452621048","ipAddress":"127.0.0.1","birthday":"1979-10-16","password":"$2a$12$jm3YACnf72P3wKCmPLRXwufeXJx5lzibwLz3DzhCXft.XKW2bK1RC"}] doesn't match the JSON path [$[?(@.['password'] == 'Test01')]]
    at au.net.example.userclient.ContractVerifierTest.validate_shouldSendAnAccountRegistrationMessage(ContractVerifierTest.java:33)
The encrypted password is the BCrypt hash of Test01 and should match the plain password in the contract test; however, I'm not sure how to verify this, since the encryption is one-way and the only way to verify would be to encode the value and match the contract data against the test data.
As shown in https://github.com/dssantana/user-registration/pull/1/files, what was added to the initial contract is $(consumer("fixed value"), producer(regex(nonBlank()))) for the password field, so that the test generated on the producer side only checks that the password has some non-blank value.
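In context, the relevant part of a messaging contract then looks roughly like the sketch below; the label, trigger method and destination names are assumptions, and only the password line is taken from the pull request:

org.springframework.cloud.contract.spec.Contract.make {
    label 'shouldSendAnAccountRegistrationMessage'
    input {
        // assumed name of the method that triggers the outgoing message
        triggeredBy('registerAccount()')
    }
    outputMessage {
        sentTo 'user-registration'   // assumed destination
        body(
            firstName: 'Diego',
            lastName: 'Santana',
            // the consumer stub gets a fixed sample value, while the test
            // generated on the producer side only asserts that the
            // (BCrypt-encoded) password is some non-blank string, so the
            // hash never has to equal a plain-text value
            password: $(consumer("fixed value"), producer(regex(nonBlank())))
        )
    }
}

So the plain password is never reproduced in the contract; the producer-side check is relaxed to a pattern instead of an exact value.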
Related
When using our SAM configuration on our local machine, the logs are stored via the following command:
sam local start-api --env-vars env.json --log-file logs.txt
This stores logs in the following format:
2022-07-20T00:11:04.600Z ZKY6ca3d-8004-4098-9a1d-a6e78134284e INFO Creating new token.
2022-07-20T00:11:04.812Z 97F6ca3d-8004-4098-9a1d-a6e78134284e INFO Token Authorised
This shows the timestamp, the Lambda execution's unique ID, the log type and the message.
We are looking to integrate Winston into our codebase. However, when we log using Winston, the Lambda ID is not printed.
When logging through Winston, I can add the timestamp and the log type, but I am unsure how to include the Lambda execution ID as part of the Winston log output.
Is there any way around this?
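For context, here is a minimal sketch (not our actual code) of one way the execution ID could reach Winston: the handler reads context.awsRequestId and attaches it to the logger as default metadata. The format and option values below are illustrative assumptions.

// Sketch: attach the Lambda execution ID to every Winston log line
const winston = require("winston");

const logger = winston.createLogger({
  level: "info",
  format: winston.format.combine(
    winston.format.timestamp(),
    // Print: timestamp, execution ID, level, message (roughly matching the SAM log layout)
    winston.format.printf(({ timestamp, level, message, awsRequestId }) =>
      `${timestamp}\t${awsRequestId || "-"}\t${level.toUpperCase()}\t${message}`
    )
  ),
  transports: [new winston.transports.Console()],
});

exports.handler = async (event, context) => {
  // context.awsRequestId is the same execution ID that appears in the SAM logs
  logger.defaultMeta = { awsRequestId: context.awsRequestId };
  logger.info("Creating new token.");
  // ...
};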
I'm trying to send the same logs to multiple Kinesis Firehose streams on multiple AWS accounts via Fluent Bit v1.8.12. How can I use role_arn in the kinesis_firehose OUTPUT property correctly? I'm able to send to Firehose A but not Firehose B. Also, role A on AWS account A can assume role B on AWS account B.
This is the Fluent Bit OUTPUT configuration I'm trying to use:
[OUTPUT]
    Name             kinesis_firehose
    Match            aaa
    region           eu-west-1
    delivery_stream  a
    time_key         time
    role_arn         arn:aws:iam::11111111111:role/role-a

# THIS ONE DOES NOT WORK
[OUTPUT]
    Name             kinesis_firehose
    Match            bbb
    region           eu-west-1
    delivery_stream  b
    time_key         time
    role_arn         arn:aws:iam::22222222222:role/role-b
The Fluent Bit pod logs say:
[2022/06/21 15:03:12] [error] [aws_credentials] STS assume role request failed
[2022/06/21 15:03:12] [ warn] [aws_credentials] No cached credentials are available and a credential refresh is already in progress. The current co-routine will retry.
[2022/06/21 15:03:12] [error] [signv4] Provider returned no credentials, service=firehose
[2022/06/21 15:03:12] [error] [aws_client] could not sign request
[2022/06/21 15:03:12] [error] [output:kinesis_firehose:kinesis_firehose.1] Failed to send log records to b
[2022/06/21 15:03:12] [error] [output:kinesis_firehose:kinesis_firehose.1] Failed to send log records
[2022/06/21 15:03:12] [error] [output:kinesis_firehose:kinesis_firehose.1] Failed to send records
The problem was that I didn't know which role the fluent-bit pod was assuming. Enabling fluent-bit debug logs helped me.
It turned out that fluent-bit assumes a particular role x that includes many EKS policies. I added to this role x a policy that lets it assume both role a (which can write to Kinesis in AWS account A) and role b (which can write to Kinesis in AWS account B). No changes were made to the Fluent Bit configuration.
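For illustration, a minimal sketch of that extra inline policy attached to role x; the account IDs and role names are the ones from the config above, the Sid is made up, and it is assumed (as in the setup described) that the trust policies on role-a and role-b already allow role x as a principal:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowAssumeFirehoseDeliveryRoles",
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Resource": [
        "arn:aws:iam::11111111111:role/role-a",
        "arn:aws:iam::22222222222:role/role-b"
      ]
    }
  ]
}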
Hi, I have a subscription set up in Google Pub/Sub and I am trying to pull messages asynchronously using the "official" google-cloud-ruby library. Here is my code, which is executed from a rake task that passes in subscription_name:
def pull!
  creds = Google::Cloud::PubSub::Credentials.new(
    GCP_CREDENTIALS_KEYFILE_PATH,
    scope: "https://www.googleapis.com/auth/pubsub"
  )
  messages = []
  pubsub = Google::Cloud::PubSub.new(
    project_id: GOOGLE_PROJECT_ID,
    credentials: creds
  )
  subscription = pubsub.subscription(subscription_name)
  subscription.pull(immediate: true).each do |received_message|
    puts "Received message: #{received_message.data}"
    received_message.acknowledge!
    messages.push(received_message)
  end
  # Return the collected messages
  messages
rescue => error
  Rails.logger.error(error)
  messages.presence
end
The Google::Cloud::PubSub::Credentials part references a working keyfile. I know the JSON keyfile is good, since I can use it to generate a working Bearer token using oauth2l and pull from Pub/Sub using cURL, Postman, Net::HTTP, etc. Using the same JSON credentials with a separate Google::Cloud::Storage service also works fine.
But for some reason, with Google::Cloud::PubSub it just hangs and won't respond. After about 60 seconds I get the following error:
GRPC::DeadlineExceeded: 4:Deadline Exceeded. debug_error_string:{"created":"#1602610740.445195000","description":"Deadline Exceeded","file":"src/core/ext/filters/deadline/deadline_filter.cc","file_line":69,"grpc_status":4}
/Users/bbulpet/.rbenv/versions/2.6.5/lib/ruby/gems/2.6.0/gems/grpc-1.32.0-universal-darwin/src/ruby/lib/grpc/generic/active_call.rb:29:in `check_status'
/Users/bbulpet/.rbenv/versions/2.6.5/lib/ruby/gems/2.6.0/gems/grpc-1.32.0-universal-darwin/src/ruby/lib/grpc/generic/active_call.rb:180:in `attach_status_results_and_complete_call'
/Users/bbulpet/.rbenv/versions/2.6.5/lib/ruby/gems/2.6.0/gems/grpc-1.32.0-universal-darwin/src/ruby/lib/grpc/generic/active_call.rb:376:in `request_response'
/Users/bbulpet/.rbenv/versions/2.6.5/lib/ruby/gems/2.6.0/gems/grpc-1.32.0-universal-darwin/src/ruby/lib/grpc/generic/client_stub.rb:172:in `block (2 levels) in request_response'
/Users/bbulpet/.rbenv/versions/2.6.5/lib/ruby/gems/2.6.0/gems/grpc-1.32.0-universal-darwin/src/ruby/lib/grpc/generic/interceptors.rb:170:in `intercept!'
/Users/bbulpet/.rbenv/versions/2.6.5/lib/ruby/gems/2.6.0/gems/grpc-1.32.0-universal-darwin/src/ruby/lib/grpc/generic/client_stub.rb:171:in `block in request_response'
/Users/bbulpet/.rbenv/versions/2.6.5/lib/ruby/gems/2.6.0/gems/gapic-common-0.3.4/lib/gapic/grpc/service_stub/rpc_call.rb:121:in `call'
/Users/bbulpet/.rbenv/versions/2.6.5/lib/ruby/gems/2.6.0/gems/gapic-common-0.3.4/lib/gapic/grpc/service_stub.rb:156:in `call_rpc'
/Users/bbulpet/.rbenv/versions/2.6.5/lib/ruby/gems/2.6.0/gems/google-cloud-pubsub-v1-0.1.2/lib/google/cloud/pubsub/v1/subscriber/client.rb:503:in `get_subscription'
/Users/bbulpet/.rbenv/versions/2.6.5/lib/ruby/gems/2.6.0/gems/google-cloud-pubsub-2.1.0/lib/google/cloud/pubsub/service.rb:154:in `get_subscription'
/Users/bbulpet/.rbenv/versions/2.6.5/lib/ruby/gems/2.6.0/gems/google-cloud-pubsub-2.1.0/lib/google/cloud/pubsub/project.rb:286:in `subscription'
Adding a debugger shows that the following line is what hangs and results in an error:
subscription = pubsub.subscription(subscription_name)
I've tried everything I can think of based on the documentation. I updated all related gems and even tried deprecated syntax out of desperation. If anyone has at least an idea of where to start, that would be most appreciated, thanks!
UPDATE
So it turns out that, for some reason, running this locally was not able to connect, but when shipped to a deployed environment the above code connected flawlessly to Pub/Sub and was able to pull and ack messages. Furthermore, after making the initial connection in the deployed environment, I am now able to connect locally as well using the same credentials.
Link to the GitHub issue conversation for context around the troubleshooting process and quartzmo's suggestion to try deploying to another environment.
Are you still having this issue? I just tried to reproduce it using Ruby 2.6.5p114, google-cloud-pubsub 2.1.0 and grpc 1.32.0 (same versions as you), but I cannot reproduce it. Here is my code (slightly modified to run in a Minitest spec context) for comparison:
GCP_CREDENTIALS_KEYFILE_PATH = "/Users/quartzmo/my-project.json"
GOOGLE_PROJECT_ID = "my-project-id"

def pull! topic_name, subscription_name
  creds = Google::Cloud::PubSub::Credentials.new(
    GCP_CREDENTIALS_KEYFILE_PATH,
    scope: "https://www.googleapis.com/auth/pubsub"
  )
  messages = []
  pubsub = Google::Cloud::PubSub.new(
    project_id: GOOGLE_PROJECT_ID,
    credentials: creds
  )
  topic = pubsub.create_topic topic_name
  topic.subscribe subscription_name
  topic.publish "A test message from #{topic_name} to #{subscription_name}"
  subscription = pubsub.subscription(subscription_name)
  subscription.pull(immediate: true).each do |received_message|
    puts "Received message: #{received_message.data}"
    received_message.acknowledge!
    messages.push(received_message)
  end
  # Return the collected messages
  messages
end

focus
it "pull!" do
  topic_name = random_topic_name
  subscription_name = random_subscription_name
  messages = pull! topic_name, subscription_name
  assert_equal 1, messages.count
  assert_equal "A test message from #{topic_name} to #{subscription_name}", messages[0].data
end
And this is the output:
% bundle exec rake test
Run options: --junit --junit-filename=sponge_log.xml --seed 30984
# Running:
Received message: A test message from ruby-pubsub-samples-test-topic-7cb10bde to ruby-pubsub-samples-test-subscription-f47f2eaa
.
Finished in 6.529219s, 0.1532 runs/s, 0.3063 assertions/s.
1 runs, 2 assertions, 0 failures, 0 errors, 0 skips
Update (2020-10-20): This issue was resolved when executing the code in a different environment, although the reason is unknown. See comment on GitHub issue.
I wanted to try the password-notification feature of WSO2 IS 4.6, but it throws an exception.
I followed those links:
https://docs.wso2.org/display/IS460/Recover+with+Notification
http://cgchamath.blogspot.mx/2013/12/wso2-identity-server-user-creation-with.html
This is the error I am getting; here is the stack trace:
Caused by: org.wso2.carbon.identity.base.IdentityException: Error while persisting identity user data in to user store
    at org.wso2.carbon.identity.mgt.store.UserStoreBasedIdentityDataStore.store(UserStoreBasedIdentityDataStore.java:81)
    at org.wso2.carbon.identity.mgt.IdentityMgtEventListener.doPostAddUser(IdentityMgtEventListener.java:420)
    ... 124 more
Caused by: org.wso2.carbon.user.core.UserStoreException: One or more attributes you are trying to add/update are not supported by underlying LDAP.
    at org.wso2.carbon.user.core.ldap.ReadWriteLDAPUserStoreManager.doSetUserClaimValues(ReadWriteLDAPUserStoreManager.java:874)
    at org.wso2.carbon.identity.mgt.store.UserStoreBasedIdentityDataStore.store(UserStoreBasedIdentityDataStore.java:73)
    ... 125 more
Caused by: javax.naming.directory.NoSuchAttributeException: [LDAP: error code 16 - NO_SUCH_ATTRIBUTE: failed for Modify Request
    Object : 'uid=testUser,ou=Users,dc=wso2,dc=org'
    Modification[0]
        Operation : replace
        Modification
            http://wso2.org/claims/identity/passwordTimestamp: 1398394865706
    Modification[1]
        Operation : replace
        Modification
            initials: false
: ERR_04269 ATTRIBUTE_TYPE for OID http://wso2.org/claims/identity/passwordtimestamp does not exist!]; remaining name 'uid=testUser'
    at com.sun.jndi.ldap.LdapCtx.mapErrorCode(LdapCtx.java:3108)
    at com.sun.jndi.ldap.LdapCtx.processReturnCode(LdapCtx.java:3033)
    at com.sun.jndi.ldap.LdapCtx.processReturnCode(LdapCtx.java:2840)
    at com.sun.jndi.ldap.LdapCtx.c_modifyAttributes(LdapCtx.java:1411)
    at com.sun.jndi.toolkit.ctx.ComponentDirContext.p_modifyAttributes(ComponentDirContext.java:253)
    at com.sun.jndi.toolkit.ctx.PartialCompositeDirContext.modifyAttributes(PartialCompositeDirContext.java:165)
    at com.sun.jndi.toolkit.ctx.PartialCompositeDirContext.modifyAttributes(PartialCompositeDirContext.java:154)
    at org.wso2.carbon.user.core.ldap.ReadWriteLDAPUserStoreManager.doSetUserClaimValues(ReadWriteLDAPUserStoreManager.java:859)
    ... 126 more
I can imagine that the application generates a timestamp for the expiration of the password and tries to save it in an LDAP attribute that is mapped by the combination:
http://wso2.org/claims/identity/passwordtimestamp -> nickName
This mapping is obviously wrong.
How can I force an adequate mapping so that the process saves the user correctly (and hopefully sends the email after that ...)?
Thanks in advance.
The first problem along the way is solved:
I had to re-add the claim for the password timestamp, but with the correct URI:
http://wso2.org/claims/identity/passwordTimestamp
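For reference, a rough sketch of how such a claim can be declared (e.g. in repository/conf/claim-config.xml, or via the management console with the same fields); the display name, description and the LDAP attribute it maps to are placeholders you would adapt to your directory schema, the important part is the camel-cased ClaimURI:

<Claim>
    <!-- note the camel-cased "passwordTimestamp", which is the fix described above -->
    <ClaimURI>http://wso2.org/claims/identity/passwordTimestamp</ClaimURI>
    <DisplayName>Password Timestamp</DisplayName>
    <!-- placeholder: map to an attribute that actually exists in your LDAP schema -->
    <AttributeID>passwordTimestamp</AttributeID>
    <Description>Timestamp of the last password change</Description>
</Claim>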
Also helpful was:
https://wso2.org/jira/browse/IDENTITY-1200
The LDAP error is fixed, but it is still not sending the email. That is another issue, though.
I have a Rails application running on Heroku and I am connecting to two DBs hosted on MongoLab (X and Y).
I have configured two Heroku env variables containing the connection strings.
When I query Y everything works fine, but when I query the X db it gives me error 16550: "not authorized for query on X.table".
I have set up both env variables for these connections correctly and I also have a valid user to access the X db.
If I connect with the shell, everything works fine.
How can I solve this?
Here is the error message in Rails:
{"status":"500",
"error":"The operation: #<Moped::Protocol::Query\n #length=88\n #request_id=4\n #response_to=0\n
#op_code=2004\n #flags=[:slave_ok]\n
#full_collection_name=\"X.table\"\n
#skip=0\n #limit=0\n
#selector={\"_id\"=>\"5252c92521e4af681a000002\"}\n
#fields=nil>\n
failed with error 16550: \"not authorized for query on X.table\"\n\n
See https://github.com/mongodb/mongo/blob/master/docs/errors.md\nfor details about this error."}
I solved this. If someone comes here with the same problem: look at your model; if, as in my case, it is stored in (store_in) another database, you must specify there the session whose URI comes from the env variable in your database configuration (mongoid.yml for a Mongoid app).
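A minimal sketch of what that looks like with Mongoid 3 (Moped era); the model name, session name and env variable names are illustrative assumptions:

# app/models/record.rb -- hypothetical model backed by the X database
class Record
  include Mongoid::Document

  # Point this model at the secondary session; "x_db" is an assumed session
  # name defined in config/mongoid.yml below.
  store_in session: "x_db", collection: "table"
end

# config/mongoid.yml -- sessions picking up the Heroku env variables
production:
  sessions:
    default:
      uri: <%= ENV['MONGOLAB_Y_URI'] %>   # assumed env var name for Y
    x_db:
      uri: <%= ENV['MONGOLAB_X_URI'] %>   # assumed env var name for X

Without the explicit session on the model, queries go through the default session, which explains the "not authorized" error when the authenticated user belongs to the other database.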