Masonite throws Invalid secret key error, even after the secret key is created - pipenv

I'm trying to set a session value with request.session.set('request_token', oauth.request_token) and it is throwing
InvalidSecretKey > You have passed an invalid secret key of:
your-secret-key. Make sure you have correctly added your secret key.
I ran craft key --store to create a secret key and store it.
I'm using Masonite (masonite==2.0.20) with pipenv for package management.

Simply deactivating the virtual environment and activating it again solved the problem.
It looks like pipenv cached the .env variables.
$ deactivate
$ pipenv shell
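To double-check that the fresh key is what the shell actually sees, you can compare the value in .env with the exported variable after re-entering the shell. A quick sanity check, assuming craft key --store wrote the key under KEY in .env (the Masonite 2 default):
$ grep ^KEY .env   # key written by `craft key --store`
$ echo $KEY        # inside the new pipenv shell; should match the line above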

Related

Pass variable name in Jenkins Vault secrets path

I am not able to pass ${environment} in the vault secret path for reading the values.
Maybe the secret is getting initialized before the variables are set.
Kindly help as I'm not able to read environment-specific values from the same vault repo.
It worked pretty nicely for me using a choice parameter in a parameterized build. I think your issue is in the Vault path you used (vault/secret/$environment). I think the correct path in your case is just "secret/$environment". Does your secret engine actually start with "vault"?
Just FYI, if you define the variable in "Jenkins > Manage Jenkins > Configure System > Environment variables" it'll work too.
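If you want to rule Jenkins out entirely, the Vault CLI makes the mount-versus-path distinction easy to check. A quick sketch, assuming a KV secrets engine mounted at secret/ and an environment value of dev:
$ vault kv get secret/dev          # works if the engine is mounted at "secret/"
$ vault kv get vault/secret/dev    # only works if an engine is actually mounted at "vault/"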

ActiveSupport::EncryptedFile::MissingKeyError: Missing encryption key to decrypt file with. Docker

I'm trying to deploy a Rails 7 app to Fly.io, which uses Docker to deploy apps. I keep getting the error below when I try to deploy.
ActiveSupport::EncryptedFile::MissingKeyError: Missing encryption key to decrypt file with. Ask your team for your master key and write it to /app/config/credentials/production.key or put it in the ENV['RAILS_MASTER_KEY'].
I've tried putting the following into my Dockerfile:
RUN --mount=type=secret,id=RAILS_MASTER_KEY \
RAILS_MASTER_KEY="$(cat /run/secrets/RAILS_MASTER_KEY)"
Then running:
fly deploy \
--build-secret RAILS_MASTER_KEY=the_actual_secret_key_here
That doesn't work. I've added the key as an environment variable on Fly.io, but my understanding is that this is failing because production keys aren't available at build time. Anyway, I'm stumped. Any ideas?
I'm new to Docker, so it's likely I'm just missing something simple here.
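For what it's worth, BuildKit secrets are only visible inside the RUN instruction that mounts them, so the variable has to be set and consumed in the same step. A minimal sketch, assuming it's the asset precompile that needs the key at build time:
# Dockerfile (sketch) - read the secret and use it in the same RUN step
RUN --mount=type=secret,id=RAILS_MASTER_KEY \
    RAILS_MASTER_KEY="$(cat /run/secrets/RAILS_MASTER_KEY)" \
    ./bin/rails assets:precompile
This would be paired with the same fly deploy --build-secret RAILS_MASTER_KEY=... invocation shown above.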

Connecting to different ARN/Role/Amazon Account when trying to deploy

I previously had Serverless installed on a server, and when I tried to edit the function and package it back up to edit the zip file, I broke it, so I have to start all over. So, to begin with the issue: I had Serverless running and was using it with this package - https://github.com/adieuadieu/serverless-chrome/tree/master/examples/serverless-framework/aws
When I run sudo npm run deploy, I get the ServerlessError:
ServerlessError: User: arn:aws:sts::XXX:assumed-role/EC2CodeDeploy/i-268b1acf is not authorized to perform: cloudformation:DescribeStackResources on resource: arn:aws:cloudformation:us-east-1:YYY:stack/aws-dev/*
I'm not sure why it is trying to connect to a Role and not an IAM user. So I check the Role, and it is in an entirely different AWS account than the account I've configured. Let's call this Account B.
When it comes to configuration, I've installed the AWS CLI and entered the key, ID, and region for my Account A in AWS, not touching Account B whatsoever. When I run aws s3 ls I see the correct S3 buckets of the account with that key/ID/region, so I know the CLI is working with the correct account. Sounds good. I check the ~/.aws/credentials file and it has just one profile, [default], which seems normal. No other profiles are in there. I copied this over to the ~/.aws/config file, so now both files are the same. Works great.
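For reference, this is roughly what the two files look like with a single default profile (values hypothetical):
# ~/.aws/credentials
[default]
aws_access_key_id = AKIA...
aws_secret_access_key = ...

# ~/.aws/config
[default]
region = us-east-1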
I then go into my SSH session on the server where I've installed Serverless and run npm run deploy, and it gives me the same message as above. I think maybe it is somehow not using the correct account for whatever reason, so I manually set the access key and secret with the following command:
serverless config credentials --provider aws --key XXX --secret YYY
It tells me there already is a profile in the aws creds file, so I add --o to the end to overwrite. I run sudo npm run deploy and still get the same error.
I then run this command to manually set a profile in the creds for serverless, with the profile name matching the IAM user name:
serverless config credentials --provider aws --key XXX --secret YYY --profile serverless-agent
Where "serverless-agent" is the name of the IAM user I've been trying to use to deploy. I run this and it tells me there already is an existing profile in the aws creds file, so I run it with --o and it tells me the aws file is now updated. In bash I open the file in Vim and only see the single "[default]" section, as if nothing has changed. I run sudo npm run deploy and it gives me the same error.
I then go and manually set the access and secret:
export AWS_ACCESS_KEY_ID=XXX
export AWS_SECRET_ACCESS_KEY=YYY
I run sudo npm run deploy and it gives me the same Error.
I even removed the AWS CLI, and the directory that holds the credentials and config files - and when I manually set my account creds via serverless config it tells me there already is a profile set up in my aws file, prompting me to use the overwrite command - how is this possible when the file is literally not on my computer?
So I then think that serverless itself has a cache or something, calling the wrong file or whatever for creds, so I uninstall serverless via sudo npm uninstall -g serverless so that I can start from zero again. I then do all of the above steps and more all over again, and nothing has changed. Same error message.
I do have Apex.run set up, but that should be using my AWS CLI config file, so I'm not sure if that is causing any problems. Then again, I don't know this subject in any depth, and I can't find any way to remove Apex itself in their docs.
In the package I am trying to deploy, I do not have a profile: XXX set in the serverless.yml file, because I've read that if you do not, it just defaults to the [default] profile you have set in the aws creds file on your computer. Just to check, I go into the serverless.yml file and set profile: default, and the error I now get when I run npm run deploy is
Profile default does not exist
How is that possible when I have the "default" profile set in my creds file? Then I remember that previously I ran the serverless config credentials command and added the profile name serverless-agent to it (yet it didn't save to the aws creds file, as I mentioned above), so I add that profile name to the serverless.yml file just to see if it works, and I get the same "Profile default does not exist" error.
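For what it's worth, the profile value in serverless.yml has to match a section header in ~/.aws/credentials exactly. A sketch of the pairing (the profile name is taken from the question, everything else is hypothetical):
# serverless.yml
provider:
  name: aws
  region: us-east-1
  profile: serverless-agent

# ~/.aws/credentials
[serverless-agent]
aws_access_key_id = AKIA...
aws_secret_access_key = ...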
So back to the error message. The Role is in an account not even related to the IAM user in my aws creds. Without knowing a lot about this, it's as if the Serverless config over SSH isn't correct or something. Is it using old creds I had set up in Apex.run? Why is the aws creds file not updated with the profile when I manually set it with the serverless config command? I am using the same user account (but with a new key and secret) that I used a few weeks ago when I correctly deployed and my Lambda and API were set up for me on AWS. Boy do I miss those times, and I wish I hadn't messed up my existing Lambda functions without setting a version number first, forcing me to start all over.
I am so confused. Any help would be greatly appreciated.
If you are using an IAM role then you have to use that IAM role by assuming it explicitly (I did this using PowerShell).
I was also facing the same issue earlier, when we moved from a user to a role.
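The answer above used PowerShell; a rough equivalent with the AWS CLI would look something like the sketch below (role ARN and session name hypothetical). The command returns temporary credentials in JSON, which you then export before deploying:
$ aws sts assume-role \
    --role-arn arn:aws:iam::ACCOUNT_B_ID:role/EC2CodeDeploy \
    --role-session-name serverless-deploy
$ export AWS_ACCESS_KEY_ID=...       # AccessKeyId from the JSON output
$ export AWS_SECRET_ACCESS_KEY=...   # SecretAccessKey from the JSON output
$ export AWS_SESSION_TOKEN=...       # SessionToken from the JSON output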

Ruby on Rails git permission denied

I am following this guide to set up a Ruby on Rails environment on my Mac running El Capitan.
I followed it up to installing Homebrew and Ruby (latest version 2.2.3) with rbenv. Now I was setting up Git.
I followed the first few commands:
git config --global color.ui true
git config --global user.name "YOUR NAME"
git config --global user.email "YOUR@EMAIL.com"
ssh-keygen -t rsa -C "YOUR@EMAIL.com"
Now I was asked where to save the generated key. I saved it to the ~ directory with the name file. I now have two files, namely file and file.pub.
I went to this link to add the SSH key. I clicked on the Add SSH key option there and named the key ROR SSH Key.
The key in file.pub looks like
ssh-rsa asfjasfhjalsfdhaskfdhalsdfsdf\asdf\as\dg\sa\fasdfas\f\asdf---so on random numbers---adfasdfasfa myemail@gmail.com
and I pasted the key there in github and saved the key.
Then I went back to the terminal and typed the command below.
ssh -T git@github.com
but I didn't receive any message saying "Hi excid3! You've successfully authenticated, but GitHub does not provide shell access."
I got a message saying
The authenticity of host 'github.com (192.30.251.130)' can't be established.
RSA key fingerprint is SHA256:nThbg6sdfgdfgsdfgGOCspRomTxdCARLviKw6E5SY8.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'github.com,192.30.251.130' (RSA) to the list of known hosts.
Permission denied (publickey).
Above, I have changed a few characters in the SHA256 key, just for security. I have also changed the IP address a little for the same reason. But the idea behind it is the same.
Please guide me on what's wrong. Thanks.
By default, ssh will look in the ~/.ssh folder for your private keys. Since you saved it in ~ instead, it can't find it.
You can either:
Move the file and file.pub files into ~/.ssh and rename to id_rsa and id_rsa.pub, as OS X will automatically use those files for any ssh command (if you hadn't manually entered a filename, this is where ssh-keygen would have saved them).
Use the ssh-add -K file command to permanently add your key to the OS X Keychain.
Note that GitHub's own instructions say they "strongly suggest keeping the default settings" instead of saving the private/public key somewhere else.
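A sketch of both options, assuming the key pair is still sitting in your home directory as file and file.pub:
# Option 1: move the keys to the location ssh checks by default
$ mkdir -p ~/.ssh
$ mv ~/file ~/.ssh/id_rsa
$ mv ~/file.pub ~/.ssh/id_rsa.pub
$ chmod 600 ~/.ssh/id_rsa

# Option 2: keep them where they are and add the private key to the agent/Keychain
$ ssh-add -K ~/file

# Then test again:
$ ssh -T git@github.com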

Add secret environment variable to Travis CI

I'm currently trying to add a secret environment variable to Travis-CI. In the docs ("Secure environment variables") I found the following line to do this:
gem install travis
travis encrypt -r travis-ci/travis-core MY_SECRET_ENV=super_secret
If I understand this correctly, I must replace travis-ci/travis-core with the name of my own repository, because the encryption should only be valid for my repository. Therefore, there must be a public key for the repository. Is there a special travis command to add this key? How exactly does this work? Or is this just my SSH public key?
When I run the following command:
travis encrypt -r my_username/my_repo MY_SECRET_ENV=super_secret
I get the following error:
There was an error while fetching public key, please check if you entered correct slug
This is a known issue. It already has a pull request on GitHub to fix it.
The problem is that the request to fetch a repository's public key does not work, because they changed the API to SSL. If you don't want to wait for the pull request to be merged, you can simply change the source to use https instead of http.
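Once the encryption works, the output is just a secure value for your .travis.yml; if I remember correctly, travis encrypt also accepts an --add flag that appends it for you. The committed file ends up looking roughly like this (encrypted string hypothetical):
# .travis.yml
env:
  global:
    - secure: "ENCRYPTED_STRING_FROM_TRAVIS_ENCRYPT"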
