Losing Access to Server After Provisioning With Chef - ruby-on-rails

I am using the rails-server-template available here (https://github.com/TalkingQuickly/rails-server-template) to provision a Rails server (Ubuntu 12.04) using Chef. When setting up a new server, I copy my public SSH key with ssh-copy-id -i ~/.ssh/id_rsa.pub ubuntu@my-server.amazonaws.com and can then log in to the server fine.
But after I download a new copy of this template and update the nodes/my-server.json file to this:
{
  "environment": "production",
  "authorization": {
    "sudo": {
      "users": ["deploy", "vagrant"]
    }
  },
  "run_list": [
    "role[server]",
    "role[postgres-server]"
  ],
  "automatic": {
    "ipaddress": "my-server.amazonaws.com"
  },
  "postgresql": {
    "password": {
      "postgres": "password"
    }
  }
}
And also updating the deploy user in data_bags/users/deploy.json:
{
  "id": "deploy",
  // generate this with: openssl passwd -1 "plaintextpassword"
  "password": "password",
  "ssh_keys": [
    "ssh-rsa my-public-key from ~/.ssh/id_rsa.pub"
  ],
  "groups": ["sysadmin"],
  "shell": "/bin/bash"
}
For some weird reason, after provisioning the server with bundle exec knife solo bootstrap ubuntu@my-server.com, I get a Permission denied (publickey) error. When I try to log in over SSH, I am prompted for the ubuntu user's password, which I don't know. I can't even log in with my Amazon EC2 .pem key pair anymore.
Am I missing something? I didn't change the server.json role, and I can't seem to figure out what is going on. Has something changed my ssh configuration during provisioning?

Turns out that when I was trying to SSH into my server, the user I was using was ubuntu, whereas in the data_bags I had set up a new user with the id deploy. I needed to SSH in as the deploy user.
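In other words, using the placeholder hostname from the question, log in as the data bag user rather than the AMI's default user:
ssh deploy@my-server.amazonaws.com
instead of:
ssh ubuntu@my-server.amazonaws.com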

Related

RabbitMQ invalid credentials with valid information for web management interface

While trying to configure and create an admin user for RabbitMQ with a JSON definitions file, the users are created, but I get the following error when logging in with valid credentials from the web management console:
2022-07-22 08:15:56.342071+00:00 [warning] <0.847.0> HTTP access denied: user 'admin' - invalid credentials
My configuration and Docker files are as follows.
rabbitmq.config
[
  {rabbit, [
    {loopback_users, [admin]}
  ]},
  {rabbitmq_management, [
    {load_definitions, "/etc/rabbitmq/definitions.json"}
  ]}
].
definitions.json
{
  "users": [
    {
      "name": "guest",
      "password_hash": "abcd",
      "hashing_algorithm": "rabbit_password_hashing_sha256",
      "tags": ""
    },
    {
      "name": "admin",
      "password_hash": "admin123",
      "hashing_algorithm": "rabbit_password_hashing_sha256",
      "tags": "administrator"
    }
  ],
  "vhosts": [
    {
      "name": "/"
    }
  ],
  "permissions": [
    {
      "user": "admin",
      "vhost": "/",
      "configure": ".*",
      "write": ".*",
      "read": ".*"
    }
  ]
}
Dockerfile
FROM rabbitmq:3.9-management
COPY conf/rabbitmq.config /etc/rabbitmq/
COPY conf/definitions.json /etc/rabbitmq/
RUN chown rabbitmq:rabbitmq /etc/rabbitmq/rabbitmq.config /etc/rabbitmq/definitions.json
CMD ["rabbitmq-server"]
I also tried to log in with rabbitmqctl
$ rabbitmqctl authenticate_user admin admin123
Authenticating user "admin" ...
Error:
Error: failed to authenticate user "admin"
user 'admin' - invalid credentials
When the password is changed with rabbitmqctl change_password admin admin123, everything seems to work fine.
The only warning in the log on rabbitmq startup is
2022-07-22 08:15:30.218099+00:00 [warning] <0.652.0> Message store "628WB79CIFDYO9LJI6DKMI09L/msg_store_persistent": rebuilding indices from scratch
Could someone please tell me the possible cause and solution? If I've missed anything, or over- or under-emphasized a specific point, please let me know in the comments. Thank you so much in advance for your time.
Your definitions file must contain the HASH of the password, not the password itself. Generally what I do is set a user's password via change_password like you have, then export the current definitions. You'll notice that they contain the hashed password.
You can also generate the hash yourself. See this:
How to generate password_hash for RabbitMQ Management HTTP API
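For reference, recent RabbitMQ releases can do both of these from the CLI (a sketch; verify that your rabbitmqctl version ships these subcommands):
# Produce a salted hash suitable for password_hash in definitions.json:
rabbitmqctl hash_password admin123
# Or set the password once, then export definitions that contain the hash:
rabbitmqctl change_password admin admin123
rabbitmqctl export_definitions /tmp/defs.json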
NOTE: the RabbitMQ team monitors the rabbitmq-users mailing list and only sometimes answers questions on StackOverflow.

Pushing an image to ECR, getting "Retrying in ... seconds"

I recently created a new repository in AWS ECR, and I'm attempting to push an image. I'm copy/pasting the directions provided via the "View push commands" button on the repository page. I'll copy those here for reference:
aws ecr get-login-password --region us-west-2 | docker login --username AWS --password-stdin 123456789.dkr.ecr.us-west-2.amazonaws.com
("Login succeeded")
docker build -t myorg/myapp .
docker tag myorg/myapp:latest 123456789.dkr.ecr.us-west-2.amazonaws.com/myorg/myapp:latest
docker push 123456789.dkr.ecr.us-west-2.amazonaws.com/myorg/myapp:latest
However, when I get to the docker push step, I see:
> docker push 123456789.dkr.ecr.us-west-2.amazonaws.com/myorg/myapp:latest
The push refers to repository [123456789.dkr.ecr.us-west-2.amazonaws.com/myorg/myapp]
a53c8ed5f326: Retrying in 1 second
78e16537476e: Retrying in 1 second
b7e38d172e62: Retrying in 1 second
f1ff72b2b1ca: Retrying in 1 second
33b67aceeff0: Retrying in 1 second
c3a550784113: Waiting
83fc4b4db427: Waiting
e8ade0d39f19: Waiting
487d5f9ec63f: Waiting
b24e42eb9639: Waiting
9262398ff7bf: Waiting
804aae047b71: Waiting
5d33f5d87bf5: Waiting
4e38024e7e09: Waiting
EOF
I'm wondering if this has something to do with the permissions/policies associated with this repository. Right now there are no statements attached to this repository. Is that the missing part? If so, what would that statement look like? I've tried this, but it had no effect:
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Sid": "AllowPutImage",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456789:root"
      },
      "Action": "ecr:PutImage"
    }
  ]
}
Bonus Points:
I eventually want to use this in a CDK CodeBuildAction. I was getting the same error as above, so I checked whether I got the same result in my local terminal, which I did. So if the policy statement needs to be different for use in the CDK CodeBuildAction, those details would be appreciated as well.
Thank you in advance for any advice.
I was having the same problem when trying to upload the image manually using the AWS and Docker CLIs. I was able to fix it by going into ECR -> Repositories -> Permissions, then adding a new policy statement with principal:* and the following actions:
"ecr:BatchGetImage",
"ecr:BatchCheckLayerAvailability",
"ecr:CompleteLayerUpload",
"ecr:GetDownloadUrlForLayer",
"ecr:InitiateLayerUpload",
"ecr:PutImage",
"ecr:UploadLayerPart"
Be sure to add more restrictive principals. I was just trying to see if permissions were the problem in this case and sure enough they were.
The accepted answer works correctly in resolving the issue. However, as mentioned in that answer, allowing principal:* is risky and can get your ECR compromised.
Be sure to add specific principal(s), i.e. IAM users/roles, such that only those users/roles are allowed to execute the listed actions. The following JSON policy can be added under Amazon ECR >> Repositories >> (select the required repository) >> Permissions >> Edit policy JSON to get this resolved quickly:
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Sid": "Statement1",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<AccountNumber>:role/<RoleName>"
      },
      "Action": [
        "ecr:BatchCheckLayerAvailability",
        "ecr:BatchGetImage",
        "ecr:CompleteLayerUpload",
        "ecr:GetDownloadUrlForLayer",
        "ecr:InitiateLayerUpload",
        "ecr:PutImage",
        "ecr:UploadLayerPart"
      ]
    }
  ]
}
I had this issue when the repository didn't exist in ECR - I assumed that pushing would create it, but it didn't.
Creating it before pushing solved the problem.
It turns out it was a missing/misconfigured policy. I was able to get it working within CodeBuild by adding a role with the AmazonEC2ContainerRegistryPowerUser managed policy:
new CodeBuildAction({
  actionName: "ApplicationBuildAction",
  input: this.applicationSourceOutput,
  outputs: [this.applicationBuildOutput],
  project: new PipelineProject(this, "ApplicationBuildProject", {
    vpc: this.codeBuildVpc,
    securityGroups: [this.codeBuildSecurityGroup],
    environment: {
      buildImage: LinuxBuildImage.STANDARD_5_0,
      privileged: true,
    },
    environmentVariables: {
      ECR_REPO_URI: {
        value: ECR_REPO_URI,
      },
      ECR_REPO_NAME: {
        value: ECR_REPO_NAME,
      },
      AWS_REGION: {
        value: this.region,
      }
    },
    buildSpec: BuildSpec.fromObject({
      version: "0.2",
      phases: {
        pre_build: {
          commands: [
            "echo 'Logging into Amazon ECR...'",
            "aws ecr get-login-password --region $AWS_REGION | docker login --username AWS --password-stdin $ECR_REPO_URI",
            "COMMIT_HASH=$(echo \"$CODEBUILD_RESOLVED_SOURCE_VERSION\" | head -c 8)"
          ]
        },
        build: {
          commands: [
            "docker build -t $ECR_REPO_NAME:latest ."
          ]
        },
        post_build: {
          commands: [
            "docker tag $ECR_REPO_NAME:latest $ECR_REPO_URI/$ECR_REPO_NAME:latest",
            "docker tag $ECR_REPO_NAME:latest $ECR_REPO_URI/$ECR_REPO_NAME:$COMMIT_HASH",
            "docker push $ECR_REPO_URI/$ECR_REPO_NAME:latest",
            "docker push $ECR_REPO_URI/$ECR_REPO_NAME:$COMMIT_HASH",
          ]
        }
      }
    }),
    // * * ADDED THIS ROLE HERE * *
    role: new Role(this, "application-build-project-role", {
      assumedBy: new ServicePrincipal("codebuild.amazonaws.com"),
      managedPolicies: [ManagedPolicy.fromAwsManagedPolicyName("AmazonEC2ContainerRegistryPowerUser")]
    })
  }),
});
In my case, the repo was not created on ECR. Creating it fixed it.
The same message ("Retrying in ... seconds" in a loop) may be seen when running docker push without first creating the corresponding repo in ECR ("myorg/myapp" in your example). Run:
aws ecr create-repository --repository-name myorg/myapp --region us-west-2
The problem may be that your IAM user does not have full access to ECR, so attach the appropriate policy to your IAM user. (The original answer included a screenshot of the policy-attachment screen.)
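If that policy is AWS's managed full-access policy for ECR, attaching it from the CLI would look roughly like this (a sketch; the user name is a placeholder):
aws iam attach-user-policy \
  --user-name <your-iam-user> \
  --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryFullAccess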
For anyone running into this issue, my problem was having the wrong AWS profile/account configured in my AWS CLI.
Run aws configure and add the keys of the account that has access to the ECR repository.
If you have multiple AWS accounts using the cli, then check out this solution.
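To double-check which identity the CLI is actually using before pushing, you can run:
aws sts get-caller-identity
aws configure list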
Just had this problem. It was permission related. In my case I was using CDKv2, which assumes a specific role in order to upload assets. Because the user I was deploying as did not have permission to assume that role, it failed. The hint was these warning messages that appeared during the deploy:
current credentials could not be used to assume 'arn:aws:iam::12345:role/cdk-abcde1234-image-publishing-role-12345-ap-southeast-2', but are for the right account. Proceeding anyway.
current credentials could not be used to assume 'arn:aws:iam::12345:role/cdk-abcde1234-file-publishing-role-12345-ap-southeast-2', but are for the right account. Proceeding anyway.
Yes, updating the permissions on your ECR repo would fix it, but since CDK is supposed to maintain this for you, the proper solution is to allow your user to assume the CDK role so you don't need to mess with ECR permissions yourself.
In my case I did this by granting the sts:AssumeRole permission for the resource arn:aws:iam::*:role/cdk-*. This allowed my user to assume both the file upload role and the image upload role.
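As a sketch, granting that from the CLI could look like this (the user and policy names are placeholders):
aws iam put-user-policy \
  --user-name <deploying-user> \
  --policy-name AllowAssumeCdkRoles \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Resource": "arn:aws:iam::*:role/cdk-*"
    }]
  }'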
After granting this permission, the CDK errors about being unable to assume the role went away, and I was able to deploy successfully.
For me, the problem was that the repository name on ECR had to be the same as the name of the app/repository I was pushing. Tried all fixes here, didn't work. This did!
Browse ECR -> Repositories -> Permissions
Edit JSON Policy.
Add these actions.
"ecr:BatchGetImage",
"ecr:BatchCheckLayerAvailability",
"ecr:CompleteLayerUpload",
"ecr:GetDownloadUrlForLayer",
"ecr:InitiateLayerUpload",
"ecr:PutImage",
"ecr:UploadLayerPart"
And add "*" in Resources.
Save it.
You're good to go; now you can push the image to ECR.
If you have an MFA enforcement policy on your account, that might be the problem, because you must obtain a session token before performing any action. Take a look at this AWS document on getting a token via the CLI.
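For example, requesting a session token with MFA looks roughly like this (the serial number and code are placeholders):
aws sts get-session-token \
  --serial-number arn:aws:iam::<AccountNumber>:mfa/<UserName> \
  --token-code <6-digit-code>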
I was uploading from an EC2 instance and had failed to specify the region for my AWS CLI. The login was successful, but the docker push command kept retrying even though I had set the correct permissions on the ECR repo side. This line fixed the issue for me:
aws configure set default.region us-west-1
In my case I was using the wrong AWS credentials, and running aws configure with the correct credentials resolved the issue.

I'm trying to write deployment rules with Ansible to clone a repository

The steps I followed are:
Ansible logs in as the root user
Update server packages
Create a user called deploy
Clone a Git repository from bitbucket.org
I want to clone the repository as the deploy user into their home directory using the SSH agent forwarding method.
But the issue is that I am not able to get permissions even with SSH forwarding, and the error returned is: Doesn't have rights to access the repository.
My inventory file:
[production]
rails ansible_host=(my host ip) ansible_user=ubuntu
My ansible.cfg file looks like this:
[ssh_connection]
pipelining=True
ssh_args = -o ForwardAgent=true
My playbook looks like this:
---
- hosts: production
  remote_user: root
  become: yes
  tasks:
    - name: Update all packages to latest version
      apt:
        upgrade: dist
    # (deploy user add tasks here)
    - name: APP | Clone repo
      git:
        repo: git@github.com:e911/Nepali-POS-Tagger.git
        dest: /home/deploy/myproject
        accept_hostkey: true
        force: true
      become: yes
      become_user: deploy
      tags: app
My deploy user is created, but for some reason I cannot clone the repository as the deploy user; it does not have access rights. I have researched this and think it is because the SSH keys are not being attached: when I log in as ubuntu and switch to the deploy user, the forwarded agent keys are not available to deploy. But I haven't found a solution for this.
How do you solve this? Or what am I doing wrong here?
Here is the error snippet:
fatal: [rails]: FAILED! => {
    "changed": false,
    "cmd": "/usr/bin/git clone --origin origin '' /home/deploy/myproject",
    "invocation": {
        "module_args": {
            "accept_hostkey": true,
            "archive": null,
            "bare": false,
            "clone": true,
            "depth": null,
            "dest": "/home/deploy/myproject",
            "executable": null,
            "force": true,
            "gpg_whitelist": [],
            "key_file": null,
            "recursive": true,
            "reference": null,
            "refspec": null,
            "remote": "origin",
            "repo": "git@github.com:e911/Nepali-POS-Tagger.git",
            "separate_git_dir": null,
            "ssh_opts": null,
            "track_submodules": false,
            "umask": null,
            "update": true,
            "verify_commit": false,
            "version": "HEAD"
        }
    },
    "msg": "",
    "rc": 128,
    "stderr": "Cloning into '/home/deploy/myproject'...\ngit@github.com: Permission denied (publickey).\r\nfatal: Could not read from remote repository.\n\nPlease make sure you have the correct access rights\nand the repository exists.\n",
    "stderr_lines": [
        "Cloning into '/home/deploy/myproject'...",
        "git@github.com: Permission denied (publickey).",
        "fatal: Could not read from remote repository.",
        "",
        "Please make sure you have the correct access rights",
        "and the repository exists."
    ],
    "stdout": "",
    "stdout_lines": []
}
I have tried the solutions here: Ansible and Git Permission denied (publickey) at Git Clone, but they were of no help.
There is an alternative solution, using HTTPS instead of SSH:
For GitHub:
Generate a Token from link: https://github.com/settings/tokens
Give permission with scope: repo (full control of private repositories)
Use that token: git+https://<TOKEN>:x-oauth-basic@github.com/<ORGANIZATION>/<REPO>.git#<BRANCH>
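(The git+https:// form above is the pip/requirements style; for a plain clone the equivalent would be:
git clone https://<TOKEN>:x-oauth-basic@github.com/<ORGANIZATION>/<REPO>.git)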
For BitBucket:
Generate a random Password for your repo from link: https://bitbucket.org/account/settings/app-passwords
Give permission with scope Repositories: Read
Use that password to clone your repo as: git clone https://<USERNAME>:<GENERATED_PASSWORD>@bitbucket.org/<ORGANIZATION>/<REPO>.git
Hope this serves as an alternative solution.

How do I get Docker for Windows to generate a plaintext auth key for access to private images on Docker Hub?

Amazon Elastic Beanstalk requires a plaintext key from Docker in order to access private images on Docker Hub. According to the instructions on AEB, you simply need to run docker login to generate these credentials in "%UserProfile%/.docker/config.json". However, this generates the following file:
{
  "auths": {
    "https://index.docker.io/v1/": {}
  },
  "HttpHeaders": {
    "User-Agent": "Docker-Client/17.12.0-ce (windows)"
  },
  "credsStore": "wincred"
}
The credentials were stored in "wincred", the windows credential manager.
How do I instead force the credentials to be generated, temporarily, within the config.json file instead?
Remove the last line ("credsStore": "wincred") from the "%UserProfile%/.docker/config.json" file (don't forget to remove the trailing ',' on the line above):
{
  "auths": {
    "https://index.docker.io/v1/": {}
  },
  "HttpHeaders": {
    "User-Agent": "Docker-Client/17.12.0-ce (windows)"
  }
}
Save the config.json file.
Run docker login.
If you look inside the config.json file, you will now find what you need. From what I understand, these credentials should be valid until either your username or password changes (you can see why it's good to have these in the credential manager!).
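Note that the auth value stored there is just the base64 encoding of username:password, which you can reproduce in a unix-like shell (illustrative only; copy the real value from config.json):
echo -n "myuser:mypassword" | base64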
After you've copied out the auth key, you'll want to restore the config.json file to its original state:
{
  "auths": {
    "https://index.docker.io/v1/": {}
  },
  "HttpHeaders": {
    "User-Agent": "Docker-Client/17.12.0-ce (windows)"
  },
  "credsStore": "wincred"
}
Then run docker login once more to put things back how they originally were.

"500 Internal Server Error" when pushing docker images using gcloud and a service account with "Owner" permission

I am trying to push a docker container image to the Google Container Engine registry:
$ sudo gcloud docker push gcr.io/<my_project>/node
The push refers to a repository [gcr.io/<my_project>/node] (len: 1)
24c43271ae0b: Image already exists
70a0868daa56: Image already exists
1d86f57ee56d: Image already exists
a46b473f6491: Image already exists
a1ac8265769f: Image already exists
290bb858e1c3: Image already exists
d6fc4b5b4917: Image already exists
3842411e5c4c: Image already exists
7a01cc5f27b1: Image already exists
dbacfa057b30: Image already exists
latest: digest: sha256:02be2e66ad2fe022f433e228aa43f32d969433402820035ac8826880dbc325e4 size: 17236
Received unexpected HTTP status: 500 Internal Server Error
I cannot make the command any more verbose. Neither with:
$ sudo gcloud docker push gcr.io/<my_project>/node --verbosity info
nor with this command that should work:
$ sudo gcloud docker --log-level=info push gcr.io/<my_project>/node
usage: gcloud docker [EXTRA_ARGS ...] [optional flags]
ERROR: (gcloud.docker) unrecognized arguments: --log-level=info
according to the documentation (see EXTRA_ARGS) and --log-level=info is a valid docker option:
SYNOPSIS
gcloud docker [EXTRA_ARGS ...] [--authorize-only, -a]
[--docker-host DOCKER_HOST]
[--server SERVER,[SERVER,...], -s SERVER,[SERVER,...]; default="gcr.io,us.gcr.io,eu.gcr.io,asia.gcr.io,b.gcr.io,bucket.gcr.io,appengine.gcr.io"]
[GLOBAL-FLAG ...]
...
POSITIONAL ARGUMENTS
[EXTRA_ARGS ...]
Arguments to pass to docker.
I am using the default service account that GCP installs on my container-vm machine instance. I have also given it Owner permissions on all resources in <my_project>.
UPDATE:
Running sudo gsutil ls -bL gs://artifacts.<my_project>.appspot.com I get:
gs://artifacts.<my_project>.appspot.com/ :
Storage class: STANDARD
Location constraint: US
Versioning enabled: None
Logging configuration: None
Website configuration: None
CORS configuration: None
Lifecycle configuration: None
ACL: []
Default ACL: []
If I do the same thing after authenticating with my non-service account, I get both ACL and Default ACL:
ACL:
[
  {
    "entity": "project-owners-262451203973",
    "projectTeam": {
      "projectNumber": "262451203973",
      "team": "owners"
    },
    "role": "OWNER"
  },
  {
    "entity": "project-editors-262451203973",
    "projectTeam": {
      "projectNumber": "262451203973",
      "team": "editors"
    },
    "role": "OWNER"
  },
  {
    "entity": "project-viewers-262451203973",
    "projectTeam": {
      "projectNumber": "262451203973",
      "team": "viewers"
    },
    "role": "READER"
  }
]
Default ACL:
[
  {
    "entity": "project-owners-262451203973",
    "projectTeam": {
      "projectNumber": "262451203973",
      "team": "owners"
    },
    "role": "OWNER"
  },
  {
    "entity": "project-editors-262451203973",
    "projectTeam": {
      "projectNumber": "262451203973",
      "team": "editors"
    },
    "role": "OWNER"
  },
  {
    "entity": "project-viewers-262451203973",
    "projectTeam": {
      "projectNumber": "262451203973",
      "team": "viewers"
    },
    "role": "READER"
  }
]
Can you run sudo gsutil ls -bL gs://artifacts.<my_project>.appspot.com and see if you have access to the GCS bucket? This will verify the storage permissions for the docker image.
While I think you should have permission by virtue of being an owner, this will verify whether you do or not.
As for the EXTRA_ARGS, I think --log-level="info" is only valid for the docker daemon command; docker push does not recognize --log-level="info".
UPDATE
From reviewing the logs again: you are pushing a mostly existing image, as the "Image already exists" log entries indicate. It failed on the first new write step. That suggests the problem is likely that the instance you started with originally had only a read-only scope.
Can you please run this command and share the output.
curl -H "Metadata-Flavor:Google" http://metadata/computeMetadata/v1/instance/service-accounts/default/scopes
We are looking for the scope https://www.googleapis.com/auth/devstorage.read_write.
What might have happened is that the instance was not originally created with this scope, and since the scopes on an instance cannot be modified, it remains able only to read.
If this is the case, the solution would likely be creating a new instance.
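For example, creating the replacement instance with a read-write storage scope might look like this (a sketch; the name and zone are placeholders, and storage-rw is gcloud's alias for the devstorage.read_write scope):
gcloud compute instances create <new-instance> \
    --zone <zone> \
    --scopes storage-rw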
We will file a bug to ensure better messaging is provided in situations like this.
