Cannot access S3 bucket from WildFly running in Docker

I am trying to configure WildFly, using the Docker image jboss/wildfly:10.1.0.Final, to run in domain mode. I am using Docker for Mac 18.06.1-ce with the aufs storage driver.
I followed the instructions at https://octopus.com/blog/wildfly-s3-domain-discovery. It seems pretty simple, but I am getting this error:
WFLYHC0119: Cannot access S3 bucket 'wildfly-mysaga': WFLYHC0129: bucket 'wildfly-mysaga' could not be accessed (rsp=403 (Forbidden)). Maybe the bucket is owned by somebody else or the authentication failed.
But my access key, secret, and bucket name are correct. I can use them to connect to S3 with the AWS CLI.
What could I be doing wrong? The tutorial appears to run everything on an EC2 instance, while my test is in Docker. Maybe it is a certificate problem?

I generated access keys for the admin user and it worked.
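For what it's worth, a quick way to confirm whether the keys themselves are the problem is to run the same bucket check outside of WildFly and Docker with the AWS CLI, using exactly the key pair you put in the domain configuration (the values below are placeholders):

AWS_ACCESS_KEY_ID=AKIA... AWS_SECRET_ACCESS_KEY=... \
  aws s3api head-bucket --bucket wildfly-mysaga
# a "403 Forbidden" here reproduces WFLYHC0129 without WildFly or Docker involved

If this fails with the original keys but succeeds with the newly generated admin-user keys, the issue is the permissions attached to the key, not the container or certificates.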

Related

MLflow: Unable to store artifacts to S3

I'm running my MLflow tracking server in a Docker container on a remote server and trying to log MLflow runs from my local computer, with the eventual goal that anyone on my team can send their run data to the same tracking server. I've set the tracking URI to http://<ip of remote server>:<port on docker container>. I'm not explicitly setting any AWS credentials on the local machine because I would like to just be able to train locally and log to the remote server (run data to RDS and artifacts to S3). I have no problem logging my runs to the RDS database, but I keep getting the following error when it gets to the point of logging artifacts: botocore.exceptions.NoCredentialsError: Unable to locate credentials. Do I have to have the credentials available outside of the tracking server for this to work (i.e. on my local machine where the MLflow runs take place)? I know that all of my credentials are available in the Docker container that hosts the tracking server. I've been able to upload files to my S3 bucket using the AWS CLI inside the container that hosts my tracking server, so I know it has access. I'm confused by the fact that I can log to RDS but not S3. I'm not sure what I'm doing wrong at this point. TIA.
Yes, apparently I do need to have the credentials available to the local client as well.
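As far as I understand, that is expected: run metadata goes through the tracking server to the backend store (RDS here), but the MLflow client uploads artifacts to S3 directly, so the machine running the training code needs its own AWS credentials. A minimal sketch of the local setup (all values are placeholders):

# on the local machine that runs the training code, not just inside the server container
export MLFLOW_TRACKING_URI=http://<ip of remote server>:<port on docker container>
export AWS_ACCESS_KEY_ID=<a key allowed to write to the artifact bucket>
export AWS_SECRET_ACCESS_KEY=<matching secret>
# then run the training script as usual; artifact logging can now reach S3

A ~/.aws/credentials profile on the local machine works just as well as the environment variables.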

ml-pipeline pod of kubeflow deployment is in CrashLoopBackOff

I have very limited hands-on experience with the Kubeflow setup. I have deployed a Kubeflow pipeline on an EKS cluster with S3 connectivity for the artifacts. All the pods are up and running except the ml-pipeline deployment; the ml-pipeline persistence agent deployment is also failing because it depends on ml-pipeline.
I am seeing the error below in the pod logs:
I0321 19:19:49.514094 7 config.go:57] Config DBConfig.ExtraParams not specified, skipping
F0321 19:19:49.812472 7 client_manager.go:400] Failed to check if Minio bucket exists. Error: Access Denied.
Has anyone faced similar issues? I am not able to find many logs that could help me debug this.
Also, the credentials consumed by the ml-pipeline deployment to access the bucket have all the required permissions.
Check the S3 permissions assigned to the AWS credentials you set for MINIO_AWS_ACCESS_KEY_ID & MINIO_AWS_SECRET_ACCESS_KEY. That is what caused the same error for me.
Although the auto-rds-s3-setup.py setup program provided by the AWS distribution of Kubeflow can create the S3 bucket, the credentials passed to MinIO have to grant access to that bucket, so they are primarily for reusing an existing S3 bucket.
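If it helps, one quick sanity check is to take the exact values you supplied for MINIO_AWS_ACCESS_KEY_ID / MINIO_AWS_SECRET_ACCESS_KEY and try them against the artifact bucket from any machine with the AWS CLI (the bucket name below is a placeholder):

AWS_ACCESS_KEY_ID=<minio access key> AWS_SECRET_ACCESS_KEY=<minio secret> \
  aws s3 ls s3://<kubeflow-artifact-bucket>
# "Access Denied" here matches the "Failed to check if Minio bucket exists" error,
# and means the key pair lacks list/read permissions on that bucket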

Error Creating Vault - Missing S3 Bucket Flag?

I'm trying to create a new Jenkins X cluster using jx. This is the command I am running:
jx create cluster aws --ng
And this is the error I get:
error: creating the system vault: creating vault: Missing S3 bucket flag
It seems to fail when creating the vault due to a missing S3 bucket flag, and I'm not sure how to remedy that.
Did you try
jx create cluster aws --ng --state s3://<bucket_name>
Also, ensure you are using the latest jx release.
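If the state bucket does not exist yet, a possible sequence (bucket name and region are placeholders) is to create it first and then pass it with --state as above:

aws s3 mb s3://my-jx-state-bucket --region us-east-1
jx create cluster aws --ng --state s3://my-jx-state-bucket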

Image Uploading on Production fails to Amazon S3

I am using a DigitalOcean Droplet with Nginx + Passenger as the server. We are using the CarrierWave gem in Rails to upload images, resize/process them, and store them on Amazon S3. It works perfectly fine in the local environment, but when I deploy to production, image uploading does not work.
Error:
We're sorry, but something went wrong.
The app is running on port 80.
Not sure where to even look to debug the issue. The Passenger logs don't show any errors for it either.
You can check the Nginx logs.
The access log is at '/var/log/nginx/access.log'
and the error log is at '/var/log/nginx/error.log'.
Let me know if you need anything more.
You can have a look in the S3 logs as well. Or in the network tab of your browser (enable preserve log). There has to be an error somewhere ;)
Have you checked your IAM user policies? Make sure you are using an IAM user instead of the root AWS user/key for S3 uploads. Here is an example of a policy that allows anonymous uploads to your bucket. You surely don't want anonymous uploads; it is just an example policy, and your policy requirements may be more restrictive.
Amazon S3 bucket policy for anonymously uploading photos to a bucket
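As a more restrictive starting point than the anonymous-upload example, here is a rough sketch of an inline policy attached to the IAM user whose keys the app uses for uploads (user, policy, and bucket names are placeholders):

cat > s3-upload-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    { "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::my-app-uploads/*" },
    { "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::my-app-uploads" }
  ]
}
EOF
aws iam put-user-policy --user-name my-app-uploader \
  --policy-name s3-upload --policy-document file://s3-upload-policy.json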

How to provide a password for a Capistrano Rails 3.1 app deployment to an AWS EC2 Ubuntu server?

We are trying to deploy a Rails 3.1 app to an AWS EC2 instance running Ubuntu 12.04 with cap deploy. However, we are stuck at the password prompt. The EC2 login only has a private key and there is no password. How can we handle the SSH login for the EC2 deployment?
Thanks so much.
This is what I did to solve this scenario:
On the local machine, generate a key using e.g. ssh-keygen. Keep the standard location to not overcomplicate things, i.e. keyfiles should be ~/.ssh/id_rsa and id_rsa.pub; SKIP THIS STEP IF YOU ALREADY HAVE KEYS IN .ssh
Copy the content of the id_rsa.pub file
SSH into the EC2 instance using your .pem keyfile
Paste the content of your local id_rsa.pub into /home/[YOUR_EC2_USER]/.ssh/authorized_keys
You should now be able to use capistrano for your deployment.
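A rough shell sketch of those steps (the .pem path, user, and hostname are placeholders):

ssh-keygen -t rsa                     # skip if ~/.ssh/id_rsa already exists
cat ~/.ssh/id_rsa.pub | \
  ssh -i ~/my-ec2-key.pem ubuntu@ec2-xx-xx-xx-xx.compute-1.amazonaws.com \
  'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'
cap deploy                            # capistrano now authenticates with your local key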

Resources