I am using a DigitalOcean Droplet with Nginx + Passenger as the server. We are using the CarrierWave gem in Rails to upload images, resize/process them, and push them to Amazon S3. It works perfectly fine in the local environment, but when I deploy to production the image uploading does not work.
Error:
We're sorry, but something went wrong.
The app is running on port 80.
I'm not sure where to even start debugging the issue. The Passenger logs don't show any error for this either.
You can look at the Nginx logs.
For the access log, check '/var/log/nginx/access.log'
or
For the error log, check '/var/log/nginx/error.log'
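For example, watching both at once while you retry the upload:

# follow both Nginx logs and look for the failing request
tail -f /var/log/nginx/access.log /var/log/nginx/error.log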
Let me know if you need anything more.
You can have a look at the S3 logs as well, or in the network tab of your browser (enable 'Preserve log'). There has to be an error somewhere ;)
Have you checked your IAM user policies? Make sure you are using an IAM user instead of the root AWS user/key for the S3 upload. Below is an example of a policy that allows anonymous uploads to your bucket. You surely don't want anonymous uploads; it is just an example, and your policy requirements will likely be more restrictive.
Amazon S3 bucket policy for anonymously uploading photos to a bucket
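A rough sketch of what such a policy looks like, assuming a bucket named examplebucket (the bucket name and Sid are placeholders, and JSON policies don't allow comments, so treat every value as illustrative):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ExampleAnonymousUploadOnly",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::examplebucket/*"
    }
  ]
}

For CarrierWave you would instead attach a much tighter policy (e.g. s3:PutObject, s3:GetObject, s3:DeleteObject on your bucket) to the IAM user whose keys the app uses.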
I'm running my MLflow tracking server in a Docker container on a remote server and trying to log MLflow runs from my local computer, with the eventual goal that anyone on my team can send their run data to the same tracking server. I've set the tracking URI to http://<ip of remote server>:<port on docker container>. I'm not explicitly setting any of the AWS credentials on the local machine, because I would like to just be able to train locally and log to the remote server (run data to RDS and artifacts to S3).

I have no problem logging my runs to the RDS database, but I keep getting the following error when it gets to the point of logging artifacts: botocore.exceptions.NoCredentialsError: Unable to locate credentials.

Do I have to have the credentials available outside of the tracking server for this to work (i.e. on my local machine where the MLflow runs are taking place)? I know that all of my credentials are available in the Docker container that hosts the tracking server; I've been able to upload files to my S3 bucket using the AWS CLI inside that container, so I know it has access. I'm confused by the fact that I can log to RDS but not to S3. I'm not sure what I'm doing wrong at this point. TIA.
Yes, apparently I do need to have the credentials available to the local client as well.
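For instance, something like this on the local machine before training (all values are placeholders; the MLflow client uploads artifacts to S3 itself via boto3, which reads the standard AWS environment variables or ~/.aws/credentials):

# local shell, not the tracking-server container
export AWS_ACCESS_KEY_ID=<key with write access to the artifact bucket>
export AWS_SECRET_ACCESS_KEY=<matching secret>
export MLFLOW_TRACKING_URI=http://<ip of remote server>:<port on docker container>
python train.py   # train.py stands in for whatever script calls mlflow.log_artifact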
I am trying to configure WildFly to run in domain mode using the Docker image jboss/wildfly:10.1.0.Final. I am using Docker for macOS 8.06.1-ce with aufs storage.
I followed the instructions in this link https://octopus.com/blog/wildfly-s3-domain-discovery. It seems pretty simple, but I am getting the error:
WFLYHC0119: Cannot access S3 bucket 'wildfly-mysaga': WFLYHC0129: bucket 'wildfly-mysaga' could not be accessed (rsp=403 (Forbidden)). Maybe the bucket is owned by somebody else or the authentication failed.
But my access key, secret, and bucket name are correct. I can use them to connect to S3 using the AWS CLI.
What could I be doing wrong? The tutorial seems to run it on an EC2 instance, while my test is in Docker. Maybe it is a certificate problem?
I generated access keys for an admin user and it worked.
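If you hit the same 403, a quick sanity check (the profile name here is a placeholder) is to confirm that the exact keys you handed to WildFly can actually reach the bucket:

# verify the keys WildFly is using, not your default CLI credentials
aws s3api head-bucket --bucket wildfly-mysaga --profile wildfly-admin
aws s3 ls s3://wildfly-mysaga --profile wildfly-admin

If head-bucket also returns 403 with those keys, the problem is the IAM policy rather than Docker or certificates.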
I have a Rails application that I need to move over to HTTPS. It currently pulls assets from S3 via Cloudfront. I need to be able to test the application locally as well as on staging.
I have successfully set up HTTPS for my local application (running on localhost:3000), but obviously the assets are failing to load because they are insecure. So I need to secure Cloudfront and the S3 bucket.
However, given that my application is running on localhost, I can't add that as a domain when setting up a certificate using AWS Certificate Manager. So how can I set things up so that my local application is able to access S3 assets over HTTPS? Do I need to expose my local application via a tunnel? If so, what are the implications regarding HTTPS?
The domains shouldn't need to match for this to work. You just need to be using HTTPS for all the resources that are loaded by the browser. Just add an ACM certificate to the CloudFront distribution for the domain you will use when the app is running on AWS.
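In a Rails app that usually just means pointing the asset host at the CloudFront domain over HTTPS; a minimal sketch, where d1234abcd.cloudfront.net is a placeholder for your distribution's domain name:

# config/environments/development.rb (repeat for staging/production as needed)
Rails.application.configure do
  # Serving assets from the CloudFront domain over https lets the app on
  # https://localhost:3000 load them without mixed-content errors.
  config.action_controller.asset_host = "https://d1234abcd.cloudfront.net"
end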
I have created a Rails app on Amazon EC2. After that I ran
rails s
The server starts, but what URL should I put in the browser? How can I view the app in a browser? Please share if anyone has any idea about this.
You need to open port 3000 in Amazon EC2. In the instance's security group you will get the option to add rules; adding a rule that opens port 3000 will do the job.
Then browse to:
ip-address:3000
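The same thing from the AWS CLI, if you prefer (sg-0123456789abcdef0 is a placeholder for the instance's security group ID, and 0.0.0.0/0 opens the port to everyone, so narrow the CIDR if only your own IP needs access):

aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 3000 --cidr 0.0.0.0/0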
Got the solution. What I did was run the server like this:
rails s -b ipaddress
And in the browser, use the public DNS from Amazon EC2 with port 3000:
ec2-xx-xx-xx-xxx.us-west-2.compute.amazonaws.com:3000
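A common variant is binding to all interfaces, so you don't have to look up the instance's address each time:

# 0.0.0.0 means listen on every network interface
rails s -b 0.0.0.0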
I spent hours on this RailsCast, http://railscasts.com/episodes/335-deploying-to-a-vps, and this tutorial, https://coderwall.com/p/yz8cha (which is based on the RailsCast).
I successfully deployed my Rails application to a VPS. However, things are not working properly. I am trying to access my application log; on Heroku it is simply heroku logs, but where can I find this on my DigitalOcean VPS? Also, Heroku has specific instructions for storing pictures in S3. Do I need to configure the VPS to talk to Amazon S3?
Tail your production log at log/production.log
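For example, from the application's directory on the VPS:

tail -f log/production.log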
There are many tutorials via the Googles to answer your second question. Please attempt these before asking on SO.
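That said, the S3 side usually comes down to a CarrierWave + fog-aws initializer on the VPS; a minimal sketch, where the bucket name, region, and ENV variable names are placeholders:

# Gemfile: gem 'fog-aws'
# config/initializers/carrierwave.rb
CarrierWave.configure do |config|
  config.fog_provider    = 'fog/aws'
  config.fog_credentials = {
    provider:              'AWS',
    aws_access_key_id:     ENV['AWS_ACCESS_KEY_ID'],
    aws_secret_access_key: ENV['AWS_SECRET_ACCESS_KEY'],
    region:                'us-west-2'
  }
  config.fog_directory = 'your-bucket-name'
end

Your uploader classes then use storage :fog instead of storage :file.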