I am experiencing a strange issue with passenger-docker (Ruby 2.3) connecting to AWS Redis:
bad URI(is not URI?): 'redis://redis.xxx.ng.0001.apse1.cache.amazonaws.com:6379/1' (URI::InvalidURIError)
/usr/local/rvm/rubies/ruby-2.3.8/lib/ruby/2.3.0/uri/rfc3986_parser.rb:67:in `split'
/usr/local/rvm/rubies/ruby-2.3.8/lib/ruby/2.3.0/uri/rfc3986_parser.rb:73:in `parse'
/usr/local/rvm/rubies/ruby-2.3.8/lib/ruby/2.3.0/uri/common.rb:227:in `parse'
/usr/local/rvm/gems/ruby-2.3.8/gems/sidekiq-5.2.7/lib/sidekiq/redis_connection.rb:97:in `log_info'
/usr/local/rvm/gems/ruby-2.3.8/gems/sidekiq-5.2.7/lib/sidekiq/redis_connection.rb:31:in `create'
/usr/local/rvm/gems/ruby-2.3.8/gems/sidekiq-5.2.7/lib/sidekiq.rb:126:in `redis_pool'
/usr/local/rvm/gems/ruby-2.3.8/gems/sidekiq-5.2.7/lib/sidekiq.rb:94:in `redis'
My sidekiq config:
# ENV['REDIS_URL']= redis://redis.xxx.ng.0001.apse1.cache.amazonaws.com:6379/1
Sidekiq.configure_server do |config|
  config.redis = { url: ENV['REDIS_URL'] }
end

Sidekiq.configure_client do |config|
  config.redis = { url: ENV['REDIS_URL'] }
end
However, if I run:
docker exec -it container_id bash
and then rails console, everything seems to work just fine.
I also tried this:
redis_url = ENV['REDIS_URL']
# The statement below parses successfully, so redis_url appears to be correct
uri = URI.parse(redis_url)
redis_options = {
  host: uri.host,
  port: 6379,
  db: uri.path[1..-1]
}
Sidekiq.configure_server do |config|
  config.redis = redis_options
end

Sidekiq.configure_client do |config|
  config.redis = redis_options
end
But it raised the exact same error. I can run the container locally against a local Redis just fine, so I suspect something is wrong with the ENV['REDIS_URL'] value inside the container.
Has anyone experienced this issue, or does anyone have any clues?
My environment is:
- passenger-docker with Ruby 2.3.8
- AWS ElastiCache Redis 5.0.5
- Sidekiq 5.2.7
After many hours, I realized that the REDIS_URL inside the container was pointing at a deleted Redis cluster in AWS. My first Redis launch defaulted to an r5 instance type (16 GB), which is very big and costly for testing, so I launched a new t2.small cluster and deleted the r5 one to save some money. However, I also had REDIS_URL set in the Passenger/Nginx config, and Nginx overrode the value I set in the ECS task with that stale one.
The error raised by the Ruby Redis client is not straightforward, and having two places to set the env var (the Nginx config and the ECS task) made debugging painful.
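In hindsight, a boot-time sanity check would have surfaced this much earlier. A minimal sketch (the initializer file name and log wording are my own invention, not from any library) that logs the Redis URL the app will actually use and fails fast if it cannot be parsed or reached:

# config/initializers/redis_check.rb (hypothetical file name)
require 'uri'
require 'redis'

url = ENV['REDIS_URL'].to_s
Rails.logger.info("Sidekiq will use REDIS_URL=#{url.inspect}")  # .inspect exposes stray quotes or whitespace

begin
  URI.parse(url)            # raises URI::InvalidURIError on malformed values
  Redis.new(url: url).ping  # raises if the host is unreachable (e.g. a deleted cluster)
rescue StandardError => e
  Rails.logger.error("REDIS_URL sanity check failed: #{e.class}: #{e.message}")
  raise
end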
I'm attempting to use Heroku's CI to run my Rails application's tests, but it runs into a problem when loading my structure.sql file.
-----> Preparing test database
Running: rake db:schema:load_if_ruby
db:schema:load_if_ruby completed (3.24s)
Running: rake db:structure:load_if_sql
psql:/app/db/structure.sql:28: ERROR: must be owner of extension plpgsql
rake aborted!
failed to execute:
psql -v ON_ERROR_STOP=1 -q -f /app/db/structure.sql d767koa0m1kne1
Please check the output above for any errors and make sure that `psql` is installed in your PATH and has proper permissions.
/app/vendor/bundle/ruby/2.4.0/gems/activerecord-5.1.1/lib/active_record/tasks/postgresql_database_tasks.rb:108:in `run_cmd'
/app/vendor/bundle/ruby/2.4.0/gems/activerecord-5.1.1/lib/active_record/tasks/postgresql_database_tasks.rb:80:in `structure_load'
/app/vendor/bundle/ruby/2.4.0/gems/activerecord-5.1.1/lib/active_record/tasks/database_tasks.rb:223:in `structure_load'
/app/vendor/bundle/ruby/2.4.0/gems/activerecord-5.1.1/lib/active_record/tasks/database_tasks.rb:236:in `load_schema'
/app/vendor/bundle/ruby/2.4.0/gems/activerecord-5.1.1/lib/active_record/tasks/database_tasks.rb:255:in `block in load_schema_current'
/app/vendor/bundle/ruby/2.4.0/gems/activerecord-5.1.1/lib/active_record/tasks/database_tasks.rb:304:in `block in each_current_configuration'
/app/vendor/bundle/ruby/2.4.0/gems/activerecord-5.1.1/lib/active_record/tasks/database_tasks.rb:303:in `each'
/app/vendor/bundle/ruby/2.4.0/gems/activerecord-5.1.1/lib/active_record/tasks/database_tasks.rb:303:in `each_current_configuration'
/app/vendor/bundle/ruby/2.4.0/gems/activerecord-5.1.1/lib/active_record/tasks/database_tasks.rb:254:in `load_schema_current'
/app/vendor/bundle/ruby/2.4.0/gems/activerecord-5.1.1/lib/active_record/railties/databases.rake:290:in `block (3 levels) in <top (required)>'
/app/vendor/bundle/ruby/2.4.0/gems/activerecord-5.1.1/lib/active_record/railties/databases.rake:294:in `block (3 levels) in <top (required)>'
/app/vendor/bundle/ruby/2.4.0/gems/rake-12.0.0/exe/rake:27:in `<top (required)>'
Tasks: TOP => db:structure:load
(See full trace by running task with --trace)
!
! Could not prepare database for test
!
The relevant line here is:
psql:/app/db/structure.sql:28: ERROR: must be owner of extension plpgsql
rake aborted!
structure.sql contains this line:
COMMENT ON EXTENSION plpgsql IS 'PL/pgSQL procedural language';
Any ideas on how to get this working on Heroku's CI?
I ended up overriding db:structure:dump to remove the COMMENT ON ... statements:
namespace :db do
  namespace :structure do
    task dump: [:environment, :load_config] do
      filename = ENV["SCHEMA"] || File.join(ActiveRecord::Tasks::DatabaseTasks.db_dir, "structure.sql")
      sql = File.read(filename).each_line.grep_v(/\ACOMMENT ON EXTENSION.+/).join
      File.write(filename, sql)
    end
  end
end
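(Defining a task with an existing name in Rake appends the new block to that task rather than replacing it, so this strip step runs after the built-in dump has written structure.sql.)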
Another workaround would be to add something like
if Rails.env.development?
  ActiveRecord::Tasks::DatabaseTasks.structure_load_flags = ["-v", "ON_ERROR_STOP=0"]
end
anywhere in the initialisation/tasks pipeline before db:structure:load is executed.
If Kyle's solution isn't enough and the errors aren't caused only by comments on extensions but by actual extension installations, you can still go the hard way and add this to an initializer:
# This is a temporary workaround for Rails issue #29049.
# It can safely be removed once PR #29110 is merged and released;
# then use IGNORE_PG_LOAD_ERRORS=1 instead.
module ActiveRecord
  module Tasks
    class PostgreSQLDatabaseTasks
      ON_ERROR_STOP_1 = 'ON_ERROR_STOP=0'.freeze
    end
  end
end
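Reassigning a constant that Rails has already defined prints an "already initialized constant" warning at boot. If that bothers you, a variant of the same patch (my own sketch, same effect) removes the constant before redefining it:

ActiveRecord::Tasks::PostgreSQLDatabaseTasks.send(:remove_const, :ON_ERROR_STOP_1)
ActiveRecord::Tasks::PostgreSQLDatabaseTasks.const_set(:ON_ERROR_STOP_1, 'ON_ERROR_STOP=0'.freeze)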
Note: This isn't specific to Heroku but a broader Rails 5.1 issue.
There are two solutions to this problem. The first, as previously noted, is disabling the ON_ERROR_STOP feature; it helps regardless of the environment. Custom rake task:
namespace :db do
  namespace :structure do
    # This little task is a workaround for a problem introduced in Rails 5. Most specifically, here:
    # https://github.com/rails/rails/blob/5-1-stable/activerecord/lib/active_record/tasks/postgresql_database_tasks.rb#L77
    # When psql encounters an error while loading the structure, it exits at once with error code 1.
    # And this happens on Heroku. It renders review apps and Heroku CI unusable if you use a structure instead of a schema.
    # Why?
    # Our db/structure.sql contains entries like CREATE EXTENSION and COMMENT ON EXTENSION.
    # A zillion extensions on Heroku are loaded in template0, so "our" db also has them, but because of that
    # only a superuser (or the owner of template0) has access to them - not our Heroku db user. For that reason
    # we can neither create an extension (it already exists, but that is not a problem, because the dump contains IF NOT EXISTS)
    # nor comment on it (and comments have no IF NOT EXISTS directive). And that's an error which could safely be ignored
    # but which stops the loading of the rest of the structure.
    desc "Disable exit-on-error behaviour when loading db structure in postgresql"
    task disable_errors: :environment do
      ActiveRecord::Tasks::DatabaseTasks.structure_load_flags = ["-v", "ON_ERROR_STOP=0"]
    end
  end
end
# And use it like so:
bin/rails db:structure:disable_errors db:structure:load
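If you'd rather not remember to chain the two tasks on every invocation, Rake can wire the workaround in as a prerequisite. A sketch (assuming it lives in the same lib/tasks file, after both task definitions have been loaded):

Rake::Task['db:structure:load'].enhance(['db:structure:disable_errors'])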
Another option, in my opinion superior if you only care about Heroku, is to use the PostgreSQL in-dyno plan (https://devcenter.heroku.com/articles/heroku-ci-in-dyno-databases), which is basically a DB instance sitting inside the dyno, so we have full access to it. The test suite should also be significantly faster because it uses a localhost connection rather than going over the wire. To enable it, change your app.json content to have entries like so:
{
  "environments": {
    "test": {
      "addons": [
        "heroku-postgresql:in-dyno"
      ]
    }
  }
}
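As far as I can tell, this works because the in-dyno database lives inside the test dyno itself, so the connecting role effectively owns everything; the CREATE EXTENSION and COMMENT ON EXTENSION statements in structure.sql then load without the ownership error.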
I am currently running Sidekiq 4.1.2. I've never managed to run more than a handful of jobs concurrently. Recently it's been looking like I've run into an issue described in Sidekiq's Troubleshooting wiki called "Too many connections to MongoDB". Apparently, Mongoid 3 doesn't properly disconnect workers. However, I am using Mongoid 5.1.3.
My issue surfaces when a job, while a few other jobs are running, tries to hit the database with a query:
Timeout::Error: Timed out attempting to dequeue connection after 30 sec.
/home/me/applications/myapp/shared/bundle/ruby/2.2.0/gems/mongo-2.2.5/lib/mongo/server/connection_pool/queue.rb:190:in `wait_for_next!'
/home/me/applications/myapp/shared/bundle/ruby/2.2.0/gems/mongo-2.2.5/lib/mongo/server/connection_pool/queue.rb:176:in `block in dequeue_connection'
/home/me/applications/myapp/shared/bundle/ruby/2.2.0/gems/mongo-2.2.5/lib/mongo/server/connection_pool/queue.rb:172:in `loop'
/home/me/applications/myapp/shared/bundle/ruby/2.2.0/gems/mongo-2.2.5/lib/mongo/server/connection_pool/queue.rb:172:in `dequeue_connection'
/home/me/applications/myapp/shared/bundle/ruby/2.2.0/gems/mongo-2.2.5/lib/mongo/server/connection_pool/queue.rb:62:in `block in dequeue'
/home/me/applications/myapp/shared/bundle/ruby/2.2.0/gems/mongo-2.2.5/lib/mongo/server/connection_pool/queue.rb:61:in `synchronize'
/home/me/applications/myapp/shared/bundle/ruby/2.2.0/gems/mongo-2.2.5/lib/mongo/server/connection_pool/queue.rb:61:in `dequeue'
/home/me/applications/myapp/shared/bundle/ruby/2.2.0/gems/mongo-2.2.5/lib/mongo/server/connection_pool.rb:51:in `checkout'
/home/me/applications/myapp/shared/bundle/ruby/2.2.0/gems/mongo-2.2.5/lib/mongo/server/connection_pool.rb:107:in `with_connection'
/home/me/applications/myapp/shared/bundle/ruby/2.2.0/gems/mongo-2.2.5/lib/mongo/server/context.rb:63:in `with_connection'
/home/me/applications/myapp/shared/bundle/ruby/2.2.0/gems/mongo-2.2.5/lib/mongo/operation/executable.rb:34:in `execute'
/home/me/applications/myapp/shared/bundle/ruby/2.2.0/gems/mongo-2.2.5/lib/mongo/collection/view/iterable.rb:80:in `send_initial_query'
/home/me/applications/myapp/shared/bundle/ruby/2.2.0/gems/mongo-2.2.5/lib/mongo/collection/view/iterable.rb:41:in `block in each'
/home/me/applications/myapp/shared/bundle/ruby/2.2.0/gems/mongo-2.2.5/lib/mongo/retryable.rb:51:in `call'
/home/me/applications/myapp/shared/bundle/ruby/2.2.0/gems/mongo-2.2.5/lib/mongo/retryable.rb:51:in `read_with_retry'
/home/me/applications/myapp/shared/bundle/ruby/2.2.0/gems/mongo-2.2.5/lib/mongo/collection/view/iterable.rb:39:in `each'
/home/me/applications/myapp/shared/bundle/ruby/2.2.0/gems/mongoid-5.1.3/lib/mongoid/query_cache.rb:207:in `each'
/home/me/applications/myapp/shared/bundle/ruby/2.2.0/gems/mongoid-5.1.3/lib/mongoid/contextual/mongo.rb:121:in `each'
/home/me/applications/myapp/shared/bundle/ruby/2.2.0/gems/mongoid-5.1.3/lib/mongoid/contextual/mongo.rb:295:in `map'
/home/me/applications/myapp/shared/bundle/ruby/2.2.0/gems/mongoid-5.1.3/lib/mongoid/contextual/mongo.rb:295:in `map'
/home/me/applications/myapp/shared/bundle/ruby/2.2.0/gems/mongoid-5.1.3/lib/mongoid/contextual.rb:20:in `map'
/home/me/applications/myapp/releases/20160618143407/app/jobs/myjob.rb:8:in `block in perform'
After one job fails, the other jobs fail soon after. This most often happens after a few handfuls of jobs have finished successfully, which could indicate that those jobs don't disconnect from the database. Looking at top, I can't see that the mongo process's CPU load is too high.
Around the time this started to occur, I noticed that my Sidetiq 0.7.0 recurring jobs were no longer scheduled properly. One job stopped being queued entirely, and others are only queued once after a restart.
According to my Sidekiq web interface I have 1 queue, called default, with 25 threads. At most 12-15 of them are busy at the same time.
Any idea how to troubleshoot this issue?
The default connection pool size is 5. Bumping max_pool_size up to e.g. 25 will allow more connections to your DB:
production:
  clients:
    default:
      options:
        max_pool_size: 25
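As a rule of thumb, size the pool to at least Sidekiq's concurrency (25 threads in your case), so that every worker thread can hold a connection at the same time.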
Try adding the wait_queue_timeout attribute to the production config in mongoid.yml:
production:
  clients:
    default:
      uri: mongodb://aaaaa.com:27017/mongo
      options:
        connect_timeout: 30
        wait_queue_timeout: 30
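Note that wait_queue_timeout only controls how long a thread waits for a free connection before raising the Timeout::Error above; with 25 Sidekiq threads sharing a 5-connection pool, a longer timeout merely postpones the failure, so pair it with a bigger max_pool_size.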
I'm using Sidekiq with Rails and Heroku to execute asynchronous workers, but I'm not able to get it working. I've tried launching the workers manually (executing the perform method) and they work; however, when I schedule them and check the Sidekiq web page, they appear as enqueued and are never processed.
I'm using a Heroku worker and Redis To Go, and this is the config:
sidekiq.rb
require 'sidekiq'
Sidekiq.configure_client do |config|
  config.redis = { :size => 1 }
end

Sidekiq.configure_server do |config|
  config.redis = { :size => 2 }
end
sidekiq.yml
:concurrency: 2
Worker launch
Sidekiq::Client.enqueue(MyWorker)
Sidekiq and Redis seem to be properly connected because I get the following log on Heroku:
2015-04-27T21:46:51Z 3 TID-ovbydc96k INFO: Sidekiq client with redis options {:size=>1, :url=>"redis://redistogo:REDACTED#soapfish.redistogo.com:xxxx/"}
I'm missing something but I don't know what. Any help would be appreciated. Thanks!
Have you run this command so that Sidekiq knows to use the Redis-To-Go URL?
heroku config:set REDIS_PROVIDER=REDISTOGO_URL
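For context, Sidekiq reads ENV['REDIS_PROVIDER'] as the name of another env var that holds the Redis URL, and otherwise falls back to ENV['REDIS_URL']. If you prefer not to add the extra config var, you can also pass the URL explicitly; a sketch (REDISTOGO_URL is the config var the Redis To Go add-on sets on Heroku):

require 'sidekiq'

Sidekiq.configure_client do |config|
  config.redis = { :url => ENV['REDISTOGO_URL'], :size => 1 }
end

Sidekiq.configure_server do |config|
  config.redis = { :url => ENV['REDISTOGO_URL'], :size => 2 }
end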
I don't know how (or where) to grant read and write permissions to the user from AWS so that users can post pictures on the sample_app in the production environment. This is the final task in Chapter 11; it isn't covered by the tutorial and I can't find a solution anywhere.
This is my carrier_wave.rb file:
if Rails.env.production?
  CarrierWave.configure do |config|
    config.fog_credentials = {
      # Configuration for Amazon S3
      :provider              => 'AWS',
      :aws_access_key_id     => ENV['lalala'],
      :aws_secret_access_key => ENV['oloalle']
    }
    config.fog_directory = ENV['name of bucket']
  end
end
This is the procedure from the tutorial:
1) create AWS IAM User and record access and secret key - done
2) create S3 bucket - done
3) grant read and write permission to the user created in the previous step - how???
4) I then run these three commands:
$ heroku config:set S3_ACCESS_KEY=lalala
$ heroku config:set S3_SECRET_KEY=oloalle
$ heroku config:set S3_BUCKET=name of bucket
5) push to git and heroku - done
6) heroku pg:reset DATABASE - done
7) heroku run rake db:migrate - and here I get this message:
Running `rake db:migrate` attached to terminal... up, run.7906
rake aborted!
ArgumentError: Missing required arguments: aws_access_key_id, aws_secret_access_key
/app/vendor/bundle/ruby/2.0.0/gems/fog-core-1.28.0/lib/fog/core/service.rb:244:in `validate_options'
/app/vendor/bundle/ruby/2.0.0/gems/fog-core-1.28.0/lib/fog/core/service.rb:268:in `handle_settings'
/app/vendor/bundle/ruby/2.0.0/gems/fog-core-1.28.0/lib/fog/core/service.rb:98:in `new'
/app/vendor/bundle/ruby/2.0.0/gems/fog-core-1.28.0/lib/fog/storage.rb:25:in `new'
/app/vendor/bundle/ruby/2.0.0/gems/carrierwave-0.10.0/lib/carrierwave/uploader/configuration.rb:83:in `eager_load_fog'
/app/vendor/bundle/ruby/2.0.0/gems/carrierwave-0.10.0/lib/carrierwave/uploader/configuration.rb:96:in `fog_credentials='
/app/config/initializers/carrier_wave.rb:3:in `block in <top (required)>'
/app/vendor/bundle/ruby/2.0.0/gems/carrierwave-0.10.0/lib/carrierwave/uploader/configuration.rb:118:in `configure'
/app/vendor/bundle/ruby/2.0.0/gems/carrierwave-0.10.0/lib/carrierwave.rb:14:in `configure'
/app/config/initializers/carrier_wave.rb:2:in `<top (required)>'
/app/vendor/bundle/ruby/2.0.0/gems/activesupport-4.2.0/lib/active_support/dependencies.rb:268:in `load'
/app/vendor/bundle/ruby/2.0.0/gems/activesupport-4.2.0/lib/active_support/dependencies.rb:268:in `block in load'
/app/vendor/bundle/ruby/2.0.0/gems/activesupport-4.2.0/lib/active_support/dependencies.rb:240:in `load_dependency'
/app/vendor/bundle/ruby/2.0.0/gems/activesupport-4.2.0/lib/active_support/dependencies.rb:268:in `load'
/app/vendor/bundle/ruby/2.0.0/gems/railties-4.2.0/lib/rails/engine.rb:652:in `block in load_config_initializer'
/app/vendor/bundle/ruby/2.0.0/gems/activesupport-4.2.0/lib/active_support/notifications.rb:166:in `instrument'
/app/vendor/bundle/ruby/2.0.0/gems/railties-4.2.0/lib/rails/engine.rb:651:in `load_config_initializer'
/app/vendor/bundle/ruby/2.0.0/gems/railties-4.2.0/lib/rails/engine.rb:616:in `block (2 levels) in <class:Engine>'
/app/vendor/bundle/ruby/2.0.0/gems/railties-4.2.0/lib/rails/engine.rb:615:in `each'
/app/vendor/bundle/ruby/2.0.0/gems/railties-4.2.0/lib/rails/engine.rb:615:in `block in <class:Engine>'
/app/vendor/bundle/ruby/2.0.0/gems/railties-4.2.0/lib/rails/initializable.rb:30:in `instance_exec'
/app/vendor/bundle/ruby/2.0.0/gems/railties-4.2.0/lib/rails/initializable.rb:30:in `run'
/app/vendor/bundle/ruby/2.0.0/gems/railties-4.2.0/lib/rails/initializable.rb:55:in `block in run_initializers'
/app/vendor/bundle/ruby/2.0.0/gems/railties-4.2.0/lib/rails/initializable.rb:44:in `each'
/app/vendor/bundle/ruby/2.0.0/gems/railties-4.2.0/lib/rails/initializable.rb:44:in `tsort_each_child'
/app/vendor/bundle/ruby/2.0.0/gems/railties-4.2.0/lib/rails/initializable.rb:54:in `run_initializers'
/app/vendor/bundle/ruby/2.0.0/gems/railties-4.2.0/lib/rails/application.rb:352:in `initialize!'
/app/config/environment.rb:5:in `<top (required)>'
/app/vendor/bundle/ruby/2.0.0/gems/railties-4.2.0/lib/rails/application.rb:328:in `require'
/app/vendor/bundle/ruby/2.0.0/gems/railties-4.2.0/lib/rails/application.rb:328:in `require_environment!'
/app/vendor/bundle/ruby/2.0.0/gems/railties-4.2.0/lib/rails/application.rb:443:in `block in run_tasks_blocks'
Tasks: TOP => db:migrate => environment
(See full trace by running task with --trace)
Here is my tutorial that I created to pick up where Michael Hartl left off at the end of Ruby on Rails Tutorial (3rd Ed.) Chapter 11. It should answer your question and more.
Getting the railstutorial.org Sample App to work between Heroku and AWS was a huge pain in the ass. But I did it. If you found this tutorial, that means you're probably encountering an error you can't get past. That's fine. I had a few of them.
2020 Note: It's possible that everything here referencing a Region for S3 is no longer needed, or perhaps was never needed. When I originally got this all to work properly it was after adding the region information. However, S3 Buckets all share a global namespace. So if anybody is still reading this, try it all without the region stuff first, and leave a comment about whether it works or not. Anyway, back to the tutorial...
The first thing you need to do is go back over the code that Hartl provided. Make sure you typed it (or copy/pasted it) in exactly as shown. Out of all the code in this section, there is only one small addition you might need to make: the "region" environment variable. This is needed if you create a bucket that is not in the default US area. More on this later. Here is the code for /config/initializers/carrier_wave.rb:
if Rails.env.production?
  CarrierWave.configure do |config|
    config.fog_credentials = {
      # Configuration for Amazon S3
      :provider              => 'AWS',
      :aws_access_key_id     => ENV['S3_ACCESS_KEY'],
      :aws_secret_access_key => ENV['S3_SECRET_KEY'],
      :region                => ENV['S3_REGION']
    }
    config.fog_directory = ENV['S3_BUCKET']
  end
end
That line :region => ENV['S3_REGION'] is a problem for a lot of people. More on that later.
You should be using that block of code exactly as shown. Do NOT put your actual keys in there. We'll send them to Heroku separately.
If you had to add that line of code, don't forget to commit it to git and push it to Heroku.
Now let's move on to your AWS account and security.
First of all, create your AWS account. For the most part, it is like signing up for any web site. Make a nice long password and store it someplace secure, like an encrypted password manager. When you make your account, you will be given your first set of AWS keys. You will not be using those in this tutorial, but you might need them at some point in the future so save those somewhere safe as well.
Go to the S3 section and make a bucket. It has to have a unique name, so I usually just put the date on the end and that does it. For example, you might name it "my-sample-app-bucket-20160126". Once you have created your bucket, click on the name, then click on Properties. It's important for you to know what "Region" your bucket is in. Find it, and make a note of it. You'll use it later.
Your main account probably has full permissions to everything, so let's not use that for transmitting random data between two web services. This could cost you a lot of money if it got out. We'll make a limited user instead. Make a new User in the IAM section. I named it "fog", because that's the cloud service software that handles the sending and receiving. When you create it, you will have the option of displaying and/or downloading the keys associated with the new user. It's important you keep these in a safe and secure place. They do NOT go into your code, because that will probably end up in a repository where other people can see it. Also, don't give this new user a password, since it will not be logging into the AWS dashboard.
Make a new Group. I called mine "s3railsbucket". This is where the permissions will be assigned. Add "fog" to this group.
Go to the Policies section. Click "Create Policy", then select "Create Your Own Policy". Give it a name that starts with "Allow" so it will show up near the top of the list of policies. It's a huge list. Here's what I did:
Policy Name: AllowFullAccessToMySampleAppBucket20160126
Description: Allows remote write/delete access to the S3 bucket named my-sample-app-bucket-20160126.
Policy Document:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "s3:*",
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::my-sample-app-bucket-20160126",
        "arn:aws:s3:::my-sample-app-bucket-20160126/*"
      ]
    }
  ]
}
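Both Resource entries matter: the bare bucket ARN covers bucket-level actions such as listing, while the "/*" form covers the objects inside the bucket.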
Go back to the Group section, select the group you made, then add your new policy to the group.
That's it for AWS configuration. I didn't need to make a policy to allow "fog" to list the contents of the bucket, even though most tutorials I tried said that was necessary. I think it's only necessary when you want a user that can log in through the dashboard.
Now for the Heroku configuration. This stuff gets entered at your command prompt, just like 'heroku run rake db:migrate' and such. This is where you enter the actual Access Key and Secret Key you got from the "fog" user you created earlier.
$ heroku config:set S3_ACCESS_KEY=THERANDOMKEYYOUGOT
$ heroku config:set S3_SECRET_KEY=an0tHeRstRing0frAnDomjUnK
$ heroku config:set S3_REGION=us-west-2
$ heroku config:set S3_BUCKET=my-sample-app-bucket-20160126
Look again at that last one. Remember when you looked at the Properties of your S3 bucket? This is where you enter the code associated with your region. If your bucket is not in Oregon, you will have to change us-west-2 to your actual region code. This link worked when this tutorial was written:
http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region
If that doesn't work, Google "AWS S3 region codes".
After doing all this and double-checking for mistakes in the code, I got Heroku to work with AWS for storage of pictures!
In Services -> IAM, click on "1 User(s)" underneath IAM Resources. Select the user you want to have the permission. In the user's profile, click on Attach User Policy. Click on Select for Amazon S3 Full Access and finally Apply Policy.
For others in the future: this answer helped me a lot.
On Heroku, go to your application's Settings and hit Reveal Config Vars. Click on Edit on the right side and enter your secrets there:
S3_BUCKET: name of your bucket goes here
S3_ACCESS_KEY: xxxxx
S3_SECRET_KEY: xxxx
config/initializers/carrierwave.rb (or wherever you're entering your secrets) should have:
CarrierWave.configure do |config|
  config.root = Rails.root.join('tmp') # adding these...
  config.cache_dir = 'carrierwave'     # ...two lines
  config.fog_credentials = {
    :provider              => 'AWS',                        # required
    :aws_access_key_id     => ENV['S3_ACCESS_KEY'],         # required (fog expects the aws_ prefix on these keys)
    :aws_secret_access_key => ENV['S3_SECRET_KEY'],         # required
    :region                => 'eu-west-1',                  # optional, defaults to 'us-east-1'
    :host                  => 's3.example.com',             # optional, defaults to nil
    :endpoint              => 'https://s3.example.com:8080' # optional, defaults to nil
  }
  config.fog_directory  = ENV['S3_BUCKET'] # required; must match the config var name exactly (ENV lookups are case-sensitive)
  config.fog_public     = false            # optional, defaults to true
  config.fog_attributes = {'Cache-Control' => 'max-age=315576000'} # optional, defaults to {}
end
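One design note: with config.fog_public = false, CarrierWave serves uploads via signed, expiring URLs rather than plain public ones; leave it at the default of true if you want permanently public picture URLs.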
I found the selected correct answer above did not work for me. This is what eventually worked for me after much trial and error.
I followed the first steps to manually enter the secret information as listed above...
On Heroku, go to your application's Settings and hit Reveal Config Vars. Click on Edit on the right side and enter your secrets there:
S3_BUCKET: name of your bucket goes here
S3_ACCESS_KEY: xxxxx
S3_SECRET_KEY: xxxx
However, a slight difference in the carrier_wave file seemed to work.
Note the encompassing if Rails.env.production? line and end.
carrier_wave.rb
if Rails.env.production?
  CarrierWave.configure do |config|
    config.root = Rails.root.join('tmp') # adding these...
    config.cache_dir = 'carrierwave'     # ...two lines
    config.fog_credentials = {
      :provider              => 'AWS',                        # required
      :aws_access_key_id     => ENV['S3_ACCESS_KEY'],
      :aws_secret_access_key => ENV['S3_SECRET_KEY'],
      :region                => 'eu-west-2',                  # optional, defaults to 'us-east-1'
      :host                  => 's3.example.com',             # optional, defaults to nil
      :endpoint              => 'https://s3.example.com:8080' # optional, defaults to nil
    }
    config.fog_directory  = ENV['S3_BUCKET'] # required
    config.fog_public     = false            # optional, defaults to true
    config.fog_attributes = {'Cache-Control' => 'max-age=315576000'} # optional, defaults to {}
  end
end
Not sure if that is what the problem was or not.
After making that change I finished the chapter according to Michael Hartl's instructions.
We’re now ready to commit the changes on our topic branch and merge back to master:
$ bundle exec rake test
$ git add -A
$ git commit -m "Add user microposts"
$ git checkout master
$ git merge user-microposts
$ git push
Then we deploy, reset the database, and reseed the sample data:
$ git push heroku
$ heroku pg:reset DATABASE
$ heroku run rake db:migrate
$ heroku run rake db:seed