php artisan migrate
[PDOException]
SQLSTATE[28000] [1045] Access denied for user 'forge'@'localhost' (using password: NO)
.env file
APP_ENV=local
APP_DEBUG=true
APP_KEY=JYi8UKIIoaXlU9vrkNkrpu0Y7VpkaA3X
DB_HOST=localhost
DB_DATABASE=homestead
DB_USERNAME=root
DB_PASSWORD=homestead
CACHE_DRIVER=file
SESSION_DRIVER=file
QUEUE_DRIVER=sync
I didn't make any changes in database.php:
'driver' => 'mysql',
'host' => env('DB_HOST', 'localhost'),
'database' => env('DB_DATABASE', 'forge'),
'username' => env('DB_USERNAME', 'forge'),
'password' => env('DB_PASSWORD', ''),
'charset' => 'utf8',
'collation' => 'utf8_unicode_ci',
'prefix' => '',
'strict' => false,
When I run php artisan env it shows local, and if I change the environment to production it shows production.
I don't understand why MySQL won't work.
The front end also shows:
Whoops, looks like something went wrong.
The Laravel error page looks the way it does in a production environment, and there are no errors in the Apache error log.
I am stuck with this problem. I have changed file permissions and cleared the Laravel cache, but I can't get it working. I am using Vagrant; a fresh Laravel installation works fine.
Please help me.
You might have cached the config. Try running artisan config:clear or artisan config:cache on production.
artisan config:clear removes any cached configuration and makes Laravel read your .env again. This is obviously non-optimal on a production server, which is the reason the config:cache command exists.
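For reference, both are standard artisan commands, run from the project root:
php artisan config:clear
php artisan config:cache
config:clear deletes the cached configuration file so your .env is read again on the next request; config:cache rebuilds the cache from the current .env and config files, which is what you want once the values are correct.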
I have the following code in my controller:
def email_signup
  email_address = params[:email_address]
  response = RestClient.post("https://api:#{ENV['MAILGUN_API_KEY']}" \
    "@api.mailgun.net/v3/lists/#{ENV['MAILGUN_ALIAS']}/members",
    :subscribed => true,
    :address => email_address)
  redirect_to jobs_url, success: "Thanks for Signing Up!"
rescue RestClient::BadRequest => e
  redirect_to jobs_url, error: e
end
Works great in dev using the dotenv gem. After deploying to the server, I added the new environment variables to /etc/environment and deployed with Capistrano again to restart the app. Now the app gives me RestClient::Unauthorized (401 Unauthorized) when I try to call this action, which makes me think the environment variables are not set.
From the server console, if I cd into railsapp/current and run ruby -e 'p ENV["MAILGUN_ALIAS"]', I can see the variable print correctly.
I have also tried exporting the variables in .bashrc which doesn't change the behavior.
Also, just FYI, I am using RVM on the server to set the ruby version.
What else can I try here?
With Capistrano, have you set your .env in your shared folder?
config/deploy.rb:
set :linked_files, fetch(:linked_files, []).push(
  'config/database.yml', 'config/secrets.yml', '.env'
)
Then obviously make sure you have your .env file configured with your secrets in the shared folder on the server that runs your application.
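One more thing worth checking (an assumption on my part, since your Gemfile isn't shown): dotenv only loads .env automatically in environments where the gem is active, and a common setup restricts it to development and test. For the linked .env to be read in production, dotenv-rails has to load there too, roughly:
# Gemfile -- keep dotenv-rails in the default group (not only :development, :test)
# so the shared .env linked by Capistrano is also loaded in production.
gem 'dotenv-rails'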
I need to test the app from a mobile phone, and the file is checked into Git where other developers access it. How do I dynamically set the host name to the IP address of the server? I tried
webpacker.yml
development:
  dev_server:
    host: <%= Socket.ip_address_list.find { |ai| ai.ipv4? && !ai.ipv4_loopback? }.ip_address %>
It gave the error
Error: getaddrinfo ENOTFOUND <%= Socket.ip_address_list.find { |ai| ai.ipv4? && !ai.ipv4_loopback? }.ip_address %>
I tried renaming the file to webpackager.yml.erb but it gave the error
Webpack dev_server configuration not found in .../config/webpacker.yml.
I ran into the same issue; embedding Ruby within webpacker.yml doesn't appear to be possible.
However, in development mode you can override webpack-dev-server configuration values via environment variables. Example:
WEBPACKER_DEV_SERVER_HOST=example.com ./bin/webpack-dev-server
If you're using Foreman, add the command to the relevant Procfile, such as:
webpack: WEBPACKER_DEV_SERVER_HOST=0.0.0.0 ./bin/webpack-dev-server
web: rails s -b 0.0.0.0
More specifically, for your case, something like:
WEBPACKER_DEV_SERVER_HOST=$(ruby -e "require 'socket'; puts Socket.ip_address_list.detect { |intf| intf.ipv4_private? }.ip_address") ./bin/webpack-dev-server
References:
Dynamic port for webpacker-dev-server
Allow overriding dev server settings using env variables
Update README.md for ENV vars with dev server
Documentation
Code
I have a production server running our Rails app, and we have ENV variables in there, formatted correctly. They show up in rails c, but they are not recognized in the running instance of the app.
Running puma, nginx on an ubuntu box.
What needs to be restarted every time we change .bashrc? This is what we do:
1. Edit .bashrc
2. . .bashrc
3. Restart puma
4. Restart nginx
The variables are still not recognized, except in rails c. What are we missing?
edit:
I added the env variables to /etc/environment, based on suggestions from other posts saying that .bashrc only applies to specific shell sessions, which could have an effect. Supposedly /etc/environment is available to all users; mine is below. I am still having the same issues:
Show up fine in rails c
Show up fine when I echo them in shell
Do not show up in application
export G_DOMAIN=sandboxbaa3b9cca599ff0.mailgun.org
export G_EMAIL=mailgun@sandboxbaa3ba3806d5b499ff0.mailgun.org
export GEL=support@xxxxxx.com
PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games"
edit:
In the app I output G_DOMAIN and G_EMAIL in plain HTML (this works in development with dotenv, but does not work once pushed to the Ubuntu production server):
ENV TEST<BR>
G_DOMAIN: <%= ENV['G_DOMAIN'] %><br>
G_EMAIL:<%= ENV['G_EMAIL'] %>
However, the following env variables are available to use (set in both .bashrc and /etc/environment, the same way as the variables shown above), because our images work fine and upload to S3 with no issue in production.
production.rb
# Configuration for Amazon S3
:provider => 'AWS',
:aws_access_key_id => ENV['AWS_ACCESS_KEY_ID'],
:aws_secret_access_key => ENV['AWS_SECRET_ACCESS_KEY']
edit2: could this be related to this puma issue?
https://github.com/puma/puma/commit/a0ba9f1c8342c9a66c36f39e99aeaabf830b741c
I was having a problem like this too, and for me it only happened when I added a new environment variable.
Through this post and some more googling, I've come to understand that the restart command for Puma (via the capistrano-puma gem) might not see new environment variables, because the process forks itself when restarting rather than being killed and started again (this is part of keeping the servers responsive during a deploy).
The linked post suggests using a YAML file that is stored only on your production server (read: NOT in source control) rather than relying on your deploy user's environment variables. This is how you can achieve it:
Insert this code in your Rails app's config/application.rb file:
config.before_configuration do
  env_file = File.join(Rails.root, 'config', 'local_env.yml')
  if File.exist?(env_file)                      # File.exists? is deprecated
    YAML.load_file(env_file).each do |key, value|
      ENV[key.to_s] = value.to_s                # ENV values must be strings
    end
  end
end
Add this code to your Capistrano deploy script (config/deploy.rb)
desc "Link shared files"
task :symlink_config_files do
on roles(:app) do
symlinks = {
"#{shared_path}/config/local_env.yml" => "#{release_path}/config/local_env.yml"
}
execute symlinks.map{|from, to| "ln -nfs #{from} #{to}"}.join(" && ")
end
end
before 'deploy:assets:precompile', :symlink_config_files
Profit! With the code from step 1, your Rails application will load any keys you define in your server's Capistrano directory's ./shared/config/local_env.yml file into the ENV hash, and this happens before the other config files like secrets.yml or database.yml are loaded. The code in step 2 makes sure the file in ./shared/config/ on your server is symlinked into current/config/ (where the code from step 1 expects it) on every deploy.
Example local_env.yml:
SECRET_API_KEY_DONT_TELL: 12345abc6789
OTHER_SECRET_SHH: hello
Example secrets.yml:
production:
  secret_api_key: <%= ENV["SECRET_API_KEY_DONT_TELL"] %>
  other_secret: <%= ENV["OTHER_SECRET_SHH"] %>
This guarantees that your environment variables are found, by not really using environment variables. It seems like a workaround, but basically we are just using the ENV object as a convenient global variable.
(The Capistrano syntax might be a bit old, but what is here works for me in Rails 5. I did have to update a couple of things from the linked post to get it working, and I'm new to Capistrano, so edits are welcome.)
I've added the Redistogo nano add-on on Heroku and I've tested it out in the console successfully. However when my app tries to connect with Redis I get the following error:
Heroku Log file:
2011-10-12T08:19:50+00:00 app[web.1]: Errno::ECONNREFUSED (Connection refused - Unable to connect to Redis on 127.0.0.1:6379):
2011-10-12T08:19:50+00:00 app[web.1]: app/controllers/sessions_controller.rb:14:in `create'
Why is it trying to access Redis on localhost?
My Redis.rb in the config/initializers folder has this, which is almost certainly the problem.
# What's pasted below is verbatim. I don't know what to change the values to.
uri = URI.parse(ENV["REDISTOGO_URL"])
REDIS = Redis.new(:host => uri.host, :port => uri.port, :password => uri.password)
Are you using Resque? If so, you'll need to tell Resque which Redis to use.
Resque.redis = REDIS
If not, then the code you've posted is NOT setting your REDIS connection up.
Try this:
heroku config --long | grep REDIS
to see what your REDISTOGO_URL is. You might have set it accidentally.
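For completeness, here is a minimal sketch of an initializer that fails loudly when the variable is missing and wires up Resque as well. The fallback you are seeing (127.0.0.1:6379) is simply what Redis.new uses when it gets no host or port; the Resque.redis line is only needed if you use Resque:
# config/initializers/redis.rb -- a sketch, assuming the Redis To Go add-on
# exposes REDISTOGO_URL as in the question.
require "uri"

url = ENV["REDISTOGO_URL"] or raise "REDISTOGO_URL is not set"
uri = URI.parse(url)
REDIS = Redis.new(:host => uri.host, :port => uri.port, :password => uri.password)
Resque.redis = REDIS if defined?(Resque)  # Resque keeps its own connection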
I am using rails and capistrano with a staging and production server. I need to be able to copy the production database to the staging database when I deploy to staging. Is there an easy way to accomplish this?
I thought about doing this with mysql and something like:
before "deploy:migrate" do
run "mysqldump -u root #{application}_production > output.sql"
run "mysql -u root #{application}_staging < output.sql"
end
(I have not tested this btw, so not sure it would even work)
but it would be easier / better if there was another way.
Thanks for any help
This is a quick way to do it as well; it uses SSH remote commands and pipes to avoid temp files. Two details matter: the dump should be compressed on the remote side (so quote the remote command, otherwise gzip and gunzip both run locally and compress nothing over the wire), and the database has to be recreated after the drop or the import has nowhere to go:
mysql -e 'DROP DATABASE stag_dbname; CREATE DATABASE stag_dbname;'
ssh prod.foo.com "mysqldump -uprodsqluser -pprodsqlpw prod_dbname | gzip -c" | gunzip -c | mysql stag_dbname
Here's my deployment snippet:
namespace :deploy do
  task :clone_production_database, :except => { :no_release => true } do
    mysql_user = "username"
    mysql_password = "s3C_re"
    production_database = "production"
    preview_database = "preview"
    run "mysql -u#{mysql_user} -p#{mysql_password} --execute='CREATE DATABASE IF NOT EXISTS #{preview_database}';"
    run "mysqldump -u#{mysql_user} -p#{mysql_password} #{production_database} | mysql -u#{mysql_user} -p#{mysql_password} #{preview_database}"
  end
end
before "deploy:migrate", "deploy:clone_production_database"
I do this -- it is really useful. Here are links explaining how ...
http://c.kat.pe/post/capistrano-task-for-loading-production-data-into-your-development-database/
or
http://blog.robseaman.com/2008/12/2/production-data-to-development
or
https://web.archive.org/web/20160404204752/http://blog.robseaman.com/2008/12/2/production-data-to-development
mysql -e 'DROP DATABASE stag_dbname;'
ssh prod.foo.com mysqldump -u prodsqluser
This may not work; at least it does not work with PostgreSQL:
Your staging application has the database locked, so you cannot drop it.
And while some tables are locked, the import will still overwrite the remaining tables, leaving you with a corrupted database.
Working link for the post above:
https://web.archive.org/web/20160404204752/http://blog.robseaman.com/2008/12/2/production-data-to-development
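If you need the same trick for PostgreSQL, a rough sketch (using the stag_dbname/prod_dbname names from above; the pid column assumes PostgreSQL 9.2+) is to terminate the other connections first so the drop can actually succeed, then recreate and reload:
psql postgres -c "SELECT pg_terminate_backend(pid) FROM pg_stat_activity WHERE datname = 'stag_dbname' AND pid <> pg_backend_pid();"
dropdb stag_dbname && createdb stag_dbname
ssh prod.foo.com "pg_dump prod_dbname | gzip -c" | gunzip -c | psql stag_dbname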