I am trying to optimize my application, and I would like to deploy my Rails application to several different machines. Unfortunately I can't work out how to do it.
role :web, "ip1", "ip2"
role :app, "ip1", "ip2"
role :db, "db_ip", primary: true
set :application, "Name"
set :user, "root"
set :port, 22
set :deploy_to, "/home/#{user}/apps/#{application}"
set :ssh_options, {:forward_agent => true}
ssh_options[:forward_agent] = true
ssh_options[:keys] = %w(~/.ssh/id_key)
This is my configuration. I have two Unicorn servers and one DB server. When I run cap deploy:cold it asks me for a password, but I can't tell which machine's password I should enter. None of the servers' passwords work. I receive
(Net::SSH::AuthenticationFailed: root)
Can someone explain how my configuration should look so that I can deploy to all of the machines?
This should just work, but you should set up your SSH connections with keys so that you do not have to enter a password.
(Note: this answer is for Capistrano 3; it was written before seeing that the version in use is 2.)
Try setting your global options like this:
set :ssh_options, {
  keys: %w(/home/your_user/.ssh/id_key),
  forward_agent: true,
}
Also, is your key really called id_key? (id_rsa is more common.)
If you need to set the options per server, you can do it like this:
server 'ip1',
  user: 'root',
  roles: %w{web app},
  ssh_options: {
    user: 'foobar', # overrides user setting above
    keys: %w(/home/user_name/.ssh/id_rsa),
    forward_agent: false,
  }
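If the authentication failure persists, it can help to test the key outside of Capistrano. A minimal Net::SSH sketch (ip1 and the key path are placeholders; adjust them to your own values):

require 'net/ssh'

# Raises Net::SSH::AuthenticationFailed if the key is not accepted,
# which is the same error Capistrano reports.
Net::SSH.start('ip1', 'root',
  keys: [File.expand_path('~/.ssh/id_key')],
  auth_methods: %w(publickey)) do |ssh|
  puts ssh.exec!('hostname')
end

Run this once per server; whichever host raises is the one that is still expecting a password.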
I'm working with the AWS Ruby SDK and trying to override the global config for a specific client.
When I load the application, I set the global config for S3 like this:
Aws.config.update(
  endpoint: '****',
  access_key_id: '****',
  secret_access_key: '****',
  force_path_style: '*****',
  region: '****'
)
At some point in the application I want to use a different AWS SDK client and make those calls with a different set of config options. I create the client like this:
client = Aws::SQS::Client.new(
  credentials: Aws::Credentials.new(
    '****',
    '****'
  ),
  region: '****'
)
When I make a call using this new client I get errors because it uses the new config options as well as the ones defined in the global config. For example, I get an error for having force_path_style set because SQS doesn't allow that config option.
Is there a way to override all the global config options for a specific call?
Aws.config supports nested service-specific options, so you can set global options specifically for S3 without affecting other service clients (like SQS).
This means you could change your global config to nest force_path_style under a new s3 hash, like this:
Aws.config.update(
  endpoint: '****',
  access_key_id: '****',
  secret_access_key: '****',
  s3: { force_path_style: '*****' },
  region: '****'
)
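With the S3-only option nested under s3, clients for other services built from the same global config no longer receive it. A minimal sketch of the behaviour (the credentials, region, and force_path_style values are placeholders, not taken from the question):

require 'aws-sdk' # or the individual aws-sdk-s3 / aws-sdk-sqs gems

Aws.config.update(
  access_key_id: '****',
  secret_access_key: '****',
  region: '****',
  s3: { force_path_style: true } # picked up only by Aws::S3::Client
)

Aws::S3::Client.new.config.force_path_style # => true
Aws::SQS::Client.new                        # constructs without a force_path_style error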
I am trying to create a user and password for Jenkins using JCasC. I can set up Jenkins, however when I go to the GUI on localhost I do not see any users. Here is my code:
jenkins:
  systemMessage: "Jenkins configured automatically by Jenkins Configuration as Code plugin\n\n"
  disabledAdministrativeMonitors:
    - "jenkins.diagnostics.ControllerExecutorsNoAgents"
credentials:
  system:
    domainCredentials:
      - credentials:
          - usernamePassword:
              id: "admin-cred"
              username: "jenkins-admin"
              password: "butler"
              scope: GLOBAL
I believe I have all the necessary plugins installed, but clearly something is missing. Any help would be appreciated.
The way I got users to pop up is by setting up the (local) security realm, rather than credentials, like so:
jenkins:
  securityRealm:
    local:
      users:
        - id: jenkins-admin
          password: butler
I'm finding this a great resource to get ideas from: https://github.com/oleg-nenashev/demo-jenkins-config-as-code
I've used a local security realm with sign-up disabled to add the user "jenkins-admin".
jenkins:
  . . .
  securityRealm:
    local:
      allowsSignup: false
      users:
        - id: jenkins-admin
          password: butler
You can refer to the links below to learn more about JCasC:
https://www.jenkins.io/projects/jcasc/
https://github.com/jenkinsci/configuration-as-code-plugin
I'm deploying a Rails app with Capistrano, to an Ubuntu server (EC2).
When I deploy, with --trace, everything appears to go fine.
When I look at the revisions log on the server, it shows that the latest commit hash was used for the most recent deploy. However, when I go into that latest release directory (yes, I confirmed that a new release directory was created and that I'm in that one), it doesn't have the most recent commits.
If I do a 'git pull origin master' from within the new release directory on the server, it of course pulls the latest commits.
Any idea why the git pull wouldn't be happening on the Capistrano deploy?
EDIT:
Here's the deploy.rb file:
lock "~> 3.14.0"
set :pty, true
set :application, "123abc"
set :repo_url, "git#github.com:123/abc.git "
# ask :branch, `git rev-parse --abbrev-ref HEAD`.chomp
set :branch, "master"
set :rbenv_ruby, File.read('.ruby-version').strip
append :linked_files, "config/secrets.yml"
append :linked_dirs, "log", "tmp/pids", "tmp/cache", "tmp/sockets"
namespace :deploy do
before :compile_assets, :force_cleanup_assets do
on release_roles(fetch(:assets_roles)) do
within release_path do
with rails_env: fetch(:rails_env) do
execute :rake, 'assets:clobber'
end
end
end
end
app_service_name = "#{fetch(:application)}-#{fetch(:stage)}"
services = ["#{app_service_name}-workers"]
desc "Restart application"
task :restart do
on roles(:app), in: :sequence, wait: 5 do
execute :sudo, :systemctl, :stop, app_service_name
sleep 1
execute :sudo, :systemctl, :start, app_service_name
# execute :sudo, :systemctl, :restart, app_service_name
end
end
desc "Restart Workers"
task :restart_services do
on roles(:app), in: :sequence, wait: 5 do
services.each { |service| execute "sudo systemctl restart #{service}" }
end
end
desc "Start Workers"
task :start_services do
on roles(:app), in: :sequence, wait: 5 do
services.each { |service| execute "sudo systemctl start #{service}" }
end
end
desc "Stop Workers"
task :stop_services do
on roles(:app), in: :sequence, wait: 5 do
services.each { |service| execute "sudo systemctl stop #{service}" }
end
end
end
after "deploy:publishing", "deploy:restart"
after "deploy:publishing", "deploy:restart_services"
Is your organization using a proxy with a CA certificate?
Are you pulling from GitHub over SSL, or from another git remote with a self-signed certificate?
Please try to su to the user used for the deployment and attempt a git pull, to see if it works.
Are you using tokens, credentials, or certificates to authenticate?
I would run tcpdump to see what's going on, and whether it actually attempts to connect to GitHub.
Does your deploy work with a full clone or with a pull? Can you deploy using a full clone?
Are you using SSH or HTTPS, and default or special ports?
Can you publish the trace, or at least check that you don't have something like:
Connection refused - connect(2)
I guess that the trailing space after your repo_url is not in your actual file.
Cheers
This could happen because of ownership/permissions inside <deploy_path>/repo, for example if you once ran a deploy or a git pull on the server as a different user.
Make sure that you have the correct username in your deploy/<env>.rb configs and chown -R that_user:that_user <deploy_path>/repo (and possibly other directories as well).
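Independently of the ownership fix, you can confirm which commit Capistrano actually checked out by reading the REVISION file it writes into each release directory. A rough sketch of a helper task (the debug:revision name and the file path are made up):

# lib/capistrano/tasks/debug.rake (hypothetical file name)
namespace :debug do
  desc "Show the commit Capistrano recorded for the current release"
  task :revision do
    on roles(:app) do
      info capture(:cat, current_path.join("REVISION"))
    end
  end
end

Run it with cap <stage> debug:revision and compare the SHA against git rev-parse origin/master locally.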
I am trying to deploy my Rails application (Ruby 2.1.2 and Rails 4.1.4) through Capistrano from a Mac. I have SSH keys set up on the server, but I keep getting an authentication error whenever I try to deploy. The error is:
SSHKit::Runner::ExecuteError: Exception while executing on host xxx.xxx: Authentication failed for user deploy@xxx.xxx
followed by:
Net::SSH::AuthenticationFailed: Authentication failed for user deploy@xxx.xx
This is my staging.rb:
server "xxx.xx.xxx", user: "deploy", roles: %w{web app db}
set :ssh_options, {
  user: "root",
  forward_agent: false,
  keys: '~/.ssh/id_rsa',
  auth_methods: %w(publickey password)
}
set :branch, "master"
set :rails_env, "staging"
I am able to log in to the server via the terminal using ssh root@xxx.xx but cannot log in with Capistrano. Any help or advice will be appreciated.
First of all, you are using two different users in one config (deploy on the server line and root in ssh_options). Choose one and edit your staging.rb accordingly.
Also, I am not sure that using the public key is the right approach. Try adding the private key for the deploy user. Then, if you are able to log in as deploy with
ssh -i id_rsa deploy@xxx.xx.xx.xx
try updating the net-ssh gem to version 3.0.1. Then you can write your config like this:
set :ssh_options, {
  user: "deploy",
  keys: ["~/.ssh/id_rsa"]
}
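If you manage Capistrano through Bundler, one way to apply that net-ssh update is to pin the gem in your Gemfile and bundle again (a sketch; the version constraint simply follows the suggestion above):

# Gemfile
gem 'net-ssh', '>= 3.0.1'

Then run bundle update net-ssh and retry the deploy.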
I faced the same issue with my application https://www.wiki11.com.
For those who are getting the error
Net::SSH::AuthenticationFailed: Authentication failed for user deploy@XXX.XX.XX.XXX
here is the solution.
First of all, SSH to your server and run
eval `ssh-agent`
and then
ssh-add ~/.ssh/id_rsa
and now change
set :ssh_options, { forward_agent: true, user: fetch(:user), keys: %w(~/.ssh/id_rsa.pub) }
#...
to
set :ssh_options, { forward_agent: true, user: fetch(:user), keys: %w(~/.ssh/id_rsa) }
#...
I just removed .pub from id_rsa.pub, so that keys points at the private key rather than the public one.
And then run
cap production deploy:initial
It should work now.
The database, username, and password combination definitely works. The following configuration for Grafana doesn't, though.
datasources: {
  influxdb: {
    type: 'influxdb',
    url: "http://XXX.XXX.XXX.XX:8086/db/dbname",
    username: 'username',
    password: 'password',
    default: true
  },
},
I've tried removing the default parameter, changing influxdb to influx, and appending /series to the url, all to no avail. Has anyone gotten this to work?
InfluxDB v0.7.3 (git: 216a3eb)
Grafana 1.6.0 (2014-06-16)
I'm using the configuration below and it works. Try creating the grafana database in your InfluxDB and adding the grafana datasource configuration.
...
datasources: {
  influxdb: {
    type: 'influxdb',
    url: "http://localhost:8086/db/test",
    username: 'root',
    password: 'XXXX'
  },
  grafana: {
    type: 'influxdb',
    url: "http://localhost:8086/db/grafana",
    username: 'root',
    password: 'XXXX',
    grafanaDB: true
  }
},
...
I had the same issue using the config shown by annelorayne above. It turned out that Grafana was not able to connect to localhost:8086, but it could connect to the actual IP address of the server (i.e. 10.0.1.100:8086).
This was true even though 'telnet localhost 8086' worked.
I changed the Grafana config to this, and it worked:
datasources: {
  influxdb: {
    type: 'influxdb',
    url: "http://10.0.1.100:8086/db/collectd",
    username: 'root',
    password: 'root',
    grafanaDB: true
  },
  grafana: {
    type: 'influxdb',
    url: "http://10.0.1.100:8086/db/grafana",
    username: 'root',
    password: 'root'
  },
},
I'm sorry I can't explain why this happens. Since telnet works, I have to assume it's a Grafana issue.
This question has been asked multiple times on the mailing list. See these threads for more info: thread1, thread2, thread3. There's also a blog post on how to get Grafana and InfluxDB working together; here's a link.
The browser sometimes caches the config.js and therefore looks at old configurations.
Please try clearing the cache or using incognito/private mode to load the Grafana dashboard.
I faced the same issue and using incognito worked for me.
Verify the contents of the config.js that Grafana actually serves by visiting host:port/config.js.