I am trying to deploy using Capistrano 3.x.
I configured agent forwarding in my ~/.ssh/config file:
Host git-codecommit.*.amazonaws.com
Hostname xxxx
ForwardAgent yes
IdentityFile /path/to/codecommit_rsa
I did the same for my server connection, also with ForwardAgent yes.
I also verified that my server allows agent forwarding in its /etc/ssh/sshd_config file:
AllowAgentForwarding yes
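A quick way to double-check the effective setting on the server, assuming root access, is to ask sshd itself:
# sshd -T prints the effective server configuration
sudo sshd -T | grep -i allowagentforwarding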
INFO ----------------------------------------------------------------
INFO START 2017-11-18 16:09:44 -0500 cap production deploy
INFO ---------------------------------------------------------------------------
INFO [b43ed70f] Running /usr/bin/env mkdir -p /tmp as deploy@50.116.2.15
DEBUG [b43ed70f] Command: /usr/bin/env mkdir -p /tmp
INFO [b43ed70f] Finished in 1.132 seconds with exit status 0 (successful).
DEBUG Uploading /tmp/git-ssh-testapp-production-blankman.sh 0.0%
INFO Uploading /tmp/git-ssh-testapp-production-blankman.sh 100.0%
INFO [b1a90dc1] Running /usr/bin/env chmod 700 /tmp/git-ssh-testapp-production-blankman.sh as deploy@50.116.2.15
DEBUG [b1a90dc1] Command: /usr/bin/env chmod 700 /tmp/git-ssh-testapp-production-blankman.sh
INFO [b1a90dc1] Finished in 0.265 seconds with exit status 0 (successful).
INFO [b323707d] Running /usr/bin/env git ls-remote ssh://git-codecommit.us-east-1.amazonaws.com/v1/repos/fuweb HEAD as deploy@50.116.2.15
DEBUG [b323707d] Command: ( export GIT_ASKPASS="/bin/echo" GIT_SSH="/tmp/git-ssh-testapp-production-blankman.sh" ; /usr/bin/env git ls-remote ssh://git-codecommit.us-east-1.amazonaws.com/v1/repos/fuweb HEAD )
DEBUG [b323707d] Permission denied (publickey).
DEBUG [b323707d] fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
What am I missing here?
You need to make Capistrano aware that you expect it to forward your local key. This can be done by going into your project's config/deploy.rb and adding this line (Capistrano 3.x syntax):
set :ssh_options, forward_agent: true
IIRC, Capistrano executes remote commands through SSHKit, so even if you invoke ssh-agent and add a key locally, I can't say whether it will persist for the next command.
As discussed in the comments, an SSH agent must run on the remote server as well as on the local machine that contains the key because the agents at each end need to cooperate to forward the key information. The agent (ssh-agent) is different from the SSH server (sshd). The server accepts connections, while the (otherwise optional) agent manages credentials.
Some systems start an agent automatically upon login. To check if this is the case, log in to the server and run:
$ env | grep SSH
...looking for variables like SSH_AGENT_PID or SSH_AUTH_SOCK. If it isn't started, we can execute the following command to start the agent on the server:
$ eval "$(ssh-agent)"
This evaluates the output of the ssh-agent command, because ssh-agent prints a short script that sets the needed environment variables (SSH_AUTH_SOCK and SSH_AGENT_PID) in the current session.
We'll need to make sure the agent starts automatically upon login so that the non-interactive deploy process can find it. If we checked and determined that the agent does not, in fact, start on login, we can add the last command to the "deploy" user's ~/.profile file (or ~/.bash_profile).
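A minimal sketch of such a ~/.profile addition (the SSH_AUTH_SOCK guard is an optional safeguard so an already-available agent is not replaced):
# Start an agent for this login shell only if no agent socket is available yet
if [ -z "$SSH_AUTH_SOCK" ]; then
  eval "$(ssh-agent -s)" > /dev/null
fi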
Note also that the host specified in the local ~/.ssh/config must match the name or IP address of the host that we want to forward credentials to, not the host that ultimately authenticates using the forwarded key. We need to change:
Host git-codecommit.*.amazonaws.com
...to:
Host 50.116.2.15
We can verify that the SSH client performs agent forwarding by checking the verbose output:
$ ssh -v deploy@50.116.2.15
...
debug1: Requesting authentication agent forwarding.
...
Of course, be sure to register any needed keys with the local agent by using ssh-add (this can also be done automatically when logging in, as shown above). We can check which keys the agent has loaded at any time with:
$ ssh-add -l
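To confirm the whole chain, one option is to log in with forwarding enabled, check that the local keys are visible on the server, and then test the repository connection there directly (a sketch reusing the host and repository URL from the question):
ssh -A deploy@50.116.2.15      # -A requests agent forwarding for this session
ssh-add -l                     # run on the server: the local key(s) should be listed
git ls-remote ssh://git-codecommit.us-east-1.amazonaws.com/v1/repos/fuweb HEAD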
This usually helps me:
ssh-add -D
ssh-agent
ssh-add
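Note that ssh-agent on its own only prints the environment variables for a new agent; it does not modify the current shell. A variant that wires the agent into the current session might look like this (a sketch, not necessarily the exact commands above):
ssh-add -D              # remove all identities from the running agent
eval "$(ssh-agent -s)"  # start a fresh agent and export SSH_AUTH_SOCK / SSH_AGENT_PID
ssh-add                 # add the default keys (e.g. ~/.ssh/id_rsa)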
Related
I tried to connect to a remote server with a Docker context.
When I tried docker ps from the client (a local Mac), I got this error message:
error during connect: Get "http://docker.example.com/v1.24/containers/json": command [ssh -l wwww -- tane-dev-0.ccn docker system dial-stdio] has exited with exit status 255, please make sure the URL is valid, and Docker 18.09 or later is installed on the remote host: stderr=wwww@xxx-dev-0.xxx: Permission denied (publickey,gssapi-keyex,gssapi-with-mic)
After googling this problem, there are a few things to check:
Docker version later than 18.09 ✅
client: 20.10.14
remote: 20.10.17
register ssh key with ssh-agent ✅
ssh-add ~/.ssh/id_rsa
Identity added: /Users/ma_kyeongwook/.ssh/id_rsa (ma_kyeongwook@xxx.xxx)
ssh-add -l
4096 SHA256:~~~ ma_kyeongwook@xxx.xxx (RSA)
ssh connect test ✅
I also checked that ~/.ssh/authorized_keys on the remote host contains the client's public key.
Did I miss something?
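Since a Docker context over SSH just shells out to the local ssh client, one way to narrow this down is to run the command Docker runs (shown in the error message) with verbose output (a sketch using the obfuscated user and host from the error):
# Reproduce the connection Docker makes; -v shows which keys the client offers
ssh -v -l wwww tane-dev-0.ccn docker system dial-stdio
# If this also fails with "Permission denied (publickey)", the problem is the key
# the non-interactive ssh client offers, not Docker itself.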
I am using Jenkins to run some Ansible playbooks. One of the simple tests I did was to have the playbook cat the fstab file on a remote server.
The playbook looks like this:
---
- hosts: "test-1-server"
  tasks:
    - name: display /etc/fstab
      shell: cat /etc/fstab
      register: fstab_reg
    - debug: msg="{{ fstab_reg.stdout }}"
In Jenkins, I have a freestyle project that uses Invoke Ansible Playbook to call the above playbook, and the project credentials are set up with a different user: ansible-user. This is different from the default user-jenkins that runs Jenkins. ansible-user can ssh to all my servers. I have ansible-user set up in Jenkins Credentials with its private key and passphrase. But when I run the project, I get an error:
[update_fstab] $ /usr/bin/ansible-playbook google/ansible/test-scripts/test/sub_book.yml -i /etc/ansible/hosts -f 5 --private-key /tmp/ssh14117407503194058572.key -u ansible-user
[WARNING]: Invalid characters were found in group names but not replaced, use
-vvvv to see details
fatal: [test-1-server]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ansible-user@test-1-server: Permission denied (publickey).", "unreachable": true}
I am not quite sure what exactly the error is saying, as I have set up the private key and passphrase in ansible-user's credentials. What does the warning about group names mean? Because this runs through Jenkins, I am not sure how to pass the -vvvv it suggests.
How can I make Jenkins pass the private key and passphrase to the Ansible playbook?
Thanks!
I think I have found the "issue". After I switched to a different user other than ansible-user, the playbook worked. The interesting thing is that when I created the key pair for ansible-user, I used "-m PEM", so it should have been fine for Jenkins.
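For reference, a sketch of how such a key pair could be generated in PEM format, and how the playbook could be re-run verbosely outside Jenkins to isolate the credential problem (the key file name here is a placeholder):
# Generate an RSA key pair in the older PEM format (-m PEM)
ssh-keygen -t rsa -b 4096 -m PEM -f ~/.ssh/ansible-user -C "ansible-user"
# Re-run the same playbook manually with -vvvv to see the full SSH negotiation
ansible-playbook google/ansible/test-scripts/test/sub_book.yml -i /etc/ansible/hosts \
  -u ansible-user --private-key ~/.ssh/ansible-user -vvvv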
I use an Ansible playbook to load and start the https://hub.docker.com/r/rastasheep/ubuntu-sshd/ container.
It starts fine, of course:
bash-4.4$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
8bedbd3b7d88 rastasheep/ubuntu-sshd "/usr/sbin/sshd -D" 37 minutes ago Up 36 minutes 0.0.0.0:49154->22/tcp test
bash-4.4$
After Ansible failed to connect to it over SSH, I tested manually from the shell; this also works:
bash-4.4$ ssh root@172.17.0.2
The authenticity of host '172.17.0.2 (172.17.0.2)' can't be established.
ECDSA key fingerprint is SHA256:YtTfuoRRR5qStSVA5UuznGamA/dvf+djbIT6Y48IYD0.
ECDSA key fingerprint is MD5:43:3f:41:e9:89:45:06:6f:f6:42:c4:6a:70:37:f8:1d.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '172.17.0.2' (ECDSA) to the list of known hosts.
root@172.17.0.2's password:
root@8bedbd3b7d88:~# logout
Connection to 172.17.0.2 closed.
bash-4.4$
So the step that fails is getting onto it from the Ansible playbook to set up access with ssh-copy-id.
The Ansible error message is:
Fatal: [172.17.0.2]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Warning: Permanently added '172.17.0.2' (ECDSA) to the list of known hosts.\r\nPermission denied (publickey,password).\r\n", "unreachable": true}
---
- hosts: 127.0.0.1
  tasks:
    - name: start docker service
      service:
        name: docker
        state: started
    - name: load and start the container we wanna use
      docker_container:
        name: test
        image: rastasheep/ubuntu-sshd
        state: started
        ports:
          - "49154:22"
    - name: Wait maximum of 300 seconds for ports to be available
      wait_for:
        host: 0.0.0.0
        port: 49154
        state: started

- hosts: 172.17.0.2
  vars:
    passwordadmin: $6$pbE6yznA$AeFIdI.....K0
    passwordroot: $6$TMrxQUxT$I8.JIzR.....TV1
    ansible_ssh_extra_args: "-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null"
  tasks:
    - name: Build test container root user rsa ssh-key
      shell: docker exec test ssh-keygen -b 2048 -t rsa -f /root/.ssh/id_rsa -q -N ""
So I cannot even run the step needed to set up SSH.
How should I do this, then?
1st step (Ansible task): load the Docker container.
2nd step (Ansible task, on 172.17.0.2 only): connect to it and set it up.
There will be a 3rd step to run the application on it after that.
The problem occurs only when starting the 2nd step.
OK, after many tries on a second container, the conclusion is that my procedure was bad.
What I did to solve it:
build a directory tree separating ./ ./inventory ./includes
build one YAML file per host (local, docker, labo)
build one main YAML file in ./
build one new host file in ./inventory
force a first connection to the Docker container with sshpass using the default password (see the sketch after this answer)
change that password
add the host key to authorized_keys for a dedicated login
install Python in the container (needed for Ansible to talk to the host, otherwise it randomly produces module errors or refused connections depending on the current action)
set up an SSH login user in sudoers
Then I can run the docker.yaml actions,
and only then, at last, can I run the labo.yaml actions.
Thanks for the help; now I'm able to build the missing tools.
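A sketch of the sshpass bootstrap step mentioned in the list above, assuming the image's default root password ("root") and a standard public key path:
# First connection: push the public key using the default password;
# later logins can then use key-based auth and the password can be changed.
sshpass -p root ssh-copy-id -i ~/.ssh/id_rsa.pub -o StrictHostKeyChecking=no root@172.17.0.2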
In my CI chain I execute end-to-end tests after a "docker-compose up". Unfortunately my tests often fail because even if the containers are properly started, the programs contained in my containers are not.
Is there an elegant way to verify that my setup is completely started before running my tests?
You could poll the required services to confirm they are responding before running the tests.
curl has built-in retry logic, or it's fairly trivial to build retry logic around some other type of service test.
#!/bin/bash
# Poll a URL until it responds, giving up after ${seconds} seconds.
await(){
  local url=${1}
  local seconds=${2:-30}
  curl --max-time 5 --retry 60 --retry-delay 1 \
       --retry-max-time ${seconds} "${url}" \
       || exit 1
}
docker-compose up -d
await http://container_ms1:3000
await http://container_ms2:3000
run-ze-tests
The alternative to polling is an event-based system.
If all your services push notifications to an external service (scaeda gave the example of a log file, or you could use something like Amazon SNS), each service can emit a "started" event. Then you can subscribe to those events and run whatever you need once everything has started.
Docker 1.12 added the HEALTHCHECK Dockerfile instruction. Maybe this is available via Docker events?
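If the images do define a HEALTHCHECK, the recorded health status can be polled from the CI script before the tests start (a sketch; the container name is a placeholder):
# Wait until Docker reports the container as healthy (requires a HEALTHCHECK in the image)
until [ "$(docker inspect --format '{{.State.Health.Status}}' my_container)" = "healthy" ]; do
  sleep 1
done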
If you have control over the Docker engine in your CI setup, you could execute docker logs [Container_Name] and read the last line, which could be emitted by your application.
RESULT=$(docker logs [Container_Name] 2>&1 | grep [Search_String])
logs output example:
Agent pid 13
Enter passphrase (empty for no passphrase): Enter same passphrase again: Identity added: id_rsa (id_rsa)
#host SSH-2.0-OpenSSH_6.6.1p1 Ubuntu-2ubuntu2.6
#host SSH-2.0-OpenSSH_6.6.1p1 Ubuntu-2ubuntu2.6
parse specific line:
RESULT=$(docker logs ssh_jenkins_test 2>&1 | grep Enter)
result:
Enter passphrase (empty for no passphrase): Enter same passphrase again: Identity added: id_rsa (id_rsa)
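Building on that, a small polling loop around the same grep can block the CI step until the expected line shows up (a sketch reusing the container name and search string from the example above):
# Poll the container log for up to 60 seconds until the expected line appears
for i in $(seq 1 60); do
  if docker logs ssh_jenkins_test 2>&1 | grep -q "Identity added"; then
    echo "container application is up"
    break
  fi
  sleep 1
done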
I found a GitHub gist where someone is compiling Rails assets locally and then copying them to the server.
It fails to connect to the server, most likely because I changed my SSH port to, let's say, 90.
How can I make the command that uses this variable connect on port 90?
remote_dir = "#{host.user}##{host.hostname}:#{shared_path}/public/assets/"
Typically, when I connect to the server via ssh, I do this:
ssh myUser#myServer -p90
https://gist.github.com/Jesus/80ef0c8db24c6d3a2745
It seems this question has been asked before: Is it possible to specify a different ssh port when using rsync?
The trick was to use:
run_locally { execute "rsync -av -e 'ssh -p 90' --delete #{local_dir} #{remote_dir}" }
instead of
run_locally { execute "rsync -av --delete #{local_dir} #{remote_dir}" }