I'm still learning ROS and am not very good at it yet, so I have a question. I ran roscore on the master and the turtlesim node on the slave computer, then checked whether the topics were advertised correctly. They were, on both the master and the slave, so I could see the published topics on both the host and the auxiliary computer. However, when I send a velocity command from the host, the slave does not receive the commands sent by the master. (My ROS version is Kinetic.) Master IP: 192.168.137.aaa, slave IP: 192.168.137.bbb. I made the ROS master-slave settings as follows:
Master Computer (~/.bashrc);
export ROS_IP=192.168.137.aaa
export ROS_MASTER_URI=http://192.168.137.aaa:11311
source /opt/ros/kinetic/setup.bash
echo "ROS_IP:" $ROS_IP
echo "ROS_MASTER_URI:" $ROS_MASTER_URI
Slave Computer (~/.bashrc);
export ROS_IP=192.168.137.bbb
export ROS_MASTER_URI=http://192.168.137.aaa:11311
source /opt/ros/kinetic/setup.bash
echo "ROS_IP:" $ROS_IP
echo "ROS_MASTER_URI:" $ROS_MASTER_URI
Hi, I solved this problem as follows:
I edited ~/.bashrc (sudo nano ~/.bashrc) on both the master and the slave computer:
Master computer:
export ROS_IP=192.168.137.aaa
export ROS_MASTER_URI=http://192.168.137.aaa:11311
source /opt/ros/kinetic/setup.bash
Slave Computer:
export ROS_IP=192.168.137.bbb
export ROS_MASTER_URI=http://192.168.137.aaa:11311
source /opt/ros/kinetic/setup.bash
I ran roscore on the master computer, then ran turtlesim_node on the slave computer.
I ran rosrun turtlesim turtle_teleop_key on the master computer and controlled the turtle on the slave computer with the keyboard.
Remember to reboot (or re-source ~/.bashrc) after adding the exports, and make sure you are running the correct nodes. One of my mistakes was running the wrong control code.
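As a quick sanity check before driving the turtle, the topic connection can be verified from the slave (a minimal sketch, assuming the default turtlesim topic names):

# on the slave, in a shell that has sourced ~/.bashrc
rostopic list                    # should include /turtle1/cmd_vel
rostopic echo /turtle1/cmd_vel   # should print messages while teleop keys are pressed on the master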
I am trying to create an Argo workflow and am getting the error below.
E0819 10:53:21.439832 9420 portforward.go:378] error copying from remote stream to local connection: readfrom tcp6 [::1]:2746->[::1]:52354: write tcp6 [::1]:2746->[::1]:52354: wsasend: An established connection was aborted by the software in your host machine.
Here are the steps I followed:
minikube start, which started Minikube in VirtualBox.
kubectl -n argo port-forward deployment/argo-server 2746:2746
In another terminal, ran kubectl -n argo create -f wf-hello-world.yaml to create the workflow.
In the terminal I see the message workflow.argoproj.io/hello-world-mj2gn created, but in the Argo UI (https://localhost:2746/workflows?limit=50) the workflow is not created.
How to resolve this?
SOLUTION:
Followed the instructions from https://argoproj.github.io/argo-workflows/quick-start/
Installed the Argo CLI from https://github.com/argoproj/argo-workflows/releases/tag/v3.3.9:
Downloaded argo-windows-amd64.exe.gz from https://github.com/argoproj/argo-workflows/releases/tag/v3.3.9.
Extracted argo-windows-amd64.exe from the argo-windows-amd64.exe.gz file using a GZ extractor tool.
Renamed argo-windows-amd64.exe to argo.exe.
Moved the file to c:\projects\argo-cli.
Added c:\projects\argo-cli to the Path (Environment Variables > System variables > Path > New > c:\projects\argo-cli).
At a command prompt, argo version should now print the installed version.
Now, after the above steps, I used the command below to create the workflow:
kubectl -n argo create -f wf-hello-world.yaml
The workflow was created successfully.
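With the CLI on the Path, the workflow can also be submitted and inspected directly from the command line; a small sketch using the namespace and file name from the question (argo submit watches progress in the terminal, argo list shows the workflows in the argo namespace):

argo submit -n argo --watch wf-hello-world.yaml
argo list -n argo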
I am attempting to run docker-machine create against an Ubuntu 16.04 host like this:
ssh-keygen -R ${remote_host}
ssh-copy-id -i ~/.ssh/id_host_rsa.pub root@${remote_host}
docker-machine create \
--driver generic \
--generic-ip-address=${remote_host} \
--generic-ssh-key ~/.ssh/id_host_rsa \
--generic-ssh-user=root ${machine_name}
Version information:
docker --version
Docker version 19.03.6, build 369ce74a3c
docker-machine --version
docker-machine version 0.16.2, build bd45ab13
I am repeatedly asked for a password. Why is this?
Here is the output:
...
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: ERROR: Received disconnect from 77.68.21.66 port 22:2: Too many authentication failures
ERROR: Disconnected from 77.68.21.66 port 22
Running pre-create checks...
Creating machine...
(production) Importing SSH key...
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Password:
Detecting the provisioner...
Password:
Provisioning with ubuntu(systemd)...
Password:
.. etc
The reason for this problem was the ordering of entries in ~/.ssh/config.
I had a Host * entry first in the config, before the Host XX.XX.XX.XX entry for my specific server.
I moved the wildcard entry to the end of ~/.ssh/config, and now the password is no longer constantly asked for; the problem is fixed.
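For illustration, the corrected ordering looks roughly like this (XX.XX.XX.XX stands for the specific server and the key path matches the docker-machine command above; IdentitiesOnly is an optional extra that stops the agent from offering unrelated keys, a common cause of "Too many authentication failures"):

Host XX.XX.XX.XX
    User root
    IdentityFile ~/.ssh/id_host_rsa
    IdentitiesOnly yes

Host *
    # wildcard defaults now come after the specific entry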
I hope this helps someone.
I want to connect a slave to the Jenkins master, but when trying to connect I'm getting the following error:
[05/02/18 15:26:59] [SSH] Opening SSH connection to <IP>
Key exchange was not finished, connection is closed.
java.io.IOException: There was a problem while connecting to <IP>:22
at com.trilead.ssh2.Connection.connect(Connection.java:818)
at hudson.plugins.sshslaves.SSHLauncher.openConnection(SSHLauncher.java:1324)
at hudson.plugins.sshslaves.SSHLauncher$2.call(SSHLauncher.java:831)
at hudson.plugins.sshslaves.SSHLauncher$2.call(SSHLauncher.java:820)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException: Key exchange was not finished, connection is closed.
at com.trilead.ssh2.transport.KexManager.getOrWaitForConnectionInfo(KexManager.java:93)
at com.trilead.ssh2.transport.TransportManager.getConnectionInfo(TransportManager.java:230)
at com.trilead.ssh2.Connection.connect(Connection.java:770)
... 7 more
Caused by: java.io.IOException: Cannot negotiate, proposals do not match.
at com.trilead.ssh2.transport.KexManager.handleMessage(KexManager.java:405)
at com.trilead.ssh2.transport.TransportManager.receiveLoop(TransportManager.java:777)
at com.trilead.ssh2.transport.TransportManager$1.run(TransportManager.java:489)
... 1 more
[05/02/18 15:26:59] Launch failed - cleaning up connection
[05/02/18 15:26:59] [SSH] Connection closed.
Configuration for Node:
- Start-Method: Start Slave over SSH
- Hostname: the IP address
- Access data: a user I created for SSH access -> its public key is in authorized_keys on the slave node
If I am on my master as user "jenkins" and do ssh jenkins@<IP>, I can log in without problems (the public key is on the slave).
Why doesn't it work for "UI-Jenkins"?
Jenkins-Version: 1.658
Node: Ubuntu 14.04
SSH-Slave Plugin: 1.26
That "solved" the issue:
"Workaround is by commenting out MACs and KexAlgorithm line in /etc/ssh/sshd_config of Jenkins Slave and restarting the sshd (service ssh restart on Ubuntu)
UPDATE: the issue has been resolved as of 2017-04-29 "
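As a sketch, that edit on the slave amounts to commenting out whichever algorithm lists are actually present and then restarting sshd:

# in /etc/ssh/sshd_config on the slave:
#MACs ...
#KexAlgorithms ...

# then:
sudo service ssh restart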
Jenkins master fails to connect to the slave over SSH
Thought I'd throw my experience into this thread: my environment had a Windows master and mixed Windows and Linux agents. One Windows agent refused to connect to the master, even after the master had pushed 'jenkins-agent' and the other supporting files to the agent.
This agent had 6 different versions of the JDK and JRE installed. I uninstalled all of them, reinstalled only the latest JDK we needed, and set JAVA_HOME. This fixed the connectivity issue.
Execute this command on the destination node.
sudo -i su -c 'sed -i -e "s/MACs /MACs hmac-sha1,/g" /etc/ssh/sshd_config; service sshd restart'
I just recently ran into this issue with Docker.
Find the Java path:
/home/jenkins # which java
/opt/java/openjdk/bin/java
Export the Java path. In this case I am using docker-compose:
...
  exp_agent:
    image: jenkins/ssh-agent:alpine
    restart: always
    environment:
      JENKINS_AGENT_SSH_PUBKEY: $ENV_JENKINS_AGENT_SSH_PUBKEY
      JAVA_HOME: $ENV_JAVA_HOME
    container_name: jenkins-ssh-agent
    ports:
      - 22:22
    networks:
      - jenkins
...
The master still complains about the Java path, since /opt/java/openjdk/bin/java is not among the paths it expects:
...
[12/04/21 11:44:07] [SSH] Checking java version of /usr/bin/java
...
Create a symbolic link from one of the expected paths to the actual Java binary inside the Docker container (this could be automated in a Dockerfile; a sketch follows below):
ln -s /opt/java/openjdk/bin/java /usr/bin/java
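A minimal Dockerfile sketch of automating that step, assuming the same jenkins/ssh-agent:alpine base image (the final USER name is assumed to match the image's default user):

FROM jenkins/ssh-agent:alpine
# create the symlink as root, then drop back to the image's normal user
USER root
RUN ln -s /opt/java/openjdk/bin/java /usr/bin/java
USER jenkins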
I am trying to deploy using Capistrano 3.x.
I configured agent forwarding in my ~/.ssh/config file:
Host git-codecommit.*.amazonaws.com
Hostname xxxx
ForwardAgent yes
IdentityFile /path/to/codecommit_rsa
I did the same thing for my server connection, also with ForwardAgent yes.
I also verified that my server allows agent forwarding in the /etc/ssh/sshd_config file:
AllowAgentForwarding yes
INFO ----------------------------------------------------------------
INFO START 2017-11-18 16:09:44 -0500 cap production deploy
INFO ---------------------------------------------------------------------------
INFO [b43ed70f] Running /usr/bin/env mkdir -p /tmp as deploy@50.116.2.15
DEBUG [b43ed70f] Command: /usr/bin/env mkdir -p /tmp
INFO [b43ed70f] Finished in 1.132 seconds with exit status 0 (successful).
DEBUG Uploading /tmp/git-ssh-testapp-production-blankman.sh 0.0%
INFO Uploading /tmp/git-ssh-testapp-production-blankman.sh 100.0%
INFO [b1a90dc1] Running /usr/bin/env chmod 700 /tmp/git-ssh-testapp-production-blankman.sh as deploy@50.116.2.15
DEBUG [b1a90dc1] Command: /usr/bin/env chmod 700 /tmp/git-ssh-testapp-production-blankman.sh
INFO [b1a90dc1] Finished in 0.265 seconds with exit status 0 (successful).
INFO [b323707d] Running /usr/bin/env git ls-remote ssh://git-codecommit.us-east-1.amazonaws.com/v1/repos/fuweb HEAD as deploy@50.116.2.15
DEBUG [b323707d] Command: ( export GIT_ASKPASS="/bin/echo" GIT_SSH="/tmp/git-ssh-testapp-production-blankman.sh" ; /usr/bin/env git ls-remote ssh://git-codecommit.us-east-1.amazonaws.com/v1/repos/fuweb HEAD )
DEBUG [b323707d] Permission denied (publickey).
DEBUG [b323707d] fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
What am I missing here?
You need to make Capistrano aware that you expect it to forward your local key. This can be done by going into your project's config/deploy.rb and adding this line:
ssh_options[:forward_agent] = true
IIRC, Capistrano executes commands remotely through SSHKit, so even if you invoke the ssh-agent and add a key locally, I can't say if it will persist for the next command.
As discussed in the comments, an SSH agent must run on the remote server as well as on the local machine that contains the key because the agents at each end need to cooperate to forward the key information. The agent (ssh-agent) is different from the SSH server (sshd). The server accepts connections, while the (otherwise optional) agent manages credentials.
Some systems start an agent automatically upon login. To check if this is the case, log in to the server and run:
$ env | grep SSH
...looking for variables like SSH_AGENT_PID or SSH_AUTH_SOCK. If the agent isn't started, we can execute the following command to start it on the server:
$ eval "$(ssh-agent)"
As we can see, this evaluates the output of the ssh-agent command because ssh-agent returns a script that sets some needed environment variables in the session.
We'll want the agent to start automatically upon login so that it is available to the deploy process. If we checked and determined that the agent does not, in fact, start on login, we can add the last command to the "deploy" user's ~/.profile file (or ~/.bash_profile).
Note also that the host specified in the local ~/.ssh/config must match the name or IP address of the host that we want to forward credentials to, not the host that ultimately authenticates using the forwarded key. We need to change:
Host git-codecommit.*.amazonaws.com
...to:
Host 50.116.2.15
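Putting it together, the relevant entry in the local ~/.ssh/config would look something like this (the IP is the deploy target from the log above; the CodeCommit key is supplied by the local agent via ssh-add rather than by this entry):

Host 50.116.2.15
  ForwardAgent yes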
We can verify that the SSH client performs agent forwarding by checking the verbose output:
$ ssh -v deploy@50.116.2.15
...
debug1: Requesting authentication agent forwarding.
...
Of course, be sure to register any needed keys with the local agent by using ssh-add (this can also be done automatically when logging in as shown above). We can check which keys the agent loaded at any time with:
$ ssh-add -l
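If the CodeCommit key is not listed, it can be added explicitly (the path here is the placeholder from the question's config):

$ ssh-add /path/to/codecommit_rsa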
This usually helps me:
ssh-add -D            # remove all identities from the local agent
eval "$(ssh-agent)"   # (re)start the agent in the current shell
ssh-add               # re-add the default key
I am attempting to create a docker machine on Digital Ocean, but with Ubuntu 16.04 LTS instead of the default 15.10. The do-access-token file contains my token.
Here's the script (create-do):
#!/usr/bin/env bash
# Creates a digital-ocean server with Ubuntu 16.04 instead of the default
if [ "$1" != "" ]; then
echo "Creating: " $1
docker-machine \
create \
--driver digitalocean \
--digitalocean-access-token=`cat do-access-token` \
--digitalocean-image=ubuntu-16-04-x64 \
--digitalocean-ipv6=true \
$1
else
echo "Must have server name!"
fi
When I run the script like this:
$ ./create-do ps-server
It successfully allocates the machine at Digital Ocean, then craps out with this:
Creating: ps-server
Running pre-create checks...
Creating machine...
(ps-server) Creating SSH key...
(ps-server) Creating Digital Ocean droplet...
(ps-server) Waiting for IP address to be assigned to the Droplet...
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Detecting the provisioner...
Provisioning with ubuntu(systemd)...
Error creating machine: Error running provisioning: Something went wrong
running an SSH command!
command : sudo apt-get update
err : exit status 100
output : Reading package lists...
E: Could not get lock /var/lib/apt/lists/lock - open (11: Resource temporarily unavailable)
E: Unable to lock directory /var/lib/apt/lists/
The machine is running, but I can't get to it since the SSH key was apparently not set before things started going wrong.
Anyone seen this before and/or have a work-around?
Update: May 21, 2016
Broken again with same error this morning. Tried 4 times, failed same way each time.
Update: May 20, 2016
This was, according to Digital Ocean's support, due to an issue with their Ubuntu 16.04 image, which has now been corrected; I have confirmed that this now works.
Related GitHub issue (not yet closed):
https://github.com/docker/machine/issues/3358
This worked for me:
docker-machine provision your-node
I've taken this solution from here: https://github.com/docker/machine/issues/3358
I hope this helps!
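For the machine from the question, the recovery might look like this (a sketch: docker-machine provision re-runs provisioning on the already-created droplet, and docker-machine env points the local Docker client at it):

docker-machine provision ps-server
eval $(docker-machine env ps-server)
docker ps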