I am trying to create a distributed client network using Tsung. I have a cluster of 14 different machines. I want to use m01 as the server and machines m02 and m03 as the clients (or simulated users).
Here is what I wrote:
<!-- Client side setup -->
<clients>
<client host="localhost" maxusers="400" cpu="1"><ip value="192.168.1.2"/></client>
<client host="m03" maxusers="400" cpu="1"><ip value="192.168.1.3"/></client>
</clients>
The server I am targeting is defined here:
<!-- Server side setup -->
<servers>
<server host="192.168.1.1" port="5000" type="tcp"></server>
</servers>
Whenever I try to run this, I get the following error:
Host key verification failed.
For reference, m02 is the localhost that I am running Tsung on.
I have installed Tsung and Erlang on all machines and have done various tests to make sure that I can run non-distributed tests.
I am not sure how to move from here.
Tsung Cluster configuration.
To configure a Tsung cluster you need several nodes (different machines with the same operating system and the same version of Tsung).
The master node must be able to reach every slave node over SSH without a password prompt. To set this up, generate an SSH key pair on the master node and then add the public key to all slave nodes. Follow the commands below:
Generate the key pair on the master node:
ssh-keygen -t rsa
Copy the public key to the home directory of every node (in our example there are 3 nodes):
scp ~/.ssh/id_rsa.pub USERNAME@NODE_1_IP_ADDRESS:~
scp ~/.ssh/id_rsa.pub USERNAME@NODE_2_IP_ADDRESS:~
scp ~/.ssh/id_rsa.pub USERNAME@NODE_3_IP_ADDRESS:~
Append the public key to authorized_keys on every node:
cat ~/id_rsa.pub >> ~/.ssh/authorized_keys
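Alternatively, ssh-copy-id does the copy and the append in one step (a shortcut, assuming OpenSSH's ssh-copy-id is installed on the master):
ssh-copy-id USERNAME@NODE_1_IP_ADDRESS
ssh-copy-id USERNAME@NODE_2_IP_ADDRESS
ssh-copy-id USERNAME@NODE_3_IP_ADDRESS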
After the key has been generated and installed on all nodes, check SSH access to every node. A first interactive login via ssh is required to accept each host key; otherwise you will keep getting "Host key verification failed."
For example, do this:
ssh [the-same-username-as-in-your-tsung-test-plan]@yournodehostname
NOTE: /etc/hosts on all nodes should contain entries for the cluster machines and the test servers.
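For example, with the addresses from the question, every node's /etc/hosts might contain entries like this (a sketch; adjust to your network):
192.168.1.1    m01
192.168.1.2    m02
192.168.1.3    m03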
Tsung distributed load testing works by starting remote nodes over SSH.
Make sure you set up your SSH system so that you can ssh without password prompt (with key) from master to all the slave nodes.
From Tsung documentation:
for distributed tests, you need an ssh access to remote machines without password (use a RSA/DSA key without pass-phrase or ssh-agent) (rsh is also supported)
Have you ever ssh'd to the machines you are trying to use from the machine you are on?
ubuntu@ip-10-168-221-101:~/sessions$ tsung -f project.xml -l logs/tsung.log start
Starting Tsung
"Log directory is: /home/ubuntu/sessions/logs/20120830-1008"
Host key verification failed.
Host key verification failed.
Host key verification failed.
Host key verification failed.
^C
BREAK: (a)bort (c)ontinue (p)roc info (i)nfo (l)oaded
(v)ersion (k)ill (D)b-tables (d)istribution
^Cubuntu@ip-10-168-221-101:~/sessions$ grep client project.xml
<clients>
<client host="localhost"/>
<client host="ip-10-161-74-53"/>
<client host="ip-10-168-154-136"/>
<client host="ip-10-168-15-66"/>
<client host="ip-10-168-86-249"/>
</clients>
the mean inter-arrival time between new clients and the phase
ubuntu@ip-10-168-221-101:~/sessions$ ssh ip-10-161-74-53 erl
The authenticity of host 'ip-10-161-74-53 (10.161.74.53)' can't be established.
ECDSA key fingerprint is d0:92:3c:f1:56:99:c8:34:8b:0f:99:e8:10:7e:69:a6.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'ip-10-161-74-53,10.161.74.53' (ECDSA) to the list of known hosts.
Eshell V5.8.5 (abort with ^G)
1> ^C
ubuntu@ip-10-168-221-101:~/sessions$ for d in $(grep client project.xml | grep ip | sed 's/<client host="\([^"]\+\)"\/>/\1/'); do ssh $d cat /etc/hosts; done
127.0.0.1 localhost
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
[...]
ubuntu@ip-10-168-221-101:~/sessions$ tsung -f project.xml -l logs/tsung.log start
Starting Tsung
"Log directory is: /home/ubuntu/sessions/logs/20120830-1013"
Profit!"
1. Use this on the server (master) to check that SSH login without a password works:
ssh client-002 erl
2. If it does not, run ssh-copy-id so the node has your current public key:
ssh-copy-id your-hostname
PS:
If your passwordless SSH login already works, DO NOT use ssh-keygen to generate a new key pair.
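To check all slaves at once, a small loop helps. This is a sketch assuming the slave hostnames client-002 and client-003 (BatchMode makes ssh fail instead of prompting for a password):
for h in client-002 client-003; do
  ssh -o BatchMode=yes "$h" true && echo "$h: OK" || echo "$h: FAILED"
done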
Steps
1. Reboot the VMs/machines and start a new session.
2. On each machine, remove the lines in /home/user/.ssh/known_hosts that refer to m01, m02 and m03.
3. Modify /etc/hosts on all of them to contain the IP address and hostname/FQDN/short name of m01, m02 and m03.
4. Copy the contents of the public key into /home/user/.ssh/authorized_keys and copy the private key file into /home/user/.ssh/. Generate a new key pair with ssh-keygen if you have none.
5. (Important step) Now run "ssh m03" from m01 and from m02, as sketched below. It is important to use the same name (or hostname) in your .xml file, in /etc/hosts, and when doing ssh (the hostname you use for ssh is what gets added to known_hosts). Do the same on the other two machines.
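A sketch of steps 2 and 5, assuming the hostnames m01, m02 and m03 (ssh-keygen -R removes a host's entry from known_hosts):
# step 2: on each machine, clear stale known_hosts entries
ssh-keygen -R m01; ssh-keygen -R m02; ssh-keygen -R m03
# step 5: from m01, log in once to accept the new host keys (answer "yes");
# then repeat from m02 and m03
ssh m02 true
ssh m03 true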
Reference: http://cryolite.iteye.com/blog/378758 (in Chinese; please translate)
"Host key verification failed." error will never appear again :)
Related
I'm running an Oracle 11g image (https://hub.docker.com/r/oracleinanutshell/oracle-xe-11g) on a docker container.
I'm creating the container with the debug option as explained:
docker run --name oracle-xe-11g -idt -p 1521:1521 -p 49161:8080 -e ORACLE_ALLOW_REMOTE=true oracleinanutshell/oracle-xe-11g /bin/bash
After that I logged into the container as sudo and configured listener.ora with the correct hostname, following this guide (it's in pt-BR, but the commands are easy to understand):
http://loredata.com.br/2017/08/31/rodando-o-oracle-no-docker/
I can connect with SQL Developer and with my main application running in a Wildfly server, but for support purposes I need to debug some package and stored procedures.
I compiled all my packages and procedures to allow debugging, gave the debug permissions to the user, but when I try to debug a procedure in a package using the SQL Developer default debug options I get the following error:
Connecting to the database SFW_DOCKER.
Executing PL/SQL: ALTER SESSION SET PLSQL_DEBUG=TRUE
Executing PL/SQL: CALL DBMS_DEBUG_JDWP.CONNECT_TCP( '127.0.0.1', '20587' )
ORA-30683: failure establishing connection to debugger
ORA-12541: TNS:no listener
ORA-06512: at "SYS.DBMS_DEBUG_JDWP", line 68
ORA-06512: at line 1
Process exited.
Disconnecting from the database SFW_DOCKER.
It says there's no listener, but I'm sure everything is running fine.
I also tried ports 4000-4999, exposing them in the container creation command and forcing SQL Developer to use them, but I get the same error.
Can anyone help me with this?
To solve it, try the following under SQL Developer -> Tools -> Preferences -> Debugger:
Use the IPv4 address of your local machine (from inside the container, 127.0.0.1 is the container itself, so the database cannot reach a debugger listening on your machine)
Set 'Debugging Port Range' from 4000 to 4000
Check the option 'Prompt for Debugger Host for Database Debugging'
(screenshot: Debugger configuration)
I solved it by setting DatabaseDebuggerDisableJDWP=true in ide.properties. On Linux this can be done with:
find ~/.sqldeveloper/ -name ide.properties -type f -exec sh -c "echo 'DatabaseDebuggerDisableJDWP=true' >> {}" \;
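Afterwards you can confirm the property was appended to each matching file (a quick check, assuming the default ~/.sqldeveloper layout):
grep -r DatabaseDebuggerDisableJDWP ~/.sqldeveloper/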
I'm trying to connect a Windows agent to Jenkins, with no luck. I'm using OpenSSH, with no host key verification for now during setup. When I launch the agent, Jenkins can reach it and puts remoting.jar in the requested folder, but it still has an issue starting the agent, and I get no error description whatsoever:
SSHLauncher{host='NLQA1', port=22, credentialsId='10314a78-c648-4891-aa78-c5510875e8e7', jvmOptions='', javaPath='c:/jenkins2/jdk/bin/java.exe', prefixStartSlaveCmd='', suffixStartSlaveCmd='', launchTimeoutSeconds=210, maxNumRetries=10, retryWaitTime=15, sshHostKeyVerificationStrategy=hudson.plugins.sshslaves.verifiers.NonVerifyingKeyVerificationStrategy, tcpNoDelay=true, trackCredentials=true}
[06/20/19 13:36:26] [SSH] Opening SSH connection to NLQA1:22.
[06/20/19 13:36:27] [SSH] WARNING: SSH Host Keys are not being verified. Man-in-the-middle attacks may be possible against this connection.
[06/20/19 13:36:28] [SSH] Authentication successful.
[06/20/19 13:36:28] [SSH] The remote user's environment is:
ALLUSERSPROFILE=C:\ProgramData
APPDATA=C:\Users\Admin\AppData\Roaming
CommonProgramFiles=C:\Program Files\Common Files
CommonProgramFiles(x86)=C:\Program Files (x86)\Common Files
CommonProgramW6432=C:\Program Files\Common Files
COMPUTERNAME=NLQA1
ComSpec=C:\WINDOWS\system32\cmd.exe
DriverData=C:\Windows\System32\Drivers\DriverData
GIT_SSH=C:\Program Files\TortoiseGit\bin\TortoisePLink.exe
HOMEDRIVE=C:
HOMEPATH=\Users\Admin
ICU_DATA=c:\Usd91\BIN
LOCALAPPDATA=C:\Users\Admin\AppData\Local
NUMBER_OF_PROCESSORS=2
OneDrive=C:\Users\Admin\OneDrive
OS=Windows_NT
Path=C:\app\client\Admin\product\12.1.0\client_1\bin;C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem;C:\WINDOWS\System32\WindowsPowerShell\v1.0\;C:\WINDOWS\System32\OpenSSH\;C:\Program Files\Git\cmd;C:\Program Files\TortoiseGit\bin;C:\WINDOWS\system32\config\systemprofile\AppData\Local\Microsoft\WindowsApps;c:\Gnuwin32;C:\Users\Admin\AppData\Local\Microsoft\WindowsApps;C:\App;
PATHEXT=.COM;.EXE;.BAT;.CMD;.VBS;.VBE;.JS;.JSE;.WSF;.WSH;.MSC
PROCESSOR_ARCHITECTURE=AMD64
PROCESSOR_IDENTIFIER=Intel64 Family 6 Model 63 Stepping 2, GenuineIntel
PROCESSOR_LEVEL=6
PROCESSOR_REVISION=3f02
ProgramData=C:\ProgramData
ProgramFiles=C:\Program Files
ProgramFiles(x86)=C:\Program Files (x86)
ProgramW6432=C:\Program Files
PROMPT=Admin@Domain@NLQA1 $P$G
PSModulePath=C:\Program Files\WindowsPowerShell\Modules;C:\WINDOWS\system32\WindowsPowerShell\v1.0\Modules
PUBLIC=C:\Users\Public
SSH_CLIENT=172.x.x.x 63458 22
SSH_CONNECTION=172.x.x.x 63458 172.x.x.x 22
SystemDrive=C:
SystemRoot=C:\WINDOWS
TEMP=C:\TEMP
TMP=C:\TEMP
USERDOMAIN=Domain
USERNAME=Admin@Domain
USERPROFILE=C:\Users\Admin
windir=C:\WINDOWS
[06/20/19 13:36:28] [SSH] Starting sftp client.
[06/20/19 13:36:28] [SSH] Copying latest remoting.jar...
Source agent hash is D2D1A740134BD20D6F0855B356344342. Installed agent hash is D2D1A740134BD20D6F0855B356344342
Verified agent jar. No update is necessary.
Expanded the channel window size to 4MB
[06/20/19 13:36:29] [SSH] Starting agent process: cd "c:/jenkins2" && c:/jenkins2/jdk/bin/java.exe -jar remoting.jar -workDir c:/jenkins2
Slave JVM has terminated. Exit code=0
[06/20/19 13:36:29] Launch failed - cleaning up connection
[06/20/19 13:36:29] [SSH] Connection closed.
The agent is running AdoptOpenJDK 11 with Eclipse OpenJ9, and "Slave JVM has terminated. Exit code=0" is all the information I get back from Jenkins. I can run the agent if I RDP to the machine and run c:/jenkins2/jdk/bin/java.exe -jar remoting.jar -workDir c:/jenkins2 manually, so it is not that the jar can't be started at all. JNLP is working as well, but I'd like to use the SSH route. Do you have a clue what is wrong, or what I have to do to get more information about the failed launch?
I found the answer in the ssh-slaves-plugin Git repository. I'll quote it here so it is preserved for the future.
Launch Windows slaves using Microsoft OpenSSH
The current version of the plugin does not run directly on PowerShell; you have to use the prefix and suffix settings to trick the command and make it work. Windows 10 machines can run as SSH agents with the Microsoft OpenSSH server by using:
Prefix Start Agent Command
powershell -Command "cd C:\J\S ; C:\J\S\jdk\bin\java.exe -jar remoting.jar" ; exit 0 ; rem '
Suffix Start Agent Command
'
EDIT 16-08-2019
After installing windows updates on the machine I had to change the prefix to
powershell -Command "cd C:\J\S ; C:\J\S\jdk\bin\java.exe -jar remoting.jar" ; exit 0 ; # '
Changing rem to # made it work again. The error I was getting was:
The string is missing the terminator: '.
+ CategoryInfo : ParserError: (:) [], ParentContainsErrorRecordException
+ FullyQualifiedErrorId : TerminatorExpectedAtEndOfString
It looks like it's the && operator. A simple example:
powershell -Command "cd c:/" ; exit 0 ; rem 'cd && echo "abc"'
Adding the prefix and the suffix fixed it for me. If someone knows why wrapping it in another powershell command makes it work, feel free to elaborate.
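If I read the trick right, Jenkins assembles the start command as prefix + its own cd/java command + suffix, so with the question's c:/jenkins2 layout the line actually executed should look roughly like this (a sketch; the original cd ... && java ... part ends up inert inside the quoted comment argument):
powershell -Command "cd C:\J\S ; C:\J\S\jdk\bin\java.exe -jar remoting.jar" ; exit 0 ; # 'cd "c:/jenkins2" && c:/jenkins2/jdk/bin/java.exe -jar remoting.jar -workDir c:/jenkins2'
The inner powershell -Command string does the real work; everything from # onward is a comment to the outer PowerShell session, which is why the && never gets parsed.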
I am running SonarQube in a Docker container using the default image from Docker Hub, and SonarQube itself works fine. I am now trying to use LDAPS for system login and can't seem to get it to work. I created a centos:latest container and run SonarQube there; I did this so I could have ldapsearch, vim, telnet, update-ca-trust, etc. I used openssl to add the server certificate. I tested with ldapsearch, and the following succeeds:
[root@bf9accb5647d linux-x86-64]# ldapsearch -x -LLL -H ldaps://dir.example.com -b "dc=example,dc=com" -D "uid=svcSonar,ou=SvcAccts,ou=People,dc=example,dc=com" -W '(uid=usernamehere)' cn
Enter LDAP Password: ******
dn: uid=usernamehere,ou=Users,ou=People,dc=example,dc=com
cn: User Name
Here is my relevant ldap configuration in sonar.properties:
sonar.security.realm=LDAP
ldap.url=ldaps://dir.example.com
ldap.bindDN=uid=svcSonar,ou=SvcAccts,ou=People,dc=example,dc=com
ldap.bindPassword=mypassword
ldap.user.baseDn=ou=Users,ou=People,dc=example,dc=com
ldap.user.request=(uid={login})
Here is the relevant sonar.log entries with TRACE and DEBUG on:
2016.04.15 16:32:35 INFO web[o.s.s.p.ServerPluginRepository] Deploy plugin LDAP / 1.5.1 / 8960e08512a3d3ec4d9cf16c4c2c95017b5b7ec5
2016.04.15 20:19:07 INFO web[org.sonar.INFO] Security realm: LDAP
2016.04.15 20:19:07 INFO web[o.s.p.l.LdapSettingsManager] User mapping: LdapUserMapping{baseDn=ou=Users,ou=People,dc=example,dc=com, request=(uid={0}), realNameAttribute=cn, emailAttribute=mail}
2016.04.15 20:19:07 DEBUG web[o.s.p.l.LdapContextFactory] Initializing LDAP context {java.naming.provider.url=ldaps://dir.example.com, java.naming.factory.initial=com.sun.jndi.ldap.LdapCtxFactory, com.sun.jndi.ldap.connect.pool=true, java.naming.security.authentication=simple, java.naming.referral=follow}
2016.04.15 20:19:07 INFO web[o.s.p.l.LdapContextFactory] Test LDAP connection on ldaps://dir.example.com: OK
2016.04.15 20:19:07 INFO web[org.sonar.INFO] Security realm started
.
.
.
2016.04.15 20:26:55 DEBUG web[o.s.p.l.LdapUsersProvider] Requesting details for user usernamehere
2016.04.15 20:26:55 DEBUG web[o.s.p.l.LdapSearch] Search: LdapSearch{baseDn=ou=Users,ou=People,dc=example,dc=com, scope=subtree, request=(uid={0}), parameters=[usernamehere], attributes=[mail, cn]}
2016.04.15 20:26:55 DEBUG web[o.s.p.l.LdapContextFactory] Initializing LDAP context {java.naming.provider.url=ldaps://dir.example.com, java.naming.factory.initial=com.sun.jndi.ldap.LdapCtxFactory, com.sun.jndi.ldap.connect.pool=true, java.naming.security.authentication=simple, java.naming.referral=follow}
2016.04.15 20:26:55 DEBUG web[o.s.p.l.LdapUsersProvider] User usernamehere not found in <default>
I did the following for the certificate:
echo "" | openssl s_client -connect server:port -prexit 2>/dev/null | sed -n -e '/BEGIN\ CERTIFICATE/,/END\ CERTIFICATE/ p' > ldap.pem
update-ca-trust force-enable
cp ldap.pem /etc/pki/ca-trust/source/anchors/
update-ca-trust extract
I also used keytool to add ldap.pem to the Java cacerts for the JRE used by SonarQube.
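For the keytool step, the import looks something like this (a sketch; assumes the default cacerts password 'changeit', and the cacerts path varies with your JDK/JRE layout):
# Import the LDAP server certificate into the trust store of the JRE SonarQube uses.
# 'ldap-server' is an arbitrary alias chosen for this example.
keytool -importcert -alias ldap-server -file ldap.pem \
  -keystore "$JAVA_HOME/jre/lib/security/cacerts" -storepass changeit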
Any ideas?
I found the problem. I needed to change ldap.bindDN to ldap.bindDn. :)
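The corrected line in sonar.properties (note the lowercase 'n'):
ldap.bindDn=uid=svcSonar,ou=SvcAccts,ou=People,dc=example,dc=com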
I'm trying to deploy my Rails application with Capistrano, but when it comes to "git ls-remote" I get the following error:
$ /usr/bin/env git ls-remote --heads git#git.<server>:<project>.git
/bin/bash: line 0: exec: corkscrew: not found
DEBUG [a5205e2a] ssh_exchange_identification: Connection closed by remote host
DEBUG [a5205e2a] fatal: The remote end hung up unexpectedly
If I run the command on the server directly there is no problem. I've also got a deploy SSH key for the "deployer" user in GitLab.
Corkscrew is located under ~/bin/corkscrew and is added to the PATH variable:
$ echo $PATH
/home/deployer/.rbenv/shims:/home/deployer/.rbenv/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/deployer/bin
$ corkscrew
corkscrew 2.0 (agroman@agroman.net)
usage: corkscrew <proxyhost> <proxyport> <desthost> <destport> [authfile]
$ which corkscrew
~/bin/corkscrew
Update:
Here is my ~/.ssh/config:
Host *
ProxyCommand corkscrew <server> 8088 %h %p ~/.ssh/proxyauth
While the ~/.ssh/proxyauth file contains the credentials of the proxy user.
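For reference, corkscrew expects the authfile to hold the proxy credentials as a single user:password line, e.g. (hypothetical values):
proxyuser:proxypassword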
If you need additional information please let me know.
The problem seems to be that ssh can't find the corkscrew executable; the ProxyCommand is run by ssh through a plain shell, which may not pick up the PATH additions from your login shell. I double-checked my local ~/.ssh/config file, and I use the full path to the corkscrew executable in there:
Host *
ProxyCommand /usr/local/bin/corkscrew <server> 8088 %h %p ~/.ssh/proxyauth
(Since I'm on OS X and have installed corkscrew through Homebrew, it's located in /usr/local/bin.)
Can you try updating your ~/.ssh/config to include the full path to the corkscrew executable? Something like this (I don't know whether the ~ will work; you might have to use the full path if it doesn't):
Host *
ProxyCommand ~/bin/corkscrew <server> 8088 %h %p ~/.ssh/proxyauth
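To confirm which ProxyCommand ssh actually picks up, verbose mode shows it (a quick check; keep your real host in place of <server>):
ssh -v git@git.<server> 2>&1 | grep -i proxy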
I have successfully set up BigCouch on two different machines. Both of them run locally very well. I then join them into a cluster using one or both of these commands:
curl -X PUT machine1:5986/nodes/bigcouch@machine2 -d {}
curl -X PUT machine2:5986/nodes/bigcouch@machine1 -d {}
I always receive positive results, and the nodes database contains the two documents bigcouch@machine2 and bigcouch@machine1. But in fact it always fails: I see this error message in the BigCouch console:
=ERROR REPORT==== 9-Dec-2011::20:01:40 ===
Error in process <0.3117.0> on node 'bigcouch@machine1.fr' with exit value: {{rexi_DOWN,noconnect},[{mem3_rep,rexi_call,2},{mem3_rep,replicate_batch,1},{mem3_rep,go,3},{mem3_rep,go,2}]}
<148>1 2011-12-09T19:01:40.559992Z machine1 twig <0.159.0> -------- - mem3_sync nodes -> 'bigcouch@machine2' {{rexi_DOWN,noconnect},[{mem3_rep,rexi_call,2},{mem3_rep,replicate_batch,1},{mem3_rep,go,3},{mem3_rep,go,2}]}
<148>1 2011-12-09T19:01:40.560106Z machine1 twig <0.159.0> -------- - mem3_sync dbs -> 'bigcouch@machine2' {{rexi_DOWN,noconnect},[{mem3_rep,rexi_call,2},{mem3_rep,replicate_batch,1},{mem3_rep,go,3},{mem3_rep,go,2}]}
<148>1 2011-12-09T19:01:40.560205Z machine1 twig <0.159.0> -------- - mem3_sync _users -> 'bigcouch@machine2' {{rexi_DOWN,noconnect},[{mem3_rep,rexi_call,2},{mem3_rep,replicate_batch,1},{mem3_rep,go,3},{mem3_rep,go,2}]}
[error] [emulator] [--------] Error in process <0.3198.0> on node 'bigcouch@machine2' with exit value: {{rexi_DOWN,noconnect},[{mem3_rep,rexi_call,2},{mem3_rep,replicate_batch,1},{mem3_rep,go,3},{mem3_rep,go,2}]}
<147>1 2011-12-09T19:01:45.560979Z machine1 twig emulator msg - Error in process <0.3198.0> on node 'bigcouch@machine1' with exit value: {{rexi_DOWN,noconnect},[{mem3_rep,rexi_call,2},{mem3_rep,replicate_batch,1},{mem3_rep,go,3},{mem3_rep,go,2}]}
Maybe it's firewalled? If yes, please tell me the port range the nodes need to connect to each other. If not, please explain what is happening and how to get them connected.
The documentation asks that the nodes can ping each other and that they are set to the same magic cookie. My machines can ping each other, but what is the magic cookie?
Occasionally you can see this error when a node is first connected, as there are various processes that receive update messages and monitor the other nodes, as well as an internal replicator. These messages are harmless, but if you see "noconnect" persistently then something is wrong.
On each instance there is a file, /etc/vm.args, in which you will see two values of interest: -name and -setcookie. The first, -name, corresponds to the doc id you must use when connecting the nodes, and the second is the magic cookie that must be the same on all the Erlang nodes for them to talk to one another. If this cookie isn't set it defaults to the value in ~/.erlang.cookie.
When you execute "make dev" it will build a 3 node cluster that you can inspect to see how these bits should be set.
Also, you only need to run the connect on one side, e.g. node2 to node1, as the internal replicator will sync the node databases across the cluster.
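To verify by hand that the two Erlang VMs can talk, you can start a throwaway node with the same cookie and ping the remote BigCouch node. A sketch, where 'monster' stands in for whatever -setcookie value is in your /etc/vm.args:
machine1$ erl -name test@machine1 -setcookie monster
1> net_adm:ping('bigcouch@machine2').
pong
A 'pang' reply means the nodes cannot reach each other (wrong cookie, hostname resolution, or firewall).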