Cannot run Rancher UI using Docker

I used the following command:
docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  --privileged \
  rancher/rancher:latest
The container is still running, but I cannot access the Rancher UI:
8e95a158842c rancher/rancher:latest "entrypoint.sh" 45 minutes ago Up 7 seconds 0.0.0.0:80->80/tcp, :::80->80/tcp, 0.0.0.0:443->443/tcp, :::443->443/tcp relaxed_chandrasekhar
Then I ran docker logs 8e95a158842c:
2021-11-04 22:25:56.455037 W | pkg/fileutil: check file permission: directory "management-state/etcd" exist, but the permission is "drwxr-xr-x". The recommended permission is "-rwx------" to prevent possible unprivileged access to the data.
2021-11-04 22:25:56.543162 I | etcdserver: restarting member 8e9e05c52164694d in cluster cdf818194e3a8c32 at commit index 1839
raft2021/11/04 22:25:56 INFO: 8e9e05c52164694d switched to configuration voters=()
raft2021/11/04 22:25:56 INFO: 8e9e05c52164694d became follower at term 17
raft2021/11/04 22:25:56 INFO: newRaft 8e9e05c52164694d [peers: [], term: 17, commit: 1839, applied: 0, lastindex: 1839, lastterm: 17]
2021-11-04 22:25:56.547839 W | auth: simple token is not cryptographically signed
2021-11-04 22:25:56.573956 I | etcdserver: starting server... [version: 3.4.15, cluster version: to_be_decided]
2021-11-04 22:25:56.580742 I | embed: listening for peers on 127.0.0.1:2380
raft2021/11/04 22:25:56 INFO: 8e9e05c52164694d switched to configuration voters=(10276657743932975437)
2021-11-04 22:25:56.582873 I | etcdserver/membership: added member 8e9e05c52164694d [http://localhost:2380] to cluster cdf818194e3a8c32
2021-11-04 22:25:56.583346 N | etcdserver/membership: set the initial cluster version to 3.4
2021-11-04 22:25:56.583568 I | etcdserver/api: enabled capabilities for version 3.4
raft2021/11/04 22:26:02 INFO: 8e9e05c52164694d is starting a new election at term 17
raft2021/11/04 22:26:02 INFO: 8e9e05c52164694d became candidate at term 18
raft2021/11/04 22:26:02 INFO: 8e9e05c52164694d received MsgVoteResp from 8e9e05c52164694d at term 18
raft2021/11/04 22:26:02 INFO: 8e9e05c52164694d became leader at term 18
raft2021/11/04 22:26:02 INFO: raft.node: 8e9e05c52164694d elected leader 8e9e05c52164694d at term 18
2021-11-04 22:26:02.051592 I | etcdserver: published {Name:default ClientURLs:[http://localhost:2379]} to cluster cdf818194e3a8c32
2021-11-04 22:26:02.052775 I | embed: ready to serve client requests
2021-11-04 22:26:02.059541 N | embed: serving insecure client requests on 127.0.0.1:2379, this is strongly discouraged!
2021/11/04 22:26:02 [INFO] Waiting for server to become available: Get "https://127.0.0.1:6443/version?timeout=15m0s": dial tcp 127.0.0.1:6443: connect: connection refused
2021/11/04 22:26:04 [INFO] Waiting for server to become available: the server is currently unable to handle the request
2021/11/04 22:26:16 [INFO] Running in single server mode, will not peer connections
2021-11-04 22:26:17.724466 W | etcdserver: read-only range request "key:\"/registry/apiextensions.k8s.io/customresourcedefinitions/\" range_end:\"/registry/apiextensions.k8s.io/customresourcedefinitions0\" " with result "range_response_count:92 size:445717" took too long (109.807669ms) to execute
2021/11/04 22:26:18 [INFO] Applying CRD features.management.cattle.io
2021/11/04 22:26:22 [INFO] Applying CRD navlinks.ui.cattle.io
2021/11/04 22:26:22 [INFO] Applying CRD clusters.management.cattle.io
2021/11/04 22:26:22 [INFO] Applying CRD apiservices.management.cattle.io
2021/11/04 22:26:23 [INFO] Applying CRD clusterregistrationtokens.management.cattle.io
2021/11/04 22:26:23 [INFO] Applying CRD settings.management.cattle.io
2021/11/04 22:26:24 [INFO] Applying CRD preferences.management.cattle.io
2021/11/04 22:26:24 [INFO] Applying CRD features.management.cattle.io
2021/11/04 22:26:25 [INFO] Applying CRD clusterrepos.catalog.cattle.io
2021/11/04 22:26:26 [INFO] Applying CRD operations.catalog.cattle.io
2021/11/04 22:26:31 [INFO] Applying CRD apps.catalog.cattle.io
2021-11-04 22:26:33.250474 W | etcdserver: read-only range request "key:\"/registry/apiextensions.k8s.io/customresourcedefinitions/\" range_end:\"/registry/apiextensions.k8s.io/customresourcedefinitions0\" " with result "range_response_count:92 size:445717" took too long (139.120063ms) to execute
2021/11/04 22:26:45 [INFO] Applying CRD fleetworkspaces.management.cattle.io
2021-11-04 22:26:47.449199 W | etcdserver: read-only range request "key:\"/registry/apiextensions.k8s.io/customresourcedefinitions/\" range_end:\"/registry/apiextensions.k8s.io/customresourcedefinitions0\" " with result "range_response_count:92 size:445717" took too long (321.346575ms) to execute
2021-11-04 22:26:52.656294 W | etcdserver: request "header:<ID:7587858304790119201 > txn:<compare:<target:MOD key:\"/registry/configmaps/kube-system/k3s\" mod_revision:1632 > success:<request_put:<key:\"/registry/configmaps/kube-system/k3s\" value_size:456 >> failure:<request_range:<key:\"/registry/configmaps/kube-system/k3s\" > >>" with result "size:16" took too long (107.766444ms) to execute
2021-11-04 22:27:03.165794 W | etcdserver: read-only range request "key:\"/registry/configmaps/kube-system/k3s\" " with result "range_response_count:1 size:515" took too long (138.87999ms) to execute
2021-11-04 22:27:03.182578 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/\" range_end:\"/registry/serviceaccounts0\" " with result "range_response_count:36 size:10156" took too long (196.135777ms) to execute
2021-11-04 22:27:21.345406 W | etcdserver: read-only range request "key:\"/registry/flowschemas/exempt\" " with result "range_response_count:1 size:879" took too long (241.774296ms) to execute
2021-11-04 22:27:21.633929 W | etcdserver: read-only range request "key:\"/registry/namespaces/kube-system\" " with result "range_response_count:1 size:340" took too long (248.96888ms) to execute
2021-11-04 22:27:30.019952 W | etcdserver: read-only range request "key:\"/registry/rolebindings/\" range_end:\"/registry/rolebindings0\" count_only:true " with result "range_response_count:0 size:7" took too long (102.695372ms) to execute
When I install Rancher on my laptop everything works normally, but when I try it on my VPS this error appears.
How can I fix it?
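Since the etcd and k3s startup logs above look broadly healthy, a few checks run on the VPS itself can narrow this down; a sketch, not a definitive fix (the firewall tooling varies by distro and provider):

```shell
# Does Rancher answer locally? (-k because the bootstrap certificate is self-signed)
curl -kis https://127.0.0.1/ | head -n 1
# Is anything listening on 80/443, and is it the Rancher container?
ss -ltnp | grep -E ':80|:443'
# If local access works but remote access does not, check the host firewall
# and the provider's security group, e.g. on Ubuntu: sudo ufw status
```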

I'm facing the same issue with the rancher/rancher:latest image as well. The rancher/rancher:v2.4-head image works for me, though:
docker pull rancher/rancher:v2.4-head
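After the pull, a sketch of running that tag, assuming the same flags and port mapping as the original command:

```shell
docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  --privileged \
  rancher/rancher:v2.4-head
```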

Related

Crunchy Postgres log messages

I am new to Crunchy Postgres, and recently I installed a Crunchy PostgresCluster in an OpenShift environment. After the cluster started, I had a look at the container log messages.
I also checked the startup.sh script, which is called during PostgreSQL startup. This shell script contains some lines (beginning with echo_info) that emit log messages, for example:
echo_info "Starting PostgreSQL.."
But I could not see this message in the logs.
NAME READY STATUS RESTARTS AGE ROLE
demo-instance1-4vtv-0 5/5 Running 0 7h36m replica
demo-instance1-dg7j-0 5/5 Running 0 7h36m replica
demo-instance1-f696-0 5/5 Running 0 7h36m master
:~$ oc logs -f demo-instance1-f696-0 -c database | more
2022-07-08 07:42:31,064 INFO: No PostgreSQL configuration items changed, nothing to reload.
2022-07-08 07:42:31,068 INFO: Lock owner: None; I am demo-instance1-f696-0
2022-07-08 07:42:31,383 INFO: trying to bootstrap a new cluster
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.
The database cluster will be initialized with locale "en_US.utf-8".
The default text search configuration will be set to "english".
Data page checksums are enabled.
fixing permissions on existing directory /pgdata/pg14 ... ok
creating directory /pgdata/pg14_wal ... ok
creating subdirectories ... ok
selecting dynamic shared memory implementation ... posix
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting default time zone ... UTC
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok
syncing data to disk ... ok
initdb: warning: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.
Success. You can now start the database server using:
/usr/pgsql-14/bin/pg_ctl -D /pgdata/pg14 -l logfile start
2022-07-08 07:42:35.953 UTC [92] LOG: pgaudit extension initialized
2022-07-08 07:42:35,955 INFO: postmaster pid=92
/tmp/postgres:5432 - no response
2022-07-08 07:42:35.998 UTC [92] LOG: redirecting log output to logging collector process
2022-07-08 07:42:35.998 UTC [92] HINT: Future log output will appear in directory "log".
/tmp/postgres:5432 - accepting connections
/tmp/postgres:5432 - accepting connections
2022-07-08 07:42:37,038 INFO: establishing a new patroni connection to the postgres cluster
2022-07-08 07:42:37,334 INFO: running post_bootstrap
2022-07-08 07:42:37,754 INFO: initialized a new cluster
2022-07-08 07:42:38,039 INFO: no action. I am (demo-instance1-f696-0), the leader with the lock
2022-07-08 07:42:48,504 INFO: no action. I am (demo-instance1-f696-0), the leader with the lock
2022-07-08 07:42:58,476 INFO: no action. I am (demo-instance1-f696-0), the leader with the lock
2022-07-08 07:43:08,497 INFO: no action. I am (demo-instance1-f696-0), the leader with the lock
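Note the HINT line in the output above: the postmaster redirects its log output to the logging collector, so later server log lines go to files under the data directory rather than to the container stdout that oc logs reads. A hedged sketch for finding them (the pod and container names and the /pgdata/pg14 path are taken from the output above; the exact file names will differ):

```shell
# List the logging collector's files inside the database container, then tail them
oc exec demo-instance1-f696-0 -c database -- ls -lt /pgdata/pg14/log
oc exec demo-instance1-f696-0 -c database -- sh -c 'tail -n 50 /pgdata/pg14/log/*.log'
```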

Rancher 2.5.5 failed to start cluster controllers c-dbk7g: context canceled

I installed Rancher on a single node via a Docker container. I have three etcd and control-plane hosts and three worker hosts. I get this message in the Rancher GUI:
This cluster is currently Unavailable; areas that directly interact with it will not be available until the API is ready.
This error appears in the Rancher container logs:
failed to start cluster controllers c-dbk7g: context canceled
2021/06/05 20:05:18 [ERROR] error syncing 'c-dbk7g': handler cluster-deploy: Get "https://192.168.0.153:6443/apis/apps/v1/namespaces/cattle-system/daemonsets/cattle-node-agent": waiting for cluster [c-dbk7g] agent to connect, requeuing
2021-06-05 20:06:34.914592 I | mvcc: store.index: compact 55883328
2021-06-05 20:06:34.957157 I | mvcc: finished scheduled compaction at 55883328 (took 41.306809ms)
2021/06/05 20:07:07 [INFO] Stopping cluster agent for c-dbk7g
2021/06/05 20:07:07 [ERROR] failed to start cluster controllers c-dbk7g: context canceled
W0605 20:07:33.746643 33 watcher.go:199] watch chan error: etcdserver: mvcc: required revision has been compacted
2021/06/05 20:09:12 [INFO] Stopping cluster agent for c-dbk7g
2021/06/05 20:09:12 [ERROR] failed to start cluster controllers c-dbk7g: context canceled
W0605 20:09:58.886016 33 watcher.go:199] watch chan error: etcdserver: mvcc: required revision has been compacted
W0605 20:10:27.733243 33 watcher.go:199] watch chan error: etcdserver: mvcc: required revision has been compacted
2021/06/05 20:11:18 [INFO] Stopping cluster agent for c-dbk7g
2021/06/05 20:11:18 [ERROR] failed to start cluster controllers c-dbk7g: context canceled
2021-06-05 20:11:34.920593 I | mvcc: store.index: compact 55885185
2021-06-05 20:11:34.960779 I | mvcc: finished scheduled compaction at 55885185 (took 38.698053ms)
2021/06/05 20:13:26 [INFO] Stopping cluster agent for c-dbk7g
2021/06/05 20:13:26 [ERROR] failed to start cluster controllers c-dbk7g: context canceled
2021/06/05 20:15:39 [INFO] Stopping cluster agent for c-dbk7g
2021/06/05 20:15:39 [ERROR] failed to start cluster controllers c-dbk7g: context canceled
2021-06-05 20:16:34.926857 I | mvcc: store.index: compact 55887037
2021-06-05 20:16:34.967700 I | mvcc: finished scheduled compaction at 55887037 (took 39.686485ms)
W0605 20:16:46.170323 33 watcher.go:199] watch chan error: etcdserver: mvcc: required revision has been compacted
W0605 20:17:23.630669 33 watcher.go:199] watch chan error: etcdserver: mvcc: required revision has been compacted
2021/06/05 20:17:40 [INFO] Stopping cluster agent for c-dbk7g
2021/06/05 20:17:40 [ERROR] failed to start cluster controllers c-dbk7g: context canceled
W0605 20:19:20.022226 33 watcher.go:199] watch chan error: etcdserver: mvcc: required revision has been compacted
2021/06/05 20:19:52 [INFO] Stopping cluster agent for c-dbk7g
2021/06/05 20:19:52 [ERROR] failed to start cluster controllers c-dbk7g: context canceled
2021-06-05 20:21:34.933216 I | mvcc: store.index: compact 55888893
2021-06-05 20:21:34.973827 I | mvcc: finished scheduled compaction at 55888893 (took 38.986348ms)
2021/06/05 20:21:45 [INFO] Stopping cluster agent for c-dbk7g
2021/06/05 20:21:45 [ERROR] failed to start cluster controllers c-dbk7g: context canceled
2021/06/05 20:22:39 [ERROR] error syncing 'c-dbk7g': handler cluster-deploy: Get "https://192.168.0.153:6443/apis/apps/v1/namespaces/cattle-system/daemonsets/cattle-node-agent": waiting for cluster [c-dbk7g] agent to connect, requeuing
W0605 20:22:53.276338 33 watcher.go:199] watch chan error: etcdserver: mvcc: required revision has been compacted
2021/06/05 20:23:36 [INFO] Stopping cluster agent for c-dbk7g
2021/06/05 20:23:36 [ERROR] failed to start cluster controllers c-dbk7g: context canceled
2021/06/05 20:25:57 [INFO] Stopping cluster agent for c-dbk7g
2021/06/05 20:25:57 [ERROR] failed to start cluster controllers c-dbk7g: context canceled
2021-06-05 20:26:34.940664 I | mvcc: store.index: compact 55890751
2021-06-05 20:26:34.981048 I | mvcc: finished scheduled compaction at 55890751 (took 38.961021ms)
W0605 20:26:40.871202 33 watcher.go:199] watch chan error: etcdserver: mvcc: required revision has been compacted
2021/06/05 20:27:59 [INFO] Stopping cluster agent for c-dbk7g
2021/06/05 20:27:59 [ERROR] failed to start cluster controllers c-dbk7g: context canceled
W0605 20:29:51.284898 33 watcher.go:199] watch chan error: etcdserver: mvcc: required revision has been compacted
2021/06/05 20:30:00 [INFO] Stopping cluster agent for c-dbk7g
2021/06/05 20:30:00 [ERROR] failed to start cluster controllers c-dbk7g: context canceled
W0605 20:30:08.914160 33 watcher.go:199] watch chan error: etcdserver: mvcc: required revision has been compacted
2021-06-05 20:31:34.947562 I | mvcc: store.index: compact 55892605
2021-06-05 20:31:34.986801 I | mvcc: finished scheduled compaction at 55892605 (took 38.106291m

"orderer" node / Docker container exited a few seconds after running docker-compose-cli.yaml

I am new to Hyperledger Fabric. I used the byfn example, which worked fine, and I am now working on my own network. I created crypto-config, config.tx, and all the Docker files (including base) as in the byfn example.
Everything works fine until I run the command "docker-compose -f docker-compose-cli.yaml up -d":
all the nodes are generated, but the orderer node fails within a few seconds.
I think the problem could be in my artifacts/genesis.block file, but I could not solve it.
orderer.expleoFabric.com | 2020-05-21 16:17:59.624 UTC [orderer.common.server] initializeServerConfig -> INFO 003 Starting orderer with TLS enabled
orderer.expleoFabric.com | 2020-05-21 16:17:59.741 UTC [orderer.common.server] Main -> PANI 004 Failed validating bootstrap block: initializing configtx manager failed: bad channel ID: 'Orderer-channel' contains illegal characters
orderer.expleoFabric.com | panic: Failed validating bootstrap block: initializing configtx manager failed: bad channel ID: 'Orderer-channel' contains illegal characters
orderer.expleoFabric.com |
This is from my logs, but I could not find 'Orderer-channel' in any of my files.
A channel ID can only contain lowercase alphabetical characters.
For more information: https://github.com/hyperledger/fabric/blob/0c3f3f78178f8a639374fba1a12344f381877459/common/configtx/validator.go#L72..L74
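The check that trips the orderer can be sketched as a small shell function; the pattern below mirrors the allowed-characters rule in the validator linked above (a lowercase letter first, then lowercase letters, digits, dots, and dashes), but treat the exact regex as an approximation:

```shell
# Approximate Fabric's channel ID check: must start with a lowercase letter
# and may contain only lowercase letters, digits, dots, and dashes after that.
is_valid_channel_id() {
  printf '%s' "$1" | grep -Eq '^[a-z][a-z0-9.-]*$'
}

is_valid_channel_id "Orderer-channel" || echo "rejected: Orderer-channel"  # the uppercase 'O' is illegal
is_valid_channel_id "orderer-channel" && echo "accepted: orderer-channel"
```

Renaming the channel to all lowercase in the artifacts and regenerating the genesis block is the usual way out.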

Jenkins - Unexpected executor death

I see all my executors frequently changing to the Dead state on one of my Jenkins slave machines (Windows 2008 R2 SP2).
Jenkins ver. 1.651.3
I have restarted the Jenkins server as well as the service.
Error logs:
Unexpected executor death
java.io.IOException: Failed to create a temporary file in /var/lib/jenkins/jobs/ABCD/jobs/EFGH/jobs/Build
at hudson.util.AtomicFileWriter.<init>(AtomicFileWriter.java:68)
at hudson.util.AtomicFileWriter.<init>(AtomicFileWriter.java:55)
at hudson.util.TextFile.write(TextFile.java:118)
at hudson.model.Job.saveNextBuildNumber(Job.java:293)
at hudson.model.Job.assignBuildNumber(Job.java:351)
at hudson.model.Run.<init>(Run.java:284)
at hudson.model.AbstractBuild.<init>(AbstractBuild.java:167)
at hudson.model.Build.<init>(Build.java:92)
at hudson.model.FreeStyleBuild.<init>(FreeStyleBuild.java:34)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at jenkins.model.lazy.LazyBuildMixIn.newBuild(LazyBuildMixIn.java:175)
at hudson.model.AbstractProject.newBuild(AbstractProject.java:1018)
at hudson.model.AbstractProject.createExecutable(AbstractProject.java:1209)
at hudson.model.AbstractProject.createExecutable(AbstractProject.java:144)
at hudson.model.Executor$1.call(Executor.java:364)
at hudson.model.Executor$1.call(Executor.java:346)
at hudson.model.Queue._withLock(Queue.java:1365)
at hudson.model.Queue.withLock(Queue.java:1230)
at hudson.model.Executor.run(Executor.java:346)
Caused by: java.io.IOException: Permission denied
at java.io.UnixFileSystem.createFileExclusively(Native Method)
at java.io.File.createNewFile(File.java:1006)
at java.io.File.createTempFile(File.java:1989)
at hudson.util.AtomicFileWriter.<init>(AtomicFileWriter.java:66)
... 21 more
I see this error log in my slave machine
INFO: File download attempt 1
Oct 17, 2017 10:32:00 AM com.microsoft.tfs.core.clients.versioncontrol.VersionControlClient downloadFileToStreams
INFO: File download attempt 1
Oct 17, 2017 10:32:00 AM com.microsoft.tfs.core.ws.runtime.client.SOAPService executeSOAPRequestInternal
INFO: SOAP method='UpdateLocalVersion', status=200, content-length=367, server-wait=402 ms, parse=0 ms, total=402 ms, throughput=913 B/s, gzip
Oct 17, 2017 10:32:00 AM com.microsoft.tfs.core.clients.versioncontrol.VersionControlClient downloadFileToStreams
INFO: File download attempt 1
Oct 17, 2017 10:32:00 AM com.microsoft.tfs.core.clients.versioncontrol.VersionControlClient downloadFileToStreams
INFO: File download attempt 1
Oct 17, 2017 10:32:00 AM com.microsoft.tfs.core.clients.versioncontrol.VersionControlClient downloadFileToStreams
INFO: File download attempt 1
Can you please check the owner of the path /var/lib/jenkins/jobs/ABCD/jobs/EFGH/jobs/Build? If it was created manually, you will get a permission-denied error whenever the owner is not the Jenkins user. Also check for free disk space on the server as well as the agent, and try rebooting the slave agent; that has helped at times.
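A quick way to check both conditions from a shell on the master; the jenkins:jenkins owner below is the usual default and is an assumption about this install:

```shell
# Who owns the job directory, and how full is the disk?
ls -ld /var/lib/jenkins/jobs/ABCD/jobs/EFGH/jobs/Build
df -h /var/lib/jenkins
# If the owner is wrong, hand the tree back to the Jenkins user (assumed name):
# chown -R jenkins:jenkins /var/lib/jenkins/jobs/ABCD
```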
How long are the real job names for ABCD and EFGH?
I've run into the 260 character maximum path length with Jenkins on Windows 2008 R2 before.
The path in:
java.io.IOException: Failed to create a temporary file in /var/lib/jenkins/jobs/ABCD/jobs/EFGH/jobs/Build
with the three /jobs in it seems strange to me. In Jenkins it would normally be:
+- /var/lib/jenkins/jobs
+- ABCD
| +- builds
| | +- ...
| +- ...
+- EFGH
| +- builds
| | +- ...
| +- ...
+- Build
+- builds
| +- ...
+- ...
Maybe there's some misconfiguration concerning paths, and Jenkins tries a mkdir /var/lib/jenkins/jobs/ABCD/jobs/EFGH/jobs/Build for which the Jenkins user, or the user under which the job runs, doesn't have permission.
See also File permissions and attributes:
| w | ... | The directory's contents can be modified (create new files or folders; [...]); requires the execute permission to be also set, otherwise this permission has no effect. |
In my situation, this happened because the server was very low on space. Click on "Build Executor Status" from the dashboard and see if there is low disk space or 0 swap space. Try to free up some space. Then restart the Jenkins server / service and try again.

JMeter Maven Plugin - remote test machine cannot be configured

I keep getting the following errors (log below) when I run JMeter tests through Jenkins on remote slave machines:
[INFO] -------------------------------------------------------
[INFO] P E R F O R M A N C E T E S T S
[INFO] -------------------------------------------------------
[INFO]
[INFO]
[info]
[debug] JMeter is called with the following command line arguments: -n -t C:\Performance_Framework\Project\src\test\jmeter\Example.jmx -l C:\Performance_Framework\Project\target\jmeter\results\Example.jtl -d C:\Performance_Framework\Project\target\jmeter -L DEBUG -j C:\Performance_Framework\CMS\target\jmeter\logs\Example.jmx.log -r -R 10.0.20.100,10.0.20.101 -X -Djava.rmi.server.hostname 10.0.20.200 -Dsun.net.http.allowRestrictedHeaders true
[info] Executing test: Example.jmx
[info] Creating summariser <summary>
[info] Created the tree successfully using C:\Performance_Framework\Project\src\test\jmeter\Example.jmx
[info] Configuring remote engine: 10.0.20.100
[info] error unmarshalling return; nested exception is:
[info] java.lang.ClassNotFoundException: org.apache.jmeter.engine.RemoteJMeterEngineImpl_Stub (no security manager: RMI class loader disabled)
[info] Failed to configure 10.0.20.100
[info] Configuring remote engine: 10.0.20.101
[info] error unmarshalling return; nested exception is:
[info] java.lang.ClassNotFoundException: org.apache.jmeter.engine.RemoteJMeterEngineImpl_Stub (no security manager: RMI class loader disabled)
[info] Failed to configure 10.0.20.101
[info] Stopping remote engines
[info] Remote engines have been stopped
[info] Error in NonGUIDriver java.lang.RuntimeException: Following remote engines could not be configured:[10.0.20.100, 10.0.20.101]
[info] Completed Test: Example.jmx
Now my current POM settings for the machines:
<configuration>
--------------------------------
<propertiesSystem>
<java.rmi.server.hostname>10.0.20.200</java.rmi.server.hostname>
</propertiesSystem>
<remoteConfig>
<startServersBeforeTests>true</startServersBeforeTests>
<serverList>10.0.20.100,10.0.20.101</serverList>
<stopServersAfterTests>true</stopServersAfterTests>
</remoteConfig>
</configuration>
If I run the tests from the JMeter GUI everything is OK; the remote hosts start and execute the tests successfully.
I think everything is set correctly, and jmeter-server.bat is started on each slave before the tests run.
Also, there's something I don't understand in this sentence from the jmeter-maven-plugin wiki:
runremote command being send to JMeter which will start up any remote
servers you have defined in your jmeter.properties when your first
test starts.
Which jmeter.properties file, the project's? If so, I don't know how that could be defined, as the target folder is cleaned on every test run and the resulting jmeter.properties file is derived from it.
Later edit: I even created a jmeter.properties file in the src/test/jmeter dir and defined the remote hosts there, but still nothing.
So what do you suggest, guys?
I somehow resolved the connection issue by editing the jmeter-server file and also adding java.rmi.server.hostname there.
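That edit typically means pinning the RMI hostname on each slave; a sketch of the relevant line in the jmeter-server script (the IP is the slave's own reachable address and is an assumption here):

```shell
# In jmeter-server (or the equivalent setting in jmeter-server.bat),
# uncomment and set the RMI host definition to the slave's reachable IP:
RMI_HOST_DEF=-Djava.rmi.server.hostname=10.0.20.100
```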
But what I don't like is the test execution time; it's horrible. Even with one thread, which should finish in under 1-2 seconds, it still shows that it is trying to receive the shutdown message.
[INFO] -------------------------------------------------------
[INFO] P E R F O R M A N C E T E S T S
[INFO] -------------------------------------------------------
[INFO]
[INFO]
[info]
[debug] JMeter is called with the following command line arguments: -n -t C:\Performance_Framework\CMS\src\test\jmeter\Example.jmx -l C:\Performance_Framework\CMS\target\jmeter\results\Example.jtl -d C:\Performance_Framework\CMS\target\jmeter -L DEBUG -q C:\Performance_Framework\CMS\src\test\jmeter\jmeter.properties -j C:\Performance_Framework\CMS\target\jmeter\logs\Example.jmx.log -r -X -Djava.rmi.server.hostname 10.0.20.200 -Dsun.net.http.allowRestrictedHeaders true
[info] Executing test: SearchForModule.jmx
[info] Creating summariser <summary>
[info] Created the tree successfully using C:\Performance_Framework\CMS\src\test\jmeter\SearchForModule.jmx
[info] Configuring remote engine: 10.0.20.100
[info] Configuring remote engine: 10.0.20.101
[info] Starting remote engines
[info] Starting the test # Thu Jul 30 13:48:23 BST 2015 (1438260503717)
[info] Remote engines have been started
[info] Waiting for possible shutdown message on port 4445
Is something wrong on the Jenkins side, or with the Tomcat webapp?
The first thing you need to fix is the server addresses:
https://github.com/jmeter-maven-plugin/jmeter-maven-plugin/wiki/Remote-Server-Configuration
10.0.x.100 and 10.0.x.101 are not correct IP addresses; this is what you can see in your error log.
