ThingsBoard installation using Docker on Ubuntu

I'm facing issues when installing ThingsBoard using docker-compose on Ubuntu. The images are pulled correctly and the containers seem to be up, but the logs show:
Logs for thingsboard/application:1.2.2:
thingsboard-db-schema container is still in progress. waiting until it completed...
thingsboard-db-schema container is still in progress. waiting until it completed...
thingsboard-db-schema container is still in progress. waiting until it completed...
thingsboard-db-schema container is still in progress. waiting until it completed...
thingsboard-db-schema container is still in progress. waiting until it completed...
thingsboard-db-schema container is still in progress. waiting until it completed...
Logs for thingsboard/thingsboard-db-schema:1.2.2:
Wait for Cassandra...
Failed to resolve "db".
WARNING: No targets were specified, so 0 hosts scanned.
Wait for Cassandra...
Failed to resolve "db".
WARNING: No targets were specified, so 0 hosts scanned.
Wait for Cassandra...
It seems that the first container is waiting for Cassandra to be up, which never happens.
Any suggestions?
Thanks in advance

Please check the output of the DB container using the command 'docker-compose logs -f db' and verify that Cassandra is ready to accept clients on port 9042:
db_1 | INFO 11:02:07 Waiting for gossip to settle before accepting client requests...
db_1 | INFO 11:02:15 No gossip backlog; proceeding
db_1 | INFO 11:02:15 Netty using native Epoll event loop
db_1 | INFO 11:02:15 Using Netty Version: [netty-buffer=netty-buffer-4.0.39.Final.38bdf86, netty-codec=netty-codec-4.0.39.Final.38bdf86, netty-codec-haproxy=netty-codec-haproxy-4.0.39.Final.38bdf86, netty-codec-http=netty-codec-http-4.0.39.Final.38bdf86, netty-codec-socks=netty-codec-socks-4.0.39.Final.38bdf86, netty-common=netty-common-4.0.39.Final.38bdf86, netty-handler=netty-handler-4.0.39.Final.38bdf86, netty-tcnative=netty-tcnative-1.1.33.Fork19.fe4816e, netty-transport=netty-transport-4.0.39.Final.38bdf86, netty-transport-native-epoll=netty-transport-native-epoll-4.0.39.Final.38bdf86, netty-transport-rxtx=netty-transport-rxtx-4.0.39.Final.38bdf86, netty-transport-sctp=netty-transport-sctp-4.0.39.Final.38bdf86, netty-transport-udt=netty-transport-udt-4.0.39.Final.38bdf86]
db_1 | INFO 11:02:15 Starting listening for CQL clients on /0.0.0.0:9042 (unencrypted)...
The output should look like the logs above.
Additionally, verify that no errors occurred during the Cassandra startup.
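If the logs look like the above but the thingsboard-db-schema container still fails to resolve the db host, the checks below can help confirm that Cassandra is actually reachable from inside the Compose network. This is a minimal sketch, assuming the Cassandra service is named db in the compose file, as the error message suggests:
$ docker-compose logs db | grep "Starting listening for CQL"   # confirms Cassandra reached the ready state
$ docker-compose exec db cqlsh -e "DESCRIBE CLUSTER"           # prints cluster info once port 9042 accepts clients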

Related

Hyperledger Fabric peer container fails to start after network was shut down

I'm using the test-network from the Hyperledger Fabric samples at LTS version 2.2.3. I bring up the network with ./network.sh up createChannel -s couchdb, followed by the command for adding the third org in the addOrg3 folder: ./addOrg3.sh up -c mychannel -s couchdb. Sometimes I want a fresh start when working on a smart contract, so I bring down the network with ./network.sh down. Then, when I restart the network with the previously mentioned commands, sometimes one of the peer nodes will simply fail to start. The log shows only this:
2022-02-18 13:10:25.087 UTC [nodeCmd] serve -> INFO 001 Starting peer:
Version: 2.2.3
Commit SHA: 94ace65
Go version: go1.15.7
OS/Arch: linux/amd64
Chaincode:
Base Docker Label: org.hyperledger.fabric
Docker Namespace: hyperledger
2022-02-18 13:10:25.087 UTC [peer] getLocalAddress -> INFO 002 Auto-detected peer address: 172.18.0.9:11051
2022-02-18 13:10:25.088 UTC [peer] getLocalAddress -> INFO 003 Returning peer0.org3.example.com:11051
I tried connecting to the container and attaching to the process peer node start, which is the process that brings up the container, to get more info on why it's hanging. But since that is the init process with PID 1, one can neither attach to it nor kill it. Killing the container does not work either, as it simply does not respond, so I have to kill the whole Docker engine. I tried the following without success: purging Docker with docker system prune -a --volumes, restarting my computer, and re-downloading the fabric folder and binaries. Still the same error occurs. How is this possible? What information is still on my machine that makes it fail? At least I assume there is something on my machine, since the same freshly downloaded code works on another machine, and after many rounds of pruning, restarting, and re-downloading it eventually works again on my computer as well.
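For reference, the inspection attempts described above correspond roughly to the following commands (a sketch; the container name peer0.org3.example.com follows the test-network defaults and may differ in your setup):
$ docker exec -it peer0.org3.example.com sh   # open a shell in the hung container, if it still responds
/ # ps                                        # shows peer node start as PID 1; PID 1 can be neither attached to nor killed from inside
$ docker kill peer0.org3.example.com          # may itself hang when the container is unresponsive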

Single node ejabberd on Kubernetes - ejabberdctl status shows node down

I'm trying to deploy the ejabberd Docker image in Kubernetes with the following folders mounted from a persistent volume:
/home/ejabberd/logs
/home/ejabberd/conf
/home/ejabberd/database
Using an init container, I populated the conf directory with our configuration files and the database folder with the contents from the Docker image. After setting the permissions, we were able to start the ejabberd service; the logs say that the services (on ports 5222, 5269, 5280) are ready.
But when I check the XMPP server status in the container using "ejabberdctl status", the output says "node down".
=========== ejabberd.log ===========
2020-12-16 09:18:58.477630+00:00 [info] <0.3406.0>@mod_mqtt:init_topic_cache/2:611 Building MQTT cache for mydomain this may take a while
2020-12-16 09:18:59.087380+00:00 [info] <0.483.0>@ejabberd_mnesia:create/2:267 Creating Mnesia ram table 'bytestream'
2020-12-16 09:19:01.193203+00:00 [info] <0.126.0>@ejabberd_cluster_mnesia:wait_for_sync/1:123 Waiting for Mnesia synchronization to complete
2020-12-16 09:19:02.401537+00:00 [info] <0.126.0>@ejabberd_app:start/2:62 ejabberd 20.4.0 is started in the node 'ejabberd@mydomain' in 49.77s
2020-12-16 09:19:02.403414+00:00 [info] <0.601.0>@ejabberd_listener:init/4:159 Start accepting TCP connections at [::]:5222 for ejabberd_c2s
2020-12-16 09:19:02.403479+00:00 [info] <0.602.0>@ejabberd_listener:init/4:159 Start accepting TCP connections at [::]:5269 for ejabberd_s2s_in
2020-12-16 09:19:02.403956+00:00 [info] <0.603.0>@ejabberd_listener:init/4:159 Start accepting TLS connections at [::]:5443 for ejabberd_http
2020-12-16 09:19:02.403999+00:00 [info] <0.604.0>@ejabberd_listener:init/4:159 Start accepting TCP connections at [::]:5280 for ejabberd_http
2020-12-16 09:19:02.404098+00:00 [info] <0.605.0>@ejabberd_listener:init/4:159 Start accepting TCP connections at [::]:1883 for mod_mqtt
2020-12-16 09:19:02.404345+00:00 [info] <0.3418.0>@ejabberd_listener:init/4:159 Start accepting TCP connections at 10.42.8.15:7777 for mod_proxy65_stream
=========== ejabberdctl status ===========
~ $ ./bin/ejabberdctl status
Failed RPC connection to the node 'ejabberd@mydomain': nodedown
Commands to start an ejabberd node:
start - Start an ejabberd node in server mode
debug - Attach an interactive Erlang shell to a running ejabberd node
iexdebug - Attach an interactive Elixir shell to a running ejabberd node
live - Start an ejabberd node in live (interactive) mode
iexlive - Start an ejabberd node in live (interactive) mode, within an Elixir shell
foreground - Start an ejabberd node in server mode (attached)
Optional parameters when starting an ejabberd node:
--config-dir dir Config ejabberd: /home/ejabberd/conf
--config file Config ejabberd: /home/ejabberd/conf/ejabberd.yml
--ctl-config file Config ejabberdctl: /home/ejabberd/conf/ejabberdctl.cfg
--logs dir Directory for logs: /home/ejabberd/logs
--spool dir Database spool dir: /home/ejabberd/database/ejabberd@mydomain
--node nodename ejabberd node name: ejabberd@mydomain
If anyone has tried ejabberd on Kubernetes, please share your thoughts on this issue.
Thanks in advance
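One thing worth checking when the log shows the node as 'ejabberd@mydomain' but ejabberdctl reports nodedown is whether ejabberdctl targets the same node name and Erlang cookie as the running node. A rough sketch to run inside the pod (the cookie path is an assumption based on the image's home directory):
$ grep ERLANG_NODE /home/ejabberd/conf/ejabberdctl.cfg   # the node name ejabberdctl will try to reach
$ ls -l /home/ejabberd/.erlang.cookie                    # assumed cookie location; a stale cookie in a persisted volume can break the RPC connection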

Hyperledger Fabric chaincode dev mode connection error

I was following this tutorial on building a simple chaincode in dev mode here.
I'm stuck at this point.
First I cleared everything with docker rm -f $(docker ps -aq).
When I enter docker-compose -f docker-compose-simple.yaml up, it ends with this error:
Error: Error getting broadcast client: Error connecting to orderer:7050 due to context deadline exceeded
cli | Usage:
cli | peer channel create [flags]
cli |
orderer | 2017-12-05 20:44:53.681 UTC [orderer/common/deliver] Handle -> WARN 0d1 Error reading from stream: rpc error: code = Canceled desc = context canceled
orderer | 2017-12-05 20:44:53.681 UTC [orderer/main] func1 -> DEBU 0d2 Closing Deliver stream
cli exited with code 1
What is causing the problem? Is it a DNS problem, so that the cli container can't find the orderer?

Alright, I finally figured that out. The problem was the network: my virtual machine's network was set to NAT, and nothing worked. I set it to bridged mode and everything worked fine.
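For anyone who wants to rule out DNS inside the Compose network first, checks along these lines can help (a sketch; the cli and orderer names come from the sample compose file, and bash availability in the cli image is an assumption):
$ docker exec cli ping -c 1 orderer                                    # should resolve orderer through the Compose network DNS
$ docker exec cli bash -c 'echo > /dev/tcp/orderer/7050 && echo open'  # probes port 7050 using bash's /dev/tcp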

SonarQube docker container can't start, elasticsearch issue

I'm trying to run the official SonarQube Docker container locally, using the command provided here:
https://hub.docker.com/_/sonarqube/
It exits about one minute after it starts. The logs report an Elasticsearch connectivity issue:
2017.09.05 08:16:40 INFO web[][o.e.client.transport] [Edwin Jarvis] failed to connect to node [{#transport#-1}{127.0.0.1}{127.0.0.1:9001}], removed from nodes list
org.elasticsearch.transport.ConnectTransportException: [][127.0.0.1:9001] connect_timeout[30s]
.....
Caused by: java.net.ConnectException: Connection refused: /127.0.0.1:9001
.....
... 3 common frames omitted
2017.09.05 08:17:10 INFO app[][o.s.a.SchedulerImpl] Process [web] is stopped
2017.09.05 08:17:10 INFO app[][o.s.a.SchedulerImpl] SonarQube is stopped
It turned out the SonarQube container didn't have enough resources. I shut down other Docker containers and it works for me now.
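A quick way to confirm a resource problem like this is to watch container usage while SonarQube boots and then check whether the kernel OOM-killed it (the container name sonarqube is an assumption):
$ docker stats --no-stream                             # one-shot snapshot of memory/CPU per running container
$ docker inspect -f '{{.State.OOMKilled}}' sonarqube   # prints true if the container was killed for exceeding its memory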

Docker container won't start with MySQL Docker image

I use the official MySQL Docker image to create a series of database containers (container1 to container11). After setting up, all containers run fine until container9. Container10 only stays up for about a minute and then stops again; checking it with docker logs shows nothing. If I stop container9 and restart container10, it runs fine again. The situation only seems to happen when I have 9 MySQL containers and try to raise up the 10th. If I stop one of them and raise it up again, there is no problem. Is it a bug, or am I missing some setting for the Docker bridge?
root@ec8dcb82f64d:/dev/shm# docker restart f4801b57c4cc
f4801b57c4cc
root@ec8dcb82f64d:/dev/shm# docker ps -a | grep f4801b57c4cc
f4801b57c4cc mysql/mysql-server:5.7 "/entrypoint.sh my..." 2 weeks ago Exited (1) 3 seconds ago db
root@ec8dcb82f64d:/dev/shm# docker logs f4801b57c4cc
Initializing database
Database initialized
MySQL init process in progress...
Warning: Unable to load '/usr/share/zoneinfo/iso3166.tab' as time zone. Skipping it.
Warning: Unable to load '/usr/share/zoneinfo/zone.tab' as time zone. Skipping it.
mysql: [Warning] Using a password on the command line interface can be insecure.
mysql: [Warning] Using a password on the command line interface can be insecure.
mysql: [Warning] Using a password on the command line interface can be insecure.
mysql: [Warning] Using a password on the command line interface can be insecure.
/entrypoint.sh: ignoring /docker-entrypoint-initdb.d/*
MySQL init process done. Ready for start up.
root@ec8dcb82f64d:/dev/shm#
I think I hit on the solution after one week, even though I do not really understand what happened. The following is what I tried; so far, I can bring up as many as 20 MySQL containers with no problems.
1: Try to create a dummy MySQL container for testing
$ docker run -e MYSQL_ROOT_PASSWORD=password mysql
Unable to find image 'mysql:latest' locally
latest: Pulling from library/mysql
...
Initializing database
2017-08-09T17:58:30.034595Z 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
2017-08-09T17:58:30.039274Z 0 [Warning] InnoDB: io_setup() failed with EAGAIN. Will make 5 attempts before giving up.
2017-08-09T17:58:30.039294Z 0 [Warning] InnoDB: io_setup() attempt 1.
2017-08-09T17:58:30.539495Z 0 [Warning] InnoDB: io_setup() attempt 2.
2017-08-09T17:58:31.039701Z 0 [Warning] InnoDB: io_setup() attempt 3.
2017-08-09T17:58:31.539902Z 0 [Warning] InnoDB: io_setup() attempt 4.
2017-08-09T17:58:32.040115Z 0 [Warning] InnoDB: io_setup() attempt 5.
2017-08-09T17:58:32.540330Z 0 [ERROR] InnoDB: io_setup() failed with EAGAIN after 5 attempts.
2017-08-09T17:58:32.540378Z 0 [ERROR] InnoDB: Cannot initialize AIO sub-system
2017-08-09T17:58:32.540390Z 0 [ERROR] InnoDB: Plugin initialization aborted with error Generic error
2017-08-09T17:58:32.540401Z 0 [ERROR] Plugin 'InnoDB' init function returned error.
2017-08-09T17:58:32.540408Z 0 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
2017-08-09T17:58:32.540412Z 0 [ERROR] Failed to initialize plugins.
2017-08-09T17:58:32.540415Z 0 [ERROR] Aborting
and hit the error io_setup() failed with EAGAIN.
2: Examine the current value of aio-max-nr
$ sysctl fs.aio-max-nr
fs.aio-max-nr = 65536
3: Increase the value of aio-max-nr to 2097152 (see the persistence note after the steps)
$ sudo sysctl -w fs.aio-max-nr=2097152
4: Start the MySQL service
5: Try to create more MySQL containers and bring up the original ones, with no problems
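Note that sysctl -w only sets the value until the next reboot; to make it persistent, it can be written to a sysctl configuration file (a sketch; the file name 99-aio.conf is arbitrary):
$ echo 'fs.aio-max-nr = 2097152' | sudo tee /etc/sysctl.d/99-aio.conf
$ sudo sysctl --system    # reload settings from all sysctl configuration files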
Run docker events in the background and then try starting your 10th container; you will surely see what's going wrong. Below is an example of starting an exited container and the events emitted as it starts and dies. Commands in sequence:
~$ sudo docker events &
[1] 9414
~$ sudo docker start 48137950f1b7
2017-08-03T00:01:18.971406558+05:30 network connect c79096ff0fef046d24b2a23907b3cc82c4df0838db2475909f8fa9f796a0418e (container=48137950f1b714797143529d63ec7221d3cbcd38bb6c8d20a241b06ddbd3d27a, name=bridge, type=bridge)
2017-08-03T00:01:19.305063392+05:30 container start 48137950f1b714797143529d63ec7221d3cbcd38bb6c8d20a241b06ddbd3d27a (image=ubuntu, name=modest_northcutt)
48137950f1b7
2017-08-03T00:01:19.305915636+05:30 container die 48137950f1b714797143529d63ec7221d3cbcd38bb6c8d20a241b06ddbd3d27a (exitCode=0, image=ubuntu, name=modest_northcutt)
Hope you will figure it out.
