Encountering 'Invalid metadata blob' when trying to back up data? - influxdb

This is what happened when I tried to back up some data from InfluxDB:
[root@bj-collection-01 opt]# influxd backup -database cliReport -host localhost:8086 /opt/clireportbak/
2018/04/16 10:17:12 backing up metastore to /opt/clireportbak/meta.00
2018/04/16 10:17:12 Invalid metadata blob, ensure the metadata service is running (default port 8088)
backup: invalid metadata received
But when I run netstat -ntlp, I see:
tcp 0 0 127.0.0.1:8088 0.0.0.0:* LISTEN 13782/influxd

Did you use >localhost:8086 (with the leading >) as your host parameter? That stray > might be the error.
On a remote connection, I'm encountering the same error! But from the machine running influxdb it works if I omit the -host parameter.
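For context, InfluxDB 1.x serves backup/restore over its RPC port 8088 rather than the HTTP port 8086, which matches the netstat output above. A minimal sketch of the corrected invocation, reusing the database and path from the question (bind-address is the stock influxdb.conf setting for that RPC listener):
# Point the backup at the RPC port (8088), not the HTTP API port (8086)
influxd backup -database cliReport -host localhost:8088 /opt/clireportbak/
# For remote backups: netstat shows influxd bound to 127.0.0.1:8088 only, so
# widen the RPC listener in /etc/influxdb/influxdb.conf and restart the service:
#   bind-address = "0.0.0.0:8088"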

Related

JaegerTracing: Jaeger Ingester unable to read from Kafka queue and store into ElasticSearch

I am new to Jaeger and Kafka, and I am trying to use Kafka as an intermediate buffer.
I am using OpenTelemetry to send data to the Jaeger-Collector directly using -Dotel.exporter.jaeger.endpoint.
The Jaeger-Collector is deployed on Kubernetes, and Kafka is on another network but is accessible. I can confirm that the traces are sent to the Jaeger-Collector.
Hitting /metrics on the collector shows that spans were written successfully to Kafka:
jaeger_kafka_spans_written_total{status="success"} 21
The collector logs indicate which topic I am writing to:
{"Brokers":["myKafkaBroker......"}},"topic":"tp6"}
I want to get this span data from the Kafka queue into ElasticSearch. To do this I am starting the Jaeger Ingester as follows:
docker run -e "SPAN_STORAGE_TYPE=elasticsearch" jaegertracing/jaeger-ingester:1.22 --kafka.consumer.topic=tp6 --kafka.consumer.brokers='myKafkaBroker' --es.tls.skip-host-verify
But the container stops with a fatal error:
{"level":"fatal","ts":1615546463.7784193,"caller":"command-line-arguments/main.go:64","msg":"Failed to init storage factory","error":"failed to create primary Elasticsearch client: health check timeout: Head \"http://127.0.0.1:9200\": dial tcp 127.0.0.1:9200: connect: connection refused: no Elasticsearch node available","stacktrace":"main.main.func1\n\tcommand-line-arguments/main.go:64\ngithub.com/spf13/cobra.(*Command).execute\n\tgithub.com/spf13/cobra#v0.0.7/command.go:838\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\tgithub.com/spf13/cobra#v0.0.7/command.go:943\ngithub.com/spf13/cobra.(*Command).Execute\n\tgithub.com/spf13/cobra#v0.0.7/command.go:883\nmain.main\n\tcommand-line-arguments/main.go:113\nruntime.main\n\truntime/proc.go:204"}
ElasticSearch and the ingester are running on the same machine using Docker. ElasticSearch was started with:
docker run -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:7.11.2
I have disabled TLS so that shouldn't be a problem. I am unable to get this to work.
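The fatal error shows the ingester dialing http://127.0.0.1:9200, which from inside its own container points at the ingester container itself, not at the host running ElasticSearch. A hedged sketch of one way around that, assuming Docker Desktop (--es.server-urls is the ingester's ElasticSearch address flag; on a plain Linux host, --network host with http://127.0.0.1:9200 would be the analogue):
docker run -e "SPAN_STORAGE_TYPE=elasticsearch" jaegertracing/jaeger-ingester:1.22 \
  --kafka.consumer.topic=tp6 \
  --kafka.consumer.brokers='myKafkaBroker' \
  --es.server-urls=http://host.docker.internal:9200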

Running a Chainlink Node - Connecting local Docker/DB to Docker/Node issue in MacOS/OSX

Running the Chainlink node with a local Docker/Postgres setup on OSX Catalina is quite cumbersome due to failed ORM connections and other errors.
Doc used: https://docs.chain.link/docs/running-a-chainlink-node
To check that my local DB is indeed working OK, I ran these commands with successful results:
psql postgresql://suchain:docker@127.0.0.1:5432/chainlink
psql -h localhost -U suchain -d chainlink
What has been tried so far:
Adding --network host didn't resolve the connection issue.
Error Message: Incorrect Usage. flag provided but not defined: -network
Note: Tried with --network=host - same result
Changing the db_url from 127.0.0.1 to localhost
Error Message: dial error (dial tcp 127.0.0.1:5432: connect: connection refused)
Changing the localhost/127.0.0.1 to docker instance name (like pg-docker)
Error Message: hostname resolving error (lookup pg-docker on 192.168.65.1:53: no such host)
Which other options can be used?
Many thanks in advance.
What pages have been checked before filing this one:
Running a Chainlink Node - Can't connect to database
CHAINLINK NODE: How might I approach fixing "unable to lock ORM" errors?
https://youtu.be/jJOjyDpg1aA?t=521
Thanks to Patrick. The root cause is the same as in the link above.
Replacing the DB host in the URL from localhost/127.0.0.1 with the machine's private/local IP (192.168.0.x) fixed the issue.
FYI: on macOS you can find your IP with ifconfig; look for the en0 interface.
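A minimal sketch of that fix, reusing the user, password, and database name from the question (ipconfig getifaddr is a macOS shortcut for printing one interface's address):
# Find the LAN IP of the primary interface (en0) on macOS
ipconfig getifaddr en0    # prints e.g. 192.168.0.12
# Point the node's database URL at that IP instead of localhost
psql postgresql://suchain:docker@192.168.0.12:5432/chainlink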

Schema Registry Docker from Confluent

I want to use the Schema Registry Docker image (owned by Confluent) with the open-source Kafka I installed locally on my PC.
I am using the following command to run the image:
docker run -p 8081:8081 \
-e SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS=PLAINTEXT://127.0.0.1:9092 \
-e SCHEMA_REGISTRY_LISTENERS=http://0.0.0.0:8081 \
-e SCHEMA_REGISTRY_DEBUG=true confluentinc/cp-schema-registry:latest
but I am getting the following connection errors:
[kafka-admin-client-thread | adminclient-1] WARN org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Connection to node -1 (/127.0.0.1:9092) could not be established. Broker may not be available.
[kafka-admin-client-thread | adminclient-1] WARN org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Connection to node -1 (/127.0.0.1:9092) could not be established. Broker may not be available.
[main] ERROR io.confluent.admin.utils.ClusterStatus - Error while getting broker list.
java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment.
at org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)
at org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)
at org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:89)
at org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:260)
at io.confluent.admin.utils.ClusterStatus.isKafkaReady(ClusterStatus.java:149)
at io.confluent.admin.utils.cli.KafkaReadyCommand.main(KafkaReadyCommand.java:150)
Caused by: org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment.
[main] INFO io.confluent.admin.utils.ClusterStatus - Expected 1 brokers but found only 0. Trying to query Kafka for metadata again ...
I have Kafka installed on my localhost.
Any idea how to solve this, please?
I have Kafka installed on my localhost
As commented, localhost is unclear when you're actually using multiple machines (one physical and at least one virtual)
You need to use host.docker.internal:9092
https://docs.docker.com/docker-for-windows/networking/ (removed because host is not windows)
On a Linux host, you need to use host networking mode
https://docs.docker.com/network/host/
Although, realistically, running Kafka in a container would be simpler for connecting the two
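Putting that suggestion into the original command, a hedged sketch for Docker Desktop (SCHEMA_REGISTRY_HOST_NAME is a required setting of the cp-schema-registry image; the broker must also advertise a listener reachable at host.docker.internal:9092):
docker run -p 8081:8081 \
  -e SCHEMA_REGISTRY_HOST_NAME=schema-registry \
  -e SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS=PLAINTEXT://host.docker.internal:9092 \
  -e SCHEMA_REGISTRY_LISTENERS=http://0.0.0.0:8081 \
  -e SCHEMA_REGISTRY_DEBUG=true confluentinc/cp-schema-registry:latest
On a Linux host, drop the -p mapping and use --network host instead, keeping the 127.0.0.1:9092 bootstrap address.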

Docker Container Connect to MSSQL: Worked Yesterday

I am new to docker and getting very confused. Yesterday, I set up a MSSQL docker image and it is running. Some output of docker ps -a:
IMAGE: mcr.microsoft.com/mssql/server CREATED: 25 hours ago
STATUS: Up 25 hours PORTS: 0.0.0.0:1433->1433/tcp
I am trying to connect remotely to run a script. Yesterday I was able to
sqlcmd -S <myIP>,1433 -U SA -P "<myPassword>" -i ./sql-scripts/all.sql -o "out.txt"
but today, when I run the exact same command, I get:
Sqlcmd: Error: Microsoft ODBC Driver 13 for SQL Server : TCP Provider: No connection could be made because the target machine actively refused it.
Sqlcmd: Error: Microsoft ODBC Driver 13 for SQL Server : Login timeout expired.
Sqlcmd: Error: Microsoft ODBC Driver 13 for SQL Server : A network-related or instance-specific error has occurred while establishing a connection to SQL Server. Server is not found or not accessible. Check if instance name is correct and if SQL Server is configured to allow remote connections. For more information see SQL Server Books Online..
Does anyone know what could have changed between yesterday and today to make this happen? I'm using Windows 10 Enterprise, Docker for Windows, and T-SQL. I also tried https://canyouseeme.org/ and confirmed that port 1433 is not open on my IP.
EDIT: I should probably note I'm still able to access the sqlcmd REPL inside the container with docker exec -it mySecondServer "bash" and then /opt/mssql-tools/bin/sqlcmd -S localhost -U SA inside.
EDIT 2: Today the same error happened, and when I tried to restart my Docker container (still not the solution I want) I got:
Error response from daemon: driver failed programming external connectivity on endpoint mySecondServer (9d...): Error starting userland proxy: mkdir /port/tcp:0.0.0.0:1433:tcp:172.17.0.2:1433: input/output error
Error: failed to start containers: 85...
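For anyone debugging the same symptom, two host-side checks narrow down whether the published port mapping is still alive (container name and port taken from the question; run from PowerShell):
# Is the port mapping still registered with Docker?
docker port mySecondServer
# Is anything on the host actually listening on 1433?
netstat -ano | Select-String ':1433'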

Remote debug docker+wildfly with intelliJ 2017.2.6

So there are a lot of posts around this subject, but none of them seems to help.
I have an application running on a wildfly server inside a docker container.
And for some reason I cannot connect my remote debugger to it.
So, it is a wildfly 11 server that has been started with this command:
/opt/jboss/wildfly/bin/standalone.sh -b 0.0.0.0 -bmanagement 0.0.0.0 -c standalone.xml --debug 9999;
And in my standalone.xml I have this:
<socket-binding name="management-http" interface="management" port="${jboss.management.http.port:9990}"/>
The console output seems promising:
Listening for transport dt_socket at address: 9999
I can even access the admin console with the credentials admin:admin on localhost:9990/console
However, IntelliJ refuses to connect... I've created a remote JBoss Server configuration that, in the Server tab, points to localhost with management port 9990.
And in the startup/connection tab I've entered 9999 as remote socket port.
The docker image has exposed the ports 9999 and 9990, and the docker-compose file binds those ports as is.
Even with all of this IntelliJ throws this message when trying to connect:
Error running 'remote':
Unable to open debugger port (localhost:9999): java.io.IOException "handshake failed - connection prematurely closed"
followed by
Error running 'remote':
Unable to connect to the localhost:9990, reason:
com.intellij.javaee.process.common.WrappedException: java.io.IOException: java.net.ConnectException: WFLYPRT0053: Could not connect to remote+http://localhost:9990. The connection failed
I'm completely lost as to what the issue might be...
An interesting addition: after IntelliJ fails, if I invalidate caches and restart, WildFly reprints the message saying that it is listening on port 9999.
In case someone else in the future comes to this thread with the same issue, I found the solution here:
https://github.com/jboss-dockerfiles/wildfly/issues/91#issuecomment-450192272
Basically, apart from the --debug parameter, you also need to pass *:8787.
Dockerfile:
CMD ["/opt/jboss/wildfly/bin/standalone.sh", "-b", "0.0.0.0", "-bmanagement", "0.0.0.0", "--debug", "*:8787"]
docker-compose:
ports:
- "8080:8080"
- "8787:8787"
- "9990:9990"
command: /opt/jboss/wildfly/bin/standalone.sh -b 0.0.0.0 -bmanagement 0.0.0.0 --debug *:8787
I have not tested the docker-compose variant, as my fix was in the Dockerfile.
Not sure if this can be seen as an answer, since it goes around the problem.
But the way I solved this was by adding a "pure" Remote configuration in IntelliJ instead of a JBoss remote one. This means that it won't automagically deploy, but I'm fine with that.
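For what it's worth, the reason *:8787 matters: standalone.sh turns --debug into a JDWP agent argument, and on JDK 9 or newer a bare port binds the debug listener to localhost only, which is unreachable through Docker's port mapping. A sketch of the equivalent JAVA_OPTS line:
# What "--debug *:8787" amounts to inside the container: a JDWP agent in JAVA_OPTS.
# On JDK 9+ a bare "address=8787" binds to localhost only; the "*:" prefix makes
# the listener bind on all interfaces so Docker's port mapping can reach it.
JAVA_OPTS="$JAVA_OPTS -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:8787"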
