Ejabberd Clustering:
I have set up two ejabberd servers on two different DigitalOcean Droplets, and I am trying to set up clustering between these two servers.
I followed the official ejabberd clustering documentation at https://docs.ejabberd.im/admin/guide/clustering/ and did the following:
I copied the /home/ejabberd/.erlang.cookie file from ejabberd01 to ejabberd02.
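For reference, the copy was done with something like this (the host name is just a placeholder for my second droplet):

# run on ejabberd01; copies the cookie to the same path on the second droplet
scp /home/ejabberd/.erlang.cookie ejabberd02:/home/ejabberd/.erlang.cookie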
I made sure my new ejabberd node is properly configured: the ejabberd.yml config file on the new node has the same configuration as on the other cluster node.
Then I tried to start clustering with the command below:
$ ejabberdctl --no-timeout join_cluster 'ejabberd@ejabberd01'
I get the error below:
args: []
format: "Error when reading /opt/ejabberd/.erlang.cookie: eacces"
label: {error_logger,error_msg}
Please help me solve this issue.
Thank you in advance
That eacces in the error message is actually the EACCES error code standardized by POSIX:
[EACCES]
Permission denied.
An attempt was made to access a file in a way forbidden by its file access permissions.
In other words, the credentials of the Erlang BEAM process running your ejabberd node are insufficient to open the Erlang cookie file /opt/ejabberd/.erlang.cookie.
For more background, it is worth reading up on what Erlang cookies are and how distributed Erlang uses them.
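As a minimal sketch, assuming ejabberd runs under an ejabberd user and group (the user and group names here are assumptions; check with ps aux | grep beam), you can inspect and fix the cookie permissions like this:

# see who owns the cookie file and with what permissions
ls -l /opt/ejabberd/.erlang.cookie

# hand it to the user that runs the BEAM process and restrict access to the owner
chown ejabberd:ejabberd /opt/ejabberd/.erlang.cookie
chmod 400 /opt/ejabberd/.erlang.cookie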
I have a few files on an SMB server. My job is to check every day, using a FileSensor, whether files exist at that path. If they exist, I have to process them and move them to an S3 bucket. But I am not able to connect to the SMB server.
1. I mounted the SMB server on my machine and tried to access files from it. It worked locally: I could read and process the files. So I created a bash script with the mount commands and credentials and called that script from a BashOperator in the DAG. Then I realised that I need a connection ID to sense whether files exist in that location using FileSensor.
2. So I installed the Samba provider on top of Airflow and created a connection by entering the hostname, login and password (roughly as sketched after the log below). When I run the DAG, I get this message:
WARNING - Unable to find an extractor. task_type=FileSensor airflow_dag_id=CSV_Upload_To_S3 task_id=Get_CDS_From_Qlik airflow_run_id=manual__2022-12-05T10:26:41.802952+00:00
[2022-12-05, 10:26:46 UTC] {factory.py:122} ERROR - Did not find openlineage.yml and OPENLINEAGE_URL is not set
[2022-12-05, 10:26:46 UTC] {factory.py:43} WARNING - Couldn't initialize transport; will print events to console.
Am I missing something in how the connection should be set up? Is there more to it?
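For what it's worth, the connection mentioned in step 2 was created roughly like this via the Airflow CLI (the connection id, host, share and credentials are placeholders):

# register a Samba connection (the schema field typically holds the share name)
airflow connections add 'smb_share' \
    --conn-type 'samba' \
    --conn-host 'smb-server.example.com' \
    --conn-schema 'my_share' \
    --conn-login 'smb_user' \
    --conn-password 'smb_password'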
I am using this GitHub repo and folder path I found: https://github.com/entechlog/kafka-examples/tree/master/kafka-connect-standalone to run Kafka Connect locally in standalone mode. I have made some changes to the Docker Compose file, but mainly changes that pertain to authentication.
The problem I am now having is that when I run the Docker image, I get this error multiple times, for each partition (there are 10 of them, 0 through 9):
[2021-12-07 19:03:04,485] INFO [bq-sink-connector|task-0] [Consumer clientId=connector-consumer-bq-sink-connector-0, groupId=connect-bq-sink-connector] Found no committed offset for partition <topic name here>-<partition number here> (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:1362)
I don't think there are any issues with authenticating or connecting to the endpoint(s); I think the consumer (the Connect sink) is not sending the offset back.
Am I missing an environment variable? You will see this Docker Compose file has CONNECT_OFFSET_STORAGE_FILE_FILENAME: /tmp/connect.offsets, and I tried adding CONNECTOR_OFFSET_STORAGE_FILE_FILENAME: /tmp/connect.offsets (CONNECT_ vs. CONNECTOR_), but then I get an error Failed authentication with <Kafka endpoint here>, so now I'm just going in circles.
I think you are focused on the wrong output.
That is an INFO message, not an error.
The offsets file (or topic in distributed mode) is only used by source connectors.
Sink connectors use consumer groups. If no committed offset is found for groupId=connect-bq-sink-connector, then that consumer group simply hasn't committed any offsets yet.
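A quick way to confirm this is to describe the consumer group with the standard Kafka CLI (the bootstrap address is a placeholder; add --command-config with your client properties if the cluster requires authentication):

# show committed offsets and lag for the sink connector's consumer group
kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
  --describe --group connect-bq-sink-connector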
I have been using Kafka Connect in my work setup for a while now and it works perfectly fine.
Recently I thought of dabbling with a few connectors of my own in my Docker-based Kafka cluster, with just one broker (ubuntu:18.04 with Kafka installed) and a separate node acting as a client for deploying connector apps.
Here is the problem:
Once my broker is up and running, I log in to the client node (with no broker running, just the vanilla Kafka installation) and set the classpath to point to my connector libraries, as well as the KAFKA_LOG4J_OPTS environment variable to point to the location of the log file to generate, with debug mode enabled.
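Concretely, the two variables are set along these lines before starting the worker (the connector and log4j config paths are placeholders for my setup):

# make the connector jars visible to the worker
export CLASSPATH=/opt/connectors/*
# point the worker at a log4j config that writes to a file with DEBUG enabled
export KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:/opt/kafka/config/connect-log4j.properties"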
So every time I start the Kafka Connect worker using the command:
nohup /opt//bin/connect-distributed /opt//config/connect-distributed.properties > /dev/null 2>&1 &
the connector starts running, but I don't see the log file getting generated.
I have tried several changes but nothing works out.
QUESTIONS:
Does this mean that connect-distributed.sh doesn't generate the log file after reading the KAFKA_LOG4J_OPTS variable? And if it does, could someone explain how?
NOTE:
(I have already debugged the connect-distributed.sh script and tried the options with and without daemon mode. By default, if KAFKA_LOG4J_OPTS is not provided, it uses the connect-log4j.properties file in the config directory, but even then no log file is generated.)
OBSERVATION:
Only when I start ZooKeeper/a broker on the client node is the provided KAFKA_LOG4J_OPTS value picked up and logs start getting generated, but they contain nothing related to the Kafka connector. I have already verified the connectivity between the client and the broker using kafkacat.
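The connectivity check was along these lines (the broker address is a placeholder):

# list broker/topic metadata to confirm the client node can reach the broker
kafkacat -b <broker-host>:9092 -L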
The interesting part is:
I follow the same process in my workplace and logs are generated every time the worker (connect-distributed.sh) is started, but I haven't been able to replicate that behaviour in my own setup, and I have no clue what I am missing here.
Could someone provide some reasoning? This is really driving me mad.
I am creating a sample blockchain network using the tutorial at https://hyperledger-fabric.readthedocs.io/en/release-1.2/build_network.html. I am facing an error while connecting the peers:
Error: failed to create deliver client: orderer client failed to connect to orderer.example.com:7050: failed to create new connection: context deadline exceeded.
I found a probable solution which I would like to test, but I need help with the points below:
How to update the default network of the containers.
How to add a property for each container.
While accessing my /etc/docker directory I am getting the error 'Server returned empty listing for directory /etc/docker', and it also says permission denied when I try to access it from the terminal. Any help will be appreciated.
There is no need to make any changes to the Docker containers. I faced a similar issue; you can clean up the system disk space, or, if you are using a VM, you can install a fresh network in a new VM (assuming you already have all the configuration files copied).
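If it is indeed a disk space problem, something along these lines usually helps (be careful: prune deletes stopped containers, dangling images and unused networks):

# check free disk space first
df -h

# reclaim space used by stopped containers, dangling images and unused networks
docker system prune

# optionally also remove unused volumes (this deletes data, so use with care)
docker volume prune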
I am new to neo4j. I just followed the neo4j official manual to install two instances on one machine; my environment is Ubuntu 11.10. I successfully started the neo4j service and opened the website http://localhost:7474/webadmin/. But when I tried to run the 'DELETE /db/data/cleandb/secret-key' command in its HTTP console, it returned error 401. Any idea about this?
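For reference, what I ran is roughly equivalent to this curl request (7474 is the default server port):

curl -X DELETE http://localhost:7474/db/data/cleandb/secret-key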
Which version of neo4j are you using?
You have to configure two different ports for the two servers; I think you already did this.
The clean-db add-on doesn't come out of the box; you have to download it, copy it into the plugins directory, and adjust the neo4j-server.properties config file:
org.neo4j.server.thirdparty_jaxrs_classes=org.neo4j.server.extension.test.delete=/cleandb
org.neo4j.server.thirdparty.delete.key=<please change secret-key>
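A rough sketch of the download-and-install step mentioned above (the jar file name and the neo4j install path are placeholders):

# drop the add-on jar into the server's plugins directory and restart the server
cp delete-db-extension.jar /path/to/neo4j/plugins/
/path/to/neo4j/bin/neo4j restart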
Then you can call it for each of your servers with:
curl -X DELETE http://localhost:<port>/cleandb/secret-key