mongooseim cluster setup eacces error on ubuntu 14.04 - erlang

We are trying to create a master-master cluster of two MongooseIM instances on AWS in the same virtual network.
All necessary ports are opened in the AWS security group.
I suspect some issue with the MongooseIM setup on Ubuntu 14.04 LTS.
After running the join_cluster command on one of the nodes, we get the following error (see screenshot):
Error: {error,{badmatch,{error,eacces}}}
A screenshot with details is attached.
The server configuration was not changed, except for the VM args shown in the attached screenshot.
Is this an issue with your binary, or some other glitch?

I ran into this issue myself. MongooseIM uses Erlang's internal Mnesia storage system for a lot of information, including cluster topology. The default path for Mnesia's storage is /var/lib/mongooseim. When you do a mongooseimctl join_cluster ..., it needs to wipe its Mnesia store and essentially pulls a copy from the cluster it's joining. The issue arises because it also tries to delete /var/lib/mongooseim itself, which it can't do, because the mongooseim user it runs as has no write permission on the parent directory, /var/lib. Nor should it.
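A quick permission check makes the diagnosis easy to confirm; ownership may vary a little between installs, but /var/lib itself will normally belong to root:
ls -ld /var/lib /var/lib/mongooseim
stat -c '%U:%G %a %n' /var/lib /var/lib/mongooseim    # owner, group and mode of both directories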
The way I fixed this was by creating a subdirectory which it could safely delete and recreate, and configuring it to use that as its Mnesia directory:
sudo mkdir /var/lib/mongooseim/mnesia
sudo chown mongooseim:mongooseim /var/lib/mongooseim/mnesia
Configuration for the Mnesia directory exists by default in /etc/mongooseim/app.config. In mine it was on the third line. Originally it looked like this:
{mnesia, [{dir, "/var/lib/mongooseim"}]},
I changed the path to the new directory I created:
{mnesia, [{dir, "/var/lib/mongooseim/mnesia"}]},
After that, I stopped and started MongooseIM and was able to join the cluster successfully:
mongooseimctl stop
mongooseimctl start && mongooseimctl started
mongooseimctl join_cluster mongooseim@other.node.name
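If the join succeeds, you can verify the cluster state afterwards. The mnesia subcommand below is assumed from the ejabberd-style ctl interface that MongooseIM inherits, so adjust it to whatever your mongooseimctl version actually provides:
mongooseimctl status
mongooseimctl mnesia info | grep 'running db nodes'    # both cluster nodes should appear here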

Related

Converting subflows into modules in node-RED

I'm new to Node-RED and Docker. For my internship I was asked to convert a subflow into a module (so that it is in the palette of every Node-RED instance created). So I started with a little example showing how to add a custom node as a module by following these steps (Node-RED is installed in a Docker container):
connecting to an EC2 machine
going inside the container by executing the command docker exec -it mynodered /bin/bash
and then following the steps shown in this example https://techeplanet.com/how-to-create-custom-node-in-node-red/ to create the node and install it. After that I went to the "manage palette" view to look for the recently installed module, but it's not there... If someone could help I would appreciate it. Thanks
Firstly, nodes installed on the command line with npm will not show up until Node-RED is restarted.
The problem, in your case, is that you created the node inside the Docker container; under normal circumstances any files you create in the running container will be lost when you restart it, because containers do not persist changes.
Also, in the Docker container the userDir is not ~/.node-red but /data.
So when you restart the container the node will likely be lost, and it also will not have been installed into the node_modules directory in the /data userDir unless you have /data backed by a persistent volume.
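As a sketch, the usual way to get that persistence with the official image is a named volume mounted on /data (the container and volume names here are just examples):
docker run -d -p 1880:1880 -v node_red_data:/data --name mynodered nodered/node-red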
If you want to create a node on your local machine, you can test it locally by using npm to install it and then restarting a local instance of Node-RED to pick up the new node.
You can then use the npm pack command to create a tgz file which you can upload to the remote instance via the Palette Manager to test it in the Docker container if needed.
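For example, with a hypothetical node project called node-red-contrib-myexample:
cd ~/.node-red && npm install /path/to/node-red-contrib-myexample    # install into the local userDir, then restart Node-RED
cd /path/to/node-red-contrib-myexample
npm pack    # produces node-red-contrib-myexample-<version>.tgz, which you can upload through the Palette Manager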
For longer term use of this new node you have several choices:
Publish the node to public npm with suitable tags and have it added to the public list of Node-RED nodes as described in the doc. This will allow anybody to install the node. You should ONLY do this with nodes you expect anybody to be able to use
Build a custom docker container that installs your node as part of the build process. Examples of how to do this are here; see the sketch after this list.
Build a custom docker container with a custom settings.js that points to a private npm repo and catalogue service that will allow you to host custom nodes. A blog post touching on this is here
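A minimal sketch of the second option, assuming you have a packed tgz of your node (the file and image names are hypothetical):
cat > Dockerfile <<'EOF'
FROM nodered/node-red
COPY node-red-contrib-myexample-1.0.0.tgz /tmp/
RUN npm install /tmp/node-red-contrib-myexample-1.0.0.tgz
EOF
docker build -t mynodered-custom .
docker run -d -p 1880:1880 -v node_red_data:/data --name mynodered mynodered-custom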
Secondly, the guide you are following is for building Node-RED nodes, not for converting a subflow into a node. While it is possible to completely reimplement the subflow from scratch, that would probably mean recreating a lot of the work already done by the nodes it uses, which is not really an efficient approach.
There is ongoing work to build a tool that will automatically convert subflows into nodes, but it is not ready for release just yet.
I suggest you join the Node-RED Slack or Discourse forum to be notified when it is available.

foundationdb running docker image macos database unavailable

I am trying to run FoundationDB using a Docker image on macOS, as below.
docker run --init --rm --name=fdb-0 foundationdb/foundationdb:6.2.22
Starting FDB server on 172.17.0.2:4500
This seems to start. But when I connect to fdbcli after logging into the container, I get the following error status:
docker exec -it fdb-0 /bin/bash
root@9e8bb6985be5:/var/fdb# fdbcli
Using cluster file `/var/fdb/fdb.cluster'.
The database is unavailable; type `status' for more information.
Welcome to the fdbcli. For help, type `help'.
fdb> status
Using cluster file `/var/fdb/fdb.cluster'.
The coordinator(s) have no record of this database. Either the coordinator
addresses are incorrect, the coordination state on those machines is missing, or
no database has been created.
172.17.0.2:4500 (reachable)
Unable to locate the data distributor worker.
Unable to locate the ratekeeper worker.
I saw this issue https://forums.foundationdb.org/t/fdbcli-access-external-docker/1069, but I could not successfully run it with host networking either. Any help would be appreciated.
Try running fdbcli with fdbcli --exec "configure new single memory ; status". This creates a new database with single redundancy and the memory storage engine.
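Since the container is already running, you can also issue it from the host in one go, using the container name from the question:
docker exec -it fdb-0 fdbcli --exec "configure new single memory ; status"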

keep CDH container running

I am learning CDH and Docker and don't have prior experience setting up either tool. After reading the documentation I managed to run the CDH Docker image in a Mac environment and also completed the example given in the quick start guide. But the next day, when I started my MacBook again to learn something new, I couldn't find my previous work, which I found very strange, and I couldn't even see the container running anymore.
What I really want is to not lose my work even after stopping the Docker container. Could you please guide me on how to configure Docker so that I will not lose my work even after restarting Docker?
Every docker run invocation will allocate a new filesystem for the container, essentially starting from scratch.
If you actually want to "save" your work, then you need to volume mount (using the -v docker flag) your local filesystem into the container for at least the following directories:
HDFS Data Directory
NameNode Data Directory
/home/cloudera
I think the Hadoop data folders are somewhere under /var/lib/hadoop-*, by default.
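A sketch of what the mounts could look like with the Cloudera quickstart image; the host paths are arbitrary and the container paths are assumptions, so check them against your dfs.datanode.data.dir and dfs.namenode.name.dir settings first:
docker run --hostname=quickstart.cloudera --privileged=true -t -i \
  -v "$PWD/hdfs-data":/var/lib/hadoop-hdfs \
  -v "$PWD/cloudera-home":/home/cloudera \
  -p 8888:8888 \
  cloudera/quickstart /usr/bin/docker-quickstart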
The better alternative for saving your workloads would be the CDH VM, which actually has a persistent HDD associated with it.

Repair/Uninstall Mesos after cleanup

The Mesos server ran out of disk space, so we were doing a cleanup by removing some of the old Docker containers. But now Marathon won't start, and digging deeper shows that neither does ZooKeeper. The Docker log says that it cannot load some containers.
What we noticed was that ZooKeeper starts and then stops. So we had a look at the ZooKeeper folder and the conf directory was missing. It had also been removed on the other master server, which we had not touched; I presume this is to do with the link between the masters. The slave still has this conf folder, but it contains the default folder and files, and I noticed that it is a symlink pointing to /etc/alternatives/zookeeper-conf.
Running the Dockerfile to recreate the missing container says:
Error response from daemon: Cannot start container d13b8aa28d383a3ca54b39ce74f5a81d80030a2ad0dde52966293ced9ef26663: [8] System error: exec: "mesos-master": executable file not found in $PATH
It doesn't recognise the restart command either.
Is there a quick way to repair this and get it working as it used to? I am using Mesos 0.23 on Ubuntu 14.04.
How do I uninstall Mesos?
Any help is appreciated as I am fairly new to this and so only have a basic understanding of how all this works.
re "how to uninstall Mesos", my way is
configure --prefix="your_install_path"
make
make uninstall
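That assumes Mesos was built and installed from source. If it was installed from the Mesosphere Ubuntu packages instead (the /etc/alternatives symlink mentioned in the question hints at a package install), removal would look something like the sketch below; the package names and paths are the usual Mesosphere/Ubuntu ones, so double-check them on your system:
sudo apt-get remove --purge mesos marathon
sudo apt-get remove --purge zookeeper zookeeperd
sudo rm -rf /var/lib/mesos /etc/mesos /etc/mesos-master /etc/mesos-slave    # leftover work dir and config; delete only if you no longer need them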

Neo4j server failed to start in openshift

I want to create a social network with the Django framework on OpenShift, so I need at least a graph DB (like Neo4j) and a relational DB (like MySQL). I had trouble adding Neo4j to my project because OpenShift has no cartridge for it, so I decided to install it with DIY, but I don't understand the functionality of the start and stop files in .openshift/action_hooks. I did the following steps to install Neo4j on the server:
1. SSH into my account:
ssh 1238716...@something-prolife.rhcloud.com
2. go to a folder where I have permission to write (I went to app-root/repo/ and ran mkdir test in it), download the Neo4j package from here, and extract it to the test folder I created:
tar -xvzf neo4j-community-1.9.4-unix.tar.gz
3. and finally run the Neo4j script to start it:
neo4j-community-1.9.4/bin/neo4j start
but I see these logs and can't get Neo4j running:
process [3898]... waiting for server to be ready............ Failed
to start within 120 seconds.
Neo4j Server may have failed to start, please check the logs.
How can I run this database in OpenShift? Where am I going wrong? And where are the logs mentioned in "please check the logs"?
I've developed an OpenShift cartridge that fixes the permission issue on OpenShift. I had to change the classes HostBoundSocketFactory and SimpleAppServer in Neo4j so that it binds to an OpenShift-available port instead of port 0.
You can check at: https://github.com/danielnatali/openshift-neo4j-cartridge
It's working for me.
I would also not place it in app-root/repo; instead I would put it in app-root/data.
You also need to use the IP of the gear; I think the environment variable is something like OPENSHIFT_INTERNAL_IP. 127.0.0.1 is not available for binding, but I think the ports should be open.
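A sketch of what that could look like for the standalone tarball from the question. The property name matches the Neo4j 1.9 neo4j-server.properties file; the environment variable names are the OpenShift ones hinted at above and should be verified with env on your gear:
cd $OPENSHIFT_DATA_DIR/neo4j-community-1.9.4
# point the web server at the gear's IP instead of the default 127.0.0.1
sed -i "s|^#\?org.neo4j.server.webserver.address=.*|org.neo4j.server.webserver.address=$OPENSHIFT_INTERNAL_IP|" conf/neo4j-server.properties
bin/neo4j start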
There are two ways Neo4j can run: embedded, or standalone (exposed via a REST service).
Standalone is what you are trying to do. I think the right way to set up Neo4j would be to write a cartridge for OpenShift and then add the cartridge to your gear. There has been some discussion about this, but it seems that nobody has taken the time to do it. Check https://www.openshift.com/forums/openshift/neo4j-cartridge. If you decide to write your own cartridge, I might assist. Here are the docs: https://www.openshift.com/developers/download-cartridges.
The other option is running in embedded mode, which I have used. You need to set up a Java EE application (because the Neo4j embedded-mode libraries are available only for Java) and put the Neo4j libraries in your project. Then you would expose some routes, check for parameters, and run your Neo4j queries inside the servlets.
