I am using ejabberd with mnesia. I want to add another node to our cluster, and at the same time modify the default mnesia configuration properties before starting ejabberd:
dump_log_write_threshold = 10000
dc_dump_limit = 10
I am using this command:
erl -sname ejabberd@node2 -mnesia dump_log_write_threshold 10000 \
-mnesia dc_dump_limit 10 -mnesia dir '"/var/lib/ejabberd"' \
-mnesia extra_db_nodes "['ejabberd@node1']" -s mnesia
But on starting ejabberd the values have reverted to default. I don't have much experience in erl or mnesia so I'm probably doing something obviously wrong :-)
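For what it's worth, if ejabberd is started through its ejabberdctl script rather than a hand-rolled erl command, a common place for such emulator flags is ERL_OPTIONS in ejabberdctl.cfg (a sketch; the file path varies by install, and whether your startup script honors it should be verified):

```
# e.g. /etc/ejabberd/ejabberdctl.cfg (path varies by distribution)
# ERL_OPTIONS is appended to the erl command line built by ejabberdctl,
# so these -mnesia flags reach the emulator that actually runs ejabberd.
ERL_OPTIONS="-mnesia dump_log_write_threshold 10000 -mnesia dc_dump_limit 10"
```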
Related
I wrote a simple Docker image which starts up an Erlang node (rebar3 release, console launch mode). It starts fine and lets me ping the node from within the container. However, I can't get an erl shell on the host to ping it; it simply returns pang and nothing is logged in the dockerized console.
The Dockerfile just starts the node, it doesn't do anything more interesting.
Checklist
Cookie is set and matches
sname is set on both nodes
Docker node is reachable from other container nodes
I refer to the docker node using full sname (tried nodename@localhost, nodename@machinename and nodename@127.0.0.1)
epmd port is exposed (tried without it as well)
What could I have forgotten to make it work?
Disclaimer: This answer is valid for Linux systems. Also, I think the first and last options are the easiest.
There are several ways to do so, but first, let's establish some key ideas:
The following script is run inside Docker to simulate a node; it just waits for a node connection, prints it, and terminates.
SCRIPT_RUN_IN_DOCKER='ok = net_kernel:monitor_nodes(true), fun F() -> receive {nodeup, N} -> io:format("Connected to ~p~n", [N]), init:stop() end end().'
In order for the distribution protocol to succeed, it is not enough for the node to be reachable at the name used for the ping; the whole node name must match.
Erlang's -name can be used with an IP; this is used extensively in the following commands.
Options
Now, let's go through the options (all the commands are to be run in different terminals).
Docker: Host network namespace
When starting Docker in the host's network namespace (--net=host), there's no difference from running both outside of Docker (for network purposes). It's the easiest way to connect both nodes using plain Docker.
-name (ip):
$> docker run --net=host erlang erl -noinput -name foo@127.0.0.1 -setcookie cookie -eval $SCRIPT_RUN_IN_DOCKER
Connected to 'bar@127.0.0.1'
$> erl -noinput -name bar@127.0.0.1 -setcookie cookie -eval "net_adm:ping('foo@127.0.0.1'), init:stop()."
-sname with @localhost:
$> docker run --net=host erlang erl -noinput -sname foo@localhost -setcookie cookie -eval $SCRIPT_RUN_IN_DOCKER
Connected to bar@localhost
$> erl -noinput -sname bar@localhost -setcookie cookie -eval "net_adm:ping('foo@localhost'), init:stop()."
-sname with @$(hostname -f):
$> docker run --net=host erlang erl -noinput -sname foo -setcookie cookie -eval $SCRIPT_RUN_IN_DOCKER
Connected to 'bar@amazing-hostname'
$> erl -noinput -sname bar -setcookie cookie -eval "net_adm:ping('foo@$(hostname -f)'), init:stop()."
Docker: Using docker's default bridge (docker0)
By default, Docker starts containers in its own bridge network, and their IPs can be reached without exposing any ports.
ip a show docker0 lists 172.17.0.1/16 on my machine, and Erlang listens on 172.17.0.2 (shown in docker inspect <container>).
-name (ip):
$> docker run erlang erl -noinput -name foo@172.17.0.2 -setcookie cookie -eval $SCRIPT_RUN_IN_DOCKER
Connected to bar@baz
$> erl -noinput -name bar@baz -setcookie cookie -eval "net_adm:ping('foo@172.17.0.2'), init:stop()."
-sname (fake name resolving to container ip):
# The trick here is to have exactly the same node name for the destination, otherwise the distribution protocol won't work.
# We can achieve the custom DNS resolution in linux by editing /etc/hosts
$> tail -n 1 /etc/hosts
172.17.0.2 erlang_in_docker
$> docker run erlang erl -noinput -name foo@erlang_in_docker -setcookie cookie -eval $SCRIPT_RUN_IN_DOCKER
Connected to 'bar@amazing-hostname'
$> erl -noinput -sname bar -setcookie cookie -eval "net_adm:ping('foo@erlang_in_docker'), init:stop()."
Docker: Using some other docker bridge
Just create the new network and repeat the previous steps, using the IPs from the new network:
docker network create erlang_docker_network
docker inspect erlang_docker_network
Docker: Exposing ports with two EPMDs
When exposing ports you have to juggle ports and IPs, because the EPMD ports must be the same.
In this case you are going to have two EPMDs, one for the host and another for the container (EPMD rejects name requests from non-local peers), listening on the same port number.
The trick here is (ab)using the 127.0.0.* IPs, which all point to localhost, to simulate different nodes. Note the flags to set the distribution port, as mentioned by @legoscia.
-name (ip):
$> epmd -address 127.0.0.1
$> docker run -p 127.0.0.2:4369:4369/tcp -p 127.0.0.2:9000:9000/tcp erlang erl -noinput -name foo@127.0.0.2 -setcookie cookie -kernel inet_dist_listen_min 9000 -kernel inet_dist_listen_max 9000 -eval $SCRIPT_RUN_IN_DOCKER
Connected to bar@baz
$> erl -noinput -name bar@baz -setcookie cookie -eval "net_adm:ping('foo@127.0.0.2'), init:stop()."
-sname (fake name resolving to 127.0.0.2)
And here we need again the DNS resolution provided by /etc/hosts
$> tail -n 1 /etc/hosts
127.0.0.2 erlang_in_docker
$> epmd -address 127.0.0.1
$> docker run -p 127.0.0.2:4369:4369/tcp -p 127.0.0.2:9000:9000/tcp erlang erl -noinput -name foo@erlang_in_docker -setcookie cookie -kernel inet_dist_listen_min 9000 -kernel inet_dist_listen_max 9000 -eval $SCRIPT_RUN_IN_DOCKER
Connected to bar@baz
$> erl -noinput -sname bar@baz -setcookie cookie -eval "net_adm:ping('foo@erlang_in_docker'), init:stop()."
Docker-compose
docker-compose allows you to easily set up multi-container systems. With it, you don't need to create/inspect networks.
Given the following docker-compose.yaml:
version: '3.3'
services:
  node:
    image: "erlang"
    command:
      - erl
      - -noinput
      - -sname
      - foo
      - -setcookie
      - cookie
      - -eval
      - ${SCRIPT_RUN_IN_DOCKER} # Needs to be exported
    hostname: node
  operator:
    image: "erlang"
    command:
      - erl
      - -noinput
      - -sname
      - baz
      - -setcookie
      - cookie
      - -eval
      - "net_adm:ping('foo@node'), init:stop()."
    hostname: operator
If you run the following docker-compose run commands, you'll see the results:
$> docker-compose up node
Creating network "tmp_default" with the default driver
Creating tmp_node_1 ... done
Attaching to tmp_node_1
node_1 | Connected to baz@operator
tmp_node_1 exited with code 0
$> docker-compose run operator
I was reading a blog post on Percona Monitoring Plugins and how you can somehow monitor a Galera cluster using pmp-check-mysql-status plugin. Below is the link to the blog demonstrating that:
https://www.percona.com/blog/2013/10/31/percona-xtradb-cluster-galera-with-percona-monitoring-plugins/
The commands in this tutorial are run on the command line. I wish to try these commands in a Nagios .cfg file, e.g. monitor.cfg. How do I write the services for the commands used in this tutorial?
This was my attempt, and I cannot figure out the best parameters to use for check_command in the service definition. I suspect that is where the problem is.
So inside my /etc/nagios3/conf.d/monitor.cfg file, I have the following:
define host{
        use                 generic-host
        host_name           percona-server
        alias               percona
        address             127.0.0.1
}

## Check for a Primary Cluster
define command{
        command_name        check_mysql_status
        command_line        /usr/lib/nagios/plugins/pmp-check-mysql-status -x wsrep_cluster_status -C == -T str -c non-Primary
}

define service{
        use                 generic-service
        hostgroup_name      mysql-servers
        service_description Cluster
        check_command       pmp-check-mysql-status!wsrep_cluster_status!==!str!non-Primary
}
When I run Nagios and go to monitor it, I get this message in the Nagios dashboard:
status: UNKNOWN; /usr/lib/nagios/plugins/pmp-check-mysql-status: 31:
shift: can't shift that many
You verified that:
/usr/lib/nagios/plugins/pmp-check-mysql-status -x wsrep_cluster_status -C == -T str -c non-Primary
works fine on the command line on the target host? I suspect there's a shell escaping issue with the ==.
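If the == is indeed being eaten by a shell, one sketch of a fix (untested here; the exact quoting depends on how your Nagios version expands command_line) is to quote the operator in the command definition:

```
define command{
        command_name    check_mysql_status
        command_line    /usr/lib/nagios/plugins/pmp-check-mysql-status -x wsrep_cluster_status -C '==' -T str -c non-Primary
}
```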
Does this work well for you? /usr/lib64/nagios/plugins/pmp-check-mysql-status -x wsrep_flow_control_paused -w 0.1 -c 0.9
I am currently in the midst of doing distributed load testing on Amazon's EC2 services and have diligently followed all documentation/forum/support on how to get things to work, but unfortunately find myself stuck at this point. No one in any of the relevant IRC's has been able to answer this either...
Here is what I am seeing:
I am at a point where I can get Tsung to work perfectly well if I run it simply on the controller itself, but with options:
FROM tsung.xml
<client host="tester0" weight="8" maxusers="10000" cpu="4"/>
Also - it works with higher/lower CPU values.
I can also very easily get it to work locally using:
use_controller_vm="true"
but this is of no use to me now, as I cannot get the throughput that I'd like.
In order to get things working, I have ssh keys installed. I have opened all ports on these servers [ 0 - 65535 ] and have the exact same versions of Tsung, Erlang and, well, everything on the server is the same actually (they are images of each other).
Tsung version 1.4.2
Erlang R15B01
Ubuntu 12.04LTS
Same EC2 security group (all ports open, both TCP & UDP, and NO iptables or SELinux)
Same EC2 availability zone
When I do start Tsung, I get it to work when I only send, as above, to tester0, and ts_config_server starts newbeam with:
ts_config_server:(6:<0.84.0>) starting newbeam on host tester0 with Args " -rsh ssh -detached -setcookie tsung -smp disable +A 16 +P 250000 -kernel inet_dist_listen_min 64000 -kernel inet_dist_listen_max 65500 -boot /usr/lib/erlang//lib/tsung-1.4.2/priv/tsung -boot_var TSUNGPATH /usr/lib/erlang/ -pa /usr/lib/erlang//lib/tsung-1.4.2/ebin -pa /usr/lib/erlang//lib/tsung_controller-1.4.2/ebin +K true -tsung debug_level 7 -tsung log_file ts_encoded_47home_47ubuntu_47_46tsung_47log_4720120719_451751"
However, whenever I try to run this with any remote server, the entire test fails and I get zero users:
<client host="tester1" weight="8" maxusers="10000" cpu="1"/>
ts_config_server:(6:<0.84.0>) starting newbeam on host tester1 with Args " -rsh ssh -detached -setcookie tsung -smp disable +A 16 +P 250000 -kernel inet_dist_listen_min 64000 -kernel inet_dist_listen_max 65500 -boot /usr/lib/erlang//lib/tsung-1.4.2/priv/tsung -boot_var TSUNGPATH /usr/lib/erlang/ -pa /usr/lib/erlang//lib/tsung-1.4.2/ebin -pa /usr/lib/erlang//lib/tsung_controller-1.4.2/ebin +K true -tsung debug_level 7 -tsung log_file ts_encoded_47home_47ubuntu_47_46tsung_47log_4720120719_451924"
However, when I try to run it with TWO clients (i.e., as below):
<client host="tester0" weight="8" maxusers="10000" cpu="1"/>
<client host="tester1" weight="8" maxusers="10000" cpu="1"/>
I once again get zero users to start hitting my web servers. I'm not sure why and this is not intuitive to me at all.
ts_config_server:(6:<0.84.0>) starting newbeam on host tester1 with Args " -rsh ssh -detached -setcookie tsung -smp disable +A 16 +P 250000 -kernel inet_dist_listen_min 64000 -kernel inet_dist_listen_max 65500 -boot /usr/lib/erlang//lib/tsung-1.4.2/priv/tsung -boot_var TSUNGPATH /usr/lib/erlang/ -pa /usr/lib/erlang//lib/tsung-1.4.2/ebin -pa /usr/lib/erlang//lib/tsung_controller-1.4.2/ebin +K true -tsung debug_level 7 -tsung log_file ts_encoded_47home_47ubuntu_47_46tsung_47log_4720120719_451751"
ts_config_server:(6:<0.85.0>) starting newbeam on host tester0 with Args " -rsh ssh -detached -setcookie tsung -smp disable +A 16 +P 250000 -kernel inet_dist_listen_min 64000 -kernel inet_dist_listen_max 65500 -boot /usr/lib/erlang//lib/tsung-1.4.2/priv/tsung -boot_var TSUNGPATH /usr/lib/erlang/ -pa /usr/lib/erlang//lib/tsung-1.4.2/ebin -pa /usr/lib/erlang//lib/tsung_controller-1.4.2/ebin +K true -tsung debug_level 7 -tsung log_file ts_encoded_47home_47ubuntu_47_46tsung_47log_4720120719_451751"
One thing that I do notice is that, of all the args passed to slave:start, only ONE does not exist, and that is the one following the -boot directive:
/usr/lib/erlang//lib/tsung-1.4.2/priv/tsung
Rather, in that directory, I have only the following files:
:~$ ls /usr/lib/erlang//lib/tsung-1.4.2/priv
tsung.boot tsung_controller.rel tsung_recorder.boot tsung_recorder.script
tsung_controller.boot tsung_controller.script tsung_recorder_load.boot tsung.rel
tsung_controller_load.boot tsung_load.boot tsung_recorder_load.script tsung.script
tsung_controller_load.script tsung_load.script tsung_recorder.rel
The last thing I've actually tried is to log what is happening with my ssh session when I try to slave:start, but I get no results. I did this by running:
erl -rsh ssh -sname tsung -r ssh_log_me
Where ssh_log_me is:
#!/bin/sh
echo "$0" "$@" > /tmp/my-ssh.log
ssh -v "$@" 2>&1 | tee -a /tmp/my-ssh.log
But I get no output when I run:
(tsung@tester0)1> slave:start_link(tester1, tsung, " -rsh ssh -detached -setcookie tsung -smp disable +A 16 +P 250000 -kernel inet_dist_listen_min 64000 -kernel inet_dist_listen_max 65500 -boot /usr/lib/erlang//lib/tsung-1.4.2/priv/tsung -boot_var TSUNGPATH /usr/lib/erlang/ -pa /usr/lib/erlang//lib/tsung-1.4.2/ebin -pa /usr/lib/erlang//lib/tsung_controller-1.4.2/ebin +K true -tsung debug_level 7 -tsung log_file ts_encoded_47home_47ubuntu_47_46tsung_47log_4720120719_451751").
{error,timeout}
I have looked through erlang's -boot directive and through the actual erlang code (for ts_config_server) but I am a little lost at this point and might just be missing one last piece of information.
I ask that you please take a look at my xml file here: http://pastebin.com/2MEbL6gd
I recompiled using git's most current version and it worked. Odd that it didn't work with my installed deb package...
Goes to show you: compile from source when unsure!
1. Ensure ssh host key verification is disabled. In ~/.ssh/config:
Host *
    StrictHostKeyChecking no
    UserKnownHostsFile /dev/null
2. Make sure all ports are accessible across controller and worker nodes. If it is in the cloud, make sure the firewall or security groups allow all ports.
3. Erlang and Tsung must be the same version on all machines.
4. Ensure all machines are reachable from each other.
5. Run an Erlang test:
erl -rsh ssh -name subbu -setcookie tsung
Erlang R16B03-1 (erts-5.10.4) [source] [64-bit] [smp:2:2] [async-threads:10] [hipe] [kernel-poll:false]

Eshell V5.10.4 (abort with ^G)
(daya@ip-10-0-100-224.ec2.internal)1> slave:start("worker1.com",bar,"-setcookie tsung").
Warning: Permanently added 'worker1,10.0.100.225' (ECDSA) to the list of known hosts.
{ok,bar@worker1}
Run this test from controller to all worker nodes.
You should be able to run tests without any issues.
Good Luck!
Subbu
I'm working on a RabbitMQ distributed POC and I'm stuck at the basics of clustering the nodes.
I'm trying to follow the rabbit's tutorial on clustering so this is my reference.
After installing erlang (R14B04) and rabbit (2.8.2-1) I've copied the .erlang.cookie file contents from one node to the other two.
I wasn't sure how to get Erlang to notice this change, so I had to restart the machines themselves (pretty brute force, but I don't know Erlang at all).
In addition, I opened port 4369 and 5 additional ports for communication in iptables, and placed under
/usr/lib64/erlang/bin/sys.config the following config:
[{kernel,[{inet_dist_listen_min, XX00},{inet_dist_listen_max, XX05}]}].
Then another restart (dumb, I know) to verify Erlang takes these into consideration, but still, when I run:
rabbitmqctl cluster rabbit@HostName1
I get:
Clustering node rabbit@HostName2 with [rabbit@HostName1] ...
Error: {no_running_cluster_nodes,[rabbit@HostName1],
                                 [rabbit@HostName1]}
There is a chance my fiddling with the erlang.cookie or with the ports did not succeed but I don't know how to check them. I tried typing erl in the cmd and then erl_epmd:names() or other commands to get more information but I'm probably way off in erlang land.
Would truly appreciate any help
Update:
I tried pinging two erlang nodes manually and got pang back.
I did the following:
Connected to two nodes, stopped rabbitmq (wasn't sure if needed, but just to be sure), and started Erlang like so (erl -sname dilbert and erl -sname dilbert2). When the Erlang command line started I ran node(). on each of them and got dilbert@HostName1 and dilbert2@HostName2 respectively. I then tried to run net_adm:ping('dilbert'). and net_adm:ping('dilbert@HostName1'). with the single quotes and without them, from both nodes (changed names of course), and got pang in all 8 cases.
When I ran nodes(). on one of the machines I got back an empty array.
I've also tried to allow all traffic in the firewall (script) and then try to run the above commands (don't worry they're back on now) and still got back pang.
Update2:
For some reason I had a cookie mismatch which I needed to resolve (thanks @kjw0188 for the suggestion [I ran erlang:get_cookie(). in the Erlang command line]).
This did not help, and I needed to stop iptables completely (not sure why, but I'll figure it out soon) and load the Erlang node with -name dilbert@my-ip, because my Rackspace servers have no DNS name. This finally enabled me to get a pong and see the nodes see each other (nodes(). returns a non-empty array after the ping).
The problem I'm facing now is how to instruct RabbitMQ to use -name instead of -sname when starting Erlang.
So I had multiple issues with connecting my two RabbitMQ nodes:
I'll add that my nodes are hosted on Rackspace, and so don't have a default exposable hostname, and require iptables since there is no DMZ or built-in security group concept like Amazon's.
Problems:
1. Cookie: Not sure how or why, but I had multiple instances of .erlang.cookie (in /root, in my home directory and in /var/lib/rabbitmq/). I kept only the one in rabbitmq and verified all nodes have the same cookie.
2. IPTables: In order for the nodes to communicate I needed to open the epmd port and the range of ports for the actual communication (inet_dist_listen_min to inet_dist_listen_max):
/sbin/iptables -A INPUT -i eth1 -p tcp --dport ${epmd} -s ${otherNode} -j ACCEPT
/sbin/iptables -A INPUT -i eth1 -p tcp --dport ${inet_dist_listen_min}:${inet_dist_listen_max} -s ${otherNode} -j ACCEPT
epmd is the usual 4369 port; for the other range, use whatever range you want.
${otherNode} is the IP of my other node.
I also needed to configure erlang through rabbitmq to use these ports (see config file at end)
3. HostName: Seeing as I don't have a hostname, I needed to edit the rabbit scripts to use -name and not -sname (the first tells Erlang to take the whole name; the latter stands for short name and thus appends an @ symbol and the hostname).
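The distinction above can be sketched quickly in a shell (rabbit here is just a stand-in node name, not the actual RabbitMQ invocation): -sname derives the host part from the machine's short hostname, while -name uses exactly the node@host string you give it.

```shell
# What an -sname node would register as on this machine:
# -sname rabbit  =>  rabbit@<short hostname>
echo "rabbit@$(hostname -s)"

# With -name you must supply the full name yourself, e.g. an IP:
# -name rabbit@10.0.0.5  =>  'rabbit@10.0.0.5'
```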
This was accomplished by editing:
/usr/lib/rabbitmq/bin/rabbitmqctl
Added at the beginning the definition of the RABBITMQ_NODE_IP_ADDRESS property
DEFAULT_NODE_IP_ADDRESS=auto
DEFAULT_NODE_PORT=5672
[ "x" = "x$RABBITMQ_NODE_IP_ADDRESS" ] && RABBITMQ_NODE_IP_ADDRESS=${NODE_IP_ADDRESS}
[ "x" = "x$RABBITMQ_NODE_PORT" ] && RABBITMQ_NODE_PORT=${NODE_PORT}
[ "x" = "x$RABBITMQ_NODE_IP_ADDRESS" ] && [ "x" != "x$RABBITMQ_NODE_PORT" ] && RABBITMQ_NODE_IP_ADDRESS=${DEFAULT_NODE_IP_ADDRESS}
[ "x" != "x$RABBITMQ_NODE_IP_ADDRESS" ] && [ "x" = "x$RABBITMQ_NODE_PORT" ] && RABBITMQ_NODE_PORT=${DEFAULT_NODE_PORT}
and in the actual erl command I changed
-sname ${RABBITMQ_NODENAME} \
to
-name ${RABBITMQ_NODENAME}@${RABBITMQ_NODE_IP_ADDRESS} \
This made rabbitmq listen only on the specified IP address (specified in the config file at the end) and load with that IP instead of the usual hostname.
edited /usr/lib/rabbitmq/bin/rabbitmq-server
Changed the actual erl command from -sname ${RABBITMQ_NODENAME} \ to -name ${RABBITMQ_NODENAME}@${RABBITMQ_NODE_IP_ADDRESS} \
Added a rabbit conf (/etc/rabbitmq/rabbitmq-env.conf) file with-
#the IP address which rabbit should use; this is to limit rabbit to only use internal Rackspace communication and not publicly accessible ports
NODE_IP_ADDRESS=myIpAdress
#had to change the nodename because otherwise rabbitmq used rabbit@Hostname and not only rabbit
NODENAME=myCompany
#This told rabbit to instruct erlang which ports it should use for its communications with other nodes
export SERVER_ERL_ARGS="$SERVER_ERL_ARGS -kernel inet_dist_listen_min somePort -kernel inet_dist_listen_max someOtherBiggerPort"
Some resources which helped me along the way:
RabbitMQ Clustering Guide
Clustering RabbitMQ servers for High Availability
rabbitmq-env.conf(5) manual page
Node communication by public IP address erlang mailing list (The middle post)
Configuring RabbitMQ Cluster on Cloud
Hope this will help anyone else.
EDIT:
Not sure how I was mistaken, but it seemed my Erlang-rabbit port instructions were not taken into consideration, or were not enough. I ended up having to allow all communications between the two nodes...
One thing to really watch out for is whitespace of any kind in the Erlang cookie file, especially line breaks AFTER the contents of the cookie. So long as both are identical, things are okay, but when one has a line break and the other doesn't, things won't work.
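A quick way to catch this (a sketch using throwaway files; the real cookie typically lives at /var/lib/rabbitmq/.erlang.cookie or ~/.erlang.cookie): od -c makes invisible trailing characters visible, and cmp does a byte-for-byte comparison.

```shell
# Two cookies that look identical but differ by a trailing newline:
printf 'SECRETCOOKIE'   > /tmp/cookie_a
printf 'SECRETCOOKIE\n' > /tmp/cookie_b

# od -c shows the stray \n at the end of cookie_b
od -c /tmp/cookie_b

# Byte-for-byte comparison catches what the eye misses:
cmp -s /tmp/cookie_a /tmp/cookie_b && echo identical || echo different   # prints "different"
```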
Background: I was facing the same issue while setting up Rabbitmq cluster. I was using 2 docker containers running on my host-machine, which is equivalent to 2 separate nodes and I could not create a cluster of these two.
Solution:
1. Make sure you have the same Erlang cookie on all your cluster nodes; the default location is /var/lib/rabbitmq/.erlang.cookie. This file is used for authentication, so make sure you have it the same on all the nodes. After changing the .erlang.cookie, restart your rabbitmq service.
2. Make sure that the nodes are accessible from one another; use ping or telnet to check the connection.
3. Check that /etc/hosts has correct entries; for example, if rabbit2 wants to join cluster rabbit1, /etc/hosts of rabbit2 should contain:
172.68.1.6 rabbit1
172.68.1.7 rabbit2
Now stop the service using $rabbitmqctl stop_app, followed by $rabbitmqctl join_cluster rabbit@rabbit1. Start your service with rabbitmqctl start_app and check $rabbitmqctl cluster_status to see whether you have joined the cluster or not.
I followed the rabbitmq official documentation to setup the cluster.
to change RabbitMQ sname/name behaviour you can edit the scripts:
rabbitmq-multi
rabbitmq-server
rabbitmqctl
Example
In script rabbitmqctl there is the following piece of code:
exec erl \
-pa "${RABBITMQ_HOME}/ebin" \
-noinput \
-hidden \
${RABBITMQ_CTL_ERL_ARGS} \
-sname rabbitmqctl$$ \
-s rabbit_control \
-nodename $RABBITMQ_NODENAME \
-extra "$@"
You have to change it to:
exec erl \
-pa "${RABBITMQ_HOME}/ebin" \
-noinput \
-hidden \
${RABBITMQ_CTL_ERL_ARGS} \
-name rabbitmqctl$$ \
-s rabbit_control \
-nodename $RABBITMQ_NODENAME \
-extra "$@"
http://pearlin.info/?p=1672
So you need to copy the cookie from the node you are trying to connect to.
Example: rabbit@node1, rabbit@node2
Go to rabbit@node1 and copy the cookie from cat /var/lib/rabbitmq/.erlang.cookie
Go to rabbit@node2, remove the current cookie and paste the new one.
On the same node:
/usr/sbin/rabbitmqctl stop_app
/usr/sbin/rabbitmqctl reset
/usr/sbin/rabbitmqctl cluster rabbit@node1
That should do it.
The same is documented here:
http://pearlin.info/?p=1672
I have a running erlang application, launched with this command line
erl -boot start_sasl -config config/cfg_qa -detached -name peasy -cookie peasy -pa ./ebin -pa ./ebin/mochiweb -s peasy start
If I start a new node and run appmon:start(), the 'peasy' node won't show up, even if using the same cookie. The same happens with webtool:start()
Anyone?
Found.
As always with Erlang, to have two nodes speak to each other, you need to ping:
1> net_adm:ping(other_node_you_want_to_monitor).
pong
2> appmon:start().
{ok,<0.48.0>}
And off you go :)