The title is what appears in my ejabberd log files after an unsuccessful start. I did some googling and it seems to be related to my cookie.
Running erlang:get_cookie() in the Erlang shell prints 'nocookie'. Is this typical for a fresh install of Erlang? How should I ideally go about setting the cookie?
nocookie would likely mean that the node hasn't been started in distributed mode. All distributed nodes otherwise have a cookie.
You will have to make sure your node is started in distributed mode (through -name or -sname arguments) for things to work.
In case the node is already started and you have access to a shell, you can start the distributed kernel by hand with net_kernel:start([Name, shortnames]) or net_kernel:start([Name, longnames]).
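For example, a minimal sketch (the node name and cookie below are placeholders):

erl -sname foo -setcookie mycookie

or, from a shell that was started without distribution:

net_kernel:start([foo, shortnames]).
erlang:set_cookie(node(), mycookie).
erlang:get_cookie().  %% now returns mycookie instead of nocookie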
I'm new to Docker and have been reconstructing a test environment from instructions left to me by the previous developer.
We have several in-house pieces of job tracking software that both the outside clients and internal employees can access.
The job tracking software has a few dependencies that require a mailhog container to be spun up, and a second container which holds all of the other required supporting software.
Despite my amateur knowledge of the subject and the complexity of the software, I successfully installed Docker with the required Linux kernel and Ubuntu.
I pulled the required images I needed, and successfully built the other ones.
I configured the in-house software with all of the testing settings enabled, and have it all grouped in the proper file directory.
For all intents and purposes, it should be configured correctly.
The trouble starts when starting the containers.
Running "docker-compose up -d" from the proper sub-directory throws this error:
Error response from daemon: Get "https://registry-1.docker.io/v2/": remote error: tls: handshake failure
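For reference, the handshake can be tested outside Docker from the same machine (assuming curl and openssl are available; these commands are a suggestion, not part of the original instructions):

curl -v https://registry-1.docker.io/v2/
openssl s_client -connect registry-1.docker.io:443

If these fail as well, that points at the office network (a proxy or TLS interception) rather than at Docker itself.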
I figured, screw it, I'll build it right from Docker Desktop instead of using the command line, and it worked! The two containers I needed were built successfully.
I even ran the docker test command to make sure they were really running.
So here's where the trouble just gets worse.
The previous developer states the job tracking software can then be viewed in a browser at "https://localhost/jobtracker/".
The page doesn't load and throws an ERR_CONNECTION_REFUSED error.
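A quick check worth doing here (my assumption, not part of the original instructions) is confirming that a container actually publishes port 443 on the host; the container name below is a placeholder:

docker ps --format "{{.Names}}  {{.Ports}}"
docker port <container-name>

If no 0.0.0.0:443->... mapping shows up, nothing is listening on https://localhost/ and the browser will be refused.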
I am at my wits' end with troubleshooting, because I simply don't have the networking or development knowledge to search for the right things. To make matters even more frustrating, the remote outsourced devs have their environment up and running, despite using the exact same instructions I have.
So I'm down to two possible issues: either I messed something up somewhere in the set-up process, or my office network security is breaking something. However, our lead IT specialist told me I should have all of the same network permissions the outsourced guys on the VPNs have.
I am now reaching out on several Docker web sources to find a solution.
I am on Windows 10.
Thankfully my employers are pretty cool about me learning as I go, and understand the technical difficulties. I just don't want to squander the opportunity, and would like to make some progress.
I am using MongooseIM 3.2.0, built from source, on an Ubuntu server. Below are my concerns:
What is the best way to run MongooseIM as a service so that it automatically restarts if MongooseIM crashes or the system restarts?
How do I interact via the terminal with an already-running MongooseIM instance on the Ubuntu server, the way "mongooseimctl live" does? My guess is that running "mongooseimctl live" will try to create another instance. I just want to see the live logs and interact with the node, and don't want to keep scrolling through long log files for this purpose.
I apologize if the answer to the above is obvious, but I just want to follow the best guidance.
mongooseimctl live or mongooseimctl foreground is mostly useful for development or smoke testing a deployment (unless you're running inside a container). For real-world use cases you should start the server in the background with mongooseimctl start.
Back to the container: the best approach for containerised applications is to run them in the foreground, so in a container startup script use mongooseimctl foreground.
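A minimal sketch of that in a Dockerfile, assuming an image that already has mongooseimctl on the PATH:

CMD ["mongooseimctl", "foreground"]

Running in the foreground keeps the container's main process alive, so the container lives and dies with the server.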
Once the server is running (no matter how it was started) attaching a shell to troubleshoot issues can be done with mongooseimctl debug. This is the command to use when you get the Protocol 'inet_tcp': the name mongooseim@localhost seems to be in use by another Erlang node error. Be careful if it's a production environment - you can easily take the server down with access to this shell.
If you're just interested in watching logs, with no interactive access to the server internals that the shell offers, a simple tail -f /your-configured-mongooseim-log-dir/* should be enough.
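Put together, a typical session might look like this (a sketch):

mongooseimctl start
mongooseimctl status
mongooseimctl debug    # attach a shell to the running node

Inside the attached shell, detach safely with Ctrl+G followed by q; typing q(). there would stop the server itself.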
Ubuntu nowadays uses systemd for managing its services' lifetimes. A systemd .service file can be found at https://github.com/esl/MongooseIM/blob/master/tools/pkg/platforms/debian_stretch/files/build/mongooseim.service - we use it for packaging into Debian/Ubuntu .deb packages.
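A minimal sketch of such a unit (install paths are placeholders; the linked file is the authoritative version):

[Unit]
Description=MongooseIM server
After=network.target

[Service]
Type=simple
ExecStart=/usr/local/mongooseim/bin/mongooseimctl foreground
Restart=on-failure

[Install]
WantedBy=multi-user.target

systemctl enable mongooseim covers restarting on boot, and Restart=on-failure covers crashes, which answers the first question.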
I'm running a distributed Erlang system with one node per machine.
Since DNS is not available, I start them all with the same -sname param, e.g.
erl -sname foo ...
An operating-system daemon can execute shell (/bin/sh) commands when a certain event occurs (e.g. when a USB stick is plugged into the system).
I'm looking for a simple way to call a function on the local Erlang node on this machine from that shell command (to take further action after the USB stick has been detected and mounted).
I was thinking of calling erl -sname bar from the shell and running some code that looks like:
[_,Host] = string:tokens(atom_to_list(node()), "@"),
The_node = list_to_atom("foo@" ++ Host),
spawn(The_node, My_fun),
Is this the way to go? Or is starting a whole new Erlang node overkill (I won't be doing it often, though)?
Or is it better to talk to a socket opened with gen_tcp, or to read from a named pipe?
Or any other suggestions?
BTW this is running on a Unix system.
What you want to use is actually erl_call, an application that lets you contact currently running distributed nodes and run arbitrary code.
erl_call makes it possible to start and/or communicate with a distributed Erlang node. It is built upon the erl_interface library as an example application. Its purpose is to use an Unix shell script to interact with a distributed Erlang node.
You can either give it commands, an escript, or pretty much just code to evaluate, and it will do it. There are more details and actual examples in its documentation.
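For example, a minimal sketch for the USB case (the module, function, and cookie are hypothetical, and the cookie must match the target node's):

erl_call -sname foo -c mycookie -a 'usb_handler mounted ["/media/usb"]'

or, evaluating an expression read from stdin:

echo 'usb_handler:mounted("/media/usb").' | erl_call -sname foo -c mycookie -e

erl_call only starts a temporary hidden node, so it avoids the overhead of booting a full erl -sname bar instance.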
I need to debug my ejabberd server and I want to use pman for this purpose. But I only have access via ssh, and the server runs in screen.
I do:
ssh mydoman@example.com
erl -sname test@localhost
(test@localhost)1> pman:start().
<0.123.0>
and it works, but I need to get access to the 'ejabberd@localhost' node from the same machine.
now I press Ctrl+G
--> r 'ejabberd@localhost'
--> c
(ejabberd@localhost)1> pman:start().
** exited: {startup_timeout,pman} **
And my question is - how do I run pman properly?
Pman needs access to the screen on which it runs. I understand that you are running distributed Erlang on both nodes and that they are connected and know of each other. The easiest way is then to run pman locally on your node with pman:start(). There is a Nodes menu which should contain all known nodes, and if you pick ejabberd@localhost you should see all the processes on that node.
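A minimal sketch of that, assuming the nodes share a cookie (the cookie value and host name are placeholders):

erl -sname test -setcookie EJABBERD_COOKIE
(test@yourhost)1> net_adm:ping('ejabberd@localhost').
pong
(test@yourhost)2> pman:start().

Once the ping returns pong the nodes are connected, and ejabberd@localhost should appear in pman's Nodes menu. Since pman is a GUI tool, the ssh session also needs X forwarding (ssh -X) for its window to appear.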
Not sure about pman, but if you want to monitor a remote node, I've created entop for that purpose. It might not work exactly like pman but should be close enough.
https://github.com/mazenharake/entop
I have an Erlang application running as a daemon, configured as an SSH server. I can connect to it with an SSH client and I get the standard Erlang REPL.
If I run 'q().' I shut down the Erlang VM, not just the connection.
If I close the connection ('~.' for OpenSSH, close the window in PuTTY) some processes remain under the sshd_sup/ssh_system_xx_sup tree. These appear to be stale shell processes.
I do not see any exported function in the shell module that would let me shut down the shell (and therefore the SSH connection) without affecting the entire VM.
How should I be logging out of the SSH session to not leave stale processes in the VM?
'exit().' in the SSH client shuts down the connection without stopping the VM.
I could not find this documented anywhere, but it seems to do almost what I want.
Instead of the 4 stale processes per terminated connection left by killing the client, 'exit().' leaves 2 stale processes.
This may now be in the realm of the 'ssh' module and no longer in the realm of the 'shell' module.
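For anyone checking what gets left behind, one way (a sketch, using the standard supervisor API) is to list the children of the ssh daemon's supervision tree from within the VM:

supervisor:which_children(sshd_sup).
%% then drill down into the ssh_system_xx_sup children it returns

Comparing the output before and after closing a connection shows which processes are stale.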