Unable to start rabbitmq-server (Erlang)
I installed RabbitMQ using Homebrew. When I try to start the RabbitMQ server I always get the following error, and I cannot figure out why. I have Erlang installed, and no other application is running on the same port.
$ rabbitmq-server
{error_logger,{{2013,2,11},{22,37,49}},"Can't set short node name!\nPlease check your configuration\n",[]}
{error_logger,{{2013,2,11},{22,37,49}},crash_report,[[{initial_call,{net_kernel,init,['Argument__1']}},{pid,},{registered_name,[]},{error_info,{exit,{error,badarg},[{gen_server,init_it,6,[{file,"gen_server.erl"},{line,320}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,227}]}]}},{ancestors,[net_sup,kernel_sup,]},{messages,[]},{links,[]},{dictionary,[{longnames,false}]},{trap_exit,true},{status,running},{heap_size,610},{stack_size,24},{reductions,249}],[]]}
{error_logger,{{2013,2,11},{22,37,49}},supervisor_report,[{supervisor,{local,net_sup}},{errorContext,start_error},{reason,{'EXIT',nodistribution}},{offender,[{pid,undefined},{name,net_kernel},{mfargs,{net_kernel,start_link,[[rabbitmqprelaunch1593,shortnames]]}},{restart_type,permanent},{shutdown,2000},{child_type,worker}]}]}
{error_logger,{{2013,2,11},{22,37,49}},supervisor_report,[{supervisor,{local,kernel_sup}},{errorContext,start_error},{reason,shutdown},{offender,[{pid,undefined},{name,net_sup},{mfargs,{erl_distribution,start_link,[]}},{restart_type,permanent},{shutdown,infinity},{child_type,supervisor}]}]}
{error_logger,{{2013,2,11},{22,37,49}},std_info,[{application,kernel},{exited,{shutdown,{kernel,start,[normal,[]]}}},{type,permanent}]}
{"Kernel pid terminated",application_controller,"{application_start_failure,kernel,{shutdown,{kernel,start,[normal,[]]}}}"}
Crash dump was written to: erl_crash.dump
Kernel pid terminated (application_controller) ({application_start_failure,kernel,{shutdown,{kernel,start,[normal,[]]}}})
By the way, running erl -sname abc gives the same output.
Update:
This is what I have in /etc/hosts:
127.0.0.1 localhost
255.255.255.255 broadcasthost
Check that your computer name and your short host name (or alias) in /etc/hosts match.

Check your computer name:

[wendy@nyc123]$

nyc123 is your computer name.

Check your short hostname:

[wendy@nyc123]$ hostname -s
nyc456

This error can happen because your computer name and short host name don't match. To make them match, you can either change the computer's hostname or add an alias in /etc/hosts.

Change the computer host name:

[wendy@nyc123]$ hostname nyc456

Close your terminal and open it again:

[wendy@nyc456]$

The computer name has changed.

Or change the alias in /etc/hosts:

127.0.0.1 nyc123.com nyc123

Save and check again:

[wendy@nyc123]$ hostname -s
nyc123

Restart your RabbitMQ!
[root@nyc123]$ rabbitmq-server start

              RabbitMQ 3.6.0. Copyright (C) 2007-2015 Pivotal Software, Inc.
  ##  ##      Licensed under the MPL.  See http://www.rabbitmq.com/
  ##  ##
  ##########  Logs: /var/log/rabbitmq/rabbitmq@nyc123.com.log
  ######  ##        /var/log/rabbitmq/rabbitmq@nyc123.com-sasl.log
  ##########
              Starting broker... completed with 6 plugins.
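The manual check described above can also be scripted. A minimal sketch that tests whether a short hostname appears among the aliases in a hosts file (the hostname nyc123 and the temporary file are examples; on a real machine compare $(hostname -s) against /etc/hosts):

```shell
# Build a sample hosts file; on a real system you would read /etc/hosts.
hosts_file=$(mktemp)
printf '127.0.0.1 localhost\n127.0.0.1 nyc123.com nyc123\n' > "$hosts_file"

short_name="nyc123"   # on a real machine: short_name=$(hostname -s)

# -w matches whole words only, so "nyc123" will not match "nyc1234".
if grep -qw "$short_name" "$hosts_file"; then
  result="match"
else
  result="no match"
fi
echo "$result"
rm -f "$hosts_file"
```

If this prints "no match", RabbitMQ's short-name resolution is likely to fail as described above.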
I looked for a similar error on Google, and it looks like it can happen if your /etc/hosts file is in the wrong format. Try fixing it and see if that helps.
References:
http://www.ejabberd.im/node/18
Explanation on RabbitMQ Mailing list
Edit: For completeness, it seems that setting a long name (of the form abc@abc) worked.
Found the answer here:
control rabbitmq 'name' not 'sname'
Set your machine name to something simple and make it an alias of localhost.
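For reference, the long-name variant can be set through RabbitMQ's environment file; a sketch, assuming a reasonably recent RabbitMQ 3.x (the node name is an example, and you should verify these variables against your version's documentation):

```ini
# /etc/rabbitmq/rabbitmq-env.conf
# Variables here lose the RABBITMQ_ prefix they carry as environment variables.
NODENAME=rabbit@nyc123.example.com   # example fully-qualified node name
USE_LONGNAME=true                    # tell Erlang to use long node names
```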
I also encountered this problem yesterday and found the root cause: I had changed my system's hostname to a "long" name, pm3(hc desktop).

If your server's hostname is long or invalid, Linux still works and no error message is shown; you just modify the /etc/hostname file and reboot. However, the RabbitMQ server may not start, giving you this "short-name" error message.

I changed the hostname back to "pm3", rebooted, and everything went well.
I solved this issue by changing the computer name (on Windows 8.1). The problem was that the name contained a non-ASCII character, the Spanish letter é. My computer name was Andrés; I changed it to Andres, restarted my computer, and everything worked. I think RabbitMQ could not handle that character in the name.
Remove the old-style config file /etc/rabbitmq/rabbitmq.config and use rabbitmq.conf instead, with:

listeners.tcp.default = 5672

After that, restart the RabbitMQ server again. In my case that solved the issue on an EC2 instance.
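For reference, the new-style file mentioned above is a flat key = value format (introduced in RabbitMQ 3.7); a minimal sketch with the listener setting from this answer:

```ini
# /etc/rabbitmq/rabbitmq.conf (new ini-style format, RabbitMQ 3.7+)
# Only the listener line comes from the answer above; anything else you add
# should be checked against the RabbitMQ configuration documentation.
listeners.tcp.default = 5672
```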
Related
fail2ban won't start using nextcloud.log with jail
I have Nextcloud installed and working fine in a Docker container, but I want fail2ban to monitor the log files for brute-force attempts. I know Nextcloud has its own protection baked in, but it just throttles the login attempts, and I would like to ban them outright (I also have this problem with other containers). The docker-compose file is set to create the nextcloud.log file at /mnt/nextcloud/log/nextcloud.log. I followed this guide to create the jail: https://www.c-rieger.de/nextcloud-installation-guide-ubuntu/#c06 Fail2ban is running on the host machine; however, fail2ban fails to start with:

[447]: ERROR Failed during configuration: Have not found any log file for nextcloud jail
[447]: ERROR Async configuration of server failed

Thinking it was simply a permission issue, I chowned everything to root and tried to start again, but the service still won't start. What am I doing wrong? Thanks for the help!
The docker-compose is set to create the nextcloud.log file to /mnt/nextcloud/log/nextcloud.log

Be sure this file really exists and that your jail.local has the correct logpath entry:

[nextcloud]
...
logpath = /mnt/nextcloud/log/nextcloud.log

You can also check the resulting config using a dump:

fail2ban-client -d | grep 'nextcloud.*logpath'

But I'm still not sure the error message you provided was thrown by fail2ban, because its error messages look different; see https://github.com/fail2ban/fail2ban/commit/27947407bc7910f0f50972113218ebc73c4a22c7 It should be something like:

-have not found a log file for nextcloud log
+Have not found any log file for nextcloud jail
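A fuller jail definition along the lines described above might look like this; a sketch only, where the port list, maxretry value, and the assumption that a "nextcloud" filter exists in filter.d are all examples to adapt:

```ini
# /etc/fail2ban/jail.local
[nextcloud]
enabled  = true
port     = 80,443
filter   = nextcloud            ; requires /etc/fail2ban/filter.d/nextcloud.conf
logpath  = /mnt/nextcloud/log/nextcloud.log
maxretry = 3
```

After editing, fail2ban-client -d is a quick way to confirm the logpath fail2ban actually resolved.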
Prevent default redirection from port 80 to 5000 on Synology NAS (DSM 5)
I would like to use an nginx front server on my Synology NAS for reverse-proxying purposes. The goal is to provide a facade for the non-standard port numbers used by the diverse webservers hosted on the NAS. nginx should be listening on port 80, otherwise all this wouldn't make any sense. However, DSM comes out of the box with an Apache server that is already listening on port 80. What it does is really silly: it simply redirects to port 5000, which is the entry point to the NAS web manager (DSM). What I would like to do is disable this functionality, making port 80 available for my nginx server. How can I do this?
Since Google redirects here also for recent Synology DSM, I answer for DSM 6 (based on http://tonylawrence.com/posts/unix/synology/freeing-port-80/). From DSM 6, nginx is used as the HTTP server and the redirection place. The following commands leave nginx in place but run it on port 8880 instead of 80.

ssh into your Synology, then:

sudo -s
cd /usr/syno/share/nginx

Make a backup of server.mustache, DSM.mustache, and WWWService.mustache:

cp server.mustache server.mustache.bak
cp DSM.mustache DSM.mustache.bak
cp WWWService.mustache WWWService.mustache.bak

sed -i "s/80/8880/g" server.mustache
sed -i "s/80/8880/g" DSM.mustache
sed -i "s/80/8880/g" WWWService.mustache

Optionally, you can also move 443 to 8881:

sed -i "s/443/8881/g" server.mustache
sed -i "s/443/8881/g" DSM.mustache
sed -i "s/443/8881/g" WWWService.mustache

Quit the shell (e.g. via Ctrl+D). Go to the Control Panel and change any setting (e.g. Application Portal -> Reverse Proxy to forward http://YOURSYNOLOGYHOSTNAME:80 to http://localhost:8181; 8181 is the port suggested by the pi-hole on DSM tutorial).
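The sed commands above do a global substitution, so it is worth seeing what they change on a scratch copy before touching the real mustache files. A sketch with GNU sed (as on the Synology); the file contents here are invented examples, not the real templates:

```shell
# Demonstrate the port rewrite on a throwaway file.
f=$(mktemp)
printf 'listen 80;\nredirect http://host:80/;\n' > "$f"

cp "$f" "$f.bak"            # keep a backup, as the answer recommends
sed -i "s/80/8880/g" "$f"   # same substitution used on the mustache files

rewritten=$(cat "$f")
echo "$rewritten"
rm -f "$f" "$f.bak"
```

Note that the pattern 80 is unanchored, so it would also rewrite unrelated numbers containing "80"; on the real files, inspect the diff against the .bak copy before rebooting.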
tl;dr

Edit /usr/syno/etc/synoservice.d/httpd-user.cfg to look like:

{
    "init_job_map":{"upstart":["httpd-user"]},
    "user_controllable":"no",
    "mtu_sensitive":"yes",
    "auto_start":"no"
}

Then edit the stop on runlevel line to be [0123456] in /etc/init/httpd-user.conf:

Syno-Server> cat /etc/init/httpd-user.conf
description "start httpd-user daemon"
author "Development Infrastructure Team"

console log
reload signal SIGUSR1

start on syno.share.ready and syno.network.ready
stop on runlevel [0123456]
...

... then reboot.

Background information

The answer given by Backslash36 is not the easiest solution, and it may also be more difficult to maintain. Here I give a solution that also doesn't involve starting Web Station, which most other solutions demand. Note: for updated documentation see here, which gives a lot of info in general about the Synology systems. It is important to note that the newer DSM versions (> 5.x) use upstart, so much of the previous documentation is no longer correct.

There are two httpd jobs which run by default on Synology machines:

httpd-sys: serves the administration page(s) and listens on 5000/5001 by default.
httpd-user: this, somewhat confusingly, always runs even if Web Station is not enabled.

If Web Station is enabled, httpd-user serves the user webpages. If it is not enabled, httpd-user sets /usr/syno/synoman/phpsrc/web as its DocumentRoot (/usr/syno/synoman/phpsrc/web/index.cgi -> /usr/syno/synoman/webman/index.cgi), meaning that a call to http://address.of.my.dsm will call the index.cgi file. This cgi file is what drives the redirect to 5000 (or whatever you have set the admin_port to be).

From the command line, you can check what the [secure_]admin_port is set to:

Syno-Server> get_key_value /etc/synoinfo.conf admin_port
5184
Syno-Server> get_key_value /etc/synoinfo.conf secure_admin_port
5185

where I have set mine differently.

Ok, now to the solution. The best solution is simply to stop the httpd-user daemon from starting. This is presumably what you want anyway (e.g. to start another server like nginx in a Docker container). To do this, edit the relevant upstart configuration file:

Syno-Server> cat /usr/syno/etc/synoservice.d/httpd-user.cfg
{
    "init_job_map":{"upstart":["httpd-user"]},
    "user_controllable":"no",
    "mtu_sensitive":"yes",
    "auto_start":"no"
}

so that the "auto_start" entry is "no" (as it is above). It will presumably be "yes" on your machine by default. Then edit the stop on runlevel line to be [0123456] in /etc/init/httpd-user.conf:

Syno-Server> cat /etc/init/httpd-user.conf
description "start httpd-user daemon"
author "Development Infrastructure Team"

console log
reload signal SIGUSR1

start on syno.share.ready and syno.network.ready
stop on runlevel [0123456]
...

This last step ensures that the httpd-user service does actually start, but then automatically stops. This is needed because a number of other services depend on it starting. Reboot your machine and you will see that nothing is listening (or forwarding) on port 80.
Done! It was tricky, but now I have it working just fine. Here is how I did it. What follows requires connecting to the NAS with ssh, and may not be recommended if you want to keep the warranty on your product (even though it's completely safe IMHO).

TL;DR: In the following files, replace all occurrences of port 80 by a non-standard port (for example, 8080). This will release port 80 and make it available for whatever you want.

/etc/httpd/conf/httpd.conf
/etc/httpd/conf/httpd.conf-user
/etc/httpd/conf/httpd.conf-sys
/etc.defaults/httpd/conf/httpd.conf-user
/etc.defaults/httpd/conf/httpd.conf-sys

Note that modifying a subset of these files is probably sufficient (I could observe that the first one is actually computed from several others). I guess modifying the files in /etc.defaults/ would be enough, but if not, the worst-case scenario is to modify all of them and you will be just fine. Once this is done, don't forget to restart your NAS!

For those interested in how I found out:

I'm not that familiar with the Linux filesystem, and even less with Apache configuration. But I knew that scripts dealing with startup processes are located in /etc/init. The Apache server performing the redirection would certainly be launched from there. This is where I had to get my hands dirty. I performed some cat <filename> | grep 80 on the files in that directory I considered relevant, hoping to find a configuration line that would set a port number to 80. That intuition paid off: /etc/init/httpd-user.conf contained the line

echo "DocumentRoot \"/usr/syno/synoman/phpsrc/web\"" >> "${HttpdConf}" #port 80 to 5000

Bingo! Looking at the top of the file, I discovered that the HttpdConf variable referred to /etc/httpd/conf/httpd.conf. This is where the actual configuration was taking place. From there it is relatively straightforward, even for those Jon Snows out there who know nothing about Apache configuration. The trick was to notice that httpd.conf was instantiated from some template at startup (so changing that file alone was not enough). Performing a find / -name "*httpd.conf*", combined with some grep 80, gave me the list of files to modify. Looking back, all this seems obvious of course. However, I wish Synology gave us more flexibility, so we don't have to perform dirty hacks like this...
Why does the LLDB Debugger constantly fail to attach?
I have seen a lot of answers for this question (error: failed to attach to process ID) that suggest switching to GDB, but none address why it happens. Attaching works fine with the GDB debugger, yet the default and recommended project setting is LLDB. Can anybody explain why LLDB fails? Is it a common bug, or am I doing something wrong? Alternatively, how can I set GDB as my default debugger without changing it manually for each new project?

System info: OS: Lion; RAM: 5GB; Xcode: Version 4.6 (4H127); Device: Mac mini.

My localhost setting:
Make sure you have localhost mapped to 127.0.0.1 in your /etc/hosts file:

$ grep localhost /etc/hosts

If grep doesn't show 127.0.0.1, then add it:

$ sudo -i
# echo "127.0.0.1 localhost" >> /etc/hosts

^ That '#' is root's command prompt; don't type it, otherwise you will comment out the statement and nothing will happen.

NOTE: Use >> and not >! (Better still, edit the file using vi or mate or whatever.)

My /etc/hosts file shows (ignoring comments):

127.0.0.1       localhost
255.255.255.255 broadcasthost
::1             localhost
fe80::1%lo0     localhost
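The check above can be scripted non-interactively; a sketch run against a temporary sample file (on a real machine, point it at /etc/hosts instead):

```shell
# Verify that a hosts file maps localhost to 127.0.0.1.
hosts=$(mktemp)
printf '127.0.0.1 localhost\n::1 localhost\n' > "$hosts"

# Anchor on the address at line start and require whitespace before the name,
# so a commented-out "#127.0.0.1 localhost" line does not count as a match.
if grep -qE '^127\.0\.0\.1[[:space:]]+localhost' "$hosts"; then
  status="ok"
else
  status="missing"
fi
echo "$status"
rm -f "$hosts"
```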
Apple likes to move forward, so setting gdb as the debugger for all new projects is not an option. Sometimes you have to reset the iOS Simulator to clean up the debugger.
RAILS, CUCUMBER: Getting the testing server address
While running a cucumber test, I need to know the local testing server address. It will be something like "localhost:47632". I've searched the ENV but it isn't in there, and I can't seem to find any other variables that might have it. Ideas?
I believe the port is dynamically generated on test runs. You can use OS-level tools to inspect which connections are opened by the process and glean the port that way. I do this on my Ubuntu system infrequently, so I can't tell you off the top of my head which tool does it; netstat, maybe? I always have to go out and google for it, so consider this more of a hint than a complete answer.

To be clearer: I put a debug breakpoint in, and when it breaks, I then use the OS-level tools to see what port the test server is running on at that moment in time. How to discover it predictively? No idea, sorry. Here's what I use:

netstat -an | grep LISTEN
(Answering my own question just so that the code formatting will be correct.) Using jaydel's idea to use netstat, here's the code. I extract the line from netstat that has the current pid. (Probably not the most elegant way to do this, but it works.)

value = %x( netstat -l -p --tcp )
pid = $$.to_s
local_port = ""
value.split( "\n" ).each do |i|
  if i.include?( pid )
    m = i.match( /\*:(\d+)/ )
    local_port = m[1].to_s
  end
end
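The same extraction can be done directly in the shell. A sketch against canned netstat-style output (the PID 1234, port 47632, and the line format are invented examples; on a real run you would pipe actual netstat output in and use the test process's own pid):

```shell
# Sample output in the shape "proto recv send local-addr remote-addr state pid/name".
sample='tcp 0 0 0.0.0.0:47632 0.0.0.0:* LISTEN 1234/ruby
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 999/sshd'

pid=1234

# Find the line owned by $pid and split the local address on ":" to get the port.
port=$(printf '%s\n' "$sample" | awk -v pid="$pid" '$0 ~ (pid "/") { split($4, a, ":"); print a[2] }')
echo "$port"
```

Matching on "pid/" rather than the bare pid avoids false hits where the number appears inside an address or another field.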
Erlang Web and Inets BindAddress
After installing Erlang Web 1.3 and starting it in interactive mode, I get the following error in the logs:

Failed to start service: "config/inets.conf" due to: "httpd_conf: 0.0.0.0 is an invalid address"

In my inets.conf I have the following:

BindAddress 0.0.0.0

My sys.config:

[{inets,[{services,[{httpd,"config/inets.conf"}]}]}].

Any suggestion?
I fixed the problem myself. I just changed the BindAddress line in inets.conf to:

BindAddress *
This configuration directive is parsed and validated by httpd_conf, which in turn calls httpd_util:ip_address/2. Both of these were changed in R13B02. Have you tried with that Erlang/OTP version?
I have no experience with this language or situation, but it looks like 0.0.0.0 is being rejected as an invalid address. Have you tried changing it to something like 127.0.0.1?