Erlang logger config does not respond to log level setting info

After starting the Erlang shell with erl, I add a logger handler like so
4> Config = #{config => #{file => "./info.log"}, level => info}.
#{config => #{file => "./info.log"},level => info}
5> logger:add_handler(myhandler, logger_std_h, Config).
ok
Then if I try to log a warning message with logger:warning("foo"). it shows up in the log file. But if I try an info message with logger:info("foo"). it does not, even though the log level has explicitly been set to info in the Config.

erl needs to be started with a kernel parameter to do this: the handler level is only the second filter, and the primary log level defaults to notice, so info messages are dropped before they ever reach the handler.
Starting it with erl -kernel logger_level info works as expected.
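The same effect can be had at runtime: logger:set_primary_config/2 raises the primary level without restarting the node. A minimal sketch:
%% Sketch: raise the primary log level at runtime instead of passing
%% -kernel logger_level info on the command line.
logger:set_primary_config(level, info).
logger:info("foo").  %% now passes the primary filter and reaches the handler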

Related

Erlang OTP 21 How to disable logger console output

According to the Erlang documentation it is possible to configure the logger for file output only. From the Erlang documentation:
Modify the default handler to print to a file instead of standard_io:
[{kernel,
  [{logger,
    [{handler, default, logger_std_h,  % {handler, HandlerId, Module, Config}
      #{config => #{file => "log/erlang.log"}}}
    ]}]}].
My config file looks like:
[{kernel, [
    {logger_level, error},
    {logger,
        [{handler, default, logger_std_h,
            #{config => #{file => "log/erlang.log"}}}
        ]}
]}].
i.e. exactly as in the documentation, with the log level set to error.
When I start the application using rebar3 shell, the following is printed to the console:
===> The rebar3 shell is a development tool; to deploy applications in production, consider using releases (http://www.rebar3.org/docs/releases)
2019-04-10T15:14:33.363287+02:00 info: application: syntax_tools, started_at: 'node@x201'
2019-04-10T15:14:33.371897+02:00 info: application: compiler, started_at: 'node@x201'
2019-04-10T15:14:33.379962+02:00 info: supervisor: {local,gr_sup}, started: [{pid,<0.1181.0>},{id,gr_counter_sup},{mfargs,{gr_counter_sup,start_link,[]}},{restart_type,permanent},{shutdown,5000},{child_type,supervisor}]
2019-04-10T15:14:33.382379+02:00 info: supervisor: {local,gr_sup}, started: [{pid,<0.1182.0>},{id,gr_param_sup},{mfargs,{gr_param_sup,start_link,[]}},{restart_type,permanent},{shutdown,5000},{child_type,supervisor}]
So, OTP progress messages are not redirected to the file but are still printed to the console, and the file erlang.log is empty.
How to disable console output and redirect all messages to the file?
P.S. Erlang version:
Erlang/OTP 21 [erts-10.3.2] [source] [64-bit] [smp:4:4] [ds:4:4:10] [async-threads:1] [hipe]
After fighting with it for a while I found the problem:
if a parameter is wrong, Logger simply does not create the file.
It looks like the documentation is incomplete. It shows:
[{kernel,
  [{logger,
    [{handler, default, logger_std_h,  % {handler, HandlerId, Module, Config}
      #{config => #{file => "log/erlang.log"}}}
    ]}]}].
#{config => #{file => "log/erlang.log"}}
should be changed to:
#{config => #{type => {file, "log/erlang.log"}}}
[{kernel,
  [{logger,
    [{handler, default, logger_std_h,  % {handler, HandlerId, Module, Config}
      #{config => #{type => {file, "log/erlang.log"}}}}
    ]}]}].
Here: Complete example of logger configuration file
To disable the default (console) handler entirely, set it to undefined, like this:
{logger, [
  {handler, default, undefined}
]}
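Putting the pieces together, here is a complete sys.config sketch (my own assembly of the corrected snippets above) that logs errors only and writes them to a file instead of the console, using the type => {file, ...} form that actually creates the file on this OTP version:
[{kernel, [
    {logger_level, error},
    {logger, [
        %% replace the default (console) handler with a file handler
        {handler, default, logger_std_h,
            #{config => #{type => {file, "log/erlang.log"}}}}
    ]}
]}].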

Error when starting the application with the cowboy example ('noproc', ranch_listener_sup)

I'm trying to run this cowboy example using rebar3:
cowboy version 2.0.0-pre.5
What I do is:
rebar3 new app hello_world
copy the example src into my src
update rebar.config with {cowboy, ".*", {git, "https://github.com/ninenines/cowboy", {branch, "master"}}}
rebar3 compile. Everything goes fine.
erl -pa _build/default/lib/*/ebin
application:start(hello_world).
then error occurs
{error,{bad_return,{{hello_world_app,start,[normal,[]]},
{'EXIT',{noproc,{gen_server,call,
[ranch_sup,
{start_child,{{ranch_listener_sup,http},
{ranch_listener_sup,start_link,
[http,100,ranch_tcp,
[{connection_type,supervisor},{port,...}],
cowboy_clear,
#{connection_type => supervisor,...}]},
permanent,infinity,supervisor,
[ranch_listener_sup]}},
infinity]}}}}}}
=INFO REPORT==== 24-Jan-2017::18:34:52 ===
application: hello_world
exited: {bad_return,
{{hello_world_app,start,[normal,[]]},
{'EXIT',
{noproc,
{gen_server,call,
[ranch_sup,
{start_child,
{{ranch_listener_sup,http},
{ranch_listener_sup,start_link,
[http,100,ranch_tcp,
[{connection_type,supervisor},
{port,8080}],
cowboy_clear,
#{connection_type => supervisor,
env => #{dispatch => [{'_',[],
[{[],[],toppage_handler,
[]}]}]}}]},
permanent,infinity,supervisor,
[ranch_listener_sup]}},
infinity]}}}}}
type: temporary
It seems that ranch_sup can't start.
What's wrong with my approach?
I want to run exactly the same src code as in example.
Ranch 1.3 depends on the ssl application by default. If you don't
start it, Ranch fails to start. I would recommend matching on ok when
starting applications; had you written ok = application:start(App),
you'd have known the issue much quicker.
by essen
here is the issue
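As a sketch of essen's advice: let application:ensure_all_started/1 start the whole dependency chain (ssl included) and match on the result so a failure surfaces immediately; hello_world is the app name from the question:
%% Sketch: starts ssl, ranch, cowboy and the rest in dependency order;
%% a badmatch here points straight at the app that failed to start.
{ok, Started} = application:ensure_all_started(hello_world),
io:format("started: ~p~n", [Started]).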

How to configure httpc profiles with rebar3?

How can I set configuration options for httpc's profiles when using rebar3?
The only example I have found uses erl -config inets.config and looks like this:
[{inets,
  [{services, [{httpc, [{profile, server1}]},
               {httpc, [{profile, server2}]}]}]
}].
I tried adapting it to my rebar3 project structure.
Code
Project was created with rebar3, with standard OTP layout:
rebar3 new release myapp
Here is my myapp/config/sys.config:
[
  {myapp, []},
  {inets, [{services, [{httpc, [{profile, myapp}]}]}]}
].
rebar.config:
{erl_opts, [debug_info]}.
{deps, []}.

{relx, [{release, {myapp, "0.1.0"},
         [myapp,
          sasl]},
        {sys_config, "./config/sys.config"},
        {vm_args, "./config/vm.args"},
        {dev_mode, true},
        {include_erts, false},
        {extended_start_script, true}]
}.

{profiles, [{prod, [{relx, [{dev_mode, false},
                            {include_erts, true}]}]
            }]
}.
Here is my myapp.app.src file for completeness:
{application, myapp,
 [{description, "An OTP application"},
  {vsn, "0.1.0"},
  {registered, []},
  {mod, {myapp_app, []}},
  {applications,
   [kernel,
    stdlib
   ]},
  {env, []},
  {modules, []},
  {maintainers, []},
  {licenses, []},
  {links, []}
 ]}.
Requests
Here is a request I'm trying to make from rebar3's shell:
$ ./rebar3 shell
1> ===> Booted myapp
1> ===> Booted sasl
...
1> httpc:request( "http://reddit.com", myapp).
** exception exit: {noproc,
{gen_server,call,
[httpc_myapp,
{request,
{request,undefined,<0.88.0>,0,http,
{"reddit.com",80},
"/",[],get,
{http_request_h,undefined,"keep-alive",undefined,
undefined,undefined,undefined,undefined,undefined,
undefined,...},
{[],[]},
{http_options,"HTTP/1.1",infinity,true,
{essl,[]},
undefined,false,infinity,...},
"http://reddit.com",[],none,[],1478280329839,
undefined,undefined,false}},
infinity]}}
in function gen_server:call/3 (gen_server.erl, line 212)
in call from httpc:handle_request/9 (httpc.erl, line 574)
Here is the request without a profile, to check that inets actually works:
2> httpc:request( "http://reddit.com").
=PROGRESS REPORT==== 4-Nov-2016::13:25:51 ===
supervisor: {local,inet_gethost_native_sup}
started: [{pid,<0.107.0>},{mfa,{inet_gethost_native,init,[[]]}}]
=PROGRESS REPORT==== 4-Nov-2016::13:25:51 ===
supervisor: {local,kernel_safe_sup}
started: [{pid,<0.106.0>},
{id,inet_gethost_native_sup},
{mfargs,{inet_gethost_native,start_link,[]}},
{restart_type,temporary},
{shutdown,1000},
{child_type,worker}]
{ok,{{"HTTP/1.1",200,"OK"},...
rebar3 itself uses the inets HTTP client, so when it starts your application in the shell, inets is already started and configured. One workaround is to stop inets before your application starts, as suggested by a rebar3 developer (copied below). Another is to boot your release in console mode:
./_build/default/rel/myapp/bin/myapp console
Besides that, there is another problem with your project: you have not declared that you want inets started for you. You should have this line in myapp.app.src:
{applications, [kernel, stdlib, inets]}
Or you can list inets in the rebar.config release section, to tell relx that this app should be included in the release and started on boot:
{relx, [{release, {myapp, "0.1.0"}, [inets, myapp, sasl]}]}
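A quick sanity check, as a sketch (httpc_myapp is the registered name implied by the exception above): once inets is started on boot with that sys.config applied, the profile's manager process should exist and the profiled request should work:
%% Sketch: run in the release console started above.
true = is_pid(whereis(httpc_myapp)),   %% profile manager is running
{ok, _Response} = httpc:request("http://reddit.com", myapp).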
Stop Inets from Loading on rebar3 shell startup
Here is a copy of the full answer by Fred Hebert from the rebar3 mailing list:
We do need inets for package fetching and will likely not turn it off
automatically for all use cases as this could compromise general usage
of the rebar3 agent in case where the user application does not use
inets, but still asks to fetch packages in a subsequent run. The
workaround I would suggest would be to use a hook script for it. Hook
scripts run before we boot user applications and are regular escripts:
#!/usr/bin/env escript
main(_) -> application:stop(inets).
You can then hook the script in with:
{shell, [{script_file, "path/to/file"}]}
in rebar.config, or with
rebar3 shell --script_file test/check_env.escript
I can't find anything in the documentation to suggest that rebar.config can contain the application configuration you want.
Instead, application configuration is often kept in a configuration directory in your application, and as the example you gave shows, you must use the -config flag when launching erl to point to the configuration file.
If you use rebar3 to make a release and start your service with the script generated by relx (from _build/default/rel/<release_name>/bin/<release_name> by default), the application configuration file is passed to erl for you. If a config file exists in your application directory, by default at config/sys.config, it will be used as the application configuration; otherwise an empty configuration will be generated. You can customize its path with relx's release option sys_config.
For our software, we typically had a single config/sys.config file. The structure is the same as the config sample you have provided. Note that the configuration file can contain settings for many different services. For example, mixing inets with one of ours:
[{inets, [
    {services, [{httpc, [{profile, server1}]},
                {httpc, [{profile, server2}]}]}
 ]},
 {my_app, [
    {my_setting, "my value"}
 ]}
].
We launch with erl -config config/sys.config.
This way if we need to set service configuration we can simply update our configuration file, which also houses the configuration specific to this application.
As far as I'm aware, this is the correct method to use. I have not been able to find documentation supporting any other way of doing this.
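For completeness, a profile can also be started programmatically with inets:start/2, bypassing the config file entirely; a hedched sketch, reusing the profile name from the question:
%% Sketch: start an extra httpc profile at runtime.
{ok, _Pid} = inets:start(httpc, [{profile, myapp}]),
{ok, _Result} = httpc:request("http://reddit.com", myapp).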

puppet tomcat6 service does not receive environment variables

I am using Debian OS and tomcat6.
I export CATALINA_OPTS="-Xms1024m -Xmx2048m" environment variable and create a puppet service:
class tomcat6::service {
  service { 'tomcat6':
    ensure     => running,
    hasstatus  => true,
    hasrestart => true,
    enable     => true,
  }
}
As /usr/share/tomcat6/bin/catalina.sh reads the CATALINA_OPTS variable when starting the tomcat6 service, the process should receive CATALINA_OPTS, but it does not show up in the process command line. I execute ps aux | grep catalina to show the command detail:
tomcat6 10658 1.0 2.0 2050044 189572 ? Sl 18:04 0:16 /usr/lib/jvm/default- java/bin/java -Djava.util.logging.config.file=/var/lib/tomcat6/conf/logging.properties -Djava.awt.headless=true -Xmx128m -XX:+UseConcMarkSweepGC -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager -Djava.endorsed.dirs=/usr/share/tomcat6/endorsed -classpath /usr/share/tomcat6/bin/bootstrap.jar -Dcatalina.base=/var/lib/tomcat6 -Dcatalina.home=/usr/share/tomcat6 -Djava.io.tmpdir=/tmp/tomcat6-tomcat6-tmp org.apache.catalina.startup.Bootstrap start
The puppet-managed service does not receive CATALINA_OPTS properly.
My question is: how can I make the tomcat6 service pick up CATALINA_OPTS when it is started by puppet?
Thank you.
Instead of
hasstatus => true,
put
hasstatus => false,
By doing this, you will force puppet to look at the process table to find the daemon; in other words, this makes puppet run ps auxw | grep tomcat6 before doing anything else.
hasstatus => true means that if puppet receives a status != running it will act as directed, but in some cases daemons don't report their status correctly (probably due to the multiple threading involved).
I fixed the issue by setting CATALINA_OPTS in setenv.sh for tomcat6. It works properly.

How to always log/show the error reason when a supervisor child returns error from start_link?

When starting gen_servers from a supervisor (which itself is started by an application), I have the problem that when the gen_server's start_link doesn't return {ok, ...} but {error, Reason}, the only error message I see is:
=INFO REPORT==== 20-Jan-2011::13:14:43 ===
application: foo
exited: {shutdown,{foo_app,start,[normal,[]]}}
type: temporary
The Reason for terminating is not shown/logged.
Is there a way to see/log these error returns to the supervisor?
The childspec I'm using is e.g.:
{ok, {{one_for_one, 3, 10}, ...
{usb_mux_1,
{usb_mux, start_link,
[Some_Params]},
permanent,
10000,
worker,
[usb_mux]}, ...
Edit: Clarification
I know about error_logger and am using it already. The question is not how to get something logged, but how to get the supervisor to log the reason for the termination, i.e. which child terminated with an error return and what it returned.
And just to get this out of the way as well: yes, I start Erlang with SASL on:
-boot start_sasl
Just discovered the answer myself:
The supervisor really is logging the error exit as a crash report.
The problem is that the shell doesn't show these crash reports. Confusingly, it does show info/warning and error reports, but no progress reports or crash reports from the supervisor.
If I look at the on-disk log, there is a detailed crash report:
10> rb:show(4).
CRASH REPORT <0.53.0> 2011-01-20 17:33:52
===============================================================================
Crashing process
initial_call {usb_mux,init,['Argument__1']}
pid <0.53.0>
registered_name []
error_info
{exit,{undef,[{usb_port,get_gw_hw_spec,[<0.59.0>]},
...
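For reference, here is a minimal rb sketch for browsing those on-disk reports, assuming the report directory from the -config file below ("./log"):
rb:start([{report_dir, "./log"}]),  %% point rb at the multi-file report logs
rb:list(),                          %% list all stored reports
rb:show(4).                         %% show report number 4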
The reason the SASL events were not shown on screen was an omission in the -config file, which looked like this:
[{sasl, [
    {sasl_error_logger, false},           %% no SASL error logger installed
    {error_logger_mf_dir, "./log"},
    {error_logger_mf_maxbytes, 10485760}, % 10 MB
    {error_logger_mf_maxfiles, 10}
]}].
Meaning there was a multi-file error logger installed (all the error_logger_mf_* entries) but no on-screen logger for SASL events.
Changing the entry like this fixed it:
{sasl_error_logger, tty}, %% SASL reports to tty
From the sasl manpage:
sasl_report_tty_h:
Formats and writes supervisor reports, crash reports, and progress reports to stdio.
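So the full corrected -config file, keeping the multi-file logs and adding the tty logger, looks like this (a sketch assembled from the entries above):
[{sasl, [
    {sasl_error_logger, tty},             %% SASL reports to tty
    {error_logger_mf_dir, "./log"},
    {error_logger_mf_maxbytes, 10485760}, %% 10 MB per file
    {error_logger_mf_maxfiles, 10}
]}].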
