Fairly straightforward question: is it possible to trace messages arriving in (the mailbox of) a process/GenServer? Note, this is different from tracing when a message is received (which would be once it leaves the mailbox and is handled). I haven't found a way of doing this so far.
In Erlang you have flags for it in dbg:p/2: s for sending and r for receiving:
1> dbg:tracer().
{ok,<0.82.0>}
2> dbg:p(self(), r).
(<0.80.0>) << {dbg,{ok,[{matched,nonode@nohost,1}]}}
(<0.80.0>) << {io_reply,#Ref<0.2586582558.1779957764.183997>,319}
{ok,[{matched,nonode@nohost,1}]}
(<0.80.0>) << {io_reply,#Ref<0.2586582558.1779957764.184000>,
[{expand_fun,#Fun<group.0.82824323>},
{echo,true},
{binary,false},
{encoding,latin1}]}
(<0.80.0>) << {io_reply,#Ref<0.2586582558.1779957764.184002>,ok}
3> self() ! trace_me.
(<0.80.0>) << {shell_cmd,<0.73.0>,
{eval,[{op,{1,8},
'!',
{call,{1,1},{atom,{1,1},self},[]},
{atom,{1,10},trace_me}}]},
cmd}
(<0.80.0>) << trace_me
(<0.80.0>) << {io_reply,#Ref<0.2586582558.1779957764.184006>,319}
trace_me
(<0.80.0>) << {io_reply,#Ref<0.2586582558.1779957764.184008>,
[{expand_fun,#Fun<group.0.82824323>},
{echo,true},
{binary,false},
{encoding,latin1}]}
(<0.80.0>) << {io_reply,#Ref<0.2586582558.1779957764.184011>,ok}
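The same flags are available from Elixir through the :dbg module; a minimal sketch (assuming pid is the pid of the GenServer you want to watch):

# Trace messages as they arrive in the mailbox (the :r flag),
# independently of when the process actually handles them.
:dbg.tracer()
:dbg.p(pid, :r)

# Add :s as well if you also want to see messages the process sends.
:dbg.p(pid, [:s, :r])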
Related
I've been trying to instantiate a GenServer process that will subscribe to PubSub in the Phoenix framework. These are my files and errors:
config.ex:
config :excalibur, Excalibur.Endpoint,
  pubsub: [
    adapter: Phoenix.PubSub.PG2,
    name: Excalibur.PubSub
  ]
Module using GenServer:
defmodule RulesEngine.Router do
  import RulesEngine.Evaluator
  use GenServer

  alias Excalibur.PubSub

  def start_link(_) do
    GenServer.start_link(__MODULE__, name: __MODULE__)
  end

  def init(_) do
    {:ok, {Phoenix.PubSub.subscribe(PubSub, :evaluator)}}
    IO.puts("subscribed")
  end

  # Callbacks
  def handle_info(%{}) do
    IO.puts("received")
  end

  def handle_call({:get, key}, _from, state) do
    {:reply, Map.fetch!(state, key), state}
  end
end
and when I do iex -S mix this error happens:
** (Mix) Could not start application excalibur: Excalibur.Application.start(:normal, []) returned an error: shutdown: failed to start child: RulesEngine.Router
** (EXIT) an exception was raised:
** (FunctionClauseError) no function clause matching in Phoenix.PubSub.subscribe/2
(phoenix_pubsub 1.1.2) lib/phoenix/pubsub.ex:151: Phoenix.PubSub.subscribe(Excalibur.PubSub, :evaluator)
(excalibur 0.1.0) lib/rules_engine/router.ex:11: RulesEngine.Router.init/1
(stdlib 3.12.1) gen_server.erl:374: :gen_server.init_it/2
(stdlib 3.12.1) gen_server.erl:342: :gen_server.init_it/6
(stdlib 3.12.1) proc_lib.erl:249: :proc_lib.init_p_do_apply/3
So what is the correct way to spin up a GenServer that will subscribe to the PubSub topic?
According to the documentation, Phoenix.PubSub.subscribe/3 has the following spec:
@spec subscribe(t(), topic(), keyword()) :: :ok | {:error, term()}
where @type topic :: binary(). That said, your init/1 should look like
def init(_) do
  {:ok, {Phoenix.PubSub.subscribe(PubSub, "evaluator")}}
  |> IO.inspect(label: "subscribed")
end
Please note I also altered IO.puts/1 to IO.inspect/2 to preserve the returned value.
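Note also that handle_info/1 as written will crash once a PubSub message arrives: GenServer calls handle_info/2 with the message and the state, and expects a {:noreply, state} style return. A minimal sketch of the whole module, assuming an empty map as the initial state, could look like:

defmodule RulesEngine.Router do
  use GenServer

  alias Excalibur.PubSub

  def start_link(_) do
    GenServer.start_link(__MODULE__, %{}, name: __MODULE__)
  end

  def init(state) do
    # subscribe/2 wants a binary topic, not an atom
    :ok = Phoenix.PubSub.subscribe(PubSub, "evaluator")
    {:ok, state}
  end

  def handle_info(message, state) do
    # handle_info/2 receives the message and the current state
    IO.inspect(message, label: "received")
    {:noreply, state}
  end

  def handle_call({:get, key}, _from, state) do
    {:reply, Map.fetch!(state, key), state}
  end
end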
How do I print to multiple files simultaneously in Julia? Is there a cleaner way than:
for f in [open("file1.txt", "w"), open("file2.txt", "w")]
    write(f, "content")
    close(f)
end
From your question I assume that you do not mean writing in parallel (which probably would not speed things up, as the operation is most likely IO bound).
Your solution has one small problem - it does not guarantee that f is closed if write throws an exception.
Here are three alternative ways to do it making sure the file is closed even on error:
for fname in ["file1.txt", "file2.txt"]
    open(fname, "w") do f
        write(f, "content")
    end
end

for fname in ["file1.txt", "file2.txt"]
    open(f -> write(f, "content"), fname, "w")
end

foreach(fn -> open(f -> write(f, "content"), fn, "w"),
        ["file1.txt", "file2.txt"])
They give the same result so the choice is a matter of taste (you can derive some more variations of similar implementations).
All the methods are based on the following method of the open function:
open(f::Function, args...; kwargs...)
Apply the function f to the result of open(args...; kwargs...)
and close the resulting file descriptor upon completion.
Observe that the processing will still be terminated if an exception is actually thrown somewhere (it is only guaranteed that the file descriptor will be closed). In order to ensure that every write operation is actually attempted you can do something like:
for fname in ["file1.txt", "file2.txt"]
    try
        open(fname, "w") do f
            write(f, "content")
        end
    catch ex
        # here decide what should happen on error
        # you might want to investigate the value of ex here
    end
end
See https://docs.julialang.org/en/latest/manual/control-flow/#The-try/catch-statement-1 for the documentation of try/catch.
If you really want to write in parallel (using multiple processes) you can do it as follows:
using Distributed
addprocs(4) # using, say, 4 worker processes

function ppwrite()
    @sync @distributed for i in 1:10
        open("file$(i).txt", "w") do f
            write(f, "content")
        end
    end
end
For comparison, the serial version would be
function swrite()
    for i in 1:10
        open("file$(i).txt", "w") do f
            write(f, "content")
        end
    end
end
On my machine (ssd + quadcore) this leads to a ~70% speedup:
julia> @btime ppwrite();
  3.586 ms (505 allocations: 25.56 KiB)

julia> @btime swrite();
  6.066 ms (90 allocations: 6.41 KiB)
However, be aware that these timings might drastically change for real content, which might have to be transferred to different processes. Also they probably won't scale as IO will typically be the bottleneck at some point.
Update: larger (string) content
julia> using Distributed, Random, BenchmarkTools
julia> addprocs(4);
julia> global const content = [string(rand(1000,1000)) for _ in 1:10];
julia> function ppwrite()
           @sync @distributed for i in 1:10
               open("file$(i).txt", "w") do f
                   write(f, content[i])
               end
           end
       end
ppwrite (generic function with 1 method)
julia> function swrite()
           for i in 1:10
               open("file$(i).txt", "w") do f
                   write(f, content[i])
               end
           end
       end
swrite (generic function with 1 method)
julia> @btime swrite()
  63.024 ms (110 allocations: 6.72 KiB)

julia> @btime ppwrite()
  23.464 ms (509 allocations: 25.63 KiB) # ~2.7x speedup
Doing the same thing with string representations of larger 10000x10000 matrices (3 instead of 10) results in
julia> @time swrite()
  7.189072 seconds (23.60 k allocations: 1.208 MiB)

julia> @time swrite()
  7.293704 seconds (37 allocations: 2.172 KiB)

julia> @time ppwrite();
 16.818494 seconds (2.53 M allocations: 127.230 MiB) # > 2x slowdown of first call

julia> @time ppwrite(); # ~30% slowdown of second call
  9.729389 seconds (556 allocations: 35.453 KiB)
Just to add a coroutine version that does IO in parallel like the multiple-process one, but also avoids the data duplication and transfer.
julia> using Distributed, Random
julia> global const content = [randstring(10^8) for _ in 1:10];
julia> function swrite()
           for i in 1:10
               open("file$(i).txt", "w") do f
                   write(f, content[i])
               end
           end
       end
swrite (generic function with 1 method)
julia> @time swrite()
  1.339323 seconds (23.68 k allocations: 1.212 MiB)

julia> @time swrite()
  1.876770 seconds (114 allocations: 6.875 KiB)
julia> function awrite()
           @sync for i in 1:10
               @async open("file$(i).txt", "w") do f
                   write(f, "content")
               end
           end
       end
awrite (generic function with 1 method)
julia> @time awrite()
  0.243275 seconds (155.80 k allocations: 7.465 MiB)

julia> @time awrite()
  0.001744 seconds (144 allocations: 14.188 KiB)
julia> addprocs(4)
4-element Array{Int64,1}:
2
3
4
5
julia> function ppwrite()
           @sync @distributed for i in 1:10
               open("file$(i).txt", "w") do f
                   write(f, "content")
               end
           end
       end
ppwrite (generic function with 1 method)
julia> @time ppwrite()
  1.806847 seconds (2.46 M allocations: 123.896 MiB, 1.74% gc time)
Task (done) @0x00007f23fa2a8010

julia> @time ppwrite()
  0.062830 seconds (5.54 k allocations: 289.161 KiB)
Task (done) @0x00007f23f8734010
If you only needed to read the files line by line, you could probably do something like this:
for (line_a, line_b) in zip(eachline("file_a.txt"), eachline("file_b.txt"))
    # do stuff
end
This works because eachline returns an iterable EachLine object with an I/O stream linked to it.
I have a logger to print to a file:
logger = Logger.new Rails.root.join 'log/my.log'
How can I print several dots in one line in different logger calls?
i.e., if I do
some_array.each do |el|
  logger.info '.'
  el.some_method
end
These dots will be printed on different lines.
The << method lets you write messages to the log file without any formatting.
For example:
require 'logger'
log = Logger.new "test.log"
log.info "Start"
5.times { log << "." }
log << "\n"
log.info "End"
Produces:
I, [2014-03-10T15:11:04.320495 #3410] INFO -- : Start
.....
I, [2014-03-10T15:11:42.157131 #3410] INFO -- : End
Unfortunately, this doesn't let you write to the same line as previous calls to log.info
You can use Logger#<<(msg)
Dump given message to the log device without any formatting. If no log
device exists, return nil.
some_array.each do |el|
  logger << '.'
  el.some_method
end
And you can still keep it restricted by the logger's level by using .info?
some_array.each do |el|
  logger << '.' if logger.info?
  el.some_method
end
I need to send and receive SMS messages via SMPP. I registered a number and received a systemID and password, but it fails to connect. I added the gem 'ruby-smpp' to the project and used its example gateway, changing only the systemID and password values.
in the logs:
<- (BindTransceiver) len = 37 cmd = 9 status = 0 seq = 1364360797 (<systemID><password>)
Hex dump follows:
<- 00000000: 0000 0025 0000 0009 0000 0000 5152 7e5d   |...%........QR~]
<- 00000010: 3531 3639 3030 0068 4649 6b4b 7d7d 7a00   |<systemID>.<password>.
<- 00000020: 0034 0001 00                              |.4...
Starting enquire link timer (with 10s interval)
Delegate: transceiver unbound
in the console:
Connecting to SMSC ...
MT: Disconnected. Reconnecting in 5 seconds ..
MT: Disconnected. Reconnecting in 5 seconds ..
MT: Disconnected. Reconnecting in 5 seconds ..
MT: Disconnected. Reconnecting in 5 seconds ..
MT: Disconnected. Reconnecting in 5 seconds ..
Please tell me, do I need to add or change something else in the config? Also, as I understand it, an SMPP connection works only from a specific IP address, but the logs on the server and on the local machine are the same.
class Gateway
  include KeyboardHandler

  # MT id counter.
  @@mt_id = 0

  # expose SMPP transceiver's send_mt method
  def self.send_mt(*args)
    @@mt_id += 1
    @@tx.send_mt(@@mt_id, *args)
  end

  def logger
    Smpp::Base.logger
  end

  def start(config)
    # The transceiver sends MT messages to the SMSC. It needs a storage with Hash-like
    # semantics to map SMSC message IDs to your own message IDs.
    pdr_storage = {}

    # Run EventMachine in loop so we can reconnect when the SMSC drops our connection.
    puts "Connecting to SMSC..."
    loop do
      EventMachine::run do
        @@tx = EventMachine::connect(
          config[:host],
          config[:port],
          Smpp::Transceiver,
          config,
          self # delegate that will receive callbacks on MOs and DRs and other events
        )
        print "MT: "
        $stdout.flush

        # Start consuming MT messages (in this case, from the console)
        # Normally, you'd hook this up to a message queue such as Starling
        # or ActiveMQ via STOMP.
        EventMachine::open_keyboard(KeyboardHandler)
      end
      puts "Disconnected. Reconnecting in 5 seconds.."
      sleep 5
    end
  end

  # ruby-smpp delegate methods
  def mo_received(transceiver, pdu)
    logger.info "Delegate: mo_received: from #{pdu.source_addr} to #{pdu.destination_addr}: #{pdu.short_message}"
  end

  def delivery_report_received(transceiver, pdu)
    logger.info "Delegate: delivery_report_received: ref #{pdu.msg_reference} stat #{pdu.stat}"
  end

  def message_accepted(transceiver, mt_message_id, pdu)
    logger.info "Delegate: message_accepted: id #{mt_message_id} smsc ref id: #{pdu.message_id}"
  end

  def message_rejected(transceiver, mt_message_id, pdu)
    logger.info "Delegate: message_rejected: id #{mt_message_id} smsc ref id: #{pdu.message_id}"
  end

  def bound(transceiver)
    logger.info "Delegate: transceiver bound"
  end

  def unbound(transceiver)
    logger.info "Delegate: transceiver unbound"
    EventMachine::stop_event_loop
  end
end
module KeyboardHandler
  include EventMachine::Protocols::LineText2

  def receive_line(data)
    sender, receiver, *body_parts = data.split
    unless sender && receiver && body_parts.size > 0
      puts "Syntax: <sender> <receiver> <message body>"
    else
      body = body_parts.join(' ')
      puts "Sending MT from #{sender} to #{receiver}: #{body}"
      SampleGateway.send_mt(sender, receiver, body)
    end
    prompt
  end

  def prompt
    print "MT: "
    $stdout.flush
  end
end
/initializers
require 'eventmachine'
require 'smpp'
LOGFILE = Rails.root + "log/sms_gateway.log"
Smpp::Base.logger = Logger.new(LOGFILE)
/script
puts "Starting SMS Gateway. Please check the log at #{LOGFILE}"
config = {
  :host => '127.0.0.1',
  :port => 6000,
  :system_id => <SystemID>,
  :password => <Password>,
  :system_type => '', # default given according to SMPP 3.4 Spec
  :interface_version => 52,
  :source_ton => 0,
  :source_npi => 1,
  :destination_ton => 1,
  :destination_npi => 1,
  :source_address_range => '',
  :destination_address_range => '',
  :enquire_link_delay_secs => 10
}
gw = Gateway.new
gw.start(config)
The file under /script is run through rails runner.
Problem solved. Initially the host and port were specified incorrectly, because the SMPP server was not on the same machine as the Rails application. As for encoding, text typed in the Russian (Cyrillic) layout should be sent as
message = text.encode("UTF-16BE").force_encoding("BINARY")
Gateway.send_mt(sender, receiver, message, data_coding: 8)
To receive it in the right form:
message = pdu.short_message.force_encoding('UTF-16BE').encode('UTF-8')
How can I configure the rails logger to output its log strings in another format? I would like to get something that is more informative like:
[Log Level] [Time] [Message]
Debug : 01-20-2008 13:11:03.00 : Method Called
This would really help me when I want to tail my development.log for messages that only come from a certain log level, like debug.
Did some digging and found this post in the RubyOnRails Talk google group.
So I modified it a little bit and put it at the end of my environment.rb:
module ActiveSupport
  class BufferedLogger
    def add(severity, message = nil, progname = nil, &block)
      return if @level > severity
      message = (message || (block && block.call) || progname).to_s
      level = {
        0 => "DEBUG",
        1 => "INFO",
        2 => "WARN",
        3 => "ERROR",
        4 => "FATAL"
      }[severity] || "U"
      message = "[%s: %s #%d] %s" % [level,
                                     Time.now.strftime("%m%d %H:%M:%S"),
                                     $$,
                                     message]
      message = "#{message}\n" unless message[-1] == ?\n
      buffer << message
      auto_flush
      message
    end
  end
end
This results in a format string like this:
[DEBUG: 0121 10:35:26 #57078] Rendered layouts/_header (0.00089)
For Rails 4 apps, I've put together a simple gem that not only adds support for basic tags like time stamp and log level, but also adds color to the log messages themselves.
https://github.com/phallguy/shog
The problem with tags is that they clutter your logs to the point where they are unreadable.
I'd recommend something like timber. It automatically augments your logs with context (level, time, session id, etc) without sacrificing readability.
# config/initializers/rack_logger.rb
module Rails
  module Rack
    class Logger < ActiveSupport::LogSubscriber
      # Add UserAgent
      def started_request_message(request)
        'Started %s "%s" for %s at %s by %s' % [
          request.request_method,
          request.filtered_path,
          request.ip,
          Time.now.to_default_s,
          request.env['HTTP_USER_AGENT'] ]
      end
    end
  end
end
source link