How to Choose a Port Number?

I'm writing a program which uses ZeroMQ to communicate with other programs running on the same machine. I want to choose a port number at run time to avoid the possibility of collisions. Here is a piece of code I wrote to accomplish this.
#!/usr/bin/perl -Tw
use strict;
use warnings;

my %in_use;
{
    local $ENV{PATH} = '/bin:/usr/bin';
    %in_use = map { $_ => 1 } split /\n/, qx(
        netstat -aunt          |\
        awk '{print \$4}'      |\
        grep :                 |\
        awk -F: '{print \$NF}'
    );
}
my ($port) = grep { not $in_use{$_} } 50_000 .. 59_999;
print "$port is available\n";
The procedure is:
invoke netstat -aunt
parse the result
choose the first port in a fixed range which doesn't appear in the netstat output.
Is there a system utility better suited to accomplishing this?

If you are using pyzmq, the library can do this for you: bind_to_random_port binds the socket to a random free port in the given range and returns the port number it chose.
context = zmq.Context()
socket = context.socket(zmq.ROUTER)
port_selected = socket.bind_to_random_port('tcp://*', min_port=6001, max_port=6004, max_tries=100)

First of all, from your original code it looked like you were trying to choose a port between 70000 and 79999. You do know that port numbers only go up to 65535, right? :-)
You can certainly do it this way, though there are a couple of problems with the approach. The first problem is that netstat output differs between operating systems, so it's hard to do portably. The second problem is that you still need to wrap the code in a loop that retries with a new port number in case binding to the chosen port fails: there's a race condition between ascertaining that the port is free and actually binding to it.
If the library you are using allows you to specify the port number as 0 and allows you to call getsockname() on the socket after it is bound, then you should just do that. Using 0 makes the system choose any free port number, and with getsockname() you can find out which port it chose.
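For illustration, here is a minimal sketch of that pattern with a plain TCP socket in Python (not ZeroMQ-specific; the address and names are mine):

import socket

# Port 0 asks the OS to pick any free port; getsockname() reveals which
# one it actually chose.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(('127.0.0.1', 0))
port = s.getsockname()[1]
print(f'OS assigned port {port}')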
Failing that, it would probably be more efficient to skip netstat and just try to bind to different port numbers in a loop: if you succeed, break out of the loop; if you fail, increment the port number by 1 and try again.
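A rough sketch of that loop in Python, assuming the 50000-59999 range from the question (find_free_port is a name I made up):

import socket

def find_free_port(start=50000, end=59999):
    # Try each port in turn; the first successful bind wins.
    for port in range(start, end + 1):
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        try:
            s.bind(('127.0.0.1', port))
        except OSError:
            s.close()  # port already in use; try the next one
            continue
        return s, port  # keep the socket open so the port stays reserved
    raise RuntimeError('no free port in range')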

Related

Configure Prometheus alerting rules combining the status of 2 different instances

I'm trying to configure an alert in the Prometheus Alertmanager that fires when the status of 2 different hosts is down.
To better explain, I have these pairs of hosts (host=instance):
host1.1
host1.2
host2.1
host2.2
host3.1
host3.2
host4.1
host4.2
...
and I need an alert that fires when both hosts of the SAME pair are DOWN:
expr = ( icmpping{instance=~"hostX1"}==0 and icmpping{instance=~"hostX2"}==0 )
(I know that the syntax is not correct, I just wanted to underline that X refers to the same number in both icmpping conditions)
Any hint?
The easiest way is perhaps to generate a label at ingestion time reflecting this logic, using relabel_config
relabel_configs:
  - source_labels: [host]
    regex: ^(.*)\.\d+$
    target_label: host_group
It will generate the label you need for matching:
host=host1.1 => host_group=host1
host=host1.2 => host_group=host1
You can then use it for your alerting rules.
sum by(host_group) (icmpping) == 0
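For instance, a complete alerting rule file could look roughly like this (the group name, alert name, and for: duration are placeholders of mine):

groups:
  - name: host_pairs
    rules:
      - alert: HostPairDown
        expr: sum by(host_group) (icmpping) == 0
        for: 5m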
If this is not possible, you can use label_replace to achieve the same result (it works only on instant vectors):
sum by(host_group) (label_replace(icmpping, "host_group", "$1", "host", "(.*)\\.\\d+")) == 0

DTrace built-in variable stackdepth always returns 0

I have recently been using DTrace to analyze my iOS app.
Everything goes well except when I try to use the built-in variable stackdepth.
I read the documentation here, which introduces the built-in variable stackdepth.
So I wrote some D code:
pid$target:::entry
{
    self->entry_times[probefunc] = timestamp;
}

pid$target:::return
{
    printf("-----------------------------------\n");
    this->delta_time = timestamp - self->entry_times[probefunc];
    printf("%s\n", probefunc);
    printf("stackdepth %d\n", stackdepth);
    printf("%d---%d\n", this->delta_time, epid);
    ustack();
    printf("-----------------------------------\n");
}
And ran it with sudo dtrace -s temp.d -c ./simple.out. The ustack() function works fine, but stackdepth always appears to be 0.
I tried it both on my iOS app and on a simple C program.
So does anybody know what's going on?
And how do I get the stack depth when the probe fires?
You want to use ustackdepth -- the user-land stack depth.
The stackdepth variable refers to the kernel thread stack depth; the ustackdepth variable refers to the user-land thread stack depth. When the traced program is executing in user-land, stackdepth will (should!) always be 0.
ustackdepth is calculated using the same logic as is used to walk the user-land stack as with ustack() (just as stackdepth and stack() use similar logic for the kernel stack).
This seems like a bug in the Mac / iOS implementation of DTrace to me.
However, since you're already probing every function entry and return, you could just keep a new variable self->depth and do ++ in the :::entry probe and -- in the :::return probe. This doesn't work quite right if you run it against optimized code, because any tail-call-optimized functions may look like they enter but never return. To solve that, you can turn off optimizations.
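A minimal sketch of that manual counter (self->depth is a name introduced here, not a DTrace built-in):

/* Count stack depth by hand: increment on entry, decrement on return. */
pid$target:::entry
{
    self->depth++;
}

pid$target:::return
{
    self->depth--;
    printf("%s returned at depth %d\n", probefunc, self->depth);
}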
Also, because what you're doing looks a lot like this, I thought maybe you would be interested in the -F option:
Coalesce trace output by identifying function entry and return.
Function entry probe reports are indented and their output is prefixed
with ->. Function return probe reports are unindented and their output
is prefixed with <-.
The normal script to use with -F is something like:
pid$target::some_function:entry { self->trace = 1 }
pid$target:::entry /self->trace/ {}
pid$target:::return /self->trace/ {}
pid$target::some_function:return { self->trace = 0 }
Where some_function is the function whose execution you want to be printed. The output shows a textual call graph for that execution:
-> some_function
  -> another_function
    -> malloc
    <- malloc
  <- another_function
  -> yet_another_function
    -> strcmp
    <- strcmp
    -> malloc
    <- malloc
  <- yet_another_function
<- some_function

Configuring zabbix to monitor ping from a server

I am new to Zabbix. I would like to monitor the ping from my server, and I want to activate a trigger if the host stops responding to pings or the ping time exceeds 20 milliseconds.
I don't know how to configure the trigger expression to suit my needs. Please help. Thanks.
I used
type -> Simple check
key -> icmppingsec
Type of information -> Numeric(Float)
Units -> s
Flexible interval -> 10 secs, from 7:00-24:00
According to the simple check documentation, the icmppingsec item returns the ping time in seconds, or 0 if the host is not reachable. Therefore, your trigger can be as follows:
{Template ICMP Ping:icmppingsec.avg(5m)} > 0.020 |
{Template ICMP Ping:icmppingsec.max(5m)} = 0
If you are using at least Zabbix 2.4, you should use or instead of | (see What's new in Zabbix 2.4.0).
Note also that there is no point in using a "1-7,00:00-24:00" flexible interval. You can just put "10" into the "Update interval (in sec)" field.

Can I get a list of all currently-registered atoms?

My project has blown through the max 1M atoms, we've cranked up the limit, but I need to apply some sanity to the code that people are submitting with regard to list_to_atom and its friends. I'd like to start by getting a list of all the registered atoms so I can see where the largest offenders are. Is there any way to do this? I'll have to be creative about how I do it so I don't end up trying to dump 1-2M lines in a live console.
You can get hold of all atoms by using an undocumented feature of the external term format.
TL;DR: Paste the following line into the Erlang shell of your running node. Read on for explanation and a non-terse version of the code.
(fun F(N)->try binary_to_term(<<131,75,N:24>>) of A->[A]++F(N+1) catch error:badarg->[]end end)(0).
Elixir version by Ivar Vong:
for i <- 0..:erlang.system_info(:atom_count)-1, do: :erlang.binary_to_term(<<131,75,i::24>>)
An Erlang term encoded in the external term format starts with the byte 131, then a byte identifying the type, and then the actual data. I found that EEP-43 mentions all the possible types, including ATOM_INTERNAL_REF3 with type byte 75, which isn't mentioned in the official documentation of the external term format.
For ATOM_INTERNAL_REF3, the data is an index into the atom table, encoded as a 24-bit integer. We can easily create such a binary: <<131,75,N:24>>
For example, in my Erlang VM, false seems to be the zeroth atom in the atom table:
> binary_to_term(<<131,75,0:24>>).
false
There's no simple way to find the number of atoms currently in the atom table*, but we can keep increasing the number until we get a badarg error.
So this little module gives you a list of all atoms:
-module(all_atoms).
-export([all_atoms/0]).

atom_by_number(N) ->
    binary_to_term(<<131,75,N:24>>).

all_atoms() ->
    atoms_starting_at(0).

atoms_starting_at(N) ->
    try atom_by_number(N) of
        Atom ->
            [Atom] ++ atoms_starting_at(N + 1)
    catch
        error:badarg ->
            []
    end.
The output looks like:
> all_atoms:all_atoms().
[false,true,'_',nonode@nohost,'$end_of_table','','fun',
infinity,timeout,normal,call,return,throw,error,exit,
undefined,nocatch,undefined_function,undefined_lambda,
'DOWN','UP','EXIT',aborted,abs_path,absoluteURI,ac,accessor,
active,all|...]
> length(v(-1)).
9821
* In Erlang/OTP 20.0, you can call erlang:system_info(atom_count):
> length(all_atoms:all_atoms()) == erlang:system_info(atom_count).
true
I'm not sure if there's a way to do it on a live system, but if you can run it in a test environment you should be able to get a list via crash dump. The atom table is near the end of the crash dump format. You can create a crash dump via erlang:halt/1, but that will bring down the whole runtime system.
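For example (a sketch; this halts the node, so only run it on a throwaway test node):

%% Passing a string to erlang:halt/1 makes the VM write an erl_crash.dump
%% (with that string as the slogan) before exiting.
erlang:halt("dumping atom table").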
I dare say that if you use more than 1M atoms, then you are doing something wrong. Atoms are intended to be static once the application is running, or at least bounded by some small number, 3000 or so for a medium-sized application.
Be very careful when an attacker can generate atoms in your VM; calls like list_to_atom/1 are somewhat dangerous.
EDIT (the original version of this answer was wrong):
You can adjust the maximum number of atoms with the +t emulator flag:
http://www.erlang.org/doc/efficiency_guide/advanced.html
...but I know of very few use cases where it is necessary.
You can track atom stats with erlang:memory().
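For instance (a small sketch; both keys are documented arguments to erlang:memory/1):

%% Bytes allocated for the atom table vs. bytes actually used for atoms.
erlang:memory(atom).
erlang:memory(atom_used).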

Parsing Get-Counter data to just get the values

I would like to get the total number of bytes that my computer has sent/received over some interval.
PowerShell has a handy cmdlet that gives me access to that information (Qualcomm Atheros AR9285 Wireless Network Adapter is the name of my interface):
Get-Counter -Counter "\Network Interface(Qualcomm Atheros AR9285 Wireless Network Adapter)\Bytes received/sec" -Continuous
It gets the data just fine, but the output is cluttered with the full counter path.
The closest I could get to having it come out the way I wanted was using Get-Counter -Counter "\Network Interface(Qualcomm Atheros AR9285 Wireless Network Adapter)\Bytes received/sec" -Continuous | Format-Table -Property Readings, but that still had a long path name in front of the value I want.
Additionally, if I try setting the output to be some variable, no assignment ever gets done (variable stays null).
How can I get this data in a decent format?
Note: The goal is to keep a running tally of this information, and then do something when it reaches a threshold. I can do that if I can get the above to work, but if there is a better way to do it, I am more than happy to use that instead.
Pipe to Foreach-Object and get the value from the CounterSamples property. CounterSamples is an array and the value is in the first item:
Get-Counter -Counter "\Network Interface(Qualcomm Atheros AR9285 Wireless Network Adapter)\Bytes received/sec" -Continuous |
Foreach-Object {$_.CounterSamples[0].CookedValue}
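For the running-tally goal from the question, something along these lines could work (the $total variable, the 100MB threshold, and the Write-Warning action are placeholders of mine):

$total = 0
Get-Counter -Counter "\Network Interface(Qualcomm Atheros AR9285 Wireless Network Adapter)\Bytes received/sec" -Continuous |
    ForEach-Object {
        # Samples arrive once per second by default, so summing the
        # per-second rates approximates total bytes received.
        $total += $_.CounterSamples[0].CookedValue
        if ($total -gt 100MB) {
            Write-Warning "Threshold reached: $total bytes received"
        }
    }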
