I want to configure all access points in the building to send the received 802.11 management frames to my server, so that I can save them there and use them in my location application to track moving objects.
Note: syslog doesn't include all the received frames.
I am asking:
Is it possible to synchronise the access points with a server to get all the received frames?
Is it possible to read this information periodically via SNMP or Telnet?
Can you tell me which MIB OID I should request (GET) to retrieve a list of management frames (such as probe requests) from the access points?
The solution is to use a capture technique such as libpcap or a raw socket on an interface in monitor mode, and then parse the radiotap header.
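For example, here is a minimal sketch in Python (an illustration only: it assumes a Linux box, root privileges, and a wireless interface already placed in monitor mode and named mon0) that reads raw 802.11 frames, skips the radiotap header, and picks out management frames such as probe requests:

import socket
import struct

IFACE = "mon0"       # assumed: interface already in monitor mode
ETH_P_ALL = 0x0003   # receive every protocol

# On a monitor-mode interface, a raw AF_PACKET socket delivers
# radiotap-prefixed 802.11 frames.
sock = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(ETH_P_ALL))
sock.bind((IFACE, 0))

while True:
    frame = sock.recv(4096)
    if len(frame) < 4:
        continue
    # Radiotap header starts with: version (u8), pad (u8), length (u16, little-endian)
    _version, _pad, rt_len = struct.unpack_from("<BBH", frame, 0)
    dot11 = frame[rt_len:]
    if not dot11:
        continue
    fc = dot11[0]                # 802.11 Frame Control: version/type/subtype bits
    ftype = (fc >> 2) & 0x3
    subtype = (fc >> 4) & 0xF
    if ftype == 0:               # type 0 = management; subtype 4 = probe request
        print("management frame, subtype %d, %d bytes" % (subtype, len(dot11)))

From there, each frame (or just the fields your location application needs, such as the transmitter MAC and the RSSI from the radiotap header) can be forwarded to your server over an ordinary TCP or UDP socket.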
I need to monitor several Linux servers located at a remote site, away from my farm.
I have a VPN connection to this remote location.
Internally I use Zenoss 4 to monitor the systems, and I would like to use Zenoss to monitor the remote systems too. Due to contract policy, I cannot use the VPN connection for Zenoss data (e.g. SNMP or SSH).
What I created is a set of scripts that fetch the desired data from the remote systems to an internal server. The returned data consists of one CSV per location, containing data from all appliances at that location.
For example:
$ cat LOCATION_1/current/current.csv
APPLIANCE1,out_of_memory,no,=,no,3,-
APPLIANCE1,postgre_idle,no,=,no,3,-
APPLIANCE2,out_of_memory,no,=,no,3,-
APPLIANCE2,postgre_idle,no,=,no,3,-
The CSV format is:
HOSTNAME,CHECK_NAME,RESULT_VALUE,COMPARE,DESIRED_VALUE,INFO
How can I integrate this data into Zenoss, as if the machines were located in the internal farm?
If necessary, I could change the format of the fetched data.
Thank you very much
One possibility is for your internal server that communicates with remote systems (let's call it INTERNAL1) to re-issue the events as SNMP traps (or write them to the rsyslog file) and then process them in Zenoss.
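For instance, a minimal sketch of that idea in Python (the CSV path and the Zenoss host name below are assumptions, not from your setup) that reads a location's CSV and forwards one syslog line per row:

import csv
import logging
import logging.handlers

CSV_PATH = "LOCATION_1/current/current.csv"
ZENOSS_HOST = "zenoss.internal.local"  # assumed: Zenoss syslog listener on UDP 514

logger = logging.getLogger("remote-appliances")
logger.setLevel(logging.INFO)
logger.addHandler(logging.handlers.SysLogHandler(address=(ZENOSS_HOST, 514)))

with open(CSV_PATH, newline="") as f:
    for row in csv.reader(f):
        hostname, check_name, rest = row[0], row[1], row[2:]
        # Prefix the message with the appliance name so a Zenoss event
        # transform can reassign evt.device later (see below).
        logger.info("[%s] %s: %s", hostname, check_name, ",".join(rest))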
For example, the message can start with the name of the server: "[APPLIANCE1] Out of Memory". In the "Event Class transform" section of your Zenoss web interface (http://my_zenoss_install.local:8080/zport/dmd/Events/editEventClassTransform), you can transform attributes of incoming messages (using Python). I frequently use this to lower the severity of an event. E.g.,
if evt.component == 'abrt' and evt.message.find('Saved core dump of pid') != -1:
    evt.severity = 2  # was originally 3, I think
For your needs, you can set evt.device to APPLIANCE1 if the message comes from INTERNAL1 and contains the [APPLIANCE1] tag as the message prefix, or use anything else that uniquely identifies messages/traps from the remote systems.
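A sketch of such a transform (I have not verified this against a live Zenoss 4 install, so treat the attribute handling as an assumption):

import re

# Reassign the event to the remote appliance named in the "[HOSTNAME] ..."
# prefix that INTERNAL1 prepends to each forwarded message.
m = re.match(r'\[(?P<appliance>[^\]]+)\]\s*(?P<rest>.*)', evt.message or '')
if evt.device == 'INTERNAL1' and m:
    evt.device = m.group('appliance')  # e.g. APPLIANCE1
    evt.summary = m.group('rest')      # drop the routing prefix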
I don't claim this to be the best way of achieving your goal. My knowledge of Zenoss is strictly limited to what I currently need to use it for.
P.S. Here is a rather old document from Zenoss about using event transforms. Unfortunately, Zenoss documentation is sparse and scattered (as you may have already learned), so searching old posts and/or asking questions on the Zenoss forum may be necessary.
Alternatively, you can simply deploy a collector in the remote location and add those hosts to that collector's pool; that way you can monitor the remote Linux servers as well.
I am writing a distributed Erlang application where several nodes are connected via a network with limited bandwidth. Thus, I would like to be able to minimize the size of the packets sent across the network when processes on different nodes send each other messages.
From http://www.erlang.org/doc/apps/erts/erl_ext_dist.html, I understand that the Erlang distribution mechanism uses erlang:term_to_binary/1,2 internally to convert Erlang messages to the external binary format that is sent over the network. Now, term_to_binary/2 supports several options that are useful for reducing the size of the binaries (http://www.erlang.org/doc/man/erlang.html#term_to_binary-1), including a compression option as well as the ability to choose a minor version with more efficient encoding of floats.
I would like to be able to tell the distribution mechanism to use both of these options every time it sends a message over the network. In other words, I would like to be able to specify the Options list that the distribution mechanism calls term_to_binary with. However, I have not been able to find any documentation on this subject. Is this possible to do?
Thanks for your help! :)
If I understand the code correctly, message encoding is hardcoded around line 1565 of dist.c (dsig_send()), so you can't change the way messages are encoded without patching and recompiling the emulator.
However, you can change the carrier used for message distribution, as described here. There is an example of using SSL for Erlang distribution, so you could create a connection that compresses all transmitted messages (it may even be possible with a tweaked version of the SSL example).
There are a few examples of standard distribution modules:
inet_tcp_dist.erl
inet6_tcp_dist.erl
inet_ssl_dist.erl
uds_dist example
Are you using rpc from node to node, or OTP behaviours? If so, try compressing the binary with zlib before it is sent.
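The pattern, sketched in Python purely for illustration (in Erlang itself the equivalent would be wrapping the payload with term_to_binary(Msg, [compressed, {minor_version, 1}]) on the sending side and binary_to_term/1 on the receiving side):

import zlib

def pack(payload: bytes) -> bytes:
    # Compress the serialized message before it crosses the
    # bandwidth-limited link.
    return zlib.compress(payload, 9)

def unpack(wire: bytes) -> bytes:
    # Inverse operation on the receiving node.
    return zlib.decompress(wire)

# Round-trip check: highly repetitive payloads shrink dramatically.
data = b"hello " * 1000
assert unpack(pack(data)) == data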
I implemented S3 multipart upload in Java, both the high-level and the low-level versions, based on the sample code from
http://docs.amazonwebservices.com/AmazonS3/latest/dev/index.html?HLuploadFileJava.html and http://docs.amazonwebservices.com/AmazonS3/latest/dev/index.html?llJavaUploadFile.html
When I uploaded files smaller than 4 GB, the uploads completed without any problem. When I uploaded a 13 GB file, the code started throwing IO exceptions (broken pipe). After multiple retries, it still failed.
Here is how to reproduce the scenario. Take the 1.1.7.1 release, then:
create a new bucket in US standard region
create a large EC2 instance as the client to upload file
create a file of 13GB in size on the EC2 instance.
run the sample code from either the high-level or the low-level API S3 documentation page on the EC2 instance
test any of the three part sizes: the default part size (5 MB), or set the part size to 100,000,000 or 200,000,000 bytes.
So far the problem shows up consistently. I did a tcpdump; it appeared the HTTP server (S3 side) kept resetting the TCP stream, which caused the client side to throw an IO exception (broken pipe) after the uploaded byte count exceeded 8 GB. Has anyone had similar experiences uploading large files to S3 with multipart upload?
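For comparison, the same test is easy to express with an explicit part size in Python's boto3 (a sketch only, not the Java SDK from the question; bucket, key, and file names are placeholders), which can help narrow down whether the part size or the SDK is the variable:

import boto3
from boto3.s3.transfer import TransferConfig

BUCKET = "my-test-bucket"             # placeholder
KEY = "big-file-13gb.bin"             # placeholder
FILENAME = "/data/big-file-13gb.bin"  # placeholder

# Force multipart with explicit 100 MB parts; boto3 retries failed parts.
config = TransferConfig(
    multipart_threshold=100 * 1024 * 1024,
    multipart_chunksize=100 * 1024 * 1024,
)

s3 = boto3.client("s3")
s3.upload_file(FILENAME, BUCKET, KEY, Config=config)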
First of all, this is not a question seeking programming help. I am doing a project using FTP in which I employed a particular logic, and I want people to comment on whether the logic is OK or whether I should employ a better one. I transfer files using FTP; for example, if the file size is 10 MB, I split that file into X files of size y, depending on my network speed, and then send these files one by one and merge them on the client machine.
The network speed on my client side is very low (1 kbps), so I want to split files into 512-byte chunks and send them.
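To make the logic concrete, here is a minimal sketch of the split-and-merge step in Python (file names and chunk size are just illustrative parameters):

def split(path, chunk_size=512):
    # Split `path` into numbered chunk files of at most `chunk_size` bytes.
    parts = []
    with open(path, "rb") as f:
        index = 0
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            part_name = "%s.part%06d" % (path, index)
            with open(part_name, "wb") as out:
                out.write(chunk)
            parts.append(part_name)
            index += 1
    return parts

def merge(parts, dest):
    # Concatenate the chunk files back into `dest` on the client side.
    with open(dest, "wb") as out:
        for part_name in sorted(parts):
            with open(part_name, "rb") as part:
                out.write(part.read())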
It would be better not to split the file, but instead to use a client and server that both support resuming file transfers.
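For example, Python's ftplib exposes the FTP REST offset, so a download can continue from the bytes already on disk instead of re-sending fixed-size chunks (host and file names below are placeholders; the server must support the REST command):

import os
from ftplib import FTP

HOST = "ftp.example.com"   # placeholder
REMOTE = "bigfile.bin"     # placeholder
LOCAL = "bigfile.bin"      # placeholder

ftp = FTP(HOST)
ftp.login()  # anonymous; use ftp.login(user, passwd) otherwise

# Resume: skip the bytes we already have instead of re-sending them.
offset = os.path.getsize(LOCAL) if os.path.exists(LOCAL) else 0
with open(LOCAL, "ab") as f:
    ftp.retrbinary("RETR " + REMOTE, f.write, rest=offset)
ftp.quit()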