I have some incoming files with 2 different formats.
Is there a way I can route the different files to use the appropriate map in BizTalk?
What kind of formats are you talking about? If this is XML or a flat file (text-based), it is easy to determine the file type. If you are talking about something binary (PDF, Excel 2003, etc.), then things can become more complicated.
Please elaborate on which formats you are talking about.
In short, BizTalk maps in receive and send ports rely on the context property BTS.MessageType (the document's target namespace plus the root node name) to determine what the input format is, and can then decide which map to execute.
What you would need to do:
Create the schemas for all formats in Visual Studio.
Create your mappings for both formats.
Deploy your schemas DLL to your BizTalk application.
Create a receive location for your input files (I assume it is one folder for both formats?).
Use the XMLReceive pipeline for your receive location. This includes an XML Disassembler pipeline component, which will recognize the format and try to match it against your deployed schemas.
Configure your mappings on the receive port.
Create a send port that subscribes to your receive port.
Hope this suits your needs.
A website usually consists of multiple objects (e.g. text file, a few png files etc.), I would like to know if there's a tool that can capture the individual requests/responses in different pcap files?
So for example, if I browse to http://somewebsite.com, and http://somewebsite.com consists of, say, {index.html, image1.png, somestylefile.css, image2.png}, the tool would capture the entire load of http://somewebsite.com but generate {index.html.pcap, image1.png.pcap, somestylefile.css.pcap, image2.png.pcap}.
I don't know of any tool that can do this. Is it possible using scapy or something similar?
An HTTP connection can carry multiple requests inside the same TCP connection, and browsers make heavy use of this (HTTP keep-alive). With HTTP pipelining, the requests/responses don't even need to be fully separated in time, i.e. a client can send another request even though the response to the previous one has not arrived yet. And with HTTP/2 the data can also be interleaved, i.e. several responses transferred at the same time inside the same connection.
For these reasons it is not always possible to capture the data as separate pcap files, because the exchanges might not be separable at the packet level. But if you don't need the original packet boundaries, it would be possible to create a separate pcap file for each request which does not necessarily reflect the original packets, but does reflect the application layer, i.e. the response matching the request.
One tool which does this is httpflow.pl, which can extract HTTP/1.x request/response pairs from an existing pcap (or sniff directly) and write each request/response into a separate pcap file, as if it had been a separate TCP connection. It can also clean up the data for easier analysis, i.e. unchunk and uncompress the HTTP body.
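The application-layer separation described above can be illustrated with a minimal Python sketch. This is only a toy stand-in for what httpflow.pl does, and it assumes header-only HTTP/1.x requests (e.g. GET) whose end is marked by the blank-line terminator (CRLF CRLF):

```python
def split_requests(stream: bytes) -> list:
    """Split concatenated HTTP/1.x requests from one keep-alive
    connection's byte stream. Assumes header-only requests whose end
    is marked by the blank-line terminator (CRLF CRLF)."""
    requests = []
    while stream:
        end = stream.find(b"\r\n\r\n")
        if end == -1:
            break  # incomplete request left in the buffer
        requests.append(stream[:end + 4])
        stream = stream[end + 4:]
    return requests

# Two pipelined GETs captured on the same TCP connection:
data = (b"GET /index.html HTTP/1.1\r\nHost: somewebsite.com\r\n\r\n"
        b"GET /image1.png HTTP/1.1\r\nHost: somewebsite.com\r\n\r\n")
parts = split_requests(data)  # two separate requests
```

A real tool must additionally pair each request with its response (tracking Content-Length, chunked encoding, etc.), which is exactly the bookkeeping httpflow.pl handles for you.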
I need to monitor several Linux servers placed in a different location from my farm.
I have VPN connection to this remote location.
Internally I use Zenoss 4 to monitor the systems, I would like to use Zenoss to monitor remote systems too. For contract policy, I cannot use VPN connection for Zenoss data (e.g. SNMP or SSH).
What I created is a bunch of scripts that fetch the desired data from the remote systems to an internal server. The format of the returned data is one CSV per location, containing data from all appliances placed in that location.
For example:
$ cat LOCATION_1/current/current.csv
APPLIANCE1,out_of_memory,no,=,no,3,-
APPLIANCE1,postgre_idle,no,=,no,3,-
APPLIANCE2,out_of_memory,no,=,no,3,-
APPLIANCE2,postgre_idle,no,=,no,3,-
The format of the CSV is this one:
HOSTNAME,CHECK_NAME,RESULT_VALUE,COMPARE,DESIRED_VALUE,INFO
How can I integrate this data into Zenoss, as if the machines were placed in the internal farm?
If it is necessary, I could eventually change the format of fetched data.
Thank you very much
One possibility is for your internal server that communicates with remote systems (let's call it INTERNAL1) to re-issue the events as SNMP traps (or write them to the rsyslog file) and then process them in Zenoss.
For example, the message can start with the name of the server: "[APPLIANCE1] Out of Memory". In the "Event Class transform" section of your Zenoss web interface (http://my_zenoss_install.local:8080/zport/dmd/Events/editEventClassTransform), you can transform attributes of incoming messages (using Python). I frequently use this to lower the severity of an event. E.g.,
if evt.component == 'abrt' and evt.message.find('Saved core dump of pid') != -1:
    evt.severity = 2  # was originally 3, I think
For your needs, you can set evt.device to APPLIANCE1 if the message comes from INTERNAL1 and carries the [APPLIANCE1] tag as the message prefix, or anything else you want to use to uniquely identify messages/traps from the remote systems.
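A sketch of such a transform (the names INTERNAL1 and [APPLIANCE1] are just the conventions from this thread, and the Event class below is only a stand-in for the real evt object Zenoss hands to a transform):

```python
import re

class Event:
    """Stand-in for the evt object Zenoss passes to a transform."""
    def __init__(self, device, message):
        self.device = device
        self.message = message

def retarget(evt):
    # If the event was relayed by INTERNAL1 and the message starts with
    # an "[APPLIANCEx]" tag, re-point the event at that appliance and
    # strip the tag from the message.
    m = re.match(r'\[([^\]]+)\]\s*(.*)', evt.message)
    if evt.device == 'INTERNAL1' and m:
        evt.device = m.group(1)
        evt.message = m.group(2)
    return evt

evt = retarget(Event('INTERNAL1', '[APPLIANCE1] Out of Memory'))
# evt.device is now 'APPLIANCE1' and evt.message is 'Out of Memory'
```

Inside an actual Event Class transform you would drop the class and the function wrapper and just operate on evt directly.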
I don't claim this to be the best way of achieving your goal. My knowledge of Zenoss is strictly limited to what I currently need to use it for.
P.S. here is a rather old document from Zenoss about using event transforms. Unfortunately documentation in Zenoss is sparse and scattered (as you may have already learned), so searching old posts and/or asking questions on the Zenoss forum may be necessary.
Alternatively, you can deploy a collector in the remote location and add the hosts to that collector's pool; that way you can monitor the remote Linux servers as well.
Do you know of any way to keep a copy of all documents printed through the print queues of a Windows Server 2003 machine? I'd like to audit what people are printing.
Regards
You could just set the queue to keep printed documents.
EDIT:
If you want to 'move them' to another place, then you will probably have to put some kind of different port monitor in place. You could use something like RedMon (or we have a commercial product) that would write the data to a file and then route it to the actual printer.
There are other server based tools that could allow for this along with other tracking capabilities.
With Flash 10.1+ and the ability to use appendBytes on a NetStream, it's possible to use HTTP streaming in Flash for video delivery. But it seems that the delivery method requires the segments to be stored in a single file on disk, which can only be broken into discrete segment files with an FMS or an Apache module. You can cache the individual segment files once they're created, but the documentation indicates that you must still always use an FMS / Apache module to produce those files in the first instance.
Is it possible to break the single on-disk file into multiple on-disk segments without using an FMS, Wowza product or Apache?
There was an application which decompiled the output of the F4fpackager so the result could be hosted anywhere, without the Apache module. Unfortunately this application was withdrawn.
It should be possible to use a proxy to cache the fragments. Then you can use these cached files on any webserver.
A client has a system which reads large files (up to 1 GB) containing multiple video images. Access is via an index file which "points" into the larger file. This works well on a LAN. Does anyone have any suggestions as to how I can access these files over the internet when they are held on a remote server? The key constraint is that we cannot afford the time necessary to download the whole file before accessing individual images within it.
You could put your big file behind an HTTP server like Apache, then have your client side use HTTP Range headers to fetch the chunk it needs.
Another alternative would be to write a simple script in PHP, Perl or server-language-of-your-choice which takes the required offsets as input and returns the chunk of data you need, again over HTTP.
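A minimal sketch of the Range approach in Python (the URL and offsets are hypothetical, and the server is assumed to support byte ranges, i.e. to answer 206 Partial Content):

```python
from urllib.request import Request, urlopen

def range_header(offset, length):
    # HTTP byte ranges are inclusive on both ends.
    return "bytes=%d-%d" % (offset, offset + length - 1)

def fetch_chunk(url, offset, length):
    """Fetch one chunk of a large remote file without downloading it all."""
    req = Request(url, headers={"Range": range_header(offset, length)})
    with urlopen(req) as resp:  # expect HTTP 206 Partial Content
        return resp.read()

# e.g. read the image your index file says starts at byte 1048576 and
# is 65536 bytes long (hypothetical URL):
# chunk = fetch_chunk("http://example.com/video-images.bin", 1048576, 65536)
```

The client would consult the index file first, then issue one such request per image it actually needs.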
If I understand the question correctly, it depends entirely on the format chosen to contain the images as a video. If the container has been designed in such a way that the information about each image is accessible just before or just after the image, rather than at the end of the container, you could extract images and their metadata from the video container and start working on what you have downloaded so far. You will need to have an idea of the binary format used.
FTP does let you use 'paged files', where sections of the file can be transferred independently:
To transmit files that are discontinuous, FTP defines a page
structure. Files of this type are sometimes known as
"random access files" or even as "holey files".
In FTP, the sections of the file are called pages -- rfc959
I've never used it myself though.