I use Log Parser 2.2 to read IIS logs and copy them into a database. Initially the IIS log had the default fields and I was able to copy the log into the database. I have now added one more field to the IIS log, but Log Parser does not return the new column. Can anyone help me get Log Parser to read the additional field along with the old log files?
The following query is used to read the IIS log:
select * from C:\inetpub\logs\LogFiles\W3SVC3\*.*
If the field is newly added, then I think Log Parser stops checking for newly defined fields after the first 50 entries it finds (perhaps in total, or per log). Try running against just the IIS log that has the new field in it to determine whether it's working or not. Also, make sure the header directives at the top of that file (the # Fields: line) list the entire set of fields you're looking for.
i.e.:
select * from C:\inetpub\logs\LogFiles\W3SVC3\todays.log
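If you're loading the logs into the database from the command line, something along these lines (a sketch only; the server, database and table names are placeholders) forces the IISW3C input format so that the # Fields: directive of each log is honoured. Running LogParser -h -i:IISW3C will list the input-format options if you need to tune how the fields are detected.

    LogParser.exe "SELECT * INTO iislog FROM C:\inetpub\logs\LogFiles\W3SVC3\todays.log" -i:IISW3C -o:SQL -server:MYSERVER -database:IISLogs -createTable:ON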
I am writing out json structured log messages to stdout with exactly one time field, called origin_timestamp.
I collect the log messages using Fluent Bit with the tail input plugin, which uses the parser docker. The parser is configured with the Time_Key time.
The documentation about Time_Key says:
If the log entry provides a field with a timestamp, this option
specify the name of that field.
Since time != origin_timestamp, I would have thought that no time fields would be added by Fluent Bit. However, the final log messages ending up in Elasticsearch have the following time fields:
(origin_timestamp within the field log that contains the original log message)
origin_timestamp
time
@timestamp (sometimes even multiple times).
The @timestamp field is probably added by the es output plugin I am using in Fluent Bit, but where the heck is the time field coming from?
I came across the following issue in the Fluent Bit issue tracker, Duplicate @timestamp fields in elasticsearch output, which sounds like it might be related to the issue in question.
I've deep linked to a particular comment from one of the contributors, which outlines two possible solutions depending on whether you are using their Kubernetes Filter plugin, or are ingesting the logs into Elasticsearch directly.
Hope this helps.
The time field is being added by the Docker json-file logging driver. By default, Docker takes everything your container writes to stdout and logs it to a file in the following format:
{"log":"Log line is here\n","stream":"stdout","**time**":"2019-01-01T11:11:11.111111111Z"}
So, you might observe three timestamps in your final log:
Added by you (origin_timestamp)
Added by docker driver (time)
Added by the Fluent Bit es output plugin (@timestamp)
Ref - https://docs.docker.com/config/containers/logging/json-file/
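If you don't want the docker-injected time key to reach Elasticsearch, two common options are to give the parser Time_Keep Off (the docker parser shipped in parsers.conf typically has Time_Keep On, which is why the key survives into the record), or to strip the key with a modify filter. A sketch, assuming a standard tail/es pipeline; parser names and match patterns are placeholders:

    # parsers file: copy of the stock docker parser, but consuming the
    # "time" key as the record timestamp and then dropping it
    [PARSER]
        Name        docker_notime
        Format      json
        Time_Key    time
        Time_Format %Y-%m-%dT%H:%M:%S.%L
        Time_Keep   Off

    # main config: alternatively, remove the key after parsing
    [FILTER]
        Name    modify
        Match   *
        Remove  time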
I had a project in Redmine with more than 600 issues. I moved all the issues to a different project. I had no idea that the move deletes all the data for the custom fields!
So all the custom field values are now lost. I did not backup the database before this action as I really did not think that I was going to do any harm by moving issues as moving is a native function in the UI.
What I noticed, though, is that the production.log contains events for all creations and updates. All my 600 issues are in order in the production log. How can I use these log statements to repeat the actions? If I can import all of the logged actions, I can write the custom field values they record back into the original Redmine instance and restore my data.
Entries look like this:
Processing IssuesController#update (for XX.XX.XX.X at 2013-02-07 11:19:54) [PUT]
Parameters: {"_method"=>"put", "authenticity_token"=>"nWNSSRYjHhN0BGb+Ya8M4pYWPPgsfdM=", "issue"=>{"assigned_to_id"=>"", "custom_field_values"=>{"10"=>"", "5"=>"Not translated", "1"=>"fi", "8"=>"http://screencast.com/t/ODknR8K", "9"=>"", "3"=>"", "4"=>""}, "done_ratio"=>"0", "due_date"=>"", "priority_id"=>"4", "estimated_hours"=>"", "start_date"=>"2013-02-07", "subject"=>"1\tInstallation in English", "tracker_id"=>"1", "lock_version"=>"0", "description"=>"Steps:\r\nOpen Nitro\r\n\r\nProblem:\r\nNot localized"}, "controller"=>"issues", "time_entry"=>{"hours"=>"", "activity_id"=>"", "comments"=>""}, "attachments"=>{"1"=>{"description"=>""}}, "id"=>"3876", "action"=>"update", "commit"=>"Submit", "notes"=>""}
I am really hoping that there is a way; any help will be greatly appreciated.
You could use a decent text editor and/or spreadsheet application to do a massive find-and-replace, construct a series of UPDATE SQL commands, and run them directly against the database (TEST FIRST!!):
Extract the relevant lines from the log
Remove unnecessary information
Copy into a spreadsheet
Split the text into columns
Add columns with the necessary SQL fragments ("UPDATE ... SET ..." etc.) and copy them down every row of that column (see the example after this list)
Join the columns to make one complete SQL statement per row
Export the joined data to a text file
Run it against a test database as SQL
If all goes well, run it against the production database
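For example, using the values from the log entry above, each row would end up producing a statement along these lines (a sketch only, assuming Redmine's usual custom_values table with customized_type, customized_id, custom_field_id and value columns; verify the schema of your Redmine version first):

    UPDATE custom_values
    SET value = 'fi'
    WHERE customized_type = 'Issue'
      AND customized_id = 3876
      AND custom_field_id = 1;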
The log entry, following "Parameters:", looks like a regular Ruby hash definition. I'd parse that out and eval it back into a hash variable.
From there you will need to peel off elements and insert them into a database. I'd do that using Sequel, but use what works for you.
Talk to the RedMine support people and get the schema for their tables so you can figure out what data goes where and the database driver needed.
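A rough sketch of that approach (the connection string is a placeholder, the custom_values layout is assumed as above, and eval is only tolerable here because you are parsing your own production.log):

    require 'sequel'

    DB = Sequel.connect('mysql2://user:password@localhost/redmine')  # placeholder

    File.foreach('production.log') do |line|
      next unless line =~ /Parameters: (\{.*\})/
      params = eval(Regexp.last_match(1))          # the hash Rails printed
      issue  = params['issue'] or next
      values = issue['custom_field_values'] or next
      id     = params['id'] or next                # skip entries without an issue id
      values.each do |field_id, value|
        DB[:custom_values]
          .where(customized_type: 'Issue',
                 customized_id: id.to_i,
                 custom_field_id: field_id.to_i)
          .update(value: value)
      end
    end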
I'm having a problem sending (creating) an HL7 message using Mirth.
I want to read data from my patient table in SQL Server 2008 and, using that data,
I want to send a message to my destination connector, a file writer. I want my messages to get saved in the file writer's output directory.
So far I'm able to generate the message, but the size of the output file in my destination directory is increasing as the channel's polling time goes on.
Have I done something wrong in the transformer mapping?
UPDATE:
The size of the output file in my destination directory IS increasing. (My .txt file starts at 1 KB and grows to 900 KB and so on.) This is happening because the same data is getting generated again and again, multiple times. For example, my generated message has one set of segments (MSH, PID, PV1, ORM) for one row of data in my database, but the same MSH, PID, PV1 and ORM are getting generated multiple times.
If you are seeing the same data generated in your output directory multiple times, the most likely cause is that you are not doing anything to indicate to your database that a given record has been processed.
For example, if you have 1 record in your database: ["John", "Smith", "12134" ...] on the first poll, you will generate 1 message. If on the second poll you also have a second record ["Fred", "Jones", "98371" ...], you will generate TWO messages - one for John Smith and one for Fred Jones. And so on.
The key is to use the "Run On-Update Statement" of your Database Reader (Source) connector to update the database table you are polling with an indication that a given record has been processed. This ensures that the same record is not processed multiple times.
This requires that your source table have some kind of column to indicate the record has been processed. Mirth will not keep track of this for you - you must do it manually.
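For example (a sketch; the patient table, its columns and the processed flag are hypothetical, and the exact variable syntax in the on-update statement depends on your Mirth version):

    -- Database Reader (Source) SQL:
    SELECT patient_id, first_name, last_name
    FROM patient
    WHERE processed = 'N'

    -- Run On-Update Statement:
    UPDATE patient
    SET processed = 'Y'
    WHERE patient_id = ${patient_id}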
You can't have a file reader as a destination, so I assume you mean file writer. You say that "the size of my file in my destination is increasing." Is that a typo? Do you mean NOT increasing?
If it is increasing, then your messages are getting generated and you can view them to start your next round of troubleshooting...
If not, then you should look at the message log in the dashboard to see what is happening on a message-by-message basis - that would be the next place to troubleshoot.
You have to have a way of distinguishing which records to pull from the database by filtering on some sort of status flag or possibly a time-stamp. Then, you have to use some sort of On-Update statement to mark those same records as processed.
i.e.
Select id, patient, result from results where status_flag='N'
or
Select * from results where status_flag = 'N' and created_date >= '9/25/2012'
Then, in either a transformer step or the On-Update section of your Source, you would do something like:
Update results
set status_flag = 'Y' where id=$(id)
If you do not do something like this and you have Mirth polling at a certain interval, it will just keep pulling the same records over and over.
You have to set the connector type of your source to Database Reader.
You have to set the connector type of your destination to File Writer.
Then you can write your data to a file in any directory you have write access to.
While creating the HL7 template, you have to use the following code in the outbound message template:
MSH|^~\&|||
Thanks
Krishna
I recently wrote a mailing platform for one of our employees to use. The system runs great, scales great, and is fun to use. However, it is currently inoperable due to a bug that I can't figure out how to fix (fairly inexperienced developer).
The process goes something like this...
Upload a CSV file to a specific FTP directory.
Go to the import_mailing_list page.
Choose a CSV file within the FTP directory.
Name and describe what the list contains.
Associate file headings with database columns.
Then, the back-end loops over each line of the file, associating the values with a heading, and importing these values into a database.
This all works wonderfully, except in a specific case, when a raw CSV is not correctly formatted. For example...
fname, lname, email
Bob, Schlumberger, bob@bob.com
Bobbette, Schlumberger
Another, Record, goeshere@email.com
As you can see, there is a missing comma on line two. This would cause an error when attempting to pull "valArray[3]" (or valArray[2], in the case of every language but mine).
I am looking for the most efficient solution to keep this error from happening. Perhaps I should check the array length, and compare it to the index we're going to attempt to pull, before pulling it. But to do this for each and every value seems inefficient. Anybody have another idea?
Our stack is ColdFusion 8/9 and MySQL 5.1. This is why I refer to the array index as [3].
There's ArrayIsDefined(array, elementIndex), or ArrayLen(array)
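For instance, something like this (a sketch; valArray is assumed to come from splitting the line on commas) avoids reading past the end of a short row:

    <cfset email = "">
    <cfif ArrayIsDefined(valArray, 3)>
        <cfset email = Trim(valArray[3])>
    </cfif>

Keep in mind that ListToArray() skips empty list elements by default, so a row like "Bob,,bob@bob.com" can also come back with fewer elements than columns unless you use the includeEmptyFields argument available in later ColdFusion versions.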
seems inefficient?
You gotta code what you need to code, forget about inefficiency. Get it right before you get it fast (when needed).
I suppose if you are looking for another way of doing this (instead of checking the array length each time, although that really doesn't sound that bad to me), you could wrap each line insert attempt in a try/catch block. If it fails, then stuff the failed row in a buffer (including the line number and error message) that you could then display to the user after the batch has completed, so they could see each of the failed lines and why they failed. This has the advantages of 1) not having to explicitly check the array length each time and 2) catching other errors that you might not have anticipated beforehand (maybe a value is too long for your field, for example).
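A sketch of that idea (csvLines and insertRow() are hypothetical stand-ins for however you currently read and insert each row):

    <cfset failedRows = []>
    <cfloop from="1" to="#ArrayLen(csvLines)#" index="i">
        <cftry>
            <cfset valArray = ListToArray(csvLines[i], ",")>
            <cfset insertRow(valArray)>
            <cfcatch type="any">
                <cfset ArrayAppend(failedRows, {line = i, error = cfcatch.message})>
            </cfcatch>
        </cftry>
    </cfloop>
    <!--- after the loop, report failedRows (line numbers and error messages) back to the user --->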
Since I'm not really seeing any content anywhere that doesn't point back to the original Microsoft documents on this matter, or source code that doesn't really seem to answer the questions I'm having, I thought I might ask a few things here. (The Delphi tag is there because that's the dev environment for the code I'm writing from this.)
That said, I had a few questions the API document wasn't answering. First one: the fdi_notify messages. What is "my responsibility" when coding these: fdintCABINET_INFO, fdintPARTIAL_FILE, fdintNEXT_CABINET and fdintENUMERATE? I'll illustrate what I mean by an example. For fdintCLOSE_FILE_INFO, "my responsibility" is to close the file for the handle given to me, and set the file's date and time according to the data passed in fdi_notify.
I figure I'm missing something since my code isn't handling extracting spanned CAB files...any thoughts on how to do this?
What you're more than likely running into is that FDICopy only reads the cab you passed in. It will use fdintNEXT_CABINET to get spanned data for any files you extract in response to fdintCOPY_FILE, but it only calls fdintCOPY_FILE for files that start on that first cab.
To get a directory listing for the entire set, you need to call FDICopy in a loop. Every time you get a fdintCABINET_INFO event, save off the psz1 parameter (next cab name). When FDICopy returns, check that. If it's an empty string you're done, if not call FDICopy again with the next cab as the new path.
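A rough sketch of that loop in Delphi (assuming a typical translation of fdi.h; type and routine names vary between header translations, and HFDICtx, FdiNotify and NextCabName are stand-ins for your own FDICreate context, callback and a variable the callback fills in):

    procedure ListWholeSet;
    var
      CabPath, CabName: string;
    begin
      CabPath := 'C:\cabs\';          // hypothetical location of the set
      CabName := 'data1.cab';         // first cabinet in the set
      repeat
        NextCabName := '';            // filled in by the fdintCABINET_INFO handler
        if not FDICopy(HFDICtx, PAnsiChar(AnsiString(CabName)),
                       PAnsiChar(AnsiString(CabPath)), 0,
                       @FdiNotify, nil, nil) then
          Break;                      // inspect the ERF error info here
        CabName := NextCabName;       // psz1 saved off in fdintCABINET_INFO
      until CabName = '';
    end;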
fdintCABINET_INFO: The only responsibility for this is returning 0 to continue processing. You can use the information provided (the path of the next cabinet, next disk, path name, and set ID), but you don't need to.
fdintPARTIAL_FILE: Depending on how you're processing your cabs, you can probably ignore this. You'll only see it for the second and later images in a set, and it's to tell you that the particular entry is continued from a previous cab. If you started at the first cab in the set you'll have already seen an fdintCOPY_FILE for the file. If you're processing random .cabs, you won't really be able to use it either, since you won't have the start of the file to extract.
fdintNEXT_CABINET: You can use this to prompt the user for a new directory for the next cabinet, but for simple spanning support just return 0 if the passed-in filename is valid or -1 if it isn't. If you return 0 and the cab isn't valid, or is the wrong one, this will get called again. The easiest approach (if you don't request a new disk/directory) is just to check pfdin^.fdie. If it's FDIError_None, it's the first time this is being called for the requested cab, so you can return 0. If it's anything else, FDI has already tried to open the requested cab at least once, so you can return -1 as an error.
fdintENUMERATE: I think you can ignore this. It isn't covered in the documentation, and the two cab libraries I've looked at don't use it. It may be a leftover from a previous API version.
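Putting those together, the relevant cases of the notification callback could look roughly like this (again a sketch; the type and constant names follow one Delphi translation of fdi.h and may differ in yours):

    function FdiNotify(fdint: FDINOTIFICATIONTYPE;
      pfdin: PFDINOTIFICATION): Integer; cdecl;
    begin
      Result := 0;
      case fdint of
        fdintCABINET_INFO:
          // remember the next cabinet's name so the outer loop can continue
          NextCabName := string(AnsiString(pfdin^.psz1));

        fdintNEXT_CABINET:
          // 0 = the supplied cab is fine; anything other than FDIError_None
          // means a previous attempt on this cab already failed, so give up
          if pfdin^.fdie <> FDIError_None then
            Result := -1;

        fdintPARTIAL_FILE, fdintENUMERATE:
          ; // nothing to do when processing a set from its first cabinet
      end;
      // fdintCOPY_FILE / fdintCLOSE_FILE_INFO are omitted here; for a pure
      // directory listing, returning 0 for fdintCOPY_FILE just skips the file
    end;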