I am using PowerShell 4 and trying to parse an archived event log into a CSV file that includes all of the data, with headers for each column. The closest I have been able to come is with the following command:
Get-WinEvent -Path .\Security.evtx | Select-Object TimeCreated, ProviderName, Id, Message, Level, Keyword, UserID, Data, Subject, SubjectUserSid, SubjectUserName, SubjectLogonId, ComputerName | Export-Csv .\Logging.csv
This gives me headers for all of the fields in the CSV file, but the only columns that actually contain data are TimeCreated, ProviderName, Id, Level, and Message. I am trying to get the missing data into columns as well, but not succeeding. What am I doing wrong here?
(The following was copied from an edit to the question itself and should be credited to the original question author.)
OK, I finally figured it out, at least for what I need to accomplish. Hopefully this will help someone.
Get-WinEvent -Path .\Security.evtx | select TimeCreated, ProviderName, Id, @{n='Message';e={$_.Message -replace '\s+', " "}} | Export-Csv .\Logging.csv
This command exports the archived event log to CSV with headers and puts the whole message body into one cell, which allows easy import into a database when you have no other tools to work with.
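For anyone still after the missing columns: the per-event fields (SubjectUserName, SubjectLogonId, and so on) live in each record's XML payload rather than as top-level properties, which is why those columns exported empty. Below is a minimal sketch, not a tested production script, that promotes each <Data Name="..."> element from the XML to its own column via ToXml(); note that Export-Csv takes its columns from the first object, so you may want to filter to a single event Id first:
Get-WinEvent -Path .\Security.evtx | ForEach-Object {
    # Each record carries its event-specific fields in an XML payload
    $xml = [xml]$_.ToXml()
    $row = [ordered]@{
        TimeCreated  = $_.TimeCreated
        ProviderName = $_.ProviderName
        Id           = $_.Id
    }
    # Promote every <Data Name="..."> element to its own column
    foreach ($d in $xml.Event.EventData.Data) {
        $row[$d.Name] = $d.'#text'
    }
    [pscustomobject]$row
} | Export-Csv .\Logging.csv -NoTypeInformation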
In my Docusaurus project my internal links work in my local environment, but when I push to GitLab they no longer work. Instead of replacing the original doc title in the URL with the new one, it is appended at the end ('https://username.io/test-site/docs/overview/add-a-category.html'). I looked over my config file, but I do not understand why this is happening.
I tried updating the id in the front matter for the page, and making sure it matches the id in the sidebars.json file. I have also added customDocsPath and set it to 'docs/' in the config file, though that is supposed to be the default.
---
id: "process-designer-overview"
title: "Process Designer Overview"
sidebar_label: "Overview"
---
# Process Designer
The Process Designer is a collaborative business process modeling and
design workspace for the business processes, scenarios, roles and tasks
that make up governed data processes.
Use the Process Designer to:
- [Add a Category](add-a-category.html)
- [Add a Process or Scenario](Add%20a%20Process%20or%20Scenario.html)
- [Edit a Process or Scenario](Edit%20a%20Process%20or%20Scenario.html)
I updated the 'Add a Category' link in parentheses to an .md extension, but that broke the link locally and it still didn't work on GitLab. I would expect that when a user clicks the link, the doc title in the URL would be replaced with the new doc title ('https://username.gitlab.io/docs/add-a-category.html'), but instead it just tacks it on to the end ('https://username.gitlab.io/docs/process-designer-overview/add-a-category.html'), so the link is broken, as that is not where the doc is located.
There were several issues with my links. First, I converted these files from HTML to Markdown using Pandoc and did not add front matter, relying instead on the file names to connect my files to the sidebars. This was fine, except almost all of the file names had spaces in them, which you can see in my code example above. That was causing real issues, so I found a Bash script to replace all of the spaces in my file names with underscores, but then all of my links were broken. I updated all of the links in my files with a search and replace in my code editor, replacing "%20" with "_". I also needed to replace the ".html" extension with ".md" or my project would no longer work locally; again, I did this with a search and replace in my code editor.
Finally, I ended up adding the front matter because otherwise my sidebar titles were all covered in underscores. Since I was working with 90 files, I didn't want to do this manually. I looked for a while and found a great gist by thebearJew and adjusted it so that it takes the file name and adds it as the id, and takes the first heading and adds it as the title and sidebar_label, since as it happens that works for our project. Here is the Bash script I found online to convert the spaces in my file names to underscores, if anyone is interested:
find "$1" -name "* *.md" -type f -print0 | \
while IFS= read -r -d '' f; do mv -v "$f" "${f// /_}"; done
Here is the script I ended up with if anyone else has a similar setup and doesn't want to update a huge amount of files with front matter:
#!/bin/bash
# Given a file path as an argument
# 1. get the file name
# 2. prepend template string to the top of the source file
# 3. resave original source file
# command: find . -name "*.md" -print0 | xargs -0 -I file ./prepend.sh file
filepath="$1"
file_name=$("basename" -a "$filepath")
# Getting the file name (title)
md='.md'
title=${file_name%$md}
heading=$(grep -r "^# \b" ~/Documents/docs/$title.md)
heading1=${heading#*\#}
# Prepend front-matter to files
TEMPLATE="---
id: $title
title: $heading1
sidebar_label: $heading1
---
"
echo "$TEMPLATE" | cat - "$filepath" > temp && mv temp "$filepath"
I need some help with the PHPExcel library. Everything works great: I am successfully exporting my SQL query to an Excel5 file, which I need to give to a transport company so they can automatically collect information about packages. Unfortunately, the generated Excel file has a NUL character between each letter of the cell text, and when the Excel file is imported you have to delete these characters manually.
If I open the Excel file, everything is fine and I see COMPANY NAME. If I open the Excel file with Notepad++, I see the cell value this way: C(NUL)O(NUL)M(NUL)P(NUL)A(NUL)N(NUL)Y N(NUL)A(NUL)M(NUL)E
If I open the file with Excel again, save it, and then reopen it with Notepad++, I see COMPANY NAME.
So I do not understand why every Excel file I create using PHPExcel has (NUL) between every letter of every word.
How do I prevent the generated Excel file from including (NUL) between every letter?
Also, the sample Excel files generated by PHPExcel's own examples are filled with (NUL) too, and if you open and save one, the (NUL) is gone.
Any help would be appreciated, thanks.
What is the (NUL)? 0x00? char(0)?
OK, here is the example:
error_reporting(E_ALL);
ini_set('display_errors', TRUE);
ini_set('display_startup_errors', TRUE);
date_default_timezone_set('Europe/London');
if (PHP_SAPI == 'cli')
die('Only available in a browser');
require_once dirname(__FILE__) . '/Classes/PHPExcel.php';
$objPHPExcel = new PHPExcel();
$objPHPExcel->getProperties()->setCreator("Solidus")
->setLastModifiedBy("Solidus")
->setTitle("Import web")
->setSubject("Import File")
->setDescription("n.a")
->setKeywords("n.a")
->setCategory("n.a");
$objPHPExcel->setActiveSheetIndex(0)
->setCellValueExplicit("A1", "COMPANY")
->setCellValue('A2', 'SAMSUNG');
$objPHPExcel->getActiveSheet()->setTitle('DDT');
$objPHPExcel->setActiveSheetIndex(0);
header('Content-Type: application/vnd.ms-excel');
header('Content-Disposition: attachment;filename="TEST.xls"');
header('Cache-Control: max-age=0');
header('Cache-Control: max-age=1');
header('Cache-Control: private',false);
$objWriter = PHPExcel_IOFactory::createWriter($objPHPExcel, 'Excel5');
ob_end_clean();
$objWriter->save('php://output');
As you can see from this little example, the script creates an Excel5 file with two cells, A1 = COMPANY and A2 = SAMSUNG.
When I send this file to the transport company, they import it into their system, and a weird character appears between each letter.
I noticed that every time I open the generated Excel5 file with Notepad++ I get:
S(nul)A(nul)M(nul)S(nul)U(nul)N(nul)G
If I save the file with Excel and then open it again with Notepad++ I get:
SAMSUNG
and this file is OK for the transport company.
So my question is: how do I prevent the generated file from containing this (NUL) character between each letter?
I found the solution myself; I explain it just in case anyone else has this problem:
There is no way to change how PHPExcel encodes the Excel file. (For what it's worth, the Excel5/BIFF8 format stores text as UTF-16LE, which is why a plain ASCII string shows a 0x00 byte, the (NUL), after every letter when viewed as single-byte text.)
So I figured out the problem was in reading the file. I did some simulations and reproduced the problem: every time I read the file and put the result into inputs I get weird characters:
C�O�M�P�A�N�Y�
If I set the output encoding as follows:
$excel->setOutputEncoding('UTF-8');
the file loads fine, so the problem was not in creating the Excel file, but in reading it.
If I print the variable with echo I get "COMPANY"; if I put the variable in an input as its value I get "C�O�M�P�A�N�Y�".
Setting the output encoding solves the problem, but I would like to know why there is a difference when I put the variable in an input as its value. Thanks.
I am trying to schedule a list of URLs for maintenance mode in SCOM 2007 using PowerShell. I am trying to get the URLs' display names from a text file and pass them as input to the command below; however, it's not working. Can somebody help me with how to pass the display names in the text file as input?
$URLStuff = Get-Content C:\Display.txt
$URLWatcher = (Get-MonitoringClass -name Microsoft.SystemCenter.WebApplication.Perspective) |
Get-MonitoringObject | where {$_.DisplayName -eq $URLStuff}
Get-Content is returning you an array of string objects, one per line found in the file. You need to turn your Where-Object around to search that array for the DisplayName of each object found from SCOM:
$URLWatcher = (Get-MonitoringClass -name Microsoft.SystemCenter.WebApplication.Perspective) |
Get-MonitoringObject | where {$URLStuff -contains $_.DisplayName}
I'm assuming that you've already verified that DisplayName does contain the data you're looking for and will match the contents of Display.txt.
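One extra hedge worth considering: -contains does an exact, whole-string comparison, so blank lines or stray whitespace in Display.txt will silently drop URLs. A small cleanup pass (assuming the file may not be perfectly clean) before the match:
# Trim each line and drop empties so the exact -contains match works
$URLStuff = Get-Content C:\Display.txt |
    ForEach-Object { $_.Trim() } |
    Where-Object { $_ }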
I am using Neo4j in the browser on Ubuntu. I have over 1 million nodes and I want to export them as a CSV file.
When the returned data set is small, as with "match n return n limit 3", there is a big fat "Download CSV" button I can use. But when it comes to a big result set of over 1000 rows, the shell just says "Resultset too large(over 1000 rows)" and the button doesn't show up.
How can I export CSV files for large result sets?
You can also use my shell extensions to export cypher results to CSV.
See here: https://github.com/jexp/neo4j-shell-tools#cypher-import
Just provide an -o output.csv file to the import-cypher command.
Well, I just used the Linux shell to do the whole job:
neo4j-shell -file query.cql | sed 's/|/;/g' > myfile.csv
In my case, I had also to convert from UTF-8 to ISO-8859-1 so I typed:
neo4j-shell -file query.cql | sed 's/|/;/g' | iconv -f UTF-8 -t ISO-8859-1 -o myfile.csv
PS: sed performs the replacement: 's/|/;/g' means substitute (s) every "|" with ";", globally (g), i.e. even when there is more than one per line.
Hope this helps.
We followed the approach mentioned below and it works very well for us; the data is formatted properly as CSV. It uses the import-cypher command from the neo4j shell (https://github.com/jexp/neo4j-shell-tools#cypher-import):
neo4j-sh (?)$ import-cypher -o test.csv MATCH (m:TDSP) return m.name
I know this is an old post, but maybe this will help someone else. For anyone using the Symfony framework, you can make a pretty simple controller to export Neo4j Cypher queries as CSV. This uses the GraphAware Neo4j PHP OGM (https://github.com/graphaware/neo4j-php-ogm) to run raw Cypher queries. I guess this can also easily be implemented without Symfony using only plain PHP.
Just make a form (with Twig if you want):
<form action="{{ path('admin_exportquery') }}" method="get">
Cypher:<br>
<textarea name="query"></textarea><br>
<input type="submit" value="Submit">
</form>
Then set up the route of "admin_exportquery" to point to the controller. And add a controller to handle the export:
public function exportQueryAction(Request $request)
{
    $query = $request->query->get('query');
    $em = $this->get('neo4j.graph_manager')->getClient();
    $response = new StreamedResponse(function() use ($em, $query) {
        $result = $em->getDatabaseDriver()->run($query);
        $handle = fopen('php://output', 'w');
        // Write a UTF-8 BOM so spreadsheet tools detect the encoding
        fputs($handle, "\xEF\xBB\xBF");
        $header = $result->getRecords()[0]->keys();
        fputcsv($handle, $header);
        foreach ($result->getRecords() as $r) {
            fputcsv($handle, $r->values());
        }
        fclose($handle);
    });
    $response->headers->set('Content-Type', 'application/force-download');
    $response->headers->set('Cache-Control', 'no-store, no-cache');
    $response->headers->set('Content-Disposition', 'attachment; filename="export.csv"');
    return $response;
}
This lets you download a CSV with UTF-8 characters directly from your browser and gives you all the freedom of Cypher.
IMPORTANT: This has no query check whatsoever, so it is a very good idea to set up appropriate security or query checking before using it. :)
The cypher-shell (https://neo4j.com/docs/operations-manual/current/tools/cypher-shell/) that is included in the latest versions of Neo4j does this easily:
cat query.cql | cypher-shell > answer.csv
The limit of 1000 rows is due to the browser's MaxRows setting. You can change it to e.g. 10000 and thus be able to download those 10000 rows in one go via the Download CSV button described in the original post. On my laptop, the limit for the download button is somewhere between 10000 and 20000 rows.
By setting the MaxRows limit to 50000 or 300000, I have been able to get the data on screen after waiting a while. Manual select, copy, and paste works. However, doing anything other than closing the browser after each query has not been possible, as the browser becomes very slow.
There is another option: export the data as CSV using the cURL command against the HTTP/HTTPS Neo4j transaction/commit endpoint. Here is an example of how to do that:
curl -H accept:application/json -H content-type:application/json \
-d '{"statements":[{"statement":"MATCH (p1:PROFILES)-[:RELATION]-(p2) RETURN ... LIMIT 4"}]}' \
http://localhost:7474/db/data/transaction/commit \
| jq -r '(.results[0]) | .columns,.data[].row | @csv'
This command uses jq to convert the results to CSV format, so make sure jq is installed on your machine.
Note: you may need to pass an Authorization header for authentication. Just add -H 'Authorization: Basic XXXXX=' to avoid a 401.
Here is a blog post with a detailed explanation: https://neo4j.com/blog/export-csv-from-neo4j-curl-cypher-jq/
I have a command that formats its output as CSV. I have a list of machines this command will run against, using a foreach loop; in the example below, $serverlist is automatically generated by an AD query.
foreach ($server in $serverlist) {
    $outputlist = mycommand
}
What I would like to do is somehow end up with objects from the resulting CSV so I can then select only certain objects for a report. However, the only way I can see to do this is with Import-Csv, which only seems to want to work with files, not variables, i.e.:
Import-Csv output.csv | ft "HostName","TaskName" |
Where-object {$_.TaskName -eq 'Blah'}
I'd like to be able to use Import-Csv with $outputlist instead, but doing this causes Import-Csv to have a hissy fit. :)
Can anyone point me in the right direction on how to achieve this?
Cheers
The command you want is called ConvertFrom-CSV. The syntax is shown below.
NAME
ConvertFrom-CSV
SYNOPSIS
Converts object properties in comma-separated value (CSV) format into CSV
versions of the original objects.
SYNTAX
ConvertFrom-CSV [[-Delimiter] <char>] [-InputObject] <PSObject[]> [-Header <string[]>] [<CommonParameters>]
ConvertFrom-CSV -UseCulture [-InputObject] <PSObject[]> [-Header <string[]>] [<CommonParameters>]
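Applied to the loop in the question, a sketch might look like this (mycommand is the placeholder from the question; converting inside the loop keeps a repeated header row from one server from turning into a data row, and assigning the foreach statement's output avoids overwriting $outputlist on every pass):
# Convert each run's CSV text to objects as it is produced
$outputlist = foreach ($server in $serverlist) {
    mycommand | ConvertFrom-Csv
}

$outputlist |
    Where-Object { $_.TaskName -eq 'Blah' } |
    Select-Object HostName, TaskName
Note that any Format-Table (ft) call has to come after Where-Object: once objects have been through a Format-* cmdlet they are formatting records, and TaskName is no longer there to filter on.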