Importing CSV from a variable instead of a file? - parsing

I have a command that formats its output as CSV. I have a list of machines this command will run against using a foreach loop. In the example below, $serverlist is generated automatically from an AD query.
$outputlist = @()
foreach ($server in $serverlist) {
    $outputlist += mycommand
}
What I would like to do is end up with objects from the resulting CSV so I can select only certain objects for a report. However, the only way I can see to do this is with Import-Csv, which only seems to work with files and not variables, i.e.:
Import-Csv output.csv | Where-Object {$_.TaskName -eq 'Blah'} |
Format-Table "HostName","TaskName"
I'd like to be able to run Import-Csv $outputlist instead, but doing this causes Import-Csv to throw a hissy fit :)
Can anyone point me in the right direction on how to achieve this?
Cheers

The command you want is called ConvertFrom-CSV. The syntax is shown below.
NAME
ConvertFrom-CSV
SYNOPSIS
Converts object properties in comma-separated value (CSV) format into CSV
versions of the original objects.
SYNTAX
ConvertFrom-CSV [[-Delimiter] <char>] [-InputObject] <PSObject[]> [-Header <string[]>] [<CommonParameters>]
ConvertFrom-CSV -UseCulture [-InputObject] <PSObject[]> [-Header <string[]>] [<CommonParameters>]
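For example, a minimal sketch that converts the collected output and then filters it (mycommand and the HostName/TaskName columns are taken from the question; note that if every run of mycommand repeats the CSV header line, you may need to strip the extra headers or supply your own via -Header):

# $outputlist holds the raw CSV text gathered by the loop above
$objects = $outputlist | ConvertFrom-Csv

# The rows are now real objects that can be filtered and formatted
$objects |
    Where-Object { $_.TaskName -eq 'Blah' } |
    Format-Table HostName, TaskName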

Related

Is there a script that can extract particular links from a txt file and write them to another txt file?

I'm looking for a script (and if there isn't one, I guess I'll have to write my own).
I wanted to ask if anyone here knows of a script that can take a txt file with n links (let's say 200). I need to extract only the links that have particular characters in them; let's say I only need links that contain "/r/learnprogramming". I need the script to get those links and write them to another txt file.
Edit: Here is what helped me: grep -i "/r/learnprogramming" 1.txt >2.txt
You can use Ajax to read the .txt file using jQuery:
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<script>
jQuery(function($) {
    console.log("start")
    $.get("https://ayulayol.imfast.io/ajaxads/ajaxads.txt", function(wholeTextFile) {
        var lines = wholeTextFile.split(/\n/),
            randomIndex = Math.floor(Math.random() * lines.length),
            randomLine = lines[randomIndex];
        console.log(randomIndex, randomLine)
        $("#ajax").html(randomLine.replace(/#/g, "<br>"))
    })
})
</script>
<div id="ajax"></div>
If you are using Linux or macOS, you could use cat and grep to output the links:
cat in.txt | grep '/r/learnprogramming' > out.txt
Solution provided by OP:
grep -i "/r/learnprogramming" 1.txt >2.txt
Since you did not provide the exact format of the document, I assume the links are separated by newline characters. In that case the code is pretty straightforward using Python/awk, since you can iterate over file.readlines() and print only the lines that match your pattern (either by checking pattern in line or by using a regex if the pattern is more complex). To store the links in a new file, simply redirect stdout to a new file like this:
python script.py > links.txt
The approach above works even if the links are separated by an arbitrary symbol s: first read the file into a single string, then split it on s. I hope this helps.
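For anyone doing this on Windows, the same filter-and-redirect idea is a short pipeline in PowerShell as well (a sketch; the file names 1.txt and 2.txt are taken from the OP's grep solution):

# Keep only the lines containing the wanted fragment, write them to a new file
Get-Content 1.txt |
    Where-Object { $_ -match '/r/learnprogramming' } |
    Set-Content 2.txt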

Converting text output to object using powershell

I am trying to schedule a list of URLs for maintenance mode in SCOM 2007 using PowerShell. I am reading the URLs' display names from a text file and trying to pass them as input to the command below. However, it's not working. Can somebody help me with how to pass the display names from the text file as input?
$URLStuff = Get-Content C:\Display.txt
$URLWatcher = (Get-MonitoringClass -name Microsoft.SystemCenter.WebApplication.Perspective) |
Get-MonitoringObject | where {$_.DisplayName -eq $URLStuff}
Get-Content returns an array of strings, one per line in the file. You need to turn your Where-Object around so that it searches that array for the DisplayName of each object returned from SCOM:
$URLWatcher = (Get-MonitoringClass -name Microsoft.SystemCenter.WebApplication.Perspective) |
Get-MonitoringObject | where {$URLStuff -contains $_.DisplayName}
I'm assuming you've already verified that DisplayName actually contains the data you're looking for and will match the contents of Display.txt.
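A quick way to check that assumption is to list any entries in Display.txt that have no SCOM counterpart (a sketch, reusing $URLStuff from above):

# Pull just the display names out of SCOM once
$scomNames = (Get-MonitoringClass -name Microsoft.SystemCenter.WebApplication.Perspective) |
    Get-MonitoringObject |
    Select-Object -ExpandProperty DisplayName

# Any line printed here is in Display.txt but unknown to SCOM
$URLStuff | Where-Object { $scomNames -notcontains $_ }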

How to export csv file of large resultset using cypher in Neo4j in the browser?

I am using Neo4j in the browser on Ubuntu. I have over 1 million nodes and I want to export them as a CSV file.
When the returned data set is small, like "match n return n limit 3", there is a big fat "Download CSV" button I can use. But with a big result set, over 1000 rows, the shell just says "Resultset too large (over 1000 rows)" and the button doesn't show up.
How can I export a CSV file for a large result set?
You can also use my shell extensions to export cypher results to CSV.
See here: https://github.com/jexp/neo4j-shell-tools#cypher-import
Just provide an -o output.csv file to the import-cypher command.
Well, I just used the Linux shell to do the whole job.
neo4j-shell -file query.cql | sed 's/|/;/g' > myfile.csv
In my case, I also had to convert from UTF-8 to ISO-8859-1, so I typed:
neo4j-shell -file query.cql | sed 's/|/;/g' | iconv -f UTF-8 -t ISO-8859-1 -o myfile.csv
PS: sed performs the replacement: 's/|/;/g' means substitute (s) every "|" with ";", even when there is more than one per line (g).
Hope this can help.
Regards
We followed the approach below using the shell tools mentioned above. It works very well for us, and the data is formatted properly in CSV:
https://github.com/jexp/neo4j-shell-tools#cypher-import
Using the import-cypher command from the Neo4j shell:
neo4j-sh (?)$ import-cypher -o test.csv MATCH (m:TDSP) return m.name
I know this is an old post, but maybe this will help someone else. For anyone using the Symfony framework, you can make a pretty simple controller to export Neo4j Cypher queries as CSV. This uses the GraphAware Neo4j PHP OGM (https://github.com/graphaware/neo4j-php-ogm) to run raw Cypher queries. I guess this can also easily be implemented in plain PHP without Symfony.
Just make a form (with Twig if you want):
<form action="{{ path('admin_exportquery') }}" method="get">
Cypher:<br>
<textarea name="query"></textarea><br>
<input type="submit" value="Submit">
</form>
Then set up the route of "admin_exportquery" to point to the controller. And add a controller to handle the export:
public function exportQueryAction(Request $request)
{
    $query = $request->query->get('query');
    $em = $this->get('neo4j.graph_manager')->getClient();

    $response = new StreamedResponse(function () use ($em, $query) {
        $result = $em->getDatabaseDriver()->run($query);

        $handle = fopen('php://output', 'w');
        // UTF-8 BOM so spreadsheet tools detect the encoding correctly
        fputs($handle, "\xEF\xBB\xBF");

        // The first record's keys become the CSV header row
        $header = $result->getRecords()[0]->keys();
        fputcsv($handle, $header);

        foreach ($result->getRecords() as $r) {
            fputcsv($handle, $r->values());
        }
        fclose($handle);
    });

    $response->headers->set('Content-Type', 'application/force-download');
    $response->headers->set('Cache-Control', 'no-store, no-cache');
    $response->headers->set('Content-Disposition', 'attachment; filename="export.csv"');

    return $response;
}
This lets you download a CSV with UTF-8 characters directly from your browser and gives you all the freedom of Cypher.
IMPORTANT: This has no query check whatsoever, so it is a very good idea to set up appropriate security or query validation before using it :)
The cypher-shell (https://neo4j.com/docs/operations-manual/current/tools/cypher-shell/) that is included in recent versions of Neo4j does this easily:
cat query.cql | cypher-shell > answer.csv
The 1000-row limit is due to the browser's MaxRows setting. You can change it to e.g. 10000 and thus download those 10000 rows in one go via the Download CSV button described in the original post. On my laptop, the limit for the download button is somewhere between 10000 and 20000 rows.
By setting MaxRows to 50000 or 300000, I have been able to get the data on screen after waiting a while; manual select, copy, and paste then works. However, doing anything other than closing the browser after each such query has not been possible, as the browser becomes very slow.
There is another option: export the data as CSV using the cURL command against the Neo4j HTTP/HTTPS transaction/commit endpoint.
Here is an example of how to do that:
curl -H accept:application/json -H content-type:application/json \
-d '{"statements":[{"statement":"MATCH (p1:PROFILES)-[:RELATION]-(p2) RETURN ... LIMIT 4"}]}' \
http://localhost:7474/db/data/transaction/commit \
| jq -r '(.results[0]) | .columns,.data[].row | @csv'
This command uses jq, so make sure jq is installed on your machine; it is what converts the results to CSV format.
Note: you may need to pass an Authorization header for authentication.
Just pass -H 'Authorization: Basic XXXXX=' to avoid a 401.
Here is a blog post with a detailed explanation:
https://neo4j.com/blog/export-csv-from-neo4j-curl-cypher-jq/

Powershell parse get-winevent into csv with headers including all child objects

I am using Powershell 4 and trying to parse an archived event log into a csv file that includes all of the data and has headers associated with them. The closest I have been able to come is by using the following command:
Get-WinEvent -Path .\Security.evtx |Select-Object TimeCreated, ProviderName, Id, Message, Level, Keyword, UserID, Data, Subject, SubjectUserSid, SubjectUserName, SubjectLogonId, ComputerName | Export-Csv .\Logging.csv
This gives me all the header information for all of the fields in the CSV file, but the only fields that contain data are TimeCreated, ProviderName, Id, Level, and Message. I am trying to get the missing data into columns as well, but am not succeeding. What am I doing wrong here?
This was copied from an edit to the question itself, and should be credited to the original question author
OK, I finally figured it out, at least for what I need to accomplish. Hopefully this will help someone.
Get-WinEvent -Path .\Security.evtx | select TimeCreated, ProviderName, Id, @{n='Message';e={$_.Message -replace '\s+', " "}} | Export-Csv .\Logging.csv
This code lets you export the archived event log to CSV with headers and puts the whole message body into one cell, which allows easy import into a database when you have no other tools to work with.
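If you also need the event-specific fields, the ones that come back empty in the question's attempt, as their own columns, one common approach is to parse each record's XML, since Get-WinEvent exposes the full payload via ToXml(). A sketch, with the caveat that Export-Csv takes its column set from the first object, so a file mixing event IDs with different field names may need to be grouped by Id first:

Get-WinEvent -Path .\Security.evtx | ForEach-Object {
    # Every record carries its full payload as XML
    $xml = [xml]$_.ToXml()
    $row = [ordered]@{
        TimeCreated  = $_.TimeCreated
        ProviderName = $_.ProviderName
        Id           = $_.Id
    }
    # EventData holds the per-event fields; Security events name each Data element
    foreach ($d in $xml.Event.EventData.Data) {
        $row[$d.Name] = $d.'#text'
    }
    [pscustomobject]$row
} | Export-Csv .\Logging.csv -NoTypeInformation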

I cannot get Powershell 2.0 to move a file from one path to another

I need to make changes to the following PowerShell script, but am having a dickens of a time getting the resulting files to write to a different path...let's call it $destPath.
Consider:
Get-ChildItem $sourcePath | % { [system.io.file]::Move($_.fullname, ($_.FullName -replace '\[|\]|-|,|\(|\)', '') ) }
Based on my understanding of the move syntax, $_.FullName is my original file, and $_.FullName -replace ... is the new filename. However, when I try to use $destPath.FullName -replace, I get an error that an empty filename is not legal. Obviously, PowerShell is not recognizing that as a valid pathname for the move command.
What am I missing?
Since you did not mention $destPath in context: maybe you did not define the variable $destPath at all? It is quite unlikely, but I am just trying to narrow things down.
Another way to make this work: rename the child items first at the original location, then move them.
get-childitem *.txt | rename-item -newname {$_.name -replace 'o','O'}
get-childitem *.txt | % {move-item -path $_.fullname -destination .\Test-files}
But I prefer your one-liner :)
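For the original goal, stripping the characters and landing the files in $destPath in a single pass, here is a sketch (assuming $destPath already exists; -LiteralPath sidesteps the wildcard trouble the square brackets would otherwise cause, which is likely why the original used [System.IO.File]::Move):

Get-ChildItem $sourcePath | ForEach-Object {
    # Strip the unwanted characters from the name only, not from the whole path
    $newName = $_.Name -replace '\[|\]|-|,|\(|\)', ''
    Move-Item -LiteralPath $_.FullName -Destination (Join-Path $destPath $newName)
}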
