Maybe someone here can help me out with this. I am trying to convert all XLS files to XLSX/XLSM with PowerShell and Interop. So far, so good. In the next step, I have to adapt the link sources in each file, which works sometimes (also from XLS to XLSX/XLSM).
I don't know why, but sometimes the original worksheet name does not exist in the linked Excel file, which results in a pop-up that the user has to interact with:
I actually don't care much about the sheet; I just want to ignore the message so that the script can continue.
In my code I use the ChangeLink function, like this:
$workbook.ChangeLink($fileLink_old, $fileLink_new)
I have also deactivated all warnings on the Excel object itself, but nothing helps:
$excel.DisplayAlerts = $False
$excel.WarnOnFunctionNameConflict = $False
$excel.AskToUpdateLinks = $False
The most convenient way for me would be to just ignore the pop-up.
Is there a way to do this without going through all the cells myself or modifying the externalLinks/_rels inside the Excel file?
Thanks in advance
Stephan
Edit:
To loop through each cell (not really efficient):
ForEach ($Worksheet in @($workbook.Sheets)) {
    Write-Host $Worksheet.Name
    ForEach ($fileLink in $fileLinks) {
        $worksheetname = $null
        $fl_we = $fileLink.Substring(0, $fileLink.LastIndexOf('.'))
        $found = $Worksheet.Cells.Find($fl_we.Substring(0, $fl_we.LastIndexOf('\')) + '\[' + $fl_we.Substring($fl_we.LastIndexOf('\') + 1))
        if ($found -ne $null) {
            Write-Host "Search $fileLink"
            Write-Host $Worksheet.Cells($found.Row, $found.Column).Formula
            $str_formula = $Worksheet.Cells($found.Row, $found.Column).Formula
            $worksheetname = $str_formula.Substring($str_formula.IndexOf(']') + 1, $str_formula.IndexOf('!') - $str_formula.IndexOf(']') - 2)
            Write-Host $worksheetname -ForegroundColor DarkGray
            # Add worksheets with filename to list
        }
    }
}
# Check if worksheet exists in linked file
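One direction I'm considering (just a rough, untested sketch, reusing $excel, $workbook, $fileLink_old, $fileLink_new and $worksheetname from above): open the new link target myself, collect its sheet names, and only call ChangeLink when the referenced worksheet actually exists, so the prompt never appears. Opening every linked workbook read-only is an assumption on my part and will be slow with many links.
# Open the new link target read-only and without updating its own links
$linkedWorkbook = $excel.Workbooks.Open($fileLink_new, 0, $true)
$linkedSheetNames = @($linkedWorkbook.Worksheets | ForEach-Object { $_.Name })
$linkedWorkbook.Close($false)

# Only retarget the link if the sheet referenced by the old formula still exists in the new file
if ($linkedSheetNames -contains $worksheetname) {
    $workbook.ChangeLink($fileLink_old, $fileLink_new)
} else {
    Write-Host "Sheet '$worksheetname' not found in $fileLink_new, link left untouched" -ForegroundColor Yellow
}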
I have a LARGE number of bookmarks and wanted to export them and share them with a group I work with. The issue is that when I export them, the browser (Firefox) adds ADD_DATE and LAST_MODIFIED fields. I was hoping to just use cut or awk to pull out the fields I want, but the lack of a space before the >(website_name) makes that difficult, and my regex skills are weak.
How do I add a single space before the second-to-last > at the end of each line so that I can use cut or awk to pull out the fields I want into a new file?
Ex: 123456">SecurityTrails would become 123456 >SecurityTrails
Please see below for examples of what I'm working with. Any help is greatly appreciated!
<DT>SecurityTrails
I use Firefox myself. It frequently also embeds favicons into the exported bookmarks.html file via base64 encoding. So to account for the different scenarios (not just the one mentioned by the OP), maybe something like
{mawk/mawk2/gawk} 'BEGIN { FS = "\042" } $1 = $1'
then do whatever cutting you want. That's assuming the OP wanted to keep every bit of it and simply remove the quotation marks.
Now, if the objective is just to pull out the URL plus its name:
{mawk/mawk2/gawk} 'BEGIN { DBLQT = "\042"; FS = "(<A HREF=" DBLQT "|>)" }
/<A HREF=/ {
    url      = substr($2, 1, index($2, DBLQT) - 1);
    sitename = $(NF-1);
    sub(/<\/A$/, "", sitename);
    print url " > " sitename
}'   # or whatever way you want the output to be
I just typed it out with extra verbosity to show what \042 means: the ASCII octal code for a double quote.
I need to read a file where the content is like below:
Computer Location = afp.local/EANG
Description = RED_TXT
Device Name = EANG04W
Domain Name = afp.local
Full Name = Admintech
Hardware Monitoring Type = ASIC2
Last Blocked Application Scan Date = 1420558125
Last Custom Definition Scan Date = 1348087114
Last Hardware Scan Date = 1420533869
Last Policy Sync Date = 1420533623
Last Software Scan Date = 1420533924
Last Update Scan Date = 1420558125
Last Vulnerability Scan Date = 1420558125
LDAP Location = **CN=EANG04W**,OU=EANG,DC=afp,DC=local
Login Name = ADMINTECH
Main Board OEM Name = Dell Inc.
Number of Files = 384091
Primary Owner = **CN= LOUHICHI anoir**,OU=EANG,DC=afp,DC=local
I need to replace CN=$value with CN=Compagny, where $value is whatever is retrieved after CN= and before the comma.
Ok, so you really should have updated your question and not posted the code in a comment, because it's really hard to read. Here's what I think you intended:
$file = 'D:\sources\scripts\2.txt'
$content = Get-Content $file | foreach ($line in $content) {
if ($line.Contains('CN=')) {
$variable = $line.Split(',').Split('=')[2]
$variable1 = $variable -replace $variable, "Compagny"
} Set-Content -path $file
}
That definitely has some syntax errors. The first line is great: you define the path. Then things go wrong... Your call to Get-Content is fine; that will get the contents of the file and send them down the pipe.
You pipe that directly into a ForEach loop, but it's the wrong kind. What you really want there is a ForEach-Object loop (which can be confusing, because it can be shortened to just ForEach when used in a pipeline like this). The ForEach-Object loop does not declare an internal variable (such as ($line in $content)) and instead the scriptblock uses the automatic variable $_. So your loop needs to become something like:
Get-Content $file | ForEach { <do stuff> } | Set-Content
Next let's look inside that loop. You use an If statement to see if the line contains "CN=" - understandable, and functional. If it does, you then split the line on commas, and then again on equals, selecting the third record. Hm, you create an array of strings any time you split one, and you have split a string twice, but you only specify which record of the array you want to work with for the second split. That could be a problem. Anyway, you assign that substring to $variable, and proceed to replace that whole thing with "company" and store the output in $variable1. So there are a couple of issues here. Once you split the string on the commas you have the following array of strings:
"LDAP Location = **CN=EANG04W**"
"OU=EANG"
"DC=afp"
"DC=local"
That's an array with 4 string objects. So then you try to split at least one of those (because you don't specify which one) on the equals sign. You now have an array of 4 arrays, each holding the pieces of its string split on the equals sign:
("LDAP Location", "**CN", "EANG04W**")
("OU", "EANG")
("DC","afp")
("DC","local")
You do specify the third record at this point (arrays in PowerShell start at record 0, so [2] specifies the third record). But you didn't specify which record in the first array, so it's just going to throw errors. Let's say that you actually selected what you really wanted though, and I'm guessing that would be "EANG04W". (By the way, that would be $_.Split(",")[0].Split("=")[2], trailing asterisks and all.) You then assign that to $Variable, and proceed to replace all of it with "Company", so after PowerShell expands the variable it would look like this:
$variable1 = "EANG04W" -replace "EANG04W", "company"
Ok, you just successfully assigned "company" to a variable. And your If statement ends there. You never output anything from inside your If statement, so Set-Content has nothing to set. Also, it would set that nothing for each and every line that is piped to the ForEach statement, re-writing the file each time; fortunately for you the script didn't work, so it didn't erase your file. Plus, since you were trying to pipe to Set-Content, there was no output at the end of the pipeline, so you have assigned absolutely nothing to $content.
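For reference, here is that indexing in isolation; a quick console sketch using the LDAP Location line from your example (the exact strings depend on the spaces in your real file):
# Sample line taken from the question
$line = 'LDAP Location = **CN=EANG04W**,OU=EANG,DC=afp,DC=local'

$line.Split(',')[0]                 # 'LDAP Location = **CN=EANG04W**'
$line.Split(',')[0].Split('=')      # 'LDAP Location ', ' **CN', 'EANG04W**'
$line.Split(',')[0].Split('=')[2]   # 'EANG04W**'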
So let's try and fix it, shall we? First line? Works great! No change. Now, we aren't saving anything in a variable, we just want to update a file's content, so there's no need to have $Content = there. We pipe the Get-Content output into a ForEach loop, just like you tried to do. Once inside the ForEach loop, we're going to do things a bit differently though. The -replace operator performs a RegEx match. We can use that to our advantage here. We will replace the text you are interested in on each line (if it's not found, no replacement is made) and pass each line on down the pipeline. That will look something like this for the inside of the ForEach:
$_ -replace "(<=CN\=).*?(?=,)", "Company"
The breakdown of that RegEx match can be seen here: https://regex101.com/r/gH6hP2/1
But, let's just say that it looks for text that has 'CN=' immediately before it, and goes up to (but not including) the first comma following it. In your example, that includes the two trailing asterisks, but it doesn't touch the leading ones. Is that what you intended? That would make the last line of your example file:
Primary Owner = **CN=Company,OU=EANG,DC=afp,DC=local
Well, if that is as intended, then we have a winner. Now we close out the ForEach loop, and pipe the output to Set-Content and we're all set! Personally, I would highly suggest outputting to a new file, in case you need to reference the original file for some reason later, so that's what I'm going to do.
$file = 'D:\sources\scripts\2.txt'
$newfile = Join-Path (split-path $file) -ChildPath ('Updated-'+(split-path $file -Leaf))
Get-Content $file | ForEach{$_ -replace "(?<=CN\=).*?(?=,)", "Company"} | Set-Content $newfile
Ok, that's it. That code will produce D:\sources\scripts\Updated-2.txt with the following content:
Computer Location = afp.local/EANG
Description = RED_TXT
Device Name = EANG04W
Domain Name = afp.local
Full Name = Admintech
Hardware Monitoring Type = ASIC2
Last Blocked Application Scan Date = 1420558125
Last Custom Definition Scan Date = 1348087114
Last Hardware Scan Date = 1420533869
Last Policy Sync Date = 1420533623
Last Software Scan Date = 1420533924
Last Update Scan Date = 1420558125
Last Vulnerability Scan Date = 1420558125
LDAP Location = **CN=Company,OU=EANG,DC=afp,DC=local
Login Name = ADMINTECH
Main Board OEM Name = Dell Inc.
Number of Files = 384091
Primary Owner = **CN=Company,OU=EANG,DC=afp,DC=local
foreach old_cellname $old_cell_full_name {
    echo $old_cellname >> origin.txt
}
foreach origin $cell_origin {
    echo $origin >> origin.txt
}
foreach new_cellname $new_cell_full_name {
    echo $new_cellname >> origin.txt
}
Using the above code I am able to get the output in origin.txt as the old cell names, followed by their origin numbers, followed by the new cell names. But I want my output as rows, i.e. old cell name, its origin, and new cell name on one line. Is it possible to make these changes? Any help is highly appreciated.
If the lists are of the same length and match up sensibly, yes, of course. Just use a multi-list foreach:
foreach old $old_cell_full_name origin $cell_origin new $new_cell_full_name {
    # echo isn't a standard Tcl command, but I guess this ought to work
    echo "$old\t$origin\t$new" >> origin.txt
}
I assume tab-separated will do. It's pretty convenient since it lets you import the data into a spreadsheet easily. If you prefer commas, use , instead of \t.
I am testing the new Google Spreadsheets, as there is a new feature I really need: the 200-sheet limit has been lifted (more info here: https://support.google.com/drive/answer/3541068).
However, I can't publish a spreadsheet to CSV like you can in the old version. I go to 'File > Publish to the web' and there are no longer options to publish 'all sheets' or certain sheets, and you can't specify cell ranges to publish to CSV, etc.
This limitation is not mentioned in the published 'Unsupported Features' documentation found at: https://support.google.com/drive/answer/3543688
Is there some other way this gets enabled or has it in fact been left out of the new version?
My use case is: we retrieve BigQuery results into the spreadsheets, and we publish the sheets as CSV automatically using the "publish automatically on update" feature, which produces the CSV URL that gets placed into charting tools that read it to generate the visuals.
Does anyone know how to do this?
The new Google spreadsheets use a different URL (just copy your <KEY>):
New sheet : https://docs.google.com/spreadsheets/d/<KEY>/pubhtml
CSV file : https://docs.google.com/spreadsheets/d/<KEY>/export?gid=<GUID>&format=csv
The GUID of your spreadsheet relates to the tab number.
/!\ You have to share your document using the Anyone with the link setting.
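If you need to pull that CSV from a script rather than a browser (for example to feed the charting tools mentioned in the question), here is a minimal PowerShell sketch, assuming the document is shared with "Anyone with the link" and with <KEY> and <GUID> as placeholders you still have to fill in:
# Placeholder URL: substitute your own <KEY> and <GUID>
$csvUrl = 'https://docs.google.com/spreadsheets/d/<KEY>/export?gid=<GUID>&format=csv'

# Save the exported CSV to disk
Invoke-WebRequest -Uri $csvUrl -OutFile 'sheet.csv'

# Or parse it straight into objects
$rows = (Invoke-WebRequest -Uri $csvUrl).Content | ConvertFrom-Csv
$rows | Select-Object -First 5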
Here is the solution, just write it like this:
https://docs.google.com/spreadsheets/d/<KEY>/export?format=csv&id=<KEY>
I know it's weird to write the KEY twice, but it works perfectly. A teammate from work discovered this by opening the Excel file in Google Docs, then File -> Download as -> Comma separated values. Then a link to the CSV file appears in the downloads section of the browser, like this:
https://docs.google.com/spreadsheets/d/<KEY>/export?format=csv&id=<KEY>&gid=<SOME NUMBER>
But it doesn't work in this format; what my friend did was remove "&gid=<SOME NUMBER>", and it worked! Hope it helps everyone.
If you enable "Anyone with the link sharing" for spreadsheet, here is a simple method to get range of cells or columns (or whatever your feel like) export in format of HTML, CSV, XML, JSON via the query:
https://docs.google.com/spreadsheet/tq?key=YOUR-KEY&gid=1&tq=select%20A,%20B&tqx=reqId:1;out:html;%20responseHandler:webQuery
For the tq variable, read the query language reference.
For the tqx variable, read the request format reference.
The downside to this is that your doc is still available in full via the public link, but if you want to export/import data to, say, Excel, this is a perfect way.
It's not going to help everyone, but I've made a PHP script to read the HTML into an array.
I've added converting back to a CSV at the end. Hopefully this will help some people who have access to PHP.
$html_link = "https://docs.google.com/spreadsheets/d/XXXXXXXXXX/pubhtml";
$local_html = "sheets.html";
$file_contents = file_get_contents($html_link);
file_put_contents($local_html,$file_contents);
$dom = new DOMDocument();
$html = @$dom->loadHTMLFile($local_html); //Added a @ to hide warnings - you might remove this when testing
$dom->preserveWhiteSpace = false;
$tables = $dom->getElementsByTagName('table');
$rows = $tables->item(0)->getElementsByTagName('tr');
$cols = $rows->item(0)->getElementsByTagName('td'); //You'll need to edit the (0) to reflect the row that your headers are in.
$row_headers = array();
foreach ($cols as $i => $node) {
    if ($i > 0) $row_headers[] = $node->textContent;
}

foreach ($rows as $i => $row) {
    if ($i == 0) continue;
    $cols = $row->getElementsByTagName('td');
    $row = array();
    foreach ($cols as $j => $node) {
        $row[$row_headers[$j]] = $node->textContent;
    }
    $table[] = $row;
}

//Convert to csv
$csv = "";
foreach ($table as $row_index => $row_details) {
    $comma = false;
    foreach ($row_details as $value) {
        $value_quotes = str_replace('"', '""', $value);
        $csv .= ($comma ? "," : "") . (strpos($value, ",") === false ? $value_quotes : '"' . $value_quotes . '"');
        $comma = true;
    }
    $csv .= "\r\n";
}
//Save to a file and/or output
file_put_contents("result.csv",$csv);
print $csv;
Here is another temporary, non-PHP workaround:
Go to an existing NEW google sheet
Go to "File -> New -> Spreadsheet"
Under "File -> Publish to the web..." now has the option to publish a csv version
I believe this is actually creating an old Google Sheet, but for my purposes (importing Google Sheet data from clients or myself into R for statistical analysis) it works until they hopefully update this feature.
I posted this in a Google Groups forum also, please find it here:
https://productforums.google.com/forum/#!topic/docs/An-nZtjaupU
The correct URL for downloading a Google spreadsheet as CSV is:
https://docs.google.com/spreadsheets/export?id=<ID>&exportFormat=csv
The current answers do not work any longer. The following has worked for me:
Do File -> "Publish to the web", select 'Start publishing', and choose the format. I chose text (which is TSV).
Now just copy the URL there which will be similar to https://docs.google.com/spreadsheet/pub?key=YOUR_KEY&single=true&gid=0&output=txt
That new feature appears to have disappeared. I don't see any option to publish a CSV/TSV version. I can download TSV/CSV with the export, but that's not available to other people with merely the link (it redirects them to a Google Docs sign-in form).
I found a fix! I discovered that old spreadsheets created before this change still allow publishing only certain sheets. So I made a copy of an old spreadsheet, cleared the data out, copied and pasted my current info into it, and now I'm happily publishing just a single sheet of my large spreadsheet. Yay!
I was able to run a query against the result; see this table:
https://docs.google.com/spreadsheets/d/1LhGp12rwqosRHl-_N_N8eTjTwfFsHHIBHUFMMyhLaaY/gviz/tq?tq=select+A,B,I,J,K+where+B%3E=4.5&pli=1
The spreadsheet fetches earthquake data, but I only want magnitude 4.5+ earthquakes, so the query selects those rows and columns. Just one problem:
I cannot parse the result. I tried to decode it as JSON but was not able to parse it.
I would like to be able to get this as HTML or CSV, or learn how to parse it, for example to be able to plot it on a Google Map.
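One idea I have not verified: the tqx parameter from the query answer further up (which shows out:html) might also accept out:csv here. If it does, a rough PowerShell sketch to get the rows into something parseable could look like this:
# Same query, but asking the gviz endpoint for CSV instead of the default wrapped response
$url = 'https://docs.google.com/spreadsheets/d/1LhGp12rwqosRHl-_N_N8eTjTwfFsHHIBHUFMMyhLaaY/gviz/tq?tq=select+A,B,I,J,K+where+B%3E=4.5&tqx=out:csv'
$quakes = (Invoke-WebRequest -Uri $url).Content | ConvertFrom-Csv

# Each row is now an object whose properties could be fed to a map or chart
$quakes | Select-Object -First 5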
I wrote the script below. It works great. However, I've missed an important step: evaluating whether the service name is wrong/misspelled. I've already captured one of these misspellings and would like to add logic to my script to handle it.
Here's my script. Thanks.
Set-ExecutionPolicy -ExecutionPolicy Unrestricted
$Services = Import-Csv '\\wnp6636\d$\Scripts\ProdList(1).csv' | foreach-object {$_}
$ErrorActionPreference = "SilentlyContinue"
foreach ($Serve in $Services) {
    Get-Service -ComputerName $Serve.Server -Name $Serve.Services, $Serve.Services_A, $Serve.Services_B, $Serve.Services_C, `
        $Serve.Services_D, $Serve.Services_E, $Serve.Services_F, $Serve.Services_G, $Serve.Services_H |
        ForEach-Object {
            if ($_.Status -eq "Stopped") {
                Write-Host $Serve.Server $_.DisplayName $_.Status -Fore "Red"
            }
            else {
                Write-Host $Serve.Server $_.DisplayName $_.Status -Fore "Green"
            }
        }
}
I hope I understand what you are wanting...
A switch might be easier to work with?
switch ($_.Status)
{
    "Stopped" { <# whatever code you want when it matches Stopped #> break }
    "Running" { <# whatever code you want when it matches Running #> break }
    Default   { <# whatever code you want when it matches none of the expected results #> break }
}
So get rid of the if and else and feed the switch from your foreach loop (see the sketch at the end of this answer).
You could get as fancy as you want with it. Also, I think $Services is an array; if so, you could just feed the array to a loop containing the Get-Service call and then use the switch.
Hope I have not wasted your time.
Good luck.
Just realized you might need a bit more help with the Default case:
{ Read-Host "No such service found: Please check spelling and try again" }
That should get you there.
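To tie it back to the script in the question, here is a rough sketch of the loop with the switch in place of the if/else (trimmed to a few of the CSV columns for brevity; the colors and Default text are just suggestions):
foreach ($Serve in $Services) {
    Get-Service -ComputerName $Serve.Server -Name $Serve.Services, $Serve.Services_A, $Serve.Services_B |
        ForEach-Object {
            switch ($_.Status) {
                "Stopped" { Write-Host $Serve.Server $_.DisplayName $_.Status -ForegroundColor Red;    break }
                "Running" { Write-Host $Serve.Server $_.DisplayName $_.Status -ForegroundColor Green;  break }
                Default   { Write-Host $Serve.Server $_.DisplayName $_.Status -ForegroundColor Yellow; break }
            }
        }
}
One caveat: with $ErrorActionPreference set to SilentlyContinue, a misspelled service name simply produces no object from Get-Service, so nothing ever reaches the switch for it; to catch those you would still need to compare the names from the CSV against what Get-Service actually returned.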