Redirect PowerShell output into .ps1 - powershell-2.0

I use PowerShell to invoke a SQL stored procedure, and now I want to redirect the complete output into a .ps1 file, because each output line is executable in PowerShell.
I'm trying to use >output.ps1. It works, but when I check the output file, it contains a lot of '...' in place of the real output.
How can I export the complete output, and also strip the header off?

It depends how you invoke the stored procedure. If you're invoking it within PowerShell, you should be able to collect the output, so I assume you're starting it as a separate task in its own window. Without your actual example, here's a way to collect the output of the tasklist.exe command. You may find it applicable.
cls
$exe = 'c:\Windows\System32\tasklist.exe'
$processArgs = '/NH'
$workingDir = 'c:\Windows\System32'   # working directory for the launched process
try {
    Write-Host ("Launching '$exe $processArgs'")
    $info = New-Object System.Diagnostics.ProcessStartInfo
    $info.UseShellExecute = $false
    $info.RedirectStandardError = $true
    $info.RedirectStandardOutput = $true
    $info.RedirectStandardInput = $true
    $info.WindowStyle = [System.Diagnostics.ProcessWindowStyle]::Hidden
    $info.CreateNoWindow = $true
    $info.ErrorDialog = $false
    $info.WorkingDirectory = $workingDir
    $info.FileName = $exe
    $info.Arguments = $processArgs
    $process = [System.Diagnostics.Process]::Start($info)
    Write-Host ("Launched $($process.Id) at $(Get-Date)")
    <#
    $process.StandardOutput.ReadToEnd() is a synchronous read. You cannot sync-read both the output and error streams.
    $process.BeginOutputReadLine() is an async read. You can do as many of these as you'd like.
    Either way, you must finish reading before calling $process.WaitForExit().
    http://msdn.microsoft.com/en-us/library/system.diagnostics.processstartinfo.redirectstandardoutput.aspx
    #>
    $output = $process.StandardOutput.ReadToEnd()
    $process.WaitForExit() | Out-Null
    Write-Host ("Exited at $(Get-Date)`n$output")
} catch {
    Write-Host ("Failed to launch '$exe $processArgs'")
    Write-Host ("Failure due to $_")
}
$output
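If you are instead invoking the procedure from within PowerShell, the '...' usually comes from Format-Table truncating wide columns when output is redirected to a file. A minimal sketch, assuming the SQL Server snap-in's Invoke-Sqlcmd cmdlet is available and that the procedure returns the executable text in its first column (the server, database, and procedure names here are hypothetical):
# Hypothetical names; adjust to your environment.
$rows = Invoke-Sqlcmd -ServerInstance 'MyServer' -Database 'MyDb' -Query 'EXEC dbo.MyProc'
# Emit the raw column values so no table header appears,
# and write at a generous width so nothing is shortened to '...'.
$rows | ForEach-Object { $_[0] } | Out-File -FilePath output.ps1 -Width 4096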


How to evaluate an external command to a Nix value?

I want to parse a file into a Nix list value inside flake.nix.
I have a shell script which does that:
perl -007 -nE 'say for m{[(]use-package \s* ([a-z-0-9]+) \s* (?!:nodep)}xsgm' init.el
How can I execute an external command while evaluating flake.nix?
programs.emacs = {
enable = true;
extraConfig = builtins.readFile ./init.el;
extraPackages = elpa: (shellCommandToParseFile ./init.el); # Runs shell script
};
You can run this the same way you perform any other impure step in Nix: with a derivation.
This might look vaguely like:
programs.emacs = {
  enable = true;
  extraConfig = ../init.el;
  extraPackages = elpa:
    let
      packageListNix =
        pkgs.runCommand "init-packages.nix" { input = ../init.el; } ''
          ${pkgs.perl}/bin/perl -007 -nE '
            BEGIN {
              say "{elpa, ...}: with elpa; [";
              say "use-package";
            };
            END { say "]" };
            while (m{[(]use-package \s* ([a-z-0-9]+) \s* (;\S+)?}xsgm) {
              next if $2 eq ";builtin";
              say $1;
            }' "$input" >"$out"
        '';
    in (import "${packageListNix}" { inherit elpa; });
};
...assuming that, given the contents of your ./init.el, the generated init-packages.nix is actually valid Nix source code.
That said, note that like any other derivation (that isn't either fixed-output or explicitly impure), this happens inside a sandbox with no network access. If the goal of init.el is to connect to a network resource, you should be committing its output to your repository. A major design goal of flakes is to remove impurities; they're not suitable for impure derivations.

File.exists() in a Jenkins Groovy file does not work

I want to create a Groovy function in my Jenkins job that looks into a folder and deletes all files that are older than X days.
So I started looking on the internet and found different kinds of solutions.
First I created a .groovy file with Visual Studio Code on my local PC to understand how it works. That is why my code does not look similar to the code on the internet: I changed it so that I understand how it works.
def deleteFilesOlderThanDays(int daysBack, String path) {
    def DAY_IN_MILLIS = 24 * 60 * 60 * 1000
    File directory = new File(path)
    if(directory.exists()) {
        File[] listFiles = directory.listFiles()
        for(File listFile : listFiles) {
            def days_from_now = ((System.currentTimeMillis() - listFile.lastModified()) / DAY_IN_MILLIS)
            if(days_from_now > daysBack) {
                println('------------')
                println('file is older')
                println(listFile)
            }
            else {
                println('------------')
                println('File is not older')
                println(listFile)
            }
        } // End: for(File listFile : listFiles)
    } // End: if(directory.exists())
}
(I know the code does not delete anything; it is only for my understanding.)
The second step was to include this newly created function into my Jenkins Groovy file. But since then I have been desperate.
The problem is that the check at the beginning of the code never gives a positive result, even though the folder really exists.
The line:
if(directory.exists()){
causes me a lot of problems, and it is not clear to me why.
I have tried so many versions, but I have not found a solution.
I have also used the "Pipeline Syntax" example [Sample Step fileExists], but it does not help me.
I have included:
import java.io.File
at the beginning of my file.
I have a basic file which I include in the Jenkins job. This file includes my library files; one of these library files is file.groovy. In the basic Jenkins file I execute the function file.deleteFilesOlderThanDays() (for testing I do not use any parameters).
The code from my function for testing is:
def deleteFilesOlderThanDays() {
    dir = '.\\ABC'
    echo "1. ----------------------------------------"
    File directory1 = new File('.\\ABC\\')
    exist = directory1.exists()
    echo 'Directory1 name is = '+directory1
    echo 'exist value is = '+exist
    echo "2. ----------------------------------------"
    File directory2 = new File('.\\ABC')
    exist = directory2.exists()
    echo 'Directory2 name is = '+directory2
    echo 'exist value is = '+exist
    echo "3. ----------------------------------------"
    File directory3 = new File(dir)
    exist = directory3.exists()
    echo 'Directory3 name is = '+directory3
    echo 'exist value is = '+exist
    echo "4. Pipeline Syntax ------------------------"
    exist = fileExists '.\\ABC'
    echo 'exist value is = '+exist
    echo "5. ----------------------------------------"
    File directory5 = new File(dir)
    echo 'Directory5 name is = '+directory5
    // throws an error:
    // exist = fileExists(directory5)
    exist = fileExists "directory5"
    echo 'exist value is = '+exist
    echo "6. ----------------------------------------"
    exist = fileExists(dir)
    echo 'exist value is = '+exist
    File[] listFiles = directory5.listFiles()
    echo 'List file = '+listFiles
}
And the output in the Jenkins console is (I cleaned it up a little):
1. ----------------------------------------
Directory1 name is = .\ABC\
exist value is = false
2. ----------------------------------------
Directory2 name is = .\ABC
exist value is = false
3. ----------------------------------------
Directory3 name is = .\ABC
exist value is = false
4. Pipeline Syntax ------------------------
exist value is = true
5. ----------------------------------------
Directory5 name is = .\ABC
exist value is = false
6. ----------------------------------------
exist value is = true
List file = null
I only get a true value in steps 4 and 6, so I can be sure that the folder really exists.
So it seems to me that the command:
File directory = new File(dir)
does not work correctly in my case.
I cannot create a listFiles variable because the directory is not initialized correctly.
It is also not clear to me which kind of commands I should use. The Groovy examples always use methods like:
.exists()
But in the Jenkins examples I always find code like this:
fileExists()
Why are there differences between Groovy and Jenkins Groovy style? They should be the same, or not?
Does anyone have an idea for me, or can anyone tell me what I am doing wrong?
You may benefit from this answer from a similar question:
"java.io.File methods will refer to files on the master where Jenkins is running, so not in the current workspace on the slave machine. To refer to files on the slave machine, you should use the readFile method."
def dir = readFile("${WORKSPACE}/ABC");
Link to original answer
Thanks for all the feedback.
OK, it is now clear to me that Jenkins Groovy != Groovy.
I have read a lot about there being different commands depending on whether you are executing a file search on the Jenkins master or on a slave.
The suggestion from Youg to start after confirming the folder exists helped me.
I had problems with deleting the files, so in the end I used a primitive batch command to get my function running.
The final function now looks like this:
def deleteFilesOlderThanXDays(daysBack, path) {
    def DAY_IN_MILLIS = 24 * 60 * 60 * 1000
    if(fileExists(path)) {
        // change into path
        dir(path) {
            // find all kinds of files
            files = findFiles(glob: '*.*')
            for (int i = 0; i < files.length; i++) {
                def days_from_now = ((System.currentTimeMillis() - files[i].lastModified) / DAY_IN_MILLIS)
                if(days_from_now > daysBack) {
                    echo('file : >>'+files[i].name+'<< is older than '+daysBack+' days')
                    bat('del /F /Q "'+files[i].name+'"')
                }
                else {
                    echo('file : >>'+files[i].name+'<< is not older than '+daysBack+' days')
                }
            } // End: for (int i = 0; i < files.length; i++)
        } // End: dir(path)
    } // End: if(fileExists(path))
}
You can add the script below to list the files and folders in the current working directory, so that you can confirm whether the folder ABC exists or not.
After you confirm that the ABC folder exists, dig into the rest of the code.
def deleteFilesOlderThanDays() {
    // print current work directory
    pwd
    // if the Jenkins job runs on a Windows machine
    bat 'dir'
    // if the Jenkins job runs on a Linux machine
    sh 'ls -l'
    dir = '.\\ABC'
    echo "1. ----------------------------------------"
    .....
For fileExists usage, I think the correct way is as follows:
fileExists './ABC'
def dir = './ABC'
fileExists dir
You should use / as the path separator, rather than \, according to its documentation.

How can I build custom rules using the output of workspace_status_command?

The bazel build flag --workspace_status_command supports calling a script to retrieve e.g. repository metadata; this is also known as build stamping and is available in rules like java_binary.
I'd like to create a custom rule using this metadata.
I want to use this for a common support function. It should receive the git version and some other attributes and create a version.go output file usable as a dependency.
So I started a journey looking at rules in various bazel repositories.
Rules like rules_docker support stamping with stamp in container_image and let you reference the status output in attributes.
rules_go supports it in the x_defs attribute of go_binary.
This would be ideal for my purpose and I dug in...
It looks like I can get what I want with ctx.actions.expand_template, using the entries in ctx.info_file or ctx.version_file as a dictionary for substitutions. But I didn't figure out how to get a dictionary out of those files. And those two files seem to be "unofficial"; they are not part of the ctx documentation.
Building on what I found out already: How do I get a dict based on the status command output?
If that's not possible, what is the shortest/simplest way to access workspace_status_command output from custom rules?
I've been exactly where you are, and I ended up following the path you've started exploring. I generate a JSON description that also includes information collected from git to package with the result, and I ended up doing something like this:
def _build_mft_impl(ctx):
    args = ctx.actions.args()
    args.add('-f')
    args.add(ctx.info_file)
    args.add('-i')
    args.add(ctx.files.src)
    args.add('-o')
    args.add(ctx.outputs.out)
    ctx.actions.run(
        outputs = [ctx.outputs.out],
        inputs = ctx.files.src + [ctx.info_file],
        arguments = [args],
        progress_message = "Generating manifest: " + ctx.label.name,
        executable = ctx.executable._expand_template,
    )

def _get_mft_outputs(src):
    return {"out": src.name[:-len(".tmpl")]}

build_manifest = rule(
    implementation = _build_mft_impl,
    attrs = {
        "src": attr.label(mandatory=True,
                          allow_single_file=[".json.tmpl", ".json_tmpl"]),
        "_expand_template": attr.label(default=Label("//:expand_template"),
                                       executable=True,
                                       cfg="host"),
    },
    outputs = _get_mft_outputs,
)
//:expand_template is a label in my case pointing to a py_binary that performs the transformation itself. I'd be happy to learn about a better (more native, fewer hops) way of doing this, but (for now) I went with what works. A few comments on the approach and your concerns:
AFAIK you cannot read in the file and perform operations on it in Skylark itself...
...speaking of which, it's probably not a bad thing to keep the transformation (tool) and the build description (bazel) separate anyway.
It could be debated what constitutes the official documentation, but while ctx.info_file may not appear in the reference manual, it is documented in the source tree. :) That is the case for other areas as well (and I hope that is not because those interfaces are considered not committed to yet).
For the sake of completeness, in src/main/java/com/google/devtools/build/lib/skylarkbuildapi/SkylarkRuleContextApi.java there is:
@SkylarkCallable(
    name = "info_file",
    structField = true,
    documented = false,
    doc =
        "Returns the file that is used to hold the non-volatile workspace status for the "
            + "current build request."
)
public FileApi getStableWorkspaceStatus() throws InterruptedException, EvalException;
EDIT: a few extra details, as asked for in the comment.
In my workspace_status.sh I would, for instance, have the following line:
echo STABLE_GIT_REF $(git log -1 --pretty=format:%H)
In my .json.tmpl file I would then have:
"ref": "${STABLE_GIT_REF}",
I've opted for shell-like notation for the text to be replaced, since it's intuitive for many users as well as easy to match.
As for the replacement, relevant (CLI kept out of this) portion of the actual code would be:
import re

def get_map(val_file):
    """
    Return a dictionary of key/value pairs from ``val_file``.
    """
    value_map = {}
    for line in val_file:
        (key, value) = line.split(' ', 1)
        value_map[key] = value.rstrip('\n')
    return value_map

def expand_template(val_file, in_file, out_file):
    """
    Read each line from ``in_file`` and write it to ``out_file``, replacing all
    ${KEY} references with values from ``val_file``.
    """
    def _substitute_variable(mobj):
        return value_map[mobj.group('var')]

    re_pat = re.compile(r'\${(?P<var>[^} ]+)}')
    value_map = get_map(val_file)
    for line in in_file:
        out_file.write(re_pat.subn(_substitute_variable, line)[0])
EDIT2: This is how I expose the Python script to the rest of bazel:
py_binary(
    name = "expand_template",
    main = "expand_template.py",
    srcs = ["expand_template.py"],
    visibility = ["//visibility:public"],
)
Building on Ondrej's answer, I now use something like this (adapted in the SO editor, might contain small errors):
tools/bazel.rc:
build --workspace_status_command=tools/workspace_status.sh
tools/workspace_status.sh:
echo STABLE_GIT_REF $(git rev-parse HEAD)
version.bzl:
_VERSION_TEMPLATE_SH = """
set -e -u -o pipefail
while read line; do
  # split each status line on its first space: key before it, value after it
  export "${line%% *}"="${line#* }"
done <"$INFILE" \
  && cat <<EOF >"$OUTFILE"
{ "ref": "${STABLE_GIT_REF}"
, "service": "${SERVICE_NAME}"
}
EOF
"""
def _commit_info_impl(ctx):
    ctx.actions.run_shell(
        outputs = [ctx.outputs.outfile],
        inputs = [ctx.info_file],
        progress_message = "Generating version file: " + ctx.label.name,
        command = _VERSION_TEMPLATE_SH,
        env = {
            'INFILE': ctx.info_file.path,
            'OUTFILE': ctx.outputs.outfile.path,
            'SERVICE_NAME': ctx.attr.service,
        },
    )

commit_info = rule(
    implementation = _commit_info_impl,
    attrs = {
        'service': attr.string(
            mandatory = True,
            doc = 'name of versioned service',
        ),
    },
    outputs = {
        'outfile': 'manifest.json',
    },
)

Reading connection string from environment variable using PowerShell

I have an ASP.NET MVC application whose database connection string is placed in the environment variable DbConn as
Min Pool Size=20;Application Name=***;initial catalog=*****;server=***;Integrated Security=True;
Now my question is how to read this connection string using a PowerShell script. I know how to read the entire connection string (Get-ChildItem Env:DBConn), but I need a portion of it, like the initial catalog, server, etc.
Please guide me in resolving this.
You could parse and tokenize the connection string with -split, Trim() and a hashtable:
$DBConnValues = @{}
$env:DBConn.Trim(";") -split ";" | ForEach-Object {
    # split on the first '=' only, in case a value itself contains '='
    $key,$value = $_ -split "=",2
    $DBConnValues[$key] = $value
}
Now you can access each property by name:
PS C:\> $DBConnValues["Server"]
ServerName
This would be a perfect case for ConvertFrom-StringData so that you can make it into a custom object.
$props = $env:DBConn -replace ";","`r`n" | ConvertFrom-StringData
$dbConn = New-Object -TypeName psobject -Property $props
ConvertFrom-StringData wants a string where each line is a key-value pair. We get just that by replacing the semicolons with newlines. It returns a hashtable, which we feed to New-Object to get a custom PowerShell object. Then you can use the new object like you would any other in PowerShell.
$dbConn."Min Pool Size"
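With either approach you can then read off the individual settings the question asks about; the key names must match those in the connection string exactly. A usage sketch, assuming DbConn contains the keys shown in the question:
$dbConn."initial catalog"              # the database name
$dbConn.server                         # the server name
$DBConnValues["Integrated Security"]   # from the hashtable variant above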

How to Add hookups to Powershell script

I have a PowerShell script called PostPro.ps1. I would like to provide hooks for this script so that, if needed, one can add functionality before and after the execution of the PostPro.ps1 script.
Another way, with parameters:
postpro.ps1:
[CmdletBinding()]
Param(
    [ScriptBlock]$before,
    [ScriptBlock]$after
)
if($before -ne $null){
    Invoke-Command $before
}
Write-Host "hello"
if($after -ne $null){
    Invoke-Command $after
}
Then one can provide script blocks to execute:
$b={write-host "before"}
$a={write-host 'after' }
PS>.\postpro.ps1 -before $b -after $a
before
hello
after
One way to do this would be to use modules: put all of your extension functions in modules in a certain folder with a certain name format, where each module provides a runBefore and a runAfter function.
In your PostPro.ps1 script you can load the modules like this:
$modules = ls $(Join-Path $hookDir "postPro-extension-*.psm1") |
    % { Import-Module $_.FullName -AsCustomObject }
This will load all of the files in $hookDir that have a name like postPro-extension-doSomething.psm1. Each module will be stored in an object that gives you access to that module's functions. To run the functions you can just call them on each object, as shown below.
You can go like this before the main part of the script
$modules | % { $_.runBefore() }
and this after the main part of the script
$modules | % { $_.runAfter() }
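For illustration, here is a minimal hypothetical hook module following that naming convention (saved, say, as postPro-extension-logging.psm1 in $hookDir; the file name and messages are assumptions, only the runBefore/runAfter convention comes from above):
# postPro-extension-logging.psm1 -- a hypothetical extension module
function runBefore {
    Write-Host "logging hook: before the main script"
}
function runAfter {
    Write-Host "logging hook: after the main script"
}
Export-ModuleMember -Function runBefore, runAfter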
