Sitecore Docker build: Invoke-RemoteScript.ps1 times out

I've been working on this for over two weeks and I've run out of ideas.
Hoping for someone experienced with dockerized Sitecore.
I have a Sitecore Docker (Windows container) build and I'm trying to run a script on it.
Sitecore is built with this tutorial: http://rockpapersitecore.com/2019/10/yet-another-sitecore-docker-series/
Script:
Write-Host "Preparing for DB upgrade"
Import-Module C:\automation\SPE
$session = New-ScriptSession -Username admin -Password b -ConnectionUri http://localhost
$jobId = Invoke-RemoteScript -Session $session -ScriptBlock {
    Install-Package -Path C:\automation\CT_DB_upgrade.zip -InstallMode Merge -MergeMode Merge
} -AsJob
Start-Sleep -s 5
Wait-RemoteScriptSession -Session $session -Id $jobId -Delay 5 -Verbose
Write-Host "CT_DB_upgrade.zip installed"
Stop-ScriptSession -Session $session
Moreover, the script is supposed to update a clean DB with tables for our Sitecore connector.
The script works fine, the tables are added and Sitecore works, but... the script times out.
After "CT_DB_upgrade.zip installed" it runs for about 2 minutes and then times out.
Normally, on a VM, the script runs for about 0.5-1 second and doesn't crash.
PS C:\automation> .\install-ct-db.ps1
Preparing for DB upgrade
VERBOSE: Checking the Runspace for the variable id.
VERBOSE: Preparing to invoke the script against the service at url
http://localhost/-/script/script/?sessionId=2ca051be-195d-4f0a-90b7-d084b9246ca3&rawOutput=False&persistentSession=False
VERBOSE: Transferring script to server
VERBOSE: Script transfer complete.
VERBOSE: Polling job 55ed096e-16ae-49cd-8d20-4d6a4ec219d1. Status : Available.
VERBOSE: Finished polling job 55ed096e-16ae-49cd-8d20-4d6a4ec219d1.
VERBOSE: Checking the Runspace for the variable id.
VERBOSE: Preparing to invoke the script against the service at url
http://localhost/-/script/script/?sessionId=2ca051be-195d-4f0a-90b7-d084b9246ca3&rawOutput=False&persistentSession=False
VERBOSE: Transferring script to server
VERBOSE: Script transfer complete.
CT_DB_upgrade.zip installed
Exception calling "Invoke" with "1" argument(s): "The operation has timed out"
At C:\automation\SPE\ConvertFrom-CliXml.ps1:24 char:9
+ $deserializer = $ctor.Invoke($xr)
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : NotSpecified: (:) [], MethodInvocationException
+ FullyQualifiedErrorId : WebException
Exception calling "InvokeMember" with "5" argument(s): "Non-static method requires a target."
At C:\automation\SPE\ConvertFrom-CliXml.ps1:26 char:16
+ ... while (!$type.InvokeMember("Done", "InvokeMethod,NonPublic,Insta ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : NotSpecified: (:) [], MethodInvocationException
+ FullyQualifiedErrorId : TargetException
As I understand it, it crashes at Stop-ScriptSession -Session $session.
I have tried changing various web.config settings, such as timeouts, but even if I set the timeout to 10 minutes it still times out after around 2 minutes.
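For context, this is the kind of web.config change I tried: the standard ASP.NET request timeout knob. The 600-second value is just an example, not a recommendation:

```xml
<system.web>
  <!-- executionTimeout is in seconds; it is only honored
       when compilation debug="false" -->
  <httpRuntime executionTimeout="600" />
</system.web>
```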
Here is ConvertFrom-CliXml.ps1:
function ConvertFrom-CliXml {
    param(
        [Parameter(Position=0, Mandatory=$true, ValueFromPipeline=$true)]
        [ValidateNotNullOrEmpty()]
        [String[]]$InputObject
    )
    begin
    {
        $OFS = "`n"
        [String]$xmlString = ""
    }
    process
    {
        $xmlString += $InputObject
    }
    end
    {
        # This try/catch ignores the rest of the code and lets the script finish with no error.
        #try {
        $type = [PSObject].Assembly.GetType('System.Management.Automation.Deserializer')
        $ctor = $type.GetConstructor('instance,nonpublic', $null, @([xml.xmlreader]), $null)
        $sr = New-Object System.IO.StringReader $xmlString
        $xr = New-Object System.Xml.XmlTextReader $sr
        $deserializer = $ctor.Invoke($xr)
        $done = $type.GetMethod('Done', [System.Reflection.BindingFlags]'nonpublic,instance')
        while (!$type.InvokeMember("Done", "InvokeMethod,NonPublic,Instance", $null, $deserializer, @()))
        {
            try {
                $type.InvokeMember("Deserialize", "InvokeMethod,NonPublic,Instance", $null, $deserializer, @())
            } catch {
                Write-Warning "Could not deserialize ${xmlString}: $_"
            }
        }
        $xr.Close()
        $sr.Dispose()
        #} catch {
        #    Write-Host "Could not finish script correctly"
        #}
        <#
        .SYNOPSIS
        Short description
        .DESCRIPTION
        Long description
        .PARAMETER InputObject
        Parameter description
        .EXAMPLE
        An example
        .NOTES
        General notes
        #>
    }
}

Related

Failed to create Release artifact directory error after cancelling Release and starting new one

This seems to only happen if I cancel a release deployment and then start a new one. It forces me to go into the agents and manually restart them.
The actual error is:
"Failed to create Release artifact directory 'C:\agent\_work\r3\a'. ---> System.IO.IOException: The process cannot access the file '\\?\C:\agent\_work\r3\a' because it is being used by another process."
Is there a way in TFS to clean up any of these potential issues when creating a new release after a cancelled one? If I let it fully run its course, the new release runs fine no problem. This only happens when I cancel and attempt to start a new one.
You can use a utility like Sysinternals Handle to write a script that releases locked files or folders.
For example:
$pathToRelease = $env:SYSTEM_DEFAULTWORKINGDIRECTORY
Write-Host "$pathToRelease is locked! Trying to kill the owning processes..."
$processes = path\to\handle64.exe -nobanner -accepteula $pathToRelease
# Remove empty lines
$processes = $processes | Where-Object { $_ -ne "" }
$processes.ForEach({ Write-Host $_ })
if ($processes -notmatch "No matching handles found.")
{
    foreach ($process in $processes)
    {
        # Some excluded processes; decide which ones you want to skip
        if ($process -match "explorer.exe" -or $process -match "powershell.exe")
        {
            continue
        }
        $pidNumber = $process.Substring(($process.IndexOf("pid") + 5), 6)
        $isProcessStillAlive = Get-Process | Where-Object { $_.Id -eq $pidNumber }
        if ($null -ne $isProcessStillAlive)
        {
            Stop-Process -Id $pidNumber -Force
            Start-Sleep -Seconds 1
        }
    }
}
else
{
    exit 0
}
Configure the script to run even if the release is canceled.

Unable to start the Jmeter-Server in background in Jenkins pipeline. Getting ConnectException

I have a requirement to implement distributed performance testing, where I may launch multiple slave nodes in parallel when the user count is high. Hence I need to launch master and slave nodes.
I have tried every way I know to start jmeter-server in the background, since it has to keep running on the slave node to receive incoming requests.
But I am still unable to start it in the background.
node(performance) {
    properties([disableConcurrentBuilds()])
    stage('Setup') {
        cleanAndInstall()
        checkout()
    }
    max_instances_to_boot = 1
    for (val in 1..max_instances_to_boot) {
        def instance_id = val
        node_builder[instance_id] = {
            timestamps {
                node(node_label) {
                    stage('Node -> ' + instance_id + ' Launch') {
                        def ipAddr = ''
                        script {
                            ipAddr = sh(script: 'curl http://xxx.xxx.xxx.xxx/latest/meta-data/local-ipv4', returnStdout: true)
                            node_ipAddr.add(ipAddr)
                        }
                        cleanAndInstall()
                        checkout()
                        println "Node IP Address: " + node_ipAddr
                        dir('apache-jmeter/bin') {
                            exec_cmd = "nohup sh jmeter-server -Jserver.rmi.ssl.disable=true -Djava.rmi.server.hostname=$ipAddr > ${env.WORKSPACE}/jmeter-server-nohup.out &"
                            println 'Server Execution Command: ' + exec_cmd
                            sh exec_cmd
                        }
                        sleep time: 1, unit: 'MINUTES'
                        sh """#!/bin/bash
                            echo "============ jmeter-server.log ============"
                            cat jmeter-server.log
                            echo "============ nohup.log ============"
                            cat jmeter-server-nohup.out
                        """
                    }
                }
            }
        }
    }
    parallel node_builder
    stage('Execution') {
        exec_cmd = "apache-jmeter/bin/jmeter -n -t /home/jenkins/workspace/release-performance-tests/test_plans/delights/fd_regression_delight.jmx -e -o /home/jenkins/workspace/release-performance-tests/Performance-Report -l /home/jenkins/workspace/release-performance-tests/JTL-FD-773.jtl -R xx.0.3.210 -Jserver.rmi.ssl.disable=true -Dclient.tries=3"
        println 'Execution Command: ' + exec_cmd
        sh exec_cmd
    }
}
I get the following error:
Error in rconfigure() method java.rmi.ConnectException: Connection refused to host: xx.0.3.210; nested exception is:
java.net.ConnectException: Connection refused (Connection refused)
We're unable to provide an answer without seeing the contents of your nohup.out file, which is supposed to contain your script's output.
Blind shot: by default JMeter uses secure communication between the master and the slaves, so you need a Java keystore containing the certificates necessary for request encryption. The script is create-rmi-keystore.sh, and you need to run it and complete the configuration before starting the JMeter slave.
If you don't need encrypted communication between master and slaves, you can turn this feature off so you won't need to create the keystore. It can be done either by adding the following command-line argument:
-Jserver.rmi.ssl.disable=true
like:
nohup jmeter-server -Jserver.rmi.ssl.disable=true &
or alternatively by adding the next line to the user.properties file (which lives in the "bin" folder of your JMeter installation):
server.rmi.ssl.disable=true
More information:
Configuring JMeter
Apache JMeter Properties Customization Guide
Remote hosts and RMI configuration
This was resolved by adding the following inside the node stage:
JENKINS_NODE_COOKIE=dontKillMe nohup sh jmeter-server -Jserver.rmi.ssl.disable=true -Djava.rmi.server.hostname=xx.xx.xx.xxx > ${env.WORKSPACE}/jmeter-server-nohup.out &

Jenkins Pipeline: no permission to create directories inside script block with .execute()

I want to run an external shell command (for example, git clone) inside a Jenkins pipeline.
I have found 2 ways of doing this.
This one works:
steps {
    sh "git clone --branch $BRANCH --depth 1 --no-single-branch $REMOTE $LOCAL"
}
Downsides:
I only see the output when the complete command has finished, which is annoying if the command takes a long time.
I need to do some Groovy scripting to look up values in a Map, based on parameters chosen by the user who starts the build. I haven't found a way to do that without a script {} block.
A variation is to run a Bash script that runs the git clone command; that also works, but it will get me into trouble when running on Windows nodes.
The next one fails with
fatal: could not create work tree dir 'localFolder'.: Permission denied
steps {
    script {
        def localFolder = new File(products[params.PRODUCT].local)
        if (!localFolder.exists()) {
            def gitCommand = 'git clone --branch ' + params.BRANCH + ' --depth 1 --no-single-branch ' + products[params.PRODUCT].remote + ' ' + localFolder
            runCommand(gitCommand)
        }
    }
}
This is the runCommand() wrapper:
def runCommand = { strList ->
    // Accept either a single String or a List of Strings
    assert ( strList instanceof String ||
             ( strList instanceof List && strList.every { it instanceof String } )
    )
    def proc = strList.execute()
    proc.in.eachLine { line -> println line }
    proc.out.close()
    proc.waitFor()
    print "[INFO] ( "
    if (strList instanceof List) {
        strList.each { print "${it} " }
    } else {
        print strList
    }
    println " )"
    if (proc.exitValue()) {
        println "gave the following error: "
        println "[ERROR] ${proc.getErrorStream()}"
    }
    assert !proc.exitValue()
}
My question is: how come I have permission to create directories when running a sh command, and how come I don't have that permission when I do the same thing inside a script {} block with .execute()?
I'm intentionally using the example of the git clone command to avoid getting answers that don't read the question, like using a dir {} block. If I can't create the git directory, then I can also not create the files inside that directory.
If you want to run any shell commands, use the sh step, not Groovy's process execution. There is one main reason for that: any Groovy code you execute inside the script block gets executed on the master node, and this is (probably) the reason you see this permission denied issue. The sh step executes on the expected node and thus creates a workspace there. When you execute Groovy code that is designed to create a folder in your workspace, it fails, because there is no workspace on the master node.
"1. Except for the steps themselves, all of the Pipeline logic, the Groovy conditionals, loops, etc execute on the master. Whether simple or complex! Even inside a node block!"
Source: https://jenkins.io/blog/2017/02/01/pipeline-scalability-best-practice/#fundamentals
However, there is a solution to that. You can easily combine the sh step with the script block. There is no issue with using any of the existing Jenkins pipeline steps inside the script block. Consider the following example:
steps {
    script {
        def localFolder = products[params.PRODUCT].local
        if (!fileExists(file: localFolder)) {
            sh 'git clone --branch ' + params.BRANCH + ' --depth 1 --no-single-branch ' + products[params.PRODUCT].remote + ' ' + localFolder
        }
    }
}
Keep in mind that this example uses the fileExists step to check whether a file exists in the workspace; there is also a readFile step to read a file's content. Using new File(...) won't work correctly when your workspace is shared between master and slave nodes.
If you want to safely create files in the workspace(s), use writeFile step to make sure that the file is created on the node that executes your pipeline's current stage.
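A minimal sketch of that writeFile/readFile pattern (the file name and content here are purely illustrative):

```groovy
script {
    // writeFile runs on the node executing this stage,
    // so the file lands in that node's workspace
    writeFile file: 'build-info.txt', text: "branch: ${params.BRANCH}"
    // readFile reads from the same workspace
    echo readFile('build-info.txt')
}
```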
A solution to my problem:
Don't bother with showing output as the command progresses; just accept that I will only see it at the end.
Compose the entire command inside a script {} block.
Put a sh statement inside the script {} block.
Like this:
steps {
    script {
        def localFolder = new File(products[params.PRODUCT].local)
        if (!localFolder.exists()) {
            def gitCommand = 'git clone --branch ' + params.BRANCH + ' --depth 1 --no-single-branch ' + products[params.PRODUCT].remote + ' ' + localFolder
            sh gitCommand
        }
    }
}
This still doesn't answer my question about the permission issue, I would still like to know the root cause.

PowerShell workflow doesn't work in Jenkins?

I have a PowerShell script that runs from Jenkins via a PowerShell build step. Outside Jenkins everything works fine, but when I build it with Jenkins I get nothing: no errors, just nothing. What's wrong? Can't Jenkins run PowerShell workflows?
Simple example:
workflow config {
    Param([string[]]$servers, $MaxEnvSize, $MaxMemPerShell)
    $servers = $servers.Trim()
    foreach -parallel -throttlelimit 50 ($server in $servers) {
        if (Test-Connection -ComputerName $server -Quiet -Count 1) {
            inlinescript {
                try {
                    Invoke-Command -ComputerName $using:server -ea Stop -ScriptBlock {
                        Param($MaxEnvSize, $MaxMemPerShell)
                        Set-Item WSMan:\localhost\MaxEnvelopeSizekb -EA Stop -Value $MaxEnvSize
                        Set-Item WSMan:\localhost\Shell\MaxMemoryPerShellMB -EA Stop $MaxMemPerShell
                        Set-Item WSMan:\localhost\Plugin\Microsoft.PowerShell\Quotas\MaxMemoryPerShellMB -EA Stop $MaxMemPerShell
                        #Restart-Service winrm
                    } -ArgumentList $using:MaxEnvSize, $using:MaxMemPerShell
                } catch {
                    "$using:server : $($Error[0].Exception)"
                }
            }
        } else {
            Write-Output "$server no ping"
        }
    }
}
config -Servers $env:servers -MaxEnvSize 16454 -MaxMemPerShell 5192
By default Jenkins uses 32-bit PowerShell, and PowerShell workflow is supported only in 64-bit PowerShell. Trigger the PowerShell script using C:\Windows\Sysnative\WindowsPowerShell\v1.0\powershell.exe, which redirects to the 64-bit PowerShell.
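For example, from an "Execute Windows batch command" build step (the script name config.ps1 is just a placeholder):

```
%WINDIR%\Sysnative\WindowsPowerShell\v1.0\powershell.exe -NoProfile -ExecutionPolicy Bypass -File config.ps1
```

The Sysnative alias only exists for 32-bit processes running on 64-bit Windows; a 64-bit process should use System32 directly.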

How to run a shell script in Groovy with arguments

I'm in the process of translating Jenkins 2 freestyle jobs to pipeline jobs written in Groovy, which I have very little experience in. I can't for the life of me figure out how to get the arguments to run inside of Groovy. Here's the important bit of the script:
stage ('Clean') {
    try {
        notifyBuild('STARTED')
        dir("cloudformation") {
            def list = sh(script: "ls -1 *.template", returnStdout: true)
            for (i in list) {
                sh "aws s3 cp $i s3://URLHERE —expires 1 —cache-control 1"
            }
        }
    } catch (e) {
        // If there was an exception thrown, the build failed
        currentBuild.result = "FAILED"
        throw e
    } finally {
        // Success or failure, always send notifications
        notifyBuild(currentBuild.result)
    }
}
The relevant bit is sh "aws s3 cp $i s3://URLHERE —expires 1 —cache-control 1". Attempting to run this returns the following error:
[cloudformation] Running shell script
+ aws s3 cp e s3://URLHERE —expires 1 —cache-control 1
Unknown options: —expires,1,—cache-control,1
Google has produced little in the way of shell scripts with arguments inside of Groovy. Obviously it's trying to treat each space-delimited chunk as a separate argument; how do I stop that behavior?
Edited to add:
I have tried sh "aws s3 cp $i s3://URLHERE '—expires 1' '—cache-control 1'" which then returns the same error but with Unknown options: —expires 1,—cache-control 1 so I get that I can include spaces by quoting appropriately, but that still leaves the underlying issue.
The cache-control parameter needs two dashes, --cache-control <value>, and so does the expires parameter. Your command uses typographic em dashes (—) instead of double ASCII hyphens (--), which is why the AWS CLI reports them as unknown options.
See the S3 documentation for cp.
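With ASCII double hyphens, the sh line from the question becomes (the URLHERE placeholder and option values are kept verbatim from the question; note that --expires normally takes a timestamp):

```groovy
sh "aws s3 cp $i s3://URLHERE --expires 1 --cache-control 1"
```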
