Deploying an OVF/OVA file - esxi

Deploying an OVF/OVA file to a remote ESXi server
I am trying to deploy an OVF/OVA file to a remote ESXi server.
I want to do this from the command line.
I have written a simple batch file which deploys the ovf using
ovftool.exe.
Here is my batch file:
@echo off
CLS
set OVF_COMMAND="C:\Program Files (x86)\VMware\VMware Workstation\OVFTool\ovftool.exe"
set OVF_DEPLOY_OFF=ovftool
IF NOT EXIST %OVF_COMMAND% (
    echo ovftool does not exist at:
    echo %OVF_COMMAND%
    pause
)
echo START OF THE BATCH SCRIPT
echo #######################################################################
%OVF_DEPLOY_OFF% --noSSLVerify --disableVerification --skipManifestGeneration C:\Newfolder\vAppTS2\vAppTS2.ovf vi://administrator:jim@141.192.91.124/nrtms-training/host/141.192.91.9/
echo #######################################################################
This works fine, but it is too slow. The OVF file comprises one vApp with one VM; when everything is done the vApp will contain about 9 VMs.
It takes about 20 minutes to deploy the current vApp, which contains only one VM. I cannot imagine how long it will take to deploy a vApp with 9 VMs.
Is there a way to make it faster?
Cheers.

I have managed to find a workaround.
Instead of importing an OVF file from some remote location, I clone the vApp from a predefined resource pool.
So, at the beginning, I created a resource pool into which I uploaded the vApp.
# connect to the server
Connect-VIServer -Server $args[2].ToString() -User $args[3] -Password $args[4]

# search for the vApp to move into the new resource pool.
# The name of the vApp is given as an argument to the PowerCLI script
# and must be one of the existing vApps.
foreach ($vApps in (Get-VApp))
{
    if ($vApps.Name -eq $args[0])
    {
        # define source and destination hosts
        $vmHost_dest = Get-VMHost -Name "100.106.37.10"
        $vmHost_source = Get-VMHost -Name "100.106.37.9"
        # create a resource pool on the destination host
        $myDestinationRP = New-ResourcePool -Name "datastore-13" -Location $vmHost_dest
        New-VApp -Name MyVApp2 -VApp $vApps -Location $myDestinationRP
    }
}
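The script can then be called with the vApp name and the connection details as positional arguments, for example (a sketch: the script file name is made up, the argument positions simply match the $args indexes used above, $args[1] is not used by this part, and the VMware PowerCLI module must be installed so Connect-VIServer is available):
# args: [0] = vApp name, [1] = (unused here), [2] = server, [3] = user, [4] = password
powershell.exe -File .\Clone-VAppToPool.ps1 vAppTS2 unused 141.192.91.124 administrator mypassword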
So I can build a custom vApp and store it in a specific resource pool, from where I can clone it later on as I please.
If I want to remove the newly cloned vApp I can do it as follows:
Get-VApp $vApps | Remove-VApp -Confirm:$false
Hope this helps

Related

Is it possible to automatically save playlists (to files) in Rhythmbox

Besides the question in the title, I would like to explain my motivation; maybe there is another solution for my situation.
I work at different stations of a small local network. I usually work at station 3, where I listen to music while I work and where I add new songs to my playlists.
If, for a couple of days, I have to work at station 5, I would like to listen to music saved in one of my playlists. In order to do so, I have to save the playlist to a file on station 3 and then import it on station 5, but sometimes I forget to do it, and when I'm already at station 5 I have to go back to station 3 and save the playlist.
So, one part is the question asked in the title, and another would be how to automatically update or import the saved playlist (on station 5, or any other).
Thanks.
OK, here is how I solved my issue. First I have to explain how my network is set up:
There are 5 computers in the network. Station 1 is the "file server", providing this service via NFS (all computers in the network run Linux). Stations 2 to 5 mount directories as set in the /etc/fstab file, for example:
# File server
fileserv:/home/REMOTEUSER/Documents /home/LOCALUSER/Documents nfs4 rsize=8192,wsize=8192,timeo=14,intr,_netdev 0 0
fileserv:/home/REMOTEUSER/Music /home/LOCALUSER/Music nfs4 rsize=8192,wsize=8192,timeo=14,intr,_netdev 0 0
fileserv:/home/REMOTEUSER/Video /home/LOCALUSER/Video nfs4 rsize=8192,wsize=8192,timeo=14,intr,_netdev 0 0
fileserv:/home/REMOTEUSER/Downloads /home/LOCALUSER/Downloads nfs4 rsize=8192,wsize=8192,timeo=14,intr,_netdev 0 0
fileserv:/home/REMOTEUSER/Images /home/LOCALUSER/Images nfs4 rsize=8192,wsize=8192,timeo=14,intr,_netdev 0 0
NOTE: if you don't have your server in the /etc/hosts file you can use the ip instead, like:
192.168.1.1:/home/REMOTEUSER/Documents /home/LOCALUSER/Documents nfs4 rsize=8192,wsize=8192,timeo=14,intr,_netdev 0 0
etc...
With the above in mind, on station 3 I set up an hourly cron job that runs the following command. (I could not find a way to execute a script on logout; I usually just power off the machine, which does not run the script. If I put the script in rc6.d, the problem is that station 3's root user is not allowed on station 1 (the file server), and the "local user" of station 3 is already logged out.)
crontab -l
# m h dom mon dow command
0 * * * * cp /home/USER/.local/share/rhythmbox/playlists.xml /home/USER/Documents/USER/musiclists/
To recover the playlists from station 3, I created the following script on station 5:
File: .RhythmboxPlaylists.sh
#!/bin/sh
### Modify variables as needed
REMUS="USER" #Remote user
LOCUS="USER" #Local user
### Rhythmbox play list location saved from station 3
ORIGPL="/home/$LOCUS/Documents/$LOCUS/musiclists/playlists.xml"
#### Local Rhythmbox play list location
DESTPL="/home/$LOCUS/.local/share/rhythmbox/playlists.xml"
### DO NOT MODIFY FROM THIS LINE DOWN
sed -i "s/home\/$REMUS\//home\/$LOCUS\//g" $ORIGPL
mv $ORIGPL $DESTPL
Set the file as executable:
chmod +x .RhythmboxPlaylists.sh
Add the following line:
sh $HOME/.RhythmboxPlaylists.sh
at the end of .bashrc so it runs at user login (save .bashrc).
Then, when I open Rhythmbox in station 5 I have the same playlists with the same songs as in station 3.
I finally came up with a partial solution. It is partial because it covers only the "automatically saving Rhythmbox playlists to files" part. I still don't know how to automatically load playlists from files into Rhythmbox. Let's see the script I've created (which you can run either at startup or at shutdown of your system):
File: playlist.sh
#!/bin/sh
#Variables [Replace USER with your Linux user and set playlistDir to wherever suits you best]
playlistXml="/home/USER/.local/share/rhythmbox/playlists.xml"
playlistDir="/home/USER/musiclists"
# Create a file per list
xmlstarlet sel -t -v 'rhythmdb-playlists/playlist/@name' -nl "$playlistXml" |
while read name; do
xmlstarlet sel -t --var name="'$name'" -v 'rhythmdb-playlists/playlist[@name = $name]' "$playlistXml" > "$playlistDir/$name.pls"
#Delete empty lines from generated files
sed -i "/^$/d" "$playlistDir/$name.pls"
#Add line numbers to number the entries
cat -n "$playlistDir/$name.pls" > tmp
mv tmp "$playlistDir/$name.pls"
#Add file header
songs=$(wc -l < "$playlistDir/$name.pls")
sed -i "1i \[playlist\]\nX-GNOME-Title=$name\nNumberOfEntries=$songs" "$playlistDir/$name.pls"
done
#Format playlist
sed -i -r "s/^\s+([0-9]+)\s+file:(.*)$/File\1=file:\2\nTitle\1=/g" $playlistDir/*.pls
Set the file as executable: chmod +x playlist.sh
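For reference, a generated file such as $playlistDir/MyList.pls ends up looking roughly like this (hypothetical song paths, just to illustrate the header and the File/Title lines the script writes):
[playlist]
X-GNOME-Title=MyList
NumberOfEntries=2
File1=file:///home/USER/Music/song1.mp3
Title1=
File2=file:///home/USER/Music/song2.mp3
Title2=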
I have implemented another, user-based solution. For this to work you need to log into the different workstations as the same user.
Close Rhythmbox on the stations/users involved.
In the user directory located on the file server create a new subdirectory, let's call it rhythmbox.
Inside the newly created rhythmbox subdirectory, create two new subdirectories, cache and share.
From the workstation where you usually manage Rhythmbox, that is, where you create and maintain playlists, move the Rhythmbox cache to the file server cache directory:
# mv $HOME/.cache/rhythmbox //file-server/home/USER/rhythmbox/cache/
Move the Rhythmbox shared directory to the file server:
# mv $HOME/.local/share/rhythmbox //file-server/home/USER/rhythmbox/share/
Where the original directories were, create symbolic links:
a1. # cd $HOME/.cache/
a2. # ln -s //file-server/home/USER/rhythmbox/cache/rhythmbox
b1. # cd $HOME/.local/share/
b2. # ln -s //file-server/home/USER/rhythmbox/share/rhythmbox
On the other stations remove the Rhythmbox cache and share directories and replace them with the symbolic links.
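On my setup that replacement looks roughly like this (a sketch using the same placeholder paths as above; double-check the paths before removing anything):
rm -rf $HOME/.cache/rhythmbox $HOME/.local/share/rhythmbox
cd $HOME/.cache/ && ln -s //file-server/home/USER/rhythmbox/cache/rhythmbox
cd $HOME/.local/share/ && ln -s //file-server/home/USER/rhythmbox/share/rhythmbox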
Then, the next time you open Rhythmbox on any station, logged in as the same user, the application will access the same data, so the settings and playlists will be the same on all stations.

Error in adding a 4th organization into Hyperledger Fabric 2.0

I am new to Fabric 2.0. I recently installed all the samples and was able to run test-network without an issue with 2 orgs. Then I followed the addOrg3 directions to add a 3rd organization and join it to the channel I created earlier.
The fun part came when I wanted to add a 4th organization. What I did was copy the addOrg3 folder and rename almost everything in each file to represent the 4th organization. I even assigned a new port for this organization. However, I am seeing the following error.
I've also added the following in Scripts/envVar.sh
export PEER0_ORG4_CA=${PWD}/organizations/peerOrganizations/org4.example.com/peers/peer0.org4.example.com/tls/ca.crt
And added the following in envVarCLI.sh
elif [ $ORG -eq 4 ]; then
CORE_PEER_LOCALMSPID="Org4MSP"
CORE_PEER_TLS_ROOTCERT_FILE=$PEER0_ORG4_CA
CORE_PEER_ADDRESS=peer0.org4.example.com:12051
CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/organizations/peerOrganizations/org4.example.com/users/Admin@.../msp
I have also added step1Org4.sh and step2Org4.sh, basically following addOrg3's structure.
What steps do you follow to add additional organizations? Please help.
"No such container: Org4cli"
Sorry for the formatting, since I wasn't able to put it into code formatting, but here is the output from running the command "./addOrg4.sh up":
Add Org4 to channel 'mychannel' with '10' seconds and CLI delay of '3' seconds and using database 'leveldb'
Desktop/blockchain/BSI/fabric-samples/test-network/addOrg4/../../bin/cryptogen
##########################################################
##### Generate certificates using cryptogen tool #########
##########################################################
##########################################################
############ Create Org4 Identities ######################
##########################################################
+ cryptogen generate --config=org4-crypto.yaml --output=../organizations
org4.example.com
+ res=0
+ set +x
Generate CCP files for Org4
Desktop/blockchain/BSI/fabric-samples/test-network/addOrg4/../../bin/configtxgen
##########################################################
####### Generating Org4 organization definition #########
##########################################################
+ configtxgen -printOrg Org4MSP
2020-05-29 13:33:04.609 EDT [common.tools.configtxgen] main -> INFO 001 Loading configuration
2020-05-29 13:33:04.617 EDT [common.tools.configtxgen.localconfig] LoadTopLevel -> INFO 002 Loaded configuration: /Desktop/blockchain/BSI/fabric-samples/test-network/addOrg4/configtx.yaml
+ res=0
+ set +x
###############################################################
####### Generate and submit config tx to add Org4 #############
###############################################################
Error: No such container: Org4cli
ERROR !!!! Unable to create config tx
In your addOrg4.sh there is a condition check like this:
CONTAINER_IDS=$(docker ps -a | awk '($2 ~ /fabric-tools/) {print $1}')
if [ -z "$CONTAINER_IDS" -o "$CONTAINER_IDS" == " " ]; then
echo "Bringing up network"
Org4Up
fi
If you have already run addOrg3.sh up, CONTAINER_IDS always has a value (for example 51b4ad60d812): it is the container ID of Org3cli, so the function Org4Up will never be called. A simple fix is to just comment out the check, like this:
# CONTAINER_IDS=$(docker ps -a | awk '($2 ~ /fabric-tools/) {print $1}')
# if [ -z "$CONTAINER_IDS" -o "$CONTAINER_IDS" == " " ]; then
echo "Bringing up network"
Org4Up
# fi
It will bring up the Org4cli container you are missing.
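Alternatively, instead of commenting the check out, you could make it look for the Org4 CLI container specifically (a sketch; it assumes the new container is indeed named Org4cli, as in the error above):
CONTAINER_IDS=$(docker ps -aq -f name=Org4cli)
if [ -z "$CONTAINER_IDS" ]; then
  echo "Bringing up network"
  Org4Up
fi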
First check whether the container is up or not; if it is up, then I think the CLI where the command is executed is not bootstrapped with the Org4 details.
I have added a 4th organization to a three-org Hyperledger Fabric network. Firstly, you have to create the Org4 artifacts (Crypto.yaml and the Org4 docker file, including the Org4Cli), and then follow the manual (step-by-step) process for adding a new organization from the official documentation.
https://hyperledger-fabric.readthedocs.io/en/release-2.0/channel_update_tutorial.html
Skip the part about editing the scripts (step1Org3.sh ...) because the workflow for adding a 4th or any new org is slightly different, so you would spend a lot of time just modifying the scripts.
I will write an article on adding a new (4th) org on Medium and will paste the link here too.

How to use file parameter in jenkins

I am executing a parameterised build in Jenkins to count the number of lines in a file; the job has one file parameter whose file location is pqr. The script, linecount.sh, is saved on a remote server. When I execute it with the command sh linecount.sh filename, it works perfectly from Jenkins. But when I remove filename from the arguments and execute the same script as a parameterised build, it shows the following error on the console:
Started by user Prasoon Gupta
[EnvInject] - Loading node environment variables.
Building in workspace users/Prasoon/sample_programs
Copying file to pqr
[sample_programs] $ /bin/sh -xe /tmp/hudson3529902665956638862.sh
+ sh linecount.sh
PRASOON4
linecount.sh: line 15: parameterBuild.txt: No such file or directory
Build step 'Execute shell' marked build as failure
Finished: FAILURE
I am uploading the file (parameterBuild.txt) from my local machine. Why is it giving this error?
My doubt is: in the shell script I used the argument $1. How can I refer to the file when I am passing it as a file parameter?
The uploaded file will not retain the same name as it has on your local computer. It will be named after the File location argument specified in the file parameter settings:
In this example I will get a file called file.txt in my workspace root, regardless of what I call it on my computer.
So if I now build my job and enter the following in the parameter dialog (note that my local filename is table.html):
Then I get the following in the log (I have a build step which does ls -l):
Building on master in workspace /var/lib/jenkins/workspace/fs
Copying file to file.txt
[fs] $ /bin/sh -xe /tmp/hudson845437350739055843.sh
+ ls -l
total 4
-rw-r--r-- 1 jenkins jenkins 292 Feb 15 07:23 file.txt
Finished: SUCCESS
Note that table.html is now called file.txt, i.e. what I entered as File location.
So in your case the command should be:
sh linecount.sh pqr
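For reference, a linecount.sh that reads its target from the first positional argument might look like this (a sketch; the actual script from the question is not shown):
#!/bin/sh
# count the lines of the file passed as the first argument
wc -l < "$1"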
There is a long-standing bug that makes it impossible to use file parameters in pipeline jobs:
Handle file parameters
file parameter not working in pipeline job
There is a workaround for this issue, https://github.com/janvrany/jenkinsci-unstashParam-library, and in a pipeline script you do:
library "jenkinsci-unstashParam-library"
node {
def file_in_workspace = unstashParam "file"
sh "cat ${file_in_workspace}"
}
If it's a freestyle job, and your configuration looks similar to this: https://i.stack.imgur.com/vH7mQ.png, then you can simply run sh linecount.sh ${pqr} to get what you are looking for.

Fortify, how to start analysis through command

How can we generate a Fortify report from the command line on Linux?
On the command line, how can we include only certain folders or files for analysis, and how can we specify the location to store the report, etc.?
Please help.
Thanks,
Karthik
1. Step#1 (clean cache)
You need to plan the scan structure before starting:
scanid = 9999 (can be anything you like)
ProjectRoot = /local/proj/9999/
WorkingDirectory = /local/proj/9999/working
(this dir gets huge; you need to "rm -rf ./working && mkdir ./working" before every scan, or byte code piles up underneath this dir and consumes your hard disk fast)
log = /local/proj/9999/working/sca.log
source='/local/proj/9999/source/src/**.*'
classpath='local/proj/9999/source/WEB-INF/lib/*.jar; /local/proj/9999/source/jars/**.*; /local/proj/9999/source/classes/**.*'
./sourceanalyzer -b 9999 -Dcom.fortify.sca.ProjectRoot=/local/proj/9999/ -Dcom.fortify.WorkingDirectory=/local/proj/9999/working -logfile /local/proj/9999/working/sca.log -clean
It is important to specify ProjectRoot; if you do not overwrite this system default, everything will be put under your /home/<user>/.fortify.
The sca.log location is very important: if Fortify does not find this file, it cannot find the byte code to scan.
You can alter the ProjectRoot and WorkingDirectory once and for all, if you are the only user, in FORTIFY_HOME/Core/config/fortify_sca.properties.
In that case, your command line would simply be ./sourceanalyzer -b 9999 -clean
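If you go that route, the two entries would look something like this (property names taken from the -D options above, paths from this example):
com.fortify.sca.ProjectRoot=/local/proj/9999/
com.fortify.WorkingDirectory=/local/proj/9999/working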
2. Step#2 (translate source code to byte code)
nohup ./sourceanalyzer -b 9999 -verbose -64 -Xmx8000M -Xss24M -XX:MaxPermSize=128M -XX:+CMSClassUnloadingEnabled -XX:+UseConcMarkSweepGC -XX:+UseParallelGC -Dcom.fortify.sca.ProjectRoot=/local/proj/9999/ -Dcom.fortify.WorkingDirectory=/local/proj/9999/working -logfile /local/proj/9999/sca.log -source 1.5 -classpath '/local/proj/9999/source/WEB-INF/lib/*.jar:/local/proj/9999/source/jars/**/*.jar:/local/proj/9999/source/classes/**/*.class' -extdirs '/local/proj/9999/source/wars/*.war' '/local/proj/9999/source/src/**/*' &
Always run it as a Unix background job (&); in case your session to the server times out, it will keep working.
-classpath: put all your known classpath entries here for Fortify to resolve the function calls. If a function is not found, Fortify skips its source code translation, so that part will not be scanned later. You will get poor scan quality but the FPR looks good (few issues reported). It is important to have all dependency jars in place.
-extdirs: put all directories/files you don't want to be scanned here.
The last section, the files between ' ', is your source.
-64 is to use 64-bit Java; if not specified, 32-bit is used and the max heap should be < 1.3 GB (-Xmx1200M is safe).
The -XX: options have the same meaning as when launching an application server; only use these to control the class heap and garbage collection, i.e. to tweak performance.
-source is the Java version (1.5 to 1.8).
3. Step#3 (scan with rulepack, custom rules, filters, etc)
nohup ./sourceanalyzer -b 9999 -64 -Xmx8000M -Dcom.fortify.sca.ProjectRoot=/local/proj/9999 -Dcom.fortify.WorkingDirectory=/local/proj/9999/working -logfile /local/ssap/proj/9999/working/sca.log -scan -filter '/local/other/filter.txt' -rules '/local/other/custom/*.xml' -f '/local/proj/9999.fpr' &
-filter: the file name must be filter.txt; any rule GUID in this file will not be reported.
-rules: these are the custom rules you wrote. The HP rulepack is in the FORTIFY_HOME/Core/config/rules directory.
-scan: keyword to tell the Fortify engine to scan the existing scan id. You can skip step #2 and only do step #3 if you did not change code and just want to play with different filters/custom rules.
4. Step#4 Generate PDF from the FPR file (if required)
./ReportGenerator -format pdf -f '/local/proj/9999.pdf' -source '/local/proj/9999.fpr'
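Putting it all together, a wrapper for the whole flow might look roughly like this (a sketch only: it reuses the example scan id and directory layout from this answer; adjust the memory settings, -source version and classpath/source patterns to your own project):
#!/bin/sh
# Sketch: run the clean / translate / scan / report steps in one go.
SCAN_ID=9999
PROJ=/local/proj/$SCAN_ID
WORK=$PROJ/working

# Step 1: clean the working directory and the previous scan
rm -rf "$WORK" && mkdir -p "$WORK"
./sourceanalyzer -b $SCAN_ID \
    -Dcom.fortify.sca.ProjectRoot=$PROJ -Dcom.fortify.WorkingDirectory=$WORK \
    -logfile $WORK/sca.log -clean

# Step 2: translate source code to byte code
./sourceanalyzer -b $SCAN_ID -64 -Xmx8000M \
    -Dcom.fortify.sca.ProjectRoot=$PROJ -Dcom.fortify.WorkingDirectory=$WORK \
    -logfile $WORK/sca.log -source 1.8 \
    -classpath "$PROJ/source/WEB-INF/lib/*.jar" "$PROJ/source/src/**/*"

# Step 3: scan and produce the FPR
./sourceanalyzer -b $SCAN_ID -64 -Xmx8000M \
    -Dcom.fortify.sca.ProjectRoot=$PROJ -Dcom.fortify.WorkingDirectory=$WORK \
    -logfile $WORK/sca.log -scan -f $PROJ/$SCAN_ID.fpr

# Step 4: generate a PDF report from the FPR
./ReportGenerator -format pdf -f $PROJ/$SCAN_ID.pdf -source $PROJ/$SCAN_ID.fpr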

jenkins plugin for triggering build whenever any file changed in a given directory

I am looking for functionality where we have a directory with some files in it.
Whenever anyone makes a change to any of the files in the directory, Jenkins should trigger a build.
Is there any plugin or method for this functionality? Please advise.
Thanks in advance.
I have not tried it myself, but the FSTrigger plugin seems to do what you want:
FSTrigger provides polling mechanisms to monitor a file system and
trigger a build if a file or a set of files have changed.
If you can monitor the directory with a script, you can trigger the build with a HTTP GET, for example with wget or curl:
wget -O- $JENKINS_URL/job/JOBNAME/build
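For example, a small polling script run from cron could hash the directory contents and hit that URL only when something changed (a sketch, assuming anonymous build triggering is allowed; otherwise add a token or credentials to the request):
#!/bin/sh
# Trigger the Jenkins job when any file under WATCH_DIR changes.
# WATCH_DIR, JENKINS_URL and JOBNAME are placeholders for your own setup.
WATCH_DIR=/path/to/watched/dir
STATE_FILE=/tmp/watched_dir.md5
JENKINS_URL=http://myserver:8080
JOBNAME=whatever

NEW_SUM=$(find "$WATCH_DIR" -type f -exec md5sum {} + | sort | md5sum)
OLD_SUM=$(cat "$STATE_FILE" 2>/dev/null)

if [ "$NEW_SUM" != "$OLD_SUM" ]; then
    echo "$NEW_SUM" > "$STATE_FILE"
    wget -O- "$JENKINS_URL/job/$JOBNAME/build"
fi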
Although only slightly related, it seems this issue was about monitoring static files on the system; however, there are many version control systems for just this purpose.
I answered this in another post, if you're using git to track changes on the files themselves:
#!/bin/bash
set -e
job_name="whatever"
JOB_URL="http://myserver:8080/job/${job_name}/"
FILTER_PATH="path/to/folder/to/monitor"
python_func="import json, sys
obj = json.loads(sys.stdin.read())
ch_list = obj['changeSet']['items']
_list = [ j['affectedPaths'] for j in ch_list ]
for outer in _list:
    for inner in outer:
        print(inner)
"
_affected_files=`curl --silent ${JOB_URL}${BUILD_NUMBER}'/api/json' | python -c "$python_func"`
if [ -z "`echo \"$_affected_files\" | grep \"${FILTER_PATH}\"`" ]; then
echo "[INFO] no changes detected in ${FILTER_PATH}"
exit 0
else
echo "[INFO] changed files detected: "
for a_file in `echo "$_affected_files" | grep "${FILTER_PATH}"`; do
echo " $a_file"
done;
fi;
You can add the check directly to the top of the job's shell build step, and it will exit 0 if no changes are detected. Hence, you can always poll the top level of the repo for check-ins to trigger a build, and only complete the build if the files in question change.
