Getting VncRecorder working on Jenkins

I am trying to set Jenkins up to perform continuous testing of some online applications.
I have installed Jenkins on Ubuntu 16.04 and have a slave running Windows 10.
I have installed UltraVNC on the slave and am trying to get VncRecorder to record the test session.
At the moment, my job simply does some random stuff. The console output is as follows:
Started by user anonymous
Building remotely on Nove1 (UITest) in workspace
C:\Users\Jenkins\workspace\TestTester
[WS-CLEANUP] Deleting project workspace...
[WS-CLEANUP] Done
Starting xvnc
[TestTester] $ "C:\Program Files\uvnc bvba\UltraVNC\winvnc.exe" -connect host:76 Recording from vnc server: 172.24.27.210:0
Using vnc passwd file: /var/lib/jenkins/secrets/vncpassword
job/TestTester/14/Recording from server: 172.24.27.210:0, to: /var/lib/jenkins/jobs/TestTester/builds/14/archive/TestTester_14.swf
[TestTester] $ cmd /c call
C:\Users\Jenkins\AppData\Local\Temp\hudson6483326613410629302.bat
C:\Users\Jenkins\workspace\TestTester>echo "Start" "Start"
C:\Users\Jenkins\workspace\TestTester>exit 0
ERROR: File
/var/lib/jenkins/jobs/TestTester/builds/14/archive/TestTester_14.swf doesn't exist.
Feature "Record VNC session" failed!
Terminating xvnc.
Finished: FAILURE
I've spent the past 2 days searching on Google and found nothing, so can any of you good folks help?
Thanks!
Paul

Problem Steps Recorder is a cool Windows tool that can record your actions as a series of images. You can use this tool, which is built into Windows :)
psr.exe [/start |/stop][/output <fullfilepath>] [/sc (0|1)] [/maxsc <value>]
[/sketch (0|1)] [/slides (0|1)] [/gui (0|1)]
[/arcetl (0|1)] [/arcxml (0|1)] [/arcmht (0|1)]
[/stopevent <eventname>] [/maxlogsize <value>] [/recordpid <pid>]
/start :Start Recording. (Outputpath flag SHOULD be specified)
/stop :Stop Recording.
/sc :Capture screenshots for recorded steps.
/maxsc :Maximum number of recent screen captures.
/maxlogsize :Maximum log file size (in MB) before wrapping occurs.
/gui :Display control GUI.
/arcetl :Include raw ETW file in archive output.
/arcxml :Include MHT file in archive output.
/recordpid :Record all actions associated with given PID.
/sketch :Sketch UI if no screenshot was saved.
/slides :Create slide show HTML pages.
/output :Store output of record session in given path.
/stopevent :Event to signal after output files are generated.
PSR Usage Examples:
psr.exe
psr.exe /start /output fullfilepath.zip /sc 1 /gui 0 /recordpid <PID>
/stopevent <eventname> /arcetl 1
psr.exe /start /output fullfilepath.xml /gui 0 /recordpid <PID>
/stopevent <eventname>
psr.exe /start /output fullfilepath.xml /gui 0 /sc 1 /maxsc <number>
/maxlogsize <value> /stopevent <eventname>
psr.exe /stop
Notes:
1. Output path should include a directory path (e.g. '.\file.xml').
2. Output file can either be a ZIP file or XML file
3. Can't specify /arcxml /arcetl /arcmht /sc etc. if output is not a ZIP file.
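For example, in a Jenkins "Execute Windows batch command" build step you could wrap the test in a start/stop pair; this is only a sketch, and the output file name is a placeholder:
psr.exe /start /output "%WORKSPACE%\TestTester_steps.zip" /sc 1 /gui 0
rem ... run the UI test steps here ...
psr.exe /stop
The resulting ZIP can then be archived as a build artifact in place of the .swf recording.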

The port number of your VNC server (0) looks suspicious.
What about the firewall settings of your slave - is the VNC port blocked?
You could verify the port of your VNC server with vncviewer 172.24.27.210:.
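As a quick reachability check from the Jenkins master (assuming the default mapping, where display :0 corresponds to TCP port 5900), you could also try:
nc -zv 172.24.27.210 5900
If that connection is refused or times out, the firewall or the UltraVNC port configuration is the likely culprit.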
Regards,
Dimitri (developer of vncrecorder plugin)

Related

Jenkins: spawn multiple processes and wait for them to terminate

I've set up a Jenkins build server that's running a nightly build for a Unity project, building two different instances of it. Once these builds are done it runs a job on a different node to copy over the build binaries and run them. What I'm running into is finding a good way for the job to (1) run both executables simultaneously, (2) wait for both of them to finish before moving to the next 'build step' in the job (where it verifies test logs etc).
Initially this seemed to work when I tested it on my own computer: https://stackoverflow.com/a/18762607/14764114
...but it does not work in Jenkins, because the Jenkins node runs as a Windows Service and thus cannot use the START command in Batch.
I'm reading that running separate services might be a solution to explore here, but before I start diving into that I figured I'd ask the community if there isn't a more elegant solution here. In summary, I want to:
Run two executables from a Jenkins build step at the same time (from a Jenkins node running on Windows)
Wait for both executables to exit before continuing to the next build step
In the end I went with this solution, as the Task Scheduler seems to be the only thing capable of starting a Unity game window in my scenario. So I create a task, run it and then delete it, after which I just wait for the processes to disappear from the tasklist:
@echo off
echo "Run FirstApp"
schtasks /create /sc MONTHLY /tn FirstAppTask /tr "%TARGET_DIR%\%APP_First%\FirstApp.exe -automatedtest -duration=%TEST_DURATION_SECONDS%"
schtasks /run /tn FirstAppTask
schtasks /delete /f /tn FirstAppTask
echo "Run SecondApp"
schtasks /create /sc MONTHLY /tn SecondAppTask /tr "%TARGET_DIR%\%APP_Second%\SecondApp.exe -automatedtest -duration=%TEST_DURATION_SECONDS%"
schtasks /run /tn SecondAppTask
schtasks /delete /f /tn SecondAppTask
echo "Wait for FirstApp.exe to end"
:LOOP1
tasklist | find /i "FirstApp" >nul 2>&1
IF ERRORLEVEL 1 (
GOTO CONTINUE1
) ELSE (
REM ping to the loopback address acts as a rough few-second sleep, which works under a Windows service
ping -n 5 ::1 >NUL
GOTO LOOP1
)
:CONTINUE1
echo "Wait for SecondApp.exe to end"
:LOOP2
tasklist | find /i "SecondApp" >nul 2>&1
IF ERRORLEVEL 1 (
GOTO CONTINUE2
) ELSE (
ping -n 5 ::1 >NUL
GOTO LOOP2
)
:CONTINUE2
echo Done running tests

Error in adding 4th organization in to Hyperledger Fabric 2.0

I am new to Fabric 2.0. I recently installed all the samples and was able to run test-network without an issue with 2 orgs. Then I followed the addOrg3 guide to add a 3rd organization and join it to the channel I created earlier.
Now the fun part came when I wanted to add a 4th organization. What I did was copy the addOrg3 folder and rename almost everything in each file to represent the 4th organization. I even assigned a new PORT for this organization. However, I am seeing the following error.
I've also added the following in Scripts/envVar.sh
export PEER0_ORG4_CA=${PWD}/organizations/peerOrganizations/org4.example.com/peers/peer0.org4.example.com/tls/ca.crt
And added the following in envVarCLI.sh
elif [ $ORG -eq 4 ]; then
CORE_PEER_LOCALMSPID="Org4MSP"
CORE_PEER_TLS_ROOTCERT_FILE=$PEER0_ORG4_CA
CORE_PEER_ADDRESS=peer0.org4.example.com:12051
CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/organizations/peerOrganizations/org4.example.com/users/Admin#.../msp
I have also added step1Org4.sh and step2Org4.sh, basically following addOrg3's structure.
What steps do you follow to add additional organizations? Please help.
"No such container: Org4cli"
Sorry for the formatting, since I wasn't able to put it into code formatting, but here is the output from running the command "./addOrg4.sh up":
Add Org4 to channel 'mychannel' with '10' seconds and CLI delay of '3' seconds and using database 'leveldb'
Desktop/blockchain/BSI/fabric-samples/test-network/addOrg4/../../bin/cryptogen
##########################################################
##### Generate certificates using cryptogen tool #########
##########################################################
##########################################################
############ Create Org4 Identities ######################
##########################################################
+ cryptogen generate --config=org4-crypto.yaml --output=../organizations
org4.example.com
+ res=0
+ set +x
Generate CCP files for Org4
Desktop/blockchain/BSI/fabric-samples/test-network/addOrg4/../../bin/configtxgen
##########################################################
####### Generating Org4 organization definition #########
##########################################################
+ configtxgen -printOrg Org4MSP
2020-05-29 13:33:04.609 EDT [common.tools.configtxgen] main -> INFO 001 Loading configuration
2020-05-29 13:33:04.617 EDT [common.tools.configtxgen.localconfig] LoadTopLevel -> INFO 002 Loaded configuration: /Desktop/blockchain/BSI/fabric-samples/test-network/addOrg4/configtx.yaml
+ res=0
+ set +x
###############################################################
####### Generate and submit config tx to add Org4 #############
###############################################################
Error: No such container: Org4cli
ERROR !!!! Unable to create config tx
In your addOrg4.sh you have a condition check like this:
CONTAINER_IDS=$(docker ps -a | awk '($2 ~ /fabric-tools/) {print $1}')
if [ -z "$CONTAINER_IDS" -o "$CONTAINER_IDS" == " " ]; then
echo "Bringing up network"
Org4Up
fi
If you have already run addOrg3.sh up, CONTAINER_IDS will always have a value (for example 51b4ad60d812); that is the container ID of Org3cli, so the Org4Up function will never be called. A simple fix is to comment out the check like this:
# CONTAINER_IDS=$(docker ps -a | awk '($2 ~ /fabric-tools/) {print $1}')
# if [ -z "$CONTAINER_IDS" -o "$CONTAINER_IDS" == " " ]; then
echo "Bringing up network"
Org4Up
# fi
That will bring up the Org4cli container you are missing.
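To confirm that the CLI container actually came up after that change, a quick check (using the container name from the error above) is:
docker ps -a --filter "name=Org4cli"
If it shows up but is not running, docker logs Org4cli should tell you why it exited.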
First, check whether the container is up or not; if it is up, then I think the CLI in which the command is executed is not bootstrapped with the Org4 details.
I have added a 4th organization to a three-org Hyperledger Fabric network. First, you have to create the Org4 artifacts (the crypto config YAML and the Org4 docker file, including Org4cli) and then follow the manual, step-by-step process for adding a new organization from the official documentation:
https://hyperledger-fabric.readthedocs.io/en/release-2.0/channel_update_tutorial.html
Omit the process of editing the scripts (step1org3.sh, ...) because the workflow for adding a 4th or any new org is slightly different, so you would spend a lot of time just modifying the scripts.
I will write an article on adding a new (4th) org on Medium and will paste the link here too.

ROS pointgrey_camera_driver crashes when I echo its topics

With Ubuntu 14.04 and Indigo, I cloned the pointgrey_camera_driver into catkin_ws/src and did catkin_make install. When I run:
roslaunch pointgrey_camera_driver camera.launch
and then:
rostopic echo /camera/image_color
the camera_nodelet_manager process dies. I don't know where the problem might be. This did work at one time.
I have tried using a calibration file as well as setting the launch argument "calibrated" to 0.
When the process dies this is the message:
[camera/camera_nodelet_manager-2] process has died [pid 14845, exit code -11, cmd /opt/ros/indigo/lib/nodelet/nodelet manager __name:=camera_nodelet_manager __log:=/home/mitch/.ros/log/9c7752b6-2116-11e6-a626-c03fd56e6751/camera-camera_nodelet_manager-2.log].
log file: /home/mitch/.ros/log/9c7752b6-2116-11e6-a626-c03fd56e6751/camera-camera_nodelet_manager-2*.log
The log files are not enlightening.

Fortify, how to start analysis through command

How can we generate a Fortify report from the command line on Linux?
Also, how can we include only certain folders or files in the analysis, and how can we specify the location to store the report, etc.?
Please help.
Thanks,
Karthik
1. Step#1 (clean cache)
You need to plan the scan structure before starting:
scanid = 9999 (can be anything you like)
ProjectRoot = /local/proj/9999/
WorkingDirectory = /local/proj/9999/working
(this dir is huge; you need to "rm -rf ./working && mkdir ./working" before every scan, or byte code piles up underneath this dir and consumes your hard disk fast)
log = /local/proj/9999/working/sca.log
source='/local/proj/9999/source/src/**.*'
classpath='/local/proj/9999/source/WEB-INF/lib/*.jar; /local/proj/9999/source/jars/**.*; /local/proj/9999/source/classes/**.*'
./sourceanalyzer -b 9999 -Dcom.fortify.sca.ProjectRoot=/local/proj/9999/ -Dcom.fortify.WorkingDirectory=/local/proj/9999/working -logfile /local/proj/9999/working/sca.log -clean
It is important to specify ProjectRoot; if you do not override this system default, it will be put under your home directory (~/.fortify).
The sca.log location is very important; if Fortify does not find this file, it cannot find the byte code to scan.
You can alter the ProjectRoot and WorkingDirectory once and for all if you are the only user (in FORTIFY_HOME/Core/config/fortify_sca.properties).
In such case, your command line would be ./sourceanalyzer -b 9999 -clean
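As a sketch, the two entries in FORTIFY_HOME/Core/config/fortify_sca.properties would look like this, using the example paths above:
com.fortify.sca.ProjectRoot=/local/proj/9999/
com.fortify.WorkingDirectory=/local/proj/9999/working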
2. Step#2 (translate source code to byte code)
nohup ./sourceanalyzer -b 9999 -verbose -64 -Xmx8000M -Xss24M -XX:MaxPermSize=128M -XX:+CMSClassUnloadingEnabled -XX:+UseConcMarkSweepGC -XX:+UseParallelGC -Dcom.fortify.sca.ProjectRoot=/local/proj/9999/ -Dcom.fortify.WorkingDirectory=/local/proj/9999/working -logfile /local/proj/9999/sca.log -source 1.5 -classpath '/local/proj/9999/source/WEB-INF/lib/*.jar:/local/proj/9999/source/jars/**/*.jar:/local/proj/9999/source/classes/**/*.class' -extdirs '/local/proj/9999/source/wars/*.war' '/local/proj/9999/source/src/**/*' &
Always run it as a Unix background job (&) so that if your session to the server times out, it will keep working.
-classpath: put all your known classpath entries here for Fortify to resolve the function calls. If a function is not found, Fortify will skip translating that source code, so that part will not be scanned later. You will get a poor scan quality but the FPR will look good (few issues reported). It is important to have all dependency jars in place.
-extdirs: put all directories/files you don't want to be scanned here.
The last section, the files between quotes, is your source.
-64 is to use 64-bit Java; if not specified, 32-bit will be used and the max heap should be < 1.3 GB (-Xmx1200M is safe).
The -XX: options have the same meaning as when launching an application server; only use these to control the class heap and garbage collection, i.e. to tweak performance.
-source is the Java version (1.5 to 1.8).
3. Step#3 (scan with rulepack, custom rules, filters, etc)
nohup ./sourceanalyzer -b 9999 -64 -Xmx8000M -Dcom.fortify.sca.ProjectRoot=/local/proj/9999 -Dcom.fortify.WorkingDirectory=/local/proj/9999/working -logfile /local/proj/9999/working/sca.log -scan -filter '/local/other/filter.txt' -rules '/local/other/custom/*.xml' -f '/local/proj/9999.fpr' &
-filter: the file name must be filter.txt; any rule GUID in this file will not be reported.
-rules: these are the custom rules you wrote; the HP rulepack is in the FORTIFY_HOME/Core/config/rules directory.
-scan: keyword to tell the Fortify engine to scan the existing scan id. You can skip step #2 and only do step #3 if you did not change code and just want to play with different filters/custom rules.
4. Step#4 Generate PDF from the FPR file (if required)
./ReportGenerator -format pdf -f '/local/proj/9999.pdf' -source '/local/proj/9999.fpr'
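If it helps, here is a minimal sketch that chains the four steps into a single shell script; the paths, build ID and options are the example values used above (in practice you would keep the full option sets from steps 2 and 3):
#!/bin/sh
ID=9999
ROOT=/local/proj/$ID
# step 1: clean the working directory and the build model
rm -rf "$ROOT/working" && mkdir -p "$ROOT/working"
./sourceanalyzer -b $ID -Dcom.fortify.sca.ProjectRoot=$ROOT/ -Dcom.fortify.WorkingDirectory=$ROOT/working -clean
# step 2: translate source to byte code
./sourceanalyzer -b $ID -Dcom.fortify.sca.ProjectRoot=$ROOT/ -Dcom.fortify.WorkingDirectory=$ROOT/working \
    -source 1.8 -classpath "$ROOT/source/WEB-INF/lib/*.jar" "$ROOT/source/src/**/*"
# step 3: scan and write the FPR
./sourceanalyzer -b $ID -Dcom.fortify.sca.ProjectRoot=$ROOT/ -Dcom.fortify.WorkingDirectory=$ROOT/working \
    -scan -f "$ROOT/$ID.fpr"
# step 4: generate the PDF report
./ReportGenerator -format pdf -f "$ROOT/$ID.pdf" -source "$ROOT/$ID.fpr"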

Monitoring URLs with Nagios

I'm trying to monitor actual URLs, and not only hosts, with Nagios, as I operate a shared server with several websites, and I don't think it's enough just to monitor the basic HTTP service (I'm including at the very bottom of this question a small explanation of what I'm envisioning).
(Side note: please note that I have Nagios installed and running inside a chroot on a CentOS system. I built nagios from source, and have used yum to install into this root all dependencies needed, etc...)
I first found check_url, but after installing it into /usr/lib/nagios/libexec, I kept getting a "return code of 255 is out of bounds" error. That's when I decided to start writing this question (but wait! There's another plugin I decided to try first!)
After reviewing This Question, which had practically the same problem I'm having with check_url, I decided to open up a new question on the subject because
a) I'm not using NRPE with this check, and
b) I tried the suggestions made on the earlier question I linked to, but none of them worked. For example...
./check_url some-domain.com | echo $0
returns "0" (which indicates the check was successful)
I then followed the debugging instructions on Nagios Support to create a temp file called debug_check_url, and put the following in it (to then be called by my command definition):
#!/bin/sh
echo `date` >> /tmp/debug_check_url_plugin
echo $* >> /tmp/debug_check_url_plugin
/usr/local/nagios/libexec/check_url $*
Assuming I'm not in "debugging mode", my command definition for running check_url is as follows (inside command.cfg):
'check_url' command definition
define command{
command_name check_url
command_line $USER1$/check_url $url$
}
(Incidentally, you can also view what I was using in my service config file at the very bottom of this question)
Before publishing this question, however, I decided to give 1 more shot at figuring out a solution. I found the check_url_status plugin, and decided to give that one a shot. To do that, here's what I did:
mkdir /usr/lib/nagios/libexec/check_url_status/
downloaded both check_url_status and utils.pm
Per the user comment / review on the check_url_status plugin page, I changed "lib" to the proper directory of /usr/lib/nagios/libexec/.
Run the following:
./check_url_status -U some-domain.com
When I run the above command, I kept getting the following error:
bash-4.1# ./check_url_status -U mydomain.com
Can't locate utils.pm in @INC (@INC contains: /usr/lib/nagios/libexec/ /usr/local/lib/perl5 /usr/local/share/perl5 /usr/lib/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib/perl5 /usr/share/perl5) at ./check_url_status line 34.
BEGIN failed--compilation aborted at ./check_url_status line 34.
So at this point, I give up, and have a couple of questions:
Which of these two plugins would you recommend? check_url or check_url_status?
(After reading the description of check_url_status, I feel that this one might be the better choice. Your thoughts?)
Now, how would I fix my problem with whichever plugin you recommended?
At the beginning of this question, I mentioned I would include a small explanation of what I'm envisioning. I have a file called services.cfg which is where I have all of my service definitions located (imagine that!).
The following is a snippet of my service definition file, which I wrote to use check_url (because at that time, I thought everything worked). I'll build a service for each URL I want to monitor:
###
# Monitoring Individual URLs...
#
###
define service{
host_name {my-shared-web-server}
service_description URL: somedomain.com
check_command check_url!somedomain.com
max_check_attempts 5
check_interval 3
retry_interval 1
check_period 24x7
notification_interval 30
notification_period workhours
}
I was making things WAY too complicated.
The built-in / installed by default plugin, check_http, can accomplish what I wanted and more. Here's how I have accomplished this:
My Service Definition:
define service{
host_name myers
service_description URL: my-url.com
check_command check_http_url!http://my-url.com
max_check_attempts 5
check_interval 3
retry_interval 1
check_period 24x7
notification_interval 30
notification_period workhours
}
My Command Definition:
define command{
command_name check_http_url
command_line $USER1$/check_http -I $HOSTADDRESS$ -u $ARG1$
}
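To sanity-check that command outside of Nagios, you can run the plugin by hand with the same arguments the definition expands to (host and URL here are just placeholders):
/usr/local/nagios/libexec/check_http -I my-shared-web-server -u http://my-url.com
echo $?   # 0 = OK, 1 = WARNING, 2 = CRITICAL, 3 = UNKNOWN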
A better way to monitor URLs is by using WebInject, which can be used with Nagios.
The problem below is because you don't have the Perl utils package; try installing it.
bash-4.1# ./check_url_status -U mydomain.com Can't locate utils.pm in @INC (@INC contains:
You can write a script plugin. It is easy; you only have to check the URL with something like:
`curl -Is $URL -k | grep HTTP | cut -d ' ' -f2`
$URL is what you pass to the script as a parameter.
Then check the result: if you get a code greater than 399 you have a problem; otherwise, everything is OK. Then set the right exit code and the message for Nagios.
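A minimal sketch of such a plugin, assuming the URL is passed as the first argument and using the curl line above, could look like this (the script name is made up):
#!/bin/bash
# check_url_code.sh - report OK/CRITICAL to Nagios based on the HTTP status code
URL="$1"
# take the status code from the first HTTP response line
CODE=$(curl -Is "$URL" -k | grep HTTP | cut -d ' ' -f2 | head -n1)
if [ -z "$CODE" ]; then
    echo "CRITICAL - no HTTP response from $URL"
    exit 2
elif [ "$CODE" -gt 399 ]; then
    echo "CRITICAL - $URL returned HTTP $CODE"
    exit 2
else
    echo "OK - $URL returned HTTP $CODE"
    exit 0
fi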
