I'm trying to get the full Jenkins consoleText and write it to a file as follows:
ByteArrayOutputStream stream = new ByteArrayOutputStream()
currentBuild.rawBuild.getLogText().writeLogTo(0, stream)
writeFile file: "archive/${jobName}-log.txt", text: stream.toString()
However, the contents of the file are getting truncated intermittently.
Jenkins version: 2.361.1
The stream's buffer is 32 bytes initially and grows as needed.
I noticed that the truncated files are 1.2-1.5 MB, while the full log is around 3 MB.
Could there be some memory issues?
Also, there is no problem with the files that are < 1MB.
If you simply want a copy of the console log, you can do something like the following in your Pipeline. The first option copies the log file straight from disk, so it only works when the build is running on the controller node:
sh """
cp ${JENKINS_HOME}/jobs/${JOB_NAME}/builds/${BUILD_NUMBER}/log archive/${JOB_NAME}-log.txt
"""
The second option fetches the same log over HTTP from the build's consoleText endpoint:
sh """
curl ${BUILD_URL}consoleText -o archive/${JOB_NAME}-log.txt
"""
I'm trying to run Google's usd_from_gltf utility inside AWS Lambda, using a custom Docker image. The setup seems to be working locally but when executing the same Lambda in AWS, it fails for certain input files.
Minimal app
https://github.com/petrbroz/glb-to-usdz-test
This is a minimalistic AWS SAM app with a Lambda function called GlbToUsdzFunction that downloads a GLB file from a specified URL and converts it to USDZ. The Lambda function uses a custom Docker image (https://github.com/leon/docker-gltf-to-udsz) and Python's subprocess to run the usd_from_gltf tool to handle the conversion.
Sample file URLs
https://petrbroz.s3.us-west-1.amazonaws.com/glb-to-usdz-issues/snowmobile.glb
https://petrbroz.s3.us-west-1.amazonaws.com/glb-to-usdz-issues/wall-e.glb
When running locally
The Lambda function succeeds for both snowmobile.glb and wall-e.glb. Here's an example output for the former:
$ sam build
$ echo "{ \"url\": \"https://petrbroz.s3.us-west-1.amazonaws.com/glb-to-usdz-issues/snowmobile.glb\" }" | sam local invoke "GlbToUsdzFunction" --event -
Reading invoke payload from stdin (you can also pass it from file with --event)
Invoking Container created from glbtousdzfunction:glb-to-usdz-lambda
Building image.................
Skip pulling image and use local one: glbtousdzfunction:rapid-1.46.0-x86_64.
START RequestId: 720b6b49-e36c-4429-96fb-9e0e5c02c09b Version: $LATEST
Downloading file
Converting file
Warning: extensionsUsed: Extension is in extensionsUsed but not actually referenced: KHR_texture_transform [GLTF_WARN_EXTENSION_UNREFERENCED]
END RequestId: 720b6b49-e36c-4429-96fb-9e0e5c02c09b
REPORT RequestId: 720b6b49-e36c-4429-96fb-9e0e5c02c09b Init Duration: 0.22 ms Duration: 19997.59 ms Billed Duration: 19998 ms Memory Size: 1024 MB Max Memory Used: 1024 MB
{"status": "success"}
When running in AWS
The Lambda function succeeds for snowmobile.glb but fails for wall-e.glb. Here's the output for the latter:
START RequestId: b1bdc496-ec12-430e-a641-2574af354d60 Version: $LATEST
Downloading file
Converting file
ERROR: USD: Insufficient permissions to write to destination directory '/var/tmp' (Replace) [UFG_ERROR_USD]
ERROR: USD: Failed to map '/var/tmp/output.usdc': No such file or directory (AddFile) [UFG_ERROR_USD]
Warning: USD: Failed to add temporary layer at '/var/tmp/output.usdc' to the package at path 'output.usdz'. (_CreateNewUsdzPackage) [UFG_WARN_USD]
ERROR: Cannot write USD: "/tmp/output.usdz" [UFG_ERROR_IO_WRITE_USD]
Command '['usd_from_gltf', '/tmp/input.glb', '/tmp/output.usdz']' returned non-zero exit status 255.
END RequestId: b1bdc496-ec12-430e-a641-2574af354d60
REPORT RequestId: b1bdc496-ec12-430e-a641-2574af354d60 Duration: 2039.96 ms Billed Duration: 5166 ms Memory Size: 1024 MB Max Memory Used: 101 MB Init Duration: 3125.71 ms
Has anyone run into this? Am I doing something wrong here, or is this perhaps a bug on the AWS side, or on the usd_from_gltf side?
Some USD conversions cause the library to write intermediate files, and it looks like it is built to use /var/tmp for this purpose. Since Lambda functions can only write to /tmp, the workaround we came up with is to link /var/tmp to /tmp:
In the glb-to-usdz Dockerfile, add a line like RUN rm -rf /var/tmp && ln -s /tmp /var/tmp
This allows your second example to succeed.
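If you want to double-check that the symlink actually ends up in the image, you can override the entrypoint and inspect /var/tmp; a quick sketch, assuming the image is tagged glb-to-usdz (a hypothetical tag):
docker build -t glb-to-usdz .
docker run --rm --entrypoint /bin/sh glb-to-usdz -c "ls -ld /var/tmp"
# should print something like: lrwxrwxrwx ... /var/tmp -> /tmp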
I have a substantial (~1 TB) directory that already has a backup in Google archive storage. For space reasons on the local machine, I had to migrate the directory somewhere else, but now when I try to run the script that was synchronizing it to the cloud (using the new directory as the source), it attempts to upload everything. I guess the problem lies with the timestamps on the migrated files, because when I experiment with "-c" (CRC comparison) it works fine, but that is far too slow to be workable (even with compiled CRC).
By manually inspecting the timestamps, it seems they were copied across well (I used robocopy /mir for the migration), so what timestamp exactly is upsetting/confusing gsutil?
I see a few ways out of this:
Finding a way to preserve original timestamps on copy (I still have the original folder, so that's an option)
Somehow convincing gsutil to only patch the timestamps of the cloud files or fall back to size-only
Bite the bullet and re-upload everything
I will appreciate any suggestions.
command used for the migration:
robocopy SOURCE TARGET /mir /unilog+:robocopy.log /tee
Also tried:
robocopy SOURCE TARGET /mir /COPY:DAT /DCOPY:T /unilog+:robocopy.log /tee
command used for sync with google:
gsutil -m rsync -r "source" "gs://MYBUCKET/target"
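For reference, the checksum-based variant mentioned above (accurate, but far too slow at ~1 TB) looks like this:
gsutil -m rsync -c -r "source" "gs://MYBUCKET/target"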
So it turns out that even when you try to sync the timestamps, they end up different:
>>> os.stat(r'file.copy')
nt.stat_result(st_mode=33206, ... st_size=1220431L, st_atime=1606987626L, st_mtime=1257521848L, st_ctime=1512570325L)
>>> os.stat(r'file.original')
nt.stat_result(st_mode=33206, ... st_size=1220431L, st_atime=1606987624L, st_mtime=1257521847L, st_ctime=1512570325L)
You can clearly see that mtime and atime are just fractionally off (later).
Trying to sync them:
>>> os.utime(r'file.copy', (1606987626, 1257521847))
>>> os.stat(r'file.copy')
nt.stat_result(st_mode=33206, ... st_size=1220431L, st_atime=1606987626L, st_mtime=1257521848L, st_ctime=1512570325L)
results in mtime still being off, but if I go a bit further back in time:
>>> os.utime(r'file.copy', (1606987626, 1257521845))
>>> os.stat(r'file.copy')
nt.stat_result(st_mode=33206, ... st_size=1220431L, st_atime=1606987626L, st_mtime=1257521846L, st_ctime=1512570325L)
It changes, but still not accurate.
However, now after taking it back in time I can use the "-u" switch to ignore newer files in destination:
gsutil -m rsync -u -r "source" "gs://MYBUCKET/target"
Script that fixes the timestamps for all files in the target:
import os

SOURCE = r'source'
TARGET = r'target'

file_count = 0
diff_count = 0

for root, dirs, files in os.walk(SOURCE):
    for name in files:
        file_count += 1
        source_filename = os.path.join(root, name)
        target_filename = source_filename.replace(SOURCE, TARGET)
        try:
            source_stat = os.stat(source_filename)
            target_stat = os.stat(target_filename)
        except WindowsError:
            # target file missing or inaccessible; skip it
            continue
        delta = 0
        # keep nudging the target mtime backwards until it is no longer newer than the source
        while source_stat.st_mtime < target_stat.st_mtime:
            diff_count += 1
            #print source_filename, source_stat
            #print target_filename, target_stat
            print 'patching', target_filename
            os.utime(target_filename, (source_stat.st_atime, source_stat.st_mtime - delta))
            target_stat = os.stat(target_filename)
            delta += 1

print file_count, diff_count
It's far from perfect, but running the command no longer results in everything trying to sync. Hopefully someone will find this useful; other solutions are still welcome.
I am configuring Jenkins to automatically deploy my successful builds to my Kubernetes cluster. I have manually set up the KUBECONFIG file in /var/lib/jenkins/.kube/config.
But my Jenkins job keeps giving the same error:
+ kubectl config --kubeconfig=/var/lib/jenkins/.kube/config view
error: error loading config file "/var/lib/jenkins/.kube/config": v1.Config.Contexts: \
[]v1.NamedContext: Clusters: []v1.NamedCluster: v1.NamedCluster.Name: Cluster: v1.Cluster.Server: \
CertificateAuthorityData: decode base64: illegal base64 data at input byte 47, error found in #10 byte of ... \
|ASXY9gkN$","server":|..., bigger context ...|"LS3tGS1PR0dJTiBDRVJUMLPAR0FURS0tLS0tCk1JSUV5gkN$", \
"server":"https://clsx-cloud-d734ef-0b|...
I copied the kube config file manually from my SSH-accessible account, i.e.
cat /home/username/.kube/config
You most likely copied the wrong screen output from the terminal, or copied it while viewing/editing the file in, e.g., nano.
The $ characters are illegal in base64 and are the result of truncated line display in the terminal; make sure that you copy the real file data correctly.
For example:
xclip -sel clip < /home/username/.kube/config
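One quick way to confirm the copied file is intact is to check that the embedded certificate data still decodes as base64; a rough check, assuming the usual YAML kubeconfig layout:
grep 'certificate-authority-data' /var/lib/jenkins/.kube/config | head -n 1 | awk '{print $2}' | base64 -d > /dev/null && echo "base64 OK"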
I have fixed this error by removing the files in this directory:
/var/lib/jenkins/.kube/
Note: take a backup of this directory first.
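A sketch of that backup-then-remove step, using the paths from the question (the .backup name is just an example):
cp -a /var/lib/jenkins/.kube /var/lib/jenkins/.kube.backup
rm /var/lib/jenkins/.kube/config
# then copy a known-good kubeconfig back to /var/lib/jenkins/.kube/config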
I have used spiffsimg to create a single file containing multiple lua files:
# ./spiffsimg -f lua.img -c 262144 -r lua.script
f 4227 init.lua
f 413 cfg.lua
f 2233 setupWifi.lua
f 7498 configServer.lua
f 558 cfgForm.htm
f 4255 setupConfig.lua
f 14192 main.lua
#
I then use esptool.py to flash the NodeMCU firmware and the file containing the lua files to the esp8266 (NodeMCU dev kit):
c:\esptool-master>c:\Python27\python esptool.py -p COM7 write_flash -fs 32m -fm dio 0x00000 nodemcu-dev-9-modules-2016-07-18-12-06-36-integer.bin 0x78000 lua.img
esptool.py v1.0.2-dev
Connecting...
Running Cesanta flasher stub...
Flash params set to 0x0240
Writing 446464 @ 0x0... 446464 (100 %)
Wrote 446464 bytes at 0x0 in 38.9 seconds (91.9 kbit/s)...
Writing 262144 @ 0x78000... 262144 (100 %)
Wrote 262144 bytes at 0x78000 in 22.8 seconds (91.9 kbit/s)...
Leaving...
I then run ESPLorer to check the status and get:
PORT OPEN 115200
Communication with MCU..Got answer! AutoDetect firmware...
Can't autodetect firmware, because proper answer not received.
NodeMCU custom build by frightanic.com
branch: dev
commit: b21b3e08aad633ccfd5fd29066400a06bb699ae2
SSL: true
modules: file,gpio,http,net,node,rtctime,tmr,uart,wifi
build built on: 2016-07-18 12:05
powered by Lua 5.1.4 on SDK 1.5.4(baaeaebb)
lua: cannot open init.lua
>
----------------------------
No files found.
----------------------------
>
Total : 3455015 bytes
Used : 0 bytes
Remain: 3455015 bytes
The NodeMCU firmware flashed correctly, but the lua files can't be located.
I have tried flashing to other locations (0x84000, 0x7c000), but I am just guessing at these locations based on reading threads on github.
I used the NodeMCU file.fscfg() routine to get the flash address and size. If I only flash the NodeMCU firmware I get the following:
print (file.fscfg())
524288 3653632
524288 is 0x80000, so I tried flashing only the spiffsimg file (lua.img) to 0x80000, then ran the same print statement and got:
print (file.fscfg())
786432 3391488
The flash address incremented by the exact number of bytes in lua.img, which I don't understand: why would the flash address change? Is the first number returned by file.fscfg not the starting flash address, but the ending flash address?
What is the correct address for flashing an image file, contain lua files, that was created by spiffsimg?
The version of spiffsimg found here will provide the correct address for flashing an image file that contains lua files.
Do not use this version of spiffsimg as it is out of date.
To install the spiffsimg utility, you need to download and install the entire nodemcu-firmware package (into a Linux environment, and use make to install - note: make on my Debian Linux box generated an error, but I was able to go into the ../tools/spiffsimg subdirectory and run make on the Makefile found in that directory to create the utility).
The spiffsimg instructions found here are quite clear, with one exception: the file name you specify, with the -f parameter, needs to include the characters %x. The %x will be replaced with the address that the image file should be flashed to.
For example, the command
spiffsimg -f %x-luaFiles.img -S 4MB -U 465783 -r lua.script
will create a file, in the local directory, with a name like: 80000-luaFiles.img. Which means you should install that image file at address 0x80000 on the ESP8266.
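Flashing that image then looks much like the esptool command from the question, just pointed at 0x80000 (the COM port and the flash size/mode flags here are assumptions that must match your board, so treat this as a sketch):
c:\Python27\python esptool.py -p COM7 write_flash -fs 32m -fm dio 0x80000 80000-luaFiles.img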
I've never done that myself but I'm reasonably confident the correct answer can be extracted from the docs.
-f specifies the filename for the disk image. '%x' will be replaced
by the calculated offset of the file system.
And a bit further down
The disk image file is placed into the bin directory and it is named
0x<offset>-<size>.bin where the offset is the location where it
should be flashed, and the size is the size of the flash part.
However, there's a slight mismatch between the two statements; we may have a bug in the docs. If "'%x' will be replaced..." then I'd expect the final name not to contain 0x anymore.
Furthermore, it is possible to define a fixed SPIFFS position when you build the firmware.
#define SPIFFS_FIXED_LOCATION 0x100000
This specifies that the SPIFFS filesystem starts at 1Mb from the start of the flash. Unless
otherwise specified, it will run to the end of the flash (excluding
the 16k of space reserved by the SDK).
How can we generate a Fortify report using the command line on Linux?
In the command, how can we include only certain folders or files for analysis, and how can we specify the location to store the report?
Thanks,
Karthik
1. Step#1 (clean cache)
You need to plan the scan structure before starting:
scanid = 9999 (can be anything you like)
ProjectRoot = /local/proj/9999/
WorkingDirectory = /local/proj/9999/working
(this dir is huge; you need to "rm -rf ./working && mkdir ./working" before every scan, or byte code piles up underneath this dir and consumes your hard disk fast)
log = /local/proj/9999/working/sca.log
source='/local/proj/9999/source/src/**.*'
classpath='local/proj/9999/source/WEB-INF/lib/*.jar; /local/proj/9999/source/jars/**.*; /local/proj/9999/source/classes/**.*'
./sourceanalyzer -b 9999 -Dcom.fortify.sca.ProjectRoot=/local/proj/9999/ -Dcom.fortify.WorkingDirectory=/local/proj/9999/working -logfile /local/proj/9999/working/sca.log -clean
It is important to specify ProjectRoot; if you do not override this system default, everything will be put under your home directory (~/.fortify).
The sca.log location is very important; if Fortify does not find this file, it cannot find the byte code to scan.
You can alter ProjectRoot and WorkingDirectory once and for all if you are the only user (in FORTIFY_HOME/Core/config/fortify_sca.properties).
In that case, your command line would be ./sourceanalyzer -b 9999 -clean
2. Step#2 (translate source code to byte code)
nohup ./sourceanalyzer -b 9999 -verbose -64 -Xmx8000M -Xss24M -XX:MaxPermSize=128M -XX:+CMSClassUnloadingEnabled -XX:+UseConcMarkSweepGC -XX:+UseParallelGC -Dcom.fortify.sca.ProjectRoot=/local/proj/9999/ -Dcom.fortify.WorkingDirectory=/local/proj/9999/working -logfile /local/proj/9999/working/sca.log -source 1.5 -classpath '/local/proj/9999/source/WEB-INF/lib/*.jar:/local/proj/9999/source/jars/**/*.jar:/local/proj/9999/source/classes/**/*.class' -extdirs '/local/proj/9999/source/wars/*.war' '/local/proj/9999/source/src/**/*' &
Always run it as a Unix background job (&); if your session to the server times out, it will keep working.
-classpath: put all your known classpath entries here for Fortify to resolve the function calls. If a function is not found, Fortify will skip the source code translation, so that part will not be scanned later. You will get poor scan quality but the FPR looks good (few issues reported). It is important to have all dependency jars in place.
-extdirs: put all directories/files you don't want to be scanned here.
The last section, the files between ' ', is your source.
-64 is to use 64-bit Java; if not specified, 32-bit will be used and the max heap should be < 1.3 GB (-Xmx1200M is safe).
The -XX options have the same meaning as when launching an application server; use these only to control the class heap and garbage collection, i.e. to tweak performance.
-source is the Java version (1.5 to 1.8).
3. Step#3 (scan with rulepack, custom rules, filters, etc)
nohup ./sourceanalyzer -b 9999 -64 -Xmx8000M -Dcom.fortify.sca.ProjectRoot=/local/proj/9999 -Dcom.fortify.WorkingDirectory=/local/proj/9999/working -logfile /local/proj/9999/working/sca.log -scan -filter '/local/other/filter.txt' -rules '/local/other/custom/*.xml' -f '/local/proj/9999.fpr' &
-filter: the file name must be filter.txt; any rule GUID in this file will not be reported.
-rules: these are the custom rules you wrote. The HP rulepack is in the FORTIFY_HOME/Core/config/rules directory.
-scan: keyword to tell the Fortify engine to scan the existing scan id. You can skip step #2 and only do step #3 if you did not change the code and just want to play with different filters/custom rules.
4. Step#4 Generate PDF from the FPR file (if required)
./ReportGenerator -format pdf -f '/local/proj/9999.pdf' -source '/local/proj/9999.fpr'
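If you need a different report layout, ReportGenerator can also take a report template, for example (the set of bundled templates can vary between Fortify versions, so treat this as a sketch):
./ReportGenerator -format pdf -f '/local/proj/9999.pdf' -source '/local/proj/9999.fpr' -template DeveloperWorkbook.xml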