How to get vimdiff to apply both REMOTE and LOCAL changes? - vimdiff

When resolving a git conflict using vimdiff, is it possible to apply changes from both buffers? I tried
:diffget RE LO
and hoped that it would apply the changes in that order, so
<some content>
<REMOTE>
<LOCAL>
<rest of the file>
but it failed:
E94: No matching buffer for RE LO

Related

gsutil rsync tries to re-upload everything after migrating source to new storage

I have a substantial (~1 TB) directory that already has a backup on Google archive storage. For space reasons on the local machine, I had to migrate the directory somewhere else, but now when I try to run the script that was synchronizing it to the cloud (using the new directory as source) it attempts to upload everything. I guess the problem lies with the timestamps on the migrated files, because when I experiment with "-c" (CRC comparison) it works fine but is just far too slow to be workable (even with compiled CRC).
By manually inspecting timestamps, it seems they were copied across well (I used robocopy /mir for the migration), so which timestamp exactly is upsetting/confusing gsutil?
I see a few ways out of this:
Finding a way to preserve the original timestamps on copy (I still have the original folder, so that's an option)
Somehow convincing gsutil to only patch the timestamps of the cloud files, or to fall back to size-only comparison
Biting the bullet and re-uploading everything
I will appreciate any suggestions.
Command used for the migration:
robocopy SOURCE TARGET /mir /unilog+:robocopy.log /tee
Also tried:
robocopy SOURCE TARGET /mir /COPY:DAT /DCOPY:T /unilog+:robocopy.log /tee
Command used for the sync with Google:
gsutil -m rsync -r "source" "gs://MYBUCKET/target"
So it turns out that even when you try to sync the timestamps, they end up different:
>>> os.stat(r'file.copy')
nt.stat_result(st_mode=33206, ... st_size=1220431L, st_atime=1606987626L, st_mtime=1257521848L, st_ctime=1512570325L)
>>> os.stat(r'file.original')
nt.stat_result(st_mode=33206, ... st_size=1220431L, st_atime=1606987624L, st_mtime=1257521847L, st_ctime=1512570325L)
You can clearly see that mtime and atime are just fractionally off (later).
Trying to sync them:
>>> os.utime(r'file.copy', (1606987626, 1257521847))
>>> os.stat(r'file.copy')
nt.stat_result(st_mode=33206, ... st_size=1220431L, st_atime=1606987626L, st_mtime=1257521848L, st_ctime=1512570325L)
results in mtime still being off, but if I go a bit further back in time:
>>> os.utime(r'file.copy', (1606987626, 1257521845))
>>> os.stat(r'file.copy')
nt.stat_result(st_mode=33206, ... st_size=1220431L, st_atime=1606987626L, st_mtime=1257521846L, st_ctime=1512570325L)
It changes, but still not accurate.
However, now after taking it back in time, I can use the "-u" switch to ignore newer files in the destination:
gsutil -m rsync -u -r "source" "gs://MYBUCKET/target"
Script that fixes the timestamps for all files in the target:
import os

SOURCE = r'source'
TARGET = r'target'

file_count = 0
diff_count = 0

# Walk the source tree and, for every file whose copy in TARGET ended up with a
# newer mtime, nudge the target mtime back until it is no longer newer.
for root, dirs, files in os.walk(SOURCE):
    for name in files:
        file_count += 1
        source_filename = os.path.join(root, name)
        target_filename = source_filename.replace(SOURCE, TARGET)
        try:
            source_stat = os.stat(source_filename)
            target_stat = os.stat(target_filename)
        except WindowsError:
            # file missing on one side; skip it
            continue
        delta = 0
        while source_stat.st_mtime < target_stat.st_mtime:
            diff_count += 1
            #print source_filename, source_stat
            #print target_filename, target_stat
            print 'patching', target_filename
            os.utime(target_filename, (source_stat.st_atime, source_stat.st_mtime - delta))
            target_stat = os.stat(target_filename)
            delta += 1
print file_count, diff_count
It's far from perfect, but running the command no longer results in everything trying to sync. Hopefully someone will find that useful; other solutions are still welcome.
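For completeness, a minimal sketch of chaining the two steps (the timestamp fix-up script above, then the -u rsync) from Python. This is my own illustration, not part of the original answer; the paths are placeholders, and on Windows you may need the full path to gsutil.cmd:

import subprocess

SOURCE = r'source'               # local directory (placeholder, same as in the script above)
BUCKET = 'gs://MYBUCKET/target'  # destination bucket path (placeholder)

# With local mtimes now at or below the originals, -u makes rsync skip
# files whose copy in the destination is newer.
subprocess.check_call(['gsutil', '-m', 'rsync', '-u', '-r', SOURCE, BUCKET])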

How to check the blockchain height in hyperledger-fabric

I am playing with hyperledger-fabric v1.0 - actually a newbie. How can I check the chain height? Is there a command or something that I can use to "ask" about the blockchain height? Thanks in advance.
Well, you have a few options for how you can do it:
You can leverage the peer CLI command-line tool to obtain the latest available block by running
peer channel fetch newest -o ordererIP:7050 -c mychannel last.block
Next, you can leverage configtxlator to decode the content of the block as follows:
curl -X POST --data-binary @last.block http://localhost:7059/protolator/decode/common.Block
(note you need to start configtxlator first)
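If you want to pull the height out of the decoded block programmatically, here is a minimal sketch of my own (an assumption, not part of the original answer) using Python and the requests library. It assumes configtxlator is listening on localhost:7059, last.block was fetched as above, and the decoded common.Block JSON exposes the block number under header.number:

import requests  # third-party; pip install requests

# Decode the fetched block via configtxlator's REST endpoint (assumed running on :7059)
with open('last.block', 'rb') as f:
    resp = requests.post('http://localhost:7059/protolator/decode/common.Block',
                         data=f.read())
resp.raise_for_status()

block = resp.json()
number = int(block['header']['number'])  # number of the newest block
print('latest block number: %d, chain height: %d' % (number, number + 1))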
An alternative path assumes you are going to use one of the available SDKs to invoke QSCC (Query System ChainCode) with the GetChainInfo command. This will return the following structure:
type BlockchainInfo struct {
    Height            uint64 `protobuf:"varint,1,opt,name=height" json:"height,omitempty"`
    CurrentBlockHash  []byte `protobuf:"bytes,2,opt,name=currentBlockHash,proto3" json:"currentBlockHash,omitempty"`
    PreviousBlockHash []byte `protobuf:"bytes,3,opt,name=previousBlockHash,proto3" json:"previousBlockHash,omitempty"`
}
which has information about the current ledger height.
Another alternative:
Using the peer CLI command line (for example, docker exec -it cli bash) you can do:
peer channel getinfo -c mychannel
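If you need the height in a script rather than on screen, here is a small sketch of my own (the output format is my assumption, not part of the original answer) that shells out to the peer CLI and parses the "Blockchain info: {...}" line it prints:

import json
import subprocess

CHANNEL = 'mychannel'  # placeholder channel name

# Capture stdout and stderr together, since logging output may go to stderr
# depending on the peer version.
out = subprocess.check_output(['peer', 'channel', 'getinfo', '-c', CHANNEL],
                              stderr=subprocess.STDOUT).decode()

# Expect a line like: Blockchain info: {"height":34,"currentBlockHash":"...", ...}
for line in out.splitlines():
    if 'Blockchain info:' in line:
        info = json.loads(line.split('Blockchain info:', 1)[1])
        print('height: %s' % info['height'])
        break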
It seems that I found something - maybe cumbersome, but better than nothing:
Command:
docker logs -f peer0.org1.example.com 2>&1 | grep blockNo
Check for the "latest" line in the output, something like:
2017-07-18 19:40:39.586 UTC [historyleveldb] Commit -> DEBU b75b Channel [mychannel]: Updates committed to history database for blockNo [34]
So, if I am not wrong, in this case the block height is: 34
Thanks
You can use blockchain-explorer (a UI tool):
https://github.com/hyperledger/blockchain-explorer
You should also be able to use the fabric CORE API (JSON/REST).
See the docs for the Blockchain GET/chain operation at:
https://github.com/hyperledger-archives/fabric/blob/master/docs/API/CoreAPI.md#rest-api

Interpreting Fortify results file (.fpr) through command line

As part of automating the process of running secure code analysis, I have a Jenkins job which uses the sourceanalyzer command-line tool to generate an .fpr results file. At the moment I'm opening this results file in the Audit Workbench application to view the results and check if there are any newly introduced issues etc., and generating a report from there in PDF/XML format.
Does anyone know if it is possible to invoke Audit Workbench through the command line and generate a report on the issues, which we could then leverage through a Jenkins script and also mail the results? Looking online, the command-line usage seems to stop at the FPR generation stage.
Thanks in advance!
There is a command-line utility to generate a report from the FPR file.
Currently there are two report generators: Legacy and BIRT. The BIRT report engine was introduced into Audit Workbench with version 4.40.
Here is an example using the BIRT report engine to generate a DISA STIG report:
BIRTReportGenerator -template "DISA STIG" -source HelloWorld_second.fpr
-output BirtReport.pdf -format PDF -showSuppressed --Version "DISA STIG 3.9"
-UseFortifyPriorityOrder
Using the legacy one is a little more involved. The command is:
ReportGenerator -format pdf -f LegacyReport.pdf -source HelloWorld_second.fpr
-template DisaStig3.10.xml -showSuppressed -showHidden
You can either use one of the predefined report templates located in the <SCA Install Dir>/Core/config/reports directory, or generate one using the Report Wizard and save the template, which gets stored in the C:\Users\<USER>\AppData\Local\Fortify\config\AWB-XX.XX\reports\ directory on Windows.
On Linux/Mac, look at the configuration file <SCA Install Dir>/Core/config/fortify.properties for the com.fortify.WorkingDirectory property; this is where the reports will be stored.
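To tie this back to the Jenkins job in the question, a rough wrapper sketch using Python's subprocess (my own illustration; the file names and template are placeholders, and ReportGenerator must be on the PATH or called by its full install path):

import subprocess

FPR = 'HelloWorld_second.fpr'  # FPR produced by the sourceanalyzer step (placeholder)
REPORT = 'scan_report.pdf'     # output report path (placeholder)
TEMPLATE = 'DisaStig3.10.xml'  # a template from Core/config/reports (placeholder)

# Generate the PDF; a non-zero exit status will fail the Jenkins build step.
subprocess.check_call([
    'ReportGenerator',
    '-format', 'pdf',
    '-f', REPORT,
    '-source', FPR,
    '-template', TEMPLATE,
])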
@SBurris,
If you don't want to show Suppressed/Hidden issues, is it just -hideSuppressed and -hideHidden?
Also, is there a way to add custom filters to not show things like "nones" from the STIG/SANS/OWASP, like you can create in the AWB GUI?
Basically, I need a command (or commands) to merge two FPRs and then compare them based on what is newly found in the scanned code vs. the old FPR.
Merge should be:
FPRUtility -merge -project <newest_scan.fpr> -source <previous_scan.fpr> -f <BUILDXX_MergedWith_BUILDXY.fpr>
The custom filter I need after the merge is:
"[OWASP Top 10 2013]:!<none> OR [SANS Top 25 2011]:!<none> OR [STIG 3.9]:!<none> AND [Detected On]:!/^/"
Where the Detected On field is a custom tag that I need to carry through from the previous FPR file into the newly merged one.
AND THEN output the report from that newly merged FPR in PDF and XML format to a location/filename I specify. Something along the lines of:
~AWB_Installation_Dir/bin/ReportGenerator -format pdf -f [BUILDXX_MergedWith_BUILDXY].pdf -source output.fpr
-template DisaStig3.10.xml -hideSuppressed -hideHidden
Obviously this can be a multitude of commands as long as we can get it back to Bamboo. Any help would be greatly appreciated. Thanks.
FPRUtility interprets the space-separated conditions in the -information -search -query ... parameter by applying the boolean AND operator. To obtain a union of 2 conditions A || B, I figured I could intersect negations of other conditions that complement the former: !C && !D (where A || B || C || D always holds true). I.e., to find all high and critical issues, I use
FORTIFY_ROOT\jre\bin\java -d64 -Xmx4096M -jar FORTIFY_ROOT\Core\lib\exe\fpr-utility-exe.jar -project APP_VER_DATE.fpr -information -search -query "[OWASP Top 10 2017]:A [fortify priority order]:!low [fortify priority order]:!medium" -categoryIssueCounts -listIssues > issues.txt
In case of an audit, I figured I needed the older report-generation utility to include suppressed issues (and their comments):
sed -e 's/\(IssueListing limit=\)"[^"]\+"/\1"-1"/' -i "FORTIFY_ROOT/Core/config/reports/DeveloperWorkbook.xml"
cmd /c call ReportGenerator -template DeveloperWorkbookAll.xml -format pdf -source APP_VER_DATE.fpr -showSuppressed -f "APP_VER_DATE_with_suppressed.pdf"

Fortify, how to start analysis through command

How can we generate a Fortify report using the command line on Linux?
In the command, how can we include only some folders or files for analysis, and how can we specify the location to store the report, etc.?
Please help.
Thanks,
Karthik
1. Step#1 (clean cache)
You need to plan the scan structure before starting:
scanid = 9999 (can be anything you like)
ProjectRoot = /local/proj/9999/
WorkingDirectory = /local/proj/9999/working
(this dir is huge; you need to "rm -rf ./working && mkdir ./working" before every scan, or byte code piles up underneath this dir and consumes your hard disk fast)
log = /local/proj/9999/working/sca.log
source='/local/proj/9999/source/src/**.*'
classpath='local/proj/9999/source/WEB-INF/lib/*.jar; /local/proj/9999/source/jars/**.*; /local/proj/9999/source/classes/**.*'
./sourceanalyzer -b 9999 -Dcom.fortify.sca.ProjectRoot=/local/proj/9999/ -Dcom.fortify.WorkingDirectory=/local/proj/9999/working -logfile /local/proj/working/9999/working/sca.log -clean
It is important to specify ProjectRoot; if you do not override this system default, it will be put under your /home/user/.fortify.
The sca.log location is very important; if Fortify does not find this file, it cannot find the byte code to scan.
You can alter the ProjectRoot and WorkingDirectory once and for all if you are the only user (FORTIFY_HOME/Core/config/fortify_sca.properties).
In that case, your command line would be ./sourceanalyzer -b 9999 -clean
2. Step#2 (translate source code to byte code)
nohup ./sourceanalyzer -b 9999 -verbose -64 -Xmx8000M -Xss24M -XX:MaxPermSize=128M -XX:+CMSClassUnloadingEnabled -XX:+UseConcMarkSweepGC -XX:+UseParallelGC -Dcom.fortify.sca.ProjectRoot=/local/proj/9999/ -Dcom.fortify.WorkingDirectory=/local/proj/9999/working -logfile /local/proj/9999/sca.log -source 1.5 -classpath '/local/proj/9999/source/WEB-INF/lib/*.jar:/local/proj/9999/source/jars/**/*.jar:/local/proj/9999/source/classes/**/*.class' -extdirs '/local/proj/9999/source/wars/*.war' '/local/proj/9999/source/src/**/*' &
Always run it as a Unix background job (&); in case your session to the server times out, it will keep working.
-classpath: put all your known classpath entries here for Fortify to resolve the function calls. If a function is not found, Fortify will skip the source code translation, so this part will not be scanned later. You will get poor scan quality but the FPR looks good (few issues reported). It is important to have all dependency jars in place.
-extdirs: put all directories/files you don't want to be scanned here.
The last section, the files between ' ', is your source.
-64 is to use 64-bit Java; if not specified, 32-bit will be used and the max heap should be <1.3 GB (-Xmx1200M is safe).
The -XX: flags have the same meaning as when launching an application server; only use these to control the class heap and garbage collection. This is to tweak performance.
-source is the Java version (1.5 to 1.8).
3. Step#3 (scan with rulepack, custom rules, filters, etc)
nohup ./sourceanalyzer -b 9999 -64 -Xmx8000M -Dcom.fortify.sca.ProjectRoot=/local/proj/9999 -Dcom.fortify.WorkingDirectory=/local/proj/9999/working -logfile /local/ssap/proj/9999/working/sca.log -scan -filter '/local/other/filter.txt' -rules '/local/other/custom/*.xml' -f '/local/proj/9999.fpr' &
-filter: the file name must be filter.txt; any rule GUID in this file will not be reported.
-rules: this is the custom rule you wrote. The HP rulepack is in the FORTIFY_HOME/Core/config/rules directory.
-scan: the keyword to tell the Fortify engine to scan the existing scan ID. You can skip step#2 and only do step#3 if you did not change code and just want to play with different filters/custom rules.
4. Step#4 Generate PDF from the FPR file (if required)
./ReportGenerator -format pdf -f '/local/proj/9999.pdf' -source '/local/proj/9999.fpr'
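Putting the four steps together, a rough driver sketch built only from the commands above (my own illustration, not part of the original answer): it assumes it is run from the Fortify bin directory, uses build ID 9999 and the same placeholder paths, and omits the JVM tuning flags from step 2 for brevity.

import subprocess

BUILD_ID = '9999'
PROJECT_ROOT = '/local/proj/9999/'
WORKING_DIR = '/local/proj/9999/working'
FPR = '/local/proj/9999.fpr'
PDF = '/local/proj/9999.pdf'

common = [
    '-b', BUILD_ID,
    '-Dcom.fortify.sca.ProjectRoot=' + PROJECT_ROOT,
    '-Dcom.fortify.WorkingDirectory=' + WORKING_DIR,
]

# Step 1: clean the previous scan state
subprocess.check_call(['./sourceanalyzer'] + common + ['-clean'])

# Step 2: translate source to byte code (classpath and source patterns as in the
# example above; sourceanalyzer expands the ** patterns itself, so they stay quoted)
subprocess.check_call(['./sourceanalyzer'] + common + [
    '-source', '1.5',
    '-classpath', '/local/proj/9999/source/WEB-INF/lib/*.jar',
    '/local/proj/9999/source/src/**/*',
])

# Step 3: scan with the filter file and custom rules, writing the FPR
subprocess.check_call(['./sourceanalyzer'] + common + [
    '-scan',
    '-filter', '/local/other/filter.txt',
    '-rules', '/local/other/custom/*.xml',
    '-f', FPR,
])

# Step 4: generate the PDF report from the FPR
subprocess.check_call(['./ReportGenerator', '-format', 'pdf', '-f', PDF, '-source', FPR])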

roslaunch failed: cannot launch node

I have downloaded and compiled some ROS nodes from here (just to have more info). I am trying to launch the five ROS nodes with parameters using a launch file that is taken from that repo.
After executing source catkin_ws/devel_isolated/setup.bash and then roslaunch crab.launch (the launch file from the link above), the following error appears:
root@beaglebone:~# roslaunch crab.launch
... logging to /root/.ros/log/4f6332fe-dbe2-11e3-86a8-7ec70b079d59/roslaunch-beaglebone-2067.log
Checking log directory for disk usage. This may take awhile.
Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.
started roslaunch server http://beaglebone:58881/
SUMMARY
========
PARAMETERS
* /clearance
* /duration_ripple
* /duration_tripod
* /joint_lower_limit
* /joint_upper_limit
* /port_name
* /robot_description
* /rosdistro
* /rosversion
* /trapezoid_h
* /trapezoid_high_radius
* /trapezoid_low_radius
NODES
/
crab_body_kinematics (crab_body_kinematics/body_kinematics)
crab_gait (crab_gait/gait_kinematics)
crab_imu (crab_imu/imu_control)
crab_leg_kinematics (crab_leg_kinematics/leg_ik_service)
crab_maestro_controller (crab_maestro_controller/controller_sub)
ROS_MASTER_URI=http://localhost:11311
core service [/rosout] found
ERROR: cannot launch node of type [crab_leg_kinematics/leg_ik_service]: can't locate node [leg_ik_service] in package [crab_leg_kinematics]
ERROR: cannot launch node of type [crab_maestro_controller/controller_sub]: can't locate node [controller_sub] in package [crab_maestro_controller]
ERROR: cannot launch node of type [crab_body_kinematics/body_kinematics]: can't locate node [body_kinematics] in package [crab_body_kinematics]
ERROR: cannot launch node of type [crab_gait/gait_kinematics]: can't locate node [gait_kinematics] in package [crab_gait]
ERROR: cannot launch node of type [crab_imu/imu_control]: can't locate node [imu_control] in package [crab_imu]
I have reinstalled the packages as suggested in some other threads about similar problems.
I have also noticed that:
1º- If I move all the executables of the nodes to the folder src/<package>/, I'm able to execute roslaunch crab.launch. But I don't want to leave it like that; it's not a proper way to work ;)
Additional info:
2º- If I execute, for example, source devel_isolated/<package>/setup.bash and then roslaunch crab.launch, the package which I have just source-d works and executes... (while the others still don't)
3º- So I have source-d all the devel_isolated/<package>/setup.bash files and tried again: none of them worked this time.
This leads me to think that the problems are due to the ROS environment variables: if I run export | grep ROS after 2º, I can see that the package's path appears in ROS_PACKAGE_PATH and the other packages' paths are not there:
root@beaglebone:~# export | grep ROS
declare -x ROS_DISTRO="hydro"
declare -x ROS_ETC_DIR="/opt/ros/hydro/etc/ros"
declare -x ROS_MASTER_URI="http://localhost:11311"
declare -x ROS_PACKAGE_PATH="/root/catkin_ws/src/crab_msgs:/root/catkin_ws/src/joy:/root/catkin_ws
/src/ps3joy:/root/catkin_ws/src/xacro:/root/catkin_ws/src/roslint:/root/catkin_ws/src/kdl_parser:/root/catkin_ws
/src/urdf:/root/catkin_ws/src/urdf_parser_plugin:/root/catkin_ws/src:/opt/ros/hydro/share:/opt/ros/hydro
/stacks:/root/ros_catkin_ws/install_isolated/share:/root/ros_catkin_ws/install_isolated/stacks"
declare -x ROS_ROOT="/opt/ros/hydro/share/ros"
declare -x ROS_TEST_RESULTS_DIR="/root/catkin_ws/build_isolated/crab_msgs/test_results"
root@beaglebone:~# source catkin_ws/devel_isolated/crab_imu/setup.bash
declare -x ROS_PACKAGE_PATH="/root/catkin_ws/src/crab_imu:/root/catkin_ws/src/crab_msgs:/root/catkin_ws
/src/joy:/root/catkin_ws/src/ps3joy:/root/catkin_ws/src/xacro:/root/catkin_ws/src/roslint:/root/catkin_ws
/src/kdl_parser:/root/catkin_ws/src/urdf:/root/catkin_ws/src/urdf_parser_plugin:/root/catkin_ws/src:/opt
/ros/hydro/share:/opt/ros/hydro/stacks:/root/ros_catkin_ws/install_isolated/share:/root/ros_catkin_ws
/install_isolated/stacks"
declare -x ROS_TEST_RESULTS_DIR="/root/catkin_ws/build_isolated/crab_imu/test_results"
It seems that 3º overwrites the source executed before, meaning that ROS_PACKAGE_PATH does not contain all the packages it should.
I have also tried to force ROS_PACKAGE_PATH using the export command, but it didn't work. So I would have to change more environment variables apart from that one, but I don't know which ones...
So, I don't know if my diagnosis is correct and, if so, what I should do to correct this... I hope I have gathered enough info.
Thanks in advance!!
Iñigo
Set the executable bit for the files. Most probably you need to set executable permissions for the node files:
chmod +x filename
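If several node files are missing the bit, here is a small sketch of my own (the directory and the file filter are placeholders; adjust them to your workspace and to how your node executables are named) that walks a directory tree and sets the executable permission:

import os
import stat

# Placeholder: point this at wherever your node scripts/executables live,
# e.g. the package source or devel_isolated trees of the catkin workspace.
NODES_DIR = '/root/catkin_ws/src'

for root, dirs, files in os.walk(NODES_DIR):
    for name in files:
        if name.endswith('.py'):  # adjust this filter to match your node files
            path = os.path.join(root, name)
            mode = os.stat(path).st_mode
            os.chmod(path, mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)
            print('made executable: %s' % path)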
