We work in a team and run Fortify software locally on our machines. We all have our project code set up in different root directories, e.g. I have the project code at C:\work\development\ while a few of my colleagues have something like C:\Development\mainCodeLine\, i.e. the root folder where the project code resides differs. Initially only I was working with Fortify, but now many members of the team run it. We currently share the FPR file, which is saved in the repository. We download it from the repository and run SCA commands against the same file so as to retain details like hidden/suppressed issues. Over time we have observed that:
The Unique Instance ID that gets generated is unique to a single machine only, i.e. the Unique Instance ID stays the same across scans on my machine but changes when the scan is carried out on a team-mate's machine. Is there any way we can configure Fortify to keep it the same across multiple scans on multiple machines? Because of this we can't use the Unique Instance ID in the filter file.
If my team-mate and I run scans in parallel on two separate machines against the same code (only the project's root directory differs, as stated earlier), is there any way to integrate the two reports?
There are indeed ways to combine scan results generated on different machines. I believe the best way to accomplish this is to use Fortify Software Security Center (SSC). Users run "fresh" scans each time, and when the results are uploaded into a project in SSC they are merged, retaining any previous auditing information.
An alternative approach is to use the command-line FPRUtility. (I don't have an install in front of me at the moment, so the name might be slightly off, but it's in the bin directory along with sourceanalyzer and auditworkbench.) The -h option should provide the info to get started merging FPRs.
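For example, a plain merge of two result files looks something like this (the file names here are just placeholders):
FPRUtility -merge -project scanFromMachineA.fpr -source scanFromMachineB.fpr -f merged.fpr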
Hope this helps.
If the IIDs are changing because of a different overall root, that sounds like a bug. SCA normally uses the canonical root, so it shouldn't make any difference where the code is placed.
Xelco52 had it partially correct, but if you want to merge when they have different IIDs, it's best to use FPRUtility with the -forceMigration option, such as:
FPRUtility -merge -project Results1.fpr -source Results2.fpr -f mergedResults.fpr -forceMigration
You should also be able to get this effect in AWB by setting com.fortify.model.ForceIIDMigration=true in Core/config/fortify.properties (and restarting AWB).
Look into using HP Fortify Software Security Center (SSC) if possible. It lets users upload scans to a central repository and merge results. This helps create a running history of scans and lets you know who uploaded what.
Also this will allow your team to use a feature called "Collaborative Audit" that will let each developer pull the latest FPR down from Software Security Center (SSC) and into their IDE. Developers can then make changes and push back up to SSC where results are once again merged.
I don't think merge is the right approach. I would do it this way:
(1) Among all developers (user#), establish, on each developer's own machine, a naming convention for the ProjectRoot (pointing to user#'s code base, i.e. /home/user#/mycode) and the WorkingDirectory (i.e. /local/sharebuild)
(2) Each user runs the following commands on their own machine:
(2a) CLEAN CACHE: ~/sourceanalyzer -b user# -Dcom.fortify.sca.ProjectRoot=/home/user#/mycode -Dcom.fortify.WorkingDirectory=/local/sharebuild/ -logfile /local/sharebuild/user#.sca.log -clean
(2b) TRANSLATE: ~/sourceanalyzer -b user# -64 -Xmx11000M -Xss24M -Dcom.fortify.sca.ProjectRoot=/home/user#/mycode/ -Dcom.fortify.WorkingDirectory=/local/sharebuild/ -logfile /local/sharebuild/user#.sca.log -source 1.5 -cp 'your_class_path' -extdirs 'your *.war file' '/home/user#/mycode/**/*'
(3) INTEGRATE ALL INTERMEDIATE CODE ON THE BUILD MACHINE: each user copies his entire /local/sharebuild/sca#.## directory to the centralized build machine under /local/sharebuild/sca#.##/build/ (there you will find a subdirectory ./user# for each user's build ID, containing the whole intermediate code tree of .nst files); a copy-command sketch is shown after this list.
(4) SCAN: on the build server, run the scan with this command:
~/sourceanalyzer -b user1 -b user2 -b user3 -b user# -64 -Xmx11000M -Xss24M -Dcom.fortify.sca.ProjectRoot=/home/user#/mycode/ -Dcom.fortify.WorkingDirectory=/local/sharebuild/ -logfile /local/sharebuild/scan.sca.log -scan -f build_all.fpr
Step 4 will pick up all the .nst (normalized syntax tree) files and perform the scan.
If each user mounts his portion of the code onto the centralized machine in step 2a, then step 3 can be omitted.
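As a rough sketch for step 3 (the host name, user and the sca#.## version directory are placeholders for your environment), each user could push his intermediate files to the build server with something like:
scp -r /local/sharebuild/sca#.##/build/user1 builduser@buildserver:/local/sharebuild/sca#.##/build/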
We want to have the same VSCode settings for the whole crew of developers. It would also be nice to have a one-line command to tear VSCode down and restart it from scratch with predefined settings and plugins, so that you don't have to worry about trying out plugins and getting back to a known state. Kind of Config-as-Code for VSCode.
I already found:
https://code.visualstudio.com/docs/editor/extension-gallery#_command-line-extension-management
https://github.com/microsoft/vscode-dev-containers
https://marketplace.visualstudio.com/items?itemName=Shan.code-settings-sync&ssr=false#qna
https://github.com/gantsign/ansible-role-visual-studio-code-extensions
https://code.visualstudio.com/docs/remote/containers
https://github.com/gantsign/ansible-role-visual-studio-code
But none of these provides a good solution for me.
We are using Mac and Windows machines and develop most of the time locally (not remotely in the cloud or the like).
I imagine having a script like
.... projectname up
or
.... projectname reset
(or
.... projectname down)
to receive/reset the configured settings and newest plugins that have been configured for the project.
Do you have any ideas, or do you already use a similar solution?
After doing a lot of research, playing with Docker, Ansible and so on... it seems that, although I excluded it at first, the Settings Sync plugin from Shan Khan is the way to go. It has roughly 1 million installs!
The only dependency is that you need a GitHub account to host your configs. That is what held me back at first, but it shouldn't be much of a problem to get one for everyone on the team and connect it to something like a company GitHub account.
Copy the files settings.json and keybindings.json to your target machine(s) to copy the settings. You can find those files here:
Win: ~\AppData\Roaming\Code\User
Mac: ~/Library/Application Support/Code/User/
Linux: ~/.config/Code/User
You can copy extensions from ~/.vscode/extensions or C:\Users\username\.vscode\extensions on Linux/Mac or Windows, respectively.
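Alternatively, if the code command is on your PATH, you can dump the list of installed extensions on one machine and reinstall them on another (a minimal sketch; extensions.txt is an arbitrary file name):
code --list-extensions > extensions.txt
xargs -n 1 code --install-extension < extensions.txt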
FalcoGer's answer should explain how to copy the files in a way that VS Code will pick them up. If you only need to copy the config files once, this solution is fine.
If you need to "sync" these config files on a regular basis, I would advise to create a Git repository where all config files will be stored.
When cloning the repo to local machines, you can symlink the files to the config destinations (see FalcoGer's answer). Then when you need to "sync", you only have to run git pull and restart VS Code to apply the changes.
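On a Mac, for example, the symlinks could look roughly like this (assuming the repo is cloned to ~/vscode-config; adjust the targets to the per-OS paths listed above):
ln -s ~/vscode-config/settings.json ~/Library/Application\ Support/Code/User/settings.json
ln -s ~/vscode-config/keybindings.json ~/Library/Application\ Support/Code/User/keybindings.json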
For your other script-related question, you could create a CLI for this. Python would be the most friendly way to do this. You can find an example here.
I don't understand the need for, or use of, the git unpack-objects command.
If I have a pack file outside of my repository and run git unpack-objects on it, the pack file will be "decompressed" and all the object files will be placed in .git/objects. But what is the need for this? If I just place the pack and index files in .git/objects/pack I can still see all the commits, have a functional repo, and have less space occupied by my .git since the pack file is compact.
So why would anyone need to run this command?
The pack file uses the same format that is used for normal transfer over the network. So I can think of two main reasons to use the manual command instead of the network:
having a similar update workflow in an environment without a network connection between the machines, or where the network cannot be used for other reasons
debugging/inspecting the contents of the transfer
For 1), you could just use a disk or any kind of removable media for your files. It could be encrypted, for instance.
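A rough sketch of that kind of offline transfer (the branch and file names are only examples):
# on the source machine, pack every object reachable from master
git rev-list --objects master | git pack-objects /tmp/transfer
# carry the resulting /tmp/transfer-<sha1>.pack over on removable media,
# then in the target repository expand it into loose objects
git unpack-objects < transfer-<sha1>.pack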
I'm building a Yocto image for a project but it's a long process. On my powerful dev machine it takes around 3 hours and can consume up to 100 GB of space.
The thing is that the final image is not necessarily the end goal; it's my application that runs on top of it that is important. As such, the Yocto recipes don't change much, but my application does.
I would like to run continuous integration (CI) for my app and even continuous delivery (CD). But both are quite hard for now because of the size of the Yocto build.
Since the build does not change much, I thought of "caching" it in some way and using it for my application's CI/CD, and I thought of Docker. That would be quite interesting, as I could maintain that image, share it with colleagues who need to work on the project, and use it in CI/CD.
Could a custom Docker image be built for that kind of use?
Would it be possible to build such an image completely offline? I don't want to have to upload the 100GB and have to re-download it on build machines...
Thanks!
1. Yes.
I've used Docker to build Yocto images for many different reasons, always with positive results.
2. Yes, with some work.
You want to take advantage of the fact that Yocto caches all the stuff you need to do your build in what it calls "Shared State Cache". This is normally located in your build directory under ${BUILDDIR}/sstate-cache, and it contains exactly what you are looking for in this case. There are a couple of options for how to get these files to your build machines.
Option 1 is using sstate mirrors:
This isn't completely offline, but lets you download a much smaller cache and build from that cache, rather than from source.
Here's what's in my local.conf file:
SSTATE_MIRRORS ?= "\
file://.* http://my.shared-computer.com/some-folder/PATH"
Don't forget the PATH at the end. That is required. The build system substitutes the correct path within the directory structure.
Option 2 lets you keep a local copy of your sstate-cache and build from that locally.
In your Dockerfile, create the sstate-cache directory (the location isn't important here; I like /opt for my purposes):
RUN mkdir -p /opt/yocto/sstate-cache
Then be sure to bind-mount these directories when you run your build, in order to preserve the contents, like this:
docker run ... -v /place/to/save/cache:/opt/yocto/sstate-cache
Edit the local.conf in your build directory so that it points at these folders:
SSTATE_DIR ?= "/opt/yocto/sstate-cache"
In this way, you can get your cache onto your build machines in whatever way is best for you (scp, nfs, sneakernet).
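As an illustration of option 2, a minimal Dockerfile could look roughly like this (the base image, package list and user name are assumptions; adjust them for your distro and Yocto release):
FROM ubuntu:20.04
ARG DEBIAN_FRONTEND=noninteractive
# packages commonly needed by BitBake builds; trim or extend as required
RUN apt-get update && apt-get install -y \
        gawk wget git diffstat unzip texinfo gcc build-essential chrpath \
        socat cpio python3 python3-pip xz-utils file locales && \
    locale-gen en_US.UTF-8
ENV LANG=en_US.UTF-8
# BitBake refuses to run as root, so build as an unprivileged user
RUN useradd -m builder && \
    mkdir -p /opt/yocto/sstate-cache && \
    chown -R builder:builder /opt/yocto
USER builder
WORKDIR /home/builder
The build itself then runs inside the container with the bind-mount shown above, so the sstate-cache survives between container runs.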
Hope this helps!
I would like to move metadata from one Salesforce Dev org to another Dev org using the ANT Migration Tool. I would like to auto-generate a package.xml file that covers all custom fields, custom objects, and other custom components, so that I can easily move from the source org to the target org without facing dependency issues.
There are a lot of answers here, so I would like to rank them.
The simplest way, I think, is to use Ben Edwards's Heroku service: https://packagebuilder.herokuapp.com/
Another option is to use the npm module provided by Matthias Rolke.
To grab a full package.xml use force-dev-tool, see: https://github.com/amtrack/force-dev-tool.
npm install --global force-dev-tool
force-dev-tool remote add mydev user pass --default
force-dev-tool fetch --progress
force-dev-tool package -a
You will now have a full src/package.xml.
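For reference, the generated package.xml is just an XML manifest listing metadata types; a minimal hand-written example (the types and API version here are arbitrary) looks like this:
<?xml version="1.0" encoding="UTF-8"?>
<Package xmlns="http://soap.sforce.com/2006/04/metadata">
    <types>
        <members>*</members>
        <name>CustomObject</name>
    </types>
    <types>
        <members>*</members>
        <name>ApexClass</name>
    </types>
    <version>45.0</version>
</Package>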
A JAR file provided by Kim Galant:
Here's a ready-made Java JAR that you point at an org (through properties files) and tell what metadata types to look for; it then goes off, inventories your org, and builds a package.xml for you based on the types you've specified. It even has a handy feature that lets you skip certain things based on a regular expression, so you can easily exclude e.g. managed packages, or some custom namespaces (say you prefix a bunch of things that belong together with CRM_), from the generated package.
So a command line like this:
java -jar PackageBuilder.jar [-o <parameter file1>,<parameterfile2>,...] [-u <SF username>] [-p <SF password>] [-s <SF url>] [-a <apiversion>] [-mi <metadataType1>,<metadataType2>,...] [-sp <pattern1>,<pattern2>,...] [-d <destinationpath>] [-v]
will spit out a nice up-to-date package.xml for your ANT pleasure.
Another way is to use Ant: https://www.jitendrazaa.com/blog/salesforce/auto-generate-package-xml-using-ant-complete-source-code-and-video/
I had an idea to create a service competing with the ones mentioned above, but I dropped that project (I didn't finish the part that retrieves all components from reports and dashboards).
There is a VS Code extension that lets you choose components and generate a package.xml file using point and click:
Salesforce Package.xml generator for VS Code
https://marketplace.visualstudio.com/items?itemName=VignaeshRamA.sfdx-package-xml-generator
I am affiliated with this free VS Code extension as its developer.
I'm attempting to sync IntelliJ's built-in TFS plugin workspace with the one used by TEE's command-line 'tf' command on OS X Mountain Lion, and failing miserably.
This question appears to be very similar to mine; however, it has no reference to what one should do when the computer name reported by each tool is different.
IntelliJ says my computer name is the fully qualified domain name (e.g. hostname.domain.com), whereas the 'tf workspaces' command reports the computer name to be just the hostname (e.g. hostname). Consequently, they are unable to use the same workspace. I know that you can change the computer name of a workspace, but I'd like to use both at the same time, as we have some Ant tasks using the 'tf' command locally. The Windows users in our group are able to do this just fine.
Is there any way to make these tools report the same thing for the computer name? I believe I could then use the 'tf workspaces' command, enabling me to use both at the same time in the same workspace. Much obliged.
It's not supported (according to the responsible developer). Please submit a request and we'll see what can be done to make it work.
Team Explorer Everywhere allows you to override your local hostname with the computerName system property. You can edit your tf launcher script to match what IntelliJ is using. You can change the last few lines of the file to be:
exec java -Xmx512M -classpath "$CLC_CLASSPATH" \
-DcomputerName=`hostname -f` \
"-Dcom.microsoft.tfs.jni.native.base-directory=$BASE_DIRECTORY/native" \
$RANDOM_DEVICE_PROPERTY com.microsoft.tfs.client.clc.vc.Main "$@"
If hostname -f does not actually report the same hostname that IntelliJ determines, you can of course simply hardcode it instead.
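For example, hardcoding the fully qualified name (hostname.domain.com here is just a stand-in for your machine's actual FQDN):
exec java -Xmx512M -classpath "$CLC_CLASSPATH" \
    -DcomputerName=hostname.domain.com \
    "-Dcom.microsoft.tfs.jni.native.base-directory=$BASE_DIRECTORY/native" \
    $RANDOM_DEVICE_PROPERTY com.microsoft.tfs.client.clc.vc.Main "$@"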