When FitNesse is started with the -e 0 option, the .zip version files are not created by default.
How can we then track versions of wiki pages in FitNesse?
java -cp !CLASSPATH! fitnesseMain.FitNesseMain -p 9090 -e 0
I usually use 'normal' version control (e.g. Git or Subversion) on the wiki files in the project. This also has the advantage that you choose when to commit the changes you make (i.e. not every edit is committed, but only when one or more pages are 'done' and tried locally), and you can provide explicit messages describing what was changed and why in each commit.
You probably don't want to store all files in the wiki in version control, but only the test pages and suites you make and the files in the 'files section'. The .gitignore file I usually use can be found at: https://github.com/fhoeben/sample-fitnesse-project/blob/master/.gitignore. (For Subversion you can set an svn:ignore property instead of using a .gitignore file.)
I expect you want to exclude at least:
fitnesse-standalone.jar
FitNesseRoot/**/*.zip
FitNesseRoot/files/testResults
FitNesseRoot/files/testProgress
FitNesseRoot/ErrorLogs
FitNesseRoot/FitNesse
FitNesseRoot/RecentChanges*
updateDoNotCopyOverList
updateList
properties
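As a sketch (the project path and page names below are made up for illustration), putting the wiki under Git with these exclusions and verifying the ignore rules could look like:

```shell
# Sketch: put a FitNesse project under Git with the exclusions listed above.
# The project path and page names are examples.
mkdir -p /tmp/fitnesse-demo && cd /tmp/fitnesse-demo
git init -q .
cat > .gitignore <<'EOF'
fitnesse-standalone.jar
FitNesseRoot/**/*.zip
FitNesseRoot/files/testResults
FitNesseRoot/files/testProgress
FitNesseRoot/ErrorLogs
FitNesseRoot/FitNesse
FitNesseRoot/RecentChanges*
updateDoNotCopyOverList
updateList
properties
EOF
# A generated error log is ignored; a test page is not
mkdir -p FitNesseRoot/ErrorLogs FitNesseRoot/MySuite
touch FitNesseRoot/ErrorLogs/20240101.log FitNesseRoot/MySuite/content.txt
git check-ignore -q FitNesseRoot/ErrorLogs/20240101.log && echo "log ignored"
```

From there, commits happen only when you decide a page is done, which is the workflow advantage described above.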
I have 2 .properties files for my project on hybris.
The first one is used for the CI process; as a result I got 4 zip files with my already built platform (after ant production).
On my prod instance I need to switch to other properties, because they contain all my connections to external services such as MySQL, Solr, etc.
How can I do that without running all the Ant steps?
. ./setantenv.sh && sync && ant config -Denv=my_new_properties
then ./hybrisserver.sh start doesn't work.
There is no information on the wiki: https://cxwiki.sap.com/display/release5/ant+production+improvements
Check if Updating Configuration Settings at Runtime will be useful for you. You will need to use the FileBasedConfigLoader class and the runtime.config.file.path property.
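For illustration, the property would point at an extra properties file that is read at runtime (the path below is an assumption, not a required location):

```properties
# local.properties (sketch) -- the file path is an example
runtime.config.file.path=/opt/hybris/config/runtime.properties
```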
Other best practices include using environment variables for secure settings like the DB URL. See the "Using Environment Variables instead of Files for Secure Settings" section in Configuring the Behavior of SAP Commerce.
Another option you can look at is to have different config folders for different environments (e.g. config-dev, config-prd) and pass the folder to ant, e.g. -Denv=config-dev.
I don't understand what is the need/use of the git unpack-objects command.
If I have a pack file outside of my repository and run git unpack-objects on it, the pack file is "decompressed" and all the object files are placed in .git/objects. But what is the need for this? If I just place the pack and index files in .git/objects/pack, I can still see all the commits and have a functional repo, with less space occupied by my .git since the pack file is compact.
So why would anyone need to run this command?
A pack file uses the same format that is used for normal transfer over the network. So I can think of two main reasons to use the manual command instead of the network:
1. having a similar update workflow in an environment without network connectivity between the machines, or where the network cannot be used for other reasons
2. debugging/inspecting the contents of a transfer
For 1), you could just use a disk or any kind of removable media for your files. It could be encrypted, for instance.
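A minimal round trip of that workflow, with throwaway paths standing in for the removable media, might look like this:

```shell
# Build a pack in one repo and unpack it into another, as if moved by disk.
# All paths are throwaway examples.
rm -rf /tmp/pack-demo && mkdir -p /tmp/pack-demo/src /tmp/pack-demo/dst
cd /tmp/pack-demo/src
git init -q .
echo hello > file.txt
git add file.txt
git -c user.email=demo@example.com -c user.name=demo commit -qm "initial"
# Write every reachable object into a single pack file
git rev-list --objects --all | git pack-objects /tmp/pack-demo/transfer > /dev/null
# Receiving side: a fresh repo, fed the pack on stdin
cd /tmp/pack-demo/dst
git init -q .
git unpack-objects -q < /tmp/pack-demo/transfer-*.pack
# The loose objects (commit, tree, blob) now sit in .git/objects
find .git/objects -type f
```

The pack file could travel on any medium in between; no network configuration is involved at any point.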
I would like to move from one Salesforce dev org to another dev org using the ANT Migration Tool. I would like to auto-generate a package.xml file that covers all custom fields, custom objects and other custom components, which would help me move from the source org to the target org without facing dependency issues.
There are a lot of answers here already; I would like to rank them.
The simplest way, I think, is to use Ben Edwards's Heroku service: https://packagebuilder.herokuapp.com/
Another option is to use the npm module provided by Matthias Rolke.
To grab a full package.xml use force-dev-tool, see: https://github.com/amtrack/force-dev-tool.
npm install --global force-dev-tool
force-dev-tool remote add mydev user pass --default
force-dev-tool fetch --progress
force-dev-tool package -a
You will now have a full src/package.xml.
A JAR file provided by Kim Galant:
Here's a ready-made Java JAR that you point at an org (through properties files) and tell which metadata types to look for; it then inventories your org and builds a package.xml for you based on the types you've specified. It even has a handy feature that lets you skip certain items based on a regular expression, so you can easily exclude e.g. managed packages or certain custom namespaces (say you prefix a bunch of things that belong together with CRM_) from the generated package.
So a command line like this:
java -jar PackageBuilder.jar [-o <parameter file1>,<parameterfile2>,...] [-u <SF username>] [-p <SF password>] [-s <SF url>] [-a <apiversion>] [-mi <metadataType1>,<metadataType2>,...] [-sp <pattern1>,<pattern2>,...] [-d <destinationpath>] [-v]
will spit out a nice up-to-date package.xml for your ANT pleasure.
Yet another way is to use Ant: https://www.jitendrazaa.com/blog/salesforce/auto-generate-package-xml-using-ant-complete-source-code-and-video/
I had an idea to create a service competing with the ones mentioned above, but I dropped that project (I didn't finish the part that retrieves all components from reports and dashboards).
There is an extension for VS Code that allows you to choose components and generate a package.xml file using point and click:
Salesforce Package.xml Generator for VS Code
https://marketplace.visualstudio.com/items?itemName=VignaeshRamA.sfdx-package-xml-generator
(Disclosure: I am the developer of this free VS Code extension.)
I am using Gerrit for code review of all the SQL files used in the project. Gerrit is hosted on a Linux machine; its version is 2.6.1.
I have a problem comparing SQL patch sets: all the SQL files are considered binary by Gerrit, so it cannot show a comparison.
For reference, the following is Gerrit's comparison output:
diff --git a/web/dev-db/sp/dbo.usp_getactivityownerlist.sql b/web/dev-db/sp/dbo.usp_getactivityownerlist.sql
index f623dd3..e2ed93b 100644
--- a/web/dev-db/sp/dbo.usp_getactivityownerlist.sql
+++ b/web/dev-db/sp/dbo.usp_getactivityownerlist.sql
Binary files differ
Is there any way I can configure Gerrit to treat .sql files as text rather than binary, so that patch comparison works?
Try adding the following line to your $repo/.git/info/attributes:
*.sql text diff
This normally happens when the user sets core.autocrlf to false in the global config file, which effectively disables "smart" detection of line endings in text files.
It can be an encoding issue as well. Git works best with UTF-8; if the file is encoded in something like UTF-16, Git will treat it as binary no matter what you set in .gitattributes.
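To check for and fix such an encoding problem, something along these lines could work; file and iconv are standard tools, and the filename below is made up:

```shell
# Simulate a UTF-16 SQL file, then re-encode it to UTF-8 so Git can diff it.
# The filename is an example.
cd /tmp
printf 'SELECT 1;\n' | iconv -f UTF-8 -t UTF-16 > dbo.usp_example.sql
file dbo.usp_example.sql        # reports UTF-16 Unicode text
iconv -f UTF-16 -t UTF-8 dbo.usp_example.sql > converted.sql
mv converted.sql dbo.usp_example.sql
file dbo.usp_example.sql        # now reported as plain ASCII/UTF-8 text
```

Once the files in the repository are UTF-8, Git (and therefore Gerrit) should diff them as text.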
As the title says (and as may be obvious, I am still a beginner): in my Rails app, I have implemented an MVC for support pages.
I wanted to show the pages I created to my mentor, so I committed and pushed to GitHub, but I noticed that only the images were pushed to GitHub (I use CKEditor to handle images)!
Now I am sure that the pages (which consist of Title and Contents fields) exist, because when I execute db.support_pages.find() in the Mongo shell it gives me back a list of the pages with their contents and titles. But when I open those pages (on localhost) and edit the content, I see that Git is not even tracking them!
I don't know what more information I should post, so I will post my .gitignore file:
*.rbc
*.sassc
*~
.sass-cache
.project
capybara-*.html
.rspec
/.bundle
/vendor/bundle
/log/*
/tmp/*
/public/assets/*
/db/*.sqlite3
/public/system/*
/coverage/
/spec/tmp/*
/spec/coverage/*
**.orig
rerun.txt
pickle-email-*.html
# Ignore all logfiles and tempfiles.
/log/*.log
/tmp
.idea
/attic/*
Any tips, leads, advice (or even requests to post more info regarding this issue) are welcome. :)
Thanks in advance.
Your MongoDB database is composed of multiple data files containing the data and indexes.
If you want to commit the contents of the database to version control you will want to export the data and index definitions using mongodump.
If you want to share your database with your mentor, I would suggest using mongodump to get a full copy of the database, then compress and add that dump into git.
For example (on Linux), assuming a database called mydatabase:
cd ~/backup
mongodump -d mydatabase
tar -czvf mydatabase.tgz dump/
git add mydatabase.tgz
Your mentor would need to have MongoDB installed, and could extract the tgz file (tar xzvf mydatabase.tgz) and use mongorestore to load the data. I expect your application might require further configuration, which you would document in a README.
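The archive round trip can be sketched as follows; the dump/ directory below is a stand-in (with a real database, mongodump -d mydatabase creates dump/mydatabase/ for you), and the collection filename is made up:

```shell
# Round trip of the archive step, using a stand-in dump/ directory.
# With a real database, `mongodump -d mydatabase` produces dump/mydatabase/.
rm -rf /tmp/backup && mkdir -p /tmp/backup/dump/mydatabase
cd /tmp/backup
echo 'bson placeholder' > dump/mydatabase/support_pages.bson
tar -czf mydatabase.tgz dump/
rm -rf dump
# Mentor side: extract, then load with mongorestore (needs a running mongod)
tar -xzf mydatabase.tgz
# mongorestore -d mydatabase dump/mydatabase
```

The final mongorestore step is left commented out because it requires a running MongoDB instance.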
Git tracks changes made in its own directory. The pages you're talking about are stored in the database, which lives somewhere else on your computer. We'd need more information to advise you where to dig.