I've seen a couple of iOS apps that allow access to svn logs, but none as nice as iOctocat is for git. It appears I could use iOctocat on a network to collect all the data and then view it offline. I need to do that for svn, not git.
I'm looking for a way to read svn log commit diffs offline on an eReader (preferably an iPad, but I could switch to, say, a Kindle Fire if required). Is there any OS X software or script that can fetch an svn server's log, perform diffs, and output files for viewing on an iPad, or alternatively a PDF that can be viewed darn near anywhere?
I'm trying to get a bit more productive on my 1.5 hour bus ride, and this could help tremendously...
Git is a DVCS, which means every "client" machine has the whole repository with its complete history. SVN is a centralized VCS, where the working copy only contains the latest version of every file; the history is only available on the server.
If offline work is so important, git is obviously a much better choice, and you should switch to git. I don't think any SVN tool will ever give you access to previous versions offline, because that's just not how SVN works.
If you just want to review the logs, you can generate a PDF file easily enough. The command:
svn log | enscript -o log.ps
makes a PostScript file from the log for the current working directory. You can follow that with:
pstopdf log.ps
to generate log.pdf, a PDF file that contains your Subversion log. You can obviously automate that process to your heart's content. You might even run the process every few hours and post the result on an internal web server where it's easy to grab. You can also make the PDF file that you generate a lot fancier by configuring enscript, which has oodles of options (font, margins, columns, headers, footers, etc.) so you can make a really nice looking file.
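For instance, one illustrative combination of options (the specific choices here are just examples of what enscript accepts, not a recommendation):
svn log | enscript --font=Courier8 --header='Subversion log' --media=A4 -o log.ps
pstopdf log.ps -o log.pdf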
For convenience, here's a version all on one line:
svn log -l 10 | enscript -o - | pstopdf -i -o svnlog.pdf
The -l 10 option to svn log limits the output to the 10 most recent log entries -- customize that as you wish, or customize the output with other options.
The next step would be to write a shell script or other small program that would filter the log to show just the most recent changes, generate diffs for the changed files, and wrap that all up into a PDF for you to review. As you can see from the above, the tools you'd need to do that are already there -- you just need to put them together in the right order.
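As a rough sketch of that idea (assuming Subversion 1.7 or newer for svn log --diff, plus enscript and the OS X pstopdf tool; the repository URL and entry limit are placeholders):
#!/bin/sh
# bundle the most recent log entries, including their diffs, into one PDF
REPO="https://svn.example.com/repo/trunk"   # placeholder repository URL
LIMIT=10
svn log --diff -l "$LIMIT" "$REPO" | enscript -o - | pstopdf -i -o svnlog-with-diffs.pdf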
I have two artifacts that I download from my Octopus server in order to expose them in my vNext build (we are using an on-premises TFS).
The code is:
$Artifacts.GetEnumerator() |% {
curl $_.Value -OutFile "$TestResults\$($_.Key)" -Headers @{
"X-Octopus-ApiKey" = $ApiKey
}
Write-Host "##vso[task.addattachment type=NUnitTestResults;name=$($_.Key);]$TestResults\$($_.Key)"
Write-Host "##vso[task.uploadfile]$TestResults\$($_.Key)"
Write-Host "##vso[artifact.upload containerfolder=NUnitTestResults2;artifactname=$($_.Key);]$TestResults\$($_.Key)"
#Write-Host "##vso[build.uploadlog]$TestResults\$($_.Key)"
}
Two files, CSTests.xml and PSTests.xml, are downloaded and placed in a folder. Then I issue the VSTS logging commands.
The only documentation I could find for them is https://github.com/Microsoft/azure-pipelines-tasks/blob/master/docs/authoring/commands.md, and it leaves a lot to the imagination.
What I have learned so far:
build.uploadlog
Embeds the contents of the files in the log of the respective task. For example:
As one can see, the NUnit test results are prepended to the step log proper. And here is what the documentation says:
I hope it makes sense to somebody; it makes none to me. Next:
artifact.upload
This one is easy - it adds the files as artifacts to the build:
But each artifact contains ALL the files. So it does not matter which Explore button I click (for CSTests.xml or PSTests.xml), I always get this:
It sounds like I am expected to place the two artifacts in different container folders, but then what is the purpose of having both container folders and artifact names? I am confused.
task.uploadfile
Using this one I got my NUnit test result files included in the log archive when downloading logs:
No questions here.
task.addattachment
This one is a mystery to me. It has no apparent effect. The documentation says:
It is not clear what kind of attachment it is or where we can find it.
So, my questions are:
Is there serious documentation for the VSTS logging commands beyond the aforementioned half-baked markdown page?
build.uploadlog - does it always prepend the contents of the files to the step log, or is appending also an option?
artifact.upload - how do I publish files as separate artifacts? Does that mean separate container folders? But then the file name would likely be mentioned in two places - the container folder and the artifact name. Is that the way to do it?
task.addattachment - what does it do?
I've been similarly frustrated with the documentation for these logging commands.
As for task.addattachment I discovered the Build/Attachments REST API and the attachments show up there. It has this format:
GET https://dev.azure.com/{organization}/{project}/_apis/build/builds/{buildId}/attachments/{type}?api-version=6.0-preview.2
Note that type is required. You simply use the same type that was specified in the task.addattachment command.
I was able to build a url and just plug that into my browser window. Then it displays the raw json response.
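For example, a quick sketch of pulling the list down with curl (the organization, project, build id, attachment type, and the PAT variable are all placeholders you would substitute):
curl -u ":$AZURE_DEVOPS_PAT" "https://dev.azure.com/myorg/myproject/_apis/build/builds/1234/attachments/NUnitTestResults?api-version=6.0-preview.2"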
I make a lot of personal development tweaks to the code on my side, like adding an account automatically, opening Sublime in a certain way when there's an exception (with a rescue_from in an ApplicationController), and other misc tweaks I think are very useful for me but that I don't think I should commit, or that colleagues would want committed.
I searched around a bit and supposedly git doesn't have any way to ignore single file lines.
I figured a solution (albeit probably a little complicated and involving markup) would be using Git pre-commit hooks, but... that doesn't sound very neat to me.
How can I keep personal code tweaks on my side, inside existing, committed files, without manually stashing/restoring them between commits, while also being branch-independent?
I searched around a bit and supposedly git doesn't have any way to ignore single file lines.
Good news: you can do it.
How?
You will use something called a hunk in git.
Hunk what?
Hunks let you choose which changes you want to add to the staging area before committing them. You can choose any part of the file to add (as long as it is a separate change) or leave out.
Once you have chosen the changes to commit, you "leave" the changes you don't wish to commit in your working directory.
You can then choose whether you want this file to keep showing up as modified or not, with the help of the assume-unchanged flag.
Here is some sample code for you.
# make any changes to any given file,
# then add the file with the `-p` flag:
git add -p
# now choose from the presented options what to do with each hunk.
# usually you will use `s` to split a change into smaller hunks,
# `y` to stage a hunk, and `n` to leave it unstaged.
Use git add -p to stage only the parts of your changes that you choose to commit. You can pick which changes to add rather than committing them all.
# once you are done editing you will have 2 copies of the file
# (assuming you did not add all the changes):
# one with the "private" changes in your working dir,
# and the "public" changes waiting for commit in the staging area.
Add the file to the .gitignore file.
This will ignore the file and any changes made to it (note that .gitignore only affects files that are not yet tracked).
--assume-unchanged
Set the --assume-unchanged flag on this file so git will stop tracking changes to it.
Using method (2) tells git to ignore the file even when it's already committed.
It allows you to modify the file without having to commit the changes to the repository.
git-update-index
--[no-]assume-unchanged
When this flag is specified, the object names recorded for the paths are not updated. Instead, this option sets/unsets the "assume unchanged" bit for the paths. When the "assume unchanged" bit is on, the user promises not to change the file and allows Git to assume that the working tree file matches what is recorded in the index. If you want to change the working tree file, you need to unset the bit to tell Git. This is sometimes helpful when working with a big project on a filesystem that has very slow lstat(2) system call (e.g. cifs).
Git will fail (gracefully) in case it needs to modify this file in the index e.g. when merging in a commit; thus, in case the assumed-untracked file is changed upstream, you will need to handle the situation manually.
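A minimal sketch of that workflow, assuming the personal tweaks live in a tracked file called config/dev_tweaks.rb (the file name is just an example):
# stop recording local modifications to the file
git update-index --assume-unchanged config/dev_tweaks.rb
# later, when you really do want to commit a change to it
git update-index --no-assume-unchanged config/dev_tweaks.rb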
I am trying to migrate the setup here at the office from SVN to Git and am setting up Redmine as the host for our projects and issue management (Currently we use a version of Gforge + SVN). I should preface by saying that I'm an embedded C software developer by day and have basically zero experience with Rails or web apps, but I like trying new things so I volunteered to set up the project management tools which will take us into the future.
I have Redmine set up and am using Gitolite as the Git repo manager. Additionally, I am using the ericpaulbishop/redmine_git_hosting plugin to facilitate automatic public SSH key pushing to Gitolite and automatic repo creation when we register a new project. Everything seems to work, except that the repository view within the project does not keep track of the changesets. (The "History" is just empty, although when you view the files, it does show the latest version correctly.)
I copied the post-receive hook from the plugin's contrib directory to the gitolite common hooks directory, but again, I know little about Ruby and how these gitolite hooks work, so I don't know how to debug this. I notice there are log messages and things in the hook, but I have no idea where those are printed, etc.
I even tried the Howto on the Redmine wiki, HowTo setup automatic refresh of repositories in Redmine on commit:
#!/bin/sh
curl "http://<redmine url>/sys/fetch_changesets?key=<your service key>"
Any ideas on where to start debugging? I've been able to resolve every problem up to this point, but I'm a little stuck now. The plugin doesn't make it obvious how this is supposed to work, and to be honest, I'm not even sure whether this is a problem with Redmine not reading the repo correctly (or at all), gitolite not communicating as Redmine expects, etc.
I guess I could answer this...
I checked the issues on the GitHub page and found this one:
https://github.com/ericpaulbishop/redmine_git_hosting/issues/89
It was pretty much exactly my problem. This does appear to be a small bug in the plugin, but you can work around it by changing Max Cache Time to "1 minute or until next commit". This immediately fixed my problem. I simply left it like that, but one of the posters claimed that you could change it back to "until next commit" and it works from then on...
For example, say I have written a document and submitted it to p4. Then I would like to share it on the company intranet or notify the others by email.
I create a post and refer to the p4 document as a hyperlink in the post.
When a user clicks on it, his local P4 client is launched to sync the document according to his p4 config (it would fail if he is not allowed to access the relevant repo), and then the document is opened on his PC.
By imagining this feature, I am just trying to find a way to share p4 documents easily,
since I don't want to upload the documents to the intranet and then keep them in sync with p4 manually.
Any suggestion is welcome.
Thanks.
You can try using a custom url protocol and handler. This would allow you to write urls like p4v://Some/place/some/where/. The setup will depend on your platform.
Windows
Gnome
You could then set the handler to the p4v executable with the -s option. This will open a location, but it won't provide any kind of syncing.
p4v -s "//Some/place/some/where/"
You may also need to coerce the URL into a valid Perforce path. For example, Windows URLs will include the text before the colon, causing problems.
p4v -s "p4v://Some/place/some/where/"
So you will probably have to write a wrapper script around the execution of p4v to do some text filtering. This is all kind of a pain, which is why I haven't done it myself.
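A rough sketch of what such a wrapper could look like (the p4v:// scheme and the script itself are assumptions about how you might register the handler, not something Perforce ships):
#!/bin/sh
# strip the custom scheme so p4v receives a valid depot path
url="$1"                        # e.g. p4v://Some/place/some/where/
depot_path="//${url#p4v://}"    # -> //Some/place/some/where/
exec p4v -s "$depot_path"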
Another simple technique is to set up your intranet web server to serve documents from a frequently-sync'd workspace, as described here: http://www.perforce.com/customers/white_papers/web_content_management_perforce in the section "2. A Simple WCM Approach".
I have used this mechanism, with an Apache web server, and a Perforce client workspace with a cron script sync'ing the workspace every 10 seconds, to share documents via URL in a development environment with dozens of active developers, quite successfully.
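A hedged sketch of the syncing side (the workspace path and client name are placeholders; plain cron only fires once a minute, so a 10-second interval needs a small looping script instead):
* * * * * cd /var/www/p4docs && p4 -c webdocs-client sync > /dev/null 2>&1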
The only thing that comes to mind is P4Web, which serves Perforce depot via HTTP. It still wouldn't solve the "sync automatically according to his p4 config", but at least you could send links to your document.
Currently the MSBuild logs for Team Build are appalling, as they are just plain text and very difficult to read. Also, the ones created by my build are approx. 30 MB and take quite a while to download (our TFS server is in our datacentre).
Does anyone know any way to view these logs more easily, preferably integrated with either TFS itself or TFS Web Access?
Take a look at the following blog post I did a while ago:
http://www.woodwardweb.com/teamprise/000415.html
This describes how to create a simple ASP.NET page that will stream the contents of your log file to you over HTTP. The advantage of doing it this way is that you don't have to wait for the entire page to load before the log starts to render for you in Visual Studio.
Also - you can add some simple formatting to the file while streaming. In the example on my blog I simply make the start of each target appear in bold to make them stand out a bit more, but you can see how you could go crazy with this approach if you wanted.
If increasing bandwidth isn't an option, then I would suggest writing your own HTML logger and attaching it to the build process. Split the HTML build log into minor parts (defined by targets and/or projects) and have one index file pointing to all the minor parts, with appropriate information about whether a given part failed or succeeded. Then you only need to fetch the index file, and any requested part, over the link.
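For what it's worth, attaching a custom logger uses MSBuild's standard /logger switch; the HtmlLogger class and MyHtmlLogger.dll assembly below are hypothetical and would be the part you write:
msbuild MySolution.sln /logger:HtmlLogger,MyHtmlLogger.dll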
A third possibility is to compress the log-file after the build completes.