How do I set multiple values for a config key? Some sections support multiple values:
[remote "origin"]
url = git@github.com:schacon/simplegit-progit.git
fetch = +refs/heads/master:refs/remotes/origin/master
fetch = +refs/heads/qa/*:refs/remotes/origin/qa/*
Something like this is not working in LibGit2Sharp:
string[] refSpecs = { "+refs/heads/master:refs/remotes/origin/master", "+refs/heads/qa/*:refs/remotes/origin/qa/*" };
repo.Config.Set("remote.origin.fetch", refSpecs);
This is indeed a currently missing feature in LibGit2Sharp. An issue has just been opened to track this.
However, if what you're after is setting/updating the default refspecs of a remote, the repo.Network.Remotes.Update() method may already fit that need without having to wait for the issue to be fixed.
Pull request #567 recently enhanced the Remotes.Update() method so that it can also update refspecs. As such, your example could be fulfilled with the following code.
var fetchSpecs = new string[]
{
"+refs/heads/master:refs/remotes/origin/master",
"+refs/heads/qa/*:refs/remotes/origin/qa/*"
};
using (var repo = new Repository(path))
{
var remote = repo.Network.Remotes["origin"];
repo.Network.Remotes.Update(remote, r => r.FetchRefSpecs = fetchSpecs);
}
More or less related, pull request #553 introduced an easy way to enumerate all the refspecs of a remote.
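For instance, here is a minimal sketch of such an enumeration (assuming the FetchRefSpecs property and the RefSpec.Specification accessor exposed by that pull request):
using (var repo = new Repository(path))
{
    var remote = repo.Network.Remotes["origin"];
    // List only the fetch refspecs of the remote
    foreach (var refSpec in remote.FetchRefSpecs)
    {
        Console.WriteLine(refSpec.Specification);
    }
}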
We had been using spring-data-elasticsearch 4.1.13 until recently for querying Elasticsearch. For grouping we used aggregations.
Consider an index of books. Each book can have one or multiple authors.
To get the count of books by author we used a TermsAggregationBuilder, as shown below:
SearchSourceBuilder builder = this.getQuery(filter, false);
String aggregationName = "group_by_author_id";
TermsAggregationBuilder aggregationBuilders =
AggregationBuilders.terms(aggregationName).field("authors");
var query =
new NativeSearchQueryBuilder()
.withQuery(builder.query())
.addAggregation(aggregationBuilders)
.build();
var result = elasticsearchOperations.search(query, EsBook.class, ALIAS_COORDS);
if (!result.hasAggregations()) {
throw new IllegalStateException("No aggregations found after query with aggregations!");
}
Terms groupById = result.getAggregations().get(aggregationName);
var buckets = groupById.getBuckets();
Map<Long, Integer> booksCount = new HashMap<>();
buckets.forEach(
bucket ->
booksCount.put(
bucket.getKeyAsNumber().longValue(), Math.toIntExact(bucket.getDocCount())));
return booksCount;
We recently upgraded to spring-data-elasticsearch 4.4.2 and saw that there are some breaking changes.
First, .addAggregation was replaced by withAggregations.
Second, unlike before, I can't seem to directly get Terms and buckets after querying, as result.getAggregations().get(aggregationName) is no longer possible; the only other option I see is result.getAggregations().aggregations(). I am wondering if anyone has dealt with the same. The Elasticsearch documentation itself is quite poor on this.
First, .addAggregation was replaced by withAggregations
addAggregation(AbstractAggregationBuilder<?>) has been deprecated and should be replaced by withAggregations. This is not a breaking change.
The change in the return value of SearchHits.getAggregations() is documented in the migration guide from 4.2 to 4.3, under "Removal of org.elasticsearch classes from the API."
So from 4.3 on, result.getAggregations().aggregations() returns what result.getAggregations() previously returned; the existing Terms/bucket handling can then be applied to that value.
All,
I am trying to get the list of all the files that are in a particular repo in TFS GIT using REST API.
I found the one below, but it only displays the contents of the specific file mentioned after "scopePath=" (for example, with "scopePath=/build.xml" it only displays the contents of the file build.xml).
I am trying to list all the files in a particular repository without mentioning a particular file name.
Please help me.
https://{accountName}.visualstudio.com/{project}/_apis/git/repositories/{repositoryId}/items?scopePath=/&api-version=4.1
You can use the api below:
https://{accountName}.visualstudio.com/{project}/_apis/git/repositories/{repositoryId}/items?recursionLevel=Full&api-version=4.1
This can also be achieved using the Visual Studio Online client libraries (which, as of the date of writing this, have become Azure DevOps): Microsoft.TeamFoundationServer.Client and Microsoft.VisualStudio.Services.Client.
First, you need to create an access token. Then just use the code below:
VssBasicCredential credentials = new VssBasicCredential(String.Empty, "YOUR SECRET CODE HERE");
VssConnection connection = new VssConnection(new Uri("https://yourserverurl.visualstudio.com/"), credentials);
GitHttpClient client = connection.GetClient<GitHttpClient>();
List<GitRepository> repositories = await client.GetRepositoriesAsync(true); // or use GetRepositoryAsync()
var repo = repositories.FirstOrDefault(r => r.Name == "Some.Repo.Name");
GitVersionDescriptor descriptor = new GitVersionDescriptor()
{
VersionType = GitVersionType.Branch,
Version = "develop",
VersionOptions = GitVersionOptions.None
};
List<GitItem> items = await client.GetItemsAsync(repo.Id, scopePath: "/", recursionLevel: VersionControlRecursionType.Full, versionDescriptor: descriptor);
Under the hood it uses the REST API, so if you want the same effect from C#, it is better to delegate the work to the library.
You need to call the items endpoint first, which gives you an objectId (the gitObjectType should be "tree"):
http://{tfsURL}/tfs/{collectionId}/{teamProjectId}/_apis/git/repositories/{repositoryId}/items?recursionLevel=Full&api-version=4.1
Then call the trees end point to list the objects in the tree:
http://{tfsURL}/tfs/{collectionId}/{teamProjectId}/_apis/git/repositories/{repositoryId}/trees/{objectId}?api-version=4.1
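For completeness, here is a minimal C# sketch of those two calls over raw HTTP (the base URL, repository id, tree objectId, and personal access token are all placeholders):
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class ListRepoFiles
{
    static async Task Main()
    {
        // Placeholders: point these at your own TFS instance and repository
        var baseUrl = "http://tfsURL/tfs/collection/project/_apis/git/repositories/repositoryId";
        var pat = Convert.ToBase64String(Encoding.ASCII.GetBytes(":YOUR_PAT_HERE"));
        using var client = new HttpClient();
        client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Basic", pat);

        // Step 1: the items endpoint; entries with gitObjectType "tree" carry an objectId
        string items = await client.GetStringAsync(baseUrl + "/items?recursionLevel=Full&api-version=4.1");
        Console.WriteLine(items);

        // Step 2: feed one of those objectIds to the trees endpoint
        string objectId = "<objectId from step 1>";
        string tree = await client.GetStringAsync(baseUrl + "/trees/" + objectId + "?api-version=4.1");
        Console.WriteLine(tree);
    }
}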
I am seeing strange behavior regarding extraMetadata (I am using OData).
I have created a clone function: I create a new manager and import into it the entity on which I perform the operation.
ctor.prototype.clone = function() {
var clonedManager = this.entityAspect.entityManager.createEmptyCopy(),
exportData = this.entityAspect.entityManager.exportEntities([this], true), //export it to the new manager
cloned;
clonedManager.importEntities(exportData);
cloned = clonedManager.getEntityByKey(this.entityAspect.getKey());
return cloned;
};
However, I had to add
cloned.entityAspect.extraMetadata = this.entityAspect.extraMetadata;
because I saw that it isn't being exported/imported.
When I get entities using expand, those entities don't hold extraMetadata, and without the extraMetadata I cannot commit the changes, as I get an exception.
As of BreezeJs version 1.4.13, available now, this has been fixed.
I want to use Amazon DynamoDB with Rails, but I have not found a way to implement pagination.
I will use AWS::Record::HashModel as the ORM.
This ORM supports limits like this:
People.limit(10).each {|person| ... }
But I could not figure out how to implement the following MySQL query in DynamoDB.
SELECT *
FROM `People`
LIMIT 1, 30
You issue queries using LIMIT. If the subset returned does not contain the full table, a LastEvaluatedKey value is returned. You use this value as the ExclusiveStartKey in the next query. And so on...
From the DynamoDB Developer Guide.
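As an illustration, here is a minimal sketch of that loop using the AWS SDK for .NET (the table name and page size are placeholders; the same ExclusiveStartKey/LastEvaluatedKey mechanics apply whatever SDK or ORM you use):
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.Model;

class PagingDemo
{
    static async Task Main()
    {
        var client = new AmazonDynamoDBClient();
        Dictionary<string, AttributeValue> lastKey = null;
        do
        {
            var response = await client.ScanAsync(new ScanRequest
            {
                TableName = "People",       // placeholder table name
                Limit = 30,                 // page size
                ExclusiveStartKey = lastKey // null on the first page
            });
            Console.WriteLine($"Fetched {response.Items.Count} items");
            // An empty LastEvaluatedKey means there are no more pages
            lastKey = response.LastEvaluatedKey.Count > 0 ? response.LastEvaluatedKey : null;
        } while (lastKey != null);
    }
}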
You can provide 'page-size' in your query to set the result set size.
The response from DynamoDB contains 'LastEvaluatedKey', which will indicate the last key as per the page size. If the response doesn't contain 'LastEvaluatedKey', it means there are no results left to fetch.
Use the 'LastEvaluatedKey' as the 'ExclusiveStartKey' when fetching the next page.
I hope this helps.
DynamoDB Pagination
Here's a simple copy-paste-run proof of concept (Node.js) for stateless forward/reverse navigation with DynamoDB. In summary, each response includes the navigation history, allowing the user to explicitly and consistently request either the next or previous page (while next/prev params exist):
GET /accounts -> first page
GET /accounts?next=A3r0ijKJ8 -> next page
GET /accounts?prev=R4tY69kUI -> previous page
Considerations:
If your ids are large and/or users might do a lot of navigation, the next/prev params might become too large.
Yes, you do have to store the entire reverse path; if you only store the previous page marker (as per some other answers), you will only be able to go back one page.
It won't handle changing pageSize midway; consider baking pageSize into the next/prev value.
Base64-encode the next/prev values, and you could also encrypt them.
Scans are inefficient; while this suited my current requirement, it won't suit all!
// demo.js
const mockTable = [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20]
const getPagedItems = (pageSize = 5, cursor = {}) => {
// Parse cursor
const keys = cursor.next || cursor.prev || [] // fwd first
let key = keys[keys.length-1] || null // eg ddb's PK
// Mock query (mimic dynamodb response)
const Items = mockTable.slice(parseInt(key) || 0, pageSize+key)
const LastEvaluatedKey = Items[Items.length-1] < mockTable.length
? Items[Items.length-1] : null
// Build response
const res = {items:Items}
if (keys.length > 0) // add reverse nav keys (if any)
res.prev = keys.slice(0, keys.length-1)
if (LastEvaluatedKey) // add forward nav keys (if any)
res.next = [...keys, LastEvaluatedKey]
return res
}
// Run test ------------------------------------
const runTest = () => {
const PAGE_SIZE = 6
let x = {}, i = 0
// Page to end
while (i == 0 || x.next) {
x = getPagedItems(PAGE_SIZE, {next:x.next})
console.log(`Page ${++i}: `, x.items)
}
// Page back to start
while (x.prev) {
x = getPagedItems(PAGE_SIZE, {prev:x.prev})
console.log(`Page ${--i}: `, x.items)
}
}
runTest()
I faced a similar problem.
The generic pagination approach is to use a "start index" (or "start page") and a "page length".
The "ExclusiveStartKey" and "LastEvaluatedKey" based approach is very DynamoDB specific, and I feel this DynamoDB-specific implementation of pagination should be hidden from the API client/UI.
Also, if the application is serverless, using a service like Lambda, it will not be possible to maintain state on the server; on the other hand, the client implementation would become very complex.
I came up with a different approach, which I think is generic (and not specific to DynamoDB); a sketch follows the drawbacks below.
When the API client specifies the start index, fetch all the keys from the table and store them in an array.
Find the key for the start index specified by the client in the array.
Use it as the ExclusiveStartKey and fetch the number of records specified by the page length.
If the start index parameter is not present, the above steps are not needed; we don't need to specify the ExclusiveStartKey in the scan operation.
This solution has some drawbacks:
We will need to fetch all the keys when the user asks for pagination with a start index.
We will need additional memory to store the ids and the indexes.
There are additional database scan operations (one or more, to fetch the keys).
But I feel this is a very easy approach for the clients using our APIs. Backward scanning works seamlessly, and if the user wants to see the "nth" page, that is possible.
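Here is a minimal sketch of that approach using the AWS SDK for .NET (the table name and the "id" key attribute are placeholders; a production version would cache the key list rather than re-scan it on every request):
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.Model;

static class IndexedPager
{
    public static async Task<List<Dictionary<string, AttributeValue>>> GetPageAsync(
        IAmazonDynamoDB client, int startIndex, int pageLength)
    {
        Dictionary<string, AttributeValue> startKey = null;
        if (startIndex > 0)
        {
            // Step 1: fetch all the keys (the projection keeps the scan payload small)
            var keys = new List<Dictionary<string, AttributeValue>>();
            Dictionary<string, AttributeValue> last = null;
            do
            {
                var keyPage = await client.ScanAsync(new ScanRequest
                {
                    TableName = "People",        // placeholder table name
                    ProjectionExpression = "id", // placeholder key attribute
                    ExclusiveStartKey = last
                });
                keys.AddRange(keyPage.Items);
                last = keyPage.LastEvaluatedKey.Count > 0 ? keyPage.LastEvaluatedKey : null;
            } while (last != null);

            // Step 2: the key just before the requested start index becomes the ExclusiveStartKey
            startKey = keys.ElementAtOrDefault(startIndex - 1);
        }

        // Step 3: fetch one page of full records from that key onward
        var page = await client.ScanAsync(new ScanRequest
        {
            TableName = "People",
            Limit = pageLength,
            ExclusiveStartKey = startKey
        });
        return page.Items;
    }
}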
In fact, I faced the same problem and noticed that LastEvaluatedKey and ExclusiveStartKey were not working well for me, especially when using Scan, so I solved it like this:
GET/?page_no=1&page_size=10 =====> first page
The response will contain the count of records and the first 10 records.
Retry, increasing the page number, until all records have come back.
The code is below (PS: I am using Python):
first_index = ((page_no-1)*page_size)
second_index = (page_no*page_size)
if (second_index > len(response['Items'])):
second_index = len(response['Items'])
return {
'statusCode': 200,
'count': response['Count'],
'response': response['Items'][first_index:second_index]
}
I'm writing an application that pulls changesets from TFS and exports a CSV file that describes the latest changes, for use in a script to push those changes into ClearCase. The "latest" doesn't necessarily mean the latest, however. If a file was added and then edited, I only need to know that the file was added, and get the latest version so that my script knows how to properly handle it. Most of this is fairly straightforward. I'm getting hung up on files that have been renamed or moved, as I do not want to show that item as being deleted and another item added. To uphold the integrity of ClearCase, I need to have in the CSV file that the item was moved or renamed, along with the old location and the new location.
So, the issue I'm having is tracing a renamed (or moved) file back to the previous name or location so that I can correlate it to the new location/name. Where in the API can I get this information?
Here is your answer:
http://social.msdn.microsoft.com/Forums/en/tfsgeneral/thread/f9c7e7b4-b05f-4d3e-b8ea-cfbd316ef737
Using QueryHistory you can find out that an item was renamed; then, using its previous changeset (the one before the changeset that says it was renamed), you can find its previous name.
You will need to use VersionControlServer.QueryHistory in a manner similar to the following method. Pay particular attention to SlotMode, which must be false in order for renames to be followed.
private static void PrintNames(VersionControlServer vcs, Change change)
{
//The key here is to be sure SlotMode is disabled (false) so renames are followed.
IEnumerable<Changeset> queryHistory =
vcs.QueryHistory(
new QueryHistoryParameters(change.Item.ServerItem, RecursionType.None)
{
IncludeChanges = true,
SlotMode = false,
VersionEnd = new ChangesetVersionSpec(change.Item.ChangesetId)
});
string name = string.Empty;
var changes = queryHistory.SelectMany(changeset => changeset.Changes);
foreach (var chng in changes)
{
if (name != chng.Item.ServerItem)
{
name = chng.Item.ServerItem;
Console.WriteLine(name);
}
}
}
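For context, here is a hypothetical call site iterating the changes of a given changeset (changesetId is a placeholder):
foreach (Change change in vcs.GetChangeset(changesetId).Changes)
{
    if (change.ChangeType.HasFlag(ChangeType.Rename))
    {
        PrintNames(vcs, change);
    }
}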
EDIT: Moved the other solution up. What follows worked when I was testing against a pure Rename change but broke when I tried it against a Rename and Edit change.
This is probably the most efficient way to get the previous name. While it works (TFS 2013 API against a TFS 2012 install), it looks like a bug to me.
private static string GetPreviousServerItem(VersionControlServer vcs, Item item)
{
Change[] changes = vcs.GetChangesForChangeset(
item.ChangesetId,
includeDownloadInfo: false,
pageSize: int.MaxValue,
lastItem: new ItemSpec(item.ServerItem, RecursionType.None));
string previousServerItem = changes.Single().Item.ServerItem;
//Yep, this passes
Trace.Assert(item.ServerItem != previousServerItem);
return previousServerItem;
}
It would be used like this:
if (change.ChangeType.HasFlag(ChangeType.Rename))
{
string oldServerPath = GetPreviousServerItem(vcs, change.Item);
// ...
}