bitbucket API 2.0 page parameters using non-default pagelen

I have run into a cumbersome limitation of the bitbucket API 2.0 - I am hoping there is a way to make it more usable.
When one wants to retrieve a list of repositories from the bitbucket API 2.0, this url can be used:
https://api.bitbucket.org/2.0/repositories/{teamname}
This returns the first 10 repos in the list. To access the next 10, one simply needs to add a page parameter:
https://api.bitbucket.org/2.0/repositories/{teamname}?page=2
This returns the next 10. One can also adjust the number of results returned using the pagelen parameter, like so:
https://api.bitbucket.org/2.0/repositories/{teamname}?pagelen=100
The maximum number can vary per account, but 100 is the maximum any team is able to request with each API call. The cumbersome part is that I cannot find a way to get page 2 with a pagelen of 100. I have tried variations on the following:
https://api.bitbucket.org/2.0/repositories/{teamname}?pagelen=100&page=2
https://api.bitbucket.org/2.0/repositories/{teamname}?page=2&pagelen=100
I've also tried using parameters such as limit or size to no avail. Is the behavior I seek even possible? Some relevant documentation can be found here.

EDIT: It appears this behavior is possible after all; the catch is that both parameters only reach the API if the entire URL is quoted, because an unquoted & is interpreted by the shell.
Example:
curl "https://api.bitbucket.org/2.0/repositories/{teamname}?pagelen=100&page=2"
ORIGINAL ANSWER: I was able to get around this by creating a bash script that loops through each page of 10 results, writing each new batch of 10 repos to a temporary file and then cloning those 10 repos. The only manual step is to update the upper limit in the for loop to the last page expected.
Here is an example script:
for thisPage in {1..23}
do
    # fetch one page of repository metadata (10 repos per page by default)
    curl "https://api.bitbucket.org/2.0/repositories/[organization]?page=$thisPage" -u [username]:[password] > repoinfo
    # pull the repo slugs out of the JSON and clone each one
    for repo_name in `cat repoinfo | sed -r 's/("slug": )/\n\1/g' | sed -r 's/"slug": "(.*)"/\1/' | sed -e 's/{//' | cut -f1 -d\" | tr '\n' ' '`
    do
        echo "Cloning " $repo_name
        git clone https://[username]@bitbucket.org/[organization]/$repo_name
        echo "---"
    done
done
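If jq is available, a variant of the same loop can follow the API's own pagination instead of hardcoding the upper page limit: each 2.0 response carries its repositories in values and the URL of the following page in next. A sketch (not tested against a real account; the [username]/[password]/[organization] placeholders are the same as above):
url="https://api.bitbucket.org/2.0/repositories/[organization]?pagelen=100"
while [ -n "$url" ] && [ "$url" != "null" ]; do
    # fetch one page of repositories (note the quotes around the URL)
    page=$(curl -s -u [username]:[password] "$url")
    for repo_name in $(echo "$page" | jq -r '.values[].slug'); do
        echo "Cloning $repo_name"
        git clone "https://[username]@bitbucket.org/[organization]/$repo_name"
        echo "---"
    done
    # the API returns the URL of the next page, or null on the last one
    url=$(echo "$page" | jq -r '.next')
done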
Much help was gleaned from:
https://haroldsoh.com/2011/10/07/clone-all-repos-from-a-bitbucket-source/
and http://adomingues.github.io/2015/01/10/clone-all-repositories-from-a-user-bitbucket/ Thanks!

Related

Cancel job within CircleCI workflow when another workflow with that job is triggered

Let's say we have a workflow called Workflow1 which contains jobs A, B, C and D.
First developer pushes a change and triggers Workflow1.
Second developer also pushes a change and triggers Workflow1.
Is there a way to ensure that when job C starts in the second developer's workflow, it automatically cancels only job C in the first developer's workflow, without affecting any of the other jobs?
You could implement something using the CircleCI API v2 and some jq wizardry. Note: you'll need to create a personal API token and store it in an environment variable (let's call it MyToken).
I'm suggesting the below approach, but there could be another (maybe simpler ¯\_(ツ)_/¯) way.
Get the IDs of pipelines in the project that have the created status:
PIPE_IDS=$(curl --header "Circle-Token: $MyToken" --request GET \
    "https://circleci.com/api/v2/project/gh/$CIRCLE_PROJECT_USERNAME/$CIRCLE_PROJECT_REPONAME/pipeline?branch=$CIRCLE_BRANCH" \
    | jq -r '.items[]|select(.state == "created")|.id')
Get the IDs of currently running/on_hold workflows with the name Workflow1 excluding the current workflow ID:
if [ ! -z "$PIPE_IDS" ]; then
for PIPE_ID in $PIPE_IDS
do curl --header "Circle-Token: $MyToken" --request GET "https://circleci.com/api/v2/pipeline/${PIPE_ID}/workflow"|jq -r --arg CIRCLE_WORKFLOW_ID "$CIRCLE_WORKFLOW_ID" '.items[]|select(.status == "on_hold" or .status == "running")|select(.name == "Workflow1")|select(.id != $CIRCLE_WORKFLOW_ID)|.id' >> currently_running_Workflow1s.txt
done
fi
Then (sorry, I'm getting a bit lazy here, and you also need to do some of the work :p), use the currently_running_Workflow1s.txt file generated above and the "Get a workflow's jobs" endpoint to get the job number of each job in the related running Workflow1 workflows whose name matches job C and that has a status running.
Finally, use the "Cancel job" endpoint to cancel each of these jobs.
Note that there might be a slight delay between the "cancel" API call and the job actually being cancelled, so you might want to add a short sleep, or even better a while loop that checks those jobs' respective statuses, before moving further.
I hope this helps.

Search string occurrence and display directory wise count

We have an error log directory structure wherein we store all error log files for a particular day in date-wise directories -
errorbackup/20150629/errorlogFile3453123.log.xml
errorbackup/20150629/errorlogFile5676934.log.xml
errorbackup/20150629/errorlogFile9812387.log.xml
errorbackup/20150628/errorlogFile1097172.log.xml
errorbackup/20150628/errorlogFile1908071_log.xml
errorbackup/20150627/errorlogFile5675733.log.xml
errorbackup/20150627/errorlogFile9452344.log.xml
errorbackup/20150626/errorlogFile6363446.log.xml
I want to search for a particular string in the error log files and get a directory-wise count of that string's occurrence. For example, grep "blahblahSQLError" should output something like -
20150629:0
20150628:0
20150627:1
20150626:1
This is needed because we fixed some errors in one of the releases and I want to make sure that there are no occurrences of that error since the day it was deployed to Prod. Also note that there are thousands of error log files created every day. Each error log file is created with a random number in its name to ensure uniqueness.
If you are sure the filenames of the log files will not contain any "odd" characters or newlines then something like the following should work.
for dir in errorbackup/*; do
    # count the files in this date directory that contain at least one match
    printf '%s:%s\n' "${dir#*/}" "$(grep -l blahblahSQLError "$dir/"*.xml | wc -l)"
done
If they can have unexpected names then you would need to use multiple calls to grep and count the matching files manually, I believe. Something like this.
for dir in errorbackup/*; do
    _dcount=0
    for log in "$dir"/*.xml; do
        # count this file if it contains at least one match
        grep -q blahblahSQLError "$log" && _dcount=$((_dcount + 1))
    done
    printf '%s:%s\n' "${dir#*/}" "$_dcount"
done
Something like this should do it:
for dir in errorbackup/*
do
    awk -v dir="${dir##*/}" -v OFS=':' '/blahblahSQLError/{c++} END{print dir, c+0}' "$dir"/*
done
There's probably a cuter way to do it with find and xargs to avoid the loop and you could certainly do it all within one awk command but life's too short....
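For what it's worth, the find variant alluded to above could look roughly like this (a sketch: it counts matching lines rather than files, directories with no matches simply don't show up, and find may split a very large file list across several awk invocations):
find errorbackup -name '*.xml' -exec awk '
    /blahblahSQLError/ { split(FILENAME, p, "/"); count[p[2]]++ }
    END { for (d in count) print d ":" count[d] }
' {} +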

Docker Remote API Filter Exited

I see in the Docker Remote API Docs that filter can be used to filter on status but I'm unsure how to form the request:
https://docs.docker.com/reference/api/docker_remote_api_v1.16/#list-containers
GET /containers/json?filters=status[exited] ?????
How should this be formatted to display ONLY exited containers?
jwodder is correct on the filter but I wanted to go through this step by step as I wasn't familiar with the Go data types.
The Docker API documentation refers to using a map[string][]string for a filter, which is a Go map (hash table): map[string] defines a map with keys of type string, and []string is the type of the values in the map, i.e. a slice ([] is an array without fixed length) made up of string values.
So the API requires a hash map of arrays containing strings. This Go Playground demonstrates marshalling the Go filter data:
mapS := map[string][]string{ "status":[]string{"exited"} }
Into JSON:
{ "status": [ "exited" ] }
So adding that JSON to the Docker API request you get:
GET /containers/json?all=1&filters={%22status%22:[%22exited%22]}
all=1 is included to report exited containers (like -a on the command line).
It might be easier for non-Go people if they just documented the JSON structure for the API :/
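For reference, the same request can be issued with curl against the local daemon socket; this is only a sketch and assumes curl 7.40+ built with --unix-socket support and the default socket path (the CLI equivalent is docker ps -a --filter status=exited):
curl -sG --unix-socket /var/run/docker.sock "http://localhost/containers/json" \
    --data-urlencode 'all=1' \
    --data-urlencode 'filters={"status":["exited"]}'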
The most elegant way I found to use Docker with curl, without bothering with the encoding, is in this answer. Basically, it tells curl to send the data as query parameters and URL-encode them. To get exited containers the query may look like:
curl -G -XGET "http://localhost:5555/containers/json" \
-d 'all=1' \
--data-urlencode 'filters={"status":["exited"]}' | python -m json.tool
By my reading of the docs, it should be:
GET /containers/json?filters={"status":["exited"]}
Some of that might need to be URL-encoded, though.

jq substring gives "jq: error: Cannot index string with object"

Problem
I'm trying to filter a JSON result with jq so that it only shows a substring of the original string. For example, if a jq filter grabbed the value
4ffceab674ea8bb5ec421c612536696839bbaccecf64e851dfc270d795ee55d1
I want it to return only the first 10 characters: 4ffceab674.
What I've tried
On the Official JQ website you can find an example that should give me what I need:
Command: jq '.[2:4]'
Input: "abcdefghi"
Output: "cd"
I've tried to test this out with a simple example in the unix terminal:
# this works fine, => "abcdefghi"
echo '"abcdefghi"' | jq '.'
# this doesn't work => jq: error: Cannot index string with object
echo '"abcdefghi"' | jq '.[2:4]'
So, it turns out most of these filters are not yet in the released version. For reference see issue #289
What you could do is download the latest development version and compile from source. See download page > From source on Linux
After that, if indexing still doesn't work for strings, you should at least be able to do an explode, index, implode combination, which seems to have been your plan.
Looking at the jq-1.3 manual I suspect there isn't a solution using that version since it offers no primitives for extracting parts of a string.
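(For anyone landing here later: with a recent jq release, 1.4 or newer, the slice syntax from the manual is actually implemented, so the original goal works directly.)
# on a recent jq, string slicing works as documented
echo '"4ffceab674ea8bb5ec421c612536696839bbaccecf64e851dfc270d795ee55d1"' | jq '.[0:10]'
# => "4ffceab674"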

How to change Get-Mailbox output

I apologize in advance as I am still fairly new to PowerShell. I'm figuring things out as I go, but this specific issue is stumping me. Currently this is with PowerShell 2.0 on Exchange 2007.
I am trying to add to a script that I have been writing up that shows the basic information for our exchange accounts. This is just a small tool to be introduced to our help desk to assist in a quick overview of what is going on with a user's account. I have everything working, however, I want to change what is displayed. For example, I have:
Get-Mailbox $username | ft @{label="Hidden from GAL"; expression={$_.HiddenFromAddressListsEnabled}}, @{label="Restricted Delivery"; expression={$_.AcceptMessagesOnlyFrom}} -auto | out-string;
Which ends up returning true/false for hidden from address list, but for Accept Messages, if it is disabled, it returns "{}" (without quotes). If it is enabled, it displays the full group name (along the lines of admin.local/groupname). I only want to change {} to disabled, and instead of showing the group name, just show "enabled"
I tried an if/then statement, and then tried putting the variable "messRestrict" in the expression for accept messages above, and then the function name, but neither worked. They just returned blank values or always said "true." Here is the function:
function restricted {
$accept = Get-Mailbox $username | AcceptMessagesOnlyFrom | select -expand Priority
#if ($accept -match "\s")
#{$messRestrict="False"};
#else
#{$messRestrict="True"};
}
The output is the standard Get-Mailbox output; I just want to replace what it says under the header.
Thanks!
You can try this:
@{label="Restricted Delivery"; expression={if($_.AcceptMessagesOnlyFrom){"Enabled"}else{"Disabled"}}}
It gives:
Get-Mailbox $username | ft @{label="Hidden from GAL"; expression={$_.HiddenFromAddressListsEnabled}}, @{label="Restricted Delivery"; expression={if($_.AcceptMessagesOnlyFrom){"Enabled"}else{"Disabled"}}} -auto | out-string;
