Exporting > 1000 issues from JIRA

I am trying to export JIRA tasks via the API and I hit a wall in Excel because JIRA only allows 1000 results per request. I can do a manual export to CSV and get over 1000 results, so I was wondering whether anyone has had any luck with large JIRA exports via the REST API and can point me in the right direction on this.
Guessing an export to CSV and then pulling it into Excel for reporting might work?
Thanks!

JIRA's REST API uses pagination to prevent clients of the API from putting too much load on the application. This means you cannot pull in all issue data with a single REST call.
You can only retrieve "pages" of at most 1000 issues using the paging query parameters startAt and maxResults. See the Pagination section here.
If you run a standalone JIRA server you can tweak the maximum number of results that JIRA returns, but this is not possible for a cloud instance. See this KB article for more info.
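For illustration, here is a minimal sketch of what that paging can look like with Python's requests library; the instance URL, credentials and JQL are placeholders you would substitute with your own:

import requests

# Placeholders - substitute your own instance, user and API token.
BASE_URL = "https://your-domain.atlassian.net/rest/api/2/search"
AUTH = ("user@example.com", "API_TOKEN")

def fetch_all_issues(jql, page_size=100):
    """Walk the pages of /rest/api/2/search and collect every matching issue."""
    issues = []
    start_at = 0
    while True:
        resp = requests.get(
            BASE_URL,
            params={"jql": jql, "startAt": start_at, "maxResults": page_size},
            auth=AUTH,
        )
        resp.raise_for_status()
        page = resp.json()
        issues.extend(page["issues"])
        start_at += len(page["issues"])
        if not page["issues"] or start_at >= page["total"]:
            break
    return issues

The server may silently lower maxResults, so the loop advances by however many issues actually came back rather than by the requested page size.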

Using jira-python (going by your tag):

# search_issues returns at most 1000 issues per call, so if there are more
# we have to search again, moving startAt forward each time.
issues = []
start_at = 0
while True:
    tmp_issues = jira_connection.search_issues('', startAt=start_at, maxResults=1000)
    if len(tmp_issues) == 0:
        # Python has no do-while, so break out once a page comes back empty.
        break
    issues.extend(tmp_issues)
    start_at += len(tmp_issues)
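
If the end goal is still a CSV that Excel can open, a small follow-on sketch (reusing the issues list collected above; the chosen fields are only examples) could look like this:

import csv

# Hypothetical follow-on: dump a few fields from the collected issues to CSV.
with open('jira_export.csv', 'w', newline='', encoding='utf-8') as f:
    writer = csv.writer(f)
    writer.writerow(['Key', 'Summary', 'Status'])
    for issue in issues:
        writer.writerow([issue.key, issue.fields.summary, issue.fields.status.name])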

The code below fetches results 200 records at a time until all records are exported.
You can pull up to 1000 records per call by raising the page size; the callback keeps re-requesting until everything has been exported.
const request = require('request')
const fs = require('fs')
const chalk = require('chalk')

var windowSlider = 200
var totalExtractedRecords = 0

fs.writeFileSync('output.txt', '')

const option = {
  url: 'https://jira.yourdomain.com/rest/api/2/search',
  json: true,
  qs: {
    jql: "project in (xyz)",
    maxResults: 200,
    startAt: 0,
  }
}

const callback = (error, response) => {
  const body = response.body
  console.log(response.body)
  const dataArray = body.issues
  const total = body.total
  totalExtractedRecords = dataArray.length + totalExtractedRecords
  if (totalExtractedRecords > 0) {
    option.qs.startAt = windowSlider + option.qs.startAt
  }
  dataArray.forEach(element => {
    fs.appendFileSync('output.txt', element.key + '\n')
  })
  console.log(chalk.red.inverse('Total extracted data: ' + totalExtractedRecords))
  console.log(chalk.red.inverse('Total data: ' + total))
  if (totalExtractedRecords < total) {
    console.log('Re-running with start as ' + option.qs.startAt)
    console.log('Re-running with maxResults as ' + option.qs.maxResults)
    request(option, callback).auth('api-reader', 'APITOKEN', true)
  }
}

request(option, callback).auth('api-reader', 'APITOKEN', true)

Related

PlaylistItems doesn't show all VideoNames

I use the YouTube Reporting API to get video IDs and some metrics. Then I also use the YouTube Data API to get a list of ALL video names. But when I combine these two groups (to match names to the IDs), I find that a lot of names are missing.
HTTP request: GET https://www.googleapis.com/youtube/v3/playlistItems
What is the best HTTP request to get ALL existing video names historically?
Why doesn't playlistItems work properly and show all video names?
Thank you
def get_videos():
    for f in glob.glob(f'YoutubeAnalytics/videos/*.json'):
        os.unlink(f)
    for ch_name, token_file, ch_id in channels:
        print(ch_name)
        print(ch_id, 'UU' + ch_id[2:])
        jsn = json.load(open(TOKEN_PATH + token_file))
        svc = get_youtube_data(jsn)
        name = token_file.replace('.json', '')
        rsp = svc.playlistItems().list(part='snippet', playlistId='UU' + ch_id[2:], maxResults=50).execute()
        # rsp = svc.channels().list(part='id,snippet', mine=True).execute()
        i = 0
        while 1:
            # this gets downloaded into the original Python folder
            with open(f'YoutubeAnalytics/videos/{name}_{i:04d}.json', 'w') as w:
                json.dump(rsp, w)
            if 'nextPageToken' in rsp:
                i += 1
                if i % 10 == 0:
                    print(i)
                rsp = svc.playlistItems().list(part='snippet', playlistId='UU' + ch_id[2:], maxResults=50, pageToken=rsp['nextPageToken']).execute()
            else:
                break

def make_videos_csv():
    htag = re.compile(r"\s#\S+")
    with open(f'YoutubeAnalytics/videos/videos.csv', 'w', encoding='utf-8', newline='') as csvf:
        wrt = csv.writer(csvf)
        for f in glob.glob(f'YoutubeAnalytics/videos/*.json'):
            jsn = json.load(open(f))
            for i in jsn['items']:
                snip = i['snippet']
                descr = snip['description']
                tags = ','.join([t[1:] for t in htag.findall(descr)])
                wrt.writerow((snip['resourceId']['videoId'], i['id'], i['etag'], snip['channelId'], snip['publishedAt'][:-1], snip['title'], snip['description'], tags))
Consider that some videos may be marked as private; if you're not the owner of those videos, you won't be able to get their details.
In your comment, you added these video_ids examples:
zzr8YwY0y2U, zypHHsc3Q_Y, zyXCdTAdL2s, zvgtoZvL-Gs
Here are the results of each one:
zzr8YwY0y2U = Catch the Crooks - LEGO City - mini movie: Ep. 13
zypHHsc3Q_Y = LEGO Marvel: Spider-Man 'Vexed By Venom' Episode 4: Paging Ghost-Spider
zyXCdTAdL2s = PUBLIC INFO NOT AVAILABLE - this video is private.
zvgtoZvL-Gs = ~どんなどうぶつが暮らすサファリをつくる?篇~「みやぞんとあらぽんがつくる!つながる、ひろがる、 レゴ シティ!」
See the answers here and modify your code to handle such unavailable videos, or just accept that you cannot get data for videos that are not publicly available.
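As a rough sketch of such handling (not your original code, and assuming the usual playlistItems response shape): entries you cannot access typically come back with a snippet title of "Private video" or "Deleted video", so you can simply skip them while iterating:

# Titles the API typically reports for entries without public metadata.
UNAVAILABLE_TITLES = {"Private video", "Deleted video"}

def usable_items(rsp):
    """Yield only the playlist items whose video data is publicly readable."""
    for item in rsp.get("items", []):
        if item["snippet"]["title"] in UNAVAILABLE_TITLES:
            continue  # private or deleted - nothing useful to export
        yield item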

Google ads script Failed due to system errors

I have a Google Ads script that finds any ad groups with no active RSAs and exports the campaign and ad group names to a Google Sheet.
But sometimes when it runs it says it "Failed due to system errors" and gives the following error message:
7/11/2022 3:50:02 PM Exception: Call to GoogleAdsService.Search failed: The request took too long to respond.
at adsapp_compiled:18112:138
at adsapp_compiled:18123:9
at sa (adsapp_compiled:227:15)
at Object.search (adsapp_compiled:235:20)
at iI.search (adsapp_compiled:18238:36)
at SH.search (adsapp_compiled:17815:19)
at TH.search (adsapp_compiled:17910:20)
at $H.search (adsapp_compiled:18002:19)
at fd (adsapp_compiled:1041:32)
at fd.next ()
I think it is a runtime error, because it doesn't receive any response from the server.
I have been told it might have something to do with the syntax order, but I don't know how to fix that if that is the case.
I have tried exporting to a new, clean sheet, just to see whether the sheet I used had too many formulas in it that could slow it down, but it still gave the same error message.
I have also tried it on an entirely different, smaller account, but got the same issue.
/**********************
RSA Checker
**********************/
var SPREADSHEET_URL = 'INSERT SPREADSHEET URL HERE';
var Sheet_name = 'INSERT SHEET NAME HERE';
function main() {
  var sheet = SpreadsheetApp.openByUrl(SPREADSHEET_URL).getSheetByName(Sheet_name);
  var range = sheet.getRangeList(['A1:A', 'B1:B']);
  range.clearContent();
  sheet.getRange("A1").setValue("Campaign");
  sheet.getRange("B1").setValue("Ad groups");
  var GetAdGroups = AdWordsApp.adGroups()
    .withCondition('Status = ENABLED')
    .withCondition('CampaignStatus = ENABLED')
    .withCondition("AdvertisingChannelType = SEARCH")
    .withCondition("CampaignName DOES_NOT_CONTAIN_IGNORE_CASE 'dsa'")
    .withCondition("AdGroupName DOES_NOT_CONTAIN_IGNORE_CASE 'dsa'")
    .withCondition("campaign.experiment_type = BASE")
    .get();
  for (var row = 2; GetAdGroups.hasNext(); row++) {
    var AdGroups = GetAdGroups.next();
    var RSACount = AdGroups.ads().withCondition('Type=RESPONSIVE_SEARCH_AD').withCondition('Status = ENABLED').get().totalNumEntities();
    if (RSACount < 1) {
      sheet.appendRow([AdGroups.getCampaign().getName(), AdGroups.getName()]);
    }
  }
}

Can I pick a cell from every Google Sheet in a Folder?

I want to be able to pick, say, C3 from a list of Google spreadsheets in a folder.
I have a bunch of structurally identical sheets, and I'd like to get the sum of the values in C3 across, say, a hundred sheets in a directory.
Ultimately, it would be great to highlight the largest or smallest value of C3 in the directory.
This could be useful in many places where you want to aggregate data.
SUGGESTION
If you have hundreds of Google spreadsheet files in a Google Drive folder, I agree with #player0 that it is best to use a script. With Apps Script, you can:
Automate iterating through the Spreadsheet files in your Drive folder.
Filter for only the Google Spreadsheet type (e.g. if you have a bunch of different file types inside).
Get the range data and process it the way you want.
See the sample below, which was derived from existing resources:
Script:

function readSheetsInAFolder() {
  // FOLDER_ID is your Drive folder ID
  var query = '"FOLDER_ID" in parents and trashed = false and ' +
      'mimeType = "application/vnd.google-apps.spreadsheet"';
  var range = "C3"; // The range to look for in every Spreadsheet file in the Drive folder
  var files, pageToken;
  var finalRes = [];
  do {
    files = Drive.Files.list({
      q: query,
      maxResults: 100,
      pageToken: pageToken
    });
    files.items.forEach(sheet => {
      finalRes.push(viewRangeValue(range, sheet.id));
    });
    pageToken = files.nextPageToken;
  } while (pageToken);

  const arrSum = array =>
    array.reduce(
      (sum, num) => sum + (Array.isArray(num) ? arrSum(num) : num * 1),
      0
    );

  var max = Math.max.apply(null, finalRes.map(function(row) { return Math.max.apply(Math, row); })); // Gets the largest number
  var min = Math.min.apply(null, finalRes.map(function(row) { return Math.min.apply(Math, row); })); // Gets the smallest number
  var sum = arrSum(finalRes); // Gets the sum

  console.log('RANGE VALUES: %s \nRANGE: %s \nTOTAL SHEET(s) FOUND: %s \n________________\nSUM OF VALUES: %s \nLargest Value: %s \nSmallest Value: %s', finalRes, range, files.items.length, sum, max, min);
}

function viewRangeValue(range, sheetID) {
  var sid = sheetID;
  var rn = range;
  var parms = { valueRenderOption: 'UNFORMATTED_VALUE', dateTimeRenderOption: 'SERIAL_NUMBER' };
  var res = Sheets.Spreadsheets.Values.get(sid, rn, parms);
  return res.values.map(num => { return parseInt(num); });
}
Demonstration:
Sample test Drive folder (with 3 test Spreadsheet files):
The C3 cell in each of these 3 files contains either 0, 10 or 6.
In the Apps Script editor, I've added the Drive and Sheets APIs under Services:
Result
After running the script:
Resources:
Advanced Drive Service
Drive API Files: list
Sheets API spreadsheets.values.get
Max Value of an array

Script fails when running normally but in debug its fine

I'm developing a Google spreadsheet that automatically requests information from a site; the code is below. The variable 'tokens' is an array consisting of about 60 different 3-letter unique identifiers. The problem I keep running into is that the code fails to request all of the information on the site. Instead it falls back (at random) to the validation part and fills the array with "ERROR!" strings. Sometimes it's row 5, then rows 10-12, then row 3, then multiple rows, etc. When I run it in debug mode everything is fine; I can't seem to reproduce the problem.
I already tried adding a sleep (100 ms), but that fixed nothing. I also looked at the amount of traffic the API accepts (10 requests per second, 1,200 per minute, 100,000 per day); it shouldn't be a problem.
Runtime is limited, so I need it to be as efficient as possible. I'm thinking it is an issue of computational power after I push all values from the JSON response into the 'tokens' array. Is there a way to let the script wait as long as necessary for the changes to be committed?
function newGetOrders() {
  var starttime = new Date().getTime().toString();
  var refreshTime = new Date();
  var tokens = retrieveTopBin();
  var sheet = SpreadsheetApp.openById('aaafFzbXXRzSi-eXBu9Xh81Ne2r09vM8rLFkA4fY').getSheetByName("Sheet37");
  sheet.getRange('A2:OL101').clear();
  for (var i = 0; i < tokens.length; i++) {
    var request = UrlFetchApp.fetch("https://api.binance.com/api/v1/depth?symbol=" + tokens[i][0] + "BTC", {muteHttpExceptions: true});
    var json = JSON.parse(request.getContentText());
    tokens[i].push(refreshTime);
    Utilities.sleep(100);
    for (var k in json.bids) {
      tokens[i].push(json.bids[k][0]);
      tokens[i].push(json.bids[k][1]);
    }
    for (var k in json.asks) {
      tokens[i].push(json.asks[k][0]);
      tokens[i].push(json.asks[k][1]);
    }
    if (tokens[i].length < 402) {
      for (var x = tokens[i].length; x < 402; x++) {
        tokens[i].push("ERROR!");
      }
    }
  }
  sheet.getRange(2, 1, tokens.length, 402).setValues(tokens);
}

I need to get more than 100 pages in my query

I want to get as much video information as possible from YouTube for my project. I know that the page limit is 100.
Here is my code:
ArrayList<String> videos = new ArrayList<>();
int videosTotales = 0;
int i = 1;
String peticion = "http://gdata.youtube.com/feeds/api/videos?category=Comedy&alt=json&max-results=50&page=" + i;
URL oracle = new URL(peticion);
URLConnection yc = oracle.openConnection();
BufferedReader in = new BufferedReader(new InputStreamReader(yc.getInputStream()));
String inputLine = in.readLine();
while (in.readLine() != null) {
    inputLine = inputLine + in.readLine();
}
System.out.println(inputLine);
JSONObject jsonObj = new JSONObject(inputLine);
JSONObject jsonFeed = jsonObj.getJSONObject("feed");
JSONArray jsonArr = jsonFeed.getJSONArray("entry");
while (i <= 100) {
    for (int j = 0; j < jsonArr.length(); j++) {
        videos.add(jsonArr.getJSONObject(j).getJSONObject("id").getString("$t"));
        System.out.println("Numero " + videosTotales + jsonArr.getJSONObject(j).getJSONObject("id").getString("$t"));
        videosTotales++;
    }
    i++;
}
When the program finishes, I have 5000 videos per category, but I need many, many more, and the limit is page = 100.
So, how can I get more than 10 million videos?
Thank you!
Are those 5000 also unique IDs?
I see the use of max-results=50, but no start-index parameter in your URL.
There is a limit on the results you can get per request, and also a limit on the number of requests you can send within some time interval. By checking the status code of the response and any error message, you can find these limits, as they may change over time.
Besides the category parameter, use some other parameters too. For instance, you may vary the q parameter (used with some keywords) and/or the order parameter to get a different result set.
See the documentation for the available parameters.
Note that you are using API version 2, which is deprecated; there is an API version 3.
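For what it's worth, here is a minimal sketch of the v3 approach with the google-api-python-client (DEVELOPER_KEY and the query string are placeholders), where paging uses pageToken instead of a page number:

from googleapiclient.discovery import build

# DEVELOPER_KEY is a placeholder - supply your own API key.
youtube = build("youtube", "v3", developerKey="DEVELOPER_KEY")

def search_video_ids(query, max_pages=100):
    """Collect video IDs for one query by following nextPageToken."""
    ids, page_token = [], None
    for _ in range(max_pages):
        rsp = youtube.search().list(
            part="id", q=query, type="video",
            maxResults=50, pageToken=page_token,
        ).execute()
        ids.extend(item["id"]["videoId"] for item in rsp.get("items", []))
        page_token = rsp.get("nextPageToken")
        if not page_token:
            break
    return ids

In practice v3 also stops paging after a certain point for a single query, so varying the search parameters as suggested above is still what gets you a larger overall set.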
