I have created a set of YouTube reporting jobs for a YouTube channel. The jobs were created successfully and run every day as scheduled. However, when I go to download the reports they are all blank.
This is how I authenticate with the API:
import os

from googleapiclient.discovery import build
from oauth2client import client


def authenticate_from_credentials(API_SERVICE_NAME, API_VERSION):
    youtube_client_id = os.environ['youtube_client_id']
    youtube_client_secret = os.environ['youtube_client_secret']
    youtube_refresh_token = os.environ['youtube_refresh_token']

    credentials = client.OAuth2Credentials(
        access_token=None,
        client_id=youtube_client_id,
        client_secret=youtube_client_secret,
        refresh_token=youtube_refresh_token,
        token_expiry=None,
        token_uri='https://oauth2.googleapis.com/token',
        user_agent=None,
        revoke_uri=None
    )

    youtube_reporting = build(API_SERVICE_NAME, API_VERSION, credentials=credentials)
    return youtube_reporting
This is the method I have been using to create the jobs:
# Call the YouTube Reporting API's jobs.create method to create a job.
def create_reporting_job(youtube_reporting, report_type_id, name):
    # Provide keyword arguments that have values as request parameters.
    reporting_job = youtube_reporting.jobs().create(
        body=dict(
            reportTypeId=report_type_id,
            name=name
        ),
    ).execute()

    print('Reporting job "%s" created for reporting type "%s" at "%s"'
          % (reporting_job['name'], reporting_job['reportTypeId'],
             reporting_job['createTime']))
I authenticate like this:
youtube_reporting=authenticate_from_credentials('youtubereporting','v1')
And I will create a job like this:
create_reporting_job(youtube_reporting,"channel_combined_a2","Channel Combined a2")
I am not sure what the problem is here. The channel does have content and subscribers, so the reports shouldn't be empty. I think there could be an issue with the credentials, or perhaps the wrong channel is associated with the report, since the developer's Google accounts are different from the content owner's. But I checked the channels associated with the OAuth credentials I am using, and it was the right channel.
Why might my reports be empty and how can I fix this?
I hit the same issue. The problem is that you need to wait several hours for the report to be generated on the backend; once that happens, re-querying for reports will show results.
There is a subtle mention of this delay at https://developers.google.com/youtube/reporting/v1/reports under Step 3:
The API response to the jobs.create method contains a Job resource,
which specifies the ID that uniquely identifies the job. You can
start retrieving the report within 48 hours of the time that the job
is created, and the first available report will be for the day that
you scheduled the job.
This was quite confusing.
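For anyone else hitting this, below is a minimal sketch of the retrieval side. It assumes the same youtube_reporting client built above; job_id is a placeholder for the ID returned in the jobs.create response. Once the backend has generated a report (up to roughly 48 hours after job creation), jobs().reports().list() starts returning entries with a downloadUrl.

# Hedged sketch: list the reports generated so far for a job.
# job_id is a placeholder for the 'id' field of the jobs.create response.
def list_generated_reports(youtube_reporting, job_id):
    results = youtube_reporting.jobs().reports().list(jobId=job_id).execute()
    reports = results.get('reports', [])
    if not reports:
        print('No reports generated yet - try again later.')
    for report in reports:
        # Each entry covers one reporting day and exposes a downloadUrl
        # that can be fetched with the same OAuth credentials.
        print(report['startTime'], report['endTime'], report['downloadUrl'])
    return reports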
We have a chat interface that we use to send requests to Dialogflow. The chat window is built in PHP.
The chat can be initiated from any location, so when a user connects we also try to capture their timezone and send it in the request to Dialogflow, since the results differ depending on the timeZone.
I used the code below to append the timezone, based on the PHP sample from GitHub.
GitHub link the code is taken from:
https://github.com/GoogleCloudPlatform/php-docs-samples/blob/master/dialogflow/src/detect_intent_texts.php
The script taken from GitHub is as follows:
// Actual Script
// START dialogflow_detect_intent_text
//namespace Google\Cloud\Samples\Dialogflow;

use Google\Cloud\Dialogflow\V2\SessionsClient;
use Google\Cloud\Dialogflow\V2\TextInput;
use Google\Cloud\Dialogflow\V2\QueryInput;
use Google\Cloud\Dialogflow\V2\QueryParameters;
use Google\Cloud\Dialogflow\V2\SessionEntityTypesClient;
use Google\Cloud\Dialogflow\V2\EntityTypesClient;
use Google\Cloud\Dialogflow\V2\ContextsClient;

function detect_intent_texts($projectId, $text, $sessionId, $languageCode = 'en-US')
{
    $test = array('credentials' => 'apikey/test-cd4f0-XXXXX.json');

    // new session
    $sessionsClient = new SessionsClient($test);
    $session = $sessionsClient->sessionName($projectId, $sessionId ?: uniqid());
    printf('Session path: %s' . PHP_EOL, $session);

    // create text input
    $textInput = new TextInput();
    $textInput->setText($text);
    $textInput->setLanguageCode($languageCode);

    // create query input
    $queryInput = new QueryInput();
    $queryInput->setText($textInput);

    // get response and relevant info
    $response = $sessionsClient->detectIntent($session, $queryInput);
    $queryResult = $response->getQueryResult();
    $queryText = $queryResult->getQueryText();
    $intent = $queryResult->getIntent();
    $displayName = $intent->getDisplayName();
    $confidence = $queryResult->getIntentDetectionConfidence();

    // output relevant info
    $fulfilmentText = $queryResult->getFulfillmentText();

    $sessionsClient->close();
}

echo detect_intent_texts('vehicle-test-cd4f0', 'text chat', '123456', 'en-US', 'America/New_York');
// END dialogflow_detect_intent_text
Environment Details
OS : Linux
PHP 7.1
dialogflow v2
I have modified the script to send the timezone with the detectIntent request.
But my queryParams object returns empty data, so the default timezone set on the agent is used instead. My main concern is to send the timezone in the request, which I am able to do with the online testing tool at https://cloud.google.com/dialogflow/docs/reference/rest/v2/projects.agent.sessions/detectIntent?apix=true. The same implementation is not working for me. What am I doing wrong? Please suggest.
/*
Modified script - only the changed part is shown, all other statements are unchanged.
*/
// I tried to add this code before the detectIntent call to set the timezone,
// but the variable returns empty data.
// queryParams is optional; adding new code before detectIntent
$optionalArgs = new QueryParameters();
$optionalArgs->setTimeZone('America/New_York');
$optionalArgs->getTimeZone();
$optionArgs = (array)$optionalArgs;
// I have added $optionArgs to pass the time zone
$response = $sessionsClient->detectIntent($session, $queryInput, $optionArgs);
// other statements
// end of code
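For reference, here is a minimal sketch of the same request using the Python Dialogflow V2 client (the dialogflow pip package), where the timezone is passed as a QueryParameters message rather than a plain array. The project ID, session ID, and timezone values below are placeholders, not values from the question.

# Hedged sketch (Python Dialogflow V2 client, "dialogflow" pip package).
import dialogflow_v2 as dialogflow

def detect_intent_with_timezone(project_id, session_id, text,
                                language_code='en-US', time_zone='America/New_York'):
    session_client = dialogflow.SessionsClient()
    session = session_client.session_path(project_id, session_id)

    text_input = dialogflow.types.TextInput(text=text, language_code=language_code)
    query_input = dialogflow.types.QueryInput(text=text_input)
    # QueryParameters carries the per-request timezone, i.e. queryParams.timeZone
    # in the REST payload.
    query_params = dialogflow.types.QueryParameters(time_zone=time_zone)

    response = session_client.detect_intent(
        session=session, query_input=query_input, query_params=query_params)
    return response.query_result.fulfillment_text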
I'd like to understand how to create a new ticket in JIRA using the REST API from Jenkins. Are there any limitations or special things I should be aware of?
I'm going to write a Python script which will parse the build log and then create a new ticket in a JIRA project.
I checked the plugins, but most of them can only update existing tickets.
Thanks
There's documentation here about the JSON schema, with some example JSON, that needs to go in the body of your POST request to /rest/api/2/issue:
https://docs.atlassian.com/jira/REST/cloud/#api/2/issue-createIssue
Here's a basic Python 3 script that makes the POST request:
import requests, json
from requests.auth import HTTPBasicAuth

base_url = "myjira.example.com"  # The base URL of the Jira instance.
auth_user = "simon"              # Jira Username
auth_pass = "N0tMyRe3lP4ssw0rd"  # Jira Password

url = "https://{}/rest/api/2/issue".format(base_url)

# Set issue fields in a python dictionary. See docs and comment below regarding available fields.
fields = {
    "summary": "something is wrong"
}

payload = {"fields": fields}
headers = {"Content-Type": "application/json"}

response = requests.post(
    url,
    auth=(auth_user, auth_pass),
    headers=headers,
    data=json.dumps(payload))

print("POST {}".format(url))
print("Response {}: {}".format(response.status_code, response.reason))

_json = json.loads(response.text)
This uses the requests HTTP library for Python: http://docs.python-requests.org/en/master/
To see which fields you can set and which ones are required, you can make a GET request to /rest/api/2/issue/{issueIdOrKey}/editmeta, using the ID or key of an existing issue in the same project that your new issues will be created in.
https://docs.atlassian.com/jira/REST/cloud/#api/2/issue-getEditIssueMeta
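As a rough sketch, that metadata call can reuse the variables from the script above; "PROJ-1" is a placeholder key for an existing issue in your target project.

# Fetch the edit metadata for an existing issue to see the available/required fields.
editmeta_url = "https://{}/rest/api/2/issue/{}/editmeta".format(base_url, "PROJ-1")
meta = requests.get(editmeta_url, auth=(auth_user, auth_pass))
print(json.dumps(meta.json().get("fields", {}), indent=2))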
We have several JIRA issues which have over 1000 duplicated, bogus, spam-like comments. How can we quickly delete them?
Background:
We disabled a user in active directory (Exchange) but not JIRA, so JIRA kept trying to email them updates. The email server gave a bounce-back message, and JIRA dutifully logged it to the task, which caused it to send another update, and a feedback loop was born.
The messages have this format:
Delivery has failed to these recipients or groups:
mail#example.com<mail#example.com>
The e-mail address you entered couldn't be found. Please check the recipient's e-mail address and try to resend the message. If the problem continues, please contact your helpdesk.
Diagnostic information for administrators:
Generating server: emailserver.example.com
user#example.com
#550 5.1.1 RESOLVER.ADR.RecipNotFound; not found ##
Original message headers:
Received: from jiraserver.example.com (10.0.0.999) by emailserver.example.com (10.0.0.999)
with Microsoft SMTP Server id nn.n.nnn.n; Mon, 13 Jun 2016 15:57:04 -0500
Date: Mon, 13 Jun 2016 15:57:03 -0500
Our research did not turn up an easy way to do this without purchased plug-ins such as ScriptRunner or "hacking" the database, both of which we wanted to avoid.
Note:
We came up with a solution and are posting here to share.
I created a Python script to remove all comments from a specific Jira issue.
It uses the Jira REST API.
'''
This script removes all comments from a specified Jira issue.
Please provide Jira-Issue-Key/Id, Jira-URL, Username, Password in the variables below.
'''

import sys
import json
import requests
import urllib3

# Jira Issue Key or Id where comments are deleted from
JIRA_ISSUE_KEY = 'TST-123'
# URL to Jira
URL_JIRA = 'https://jira.contoso.com'
# Username with enough rights to delete comments
JIRA_USERNAME = 'admin'
# Password to Jira User
JIRA_PASSWORD = 'S9ev6ZpQ4sy2VFH2_bjKKQAYRUlDfW7ujNnrIq9Lbn5w'

''' ----- ----- Do not change anything below ----- ----- '''

# Ignore SSL problem (certificate) - self signed
urllib3.disable_warnings()

# get issue comments:
# https://developer.atlassian.com/cloud/jira/platform/rest/#api-api-2-issue-issueIdOrKey-comment-get
URL_GET_COMMENT = '{0}/rest/api/latest/issue/{1}/comment'.format(URL_JIRA, JIRA_ISSUE_KEY)

# delete issue comment:
# https://developer.atlassian.com/cloud/jira/platform/rest/#api-api-2-issue-issueIdOrKey-comment-id-delete
URL_DELETE_COMMENT = '{0}/rest/api/2/issue/{1}/comment/{2}'


def user_yesno():
    ''' Asks user for input yes or no, responds with boolean '''
    allowed_response_yes = {'yes', 'y'}
    allowed_response_no = {'no', 'n'}
    user_response = input().lower()
    if user_response in allowed_response_yes:
        return True
    elif user_response in allowed_response_no:
        return False
    else:
        sys.stdout.write("Please respond with 'yes' or 'no'")
        return False


# get jira comments
RESPONSE = requests.get(URL_GET_COMMENT, verify=False, auth=(JIRA_USERNAME, JIRA_PASSWORD))

# check if http response is OK (200)
if RESPONSE.status_code != 200:
    print('Exit-101: Could not connect to api [HTTP-Error: {0}]'.format(RESPONSE.status_code))
    sys.exit(101)

# parse response to json
JSON_RESPONSE = json.loads(RESPONSE.text)

# get user confirmation to delete all comments for issue
print('Do you want to delete {0} comments for issue {1}? (yes/no)' \
    .format(len(JSON_RESPONSE['comments']), JIRA_ISSUE_KEY))
if user_yesno():
    for jira_comment in JSON_RESPONSE['comments']:
        print('Deleting Jira comment {0}'.format(jira_comment['id']))
        # send delete request
        RESPONSE = requests.delete(
            URL_DELETE_COMMENT.format(URL_JIRA, JIRA_ISSUE_KEY, jira_comment['id']),
            verify=False, auth=(JIRA_USERNAME, JIRA_PASSWORD))
        # check if http response is No Content (204)
        if RESPONSE.status_code != 204:
            print('Exit-102: Could not connect to api [HTTP-Error: {0}; {1}]' \
                .format(RESPONSE.status_code, RESPONSE.text))
            sys.exit(102)
else:
    print('User abort script...')
source control: https://gist.github.com/fty4/151ee7070f2a3f9da2cfa9b1ee1c132d
Use the JIRA REST API through the Chrome JavaScript Console.
Background:
We didn't want to write a full application for what we hope is an isolated occurrence. We originally planned to use PowerShell's Invoke-WebRequest. However, authentication proved to be a challenge. The API supports Basic Authentication, though it's only recommended when using SSL, which we weren't using for our internal server. Also, our initial tests resulted in 401 errors (perhaps due to a bug).
However, the API also supports cookie-based authentication, so as long as you are generating the request from a browser which has a valid JIRA session, it just works. We chose that method.
Solution details:
First, find and review the relevant comment and issue IDs:
SELECT * FROM jira..jiraaction WHERE actiontype = 'comment' AND actionbody LIKE '%RESOLVER.ADR.RecipNotFound%';
This might be a slow query depending on the size of your JIRA data. It seems to be indexed on the issueid, so if you know that, specify it. Also, add other criteria to this query so that it only represents the comments you wish to delete.
The solution below is written for comments on a single issue, but with some additional JavaScript could be expanded to support multiple issues.
We need the list of comment IDs for use in the Chrome JavaScript console. A useful format is a comma-delimited list of strings, which you can create as follows:
SELECT '"' + CONVERT(VARCHAR(50),ID) + '", ' FROM jira..jiraaction WHERE actiontype = 'comment' AND actionbody LIKE '%RESOLVER.ADR.RecipNotFound%' AND issueid = #issueid FOR XML PATH('')
(This is not necessarily the best way to concatenate strings in SQL, but it's simple and works for this purpose.)
Now, open a new browser session and authenticate to your JIRA instance. We used Chrome, but any browser with a JavaScript console should do.
Take the string produced by that query and drop it in the JavaScript console inside of a statement like this:
CommentIDs = [StringFromSQL];
You will need to trim the trailing comma manually (or adjust the above query to do so for you). It will look like this:
CommentIDs = ["1234", "2345"];
When you run that command, you will have created a JavaScript array with all of those comment IDs.
Now we arrive at the meat of the technique. We will loop over the contents of that array and make a new AJAX call to the REST API using XMLHttpRequest (often abbreviated XHR). (There is also a jQuery option.)
for (let s of CommentIDs) {let r = new XMLHttpRequest; r.open("DELETE","http://jira.example.com/rest/api/2/issue/11111/comment/"+s,true); r.send();}
You must replace "11111" with the relevant issue ID. You can repeat this for multiple issue IDs, or you can build a multi-dimensional array and a fancier loop.
This is not elegant and has no error handling, but you can monitor the progress in Chrome's developer tools (for example, the Network tab).
I would use a jira-python script or a ScriptRunner Groovy script, even for a one-off bulk update, because it is easier to test and requires no database access.
Glad it worked for you though!
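A minimal jira-python sketch of that approach could look like this (the server URL, credentials, issue key, and matched text below are placeholders):

# Hedged sketch using the jira-python library (pip install jira).
from jira import JIRA

jira = JIRA(server='https://jira.example.com', basic_auth=('admin', 'password'))

issue = jira.issue('ABC-123')
for comment in jira.comments(issue):
    # Delete only the bounce-message comments, keep everything else.
    if 'RESOLVER.ADR.RecipNotFound' in (comment.body or ''):
        comment.delete()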
We solved this problem, which occurs from time to time, with ScriptRunner and a Groovy script:
// This script takes some time. When executing it in the console, it takes a long time to respond and then the console returns "null",
// but it keeps running in the background. Give it some time - at least 1 second per comment and attachment to delete.
import com.atlassian.jira.component.ComponentAccessor
import com.atlassian.jira.issue.IssueManager
import com.atlassian.jira.issue.MutableIssue
import com.atlassian.jira.issue.comments.Comment
import com.atlassian.jira.issue.comments.CommentManager
import com.atlassian.jira.issue.attachment.Attachment
import com.atlassian.jira.issue.managers.DefaultAttachmentManager
import com.atlassian.jira.issue.AttachmentManager
import org.apache.log4j.Logger
import org.apache.log4j.Level

log.setLevel(Level.DEBUG)

// NRS-1959
def issueKeys = ['XS-8071', 'XS-8060', 'XS-8065', 'XRFS-26', 'NRNM-45']
def deleted_attachments = 0
def deleted_comments = 0

IssueManager issueManager = ComponentAccessor.issueManager
CommentManager commentManager = ComponentAccessor.commentManager
AttachmentManager attachmentManager = ComponentAccessor.attachmentManager

issueKeys.each { issueKey ->
    MutableIssue issue = issueManager.getIssueObject(issueKey)

    List<Comment> comments = commentManager.getComments(issue)
    comments.each { comment ->
        if (comment.body.contains('550 5.1.1 The email account that you tried to reach does not exist')) {
            log.info issueKey + " DELETE comment:"
            //log.debug comment.body
            commentManager.delete(comment)
            deleted_comments++
        } else {
            log.info issueKey + " KEEP comment:"
            log.debug comment.body
        }
    }

    List<Attachment> attachments = attachmentManager.getAttachments(issue)
    attachments.each { attachment ->
        if (attachment.filename.equals('icon.png')) {
            log.info issueKey + " DELETE attachment " + attachment.filename
            attachmentManager.deleteAttachment(attachment)
            deleted_attachments++
        } else {
            log.info issueKey + " KEEP attachment " + attachment.filename
        }
    }
}

log.info "${deleted_comments} deleted comments, and ${deleted_attachments} deleted attachments"
return "${deleted_comments} deleted comments, and ${deleted_attachments} deleted attachments"
Bitbucket doesn't expose the number of pull requests (open, merged, etc.) for a repository in the web interface, so I'll likely need to find it using the API.
Some examples:
https://api.bitbucket.org/2.0/repositories/tutorials/tutorials.bitbucket.org/pullrequests/?state=OPEN
https://api.bitbucket.org/2.0/repositories/tutorials/tutorials.bitbucket.org/pullrequests/?state=MERGED
and look for the size entry in the response (e.g. "size": 7).
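If all you need is the count, a quick sketch along those lines, using the requests library and the example repository from the URLs above:

import requests

# Read the total count straight from the "size" field of the paginated response.
resp = requests.get(
    "https://api.bitbucket.org/2.0/repositories/tutorials/tutorials.bitbucket.org/pullrequests/",
    params={"state": "MERGED"})
resp.raise_for_status()
print(resp.json().get("size"))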
The following Python code uses the requests library to interact with the Bitbucket API. It should print the titles and the number of merged pull requests authored by the Bitbucket account my_bb_username. Note that you will need to edit url0 to point to the appropriate repository.
import requests

numprs = 0
url0 = "https://bitbucket.org/api/2.0/repositories/{username}/{reposlug}/pullrequests/?state=merged"
url = url0

while True:
    r = requests.get(url)
    if r.status_code != 200:
        raise RuntimeError
    data = r.json()
    values = data['values']
    for value in values:
        if value['author']['username'] == 'my_bb_username':
            print(value['title'])
            numprs += 1
    if 'next' in data.keys():
        url = data['next']
    else:
        break

print(numprs)
If you want a list of all PRs, append ?state=merged,open,declined to your API call. By default, the API will only include open PRs.