I'm trying to perform a query via the Zendesk Search API that compares two fields. E.g. I want to get all tickets where created_at is equal to updated_at.
This syntax is accepted, but the result is not correct: query=type:ticket created_at:(updated_at).
Are such predicates supported by the Zendesk Search API?
If not, are there other endpoints that can provide the desired outcome?
I can't see anything that supports this syntax in the search reference. You would have to query tickets in your preferred time range, or all of them, and check whether the creation and update times actually match. Here is sample code showing how to do it in Python using a Zendesk API wrapper called Zenpy:
from dotenv import load_dotenv
from zenpy import Zenpy
from os import environ

load_dotenv()

def main():
    zenpy_client = Zenpy(
        subdomain=environ["SUBDOMAIN"],
        email=environ["EMAIL"],
        token=environ["API_TOKEN"],
    )
    # start_time=1 walks the incremental export from the beginning of time
    tickets = zenpy_client.tickets.incremental(start_time=1)
    for ticket in tickets:
        if ticket.created_at == ticket.updated_at:
            print(ticket)

if __name__ == "__main__":
    main()
The code will print any ticket that has had no update since creation.
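If you'd rather not depend on Zenpy, the same check can be run against the raw incremental ticket export endpoint with plain requests. A minimal sketch, assuming an API-token setup; the subdomain, email, and token values are placeholders:

import requests

SUBDOMAIN = "yoursubdomain"  # placeholder: your Zendesk subdomain
EMAIL = "you@example.com"    # placeholder
API_TOKEN = "yourtoken"      # placeholder

url = f"https://{SUBDOMAIN}.zendesk.com/api/v2/incremental/tickets.json"
auth = (f"{EMAIL}/token", API_TOKEN)  # token-based basic auth
start_time = 1  # export everything since the epoch

while True:
    resp = requests.get(url, auth=auth, params={"start_time": start_time})
    resp.raise_for_status()
    data = resp.json()
    for ticket in data["tickets"]:
        if ticket["created_at"] == ticket["updated_at"]:
            print(ticket["id"])
    if data["count"] < 1000:  # fewer than a full page means the export is done
        break
    start_time = data["end_time"]  # resume from where this page ended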
Related
I have been using tweepy to scrape information from Twitter such as tweets with a particular keyword, user ID, favorite count, retweeter IDs, etc. I tried to fetch the user IDs of everyone who has retweeted certain tweets. After going through the tweepy documentation, I found that the way to do this is:
API.get_retweets(id, *, count, trim_user)
I have gone through similar questions here, but I could not figure out how to retrieve the retweet IDs.
number_of_tweets = 10
tweets = []
like = []
time = []
author = []
retweet_count = []
tweet_id = []
retweets_list = []
retweeters_list = []

for tweet in tweepy.Cursor(api.search_tweets, q='bullying', lang='en', tweet_mode="extended").items(number_of_tweets):
    tweet_id.append(tweet.id)  # Instead of i._json['id']
    tweets.append(tweet.full_text)
    like.append(tweet.favorite_count)
    time.append(tweet.created_at)
    author.append(tweet.user.screen_name)
    retweet_count.append(tweet.retweet_count)
    retweets_list = api.get_retweets(tweet.id)
    for retweet in retweets_list:
        retweeters_list.append(retweet.user.id)
        # print(retweet.user.id)
You should choose more explicit names for your variables (i? j?); that would help you (and the helpers here) check the logic of your code.
Besides, what you want to achieve and what does not work are pretty unclear.
Does it work if you remove the first inner loop? (I don't understand its purpose)
for tweet in tweepy.Cursor(api.search_tweets, q='bullying', lang='en', tweet_mode="extended").items(number_of_tweets):
    tweet_id.append(tweet.id)  # Instead of i._json['id']
    tweets.append(tweet.full_text)
    like.append(tweet.favorite_count)
    time.append(tweet.created_at)
    author.append(tweet.user.screen_name)
    retweet_count.append(tweet.retweet_count)  # tweet, not the undefined i
    # No need for the first inner loop
    retweets_list = api.get_retweets(tweet.id)
    for retweet in retweets_list:
        print(retweet.id)  # If you want the retweet id
        print(retweet.user.id)  # The retweeter's user id
If not, please give more debug information (What is the problem? What do you want to do? Is there an error message? Where? What have you tried? Show some printed variables, etc.).
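One practical note on the snippet above: calling api.get_retweets() once per tweet found goes through the rate limit quickly. tweepy can sleep automatically until the window resets; a sketch, with placeholder credentials:

import tweepy

# placeholders: use your own app credentials from the developer portal
auth = tweepy.OAuth1UserHandler(
    "CONSUMER_KEY", "CONSUMER_SECRET",
    "ACCESS_TOKEN", "ACCESS_TOKEN_SECRET",
)
# wait_on_rate_limit makes tweepy pause until the limit resets
# instead of raising an error in the middle of the loop
api = tweepy.API(auth, wait_on_rate_limit=True)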
I'm tracking all mentions of #UN with Tweepy using the Twitter stream v1 API. However, I'm also getting all mentions of usernames containing #UN, such as #UN_Women. I could filter them out in a post-processing step, but this seems very inefficient.
Is there any way to avoid this?
This is my code:
class MyStreamListener(tweepy.StreamListener):
    def on_status(self, status):
        print(status.text)

myStreamListener = MyStreamListener()
myStream = tweepy.Stream(auth=api.auth, listener=myStreamListener)  # pass the instance, not a call
myStream.filter(track=['#UN'])
Using follow instead of track should work. With follow, you supply a list of user IDs:
myStream.filter(follow=['14159148'])
I don't know whether tweepy provides any further functionality to avoid this, but what you can do here is filter the results while saving them to your database or CSV.
Check the JSON response and look for the entities object; within it, check user_mentions and screen_name. Save only the tweets with your desired screen_name.
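A sketch of that post-filter inside the stream listener, assuming "UN" is the exact screen name you want to keep (for the hashtag case, status.entities["hashtags"] works the same way with a "text" key):

import tweepy

class FilteredStreamListener(tweepy.StreamListener):
    def on_status(self, status):
        # entities holds exact parses, so "UN_Women" won't slip through
        mentions = status.entities.get("user_mentions", [])
        if any(m["screen_name"] == "UN" for m in mentions):
            print(status.text)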
I am trying to list all my meetings with a certain email ID. I only want to select a few things out of the huge response. Duration is one of the main insights I need, but it won't let me return the start and end datetimes.
URL:
https://graph.microsoft.com/v1.0/me/messages?$select=startDateTime,subject,from,sender,toRecipients&$search="participants:xyz#contoso.com and kind:meetings"
The error I am getting is:
"message": "Could not find a property named 'startDateTime' on type 'Microsoft.OutlookServices.Message'
Is this expected?
Are you looking for how to query all meetings? Your query is for messages. Can you please clarify?
If you are looking for help with messages from a specific person along with the created datetime, you can update your query like this:
https://graph.microsoft.com/v1.0/me/messages?$select=createdDateTime,Id,lastModifiedDateTime,from,subject&$filter=from/emailAddress/name eq 'XYZ A'
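If it is actual meetings you are after, they live on the events resource rather than messages, and start and end are valid properties there, e.g.:
https://graph.microsoft.com/v1.0/me/events?$select=subject,start,end,organizer,attendees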
I am trying to pull reports automatically from the Shopify admin portal. From the source page I can see that a JavaScript function makes this call:
var shopifyQL = "SHOW quantity_count, total_sales BY product_type, product_title, sku, shipping_city, traffic_source, source, variant_title, host, shipping_country, shipping_province, day, month, referrer FROM products SINCE xxx UNTIL yyy ORDER BY total_sales DESC";
var options = {"category":"product_reports","id":wwwwww,"name":"Product Report by SKU","shopify_ql":"SHOW quantity_count, total_sales BY product_type, product_title, sku, shipping_city, traffic_source, source, variant_title, host, shipping_country, shipping_province, day, month, referrer FROM products SINCE xxxx UNTIL yyyy ORDER BY total_sales DESC","updated_at":"zzz"};
However, looking at the product API (https://docs.shopify.com/api/product), I do not see most of these attributes. I am assuming there are join tables or separate calls to the model. Also, I tried to pull information for a single SKU, but it pulls everything.
ShopifyAPI::Product.find(:all, :params => {:variants => {:sku => 'zzzz'}})
Does anybody have any experience working with reports?
You need to grab the data from the API and work with it. The available objects are clearly stated in the Shopify API docs. Admin dashboard data can't be pulled the way you seem to envision unless you play with JavaScript injection (Tampermonkey, etc.), which is highly discouraged.
It would go like this for you. First off, if you pull products, you have to do so in chunks of 250. The :all symbol gives you up to 250; supplying a page and limit parameter would help there.
Second, you cannot filter by SKU. Instead, download all the products, and then inside each product are the variants. A variant has a SKU, so you'd search that way.
Doing that, you could set up your own nice reference data structure, ready to be used in reporting as you see fit.
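A sketch of that approach in Python with requests, since the REST endpoints are language-agnostic; the shop domain and credentials are placeholders, and the page parameter follows the old page-based pagination this answer describes (newer API versions use cursor-based pagination instead):

import requests

SHOP = "yourshop.myshopify.com"  # placeholder: your shop domain
API_KEY = "APIKEY"               # placeholder: private app credentials
PASSWORD = "PASSWORD"
TARGET_SKU = "zzzz"

matches = []
page = 1
while True:
    resp = requests.get(
        f"https://{SHOP}/admin/products.json",
        params={"limit": 250, "page": page},  # chunks of up to 250
        auth=(API_KEY, PASSWORD),
    )
    resp.raise_for_status()
    products = resp.json()["products"]
    if not products:
        break
    for product in products:
        # SKUs live on the variants, not on the product itself
        if any(v["sku"] == TARGET_SKU for v in product["variants"]):
            matches.append(product)
    page += 1

print(f"{len(matches)} product(s) carry SKU {TARGET_SKU}")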
I want to integrate two JIRA instances through email; one runs JIRA 4.2.1 and the other 4.3.3.
Each instance has its own custom fields, and the two instances have to interchange issue details and issue updates through email, i.e. both have to stay in sync.
For example:
1) If an issue is created in Instance 1, a mail will be triggered, and using that email, Instance 2 will create the issue there.
2) Also, if there is an update to an issue in Instance 1, a mail will be triggered to Instance 2, which will update the same issue in Instance 2.
Hope that clears it up!
If I got your intentions right, I believe there is an easier way to do this using the JIRA remote API. For example, you could write a Python script, using the XML-RPC library, that compares the two systems and updates them as needed.
The problem with the email method you suggested is that you could easily create an endless loop of issue creation...
First, create a custom field in both instances and call it something like "Sync". This will be used to mark issues once we sync them.
Next, enable the RPC plugin.
Finally, write a script that will copy the issues via RPC, example:
#!/usr/bin/python
# Sample Python client accessing JIRA via XML-RPC.
#
# Refer to the XML-RPC Javadoc to see what calls are available:
# http://docs.atlassian.com/software/jira/docs/api/rpc-jira-plugin/latest/com/atlassian/jira/rpc/xmlrpc/XmlRpcService.html
import xmlrpclib

s1 = xmlrpclib.ServerProxy('http://your.first.jira.url/rpc/xmlrpc')
auth1 = s1.jira1.login('user', 'password')
s2 = xmlrpclib.ServerProxy('http://your.second.jira.url/rpc/xmlrpc')
auth2 = s2.jira1.login('user', 'password')

# go through all issues that appear in the next filter
filter_id = "10200"
issues = s1.jira1.getIssuesFromFilter(auth1, filter_id)
for issue in issues:
    # skip issues already marked with the sync custom field
    synced = False
    for customField in issue['customFieldValues']:
        if customField['customfieldId'] == 'customfield_10412':  # sync custom field
            synced = True
            break
    if synced:
        continue
    # no sync field, sync now
    newissue = s2.jira1.createIssue(auth2, {
        "project": issue['project'],
        "type": issue['type'],
        "summary": issue['summary'],
        "description": issue['description'],
    })
    print "Created %s/browse/%s" % (s2.jira1.getServerInfo(auth2)['baseUrl'], newissue['key'])
    # mark the source issue as synced
    s1.jira1.updateIssue(auth1, issue['key'], {"customfield_10412": ["yes"]})
The script wasn't tested but should work. You'll probably need to copy the rest of the fields you have; check out this link for more info. Also, this is a one-way sync only; you have to sync the other way around as well.
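Since the same logic has to run in both directions, one option is to factor the copy into a function and call it each way. A sketch, reusing the server proxies from the script above; the second filter ID is an assumption (you'd create a corresponding filter on instance 2):

def sync(src, src_auth, dst, dst_auth, filter_id):
    # copy every unsynced issue matching the filter from src to dst
    for issue in src.jira1.getIssuesFromFilter(src_auth, filter_id):
        already_synced = any(
            cf['customfieldId'] == 'customfield_10412'
            for cf in issue['customFieldValues'])
        if already_synced:
            continue
        dst.jira1.createIssue(dst_auth, {
            "project": issue['project'],
            "type": issue['type'],
            "summary": issue['summary'],
            "description": issue['description'],
        })
        # flag the source issue so it is not copied again
        src.jira1.updateIssue(src_auth, issue['key'], {"customfield_10412": ["yes"]})

sync(s1, auth1, s2, auth2, "10200")  # instance 1 -> instance 2
sync(s2, auth2, s1, auth1, "10201")  # assumption: matching filter on instance 2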