P2P communication using Android smartphone and nfcpy - nfc-p2p

I would like to send an NDEF message from my NFC reader (using nfcpy on Linux) to my smartphone. I have the following code:
#!/usr/bin/python
# -*- coding: latin-1 -*-
# -----------------------------------------------------------------------------
# Copyright 2010-2011 Stephen Tiedemann <stephen.tiedemann@googlemail.com>
#
# Licensed under the EUPL, Version 1.1 or - as soon they
# will be approved by the European Commission - subsequent
# versions of the EUPL (the "Licence");
# You may not use this work except in compliance with the
# Licence.
# You may obtain a copy of the Licence at:
#
# http://www.osor.eu/eupl
#
# Unless required by applicable law or agreed to in
# writing, software distributed under the Licence is
# distributed on an "AS IS" basis,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
# express or implied.
# See the Licence for the specific language governing
# permissions and limitations under the Licence.
# -----------------------------------------------------------------------------
import logging
log = logging.getLogger('main')
import os
import sys
import time
import argparse
sys.path.insert(1, os.path.split(sys.path[0])[0])
from cli import CommandLineInterface
import nfc
import nfc.snep
import nfc.ndef
class DefaultServer(nfc.snep.SnepServer):
    def __init__(self, llc):
        service_name = 'urn:nfc:sn:snep'
        super(DefaultServer, self).__init__(llc, service_name)

    def put(self, ndef_message):
        log.info("default snep server got put request")
        log.info(ndef_message.pretty())
        return nfc.snep.Success

class ValidationServer(nfc.snep.SnepServer):
    def __init__(self, llc):
        service_name = "urn:nfc:xsn:nfc-forum.org:snep-validation"
        super(ValidationServer, self).__init__(llc, service_name, 10000)
        self.ndef_message_store = dict()

    def put(self, ndef_message):
        log.info("validation snep server got put request")
        key = (ndef_message.type, ndef_message.name)
        log.info("store ndef message under key " + str(key))
        self.ndef_message_store[key] = ndef_message
        return nfc.snep.Success

    def get(self, acceptable_length, ndef_message):
        log.info("validation snep server got get request")
        key = (ndef_message.type, ndef_message.name)
        log.info("client requests ndef message with key " + str(key))
        if key in self.ndef_message_store:
            ndef_message = self.ndef_message_store[key]
            log.info("found matching ndef message")
            log.info(ndef_message.pretty())
            if len(str(ndef_message)) <= acceptable_length:
                return ndef_message
            else:
                return nfc.snep.ExcessData
        return nfc.snep.NotFound

class TestProgram(CommandLineInterface):
    def __init__(self):
        parser = argparse.ArgumentParser()
        super(TestProgram, self).__init__(parser, groups="llcp dbg clf")

    def on_llcp_startup(self, clf, llc):
        self.default_snep_server = DefaultServer(llc)
        self.validation_snep_server = ValidationServer(llc)
        return llc

    def on_llcp_connect(self, llc):
        self.default_snep_server.start()
        self.validation_snep_server.start()
        return True

if __name__ == '__main__':
    TestProgram().run()
Where, and how, should I create an NDEF message and send it with the existing code to my smartphone? The other direction (from smartphone to NFC reader) is already working.
Thanks in advance!

The Simple NDEF Exchange Protocol (SNEP) that is used to send NDEF messages is a client/server protocol. The code sample provided implements a SNEP server and thus succeeds in receiving an NDEF message from the phone. To send an NDEF message, a SNEP client must be used; the equivalent in nfcpy is examples/snep-test-client.py. The much better option, however, is to use examples/beam.py send ndef <file.ndef>.
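For illustration, here is a minimal client-side sketch using the same legacy nfcpy API as the question (nfc.ndef and nfc.snep); the text record content and the 'usb' device path are placeholders:
#!/usr/bin/python
import threading
import nfc
import nfc.snep
import nfc.ndef

def send_ndef_message(llc):
    # build a simple NDEF text record and push it to the phone's default SNEP server
    record = nfc.ndef.TextRecord("Hello from nfcpy")
    nfc.snep.SnepClient(llc).put(nfc.ndef.Message(record))

def on_connect(llc):
    # do the transfer in a worker thread; the callback itself must return quickly
    threading.Thread(target=send_ndef_message, args=(llc,)).start()
    return True

clf = nfc.ContactlessFrontend('usb')
clf.connect(llcp={'on-connect': on_connect})
The message is sent from a separate thread because the on-connect callback has to return before the LLCP link starts running and data exchange becomes possible.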

Related

django all-auth enable 2FA using email

As per the installation instructions at https://django-allauth-2FA.readthedocs.io/en/latest/ I have implemented all the steps, but I have no idea how to enable email-based OTP login for existing users. I even added a manual adapter.py as below:
from allauth.account.adapter import DefaultAccountAdapter
from urllib.parse import urlencode
from allauth.exceptions import ImmediateHttpResponse
from django.http import HttpResponseRedirect
from django.urls import reverse
from allauth_2fa.utils import user_has_valid_totp_device


class NoNewUsersAccountAdapter(DefaultAccountAdapter):
    """
    Adapter to disable allauth new signups.
    Used at equilang/settings.py with key ACCOUNT_ADAPTER
    https://django-allauth.readthedocs.io/en/latest/advanced.html#custom-redirects
    """

    def is_open_for_signup(self, request):
        """
        Checks whether or not the site is open for signups.
        Next to simply returning True/False you can also intervene the
        regular flow by raising an ImmediateHttpResponse
        """
        return False

    def has_2fa_enabled(self, user):
        """Returns True if the user has 2FA configured."""
        return user_has_valid_totp_device(user)

    def login(self, request, user):
        # Require two-factor authentication if it has been configured.
        if self.has_2fa_enabled(user):
            # Cast to string for the case when this is not a JSON serializable
            # object, e.g. a UUID.
            request.session["allauth_2fa_user_id"] = str(user.id)

            redirect_url = reverse("two-factor-authenticate")
            # Add "next" parameter to the URL.
            view = request.resolver_match.func.view_class()
            view.request = request
            success_url = view.get_success_url()
            query_params = request.GET.copy()
            if success_url:
                query_params[view.redirect_field_name] = success_url
            if query_params:
                redirect_url += "?" + urlencode(query_params)

            raise ImmediateHttpResponse(response=HttpResponseRedirect(redirect_url))

        # Otherwise defer to the original allauth adapter.
        return super().login(request, user)
But no OTPs are fired. When I manually created a TOTP device, a window asking for an OTP appears after login, but I don't know where and how the email with the OTP is sent.
Any help or lead, please.
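For reference, django-allauth-2fa only implements TOTP devices, so it never emails codes by itself; an email OTP would have to be generated and sent explicitly, for example from this adapter before redirecting to the verification view. A purely hypothetical sketch (the helper name, session key, and message text are invented; django.core.mail.send_mail is the only real API used):
from django.core.mail import send_mail
import secrets

def send_email_otp(self, request, user):
    # Hypothetical helper: generate a one-time code, keep it in the session,
    # and mail it to the user before redirecting to a verification view.
    code = f"{secrets.randbelow(10**6):06d}"
    request.session["email_otp_code"] = code
    send_mail(
        subject="Your login code",
        message=f"Your one-time login code is {code}",
        from_email=None,  # falls back to DEFAULT_FROM_EMAIL
        recipient_list=[user.email],
    )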

How to connect to KaaIOT MQTT broker

I want to connect to my KaaIoT cloud and subscribe to a topic to show the result in the terminal. I do not know where to get the topic name to subscribe with. I have read the KaaIoT documentation but still do not have a clear idea. Could someone help me with some sample code to reference?
KaaIOT Information
appVersion.name: c184ijqrqa51q5haskp0-v1
appVersion.registeredDate: 2021-03-16T05:59:54.185Z
createdDate: 2021-03-16T05:59:54.186Z
endpointId: fc2c5833-77c5-445a-89a0-9b0e7498c048
model: Raspberry Pi (192.168.0.171)
metadataUpdatedDate: 2021-03-17T09:13:01.809Z
Sample Code
import itertools
import json
import queue
import random
import string
import sys
import time

import paho.mqtt.client as mqtt

KPC_HOST = "mqtt.cloud.kaaiot.com"  # Kaa Cloud plain MQTT host
KPC_PORT = 1883                     # Kaa Cloud plain MQTT port

APPLICATION_VERSION = ""  # Paste your application version
ENDPOINT_TOKEN = ""       # Paste your endpoint token


class MetadataClient:

    def __init__(self, client):
        self.client = client
        self.metadata_by_request_id = {}
        self.global_request_id = itertools.count()
        get_metadata_subscribe_topic = f'kp1/{APPLICATION_VERSION}/epmx/{ENDPOINT_TOKEN}/get/#'
        self.client.message_callback_add(get_metadata_subscribe_topic, self.handle_metadata)

    def handle_metadata(self, client, userdata, message):
        request_id = int(message.topic.split('/')[-2])
        if message.topic.split('/')[-1] == 'status' and request_id in self.metadata_by_request_id:
            print(f'<--- Received metadata response on topic {message.topic}')
            metadata_queue = self.metadata_by_request_id[request_id]
            metadata_queue.put_nowait(message.payload)
        else:
            print(f'<--- Received bad metadata response on topic {message.topic}:\n{str(message.payload.decode("utf-8"))}')

    def get_metadata(self):
        request_id = next(self.global_request_id)
        get_metadata_publish_topic = f'kp1/{APPLICATION_VERSION}/epmx/{ENDPOINT_TOKEN}/get/{request_id}'
        metadata_queue = queue.Queue()
        self.metadata_by_request_id[request_id] = metadata_queue
        print(f'---> Requesting metadata by topic {get_metadata_publish_topic}')
        self.client.publish(topic=get_metadata_publish_topic, payload=json.dumps({}))
        try:
            metadata = metadata_queue.get(True, 5)
            del self.metadata_by_request_id[request_id]
            return str(metadata.decode("utf-8"))
        except queue.Empty:
            print('Timed out waiting for metadata response from server')
            sys.exit()

    def patch_metadata_unconfirmed(self, metadata):
        partial_metadata_update_publish_topic = f'kp1/{APPLICATION_VERSION}/epmx/{ENDPOINT_TOKEN}/update/keys'
        print(f'---> Reporting metadata on topic {partial_metadata_update_publish_topic}\nwith payload {metadata}')
        self.client.publish(topic=partial_metadata_update_publish_topic, payload=metadata)


def main():
    # Initiate server connection
    print(f'Connecting to Kaa server at {KPC_HOST}:{KPC_PORT} using application version {APPLICATION_VERSION} and endpoint token {ENDPOINT_TOKEN}')
    client_id = ''.join(random.choice(string.ascii_uppercase + string.digits) for _ in range(6))
    client = mqtt.Client(client_id=client_id)
    client.connect(KPC_HOST, KPC_PORT, 60)
    client.loop_start()

    metadata_client = MetadataClient(client)

    # Fetch current endpoint metadata attributes
    retrieved_metadata = metadata_client.get_metadata()
    print(f'Retrieved metadata from server: {retrieved_metadata}')

    # Do a partial endpoint metadata update
    metadata_to_report = json.dumps({"model": "BFG 9000", "mac": "00-14-22-01-23-45"})
    metadata_client.patch_metadata_unconfirmed(metadata_to_report)

    time.sleep(5)
    client.disconnect()


if __name__ == '__main__':
    main()
I have established communication with an EMQX broker using the MQTT protocol. I don't know much about KaaIoT, but this might help you. As I went through your code, I didn't see the part where you subscribe to the topic (correct me if I am wrong). I have implemented the pub/sub model, and below is the subscriber code, which runs fine for the EMQX broker. You can try it for KaaIoT.
import paho.mqtt.client as mqtt
import time
import logging

def on_connect(client, userdata, flags, rc):
    logging.info("Connected flags" + str(flags) + "result code " + str(rc) + "client1_id")
    client.connected_flag = True

def on_message(client, userdata, message):
    print("Received message: ", str(message.payload.decode("utf-8")))

def on_disconnect(client, userdata, rc):
    if rc != 0:
        print("Unexpected MQTT disconnection. Will auto-reconnect")

client = mqtt.Client('''Your client id string''')
client.connect("mqtt.cloud.kaaiot.com", 1883, 60)
client.subscribe('''Your topic name (mentioned where data is published)''', qos=1)
client.on_connect = on_connect
client.on_message = on_message
client.on_disconnect = on_disconnect
client.loop_forever()
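As a sketch of how that could look against Kaa specifically, the following reuses the epmx metadata topic pattern that already appears in the question's code; APPLICATION_VERSION and ENDPOINT_TOKEN are the same placeholders as above, and the subscription is made in on_connect so it is re-established after reconnects:
import paho.mqtt.client as mqtt

APPLICATION_VERSION = ""  # same placeholder as in the question
ENDPOINT_TOKEN = ""       # same placeholder as in the question

def on_connect(client, userdata, flags, rc):
    # the '#' wildcard catches the .../get/<request_id>/status responses
    # published by the platform for the metadata requests in the question
    client.subscribe(f'kp1/{APPLICATION_VERSION}/epmx/{ENDPOINT_TOKEN}/get/#', qos=1)

def on_message(client, userdata, message):
    print(f'{message.topic}: {message.payload.decode("utf-8")}')

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect("mqtt.cloud.kaaiot.com", 1883, 60)
client.loop_forever()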

requests.exceptions.ChunkedEncodingError: ('Connection broken: IncompleteRead(0 bytes read, 512 more expected)', IncompleteRead

I wanted to write a program to fetch tweets from Twitter and then do sentiment analysis. I wrote the following code and got the error even after importing all the necessary libraries. I'm relatively new to data science, so please help me.
I could not understand the reason for this error:
class TwitterClient(object):
    def __init__(self):
        # keys and tokens from the Twitter Dev Console
        consumer_key = 'XXXXXXXXX'
        consumer_secret = 'XXXXXXXXX'
        access_token = 'XXXXXXXXX'
        access_token_secret = 'XXXXXXXXX'
        api = Api(consumer_key, consumer_secret, access_token, access_token_secret)

def preprocess(tweet, ascii=True, ignore_rt_char=True, ignore_url=True, ignore_mention=True, ignore_hashtag=True, letter_only=True, remove_stopwords=True, min_tweet_len=3):
    sword = stopwords.words('english')
    if ascii:  # maybe remove lines with ANY non-ascii character
        for c in tweet:
            if not (0 < ord(c) < 127):
                return ''
    tokens = tweet.lower().split()  # to lower, split
    res = []
    for token in tokens:
        if remove_stopwords and token in sword:  # ignore stopword
            continue
        if ignore_rt_char and token == 'rt':  # ignore 'retweet' symbol
            continue
        if ignore_url and token.startswith('https:'):  # ignore url
            continue
        if ignore_mention and token.startswith('@'):  # ignore mentions
            continue
        if ignore_hashtag and token.startswith('#'):  # ignore hashtags
            continue
        if letter_only:  # ignore digits
            if not token.isalpha():
                continue
        elif token.isdigit():  # otherwise unify digits
            token = '<num>'
        res += token,  # append token
    if min_tweet_len and len(res) < min_tweet_len:  # ignore tweets with fewer than n tokens
        return ''
    else:
        return ' '.join(res)

for line in api.GetStreamSample():
    if 'text' in line and line['lang'] == u'en':  # step 1
        text = line['text'].encode('utf-8').replace('\n', ' ')  # step 2
        p_t = preprocess(text)

# attempt authentication
try:
    # create OAuthHandler object
    self.auth = OAuthHandler(consumer_key, consumer_secret)
    # set access token and secret
    self.auth.set_access_token(access_token, access_token_secret)
    # create tweepy API object to fetch tweets
    self.api = tweepy.API(self.auth)
except:
    print("Error: Authentication Failed")
Assume all the necessary libraries are imported. The error is on line 69.
for line in api.GetStreamSample():
    if 'text' in line and line['lang'] == u'en':  # step 1
        text = line['text'].encode('utf-8').replace('\n', ' ')  # step 2
        p_t = preprocess(text)
I tried searching the internet for the reason for this error but could not find a solution.
Error was:
requests.exceptions.ChunkedEncodingError: ('Connection broken: IncompleteRead(0 bytes read, 512 more expected)', IncompleteRead(0 bytes read, 512 more expected))
I'm using Python 2.7 and requests version 2.14, the latest one.
If you set stream to True when making a request, Requests cannot release the connection back to the pool unless you consume all the data or call Response.close. This can lead to inefficiency with connections. If you find yourself partially reading request bodies (or not reading them at all) while using stream=True, you should make the request within a with statement to ensure it’s always closed:
with requests.get('http://httpbin.org/get', stream=True) as r:
    # Do things with the response here.
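Applied to the streaming loop in the question, that means either consuming the whole body or treating a broken connection as a signal to back off and reconnect. A minimal sketch with plain requests (the URL is a placeholder, not the Twitter endpoint):
import time
import requests

url = 'https://example.com/stream'  # placeholder streaming endpoint

while True:
    try:
        with requests.get(url, stream=True, timeout=90) as r:
            r.raise_for_status()
            for chunk in r.iter_lines():  # consume the body as it arrives
                if chunk:
                    print(chunk.decode('utf-8'))
    except requests.exceptions.ChunkedEncodingError:
        # The server closed the connection mid-stream; wait and reconnect.
        time.sleep(5)
        continue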
I had the same problem but without stream. As stone mini said, just apply the "with" clause to make sure your request is closed before a new request is made.
with requests.request("POST", url_base, json=task, headers=headers) as report:
print('report: ', report)
Actually, the problem is with your Django-based application ("django2.7" presumably refers to an older Django version). By default, Django allows a request body of up to 2.5 MB to be read into memory.
I was facing the same issue with my Django application, and I just updated the settings.py file of the application serving my URLs (endpoints).
DATA_UPLOAD_MAX_MEMORY_SIZE = None
I just added the above variable in my application's settings.py file.
You can also read more about it here.
I'm pretty sure this will work for you.
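For completeness, a short settings.py sketch; DATA_UPLOAD_MAX_MEMORY_SIZE is a standard Django setting that defaults to 2.5 MB, and you can either raise it to a concrete limit or disable the check entirely as above (the 10 MB figure is just an example):
# settings.py
# Raise the request-body memory limit to 10 MB instead of the 2.5 MB default ...
DATA_UPLOAD_MAX_MEMORY_SIZE = 10 * 1024 * 1024
# ... or disable the check entirely, as suggested above:
# DATA_UPLOAD_MAX_MEMORY_SIZE = None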

How to get content owner info for particular video via YouTube Partner API?

Is there a way to get content owner information (attribution) for particular video via YouTube Partner API?
For example, this video: http://www.youtube.com/watch?v=I67cgXr6L6o&feature=c4-overview&list=UUGnjeahCJW1AF34HBmQTJ-Q is attributed to VEVO.
Is there a way to get that info somehow via API?
It's possible now:
https://developers.google.com/youtube/partner/docs/v1/ownership/
[EDIT]
I've used this sample to create the snippet below and successfully got the asset's content owner.
You will need to set up your Python environment and the client_secrets.json file correctly.
#!/usr/bin/python

import httplib2
import os
import sys
import logging
import requests
import json
import time

from apiclient.discovery import build
from oauth2client.file import Storage
from oauth2client.client import flow_from_clientsecrets
from oauth2client.tools import argparser, run_flow

# The CLIENT_SECRETS_FILE variable specifies the name of a file that contains
# the OAuth 2.0 information for this application, including its client_id and
# client_secret. You can acquire an OAuth 2.0 client ID and client secret from
# the Google Developers Console at
# https://console.developers.google.com/.
# Please ensure that you have enabled the YouTube Data API for your project.
# For more information about using OAuth2 to access the YouTube Data API, see:
# https://developers.google.com/youtube/v3/guides/authentication
# For more information about the client_secrets.json file format, see:
# https://developers.google.com/api-client-library/python/guide/aaa_client_secrets
CLIENT_SECRETS_FILE = "client_secrets.json"

# This variable defines a message to display if the CLIENT_SECRETS_FILE is
# missing.
MISSING_CLIENT_SECRETS_MESSAGE = """
WARNING: Please configure OAuth 2.0
To make this sample run you will need to populate the client_secrets.json file
found at:
%s
with information from the Developers Console
https://console.developers.google.com/
For more information about the client_secrets.json file format, please visit:
https://developers.google.com/api-client-library/python/guide/aaa_client_secrets
""" % os.path.abspath(os.path.join(os.path.dirname(__file__), CLIENT_SECRETS_FILE))

YOUTUBE_SCOPES = (
    # This OAuth 2.0 access scope allows for read-only access to the authenticated
    # user's account, but not other types of account access.
    "https://www.googleapis.com/auth/youtube.readonly",
    # This OAuth 2.0 scope grants access to YouTube Content ID API functionality.
    "https://www.googleapis.com/auth/youtubepartner")
YOUTUBE_API_SERVICE_NAME = "youtube"
YOUTUBE_API_VERSION = "v3"
YOUTUBE_PARTNER_API_SERVICE_NAME = "youtubePartner"
YOUTUBE_PARTNER_API_VERSION = "v1"
KEY = "USE YOUR KEY"

# Authorize the request and store authorization credentials.
def get_authenticated_services(args):
    flow = flow_from_clientsecrets(CLIENT_SECRETS_FILE, scope=" ".join(YOUTUBE_SCOPES), message=MISSING_CLIENT_SECRETS_MESSAGE)
    storage = Storage("%s-oauth2.json" % sys.argv[0])
    credentials = storage.get()
    if credentials is None or credentials.invalid:
        credentials = run_flow(flow, storage, args)
    youtube = build(YOUTUBE_API_SERVICE_NAME, YOUTUBE_API_VERSION, http=credentials.authorize(httplib2.Http()))
    youtube_partner = build(YOUTUBE_PARTNER_API_SERVICE_NAME, YOUTUBE_PARTNER_API_VERSION, http=credentials.authorize(httplib2.Http()))
    return (youtube, youtube_partner)

def get_content_owner_id(youtube_partner):
    content_owners_list_response = youtube_partner.contentOwners().list(fetchMine=True).execute()
    return content_owners_list_response["items"][0]["id"]

def claim_search(youtube_partner, content_owner_id, _videoId):
    video_info = get_video_info(_videoId)
    print '\n---------- CLAIM SEARCH LIST - VIDEO (', video_info['title'], ') ID: (', _videoId, ')'
    claims = youtube_partner.claimSearch().list(videoId=_videoId, onBehalfOfContentOwner=content_owner_id).execute()
    print ' --- CLAIMS: ', claims
    if claims['pageInfo']['totalResults'] < 1:
        print " --- DOESN'T HAVE CLAIMS"
        return
    assetId = claims['items'][0]['assetId']
    print ' --- ASSET ID: ', assetId
    ownershipHistory = youtube_partner.ownershipHistory().list(assetId=assetId, onBehalfOfContentOwner=content_owner_id).execute()
    print ' --- OWNERSHIP HISTORY: ', ownershipHistory
    contentOwnerId = ownershipHistory['items'][0]['origination']['owner']
    print ' --- CONTENT OWNER ID: ', contentOwnerId
    contentOwners = youtube_partner.contentOwners().get(contentOwnerId=contentOwnerId, onBehalfOfContentOwner=content_owner_id).execute()
    print ' --- CONTENT OWNERS: ', contentOwners

def get_video_info(videoId):
    r = requests.get("https://www.googleapis.com/youtube/v3/videos?id=" + videoId + "&part=id,snippet,contentDetails,status,statistics,topicDetails,recordingDetails&key=" + KEY)
    content = r.content
    content = json.loads(content)
    return content['items'][0]['snippet']

if __name__ == "__main__":
    args = argparser.parse_args()
    (youtube, youtube_partner) = get_authenticated_services(args)
    # video id's. Ex: "https://www.youtube.com/watch?v=lgSLz5FeXUg"
    videos = ['I67cgXr6L6o', 'lgSLz5FeXUg']
    content_owner_id = get_content_owner_id(youtube_partner)
    for vid in videos:
        claim_search(youtube_partner, content_owner_id, vid)
        time.sleep(0.5)
(A censored screenshot of the script's output was attached here as proof.)
There is no way to check other people's ownership or management rights on a video or channel. YouTube deliberately avoids that.
While ownershipHistory works, I saw that GET https://www.googleapis.com/youtube/partner/v1/assets?id=[multiple asset ids] also works when it is called with the fetchOwnership=effective option:
https://developers.google.com/youtube/partner/docs/v1/assets/list
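A minimal sketch of that call using the youtube_partner client and content_owner_id built in the script above; the asset IDs are placeholders:
# Assumes youtube_partner and content_owner_id from the script above.
asset_ids = ['A123456789012345', 'A234567890123456']  # placeholder asset IDs

assets = youtube_partner.assets().list(
    id=','.join(asset_ids),
    fetchOwnership='effective',
    onBehalfOfContentOwner=content_owner_id).execute()

for asset in assets.get('items', []):
    # each item should carry the effective ownership data for that asset
    print asset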

Jira for bug tracking and customer support?

We are thinking of using Jira for bug tracking and to integrate it with Git to connect bug fixes with version handling.
Do you also recommend Jira for customer support, or should we find another system, for example Zendesk, for that purpose? I know that it is somehow possible to integrate, for example, HipChat with Jira to enable chat functionality with customers, but is Jira too complex for customer service to handle? What is your experience?
We use Jira for customer support, but we found that Jira is missing many must-have features needed for this, which is why we made many changes.
All in all, we are very happy with our choice, and we managed to save a lot of money by using Jira instead of other solutions.
Here are the major changes we made. This will show you what is missing, while on the other hand showing that with a little bit of programming, Jira can do anything :)
Note: The scripts written below should be attached to a workflow transition. The scripts are written in Jython, so it needs to be installed to use them.
Create issues by email
Jira only sends emails to Jira users. Since we didn't want to create a user for every person that contacted support, we used anonymous users instead and used scripts to send them email.
First, set Jira to create issues from emails. Then, use the Script Runner plugin to save the customer's email and name to custom fields. The code:
from com.atlassian.jira import ComponentManager
import re

cfm = ComponentManager.getInstance().getCustomFieldManager()

# read issue description
description = issue.getDescription()

if (description is not None) and ('Created via e-mail received from' in description):
    # extract email and name:
    if ('<' in description) and ('>' in description):
        # pattern [Created via e-mail received from: name <email@company.com>]
        # split it to a list
        description_list = re.split('<|>|:', description)
        list_length = len(description_list)
        for index in range(list_length - 1, -1, -1):
            if '@' in description_list[index]:
                customer_email = description_list[index]
                customer_name = description_list[index - 1]
                break
    else:
        # pattern [Created via e-mail received from: email@company.com]
        customer_name = "Sir or Madam"
        # split it to a list
        description_list = re.split(': |]', description)
        list_length = len(description_list)
        for index in range(list_length - 1, -1, -1):
            if '@' in description_list[index]:
                customer_email = description_list[index]
                break

    # if the name isn't in the right form, switch its parts:
    if (customer_name[0] == '"') and (customer_name[-1] == '"') and (',' in customer_name):
        customer_name = customer_name[1:-1]
        i = customer_name.index(',')
        customer_name = customer_name[i+2:] + " " + customer_name[:i]

    # insert data to issue fields
    issue.setCustomFieldValue(cfm.getCustomFieldObject("customfield_10401"), customer_email)
    issue.setCustomFieldValue(cfm.getCustomFieldObject("customfield_10108"), customer_name)
Send customer issue created notification
Send the mail using the following script:
import smtplib, email
from smtplib import SMTP
from email.MIMEMultipart import MIMEMultipart
from email.MIMEBase import MIMEBase
from email.MIMEText import MIMEText
from email import Encoders
import os
import re
from com.atlassian.jira import ComponentManager

customFieldManager = ComponentManager.getInstance().getCustomFieldManager()
cfm = ComponentManager.getInstance().getCustomFieldManager()

# read needed fields from the issue
key = issue.getKey()
#status = issue.getStatusObject().name
summary = issue.getSummary()
project = issue.getProjectObject().name

# read customer email address
toAddr = issue.getCustomFieldValue(cfm.getCustomFieldObject("customfield_10401"))

# send mail only if a valid email was entered
if (toAddr is not None) and (re.match('[A-Za-z0-9._%+-]+@(?:[A-Za-z0-9-]+\.)+[A-Za-z]{2,4}', toAddr)):
    # read customer name
    customerName = issue.getCustomFieldValue(cfm.getCustomFieldObject("customfield_10108"))

    # read template from the disk
    template_file = 'new_case.template'
    f = open(template_file, 'r')
    htmlBody = ""
    for line in f:
        line = line.replace('$$CUSTOMER_NAME', customerName)
        line = line.replace('$$KEY', key)
        line = line.replace('$$PROJECT', project)
        line = line.replace('$$SUMMARY', summary)
        htmlBody += line + '<BR>'

    smtpserver = 'smtpserver.com'
    to = [toAddr]
    fromAddr = 'jira@email.com'
    subject = "[" + key + "] Thank You for Contacting Support team"
    mail_user = 'jira@email.com'
    mail_password = 'password'

    # create html email
    html = '<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" '
    html += '"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"><html xmlns="http://www.w3.org/1999/xhtml">'
    html += '<body style="font-size:12px;font-family:Verdana">'
    html += '<p align="center"><img src="http://path/to/company_logo.jpg" alt="logo"></p> '
    html += '<p>' + htmlBody + '</p>'
    html += '</body></html>'
    emailMsg = email.MIMEMultipart.MIMEMultipart('alternative')
    emailMsg['Subject'] = subject
    emailMsg['From'] = fromAddr
    emailMsg['To'] = ', '.join(to)
    emailMsg.attach(email.mime.text.MIMEText(html, 'html'))

    # Send the email
    s = SMTP(smtpserver)  # ip or domain name of smtp server
    s.login(mail_user, mail_password)
    s.sendmail(fromAddr, to, emailMsg.as_string())
    s.quit()

    # add sent mail to comments
    cm = ComponentManager.getInstance().getCommentManager()
    email_body = htmlBody.replace('<BR>', '\n')
    cm.create(issue, 'anonymous', 'Email was sent to the customer ; Subject: ' + subject + '\n' + email_body, False)
content of new_case.template:
Dear $$CUSTOMER_NAME,
Thank you for contacting support team.
We will address your case as soon as possible and respond with a solution very quickly.
Issue key $$KEY has been created as a reference for future correspondence.
If you need urgent support please refer to our Frequently Asked Questions page at http://www.example.com/faq.
Thank you,
Support Team
Issue key: $$KEY
Issue subject: $$PROJECT
Issue summary: $$SUMMARY
Issue reminder - open for 12/24/48 hours notifications
Created a custom field called "Open since" - a 'Date Time' field to hold the time the issue has been opened.
Created a custom field called "Notification" - a read only text field.
Using the Script Runner pluging , I've created a post-function and placed it on every transition going to the 'Open' status. This is to keep the issue opening time.
the code:
from com.atlassian.jira import ComponentManager
from datetime import datetime
opend_since_field = "customfield_10001"
# get opened since custom field:
cfm = ComponentManager.getInstance().getCustomFieldManager()
# get current time
currentTime = datetime.today()
# save current time
issue.setCustomFieldValue(cfm.getCustomFieldObject(opend_since_field),currentTime)
I've created a new filter to get the list of open issues that should be checked:
JQL:
project = XXX AND status= Open ORDER BY updated ASC, key DESC
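If the filter itself should only return issues older than, say, 12 hours, a relative-date clause could be added; a sketch (it uses created rather than the 'Open since' custom field):
project = XXX AND status = Open AND created <= -12h ORDER BY updated ASC, key DESC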
Lastly, I used the Jira remote API (the XML-RPC method) to write a Python script scheduled to run every 5 minutes. The script reads all the issues from the filter, pulls out those that have been in the 'Open' status for over 12h/24h/48h, sends a reminder email, and marks them as notified, so only one reminder of each type is sent.
The python code:
#!/usr/bin/python

# Refer to the XML-RPC Javadoc to see what calls are available:
# http://docs.atlassian.com/software/jira/docs/api/rpc-jira-plugin/latest/com/atlassian/jira/rpc/xmlrpc/XmlRpcService.html
# /home/issues_reminder.py

import xmlrpclib
import time
from time import mktime
from datetime import datetime
from datetime import timedelta
import smtplib, email
from smtplib import SMTP
from email.MIMEMultipart import MIMEMultipart
from email.MIMEBase import MIMEBase
from email.MIMEText import MIMEText
from email import Encoders

# Jira connection info
server = 'https://your.jira.com/rpc/xmlrpc'
user = 'user'
password = 'password'
filter = '10302'  # Filter ID

# Email definitions
smtpserver = 'mail.server.com'
fromAddr = 'support@your.jira.com'
mail_user = 'jira_admin@your.domain.com'
mail_password = 'password'
toAddr = 'support@your.domain.com'
mysubject = "hrs Issue notification!!!"
opend_since_field = "customfield_10101"
COMMASPACE = ', '

def email_issue(issue, esc_time):
    # create html email
    subject = '[' + issue + '] ' + esc_time + mysubject
    html = '<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" '
    html += '"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"><html xmlns="http://www.w3.org/1999/xhtml">'
    html += '<body style="font-size:12px;font-family:Verdana">'
    html += '<p align="center"><img src="your_logo.jpg" alt="logo" height="43" width="198"></p> '
    html += '<p> The issue [' + issue + '] is open for over ' + esc_time + ' hours.</p>'
    html += '<p> A link to view the issue: https://your.jira.com/browse/' + issue + '.</p>'
    html += '<BR><p> This is an automated email sent from Jira.</p>'
    html += '</body></html>'
    emailMsg = email.MIMEMultipart.MIMEMultipart('alternative')
    emailMsg['Subject'] = subject
    emailMsg['From'] = fromAddr
    emailMsg['To'] = toAddr
    emailMsg.attach(MIMEText(html, 'html'))
    # Send the email
    emailserver = SMTP(smtpserver)  # ip or domain name of smtp server
    emailserver.login(mail_user, mail_password)
    emailserver.sendmail(fromAddr, [toAddr], emailMsg.as_string())
    emailserver.quit()
    return

s = xmlrpclib.ServerProxy(server)
auth = s.jira1.login(user, password)

esc12List = []
esc24List = []
esc48List = []

issues = s.jira1.getIssuesFromFilter(auth, filter)
print "Modifying issue..."
for issue in issues:
    creation = 0
    # get open since time
    for customFields in issue['customFieldValues']:
        if customFields['customfieldId'] == opend_since_field:
            print "found field!" + customFields['values']
            creation = customFields['values']
    if (creation == 0):
        creation = issue['created']
        print "field not found"
    creationTime = datetime.fromtimestamp(mktime(time.strptime(creation, '%d/%b/%y %I:%M %p')))
    currentTime = datetime.fromtimestamp(mktime(time.gmtime()))
    delta = currentTime - creationTime
    esc12 = timedelta(hours=12)
    esc24 = timedelta(hours=24)
    esc48 = timedelta(hours=48)
    print "\nchecking issue " + issue['key']
    if (delta < esc12):
        print "less than 12 hours"
        print "not updating"
        continue
    if (delta < esc24):
        print "less than 24 hours"
        for customFields in issue['customFieldValues']:
            if customFields['customfieldId'] == 'customfield_10412':
                if customFields['values'] == '12h':
                    print "not updating"
                    break
                else:
                    print "updating !!!"
                    s.jira1.updateIssue(auth, issue['key'], {"customfield_10412": ["12h"]})
                    esc12List.append(issue['key'])
                    break
        continue
    if (delta < esc48):
        print "less than 48 hours"
        for customFields in issue['customFieldValues']:
            if customFields['customfieldId'] == 'customfield_10412':
                if customFields['values'] == '24h':
                    print "not updating"
                    break
                else:
                    print "updating !!!"
                    s.jira1.updateIssue(auth, issue['key'], {"customfield_10412": ["24h"]})
                    esc24List.append(issue['key'])
                    break
        continue
    print "more than 48 hours"
    for customFields in issue['customFieldValues']:
        if customFields['customfieldId'] == 'customfield_10412':
            if customFields['values'] == '48h':
                print "not updating"
                break
            else:
                print "updating !!!"
                s.jira1.updateIssue(auth, issue['key'], {"customfield_10412": ["48h"]})
                esc48List.append(issue['key'])
                break

for key in esc12List:
    email_issue(key, '12')
for key in esc24List:
    email_issue(key, '24')
for key in esc48List:
    email_issue(key, '48')
The main pro of this method is that it's highly customizable, and by saving the data to custom fields it's easy to create filters and reports that show issues that have been open for a long time.
Escalating to the development team
Create a new transition - Escalate. This will create an issue for the development team, and link the new issue to the support issue. Add the following post function:
from com.atlassian.jira.util import ImportUtils
from com.atlassian.jira import ManagerFactory
from com.atlassian.jira.issue import MutableIssue
from com.atlassian.jira import ComponentManager
from com.atlassian.jira.issue.link import DefaultIssueLinkManager
from org.ofbiz.core.entity import GenericValue

# get issue objects
issueManager = ComponentManager.getInstance().getIssueManager()
issueFactory = ComponentManager.getInstance().getIssueFactory()
authenticationContext = ComponentManager.getInstance().getJiraAuthenticationContext()
issueLinkManager = ComponentManager.getInstance().getIssueLinkManager()
customFieldManager = ComponentManager.getInstance().getCustomFieldManager()
userUtil = ComponentManager.getInstance().getUserUtil()
projectMgr = ComponentManager.getInstance().getProjectManager()
customer_name = customFieldManager.getCustomFieldObjectByName("Customer Name")
customer_email = customFieldManager.getCustomFieldObjectByName("Customer Email")
escalate = customFieldManager.getCustomFieldObjectByName("Escalate to Development")

if issue.getCustomFieldValue(escalate) is not None:
    # define issue
    issueObject = issueFactory.getIssue()
    issueObject.setProject(projectMgr.getProject(10000))
    issueObject.setIssueTypeId("1")  # bug
    # set subtask attributes
    issueObject.setSummary("[Escalated from support] " + issue.getSummary())
    issueObject.setAssignee(userUtil.getUserObject("nadav"))
    issueObject.setReporter(issue.getAssignee())
    issueObject.setDescription(issue.getDescription())
    issueObject.setCustomFieldValue(customer_name, issue.getCustomFieldValue(customer_name) + " " + issue.getCustomFieldValue(customer_email))
    issueObject.setComponents(issue.getComponents())
    # Create subtask
    subTask = issueManager.createIssue(authenticationContext.getUser(), issueObject)
    # Link parent issue to subtask
    issueLinkManager.createIssueLink(issueObject.getId(), issue.getId(), 10003, 1, authenticationContext.getUser())
    # Update search indexes
    ImportUtils.setIndexIssues(True)
    ComponentManager.getInstance().getIndexManager().reIndex(subTask)
    ImportUtils.setIndexIssues(False)
Moving to sales
Create a new transition - Move to sales. Many support calls end up as a sales call; this will move the issue to the sales team. Add the following post function:
from com.atlassian.jira.util import ImportUtils
from com.atlassian.jira.issue import MutableIssue
from com.atlassian.jira import ComponentManager
customFieldManager = ComponentManager.getInstance().getCustomFieldManager()
userUtil = ComponentManager.getInstance().getUserUtil()
issue.setStatusId("1");
issue.setAssignee(userUtil.getUserObject("John"))
issue.setSummary("[Moved from support] "+issue.getSummary())
issue.setProjectId(10201);
issue.setIssueTypeId("35");
ImportUtils.setIndexIssues(True);
ComponentManager.getInstance().getIndexManager().reIndex(issue)
ImportUtils.setIndexIssues(False)
# add to comments
from time import gmtime, strftime
time = strftime("%d-%m-%Y %H:%M:%S", gmtime())
cm = ComponentManager.getInstance().getCommentManager()
currentUser = ComponentManager.getInstance().getJiraAuthenticationContext().getUser().toString()
cm.create(issue,currentUser,'Email was moved to Sales at '+time,False)
"Do you recommend Jira also for customer support or should we find another system like for example Zendesk for that purpose?"
Full disclosure: I'm the creator of DoneDone but this question is basically why our product exists.
DoneDone is a simple bug tracker and customer support/shared inbox tool rolled into one. We use it for general customer support (both via our support email address and the contact form on our website). The shared inbox tool lets you have private discussion on emails, along with allowing you to assign, prioritize, tag, and create/change statuses on them (e.g. "Open", "In Progress", etc.)
DoneDone lets you connect customer conversations (a.k.a. incoming support email) to internal tasks. So, if your company has distinct support and client-facing people while also having internal devs and you want to separate their work, you can create any number of subtasks from an incoming conversation.
If you're looking for a good way to organize internal work with customer support feedback, it might be worth signing up for a free trial.
