I have a working Durable Function based on the following tutorial. I have not modified the code yet.
https://learn.microsoft.com/en-us/azure/azure-functions/durable/quickstart-python-vscode
How do I send a JSON file to the Activity Function?
For debugging purposes, how do I call logging.info on the JSON in the Activity Function?
This function is an HTTP starter function for Durable Functions.
import logging
import azure.functions as func
import azure.durable_functions as df

async def main(req: func.HttpRequest, starter: str) -> func.HttpResponse:
    client = df.DurableOrchestrationClient(starter)
    instance_id = await client.start_new(req.route_params["functionName"], None, None)
    logging.info(f"Started orchestration (Ken) with ID = '{instance_id}'.")
    return client.create_check_status_response(req, instance_id)
This function is an Orchestration Function
import logging
import json
import azure.functions as func
import azure.durable_functions as df

def orchestrator_function(context: df.DurableOrchestrationContext):
    logging.info("CalcOrc")
    result1 = yield context.call_activity('CalculateActivity', "Tokyo")
    result2 = yield context.call_activity('CalculateActivity', "Seattle")
    result3 = yield context.call_activity('CalculateActivity', "London")
    return [result1, result2, result3]

main = df.Orchestrator.create(orchestrator_function)
This function is an Activity function
import logging

def main(name: str) -> str:
    logging.info("CalcAct")  # Could log the contents of the JSON sent by HTTP POST here
    return f"Hello {name}!"
This modified version is an HTTP starter function. It works with GET but not with POST.
import logging
import json
import azure.functions as func
import azure.durable_functions as df

async def main(req: func.HttpRequest, starter: str) -> func.HttpResponse:
    # Added for testing
    jsoninput = req.params.get('jsoninput')
    client = df.DurableOrchestrationClient(starter)
    # instance_id = await client.start_new(req.route_params["functionName"], None, None)
    instance_id = await client.start_new(req.route_params["functionName"], jsoninput, None)
    logging.info(f"Started orchestration with ID = '{instance_id}'.")
    logging.info(f"jsonInput = '{jsoninput}'.")
    return client.create_check_status_response(req, instance_id)
The code below will help you pass the JSON object data to the activity function:
Activity function:
def main(req: func.HttpRequest) -> func.HttpResponse:
    req_body = req.get_json()
    return func.HttpResponse(f"description is {req_body.get('description')}")
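For reference, here is a minimal sketch of threading the JSON all the way through, assuming the quickstart layout from the question; the starter file name is illustrative and the starter reads the POST body with req.get_json(), hands it to start_new as client_input, the orchestrator forwards context.get_input() to CalculateActivity, and the activity logs it.
HTTP starter (file name assumed, e.g. HttpStart/__init__.py):
import logging
import azure.functions as func
import azure.durable_functions as df

async def main(req: func.HttpRequest, starter: str) -> func.HttpResponse:
    client = df.DurableOrchestrationClient(starter)
    # get_json() parses the POST body; fall back to the query string for GET requests
    payload = req.get_json() if req.get_body() else req.params.get('jsoninput')
    # start_new(name, instance_id=None, client_input=None): pass the JSON as client_input
    instance_id = await client.start_new(req.route_params["functionName"], None, payload)
    logging.info(f"Started orchestration with ID = '{instance_id}'.")
    return client.create_check_status_response(req, instance_id)
Orchestrator:
import azure.durable_functions as df

def orchestrator_function(context: df.DurableOrchestrationContext):
    payload = context.get_input()  # the client_input passed by the starter
    result = yield context.call_activity('CalculateActivity', payload)
    return result

main = df.Orchestrator.create(orchestrator_function)
Activity (CalculateActivity/__init__.py):
import json
import logging

def main(name) -> str:
    # name is the (already deserialized) input forwarded by the orchestrator
    logging.info("CalculateActivity received: %s", json.dumps(name))
    return f"Hello {name}!"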
Hi team, I need to fix the Twilio 12200 Schema validation warning.
Everything is working, but I don't receive the WhatsApp response back.
Here is the app.py code:
from helper.openai_api import text_complition
from helper.twilio_api import send_message
from twilio.twiml.messaging_response import MessagingResponse
from flask import Flask, request
from dotenv import load_dotenv

load_dotenv()

app = Flask(__name__)

@app.route('/')
def home():
    return 'All is well...'

@app.route('/twilio/receiveMessage', methods=['POST'])
def receiveMessage():
    try:
        message = request.form['Body']
        sender_id = request.form['From']
        # Placeholder code
        result = {}
        result['status'] = 1
        result['response'] = "Hi, I'm CODAI, I have received your message."
        send_message(sender_id, result['response'])
    except:
        pass
    return 'OK', 200
Here is the openai.py code:
import os
import openai
from dotenv import load_dotenv
from twilio.twiml.messaging_response import MessagingResponse

load_dotenv()

openai.api_key = os.getenv('OPENAI_API_KEY')

def text_complition(prompt: str) -> dict:
    '''
    Call the OpenAI API for text completion.

    Parameters:
        - prompt: user query (str)

    Returns:
        - dict
    '''
    try:
        response = openai.Completion.create(
            model='text-davinci-003',
            prompt=f'Human: {prompt}\nAI: ',
            temperature=0.9,
            max_tokens=150,
            top_p=1,
            frequency_penalty=0,
            presence_penalty=0.6,
            stop=['Human:', 'AI:']
        )
        return {
            'status': 1,
            'response': response['choices'][0]['text']
        }
    except:
        return {
            'status': 0,
            'response': ''
        }
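For context, app.py imports text_complition but the placeholder code never calls it; the intended wiring is presumably something like this hypothetical helper (not part of the question's code):
from helper.openai_api import text_complition
from helper.twilio_api import send_message

def reply_with_ai(sender_id: str, message: str) -> None:
    # text_complition returns {'status': 1, 'response': ...} on success and {'status': 0, ...} on failure
    result = text_complition(message)
    if result['status'] == 1:
        send_message(sender_id, result['response'])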
Here is the twilio.py code:
import os
from twilio.twiml.messaging_response import MessagingResponse
from twilio.rest import Client
from dotenv import load_dotenv

load_dotenv()

account_sid = os.getenv('TWILIO_ACCOUNT_SID')
auth_token = os.getenv('TWILIO_AUTH_TOKEN')
client = Client(account_sid, auth_token)

def send_message(to: str, message: str) -> None:
    '''
    Send a message through Twilio's WhatsApp API.

    Parameters:
        - to (str): recipient's phone number in the format "whatsapp:+[country code][phone number]"
        - message (str): text message to send

    Returns:
        - None
    '''
    _ = client.messages.create(
        from_=os.getenv('FROM'),
        body=message,
        to="whatsapp:" + to
    )
I need help fixing the Twilio error 12200 (Schema validation warning).
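As an aside, Twilio's 12200 schema validation warning is typically raised when the webhook URL answers with something that is not valid TwiML (for example a plain-text 'OK'). Below is a minimal sketch of a handler that replies with the already-imported MessagingResponse instead, keeping the question's route; whether this also fixes the missing WhatsApp reply depends on the rest of the setup.
from flask import Flask, request
from twilio.twiml.messaging_response import MessagingResponse

app = Flask(__name__)

@app.route('/twilio/receiveMessage', methods=['POST'])
def receiveMessage():
    incoming = request.form.get('Body', '')
    # Build a TwiML reply; returning valid TwiML XML avoids the 12200 schema warning
    resp = MessagingResponse()
    resp.message(f"Hi, I'm CODAI, I have received: {incoming}")
    return str(resp), 200, {'Content-Type': 'application/xml'}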
I'm trying to send messages to specific websocket instances, but neither channel_layer.send nor channel_layer.group_send with a unique group for each instance seems to work. No errors are raised; the messages just aren't received by the instances.
The function that sends the message is:
def listRequest(auth_user, city):
    request_country = city["country_name"]
    request_city = city["city"]
    request_location = request_city + ", " + request_country
    concatenate_service_email = auth_user.service + "-" + auth_user.email
    this_request = LoginRequest(service_and_email=concatenate_service_email, location=request_location)
    this_request.generate_challenge()
    this_request.set_expiry(timezone.now() + timezone.timedelta(minutes=5))
    this_request.save()

    channel_layer = get_channel_layer()
    print(auth_user.current_socket)
    async_to_sync(channel_layer.group_send)(
        auth_user.current_socket,
        {
            "type": "new.request",
            "service_and_email": concatenate_service_email
        },
    )
My current working consumers.py (receive and scanRequest don't have anything that's likely to be relevant to the issue):
from asgiref.sync import async_to_sync
from channels.generic.websocket import WebsocketConsumer
from .helpers import verify_identity, unique_group
from django.utils import timezone
from .models import Authentication, LoginRequest
import json
import time

class AuthConsumer(WebsocketConsumer):
    account_list = []

    def connect(self):
        print("Connect attempted")
        print(self.channel_name)
        print(unique_group(self.channel_name))
        async_to_sync(self.channel_layer.group_add)(unique_group(self.channel_name), self.channel_name)
        self.accept()

    def disconnect(self, close_code):
        print("Disconnect attempted")
        async_to_sync(self.channel_layer.group_discard)(unique_group(self.channel_name), self.channel_name)
        for i in self.account_list:
            serviceEmailSplit = i.split("-")
            try:
                auth_user = Authentication.objects.get(service=serviceEmailSplit[0], email=serviceEmailSplit[1])
                auth_user.set_socket("NONE")
                auth_user.save()
            except:
                print("Error user %s does not exist" % i)
                pass

    def receive(self, text_data):
        print("Receiving data")
        if text_data[0:7] == "APPROVE":
            data_as_list = text_data.split(",")
            serviceEmailSplit = data_as_list[1].split("-")
            auth_user = Authentication.objects.get(service=serviceEmailSplit[0], email=serviceEmailSplit[1])
            this_request = LoginRequest.objects.get(service_and_email=data_as_list[1], approved=False, expiry__gt=timezone.now())
            if verify_identity(auth_user.public_key, data_as_list[2], this_request.challenge):
                this_request.set_approved()
                self.send("Request Approved!")
            else:
                self.send("ERROR: User verification failed")
        else:
            self.account_list = text_data.split(",")
            self.account_list.pop(-1)
            print(self.account_list)
            for i in self.account_list:
                serviceEmailSplit = i.split("-")
                try:
                    auth_user = Authentication.objects.get(service=serviceEmailSplit[0], email=serviceEmailSplit[1])
                    auth_user.set_socket(unique_group(self.channel_name))
                    auth_user.save()
                except:
                    self.send("Error user %s does not exist" % i)
            self.scanRequest()

    def scanRequest(self):
        requestSet = LoginRequest.objects.filter(service_and_email__in=self.account_list, approved=False, request_expiry__gt=timezone.now())
        if requestSet.count() > 0:
            for request in requestSet:
                self.send(request.service_and_email + "," + request.location + "," + str(request.challenge))
        else:
            self.send("NOREQUESTS")

    def new_request(self, event):
        print("NEW REQUEST!")
        this_request = LoginRequest.objects.filter(service_and_email=event["service_and_email"]).latest('request_expiry')
        self.send(this_request.service_and_email + "," + this_request.location + "," + str(this_request.challenge))
And my routing.py:
from django.urls import re_path
from . import consumers
from django.conf.urls import url

websocket_urlpatterns = [
    url(r"^ws/$", consumers.AuthConsumer.as_asgi()),
]
"NEW REQUEST!" is never printed, having tried to call it both by sending a message directly, and neither does using groups like I have written above.
My Redis server appears to be working, based on the test that the Channels tutorial documentation suggests:
https://channels.readthedocs.io/en/stable/tutorial/part_2.html
I'm pretty stumped after several attempts to fix it. I've looked at other Stack Overflow posts with the same or similar issues, and my code already follows the solutions they suggest.
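One way to narrow this down (a debugging sketch, not part of the project code) is to fire the same group_send by hand from python manage.py shell while a client is connected; if new_request still never runs, the group name stored in auth_user.current_socket most likely does not match the group joined in connect(). The group name and payload below are placeholders:
# Run inside `python manage.py shell` while a websocket client is connected
from asgiref.sync import async_to_sync
from channels.layers import get_channel_layer

channel_layer = get_channel_layer()

async_to_sync(channel_layer.group_send)(
    "example-group",  # placeholder: use the exact value printed by unique_group(self.channel_name)
    {
        "type": "new.request",  # dispatched to AuthConsumer.new_request
        "service_and_email": "service-user@example.com",  # placeholder
    },
)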
I want to send a message from the server to the client at 1-second intervals,
writing the string 'send clock' to the console before each message is sent.
However, the client does not receive the "clock" message.
The echo message works fine, so it's not a connection issue.
How can I send a message at 1-second intervals?
Server code (Python):
import threading
import time
import socketio
import eventlet

socket = socketio.Server(async_mode='eventlet')
app = socketio.WSGIApp(socket)

@socket.on('echo')
def echo(sid, message):
    socket.emit('echo', message)

def worker1():
    eventlet.wsgi.server(eventlet.listen(('', 5030)), app)

def worker2():
    while(1):
        print("send clock")
        socket.emit('clock', '1 sec')
        time.sleep(1)

def main():
    t1 = threading.Thread(target=worker1)
    t2 = threading.Thread(target=worker2)
    t1.start()
    t2.start()

if __name__ == '__main__':
    main()
Client code (Node.js):
const io = require("socket.io-client");

socket = io("http://xxx.xxx.xxx.xxx:5030", {
    transports: ["websocket"],
});

socket.emit("echo", "test");
socket.on("echo", (data) => {
    console.log(data);
});

socket.on("clock", (data) => {
    console.log("receive clock");
});
My server-side environment:
python: 3.7.3
python-socketio: 4.5.1
eventlet: 0.25.1
Thank you for reading it until the very end. <3
You can't combine threads or the sleep() function with eventlet; you should use greenlet-based equivalents instead. The easiest way to do this is to start your background task with the socket.start_background_task() helper function and to sleep with socket.sleep(). I'm writing this from memory, so there might be small mistakes, but your application could do something like the following:
import socketio
import eventlet

socket = socketio.Server(async_mode='eventlet')
app = socketio.WSGIApp(socket)

@socket.on('echo')
def echo(sid, message):
    socket.emit('echo', message)

def worker1():
    eventlet.wsgi.server(eventlet.listen(('', 5030)), app)

def worker2():
    while(1):
        print("send clock")
        socket.emit('clock', '1 sec')
        socket.sleep(1)

def main():
    socket.start_background_task(worker2)
    worker1()

if __name__ == '__main__':
    main()
I created an async/await function in another file, so its handler returns a Future object. I can't work out how to send the content of that Future object back to the client as the response in Dart. I am using a basic Dart server with the shelf package. Below is the code; ht.handler('list') returns a Future, and I want to send that string to the client as the response, but I am getting an internal server error.
import 'dart:io';

import 'package:args/args.dart';
import 'package:shelf/shelf.dart' as shelf;
import 'package:shelf/shelf_io.dart' as io;
import 'HallTicket.dart' as ht;

// For Google Cloud Run, set _hostname to '0.0.0.0'.
const _hostname = 'localhost';

main(List<String> args) async {
  var parser = ArgParser()..addOption('port', abbr: 'p');
  var result = parser.parse(args);

  // For Google Cloud Run, we respect the PORT environment variable
  var portStr = result['port'] ?? Platform.environment['PORT'] ?? '8080';
  var port = int.tryParse(portStr);

  if (port == null) {
    stdout.writeln('Could not parse port value "$portStr" into a number.');
    // 64: command line usage error
    exitCode = 64;
    return;
  }

  var handler = const shelf.Pipeline()
      .addMiddleware(shelf.logRequests())
      .addHandler(_echoRequest);

  var server = await io.serve(handler, _hostname, port);
  print('Serving at http://${server.address.host}:${server.port}');
}

Future<shelf.Response> _echoRequest(shelf.Request request) async {
  shelf.Response.ok('Request for "${request.url}"\n' + await ht.handler('list'));
}
The analyzer gives you the following warning for your _echoRequest method:
info: This function has a return type of 'Future', but
doesn't end with a return statement.
And if you check the requirements for addHandler, you will see the handler is expected to return a response.
So you need to add the return, which makes it work on my machine:
Future<shelf.Response> _echoRequest(shelf.Request request) async {
  return shelf.Response.ok(
      'Request for "${request.url}"\n' + await ht.handler('list2'),
      headers: {'Content-Type': 'text/html'});
}
I am trying to run the Twitter streaming example in Zeppelin. After searching around, I added "org.apache.bahir:spark-streaming-twitter_2.11:2.0.0" to the Spark interpreter, so I can make the first part work, as in:
Apache Zeppelin 0.6.1: Run Spark 2.0 Twitter Stream App
Now I am trying to add the second half as:
case class Tweet(createdAt: Long, text: String, screenName: String)

twt.map(status =>
  Tweet(status.getCreatedAt().getTime()/1000, status.getText(), status.getUser().getScreenName())
).foreachRDD(rdd =>
  rdd.toDF().registerTempTable("tweets")
)
Now I got the error:
<console>:56: error: not found: type StreamingContext
       val ssc = new StreamingContext(sc, Seconds(2))
                     ^
<console>:56: error: not found: value Seconds
       val ssc = new StreamingContext(sc, Seconds(2))
                                          ^
<console>:61: error: not found: value Seconds
       val twt = tweets.window(Seconds(60))
Actually, as soon as I added the case class line, I got the above error. I really have no idea what happened here.
Does anyone have a clue?
Here are the details:
Spark: 2.0.0
Zeppelin: 0.6.2
Thanks a lot.
=====================================================================
// All code, for your reference:
import org.apache.spark.streaming.twitter
import org.apache.spark.streaming._
import org.apache.spark.storage.StorageLevel
import scala.io.Source
import scala.collection.mutable.HashMap
import java.io.File
import org.apache.log4j.Logger
import org.apache.log4j.Level
import sys.process.stringSeqToProcess
import org.apache.spark.SparkConf
// ********************************* Configures the Oauth Credentials for accessing Twitter ****************************
def configureTwitterCredentials(apiKey: String, apiSecret: String, accessToken: String, accessTokenSecret: String) {...}
// ***************************************** Configure Twitter credentials ********************************************
val apiKey = ...
val apiSecret = ...
val accessToken = ...
val accessTokenSecret = ...
configureTwitterCredentials(apiKey, apiSecret, accessToken, accessTokenSecret)
// ************************************************* The logic itself *************************************************
val ssc = new StreamingContext(sc, Seconds(2))
val tweets = TwitterUtils.createStream(ssc, None)
val twt = tweets.window(Seconds(60))
twt.print
// above codes work correctly
// If added the following line, it failed with the above error
case class Tweet(createdAt:Long, text:String, screenName:String)
I had the same problem, and I have no idea why moving the import statements from the top to right before the new StreamingContext fixed it, but it did.
import org.apache.spark.streaming._ //moved here from top
import org.apache.spark.streaming.twitter._ //moved here from top
val ssc = new StreamingContext(sc, Seconds(2)) //existing
I had a similar issue. Using the fully qualified class names (FQCNs) worked OK, so I ended up using that as a workaround.