How to work with django-channels 2 when the number of long-living tasks is greater than the number of workers?

I'm using django-channels 2 to track the progress of several long-living tasks: parsing M files. I send a websocket message from the frontend and pass the file names on to N workers, with M > N (let M = 4, N = 3).
In all my experiments the last (4th) task ran strictly after all the other tasks had fully completed.
How can I make django-channels start the remaining task as soon as the first task completes?
Here is the relevant piece of the django-channels consumer code:
class WSConsumer(AsyncWebsocketConsumer):
    # connect and disconnect methods skipped; they presumably initialise
    # self.tasks and self.prepared_tasks as empty lists

    async def receive(self, text_data=None, bytes_data=None):
        print('try receive:')
        print(text_data)
        data = json.loads(text_data)
        if data['type'] == 'collect':
            await self.handle_collect_message(data['id'], data['filename'])
        elif data['type'] == 'upload':
            await self.handle_upload_message()

    async def handle_collect_message(self, id, filename):
        print('collect:', id, filename)
        self.tasks.append({
            'type': 'upload_file',
            'reply-channel': self.channel_name,
            'id': id,
            'filename': filename,
        })

    async def handle_upload_message(self):
        print('start upload:')
        # Ask the workers to prepare every collected file.
        coros = [
            self.send_task_to_worker({
                'type': 'prepare_file',
                'reply-channel': task['reply-channel'],
                'id': task['id'],
                'filename': task['filename'],
            })
            for task in self.tasks
        ]
        for future in asyncio.as_completed(coros):
            await future

    async def send_task_to_worker(self, task):
        print('task: ', task['type'], ' id=', task['id'])
        await self.channel_layer.send('progress-worker', task)

    async def worker_progress(self, message):
        if message['state'] == 'complete':
            print('task #', message['id'], ' complete')
        await self.send(text_data=json.dumps(message))

    async def worker_prepared(self, message):
        id = message['id']
        lines = message['lines']
        task = next(t for t in self.tasks if t['id'] == id)
        task['lines'] = lines
        self.prepared_tasks.append(task)
        # Once every file is prepared, dispatch the real work,
        # longest file first.
        if len(self.prepared_tasks) == len(self.tasks):
            sorted_tasks = sorted(self.prepared_tasks,
                                  key=lambda k: k['lines'], reverse=True)
            for future in asyncio.as_completed(
                    [self.send_task_to_worker(t) for t in sorted_tasks]):
                await future
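One direction that is often suggested for this kind of setup, offered here as a sketch under stated assumptions rather than a verified fix: keep the worker consumer's dispatch loop free while a file is parsed, so each worker process can pick up the next queued task instead of serialising them. ProgressWorker, parse_file and _parse_and_reply below are hypothetical names; only the 'progress-worker' channel and the message shape are taken from the code above.

import asyncio

from channels.consumer import AsyncConsumer


def parse_file(filename):
    # Placeholder for the real, blocking parser.
    with open(filename) as f:
        return sum(1 for _ in f)


class ProgressWorker(AsyncConsumer):
    async def prepare_file(self, message):
        # Schedule the work and return immediately, so this consumer
        # can dispatch the next queued task right away.
        asyncio.ensure_future(self._parse_and_reply(message))

    async def _parse_and_reply(self, message):
        loop = asyncio.get_event_loop()
        # The blocking parse runs in a thread pool.
        lines = await loop.run_in_executor(None, parse_file,
                                           message['filename'])
        await self.channel_layer.send(message['reply-channel'], {
            'type': 'worker.prepared',
            'id': message['id'],
            'lines': lines,
        })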

Related

arweave-sol option is not working well when uploading images for mainnet-beta on solana

I'm trying to upload images to arweave-sol storage for mainnet-beta on Solana, but I run into errors every time.
I currently have 1.2 SOL, which I know is enough to upload the images.
The errors are the following:
wallet public key: xxxxxxxxxxxxxxxxxxxxx
Beginning the upload for 2 (img+json) pairs
started at: 1644902774661
initializing candy machine
initialized config for a candy machine with publickey: xxxxxxxxxxxxxxxxxxxxx
Uploading Size 2 { mediaExt: '.png', index: '0' }
Saved bundle upload result to cache.
Computed Bundle range, including 2 file pair(s) totaling 0.014MB.
Uploading bundle via bundlr... in multiple transactions
0.000002188 SOL to upload
Failed bundlr upload, automatically retrying transaction in 10s (attempt: 1) Error: Not enough funds to send data
at NodeUploader.dataItemUploader (/home/user/solana-mint/metaplex/js/packages/cli/node_modules/@bundlr-network/client/build/common/upload.js:43:23)
at processTicksAndRejections (internal/process/task_queues.js:95:5)
at async uploadTransaction (/home/user/solana-mint/metaplex/js/packages/cli/src/helpers/upload/arweave-bundle.ts:520:13)
at async processBundle (/home/user/solana-mint/metaplex/js/packages/cli/src/helpers/upload/arweave-bundle.ts:535:11)
at async uploadV2 (/home/user/solana-mint/metaplex/js/packages/cli/src/commands/upload.ts:202:11)
at async Command.<anonymous> (/home/user/solana-mint/metaplex/js/packages/cli/src/candy-machine-v2-cli.ts:190:7)
Failed bundlr upload, automatically retrying transaction in 10s (attempt: 2) Error: Not enough funds to send data
at NodeUploader.dataItemUploader (/home/user/solana-mint/metaplex/js/packages/cli/node_modules/@bundlr-network/client/build/common/upload.js:43:23)
at processTicksAndRejections (internal/process/task_queues.js:95:5)
at async uploadTransaction (/home/user/solana-mint/metaplex/js/packages/cli/src/helpers/upload/arweave-bundle.ts:520:13)
at async /home/user/solana-mint/metaplex/js/packages/cli/src/helpers/upload/arweave-bundle.ts:531:15
at async uploadTransaction (/home/user/solana-mint/metaplex/js/packages/cli/src/helpers/upload/arweave-bundle.ts:520:13)
at async processBundle (/home/user/solana-mint/metaplex/js/packages/cli/src/helpers/upload/arweave-bundle.ts:535:11)
at async uploadV2 (/home/user/solana-mint/metaplex/js/packages/cli/src/commands/upload.ts:202:11)
at async Command.<anonymous> (/home/user/solana-mint/metaplex/js/packages/cli/src/candy-machine-v2-cli.ts:190:7)
upload was not successful, please re-run. Error: Not enough funds to send data
at NodeUploader.dataItemUploader (/home/user/solana-mint/metaplex/js/packages/cli/node_modules/@bundlr-network/client/build/common/upload.js:43:23)
at processTicksAndRejections (internal/process/task_queues.js:95:5)
at async uploadTransaction (/home/user/solana-mint/metaplex/js/packages/cli/src/helpers/upload/arweave-bundle.ts:520:13)
at async /home/user/solana-mint/metaplex/js/packages/cli/src/helpers/upload/arweave-bundle.ts:531:15
at async uploadTransaction (/home/user/solana-mint/metaplex/js/packages/cli/src/helpers/upload/arweave-bundle.ts:520:13)
at async /home/user/solana-mint/metaplex/js/packages/cli/src/helpers/upload/arweave-bundle.ts:531:15
at async uploadTransaction (/home/user/solana-mint/metaplex/js/packages/cli/src/helpers/upload/arweave-bundle.ts:520:13)
at async processBundle (/home/user/solana-mint/metaplex/js/packages/cli/src/helpers/upload/arweave-bundle.ts:535:11)
at async uploadV2 (/home/user/solana-mint/metaplex/js/packages/cli/src/commands/upload.ts:202:11)
at async Command.<anonymous> (/home/user/solana-mint/metaplex/js/packages/cli/src/candy-machine-v2-cli.ts:190:7)
How can I fix these errors?
Looks like you don't have funds in your wallet:
"Error: Not enough funds to send data"
Send SOL to the address of your default file wallet.
(Make sure the Solana CLI is installed; then "solana address" will give you the address of your default wallet. The path of the default wallet is displayed when you run "solana config get".)

Twilio - how do I transfer an active outbound call to a queue?

I am trying to add the option to transfer an outbound call to a task queue.
I have it working just fine for inbound calls, which are originally enqueued and then answered by the workers. But when I try to do the same thing on an outgoing call, the call just drops.
This is what I'm using at the moment to place the call back into a queue (C#).
Update the in-progress call:
await CallResource.UpdateAsync(
    pathSid: sid,
    url: new Uri($"{_CallbackUrl}/EnqueueTransfer/{department}")
);
Callback response:
var response = new VoiceResponse();
response.Play(new Uri("**URL**/please_wait_recording.Mp3"));
Enqueue enqueue = new Enqueue(workflowSid: _workflowSid);
enqueue.Task($"{{\"selected_department\":\"{department.ToLower()}\"}}");
response.Append(enqueue);
return new TwiMLResult(response);
Any ideas on why this only works for inbound calls, and how I could get it working for outbound calls?
My outbound calls are created using the JS client:
let newCall = await device.connect({
    params: {
        To: formatNumber(number),
        From: worker.attributes?.contact_uri,
        WorkerTeam: worker.attributes?.team,
        WorkerDepartment: worker.attributes?.department
    }
});
#### UPDATE ####
The above code works for both inbound and outbound calls. HOWEVER, when I transfer an already-transferred outbound call, the call drops on the customer's phone and the agent gets redirected to the queue. I've tried updating the call resource using the following:
var Call2 = await CallResource.FetchAsync(pathSid: transfer.CallSid);
ResourceSet<CallResource> calls;
string sid = transfer.CallSid;
calls = await CallResource.ReadAsync(parentCallSid: transfer.CallSid);

if (Call2.Direction.ToLower() == "inbound")
{
    sid = calls.FirstOrDefault().Sid;
}

await CallResource.UpdateAsync(
    pathSid: sid,
    url: new Uri($"{_CallbackUrl}/Transfer/{transfer.Team}")
);

Cannot build up a string using StringBuffer and a server response

I'm trying to fetch data from this link via Dart. Since I'm using a dart:io HttpClientResponse instance to listen to the data obtained from the link above, I thought a StringBuffer instance would be the best option to capture the received data: it would let me build the response string incrementally. But it seems I'm not using StringBuffer properly, because in the end the response string (stored in receivedBuffer) remains empty.
Code:
import 'dart:io';
import 'dart:convert';
void main() async {
  StringBuffer receivedBuffer = new StringBuffer("");
  String url = "https://hacker-news.firebaseio.com/v0/topstories.json?print=pretty";

  HttpClient client = new HttpClient();
  HttpClientRequest request = await client.getUrl(Uri.parse(url));
  HttpClientResponse response = await request.close();
  print("[info] Fetch successful. Proceeding to transform received data ...");

  response.transform(utf8.decoder).listen((contents) => receivedBuffer.write(contents));
  print("[info] Done. Contents:\n");
  print(receivedBuffer);
}
Output:
[info] Fetch successful. Proceeding to transform received data ...
[info] Done. Contents:
Also, if instead of receivedBuffer.write(contents) I write print(contents), then all the required data is printed as one would expect. But when writing the contents to receivedBuffer, it seems receivedBuffer isn't updated even once.
I read this article and tried to incorporate the answer given there into my code. To be precise, I used a Completer instance to take care of the issue, but it didn't help.
What's the issue in the code above?
You are not waiting for the stream to complete.
In the listen call you set up a receiver for stream events, but you don't wait for the events to arrive.
You could add an onDone callback to the listen call and act there, but more likely you just want to wait right here, in which case I recommend:
await response.transform(utf8.decoder).forEach(receivedBuffer.write);
Using forEach is usually what you want in cases where you would call listen without keeping the returned subscription. Alternatively, use an await for:
await for (var content in response.transform(utf8.decoder)) {
  receivedBuffer.write(content);
}
(which corresponds to a forEach call in most ways).

Django Channels group send (exclude the data sender)

I'm using Django Channels as an intermediate agent that passes data from one browser (the parent/sender) to other connected browsers (the children/receivers). In my consumers, I do a channel_layer.group_send(data) once data is received from the parent browser, so that the children browsers can later get the data from the Redis channel.
However, what I really want is for the data passed to the channel to be received by all the children except the parent browser. My question is: how do I exclude the data sender from the group?
Unfortunately, Django Channels does not offer filtering like that. I solved the problem by checking in the chat_message handler whether the current connection is the sender.
async def receive(self, text_data):
    text_data_json = json.loads(text_data)
    # Send message to room group
    await self.channel_layer.group_send(
        self.GROUP_NAME,
        {
            'type': 'chat_message',
            'data': text_data_json,
            'sender_channel_name': self.channel_name
        }
    )

# Receive message from room group
async def chat_message(self, event):
    # send to everyone except the sender
    if self.channel_name != event['sender_channel_name']:
        await self.send(text_data=json.dumps(event))
I ended up using the session id to exclude the sender in the consumer (the group_send happens outside the consumer):
In the application:
def emit_websocket_update_info():
    sender_sessionid = info.context.COOKIES["sessionid"]
    channel_layer = get_channel_layer()
    async_to_sync(channel_layer.group_send)(
        GraphQLUpdatesConsumer.group_name,
        {
            "type": GraphQLUpdatesConsumer.mutation_handler,
            "text": json.dumps({"conditions": time.time()}),
            "sender_sessionid": sender_sessionid,
        },
    )
In the consumer:
async def mutation_event(self, event):
    # send to all except the sender itself
    if self.scope["cookies"]["sessionid"] != event["sender_sessionid"]:
        await self.send(text_data=event["text"])

Any idea why requests to vertx embedded in grails are synchronously queued up

Environment: Mac OS X Lion
Grails version: 2.1.0
Java: 1.7.0_08-ea
If I start up Vert.x in embedded mode within BootStrap.groovy and hit the same websocket endpoint from multiple browsers, the requests get queued up.
So, depending on the timing of the requests, the next request only enters the handler after the previous one has finished executing.
I've tried this with both WebSockets and SockJS and observed the same behavior with both.
BootStrap.groovy (SockJs):
def vertx = Vertx.newVertx()
def server = vertx.createHttpServer()
def sockJSServer = vertx.createSockJSServer(server)
def config = ["prefix": "/eventbus"]

sockJSServer.installApp(config) { sock ->
    sleep(10000)
}
server.listen(8088)
JavaScript:
<script>
function initializeSocket(message) {
    console.log('initializing web socket');
    var socket = new SockJS("http://localhost:8088/eventbus");
    socket.onmessage = function(event) {
        console.log("received message");
    }
    socket.onopen = function() {
        console.log("start socket");
        socket.send(message);
    }
    socket.onclose = function() {
        console.log("closing socket");
    }
}
</script>
OR
BootStrap.groovy (WebSockets):
def vertx = Vertx.newVertx()
def server = vertx.createHttpServer()
server.setAcceptBacklog(10000);

server.websocketHandler { ws ->
    println('**received websocket request')
    sleep(10000)
}.listen(8088)
JavaScript:
socket = new WebSocket("ws://localhost:8088/ffff");
socket.onmessage = function(event) {
    console.log("message received");
}
socket.onopen = function() {
    console.log("socket opened");
    socket.send(message);
}
socket.onclose = function() {
    console.log("closing socket");
}
From the helpful folks at vertx:
def server = vertx.createHttpServer() is actually a verticle, and a verticle is a single-threaded process.
As bluesman says, each verticle runs in its own thread. You can spread your verticles across the cores of your hardware, and even cluster them across more machines; this adds capacity to accept simultaneous requests.
When programming real-time apps, we should try to build the response as soon as possible to avoid blocking. If you think your operation can be time-intensive, consider this model:
Make a request.
Pass the task to a worker verticle and assign it a UUID (for example), returning that id in the response. The caller now knows the work is in progress and gets the response quickly.
When the worker finishes the task, it publishes a notification on the event bus using the assigned UUID.
The caller watches the event bus for the task result.
This is typically done in a web application via websockets, SockJS, etc.
This way you can accept thousands of requests without blocking, and clients will receive the result without blocking the UI.
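A rough sketch of that model, assuming the embedded Vert.x 1.x Groovy API used in the question; the addresses worker.parse and task.done.<id> are invented for illustration, and a worker verticle registered on worker.parse is assumed to exist elsewhere:

def eb = vertx.eventBus

server.websocketHandler { ws ->
    ws.dataHandler { buffer ->
        def taskId = UUID.randomUUID().toString()
        // Acknowledge at once; nothing blocks on this event loop.
        ws.writeTextFrame("""{"status":"accepted","id":"$taskId"}""")
        // Deliver the result when the worker publishes it.
        eb.registerHandler("task.done.${taskId}") { msg ->
            ws.writeTextFrame(msg.body.toString())
        }
        // Hand the heavy work to the worker verticle.
        eb.send("worker.parse", [id: taskId, payload: buffer.toString()])
    }
}.listen(8088)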
Vert.x uses the JVM to create a so-called "multi-reactor pattern", a reactor pattern modified to perform better.
As far as I understood, it is not true that each verticle has its own thread: the fact is that each verticle is always served by the same event loop, but several verticles can be bound to the same event loop, and there can be multiple event loops. An event loop is basically a thread, so a few threads can serve many verticles.
I didn't use vert.x in embedded mode (and I don't know if the main concept changes), but you should perform much better by instantiating many verticles for the job.
Regards,
Carlo
As mentioned before, the Vert.x concept is based on the reactor pattern, which means a single instance has at least one single-threaded event loop and processes events sequentially. Request processing may consist of several events; the point is to serve the request and each event with non-blocking routines.
E.g. when you wait for a WebSocket message, the request should be suspended and woken back up on the message event. Whatever you do with the message should also be non-blocking, and thus asynchronous: any file I/O, network I/O, or DB access. Vert.x provides the basic elements for building such an async flow: Buffers, Pumps, Timers, and the EventBus.
To wrap it up: just never block. The use of sleep(10000) kills the concept. If you really need to delay execution, use Vert.x timers instead, for example:
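A sketch against the WebSocket handler from the question; setTimer is the standard one-shot Vert.x timer call, and the 'done' frame is illustrative only:

server.websocketHandler { ws ->
    println('**received websocket request')
    // Schedule the follow-up instead of sleeping: the event loop stays
    // free to accept other connections during the 10 seconds.
    vertx.setTimer(10000) { timerId ->
        ws.writeTextFrame('done')
    }
}.listen(8088)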
