I use an ExecutorService to launch multiple threads that send requests to an API and get data back. Sometimes I see that some threads haven't finished their job yet, but the service has already killed those threads. How can I force the service to wait until the threads finish their job?
Here is my code:
ExecutorService pool = Executors.newFixedThreadPool(10);
List<Future<List<Book>>> futures = Lists.newArrayList();
final ObjectMapper mapper1 = new ObjectMapper();
for (final Author a : authors) {
    futures.add(pool.submit(new Callable<List<Book>>() {
        @Override
        public List<Book> call() throws Exception {
            String urlStr = "http://localhost/api/book?limit=5000&authorId=" + a.getId();
            List<JsonBook> Jsbooks = mapper1.readValue(
                    new URL(urlStr), BOOK_LIST_TYPE_REFERENCE);
            List<Book> books = Lists.newArrayList();
            for (JsonBook jsonBook : Jsbooks) {
                books.add(jsonBook.toAvro());
            }
            return books;
        }
    }));
}
pool.shutdown();
pool.awaitTermination(3, TimeUnit.MINUTES);
List<Book> bookList = Lists.newArrayList();
for (Future<List<Book>> future : futures) {
    if (!future.isDone()) {
        LogUtil.info("future " + future.toString()); // <-- future not finished yet
        throw new RuntimeException("Future to retrieve books: " + future + " did not complete");
    }
    bookList.addAll(future.get());
}
I saw some exceptions thrown from the (!future.isDone()) block. How can I make sure every future is done when the executor service shuts down?
I like to use a CountDownLatch.
Set the latch count to the number of tasks you're submitting and pass the latch into your callables; then, in your run/call method, use a try/finally block that counts down the latch.
After everything has been enqueued to your executor service, just call your latch's await method, which will block until everything is done. At that point all your callables will have finished, and you can properly shut down your executor service.
This link has an example of how to set it up.
http://docs.oracle.com/javase/7/docs/api/java/util/concurrent/CountDownLatch.html
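For illustration, here is a minimal sketch of that pattern; the task body and names are placeholders, not taken from the question:
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class LatchExample {
    public static void main(String[] args) throws InterruptedException {
        int taskCount = 10;
        ExecutorService pool = Executors.newFixedThreadPool(10);
        final CountDownLatch latch = new CountDownLatch(taskCount);

        for (int i = 0; i < taskCount; i++) {
            pool.submit(() -> {
                try {
                    // do the actual work here (e.g. call the API)
                } finally {
                    latch.countDown(); // always count down, even if the task fails
                }
            });
        }

        latch.await();   // blocks until every task has counted down
        pool.shutdown(); // safe to shut down now: all work is finished
    }
}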
Is it possible to create a thread for commands and a thread to receive answers from the server?
Isolates do not share memory, but it is possible to share some data via messages.
I would like to understand how to create a command pattern, or what I need in order to write code that sends commands to the server and takes decisions based on the server's answer.
Here is the simplified code:
// receiver thread
void rIsolate(SendPort listener) async {
  // Listen for any data from the server
  while (true) { // receiver loop
    var data = Receive();
    // Send received data to the client
    listener.send(data);
  }
}

// Client thread
void client(SendPort clientListener) async {
  ReceivePort port = ReceivePort();
  // Create a new receiver isolate
  await Isolate.spawn(rIsolate, port.sendPort);
  port.listen((data) async {
    // Receiving data from rIsolate
    if (data == 'hello world') { command2().. }
  });
  print("::: Start... :::");
  // Commands..
  requestData(); // request for new data
  command2(); // ??
  ...
}

// ::: default thread :::
void main() async {
  // ::::::: Create new Client :::::::
  ReceivePort clientPort = ReceivePort();
  await Isolate.spawn(client, clientPort.sendPort);
  clientPort.listen((message) {
    // msg from client()..
    print(message);
  });
}
I don't know where to put the code relating to the commands while keeping both threads active at the same time. Any suggestions or links? Thank you.
I need custom behavior for the timeout function. For example, when I use:
timeout(time: 10, unit: 'MINUTES') {
    doSomeStuff()
}
it terminates the doSomeStuff() function.
What I want to achieve is not to terminate the execution of the function, but to call another function every 10 minutes until doSomeStuff() has finished executing.
I can't use the Build-timeout Plugin from Jenkins since I need to apply this behavior to pipelines.
Any help would be appreciated.
In case anyone else has the same issue: after some research, the only way I found to solve my problem was to modify the notification plugin for the Jenkins pipeline, adding a new field that holds the time (in minutes) by which to delay invoking the URL. In the code where the URL is invoked, I wrapped those lines in a new thread and let that thread sleep for the required amount of time before executing the remaining code. Something like this:
@Override
public void onStarted(final Run r, final TaskListener listener) {
    HudsonNotificationProperty property = (HudsonNotificationProperty) r.getParent().getProperty(HudsonNotificationProperty.class);
    int invokeUrlTimeout = 0;
    if (property != null && !property.getEndpoints().isEmpty()) {
        invokeUrlTimeout = property.getEndpoints().get(0).getInvokeUrlTimeout();
    }
    int finalInvokeUrlTimeout = invokeUrlTimeout;
    new Thread(() -> {
        sleep(finalInvokeUrlTimeout * 60 * 1000);
        Executor e = r.getExecutor();
        Phase.QUEUED.handle(r, TaskListener.NULL, e != null ? System.currentTimeMillis() - e.getTimeSpentInQueue() : 0L);
        Phase.STARTED.handle(r, listener, r.getTimeInMillis());
    }).start();
}
Maybe not the best solution but it works for me, and I hope it helps other people too.
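Outside of Jenkins specifics, the "call another function periodically until the main work finishes" pattern described in the question can be sketched in plain Java with a ScheduledExecutorService; the names below (doSomeStuff, heartbeat) are illustrative only:
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

public class PeriodicWhileRunning {
    public static void main(String[] args) throws Exception {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

        // Invoke the side task every 10 minutes while the main work is running.
        ScheduledFuture<?> heartbeat = scheduler.scheduleAtFixedRate(
                () -> System.out.println("still running..."), // stand-in for the periodic call
                10, 10, TimeUnit.MINUTES);

        try {
            doSomeStuff(); // stand-in for the long-running work
        } finally {
            heartbeat.cancel(false); // stop the periodic calls once the work is done
            scheduler.shutdown();
        }
    }

    private static void doSomeStuff() throws InterruptedException {
        Thread.sleep(1000); // placeholder
    }
}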
I intend to execute some time-consuming code using parallelStream. This seems to work well, but I have the problem that the subsequent code is not executed:
@PreDestroy
public void tearDown() {
    final int mapSize = eventStreamProcessorMap.size();
    LOG.info("There are {} subscriptions to be stopped!", mapSize);
    final long start = System.currentTimeMillis();
    LocalTime time = LocalTime.now();
    final AtomicInteger count = new AtomicInteger();
    eventStreamProcessorMap.entrySet().parallelStream().forEach(entry -> {
        final Subscription sub = entry.getKey();
        final StreamProcessor processor = entry.getValue();
        LOG.info("Attempting to stop subscription {} of {} with id {} at {}", count.incrementAndGet(), mapSize, sub.id(), LocalTime.now().format(formatter));
        LOG.info("Stopping processor...");
        processor.stop();
        LOG.info("Processor stopped.");
        LOG.info("Removing subscription...");
        eventStreamProcessorMap.remove(sub);
        LOG.info("Subscription {} removed.", sub.id());
        LOG.info("Finished stopping processor {} with subscription {} in ParallelStream at {}: ", processor, sub, LocalTime.now().format(formatter));
        LOG.info(String.format("Duration: %02d:%02d:%02d:%03d (hh:mm:ss:SSS)",
                TimeUnit.MILLISECONDS.toHours(System.currentTimeMillis() - start),
                TimeUnit.MILLISECONDS.toMinutes(System.currentTimeMillis() - start) % 60,
                TimeUnit.MILLISECONDS.toSeconds(System.currentTimeMillis() - start) % 60,
                TimeUnit.MILLISECONDS.toMillis(System.currentTimeMillis() - start) % 1000));
        LOG.info("--------------------------------------------------------------------------");
    });
    LOG.info("Helloooooooooooooo?????");
    LOG.info(String.format("Overall shutdown duration: %02d:%02d:%02d:%03d (hh:mm:ss:SSS)",
            TimeUnit.MILLISECONDS.toHours(System.currentTimeMillis() - start),
            TimeUnit.MILLISECONDS.toMinutes(System.currentTimeMillis() - start) % 60,
            TimeUnit.MILLISECONDS.toSeconds(System.currentTimeMillis() - start) % 60,
            TimeUnit.MILLISECONDS.toMillis(System.currentTimeMillis() - start) % 1000));
    LOG.info("xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx");
}
The code after the parallelStream processing is not executed:
LOG.info("Helloooooooooooooo?????");
never appears in the log. Why not?
This is caused by eventStreamProcessorMap.remove(sub); (which you have now removed from the code with the edit you made). You are streaming over the map's entrySet (eventStreamProcessorMap) and removing elements from it at the same time; this is not allowed, and that is why you get the ConcurrentModificationException.
If you really want to remove entries while iterating, use an Iterator or map.entrySet().removeIf(x -> {...}).
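A minimal sketch of the removeIf approach; the map and its key/value types are stand-ins for the ones in the question:
import java.util.HashMap;
import java.util.Map;

public class RemoveIfExample {
    public static void main(String[] args) {
        Map<String, String> processors = new HashMap<>();
        processors.put("sub-1", "processor-1");
        processors.put("sub-2", "processor-2");

        // Stop each processor and remove its entry in one pass.
        // removeIf mutates the map safely, unlike calling remove() while streaming over entrySet().
        processors.entrySet().removeIf(entry -> {
            System.out.println("Stopping " + entry.getValue()); // stand-in for processor.stop()
            return true; // returning true removes the entry
        });

        System.out.println("Remaining entries: " + processors.size()); // 0
    }
}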
I am trying to learn Reactor, but I am having a lot of trouble with it. I wanted to do a very simple proof of concept where I simulate calling a slow downstream service one or more times. If you use Reactor and stream the response, the caller doesn't have to wait for all the results.
So I created a very simple controller, but it is not behaving as I expect. When the delay is "inside" my flatMap (inside the method I call), the response is not returned until everything is complete. But when I add a delay after the flatMap, the data is streamed.
Why does this code result in a stream of JSON:
@GetMapping(value = "/test", produces = { MediaType.APPLICATION_STREAM_JSON_VALUE })
Flux<HashMap<String, Object>> customerCards(@PathVariable String customerId) {
    Integer count = service.getCount(customerId);
    return Flux.range(1, count)
            .flatMap(k -> service.doRestCall(k))
            .delayElements(Duration.ofMillis(5000));
}
But this one does not:
@GetMapping(value = "/test2", produces = { MediaType.APPLICATION_STREAM_JSON_VALUE })
Flux<HashMap<String, Object>> customerCards(@PathVariable String customerId) {
    Integer count = service.getCount(customerId);
    return Flux.range(1, count)
            .flatMap(k -> service.doRestCallWithDelay(k));
}
I think I am missing something very basic in the Reactor API. On that note, can anyone point me to a good book or tutorial on Reactor? I can't seem to find anything good to learn from.
Thanks
The code inside the flatMap runs on the main thread (that is, the thread the controller runs on). As a result the whole pipeline is blocked and the method doesn't return immediately. Keep in mind that Reactor doesn't impose a particular threading model.
On the contrary, according to the documentation, with the delayElements method the signals are delayed and continue on the parallel default Scheduler. That means the main thread is not blocked and the method returns immediately.
Here are two corresponding examples:
Blocking code:
Flux.range(1, 500)
    .map(i -> {
        // blocking code
        try {
            Thread.sleep(1000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        System.out.println(Thread.currentThread().getName() + " - Item : " + i);
        return i;
    })
    .subscribe();
System.out.println("main completed");
Result:
main - Item : 1
main - Item : 2
main - Item : 3
...
main - Item : 500
main completed
Non-blocking code:
Flux.range(1, 500)
    .delayElements(Duration.ofSeconds(1))
    .subscribe(i -> {
        System.out.println(Thread.currentThread().getName() + " - Item : " + i);
    });
System.out.println("main Completed");

// sleep the main thread in order to be able to see the println output of the flux
try {
    Thread.sleep(30000);
} catch (InterruptedException e) {
    e.printStackTrace();
}
Result:
main Completed
parallel-1 - Item : 1
parallel-2 - Item : 2
parallel-3 - Item : 3
parallel-4 - Item : 4
...
Here is the Project Reactor reference guide: https://projectreactor.io/docs/core/release/reference/
The "delayElements" method only delays the flux elements by a given duration; see the javadoc for more details.
I think you should post details about the "service.doRestCallWithDelay(k)" and "service.doRestCall(k)" methods if you need more help.
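As an illustration of the explanation above (not code from the question), one common way to keep a blocking call inside flatMap from holding up the calling thread is to wrap it in a Mono and subscribe it on another scheduler; doRestCallBlocking below is a hypothetical stand-in for the blocking service call:
import java.util.Map;

import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;
import reactor.core.scheduler.Schedulers;

public class OffloadBlockingCall {
    public static void main(String[] args) throws InterruptedException {
        Flux.range(1, 5)
            // Each blocking call is subscribed on the boundedElastic scheduler instead of the caller's thread,
            // so results are emitted as they become available.
            .flatMap(k -> Mono.fromCallable(() -> doRestCallBlocking(k))
                              .subscribeOn(Schedulers.boundedElastic()))
            .subscribe(result -> System.out.println(Thread.currentThread().getName() + " - " + result));

        System.out.println("main completed");
        Thread.sleep(10000); // keep the JVM alive long enough to see the output
    }

    // Hypothetical blocking downstream call.
    private static Map<String, Object> doRestCallBlocking(int k) throws InterruptedException {
        Thread.sleep(1000);
        return Map.of("item", k);
    }
}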
During development, I'm generating a lot of bogus messages in my Amazon SQS queue. I was about to write a tiny app to delete all the messages (something I do frequently during development). Does anyone know of a tool to purge the queue?
If you don't want to write a script or delete your queue, you can change the queue configuration:
Right click on queue > configure queue
Change Message Retention period to 1 minute (the minimum time it can be set to).
Wait a while for all the messages to disappear.
I found that this way works well for deleting all messages in a queue without deleting the queue.
As of December 2014, the SQS console has a Purge Queue option in the Queue Actions menu.
For anyone who has come here, looking for a way to delete SQS messages en masse in C#...
// C# console app which deletes all messages from a specified queue.
// AWS .NET library required.
using System;
using System.Net;
using System.Configuration;
using System.Collections.Specialized;
using System.IO;
using System.Linq;
using System.Text;
using Amazon;
using Amazon.SQS;
using Amazon.SQS.Model;
using System.Timers;
using System.Collections.Generic;
using System.Text.RegularExpressions;
using System.Diagnostics;

namespace QueueDeleter
{
    class Program
    {
        public static System.Timers.Timer myTimer;
        static NameValueCollection appConfig = ConfigurationManager.AppSettings;
        static string accessKeyID = appConfig["AWSAccessKey"];
        static string secretAccessKeyID = appConfig["AWSSecretKey"];
        static private AmazonSQS sqs;
        static string myQueueUrl = "https://queue.amazonaws.com/1640634564530223/myQueueUrl";
        public static String messageReceiptHandle;

        public static void Main(string[] args)
        {
            sqs = AWSClientFactory.CreateAmazonSQSClient(accessKeyID, secretAccessKeyID);
            myTimer = new System.Timers.Timer();
            myTimer.Interval = 10;
            myTimer.Elapsed += new ElapsedEventHandler(checkQueue);
            myTimer.AutoReset = true;
            myTimer.Start();
            Console.Read();
        }

        static void checkQueue(object source, ElapsedEventArgs e)
        {
            myTimer.Stop();
            ReceiveMessageRequest receiveMessageRequest = new ReceiveMessageRequest();
            receiveMessageRequest.QueueUrl = myQueueUrl;
            ReceiveMessageResponse receiveMessageResponse = sqs.ReceiveMessage(receiveMessageRequest);
            if (receiveMessageResponse.IsSetReceiveMessageResult())
            {
                ReceiveMessageResult receiveMessageResult = receiveMessageResponse.ReceiveMessageResult;
                if (receiveMessageResult.Message.Count < 1)
                {
                    Console.WriteLine("Can't find any visible messages.");
                    myTimer.Start();
                    return;
                }
                foreach (Message message in receiveMessageResult.Message)
                {
                    Console.WriteLine("Printing received message.\n");
                    messageReceiptHandle = message.ReceiptHandle;
                    Console.WriteLine("Message Body:");
                    if (message.IsSetBody())
                    {
                        Console.WriteLine("  Body: {0}", message.Body);
                    }
                    sqs.DeleteMessage(new DeleteMessageRequest().WithQueueUrl(myQueueUrl).WithReceiptHandle(messageReceiptHandle));
                }
            }
            else
            {
                Console.WriteLine("No new messages.");
            }
            myTimer.Start();
        }
    }
}
Check the first item in the queue, then scroll down to the last item in the queue.
Hold Shift and click on that item; all of the messages in between will be selected.
I think the best way would be to delete the queue and create it again, just 2 requests.
I think the best way is changing the Message Retention period to 1 minute, but here is Python code if someone needs it:
#!/usr/bin/python
# -*- coding: utf-8 -*-
import boto.sqs
from boto.sqs.message import Message
import time
import os

startTime = program_start_time = time.strftime("%Y-%m-%d %H:%M:%S", time.gmtime())

### Let's connect to SQS:
qcon = boto.sqs.connect_to_region(region, aws_access_key_id='xxx', aws_secret_access_key='xxx')
SHQueue = qcon.get_queue('SQS')
m = Message()

### Read messages from the queue and delete them
counter = 0
while counter < 1000:  ## For deleting 1000*10 items; change to True if you want to delete all
    links = SHQueue.get_messages(10)
    for link in links:
        m = link
        SHQueue.delete_message(m)
    counter += 1

#### The End
print "\n\nTerminating...\n"
print "Start: ", program_start_time
print "End time: ", time.strftime("%Y-%m-%d %H:%M:%S", time.gmtime())
Option 1: boto sqs has a purge_queue method for python:
purge_queue(queue)
Purge all messages in an SQS Queue.
Parameters: queue (A Queue object) – The SQS queue to be purged
Return type: bool
Returns: True if the command succeeded, False otherwise
Source: http://boto.readthedocs.org/en/latest/ref/sqs.html
Code that works for me:
conn = boto.sqs.connect_to_region('us-east-1',
    aws_access_key_id=AWS_ACCESS_KEY_ID,
    aws_secret_access_key=AWS_SECRET_ACCESS_KEY,
)
q = conn.create_queue("blah")

# add some messages here

# invoke the purge_queue method of the connection and pass in the queue to purge
conn.purge_queue(q)
For me, it deleted all the messages in the queue. However, Amazon SQS only lets you run this once every 60 seconds, so I had to use the secondary solution below:
Option 2: Do a purge by consuming all messages in a while loop and throwing them out:
all_messages = []
rs = self.queue.get_messages(10)
while len(rs) > 0:
    all_messages.extend(rs)
    rs = self.queue.get_messages(10)
If you have access to the AWS console, you can purge a queue using the Web UI.
Steps:
Navigate to Services -> SQS
Filter queues by your "QUEUE_NAME"
Right-click on your queue name -> Purge queue
This will request that the queue be cleared, and it should complete within 5 or 10 seconds or so.
To purge an SQS queue via the API, see:
https://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_PurgeQueue.html
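For completeness, a minimal sketch of calling that PurgeQueue API from the AWS SDK for Java v2; the queue URL is a placeholder, and default credentials/region resolution is assumed:
import software.amazon.awssdk.services.sqs.SqsClient;
import software.amazon.awssdk.services.sqs.model.PurgeQueueRequest;

public class PurgeQueueExample {
    public static void main(String[] args) {
        // Placeholder queue URL; substitute your own.
        String queueUrl = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue";

        try (SqsClient sqs = SqsClient.create()) {
            // PurgeQueue deletes every message in the queue; SQS allows one purge per queue every 60 seconds.
            sqs.purgeQueue(PurgeQueueRequest.builder()
                    .queueUrl(queueUrl)
                    .build());
        }
    }
}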