We are using Lua scripts to perform batch deletes of data on updates to our DB. Jedis executes the Lua script using a pipeline.
-- 'key' and the range end are substituted in via String.format below
local result = redis.call('lrange', key, 0, 12470)
for i, k in ipairs(result) do
    -- each element of the list is the name of a key to delete
    redis.call('del', k)
end
-- drop the processed elements from the head of the list in one call
redis.call('ltrim', key, #result, -1)
try (Jedis jedis = jedisPool.getResource()) {
    Pipeline pipeline = jedis.pipelined();
    long len = jedis.llen(table);
    String script = String.format(DELETE_LUA_SCRIPT, table, len);
    LOGGER.info(script);
    pipeline.eval(script);
    pipeline.sync();
} catch (JedisConnectionException e) {
    LOGGER.info(e.getMessage());
}
For large ranges we notice that the Lua scripts slow down and we get SocketTimeoutExceptions.
Running SLOWLOG GET in redis-cli displays only the Lua scripts that have taken too long to execute.
Is there a better way to do this? Is my Lua script blocking?
When I use just a pipeline to do the batch deletes, the slowlog also reports slow queries.
try (Jedis jedis = jedisPool.getResource()) {
    Pipeline pipeline = jedis.pipelined();
    long len = jedis.llen(table);
    // lrange's end index is inclusive, so len - 1 covers the whole list
    List<String> queriesContainingTable = jedis.lrange(table, 0, len - 1);
    if (queriesContainingTable.size() > 0) {
        for (String query : queriesContainingTable) {
            pipeline.del(query);
            pipeline.lrem(table, 1, query);
        }
        pipeline.sync();
    }
} catch (JedisConnectionException e) {
    LOGGER.info("CACHE INVALIDATE FAIL:" + e.getMessage());
}
The slowlog only keeps the latest 128 entries by default (configurable in redis.conf via slowlog-max-len 128). And yes, your first model, the Lua script, is certainly blocking: Redis executes a script atomically, so deleting that many keys (12470) one by one inside a single script holds up the server for the whole duration.
Of the two models, the second one (using a pipeline) is fine by me, because you avoid the server-side iteration; all you do is issue the DEL command n times.
You can also pass multiple keys to a single DEL, in batches of 100 or 1000 (whichever you find optimal after a little testing), and group those commands together into one pipeline.
Or, if you can do the same without atomicity, you can delete every 100 or 1000 keys at once in a loop, so that no single call is a blocking one; a sketch of this follows below.
Try different combinations, take the metrics, and go with the optimal one.
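For illustration, here is a minimal sketch of that non-atomic, chunked variant, using the same jedisPool/table/LOGGER as in the question. CHUNK_SIZE is an assumption to tune, not a recommendation:

private static final int CHUNK_SIZE = 1000;

private void deleteInChunks(String table) {
    try (Jedis jedis = jedisPool.getResource()) {
        while (true) {
            // Read one chunk of key names from the head of the list.
            List<String> chunk = jedis.lrange(table, 0, CHUNK_SIZE - 1);
            if (chunk.isEmpty()) {
                break;
            }
            Pipeline pipeline = jedis.pipelined();
            // One variadic DEL for the whole chunk.
            pipeline.del(chunk.toArray(new String[0]));
            // Drop the processed chunk from the head of the list.
            pipeline.ltrim(table, chunk.size(), -1);
            pipeline.sync();
        }
    } catch (JedisConnectionException e) {
        LOGGER.info(e.getMessage());
    }
}

If your Redis is 4.0 or newer (and your Jedis version exposes it), UNLINK instead of DEL reclaims the memory asynchronously and shortens the blocking window further.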
I have an Apache Beam pipeline running on Google Dataflow whose job is rather simple:
It reads individual JSON objects from Pub/Sub
Parses them
And sends them via HTTP to some API
This API requires me to send the items in batches of 75. So I built a DoFn that accumulates events in a list and publishes them via this API once I have 75. This turned out to be too slow, so I thought I would instead execute those HTTP requests on different threads using a thread pool.
The implementation of what I have right now looks like this:
private class WriteFn : DoFn<TheEvent, Void>() {
    @Transient lateinit var api: TheApi
    @Transient lateinit var currentBatch: MutableList<TheEvent>
    @Transient lateinit var executor: ExecutorService

    @Setup
    fun setup() {
        api = buildApi()
        executor = Executors.newCachedThreadPool()
    }

    @StartBundle
    fun startBundle() {
        currentBatch = mutableListOf()
    }

    @ProcessElement
    fun processElement(processContext: ProcessContext) {
        val record = processContext.element()
        currentBatch.add(record)
        if (currentBatch.size >= 75) {
            flush()
        }
    }

    private fun flush() {
        val payloadTrack = currentBatch.toList()
        executor.submit {
            api.sendToApi(payloadTrack)
        }
        currentBatch.clear()
    }

    @FinishBundle
    fun finishBundle() {
        if (currentBatch.isNotEmpty()) {
            flush()
        }
    }

    @Teardown
    fun teardown() {
        executor.shutdown()
        executor.awaitTermination(30, TimeUnit.SECONDS)
    }
}
This seems to work "fine" in the sense that data is making it to the API. But I don't know if this is the right approach, and I have the sense that it is very slow.
The reason I think it's slow is that when load testing (by sending a few million events to Pub/Sub), it takes the pipeline up to 8 times longer to forward those messages to the API (which has response times of under 8 ms) than it takes my laptop to feed them into Pub/Sub.
Is there any problem with my implementation? Is this the way I should be doing this?
Also... am I required to wait for all the requests to finish in my @FinishBundle method (i.e. by getting the futures returned by the executor and waiting on them)?
You have two interrelated questions here:
Are you doing this right / do you need to change anything?
Do you need to wait in @FinishBundle?
The second answer: yes. But actually you need to flush more thoroughly, as will become clear.
Once your @FinishBundle method succeeds, a Beam runner will assume the bundle has completed successfully. But your @FinishBundle only sends the requests - it does not ensure they have succeeded. So you could lose data that way if the requests subsequently fail. Your @FinishBundle method should actually be blocking and waiting for confirmation of success from TheApi. Incidentally, all of the above should be idempotent, since after finishing the bundle, an earthquake could strike and cause a retry ;-)
So to answer the first question: should you change anything? Just the above. The practice of batching requests this way can work as long as you are sure the results are committed before the bundle is committed.
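For concreteness, a minimal sketch of that change against the WriteFn in the question. pendingResults is a new field I am introducing here, java.util.concurrent.Future is assumed imported, and error handling is kept minimal:

// Track the futures of in-flight requests so the bundle can wait on them.
@Transient lateinit var pendingResults: MutableList<Future<*>>

@StartBundle
fun startBundle() {
    currentBatch = mutableListOf()
    pendingResults = mutableListOf()
}

private fun flush() {
    val payloadTrack = currentBatch.toList()
    pendingResults.add(executor.submit { api.sendToApi(payloadTrack) })
    currentBatch.clear()
}

@FinishBundle
fun finishBundle() {
    if (currentBatch.isNotEmpty()) {
        flush()
    }
    // Block until every request in this bundle has completed; get() rethrows
    // any failure, which makes the runner retry the bundle instead of
    // silently dropping data.
    pendingResults.forEach { it.get() }
    pendingResults.clear()
}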
You may find that doing so will cause your pipeline to slow down, because @FinishBundle happens more frequently than @Setup. To batch up requests across bundles you need to use the lower-level features of state and timers. I wrote up a contrived version of your use case at https://beam.apache.org/blog/2017/08/28/timely-processing.html. I would be quite interested in how this works for you.
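If it helps, here is a compact sketch in the spirit of that post, not a drop-in implementation. It assumes the input has been keyed first (e.g. as KV<String, TheEvent>), that TheApi is serializable, and it picks an arbitrary ten-second flush deadline:

import org.apache.beam.sdk.state.*
import org.apache.beam.sdk.transforms.DoFn
import org.apache.beam.sdk.values.KV
import org.joda.time.Duration

private class BatchingFn(private val api: TheApi) : DoFn<KV<String, TheEvent>, Void>() {

    // Buffer of not-yet-sent events, plus a counter so we know when we hit 75.
    @StateId("batch")
    private val batchSpec: StateSpec<BagState<TheEvent>> = StateSpecs.bag()

    @StateId("count")
    private val countSpec: StateSpec<ValueState<Int>> = StateSpecs.value()

    // Ensures small batches still get flushed eventually.
    @TimerId("flush")
    private val flushSpec: TimerSpec = TimerSpecs.timer(TimeDomain.PROCESSING_TIME)

    @ProcessElement
    fun process(
        context: ProcessContext,
        @StateId("batch") batch: BagState<TheEvent>,
        @StateId("count") count: ValueState<Int>,
        @TimerId("flush") flushTimer: Timer
    ) {
        flushTimer.offset(Duration.standardSeconds(10)).setRelative()
        batch.add(context.element().value)
        val newCount = (count.read() ?: 0) + 1
        if (newCount >= 75) {
            api.sendToApi(batch.read().toList())
            batch.clear()
            count.clear()
        } else {
            count.write(newCount)
        }
    }

    @OnTimer("flush")
    fun onFlush(
        @StateId("batch") batch: BagState<TheEvent>,
        @StateId("count") count: ValueState<Int>
    ) {
        // Send whatever is left over when the deadline fires.
        val remaining = batch.read().toList()
        if (remaining.isNotEmpty()) {
            api.sendToApi(remaining)
            batch.clear()
            count.clear()
        }
    }
}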
It may simply be that the extremely low latency you are expecting, in the low millisecond range, is not available when there is a durable shuffle in your pipeline.
Original situation:
I have a job in Jenkins that runs an Ant script. I easily managed to test this Ant script on more than one software version using a "Multi-configuration project".
This type of project is really cool because it allows me to specify all the versions of the two pieces of software that I need (in my case Java and Matlab), and it will run my Ant script with all the combinations of my parameters.
Those parameters are then used as strings, concatenated into the definition of the location of the executable used by my Ant script.
example: env.MATLAB_EXE=/usr/local/MATLAB/${MATLAB_VERSION}/bin/matlab
This is working perfectly, but now I am migrating this script to a pipeline version of it.
Pipeline migration:
I managed to implement the same script in a pipeline fashion using the Parametrized pipelines plugin. With this I got to the point where I can manually select which version of my software is going to be used if I trigger the build manually, and I also found a way to execute this periodically, selecting the parameters I want at each run.
This solution seems to work, but it is not really satisfying.
My multi-config project had some features that this one does not:
With more than one parameter I can have it combine them and execute each combination
The executions are clearly separated, and in the build history/build details it is easy to recognize which settings had been used
Just adding a new "possible" value to a parameter is going to spawn the desired executions
Request
So I wonder if there is a better solution to my problem that also satisfies the points above.
Long story short: is there a way to implement a multi-configuration project in Jenkins, but using the pipeline technology?
I've seen this and similar questions asked a lot lately, so it seemed that it would be a fun exercise to work this out...
A matrix/multi-config job, visualized in code, would really just be a few nested for loops, one for each axis of parameters.
You could build something fairly simple with some hard coded for loops to loop over a few lists. Or you can get more complicated and do some recursive looping so you don't have to hard code the specific loops.
DISCLAIMER: I do ops much more than I write code. I am also very new to groovy, so this can probably be done more cleanly, and there are probably a lot of groovier things that could be done, but this gets the job done, anyway.
With a little work, this matrixBuilder could be wrapped up in a class so you could pass in a task closure and the axis list and get the task map back. Stick it in a shared library and use it anywhere. It should be pretty easy to add some of the other features from the multi-configuration jobs, such as filters.
This attempt uses a recursive matrixBuilder function to work through any number of parameter axes and build all the combinations. Then it executes them in parallel (obviously depending on node availability).
/*
    All the config axes are defined here
    Add as many lists of axes in the axisList as you need.
    All combinations will be built
*/
def axisList = [
    ["ubuntu", "rhel", "windows", "osx"],      //agents
    ["jdk6", "jdk7", "jdk8"],                  //tools
    ["banana", "apple", "orange", "pineapple"] //fruit
]

def tasks = [:]
def comboBuilder
def comboEntry = []

def task = {
    // builds and returns the task for each combination

    /* Map the entries back to a more readable format
       the index will correspond to the position of this axis in axisList[] */
    def myAgent = it[0]
    def myJdk = it[1]
    def myFruit = it[2]
    // Capture the combination name here: inside the nested closure below,
    // 'it' would refer to that closure's own (null) implicit parameter.
    def comboName = it.join('-')

    return {
        // This is where the important work happens for each combination
        node(myAgent) {
            println "Executing combination ${comboName}"
            def javaHome = tool myJdk
            println "Node=${env.NODE_NAME}"
            println "Java=${javaHome}"
        }
        // We won't declare a specific agent for this part
        node {
            println "fruit=${myFruit}"
        }
    }
}
/*
    This is where the magic happens
    recursively work through the axisList and build all combinations
*/
comboBuilder = { def axes, int level ->
    for (entry in axes[0]) {
        comboEntry[level] = entry
        if (axes.size() > 1) {
            comboBuilder(axes[1..-1], level + 1)
        }
        else {
            tasks[comboEntry.join("-")] = task(comboEntry.collect())
        }
    }
}

stage ("Setup") {
    node {
        println "Initial Setup"
    }
}

stage ("Setup Combinations") {
    node {
        comboBuilder(axisList, 0)
    }
}

stage ("Multiconfiguration Parallel Tasks") {
    //Run the tasks in parallel
    parallel tasks
}

stage ("The End") {
    node {
        echo "That's all folks"
    }
}
You can see a more detailed flow of the job at http://localhost:8080/job/multi-configPipeline/[build]/flowGraphTable/ (available under the Pipeline Steps link on the build page).
EDIT:
You can move the stage down into the "task" creation and then see the details of each stage more clearly, but not in a neat matrix like the multi-config job.
...
    return {
        // This is where the important work happens for each combination
        stage ("${comboName}--build") {
            node(myAgent) {
                println "Executing combination ${comboName}"
                def javaHome = tool myJdk
                println "Node=${env.NODE_NAME}"
                println "Java=${javaHome}"
            }
            //Node irrelevant for this part
            node {
                println "fruit=${myFruit}"
            }
        }
    }
...
Or you could wrap each node with their own stage for even more detail.
As I did this, I noticed a bug in my previous code (fixed above now). I was passing the comboEntry reference to the task. I should have sent a copy, because, while the names of the stages were correct, when it actually executed them, the values were, of course, all the last entry encountered. So I changed it to tasks[comboEntry.join("-")] = task(comboEntry.collect()).
I noticed that you can leave the original stage ("Multiconfiguration Parallel Tasks") {} around the execution of the parallel tasks. Technically now you have nested stages. I'm not sure how Jenkins is supposed to handle that, but it doesn't complain. However, the 'parent' stage timing is not inclusive of the parallel stages timing.
I also noticed that when a new build starts to run, on the "Stage View" of the job, all the previous builds disappear, presumably because the stage names don't all match up. But after the build finishes running, they all match again and the old builds show up again.
And finally, Blue Ocean doesn't seem to visualize this the same way. It doesn't recognize the "stages" in the parallel processes, only the enclosing stage (if it is present), or "Parallel" if it isn't. And then it only shows the individual parallel processes, not the stages within.
Points 1 and 3 are not completely clear to me, but I suspect you just want to use “scripted” rather than “Declarative” Pipeline syntax, in which case you can make your job do whatever you like—anything permitted by matrix project axes and axis filters and much more, including parallel execution. Declarative syntax trades off flexibility for syntactic simplicity (and friendliness to “round-trip” editing tools and “linters”).
Point 2 is about visualization of the result, rather than execution per se. While this is a complex topic, the usual concrete request which is not already supported by existing visualizations like Blue Ocean is to be able to see test results distinguished by axis combination. This is tracked by JENKINS-27395 and some related issues, with design in progress.
I have a requirement to run a set of tasks for a build in parallel. The tasks for the build are dynamic; they may change. I need some help with the implementation; below are the details.
The task details for a build will be generated dynamically in an XML file, which will say which tasks have to be executed in parallel or serially.
example:
say there is a build A.
It has the tasks below, with this order of execution: first task1 has to be executed, next task2 and task3 are executed in parallel, and then task4.
task1
task2,task3
task4
These details will be in a dynamically generated XML file. How can I parse that XML and schedule the tasks accordingly using the Pipeline plugin? I need some idea to start with.
You can use Groovy to read the file from the workspace (readFile) and then generate the map containing the different closures, similar to the following:
parallel(
    task2: {
        node {
            unstash('my-workspace')
            sh('...')
        }
    },
    task3: {
        node {
            unstash('my-workspace')
            sh('...')
        }
    }
)
In order to generate such a data structure, you simply iterate, using XML parsing in Groovy, over the task data in the file contents you read previously; a sketch follows below.
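For example, under some assumptions: the XML layout below is made up, run-task.sh is a hypothetical runner script, and XmlSlurper (plus the collect calls) may need script approval or a @NonCPS helper on a sandboxed Jenkins:

// Assumed layout of tasks.xml in the workspace:
// <build>
//   <stage><task>task1</task></stage>
//   <stage><task>task2</task><task>task3</task></stage>
//   <stage><task>task4</task></stage>
// </build>
def stages
node {
    unstash('my-workspace')
    def build = new XmlSlurper().parseText(readFile('tasks.xml'))
    // Flatten to plain lists of strings; GPathResult is not serializable.
    stages = build.stage.collect { s -> s.task.collect { it.text() } }
}
for (taskNames in stages) {
    def branches = [:]
    for (name in taskNames) {
        def taskName = name  // fresh binding per iteration for the closure
        branches[taskName] = {
            node {
                unstash('my-workspace')
                sh("./run-task.sh ${taskName}")  // hypothetical task runner
            }
        }
    }
    // Tasks within one <stage> run in parallel; stages run one after another.
    parallel(branches)
}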
As it happens, I gave a talk about pipelines yesterday and included a very similar example (presentation, slide 34 ff.). In contrast, I read the list of "tasks" from another command's output. The complete code can be found here (I avoid pasting all of it here and instead refer to this off-site resource).
The kind-of-magic bit is the following:
def parallelConverge(ArrayList<String> instanceNames) {
    def parallelNodes = [:]
    for (int i = 0; i < instanceNames.size(); i++) {
        def instanceName = instanceNames.get(i)
        parallelNodes[instanceName] = this.getNodeForInstance(instanceName)
    }
    parallel parallelNodes
}

Closure getNodeForInstance(String instanceName) {
    return {
        // this node (one per instance) is later executed in parallel
        node {
            // restore workspace
            unstash('my-workspace')
            sh('kitchen test --destroy always ' + instanceName)
        }
    }
}
I am using the Grails CSV Plugin in my application with Grails 2.5.3.
I need to implement concurrency, for example with GPars, but I don't know how to do it.
Right now, the processing is sequential. Example of my code fragment:
Thanks.
Implementing concurrency in this case may not give you much of a benefit. It really depends on where the bottleneck is. For example, if the bottleneck is in reading the CSV file, then there would be little advantage because the file can only be read in sequential order. With that out of the way, here's the simplest example I could come up with:
import groovyx.gpars.GParsPool

def tokens = csvFileLoad.inputStream.toCsvReader(['separatorChar': ';', 'charset': 'UTF-8', 'skipLines': 1]).readAll()
def failedSaves = GParsPool.withPool {
    tokens.parallel
        .map { it[0].trim() }
        .filter { !Department.findByName(it) }
        .map { new Department(name: it) }
        .map { customImportService.saveRecordCSVDepartment(it) }
        .map { it ? 0 : 1 }
        .sum()
}
if (failedSaves > 0) transactionStatus.setRollbackOnly()
As you can see, the entire file is read first; that is the main bottleneck. The majority of the processing is done concurrently with the map(), filter(), and sum() methods. At the very end, the transaction is rolled back if any of the Departments failed to save.
Note: I chose to go with a map()-sum() pair instead of anyParallel() to avoid having to convert the parallel array produced by map() into a regular Groovy collection, perform the anyParallel(), which creates a parallel array, and then convert it back into a Groovy collection.
Improvements
As I already mentioned, in my example the CSV file is read completely before the concurrent execution begins. The code also attempts to save all of the Department instances, even if one fails to save. You may want that (which is what you demonstrated) or not.
We have a Grails project that runs behind a load balancer. There are three instances of the Grails application running on the server (using separate Tomcat instances). Each instance has its own searchable index. Because the indexes are separate, the automatic update is not enough to keep the index consistent between the application instances. Because of this we have disabled the searchable index mirroring, and updates to the index are done manually in a scheduled Quartz job. According to our understanding, no other part of the application should modify the index.
The Quartz job runs once a minute and checks the database for which rows have been updated by the application, then re-indexes those objects. The job also checks whether the same job is already running, so it doesn't do any concurrent indexing. The application runs fine for a few hours after startup, and then suddenly, when the job is starting, a LockObtainFailedException is thrown:
22.10.2012 11:20:40 [xxxx.ReindexJob] ERROR Could not update searchable index, class org.compass.core.engine.SearchEngineException:
Failed to open writer for sub index [product]; nested exception is
org.apache.lucene.store.LockObtainFailedException: Lock obtain timed
out:
SimpleFSLock@/home/xxx/tomcat/searchable-index/index/product/lucene-a7bbc72a49512284f5ac54f5d7d32849-write.lock
According to the log the last time the job was executed, re-indexing was done without any errors and the job finished successfully. Still, this time the re-index operation throws the locking exception, as if the previous operation was unfinished and the lock had not been released. The lock will not be released until the application is restarted.
We tried to solve the problem by manually opening the locked index, which causes the following error to be printed to the log:
22.10.2012 11:21:30 [manager.IndexWritersManager ] ERROR Illegal state, marking an index writer as open, while another is marked as
open for sub index [product]
After this the job seems to work correctly and doesn't become stuck in a locked state again. However, this causes the application to constantly use 100% of the CPU. Below is a shortened version of the Quartz job code.
Any help would be appreciated to solve the problem, thanks in advance.
class ReindexJob {

    def compass
    ...

    static Calendar lastIndexed

    static triggers = {
        // Every day every minute (at xx:xx:30), start delay 2 min
        // cronExpression: "s m h D M W [Y]"
        cron name: "ReindexTrigger", cronExpression: "30 * * * * ?", startDelay: 120000
    }

    def execute() {
        if (ConcurrencyHelper.isLocked(ConcurrencyHelper.Locks.LUCENE_INDEX)) {
            log.error("Search index has been locked, not doing anything.")
            return
        }
        try {
            boolean acquiredLock = ConcurrencyHelper.lock(ConcurrencyHelper.Locks.LUCENE_INDEX, "ReindexJob")
            if (!acquiredLock) {
                log.warn("Could not lock search index, not doing anything.")
                return
            }
            Calendar reindexDate = lastIndexed
            Calendar newReindexDate = Calendar.instance
            if (!reindexDate) {
                reindexDate = Calendar.instance
                reindexDate.add(Calendar.MINUTE, -3)
                lastIndexed = reindexDate
            }
            log.debug("+++ Starting ReindexJob, last indexed ${TextHelper.formatDate("yyyy-MM-dd HH:mm:ss", reindexDate.time)} +++")
            Long start = System.currentTimeMillis()
            String reindexMessage = ""

            // Retrieve the ids of products that have been modified since the job last ran
            String productQuery = "select p.id from Product ..."
            List<Long> productIds = Product.executeQuery(productQuery, ["lastIndexedDate": reindexDate.time, "lastIndexedCalendar": reindexDate])

            if (productIds) {
                reindexMessage += "Found ${productIds.size()} product(s) to reindex. "
                final int BATCH_SIZE = 10
                Long time = TimeHelper.timer {
                    for (int inserted = 0; inserted < productIds.size(); inserted += BATCH_SIZE) {
                        log.debug("Indexing from ${inserted + 1} to ${Math.min(inserted + BATCH_SIZE, productIds.size())}: ${productIds.subList(inserted, Math.min(inserted + BATCH_SIZE, productIds.size()))}")
                        Product.reindex(productIds.subList(inserted, Math.min(inserted + BATCH_SIZE, productIds.size())))
                        Thread.sleep(250)
                    }
                }
                reindexMessage += " (${time / 1000} s). "
            } else {
                reindexMessage += "No products to reindex. "
            }
            log.debug(reindexMessage)

            // Re-index brands
            Brand.reindex()

            lastIndexed = newReindexDate
            log.debug("+++ Finished ReindexJob (${(System.currentTimeMillis() - start) / 1000} s) +++")
        } catch (Exception e) {
            log.error("Could not update searchable index, ${e.class}: ${e.message}")
            if (e instanceof org.apache.lucene.store.LockObtainFailedException || e instanceof org.compass.core.engine.SearchEngineException) {
                log.info("This is a Lucene index locking exception.")
                for (String subIndex in compass.searchEngineIndexManager.getSubIndexes()) {
                    if (compass.searchEngineIndexManager.isLocked(subIndex)) {
                        log.info("Releasing Lucene index lock for sub index ${subIndex}")
                        compass.searchEngineIndexManager.releaseLock(subIndex)
                    }
                }
            }
        } finally {
            ConcurrencyHelper.unlock(ConcurrencyHelper.Locks.LUCENE_INDEX, "ReindexJob")
        }
    }
}
Based on JMX CPU samples, it seems that Compass is doing some scheduling behind the scenes. From one-minute CPU samples, a few things differ when the normal and the 100%-CPU instances are compared:
org.apache.lucene.index.IndexWriter.doWait() is using most of the CPU time.
A Compass Scheduled Executor Thread is shown in the thread list; this was not seen in a normal situation.
One Compass Executor Thread is doing commitMerge; in a normal situation none of these threads was doing commitMerge.
You can try increasing the 'compass.transaction.lockTimeout' setting. The default is 10 (seconds).
Another option is to disable concurrency in Compass and make it synchronous. This is controlled with the 'compass.transaction.processor.read_committed.concurrentOperations': 'false' setting. You might also have to set 'compass.transaction.processor' to 'read_committed'.
These are the Compass settings we are currently using:
compassSettings = [
    'compass.engine.optimizer.schedule.period': '300',
    'compass.engine.mergeFactor': '1000',
    'compass.engine.maxBufferedDocs': '1000',
    'compass.engine.ramBufferSize': '128',
    'compass.engine.useCompoundFile': 'false',
    'compass.transaction.processor': 'read_committed',
    'compass.transaction.processor.read_committed.concurrentOperations': 'false',
    'compass.transaction.lockTimeout': '30',
    'compass.transaction.lockPollInterval': '500',
    'compass.transaction.readCommitted.translog.connection': 'ram://'
]
This has concurrency switched off. You can make it asynchronous by changing the 'compass.transaction.processor.read_committed.concurrentOperations' setting to 'true' (or removing the entry).
Compass configuration reference:
http://static.compassframework.org/docs/latest/core-configuration.html
Documentation for the concurrency of read_committed processor:
http://www.compass-project.org/docs/latest/reference/html/core-searchengine.html#core-searchengine-transaction-read_committed
If you want to keep async operations, you can also control the number of threads Compass uses. The compass.transaction.processor.read_committed.concurrencyLevel=1 setting would allow asynchronous operations but use just one thread (the default is 5 threads); see the snippet below. There are also the compass.transaction.processor.read_committed.backlog and compass.transaction.processor.read_committed.addTimeout settings.
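For instance, in the same compassSettings map shown above (a sketch):

compassSettings = [
    // keep asynchronous operations, but restrict them to one worker thread
    'compass.transaction.processor.read_committed.concurrentOperations': 'true',
    'compass.transaction.processor.read_committed.concurrencyLevel': '1',
    // ...plus the rest of the settings shown above
]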
I hope this helps.