Data Feeders implementation in Gatling for RESTful web services load testing

I am working on implementing the Gatling tool to load test a few of our RESTful web API methods, but for some reason I am not successful in parameterizing my input data into the URI.
I am getting an "i.g.h.a.AsyncHandlerActor" error.
It would be great if I were able to see exactly what end URI the call was made to.
Below is my Scala code for one of the methods to be load tested:
import io.gatling.core.Predef._
import io.gatling.http.Predef._
import scala.concurrent.duration._
import java.util.concurrent.ThreadLocalRandom

class StockRun3 extends Simulation {

  val httpConf = http
    .baseURL("http://xxx.xxx.xx.xx:95/v9/stk")
    .acceptHeader("application/json")
    .authorizationHeader("appKeyToken=XXXXXXX&appKey=YYYYYYYYYY")

  object Search {
    val Datafeeder = csv("StockDataSource2.csv").random
    val search = feed(Datafeeder)
      .exec(http("Search")
        .get("/availability")
        .queryParam("""productIds""", """${product}""")
        .queryParam("""ocationIds""", """${store}""")
      )
      .pause(1)
  }

  val users = scenario("Users").exec(Search.search)

  setUp(
    users.inject(nothingFor(4 seconds),
      atOnceUsers(10),
      rampUsers(10) over (60 seconds),
      constantUsersPerSec(2) during (30 seconds))
  ).protocols(httpConf)
}

As explained in the documentation, lower the logging level in logback.xml; that will show you every request Gatling sends, including the exact final URI.
Then, you have a typo in the "ocationIds" queryParam, which should presumably be "locationIds".
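For reference, this is the sort of logback.xml change the documentation describes; in Gatling 2.x the relevant logger is io.gatling.http.ahc (the name may differ in other versions). DEBUG logs only failing HTTP requests, while TRACE logs every request and response:

<configuration>
  <!-- set to DEBUG to log failing HTTP requests only, -->
  <!-- or to TRACE to log every HTTP request/response -->
  <logger name="io.gatling.http.ahc" level="TRACE" />
</configuration>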


Grails 4: how to get a handle to artifacts in a custom command

I need to build a custom command in a Grails 4 application (https://docs.grails.org/4.0.11/guide/single.html#creatingCustomCommands), and I need to get a handle on some Grails services and domain classes which I will query as needed.
The custom command skeleton is quite simple:

import grails.dev.commands.*
import org.apache.maven.artifact.Artifact

class HelloWorldCommand implements GrailsApplicationCommand {

    boolean handle() {
        return true
    }
}
While the documentation says that a custom command has access to the whole application context, I haven't found any examples of how to get a handle on that and start accessing the various application artifacts.
Any hints?
EDIT: to add context and clarify the goal of the custom command (for further recommendations, best practices, etc.): the command reads data from a file in a custom format, persists the data, and writes reports in another custom format.
It will eventually be replaced by a recurring job, once the data is available on demand from a third-party REST API.
See the project at github.com/jeffbrown/marco-vittorini-orgeas-artifacts-cli.
grails-app/services/marco/vittorini/orgeas/artifacts/cli/GreetingService.groovy
package marco.vittorini.orgeas.artifacts.cli

class GreetingService {
    String greeting = 'Hello World'
}
grails-app/commands/marco/vittorini/orgeas/artifacts/cli/HelloCommand.groovy
package marco.vittorini.orgeas.artifacts.cli

import grails.dev.commands.*

class HelloCommand implements GrailsApplicationCommand {

    GreetingService greetingService

    boolean handle() {
        println greetingService.greeting
        return true
    }
}
EDIT: I have added a commit at github.com/jeffbrown/marco-vittorini-orgeas-artifacts-cli/commit/49a846e3902073f8ea0539fcde550f6d002b9d89 which demonstrates accessing a domain class; that was the part of the question I overlooked when writing the initial answer.

flatMap vs map, basic explanation is ok, but what happens when my transformation function isn't synchronous by itself?

I like the basic explanations of complex Reactor concepts found all over the web, but they are not particularly useful for production code. Here is a piece of code I wrote which sends a message to Kafka using Reactor Kafka + Spring Boot:
import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;
import reactor.kafka.sender.KafkaSender;
import reactor.kafka.sender.SenderOptions;
import reactor.kafka.sender.SenderRecord;
import reactor.kafka.sender.SenderResult;

import java.util.Properties;

public class CallbackSender {

    private static final Logger log = LoggerFactory.getLogger(CallbackSender.class.getName());

    private ObjectMapper objectMapper;
    private String topic;
    private final KafkaSender<String, String> sender;

    public CallbackSender(ObjectMapper objectMapper, Properties senderProps, String topic) {
        this.sender = KafkaSender.create(SenderOptions.create(senderProps));
        this.objectMapper = objectMapper;
        this.topic = topic;
    }

    public Mono<SenderResult<String>> sendMessage(ProcessContext<? extends AbstractMessage> processContext) throws JsonProcessingException {
        ProducerRecord<String, String> producerRecord = new ProducerRecord<>(topic,
                objectMapper.writeValueAsString(processContext.getMessage()));
        SenderRecord<String, String, String> senderRecord = SenderRecord.create(producerRecord, processContext.getId());
        return sender.send(Flux.just(senderRecord))
                .doOnError(e -> log.error("Send failed", e))
                .last();
    }
}
What I can't grasp is what exactly the difference is between calling this.sendMessage via .map vs .flatMap from the outer pipeline. What is the point of the explanation that map applies a synchronous transformation to the emitted element, if my synchronous function is not really doing anything synchronous apart from fetching a few basic fields?
Here the Kafka sender is already reactive and async, so does it matter which one I use? Is that a correct assumption?
Is my code non-idiomatic?
Or, for this particular case, would wrapping everything I do inside .sendMessage in .flatMap just be a safety measure, in case someone adds synchronous code in the future, i.e. syntactic sugar for safety?
My understanding is that .map will simply prepare the pipeline in this case, returning a Mono, and a subscriber on the outer calling pipeline will trigger the entire domino effect. Is that correct?
What I can't grasp is what exactly the difference is between calling this.sendMessage via .map vs .flatMap from the outer pipeline

map() applies a synchronous function (i.e. one executed "in place", with no subscriptions or callbacks) and just returns the result as is. flatMap() applies an asynchronous transformer function, and unwraps the Publisher when done. So:

My understanding is that .map will simply prepare the pipeline in this case, returning a Mono, and a subscriber on the outer calling pipeline will trigger the entire domino effect. Is that correct?

Yes, that's correct (if by "domino effect" you mean that the returned Mono will be subscribed to and its result returned).

What is the point of the explanation that map applies a synchronous transformation to the emitted element, if my synchronous function is not really doing anything synchronous apart from fetching a few basic fields?

Quite simply, because that's what you've told it to do. There's nothing inherently asynchronous about setting up a publisher, only about its execution once it's been subscribed to (which doesn't happen with a map() call).
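To make the difference concrete, here is a minimal sketch. It is written in Scala for brevity (the question's code is Java, but the types are what matter), and send is a hypothetical stand-in for the reactive Kafka send, not the real API:

import reactor.core.publisher.Mono

object MapVsFlatMap {
  // Setting up the Mono is synchronous and cheap; the actual work happens
  // only when the Mono is subscribed to.
  def send(msg: String): Mono[String] = Mono.just(msg.toUpperCase)

  // map applies the function synchronously and keeps whatever it returns:
  // the element is now itself a Mono that nobody ever subscribes to.
  val nested: Mono[Mono[String]] = Mono.just("payload").map(s => send(s))

  // flatMap subscribes to the inner Mono and flattens it into the pipeline,
  // so the send executes as part of the outer subscription.
  val flat: Mono[String] = Mono.just("payload").flatMap(s => send(s))
}

With map you end up holding an unexecuted publisher; with flatMap the inner send becomes part of the outer pipeline's execution, which is why flatMap is the idiomatic choice whenever the transformer itself returns a Publisher.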

Using spray with "chunked responses"

I am using Spray to query a REST endpoint which returns a largish amount of data containing several items that should be processed. The data is a series of JSON objects. Is there a way to convert the response into a stream of these objects that does not require me to read the entire response into memory?
Reading the docs, there is mention of "chunked responses", which seems to be along the lines of what I want. How do I use that in a spray-client pipeline?
I've just implemented something like this today, thanks to the excellent article found at http://boldradius.com/blog-post/VGy_4CcAACcAxg-S/streaming-play-enumerators-through-spray-using-chunked-responses.
Essentially, what you want to do is get hold of the RequestContext in one of your Route definitions, and get a reference to its "responder" Actor. This is the Actor by which Spray sends responses back to the client that sent the original request.
To send back a chunked response, you have to signal that the response is starting, then send the chunks one by one, and finally signal that the response has finished. You do this via the ChunkedResponseStart, MessageChunk, and ChunkedMessageEnd classes from the spray.http package.
Essentially, what I ended up doing is sending a response as a series of these classes, like this:
0) A bunch of imports to put into the class containing your Routes, plus a case object:

import akka.actor.{Actor, ActorContext, ActorRef, ActorRefFactory, Props}
import akka.pattern.ask
import akka.util.Timeout
import scala.concurrent.duration._
import scala.concurrent.{ExecutionContext, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import spray.http._
import spray.http.MediaTypes._
import spray.json._
import spray.routing.RequestContext

object Messages {
  case object Ack
}
1) Get hold of the requestContext from your Route:

path("asdf") {
  get { requestContext =>
    // ... further code here for sending the chunked response ...
  }
}
2) Start the response (as a JSON envelope that will hold the response data in a JSON array called "myJsonData", in this case):

requestContext.responder.forward(ChunkedResponseStart(HttpResponse(entity = HttpEntity(`application/json`, """{"myJsonData": ["""))).withAck(Ack))
3) Iterate over your array of results, sending their JSONified versions to the response as elements of the JSON array, comma separated; the final element needs no trailing comma:

requestContext.responder.forward(MessageChunk(HttpData(myArray.toJson.compactPrint)).withAck(Ack))
if (!lastElement) { // however you work this out in your code!
  requestContext.responder.forward(MessageChunk(HttpData(",")).withAck(Ack))
}
4) When there's nothing left to send, close the JSON envelope:

requestContext.responder.forward(MessageChunk("]}").withAck(Ack))

and signal the end of the response:

requestContext.responder.forward(ChunkedMessageEnd().withAck(Ack))
In my solution I have been working with Play Iteratees and Enumerators, so I have not included big chunks of code here, because they are very much tied up with those mechanisms, which may not be suitable for your needs. The point of the "withAck" call is that it makes the responder send an acknowledgement message back when the network signals that it is OK to accept more chunks. Ideally you would craft your code to wait for that Ack message before sending the next chunk, as sketched below.
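As a rough illustration of that wait-for-Ack loop, here is a minimal, hypothetical sketch (it assumes the imports and the Messages.Ack object from step 0, and that the comma handling from step 3 is already folded into the chunk iterator):

class ChunkStreamer(responder: ActorRef, chunks: Iterator[String]) extends Actor {
  import Messages.Ack

  // Open the JSON envelope; withAck asks spray to send us Ack once the
  // network is ready to accept more data.
  responder ! ChunkedResponseStart(HttpResponse(entity = HttpEntity(`application/json`, """{"myJsonData": ["""))).withAck(Ack)

  def receive = streaming

  def streaming: Receive = {
    case Ack if chunks.hasNext =>
      responder ! MessageChunk(HttpData(chunks.next())).withAck(Ack)
    case Ack =>
      responder ! MessageChunk(HttpData("]}")).withAck(Ack) // close the envelope
      context.become(closing)
  }

  def closing: Receive = {
    case Ack =>
      responder ! ChunkedMessageEnd
      context.stop(self)
  }
}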
I hope the above gives you a starter for ten at least; as I say, these concepts are explained really well in the article I linked to!
Thanks,
Duncan

Properties of an entity sent from iOS are set to null when Objectify is used to store the entity in the datastore

I send an entity from an iOS client and it is processed by the following backend API method:
@ApiMethod(name = "dataInserter.insertData", path = "insertData", httpMethod = "post")
public Entity insertData(customEntity userInput) {
    ofy().save().entity(userInput).now();
    return userInput;
}
customEntity is defined within customEntity.java as follows:

// Import statements here
@Entity
public class customEntity {
    @Id public String someID;
    @Index String providedData;
}
After the above code runs, the datastore contains the following entry:

ID/Name      providedData
id=5034...   <null>
If I add the following lines to my method:

customEntity badSoup = new customEntity();
badSoup.providedData = "I am exhausted";
ofy().save().entity(badSoup).now();

I see the following in the datastore after I run the code:

ID/Name      providedData
id=5034...   I am exhausted
In a post almost identical to this one, the poster -- Drux -- concludes: "...assignments to @Indexed properties only have actual effects on indices (and hence queries) if they are carried out directly with Objectify on the server (not indirectly on iOS clients and then passed to the server with Google Cloud Endpoints)." stickfigure then responds: "It sounds like what you're saying is 'cloud endpoints is not reconstituting your SomeEntity object correctly'. Objectify is not involved; it just saves whatever you give it."
It's hard to tell whether stickfigure is correct, especially given that the same problem described above still occurs when I explore my API using Google's APIs Explorer.
Is anyone able to explain what's causing this, or is Drux's conclusion correct?

Scalatest sharing service instance across multiple test suites

I have FlatSpec test classes which need to make use of a REST service for some fixture data. When running all the tests in one go, I only really want to instantiate the REST client once, as it may be quite expensive. How can I go about this, and can I also get it to work when running just one test class from my IDE?
1. Use mocking:
I would advise you to use some kind of mocking when you test against a REST service. You can try, for example, ScalaMock. Creating a mock service isn't time/CPU consuming, so you can create mocks in all your tests and don't need to share them.
Look:
trait MyRestService {
  def get(): String
}

class MyOtherService(val myRestService: MyRestService) {
  def addParentheses = s"""(${myRestService.get()})"""
}

import org.scalamock.scalatest.MockFactory
import org.scalatest.FreeSpec

class MySpec extends FreeSpec with MockFactory {
  "test1" in {
    // create a mock rest service and define its behaviour
    val myRestService = mock[MyRestService]
    val myOtherService = new MyOtherService(myRestService)
    inAnyOrder {
      (myRestService.get _).expects().returning("ABC")
    }
    // now it's ready, you can use it
    assert(myOtherService.addParentheses === "(ABC)")
  }
}
2. Or use sharing fixtures:
If you still want to use the real implementation of your REST service, creating only one instance and sharing it between tests, consider using:
get-fixture methods => use these when you need the same mutable fixture objects in multiple tests and don't need to clean up afterwards.
withFixture(OneArgTest) => use this when all or most tests need the same fixtures, and they must be cleaned up afterwards.
Refer to http://www.scalatest.org/user_guide/sharing_fixtures#loanFixtureMethods for more details and code examples; a rough sketch of the loan-fixture variant follows below.
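A minimal loan-fixture sketch (all names here are illustrative stand-ins, not from the original question):

import org.scalatest.FlatSpec

class RestClient {
  def get(path: String): String = s"response for $path"
  def close(): Unit = ()
}

class ApiSpec extends FlatSpec {

  // The loan pattern: create the fixture, lend it to the test body,
  // and clean up afterwards regardless of the outcome.
  def withClient(test: RestClient => Any): Unit = {
    val client = new RestClient
    try test(client)
    finally client.close()
  }

  "the API" should "answer a ping" in withClient { client =>
    assert(client.get("/ping").nonEmpty)
  }
}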
If you want to share the same fixture across multiple suites, use org.scalatest.Suites and the @DoNotDiscover annotation (this requires at least scalatest-2.0.RC1), roughly as sketched below.
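A hypothetical sketch of that multi-suite arrangement, reusing the RestClient stand-in from above. The lazy val means the client is created at most once, and only on demand, so explicitly running a single suite from the IDE still works:

import org.scalatest.{BeforeAndAfterAll, DoNotDiscover, FlatSpec, Suites}

object SharedClient {
  // Created at most once per test run, and only if some suite touches it.
  lazy val client = new RestClient
}

@DoNotDiscover // keeps the runner from picking this suite up on its own
class UserApiSpec extends FlatSpec {
  private val client = SharedClient.client
  "the user API" should "respond" in {
    assert(client.get("/users").nonEmpty)
  }
}

class AllApiSpecs extends Suites(new UserApiSpec) with BeforeAndAfterAll {
  override def afterAll(): Unit = SharedClient.client.close()
}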
Pawel's last comment fits well.
I found it easier to inherit from Suite with BeforeAndAfterAll instead of Suites.
import com.typesafe.config.ConfigFactory
import com.google.inject.Guice
import org.scalatest.{BeforeAndAfterAll, Suite}
import net.codingwell.scalaguice.InjectorExtensions.ScalaInjector

class EndToEndSuite extends Suite with BeforeAndAfterAll {

  private val injector = {
    val config = ConfigFactory.load
    val module = new AppModule(config) // your module here
    new ScalaInjector(Guice.createInjector(module))
  }

  override def afterAll {
    // your shutdown if needed
  }

  override val nestedSuites = collection.immutable.IndexedSeq(
    injector.instance[YourTest1],
    injector.instance[YourTest2] //...
  )
}
