There is a ticket website, and my developer wrote a parser (in JS); the parser is exposed to Telegram as a bot. It has only two commands, /day and /route. Every 15 minutes the Telegram bot replies with the flights that exist for a given date and route.
In brief, how it works:
There is a site: we send a POST request, we receive HTML, and
the JS parses this HTML and then passes the answers to our bot.
There are no requirements for non-functional testing. What types of testing would you recommend performing? Can I somehow use Postman, JMeter, or other testing software?
I am new to non-functional testing, but I'm very interested in carrying out some kind of non-functional testing of this functionality.
I'm trying to send different message cards to multiple Teams channels.
I have already created a webhook (telekom/webhook) for this, which gives me the right variables via JSON.
There are four department receiver channels (telekom/rest-api-component), which are also configured to send pre-formatted Teams message cards with the variables submitted to them.
Currently this happens to all channels at the same time. In between I would need an "action" in which I can decide which of the channels is served, based on the input values. Unfortunately I can't find anything suitable among the variety of APIs. Do you know how I could realize this? Something like: if value department = Backoffice, then (Teams "Account Management") action.
In order to talk to the different Office 365 applications, I wanted to use the Microsoft Graph API, which has now been available for some time. I couldn't find it in Flowground. Are you planning to include this module?
For an implementation with Office 365 flows, this would be absolutely necessary for me.
I want to come back to this question: the CBR is indeed a good choice for executing decisions. But is it the best solution in every situation? I do not think so.
Assume the following task:
Depending on an input parameter test, you want to fire a request to one of two web services (WS1: google.de or WS2: bing.de).
Solution 1: You realize the requests with dedicated connectors for WS1 and WS2.
In this case you need the CBR in front of the WS1 and WS2 connectors to decide which connector has to be used next.
Solution 2: You are able to realize both requests with the REST API connector. In this case you can use a JSONATA expression as the URL mapping, e.g.
(test="google") ? "http://google.de" : "http://bing.de"
By using JSONATA expressions every connector has (limited) capability for executing decisions.
Solution 2 has a big advantage when you are using realtime flows. In this case you are able to reduce the number of connectors needed for running the flow and (very important from a cost perspective) the number of tokens permanently claimed by this flow.
For reducing the complexity of JSONATA expressions (e.g. when you add further search engines) and for separation of individual configuration items you can use the configuration connector (we can discuss this in a separate thread if needed).
Solution 1 is the only choice when you have to decide between different structures/connectors that need to be executed within a flow.
Please try the Content-Based Router: https://doc.flowground.net/guides/content-based-router.html; it is available in the Connector Catalog.
I want to test a set of ruby-on-rails applications. Specifically, I want to trigger all possible GET/POST requests available. I am considering using some web crawler-like tool, which could (recursively) send requests to my web server, get responses, and parse the response HTML file to get all possible "href tags", "form submission buttons", etc.
Essentially I want to see the performance of these web applications and get some logs of things like what are the request routes, parameters, database accesses, queries, transactions, etc.
Sending GET requests is relatively easy to handle: I would simply parse the HTML response and extract the href attributes of all anchors. However, I don't know how to handle the POST requests; they would require me to fill in all the parameter fields included in the forms. I am wondering whether there are tools that do such work, or tools whose code I could modify (not too much) to achieve this functionality.
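For illustration, here is a minimal sketch of that idea in Ruby, assuming the Nokogiri gem is installed; the localhost URL and the placeholder value "test" for form fields are my own assumptions, and a real crawler would also need to follow links recursively and handle things like authenticity tokens:

require 'net/http'
require 'nokogiri'
require 'uri'

base = URI('http://localhost:3000/')   # assumed local Rails server
doc  = Nokogiri::HTML(Net::HTTP.get(base))

# Candidate GET targets: the href attribute of every anchor
# (each of these could then be fetched with Net::HTTP.get).
get_paths = doc.css('a[href]').map { |a| a['href'] }

# For each form, record its action, method and named fields so a POST
# can be built with placeholder values.
forms = doc.css('form').map do |form|
  {
    :action => form['action'],
    :method => (form['method'] || 'get').downcase,
    :fields => form.css('input[name], textarea[name], select[name]')
                   .map { |i| [i['name'], i['value'] || 'test'] }.to_h
  }
end

forms.each do |f|
  next unless f[:method] == 'post' && f[:action]
  Net::HTTP.post_form(URI.join(base, f[:action]), f[:fields])
end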
Thanks a lot.
Using the JMeter GUI, I recorded a test scenario (placing an order) and the script ran successfully. But when I replay the test script it doesn't do what it was recorded to do; it does not place an order.
After querying the dev, I found that for each item selected, the server generates a CSRF token and puts it in the URL path (like: /cart/add/type/product_id/7245985/_csrf_token/b46c0aec2e5891808ec42141b1956943204ae8f8) when the item is added to the shopping cart. This is all recorded in the script, and this path with the token is used to add the item to the cart.
My question is: how do I handle this dynamic token when it is concatenated into the URL path?
Any help is appreciated.
If you have not already added a Tree View Listener to your Test Plan, then add it now. You can use it to view the details of requests and responses. JMeter considers a request successful if it gets "some" response from the server side; it does not matter whether the response is functionally valid or not. So, in order to make sure that JMeter is sending valid parameters and receiving the expected responses, you will have to check the details of the requests/responses in the Tree View Listener.
You can also add Response Assertions to requests so that JMeter itself verifies that it is getting the expected responses.
Important Tips:
Use the Tree View Listener for debugging only. In a real load test keep it disabled, as it consumes a lot of memory.
Do not use Response Assertions excessively, as they consume a lot of memory as well.
JMeter is not a browser-based tool; it just deals with back-end requests, hence it is expected to be very fast. There is nothing wrong with that, so remove unnecessary timers rather than trying to slow it down.
If your requests involve some kind of login authorization, then have a look at this question for further details: Load testing using jmeter with basic authentication
Recording doesn't guarantee a working script; it gives you only a "skeleton", and usually you need to perform some correlation (the process of extracting a mandatory dynamic parameter from the previous response and adding it to the next request). In your case that means capturing the _csrf_token value from the response (for example with JMeter's Regular Expression Extractor) and substituting it into the path of the next request in place of the recorded value.
Reference material:
Building a Web Test Plan
Building an Advanced Web Test Plan
How to use JMeter for Login Authentication?
How to make JMeter behave more like a real browser
I need to test some HTTP components in my Delphi app. I use DUnit and want to add some automation into testing.
So my testing code needs to start a local HTTP server, configure it (for example, to break the connection after 3 seconds, to simulate low bandwidth, or to ask for a login/password, etc.), run my unit tests, and close the HTTP server.
Are there HTTP servers available specifically for Delphi/DUnit?
I know that the Mozilla team has such a server, but it's not too easy to integrate it into DUnit.
I use Indy's TIdHttpServer to serve stuff in the same process.
This approach allows me to check that the requests coming in are correct, as well as checking the behaviour from the client end.
Also, you can set the server up individually on a test-case-by-test-case basis, making your unit tests easier to understand (meaning that you don't have a piece of the 'test' somewhere else).
While @Nat's answer is workable, the setup code for stubbing requests and their associated responses using Indy can be pretty heavy. Also, when working this way, I found the test code to be quite a time drain to both write and debug. Hence I built a framework, Delphi WebMocks, for DUnitX (sorry, not DUnit) to do exactly this, with a syntax that should be straightforward using HTTP terminology.
For example, the setup code is as simple as:
WebMock.StubRequest('GET', '/')
  .ToRespond
  .WithHeader('Content-Type', 'application/json')
  .WithBody('{ "value": 123 }');
You can also verify the requests actually got made like:
WebMock.Assert
  .Post('/')
  .WithHeader('Content-Type', 'application/json')
  .WithBody('{ "value": 123 }')
  .WasRequested;
If the assertion fails, it will fail the DUnitX test.
There is a lot more to it in terms of how you can specify request matching and responses so please check it out if you think you'd find it useful.
You may use unit tests / DUnit to construct automatic integration tests. Say your HTTP components act as an HTTP client making calls to a web service. You can create your own mock web service, or just use any public web service, like one of those from Google or Amazon. Then you just need to create a Google or Amazon developer account and consume some basic service functions for testing.
If you're testing SOAP services, use SoapUI to stand up a "mock" service based on your WSDL.
You can have it return a variety of responses (either sequentially, or with some simple scripting to match responses to the request contents). I've done this by matching the "request ID" (just a GUID) in the request sent from the DUnit test to a response in SoapUI; a simple XPath query matches them up.
You can have it return "canned" errors/exceptions, and of course when it's not running, you'll have the "nobody's home" test case.
Edit #2: Does anyone have a good method of testing the "middle" of a client-server application, where we can intercept requests and responses, fake the client or server as needed, and which provides self-documentation of the API?
Cucumber might be a good solution in many cases, but it's not quite what I'm looking for, and this middle layer should be client/server implementation agnostic (i.e., black box).
Our client-server model is a Ruby on Rails server with a Flex client, using a REST-ish interface with JSON as the data format. So anything the client posts to the server is usually a single JSON parameter. The server does its thing and responds with a pure JSON model.
We have standard Rails testing on the server, and we're working on getting proper FlexUnit tests completed on the client (it's a moving target). However, there's a debate in my team about the effectiveness of the current testing model, since every change on the server seems to break part of the API. This tells me that there is both a problem with API communication (between team members, self-documentation in code, etc.) and a lack of proper API sanity testing.
So I've been questioning whether we need to have a mock client for testing the server at a pure JSON level (without all the other complexities of a rich client), and possibly a mock-server for doing the same thing with the rich client. This would serve two purposes, to document the API and to provide more thorough testing of the API itself.
The reason there's a debate is that the rails guy claims that the rails integration testing is sufficient for testing all the server requests, and the middle-ground testing environment would simply be redundant.
So the question here is: given our situation, how should we go about self-documenting the API, and how should we test the API itself?
EDIT:
We have routes like /foo/444/bar.js, but the parameters can be virtually any complex JSON string depending on the action, e.g.:
json={
  "foo": {
    "x": 1,
    "y": 2
  },
  "bar": [1, 2, 3, 4, 5]
}
but besides manually edited API docs, there's no self-documentation. The Rails controller often just deserializes the JSON and applies the changes directly to the model. It would be nice to have common tests that tell us when the API has changed and what is expected.
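As a rough sketch of the kind of "pure JSON" mock-client check described above, written as a black-box Ruby script so it stays implementation agnostic (the host, the concrete route, the payload, and the expected response key are invented for illustration; only the single json parameter mirrors the description above):

require 'net/http'
require 'json'
require 'uri'

uri     = URI('http://localhost:3000/foo/444/bar.js')   # assumed route
payload = { 'foo' => { 'x' => 1, 'y' => 2 }, 'bar' => [1, 2, 3, 4, 5] }

# The client sends everything as a single "json" parameter.
response = Net::HTTP.post_form(uri, 'json' => payload.to_json)

raise "unexpected status #{response.code}" unless response.code == '200'

body = JSON.parse(response.body)
raise 'response is missing the foo model' unless body.key?('foo')

puts 'API contract for /foo/:id/bar.js still holds'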
I just started looking at a web functional testing tool called Maxq, and I think it has the potential to solve your problem. Maxq acts as a proxy server between your web client and your server application.
It sits on top of JUnit, which means you can do proper unit testing of your API by asserting on the behaviour and responses of calls to your server app.
It basically captures and records all the requests you make from a web client and the responses you get back from the server; it can also generate test scripts from your requests, which you can play back and test against any server.
You should try it out http://maxq.tigris.org/
You can think of it as two different projects: if you had two projects, you would have written two separate test suites, right?
You should start by establishing the API between the server and the client, as if you won't have any communication between the teams once you start implementing.
Then you build a client that consumes the API and a server that produces the API (or the tests first, if you do TDD).
For testing, one team needs a mock server to supply fake API responses for testing the client, and the other team needs to test the data produced by the server (i.e., the second team uses Rails integration testing, as your Rails guy suggests).
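Purely as an illustrative sketch, the client-side team's mock server could be as small as a throwaway WEBrick script (WEBrick ships with older Rubies and is a separate gem on newer ones); the port and the canned payload below are made up, with the route borrowed from the /foo/444/bar.js example in the question:

require 'webrick'
require 'json'

server = WEBrick::HTTPServer.new(:Port => 4567)

# Canned response standing in for what the real server would produce.
server.mount_proc '/foo/444/bar.js' do |req, res|
  res['Content-Type'] = 'application/json'
  res.body = { 'foo' => { 'x' => 1, 'y' => 2 }, 'bar' => [1, 2, 3, 4, 5] }.to_json
end

trap('INT') { server.shutdown }
server.start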
I would recommend Cucumber. It allows you to write specific tests for your application by emulating a browser. This way you can send requests and validate the JSON response easily.
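As a hedged sketch of what such step definitions might look like, assuming a typical cucumber-rails setup where Capybara drives the requests and rspec-expectations is available; the step wording, the path, and the JSON structure are invented:

require 'json'

When(/^I request person (\d+) as JSON$/) do |id|
  visit "/person/#{id}.json"
end

Then(/^the JSON response should include the name "([^"]*)"$/) do |name|
  data = JSON.parse(page.source)
  data['person']['name'].should == name
end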
One way you can do this is with controller tests using RSpec (you can also use Test::Unit):
describe PersonApiController do
  # GET /person/1.json
  it "should respond with a person" do
    person = Person.create(:name => "scott")
    get :show, :id => person.id, :format => 'json'
    response.should be_success
    response.body.should have_selector('name', :content => person.name)
  end
end