Our project involves containerising services/applications that will later be deployed on Kubernetes. My job is to do performance testing with JMeter once the services are hosted on Kubernetes.
I am relatively new to performance testing and have basic hands-on experience with JMeter. I understand how an app is load/performance tested using plain URLs or APIs, but I want to know how to go about performance testing Docker containers hosted on Kubernetes.
How could I handle the above scenario?
JMeter doesn't know anything about the underlying technologies used on the backend; it just sends requests via Samplers, waits for responses, and measures the elapsed time of each request along with some other performance metrics. Later on you can generate an HTML Reporting Dashboard to visualize the results.
So your goal is to:
Identify the business use cases you need to implement for the performance testing
Identify the network protocols used under the hood of these business use cases
Create a JMeter Test Plan that precisely mimics a real user (or other application) accessing your system and doing what it is supposed to do
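Nothing in the test plan itself has to change just because the services run in containers: JMeter still targets whatever URL Kubernetes exposes (a Service of type LoadBalancer/NodePort, or an Ingress host). As a minimal sketch of what such a test plan boils down to, here it is built programmatically with JMeter's Java API; the hostname, path, JMeter home directory, and thread counts are hypothetical placeholders.

```java
import org.apache.jmeter.control.LoopController;
import org.apache.jmeter.engine.StandardJMeterEngine;
import org.apache.jmeter.protocol.http.sampler.HTTPSamplerProxy;
import org.apache.jmeter.testelement.TestPlan;
import org.apache.jmeter.threads.ThreadGroup;
import org.apache.jmeter.util.JMeterUtils;
import org.apache.jorphan.collections.HashTree;

public class K8sLoadTestSketch {
    public static void main(String[] args) {
        // Point JMeter at its installation so its properties can be loaded (hypothetical path).
        JMeterUtils.setJMeterHome("/opt/apache-jmeter");
        JMeterUtils.loadJMeterProperties("/opt/apache-jmeter/bin/jmeter.properties");
        JMeterUtils.initLocale();

        // HTTP Sampler: hits the URL exposed by the Kubernetes Ingress/Service (hypothetical host and path).
        HTTPSamplerProxy sampler = new HTTPSamplerProxy();
        sampler.setDomain("orders.mycompany.example");
        sampler.setPort(443);
        sampler.setProtocol("https");
        sampler.setPath("/api/orders");
        sampler.setMethod("GET");
        sampler.setName("Get orders");

        // Loop Controller: each virtual user repeats the sampler 10 times.
        LoopController loop = new LoopController();
        loop.setLoops(10);
        loop.setFirst(true);
        loop.initialize();

        // Thread Group: 50 virtual users ramped up over 30 seconds.
        ThreadGroup threads = new ThreadGroup();
        threads.setName("Order readers");
        threads.setNumThreads(50);
        threads.setRampUp(30);
        threads.setSamplerController(loop);

        // Assemble the Test Plan tree and run it.
        TestPlan plan = new TestPlan("Orders service on Kubernetes");
        HashTree tree = new HashTree();
        HashTree planTree = tree.add(plan);
        planTree.add(threads).add(sampler);

        StandardJMeterEngine jmeter = new StandardJMeterEngine();
        jmeter.configure(tree);
        jmeter.run();
    }
}
```

In practice most people build the same structure in the JMeter GUI, save it as a .jmx file, and run it in non-GUI mode against the cluster's external endpoint; the point is that the plan only needs that endpoint, not any knowledge of Docker or Kubernetes.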
I am looking for a monitoring tool for the following use cases:
Collect basic metrics about a virtual machine (CPU usage, memory usage, I/O, available disk space)
Extract metrics from SQL Server (probably by running some queries)
Extract information from an external service about processing, i.e. how many processing jobs are currently running and for how long. I am thinking about writing Python scripts, but I don't know how to combine them with a monitoring tool
Plot charts and manage alerts; it would also be nice to be able to send not only emails but also messages to Slack/MS Teams.
I was thinking about Prometheus, because it has wmi_exporter, node_exporter, a SQL exporter, and Alertmanager with the ability to send notifications to multiple destinations, but I don't know what to do about the external service and the Python scripts.
Any suggestions?
Prometheus can definitely do what you say you need done. Some of it may not be trivial, but you can definitely fill in the blanks yourself.
E.g. you can get machine metrics basically out of the box by firing up node_exporter and having it scraped by Prometheus, but I don't think it exposes information on all running processes, for example. The latter might require you to write an agent/exporter: a simple web server that exposes metrics on /metrics; there is a Python client library to help with that. Or have said processes (assuming they're your code) push metrics to a Pushgateway instead, if they're short-lived batch jobs.
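The answer above mentions the Python client; Prometheus also publishes an official Java client (simpleclient) that follows the same pattern. Purely as an illustrative sketch, and assuming a hypothetical fetchRunningJobCount() that queries your external service, a tiny exporter could look like this:

```java
import io.prometheus.client.Gauge;
import io.prometheus.client.exporter.HTTPServer;

public class ProcessingExporter {

    // A Gauge, because the value can go up and down between scrapes.
    static final Gauge runningJobs = Gauge.build()
            .name("external_service_processing_jobs")
            .help("Number of processing jobs currently running in the external service")
            .register();

    public static void main(String[] args) throws Exception {
        // Exposes /metrics on port 9400 for Prometheus to scrape (port is arbitrary).
        HTTPServer server = new HTTPServer(9400);

        while (true) {
            runningJobs.set(fetchRunningJobCount()); // hypothetical call to your external service
            Thread.sleep(15_000);                    // refresh roughly once per scrape interval
        }
    }

    // Hypothetical placeholder: replace with a real call to the external service's API.
    private static int fetchRunningJobCount() {
        return 0;
    }
}
```

You would then add the exporter's host:port as a scrape target in prometheus.yml like any other exporter.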
Oh, and for charts/dashboards you probably want Grafana, as Prometheus' abilities in that area are rather limited and Grafana integrates rather well with Prometheus.
I'm using the Spring Cloud Dataflow local server and deploying 60+ streams with a Kafka topic and a custom sink. The memory/CPU cost is not currently scalable. I've set -Xmx to 64m for most streams.
Currently exploring my options.
Disable embedded Tomcat server. I tried this once and SCDF couldn't tell the deployment status of the stream.
Group multiple Kafka "source" topics to a single sink app. This is allowed by Kafka but unclear if SCDF will permit subscribing to multiple topics.
Switch to using the Kubernetes deployer. This won't exactly reduce memory/CPU usage, but it will distribute it across multiple machines. I haven't pursued this option because Kubernetes isn't used in my org yet; maybe this will force the issue.
Open to other ideas. Might also be able to tweak Kafka configs such as max.poll.records and reduce memory usage.
Thanks!
First, I'd like to clarify the differences between SCDF and Stream/Task apps in the data pipeline.
SCDF is a lightweight Spring Boot app that includes the DSL, REST-APIs, and the Dashboard. Simply put, it serves as the orchestrator to define and deploy stream and task/batch data pipelines made of stream and task applications respectively.
The actual business logic, its performance, and the underlying resource consumption are at the individual Stream/Task application level. SCDF doesn't interfere with the apps' operation, nor does it contribute to the resource load. Everything, in the end, is a standalone Boot app - a standalone Java process.
Now, to your exploratory steps.
Disable embedded Tomcat server. I tried this once and SCDF couldn't tell the deployment status of the stream.
SCDF is a REST server and it requires the application container (in this case Tomcat); you cannot disable it.
Group multiple Kafka "source" topics to a single sink app. This is allowed by Kafka but unclear if SCDF will permit subscribing to multiple topics.
Again, there is no relation between SCDF and the apps. SCDF orchestrates full-blown Stream/Task applications (aka Boot apps) into a coherent data pipeline. If you have to produce to or consume from multiple Kafka topics, that is done at the application level. Check out the multi-io sample for more details.
There is also a facility to consume from multiple topics directly via a named destination. SCDF provides DSL/UI capabilities to build fan-in and fan-out pipelines. Refer to the docs for more details. This video could be useful, too.
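To make the "application level" point concrete: at the level of the underlying Kafka client, subscribing one sink to several source topics is a one-liner. This is only a hedged sketch with made-up topic and broker names; in a Spring Cloud Stream app the same thing is expressed through the binding configuration rather than by using the raw client directly.

```java
import java.time.Duration;
import java.util.Arrays;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class MultiTopicSinkSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // hypothetical broker
        props.put("group.id", "shared-sink");              // one consumer group for the single sink app
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        // Lowering max.poll.records (mentioned in the question) caps how much is buffered per poll.
        props.put("max.poll.records", "100");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // One sink subscribed to several "source" topics (names are made up).
            consumer.subscribe(Arrays.asList("orders-events", "payments-events", "shipping-events"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    process(record.topic(), record.value()); // hypothetical sink logic
                }
            }
        }
    }

    private static void process(String topic, String value) {
        System.out.printf("%s -> %s%n", topic, value);
    }
}
```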
Switch to using the Kubernetes deployer.
SCDF's Local-server is generally recommended for development, primarily because there's no resiliency baked into the Local-server implementation. For example, if the streaming apps crash for any reason, there's no mechanism to restart them automatically. This is exactly why we recommend either SCDF's Kubernetes or Cloud Foundry server implementations in production: the platform provides resiliency and fault tolerance by automatically restarting the apps under fault scenarios.
From a resourcing standpoint, once again, it depends on each application. They are standalone microservice applications, each performing a specific operation at runtime, and resource usage comes down to how much the business logic requires.
Where I work, we are migrating our entire infrastructure, which until now was based on monolithic services running directly on Windows/Linux VMs, to a Docker-based architecture that will be orchestrated by Kubernetes.
One of the things that came to my mind is how we would handle logs in this new infrastructure.
Up until now, each app had its own way of handling logs: some used log4net/log4j to write to the file system, some wrote to GrayLog via a dedicated library.
The main problem I have with that is that one of the core ideas of building microservices in a Docker environment is that every service should assume as little as possible about the rest of the services or the platform.
So basically I was looking into how I can abstract the logging process from the application and make it independent of the rest of the infrastructure.
One interesting thing I found is that you can write the logs to standard output (stdout) and then configure Kubernetes to pull these logs and direct them to centralised storage or a centralised logging server (like GrayLog): https://kubernetes.io/docs/concepts/cluster-administration/logging/
I have several concerns with this approach. For one, I haven't seen many companies doing it; the most popular logging solutions use a dedicated library to log to the filesystem.
I am also concerned about how it might impact performance: some languages block if you write to stdout, whereas when you use a standard logging library, the logs are queued.
So what about services that output a massive amount of user-related logs?
I am interested in what you think; I haven't seen this approach used widely, and maybe there is a reason for that.
Logging to any stream (file, stdout, GrayLog...) can be either synchronous (blocking) or asynchronous (non-blocking). Inherently, that has nothing to do with the medium you log to per se. It is true that using System.out.println in Java will result in heavy thread contention.
All the major logging frameworks (like log4j) provide you with the means to log in an asynchronous fashion to every medium that you like.
I think your perception that not many companies do this is wrong. Logging to stdout and configuring your underlying architecture to forward logs somewhere is the de facto standard for PaaS/containerized applications.
So my tip is: log to stdout using a good logging framework that ensures asynchronous use of the stream. For the rest, you'll probably be fine.
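As a minimal sketch of that tip (the class name below is made up): the application code only talks to the logging API, and whether the write to stdout blocks or not is purely a configuration concern. With Log4j 2, for instance, all loggers can be made asynchronous by putting the LMAX Disruptor on the classpath and starting the JVM with -DLog4jContextSelector=org.apache.logging.log4j.core.async.AsyncLoggerContextSelector, while the root logger points at a plain ConsoleAppender.

```java
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

// Hypothetical service class; the only logging-related decision made in code
// is to use the framework API instead of System.out.println.
public class PaymentService {

    private static final Logger log = LogManager.getLogger(PaymentService.class);

    public void charge(String orderId) {
        // With async logging enabled via configuration, this call hands the event
        // to a background thread and returns almost immediately; the actual write
        // to stdout (and from there into Kubernetes' log pipeline) happens later.
        log.info("Charging order {}", orderId);
    }
}
```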
I'm thinking about moving my DAL which uses DocumentDb and Azure Table Storage to a separate Web API and host it as a cloud service on Azure.
The primary purpose of doing this is to make sure that I keep a high-performance DAL that can scale up easily and independently of my front-end application -- currently ASP.NET MVC 5 running as a cloud service on Azure, but I'll definitely add mobile apps as well. With DocumentDb and Azure Table Storage, I'm finding myself doing a lot of data handling in my C# code, so I think it would be a good idea to keep that separate from my front-end application.
However, I'm very concerned about latency issues introduced by HTTP calls from one cloud service to another which would defeat the purpose of separating DAL into its own application/cloud service.
What is the best way to separate my DAL from my front-end application without introducing any latency issues?
I think the trade-off between scaling out/partitioning resources and network latency is unavoidable. That being said, you may find the trade-off well worth it for many reasons (e.g. enabling parallel execution of application tasks, increased reliability, etc.) when working with large-scale systems.
Here are some general tips to help you minimize the hit on network latency:
Use caching to avoid cross-service calls whenever possible.
Batch cross-service calls and reuse connections whenever possible to minimize the cost associated with traversing the NAT out of one cloud service and through the load balancer into another (see the sketch after this list). Note: your application must also be able to handle dropped connections (inevitable in cloud architecture).
Monitor performance metrics as much as possible to take measurements and identify bottlenecks.
Co-locate your applications layers within the same datacenter to keep cross-service latency to a minimum.
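As a hedged illustration of the caching and connection-reuse tips above (the original stack is ASP.NET, but the pattern is language-agnostic; the class name and endpoint here are made up): keep a single long-lived HTTP client for the lifetime of the front end and consult a local cache before crossing the service boundary.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class DalClientSketch {

    // One long-lived client so TCP/TLS connections are reused across calls
    // instead of paying the connection setup cost on every request.
    private static final HttpClient CLIENT = HttpClient.newBuilder()
            .connectTimeout(Duration.ofSeconds(2))
            .build();

    // Naive in-process cache; a real system would add expiry or use a shared cache.
    private static final Map<String, String> CACHE = new ConcurrentHashMap<>();

    public String getCustomer(String id) throws Exception {
        String cached = CACHE.get(id);
        if (cached != null) {
            return cached; // avoid the cross-service hop entirely
        }
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("https://dal.example.net/api/customers/" + id)) // hypothetical DAL endpoint
                .timeout(Duration.ofSeconds(2))
                .GET()
                .build();
        // Handle a dropped connection with one retry, as suggested in the list above.
        HttpResponse<String> response;
        try {
            response = CLIENT.send(request, HttpResponse.BodyHandlers.ofString());
        } catch (java.io.IOException firstFailure) {
            response = CLIENT.send(request, HttpResponse.BodyHandlers.ofString());
        }
        CACHE.put(id, response.body());
        return response.body();
    }
}
```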
You may also find the following literature useful: http://azure.microsoft.com/en-us/documentation/articles/best-practices-performance/
I recently split out my DAL to a WebAPI that serves data from DocumentDB for both the MVC website and mobile applications for the same reasons stated by the questioner.
The statements from aliuy are valid performance considerations generally accepted as good practice.
But more specifically: in order to call the Web API from MVC with minimal latency using Azure cloud services, one should specify the same affinity group for each resource (websites, cloud services, etc.).
Affinity groups are a way you can group your cloud services by proximity to each other in the Azure datacenter in order to achieve optimal performance. When you create an affinity group, it lets Azure know to keep all of the services that belong to your affinity group as physically close to each other as possible.
https://azure.microsoft.com/en-us/documentation/articles/virtual-networks-migrate-to-regional-vnet/
I'm really impressed by the power of cloud computing when it comes to the ability to scale your resources up and down depending on your load.
How can I shift my paradigm and learn to write my applications that way? "Write it once and forget about it (no matter the future load)" would be the ideal solution.
How can I practice my skills in that area?
Set up a virtualization environment where I can add more VMs to the private cloud (via the command line?) based on some smart algorithm that forecasts the load for some period of time?
Ideally I want to practice this without buying actual cloud computing services, just on my own hardware.
The only thing I want to practice here is scaling app/web roles and/or message queue systems when the current workers have too many jobs in the queue. So let's rule out database scaling from the question's scope as too big a topic.
One option I will throw out is to use a native cloud execution framework. You might look at the CloudIQ Platform; one component is the CloudIQ Engine. It allows you to develop cloud-native apps in C/C++, Java, and .NET. You get scale-up capability simply by adding workers to your cloud. The framework automatically distributes your applications to the new machine(s) and, once they are installed, begins sending work to them as requests come in. So in effect the cloud handles your queueing issue for you.
Check out the Download and Community links for more information.
You should try AWS: Amazon offers a free tier that gives you storage, messaging, and micro instances (Linux only). You can start developing small try-outs without paying. Writing an application that scales isn't that hard: try to break your flow into small, concurrent tasks. Client-server applications are even easier: use a load balancer to launch/terminate servers on demand.
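To practice the queue-driven part on a single machine, a toy sketch like the following (all thresholds and timings are made up) can simulate the decision you would later delegate to a real autoscaler: watch the queue depth and add workers when the backlog per worker grows too large.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class QueueScalingSketch {

    private static final BlockingQueue<Runnable> QUEUE = new LinkedBlockingQueue<>();
    private static volatile int workerCount = 0;

    public static void main(String[] args) {
        startWorker(); // start with a single worker

        // Producer: simulates jobs arriving faster than one worker can handle.
        new Thread(() -> {
            while (true) {
                QUEUE.add(() -> sleep(200)); // each "job" takes ~200 ms
                sleep(50);                   // a new job arrives every 50 ms
            }
        }).start();

        // Scaling loop: the same check a real autoscaler would run against a
        // cloud queue's length metric via the provider's monitoring API.
        while (true) {
            int backlogPerWorker = QUEUE.size() / Math.max(workerCount, 1);
            if (backlogPerWorker > 10) {     // made-up threshold
                startWorker();
                System.out.println("Backlog " + QUEUE.size() + ", scaled up to " + workerCount + " workers");
            }
            sleep(1000);
        }
    }

    private static void startWorker() {
        workerCount++;
        new Thread(() -> {
            while (true) {
                try {
                    QUEUE.take().run();      // block until a job is available, then process it
                } catch (InterruptedException e) {
                    return;
                }
            }
        }).start();
    }

    private static void sleep(long ms) {
        try { Thread.sleep(ms); } catch (InterruptedException ignored) { }
    }
}
```

In a real deployment, the startWorker step would become an API call to the cloud provider (launch an instance or scale out a worker pool), and the queue length would come from the messaging system's metrics rather than from an in-process queue.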