Facing error while running a JMeter script using a Dockerfile on AWS - docker

Hi, I am Niladri Shekhar De, relatively new to performance testing. I am trying to run my load test (scripted in JMeter) using a Dockerfile on AWS. I am editing the Dockerfile as shown in the first screenshot, and I have also edited the entrypoint as shown in the second screenshot. When I run it, it waits for a long time after the "Waiting for possible shutdown...." line, and finally I get all 10 errors (my script has 2 transactions and I am running 5 users), as shown in the CloudWatch screenshot. The script name mentioned there may differ from the one in the Dockerfile, but I changed that later. Could anyone please look into this and help me out? It would really be a great help.

Although your way of taking screenshots is fantastic, you should not be posting code as images on StackOverflow.
Coming back to your question: we cannot see any failure reason there, so I would suggest checking:
The .jtl results file; it should have the status code, response message, and possibly response details, etc.
The jmeter.log file, which can normally give a clue about what's wrong. If it doesn't, you can try increasing JMeter's logging verbosity.
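For example, you can raise the verbosity straight from the command line when launching the test in non-GUI mode (a sketch; the .jmx, .jtl, and log file names are placeholders for whatever your Dockerfile actually uses):
jmeter -n -t your_test.jmx -l results.jtl -j jmeter.log -LDEBUG
Here -LDEBUG raises the root log level to DEBUG; you can also target a single category, e.g. -Ljmeter.engine=DEBUG, to keep the log readable.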

Related

Enable k8s experimental features in Docker Desktop

Does anyone know if this is possible?
All I can find in the docs is a reference to enabling Docker experimental features, but not the Kubernetes experimental features.
I tried this, but I still get an error:
k alpha debug -it exchange-pricing-865d579659-s8x6d --image=busybox --target=exchange-pricing-865d579659-s8x6d
error: ephemeral containers are disabled for this cluster (error from server: "the server could not find the requested resource").
Thanks
I had the same intent (as have others in this feature request). After several hours of trial and error, I finally found out a way to do so.
Steps:
Depending on which file you're trying to edit, you may need to fully shut down Docker Desktop, and restart WSL. (right-click tray-icon and press "Quit Docker Desktop", then run wsl --shutdown, then run wsl)
Open the [...]/kubeadm/manifests folder in the Docker filesystem.
On Windows, navigate Windows Explorer to:
For Docker Desktop 4.2.0: \\wsl$\docker-desktop-data\version-pack-data\community\kubeadm\manifests
For Docker Desktop 4.11.0: \\wsl$\docker-desktop-data\data\kubeadm\manifests
Open the kube-controller-manager.yaml, kube-apiserver.yaml, and kube-scheduler.yaml files, adding the line below:
spec:
  containers:
  - command:
    [...]
    - --feature-gates=EphemeralContainers=true    <-- add this line
Start Docker Desktop again.
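Once the cluster is back up, you can check that the gate was actually picked up (a hedged check; the pod name assumes Docker Desktop's default static-pod naming):
kubectl -n kube-system get pod kube-apiserver-docker-desktop -o yaml | grep feature-gates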
It looks so easy when it's already figured out, huh? Well, trust me, it was a pain to find out.
Some of the slowdowns I hit:
It took me quite a while to even find those manifest files. (eventually found it using grepWin, searching through the whole \\wsl$\docker-desktop-data folder for any matches of a line I grabbed from the kube-apiserver-docker-desktop pod's config, which I viewed using Lens)
Once I found it, I got confused by this documentation. When I read FEATURE STATE: Kubernetes v1.22 [alpha], I thought that meant you needed version 1.22 or higher of Kubernetes for the feature to be available. This caused a huge wild goose chase where I tried to change the version of Kubernetes that was being launched in Docker Desktop, which Docker Desktop didn't seem to like. (in retrospect, the issue may have just been the minor one in point 3 below...)
When I first made changes to the manifest files, I was using Notepad++. And despite my liking Notepad++, it's apparently not quite as smart as VS Code in the following regard: it does not automatically detect the indentation type for YAML files. Thus, when I pressed tab to create an indent so I could add the new flag to the argument list, it added a tab character rather than spaces. This caused Kubernetes to fail to read the file. That might not be so bad if Kubernetes gave a sane error message for it, but instead it merely gave the message unexpected EOF. And I didn't even see that error message at first, because it was not being propagated to the kube-controller-manager-docker-desktop pod (which was the only relevant one that wasn't immediately erroring/closing). Anyway, I didn't realize this was the problem at the time, so...
I decided to try bypassing the manifest files and applying my modification to the etcd data-store directly. In retrospect, this was not a good idea, because the etcd data-store is pretty complex and both the tooling and the documentation are substandard. I spent a ton of time just trying to figure out how to send commands to read and write data to it (eventually managed to do so by calling etcdctl within the etcd-docker-desktop pod). I spent further time writing a NodeJS script capable of reading all the data as JSON, storing it in a dump file, and writing changes to entries back despite there being 3+ levels of quoting involved (I eventually used stdin to pass the value rather than making it part of the command string, to avoid quotation-mark inception). After all that work on etcd reading/writing, I found it didn't work anyway, because Kubernetes invariably "breaks" if anyone else writes to its etcd data-store (even if you write the exact same value that had been there before -- as verified by comparing the dumps before and after).
After all of the above, I decided to have one last go at just adding the flags to the mentioned manifest files. I was still getting the startup failure/error, but at the very end I decided I wanted to see exactly what about my changes was causing Kubernetes to reject them. So I tried commenting out my added line; the error remained. I thought maybe it was a checksum-based rejection then. But then I thought, maybe the YAML parser that Kubernetes is using is just outdated and finicky about what comments it can recognize. So I tried moving the comment around to different places, and was puzzled when the manifest was accepted just by moving the comment to the root level. I moved it back to various locations, with it working and not working, until I thought to try making the line "half-indented", since it was "in-between" the working and non-working versions. That's when I noticed the line had a tab as its indent. And then it hit me: were the other lines also using tabs? I checked, and no, they were using spaces. And that's when I realized I had wasted the last few hours on something I could have fixed with a simple indent change.
The moral of the story for some is that YAML is a bad configuration format, because it makes it easy to make trivial errors like this. But I actually place the blame more on whatever parser Kubernetes is using for the YAML files; it is unacceptable that a YAML parser would encounter an indentation mismatch and give a message so generic as unexpected EOF. I don't know what the identity of that YAML parser is, but I'm tired enough of the subject that I'm not even going to look into it right now. If one of you finds it, please make an issue report for it -- perhaps including this story as a real-world example of the pain that ambiguous error messages can cause.
Since Ephemeral Containers is still an alpha feature, it is disabled by default.
As you can read here, for this to work, it requires the EphemeralContainers feature gate to be enabled, and Kubernetes client and server version v1.16 or later.
As for the 2nd requirement, I assume both your Kubernetes server and client versions are v1.16 or later, but it looks like, for the time being, the 1st requirement cannot be met on Docker Desktop. According to this issue, it currently doesn't support enabling feature gates.
However, you may still try to SSH into your master node and edit the following files:
/etc/kubernetes/manifests/kube-apiserver.yaml
/etc/kubernetes/manifests/kube-scheduler.yaml
by adding inside the command section:
--feature-gates=EphemeralContainers=true
Then you need to delete those pods so that they are recreated with the new settings applied. You'll find them by running:
kubectl get pods -n kube-system
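For example, assuming Docker Desktop's usual static-pod names (verify them against the output of the command above):
kubectl delete pod -n kube-system kube-apiserver-docker-desktop kube-scheduler-docker-desktop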

How to get resolution of "closed" from optilude when getting Jira stats

Wondering if anyone can help me with this.
I'm trying to run Optilude (https://github.com/optilude/jira-cycle-extract) from the Windows 10 command line.
I have installed everything, set up the yaml file as instructed, and have it running.
Every time I run it however, I get the following error:
r.status_code, error, r.url, request=request, response=r, **kwargs) jira.exceptions.JIRAError: JiraError HTTP 400 url: https://livesport.atlassian.net/rest/api/2/search?jql=project+%3D+VELCRO+AND+issueType+IN+%28%22Story%22%2C+%22Task%22%2C+%22Bug%22%29+AND+%28resolution+IS+EMPTY+OR+resolution+IN+%28%22Done%22%2C+%22Closed%22%29%29+ORDER+BY+updatedDate+DESC&validateQuery=True&startAt=0&expand=changelog
text: The value 'Closed' does not exist for the field 'resolution'.
My yaml file snippet asks for these resolutions, and my Jira board has the resolution set to both Closed and Done (both were posted as screenshots).
Does anyone have any suggestions as to why it's not picking up the resolution for "Closed"? Is there some other way I need to set this, or some other way to write it in the yaml file?
If I remove the "Closed" from the yaml file, it runs happily past that error point (and onto the next unrelated error - but that's another issue..)
Are you sure that you have a "Closed" resolution in this JIRA setup? If an issue has a resolution, it means that it is closed.
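As a sanity check, you can ask the server which resolution names it actually defines. A minimal sketch using the same python jira client the extractor relies on (the server URL, e-mail, and API token below are placeholders):
from jira import JIRA  # pip install jira

# Placeholders: use your own site and credentials
jira = JIRA(server="https://livesport.atlassian.net",
            basic_auth=("you@example.com", "api-token"))

# Print every resolution name the server knows about;
# "Closed" must appear here for the JQL in the error to be valid
for resolution in jira.resolutions():
    print(resolution.name)
If "Closed" is missing from that list, the board column is merely named "Closed" while the underlying resolution is something else (e.g. "Done"), which would explain the HTTP 400.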

Unsatisfied Link Error: Flink

I was trying to run a basic Java program by submitting it to the job manager in Flink. I have a native library from OpenCV. When I try to submit the job I get "java.lang.UnsatisfiedLinkError: no opencv_java310 in java.library.path"; however, when I run it in Eclipse by setting up the Flink execution environment, I get correct results.
I have followed some solutions from the Apache Flink mailing list (https://mail-archives.apache.org/mod_mbox/flink-user/201604.mbox/%3CCAO0MGUj_h==sw76-TWF6x8fnT_Vdc84mwu=YLejjn=bG-up+MQ#mail.gmail.com%3E) and have modified my flink-conf.yaml file accordingly (by pointing env.java.opts: -Djava.library.path="/path of OpenCV library"), but no luck.
Maybe my question is very basic, but I am still stuck; any help would be highly appreciated. Thanks :-)
I had a similar problem; people often reference something like the "Tomcat" solution. Also, Flink with RocksDB writes the .so to a tmp file, but this was also the wrong track.
In case anyone else passes this way: I wrote a short blog post outlining the steps I took. The OP's answer in the comments seems obvious, but only once you have seen the solution (when I was working on this, it was not informative).
Shameless self promo:
https://rawkintrevo.org/2017/08/14/using-jnis-like-opencv-in-flink/
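For reference, the flink-conf.yaml change described in the question would look like this (a sketch; the path is a placeholder for wherever libopencv_java310.so actually lives, it must point at the directory rather than the file, it has to be set on every machine running a task manager, and the cluster needs a restart afterwards):
# flink-conf.yaml
env.java.opts: -Djava.library.path=/usr/local/share/OpenCV/java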

symfony 1.4 propel:build-model not working as expected

Just wondering if anyone might know what's happening here. I have several schema.yml files, and when I try to build model classes using symfony propel:build-model I don't get any error message; however, instead of any classes being generated, I get XML files generated in the same config folder as the schema YAML files. I.e., if I have a file named logger_schema.yml in the config directory, after I run build-model I will also have a generated-logger_schema.xml file in the config directory, and no generated classes.
Any idea what could be causing this?
The XML file in question is a worker file symfony/Propel creates as part of the class generation process - it's not an "error" as such.
symfony CLI tasks require quite a lot of PHP memory, especially on Windows. If the Propel task is failing, I would recommend permanently raising the memory_limit setting in php.ini to at least 256M. I know this seems high, but you should only ever need these tasks on a development machine. As you note, you saw evidence of memory exhaustion on another related task.
If that doesn't fix it, could you add to your question all of the CLI output when you run the task? It might shed some light on the step which is failing.
After looking at this ticket, it appears the XML files are likely the result of a symfony error, despite the fact that I repeatedly got no error message using propel:build-model. After trying propel:build --model --forms, I did in fact get a "memory exhausted" error, which was solved by temporarily increasing the PHP memory limit.
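For anyone hitting the same wall: you can raise the limit for a single run without touching php.ini (a sketch; 512M is an arbitrary generous value):
php -d memory_limit=512M symfony propel:build --model --forms
Or make it permanent by raising memory_limit in php.ini, as suggested in the answer above.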

Extract information from a log file using powershell

Hi, I am new to PowerShell, so I thought about playing around with it. I am trying to extract information from a log file (the file belongs to a program called Event Viewer). I need to use the information under Boot Duration.
Could somebody guide me a little bit?
It will be greatly appreciated.
Thanks.
Logs are always much the same. I'm not sure whether you want to monitor the boot log of Windows or Linux or something else, but I will try to answer.
If you edit your question and add info on the operating system and an example of the relevant lines of the boot log file, I can provide you with some PowerShell code.
In general you should do:
Identify how to find the boot time manually in the log file. For example, it will probably have a starting boot time and a finished boot time, something similar to this:
[2012-06-08 12:00:04] starting boot
lot of log entries
[2012-06-08 12:00:34] finished boot
Once you know how to do it manually, you have to convince PowerShell to do it for you. You can use regular expressions to look for the pattern of dates. In my example, look for lines that contain "starting boot" and then parse them to load the date; see the sketch after the link below.
Here you have a useful link on PowerShell and regular expressions: http://www.regular-expressions.info/powershell.html
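A minimal sketch along those lines, assuming the hypothetical log format above (the file path, timestamp format, and marker strings are all placeholders you would adapt to your real log):
# Placeholders: adjust the path, timestamp format and marker text to your log
$log    = Get-Content "C:\logs\boot.log"
$format = "yyyy-MM-dd HH:mm:ss"

# Pull the bracketed timestamp out of the first line matching each marker
$start  = [datetime]::ParseExact(
    (($log | Select-String "starting boot" | Select-Object -First 1) -replace '^\[(.+?)\].*$', '$1'),
    $format, $null)
$finish = [datetime]::ParseExact(
    (($log | Select-String "finished boot" | Select-Object -First 1) -replace '^\[(.+?)\].*$', '$1'),
    $format, $null)

# The boot duration is simply the difference between the two
New-TimeSpan -Start $start -End $finish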
