HTTP Trigger not scaling up using Osiris KEDA - docker

I am trying to deploy an HTTP trigger to KEDA. I have installed the Osiris components for this. It helped me scale to zero when no requests are coming in, but it is not scaling up beyond 1 instance. I have removed all replica constraints from the deploy.yaml file, but still no effect.
Using an AKS cluster with 3 nodes
Installed the KEDA and Osiris components using the Core Tools install
I expect it to scale up when doing a load test with 100 users, but it always shows 1 instance. Please help me if anybody has tried this.

Related

Enable k8s experimental features in Docker Desktop

Does anyone know if this is possible?
All I can find in the docs is a reference to enabling Docker experimental features, but not the Kubernetes experimental features.
I tried this, but still get an error:
k alpha debug -it exchange-pricing-865d579659-s8x6d --image=busybox --target=exchange-pricing-865d579659-s8x6d
error: ephemeral containers are disabled for this cluster (error from server: "the server could not find the requested resource").
Thanks
I had the same intent (as have others in this feature request). After several hours of trial and error, I finally found a way to do so.
Steps:
Depending on which file you're trying to edit, you may need to fully shut down Docker Desktop and restart WSL. (Right-click the tray icon and press "Quit Docker Desktop", then run wsl --shutdown, then run wsl.)
Open the [...]/kubeadm/manifests folder in the Docker filesystem.
On Windows, navigate Windows Explorer to:
For Docker Desktop 4.2.0: \\wsl$\docker-desktop-data\version-pack-data\community\kubeadm\manifests
For Docker Desktop 4.11.0: \\wsl$\docker-desktop-data\data\kubeadm\manifests
Open the kube-controller-manager.yaml, kube-apiserver.yaml, and kube-scheduler.yaml files, adding the line below:
spec:
  containers:
  - command:
    [...]
    - --feature-gates=EphemeralContainers=true  # <-- add this line
Start Docker Desktop again.
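To sanity-check that the flag was picked up after the restart, something like the following should work (kube-apiserver-docker-desktop is the pod name I saw on my machine; yours may differ):
kubectl describe pod kube-apiserver-docker-desktop -n kube-system | grep feature-gates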
It looks so easy when it's already figured out, huh? Well, trust me, it was a pain to find out.
Some of the slowdowns I hit:
It took me quite a while to even find those manifest files. (I eventually found them using grepWin, searching through the whole \\wsl$\docker-desktop-data folder for any matches of a line I grabbed from the kube-apiserver-docker-desktop pod's config, which I viewed using Lens.)
Once I found it, I got confused by this documentation. When I read FEATURE STATE: Kubernetes v1.22 [alpha], I thought that meant you needed version 1.22 or higher of Kubernetes for the feature to be available. This caused a huge wild goose chase where I tried to change the version of Kubernetes that was being launched in Docker Desktop, which Docker Desktop didn't seem to like. (in retrospect, the issue may have just been the minor one in point 3 below...)
When I first made changes to the manifest files, I was using Notepad++. And despite my liking Notepad++, it's apparently not quite as smart as VS Code in the following regard: it does not automatically detect the indentation type for YAML files. Thus, when I pressed tab to create an indent so I could add the new flag to the argument list, it inserted a tab character rather than spaces. This caused Kubernetes to fail to read the file. That might not be so bad if Kubernetes gave a sane error message for it, but instead it merely gave the message unexpected EOF. And I didn't even see that error message at first, because it was not being propagated to the kube-controller-manager-docker-desktop pod (which was the only relevant one that wasn't immediately erroring/closing). Anyway, I didn't realize this was the problem at the time, so...
I decided to try bypassing the manifest files and applying my modification to the etcd data-store directly. In retrospect, this was not a good idea, because the etcd data-store is pretty complex, the tooling is substandard, and the documentation is substandard. I spent a ton of time just trying to figure out how to send commands to read and write data to it (eventually managing to do so by calling etcdctl within the etcd-docker-desktop pod). I then spent further time writing up a NodeJS script capable of reading all the data as JSON, storing it in a dump file, and writing changes to entries back despite there being 3+ levels of quoting involved (I was eventually able to pass the value via stdin rather than as part of the command string, to avoid quotation-mark-inception). After all the work on etcd reading/writing above, I found it didn't work anyway, because Kubernetes invariably "breaks" if anyone else writes to its etcd data-store. (even if you write the exact same value that had been there before -- as verified by comparing the dumps before and after)
After all of the above, I decided to have one last go at just adding the flags to the mentioned manifest files. I was still getting the startup failure/error, but at the very end, I decided I wanted to see exactly what about my changes was causing Kubernetes to reject them. So I tried commenting out my added line; the error remained. I thought maybe it was a checksum-based rejection then. But then I thought, maybe the YAML parser that Kubernetes is using is just outdated and finicky about what comments it is able to recognize. So I tried moving the comment around to different places, and was puzzled when the manifest was accepted just by moving the comment to the root level. I moved it back to various locations, with it working and not working, until I thought to try making the line "half-indented", since it was "in-between" the working and non-working versions. That's when I noticed the line had a tab as its indent. And then it hit me: were the other lines also using tabs? I checked, and nope, they were using spaces. And that's when I realized I had wasted the last few hours on something I could have fixed with a simple indent change.
The moral of the story for some is that YAML is a bad configuration format, because it makes it easy to make trivial errors like this. But I actually place the blame more on whatever parser Kubernetes is using for the YAML files; it is unacceptable that a YAML parser would encounter an indentation mismatch and give a message so generic as unexpected EOF. I don't know what the identity of that YAML parser is, but I'm tired enough of the subject that I'm not even going to look into it right now. If one of you finds it, please make an issue report for it -- perhaps including this story as a real-world example of the pain that ambiguous error messages can cause.
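As a footnote on the tab issue: most YAML parsers do reject tabs in indentation, but the quality of the error message varies wildly. Here is a minimal illustration using Python's PyYAML, purely as a stand-in (I have not checked whether it is related to whatever parser Kubernetes uses):

import yaml

# A mapping indented with a tab instead of spaces, as in the manifest mishap above.
try:
    yaml.safe_load("spec:\n\tcontainers: []")
except yaml.YAMLError as e:
    print(e)  # PyYAML at least names the tab: "found character '\t' that cannot start any token"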
Since Ephemeral Containers is still an alpha feature, it is disabled by default.
As you can read here, for this to work, it requires the EphemeralContainers feature gate to be enabled, and Kubernetes client and server version v1.16 or later.
As to the 2nd requirement, I assume both your Kubernetes server and client versions are v1.16 or later, but it looks like, for the time being, the 1st requirement cannot be met on Docker Desktop. According to this issue, it currently doesn't support enabling feature gates.
However, you may still try to SSH to your master node and edit the following files:
/etc/kubernetes/manifests/kube-apiserver.yaml
/etc/kubernetes/manifests/kube-scheduler.yaml
by adding inside the command section:
--feature-gates=EphemeralContainers=true
Then you need to delete those pods so they are recreated with the new settings applied. You'll find them by running:
kubectl get pods -n kube-system
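For example (the control-plane pod names are suffixed with your master node's name, so <node-name> below is a placeholder):
kubectl delete pod kube-apiserver-<node-name> kube-scheduler-<node-name> -n kube-system
Since these are static pods defined by the manifest files, the kubelet recreates them automatically after deletion.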

Running Camunda in a clustered environment locks further deployments

I have a problem with my clustered Camunda environment. I am trying to run multiple Camunda instances on my OpenShift cluster, all of them connected to a single Oracle DB instance.
My problem is that the deployment of the first instance works as expected. However, as soon as I try to scale the pods to e.g. 3 instances, at least one of them fails and remains stuck on the following output:
{"timestamp":"2020-07-15 14:04:39.503","level":"DEBUG","thread":"main","logger":"org.camunda.bpm.engine.cmd","message":"ENGINE-13009 opening new command context","context":"default"}
14:01:00.741","level":"DEBUG","thread":"main","logger":"org.camunda.bpm.engine.impl.persistence.entity.PropertyEntity.lockDeploymentLockProperty","message":"==> Preparing: SELECT VALUE_ FROM ACT_GE_PROPERTY WHERE NAME_ = 'deployment.lock' for update ","context":"default"}
{"timestamp":"2020-07-15 14:01:00.748","level":"DEBUG","thread":"main","logger":"org.camunda.bpm.engine.impl.persistence.entity.PropertyEntity.lockDeploymentLockProperty","message":"==> Parameters: ","context":"default"}
As the logs suggest, it has something to do with the locking of process deployments. After further investigation, I came across this article on the official Camunda page:
https://docs.camunda.org/manual/7.13/user-guide/process-engine/deployments/
I have also seen the corresponding entries in the database (the deployment.lock row in ACT_GE_PROPERTY).
Problem: I do understand why the deployments are locked, but the main problem is that the lock remains there forever and never gets released. I would appreciate any help!
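For reference, you can inspect that lock row directly, using the same table and key as in the query logged above:
SELECT NAME_, VALUE_ FROM ACT_GE_PROPERTY WHERE NAME_ = 'deployment.lock';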
Are you using autodeployment?! The mentioned article describes a weird situation where multiple nodes try to deploy the same resources. In my opinion, this should only happen when each node tries to autodeploy the resources.
An explicit deployment (performed after the nodes are started) should be executed on a single node; see the sketch below.
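For instance, a minimal sketch of such an explicit deployment through the Camunda REST API, run once from a single place after startup (the host, port, deployment name, and file name are placeholders; check the endpoint against the REST docs for your Camunda version):

import requests

# Deploy one BPMN resource explicitly, instead of letting every node autodeploy it.
with open("process.bpmn", "rb") as bpmn:
    resp = requests.post(
        "http://camunda-host:8080/engine-rest/deployment/create",
        data={
            "deployment-name": "my-deployment",
            "enable-duplicate-filtering": "true",  # skip redeploying unchanged resources
        },
        files={"process.bpmn": bpmn},
    )
resp.raise_for_status()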
KR, Joachim

OpenStack SDK to resize an instance

I have a requirement to read an input YAML file and resize the servers to the specified configuration (VCPU, disk, memory, ...). Note that the server names already exist in the environment. I have automated this in Python code using the CLI command. Reference link for the command:
https://docs.openstack.org/nova/latest/user/resize.html
But the requirement is to implement this via the SDK. Please let me know how to implement this logic in Python code by invoking the OpenStack SDK.
Operating System: Ubuntu 16.04
Input Yaml:
Servername1: test1
VCPU: 2
Disk: 4000
Memory: 200
Servername2: test2
VCPU: 1
Disk: 1000
Memory: 100
According to the SDK API documentation, the Compute class (doc) has methods called resize_server, confirm_server_resize and revert_server_resize.
Please let me know how to implement this logic in Python code by invoking the OpenStack SDK.
The sequence would be:
Read your YAML file.
Find the existing server that you want to resize.
Look up a flavor1 with the specs that you need (VCPUs, disk, memory, etc.).
Check that the server doesn't have that flavor already.
Resize the server.
Check that the server is working correctly. How you do that will depend on the context, but if you skip this step and "confirm" anyway, there is a risk that you will lose the existing server.
Either confirm or revert the resize.
For more information on how to obtain and make calls on the Compute object, please see "Using OpenStack Compute". A minimal sketch of the whole sequence follows after the footnote below.
1 - You could also synthesize flavors on the fly, but that is liable to give you a flavor management issue in the long term.
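Putting the sequence together, here is a minimal sketch using openstacksdk (the cloud name, server name, flavor name, and input file name are all placeholders):

import openstack
import yaml

conn = openstack.connect(cloud="mycloud")  # a clouds.yaml entry (placeholder)

# Read the input file with the requested specs.
with open("input.yaml") as f:
    spec = yaml.safe_load(f)

# Find the existing server, and a flavor matching the requested VCPU/disk/memory;
# here the matching flavor name is assumed to be known already.
server = conn.compute.find_server("test1", ignore_missing=False)
flavor = conn.compute.find_flavor("m1.small", ignore_missing=False)

# (Check the server's current flavor here and skip it if it already matches.)

# Resize, then wait until the server reaches VERIFY_RESIZE.
conn.compute.resize_server(server, flavor)
conn.compute.wait_for_server(server, status="VERIFY_RESIZE")

# ... check that the resized server works correctly ...
conn.compute.confirm_server_resize(server)
# or, if something is wrong:
# conn.compute.revert_server_resize(server)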

Better approach to docker images

I'm new to Docker, so I want to know the best approach for using it. I have a project that needs three components to work:
A JBoss server application
PostgreSQL
A Spring Boot application
So, based on this, my questions are:
1) Should I have one Docker image for each component mentioned above? If yes, why not just put them all together? My idea of Docker is to simplify the deployment of an application, so putting everything together would make it easy to install this app in another environment, right?
2) If yes (one Docker image per component): Spring Boot is just a "java -jar" command, so is it really necessary to have a Docker image for it?
3) In the case of PostgreSQL, should I have the image with all my database structure and data, or just vanilla PostgreSQL without anything?
To answer your questions
1) Should I have one Docker image for each component mentioned above?
If yes, why not just put them all together? My idea of Docker is to
simplify the deployment of an application, so putting everything together
would make it easy to install this app in another environment, right?
It is best to keep them as separate components so that:
You can isolate issues (this helps with debugging)
You can selectively scale (horizontally) specific stateless components when you run on k8s or docker-swarm
You can set hardware limits (RAM, CPU, etc.) per component
You can use different base images (might be useful for optimizations)
You can build & test your components independently
The list goes on
2) If yes (one Docker image per component): Spring Boot is just a
"java -jar" command, so is it really necessary to have a Docker image for it?
Please check the list mentioned above (why it's best to separate) to see if it fits your use case. Note that bundling it into an existing component will affect your scaling strategy.
For example: if you run 3 instances of the JBoss component bundled with the Spring Boot app, you will spawn 3 instances of both of them, which you might not want.
3) In the case of PostgreSQL, should I have the image with all my database
structure and data, or just vanilla PostgreSQL without anything?
I would recommend using vanilla PostgreSQL and mounting your structure & data on a host volume, so that they don't get lost when the container is restarted; see the sketch below.
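As a rough sketch of the separated setup, here is a hypothetical docker-compose file (the image names, port mapping, and credentials are placeholders, not a definitive configuration):

version: "3.8"
services:
  jboss:
    image: my-jboss-app            # placeholder: your JBoss application image
    ports:
      - "8080:8080"
    depends_on:
      - db
  app:
    image: my-spring-boot-app      # placeholder: the "java -jar" app in its own image
    # no fixed host port mapping here, so this service can be scaled freely
    depends_on:
      - db
  db:
    image: postgres:13             # vanilla PostgreSQL
    environment:
      POSTGRES_PASSWORD: example
    volumes:
      - pgdata:/var/lib/postgresql/data   # structure & data survive container restarts
volumes:
  pgdata:

With this layout you can scale just a stateless component, e.g. docker-compose up -d --scale app=3.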
I hope this helps you in some way

Is clustering for InfluxDB available on Windows?

I have managed to build InfluxDB from source for Windows without much issue.
I am now trying to get their clustering to work as per:
https://influxdb.com/docs/v0.9/guides/clustering.html
That assumes a Linux OS.
When updating influxdb.conf from localhost to RealHostName in step 2 and starting the first node, the logs return:
2016/01/06 15:01:44 Go version go1.5.2, GOMAXPROCS set to 8
2016/01/06 15:01:44 Using configuration at: influxdb.conf
[metastore] 2016/01/06 15:01:44 Using data dir: D:\XXXXXX\.influxdb\meta
[metastore] 2016/01/06 15:01:44 Skipping cluster join: already member of cluster: nodeId=1 raftEnabled=true peers=[localhost:8088 RealHostName:8088]
[metastore] 2016/01/06 15:01:44 Node at RealHostName:8088 [Follower]
[metastore] 2016/01/06 15:01:45 Node at RealHostName:8088 [Leader]. peers=[localhost:8088 RealHostName:8088]
[metastore] 2016/01/06 15:01:45 Node at RealHostName:8088 [Follower]. peers=[localhost:8088 RealHostName:8088]
Is there something I am missing? Or is this part of their disclaimer:
Clustering is in an alpha state right now. There are still a good
number of rough edges. If you notice any issues, please report them.
Clustering in InfluxDB 0.9 should be considered alpha functionality, and Windows is not yet a supported OS for InfluxDB. Since you are using alpha functionality on an unsupported OS it may be impossible to fix whatever issues are happening.
I recommend waiting for the 0.10 release later this month, which will have clustering in a beta/RC state. Full Windows support is coming soon but I do not have an estimate yet.
You might also consider running your cluster on Linux servers. Are you completely locked into deploying InfluxDB on Windows?
