I've set up a Docker monitoring stack using Prometheus, Grafana and cAdvisor, and I'm using this query to get the running containers:
count_scalar(container_last_seen{name=~"container1|container2"})
It picks up the containers all right: as soon as I launch a new container, it is picked up right away. The problem is that when a container is stopped or removed, the query does not notice; it still shows it as a running container.
From the cAdvisor/metrics endpoint it is removed as soon as the container stops.
Is there something wrong with the query?
(this is what i used for the stack: https://github.com/vegasbrianc/prometheus)
It seems to be related to the amount of time cAdvisor stores the data in memory.
While cAdvisor keeps the data in memory, you still have a valid date in the container_last_seen metric, so the count_scalar expression still 'sees' the container because the metric has a valid value.
In my test setup, cAdvisor keeps the data for 5 minutes. After that, I get the right information out of your formula because the container_last_seen metric has disappeared.
You can change this cAdvisor configuration with the --storage_duration flag.
--storage_duration=2m0s: How long to store data.
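For example, if you run cAdvisor standalone, something like the sketch below keeps only one minute of data (the image name and volume mounts follow cAdvisor's documented docker run example; in the linked compose stack you would append the flag to the cadvisor service's command instead):
docker run -d --name=cadvisor -p 8080:8080 \
  -v /:/rootfs:ro \
  -v /var/run:/var/run:rw \
  -v /sys:/sys:ro \
  -v /var/lib/docker/:/var/lib/docker:ro \
  google/cadvisor:latest --storage_duration=1m0s
Note that a shorter storage duration also means cAdvisor's own web UI keeps less history.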
As an alternative, if you want quick alerting, you could also consider running a query that compares the last-seen date with the current date:
count_scalar(time()-container_last_seen{name=~"container1|container2"}<=60)
I'm working with a ksqlDB server deployed in Kubernetes. Since it crashed some time ago for an unknown reason, I want to implement high availability as described in https://docs.ksqldb.io/en/latest/operate-and-deploy/high-availability/
We are deploying the server with docker, so the properties that we put inside the config file are:
KSQL_KSQL_STREAMS_NUM_STANDBY_REPLICAS: "2"
KSQL_KSQL_QUERY_PULL_ENABLE_STANDBY_READS: "true"
KSQL_KSQL_HEARTBEAT_ENABLE: "true"
KSQL_KSQL_LAG_REPORTING_ENABLE: "true"
After doing so and restarting the server, I can see that only the first two properties are properly set; I cannot see the last two (for example with SHOW PROPERTIES from the ksqlDB CLI).
Do you have an idea about why I can’t see them?
Do I have to manually deploy a second ksqldb server with the same ksql.service.id?
If this is the case, what is the correct way to do it? Are there particular properties to be set?
I'm just starting out with Google Cloud, and I have a single VM instance in zone europe-west2-c.
This morning, I promoted the VM's IP address from ephemeral to static and selected Start for the VM. Since then, the status has been showing a spinning wheel, I can't connect to the VM because it says "the status is stopped", and the Start/Resume, Stop, Suspend and Reset options on the hamburger menu are all greyed out. Clicking on View logs does not reveal any data.
If I mouse over the spinning wheel, it shows "The instance has been staged and will soon begin running".
That was over two hours ago, and I've now lost confidence that anything is going to happen without my intervening in some way. Does anyone have any idea what I do next?
[update: I've given up, deleted the VM and started again. But it would be nice to know what to do if the problem recurs]
If the console isn't responding, you can try a different browser or an incognito window.
If you still see the same message, try using the gcloud utility:
gcloud compute instances describe instance_name --zone=your-zone | grep status
which will give you the state of the VM.
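If the status is stuck in STAGING, you can also check the serial console output, which sometimes shows what the instance is doing even when the console UI shows nothing:
gcloud compute instances get-serial-port-output instance_name --zone=your-zone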
You can always try to start it right away:
gcloud compute instances start instance_name --zone=your-zone
You can also try stopping it first and then starting it again (also using gcloud).
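For example, substituting your instance name and zone:
gcloud compute instances stop instance_name --zone=your-zone
gcloud compute instances start instance_name --zone=your-zone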
We have a Fargate service running. On CloudWatch we can see the ECS/ContainerInsights -> StorageWriteBytes metric growing every hour, and at some point it stops increasing, probably because the task has run out of disk space. We then start to see log errors unless we force a new deployment of the ECS service. The error looks like:
error: org.apache.logging.log4j.core.appender.AppenderLoggingException: Error
writing to RandomAccessFile /apollo/env/ReaverFeatureGating/var/output/logs/application.log.%d{yyyy-MM-dd-HH}
Questions:
1. Is this normal for all Fargate services? Did we set something up wrong?
2. Can we remove all the AmazonRollingRandomAccessFile appenders and just use STDOUT in log4j2-container.xml? Will that still send our events to CloudWatch, just without writing to disk?
After some research this is what I got:
Because the default template includes AmazonRollingRandomAccessFile, the logs are generated locally but never cleaned up. There are some suggestions about adding a cron job to delete the logs, but in our case we don't need the local logs at all.
Yes, CloudWatch just needs STDOUT.
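A minimal log4j2 configuration that writes only to STDOUT could look like the sketch below (the appender name and pattern are illustrative, not copied from log4j2-container.xml):
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="WARN">
  <Appenders>
    <!-- Everything goes to STDOUT; with the awslogs log driver the task's STDOUT ends up in CloudWatch Logs -->
    <Console name="Console" target="SYSTEM_OUT">
      <PatternLayout pattern="%d{ISO8601} [%t] %-5level %logger{36} - %msg%n"/>
    </Console>
  </Appenders>
  <Loggers>
    <Root level="info">
      <AppenderRef ref="Console"/>
    </Root>
  </Loggers>
</Configuration>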
Also, StorageWriteBytes only represents how many bytes have been written to storage; it is not equal to the used disk space. To monitor the disk space, we can build the CloudWatch agent into the container image and then use the disk_used metric.
https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/metrics-collected-by-CloudWatch-agent.html
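For reference, a minimal agent configuration section for disk usage could look like this (the mount point and measurements are placeholders; see the linked documentation for the full schema):
{
  "metrics": {
    "metrics_collected": {
      "disk": {
        "measurement": ["used", "used_percent"],
        "resources": ["/"]
      }
    }
  }
}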
I am working on a project that uses OpenWhisk. I have created a Kubernetes cluster on Google Cloud with 5 nodes and installed OpenWhisk on it. My serverless function is written in Java. It does some processing based on the arguments I pass to it. The processing can last up to 30 seconds, and I invoke the function multiple times during those 30 seconds, which means I want a greater number of runtime containers (pods) created without having to wait for the previous invocation to finish. Ideally, there should be a container for each invocation until the cluster's resources are exhausted.
Now, what happens is that when I start invoking the function, the first container is created, and then after a few seconds another one to serve the first two invocations. From that point on, I continue invoking the function (no more than 5 simultaneous invocations) but no new containers are started. Then, after some time, a third container is created and sometimes, but rarely, a fourth one, but only after a long time. What is even weirder is that the containers are all started on a single cluster node, or sometimes on two nodes (always the same two nodes). The other nodes are not used. I have set up the cluster carefully. Each node is labeled as an invoker. I have tried experimenting with the memory assigned to each container and the max number of containers, and I have increased the max number of invocations allowed per minute, but despite all this I haven't been able to increase the number of containers created. Additionally, I have tried different machine types for the cluster (different numbers of cores and amounts of memory), but it was in vain.
Since OpenWhisk is still a relatively young project, I unfortunately don't get enough information from the official documentation. Can someone explain how OpenWhisk decides when to start a new container? What parameters can I change in values.yaml to achieve a greater number of containers?
The reason why very few containers were created is that the worker nodes do not have the Java runtime Docker image, and it needs to be downloaded on each of the nodes the first time that runtime is requested. This image weighs a few hundred MB and takes time to download (a couple of seconds on the Google cluster). I don't know why the OpenWhisk controller decided to wait for already-created pods to become available instead of downloading the image on other nodes. Anyway, once I downloaded the image manually on each of the nodes, using the same application with the same request rate, a new pod was created for each request that could not be served by an existing pod.
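If you hit the same problem, pre-pulling the runtime image on every worker node avoids this cold download. Assuming the default Java runtime image is used (check the runtimes list in your values.yaml), that is just:
docker pull openwhisk/java8action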
The OpenWhisk scheduler implements several heuristics to map an invocation to a container. This post by Markus Thömmes, https://medium.com/openwhisk/squeezing-the-milliseconds-how-to-make-serverless-platforms-blazing-fast-aea0e9951bd0, explains how container reuse and caching work and may be applicable to what you are seeing.
When you inspect the activation records for the invokes in your experiment, check the annotations to determine whether the request was "warm" or "cold". Warm means a container was reused for a new invoke; cold means a container was freshly allocated to service the invoke.
See this document https://github.com/apache/openwhisk/blob/master/docs/annotations.md#annotations-specific-to-activations which explains the meaning of waitTime and initTime. When the latter is present among the annotations, the activation was "cold", meaning a fresh container was allocated.
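For example, with the wsk CLI you can list recent activations and inspect one of them; <activationId> is a placeholder for an id from the list, and the presence of initTime in the annotations indicates a cold start:
wsk activation list --limit 5
wsk activation get <activationId>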
It's possible your activation rate is not fast enough to trigger new container allocations. That is, the scheduler decided to allocate your request to an invoker where the previous invoke finished and could accept the new request. Without more details about the arrival rate or think time, it is not possible to answer your question more precisely.
Lastly, OpenWhisk is a mature serverless function platform. It has been in production since 2016 as IBM Cloud Functions, and now powers multiple public serverless offerings including Adobe I/O Runtime and Naver Lambda service among others.
Is it possible for a Docker Task to know which task number it is and how many total tasks are running for a particular Service?
E.g. I'd like to have a Service that works on different ranges in a job queue. The range of jobs that any one Service instance (i.e. Task) works on is dependent on the total number of Tasks and which Task the current one is. So if it's the 5th task out of 20, then it will work on one range of jobs, but if it's the 12th task out of 20, it will work on a different range.
I'm hoping there is a DOCKER_SERVICE_TASK_NUMBER environment variable or something like that.
Thanks!
I've seen this requested a few times, so you're not alone, but it's not a current feature of docker's swarm mode. Implementing this would be non-trivial because of the need to support scaling a service along with other updates to the swarm service. If you scale a service down and docker cleans up task 2 of 5 because it's on the busiest node, you're left in an odd situation where the starting count is less than the current count and there's a hole in the current numbering scheme.
If you have this requirement, an external service-discovery tool like Consul or etcd may be useful. You can also try implementing your own solution by taking advantage of the tasks.$service_name DNS entry that's available inside the container. That gives you the IPs of all the other containers providing the service, just like you had with the round-robin load balancing before the swarm-mode VIP abstracted that away.
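For example, from inside one of the tasks (my_service is a placeholder for your service name, and nslookup must be available in the image):
nslookup tasks.my_service
That returns one A record per running task, which your own coordination logic could then use to work out the range split.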