I have set up notifications for my Cloud Build CI/CD pipeline, which pushes notifications to a dedicated Slack channel.
After a successful build, the image is pushed to the Kubernetes cluster and the deployment is updated with a rolling update strategy.
I want to push a notification when a new pod becomes ready and the old pod is terminated, so that at that moment we know the new changes have been applied to the deployment.
Note: I am using a GKE cluster, but I have not installed Prometheus due to resource limits.
There are multiple ways of doing this; I can think of two right now:
Use Prometheus + Alertmanager to send you a Slack notification when pods become ready.
Use the CI/CD pipeline to poll the status of the pods and, once they are updated successfully, send a notification, for example:
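A minimal sketch of the second approach, using the same C# Kubernetes client as the watch example further down (the deployment name "my-app", the "default" namespace, and the Slack incoming-webhook URL are all placeholders):

using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;
using k8s;
using k8s.Models;

class RolloutNotifier
{
    static async Task Main()
    {
        // Load from the default kubeconfig on the machine.
        var config = KubernetesClientConfiguration.BuildConfigFromConfigFile();
        var client = new Kubernetes(config);

        // Poll until every replica of the deployment is updated and ready.
        V1Deployment deployment;
        do
        {
            await Task.Delay(TimeSpan.FromSeconds(10));
            deployment = await client.ReadNamespacedDeploymentAsync("my-app", "default");
        }
        while (deployment.Status.ReadyReplicas != deployment.Spec.Replicas
               || deployment.Status.UpdatedReplicas != deployment.Spec.Replicas);

        // Post a message to a Slack incoming webhook (placeholder URL).
        using (var http = new HttpClient())
        {
            var payload = new StringContent(
                "{\"text\":\"Deployment my-app rolled out successfully.\"}",
                Encoding.UTF8, "application/json");
            await http.PostAsync("https://hooks.slack.com/services/XXX/YYY/ZZZ", payload);
        }
    }
}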
Hope this answers your question.
EDIT:
If you would like to stick with Stackdriver, there is a solution for that as well: https://kubernetes.io/docs/tasks/debug-application-cluster/events-stackdriver/
If you cannot afford the Prometheus stack due to resource limitations, check kubewatch; it has Slack support built in, so it should suit your needs.
This can also be implemented using the Kubernetes API watch:
https://github.com/kubernetes-client/csharp/tree/master/examples/watch
Basically, it checks for pods that are not running and notifies via a Microsoft Teams webhook. It also notifies about a pod that was initially not running and came back to Running status (a recovered pod).
A C# code snippet with the Main and Notify functions is below.
You can replace the pod watch with a deployment watch (see the sketch after the snippet).
using System;
using System.Net.Http;
using System.Text;
using System.Threading;
using System.Threading.Tasks;
using k8s;
using k8s.Models;
using Newtonsoft.Json;

// Namespace to watch; adjust to wherever your deployment runs.
private const string Namespace = "default";

static async Task Main(string[] args)
{
    // Load from the default kubeconfig on the machine.
    var config = KubernetesClientConfiguration.BuildConfigFromConfigFile();

    // Use the config object to create a client.
    var client = new Kubernetes(config);

    try
    {
        // Open a watch on all pods in the namespace.
        var podlistResp = await client.ListNamespacedPodWithHttpMessagesAsync(Namespace, watch: true);
        using (podlistResp.Watch<V1Pod, V1PodList>(async (type, item) =>
        {
            Console.WriteLine(type);
            Console.WriteLine("==on watch event==");
            var message = $"Namespace: {Namespace} Pod: {item.Metadata.Name} Type: {type} Phase: {item.Status.Phase}";
            var remessage = $"Namespace: {Namespace} Pod: {item.Metadata.Name} Type: {type} back to Phase: {item.Status.Phase}";
            Console.WriteLine(message);

            // Notify about any pod that is neither Running nor Succeeded.
            if (!item.Status.Phase.Equals("Running") && !item.Status.Phase.Equals("Succeeded"))
            {
                await Notify(message);
            }

            // Notify about a pod that came back to Running (a recovered pod).
            if (type == WatchEventType.Modified && item.Status.Phase.Equals("Running"))
            {
                await Notify(remessage);
            }
        }))
        {
            Console.WriteLine("press ctrl + c to stop watching");
            var ctrlc = new ManualResetEventSlim(false);
            Console.CancelKeyPress += (sender, eventArgs) => ctrlc.Set();
            ctrlc.Wait();
        }
    }
    catch (Exception ex)
    {
        Console.Error.WriteLine($"An error happened: {ex}");
    }
}

private static async Task Notify(string message)
{
    using (var client = new HttpClient())
    {
        // Post the message to the Microsoft Teams incoming webhook (URL redacted).
        var body = new { text = message };
        var content = new StringContent(JsonConvert.SerializeObject(body), Encoding.UTF8, "application/json");
        var result = await client.PostAsync("https://outlook.office.com/webhook/xxxx/IncomingWebhook/xxx", content);
        result.EnsureSuccessStatusCode();
    }
}
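The remark above about replacing the pod watch with a deployment watch could look like this (my adaptation, not part of the original example; it reuses the client, Namespace, and Notify from the snippet). A deployment whose replicas are all updated and ready has finished its rolling update:

// Sketch: watch deployments instead of pods (adaptation of the snippet above).
var deployResp = await client.ListNamespacedDeploymentWithHttpMessagesAsync(Namespace, watch: true);
using (deployResp.Watch<V1Deployment, V1DeploymentList>(async (type, item) =>
{
    // All replicas updated and ready means the rolling update has finished.
    if (item.Status.ReadyReplicas == item.Spec.Replicas
        && item.Status.UpdatedReplicas == item.Spec.Replicas)
    {
        await Notify($"Deployment {item.Metadata.Name} rolled out successfully.");
    }
}))
{
    Console.WriteLine("watching deployments; press ctrl + c to stop");
    var ctrlc = new ManualResetEventSlim(false);
    Console.CancelKeyPress += (sender, eventArgs) => ctrlc.Set();
    ctrlc.Wait();
}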
For anyone looking for such a tool, do try BotKube. It is built to send Kubernetes events to messengers like Slack, Mattermost, and Microsoft Teams. It also has features like custom filters.
I have created offline documentation with MkDocs and Workbox.
I execute workbox generateSW on the files generated by MkDocs, which generates a Service Worker with precaching set up via the precacheAndRoute function.
This works fine, but when I update the documentation and generate new HTML files and a new Service Worker, the browser does not serve the new content until I completely close it. Refreshing or just closing the tab is not enough.
The worker is updating the content in Cache Storage correctly, which I can see from the Chrome devtools (Application -> Cache Storage -> workbox-precache*), but no matter how many times I hit refresh, the browser won't display the new content.
I use this function to register the Service Worker:
async function register() {
  const registration = await navigator.serviceWorker.register(SW_URL);
  registration.onupdatefound = () => {
    const installingWorker = registration.installing;
    installingWorker.onstatechange = () => {
      if (installingWorker.state === "installed") {
        if (navigator.serviceWorker.controller) {
          console.log("New content is available; please refresh.");
        } else {
          console.log("Content is cached for offline use.");
        }
      }
    };
  };
}
I wonder if I have to do something extra to make the content refresh properly?
My workbox-config.js is:
module.exports = {
  globDirectory: ".doc_build",
  globPatterns: ["**/*"],
  swDest: ".doc_build/sw.js"
};
This happens on both Firefox and Chrome.
Thanks to Robert Rowntree's link in the question comment I figured this out.
In my case the content gets refreshed in the cache, but the old version of the precache service worker keeps running, and it holds a list of objects like this:
{
  "url": "index.html",
  "revision": "e4919b0cd0e772b3beb2d1f3d09af437"
}
As you can see, it contains the checksum of the old version, and the worker will keep serving that until the old service worker is deactivated and the new one activated.
You can observe this by checking registration.waiting while the old service worker is waiting to be deactivated and the new one to be activated. The browser does this "at some point"; in practice it seems to happen if I just keep the tabs closed long enough.
The solution to my question is to force the service worker to skip the waiting period. This can be done by sending a message to the service worker from the update event:
async function register() {
  const registration = await navigator.serviceWorker.register(SW_URL);
  registration.onupdatefound = () => {
    const installingWorker = registration.installing;
    installingWorker.onstatechange = async () => {
      if (installingWorker.state === "installed") {
        if (navigator.serviceWorker.controller) {
          console.log("New content is available; please refresh.");
          // Send a message telling the service worker
          // to stop waiting for the browser to deactivate it.
          registration.waiting.postMessage("skipWaiting");
        } else {
          console.log("Content is cached for offline use.");
        }
      }
    };
  };
}
Then in the Service Worker code I had to handle that message and call skipWaiting():
self.addEventListener("message", messageEvent => {
  if (messageEvent.data === "skipWaiting") {
    return skipWaiting();
  }
});
To do this I had to move from workbox generateSW to workbox injectManifest to be able to add the skipping code.
But there are caveats to this solution. Read Robert's link onwards from "The simplest and most dangerous approach is to just skip waiting during installation.":
https://redfin.engineering/how-to-fix-the-refresh-button-when-using-service-workers-a8e27af6df68
Fortunately this is good enough for my case.
I am trying to monitor my Dropwizard web service using Ganglia. I ran gmond and gmetad on my local machine, and I was able to see basic metrics (e.g. CPU and memory usage) in ganglia-web.
I also added the Ganglia reporter to my service according to this. But nothing shows up in my ganglia-web.
private static final MetricRegistry metrics = new MetricRegistry();
private final Timer ingest = metrics.timer("MyApp");

try {
    // Report to the local gmond over UDP.
    final GMetric ganglia = new GMetric("localhost", 8649, GMetric.UDPAddressingMode.MULTICAST, 1);
    final GangliaReporter gangliaReporter = GangliaReporter.forRegistry(metrics)
            .convertRatesTo(TimeUnit.SECONDS)
            .convertDurationsTo(TimeUnit.MILLISECONDS)
            .build(ganglia);
    gangliaReporter.start(1, TimeUnit.MINUTES);
} catch (Exception e) {
    LOGGER.error("Can not initiate GangliaReporter", e);
}
It looks to me like you entered a normal unicast address but told GMetric to expect a multicast address. Here is what I used (and it works):
GMetric ganglia = new GMetric("192.168.0.40", 8649, UDPAddressingMode.UNICAST, 1);
If this does not help you, please show your gmond.conf (UDP channel config).
I am trying to use the Azure Runtime Reconfiguration Pattern to let me change an appSetting in the normal Web.config file via PowerShell (and later via the Microsoft Azure Web Sites Management Library).
My problem is that the RoleEnvironment.Changing event is not being called in my MVC app, so the web app is restarted. I have placed the event set-up code in the MVC Application_Start, as described in the Azure article, i.e.
protected void Application_Start()
{
    RoleEnvironment.Changing += RoleEnvironment_Changing;
    RoleEnvironment.Changed += RoleEnvironment_Changed;

    // normal MVC code etc...
    AreaRegistration.RegisterAllAreas();
}
The event handlers are a straight copy of the handlers from the Azure article and look like this:
private const string CustomSettingName = "TestConfig";
public static string TestConfigValue;

private static void RoleEnvironment_Changing(object sender,
    RoleEnvironmentChangingEventArgs e)
{
    RoleLogs.Add("RoleEnvironment_Changing: started");
    var changedSettings = e.Changes.OfType<RoleEnvironmentConfigurationSettingChange>()
        .Select(c => c.ConfigurationSettingName).ToList();
    Trace.TraceInformation("Changing notification. Settings being changed: "
        + string.Join(", ", changedSettings));

    if (changedSettings
        .Any(settingName => !string.Equals(settingName, CustomSettingName,
            StringComparison.Ordinal)))
    {
        Console.WriteLine("Cancelling dynamic configuration change (restarting).");
        RoleLogs.Add("RoleEnvironment_Changing: restarting!");

        // Setting this to true will restart the role gracefully. If Cancel is not
        // set to true, and the change is not handled by the application, the
        // application will not use the new value until it is restarted (either
        // manually or for some other reason).
        e.Cancel = true;
    }
    else
    {
        RoleLogs.Add("RoleEnvironment_Changing: change is OK. Not restarting");
        Console.WriteLine("Handling configuration change without restarting.");
    }
}

private static void RoleEnvironment_Changed(object sender,
    RoleEnvironmentChangedEventArgs e)
{
    RoleLogs.Add("RoleEnvironment_ChangED: Starting");
    Console.WriteLine("Updating instance with new configuration settings.");

    foreach (var settingChange in
        e.Changes.OfType<RoleEnvironmentConfigurationSettingChange>())
    {
        if (string.Equals(settingChange.ConfigurationSettingName,
            CustomSettingName,
            StringComparison.Ordinal))
        {
            // Execute a function to update the configuration of the component.
            RoleLogs.Add("RoleEnvironment_ChangED: TestConfig has changed");
            Console.WriteLine("TestConfig has changed.");
            TestConfigValue = RoleEnvironment.GetConfigurationSettingValue(CustomSettingName);
        }
    }
}
I have added logs which prove that my RoleEnvironment_Changing and RoleEnvironment_Changed handlers are not being called in the MVC web app, which means the web app is restarted when I change an appSetting via PowerShell. It also means the RoleEnvironment.Changing event never reaches the WebJob.
I am using Azure SDK 2.7.0
Any ideas?
UPDATE
#richag gave me an answer, which made me realise that my problem arises because I am using an App Service rather than a Cloud Service. This SO answer plus this video (see at 5:00 mins) talk about the difference. (Note: the video is old, so the name of the web app type is different, but the concept is the same.)
I don't really want to change this late in the development, and I have worked around the problem another way. Maybe on the next project I will look at Cloud Services, as I can see some positives, like better control over my WebJobs configuration.
From the Runtime Reconfiguration Pattern: "Microsoft Azure Cloud Services roles detect and expose two events that are raised when the hosting environment detects a change to the ServiceConfiguration.cscfg files". These events are not fired if you make changes to app.config/web.config files. They fire only when the cloud service configuration is changed, i.e. when you upload a new configuration file through the Azure portal's Configure tab or change a setting directly in the portal.
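To make the distinction concrete, here is a small illustration of the two read paths (my sketch, reusing the "TestConfig" setting name from the question):

// A value defined in ServiceConfiguration.cscfg is read through RoleEnvironment;
// editing it is what raises RoleEnvironment.Changing / Changed:
string fromCscfg = RoleEnvironment.GetConfigurationSettingValue("TestConfig");

// A value in Web.config appSettings is read through ConfigurationManager;
// editing Web.config recycles the app domain and raises no RoleEnvironment event:
string fromWebConfig = System.Configuration.ConfigurationManager.AppSettings["TestConfig"];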
According to the debugger, none of the following events are fired when I update the Azure Portal to change an AppSetting for an ASP.NET WebAPI app:
RoleEnvironment.Changing
RoleEnvironment.Changed
RoleEnvironment.StatusCheck
RoleEnvironment.SimultaneousChanging
RoleEnvironment.SimultaneousChanged
RoleEnvironment.Stopping
Do others have different experience?
I am using a dual service/console model to test a service of mine. The code in the spotlight is:
static void Main(string[] args)
{
    // Seems important to use the same service instance, regardless of debug or runtime.
    var service = new HostService();
    service.EventLog.EntryWritten += EventLogEntryWritten;

    if (Environment.UserInteractive)
    {
        service.OnStart(args);
        Console.WriteLine("Host Service is running. Press any key to terminate.");
        Console.ReadLine();
        service.OnStop();
    }
    else
    {
        var servicesToRun = new ServiceBase[] { service };
        Run(servicesToRun);
    }
}
When I run the app under the debugger using F5, on the line Console.ReadLine(); I get a System.IO.IOException with "Not enough storage is available to process this command."
The only purpose of the ReadLine is to wait until someone presses a key to end the app, so I can't imagine where the data is coming from that needs so much storage.
This is a service, so its output type is likely set to Windows Application. Change the output type to Console Application and this should go away.
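If the output type cannot be changed (as in the follow-up below), one workaround (my sketch, not part of the original answer) is to explicitly allocate a console via the Win32 AllocConsole function before reading from it:

using System.Runtime.InteropServices;

static class ConsoleGuard
{
    [DllImport("kernel32.dll")]
    private static extern bool AllocConsole();

    // Call this before Console.ReadLine() in interactive mode;
    // AllocConsole simply fails (returns false) if a console already exists.
    public static void EnsureConsole() => AllocConsole();
}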
I am having the same problem. I found the setting under project properties, but I am creating a Windows application, so I cannot change the application type.
This is the code I use:

Dim t As Task = New Task(AddressOf DownloadPageAsync)
t.Start()
Console.WriteLine("Downloading page...")
Console.ReadLine()

Async Sub DownloadPageAsync()
    ' "page" is assumed to be the URL to download (not shown in the original snippet).
    Using client As HttpClient = New HttpClient()
        Using response As HttpResponseMessage = Await client.GetAsync(page)
            Using content As HttpContent = response.Content
                ' Get contents of page as a String.
                Dim result As String = Await content.ReadAsStringAsync()
                ' If data exists, print a substring. AndAlso short-circuits,
                ' so result.Length is not evaluated when result is Nothing.
                If result IsNot Nothing AndAlso result.Length > 50 Then
                    Console.WriteLine(result.Substring(0, 50) + "...")
                End If
            End Using
        End Using
    End Using
End Sub
I'm using GitLab with an external issue tracker (JIRA), and it works well.
My problem is that when I create a new GitLab project (using the API), I have to go to the GitLab project settings and manually select the issue tracker I want to use and manually enter the project id from my external issue tracker.
(Screenshot of the project settings screen omitted; source: bayimg.com.) The two fields I am talking about are "Issue tracker" and "Project name or id in issues tracker".
So here is my question: is there any way to set these two fields automatically, using the API or otherwise? Currently, the GitLab API does not mention anything about external issue tracker settings.
This code helped me automatically set GitLab's external issue tracker settings, using Apache HttpClient and Jsoup.
The code is absolutely not 100% good, but it shows the main idea, which is to recreate the POST request that the corresponding web form sends.
import java.net.URI;

import org.apache.http.HttpEntity;
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpUriRequest;
import org.apache.http.client.methods.RequestBuilder;
import org.apache.http.impl.client.BasicCookieStore;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.impl.client.LaxRedirectStrategy;
import org.apache.http.util.EntityUtils;
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;

// 1 - Prepare the HttpClient object:
BasicCookieStore cookieStore = new BasicCookieStore();
LaxRedirectStrategy redirectStrategy = new LaxRedirectStrategy();
CloseableHttpClient httpclient = HttpClients.custom()
        .setDefaultCookieStore(cookieStore)
        .setRedirectStrategy(redirectStrategy)
        .build();
try {
    // 2 - Get the CSRF token from a <meta> tag on the edit page:
    HttpUriRequest getCsrfToken = RequestBuilder.get()
            .setUri(new URI("http://localhost/_NAMESPACE_/_PROJECT_NAME_/edit"))
            .build();
    CloseableHttpResponse responseCsrf = httpclient.execute(getCsrfToken);
    try {
        HttpEntity entity = responseCsrf.getEntity();
        Document doc = Jsoup.parse(EntityUtils.toString(entity));
        String csrfToken = doc.getElementsByAttributeValue("name", "csrf-token").get(0).attr("content");

        // 3 - Fill and submit the "edit" form with new values:
        HttpUriRequest updateIssueTracker = RequestBuilder
                .post()
                .setUri(new URI("http://localhost/_NAMESPACE_/_PROJECT_NAME_"))
                .addParameter("authenticity_token", csrfToken)
                .addParameter("private_token", "_MY_PRIVATE_TOKEN_")
                .addParameter("_method", "patch")
                .addParameter("commit", "Save changes")
                .addParameter("utf8", "✓")
                .addParameter("project[issues_tracker]", "jira")
                .addParameter("project[issues_tracker_id]", "_MY_JIRA_PROJECT_NAME_")
                .addParameter("project[name]", "...")
                // ... add any remaining project[...] parameters here
                .build();
        CloseableHttpResponse responseSubmit = httpclient.execute(updateIssueTracker);
    } finally {
        responseCsrf.close();
    }
} finally {
    httpclient.close();
}
Change _NAMESPACE_/_PROJECT_NAME_ so that it corresponds to your project URL, replace _MY_PRIVATE_TOKEN_ with your admin account's token, and replace _MY_JIRA_PROJECT_NAME_ with ... your JIRA project's name.