I need to automate stream creation in Graylog. Currently we log into the Graylog portal and create streams manually for errors and information. Is there a way to automate this, or to create custom templates, so that we can remove the manual effort of creating streams for different environments (QA, UAT, PROD, etc.)?
There is a REST API to automate Graylog.
There are also hundreds of add-ons for Graylog on the Graylog Marketplace, including software libraries for various programming languages.
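For example, creating a stream boils down to a single authenticated POST per environment. Here is a rough sketch in Python; the URL, credentials, index set ID, field names and rule type codes are placeholders to verify against your server's API browser:

```python
# Rough sketch: create one "errors" stream per environment via the Graylog REST API.
# Everything below (URL, credentials, index set ID, field names, rule type codes)
# is a placeholder to verify against your Graylog version's API browser.
import requests

GRAYLOG_API = "https://graylog.example.com/api"
AUTH = ("admin", "password")                # or an access token
HEADERS = {"X-Requested-By": "automation"}  # Graylog requires this header on writes

def create_error_stream(env, index_set_id):
    payload = {
        "title": f"{env} errors",
        "description": f"Error messages from {env}",
        "index_set_id": index_set_id,
        "rules": [
            # type 1 = "match exactly"; field names depend on your log format
            {"field": "environment", "type": 1, "value": env, "inverted": False},
            {"field": "log_level", "type": 1, "value": "ERROR", "inverted": False},
        ],
        "remove_matches_from_default_stream": False,
    }
    r = requests.post(f"{GRAYLOG_API}/streams", json=payload, auth=AUTH, headers=HEADERS)
    r.raise_for_status()
    stream_id = r.json()["stream_id"]
    # Newly created streams start paused, so resume them explicitly.
    requests.post(f"{GRAYLOG_API}/streams/{stream_id}/resume",
                  auth=AUTH, headers=HEADERS).raise_for_status()
    return stream_id

for env in ("QA", "UAT", "PROD"):
    create_error_stream(env, index_set_id="replace-with-your-index-set-id")
```

Running something like this once per environment, or from your deployment pipeline, replaces the manual clicking in the portal.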
I am developing a tuition fee management system using ASP.NET MVC for a university assignment. I am quite new to ASP.NET; I only started learning it in April 2021. One of the requirements is that the system automatically sends an email every month to every user as a reminder about their outstanding balance. How do I start developing this requirement? I've been searching and have only found tutorials where the email is sent manually and to a single user.
There is no built-in functionality in .NET that runs your code once a month, but there are several tools to do this. If you are using Azure, AWS or GCP (or any other cloud platform), you might consider a serverless function. These functions can be triggered on a monthly schedule by the cloud provider.
If you're not hosting in the cloud (or want to avoid provider-specific features), you can use, for instance, Quartz (https://www.quartz-scheduler.net) or Hangfire (https://www.hangfire.io). There are many libraries available, each with their pros and cons. Hangfire, for instance, has a built-in dashboard for monitoring and debugging issues, but that also costs some server resources and might be overkill if you only have a single job to run.
You should, however, take into account that communication with an SMTP server is quite time-consuming, so sending thousands of emails can take a long time. This is especially an issue with serverless functions because of the execution time limits that apply to them. And when you run these as jobs in Quartz or Hangfire, you need to account for the job being aborted halfway through. Therefore, you usually insert the mails into a queue (or database) first and have a second process actually send them, perhaps even via a specialized email delivery service.
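To illustrate the shape of that "queue first, send later" pattern, here is a sketch in Python with SQLite and smtplib, purely for illustration; in your ASP.NET app the two functions would be a monthly Hangfire/Quartz job and a frequently running sender job working against your own database table:

```python
# Illustration of the "queue first, send later" pattern. SMTP host and
# addresses are placeholders; in ASP.NET the same two steps would be
# Hangfire/Quartz jobs writing to and draining your own database table.
import sqlite3
import smtplib
from email.message import EmailMessage

DB = "mail_queue.db"

def enqueue_monthly_reminders(users):
    """Job 1 (runs once a month): queue one reminder per user with a balance."""
    con = sqlite3.connect(DB)
    con.execute("CREATE TABLE IF NOT EXISTS outbox ("
                "id INTEGER PRIMARY KEY, email TEXT, body TEXT, sent INTEGER DEFAULT 0)")
    for user in users:
        if user["balance"] > 0:
            con.execute("INSERT INTO outbox (email, body) VALUES (?, ?)",
                        (user["email"], f"Your outstanding balance is {user['balance']:.2f}."))
    con.commit()
    con.close()

def drain_outbox(smtp_host="localhost"):
    """Job 2 (runs frequently): send queued mails and mark each one as sent.
    If the process is aborted halfway, unsent rows are picked up on the next run."""
    con = sqlite3.connect(DB)
    rows = con.execute("SELECT id, email, body FROM outbox WHERE sent = 0").fetchall()
    with smtplib.SMTP(smtp_host) as smtp:
        for row_id, email, body in rows:
            msg = EmailMessage()
            msg["From"] = "billing@university.example"
            msg["To"] = email
            msg["Subject"] = "Outstanding tuition balance"
            msg.set_content(body)
            smtp.send_message(msg)
            con.execute("UPDATE outbox SET sent = 1 WHERE id = ?", (row_id,))
            con.commit()   # commit per mail so an abort loses at most one update
    con.close()
```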
I'm trying to monitor Keycloak with Dynatrace, but I only see process metrics.
Is there a way to see metrics about sessions, connected users, and so on?
Is there an existing plugin for Keycloak on Dynatrace?
If you want to monitor Keycloak, you will unfortunately only see it at the process level, not at the transaction level.
Supported technologies can be viewed here: https://www.dynatrace.com/support/help/shortlink/supported-technologies
If you need to collect extra parameters, you can always create a custom OneAgent extension; there is currently no ready-made plugin available. In the extension you can define the metrics that you would like to see in the UI: https://www.dynatrace.com/support/help/shortlink/oneagent-extensions-tutorial
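If you do go the custom extension route, the session numbers themselves can be pulled from Keycloak's admin REST API and then reported as custom metrics. Below is a rough sketch of the polling side in Python; the base URL, realm and credentials are placeholders, and the reporting side would follow the OneAgent extension tutorial linked above:

```python
# Sketch: poll session statistics from Keycloak's admin REST API. Base URL,
# realm and credentials are placeholders; newer (Quarkus-based) Keycloak
# versions drop the /auth prefix. A custom OneAgent extension would report
# these numbers to Dynatrace as custom metrics.
import requests

KEYCLOAK = "https://keycloak.example.com"
REALM = "myrealm"

def admin_token():
    r = requests.post(
        f"{KEYCLOAK}/auth/realms/master/protocol/openid-connect/token",
        data={"grant_type": "password", "client_id": "admin-cli",
              "username": "admin", "password": "admin-password"})
    r.raise_for_status()
    return r.json()["access_token"]

def client_session_stats(token):
    """Active session count per client in the realm."""
    r = requests.get(
        f"{KEYCLOAK}/auth/admin/realms/{REALM}/client-session-stats",
        headers={"Authorization": f"Bearer {token}"})
    r.raise_for_status()
    return r.json()

for entry in client_session_stats(admin_token()):
    print(entry["clientId"], entry["active"])
```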
I am trying to set up Spring Cloud Data Flow (SCDF) to run in Local mode and have a few questions which may help me decide if it is a suitable platform for my requirements.
Even though the recommendation is to use Cloud Foundry, Kubernetes, etc. as the execution environment, my preference is to run things in Local mode in production, mainly because I don't have a lot of workload and can't deal with all the additional complexity. In Local mode, will I be able to run all types of SCDF apps, namely streams, jobs and tasks, with no limitations? Some parts of the documentation mention that only jobs can be run in Local mode.
Security - I am looking to put controls in place around deployment of apps and operational access to the tool (dashboard), and I do see support for LDAP with roles as an option, but the whole concept of using Cloud Foundry UAA, another product, to drive user management seems like overkill. Is there no way to configure the tool with an existing LDAP server? I found the following in one of the LDAP issues on GitHub, but it's not clear whether its Docker image uses UAA. Worst case, I wouldn't mind if the dashboard could be run in a view/read-only mode.
https://github.com/spring-cloud/spring-cloud-dataflow/issues/2871
If Spring Cloud Data Flow were a purely monolithic application, integrating all aspects of security directly into the app would definitely be easier to wrap one's mind around. This is how Spring Cloud Data Flow originally started out from a security perspective, and thus versions of Spring Cloud Data Flow < 2.0.0 supported what we labelled traditional security.
However, even before 2.0.x, Spring Cloud Data Flow:
Had to integrate with external platforms such as Cloud Foundry
Became more and more microservices-oriented (e.g. by using Skipper)
As a result, two parallel security architectures had emerged: one using traditional security and the other driven by OAuth2/OpenID Connect.
This became increasingly hard to maintain, and for 2.0.x we decided to focus exclusively on OAuth2/OpenID Connect. However, we still had to support a rich set of enterprise features such as roles, LDAP integration, etc. As such, we found the open-source, production-ready Cloud Foundry User Account and Authentication (UAA) server to be an excellent choice. Its LDAP support actually exceeds the features offered by Spring Cloud Data Flow < 2.0.0.
So yes, in order to set up security for Spring Cloud Data Flow locally, you need to run the UAA, and the UAA also provides the LDAP support. Technically, Spring Cloud Data Flow has no awareness of the LDAP setup at all.
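To give a rough idea of what that looks like, pointing a local Data Flow server at UAA comes down to standard Spring Security OAuth2 client properties along these lines. The hosts, client ID and secret below are placeholders, the UAA + LDAP sample and the reference guide have the authoritative configuration, and note that LDAP itself is configured inside UAA, not in Data Flow:

```yaml
# Illustrative only: point a local Data Flow server at a UAA instance.
# Host, client id/secret and scopes are placeholders; the SCDF
# "Security with UAA + LDAP" sample has the authoritative configuration.
# LDAP itself is configured inside UAA, not in Data Flow.
spring:
  security:
    oauth2:
      client:
        registration:
          uaa:
            client-id: dataflow
            client-secret: dataflow-secret
            authorization-grant-type: authorization_code
            redirect-uri: '{baseUrl}/login/oauth2/code/{registrationId}'
            scope:
              - openid
        provider:
          uaa:
            authorization-uri: http://localhost:8080/uaa/oauth/authorize
            token-uri: http://localhost:8080/uaa/oauth/token
            user-info-uri: http://localhost:8080/uaa/userinfo
            jwk-set-uri: http://localhost:8080/uaa/token_keys
```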
I hope this provides some background regarding how the Spring Cloud Data Flow security architecture emerged. Please have a look at the reference documentation and the aforementioned SCDF Security with UAA + LDAP example. Don't hesitate to reach out in case of further questions!
Disclaimer: I am a committer on the project.
Starting from v2.0, we delegate to UAA for authentication and authorization. There are a variety of write-ups on this matter; a more comprehensive one to look at is the end-to-end sample on how all of this can be put together locally. You do not need CF or K8s; all of this can run locally as well. We rely on UAA as the gateway to standardize on end-to-end SSO across all the client tools, including the shell, dashboard, RESTful APIs, CTR, etc.
Sample: SCDF Security with UAA + LDAP. For further reading, please refer to the security section in the ref. guide.
Lastly, we do not recommend Local for a production install, but I understand that resiliency and/or restartability of apps under failure conditions is not a requirement for some workloads.
I have a few questions concerning how to create a VoiceXML application.
I found some nice tutorials, but there are still some questions:
- What's a good development environment? I wanted to use VS08; there is supposed to be a "Speech" project type under C#, but it doesn't appear. Do I have to install the Speech Server locally in order to use this? (I would prefer some kind of visual workflow.)
- What's the file extension? Is it .xml, .aspx, or .speax? I couldn't figure that out.
- How do I run the VoiceXML? It's on the Speech Server as an application; are there any further steps?
These questions are all over the map on the basics, but I'll try to provide some pointers:
What's a good development environment?
You will likely be building a web-style application, so a VS08 ASP.NET application is a reasonable starting point.
Do I have to install the Speech Server locally in order to use this?
Yes. There are a variety of platforms that support VoiceXML, nearly all of them designed specifically for telephone calls (VoiceXML's main purpose). There are a few free implementations, but most are commercial. I believe the Opera web browser has some VoiceXML functionality; I've seen settings for it in their configuration, but I have no direct experience with it.
What's the file extension? Is it .xml, .aspx, or .speax?
File extensions usually aren't relevant, except maybe to tooling. I don't believe Visual Studio provides any direct support for VoiceXML. Some browsers do care which MIME type is served.
How do I run the VoiceXML? It's on the Speech Server as an application; are there any further steps?
Does this mean you are looking at the OCS/Lync product line? I believe the IVR in that suite supports VoiceXML as well as a few other APIs. The product should contain basic setup and configuration information. More information on Lync:
Microsoft Lync site
Wikipedia
One of the main goals of VoiceXML was to decouple the rendering of the voice application (on a speech server) from the voice application itself. This allows you to serve VoiceXML pages from any web server, anywhere, using any technology stack you want.
If you just want to learn VoiceXML in general, developer sites like Voxeo's Evolution allow you to render your voice applications on their voice hosting infrastructure. You configure your developer account to point to an initial VoiceXML page served from your external web server. In return, you get a phone number to call. When you call it, the hosting infrastructure fetches your initial VoiceXML page from your web server.
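For reference, that initial page can be as simple as a static hello-world document served by any web server:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Minimal VoiceXML document: plays one prompt and then ends the call. -->
<vxml version="2.1" xmlns="http://www.w3.org/2001/vxml">
  <form>
    <block>
      <prompt>Hello world. This is my first VoiceXML application.</prompt>
    </block>
  </form>
</vxml>
```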
(I don't know offhand if Microsoft Lync hosting services are available yet.)
I am going to write a Ruby application that implements a video conversion workflow consisting of multiple audio and video encoding/processing steps.
The application interface has two core features:
queueing new videos
monitoring the progress for each video
The user can access these features using a website written in Ruby on Rails.
The challenge is this: I want to make the workflow app a self-sufficient application, not dependent on the existence of the web view.
To enable this separation I think that adding a network API to the workflow application is a good solution because this allows the workflow app to reside on a different server than the web server.
My question is: Which solution do you suggest for such a network API?
A few options are:
implement a simple TCP server and invent my own string-based API
use some sort of REST API (I don't know if this is appropriate for this situation)
some sort of web-services solution (SOAP, XML-RPC)
another existing framework
Feel free to share your thoughts on this.
I would suggest two things:
First, use REST as your API. This allows you to write one core application with both a user interface and an API for outside applications to use.
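In practice, two resources cover both features from your question: one to queue a video and one to poll its progress. Here is a minimal sketch, written in Python with Flask only to keep it short; the same shape maps directly onto a Sinatra app or a Rails controller, and the in-memory dict stands in for your job database:

```python
# Minimal sketch of the workflow app's network API. Shown in Python/Flask for
# brevity; the same two endpoints map directly onto Sinatra or a Rails
# controller. The in-memory dict stands in for a real job database.
from uuid import uuid4
from flask import Flask, jsonify, request

app = Flask(__name__)
jobs = {}

@app.route("/videos", methods=["POST"])
def enqueue_video():
    """Queue a new video for conversion."""
    job_id = str(uuid4())
    jobs[job_id] = {"source": request.json["source_url"],
                    "state": "queued", "progress": 0}
    return jsonify({"id": job_id}), 201

@app.route("/videos/<job_id>", methods=["GET"])
def video_status(job_id):
    """Report conversion progress for a single video."""
    return jsonify(jobs[job_id])

if __name__ == "__main__":
    app.run(port=8080)
```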
Second, take a look at PandaStream. It's a Merb application that encodes videos from multiple formats into Flash. It has a REST API, and there's even a Rails plugin so you can integrate it with your application. It might be a good example codebase, or even a replacement for the one you're trying to build.
Hope my answer helped,
Mike