In a .NET Core app, if one configures MSSQL as a sink via appsettings, how can you configure a "backup" sink such as a file? So if Serilog can't write to the database in some cases (say it's unreachable, or the credentials are wrong), it tries to write to the second sink instead.
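As a starting point (not a conditional failover, which appsettings alone does not appear to provide), here is a minimal sketch of declaring both sinks side by side so the file always receives a copy of every event. It assumes the Serilog.Settings.Configuration, Serilog.Sinks.MSSqlServer and Serilog.Sinks.File packages; the connection string, table name and path are placeholders, and the exact Args names vary slightly between sink versions:
{
  "Serilog": {
    "Using": [ "Serilog.Sinks.MSSqlServer", "Serilog.Sinks.File" ],
    "WriteTo": [
      {
        "Name": "MSSqlServer",
        "Args": {
          "connectionString": "Server=...;Database=Logs;User Id=...;Password=...;",
          "tableName": "Logs",
          "autoCreateSqlTable": true
        }
      },
      {
        "Name": "File",
        "Args": { "path": "logs/fallback-.txt", "rollingInterval": "Day" }
      }
    ]
  }
}
Sink failures are swallowed by default, so while investigating a true fallback setup it can help to enable Serilog.Debugging.SelfLog to see why the database writes are failing.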
I have created a simple ASP.NET Core MVC application using EF Core and SQL Server. On the Windows development machine it uses LocalDB. I am trying to deploy to Azure App Service (Linux). I have created an Azure SQL database. Deploying from Visual Studio 2019, I have set the database as a dependency. In the publish profile settings I have selected the Azure SQL connection string for the database context I am using. I have also checked the EF Migrations option, and on deployment the script successfully created the tables for the application. I can connect to Azure SQL and see the tables. However, when I run the deployed application and try a database operation, I get: PlatformNotSupportedException: LocalDB is not supported on this platform
I can see from the docs that there are various ways to set the connection string, but I would like to know what the publish wizard in Visual Studio 2019 is trying to do and why it is not working. I'm also unclear where the password is stored: in the publish profile the password appears in the connection string as plain text, which is not good. I'd like to know how to get this right for production.
Update: I have fixed this for the moment by following the steps in the Linux tutorial, using the Azure CLI and running the following command:
az webapp config connection-string set --resource-group [myResourceGroup] --name [app name] --settings MyDbConnection='[connection-string]' --connection-string-type SQLServer
I am not sure of the security of this approach and plan to investigate further.
The publish wizard simply handles the database creation/migration for you; it doesn't modify your project, as that's 1) not its purpose and 2) it can't make the configuration decision for you (e.g. appsettings, environment variables, etc.)
You need to provide the connection string in production via configuration, just as in development. Since you're deploying to an Azure App Service, the most logical place for that is the App Service's application settings in the Azure Portal. These are loaded in as environment variables. Simply use the same key you're using in development and point it at the production database there.
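For illustration, a sketch assuming the connection string is named MyDbConnection as in the CLI command above (server and database names are placeholders): in development it lives in appsettings.json or appsettings.Development.json, and in Azure you add either an App Setting with the key ConnectionStrings__MyDbConnection or a Connection String entry named MyDbConnection. Either way it reaches the app as an environment variable that the default ASP.NET Core configuration maps back onto ConnectionStrings:MyDbConnection, so Configuration.GetConnectionString("MyDbConnection") resolves the same key in both environments.
{
  "ConnectionStrings": {
    "MyDbConnection": "Server=(localdb)\\mssqllocaldb;Database=MyAppDb;Trusted_Connection=True;"
  }
}
Connection strings entered in the portal's Connection strings section (which is what the az webapp config connection-string command writes to) are exposed with a type prefix such as SQLCONNSTR_, and the configuration provider strips that prefix when mapping them into the ConnectionStrings section. This also keeps the password out of the publish profile.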
My Spring Cloud Data Flow server deleted the log files in the folder after I stopped it.
Why does SCDF do that, and how can I keep these log files?
You can customize the logging configuration in the logback config file and pass it as a configuration property to the SCDF server. Assuming you are trying this with the local Data Flow server, you can refer to this documentation for logback configuration.
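As an illustrative sketch (the appender name and file path are placeholders), a logback config that writes the server log to a file could look like the following, passed to the local server with the standard Spring Boot property --logging.config=/path/to/logback.xml:
<configuration>
  <!-- Write SCDF server logs to a file instead of (or in addition to) the console -->
  <appender name="FILE" class="ch.qos.logback.core.FileAppender">
    <file>logs/scdf-server.log</file>
    <append>true</append>
    <encoder>
      <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} %-5level %logger{36} - %msg%n</pattern>
    </encoder>
  </appender>
  <root level="INFO">
    <appender-ref ref="FILE" />
  </root>
</configuration>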
I have a Spring Cloud Data Flow (SCDF) server running on a Kubernetes cluster with Kafka as the message broker. Now I am trying to launch a Spring Cloud Task (SCT) that writes to a topic in Kafka. I would like the SCT to use the same Kafka that SCDF is using. This brings up two questions:
1. How do I configure the SCT to use the same Kafka as SCDF?
2. Is it possible to configure the SCT so that the Kafka server URI is passed to it automatically when it launches, similar to the data source properties that get passed to the SCT at launch?
As I could not find any examples of how to achieve this, any help is much appreciated.
Edit: My own answer
This is how I got it working for my case. My SCT requires spring.kafka.bootstrap-servers to be supplied. From SCDF's shell, I provide it as an argument --spring.kafka.bootstrap-servers=${KAFKA_SERVICE_HOST}:${KAFKA_SERVICE_PORT}, where KAFKA_SERVICE_HOST and KAFKA_SERVICE_PORT are environment variables created by SCDF's Kubernetes setup script.
This is how to launch the task within SCDF's shell:
dataflow:>task launch --name sample-task --arguments "--spring.kafka.bootstrap-servers=${KAFKA_SERVICE_HOST}:${KAFKA_SERVICE_PORT}"
You may want to review the Spring Cloud Task Events section in the reference guide.
The expectation is that you'd choose your preferred binder and package that library in the Task application's classpath. With that dependency in place, you'd then configure the application with Spring Cloud Stream's Kafka binder properties, such as spring.cloud.stream.kafka.binder.brokers and others relevant to connecting to the existing Kafka cluster.
Upon launching the Task application (from SCDF) with these configurations, you'd be able to publish or receive events in your Task app.
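For example, a sketch mirroring the launch command shown above, swapping in the binder property instead of spring.kafka.bootstrap-servers (the broker address is again taken from the Kubernetes environment variables):
dataflow:>task launch --name sample-task --arguments "--spring.cloud.stream.kafka.binder.brokers=${KAFKA_SERVICE_HOST}:${KAFKA_SERVICE_PORT}"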
Alternatively, with the Kafka binder in the classpath of the Task application, you can define the Kafka binder properties for all the Tasks launched by SCDF via global configuration. See Common Application Properties in the reference guide for more information. In this model, you don't have to configure each Task application with Kafka properties explicitly; instead, SCDF propagates them automatically when it launches the Tasks. Keep in mind that these properties will be supplied to all Task launches.
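A sketch of that global approach, assuming the task counterpart of the applicationProperties prefix described under Common Application Properties behaves like the stream one, set as a server configuration property (the broker address is a placeholder / the same Kubernetes environment variables as above):
spring.cloud.dataflow.applicationProperties.task.spring.cloud.stream.kafka.binder.brokers=${KAFKA_SERVICE_HOST}:${KAFKA_SERVICE_PORT}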
I am using the local server of Spring Cloud Data Flow. Each time I restart the server, all deployed apps and stream definitions are lost. How can I persist my stream definitions so that they survive server restarts?
As of RC1, the stream/task/job definitions, among other metadata, can be configured to persist in an RDBMS, and there's support for many of the commonly used databases. If nothing is provided, the default embedded H2 database is used, which is in-memory and recommended only for development purposes.
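A minimal sketch of pointing the local server at an external database instead of the embedded H2, using the standard Spring Boot datasource properties (the jar version, JDBC URL, credentials and driver class are placeholders; the driver must match your database and be on the classpath):
java -jar spring-cloud-dataflow-server-local-<version>.jar \
  --spring.datasource.url=jdbc:mysql://localhost:3306/dataflow \
  --spring.datasource.username=scdf \
  --spring.datasource.password=secret \
  --spring.datasource.driver-class-name=org.mariadb.jdbc.Driver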
I am able to configure an agent for Windows, but I am confused about the connectivity between the web server's logs and the agent.
1: How do I connect the web server with the agent?
2: While starting the flume.bat file, it generates a flume.log file in which I am getting the below mentioned exception:
org.apache.flume.conf.ConfigurationException: No channel configured for sink: hdfssink
at org.apache.flume.conf.sink.SinkConfiguration.configure(SinkConfiguration.java:51)
at org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.validateSinks(FlumeConfiguration.java:661)
1. The data flow is as below:
your application (or web server) --> source --> channel --> sink
Now, the data can flow from your web server to the source by either a "pull" or a "push" mechanism. In your case, you can either tail the web server logs or use a spooling directory source (see the config sketch below).
2. This looks like a misconfiguration issue: the exception says that no channel is bound to the sink named hdfssink. You need to post your config file to figure out the exact issue.
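For reference, a minimal sketch of a config that covers both points, assuming an agent named agent1, a spooling directory source watching a directory your web server's logs are dropped into, and placeholder paths and hostnames. The last line is the channel binding that the "No channel configured for sink: hdfssink" exception is complaining about:
agent1.sources = weblogs
agent1.channels = memch
agent1.sinks = hdfssink

# "Pull" the web server logs by watching a directory the rotated log files are moved into
agent1.sources.weblogs.type = spooldir
agent1.sources.weblogs.spoolDir = /var/log/webserver/spool
agent1.sources.weblogs.channels = memch

agent1.channels.memch.type = memory
agent1.channels.memch.capacity = 10000

agent1.sinks.hdfssink.type = hdfs
agent1.sinks.hdfssink.hdfs.path = hdfs://namenode:8020/flume/weblogs
# This binding is what the exception says is missing
agent1.sinks.hdfssink.channel = memch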