My Spring Cloud Data Flow server deleted the log files in its folder after I stopped it.
Why does SCDF do that, and how can I keep these log files?
You can customize the logging configuration in a logback config file and pass it as a configuration property to the SCDF server. Assuming you are trying this with the local Data Flow server, you can refer to its documentation for logback configuration.
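As a minimal sketch, since the server is a Spring Boot application you can point it at your own logback file via the standard logging.config property (the jar name, version, and paths below are placeholders; adjust them to your installation):

    # placeholder jar name/version and paths
    java -jar spring-cloud-dataflow-server-<version>.jar \
      --logging.config=/opt/scdf/conf/logback-spring.xml

In that logback file you can define a file appender that writes to a directory outside the server's working directory, so the logs survive stopping the server.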
Related
I am new to Spring Cloud Data Flow. I am trying to build a simple HTTP source and RabbitMQ sink stream using SCDF stream apps. The stream should be deployed on OSCF (Cloud Foundry). Once deployed, the stream should be able to receive HTTP POST requests and send the request data to RabbitMQ.
So far, I have downloaded the Data Flow Server using the link below and pushed it to Cloud Foundry. I am using the Shell application from my local machine.
https://dataflow.spring.io/docs/installation/cloudfoundry/cf-cli/.
I also have the HTTP source and RabbitMQ sink applications deployed in CF. The RabbitMQ service is also bound to the sink application.
My question - how can I create a stream using applications deployed in CF? Registering an app requires an HTTP/File/Maven URI, but I am not sure how an app already deployed on CF can be registered.
Appreciate your help. Please let me know if more details are needed.
Thanks
If you're using the out-of-the-box apps that we ship, the relevant Maven repo configuration is already set within SCDF, so you can already deploy the http app freely; SCDF will resolve and pull it from the Spring Maven repository and then deploy that application to CF.
However, if you're building custom apps, you can configure your internal/private Maven repositories in SCDF/Skipper and then register your apps using the coordinates from your internal repo.
If Maven is not a viable solution for you on CF, I have seen customers resolve artifacts from s3 buckets and persistent-volume services in CF.
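As a sketch of the Shell commands involved (the Maven coordinates and version below are placeholders; use the coordinates of the app release you target, or of your own artifacts in your internal repo):

    app register --name http --type source --uri maven://org.springframework.cloud.stream.app:http-source-rabbit:<version>
    app register --name rabbit --type sink --uri maven://org.springframework.cloud.stream.app:rabbit-sink-rabbit:<version>
    stream create --name http-to-rabbit --definition "http | rabbit" --deploy

Once the apps are registered this way, SCDF resolves the artifacts itself and pushes them to CF; you do not register the instances you pushed manually.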
I have a Keycloak docker image and I import the configuration of my realm from a JSON file. And it works, so far so good.
But in my configuration there is an LDAP provider, which doesn't have the right credentials (Bind DN and Bind Credentials). They are not included in the JSON for security reasons. So I have to manually insert the credentials in the Admin Console after startup.
I am now trying to find a secure way to automate that without exposing the credentials in clear text, so that we don't have to manually insert the credentials after each startup.
I thought about inserting them in the JSON file inside the container with a shell script or whatever and then importing the resulting file when starting keycloak. The problem is that the credentials would then be exposed in clear text in the JSON file inside the container. So anybody with access to the container would be able to see them.
I'm thinking about inserting the credentials into that JSON file from environment variables (these are securely stored in the GitLab runner and masked in the logs), starting Keycloak, and then removing the JSON file on the fly after Keycloak successfully starts, without exposing the credentials in any of the layers. But I couldn't find a way to do that.
Can anybody think of an idea of how this can be achieved?
Any help would be much appreciated.
A workaround is to bind your Keycloak instance to an external database with a persistent volume (examples from Keycloak here) and to change the migration strategy from OVERWRITE_EXISTING to IGNORE_EXISTING (documentation here) in your docker-compose, like this:
command: '-b 0.0.0.0 -Dkeycloak.migration.strategy=IGNORE_EXISTING'
This way, your configuration is persistent, so you only enter your LDAP credentials the first time and don't need complex pipeline operations.
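A minimal docker-compose sketch of that setup (it assumes the WildFly-based jboss/keycloak image with a Postgres side container; image tags, service names, and the secret variable are placeholders, and the environment variable names differ on the newer Quarkus-based images):

    version: "3"
    services:
      postgres:
        image: postgres:13
        environment:
          POSTGRES_DB: keycloak
          POSTGRES_USER: keycloak
          POSTGRES_PASSWORD: ${KC_DB_PASSWORD}   # injected from your CI/CD secrets
        volumes:
          - keycloak-db:/var/lib/postgresql/data   # persistent volume
      keycloak:
        image: jboss/keycloak:16.1.1
        environment:
          DB_VENDOR: postgres
          DB_ADDR: postgres
          DB_DATABASE: keycloak
          DB_USER: keycloak
          DB_PASSWORD: ${KC_DB_PASSWORD}
        # keep your existing realm import; IGNORE_EXISTING leaves the already-imported
        # realm (with the credentials you entered once) untouched on later startups
        command: '-b 0.0.0.0 -Dkeycloak.migration.strategy=IGNORE_EXISTING'
        depends_on:
          - postgres
    volumes:
      keycloak-db:

Because the realm now lives in the persistent Postgres volume, the Bind DN and Bind Credentials you enter once in the Admin Console survive container restarts.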
My web application works fine with the log4j2.xml file I created on an AWS EC2 instance. But now I containerized it and it's running in ECS Fargate. I can see Catalina logs in CloudWatch but not the application-specific logs that I configured in the log4j2.xml file. log4j2.xml is located at a specific path like /var/webapp/conf, and I've put that path in catalina.properties as shared.loader=/var/webapp/conf. Also, I see this ERROR in my Catalina logs:
ERROR StatusLogger No log4j2 configuration file found. Using default configuration: logging only errors to the console. Set system property 'log4j2.debug' to show Log4j2 internal initialization logging.
Note: I don't want to change Tomcat's default logging. I'm just trying to send my application logs to the console as well, so I can see all the logs in one CloudWatch log stream.
The log configuration is not being recognised by your Fargate task. The reason is that, with Fargate tasks, only specific logging drivers can be set up via the task definition.
Amazon ECS task definitions for Fargate support the awslogs, splunk, and awsfirelens log drivers for the log configuration.
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/AWS_Fargate.html
I recommend using the awslogs (CloudWatch Logs) log driver.
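A minimal sketch of the container's logConfiguration block in the task definition (the log group name, region, and stream prefix are placeholders):

    "logConfiguration": {
      "logDriver": "awslogs",
      "options": {
        "awslogs-group": "/ecs/my-webapp",
        "awslogs-region": "us-east-1",
        "awslogs-stream-prefix": "tomcat"
      }
    }

With this driver, anything the container writes to stdout/stderr ends up in the configured CloudWatch log group, which is why sending your log4j2 output to a console appender (rather than a file under /var/webapp) is the simplest way to get it into the same stream as the Catalina logs.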
I am currently running a Rails application and a Spring Boot configuration service in the same local network. Is it possible to configure Rails to use the config files provided by the Spring Boot service?
More specifically, I am looking to fetch the database connection and user data via the service and let Rails connect to a remote database.
The service provides these files via HTTP as JSON or YAML.
Thank you.
Edit: Solved it by using a bash script with wget to pull and assemble the config files manually, via container scripts that are executed before each deploy.
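A sketch of that kind of script, assuming a Spring Cloud Config Server reachable at config-server:8888 and an application name of myrailsapp (host, names, and paths are placeholders); the config server renders merged configuration as YAML at /{application}-{profile}.yml:

    #!/usr/bin/env bash
    set -euo pipefail

    # Pull the rendered YAML for the production profile from the config service
    wget -qO /tmp/myrailsapp-production.yml \
      "http://config-server:8888/myrailsapp-production.yml"

    # Assemble the Rails config from it before the app boots
    # (how the keys map into database.yml depends on how your properties are named)
    cp /tmp/myrailsapp-production.yml /app/config/settings/remote.yml

Rails can then read those values (for example via config_for or an initializer) when establishing the database connection.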
I am able to configure an agent for Windows, but I am confused about how to connect the web server's logs to the agent.
1: How do I connect the web server with the agent?
2: While starting the flume.bat file, it generates a flume.log file in which I get the below-mentioned exception.
org.apache.flume.conf.ConfigurationException: No channel configured for sink: hdfssink
at org.apache.flume.conf.sink.SinkConfiguration.configure(SinkConfiguration.java:51)
at org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.validateSinks(FlumeConfiguration.java:661)
1. The data flow is as below:
your application (or web server) --> source --> channel --> sink
Now, the data can flow from your web server to the source either by a "pull" mechanism or a "push" mechanism. In your case, you can either tail the web server logs or use a spooling directory source.
2. This looks like a misconfiguration issue. You would need to post your config file to pinpoint it, but the exception means that the sink named hdfssink has no channel bound to it.
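For reference, a minimal flume.conf sketch (the agent name, spool directory, and HDFS path are placeholders) that wires a spooling directory source to an HDFS sink through a channel; the last line is the kind of binding the exception says is missing:

    # placeholder names; adapt the agent name and paths to your setup
    agent1.sources  = weblogs
    agent1.channels = ch1
    agent1.sinks    = hdfssink

    # Spooling directory source: Flume picks up completed log files dropped here
    agent1.sources.weblogs.type     = spooldir
    agent1.sources.weblogs.spoolDir = C:/webserver/logs/spool
    agent1.sources.weblogs.channels = ch1

    # In-memory channel buffering events between source and sink
    agent1.channels.ch1.type     = memory
    agent1.channels.ch1.capacity = 10000

    # HDFS sink; note the channel binding on the last line
    agent1.sinks.hdfssink.type      = hdfs
    agent1.sinks.hdfssink.hdfs.path = hdfs://namenode:8020/flume/weblogs
    agent1.sinks.hdfssink.channel   = ch1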