Airflow application logs from Kubernetes pods to Elasticsearch - airflow-webserver

Hi, we are running Airflow hosted on Kubernetes, using the base image airflow-2.1.2-python-3.8.10. When we enable remote logging with our EFK log configuration, the pods crash. Please help us: we have been trying for the last six months and still have no success seeing our Airflow application logs in Kibana. Let me know if you need any information and I can provide it. I just want my Airflow application logs to be shipped to EFK so they can be visualized in Kibana. :)
Thank you
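
For anyone landing here, one detail that often trips this setup up: Airflow's Elasticsearch integration only reads task logs back from Elasticsearch; the shipping itself is delegated to the EFK collector (Fluentd or Filebeat tailing the pods' stdout). A minimal airflow.cfg sketch under that assumption (the Elasticsearch hostname is illustrative):

  [logging]
  remote_logging = True

  [elasticsearch]
  # Where the webserver reads task logs back from
  host = elasticsearch:9200
  # Emit task logs as JSON on stdout so Fluentd/Filebeat can ship them
  write_stdout = True
  json_format = True
  json_fields = asctime, filename, lineno, levelname, message

The same settings can be passed as environment variables (AIRFLOW__LOGGING__REMOTE_LOGGING, AIRFLOW__ELASTICSEARCH__HOST, and so on), which is usually easier on Kubernetes. If the pods crash right at startup, checking the scheduler/webserver logs for an import error in a custom logging_config_class is a reasonable first step.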

Related

Setting up Elastic Enterprise Search and App Search - Docker - ELK

I'm trying to set up Elastic Enterprise Search and App Search using Docker. So far I have managed to install Elasticsearch and Kibana using Docker on CentOS 7. Right now, I want to establish a connection with GitHub, for which I'll need Enterprise Search. I opened the page, but it's prompting me to "Add your Workplace Search host URL to your Kibana configuration - enterpriseSearch.host: 'http://localhost:3002'".
I didn't quite understand how to do that, and I'm stuck. Can anyone please provide some step-by-step instructions?
As per the Elasticsearch documentation:
enterpriseSearch.host | The URL of your Enterprise Search instance
I am looking at this step as well. To configure the Kibana Docker container, you can either pass environment variables to the container as you run it (usually by making use of a docker-compose.yml file), or you can pass it a kibana.yml file on the command line.
Reference:
https://www.elastic.co/guide/en/kibana/current/docker.html#configuring-kibana-docker
It's worth noting that if you are running Elasticsearch on Docker following the same instructions as me, you have not opened port 3002 when launching it; that may need to be fixed by changing the run command to include -p 3002:3002. A minimal compose sketch follows.
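
For example, a minimal docker-compose.yml sketch of the environment-variable route, assuming 7.x images and that all services share the compose network (the service names and version tag are illustrative):

  version: "3"
  services:
    kibana:
      image: docker.elastic.co/kibana/kibana:7.14.0
      environment:
        ELASTICSEARCH_HOSTS: http://elasticsearch:9200
        # Translated by the image into enterpriseSearch.host in kibana.yml
        ENTERPRISESEARCH_HOST: http://enterprise-search:3002
      ports:
        - "5601:5601"
    enterprise-search:
      image: docker.elastic.co/enterprise-search/enterprise-search:7.14.0
      ports:
        - "3002:3002"
      # Enterprise Search's own required settings (Elasticsearch URL,
      # encryption keys, etc.) are omitted here for brevity

The alternative is bind-mounting a kibana.yml that contains enterpriseSearch.host: 'http://enterprise-search:3002' over /usr/share/kibana/config/kibana.yml.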

Deploying WSO2 in Docker, I'm unable to get to the Analytics Carbon page

Deployed via the Docker Hub instructions to VMware containers.
https://github.com/wso2/docker-ei/tree/master/dockerfiles/alpine
When accessing https://dockerhost:9444/carbon I get the following error:
Problem accessing: /carbon. Reason: Not Found
I re-deployed the Analytics worker container but that did not change anything.
Any help would be excellent. I'm trying to get this working in the next day or two.
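
If it helps while debugging, a generic sketch for checking what the containers actually expose (the container name is a placeholder; substitute whatever docker ps reports):

  # Confirm 9444 is actually published on the Docker host
  docker ps --format '{{.Names}}\t{{.Ports}}'

  # Tail the Analytics container's startup output and look for which
  # webapps and ports it actually deploys
  docker logs -f <analytics-container-name>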

What's the best practice for Docker logging?

I'm using Docker with my web service.
When I deploy using Docker, I lose some log files (nginx access log, service log, system log, etc.), because the Docker deployment brings containers down and up.
So I thought about this problem: the logging server and the service server (for the API) must be separated.
I'm considering these methods:
First, using Logstash (in ELK), attached to all my log files.
Second, using a batch system that moves the log files to another server every midnight.
Is this okay? I'd welcome a better answer.
Thanks.
There are many approaches that admins commonly use for logging with containers (a short sketch follows the list):
1) Mount the log directory to the host, so even if the container goes down and up, logs are persisted on the host.
2) An ELK server, using Logstash/Filebeat to push logs to the Elasticsearch server by tailing the files, so new log content is pushed to the server as it appears.
3) For application logs, e.g. in Maven-based projects, there are many plugins that push logs to a server.
4) A batch system, which is not recommended, because if a container dies before midnight the logs will be lost.
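
As a concrete sketch of options 1) and 2), assuming Filebeat 7.x and an Elasticsearch node reachable as elasticsearch:9200 (the image and host names are assumptions):

  # 1) Bind-mount the container's log directory onto the host
  docker run -d -v /host/logs/nginx:/var/log/nginx my-web-image

  # 2) filebeat.yml tailing the Docker JSON log files and shipping them on
  filebeat.inputs:
    - type: container
      paths:
        - /var/lib/docker/containers/*/*.log
  output.elasticsearch:
    hosts: ["http://elasticsearch:9200"]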

Connect to Phoenix HBase deployed as a Docker image through the SQuirreL client remotely

I've deployed HBase (standalone), ZooKeeper, and Phoenix as a Docker image on a virtual host. The image started successfully without any issues. After some changes in the config file, I could also connect to HBase using Phoenix via ./sqlline.py 127.0.0.1:2181:/hbase-unsecure inside the Docker container. After successfully creating a table and testing some sample queries, I tried to connect through the SQuirreL client from my Windows machine, which throws a TimeoutException.
For info, the required HBase client JAR and Phoenix JAR have been copied to the SQuirreL client.
Error in the SQuirreL client app:
java.util.concurrent.TimeoutException
at java.util.concurrent.FutureTask.get(FutureTask.java:205)
at net.sourceforge.squirrel_sql.client.mainframe.action.OpenConnectionCommand.awaitConnection(OpenConnectionCommand.java:132)
at net.sourceforge.squirrel_sql.client.mainframe.action.OpenConnectionCommand.access$100(OpenConnectionCommand.java:45)
at net.sourceforge.squirrel_sql.client.mainframe.action.OpenConnectionCommand$2.run(OpenConnectionCommand.java:115)
at
Any help on how to connect to Phoenix remotely would be appreciated.
Thank you!!
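
A sketch of the pieces that usually matter for this kind of timeout (the host names, ports published, and container hostname are assumptions): the JDBC URL only points SQuirreL at ZooKeeper, and ZooKeeper then hands back the HBase master and region server addresses, so those also have to be reachable from the Windows machine.

  # SQuirreL driver class: org.apache.phoenix.jdbc.PhoenixDriver
  # JDBC URL (same triplet sqlline uses):
  jdbc:phoenix:<docker-host-ip>:2181:/hbase-unsecure

  # The container must publish ZooKeeper *and* the HBase master/region
  # server ports, since ZooKeeper replies with their addresses:
  docker run -d -p 2181:2181 -p 16000:16000 -p 16020:16020 --hostname hbase-docker <image>

  # If ZooKeeper returns the container hostname, map it on the Windows side
  # (C:\Windows\System32\drivers\etc\hosts):
  <docker-host-ip>  hbase-docker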

How can I launch the Kafka scheduler using Marathon in minimesos?

I'm trying to launch the kafka-mesos framework scheduler using the Docker container as prescribed at https://github.com/mesos/kafka/tree/master/src/docker#running-image-in-marathon, using the Marathon implementation running in minimesos (I would like to add a minimesos tag, but don't have the points). The app is registered and can be seen in the Marathon console, but it remains in Waiting state, and the Deployment GUI says that it is trying to ScaleApplication.
I've tried looking for /var/log files in the marathon and mesos-master containers that might show why this is happening. Initially I thought it may have been because the image was not pulled, so I added "forcePullImage": true to the JSON app configuration (roughly as sketched below), but it still waits. I've also changed the networking from HOST to BRIDGE, on the assumption that this is consistent with the minimesos caveats at http://minimesos.readthedocs.org/en/latest/ .
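
For reference, the shape of the Marathon app definition in question, as a minimal sketch with illustrative values (the actual definition follows the kafka-mesos README):

  {
    "id": "/kafka-mesos-scheduler",
    "cpus": 0.5,
    "mem": 512,
    "instances": 1,
    "container": {
      "type": "DOCKER",
      "docker": {
        "image": "mesos/kafka",
        "network": "BRIDGE",
        "forcePullImage": true
      }
    }
  }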
In the Mesos log I do see:
I0106 20:07:15.259790 15 master.cpp:4967] Sending 1 offers to framework 5e1508a8-0024-4626-9e0e-5c063f3c78a9-0000 (marathon) at scheduler-575c233a-8bc3-413f-b070-505fcf138ece#172.17.0.6:39111
I0106 20:07:15.266100 9 master.cpp:3300] Processing DECLINE call for offers: [ 5e1508a8-0024-4626-9e0e-5c063f3c78a9-O77 ] for framework 5e1508a8-0024-4626-9e0e-5c063f3c78a9-0000 (marathon) at scheduler-575c233a-8bc3-413f-b070-505fcf138ece#172.17.0.6:39111
I0106 20:07:15.266633 9 hierarchical.hpp:1103] Recovered ports(*):[33000-34000]; cpus(*):1; mem(*):1001; disk(*):13483 (total: ports(*):[33000-34000]; cpus(*):1; mem(*):1001; disk(*):13483, allocated: ) on slave 5e1508a8-0024-4626-9e0e-5c063f3c78a9-S0 from framework 5e1508a8-0024-4626-9e0e-5c063f3c78a9-0000
I0106 20:07:15.266770 9 hierarchical.hpp:1140] Framework 5e1508a8-0024-4626-9e0e-5c063f3c78a9-0000 filtered slave 5e1508a8-0024-4626-9e0e-5c063f3c78a9-S0 for 2mins
I0106 20:07:16.261010 11 hierarchical.hpp:1521] Filtered offer with ports(*):[33000-34000]; cpus(*):1; mem(*):1001; disk(*):13483 on slave 5e1508a8-0024-4626-9e0e-5c063f3c78a9-S0 for framework 5e1508a8-0024-4626-9e0e-5c063f3c78a9-0000
I0106 20:07:16.261245 11 hierarchical.hpp:1326] No resources available to allocate!
I0106 20:07:16.261335 11 hierarchical.hpp:1421] No inverse offers to send out!
but I'm not sure if this is relevant, since it does not correlate to the resource settings in the Kafka app config (the DECLINE lines appear to be Marathon itself turning down the offers). The GUI shows that no tasks have been created.
I do have ten mesosphere/inky Docker tasks running alongside the attempted Kafka deployment. This may be a configuration issue specific to the Kafka Docker image; I just don't know the best way to debug it. Perhaps it's a case of increasing the log levels in a config file, or it may be an environment variable or network setting. I'm digging into it and will update my progress, but any suggestions would be appreciated.
thanks!
Thanks for trying this out! I am looking into this and you can follow progress on this issue at https://github.com/ContainerSolutions/minimesos/issues/188 and https://github.com/mesos/kafka/issues/172
FYI I got Mesos Kafka installed on minimesos via a quickstart shell script. See this PR on Mesos Kafka https://github.com/mesos/kafka/pull/183
It does not use Marathon and the minimesos install command yet. That is the next step.
