Send Docker Entrypoint logs to APP in realtime - docker

I'm looking for ideas on how to send the Docker logs for each run to my application in realtime, and I'd like to hear about the ways this can be done.
Let me know if you have done this already or know how it can be achieved. I want to build a feature similar to Netlify or Vercel, where they show you all the build logs on the UI in realtime. I want something similar for my Node application.

You can achieve this with Vercel and Log Drains.
Log Drains make it easy to collect logs from your deployments and forward them to archival, search, and alerting services by sending them via HTTPS, HTTP, TLS, and TCP once a new log line is created.
At the time of writing, we support three types of Log Drains:
JSON
NDJSON
Syslog
Along with Log Drains, we are introducing two new open-source integrations with logging services for you to start using them today: LogDNA and Datadog.
Install the integration: https://vercel.com/integrations?category=logging
See the announcement blog post: https://vercel.com/blog/log-drains
Note that Vercel does not allow Docker deployments, but does support Serverless Functions.
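For the original goal of showing logs on a UI in realtime, the mechanics behind a Log Drain are straightforward: every new log line is forwarded over HTTP(S) to an endpoint you control, with NDJSON being the easiest format to parse line by line. Below is a minimal sketch of such a receiver in Node/TypeScript that relays the lines to connected browsers over a WebSocket; the port numbers, the /drain path, and the use of Express and the ws package are my own assumptions, not part of Vercel's integration.

```typescript
// Minimal NDJSON log receiver sketch (assumed endpoint /drain on port 3000).
// Each POSTed NDJSON line is parsed and broadcast to any browser connected
// over WebSocket, so a UI can render build/deploy logs as they arrive.
import express from "express";
import { WebSocketServer, WebSocket } from "ws";

const app = express();
// Read the raw NDJSON body; each line is an independent JSON object.
app.use(express.text({ type: ["application/x-ndjson", "text/plain"] }));

const wss = new WebSocketServer({ port: 3001 });

function broadcast(line: string): void {
  for (const client of wss.clients) {
    if (client.readyState === WebSocket.OPEN) client.send(line);
  }
}

app.post("/drain", (req, res) => {
  const body: string = typeof req.body === "string" ? req.body : "";
  for (const line of body.split("\n")) {
    if (!line.trim()) continue;
    try {
      const entry = JSON.parse(line); // one log event per NDJSON line
      broadcast(JSON.stringify({ message: entry.message ?? line, raw: entry }));
    } catch {
      broadcast(line); // forward non-JSON lines as-is
    }
  }
  res.status(200).end();
});

app.listen(3000, () => console.log("Log drain receiver listening on :3000"));
```

A browser UI would then open a WebSocket to port 3001 and append each message to its log view as it arrives.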

Related

Server timeout and sftp timeout. What to do?

For the past 12 hours, the website (a WordPress site) hosted on Google Cloud Platform has had a timeout issue. After 60 seconds of trying to load the website, the following message appears: "The connection has timed out".
When trying to connect with SFTP, I get the same issue.
What should I do to resolve this?
Since two different services stopped working at the same time, it sounds like a networking issue. There is a timeout, which means the server is not answering the requests at all.
What to do?
I would proceed with these general troubleshooting steps; if you want, you can update your question with the results of these commands to continue the troubleshooting.
First of all, I would check whether you are able to ping the external/public IP of the instance.
I would check whether the firewall rules allow TCP 80/443 and TCP 22 (a quick reachability check is sketched after these steps). Notice that on GCP you need to create the rule and assign its tag to the machine from its detail page if the rule does not apply to the whole network.
Are you able to ssh into the instance?
I would check whether the processes are actually listening: netstat -tuplen
If you are able to log into the machine, do you have access to the internet? Are you able to ping an external IP? If not, what about an internal IP?
I would go to the "activity" page of your Google Cloud Console to check which actions have been taken while the instance was still running.
I would also check the command history on the Linux machine to see whether you ran any commands that changed its network configuration.
Note that if you cannot SSH into the machine, you can always access it through the serial console by setting a password for your username with a startup script.
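As a companion to the firewall check above, here is a rough sketch of a TCP reachability test for ports 22, 80, and 443 on the instance's external IP, using only Node's built-in net module (the IP address is a placeholder). A timeout or refused connection only tells you the port is not reachable from where you run it, not why.

```typescript
// Quick reachability check for the ports mentioned above (22, 80, 443),
// using only Node's built-in net module. The instance IP is a placeholder.
import net from "node:net";

function checkPort(host: string, port: number, timeoutMs = 5000): Promise<boolean> {
  return new Promise((resolve) => {
    const socket = net.connect({ host, port });
    const done = (ok: boolean) => { socket.destroy(); resolve(ok); };
    socket.setTimeout(timeoutMs, () => done(false));
    socket.once("connect", () => done(true));
    socket.once("error", () => done(false));
  });
}

const host = "203.0.113.10"; // replace with the instance's external IP

for (const port of [22, 80, 443]) {
  checkPort(host, port).then((open) =>
    console.log(`${host}:${port} ${open ? "reachable" : "not reachable (timeout/refused)"}`)
  );
}
```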
UPDATE
I had the chance to take a look at the project; the machine was stopped because of an issue with the billing account (it was closed) after the free trial period ended.
I would suggest going through the documentation regarding upgrading the billing account again.
If you still have doubts or questions after performing these operations, you can file a case at this link with the billing team and they will help you solve the issue.

Twilio IP Messaging token issue

I'm setting up an iOS app to use the IP Messaging and video calling APIs. I'm able to connect, create channels and set up a video call if I manually create hard-coded tokens for the app. However, if I want to use the PHP server (as described here https://www.twilio.com/docs/api/ip-messaging/guides/quickstart-ios) then I always get an error and it can't connect anymore.
I'm attaching a screenshot of what I see when I hit the http://localhost:8080 address, which seems to produce a 500 Internal Server Error on this URL: https://cds.twilio.com/v2/Streams
Thanks so much!
After much time spent on this I decided to try the Node backend instead - it's listed alongside PHP under the other server-side languages - and I had it running in 2 minutes! I used the exact same credentials as the ones I was using in the PHP config file, so either my PHP environment has something strange or the PHP backend needs some fixing. In any case, I'm able to move forward using the Node backend, so if you run into the same issue just try Node instead of PHP. woohoo!
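For anyone comparing the two backends, the Node quickstart essentially boils down to a small token endpoint. Below is a rough sketch using the Twilio Node helper library's AccessToken and ChatGrant classes; the environment variable names and the port are placeholders, the exact grant class name may differ between SDK versions, and a video grant would be added the same way for the video calling part.

```typescript
// Sketch of a Node token endpoint for Twilio IP Messaging (Programmable Chat).
// Credential env var names and the port are placeholders for illustration.
import express from "express";
import twilio from "twilio";

const AccessToken = twilio.jwt.AccessToken;
const ChatGrant = AccessToken.ChatGrant;

const app = express();

app.get("/token", (req, res) => {
  const identity = String(req.query.identity ?? "guest");

  // Same Account SID / API key / API secret you would put in the PHP config.
  const token = new AccessToken(
    process.env.TWILIO_ACCOUNT_SID as string,
    process.env.TWILIO_API_KEY as string,
    process.env.TWILIO_API_SECRET as string,
    { identity }
  );

  // Grant access to the IP Messaging / Chat service.
  token.addGrant(new ChatGrant({ serviceSid: process.env.TWILIO_CHAT_SERVICE_SID }));

  res.json({ identity, token: token.toJwt() });
});

app.listen(8080, () => console.log("Token server on http://localhost:8080"));
```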

Are any HTTP log files saved on my system by default?

I have an application hosted on Amazon EC2 on a Ubuntu machine, written in Ruby (on Rails), deployed with Capistrano and running on Nginx.
Last Friday one module of my application crashed and nobody in the company noticed until this morning. We spent some money on Facebook and Google ads and received a few hundred visits, but nobody created an account due to this bug.
I wonder whether this configuration saves the HTTP requests and their bodies somewhere in a log file. We didn't explicitly set this up, so it would only happen if any of these technologies does it by default.
Do you know whether there is such log or not?
Nope, that wouldn't be anywhere in a usable form (I'm inferring you want to try to create the accounts from request bodies in log files). You'll have the requests themselves in your nginx logs, and the Rails logs will contain more info about each request, but as a matter of security, by default, any sensitive information (e.g. passwords) is scrubbed from them. You may still be able to get some info from them.
To answer your question a little more specifically, the usual place for these logs on your system would be:
/var/log/nginx/
/path/to/your/rails/app/log/production.log
On a separate note, I would recommend looking into an error reporting service like Honeybadger, Airbrake, Raygun, Appsignal, or others so that you don't have silent failures like this moving forward.
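If you just want to see which visitors attempted the broken signup flow, one option is to scan the nginx access log for the relevant requests. The sketch below assumes the default combined log format and a hypothetical /users POST route; since request bodies are not written to the access log, it only tells you that attempts happened and from where.

```typescript
// Sketch: scan nginx's access log for POST requests to a hypothetical signup path.
// Assumes the default "combined" log format and the default log location.
import { createReadStream } from "node:fs";
import { createInterface } from "node:readline";

const LOG_FILE = "/var/log/nginx/access.log";
const SIGNUP_PATH = "/users"; // hypothetical: whatever route creates accounts

// combined format: ip - user [time] "METHOD path HTTP/x" status bytes "referer" "agent"
const LINE = /^(\S+) \S+ \S+ \[([^\]]+)\] "(\S+) (\S+) [^"]*" (\d{3})/;

async function main(): Promise<void> {
  const rl = createInterface({ input: createReadStream(LOG_FILE) });
  for await (const line of rl) {
    const m = LINE.exec(line);
    if (!m) continue;
    const [, ip, time, method, path, status] = m;
    if (method === "POST" && path.startsWith(SIGNUP_PATH)) {
      console.log(`${time}  ${ip}  ${method} ${path} -> ${status}`);
    }
  }
}

main().catch((err) => console.error(err));
```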

How to generate the URL which shows the live logs in mesos for a job

I am developing a UI in which I need to show the live logs (stdout and stderr) of jobs running on a Mesos slave. I am trying to find a way to generate a URL that will point to the Mesos logs for the job. Is there a way to do this? Basically, I need to know the slave ID, executor ID, master ID, etc. to generate the URL. Is there a way to find this information?
The sandbox URL is of the form http://$slave_url:5050/read.json?$work_dir/work/slaves/$slave_id/frameworks/$framework_id/executors/$executor_id/runs/$container_id/stdout, and you can even use the browse.json endpoint to browse around within the sandbox.
Alternatively, you can use the mesos tail $task_id CLI command to access these logs.
For more details, see the following mailing list thread: http://search-hadoop.com/m/RFt15skyLE/Accessing+stdout%252Fstderr+of+a+task+programmattically
How about using the reverse approach? You need to present live logs from stderr and stdout, so how about storing them outside of the Mesos slave, e.g. in Elasticsearch? You will get nearly live updates, old logs available afterwards, and nice search options.
From version 0.27.0, Mesos supports ContainerLogger. You can write your own ContainerLogger implementation that pushes logs to a central log repository (Graylog, Logstash, etc.) and then expose them in your UI.
Mesos offers a REST interface where you can get the information you want. Visit http://<MESOS_MASTER_IP>:5050/help with your browser (using the default port) to check the options you have to query (for example, you can get the information you need from http://<MESOS_MASTER_IP>:5050/master/state.json). Check this link to see an example of using it.
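Pulling those answers together, here is a rough sketch of resolving the IDs from the master's state.json, asking the agent for the executor's sandbox directory, and building a files/read.json URL for stdout. The endpoint paths and field names follow older Mesos releases and should be treated as assumptions to verify against your cluster; the master address and task ID are placeholders.

```typescript
// Sketch: build a live-log URL for a task by walking the Mesos HTTP endpoints.
// Endpoint paths (/master/state.json, /state.json on the agent, /files/read.json)
// and field names follow older Mesos releases; verify them against your version.
// Uses the global fetch available in Node 18+.
const MASTER = "http://mesos-master.example.com:5050"; // placeholder master address
const TASK_ID = "my-task-id";                          // placeholder task ID

interface MasterState {
  frameworks: { id: string; tasks: { id: string; slave_id: string; executor_id: string }[] }[];
  slaves: { id: string; hostname: string }[];
}

interface SlaveState {
  frameworks: { id: string; executors: { id: string; directory: string }[] }[];
}

async function sandboxStdoutUrl(): Promise<string> {
  const master: MasterState = await (await fetch(`${MASTER}/master/state.json`)).json();

  // Find the framework/task that matches our task ID.
  for (const fw of master.frameworks) {
    const task = (fw.tasks ?? []).find((t) => t.id === TASK_ID);
    if (!task) continue;

    const slave = master.slaves.find((s) => s.id === task.slave_id);
    if (!slave) break;

    // Ask the agent for the executor's sandbox directory (default agent port 5051).
    const agentBase = `http://${slave.hostname}:5051`;
    const agent: SlaveState = await (await fetch(`${agentBase}/state.json`)).json();
    const executorId = task.executor_id || task.id; // command executors reuse the task ID
    const executor = agent.frameworks
      .find((f) => f.id === fw.id)
      ?.executors.find((e) => e.id === executorId);
    if (!executor) break;

    return `${agentBase}/files/read.json?path=${encodeURIComponent(executor.directory + "/stdout")}`;
  }
  throw new Error(`Task ${TASK_ID} not found in master state`);
}

sandboxStdoutUrl().then(console.log).catch(console.error);
```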

Is it possible to have Centralised Logging for ElasticBeanstalk Docker apps?

We have a custom Docker web app running in an Elastic Beanstalk Docker container environment.
We would like the application logs to be available for viewing externally, without downloading them through the instances or the AWS console.
So far none of the solutions below has been acceptable. Maybe someone has achieved centralised logging for Elastic Beanstalk Dockerized apps?
Solution 1: AWS Console log download
Not acceptable: it requires downloading and extracting the logs every time, and it is not real-time.
Solution 2: S3 + Elasticsearch + Fluentd
Fluentd does not have a plugin to retrieve logs from S3.
There is an excellent S3 plugin, but it is only for outputting logs to S3, not for reading logs from S3.
Solution 3: S3 + Elasticsearch + Logstash
Cons: it can only pull all the logs from the entire bucket, or nothing.
The problem lies with the Elastic Beanstalk S3 log storage structure. You cannot specify a file name pattern; it's either all logs or nothing.
Elastic Beanstalk saves logs on S3 in a path containing random instance and environment IDs:
s3.bucket/resources/environments/logs/publish/e-<random environment id>/i-<random instance id>/my.log
The Logstash s3 plugin can only be pointed at resources/environments/logs/publish/. When you try to point it at environments/logs/publish/*/my.log it does not work.
This means you cannot pull a particular log and tag/type it so it can be found in Elasticsearch. Since AWS saves logs from all your environments and instances in the same folder structure, you cannot even choose the instance.
Solution 4: AWS CloudWatch Console log viewer
It is possible to forward your custom logs to the CloudWatch console. To achieve that, put configuration files in the .ebextensions path of your app bundle:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/AWSHowTo.cloudwatchlogs.html
There's a file called cwl-webrequest-metrics.config which allows you to specify log files along with alerts, etc.
Great!? Except that the configuration file format is neither YAML, XML, nor JSON, and it is not documented. There are absolutely zero mentions of that file or its format on the AWS documentation website or anywhere else on the net.
And getting one log file to appear in CloudWatch is not simply a matter of adding a configuration line.
The only possible way to get this working seems to be trial and error. Great!? Except that for every attempt you need to re-deploy your environment.
There's only one reference to how to make this work with a custom log: http://qiita.com/kozayupapa/items/2bb7a6b1f17f4e799a22 - I have no idea how that person reverse engineered the file format.
cons:
CloudWatch does not seem to be able to split logs into columns when displaying them, so you can't easily filter by priority, etc.
The AWS console log viewer does not have auto-refresh to follow logs.
A nightmare of an undocumented configuration file format, with no way of testing; trial and error requires re-deploying the whole environment.
Perhaps an AWS Lambda function is applicable?
Write some javascript that dumps all notifications, then see what you can do with those.
After an object is written, you could rename it within the same bucket?
Or notify your own log-management service about the creation of a new object?
Lots of possibilities there...
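In that spirit, here is a minimal sketch of the Lambda approach: subscribe a function to the bucket's ObjectCreated notifications, dump each notification, and fetch the newly published log object so you can forward it wherever you like. The forwarding target is left as a placeholder; the event shape is the standard S3 notification payload.

```typescript
// Sketch of an AWS Lambda handler that reacts to Elastic Beanstalk publishing a
// log object to S3. It dumps the notification and fetches the object's contents;
// forwarding it to your log-management service is left as a placeholder.
import { S3Event } from "aws-lambda";
import { S3Client, GetObjectCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({});

export const handler = async (event: S3Event): Promise<void> => {
  for (const record of event.Records) {
    const bucket = record.s3.bucket.name;
    const key = decodeURIComponent(record.s3.object.key.replace(/\+/g, " "));
    console.log(`New log object: s3://${bucket}/${key}`);

    const obj = await s3.send(new GetObjectCommand({ Bucket: bucket, Key: key }));
    const body = await obj.Body?.transformToString();

    // Placeholder: POST the log contents to your own log-management endpoint,
    // rename/copy the object, or push it into Elasticsearch, etc.
    console.log(body?.slice(0, 500));
  }
};
```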
I've started using Sumologic for the moment. There's a free trial and then a free tier (500 MB/day, 7-day retention). I'm not out of the trial period yet and my EB app does literally nothing (it's just a few HTML pages served by Nginx in a Docker container). It looks like it could get expensive once you hit any serious amount of logs, though.
It works OK so far. You need to create an IAM user that has access to the S3 bucket you want to read from, and then it sucks the logs over to the Sumologic servers and does all the processing and searching over there. A bit fiddly to set up, but I don't really see how it could be simpler, and it's reasonably well documented.
It lets you provide different path expressions with wildcards, then assign a "sourceCategory" to those different paths. You then use those sourceCategories to filter your log searching to a specific type of logging.
My plan long-term is to use something like your solution 3, but this got me going in very short order so I can move on to other things.
You can use a multicontainer environment, sharing the log folder with another Docker container that runs the tool of your preference to centralize the logs; in our case we connected Apache Flume to move the files to HDFS. Hope this helps.
The easiest method I found to do this was using Papertrail via rsyslog and .ebextensions; however, it is very expensive for logging everything.
The good part is that with rsyslog you can essentially send your logs anywhere, and you are not tied to Papertrail.
example ebextension
I've found Loggly to be the most convenient.
It is a hosted service, which might not be what you want. However, if you check out their setup page you can see a number of ways your situation is supported (Docker-specific solutions, as well as roughly 10 Amazon-specific options). Even if Loggly isn't to your taste, you can look at those solutions and easily see how some of them could be applied to almost any centralized logging solution you might use or write.
