I have a simple service running that doesn't log at all. The logs view currently shows 31.1 GB of logs and is growing fast. What's going on?
This number represents the size of all logs for all services across your project. The Cloud Run logging page is scanning all logs and filtering for logs from the Cloud Run resource.
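To see only your own service's entries, you can filter on the Cloud Run resource type. As an illustration, with gcloud (the service name below is a placeholder):
gcloud logging read 'resource.type="cloud_run_revision" AND resource.labels.service_name="YOUR-SERVICE"' --limit=20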
We have a Fargate service running. On CloudWatch we can see the ECS/ContainerInsights->StorageWriteBytes metric growing every hour, and at some point it will stop increasing, probably because we are out of disk space. We will start to see logging errors if we do not force a new deployment of the ECS service. The error looks like:
error: org.apache.logging.log4j.core.appender.AppenderLoggingException: Error
writing to RandomAccessFile /apollo/env/ReaverFeatureGating/var/output/logs/application.log.%d{yyyy-MM-dd-HH}
Questions:
Is this normal for all Fargate services? Did we set something up wrong?
Can we remove all the AmazonRollingRandomAccessFile appenders and just use STDOUT in log4j2-container.xml? Will that still post our events to CloudWatch, just without writing to the disk?
After some research this is what I got:
Because the default template includes AmazonRollingRandomAccessFile, the logs are generated locally but never cleaned up. There are some suggestions about adding a cron job to delete the logs, but in our case we don't need the local logs at all.
Yes, CloudWatch just needs STDOUT.
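As a rough sketch, a STDOUT-only log4j2-container.xml could look something like the following (the pattern and root level here are assumptions, not values from your template):
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="WARN">
  <Appenders>
    <!-- Write everything to STDOUT; the container's log driver forwards it to CloudWatch -->
    <Console name="STDOUT" target="SYSTEM_OUT">
      <PatternLayout pattern="%d{ISO8601} [%t] %-5level %logger{36} - %msg%n"/>
    </Console>
  </Appenders>
  <Loggers>
    <Root level="info">
      <AppenderRef ref="STDOUT"/>
    </Root>
  </Loggers>
</Configuration>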
Also, StorageWriteBytes only represents how many bytes are read from or written to storage. It is not equal to the used disk space. To monitor disk space, we can build the CloudWatch Agent into the container image and then use the disk_used metric.
https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/metrics-collected-by-CloudWatch-agent.html
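As a sketch, the part of the agent configuration that collects disk usage looks roughly like this (the mount point and collection interval are assumptions):
{
  "metrics": {
    "metrics_collected": {
      "disk": {
        "measurement": ["used", "used_percent"],
        "resources": ["/"],
        "metrics_collection_interval": 60
      }
    }
  }
}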
I have a custom Docker image based on 7.4-Apache that is being used on an f1 instance type in Compute Engine. I deployed it successfully and my website is reachable, but after around 30 minutes or less the health check times out and then the container crashes.
I tried to see if there are any logs to investigate whether this is an application issue or something else.
Are there any logs where I can see what's going on, and if not, how can I add logging?
Since I don't know your exact configuration, I can only point you to some documentation at this point.
First have a look at the Cloud Audit Logs documentation - it describes how to view logs and find what you need. You will find more here about viewing audit logs.
Try looking for related logs in the Logs Viewer.
Also have a look at how to construct a query to extract the information you're looking for.
If you provide more details about your configuration, I might be able to give a more precise answer.
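As a starting point, something like the following pulls recent error-level entries for Compute Engine instances from the command line (the filter is only an example; narrow it down to your instance):
gcloud logging read 'resource.type="gce_instance" AND severity>=ERROR' --limit=50 --format=json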
I have a few processes on my machine that I would like to have constantly running. I like however, how Jenkins organizes the jobs logging and I can go and see a build executing and see its STDOUT in realtime.
Would it be an issue to have a job that never finishes? I've heard that over time there would be interruptions. Is there a better tool for something like this? I would basically love to be able to see the output from a web-based view of the tool (and add hooks on failures).
For example, if I were hosting a Node.js site, I would want to be able to see the output of people connecting to the website, or whatever is logged by the site. Ideally, the process would run constantly for as long as you want to run the server.
Last week I installed the Docker/Kubernetes-based version of Spring Cloud Data Flow.
Although there were no overt errors, things are not working correctly.
I am able to create streams and tasks in the web UI and Spring Cloud Data Flow Shell but nothing runs.
I am most interested in Tasks.
When I create them, they all show with a Task Status of UNKNOWN.
Unfortunately, no matter how many times I launch them, the status always remains UNKNOWN.
I'm able to delete them but what magic must I use to make them run?
There's nothing apparent from the description as to what has failed. Perhaps if you can update it with more details, it'd be useful.
From a troubleshooting standpoint, when deploying streams or launching tasks fails for any reason, the failures will be logged in the SCDF-server/Skipper-server logs. You'd have to tail the logs of the respective pod to learn more about the failures.
Also, it'd be useful to check the output of kubectl describe pod/<POD_NAME> to see what's causing the stream/task pods not to start successfully. The reasons are usually listed towards the end of this command's output.
The usual suspects are pod health-check failures and/or stream/task application Docker images that aren't resolvable at runtime. You'll see the reasons in the logs, of course.
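Putting that together, a minimal set of commands would be (assuming everything runs in the default namespace; the pod names are placeholders):
kubectl get pods                          # find the SCDF-server / Skipper-server pods
kubectl logs -f <SCDF-SERVER-POD>         # tail the server log while launching a task
kubectl describe pod/<POD_NAME>           # events near the end usually explain the failure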
This was a misconfiguration on my end.
I'm able to run as expected now.
I'm getting the following error in the recent jobs I'm trying to submit:
2015-01-07T15:51:56.404Z: (893c24e7fd2fd6de): Workflow failed.
Causes: (893c24e7fd2fd601):
There was a problem creating the GCE VMs or starting Dataflow on the VMs so no data was processed. Possible causes:
1. A failure in user code on the worker.
2. A failure in the Dataflow code.
Next Steps:
1. Check the GCE serial console for possible errors in the logs.
2. Look for similar issues on http://stackoverflow.com/questions/tagged/google-cloud-dataflow.
There are no other errors.
What does this error mean?
Sorry for the trouble.
Dataflow starts up VM instances and then launches an agent on those VMs. Those agents then do the heavy lifting of executing your code (e.g. ParDos, reading and writing your data).
The error indicates the job failed because no agents were requesting work. As a result, the service marked the job as a failure because it wasn't making any progress and never would since there weren't any agents to process your data.
So we need to figure out where in the agent startup process things failed.
The first thing to check is whether the VMs actually started. When you run your job, do you see any VMs created in your project? It might take a minute or two for the VMs to start up, but they should appear shortly after the runner prints out the message "Starting worker pool setup". The VMs should be named something like
<PREFIX-OF-JOB-NAME>-<TIMESTAMP>-<random hexadecimal number>-<instance number>
Only a prefix of the job name is used to ensure we don't exceed GCE name limits.
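One quick way to check is to list instances whose names start with that prefix (the prefix below is a placeholder):
gcloud compute instances list --filter="name ~ ^<PREFIX-OF-JOB-NAME>"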
If the VMs startup the next thing to do is to inspect the worker logs to look for errors indicating problems in launching the agent.
The easiest way to access the logs is using the UI. Go to the Google Cloud Console and then select the Dataflow option in the left hand frame. You should see a list of your jobs. You can click on the job in question. This should show you a graph of your job. On the right side you should see a button "view logs". Please click that. You should then see a UI for navigating the logs and you can look for errors.
The second option is to look for the logs on GCS. The location to look for is:
gs://PATH TO YOUR STAGING DIRECTORY/logs/JOB-ID/VM-ID/LOG-FILE
You might see multiple log files. The one we are most interested in is the one that starts with "start_java_worker". If that log file doesn't exist then the worker didn't make enough progress to actually upload the file; or else there might have been a permission problem uploading the log file.
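A quick way to check, using the same placeholders as the path above:
gsutil ls gs://PATH TO YOUR STAGING DIRECTORY/logs/JOB-ID/VM-ID/
gsutil cat gs://PATH TO YOUR STAGING DIRECTORY/logs/JOB-ID/VM-ID/start_java_worker*.log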
In that case the best thing to do is to try to ssh into one of the VMs before it gets torn down. You should have about 15 minutes before the job fails and the VMs are deleted.
Once you login to the VM you can find all the logs in
/var/log/dataflow/...
The log we care most about at this point is:
/var/log/dataflow/taskrunner/harness/start_java_worker-SOME ID.log
If there is a problem starting the code that runs on the VM that log should tell us. That log and the other logs should also tell us if there is a permission problem that prevents the code running on the worker from being able to access Dataflow.
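As a rough sketch of that last step (the VM name and zone are placeholders):
gcloud compute ssh <WORKER-VM-NAME> --zone=<ZONE>
ls /var/log/dataflow/
less /var/log/dataflow/taskrunner/harness/start_java_worker-*.log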
Please take a look and let us know if you find anything.
Apart from Jeremy Lewi's great answer, I would like to add that I've seen this error appear when you don't enable the proper Google APIs in the Developers Console, as mentioned here, which leads to a permission issue, like Jeremy said.