I'm running a Flask API inside a Docker container.
My application has to download files from Google Cloud, and sometimes, after some minutes of execution, my container exits with the following message:
pm-api exited with code 247
I am not sure, but I think it might be related to the size of the data I'm trying to download from GCP, because I'm using a query and I don't seem to have any problem when limiting the number of rows I get from it.
Could it be data-size related? And if so, how can I configure my Docker container so it doesn't break when downloading/saving large files?
In order to solve this problem I had to increase the memory available to my Docker container. This is found under Settings > Resources (in Docker Desktop). Increasing the memory removed this issue.
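If you want to confirm that memory is the culprit before changing anything, a quick sketch (using the pm-api container name from the error message) is to watch the container's memory usage while the download runs:
# Live memory/CPU usage for the container; watch the MEM USAGE / LIMIT column climb
docker stats pm-api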
Yes, possibly due to the large file size.
You may have to define FILE_UPLOAD_MAX_MEMORY_SIZE to be larger than the file you are downloading. See also: What is code 247.
Also refer to max-memory-vs-data-upload-max-memory.
I had a similar issue: a Docker container (Python code processing some data) exited with the message
command terminated with exit code 247
My Docker container is running on Kubernetes (k8s).
The issue was caused by the k8s resources memory limit being set to 4 GB while the Python container needed more than 4 GB of memory. After I increased the memory limit to 8 GB, the issue was solved.
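As a sketch of that fix (the deployment name data-processor is just a placeholder, and whether you use a Deployment or a bare Pod depends on your setup), the limit can be raised with kubectl or by editing the manifest:
# Check why the pod was terminated (look for Reason: OOMKilled under Last State)
kubectl describe pod <pod-name>
# Raise the memory limit on the hypothetical deployment to 8Gi
kubectl set resources deployment data-processor --limits=memory=8Gi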
But as for why the exit code is 247, I couldn't find an answer; the documented Docker exit codes don't include 247.
I am using Docker over HTTPS: https://x.x.198.38:2376/v1.40/images/load
I started getting this error when running Docker on CentOS; this was not an issue on Ubuntu.
The image in question is 1.1 GB in size.
Error Message:
Error processing tar file(exit status 1): open /root/.cache/node-gyp/12.21.0/include/node/v8-testing.h: no space left on device
I ran into a similar issue some time back.
The image might have a lot of small files and you might be falling short on disk space or inodes.
I was able to get to the bottom of it only when I ran "watch df -hi": it showed inode usage spiking up to 100% before Docker cleaned up and it dropped back to 3%.
Further analysis showed that the attached volume was very small: just 5 GB, of which 2.9 GB was already used by unused images and stopped or exited containers.
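(A complementary check, not part of the original answer: docker system df breaks down how much space images, containers, local volumes, and the build cache are using.)
# Summarise Docker's own disk usage; add -v for a per-image/per-container breakdown
docker system df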
Hence, as a quick fix:
sudo docker system prune -a
And this brought the free inodes from 96k up to 2.5m.
As a long-term fix, I increased the AWS EBS volume to 50 GB, as we had plans to use Windows images too in the future.
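For reference, after resizing an EBS volume you still have to grow the partition and filesystem inside the instance; a rough sketch, assuming /dev/xvda with partition 1 and an ext4 filesystem (your device names and filesystem may differ):
# Grow the partition to fill the resized EBS volume
sudo growpart /dev/xvda 1
# Grow the ext4 filesystem (use xfs_growfs / for XFS)
sudo resize2fs /dev/xvda1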
HTH
@bjethwan you found a very good command. It solved my problem, thank you. I am using Red Hat, and I want to add something.
The watch command uses a 2-second interval by default. When I used the default, it couldn't catch the problematic inodes.
I ran the watch command with a 0.5-second interval, and this caught the guilty volume :)
watch -n 0.5 df -hi
After detecting the offending volume, you must increase its size.
I started Hyperledger Composer from fabric-dev-server, so all the images are running as usual.
Now, after two weeks, I've noticed that my HDD space is being taken up by the Docker containers.
Here are screenshots of my HDD space on Day-1 and Day-2: in 2 days, the available HDD size went from 9.8G to 9.3G.
So, how can I resolve this issue?
I think the problem is that the peer0 Docker container is generating too many logs; if you keep that container running, it will generate even more logs as you access the Fabric network.
You can check the size of the log file for a particular Docker container:
Find the container ID of peer0.
Go to the directory /var/lib/docker/containers/<container_id>/.
There should be a file named <container_id>-json.log.
So in my case:
My Fabric network had been running for 2 weeks, and the log file was at (example):
/var/lib/docker/containers/a50ea6b441ee327587a73e2a0efc766ff897bed2e187575fd69ff902b56a5830/a50ea6b441ee327587a73e2a0efc766ff897bed2e187575fd69ff902b56a5830-json.log
I checked the size of that file; it was nearly 6.5 GB.
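If you're not sure which container is responsible, a quick way to rank all the container log files by size (run as root, since /var/lib/docker is root-owned) is:
# List every container's JSON log file, largest last
sudo du -h /var/lib/docker/containers/*/*-json.log | sort -h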
Solution (Temporary):
Run the command below as root, which will empty that file (example):
> /var/lib/docker/containers/a50ea6b441ee327587a73e2a0efc766ff897bed2e187575fd69ff902b56a5830/a50ea6b441ee327587a73e2a0efc766ff897bed2e187575fd69ff902b56a5830-json.log
Solution (Permanent):
What you can do is make a script that runs every day and empties that log file.
You can use crontab, which gives you the ability to run a script at a specific time, on a specific day, etc.
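A minimal sketch of such a job, assuming the same container ID as above and a 02:00 daily schedule (adjust both to your setup):
# Edit root's crontab, since the log file is owned by root
sudo crontab -e
# Add a line like this to empty the peer0 log every day at 02:00
0 2 * * * truncate -s 0 /var/lib/docker/containers/a50ea6b441ee327587a73e2a0efc766ff897bed2e187575fd69ff902b56a5830/a50ea6b441ee327587a73e2a0efc766ff897bed2e187575fd69ff902b56a5830-json.log
Truncating in place is preferable to deleting the file: the Docker daemon keeps the log open, so deleting it would not free the space until the container is restarted.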
I've got a Docker container currently running in production on a CentOS 7 VM. We have encountered a problem where the container's logs (the log files found under /var/lib/docker/containers/{container_id}) are filling up the host drive over time, causing the container to become unresponsive and forcing us to clear the logs on the host in order to enable it to continue processing.
We can't take the container down, meaning I can't just bring it back up using the --log-opt flag to set up some log rotation options.
We've tried using logrotate, but the container writes to its log constantly, so we often find that the logs are rotated yet the original file does not decrease in size because it is still being written to while the rotation is underway.
I'm trying to find a solution to this problem where we can set up some kind of task that will clear the logs down to a specific file size. Any help is greatly appreciated.
I would suggest mounting the container's log directory to a host directory; there you can schedule whatever task you like to zip/move/delete the log files...
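Another option that doesn't require touching the running container, sketched here with assumed paths and limits: logrotate's copytruncate mode copies the log and then truncates the original in place, which sidesteps the "file keeps growing because it's still open" problem described in the question:
# Create /etc/logrotate.d/docker-containers (size/rotate values are examples)
sudo tee /etc/logrotate.d/docker-containers <<'EOF'
/var/lib/docker/containers/*/*-json.log {
    size 100M
    rotate 5
    compress
    missingok
    copytruncate
}
EOF
Note that copytruncate can drop a few log lines written between the copy and the truncate, which is usually an acceptable trade-off for this kind of cleanup.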
I'm using Scrapy to do some crawling with Splash via the scrapinghub/splash Docker container; however, the container exits by itself after a while with exit code 139. I'm running the scraper on an AWS EC2 instance with 1 GB of swap assigned.
I also tried running it in the background and viewing the logs later, but nothing indicates an error; it just exits.
From what I understand, 139 means a segmentation fault (128 + SIGSEGV) on UNIX. Is there any way to check or log what part of memory is being accessed or what code is being executed, to debug this?
Or can I increase the container's memory or swap size to avoid this?
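A couple of things worth checking, as a hedged sketch (the memory values here are only examples): the host kernel log usually records both segfaults and OOM kills, and the memory and swap available to the container can be raised with docker run flags:
# Look for segfault or OOM-killer messages from the host kernel
dmesg -T | grep -iE 'segfault|killed process'
# Run splash with an explicit memory limit plus swap headroom
docker run -d -p 8050:8050 --memory=1g --memory-swap=2g scrapinghub/splash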
I'm running into an issue where my Docker container exits with exit code 137 after roughly a day of running. The container's logs contain no information indicating that an error occurred. Additionally, attempting to restart the container returns an error that a PID file already exists for the application.
The container is built using the sbt Docker plugin (sbt docker:publishLocal) and is then run using:
docker run --name=the_app --net=the_app_nw -d the_app:1.0-SNAPSHOT.
I'm also running 3 other Docker containers which all together use 90% of the available memory, but it's only ever this particular container that exits.
Looking for any advice on where to look next.
Exit code 137 (128 + 9) means the process was killed with SIGKILL (as in kill -9 yourApp) by something. That something can be a lot of things: maybe Docker or the kernel killed it for using too many resources, maybe it ran out of memory, etc.
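One quick way to test the out-of-memory theory is to ask Docker whether the kernel's OOM killer terminated the container (the_app is the container name from the question):
# Prints true if the container was OOM-killed, followed by its exit code
docker inspect --format '{{.State.OOMKilled}} {{.State.ExitCode}}' the_app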
Regarding the PID problem, you can add this to your build.sbt:
javaOptions in Universal ++= Seq(
  "-Dpidfile.path=/dev/null"
)
Basically, this should instruct Play not to create a RUNNING_PID file. If that does not work, you can try passing the option directly in Docker using the JAVA_OPTS environment variable.
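A sketch of that second option, reusing the run command from the question (this assumes the start script generated by sbt-native-packager picks up JAVA_OPTS, which its bash template normally does; adjust if your image uses a different entrypoint):
# Pass the option at run time via the JAVA_OPTS environment variable
docker run --name=the_app --net=the_app_nw -e JAVA_OPTS="-Dpidfile.path=/dev/null" -d the_app:1.0-SNAPSHOT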