When looking at a running container with the docker stats command, I can see that the memory usage of the container is 202.3 MiB.
However, when looking at the same container through the REST API with GET /containers/container_name/stats -> memory_stats -> usage, the usage there shows 242.10 MiB.
That is a big difference between the two values.
What might be the reason for the difference? I know that the Docker client uses the REST API to get its stats, so what am I missing here?
Solved my problem: initially, I did not take cache memory into account when calculating memory usage.
Say stats is the JSON returned from GET /containers/container_name/stats; the correct formula is:
memory_usage = stats["memory_stats"]["usage"] - stats["memory_stats"]["stats"]["cache"]
limit = stats["memory_stats"]["limit"]
memory_utilization = memory_usage/limit * 100
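For illustration, here is that formula as a runnable snippet. It assumes the docker Python SDK (pip install docker) rather than raw HTTP calls, and a container named container_name:

import docker

client = docker.from_env()
# stats(stream=False) returns a single stats sample as a dict,
# the same JSON the REST endpoint produces
stats = client.containers.get("container_name").stats(stream=False)

memory_usage = stats["memory_stats"]["usage"] - stats["memory_stats"]["stats"]["cache"]
limit = stats["memory_stats"]["limit"]
memory_utilization = memory_usage / limit * 100
print(f"{memory_usage / 1024 ** 2:.1f} MiB ({memory_utilization:.2f}%)")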
Use the rss value, i.e. rss = usage - cache:
"memory_stats": {
"stats": {
"cache": 477356032,
"rss": 345579520,
},
"usage": 822935552
}
On Linux, the Docker CLI reports memory usage by subtracting page cache usage from the total memory usage.
The API does not perform such a calculation but rather provides the total memory usage and the amount from the page cache so that clients can use the data as needed. (https://docs.docker.com/engine/reference/commandline/stats/)
The accepted answer is incorrect for recent Docker versions (greater than 19.03).
The correct calculation, which matches the number docker stats reports, is:
memory = stats["memory_stats"]["usage"] - stats["memory_stats"]["stats"]["total_inactive_file"]
memory_limit = stats["memory_stats"]["limit"]
memory_perc = (memory / memory_limit) * 100
JavaScript code, following the Docker CLI source (which falls back to inactive_file on cgroup v2, and to plain usage if neither counter applies):
const memStats = stats.memory_stats
// cgroup v1 reports total_inactive_file; cgroup v2 reports inactive_file
const memoryUsage =
  memStats.stats.total_inactive_file && memStats.stats.total_inactive_file < memStats.usage
    ? memStats.usage - memStats.stats.total_inactive_file
    : memStats.stats.inactive_file && memStats.stats.inactive_file < memStats.usage
      ? memStats.usage - memStats.stats.inactive_file
      : memStats.usage
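For Python users, a rough equivalent of that fallback logic (my own sketch, not taken from the CLI source):

def mem_usage_no_cache(memory_stats: dict) -> int:
    # Mirror docker stats: subtract the inactive file cache from usage.
    usage = memory_stats["usage"]
    inner = memory_stats.get("stats", {})
    # cgroup v1 exposes total_inactive_file, cgroup v2 exposes inactive_file
    v1 = inner.get("total_inactive_file")
    if v1 is not None and v1 < usage:
        return usage - v1
    v2 = inner.get("inactive_file", 0)
    if v2 < usage:
        return usage - v2
    return usage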
Related
I am new to websockets and am pulling some data via the Coinbase websocket. On every message I append to a pandas DataFrame in memory (I know this is bad practice; I am just trying to get a working version). Every minute I upload this DataFrame to TimescaleDB and clear old data out of it.
I am noticing, though, that on occasion the upload fails, and as a result the DataFrame is not cleared of old values, and eventually it consumes all the memory.
Is this a feature of websockets, or is something off with my scheduler?
This is my scheduler, for reference:
import time
from datetime import datetime
import pandas as pd

# candle_len (minutes) and upload_timescale() are defined elsewhere
while True:
    nowtime = datetime.now()
    floornow = pd.Timestamp(nowtime).floor(freq='1S')
    candlefloor = floornow.floor(freq=f'{candle_len}T')
    if floornow == candlefloor:
        try:
            upload_timescale()
            if candlefloor != 'last datetime':
                timetowait = candle_len * 60 - (datetime.now() - candlefloor).total_seconds() - 0.05
                time.sleep(timetowait)
        except Exception:
            raise ValueError('bug in uploading to timescaledb')
    else:
        tts = candle_len * 60 - (nowtime - candlefloor).total_seconds()
        if tts > 2:
            time.sleep(tts)
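Conceptually, what I am trying to achieve is something like this sketch: a plain list buffer that is only cleared once an upload succeeds (upload_timescale as above, though here it takes the batch explicitly; the cap value is arbitrary):

buffer = []            # rows appended by the websocket callback
MAX_BUFFER = 100_000   # arbitrary cap so repeated failures cannot grow memory unbounded

def flush():
    batch = list(buffer)           # snapshot the current contents
    buffer.clear()
    try:
        upload_timescale(batch)    # uploader as above, taking the batch explicitly
    except Exception:
        buffer[:0] = batch         # re-queue the failed batch
        del buffer[MAX_BUFFER:]    # but keep the total size bounded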
What is the best way to handle a use case where I need to store websocket data temporarily for processing before clearing it out?
Thanks!
When I initialize Ray with ray.init() in a Docker container, the memory usage increases over time (the memory usage in docker stats increases) and the container dies when memory goes over the limit (ray.init() alone can cause this issue).
Also, too many duplicate processes spawn on ray.init() (ray:IDLE, the Ray dashboard, other Ray-related processes).
I reproduced this issue with the official Ray image: https://pypi.org/project/ray/#history
P.S.: Our use case is a combination of a Docker container, a FastAPI scheduler, and Ray (i.e. we initialize the Ray instance once, and do ray.put and ray.get every predefined cycle).
Let me share my test design pattern to reproduce this issue:
import numpy as np
import ray
from fastapi import FastAPI
from fastapi_utils.tasks import repeat_every

ray.init(num_cpus=4, dashboard_host='0.0.0.0', dashboard_port=8888, configure_logging=False)
app = FastAPI()

@app.on_event("startup")
@repeat_every(seconds=1, raise_exceptions=True)
@app.get("/test")
def test():
    dd = []
    bb = ray.put(dd)
    fut = []
    for i in range(10):
        fut.append(aa.remote(bb))
    ss = ray.get(fut)

@ray.remote
def aa(ss):
    a = np.random.rand(380, 640)
    ss.append(a)
    return ss
This doesn't solve your issue, but it could be related: https://github.com/tiangolo/fastapi/issues/1624. There appears to be a memory leak issue with FastAPI and Docker.
While this Q/A does not address the actual issue, my question is: how can I detect with a client (e.g. redis-py) that Redis is running out of memory, constrained not by the machine but by the maxmemory configuration? Which command can the program use to detect that memory is about to be full, before inserts start failing?
My first guess is: run INFO and check whether used_memory_peak < the maxmemory setting. Is this correct?
(Besides, for running out of machine memory, given defragmentation, which setting should be used? None of the returned INFO fields seem to help here.)
Or should I just try an insert and see if it fails? (But that would be after the fact.)
Trial and error, tested well enough by running:
while true; do redis-cli lpush mm longstringhere; done
This results in insert failures once maxmemory - used_memory < 0.1 MB:
(error) OOM command not allowed when used memory > 'maxmemory'.
So I poll it via the redis-py client, and once the difference goes below a 1 MB threshold I throw up, sorry, raise an error of course. Make sure the used_memory footprint of your largest command is below the threshold too, otherwise you still run into the OOM error on insert.
To get notified much earlier, e.g. at 90% of maxmemory, I calculate the approximate percentage of used memory; the solution below works fine for that.
Info dump:
# Memory
used_memory:3126272
used_memory_human:2.98M
used_memory_rss:5292032
used_memory_rss_human:5.05M
used_memory_peak:4914296
used_memory_peak_human:4.69M
used_memory_peak_perc:63.62%
used_memory_overhead:696654...
Furthermore, maxmemory is not a hard cap; when pushing further, e.g. by adding members to an existing set, usage can still grow past it:
used_memory:3162584
used_memory_human:3.02M
Code to get the percentage (0-100):
rmem_info = pipe.info(section='memory')
{'redis_mem_percent': math.ceil(rmem_info['used_memory'] / rmem_info['maxmemory'] *100)}
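Put together as a self-contained snippet (redis-py assumed; the 90% threshold is the one mentioned above):

import math
import redis

r = redis.Redis()

def check_redis_memory(threshold_percent=90):
    info = r.info(section='memory')
    maxmem = info.get('maxmemory', 0)
    if maxmem == 0:
        return 0  # no maxmemory configured, nothing to compare against
    percent = math.ceil(info['used_memory'] / maxmem * 100)
    if percent >= threshold_percent:
        raise RuntimeError(f'redis at {percent}% of maxmemory')
    return percent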
I am trying to migrate from Play 2.5 to 2.6.2. I keep getting a URI-length-exceeds error. Does anyone know how to override this?
I tried the Akka settings below, but still no luck.
play.server.akka {
  http.server.parsing.max-uri-length = infinite
  http.client.parsing.max-uri-length = infinite
  http.host-connection-pool.client.parsing.max-uri-length = infinite
  http.max-uri-length = infinite
  max-uri-length = infinite
}
Simply add
akka.http {
  parsing {
    max-uri-length = 16k
  }
}
to your application.conf. The play.server prefix is only used for a small subset of convenience features of the Akka HTTP integration in the Play Framework, e.g. play.server.akka.requestTimeout. Those are documented in the "Configuring the Akka HTTP server backend" documentation.
I was getting this error due to the header length exceeding the default of 8 KB (8192 bytes). Adding the following to build.sbt worked for me :D
javaOptions += "-Dakka.http.parsing.max-header-value-length=16k"
You can try the same for the URI length if the other options don't work.
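Presumably (untested) the equivalent build.sbt line would be:
javaOptions += "-Dakka.http.parsing.max-uri-length=16k"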
This took me way too long to figure out, and somehow it is not to be found in the documentation.
Here is a snippet (confirmed working with Play 2.8) to put in your application.conf. It is also configurable via an environment variable and works for BOTH dev and prod mode:
# Dev Mode
play.akka.dev-mode.akka.http.parsing.max-uri-length = 16384
play.akka.dev-mode.akka.http.parsing.max-uri-length = ${?PLAY_MAX_URI_LENGTH}
# Prod Mode
akka.http.parsing.max-uri-length = 16384
akka.http.parsing.max-uri-length = ${?PLAY_MAX_URI_LENGTH}
You can then edit the config, or, for an already-deployed application, just set PLAY_MAX_URI_LENGTH; the limit is then configurable dynamically without modifying command-line arguments:
env PLAY_MAX_URI_LENGTH=16384 sbt run
If anyone is getting this type of error in the Chrome browser when trying to access a site or log in ([HTTP header value exceeds the configured limit of 8192 characters]):
Go to Chrome Settings -> Security and Privacy -> Site Settings -> View permissions and data stored across sites.
Search for the specific website and, on that site, click Clear all data.
I am using Flume to write to Google Cloud Storage. Flume listens on HTTP:9000. It took me some time to make it work (add the GCS libraries, use a credentials file...), but now it seems to communicate over the network.
I am sending very small HTTP requests for my tests, and I have plenty of RAM available:
curl -X POST -d '[{ "headers" : { timestamp=1417444588182, env=dev, tenant=myTenant, type=myType }, "body" : "some body ONE" }]' localhost:9000
I encounter this memory exception on the first request (then, of course, it stops working):
2014-11-28 16:59:47,748 (hdfs-hdfs_sink-call-runner-0) [INFO - com.google.cloud.hadoop.util.LogUtil.info(LogUtil.java:142)] GHFS version: 1.3.0-hadoop2
2014-11-28 16:59:50,014 (SinkRunner-PollingRunner-DefaultSinkProcessor) [ERROR - org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:467)] process failed
java.lang.OutOfMemoryError: Java heap space
at java.io.BufferedOutputStream.<init>(BufferedOutputStream.java:76)
at com.google.cloud.hadoop.fs.gcs.GoogleHadoopOutputStream.<init>(GoogleHadoopOutputStream.java:79)
at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.create(GoogleHadoopFileSystemBase.java:820)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:906)
(see complete stack trace as a gist for full details)
The strange part is that folders and files are created the way I want, but the files are empty.
gs://my_bucket/dev/myTenant/myType/2014-12-01/14-36-28.1417445234193.json.tmp
Is something wrong with the way I configured Flume + GCS, or is it a bug in the GCS jar?
Where should I look to gather more data?
P.S.: I am running flume-ng inside Docker.
My flume.conf file:
# Name the components on this agent
a1.sources = http
a1.sinks = hdfs_sink
a1.channels = mem
# Describe/configure the source
a1.sources.http.type = org.apache.flume.source.http.HTTPSource
a1.sources.http.port = 9000
# Describe the sink
a1.sinks.hdfs_sink.type = hdfs
a1.sinks.hdfs_sink.hdfs.path = gs://my_bucket/%{env}/%{tenant}/%{type}/%Y-%m-%d
a1.sinks.hdfs_sink.hdfs.filePrefix = %H-%M-%S
a1.sinks.hdfs_sink.hdfs.fileSuffix = .json
a1.sinks.hdfs_sink.hdfs.round = true
a1.sinks.hdfs_sink.hdfs.roundValue = 10
a1.sinks.hdfs_sink.hdfs.roundUnit = minute
# Use a channel which buffers events in memory
a1.channels.mem.type = memory
a1.channels.mem.capacity = 10000
a1.channels.mem.transactionCapacity = 1000
# Bind the source and sink to the channel
a1.sources.http.channels = mem
a1.sinks.hdfs_sink.channel = mem
A related question in my Flume/GCS journey: What is the minimal setup needed to write to HDFS/GS on Google Cloud Storage with Flume?
When uploading files, the GCS Hadoop FileSystem implementation sets aside a fairly large (64MB) write buffer per FSDataOutputStream (file open for write). This can be changed by setting "fs.gs.io.buffersize.write" to a smaller value, in bytes, in core-site.xml. I imagine 1MB would suffice for low-volume log collection.
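For example, a core-site.xml entry setting the write buffer to 1 MB (the value is in bytes) might look like this:

<property>
  <name>fs.gs.io.buffersize.write</name>
  <value>1048576</value>
</property>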
In addition, check what the maximum heap size is set to when launching the JVM for flume. The flume-ng script sets a default JAVA_OPTS value of -Xmx20m to limit the heap to 20MB. This can be set to a larger value in flume-env.sh (see conf/flume-env.sh.template in the flume tarball distribution for details).
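For instance, in conf/flume-env.sh (512 MB here is just an illustrative value):

export JAVA_OPTS="-Xmx512m"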