In Perforce (P4V), I am trying to delete a workspace but getting an error that says the filesystem P4ROOT doesn't have enough space

The exact error message says: "The filesystem 'P4ROOT' has only 1.9G free, but the server configuration requires at least 2G available."
I am trying to delete a new workspace I made by accident, but I keep getting this error. P4V now won't let me do anything, including deleting the workspace that seems to be causing the issue. How do I fix this?

P4ROOT is on the server, so if you're connecting to a remote server, you need to contact the admin of that server and let them know that it's wedged. Your workspace specifically is not the problem; the overall lack of space on the server is. All that needs to happen to fix it is increasing the available disk space. (Deleting your workspace would free up a little space on the remote server by pruning the associated db entries, but those are very small compared to the depot files.)
The "requires 2G available" thing is because by default the server looks for an available 2GB of empty space before it starts any operation; that's to provide reasonable assurance that it won't run out of space completely during the operation, since actually hitting a hard limit can be hard to recover from (db tables might be in a partially-written state, etc).
If the admin wants to try fixing this by obliterating large files (this is usually a pain and I'd recommend just throwing a bigger hard drive at the problem instead), they can temporarily lower that threshold to be able to run the obliterate, but I'd recommend bumping it back afterwards.
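For example, a rough sketch of what the admin could run (assuming the check in play is the filesys.P4ROOT.min configurable; the depot path and workspace name are placeholders):
p4 configure set filesys.P4ROOT.min=1G   # temporarily lower the free-space threshold
p4 obliterate -y //depot/some/large/files/...   # destructive: permanently removes file content
p4 configure set filesys.P4ROOT.min=2G   # restore the safety margin afterwards
Once the server has room to work again, the accidental workspace can be deleted normally, e.g. with p4 client -d accidental_workspace.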

Related

How to prevent storing files I haven't imported/pinned into my node?

I have just installed the IPFS Desktop app on my computer for the first time ever, gone to the Files section and removed both pinned files that were there. I didn't even understand why something was pinned by default right after installation.
Then I just started to watch what would happen. After a few minutes I started to see spikes in network bandwidth, and the number of blocks and the storage size started to increase.
So, the questions are:
If I haven't even imported/pinned any file yet, why has the storage started to fill? I guess it was filling with someone else's files.
How can I prevent it and "seed" only files/data I manually add to my IPFS node?
I'd like to just "seed" my files in read-only mode and prevent constant writes from wearing out my SSD, as well as avoid unneeded network traffic.
IPFS caches things you access by default.
That cache is cleared during "garbage collection", which happens by default once every hour.
You can change this default behavior:
Reprovider.Strategy = "pinned" (https://docs.ipfs.io/how-to/configure-node/#reprovider)
Routing.Type = "dhtclient" (https://docs.ipfs.io/how-to/configure-node/#routing)
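As a concrete illustration (assuming the standard go-ipfs CLI and config key names; restart the daemon or the Desktop app afterwards for the changes to take effect), those settings can be applied from a terminal, and the cache can be cleared on demand:
ipfs config Reprovider.Strategy pinned   # only announce content you have pinned
ipfs config Routing.Type dhtclient       # act as a DHT client only, cutting background traffic
ipfs repo gc                             # immediately clear cached (unpinned) blocks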

Neo4j is storing arbitrary files in drive C?

My C drive size is growing and my server is not running anything but Neo4j,
even though I configured Neo4j to store database information on some other drive.
The node count might be irrelevant, but for the record I have almost 10 million nodes and traffic to the database of about 200 requests / minute.
Is there anything else written by Neo4j that I should be aware of?
dbms.directories.data=E:/MyNeoDB4/
dbms.directories.logs=E:/MyNeoDb4
dbms.jvm.additional=-Dunsupported.dbms.udc.source=zip
dbms.memory.heap.initial_size=15G
dbms.memory.heap.max_size=15G
dbms.security.procedures.unrestricted=apoc.*
dbms.memory.pagecache.size=8G
Update 1:
Things I have checked already:
my debug log is being written somewhere other than drive C
metrics.enabled=false
Update 2:
- As @InverseFalcon suggested, I also checked the transaction logs in the first step; they were being written to some other directory.
(Note: Answer was written before original question was updated to say that neither metrics nor logs were the likely culprits)
Logs, and possibly metrics
I'm not sure what your logging needs have been like, but a major source of disk consumption that is not the data itself is the writing of log files. They typically do not grow extremely quickly, but it totally depends on your setup.
I suspect that your drive may be filling up with logs, although I am surprised it's filling up so quickly. I would check out your log files and see if they are full of long chains of exceptions.
It could also be metrics being exported to CSV on the local disk, although I do not believe that Neo4j will do that without being explicitly configured to do so.
More info on metrics is at the official docs:
https://neo4j.com/docs/operations-manual/current/monitoring/metrics/
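For reference, a hedged neo4j.conf sketch (setting names assume the Neo4j 3.x series, which matches the configuration shown in the question) that caps debug log growth and keeps CSV metrics export off:
dbms.logs.debug.rotation.size=20m
dbms.logs.debug.rotation.keep_number=5
metrics.csv.enabled=false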
A variant on Rebecca Nelson's answer, you might want to check for transaction log files.
Transaction logs are the source of truth for changes made to a database, and they are not the same kinds of logs as the readable log files (debug.log, neo4j.log) that live in the logs folder.
You can find transaction logs in your graph.db folder (or whatever name you've given to your graph database folder) using the naming pattern neostore.transaction.db.0 (with incremental numbering of the log files starting with 0).
Transaction logs are a stage of data persistence. Transactions affecting the database first write to these logs. When criteria are met, a checkpoint operation occurs which flushes the contents of the transaction logs to the datastore files (some of the other files in the graph.db folder) and the transaction logs are pruned and/or rotated.
While you should not modify or delete transaction log files yourself, you can add configuration parameters in neo4j.conf to control how these files are handled.
Here are the docs dealing with transaction logs.
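For example, a neo4j.conf sketch (again assuming Neo4j 3.x setting names; check the transaction log docs for your exact version) that limits how much transaction log history is kept around:
dbms.tx_log.rotation.retention_policy=1 days
dbms.tx_log.rotation.size=250M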

Jenkins: jobs in queue are stuck and not triggered to be restarted

For a while, our Jenkins has been experiencing critical problems: we have hung jobs, and our job scheduler does not trigger the builds. After a Jenkins service restart everything is back to normal, but after some period of time all the problems return (this period can be a week, a day, or even less). Any idea where we can start looking? I'll appreciate any help on this issue.
Muatik has made a good point in his comment, the recommended approach is to run jobs on agents (slave) nodes. If you already do it, you can look at:
- Jenkins master machine CPU, RAM and hard disk usage. Access the machine and/or use a plugin like Java Melody. I have seen missing graphics in the build test results and stuck builds due to no hard disk space. You could also have hit the limit of RAM or CPU for the slaves/jobs you are executing. You may need more heap space.
- Look at the Jenkins log files, starting with severe exceptions. If the files are too big or you see logrotate exceptions, you can change the logging levels so that fewer exceptions are logged. For more details see my article on this topic. Try to fix the exceptions that you see logged.
- Go through recently made changes that could be the cause of such behavior, for example new plugins or changes to config files (jenkins.xml).
- Look at TCP connections. Run netstat -a. Are there suspicious connections (CLOSE_WAIT status)?
- Delete old builds that you do not need.
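A few shell one-liners can help with the checks above (the paths are typical Linux defaults and may differ on your installation):
df -h /var/lib/jenkins                    # free space on the Jenkins home volume
du -sh /var/lib/jenkins/jobs/* | sort -h  # which jobs' build histories use the most disk
netstat -an | grep CLOSE_WAIT | wc -l     # how many connections are stuck in CLOSE_WAIT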
We had been facing this issue for the last 4 months and tried everything: changing CPU and memory resources, increasing the desired nodes in the ASG. But nothing seemed to work.
Solution:
1. Go to Manage Jenkins -> System Configuration -> Maven project configuration.
2. In the "Usage" field, select "Only build jobs with label expressions matching this node".
Doing this resolved it and Jenkins is working like a rocket now :)

How to avoid slow TokuMX startup?

We run a TokuMX replica set (2 instances + arbiter) with about 120GB of data (on disk) and lots of indices.
Since the upgrade to TokuMX 2.0 we noticed that restarting the SECONDARY instance always took a very long time. The database kept getting stuck at STARTUP2 for 1h+ before switching to normal mode. While the server is in STARTUP2, it runs at a continuous CPU load; we assume it's rebuilding its indices, even though it was shut down properly before.
While this is annoying, with the PRIMARY being available it caused no downtime. But recently during an extended maintenance we needed to restart both instances.
We stopped the SECONDARY first, then the PRIMARY and started them in reverse order. But this resulted in both taking the full 1h+ startup-time and therefore the replica-set was not available for this time.
Not being able to restart a possibly downed replica set without waiting for such a long time is a risk we'd rather not take.
Is there a way to avoid the (possible) full index-rebuild on startup?
@Chris - We are revisiting your ticket now. It may have been inadvertently closed prematurely.
@Benjamin: You may want to post this on https://groups.google.com/forum/#!forum/tokumx-user where many more TokuMX users, and the Tokutek support team, will see it.
This is a bug in TokuMX, which is causing it to load and iterate the entire oplog on startup, even if the oplog has been mostly (or entirely) replicated already. I've located and fixed this issue in my local build of TokuMX. The pull request is here: https://github.com/Tokutek/mongo/pull/1230
This has reduced my node startup times from hours to <5 seconds.

Neo4j, There is not enough space on the disk

In our application, which is using neo4j-1.8.2, we have a so-called synchronization process. This process reads some data from a SQL Server db, processes it in some way and makes the appropriate changes to the graph database. In the case of a disk space outage (on the disk where the neo4j database is located), the neo4j server stops working (it is still running but stops answering queries). In the neo4j web admin I get the following response for each Cypher query: "Failed to get current transaction.". In the log file I see:
SEVERE: The RuntimeException could not be mapped to a response, re-throwing to the HTTP container
org.neo4j.graphdb.TransactionFailureException: Unable to commit transaction
...
Caused by: java.io.IOException: There is not enough space on the disk
My question is: when I clean some content from the disk and around 10GB of free space appears, does that mean the neo4j server will start working (answering queries) again automatically, OR do I need to restart it?
I see that it is not working after cleaning some content; I have to restart it, and then it starts working again. I want to know if this is expected, or whether I can do something to avoid restarting the neo4j server.
Thanks in advance,
Denys
You have to restart Neo4j when it runs out of disk. Best practice is to set up some monitoring system that alerts you if free disk space drops below a certain threshold.
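As an illustration only (the mount point, threshold and alert address are placeholders, and a working mail command is assumed), a minimal cron-able free-space check might look like:
#!/bin/sh
# Warn when free space on the Neo4j data volume drops below ~10 GB.
MOUNT=/var/lib/neo4j            # placeholder: wherever your graph database lives
THRESHOLD_KB=10485760           # 10 GB expressed in 1K blocks
FREE_KB=$(df -Pk "$MOUNT" | awk 'NR==2 {print $4}')
if [ "$FREE_KB" -lt "$THRESHOLD_KB" ]; then
    echo "Only ${FREE_KB} KB free on ${MOUNT}" | mail -s "Neo4j disk space warning" admin@example.com
fi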

Resources