Perforce submit is atomic.
That means that if the changelist contains 3 files to be checked in,
and the operation successfully checks in the first 2 files but then something goes wrong
while operating on the 3rd file, it can roll back the operation on the first 2 files so that everything returns to its original state.
I'm really impressed by that, and I tried to search for how it actually works, but couldn't seem to find anything.
I would really appreciate it if someone could help me understand the technical details of how all this works in the background.
Thank you very much in advance.
The fact that Perforce uses a database as its source of truth makes this very simple:
1. The depot files are locked (in the database, as if you'd run p4 lock).
2. The new revision contents are uploaded to the depot archive.
3. The database tables are locked for the final set of checks that everything is okay.
4. The new revision records are written into the database and all the locks are released.
If the submit fails somewhere in step 2, nothing needs to be rolled back, because the new revision contents don't overwrite anything, and they aren't visible as part of the file history until step 4. (An unfortunate side effect of this is that Perforce can actually "leak" disk space on a failed submit, but this is generally pretty minor compared to the expected normal increase in disk usage over time.)
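To make the "nothing to roll back" point concrete, here is a minimal Python sketch of the same stage-then-commit pattern, under stated assumptions (the names DEPOT, db_lock, revisions and submit are hypothetical); it is not Perforce's actual implementation:

import shutil, threading, uuid
from pathlib import Path

# Illustrative stand-ins -- DEPOT, db_lock, revisions and submit() are hypothetical
# names for this sketch, not Perforce internals.
DEPOT = Path("depot-archive")
DEPOT.mkdir(exist_ok=True)
db_lock = threading.Lock()   # stands in for the database table locks (step 3)
revisions = {}               # stands in for the revision records in the database

def submit(changelist):
    """changelist: list of (depot_path, local_file) pairs."""
    staged = []
    # Step 2: upload new revision contents under fresh, non-conflicting names.
    # A failure anywhere in this loop leaves only orphaned blobs behind;
    # no existing history is touched, so there is nothing to roll back.
    for depot_path, local_file in changelist:
        blob = DEPOT / f"{uuid.uuid4().hex}.bin"
        shutil.copyfile(local_file, blob)
        staged.append((depot_path, blob))
    # Steps 3-4: take the lock, run the final checks, then publish all records at once.
    with db_lock:
        for depot_path, blob in staged:
            revisions.setdefault(depot_path, []).append(blob)
    # Only now do the new revisions become visible as part of the file history.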
If you'd like to be able to watch this happen in real time, tailing the journal (P4JOURNAL) will show you the database writes as they happen, and tailing the log (P4LOG) will show you the individual phases of the submit operation from when the user initiates the operation to when the change is fully committed.
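For example, a rough Python follower along these lines could be used to watch the journal grow during a submit (the journal path below is an assumption; check P4JOURNAL on your server):

import time

# The journal path is an assumption (the default is a file named "journal"
# in the server root); use whatever P4JOURNAL points at on your server.
JOURNAL = "/p4root/journal"

with open(JOURNAL, "r", errors="replace") as journal:
    journal.seek(0, 2)                 # start at the end of the file, like tail -f
    while True:
        line = journal.readline()
        if line:
            print(line.rstrip())       # new database records (e.g. db.rev rows) appear as submits commit
        else:
            time.sleep(0.5)            # nothing new yet; poll again shortly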
We have recently started using Duplicati for backup of some of our data systems. We run an ERP solution that uses Pervasive (v10).
When Duplicati begins its backup process, to the best of my understanding, it uses the file date and/or the file byte size to determine what to back up.
The problem I see with that approach is that some of the data is missing from the restored table. For example, we are certain the workorders module had new rows of data on the server (source machine) that were NOT copied over to the new file.
Last night we backed up our ERP platform and then restored it to a new location so we could compare what was backed up during the evening against what the source machine had. We noticed that rows which exist in the source table are missing from one table in the restored backup.
The backup is being created from the data directory. We are NOT using the integrated backup that came with the ERP suite.
What I personally believe is happening is that the database isn't writing the data out to the table until the last client disconnects from the ERP software. Also, the byte size of the file that is missing data and that of the file on the source machine are the same, even though the source file holds more data.
Last week we ran the same test we did last night, and I noticed that when I closed the ERP suite, the file's modified timestamp updated and the new rows were added to the table, but not before the client disconnected.
Can someone shed some light on why this is happening?
Are the data files open according to Pervasive when the backup occurs? If so, you should use some sort of agent to close the files, put them into Continuous Operation mode, or use Backup Agent.
From the docs:
Continuous Operations provides the ability to backup data files while database applications are running and users are connected.
When Continuous Operation mode is started, a delta file (.^^^) is created and the original data file is 'closed' so backup programs can access the file and back it up.
Backup Agent puts a GUI front end to Continuous Operation mode but is only supported with PSQL v11 and newer.
With Duplicati, you can set --disable-filetime-check=true to ignore the timestamps and sizes, and scan each file for changes.
This option is not active by default, because it may take a long time to fully read the file contents. For normal file operations, the OS should set the timestamp, but some applications, like TrueCrypt, will revert the timestamp.
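To illustrate why a file with an unchanged size and timestamp gets skipped, here is a rough Python sketch of the two detection strategies; it is not Duplicati's code, just the idea behind the option:

import hashlib, os

def changed_by_metadata(path, last_size, last_mtime):
    # Default-style check: cheap, but misses an in-place rewrite that keeps the
    # same byte size and timestamp (as described for the Pervasive data file).
    st = os.stat(path)
    return st.st_size != last_size or st.st_mtime != last_mtime

def changed_by_content(path, last_sha256):
    # Full-scan check: read everything and compare hashes, which catches a
    # same-size, same-timestamp rewrite at the cost of reading the whole file.
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest() != last_sha256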
I'm working on an Umbraco Cloud project. I pulled the website from the git repositories and built it. The first thing to do when you run the site is to restore the content that's in the development environment to the local project so we can create new features. Yet Umbraco fails to do so, with the following error:
The source environment has thrown a Umbraco.Deploy.Exceptions.ProcessArtifactException
with message: Process pass #3 failed for artifact
umb://document/xxthexguidxofxsomexpagexxxxxxxxx. It might have been
caused by an inner Umbraco.Deploy.Exceptions.EnvironmentException with
message: Could not get parent with id xxthexxx-guid-xofx-xthe-xxhomepagexx.
The following artifacts might be involved:
umb://document/xxthexxxguidxofxxthexxhomepagexx
The technical details may contain more information.
I've noticed that some strange errors occur if not everything is deployed on the development site in the cloud, so I made sure everything is published. Still errors, though... I'm kind of lost here.
Has anyone come across similar issues? And how did you fix them?
Thanks in advance!
This can happen for a number of reasons, so it's a bit hard to say what exactly the problem is in your case.
Most of the time this happens due to a circular reference of some sort causing a state that can't really be restored. For example, a datatype could have a dependency on a node, but the node doesn't exist in a blank new environment. The content restore then refuses to start until the structural data (datatypes, content types and such) is completely in sync, but the datatypes will never be in sync until the content node exists. It's a sort of catch-22 situation that might need to be resolved manually.
I would suggest you contact support through the Cloud portal and they will assist you in getting your problem resolved.
Something went wrong with the application in the middle of a transaction (the thread was killed, which resulted in a ThreadDeath, etc.), so the transaction failed, and afterwards no new transaction could be started:
org.neo4j.kernel.api.exceptions.TransactionFailureException:
Kernel has encountered some problem, please perform neccesary action (tx recovery/restart)
What are the actions I should undertake to deal with this issue?
Update: I forgot to mention that I have encountered this type of error before and managed to (at least temporarily) fix it by deleting the transaction log files. But now, apparently, there aren't any *nioneo* (IIRC) files in the neo4j data directory at all! Did the location or names of the log files change? Or am I missing something? There are neostore.transaction.db.x files, which, upon grepping, seem to contain chunks of my data. I did start a fresh instance of the application (fortunately it was a test), so I can't check it now, but if I deleted them, would I be able to restart the app from its previous state?
One of the improvements in Neo4j 2.2 was the unification of the transaction logs; they are now in neostore.transaction.db.x files.
If the database no longer starts, you can try removing them (but be sure to keep a backup copy) and restarting the database. However, try a restart with these files in place first. If the presence of transaction logs causes the database not to start up, I would consider that a bug.
I'm trying to use a Neo4j HA cluster (Neo4j 2.0.1), but I got the error "No such log version: ..." after the database was copied from the master.
I deleted all *.log files, but that did not help.
Can you help me with this problem?
TIA.
"Log" in Neo4j refers to the write-ahead-log that the database uses to ensure durability and consistency. It is stored in files named something like nioneo_logical_log.vX, where X is a number. You should never delete these files manually, even if the database is turned off, this may lead to data loss. If you wish to limit the amount of disk space used by neo for logs, use this configuration:
keep_logical_logs=128M
The error you are seeing means that the copied database cannot perform an incremental update to catch up with the master, because the log files have been deleted. This may happen normally, if the copy is very old, or if you have keep_logical_logs set to false.
What you need to do to resolve this is to set keep_logical_logs to some value that makes sense to you, perhaps "keep_logical_logs=2 days" to keep logs up to two days old around. Once you've done that, you need to do a new full store copy on your slave, which can be triggered by shutting the slave off, deleting the database folder (data/graph.db), and starting the slave back up again.
Newer versions of Neo4j will do this automatically.
I'd been working on a plugin when I discovered this. I can't say for sure whether this behavior happened before on my machine (it doesn't on our test server, a Linux box), but after attaching a file, I can't delete it until the server restarts. I can't delete it through the UI or by manually navigating to the server directory and trying to delete it from there.
Has anyone ever encountered this before? Could it be something environmental on my box??
Most probably it's a permission issue in that folder, which allows your JIRA user (the user under whose privileges the JIRA instance runs) to create files but not to delete them (or something even more fun) :) Try deleting the temp folder (where your uploaded attachments reside) and recreating it, adding your JIRA web user to the access list for that folder.
The workaround for deleting files when some other process is holding a lock on them, without having to terminate that process, is to use Unlocker. But be warned: when Unlocker unlocks a file, it does so in a way that does not notify the lock holder that the file has been unlocked by force. That means the lock holder still thinks it holds a lock on the open file, which it doesn't (the file handle is invalid), so some applications might crash due to the unexpected state of the supposedly open file. By the way, I've been using Unlocker forever and it has rarely caused any crashes, but it's better to be warned.
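As a small, generic Python demonstration of the underlying behavior (not JIRA code): a file opened with the default sharing mode on Windows cannot be deleted while the handle is open, whereas the same unlink succeeds on Linux, which matches the "doesn't happen on our Linux test server" observation:

import os, tempfile

# Python (like most Windows programs) opens files without FILE_SHARE_DELETE,
# so deleting fails on Windows while another handle is still open.
path = os.path.join(tempfile.gettempdir(), "attachment-demo.bin")
handle = open(path, "wb")            # simulates the process still holding the attachment open
handle.write(b"attachment data")
handle.flush()

try:
    os.remove(path)                  # Windows: PermissionError while the handle is open
    print("delete succeeded (POSIX semantics: unlink works on open files)")
except PermissionError as exc:
    print("delete blocked by the open handle:", exc)

handle.close()
if os.path.exists(path):
    os.remove(path)                  # succeeds once the handle is released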