We've been running an embedded Neo4j instance and using the high-availability features to ensure zero-downtime rolling deploys.
We're obviously doing something wrong, because we get a 'branched/' folder inside our Neo4j database folder. I guess this means we have branched data (i.e. the master switchover doesn't work correctly).
My question is: what can I do with it?
Can it be safely moved over to a developer's workstation while the database is running?
Can we delete it from the server (while the database is running)?
Is there an API or tool to see the changes inside the branched folder?
What are the persistence options for FitNesse files? So far it seems like the file system is the only thing supported. There does appear to be an out-of-date database plugin. Is there anything else that is supported (S3, a database, etc.)? Is there a way to control where files are persisted when using the file system?
I believe there is very little in that area. The location of the files can be controlled using a command-line option. See http://fitnesse.org/FitNesse.FullReferenceGuide.UserGuide.QuickReferenceGuide#FitNesseCommandLINE
-d /path/to/fitnesse/root
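For example, to start the standalone jar on port 8080 with the wiki root in a non-default location (port and path here are just placeholders):

    java -jar fitnesse-standalone.jar -p 8080 -d /path/to/fitnesse/root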
I've used the FitNesse wiki as a local development tool, with the pages on the file system. Once I'm satisfied with the tests, I commit them to version control (e.g. Git) so that they become part of the (integration) test pipeline (e.g. they are run as part of the project's CI/CD pipeline).
I believe there is a plugin that will automatically commit every save action to Git, but I've never used it. In my opinion, committing each edit just pollutes version control; I only want to see tests after they have been checked/completed, and that tends not to be every save.
Working in a shared wiki environment (where I would expect a non-file-system approach to fit in), you run into the same problem, I expect. Developing automated tests is a development task that requires some iterations before it is 'done', and not all attempts reach that 'done' state. So using shared storage for wiki persistence creates 'noise' in the test set: which tests form the current reference set that should pass, and which are work in progress?
If you are working on a larger project where new features are developed together with their automated tests, it becomes even more important to know which test changes belong to which features/changes. Having the tests on the file system, in version control, allows you to develop tests in sync with code changes in the same branch. This is what I would recommend.
Is it possible to somehow serialize the current ThingsBoard (let's call it TBoard) configuration, save it, and then later load the saved configuration on TBoard startup?
I am specifically interested in loading device profiles, rule chains, and dashboards.
I want to save the configuration together with my project in a Git repository, so that later I could just use docker-compose to start multiple services from the project (let's call them sensors) and a single TBoard instance with the saved configuration, which would be used for collecting telemetry from the sensors and drawing dashboards.
Another reason for saving the configuration: what happens if for some reason the TBoard container crashes or somehow gets corrupted so that it can't be started again? Would I have to click through everything again in order to recreate all the device profiles, dashboards, rule chains, etc.?
Regarding this part:
I am specifically interested in loading device profiles, rule chains, and dashboards. I want to save configuration together with my project in git repository
I have just recently implemented version control for my ThingsBoard deployment. The way I am doing it is with the Python REST client.
I have written functions to export all dashboards/data converters/integrations/rule chains/widgets into JSON files, which I save in a GitHub repository.
I have also written the reverse script to push the stored files to a fresh environment, essentially "flashing" it. Surprisingly, this works perfectly.
I have an idea to publish this as a package, but it's something I've never done before so I'm unsure if I will get to it.
Just letting you know that it is definitely possible to get source control operational via the API.
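To illustrate that it is feasible, here is a minimal sketch of the export side using plain HTTP calls against the ThingsBoard REST API. The instance URL and credentials are placeholders, and the endpoint paths should be verified against your ThingsBoard version; the real scripts do the same for rule chains, device profiles, and so on:

    # Minimal export sketch (assumed endpoints; verify against your ThingsBoard version).
    import json
    import pathlib
    import requests

    TB_URL = "http://localhost:8080"  # placeholder instance URL
    USER, PASSWORD = "tenant@thingsboard.org", "tenant"  # placeholder credentials

    # Log in and obtain a JWT token for the X-Authorization header.
    resp = requests.post(f"{TB_URL}/api/auth/login",
                         json={"username": USER, "password": PASSWORD})
    resp.raise_for_status()
    headers = {"X-Authorization": f"Bearer {resp.json()['token']}"}

    # Page through the tenant's dashboards and dump each full dashboard to a JSON file.
    out_dir = pathlib.Path("dashboards")
    out_dir.mkdir(exist_ok=True)
    page = 0
    while True:
        r = requests.get(f"{TB_URL}/api/tenant/dashboards",
                         params={"pageSize": 100, "page": page}, headers=headers)
        r.raise_for_status()
        data = r.json()
        for d in data["data"]:
            dashboard_id = d["id"]["id"]
            # Fetch the full dashboard (including layout/widgets) by id.
            full = requests.get(f"{TB_URL}/api/dashboard/{dashboard_id}",
                                headers=headers).json()
            (out_dir / f"{dashboard_id}.json").write_text(json.dumps(full, indent=2))
        if not data.get("hasNext"):
            break
        page += 1

The import side is the mirror image: read each saved JSON file and POST it back to the corresponding save endpoint on the fresh instance.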
My server running TFS Express crashed. I managed to mount the disk and extract the mdf/ldf files for my TFS collection. Here is what I did next:
Built a new machine (with the same name/IP address) and installed SQL Server Express/TFS Express.
From SQL Server Management Studio, attached the mdf/ldf files. I can now see TFS_MyCollection as a new database.
From TFS Administrative console, clicked on "Attach Collection."
However, the new database is not being listed.
I went through a bunch of links on the Internet. https://social.msdn.microsoft.com/Forums/en-US/d949edf3-1795-448a-a1cc-39555ce87b50/tfs-2010-installation-error had a similar situation. Based on the suggestion, I had attached the database. I also looked at https://msdn.microsoft.com/en-us/library/ms404869(VS.80).aspx. However, this one talks about using backup/restore, which is not my case.
I must be missing some configuration step. Please advise. Regards.
You can't just attach a collection that was never detached.
You need to unconfigure your TFS instance (tfsconfig.exe setup /uninstall:ALL) and then restore all of the databases.
You will need to restore each collection database and the configuration DB; they are currently a matched set. Once you have all of the databases attached/restored, you need to run the setup and "configure application tier only".
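Roughly, the sequence looks like this (a sketch; collection names and paths are placeholders):

    rem 1. Unconfigure the freshly installed TFS instance
    tfsconfig.exe setup /uninstall:ALL
    rem 2. In SQL Server Management Studio, attach/restore Tfs_Configuration
    rem    and every Tfs_<Collection> database as one matched set
    rem 3. Re-run the TFS configuration wizard and choose
    rem    "Application-Tier Only", pointing it at the restored databases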
https://msdn.microsoft.com/en-us/library/ms404869.aspx
You need to follow the documentation for moving hardware. Make sure that you follow each step.
Note: You should take backups!
Is there any way to restore data in Neo4j?
I just lost all my data and want to restore Neo4j to its previous state.
Please help me with this.
Neo4j must be configured to run backups. If your server wasn't configured to create backups, then there is no way to restore your data using Neo4j. This is controlled by the Neo4j config option online_backup_enabled.
This feature is enabled by default in Neo4j 2.1.6 Enterprise. However, you have to manually run a backup in order for one to be created. So unless you ran a backup, you aren't going to find one that was taken automatically anywhere on your system. Sorry :-(
In the future, you can configure and run backups by following the Neo4j documentation.
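As a rough sketch for Neo4j 2.1.x Enterprise (addresses and paths are placeholders; check the documentation for your exact version):

    # conf/neo4j.properties: enable the online backup service
    online_backup_enabled=true
    online_backup_server=127.0.0.1:6362

    # then run a full backup into an empty directory
    ./bin/neo4j-backup -host 127.0.0.1 -to /mnt/backup/neo4j-backup

Run the backup command on a schedule (e.g. from cron) so that a recent copy always exists.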
I need to check out an entire source tree from one server and check it into another server. I'm attempting to script this in a FinalBuilder script, but am running into some snags. I'm able to check everything out, but when I attempt to check it into the new server it tells me there are no pending changes. Obviously I'm missing something, if this is even possible.
Has anyone done something similar to this, or does anyone know of a way I might accomplish this?
One more thing: if the source tree is empty on server 2, would I have to manually add the files before I can update them?
I would guess that the reason TFS is saying there are no pending changes is that you haven't checked out the files from server 2. This could get kind of ugly using a single directory, so I would recommend trying this:
Get (latest or specific version) from server 1 to C:\Server1Files...
Get and check out for edit everything from server 2 to C:\Server2Files...
Copy from C:\Server1Files\ to C:\Server2Files
Check in from C:\Server2Files
I think TFS is going to complain if you try to use a single directory here, as it would see the same directory mapped to two different workspaces (even though they're on different instances of TFS).
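In tf.exe terms, the whole sequence might look roughly like this (a sketch with placeholder paths; the two local folders are assumed to be workspaces mapped to server 1 and server 2 respectively):

    rem Workspace mapped to server 1
    cd /d C:\Server1Files
    tf get /recursive

    rem Workspace mapped to server 2: get, then pend edits on existing files
    cd /d C:\Server2Files
    tf get /recursive
    tf checkout * /recursive

    rem Copy the tree from the server 1 workspace over the server 2 workspace
    xcopy C:\Server1Files C:\Server2Files /E /Y

    rem Pend adds for files that are new to server 2 (covers the empty-tree case),
    rem then check everything in
    tf add * /recursive
    tf checkin

That also answers the last question: yes, files that don't exist yet on server 2 need a pending add (tf add) before the check-in will pick them up.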