Does TDengine support data deletion? Or, a more general question: do time-series databases usually support data deletion?
No, but you can set the KEEP parameter when creating the database:
CREATE DATABASE [IF NOT EXISTS] db_name [KEEP keep] [DAYS days] [UPDATE 1]
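For example, assuming TDengine 2.x syntax (the database name here is made up), this keeps data for 365 days, in 10-day data files, with updates allowed:
CREATE DATABASE IF NOT EXISTS sensor_db KEEP 365 DAYS 10 UPDATE 1;
Anything older than the KEEP window is dropped automatically, which is the usual substitute for row-level deletes in time-series databases.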
Aren't we using star schemas or snowflake schemas to create data marts?
So can we say that a data mart is a synonym for a star schema?
Yes or no, I need justification please.
No, you can't say that a Data Mart is a synonym for a Star Schema; it is a broader concept.
A Data Mart is a specialized data warehouse: a platform that consists of hardware, software, and data.
A Star Schema is a data structure optimized for querying. It's one of the components of a Data Mart, and not the only type of structure available (e.g., you could use a flat table instead).
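To make the distinction concrete, here is a minimal, hypothetical star schema (all table and column names invented for illustration): a central fact table referencing its dimension tables.
CREATE TABLE dim_date    (date_id INT PRIMARY KEY, full_date DATE, cal_year INT, cal_month INT);
CREATE TABLE dim_product (product_id INT PRIMARY KEY, name VARCHAR(100), category VARCHAR(50));
CREATE TABLE fact_sales (
    sale_id    INT PRIMARY KEY,
    date_id    INT REFERENCES dim_date(date_id),
    product_id INT REFERENCES dim_product(product_id),
    quantity   INT,
    amount     DECIMAL(12,2)
);
A Data Mart might contain this star schema, several of them, or a flat table instead; the schema is just one design choice inside the mart.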
The scenario: I have two databases, db1 and db2. More than 45 stored procedures in db2 join to tables in db1. I can do this with db1.dbo.tablename, but when I rename db1 to db3, all the db2 stored procedures fail. What approach should I take? Per the client's requirement, he wants to rename db1 from time to time. Find-and-replace of db1 with db3 in the scripts is not a good approach, because if the client later asks to rename db3 to yet another name, it becomes permanently tedious and unprofessional work. What should I use here?
Create synonyms. Reference the synonyms in your objects. If the database changes, you just have to change one synonym per object. There's no avoiding changing the logical name in your code but if you control access through synonyms you only have to change it in one place.
For example
Without synonyms:
Six Stored Procedures in db2 refer to objects db1..Table1 and db1..Table2
When you rename db1 to db7, you need to alter two objects in six stored procedures.
With synonyms:
Six Stored Procedures refer to synonyms snTable1 and snTable2 (in the local database - note no database reference here)
The synonym snTable1 refers to db1.dbo.Table1
When you rename db1 to db7, you need to alter two synonyms to point at the new database. No changes to stored procedures are required. All objects referring to the synonyms will still work.
This requires you to create synonyms in db2 pointing at objects in db1, and an initial rewrite of your stored procedures in db2 to refer to local synonyms instead of database-qualified objects. But you would need to do that anyway, right?
Example Procedure
Create a synonym in db2 called snTable1 that refers to Table1 in db1
USE db2;
CREATE SYNONYM snTable1 FOR db1.dbo.Table1;
Alter your 45 stored procedures to refer to snTable1, not db1.dbo.Table1. You only have to do this once. Note these stored procedures are now referring to objects in the local database.
If your database gets renamed to xyz, recreate the synonym:
USE db2;
DROP SYNONYM snTable1;
CREATE SYNONYM snTable1 FOR xyz.dbo.Table1;
This is only useful if there are far more stored procedures/views than objects.
If you wish to change these on the fly, you could probably use DMO or PowerShell, or generate some T-SQL to do it. You just run the above commands against the database with a user that has suitable permissions.
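For example, a sketch of the generate-some-T-SQL approach (this assumes the synonyms live in db2 and currently point at db1; review the generated script before executing it):
USE db2;
SELECT 'DROP SYNONYM ' + QUOTENAME(SCHEMA_NAME(schema_id)) + '.' + QUOTENAME(name) + ';'
     + ' CREATE SYNONYM ' + QUOTENAME(SCHEMA_NAME(schema_id)) + '.' + QUOTENAME(name)
     + ' FOR ' + REPLACE(base_object_name, '[db1]', '[db3]') + ';'
FROM sys.synonyms
WHERE base_object_name LIKE '[[]db1].%';
Each row of the result is a ready-to-run statement retargeting one synonym at db3.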
Another very unpleasant option that may work is to create a linked server to your local database with a hard-coded login whose default database is the one you want. But it hides the user that is really accessing the object, and it probably introduces performance issues. In short, it's bad practice. I also question why the database needs to be renamed so often; no end user should ever see its name. SharePoint databases, for example, have hideous names, but that is irrelevant to the end user.
Example linked server procedure
Create a user that only has access to db1, and whose default database is db1
Create a linked server (called MyLinkedServer for example) on the SQL Server to the local database using the user created in step 1
Alter all your code in db2 to use four-part naming through this linked server: SELECT * FROM MyLinkedServer...Table1
If the database name changes, there are probably no changes required
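For what it's worth, a sketch of step 2 (the instance name, login, and password below are placeholders):
EXEC sp_addlinkedserver @server = N'MyLinkedServer', @srvproduct = N'',
     @provider = N'SQLNCLI', @datasrc = N'localhost';  -- or your instance name
EXEC sp_addlinkedsrvlogin @rmtsrvname = N'MyLinkedServer', @useself = N'False',
     @locallogin = NULL, @rmtuser = N'db1_user', @rmtpassword = N'placeholder';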
This is just a theory, and even if it works, it's bad practice... worse practice than needing to rename your database.
Everyone familiar with MySQL has likely used the mysqldump command, which can generate a file of SQL statements representing both the schema and data in a MySQL database.
These SQL text files are commonly used for many purposes: backups, seeding replicas, copying databases between installations (e.g., copying prod DBs to staging environments), and so on.
Is there a similar tool for Neo4j that can dump an entire graph into a text file of Cypher statements, that when executed on an empty database would reconstruct the original data?
Thanks.
In Neo4j version 2 (e.g. 2.0.0M3), using neo4j-shell, you can use the command
dump
which will create the Cypher statements (pretty much like mysqldump would). To read the file back in, you can use
cat dump.cql | neo4j-shell
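For example, to produce the dump file in the first place (assuming neo4j-shell is on your PATH and your version supports the -c option):
neo4j-shell -c dump > dump.cql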
Cypher is just a query language for Neo4j, just as SQL is for MySQL and other relational databases. If you wish to transfer the db, you just need to copy the folder containing the database files. Simple.
For example, my folder simple-graph contains all the db files. Just copy the folder and store it at some other location. You can directly start using it as:
GraphDatabaseService graphDb = new EmbeddedGraphDatabase(DB_PATH); // DB_PATH is the path to the new location
You can use the procedure apoc.export.cypher.all() to dump all the data in your database.
For example, you can dump the database into a single file called dump-file.cypher:
neo4j@neo4j> CALL apoc.export.cypher.all('dump-file.cypher');
For details of the procedure, please see the documentation: https://neo4j.com/labs/apoc/4.4/overview/apoc.export/apoc.export.cypher.all/.
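Note that with APOC 4.x, writing to files has to be enabled first, and the procedure accepts an optional config map; for example (the format option below makes the output loadable by cypher-shell):
# in apoc.conf
apoc.export.file.enabled=true
then:
CALL apoc.export.cypher.all('dump-file.cypher', {format: 'cypher-shell'});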
I am learning Core Data and want to create a database with 10,000 records.
What is the easiest way to input these data into a database and read them back with Core Data?
Here is a good tutorial by Jeff LaMarche on how to seed Core Data. In a few words: you have to parse some data source (plist, SQLite, ...) and store it in Core Data.
No magic here: you write a loop that iterates through your data source, creates a managed object for each data item, and saves the result with Core Data.
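A minimal sketch of that loop in Swift (the entity name Item, its name attribute, and the seed function are all hypothetical; adapt them to your model):
import CoreData

// Assumes an existing Core Data stack; `context` is its NSManagedObjectContext.
func seed(context: NSManagedObjectContext, names: [String]) throws {
    for name in names {
        // One managed object per source record.
        let item = NSEntityDescription.insertNewObject(forEntityName: "Item", into: context)
        item.setValue(name, forKey: "name")
    }
    try context.save()  // a single save at the end beats 10,000 individual saves
}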
Another way is to use SQLite Database Browser 2. It is free, and you can download it from SourceForge.
With it, open the SQLite database which Core Data has created; you can then see your entities and their attributes as tables, and inserting the information into those tables is easy.
Given a MySQL database and a set of corresponding ActiveRecord models similar to:
Test -< Categories -< Questions
I need a way to quickly dump the contents of Test #1 to a file and then restore it on a separate machine. When Test #1 is re-instantiated in the database, all of the relational data should be intact (all foreign keys are maintained; the test's Categories and Questions are all restored). What's the best way to do this?
Try the Jailer subsetting tool. It's for dumping subsets of relational data, keeping referential integrity.
I'd try using YAML: http://www.yaml.org/
It's an easy way to save and load hierarchical data (in a human-readable format), and there are a number of implementations for Ruby. They extend your classes, adding methods to save and load objects to and from YAML files.
I typically use it when I need to save and reload a "deep copy" of a large multi-level hash of objects.
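A rough sketch of that idea for the models in the question (the association names and file path are assumptions, and a real restore would need more care):
require 'yaml'

# Dump: serialize Test #1 and its tree of categories and questions.
tree = Test.includes(categories: :questions).find(1)
           .as_json(include: { categories: { include: :questions } })
File.write('test_1.yml', tree.to_yaml)

# Restore: walk the hash and recreate the rows, letting Rails assign
# fresh primary keys so the foreign keys stay consistent.
tree = YAML.load_file('test_1.yml')
test = Test.create!(tree.except('id', 'categories'))
tree['categories'].each do |cat|
  category = test.categories.create!(cat.except('id', 'test_id', 'questions'))
  cat['questions'].each do |q|
    category.questions.create!(q.except('id', 'category_id'))
  end
end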
There are options out there: replicate is outdated and known to have issues with Rails 4 and Ruby 2; activerecord-import looks good, but it doesn't have a dump counterpart.