Still new to Snowflake,
but I have the task of changing the names of external tables, streams, and tasks. I really can't find anything on the internet about it. All I need to do is rename each one, but it seems like ALTER for these objects doesn't include a rename option.
Thanks
I took a look at Serilog.Sinks.EventLog on GitHub and noticed there doesn't seem to be a way to set the Event ID of the logged event (example IDs here).
Is there a way to modify the sink so that this would be possible? Perhaps with some kind of specially formatted message? I don't know if I should put this here or on GitHub, so I'll try here first. :)
There isn't a mechanism currently for this - designing one seems tricky (but ultimately it'd be a great addition to the project!)
I would like to store my application's queries in a file, to take the queries out of the code.
The reason I need to do something like this is that on my project there is a person who makes all the necessary changes to the queries and sends them to us; then we have to go into the code, look for the specific query, and change it.
I was thinking of storing all the queries in a plist or a Localizable.strings file, so this person can directly change, in a single file, every query that needs to be changed.
But I am not sure whether this could cause a performance issue.
Has anybody done something like that?
I have created a graph database from a pile of 30k XML files, and I want to reuse this graph database for querying. Currently, I create the graph database every time I have to query something from it. Since the data set is huge, the database creation takes approx. 40 minutes. I am not aware of a way to reuse the existing database instead of recreating it every time. I would appreciate your help if you could tell me how to do this.
(Language: Java, IDE: IntelliJ, OS: Red Hat Linux)
I am new to this, but I saw there is a delete-database method in several code samples. Perhaps this is your case. Try commenting out the delete-database method or removing it, and only use start and shutdown.
I am trying to find out how to use a database without starting and shutting it down each time the Java code runs, perhaps with a REST API. I have no answer yet, but I am still searching.
First you should make sure your import logic and your query logic are not in the same code path, so you can call one without the other.
The database you create is stored in a directory.
If you're in server mode, it is specified in the config file neo4j-server.properties (look for org.neo4j.server.database.location).
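For reference, the relevant line in conf/neo4j-server.properties looks something like this (data/graph.db is just the usual default value; point it at the directory your import created):

org.neo4j.server.database.location=data/graph.db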
If you're in embedded mode, you pass the path of the database to the graph factory:
GraphDatabaseService graphDb = new GraphDatabaseFactory().newEmbeddedDatabase( "PATH/TO/NEO.DB" );
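As a rough sketch of the reuse case (not your actual code): open the store directory that the import already produced and query it, with no import or delete logic involved. This assumes a Neo4j 2.x-style embedded API, where newEmbeddedDatabase takes a path string and execute() runs Cypher (in 3.x the factory takes a java.io.File instead); the path and the query are placeholders.

import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.Result;
import org.neo4j.graphdb.Transaction;
import org.neo4j.graphdb.factory.GraphDatabaseFactory;

public class ReuseExistingDb
{
    // Placeholder: the directory your import code already created.
    private static final String DB_PATH = "PATH/TO/NEO.DB";

    public static void main( String[] args )
    {
        // Opens the existing store; nothing is imported or deleted here.
        GraphDatabaseService graphDb = new GraphDatabaseFactory().newEmbeddedDatabase( DB_PATH );

        try ( Transaction tx = graphDb.beginTx() )
        {
            // Any read-only Cypher works; counting nodes is just a sanity check.
            Result result = graphDb.execute( "MATCH (n) RETURN count(n) AS nodes" );
            System.out.println( result.resultAsString() );
            tx.success();
        }

        graphDb.shutdown();
    }
}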
What do you mean by reusing? If you want to use the database from more than one client at the same time, use Neo4j Server and REST (see http://docs.neo4j.org/chunked/snapshot/rest-api.html) and point the server to your database directory in conf/neo4j-server.properties.
Also, you can point a new embedded instance of Neo4j to the same database directory and open the database you created that way (it will be locked exclusively); see https://github.com/neo4j/neo4j/blob/master/community/embedded-examples/src/main/java/org/neo4j/examples/EmbeddedNeo4j.java#L35
Does that cover your use case?
/peter
I just had the same problem, which resulted from sloppy copy-pasting of the code snippets from the Neo4j documentation. Each time I ran the code, all previously created nodes were deleted.
Apart from deleting the removeData() method (as mentioned by Jose), you should also remove the following line in the createDb() method to prevent this:
FileUtils.deleteRecursively( new File( DB_PATH ) );
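For illustration only, here is roughly what createDb() looks like with that wipe left out; the class name, DB_PATH value, and the empty transaction body are placeholders loosely following the linked EmbeddedNeo4j example.

import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.Transaction;
import org.neo4j.graphdb.factory.GraphDatabaseFactory;

public class CreateDbWithoutWipe
{
    // Placeholder path, as in the linked example.
    private static final String DB_PATH = "target/neo4j-hello-db";
    private static GraphDatabaseService graphDb;

    public static void main( String[] args )
    {
        createDb();
        graphDb.shutdown();
    }

    static void createDb()
    {
        // The stock example starts with a call that deletes the whole store on every run:
        //     FileUtils.deleteRecursively( new File( DB_PATH ) );
        // Leaving it out keeps the data imported in earlier runs.

        graphDb = new GraphDatabaseFactory().newEmbeddedDatabase( DB_PATH );

        try ( Transaction tx = graphDb.beginTx() )
        {
            // ...create or read nodes and relationships here...
            tx.success();
        }
    }
}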
Maybe this still helps someone, even though the topic is old.
In my program I have multiple databases. One is fixed and cannot be changed, but there are also some others, the so-called user databases.
I thought I would have to open one connection for every database and connect to each data dictionary separately. Is it possible to connect to more than one database with a single connection by handing over the data dictionary filename? By the way, I am using a local server.
Thank you very much,
André
P.S.: Okay, I might have found the answer to my problem.
The keyword is CreateDDLink. The procedure connects to another data dictionary, but a master dictionary has to be set first.
Links may be what you are looking for, as you indicated in the question. You can use the API or SQL to create a permanent link alias, or you can create links on the fly.
I would recommend reviewing this specific help file page: Using Tables from Multiple Data Dictionaries.
For a permanent alias (using SQL), look at sp_createlink. You can either create the link to authenticate as the current user or set it up to authenticate as a specific user. Then use the link name in your SQL statements:
select * from linkname.tablename
Or, dynamically, you can use the following, which will authenticate as the current user:
select * from "..\dir\otherdd.add".table1
However, links are only available through SQL. If you want to use the table directly (i.e. via a TAdsTable component), you will need to create views instead. See KB 080519-2034. The KB mentions that you can't post updates if the SQL statement for the view results in a static cursor, but you can get around that by creating triggers on the view.
I haven't found any mention of IdentityPart in the Orchard documentation, despite it being used in some main modules like Comments. I took a look at some relevant sources, but that didn't help me fully understand its purpose.
So what's it for and when should I use it?
Thanks in advance!
This is part of the import/export feature. In order to be able to move content around servers reliably, and in a repeatable way that takes into account updated and new items, we need a way to identify content items that's not just a simple id. Some content items have a path, but not all types do (widgets, users, etc.). The export/import hooks for any part can participate in building the identity of the item and in recognizing it on import. The Routable part, for example, implements the use of the path. But for those types that are not routable, you can add the IdentityPart to fulfill that role. The identity that gets exported in the end is a composite of all the contributed ids.
Makes sense?