Can I move CKRecords from one CKContainer to another?

I had to change my Apple ID, but I have numerous important records in the production environment in a container of my old Apple ID.
Can I export them while maintaining the dependencies and record authors?

Short answer: No
But you could manually copy records from one container to the other (or first create an export and then an import). The copied records will look like new data in the new container, and they will get new metadata, so the creator will be the account used to migrate the data. You would also need an app (OS X or iOS) that can connect to the production public container.
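As an illustration, here is a minimal Swift sketch of such a manual copy, assuming a hypothetical "Note" record type and an app whose entitlements include both container identifiers. Reusing the original record IDs keeps reference fields between records intact, but creator metadata cannot be carried over:

import CloudKit

// Public databases of the old and new containers (identifiers are placeholders).
let oldDB = CKContainer(identifier: "iCloud.com.example.OldApp").publicCloudDatabase
let newDB = CKContainer(identifier: "iCloud.com.example.NewApp").publicCloudDatabase

// Fetch every record of the (hypothetical) "Note" type from the old container.
let query = CKQuery(recordType: "Note", predicate: NSPredicate(value: true))
oldDB.perform(query, inZoneWith: nil) { records, error in
    guard let records = records else { return }
    for old in records {
        // Re-create each record; reusing the record ID preserves references
        // between records, but the creator will be the migrating account.
        let copy = CKRecord(recordType: old.recordType, recordID: old.recordID)
        for key in old.allKeys() {
            copy[key] = old[key]
        }
        newDB.save(copy) { _, error in
            if let error = error { print("Save failed: \(error)") }
        }
    }
}

For anything beyond a small record set you would page through results with CKQueryOperation and its cursor, and save in batches with CKModifyRecordsOperation, rather than a single perform call.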

Related

How do I change the Firestore database in a copied iOS project?

I have created an Xcode project and implemented Firebase.
I have copied the project and renamed it.
I have created a new Firebase project and replaced the Firebase config file (GoogleService-Info.plist).
I have reinstalled the pods.
I checked the code looking for a reference to the database.
After all this work, the new application still uses the old database from the previous project.
When I create a new user, it is added to the old project.
Does anybody have a clue?
I don't want to share the code.
I'm assuming you're trying to access your database directly using the REST API. GCP checks your credentials to know which database you're accessing.
If you're running your app locally, there are a couple of ways you could be connecting to the database. If you're using a service account key, authentication is done via the 'GOOGLE_APPLICATION_CREDENTIALS' variable in your environment. You might need to change that to a new service key for your new project. Watch out, as these keys give full access to your database. You can check other access options here: https://firebase.google.com/docs/firestore/use-rest-api.
If you're accessing a local emulator (which I find unlikely), you can find more info here: https://firebase.google.com/docs/emulator-suite/connect_firestore
If you're having this problem in the deployed app, GCP will by default use the App Engine service account to access the project's database. You might be constructing your Firestore API reference with the name of another project. This would probably only work if you've done this: google function: accessing firestore database of another project.
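For example, pointing the credentials variable at a key from the new project (the path below is a placeholder) would look like:

export GOOGLE_APPLICATION_CREDENTIALS="/path/to/new-project-service-account.json"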
1- Make sure you have changed the security rules related to the new database.
2- Make sure that, when you created the app in Firebase, you used the same bundle identifier (see the sketch after this list for a way to verify which project the app actually loads).
3- You can't use two different databases in the same project.
4- Don't forget to enable the database and users in the new project.
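To rule out the app silently falling back to the old configuration, you can configure Firebase explicitly from the plist and log which project it resolves to. A minimal Swift sketch, assuming a standard CocoaPods setup:

import Firebase

// Load the Firebase options explicitly from the bundled plist instead of
// relying on the default lookup, then print which project will be used.
func configureFirebase() {
    guard let path = Bundle.main.path(forResource: "GoogleService-Info", ofType: "plist"),
          let options = FirebaseOptions(contentsOfFile: path) else {
        fatalError("GoogleService-Info.plist not found in the app bundle")
    }
    print("Configuring Firebase for project: \(options.projectID ?? "unknown")")
    FirebaseApp.configure(options: options)
}

If the printed project ID is still the old one, the copied target is bundling the old plist; check the file's target membership in Xcode.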

Neo4J Connection issues to local project

I am really sorry to ask a simple question like this, but it is getting frustrating. I installed Neo4j 4.0.4 on my Windows machine, created a new project as shown in the official tutorial video, and set a password for my local graph. Oddly, the tutorial video ends right after setting the password and opening the browser, without showing how to perform Cypher queries on this newly created database. In Neo4j Desktop my database is shown correctly and it seems to be up and running.
However, when I try to connect to this database via the browser, I do not see the database at all. Why does connecting to the server require a username and password, if you only ever set a password for your database? The default neo4j user can see the system and default databases, but not my project database. In addition, I cannot link files from the project directory in Cypher queries. I tried to disable authentication, but it did not help at all.
When I issue the SHOW DATABASES command, it does not list my database either.
Update / Edit:
It seems I misunderstood the concept of projects. Every database is named neo4j (the default), regardless of the name specified in the project. However, I still cannot access project files. So far, I have copied the files manually into the database's "import" directory, but I guess that is not the intended way.
After importing data into this default database, it still shows no data in the project itself.
Data files in the import directory are not automatically imported into the DB, because Neo4j has no idea how you want to store that data as nodes and relationships.
So, it is up to you to determine your desired data model, and then write the appropriate code to enforce it.
You can take a look at the Neo4j documentation on LOAD CSV to learn how to import CSV data (probably the most commonly used import data format).
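As a minimal sketch, assuming a hypothetical people.csv with name and city columns placed in the import directory:

// Each row becomes a Person node linked to a City node.
LOAD CSV WITH HEADERS FROM 'file:///people.csv' AS row
MERGE (p:Person {name: row.name})
MERGE (c:City {name: row.city})
MERGE (p)-[:LIVES_IN]->(c);

Using MERGE rather than CREATE keeps the import idempotent if you re-run it.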

Is there any "load policy" for new models in the same way as there is for new versions in TensorFlow Serving?

I have read on the official website that we can set a version policy to preserve resources or availability for new versions, but I haven't found anything about loading new models. I'm using TensorFlow Serving with Docker, and I want to know the behavior if, for example, my allocated memory is full and I try to load a new model.
Thanks a lot!!
You can load new versions of the same model by simply adding a new version to the folder, and you can load new models with a little bit of extra work.
Add new version of a model
To load new versions of the same model, you need a folder hierarchy where each new version is added to the model folder with an incrementing number. Imagine that you have this folder structure:
C:/mymodelfolder/
L->resnet-model
L->1
L->nlp-model
L->1
If you wish to load v2 of nlp-model, all you need to do is put v2 of the model in a folder called 2, like so:
C:/mymodelfolder/
...
L->nlp-model
L->1
L->2 //new model is here
Within a second or so, tf-serving should discover and load that version (you can discard one of the versions later, or serve both in an A/B manner with the right configuration).
Add new model to be served
If you wish to load another model without restarting tf-serving, you first need to copy the model to another folder and then send a gRPC request to the exposed port of the app, using a ReloadConfigRequest proto that lists all the models to be served. This way you can add/remove models and specify versions dynamically.
There is also a flag, --model_config_file_poll_wait_seconds, which tells tf-serving to poll the model.config file for changes, but I couldn't get that to work in the latest version on Docker.
For official docs and more you may visit https://www.tensorflow.org/tfx/serving/serving_config#model_server_config_details
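Following the format described in those docs, the model.config for the layout above might look like the sketch below (paths assume the model folder is mounted at /models inside the container); the same ModelServerConfig content is what a ReloadConfigRequest carries:

model_config_list {
  config {
    name: "resnet-model"
    base_path: "/models/resnet-model"
    model_platform: "tensorflow"
  }
  config {
    name: "nlp-model"
    base_path: "/models/nlp-model"
    model_platform: "tensorflow"
    # Serve v1 and v2 side by side for the A/B setup mentioned above.
    model_version_policy { specific { versions: 1 versions: 2 } }
  }
}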
Memory Behavior
I want to know the behavior if, for example, my allocated memory is full and I try to load a new model.
When you update a model to a new version, the previous version gets unloaded and the new one gets loaded, unless you configure them to be served together. So if the first version has a different size than the second, that difference will be reflected in memory usage.
If you happen to load a model which would exceed the memory allocated to the app/container/environment, then unfortunately, in the current version of the app, it will shut itself down. There is a lengthy conversation and some workarounds in this issue, which you may want to check out: https://github.com/tensorflow/serving/issues/1215

Can Liquibase be used to work with two different schema versions of the DB at the same time?

I'm trying to overcome the following issue...
I have a MariaDB database that is used for an iOS application.
I'm about to release a new version to the Apple App Store that uses a different version of the database schema, meaning all achievement-related tables are modified.
Can Liquibase be used to configure the changes so that selected users can connect and work with the schema as if the old version were still in place, while the actual changes are made against the new schema?
Before I release the application, I need to provide Apple with a test version so they can confirm it. I want the users connecting from Apple to see the new achievements features and work with the new schema, while regular users can still use the previous version of the application and work with the old schema, which behind the scenes actually updates the database according to the new schema.
I hope I explained myself properly.
Can Liquibase do that? Or is it just like a Git for DB changes?
Thank you! :)

Start with existing graph db

I was just watching the Belgium Beer demo and I would like to replicate the same process to start structr with an existing neo4j graph.
Unfortunately, if I do the following steps:
Extract the downloaded structr folder.
Create a folder structr/db and copy the content of my graph.db.
Start structr with appropriate version of this command:
java -cp lib/*;structr-ui-1.1-SNAPSHOT-201505231136.f596a.jar org.structr.Server
I get the following error:
SEVERE: Vital service NodeService failed to start: Error starting
org.neo4j.kernel.EmbeddedGraphDatabase,
c:\Users\DataToValue\Documents\structr.\db. Aborting
Any idea how I could start a structr project with an existing graph db?
Not entirely sure what your goal is, but... you cannot simply use a product with an existing database, just because it uses a database brand which matches what you're already using (in this case, Neo4j). Structr (or any product, for that matter), will have its own data schema, its own product-specific metadata, etc. There's really no way to simply swap out a product's database and swap in your own (unless it was essentially a backup/instance from that product's database content).
With the upcoming Structr 2.1 version, it is possible to specify an external/existing Neo4j DB instance, as described here: https://stackoverflow.com/a/43583403/1821792
