I am using Electron and PouchDB to sync data to a remote CouchDB. I have a remote dev CouchDB that I use for testing purposes, and a remote production CouchDB.
When I run `electron .`, the application syncs data from the remote DB into Electron's local PouchDB, and vice versa.
Things work fine, but when I change the remote database URL from dev to prod, the locally stored dev data gets synced to the production CouchDB.
Is there any way (programmatically) to stop this from happening?
You cannot just change the URL of the remote database from dev to production.
PouchDB does not know that these are two different databases and therefore starts syncing. If you want a dev and a production database, you need to create two local databases: one that is synced with dev and a separate one that is synced with production.
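A minimal sketch of that setup (the database names and remote URLs here are placeholders, not from the question): keep one local database per environment, so the dev replica can never be pointed at the production remote.

```javascript
// Sketch: one local database per remote environment, so dev data
// can never leak into the production CouchDB. Names and URLs are
// assumptions for illustration.
function dbPairFor(env) {
  if (env !== 'dev' && env !== 'prod') {
    throw new Error('unknown environment: ' + env);
  }
  return {
    localName: 'mydb-' + env, // e.g. mydb-dev, mydb-prod
    remoteUrl: 'https://' + env + '.example.com/mydb'
  };
}

// Usage with PouchDB (requires the pouchdb package):
//   const PouchDB = require('pouchdb');
//   const { localName, remoteUrl } = dbPairFor('prod');
//   const local = new PouchDB(localName);
//   local.sync(remoteUrl, { live: true, retry: true });
```

Because each environment gets its own local name, switching environments switches to a different local replica instead of replicating the old one upstream.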
The strength of CouchDB is that it can sync with any other Couch.
If you require two-way replication (aka sync), you might consider filtered replication [1].
However, if you only need to replicate remote documents to the local database, simply use one-way replication [2], for example:
PouchDB.replicate('http://<remote host>/mydb', 'mydb');
[1] PouchDB Filtered Replication
[2] PouchDB Replication
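A sketch of both options, assuming a local database named mydb and a hypothetical remote URL; a replication filter is just a predicate over documents, and PouchDB replicates only those for which it returns true (here, a made-up `type` field):

```javascript
// Sketch only; the remote URL and the `type` field are assumptions.
// The filter predicate decides which documents replicate.
const isPublic = doc => doc.type === 'public';

// With the pouchdb package installed this would be:
//   const PouchDB = require('pouchdb');
//   // one-way: remote -> local only; local changes never leave the machine
//   PouchDB.replicate('http://remote.example.com/mydb', 'mydb');
//   // two-way sync, but only for documents passing the filter
//   new PouchDB('mydb').sync('http://remote.example.com/mydb', {
//     live: true, retry: true, filter: isPublic
//   });
```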
I have integrated Spring Cloud Data Flow and uploaded an application JAR into the panel. However, whenever I restart the Data Flow application, I lose the app mapping with the JAR. How can I register it permanently in Spring Cloud Data Flow?
I tried various ways to register the app permanently, but all in vain.
Thanks,
Dhruv
You need to add a data source configuration to the Spring Cloud Data Flow application.
By default, it uses an embedded H2 database, and hence the deployment gets lost on restart.
Once I added the DB configuration, it was resolved.
Add the following lines in application.properties for MySQL:
server.port=8081
spring.datasource.url=jdbc:mysql://localhost:3306/app_batch
spring.datasource.username=root
spring.datasource.password=
spring.datasource.driver-class-name=com.mysql.jdbc.Driver
spring.jpa.hibernate.ddl-auto=none
SCDF requires a persistent RDBMS like MySQL, Oracle and others for production deployments.
The app-registry (i.e., a registry for app coordinates), task/batch execution history, stream/task definitions, audit trails, and other metadata about all of your deployments via SCDF are tracked in the persistent database.
If you don't provide one, by default, SCDF uses H2 - an in-memory database. Though it allows you to bootstrap with this database rapidly, it should not be used in production deployments. If the server restarts/crashes, the in-memory footprint goes away and a new session is created. That's why persistent storage is a requirement, so it can survive independently even when SCDF restarts.
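If you prefer not to bake the datasource into application.properties, the same settings can be passed as arguments when starting the server. A sketch, assuming a locally downloaded server JAR (the JAR name and credentials are placeholders):

```shell
# Sketch: pass the MySQL datasource to the SCDF server at startup
java -jar spring-cloud-dataflow-server.jar \
  --spring.datasource.url=jdbc:mysql://localhost:3306/app_batch \
  --spring.datasource.username=root \
  --spring.datasource.password=secret \
  --spring.datasource.driver-class-name=com.mysql.jdbc.Driver
```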
I've set up AWS S3 to store my images on Rails 5.2 with Active Storage in production mode. This is great; however, I've noticed that if I copy the database down from Heroku to my local machine so that I can work on the current platform state, I get missing images, because the Active Storage blob records point at files in the production bucket.
I've written some statements so that it just ignores the call and doesn't give nil errors, etc. My question is...
Should I set up my Rails app to store images on AWS S3 in development too, when working locally? This doesn't seem right; however, I am unsure how I can just copy the production database down to my local machine and have the images appear too (with the Active Storage blob URLs resolving correctly). I'm guessing it's a config issue on the local side coupled with Active Storage... (head scratching).
Has anyone else come across this? Thank you.
This is the way I've been using Active Storage:
I have two databases (one local for development and the other on Heroku for production) and two cloud storage buckets (dev/prod). When I'm developing I use the same cloud storage service as in production, but each database is associated with its own bucket.
This way I can test in development under the same conditions as production.
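A sketch of what that two-bucket setup might look like in Rails (bucket names, region, and credential keys are placeholders, not from the answer): config/storage.yml defines one S3 service per bucket, and each environment selects its own.

```yaml
# config/storage.yml -- two S3 services, one bucket per environment
amazon_dev:
  service: S3
  access_key_id: <%= Rails.application.credentials.dig(:aws, :access_key_id) %>
  secret_access_key: <%= Rails.application.credentials.dig(:aws, :secret_access_key) %>
  region: us-east-1
  bucket: myapp-dev

amazon_prod:
  service: S3
  access_key_id: <%= Rails.application.credentials.dig(:aws, :access_key_id) %>
  secret_access_key: <%= Rails.application.credentials.dig(:aws, :secret_access_key) %>
  region: us-east-1
  bucket: myapp-prod

# config/environments/development.rb
#   config.active_storage.service = :amazon_dev
# config/environments/production.rb
#   config.active_storage.service = :amazon_prod
```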
Hope this may help :-)
I want to create a project in MVC that works in online mode and offline mode. For example, when a user works offline and no internet connectivity is available,
all data is stored on the local machine; when internet connectivity becomes available, all the data is pushed to the server.
Please help, how can I do this?
Thanks
For that, you have to use a third-party sync service (SQL sync, the Microsoft Sync Framework, etc.).
You have to create another project which runs at a certain time and executes your sync process from the local system database to the live server database and vice versa.
You should use a GUID for your unique (PK) value, because at sync time the live server table receives incoming data from every local server, so your local DB tables' PKs are no longer usable in the live server DB tables.
Note: for this type of offline/online sync process, your PK column should be of type VARCHAR(36) and store a GUID value.
I have a database that I created and populated using the development environment in Rails 3.2. I deployed the database onto a server using git and phusion passenger. Currently the server is still running the development database because it is the one that is populated. I have 2 questions:
1) If I switch the server to the production environment, will all of my data transfer over? If not, how do I transfer the current data into the production database?
2) If I push updates to the server from my personal machine using the development database, and the server is using the production database, will all of the data that has been entered into the production database by users stay intact? Or do I have to configure it not to erase data when I pull my project to the server from git?
For the first question:
If you change the environment to production, it will use the database which is configured in config/database.yml file. You can take a backup of the development database and import the backup file if you want to use the same database in production.
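For example, with PostgreSQL the backup-and-import step might look like this (database names are placeholders; use the equivalent tools for your RDBMS):

```shell
pg_dump myapp_development > backup.sql   # dump the populated dev database
psql myapp_production < backup.sql       # load it into the production database
```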
For the second question:
By "push updates to the server from my personal machine", I assume you are talking about code changes being pushed, not anything DB-related. Pull/push operations with git will never affect the way you interact with the database. The data from users in the production DB will remain intact.
1) If I switch the server to the production environment, will all of my data transfer over? If not, how do I transfer the current data into the production database?
Generally it will not, by default. The database.yml file has a group each for development, testing, and production. Development and testing will tend to be local, and production (hopefully) is not on your local machine.
2) If I push updates to the server from my personal machine using the development database, and the server is using the production database, will all of the data that has been entered into the production database by users stay intact? Or do I have to configure it not to erase data when I pull my project to the server from git?
The actual data should stay intact; however, remember to make your migrations compatible with the data that is already up there. One big thing to watch out for is adding a mandatory field to an existing table without repopulating earlier records; this will break your deployment.
You can get the database from production/send it back by using your environment's resources (I think Heroku uses pg:dump).
WRT your comment on GhostRider's answer, are you using the production DB remotely from development?/Where is your deployment? What does your database.yml file look like? (remember to not include your passwords :D, I will update answer on reply).
I am wondering what might be involved in connecting an Excel VBA application to the hosted database behind a Heroku Ruby on Rails application. Is this possible? My application cannot accomplish all of the functionality I need in the cloud only. The VBA application would be used as part of a system to print and encode proximity "smart" cards. Thank you for any and all tips on how best to implement this.
There are a number of options here, depending on how you want to connect. I would consider exposing an API in your Heroku app that your VBA could consume (if possible).
Failing that:
The present Shared Database won't let you connect directly to the database, so that's a no-no.
The new beta shared postgres 9.1 (https://postgres.heroku.com/blog/past/2012/4/26/heroku_postgres_development_plan/) will let you connect to it from outside.
Use one of the Heroku DB add-on providers, such as ClearDB, a MySQL provider which allows direct access to the database.
Bring your own database, which you could host on an external server and have your Heroku app connect to (watch out for latency); then you can connect your printing app directly to that DB.