Say I have Hasura running in a container (inside Kubernetes) and I want this Hasura container to connect to 3 different Postgres databases.
Is there a way to configure this without using the Hasura console web page, since this has to do with scaling later on?
You can use the pg_add_source API to dynamically add new Database sources to Hasura.
Conversely, you can use pg_drop_source to remove them.
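For example, a call along these lines (the endpoint, source name, env var, and admin secret are placeholders for your own values) registers an additional database; repeat it once per database:

curl -X POST https://your-hasura-host/v1/metadata \
  -H 'Content-Type: application/json' \
  -H 'X-Hasura-Admin-Secret: <admin-secret>' \
  -d '{
    "type": "pg_add_source",
    "args": {
      "name": "reports_db",
      "configuration": {
        "connection_info": {
          "database_url": { "from_env": "REPORTS_DB_URL" }
        }
      }
    }
  }'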
The above approaches would work in a dynamic environment where databases are being added and removed regularly. If they're more static, you might want to consider programmatically manipulating the metadata files and then applying the changes using metadata apply instead.
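If you go the metadata route with config v3, the sources live in metadata/databases/databases.yaml, roughly like the sketch below (source names and env vars are placeholders), and hasura metadata apply pushes them to the running instance:

- name: default
  kind: postgres
  configuration:
    connection_info:
      database_url:
        from_env: HASURA_GRAPHQL_DATABASE_URL
  tables: "!include default/tables/tables.yaml"
- name: reports_db
  kind: postgres
  configuration:
    connection_info:
      database_url:
        from_env: REPORTS_DB_URL
  tables: "!include reports_db/tables/tables.yaml"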
I would like to enable caching in ArangoDB, automatically when my app start.
I'm using docker-compose to start the whole thing but apparently there's no simple parameter to enable caching in ArangoDB official image.
According to the doc, all the files in /docker-entrypoint-initdb.d/ are executed at container start. So I added a js file with that code:
require('@arangodb/aql/cache').properties({mode: 'on'});
It is indeed executed but caching doesn't seem to be enabled (from what I see with arangosh within the container).
My app is a JS app using arangojs, so if I can do it this way, I'd be happy too.
Thanks!
According to the performance and server config docs, you can enable caching in several ways.
Your method of adding require("@arangodb/aql/cache").properties({ mode: "on" }); to a .js file in the /docker-entrypoint-initdb.d/ directory should work, but keep an eye on the logs. You may need to redirect log output with a different driver (journald, syslog, etc.) to see what's going on. Make sure to run the command via arangosh to see if it works.
If that's a bust, you might want to see if there is a way to pass parameters at runtime (such as --query.cache-mode on). Unfortunately, I don't use Docker Compose, so I can't give you direct advice here, but try something like -e QUERY.CACHE-MODE=ON.
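I can't test this, but since the official image's entrypoint forwards extra arguments to arangod, overriding the command in docker-compose.yml may be the simplest route (the image tag and password below are just examples):

services:
  arangodb:
    image: arangodb:3.11
    environment:
      - ARANGO_ROOT_PASSWORD=example
    command: arangod --query.cache-mode on
    ports:
      - "8529:8529"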
If there isn't a way to pass params, then you could modify the config file: /etc/arangodb3/arangod.conf.
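Command-line options map onto sections of that config file, so --query.cache-mode should correspond to something like this (my assumption of the section layout, so double-check against your arangod.conf):

[query]
cache-mode = on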
And don't forget about the REST API methods for system management. You can access AQL configuration (view and alter) in the Web UI by clicking on the Support -> Rest API -> AQL.
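Since your app already uses arangojs, another option (an untested sketch; the URL and credentials are placeholders) is to hit the query-cache properties endpoint from the app at startup via db.route:

// Enable the AQL query results cache through the HTTP API at app startup.
const { Database } = require("arangojs");

const db = new Database({
  url: "http://localhost:8529",                      // placeholder
  auth: { username: "root", password: "example" },   // placeholder
});

// PUT /_api/query-cache/properties adjusts the cache globally.
db.route("_api/query-cache/properties")
  .put({ mode: "on" })
  .then(() => console.log("AQL query cache enabled"))
  .catch((err) => console.error("Could not enable query cache", err));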
One thing to keep in mind - I'm not sure if the caching settings are global or tied to a specific database. View the configuration on multiple databases (including _system) to test the settings.
I'm new to Docker, so I want to know the best approach for using it. I have a project that needs three components to work:
A JBoss application server
PostgreSQL
A Spring Boot application
So, based on it my questions are:
1) Should I have one Docker image for each component mentioned above? If so, why not just put them all together? My idea of Docker is to simplify deploying an application, so putting everything together would make it easy to install this app in another environment, right?
2) If so (one Docker image per component): Spring Boot is just a "java -jar" command, so is it really necessary to have a Docker image for it?
3) In the case of PostgreSQL, should I have the image with all my database structure and data, or just vanilla PostgreSQL without anything?
To answer your questions
1) Should I have one Docker image for each component mentioned above? If so, why not just put them all together? My idea of Docker is to simplify deploying an application, so putting everything together would make it easy to install this app in another environment, right?
It is best to put them in separate components so that:
You can isolate issues (this will help you in debugging)
You can selectively scale (horizontally) specific stateless components when you run on Kubernetes or Docker Swarm
You can set hardware limits (RAM, CPU, etc.) per component
You can use different base images (might be useful for optimizations)
You can build & test your components independently
The list goes on; see the Compose sketch below.
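As a rough sketch (image tags, credentials, and the build path below are placeholders), the three components would typically end up as three services in one docker-compose.yml:

services:
  jboss:
    image: jboss/wildfly:latest          # placeholder image
    ports:
      - "8080:8080"
    depends_on:
      - db
  app:
    build: ./spring-boot-app             # built from its own Dockerfile
    ports:
      - "8081:8080"
    depends_on:
      - db
  db:
    image: postgres:15
    environment:
      - POSTGRES_PASSWORD=example
    volumes:
      - pgdata:/var/lib/postgresql/data  # keeps data when the container is recreated
volumes:
  pgdata:

This way each service can be scaled, resource-limited, and rebuilt independently.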
2) If so (one Docker image per component): Spring Boot is just a "java -jar" command, so is it really necessary to have a Docker image for it?
Please check the list mentioned above (why it's best to separate) and see if it fits your use case. Note that bundling it into an existing component will affect your scaling strategy.
Example: if you run 3 instances of the JBoss component bundled with the Spring Boot app, you will spawn 3 instances of both of them, which you might not want.
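Packaging that java -jar command as its own image is usually only a few lines anyway, something like this (the base image and jar path are assumptions):

# Dockerfile for the Spring Boot component
FROM eclipse-temurin:17-jre
COPY target/app.jar /app/app.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "/app/app.jar"]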
3) In the case of PostgreSQL, should I have the image with all my database structure and data, or just vanilla PostgreSQL without anything?
I would recommend mounting your structure & data to a host or named volume (as in the Compose sketch above), so that it doesn't get lost when the container is recreated. So I'd recommend using vanilla Postgres.
I hope this helps you in some way
How can I run multiple Neo4j 3.0.6 instances on a single machine using ineo?
How can I change the port number of a Neo4j 3.0.6 instance using ineo?
ineo is a third-party solution, and based on its GitHub repo it has not been maintained recently. By the way, have you checked the 'Create an instance with a specific port' section of its README.md?
ineo create -p8486 my_db_test
Currently Neo4j offers a similar solution for handling instances, called Neo4j Desktop, which can be downloaded here: https://neo4j.com/download/other-releases/
If you use the plain Neo4j distribution, then you should change the following properties in neo4j.conf to be able to run more than one instance locally:
dbms.connector.bolt.listen_address
dbms.connector.http.listen_address
dbms.connector.https.listen_address
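For example, with two copies of Neo4j 3.0.6 on the same machine, the first copy can keep the defaults and the second copy's neo4j.conf gets non-conflicting ports (the exact numbers are just examples):

# first instance: defaults
dbms.connector.bolt.listen_address=:7687
dbms.connector.http.listen_address=:7474
dbms.connector.https.listen_address=:7473

# second instance
dbms.connector.bolt.listen_address=:7688
dbms.connector.http.listen_address=:7475
dbms.connector.https.listen_address=:7476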
I am experimenting with several Neo4j databases on my machine. The databases have been generated and populated from Java programs.
Now I would like to inspect them.
It seems that the recommended way is to open the web console so it points to a specific database by means of configuring the property:
org.neo4j.server.database.location=<database location path>
in the neo4j configuration file: conf/neo4j-server.properties
This is fine if I am only interested in one database. But it does not look like a good idea if I am switching often between databases or if I want to explore more than one at the same time.
Is it possible to configure distinct web consoles (maybe using distinct ports) so they refer to my distinct databases?
And is it possible to do this without installing several instances (binaries) of Neo4j in my machine and having to modify lots of configuration files?
Yes! If you edit that same conf/neo4j-server.properties file you can change the org.neo4j.server.webserver.port and org.neo4j.server.webserver.https.port values (I normally set the https port to one less than the http port).
Once you've done that, run ./bin/neo4j start (make sure you shut down your Java app which is accessing the database first) to start the server on that port, and then simply visit http://localhost:<port>.
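So a second copy pointed at another database might look something like this in its conf/neo4j-server.properties (the path and ports are just examples):

# second Neo4j instance
org.neo4j.server.database.location=data/graph-other.db
org.neo4j.server.webserver.port=7476
org.neo4j.server.webserver.https.port=7475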
I'm not 100% sure if generating the database from Java will generate everything that you need for running a server. If not, you can download Neo4j from http://neo4j.com/download/, make multiple copies of it, and replace the graph.db folder with yours (make sure you shut down any processes which are accessing those databases before copying the directory). Also, if you've downloaded a newer version you might need to set allow_store_upgrade=true (see: http://neo4j.com/docs/stable/deployment-upgrading.html).
You can have multiple embedded Neo4j databases without installing separate binaries. You just need to configure a different database path for each instance of the database.
I built an app locally with an EF code-first database and I'm not sure how to upload it to a shared hosting environment such as GoDaddy. It makes sense that something would be amiss, because on shared hosting your code can't just go and create a database; but on the flip side, I can't find anything to copy the CREATE SQL and run it on the server like you would with MySQL.
I feel a little silly because I've been using .NET for over a year now, but at work the databases are already set up and we have full control over our environments.
If the database has no data that you need to preserve the easiest method is just to install the app on the new host and set the connection string to your new database on the host. On the first attempt to load a page accessing the database, the database will automatically be created (note that you need to load a page which hits the database - sometimes the home page is not sufficient).
This method is a lot more straightforward than generating SQL and then executing it on the production database.
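With code-first, that usually just means pointing the connection string in Web.config at the database your host provides; the server, database name, and credentials below are placeholders, and the name attribute should match what your DbContext expects:

<connectionStrings>
  <add name="DefaultConnection"
       connectionString="Data Source=sqlserver.example-host.com;Initial Catalog=MyAppDb;User Id=myuser;Password=mypassword;"
       providerName="System.Data.SqlClient" />
</connectionStrings>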
If there is data that you need to preserve, then the best method will be taking a backup and installing the backup on the host. In SSMS simply right-click the database in the left pane, then Tasks > Back Up... To restore on the server, connect to the server in SSMS, right-click the 'Databases' node in the left panel and select 'Restore Database...'. I'm not sure if the host provides a direct connection from SSMS, but they should at a minimum have a mechanism to restore a .bak file.
Going forward, you should ensure that you can execute SQL on your database, because a very convenient method for deploying EF Migrations is to generate the SQL update script on the development server and then deploy it by executing it in production.
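For reference, the script can be generated straight from your migrations; with EF6 that's done in the Package Manager Console, and with EF Core it's the dotnet CLI:

# EF6 (Package Manager Console): script everything from the first migration to the latest
Update-Database -Script -SourceMigration:$InitialDatabase

# EF Core (command line): idempotent script you can run on the host
dotnet ef migrations script --idempotent --output migrate.sql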
Depending on your web host, you may be able to restore the database. If this is an option, simply back up your database on your local machine and restore it on the server via the management console.
You can back up your local database using SQL Server Management Studio (SSMS). This works well even for larger databases, as you can directly restore all your data, your schema, etc.
I've had experience with three different hosts so far and all of them have this as an option. You'll usually find this under the Database tab for the web site. The rest from there is up to you because it's usually different across the various hosts.