I am running pgAdmin 4 as a Docker container alongside my PostgreSQL deployment (also in Docker containers).
I am able to connect to the web UI and manually add the DB server, getting access to all the needed information.
Is there any way to make the pgAdmin container connect automatically to my PostgreSQL server, without the need for manual configuration after launch?
Thank you
You can export the list of server definitions to a JSON file and import it into pgAdmin 4 after starting your instance. See export/import servers.
Then you can map the resulting JSON file into the Docker container as mentioned in the documentation.
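For example (a hedged sketch: the file path and environment variables below follow the pgAdmin 4 container docs, but the server details are placeholders you would replace):

# Hypothetical server definition; adjust Name/Host/Username for your setup.
cat > servers.json <<'EOF'
{
  "Servers": {
    "1": {
      "Name": "my-postgres",
      "Group": "Servers",
      "Host": "postgres",
      "Port": 5432,
      "MaintenanceDB": "postgres",
      "Username": "postgres",
      "SSLMode": "prefer"
    }
  }
}
EOF

# Mount the file where the pgAdmin 4 image looks for it on first startup.
docker run -d -p 8080:80 \
  -e PGADMIN_DEFAULT_EMAIL=admin@example.com \
  -e PGADMIN_DEFAULT_PASSWORD=secret \
  -v "$(pwd)/servers.json:/pgadmin4/servers.json" \
  dpage/pgadmin4

Note that the file is only imported on the container's first launch, and passwords are not included in the import, so you will still be prompted to authenticate against the server.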
Related
I have a Docker image with an application server running in it.
When I'm running in a development environment, I want to run a database server within the same Docker image.
However, in production, I want to use my cloud provider's database service to host my database server.
What is the best (preferably officially supported) way to enable this distinction?
You Don't
You don't run the DB in the same container. You run it in a separate container next to your application container (probably with docker-compose, but that's not required).
You run the same version as the cloud provider (or as close as you can get, because they will no doubt configure it specifically for their environment).
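As an illustrative sketch (the image names and credentials here are made up), a development docker-compose.yml could pair the two containers, while production would drop the db service and point the app at the managed database:

cat > docker-compose.yml <<'EOF'
version: "2"
services:
  app:
    image: my-app-image                  # hypothetical application image
    environment:
      # In production, point this at the cloud provider's endpoint instead.
      DATABASE_URL: postgres://postgres:secret@db:5432/app
    depends_on:
      - db
  db:
    image: postgres:9.6                  # match your provider's version
    environment:
      POSTGRES_PASSWORD: secret
EOF
docker-compose up -d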
Can you have multiple containers in Codenvy, like a server container and a MySQL container?
Or is Codenvy itself an enriched container, just embedded with different frameworks installed, including an IDE?
You can have multiple containers - any you want as defined in a Docker Compose file. If you sign up for an account and create a workspace with the "Java-MySQL" stack you'll get a sample app using an app server and database.
We have the local dev edition running in a stock Docker image; all is well and good.
We cannot get replication to work in our tests (even internally, from one DB to another inside the Docker image, let alone out to cloudant.com).
I am aware the image license is for a single non-cluster node, but is there a way to push docs etc. from a local dev DB to a cloudant.com DB as a one-time push? Or to test replication locally during development (i.e., two DBs inside the Docker image)?
Essentially, does "non-clusterable" mean no one-way, one-time push replication? Even internally, from one DB to another DB inside the same Docker image?
Here is info on the image: https://hub.docker.com/r/ibmcom/cloudant-developer/
I'm not 100% clear on the exact issue you are seeing, but replication should work when running Cloudant inside Docker. You just need to understand how to route to your Cloudant instance.
I noticed that when I create a new local replication in the Cloudant Dashboard it uses the port that is in the Cloudant Dashboard URL which is the port mapped in Docker. For example, I map port 80 to 30080. When I try to replicate from database test1 to a new database called test2 it creates a replication from localhost:30080/test1 to localhost:30080/test2. This will not work because the Cloudant instance thinks everything is running on port 80 not port 30080.
So, my workaround was to tell the Cloudant dashboard to do a remote replication, but to specify localhost/test1 (equivalent to localhost:80/test1) to localhost/test2.
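The same workaround can be done from the command line. This is a sketch assuming the CouchDB-style _replicate endpoint and placeholder admin credentials; port 30080 is the host mapping from my example above:

# POST through the mapped port, but hand the replicator URLs that resolve
# inside the container (port 80), just like the dashboard workaround.
curl -X POST http://localhost:30080/_replicate \
  -u admin:pass \
  -H 'Content-Type: application/json' \
  -d '{"source": "http://localhost/test1",
       "target": "http://localhost/test2",
       "create_target": true}'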
I want to create a social network with the Django framework on OpenShift, so I need at least a graph DB (like Neo4j) and a relational DB (like MySQL). I had trouble adding Neo4j to my project because OpenShift has no cartridge for it, so I decided to install it with DIY, but I don't understand the functionality of the start and stop files in .openshift/action_hooks. I followed these steps to install Neo4j on the server:
1. SSH to my account:
ssh 1238716...@something-prolife.rhcloud.com
2. Go to a folder where I have write permission (I went to app-root/repo/ and ran mkdir test in it), download the Neo4j package from here, and extract it to the test folder I created before:
tar -xvzf neo4j-community-1.9.4-unix.tar.gz
3. Finally, run the neo4j script to start the server:
neo4j-community-1.9.4/bin/neo4j start
But I see these logs and can't run Neo4j:
process [3898]... waiting for server to be ready............ Failed to start within 120 seconds.
Neo4j Server may have failed to start, please check the logs.
How can I run this database on OpenShift? Where am I wrong? And where are the logs referred to in "please check the logs"?
I've developed an OpenShift cartridge that fixes the permission issue on OpenShift. I had to change the classes HostBoundSocketFactory and SimpleAppServer in Neo4j to bind to an available OpenShift port instead of port 0.
You can check it at: https://github.com/danielnatali/openshift-neo4j-cartridge
It's working for me.
I would also not place it in app-root/repo; instead, I would put it in app-root/data.
You also need to bind to the IP of the gear - I think the environment variable is something like OPENSHIFT_INTERNAL_IP. 127.0.0.1 is not available for binding, but I think the ports should be open.
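A sketch of what binding to the gear's IP might look like for the 1.9 tarball (the property names are from my memory of Neo4j 1.9's server configuration, and the port is a typical OpenShift-permitted one - verify both for your versions):

# Append the bind address and a permitted port to the server config, then
# restart; by default Neo4j tries 0.0.0.0:7474, which the gear blocks.
cat >> neo4j-community-1.9.4/conf/neo4j-server.properties <<EOF
org.neo4j.server.webserver.address=${OPENSHIFT_INTERNAL_IP}
org.neo4j.server.webserver.port=15000
EOF
neo4j-community-1.9.4/bin/neo4j restart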
There are two ways Neo4j can run: embedded or standalone (exposed via a REST service).
Standalone is what you are trying to do. I think the right way to set up Neo4j would be to write a cartridge for OpenShift and then add the cartridge to your gear. There has been some discussion about this, but it seems that nobody has taken the time to do it. Check https://www.openshift.com/forums/openshift/neo4j-cartridge. If you decide to write your own cartridge, I might assist. Here are the docs: https://www.openshift.com/developers/download-cartridges.
The other option is running in embedded mode, which I have used. You need to set up a Java EE application (because the Neo4j embedded-mode libraries are available only for Java) and put the Neo4j libraries in your project. Then you would expose some routes, check for parameters, and run your Neo4j queries inside the servlets.
Say I have a container that has everything I need to run my web application (such as https://github.com/grigio/docker-stringer for example). How would I go about inspecting the logs for the different services (web server, application server, database server)? With all of the tutorials so far I have only been able to view the logs for the specific command run when starting the container.
One method would be to configure your services to write their logs to stdout and use docker logs to retrieve them.
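For example, once the services log to stdout/stderr (the container name here is hypothetical):

# Follow the combined stdout/stderr of the container's main process.
docker logs -f my-stringer-container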
Another option would be to use a bind mount to link the log files to your host file system.
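A sketch of that approach, with example paths and a hypothetical image name (the container's log directory will depend on how the image is built):

# Mount a host directory over the container's log directory so the
# web/app/database server logs land on the host file system.
docker run -d -v /srv/stringer/logs:/var/log my-stringer-image
tail -f /srv/stringer/logs/nginx/error.log   # example log path on the host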