I've deployed HBase (standalone), ZooKeeper, and Phoenix as a Docker image on a virtual host. The image started successfully without any issues. After some changes to the config file, I could also connect to HBase through Phoenix by running ./sqlline.py 127.0.0.1:2181:/hbase-unsecure inside the Docker container. After successfully creating a table and testing some sample queries, I tried to connect through the SQuirreL client from my Windows machine, which throws a TimeoutException.
For info, the required HBase client JAR and Phoenix client JAR have been copied to the SQuirreL client.
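For reference, the connection settings I'm using in SQuirreL look like this (the host is my Docker host's IP; yours will differ):

Driver class: org.apache.phoenix.jdbc.PhoenixDriver
URL: jdbc:phoenix:<docker-host-ip>:2181:/hbase-unsecure

From what I've read, ZooKeeper hands the HBase server hostnames back to the client, so those names would also need to be resolvable and reachable from the Windows machine.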
Error in the SQuirreL client:
java.util.concurrent.TimeoutException
at java.util.concurrent.FutureTask.get(FutureTask.java:205)
at net.sourceforge.squirrel_sql.client.mainframe.action.OpenConnectionCommand.awaitConnection(OpenConnectionCommand.java:132)
at net.sourceforge.squirrel_sql.client.mainframe.action.OpenConnectionCommand.access$100(OpenConnectionCommand.java:45)
at net.sourceforge.squirrel_sql.client.mainframe.action.OpenConnectionCommand$2.run(OpenConnectionCommand.java:115)
at ...
Any help on how to connect to Phoenix remotely would be appreciated.
Thank You!!
I am running pgAdmin-4 as a docker container alongside my PostgreSQL deployment (in docker containers as well).
I am able to connect to the WebUI and manually add the DB server, getting access to all the needed information.
Is there any way to make the pgAdmin container connect automatically to my PostgreSQL server, without the need for manual configuration after launch?
Thank you
You can export the list of server details to a JSON file and import it into pgAdmin 4 after starting your instance. See export/import servers.
Then you can map the resulting JSON file into the Docker container as mentioned in the documentation.
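For example, a minimal servers.json could look like this (server name, host, and user are placeholders; passwords are not part of the import format):

{
  "Servers": {
    "1": {
      "Name": "my-postgres",
      "Group": "Servers",
      "Host": "postgres",
      "Port": 5432,
      "MaintenanceDB": "postgres",
      "Username": "postgres",
      "SSLMode": "prefer"
    }
  }
}

Assuming the dpage/pgadmin4 image, you would then mount it at the path the container expects:

docker run -d -p 8080:80 \
  -e PGADMIN_DEFAULT_EMAIL=admin@example.com \
  -e PGADMIN_DEFAULT_PASSWORD=secret \
  -v /path/to/servers.json:/pgadmin4/servers.json \
  dpage/pgadmin4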
I have a number of Docker containers (10), each running a Java service that makes up my system. To create these containers I use a couple of docker-compose files. Using the Docker Integration plugin for IntelliJ, I can now spin up these services on my remote server using the Docker Compose option (the images used are built outside of IntelliJ, using Gradle). Here are the steps I have taken to achieve this:
I have added a Docker server using the Docker Machine option to connect to the remote Docker daemon (message says Connection Successful).
I have added a new Docker Compose configuration, using the server, specifying my compose files, and the services I want to start.
Now that I have the system controlled through IntelliJ, I have been trying to figure out how to attach the remote debugger to each of these services so that IntelliJ will hit my breakpoints.
Will I need to add the JVM args (-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005) to each service (container) and add the usual remote debug configuration for each service? Do I need to use a different address for each service? If so, how do I add these args (see the sketch below for what I have in mind)? Surely with the Docker Integration plugin there is an easier way to do this.
IntelliJ Idea v2018.1.5 (Community Edition)
Docker Integration v181.5087.20
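For what it's worth, here is a sketch of the manual approach I was considering, as a docker-compose override with made-up service names and one host port per service:

services:
  service-a:
    environment:
      # JDWP agent; on Java 9+ you may need address=*:5005 so the
      # debugger can attach from outside the container
      JAVA_TOOL_OPTIONS: "-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005"
    ports:
      - "5005:5005"
  service-b:
    environment:
      JAVA_TOOL_OPTIONS: "-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005"
    ports:
      - "5006:5005"

Each container can keep the same internal address (5005) since they have separate network namespaces; only the host-side ports need to differ, with one IntelliJ Remote run configuration per mapped port.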
We have the Cloudant local developer edition running in a stock Docker image, and all is well and good.
We cannot get replication to work, even internally from one database to another inside the Docker image, let alone out to cloudant.com.
I am aware the image license is for a single, non-clustered node, but is there a way to push docs etc. from a local dev database to a cloudant.com database as a one-time push? Or to test replication locally during development (i.e. between two databases inside the Docker image)?
Essentially, does "non-clusterable" mean no one-way, one-time push replication? Even internally, from one database to another inside the same Docker image?
Here is info on the image: https://hub.docker.com/r/ibmcom/cloudant-developer/
I'm not 100% clear on the exact issue you are seeing, but replication should work when running Cloudant inside Docker. You just need to understand how to route to your Cloudant instance.
I noticed that when I create a new local replication in the Cloudant Dashboard, it uses the port that is in the Cloudant Dashboard URL, which is the port mapped in Docker. For example, I map port 80 to 30080. When I try to replicate from database test1 to a new database called test2, it creates a replication from localhost:30080/test1 to localhost:30080/test2. This will not work because the Cloudant instance thinks everything is running on port 80, not port 30080.
So, my workaround was to tell the Cloudant Dashboard to do a remote replication, but to specify localhost/test1 (equivalent to localhost:80/test1) as the source and localhost/test2 as the target.
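For example, here is the equivalent request against the standard _replicate endpoint, run from the host (admin:pass stands in for whatever credentials your container uses):

curl -X POST http://admin:pass@localhost:30080/_replicate \
  -H "Content-Type: application/json" \
  -d '{"source": "http://admin:pass@localhost/test1", "target": "http://admin:pass@localhost/test2", "create_target": true}'

Note that the source and target URLs deliberately omit the mapped port: the replicator runs inside the container, where the instance really is listening on port 80.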
I need to recreate our staging environment locally on a single host machine to help with testing and development.
Our staging environment consists of
a Chef server,
a Redis server,
an Elasticsearch server.
I want to run each server in a separate container and allow the containers to communicate with each other, roughly as in the layout sketched below.
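(Container and image names here are placeholders; my/chef-server is a stand-in for however the Chef server image gets built:)

docker network create staging
docker run -d --name chef-server --network staging my/chef-server
docker run -d --name redis --network staging redis
docker run -d --name elasticsearch --network staging elasticsearch
# containers on the same user-defined network can reach
# each other by name, e.g. redis:6379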
So I created a separate container for the Chef server, and within that container I ran the bootstrapping script to install all the packages required for it.
While running the bootstrapping script, I encountered an error:
---- Begin output of /opt/chef-server/bin/chef-server-ctl start rabbitmq ----
STDOUT: warning: rabbitmq: unable to open supervise/ok: file does not exist
STDERR:
---- End output of /opt/chef-server/bin/chef-server-ctl start rabbitmq ----
Ran /opt/chef-server/bin/chef-server-ctl start rabbitmq returned 1
The strategy is to first get the Chef container running so that it can provision the Redis and Elasticsearch containers. But I'm not even able to get the Chef server running.
At this point I do not know how to proceed. Can someone please point me in the right direction? I searched for a workaround and couldn't find anything to help me.
Thanks.
It seems the issue has nothing to do with Docker:
https://tickets.opscode.com/browse/CHEF-3838
I am a total noob to Linux containers and have been spending some time learning about Docker, so forgive my confusion throughout this question. Currently, I have a Rails app in production, deployed via Capistrano. My cloud servers are maintained with Opscode Chef on the Debian Wheezy distribution. For development, I have a Vagrant VM preinstalled with the app and services.
If I were to employ Docker, where would my app sit? In the container or on the host? How would I deploy (production) and share directories (development)? Can I run all my additional services, i.e. memcache, redis, postgresql, etc., on the same server using Docker? I can maybe envision the potential of Docker, but I'm having trouble seeing its practical use.
Seems like containers are part of the future. Any guidance for someone making the switch from virtualization?
If I were to employ Docker, where would my app sit?
It could sit inside the container, or it could sit on the host (you can use docker build to copy the app into the container).
How would I deploy (production) and share directories (development)?
Deploying your app would mean committing your local container into an image, publishing it,
and running a container from the published image on your servers. I have not tried sharing directories between host and container, but you can try this: https://gist.github.com/jpetazzo/5668338. You can also write a Dockerfile which can copy a directory to a target in the container. Docker's docs on building images will help you there.
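As a rough sketch of that Dockerfile approach (base image, paths, and ports are illustrative, not a recommendation):

FROM ruby:2.6
WORKDIR /app
# install gems first so this layer is cached between code changes
COPY Gemfile Gemfile.lock ./
RUN bundle install
# then copy the application code into the image
COPY . .
EXPOSE 3000
CMD ["bundle", "exec", "rails", "server", "-b", "0.0.0.0"]

For development, the directory-sharing part can be done with a volume, so code edits on the host show up in the container:

docker run -it -p 3000:3000 -v "$PWD":/app my-rails-app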
Can I run all my additional services ie memcache, redis, postgresql, etc on the same server using docker?
Yes. You will be running multiple containers on the same server.
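For example, using the stock images from the Docker registry (the Postgres password is a placeholder):

docker run -d --name memcached memcached
docker run -d --name redis redis
docker run -d --name postgres -e POSTGRES_PASSWORD=secret postgres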
I'm no expert and I haven't even used Docker myself, but as I understand it, your app sits inside a Docker container. Ideally, you would deploy a whole container with your own Ruby version installed and so on.
The big benefit is that you can test exactly the same container in your staging system that you're later going to ship to production. So you're able to test the complete system with all installed C extensions, the exact same ls command, and so on.