How can I verify that Cassandra is working - Docker

I have set up 3 Docker containers with the docker-compose.yml below:
version: '3'
services:
  nginx:
    image: nginx:alpine
    volumes:
      - ./app:/app
      - ./nginx-config/:/etc/nginx/conf.d/
    ports:
      - 80:80
    depends_on:
      - php
  php:
    image: php:7.1-fpm-alpine
    volumes:
      - ./app:/app
  cassandra:
    image: 'docker.io/bitnami/cassandra:3-debian-10'
    ports:
      - '7000:7000'
      - '9042:9042'
    volumes:
      - ./app:/app
    environment:
      - CASSANDRA_SEEDS=cassandra
      - CASSANDRA_PASSWORD_SEEDER=yes
      - CASSANDRA_PASSWORD=cassandra
My question: when I open localhost:7000 or even localhost:9042 in the browser, nothing works.
All containers show as running when I run docker ps.

Neither of the ports you tried in the browser is an HTTP port, so a browser can't talk to them:
- '7000:7000'
- '9042:9042'
By default, Cassandra uses 7000 for cluster communication (7001 if SSL is enabled), 9042 for native protocol clients, and 7199 for JMX. The internode communication and native protocol ports are configurable in the Cassandra configuration file (cassandra.yaml). The JMX port is configurable in cassandra-env.sh (through JVM options). All ports are TCP.
Cassandra Ports
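Since 9042 is published to the host, a quick TCP-level sanity check is possible from outside the container (a sketch; it assumes nc/netcat is installed on the host):
nc -zv localhost 9042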
You can verify Cassandra's status and connectivity from inside the container, or install a client on the host to check connectivity from outside.
Run docker ps, copy the Cassandra container name, and then run the command below:
docker exec -it container_name bash -c "cqlsh -u cassandra -p cassandra"
You can expect output like:
[cqlsh 5.0.1 | Cassandra 3.11.6 | CQL spec 3.4.4 | Native protocol v4]
Use HELP for help.
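Another quick check is the cluster status via nodetool (a sketch; it assumes the bitnami image, which ships nodetool on the PATH):
docker exec -it container_name nodetool status
A healthy single-node cluster shows its one node flagged UN (Up/Normal).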

Related

Docker container can't talk to another container

I have a docker-compose file set up with 3 separate containers (Flask, Nginx and Solr).
After starting, all 3 run successfully, but my Flask application can't connect to my Solr instance. When I run:
wget -S http://localhost:8983/solr/CORE_NAME/select
I get the error "Connecting to localhost (localhost)|127.0.0.1|:8983... failed: Connection refused."
I am fairly new to Docker and have been around a few different forums looking at this issue, but nothing has worked so far. I have also tried creating a network, but I run into the same issue.
Here is my docker-compose.yml.
version: "2.7"
services:
nginx:
build:
context: .
dockerfile: Dockerfile-nginx
container_name: nginx
ports:
- "80:80"
- "8181:8181"
volumes:
- ./:/opt/ee1
- ee1-logs-volume:/var/log/ee1
- ./:/usr/local/websites/ee1
- sockets-volume:/tmp
depends_on:
- flask
flask:
build:
context: .
dockerfile: Dockerfile-flask
entrypoint: ["/bin/bash", "./system/start-uwsgi-docker.bash"]
container_name: flask
user: root
restart: always
volumes:
- ./:/opt/ee1
- ./ee1config.ini:/opt/ee1config.ini
- ee1jobs-logs-volume:/var/log/ee1
- ./:/usr/local/websites/ee1
- sockets-volume:/tmp
links:
- solr
solr:
build:
context: .
dockerfile: Dockerfile-solr
container_name: solr
volumes:
- data:/var/solr
entrypoint:
- bash
- "-c"
- "precreate-core ee1_1; precreate-core ee1_2; exec solr -f"
ports:
- "8983:8983"
volumes:
sockets-volume: {}
ee1-logs-volume: {}
data:
Network-wise, every Docker container is a separate host with its own IP.
Traffic to localhost or 127.0.0.1 will never leave that container.
So what you need to find out is the IP of the server container (solr) you actually want to talk to, and then configure the client container (flask) accordingly. You can find it with, e.g., docker inspect. Be aware that the IPs can change when a container restarts, so you will want to use something like DNS rather than raw IPs.
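For example, a quick way to print a container's current IP (a sketch; the name solr comes from the compose file above):
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' solr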
Since you use docker compose, each container for a service joins the same network and is both reachable by other containers on that network, and discoverable by them at a hostname identical to the container name.
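In other words, from the flask container you can reach Solr by its service/container name instead of localhost, e.g. (assuming wget is available in the flask image):
docker exec -it flask wget -S http://solr:8983/solr/CORE_NAME/select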
For more details check out
https://docs.docker.com/compose/networking/
https://docs.docker.com/network/

Port exposed with docker run but not docker-compose up

I am trying to run RabbitMQ along with the InfluxDB TICK stack using docker-compose. When I run RabbitMQ with this command: docker run -d --rm -p 5672:5672 -p 15672:15672 rabbitmq:3-management, both ports are open and I am able to access them from a remote machine. However, when I run RabbitMQ as part of a docker-compose file, it is not accessible from a remote machine. Here is my docker-compose.yml file:
version: "3.7"
services:
influxdb:
image: influxdb
volumes:
- ./influxdb/influxdb/data/:/var/lib/influxdb/
- ./influxdb/influxdb/config/:/etc/influxdb/
ports:
- "8086:8086"
rabbitmq:
image: rabbitmq:3-management
volumes:
- ./rabbitmq/data:/var/lib/rabbitmq
ports:
- "15672:15672"
- "5672:5627"
telegraf:
image: telegraf
volumes:
- ./influxdb/telegraf/config/:/etc/telegraf/
- /proc:/host/proc:ro
depends_on:
- "influxdb"
- "rabbitmq"
chronograf:
image: chronograf
volumes:
- ./influxdb/chronograf/data/:/var/lib/chronograf/
ports:
- "8888:8888"
depends_on:
- "telegraf"
More information: when I run this with docker-compose up -d, ports 8086 and 8888 are accessible from a remote machine (I confirmed this using nmap). Also, either way I am able to access the RabbitMQ management console at http://localhost:15672.
How can I set this up so that I can access RabbitMQ from a remote machine using docker-compose?
Thank you.
Looks like just a typo in the port mapping in docker-compose.yml: 5672:5627 should actually be 5672:5672.
Otherwise the docker-compose configuration looks just fine.
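That is, the rabbitmq service's ports section should read (the mapping is HOST:CONTAINER, and RabbitMQ's AMQP listener is on container port 5672):
ports:
  - "15672:15672"
  - "5672:5672"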

Unable to connect to Docker container

Hi, I'm starting a Docker container using docker-compose, but when I try to connect via localhost I can't. Here is the docker-compose file I'm using:
version: '3.3'
services:
  standalone:
    image: apachepulsar/pulsar
    expose:
      - 8080
      - 6650
    environment:
      - PULSAR_MEM=" -Xms512m -Xmx512m -XX:MaxDirectMemorySize=1g"
    command: >
      /bin/bash -c
      "bin/apply-config-from-env.py conf/standalone.conf
      && bin/pulsar standalone"
I'm using Windows 10.
Be aware that expose, as the documentation says:
Expose ports without publishing them to the host machine - they’ll only be accessible to linked services. Only the internal port can be specified.
My guess is that you instead want to publish the ports and make them available on the host. To do so:
services:
  standalone:
    image: apachepulsar/pulsar
    ports:
      - "8080:8080"
      - "6650:6650"

Is it possible to curl across Docker networks between two docker-compose.yaml files?

I have 2 applications running on different networks, each with its own separate docker-compose.yaml. I am trying to make a request from app A to app B, but it doesn't work.
docker exec -it app_a_running curl http://localhost:8012/user/1
I get this error:
cURL error 7: Failed to connect to localhost port 8012
docker-compose-app-a.yaml
version: "3"
services:
app:
build: go/
restart: always
ports:
- 8011:8011
volumes:
- ../src/app:/go/src/app
working_dir: /go/src/app
container_name: app-a
command: sleep 72000
networks:
- app-a-network
networks:
app-a-network:
docker-compose-app-b.yaml
version: "3"
services:
app:
build: go/
restart: always
ports:
- 8012:8012
volumes:
- ../src/app:/go/src/app
working_dir: /go/src/app
container_name: app-b
command: sleep 72000
networks:
- app-b-network
networks:
app-b-network:
Questions:
Is it possible to do this?
If so, please suggest how. :)
You can use curl with Docker containers. The reason your curl command didn't work is probably that you did not publish your Docker container's port. For example, try:
docker run -d -p 8080:8080 tomcat
instead of
docker run -d tomcat
This will forward the port 8080 of your machine to the port 8080 of your container.
If you have a shell in a container, you can use the service name or the container name to curl another container on your Docker network, provided the target is on the same network.
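For the two compose files above, one way to put app-a and app-b on the same network is a pre-created external network (a sketch; shared-net is an assumed name):
docker network create shared-net
Then, in both docker-compose-app-a.yaml and docker-compose-app-b.yaml, declare the network as external:
networks:
  shared-net:
    external: true
and attach each app service to it:
    networks:
      - shared-net
After recreating the containers, app A can reach app B by container name:
docker exec -it app-a curl http://app-b:8012/user/1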

InfluxDB and cAdvisor integration issue

I want to access the data gathered by cAdvisor through InfluxDB.
Here are my Docker configurations:
// for cAdvisor
docker run \
  --volume=/:/rootfs:ro \
  --volume=/var/run:/var/run:rw \
  --volume=/sys:/sys:ro \
  --volume=/var/lib/docker/:/var/lib/docker:ro \
  --publish=8080:8080 \
  --detach=true \
  --name=cadvisorDB \
  google/cadvisor:latest \
  -storage_driver=influxdb \
  -storage_driver_host=127.0.0.1:8086 \
  -storage_driver_db=databaseName
// for InfluxDB
docker run \
  -d \
  -p 8083:8083 \
  -p 8086:8086 \
  --expose 8090 \
  --expose 8099 \
  tutum/influxdb
// and I manually created the database named databaseName through the web UI on localhost:8083
So once I start the two containers, I go to InfluxDB to explore the data (by making a query). An error says that there is no data.
Everything in the configuration looks fine. The problem is probably in this line:
-storage_driver_host=127.0.0.1:8086
because 127.0.0.1 refers to the cAdvisor container's own localhost, not your host. Try the Docker NAT IP instead (usually 172.17.42.1).
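On recent Docker installs the default bridge gateway is usually 172.17.0.1; you can look up the actual value like this (a sketch):
docker network inspect bridge -f '{{(index .IPAM.Config 0).Gateway}}'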
This is what I use in my docker-compose YAML file. It should be easy to translate to the usual docker run syntax. In my case I'm linking the InfluxDB container into cAdvisor, so cAdvisor can resolve the hostname "influxdb" regardless of the internal Docker IP assigned to the container.
influxdb:
  image: tutum/influxdb
  hostname: influxdb
  volumes:
    - ./influxdb:/data
  environment:
    - PRE_CREATE_DB=cadvisor
  ports:
    - "8083:8083"
    - "8086:8086"
  expose:
    - "8090"
    - "8099"
cadvisor:
  image: google/cadvisor
  hostname: cadvisor
  volumes:
    - /:/rootfs:ro
    - /var/run:/var/run:rw
    - /sys:/sys:ro
    - /var/lib/docker:/var/lib/docker:ro
  ports:
    - "8089:8080"
  links:
    - influxdb
  command: -storage_driver_db=cadvisor -storage_driver_host=influxdb:8086
NOTE: InfluxDB can create your DB automatically if you set the PRE_CREATE_DB environment variable.
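Once data is flowing, you can verify it from the host (a sketch; it assumes the InfluxDB 0.9+ HTTP query API on the published port 8086):
curl -G 'http://localhost:8086/query' --data-urlencode 'db=cadvisor' --data-urlencode 'q=SHOW MEASUREMENTS'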
