I am trying to create a Docker Compose setup that waits until the Cassandra container has started before running the JanusGraph container, which requires Cassandra to be up before it starts.
The nodetool command seems to be the standard way to check the status of Cassandra. Here is what I get when I run nodetool directly on the Cassandra container:
docker exec -it ns-orchestration_data_storage_1 nodetool status
Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns (effective) Host ID Rack
UN 172.31.0.2 235.53 KiB 256 100.0% eea17329-6274-45a7-a9fb-a749588b733a rack1
The first "UN" on the last stdout line means Up/Normal which I intend to use in my wait-for-cassandra-and-elasticsearch.sh script. But now when I try to run it on the janusgraph container (remote) I am getting this:
docker exec -it ns-orchestration_data_janusgraph_1 bin/nodetool -h 172.31.0.2 -u cassandra -pw <my-password-here> status
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/app/janusgraph-0.3.0-hadoop2/lib/slf4j-log4j12-1.7.12.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/app/janusgraph-0.3.0-hadoop2/lib/logback-classic-1.1.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
nodetool: Failed to connect to '172.31.0.2:7199' - ConnectException: 'Connection refused (Connection refused)'.
I have exposed all Cassandra ports, as you can see in the docker-compose file below.
I also saw this post, which I am not sure is related. I tried following the instructions, but I am still getting the same error.
I would appreciate any suggestions.
File: docker-compose.yml
version: '3'
services:
  data_janusgraph:
    build:
      context: ../ns-compute-store/db-janusgraph
      dockerfile: Dockerfile.janusgraph
    ports:
      - "8182:8182"
    depends_on:
      - data_storage
      - data_index
    networks:
      - ns-net
  data_storage:
    build:
      context: ../ns-compute-store/db-janusgraph
      dockerfile: Dockerfile.cassandra
    environment:
      - CASSANDRA_START_RPC=true
    ports:
      - "9160:9160"
      - "9042:9042"
      - "7199:7199"
      - "7001:7001"
      - "7000:7000"
    volumes:
      - data-volume:/var/lib/cassandra
    networks:
      - ns-net
  data_index:
    image: elasticsearch:5.6
    ports:
      - "9200:9200"
      - "9300:9300"
    networks:
      - ns-net
networks:
  ns-net:
    driver: bridge
volumes:
  data-volume:
File: Dockerfile.cassandra
FROM cassandra:3.11
COPY conf/jmxremote.password /etc/cassandra/jmxremote.password
RUN chown cassandra:cassandra /etc/cassandra/jmxremote.password
RUN chmod 400 /etc/cassandra/jmxremote.password
COPY conf/jmxremote.access /usr/lib/jvm/java-8-openjdk-amd64/jre/lib/management/jmxremote.access
RUN chown cassandra:cassandra /usr/lib/jvm/java-8-openjdk-amd64/jre/lib/management/jmxremote.access
RUN chmod 400 /usr/lib/jvm/java-8-openjdk-amd64/jre/lib/management/jmxremote.access
COPY conf/cassandra.yaml /etc/cassandra/cassandra.yaml
File: Dockerfile.janusgraph
FROM openjdk:8-jre-alpine
RUN mkdir /app
WORKDIR /app
RUN apk update \
    && apk upgrade \
    && apk --no-cache add unzip
RUN wget https://github.com/JanusGraph/janusgraph/releases/download/v0.3.0/janusgraph-0.3.0-hadoop2.zip
RUN unzip janusgraph-0.3.0-hadoop2.zip
RUN apk --no-cache add bash coreutils nmap
RUN apk del unzip
ENV JVM_OPTS="$JVM_OPTS -Djava.rmi.server.hostname=data_storage"
WORKDIR /app/janusgraph-0.3.0-hadoop2
COPY wait-for-cassandra-and-elasticsearch.sh ./
COPY conf/janusgraph-cql-es.properties ./
CMD ["./wait-for-cassandra-and-elasticsearch.sh", "data_storage:9160", "data_index:9200", "./bin/gremlin-server.sh", "./conf/gremlin-server/gremlin-server-berkeleyje.yaml"]
See the full code in the GitHub repository:
https://github.com/nvizo/janusgraph-cluster-example
I believe you should use your Cassandra container's service name as the hostname when running nodetool from your JanusGraph container. Something along the lines of:
docker exec -it ns-orchestration_data_janusgraph_1 bin/nodetool -h data_storage -u cassandra -pw <my-password-here> status
Give it a try and let me know if it helps.
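As for the wait script itself, here is a minimal sketch of what wait-for-cassandra-and-elasticsearch.sh could look like, assuming it takes two host:port arguments followed by the command to exec (matching the CMD in Dockerfile.janusgraph above). It only probes that each TCP port is open, using bash's built-in /dev/tcp, rather than the nodetool "UN" check mentioned in the question:
#!/usr/bin/env bash
# Sketch of a wait script: block until each host:port accepts a TCP
# connection, then exec the remaining arguments as the real command.
set -e

wait_for() {
  local host="${1%%:*}" port="${1##*:}"
  # bash's /dev/tcp pseudo-device attempts a TCP connection
  until (echo > "/dev/tcp/${host}/${port}") 2>/dev/null; do
    echo "waiting for ${host}:${port} ..."
    sleep 2
  done
}

wait_for "$1"   # e.g. data_storage:9160
wait_for "$2"   # e.g. data_index:9200
shift 2
exec "$@"       # hands off to gremlin-server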
I am trying to run Gatsby with Docker Compose.
From what I understand, the Gatsby site is running in my Docker container.
I map port 8000 of the container to port 8000 on my localhost.
But when I look at localhost:8000, I am not getting my Gatsby site.
I use the following Dockerfile to build the image with docker build -t nxtra/gatsby .:
FROM node:8.12.0-alpine
WORKDIR /project
COPY ./package.json /project/package.json
COPY ./.entrypoint/entrypoint.sh /entrypoint.sh
RUN apk update \
    && apk add bash \
    && chmod +x /entrypoint.sh \
    && npm set progress=false \
    && npm install -g yarn gatsby-cli
EXPOSE 8000
ENTRYPOINT [ "/entrypoint.sh" ]
entrypoint.sh contains:
#!/bin/bash
yarn install
gatsby develop
docker-compose.yml, run with docker-compose up:
version: '3.7'
services:
  gatsby:
    image: nxtra/gatsby
    ports:
      - "8000:8000"
    volumes:
      - ./:/project
    tty: true
docker ps shows that port 8000 is forwarded 0.0.0.0:8000->8000/tcp.
Inspecting my container with docker inspect --format='{{.Config.ExposedPorts}}' id confirms the exposure of the port -> map[8000/tcp:{}]
docker top on the container shows the following processes running in the container:
18465 root 0:00 {entrypoint.sh} /bin/bash /entrypoint.sh
18586 root 0:11 node /usr/local/bin/gatsby develop
18605 root 0:00 /usr/local/bin/node /project/node_modules/jest-worker/build/child.js
18637 root 0:00 /bin/bash
Dockerfile and docker-compose.yml are situated in the root of my Gatsby project.
My project runs correctly when I start it without Docker using gatsby develop.
What am I doing wrong to get the Gatsby site that runs in my container to be visible on localhost:8000?
My issue was that Gatsby was only listening to requests from within the container, as this answer suggests. Make sure you've configured Gatsby to bind to the host 0.0.0.0. Take this (somewhat hacky) setup as an example:
Dockerfile
FROM node:alpine
RUN npm install --global gatsby-cli
docker-compose.yml
version: "3.7"
services:
gatsby:
build:
context: .
dockerfile: Dockerfile
entrypoint: gatsby
volumes:
- .:/app
develop:
build:
context: .
dockerfile: Dockerfile
command: gatsby develop -H 0.0.0.0
ports:
- "8000:8000"
volumes:
- .:/app
You can run Gatsby commands from a container:
docker-compose run gatsby info
Or run the development server:
docker-compose up develop
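Applied to the entrypoint.sh from the original question, the fix is a one-line change (a sketch):
#!/bin/bash
# bind the dev server to all interfaces so the published
# port 8000 is reachable from outside the container
yarn install
gatsby develop -H 0.0.0.0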
I did a factory reset on my Mac mini, and I want to install only Docker and some basic tools like git directly on OS X; for the other software (apps like PHP, PhpStorm, nginx, Node, MySQL, Postgres, phpMyAdmin, MySQL Workbench, ...) in many versions, I want to use Docker so I can manage them easily. For each of these tools I want to map folders containing, e.g., my project code, DB storage, configuration, etc.
While setting up MySQL 8 I hit a strange problem: I can log in as root through phpMyAdmin and MySQL Workbench, but my PHP 7 Laravel application "hangs" while connecting. Here is the MySQL compose file:
version: '3.1'
services:
  db:
    image: mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: terefere321
      MYSQL_ROOT_HOST: "%"
    ports:
      - 3306:3306
    volumes:
      - ./db_files:/var/lib/mysql
Here is the Dockerfile plus the script that lets me run PHP from the command line in Docker:
FROM php:7.2-apache
COPY --from=composer:latest /usr/bin/composer /usr/bin/composer
RUN apt-get update && \
    apt-get install -y \
        git \
        zlib1g-dev \
        zip \
        unzip \
    && \
    docker-php-ext-install pdo pdo_mysql zip && \
    a2enmod rewrite
Bash script to run the php-cmd docker container and "log in" to it to get a command line:
set -e
cd -- "$(dirname "$0")" # go to script dir - (for macos double clik run)
docker build -t php-cmd .
docker rm -f php-cmd
docker run -d --name php-cmd -v /Volumes/work:/var/www/html php-cmd
docker exec -it php-cmd /bin/bash
Here /Volumes/work is the directory with my project code. After "logging in" I run php artisan migrate; the app hangs for about 30 seconds and then throws these errors:
SQLSTATE[HY000] [2006] MySQL server has gone away PDO::__construct():
Unexpected server respose while doing caching_sha2 auth : 109
Add the --default-authentication-plugin=mysql_native_password command to the MySQL 8 compose service, so you get:
version: '3.1'
services:
  db:
    image: mysql
    command: --default-authentication-plugin=mysql_native_password
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: terefere321
      MYSQL_ROOT_HOST: "%"
    ports:
      - 3306:3306
    volumes:
      - ./db_files:/var/lib/mysql
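Background: MySQL 8 switched the default authentication plugin to caching_sha2_password, which older PHP/mysqlnd clients cannot negotiate; the flag above restores the pre-8.0 default. To verify which plugin each account ends up with, you can query mysql.user from the running container (a sketch; the container name is a placeholder, the password is the one from the compose file):
# expect plugin = mysql_native_password for root after the change
docker exec -it <db-container> mysql -uroot -pterefere321 \
  -e "SELECT user, host, plugin FROM mysql.user;"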
Problem
Variable substitution doesn't work during the build phase.
Files
docker-compose.yml (only kibana part):
kibana:
  build:
    context: services/kibana
    args:
      KIBANA_VERSION: "${KIBANA_VERSION}"
  entrypoint: >
    /scripts/wait-for-it.sh elasticsearch:9200
    -s --timeout=${ELASTICSEARCH_INIT_TIMEOUT}
    -- /usr/local/bin/kibana-docker
  environment:
    ELASTICSEARCH_URL: http://elasticsearch:9200
  volumes:
    - ./scripts/wait-for-it.sh:/scripts/wait-for-it.sh
  ports:
    - "${KIBANA_HTTP_PORT}:5601"
  links:
    - elasticsearch
  depends_on:
    - elasticsearch
  networks:
    - frontend
    - backend
  restart: always
Dockerfile for the services/kibana:
ARG KIBANA_VERSION=6.2.3
FROM docker.elastic.co/kibana/kibana:${KIBANA_VERSION}
USER root
RUN yum install -y which && yum clean all
USER kibana
COPY kibana.yml /usr/share/kibana/config/kibana.yml
RUN ./bin/kibana-plugin install https://github.com/sivasamyk/logtrail/releases/download/v0.1.27/logtrail-${KIBANA_VERSION}-0.1.27.zip
COPY logtrail.json /usr/share/kibana/plugins/logtrail/logtrail.json
EXPOSE 5601
Env file (only kibana part):
KIBANA_VERSION=6.2.3
KIBANA_HTTP_PORT=5601
KIBANA_ELASTICSEARCH_HOST=elasticsearch
KIBANA_ELASTICSEARCH_PORT=9200
Actual output (the problem is here: substitution doesn't happen):
# docker-compose up --force-recreate --build kibana
.........
Step 8/10 : RUN ./bin/kibana-plugin install https://github.com/sivasamyk/logtrail/releases/download/v0.1.27/logtrail-${KIBANA_VERSION}-0.1.27.zip
---> Running in d28b1dcb6348
Attempting to transfer from https://github.com/sivasamyk/logtrail/releases/download/v0.1.27/logtrail--0.1.27.zip
Attempting to transfer from https://artifacts.elastic.co/downloads/kibana-plugins/https://github.com/sivasamyk/logtrail/releases/download/v0.1.27/logtrail--0.1.27.zip/https://github.com/sivasamyk/logtrail/releases/download/v0.1.27/logtrail--0.1.27.zip-6.2.3.zip
Plugin installation was unsuccessful due to error "No valid url specified."
ERROR: Service 'kibana' failed to build: The command '/bin/sh -c ./bin/kibana-plugin install https://github.com/sivasamyk/logtrail/releases/download/v0.1.27/logtrail-${KIBANA_VERSION}-0.1.27.zip' returned a non-zero code: 70
Expected output (something similar):
Step 8/10 : RUN ./bin/kibana-plugin install https://github.com/sivasamyk/logtrail/releases/download/v0.1.27/logtrail-${KIBANA_VERSION}-0.1.27.zip
---> Running in d28b1dcb6348
Attempting to transfer from https://github.com/sivasamyk/logtrail/releases/download/v0.1.27/logtrail-6.2.3-0.1.27.zip
I found the answer five minutes after posting this question... damn.
The solution is stupid, but it works: I only need to redeclare the ARG inside the build stage, because an ARG declared before FROM goes out of scope once the stage starts. See:
ARG KIBANA_VERSION=6.2.3
FROM docker.elastic.co/kibana/kibana:${KIBANA_VERSION}
USER root
RUN yum install -y which && yum clean all
USER kibana
ARG KIBANA_VERSION=${KIBANA_VERSION}
COPY kibana.yml /usr/share/kibana/config/kibana.yml
RUN ./bin/kibana-plugin install https://github.com/sivasamyk/logtrail/releases/download/v0.1.27/logtrail-${KIBANA_VERSION}-0.1.27.zip
COPY logtrail.json /usr/share/kibana/plugins/logtrail/logtrail.json
EXPOSE 5601
The operative line is the redeclared ARG after FROM (the USER instruction is incidental):
ARG KIBANA_VERSION=${KIBANA_VERSION}
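For reference, this matches the documented ARG scoping rule: an ARG declared before FROM is only in scope for the FROM instruction itself, and must be redeclared inside the build stage to be usable again. A minimal illustration (hypothetical image tag):
# an ARG before FROM is consumed by the FROM line only
ARG BASE_TAG=3.18
FROM alpine:${BASE_TAG}
# without this redeclaration, ${BASE_TAG} below would expand to nothing;
# redeclaring without a value inherits the default declared above
ARG BASE_TAG
RUN echo "built from alpine:${BASE_TAG}"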
I use docker stack deploy to deploy my Python service.
First, I edit the code, then:
docker build . -f Dockerfile -t my_service:$TAG
docker tag my_service:$TAG register.xxx.com:5000/my_service:$TAG
When I use docker run -p 9000:9000 register.xxx.com:5000/my_service:$TAG, it works.
But when I use docker stack deploy -c docker-compose.yml my_service_stack, the service is still running the old code.
The relevant part of docker-compose.yml:
web:
  image: register.xxx.com:5000/my_service:v0.0.12
  depends_on:
    - redis
    - db
    - rabbit
  links:
    - redis
    - db
    - rabbit
  volumes:
    - web_service_data:/home
  networks:
    - webnet
v0.0.12 == $TAG
Dockerfile:
FROM python:3.6.4
RUN useradd -ms /bin/bash gmt
RUN mkdir -p /home/logs
WORKDIR /home/gmt/src
COPY /src/requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple/
COPY /src .
RUN cat /home/gmt/src/setting/env.yaml
ENV PYTHONPATH=/home/gmt/src
CMD ["gunicorn", "-c", "/home/gmt/src/gunicornconf.py", "run:app"]
So, why?
I don't see that you actually pushed your image from your build server to your registry; I'll assume you're doing that after the build and before the deploy (see the sketch after these points).
You should not be using a volume for code. That volume will overwrite your /home in the container with the contents of the volume, which are likely stale. Using/storing code in volumes is an anti-pattern.
You don't need links:, they are legacy.
depends_on: is not used in swarm.
You should not store logs in the container, you should have them sent to stdout and stderr.
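Putting the first point into commands, a sketch of the full cycle with the image name and tag from the question:
docker build . -f Dockerfile -t register.xxx.com:5000/my_service:$TAG
docker push register.xxx.com:5000/my_service:$TAG   # the step that appears to be missing
docker stack deploy -c docker-compose.yml my_service_stack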
I'm new to Docker, so maybe this is a basic question, but I didn't find a proper answer for it.
I want to get the Elasticsearch address as an environment variable inside the start.sh script. What is the proper way to do this non-invasively, without hardcoding it?
Docker compose:
version: "2"
services:
#elasticsearch
elasticsearch:
container_name: es
volumes:
- ./elasticsearch:/usr/share/elasticsearch/data
extends:
file: ../../base-config-dev.yml
service: elasticsearch
es-sync:
container_name: app-es-sync
hostname: app-es-sync
extends:
file: ../../base-config-dev.yml
service: es-sync
links:
- elasticsearch
Dockerfile:
FROM python:3.4
MAINTAINER me#mail.com
RUN pip install mongo-connector==2.4.1 elastic2-doc-manager
RUN mkdir /opt/es-sync
COPY files/* /opt/es-sync/
RUN chmod 755 /opt/es-sync/start.sh
CMD exec /opt/es-sync/start.sh
start.sh:
#!/bin/sh
cd /opt/es-sync/
export LANG=ru_RU.UTF-8
python3 create_index.py ${ES_URI}
CONNECTOR_CONFIG="\
-m ${MONGO_URI} \
-t ${ES_URI} \
--oplog-ts=oplog.timestamp \
--batch-size=1 \
--verbose \
--continue-on-error \
--stdout \
--namespace-set=foo.bar \
--doc-manager elastic2_doc_manager"
echo "CONNECTOR_CONFIG: $CONNECTOR_CONFIG"
exec mongo-connector ${CONNECTOR_CONFIG}
Since you're using the Docker Compose links feature, you shouldn't have to do anything special: you can simply use the hostname.
Since your link is defined as
links:
- elasticsearch
you should be able to reach the elasticsearch container under the elasticsearch hostname.
If you try a ping elasticsearch from inside of the es-sync container, the hostname should be resolved and reachable from there.
Then you just need to turn that hostname into the URL you're looking for: http://elasticsearch:9200/ (assuming it's running on port 9200).
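Since your start.sh already reads ${ES_URI}, you can then inject that URL through the compose file instead of hardcoding it in the script. A sketch (the 9200 port and the Mongo address are assumptions):
es-sync:
  container_name: app-es-sync
  hostname: app-es-sync
  links:
    - elasticsearch
  environment:
    - ES_URI=http://elasticsearch:9200
    - MONGO_URI=mongodb://mongo:27017  # hypothetical; start.sh reads this too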