Docker-compose: get linked container address inside other container (in bash) - docker

I'm new to Docker, so maybe it's a basic question, but I didn't find a proper answer for it.
I want to get the Elasticsearch address as an environment variable inside the start.sh script. What is the proper way to do it non-invasively, without hardcoding it?
Docker compose:
version: "2"
services:
  #elasticsearch
  elasticsearch:
    container_name: es
    volumes:
      - ./elasticsearch:/usr/share/elasticsearch/data
    extends:
      file: ../../base-config-dev.yml
      service: elasticsearch
  es-sync:
    container_name: app-es-sync
    hostname: app-es-sync
    extends:
      file: ../../base-config-dev.yml
      service: es-sync
    links:
      - elasticsearch
Dockerfile:
FROM python:3.4
MAINTAINER me#mail.com
RUN pip install mongo-connector==2.4.1 elastic2-doc-manager
RUN mkdir /opt/es-sync
COPY files/* /opt/es-sync/
RUN chmod 755 /opt/es-sync/start.sh
CMD exec /opt/es-sync/start.sh
start.sh:
#!/bin/sh
cd /opt/es-sync/
export LANG=ru_RU.UTF-8
python3 create_index.py ${ES_URI}
CONNECTOR_CONFIG="\
-m ${MONGO_URI} \
-t ${ES_URI} \
--oplog-ts=oplog.timestamp \
--batch-size=1 \
--verbose \
--continue-on-error \
--stdout \
--namespace-set=foo.bar \
--doc-manager elastic2_doc_manager"
echo "CONNECTOR_CONFIG: $CONNECTOR_CONFIG"
exec mongo-connector ${CONNECTOR_CONFIG}

Since you're using the Docker Compose link feature, you shouldn't have to do anything: you can simply use the hostname.
Since your link is defined as
links:
  - elasticsearch
you should be able to reach the elasticsearch container under the elasticsearch hostname.
If you try a ping elasticsearch from inside of the es-sync container, the hostname should be resolved and reachable from there.
Then you just need to turn that hostname into a URL (which seems to be what you're looking for): http://elasticsearch:9200/ (assuming Elasticsearch is running on port 9200).
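For instance, start.sh could default ES_URI to the linked hostname when the variable isn't provided from outside. This is only a sketch; the http://elasticsearch:9200 URL assumes the default Elasticsearch port mentioned above:

```shell
#!/bin/sh
# Fall back to the linked hostname when ES_URI is not set externally.
ES_URI="${ES_URI:-http://elasticsearch:9200}"
echo "using Elasticsearch at $ES_URI"
```

That way nothing is hardcoded in the image: docker-compose can still override ES_URI via an environment entry, and the linked hostname serves as a sensible default.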

Related

Docker container ubuntu 21 root to root (local machine to container) gives permission issues on file saves

I have just started using Docker, as it was recommended to me as something that makes development easy, but so far it has been nothing but pain. I have installed Docker Engine (v20.10.12) and Docker Compose (v2.2.3) as per the documentation given by Docker for Ubuntu. Both work as intended.
Whenever I spin up a new container with docker compose, no matter the source, I have write-permission issues with files generated by the container (for example a Laravel application where I have used php artisan to create a controller file). I have so far pinpointed the issue to be as follows:
By default docker runs as root within the container. It "bridges" the root user to the root user on the local machine and uses root:root to create files on the Ubuntu filesystem (my workspace is placed in ~/workspace/laravel). Then when opening the files in an IDE (vscode in this instance) I get the error:
Failed to save to '<file_name>': insufficient permissions. Select 'Retry as Sudo' to retry as superuser
If I pass my own local user into the container and tell it to use that specific user id and group id, it's all good when I'm using the first user created on the machine (1000:1000), since that will match the container's default user if we look at the bitnami/laravel docker image, for example.
All of this can be fixed by running chown -R yadayada . on the workspace directory every time I use php artisan to create a file, but I do not think this is sustainable or smart in any way, shape or form.
How can I tell my docker container, on startup, to check whether a user with my UID and GID exists and, if not, create such a user and assign it as a system user?
My docker-compose.yml for this example
version: '3.8'
services:
  api_php-database:
    image: postgres
    container_name: api_php-database
    restart: unless-stopped
    environment:
      POSTGRES_PASSWORD: secret
      POSTGRES_DB: laravel_docker
    volumes:
      - ./postgres-data:/var/lib/postgresql/data
    ports:
      - '5432:5432'
  api_php-apache:
    container_name: api_php-apache
    build:
      context: ./php
    ports:
      - '8080:80'
    volumes:
      - ./src:/var/www/laravel_docker
      - ./apache/default.conf:/etc/apache2/sites-enabled/000-default.conf
    depends_on:
      - api_php-database
My Dockerfile for this example
FROM php:8.0-apache
RUN apt update && apt install -y g++ libicu-dev libpq-dev libzip-dev zip zlib1g-dev && docker-php-ext-install intl opcache pdo pdo_pgsql pgsql
WORKDIR /var/www/laravel_docker
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
In general, this is not possible, but there are workarounds (I do not recommend them for production).
The superuser UID is always 0, this is written in the kernel code.
It is not possible to automatically change the ownership of non-root files.
In this case, when developing, you can use these methods:
If superuser rights are not required, you can create the users dynamically in docker-compose.yml:
version: "3.0"
services:
  something:
    image: example-image
    volumes:
      - /user/path1:/container/path1
      - /user/path2:/container/path2
    # The double $ is needed to indicate that the variable is in the container
    command: ["bash", "-c", "chown -R $$HOST_UID:$$HOST_GID /container/path1 /container/path2; useradd -g $$HOST_GID -u $$HOST_UID user; su -s /bin/bash user"]
    environment:
      HOST_GID: 100
      HOST_UID: 1000
Otherwise, if you run commands in the container as root in Bash, you can exploit the fact that Bash runs the script from the PROMPT_COMMAND variable after each command is executed. This can be used in development by changing docker-compose.yaml:
version: "3.0"
services:
  something:
    image: example-image
    volumes:
      - /user/path1:/container/path1
      - /user/path2:/container/path2
    command: ["bash"]
    environment:
      HOST_UID: 1000
      HOST_GID: 100
      # The double $ is needed to indicate that the variable is in the container
      PROMPT_COMMAND: "chown $$HOST_UID:$$HOST_GID -R /container/path1 /container/path2"
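To get closer to what the question actually asks (create the user at startup if it does not exist), the same idea can be packaged as an entrypoint script. This is a hypothetical sketch: the HOST_UID/HOST_GID variables and the hostuser/hostgroup names are assumptions, not part of the question's setup:

```shell
#!/bin/sh
# Hypothetical entrypoint: create a user matching the host's UID/GID
# (passed in via the environment) before handing off to the real command.
HOST_UID="${HOST_UID:-1000}"
HOST_GID="${HOST_GID:-1000}"

# Only root may add users; in a real image this runs as root at startup.
if [ "$(id -u)" -eq 0 ] && ! getent passwd "$HOST_UID" >/dev/null; then
    groupadd -g "$HOST_GID" hostgroup 2>/dev/null || true
    useradd -m -u "$HOST_UID" -g "$HOST_GID" hostuser 2>/dev/null || true
fi
echo "container user will be UID=$HOST_UID GID=$HOST_GID"
```

In a Dockerfile you would COPY this in and set it as the ENTRYPOINT, then drop privileges (e.g. with su or gosu) before starting the application.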

How to add docker run param to docker compose file?

I am able to run my application with the following command:
docker run --rm -p 4000:4000 myapp:latest python3.8 -m pipenv run flask run -h 0.0.0.0
I am trying to write a docker-compose file so that I can bring up the app using docker-compose up. This is not working. How do I "add" the docker run params to the docker-compose file?
version: '3'
services:
  web:
    build: .
    ports:
      - "4000:4000"
    volumes:
      - .:/code
You need to use command to specify this.
version: '3'
services:
  web:
    build: .
    ports:
      - '4000:4000'
    image: myapp:latest
    command: 'python3.8 -m pipenv run flask run -h 0.0.0.0'
    volumes:
      - .:/code
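For reference, each piece of the original docker run invocation has a compose counterpart (a sketch; --rm has no direct key, since docker-compose down removes the containers for you):

```yaml
services:
  web:
    image: myapp:latest      # myapp:latest
    ports:
      - "4000:4000"          # -p 4000:4000
    # the trailing arguments of `docker run` become `command`:
    command: python3.8 -m pipenv run flask run -h 0.0.0.0
```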
You should use CMD in your Dockerfile to specify this. Since you'll want to specify this every time you run a container based on the image, there's no reason to want to specify it manually when you run the image.
CMD python3.8 -m pipenv run flask run -h 0.0.0.0
Within the context of a Docker container, it's typical to install packages into the "system" Python: it's already isolated from the host Python by virtue of being in a Docker container, and the setup to use a virtual environment is a little bit tricky. That gets rid of the need to run pipenv run.
FROM python:3.8
WORKDIR /code
RUN pip install pipenv
COPY Pipfile Pipfile.lock ./
RUN pipenv install --deploy --system
COPY . .
CMD flask run -h 0.0.0.0
Since the /code directory is already in your image, you can actually make your docker-compose.yml shorter by removing the unnecessary bind mount:
version: '3'
services:
  web:
    build: .
    ports:
      - "4000:4000"
    # no volumes:

Docker volume is not fully sync directory for one container

I've created a simple project for Symfony 4 based on PHP 7.3 + MariaDB via docker-compose, using Docker for Windows 10 (x64).
It works correctly on one machine, but on my laptop it doesn't sync correctly with the container.
In the root folder I have a standard Symfony structure with Docker files like:
- /config
- /public
- /src
....
- /env
- /docker
- .env
- docker-compose.yaml
...
My actions in Git Bash to start the app:
docker-compose build
works correctly, all actions were finished successfully
docker-compose up -d
works correctly, both containers run successfully
docker-compose exec app bash
works correctly, the console starts
ls
the result is: docker env
Only 2 directories are synced, docker and env, and the docker dir was synced incompletely: only the subdirectory structure, without the files.
I tried to find the reason for the file-sync problem, but I don't have enough knowledge and experience with Docker, and docker-compose logs show no errors.
Maybe somebody can help me figure out the reason? It works once, but after a reboot the problem occurs again...
docker-compose.yaml:
version: '3'
services:
  app:
    restart: unless-stopped
    build:
      context: .
      dockerfile: docker/webserver-apache/Dockerfile
    image: php:7.3.1-apache-stretch
    volumes:
      - "./docker/webserver-apache/sites-enabled:/etc/apache2/sites-enabled:ro"
      - "./:/var/www/html"
    ports:
      - 8080:80
    networks:
      - dphptrainnet
  mariadb:
    restart: unless-stopped
    image: mariadb:10.4.1
    networks:
      - dphptrainnet
    volumes:
      - ./env/mariadb/data:/var/lib/mysql
    ports:
      - 3306:3306
    environment:
      MYSQL_ROOT_PASSWORD: ${MYSQL_PASSWORD}
networks:
  dphptrainnet:
Dockerfile:
FROM php:7.3.1-apache-stretch
# Setting up constants for an environment
ENV PHP_MEMORY_LIMIT 512M
RUN php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');" && \
    php composer-setup.php && \
    php -r "unlink('composer-setup.php');" && \
    mv composer.phar /usr/local/bin/composer
RUN apt-get update && \
    apt-get install -y curl vim git zip unzip
# Setting up httpd issues
RUN echo "ServerName localhost" >> /etc/apache2/apache2.conf
RUN a2enmod rewrite headers && /etc/init.d/apache2 restart
RUN echo "127.0.0.1 dockertrain.local" >> /etc/hosts
WORKDIR "/var/www/html"
RUN a2enmod rewrite
I've found only one working solution: re-share the drive for Docker:
1. Disable the shared disk, click Apply
2. Enable the shared disk, click Apply
3. Restart the application; the files were synced
But how can I detect whether there are problems with drive access? No errors, no logs...

Remotely check status of cassandra using nodetool. Error: Connection refused

I am trying to create a docker-compose setup which will wait until the Cassandra container has started before running the JanusGraph container, which requires Cassandra to be running before it starts.
The nodetool command seems to be the standard way to check the status of Cassandra. Here is what I get running nodetool first on the cassandra container:
docker exec -it ns-orchestration_data_storage_1 nodetool status
Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns (effective) Host ID Rack
UN 172.31.0.2 235.53 KiB 256 100.0% eea17329-6274-45a7-a9fb-a749588b733a rack1
The first "UN" in the last stdout line means Up/Normal, which I intend to check for in my wait-for-cassandra-and-elasticsearch.sh script. But when I try to run nodetool on the janusgraph container (remotely), I get this:
docker exec -it ns-orchestration_data_janusgraph_1 bin/nodetool -h 172.31.0.2 -u cassandra -pw <my-password-here> status
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/app/janusgraph-0.3.0-hadoop2/lib/slf4j-log4j12-1.7.12.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/app/janusgraph-0.3.0-hadoop2/lib/logback-classic-1.1.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
nodetool: Failed to connect to '172.31.0.2:7199' - ConnectException: 'Connection refused (Connection refused)'.
I have exposed all Cassandra ports as you can see in the docker-compose file below.
I also saw this post which I am not sure is related. I tried following the instructions but I am still getting the same error.
I would appreciate any suggestions.
File: docker-compose.yml
version: '3'
services:
  data_janusgraph:
    build:
      context: ../ns-compute-store/db-janusgraph
      dockerfile: Dockerfile.janusgraph
    ports:
      - "8182:8182"
    depends_on:
      - data_storage
      - data_index
    networks:
      - ns-net
  data_storage:
    build:
      context: ../ns-compute-store/db-janusgraph
      dockerfile: Dockerfile.cassandra
    environment:
      - CASSANDRA_START_RPC=true
    ports:
      - "9160:9160"
      - "9042:9042"
      - "7199:7199"
      - "7001:7001"
      - "7000:7000"
    volumes:
      - data-volume:/var/lib/cassandra
    networks:
      - ns-net
  data_index:
    image: elasticsearch:5.6
    ports:
      - "9200:9200"
      - "9300:9300"
    networks:
      - ns-net
networks:
  ns-net:
    driver: bridge
volumes:
  data-volume:
File: Dockerfile.cassandra
FROM cassandra:3.11
COPY conf/jmxremote.password /etc/cassandra/jmxremote.password
RUN chown cassandra:cassandra /etc/cassandra/jmxremote.password
RUN chmod 400 /etc/cassandra/jmxremote.password
COPY conf/jmxremote.access /usr/lib/jvm/java-8-openjdk-amd64/jre/lib/management/jmxremote.access
RUN chown cassandra:cassandra /usr/lib/jvm/java-8-openjdk-amd64/jre/lib/management/jmxremote.access
RUN chmod 400 /usr/lib/jvm/java-8-openjdk-amd64/jre/lib/management/jmxremote.access
COPY conf/cassandra.yaml /etc/cassandra/cassandra.yaml
File: Dockerfile.janusgraph
FROM openjdk:8-jre-alpine
RUN mkdir /app
WORKDIR /app
RUN apk update \
&& apk upgrade \
&& apk --no-cache add unzip
RUN wget https://github.com/JanusGraph/janusgraph/releases/download/v0.3.0/janusgraph-0.3.0-hadoop2.zip
RUN unzip janusgraph-0.3.0-hadoop2.zip
RUN apk --no-cache add bash coreutils nmap
RUN apk del unzip
ENV JVM_OPTS="$JVM_OPTS -Djava.rmi.server.hostname=data_storage"
WORKDIR /app/janusgraph-0.3.0-hadoop2
COPY wait-for-cassandra-and-elasticsearch.sh ./
COPY conf/janusgraph-cql-es.properties ./
CMD ["./wait-for-cassandra-and-elasticsearch.sh", "data_storage:9160", "data_index:9200", "./bin/gremlin-server.sh", "./conf/gremlin-server/gremlin-server-berkeleyje.yaml"]
See full code in Github Repository:
https://github.com/nvizo/janusgraph-cluster-example
I believe you should use your cassandra container's service name as the hostname when running nodetool from your janusgraph container. Something along the lines of:
docker exec -it ns-orchestration_data_janusgraph_1 bin/nodetool -h data_storage -u cassandra -pw <my-password-here> status
Give it a try and let me know if it helps.
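If the blocking part is the wait script itself, a generic polling helper can wrap that nodetool check. This is a sketch; the data_storage hostname and the grep for the UN status line are assumptions based on the output shown above:

```shell
#!/bin/sh
# wait_for CMD [RETRIES]: re-run CMD once per second until it succeeds,
# giving up (non-zero exit) after RETRIES failed attempts.
wait_for() {
    cmd="$1"
    retries="${2:-30}"
    i=0
    until eval "$cmd"; do
        i=$((i + 1))
        if [ "$i" -ge "$retries" ]; then
            return 1
        fi
        sleep 1
    done
}

# e.g. block until Cassandra reports a node as Up/Normal:
# wait_for "nodetool -h data_storage -u cassandra -pw \"\$PW\" status | grep -q '^UN'" 60
```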

mysql docker on mac os "crash" on caching_sha2 when laravel connect

I did a factory reset on my Mac Mini, and I want to install only Docker and some basic tools like git directly on macOS; for other software I want to use Docker. By other software I mean apps like php, phpstorm, nginx, node, mysql, postgres, phpmyadmin, mysql-workbench... in many versions. I want to install them via Docker to manage them easily, and for each of these tools I want to map a folder with e.g. my project code, db storage, configuration etc...
While setting up MySQL 8 I ran into a strange problem: I was able to log in to the db as root via phpmyadmin and mysql-workbench, but my PHP 7 Laravel application "hangs" during the connection. Here is the mysql docker-compose file:
version: '3.1'
services:
  db:
    image: mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: terefere321
      MYSQL_ROOT_HOST: "%"
    ports:
      - 3306:3306
    volumes:
      - ./db_files:/var/lib/mysql
Here is the docker file plus a script which allows me to run php via cmd on docker:
FROM php:7.2-apache
COPY --from=composer:latest /usr/bin/composer /usr/bin/composer
RUN apt-get update &&\
apt-get install -y \
git \
zlib1g-dev \
zip \
unzip \
&&\
docker-php-ext-install pdo pdo_mysql zip &&\
a2enmod rewrite
Bash script to run the php-cmd docker container and "log in" to it to get a command line:
set -e
cd -- "$(dirname "$0")" # go to script dir - (for macos double clik run)
docker build -t php-cmd .
docker rm -f php-cmd
docker run -d --name php-cmd -v /Volumes/work:/var/www/html php-cmd
docker exec -it php-cmd /bin/bash
Here /Volumes/work is the directory with my project code. After "logging in" I run php artisan migrate; the app hangs for 30s and then throws errors:
SQLSTATE[HY000] [2006] MySQL server has gone away PDO::__construct():
Unexpected server respose while doing caching_sha2 auth : 109
Add the --default-authentication-plugin=mysql_native_password command to the mysql 8 service in your docker-compose file, so you get:
version: '3.1'
services:
  db:
    image: mysql
    command: --default-authentication-plugin=mysql_native_password
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: terefere321
      MYSQL_ROOT_HOST: "%"
    ports:
      - 3306:3306
    volumes:
      - ./db_files:/var/lib/mysql
