Goal
We would like to create a development environment where we can run the latest versions of our registry, UAA and gateway on a server. We would then like to develop and run a microservice locally (inside or outside Docker), configured to connect to and communicate with the services on that server.
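Roughly what we are aiming for, as a sketch only (dev-server.example.com is a placeholder for the machine that runs the registry/UAA/gateway compose file, and we assume the registry's port 8761 is published there), is to override the two registry URLs in the microservice, either as environment variables in its own compose file or as plain properties when running it via gradlew:

# sketch: dev-server.example.com is a placeholder hostname, port 8761 must be published on that machine
services:
  myservice-app:
    image: myservice
    environment:
      - SPRING_PROFILES_ACTIVE=prod,swagger
      - EUREKA_CLIENT_SERVICE_URL_DEFAULTZONE=http://admin:$${jhipster.registry.password}@dev-server.example.com:8761/eureka
      - SPRING_CLOUD_CONFIG_URI=http://admin:$${jhipster.registry.password}@dev-server.example.com:8761/config
      - JHIPSTER_REGISTRY_PASSWORD=admin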
Test setup
I have generated a docker-compose setup via the JHipster docker-compose sub-generator for our gateway, UAA and registry. I then tried to start the microservice I'm currently working on via gradlew, to build it via gradlew dockerBuild and to bring it up via app.yml. I also tried changing the registry hostname in app.yml to localhost, to 127.0.0.1 and to the IP of the registry's Docker container.
My results
If the hostname is jhipster-registry: UnknownHostException. Most likely because the applications are started from different docker-compose files and therefore end up on different Docker networks (a possible workaround is sketched after the files below).
If the hostname is localhost or 127.0.0.1: http://127.0.0.1:8761/config/application/prod/master connection refused. Perhaps some more configuration is required?
If the hostname is the IP of the registry's Docker container: after the JHipster logo in the terminal no further output is printed, but the application also never stops with an exception.
Files
docker-compose.yml (registry, uaa & gateway)
version: '2'
services:
mygateway-app:
image: mygateway
environment:
- SPRING_PROFILES_ACTIVE=prod,swagger
- EUREKA_CLIENT_SERVICE_URL_DEFAULTZONE=http://admin:$${jhipster.registry.password}@jhipster-registry:8761/eureka
- SPRING_CLOUD_CONFIG_URI=http://admin:$${jhipster.registry.password}@jhipster-registry:8761/config
- SPRING_DATASOURCE_URL=jdbc:mysql://mygateway-mysql:3306/mygateway?useUnicode=true&characterEncoding=utf8&useSSL=false
- JHIPSTER_SLEEP=30
- JHIPSTER_REGISTRY_PASSWORD=admin
ports:
- 8080:8080
depends_on:
- "mygateway-mysql"
- "myuaa-app"
mygateway-mysql:
image: mysql:5.7.20
environment:
- MYSQL_USER=root
- MYSQL_ALLOW_EMPTY_PASSWORD=yes
- MYSQL_DATABASE=mygateway
command: mysqld --lower_case_table_names=1 --skip-ssl --character_set_server=utf8mb4 --explicit_defaults_for_timestamp
myuaa-app:
image: myuaa
environment:
- SPRING_PROFILES_ACTIVE=prod,swagger
- EUREKA_CLIENT_SERVICE_URL_DEFAULTZONE=http://admin:$${jhipster.registry.password}@jhipster-registry:8761/eureka
- SPRING_CLOUD_CONFIG_URI=http://admin:$${jhipster.registry.password}@jhipster-registry:8761/config
- SPRING_DATASOURCE_URL=jdbc:mysql://myuaa-mysql:3306/myuaa?useUnicode=true&characterEncoding=utf8&useSSL=false
- JHIPSTER_SLEEP=30
- JHIPSTER_REGISTRY_PASSWORD=admin
depends_on:
- "myuaa-mysql"
- "jhipster-registry"
myuaa-mysql:
image: mysql:5.7.20
environment:
- MYSQL_USER=root
- MYSQL_ALLOW_EMPTY_PASSWORD=yes
- MYSQL_DATABASE=myuaa
command: mysqld --lower_case_table_names=1 --skip-ssl --character_set_server=utf8mb4 --explicit_defaults_for_timestamp
jhipster-registry:
extends:
file: jhipster-registry.yml
service: jhipster-registry
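One workaround I am considering for the UnknownHostException case (a sketch only, and only applicable while both compose files run on the same Docker host; jhipster_net is a name I made up and would be created beforehand with docker network create jhipster_net) is to attach the registry to a pre-existing external network:

# excerpt of docker-compose.yml (registry/uaa/gateway side) - sketch
services:
  jhipster-registry:
    extends:
      file: jhipster-registry.yml
      service: jhipster-registry
    networks:
      - default       # keep the gateway and UAA able to reach it
      - jhipster_net  # shared with the microservice's compose project
networks:
  jhipster_net:
    external: true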
app.yml (microservice)
version: '2'
services:
myservice-app:
image: myservice
environment:
# - _JAVA_OPTIONS=-Xmx512m -Xms256m
- SPRING_PROFILES_ACTIVE=prod,swagger
- EUREKA_CLIENT_SERVICE_URL_DEFAULTZONE=http://admin:$${jhipster.registry.password}@localhost:8761/eureka
- SPRING_CLOUD_CONFIG_URI=http://admin:$${jhipster.registry.password}@localhost:8761/config
- SPRING_DATASOURCE_URL=jdbc:mysql://myservice-mysql:3306/myservice?useUnicode=true&characterEncoding=utf8&useSSL=false
- JHIPSTER_SLEEP=10 # gives time for the JHipster Registry to boot before the application
- JHIPSTER_REGISTRY_PASSWORD=admin
myservice-mysql:
extends:
file: mysql.yml
service: myservice-mysql
# jhipster-registry:
# extends:
# file: jhipster-registry.yml
# service: jhipster-registry
# environment:
# - SPRING_CLOUD_CONFIG_SERVER_COMPOSITE_0_TYPE=native
# - SPRING_CLOUD_CONFIG_SERVER_COMPOSITE_0_SEARCH_LOCATIONS=file:./central-config/docker-config/
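Continuing that sketch, app.yml would join the same made-up external network, at which point the original jhipster-registry hostnames should resolve again (again assuming both compose files run on the same Docker host):

# excerpt of app.yml (microservice side) - sketch
services:
  myservice-app:
    image: myservice
    environment:
      - EUREKA_CLIENT_SERVICE_URL_DEFAULTZONE=http://admin:$${jhipster.registry.password}@jhipster-registry:8761/eureka
      - SPRING_CLOUD_CONFIG_URI=http://admin:$${jhipster.registry.password}@jhipster-registry:8761/config
    networks:
      - default
      - jhipster_net
networks:
  jhipster_net:
    external: true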
Related
My target container produces NGINX logs, which I want to collect with Elastic Fleet's NGINX integration.
I followed every step and even got the Fleet server and the Agent running successfully in two separate containers. What confuses me is how to configure the Agent, which has the NGINX integration set up on its policy, to collect logs from the service container.
Most examples I have encountered install the elastic-agent package directly on the target container.
I've attached the three relevant snippets of my docker-compose setup for the Fleet, Agent and App containers.
FLEET SERVER
fleet:
image: docker.elastic.co/beats/elastic-agent:$ELASTIC_VERSION
healthcheck:
test: "curl -f http://127.0.0.1:8220/api/status | grep HEALTHY 2>&1 >/dev/null"
retries: 12
interval: 5s
hostname: fleet
container_name: fleet
restart: always
user: root
environment:
- FLEET_SERVER_ENABLE=1
- "FLEET_SERVER_ELASTICSEARCH_HOST=https://elasticsearch:9200"
- FLEET_SERVER_ELASTICSEARCH_USERNAME=elastic
- FLEET_SERVER_ELASTICSEARCH_PASSWORD=REPLACE1
- FLEET_SERVER_ELASTICSEARCH_CA=$CERTS_DIR/ca/ca.crt
- FLEET_SERVER_INSECURE_HTTP=1
- KIBANA_FLEET_SETUP=1
- "KIBANA_FLEET_HOST=https://kibana:5601"
- KIBANA_FLEET_USERNAME=elastic
- KIBANA_FLEET_PASSWORD=REPLACE1
- KIBANA_FLEET_CA=$CERTS_DIR/ca/ca.crt
- FLEET_ENROLL=1
ports:
- 8220:8220
networks:
- elastic
volumes:
- certs:$CERTS_DIR
Elastic Agent
agent:
image: docker.elastic.co/beats/elastic-agent:$ELASTIC_VERSION
container_name: agent
hostname: agent
restart: always
user: root
healthcheck:
test: "elastic-agent status"
retries: 90
interval: 1s
environment:
- FLEET_ENROLLMENT_TOKEN=REPLACE2
- FLEET_ENROLL=1
- FLEET_URL=http://fleet:8220
- FLEET_INSECURE=1
- ELASTICSEARCH_HOSTS='["https://elasticsearch:9200"]'
- ELASTICSEARCH_USERNAME=elastic
- ELASTICSEARCH_PASSWORD=REPLACE1
- ELASTICSEARCH_CA=$CERTS_DIR/ca/ca.crt
- "STATE_PATH=/usr/share/elastic-agent"
networks:
- elastic
volumes:
- certs:$CERTS_DIR
App Container (NGINX logs)
demo-app:
image: ubuntu:bionic
container_name: demo-app
build:
context: ./docker/
dockerfile: Dockerfile
volumes:
- ./app:/var/www/html/app
- ./docker/nginx.conf:/etc/nginx/nginx.conf
ports:
- target: 90
published: 9090
protocol: tcp
mode: host
networks:
- elastic
The ELK stack currently runs on version 7.17.0.
If anyone could provide any info on what needs to be done next, it would be very helpful, thanks!
You could share the nginx log files through a volume mount:
Mount a host directory onto the nginx log directory, and mount that same directory into your Elastic Agent container. Then you can harvest the nginx logs from inside the agent container. There might be directory read/write permission problems; feel free to ask below.
Kinda like:
nginx compose:
demo-app:
...
volumes:
- ./app:/var/www/html/app
- ./docker/nginx.conf:/etc/nginx/nginx.conf
+ - /home/user/nginx-log:/var/log/nginx   # mount the whole log directory, not the access.log file
...
elastic agent compose:
services:
agent:
...
volumes:
- certs:$CERTS_DIR
+ - /home/user/nginx-log:/usr/share/elastic-agent/nginx-log
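If host-path permissions become a problem, a variation on the same idea (a sketch; nginx-log is a made-up named volume created beforehand with docker volume create nginx-log) is to share a named Docker volume between the two compose projects instead of a host directory. The NGINX integration's log paths in the agent policy then need to point below /usr/share/elastic-agent/nginx-log/:

# demo-app compose - sketch
services:
  demo-app:
    volumes:
      - nginx-log:/var/log/nginx
volumes:
  nginx-log:
    external: true

# elastic agent compose - sketch
services:
  agent:
    volumes:
      - nginx-log:/usr/share/elastic-agent/nginx-log
volumes:
  nginx-log:
    external: true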
I'm using Docker with https://github.com/markshust/docker-magento. When I follow the steps and try to import the DB, I get the error ERROR 2002 (HY000): Can't connect to MySQL server on 'db' (115).
I tried this solution: ERROR 2002 (HY000): Can't connect to MySQL server on 'db' (115).
My yml file:
## Mark Shust's Docker Configuration for Magento
## (https://github.com/markshust/docker-magento)
##
## Version 41.0.2
## To use SSH, see https://github.com/markshust/docker-magento#ssh
## Linux users, see https://github.com/markshust/docker-magento#linux
## If you changed the default Docker network, you may need to replace
## 172.17.0.1 in this file with the result of:
## docker network inspect bridge --format='{{(index .IPAM.Config 0).Gateway}}'
version: "3"
services:
app:
image: markoshust/magento-nginx:1.18-5
ports:
- "80:8000"
- "443:8443"
volumes: &appvolumes
- ~/.composer:/var/www/.composer:cached
- ~/.ssh/id_rsa:/var/www/.ssh/id_rsa:cached
- ~/.ssh/known_hosts:/var/www/.ssh/known_hosts:cached
- appdata:/var/www/html
- sockdata:/sock
- ssldata:/etc/nginx/certs
extra_hosts: &appextrahosts
## M1 Mac support to fix Docker delay, see #566
- "app:172.17.0.1"
- "phpfpm:172.17.0.1"
- "db:172.17.0.1"
- "redis:172.17.0.1"
- "elasticsearch:172.17.0.1"
- "rabbitmq:172.17.0.1"
## Selenium support, replace "magento.test" with URL of your site
- "magento.test:172.17.0.1"
phpfpm:
image: markoshust/magento-php:7.2-fpm
volumes: *appvolumes
env_file: env/phpfpm.env
db:
image: mariadb:10.1
command: mysqld --innodb_force_recovery=6 --lower_case_table_names=1 --skip-ssl --character_set_server=utf8mb4 --explicit_defaults_for_timestamp
ports:
- "3306:3306"
env_file: env/db.env
volumes:
- dbdata:/var/lib/mysql
redis:
image: redis:6.0-alpine
ports:
- "6379:6379"
elasticsearch:
image: markoshust/magento-elasticsearch:7.9.3-1
ports:
- "9200:9200"
- "9300:9300"
environment:
- "discovery.type=single-node"
## Set custom heap size to avoid memory errors
- "ES_JAVA_OPTS=-Xms1g -Xmx1g"
## Avoid test failures due to small disks
## More info at https://github.com/markshust/docker-magento/issues/488
- "cluster.routing.allocation.disk.threshold_enabled=false"
- "index.blocks.read_only_allow_delete"
rabbitmq:
image: rabbitmq:3.8.22-management-alpine
ports:
- "15672:15672"
- "5672:5672"
volumes:
- rabbitmqdata:/var/lib/rabbitmq
env_file: env/rabbitmq.env
mailcatcher:
image: sj26/mailcatcher
ports:
- "1080:1080"
## Selenium support, uncomment to enable
#selenium:
# image: selenium/standalone-chrome-debug:3.8.1
# ports:
# - "5900:5900"
# extra_hosts: *appextrahosts
volumes:
appdata:
dbdata:
rabbitmqdata:
sockdata:
ssldata:
Remember that you will need to connect to the running Docker container, so you probably want to use TCP instead of a Unix socket. Check the output of docker ps and look for a running MySQL container. If you find one, connect with the mysql client over the published port, e.g. mysql -h 127.0.0.1 -P 3306 (the port is shown in the docker ps output; with the compose file above it is 3306, and the credentials come from env/db.env). If you can't find a running MySQL container in the docker ps output, try docker images to find a MySQL image name and start it with something like docker run -d -p 3306:3306 tutum/mysql, where tutum/mysql is the image name found in docker images.
I am trying to run a Gitea server with Drone. They are currently both hosted on the same Ubuntu machine and the Docker containers are set up through a docker-compose.yml file.
When starting up all services I get the following error in the logs of the drone runner service:
time="2020-08-12T19:10:42Z" level=error msg="cannot ping the remote server" error="Post http://drone:80/rpc/v2/ping: dial tcp: lookup drone on 127.0.0.11:53: no such host"
Both http://gitea and http://drone point to localhost (via /etc/hosts). I sadly don't understand how or why the drone runner cannot find the server. Calling docker container inspect on all four of my containers shows they are all connected to the same network (drone_and_gitea_giteanet), which is also the network I set in the DRONE_RUNNER_NETWORKS environment variable.
This is how my docker-compose.yml file looks:
version: "3.8"
# Create named volumes for gitea server, gitea database and drone server
volumes:
gitea:
gitea-db:
drone:
# Create shared network for gitea and drone
networks:
giteanet:
external: false
services:
gitea:
container_name: gitea
image: gitea/gitea:1
#restart: always
environment:
- APP_NAME="Automated Student Assessment Tool"
- USER_UID=1000
- USER_GID=1000
- ROOT_URL=http://gitea:3000
- DB_TYPE=postgres
- DB_HOST=gitea-db:5432
- DB_NAME=gitea
- DB_USER=gitea
- DB_PASSWD=gitea
networks:
- giteanet
ports:
- "3000:3000"
- "222:22"
volumes:
- gitea:/data
- /etc/timezone:/etc/timezone:ro
- /etc/localtime:/etc/localtime:ro
depends_on:
- gitea-db
gitea-db:
container_name: gitea-db
image: postgres:9.6
#restart: always
environment:
- POSTGRES_USER=gitea
- POSTGRES_PASSWORD=gitea
- POSTGRES_DB=gitea
networks:
- giteanet
volumes:
- gitea-db:/var/lib/postgresql/data
drone-server:
container_name: drone-server
image: drone/drone:1
#restart: always
environment:
# General server settings
- DRONE_SERVER_HOST=drone:80
- DRONE_SERVER_PROTO=http
- DRONE_RPC_SECRET=topsecret
# Gitea Config
- DRONE_GITEA_SERVER=http://gitea:3000
- DRONE_GITEA_CLIENT_ID=<CLIENT ID>
- DRONE_GITEA_CLIENT_SECRET=<CLIENT SECRET>
# Create Admin User, name should be the same as Gitea Admin user
- DRONE_USER_CREATE=username:AdminUser,admin:true
# Drone Logs Settings
- DRONE_LOGS_PRETTY=true
- DRONE_LOGS_COLOR=true
networks:
- giteanet
ports:
- "80:80"
volumes:
- drone:/data
depends_on:
- gitea
drone-agent:
container_name: drone-agent
image: drone/drone-runner-docker:1
#restart: always
environment:
- DRONE_RPC_PROTO=http
- DRONE_RPC_HOST=drone:80
- DRONE_RPC_SECRET=topsecret
- DRONE_RUNNER_CAPACITY=1
- DRONE_RUNNER_NETWORKS=drone_and_gitea_giteanet
networks:
- giteanet
volumes:
- /var/run/docker.sock:/var/run/docker.sock
depends_on:
- drone-server
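One detail that may matter here (an assumption, not a verified fix): 127.0.0.11 is Docker's embedded DNS resolver, so the lookup happens inside the container network, where the /etc/hosts entries on the VM don't apply and the server is only known by its service name drone-server. A sketch of making the configured name resolvable is to give the server an extra network alias; alternatively, DRONE_RPC_HOST in the runner could simply point at drone-server instead of drone:

# sketch: extra alias so that "drone" resolves on giteanet
services:
  drone-server:
    networks:
      giteanet:
        aliases:
          - drone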
It would help me a lot if somebody could maybe take a look at the issue and help me out! :)
On the same Linux VM with Docker v19.03.11 I'm running:
Nexus:
version: '3.7'
services:
nexus:
container_name: nexus
image: sonatype/nexus3
volumes:
- nexus-data:/nexus-data
networks:
- web
ports:
- 8081
- 8082
- 8083
restart: always
labels:
- "traefik.enable=true"
- "traefik.docker.network=web"
# admin.nexus.xxx.intern
- "traefik.http.routers.nexus.rule=Host(`admin.nexus.xxx.intern`, `maven.nexus.itools.intern`)"
- "traefik.http.services.nexus.loadbalancer.server.port=8081"
- "traefik.http.routers.nexus.service=nexus"
- "traefik.http.routers.nexus.entrypoints=web"
# docker.nexus.xxx.intern
- "traefik.http.routers.docker.rule=Host(`docker.nexus.xxx.intern`)"
- "traefik.http.services.docker.loadbalancer.server.port=8083"
- "traefik.http.routers.docker.service=docker"
- "traefik.http.routers.docker.entrypoints=web"
networks:
web:
external: true
volumes:
nexus-data:
external: true
In the Nexus Repository Manager dashboard I created one hosted Docker repository and assigned it port 8083.
and Traefik:
version: '3.7'
services:
traefik:
container_name: traefik
image: traefik
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- ./conf:/conf
- ../ssl:/ssl:ro
networks:
- web
ports:
- 80:80
- 443:443
restart: always
command:
# Enabling docker provider
- "--providers.docker=true"
# Do not expose containers unless explicitly told so
- "--providers.docker.exposedbydefault=false"
# Enable API (listening on port 8080)
- "--api.insecure=true"
- "--entrypoints.web.address=:80"
- "--entrypoints.websecure.address=:443"
# Enable the file provider to define routers / middlewares / services in file
# EMPTY AT THE TIME!
- "--providers.file.directory=/conf"
labels:
- "traefik.enable=true"
- "traefik.http.routers.traefik.service=api#internal"
- "traefik.http.routers.traefik.rule=Host(`traefik.xxx.intern`)"
- "traefik.http.routers.traefik.entrypoints=web"
- "traefik.http.routers.traefik_tls.tls=true"
- "traefik.http.routers.traefik_tls.rule=Host(`traefik.xxx.intern`)"
- "traefik.http.routers.traefik_tls.entrypoints=websecure"
- "traefik.http.routers.traefik_tls.service=api#internal"
networks:
web:
external: true
I can log in to the Docker repository from anywhere:
docker login -u user -p password1234 docker.nexus.xxx.intern
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
Login Succeeded
but I cannot push into the registry:
docker push docker.nexus.xxx.intern/hello-world
The push refers to repository [docker.nexus.xxx.intern/hello-world]
af0b15c8625b: Preparing
error parsing HTTP 404 response body: invalid character 'p' after top-level value: "404 page not found\n"
When I expose port 8083 and bypass Traefik, everything works fine and I can push to the Nexus registry. The problem is that I can only expose ports 80 and 443.
Did anyone have a similar issue and know how to solve it?
Update 1
I have also tried with Harbor: same result, I cannot push from behind Traefik.
Same issue for me. I've tried:
Option 1: add a prefix on the path, since Docker puts a /v2 prefix on registry requests
traefik.http.routers.docker.rule=Host(`docker.nexus.xxx.intern`) && PathPrefix(`/{version:(v1|v2)}/`)
Option 2: add a prefix to the request and strip it again with a replacepathregex middleware
traefik.http.middlewares.replace-path.replacepathregex.regex=^/(v1|v2)/(push|pull)/(.*)
traefik.http.middlewares.replace-path.replacepathregex.replacement=/$$1/$$3
traefik.http.routers.docker.rule=Host(`docker.nexus.xxx.intern`) && PathPrefix(`/{version:(v1|v2)}/push/`)
traefik.http.routers.nexus-registry-push.middlewares=replace-path
Try adding this to the Nexus container in the docker-compose file:
environment:
- "REGISTRY_HTTP_RELATIVEURLS=true"
We have developed a project with multiple microservices built on Spring Boot. We are using Docker containers and docker-compose. We are facing an issue generating the application log file. We have the below config in the application.yml file:
logging:
file: /data/test/run/logs/x.log
After building the image, if we start a container separately (using docker run imageName), the log file is generated inside the container. But when we bring up the same images using docker-compose (docker-compose up), the log file is not generated in the container.
docker-compose.yml
version: '2'
services:
lb:
image: dockercloud/haproxy
links:
- x-service
volumes:
- /var/run/docker.sock:/var/run/docker.sock
ports:
- "80:80"
- "1936:1936"
eureka-service:
image: x.y.com/registration-server:0.0.2
ports:
- "2323:2323"
environment:
- APPBINARY=registration-server.jar
entrypoint:
- /usr/bin/jarrun.sh
- QA
x-service:
image: x.y.com/x-service:0.2.7
ports:
- "4444"
links:
- eureka-service
environment:
- JAVA_OPTS=-Xms512M -Xmx1024M
- VIRTUAL_HOST=*/x/*
- "SPRING_PROFILES_ACTIVE=qa"
- APPBINARY=x-service.jar
- environment=qa
extra_hosts:
- "service1.test.com:111.11.1.111"
entrypoint:
- /usr/bin/jarrun.sh
- QA
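A sketch of one way to narrow this down (./logs on the host is a made-up path): mount the log directory of x-service out of the container, so you can see from the host whether the file is written at all, and also check inside the running container with docker exec <container> ls /data/test/run/logs whether the file is really absent or only absent from where you are looking:

# excerpt of docker-compose.yml - sketch
services:
  x-service:
    image: x.y.com/x-service:0.2.7
    volumes:
      - ./logs:/data/test/run/logs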