After reading the article how-to-configure-a-production-ready-mesosphere-cluster-on-ubuntu-14-04, I wanted to start my own Docker Mesosphere setup using 3 servers.
The setup is similar to the article's, except that I use 4 dockerized services:
Docker Zookeeper
Docker Mesos Master
Docker Mesos Slave
Docker Marathon
I got really confused by the configuration file locations, because the article installs all 4 components on the same machine, whereas my Docker setup spreads them across different servers. How do I apply the article's steps correctly using Docker?
I have:
Server 1 - prod02 - prod02.domain.com
Server 2 - preprod02 - preprod02.domain.com
Server 3 - prod01 - prod01.domain.com
Here is the docker-compose.yml I started writing for running the Mesosphere master stack on server 1:
zookeeper:
  build: zookeeper
  restart: always
  command: /usr/share/zookeeper/bin/zkServer.sh start-foreground
  ports:
    - "2181:2181"
    - "2888:2888"
    - "3888:3888"

master:
  build: master
  restart: always
  environment:
    - MESOS_HOSTNAME=master.prod-02.example.com
    - MESOS_ZK=zk://prod-02.example.com:2181,prod-01.example.com:2181,preprod-02.example.com:2181/mesos
    - MESOS_QUORUM=1
    - MESOS_LOG_DIR=/var/log/mesos
    - MESOS_WORK_DIR=/var/lib/mesos
  volumes:
    - /srv/docker/mesos-master:/var/log/mesos
  ports:
    - "5050:5050"

slave:
  build: slave
  restart: always
  privileged: true
  environment:
    - MESOS_HOSTNAME=slave.prod-02.example.com
    - MESOS_MASTER=zk://prod-02.example.com:2181,prod-01.example.com:2181,preprod-02.example.com:2181/mesos
    - MESOS_EXECUTOR_REGISTRATION_TIMEOUT=5mins # also in Dockerfile
    - MESOS_CONTAINERIZERS=docker,mesos
    - MESOS_LOG_DIR=/var/log/mesos
    - MESOS_LOGGING_LEVEL=INFO
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
    - /usr/bin/docker:/usr/bin/docker
    - /sys:/sys:ro
    - /srv/docker/mesos-slave:/var/log/mesos
    - /srv/docker/mesos-data/docker.tar.gz:/etc/docker.tar.gz
  ports:
    - "5051:5051"

marathon:
  build: marathon
  restart: always
  environment:
    - MARATHON_HOSTNAME=marathon.prod-02.example.com
    - MARATHON_MASTER=zk://prod-02.example.com:2181,prod-01.example.com:2181,preprod-02.example.com:2181/mesos
    - MARATHON_ZK=zk://prod-02.example.com:2181,prod-01.example.com:2181,preprod-02.example.com:2181/marathon
  ports:
    - "8081:8080"
My project directory looks like this:

/prod-02
  /marathon
    Dockerfile
  /master
    Dockerfile
  /slave
    Dockerfile
  /zookeeper
    /assets
      /conf
        myid
        zoo.cfg
  docker-compose.yml
With this config, the master and slave containers can't start; the log is:
WARNING: Logging before InitGoogleLogging() is written to STDERR
F1016 12:12:49.976361 1 process.cpp:895] Failed to initialize: Failed to bind on XXX.XXX.XXX.XXX:5051: Cannot assign requested address: Cannot assign requested address [99]
*** Check failure stack trace: ***
I feel a bit lost due to the lack of documentation; any help with the configuration is much appreciated.
I finally sorted this out: what was missing was the external IP address (MESOS_IP) set for master and slave, and also the net: host mode.
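For reference, the working services ended up looking roughly like this (a minimal sketch of the fix, not the complete file; XXX.XXX.XXX.XXX is a placeholder for each server's external IP, and net is the compose v1 spelling of host networking):

master:
  build: master
  restart: always
  net: host
  environment:
    - MESOS_IP=XXX.XXX.XXX.XXX   # external IP of this server (placeholder)
    - MESOS_HOSTNAME=master.prod-02.example.com
    # ...remaining variables as above

slave:
  build: slave
  restart: always
  privileged: true
  net: host
  environment:
    - MESOS_IP=XXX.XXX.XXX.XXX   # same idea on the slave
    # ...remaining variables as above

With host networking the ports mappings become unnecessary, since the processes bind directly on the host interface.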
My target container produces NGINX logs, which I want to collect with Elastic Fleet's NGINX integration.
I followed every step, even successfully hosting the Fleet Server and the Agent in two separate containers. What confuses me is how I can configure my Agent, which has the NGINX integration set up on its policy, to collect logs from the service container.
I have mostly encountered examples that install the elastic-agent package directly on the target container.
I've attached three snippets of my docker-compose setup for the Fleet, Agent, and App containers.
FLEET SERVER
fleet:
  image: docker.elastic.co/beats/elastic-agent:$ELASTIC_VERSION
  healthcheck:
    test: "curl -f http://127.0.0.1:8220/api/status | grep HEALTHY 2>&1 >/dev/null"
    retries: 12
    interval: 5s
  hostname: fleet
  container_name: fleet
  restart: always
  user: root
  environment:
    - FLEET_SERVER_ENABLE=1
    - "FLEET_SERVER_ELASTICSEARCH_HOST=https://elasticsearch:9200"
    - FLEET_SERVER_ELASTICSEARCH_USERNAME=elastic
    - FLEET_SERVER_ELASTICSEARCH_PASSWORD=REPLACE1
    - FLEET_SERVER_ELASTICSEARCH_CA=$CERTS_DIR/ca/ca.crt
    - FLEET_SERVER_INSECURE_HTTP=1
    - KIBANA_FLEET_SETUP=1
    - "KIBANA_FLEET_HOST=https://kibana:5601"
    - KIBANA_FLEET_USERNAME=elastic
    - KIBANA_FLEET_PASSWORD=REPLACE1
    - KIBANA_FLEET_CA=$CERTS_DIR/ca/ca.crt
    - FLEET_ENROLL=1
  ports:
    - 8220:8220
  networks:
    - elastic
  volumes:
    - certs:$CERTS_DIR
Elastic Agent
agent:
  image: docker.elastic.co/beats/elastic-agent:$ELASTIC_VERSION
  container_name: agent
  hostname: agent
  restart: always
  user: root
  healthcheck:
    test: "elastic-agent status"
    retries: 90
    interval: 1s
  environment:
    - FLEET_ENROLLMENT_TOKEN=REPLACE2
    - FLEET_ENROLL=1
    - FLEET_URL=http://fleet:8220
    - FLEET_INSECURE=1
    - ELASTICSEARCH_HOSTS='["https://elasticsearch:9200"]'
    - ELASTICSEARCH_USERNAME=elastic
    - ELASTICSEARCH_PASSWORD=REPLACE1
    - ELASTICSEARCH_CA=$CERTS_DIR/ca/ca.crt
    - "STATE_PATH=/usr/share/elastic-agent"
  networks:
    - elastic
  volumes:
    - certs:$CERTS_DIR
App Container (NGINX logs)
demo-app:
  image: ubuntu:bionic
  container_name: demo-app
  build:
    context: ./docker/
    dockerfile: Dockerfile
  volumes:
    - ./app:/var/www/html/app
    - ./docker/nginx.conf:/etc/nginx/nginx.conf
  ports:
    - target: 90
      published: 9090
      protocol: tcp
      mode: host
  networks:
    - elastic
The ELK stack currently runs on version 7.17.0.
If anyone could provide any info on what needs to be done next, it would be very helpful, thanks!
You could share the NGINX log files through a volume mount: mount a host directory over the NGINX log directory, then mount that same directory into your Elastic Agent container. From there you can harvest the NGINX logs inside the agent container.
There might be directory read/write permission problems; feel free to ask below.
Kinda like:
nginx compose:
demo-app:
  ...
  volumes:
    - ./app:/var/www/html/app
    - ./docker/nginx.conf:/etc/nginx/nginx.conf
+   - /home/user/nginx-log:/var/log/nginx
  ...
elastic agent compose:
services:
  agent:
    ...
    volumes:
      - certs:$CERTS_DIR
+     - /home/user/nginx-log:/usr/share/elastic-agent/nginx-log
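If the app and the agent live in the same compose project, a named volume is an alternative that sidesteps host-path permission problems (a sketch under my own assumptions; the volume name nginx-logs is mine, and the agent side is mounted read-only since harvesting only needs to read):

volumes:
  nginx-logs:

services:
  demo-app:
    volumes:
      - nginx-logs:/var/log/nginx
  agent:
    volumes:
      - nginx-logs:/usr/share/elastic-agent/nginx-log:ro   # read-only is enough for log collection

Either way, the NGINX integration's log path in the agent policy would then have to point at /usr/share/elastic-agent/nginx-log/*.log rather than the default location.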
I have an application running on a server. At some point the server rebooted, and some Docker services managed to restart, but others did not.
docker-compose ps:
Name Command State Ports
------------------------------------------------------------------------------------------------------------
elasticsearch /usr/local/bin/docker-entr ... Up 0.0.0.0:9200->9200/tcp, 0.0.0.0:9300->9300/tcp
kibana sh -c ./bin/kibana-plugin ... Restarting
logstash /usr/local/bin/docker-entr ... Up 5044/tcp, 9600/tcp
If I try to see the logs of kibana with docker logs kibana:
Plugin kbn_radar already exists, please remove before installing a new version
Found previous install attempt. Deleting...
Attempting to transfer from file:///usr/share/kibana/config/kbn_radar.zip
Transferring 3686700 bytes....................
Transfer complete
Retrieving metadata from plugin archive
Extracting plugin archive
The problem is: kbn_radar takes a long time to install, so I want to restart the kibana service without needing to restart the other applications. I've tried to change my .yml file where I run the commands that install the plugins:
kibana:
  image: docker.elastic.co/kibana/kibana:6.8.0
  command:
    - sh
    - -c
    - './bin/kibana-plugin install file:///usr/share/kibana/config/kbn_radar.zip && ./bin/kibana-plugin install file:///usr/share/kibana/config/ob-kb-funnel-6.8.zip && exec /usr/local/bin/kibana-docker'
So in the end, my docker-compose file was:
docker-compose.yml:
version: "3"
networks:
elasticsearch-net-624:
services:
elasticsearch-products-624-service:
image: docker.elastic.co/elasticsearch/elasticsearch:6.8.0
container_name: elasticsearch
restart: always
networks:
- elasticsearch-net-624
ports:
- "9200:9200"
- "9300:9300"
expose:
- "9200"
volumes:
- /home/docker/elastic.yml:/usr/share/elasticsearch/config/elasticsearch.yml
- /home/docker/elastic-certificates.p12:/usr/share/elasticsearch/config/elastic-certificates.p12
- /docker/elastic/data:/usr/share/elasticsearch/data
- /docker/elastic/data/snapshots:/usr/share/elasticsearch/data/snapshots
kibana:
image: docker.elastic.co/kibana/kibana:6.8.0
command:
- sh
- -c
- 'exec /usr/local/bin/kibana-docker'
container_name: kibana
restart: always
hostname: kibana
networks:
- elasticsearch-net-624
environment:
- SERVER_NAME=kibana.localhost
- ELASTICSEARCH_URL=http://elasticsearch:9200
- ELASTICSEARCH_HOST=elasticsearch
- ELASTICSEARCH_PORT=9200
- XPACK_GRAPH_ENABLED=true
- XPACK_WATCHER_ENABLED=true
- XPACK_ML_ENABLED=true
- XPACK_MONITORING_ENABLED=true
- XPACK_MONITORING_UI_CONTAINER_ELASTICSEARCH_ENABLED
ports:
- "5601:5601"
expose:
- "5601"
links:
- elasticsearch-products-624-service
depends_on:
- elasticsearch-products-624-service
volumes:
- /home/docker/kibana.yml:/usr/share/kibana/config/kibana.yml
- /home/docker/ob-kb-funnel-6.8.zip:/usr/share/kibana/config/ob-kb-funnel-6.8.zip
- /home/docker/kbn_radar.zip:/usr/share/kibana/config/kbn_radar.zip
- /home/morpheus/docker/dashboard_app.js:/usr/share/kibana/src/legacy/core_plugins/kibana/public/dashboard/dashboard_app.js
logstash:
image: docker.elastic.co/logstash/logstash:6.8.0
container_name: logstash
restart: always
volumes:
- /home/docker/logstash.yml:/usr/share/logstash/config/logstash.yml
Finally, I tried to restart the service:
docker-compose -f docker-kibana.yml restart kibana
But the service keeps trying to reinstall the plugins, and if I run docker-compose ps, the command still shows "sh -c ./bin/kibana-plugin ...".
How can I restart the docker service with another command, or restart my service without reinstalling the plugins that already exist?
I recommend that you create a build for your plugins and not do everything at container start.
A simple Dockerfile to fix your issue would look somewhat like this:
FROM docker.elastic.co/kibana/kibana:6.8.0
COPY ob-kb-funnel-6.8.zip kbn_radar.zip /usr/share/kibana/config/
RUN ./bin/kibana-plugin install file:///usr/share/kibana/config/kbn_radar.zip && \
    ./bin/kibana-plugin install file:///usr/share/kibana/config/ob-kb-funnel-6.8.zip
ENTRYPOINT /usr/local/bin/kibana-docker
Next you need to use docker-compose to build your image. We can do that by updating your service definition:
kibana:
  build:
    context: ./kibana
  container_name: kibana
  restart: always
  hostname: kibana
  networks:
    - elasticsearch-net-624
  environment:
    - SERVER_NAME=kibana.localhost
    - ELASTICSEARCH_URL=http://elasticsearch:9200
    - ELASTICSEARCH_HOST=elasticsearch
    - ELASTICSEARCH_PORT=9200
    - XPACK_GRAPH_ENABLED=true
    - XPACK_WATCHER_ENABLED=true
    - XPACK_ML_ENABLED=true
    - XPACK_MONITORING_ENABLED=true
    - XPACK_MONITORING_UI_CONTAINER_ELASTICSEARCH_ENABLED
  ports:
    - "5601:5601"
  expose:
    - "5601"
  links:
    - elasticsearch-products-624-service
  depends_on:
    - elasticsearch-products-624-service
  volumes:
    - /home/docker/kibana.yml:/usr/share/kibana/config/kibana.yml
    - /home/morpheus/docker/dashboard_app.js:/usr/share/kibana/src/legacy/core_plugins/kibana/public/dashboard/dashboard_app.js
As you can see, in the service definition we replaced image with build. We assume that your Dockerfile for kibana resides in a folder called kibana, which also contains your plugin zip files.
Next you can run docker-compose build and it will build the required images for your compose stack.
The problem is that when you run docker-compose or a docker stack, a context is created with all the initial data. If you later change this data (the command, in your case), the change will not take effect unless you recreate the whole context, that is, unless you bring the compose stack down and up again.
However, you might try your luck with the following:
Edit the compose with the command you want to run now.
Remove the kibana container entirely. That is, don't try to restart kibana with docker-compose, but remove the container: docker rm -f dir_kibana
Run docker-compose up again. It should detect that kibana is missing and run it again.
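As a side note, once the compose file has been edited, recent docker-compose releases can also recreate just the one service in place, which avoids removing the container by hand:

docker-compose up -d --force-recreate --no-deps kibana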
I want to build a DOMjudge server with mariadb, phpmyadmin, and judgehost in Docker, based on Debian 9.
I've installed docker and docker-compose.
When I run docker-compose up -d, some WARNING and ERROR messages pop up.
Here is the entire docker-compose.yml file:
http://codepad.org/souBFdFz
WARNING and ERROR messages:
WARNING: some networks were defined but are not used by any service: phpmyadmin, dj-judgedameons_1, dj-judgedameons_2
ERROR: dor domjudge_dj-judgedameons_2_1 Cannot start service dj-judgedameons_1 : OCI runtime create failed: container_linux.go:345: starting container process caused "process_linux.go:311:getting Starting domjudge_dj-judgedameons_1_1
...and a lot of error messages that I can't even read (binary code or addresses, I think).
Please help me fix it, or point me to an easier way to set up a DOMjudge server with mariadb, phpmyadmin, and judgehost.
Thanks!
Update
I've tried this file several times with different results, but it still can't connect to the server (domjudge & phpmyadmin).
Here is the message:
https://i.stack.imgur.com/qDcDd.jpg
Unfortunately, what you want to do is not really possible because of how the application is built: containers need to wait for each other, and some of them need manual actions.
However, this is a sequence of actions that works and will bring all containers up and running.
NOTE: I removed the networks declarations because they don't add any value.
version: '3'
services:
  dj-mariadb:
    image: mariadb
    environment:
      - MYSQL_ROOT_PASSWORD=rootpw
      - MYSQL_DATABASE=domjudge
      - MYSQL_USER=domjudge
      - MYSQL_PASSWORD=djpw
    command:
      --max-connections=1000
  dj-domserver:
    image: domjudge/domserver:latest
    volumes:
      - /sys/fs/cgroup:/sys/fs/cgroup:ro
    environment:
      - CONTAINER_TIMEZONE=Asia/Taipei
      - MYSQL_ROOT_PASSWORD=rootpw
      - MYSQL_DATABASE=domjudge
      - MYSQL_USER=domjudge
      - MYSQL_PASSWORD=djpw
    ports:
      - 9090:80
    links:
      - dj-mariadb:mariadb
  dj-judgehost:
    image: domjudge/judgehost:latest
    privileged: true
    hostname: judgedaemon-0
    volumes:
      - /sys/fs/cgroup:/sys/fs/cgroup:ro
    environment:
      - DAEMON_ID=0
      - JUDGEDAEMON_PASSWORD=domjudge
    links:
      - dj-domserver:domserver
  dj-judgehost_1:
    image: domjudge/judgehost:latest
    privileged: true
    hostname: judgedaemon-1
    volumes:
      - /sys/fs/cgroup:/sys/fs/cgroup:ro
    environment:
      - DAEMON_ID=1
      - JUDGEDAEMON_PASSWORD=domjudge
    links:
      - dj-domserver:domserver
  dj-judgehost_2:
    image: domjudge/judgehost:latest
    privileged: true
    hostname: judgedaemon-2
    volumes:
      - /sys/fs/cgroup:/sys/fs/cgroup:ro
    environment:
      - DAEMON_ID=2
      - JUDGEDAEMON_PASSWORD=domjudge
    links:
      - dj-domserver:domserver
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    container_name: myadmin
    ports:
      - 8888:80
    environment:
      - PMA_ARBITRARY=1
      - PMA_HOST=dj-mariadb
    links:
      - dj-mariadb:db
Start the database and wait for it to initialize (otherwise the server will exit because it cannot find the schema it needs):
docker-compose up -d dj-mariadb
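If you would rather not time this wait by hand, a healthcheck plus a depends_on condition can automate it (a sketch under my own assumptions: the mysqladmin ping test is a common pattern, and condition: service_healthy requires compose file format 2.1 or a recent Compose release implementing the Compose Specification, not the plain version: '3' used above):

services:
  dj-mariadb:
    image: mariadb
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "127.0.0.1", "-prootpw"]
      interval: 10s
      retries: 12
  dj-domserver:
    depends_on:
      dj-mariadb:
        condition: service_healthy   # start the domserver only once MariaDB answers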
Start the server:
docker-compose up -d dj-domserver
Get the admin password from the logs:
docker-compose logs dj-domserver
Look for the line saying: Initial admin password is .... and save the password.
Set the judgehost password in the web interface: open http://localhost:9090 and log in with user admin and the password you saved from the previous step. Go to Users and click on the judgehost user. In there, change the password to domjudge (matching what you set in the docker-compose.yml for JUDGEDAEMON_PASSWORD). Save the data.
Start the rest of the containers:
docker-compose up -d
Verify that all containers are up and running:
docker-compose ps
Output should look similar to this:
Name Command State Ports
---------------------------------------------------------------------------------------------------
domjudge_dj-domserver_1 /scripts/start.sh Up 0.0.0.0:9090->80/tcp
domjudge_dj-judgehost_1 /scripts/start.sh Up
domjudge_dj-judgehost_1_1 /scripts/start.sh Up
domjudge_dj-judgehost_2_1 /scripts/start.sh Up
domjudge_dj-mariadb_1 docker-entrypoint.sh --max ... Up 3306/tcp
myadmin /run.sh supervisord -n -j ... Up 0.0.0.0:8888->80/tcp, 9000/tcp
Goal
We would like to create a development environment where we can run the latest versions of our registry, uaa and gateway on a server. We would then like to develop and run (in or outside docker) a microservice locally. This microservice should then be configured to connect and communicate to the other server.
Test setup
I have now generated a docker-compose setup via the JHipster sub-generator for our gateway, uaa, and registry. I then tried to start the microservice I'm currently working on via gradlew, built it via gradlew dockerBuild, and started app.yml. I also tried changing the hostname in app.yml to localhost, 127.0.0.1, and the IP of the registry's docker container.
My results
If the hostname is jhipster-registry: UnknownHostException. Most likely because the applications are started from different docker-compose files.
If the hostname is localhost or 127.0.0.1: http://127.0.0.1:8761/config/application/prod/master connection refused. Perhaps some more configuration is required?
If the hostname is the IP of the registry's docker container: after the JHipster logo in the terminal, no other output is given, but the application never stops due to an exception.
Files
docker-compose.yml (registry, uaa & gateway)
version: '2'
services:
  mygateway-app:
    image: mygateway
    environment:
      - SPRING_PROFILES_ACTIVE=prod,swagger
      - EUREKA_CLIENT_SERVICE_URL_DEFAULTZONE=http://admin:$${jhipster.registry.password}#jhipster-registry:8761/eureka
      - SPRING_CLOUD_CONFIG_URI=http://admin:$${jhipster.registry.password}#jhipster-registry:8761/config
      - SPRING_DATASOURCE_URL=jdbc:mysql://mygateway-mysql:3306/mygateway?useUnicode=true&characterEncoding=utf8&useSSL=false
      - JHIPSTER_SLEEP=30
      - JHIPSTER_REGISTRY_PASSWORD=admin
    ports:
      - 8080:8080
    depends_on:
      - "mygateway-mysql"
      - "myuaa-app"
  mygateway-mysql:
    image: mysql:5.7.20
    environment:
      - MYSQL_USER=root
      - MYSQL_ALLOW_EMPTY_PASSWORD=yes
      - MYSQL_DATABASE=mygateway
    command: mysqld --lower_case_table_names=1 --skip-ssl
      --character_set_server=utf8mb4 --explicit_defaults_for_timestamp
  myuaa-app:
    image: myuaa
    environment:
      - SPRING_PROFILES_ACTIVE=prod,swagger
      - EUREKA_CLIENT_SERVICE_URL_DEFAULTZONE=http://admin:$${jhipster.registry.password}#jhipster-registry:8761/eureka
      - SPRING_CLOUD_CONFIG_URI=http://admin:$${jhipster.registry.password}#jhipster-registry:8761/config
      - SPRING_DATASOURCE_URL=jdbc:mysql://myuaa-mysql:3306/myuaa?useUnicode=true&characterEncoding=utf8&useSSL=false
      - JHIPSTER_SLEEP=30
      - JHIPSTER_REGISTRY_PASSWORD=admin
    depends_on:
      - "myuaa-mysql"
      - "jhipster-registry"
  myuaa-mysql:
    image: mysql:5.7.20
    environment:
      - MYSQL_USER=root
      - MYSQL_ALLOW_EMPTY_PASSWORD=yes
      - MYSQL_DATABASE=myuaa
    command: mysqld --lower_case_table_names=1 --skip-ssl
      --character_set_server=utf8mb4 --explicit_defaults_for_timestamp
  jhipster-registry:
    extends:
      file: jhipster-registry.yml
      service: jhipster-registry
app.yml (microservice)
version: '2'
services:
  myservice-app:
    image: myservice
    environment:
      # - _JAVA_OPTIONS=-Xmx512m -Xms256m
      - SPRING_PROFILES_ACTIVE=prod,swagger
      - EUREKA_CLIENT_SERVICE_URL_DEFAULTZONE=http://admin:$${jhipster.registry.password}#localhost:8761/eureka
      - SPRING_CLOUD_CONFIG_URI=http://admin:$${jhipster.registry.password}#localhost:8761/config
      - SPRING_DATASOURCE_URL=jdbc:mysql://myservice-mysql:3306/myservice?useUnicode=true&characterEncoding=utf8&useSSL=false
      - JHIPSTER_SLEEP=10 # gives time for the JHipster Registry to boot before the application
      - JHIPSTER_REGISTRY_PASSWORD=admin
  myservice-mysql:
    extends:
      file: mysql.yml
      service: myservice-mysql
  # jhipster-registry:
  #   extends:
  #     file: jhipster-registry.yml
  #     service: jhipster-registry
  #   environment:
  #     - SPRING_CLOUD_CONFIG_SERVER_COMPOSITE_0_TYPE=native
  #     - SPRING_CLOUD_CONFIG_SERVER_COMPOSITE_0_SEARCH_LOCATIONS=file:./central-config/docker-config/
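For what it's worth, the piece that has to change for a microservice running outside that stack is the registry address: it must point at the machine hosting the registry, and the registry's port must be published there. A sketch under my own assumptions (registry.example.com is a placeholder for the remote server, not part of the original setup):

services:
  myservice-app:
    image: myservice
    environment:
      # reach the registry through its published port on the remote host
      - EUREKA_CLIENT_SERVICE_URL_DEFAULTZONE=http://admin:$${jhipster.registry.password}@registry.example.com:8761/eureka
      - SPRING_CLOUD_CONFIG_URI=http://admin:$${jhipster.registry.password}@registry.example.com:8761/config

On the server side, the jhipster-registry service would need a ports entry such as 8761:8761 so the port is reachable from outside its compose network.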
I'm trying to set up a CI/CD build environment with docker-compose.
I have a jenkins container, a sonar container, and an archiva container. The problem is, my jenkins cannot connect to sonar and archiva.
I tried linking the containers together and joining them in the same network, but still no success.
In jenkins, I get the following error:
Caused by: org.apache.http.conn.HttpHostConnectException: Connect to localhost:8081 [localhost/127.0.0.1] failed: Connection refused (Connection refused)
This is my docker-compose file.
version: '2'
volumes:
  data-jenkins:
    driver: 'local'
  data-postgres:
    driver: 'local'
  data-sonarqube-conf:
    driver: 'local'
  data-sonarqube-data:
    driver: 'local'
  data-archiva:
    driver: 'local'
services:
  jenkins:
    image: 'jenkins'
    ports:
      - '8080:8080'
    restart: 'always'
    volumes:
      - 'data-jenkins:/var/jenkins_home'
    links:
      - 'sonarqube:sonarqube'
  postgres:
    image: 'postgres:9.6.1'
    environment:
      - 'POSTGRES_USER=postgres'
      - 'POSTGRES_PASSWORD=postgres'
    ports:
      - '5432:5432'
    restart: 'always'
    volumes:
      - 'data-postgres:/var/lib/postgresql/data'
  sonarqube:
    image: 'sonarqube'
    depends_on:
      - 'postgres'
    ports:
      - '9000:9000'
    links:
      - 'postgres:postgres'
    environment:
      - 'SONARQUBE_JDBC_URL=jdbc:postgresql://postgres:5432/'
      - 'SONARQUBE_JDBC_USERNAME=postgres'
      - 'SONARQUBE_JDBC_PASSWORD=postgres'
    volumes:
      - 'data-sonarqube-data:/var/lib/sonarqube/data'
      - 'data-sonarqube-conf:/var/lib/sonarqube/conf'
  archiva:
    image: 'xetusoss/archiva'
    ports:
      - '8081:8080'
    volumes:
      - 'data-archiva:/var/archiva'
    environment:
      - 'SSL_ENABLED=false'
It seems the Jenkins container is living in a separate environment. Does anyone know how I can join all the environments together? I've been struggling with this problem for almost a week now.
To reference your sonarqube container from Jenkins, use sonarqube:9000; Docker will translate the service name sonarqube to the IP of that container.
I would also recommend using networks rather than links to connect your containers.
This works because the request then goes straight to sonarqube; localhost inside the Jenkins container refers to the Jenkins container itself.
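A minimal sketch of the network-based variant (my own assumptions: the network name ci-net is arbitrary, and Jenkins then addresses the other tools by service name and container port):

version: '2'
networks:
  ci-net:
services:
  jenkins:
    image: 'jenkins'
    networks:
      - ci-net
  sonarqube:
    image: 'sonarqube'
    networks:
      - ci-net
  archiva:
    image: 'xetusoss/archiva'
    networks:
      - ci-net

# From Jenkins: http://sonarqube:9000 for SonarQube and
# http://archiva:8080 for Archiva (the container port, not the published 8081)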