Setting up a docker / fig Mesos environment

I'm trying to set up a docker / fig Mesos cluster.
I'm new to fig and Docker. Docker has plenty of documentation, but I find myself struggling to understand how to work with fig.
Here's my fig.yaml at the moment:
zookeeper:
  image: jplock/zookeeper
  ports:
    - "49181:2181"
mesosMaster:
  image: mesosphere/mesos:0.19.1
  ports:
    - "15050:5050"
  links:
    - zookeeper:zk
  command: mesos-master --zk=zk --work_dir=/var/log --quorum=1
mesosSlave:
  image: mesosphere/mesos:0.19.1
  links:
    - zookeeper:zk
  command: mesos-slave --master=zk
Thanks!
Edit:
Thanks to Mark O'Connor's help, I've created a working docker-based mesos setup (+ storm, chronos, and more to come).
Enjoy, and if you find this useful - please contribute:
https://github.com/yaronr/docker-mesos
PS. Please +1 Mark's answer :)

You have not indicated the errors you were experiencing.
This is the documentation for the image you're using:
https://registry.hub.docker.com/u/mesosphere/mesos/
Mesos base Docker using the Mesosphere packages from
https://mesosphere.io/downloads/. Doesn't start Mesos, please use the
mesos-master and mesos-slave Dockers.
What really worried me about those images is that they were untrusted and no source was immediately available.
So I re-created your example using the mesosphere github as inspiration:
https://github.com/mesosphere/docker-containers
Updated Example
Example updated to include the chronos framework
├── build.sh
├── fig.yml
├── mesos
│   └── Dockerfile
├── mesos-chronos
│   └── Dockerfile
├── mesos-master
│   └── Dockerfile
└── mesos-slave
    └── Dockerfile
Build the base image (only has to be done once):
./build.sh
Run fig to start an instance of each service:
$ fig up -d
Creating mesos_zk_1...
Creating mesos_master_1...
Creating mesos_slave_1...
Creating mesos_chronos_1...
One useful thing about fig is that you can scale up the slaves:
$ fig scale slave=5
Starting mesos_slave_2...
Starting mesos_slave_3...
Starting mesos_slave_4...
Starting mesos_slave_5...
The mesos master console should show 5 slaves running
http://localhost:15050/#/slaves
And the chronos framework should be running and ready to launch tasks
http://localhost:14400
fig.yml
zk:
  image: mesos
  command: /usr/share/zookeeper/bin/zkServer.sh start-foreground
master:
  build: mesos-master
  ports:
    - "15050:5050"
  links:
    - "zk:zookeeper"
slave:
  build: mesos-slave
  links:
    - "zk:zookeeper"
chronos:
  build: mesos-chronos
  ports:
    - "14400:4400"
  links:
    - "zk:zookeeper"
Notes:
Only a single instance of zookeeper is needed for this example
build.sh
docker build --rm=true --tag=mesos mesos
mesos/Dockerfile
FROM ubuntu:14.04
MAINTAINER Mark O'Connor <mark@myspotontheweb.com>
RUN echo "deb http://repos.mesosphere.io/ubuntu/ trusty main" > /etc/apt/sources.list.d/mesosphere.list
RUN apt-key adv --keyserver keyserver.ubuntu.com --recv E56151BF
RUN apt-get -y update
RUN apt-get -y install mesos marathon chronos
mesos-master/Dockerfile
FROM mesos
MAINTAINER Mark O'Connor <mark@myspotontheweb.com>
EXPOSE 5050
CMD ["--zk=zk://zookeeper:2181/mesos", "--work_dir=/var/lib/mesos", "--quorum=1"]
ENTRYPOINT ["mesos-master"]
mesos-slave/Dockerfile
FROM mesos
MAINTAINER Mark O'Connor <mark@myspotontheweb.com>
CMD ["--master=zk://zookeeper:2181/mesos"]
ENTRYPOINT ["mesos-slave"]
mesos-chronos/Dockerfile
FROM mesos
MAINTAINER Mark O'Connor <mark@myspotontheweb.com>
RUN echo "zk://zookeeper:2181/mesos" > /etc/mesos/zk
EXPOSE 4400
CMD ["chronos"]
Notes:
The "chronos" command line is configured using files.

Related

Why are modules not installed by only binding a volume in docker-compose?

When I ran docker-compose build and docker-compose up -d, the api-server container didn't start.
I checked the logs:
docker logs api-server
yarn run v1.22.5
$ nest start --watch
/bin/sh: nest: not found
error Command failed with exit code 127.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
It seems the nest packages weren't installed, because package.json was not copied from the host into the container.
But in my opinion, since the volume is bound in docker-compose.yml, the yarn install command should see the files from - ./api:/src.
Why do we need to COPY files into the container?
Why doesn't the volume binding alone work?
If someone has an opinion, please let me know.
Thanks
The following is the Dockerfile:
FROM node:alpine
WORKDIR /src
RUN rm -rf /src/node_modules
RUN rm -rf /src/package-lock.json
RUN apk --no-cache add curl
RUN yarn install
CMD yarn start:dev
The following is the docker-compose.yml:
version: '3'
services:
  api-server:
    build: ./api
    links:
      - 'db'
    ports:
      - '3000:3000'
    volumes:
      - ./api:/src
      - ./src/node_modules
    tty: true
    container_name: api-server
Volumes are mounted at runtime, not at build time. Therefore, in your case, you should copy package.json into the image before installing dependencies and before running any command that needs those dependencies.
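A minimal sketch of that ordering (assuming yarn and that a yarn.lock sits next to package.json; drop it from the COPY if you don't have one):
FROM node:alpine
WORKDIR /src
# copy only the manifest first, so dependencies are installed into the image
COPY package.json yarn.lock ./
RUN yarn install
# then copy the rest of the source
COPY . .
CMD ["yarn", "start:dev"]
Note that the ./api:/src bind mount will still hide the image's /src at runtime; that is why this pattern is usually paired with an anonymous volume over /src/node_modules (presumably what the - ./src/node_modules line was meant to be).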
Some references:
Docker build using volumes at build time
Can You Mount a Volume While Building Your Docker Image to Cache Dependencies?

docker build IMAGE results in error but docker-compose up -d works fine

I am new to Docker. I am trying to create a docker image for a NodeJS project which I will upload/host on a Docker repository. When I execute docker-compose up -d everything works fine and I can access the NodeJS server hosted inside the docker containers. After that, I stopped all containers and tried to create a docker image from the Dockerfile using the following commands:
docker build -t adonis-app .
docker run adonis-app
The first command executes without any error but the second command throws this error:
> adonis-fullstack-app@4.1.0 start /app
> node server.js
internal/modules/cjs/loader.js:983
throw err;
^
Error: Cannot find module '/app/server.js'
at Function.Module._resolveFilename (internal/modules/cjs/loader.js:980:15)
at Function.Module._load (internal/modules/cjs/loader.js:862:27)
at Function.executeUserEntryPoint [as runMain] (internal/modules/run_main.js:71:12)
at internal/main/run_main_module.js:17:47 {
code: 'MODULE_NOT_FOUND',
requireStack: []
}
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! adonis-fullstack-app@4.1.0 start: `node server.js`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the adonis-fullstack-app@4.1.0 start script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm ERR! A complete log of this run can be found in:
npm ERR! /app/.npm/_logs/2020-02-09T17_33_22_489Z-debug.log
Can someone help me with this error and tell me what is wrong with it?
Dockerfile I am using:
FROM node
ENV HOME=/app
RUN mkdir /app
ADD package.json $HOME
WORKDIR $HOME
RUN npm i -g @adonisjs/cli && npm install
CMD ["npm", "start"]
docker-compose.yaml
version: '3.3'
services:
  adonis-mysql:
    image: mysql:5.7
    ports:
      - '3306:3306'
    volumes:
      - $PWD/data:/var/lib/mysql
    environment:
      MYSQL_USER: ${DB_USER}
      MYSQL_DATABASE: ${DB_DATABASE}
      MYSQL_PASSWORD: ${DB_PASSWORD}
      MYSQL_RANDOM_ROOT_PASSWORD: 1
    networks:
      - api-network
  adonis-api:
    container_name: "${APP_NAME}-api"
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - .:/app
      - /app/node_modules
    ports:
      - "3333:3333"
    depends_on:
      - adonis-mysql
    networks:
      - api-network
networks:
  api-network:
Your Dockerfile is missing a COPY step to actually copy your application code into the image. When you docker run the image, there's no actual source code to run, and you get the error you're seeing.
Your Dockerfile should look more like:
FROM node
# WORKDIR creates the directory; no need to set $HOME
WORKDIR /app
COPY package.json package-lock.json ./
# all of your dependencies are in package.json
RUN npm install
# now actually copy the application in
COPY . .
CMD ["npm", "start"]
Now that your Docker image is self-contained, you don't need the volumes: that try to inject host content into it. You can also safely rely on several of Docker Compose's defaults (the default network and the generated container_name: are both fine to use). A simpler docker-compose.yml looks like
version: '3.3'
services:
  adonis-mysql:
    image: mysql:5.7
    # As you have it, except delete the networks: block
  adonis-api:
    build: .  # this directory is the context:; use the default dockerfile:
    ports:
      - "3333:3333"
    depends_on:
      - adonis-mysql
There are several key problems in the set of artifacts and commands you show:
docker run and docker-compose ... are separate commands. The docker run command you show runs the image as-is, with its default command, no volumes mounted, and no ports published. docker run doesn't know about the docker-compose.yml file, so whatever options you have specified there won't have an effect. You might mean docker-compose up, which will also start the database. (In your application, remember to retry the database connection several times; the database can often take 30-60 seconds to come up.)
If you're planning to push the image, you need to include the source code. You're essentially creating two separate artifacts in this setup: a Docker image with Node and some libraries, and also a Javascript application on your host. If you docker push the image, it won't include the application (because you're not COPYing it in), so you'll also have to separately distribute the source code. At that point there's not much benefit to using Docker; an end user may as well install Node, clone your repository, and run npm install themselves.
You're preventing Docker from seeing library updates. Putting node_modules in an anonymous volume seems to be a popular setup, and Docker will copy content from the image into that directory the first time you run the application. The second time you run the application, Docker sees the directory already exists, assumes it to contain valuable user data, and refuses to update it. This leads to SO questions along the lines of "I updated my package.json file but my container isn't updating".
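If you do keep the anonymous node_modules volume and hit that stale-library problem, a hedged workaround (the --renew-anon-volumes flag needs docker-compose 1.24 or later):
# discard the containers together with their anonymous volumes, then rebuild
docker-compose down -v
docker-compose up -d --build
# or recreate the anonymous volumes in place on newer Compose versions
docker-compose up -d --renew-anon-volumes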
Your docker-compose.yaml file has two services:
adonis-mysql
adonis-api
Only the second service uses the current Dockerfile, as can be seen from the following section:
build:
  context: .
  dockerfile: Dockerfile
The command docker build . only builds the image from the current Dockerfile, i.e. adonis-api, and that is the image that then runs.
So most probably it is the missing mysql service that is giving you the error. You can verify by running
docker ps -a
to check whether the sql container is also running. Hope it helps.
Conclusion: Use docker-compose.

Docker-compose build misses some package content in container

I'm working on building containers running a monitoring application (Centreon).
When I build my container manually (with docker run) and when I build from my Dockerfile, I get different results: some scripts used by the application are missing.
Here's my Dockerfile:
FROM centos:centos7
LABEL Author = "AurelienH."
LABEL Description = "DOCKERFILE : Creates a Docker Container for a Centreon poller"
#Update and install requirements
RUN yum update -y
RUN yum install -y wget nano httpd git
#Install Centreon repo
RUN yum install -y --nogpgcheck http://yum.centreon.com/standard/3.4/el7/stable/noarch/RPMS/centreon-release-3.4-4.el7.centos.noarch.rpm
#Install Centreon
RUN yum install -y centreon-base-config-centreon-engine centreon centreon-pp-manager centreon-clapi
RUN yum install -y centreon-widget*
RUN yum clean all
#PHP Time Zone
RUN echo -n "date.timezone = Europe/Paris" > /etc/php.d/php-timezone.ini
#Supervisor
RUN yum install -y python-setuptools
RUN easy_install supervisor
COPY /cfg/supervisord.conf /etc/
RUN yum clean all
EXPOSE 22 80 5667 5669
CMD ["/usr/bin/supervisord", "--configuration=/etc/supervisord.conf"]
The difference I see is in the /usr/lib/nagios/plugins folder: some scripts are missing there. Yet when I execute the exact same commands in a container I'm running, I can find my files.
Maybe it has something to do with write permissions for the user that executes the commands with docker-compose?
EDIT:
docker-compose file:
version: "3"
services:
centreon:
build: ./centreon
depends_on:
- mariadb
container_name: sinelis-central
volumes:
- ./central-broker-config:/etc/centreon-broker
- ./central-centreon-plugins:/usr/lib/centreon/plugins
- ./central-engine-config:/etc/centreon-engine
- ./central-logs-broker:/var/log/centreon-broker
- ./central-logs-engine:/var/log/centreon-engine
- ./central-nagios-plugins:/usr/lib/nagios/plugins
- ./central-ssh-key:/home/centreon/.ssh/
ports:
- "80:80"
- "5667:5667"
- "5669:5669"
- "22:22"
deploy:
restart_policy:
window: 300s
links:
- mariadb
mariadb:
image: mariadb
container_name: sinelis-mariadb
environment:
MYSQL_ROOT_PASSWORD: passwd2017
deploy:
restart_policy:
window: 300s
To run the container I use the docker run -it centos:centos7 command
It doesn't matter what you put in your image at that location, you will always see the contents of your volume mount:
- ./central-nagios-plugins:/usr/lib/nagios/plugins
Docker does not initialize host volumes to the contents of the image, and once you have data in the volume, docker performs no initialization for any volume type.
Keep in mind the build happens on an image without any of the other configuration in the compose file applied; no volumes are mounted during the build. Then, when you run your container, the directories of the image are overlaid with the volumes you select. Build time and run time are two separate phases.
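You can see the two phases for yourself (a sketch; the centreon-test tag is arbitrary, and the paths match the compose file above):
# what the image actually contains at that path
docker build -t centreon-test ./centreon
docker run --rm centreon-test ls /usr/lib/nagios/plugins
# the same path with the (initially empty) host directory mounted over it
docker run --rm -v "$PWD/central-nagios-plugins:/usr/lib/nagios/plugins" centreon-test ls /usr/lib/nagios/plugins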
Edit: to have a named volume point to a host directory, you can define a bind-mount volume. This will not create the directory if it does not already exist (the attempt to mount the volume will fail and the container will not start). But if the directory is empty, it will be initialized to the contents of your image:
version: "3"
volumes:
central-nagios-plugins:
driver: local
driver_opts:
type: none
o: bind
device: /usr/lib/nagios/plugins
services:
centreon:
....
volumes:
...
- central-nagios-plugins:/usr/lib/nagios/plugins
...
It will be up to you to empty the contents of this volume when you want it to be reinitialized with the contents of your image, and merging multiple versions of this directory would also be a process you'd need to create yourself.
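A hedged way to seed or refresh that host directory from the image contents (reusing the arbitrary centreon-test tag from the sketch above):
# copy the plugins out of a stopped container into the host directory
id=$(docker create centreon-test)
docker cp "$id:/usr/lib/nagios/plugins/." ./central-nagios-plugins/
docker rm "$id"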

Access volume in docker build

I am using docker-compose and I have created a volume. I have multiple containers, and I am facing an issue running commands in the docker container.
I have a node.js container which has separate frontend and backend folders; I need to run npm install in both folders.
version: '2'
services:
  ### Applications Code Container #############################
  applications:
    image: tianon/true
    volumes:
      - ${APPLICATION}:/var/www/html
  node:
    build:
      context: ./node
    volumes_from:
      - applications
    ports:
      - "4000:30001"
    networks:
      - frontend
      - backend
This is my Dockerfile for node:
FROM node:6.10
MAINTAINER JC Gil <sensukho@gmail.com>
ENV TERM=xterm
ADD script.sh /tmp/
RUN chmod 777 /tmp/script.sh
RUN apt-get update && apt-get install -y netcat-openbsd
WORKDIR /var/www/html/Backend
RUN npm install
EXPOSE 4000
CMD ["/bin/bash", "/tmp/script.sh"]
My workdir is empty, as the location /var/www/html/Backend is not available while building; it is only available once the container is up. So my npm install command does not work.
What you probably want to do is ADD or COPY the package.json file to the correct location, RUN npm install, then ADD or COPY the rest of the source into the image. That way, docker build will re-run npm install only when needed.
It would probably be better to run the frontend and backend in separate containers, but if that's not an option, it's completely feasible to run the ADD package.json, RUN npm install, ADD . sequence once for each application, as sketched below.
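A sketch of that double sequence for this image (assuming the Backend and Frontend sources are inside the node build context; with your current volumes_from setup they live on the host, so the paths here are illustrative):
FROM node:6.10
# backend: manifest first, so the install layer is cached until package.json changes
WORKDIR /var/www/html/Backend
COPY Backend/package.json ./
RUN npm install
# frontend: same pattern
WORKDIR /var/www/html/Frontend
COPY Frontend/package.json ./
RUN npm install
# now copy the rest of the sources
COPY Backend/ /var/www/html/Backend/
COPY Frontend/ /var/www/html/Frontend/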
RUN is an image build step; at build time the volume isn't attached yet. I think you have to execute npm install inside CMD.
You can try adding npm install inside /tmp/script.sh.
Let me know
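That would look something like this in /tmp/script.sh (a sketch, since the original script isn't shown; it defers the install to every container start, which slows startup, and assumes a start script in package.json):
#!/bin/bash
# runs after volumes are mounted, so /var/www/html/Backend exists by now
cd /var/www/html/Backend
npm install
npm start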
As Tomas Lycken mentioned, copy the files and then run npm install. I have separate containers for the frontend and backend. Most important are the node modules for the frontend and backend: create them as volumes in the services so that they are available when the container comes up.
version: '2'
services:
  ### Applications Code Container #############################
  applications:
    image: tianon/true
    volumes:
      - ${APPLICATION}:/var/www/html
      - ${BACKEND}:/var/www/html/Backend
      - ${FRONTEND}:/var/www/html/Frontend
  apache:
    build:
      context: ./apache2
    volumes_from:
      - applications
    volumes:
      - ${APACHE_HOST_LOG_PATH}:/var/log/apache2
      - ./apache2/sites:/etc/apache2/sites-available
      - /var/www/html/Frontend/node_modules
      - /var/www/html/Frontend/bower_components
      - /var/www/html/Frontend/dist
    ports:
      - "${APACHE_HOST_HTTP_PORT}:80"
      - "${APACHE_HOST_HTTPS_PORT}:443"
    networks:
      - frontend
      - backend
  node:
    build:
      context: ./node
    ports:
      - "4000:4000"
    volumes_from:
      - applications
    volumes:
      - /var/www/html/Backend/node_modules
    networks:
      - frontend
      - backend

docker copy file from one container to another?

Here is what I want to do:
docker-compose build
docker-compose $COMPOSE_ARGS run --rm task1
docker-compose $COMPOSE_ARGS run --rm task2
docker-compose $COMPOSE_ARGS run --rm combine-both-task2
docker-compose $COMPOSE_ARGS run --rm selenium-test
And a docker-compose.yml that looks like this:
task1:
  build: ./task1
  volumes_from:
    - task1_output
  command: ./task1.sh
task1_output:
  image: alpine:3.3
  volumes:
    - /root/app/dist
  command: /bin/sh
# essentially I want to copy task1 output into task2 because they each use different images and use different tech stacks...
task2:
  build: ../task2
  volumes_from:
    - task2_output
    - task1_output:ro
  command: /bin/bash -cx "mkdir -p task1 && cp -R /root/app/dist/* ."
So now all the required files are in the task2 container... how would I start up a web server and expose a port with the content in task2?
I am stuck here... how do I access the stuff from task2_output in my combine-tasks/Dockerfile:
combine-both-task2:
  build: ../combine-tasks
  volumes_from:
    - task2_output
In recent versions of docker, named volumes replace data containers as the easy way to share data between containers.
docker volume create --name myshare
docker run -v myshare:/shared task1
docker run -v myshare:/shared -p 8080:8080 task2
...
Those commands will set up one local volume, and the -v myshare:/shared argument will make that share available as the folder /shared inside each container.
To express that in a compose file:
version: '2'
services:
  task1:
    build: ./task1
    volumes:
      - 'myshare:/shared'
  task2:
    build: ./task2
    ports:
      - '8080:8080'
    volumes:
      - 'myshare:/shared'
volumes:
  myshare:
    driver: local
To test this out, I made a small project:
- docker-compose.yml (above)
- task1/Dockerfile
- task1/app.py
- task2/Dockerfile
I used node's http-server as task2/Dockerfile:
FROM node
RUN npm install -g http-server
WORKDIR /shared
CMD http-server
and task1/Dockerfile used python:alpine, to show two different stacks writing and reading.
FROM python:alpine
WORKDIR /app
COPY . .
CMD python app.py
here's task1/app.py
import time

count = 0
while True:
    fname = '/shared/{}.txt'.format(count)
    with open(fname, 'w') as f:
        f.write('content {}'.format(count))
    count = count + 1
    time.sleep(10)
Take those four files and run them via docker-compose up in the directory with docker-compose.yml, then visit $DOCKER_HOST:8080 to see a steadily updated list of files.
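For a quick check without a browser (assuming the Docker host is local, so localhost stands in for $DOCKER_HOST):
# http-server serves a directory listing; new .txt files should appear every ~10 seconds
curl -s http://localhost:8080/ | grep -o '[0-9][0-9]*\.txt' | sort -n | tail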
Also, I'm using docker version 1.12.0 and compose version 1.8.0 but this should work for a few versions back.
And be sure to check out the docker docs for details I've probably missed here:
https://docs.docker.com/engine/tutorials/dockervolumes/
For me the best way to copy a file from or to a container is using docker cp. For example, if you want to copy schema.xml from the apacheNutch container to the solr container, go through the host:
docker cp apacheNutch:/root/nutch/conf/schema.xml /tmp/schema.xml
docker cp /tmp/schema.xml solr:/opt/solr-8.1.1/server/solr/configsets/nutch/conf
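If you'd rather not touch the host filesystem, docker cp can also stream a tar archive through stdout/stdin (a sketch using the same container names; the - form needs a reasonably recent docker):
# copy container-to-container without an intermediate file
docker cp apacheNutch:/root/nutch/conf/schema.xml - | docker cp - solr:/opt/solr-8.1.1/server/solr/configsets/nutch/conf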
