Hi, I want to use the HAProxy exporter provided here https://github.com/prometheus/haproxy_exporter in a Docker container.
I am using docker-compose for managing containers and want to recreate this command:
$ docker run -p 9101:9101 prom/haproxy-exporter -haproxy.scrape-uri="http://user:pass@haproxy.example.com/haproxy?stats;csv"
in my docker-compose.yml.
I am not sure how to pass the argument; after viewing the docker-compose documentation, I tried it like this:
haproxy-exporter:
  image: prom/haproxy-exporter
  ports:
    - 9101:9101
  network_mode: "host"
  build:
    args:
      - haproxy.scrape-uri="http://user:pass@haproxy.example.com/haproxy?stats;csv"
But this gives me a "file is invalid" message, because a build requires a context.
Thanks for any help in advance
The image is already built and pulled from the hub (unless you have your own Dockerfile), so you don't need the build option. Instead, pass your arguments as the command, since the image appears to use an entrypoint (if its Dockerfile only had a CMD instruction, you'd also need to pass /bin/haproxy-exporter in your command):
haproxy-exporter:
  image: prom/haproxy-exporter
  ports:
    - 9101:9101
  network_mode: "host"
  command: -haproxy.scrape-uri="http://user:pass@haproxy.example.com/haproxy?stats;csv"
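If you want to double-check how an image is wired up before overriding anything, docker inspect can show its entrypoint and default command:
docker inspect --format '{{.Config.Entrypoint}} {{.Config.Cmd}}' prom/haproxy-exporter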
Related
I am trying to run an exporter using Docker Compose. I have created a docker-compose file like this:
version: '2'
services:
  prometheus-cloudwatch:
    image: cloudposse/prometheus-to-cloudwatch
    container_name: prometheus-cloudwatch
    command:
      - 'AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}'
      - 'AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}'
      - 'CLOUDWATCH_NAMESPACE=${CLOUDWATCH_NAMESPACE}'
      - 'CLOUDWATCH_REGION=${CLOUDWATCH_REGION}'
      - 'CLOUDWATCH_PUBLISH_TIMEOUT=15'
      - 'PROMETHEUS_SCRAPE_INTERVAL=30'
      - 'PROMETHEUS_SCRAPE_URL=http://IP:9399/metrics'
And I have created a .env file with the following content:
AWS_ACCESS_KEY_ID=<ID>
AWS_SECRET_ACCESS_KEY=<ID>
CLOUDWATCH_NAMESPACE=kube-state-metrics
CLOUDWATCH_REGION=us-east-1
When I run the command
docker-compose up -d
and check the running containers using docker ps -a, I get this:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
cf43bfedef54 cloudposse/prometheus-to-cloudwatch "prometheus-to-cloud…" 7 minutes ago Exited (1) 7 minutes ago prometheus-cloudwatch
And when I investigate further using docker logs prometheus-cloudwatch, I see this:
2021/07/28 03:15:44 prometheus-to-cloudwatch: Error: -cloudwatch_namespace or CLOUDWATCH_NAMESPACE required
This does not make sense to me, since I have declared the command in the docker-compose file. Any help will be appreciated. Thank you.
See its Dockerfile:
ENTRYPOINT ["prometheus-to-cloudwatch"]
In fact, the command in your docker-compose.yaml acts as the parameters of that entrypoint, so your KEY=VALUE pairs are passed as command-line arguments rather than set as environment variables.
To get the same effect as the -e CLOUDWATCH_NAMESPACE mentioned here, you could try the next snippet:
version: '2'
services:
  prometheus-cloudwatch:
    image: cloudposse/prometheus-to-cloudwatch
    container_name: prometheus-cloudwatch
    env_file: .env
You could refer to this for more help.
Another solution is to use environment as in the next snippet, but then you need to extract the variables yourself; see this for details:
environment:
  - CLOUDWATCH_NAMESPACE=${CLOUDWATCH_NAMESPACE}
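Whichever variant you pick, docker-compose config prints the fully resolved file (with the .env substitution applied), which is a quick way to verify what the container will actually receive:
docker-compose config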
Let's say I have a docker-compose file with two containers:
version: "3"
services:
app:
image: someimage:fpm-alpine
volumes:
- myvolume:/var/www/html
web:
image: nginx:alpine
volumes:
- myvolume:/var/www/html
volumes:
myvolume:
The app container contains the application code in the /var/www/html directory which gets updated with each version of the image, so I don't want this directory to be persistent.
Yet I need to share the data with the nginx container. If I use a volume or a host bind the data is persistent and doesn't get updated with a new version. Maybe there is a way to automatically delete a volume whenever I pull a new image? Or a way to share an anonymous volume?
I think it's better for you to use an anonymous volume:
volumes:
  - /var/www/html
You would have to be willing to drop back to docker-compose file version 2 and use data containers with the volumes_from directive, which is equivalent to --volumes-from on a docker run command, as sketched below.
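A minimal sketch of that, reusing the image names from the question: the app service declares an anonymous volume, and nginx mounts everything from it:
version: '2'
services:
  app:
    image: someimage:fpm-alpine
    volumes:
      - /var/www/html # anonymous volume, no fixed name to persist under
  web:
    image: nginx:alpine
    volumes_from:
      - app
Note that docker-compose reuses anonymous volumes when recreating a container; docker-compose up -V (--renew-anon-volumes) forces fresh ones after you pull a new image.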
This should work fine. The problem isn't with Docker. You can use volumes to communicate in this way. If you run docker-compose up in a directory with the following compose file:
version: "3"
services:
one:
image: ubuntu
command: sleep 100000
volumes:
- vol:/vol
two:
image: ubuntu
command: sleep 100000
volumes:
- vol:/vol
volumes:
vol:
Then, in a second terminal, run docker exec -it so_one_1 bash (you might have to do a docker ps to find the exact name of the container; it can change). You'll find yourself in a bash shell inside the container. Change to the /vol directory with cd /vol, then echo "wobble" > wibble.txt, then exit the shell (Ctrl-D).
In the same terminal you can then type docker exec -it so_two_1 bash (again, check the names). Just like last time, you can cd /vol and type ls -gAlFh; you'll see the wibble.txt file we created in the other container. You can even cat wibble.txt to see the contents. It'll be there.
So if the problem isn't Docker, what can it be? I think the problem is that nginx isn't seeing the changes on the filesystem. For that, I believe that setting expires -1; inside a location block in the config will disable caching completely and may solve the problem (dev only).
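A sketch of that nginx config, assuming a stock location block (a negative expires makes nginx send an Expires header in the past plus Cache-Control: no-cache):
location / {
    expires -1; # dev only: disables client-side caching
}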
So I need rolling updates with Docker on my single-node server. Until now, I was using docker-compose, but unfortunately I can't achieve what I need with it. Reading the web, docker swarm seems to be the way to go.
I have found how to run an app with multiple replicas on a single node using swarm:
docker service create --replicas 3 --name myapp-staging myapp_app:latest
myapp_app:latest being built from my docker-compose.yml:
version: "3.6"
services:
postgres:
env_file:
- ".env"
image: "postgres:11.0-alpine"
volumes:
- "/var/run/postgresql:/var/run/postgresql"
app:
build: "."
working_dir: /app
depends_on:
- "postgres"
env_file:
- ".env"
command: iex -S mix phx.server
volumes:
- ".:/app"
volumes:
postgres: {}
static:
driver_opts:
device: "tmpfs"
type: "tmpfs"
Unfortunately, this doesn't work, since it doesn't pick up the config from the docker-compose.yml file: the .env file, the command entry, etc.
Searching deeper, I find that using
docker stack deploy -c docker-compose.yml <name>
will create a service using my docker-compose.yml config.
But then I get the following error message:
failed to update service myapp-staging_postgres: Error response from daemon: rpc error: code = InvalidArgument desc = ContainerSpec: image reference must be provided
So it seems I have to use a registry and push my image there so that it works. I understand this need in the case of a multi-node architecture, but in my case I don't want to do that. (Images are heavy to carry around, I don't want my image to be public, and after all, the image is already here, so why should I move it across the internet?)
How can I set up my docker service using a local image and the config written in docker-compose.yml?
I could probably manage my way using docker service create options, but that wouldn't use my docker-compose.yml file, so it would be neither DRY nor maintainable, which is important to me.
docker-compose is a great tool for developers; it is sad that we have to dive into DevOps tools to achieve a feature as common as rolling updates. This whole swarm architecture seems too complicated for my needs at this stage.
You don't have to use registries in your single-node setup. You can build your "app" image on your node from a local Dockerfile using this command (cd to the directory of your Dockerfile first):
docker build . -t my-app:latest
This will create a local Docker image on your node. This image is only visible to your single node, which is beneficial in your use case, but I wouldn't recommend this in a production setup.
You can now edit the compose file to be:
version: "3.6"
services:
postgres:
env_file:
- ".env"
image: "postgres:11.0-alpine"
volumes:
- "/var/run/postgresql:/var/run/postgresql"
app:
image: "my-app:latest"
depends_on:
- "postgres"
env_file:
- ".env"
volumes:
- ".:/app"
volumes:
postgres: {}
static:
driver_opts:
device: "tmpfs"
type: "tmpfs"
And now you can run your stack from this node; it will use your local app image and benefit from image management features (updates, rollbacks, etc.).
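For instance, sticking with the stack name from the question:
docker stack deploy -c docker-compose.yml myapp-staging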
I do have a side note on your stack file, though. You are using the same env file for both services. Please mind that swarm will look for the ".env" file relative to (next to) the ".yml" file, so if this is not intentional, please revise the location of your env files.
Also, as a side note, this solution is only feasible on a single-node cluster. If you scale your cluster, you will have to use a registry, but registries don't have to be public: you can deploy a private registry on your cluster that only your nodes can access (or make it public). The accessibility of your registry is your choice.
Hope this will help with your issue.
Instead of a prebuilt image, you can use the Dockerfile directly there. Please check the example below.
version: "3.7"
services:
webapp:
build: ./dir
The error occurs because Compose is unable to find the image on the Docker public registry.
The above method should solve your issue.
Basically, you need to use Docker images in order to make rolling updates work in Docker Swarm. I would also like to clarify that you can host a private registry and use it instead of the public one.
Detailed Explanation:
When you try a rolling update, Docker Swarm checks whether there is a change in the image used by the service. If so, Swarm schedules a service update based on the update criteria you have set up and works through it.
Let's say there is no change to the image; then what happens? Docker simply will not apply the rolling update. Technically you can specify the --force flag to force-update the service, but it will just redeploy the service.
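For example, with the stack name used earlier in the question (the service name follows the stack_service pattern seen in the error message above):
docker service update --force myapp-staging_app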
Hence, create a local registry, store the images in it, and use those image names in the docker-compose file for the swarm. You can secure the registry using SSL, user credentials, or firewall restrictions; that is up to you. Refer to this for more details on deploying a Docker registry server.
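A minimal sketch of that setup, using the official registry image (the port and image names are illustrative):
# run a local registry on port 5000
docker run -d -p 5000:5000 --restart=always --name registry registry:2
# tag the locally built image for that registry and push it
docker tag my-app:latest localhost:5000/my-app:latest
docker push localhost:5000/my-app:latest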
Corrections in your compose file:
Since docker stack uses the image to create the service, you need to specify image: "<image name>" in the app service, as done in the postgres service. As you have a build instruction, an image name is mandatory anyway; otherwise docker-compose doesn't know what to name the image. Reference.
A registry server is needed if you are going to deploy the application on multiple servers. Since you have mentioned it's a single-node deployment, just having the image pulled/built on the server is enough, but the private-registry approach is the recommended one.
My recommendation is not to club all the services into a single docker-compose file. The reason is that when you deploy/destroy using that docker-compose file, all the services will be taken down, which is a kind of tight coupling. Of course, I understand that all the other services depend on the DB; in such cases, make sure the DB service is brought up before the other services.
Instead of specifying the env file in the compose file, make it part of the Dockerfile instructions: either copy the env file and source it in the entrypoint, or use ENV instructions to define the variables.
Also, just an update:
A stack is just a way to group services in swarm.
So your compose file should be:
version: "3.6"
services:
postgres:
env_file:
- ".env"
image: "postgres:11.0-alpine"
volumes:
- "/var/run/postgresql:/var/run/postgresql"
app:
build: "."
image: "image-name:tag" #the image built will be tagged as image-name:tag
working_dir: /app # note here I've removed .env file
depends_on:
- "postgres"
command: iex -S mix phx.server
volumes:
- ".:/app"
volumes:
postgres: {}
static:
driver_opts:
device: "tmpfs"
type: "tmpfs"
Dockerfile:
FROM baseimage:tag
COPY .env /somelocation
# your further instructions go here
RUN ... && \
    ... && \
    ... && chmod a+x /somelocation/.env
# use the POSIX "." here, since /bin/sh may not support "source"
ENTRYPOINT . /somelocation/.env && ./file-to-run
Alternative Dockerfile:
FROM baseimage:tag
# a, b and c come in at build time; a plain ENV cannot read your shell,
# so declare them as ARGs and pass them with --build-arg
ARG a
ARG b
ARG c
ENV a=$a
ENV b=$b
ENV c=$c
ENTRYPOINT ./file-to-run
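A hypothetical build invocation for that variant (image name and variables as above):
docker build --build-arg a="$a" --build-arg b="$b" --build-arg c="$c" -t image-name:tag .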
And you may need to run:
docker-compose build
docker-compose push # optional; needed to push the image into the registry in case a registry is used
docker stack deploy -c docker-compose.yml <stackname>
NOTE:
Even though you can create the services as mentioned by @M.Hassan above, I've explained the ideal recommended way.
I am new to Docker and have a docker-compose.yml containing many services, and I need to start one particular service. My docker-compose.yml file contains:
version: '2'
services:
  postgres:
    image: ${ARTIFACTORY_URL}/datahub/postgres:${BUILD_NUMBER}
    restart: "no"
    volumes:
      - /etc/passwd:/etc/passwd
    volumes_from:
      - libs
    depends_on:
      - libs
  setup:
    image: ${ARTIFACTORY_URL}/setup:${B_N}
    restart: "no"
    volumes:
      - ${HOME}:/usr/local/
I am able to bring up the whole docker-compose.yml file using the command:
docker-compose -f docker-compose.yml up -d --no-build
But I need to start only the "setup" service in the docker-compose file. How can I do this?
It's very easy:
docker compose up <service-name>
In your case:
docker compose -f docker-compose.yml up -d setup
To stop the services, you don't need to specify a service name:
docker compose down
will do.
Little side note: if you are in the directory where the docker-compose.yml file is located, then docker-compose will use it implicitly, there's no need to add it as a parameter.
You need to provide it in the following situations:
the file is not in your current directory
the file name is different from the default one, e.g. myconfig.yml (see the example below)
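In those cases, pass the file explicitly; reusing the example name above:
docker compose -f myconfig.yml up -d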
As far as I understand your question, you have multiple services in docker-compose but want to deploy only one.
docker-compose should be used for multi-container Docker applications. From the official docs:
Compose is a tool for defining and running multi-container Docker applications.
IMHO, you should run your service image separately with the docker run command.
PS: If you are asking about recreating only the container whose image has changed among the multiple services in your docker-compose file, then docker-compose handles that for you.
I have a service that I am bringing up through Rancher via docker-compose. The issue I am running into is that I need to set a password after the container has been deployed.
The way Rancher secrets work is that I set my secret in Rancher, and it mounts a volume on my container with a file containing my secret. I was hoping to be able to execute a script to grab that secret and set it as a password in my config file.
I don't believe I have a way to get that secret in through the Dockerfile as I don't want the secret to be in git, so I'm left looking at doing it via docker-compose.
Does anyone know if this is possible?
This is the way I call a script after a container is started, without overriding the entrypoint.
In my example, I used it for initializing the replica set of my local MongoDB:
services:
  mongo:
    image: mongo:4.2.8
    hostname: mongo
    container_name: mongodb
    entrypoint: ["/usr/bin/mongod", "--bind_ip_all", "--replSet", "rs0"]
    ports:
      - 27017:27017
  mongosetup:
    image: mongo:4.2.8
    depends_on:
      - mongo
    restart: "no"
    entrypoint: ["bash", "-c", "sleep 10 && mongo --host mongo:27017 --eval 'rs.initiate()'"]
In the first part, I simply launch my service (mongo).
The second service uses a bash entrypoint AND restart: "no" (important!).
I also use depends_on between the service and the setup service to manage the launch order.
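To confirm the one-shot setup ran, bring the stack up and read the setup service's logs by service name:
docker-compose up -d
docker-compose logs mongosetup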
The trick is to overwrite the compose COMMAND to perform whatever init action you need before calling the original command.
Add a script to your image that performs the init work you want, like setting a password or changing internal config files; let's call it init.sh.
Dockerfile:
FROM sourceimage:tag
COPY init.sh /usr/local/bin/
ENTRYPOINT []
The above overrides whatever ENTRYPOINT is defined in the sourceimage. That's to make this example simpler. Make sure you understand what the ENTRYPOINT is doing in the Dockerfile from the sourceimage and call it in the command: of the docker-compose.yml file.
docker-compose.yml:
services:
  myservice:
    image: something:tag
    ...
    command: sh -c "/usr/local/bin/init.sh && exec myexecutable"
It's important to use exec before calling the main command. That will install the command as the first process (PID 1), which makes it receive signals like TERM (sent by docker stop), INT (Ctrl-C on the keyboard) or HUP.
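In the Rancher case, init.sh is where you would read the mounted secret; a hypothetical sketch (the secret name, placeholder and config path are made up):
#!/bin/sh
set -e
# Rancher mounts the secret as a file named after the secret
PASSWORD="$(cat /run/secrets/name-of-secret)"
# write it into the app config; path and placeholder are illustrative
sed -i "s|__PASSWORD__|${PASSWORD}|" /etc/myapp/config.ini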
You can also use volumes to do this:
services:
  example:
    image: <whatever>
    volumes:
      - ./init.sh:/init.sh
    entrypoint: sh -c "/init.sh"
Note that this mounts init.sh into the container rather than copying it (if that matters; usually it doesn't). Processes within the container can modify init.sh, and that modifies the file as it exists on your actual computer.
docker-compose specifies how to launch containers, not how to modify an existing running container.
The Rancher documentation mentions that, for default usage of secrets, you can reference the secret by name in the secrets array in the docker-compose.yml.
The target filename will be the same as the name of the secret.
By default, the target filename will be created as User ID and Group ID 0, and File Mode of 0444.
Setting external to true in the secrets part makes sure Rancher knows the secret has already been created.
Example of a basic docker-compose.yml:
version: '2'
services:
  web:
    image: sdelements/lets-chat
    stdin_open: true
    secrets:
      - name-of-secret
    labels:
      io.rancher.container.pull_image: always
secrets:
  name-of-secret:
    external: true
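Inside the container, the secret then appears as a plain file your script can read (per the defaults quoted above: owned by UID/GID 0, mode 0444; the path assumes the usual /run/secrets mount point):
cat /run/secrets/name-of-secret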
As illustrated in "How to Update a Single Running docker-compose Container", updating a container would involve a "build, kill, and up" sequence.
docker-compose up -d --no-deps --build <service_name>