Datadog integration inside docker containers

I'm new to Datadog, so I'm just really confused. The first configuration was fast and simple. However, now that I want some app-specific charts, it doesn't seem as clear as before for my current scenario.
We have one host with several Docker containers, one for each service:
- nginx
- varnish
- apache
- database (mysql)
We've installed the Datadog agent on the host, along with the Docker integration, and everything works fine.
What I don't get is how to get metrics from Apache or Varnish, or whatever other service runs inside Docker.
Reading the docs, for Varnish for example you have to execute:
$ sudo usermod -G varnish -a dd-agent
However, where should I run the command? The dd-agent user exists only on the host, not in the Docker container, and for the varnish user it's just the other way round.
Do I need to install the agent in each container?
Would each one be considered another host for pricing?
In the MySQL case, I just have to configure the agent:
init_config:

instances:
  - server: localhost
    user: datadog
    pass: <UNIQUEPASSWORD>
    tags:
      - optional_tag1
      - optional_tag2
    options:
But since my host and the container are on separate networks, should I create a new Docker container with the agent so it can reach the DB container (changing the server field)?
Would that again be considered another host?

In most cases, the Datadog agent retrieves metrics for an integration by connecting to a URL endpoint. This is the case for services such as nginx, mysql, etc.
This means that you can run just one Datadog agent on the host and configure it to connect to the URL endpoints of the services exposed by each container.
For example, assuming a mysql docker container is run with the following command:
docker run -d \
--name mysql \
-p 3306:3306 \
-e MYSQL_ROOT_PASSWORD=secret \
-e MYSQL_DATABASE=mySchema \
mysql
You can instruct the agent running on the host to connect to the container IP in the mysql.yaml agent configuration:
init_config:

instances:
  - server: <container IP>
    user: datadog
    pass: secret
    tags:
      - optional_tag1
      - optional_tag2
    options:
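If you need to look up that container IP, one way (assuming the container is named mysql, as in the run command above) is docker inspect; and since the example also publishes 3306 on the host, pointing server: at localhost would work as well:
# Print the container's IP on the default bridge network
docker inspect --format '{{ .NetworkSettings.IPAddress }}' mysql
# e.g. 172.17.0.3 -- use this value as <container IP> in mysql.yaml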
Varnish is slightly different as the agent retrieves metrics using the varnishstat binary. According to the example template:
In order to support monitoring a Varnish instance which is running as a Docker container we need to wrap commands (varnishstat) with scripts which perform a docker exec on the running container.
To do this, on the host, create a wrapper script for the container:
printf '#!/bin/sh\n/usr/bin/docker exec varnish_container_name varnishstat "$@"\n' > /home/myuser/docker_varnish
chmod +x /home/myuser/docker_varnish
Then specify the script location in the varnish.yaml agent configuration:
init_config:

instances:
  - varnishstat: /home/myuser/docker_varnish
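As a quick sanity check, you can run the wrapper from the host and confirm it prints the Varnish counters once (the container name in the wrapper is whatever you called your Varnish container; for the agent itself to use the script, the dd-agent user also needs permission to run docker, e.g. membership in the docker group):
# Should dump the counters once if docker exec and permissions are set up correctly
sudo -u dd-agent /home/myuser/docker_varnish -1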

Related

Implement Docker isolation for multiple users

I've been asked to configure an Ubuntu 18.04 server with Docker for multiple users.
Purpose:
We have multiple testers who write test cases, but our laptops aren't fast enough to build the project and run the test cases in a Docker environment.
We already have a Jenkins server, but we need to build/test our code BEFORE pushing to Git.
I've been given a high-end Ubuntu 18.04 server.
I have to configure the server so that all our testers can run/debug our test cases in isolated environments.
When testers push their changes to the remote server, the project should build and run in an isolated environment. Multiple users can work on the same project, but one tester's builds must NOT affect another's.
I already installed Docker and tried just changing docker-compose.yml and adding different networks (using multiple accounts of course), but it was very painful.
I need multiple Selenoid servers (for different users) and separate Allure reports with Docker. I need the ability to build and run tests using our docker-compose files, and to run the actual project on different ports so we can go through the system while writing test cases.
Is it possible to configure such an environment without changing the project's docker-compose.yml?
What's the approach I should take?
You can use Docker in Docker (docker:dind image) to run multiple instances of Docker daemon on the same host, and have each tester use a different DOCKER_HOST to run their Compose stack. Each app instance will be deployed on a separate Docker daemon and isolated without requiring any change in docker-compose.yml.
Docker in Docker can be used to run a Docker daemon from another Docker daemon (the Docker daemon is the process that actually manages your containers when you use docker). See the Docker architecture docs and the original DinD blog post for details.
Example: run 2 Docker daemons exposing the app port
Let's consider 2 testers with this docker-compose.yml:
version: "3"
services:
  app:
    image: my/app:latest
    ports:
      - "8080:80"
Run 2 instances of the Docker daemon, exposing the daemon port and any port that will be exposed by Docker Compose (see below why):
# Run docker:dind and map daemon port 2375 to 23751 on localhost
# Expose daemon port 8080 on 8081 (the port that will be used by Tester 1)
# --privileged is required to run dind (dind-rootless exists but is experimental)
# DOCKER_TLS_CERTDIR="" deploys an insecure daemon:
# it's easier to use but should only be used for testing/dev purposes
docker run -d \
  -p 23751:2375 \
  -p 8081:8080 \
  --privileged \
  --name dockerd-tester1 \
  -e DOCKER_TLS_CERTDIR="" \
  docker:dind
# Second daemon using port 23752
docker run -d \
  -p 23752:2375 \
  -p 8082:8080 \
  --privileged \
  --name dockerd-tester2 \
  -e DOCKER_TLS_CERTDIR="" \
  docker:dind
Each tester can run their own stack on their Docker daemon by setting DOCKER_HOST env var:
# Tester 1 shell
# use dockerd-tester1 daemon on port 23751
export DOCKER_HOST=tcp://localhost:23751
# run our stack
docker-compose up -d
Same for Tester 2 on dockerd-tester2 port:
# Tester 2 shell
export DOCKER_HOST=tcp://localhost:23752
docker-compose up -d
Interacting with Tester 1 and 2's stacks
Need the ability to build and run tests using our docker-compose files and need the ability to run the actual project on different ports
The ports exposed for each tester will be exposed on their Docker daemon and reachable via http://$DOCKER_HOST:$APP_PORT instead of localhost:$APP_PORT (that's why we also exposed the app port on each daemon).
Considering our docker-compose.yml, testers will be able to access the application like this:
# Tester 1
# port 8081 is linked to port 8080 of Docker daemon running our app container
# itself redirect on port 8080
# in short: 8081 -> 8080 -> 80
curl localhost:8081
# Tester 2
# 8082 -> 8080 -> 80
curl localhost:8082
Alternative without exposing ports, using Docker daemon IP directly
Similar to the first example, you can also interact with the deployed app by using Docker daemon IP directly:
# Run a daemon without exposing ports
docker run -d \
  --privileged \
  --name dockerd-tester1 \
  -e DOCKER_TLS_CERTDIR="" \
  docker:dind
# Retrieve daemon IP
docker inspect --format '{{ .NetworkSettings.IPAddress }}' dockerd-tester1
# output like 172.17.0.2
# use it!
export DOCKER_HOST=tcp://172.17.0.2:2375
docker-compose up -d
# our app port is exposed on the daemon's IP
curl 172.17.0.2:8080
Here we contacted our daemon directly via its IP instead of exposing its ports on localhost.
You can even define your Docker daemons with static IPs in a docker-compose.yml such as:
version: "3"
services:
  dockerd-tester1:
    image: docker:dind
    privileged: true
    environment:
      DOCKER_TLS_CERTDIR: ""
    networks:
      dind-net:
        # static IP to set as DOCKER_HOST
        ipv4_address: 10.5.0.6

  # same for dockerd-tester2
  # ...

networks:
  dind-net:
    driver: bridge
    ipam:
      config:
        - subnet: 10.5.0.0/16
And then
export DOCKER_HOST=tcp://10.5.0.6:2375
# ...
Notes:
This may have some performance impact depending on the machine the daemons are deployed on
You can use dind-rootless instead of dind to avoid using the --privileged flag
It's better to avoid DOCKER_TLS_CERTDIR: "" for security reasons; see the TLS instructions on the docker image page for detailed usage of TLS (a minimal sketch follows below)
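For reference, here is a minimal sketch of the TLS variant, based on the docker:dind image's documented behaviour of generating certificates under DOCKER_TLS_CERTDIR (the port numbers and the dind-certs directory are just examples):
# Daemon with TLS enabled; the generated client certs land in ./dind-certs
docker run -d \
  -p 23761:2376 \
  --privileged \
  --name dockerd-tester1-tls \
  -e DOCKER_TLS_CERTDIR=/certs \
  -v "$PWD/dind-certs":/certs/client \
  docker:dind

# Tester shell: point the client at the TLS port and the generated client certs
export DOCKER_HOST=tcp://localhost:23761
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH="$PWD/dind-certs"
docker-compose up -d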
The OP already has a CI/CD system running. The question is: how can testers write new test cases in their own environments, which are not running on their local machines?
I suggest that you set up a k8s (Kubernetes) instance on your new "high-end" server. The installation of minikube is very easy and is enough when you have only one server (aka node).
With k8s you can control your Docker containers (or, to use the correct verb, "orchestrate" them).
You can do one of these things next:
Write a script for the testers' laptops, so they can start new environments. You can use the $USER variable for naming. Be aware that the testers then have access to k8s.
My favorite: don't create environments for users, create them for merge requests. They are not bound to users and can be created by your version control system (e.g. GitLab). A tester opens an MR, your server sets up a new environment, and the tester is ready to go. Your testers need no access to k8s (see the sketch after this list).
Not recommended, but possible: create environments manually for each tester.
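As a rough, hypothetical sketch of the per-merge-request option (assuming the CI job has kubectl access and the project's manifests live in a k8s/ directory; all names are illustrative):
# Run by the CI job for each merge request; MR_ID comes from the CI system
# (e.g. $CI_MERGE_REQUEST_IID in GitLab)
MR_ID="$1"
kubectl create namespace "mr-${MR_ID}" --dry-run=client -o yaml | kubectl apply -f -
kubectl -n "mr-${MR_ID}" apply -f k8s/    # deploy the project into its own namespace
kubectl -n "mr-${MR_ID}" get pods         # the tester can now check their isolated environment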

Jenkins running in Docker Container but unable to launch it on browser

I have Docker installed on Google Cloud and I have pulled the Jenkins image from Docker Hub. When I run a container from the Jenkins image using the command below, it shows "INFO: Jenkins is fully up and running". But when I try it in a browser at "http://cloud_external_ip:port", it doesn't open. It throws the message: "This site can't be reached".
docker container run -p 80:80 --name myjen jenkins
Have you tried to check your firewall rules from the Cloud Shell? For example:
$ gcloud compute firewall-rules list | grep 80
Then, if you need to set up a new rule:
$ gcloud compute firewall-rules create default-allow-http --allow tcp:80
For more info you can take a look at the Google documentation.
The Jenkins default port is 8080. You can find all the ports mapped in your Docker setup using:
$ docker ps -l
or only for one container:
$ docker port myappname
In the results, look for the Jenkins container's ExposedPorts, which looks similar to this:
"ExposedPorts": {
  "8080/tcp": {}
},
If you didn't change the default port for Jenkins and you were using this documentation during the installation, it's likely that your Jenkins is working on port 8080.
After checking your ports, if you want/need to change the port in Jenkins you have two options:
By command: java -jar jenkins.war --httpPort=80
By modifying the Jenkins config file /PATH/jenkins: search for HTTP_PORT and set your selected port, e.g. HTTP_PORT=80
You need to restart the service after modifying the parameter.
If you want to use port 8080, make sure that you have the correct firewall rules in GCP for this port. You can use the commands pointed out by @J.Rojas.
If you're running a web app inside a Docker container, then before browsing to it in a web browser you'll need to do PORT MAPPING.
Instead of running
docker run jenkins
Run this
docker run -p 8080:8080 jenkins
This will map your localhost to the internal IP of the container and you can access the application easily.
To change the port you can do:
docker run -p 8356:8080 jenkins
It can be accessed on port 8356.
Thanks
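Putting the answers above together for the original command (the firewall rule name is just an example): Jenkins listens on 8080 inside the container, so map the host port you want onto 8080 and make sure that host port is open in GCP:
# Map host port 80 to Jenkins' internal port 8080
docker container run -d -p 80:8080 --name myjen jenkins
# Open the host port in the GCP firewall
gcloud compute firewall-rules create default-allow-http --allow tcp:80
After that, http://cloud_external_ip should reach Jenkins.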

Docker container not able to call another running docker container from localhost?

I have two running Docker containers. One container calls the other, but when it tries the call, the application breaks. When I put the hostname of my machine inside the application instead, the application works.
This is really a dependency: if I deploy these two containers somewhere else, I again have to find the hostname of that machine and put it inside my application. Is there any other way to remove this dependency?
This URL is consumed by my Docker container and is failing:
http://localhost:8080/userData
When I update it with my hostname, it works:
http://nl55443lldsfa:8080/userData
But this is really a dependency I cannot keep changing inside my application every time. Is there any workaround for this?
You should use docker-compose to run both containers and link them using the links property in your YAML file.
This might be a good example:
web:
  image: nginx:latest
  ports:
    - "8080:8080"
  links:
    - php

php:
  image: php
Then the IP of each container will be associated with its service name in the /etc/hosts file of both containers, and you will be able to access them from inside the containers just by using that hostname.
Also be sure you are mapping the ports correctly; using http://localhost:8080 from the host machine shouldn't fail if you map the ports correctly and the service is running.
Put the two containers on the same network when running them. Only then can you use hostnames for inter-container communication.
Edit: And of course name your containers so you don't get a random container name each time.
Edit 2: The commands are:
$ docker network create -d bridge my-bridge-network
$ docker run -d \
--name webserver \
--network=my-bridge-network \
nginx:latest
$ docker run -d \
--name dbserver \
--network=my-bridge-network \
mysql:5.7
Containers started with a specified name and a common network can use those names internally to communicate with each other.
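Applied to the original question, a minimal sketch (the image names are placeholders; the /userData path comes from the question): put both containers on one user-defined network and call the other container by its name instead of localhost:
docker network create app-net
# hypothetical images: one serving /userData on 8080, one consuming it
docker run -d --name userservice --network app-net my-user-service-image
docker run -d --name webapp --network app-net my-web-app-image
# inside the webapp container the URL becomes:
#   http://userservice:8080/userData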

Build/push image from jenkins running in docker

I have two docker containers - one running jenkins and one running docker registry. I want to build/push images from jenkins to docker registry. How do I achieve this in an easy and secure way (meaning no hacks)?
The easiest would be to make sure the jenkins container and registry container are on the same host. Then you can mount the docker socket onto the jenkins container and use the dockerd from the host machine to push the image to the registry. /var/run/docker.sock is the Unix socket that dockerd is listening on.
By mounting the docker socket any docker command you run from that container executes as if it was the host.
$ docker run -dti --name jenkins -v /var/run/docker.sock:/var/run/docker.sock jenkins:latest
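From a job running in that Jenkins container, builds and pushes then go through the host's daemon, so localhost refers to the host. Assuming the registry container publishes port 5000 on that host (image name and tag are examples), a build/push looks like:
# Executed inside the Jenkins container, but runs against the host's Docker daemon
docker build -t localhost:5000/myapp:latest .
docker push localhost:5000/myapp:latest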
If you use pipelines, you can install this Docker Plugin https://plugins.jenkins.io/docker-workflow,
create a credentials resource on Jenkins to access the Docker registry, and do this in your pipeline:
stage("Build Docker image") {
    steps {
        script {
            docker_image = docker.build("myregistry/mynode:latest")
        }
    }
}

stage("Push images") {
    steps {
        script {
            withDockerRegistry(credentialsId: 'registrycredentials', url: "https://myregistry") {
                docker_image.push("latest")
            }
        }
    }
}
Full example at: https://pillsfromtheweb.blogspot.com/2020/06/build-and-push-docker-images-with.html
I use this type of workflow in a Jenkins docker container, and the good news is that it doesn't require any hackery to accomplish. Some people use "docker in docker" to accomplish this, but I can't help you if that is the route you want to go as I don't have experience doing that. What I will outline here is how to use the existing docker service (the one that is running the jenkins container) to do the builds.
I will make some assumptions since you didn't specify what your setup looks like:
you are running both containers on the same host
you are not using docker-compose
you are not running docker swarm (or swarm mode)
you are using docker on Linux
This can easily be modified if any of the above conditions are not true, but I needed a baseline to start with.
You will need the following:
access from the Jenkins container to docker running on the host
access from the Jenkins container to the registry container
Prerequisites/Setup
Setting that up is pretty straightforward. In the case of getting Jenkins access to the running docker service on the host, you can do it one of two ways: 1) over TCP, or 2) via the docker Unix socket. If you already have docker listening on TCP, you would simply take note of the host's IP address and the default docker TCP port number (2375 or 2376 depending on whether or not you use TLS), along with any TLS configuration you may have.
If you prefer not to enable the docker TCP service it's slightly more involved, but you can use the UNIX socket at /var/run/docker.sock. This requires you to bind mount the socket to the Jenkins container. You do this by adding the following to your run command when you run jenkins:
-v /var/run/docker.sock:/var/run/docker.sock
You will also need to create a jenkins user on the host system with the same UID as the jenkins user in the container and then add that user to the docker group.
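A rough sketch of that host-side setup (UID 1000 is only the common default of the official Jenkins image; check the actual UID in your container first):
# Inside the container: find the jenkins user's UID (commonly 1000)
docker exec jenkins id -u jenkins
# On the host: create a matching user and give it access to the Docker daemon
sudo useradd --uid 1000 --no-create-home jenkins
sudo usermod -aG docker jenkins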
Jenkins
You'll now need a Docker build/publish plugin like the CloudBees Docker Build and Publish plugin or some other plugin depending on your needs. You'll want to note the following configuration items:
Docker URI/URL will be something like tcp://<HOST_IP>:2375 or unix:///var/run/docker.sock depending on how we did the above setup. If you use TCP and TLS for the docker service you will need to upload the TLS client certificates for your Jenkins instance as "Docker Host Certificate Authentication" to your usual credentials section in Jenkins.
Docker Registry URL will be the URL to the registry container, NOT localhost. It might be something like http://<HOST_IP>:32768 or similar depending on your configuration. You could also link the containers, but that doesn't easily scale if you move the containers to separate hosts later. You'll also want to add the credentials for logging in to your registry as a username/password pair in the appropriate credentials section.
I've done this exact setup so I'll give you a "tl;dr" version of it, as getting into depth here is way outside of the scope of something for Stack Overflow:
Install PID1 handler files in the container (e.g. tini). You need this to handle signaling and process reaping. This will be your entrypoint.
Install some process control service (e.g. supervisord) packages. Generally running multiple services in containers is not recommended, but in this particular case your options are very limited.
Install Java/Jenkins package or base your image from their DockerHub image.
Add a dind (Docker-in-Docker) wrapper script. This is the one I based my config on.
Create the configuration for the process control service to start Jenkins (as jenkins user) and the dind wrapper (as root).
Add jenkins user to docker group in Dockerfile
Run docker container with --privileged flag (DinD requires it).
You're done!
Thanks for your input! I came up with this after some experimentation.
docker run -d \
-p 8080:8080 \
-p 50000:50000 \
--name jenkins \
-v "$(pwd)"/data/jenkins:/var/jenkins_home \
-v /Users/.../.docker/machine/machines/docker:/Users/.../.docker/machine/machines/docker \
-e DOCKER_TLS_VERIFY="1" \
-e DOCKER_HOST="tcp://192.168.99.100:2376" \
-e DOCKER_CERT_PATH="/Users/.../.docker/machine/machines/docker" \
-e DOCKER_MACHINE_NAME="docker" \
johannesw/jenkins-docker-cli

accessing a docker container from another container

I created two Docker containers based on two different images, one for a DB and another for a webserver. Both containers are running on my Mac (OS X).
I can access the DB container from the host machine, and likewise I can access the webserver from the host machine.
However, how do I access the DB connection from the webserver?
The way I started the DB container is:
docker run --name oracle-db -p 1521:1521 -p 5501:5500 oracle/database:12.1.0.2-ee
I started the WLS container as:
docker run --name oracle-wls -p 7001:7001 wls-image:latest
I can access the DB on the host by connecting to:
sqlplus scott/welcome1@//localhost:1521/ORCLCDB
I can access WLS on the host at:
http://localhost:7001/console
It's easy.
If you have two or more running containers, complete the following steps:
docker network create myNetwork
docker network connect myNetwork web1
docker network connect myNetwork web2
Now you can connect from the web1 container to web2, or the other way round.
Use the internal network IP addresses which you can find by running:
docker network inspect myNetwork
Note that only internal IP addresses and ports are accessible to the containers connected by the network bridge.
So for example, assuming that the web1 container was started with docker run -p 80:8888 web1 (meaning that its server is running on port 8888 internally), and inspecting myNetwork shows that web1's IP is 172.0.0.2, you can connect from web2 to web1 using curl 172.0.0.2:8888.
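On a user-defined network like myNetwork, Docker's embedded DNS also resolves container names, so you can usually skip the IP lookup entirely:
# from inside web2: the container name resolves on the shared user-defined network
curl web1:8888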
The easiest way is to use --link; however, newer versions of Docker are moving away from that, and in fact that switch will be removed soon.
The link below offers a nice how-to on connecting two containers. You can skip the attach portion, since that is just a useful how-to on adding items to images.
https://web.archive.org/web/20160310072132/https://deis.com/blog/2016/connecting-docker-containers-1/
The part you are interested in is the communication between two containers. The easiest way is to refer to the DB container by name from the webserver container.
Example:
Say you named the DB container db1 and the webserver container web0. The containers should both be on the bridge network, which means the web container should be able to connect to the DB container by referring to its name.
So if you have a web config file for your app, then for the DB host you will use the name db1.
If you are using an older version of docker, then you should use --link.
Example:
Step 1: docker run --name db1 oracle/database:12.1.0.2-ee
Then when you start the web app, use:
Step 2: docker run --name web0 --link db1 webapp/webapp:3.0
The web app will then be linked to the DB. However, as I said, the --link switch will be removed soon.
I'd use Docker Compose instead, which will build a network for you. However, you will need to download Docker Compose for your system: https://docs.docker.com/compose/install/#prerequisites
An example setup is like this:
The file name is base.yml:
version: "2"
services:
  webserver:
    image: moodlehq/moodle-php-apache:7.1
    depends_on:
      - db
    volumes:
      - "/var/www/html:/var/www/html"
      - "/home/some_user/web/apache2_faildumps.conf:/etc/apache2/conf-enabled/apache2_faildumps.conf"
    environment:
      MOODLE_DOCKER_DBTYPE: pgsql
      MOODLE_DOCKER_DBNAME: moodle
      MOODLE_DOCKER_DBUSER: moodle
      MOODLE_DOCKER_DBPASS: "m@0dl3ing"
      HTTP_PROXY: "${HTTP_PROXY}"
      HTTPS_PROXY: "${HTTPS_PROXY}"
      NO_PROXY: "${NO_PROXY}"
  db:
    image: postgres:9
    environment:
      POSTGRES_USER: moodle
      POSTGRES_PASSWORD: "m@0dl3ing"
      POSTGRES_DB: moodle
      HTTP_PROXY: "${HTTP_PROXY}"
      HTTPS_PROXY: "${HTTPS_PROXY}"
      NO_PROXY: "${NO_PROXY}"
This will give the network a generic name (I can't remember off the top of my head what that name is) unless you use the -p/--project-name switch,
i.e. docker-compose -p setup1 -f base.yml up
NOTE: if you use the -p switch, you will need to use it whenever calling docker-compose, e.g. docker-compose -p setup1 -f base.yml down. This is so you can have more than one instance of webserver and db, and so docker-compose knows which instance you want to run commands against; it also lets you have more than one running at once. Great for CI/CD, if you are running tests in parallel on the same server.
Docker Compose also has the same commands as docker, so docker-compose -p setup1 -f base.yml exec webserver do_some_command
The best part is, if you want to change DBs or something like that for unit tests, you can include an additional .yml file in the up command and it will overwrite any items with similar names; I think of it as a key=>value replacement.
Example:
db.yml
version: "2"
services:
  webserver:
    environment:
      MOODLE_DOCKER_DBTYPE: oci
      MOODLE_DOCKER_DBNAME: XE
  db:
    image: moodlehq/moodle-db-oracle
Then call docker-compose -p setup1 -f base.yml -f db.yml up
This will override the db service with a different setup. When you need to connect to these services from another container, you use the name set under services, in this case webserver and db.
I think this might actually be a more useful setup in your case, since you can set all the variables you need in the .yml files and just run the docker-compose command when you need them started. So it's more of a start-it-and-forget-it setup.
NOTE: I did not use the ports option, since exposing the ports is not needed for container-to-container communication. It is needed only if you want the host, or applications outside the host, to connect to the container. If you expose a port, then the port is open to all communication that the host allows. So exposing web on port 80 is the same as starting a webserver on the physical host and will allow outside connections, if the host allows it. Also, if you want to run more than one web app at once, for whatever reason, then exposing port 80 will prevent you from running additional web apps if you try exposing that port as well. So, for CI/CD it is best to not expose ports at all, and if you use docker-compose with the -p switch, all containers will be on their own network so they won't collide. So you will pretty much have a container of containers.
UPDATE: After using these features further and seeing how others have done it for CI/CD programs like Jenkins, a network is also a viable solution.
Example:
docker network create test_network
The above command will create a "test_network" to which you can attach other containers, which is made easy with the --network switch.
Example:
docker run \
--detach \
--name db1 \
--network test_network \
-e MYSQL_ROOT_PASSWORD="${DBPASS}" \
-e MYSQL_DATABASE="${DBNAME}" \
-e MYSQL_USER="${DBUSER}" \
-e MYSQL_PASSWORD="${DBPASS}" \
--tmpfs /var/lib/mysql:rw \
mysql:5
Of course, if you have proxy network settings you should still pass those into the containers using the -e or --env-file switches, so the containers can communicate with the internet. Docker says the proxy settings should be absorbed by the container in newer versions of docker; however, I still pass them in out of habit. This is the replacement for the --link switch, which is going away. Once the containers are attached to the network you created, you can still refer to those containers from other containers using the name of the container. Per the example above that would be db1. You just have to make sure all containers are connected to the same network, and you are good to go.
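As a quick illustration of that name-based lookup (a throwaway client container; the variables are the same ones used in the run command above):
# a one-off client on the same network reaches the DB container by its name "db1"
docker run --rm --network test_network mysql:5 \
  mysql -h db1 -u "${DBUSER}" -p"${DBPASS}" -e "SELECT 1"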
For a detailed example of using network in a cicd pipeline, you can refer to this link:
https://git.in.moodle.com/integration/nightlyscripts/blob/master/runner/master/run.sh
This is the script that is run in Jenkins for a huge set of integration tests for Moodle, but the idea/example can be used anywhere. I hope this helps others.
You will have to access the DB through the IP of the host machine, or, if you want to access it via localhost:1521, run the webserver with host networking, like:
docker run --net=host --name oracle-wls wls-image:latest
Using docker-compose, services are exposed to each other by name by default (see the Compose docs).
You could also specify an alias like:
version: '2.1'
services:
  mongo:
    image: mongo:3.2.11
  redis:
    image: redis:3.2.10
  api:
    image: some-image
    depends_on:
      - mongo
      - solr
    links:
      - "mongo:mongo.openconceptlab.org"
      - "solr:solr.openconceptlab.org"
      - "some-service:some-alias"
And then access the service using the specified alias as a host name, e.g. mongo.openconceptlab.org for mongo in this case.
Environment: Windows 10, Docker Desktop version 4.5.1.
Use hostname host.docker.internal to access services running on your host machine from inside a container.
See: https://docs.docker.com/desktop/windows/networking/#use-cases-and-workarounds
I run PostgreSQL in one container and my app in a separate container.
I configure the app database connection to use host.docker.internal as the hostname and it just works.
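For example (the container name, credentials, database name, and the DATABASE_URL variable are all placeholders): with PostgreSQL published on the host's port 5432, the app container can be pointed at the host like this:
# host.docker.internal resolves to the host machine from inside the container (Docker Desktop)
docker run -d --name myapp \
  -e DATABASE_URL="postgres://user:pass@host.docker.internal:5432/mydb" \
  myapp-image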
Consider this example:
We create two containers here: a PostgreSQL server and pgAdmin (a tool for accessing the server, like phpMyAdmin, SQL studio, or Workbench).
Exposed ports:
- PostgreSQL ---> 5436
- pgAdmin ---> 5050
After adding a server in pgAdmin with the hostname localhost, it will show a connection error, because inside the pgAdmin container localhost refers to that container itself; we need the PostgreSQL container's IP instead to solve the problem.
docker network create con
docker network connect con app1
docker network connect con app2
This command shows the connected containers' IP addresses and other details:
docker network inspect con
Now you can see the IP addresses shown in the network inspect output. Choose the Postgres container's IP. You can access other exposed ports through this IP; here only Postgres port 5432 is exposed. Now set the hostname to the container IP and it will work.
You can use the default docker network. If you don't want to go through any docker networking, you can do this:
Copy the IP address of the Docker subnet under Resources > Network in Docker Preferences on Mac:
[Docker preferences screenshot]
As you can see from the screenshot, the IP address is
192.168.65.0
You just need to replace "localhost" in your container's config file with "192.168.65.1" (i.e. the IP address picked + 1).
You can start your containers and should be set for local development/testing.
For some more details, you can see my article:
Connect Docker containers the easy way
In my case, connecting to a container from another container by the IP provided by the bridge didn't work.
But it works with the name of the container.
So you can replace the IP with the name of the container.
