I have a number of Linux servers with Docker installed on them. All of the servers are in a Docker swarm, and on each server I have a custom application. I also have ELK set up in AWS.
I want to collect all logs from my custom app into the ELK stack on AWS. I have successfully done that on one server with Filebeat by running the following commands:
1. docker pull docker.elastic.co/beats/filebeat-oss:7.3.0
2. created a file in /etc/filebeat/filebeat.yml with the content:
filebeat.inputs:
- type: container
  paths:
    - '/usr/share/continer_logs/*/*.log'
  containers.ids:
    - '111111111111111111111111111111111111111111111111111111111111111111'

processors:
- add_docker_metadata: ~

output.elasticsearch:
  hosts: ["XX.YY.ZZ.TT"]
3. chown root:root filebeat.yml
4. sudo docker run -u root -v /etc/filebeat/filebeat.yml:/usr/share/filebeat/filebeat.yml -v /var/lib/docker/containers:/usr/share/continer_logs -v /var/run/docker.sock:/var/run/docker.sock docker.elastic.co/beats/filebeat-oss:7.3.0
And now I want to do the same on all of my Docker hosts (and there are a lot of them) in the swarm.
I've encountered a number of problems:
1. How do I copy "filebeat.yml" to /etc/filebeat/filebeat.yml on every server?
2. How do I update the "containers.ids" on every server, and how do I update it when I upgrade the Docker image?
How do I copy "filebeat.yml" to /etc/filebeat/filebeat.yml on every server?
You need a configuration management tool for this. I prefer Ansible; you might want to take a look at others.
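As a rough sketch of that idea (the inventory group name swarm_nodes and the local files/filebeat.yml path are illustrative assumptions, not something from the question), a minimal Ansible playbook to push the config to every server could look like this:

# distribute-filebeat-config.yml - a minimal sketch, not a hardened playbook
- hosts: swarm_nodes            # illustrative inventory group containing all swarm servers
  become: yes
  tasks:
    - name: Ensure /etc/filebeat exists
      file:
        path: /etc/filebeat
        state: directory
        mode: '0755'

    - name: Copy filebeat.yml to every server
      copy:
        src: files/filebeat.yml           # path on the Ansible control machine
        dest: /etc/filebeat/filebeat.yml
        owner: root
        group: root
        mode: '0644'

You would then run it with something like ansible-playbook -i inventory distribute-filebeat-config.yml.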
How do I update the "containers.ids" on every server?
You don't have to. Docker manages it by itself IF you use swarm mode. You're using docker run, which is meant for the development phase and for deploying applications on a single machine. You need to look at Docker Stack to deploy an application across multiple servers.
How to update it when I upgrade the docker image?
docker stack deploy does both: it deploys and updates services.
NOTE: Your image should be present on each node of the swarm in order to get its container deployed on that node.
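As an illustration of that approach (the stack name logging, the config name filebeat_config, and the read-only mounts are assumptions layered on top of the question's paths, so treat this as a sketch rather than a tested deployment), a stack file can run one Filebeat task on every node and let the swarm distribute filebeat.yml itself:

# docker-stack.yml - deploy with: docker stack deploy -c docker-stack.yml logging
version: "3.3"

configs:
  filebeat_config:
    file: ./filebeat.yml                  # the swarm distributes this file to each node

services:
  filebeat:
    image: docker.elastic.co/beats/filebeat-oss:7.3.0
    user: root
    configs:
      - source: filebeat_config
        target: /usr/share/filebeat/filebeat.yml
    volumes:
      - /var/lib/docker/containers:/usr/share/continer_logs:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
    deploy:
      mode: global                        # one Filebeat container per swarm node

With mode: global the service is scheduled on every node, including nodes that join later, and re-running docker stack deploy rolls out image upgrades everywhere. It should also be possible to drop the hard-coded containers.ids entry from filebeat.yml, since the path glob already matches every container's log directory, which removes the per-host ID maintenance problem.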
Related
I am using the Docker Compose cluster sample setup from docker-solr-examples.
Now, I want to add my existing core definitions to the cluster. How do I deploy my existing core definitions and managed-schema.xml to ZooKeeper? I presume there is a way to put the files on one node and have them automatically replicate out to the other nodes.
Have you looked at this documentation: Using zookeeper to manage configuration?
From the Docker setup I understand that it creates a ZooKeeper ensemble consisting of 3 nodes, and those manage 3 Solr nodes.
Usually a Solr installation has a few preinstalled scripts that allow you to do some maintenance. You may have to enter the CLI of one of the Solr nodes and use that to upload the configuration.
Docker only gives you the convenience of spinning up the infrastructure quickly. Other maintenance tasks will still require the Solr or ZooKeeper CLIs or APIs.
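For example (the container name solr1, the ZooKeeper host names zoo1/zoo2/zoo3, and the configset path are assumptions based on a typical docker-solr-examples setup, so adjust them to yours), the bundled Solr CLI can upload a configset to the ensemble from inside one of the Solr containers:

# open a shell in one of the Solr containers
docker exec -it solr1 bash

# upload the configset (including managed-schema) to ZooKeeper; the -d directory is illustrative
bin/solr zk upconfig -n mycore_config -d /opt/solr/server/solr/configsets/mycore/conf -z zoo1:2181,zoo2:2181,zoo3:2181

# create a collection that uses the uploaded configset
bin/solr create -c mycore -n mycore_config -shards 1 -replicationFactor 3

ZooKeeper then serves the uploaded configuration to all Solr nodes, which is the replication you are looking for.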
Pull and Run the Solr Docker Image
sudo docker run -d -p 8983:8983 --name docker_image_name solr solr-precreate users
Access the Docker Container
docker ps
docker exec -it <container_name> bash
Create a New Core in Solr (run this after accessing the Docker container's bash)
bin/solr create -c core_name
Now you can load your data into the Solr core.
Delete a Solr Core
bin/solr delete -c core_name
I'm trying to create a CI/CD infrastructure using Jenkins. Considering recoverability, performance, and maintainability, I decided to run both Jenkins and its agents as Docker containers.
There are some restrictions that I cannot work around:
Cannot build this setup in a Linux environment (IT policy)
Cannot use WSL2 on Windows (I don't know when the IT department will release the Windows update that supports WSL2)
Security is a very high-priority topic
As far as I can see, a Docker-outside-of-Docker (DooD) setup is the proper way to implement this. If I run the container as root using the command below, I can bind-mount the docker.sock file and Jenkins jobs can create containers from Dockerfiles as agents:
docker run --name dood `
-d -u root --restart on-failure `
-p "8080:8080" -p "50000:50000" `
-v //var/run/docker.sock:/var/run/docker.sock `
-v /usr/local/bin/docker:/usr/bin/docker `
jenkins/jenkins:lts
However, it doesn't work if the Jenkins container is run as a non-root user. This is not acceptable, as it creates a vulnerability. The suggested way is to run the container as a non-root user and add the "jenkins" user to the "docker" group:
groupadd docker
usermod -a -G docker jenkins
newgrp docker
Unfortunately, it doesn't work: a "Got permission denied..." error occurs when Jenkins jobs try to create agent containers. I restarted Docker Desktop and the container, but the result is the same. I am not sure, but a possible reason might be the Windows environment; this may work in a Linux environment.
As a final effort, I tried the solution described in a Stack Overflow topic. I noticed the "setfacl" command does not work when Docker runs with Hyper-V. If I switch to WSL2 on my demo PC, the commands below solve the problem:
gpasswd -a jenkins docker
apt-get install acl
setfacl -m user:jenkins:rw /var/run/docker.sock
Unfortunately, the target Windows environment does not support WSL2, so I cannot use this solution. Moreover, the setfacl change is not persistent, but that is another story.
An alternative might be activating the "Expose daemon on tcp://localhost:2375 without TLS" option. However, this is not acceptable from a security point of view, so I ruled it out.
I am curious whether it is even possible to implement a Docker-outside-of-Docker setup for Jenkins on Docker Desktop for Windows. Considering the restrictions above, I am open to alternative setups/solutions as well.
I am quite new to Docker and not very experienced with Jenkins, so if I use the wrong terminology or approach, please let me know.
I am using Windows 10 Pro with Docker installed. I pulled the rocker/shiny image ($ docker pull rocker/shiny) on my computer and started it as described in the documentation at https://hub.docker.com/r/rocker/shiny/ using the following command:
docker run -d -p 80:3838 -v C:\\Users\\<My name>\\Documents\\R\\Rprojects\\ShinyHelloWorld\\:/srv/shiny-server/ -v C:\\Users\\<My name>\\Documents\\R\\Rprojects\\ShinyHelloWorld\\:/var/log/shiny-server/ rocker/shiny
The container created successfully:
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f0ee402966b9 rocker/shiny "/usr/bin/shiny-serv…" 2 minutes ago Up 2 minutes 0.0.0.0:80->3838/tcp youthful_banach
I created the ShinyHelloWorld application using RStudio, and the folder on the local host that I mounted into the Docker container basically contains one file, app.R, with the default Shiny application created by RStudio.
Now the problem is: I can't run this application from my browser using the address http://localhost:3838/ShinyHelloWorld/.
When I use the URL http://localhost:3838 it returns a web page with the single sentence Index of /. So, something is listening.
Did I run the Shiny server correctly?
I suppose that I am using an incorrect URL in my browser to access the server. How do I do it correctly?
Do I need to install my Shiny app on the server somehow?
Is it possible to run Shiny Server using a token, as with:
http://localhost:8888/?token=44dab68c1bc7b1662041853573f37cfa03f13d029d397816
as described, e.g., in Cook, J.: Docker for Data Science: Building Scalable and Extensible Data Infrastructure Around the Jupyter Notebook Server. Apress, 2017.
How do I find the token if it exists?
Suppose that I want to use docker-compose.yml and then $ docker-compose up. Please help me complete the script below to execute the same command as above.
version: "3"
services:
image: rocker/shiny
volumes:
- C:\\Users\\aabor\\Documents\\R\\Rprojects\\ShinyHelloWorld:/srv/shiny-server/
- C:\\Users\\aabor\\Documents\\R\\Rprojects\\ShinyHelloWorld:/var/log/shiny-server/
ports:
- 80:3838
container_name: rocker-shiny-container
Look at the ports column: 0.0.0.0:80->3838/tcp means your host port 80 maps to port 3838 in the container, so you should try http://localhost first.
I resolved the issue myself. The problem was with the folder path.
This command will create the Docker container correctly:
docker run -d -p 3838:3838 -v //c/Users/<My Name>/Documents/R/Rprojects:/srv/shiny-server/ -v //c/Users/<My Name>/Documents/R/Rprojects:/var/log/shiny-server/ rocker/shiny
Then, if I use the URL http://localhost:3838/ShinyHelloWorld/ in my browser, the Shiny application starts.
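For completeness, a docker-compose.yml equivalent to that corrected docker run command might look roughly like the following sketch (the service name shiny is an arbitrary choice, and the //c/... path syntax assumes the same mount style as the command above):

version: "3"
services:
  shiny:
    image: rocker/shiny
    container_name: rocker-shiny-container
    ports:
      - "3838:3838"
    volumes:
      - "//c/Users/<My Name>/Documents/R/Rprojects:/srv/shiny-server/"
      - "//c/Users/<My Name>/Documents/R/Rprojects:/var/log/shiny-server/"

With this file in place, docker-compose up -d should start the same container.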
https://github.com/getsentry/onpremise
mkdir -p data/{sentry,postgres} - Make our local database and sentry config directories.
This directory is bind-mounted with postgres so you don't lose state!
docker-compose run --rm web config generate-secret-key - Generate a secret key.
Add it to docker-compose.yml in base as SENTRY_SECRET_KEY.
docker-compose run --rm web upgrade - Build the database.
Use the interactive prompts to create a user account.
docker-compose up -d - Lift all services (detached/background mode).
Access your instance at localhost:9000!
I'm new to Docker.
I tried to run the Sentry container locally and succeeded.
But when I tried to deploy it on a cloud container service platform, I ran into some problems.
The platform only provides one way to run Docker: docker run xxx, unlike AWS, which offers a CLI.
So how can I deploy on that platform? Thanks.
Additionally, I must use that platform because it's my company's product lol.
I've run through the initial Overview of Docker Compose exactly as written and it works just fine locally with boot2docker. However, if I try to do a docker-compose up on a remote host, it does not add the code to the remote container.
To reproduce:
Run through the initial Overview of Docker Compose exactly as written.
Install Docker Machine and start a Dockerized VM on any cloud provider.
docker-machine create --driver my-favourite-cloud composetest
eval "$(docker-machine env composetest)"
Now that you're working with a remote host, run docker-compose up on the original code.
composetest $ docker-compose up
Redis runs fine but the Flask app does not.
composetest $ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
794c90928b97 composetest_web "/bin/sh -c 'python About a minute ago Exited (2) About a minute ago composetest_web_1
2c70bd687dfc redis "/entrypoint.sh redi About a minute ago Up About a minute 6379/tcp composetest_redis_1
What went wrong?
composetest $ docker logs 794c90928b97
python: can't open file 'app.py': [Errno 2] No such file or directory
Can we confirm it?
composetest $ docker-compose run -d web sleep 60
Starting composetest_redis_1...
composetest_web_run_3
composetest $ docker exec -it a4 /bin/bash
root@a4a73c6dd159:/code# ls -a
. ..
Nothing there. Can we fix it?
Comment out volumes in docker-compose.yml
web:
  build: .
  ports:
    - "5000:5000"
  # volumes:
  #   - .:/code
  links:
    - redis
redis:
  image: redis
Then just docker-compose up and it works!
Let's try again on boot2docker.
composetest $ eval "$(boot2docker shellinit)"
composetest $ docker-compose up
Recreating composetest_redis_1...
Recreating composetest_web_1...
Attaching to composetest_redis_1, composetest_web_1
...
The Flask app does work but it has a serious problem. If you change app.py, the Flask dev server doesn't reload and those changes aren't automatically seen. Even if you stop the container and docker-compose up again, the changes still aren't seen. I realize we lose this essential feature because the volume is no longer mounted. But not mounting the volume is the only way I've been able to get docker-compose to work with a remote host. We should be able to get both local and remote hosts to work using the same docker-compose.yml and Dockerfile.
How do I develop interactively with a local VM and deploy to a remote VM without having to change the Dockerfile or docker-compose.yml?
Versions:
Docker 1.7.0
Docker Compose 1.3.0
Docker Machine 0.3.0
Under the hood, Compose does pretty much the same thing as you can do with the regular command-line interface. So your command is roughly equivalent to:
$ docker run --name web -p 5000:5000 -v $(pwd):/code --link redis:redis web
The issue is that the volume is relative to the docker host, not the client. So it will mount the working directory on the remote VM, not the client. In your case, this directory is empty.
If you want to develop interactively with a remote VM, you will have to check out the source and edit the files on the VM.
UPDATE: It seems that you actually want to develop and test locally, then deploy a production version to a remote VM (apologies if I still misunderstand). To do this, I suggest you have a separate Compose file for development where you mount the local volume, then rebuild and deploy the image for production. Rebuilding the image ensures it picks up the latest version of the code. Mounting a volume in production breaks things because you hide the code baked into the image behind an empty directory.
It's also worth pointing out that Docker doesn't currently advise using Compose in production.
What I'm really looking for is found in this Using Compose in production doc. By Extending services in Compose you're able to develop interactively with a local VM and deploy to a remote VM without having to change the Dockerfile or docker-compose.yml.
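As a sketch of that extends-based layout (the file names common.yml and production.yml are illustrative, and this uses the pre-v2 Compose syntax matching the versions above; note that extends does not carry over links, so they are redeclared):

# common.yml - shared definitions; the code is baked into the image by the Dockerfile
web:
  build: .
  ports:
    - "5000:5000"
redis:
  image: redis

# docker-compose.yml - development: extend the base and mount local code for live reload
web:
  extends:
    file: common.yml
    service: web
  links:
    - redis
  volumes:
    - .:/code
redis:
  extends:
    file: common.yml
    service: redis

# production.yml - remote host: no volume mount, the image already contains the code
web:
  extends:
    file: common.yml
    service: web
  links:
    - redis
redis:
  extends:
    file: common.yml
    service: redis

Locally you would run docker-compose up as usual; against the remote Docker Machine host you would run something like docker-compose -f production.yml up -d after rebuilding the image.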
I ran into this "can't open file 'app.py'" problem when following the Getting Started tutorials; for me it was because I'm running Docker on Windows. I needed to make sure that I'd shared the drive containing my project directory in the Docker settings.
Source: see the "Shared Drive" section of Docker Settings in the docs