https://github.com/getsentry/onpremise
1. mkdir -p data/{sentry,postgres} - Make our local database and sentry config directories. This directory is bind-mounted with postgres so you don't lose state!
2. docker-compose run --rm web config generate-secret-key - Generate a secret key. Add it to docker-compose.yml in base as SENTRY_SECRET_KEY.
3. docker-compose run --rm web upgrade - Build the database. Use the interactive prompts to create a user account.
4. docker-compose up -d - Lift all services (detached/background mode).
5. Access your instance at localhost:9000!
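Put together, and assuming you start from a fresh clone of that repository and its stock docker-compose.yml, the whole setup is roughly this short sequence (a sketch of the steps above, not an official script):

# clone the on-premise repo and prepare local state directories
git clone https://github.com/getsentry/onpremise
cd onpremise
mkdir -p data/{sentry,postgres}    # bind-mounted so Postgres state survives restarts

# generate a secret key, then paste it into docker-compose.yml as SENTRY_SECRET_KEY
docker-compose run --rm web config generate-secret-key

# create the schema and the first user, then start everything in the background
docker-compose run --rm web upgrade
docker-compose up -d               # Sentry is now reachable on http://localhost:9000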
I'm new to Docker.
I tried to run the Sentry container locally and succeeded.
But when I tried to deploy it on a cloud container service platform, I ran into some problems.
The platform provides only one way to run Docker: docker run xxx, unlike AWS, where you can use the CLI.
So how can I deploy on that platform? Thanks.
Additionally, I have to use that platform because it's my company's product lol.
I have a Docker container that needs to run with --privileged to establish a VPN connection once it boots up.
I am migrating it to Cloud Run using Cloud Build.
I tried --container-privileged, but that seems to work only for GCE. I also added the following to the args for the gcloud run deploy call in cloudbuild.yaml, but it complains with the error: Invalid command "docker run --privileged": file not found anywhere in PATH
- --command
- docker run --privileged
Google Cloud Run does not use Docker to run containers.
Cloud Run uses gVisor.
Cloud Run does not support privileged containers.
I have a number of Linux servers with Docker installed on them; all of the servers are in a Docker swarm, and on each server I have a custom application. I also have an ELK setup in AWS.
I want to ship all the logs from my custom app to the ELK stack on AWS. I have successfully done that on one server with Filebeat by running the following commands:
1. docker pull docker.elastic.co/beats/filebeat-oss:7.3.0
2. Created /etc/filebeat/filebeat.yml with the following content:
filebeat.inputs:
- type: container
  paths:
    - '/usr/share/continer_logs/*/*.log'
  containers.ids:
    - '111111111111111111111111111111111111111111111111111111111111111111'
processors:
- add_docker_metadata: ~
output.elasticsearch:
  hosts: ["XX.YY.ZZ.TT"]
3. chown root:root filebeat.yml
4. sudo docker run -u root -v /etc/filebeat/filebeat.yml:/usr/share/filebeat/filebeat.yml -v /var/lib/docker/containers:/usr/share/continer_logs -v /var/run/docker.sock:/var/run/docker.sock docker.elastic.co/beats/filebeat-oss:7.3.0
And now I want to do the same on all of my Docker hosts (and there are a lot of them) in the swarm.
I've run into a number of problems:
1. How do I copy "filebeat.yml" to /etc/filebeat/filebeat.yml on every server?
2. How do I update the "containers.ids" on every server, and how do I update it when I upgrade the Docker image?
How do I copy "filebeat.yml" to /etc/filebeat/filebeat.yml on every server?
You need a configuration management tool for this. I prefer Ansible; you might want to take a look at others.
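For a rough idea of what that looks like with Ansible (a hypothetical playbook; the swarm_nodes inventory group and the local files/filebeat.yml path are assumptions, not from the question):

# distribute-filebeat.yml - run with: ansible-playbook -i inventory distribute-filebeat.yml
- hosts: swarm_nodes
  become: true
  tasks:
    - name: Ensure the Filebeat config directory exists
      file:
        path: /etc/filebeat
        state: directory
        mode: "0755"
    - name: Copy filebeat.yml to every node
      copy:
        src: files/filebeat.yml          # your local copy of the config shown in the question
        dest: /etc/filebeat/filebeat.yml
        owner: root
        group: root
        mode: "0644"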
How do I update the "containers.ids" on every server?
You don't have to. Docker manages it by itself IF you use swarm mode. You're using docker run, which is meant for the development phase and for deploying applications on a single machine. You need to look at Docker Stack to deploy an application across multiple servers (see the sketch after this answer).
How to update it when I upgrade the docker image?
docker stack deploy does both: it deploys and updates services.
NOTE: Your image should be present on each node of the swarm in order to get its container deployed on that node.
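To make the Docker Stack part concrete, a stack file that runs Filebeat on every node could look roughly like this. It is only a sketch: the image tag and mount paths mirror the docker run command above, and it still assumes /etc/filebeat/filebeat.yml exists on each node, which is what the configuration-management step handles.

# filebeat-stack.yml - deploy with: docker stack deploy -c filebeat-stack.yml logging
version: "3.7"
services:
  filebeat:
    image: docker.elastic.co/beats/filebeat-oss:7.3.0
    user: root
    volumes:
      - /etc/filebeat/filebeat.yml:/usr/share/filebeat/filebeat.yml
      - /var/lib/docker/containers:/usr/share/continer_logs
      - /var/run/docker.sock:/var/run/docker.sock
    deploy:
      mode: global            # one Filebeat task on every node in the swarm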
I have a Docker container with Golang code that interacts with AWS resources. In the testing environment, we use an IAM role. But how do I test locally? How do I use AWS credentials to run my Docker container locally? I am using a Dockerfile to build the Docker image.
Just mount your credentials file as read-only using:
docker run -v ${HOME}/.aws/credentials:/root/.aws/credentials:ro ...
given that root is the user in the container and you have also set up the credentials file on the host as described in this guide.
Or pass them directly using environment variables:
docker run -e AWS_ACCESS_KEY_ID=<ACCESS_KEY> -e AWS_SECRET_ACCESS_KEY=<SECRET_KEY> ...
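A third variant, if you'd rather keep the keys out of your shell history, is an env file; the file name aws.env and the region value below are just examples:

# aws.env - example name; keep this file out of version control
AWS_ACCESS_KEY_ID=<ACCESS_KEY>
AWS_SECRET_ACCESS_KEY=<SECRET_KEY>
# example region, adjust to yours
AWS_DEFAULT_REGION=us-east-1

docker run --env-file aws.env ...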
I am using jenkinsci/docker to set up some build automation on a server for a Laravel project.
Using the command docker run -p 8080:8080 -p 50000:50000 -v jenkins_home:/var/jenkins_home jenkins/jenkins:lts, everything boots up fine; I create the admin login, create the project, and link all of that together.
Yesterday I installed libraries in the container that this command gave me, using docker exec -u 0 -it <container_name_or_id> /bin/bash to get into the container as root and install things like PHP, Composer, and Node.js/npm. After this was done, I built the project and got a successful build.
Today I started the Docker container using the same command above, built the project, and the build fails. The container no longer has any of the installed libraries (PHP, Composer, Node).
It is my understanding that by including jenkins_home:/var/jenkins_home in the command that starts the Docker container, data would persist. Is this wrong?
So my question is: how can I keep these libraries in the Docker container that it builds?
I just started learning about these tools yesterday, so I'm not entirely sure I'm even doing this the best way. All I need is to be able to log into the Jenkins server, build the project, and ship the code to our staging/live servers.
Side note: I am not currently using a Dockerfile. As mentioned here, I am able to install tools in the container as root.
Your understanding is correct: you should use a persistent volume, otherwise you will lose your data every time the container is recreated.
I understand that you are running the container on a single machine with Docker. You need to point the local side of the volume definition at a host folder (a full path) to be sure the data persists; try with:
docker run -p 8080:8080 -p 50000:50000 -v "$(pwd)"/jenkins_home:/var/jenkins_home jenkins/jenkins:lts
Note the explicit host path on the local side: a bare name like jenkins_home is treated as a named volume rather than a folder next to where you run the command (and older versions of docker run also reject relative paths such as ./jenkins_home, so a full path is safest).
Here is the docker-compose.yml I've been using for a long time:
version: '2'
services:
  jenkins:
    image: jenkins/jenkins:lts
    volumes:
      - ./jenkins:/var/jenkins_home
    ports:
      - 80:8080
      - 50000:50000
It's basically the same, but in YAML format.
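Either way, a quick sanity check that the mount is doing its job (run these from the directory containing the compose file):

docker-compose up -d     # start Jenkins; ./jenkins fills up with its home directory
docker-compose down      # stop and remove the container
docker-compose up -d     # jobs, plugins and configuration in ./jenkins are still there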
I've run through the initial Overview of Docker Compose exactly as written and it works just fine locally with boot2docker. However, if I try to do a docker-compose up on a remote host, it does not add the code to the remote container.
To reproduce:
Run through the initial Overview of Docker Compose exactly as written.
Install Docker Machine and start a Dockerized VM on any cloud provider.
docker-machine create --driver my-favourite-cloud composetest
eval "$(docker-machine env composetest)"
Now that you're working with a remote host, run docker-compose up on the original code.
composetest $ docker-compose up
Redis runs fine but the Flask app does not.
composetest $ docker ps -a
CONTAINER ID   IMAGE             COMMAND                  CREATED              STATUS                          PORTS      NAMES
794c90928b97   composetest_web   "/bin/sh -c 'python     About a minute ago   Exited (2) About a minute ago              composetest_web_1
2c70bd687dfc   redis             "/entrypoint.sh redi    About a minute ago   Up About a minute               6379/tcp   composetest_redis_1
What went wrong?
composetest $ docker logs 794c90928b97
python: can't open file 'app.py': [Errno 2] No such file or directory
Can we confirm it?
composetest $ docker-compose run -d web sleep 60
Starting composetest_redis_1...
composetest_web_run_3
composetest $ docker exec -it a4 /bin/bash
root@a4a73c6dd159:/code# ls -a
. ..
Nothing there. Can we fix it?
Comment out volumes in docker-compose.yml
web:
  build: .
  ports:
    - "5000:5000"
  # volumes:
  #   - .:/code
  links:
    - redis
redis:
  image: redis
Then just docker-compose up and it works!
Let's try again on boot2docker.
composetest $ eval "$(boot2docker shellinit)"
composetest $ docker-compose up
Recreating composetest_redis_1...
Recreating composetest_web_1...
Attaching to composetest_redis_1, composetest_web_1
...
The Flask app does work but it has a serious problem. If you change app.py, the Flask dev server doesn't reload and those changes aren't automatically seen. Even if you stop the container and docker-compose up again, the changes still aren't seen. I realize we lose this essential feature because the volume is no longer mounted. But not mounting the volume is the only way I've been able to get docker-compose to work with a remote host. We should be able to get both local and remote hosts to work using the same docker-compose.yml and Dockerfile.
How do I develop interactively with a local VM and deploy to a remote VM without having to change the Dockerfile or docker-compose.yml?
Versions:
Docker 1.7.0
Docker Compose 1.3.0
Docker Machine 0.3.0
Under the hood, Compose does pretty much the same thing you can do with the regular command-line interface. So your command is roughly equivalent to:
$ docker run --name web -p 5000:5000 -v $(pwd):/code --link redis:redis web
The issue is that the volume is relative to the docker host, not the client. So it will mount the working directory on the remote VM, not the client. In your case, this directory is empty.
If you want to develop interactively with a remote VM, you will have to check out the source and edit the files on the VM.
UPDATE: It seems that you actually want to develop and test locally, then deploy a production version to a remote VM. (Apologies if I still misunderstand.) To do this, I suggest you have a separate Compose file for development where you mount the local volume, then rebuild and deploy the image for production. Rebuilding the image means it will pick up the latest version of the code. Mounting a volume in production breaks things because you've hidden the code in the image behind an empty directory.
It's also worth pointing out that Docker doesn't currently advise using Compose in production.
What I'm really looking for is found in this Using Compose in production doc. By Extending services in Compose you're able to develop interactively with a local VM and deploy to a remote VM without having to change the Dockerfile or docker-compose.yml.
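For reference, the extends pattern from that doc works out to roughly the following layout. The file names are the conventional ones rather than anything from the question, and only the development file mounts the code:

# common.yml - shared definition of the web service; the code is baked into the image
webapp:
  build: .
  ports:
    - "5000:5000"

# docker-compose.yml - local development: extend the shared service and mount the code
web:
  extends:
    file: common.yml
    service: webapp
  volumes:
    - .:/code
  links:
    - redis
redis:
  image: redis

# production.yml - remote host: same service, no host mount
# run with: docker-compose -f production.yml up -d
web:
  extends:
    file: common.yml
    service: webapp
  links:
    - redis
redis:
  image: redis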
I ran into this "can't open file 'app.py'" problem while following the Getting Started tutorials; for me it was because I'm running Docker on Windows. I needed to make sure that I'd shared the drive containing my project directory in the Docker settings.
Source: see the "Shared Drive" section of Docker Settings in the docs