How to set up Sentry with Docker

I found the official Sentry image on Docker Hub, but the documentation is incomplete and I can't set up the environment step by step.
We have to set up the database container first, but none of the docs explain how. Specifically, I don't know what username and password Sentry will use.
I also get the following error when I run the sentry container:
sudo docker run --name some-sentry --link some-mysql:mysql -d sentry
e888fcf2976a9ce90f80b28bb4c822c07f7e0235e3980e2a33ea7ddeb0ff18ce
sudo docker logs some-sentry
Traceback (most recent call last):
  File "/usr/local/bin/sentry", line 9, in <module>
    load_entry_point('sentry==6.4.4', 'console_scripts', 'sentry')()
  File "/usr/local/lib/python2.7/site-packages/sentry/utils/runner.py", line 310, in main
    initializer=initialize_app,
  File "/usr/local/lib/python2.7/site-packages/logan/runner.py", line 167, in run_app
    configure_app(config_path=config_path, **kwargs)
  File "/usr/local/lib/python2.7/site-packages/logan/runner.py", line 89, in configure_app
    raise ValueError("Configuration file does not exist at %r" % (config_path,))
ValueError: Configuration file does not exist at '/.sentry/sentry.conf.py'

UPDATE circa version 21
They don't seem to want to build the official image for us any more, per the deprecation notice on Docker Hub. The good news: https://develop.sentry.dev/self-hosted/#getting-started supplies an install script, and an official docker-compose setup is included. Kafka and Zookeeper now appear to be required as well, so follow those docs to stay up to date.
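For reference, the flow from those docs looks roughly like this; a sketch only, since the repository (formerly onpremise, now self-hosted) and script may change:
git clone https://github.com/getsentry/self-hosted.git
cd self-hosted
./install.sh          # generates config, runs migrations, prompts for a superuser
docker-compose up -d  # brings up the full stack, including Kafka and Zookeeper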
This is a moving target, so I suggest checking https://hub.docker.com/_/sentry/ for updates; their documentation is pretty good.
Circa version 8 you can easily convert those instructions to use docker-compose
docker-compose.yml
version: "2"
services:
  redis:
    image: redis:3.0.7
    networks:
      - sentry-net
  postgres:
    image: postgres:9.6.1
    environment:
      - POSTGRES_USER=sentry
      - POSTGRES_PASSWORD=sentry
    # volumes:
    #   - ./data:/var/lib/postgresql/data:rw
    networks:
      - sentry-net
  sentry:
    image: sentry:${SENTRY_TAG}
    depends_on:
      - redis
      - postgres
    environment:
      - SENTRY_REDIS_HOST=redis
      - SENTRY_SECRET_KEY=${SECRET}
      - SENTRY_POSTGRES_HOST=postgres
    ports:
      - 9000:9000
    networks:
      - sentry-net
  sentry_celery_beat:
    image: sentry:${SENTRY_TAG}
    depends_on:
      - sentry
    environment:
      - SENTRY_REDIS_HOST=redis
      - SENTRY_SECRET_KEY=${SECRET}
      - SENTRY_POSTGRES_HOST=postgres
    command: "sentry run cron"
    networks:
      - sentry-net
  sentry_celery_worker:
    image: sentry:${SENTRY_TAG}
    depends_on:
      - sentry
    environment:
      - SENTRY_REDIS_HOST=redis
      - SENTRY_SECRET_KEY=${SECRET}
      - SENTRY_POSTGRES_HOST=postgres
    command: "sentry run worker"
    networks:
      - sentry-net
networks:
  sentry-net:
.env
SENTRY_TAG=8.10.0
Run docker run --rm sentry:8.10.0 config generate-secret-key and add the generated secret to .env:
.env updated
SENTRY_TAG=8.10.0
SECRET=somelongsecretgeneratedbythetool
First boot:
docker-compose up -d postgres
docker-compose up -d redis
docker-compose run sentry sentry upgrade
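During sentry upgrade you are prompted to create the initial superuser. If you skipped that step, you can add one later; a sketch using sentry createuser (the email is a placeholder):
docker-compose run --rm sentry sentry createuser --email admin@example.com --superuser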
Full boot:
docker-compose up -d
Debug:
docker-compose ps
docker-compose logs --tail=10
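Logs can also be followed for a single service rather than the whole stack, e.g. the sentry web service:
docker-compose logs -f --tail=10 sentry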

Take a look at the sentry.conf.py file that is part of the official Sentry Docker image. It reads a number of settings from the environment, e.g. SENTRY_DB_NAME and SENTRY_DB_USER. Below is an excerpt from the file.
os.getenv('SENTRY_DB_PASSWORD')
    or os.getenv('MYSQL_ENV_MYSQL_PASSWORD')
    or os.getenv('MYSQL_ENV_MYSQL_ROOT_PASSWORD')
So, as for your question about how to specify the database password: it must be set via environment variables. You can do this by running:
sudo docker run --name some-sentry --link some-mysql:mysql \
  -e SENTRY_DB_USER=XXX \
  -e SENTRY_DB_PASSWORD=XXX \
  -d sentry
As for your issue with the exception: you seem to be missing a config file. Configuration file does not exist at '/.sentry/sentry.conf.py'. That file is copied to /home/user/.sentry/sentry.conf.py inside the container, so I am not sure why your Sentry install is looking for it at /.sentry/sentry.conf.py. There may be an environment variable or a setting that controls this, or it may simply be a bug in the container.
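Since the path in the error looks like an unset HOME (/.sentry rather than /home/user/.sentry), one hedged workaround is to point Sentry at the config file explicitly via its --config flag; the in-container path is the one mentioned above, and the image's entrypoint behavior is an assumption:
sudo docker run --name some-sentry --link some-mysql:mysql -d sentry \
  sentry --config=/home/user/.sentry/sentry.conf.py start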

This works for me: https://github.com/slafs/sentry-docker. With it we don't have to set up the database or other services ourselves. I will look into the configuration in more detail later.

Here is my docker-compose.yml, using the official image from https://hub.docker.com/_/sentry/:
https://gist.github.com/ebuildy/270f4ef3abd41e1490c1
Run:
docker-compose -p sw up -d
docker exec -ti sw_sentry_1 sentry upgrade
That's it!

Related

After installing puckel/docker-airflow locally, no task instance is running and tasks get stuck forever

I used this tutorial to install Airflow with Docker on my local Mac: http://www.marknagelberg.com/getting-started-with-airflow-using-docker/ and everything worked well. I have the UI and I can connect my DAGs.
However, when I trigger my task manually it does not run, and I get an error message (screenshot of the task in the web UI omitted).
I work on a Mac and I used this code:
docker pull puckel/docker-airflow
docker run -d -p 8080:8080 -v /path/to/dags:/usr/local/airflow/dags puckel/docker-airflow webserver
Does someone have an idea of how I could fix this? Thanks for your help.
Is the airflow scheduler running?
The Airflow webserver only shows the DAGs and task status; the scheduler is what actually runs the tasks.
The command you showed above never starts the Airflow scheduler.
So, you can run the commands below in another console.
docker ps | grep airflow
Use the above command to get the container ID, then run:
docker exec -it [container ID] airflow scheduler
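The two steps can also be combined into one line, assuming only one container from that image is running (the ancestor filter is a standard docker ps option):
docker exec -it $(docker ps -q --filter "ancestor=puckel/docker-airflow") airflow scheduler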
For a more robust setup, I suggest using docker-compose.
Instead of plain docker commands, docker-compose manages your whole Docker stack for cases like this.
Here is the sample code for my puckel/docker-airflow based setup:
version: '3'
services:
  postgres:
    image: 'postgres:12'
    environment:
      - POSTGRES_USER=airflow
      - POSTGRES_PASSWORD=airflow
      - POSTGRES_DB=airflow
    volumes:
      - ./pg_data:/var/lib/postgresql/data
  webserver:
    image: puckel/docker-airflow:1.10.9
    restart: always
    depends_on:
      - postgres
    environment:
      - LOAD_EX=n
      - EXECUTOR=Local
      - AIRFLOW__CORE__SQL_ALCHEMY_CONN=postgres://airflow:airflow@postgres/airflow
    volumes:
      - ./dags:/usr/local/airflow/dags
    ports:
      - "8080:8080"
    command: webserver
    healthcheck:
      test: ["CMD-SHELL", "[ -f /usr/local/airflow/airflow-webserver.pid ]"]
To use it, you can:
1. Create a project folder and copy the reference code above into docker-compose.yml.
2. Check that the configuration is valid with the following docker-compose command:
docker-compose config
3. Bring the docker-compose project up with:
docker-compose up
Note: if you do not want to see detailed logs, you can run it in the background with:
docker-compose up -d
Now you can enjoy the Airflow UI in your browser at the following URL:
http://<the host ip>:8080
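To sanity-check the webserver from a shell, recent Airflow 1.10 releases also expose a /health endpoint (hedged; availability depends on your exact version):
curl http://<the host ip>:8080/health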
If you like the above answer, please vote it up.
Good luck,
WY

docker-compose as a production environment without internet

I'm a beginner with Docker. I created a docker-compose file that provides our production environment, and I want to use it on our clients' servers for production as well as locally, without internet access.
I have the Docker and Docker Compose binaries plus saved images that I want to load onto a server without internet. This is my init bash script on Linux:
#!/bin/sh -e
# docker
tar xzvf docker-18.09.0.tgz
sudo cp docker/* /usr/bin/
sudo dockerd &
# docker-compose
cp docker-compose-Linux-x86_64 /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
# load images
docker load --input images.tar
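For completeness, the images.tar archive would be produced beforehand on a machine with internet access using docker save. A sketch, with image names taken from the compose file below; the name of the locally built php image is an assumption:
docker pull nginx:1.15.6
docker pull postgres:10.1
# the php service is built from ./phpfpm, so tag and include that image too, e.g.:
docker save -o images.tar nginx:1.15.6 postgres:10.1 myproject_php:latest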
My structure:
code/*
nginx/
    site.conf
    logs/
phpfpm/
    custom.ini
postgres/
    data/
.env
docker-compose.yml
docker-compose file:
version: '3'
services:
  web:
    image: nginx:1.15.6
    ports:
      - "8080:80"
    volumes:
      - ./code:/code
      - ./nginx/site.conf:/etc/nginx/conf.d/default.conf
      - ./nginx/logs:/var/log/nginx
    restart: always
    depends_on:
      - php
  php:
    build: ./phpfpm
    restart: always
    volumes:
      - ./phpfpm/custom.ini:/opt/bitnami/php/etc/conf.d/custom.ini
      - ./code:/code
  db:
    image: postgres:10.1
    volumes:
      - ./postgres/data:/var/lib/postgresql/data
    environment:
      - POSTGRES_DB=${DB_NAME}
      - POSTGRES_USER=${DB_USER}
      - POSTGRES_PASSWORD=${DB_PASSWORD}
    ports:
      - 5400:5432
There are some questions:
Why doesn't Docker appear in the list of Linux services? When I install Docker via apt-get it does show up in the services list. How can I register Docker as a service and enable it to start on boot?
How can I make docker-compose run as a service at system startup?
Install Docker from the package with sudo dpkg -i /path/to/package.deb, which you can download from https://download.docker.com/linux/ubuntu/dists/.
Then, as a post-install step, run sudo systemctl enable docker. This starts Docker at system boot, and combined with restart: always your previous compose services will be restarted automatically.
I think dockerd does create the daemon, but you have to enable it:
$ sudo systemctl enable docker
Also add restart: always to your db container.
See: how the Docker restart policies work.
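For the second question, running docker-compose itself at startup, a common approach is a small systemd unit. This is a sketch only; the project path /opt/app and the unit name are assumptions:
# /etc/systemd/system/docker-compose-app.service
[Unit]
Description=docker-compose application
Requires=docker.service
After=docker.service

[Service]
Type=oneshot
RemainAfterExit=yes
WorkingDirectory=/opt/app
ExecStart=/usr/local/bin/docker-compose up -d
ExecStop=/usr/local/bin/docker-compose down

[Install]
WantedBy=multi-user.target
Enable it with sudo systemctl enable docker-compose-app, and it will bring the stack up after Docker starts at boot.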

Wordpress can't see linked mysql container on IBM Bluemix with Docker compose

I have a simple docker-compose.yml (the wp image is based on ibmjstart/wp-bluemix-container, the db image is mariadb):
db:
  image: registry.eu-gb.bluemix.net/foo/db
  environment:
    MYSQL_ROOT_PASSWORD: examplepass
  ports:
    - 3306:3306
  volumes:
    - /var/lib/mysql
wp:
  image: registry.eu-gb.bluemix.net/foo/wp
  links:
    - db:mysql
  ports:
    - 80:80
After executing docker-compose up -d I get:
error: missing WORDPRESS_DB_HOST and MYSQL_PORT_3306_TCP environment variables
Did you forget to --link some_mysql_container:mysql or set an external db
with -e WORDPRESS_DB_HOST=hostname:port?
As you can see, the db container is linked.
When I do the same without docker-compose, using
$ cf ic run -v mysql-vol:/var/lib/mysql --name wpdb -d registry.eu-gb.bluemix.net/foo/db
$ cf ic run -e MYSQL_ROOT_PASSWORD=my-secret-pw -v web-files:/var/www/html/ --link wpdb:mysql -d registry.eu-gb.bluemix.net/foo/wp
Everything works well.
I do export the Docker environment variables after cf ic login.
More info:
root@vps:~/test/compose# docker-compose --version
docker-compose version 1.7.0, build 0d7bf73
root@vps:~/test/compose# docker --version
Docker version 1.10.3, build 20f81dd
root@vps:~/test/compose# cf --version
cf version 6.15.0+fa1bfe2-2016-01-13
root@vps:~/test/compose# cf ic --version
Docker version 1.10.3, build 20f81dd
UPDATE: As I understand it, this problem is caused by the naming:
This docker-compose.yml throws an error:
db:
  image: registry.eu-gb.bluemix.net/foo/db
  environment:
    MYSQL_ROOT_PASSWORD: examplepass
  container_name: wpdb
  ports:
    - 3306:3306
  volumes:
    - /var/lib/mysql
wp:
  image: registry.eu-gb.bluemix.net/foo/wp
  links:
    - wpdb:mysql
  ports:
    - 80:80
ERROR: Service "wp" has a link to service "wpdb" which does not exist.
However, if you name the service and container the same, the syntax is OK:
db:
  image: registry.eu-gb.bluemix.net/foo/db
  environment:
    MYSQL_ROOT_PASSWORD: examplepass
  container_name: db
  ports:
    - 3306:3306
  volumes:
    - /var/lib/mysql
wp:
  image: registry.eu-gb.bluemix.net/foo/wp
  links:
    - db:mysql
  ports:
    - 80:80
Although the syntax is OK and the container is linked, the WordPress container logs this:
Warning: mysqli::mysqli(): (HY000/2002): php_network_getaddresses: getaddrinfo failed: Name or service not known in - on line 10
MySQL Connection Error: (2002) php_network_getaddresses: getaddrinfo failed: Name or service not known
Is this a bug in Bluemix? It looks like an /etc/hosts related problem.
Sorry for the long post :)
@bartimar Yes, the problem is related to the /etc/hosts file. It needs an entry for the db container, but one is not being created.
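A quick way to verify this from inside the container, assuming cf ic mirrors docker's exec subcommand as it does most others (the container name here is illustrative):
cf ic exec -it sw_wp_1 cat /etc/hosts
# with a working link there should be a line mapping "mysql" to the db container's IP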
I can recreate your problem in the prod-lon02-vizio1 environment, but it works fine in the prod-lon02-kraken1 environment.
My recommendation is for you to manually migrate to the prod-lon02-kraken1 environment to use docker-compose.yml with IBM Containers. All environments will be automatically migrated on May 25th anyway.
To migrate simply run the following command:
$ cf ic reprovision
Please note that your images are migrated to the new environment, but all your running containers are deleted and you will have to recreate them in the new environment. So use this option with caution.
For more details check the link below:
https://developer.ibm.com/bluemix/2016/03/24/new-deployment-architecture-for-containers/?linkId=22660520

Can I pass arguments to docker-compose's command config option?

Does anyone know how I can use the command: option in docker-compose to run my command with arguments? I know version 2 of the compose file format offers arguments, but that requires docker-engine 1.10.x; I am on docker-engine 1.6.2 and cannot upgrade at the moment.
I want to do something like this in docker-compose:
...
rstudio:
  image: rocker-hadleyverse
  command: -d -p 8787:8787 -e USER=<username> -e PASSWORD=<password> rocker-hadleyverse
  links:
    - db
...
Please (re)read the docker-compose docs. In docker-compose.yml, command refers to the actual command executed inside the container, not the options you pass to docker run. Your example translates to:
rstudio:
  image: rocker-hadleyverse
  ports:
    - "8787:8787"
  environment:
    - USER=foo
    - PASSWORD=bar
  links:
    - db
To detach after container start, use docker-compose up -d.
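If you do need to pass arguments, they belong to the process inside the container and go in command itself. A hedged illustration (the R invocation is just an example of an in-container process with flags on a rocker image; list syntax also works in the v1 format):
rstudio:
  image: rocker-hadleyverse
  command: ["R", "--no-save"]
  links:
    - db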

Docker Compose for Rails

I'm trying to replicate this docker command in a docker-compose.yml file:
docker run --name rails -d -p 80:3000 -v "$PWD"/app:/www -w /www -ti rails
My docker-compose.yml file:
rails:
  image: rails
  container_name: rails
  ports:
    - 80:3000
  volumes:
    - ./app:/wwww
When I do docker-compose up -d, the container is created but it does not start.
When I add tty: true to my docker-compose.yml file, the container starts fine, but my volume is not mounted.
How can I replicate my docker command exactly in a docker-compose.yml?
There are some ways to solve your problem.
Solution 1: If you want to use the rails image in your docker-compose.yml, you need to set the command and working directory for it, like:
rails:
  image: rails
  container_name: rails
  command: bash -c "bundle install && rails server -b 0.0.0.0"
  ports:
    - 80:3000
  volumes:
    - ./app:/www
  working_dir: /www
This will create a new container from the rails image every time you run docker-compose up.
Solution 2: Move your docker-compose.yml to the same directory as your Gemfile, and create a Dockerfile in that directory so you can build a Docker image in advance (to avoid running bundle install every time):
# Dockerfile
FROM rails:onbuild
I use rails:onbuild here for simplicity; for the differences between rails:onbuild and rails:<version>, please see the documentation.
After that, modify the docker-compose.yml to
rails:
  build: .
  container_name: rails
  ports:
    - 80:3000
  volumes:
    - .:/www
  working_dir: /www
Run docker-compose up and this should work!
If you modify your Gemfile, you may also need to rebuild the image with docker-compose build before running docker-compose up.
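For example, after editing the Gemfile:
docker-compose build
docker-compose up -d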
Thanks for your answer. It helped me find the solution.
It was actually a volume problem: I wanted to mount the volume at the directory /www, but that was not possible.
So I used the directory the rails image uses by default:
/usr/src/app
rails:
  image: rails
  container_name: rails
  ports:
    - 80:3000
  working_dir: /usr/src/app
  volumes:
    - ./app:/usr/src/app
  tty: true
Now my docker-compose up -d command works.
