I have Grafana 9.2.2 running with alerts already created. I need to run exactly the same Grafana, with its notifications, in a Docker container on any other host, without creating the alerts manually.
I can't find a JSON file with the alert definitions.
The alert configuration is stored inside the Grafana database, not in any config file, so it is not an easy migration.
You can take a dump of the Grafana database from the container where the alerts are configured and restore it in the new Grafana container.
Reference : https://grafana.com/blog/2020/01/13/how-to-migrate-your-configuration-database/
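For example, with the default SQLite backend the whole database is a single file, so copying it is often enough. A minimal sketch, assuming that default backend (the container name source_grafana is illustrative, and Grafana should be stopped or idle while copying):

docker cp source_grafana:/var/lib/grafana/grafana.db ./grafana.db

# On the new host, mount the copied database into a fresh container.
docker run -d -p 3000:3000 \
  -v "$(pwd)/grafana.db:/var/lib/grafana/grafana.db" \
  grafana/grafana:9.2.2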
One more option is to create the alerts using file provisioning instead of the UI, so that the YAML can be applied to any cluster.
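A minimal sketch of what that could look like, assuming Grafana 9.x alert-rule provisioning; the group, folder, and rule names below are illustrative, and the rule's data section (queries and expressions) is left as a placeholder:

# provisioning/alerting/alerts.yaml
apiVersion: 1
groups:
  - orgId: 1
    name: example-rule-group
    folder: Example Folder
    interval: 60s
    rules:
      - uid: example-alert-rule
        title: Example alert
        condition: C
        for: 5m
        data: []   # placeholder; define the queries/expressions for the real rule here

The directory containing the file can then be mounted into any Grafana container, for example:

docker run -d -p 3000:3000 \
  -v "$(pwd)/provisioning/alerting:/etc/grafana/provisioning/alerting" \
  grafana/grafana:9.2.2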
I am running an Airflow instance using Docker. I am able to access the Airflow UI at http://localhost:8080/ and to execute a sample DAG using PythonOperator. Using PythonOperator I am able to query a BigQuery table in my GCP environment. The service account key JSON file is added in my docker compose YAML file.
This works perfectly.
Now I want to use BigQueryOperator and BigQueryCheckOperator, for which I need a connection ID. This connection ID would come from Airflow connections, which are created through the Airflow UI.
But when I try to create a new Google BigQuery connection I get errors. Could anyone please help me fix this?
In your docker compose file, can you set the environment variable GOOGLE_APPLICATION_CREDENTIALS to /opt/airflow/configs/kairos-aggs-airflow-local-bq-connection.json? This might be enough to fix your first screenshot.
Looking at the docs and comparing your second screenshot, I think you could try selecting 'Google Cloud Platform' as the connection type and adding a project ID and Scopes to the form.
The answers to this question may also be helpful.
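As a sketch, the relevant docker-compose fragment could look like the following; the service name and host path are illustrative, while the key file path inside the container is the one mentioned above:

services:
  airflow-webserver:   # repeat for the scheduler/worker services as needed
    environment:
      GOOGLE_APPLICATION_CREDENTIALS: /opt/airflow/configs/kairos-aggs-airflow-local-bq-connection.json
    volumes:
      - ./configs/kairos-aggs-airflow-local-bq-connection.json:/opt/airflow/configs/kairos-aggs-airflow-local-bq-connection.json:ro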
I have taken some time to create a useful Docker volume for use at work. It has a restored backup of one of our software databases (SQL Server) on it, and I use it for testing/debug by just attaching it to whatever Linux SQL Container I feel like running at the time.
When I make useful Docker images at work, I share them with our team using either the Azure Container Registry or the AWS Elastic Container Registry. If there's a Dockerfile I've made as part of a solution, I can store it in our Git repo for others to access.
But what about volumes? Is there a way to share these with colleagues so they don't need to go through the process I went through to build the volume in the first place? So if I've got this 'databasevolume' is there a way to source control it? Or share it as a file to other users of Docker within my team? I'm just looking to save them the time of creating a volume, downloading the .bak file from its storage location, restoring it etc.
The short answer is that there is no built-in Docker functionality to export the contents of a volume, and docker export explicitly does not export the contents of the volumes associated with a container. You can, however, back up, restore, or migrate data volumes, as sketched below.
Note: if you're backing up a database, I'd suggest using the appropriate tools for that database.
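A minimal sketch of the tar-based approach from the Docker documentation, using a throwaway container; the volume and archive names are illustrative:

docker run --rm -v databasevolume:/data -v "$(pwd)":/backup alpine \
  tar czf /backup/databasevolume.tar.gz -C /data .

# Anyone with the archive can recreate the volume locally:
docker run --rm -v databasevolume:/data -v "$(pwd)":/backup alpine \
  tar xzf /backup/databasevolume.tar.gz -C /data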
In Jenkins, I want to get information such as: how many builds failed in a given period, which tests failed multiple times in successive builds, whether each of these failed tests failed for the same or a different reason each time, and whether a test is failing in multiple environments or only some environments.
How do I get such information from Jenkins ?
Your question is a bit vague, so I will describe the solution I used to solve this problem: Jenkins's InfluxDB plugin, with InfluxDB as the database and Grafana as the dashboard tool.
Setup InfluxDB
I use the docker image: influxdb:1.7-alpine
I mounted the volumes /docker-entrypoint-initdb.d and /var/lib/influxdb.
In the folder /docker-entrypoint-initdb.d I added a file db.iql to create my database:
CREATE DATABASE "jenkins" WITH DURATION 24w REPLICATION 1 SHARD DURATION 1d NAME "jenkins_retention_6month"
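Putting that together, the container could be started roughly like this; the host paths are illustrative:

docker run -d --name influxdb -p 8086:8086 \
  -v "$(pwd)/influxdb/initdb:/docker-entrypoint-initdb.d" \
  -v "$(pwd)/influxdb/data:/var/lib/influxdb" \
  influxdb:1.7-alpine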
Setup the InfluxDB plugin
See the configuration section of the plugin's page:
https://wiki.jenkins.io/display/JENKINS/InfluxDB+Plugin
Use the plugin
The influxDbPublisher step can be used to collect data from plugins like the Metrics Plugin; however, I use it with customDataMap:
influxDbPublisher(
    selectedTarget: 'myTarget',
    customDataMap: [
        myMeasure: [
            field: value
        ]
    ],
    customDataMapTags: [
        myMeasure: [
            tag: 'someTag'
        ]
    ]
)
Everything is documented on
https://wiki.jenkins.io/display/JENKINS/InfluxDB+Plugin
Setup Grafana
I use the docker image: grafana/grafana:6.4.3
I mounted the volume /var/lib/grafana.
Once the Grafana instance is running, add your InfluxDB database as a data source.
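Alternatively, the data source can be provisioned from a file so it is present on first start; a sketch of a file placed under /etc/grafana/provisioning/datasources/ in the container (the data source name is illustrative):

apiVersion: 1
datasources:
  - name: InfluxDB-Jenkins
    type: influxdb
    access: proxy
    url: http://influxdb:8086
    database: jenkins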
I configured grafana with the following environment variables:
GF_SERVER_DOMAIN=grafana.mydomain.com
GF_SECURITY_ADMIN_PASSWORD=MyPassword
GF_SMTP_ENABLED=true
GF_SMTP_HOST=smtp:25
GF_SMTP_FROM_ADDRESS=grafana@grafana.mydomain.com
I used the docker image namshi/smtp to get an SMTP server.
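Combined, starting Grafana could look roughly like this; the host path and container name are illustrative:

docker run -d --name grafana -p 3000:3000 \
  -v "$(pwd)/grafana/data:/var/lib/grafana" \
  -e GF_SERVER_DOMAIN=grafana.mydomain.com \
  -e GF_SECURITY_ADMIN_PASSWORD=MyPassword \
  -e GF_SMTP_ENABLED=true \
  -e GF_SMTP_HOST=smtp:25 \
  -e GF_SMTP_FROM_ADDRESS=grafana@grafana.mydomain.com \
  grafana/grafana:6.4.3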
Create Grafana Dashboards
It is very easy to create a new dashboard thanks to Grafana's auto-completion. You will probably need to tweak the data you send with the influxDbPublisher step a few times.
Once you have your dashboards, you can set up alerts in order to get notified early by email when something odd is happening with your CI.
I am running pgAdmin-4 as a docker container alongside my PostgreSQL deployment (in docker containers as well).
I am able to connect to the WebUI and manually add the DB server, getting access to all the needed information.
Is there any way to make the pgAdmin container automatically connected to my PostgreSQL server without the need for a manual configuration after the launch?
Thank you
You can export and save the list of server details in a JSON file and, after starting your instance, import it into pgAdmin 4. See export/import servers.
Then you can map the resulting JSON file into the container as mentioned in the documentation.
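As a sketch, a minimal servers.json and the corresponding container start could look like this; the server name, host, and credentials are illustrative, and the definitions are loaded from the mapped file at launch:

{
  "Servers": {
    "1": {
      "Name": "my-postgres",
      "Group": "Servers",
      "Host": "postgres",
      "Port": 5432,
      "MaintenanceDB": "postgres",
      "Username": "postgres",
      "SSLMode": "prefer"
    }
  }
}

docker run -d -p 8080:80 \
  -e PGADMIN_DEFAULT_EMAIL=admin@example.com \
  -e PGADMIN_DEFAULT_PASSWORD=secret \
  -v "$(pwd)/servers.json:/pgadmin4/servers.json" \
  dpage/pgadmin4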
In my company, we're using Jira for issue tracking. I need to write an application that integrates with it and synchronizes some data with other services. For testing, I want to have a Docker image of Jira with some initial data.
I'm using the official atlassian/jira-core image. After the initial setup, I saved a state by running docker commit, but unfortunately the new image seems to be empty, and I need to set it up again from scratch.
What should I do to save the initial setup? I want to run tests that will change things within Jira, so reverting it back is necessary to have a reliable test suite. After I spin up a new container it should have a few users created, and a project with some issues. I don't want to create them manually for each new instance. Also, the setup takes a lot of time, which is not acceptable for testing.
To get persistent storage you need to mount /var/atlassian/jira from your host system. That directory stores your configuration and data, so you do not need to commit: whenever you spin up a new container with /var/atlassian/jira mounted, it will have all the configuration you set previously.
docker run --detach -v /your_host_path/jira:/var/atlassian/jira --publish 8080:8080 cptactionhank/atlassian-jira:latest
For logs you can mount
/opt/atlassian/jira/logs
The above is valid if you are running the latest tag; otherwise you can explore the relevant Dockerfile.
Set volume mount points for installation and home directory. Changes to the home directory needs to be persisted as well as parts of the installation directory due to eg. logs.
VOLUME ["/var/atlassian/jira", "/opt/atlassian/jira/logs"]
atlassian-jira-dockerfile
Look at the entrypoint.sh; the comments from there are:
check if the server.xml file has been changed since the creation of
this Docker image. If the file has been changed the entrypoint script
will not perform modifications to the configuration file.
So I think you need to provide your own server.xml to stop the init process...
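A sketch of what that might look like, assuming the image keeps Tomcat's configuration at the usual /opt/atlassian/jira/conf/server.xml (worth verifying against the image's Dockerfile); the host paths are illustrative:

docker run --detach \
  -v /your_host_path/jira:/var/atlassian/jira \
  -v /your_host_path/server.xml:/opt/atlassian/jira/conf/server.xml \
  --publish 8080:8080 cptactionhank/atlassian-jira:latest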