I am having difficulty getting Elasticsearch slow logs written to files inside my Docker container.
Elasticsearch Docker settings:
"HostConfig": {
    "Binds": [
        "/mnt/mydisk/data:/usr/share/elasticsearch/data",
        "/mnt/mydisk/logs:/usr/share/elasticsearch/logs"
    ],
I changed the Elasticsearch index settings as below:
{
    "index.search": {
        "slowlog": {
            "level": "info",
            "threshold": {
                "fetch": {
                    "warn": "2s",
                    "trace": "200ms",
                    "debug": "500ms",
                    "info": "800ms"
                },
                "query": {
                    "warn": "10s",
                    "trace": "500ms",
                    "debug": "2s",
                    "info": "5s"
                }
            }
        }
    }
}
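Settings like these are applied through the index _settings endpoint; a minimal sketch (the index name "myindex" is illustrative):

curl -XPUT 'http://localhost:9200/myindex/_settings' \
    -H 'Content-Type: application/json' \
    -d '{ "index.search.slowlog.level": "info" }'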
I can only see GC logs in the /mnt/mydisk/logs path, and there is no "/usr/share/elasticsearch/logs" folder or path.
How can I save the slow logs into /mnt/mydisk/logs?
By the way, I can see the slow logs via the "docker logs elasticsearch" command, but I cannot find where they are saved, or how to change the path.
You are looking for this properties file: log4j2.properties. If you are using the official Elasticsearch image, the default configuration is to log everything to stdout (i.e. "docker logs").
Read more here.
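To get the slow logs written under /usr/share/elasticsearch/logs (and therefore into /mnt/mydisk/logs through the bind mount), you can add a file appender for the index.search.slowlog logger. Below is a sketch modeled on the rolling-file appender that ships with the standard (non-Docker) distribution; property names and path variables differ between Elasticsearch versions, so treat them as assumptions to verify:

# rolling file appender for search slow logs (sketch)
appender.index_search_slowlog_rolling.type = RollingFile
appender.index_search_slowlog_rolling.name = index_search_slowlog_rolling
appender.index_search_slowlog_rolling.fileName = ${sys:es.logs}_index_search_slowlog.log
appender.index_search_slowlog_rolling.filePattern = ${sys:es.logs}_index_search_slowlog-%d{yyyy-MM-dd}.log
appender.index_search_slowlog_rolling.layout.type = PatternLayout
appender.index_search_slowlog_rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c] %marker%.-10000m%n
appender.index_search_slowlog_rolling.policies.type = Policies
appender.index_search_slowlog_rolling.policies.time.type = TimeBasedTriggeringPolicy
appender.index_search_slowlog_rolling.policies.time.interval = 1
appender.index_search_slowlog_rolling.policies.time.modulate = true

# route the slow log logger to the file appender instead of only the console
logger.index_search_slowlog_rolling.name = index.search.slowlog
logger.index_search_slowlog_rolling.level = trace
logger.index_search_slowlog_rolling.appenderRef.index_search_slowlog_rolling.ref = index_search_slowlog_rolling
logger.index_search_slowlog_rolling.additivity = false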
I have an Elasticsearch deployment on Kubernetes (AKS). I'm using Elastic's official Docker images for the deployment. Logs are being stored on a persistent Azure Disk. How can I migrate some of these logs to another cluster with a similar setup? Only those logs that match a filter condition based on the datetime of the logs need to be migrated.
Use the Reindex API to achieve this:
POST _reindex
{
    "source": {
        "remote": {
            "host": "http://oldhost:9200",
            "username": "user",
            "password": "pass"
        },
        "index": "source",
        "query": {
            "match": {
                "test": "data"
            }
        }
    },
    "dest": {
        "index": "dest"
    }
}
Note:
Run the aforementioned command on your target instance.
Make sure that the source instance is whitelisted in elasticsearch.yml:
reindex.remote.whitelist: oldhost:9200
Run the process asynchronously using the query parameter below:
POST _reindex?wait_for_completion=false
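Since the migration is filtered on datetime, the match query above can be replaced with a range query; a minimal sketch, assuming the documents carry a @timestamp field (the field name and dates are illustrative):

"query": {
    "range": {
        "@timestamp": {
            "gte": "2020-01-01T00:00:00Z",
            "lt": "2020-02-01T00:00:00Z"
        }
    }
}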
On my Linux VM I have set up a Docker container to build and debug my VS Code C++ project via an SSH connection. Building works inside the container, as does running and debugging with breakpoints. I am stuck on how to redirect stdout to the Output and Problems tabs so I can see warnings generated by the build and then navigate to the affected files. Instead, it just prints the build output to a terminal window.
The project is located in a docker volume in the location:
/var/snap/docker/common/var-lib-docker/volumes/vol-tom-2/_data/My-Project
And inside the container it is located in:
/home/buildmaster/workspace/My-Project
For debugging, I have modified the launch.json file so that, when setting breakpoints, it matches up the files in the project with the ones in the container, by adding this entry:
"sourceFileMap": {
"/home/user/workspace": "/var/snap/docker/common/var-lib-docker/volumes/vol-tom-2/_data/"
},
I would like to find something similar in tasks.json so that it can sync up my local VS Code project with the warnings and errors generated by the build inside the container.
Below is my tasks.json file; thanks in advance if anyone has any idea how to solve this!
{
    "version": "2.0.0",
    "command": "/bin/sh",
    "args": ["-c"],
    "reveal": "always",
    "tasks": [
        {
            "args": [
                "user@localhost",
                "-p",
                "32772",
                "/home/build-scripts/build-script.sh"
            ],
            "label": "build",
            "command": "ssh",
            "problemMatcher": {
                "owner": "cpp",
                "fileLocation": ["relative", "${workspaceRoot}"],
                "pattern": {
                    "regexp": "^\/host\/(.*):(\\d+):(\\d+):\\s+(warning|error):\\s+(.*)$",
                    "file": 1,
                    "line": 2,
                    "column": 3,
                    "severity": 4,
                    "message": 5
                }
            }
        }
    ]
}
So I have a Consul check that watches over a container and is designed to go critical when the container is stopped. I want to create a Consul watch that will run a script after the check has gone critical, or after several critical responses (for example, if my check sends 5 critical responses, I want it to run a script).
Here is the JSON for my working check, and my guess as to what my watch might look like:
{
    // this check works
    "checks": [
        {
            "id": "docker_stuff",
            "name": "curl test",
            "notes": "curls the docker container",
            "script": "/scripts/docker.py",
            "interval": "1s"
        }
    ],
    // this watch doesn't work
    "watches": [
        {
            "Node": "client2",
            "CheckID": "docker-stuff",
            "Name": "docker-stuff-watch",
            "Status": "critical",
            "Status_amt": "5",
            "handler": "/scripts/new-docker.sh",
            "Output": "container relaunched"
        }
    ]
}
What do I need to change in my watch to get it working?
Would I also need to use a Consul event to watch my health check, and then trigger a Consul watch (of the event type) that runs my /scripts/new-docker.sh script? If so, how would I make a Consul event that watches over my health check? For example, if this were my Consul check, watch, and event, what would I need to change to get this working?
{
    "checks": [
        {
            "id": "docker_stuff",
            "name": "curl test",
            "notes": "curls the docker container",
            "script": "/scripts/docker.py",
            "interval": "1s"
        }
    ],
    "watches": [
        {
            "type": "event",
            "name": "docker-stuff-watch",
            "handler": "/scripts/new-docker.sh"
        }
    ],
    "events": [
        {
            "Node": "client2",
            "CheckID": "docker-stuff",
            "Name": "docker-stuff-event",
            "Status": "critical",
            "Status_amt": "5",
            "Output": "container relaunched"
        }
    ]
}
What do I need to change in my watch to get it working?
Are there any errors? Make sure your watch handler /scripts/new-docker.sh is consuming the STDIN that Consul will be sending, even if it is throwing it away to /dev/null; otherwise the process will wait forever for it to be consumed.
Something like:
while read -r -t 0; do read -r; done
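A handler skeleton built around that loop might look like this (a sketch; the docker restart target "mycontainer" is an illustrative name, not from the question):

#!/bin/bash
# Drain the JSON payload that Consul pipes to the handler on STDIN,
# even though we don't use it, so the process doesn't block.
while read -r -t 0; do read -r; done
# Then take the recovery action.
docker restart mycontainer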
I would recommend considering an upgrade to the next version of Docker, 1.12 (a release candidate at the moment). The new concept of services can be used to state the desired number of containers to be run.
https://docs.docker.com/engine/swarm/swarm-tutorial/deploy-service/
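For instance, Swarm will keep a service at the requested number of replicas, restarting containers as needed (a sketch; the service name and image are illustrative):

docker service create --name myservice --replicas 5 myimage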
There's also a new HEALTHCHECK directive in the Dockerfile that enables you to bundle a check script with the container image.
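A minimal sketch of the directive, assuming the containerized service answers HTTP on port 80 (the URL is illustrative):

HEALTHCHECK --interval=30s --timeout=5s --retries=3 \
    CMD curl -f http://localhost/ || exit 1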
These new features might enable you to replace the functionality you've had to implement using Consul.
Hi, I didn't really know whether my question was more for Server Fault or here; I hope DevOps won't mind me posting it here.
I am working on a stack with Mesos/Marathon/Docker/GlusterFS, and I am tired of the lack of documentation.
I am looking for a sample Marathon deployment file for deploying using the glusterfs volume driver.
The author says that we should create the volume beforehand, but he doesn't say anything about mounting it.
"container": {
"type": "DOCKER",
"docker": {
"image": "kylemanna/openvpn:latest",
"parameters": [
{
"key": "volume-driver",
"value": "glusterfs"
},
{
"key": "cap-add",
"value": "NET_ADMIN"
}
],
"network": "BRIDGE",
"portMappings": [
{
"containerPort": 1194
}
]
},
"volumes": [
{
"containerPath": "/etc/openvpn",
"hostPath": "openvpn-data",
"mode": "RW"
}
]
}
My container keeps restarting in Marathon, and the logs say: /usr/local/bin/ovpn_run: line 16: /etc/openvpn/ovpn_env.sh: No such file or directory.
On my Gluster file server, this file is present at /data/openvpn-data/ovpn_env.sh.
I don't see any mount point in /mnt; I guess Marathon did the mount itself, but because the container keeps restarting, I don't see it.
I did a docker inspect to check where the filesystem was stored, and I found that it is stored in /var/lib/docker-volumes/_glusterfs/openvpn-data.
So here are my questions:
Is my Marathon JSON file correct?
Will the container wait for all data to be downloaded, and should I configure something for that?
Is the data erased when deleting a container on Marathon?
Should I have my ovpn_env.sh in /data/myvolume/ovpn_env.sh or /data/myvolume/etc/openvpn/ovpn_env.sh?
Have a look at the following issue
https://github.com/mesosphere/marathon/issues/2493#issuecomment-196743212
and the docs at
https://github.com/mesosphere/marathon/blob/bd076173b662b12d18e5dd568629a286b242ba91/docs/docs/persistent-volumes.md
Quote:
Docker volumes with plugin drivers is not available right now.
You'll have to create the volume/mount before you start the container, and map the host folder when you launch the app via Marathon (you do this already). I guess that's why it's currently called "persistent local volumes"...
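Creating the volume up front might look like this (a sketch, assuming a glusterfs volume plugin is installed on the host; the volume name matches the one in the question):

docker volume create --driver glusterfs --name openvpn-data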
Define it in the "parameters" part, like this:
"parameters": [
{
"key": "volume-driver",
"value": "glusterfs"
},
{
"key": "volume",
"value": "openvpn-data:/etc/openvpn"
}
]
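For reference, those two parameters correspond to the plain Docker flags (a sketch; the image is the one from the question):

docker run --volume-driver=glusterfs -v openvpn-data:/etc/openvpn kylemanna/openvpn:latest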
I can't find how to manage images in a private registry. I can push or pull an image because I know the ID, but how do I get the list of pushed images?
Take, for example, a person who wants to see the images available in their organization's private registry. How can they do that?
Unless I'm mistaken, I can't find an API or web UI to discover the registry content like index.docker.io does for the public registry.
Are there any open source projects to manage this?
Thanks.
Are there any open source projects to manage this?
There is a containerized web application that provides administration of one-to-many private registries. Its name is Docker Registry UI and it is FOSS.
The source is on GitHub and you can run it in a container like so:
docker run -p 8080:8080 -v my_data_dir:/var/lib/h2/ atcol/docker-registry-ui
Disclaimer: I wrote the web-app as I could not find one myself. I believe this answers your question (as quoted).
Thanks Thomas!
To allow the use of the search API, you must start the container specifying the value of the SEARCH_BACKEND environment variable, like this:
docker run -d -e SEARCH_BACKEND=sqlalchemy -p 5000:5000 --name registry samalba/docker-registry
Then I get a result for this query:
GET http://registry_host:5000/v1/search?q=base
Result:
{
    "num_results": 1,
    "query": "base",
    "results": [{"description": "", "name": "test/base-img"}]
}
To list all images, you can do this:
GET http://registry_host:5000/v1/search
Result:
{
    "num_results": 2,
    "query": "",
    "results": [
        {"description": "", "name": "test/base-img"},
        {"description": "", "name": "test/base-test"}
    ]
}
And to know the available versions (tags) of an image:
GET http://localhost:5000/v1/repositories/test/base-img/tags
Result:
{
    "0.1": "04e073e1efd31f50011dcde9b9f4d3148ecc4da94c0b7ba9abfadef5a8522d13",
    "0.2": "04e073e1efd31f50011dcde9b9f4d3148ecc4da94c0b7ba9abfadef5a8522d13",
    "0.3": "04e073e1efd31f50011dcde9b9f4d3148ecc4da94c0b7ba9abfadef5a8522d13"
}
I've written a docker-registry-frontend that you can find on GitHub. It allows you to browse your private registry and do almost everything that is available through the Docker registry API v1. Plus, it can be run as a Docker container on its own.
Here's a list of basic features with some screenshots: https://github.com/kwk/docker-registry-frontend/wiki/Features. In addition to these features, there's support for SSL encryption and Kerberos authentication.
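A sketch of how it might be started as a container; the image name and the ENV_DOCKER_REGISTRY_HOST/ENV_DOCKER_REGISTRY_PORT variables are assumptions here, so check the project's README for the exact names:

docker run -d \
    -e ENV_DOCKER_REGISTRY_HOST=my-private-registry.com \
    -e ENV_DOCKER_REGISTRY_PORT=5000 \
    -p 8080:80 \
    konradkleine/docker-registry-frontend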
I want to present my frontend for private registries; you can try it from GitHub or Docker Hub.
You can also find interface screenshots there.
To sum up, it has:
- an internal DB (BoltDB), which gives it the ability to store info, and as a result it responds much faster than the direct API calls used in other projects
- the ability to parse, store, and show info from the registry, such as image layer info:
  - name / tag
  - image size and number of pushes
  - upload and push dates
  - image creation command history
- the option to set up multiple repositories in case you have more than one registry, and observe them all in one place
- pretty statistics, drawing curves of upload counts and image sizes per tag with respect to dates
Update 2017-02-15
Since then, the following have also been added:
- find a parent
- show a tree graph of parents
- image deletion
- Bearer token auth support
As far as I can see, the Docker registry has a REST API, very similar to Docker itself. You can find the documentation at http://docs.docker.io/reference/api/registry_api/. But at first glance, I don't see a method to just list all images.
There is also a REST API for the official index (info at http://docs.docker.io/reference/api/docker-io_api/).
EDIT
I just tested the Docker registry API and it is not so self-explanatory. You can query all images of a certain repository. In my case, my repository is called "thomas/busybox". I can query all images in there by calling:
https://my-private-registry.com/v1/repositories/thomas/busybox/images
Result:
[
    {
        "id": "2d8e5b282c81244037eb15b2068e1c46319c1a42b80493acb128da24b2090739"
    },
    {
        "id": "6c991eb934609424f761d3d0a7c79f4f72b76db286aa02e617659ac116aa7758"
    },
    {
        "id": "9f4e93171ec525221fa9013d0e21f8690cef68590664eb5249e0b324c5faf31a"
    },
    {
        "id": "511136ea3c5a64f264b78b5433614aec563103b4d4702f3ba7d4d2698e22c158"
    }
]
Now I know that I have four images in my repository and I can query every image. The query for the first image would be:
https://my-private-registry.com/v1/images/2d8e5b282c81244037eb15b2068e1c46319c1a42b80493acb128da24b2090739/json
Result:
{
    "id": "2d8e5b282c81244037eb15b2068e1c46319c1a42b80493acb128da24b2090739",
    "parent": "9f4e93171ec525221fa9013d0e21f8690cef68590664eb5249e0b324c5faf31a",
    "created": "2014-04-24T15:59:59.47081913Z",
    "container": "d15320d6935ca35bc4198e373f29e730f4c53cce32b3809c2fecec22eb30018b",
    "container_config": {
        "Hostname": "4964db5b599b",
        ...
        "Tty": false,
        "OpenStdin": false,
        "StdinOnce": false,
        "Env": [
            "HOME=\/",
            "PATH=\/usr\/local\/sbin:\/usr\/local\/bin:\/usr\/sbin:\/usr\/bin:\/sbin:\/bin"
        ],
        "Cmd": [
            "\/bin\/sh",
            "-c",
            "#(nop) CMD [\/bin\/sh -c \/bin\/sh]"
        ],
        "Image": "9f4e93171ec525221fa9013d0e21f8690cef68590664eb5249e0b324c5faf31a",
        ...
        "OnBuild": []
    },
    "docker_version": "0.10.0",
    "author": "J\u00e9r\u00f4me Petazzoni <jerome@docker.com>",
    "config": {
        "Hostname": "4964db5b599b",
        "Domainname": "",
        "User": "",
        "Memory": 0,
        ...
        "Env": [
            "HOME=\/",
            "PATH=\/usr\/local\/sbin:\/usr\/local\/bin:\/usr\/sbin:\/usr\/bin:\/sbin:\/bin"
        ],
        "Cmd": [
            "\/bin\/sh",
            "-c",
            "\/bin\/sh"
        ],
        "Image": "9f4e93171ec525221fa9013d0e21f8690cef68590664eb5249e0b324c5faf31a",
        ...
        "OnBuild": []
    },
    "architecture": "amd64",
    "os": "linux",
    "Size": 0
}
You can also search for an image, but I do not get any results:
https://my-private-registry.com/v1/search?q=thomas
Result:
{"num_results": 0, "query": "thomas", "results": []}
Sonatype Nexus Repository Manager 3.0 includes a private registry for Docker.