SolrCloud setup on Docker with existing schema

I am using the Docker Compose cluster sample setup from docker-solr-examples.
Now I want to add my existing core definitions to the cluster. How do I deploy my existing core definitions and managed-schema.xml to ZooKeeper? I presume there is a way to put the files on one node and have them automatically replicate out to the other nodes.

Have you looked at this documentation: Using ZooKeeper to manage configuration?
From the Docker Compose file I understand that it creates a ZooKeeper ensemble of 3 nodes, and that ensemble manages 3 Solr nodes.
A Solr installation usually ships with a few scripts for this kind of maintenance. You may have to enter the CLI of one of the Solr nodes and use it to upload the configuration.
Docker only gives you the convenience of spinning up the infrastructure quickly; other maintenance tasks still require the Solr or ZooKeeper CLIs or APIs.
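As a rough sketch of that upload step (the container name solr1, the ZooKeeper host names, the configset name myconfig and the local path /opt/myconfig are assumptions, not taken from the compose file):
docker exec -it solr1 bash
# upload the configset (managed-schema.xml, solrconfig.xml, ...) to the ZooKeeper ensemble
bin/solr zk upconfig -n myconfig -d /opt/myconfig -z zk1:2181,zk2:2181,zk3:2181
# create a collection that uses the uploaded configset
bin/solr create_collection -c mycollection -n myconfig -shards 1 -replicationFactor 3
Once the configset is in ZooKeeper, every Solr node in the cluster reads it from there, so there is nothing to replicate manually.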

Pull the Solr Docker image and start a container:
sudo docker run -d -p 8983:8983 --name container_name solr solr-precreate users
Access the Docker container:
docker ps
docker exec -it container_name bash
Create a new core in Solr (run this after entering the container's bash):
bin/solr create -c core_name
Now you can load your data into the Solr core (see the example after these steps).
Delete a Solr core:
bin/solr delete -c core_name
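As an illustration of loading data (not part of the original answer; the bin/post tool ships with the Solr distribution and the paths here are just examples):
# index the example documents bundled with Solr into the core
bin/post -c core_name example/exampledocs/*.xml
# or index your own JSON/CSV/XML files
bin/post -c core_name /path/to/your/data.json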

Related

Running Docker Tomcat in Google Cloud Compute instance

I am trying a basic Docker test in a GCP compute instance. I pulled a Tomcat image from the official repo, then ran a command to start the container. The command is:
docker run -te --rm -d -p 80:8080 tomcat
It created a container for me with the ID below.
3f8ce49393c708f4be4d3d5c956436e000eee6ba7ba08cba48ddf37786104a37
If I do docker ps, I get the following:
3f8ce49393c708f4be4d3d5c956436e000eee6ba7ba08cba48ddf37786104a37
However, the Tomcat admin console does not open. The reason is that the Tomcat image is trying to create its config files under /usr/local, but that is a read-only file system, so the config files are not created.
Is there a way to ask Docker to create the files in a different location? Or is there any other way to handle it?
Thanks in advance.
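(Not from the original thread, but as a sketch of one common workaround: mount a writable named volume over the directory Tomcat needs to write to. The target path assumes the official image's CATALINA_HOME of /usr/local/tomcat, and it relies on /var/lib/docker on the host being writable.)
# create a named volume; its data lives under /var/lib/docker/volumes on the host
docker volume create tomcat_conf
# mount the volume over the config directory so Tomcat has somewhere writable;
# Docker pre-populates the empty named volume with the image's existing conf files
docker run --rm -d -p 80:8080 -v tomcat_conf:/usr/local/tomcat/conf tomcat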

How to collect all logs from a number of servers in a Docker swarm?

I have a number of Linux servers with Docker installed on them, all of them in a Docker swarm, and on each server I have a custom application. I also have an ELK setup in AWS.
I want to ship all the logs from my custom app to the ELK stack on AWS. I have successfully done that on one server with Filebeat by running the following commands:
1. docker pull docker.elastic.co/beats/filebeat-oss:7.3.0
2. Created a file /etc/filebeat/filebeat.yml with the following content:
filebeat.inputs:
- type: container
  paths:
    - '/usr/share/continer_logs/*/*.log'
  containers.ids:
    - '111111111111111111111111111111111111111111111111111111111111111111'
processors:
- add_docker_metadata: ~
output.elasticsearch:
  hosts: ["XX.YY.ZZ.TT"]
3. chown root:root filebeat.yml
4. sudo docker run -u root -v /etc/filebeat/filebeat.yml:/usr/share/filebeat/filebeat.yml -v /var/lib/docker/containers:/usr/share/continer_logs -v /var/run/docker.sock:/var/run/docker.sock docker.elastic.co/beats/filebeat-oss:7.3.0
Now I want to do the same on all of my Docker hosts (and there are a lot of them) in the swarm.
I run into a couple of problems:
1. How do I copy filebeat.yml to /etc/filebeat/filebeat.yml on every server?
2. How do I update the containers.ids on every server, and how do I update them when I upgrade the Docker image?
How do I copy "filebeat.yml" to /etc/filebeat/filebeat.yml on every server?
You need a configuration management tool for this. I prefer Ansible; you might want to take a look at others.
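(As an illustration only: a one-off Ansible ad-hoc copy, assuming your swarm hosts are in an inventory group named swarm and you can become root on them.)
ansible swarm -b -m copy -a "src=filebeat.yml dest=/etc/filebeat/filebeat.yml owner=root group=root mode=0644"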
How do I update the "containers.ids" on every server?
You don't have to. Docker manages it by itself if you use swarm mode. You're using docker run, which is meant for development and for deploying an application on a single machine. You need to look at Docker Stack to deploy an application across multiple servers.
How to update it when I upgrade the docker image?
docker stack deploy both deploys and updates services.
NOTE: Your image should be present on each node of the swarm in order to get its container deployed on that node.
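(A sketch of what that can look like without copying filebeat.yml to every host; the config and service names are assumptions. A Docker config is distributed by the swarm to every node that runs the service, and --mode global starts one Filebeat task per node.)
# store filebeat.yml in the swarm so every node can read it
docker config create filebeat_config /etc/filebeat/filebeat.yml
# run one Filebeat task on every node of the swarm
docker service create --name filebeat --mode global -u root \
  --config source=filebeat_config,target=/usr/share/filebeat/filebeat.yml \
  --mount type=bind,source=/var/lib/docker/containers,target=/usr/share/continer_logs \
  --mount type=bind,source=/var/run/docker.sock,target=/var/run/docker.sock \
  docker.elastic.co/beats/filebeat-oss:7.3.0
Since the config already uses the add_docker_metadata processor, each event carries the container's ID and name, so you should not need to hard-code containers.ids per host.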

How to use grafana-cli on a Docker-installed Grafana?

I have installed Grafana via Docker.
Is it possible to export and run grafana-cli on my host?
If you mean running Grafana with some plugins preinstalled, you can do it by passing a comma-separated list of plugin names in the GF_INSTALL_PLUGINS environment variable:
sudo docker run -d -p 3000:3000 -e "GF_INSTALL_PLUGINS=gridprotectionalliance-openhistorian-datasource,gridprotectionalliance-osisoftpi-datasource" grafana/grafana
I did this on Grafana 4.x
Installing plugins for Grafana 3 "or above"
For a fully automatic setup of your Grafana install with the plugins you want, I would follow Ricardo's suggestion. It's much better if you can configure your entire container as wanted in a single hit like that.
However, if you are just playing with the plugins and want to install some manually, you can access a shell on the running Docker instance from the host.
host:~$ docker exec -it grafana /bin/bash
... assuming you named the Docker container "grafana"; otherwise substitute your container name. The shell prompt that returns will allow you to run the standard grafana-cli commands, for example:
root@3e04b4578ebe:/# grafana-cli plugins install ....
Be warned that it may tell you to run service grafana-server restart afterwards. In my experience that didn't work (I'm not sure Grafana runs as a traditional service in the container). However, if you exit the container and restart the container from the host...
host:~$ docker restart grafana
That should restart the grafana service and your new plugins should be in place.
Grafana running in docker container
Docker installed on Windows 10
Test: command to display grafana-cli help
c:\>docker exec -it grafana grafana-cli --help
Tested with Grafana version 6.4.4 (November 6, 2019).
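A quick way to check which plugins actually ended up installed (again assuming the container is named grafana):
docker exec -it grafana grafana-cli plugins ls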

How to configure Jenkins in Docker?

I'm new to both Jenkins and Docker. I am currently working on a project where I allow users to submit jobs to Jenkins, and I was wondering if there is a way to use Docker to dynamically spin up a Jenkins server, redirect all the user jobs coming from my application to that server, and then destroy that Jenkins instance once the work is done. Is it possible? If yes, how? If no, why not? Also, I need to set up Maven for this Jenkins server; do I need another container for that?
You can try the following, but I can't guarantee it is that easy to move your Jenkins content from your dedicated server to your Docker container; I have not tried it before.
The main steps are:
Create a backup of the content of your dedicated Jenkins server using tar, for example as sketched below.
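(The source path is an assumption; on a default package install JENKINS_HOME is /var/lib/jenkins.)
$ sudo tar -cvpzf jenkins-backup.tar -C /var/lib/jenkins .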
Create a named docker volume:
$ docker volume create --name jenkins-volume
Extract your jenkins-backup.tar inside your docker volume. This
volume is in /var/lib/docker/volumes/jenkins-volume/_data
$ sudo tar -xvpzf jenkins-backup.tar -C /var/lib/docker/volumes/jenkins-volume/_data/
Now you can start your Jenkins container and tell it to use the jenkins-volume. Use a Jenkins image of the same version as the dedicated Jenkins.
$ docker run -d -u jenkins --name jenkins -p 50000:50000 -p 443:8443 -v jenkins-volume:/var/jenkins_home --restart=always jenkins:your-version
For me this worked to move the content from our Jenkins container on AWS to a Jenkins container on another cloud provider. I did not try it with a dedicated server, but you can't break anything in your existing Jenkins by trying it.

How to pull a Neo4j database into the Mazerunner Docker container

I am using the Mazerunner Docker image provided by Kenny Bastani to integrate Neo4j and Spark GraphX. I am able to process the Movie graph that is provided. Now I want to pull my own Twitter graph into the Mazerunner Docker setup. Can anyone tell me how to pull a new graph into Mazerunner? Thanks in advance.
There are a few ways to do this. Normally you would be able to mount a volume from your Docker host as the data directory for Neo4j.
Unfortunately there is a defect in Neo4j 2.2 that prevents this. You can find more details here: https://github.com/kbastani/docker-neo4j/issues/4
In order to work around this issue you can copy your graph.db directory from your host machine to the docker-neo4j container.
Run the steps below from the terminal. After you've started the HDFS and Mazerunner containers, start the docker-neo4j container, replacing the user-specific path (i.e. /Users/User/neo4j-community-2.2.1/data) with your own. After the container starts, you will have root access inside the container via the shell. Run the next two commands to copy your mounted host volume's graph.db directory (the database you are importing) to the container's local volume.
[ ~ ]$ docker run -ti -p 7474:7474 -v /Users/User/neo4j-community-2.2.1/data:/opt/data-copy --name graphdb --link mazerunner:mazerunner --link hdfs:hdfs kbastani/docker-neo4j /bin/bash
[ root@f4800317575d:/var/lib/neo4j ]$ cp -r /opt/data-copy/graph.db /opt/data
[ root@f4800317575d:/var/lib/neo4j ]$ bin/neo4j start
Keep in mind that you'll need to copy the container's /opt/data directory back to the host system in order to make sure it is safe.
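A minimal sketch of that copy-back step (assuming the container is still named graphdb and the destination path is just an example):
# copy the database out of the container to a safe location on the host
docker cp graphdb:/opt/data/graph.db /Users/User/neo4j-backup/graph.db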
Hopefully the Neo4j 2.2 issue will be resolved soon.
