I'm trying to set up Elastic Enterprise Search and App Search using Docker. So far I've managed to install Elasticsearch and Kibana using Docker on CentOS 7. Right now, I want to establish a connection with GitHub, for which I'll need Enterprise Search. I opened the page, but it prompts me to "Add your Workplace Search host URL to your Kibana configuration - enterpriseSearch.host: 'http://localhost:3002'"
I didn't quite understand how to do that, and I'm stuck. Can anyone please provide some step-by-step instructions?
As per the Elastic documentation:
enterpriseSearch.host | The URL of your Enterprise Search instance
I am looking at this step as well. To configure the Kibana Docker container, you can either pass environment variables to the container as you run it (usually by making use of a docker-compose.yml file), or you can pass it a kibana.yml file on the command line.
Reference:
https://www.elastic.co/guide/en/kibana/current/docker.html#configuring-kibana-docker
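As a minimal sketch of the environment-variable approach (the image tag and service name below are illustrative, not from your setup): the Kibana image translates a setting like enterpriseSearch.host into an all-caps environment variable with dots replaced by underscores, i.e. ENTERPRISESEARCH_HOST.

# docker-compose.yml (sketch) - point Kibana at the Enterprise Search container
kibana:
  image: docker.elastic.co/kibana/kibana:7.10.0
  environment:
    ENTERPRISESEARCH_HOST: "http://enterprise-search:3002"
  ports:
    - "5601:5601"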
It's worth noting that if you are running Elasticsearch on Docker following the same instructions as me, you have not opened up port 3002 when launching it; that may need to be fixed by changing the run command to include -p 3002:3002.
I'm using confluentinc/cp-server-connect (https://hub.docker.com/r/confluentinc/cp-server-connect) with the Elasticsearch sink connector (https://www.confluent.io/hub/confluentinc/kafka-connect-elasticsearch), which I added through the Dockerfile before rebuilding the image, and it works just fine. I'm configuring the connector using HTTP requests, as it's done in this tutorial: https://www.confluent.io/blog/kafka-elasticsearch-connector-tutorial/.
My problem is that I couldn't find a way to keep the connector configuration I set when stopping and removing the Docker container running this image.
I couldn't find any mention of keeping configuration in the image's documentation on Docker Hub or by googling it. I also tried manually searching the image for where this configuration might be stored, but I had no luck. Where should I point a Docker volume to save this configuration, or is the configuration kept somewhere else, like in a specific topic in Kafka?
Yes, the configurations are kept in a Kafka topic. The Connect container doesn't store them.
Therefore, don't remove the Kafka (or ZooKeeper) container(s) or their data, and your configs will be maintained.
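For reference, a Connect worker in distributed mode stores its state in three Kafka topics whose names come from the worker config; with the Confluent images these are set through environment variables. A sketch (the topic names here are illustrative):

# docker-compose.yml excerpt (sketch) for the Connect worker
connect:
  image: confluentinc/cp-server-connect:7.3.0
  environment:
    CONNECT_CONFIG_STORAGE_TOPIC: _connect-configs   # connector configurations live here
    CONNECT_OFFSET_STORAGE_TOPIC: _connect-offsets   # source connector offsets
    CONNECT_STATUS_STORAGE_TOPIC: _connect-status    # connector/task status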
I've recently joined a new company which already has a version of Google Tag Manager server-side up and running. I am new to Google Cloud Platform (GCP), and I have not been able to find the supposed Docker image in the image repository for our account. Or, at least, I am trying to figure out how to check if there is one, and how to correlate its digest to the deployed image located at gcr.io/cloud-tagging-10302018/gtm-cloud-image.
I've tried deploying it in my own cloud account, both automatically provisioned and via the manual steps, and got it working. But I can't for the life of me figure out how to check which version we have deployed at our company, as it is already live.
I suspect it is a fairly old version (unless it auto-updates?), seeing as the GTM server-side Docker repository has had frequent updates.
Being new to container imaging with Docker, I figured I could use Cloud Shell to check it that way, but it seems that when setting up the specific App Engine instance with the shell script provided (located here), it doesn't really "load" a Docker image the way it would if you'd deployed it yourself. At least I don't think so, because I can't find any info using docker commands in the Cloud Shell of said GCP project running the flexible App Engine environment.
Any guidance on how to find out which version of GTM server-side is running in our Appengine instance?
The way to check what Docker images your App Engine Flex instance uses is to SSH into the instance. You can do this by going to the Instances tab, choosing the correct service and version, and clicking the SSH button, or by using this gcloud command in your terminal or Cloud Shell:
gcloud app instances ssh "INSTANCE_ID" --service "SERVICE_NAME" --version "VERSION_ID" --project "PROJECT_ID"
Once you have successfully connected to your instance, run the docker images command to list your Docker images.
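For example, something like the following should surface the digests, which you can then compare against the tags published under gcr.io/cloud-tagging-10302018/gtm-cloud-image (CONTAINER_ID below is a placeholder):

# List local images together with their content digests
docker images --digests
# Find the running container, then check exactly which image it was started from
docker ps
docker inspect --format '{{.Image}}' CONTAINER_ID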
I followed the instructions in the documentation to download this preset they created for easily running Apache OpenWhisk for development purposes on Docker Compose.
I use make run, which works fine. Then make hello-world runs the example action just as well.
I read the .wskprops file and saw that it's running on port 9090 and the auth value is 23bc46b1-...:123zO3.... So I use wsk property set --apihost localhost:9090 --auth 23bc46....
But if I try using wsk action create someAction main.js to create my own action it returns Unable to create action 'someAction': Put "https://localhost:31001/api/v1/namespaces/_/actions/test?overwrite=false": dial tcp [::1]:31001: connect: connection refused.
These are the steps the Makefile appears to follow.
I'm not sure if perhaps I'm missing a step. How do I link running it and using it? The documentation doesn't seem to specify this. My knowledge of Docker Compose is naught, but I need to get this running in the time I have available, so I hoped this would be a simple solution. I've been stuck trying to run OpenWhisk on my local computer for a week, so any help would be massively appreciated!
I figured it out! I got into the Makefile and printed WSK_CLI to discover that it was using docker-compose/openwhisk-src/bin/wsk instead of my own installation of wsk.
So essentially, after running make run, I can create an action using ./openwhisk-src/bin/wsk action -i create <action_name> <action.js>. Note that the -i flag (which bypasses certificate checking) is needed because of the security simplifications of running it locally for development purposes.
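For completeness, creating and then invoking an action with the bundled CLI looks like this (someAction/main.js are just the names from my example):

# Create and invoke an action with the bundled CLI; -i skips certificate checks
./openwhisk-src/bin/wsk -i action create someAction main.js
./openwhisk-src/bin/wsk -i action invoke someAction --result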
I have a hunch that Docker could greatly improve my web-dev workflow - but I haven't quite managed to wrap my head around how to approach a project when adding Docker to the stack.
The basic software stack would look like this:
Software
Docker image(s) providing custom LAMP stack
Apache with several modules
MySQL
PHP
Some CMS, e.g. Silverstripe
Git
Workflow
I could imagine the workflow to look somewhat like the following:
Development
Write a Dockerfile that defines a LAMP-container meeting the requirements stated above
REQ: The machine should start Apache/MySQL right after booting
Build the docker image
Copy the files required to run the CMS into e.g. ~/dev/cmsdir
Put ~/dev/cmsdir/ under version control
Run the Docker container, and somehow mount ~/dev/cmsdir to /var/www/ in the container (see the sketch after this list)
Populate the database
Do work in ~/dev/cmsdir/
Commit & shut down docker container
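Concretely, I imagine the run step looking something like this (just a sketch - the image name is made up):

# Run the LAMP image with the CMS working copy mounted into the web root
docker run -d -p 80:80 -v ~/dev/cmsdir:/var/www my-lamp-image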
Deployment
Set up remote host (e.g. with ansible)
Push container image to remote host
Fetch cmsdir-project via git
Run the Docker container, pull in the database, and mount cmsdir into /var/www
Now, this looks all quite nice on paper, BUT I am not quite sure whether this would be the right approach at all.
Questions:
While developing locally, how would I get the database to persist between reboots of the container instance? Or would I need to run a SQL dump every time before spinning down the container?
Should I have separate container instances for the DB and the Apache server? Or would it be sufficient to have a single container for the above use case?
If using separate containers for database and server, how could I automate spinning them up and down at the same time?
How would I actually mount ~/dev/cmsdir/ into the container's /var/www/ directory? Should I utilize data volumes for this?
Did I miss any pitfalls? Anything that could be simplified?
If you need database persistence independent of your CMS container, you can use one container for MySQL and one container for your CMS. In that case, you can keep your MySQL container running and redeploy your CMS as often as you want, independently.
For development, another option is to map the MySQL data directories from your host/development machine using data volumes. This way you can manage the data files for MySQL (in Docker) using Git (on the host) and "reload" the initial state any time you want (before starting the MySQL container).
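A sketch of that mapping (the host path and image name are illustrative; /var/lib/mysql is MySQL's default data directory):

# Map a host directory over MySQL's data directory so the data lives on the host
docker run -d -v /home/user/dev/mysql-data:/var/lib/mysql my-mysql-image /usr/sbin/mysqld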
Yes, I think you should have a separate container for db.
I am just using a basic script:
#!/bin/bash
# Start both containers detached (-d added so docker run returns immediately);
# $(...) captures each container's ID
JOB1=$(docker run -d ... /usr/sbin/mysqld)
JOB2=$(docker run -d ... /usr/sbin/apache2)
echo "MySQL=$JOB1, Apache=$JOB2"
Yes, you can use data volumes (the -v switch). I would use this for development. You can use read-only mounting, so no changes will be made to this directory if you don't want them (your app should store its data somewhere else anyway).
docker run -v /home/user/dev/cmsdir:/var/www/cmsdir:ro image /usr/sbin/apache2
Anyway, for the final deployment, I would build an image using a Dockerfile with ADD cmsdir /var/www/cmsdir (ADD sources are resolved relative to the build context, so the Dockerfile would sit next to cmsdir).
I don't know :-)
You want to use docker-compose. Follow the tutorial here. Very simple. Seems to tick all your boxes.
https://docs.docker.com/compose/
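For illustration, a minimal compose file for the stack described above might look like this (image tags, credentials, and paths are all placeholders):

# docker-compose.yml (sketch)
version: "2"
services:
  web:
    image: php:7.4-apache            # Apache + mod_php in one image
    ports:
      - "80:80"
    volumes:
      - ~/dev/cmsdir:/var/www/html   # mount the CMS working copy
    depends_on:
      - db
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: secret    # placeholder credentials
      MYSQL_DATABASE: cms
    volumes:
      - dbdata:/var/lib/mysql        # named volume keeps the DB between runs
volumes:
  dbdata: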
I understand this post is over a year old at this time, but I have recently asked myself very similar questions and have several great answers to your questions.
You can set up a MySQL Docker instance and have the data persist in a stateless data container, i.e. the data container does not need to be actively running (see the sketch after this list).
Yes, I would recommend having separate instances for your web server and database. This is the power of Docker.
Check out this repo I have been building. Basically it is as simple as make build & make run and you can have a web server and database container running locally.
You use the -v argument when running the container for the first time; this will link a specific folder on the host into the container.
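Here's a sketch of the data-container pattern from the first point (container and image names are made up):

# Create a stateless data container that owns the MySQL data volume
docker create -v /var/lib/mysql --name mysql-data busybox
# Run MySQL with that volume attached; the data survives replacing the mysql container
docker run -d --volumes-from mysql-data --name mysql my-mysql-image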
I think your ideas are great and it is currently possible to achieve all that you are asking.
Here is a turnkey solution achieving all of the needs you have listed.
I've put together an easy-to-use Docker Compose setup that should match your development workflow requirements.
https://github.com/ehyland/docker-silverstripe-dev
Main Features
Persistent DB
Your choice of HHVM + NGINX or Apache2 + PHP5
Debug and set breakpoints with Xdebug
The README.md should be clear enough to get you started.
I want to create a social network with the Django framework on OpenShift, so I need at least a graph DB (like Neo4j) and a relational DB (like MySQL). I had trouble adding Neo4j to my project because OpenShift has no cartridge for it, so I decided to install it via DIY, but I don't understand the function of the start and stop files in .openshift/action_hooks. I then took the following steps to install Neo4j on the server:
1. SSH to my account:
ssh 1238716...@something-prolife.rhcloud.com
2. Go to a folder I have permission to write in (I went to app-root/repo/ and ran mkdir test in it), download the Neo4j package from here, and extract it into the test folder I created before:
tar -xvzf neo4j-community-1.9.4-unix.tar.gz
3. Finally, run the neo4j script to start it:
neo4j-community-1.9.4/bin/neo4j start
but I see these logs and can't run the neo4j:
process [3898]... waiting for server to be ready............ Failed to start within 120 seconds.
Neo4j Server may have failed to start, please check the logs.
How can I run this database on OpenShift? Where am I wrong? And where are the logs referred to by "please check the logs"?
I've developed an OpenShift cartridge that fixes the permission issue on OpenShift. I had to change the classes HostBoundSocketFactory and SimpleAppServer in Neo4j to bind to an available OpenShift port instead of port 0.
You can check at: https://github.com/danielnatali/openshift-neo4j-cartridge
It's working for me.
I would also not place it in app-root/repo; instead, I would put it in app-root/data.
You also need to use the IP of the gear - I think the env variable is something like OPENSHIFT_INTERNAL_IP. 127.0.0.1 is not available for binding, but I think the ports should be open.
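For example, with the standalone server you could point its web server at that IP in conf/neo4j-server.properties (a sketch; the property names are from the Neo4j 1.9 server config):

# Bind the standalone server to the gear's IP instead of the default
sed -i "s|^#*org.neo4j.server.webserver.address=.*|org.neo4j.server.webserver.address=$OPENSHIFT_INTERNAL_IP|" \
    neo4j-community-1.9.4/conf/neo4j-server.properties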
There are two ways Neo4j can run: embedded or standalone (exposed via a REST service).
Standalone is what you are trying to do. I think the right way to set up Neo4j would be to write a cartridge for OpenShift and then add the cartridge to your gear. There has been some discussion about this, but it seems that nobody has taken the time to do it. Check https://www.openshift.com/forums/openshift/neo4j-cartridge. If you decide to write your own cartridge, I might assist. Here are the docs: https://www.openshift.com/developers/download-cartridges.
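If you get the standalone server bound correctly, you can sanity-check it through its REST interface (7474 is Neo4j's default web server port; adjust to whatever your gear allows):

# Hit the Neo4j 1.x REST service root to confirm the server is reachable
curl http://$OPENSHIFT_INTERNAL_IP:7474/db/data/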
The other option is running in embedded mode, which I have used. You need to set up a Java EE application (because the Neo4j embedded-mode libraries are available only for Java) and put the Neo4j libraries in your project. Then you would expose some routes, check for parameters, and run your Neo4j queries inside the servlets.