How to enable broker security in confluent-kafka (Docker)?

I am trying to enable security in Kafka.
With plain Apache Kafka it worked fine, but now we are using the Confluent Platform Docker image to get all the Confluent services.
Here I don't know how to enable Kafka SSL security.
I checked /etc/kafka/ in the broker container, but I couldn't tell which file the properties need to be changed in, because there are two files: 1) kafka.properties and 2) server.properties.
I'm quite confused; can anyone share a suggestion on this?

What Docker image do you use? Please make sure that you are pulling it from hub.docker.com/r/confluentinc/cp-kafka.
Usually the configuration file for Apache Kafka brokers is server.properties, but you do not need to inject the whole config file. You can configure the broker with environment variables passed to the container; see cp-demo/blob/6.1.0-post/docker-compose.yml as an example.
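For instance, here is a minimal docker-compose sketch of an SSL listener on cp-kafka. The keystore/truststore file names, credential file names, and host names are illustrative assumptions; the certificates must be generated beforehand and mounted under /etc/kafka/secrets, and the zookeeper service is omitted for brevity:

# docker-compose.yml (sketch); every broker property foo.bar becomes env var KAFKA_FOO_BAR
services:
  broker:
    image: confluentinc/cp-kafka:6.1.0
    ports:
      - "9093:9093"
    volumes:
      - ./secrets:/etc/kafka/secrets            # keystores and credential files must live here
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181   # assumes a zookeeper service in the same file
      KAFKA_LISTENERS: SSL://0.0.0.0:9093
      KAFKA_ADVERTISED_LISTENERS: SSL://broker:9093
      KAFKA_SECURITY_INTER_BROKER_PROTOCOL: SSL
      KAFKA_SSL_KEYSTORE_FILENAME: kafka.broker.keystore.jks
      KAFKA_SSL_KEYSTORE_CREDENTIALS: broker_keystore_creds      # file holding the password
      KAFKA_SSL_KEY_CREDENTIALS: broker_sslkey_creds
      KAFKA_SSL_TRUSTSTORE_FILENAME: kafka.broker.truststore.jks
      KAFKA_SSL_TRUSTSTORE_CREDENTIALS: broker_truststore_creds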

Related

How can I keep a connector's configuration between restarts when using the confluentinc/cp-server-connect Docker image extended with a new connector

I'm using confluentinc/cp-server-connect (https://hub.docker.com/r/confluentinc/cp-server-connect) with the Elasticsearch sink connector (https://www.confluent.io/hub/confluentinc/kafka-connect-elasticsearch), which I added through a Dockerfile and a rebuild of the image, and it works just fine. I'm configuring the connector with HTTP requests, as done in this tutorial: https://www.confluent.io/blog/kafka-elasticsearch-connector-tutorial/.
My problem is that I couldn't find a way to keep the connector configuration I set when the Docker container running this image is removed and started again.
I couldn't find any mention of keeping configuration in the image's documentation on Docker Hub or by googling it. I also tried manually searching the image for where this configuration might be stored, but I had no luck. Where should I point a Docker volume to save this configuration, or is the configuration kept somewhere else, like in a specific topic in Kafka?
Yes, the configurations are kept in Kafka topics; the Connect container doesn't store them itself.
Therefore, as long as you don't remove the Kafka (or Zookeeper) container(s) and their data, your configs will be maintained.
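The topics in question are named by the worker's storage settings. A minimal sketch of the relevant part of a Connect service definition, with illustrative topic names and a single-broker replication factor:

# docker-compose.yml (sketch); connector configs submitted over REST are persisted in the config storage topic
services:
  connect:
    image: confluentinc/cp-server-connect:6.1.0
    environment:
      CONNECT_BOOTSTRAP_SERVERS: broker:9092
      CONNECT_GROUP_ID: connect-cluster
      CONNECT_CONFIG_STORAGE_TOPIC: connect-configs    # connector configs live here
      CONNECT_OFFSET_STORAGE_TOPIC: connect-offsets
      CONNECT_STATUS_STORAGE_TOPIC: connect-status
      CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_KEY_CONVERTER: org.apache.kafka.connect.json.JsonConverter
      CONNECT_VALUE_CONVERTER: org.apache.kafka.connect.json.JsonConverter
      CONNECT_REST_ADVERTISED_HOST_NAME: connect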

Dynamically set proxy for docker pull

I'm trying to pull an image on machines that sit behind multiple possible proxies.
Which proxy is the right one depends on which zone the machine doing the docker pull is in.
For the record, adding the single relevant proxy in /etc/systemd/system/docker.service.d/http-proxy.conf on the machine pulling the image works fine.
But the image is supposed to be downloaded in multiple zones, which require different proxies depending on where the machine is.
I tried two things:
Passed the list of proxies in the http-proxy.conf, like this:
[Service]
Environment="HTTP_PROXY=http://proxy_1:port/,http://proxy_2:port/"
Environment="HTTPS_PROXY=http://proxy_1:port/,http://proxy_2:port/"
Environment="NO_PROXY=localhost"
Some machines require http://proxy_1:port/, and those work fine.
But on a machine that requires http://proxy_2:port/ to pull, it does not work; Docker does not fall back to trying another proxy. It returns this error:
Error response from daemon: Get HTTP:<ip>:<proxy_1> proxyconnect tcp: dial tcp <ip>:<proxy_1>: connect: no route to host
Of course, if I provide only the second, working proxy in the configuration, it works.
Passing the proxy as a parameter to docker pull, as with docker build/run, but that is not supported per the documentation.
I am looking for a way to set up proxies such that either
Docker falls back to trying the other provided proxies,
OR
I can provide the proxy dynamically at pull time (this will be part of an automated process that determines the relevant proxy to pass).
I do not want to constantly change the http-proxy file and restart Docker, for obvious reasons.
What are my options?
If you're using a sufficiently recent Docker (i.e. 17.07 and higher), you can put this configuration on the client side. Refer to the official documentation for the configuration details.
You still need multiple configuration files for the various proxy configurations, but you can switch between them without restarting the Docker daemon.
To do something similar (not exactly related to proxies), I use a shell script that wraps the invocation of the docker client, pointing it to a custom configuration file via the --config option.
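A minimal sketch of that wrapper approach, assuming one client config directory per zone (directory names, proxy hosts, and the zone argument are all illustrative):

# ~/.docker-zone1/config.json (same shape for ~/.docker-zone2 with proxy_2)
{
  "proxies": {
    "default": {
      "httpProxy": "http://proxy_1:port/",
      "httpsProxy": "http://proxy_1:port/",
      "noProxy": "localhost"
    }
  }
}

#!/bin/sh
# pull.sh ZONE IMAGE -- select the per-zone client config, then pull
zone="$1"; image="$2"
exec docker --config "$HOME/.docker-$zone" pull "$image"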

How to connect and encrypt traffic between Dockers running on different servers?

I currently have six Docker containers that were brought up by a docker-compose file. Now I want to move some of them to a remote machine and enable remote communication between them.
The problem is that I also need to add a layer of security by encrypting their traffic.
This is for a production website and needs to be very stable, so I am unsure which protocols/approaches would be better for this scenario.
I have used port forwarding over SSH and know I could add some resilience through autossh. But I am unsure whether other approaches could achieve the same goal while also accounting for stability and performance.
What protocols/approaches could help here? How do they differ?
I would not recommend manually configuring connections between Docker containers across physical servers, because Docker already contains a solution for that: Docker Swarm. Follow this documentation to configure your containers to use a Docker swarm. I've done it and it's very cool!
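For the encryption part specifically, Swarm overlay networks can be created with IPsec encryption of the data traffic between nodes. A minimal sketch (addresses, token, and names are illustrative):

# on the manager machine
docker swarm init --advertise-addr 10.0.0.1
# on the remote machine, using the token printed by swarm init
docker swarm join --token <worker-token> 10.0.0.1:2377
# back on the manager: create an encrypted overlay network, then attach a service to it
docker network create --driver overlay --opt encrypted secure-net
docker service create --name web --network secure-net nginx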

Docker Daemon per user on host

I have one odd thing to configure: can I have a Docker daemon per user on a host? I want to isolate the processes so that each individual user has his own Docker daemon, where he can run his own services/images/containers and test them. Basically I need this for a testing environment where each user has his own set of services.
I saw that there is something called a Docker bridge, but I am not sure if I can extend it. Can someone please suggest something?
Edit 1: Can I use docker-machine for this? I am not finding a way to configure it.
I achieved this with my own solution. Basically this is easily achievable with custom Docker daemon configurations.
This link has all the details: dockerd.
And this one covers securing the TCP socket between client and engine: secure docker connection.
However, running multiple daemons is still an experimental feature, since global configuration such as iptables is part of it. For my case I did not need that, so I disabled it.
Note: this is adapted to my use case. If you are in a similar scenario with extra configuration needs, I recommend reading the Docker documentation, and also a Stack Overflow question if that does not quench the thirst.
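As a sketch, a second per-user daemon mostly needs its own socket, data root, exec root, and pidfile (the user name and paths below are illustrative, and iptables is disabled as described above):

# start an isolated daemon for user1 (run as root)
dockerd \
  --host unix:///var/run/docker-user1.sock \
  --data-root /var/lib/docker-user1 \
  --exec-root /var/run/docker-user1 \
  --pidfile /var/run/docker-user1.pid \
  --iptables=false

# user1 then targets his own daemon explicitly
docker -H unix:///var/run/docker-user1.sock run hello-world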

HaProxy for service discovery on a marathon mesos docker linked containers

This is not asked anywhere else that I have checked. Here is what I have done: I am able to deploy single instances of Mesos, Marathon and Docker. As the next step, I want two Mesos slaves (Docker containers) linked to each other. With Docker alone, the same can be achieved with the Docker link feature, but when using orchestration (Mesos) and a scheduler (Marathon), it seems you need to use service discovery.
My setup is simple and running on a single host. I will have two Docker containers, one running a simple pub/sub and one running RabbitMQ. How can I use HAProxy in this setup? I have seen some documents provided by Mesosphere
(http://mesosphere.com/docs/getting-started/service-discovery/), but it is not clear how to go about it.
The canonical approach for service discovery with Mesos + Marathon + Docker is currently what is described in the document you linked.
I'm assuming you're able to get the two applications running in Marathon already.
Typically what happens is:
1) Configure your application definition to include the ports that your application requires.
2) Set up the provided haproxy-marathon-bridge script to run periodically using a utility like cron. This script scrapes Marathon's API to figure out which host and port the application instances are running on and what the known "friendly" port is.
In the example in the service discovery article, the first application has friendly ports of 80 and 443, whilst the second has a friendly port of 8081.
The script then generates a haproxy.cfg configuration with rules mapping localhost:friendly_port to actual_host:actual_port, as sketched below.
3) Configure your applications to look for each other on localhost:friendly_port. HAProxy will route the connections appropriately.
Hope this helps your understanding!
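For illustration, the generated haproxy.cfg contains blocks of roughly this shape (task hosts and ports are made up here; the real values come from Marathon's API):

listen app-8081
  bind 127.0.0.1:8081                   # the friendly port
  mode tcp
  option tcplog
  server task-1 10.0.1.5:31042 check    # actual host:port of a running instance
  server task-2 10.0.1.6:31317 check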
I created a HAProxy service discovery Docker container that you can run in Mesos. It's not production-ready, but I am using it in my development environment to do exactly what you're trying to do. The reason I prefer this over what comes with Marathon is that I haven't found a good way to do complicated HAProxy configurations with haproxy-marathon-bridge. With Spiderweb you can create a template for the HAProxy configuration, which lets you do things such as ACL routing. It doesn't support health checks yet, which will need to be done before it's production-ready. You can see the project here: https://github.com/SBRDevelopment/spiderweb.
We have combined Mesos and Marathon with Consul and Registrator,
so in the end you have the HAProxy configuration auto-generated with consul-template.
Try https://github.com/eBayClassifiedsGroup/PanteraS,
all in one container.
With Mesos-DNS you can also do the following:
Set up mesos-dns as in this guide: http://programmableinfrastructure.com/guides/service-discovery/mesos-dns-haproxy-marathon/ (you can skip the HAProxy steps; they are not required).
When you start your Docker containers, make sure that they have "nameserver %slave_ip_with_mesos_dns%" (replace the placeholder with the IP address) in their /etc/resolv.conf files.
If, say, an app is named "peek", it should then be reachable from other applications at peek.marathon.mesos.
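A quick sketch of wiring that up from the Docker side (the DNS server IP is illustrative):

# point the container's /etc/resolv.conf at the slave running mesos-dns
docker run --dns 10.0.2.15 my-app
# inside the container, another Marathon app named "peek" then resolves:
#   ping peek.marathon.mesos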
