Spring Cloud Data Flow for Kubernetes - Could not configure multiple kafka brokers - spring-cloud-dataflow

I'm trying to migrate my SCDF local server deployments to the k8s-based solution, but I've run into problems with the server configuration of the Kafka broker list for the apps.
I followed the instructions here: https://docs.spring.io/spring-cloud-dataflow-server-kubernetes/docs/1.7.2.RELEASE/reference/htmlsingle
and downloaded the sample configuration from https://github.com/spring-cloud/spring-cloud-dataflow-server-kubernetes at branch v1.7.2.RELEASE.
Because we've already deployed a Kafka cluster, I'd like to configure the broker and ZooKeeper nodes in the server-config-kafka.yaml file so that we can use the same Kafka cluster.
I configured my environmentVariables like this:
deployer:
  kubernetes:
    environmentVariables: >
      SPRING_CLOUD_STREAM_KAFKA_BINDER_BROKERS='172.16.3.192:9092,172.16.3.193:9092,172.16.3.194:9092',
      SPRING_CLOUD_STREAM_KAFKA_BINDER_ZK_NODES='172.16.3.192:2181,172.16.3.193:2181,172.16.3.194:2181'
but got an error when trying to deploy my SCDF stream:
Invalid environment variable declared: 172.16.3.193:9092
How should I configure it to make it work?
Thanks in advance.

Remove the > in your YAML.
That's creating a block string, not a map of environment variables (see "In YAML, how do I break a string over multiple lines?" for what the > folded-block indicator does).
Also, if you are using CoreDNS in Kubernetes, you should probably use something like kafka.default.cluster.local for the value rather than IP addresses, and similarly for ZooKeeper.
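For reference, a minimal sketch of what the corrected deployer property could look like with the block indicator removed, keeping the comma-containing values wrapped in single quotes (the addresses are the ones from the question; the exact quoting may need adjusting for your SCDF version):

deployer:
  kubernetes:
    environmentVariables: "SPRING_CLOUD_STREAM_KAFKA_BINDER_BROKERS='172.16.3.192:9092,172.16.3.193:9092,172.16.3.194:9092',SPRING_CLOUD_STREAM_KAFKA_BINDER_ZK_NODES='172.16.3.192:2181,172.16.3.193:2181,172.16.3.194:2181'"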

Related

Run Ambassador in local dev environment without Kubernetes

I am trying to run the Ambassador API gateway in my local dev environment so I can simulate what I'll end up with in production; the difference is that in prod my solution will be running in Kubernetes. To do so, I'm installing Ambassador into Docker Desktop and adding the required configuration to route requests to my microservices. Unfortunately, it did not work for me and I'm getting the error below:
upstream connect error or disconnect/reset before headers. reset reason: connection failure
I assume that's due to an issue in the mapping file, which is as follows:
apiVersion: ambassador/v2
kind: Mapping
name: institutions_mapping
prefix: /ins/
service: localhost:44332
So what I'm basically trying to do is rewrite all requests coming to http://{ambassador_url}/ins to a service running locally in IIS Express (through Visual Studio) on port 44332.
What am I missing?
I think you may be better off using another one of Ambassador Labs' tools, called Telepresence.
https://www.telepresence.io/
With Telepresence you can take the service you have running on localhost and project it into your cluster to see how it performs. This way you don't need to spin up a local cluster, and you get real-time feedback on how your service operates with the other services in the cluster.
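As a rough sketch, assuming Telepresence 2, a workload in the cluster named institutions, and a Service port named http (those names are illustrative, not from the question), you would connect to the cluster and then intercept the service's traffic so it is handled by the copy on your machine:

# connect this machine to the cluster network
telepresence connect
# send the cluster traffic for the workload to the local process listening on port 44332
telepresence intercept institutions --port 44332:http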

Neo4j setup in OpenShift

I am having difficulties deploying the official Neo4j Docker image (https://hub.docker.com/_/neo4j) to an OpenShift environment and accessing it from outside (from my local machine).
I have performed the following steps:
oc new-app neo4j
Created route for port 7474
Set up the environment variable NEO4J_dbms_connector_bolt_listen__address to 0.0.0.0:7687, which is the equivalent of setting dbms.connector.bolt.listen_address=0.0.0.0:7687 in the neo4j.conf file.
Accessed the route URL from my local machine, which opens the Neo4j browser and asks for authentication. At this point I am blocked, because any combination of URLs I try is unsuccessful.
As a workaround I have managed to forward port 7687 to my local machine, install the Neo4j Desktop solution and connect via bolt://localhost:7687, but this is not the ideal solution.
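For illustration, that forwarding can be done with a plain oc port-forward against the Neo4j pod (the pod name below is hypothetical):

# forward the Bolt port from the pod to the local machine
oc port-forward neo4j-1-abcde 7687:7687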
Therefore there are two questions:
1. How can I connect from the Neo4j browser to its own database?
2. How can I connect from an external environment (through the OpenShift route) to the Neo4j DB?
I have no experience with OpenShift, but try adding the following config:
dbms.default_listen_address=0.0.0.0
Is there any other way for you to connect to Neo4j, so that you could further inspect the issue?
Short answer:
Connecting to the DB is most likely a configuration issue; Tomaž Bratanič's answer may be the solution. As for accessing the DB from outside, you will most likely need a NodePort.
Long answer:
Note that OpenShift Routes are for HTTP / HTTPS traffic and not for any other kind of traffic. Typically, the "Routers" of an OpenShift cluster listen only on Port 80 and 443, so connecting to your database on any other port will most likely not work (although this heavily depends on your cluster configuration).
The solution for non-HTTP(S) traffic is to use NodePorts as described in the OpenShift documentation: https://docs.openshift.com/container-platform/3.11/dev_guide/expose_service/expose_internal_ip_nodeport.html
Note that even with NodePorts, you might need to have your cluster administrator add additional ports to the load balancer, or you might need to connect to the OpenShift nodes directly. Refer to the documentation on how to use NodePorts.
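A minimal sketch of such a NodePort Service for the Bolt port, assuming the Neo4j pods carry the label app: neo4j (the label and names are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: neo4j-bolt-nodeport
spec:
  type: NodePort
  selector:
    app: neo4j
  ports:
    - name: bolt
      port: 7687
      targetPort: 7687
      nodePort: 30687  # must fall in the cluster's NodePort range, usually 30000-32767

You would then connect with bolt://<any-node-ip>:30687 from outside the cluster.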

kubernetes - GCP - unable to connect to https://collector.newrelic.com

I am writing a small Go application where I am using the New Relic agent to collect logs, performance data, etc. But whenever I run it, the agent in my microservice is not able to connect to the New Relic server.
command":"preconnect","error":"Post https://collector.newrelic.com
I have been told to add a proxy server in my Go agent initialisation.
proxyURL, _ := url.Parse("http://myproy.mycompany.com:3128")
app, err := newrelic.NewApplication(
    newrelic.ConfigAppName(name),
    newrelic.ConfigLicense("**************"),
    newrelic.ConfigDebugLogger(os.Stdout),
    newrelic.ConfigDistributedTracerEnabled(true),
    func(cfg *newrelic.Config) {
        // Set specific Config fields inside a custom ConfigOption.
        cfg.Transport = &http.Transport{Proxy: http.ProxyURL(proxyURL)}
    },
)
How can I find the proxy URL in my cloud environment?
I am dockerising my application and using a Kubernetes deployment in a GCP environment.
If I understood your question properly, you run your application either directly on Compute Engine servers or on GKE, but you can't find the proxy address to use in your code.
By default your instances are not behind any proxy, unless you configured one manually and applied it on the servers. It depends on the system, but on Linux you can check the proxy settings with:
env | grep -i proxy
So, if you or someone on the team set up a proxy server, you should know its address. If not, there is nothing to use there: you don't have a proxy, and the issue is with something else.
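If a proxy does turn out to exist in your environment, one option (a sketch, not the only way) is to set HTTP_PROXY / HTTPS_PROXY on the pod and let the standard library pick them up via http.ProxyFromEnvironment instead of hardcoding the URL; the app name and license variable below are illustrative:

package main

import (
    "net/http"
    "os"
    "time"

    "github.com/newrelic/go-agent/v3/newrelic"
)

func main() {
    app, err := newrelic.NewApplication(
        newrelic.ConfigAppName("my-service"), // hypothetical app name
        newrelic.ConfigLicense(os.Getenv("NEW_RELIC_LICENSE_KEY")),
        newrelic.ConfigDistributedTracerEnabled(true),
        func(cfg *newrelic.Config) {
            // Read HTTP_PROXY / HTTPS_PROXY / NO_PROXY from the pod environment.
            cfg.Transport = &http.Transport{Proxy: http.ProxyFromEnvironment}
        },
    )
    if err != nil {
        panic(err)
    }
    defer app.Shutdown(10 * time.Second)
    // ... application code instrumented with app ...
}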
Please correct me if I understood you wrong.

Connecting to scality/s3 server between docker containers

We are using a Python-based solution which loads and stores files in S3. For development and local testing we use a Vagrant environment with Docker and docker-compose. We have two docker-compose definitions: one for the assisting backend services (mongo, restheart, redis and s3) and one containing the Python-based REST API that exposes the solution using those backend services.
When our "front-end" docker-compose group interacts with restheart, this works fine (using the name of the restheart container as the server host in HTTP calls). When we do the same with the scality/s3 server, it does not work.
The interesting part is that a Python test suite running on the host (Windows 10) can use the scality/s3 server over the ports forwarded through Vagrant to its Docker container within the docker-compose group. There we used localhost as the endpoint_url and it works perfectly.
In the error case (when the frontend web service wants to write to S3), the "frontend" service always responds with:
botocore.exceptions.ClientError: An error occurred (InvalidURI) when calling the CreateBucket operation: Could not parse the specified URI. Check your restEndpoints configuration.
And the s3server always responds with http 400 and the message:
s3server | {"name":"S3","clientIP":"::ffff:172.20.0.7","clientPort":49404,"httpMethod":"PUT","httpURL":"/raw-data","time":1521306054614,"req_id":"e385aae3c04d99fc824d","level":"info","message":"received request","hostname":"cdc8a2f93d2f","pid":83}
s3server | {"name":"S3","bytesSent":233,"clientIP":"::ffff:172.20.0.7","clientPort":49404,"httpMethod":"PUT","httpURL":"/raw-data","httpCode":400,"time":1521306054639,"req_id":"e385aae3c04d99fc824d","elapsed_ms":25.907569,"level":"info","message":"responded with error XML","hostname":"cdc8a2f93d2f","pid":83}
We are calling Scality with this boto3 code:
s3 = boto3.resource('s3',
                    aws_access_key_id='accessKey1',
                    aws_secret_access_key='verySecretKey1',
                    endpoint_url='http://s3server:8000')
s3_client = boto3.client('s3',
                         aws_access_key_id='accessKey1',
                         aws_secret_access_key='verySecretKey1',
                         endpoint_url='http://s3server:8000')
s3.create_bucket(Bucket='raw-data')  # here the exception comes
bucket = self.s3.Bucket('raw-data')
This issue is quite common. In the config.json file, which I assume you mount into your Docker container, there is a restEndpoints section where you must associate a domain name with a default region. What that means is that the hostname your frontend uses should be listed there, mapped to a default region.
Do note that the default region does not prevent you from using other regions: it's just where your buckets will be created if you don't specify otherwise.
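As a sketch, based on the default CloudServer config.json and the endpoint_url from the question (adjust the region name to your setup), the section would look roughly like:

"restEndpoints": {
    "localhost": "us-east-1",
    "127.0.0.1": "us-east-1",
    "s3server": "us-east-1"
}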
In the future, I'd recommend you open an issue directly on the Zenko Forum, as this is where most of the community and core developers are.
Cheers,
Laure

How to use https to call the kubernetes api server

I have a two-node Kubernetes cluster on a Linux server, and I use the Kubernetes API to pull stats about the nodes using an HTTP API through kubeproxy. However, I haven't found any good documentation on how to use HTTPS. I am fairly new to setting up environments, so a lot of the high-level documentation goes over my head.
You need to configure HTTPS on the Kubernetes API server. You can check Kelsey Hightower's Kubernetes The Hard Way: https://github.com/kelseyhightower/kubernetes-the-hard-way
Also look at this doc: https://kubernetes.io/docs/admin/accessing-the-api/
Configuring Kubernetes SSL manually may be hard. If you have trouble, try the kubeadm utility: https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/
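Once the API server is serving HTTPS, a minimal sketch of calling it from inside a pod with the mounted service account credentials (the paths and the kubernetes.default.svc address are Kubernetes defaults; your cluster setup may differ):

# service account credentials are mounted into every pod by default
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
CACERT=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt

# pull node stats over HTTPS instead of going through the plain-HTTP proxy
curl --cacert "$CACERT" \
     --header "Authorization: Bearer $TOKEN" \
     https://kubernetes.default.svc/api/v1/nodes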
