Noob question here since I'm new to Jaeger and Docker.
Docker Image: Jaeger Version 1.40
Is there a way to secure the Jaeger Collector OTLP endpoints by adding basic authentication when building the image? I wanted to have a secured connection when sending trace data from the Collector to the Jaeger Collector via OTLP (ports 4317 and 4318).
Also, is there a way to secure the Jaeger Query service with basic authentication when someone tries to access it? Adding TLS to the UI and accessing it through the browser returns a blank page. (Port 16686)
I deployed the built image to AWS ECS using the EC2 launch type.
I wanted to have a secured connection when sending trace data from the Collector to the Jaeger Collector via OTLP
No, AFAIK Jaeger doesn't support basic authentication on its Collector endpoints. However, you could add a reverse proxy in front of the collectors. See this blog for more details: https://medium.com/@larsmilland01/secure-architecture-for-jaeger-with-apache-httpd-reverse-proxy-on-openshift-f31983fad400
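For example, a minimal nginx sketch that terminates TLS and adds basic auth in front of the OTLP/HTTP endpoint (the hostnames, cert paths, and user file below are placeholders, not anything Jaeger ships); the gRPC endpoint on 4317 would need a similar server block using grpc_pass:

    server {
        listen 443 ssl;
        server_name jaeger-collector.example.com;          # placeholder hostname

        ssl_certificate     /etc/nginx/certs/server.crt;   # your certificate
        ssl_certificate_key /etc/nginx/certs/server.key;   # your key

        # user file created with: htpasswd -c /etc/nginx/.htpasswd otel-user
        auth_basic           "Jaeger Collector";
        auth_basic_user_file /etc/nginx/.htpasswd;

        location / {
            proxy_pass http://jaeger-collector:4318;       # the collector's OTLP/HTTP port
        }
    }

Senders then point at https://jaeger-collector.example.com with basic auth credentials instead of hitting 4318 directly.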
Adding TLS to the UI and accessing it through the browser returns a blank page. (Port 16686)
Also no, but you could do it with a sidecar proxy. Read this: https://medium.com/jaegertracing/protecting-jaeger-ui-with-an-oauth-sidecar-proxy-34205cca4bb1
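As a rough illustration of that article's approach, a docker-compose sketch placing oauth2-proxy in front of the UI might look like this (image, flags, and env vars per the oauth2-proxy docs; every secret below is a placeholder):

    services:
      jaeger:
        image: jaegertracing/all-in-one:1.40
        # 16686 is deliberately not published; only the proxy can reach it

      oauth2-proxy:
        image: quay.io/oauth2-proxy/oauth2-proxy:latest
        command:
          - --http-address=0.0.0.0:4180
          - --upstream=http://jaeger:16686
          - --email-domain=*
        environment:
          OAUTH2_PROXY_CLIENT_ID: "<client-id>"            # placeholder
          OAUTH2_PROXY_CLIENT_SECRET: "<client-secret>"    # placeholder
          OAUTH2_PROXY_COOKIE_SECRET: "<32-byte-secret>"   # placeholder
        ports:
          - "4180:4180"

Users then reach the UI through port 4180 and must authenticate with your OAuth provider first.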
Related
I deployed front and back apps with ECS Fargate; both of them are up and running and I can access them from my browser. They are configured on the same VPC and subnet.
The backend has service discovery configured, and my server's DNS address was inserted into my React application.
I read in the thread cannot-connect-two-ecs-services-via-service-discovery that if I use axios from my browser to access my server via service discovery, it will not work.
The error I am getting is: net::ERR_NAME_NOT_RESOLVED
How can I achieve communication between these 2 services with service discovery? Am I missing something?
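For context, the lookup that fails looks roughly like this (the DNS name below is a placeholder for my Cloud Map service name):

    # The React app runs in the user's browser, outside the VPC, so the
    # private service-discovery name cannot be resolved there:
    nslookup backend.local                       # placeholder name; fails outside the VPC
    curl http://backend.local:8080/api/health    # same failure the browser reports as net::ERR_NAME_NOT_RESOLVED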
I use Elasticsearch and Kibana for saving and querying my data. Some good features like RBAC and SIEM require SSL communication between Elasticsearch and Kibana, so I enabled xpack.security.http.ssl.enabled and xpack.security.transport.ssl.enabled. Thus client requests to ES must go over HTTPS.
I also have a Flink cluster writing data to Elasticsearch. Flink, Elasticsearch, and Kibana all run on Docker Swarm. There is no need for Flink to authenticate with ES or to encrypt its traffic, so I would like Flink to access ES via plain HTTP with no authentication.
So, can Elasticsearch support HTTP and HTTPS simultaneously for different source hosts?
Is it possible to selectively authenticate requests based on source IP or host?
Plus:
Elasticsearch and Kibana are both version 7.7.0
Docker version: 19.03
Once you enable security on the HTTP layer, all clients must be updated to communicate with the cluster via SSL; it would not make sense to have some clients communicating securely while others don't.
If you enable TLS on the HTTP layer in Elasticsearch, then you might need to make configuration changes in other parts of the Elastic Stack and in any Elasticsearch clients that you use.
Also, see what happened just a few days ago to thousands of clusters that were left accessible to the world.
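To make that concrete: once the HTTP layer is secured, every client, Flink's Elasticsearch connector included, has to connect along these lines (host and password are placeholders):

    # HTTPS plus credentials become mandatory for every client:
    curl --cacert /path/to/ca.crt -u elastic:<password> https://es-host:9200/_cluster/health

    # A plain-HTTP request is rejected regardless of which host sends it:
    curl http://es-host:9200/_cluster/health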
I'm creating an app that will have to communicate with a Kubernetes service via REST APIs. The service hosts a Docker image that's listening on port 8080 and responds with a JSON body.
I noticed that when I expose a deployment via -
kubectl expose deployment myapp --target-port=8080 --type=NodePort --name=app-service
It then creates a service named app-service
To then test this locally, I obtain the IP:port for the created service via -
minikube service app-service --url
I'm using minikube for my local development efforts. I then get a response such as http://172.17.118.68:31970/ which, when I enter it in my browser, works fine (I get the JSON responses I'm expecting).
However, it seems the IP and port for that service are different every time I start the service up.
Which leads to my question - how is a mobile app supposed to find that new IP:port if it's subject to change? Is the common way to work around this to register that combination with a DNS server (such as Google Cloud DNS)?
Or am I missing a step here with setting up Kubernetes public services?
Which leads to my question - how is a mobile app supposed to find that new IP:port if it's subject to change?
minikube is not meant for production use; it is only meant for development purposes. You should create a real Kubernetes cluster and use a LoadBalancer-type service or an Ingress (for L7 traffic) to expose your service to the external world. Since you need to expose your backend REST API, an Ingress is a good choice.
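For instance, a minimal Ingress sketch for the app-service from the question (this assumes an ingress controller such as ingress-nginx is installed, and api.example.com is a DNS name you control):

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: app-ingress
    spec:
      rules:
      - host: api.example.com           # placeholder; point your DNS record here
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-service       # the service from the question
                port:
                  number: 8080          # the port app-service exposes

The mobile app then always calls https://api.example.com, and the ingress controller routes each request to whatever pod IPs are current.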
I'm setting up three docker containers on my own machine using docker compose:
One is a portal written with React.js (called portal)
One is a middleware layer with GraphQL (called gateway)
One is an auth service with node.js (called auth)
I also have a bunch of services already running behind a corporate firewall.
For the most part, gateway will request resources behind the firewall, so I have configured the Docker containers to proxy requests through a Squid proxy with access to those additional services. However, requests to my local auth service and other local services should not be proxied. As such, I have the following Docker proxy configuration (note the noProxy setting):
~/.docker/config.json
...
"proxies": {
"default": {
"httpProxy": "http://172.30.245.96:3128",
"httpsProxy": "http://172.30.245.96:3128",
"noProxy": "auth,localhost,127.0.0.1,192.168.0.1/24"
}
}
...
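(For what it's worth, what this config actually injects can be checked from inside a running container; the container name below may differ under compose:)

    docker exec gateway env | grep -i proxy
    # Docker turns the proxies block into HTTP_PROXY / HTTPS_PROXY / NO_PROXY
    # environment variables inside each container; NO_PROXY should contain
    # "auth,localhost,127.0.0.1,192.168.0.1/24"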
With the above setup, portal requests from the browser do go directly to gateway at http://192.168.0.15/foo, but when gateway makes requests to auth using http://auth:3001/bar, they are not sent directly to auth; instead they go through the proxy, which I am trying to avoid.
I can see the auth request is sent through the proxy with the squid proxy errors:
<p>The following error was encountered while trying to retrieve the URL: http://auth:3001/bar</p>
How can I set up the Docker containers to respect the noProxy setting when using Docker service names like auth? It appears to me that the request from gateway to auth is mistakenly being proxied through 172.30.245.96:3128, causing it to fail. Thanks
Your Docker configuration seems fine, but your host doesn't understand how to resolve the name auth. Based on the IP given (192.168.x.x), I'll assume that you're attempting to reach the container service from the host. Add an entry for auth into your host's /etc/hosts (C:\Windows\System32\Drivers\etc\hosts if on Windows).
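For example, assuming the auth container publishes port 3001 to the host, the entry would look like:

    # /etc/hosts
    127.0.0.1    auth     # now http://auth:3001/bar resolves via the published port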
Take a look at Linked docker-compose containers making http requests for more details.
If you run into issues reaching services from within the container, check docker-compose resolve hostname in url for an example.
I have installed Kubernetes with the kubeadm tool and then followed the documentation to install the Web UI (Dashboard). Kubernetes is installed and running on a single node, which is a tainted master node.
However, I'm not able to access the Web UI at https://<kubernetes-master>/ui. Instead I can access it on https://<kubernetes-master>:6443/ui.
How could I fix this?
The URL you are using to access the dashboard is an endpoint on the API Server. By default, kubeadm deploys the API server on port 6443, not on 443, which is what you would need to access the dashboard over https without specifying a port in the URL (i.e. https://<kubernetes-master>/ui).
There are various ways you can expose and access the dashboard. These are ordered by increasing complexity:
If this is a dev/test cluster, you could try making kubeadm deploy the API server on port 443 by using the --api-port flag exposed by kubeadm.
Expose the dashboard using a service of type NodePort.
Deploy an ingress controller and define an ingress point for the dashboard.
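For option 2, a sketch of such a service might look like this (the selector and ports assume the standard kubernetes-dashboard deployment in kube-system; adjust them to the version you installed):

    apiVersion: v1
    kind: Service
    metadata:
      name: dashboard-nodeport
      namespace: kube-system
    spec:
      type: NodePort
      selector:
        k8s-app: kubernetes-dashboard   # label used by the stock dashboard pods
      ports:
      - port: 80
        targetPort: 9090                # older dashboards serve HTTP on 9090; newer ones HTTPS on 8443
        nodePort: 30080                 # the UI is then reachable at http://<node-ip>:30080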