How to set up a .NET Core Web API running in a Docker container so it registers with Eureka service discovery?
This is my current configuration that is not working:
Startup.cs
appsettings.json
Option 1: Using localhost as the hostname
Option 2: Using host.docker.internal as the hostname
For eureka.client.serviceUrl I've also tried these other options:
http://localhost:8761/eureka/
http://localhost:8761/eureka
http://localhost:8761/
http://localhost:8761
In the end, nothing is registered in Eureka:
This is my first time working with something like this, so I followed all the steps in this tutorial.
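For comparison, here is a minimal appsettings.json sketch of the Steeltoe Eureka client section for a containerized client. The application name, port, and serviceUrl are assumptions for illustration; the key point is that localhost inside a container refers to the container itself, so the service URL must point at the Docker host (host.docker.internal on Docker Desktop) or at the Eureka container's service name on a shared Docker network:

```json
{
  "spring": {
    "application": {
      "name": "my-api"
    }
  },
  "eureka": {
    "client": {
      "serviceUrl": "http://host.docker.internal:8761/eureka/",
      "shouldRegisterWithEureka": true,
      "shouldFetchRegistry": false
    },
    "instance": {
      "port": 80,
      "hostName": "host.docker.internal"
    }
  }
}
```

If both the API and the Eureka server run via docker-compose on the same network, the compose service name of the Eureka container (e.g. http://eureka:8761/eureka/) would replace host.docker.internal.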
We are experimenting with Jaeger as a tracing tool for our Traefik routing environment. We also use an encapsulated Docker network.
The goal is to accumulate requests on our APIs per department, plus some other monitoring.
We are using Traefik 2.8 as a Docker service, and all our services run behind this Traefik instance.
We added a basic tracing configuration to our .toml file and started a Jaeger instance, also as a Docker service. On our websecure entrypoint we set forwardedHeaders.insecure = true.
Jaeger is working fine, but we only get the Docker-internal host IP of the service, not the visitor IP of the user accessing a client via browser or app.
I googled around and, while I'm not sure, it seems this is a problem inherent to our setup and can't be fixed except by using network="host". Unfortunately, that's not an option for us.
But I want to be sure, so I hope someone here has a tip for configuring Docker/Jaeger correctly, or knows whether it is even possible.
A suggestion for a different tracing tool (something like Tideways, but with better Python, WASM, and C++ support) would also be appreciated.
Thanks
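For context, a sketch of the static-config snippet we would expect for this setup in Traefik 2.8's .toml file (the jaeger hostname and ports are assumptions; 6831/UDP is Jaeger's default agent port). Note that forwardedHeaders.insecure only makes Traefik trust X-Forwarded-For headers from any source; the real client IP still has to survive the hop into the Docker network, which is typically where it gets lost:

```toml
[entryPoints.websecure]
  address = ":443"
  [entryPoints.websecure.forwardedHeaders]
    # Trust X-Forwarded-For from any upstream (as described in the question)
    insecure = true

[tracing]
  [tracing.jaeger]
    samplingServerURL = "http://jaeger:5778/sampling"
    localAgentHostPort = "jaeger:6831"
```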
I am trying to run the Ambassador API gateway on my local dev environment so I can simulate what I'll end up with in production; the difference is that in prod my solution will be running in Kubernetes. To do so, I'm installing Ambassador into Docker Desktop and adding the required configuration to route requests to my microservices. Unfortunately, it did not work for me, and I'm getting the error below:
upstream connect error or disconnect/reset before headers. reset reason: connection failure
I assume that's due to an issue in the mapping file, which is as follows:
apiVersion: ambassador/v2
kind: Mapping
name: institutions_mapping
prefix: /ins/
service: localhost:44332
So what I'm basically trying to do is rewrite all requests coming to http://{ambassador_url}/ins to a service running locally in IIS Express (through Visual Studio) on port 44332.
What am I missing?
I think you may be better off using another Ambassador Labs tool called Telepresence.
https://www.telepresence.io/
With Telepresence you can take your local service you have running on localhost and project it into your cluster to see how it performs. This way you don't need to spin up a local cluster, and can get real time feedback on how your service operates with other services in the cluster.
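As a side note on the original Mapping: when Ambassador itself runs in a container on Docker Desktop, localhost inside that container is Ambassador, not the machine running IIS Express. A hedged variant of the Mapping that targets the host instead (host.docker.internal resolves to the host machine on Docker Desktop; the prefix and port are taken from the question):

```yaml
apiVersion: ambassador/v2
kind: Mapping
name: institutions_mapping
prefix: /ins/
service: host.docker.internal:44332
```

Be aware that IIS Express binds only to localhost by default, so it may also need to be configured to accept remote connections for this routing to work.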
I am having difficulties deploying the official Neo4j Docker image (https://hub.docker.com/_/neo4j) to an OpenShift environment and accessing it from outside (from my local machine).
I have performed the following steps:
oc new-app neo4j
Created route for port 7474
Set up the environment variable NEO4J_dbms_connector_bolt_listen__address to 0.0.0.0:7687, which is the equivalent of setting dbms.connector.bolt.listen_address=0.0.0.0:7687 in the neo4j.conf file.
Accessed the route URL from my local machine, which opens the Neo4j browser, which requires authentication. At this point I am blocked, because every combination of URLs I try is unsuccessful.
As a workaround I have managed to forward port 7687 to my local machine, install the Neo4j Desktop application, and connect via bolt://localhost:7687, but this is not the ideal solution.
Therefore there are two questions:
1. How can I connect from the Neo4j browser to its own database?
2. How can I connect from an external environment (through the OpenShift route) to the Neo4j DB?
I have no experience with the OpenShift, but try to add the following config:
dbms.default_listen_address=0.0.0.0
Is there any other way for you to connect to Neo4j, so that you could further inspect the issue?
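As a possible sketch of applying that setting (assuming the deployment created by oc new-app is named neo4j): on the official image, any neo4j.conf setting can be passed as an environment variable by prefixing NEO4J_, replacing dots with single underscores and existing underscores with double underscores, so dbms.default_listen_address becomes:

```shell
oc set env dc/neo4j NEO4J_dbms_default__listen__address=0.0.0.0
```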
Short answer:
To connect to the DB: that is most likely a configuration issue; maybe Tomaž Bratanič's answer is the solution. As for accessing the DB from outside, you will most likely need a NodePort.
Long answer:
Note that OpenShift Routes are for HTTP/HTTPS traffic only, not for any other kind of traffic. Typically, the "Routers" of an OpenShift cluster listen only on ports 80 and 443, so connecting to your database on any other port will most likely not work (although this heavily depends on your cluster configuration).
The solution for non-HTTP(S) traffic is to use NodePorts as described in the OpenShift documentation: https://docs.openshift.com/container-platform/3.11/dev_guide/expose_service/expose_internal_ip_nodeport.html
Note that for NodePorts, too, you might need your cluster administrator to add additional ports to the load balancer, or you might need to connect to the OpenShift nodes directly. Refer to the documentation on how to use NodePorts.
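A minimal sketch of such a NodePort service for the Bolt port (the selector label and the nodePort value are assumptions; they must match your Neo4j deployment's labels and fall within your cluster's allowed NodePort range):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: neo4j-bolt
spec:
  type: NodePort
  selector:
    app: neo4j
  ports:
    - name: bolt
      port: 7687
      targetPort: 7687
      nodePort: 30687   # must lie in the cluster's NodePort range (default 30000-32767)
```

You would then connect from outside with bolt://<node-ip>:30687.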
I deployed a Hazelcast image on OpenShift and created a route, but I am still not able to connect to it from an external Java client. I learned that routes only work for HTTP or HTTPS services, so am I missing something here, or what do I have to do to expose that Hazelcast instance to the outside world?
The Docker image for Hazelcast runs Hazelcast.jar inside the image; could this be related to the problem I'm facing?
I tried exposing the service by running the command
oc expose dc hazelcast --type=LoadBalancer --name=hazelcast-ingress
and an external IP with a different port number was generated. I tried that as well, but I'm still getting "exception com.hazelcast.core.HazelcastException: java.net.SocketTimeoutException" and am not able to connect.
Thanks in advance, any guidance would be really helpful.
According to this, "...If the client application is outside the OpenShift project, then the cluster needs to be exposed by the service with externalIP and the Hazelcast client needs to have the Smart Routing feature disabled".
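A hedged sketch of what that looks like on the client side, as a hazelcast-client.xml for a 3.x Java client (the address is a placeholder for the service's externalIP and exposed Hazelcast port):

```xml
<hazelcast-client xmlns="http://www.hazelcast.com/schema/client-config">
  <network>
    <cluster-members>
      <!-- the externalIP and port exposed by the OpenShift service -->
      <address>203.0.113.10:5701</address>
    </cluster-members>
    <!-- outside the cluster, members' internal addresses are unreachable,
         so route all operations through the one exposed address -->
    <smart-routing>false</smart-routing>
  </network>
</hazelcast-client>
```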
How can I use authentication on a Docker-hosted HTML report?
For example, https://localhost:3000 is the URL my Docker container serves. How can I add authentication to it?
From what you are explaining, I understand you would like to add some form of authentication to a web application that happens to run in Docker. I would suggest first getting it running without Docker on a Linux server using the same distro as the base image in your Dockerfile. For example, if you are using an Ubuntu 16.04 image, try setting this up on an Ubuntu 16.04 server.
You can add LDAP or HTTP auth if you set up Nginx as a reverse proxy in front of your application. This would also let you access the application on port 443 instead of https://url:3000.
Here are some resources on using nginx for authentication.
https://www.digitalocean.com/community/tutorials/how-to-set-up-password-authentication-with-nginx-on-ubuntu-14-04
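Building on that tutorial, a minimal sketch of an Nginx server block with HTTP basic auth in front of an app on port 3000 (the server name, certificate paths, and .htpasswd location are assumptions; the password file would be created with the htpasswd tool):

```nginx
server {
    listen 443 ssl;
    server_name report.example.com;

    ssl_certificate     /etc/nginx/ssl/fullchain.pem;
    ssl_certificate_key /etc/nginx/ssl/privkey.pem;

    location / {
        # prompt for a username/password before proxying
        auth_basic           "Restricted report";
        auth_basic_user_file /etc/nginx/.htpasswd;

        proxy_pass         http://127.0.0.1:3000;
        proxy_set_header   Host $host;
        proxy_set_header   X-Real-IP $remote_addr;
    }
}
```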
I would suggest you change the tags on this question to #nginx or #ubuntu instead. What kind of authentication would you like to add?