I want to specify a custom port from the config for the Testcontainers Kafka image,
to be able to reuse the bootstrap-servers parameter later for black-box testing of the application.
I am using the Kafka module from https://github.com/testcontainers/testcontainers-scala.
I did not find an API for fixing the port while running the container; all I found is that the port is dynamically assigned to the container.
import com.dimafeng.testcontainers.KafkaContainer
import com.dimafeng.testcontainers.scalatest.TestContainerForAll
import org.scalatest.flatspec.AnyFlatSpec
import org.scalatest.matchers.should.Matchers

class KafkaSetUpSpec extends AnyFlatSpec with TestContainerForAll with Matchers {
  override val containerDef: KafkaContainer.Def = KafkaContainer.Def()
  import org.testcontainers.Testcontainers
  // Testcontainers.exposeHostPorts(9092)
  behavior of "KafkaContainer"
  it should "return Kafka connection options for kafka container" in withContainers { kafkaContainer =>
    kafkaContainer.bootstrapServers.nonEmpty shouldBe true
    kafkaContainer.bootstrapServers.split(":")(2).toInt shouldBe kafkaContainer.container.getMappedPort(9093)
  }
}
All I need is to take the connection URL from the config and fix it in the Kafka container, i.e. pin the port; do you have any idea how to do that?
How do I assign the same port from the outside world?
Additional info: the client is not in the same network and is located locally.
Testcontainers provides dynamic port mapping for all modules by design. You have to use kafkaContainer.getBootstrapServers() after the container has been started to get the dynamically mapped port. This then needs to be injected into your system under test.
You can make use of the experimental reusable mode to reuse a Testcontainers-instrumented container across JVMs.
Add testcontainers.reuse.enable=true to the ~/.testcontainers.properties file on your local machine and set withReuse(true) on the KafkaContainer. Note that reusable mode currently does not support Docker networks.
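A minimal sketch of that reuse setup, using the underlying Java KafkaContainer that testcontainers-scala wraps (the image tag and object name are just examples, not taken from the question):

import org.testcontainers.containers.KafkaContainer
import org.testcontainers.utility.DockerImageName

object ReusableKafkaExample extends App {
  // Reuse only takes effect if testcontainers.reuse.enable=true is set in
  // ~/.testcontainers.properties on the machine running the tests.
  val kafka = new KafkaContainer(DockerImageName.parse("confluentinc/cp-kafka:7.4.0"))
    .withReuse(true)

  kafka.start()

  // The port is still mapped dynamically; read it after start() and inject it
  // into the system under test (e.g. via its configuration).
  println(kafka.getBootstrapServers)
}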
See further examples in the corresponding PR:
https://github.com/testcontainers/testcontainers-java/pull/1781
The isolated host name is not displayed in the FILE adapter handlers list.
When I try to add a new send/receive handler to an Isolated Host, I can't see the isolated host name in the FILE adapter options.
Any suggestions about what I should do?
The Isolated Host instances should only be used for Receive Locations that run in IIS. So the fact that it is not allowing you to configure it for FILE is correct behavior.
You should not be trying to run the FILE adapter on an Isolated Host; you need to use an In-Process host.
I am having difficulties deploying the official Neo4j Docker image (https://hub.docker.com/_/neo4j) to an OpenShift environment and accessing it from outside (from my local machine).
I have performed the following steps:
oc new-app neo4j
Created route for port 7474
Set the environment variable NEO4J_dbms_connector_bolt_listen__address to 0.0.0.0:7687, which is the equivalent of setting dbms.connector.bolt.listen_address=0.0.0.0:7687 in the neo4j.conf file.
Accessed the route URL from my local machine, which opens the Neo4j Browser and asks for authentication. At this point I am blocked, because every combination of URLs I try is unsuccessful.
As a workaround I have managed to forward port 7687 to my local machine, install the Neo4j Desktop application, and connect via bolt://localhost:7687, but this is not the ideal solution.
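For reference, that kind of forwarding can be done with oc port-forward; the pod name below is a placeholder, not taken from the question:

oc get pods
oc port-forward neo4j-1-abcde 7687:7687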
Therefore there are two questions:
1. How can I connect from the Neo4j Browser to its own database?
2. How can I connect from an external environment (through the OpenShift route) to the Neo4j DB?
I have no experience with OpenShift, but try adding the following config:
dbms.default_listen_address=0.0.0.0
Is there any other way for you to connect to Neo4j, so that you could further inspect the issue?
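On the official Docker image this setting can also be passed as an environment variable, using the same dots-to-underscores convention as the NEO4J_dbms_connector_bolt_listen__address variable from the question. A sketch, assuming the app was created as a DeploymentConfig named neo4j:

oc set env dc/neo4j NEO4J_dbms_default__listen__address=0.0.0.0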
Short answer:
Connecting to the DB is most likely a configuration issue; Tomaž Bratanič's answer may be the solution. As for accessing the DB from outside, you will most likely need a NodePort.
Long answer:
Note that OpenShift Routes are for HTTP/HTTPS traffic only, not for any other kind of traffic. Typically, the "routers" of an OpenShift cluster listen only on ports 80 and 443, so connecting to your database on any other port will most likely not work (although this heavily depends on your cluster configuration).
The solution for non-HTTP(S) traffic is to use NodePorts, as described in the OpenShift documentation: https://docs.openshift.com/container-platform/3.11/dev_guide/expose_service/expose_internal_ip_nodeport.html
Note that even with NodePorts, you might need to have your cluster administrator add additional ports to the load balancer, or you might need to connect to the OpenShift nodes directly. Refer to the documentation on how to use NodePorts.
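A sketch of what such a NodePort Service could look like for the Bolt port; the names, label selector, and nodePort value are assumptions based on the oc new-app neo4j command above, not something prescribed by the documentation:

apiVersion: v1
kind: Service
metadata:
  name: neo4j-bolt
spec:
  type: NodePort
  selector:
    app: neo4j
  ports:
  - name: bolt
    port: 7687
    targetPort: 7687
    nodePort: 30687   # must fall inside the cluster's NodePort range (30000-32767 by default)

You would then connect with bolt://<any-node-ip>:30687, provided that port is reachable from your machine.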
I have a series of microservices that I have been testing. Originally they were running on Service Fabric; however, I have switched to using Consul, Fabio, and Nomad, which I like better.
In development on my machine things work well; however, I am running into some issues actually getting Fabio to work in a cluster setup.
I have a cluster of 5 nodes, each running Consul, Fabio, and Nomad.
Each service gets a dynamic port at runtime and successfully registers itself.
On the node on which the service is running, Fabio correctly forwards traffic.
However, if the same Fabio URL is used on a different node, the traffic is forwarded to the correct node/port, but that port is closed, so the connection doesn't work.
For instance, if ServiceA is running on MachineA on port 1234, then http://MachineA:9999/ServiceA works correctly.
However, http://MachineB:9999/ServiceA fails when Fabio on MachineB tries to initiate a connection to MachineA on port 1234.
A solution would be to add firewall rules, I would imagine; however, this requires all the services to run as Admin, which I don't want.
Is there a way to support this through Fabio?
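For reference, a sketch of what such a firewall rule could look like on Windows, added once by an administrator rather than by each service; it assumes Nomad's default dynamic port range of 20000-32000 (adjust if your client config overrides it):

netsh advfirewall firewall add rule name="Nomad dynamic ports" dir=in action=allow protocol=TCP localport=20000-32000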
I'm currently using GKE (Kubernetes) with an nginx container to proxy different services. My goal is to block some countries. I'm used to doing that with nginx and its useful geoip module, but as of now Kubernetes doesn't forward the real client IP to the containers, so I can't use it.
What would be the simplest/cheapest solution for filtering out countries until Kubernetes actually forwards the real IP?
An external service?
A simple Google Cloud server running only nginx, filtering countries and forwarding to Kubernetes (not great in terms of price and reliability)?
Modify kube-proxy (as I've seen suggested here and there, but it seems a bit odd)?
Frontend geoip filtering (hmm, the worst idea by far)?
thank you!
You can use a custom nginx image and use a map to create a filter:
# this goes in the http section
map $geoip_country_code $allowed_country {
    default yes;
    UY no;
    CL no;
}
and
# this goes inside whichever location you want to apply the filter to
if ($allowed_country = no) {
    return 403;
}
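For the $geoip_country_code variable to be populated at all, the geoip module also needs a country database loaded. A sketch, assuming the custom image ships nginx with the geoip module and a GeoIP database at this example path:

# also in the http section, before the map
geoip_country /usr/share/GeoIP/GeoIP.dat;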
First, on GKE, if you're using the nginx ingress controller you should turn off the default GCE controller (https://github.com/kubernetes/contrib/blob/master/ingress/controllers/gce/BETA_LIMITATIONS.md#disabling-glbc), otherwise they'll fight.
kubernetes doesn't forward the real customer ip to the containers
That's only true if you're going through kube-proxy with a Service of type NodePort and/or LoadBalancer. With the nginx ingress controller you're running with hostPort, so it's actually the Docker daemon that's hiding the source IP. I think later versions of Docker default to the iptables mode, which shows you the source IP once again.
In the meantime you can get the source IP by running the nginx controller as shown here: https://gist.github.com/bprashanth/a4b06004a0f9c19f9bd41a1dcd0da0c8
That, however, uses host networking, which is not the greatest option. Instead, you can use the proxy protocol to get the source IP: https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx#proxy-protocol
Also (in case you didn't already realize) the nginx controller has the geoip module enabled by default: https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx#nginx-status-page
Please open an issue if you need more help.
EDIT: proxy protocol is possible through the ssl proxy which is in alpha currently: https://cloud.google.com/compute/docs/load-balancing/tcp-ssl/#proxy_protocol_for_retaining_client_connection_information
I'm trying to set up a Cassandra cluster as a test bed, but I got a JMX remote connection error. I seem to have found the answer to my error on the Cassandra FAQ page:
Nodetool says "Connection refused to host: 127.0.1.1" for any remote host. What gives?
Nodetool relies on JMX, which in turn relies on RMI, which in turn sets up its own listeners and connectors as needed on each end of the exchange. Normally all of this happens behind the scenes transparently, but incorrect name resolution for either the host connecting, or the one being connected to, can result in crossed wires and confusing exceptions.
If you are not using DNS, then make sure that your /etc/hosts files are accurate on both ends. If that fails try passing the -Djava.rmi.server.hostname=$IP option to the JVM at startup (where $IP is the address of the interface you can reach from the remote machine).
But can somebody help me with how to pass -Djava.rmi.server.hostname=$IP?
Or what to add to the hosts file? I know that in hosts we normally add "IP alias" entries, but whose IP and which alias?
I don't know much Java or Linux.
I'm currently working on Ubuntu 10.04 and Cassandra 0.7.4.
Sudesh
For JMX you need to enable JMX-remoting:
java -Dcom.sun.management.jmxremote
Depending on where you want to access the JMX server from, you also need to specify a port:
-Dcom.sun.management.jmxremote.port=12345
and set or disable passwords.
Have a look at http://download.oracle.com/javase/1.5.0/docs/guide/management/agent.html for more details.
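In Cassandra specifically, these JVM options are normally set in conf/cassandra-env.sh. A sketch of the change the FAQ describes, assuming that file location and using a placeholder IP (replace it with the address of the node that is reachable from your machine):

# in conf/cassandra-env.sh
JVM_OPTS="$JVM_OPTS -Djava.rmi.server.hostname=192.168.1.50"
# the jmxremote port/authentication options are usually already present in this file;
# add them only if they are missing (8080 is the default JMX port in Cassandra 0.7):
JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.port=8080"
JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.authenticate=false"
JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.ssl=false"

After editing the file, restart the Cassandra process on that node and point nodetool at the same IP and port.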