The Isolated Host name is not displaying in the FILE adapter handlers list

When I try to add a new send/receive handler to an Isolated Host, I can't see the Isolated Host name in the FILE adapter options.
Any suggestions about what I should do?

Isolated Host instances should only be used for Receive Locations that run in IIS, so the fact that it is not allowing you to configure one as a FILE handler is correct behavior.
You should not be trying to run the FILE adapter on an Isolated Host; you need to use an In-Process host.

Related

Testcontainers fix bootstrapServers port for Kafka

I want to specify a custom port from the config for the Testcontainers Kafka image,
so that I can reuse the bootstrapServers parameter later for black-box testing of the application.
I am using the https://github.com/testcontainers/testcontainers-scala Kafka module.
I did not find an API for fixing the port while running the container; all I found is that the port is dynamically assigned to the container.
import com.dimafeng.testcontainers.KafkaContainer
import com.dimafeng.testcontainers.scalatest.TestContainerForAll
import org.scalatest.flatspec.AnyFlatSpec
import org.scalatest.matchers.should.Matchers

class KafkaSetUpSpec extends AnyFlatSpec with TestContainerForAll with Matchers {
  override val containerDef: KafkaContainer.Def = KafkaContainer.Def()
  import org.testcontainers.Testcontainers
  //Testcontainers.exposeHostPorts(9092)
  it should "return Kafka connection options for kafka container" in withContainers { kafkaContainer =>
    kafkaContainer.bootstrapServers.nonEmpty shouldBe true
    kafkaContainer.bootstrapServers.split(":")(2).toInt shouldBe kafkaContainer.container.getMappedPort(9093)
  }
}
All I need is to take the connection URL from the config and fix it in the Kafka container as a static port. Do you have any idea how to do that?
How do I assign the same port from the outside world?
Additional info: the client is not in the same network and is located locally.
Testcontainers provides dynamic port mapping for all modules by design. You have to use the provided kafkaContainer.getBootstrapServers() after the container has been started to get the dynamically mapped port. This needs to be injected into your system under test afterwards.
You can make use of the experimental reusable mode to reuse a Testcontainers-instrumented container across JVMs.
Add testcontainers.reuse.enable=true to the ~/.testcontainers.properties file on your local machine and set withReuse(true) on the KafkaContainer. Note that reusable mode currently does not support Docker networks.
See further examples in the corresponding PR:
https://github.com/testcontainers/testcontainers-java/pull/1781
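A minimal sketch of that flow with testcontainers-scala, assuming testcontainers.reuse.enable=true is already present in ~/.testcontainers.properties; the object and variable names here are illustrative, not part of the library:

import com.dimafeng.testcontainers.KafkaContainer

object KafkaReuseSketch extends App {
  val kafka = KafkaContainer()
  // Mark the underlying Java container reusable so it can be shared across JVMs.
  kafka.container.withReuse(true)
  kafka.start()
  // The mapped port is only known after start(); read the endpoint here and
  // inject it into the system under test (e.g. as a config override).
  val bootstrapServers: String = kafka.bootstrapServers
  println(s"Inject into the system under test: $bootstrapServers")
}

Either way the direction of the flow is fixed: you read the endpoint after startup and push it into your test configuration, rather than pinning the port up front.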

How do I set up BIND via Webmin to delegate DNS lookups for certain subdomains?

I have several Docker containers with some web applications running via docker-compose. One of the containers is a custom DNS server with BIND and Webmin installed. Webmin gives a nice web UI allowing me to update the BIND DNS configuration without directly modifying the files or SSHing into the container. I have Docker set up to look up DNS in this order:
my Docker DNS server
my company's internal DNS server
Google's DNS server
I have one master zone file for the top-level domain "example.com" defined in DNS server 1. I added an address for server1.example.com and DNS resolves it correctly. I want other subdomains to be resolved by my company's internal DNS server.
server1.example.com - resolves correctly
server2.example.com - this host is not referenced in the zone file for the Docker DNS server. I would like to somehow delegate it to my company's DNS server (server 2)
The goal is that I should be able to do software development for web applications and deploy them on my Docker containers. The code makes internal calls to other "example.com" hosts, and I want some of those calls directed back to other Docker containers rather than the real servers, because I am developing code on both and want to test it end to end.
I don't want to (and can't) modify my company's DNS configuration. I am not an expert in BIND or DNS setup and am looking for the simplest solution.
What configuration can achieve this?
I guess the workaround is to use the fully qualified name when creating the zone file. Instead of creating a master zone example.com and listing server1 inside that zone, I am creating a master zone named server1.example.com. It means I have to create a zone file for every server, but I guess that's OK to manage with a smaller number of hosts. server2.example.com then doesn't fall inside any zone and gets resolved by the next DNS server in the chain.
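A minimal sketch of that per-host layout in BIND config, assuming the container's local config file is /etc/bind/named.conf.local and 10.0.0.5 stands in for the container's address (path and address are hypothetical):

// /etc/bind/named.conf.local: one master zone per fully qualified host.
zone "server1.example.com" {
    type master;
    file "/etc/bind/db.server1.example.com";
};
// No zone is declared for server2.example.com, so queries for it fall
// through to the next DNS server in the resolution chain.

The matching db.server1.example.com zone file then needs only the usual SOA and NS records plus a single A record such as "@ IN A 10.0.0.5" pointing at the container.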

Docker Swarm Service Discovery in index.html

I have two Express web apps (server and client) that I deploy in Docker Swarm using docker-compose and/or docker stack. They both have APIs that communicate with each other via their service names, as they are both connected to the same overlay network. A snippet of the config file that client uses to make REST calls to server follows:
"server": {
"url":"http://server:8085",
"endpoints": {
"devices": "/devices",
"temperature": "/temperature",
"mock": "/mock"
}
}
Finding the server by host name is no issue on the Node side, as that code runs directly inside the Docker container. However, both Express apps also serve web pages. The client's and server's CSS and JS dependencies are almost identical, and I do not want to write each stylesheet twice. I'd rather serve a single copy from server that the index.html files of both server and client can use.
In the index.html of server I can use relative paths, because the host is the same and thus implied. But in the index.html of client I need a fully qualified URL, something like:
<link rel="stylesheet" href="http://server:8085/style.css">
Obviously this would not work once I serve index.html from client to a browser, because the browser would look for http://server over the internet rather than in the Docker overlay network.
I thought about downloading the files in client's Node app before it serves index.html, but that's not the cleanest solution.
Is there an elegant way to accomplish this without binding server to a static IP/domain or programmatically downloading these files first?
If your external users' browsers need to access files on both client and server, then you will need to publish both Swarm services on the external IPs of the Swarm nodes, put those IPs behind DNS names or an external load balancer, and only use those URLs for remote connectivity.
When you do that, you'll likely need to bind both services to the same port (443). If that's the case, then you also need another layer of proxy that routes traffic to the proper container based on path or DNS name.
Both http://proxy.dockerflow.com/ and https://traefik.io/ work for that purpose.
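A minimal sketch of that extra proxy layer, written as Traefik v2 labels in a Swarm stack file; the image names, domain, and ports are hypothetical, and the Traefik service itself is omitted:

# docker-stack.yml (fragment): one public host name routed to two services.
services:
  server:
    image: my-server-app
    deploy:
      labels:
        - traefik.http.routers.server.rule=Host(`app.example.com`) && PathPrefix(`/assets`)
        - traefik.http.services.server.loadbalancer.server.port=8085
  client:
    image: my-client-app
    deploy:
      labels:
        - traefik.http.routers.client.rule=Host(`app.example.com`)
        - traefik.http.services.client.loadbalancer.server.port=8080

Once both apps sit behind one public host, the stylesheet link in client's index.html can become a relative path such as /assets/style.css instead of http://server:8085/style.css.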

Change Apache server port dynamically (not manually, but programmatically)

I have installed Apache 2.4 on Windows successfully, and it is working.
Now I want to change the listening port dynamically (not manually; by manually I mean opening a file and editing the port). Maybe I could place a properties file somewhere and read the port from it, or pass the port as a parameter to httpd.exe while starting the server. Ultimately I have to configure the port externally.
Not possible. Use a script that changes it and gracefully restarts the server; you won't be able to do it any other way.
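That said, Apache 2.4 can interpolate shell environment variables in httpd.conf, which comes close to configuring the port externally; a sketch, where APACHE_PORT is a variable name chosen for this example, not an Apache built-in:

# httpd.conf: take the listen port from the environment (Apache 2.4 syntax).
Listen ${APACHE_PORT}

:: Windows batch: set the port, then start or gracefully restart Apache.
set APACHE_PORT=8085
httpd.exe -k restart

A restart is still required for a new value to take effect, which is why the scripted restart above remains the practical answer.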

Cassandra Cluster Setup getting JMX error

I'm trying to set up a Cassandra cluster as a test bed but got a JMX remote connection error. I seem to have found the answer for my error on the Cassandra FAQ page:
Nodetool says "Connection refused to host: 127.0.1.1" for any remote host. What gives?
Nodetool relies on JMX, which in turn relies on RMI, which in turn sets up its own listeners and connectors as needed on each end of the exchange. Normally all of this happens behind the scenes transparently, but incorrect name resolution for either the host connecting, or the one being connected to, can result in crossed wires and confusing exceptions.
If you are not using DNS, then make sure that your /etc/hosts files are accurate on both ends. If that fails try passing the -Djava.rmi.server.hostname=$IP option to the JVM at startup (where $IP is the address of the interface you can reach from the remote machine).
But can somebody help me with how to pass -Djava.rmi.server.hostname=$IP?
Or what to add to the hosts file? I know that in hosts we normally add "IP alias" entries, but whose IP and alias?
I don't know much Java or Linux.
I'm currently working on Ubuntu 10.04 and Cassandra 0.7.4.
Sudesh
For JMX you need to enable JMX-remoting:
java -Dcom.sun.management.jmxremote
Depending on where you want to access the JMX server from, you also need to specify a port:
-Dcom.sun.management.jmxremote.port=12345
and set or disable passwords.
Have a look at http://download.oracle.com/javase/1.5.0/docs/guide/management/agent.html for more details.
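Putting the pieces together for the question above, a sketch of how these options could be appended in Cassandra's conf/cassandra-env.sh; 192.0.2.10 stands in for the node's reachable IP, and 7199 is the default JMX port of later Cassandra releases (0.7-era defaults differed):

# conf/cassandra-env.sh: advertise an address remote machines can reach.
JVM_OPTS="$JVM_OPTS -Djava.rmi.server.hostname=192.0.2.10"
JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.port=7199"
JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.authenticate=false"
JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.ssl=false"

As for /etc/hosts: each node should map its own hostname to the IP other machines can reach, not to 127.0.1.1, e.g. a line like "192.0.2.10 cassandra-node1" (hypothetical values).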
