Using UDP broadcast for pod/peer discovery in Kubernetes (Docker)

I need to use UDP broadcast for peer discovery.
Environment:
Docker Desktop with a single-node Kubernetes cluster
My code looks as follows:
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
public class MainApp {
public static void main(String[] args) throws ExecutionException, InterruptedException {
int inPort = Integer.parseInt(System.getenv("IN_PORT"));
int outPort = Integer.parseInt(System.getenv("OUT_PORT"));
String name = System.getenv("NAME");
Client client = new Client(name, outPort);
Server server = new Server(name, inPort);
ExecutorService service = Executors.newFixedThreadPool(2);
service.submit(client);
service.submit(server).get();
}
static class Client implements Runnable {
final String name;
final int port;
Client(String name, int port) {
this.name = name;
this.port = port;
}
@Override
public void run() {
System.out.println(name + " client started, port = " + port);
try (DatagramSocket socket = new DatagramSocket()) {
socket.setBroadcast(true);
while (!Thread.currentThread().isInterrupted()) {
byte[] buffer = (name + ": hi").getBytes();
DatagramPacket packet = new DatagramPacket(buffer, buffer.length,
InetAddress.getByName("255.255.255.255"), port);
socket.send(packet);
Thread.sleep(1000);
System.out.println("packet sent");
}
} catch (Exception e) {
throw new RuntimeException(e);
}
}
}
static class Server implements Runnable {
final String name;
final int port;
Server(String name, int port) {
this.name = name;
this.port = port;
}
@Override
public void run() {
System.out.println(name + " server started, port = " + port);
try (DatagramSocket socket = new DatagramSocket(port)) {
byte[] buf = new byte[256];
while (!Thread.currentThread().isInterrupted()) {
DatagramPacket packet = new DatagramPacket(buf, buf.length);
socket.receive(packet);
String received = new String(packet.getData(), 0, packet.getLength());
System.out.println(String.format(name + " received '%s' from %s:%d", received,
packet.getAddress().toString(),
packet.getPort()));
}
} catch (Exception e) {
throw new RuntimeException(e);
}
}
}
}
Kubernetes pod settings:
For peer-1:
spec:
containers:
- name: p2p
image: p2p:1.0-SNAPSHOT
env:
- name: NAME
value: "peer-1"
- name: IN_PORT
value: "9996"
- name: OUT_PORT
value: "9997"
For peer-2 :
spec:
containers:
- name: p2p-2
image: p2p:1.0-SNAPSHOT
env:
- name: NAME
value: "peer-2"
- name: IN_PORT
value: "9997"
- name: OUT_PORT
value: "9996"
I used different in/out ports for simplicity's sake. In reality, both peers would use the same port, e.g. 9999.
I see that each pod has a unique IP address:
kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
p2p-deployment-2-59bb89f9d6-ghclv 1/1 Running 0 2m26s 10.1.0.38 docker-desktop <none> <none>
p2p-deployment-567bb5bd77-5cnsl 1/1 Running 0 2m29s 10.1.0.37 docker-desktop <none> <none>
Logs from peer-1:
peer-1 received 'peer-2: hi' from /10.1.0.1:57565
Logs from peer-2:
peer-2 received 'peer-1: hi' from /10.1.0.1:44777
Question: why does peer-1 receive UDP packets from 10.1.0.1 instead of 10.1.0.37?
If I log into the peer-2 container: kubectl exec -it p2p-deployment-2-59bb89f9d6-ghclv -- /bin/bash
Then
socat - UDP-DATAGRAM:255.255.255.255:9996,broadcast
test
test
...
in the peer-1 logs I see peer-1 received 'test' from /10.1.0.1:43144.
Again, why is the source address 10.1.0.1 instead of 10.1.0.37?
Could you please tell me what I'm doing wrong?
Note: when the same port is used to both send and receive UDP packets, a peer can receive a packet whose source is its own IP address. In other words, a peer can only discover its own IP address; packets received from other peers/pods always show 10.1.0.1 as the source.
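One way to skip a peer's own packets when a single shared port is used is to compare the packet's source address with the pod's own IP. A minimal sketch, assuming the pod IP is exposed to the container as a hypothetical POD_IP environment variable (for example via the Kubernetes Downward API):

import java.net.DatagramPacket;
import java.net.DatagramSocket;

public class SelfFilteringServer {
    public static void main(String[] args) throws Exception {
        // Assumption: the pod's own IP is injected via the Downward API (fieldRef: status.podIP) as POD_IP.
        String selfIp = System.getenv("POD_IP");
        int port = Integer.parseInt(System.getenv("IN_PORT"));
        try (DatagramSocket socket = new DatagramSocket(port)) {
            byte[] buf = new byte[256];
            while (!Thread.currentThread().isInterrupted()) {
                DatagramPacket packet = new DatagramPacket(buf, buf.length);
                socket.receive(packet);
                String sender = packet.getAddress().getHostAddress();
                if (sender.equals(selfIp)) {
                    continue; // drop our own broadcast
                }
                System.out.println("peer at " + sender + " said: "
                        + new String(packet.getData(), 0, packet.getLength()));
            }
        }
    }
}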

For some reason, UDP broadcast doesn't work as expected in the Kubernetes network, but multicast works fine.
Thanks to Ron Maupin for suggesting multicast.
Here you can find the Java code + kube config.
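A minimal sketch of the multicast variant (not the linked code; the group address 230.0.0.1 and port 9999 are assumptions, and any multicast group the cluster network permits should work):

import java.net.DatagramPacket;
import java.net.InetAddress;
import java.net.MulticastSocket;

public class MulticastPeer {
    public static void main(String[] args) throws Exception {
        String name = System.getenv("NAME");
        // Assumed multicast group and port; pick any group in 224.0.0.0/4 that your cluster allows.
        InetAddress group = InetAddress.getByName("230.0.0.1");
        int port = 9999;

        // Sender thread: periodically announces this peer to the group.
        Thread sender = new Thread(() -> {
            try (MulticastSocket socket = new MulticastSocket()) {
                while (!Thread.currentThread().isInterrupted()) {
                    byte[] buffer = (name + ": hi").getBytes();
                    socket.send(new DatagramPacket(buffer, buffer.length, group, port));
                    Thread.sleep(1000);
                }
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        });
        sender.setDaemon(true);
        sender.start();

        // Receiver: joins the group and prints every announcement, including the sender's pod IP.
        try (MulticastSocket socket = new MulticastSocket(port)) {
            socket.joinGroup(group);
            byte[] buf = new byte[256];
            while (true) {
                DatagramPacket packet = new DatagramPacket(buf, buf.length);
                socket.receive(packet);
                System.out.printf("%s received '%s' from %s:%d%n", name,
                        new String(packet.getData(), 0, packet.getLength()),
                        packet.getAddress().getHostAddress(), packet.getPort());
            }
        }
    }
}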

Related

Cannot See Published Messages on RabbitMQ Dashboard After Publish Event

I run RabbitMQ through Docker Desktop with the following settings:
rabbitmq:
container_name: rabbitmq
restart: always
ports:
- "5672:5672"
- "15672:15672"
The second port number is for the RabbitMQ Dashboard. I have a basic REST API endpoint which is supposed to publish a RabbitMQ message as follows:
private readonly IMediator _mediator;
private readonly IPublishEndpoint _publish;
public FlightController(IMediator mediator, IPublishEndpoint publish)
{
_mediator = mediator;
_publish = publish;
}
[HttpPost(Name = "CheckoutCrew")]
[ProducesResponseType((int)HttpStatusCode.Accepted)]
public async Task<IActionResult> CheckoutCrew([FromBody] ScheduleFlightCommand command)
{
var crewIds = new List<string>() { command.SeniorCrewId, command.Crew1Id, command.Crew2Id, command.Crew3Id };
var hasSchedule = true;
var crewCheckoutEvent = new CrewCheckoutEvent() { EmployeeNumbers = crewIds, HasSchedule = hasSchedule };
await _publish.Publish(crewCheckoutEvent);
return Accepted();
}
The code below represents the RabbitMQ configuration:
services.AddMassTransit(config => {
config.UsingRabbitMq((ctx, cfg) => {
cfg.Host(Configuration["EventBusSettings:HostAddress"]);
cfg.UseHealthCheck(ctx);
});
});
services.AddMassTransitHostedService();
The Configuration["EventBusSettings:HostAddress"] line points to this entry in appsettings.json:
"EventBusSettings": {
"HostAddress": "amqp://guest:guest#localhost:5672"
}
After running my API (named Flight.API), I check the RabbitMQ logs via Docker Desktop and see these:
2022-03-31 12:52:41.794701+00:00 [info] <0.1020.0> accepting AMQP connection <0.1020.0> (xxx.xx.x.x:45292 -> xxx.xx.x.x:5672)
2022-03-31 12:52:41.817563+00:00 [info] <0.1020.0> Connection <0.1020.0> (xxx.xx.x.x:45292 -> xxx.xx.x.x:5672) has a client-provided name: Flight.API
2022-03-31 12:52:41.820704+00:00 [info] <0.1020.0> connection <0.1020.0> (xxx.xx.x.x:45292 -> xxx.xx.x.x:5672 - Flight.API): user 'guest' authenticated and granted access to vhost '/'
Everything seems okay, doesn't it?
I have also wrapped the .Publish call in a try...catch, but it doesn't throw any exceptions. When my endpoint returns Accepted without any issue, I go and check the RabbitMQ dashboard, but it shows Connections: 0, Channels: 0, etc. The Message rates section also stays idle.
I cannot see what I am missing.
(Currently I do not have any consumers, but I should still see some signs of life, right? The Connections and Channels counters shouldn't stay at 0 after I have successfully published my payload.)
Thank you in advance.
Edit after adding a consumer class
Still no changes on the RabbitMQ Management screens. Everything is at its default values, empty, or idle. Below is my configuration in the consumer project:
services.AddMassTransit(config => {
config.AddConsumer<CrewChecoutConsumer>();
config.UsingRabbitMq((ctx, cfg) => {
cfg.Host(Configuration["EventBusSettings:HostAddress"]);
cfg.UseHealthCheck(ctx);
cfg.ReceiveEndpoint(EventBusConstants.CrewCheckoutQueue, config => {
config.ConfigureConsumer<CrewChecoutConsumer>(ctx);
});
});
});
services.AddMassTransitHostedService();
services.AddScoped<CrewChecoutConsumer>();
The appsettings.json file in the consumer project is changed accordingly:
"EventBusSettings": {
"HostAddress": "amqp://guest:guest#localhost:5672"
}
Below is my complete consumer class:
public class CrewChecoutConsumer : IConsumer<CrewCheckoutEvent>
{
private readonly IMapper _mapper;
private readonly IMediator _mediator;
public CrewChecoutConsumer(IMapper mapper, IMediator mediator)
{
_mapper = mapper;
_mediator = mediator;
}
public async Task Consume(ConsumeContext<CrewCheckoutEvent> context)
{
foreach (var employeeNumber in context.Message.EmployeeNumbers)
{
var query = new GetSingleCrewQuery(employeeNumber);
var crew = await _mediator.Send(query);
crew.HasSchedule = context.Message.HasSchedule;
var updateCrewCommand = new UpdateCrewCommand();
_mapper.Map(crew, updateCrewCommand, typeof(CrewModel), typeof(UpdateCrewCommand));
var result = await _mediator.Send(updateCrewCommand);
}
}
}
If you do not have any consumers, the only thing you will see is a message rate on the published message exchange as messages are delivered to the exchange, but then discarded as there are no receive endpoints (queues) bound to that message type exchange.
Until you have a consumer, you won't see any messages in any queues.
Also, you should pass the controller's CancellationToken to the Publish call.

Configuring Minio server for use with Testcontainers

My application uses Minio for S3-compatible object storage, and I'd like to use the Minio docker image in my integration tests via Testcontainers.
For some very basic tests, I run a GenericContainer using the minio/minio docker image and no configuration except MINIO_ACCESS_KEY and MINIO_SECRET_KEY. My tests then use Minio's Java Client SDK. These work fine and behave just as expected.
But for other integration tests, I need to set up separate users in Minio. As far as I can see, users can only be added to Minio using the Admin API, for which there is no Java client, only the minio/mc docker image (the mc CLI is not available in the minio/minio docker image used for the server).
On the command line, I can use the Admin API like this:
$ docker run --interactive --tty --detach --entrypoint=/bin/sh --name minio_admin minio/mc
The --interactive --tty is a bit of a hack to keep the container running so I can later run commands like this one:
$ docker exec --interactive --tty minio_admin mc admin user add ...
Using Testcontainers, I try to do the same like this:
public void testAdminApi() throws Exception {
GenericContainer mc = new GenericContainer("minio/mc")
.withCommand("/bin/sh")
.withCreateContainerCmdModifier(new Consumer<CreateContainerCmd>() {
@Override
public void accept(CreateContainerCmd cmd) {
cmd
.withAttachStdin(true)
.withStdinOpen(true)
.withTty(true);
}
});
mc.start();
log.info("mc is running: {}", mc.isRunning());
String command = "mc";
Container.ExecResult result = mc.execInContainer(command);
log.info("Executing command '{}' returned exit code '{}' and stdout '{}'", command, result.getExitCode(), result.getStdout());
assertEquals(0, result.getExitCode());
}
The logs show the container being started, but executing a command against it returns exit code 126 and claims it's in a stopped state:
[minio/mc:latest] - Starting container with ID: 4f96fc7583fe62290925472c4c6b329fbeb7a55b38a3c0ad41ee797db1431841
[minio/mc:latest] - Container minio/mc:latest is starting: 4f96fc7583fe62290925472c4c6b329fbeb7a55b38a3c0ad41ee797db1431841
[minio/mc:latest] - Container minio/mc:latest started
minio.MinioAdminTests - mc is running: true
org.testcontainers.containers.ExecInContainerPattern - /kind_volhard: Running "exec" command: mc
minio.MinioAdminTests - Executing command 'mc' returned exit code '126'
and stdout 'cannot exec in a stopped state: unknown'
java.lang.AssertionError: Expected: 0, Actual: 126
After fiddling around with this for hours, I'm running out of ideas. Can anyone help?
Thanks to @glebsts and @bsideup I was able to get my integration tests to work. Here's a minimal example of how to add a user:
public class MinioIntegrationTest {
private static final String ADMIN_ACCESS_KEY = "admin";
private static final String ADMIN_SECRET_KEY = "12345678";
private static final String USER_ACCESS_KEY = "bob";
private static final String USER_SECRET_KEY = "87654321";
private static GenericContainer minioServer;
private static String minioServerUrl;
@BeforeAll
static void setUp() throws Exception {
int port = 9000;
minioServer = new GenericContainer("minio/minio")
.withEnv("MINIO_ACCESS_KEY", ADMIN_ACCESS_KEY)
.withEnv("MINIO_SECRET_KEY", ADMIN_SECRET_KEY)
.withCommand("server /data")
.withExposedPorts(port)
.waitingFor(new HttpWaitStrategy()
.forPath("/minio/health/ready")
.forPort(port)
.withStartupTimeout(Duration.ofSeconds(10)));
minioServer.start();
Integer mappedPort = minioServer.getFirstMappedPort();
Testcontainers.exposeHostPorts(mappedPort);
minioServerUrl = String.format("http://%s:%s", minioServer.getContainerIpAddress(), mappedPort);
// Minio Java SDK uses s3v4 protocol by default, need to specify explicitly for mc
String cmdTpl = "mc config host add myminio http://host.testcontainers.internal:%s %s %s --api s3v4 && "
+ "mc admin user add myminio %s %s readwrite";
String cmd = String.format(cmdTpl, mappedPort, ADMIN_ACCESS_KEY, ADMIN_SECRET_KEY, USER_ACCESS_KEY, USER_SECRET_KEY);
GenericContainer mcContainer = new GenericContainer<>("minio/mc")
.withStartupCheckStrategy(new OneShotStartupCheckStrategy())
.withCreateContainerCmdModifier(containerCommand -> containerCommand
.withTty(true)
.withEntrypoint("/bin/sh", "-c", cmd));
mcContainer.start();
}
@Test
public void canCreateBucketWithAdminUser() throws Exception {
MinioClient client = new MinioClient(minioServerUrl, ADMIN_ACCESS_KEY, ADMIN_SECRET_KEY);
client.ignoreCertCheck();
String bucketName = "foo";
client.makeBucket(bucketName);
assertTrue(client.bucketExists(bucketName));
}
@Test
public void canCreateBucketWithNonAdminUser() throws Exception {
MinioClient client = new MinioClient(minioServerUrl, USER_ACCESS_KEY, USER_SECRET_KEY);
client.ignoreCertCheck();
String bucketName = "bar";
client.makeBucket(bucketName);
assertTrue(client.bucketExists(bucketName));
}
@AfterAll
static void shutDown() {
if (minioServer.isRunning()) {
minioServer.stop();
}
}
}
You could run a one-off container (use OneShotStartupCheckStrategy) with mc and withCommand("your command"), connected to the same network as the minio server you're running (see Networking).
As @bsideup suggested, you can use the one-shot strategy, as shown here.
UPD: added a working test. It is important to know that
when the container is launched, it executes entrypoint + command (this is Docker in general and has nothing to do with Testcontainers). Source: the TC GitHub.
public class TempTest {
@Rule
public Network network = Network.newNetwork();
private String runMcCommand(String cmd) throws TimeoutException {
GenericContainer container = new GenericContainer<>("minio/mc")
.withCommand(cmd)
.withNetwork(network)
.withStartupCheckStrategy(new OneShotStartupCheckStrategy())
.withCreateContainerCmdModifier(command -> command.withTty(true));
container.start();
WaitingConsumer waitingConsumer = new WaitingConsumer();
ToStringConsumer toStringConsumer = new ToStringConsumer();
Consumer<OutputFrame> composedConsumer = toStringConsumer.andThen(waitingConsumer);
container.followOutput(composedConsumer);
waitingConsumer.waitUntilEnd(4, TimeUnit.SECONDS);
return toStringConsumer.toUtf8String();
}
private void showCommandOutput(String cmd) throws TimeoutException {
String res = runMcCommand(cmd);
System.out.printf("Cmd '%s' result:\n----\n%s\n----%n", cmd, res);
}
@Test
public void testAdminApi() throws Exception {
showCommandOutput("ls");
showCommandOutput("version");
}
}
Another option is to reuse the content of the minio/mc Dockerfile, which is small, modify the executed command (a one-off mc by default), and run your own container once per test. Compared to a one-off container, this saves some time if you need to execute multiple commands:
@Rule
public Network network = Network.newNetwork();
@Rule
public GenericContainer mc = new GenericContainer(new ImageFromDockerfile()
.withDockerfileFromBuilder(builder ->
builder
.from("alpine:3.7")
.run("apk add --no-cache ca-certificates && apk add --no-cache --virtual .build-deps curl && curl https://dl.minio.io/client/mc/release/linux-amd64/mc > /usr/bin/mc && chmod +x /usr/bin/mc && apk del .build-deps")
.cmd("/bin/sh", "-c", "while sleep 3600; do :; done")
.build())
)
.withNetwork(network);
public void myTest() {
mc.execInContainer("mc blah");
mc.execInContainer("mc foo");
}
Basically, it runs an image with mc installed that sleeps in one-hour intervals, which is enough for your tests. While it runs, you can execute commands, etc. After you finish, it is killed.
Your minio container can be on the same network, as sketched below.
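To tie the two containers together, the mc container can address the minio server through a network alias on a shared Testcontainers Network. A rough sketch, assuming demo credentials admin/12345678 and a hypothetical alias "minio":

import org.testcontainers.containers.GenericContainer;
import org.testcontainers.containers.Network;
import org.testcontainers.containers.startupcheck.OneShotStartupCheckStrategy;

public class MinioSameNetworkSketch {
    public static void main(String[] args) {
        Network network = Network.newNetwork();

        // Server container, reachable from other containers on this network as "minio".
        GenericContainer minio = new GenericContainer("minio/minio")
                .withNetwork(network)
                .withNetworkAliases("minio")
                .withEnv("MINIO_ACCESS_KEY", "admin")      // assumed demo credentials
                .withEnv("MINIO_SECRET_KEY", "12345678")
                .withCommand("server /data")
                .withExposedPorts(9000);
        minio.start();

        // One-off mc container on the same network, addressing the server by its alias.
        String cmd = "mc config host add myminio http://minio:9000 admin 12345678 --api s3v4"
                + " && mc admin user add myminio bob 87654321";
        GenericContainer mcContainer = new GenericContainer<>("minio/mc")
                .withNetwork(network)
                .withStartupCheckStrategy(new OneShotStartupCheckStrategy())
                .withCreateContainerCmdModifier(c -> c.withEntrypoint("/bin/sh", "-c", cmd));
        mcContainer.start();
    }
}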
Minio with docker compose:
For those who are looking for an S3 integration test with the Minio object server: the implementation below is based on docker-compose and uses the AWS S3 client for CRUD operations.
docker-compose file:
version: '3.7'
services:
minio-service:
image: quay.io/minio/minio
command: minio server /data
ports:
- "9000:9000"
environment:
MINIO_ROOT_USER: minio
MINIO_ROOT_PASSWORD: minio123
The actual IntegrationTest class:
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.S3Object;
import org.junit.jupiter.api.*;
import org.testcontainers.containers.DockerComposeContainer;
import java.io.File;
@TestInstance(TestInstance.Lifecycle.PER_CLASS)
class MinioIntegrationTest {
private static final DockerComposeContainer minioContainer = new DockerComposeContainer<>(new File("src/test/resources/docker-compose.yml"))
.withExposedService("minio-service", 9000);
private static final String MINIO_ENDPOINT = "http://localhost:9000";
private static final String ACCESS_KEY = "minio";
private static final String SECRET_KEY = "minio123";
private AmazonS3 s3Client;
@BeforeAll
void setupMinio() {
minioContainer.start();
initializeS3Client();
}
@AfterAll
void closeMinio() {
minioContainer.close();
}
private void initializeS3Client() {
String name = Regions.US_EAST_1.getName();
AwsClientBuilder.EndpointConfiguration endpoint = new AwsClientBuilder.EndpointConfiguration(MINIO_ENDPOINT, name);
s3Client = AmazonS3ClientBuilder.standard()
.withCredentials(new AWSStaticCredentialsProvider(new BasicAWSCredentials(ACCESS_KEY, SECRET_KEY)))
.withEndpointConfiguration(endpoint)
.withPathStyleAccessEnabled(true)
.build();
}
@Test
void shouldReturnActualContentBasedOnBucketName() throws Exception{
String bucketName = "test-bucket";
String key = "s3-test";
String content = "Minio Integration test";
s3Client.createBucket(bucketName);
s3Client.putObject(bucketName, key, content);
S3Object object = s3Client.getObject(bucketName, key);
byte[] actualContent = new byte[22];
object.getObjectContent().read(actualContent);
Assertions.assertEquals(content, new String(actualContent));
}
}

Masstransit in docker using Request/Response model, Request Consumer exception, host not found while responding

I'm quite new to MassTransit/RabbitMQ and I have encountered a problem I cannot deal with.
I have a RabbitMQ server running in Docker, and also a small microservice in a Docker container which consumes an event. Besides this, I run a Windows service on the host machine whose task is to send the event to the microservice via the MassTransit request/response model. The interesting thing is that the event arrives at the consumer as expected, but when I try to respond via context.RespondAsync from the Consume method, I get an exception:
R-FAULT rabbitmq://autbus/exi_bus 80c60000-eca5-3065-0093-08d62a09d168 HwExi.Extensions.Events.ReservationCreateOrUpdateEvent HwExi.Api.Consumers.ReservationCrateOrUpdateConsumer(00:00:07.8902444) The host was not found for the specified address: rabbitmq://127.0.0.1/bus-SI-GEPE-HwService.Api-oddyyy8cwwagkoscbdmnwncfrg?durable=false&autodelete=true, MassTransit.EndpointNotFoundException: The host was not found for the specified address: rabbitmq://127.0.0.1/bus-SI-GEPE-HwService.Api-oddyyy8cwwagkoscbdmnwncfrg?durable=false&autodelete=true
I'm using this model for messaging between microservices without any problem, and it works properly on another queue.
Here is the YAML of the microservice / bus:
exiapi:
image: exiapi
build:
context: .
dockerfile: Service/HwExi.Api/Dockerfile
ports:
- "54542:80"
environment:
"BUS_USERNAME": "guest"
"BUS_PASSWORD": "guest"
"BUS_HOST": "rabbitmq://autbus"
"BUS_URL": "exi_bus"
autbus:
image: rabbitmq:3-management
hostname: autbus
ports:
- "15672:15672"
- "5672:5672"
- "5671:5671"
volumes:
- ~/rabbitmq:/var/lib/rabbitmq/mnesia
The config of the Windows service:
"Bus": {
"Username": "guest",
"Password": "guest",
"Host": "rabbitmq://127.0.0.1",
"Url": "exi_bus"
},
The Windows service connects like this:
var builder = new ContainerBuilder();
builder.Register(context =>
{
return Bus.Factory.CreateUsingRabbitMq(rmq =>
{
var host = rmq.Host(new Uri(options.Value.Bus.Host), "/", h =>
{
h.Username(options.Value.Bus.Username);
h.Password(options.Value.Bus.Password);
});
rmq.ExchangeType = ExchangeType.Fanout;
});
}).As<IBusControl>().As<IBus>().As<IPublishEndpoint>().SingleInstance();
The microservice inside the container connects like this:
public static class BusExtension
{
public static void InitializeBus(this ContainerBuilder builder, Assembly assembly)
{
builder.Register(context =>
{
return Bus.Factory.CreateUsingRabbitMq(rmq =>
{
var host = rmq.Host(new Uri(Constants.Bus.Host), "/", h =>
{
h.Username(Constants.Bus.UserName);
h.Password(Constants.Bus.Password);
});
rmq.ExchangeType = ExchangeType.Fanout;
rmq.ReceiveEndpoint(host, Constants.Bus.Url, configurator =>
{
configurator.LoadFrom(context);
});
});
}).As<IBusControl>().As<IBus>().As<IPublishEndpoint>().SingleInstance();
builder.RegisterConsumers(assembly);
}
public static void StartBus(this IContainer container, IApplicationLifetime lifeTime)
{
var bus = container.Resolve<IBusControl>();
var busHandler = TaskUtil.Await(() => bus.StartAsync());
lifeTime.ApplicationStopped.Register(() => busHandler.Stop());
}
}
Then the Windows service fires the event like this:
var reservation = ReservationRepository.Get(message.KeyId, message.KeySource);
var operation = await ReservationCreateOrUpdateClient.Request(new ReservationCreateOrUpdateEvent { Reservation = reservation });
if (!operation.Success)
{
Logger.LogError("Fatal error while sending reservation create or update message to exi web service");
return;
}
Finally, the microservice handles the event like this:
public class ReservationCrateOrUpdateConsumer : IConsumer<ReservationCreateOrUpdateEvent>
{
public async Task Consume(ConsumeContext<ReservationCreateOrUpdateEvent> context)
{
await context.RespondAsync(new MessageOperationResult<bool>
{
Result = true,
Success = true
});
}
}
I'm using Autofac to register the request client in the Windows service:
Timeout = TimeSpan.FromSeconds(20);
ServiceAddress = new Uri($"{Configurarion.Bus.Host}/{Configurarion.Bus.Url}");
builder.Register(c => new MessageRequestClient<ReservationCreateOrUpdateEvent, MessageOperationResult<bool>>(c.Resolve<IBus>(), ServiceAddress, Timeout))
.As<IRequestClient<ReservationCreateOrUpdateEvent, MessageOperationResult<bool>>>().SingleInstance();
Can anybody help me debug this? Please also share your opinion on whether this structure is a proper one: maybe I should use HTTPS to send the message from the client machine to my microservice environment and convert it to the bus via a gateway, or is a similar approach more suitable? Thanks.

Cannot connect to Solace Cloud

I am following the Solace tutorial for Publish/Subscribe (link: https://dev.solace.com/samples/solace-samples-java/publish-subscribe/). Therefore, there shouldn't be anything "wrong" with the code.
I am trying to get my TopicSubscriber to connect to the cloud. After building my jar I run the following command:
java -cp target/SOM_Enrichment-1.0-SNAPSHOT.jar TopicSubscriber <host:port> <client-username@message-vpn> <password>
(with the appropriate fields filled in)
I get the following error:
TopicSubscriber initializing...
Jul 12, 2018 2:27:56 PM com.solacesystems.jcsmp.protocol.impl.TcpClientChannel call
INFO: Connecting to host 'blocked out' (host 1 of 1, smfclient 2, attempt 1 of 1, this_host_attempt: 1 of 1)
Jul 12, 2018 2:28:17 PM com.solacesystems.jcsmp.protocol.impl.TcpClientChannel call
INFO: Connection attempt failed to host 'blocked out' ConnectException com.solacesystems.jcsmp.JCSMPTransportException: ('blocked out') - Error communicating with the router. cause: java.net.ConnectException: Connection timed out: no further information ((Client name: 'blocked out' Local port: -1 Remote addr: 'blocked out') - )
Jul 12, 2018 2:28:20 PM com.solacesystems.jcsmp.protocol.impl.TcpClientChannel close
INFO: Channel Closed (smfclient 2)
Exception in thread "main" com.solacesystems.jcsmp.JCSMPTransportException: (Client name: 'blocked out' Local port: -1 Remote addr: 'blocked out') - Error communicating with the router.
Below is the TopicSubscriber.java file:
import java.util.concurrent.CountDownLatch;
import com.solacesystems.jcsmp.BytesXMLMessage;
import com.solacesystems.jcsmp.JCSMPException;
import com.solacesystems.jcsmp.JCSMPFactory;
import com.solacesystems.jcsmp.JCSMPProperties;
import com.solacesystems.jcsmp.JCSMPSession;
import com.solacesystems.jcsmp.TextMessage;
import com.solacesystems.jcsmp.Topic;
import com.solacesystems.jcsmp.XMLMessageConsumer;
import com.solacesystems.jcsmp.XMLMessageListener;
public class TopicSubscriber {
public static void main(String... args) throws JCSMPException {
// Check command line arguments
if (args.length != 3 || args[1].split("@").length != 2) {
System.out.println("Usage: TopicSubscriber <host:port> <client-username@message-vpn> <client-password>");
System.out.println();
System.exit(-1);
}
if (args[1].split("@")[0].isEmpty()) {
System.out.println("No client-username entered");
System.out.println();
System.exit(-1);
}
if (args[1].split("@")[1].isEmpty()) {
System.out.println("No message-vpn entered");
System.out.println();
System.exit(-1);
}
System.out.println("TopicSubscriber initializing...");
final JCSMPProperties properties = new JCSMPProperties();
properties.setProperty(JCSMPProperties.HOST, args[0]); // host:port
properties.setProperty(JCSMPProperties.USERNAME, args[1].split("@")[0]); // client-username
properties.setProperty(JCSMPProperties.PASSWORD, args[2]); // client-password
properties.setProperty(JCSMPProperties.VPN_NAME, args[1].split("@")[1]); // message-vpn
final Topic topic = JCSMPFactory.onlyInstance().createTopic("tutorial/topic");
final JCSMPSession session = JCSMPFactory.onlyInstance().createSession(properties);
session.connect();
final CountDownLatch latch = new CountDownLatch(1); // used for
// synchronizing b/w threads
/** Anonymous inner-class for MessageListener
* This demonstrates the async threaded message callback */
final XMLMessageConsumer cons = session.getMessageConsumer(new XMLMessageListener() {
@Override
public void onReceive(BytesXMLMessage msg) {
if (msg instanceof TextMessage) {
System.out.printf("TextMessage received: '%s'%n",
((TextMessage) msg).getText());
} else {
System.out.println("Message received.");
}
System.out.printf("Message Dump:%n%s%n", msg.dump());
latch.countDown(); // unblock main thread
}
@Override
public void onException(JCSMPException e) {
System.out.printf("Consumer received exception: %s%n", e);
latch.countDown(); // unblock main thread
}
});
session.addSubscription(topic);
System.out.println("Connected. Awaiting message...");
cons.start();
// Consume-only session is now hooked up and running!
try {
latch.await(); // block here until message received, and latch will flip
} catch (InterruptedException e) {
System.out.println("I was awoken while waiting");
}
// Close consumer
cons.close();
System.out.println("Exiting.");
session.closeSession();
}
}
Any help would be greatly appreciated.
java.net.ConnectException: Connection timed out
The log entry indicates that network connectivity to the specified DNS name/IP address cannot be established.
Next steps include:
Verifying that you are able to resolve the DNS name to an IP address.
Verifying that the correct DNS name/IP address/port is in use. You need the "SMF Host" from the Solace Cloud connection details.
Verifying that the IP address/port is not blocked by an intermediate network device (a small Java connectivity check is sketched below).
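As a substitute for telnet, a small Java check can confirm both DNS resolution and TCP reachability. A sketch with a placeholder host and port; substitute the SMF host and port from your own Solace Cloud connection details:

import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.Socket;

public class SolaceConnectivityCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder host and port: use the "SMF Host" and port from your connection details.
        String host = "mr-example.messaging.solace.cloud";
        int port = 55555;

        // Step 1: can we resolve the DNS name at all?
        InetAddress address = InetAddress.getByName(host);
        System.out.println(host + " resolves to " + address.getHostAddress());

        // Step 2: can we open a TCP connection to that port within 5 seconds?
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(address, port), 5_000);
            System.out.println("TCP connection to " + host + ":" + port + " succeeded");
        }
    }
}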

JavaMail store.connect() times out - Can't read gmail Inbox through Java

I am trying to connect to my Gmail inbox to read messages through a Java application. I am using:
jdk1.6.0_13
javamail-1.4.3 libs (mail.jar, mailapi.jar, imap.jar)
Below is my code: MailReader.java
import java.util.Properties;
import javax.mail.MessagingException;
import javax.mail.Session;
import javax.mail.Store;
public class MailReader
{
public static void main(String[] args)
{
readMail();
}
public static void readMail()
{
Properties props = System.getProperties();
props.setProperty("mail.store.protocol", "imaps");
try
{
Session session = Session.getDefaultInstance(props, null);
Store store = session.getStore("imaps");
store.connect("imap.gmail.com", "myEmailId#gmail.com", "myPwd");
System.out.println("Store Connected..");
//inbox = (Folder) store.getFolder("Inbox");
//inbox.open(Folder.READ_WRITE);
//Further processing of inbox....
}
catch (MessagingException e)
{
e.printStackTrace();
}
}
}
I expect the store to get connected, but the call to store.connect() never returns and I get the output below:
javax.mail.MessagingException: Connection timed out;
  nested exception is:
    java.net.ConnectException: Connection timed out
    at com.sun.mail.imap.IMAPStore.protocolConnect(IMAPStore.java:441)
    at javax.mail.Service.connect(Service.java:233)
    at javax.mail.Service.connect(Service.java:134)
    at ReadMail.readMail(ReadMail.java:21)
    at ReadMail.main(ReadMail.java:10)
However, I am able to SEND email from Java using SMTP, Transport.send(), and the same Gmail account, but I cannot read emails.
What can the solution be?
IMAP works on a different port (143 for non-secure, 993 for secure) than SMTP (25), and I suspect that port is blocked. Can you telnet to that port on that server, e.g.:
telnet imap.gmail.com {port number}
That will indicate whether you have network connectivity.
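If telnet isn't available, a similar check can be made from JavaMail itself by pinning the secure IMAP port and setting connect/read timeouts, so a blocked port fails quickly instead of hanging. A sketch with placeholder credentials:

import java.util.Properties;
import javax.mail.Session;
import javax.mail.Store;

public class ImapPortCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.setProperty("mail.store.protocol", "imaps");
        // Be explicit about the secure IMAP port and fail fast instead of hanging.
        props.setProperty("mail.imaps.port", "993");
        props.setProperty("mail.imaps.connectiontimeout", "10000"); // milliseconds
        props.setProperty("mail.imaps.timeout", "10000");

        Session session = Session.getInstance(props, null);
        Store store = session.getStore("imaps");
        // Placeholder credentials: replace with your own account and password.
        store.connect("imap.gmail.com", "myEmailId@gmail.com", "myPwd");
        System.out.println("Store connected: " + store.isConnected());
        store.close();
    }
}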
