Spring Boot microservice and Mongo container - Docker

I have a Spring Boot microservice that calls a Mongo DB.
To set it up on my local machine, I ran a Mongo DB container in my local Docker at localhost:27017.
I then stood up the Spring Boot microservice on port 8082, and that worked.
I now want to run both of them in Docker, but I am unable to get the app running there.
Steps:
1. Docker container for Mongo:
docker run -d -p 27017:27017 --name mongo mongo:latest
2. Built the image for my Spring Boot app:
docker build -f Dockerfile -t myApp .
Dockerfile:
FROM dtr-<My Corp Base Image>
ADD build/libs/app.jar app.jar
ENTRYPOINT ["java","-jar","app.jar"]
3. Bring up the app in a container and link it to the Mongo DB:
docker run -p 8082:8082 -e "SPRING_PROFILES_ACTIVE=local" --name myApp-containerName --link=mongo myApp-ImageName
My Error:
Exception encountered during context initialization - cancelling
refresh attempt:
org.springframework.beans.factory.UnsatisfiedDependencyException:
Error creating bean with name 'zzzzz' defined in URL
[jar:file:/app.jar!/BOOT-INF/classes!/com/uscm/ratabase/service/ZZZZ.class]:
Unsatisfied dependency expressed through constructor parameter 0;
nested exception is
org.springframework.beans.factory.BeanCreationException: Error
creating bean with name 'ZZZZZZ': Invocation of init method failed;
nested exception is
org.springframework.dao.DataAccessResourceFailureException: Timed out
after 30000 ms while waiting for a server that matches
WritableServerSelector. Client view of cluster state is {type=UNKNOWN,
servers=[{address=localhost:27017, type=UNKNOWN, state=CONNECTING,
exception={com.mongodb.MongoSocketOpenException: Exception opening
socket}, caused by {java.net.ConnectException: Connection refused}}];
nested exception is com.mongodb.MongoTimeoutException: Timed out after
30000 ms while waiting for a server that matches
WritableServerSelector. Client view of cluster state is {type=UNKNOWN,
servers=[{address=localhost:27017, type=UNKNOWN, state=CONNECTING,
exception={com.mongodb.MongoSocketOpenException: Exception opening
socket}, caused by {java.net.ConnectException: Connection refused}}]
2019-06-13 15:49:14.769 ERROR [ZZZZZZ,,,] 1 --- [ main]
o.s.boot.SpringApplication : Application startup failed

Make sure your Mongo connection is not left to auto-configuration. After a lot of hair-pulling I realized that my issue was not with the containers but with Mongo auto-configuration, which by default connects to nothing but localhost.
Create a MongoClient,
use that in a MongoDbFactory,
and use that in a MongoTemplate.
Annotate all of that with a @Configuration annotation.
Also exclude Mongo from auto-configuration; that is how you end up with a manually configured Mongo.
Test it with profiles so you can try different hostnames. Once you get that working,
dockerize it, and if your ports etc. are set up properly, you should be able to connect the two containers.
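For illustration, here is a minimal sketch of such a manual configuration (my addition, not the original poster's code). It assumes the pre-4.x com.mongodb.MongoClient API and Spring Data's MongoDbFactory, and it invents a mongo.host property so the "local" profile can point at localhost while the dockerized profile points at the linked container name mongo:

// Minimal sketch of a manual Mongo configuration. Assumptions: Spring Boot 1.5/2.x-era
// MongoDbFactory API and an illustrative "mongo.host" property; names are examples only.
// Auto-configuration is switched off elsewhere, e.g.
// @SpringBootApplication(exclude = MongoAutoConfiguration.class)
import com.mongodb.MongoClient;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.mongodb.MongoDbFactory;
import org.springframework.data.mongodb.core.MongoTemplate;
import org.springframework.data.mongodb.core.SimpleMongoDbFactory;

@Configuration
public class MongoConfig {

    // Set per profile: application-local.yml -> localhost, application-docker.yml -> mongo
    @Value("${mongo.host:localhost}")
    private String mongoHost;

    @Value("${mongo.database:mydb}")
    private String database;

    @Bean
    public MongoClient mongoClient() {
        return new MongoClient(mongoHost, 27017);
    }

    @Bean
    public MongoDbFactory mongoDbFactory(MongoClient mongoClient) {
        return new SimpleMongoDbFactory(mongoClient, database);
    }

    @Bean
    public MongoTemplate mongoTemplate(MongoDbFactory mongoDbFactory) {
        return new MongoTemplate(mongoDbFactory);
    }
}

With --link=mongo as in the question, the hostname visible inside the app container is mongo, not localhost, which is exactly why the host has to be property driven rather than left at the auto-configured default.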

Related

JaegerTracing: Jaeger Ingester unable to read from Kafka queue and store into Elasticsearch

I am new to Jaeger and Kafka, and I am trying to use Kafka as an intermediate buffer.
I am using OpenTelemetry to send data to the Jaeger-Collector directly using -Dotel.exporter.jaeger.endpoint.
The Jaeger-Collector is deployed on Kubernetes, and Kafka is on another network but is accessible. I can confirm that the traces are sent to the Jaeger-Collector.
Hitting the collector's /metrics endpoint tells me that spans were written successfully to Kafka.
jaeger_kafka_spans_written_total{status="success"} 21
The collector logs indicate which topic I am writing to:
{"Brokers":["myKafkaBroker......"}},"topic":"tp6"}
I want to get this span data from the Kafka queue into Elasticsearch. To do this I am starting the Jaeger Ingester as follows:
docker run -e "SPAN_STORAGE_TYPE=elasticsearch" jaegertracing/jaeger-ingester:1.22 --kafka.consumer.topic=tp6 --kafka.consumer.brokers='myKafkaBroker' --es.tls.skip-host-verify
But the container stops after a fatal error:
{"level":"fatal","ts":1615546463.7784193,"caller":"command-line-arguments/main.go:64","msg":"Failed to init storage factory","error":"failed to create primary Elasticsearch client: health check timeout: Head \"http://127.0.0.1:9200\": dial tcp 127.0.0.1:9200: connect: connection refused: no Elasticsearch node available","stacktrace":"main.main.func1\n\tcommand-line-arguments/main.go:64\ngithub.com/spf13/cobra.(*Command).execute\n\tgithub.com/spf13/cobra#v0.0.7/command.go:838\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\tgithub.com/spf13/cobra#v0.0.7/command.go:943\ngithub.com/spf13/cobra.(*Command).Execute\n\tgithub.com/spf13/cobra#v0.0.7/command.go:883\nmain.main\n\tcommand-line-arguments/main.go:113\nruntime.main\n\truntime/proc.go:204"}
Elasticsearch and the ingester are being run on the same machine using Docker. Elasticsearch is running in Docker using
docker run -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:7.11.2
I have disabled TLS so that shouldn't be a problem. I am unable to get this to work.
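Note that the fatal error shows the ingester dialing 127.0.0.1:9200, which from inside the ingester container is the container itself rather than the host running Elasticsearch. As a hedged sketch only (not a verified fix), one way to wire the two containers together is a shared user-defined network plus an explicit --es.server-urls value:

# Sketch under assumptions: a user-defined bridge network named jaeger-net, and the
# Elasticsearch container reachable by its container name "elasticsearch".
docker network create jaeger-net

docker run -d --name elasticsearch --net jaeger-net -p 9200:9200 -p 9300:9300 \
  -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:7.11.2

docker run --net jaeger-net -e "SPAN_STORAGE_TYPE=elasticsearch" jaegertracing/jaeger-ingester:1.22 \
  --kafka.consumer.topic=tp6 --kafka.consumer.brokers='myKafkaBroker' \
  --es.server-urls=http://elasticsearch:9200 --es.tls.skip-host-verify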

Clustering vernemq docker containers running on different machines

I was hoping this would be an easy one, by just using the snippet below in the second instance's docker-compose.yml file:
- DOCKER_VERNEMQ_DISCOVERY_NODE=<ip address of the first instance>
but that doesn't seem to work.
Log of the second instance confirms it's attempting to cluster:
13:56:09.795 [info] Sent join request to: 'VerneMQ@<ip address of the first instance>'
13:56:16.800 [info] Unable to connect to 'VerneMQ@<ip address of the first instance>'
The log of the first instance, meanwhile, does not show anything at all.
From within the second instance I can confirm that the endpoint is accessible:
$ docker exec -it vernemq /bin/sh
$ curl <ip address of the first instance>:44053
curl: (56) Recv failure: Connection reset by peer
Then, in the log of the first instance, I see an error which is totally expected and confirms I've reached the first instance:
13:58:33.572 [error] CRASH REPORT Process <0.3050.0> with 0 neighbours crashed with reason: bad argument in vmq_cluster_com:process_bytes/3 line 142
13:58:33.572 [error] Ranch listener {{172,19,0,2},44053} terminated with reason: bad argument in vmq_cluster_com:process_bytes/3 line 142
It might have to do with the fact that the IP address as seen from within the Docker container is 172.19.0.2, while the external one is 10. ....
I also tried adding the hostname of the first instance to known_hosts, to no avail.
Please advise.
I'm using erlio/docker-vernemq:1.10.0
$ docker --version
Docker version 19.03.13, build 4484c46d9d
$ docker-compose --version
docker-compose version 1.27.2, build 18f557f9
I managed to get this sorted by creating a Docker overlay network:
on machine1: docker swarm init
on machine2: docker swarm join --token ...
on machine1: docker network create --driver=overlay --attachable vernemq-overlay-net
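As an optional sanity check (my addition, not part of the original answer), you can confirm the attachable overlay network exists before pointing compose at it:

# On machine1 (the swarm manager) the network should be listed right away.
docker network ls --filter name=vernemq-overlay-net
docker network inspect vernemq-overlay-net

On machine2 it may only appear once a container actually attaches to it; see the note at the end of this answer.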
The relevant bits of my docker-compose.yml are:
version: '3.6'

services:
  vernemq:
    container_name: ${NODE_NAME:?Node name not specified}
    image: vernemq/vernemq:1.10.4.1
    environment:
      - DOCKER_VERNEMQ_NODENAME=${NODE_NAME:?Node name not specified}
      - DOCKER_VERNEMQ_DISCOVERY_NODE=${DISCOVERY_NODE:-}

networks:
  default:
    external:
      name: vernemq-overlay-net
with the following env vars:
machine1:
NODE_NAME=vernemq1.example.com
DISCOVERY_NODE=
machine2:
NODE_NAME=vernemq2.example.com
DISCOVERY_NODE=vernemq1.example.com
Note:
As far as I remember, chances are machine2 won't find vernemq-overlay-net due to a bug in docker-compose.
In that case, start a container with plain docker: docker run -dit --name alpine --net=vernemq-overlay-net alpine, which will make the network available to docker-compose.
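To make the per-machine variables concrete, a small sketch of how the two deployments might be driven (my addition; it relies on docker-compose's standard .env file handling, and the values are the ones listed above):

# machine1: .env
NODE_NAME=vernemq1.example.com
DISCOVERY_NODE=

# machine2: .env
NODE_NAME=vernemq2.example.com
DISCOVERY_NODE=vernemq1.example.com

# Then, on each machine, from the directory containing docker-compose.yml and .env:
docker-compose up -d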

Unable to bind port 80

I'm using Docker Compose to run a simple web server project I created. This configuration had been working fine for months, but it suddenly stopped working after I was away from the office for two weeks.
It works when I map my ports like this: 8080:80, but I don't want to have to type out port 8080 every time. I used netstat -a -n -o | findstr /c:80 to find the process ID of the process listening on port 80, and tasklist /fi "pid eq 4" to find out the name of that process.
It turns out it's some system process, so I'm not sure what to do about that. I've uninstalled Skype and checked that the World Wide Web Publishing Service isn't running. Does anybody have an explanation or ideas on how to fix this?
Thanks in advance.
update
When I run net stop http and stop all dependent services with it, port 80 is free. The services being stopped are: Windows Remote Management (WS-Management), SSDP Discovery, Print Spooler, BranchCache and, of course, HTTP. Which of these could be the culprit?
update 2
I have now stopped those services one by one, and after testing each of them it seems BranchCache is responsible for this. More testing ensues.
docker-compose.yml
version: "3"
services:
vote-client:
build:
context: .
dockerfile: Dockerfile
ports:
- "80:80"
Dockerfile
FROM nginx
COPY ./html /usr/share/nginx/html
When I run docker-compose up, this is my output:
docker-compose up --build
Removing vote-client_vote-client_1
Building vote-client
Step 1/2 : FROM nginx
---> 42b4762643dc
Step 2/2 : COPY ./html /usr/share/nginx/html
---> Using cache
---> a1aade2a299e
Successfully built a1aade2a299e
Successfully tagged vote-client_vote-client:latest
Recreating c2654f31dcff_vote-client_vote-client_1 ... error
ERROR: for c2654f31dcff_vote-client_vote-client_1 Cannot start service vote-client: driver failed programming external connectivity on endpoint vote-client_vote-client_1 (2188c8607a04ba2388a661504601431d6b30825d595dafae0c318f2d2b5685b0): Error starting userland proxy: Bind for 0.0.0.0:80: unexpected error Permission denied
ERROR: for vote-client Cannot start service vote-client: driver failed programming external connectivity on endpoint vote-client_vote-client_1 (2188c8607a04ba2388a661504601431d6b30825d595dafae0c318f2d2b5685b0): Error starting userland proxy: Bind for 0.0.0.0:80: unexpected error Permission denied
ERROR: Encountered errors while bringing up the project.
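A hedged follow-up to update 2 above (my addition, not from the original post): on most Windows builds the BranchCache service is registered under the name PeerDistSvc, so if BranchCache really is what reserves port 80 via HTTP.SYS, it can be stopped and prevented from doing so again with the commands below, run from an elevated command prompt.

:: Assumption: PeerDistSvc is the BranchCache service name on this Windows version.
net stop PeerDistSvc
sc config PeerDistSvc start= disabled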

startNodeManager.sh not found

I have been trying to run Oracle WebLogic in Docker containers, and I am facing trouble starting the NodeManager. I ran the following command:
docker run -d --name MS1 --link wlsadmin:wlsadmin -p 8001:8001 -e ADMIN_PASSWORD=#123 \
-e MS_NAME=MS1 --volumes-from wlsadmin a5e55 createServer.sh
Under normal circumstances this is expected to start the NodeManager.
I am able to access the WebLogic console and start the Managed Server, which then returns the error:
-- Warning For server MS1, the Node Manager associated with machine Machine_MS1 is not reachable
This is the part of the log that is returned when executing the above "docker run" command:
Domain Home: /u01/oracle/user_projects/domains/base_domain
Managed Server Name: MS1
NodeManager Name:
----> 'weblogic' admin password: ctebs#123
Waiting for WebLogic Admin Server on wlsadmin:7001 to become available...
WebLogic Admin Server is now available. Proceeding...
Setting NodeManager
----> No NodeManager Name set
Node Manager Name: Machine_MS1
Node Manager Home for Container: /u01/oracle/user_projects/domains/base_domain/Machine_MS1
cp: cannot stat '/u01/oracle/user_projects/domains/base_domain /bin/startNodeManager.sh': No such file or directory
cp: cannot stat '/u01/oracle/user_projects/domains/base_domain/nodemanager/*': No such file or directory
NODEMGR_HOME_STR: NODEMGR_HOME="/u01/oracle/user_projects/domains/base_domain/Machine_MS1"
NODEMGRHOME_STR: NodeManagerHome=/u01/oracle/user_projects/domains/base_domain/Machine_MS1
DOMAINSFILE_STR: DomainsFile=/u01/oracle/user_projects/domains/base_domain/Machine_MS1/nodemanager.domains
LOGFILE_STR: LogFile=/u01/oracle/user_projects/domains/base_domain/Machine_MS1/nodemanager.log
sed: can't read /u01/oracle/user_projects/domains/base_domain/Machine_MS1/startNodeManager.sh: No such file or directory
sed: can't read /u01/oracle/user_projects/domains/base_domain/Machine_MS1/nodemanager.properties: No such file or directory
sed: can't read /u01/oracle/user_projects/domains/base_domain/Machine_MS1/nodemanager.properties: No such file or directory
sed: can't read /u01/oracle/user_projects/domains/base_domain/Machine_MS1/nodemanager.properties: No such file or directory
Starting NodeManager in background...
NodeManager started.
Connection refused (Connection refused). Could not connect to NodeManager. Check that it is running at /172.17.0.3:5556.
Starting server MS1 ...No stack trace available.
This Exception occurred at Tue Dec 12 03:38:06 GMT 2017.
weblogic.management.scripting.ScriptException: Error occurred while performing start : Server with name MS1 failed to be started
No stack trace available.
How can I get past this error message?
You can try and follow this OracleWebLogic workshop intro, which points out:
The ~/docker-images/OracleWebLogic/samples/1221-domain/container-scripts folder has useful Bash and WLST scripts that provide three possible modes to run WebLogic Managed Servers in a Docker container. Make sure you have an AdminServer container running before starting a ManagedServer container.
The sample scripts will, by default, attempt to find the AdminServer running at t3://wlsadmin:8001. You can change this.
But most importantly, the AdminServer container has to be linked with Docker's --link parameter.
Below are the three suggested ways of running a ManagedServer container within the sample 12c-domain:
Start NodeManager (Manually):
docker run -d --link wlsadmin:wlsadmin startNodeManager.sh
Start NodeManager and Create a Machine Automatically:
docker run -d --link wlsadmin:wlsadmin createMachine.sh
Start NodeManager, Create a Machine, and Create a ManagedServer Automatically
docker run -d --link wlsadmin:wlsadmin createServer.sh
See more at "Example of Image with WLS Domain", removed in commit e49bb4d in Apr. 2019, two years later, since Oracle no longer supports those WebLogic versions.
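For orientation, a hedged concretization of the third mode above (my addition): each of the sample commands still needs your domain image name before the script, plus any ports and environment your setup requires. Assuming the sample domain image was built and tagged as 1221-domain, and reusing the environment variables from the question, it would look roughly like:

# Sketch only: "1221-domain" is an assumed image tag; substitute your own image,
# e.g. the a5e55 image ID used in the question.
docker run -d --name MS1 --link wlsadmin:wlsadmin -p 8001:8001 \
  -e ADMIN_PASSWORD=#123 -e MS_NAME=MS1 \
  --volumes-from wlsadmin 1221-domain createServer.sh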

Spring Cloud Samples Eureka - Docker - Use of underscore in link

I may have encountered an interesting anomaly with the use of Spring Cloud, Eureka and Docker. I am not sure if I have uncovered an issue or if the behavior is expected, but here is the gist.
I start first with Eureka running in a named Docker container. Next, I launch a Docker client with ClientDiscoveryEnabled. The client container uses Docker's "link" parameter to gain hostname accessibility into the Eureka container. The YAML file has an entry for connecting to Eureka that is property driven:
defaultZone: http://user:${eureka.password}@${host.name}:8761/eureka/
Everything works just great, unless I attempt to use an underscore in my container name. If I use an underscore to name my container, the client container cannot fully resolve this name for Eureka registration. If I remove the underscore, everything works fine. Perhaps I missed something and this is expected, but I have not seen any mention of this "feature".
My client is based on the Spring-Cloud-Samples feign-eureka project. Below is the scenario...
This will work and the client will register:
sudo docker run -d -p=8761:8761 --name foobar chrisccoy/microsvcdemoeureka
sudo docker run -d -p=7311:7311 --name democlnt --link foobar:foobar chrisccoy/microsvcdemoclnt java -jar /opt/tst/ms_clnt.jar --host.name=foobar
The following will not work! Eureka will boot, the client will boot, but cannot register:
sudo docker run -d -p=8761:8761 --name foo_bar chrisccoy/microsvcdemoeureka
sudo docker run -d -p=7311:7311 --name democlnt --link foo_bar:foo_bar chrisccoy/microsvcdemoclnt java -jar /opt/tst/ms_clnt.jar --host.name=foo_bar
Below is the log entry and subsequent exception:
2015-02-25 18:51:27.762 ERROR 1 --- [pool-4-thread-1] com.netflix.discovery.DiscoveryClient : Can't get a response from http://user:password@foo_bar:8761/eureka/apps/HELLOCLIENT/172.17.0.11:HelloClient:7311
Can't contact any eureka nodes - possibly a security group issue?
com.sun.jersey.api.client.ClientHandlerException: java.lang.IllegalArgumentException: Host name may not be null
at com.sun.jersey.client.apache4.ApacheHttpClient4Handler.handle(ApacheHttpClient4Handler.java:184)
at com.sun.jersey.api.client.filter.GZIPContentEncodingFilter.handle(GZIPContentEncodingFilter.java:120)
at com.netflix.discovery.EurekaIdentityHeaderFilter.handle(EurekaIdentityHeaderFilter.java:28)
at com.sun.jersey.api.client.Client.handle(Client.java:648)
at com.sun.jersey.api.client.WebResource.handle(WebResource.java:680)
at com.sun.jersey.api.client.WebResource.put(WebResource.java:211)
at com.netflix.discovery.DiscoveryClient.makeRemoteCall(DiscoveryClient.java:1097)
at com.netflix.discovery.DiscoveryClient.makeRemoteCall(DiscoveryClient.java:1060)
at com.netflix.discovery.DiscoveryClient.access$500(DiscoveryClient.java:105)
at com.netflix.discovery.DiscoveryClient$HeartbeatThread.run(DiscoveryClient.java:1583)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.IllegalArgumentException: Host name may not be null
at org.apache.http.util.Args.notBlank(Args.java:65)
at org.apache.http.HttpHost.<init>(HttpHost.java:81)
at com.sun.jersey.client.apache4.ApacheHttpClient4Handler.getHost(ApacheHttpClient4Handler.java:190)
at com.sun.jersey.client.apache4.ApacheHttpClient4Handler.handle(ApacheHttpClient4Handler.java:170)
... 14 common frames omitted
I am able to ping "foo_bar" from within a container running /bin/bash without issue.
sudo docker run -i -t --link foo_bar:foo_bar chrisccoy/microsvcdemoclnt /bin/bash
root@0175222c11bb:~# ping foo_bar
PING foo_bar (172.17.0.12) 56(84) bytes of data.
64 bytes from foo_bar (172.17.0.12): icmp_seq=1 ttl=64 time=0.137 ms
I am not sure where the disconnect is coming from, or maybe it is a feature that I am unaware of.
Any ideas?
Looks like java.net.URI doesn't understand underscores in a domain name. See this gist: https://gist.github.com/spencergibb/ced5199c80f7a6c89499 and this bug report: http://bugs.java.com/bugdatabase/view_bug.do?bug_id=6587184
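A small self-contained illustration of that java.net.URI behaviour (my own example, not part of the original answer): for an authority containing an underscore, getHost() comes back null, which lines up with the "Host name may not be null" exception in the stack trace above.

import java.net.URI;

public class UnderscoreHostDemo {
    public static void main(String[] args) {
        // With an underscore in the hostname, URI still parses the authority,
        // but the host component is left null.
        URI withUnderscore = URI.create("http://foo_bar:8761/eureka/");
        System.out.println(withUnderscore.getAuthority()); // foo_bar:8761
        System.out.println(withUnderscore.getHost());      // null

        // Without the underscore the host is parsed as expected.
        URI withoutUnderscore = URI.create("http://foobar:8761/eureka/");
        System.out.println(withoutUnderscore.getHost());   // foobar
    }
}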
