I am trying to configure Spring Cloud Data Flow (SCDF) to schedule a task for a Spring Batch job.
I am running it in Minikube, which connects to a local PostgreSQL instance (localhost:5432). Minikube runs in VirtualBox, where I assigned a virtual network via the --cidr option so Minikube can reach the local Postgres.
Here is the postgresql service yaml:
https://github.com/msuzuki23/SpringCloudDataFlowDemo/blob/main/postgres-service.yaml
Here is the SCDF config yaml:
https://github.com/msuzuki23/SpringCloudDataFlowDemo/blob/main/server-config.yaml
Here is the SCDF deployment yaml:
https://github.com/msuzuki23/SpringCloudDataFlowDemo/blob/main/server-deployment.yaml
Here is the SCDF server-svc.yaml:
https://github.com/msuzuki23/SpringCloudDataFlowDemo/blob/main/server-svc.yaml
To launch the SCDF server in Minikube, I run the following kubectl commands:
kubectl apply -f secret.yaml
kubectl apply -f configmap.yaml
kubectl apply -f postgres-service.yaml
kubectl create -f server-roles.yaml
kubectl create -f server-rolebinding.yaml
kubectl create -f service-account.yaml
kubectl apply -f server-config.yaml
kubectl apply -f server-svc.yaml
kubectl apply -f server-deployment.yaml
I am not running Prometheus, Grafana, or Kafka/RabbitMQ, as I only want to verify that I can launch the Spring Batch job from SCDF. I also did not run the Skipper deployment (Spring Cloud Data Flow server running locally pointing to Skipper in Kubernetes); it is not necessary when only running tasks.
This is the error I am getting when trying to add an application from a private Docker repo:
And this is the full error stack from the pod:
https://github.com/msuzuki23/SpringCloudDataFlowDemo/blob/main/SCDF_Log_Error
Highlights from the error stack:
2021-07-08 13:04:13.753 WARN 1 --- [-nio-80-exec-10] o.s.c.d.s.controller.AboutController : Skipper Server is not accessible
org.springframework.web.client.ResourceAccessException: I/O error on GET request for "http://localhost:7577/api/about": Connect to localhost:7577 [localhost/127.0.0.1] failed: Connection refused (Connection refused); nested exception is org.apache.http.conn.HttpHostConnectException: Connect to localhost:7577 [localhost/127.0.0.1] failed: Connection refused (Connection refused)
Postgres Hibernate error:
2021-07-08 13:05:22.142 WARN 1 --- [p-nio-80-exec-5] o.h.engine.jdbc.spi.SqlExceptionHelper : SQL Error: 0, SQLState: 42P01
2021-07-08 13:05:22.200 ERROR 1 --- [p-nio-80-exec-5] o.h.engine.jdbc.spi.SqlExceptionHelper : ERROR: relation "hibernate_sequence" does not exist
Position: 17
2021-07-08 13:05:22.214 ERROR 1 --- [p-nio-80-exec-5] o.s.c.d.s.c.RestControllerAdvice : Caught exception while handling a request
org.springframework.dao.InvalidDataAccessResourceUsageException: could not extract ResultSet; SQL [n/a]; nested exception is org.hibernate.exception.SQLGrammarException: could
The first couple of errors come from SCDF trying to connect to Skipper; since Skipper was not configured, they are expected.
The second error is from the Postgres JDBC/Hibernate layer. How do I solve that?
Is there a configuration I am missing when pointing SCDF at the local Postgres?
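For context, this is the shape of the datasource configuration I would expect the server deployment to need (a sketch using standard Spring Boot environment variables; the postgres service name, dataflow database name, and postgres-secret names are assumptions to adapt to your own yaml files):

```yaml
# Fragment of server-deployment.yaml (sketch; names and credentials are assumptions)
env:
  - name: SPRING_DATASOURCE_URL
    value: jdbc:postgresql://postgres:5432/dataflow
  - name: SPRING_DATASOURCE_USERNAME
    valueFrom:
      secretKeyRef:
        name: postgres-secret
        key: username
  - name: SPRING_DATASOURCE_PASSWORD
    valueFrom:
      secretKeyRef:
        name: postgres-secret
        key: password
  - name: SPRING_DATASOURCE_DRIVER_CLASS_NAME
    value: org.postgresql.Driver
```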
Also, in the jar inside my Docker image I have not added any annotation such as @EnableTask.
Any help is appreciated, thx! Markus.
I did a search on
Caused by: org.postgresql.util.PSQLException: ERROR: relation
"hibernate_sequence" does not exist Position: 17
And found this Stack Overflow answer:
Postgres error in batch insert : relation "hibernate_sequence" does not exist position 17
Went into Postgres and created the hibernate_sequence:
CREATE SEQUENCE hibernate_sequence START 1;
After that, adding the application worked.
Related
We are using Flink version 1.14.3, and when we try to run the JobManager we get the exception below.
I tried setting
akka.remote.netty.tcp.hostname = "127.0.0.1" in the flink-conf.yaml file, and even replaced the IP with the hostname, but it didn't help.
[flink-akka.actor.default-dispatcher-5]
ERROR akka.remote.transport.netty.NettyTransport - failed to bind to /0.0.0.0:6123, shutting down Netty transport
ERROR org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Could not start cluster entrypoint StandaloneSessionClusterEntrypoint.
org.apache.flink.runtime.entrypoint.ClusterEntrypointException: Failed to initialize the cluster entrypoint StandaloneSessionClusterEntrypoint
Caused by: java.net.BindException: Could not start actor system on any port in port range 6123
Can you check if you have an application already running on port 6123?
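A quick way to check is to look for an existing listener on that port (a sketch; it assumes ss from iproute2 is available — substitute lsof -i :6123 or netstat -tlnp if it is not):

```shell
# Check whether something is already listening on Flink's RPC port 6123.
PORT=6123
if ss -ltn 2>/dev/null | grep -q ":${PORT} "; then
  STATUS="in use"
else
  STATUS="free"
fi
echo "port ${PORT} is ${STATUS}"
```

If the port is in use, either stop the other process or change jobmanager.rpc.port in flink-conf.yaml.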
When I get the logs for one of the pods with the CrashLoopBackOff status
kubectl logs alfred
it returns the following errors.
error: alfred service exiting due to error {"label":"winston","timestamp":"2021-11-08T07:02:02.324Z"}
at GetAddrInfoReqWrap.onlookup [as oncomplete] (dns.js:66:26) {
errno: 'ENOTFOUND',
code: 'ENOTFOUND',
syscall: 'getaddrinfo',
hostname: 'mongodb'
} {"label":"winston","timestamp":"2021-11-08T07:02:02.326Z"}
error: Client Manager Redis Error: getaddrinfo ENOTFOUND redis {"errno":"ENOTFOUND","code":"ENOTFOUND","syscall":"getaddrinfo","hostname":"redis","stack":"Error: getaddrinfo ENOTFOUND redis\n at GetAddrInfoReqWrap.onlookup [as oncomplete] (dns.js:66:26)","label":"winston","timestamp":"2021-11-08T07:02:02.368Z"}
I am new to Kubernetes and AWS EKS. I would appreciate any help. Thanks
If you look at the error, it is failing at getaddrinfo, the function that resolves a DNS name in order to connect to an external service. Here it is trying to reach a Redis cluster, and it seems your EKS cluster does not have connectivity to it.
However, if you are running Redis inside your EKS cluster, make sure to provide the Kubernetes service DNS name in the application code, or set it as an environment variable just before deployment.
It is both redis and mongodb: as the error says, you are providing the hostnames redis and mongodb, which will not resolve to an IP address unless they are mapped in the /etc/hosts file, which is not the case here.
Provide the correct hostnames and the pods will come up. This is the root cause.
The errors above were generated because mongo and redis were not exposed by a Service. After I created service.yaml files for those instances, the errors went away. AWS EKS deploys containers in pods that are scattered across different nodes. To let another pod communicate with mongodb, you must expose a Service, i.e. a stable "frontend", for the mongodb deployment.
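A minimal Service for the mongodb deployment looks like this (a sketch; the app: mongodb label and port are assumptions to adapt to your deployment). The Service's metadata.name becomes the DNS name other pods resolve, which is exactly the getaddrinfo lookup that was failing:

```yaml
# mongodb-service.yaml (sketch; selector labels are assumptions)
apiVersion: v1
kind: Service
metadata:
  name: mongodb          # must match the hostname the app connects to
spec:
  selector:
    app: mongodb         # must match the labels on the mongodb pods
  ports:
    - port: 27017
      targetPort: 27017
```

A second Service named redis, with port 6379, would resolve the Redis error the same way.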
I want
minikube start
to run from /etc/rc.d/rc.local, as this script executes every time the EC2 instance starts.
minikube fails to start when launched from rc.local, but when I execute it as a non-root user, it works.
Any help making it work from the rc.local script is appreciated
Update:
I've added minikube start --force --driver=docker
This time, it says:
E0913 18:12:21.898974 10063 status.go:258] status error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
Failed to list containers for "kube-apiserver": docker: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
etc etc
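Since minikube starts fine as the non-root user, the workaround I am trying is to launch it as that user from rc.local (a sketch; the ec2-user account name and the docker driver are assumptions — use whichever user originally created the cluster):

```shell
# /etc/rc.d/rc.local (fragment) - run minikube as the non-root user that
# created the cluster, since the docker driver's state and SSH keys live
# in that user's home directory, not root's.
su - ec2-user -c 'minikube start --driver=docker'
```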
I have a Spring Boot microservice that calls MongoDB.
To set it up on my local machine, I ran a MongoDB container in my local Docker at localhost:27017.
I then stood up the Spring Boot microservice on port 8082, and that was successful.
I now want to run both of them in Docker,
but I am unable to get the app running there.
Steps:
1 . Docker container for Mongo
docker run -d -p 27017:27017 --name mongo -d mongo:latest
2 . Build the image for my Spring Boot app
docker build -f Dockerfile -t myApp .
Dockerfile:
FROM dtr-<My Corp Base Image>
ADD build/libs/app.jar app.jar
ENTRYPOINT ["java","-jar","app.jar"]
3 . Bring up the app in a container and link it to MongoDB
docker run -p 8082:8082 -e "SPRING_PROFILES_ACTIVE=local" --name myApp-containerName --link=mongo myApp-ImageName
My Error:
Exception encountered during context initialization - cancelling
refresh attempt:
org.springframework.beans.factory.UnsatisfiedDependencyException:
Error creating bean with name 'zzzzz' defined in URL
[jar:file:/app.jar!/BOOT-INF/classes!/com/uscm/ratabase/service/ZZZZ.class]:
Unsatisfied dependency expressed through constructor parameter 0;
nested exception is
org.springframework.beans.factory.BeanCreationException: Error
creating bean with name 'ZZZZZZ': Invocation of init method failed;
nested exception is
org.springframework.dao.DataAccessResourceFailureException: Timed out
after 30000 ms while waiting for a server that matches
WritableServerSelector. Client view of cluster state is {type=UNKNOWN,
servers=[{address=localhost:27017, type=UNKNOWN, state=CONNECTING,
exception={com.mongodb.MongoSocketOpenException: Exception opening
socket}, caused by {java.net.ConnectException: Connection refused}}];
nested exception is com.mongodb.MongoTimeoutException: Timed out after
30000 ms while waiting for a server that matches
WritableServerSelector. Client view of cluster state is {type=UNKNOWN,
servers=[{address=localhost:27017, type=UNKNOWN, state=CONNECTING,
exception={com.mongodb.MongoSocketOpenException: Exception opening
socket}, caused by {java.net.ConnectException: Connection refused}}]
2019-06-13 15:49:14.769 ERROR [ZZZZZZ,,,] 1 --- [ main]
o.s.boot.SpringApplication : Application startup failed
Make sure that your Mongo is not auto-configured. After a lot of hair-pulling I realized that my issue was not with the containers, but with Mongo auto-configuration, which will not connect to anything but localhost.
Create a MongoClient,
use that in a MongoDbFactory,
and use that in a MongoTemplate.
Annotate all of that with a @Configuration annotation.
Also exclude Mongo from auto-configuration; that is how you get a manually configured Mongo.
Test it with profiles to try different hostnames. Once you get that working,
dockerize it, and if your ports etc. are mapped properly, you should be able to connect the two containers.
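The host override is the key piece: inside the app's container, localhost is the container itself, not the Mongo container. With standard Spring Boot properties the profile looks roughly like this (a sketch; the docker profile name is an assumption, and mongo is the --link alias from the docker run command above):

```yaml
# application-docker.yml (hypothetical profile): point Spring Data MongoDB
# at the linked container's hostname instead of localhost.
spring:
  data:
    mongodb:
      host: mongo      # the --link alias / container name
      port: 27017
```

Then start the app container with SPRING_PROFILES_ACTIVE=docker instead of local.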
I am trying to create a pod in Kubernetes with the following simple command
kubectl run example --image=nginx
It runs and assigns the pod to the minion correctly, but the pod is always stuck in ContainerCreating status due to the following error. I have not set up GCR or gcloud on my machine, so I am not sure why it is pulling from there.
1h 29m 14s {kubelet centos-minion1} Warning FailedSync Error syncing pod, skipping:
failed to "StartContainer" for "POD" with ErrImagePull: "image pull failed
for gcr.io/google_containers/pause:2.0, this may be because there are no
credentials on this request. details: (unable to ping registry endpoint
https://gcr.io/v0/\nv2 ping attempt failed with error: Get https://gcr.io/v2/:
http: error connecting to proxy http://87.254.212.120:8080: dial tcp
87.254.212.120:8080: i/o timeout\n v1 ping attempt failed with error:
Get https://gcr.io/v1/_ping: http: error connecting to proxy
http://87.254.212.120:8080: dial tcp 87.254.212.120:8080: i/o timeout)
Kubernetes is trying to create a pause container for your pod; this container is used to create the pod's network namespace. See this question and its answers for more general information on the pause container.
To your specific error: Kubernetes tries to pull the pause container's image (which would be gcr.io/google_containers/pause:2.0, according to your error message) from the Google Container Registry (gcr.io). Apparently, your Docker engine tries to connect to GCR using an HTTP proxy located at 87.254.212.120:8080, to which it cannot connect (i/o timeout).
To correct this error, either make sure that your HTTP proxy server is online and does not block HTTP requests to GCR, or (if you do have public Internet access) disable the proxy connection for your Docker engine (this is typically done using the http_proxy and https_proxy environment variables, which would have been set in /etc/sysconfig/docker or /etc/default/docker, depending on your Linux distribution).
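For example, on a distribution that uses /etc/sysconfig/docker, the relevant lines would look roughly like this (a sketch: the proxy address is the one from your error message, and whether NO_PROXY is honored depends on how your Docker engine was set up; systemd-based installs use a drop-in unit file instead):

```shell
# /etc/sysconfig/docker (fragment): either comment the proxy out entirely,
# or exempt gcr.io from it so the pause image can be pulled directly.
HTTP_PROXY=http://87.254.212.120:8080
HTTPS_PROXY=http://87.254.212.120:8080
NO_PROXY=gcr.io,localhost,127.0.0.1
```

Restart the Docker daemon after changing these settings so the engine picks them up.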