Container startup failing for testcontainers-scala - docker

I was trying out a simple test, shown below, using the library-provided MySQL test container, when the container startup failed.
class Test extends FlatSpec with ForAllTestContainer {
  override val container = MySQLContainer()

  it should "temp" in {
    assert(1 == 1)
  }
}
The stack trace for the error is shown below:
Exception encountered when invoking run on a nested suite - Container startup failed
org.testcontainers.containers.ContainerLaunchException: Container startup failed
at org.testcontainers.containers.GenericContainer.doStart(GenericContainer.java:322)
at org.testcontainers.containers.GenericContainer.start(GenericContainer.java:302)
at com.dimafeng.testcontainers.SingleContainer.start(Container.scala:46)
at com.dimafeng.testcontainers.ForAllTestContainer.run(ForAllTestContainer.scala:17)
at com.dimafeng.testcontainers.ForAllTestContainer.run$(ForAllTestContainer.scala:13)
Caused by: org.testcontainers.containers.ContainerFetchException: Can't get Docker image: RemoteDockerImage(imageNameFuture=java.util.concurrent.CompletableFuture#18920cc[Completed normally], imagePullPolicy=DefaultPullPolicy(), dockerClient=LazyDockerClient.INSTANCE)
at org.testcontainers.containers.GenericContainer.getDockerImageName(GenericContainer.java:1265)
at org.testcontainers.containers.GenericContainer.logger(GenericContainer.java:600)
at org.testcontainers.containers.GenericContainer.doStart(GenericContainer.java:311)
... 18 more
Caused by: java.util.NoSuchElementException: No value present
at java.util.Optional.get(Optional.java:135)
at org.testcontainers.utility.ResourceReaper.start(ResourceReaper.java:103)
at org.testcontainers.DockerClientFactory.client(DockerClientFactory.java:155)
at org.testcontainers.LazyDockerClient.getDockerClient(LazyDockerClient.java:14)
at org.testcontainers.LazyDockerClient.listImagesCmd(LazyDockerClient.java:12)
at org.testcontainers.images.LocalImagesCache.maybeInitCache(LocalImagesCache.java:68)
at org.testcontainers.images.LocalImagesCache.get(LocalImagesCache.java:32)
at org.testcontainers.images.AbstractImagePullPolicy.shouldPull(AbstractImagePullPolicy.java:18)
at org.testcontainers.images.RemoteDockerImage.resolve(RemoteDockerImage.java:62)
at org.testcontainers.images.RemoteDockerImage.resolve(RemoteDockerImage.java:25)
at org.testcontainers.utility.LazyFuture.getResolvedValue(LazyFuture.java:20)
at org.testcontainers.utility.LazyFuture.get(LazyFuture.java:27)
at org.testcontainers.containers.GenericContainer.getDockerImageName(GenericContainer.java:1263)
... 20 more
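The NoSuchElementException thrown from ResourceReaper.start typically means Testcontainers could not get a working Docker environment at all: either no Docker daemon is reachable from the test JVM, or the Ryuk helper container failed to start. A minimal pre-flight sketch (the class and helper names are my own, not part of any library) that probes docker info before the suite runs, which makes the real cause visible instead of the generic "Container startup failed":

```java
import java.io.IOException;

public class DockerCheck {
    // Turns the exit code of `docker info` into a readable verdict.
    static String describeDocker(int exitCode) {
        return exitCode == 0
                ? "Docker daemon reachable"
                : "Docker daemon not reachable (exit code " + exitCode + ")";
    }

    public static void main(String[] args) {
        int code;
        try {
            // `docker info` exits 0 only when the client can talk to a running daemon.
            Process p = new ProcessBuilder("docker", "info")
                    .redirectErrorStream(true)
                    .start();
            p.getInputStream().readAllBytes(); // drain output so the process can exit
            code = p.waitFor();
        } catch (IOException | InterruptedException e) {
            code = 127; // docker binary missing entirely
        }
        System.out.println(describeDocker(code));
    }
}
```

If this reports the daemon as unreachable, the fix is on the Docker side (start the daemon, or point DOCKER_HOST at it), not in the test code.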

Related

Kafka Streams stateful mode errors (RocksDB doesn't initialize)

Error log:
Failed to close task manager due to the following error:
java.lang.NoClassDefFoundError: Could not initialize class org.rocksdb.DBOptions
at org.apache.kafka.streams.state.internals.RocksDBStore.openDB(RocksDBStore.java:133)
at org.apache.kafka.streams.state.internals.TimestampedSegment.openDB(TimestampedSegment.java:49)
at org.apache.kafka.streams.state.internals.TimestampedSegments.getOrCreateSegment(TimestampedSegments.java:50)
at org.apache.kafka.streams.state.internals.TimestampedSegments.getOrCreateSegment(TimestampedSegments.java:25)
at org.apache.kafka.streams.state.internals.AbstractSegments.getOrCreateSegmentIfLive(AbstractSegments.java:84)
at org.apache.kafka.streams.state.internals.AbstractRocksDBSegmentedBytesStore.put(AbstractRocksDBSegmentedBytesStore.java:146)
at org.apache.kafka.streams.state.internals.RocksDBWindowStore.put(RocksDBWindowStore.java:61)
at org.apache.kafka.streams.state.internals.RocksDBWindowStore.put(RocksDBWindowStore.java:27)
at org.apache.kafka.streams.state.internals.ChangeLoggingWindowBytesStore.put(ChangeLoggingWindowBytesStore.java:111)
at org.apache.kafka.streams.state.internals.ChangeLoggingWindowBytesStore.put(ChangeLoggingWindowBytesStore.java:34)
at org.apache.kafka.streams.state.internals.CachingWindowStore.putAndMaybeForward(CachingWindowStore.java:106)
at org.apache.kafka.streams.state.internals.CachingWindowStore.lambda$initInternal$0(CachingWindowStore.java:86)
at org.apache.kafka.streams.state.internals.NamedCache.flush(NamedCache.java:151)
at org.apache.kafka.streams.state.internals.NamedCache.flush(NamedCache.java:109)
at org.apache.kafka.streams.state.internals.ThreadCache.flush(ThreadCache.java:124)
at org.apache.kafka.streams.state.internals.CachingWindowStore.flush(CachingWindowStore.java:291)
at org.apache.kafka.streams.state.internals.WrappedStateStore.flush(WrappedStateStore.java:84)
at org.apache.kafka.streams.state.internals.MeteredWindowStore.lambda$flush$4(MeteredWindowStore.java:200)
at org.apache.kafka.streams.processor.internals.metrics.StreamsMetricsImpl.maybeMeasureLatency(StreamsMetricsImpl.java:801)
at org.apache.kafka.streams.state.internals.MeteredWindowStore.flush(MeteredWindowStore.java:200)
at org.apache.kafka.streams.processor.internals.ProcessorStateManager.flush(ProcessorStateManager.java:282)
at org.apache.kafka.streams.processor.internals.StreamTask.suspend(StreamTask.java:647)
at org.apache.kafka.streams.processor.internals.StreamTask.close(StreamTask.java:745)
at org.apache.kafka.streams.processor.internals.AssignedStreamsTasks.closeTask(AssignedStreamsTasks.java:81)
at org.apache.kafka.streams.processor.internals.AssignedStreamsTasks.closeTask(AssignedStreamsTasks.java:37)
at org.apache.kafka.streams.processor.internals.AssignedTasks.shutdown(AssignedTasks.java:256)
at org.apache.kafka.streams.processor.internals.AssignedStreamsTasks.shutdown(AssignedStreamsTasks.java:535)
at org.apache.kafka.streams.processor.internals.TaskManager.shutdown(TaskManager.java:292)
at org.apache.kafka.streams.processor.internals.StreamThread.completeShutdown(StreamThread.java:1133)
at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:682)
It works fine locally; the error only occurs in the environment where the application is deployed to Docker.
Kafka Streams version: 2.5.0
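When a NoClassDefFoundError for org.rocksdb.DBOptions appears only inside Docker, a frequent cause is an Alpine (musl libc) base image: the RocksDB JNI library bundled with Kafka Streams 2.5 links against glibc, so the native library fails to load and the DBOptions class never initializes. A hedged Dockerfile sketch of the usual fix (the base-image tag and jar name are examples, not taken from the question):

```dockerfile
# Assumption: the failing image was Alpine-based. RocksDB's bundled native
# library needs glibc, so use a Debian-based JRE image instead:
FROM eclipse-temurin:11-jre                 # glibc-based; tag is an example
COPY target/streams-app.jar /app/app.jar    # jar name is hypothetical
ENTRYPOINT ["java", "-jar", "/app/app.jar"]
```

If staying on Alpine is a requirement, installing a glibc compatibility layer is sometimes suggested instead, but switching to a glibc-based image is the simpler and more reliable route.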

End of File Exception in Spark cluster

I created docker containers for Spark Standalone mode. Like in this article:
https://dev.to/mvillarrealb/creating-a-spark-standalone-cluster-with-docker-and-docker-compose-2021-update-6l4
As a driver, I wrote a simple Spark job that performs a calculation, and I try to start it in cluster mode.
import org.apache.spark.launcher.{SparkAppHandle, SparkLauncher}

val handle = new SparkLauncher()
  .setMaster("spark://spark-master:7077")
  .setAppName("MyDriver")
  .setVerbose(true)
  .setAppResource(s"hdfs://spark-master:7077/opt/spark-apps/sparkcluster_2.12-0.1.0-SNAPSHOT.jar")
  .setMainClass(DriverTest.getClass.getName)
  .setDeployMode("cluster")
  .startApplication()

while (!handle.getState.equals(SparkAppHandle.State.FINISHED)) {
  println("App State: " + handle.getState)
  Thread.sleep(1000)
}
Could you please tell me what URL should be passed to setAppResource: a path on the machine where I run SparkLauncher, a path on the master node, or an HDFS path?
If I use the path "hdfs://spark-master:7077/opt/spark-apps/sparkcluster_2.12-0.1.0-SNAPSHOT.jar", where spark-master:7077 is the master node and /opt/spark-apps/sparkcluster_2.12-0.1.0-SNAPSHOT.jar is the path on the master node, I get this exception:
INFO: 22/05/26 16:48:04 ERROR ClientEndpoint: Exception from cluster was:
java.io.EOFException: End of File Exception between local host is:
"spark-worker-b/172.26.0.3"; destination host is: "spark-master":7077;
: java.io.EOFException; For more details see:
http://wiki.apache.org/hadoop/EOFException
May 26, 2022 4:48:04 PM org.apache.spark.launcher.OutputRedirector redirect
INFO: java.io.EOFException: End of File Exception between local host is:
"spark-worker-b/172.26.0.3"; destination host is: "spark-master":7077;
: java.io.EOFException; For more details see:
http://wiki.apache.org/hadoop/EOFException
I also see that my master node loses its worker nodes during execution; after I kill the process, the master sees the workers again.
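One likely cause of the EOFException is that the URL mixes two different services: 7077 is the Spark master's RPC port, not an HDFS endpoint, so the HDFS client is talking to a server that speaks a different protocol. An hdfs:// app resource must point at the HDFS NameNode (commonly port 8020, or 9000 on some setups; the authoritative value is fs.defaultFS in core-site.xml). A small sketch (the helper name and default port are my own illustration) of rewriting such a URL:

```java
import java.net.URI;
import java.net.URISyntaxException;

public class AppResourceUri {
    // Rebuilds an hdfs:// URI so it targets the NameNode port instead of
    // whatever port it originally carried (e.g. Spark's RPC port 7077).
    static String toNameNodeUri(String wrongUri, int nameNodePort) {
        try {
            URI u = new URI(wrongUri);
            return new URI("hdfs", null, u.getHost(), nameNodePort,
                           u.getPath(), null, null).toString();
        } catch (URISyntaxException e) {
            throw new IllegalArgumentException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(toNameNodeUri(
                "hdfs://spark-master:7077/opt/spark-apps/sparkcluster_2.12-0.1.0-SNAPSHOT.jar",
                8020));
        // hdfs://spark-master:8020/opt/spark-apps/sparkcluster_2.12-0.1.0-SNAPSHOT.jar
    }
}
```

If no HDFS is running at all, the alternative is to place the jar at the same local path on every node and reference it with a file:// URL.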

Cypress - The plugins file is missing or invalid running on Docker

I added this import to the file cypress/plugins/index.js:
const { addMatchImageSnapshotPlugin } = require("cypress-image-snapshot/plugin");
and this error is thrown when running on Docker:
Your `pluginsFile` is set to `/cypress/plugins/index.js`, but either the file is missing, it contains a syntax error, or threw an error when required. The `pluginsFile` must be a `.js`, `.ts`, or `.coffee` file.
It works perfectly when I'm running the specs locally, but strangely it fails when they are run on the Docker image.

Adding a file in BootStrap.groovy in Grails 3?

I would like to load a file in BootStrap.groovy in Grails 3.3.9.
In BootStrap.groovy:
package com.nuevaconsulting

import com.nuevaconsulting.embrow.*

class BootStrap {

    def init = { servletContext ->
        def filePath = "C:/Grails/embrow/grails-app/conf/resourcesresources/1.csv"
        new File(filePath).splitEachLine(',') { fields ->
            def employee = new Employee(
                    mirId: fields[0].trim(),
                    cancer: fields[1].trim(),
                    profile: fields[1].trim(),
                    pubmed: fields[1].trim()
            )
            if (employee.hasErrors() || employee.save(flush: true) == null) {
                log.error("Could not import employee ${employee.errors}")
            } else {
                log.debug("Importing employee ${employee.toString()}")
            }
        }
    }

    def destroy = {}
}
When I execute run-app, I end up with the following error:
org.gradle.api.tasks.TaskExecutionException: Execution failed for task ':bootRun'.
Caused by: org.gradle.process.internal.ExecException: Process 'command 'C:\Program Files\Java\jdk1.8.0_201\bin\java.exe'' finished with non-zero exit value 1
at org.gradle.process.internal.DefaultExecHandle$ExecResultImpl.assertNormalExitValue(DefaultExecHandle.java:369)
Error Failed to start server (NOTE: Stack trace has been filtered. Use --verbose to see entire trace.)
java.util.concurrent.ExecutionException: org.gradle.tooling.BuildException: Could not execute build using Gradle distribution 'https://services.gradle.org/distributions/gradle-3.5-bin.zip'.
Caused by: org.gradle.tooling.BuildException: Could not execute build using Gradle distribution 'https://services.gradle.org/distributions/gradle-3.5-bin.zip'.
at org.gradle.tooling.internal.consumer.ExceptionTransformer.transform(ExceptionTransformer.java:51)
Caused by: org.gradle.internal.exceptions.LocationAwareException: Execution failed for task ':bootRun'.
at org.gradle.initialization.DefaultExceptionAnalyser.transform(DefaultExceptionAnalyser.java:74)
at org.gradle.initialization.MultipleBuildFailuresExceptionAnalyser.transform(MultipleBuildFailuresExceptionAnalyser.java:47)
Caused by: org.gradle.api.tasks.TaskExecutionException: Execution failed for task ':bootRun'.
at org.gradle.initialization.DefaultGradleLauncher.doBuild(DefaultGradleLauncher.java:112)
... 44 more
Caused by: org.gradle.process.internal.ExecException: Process 'command 'C:\Program Files\Java\jdk1.8.0_201\bin\java.exe'' finished with non-zero exit value 1
at org.gradle.process.internal.DefaultExecHandle$ExecResultImpl.assertNormalExitValue(DefaultExecHandle.java:369)
at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeActions(ExecuteActionsTaskExecuter.java:90)
... 78 more
| Error Failed to start server
How can I load the file and visualize the data?

Hadoop/Yarn Docker-Container-Executor fails because of "Invalid docker rw mount"

I am trying to execute the simple example for the Hadoop/YARN (version 2.9.1) Docker Container Executor:
vars="YARN_CONTAINER_RUNTIME_TYPE=docker,YARN_CONTAINER_RUNTIME_DOCKER_IMAGE=hadoop-docker"
hadoop jar hadoop-examples.jar pi -Dyarn.app.mapreduce.am.env=$vars -Dmapreduce.map.env=$vars -Dmapreduce.reduce.env=$vars 10 100
Unfortunately the job fails with the following exception:
Failing this attempt.Diagnostics: [2018-09-08 22:23:54.288]Exception from container-launch.
Container id: container_1536441225683_0004_02_000001
Exit code: 29
Exception message: Invalid docker rw mount '/tmp/hadoop-hadoop/nm-local-dir/usercache/hadoop/appcache/application_1536441225683_0004/:/tmp/hadoop-hadoop/nm-local-dir/usercache/hadoop/appcache/application_1536441225683_0004/', realpath=/tmp/hadoop-hadoop/nm-local-dir/usercache/hadoop/appcache/application_1536441225683_0004/
Error constructing docker command, docker error code=14, error message='Invalid docker read-write mount'
Does anybody have an idea how to solve the invalid Docker read-write mount?
Solved by adding this directory to the property docker.allowed.rw-mounts in etc/hadoop/container-executor.cfg. If you get the error message for multiple directories, they need to be added comma-separated.
In my case:
docker.allowed.rw-mounts=/usr/local/hadoop/,/var/hadoop/yarn/local-dir,/var/hadoop/yarn/log-dir,/tmp/hadoop-hadoop/
