IntelliJ Docker integration can't open ports - docker

The Docker integration has a weirdly proprietary config format, and it's very unpredictable and quite frustrating.
This is the command I want to run for my container:
docker run -p 9999:9999 mycontainer
Pretty much the simplest command possible. I can start my container with this command, see the port open in Kitematic, and access it from the host.
I tried to do this in the Docker run configuration by clicking CLI, which generated a JSON settings file (already a weird and convoluted step, frankly).
It gave me this JSON:
{
  "AttachStdin" : true,
  "Tty" : true,
  "OpenStdin" : true,
  "Image" : "",
  "Volumes" : { },
  "ExposedPorts" : { },
  "HostConfig" : {
    "Binds" : [ ],
    "PortBindings" : {
      "9999/tcp" : [ {
        "HostIp" : "",
        "HostPort" : "9999"
      } ]
    }
  },
  "_comment" : ""
}
I then execute the run configuration, and according to IntelliJ the port is open (looking under the Port Bindings section of the Docker tab). But it's not: it isn't accessible from the host, and Kitematic doesn't show it as open either.
How do I get this working as a run configuration? How do I see what docker command IntelliJ is actually running? Maybe it's just using the API programmatically.

It seems the IntelliJ Docker integration requires you to explicitly declare open ports with EXPOSE in the Dockerfile.
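If that is the cause, a minimal sketch of the fix (assuming you control the Dockerfile behind mycontainer) is to declare the port in the image:
EXPOSE 9999
and to mirror it in the generated settings, since the ExposedPorts map above is currently empty (this follows the Docker Engine API's container-create format):
"ExposedPorts" : {
  "9999/tcp" : { }
},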

Related

VSCode devcontainer: proxy: unknown scheme: http

I'm having trouble starting dev containers in VSCode.
My setup:
macOS (Monterey 12.6.2)
docker-ce on a local VM (DOCKER_HOST: ssh://vagrant@127.0.0.1:2222)
corporate proxy configured via the http_proxy, https_proxy, and no_proxy environment variables (plus the uppercased variants)
VSCode (1.74.3)
no Docker Desktop
I have created a very small example with just the devcontainer definition. Nothing else is in the project folder:
# .devcontainer/devcontainer.json
{
  "name": "Example DevContainer",
  "image": "mcr.microsoft.com/devcontainers/base:jammy",
  "customizations": {
    "vscode": {
      "settings": {
        "docker.environment": {
          "DOCKER_HOST": "ssh://vagrant@127.0.0.1:2222"
        }
      }
    }
  }
}
When I now start the dev container using Dev Containers: Rebuild and Reopen in Container (or any other dev-container-related command), I receive only the following error:
[650908 ms] Start: Run: docker version --format {{.Server.APIVersion}}
[651092 ms] Stop (184 ms): Run: docker version --format {{.Server.APIVersion}}
[651093 ms] proxy: unknown scheme: http
I have configured VSCode to use the VM as the Docker host (via the docker.environment setting shown above).
I'm not sure where to start troubleshooting. Any ideas?
EDIT: Interesting side fact: if I run Dev Containers: Try a Dev Container Sample..., the sample dev container starts building. It seems to fail because the corporate proxy is not configured inside the image, but at least it starts to build instead of aborting immediately with the error above.
Dev Containers: Create Dev Container... also fails immediately with the error above.
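One way to narrow this down (a suggestion, not part of the original post) is to reproduce the exact call the extension makes, outside of VSCode, once with the inherited environment and once with the proxy variables cleared:
# Reproduce the call from the Dev Containers log (assumes the same shell environment VSCode inherits)
DOCKER_HOST=ssh://vagrant@127.0.0.1:2222 docker version --format '{{.Server.APIVersion}}'
# Same call with the proxy variables emptied, for comparison
env http_proxy= https_proxy= HTTP_PROXY= HTTPS_PROXY= ALL_PROXY= \
    DOCKER_HOST=ssh://vagrant@127.0.0.1:2222 docker version --format '{{.Server.APIVersion}}'
If only the first command reproduces proxy: unknown scheme: http, the proxy settings are being applied to the loopback SSH connection, and adding 127.0.0.1 to no_proxy/NO_PROXY would be the next thing to try.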

How to connect to a remote Docker instance using Pulumi?

I have created a VM instance in GCP using Pulumi and installed Docker on it. I am trying to connect to the remote Docker instance, but the connection fails because it prompts for host key verification in a pop-up window.
const remoteInstance = new docker.Provider(
  "remote",
  {
    host: interpolate`ssh://user@${externalIP}:22`,
  },
  { dependsOn: dockerInstallation }
);
I am able to run Docker containers locally, but I want to run the same in the VM. The code snippet is here.
With the recent version of "@pulumi/docker": "^3.2.0" you can now pass SSH options. Reference
const remoteInstance = new docker.Provider(
  "remote",
  {
    host: interpolate`ssh://user@${externalIP}:22`,
    sshOpts: [
      "-o",
      "StrictHostKeyChecking=no",
      "-o",
      "UserKnownHostsFile=/dev/null",
    ],
  },
  { dependsOn: dockerInstallation }
);
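To actually run a container on the VM, the provider then has to be passed explicitly to each resource. A minimal sketch (the image and port values are placeholders; externalIP and dockerInstallation are assumed to come from the earlier GCP code):
import * as docker from "@pulumi/docker";

// Runs on the remote engine because the provider above is passed as a resource option
const app = new docker.Container("app", {
    image: "nginx:alpine", // placeholder; assumed to already be pullable/present on the remote engine
    ports: [{ internal: 80, external: 8080 }],
}, { provider: remoteInstance });
Note that StrictHostKeyChecking=no skips host key verification entirely, which avoids the pop-up but also removes protection against man-in-the-middle attacks.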

Docker PySpark cluster container not receiving Kafka streaming from the host?

I have created and deployed a Spark cluster which consists of 4 containers running:
spark master
spark-worker
spark-submit
data-mount-container: to access the script from the local directory
I added the required dependency JARs in all these containers.
I also deployed Kafka on the host machine, where it produces the stream via a producer.
I launched Kafka following the exact steps in the document below:
https://kafka.apache.org/quickstart
I verified that the Kafka producer and consumer can exchange messages on port 9092, which works fine.
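For reference, the quickstart-style check looks roughly like this (topic name taken from the script below; the exact flags differ between Kafka versions):
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic session-event
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic session-event --from-beginning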
Below is the simple PySpark script which I want to run as structured streaming:
from pyspark import SparkContext
from pyspark.sql import SparkSession
print("Kafka App launched")
spark = SparkSession.builder.master("spark://master:7077").appName("kafka_Structured").getOrCreate()
df = spark.readStream.format("kafka").option("kafka.bootstrap.servers", "hostmachine:9092").option("subscribe", "session-event").option("maxOffsetsPerTrigger", 10).load()
converted_string=df.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
print("Recieved Stream in String", converted_string)
And below is the spark-submit command I used to execute the script:
##container
# pyspark_vol - container for vol mounting
# spark/stru_kafka - container for spark-submit
# i linked the spark master and worker already using the container 'master'
##spark submit
docker run --add-host="localhost: myhost" --rm -it --link master:master --volumes-from pyspark_vol spark/stru_kafka spark-submit --packages org.apache.spark:spark-sql-kafka-0-10_2.11:2.1.1 --jars /home/spark/spark-2.1.1-bin-hadoop2.6/jars/spark-sql-kafka-0-10_2.11-2.1.1.jar --master spark://master:7077 /data/spark_session_kafka.py localhost 9092 session-event
After I ran it, the script executed fine, but it does not seem to listen to the stream from the Kafka producer as a batch; it just stops execution.
I didn't observe any specific error, but the script does not produce any output.
I verified connectivity for receiving data from the host inside the Docker container using a socket program, which works fine.
I am not sure if I have missed any configuration.
Expected:
The application running on the Spark cluster should print the stream coming from the Kafka producer.
Actual:
"id" : "f4e8829f-583e-4630-ac22-1d7da2eb80e7",
"runId" : "4b93d523-7b7c-43ad-9ef6-272dd8a16e0a",
"name" : null,
"timestamp" : "2020-09-09T09:21:17.931Z",
"numInputRows" : 0,
"processedRowsPerSecond" : 0.0,
"durationMs" : {
"addBatch" : 1922,
"getBatch" : 287,
"getOffset" : 361,
"queryPlanning" : 111,
"triggerExecution" : 2766,
"walCommit" : 65
},
"stateOperators" : [ ],
"sources" : [ {
"description" : "KafkaSource[Subscribe[session-event]]",
"startOffset" : null,
"endOffset" : {
"session-event" : {
"0" : 24
}
},
"numInputRows" : 0,
"processedRowsPerSecond" : 0.0
} ],
"sink" : {
"description" : "org.apache.spark.sql.execution.streaming.ConsoleSink#6a1b0b4b"
}
}
According to the Quick Example provided in the Spark documentation, you need to start your query and wait for its termination.
In your case that means you need to replace
print("Recieved Stream in String", converted_string)
with
query = df.writeStream.outputMode("complete").format("console").start()
query.awaitTermination()
The issue was with my pyspark_stream script, where I had missed providing a batch processing time and a print statement to view the logs...
Since it's not an aggregated stream, I had to use 'append' here:
result = df.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
print("Kafka Streaming output is", result)
query = result.writeStream.outputMode("append").format("console").trigger(processingTime='30 seconds').start()
query.awaitTermination()  # keep the driver alive until the streaming query terminates, as the answer above notes

How to add the docker name parameter into a Kubernetes cluster

I am deploying the xxl-job application in Kubernetes (v1.15.2). The application deploys successfully, but the registry client service fails. If I deployed it in Docker, it would look like this:
docker run -e PARAMS="--spring.datasource.url=jdbc:mysql://mysql-service.example.com/xxl-job?Unicode=true&characterEncoding=UTF-8 --spring.datasource.username=root --spring.datasource.password=<mysql-password>" -p 8180:8080 -v /tmp:/data/applogs --name xxl-job-admin -d xuxueli/xxl-job-admin:2.0.2
And when the application starts, the server side gives me this warning:
22:33:21.078 logback [http-nio-8080-exec-7] WARN o.s.web.servlet.PageNotFound - No mapping found for HTTP request with URI [/xxl-job-admin/api/registry] in DispatcherServlet with name 'dispatcherServlet'
I searched the project issues and found that the problem may be that I could not pass the project name, as Docker does, to be part of its URL, which produces this warning. The client side gives this error:
23:19:18.262 logback [xxl-job, executor ExecutorRegistryThread] INFO c.x.j.c.t.ExecutorRegistryThread - >>>>>>>>>>> xxl-job registry fail, registryParam:RegistryParam{registryGroup='EXECUTOR', registryKey='job-schedule-executor', registryValue='172.30.184.4:9997'}, registryResult:ReturnT [code=500, msg=xxl-rpc remoting fail, StatusCode(404) invalid. for url : http://xxl-job-service.dabai-fat.svc.cluster.local:8080/xxl-job-admin/api/registry, content=null]
So to solve the problem, I should run the command in Kubernetes as closely as possible to how it runs with Docker. The question is: how do I pass the docker --name parameter to the Kubernetes environment? I have already tried this:
"env": [
{
"name": "name",
"value": "xxl-job-admin"
}
],
and also tried this:
"containers": [
{
"name": "xxl-job-admin",
"image": "xuxueli/xxl-job-admin:2.0.2",
}
]
Neither worked.
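For comparison, here is a minimal sketch (Service and hostPath volume wiring are placeholders and not shown in full) of how the flags in the docker run command above would typically map onto a Kubernetes container spec; note that --name only names the container, so it maps to the container/Pod name rather than to anything the application sees:
"containers": [
  {
    "name": "xxl-job-admin",
    "image": "xuxueli/xxl-job-admin:2.0.2",
    "env": [
      {
        "name": "PARAMS",
        "value": "--spring.datasource.url=jdbc:mysql://mysql-service.example.com/xxl-job?Unicode=true&characterEncoding=UTF-8 --spring.datasource.username=root --spring.datasource.password=<mysql-password>"
      }
    ],
    "ports": [
      { "containerPort": 8080 }
    ],
    "volumeMounts": [
      { "name": "applogs", "mountPath": "/data/applogs" }
    ]
  }
]
The -p 8180:8080 part corresponds to a Service targeting containerPort 8080, and -v /tmp:/data/applogs to a hostPath volume named applogs.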

Vault Docker Image - Can't get REST response

I am deploying the Vault Docker image on Ubuntu 16.04. I can successfully initialize it from inside the container itself, but I can't get any REST responses, and even curl does not work.
I am doing the following:
Create the config file local.json:
{
  "listener": [{
    "tcp": {
      "address": "127.0.0.1:8200",
      "tls_disable": 1
    }
  }],
  "storage": {
    "file": {
      "path": "/vault/data"
    }
  },
  "max_lease_ttl": "10h",
  "default_lease_ttl": "10h"
}
under the /vault/config directory.
Run the command to start the image:
docker run -d -p 8200:8200 -v /home/vault:/vault --cap-add=IPC_LOCK vault server
Enter a shell in the container:
docker exec -it containerId /bin/sh
Run the following inside the container:
export VAULT_ADDR='http://127.0.0.1:8200' and then vault init
This works fine, but then I try to send a REST request to check whether Vault is initialized:
GET request to the following URL: http://Ip-of-the-docker-host:8200/v1/sys/init
I get no response.
Even the curl command fails:
curl http://127.0.0.1:8200/v1/sys/init
curl: (56) Recv failure: Connection reset by peer
I didn't find a proper explanation anywhere online of what the problem is, or whether I am doing something wrong.
Any ideas?
If a server running in a Docker container binds to 127.0.0.1, it's unreachable from anything outside that specific container (and since containers usually only run a single process, that means it's unreachable by anyone). Change the listener address to 0.0.0.0:8200; if you need to restrict access to the Vault server, bind it to a specific host address in the docker run -p option.
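A minimal sketch of the adjusted listener block (the rest of local.json stays the same):
"listener": [{
  "tcp": {
    "address": "0.0.0.0:8200",
    "tls_disable": 1
  }
}]
With that change, curl http://127.0.0.1:8200/v1/sys/init from the Docker host should reach the container through the published port; to limit exposure to one interface, publish with something like docker run -p 127.0.0.1:8200:8200 ... instead.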
