Skaffold port forwarding per container for Docker daemon

I'm using Skaffold to deploy to the Docker daemon (not Kubernetes) with port forwarding. However, regardless of the resourceName specified, Skaffold forwards every port to every container (and automatically resolves the collisions it has created).
Here's the relevant section of my skaffold.yaml:
portForward:
- resourceType: container
  resourceName: mywebapp
  port: 3000
- resourceType: container
  resourceName: uapi
  port: 8080
  localPort: 7000
With those settings I'm seeing the following ports forwarded to my containers:
Local Port | Container | Container Port | Correct?
---------- | --------- | -------------- | --------
3000       | mywebapp  | 3000           | ✅
7000       | mywebapp  | 8080           | ❌
3001       | uapi      | 3000           | ❌
7001       | uapi      | 8080           | ✅ / ❌
My entire skaffold.yaml is below; I'd love for this to turn out to be a syntax error on my part.
apiVersion: skaffold/v2beta29
kind: Config
deploy:
  docker:
    images: [uapi, mywebapp]
portForward:
- resourceType: container
  resourceName: mywebapp
  port: 3000
- resourceType: container
  resourceName: uapi
  port: 8080
  localPort: 7000
build:
  local:
    push: false
  artifacts:
  - image: mywebapp
    context: .
    docker:
      dockerfile: apps/mywebapp/Dockerfile
    sync:
      manual:
      - src: 'apps/mywebapp/src/**/*.ts*'
        dest: .
  - image: uapi
    context: .
    docker:
      dockerfile: apps/uapi/Dockerfile
    sync:
      manual:
      - src: 'apps/uapi/src/**/*.ts*'
        dest: .
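One variation I've considered (shown as a sketch below; I haven't confirmed it changes the docker deployer's behaviour) is pinning localPort explicitly on both entries so that the local side, at least, is deterministic:
portForward:
- resourceType: container
  resourceName: mywebapp
  port: 3000
  localPort: 3000 # pinned explicitly; assumption: may not affect the fan-out
- resourceType: container
  resourceName: uapi
  port: 8080
  localPort: 7000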

Related

WSL2: how to access a minikube pod from a Docker container?

I am using WSL2 on Windows.
I created a Flask service in minikube (inside WSL2) and, separately, a Docker container in WSL2.
I want to make a request to the Flask service in minikube from the container in WSL2.
Steps to create a flask service
flask_service.py (only the last lines are shown; the service runs at /rss)
if __name__ == '__main__':
    app.run(debug=False, host='0.0.0.0', port=8001)
Dockerfile
FROM python:3
COPY flask_service.py ./
WORKDIR .
RUN apt-get update
RUN apt-get install -y nano
RUN pip install numpy pandas Flask connectorx sqlalchemy pymysql jsonpickle
EXPOSE 8001
ENTRYPOINT ["python"]
CMD ["flask_service.py"]
minikube setting
minikube start --mount --mount-string="/home/sjw/kube:/home/sjw/kube"
kubectl proxy --address 0.0.0.0 --port 30001
minikube tunnel
getdb service manifest
apiVersion: apps/v1
kind: Deployment
metadata:
  name: getdbdp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: getdb
  template:
    metadata:
      labels:
        app: getdb
    spec:
      containers:
      - name: getdb
        image: "desg2022/01getdb:v02"
        env:
        - name: "PORT"
          value: "8001"
---
apiVersion: v1
kind: Service
metadata:
  name: getdb-lb
spec:
  type: LoadBalancer
  selector:
    app: getdb
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8001
First, local access (from Windows) to the Flask service was possible with the address below.
http://localhost:30001/api/v1/namespaces/default/services/http:getdb-lb:8080/proxy/rss
Second, when connecting from within the same minikube cluster:
http://localhost:8001/rss
My question: I created a Docker container in WSL2 as follows.
docker-compose.yaml (the image is Ubuntu with only Python and pip installed)
version: '2.3'
services:
  master:
    container_name: gputest1
    image: desg2022/ubuntu:v01
    stdin_open: true # docker run -i
    tty: true        # docker run -t
    ports:
      - 8080:8888
    command: "/bin/bash"
    extra_hosts:
      - "host.docker.internal:host-gateway"
    ipc: 'host'
Inside this container I want to access getdb in minikube; what address should I put in?
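My current (untested) guess is to reuse the host.docker.internal alias from extra_hosts together with the kubectl proxy port (30001) running on the WSL2 host, e.g. passed into the container like this (GETDB_URL is just an illustrative variable name):
services:
  master:
    environment:
      - GETDB_URL=http://host.docker.internal:30001/api/v1/namespaces/default/services/http:getdb-lb:8080/proxy/rss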

Dockerfile doesn't seem to be exposing ports

I'm trying to run a simple Node server on port 8080, but with the following config any attempt at hitting the subdomain results in a 502 Bad Gateway error. If I go onto the node, I can see there don't appear to be any ports open on the container itself. So, assuming I've checked everything correctly, is there anything else I need to do in the config to open the port for the Node server?
Edit: If I ssh into the pod and curl localhost on 8080 I'm able to hit the node server.
Dockerfile
FROM node:12.18.1
WORKDIR /app
COPY ["package.json", "package-lock.json*", "./"]
RUN npm install
COPY . .
RUN npm run build
EXPOSE 8080
CMD [ "node", "server.js" ]
k8s deployment
spec:
  containers:
  - name: test
    image: test_image
    ports:
    - name: http
      protocol: TCP
      containerPort: 8080
service yaml
apiVersion: v1
kind: Service
metadata:
  name: test-service
spec:
  ports:
  - name: http
    port: 80
    targetPort: 8080
    protocol: TCP
  - name: https
    port: 443
    targetPort: 8080
    protocol: TCP
  selector:
    app: test-deployment
  type: NodePort
  externalTrafficPolicy: Cluster
Ingress
spec:
  rules:
  - host: dev.test.com
    http:
      paths:
      - backend:
          serviceName: test-service
          servicePort: 80
        path: /
This wound up being on the application side. The server needed to be bound to 0.0.0.0 instead of 127.0.0.1.
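A readiness probe would also have surfaced this early, because the kubelet probes the pod IP rather than localhost, so a server bound to 127.0.0.1 never passes it. A sketch (the probe path is an assumption):
containers:
- name: test
  image: test_image
  ports:
  - name: http
    protocol: TCP
    containerPort: 8080
  readinessProbe:
    httpGet:
      path: /      # assumes the server answers GET / on 8080
      port: 8080
    initialDelaySeconds: 5
    periodSeconds: 10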

Process for configuring devspace with pre-existing app

I'm new to Kubernetes (and Docker, for that matter). I need to understand the process of migrating my existing Vue.js app using DevSpace. I've got the app running, sorta, but I am not connecting to
ws://localhost:4000/graphql
or able to establish a mongo connection.
MongoServerSelectionError: connect ECONNREFUSED 127.0.0.1:27017
relevant pre-existing package.json entry points
"serve": "vue-cli-service serve -mode development",
"build": "vue-cli-service build",
"apollo": "vue-cli-service apollo:dev --generate-schema",
"apollo:schema:generate": "vue-cli-service apollo:schema:generate",
"apollo:schema:publish": "vue-cli-service apollo:schema:publish",
"apollo:start": "vue-cli-service apollo:start",
app structure
/apollo-server
  context.js      ## Mongo connection made here.
/src
  vue-apollo.js   ## Apollo setup (GraphQL is set up here.)
Dockerfile
devspace.yaml
package.json
Now,
Dockerfile
FROM node:13.12.0-alpine
# Set working directory
WORKDIR /app
# Add package.json to WORKDIR and install dependencies
COPY package*.json ./
RUN npm install
# Add source code files to WORKDIR
COPY . .
# Application port (optional)
# express server runs on port 3000
EXPOSE 3000
# Debugging port (optional)
# For remote debugging, add this port to devspace.yaml: dev.ports[*].forward[*].port: 9229
EXPOSE 9229
CMD ["npm", "start"]
devspace.yaml
version: v1beta9
images:
  app:
    image: sandbox/practiceapp
    preferSyncOverRebuild: true
    injectRestartHelper: false
    cmd: ["yarn", "serve"]
    appendDockerfileInstructions:
    - USER root
  backend:
    image: sandbox/backend
    preferSyncOverRebuild: true
    injectRestartHelper: false
    entrypoint: ["yarn", "apollo"]
    appendDockerfileInstructions:
    - USER root
deployments:
- name: frontend
  helm:
    componentChart: true
    values:
      containers:
      - image: sandbox/practiceapp
      service:
        ports:
        - port: 8080
- name: backend
  helm:
    componentChart: true
    values:
      containers:
      - image: sandbox/backend
      service:
        ports:
        - port: 4000
        - port: 3000
        - port: 27017
# - name: mongo
#   helm:
#     componentChart: true
#     values:
#       containers:
#       - image: sandbox/mongo
#       service:
#         ports:
#         - port: 27017
dev:
  ports:
  - imageName: app
    forward:
    - port: 8080
  # - imageName: apollo
  #   forward:
  #   - port: 3000
  # - imageName: graphql
  #   forward:
  #   - port: 4000
  # - imageName: mongo
  #   forward:
  #   - port: 27017
  open:
  - url: http://localhost:8080
  - url: http://localhost:4000/graphql
  sync:
  - imageName: app
    excludePaths:
    - .git/
    uploadExcludePaths:
    - Dockerfile
    - node_modules/*
    - '!node_modules/.temp/'
    - devspace.yaml
    onUpload:
      restartContainer: true
profiles:
- name: production
  patches:
  - op: remove
    path: images.app.injectRestartHelper
  - op: remove
    path: images.app.appendDockerfileInstructions
- name: interactive
  patches:
  - op: add
    path: dev.interactive
    value:
      defaultEnabled: true
  - op: add
    path: images.app.entrypoint
    value:
    - sleep
    - "9999999999"
I've looked for information on how to include services from pre-existing apps, but I've had difficulty understanding it. I need some guidance on how to set this up, or where to look.
Thanks for your help and time.
From the information you provided, I think this is probably a networking issue. Please check whether your applications are listening on all interfaces instead of on localhost only, because that would lead to the connection being refused as described in this troubleshooting guide: https://devspace.sh/cli/docs/guides/networking-domains#troubleshooting
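For example, assuming the Vue dev server is currently binding to localhost, you could make it listen on all interfaces directly from the image config shown above (a sketch, not the exact config):
images:
  app:
    image: sandbox/practiceapp
    cmd: ["yarn", "serve", "--host", "0.0.0.0"] # vue-cli-service serve accepts --host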
The answer to this was refactoring the structure of my app and including the service port in deployments as well as the forwarding port in dev.ports.
deployments:
- name: app
  helm:
    componentChart: true
    values:
      containers:
      - image: namespace/frontend
      service:
        name: app-service
        ports:
        - port: 8080
        - port: 4000
dev:
  ports:
  - imageName: app
    forward:
    - port: 8080
    - port: 4000
The final structure of my app:
./backend
  .dockerignore
  Dockerfile
  package.json
./frontend
  .dockerignore
  Dockerfile
  package.json
devspace.yaml
As for connecting MongoDB: I initially started with minikube and then moved to docker-desktop, but I was not able to set up headless ports with external load-balancing access because I was using a replica set on docker-desktop (localhost cannot be assigned twice as the external IP). I used Bitnami's MongoDB Helm chart with devspace to do so, as sketched below.
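The devspace deployment entry for that chart looked roughly like this; the values are illustrative rather than the exact ones I used:
deployments:
- name: mongodb
  helm:
    chart:
      name: mongodb
      repo: https://charts.bitnami.com/bitnami
    values:
      architecture: standalone # a replica set needs one external IP per member, which docker-desktop's single localhost LB can't provide
      auth:
        enabled: false # illustrative only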

Consul agent. Check socket connection failed: error="dial tcp 172.19.0.6:50044: connect: connection refused"

I am having trouble with microservice health checks in my Consul Docker setup, which I believe is a symptom of a failure in service discovery, as I only have one server in my registry.
Below is the Consul member list from inside the Docker container.
/ # consul members
Node Address Status Type Build Protocol DC Segment
7b1edb14a647 172.19.0.6:8301 alive server 1.7.4 2 dc1 <all>
/ #
Consul container logs repeat the same error below for all the microservices:
consul | 2020-06-16T12:19:11.087Z [WARN] agent: Check socket connection failed: check=service:ffa44b66c4869601c04abdbea6dc5be5 error="dial tcp 172.19.0.6:50044: connect: connection refused"
I am using docker-compose (file version 3.2) to create a network for the containers.
This is the consul service definition:
consul:
  container_name: consul
  ports:
    - '8400:8400'
    - '8500:8500'
    - '8600:53/udp'
  image: consul
  command: ['agent', '-server', '-bootstrap', '-ui', '-client', '0.0.0.0']
Microservice definition
service-notification:
  build:
    context: .
    dockerfile: apps/service-notification/Dockerfile
    args:
      NODE_ENV: development
  depends_on:
    - consul
  image: 'service-notification:latest'
  restart: always
  environment:
    - CONSUL_HOST=consul
  ports:
    - '50044:50044'
I am using the CONSUL_HOST env variable to pass in the correct host URL.
Consul config for the microservice
consul:
  host: ${{CONSUL_HOST}}
  port: 8500
service:
  discoveryHost: ${{CONSUL_HOST}}
  healthCheck:
    timeout: 1s
    interval: 10s
    tcp: ${{ service.discoveryHost }}:${{ service.port }}
  maxRetry: 5
  retryInterval: 5000
  tags: ["v1.0.0", "microservice"]
  name: io.ultimatebackend.srv.notification
  port: 50044
My conclusion so far is that the Consul server container somehow fails to reach the agents, but I don't know why, and I feel like I am missing some obvious piece of Consul's structure. Please advise.
I was configuring my service incorrectly. The discoveryHost should be the IP and port of the microservice inside the Docker network.
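In compose terms, that meant pointing discoveryHost at the service-notification container instead of at the Consul host; roughly, as a sketch (assuming the compose service name resolves on the shared network):
service:
  discoveryHost: service-notification # the microservice's own address on the docker network, not CONSUL_HOST
  name: io.ultimatebackend.srv.notification
  port: 50044
  healthCheck:
    timeout: 1s
    interval: 10s
    tcp: ${{ service.discoveryHost }}:${{ service.port }}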

Cannot bind ports to my container in Google Container optimized VM

I'm trying to run a Docker container on a Google Container-Optimized VM in GCE.
Here is my Dockerfile. I built a container image and pushed it to gcr.io.
FROM nginx:1.9
COPY config /etc/nginx
And I wrote a container manifest file.
version: v1beta2
containers:
- name: test
  image: gcr.io/myproject/test
  ports:
  - name: http
    hostPort: 80
    containerPort: 80
  - name: https
    hostPort: 443
    containerPort: 443
I deployed to GCE with the manifest file, but the port binding was not what I expected. Why were host ports 80 and 443 bound to google_containers/pause instead of myproject/test?
local$ gcloud compute instances create test \
    --image container-vm \
    --metadata-from-file google-container-manifest=container.yaml \
    --zone us-central1-b \
    --machine-type f1-micro \
    --tags http-server,https-server
local$ gcloud compute ssh --zone us-central1-b test
test$ sudo docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
818828ccc2c6 gcr.io/myproject/test:latest "nginx -g 'daemon of 23 seconds ago Up 22 seconds k8s_test.9de3822_7f9f8ecace94a22b2bea59ee14f3bcd0-test_df40d10c4dfa4
f40d10c4dfa4 gcr.io/google_containers/pause:0.8.0 "/pause" 32 seconds ago Up 31 seconds 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp k8s_POD.c6ce2a78_7f9f8ecace94a22b2bea59ee14f3bcd0-test_default_7f9f8ecace94a22b2bea59ee14f3bcd0-test_64d51838
I updated the manifest version from v1beta2 to v1 (v1beta3) and tried again. The resulting port bindings seem to be the same as before, but the container can communicate with the external network through ports 80 and 443.
apiVersion: v1
kind: Pod
spec:
  restartPolicy: Always
  dnsPolicy: Default
  containers:
  - name: test
    image: gcr.io/myproject/test
    imagePullPolicy: Always
    ports:
    - name: http
      hostPort: 80
      containerPort: 80
      protocol: TCP
    - name: https
      hostPort: 443
      containerPort: 443
      protocol: TCP
