I can't build this simple example of confluent-kafka-go using Docker. It is probably a trick with the Go path or a special build parameter, but I can't figure it out; I have tried all the default Go folders with no success.
Dockerfile
FROM golang:alpine AS builder
# Set necessary environment variables needed for our image
ENV GO111MODULE=on \
    CGO_ENABLED=0 \
    GOOS=linux \
    GOARCH=amd64
ADD . /go/app
# Install librdkafka
RUN apk add librdkafka-dev pkgconf
# Move to working directory /build
WORKDIR /go/app
# Copy and download dependency using go mod
COPY go.mod .
RUN go mod download
# Copy the code into the container
COPY . .
# Build the application
RUN go build -o main .
# Run test
RUN go test ./... -v
# Move to /dist directory as the place for resulting binary folder
WORKDIR /dist
# Copy binary from build to main folder
RUN cp /go/app/main .
############################
# STEP 2 build a small image
############################
FROM scratch
COPY --from=builder /dist/main /
# Command to run the executable
ENTRYPOINT ["/main"]
Source
package main

import (
    "fmt"
    "os"

    "github.com/confluentinc/confluent-kafka-go/kafka"
)

func main() {
    if len(os.Args) != 3 {
        fmt.Fprintf(os.Stderr, "Usage: %s <broker> <topic>\n",
            os.Args[0])
        os.Exit(1)
    }
    broker := os.Args[1]
    topic := os.Args[2]
    p, err := kafka.NewProducer(&kafka.ConfigMap{"bootstrap.servers": broker})
    if err != nil {
        fmt.Printf("Failed to create producer: %s\n", err)
        os.Exit(1)
    }
    fmt.Printf("Created Producer %v\n", p)
    deliveryChan := make(chan kafka.Event)
    value := "Hello Go!"
    err = p.Produce(&kafka.Message{
        TopicPartition: kafka.TopicPartition{Topic: &topic, Partition: kafka.PartitionAny},
        Value:          []byte(value),
        Headers:        []kafka.Header{{Key: "myTestHeader", Value: []byte("header values are binary")}},
    }, deliveryChan)
    if err != nil {
        fmt.Printf("Failed to produce message: %s\n", err)
        os.Exit(1)
    }
    e := <-deliveryChan
    m := e.(*kafka.Message)
    if m.TopicPartition.Error != nil {
        fmt.Printf("Delivery failed: %v\n", m.TopicPartition.Error)
    } else {
        fmt.Printf("Delivered message to topic %s [%d] at offset %v\n",
            *m.TopicPartition.Topic, m.TopicPartition.Partition, m.TopicPartition.Offset)
    }
    close(deliveryChan)
}
Error
./producer_example.go:37:12: undefined: kafka.NewProducer
./producer_example.go:37:31: undefined: kafka.ConfigMap
./producer_example.go:48:28: undefined: kafka.Event
./producer_example.go:51:19: undefined: kafka.Message
Edit
I can confirm that using the musl build tag works:
FROM golang:alpine as build
WORKDIR /go/src/app
# Set necessary environment variables needed for our image
ENV GOOS=linux GOARCH=amd64
COPY . .
RUN apk update && apk add gcc librdkafka-dev openssl-libs-static zlib-static zstd-libs libsasl librdkafka-static lz4-dev lz4-static zstd-static libc-dev musl-dev
RUN go build -tags musl -ldflags '-w -extldflags "-static"' -o main
FROM scratch
COPY --from=build /go/src/app/main /
# Command to run the executable
ENTRYPOINT ["/main"]
Works with the test setup as shown below.
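As an optional sanity check before the scratch stage (my addition; it assumes you install the file package in the build stage), you can confirm the binary really is statically linked:
RUN apk add --no-cache file
RUN file main
# expected to report something like "ELF 64-bit LSB executable, x86-64, ... statically linked"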
OK, the version 1.4.0 of github.com/confluentinc/confluent-kafka-go/kafka that you used seems to be generally incompatible with at least the current state of Alpine 3.11.
Furthermore, despite my best efforts, I was unable to build a statically compiled binary fit for use with FROM scratch.
However, I was able to get your code running against a current version of Kafka. The image is a bit bigger, but I guess working and a bit bigger is better than not working and elegant.
Todos
1. Downgrade to confluent-kafka-go@v1.1.0
As simple as
$ go get -u -v github.com/confluentinc/confluent-kafka-go@v1.1.0
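After the downgrade, go.mod should pin the older release; a minimal sketch (the module path example.com/app is a placeholder):
module example.com/app

go 1.13

require github.com/confluentinc/confluent-kafka-go v1.1.0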
2. Modify your Dockerfile
You were lacking some build dependencies to begin with. And obviously, we need a runtime dependency as well, since we do not use FROM scratch any more. Please note that I also tried to simplify it and left jwilder/dockerize in, which I used so that I do not have to worry about timing in my test setup:
FROM golang:alpine as build
# The default location is /go/src
WORKDIR /go/src/app
ENV GOOS=linux \
GOARCH=amd64
# We simply copy everything to /go/src/app
COPY . .
# Add the required build libraries
RUN apk update && apk add gcc librdkafka-dev zstd-libs libsasl lz4-dev libc-dev musl-dev
# Run the build
RUN go build -o main
FROM alpine
# We use dockerize to make sure the kafka server is up and running before the command starts.
ENV DOCKERIZE_VERSION v0.6.1
ENV KAFKA kafka
# Add dockerize
RUN apk --no-cache upgrade && apk --no-cache --virtual .get add curl \
&& curl -L -O https://github.com/jwilder/dockerize/releases/download/${DOCKERIZE_VERSION}/dockerize-linux-amd64-${DOCKERIZE_VERSION}.tar.gz \
&& tar -C /usr/local/bin -xzvf dockerize-linux-amd64-$DOCKERIZE_VERSION.tar.gz \
&& rm dockerize-linux-amd64-$DOCKERIZE_VERSION.tar.gz \
&& apk del .get \
# Add the runtime dependency.
&& apk add --no-cache librdkafka
# Fetch the binary
COPY --from=build /go/src/app/main /
# Wait for kafka to come up, only then start /main
ENTRYPOINT ["sh","-c","/usr/local/bin/dockerize -wait tcp://${KAFKA}:9092 /main kafka test"]
3. Test it
I created a docker-compose.yaml to check whether everything works:
version: "3.7"
services:
zookeeper:
image: 'bitnami/zookeeper:3'
ports:
- '2181:2181'
volumes:
- 'zookeeper_data:/bitnami'
environment:
- ALLOW_ANONYMOUS_LOGIN=yes
kafka:
image: 'bitnami/kafka:2'
ports:
- '9092:9092'
volumes:
- 'kafka_data:/bitnami'
environment:
- KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
- ALLOW_PLAINTEXT_LISTENER=yes
depends_on:
- zookeeper
server:
image: fals/kafka-main
build: .
command: "kafka test"
volumes:
zookeeper_data:
kafka_data:
You can check that the setup works with:
$ docker-compose build && docker-compose up -d && docker-compose logs -f server
[...]
server_1 | 2020/04/18 18:37:33 Problem with dial: dial tcp 172.24.0.4:9092: connect: connection refused. Sleeping 1s
server_1 | 2020/04/18 18:37:34 Connected to tcp://kafka:9092
server_1 | Created Producer rdkafka#producer-1
server_1 | Delivered message to topic test [0] at offset 0
server_1 | 2020/04/18 18:37:36 Command finished successfully.
kfka_server_1 exited with code 0
Related
Hello, I am trying to build an image which can compile and run a C++ program securely.
FROM golang:latest as builder
WORKDIR /app
COPY . .
RUN go mod download
RUN env CGO_ENABLED=0 go build -o /worker
FROM alpine:latest
RUN apk update && apk add --no-cache g++ && apk add --no-cache tzdata
ENV TZ=Asia/Kolkata
WORKDIR /
COPY --from=builder worker /bin
ARG USER=default
RUN addgroup -S $USER && adduser -S $USER -G $USER
USER $USER
ENTRYPOINT [ "worker" ]
version: "3.9"
services:
gpp:
build: .
environment:
- token=test_token
- code=#include <iostream>\r\n\r\nusing namespace std;\r\n\r\nint main() {\r\n int a = 10;\r\n int b = 20;\r\n cout << a << \" \" << b << endl;\r\n int temp = a;\r\n a = b;\r\n b = temp;\r\n cout << a << \" \" << b << endl;\r\n return 0;\r\n}
network_mode: bridge
privileged: false
read_only: true
tmpfs: /tmp
security_opt:
- "no-new-privileges"
cap_drop:
- "all"
Here worker is a Go binary which reads the code from the environment variable, stores it in the /tmp folder as main.cpp, and then tries to compile and run it using g++ /tmp/main.cpp && ./tmp/a.out (via Go's exec package).
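A minimal sketch of what such a worker might look like (a hypothetical reconstruction; the actual worker source is not shown here):
package main

import (
    "fmt"
    "os"
    "os/exec"
)

func main() {
    // Read the C++ source from the environment variable.
    code := os.Getenv("code")
    // /tmp is the only writable location when the root filesystem is read-only.
    if err := os.WriteFile("/tmp/main.cpp", []byte(code), 0o644); err != nil {
        fmt.Fprintln(os.Stderr, "write:", err)
        os.Exit(1)
    }
    // Compile and execute; this is the step that fails with "permission denied"
    // when /tmp is mounted noexec.
    out, err := exec.Command("sh", "-c", "g++ -o /tmp/a.out /tmp/main.cpp && /tmp/a.out").CombinedOutput()
    fmt.Print(string(out))
    if err != nil {
        fmt.Fprintln(os.Stderr, "Error :", err)
        os.Exit(1)
    }
}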
I am getting this error: scratch_4-gpp-1 | Error : fork/exec /tmp/a.out: permission denied, from which I understand that executing anything from the tmp directory is restricted.
Since I am using a read-only root file system, I can only work in the tmp directory. Please guide me on how I can achieve the above task while keeping my container secure.
Docker's default options for a tmpfs include noexec. docker run --tmpfs allows an extended set of mount options, but neither Compose tmpfs: nor the extended syntax of volumes: allows changing anything other than the size option.
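For comparison, plain docker run does let you override those flags; a hypothetical invocation (image name and size are placeholders):
$ docker run --rm --read-only --tmpfs /tmp:rw,exec,size=64m my-worker-image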
One straightforward option here is to use an anonymous volume. Syntactically this looks like a normal volumes: line, except it only has a container path. The read_only: option will make the container's root filesystem be read-only, but volumes are exempted from this.
version: '3.8'
services:
  ...
    read_only: true
    volumes:
      - /build # which will be read-write
This will be a "normal" Docker volume, so it will be disk-backed and you'll be able to see it in docker volume ls.
Complete summary of the solution:
@davidmaze mentioned adding an anonymous volume using
version: '3.8'
services:
  ...
    read_only: true
    volumes:
      - /build # which will be read-write
As I replied, I was still getting an error Cannot create temporary file in ./: Read-only file system when I tried to compile my program. When I debugged my container to see the file system changes in read_only: false mode, I found that the compiler was trying to save the a.out file in the /bin folder, which is supposed to be read-only.
So I added this additional line before the entry point and my issue was solved.
FROM golang:latest as builder
WORKDIR /app
COPY . .
RUN go mod download
RUN env CGO_ENABLED=0 go build -o /worker
FROM alpine:latest
RUN apk update && apk add --no-cache g++ && apk add --no-cache tzdata
ENV TZ=Asia/Kolkata
WORKDIR /
COPY --from=builder worker /bin
ARG USER=default
RUN addgroup -S $USER && adduser -S $USER -G $USER
USER $USER
WORKDIR /build <---- this line
ENTRYPOINT [ "worker" ]
I'm new to Docker and I want to set up a docker-compose configuration for my Django app. In the backend of my app, I have Go packages too, and I run them in Django with the subprocess library.
But when I want to install a package using go install github.com/x/y@latest and then copy its binary to the project directory, it gives me the error: package github.com/x/y@latest: cannot use path@version syntax in GOPATH mode
I searched a lot on the internet but didn't find a solution to my problem. Could you please tell me where I'm going wrong?
Here is my Dockerfile:
FROM golang:1.18.1-bullseye as go-build
# Install go package
RUN go install github.com/hakluke/hakrawler@latest \
&& cp $GOPATH/bin/hakrawler /usr/local/bin/
# Install main image for backend
FROM python:3.8.11-bullseye
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# Install Dist packages
RUN apt-get update \
&& apt-get -y install --no-install-recommends software-properties-common libpq5 python3-dev musl-dev git netcat-traditional golang \
&& rm -rf /var/lib/apt/lists/
# Set work directory
WORKDIR /usr/src/redteam_toolkit/
# Install dependencies
RUN pip install --upgrade pip
COPY ./requirements.txt .
RUN pip install -r requirements.txt
# Copy project, and then the go package
COPY . .
COPY --from=go-build /usr/local/bin/hakrawler /usr/src/redteam_toolkit/toolkit/scripts/webapp/
docker-compose.yml:
version: '3.3'
services:
webapp:
build: .
command: python manage.py runserver 0.0.0.0:4334
container_name: toolkit_webapp
volumes:
- .:/usr/src/redteam_toolkit/
ports:
- 4334:4334
env_file:
- ./.env
depends_on:
- db
db:
image: postgres:13.4-bullseye
container_name: database
volumes:
- postgres_data:/var/lib/postgresql/data/
environment:
- POSTGRES_USER=user
- POSTGRES_PASSWORD=password
- POSTGRES_DB=redteam_toolkit_db
volumes:
postgres_data:
The get.py file inside the /usr/src/redteam_toolkit/toolkit/scripts/webapp/ directory, which just runs the Go package and lists the files in this directory:
import os
import subprocess

BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
print(f"Current path is: {BASE_DIR}")

def go(target_url):
    run_go_package = subprocess.getoutput(
        f"echo {target_url} | {BASE_DIR}/webapp/hakrawler -t 15 -u"
    )
    list_files = subprocess.getoutput(f"ls {BASE_DIR}/webapp/")
    print(run_go_package)
    print(list_files)

go("https://example.org")
and then I just run:
$ docker-compose up -d --build
$ docker-compose exec webapp python toolkit/scripts/webapp/get.py
The output is:
Current path is: /usr/src/redteam_toolkit/toolkit/scripts
/bin/sh: 1: /usr/src/redteam_toolkit/toolkit/scripts/webap/hakrawler: not found
__init__.py
__pycache__
scr.py
gather.py
This looks like a really good candidate for a multi-stage build:
FROM golang:1.18.0 as go-build
# Install packages
RUN go install github.com/x/y@latest \
    && cp $GOPATH/bin/package /usr/local/bin/
FROM python:3.8.11-bullseye as release
...
COPY --from=go-build /usr/local/bin/package /usr/src/toolkit/toolkit/scripts/webapp/
...
Your compose file also needs to be updated: it is masking the entire /usr/src/redteam_toolkit folder with the volume mount. Delete that volume mount to see the contents of the image.
GOPATH mode does not work with Go modules. In your Dockerfile, add:
RUN unset GOPATH
and use RUN go get <package_repository>.
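Alternatively, a sketch assuming a Go 1.16+ toolchain (the golang:1.18.1-bullseye build stage above qualifies), where module mode is the default and the path@version syntax works directly with go install:
RUN go install github.com/hakluke/hakrawler@latest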
I'm facing an issue: I am trying to run my Go Fiber project inside Docker with air, but I am getting this error:
uni-blog | /bin/sh: 1: /app/tmpmain.exe: not found
I am using:
Windows 11
Docker desktop
golang latest
air 1.27.10
fiber latest
Here are my docker-compose file and Dockerfile:
# docker-compose up -d --build
version: "3.8"
services:
  app:
    container_name: uni-blog
    image: app-dev
    build:
      context: .
      target: development
    volumes:
      - ./:/app
    ports:
      - 3000:3000
FROM golang:1.17 as development
RUN apt update && apt upgrade -y && \
apt install -y git \
make openssh-client
RUN curl -fLo install.sh https://raw.githubusercontent.com/cosmtrek/air/master/install.sh \
&& chmod +x install.sh && sh install.sh && cp ./bin/air /bin/air
RUN air -v
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY . .
EXPOSE 3000
CMD air
I also tried installing air following the README instructions, but it still gives me this error.
Please help. Thanks in advance.
The volumes: mount you have replaces the /app directory in the image with content from the host. If the binary is built in the Dockerfile, that volumes: mount hides it; if you don't have a matching compatible binary on the host in the same place, you'll get an error like what you see.
I'd remove that volumes: block so you're actually running the binary that's built into the image. The docker-compose.yml file can be reduced to as little as:
version: '3.8'
services:
  app:
    build: .
    ports:
      - '3000:3000'
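After removing the mount, force a rebuild so you are running what is built into the image rather than whatever happens to be on the host:
$ docker-compose up --build -d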
If you look at the error, you can notice that the path separator is missing between tmp and main.exe:
/bin/sh: 1: /app/tmpmain.exe: not found
This comes from the .air.toml config file:
bin = "tmp\\main.exe"
The backslash is treated as an escape rather than a path separator on Linux, so air looks for /app/tmpmain.exe. Create an .air.toml file in the project root like so:
root = "."
tmp_dir = "tmp"
[build]
# Build binary.
cmd = "go build -o ./tmp/main.exe ."
# Read binary.
bin = "tmp/main.exe"
# Watch changes in those files
include_ext = [ "go", "yml"]
# Ignore changes in these files
exclude_dir = ["tmp"]
# Stop builds from triggering too fast
delay = 1000 # ms
[misc]
clean_on_exit = true
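With this file in the project root, air picks it up automatically; you can also point at it explicitly:
air -c .air.toml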
I have an existing docker-compose file
version: '3.6'
services:
  verdaccio:
    restart: always
    image: verdaccio/verdaccio
    container_name: verdaccio
    ports:
      - 4873:4873
    volumes:
      - conf:/verdaccio/conf
      - storage:/verdaccio/storage
      - plugins:/verdaccio/plugins
    environment:
      - VERDACCIO_PROTOCOL=https
networks:
  default:
    external:
      name: registry
I would like to use a Dockerfile instead of docker-compose, as it will be easier to deploy a Dockerfile to an Azure container registry.
I have tried many solutions posted on blogs and elsewhere, but nothing worked as I needed.
How can I create a simple Dockerfile from the above docker-compose file?
You can't. Many of the Docker Compose options (and the equivalent docker run options) can only be set when you start a container. In your example, the restart policy, published ports, mounted volumes, network configuration, and overriding the container name are all runtime-only options.
If you built a Docker image matching this, the most you could add in is setting that one ENV variable, and COPYing in the configuration files and plugins rather than storing them in named volumes. The majority of that docker-compose.yml would still be required.
If you want to put the conf, storage and plugins files/folders into the image, you can just copy them:
FROM verdaccio/verdaccio
WORKDIR /verdaccio
COPY conf conf
COPY storage storage
COPY plugins plugins
but if you need to keep file and folder changes, then you should keep them as volumes, as they are now.
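A quick way to try the resulting image locally (tag name and port mapping are illustrative):
$ docker build -t my-verdaccio .
$ docker run -d -p 4873:4873 my-verdaccio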
docker-compose uses an existing image.
If what you want is to create a custom image and use it with your docker-compose setup, this is perfectly possible:
1. Create your Dockerfile - example here: https://docs.docker.com/get-started/part2/
2. Build an "image" from your Dockerfile: docker build -f /path/to/Dockerfile -t saurabh_rai/myapp:1.0 . (note the trailing build-context path) - this returns an image ID, something like 12abef12.
3. Log in to your dockerhub account (saurabh_rai) and create a repo for the image to be pushed to (myapp).
4. docker push saurabh_rai/myapp:1.0 - will push your image to the hub.docker.com repo for your user, to the myapp repo. You may need to perform docker login for this to work and enter your username/password as usual at the command line.
5. Update your docker-compose.yaml file to use your image saurabh_rai/myapp:1.0.
example docker-compose.yaml:
version: '3.6'
services:
  verdaccio:
    restart: always
    container_name: verdaccio
    image: saurabh_rai/myapp:1.0
    ports:
      - 4873:4873
    volumes:
      - conf:/verdaccio/conf
      - storage:/verdaccio/storage
      - plugins:/verdaccio/plugins
    environment:
      - VERDACCIO_PROTOCOL=https
    networks:
      - registry
I have solved this issue by using the existing verdaccio Dockerfile, given below.
FROM node:12.16.2-alpine as builder
ENV NODE_ENV=production \
VERDACCIO_BUILD_REGISTRY=https://registry.verdaccio.org
RUN apk --no-cache add openssl ca-certificates wget && \
apk --no-cache add g++ gcc libgcc libstdc++ linux-headers make python && \
wget -q -O /etc/apk/keys/sgerrand.rsa.pub https://alpine-pkgs.sgerrand.com/sgerrand.rsa.pub && \
wget -q https://github.com/sgerrand/alpine-pkg-glibc/releases/download/2.25-r0/glibc-2.25-r0.apk && \
apk add glibc-2.25-r0.apk
WORKDIR /opt/verdaccio-build
COPY . .
RUN yarn config set registry $VERDACCIO_BUILD_REGISTRY && \
yarn install --production=false && \
yarn lint && \
yarn code:docker-build && \
yarn cache clean && \
yarn install --production=true
FROM node:12.16.2-alpine
LABEL maintainer="https://github.com/verdaccio/verdaccio"
ENV VERDACCIO_APPDIR=/opt/verdaccio \
VERDACCIO_USER_NAME=verdaccio \
VERDACCIO_USER_UID=10001 \
VERDACCIO_PORT=4873 \
VERDACCIO_PROTOCOL=http
ENV PATH=$VERDACCIO_APPDIR/docker-bin:$PATH \
HOME=$VERDACCIO_APPDIR
WORKDIR $VERDACCIO_APPDIR
RUN apk --no-cache add openssl dumb-init
RUN mkdir -p /verdaccio/storage /verdaccio/plugins /verdaccio/conf
COPY --from=builder /opt/verdaccio-build .
ADD conf/docker.yaml /verdaccio/conf/config.yaml
RUN adduser -u $VERDACCIO_USER_UID -S -D -h $VERDACCIO_APPDIR -g "$VERDACCIO_USER_NAME user" -s /sbin/nologin $VERDACCIO_USER_NAME && \
chmod -R +x $VERDACCIO_APPDIR/bin $VERDACCIO_APPDIR/docker-bin && \
chown -R $VERDACCIO_USER_UID:root /verdaccio/storage && \
chmod -R g=u /verdaccio/storage /etc/passwd
USER $VERDACCIO_USER_UID
EXPOSE $VERDACCIO_PORT
VOLUME /verdaccio/storage
ENTRYPOINT ["uid_entrypoint"]
CMD $VERDACCIO_APPDIR/bin/verdaccio --config /verdaccio/conf/config.yaml --listen $VERDACCIO_PROTOCOL://0.0.0.0:$VERDACCIO_PORT
By making a few changes to the Dockerfile, I was able to build and push my Docker image to the Azure container registry and deploy it to an app service.
@Giga Kokaia, @Rob Evans, @Aman - thank you for the suggestions, they made this easier to think through.
I have a Dockerfile for a Django and Vue.js app that I use along with GitLab.
The problem that I'm about to describe only happens when deploying via GitLab CI and the corresponding .gitlab-ci.yml file. When running the docker-compose up command on my local machine, this doesn't happen.
So I run docker-compose up and all the instructions in the Dockerfile run apparently fine. But when I check the production server, the dist folder (where the bundle.js and bundle.css should be stored) doesn't exist.
The logs emitted while running the Dockerfile confirm that the npm install and npm run build commands run, and they even confirm that the dist/bundle.js and dist/bundle.css files have been generated. But for some reason they seem to be deleted.
This is my Dockerfile:
FROM python:3.7-alpine
MAINTAINER My Name
ENV PYTHONUNBUFFERED 1
RUN mkdir /app
# make the 'app' folder the current working directory
WORKDIR /app
# copy project files and folders to the current working directory (i.e. 'app' folder)
COPY ./app .
COPY ./requirements.txt /requirements.txt
RUN apk add --update --no-cache postgresql-client
RUN apk add --update --no-cache --virtual .tmp-build-deps \
gcc libc-dev linux-headers postgresql-dev
RUN pip install -r /requirements.txt
RUN apk del .tmp-build-deps
# copy both 'package.json' and 'package-lock.json' (if available)
COPY app/frontend/package*.json ./frontend/
# Install npm
RUN apk add --update nodejs && apk add --update nodejs-npm
# install project dependencies
WORKDIR /app/frontend
RUN npm install
# build app for production with minification
RUN npm run build
RUN adduser -D user
USER user
CMD ["sh ../scripts/entrypoint.sh"]
This is the .gitlab-ci.yml file:
image: docker:latest
services:
  - docker:dind
before_script:
  - echo "Running before_script"
  - sudo apt-get install -y python-pip
  - sudo apt-get install -y nodejs
  - pip install docker-compose
stages:
  - test
  - build
  - deploy
test:
  stage: test
  script:
    - echo "Testing the app"
    - docker-compose run app sh -c "python /app/manage.py test && flake8"
build:
  stage: build
  only:
    - develop
    - production
    - feature/gitlab_ci
  script:
    - echo "Building the app"
    - docker-compose build
deploy:
  stage: deploy
  only:
    - master
    - develop
    - feature/gitlab_ci
  script:
    - echo "Deploying the app"
    - docker-compose up --build -d
This is the content of the docker-compose.yml file:
version: "3"
services:
app:
build:
context: .
ports:
- "8000:8000"
volumes:
- ./app:/app
command: >
sh -c "python /app/manage.py runserver 0.0.0.0:8000"
environment:
- DB_HOST=db
- DB_NAME=app
- DB_USER=postgres
- DB_PASS=postgres
depends_on:
- db
db:
image: postgres:10-alpine
environment:
- POSTGRES_DB=app
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=postgres
This is the content of the entrypoint.sh file:
#!/bin/bash
(cd .. && ./manage.py collectstatic --noinput)
# Migration files are commited to git. Makemigrations is not needed.
# ./manage.py makemigrations app_name
(cd .. && ./manage.py migrate)
I would like to know why the dist/ folder disappears and how to keep it.
When your docker-compose.yml file says
volumes:
  - ./app:/app
that hides everything that your Dockerfile builds in the /app directory and replaces it with whatever's in your local system. If your host doesn't have a ./app/frontend/dist then your container won't have that path either, regardless of whatever the Dockerfile does.
I would generally recommend just deleting this volumes: block entirely. It introduces an awkward live-development path (where all of your tooling needs to know that the actual service runs in Docker) and simultaneously isn't what you'd run in production (you want the image to be self-contained and not to need to copy the application separately from the image).
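A sketch of the same service with the bind mount removed (all other settings from your file unchanged):
version: "3"
services:
  app:
    build:
      context: .
    ports:
      - "8000:8000"
    command: >
      sh -c "python /app/manage.py runserver 0.0.0.0:8000"
    environment:
      - DB_HOST=db
      - DB_NAME=app
      - DB_USER=postgres
      - DB_PASS=postgres
    depends_on:
      - db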
In your compose file, you set a volume which is going to replace the /app directory in your container with your local folder, even after npm run build:
volumes:
  - ./app:/app
You can either run the build locally or remove the volumes: block.
We had a similar issue with a NestJS build. Later we noticed that we had excluded the src folder in the .dockerignore.
The issue is not with the Dockerfile; it is with your dependencies. Please check the package.json file in the root folder.