I added RUN go get to my Dockerfile to install packages during the docker-compose build. However, the following "cannot find package" error occurred when I ran go build. I found that the packages are saved in /go/pkg/linux_amd64/.
run docker-compose and go build
$ docker-compose up -d
$ docker exec -it explorer-cli /bin/bash
# pwd
/go
# ls
bin pkg src
# echo $GOPATH
/go
# ls /go/pkg/linux_amd64/github.com/
go-sql-driver
# go build -i -o /go/bin/explorer-cli src/main.go
src/main.go:6:2: cannot find package "github.com/go-sql-driver/mysql" in any of:
/usr/local/go/src/github.com/go-sql-driver/mysql (from $GOROOT)
/go/src/github.com/go-sql-driver/mysql (from $GOPATH)
(it worked when I ran "go get" manually)
# go get github.com/go-sql-driver/mysql
# ls src/
github.com main.go
# go build -i -o /go/bin/explorer-cli src/main.go
docker-compose.yml
version: '3.4'
services:
mysql:
image: mysql:latest
container_name: database
volumes:
- ./docker/:/etc/mysql/conf.d
- ./docker/:/docker-entrypoint-initdb.d
environment:
- MYSQL_RANDOM_ROOT_PASSWORD=true
- MYSQL_DATABASE=explorer
- MYSQL_USER=admin
- MYSQL_PASSWORD=12dlql*41
app:
build: .
tty: true
image: explorer-cli:latest
container_name: explorer-cli
volumes:
- ./src:/go/src
external_links:
- database
Dockerfile
FROM golang:latest
RUN apt-get update
RUN apt-get upgrade -y
ENV GOBIN /go/bin
RUN go get github.com/go-sql-driver/mysql
main.go
package main
import (
"database/sql"
_ "github.com/go-sql-driver/mysql"
)
func main() {
db, err := sql.Open("mysql", "XUSER:XXXX@(database:3306)/explorer")
if err != nil {
panic(err.Error())
}
defer db.Close()
}
Update 1
I noticed big differences between the following directories.
# ls /go/pkg/linux_amd64/github.com/go-sql-driver/
mysql.a
# ls /go/src/github.com/go-sql-driver/mysql/
AUTHORS connection_go18_test.go packets.go
CHANGELOG.md connection_test.go packets_test.go
CONTRIBUTING.md const.go result.go
LICENSE driver.go rows.go
README.md driver_go18_test.go statement.go
appengine.go driver_test.go statement_test.go
benchmark_go18_test.go dsn.go transaction.go
benchmark_test.go dsn_test.go utils.go
buffer.go errors.go utils_go17.go
collations.go errors_test.go utils_go18.go
connection.go fields.go utils_go18_test.go
connection_go18.go infile.go utils_test.go
Update 2
As @aerokite said, the "volumes" mount was overwriting the downloaded packages. I changed it as follows and it worked.
Dockerfile
FROM golang:latest
RUN apt-get update
RUN apt-get upgrade -y
ENV GOBIN /go/bin
RUN go get github.com/go-sql-driver/mysql
RUN mkdir /go/src/explorer-cli
docker-compose.yml
version: '3.4'
services:
mysql:
image: mysql:latest
container_name: database
volumes:
- ./docker/:/etc/mysql/conf.d
- ./docker/:/docker-entrypoint-initdb.d
environment:
- MYSQL_RANDOM_ROOT_PASSWORD=true
- MYSQL_DATABASE=explorer
- MYSQL_USER=XUSER
- MYSQL_PASSWORD=XXXX
app:
build: .
tty: true
image: explorer-cli:latest
container_name: explorer-cli
volumes:
- ./src/explorer-cli:/go/src/explorer-cli
external_links:
- database
go build
go build -i -o /go/bin/explorer-cli src/explorer-cli/main.go
I have tried to recreate your problem.
FROM golang:latest
RUN apt-get update
RUN apt-get upgrade -y
ENV GOBIN /go/bin
RUN go get github.com/go-sql-driver/mysql
You provided this Dockerfile. I built it:
$ docker build -t test .
Now I exec into a container from this image to run your go build command:
$ docker run -it test bash
Then I created the main.go you provided in the /go/src directory.
And finally, I built it successfully without any error:
$ go build -i -o /go/bin/explorer-cli src/main.go
And I think I have found your problem. I have never used docker-compose, but you will understand.
Problem is here:
app:
build: .
tty: true
image: explorer-cli:latest
container_name: explorer-cli
volumes:
- ./src:/go/src <-- problem is here
external_links:
- database
You are mounting ./src onto the /go/src directory in your container. This overwrites the container's /go/src with your local ./src, which removes the data you got from go get github.com/go-sql-driver/mysql.
Do you understand?
But when you run go get github.com/go-sql-driver/mysql inside the container, it fetches the package again.
Solution (01):
Mount your local volume somewhere else:
volumes:
- ./src:/tmp/src
And modify your Dockerfile so that this main.go gets moved into /go/src.
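A sketch of what that could look like; since the bind mount only exists at run time, the copy has to happen when the container starts, so the cp in the CMD line (and the exact paths) are assumptions on my part:
FROM golang:latest
ENV GOBIN /go/bin
RUN go get github.com/go-sql-driver/mysql
# /tmp/src is only mounted when the container runs, so copy it into GOPATH at start-up
CMD cp -r /tmp/src/. /go/src/ && exec bash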
Solution (02):
Copy main.go into your image. Add this line to your Dockerfile:
COPY ./src/main.go /go/src
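A sketch of how the full Dockerfile could look with that line added (the paths and the go build flags are taken from the question; the final RUN is my addition and is optional):
FROM golang:latest
ENV GOBIN /go/bin
# Dependencies are fetched at image-build time
RUN go get github.com/go-sql-driver/mysql
# Bake the source into the image instead of mounting over /go/src
COPY ./src/main.go /go/src
# Optionally build at image-build time as well
RUN go build -i -o /go/bin/explorer-cli /go/src/main.go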
Related
I'm new to Docker and I want to set up docker-compose for my Django app. In the backend of my app I also have Golang packages, which I run from Django with the subprocess library.
But when I install a package using go install github.com/x/y@latest and then copy its binary to the project directory, it gives me the error: package github.com/x/y@latest: cannot use path@version syntax in GOPATH mode
I searched a lot on the internet but didn't find a solution. Could you please tell me where I'm going wrong?
Here is my Dockerfile:
FROM golang:1.18.1-bullseye as go-build
# Install go package
RUN go install github.com/hakluke/hakrawler@latest \
&& cp $GOPATH/bin/hakrawler /usr/local/bin/
# Install main image for backend
FROM python:3.8.11-bullseye
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# Install Dist packages
RUN apt-get update \
&& apt-get -y install --no-install-recommends software-properties-common libpq5 python3-dev musl-dev git netcat-traditional golang \
&& rm -rf /var/lib/apt/lists/
# Set work directory
WORKDIR /usr/src/redteam_toolkit/
# Install dependencies
RUN pip install --upgrade pip
COPY ./requirements.txt .
RUN pip install -r requirements.txt
# Copy project, and then the go package
COPY . .
COPY --from=go-build /usr/local/bin/hakrawler /usr/src/redteam_toolkit/toolkit/scripts/webapp/
docker-compose.yml:
version: '3.3'
services:
webapp:
build: .
command: python manage.py runserver 0.0.0.0:4334
container_name: toolkit_webapp
volumes:
- .:/usr/src/redteam_toolkit/
ports:
- 4334:4334
env_file:
- ./.env
depends_on:
- db
db:
image: postgres:13.4-bullseye
container_name: database
volumes:
- postgres_data:/var/lib/postgresql/data/
environment:
- POSTGRES_USER=user
- POSTGRES_PASSWORD=password
- POSTGRES_DB=redteam_toolkit_db
volumes:
postgres_data:
The get.py file inside the /usr/src/redteam_toolkit/toolkit/scripts/webapp/ directory, which just runs the Go package and lists the files in this directory:
import os
import subprocess
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
print(f"Current path is: {BASE_DIR}")
def go(target_url):
run_go_package = subprocess.getoutput(
f"echo {target_url} | {BASE_DIR}/webapp/hakrawler -t 15 -u"
)
list_files = subprocess.getoutput(f"ls {BASE_DIR}/webapp/")
print(run_go_package)
print(list_files)
go("https://example.org")
and then I just run:
$ docker-compose up -d --build
$ docker-compose exec webapp python toolkit/scripts/webapp/get.py
The output is:
Current path is: /usr/src/redteam_toolkit/toolkit/scripts
/bin/sh: 1: /usr/src/redteam_toolkit/toolkit/scripts/webapp/hakrawler: not found
__init__.py
__pycache__
scr.py
gather.py
This looks like a really good candidate for a multi-stage build:
FROM golang:1.18.0 as go-build
# Install packages
RUN go install github.com/x/y@latest \
    && cp $GOPATH/bin/package /usr/local/bin/
FROM python:3.8.11-bullseye as release
...
COPY --from=go-build /usr/local/bin/package /usr/src/toolkit/toolkit/scripts/webapp/
...
Your compose file also needs to be updated: the volume mount is masking the entire /usr/src/redteam_toolkit folder. Delete that volume mount to see the content of the image.
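For reference, a sketch of the webapp service with that mount removed (all other keys are copied from the question's compose file):
webapp:
  build: .
  command: python manage.py runserver 0.0.0.0:4334
  container_name: toolkit_webapp
  ports:
    - 4334:4334
  env_file:
    - ./.env
  depends_on:
    - db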
GOPATH mode does not work with the Go modules path@version syntax. In your Dockerfile, either add:
RUN unset GOPATH
or use RUN go get <package_repository> instead.
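The error in the question comes from the go command running in GOPATH mode. A hedged alternative sketch is to force module-aware mode in the go-build stage, where the path@version syntax is valid (the image tag and package path are taken from the question; the GO111MODULE line is my assumption, since it is already the default in Go 1.16+):
FROM golang:1.18.1-bullseye as go-build
# Module-aware mode accepts "go install <path>@<version>"
ENV GO111MODULE=on
RUN go install github.com/hakluke/hakrawler@latest \
    && cp $GOPATH/bin/hakrawler /usr/local/bin/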
I'm facing an issue: I am trying to run my Go Fiber project inside Docker with Air, but I am getting this error:
uni-blog | /bin/sh: 1: /app/tmpmain.exe: not found
I am using:
Windows 11
Docker desktop
golang latest
air 1.27.10
fiber latest
Here are my docker-compose file and Dockerfile:
# docker-compose up -d --build
version: "3.8"
services:
app:
container_name: uni-blog
image: app-dev
build:
context: .
target: development
volumes:
- ./:/app
ports:
- 3000:3000
FROM golang:1.17 as development
RUN apt update && apt upgrade -y && \
apt install -y git \
make openssh-client
RUN curl -fLo install.sh https://raw.githubusercontent.com/cosmtrek/air/master/install.sh \
&& chmod +x install.sh && sh install.sh && cp ./bin/air /bin/air
RUN air -v
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY . .
EXPOSE 3000
CMD air
I also tried installing Air following the README instructions, but it still gives me this error.
Please help
Thanks in advance
The volumes: mount you have replaces the /app directory in the image with content from the host. If the binary is built in the Dockerfile, that volumes: mount hides it; if you don't have a matching compatible binary on the host in the same place, you'll get an error like what you see.
I'd remove that volumes: block so you're actually running the binary that's built into the image. The docker-compose.yml file can be reduced to as little as:
version: '3.8'
services:
app:
build: .
ports:
- '3000:3000'
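With the bind mount removed, code changes only show up after a rebuild; one way to do that (the same command the question already uses) is:
$ docker-compose up -d --build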
If you look at the error, you can see that the path separator between tmp and main.exe is missing:
/bin/sh: 1: /app/tmpmain.exe: not found
This is coming from the .air.toml config file:
bin = "tmp\\main.exe"
Create a .air.toml file in the project root like so:
root = "."
tmp_dir = "tmp"
[build]
# Build binary.
cmd = "go build -o ./tmp/main.exe ."
# Read binary.
bin = "tmp/main.exe"
# Watch changes in those files
include_ext = [ "go", "yml"]
# Ignore changes in these files
exclude_dir = ["tmp"]
# Stop builds from triggering too fast
delay = 1000 # ms
[misc]
clean_on_exit = true
Are there any ways to share data between containers? I have the following docker-compose file:
version: '3'
services:
app_build_prod:
container_name: 'app'
build:
context: ../
dockerfile: docker/Dockerfile
args:
command: build:prod
nginx:
container_name: 'nginx'
image: nginx:alpine
ports:
- "80:80"
depends_on:
- app_build_prod
Dockerfile content is:
FROM node:10-alpine as builder
## Installing missing packages, fixing git self signed certificate issue
RUN apk update && apk upgrade && \
apk add --no-cache bash git openssh && \
rm -rf /var/cache/apk/* && \
git config --global http.sslVerify false
## Defining app directory
WORKDIR /usr/app
## Copying files. Files listed in .dockerignore are omitted
COPY . .
## Keeping node_modules in a separate intermediate layer prevents unnecessary npm installs on each build
RUN npm ci
## Declaring arguments and environment variables. It is important to declare the env var so it can be consumed at the run stage
ARG command=build:prod
ENV command=$command
ENTRYPOINT npm run ${command}
Tried @Robert's solution, but couldn't make it work: the app container crashes because of:
EBUSY: resource busy or locked, rmdir '/usr/app/dist
Error: EBUSY: resource busy or locked, rmdir '/usr/app/dist'
My assumption is that the /usr/app/dist directory is mounted with read-only access, so when Angular attempts to remove it prior to the build, it throws an error.
I need to send data in the following direction:
app_build_prod:/usr/app/dist => nginx:/usr/share/nginx/html
I had the same problem and changed the sharing to use a multi-stage build:
FROM alpine:latest AS builder
...build app_build_prod
FROM nginx:alpine
COPY --from=builder /usr/app/dist /usr/share/nginx/html
and changed docker-compose to:
version: '3'
services:
nginx:
container_name: 'nginx'
build:
...
ports:
- "80:80"
I would like to configure a CI such as TravisCI to build my application with Docker. My application has two parts: JavaScript and Python.
I thought of using docker-compose to do this:
version: '3'
services:
node:
image: node:12.8.0-buster
volumes:
- .:/srv
python:
image: python:3.7.4-buster
volumes:
- .:/src
I would like to have a Makefile such as:
all: foo bar
foo:
docker-compose exec node /bin/bash -c ' \
cd /workdir; \
npm install; \
npm run build'
bar:
docker-compose exec python /bin/bash -c ' \
cd /workdir; \
pip install sphinx; \
make html'
Is it correct to use docker-compose like this? And what should I change to make it work?
docker-compose not only supports running containers but also building images; see this.
So, for your scenario, you should add your package build to the Dockerfile and execute it with docker-compose up -d --build, which will first build a Docker image and then start the service based on the new image.
A simple sketch follows; note that it only explains the main idea and is not a fully workable example, so you need to adapt it to your real situation.
Dockerfile.node:
FROM node:12.8.0-buster
# Add related to build
ADD . /srv
# Add all package install
RUN cd /srv && npm install && npm run build
# Others
......
Dockerfile.python:
FROM python:3.7.4-buster
# Add related to build
ADD . /srv
# Add all package install
RUN cd /srv && pip install sphinx && make html
# Others
......
docker-compose.yaml:
version: '3'
services:
node:
build:
context: .
dockerfile: Dockerfile.node
volumes:
- .:/srv
python:
build:
context: .
dockerfile: Dockerfile.python
volumes:
- .:/src
Problem
Substitution doesn't work for the build phase
Files
docker-compose.yml (only kibana part):
kibana:
build:
context: services/kibana
args:
KIBANA_VERSION: "${KIBANA_VERSION}"
entrypoint: >
/scripts/wait-for-it.sh elasticsearch:9200
-s --timeout=${ELASTICSEARCH_INIT_TIMEOUT}
-- /usr/local/bin/kibana-docker
environment:
ELASTICSEARCH_URL: http://elasticsearch:9200
volumes:
- ./scripts/wait-for-it.sh:/scripts/wait-for-it.sh
ports:
- "${KIBANA_HTTP_PORT}:5601"
links:
- elasticsearch
depends_on:
- elasticsearch
networks:
- frontend
- backend
restart: always
Dockerfile for the services/kibana:
ARG KIBANA_VERSION=6.2.3
FROM docker.elastic.co/kibana/kibana:${KIBANA_VERSION}
USER root
RUN yum install -y which && yum clean all
USER kibana
COPY kibana.yml /usr/share/kibana/config/kibana.yml
RUN ./bin/kibana-plugin install https://github.com/sivasamyk/logtrail/releases/download/v0.1.27/logtrail-${KIBANA_VERSION}-0.1.27.zip
COPY logtrail.json /usr/share/kibana/plugins/logtrail/logtrail.json
EXPOSE 5601
Env file (only kibana part):
KIBANA_VERSION=6.2.3
KIBANA_HTTP_PORT=5601
KIBANA_ELASTICSEARCH_HOST=elasticsearch
KIBANA_ELASTICSEARCH_PORT=9200
Actual output (Problem is here: substitution doesn't work)
#docker-compose up --force-recreate --build kibana
.........
Step 8/10 : RUN ./bin/kibana-plugin install https://github.com/sivasamyk/logtrail/releases/download/v0.1.27/logtrail-${KIBANA_VERSION}-0.1.27.zip
---> Running in d28b1dcb6348
Attempting to transfer from https://github.com/sivasamyk/logtrail/releases/download/v0.1.27/logtrail--0.1.27.zip
Attempting to transfer from https://artifacts.elastic.co/downloads/kibana-plugins/https://github.com/sivasamyk/logtrail/releases/download/v0.1.27/logtrail--0.1.27.zip/https://github.com/sivasamyk/logtrail/releases/download/v0.1.27/logtrail--0.1.27.zip-6.2.3.zip
Plugin installation was unsuccessful due to error "No valid url specified."
ERROR: Service 'kibana' failed to build: The command '/bin/sh -c ./bin/kibana-plugin install https://github.com/sivasamyk/logtrail/releases/download/v0.1.27/logtrail-${KIBANA_VERSION}-0.1.27.zip' returned a non-zero code: 70
Expected output (something similar):
Step 8/10 : RUN ./bin/kibana-plugin install https://github.com/sivasamyk/logtrail/releases/download/v0.1.27/logtrail-${KIBANA_VERSION}-0.1.27.zip
---> Running in d28b1dcb6348
Attempting to transfer from https://github.com/sivasamyk/logtrail/releases/download/v0.1.27/logtrail-6.2.3-0.1.27.zip
I found the answer 5 minutes after I posted this question... damn.
The solution is stupid, but it works: I only needed to re-declare the arg inside the build stage (I put it after the USER instruction). See:
ARG KIBANA_VERSION=6.2.3
FROM docker.elastic.co/kibana/kibana:${KIBANA_VERSION}
USER root
RUN yum install -y which && yum clean all
USER kibana
ARG KIBANA_VERSION=${KIBANA_VERSION}
COPY kibana.yml /usr/share/kibana/config/kibana.yml
RUN ./bin/kibana-plugin install https://github.com/sivasamyk/logtrail/releases/download/v0.1.27/logtrail-${KIBANA_VERSION}-0.1.27.zip
COPY logtrail.json /usr/share/kibana/plugins/logtrail/logtrail.json
EXPOSE 5601
The solution is the following lines:
USER kibana
ARG KIBANA_VERSION=${KIBANA_VERSION}
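For reference, the underlying rule is that an ARG declared before FROM is only in scope for the FROM line itself; re-declaring it inside the build stage brings it back into scope, and a bare ARG KIBANA_VERSION (with no value) is enough. A minimal sketch, with the echo line purely for illustration:
ARG KIBANA_VERSION=6.2.3
FROM docker.elastic.co/kibana/kibana:${KIBANA_VERSION}
# Re-declare inside the stage so later RUN instructions can see the value
ARG KIBANA_VERSION
RUN echo "Building logtrail for Kibana ${KIBANA_VERSION}"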