trying to use $(command) inside docker sdk entrypoint - docker

So, as the title says, I'm trying to execute a simple command inside the entrypoint using the Golang Docker SDK (Docker API).
func RunBatch(imageName, containerName string, entrypoint []string, volumes []string) string {
    ctx := context.Background()
    c := getClient()
    cfg := &container.Config{Entrypoint: entrypoint, Tty: true, Image: imageName}
    hostCfg := &container.HostConfig{Mounts: make([]mount.Mount, len(volumes))}
    netCfg := &network.NetworkingConfig{}
    startCfg := types.ContainerStartOptions{}
    for i := range volumes {
        vols := strings.Split(volumes[i], ":")
        hostCfg.Mounts[i] = mount.Mount{
            Type:   mount.TypeBind,
            Source: config.Config.BaseDir + vols[0],
            Target: vols[1],
        }
    }
    resp, err := c.ContainerCreate(ctx, cfg, hostCfg, netCfg, containerName)
    if err != nil {
        log.Fatal().Err(err)
    }
    err = c.ContainerStart(ctx, resp.ID, startCfg)
    if err != nil {
        log.Fatal().Err(err)
    }
    _, err = c.ContainerWait(ctx, resp.ID)
    if err != nil {
        log.Fatal().Err(err)
    }
    err = c.ContainerRemove(ctx, resp.ID, types.ContainerRemoveOptions{})
    if err != nil {
        log.Fatal().Err(err)
    }
    return resp.ID
}
The entrypoint I'm passing here is ["touch", "/app/$(date +'%T')"],
but the created file is literally named $(date +'%T'). I've also tried and failed with ${date +'%T'} and with backquotes.
How can I get those commands to execute?

Unlike the shell form, the exec form does not invoke a command shell. This means that normal shell processing does not happen. For example, ENTRYPOINT [ "echo", "$HOME" ] will not do variable substitution on $HOME. If you want shell processing then either use the shell form or execute a shell directly, for example: ENTRYPOINT [ "sh", "-c", "echo $HOME" ].

RunBatch is going to treat the value of entrypoint literally (as you're experiencing).
You will need to provide it with the Golang equivalent (a Time.Format value) of bash's $(date +%T) to succeed:
Perhaps:
["touch", fmt.Sprintf("/app/%s",time.Now().Format("15:04:05"))]
NOTE: 15:04:05 is Go's reference-time layout pattern; the value produced will be the current time.
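Alternatively, if you want the substitution to happen inside the container, the documentation quoted above points at wrapping the command in a shell. A minimal usage sketch, assuming RunBatch from the question is in scope; the image name, container name and volume mapping are placeholders:

// Hedged sketch: "sh -c" makes a shell inside the container expand $(date +%T),
// so the image must ship a shell (alpine/busybox/ubuntu do, scratch does not).
// Image name, container name and volume mapping below are placeholders.
func runBatchWithShell() {
    entrypoint := []string{"sh", "-c", "touch /app/$(date +%T)"}
    id := RunBatch("alpine:latest", "batch-touch", entrypoint, []string{"data:/app"})
    fmt.Println("created container:", id)
}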

Related

How to force a Dockerfile to use envs which are passed by the Docker client in Golang?

I'm using the Docker client with Golang; in the following script, I'm trying to pass an environment variable when a container starts.
package main

import (
    "context"
    "fmt"
    "os/user"

    "github.com/docker/docker/api/types"
    "github.com/docker/docker/api/types/container"
    "github.com/docker/docker/client"
)

func main() {
    dockerClient, _ := client.NewClientWithOpts(client.FromEnv)
    env := []string{}
    for key, value := range map[string]string{"hi": "hoo"} {
        env = append(env, fmt.Sprintf("%s=%s", key, value))
    }
    user, err := user.Current()
    if err != nil {
        fmt.Println(err)
    }
    config := container.Config{
        User:  fmt.Sprintf("%s:%s", user.Uid, user.Gid),
        Image: "675d8442a90f",
        Env:   env,
    }
    response, err := dockerClient.ContainerCreate(context.Background(), &config, nil, nil, nil, "")
    if err != nil {
        fmt.Println(err)
    }
    if err = dockerClient.ContainerStart(context.Background(), response.ID, types.ContainerStartOptions{}); err != nil {
        fmt.Println(err)
    }
}
And my Dockerfile is a simple one in which I try to echo the hi env:
# Filename: Dockerfile
FROM ubuntu:latest
COPY . .
CMD ["echo", "$hi"]
When I built the image and passed its ID to the script, it didn't echo the variable. Do you have any idea how I can use the environment variables in the Dockerfile, which are sent to the container by the Docker Golang client?
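The exec-form caveat quoted in the first answer applies here too: CMD ["echo", "$hi"] never goes through a shell, so $hi is not substituted. One option is the shell form in the Dockerfile; another, sketched below as an assumption (not from the question, and reusing the question's placeholder image ID and imports), is to override the command from the Go client so a shell expands the variable:

// Sketch: override the image CMD so a shell expands $hi at run time.
// The image ID is the placeholder from the question.
func createWithShellEcho(ctx context.Context, dockerClient *client.Client) (string, error) {
    cfg := container.Config{
        Image: "675d8442a90f",
        Env:   []string{"hi=hoo"},
        // "sh -c" invokes a shell inside the container, which performs the
        // variable substitution that the exec-form CMD never gets.
        Cmd: []string{"sh", "-c", "echo $hi"},
    }
    resp, err := dockerClient.ContainerCreate(ctx, &cfg, nil, nil, nil, "")
    if err != nil {
        return "", err
    }
    return resp.ID, dockerClient.ContainerStart(ctx, resp.ID, types.ContainerStartOptions{})
}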

Running docker builds in parallel on an EC2 instance results in longer build times - why?

We are building multiple Docker images in the cloud. This is done using this code:
func BuildM1Image(tags []string, dockerfile string, contextPath string, dockerRepoUrl string) error {
    dockerExecutable, _ := exec.LookPath("docker")
    awsExecutable, _ := exec.LookPath("aws")
    toTag := tags[0]
    Log("building image with tag " + toTag)
    // to push --> I need the AWS credentials to live in the cloud, pull them
    os.Setenv("AWS_ACCESS_KEY_ID", Secrets[ECR_ACCESS_KEY_NAME])
    os.Setenv("AWS_SECRET_ACCESS_KEY", Secrets[ECR_SECRET_ACCESS_KEY_NAME])
    ecrGetCredentialsCMD := &exec.Cmd{
        Path: awsExecutable,
        Args: []string{awsExecutable, "ecr", "get-login-password", "--region", GetRegion()},
        // Stderr: os.Stderr,
        // Stdout: os.Stdout,
    }
    out, _ := ecrGetCredentialsCMD.CombinedOutput()
    // if err != nil {
    //     errorChannel <- err
    //     return
    // }
    dockerEcrLoginCMD := &exec.Cmd{
        Path: dockerExecutable,
        Args: []string{dockerExecutable, "login", "--username", "AWS", "-p", string(out), dockerRepoUrl},
    }
    if err := dockerEcrLoginCMD.Run(); err != nil {
        fmt.Println("Docker login failed")
        fmt.Println("error: ", err)
        return err
    }
    buildDockerImage := &exec.Cmd{
        Path: dockerExecutable,
        Args: []string{dockerExecutable, "buildx", "build", "--platform", "linux/amd64", "-t", toTag, "-f", dockerfile, contextPath},
    }
    if err := buildDockerImage.Run(); err != nil {
        fmt.Println("docker build failed")
        logError(err)
        return err
    }
    return nil
}
I put this function inside a goroutine and kicked off multiple builds (a minimal sketch of that fan-out follows below). However, the builds take longer than if I were to run them sequentially. This only happens on an EC2 instance and not locally (locally we're running this on an M1 Mac and it works as expected).
Why would this happen?
On the EC2 instance we've tried -
Increasing the compute/storage
Increasing the IOPS
Thanks!
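For reference, a minimal sketch of the fan-out described above, assuming BuildM1Image is in scope and the "sync" package is imported; the tags, Dockerfile paths, build contexts and ECR URL are placeholders:

// Sketch of kicking the builds off concurrently, as described in the question.
var wg sync.WaitGroup
builds := []struct{ tag, dockerfile, contextPath string }{
    {"repo/app-a:latest", "Dockerfile.a", "./a"},
    {"repo/app-b:latest", "Dockerfile.b", "./b"},
}
for _, b := range builds {
    b := b // capture the loop variable for the goroutine
    wg.Add(1)
    go func() {
        defer wg.Done()
        if err := BuildM1Image([]string{b.tag}, b.dockerfile, b.contextPath, "123456789012.dkr.ecr.us-east-1.amazonaws.com"); err != nil {
            fmt.Println("build failed:", err)
        }
    }()
}
wg.Wait()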

How to access container logs of a script executed with StartContainer using the Go SDK from Docker Inc.

Motivation
I'm running this command, which executes the generate script inside the container:
docker run -it --rm \
--mount type=volume,src=synapse-data,dst=/data \
-e SYNAPSE_SERVER_NAME=my.matrix.host \
-e SYNAPSE_REPORT_STATS=yes \
matrixdotorg/synapse:latest generate
Based on https://github.com/matrix-org/synapse/tree/v1.56.0/docker
Docker SDK usage
And I'm using this abstraction: https://pkg.go.dev/github.com/docker/docker/client#Client.ContainerCreate
As a general concept I want to use:
AutoRemove: true,
The point is to automate/enforce container deletion after use, for instance if the setup exits unexpectedly. I'm also using the container name server_setup_temporary_container, which hints to the user that the container is used during setup and is meant to be temporary. In case the setup did not shut down the container, the user can do so manually and the bound volumes are freed.
My problem with this generate script
I can't use https://github.com/moby/moby/blob/v20.10.18/client/container_logs.go#L36 because the container exits as soon as it has finished executing generate. Therefore I can't access the logs at all, as the container is already removed.
In contrast, this works well with the PostgreSQL container, as it runs as a daemon and needs an explicit shutdown. The same concept fails when only executing a script!
I don't know how to continue here.
A few thoughts I had:
after generate, execute a 'sleep 3600' and then explicitly shut the container down as well
try to get the logs from ContainerStart or ContainerCreate directly, but studying the API, this is probably not how it is implemented
What I would not want is to remove the AutoRemove: true concept.
The source code
Using my StartContainer abstraction
// Start and run container
containerId, err := s.dockerClient.myStartContainer(docker.ContainerStartConfig{
    Image: matrixImage,
    Volumes: []docker.ContainerVolume{
        docker.ContainerVolume{
            Source: volume,
            Target: "/data",
        },
    },
    Env: []string{
        fmt.Sprintf("SYNAPSE_SERVER_NAME=%s", domain),
        "SYNAPSE_REPORT_STATS=no",
    },
    Cmds: []string{
        "generate",
    },
})
StartContainer abstraction
func (c *Client) myStartContainer(cfg ContainerStartConfig) (string, error) {
    if c.client == nil {
        return "", errors.New(noClientErr)
    }
    if len(cfg.Image) == 0 {
        return "", errors.New(noImageErr)
    }
    containerConfig := container.Config{
        Image: cfg.Image,
    }
    hostConfig := container.HostConfig{
        AutoRemove: true,
    }
    if cfg.Env != nil {
        containerConfig.Env = cfg.Env
    }
    if cfg.Cmds != nil {
        containerConfig.Cmd = make(strslice.StrSlice, len(cfg.Cmds))
        for i, _cmd := range cfg.Cmds {
            containerConfig.Cmd[i] = _cmd
        }
    }
    if cfg.Volumes != nil {
        hostConfig.Mounts = make([]mount.Mount, len(cfg.Volumes))
        for i, v := range cfg.Volumes {
            hostConfig.Mounts[i] = mount.Mount{
                Type:   "volume",
                Source: v.Source,
                Target: v.Target,
            }
        }
    }
    var networkingConfig *network.NetworkingConfig
    if cfg.Networks != nil {
        networkingConfig = &network.NetworkingConfig{EndpointsConfig: map[string]*network.EndpointSettings{}}
        for _, nw := range cfg.Networks {
            n := nw.Name
            networkingConfig.EndpointsConfig[n] = &network.EndpointSettings{Aliases: nw.Aliases}
        }
    }
    cont, err := c.client.ContainerCreate(
        c.ctx,
        &containerConfig,
        &hostConfig,
        networkingConfig,
        nil,
        "server_setup_temporary_container",
    )
    if err != nil {
        return "", err
    }
    colorlogger.Log.Info("Container ID of "+colorlogger.LYellow, cfg.Image, colorlogger.CLR+" is "+cont.ID)
    if err := c.client.ContainerStart(c.ctx, cont.ID, types.ContainerStartOptions{}); err != nil {
        return "", err
    }
    return cont.ID, nil
}
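One way to keep AutoRemove: true and still capture the output of generate (a sketch of an alternative, not part of the question; it assumes the additional imports "os" and "github.com/docker/docker/pkg/stdcopy") is to attach to the container's output before starting it and read the stream until EOF. Replacing the tail of myStartContainer above with something like:

// Attach before ContainerStart so the output stream exists even though
// AutoRemove deletes the container as soon as generate finishes.
attach, err := c.client.ContainerAttach(c.ctx, cont.ID, types.ContainerAttachOptions{
    Stream: true,
    Stdout: true,
    Stderr: true,
})
if err != nil {
    return "", err
}
defer attach.Close()

if err := c.client.ContainerStart(c.ctx, cont.ID, types.ContainerStartOptions{}); err != nil {
    return "", err
}

// Without a TTY the stream is multiplexed; stdcopy splits stdout and stderr.
// Reading until EOF also doubles as "wait until the script has finished".
if _, err := stdcopy.StdCopy(os.Stdout, os.Stderr, attach.Reader); err != nil {
    return "", err
}
return cont.ID, nil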
Greater scenario
I'm executing this setup in order to configure the containers that are later run with 'docker compose', as some of the setups require explicit changes to the containers and can't be done declaratively.

Docker Golang SDK returns nothing from ContainerExecCreate

I am trying to execute a command (let's say "pwd") using the Docker Golang SDK, and I expect it to return the working directory of the container. But it returns nothing back. I am not sure what the issue is.
rst, err := cli.ContainerExecCreate(context.Background(), "0df7c1d9d185b1da627efb983886a12fefc32120d035b34e97c3ad13da6dd9cc", types.ExecConfig{Cmd: []string{"pwd"}})
if err != nil {
    panic(err)
}
//res, err := cli.ContainerExecInspect(context.Background(), rst.ID)
//print(res.ExitCode)
response, err := cli.ContainerExecAttach(context.Background(), rst.ID, types.ExecStartCheck{})
if err != nil {
    panic(err)
}
defer response.Close()
data, _ := ioutil.ReadAll(response.Reader)
fmt.Println(string(data))
GOROOT=/usr/local/Cellar/go/1.13.5/libexec #gosetup
GOPATH=/Users/pt/go #gosetup
/usr/local/Cellar/go/1.13.5/libexec/bin/go build -o /private/var/folders/yp/hh3_03d541x0r6t7_zwqqhqr0000gn/T/___go_build_main_go /Users/pt/go/src/awesomeProject/main.go #gosetup
/private/var/folders/yp/hh3_03d541x0r6t7_zwqqhqr0000gn/T/___go_build_main_go #gosetup
### It does not print the working directory ###
Process finished with exit code 0
This is solved by the following config:
optionsCreate := types.ExecConfig{
    AttachStdout: true,
    AttachStderr: true,
    Cmd:          []string{"ls", "-a"},
}
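For completeness, a minimal self-contained sketch of the fix (the container ID is the placeholder from the question): without AttachStdout/AttachStderr the daemon has nowhere to send the command's output, which is why the attach reader came back empty, and since the exec has no TTY the stream is multiplexed, so stdcopy is used to split it:

package main

import (
    "context"
    "os"

    "github.com/docker/docker/api/types"
    "github.com/docker/docker/client"
    "github.com/docker/docker/pkg/stdcopy"
)

func main() {
    ctx := context.Background()
    cli, err := client.NewClientWithOpts(client.FromEnv)
    if err != nil {
        panic(err)
    }

    // Placeholder container ID; use a running container of your own.
    const containerID = "0df7c1d9d185b1da627efb983886a12fefc32120d035b34e97c3ad13da6dd9cc"

    // AttachStdout/AttachStderr are what make the output come back at all.
    exec, err := cli.ContainerExecCreate(ctx, containerID, types.ExecConfig{
        AttachStdout: true,
        AttachStderr: true,
        Cmd:          []string{"pwd"},
    })
    if err != nil {
        panic(err)
    }

    resp, err := cli.ContainerExecAttach(ctx, exec.ID, types.ExecStartCheck{})
    if err != nil {
        panic(err)
    }
    defer resp.Close()

    // Non-TTY exec output is multiplexed; stdcopy splits stdout and stderr.
    if _, err := stdcopy.StdCopy(os.Stdout, os.Stderr, resp.Reader); err != nil {
        panic(err)
    }
}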

kubectl log shows 'standard_init_linux.go:211: exec user process caused "no such file or directory"'

I am trying to create a ContainerSource for a Knative service. When I use docker run for the image, it gives the output (or the error from the code). However, when I apply the YAML file, kubectl logs shows 'standard_init_linux.go:211: exec user process caused "no such file or directory"'. docker run shows that it is able to find the executable file, so I am not able to understand what's wrong. Could someone please guide me through this?
My YAML file:
apiVersion: sources.eventing.knative.dev/v1alpha1
kind: ContainerSource
metadata:
  labels:
    controller-tools.k8s.io: "1.0"
  name: cloudevents-source
spec:
  image: docker.io/username/pkt-event:latest
  args:
    - '--debug=true'
  sink:
    apiVersion: serving.knative.dev/v1alpha1
    kind: Service
    name: event-display
My Go code for the Docker image is:
package main

import (
    "context"
    "flag"
    "fmt"
    "log"
    "os"
    "time"

    "github.com/satori/go.uuid"
    "knative.dev/eventing-contrib/pkg/kncloudevents"

    "encoding/json"

    // "io/ioutil"
    // "knative.dev/eventing-contrib/vendor/github.com/cloudevents/sdk-go/pkg/cloudevents"
    "github.com/cloudevents/sdk-go/pkg/cloudevents"
    "github.com/cloudevents/sdk-go/pkg/cloudevents/types"
    "github.com/kelseyhightower/envconfig"
)

var (
    eventSource string
    eventType   string
    sink        string
)

//var u, _ = uuid.NewV4()
var debug = flag.Bool("debug", false, "Enable debug mode (print more information)")
var source = flag.String("source", uuid.NewV4().String(), "Set custom Source for the driver")

func init() {
    flag.StringVar(&eventSource, "eventSource", "", "the event-source (CloudEvents)")
    flag.StringVar(&eventType, "eventType", "dev.knative.eventing.samples.pkt", "the event-type (CloudEvents)")
    flag.StringVar(&sink, "sink", "", "the host url to send pkts to")
}

type envConfig struct {
    // Sink URL where to send heartbeat cloudevents
    Sink string `envconfig:"SINK"`
}

func main() {
    flag.Parse()
    var env envConfig
    if err := envconfig.Process("", &env); err != nil {
        log.Printf("[ERROR] Failed to process env var: %s", err)
        os.Exit(1)
    }
    if env.Sink != "" {
        sink = env.Sink
    }
    if eventSource == "" {
        eventSource = fmt.Sprintf("https://knative.dev/eventing-contrib/cmd/heartbeats/#local/demo")
        log.Printf("Source: %s", eventSource)
    }
    client, err := kncloudevents.NewDefaultClient(sink)
    if err != nil {
        log.Fatalf("failed to create client: %s", err.Error())
    }
    var period time.Duration
    period = time.Duration(1) * time.Second
    ticker := time.NewTicker(period)
    for {
        content := "Send data"
        data, err := json.Marshal(content)
        if err != nil {
            fmt.Println(err)
        }
        event := cloudevents.Event{
            Context: cloudevents.EventContextV02{
                Type:   "packet.invoke",
                Source: *types.ParseURLRef(eventSource),
                /*Extensions: map[string]interface{}{
                    "the":   42,
                    "heart": "yes",
                    "beats": true,
                },*/
            }.AsV02(),
            Data: data,
        }
        if *debug {
            log.Printf("Sending event %v", event)
        } else {
            if _, err := client.Send(context.TODO(), event); err != nil {
                log.Printf("failed to send cloudevent: %s", err.Error())
            }
        }
        <-ticker.C
    }
}
And the Dockerfile is:
FROM golang:1.12 as builder
RUN go version
WORKDIR ${GOPATH}/src/Event-driver
COPY ./ ./
RUN curl https://raw.githubusercontent.com/golang/dep/master/install.sh | sh
##RUN dep ensure
RUN dep init
RUN dep ensure
RUN CGO_ENABLED=0 GOARCH=amd64 GOOS=linux go build -v -o my-event my-event.go
RUN pwd && ls
FROM scratch
#FROM ubuntu:disco
COPY --from=builder /go/src/Event-driver/my-event /
ENTRYPOINT ["/my-event"]
That problem occurs because you're trying to run your binary from bash, but scratch has no bash.
I normally use alpine instead. Building for alpine needs the same environment variables, so you probably only need to change the second-stage image.
