How to spin up a Docker container & run some commands via a Go program?

I am planning to automate spinning up a container and running some commands in it.
But I get the error below when I run
docker run -it alpine sh ls
The error I get is
docker error: the input device is not a TTY.
So I removed the interactive flag and ran
docker run -t alpine sh ls
I don't get a shell, but the container does start.
I run the above docker commands through Go's os/exec package:
package main

import (
	"os"
	"os/exec"
	"sync"
)

func main() {
	var wg sync.WaitGroup
	wg.Add(1)
	go func() {
		defer wg.Done()
		cmd := exec.Command("docker", "run", "-it", "alpine", "sh", "ls")
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr
		cmd.Run()
		// log.Println(cmd.Run())
	}()
	wg.Wait()
}
My intention is to run multiple shell scripts after spinning up the container.
Any help would be appreciated. Thanks
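The immediate error comes from combining -t with a stdin that is not a terminal, and from passing ls to sh as if it were a script file. A minimal sketch of a working variant, assuming the docker CLI is on the PATH (the --rm flag and the sh -c wrapper are illustrative choices, not the only option):

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// No -i/-t flags: stdin is not a terminal when docker is launched from Go,
	// which is exactly what triggers "the input device is not a TTY".
	// "sh ls" makes sh look for a script file named "ls"; "sh -c ls" runs ls
	// as a command instead.
	cmd := exec.Command("docker", "run", "--rm", "alpine", "sh", "-c", "ls")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}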

One avenue to explore is the official moby/moby packages integration-cli and integration-cli/cli (see cli.go).
The function func Docker(cmd icmd.Cmd, cmdOperators ...CmdOperator) *icmd.Result is made to run a docker command with parameters.
For example, for docker run:
cli.Docker(cli.Args("run", "--name", name, "--rm", "busybox", "sh", "-c", "sleep 30; echo hi"))
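For the stated goal of running multiple shell scripts after the container is up, a plain os/exec sketch is another option: start the container detached, keep it alive, and drive each script through docker exec. The container name worker, the sleep 300 keep-alive, and the two sample commands are placeholder assumptions for illustration:

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
)

func main() {
	// Start a long-lived container in the background; no TTY is needed,
	// so the "the input device is not a TTY" error does not apply.
	if out, err := exec.Command("docker", "run", "-d", "--name", "worker",
		"alpine", "sleep", "300").CombinedOutput(); err != nil {
		log.Fatalf("docker run failed: %v\n%s", err, out)
	}
	// Best-effort cleanup when we are done.
	defer exec.Command("docker", "rm", "-f", "worker").Run()

	// Run each script inside the running container via docker exec.
	// "ls /" and "uname -a" stand in for your own shell scripts.
	for _, script := range []string{"ls /", "uname -a"} {
		cmd := exec.Command("docker", "exec", "worker", "sh", "-c", script)
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "%s failed: %v\n", script, err)
		}
	}
}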

Related

Using Jekyll docker inside Jenkins

I'm trying to build a jekyll website via my Jenkins server (which runs inside a container) and I have a stage in my Jenkinsfile that looks like this:
stage('Building Website') {
    agent {
        docker {
            image 'jekyll/jekyll:builder'
        }
    }
    steps {
        sh 'jekyll --version'
    }
}
The very first time I run my job it pulls the jekyll docker image and runs fine (although it does fetch a bunch of gems before running Jekyll, which doesn't happen when I run the image manually outside Jenkins), but the next jobs fail with this error:
jekyll --version
/usr/jekyll/bin/jekyll: exec: line 15: /usr/local/bundle/bin/jekyll: not found
Any ideas what I'm doing wrong here?
As you can see in the Jenkins log file, Jenkins runs docker with the -u 1000:1000 argument. Since this user does not exist in the jekyll/jekyll image, the command fails with the error .../bin/jekyll: not found
Here is a sample Jenkinsfile:
pipeline {
    agent {
        docker {
            image 'jekyll/jekyll:3.8'
            args '''
                -u root:root
                -v "${WORKSPACE}:/srv/jekyll"
            '''
        }
    }
    stages {
        stage('Test') {
            steps {
                sh '''
                    cd /srv/jekyll
                    jekyll --version
                '''
            }
        }
    }
}
To add to the other answer, note that the containerized Jenkins doesn't contain the docker binary, so docker commands will still fail.
A few solutions:
Make a Dockerfile that inherits from the Jenkins image and installs Docker as well, creating a new image.
Or manually install Docker inside the container. This will work until you pull a new image, and then you'll have to do it over again.
For the manual route, open an interactive terminal into the Jenkins container
docker container exec -it -u root <container id> bash
Then install docker
curl https://get.docker.com/ > dockerinstall && chmod 777 dockerinstall && ./dockerinstall
Exit the container and set perms on docker.sock
sudo chmod 666 /var/run/docker.sock
Finished!

Docker in Docker - volumes not working: Full of files in 1st level container, empty in 2nd tier

I am running Docker in Docker (specifically, to run Jenkins, which then runs Docker builder containers to build project images, and then runs those and the test containers).
This is how the jenkins image is built and started:
docker build --tag bb/ci-jenkins .
mkdir $PWD/volumes/
docker run -d --network=host \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /usr/bin/docker:/usr/bin/docker \
-v $PWD/volumes/jenkins_home:/var/jenkins_home \
--name ci-jenkins bb/ci-jenkins
Jenkins works fine. But then there is a Jenkinsfile based job, which runs this:
docker run -i --rm -v /var/jenkins_home/workspace/forkMV_jenkins-VOLTRON-3057-KQXKVJNXOU4DGSUG3P27IR3QEDHJ6K7HPDEZYN7W6HCOTCH3QO3Q:/tmp/build collab/collab-services-api-mvn-builder:2a074614 mvn -B -T 2C install
And this ends up with an error:
The goal you specified requires a project to execute but there is no POM in this directory (/tmp/build).
When I docker exec -it into the container with sh, /tmp/build is empty. But when I am in the Jenkins container, the path /var/jenkins_home/...QO3Q/ exists and contains the workspace with all the files checked out and prepared.
So I wonder: how can Docker happily mount the volume and then have it turn out empty?
What's even more confusing, this setup works for my colleague on Mac.
I am on Linux, Ubuntu 17.10, Docker latest.
After some research, calming down and thinking, I realized that Docker-in-Docker is not really so much "-in-", as it is rather "Docker-next-to-Docker".
The trick to make a container able to run another container is sharing /var/run/docker.sock through a volume: -v /var/run/docker.sock:/var/run/docker.sock
And then the docker client in the container actually calls Docker on the host.
The volume source path (left of :) does not refer to the middle container, but to the host filesystem!
After realizing that, the fix is to make the paths to the Jenkins workspace directory the same in the host filesystem and the Jenkins (middle) container:
docker run -d --network=host \
...
-v /var/jenkins_home:/var/jenkins_home
And voilà! It works. (I created a symlink instead of moving it; that seems to work too.)
It is a bit complicated if you're looking at a colleague's Mac, because Docker is implemented a bit differently there: it runs in an Alpine Linux based VM but pretends not to. (Not 100% sure about that.) On Windows, I read that the paths have another layer of abstraction: a mapping from C:/somewhere/... to a Linux-like path.
I hope I'll save someone hours of figuring out :)
Alternative Solution with Docker cp
I was facing the same problem of mounting volumes from a build that runs in a Docker container inside a Jenkins server on Kubernetes. As we use Docker-in-Docker (dind), I couldn't mount the volume in either of the ways proposed here. I'm still not sure what the reason is, but I found an alternative: use docker cp to copy the build artifacts.
Multi-stage Docker Image for Tests
I'm using the following Dockerfile stage for Unit + Integration tests.
#
# Build stage for building the Jar
#
FROM maven:3.2.5-jdk-8 as builder
MAINTAINER marcello.desales@gmail.com
# Only copy the necessary to pull only the dependencies from registry
ADD ./pom.xml /opt/build/pom.xml
# As some entries in pom.xml refers to the settings, let's keep it same
ADD ./settings.xml /opt/build/settings.xml
WORKDIR /opt/build/
# Prepare by downloading dependencies
RUN mvn -s settings.xml -B -e -C -T 1C org.apache.maven.plugins:maven-dependency-plugin:3.0.2:go-offline
# Run the full packaging after copying the source
ADD ./src /opt/build/src
RUN mvn -s settings.xml install -P embedded -Dmaven.test.skip=true -B -e -o -T 1C verify
# Building only this stage can be done with the --target builder switch
# 1. Build: docker build -t config-builder --target builder .
# When running this first stage image, just verify the unit tests
# Override them by removing the "!" for integration tests
# 2. docker run --rm -ti config-builder mvn -s settings.xml -Dtest="*IT,*IntegrationTest" test
CMD mvn -s settings.xml -Dtest="!*IT,!*IntegrationTest" -P jacoco test
Jenkins Pipeline For tests
My Jenkins pipeline has a stage for running parallel tests (Unit + Integration).
What I do is build the test image in one stage and then run the tests in parallel.
I use docker cp to copy the build artifacts out of the test container, which runs as a named container and can be started again after the tests have finished.
Alternatively, you can use Jenkins stash to carry the test results to a post stage.
In short, I solved the problem with docker run --name tests-SHA, then docker start tests-SHA, then docker cp tests-SHA:/path ., where . is the current workspace directory; this is similar to what we would get with a docker volume mounted at the current directory.
stage('Build Test Image') {
    steps {
        script {
            currentBuild.displayName = "Test Image"
            currentBuild.description = "Building the docker image for running the test cases"
        }
        echo "Building docker image for tests from build stage ${env.GIT_COMMIT}"
        sh "docker build -t tests:${env.GIT_COMMIT} -f ${paas.build.docker.dockerfile.runtime} --target builder ."
    }
}
stage('Tests Execution') {
    parallel {
        stage('Execute Unit Tests') {
            steps {
                script {
                    currentBuild.displayName = "Unit Tests"
                    currentBuild.description = "Running the unit tests cases"
                }
                sh "docker run --name tests-${env.GIT_COMMIT} tests:${env.GIT_COMMIT}"
                sh "docker start tests-${env.GIT_COMMIT}"
                sh "docker cp tests-${env.GIT_COMMIT}:/opt/build/target ."
                // https://jenkins.io/doc/book/pipeline/jenkinsfile/#advanced-scripted-pipeline#using-multiple-agents
                stash includes: '**/target/*', name: 'build'
            }
        }
        stage('Execute Integration Tests') {
            when {
                expression { paas.integrationEnabled == true }
            }
            steps {
                script {
                    currentBuild.displayName = "Integration Tests"
                    currentBuild.description = "Running the Integration tests cases"
                }
                sh "docker run --rm tests:${env.GIT_COMMIT} mvn -s settings.xml -Dtest=\"*IT,*IntegrationTest\" -P jacoco test"
            }
        }
    }
}
A better approach is to use the Jenkins Docker plugin and let it do all the mounting for you; just add -v /var/run/docker.sock:/var/run/docker.sock to its inside() function arguments.
E.g.
docker.build("bb/ci-jenkins")
docker.image("bb/ci-jenkins").inside('-v /var/run/docker.sock:/var/run/docker.sock') {
    ...
}

akka-http application running inside docker

I've got an akka-http application that works fine locally, but I'm having some problems "dockerizing" the application. I build the docker image through the Dockerfile and use a docker-entrypoint script to execute the java -jar command. When I first access the running docker container the app is not running, although if I access the container and manually execute the java -jar command the app starts fine. If I execute the following command (inside the container) the application also starts fine:
bash -xe docker-entrypoint.sh
See my Dockerfile below:
FROM qa.stratio.com/stratio/ubuntu-base:16.04
MAINTAINER stratio
ARG VERSION
RUN apt-get update && apt-get install -y screen
COPY target/khermes-${VERSION}-allinone.jar /khermes.jar
COPY docker/docker-entrypoint.sh /
COPY src/main/resources/application.conf /
EXPOSE 8080
ENTRYPOINT ["/docker-entrypoint.sh"]
And see also below my docker-entrypoint.sh:
#!/bin/bash -xe
java -jar -Dkhermes.client=false -Dakka.remote.hostname=localhost -Dakka.remote.netty.tcp.port=2553 -Dakka.cluster.seed-nodes.0=akka.tcp://khermes#localhost:2552 /khermes.jar
tail -f /dev/null
Does anyone have an idea on why my application is getting killed when I run the container?
I recommend using SBT Native Packager instead. My build.scala looks like this
import com.typesafe.sbt.packager.archetypes.JavaAppPackaging
import com.typesafe.sbt.packager.docker.{Cmd, DockerPlugin}
import com.typesafe.sbt.packager.docker.DockerPlugin.autoImport._
import sbt._
import Keys._
object build extends Build {
  lazy val root: Project = Project(
    id = "my-awesome-server",
    base = file("."),
    settings = Defaults.coreDefaultSettings ++ Seq(
      resolvers ++= {
        Seq(
          "sonatype snapshots" at "https://oss.sonatype.org/content/repositories/snapshots"
        )
      },
      organization := "co.awesome-startup",
      version := "2.69",
      scalaVersion := "2.11.8",
      libraryDependencies ++= {
        Seq(
          "org.tpolecat" %% "atto-core" % "0.4.2" withSources()
        )
      },
      dockerBaseImage := "expert/docker-java-minimal:server-jre",
      dockerCommands := dockerCommands.value.flatMap {
        case cmd @ Cmd("FROM", _) => List(cmd, Cmd("RUN", "apk update && apk add bash"))
        case other => List(other)
      },
      dockerRepository := Some("awesome-startup"),
      version in Docker := version.value,
      scalacOptions := Seq("-unchecked", "-deprecation", "-encoding", "utf8", "-language:implicitConversions", "-language:postfixOps", "-language:higherKinds", "-Xcheckinit"), //, "-Xlog-implicits"),
      javaOptions in compile += "-g:source,lines,vars",
      crossPaths := false,
      mainClass in Compile := Some("co.awesome-startup.server.EntryPoint")
    )
  ).enablePlugins(JavaAppPackaging).
    enablePlugins(DockerPlugin)
}
You'll need to change the name of the project and dockerRepository to match your repo in Docker Hub, and then you can build and deploy it by calling sbt docker:publish. If you want to publish the Docker image locally, call sbt docker:publishLocal.

How to escape CMD in Dockerfile

I've tried to start a server inside docker via the following syntax permutations:
CMD [ "forever", "start", "server/server.js" ]
CMD [ "forever", "start", "server\/server.js" ]
CMD forever start server/server.js
But each of them has failed.
The first two ran as "forever start server" ... notice the missing /server.js piece.
The last one ran as "/bin/sh -c 'forever "
So what is the correct syntax to place forever start server/server.js inside a Dockerfile to run it as a detached container?
I've just run into the same issue when starting a Java application inside a docker container.
From the Docker reference you have three options:
CMD ["executable","param1","param2"]
CMD ["param1","param2"]
CMD command param1 param2
Have a look here: Docker CMD
I'm not familiar with JavaScript, but assuming that the application you want to start is a Java application:
CMD ["/some path/jre64/bin/java", "server.jar", "start", "forever", ...]
And as others said in the comments, you could also add the script via ADD or COPY in your Dockerfile and start it with RUN.
Yet another solution would be to run the docker container and mount a directory with the desired script via docker run .. -v HOSTDIR:CONTAINERDIR inside the container and trigger that script with docker exec.
Have a read here: Docker Filemounting + Docker Exec
Just run it via sh -c, as suggested in the comments.
The syntax is:
CMD ["/bin/sh", "-c", "forever start server/server.js"]
In case your tool requires a login shell to run, maybe try this one too:
CMD ["/bin/bash", "-lc", "forever start server/server.js"]
This should work fine, having the same effect as putting the command into a standard sh shell in a single line.

Does 'docker start' execute the CMD command?

Let's say a docker container has been run with 'docker run' and then stopped with 'docker stop'. Will the 'CMD' command be executed after a 'docker start'?
I believe @jripoll is incorrect; it appears to run the command that was first run with docker run on docker start too.
Here's a simple example to test:
First create a shell script to run called tmp.sh:
echo "hello yo!"
Then run:
docker run --name yo -v "$(pwd)":/usr/src/myapp -w /usr/src/myapp ubuntu sh tmp.sh
That will print hello yo!.
Now start it again:
docker start -ia yo
It will print it again every time you run that.
Same thing with Dockerfile
Save this to Dockerfile:
FROM alpine
CMD ["echo", "hello yo!"]
Then build it and run it:
docker build -t hi .
docker run -i --name hi hi
You'll see "hello yo!" output. Start it again:
docker start -i hi
And you'll see the same output.
When you do a docker start, you call api/client/start.go, which calls:
cli.client.ContainerStart(containerID)
That calls engine-api/client/container_start.go:
cli.post("/containers/"+containerID+"/start", nil, nil, nil)
The docker daemon processes that API call in daemon/start.go:
container.StartMonitor(daemon, container.HostConfig.RestartPolicy)
The container monitor does run the container in container/monitor.go:
m.supervisor.Run(m.container, pipes, m.callback)
By default, the docker daemon is the supervisor here, in daemon/daemon.go:
daemon.execDriver.Run(c.Command, pipes, hooks)
And the execDriver creates the command line in daemon/execdriver/windows/exec.go:
createProcessParms.CommandLine, err = createCommandLine(processConfig, false)
That uses the processConfig.Entrypoint and processConfig.Arguments in daemon/execdriver/windows/commandlinebuilder.go:
// Build the command line of the process
commandLine = processConfig.Entrypoint
logrus.Debugf("Entrypoint: %s", processConfig.Entrypoint)
for _, arg := range processConfig.Arguments {
	logrus.Debugf("appending %s", arg)
	if !alreadyEscaped {
		arg = syscall.EscapeArg(arg)
	}
	commandLine += " " + arg
}
Those ProcessConfig.Arguments are populated in daemon/container_operations_windows.go:
processConfig := execdriver.ProcessConfig{
	CommonProcessConfig: execdriver.CommonProcessConfig{
		Entrypoint: c.Path,
		Arguments:  c.Args,
		Tty:        c.Config.Tty,
	},
with c.Args being the arguments of the Container (runtime parameters or CMD).
So yes, the CMD command is executed after a 'docker start'.
If you would like your container to run the same executable every time, then you should consider using ENTRYPOINT in combination with CMD.
Note: don’t confuse RUN with CMD. RUN actually runs a command and commits the result; CMD does not execute anything at build time, but specifies the intended command for the image.
https://docs.docker.com/engine/reference/builder/
No, the CMD command is only executed when you run 'docker run' to start a container based on an image.
In the documentation:
When used in the shell or exec formats, the CMD instruction sets the command to be executed when running the image.
https://docs.docker.com/reference/builder/#cmd
