Several Tor services in one Docker container

I have a Dockerfile that runs Tor in Docker:
FROM alpine:latest
RUN apk update && apk upgrade && \
apk add tor curl && \
rm /var/cache/apk/* && \
cp /etc/tor/torrc.sample /etc/tor/torrc && \
echo "SocksPort 0.0.0.0:9050" >> /etc/tor/torrc
EXPOSE 9050
USER tor
CMD /usr/bin/tor -f /etc/tor/torrc
It works. I want to run several Tor instances in one Dockerfile and expose different ports (9051, 9052, etc.). I could create a docker-compose.yml that starts one container per port, but in my opinion that isn't a good solution.
Does anybody know how to run several Tor instances and publish their ports from Docker?

This Dockerfile worked for me:
FROM alpine:latest
RUN apk update && apk upgrade && \
apk add tor curl bash && \
rm /var/cache/apk/* && \
cp /etc/tor/torrc.sample /etc/tor/torrc
EXPOSE 9050-9060
ADD start.sh /usr/local/bin/start.sh
RUN chmod +x /usr/local/bin/start.sh
RUN sed -i 's/\r$//' /usr/local/bin/start.sh
CMD /usr/local/bin/start.sh
And the start.sh script:
#!/bin/bash
# Stop the script on the first error
set -e
# Original script from
# http://blog.databigbang.com/distributed-scraping-with-multiple-tor-circuits/
# If defined, the TOR_INSTANCES env variable sets the number of Tor instances (default 10)
TOR_INSTANCES=${TOR_INSTANCES:=10}
# If defined, the TOR_OPTIONS env variable can be used to pass extra options to Tor
TOR_OPTIONS=${TOR_OPTIONS:=''}
base_socks_port=9050
base_control_port=11000
dir_data="/tmp/multitor.$$"
# Create the data directory if it doesn't exist
if [ ! -d "$dir_data" ]; then
    mkdir "$dir_data"
fi
if [ -z "$TOR_INSTANCES" ] || [ "$TOR_INSTANCES" -lt 1 ]; then
    echo "Please supply an instance count"
    exit 1
fi
for i in $(seq "$TOR_INSTANCES")
do
    socks_port=$((base_socks_port+i))
    control_port=$((base_control_port+i))
    if [ ! -d "$dir_data/tor$i" ]; then
        echo "Creating directory $dir_data/tor$i"
        mkdir "$dir_data/tor$i" && chmod -R 700 "$dir_data/tor$i"
    fi
    # Note: authentication for the control port is disabled. Use only in secure, controlled environments.
    echo "Running: tor --RunAsDaemon 1 --CookieAuthentication 0 --HashedControlPassword \"\" --ControlPort 0.0.0.0:$control_port --PidFile $dir_data/tor$i/tor$i.pid --SocksPort 0.0.0.0:$socks_port --DataDirectory $dir_data/tor$i"
    tor --RunAsDaemon 1 --CookieAuthentication 0 --HashedControlPassword "" --ControlPort 0.0.0.0:$control_port --PidFile "$dir_data/tor$i/tor$i.pid" --SocksPort 0.0.0.0:$socks_port --DataDirectory "$dir_data/tor$i" $TOR_OPTIONS
done
# Sleep so that the container doesn't shut down
sleep infinity
Build and start:
docker build -t torone ./
docker run -d -e "TOR_INSTANCES=10" -p 9050-9060:9050-9060 --rm --name torone torone
TOR_INSTANCES sets how many Tor processes to start.
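To check that each instance is up, you can route a request through one of the published SOCKS ports; a minimal check, assuming the port layout above (the test URL is just an example):
curl --socks5-hostname localhost:9051 https://check.torproject.org/api/ip
curl --socks5-hostname localhost:9052 https://check.torproject.org/api/ip
Each port is served by a separate Tor process, so the exit IPs will usually differ.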


SSH into Azure web-app container running with non root user

I am running an Elasticsearch and Kibana service within a container using the Azure Web App for Containers service. I wanted to check SSH connectivity to this container using Azure's Web SSH console feature. I followed the Microsoft documentation for SSH into custom containers, https://learn.microsoft.com/en-us/azure/app-service/configure-custom-container?pivots=container-linux#enable-ssh, which shows an example of running the container as the default root user.
My issue is that the Elasticsearch process does not run as root, so I had to make the sshd process run as the elasticsearch user. I was able to get the sshd process running and it accepts SSH connections from my host; however, the credentials I am setting in the Dockerfile (elasticsearch:Docker!) throw an Access Denied error. Any idea where I am going wrong here?
Dockerfile
FROM openjdk:jre-alpine
ARG ek_version=6.5.4
RUN apk add --quiet --no-progress --no-cache nodejs wget \
&& adduser -D elasticsearch \
&& apk add openssh \
&& echo "elasticsearch:Docker!" | chpasswd
# Copy the startup script and sshd_config into the elasticsearch user's home directory
COPY startup.sh /home/elasticsearch/
RUN chmod +x /home/elasticsearch/startup.sh && \
chown elasticsearch /home/elasticsearch/startup.sh
COPY sshd_config /home/elasticsearch/
USER elasticsearch
WORKDIR /home/elasticsearch
ENV ES_TMPDIR=/home/elasticsearch/elasticsearch.tmp ES_DATADIR=/home/elasticsearch/elasticsearch/data
RUN wget -q -O - https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-oss-${ek_version}.tar.gz \
| tar -zx \
&& mv elasticsearch-${ek_version} elasticsearch \
&& mkdir -p ${ES_TMPDIR} ${ES_DATADIR} \
&& wget -q -O - https://artifacts.elastic.co/downloads/kibana/kibana-oss-${ek_version}-linux-x86_64.tar.gz \
| tar -zx \
&& mv kibana-${ek_version}-linux-x86_64 kibana \
&& rm -f kibana/node/bin/node kibana/node/bin/npm \
&& ln -s $(which node) kibana/node/bin/node \
&& ln -s $(which npm) kibana/node/bin/npm
EXPOSE 9200 5601 2222
ENTRYPOINT ["/home/elasticsearch/startup.sh"]
startup.sh script
#!/bin/sh
# Generating hostkey
ssh-keygen -f /home/elasticsearch/ssh_host_rsa_key -N '' -t rsa
# starting sshd process
echo "Starting SSHD"
/usr/sbin/sshd -f sshd_config
# Staring the ES stack
echo "Starting ES"
sh elasticsearch/bin/elasticsearch -E http.host=0.0.0.0 & kibana/bin/kibana --host 0.0.0.0
sshd_config file
Port 2222
HostKey /home/elasticsearch/ssh_host_rsa_key
PidFile /home/elasticsearch/sshd.pid
ListenAddress 0.0.0.0
LoginGraceTime 180
X11Forwarding yes
Ciphers aes128-cbc,3des-cbc,aes256-cbc,aes128-ctr,aes192-ctr,aes256-ctr
MACs hmac-sha1,hmac-sha1-96
StrictModes yes
SyslogFacility DAEMON
PasswordAuthentication yes
PermitEmptyPasswords no
PermitRootLogin yes
Subsystem sftp internal-sftp
The error I am getting is the Access Denied message described above.
Please check and verify that your Docker image supports SSH. It would appear that you have done everything correctly, so one of the final troubleshooting steps left at this point is to verify that your image supports SSH to begin with.
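One way to verify that locally, before involving Azure, is to run the image and attempt the same login the Web SSH console would use; a rough sketch (the image and container names are placeholders):
docker build -t es-kibana-ssh .
docker run -d -p 2222:2222 --name es-ssh-test es-kibana-ssh
ssh -p 2222 elasticsearch@localhost
If the password from the Dockerfile (Docker!) is rejected here as well, the problem is in the image rather than in Azure.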

Is it possible to create Dockerfile from the container/image? [duplicate]

Is it possible to generate a Dockerfile from an image? I want to know for two reasons:
I can download images from the repository but would like to see the recipe that generated them.
I like the idea of saving snapshots, but once I am done it would be nice to have a structured format to review what was done.
How to generate or reverse a Dockerfile from an image?
You can. Mostly.
Notes: It does not generate a Dockerfile that you can use directly with docker build; the output is just for your reference.
alias dfimage="docker run -v /var/run/docker.sock:/var/run/docker.sock --rm alpine/dfimage"
dfimage -sV=1.36 nginx:latest
It will pull the target Docker image automatically and export the Dockerfile. The -sV=1.36 parameter is not always required.
Reference: https://hub.docker.com/r/alpine/dfimage
Now hub.docker.com shows the image layers with their detailed commands directly, if you choose a particular tag.
Bonus
If you want to know which files are changed in each layer
alias dive="docker run -ti --rm -v /var/run/docker.sock:/var/run/docker.sock wagoodman/dive"
dive nginx:latest
On the left you see each layer's command; on the right (switch panes with Tab), the yellow lines mark the folders and files changed in that layer.
(Use Space to collapse a directory.)
Old answer
The answer below is outdated and no longer works.
$ docker pull centurylink/dockerfile-from-image
$ alias dfimage="docker run -v /var/run/docker.sock:/var/run/docker.sock --rm centurylink/dockerfile-from-image"
$ dfimage --help
Usage: dockerfile-from-image.rb [options] <image_id>
-f, --full-tree Generate Dockerfile for all parent layers
-h, --help Show this message
To understand how a docker image was built, use the
docker history --no-trunc command.
You can reconstruct a Dockerfile from an image, but it will not contain everything you would want in order to fully understand how the image was generated. What you can reasonably extract are the MAINTAINER, ENV, EXPOSE, VOLUME, WORKDIR, ENTRYPOINT, CMD, and ONBUILD parts of the Dockerfile.
The following script should work for you:
#!/bin/bash
docker history --no-trunc "$1" | \
sed -n -e 's,.*/bin/sh -c #(nop) \(MAINTAINER .*[^ ]\) *0 B,\1,p' | \
head -1
docker inspect --format='{{range $e := .Config.Env}}
ENV {{$e}}
{{end}}{{range $e,$v := .Config.ExposedPorts}}
EXPOSE {{$e}}
{{end}}{{range $e,$v := .Config.Volumes}}
VOLUME {{$e}}
{{end}}{{with .Config.User}}USER {{.}}{{end}}
{{with .Config.WorkingDir}}WORKDIR {{.}}{{end}}
{{with .Config.Entrypoint}}ENTRYPOINT {{json .}}{{end}}
{{with .Config.Cmd}}CMD {{json .}}{{end}}
{{with .Config.OnBuild}}ONBUILD {{json .}}{{end}}' "$1"
I use this as part of a script to rebuild running containers as images:
https://github.com/docbill/docker-scripts/blob/master/docker-rebase
The Dockerfile is mainly useful if you want to be able to repackage an image.
The thing to keep in mind is that a Docker image can actually just be a tar backup of a real or virtual machine. I have made several Docker images this way. Even the build history shows importing a huge tar file as the first step in creating the image...
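For example, a container's filesystem can be exported as a tarball and re-imported as a brand-new image with no Dockerfile involved at all (the container and image names below are only placeholders):
docker export some-container > rootfs.tar
docker import rootfs.tar myorg/imported:latest
An image created this way has essentially no useful history to reverse.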
I somehow completely missed the actual command in the accepted answer, so here it is again, a bit more visible in its own paragraph:
$ docker history --no-trunc <IMAGE_ID>
A bash solution:
docker history --no-trunc $argv | tac | tr -s ' ' | cut -d " " -f 5- | sed 's,^/bin/sh -c #(nop) ,,g' | sed 's,^/bin/sh -c,RUN,g' | sed 's, && ,\n & ,g' | sed 's,\s*[0-9]*[\.]*[0-9]*\s*[kMG]*B\s*$,,g' | head -n -1
Step by step explanations:
tac: reverse the output
tr -s ' ': squeeze runs of whitespace into a single space
cut -d " " -f 5-: remove the leading fields (up to "X months/years ago")
sed 's,^/bin/sh -c #(nop) ,,g': remove the /bin/sh wrapper for ENV, LABEL, ...
sed 's,^/bin/sh -c,RUN,g': replace the /bin/sh wrapper with RUN
sed 's, && ,\n & ,g': pretty-print multi-command lines following Docker best practices
sed 's,\s*[0-9]*[\.]*[0-9]*\s*[kMG]*B\s*$,,g': remove the layer size information
head -n -1: remove the last line ("SIZE COMMENT" in this case)
Example:
~  dih ubuntu:18.04
ADD file:28c0771e44ff530dba3f237024acc38e8ec9293d60f0e44c8c78536c12f13a0b in /
RUN set -xe
&& echo '#!/bin/sh' > /usr/sbin/policy-rc.d
&& echo 'exit 101' >> /usr/sbin/policy-rc.d
&& chmod +x /usr/sbin/policy-rc.d
&& dpkg-divert --local --rename --add /sbin/initctl
&& cp -a /usr/sbin/policy-rc.d /sbin/initctl
&& sed -i 's/^exit.*/exit 0/' /sbin/initctl
&& echo 'force-unsafe-io' > /etc/dpkg/dpkg.cfg.d/docker-apt-speedup
&& echo 'DPkg::Post-Invoke { "rm -f /var/cache/apt/archives/*.deb /var/cache/apt/archives/partial/*.deb /var/cache/apt/*.bin || true"; };' > /etc/apt/apt.conf.d/docker-clean
&& echo 'APT::Update::Post-Invoke { "rm -f /var/cache/apt/archives/*.deb /var/cache/apt/archives/partial/*.deb /var/cache/apt/*.bin || true"; };' >> /etc/apt/apt.conf.d/docker-clean
&& echo 'Dir::Cache::pkgcache ""; Dir::Cache::srcpkgcache "";' >> /etc/apt/apt.conf.d/docker-clean
&& echo 'Acquire::Languages "none";' > /etc/apt/apt.conf.d/docker-no-languages
&& echo 'Acquire::GzipIndexes "true"; Acquire::CompressionTypes::Order:: "gz";' > /etc/apt/apt.conf.d/docker-gzip-indexes
&& echo 'Apt::AutoRemove::SuggestsImportant "false";' > /etc/apt/apt.conf.d/docker-autoremove-suggests
RUN rm -rf /var/lib/apt/lists/*
RUN sed -i 's/^#\s*\(deb.*universe\)$/\1/g' /etc/apt/sources.list
RUN mkdir -p /run/systemd
&& echo 'docker' > /run/systemd/container
CMD ["/bin/bash"]
Update Dec 2018 to BMW's answer
chenzj/dfimage - as described on hub.docker.com regenerates Dockerfile from other images. So you can use it as follows:
docker pull chenzj/dfimage
alias dfimage="docker run -v /var/run/docker.sock:/var/run/docker.sock --rm chenzj/dfimage"
dfimage IMAGE_ID > Dockerfile
This is derived from fallino's answer, with some adjustments and simplifications using the output format option of docker history. Since macOS and GNU/Linux have different command-line utilities, a different version is necessary for the Mac. If you only need one or the other, you can use just those lines.
#!/bin/bash
case "$OSTYPE" in
linux*)
docker history --no-trunc --format "{{.CreatedBy}}" $1 | # extract information from layers
tac | # reverse the file
sed 's,^\(|3.*\)\?/bin/\(ba\)\?sh -c,RUN,' | # change /bin/(ba)?sh calls to RUN
sed 's,^RUN #(nop) *,,' | # remove RUN #(nop) calls for ENV,LABEL...
sed 's, *&& *, \\\n \&\& ,g' # pretty print multi command lines following Docker best practices
;;
darwin*)
docker history --no-trunc --format "{{.CreatedBy}}" $1 | # extract information from layers
tail -r | # reverse the file
sed -E 's,^(\|3.*)?/bin/(ba)?sh -c,RUN,' | # change /bin/(ba)?sh calls to RUN
sed 's,^RUN #(nop) *,,' | # remove RUN #(nop) calls for ENV,LABEL...
sed $'s, *&& *, \\\ \\\n \&\& ,g' # pretty print multi command lines following Docker best practices
;;
*)
echo "unknown OSTYPE: $OSTYPE"
;;
esac
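Saved as a script, it can be used like this (the file name is arbitrary):
chmod +x reverse-dockerfile.sh
./reverse-dockerfile.sh nginx:latest > Dockerfile.reconstructed
As with the other approaches, the output is for reference only and will not rebuild the image as-is.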
It is not possible at this point (unless the author of the image explicitly included the Dockerfile).
However, it is definitely something useful! There are two things that will help to obtain this feature.
Trusted builds (detailed in this docker-dev discussion)
More detailed metadata in the successive images produced by the build process. In the long run, the metadata should indicate which build command produced the image, which means that it will be possible to reconstruct the Dockerfile from a sequence of images.
If you are interested in an image that is in the Docker Hub registry and want to take a look at its Dockerfile:
Example:
If you want to see the Dockerfile of the image "jupyter/datascience-notebook", append the word "Dockerfile" to the image's URL in your browser's address bar, as shown below.
https://hub.docker.com/r/jupyter/datascience-notebook/
https://hub.docker.com/r/jupyter/datascience-notebook/Dockerfile
Note:
Not all images have a Dockerfile; for example, https://hub.docker.com/r/redislabs/redisinsight/Dockerfile
Sometimes this way is much faster than searching for the Dockerfile on GitHub.
docker pull chenzj/dfimage
alias dfimage="docker run -v /var/run/docker.sock:/var/run/docker.sock --rm chenzj/dfimage"
dfimage image_id
Below is the output of the dfimage command:
$ dfimage 0f1947a021ce
FROM node:8
WORKDIR /usr/src/app
COPY file:e76d2e84545dedbe901b7b7b0c8d2c9733baa07cc821054efec48f623e29218c in ./
RUN /bin/sh -c npm install
COPY dir:a89a4894689a38cbf3895fdc0870878272bb9e09268149a87a6974a274b2184a in .
EXPOSE 8080
CMD ["npm" "start"]
It is possible in just two steps: first pull the image, then run the docker history command, as shown below.
docker pull kalilinux/kali-rolling
docker history --format "{{.CreatedBy}}" kalilinux/kali-rolling --no-trunc
What is image2df
image2df is a tool for generating a Dockerfile from an image.
It is very useful when you only have a Docker image and need to generate a Dockerfile from it.
How does it work
It reverse-parses the history information of the image.
How to use this image
# Command alias
echo "alias image2df='docker run -v /var/run/docker.sock:/var/run/docker.sock --rm cucker/image2df'" >> ~/.bashrc
. ~/.bashrc
# Execute command
image2df <IMAGE>
See help
docker run --rm cucker/image2df --help
For example
$ echo "alias image2df='docker run -v /var/run/docker.sock:/var/run/docker.sock --rm cucker/image2df'" >> ~/.bashrc
$ . ~/.bashrc
$ docker pull mysql
$ image2df mysql
========== Dockerfile ==========
FROM mysql:latest
RUN groupadd -r mysql && useradd -r -g mysql mysql
RUN apt-get update && apt-get install -y --no-install-recommends gnupg dirmngr && rm -rf /var/lib/apt/lists/*
ENV GOSU_VERSION=1.12
RUN set -eux; \
savedAptMark="$(apt-mark showmanual)"; \
apt-get update; \
apt-get install -y --no-install-recommends ca-certificates wget; \
rm -rf /var/lib/apt/lists/*; \
dpkgArch="$(dpkg --print-architecture | awk -F- '{ print $NF }')"; \
wget -O /usr/local/bin/gosu "https://github.com/tianon/gosu/releases/download/$GOSU_VERSION/gosu-$dpkgArch"; \
wget -O /usr/local/bin/gosu.asc "https://github.com/tianon/gosu/releases/download/$GOSU_VERSION/gosu-$dpkgArch.asc"; \
export GNUPGHOME="$(mktemp -d)"; \
gpg --batch --keyserver hkps://keys.openpgp.org --recv-keys B42F6819007F00F88E364FD4036A9C25BF357DD4; \
gpg --batch --verify /usr/local/bin/gosu.asc /usr/local/bin/gosu; \
gpgconf --kill all; \
rm -rf "$GNUPGHOME" /usr/local/bin/gosu.asc; \
apt-mark auto '.*' > /dev/null; \
[ -z "$savedAptMark" ] || apt-mark manual $savedAptMark > /dev/null; \
apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false; \
chmod +x /usr/local/bin/gosu; \
gosu --version; \
gosu nobody true
RUN mkdir /docker-entrypoint-initdb.d
RUN apt-get update && apt-get install -y --no-install-recommends \
pwgen \
openssl \
perl \
xz-utils \
&& rm -rf /var/lib/apt/lists/*
RUN set -ex; \
key='A4A9406876FCBD3C456770C88C718D3B5072E1F5'; \
export GNUPGHOME="$(mktemp -d)"; \
gpg --batch --keyserver ha.pool.sks-keyservers.net --recv-keys "$key"; \
gpg --batch --export "$key" > /etc/apt/trusted.gpg.d/mysql.gpg; \
gpgconf --kill all; \
rm -rf "$GNUPGHOME"; \
apt-key list > /dev/null
ENV MYSQL_MAJOR=8.0
ENV MYSQL_VERSION=8.0.24-1debian10
RUN echo 'deb http://repo.mysql.com/apt/debian/ buster mysql-8.0' > /etc/apt/sources.list.d/mysql.list
RUN { \
echo mysql-community-server mysql-community-server/data-dir select ''; \
echo mysql-community-server mysql-community-server/root-pass password ''; \
echo mysql-community-server mysql-community-server/re-root-pass password ''; \
echo mysql-community-server mysql-community-server/remove-test-db select false; \
} | debconf-set-selections \
&& apt-get update \
&& apt-get install -y \
mysql-community-client="${MYSQL_VERSION}" \
mysql-community-server-core="${MYSQL_VERSION}" \
&& rm -rf /var/lib/apt/lists/* \
&& rm -rf /var/lib/mysql && mkdir -p /var/lib/mysql /var/run/mysqld \
&& chown -R mysql:mysql /var/lib/mysql /var/run/mysqld \
&& chmod 1777 /var/run/mysqld /var/lib/mysql
VOLUME [/var/lib/mysql]
COPY dir:2e040acc386ebd23b8571951a51e6cb93647df091bc26159b8c757ef82b3fcda in /etc/mysql/
COPY file:345a22fe55d3e6783a17075612415413487e7dba27fbf1000a67c7870364b739 in /usr/local/bin/
RUN ln -s usr/local/bin/docker-entrypoint.sh /entrypoint.sh # backwards compat
ENTRYPOINT ["docker-entrypoint.sh"]
EXPOSE 3306 33060
CMD ["mysqld"]

Docker cron job

I need to schedule some cron jobs in a Docker/Kubernetes environment that will make some external service calls using curl commands. I was trying to use the plain alpine:3.6 image, but it doesn't have curl.
Any suggestion on what base image would be good for this purpose? Some examples would also be helpful.
Prepare your own Docker image that includes the packages you need, or just use something like https://github.com/aylei/kubectl-debug
I am able to run curl as a cron job using the alpine image as follows:
FROM alpine:3.6
RUN apk --no-cache add curl bash
RUN apk add --no-cache tzdata
ENV TZ=Asia/Kolkata
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
RUN mkdir -p /var/log/cron \
&& touch /var/log/cron/cron.log \
&& mkdir -m 0644 -p /etc/cron.d
ADD curlurl /etc/crontabs/root
CMD bash -c "crond -L /var/log/cron/cron.log && tail -F /var/log/cron/cron.log"
#ENTRYPOINT /bin/bash
Where curlurl contains a simple curl command as the cron job:
* * * * * /usr/bin/curl http://dummy.restapiexample.com/api/v1/employees >> /var/log/cron/cron.log
# Leave one blank line below or cron will fail
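To verify the image locally, you can build it, run it, and tail the container log (the image name here is just a placeholder):
docker build -t curl-cron .
docker run -d --name curl-cron curl-cron
docker logs -f curl-cron
The curl output should appear in the log roughly once a minute.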
This works fine. But then I tried to build an alpine base image and push it to a private repo, to avoid downloading the packages every time, and it stopped working. How can I debug why the cron job is not running anymore? Below is the approach I used.
BaseImage
FROM alpine:3.6
RUN apk --no-cache add curl bash
RUN apk add --no-cache tzdata
ENTRYPOINT /bin/bash
Docker for curl
FROM my/alpine:3.6
ENV TZ=Asia/Kolkata
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
RUN mkdir -p /var/log/cron \
&& touch /var/log/cron/cron.log \
&& mkdir -m 0644 -p /etc/cron.d
ADD curlurl /etc/crontabs/root
CMD bash -c "crond -L /var/log/cron/cron.log && tail -F /var/log/cron/cron.log"
After making this change I am not able to get any cron output. Any suggestions on how I can debug it?
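One way to narrow it down is to start the image with a plain shell instead of its entrypoint and exercise cron by hand (the image name is a placeholder):
docker run --rm -it --entrypoint sh my/curl-cron
# inside the container:
cat /etc/crontabs/root            # is the crontab file present?
crond -L /var/log/cron/cron.log   # does crond start without errors?
tail -f /var/log/cron/cron.log
If crond runs and logs fine here, the problem is more likely in how the base image's ENTRYPOINT and the derived image's CMD interact than in cron itself.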

Docker agent with Go

I'm trying to create a Jenkins Docker agent that has Go.
The following is my Dockerfile.
After I build it, running docker run myimage:0.0.1 go version returns the Go version; however, if I try the following, it doesn't find Go at all:
docker run --privileged --dns 9.0.128.50 --dns 9.0.130.50 -d -P --name slave myimage:0.0.1
docker ps ## grab the port number
ssh -p PORT_NUMBER jenkins@localhost
What am I missing to make Go available to the jenkins user?
FROM golang:1.11.5-alpine
RUN apk add --no-cache \
bash \
curl \
wget \
git \
openssh \
tar
COPY ssh/*key /etc/ssh/
COPY skel/ /home/jenkins
COPY id_rsa /home/jenkins/.ssh/id_rsa
COPY id_rsa.pub /home/jenkins/.ssh/id_rsa.pub
RUN addgroup docker \
&& adduser -s /bin/bash -h /home/jenkins -G docker -D jenkins \
&& echo "jenkins ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers \
&& echo "jenkins:jenkinspass" | chpasswd \
&& chmod u+s /bin/ping \
&& chown -R jenkins:docker /home/jenkins \
&& mv /etc/profile.d/color_prompt /etc/profile.d/color_prompt.sh \
&& mv /bin/sh /bin/sh.bak \
&& ln -s /bin/bash /bin/sh
# Standard SSH port
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
If you run:
docker run myimage:0.0.1 which go
you will see that the go executable is at /usr/local/go/bin/go.
If you connect as the jenkins user via ssh and run /usr/local/go/bin/go version, it works as well.
Conclusion:
The Go installation is set up for the root user; the jenkins user was added after Go was installed and does not have /usr/local/go/bin in its $PATH environment variable.
Solution:
Add /usr/local/go/bin to the jenkins user's $PATH, or
use the go executable with its full path.
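One way to do the former, since sshd gives login shells a fresh environment, is to drop a profile snippet into the image; a sketch, assuming the /usr/local/go layout of the golang base image above:
RUN echo 'export PATH=$PATH:/usr/local/go/bin' > /etc/profile.d/go.sh
Interactive SSH logins then pick up the Go toolchain via /etc/profile.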

Unable to run Heroku exec on a docker container

I've followed the steps here (https://devcenter.heroku.com/articles/exec#enabling-docker-support) to install curl (curl 7.61.1 (x86_64-alpine-linux-musl) libcurl/7.61.1 LibreSSL/2.0.0 zlib/1.2.11 libssh2/1.8.0 nghttp2/1.32.0), python, bash, and openssh (OpenSSH_7.2p2-hpn14v4, OpenSSL 1.0.2p 14 Aug 2018). I've created the /app/.profile.d/heroku-exec.sh file in my container and added a symlink.
However, running heroku ps:exec -a my-app returns the following message:
Establishing credentials... error
! Could not connect to dyno!
! Check if the dyno is running with `heroku ps'
I've verified my application is in fact running (& has the runtime-heroku-exec feature enabled):
web (Free): /bin/sh -c exec\ java\ \$JAVA_OPTS\ -Dserver.port\=\$PORT\ -Djava.security.egd\=file:/dev/./urandom\ -jar\ /app.jar (1)
web.1: up 2018/09/30 09:34:55 -0600 (~ 4m ago)
I've verified that heroku-exec.sh exists in my deployed container by running heroku run -a my-app bash and cat /app/.profile.d/heroku-exec.sh.
At this point, I'm not sure what to try in order to troubleshoot why heroku exec won't work on my container. Here's what my Dockerfile looks like in case there's something off with how I've put together my application:
FROM openjdk:8-jdk-alpine
RUN apk add --no-cache openssh-keygen
RUN apk add --no-cache openssh-client=7.2_p2-r5 --repository=http://dl-cdn.alpinelinux.org/alpine/v3.4/main --allow-untrusted
RUN apk add --no-cache openssh-sftp-server=7.2_p2-r5 --repository=http://dl-cdn.alpinelinux.org/alpine/v3.4/main --allow-untrusted
RUN apk add --no-cache openssh=7.2_p2-r5 --repository=http://dl-cdn.alpinelinux.org/alpine/v3.4/main --allow-untrusted
RUN apk add --no-cache curl
RUN apk add --no-cache bash
RUN apk add --update --no-cache python
RUN rm /bin/sh && ln -s /bin/bash /bin/sh
RUN adduser -D app-user
USER app-user
ARG JAR_FILE
ARG PORT
ARG HEROKU_FILE_NAME
COPY ${JAR_FILE} app.jar
COPY ${HEROKU_FILE_NAME} /app/.profile.d/heroku-exec.sh
ENV JAVA_OPTS -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=8000
HEALTHCHECK --interval=15m --timeout=10s --retries=3 --start-period=1m CMD curl --fail http://localhost:8080/restaurantscores/actuator/health || exit 1
CMD exec java $JAVA_OPTS -Dserver.port=$PORT -Djava.security.egd=file:/dev/./urandom -jar /app.jar
The "Using with Docker" section of the "Heroku Exec (SSH Tunneling)" documentation doesn't address Alpine usage.
Rather than using /app/.profile.d Alpine uses /etc/profile.d.
So change
COPY ${HEROKU_FILE_NAME} /app/.profile.d/heroku-exec.sh
to
COPY ${HEROKU_FILE_NAME} /etc/profile.d/heroku-exec.sh
Also ensure that heroku-exec.sh is executable. You can do this using chmod +x heroku-exec.sh.
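Put together with the note about execute permission, the change could look roughly like this (chmod is run on the host file before the build; HEROKU_FILE_NAME is the build arg from the question):
chmod +x heroku-exec.sh
# then in the Dockerfile:
COPY ${HEROKU_FILE_NAME} /etc/profile.d/heroku-exec.sh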
I found that I additionally had to add a python symlink to my alpine-based Dockerfile to get it working:
ln -s /usr/bin/python3 /usr/bin/python
Here is an example of a Dockerfile for deploying a very simple nginx application that definitely works with heroku-exec (as of October 2020):
FROM alpine:latest
# install required packages
RUN apk add --no-cache --update python3 bash curl openssh nginx
# simplify nginx config to enable ENV variable substitution
RUN echo 'server { listen PORT_NUMBER; }' > /etc/nginx/conf.d/default.conf \
&& mkdir /run/nginx
# add config required for HEROKU_EXEC
# ENV HEROKU_EXEC_DEBUG=1
RUN rm /bin/sh \
&& ln -s /bin/bash /bin/sh \
&& mkdir -p /app/.profile.d/ \
&& printf '#!/usr/bin/env bash\n\nset +o posix\n\n[ -z "$SSH_CLIENT" ] && source <(curl --fail --retry 7 -sSL "$HEROKU_EXEC_URL")\n' > /app/.profile.d/heroku-exec.sh \
&& chmod +x /app/.profile.d/heroku-exec.sh \
&& ln -s /usr/bin/python3 /usr/bin/python
# configure NGINX to listen on dynamic $PORT env variable supplied by Heroku.
CMD sed -i 's/PORT_NUMBER/'"$PORT"'/g' /etc/nginx/conf.d/default.conf; nginx -g 'daemon off;'
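With an image like this, deploying and checking heroku-exec would look roughly as follows, assuming the Container Registry workflow (the app name is a placeholder):
heroku container:push web -a my-nginx-app
heroku container:release web -a my-nginx-app
heroku ps:exec -a my-nginx-app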
