Visual Studio Code Remote - Containers - change shell - docker

When launching an attached container in "VS Code Remote Development", has anyone found a way to change the container's shell used by the VS Code integrated terminal?
It seems to run something similar to:
docker exec -it <containername> /bin/bash
I am looking for the equivalent of
docker exec -it <containername> /bin/zsh
The only setting I found for attached containers is
"remote.containers.defaultExtensions": []

I worked around it with
RUN echo "if [ -t 1 ]; then" >> /root/.bashrc
RUN echo "exec zsh" >> /root/.bashrc
RUN echo "fi" >> /root/.bashrc
I would still be interested in knowing whether there is a way to set this per container.

I use a Docker container for my development environment and set the shell to bash in my Dockerfile:
# …
ENTRYPOINT ["bash"]
Yet when VS Code connected to my container it insisted on using the /bin/ash shell, which was driving me crazy. However, the fix (at least for me) was very simple but not obvious, and comes from the .devcontainer.json reference.
All I needed to do in my case was to add the following entry in my .devcontainer.json file:
{
  …
  "settings": {
    "terminal.integrated.shell.*": "/bin/bash"
  }
  …
}
Complete .devcontainer.json file (FYI)
{
  "name": "project-blueprint",
  "dockerComposeFile": "./docker-compose.yml",
  "service": "dev",
  "workspaceFolder": "/workspace/dev",
  "postCreateCommand": "yarn",
  "settings": {
    "terminal.integrated.shell.*": "/bin/bash"
  }
}

I'd like to contribute to this thread, since I spent a decent amount of time combing the web for a good solution involving VS Code's newer terminal.integrated.profiles.linux API.
Note: as of 20 Jan 2022, both the commented and the uncommented JSON work. The uncommented lines are the new, non-deprecated way to get this working with dev containers.
{
  "settings": {
    // "terminal.integrated.shell.linux": "/bin/zsh"
    "terminal.integrated.defaultProfile.linux": "zsh",
    "terminal.integrated.profiles.linux": {
      "zsh": {
        "path": "/bin/zsh"
      }
    }
  }
}
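For what it's worth, the same two profile settings also work outside the dev container definition, in a regular user or workspace settings.json (a minimal sketch, assuming zsh is installed at /bin/zsh):
{
  "terminal.integrated.defaultProfile.linux": "zsh",
  "terminal.integrated.profiles.linux": {
    "zsh": { "path": "/bin/zsh" }
  }
}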
If anyone is interested, I also figured out how to get Oh My Zsh built into the image.
Dockerfile:
# Setup Stage - set up the ZSH environment for optimal developer experience
FROM node:16-alpine AS setup
RUN npm install -g expo-cli
# Let scripts know we're running in Docker (useful for containerized development)
ENV RUNNING_IN_DOCKER true
# Use the unprivileged `node` user (pre-created by the Node image) for safety (and because it has permission to install modules)
RUN mkdir -p /app \
&& chown -R node:node /app
# Set up ZSH and our preferred terminal environment for containers
RUN apk --no-cache add zsh curl git
# Set up ZSH as the unprivileged user (we just need to start it, it'll initialize our setup itself)
USER node
# set up oh my zsh
RUN cd ~ && wget https://raw.githubusercontent.com/ohmyzsh/ohmyzsh/master/tools/install.sh && sh install.sh
# initialize ZSH
RUN /bin/zsh ~/.zshrc
# Switch back to root
USER root
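If this image is then used from a dev container, the integrated terminal can be pointed at the freshly installed shell with the profile settings shown earlier (a minimal sketch, assuming zsh ends up at /bin/zsh as installed by apk above):
{
  "settings": {
    "terminal.integrated.defaultProfile.linux": "zsh",
    "terminal.integrated.profiles.linux": {
      "zsh": { "path": "/bin/zsh" }
    }
  }
}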

Related

Google Cloud Run error: Container failed to start (DSS - Digital Signature Service)

I'm trying to get the following Docker container running on Google Cloud. The container works locally. In Cloud Shell, the container also works with "docker run". On Google Cloud I can see the port 8080 web preview. When I create a service, the container does not start. The log only says "tomcat started, container called exit (0)".
I added address = 0.0.0.0 to the connector in the server.xml, but that didn't work either.
Maybe someone can give me a hint.
Thank you
Tom
FROM openjdk:8-alpine
RUN apk update && apk add unzip
ADD https://ec.europa.eu/cefdigital/artifact/repository/esignaturedss/eu/europa/ec/joinup/sd-dss/dss-demo-bundle/5.8.1/dss-demo-bundle-5.8.1.zip /tmp
RUN unzip /tmp/dss-demo-bundle-5.8.1.zip -d /tmp
RUN mv /tmp/dss-demo-bundle-5.8.1 /dss
RUN chmod +x /dss/apache-tomcat-8.5.61/bin/catalina.sh
COPY ./startup.sh /dss/
ENTRYPOINT [ "/dss/startup.sh" ]
CMD [ "/bin/sh" ]
This is the source code of startup.sh:
#!/bin/sh
set -e
echo "`/bin/sh /dss/apache-tomcat-8.5.61/bin/startup.sh`"
exec "$#"
Thank you, the solution was: I changed the Tomcat startup to "catalina.sh run", so that Tomcat starts as a foreground process.
The second thing: I had to remove the "address = 0.0.0.0" from the Tomcat server.xml file.
#!/bin/sh
set -e
echo "`/bin/sh /dss/apache-tomcat-8.5.61/bin/catalina.sh run`"
exec "$#"

How do I get a Command to run from a Dockerfile.aws.json on Elastic Beanstalk?

I have a Dockerfile and a Dockerfile.aws.json:
{
  "AWSEBDockerrunVersion": "1",
  "Ports": [{
    "ContainerPort": "5000",
    "HostPort": "5000"
  }],
  "Volumes": [{
    "HostDirectory": "/tmp/download/models",
    "ContainerDirectory": "/models"
  }],
  "Logging": "/var/log/nginx",
  "Command": "mkdir -p /tmp && axel https://example.com/models.zip -o /tmp/models.zip"
}
But when I deploy, it doesn't run the Command that I specified. What am I doing wrong?
If you have an ENTRYPOINT in your Dockerfile, then the Command gets appended as its arguments:
Specify a command to execute in the container. If you specify an Entrypoint, then Command is added as an argument to Entrypoint. For more information, see CMD in the Docker documentation.
Thus your Command mkdir -p /tmp ... will be used as an argument to python3 -m flask run --host=0.0.0.0, resulting in an error. This could explain the issue you are experiencing.
I tried to recreate the issue initially using your Command structure but had some problems. What worked was using Command in the following way:
"Command": "/bin/bash -c \"mkdir -p /tmp && axel https://example.com/models.zip -o /tmp/models.zip\""
My Dockerfile did not have Entrypoint. Thus to run your python you could maybe do the following (assuming everything else is correct):
"Command": "/bin/bash -c \"mkdir -p /tmp && axel https://example.com/models.zip -o /tmp/models.zip && python3 -m flask run --host=0.0.0.0\""
Do you have the Dockerfile content?
Most likely your ENTRYPOINT script does not receive parameters, or it is ignoring them.
What you can do is something similar to this:
Use an entrypoint script that receives the command passed in the aws.json as parameters, executes it, and then calls your real python command, as in the sketch below.
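A minimal sketch of such a wrapper (entrypoint.sh is a hypothetical name; the flask command is the one quoted above):
#!/bin/bash
# entrypoint.sh (hypothetical): run the Command passed by Elastic Beanstalk, if any,
# through a shell (so constructs like && work), then start the real application
set -e
if [ "$#" -gt 0 ]; then
  sh -c "$*"
fi
exec python3 -m flask run --host=0.0.0.0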
Or you can replace your ENTRYPOINT by something similar to this:
ENTRYPOINT ["/bin/bash"]
and your default command will be:
CMD ["python3 ..."]
This way, when running locally you only run the python3 command.
When running in AWS, you can change your Command and append the python command to the end, as mentioned by Marcin. Both cases work.

How can I run a script automatically after Docker container startup

I'm using Search Guard plugin to secure an elasticsearch cluster composed of multiple nodes.
Here is my Dockerfile:
FROM docker.elastic.co/elasticsearch/elasticsearch:5.6.3
USER root
# Install search guard
RUN bin/elasticsearch-plugin install --batch com.floragunn:search-guard-5:5.6.3-16 \
&& chmod +x \
plugins/search-guard-5/tools/hash.sh \
plugins/search-guard-5/tools/sgadmin.sh \
bin/init_sg.sh \
&& chown -R elasticsearch:elasticsearch /usr/share/elasticsearch
USER elasticsearch
To initialize Search Guard (create internal users and assign roles), I need to run the script init_sg.sh after container startup.
Here is the problem: Unless elasticsearch is running, the script will not initialize any security index.
The script's content is :
sleep 10
plugins/search-guard-5/tools/sgadmin.sh -cd config/ -ts config/truststore.jks -ks config/kirk-keystore.jks -nhnv -icl
For now, I just run the script manually after the container starts, but since I'm running it on Kubernetes, pods may get killed or fail and be recreated automatically for some reason. In that case, the plugin has to be initialized automatically after container startup!
So how to accomplish this? Any help or hint would be really appreciated.
The image itself has an entrypoint, ENTRYPOINT ["/run/entrypoint.sh"], specified in the Dockerfile. You can replace it with your own script: for example, create a new script, mount it, first call /run/entrypoint.sh, and then wait for Elasticsearch to start before running your init_sg.sh.
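A minimal sketch of such a wrapper, assuming the original entrypoint path from above, that curl is available in the image, that init_sg.sh lives under /usr/share/elasticsearch/bin, and that Elasticsearch answers on localhost:9200 once it is up:
#!/bin/sh
# hypothetical custom entrypoint: start the original entrypoint, wait for ES, then init Search Guard
/run/entrypoint.sh &
until curl -s http://localhost:9200 >/dev/null; do
  sleep 5
done
/usr/share/elasticsearch/bin/init_sg.sh
# keep the container attached to the Elasticsearch process started above
wait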
Not sure this will solve your problem, but it's worth checking my repo's Dockerfile.
I have created a simple run.sh file, copied it into the Docker image, and in the Dockerfile I wrote CMD ["run.sh"]. In the same way, define whatever you want in run.sh and write CMD ["run.sh"]. You can find another example below:
Dockerfile
FROM java:8
RUN apt-get update && apt-get install stress-ng -y
ADD target/restapp.jar /restapp.jar
COPY dockerrun.sh /usr/local/bin/dockerrun.sh
RUN chmod +x /usr/local/bin/dockerrun.sh
CMD ["dockerrun.sh"]
dockerrun.sh
#!/bin/sh
java -Dserver.port=8095 -jar /restapp.jar &
hostname="hostname: `hostname`"
nohup stress-ng --vm 4 &
while true; do
sleep 1000
done
This is addressed in the documentation here: https://docs.docker.com/config/containers/multi-service_container/
If one of your processes depends on the main process, then start your helper process FIRST with a script like wait-for-it, then start the main process SECOND and remove the fg %1 line.
#!/bin/bash
# turn on bash's job control
set -m
# Start the primary process and put it in the background
./my_main_process &
# Start the helper process
./my_helper_process
# the my_helper_process might need to know how to wait on the
# primary process to start before it does its work and returns
# now we bring the primary process back into the foreground
# and leave it there
fg %1
I was trying to solve the exact same problem. Here's the approach that worked for me.
Create a separate shell script that checks the ES status and only starts the Search Guard initialization when ES is ready:
Shell Script
#!/bin/sh
echo ">>>> Right before SG initialization <<<<"
# use while loop to check if elasticsearch is running
while true
do
  netstat -uplnt | grep :9300 | grep LISTEN > /dev/null
  verifier=$?
  if [ 0 = $verifier ]
  then
    echo "Running search guard plugin initialization"
    /elasticsearch/plugins/search-guard-6/tools/sgadmin.sh -h 0.0.0.0 -cd plugins/search-guard-6/sgconfig -icl -key config/client.key -cert config/client.pem -cacert config/root-ca.pem -nhnv
    break
  else
    echo "ES is not running yet"
    sleep 5
  fi
done
Install script in Dockerfile
You will need to install the script in the container so it's accessible after the container starts.
COPY sginit.sh /
RUN chmod +x /sginit.sh
Update entrypoint script
You will need to edit the entrypoint script or run script of your ES image so that it starts sginit.sh in the background BEFORE starting the ES process.
# Run sginit in background waiting for ES to start
/sginit.sh &
This way the sginit.sh will start in the background, and will only initialize SG after ES is started.
The reason to start this sginit.sh script in the background before ES is so that it does not block ES from starting. The same logic applies the other way round: if you put it after the ES start, it will never run unless you put the ES start itself in the background. A sketch of that ordering follows.
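A minimal sketch of that ordering inside the edited entrypoint (the /run/entrypoint.sh path is borrowed from the earlier answer as a stand-in for however your image actually starts ES):
#!/bin/sh
# start the SG init watcher first, in the background
/sginit.sh &
# then start Elasticsearch in the foreground as the container's main process
exec /run/entrypoint.sh "$@"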
I would suggest putting a CMD in your Dockerfile to execute the script when the container starts:
FROM debian
RUN apt-get update && apt-get install -y nano && apt-get clean
EXPOSE 8484
CMD ["/bin/bash", "/opt/your_app/init.sh"]
There is another way, but before using it, look at your requirements:
ENTRYPOINT "put your code here" && /bin/bash
# example: ENTRYPOINT service nginx start && service ssh start && /bin/bash (use && to separate your commands)
You can also use the wait-for-it script. It will wait on the availability of a host and TCP port. It is useful for synchronizing the spin-up of interdependent services and works like a charm with containers. It does not have any external dependencies, so you can just run it from a RUN command without doing anything else.
A Dockerfile example based on this thread:
FROM elasticsearch
# Make elasticsearch write data to a folder that is not declared as a volume in elasticsearch's official Dockerfile.
RUN mkdir /data && chown -R elasticsearch:elasticsearch /data && echo 'es.path.data: /data' >> config/elasticsearch.yml && echo 'path.data: /data' >> config/elasticsearch.yml
# Download wait-for-it
ADD https://raw.githubusercontent.com/vishnubob/wait-for-it/e1f115e4ca285c3c24e847c4dd4be955e0ed51c2/wait-for-it.sh /utils/wait-for-it.sh
# Copy the files you may need and your insert script
# Insert data into elasticsearch
RUN /docker-entrypoint.sh elasticsearch -p /tmp/epid & /bin/bash /utils/wait-for-it.sh -t 0 localhost:9200 -- path/to/insert/script.sh; kill $(cat /tmp/epid) && wait $(cat /tmp/epid); exit 0;

How to get /etc/profile to run automatically in Alpine / Docker

How can I get /etc/profile to run automatically when starting an Alpine Docker container interactively? I have added some aliases to an aliases.sh file and placed it in /etc/profile.d, but when I start the container using docker run -it [my_container] sh, my aliases aren't active. I have to manually type . /etc/profile from the command line each time.
Is there some other configuration necessary to get /etc/profile to run at login? I've also had problems with using a ~/.profile file. Any insight is appreciated!
EDIT:
Based on VonC's answer, I pulled and ran his example ruby container. Here is what I got:
$ docker run --rm --name ruby -it codeclimate/alpine-ruby:b42
/ # more /etc/profile.d/rubygems.sh
export PATH=$PATH:/usr/lib/ruby/gems/2.0.0/bin
/ # env
no_proxy=*.local, 169.254/16
HOSTNAME=6c7e93ebc5a1
SHLVL=1
HOME=/root
TERM=xterm
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PWD=/
/ # exit
Although the /etc/profile.d/rubygems.sh file exists, it is not being run when I login and my PATH environment variable is not being updated. Am I using the wrong docker run command? Is something else missing? Has anyone gotten ~/.profile or /etc/profile.d/ files to work with Alpine on Docker? Thanks!
The default shell in Alpine Linux is ash.
Ash will only read the /etc/profile and ~/.profile files if it is started as a login shell sh -l.
To force Ash to source /etc/profile, or any other script you want, upon its invocation as a non-login shell, you need to set up an environment variable called ENV before launching Ash.
e.g. in your Dockerfile
FROM alpine:3.5
ENV ENV="/root/.ashrc"
RUN echo "echo 'Hello, world!'" > "$ENV"
When you build that you get:
deployer@ubuntu-1604-amd64:~/blah$ docker build --tag test .
Sending build context to Docker daemon 2.048kB
Step 1/3 : FROM alpine:3.5
3.5: Pulling from library/alpine
627beaf3eaaf: Pull complete
Digest: sha256:58e1a1bb75db1b5a24a462dd5e2915277ea06438c3f105138f97eb53149673c4
Status: Downloaded newer image for alpine:3.5
---> 4a415e366388
Step 2/3 : ENV ENV "/root/.ashrc"
---> Running in a9b6ff7303c2
---> 8d4af0b7839d
Removing intermediate container a9b6ff7303c2
Step 3/3 : RUN echo "echo 'Hello, world!'" > "$ENV"
---> Running in 57c2fd3353f3
---> 2cee6e034546
Removing intermediate container 57c2fd3353f3
Successfully built 2cee6e034546
Finally, when you run the newly generated container, you get:
deployer@ubuntu-1604-amd64:~/blah$ docker run -ti test /bin/sh
Hello, world!
/ # exit
Notice the Ash shell didn't run as a login shell.
So to answer your query, replace
ENV ENV="/root/.ashrc"
with:
ENV ENV="/etc/profile"
and Alpine Linux's Ash shell will automatically source the /etc/profile script each time the shell is launched.
Gotcha: /etc/profile is normally meant to be sourced only once! So I would advise that you don't source it and instead source a /root/.somercfile.
Source: https://stackoverflow.com/a/40538356
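Following that advice, a minimal sketch for the aliases case from the question would be to point ENV at a dedicated rc file instead of /etc/profile (file names assumed):
FROM alpine:3.5
# have ash source the OP's aliases each time the shell starts
COPY aliases.sh /root/.ashrc
ENV ENV="/root/.ashrc"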
You can still try adding this to your Dockerfile:
RUN echo '\
. /etc/profile ; \
' >> /root/.profile
(assuming the current user is root. If not, replace /root with the full home path)
That being said, those /etc/profile.d/xx.sh should run.
See codeclimate/docker-alpine-ruby as an example:
COPY files /
with files/etc including a files/etc/profile.d/rubygems.sh that runs just fine.
In the OP's project Dockerfile, there is a
COPY aliases.sh /etc/profile.d/
But the default shell is not a login shell (sh -l), which means profile files (or those in /etc/profile.d) are not sourced.
Adding sh -l would work:
docker@default:~$ docker run --rm --name ruby -it codeclimate/alpine-ruby:b42 sh -l
87a58e26b744:/# echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/lib/ruby/gems/2.0.0/bin
As mentioned by Jinesh before, the default shell in Alpine Linux is ash
localhost:~$ echo $SHELL
/bin/ash
localhost:~$
Therefore the simple solution is to add your aliases in .profile. In this case, I put all my aliases in ~/.ash_aliases:
localhost:~$ cat .profile
# ~/.profile
# Alias
if [ -f ~/.ash_aliases ]; then
. ~/.ash_aliases
fi
localhost:~$
.ash_aliases file
localhost:~$ cat .ash_aliases
alias a=alias
alias c=clear
alias f=file
alias g=grep
alias l='ls -lh'
localhost:~$
And it works :)
I use this:
docker exec -it my_container /bin/ash '-l'
The -l flag passed to ash will make it behave as a login shell, thus reading ~/.profile
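The same flag works for the interactive run from the question (reusing the placeholder image name):
docker run -it [my_container] sh -l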

Simple Shell Script: Dead Docker Containers

I have a very simple Docker container that runs a bash shell script that returns something. My Dockerfile:
# Docker image to get stats from a rest interface using CURL and JSON parsing
FROM ubuntu
RUN apt-get update
# Install curl and jq, a lightweight command-line JSON processor
RUN apt-get install -y curl jq
COPY ./stats.sh /
# Make sure script has execute permissions for root
RUN chmod 500 stats.sh
# Define a custom entrypoint to execute stats commands easily within the container,
# using environment substitution and the like...
ENTRYPOINT ["/stats.sh"]
CMD ["info"]
The stats.sh looks like this:
#!/bin/bash
# ElasticSearch
## Get the total size of the elasticsearch DB in bytes
## Requires the elasticsearch container to be linked with alias 'elasticsearch'
function es_size() {
  local size=$(curl $ELASTICSEARCH_PORT_9200_TCP_ADDR:$ELASTICSEARCH_PORT_9200_TCP_PORT/_stats/_all 2>/dev/null|jq ._all.total.store.size_in_bytes)
  echo $size
}
if [[ "$1" == "info" ]]; then
  echo "Check stats.sh for available commands"
elif [[ "$1" == "es_size" ]]; then
  es_size
else
  echo "Unknown command: $@"
fi
So basically, I have a Docker container that I run with --rm so it exits immediately after running and returning the value I want. More precisely, I run it from another shell script (on the host) with:
local size=$(docker run --name stats-es-size --rm --link $esName:elasticsearch $ENV_DOCKER_REST_STATS_IMAGE:$ENV_DOCKER_REST_STATS_VERSION es_size)
Now I'm running this periodically to gather statistics, once a minute. While it works well in general, I end up getting containers with status Dead about once a day.
Can anybody tell me what I might be doing wrong? Is there some problem with my approach or why do my containers die with a certain frequency?
