I like the Docker Dev Environments tool, but I'd also like to be able to have some tools preinstalled when a user clones the repository using it.
I have a .devcontainer folder in the repository with a Dockerfile:
# [Choice] Alpine version: 3.13, 3.12, 3.11, 3.10
ARG VARIANT="3.13"
FROM mcr.microsoft.com/vscode/devcontainers/base:0-alpine-${VARIANT}
# Install Terraform CLI
# Install GCloud SDK
And a devcontainer.json file:
{
  "name": "Alpine",
  "build": {
    "dockerfile": "Dockerfile",
    // Update 'VARIANT' to pick an Alpine version: 3.10, 3.11, 3.12, 3.13
    "args": { "VARIANT": "3.13" }
  },
  // Set *default* container specific settings.json values on container create.
  "settings": {},
  // Add the IDs of extensions you want installed when the container is created.
  // Note that some extensions may not work in Alpine Linux. See https://aka.ms/vscode-remote/linux.
  "extensions": [],
  // Use 'forwardPorts' to make a list of ports inside the container available locally.
  // "forwardPorts": [],
  // Use 'postCreateCommand' to run commands after the container is created.
  // "postCreateCommand": "uname -a",
  // Uncomment when using a ptrace-based debugger like C++, Go, and Rust
  // "runArgs": [ "--cap-add=SYS_PTRACE", "--security-opt", "seccomp=unconfined" ],
  // Comment out connect as root instead. More info: https://aka.ms/vscode-remote/containers/non-root.
  "remoteUser": "vscode"
}
I've tried to include curl and install commands in the Dockerfile, but they just don't seem to work. To clarify: once the container is built, I can't access the CLI tools, e.g. terraform --version says terraform not found.
The environment launches as a VS Code window running in the container, and I am attempting to use the CLI tools from the VS Code terminal, if that makes a difference.
EDIT: The issue is that creating an environment from the Docker dashboard doesn't read your .devcontainer folder and files; it just creates a stock, basic container. You need to clone the repository, open it in VS Code, and then "Reopen in Container", and it will build your environment.
I swapped to Ubuntu as the base image instead of Alpine, and instead of creating the dev environment from the Docker dashboard I opened the project folder locally in VS Code and selected "Reopen in Container". It then installed everything, and I have the CLI tools available now.
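A quick sanity check from the VS Code integrated terminal after the rebuild (assuming the Dockerfile below built cleanly) could be:
terragrunt --version
gcloud --version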
The install commands below come from each provider's official documentation. I'm going to retest pulling the repository down through the Docker dashboard to see if it works.
# [Choice] Ubuntu version: bionic, focal
ARG VARIANT="focal"
FROM mcr.microsoft.com/vscode/devcontainers/base:0-${VARIANT}
# Installs Terragrunt + Terraform
ARG TERRAGRUNT_PATH=/bin/terragrunt
ARG TERRAGRUNT_VERSION=0.31.1
RUN wget https://github.com/gruntwork-io/terragrunt/releases/download/v${TERRAGRUNT_VERSION}/terragrunt_linux_amd64 -O ${TERRAGRUNT_PATH} \
&& chmod 755 ${TERRAGRUNT_PATH}
# Installs GCloud SDK
RUN echo "deb [signed-by=/usr/share/keyrings/cloud.google.gpg] http://packages.cloud.google.com/apt cloud-sdk main" | tee -a /etc/apt/sources.list.d/google-cloud-sdk.list && curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key --keyring /usr/share/keyrings/cloud.google.gpg add - && apt-get update -y && apt-get install google-cloud-sdk -y
I am trying to set up a VS Code Dev Container environment for Tauri development. I am using X forwarding to display the app on the container's host system. When I try to start the app via yarn tauri dev, the host's X11 server displays the app, but with a black screen. Within the container, I am getting the following warnings & error:
(process:3623): Gtk-WARNING **: 20:01:34.585: Locale not supported by C library.
Using the fallback 'C' locale.
event - compiled client and server successfully in 9.3s (176 modules)
(process:3665): Gtk-WARNING **: 20:01:40.390: Locale not supported by C library.
Using the fallback 'C' locale.
(WebKitWebProcess:3665): Gdk-ERROR **: 20:01:42.784: The program 'WebKitWebProcess' received an X Window System error.
This probably reflects a bug in the program.
The error was 'GLXBadFBConfig'.
(Details: serial 173 error_code 163 request_code 149 (GLX) minor_code 21)
(Note to programmers: normally, X errors are reported asynchronously;
that is, you will receive the error a while after causing it.
To debug your program, run it with the GDK_SYNCHRONIZE environment
variable to change this behavior. You can then get a meaningful
backtrace from your debugger if you break on the gdk_x_error() function.)
This is the Dockerfile I am using
#FROM debian:bullseye
FROM nvidia/opengl:1.0-glvnd-runtime-ubuntu20.04
ENV DEBIAN_FRONTEND=noninteractive
# install dependencies from package manager
RUN apt update && \
apt install -y libwebkit2gtk-4.0-dev \
build-essential \
curl \
wget \
libssl-dev \
libgtk-3-dev \
libayatana-appindicator3-dev \
librsvg2-dev
# install rust
RUN curl --proto '=https' --tlsv1.2 https://sh.rustup.rs -sSf | sh -s -- -y
# install nodejs, npm & yarn
RUN curl -fsSL https://deb.nodesource.com/setup_16.x | bash - && apt-get install -y nodejs
RUN npm install -g yarn
# create 'dev' user
RUN mkdir /home/dev
RUN useradd -u 1000 dev && chown -R dev /home/dev
And this is the devcontainer.json
{
  "name": "Tauri Dev Environment",
  "dockerFile": "Dockerfile",
  "settings": {
    "terminal.integrated.shell.linux": "/bin/bash"
  },
  "extensions": [
    "esbenp.prettier-vscode",
    "dbaeumer.vscode-eslint",
    "rust-lang.rust",
    "be5invis.toml"
  ],
  "remoteEnv": {
    "DISPLAY": "host.docker.internal:0"
  },
  "runArgs": ["--gpus=all"],
  "workspaceMount": "source=${localWorkspaceFolder},target=/home/dev/workspace/${localWorkspaceFolderBasename},type=bind",
  "workspaceFolder": "/home/dev/workspace/${localWorkspaceFolderBasename}"
}
As you can see in the Dockerfile, I first tried a Debian base image. Then I thought the issue could be fixed by accessing my system's GPU via an Nvidia image and the corresponding configuration. However, both attempts led to the same result. It might be worth noting that with the Debian-based image I am able to successfully forward browser UI from the container.
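To narrow down whether plain X forwarding or the GLX path is the part that fails, a couple of checks from inside the container can help (package names assume a Debian/Ubuntu base):
apt-get update && apt-get install -y x11-apps mesa-utils
xeyes         # plain X11 forwarding; this path already works, given the browser UI test
glxinfo -B    # the GLX/OpenGL path, which is what WebKitWebProcess trips over here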
Update: The problem only exists when I run the host's X11 server in "Multiple windows" mode. When I run it in "Fullscreen" or "Single Window" mode, the Tauri app is displayed properly. This also works with the simple Debian base image without GPU access. The following image shows the display settings that work and the one that does not. Does anyone know why it does not work in the "Multiple windows" mode?
This appears to be a problem with the X11 server. I was using VcXsrv before. When I use Xming, the "Multiple Windows" option works properly. I want to highlight again that the Nvidia base image and GPU access are not required.
I have started a Docker container using the following command:
docker run tomcat:latest
Then I created a file named docker.yml with the following contents:
plugin: community.docker.docker_containers
docker_host: unix://var/run/docker.sock
Finally I try to obtain a list of the currently running Docker containers using:
ansible-inventory -i docker.yml --list
However instead of a list of running containers, I only get the following result:
[WARNING]: * Failed to parse docker.yml with yaml plugin: Plugin configuration YAML file, not YAML inventory
[WARNING]: * Failed to parse docker.yml with constructed plugin: Incorrect plugin name in file: community.docker.docker_containers
[WARNING]: Unable to parse docker.yml as an inventory source
[WARNING]: No inventory was parsed, only implicit localhost is available
{
  "_meta": {
    "hostvars": {}
  },
  "all": {
    "children": [
      "ungrouped"
    ]
  }
}
Have I misunderstood the Ansible Docker containers dynamic inventory or am I doing something wrong?
I suspect I had a case of a system in disarray, and this was the cure:
I retained my current Python installation located at ~/Library/Python/3.9/.
Attempted to uninstall Ansible using pip:
pip uninstall ansible
Manually removed all things Ansible:
sudo rm -r /etc/ansible
sudo rm -r ~/.ansible
sudo rm -r /usr/local/lib/python3.9/site-packages/ansible*
sudo rm /usr/local/bin/ansible*
Performed a fresh installation of Ansible:
pip install ez_setup
pip install --user ansible
Installed Ansible Docker collection prerequisite:
pip install docker
Installed Ansible Docker collection:
ansible-galaxy collection install community.docker
After the above, the Ansible Docker container dynamic inventory works as expected and without errors.
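A quick way to verify that the fresh install can actually see the collection and its inventory plugin (output will differ per system):
ansible --version                                  # shows which Python and module path this ansible uses
ansible-galaxy collection list community.docker    # the docker_containers inventory plugin ships in this collection
ansible-inventory -i docker.yml --list             # should now list the running containers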
I want to run VS Code in Docker for an internal test; I've created the following Dockerfile:
FROM debian:stable
RUN apt-get update && apt-get install -y apt-transport-https curl gpg
RUN curl https://packages.microsoft.com/keys/microsoft.asc | gpg --dearmor > microsoft.gpg \
&& install -o root -g root -m 644 microsoft.gpg /etc/apt/trusted.gpg.d/ \
&& echo "deb [arch=amd64] https://packages.microsoft.com/repos/vscode stable main" > /etc/apt/sources.list.d/vscode.list
RUN apt-get update && apt-get install -y code libx11-xcb-dev libasound2
RUN code --user-data-dir="~/.vscode-root"
To build it, I use:
docker build -t vscode .
To run it, I use:
docker run vscode code -v
When I run it like this, I get the error:
You are trying to start vscode as a super user which is not recommended. If you really want to, you must specify an alternate user data directory using the --user-data-dir argument.
I just want to verify it by running something like RUN code -v. How can I do that?
Should I change the user? I just want to run VS Code in Docker and use some VS Code APIs.
Have you tried using VS Code's built-in functionality for developing in a container?
Check out this page, which describes how to do this:
Developing inside a Container
You can try out some of the sample container configurations provided by VSCode and use any of those devcontainer.json files as an example to configure a custom development container to your liking. According to the page above:
Workspace files are mounted from the local file system or copied or cloned into the container. Extensions are installed and run inside the container, where they have full access to the tools, platform, and file system. This means that you can seamlessly switch your entire development environment just by connecting to a different container.
This is a very handy way to have different environments for development that are isolated within the container.
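For example, a minimal .devcontainer/devcontainer.json pointing at the Dockerfile from the question could look roughly like this (the name and remoteUser values are assumptions; remoteUser must match a non-root user that the Dockerfile actually creates, which also avoids the "running as a super user" warning):
{
  "name": "code-in-docker-test",
  "build": { "dockerfile": "Dockerfile" },
  "remoteUser": "dev"
}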
Scenario
I want to develop ansible roles. Those roles should be validated through a CI/CD process with molecule, using docker as the driver. This validation step should include multiple Linux flavours (e.g. centos/ubuntu/debian) times the supported ansible versions.
The tests should then be executed such that the role is verified with
centos:7 + ansible:2.5
centos:7 + ansible:2.6
centos:7 + ansible:2.7
...
ubuntu:1604 + ansible:2.5
ubuntu:1604 + ansible:2.6
ubuntu:1604 + ansible:2.7
...
The issues at hand
there is no official ansible image available
how to best test a role for ansible version compatibility?
Issue 1: no official ansible images
The official images by the ansible team have been deprecated for about 3 years now:
https://hub.docker.com/r/ansible/ubuntu14.04-ansible
https://hub.docker.com/r/ansible/centos7-ansible
In addition, the link that the deprecated images refer to in order to find new images supporting ansible is quite useless due to the sheer number of results:
https://hub.docker.com/search/?q=ansible&page=1&isAutomated=0&isOfficial=0&pullCount=1&starCount=0
Is there a well-maintained ansible docker image by the community (or ansible) that fills the void?
Preferable with multiple versions that can be pulled and a CI process that builds and validates the created image regularly.
Why am I looking for ansible images? I do not want to reinvent the wheel (if possible). I would like to use the images to test ansible roles via molecule for version incompatibility.
I searched but could not find anything truly useful. What images are you using to run ansible in a container/orchestrator? Do you build and maintain the images yourself?
e.g. https://hub.docker.com/r/ansibleplaybookbundle/apb-base/tags
looked promising; however, the images there are also at least 7 months old.
Issue 2: how to best test a role for ansible version compatibility?
Is creating docker images for each combination of OS and ansible version the best way to test via molecule and docker as driver? Or is there a smarter way to test backward compatibility of ansible roles with multiple OS times different ansible versions?
I already test my ansible roles with molecule and docker as driver. Those tests currently only testing the functionality of the role on various Linux distros, but not the ansible backward compatibility by running the role with older ansible versions.
Here an example role with travis tests for centos7/ubuntu1604/ubuntu1804 based on geerlingguy's ntp role: https://github.com/Gepardec/ansible-role-ntp
Solution
In order to test ansible roles with multiple versions of ansible, python and various Linux flavors we can use
molecule for our ansible role functionality
docker as our abstraction layer on which we run the target system for our ansible role
tox to setup generic virtualenvs and test our various combinations without side effects
travis to automate it all
This will be quite a long/detailed answer. You can check out an example ansible role with the whole setup here
https://github.com/ckaserer/ansible-role-example
https://travis-ci.com/ckaserer/ansible-role-example
Step 1: test ansible role with molecule
Molecule docs: https://molecule.readthedocs.io/en/stable/
Fixes Issue 1: no official ansible images
I could create ansible images for every distro I would like to test, as Jeff Geerling describes in his blog posts.
The clear downside of this approach: the images will need maintenance (eventually).
However, with molecule, we can combine base images with a Dockerfile.j2 (Jinja2 Template) to create the images with minimal requirements to run ansible. Through this approach, we are now able to use the official linux distro images from docker hub and do not need to maintain a docker repo for each linux distro and the various versions.
Here are the important bits in molecule.yml:
platforms:
  - name: instance-${TOX_ENVNAME}
    image: ${MOLECULE_DISTRO:-'centos:7'}
    command: /sbin/init
    volumes:
      - /sys/fs/cgroup:/sys/fs/cgroup:ro
    privileged: true
The default Dockerfile.j2 from molecule is already good, but I have a few additions:
# Molecule managed
{% if item.registry is defined %}
FROM {{ item.registry.url }}/{{ item.image }}
{% else %}
FROM {{ item.image }}
{% endif %}
{% if item.env is defined %}
{% for var, value in item.env.items() %}
{% if value %}
ENV {{ var }} {{ value }}
{% endif %}
{% endfor %}
{% endif %}
RUN if [ $(command -v apt-get) ]; then apt-get update && apt-get install -y python sudo bash ca-certificates iproute2 && apt-get clean; \
elif [ $(command -v zypper) ]; then zypper refresh && zypper install -y python sudo bash python-xml iproute2 && zypper clean -a; \
elif [ $(command -v apk) ]; then apk update && apk add --no-cache python sudo bash ca-certificates; \
elif [ $(command -v xbps-install) ]; then xbps-install -Syu && xbps-install -y python sudo bash ca-certificates iproute2 && xbps-remove -O; \
elif [ $(command -v swupd) ]; then swupd bundle-add python3-basic sudo iproute2; \
elif [ $(command -v dnf) ] && cat /etc/os-release | grep -q '^NAME=Fedora'; then dnf makecache && dnf --assumeyes install python sudo python-devel python*-dnf bash iproute && dnf clean all; \
elif [ $(command -v dnf) ] && cat /etc/os-release | grep -q '^NAME="CentOS Linux"' ; then dnf makecache && dnf --assumeyes install python36 sudo platform-python-devel python*-dnf bash iproute && dnf clean all && ln -s /usr/bin/python3 /usr/bin/python; \
elif [ $(command -v yum) ]; then yum makecache fast && yum install -y python sudo yum-plugin-ovl bash iproute && sed -i 's/plugins=0/plugins=1/g' /etc/yum.conf && yum clean all; \
fi
# Centos:8 + ansible 2.7 failed with error: "The module failed to execute correctly, you probably need to set the interpreter"
# Solution: ln -s /usr/bin/python3 /usr/bin/python
By default, this will test the role with centos:7. However, we can set the environment variable MOLECULE_DISTRO to whichever image we would like to test and run it via
MOLECULE_DISTRO=ubuntu:18.04 molecule test
Summary
We use official distro images from docker hub to test our ansible role via molecule.
The files used in this step
molecule.yml https://github.com/ckaserer/ansible-role-example/blob/master/molecule/default/molecule.yml
Dockerfile.j2 https://github.com/ckaserer/ansible-role-example/blob/master/molecule/default/Dockerfile.j2
Source
https://molecule.readthedocs.io/en/stable/examples.html#systemd-container
https://www.jeffgeerling.com/blog/2019/how-i-test-ansible-configuration-on-7-different-oses-docker
https://www.jeffgeerling.com/blog/2018/testing-your-ansible-roles-molecule
Step 2: Check compatibility for your role (python version X ansible version X linux distros)
Fixes Issue 2: how to best test a role for ansible version compatibility?
Let's use tox to create virtual environments to avoid side effects while testing various compatibility scenarios.
Here are the important bits in tox.ini:
[tox]
minversion = 3.7
envlist = py{3}-ansible{latest,29,28}-{ alpinelatest,alpine310,alpine39,alpine38, centoslatest,centos8,centos7, debianlatest,debian10,debian9,debian8, fedoralatest,fedora30,fedora29,fedora28, ubuntulatest,ubuntu2004,ubuntu1904,ubuntu1804,ubuntu1604 }
# only test currently supported ansible versions
# https://docs.ansible.com/ansible/latest/reference_appendices/release_and_maintenance.html#release-status
skipsdist = true

[base]
passenv = *
deps =
    -rrequirements.txt
    ansible25: ansible==2.5
    ...
    ansiblelatest: ansible
commands =
    molecule test
setenv =
    TOX_ENVNAME={envname}
    MOLECULE_EPHEMERAL_DIRECTORY=/tmp/{envname}

[testenv]
passenv =
    {[base]passenv}
deps =
    {[base]deps}
commands =
    {[base]commands}
setenv =
    ...
    centoslatest: MOLECULE_DISTRO="centos:latest"
    centos8: MOLECULE_DISTRO="centos:8"
    centos7: MOLECULE_DISTRO="centos:7"
    ...
    {[base]setenv}
The entirety of requirements.txt
docker
molecule
By simply executing
tox
it will create virtual envs for each compatibility combination defined in tox.ini by
envlist = py{3}-ansible{latest,29,28}-{ alpinelatest,alpine310,alpine39,alpine38, centoslatest,centos8,centos7, debianlatest,debian10,debian9,debian8, fedoralatest,fedora30,fedora29,fedora28, ubuntulatest,ubuntu2004,ubuntu1904,ubuntu1804,ubuntu1604 }
which translates in this particular case to: python3 x ansible version x linux distro
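To run just one combination from that matrix instead of the whole envlist, the usual tox selector works; for example (one of the generated env names from the list above):
tox -e py3-ansiblelatest-centos7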
Great! We have created tests for compatibility checks with the added benefit of always testing with ansible latest to notice breaking changes early on.
The files used in this step
tox.ini https://github.com/ckaserer/ansible-role-example/blob/master/tox.ini
requirements.txt https://github.com/ckaserer/ansible-role-example/blob/master/requirements.txt
Source
https://tox.readthedocs.io/en/latest/
https://molecule.readthedocs.io/en/stable/testing.html#tox
Step 3: CI with travis
Running the checks locally is good, running in a CI tool is great. So let's do that.
For this purpose, the following bits in the .travis.yml are important:
---
version: ~> 1.0
os: linux
language: python
python:
  - "3.8"
  - "3.7"
  - "3.6"
  - "3.5"
services: docker
cache:
  pip: true
  directories:
    - .tox
install:
  - pip install tox-travis
env:
  jobs:
    # ansible:latest - check for breaking changes
    ...
    - TOX_DISTRO="centoslatest" TOX_ANSIBLE="latest"
    - TOX_DISTRO="centos8" TOX_ANSIBLE="latest"
    - TOX_DISTRO="centos7" TOX_ANSIBLE="latest"
    ...
    # ansible: check version compatibility
    # only test currently supported ansible versions
    # https://docs.ansible.com/ansible/latest/reference_appendices/release_and_maintenance.html#release-status
    - TOX_DISTRO="centoslatest" TOX_ANSIBLE="{29,28}"
    - TOX_DISTRO="centos8" TOX_ANSIBLE="{29,28}"
    - TOX_DISTRO="centos7" TOX_ANSIBLE="{29,28}"
    ...
script:
  - tox -e $(echo py${TRAVIS_PYTHON_VERSION} | tr -d .)-ansible${TOX_ANSIBLE}-${TOX_DISTRO}
  # remove logs/pycache before caching .tox folder
  - |
    rm -r .tox/py*/log/*
    find . -type f -name '*.py[co]' -delete -o -type d -name __pycache__ -delete
First we have specified language: python to run builds with multiple versions of python as defined in the python: list.
We need docker, so we add it via services: docker.
The tests will take quite some time, so let's cache pip and the virtualenvs created by tox with:
cache:
pip: true
directories:
- .tox
We need tox...
install:
- pip install tox-travis
And finally, we define all our test cases
env:
jobs:
# ansible:latest - check for breaking changes
...
- TOX_DISTRO="centoslatest" TOX_ANSIBLE="latest"
...
Note that I have separate jobs for latest and for the distinct versions. That is on purpose: I would like to easily see what broke. Is it a version-compatibility issue, or an upcoming change in ansible's latest release?
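For reference, with TRAVIS_PYTHON_VERSION=3.8, TOX_ANSIBLE="latest" and TOX_DISTRO="centos7", the script line shown above expands to roughly:
tox -e py38-ansiblelatest-centos7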
The files used in this step
https://github.com/ckaserer/ansible-role-example/blob/master/.travis.yml
Source
https://docs.travis-ci.com/user/caching/
Bonus: run tox in parallel
You can run the tests in parallel (e.g. 3 tests simultaneously) by executing
tox -p 3
However, this will not give the output from molecule. You can enable that with
tox -p 3 -o true
The obvious downside to this approach is the pain of figuring out which line belongs to which process in the parallel execution.
Source
https://tox.readthedocs.io/en/latest/example/basic.html#parallel-mode
No real answer here, but some ideas:
Ansible Silo might have been a fit, but it has had no commits for a year.
And it's not exactly what you're looking for, but Ansible Runner is meant to be a fit for the "run ansible" use case. And it's a part of Ansible Tower / AWX, so it should last.
Runner is intended to be most useful as part of automation and tooling that needs to invoke Ansible and consume its results.
They do mention executing from a container here
The design of Ansible Runner makes it especially suitable for controlling the execution of Ansible from within a container for single-purpose automation workflows
But the issue for you is that the official ansible/ansible-runner container is tagged after the ansible-runner version, and ansible itself is installed through pip install ansible at container build time.
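If maintaining a version-pinned image yourself turns out to be the simplest route, it doesn't need to be much; a minimal sketch (base image and version are assumptions, not an official image):
FROM python:3.8-slim
ARG ANSIBLE_VERSION=2.9.27
RUN pip install --no-cache-dir "ansible==${ANSIBLE_VERSION}"
ENTRYPOINT ["ansible-playbook"]
CMD ["--version"]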
When launching an attached container in "VS Code Remote Development", has anyone found a way to change the container's shell when launching the VS Code integrated terminal?
It seems to run something similar to:
docker exec -it <containername> /bin/bash
I am looking for the equivalent of:
docker exec -it <containername> /bin/zsh
The only setting I found for attached containers is:
"remote.containers.defaultExtensions": []
I worked around it with
RUN echo "if [ -t 1 ]; then" >> /root/.bashrc
RUN echo "exec zsh" >> /root/.bashrc
RUN echo "fi" >> /root/.bashrc
Still would be interested in knowing if there was a way to set this per container.
I use a Docker container for my development environment and set the shell to bash in my Dockerfile:
# …
ENTRYPOINT ["bash"]
Yet when VS Code was connecting to my container, it was insisting on using the /bin/ash shell, which was driving me crazy... However, the fix (at least for me) was very simple, though not obvious:
From the .devcontainer.json reference.
All I needed to do in my case was to add the following entry in my .devcontainer.json file:
{
  …
  "settings": {
    "terminal.integrated.shell.*": "/bin/bash"
  }
  …
}
Complete .devcontainer.json file (FYI)
{
  "name": "project-blueprint",
  "dockerComposeFile": "./docker-compose.yml",
  "service": "dev",
  "workspaceFolder": "/workspace/dev",
  "postCreateCommand": "yarn",
  "settings": {
    "terminal.integrated.shell.*": "/bin/bash"
  }
}
I'd like to contribute to this thread, since I spent a decent amount of time combing the web for a good solution to this involving VS Code's newer terminal.integrated.profiles.linux API.
Note: as of 20 Jan 2022, both the commented and the uncommented JSON work. The uncommented lines are the new, non-deprecated way to get this working with Dev Containers.
{
  "settings": {
    // "terminal.integrated.shell.linux": "/bin/zsh"
    "terminal.integrated.defaultProfile.linux": "zsh",
    "terminal.integrated.profiles.linux": {
      "zsh": {
        "path": "/bin/zsh"
      }
    }
  }
}
If anyone is interested, I also figured out how to get Oh My Zsh built into the image.
Dockerfile:
# Setup Stage - set up the ZSH environment for optimal developer experience
FROM node:16-alpine AS setup
RUN npm install -g expo-cli
# Let scripts know we're running in Docker (useful for containerized development)
ENV RUNNING_IN_DOCKER true
# Use the unprivileged `node` user (pre-created by the Node image) for safety (and because it has permission to install modules)
RUN mkdir -p /app \
&& chown -R node:node /app
# Set up ZSH and our preferred terminal environment for containers
RUN apk --no-cache add zsh curl git
# Set up ZSH as the unprivileged user (we just need to start it, it'll initialize our setup itself)
USER node
# set up oh my zsh
RUN cd ~ && wget https://raw.githubusercontent.com/ohmyzsh/ohmyzsh/master/tools/install.sh && sh install.sh
# initialize ZSH
RUN /bin/zsh ~/.zshrc
# Switch back to root
USER root