I want to run VS Code in Docker for internal testing, and I've created the following Dockerfile:
FROM debian:stable
RUN apt-get update && apt-get install -y apt-transport-https curl gpg
RUN curl https://packages.microsoft.com/keys/microsoft.asc | gpg --dearmor > microsoft.gpg \
&& install -o root -g root -m 644 microsoft.gpg /etc/apt/trusted.gpg.d/ \
&& echo "deb [arch=amd64] https://packages.microsoft.com/repos/vscode stable main" > /etc/apt/sources.list.d/vscode.list
RUN apt-get update && apt-get install -y code libx11-xcb-dev libasound2
RUN code --user-data-dir="~/.vscode-root"
I build it with:
docker build -t vscode .
and run it with:
docker run vscode code -v
When I run it like this, I get this error:
You are trying to start vscode as a super user which is not recommended. If you really want to, you must specify an alternate user data directory using the --user-data-dir argument.
I just want to verify the install by running something like RUN code -v. How can I do that?
Should I change the user? I just want to run VS Code in Docker and use some VS Code APIs.
Have you tried using VS Code's built-in functionality for developing in a container?
Check out this page, which describes how to do this:
Developing inside a Container
You can try out some of the sample container configurations provided by VSCode and use any of those devcontainer.json files as an example to configure a custom development container to your liking. According to the page above:
Workspace files are mounted from the local file system or copied or cloned into the container. Extensions are installed and run inside the container, where they have full access to the tools, platform, and file system. This means that you can seamlessly switch your entire development environment just by connecting to a different container.
This is a very handy way to have different environments for development that are isolated within the container.
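For example, a minimal devcontainer.json might look something like this (just a sketch; the name is a placeholder, and the image matches the Debian base from your Dockerfile):
{
    // Hypothetical minimal configuration; see the page above for all options
    "name": "vscode-internal-test",
    "image": "debian:stable"
}
VS Code then builds and connects to the container for you, so you never need to launch code as root inside it yourself.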
Related
Is it possible to use the same shell for all RUN commands when building a Docker image, as opposed to each RUN command running in its own shell?
Use case: at some point, I need to source some file containing environment variables that are used later on. I cannot do this, because the commands run in different shells:
RUN source something.sh
RUN ./install.sh
RUN ... more commands
Instead I have to do:
RUN source something.sh && \
./install.sh && \
... more commands
I'm trying to avoid this, since it hurts readability, is error-prone, and does not allow inserting comments in between commands.
Any ideas?
Thanks!
It's not possible to have separate RUN statements run in the same shell.
If you don't like the look of concatenated commands, you could write a shell script and RUN that.
You'll have to get it into the container by using a COPY statement.
Or you can use wget or curl to fetch it and pipe it into a shell. That requires that wget or curl is present in the container, so you might have to install them first.
If you use curl on Debian, it could look like this:
RUN apt update && \
apt install -y curl && \
curl -sL https://github.com/link/to/my/install-script.sh | bash
If you COPY it in, it'd look like this:
COPY install-script.sh .
RUN ./install-script.sh
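For the original use case, install-script.sh itself could then be a plain shell script along these lines (a sketch, reusing something.sh and install.sh from the question):
#!/bin/bash
set -e                 # abort on the first failing command
source ./something.sh  # the sourced variables persist for the rest of this script
./install.sh
# ... more commands, with comments allowed in between
Because the whole script runs in one shell, anything something.sh exports is visible to the later commands, which is exactly what separate RUN lines can't give you.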
I created a custom image with the following Dockerfile:
FROM apache/airflow:2.1.1-python3.8
USER root
RUN apt-get update \
&& apt-get -y install gcc gnupg2 \
&& curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add - \
&& curl https://packages.microsoft.com/config/debian/10/prod.list > /etc/apt/sources.list.d/mssql-release.list
RUN apt-get update \
&& ACCEPT_EULA=Y apt-get -y install msodbcsql17 \
&& ACCEPT_EULA=Y apt-get -y install mssql-tools
RUN echo 'export PATH="$PATH:/opt/mssql-tools/bin"' >> ~/.bashrc \
&& echo 'export PATH="$PATH:/opt/mssql-tools/bin"' >> ~/.bashrc \
&& source ~/.bashrc
RUN apt-get -y install unixodbc-dev \
&& apt-get -y install python-pip \
&& pip install pyodbc
RUN echo -e “AIRFLOW_UID=$(id -u) \nAIRFLOW_GID=0” > .env
USER airflow
The image builds successfully, but when I try to run it, I get this error:
"airflow command error: the following arguments are required: GROUP_OR_COMMAND, see help above."
I have tried supplying a group ID with --user, but I can't figure it out.
How can I start this custom Airflow Docker image?
Thanks!
First of all, this line is wrong:
RUN echo -e “AIRFLOW_UID=$(id -u) \nAIRFLOW_GID=0” > .env
If you are running it with Docker Compose (I presume you took it from https://airflow.apache.org/docs/apache-airflow/stable/start/docker.html), this is something you should run on the "host" machine, not in the image. Remove that line from the Dockerfile; it has no effect there.
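For example, with that quick start you create the .env file on the host, before starting Docker Compose, roughly like this (taken from the 2.1-era quick-start instructions):
echo -e "AIRFLOW_UID=$(id -u)\nAIRFLOW_GID=0" > .env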
Secondly, it really depends on what command you run. The "GROUP_OR_COMMAND" message you got is the output of the airflow command. You have not copied the whole output of your command, but this is the message you get when you try to run airflow without telling it what to do. When you run the image, by default it runs the airflow command, which has a number of subcommands that can be executed. So the "see help above" message tells you the very thing you should do: look at the help and see which subcommand you wanted to run (and possibly run it).
docker run -it apache/airflow:2.1.2
usage: airflow [-h] GROUP_OR_COMMAND ...
positional arguments:
GROUP_OR_COMMAND
Groups:
celery Celery components
config View configuration
connections Manage connections
dags Manage DAGs
db Database operations
jobs Manage jobs
kubernetes Tools to help run the KubernetesExecutor
pools Manage pools
providers Display providers
roles Manage roles
tasks Manage tasks
users Manage users
variables Manage variables
Commands:
cheat-sheet Display cheat sheet
info Show information about current Airflow and environment
kerberos Start a kerberos ticket renewer
plugins Dump information about loaded plugins
rotate-fernet-key
Rotate encrypted connection credentials and variables
scheduler Start a scheduler instance
sync-perm Update permissions for existing roles and optionally DAGs
version Show the version
webserver Start a Airflow webserver instance
optional arguments:
-h, --help show this help message and exit
airflow command error: the following arguments are required: GROUP_OR_COMMAND, see help above.
When you extend the official image, the entrypoint passes the parameters to the airflow command, which is what causes this problem. Check this out: https://airflow.apache.org/docs/docker-stack/entrypoint.html#entrypoint-commands
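So if the custom image from the question was built as, say, my-airflow (a placeholder tag), the subcommand goes after the image name, for example:
docker run -it my-airflow webserver
# or, just to check that the image starts at all:
docker run -it my-airflow version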
I have set up a Docker image and installed Ubuntu on it. Can you please tell me how I can install OpenModelica inside Ubuntu on that Docker image?
For example, if I want to install Node.js on this Docker image I could use this command:
apt install nodejs
so I need some commands like that to install OpenModelica on my Docker image.
P.S.: my Docker image is an Ubuntu image.
I happened to create a Docker image for OpenModelica to debug something, so I might as well add it here. We got this question in the OpenModelica forum as well.
While the answer from @sjoelund.se will stay up to date, this one explains things a bit more.
Dockerfile
FROM ubuntu:18.04
# Export DISPLAY, so a XServer can display OMEdit
ARG DEBIAN_FRONTEND=noninteractive
ENV DISPLAY=host.docker.internal:0.0
# Install wget, gnupg, lsb-release
RUN apt-get update \
&& apt install -y wget gnupg lsb-release
# Get the OpenModelica stable version
RUN for deb in deb deb-src; do echo "$deb http://build.openmodelica.org/apt `lsb_release -cs` stable"; done | tee /etc/apt/sources.list.d/openmodelica.list
RUN wget -q http://build.openmodelica.org/apt/openmodelica.asc -O- | apt-key add -
# Install OpenModelica
RUN apt-get update \
&& apt install -y openmodelica
# Install OpenModelica libraries (like all of them)
RUN for PKG in `apt-cache search "omlib-.*" | cut -d" " -f1`; do apt-get install -y "$PKG"; done
# Add non-root user for security
RUN useradd -m -s /bin/bash openmodelicausers
USER openmodelicausers
ENV HOME /home/openmodelicausers
ENV USER openmodelicausers
WORKDIR $HOME
# Return omc version
CMD ["omc", "--version"]
Let's build and tag it:
docker build --tag openmodelica:ubuntubionic .
How to use omc from the docker image
Let's create a small helloWorld.mo Modelica model:
model helloWorld
Real x(start=1.0, fixed=true);
equation
der(x) = 2.5*x;
end helloWorld;
and a MOS script which will simulate it, called runHelloWorld.mos
loadFile("helloWorld.mo"); getErrorString();
simulate(helloWorld); getErrorString();
Now we can make our files accessible to the docker container with the -v flag and run our small example with:
docker run \
--rm \
-v $(pwd):/home/openmodelicausers \
openmodelica:ubuntubionic \
omc runHelloWorld.mos
Note that -v needs an absolute path. I added --rm to clean up.
Using OMEdit with a GUI
I'm using Windows + Docker with WSL2. So in order to get OMEdit running I need to have an X server installed on my Windows host system. They are not trivial to set up, but I'm using VcXsrv and so far it is working for me. On Linux this is of course much simpler.
I'm using this config to start XLaunch:
<?xml version="1.0" encoding="UTF-8"?>
<XLaunch WindowMode="MultiWindow" ClientMode="NoClient" LocalClient="False" Display="-1" LocalProgram="xcalc" RemoteProgram="xterm" RemotePassword="" PrivateKey="" RemoteHost="" RemoteUser="" XDMCPHost="" XDMCPBroadcast="False" XDMCPIndirect="False" Clipboard="True" ClipboardPrimary="True" ExtraParams="" Wgl="True" DisableAC="True" XDMCPTerminate="False"/>
But when the X server is running, you can use OMEdit in nearly the same fashion as you would from a Linux OS; just mount some directory with your files and that's it:
docker run \
--rm \
-v $(pwd):/home/openmodelicausers \
openmodelica:ubuntubionic \
OMEdit
You could get some inspiration from the Dockerfiles that are used to generate the OpenModelica docker images. For example: https://github.com/OpenModelica/OpenModelicaDockerImages/tree/v1.16.2
The question is quite clear:
How to start complete desktop environment (KDE, XFCE, Gnome doesn't matter) in the Docker remote container.
I have been digging around the internet, and there are lots of questions on related topics, but not this one: they are all about how to run a GUI application, not a full desktop.
What I found out:
You need to run Xvfb
Somehow run e.g. Xfce in that framebuffer
Allow x11vnc to share that running X environment
But I'm stuck here, always getting errors such as:
... (EE) Invalid screen configuration 1024x768 for -screen 0
... Cannot open /dev/tty0 (No such file or directory)
Could you give some Dockerfile lines in order to reach the goal?
This is what I was looking for: the simplest form of a desktop in Docker:
FROM ubuntu
RUN apt-get update
RUN apt-get install xfce4 -y
RUN apt-get install xfce4-goodies -y
RUN apt-get purge -y pm-utils xscreensaver*
RUN apt-get install wget -y
EXPOSE 5901
RUN wget -qO- https://dl.bintray.com/tigervnc/stable/tigervnc-1.8.0.x86_64.tar.gz | tar xz --strip 1 -C /
RUN mkdir ~/.vnc
RUN echo "123456" | vncpasswd -f >> ~/.vnc/passwd
RUN chmod 600 ~/.vnc/passwd
CMD ["/usr/bin/vncserver", "-fg"]
Unfortunately I could not sort it out with x11vnc and Xvfb, but TigerVNC turned out to work much better.
This sample generates a container with an Xfce GUI and runs a VNC server with the password 123456. There is no need to overwrite ~/.vnc/xstartup manually, because TigerVNC starts up the X server by default!
To run the server:
sudo docker run --rm -dti -p 5901:5901 3ab3e0e7cb
To connect there with vncviewer:
vncviewer -AutoSelect 0 -QualityLevel 9 -CompressLevel 0 192.168.1.100:5901
Also, you don't need to worry about the screen resolution, because by default it will resize to fit your screen.
You may also encounter an issue with ipc_channel_posix (Chrome and other browsers will not work properly). To eliminate this, run the container with shared memory:
docker run -d --shm-size=2g --privileged -p 5901:5901 image-name
x11docker allows running desktop environments as well as single GUI applications in Docker.
Could you give some Dockerfile lines in order to reach the goal?
Example desktop images on docker hub.
x11docker does a lot of setup to keep container isolation and provides some additional options like hardware acceleration or pulseaudio sound. Example:
x11docker --desktop x11docker/lxde
x11docker also supports network setups with SSH, VNC and HTML5
Example for SSH setup with xpra:
read Xenv < <(x11docker --xdummy --display=30 x11docker/lxde pcmanfm)
echo $Xenv && export $Xenv
# replace "start" with "start-desktop" to forward a desktop environment
xpra start :30 --use-display --start-via-proxy=no
From the client system, connect with:
xpra attach ssh:HOSTNAME:30 # replace HOSTNAME with IP or host name of ssh server
Without x11docker:
A quite short setup using Xephyr as a nested X server on the host is:
Xephyr :1
docker run -v /tmp/.X11-unix/X1:/tmp/.X11-unix/X1:rw \
-e DISPLAY=:1 \
x11docker/xfce
A short Dockerfile with Xfce desktop:
FROM debian:stretch
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update && apt-get install -y --no-install-recommends xfce4 dbus-x11
CMD startxfce4
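Assuming that image is built as, say, my-xfce (a placeholder tag), it can be started against the Xephyr display in the same way as above:
docker build -t my-xfce .
docker run -v /tmp/.X11-unix/X1:/tmp/.X11-unix/X1:rw \
       -e DISPLAY=:1 \
       my-xfce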
Edit: Solved (it was a typo)
I have a Dockerfile that successfully creates a virtualenv using virtualenvwrapper (along with setting up a heap of "standard" settings/packages in our normal environment). I am using the resulting image as a "base image" for further use. All good so far. However, the following Dockerfile (based off the first image, "base_image_14.04") falls down at the last line:
FROM base_image_14.04
USER root
RUN DEBIAN_FRONTEND=noninteractive \
apt-get update && apt-get install -y \
libproj0 libproj-dev \
libgeos-c1v5 libgeos-dev \
libjpeg62 libjpeg-dev \
zlib1g zlib1g-dev \
libfreetype6 libfreetype6-dev \
libgdal20 libgdal-dev \
&& rm -rf /var/lib/apt/lists
USER webdev
RUN ["/bin/bash", "-ic", "mkproject maproxy"]
EXPOSE 80
WORKDIR $PROJECT_HOME/mapproxy
ADD ./requirements.txt .
RUN ["/bin/bash", "-ic", "workon mapproxy && pip install -r requirements.txt"]
The "mkproject mapproxy" works fine. If I comment out the last line it builds successfully and I can spin up the container and run "workon mapproxy" manually, not a problem. But when I try and build with the last line, it gives a workon error:
ERROR: Environment 'mapproxy' does not exist. Create it with 'mkvirtualenv mapproxy'.
workon is being called, but for some reason it can't find the mapproxy virtualenv.
WORKON_HOME & PROJECT_HOME both exist (defined in the parent image) and point to the correct locations (and are used successfully by "mkproject mapproxy").
So why is workon returning an error when the mapproxy virtualenv exists? The same error happens when I isolate that last line into a third Dockerfile building on the second.
Solved: It was a simple typo. mkproject maproxy instead of mapproxy. :sigh:
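i.e. the earlier line should have been:
RUN ["/bin/bash", "-ic", "mkproject mapproxy"]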
I am trying to build a docker image and am running into similar problems.
The first question is: why use a virtualenv in Docker at all? In a nutshell, the main reason is to minimize the effort of migrating an existing, working approach into a Docker container. I will eventually use docker-compose, but I wanted to start by getting my feet wet with it all in a single Docker container.
In my first attempt I installed almost everything with apt-get, including uWSGI. I installed my app "globally" with pip3. The app has command-line functionality and a separate Flask web app, hence the need for uWSGI. The command-line functionality works, but when I make a request of the Flask app, uWSGI / Python has a problem with the locale: Fatal Python error: Py_Initialize: Unable to get the locale encoding and ImportError: No module named 'encodings'.
I have stripped away all my app specific additions to narrow down the problem. This is the Dockerfile I'm using:
# Docker image definition for testing
FROM ubuntu:xenial
# Create a user
RUN useradd -G sudo -ms /bin/bash tester
RUN echo 'tester:password' | chpasswd
WORKDIR /home/tester
# Skipping apt-get update to save some build time. Some are kept
# to insure they are the same as on host setup.
RUN apt-get install -y python3 python3-dev python3-pip \
virtualenv virtualenvwrapper sudo nano && \
apt-get clean -qy
# After above, can we use those installed in rest of Dockerfile?
# Yes, but not always, such as with virtualenvwrapper. What about
# virtualenv? How do you "source" the script? Doesn't appear to be
# installed, as bash complains "source needs a single parameter"
ENV VIRTUALENVWRAPPER_PYTHON /usr/bin/python3
ENV VIRTUALENVWRAPPER_VIRTUALENV /usr/bin/virtualenv
RUN ["/bin/bash", "-c", "source", "/usr/share/virtualenvwrapper/virtualenvwrapper.sh"]
# Create a virtualenv so uwsgi can find locale
# RUN mkdir /home/tester/.virtualenv && virtualenv -p`which python3` /home/bts_tools/.virtualenv/bts_tools
RUN mkvirtualenv -p`which python3` bts_tools && \
workon bts_tools && \
pip3 --disable-pip-version-check install --upgrade bts_tools
USER tester
ENTRYPOINT ["/bin/bash"]
CMD ["--login"]
The build fails on the line where I try to source the virtualenvwrapper script. Bash complains that source needs an argument - the file to be sourced. So I comment out the RUN lines and it builds without error. When I run the resulting container I see all the additions to the ENV that virtualenvwrapper makes (you can see all of them by executing the "set" command without any args), and the script to be sourced is there too.
So my question is: why doesn't Docker find them? How does the Docker build process work if the results of any previous RUNs or ENVs aren't applied for subsequent use in the Dockerfile? I know some things are applied and work; for example, if you apt-get nginx you can refer to /etc/nginx or alter things under that folder. You can create a user and set its password, or cd into its home folder, for example. If I move the WORKDIR before the RUN useradd -G, I see a warning from useradd that the home folder already exists. I tried to use the "time" program to time how long it takes to do various things in the Dockerfile, and docker complains it can't find 'time'.
So what exactly is going on? I have spent the last 3 days trying to figure this out. It just shouldn't be this difficult. What am I missing?
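If each RUN really does start a fresh shell (as the answer to the related question above says), then I suppose the source and the mkvirtualenv would have to happen inside a single RUN for the shell functions to still be defined; an untested sketch of what I mean:
# Everything in one bash invocation, so the sourced functions are still defined
RUN ["/bin/bash", "-c", "source /usr/share/virtualenvwrapper/virtualenvwrapper.sh && mkvirtualenv -p $(which python3) bts_tools && workon bts_tools && pip3 --disable-pip-version-check install --upgrade bts_tools"]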
Parts of the bts_tools flask app worked when I wasn't using virtual envs. Most of the app didn't work, and the issue was this locale problem. Since everything works on the host outside of docker, and after trying to alter the PATH, PYTHONHOME, PYTHONPATH in my uwsgi start script to overcome the dreaded "locale encoding" fatal error, I decided to try to replicate the host setup as closely as possible since that didn't have the locale issue. When I have had that problem before I could run dpkg-reconfigure python3 or fix with changes to PATH or ENV settings. If you google the problem you'll see many people have difficulties with python & locale. It's almost enough reason to avoid using python!
I posted about this locale issue elsewhere, if it helps.