I'm using this container to set up X11 in GitPod.
ARG base
FROM ${base}
# Dazzle does not rebuild a layer until one of its lines is changed. Increase this counter to rebuild this layer.
ENV TRIGGER_REBUILD=1
# Install Xvfb, JavaFX-helpers and Openbox window manager
RUN sudo install-packages xvfb x11vnc xterm openjfx libopenjfx-java openbox
# Overwrite this env variable to use a different window manager
ENV WINDOW_MANAGER="openbox"
USER root
# Change the default number of virtual desktops from 4 to 1 (footgun)
RUN sed -ri "s/<number>4<\/number>/<number>1<\/number>/" /etc/xdg/openbox/rc.xml
# Install novnc
RUN git clone --depth 1 https://github.com/novnc/noVNC.git /opt/novnc \
&& git clone --depth 1 https://github.com/novnc/websockify /opt/novnc/utils/websockify
COPY novnc-index.html /opt/novnc/index.html
# Add VNC startup script
COPY start-vnc-session.sh /usr/bin/
RUN chmod +x /usr/bin/start-vnc-session.sh
USER gitpod
# This is a bit of a hack. At the moment we have no means of starting background
# tasks from a Dockerfile. This workaround checks, on each bashrc eval, if the X
# server is running on screen 0, and if not starts Xvfb, x11vnc and novnc.
RUN echo "export DISPLAY=:0" >> /home/gitpod/.bashrc.d/300-vnc
RUN echo "[ ! -e /tmp/.X0-lock ] && (/usr/bin/start-vnc-session.sh &> /tmp/display-\${DISPLAY}.log)" >> /home/gitpod/.bashrc.d/300-vnc
USER root
### checks ###
# no root-owned files in the home directory
RUN notOwnedFile=$(find . -not "(" -user gitpod -and -group gitpod ")" -print -quit) \
&& { [ -z "$notOwnedFile" ] \
|| { echo "Error: not all files/dirs in $HOME are owned by 'gitpod' user & group"; exit 1; } }
USER gitpod
This is where it gets sketchy:
# Install novnc
RUN git clone --depth 1 https://github.com/novnc/noVNC.git /opt/novnc \
&& git clone --depth 1 https://github.com/novnc/websockify /opt/novnc/utils/websockify
COPY novnc-index.html /opt/novnc/index.html
I get this output. Please help!
COPY failed: file not found in build context or excluded by .dockerignore: stat novnc-index.html: file does not exist
My Dockerfile is in /src and I'm building from /src. I tried rebuilding with the --no-cache flag and with export DOCKER_BUILDKIT=1, but I'm still stuck with this problem.
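A couple of generic checks that usually narrow this error down (not specific to Gitpod): from the directory you pass as the build context, confirm the file really exists there and is not excluded, for example:
cd /src                        # the directory passed as the build context
ls novnc-index.html            # the file must exist directly in the context
cat .dockerignore              # and must not match any pattern listed here
docker build --no-cache -t x11-test .
If the file lives outside the build context, Docker cannot COPY it, because only the context directory is sent to the daemon.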
I'm trying to create a Docker image that also works in Singularity for the software Whippet (https://github.com/timbitz/Whippet.jl).
The Docker image works fine, but when I try to use it with Singularity (singularity-ce version 3.9.5), it is unable to write manifest_usage.toml (which is understandable, since in Singularity the command is executed by the invoking user rather than by root as in Docker).
Here's my Dockerfile:
FROM julia:bullseye
LABEL version=v1.6.1
RUN apt-get update && apt-get install -y git
RUN mkdir /depot
ENV JULIA_PATH=/usr/local/julia
ENV JULIA_DEPOT_PATH=/depot
ENV JULIA_PROJECT=/whippet
RUN mkdir /whippet/ && cd /whippet && \
git clone --depth 1 --branch v1.6.1 https://github.com/timbitz/Whippet.jl.git . && \
julia --project=/whippet -e 'using Pkg; Pkg.instantiate(); Pkg.precompile();'
RUN chmod 777 -R /depot/
ENV PATH="/whippet/bin/:${PATH}"
RUN whippet-quant.jl -h || echo "Done"
RUN whippet-index.jl -h || echo "Done"
RUN whippet-delta.jl -h || echo "Done"
ENTRYPOINT ["whippet-quant.jl"]
Any suggestions?
I tried using different locations for JULIA_DEPOT_PATH (including the default ~/.julia) and creating a new user, but I got the same issue.
The command
docker run cloxd/whippet:1.6.1 -h
works nicely, while the same command with Singularity raises the following error:
singularity run docker://cloxd/whippet:1.6.1
INFO: Using cached SIF image
Whippet v1.6.1 loading...
Activating project at `/whippet`
ERROR: LoadError: SystemError: opening file "/depot/logs/manifest_usage.toml": Read-only file system
Stacktrace:
  [1] systemerror(p::String, errno::Int32; extrainfo::Nothing)
    @ Base ./error.jl:176
  [2] #systemerror#80
    @ ./error.jl:175 [inlined]
  [3] systemerror
    @ ./error.jl:175 [inlined]
  [4] open(fname::String; lock::Bool, read::Nothing, write::Nothing, create::Nothing, truncate::Nothing, append::Bool)
    @ Base ./iostream.jl:293
  [5] open(f::Pkg.Types.var"#44#46"{String}, args::String; kwargs::Base.Pairs{Symbol, Bool, Tuple{Symbol}, NamedTuple{(:append,), Tuple{Bool}}})
    @ Base ./io.jl:382
  [6] write_env_usage(source_file::String, usage_filepath::String)
    @ Pkg.Types /usr/local/julia/share/julia/stdlib/v1.8/Pkg/src/Types.jl:487
  [7] Pkg.Types.EnvCache(env::Nothing)
    @ Pkg.Types /usr/local/julia/share/julia/stdlib/v1.8/Pkg/src/Types.jl:345
  [8] EnvCache
    @ /usr/local/julia/share/julia/stdlib/v1.8/Pkg/src/Types.jl:325 [inlined]
  [9] add_snapshot_to_undo(env::Nothing)
    @ Pkg.API /usr/local/julia/share/julia/stdlib/v1.8/Pkg/src/API.jl:1862
 [10] add_snapshot_to_undo
    @ /usr/local/julia/share/julia/stdlib/v1.8/Pkg/src/API.jl:1858 [inlined]
 [11] activate(path::String; shared::Bool, temp::Bool, io::Base.TTY)
    @ Pkg.API /usr/local/julia/share/julia/stdlib/v1.8/Pkg/src/API.jl:1700
 [12] activate(path::String)
    @ Pkg.API /usr/local/julia/share/julia/stdlib/v1.8/Pkg/src/API.jl:1659
 [13] top-level scope
    @ /whippet/bin/whippet-quant.jl:12
in expression starting at /whippet/bin/whippet-quant.jl:12
Finally I found a working solution: after the initialization, it's enough to prepend /tmp/ to JULIA_DEPOT_PATH, and Julia then uses /tmp/logs/manifest_usage.toml instead of the read-only /depot/logs/manifest_usage.toml.
For clarity, here's the Dockerfile:
FROM julia:bullseye
LABEL version=v1.6.1
RUN apt-get update && apt-get install -y git
RUN mkdir /depot
ENV JULIA_PATH=/usr/local/julia
ENV JULIA_DEPOT_PATH=/depot
ENV JULIA_PROJECT=/whippet
RUN mkdir /whippet/ && cd /whippet && \
git clone --depth 1 --branch v1.6.1 https://github.com/timbitz/Whippet.jl.git . && \
julia --project=/whippet -e 'using Pkg; Pkg.instantiate(); Pkg.precompile();'
RUN chmod 777 -R /depot/
ENV PATH="/whippet/bin/:${PATH}"
RUN whippet-quant.jl -h || echo "Done"
RUN whippet-index.jl -h || echo "Done"
RUN whippet-delta.jl -h || echo "Done"
ENV JULIA_DEPOT_PATH="/tmp/:${JULIA_DEPOT_PATH}"
ENTRYPOINT ["whippet-quant.jl"]
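To sanity-check the layered depot path in the built image, something like the following should print the combined value (a sketch; cloxd/whippet:1.6.1 is the image tag used above):
docker run --rm --entrypoint env cloxd/whippet:1.6.1 | grep JULIA_DEPOT_PATH
# expected: JULIA_DEPOT_PATH=/tmp/:/depot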
I am trying to use the sed command to replace variables during docker build. The first variable I am trying to replace is $DATABASE_HOST, whose value comes from my .env file. I have read that environment variables coming from the .env file are only available at run time, not at build time. Because of this, my sed command is not picking anything up.
Dockerfile:
# Dockerfile for Sphinx SE
# https://hub.docker.com/_/alpine/
FROM alpine:3.12
# https://sphinxsearch.com/blog/
ENV SPHINX_VERSION 3.4.1-efbcc65
# Install dependencies
RUN apk add --no-cache mariadb-connector-c-dev \
postgresql-dev \
wget \
sed
# set up and expose directories
RUN mkdir -pv /opt/sphinx/log /opt/sphinx/index
VOLUME /opt/sphinx/index
# http://sphinxsearch.com/downloads/sphinx-3.3.1-b72d67b-linux-amd64-musl.tar.gz
RUN wget http://sphinxsearch.com/files/sphinx-${SPHINX_VERSION}-linux-amd64-musl.tar.gz -O /tmp/sphinxsearch.tar.gz \
&& cd /opt/sphinx && tar -xf /tmp/sphinxsearch.tar.gz \
&& rm /tmp/sphinxsearch.tar.gz
# point to sphinx binaries
ENV PATH "${PATH}:/opt/sphinx/sphinx-3.4.1/bin"
RUN indexer -v
# redirect logs to stdout
RUN ln -sv /dev/stdout /opt/sphinx/log/query.log \
&& ln -sv /dev/stdout /opt/sphinx/log/searchd.log
# expose TCP port
EXPOSE 36307
EXPOSE 9306
# Copy base sphinx.conf file to container
VOLUME /opt/sphinx/conf
COPY ./sphinx.conf /opt/sphinx/conf/sphinx.conf
# Copy all docker sphinx.conf files
COPY ./configs/web-finder/docker/ /opt/sphinx/conf/
# look for and replace
RUN sed -i "s+DATABASE_HOST+${DATABASE_HOST}+g" /opt/sphinx/conf/sphinx.conf
# Concat the sphinx.conf files for all apps
# RUN cat /tmp/myconfig.append >> /etc/portage/make.conf && rm -f /tmp/myconfig.append
CMD indexer --all --config /opt/sphinx/conf/sphinx.conf \
&& searchd --nodetach --config /opt/sphinx/conf/sphinx.conf
.env file:
DATABASE_HOST=someport
DATABASE_USERNAME=someusername
DATABASE_PASSWORD=somepassword
DATABASE_SCHEMA=someschema
DATABASE_PORT=3306
SPHINX_PORT=36307
sphinx.conf:
searchd
{
listen = 127.0.0.1:$SPHINX_PORT
log = /opt/sphinx/searchd.log
query_log = /opt/sphinx/query.log
read_timeout = 5
max_children = 30
pid_file = /opt/sphinx/searchd.pid
seamless_rotate = 1
preopen_indexes = 1
unlink_old = 1
binlog_path = /opt/sphinx/
}
With Sphinx, the sphinx.conf file can be 'executable'. That is, it can actually be a shell script (or PHP, Perl, etc.!).
Assuming your .env file produces real (runtime!) environment variables inside the container (I'm not overly familiar with Docker), your sphinx.conf file could be...
#!/bin/sh
set -eu
cat <<EOF
searchd
{
listen = 127.0.0.1:$SPHINX_PORT
log = /opt/sphinx/searchd.log
query_log = /opt/sphinx/query.log
read_timeout = 5
max_children = 30
pid_file = /opt/sphinx/searchd.pid
seamless_rotate = 1
preopen_indexes = 1
unlink_old = 1
binlog_path = /opt/sphinx/
}
EOF
And because it is a shell script, the variables will automatically be expanded :)
It needs to be executable too!
RUN chmod a+x /opt/sphinx/conf/sphinx.conf
Then you don't need the sed command in the Dockerfile at all!
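Alternatively, if you really want the substitution to happen at build time, a different approach is to pass the value in as a build argument instead of reading it from the .env file (a sketch, not the executable-config trick above):
# in the Dockerfile, before the sed line
ARG DATABASE_HOST
RUN sed -i "s+DATABASE_HOST+${DATABASE_HOST}+g" /opt/sphinx/conf/sphinx.conf
and then build with something like: docker build --build-arg DATABASE_HOST=someport .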
When running a sh script in a Dockerfile, I got the following error:
./upload.sh: 5: ./upload.sh: sudo: not found
./upload.sh: 21: ./upload.sh: Bad substitution
sudo chmod 755 upload.sh # line 5
version=$(git rev-parse --short HEAD)
echo "version $version"
echo "Uploading file"
for path in $(find public/files -name "*.txt"); do
echo "path $path"
WORDTOREMOVE="public/"
echo "WORDTOREMOVE $WORDTOREMOVE"
# cause of the error
newpath=${path//$WORDTOREMOVE/} # Line 21
echo "new path $path"
url=http://localhost:3000/${newpath}
...
echo "Uploading file"
...
done
Dockerfile:
FROM node:10-slim
EXPOSE 3000 4001
WORKDIR /prod/code
...
COPY . .
RUN ./upload.sh
RUN npm run build
CMD ./DockerRun.sh
Any idea?
If anyone faces the same issue, here is how I fixed it:
chmod +x upload.sh
git update-index --chmod=+x upload.sh (mandatory if you pushed the file to a remote branch before changing its permissions)
The Docker image you are using (node:10-slim) has no sudo installed, because this image runs its processes as the root user:
docker run -it node:10-slim bash
root@68dcffceb88c:/# id
uid=0(root) gid=0(root) groups=0(root)
root@68dcffceb88c:/# which sudo
root@68dcffceb88c:/#
When your Dockerfile runs RUN ./upload.sh it will run:
sudo chmod 755 upload.sh
Using sudo inside the container fails because sudo is not installed. There is no need for sudo inside the container anyway, because all of the commands inside it already run as root.
Simply remove the sudo from line number 5.
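The 'Bad substitution' on line 21 is a separate issue: ${path//$WORDTOREMOVE/} is a bash pattern replacement that /bin/sh (dash) does not understand. A POSIX-friendly sketch of that line (assuming WORDTOREMOVE contains no characters special to sed) would be:
newpath=$(echo "$path" | sed "s|$WORDTOREMOVE||")
or give the script a #!/bin/bash shebang and make it executable so it is actually run by bash.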
If you wish to update the running PATH variable run:
PATH=$PATH:/directorytoadd/bin
This will append the directory "/directorytoadd/bin" to the current path.
I am trying to figure out how to run Elixir Phoenix on Heroku using Docker. I am pretty much using this Dockerfile: (https://github.com/jpiepkow/phoenix-docker/blob/master/Dockerfile)
# ---- Build Base Stage ----
FROM elixir:1.9.1-alpine AS app_builder
RUN apk add --no-cache=true \
gcc \
g++ \
git \
make \
musl-dev
RUN mix do local.hex --force, local.rebar --force
# ---- Build Deps Stage ----
FROM app_builder as deps
COPY mix.exs mix.lock ./
ARG MIX_ENV=prod
ENV MIX_ENV=$MIX_ENV
RUN mix do deps.get --only=$MIX_ENV, deps.compile
# ---- Build Release Stage ----
FROM deps as releaser
RUN echo $MIX_ENV
COPY config ./config
COPY lib ./lib
COPY priv ./priv
RUN mix release && \
cat mix.exs | grep app: | sed -e 's/ app: ://' | tr ',' ' ' | sed 's/ //g' > app_name.txt
# ---- Final Image Stage ----
FROM alpine:3.9 as app
RUN apk add --no-cache bash libstdc++ openssl
ENV CMD=start
COPY --from=releaser ./_build .
COPY --from=releaser ./app_name.txt ./app_name.txt
CMD ["sh","-c","./prod/rel/$(cat ./app_name.txt)/bin/$(cat ./app_name.txt) $CMD"]
I have pushed to Heroku and the app is running, but it blows up as soon as I try to do anything involving the database. The logs say the database needs to be migrated, which makes sense since I haven't done that yet. But now I realize I'm not sure how to do it, since mix is not available and I'm using Docker.
Does anyone know how to create and migrate a Heroku Postgres database when the app is deployed with Docker?
Create the lib/my_app/release.ex mentioned in https://hexdocs.pm/phoenix/releases.html#ecto-migrations-and-custom-commands
bash into Heroku release container (this is different behavior from, for example, a Rails app):
heroku run bash --app my_app
Two options:
Option 1:
./prod/rel/my_app/bin/my_app start_iex
Then run:
MyApp.Release.migrate
Option 2:
./prod/rel/my_app/bin/my_app eval "MyApp.Release.migrate"
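For a one-off migration without keeping an interactive shell open, a heroku run one-liner along the same lines should also work (a sketch, assuming the same release layout):
heroku run --app my_app -- ./prod/rel/my_app/bin/my_app eval "MyApp.Release.migrate"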
I'm trying to install nvm within a Dockerfile. It seems like it installs OK, but the nvm command is not working.
Dockerfile:
# Install nvm
RUN git clone http://github.com/creationix/nvm.git /root/.nvm;
RUN chmod -R 777 /root/.nvm/;
RUN sh /root/.nvm/install.sh;
RUN export NVM_DIR="$HOME/.nvm";
RUN echo "[[ -s $HOME/.nvm/nvm.sh ]] && . $HOME/.nvm/nvm.sh" >> $HOME/.bashrc;
RUN nvm ls-remote;
Build output:
Step 23/39 : RUN git clone http://github.com/creationix/nvm.git /root/.nvm;
---> Running in ca485a68b9aa
Cloning into '/root/.nvm'...
---> a6f61d486443
Removing intermediate container ca485a68b9aa
Step 24/39 : RUN chmod -R 777 /root/.nvm/
---> Running in 6d4432926745
---> 30e7efc5bd41
Removing intermediate container 6d4432926745
Step 25/39 : RUN sh /root/.nvm/install.sh;
---> Running in 79b517430285
=> Downloading nvm from git to '$HOME/.nvm'
=> Cloning into '$HOME/.nvm'...
* (HEAD detached at v0.33.0)
master
=> Compressing and cleaning up git repository
=> Appending nvm source string to /root/.profile
=> bash_completion source string already in /root/.profile
npm info it worked if it ends with ok
npm info using npm@3.10.10
npm info using node@v6.9.5
npm info ok
=> Installing Node.js version 6.9.5
Downloading and installing node v6.9.5...
Downloading https://nodejs.org/dist/v6.9.5/node-v6.9.5-linux-x64.tar.xz...
######################################################################## 100.0%
Computing checksum with sha256sum
Checksums matched!
Now using node v6.9.5 (npm v3.10.10)
Creating default alias: default -> 6.9.5 (-> v6.9.5 *)
/root/.nvm/install.sh: 136: [: v6.9.5: unexpected operator
Failed to install Node.js 6.9.5
=> Close and reopen your terminal to start using nvm or run the following to use it now:
export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh" # This loads nvm
---> 9f6f3e74cd19
Removing intermediate container 79b517430285
Step 26/39 : RUN export NVM_DIR="$HOME/.nvm";
---> Running in 1d768138e3d5
---> 8039dfb4311c
Removing intermediate container 1d768138e3d5
Step 27/39 : RUN echo "[[ -s $HOME/.nvm/nvm.sh ]] && . $HOME/.nvm/nvm.sh" >> $HOME/.bashrc;
---> Running in d91126b7de62
---> 52313e09866e
Removing intermediate container d91126b7de62
Step 28/39 : RUN nvm ls-remote;
---> Running in f13c1ed42b3a
/bin/sh: 1: nvm: not found
The command '/bin/sh -c nvm ls-remote;' returned a non-zero code: 127
The error:
Step 28/39 : RUN nvm ls-remote;
---> Running in f13c1ed42b3a
/bin/sh: 1: nvm: not found
The command '/bin/sh -c nvm ls-remote;' returned a non-zero code: 127
The end of my /root/.bashrc file looks like:
[[ -s /root/.nvm/nvm.sh ]] && . /root/.nvm/nvm.sh
Everything else in the Dockerfile works. Adding the nvm stuff is what broke it. Here is the full file.
I made the following changes to your Dockerfile to make it work:
First, replace...
RUN sh /root/.nvm/install.sh;
...with:
RUN bash /root/.nvm/install.sh;
Why? On Redhat-based systems, /bin/sh is a symlink to /bin/bash. But on Ubuntu, /bin/sh is a symlink to /bin/dash. And this is what happens with dash:
root@52d54205a137:/# bash -c '[ 1 == 1 ] && echo yes!'
yes!
root@52d54205a137:/# dash -c '[ 1 == 1 ] && echo yes!'
dash: 1: [: 1: unexpected operator
Second, replace...
RUN nvm ls-remote;
...with:
RUN bash -i -c 'nvm ls-remote';
Why? Because the default .bashrc for a user in Ubuntu contains (almost at the top):
# If not running interactively, don't do anything
[ -z "$PS1" ] && return
And the source-ing of nvm's scripts takes place at the bottom. So we need to make sure that bash is invoked interactively by passing the argument -i.
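You can see the difference directly in a Dockerfile (a sketch; nvm --version is just a convenient probe):
RUN bash -c 'nvm --version' || echo "not found: a non-interactive shell never sources nvm"
RUN bash -i -c 'nvm --version'   # works: the interactive shell reads .bashrc and loads nvm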
Third, you could skip the following lines in your Dockerfile:
RUN export NVM_DIR="$HOME/.nvm";
RUN echo "[[ -s $HOME/.nvm/nvm.sh ]] && . $HOME/.nvm/nvm.sh" >> $HOME/.bashrc;
Why? Because bash /root/.nvm/install.sh; will automatically do it for you:
[fedora@myhost ~]$ sudo docker run --rm -it 2a283d6e2173 tail -2 /root/.bashrc
export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh" # This loads nvm
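Putting the three changes together, the nvm-related portion of the Dockerfile would look roughly like this (a sketch based on the steps above):
RUN git clone http://github.com/creationix/nvm.git /root/.nvm
RUN chmod -R 777 /root/.nvm/
RUN bash /root/.nvm/install.sh
RUN bash -i -c 'nvm ls-remote'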
Installation of nvm on Ubuntu in a Dockerfile
In the case of Ubuntu 20.04, the following commands alone are enough:
FROM ubuntu:20.04
RUN apt update -y && apt upgrade -y && apt install wget bash -y
RUN wget -qO- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.1/install.sh | bash
RUN bash -i -c 'nvm ls-remote'
Hopefully it will work.
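If you also need to install and use a specific Node version during the build, the same bash -i trick applies (a sketch; the version number is only an example):
RUN bash -i -c 'nvm install 16 && nvm alias default 16'
RUN bash -i -c 'node --version && npm --version'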