My problem is the following:
I'm trying to get the console in IntelliJ for a Docker container to use the right encoding. Right now it looks as follows:
While in Docker itself it looks like this:
If I run a simple main in IntelliJ, the output is as follows:
I changed every option I found in IntelliJ to UTF-8, and still nothing changed. It's just weird that it works in Docker and in a normal console, just not in the Docker console in IntelliJ.
The Dockerfile looks like this:
FROM fabric8/java-alpine-openjdk11-jre:latest
ENV JAVA_OPTIONS="-Dquarkus.http.host=0.0.0.0 -Djava.util.logging.manager=org.jboss.logmanager.LogManager"
ENV AB_ENABLED=jmx_exporter
#ENV JAVA_TOOL_OPTIONS = "-Dfile.encoding=UTF8"
# Be prepared for running in OpenShift too
RUN adduser -G root --no-create-home --disabled-password 1001 \
&& chown -R 1001 /deployments \
&& chmod -R "g+rwX" /deployments \
&& chown -R 1001:root /deployments
COPY target/lib/* /deployments/lib/
COPY target/*-runner.jar /deployments/app.jar
EXPOSE 8080
# run with user 1001
USER 1001
ENTRYPOINT [ "/deployments/run-java.sh" ]
The commented-out line was one of my attempts at fixing it, as was adding the option to JAVA_OPTIONS. Neither helped (even without the option added, "file.encoding" returns UTF-8). And since it works perfectly in Docker, I don't think the problem is in the file.
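To compare what the JVM actually sees in each environment, one can check from the shell (a sketch; "myapp" is a placeholder for the real container name):
# Locale as seen by the container shell ("myapp" is a placeholder)
docker exec -it myapp sh -c 'echo $LANG; locale'
# Effective JVM encoding properties (file.encoding, sun.jnu.encoding)
docker exec -it myapp java -XshowSettings:properties -version 2>&1 | grep -i encoding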
What else can I try?
IntelliJ IDEA -> Help -> Edit Custom VM Options
and add the line:
-Dfile.encoding=UTF-8
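For reference, that menu action edits the VM options override for the IDE's own JVM (idea64.vmoptions on recent 64-bit builds), which is the process that renders the Docker console; it takes effect after restarting the IDE. A sketch of the relevant file content:
# idea64.vmoptions -- options for the JVM running the IDE itself, not your app
-Dfile.encoding=UTF-8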
Mine is a bit of a peculiar situation: I created a Dockerfile that "works", apart from some problems.
Here is a "working" version:
ARG IMGVERS=latest
FROM bensuperpc/tinycore:${IMGVERS}
LABEL maintainer "Vinnie Costante <****#gmail.com>"
ARG DOWNDIR=/tmp/download
ARG INSTDIR=/opt/vscodium
ARG REPOAPI="https://api.github.com/repos/VSCodium/vscodium/releases/latest"
ENV LANG=C.UTF-8 LC_ALL=C PATH="${PATH}:${INSTDIR}/bin/"
RUN tce-load -wic Xlibs nss gtk3 libasound libcups python3.9 tk8.6 \
&& rm -rf /tmp/tce/optional/*
RUN sudo ln -s /lib /lib64 \
&& sudo ln -s /usr/local/etc/fonts /etc/fonts \
&& sudo mkdir -p ${DOWNDIR} ${INSTDIR} \
&& sudo chown -R tc:staff ${DOWNDIR} ${INSTDIR}
#COPY VSCodium-linux-x64-1.57.1.tar.gz ${DOWNDIR}/
RUN wget http://192.168.43.6:8000/VSCodium-linux-x64-1.57.1.tar.gz -P ${DOWNDIR}
RUN tar xvf ${DOWNDIR}/VSCodium*.gz -C ${INSTDIR} \
&& rm -rf ${DOWNDIR}
CMD ["codium"]
The issues are these:
1. Starting the image with the command below, vscodium does not start; but if I enter the shell instead (appending /bin/ash to the docker run) and then run codium manually, vscodium starts. I have tried many ways, even changing the entrypoint, and the result is always the same. Yet if I put any other graphical program (like firefox) as the argument of the CMD instruction in the Dockerfile, everything works as it should.
docker run -it --rm \
--net=host \
--env="DISPLAY=unix${DISPLAY}" \
--workdir /home/tc \
--volume="$HOME/.Xauthority:/root/.Xauthority:rw" \
--name tc \
tinycodium
2. The last two versions of codium (1.58.0 and 1.58.1) don't work at all in Docker, but they start normally on the same distro when not containerized. I tried installing other dependencies, but nothing worked. Right now I don't know how to figure out what's wrong with these two new versions.
3. I don't know how to set up a volume to save codium's data. I tried something like --volume=/home/vinnie/docker:/home/tc, but there are always problems with user/group permissions. I have also tried running the container as my user by adding it to the docker group, but there is always a mess with permissions. If someone could explain how to proceed (see the sketch after this list), the directories I want to save are these:
/home/tc/.vscode-oss
/home/tc/.cache/mesa_shader_cache
/home/tc/.config/VSCodium
/home/tc/.config/glib-2.0/settings
/home/tc/.local/share
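One approach to the volume permissions, sketched under the assumption that the in-container user tc has UID 1001 and group staff has GID 50 (verify first): pre-create the host directory, hand it to that UID/GID, and mount one volume per directory:
# Check the UID/GID of the container user (assumption: tc is 1001:50)
docker run --rm tinycodium id
# Pre-own the host directory so the container user can write to it
mkdir -p /home/vinnie/docker/vscode-oss
sudo chown -R 1001:50 /home/vinnie/docker
# Then add one mount per directory to the docker run command, e.g.:
#   --volume=/home/vinnie/docker/vscode-oss:/home/tc/.vscode-oss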
Try running codium --verbose and see if the container starts.
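For example, by overriding the CMD with the run command from the question (a sketch; flags copied from the docker run above):
docker run -it --rm \
    --net=host \
    --env="DISPLAY=unix${DISPLAY}" \
    --workdir /home/tc \
    --volume="$HOME/.Xauthority:/root/.Xauthority:rw" \
    tinycodium codium --verbose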
I'm trying to deploy metricbeat on OpenShift, and after many hours of work I cannot get it to work.
The same image runs normally on Docker.
Thank you.
#Dockerfile
FROM docker.elastic.co/beats/metricbeat:7.2.0
COPY metricbeat.yml /usr/share/metricbeat/metricbeat.yml
USER root
RUN mkdir /var/log/metricbeat \
&& chown metricbeat /usr/share/metricbeat/metricbeat.yml \
&& chown metricbeat /usr/share/metricbeat/metricbeat \
&& chmod go-w /usr/share/metricbeat/metricbeat.yml \
&& chown metricbeat /var/log/metricbeat
COPY entrypoint.sh /usr/local/bin/custom-entrypoint
RUN chmod +x /usr/local/bin/custom-entrypoint \
&& chown metricbeat /usr/local/bin/custom-entrypoint
ENV PATH="/usr/share/metricbeat:${PATH}"
USER metricbeat
ENTRYPOINT [ "/usr/local/bin/custom-entrypoint" ]
#entrypoint.sh
#!/usr/bin/env bash
/usr/share/metricbeat/metricbeat -e --strict.perms=false -c /usr/share/metricbeat/metricbeat.yml
Error: /usr/local/bin/custom-entrypoint: line 2: /usr/share/metricbeat/metricbeat: Permission denied
The Dockerfile switches to the root user to set up the directory structure and permissions while building the image, and finally switches to USER metricbeat so the container runs as that user.
However, by default OpenShift runs containers with a user with a random UID (from a preconfigured range) that belongs to the root group.
One option is to relax the security policy, as Graham Dumpleton suggested.
To make it work without relaxing security, I suggest changing ownership as follows:
RUN chown -R metricbeat:root /usr/share/metricbeat \
&& chmod -R 0775 /usr/share/metricbeat
...or incorporate the above two commands into the first RUN instruction.
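For example, the first RUN could become something like this (a sketch, keeping the original paths; group ownership plus group permissions let OpenShift's random-UID user, which is in the root group, read and execute everything):
RUN mkdir /var/log/metricbeat \
    && chown -R metricbeat:root /usr/share/metricbeat /var/log/metricbeat \
    && chmod -R 0775 /usr/share/metricbeat \
    && chmod go-w /usr/share/metricbeat/metricbeat.yml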
I think I have a dilemma. I am trying to create a Dockerfile to reproduce a long and complicated installation process (of ROS) so that my students can get it running with less headache.
I am combining various provided scripts with manual steps that are documented. The manual steps often say to use sudo, but I am told that doing sudo inside a Dockerfile is to be avoided. So I moved those steps before the USER command in the Dockerfile, because I am told that commands there run as root. As a result, however, the files and directories created are owned by root, and I believe subsequent steps are failing.
I have two choices, I think: move the commands to after the USER command and include sudo, or try to make the install scripts create directories and files with the right ownership. Of course, a priori I don't know what files and directories are going to be created.
Here is my Dockerfile (actually one of many I have been experimenting with). If you see any other things that need to be improved or fixed, please let me know!
FROM ubuntu:16.04
# create non-root user
ENV USERNAME ros
RUN adduser --ingroup sudo --disabled-password --gecos "" --shell /bin/bash --home /home/$USERNAME $USERNAME
RUN bash -c 'echo $USERNAME:ros | chpasswd'
ENV HOME /home/$USERNAME
RUN apt-get update && apt-get install --assume-yes wget sudo && \
wget https://raw.githubusercontent.com/ROBOTIS-GIT/robotis_tools/master/install_ros_kinetic.sh && \
chmod 755 ./install_ros_kinetic.sh && \
bash ./install_ros_kinetic.sh
RUN apt-get install --assume-yes ros-kinetic-joy ros-kinetic-teleop-twist-joy ros-kinetic-teleop-twist-keyboard ros-kinetic-laser-proc ros-kinetic-rgbd-launch ros-kinetic-depthimage-to-laserscan ros-kinetic-rosserial-arduino ros-kinetic-rosserial-python ros-kinetic-rosserial-server ros-kinetic-rosserial-client ros-kinetic-rosserial-msgs ros-kinetic-amcl ros-kinetic-map-server ros-kinetic-move-base ros-kinetic-urdf ros-kinetic-xacro ros-kinetic-compressed-image-transport ros-kinetic-rqt-image-view ros-kinetic-gmapping ros-kinetic-navigation ros-kinetic-interactive-markers
USER $USERNAME
WORKDIR /home/$USERNAME
RUN cd /home/$USERNAME/catkin_ws/src/ && \
git clone https://github.com/ROBOTIS-GIT/turtlebot3_msgs.git && \
git clone https://github.com/ROBOTIS-GIT/turtlebot3.git && \
git clone https://github.com/ROBOTIS-GIT/turtlebot3_simulations.git
# add catkin env
RUN echo 'source /opt/ros/kinetic/setup.bash' >> /home/$USERNAME/.bashrc
RUN echo 'source /home/ros/catkin_ws/devel/setup.bash' >> /home/$USERNAME/.bashrc
# RUN . /home/ros/.bashrc && \
# cd /home/$USERNAME/catkin_ws && \
# catkin_make
USER $USERNAME
ENTRYPOINT /bin/bash
It would be interesting, for my own information, to learn why sudo should be avoided in containers.
Historically we have used Docker to automate build, test and deploy processes in our team, and we have always tried to write Dockerfiles that stay as close as possible to the original process.
Let's say you build some app on your host and launch some commands with sudo and some without: we managed to create Dockerfiles that match those steps exactly. The positive outcome is that you are no longer obliged to write READMEs on how to build the code; you just supply the Dockerfile, and whenever someone wants to repeat all the steps in a non-container environment, they simply follow (copy/paste) the commands from the file.
So my proposal is: in the Dockerfile, install packages first, then switch to the user and proceed with all remaining steps, using sudo where necessary. You will end up with all artifacts owned by the user, not root.
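A minimal sketch of that layout, reusing the ROS install script from the question above (the passwordless-sudo line is my addition, not part of the original setup; the script itself calls sudo internally):
FROM ubuntu:16.04
RUN apt-get update && apt-get install --assume-yes sudo wget
# Create the build user and let it sudo without a password (assumption)
RUN adduser --disabled-password --gecos "" ros \
    && echo 'ros ALL=(ALL) NOPASSWD:ALL' > /etc/sudoers.d/ros
USER ros
WORKDIR /home/ros
# From here on, run the documented steps verbatim, sudo included;
# files created without sudo stay owned by "ros", not root.
RUN wget https://raw.githubusercontent.com/ROBOTIS-GIT/robotis_tools/master/install_ros_kinetic.sh \
    && chmod 755 ./install_ros_kinetic.sh \
    && bash ./install_ros_kinetic.sh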
UPD
I found the original discussion and this one. So it sounds like you should choose the best approach based on your particular case and needs.
Edit: Solved. It was a typo.
I have a Dockerfile that successfully creates a virtualenv using virtualenvwrapper (along with setting up a heap of "standard" settings/packages in our normal environment). I am using the resulting image as a "base image" for further use. All good so far. However, the following Dockerfile (based on the first image, "base_image_14.04") falls down at the last line:
FROM base_image_14.04
USER root
RUN DEBIAN_FRONTEND=noninteractive \
apt-get update && apt-get install -y \
libproj0 libproj-dev \
libgeos-c1v5 libgeos-dev \
libjpeg62 libjpeg-dev \
zlib1g zlib1g-dev \
libfreetype6 libfreetype6-dev \
libgdal20 libgdal-dev \
&& rm -rf /var/lib/apt/lists
USER webdev
RUN ["/bin/bash", "-ic", "mkproject maproxy"]
EXPOSE 80
WORKDIR $PROJECT_HOME/mapproxy
ADD ./requirements.txt .
RUN ["/bin/bash", "-ic", "workon mapproxy && pip install -r requirements.txt"]
The "mkproject mapproxy" works fine. If I comment out the last line it builds successfully and I can spin up the container and run "workon mapproxy" manually, not a problem. But when I try and build with the last line, it gives a workon error:
ERROR: Environment 'mapproxy' does not exist. Create it with 'mkvirtualenv mapproxy'.
workon is being called, but for some reason it can't find the mapproxy virtualenv.
WORKON_HOME & PROJECT_HOME both exist (defined in the parent image) and point to the correct locations (and are used successfully by "mkproject mapproxy").
So why is workon returning an error when the mapproxy virtualenv exists? The same error happens when I isolate that last line into a third Dockerfile building on the second.
Solved: it was a simple typo, mkproject maproxy instead of mapproxy. :sigh:
I am trying to build a Docker image and am running into similar problems.
The first question is: why use a virtualenv in Docker at all? The main reason, in a nutshell, is to minimize the effort of migrating an existing, working setup into a Docker container. I will eventually use docker-compose, but I wanted to start by getting my feet wet with it all in a single Docker container.
In my first attempt I installed almost everything with apt-get, including uwsgi, and installed my app "globally" with pip3. The app has command-line functionality and a separate Flask web app, hence the need for uwsgi. The command-line functionality works, but when I make a request to the Flask app, uwsgi/python has a problem with the locale: Fatal Python error: Py_Initialize: Unable to get the locale encoding and ImportError: No module named 'encodings'.
I have stripped away all my app specific additions to narrow down the problem. This is the Dockerfile I'm using:
# Docker image definition for testing
FROM ubuntu:xenial
# Create a user
RUN useradd -G sudo -ms /bin/bash tester
RUN echo 'tester:password' | chpasswd
WORKDIR /home/tester
# Skipping apt-get update to save some build time. Some are kept
# to insure they are the same as on host setup.
RUN apt-get install -y python3 python3-dev python3-pip \
virtualenv virtualenvwrapper sudo nano && \
apt-get clean -qy
# After above, can we use those installed in rest of Dockerfile?
# Yes, but not always, such as with virtualenvwrapper. What about
# virtualenv? How do you "source" the script? Doesn't appear to be
# installed, as bash complains "source needs a single parameter"
ENV VIRTUALENVWRAPPER_PYTHON /usr/bin/python3
ENV VIRTUALENVWRAPPER_VIRTUALENV /usr/bin/virtualenv
RUN ["/bin/bash", "-c", "source", "/usr/share/virtualenvwrapper/virtualenvwrapper.sh"]
# Create a virtualenv so uwsgi can find locale
# RUN mkdir /home/tester/.virtualenv && virtualenv -p`which python3` /home/bts_tools/.virtualenv/bts_tools
RUN mkvirtualenv -p`which python3` bts_tools && \
workon bts_tools && \
pip3 --disable-pip-version-check install --upgrade bts_tools
USER tester
ENTRYPOINT ["/bin/bash"]
CMD ["--login"]
The build fails on the line where I try to source the virtualenvwrapper script. Bash complains that source needs an argument, the file to be sourced. So I comment out those RUN lines and it builds without error. When I run the resulting container I see all the additions virtualenvwrapper makes to the environment (you can see them all by executing the set command without any args), and the script to be sourced is there too.
So my question is: why doesn't Docker find them? How does the Docker build process work if the results of previous RUNs or ENVs aren't applied for subsequent use in the Dockerfile? I know some things are applied and work; for example, if you apt-get nginx you can refer to /etc/nginx or alter things under that folder. You can create a user and set its password, or cd into its home folder. If I move the WORKDIR before the RUN useradd -G, I see a warning from useradd that the home folder already exists. I tried to use the time program to measure how long various things in the Dockerfile take, and Docker complains it can't find 'time'.
So what exactly is going on? I have spent the last 3 days trying to figure this out. It just shouldn't be this difficult. What am I missing?
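For what it's worth, a sketch of what seems to go wrong with that RUN: in the JSON (exec) form, each array element is a separate argument, so bash -c receives only "source" as its command string and treats the script path as $0. And because every RUN starts a fresh shell, functions like mkvirtualenv exist only in a RUN that sources the script itself:
# Exec form needs the whole command in one string; the source and the
# commands that depend on it must share the same RUN (same shell):
RUN ["/bin/bash", "-c", "source /usr/share/virtualenvwrapper/virtualenvwrapper.sh && mkvirtualenv -p$(which python3) bts_tools"]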
Parts of the bts_tools Flask app worked when I wasn't using virtualenvs. Most of the app didn't work, though, and the issue was this locale problem. Since everything works on the host outside of Docker, and after trying to alter PATH, PYTHONHOME and PYTHONPATH in my uwsgi start script to overcome the dreaded "locale encoding" fatal error, I decided to replicate the host setup as closely as possible, since the host didn't have the locale issue. When I have had that problem before, I could fix it by running dpkg-reconfigure python3 or with changes to PATH or ENV settings. If you google the problem you'll see that many people have difficulties with Python and locale. It's almost reason enough to avoid using Python!
I posted about the locale issue elsewhere, in case it helps.
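On that note, a commonly suggested fix for the Py_Initialize locale error on ubuntu:xenial images is to generate a UTF-8 locale in the image and point the usual variables at it (a sketch; whether it cures this particular uwsgi setup is an assumption):
# Generate a UTF-8 locale and make it the image-wide default
RUN apt-get install -y locales && locale-gen en_US.UTF-8
ENV LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8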
I'm working with Hugo.
I'm trying to run it inside a Docker container so that people can easily manage content.
My first task is to get Hugo running and let people view the site locally.
Here's my Dockerfile:
FROM alpine:3.3
RUN apk update && apk upgrade && \
apk add --no-cache go bash git openssh && \
mkdir -p /aws && \
apk -Uuv add groff less python py-pip && \
pip install awscli && \
apk --purge -v del py-pip && \
rm /var/cache/apk/* && \
mkdir -p /go/src /go/bin && chmod -R 777 /go
ENV GOPATH /go
ENV PATH /go/bin:$PATH
RUN go get -v github.com/spf13/hugo
RUN git clone http://mygitrepo.com /app
WORKDIR /app
EXPOSE 1313
ENTRYPOINT ["hugo","server"]
I'm checking out the site repo and then running Hugo with hugo server.
I'm then running this container via:
docker run -d -p 1313:1313 --name app app
This reports that everything is starting OK; however, when I try to browse locally on localhost:1313, I see nothing.
Any ideas where I'm going wrong?
UPDATE
docker ps gives me:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
9e1f12849044 app "hugo server" 16 minutes ago Up 16 minutes 0.0.0.0:1313->1313/tcp app
And docker logs 9e1 gives me:
Started building sites ...
Built site for language en:
0 draft content
0 future content
0 expired content
25 pages created
0 non-page files copied
0 paginator pages created
0 tags created
0 categories created
total in 64 ms
Watching for changes in /ltec/{data,content,layouts,static,themes}
Serving pages from memory
Web Server is available at http://localhost:1313/ (bind address 127.0.0.1)
Press Ctrl+C to stop
I had the same problem, but I followed this tutorial http://ahmedalani.com/post/so-recursive-it-hurts/, which says to use the --bind parameter of the hugo server command.
Adding that parameter with the IP 0.0.0.0 gives us --bind=0.0.0.0.
It works for me. I think this is natural behavior for every container: localhost is scoped to the container itself, but binding to 0.0.0.0 makes the server visible to the main host.
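Applied to the Dockerfile above, only the last line changes (a sketch):
# Bind to all interfaces so the server is reachable through the published port
ENTRYPOINT ["hugo", "server", "--bind=0.0.0.0"]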
This happens when Docker is running inside a VM via docker-machine (e.g. Docker Toolbox): you need to navigate to the docker-machine IP instead of localhost.
curl $(docker-machine ip):1313
Delete EXPOSE 1313 in your Dockerfile. See the Dockerfile reference.