Cannot run AppImages inside a rootless Docker container - docker

I have installed rootless Docker on Ubuntu 20.04 following https://docs.docker.com/engine/security/rootless/
I have downloaded the VSCodium AppImage from https://github.com/VSCodium/vscodium/releases/download/1.66.0/VSCodium-1.66.0-1648720116.glibc2.17-x86_64.AppImage
I have shared a host directory containing this AppImage with the rootless Docker container, but it doesn't run. When I manually install (apt-get install) any GUI package (e.g. firefox) inside the container, it runs successfully.
Output of the command 'docker-compose up vscodium':
Creating vscodium ... done
Attaching to vscodium
vscodium | codium: error while loading shared libraries: libnss3.so: cannot open shared object file: No such file or directory
vscodium exited with code 127
Content of the file docker-compose.yml:
version: "3"
services:
  vscodium:
    image: python:3.10.4-bullseye
    entrypoint: custom-docker-entrypoint.sh
    container_name: vscodium
    environment:
      - DISPLAY=${DISPLAY}
    volumes:
      - /tmp/.X11-unix:/tmp/.X11-unix:ro
      - $HOME/.Xauthority:$HOME/.Xauthority:ro
      - ./custom-docker-entrypoint.sh:/usr/local/bin/custom-docker-entrypoint.sh
      - ./appImages/VSCodium.AppImage:/ide/VSCodium.AppImage
    network_mode: host
Content of the file custom-docker-entrypoint.sh:
#!/bin/sh
# Make the mounted AppImage executable, then run it with
# --appimage-extract-and-run so it self-extracts instead of needing FUSE.
chmod a+x /ide/VSCodium.AppImage
/ide/VSCodium.AppImage --appimage-extract-and-run

A few notes on running AppImages inside Docker:
- AppImages require FUSE to run, which is usually not available or usable in Docker; --appimage-extract-and-run works around this.
- Alternatively, extract the AppImage contents on the host and mount that folder into your container.
- libnss3.so is missing: you will have to install it on the system that actually runs the AppImage (here, inside the container image). If it still doesn't work, report it to the AppImage author so they can include the library in the bundle. A sketch of this is shown below.
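Since the compose file uses python:3.10.4-bullseye as-is, one way to get rid of the libnss3.so error is to bake the missing libraries into a derived image. A minimal sketch, not from the original answer: the exact package list is an assumption (libnss3 provides libnss3.so; the others are common Electron/Chromium runtime dependencies that editor AppImages tend to need), so you may have to add more packages as further "error while loading shared libraries" messages appear.

# Hypothetical Dockerfile extending the base image from the compose file.
FROM python:3.10.4-bullseye

# libnss3 supplies libnss3.so; the rest are typical Electron runtime deps.
RUN apt-get update && apt-get install -y --no-install-recommends \
        libnss3 \
        libgtk-3-0 \
        libxss1 \
        libasound2 \
    && rm -rf /var/lib/apt/lists/*

COPY custom-docker-entrypoint.sh /usr/local/bin/custom-docker-entrypoint.sh
RUN chmod +x /usr/local/bin/custom-docker-entrypoint.sh
ENTRYPOINT ["custom-docker-entrypoint.sh"]

The vscodium service would then use build: . pointing at this Dockerfile instead of image: python:3.10.4-bullseye, so the entrypoint finds libnss3.so at load time.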

Related

Install a wheel package inside a docker container

I am creating an Airflow Docker container using the Docker image "puckel/docker-airflow".
I have created a docker-compose file that uses this image and mounts two volumes, one for DAGs and the other for the wheel package.
When I start the container and go to the Airflow UI it throws a "No module named 'custPkg'" error. So I exec into the container using the command
docker exec -ti <container_id> bash
and install the package using pip. After that I can use the package from a Python shell with
from custPkg.abc import Base
but it's still not working in Airflow.
The Airflow webserver, which even refreshes after some time, is still showing the same error on the terminal where I started the container using
docker-compose up
My docker-compose file looks like this:
version: "3"
services:
  webserver:
    image: puckel/docker-airflow:latest
    container_name: test_container
    volumes:
      - /home/ubuntu/dags1/:/usr/local/airflow/dags
      - /home/ubuntu/dist/:/usr/local/airflow/dist
    ports:
      - 8080:8080
    restart: always
NEW UPDATE:
I just restarted the container and it is working now, but I don't want to go into the container and run the exec command manually. Can I somehow do this using the docker-compose file only?
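One way to do this from the compose file alone (a sketch, not from the original thread) is to override the service's command so the wheel is installed from the mounted volume before the webserver starts. The wheel filename custPkg-0.1-py3-none-any.whl below is a placeholder, and the sketch assumes the puckel image's entrypoint will exec an arbitrary command passed to it:

version: "3"
services:
  webserver:
    image: puckel/docker-airflow:latest
    container_name: test_container
    volumes:
      - /home/ubuntu/dags1/:/usr/local/airflow/dags
      - /home/ubuntu/dist/:/usr/local/airflow/dist
    ports:
      - 8080:8080
    restart: always
    # Install the mounted wheel first, then start the webserver.
    # The wheel filename is hypothetical; use the real file in /usr/local/airflow/dist.
    command: bash -c "pip install --user /usr/local/airflow/dist/custPkg-0.1-py3-none-any.whl && airflow webserver"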

Docker not mapping changes from local project to the container in Windows

I am trying to use a Docker volume/bind mount so that I don't need to rebuild my project after every small change. I do not get any error, but changes in the local files are not visible in the container, so I still have to rebuild the project to get a new filesystem snapshot.
The following seemed to work for some people, so I have tried restarting Docker and resetting credentials at Docker Desktop --> Settings --> Shared Drives.
Here is my docker-compose.yml file:
version: '3'
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile.dev
    ports:
      - "3000:3000"
    volumes:
      - /app/node_modules
      - .:/app
I have tried through the Docker CLI too, but the problem persists:
docker build -f Dockerfile.dev .
docker run -p 3000:3000 -v /app/node_modules -v ${pwd}:/app image-id
Windows does copy the files in the current directory to the container, but they are not in sync.
I am using Windows 10 PowerShell and Docker version 18.09.2.
UPDATE:
I have checked the container contents using the command
docker exec -t -i container-id sh
and printed file contents using the command
cat filename
From this it is clear that the files the container references have changed/updated, but I still don't understand why I have to restart the container to see the changes in the browser. Shouldn't they be apparent after just refreshing the tab?
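If the bind mount itself is updating (as the cat check shows), a common culprit on Windows is that the dev server's file watcher never fires, so hot reload is not triggered. A sketch of one possible fix, assuming the project uses a chokidar-based dev server (e.g. create-react-app): setting CHOKIDAR_USEPOLLING makes the watcher poll for changes instead of relying on filesystem events, which often do not propagate through Windows bind mounts.

version: '3'
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile.dev
    ports:
      - "3000:3000"
    environment:
      # Poll for file changes instead of waiting for inotify events,
      # which Windows bind mounts frequently fail to deliver.
      - CHOKIDAR_USEPOLLING=true
    volumes:
      - /app/node_modules
      - .:/app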

Ubuntu - installing (Jenkins) Docker containers fails due to "creating mount source path: mkdir ... read-only file system" errors

I want to install Jenkins via Docker in an Ubuntu virtual machine (Oracle VM VirtualBox).
When starting 'docker-compose up' I get the following errors:
ERROR: for dockercompose_postgres_1  Cannot start service postgres: error while creating mount source path '/var/postgres-data': mkdir /var/postgres-data: read-only file system
Starting dockercompose_jenkins_1 ... error
ERROR: for dockercompose_jenkins_1  Cannot start service jenkins: error while creating mount source path '/var/jenkins_home': mkdir /var/jenkins_home: read-only file system
ERROR: for jenkins  Cannot start service jenkins: error while creating mount source path '/var/jenkins_home': mkdir /var/jenkins_home: read-only file system
ERROR: for postgres  Cannot start service postgres: error while creating mount source path '/var/postgres-data': mkdir /var/postgres-data: read-only file system
ERROR: Encountered errors while bringing up the project.
The context:
I am logged in as 'osboxes.org' (the same name as the Ubuntu image provider).
docker-compose is started as 'sudo docker-compose up'.
The permissions of the folder /var are: drwxrwxrwx 14 root root 4096 Sep 9 08:48 var
At first /var/postgres-data and /var/jenkins_home do not exist; the issue is there.
After creating both directories with 777 permissions, the same issue is there.
The Ubuntu VM is an osboxes.org Ubuntu virtual machine in Oracle VM VirtualBox on Windows.
Suggested was 'sudo mount -o remount,rw /'. No change.
Suggested was 'sudo mount -o remount,rw /var'; then I get this warning: mount: /var: mount point not mounted or bad option.
Part of the docker-compose.yml file is:
version: '2'
services:
  jenkins:
    image: jenkins:latest
    ports:
      - "8080:8080"
      - "50000:50000"
    networks:
      - jenkins
    volumes:
      - /var/jenkins_home:/var/jenkins_home
      - /var/run/docker.sock:/var/run/docker.sock
  postgres:
    image: postgres:9.6
    networks:
      - jenkins
    environment:
      POSTGRES_USER: sonar
      POSTGRES_PASSWORD: sonarpasswd
    volumes:
      - /var/postgres-data:/var/lib/postgresql/data
ETC ETC ETC
It was suggested that (after installing the Ubuntu VM and starting it) typing just 'docker' gives you advice on installing Docker. Apparently, this is incorrect: there is a correct procedure for installing Docker on Ubuntu.
Please refer to the official Docker installation instructions for Ubuntu. They will get you the newest/right version of Docker and prevent nasty errors like the ones in the question above.
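For reference, a minimal sketch of that procedure using Docker's convenience script, one of the officially documented install paths (https://docs.docker.com/engine/install/ubuntu/). If Docker was previously installed from a snap or distro package, remove it first; a snap-confined Docker is a common cause of "read-only file system" errors when the daemon tries to create mount source paths:

# Remove distro/snap packages that may conflict (ignore errors if absent).
sudo apt-get remove -y docker docker-engine docker.io containerd runc
sudo snap remove docker

# Install Docker Engine via the official convenience script.
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh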

Accessing Volumes in docker

I want to access an external directory 'web-app' in my docker container.
Consider the directory structure below:
MyDcocker
|-dockerFile
|-web-app
  |-flask-web.py
I have the following lines in my dockerfile:
VOLUME ["web-app"]
# Run flask-web (API) file
CMD ["python3","web-app/flask-web.py"]
When running the image, I get the error:
python3: can't open file 'web-app/flask-web.py': [Errno 2] No such file or directory
I believe the directory has not been mounted properly. How do I solve this ?
The Dockerfile describes how the image is built, not how the container will interact with your host.
To mount a directory from your host into your container, you have two solutions:
With the docker command line:
docker run -v $(pwd)/web-app:/var/lib/web-app/ dck-image-name
With docker-compose:
version: '2'
services:
  myservice:
    image: dck-image-name
    volumes:
      - ./web-app:/var/lib/web-app/
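In either case, the CMD must point at the path inside the container where the directory is mounted, not the host-relative path that caused the Errno 2. A small sketch, using the /var/lib/web-app/ target from the examples above:

# Run the Flask app from the container-side mount point.
CMD ["python3", "/var/lib/web-app/flask-web.py"]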

docker-compose with a war via volume fails on DigitalOcean Debian but not on other Debian boxes (my home box and work box)

On DigitalOcean:
If I use docker-compose with a Tomcat container that already has the war I am trying to use in webapps, compose works.
Dockerfile for the Tomcat container with a built-in war (works):
FROM clegge/java7
MAINTAINER christopher.j.legge@gmail.com
RUN wget http://www.eu.apache.org/dist/tomcat/tomcat-7/v7.0.65/bin/apache-tomcat-7.0.65.tar.gz
RUN tar -xzf apache-tomcat-7.0.65.tar.gz
RUN mv apache-tomcat-7.0.65 /opt/tomcat7
RUN rm apache-tomcat-7.0.65.tar.gz
RUN echo "export CATALINA_HOME=\"/opt/tomcat7\"" >> ~/.bashrc
RUN sed -i '/<\/tomcat-users>/ i\<user username="test" password="test~" roles="admin,manager-gui,manager-status"/>' /opt/tomcat7/conf/tomcat-users.xml
VOLUME /opt/tomcat7/webapps
ADD app.war /opt/tomcat7/webapps/app.war
EXPOSE 8080
CMD /opt/tomcat7/bin/startup.sh && tail -f /opt/tomcat7/logs/catalina.out
If I use that container with docker-compose, everything works great.
If I try to push the war file in via my docker-compose.yml, the container goes up and never inflates the war:
tomcat:
  image: tomcat:8.0
  ports:
    - "8080:8080"
  volumes:
    - base-0.2.war:/usr/local/tomcat/webapps/base.war
  links:
    - postgres
postgres:
  image: clegge/postgres
  ports:
    - "5432:5432"
  environment:
    - DB_USER_NAME=test_crud
    - DB_PASSWORD=test_crud
    - DB_NAME=base_db
There is no error in the logs.
This also fails on my GoDaddy box. I have tried every flavor of Linux on DigitalOcean; nothing works. I think it is a problem because the DigitalOcean instance is a container itself. Any thoughts?
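One thing worth checking (an observation, not from the original thread): in the compose short volume syntax, a source with no path prefix, like base-0.2.war, is treated as a named volume rather than a file in the compose directory, so Tomcat sees an empty directory instead of the war. Prefixing the source with ./ makes it a bind mount of the local file:

tomcat:
  image: tomcat:8.0
  ports:
    - "8080:8080"
  volumes:
    # './' makes this a bind mount of the local file instead of a named volume.
    - ./base-0.2.war:/usr/local/tomcat/webapps/base.war
  links:
    - postgres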
