xpra and sudo / what is the difference between sudo and "being a user" - docker

I'm trying to create a Docker image with xpra and Chrome in it. As I also need to be able to use this base image to install further software, I don't change the user at the end of the Dockerfile.
During a build I use this image for 2 purposes:
- build the final image
- use the base image to run xpra and chrome for build purposes
For the first you need to be root; for the second you need to be the chrome user. I tried to solve this with sudo (e.g. sudo -i -u chrome xpra ...), but this causes problems. If I change the base image to be the chrome user (USER chrome in the Dockerfile), it works fine.
The full error I get:
2018-07-02 11:23:39,828 Error: cannot start the desktop server
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/xpra/scripts/server.py", line 1011, in run_server
app.setup()
File "/usr/lib/python2.7/dist-packages/xpra/server/server_base.py", line 119, in setup
c.setup(self)
File "/usr/lib/python2.7/dist-packages/xpra/server/mixins/audio_server.py", line 55, in setup
self.init_pulseaudio()
File "/usr/lib/python2.7/dist-packages/xpra/server/mixins/audio_server.py", line 117, in init_pulseaudio
os.mkdir(self.pulseaudio_private_dir, 0o700)
OSError: [Errno 2] No such file or directory: '/run/user/1000/xpra/pulse-:0'
2018-07-02 11:23:39,828 [Errno 2] No such file or directory: '/run/user/1000/xpra/pulse-:0'
The /run/user directory doesn't exist in either of the images.

I worked out the issue in the end, and I know why I missed it in the first place (I have only been able to work on this in a fragmented way). The Dockerfile contained: ENV XDG_RUNTIME_DIR=/tmp
This redirects the runtime directory, but sudo actually strips that environment variable, so xpra fell back to the non-existent /run/user/1000.
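If the image has to stay root, one workaround is to pass the variable through explicitly. A minimal sketch, assuming the chrome user and the /tmp runtime directory from the question (the xpra subcommand and display number are placeholders):
# sudo -i starts a login shell, which resets XDG_RUNTIME_DIR;
# re-export it explicitly for the chrome user instead:
sudo -u chrome env XDG_RUNTIME_DIR=/tmp xpra start :100
# newer sudo versions can also keep the variable with --preserve-env=XDG_RUNTIME_DIR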

Related

Dockerfile did not create the directory

I am trying to replicate Machine Learning Inference using AWS Lambda and Amazon EFS. I was able to deploy the project; however, inference failed because the machine learning model could not be found. I accessed CloudWatch and got the following output:
[ERROR] FileNotFoundError: Missing /mnt/ml/models/craft_mlt_25k.pth and downloads disabled
Traceback (most recent call last):
  File "/var/task/app.py", line 23, in lambda_handler
    model_cache[languages_key] = easyocr.Reader(language_list, model_storage_directory=model_dir, user_network_directory=network_dir, gpu=False, download_enabled=False)
  File "/var/lang/lib/python3.8/site-packages/easyocr/easyocr.py", line 88, in __init__
    detector_path = self.getDetectorPath(detect_network)
  File "/var/lang/lib/python3.8/site-packages/easyocr/easyocr.py", line 246, in getDetectorPath
    raise FileNotFoundError("Missing %s and downloads disabled" % detector_path)
Then I noticed that not even the directory that was supposed to store the models had been created in the S3 bucket.
The Dockerfile has the following command: RUN mkdir -p /mnt/ml, but this directory does not exist in my S3 bucket.
Is it possible to create the directories and upload the EasyOCR model manually? If I do, will I have to modify the original code?
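In case it helps: RUN mkdir -p /mnt/ml only creates a directory inside the image's file system, not in S3 or EFS; in this pattern the EFS file system is mounted over /mnt/ml when the Lambda function runs. A minimal sketch of a manual upload, assuming the same EFS file system is mounted at /mnt/efs on an EC2 instance and that the access point root corresponds to what Lambda sees as /mnt/ml:
# create the expected directory layout inside EFS and copy the model in
mkdir -p /mnt/efs/ml/models
cp craft_mlt_25k.pth /mnt/efs/ml/models/
If the resulting layout and file names match what the code expects, no code changes should be needed.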

How to access my file system from a Dockerized pgadmin4

I tried to install pgadmin4 on my system in several ways, but each time I was defeated by the intricacies of the install. Luckily I discovered a Docker image (dpage/pgadmin4) that worked out of the box. In my docker-compose.yml I added a volume statement:
volumes:
  - /var/lib/pgadmin4:/var/lib/pgadmin
in order to preserve the pgadmin data over successive runs. pgadmin4 is accessible at 0.0.0.0:5050 and all works fine.
However, I cannot access the files on my local file system with the query tool; everything is hidden inside the Docker file system. Fortunately that is mapped to /var/lib/pgadmin4 on my local machine. That directory contains a directory storage, which in turn contains a directory named after the ID I use to log in: the ID x#y.z becomes the directory x_y.z, and that holds the files and folders I had created from my browser as a test. I tried to change this path in the pgadmin4 options to /home/user/development, but that path is not recognized because it is not under x_y.z.
Question: how can I change pgadmin4's path from /var/lib/pgadmin4/storage/x_y.z to /home/user/development?
Update
I tried to link a part of my home directory into /var/lib/pgadmin4/storage/x_y.z as a symbolic link:
sudo ln -s /home/user/Documents
After that command there exists a linked directory /var/lib/pgadmin4/storage/x_y.z/Documents with uid:gid root:root and 777 permissions. When I next start the query tool and click Open, the file dialog appears and I get 4 identical error messages:
Error: [Errno 2] No such file or directory: /var/lib/pgadmin4/storage/x_y.z/Documents
I changed the owner:group to all the relevant combinations I could think of:
1000:1000 (me as user)
root:root
5050:5050 (pgadmin uid and gid)
In all three cases I got this error. What is wrong here?
You can override paths in config_local.py (create it if it does not already exist). Change:
STORAGE_DIR = os.path.join(DATA_DIR, 'storage')
to
STORAGE_DIR = '/home/user/Documents'
Restart pgAdmin4.
Ref: https://www.pgadmin.org/docs/pgadmin4/4.22/config_py.html
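Note that whatever STORAGE_DIR points to must exist inside the container, which is also why the symlink approach fails: the link target /home/user/Documents exists on the host but not in the container's file system, hence the Errno 2. A sketch of the idea, extending the volume list from the question (the second line is new; the host path is whatever you want to expose):
volumes:
  - /var/lib/pgadmin4:/var/lib/pgadmin
  - /home/user/development:/home/user/development
With that bind mount in place, STORAGE_DIR = '/home/user/development' resolves inside the container as well.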

I am not able to open the Django admin file

Whenever I start a new Django project using:
(virtual) C:\myDjangoProject>python django_admin.py startproject DjgoProject
I receive the following error:
python: can't open file 'django_admin.py': [Errno 2] No such file or directory
Any guidance please?
If you have created a virtual environment (which it looks like you have) and then run:
pip install django
then you shouldn't need to run django-admin as a script like that; you can just do the following:
django-admin startproject DjgoProject
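Note the name: the command installed with Django is django-admin (with a hyphen), not django_admin.py. If you prefer to invoke it through Python, the module form should also work (project name kept from the question):
python -m django startproject DjgoProject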

How to fix 'Permission denied' in Docker sh entrypoint

I'm trying to create an easy-to-use Docker image for the Garry's Mod server. While my Docker image builds just fine, running it as a container always results in a single error: /bin/sh: 1: ./easygmod.sh: Permission denied.
I'm using the cm2network/steamcmd image as a base. I have tried both tags that the aforementioned base image has. I have tried chmod +x, changing users to root, and fiddling with the shebang in the first line of the easygmod.sh script, as well as a number of possible typos, particularly in file names and paths.
I have a GitHub repository for this project which auto-builds to Docker Hub. Currently, the lines of code involving the problematic script are:
# Start main script
ADD easygmod.sh .
RUN chmod +x easygmod.sh
USER steam
CMD ./easygmod.sh
Also, the shebang/first line of the script is currently #!/bin/sh.
Despite there being no logical explanation that I can find, the easygmod.sh script refuses to execute, always throwing Permission denied. This is especially confusing given that my only other public GitHub project, which is very similar (a similar style of Docker image with the same base OS as cm2network/steamcmd), never had any issues like this.
The file isn't owned by steam in the container, so the chmod +x was insufficient. Either add --chown=steam to the ADD, or change your chmod from +x to a+rx.
Also, you didn't specify a WORKDIR or a path to put those files in. It's likely that the root variant of that image has a working directory that steam can't access. You should use /home/steam/ for that instead.
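Putting both suggestions together, a minimal sketch of the relevant Dockerfile lines (the base image tag is an assumption):
FROM cm2network/steamcmd:root
WORKDIR /home/steam/
# copy the script in owned by steam so the later USER can read and execute it
ADD --chown=steam easygmod.sh .
RUN chmod a+rx easygmod.sh
USER steam
CMD ["./easygmod.sh"]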

docker-compose IOError: Can not access file in context

docker build will build and run the image, but during docker-compose up I get the following error:
> .\docker-compose-Windows-x86_64.exe -f C:\t\tea\docker-compose.yml up
Building web
Traceback (most recent call last):
File "docker-compose", line 6, in <module>
File "compose\cli\main.py", line 71, in main
File "compose\cli\main.py", line 127, in perform_command
File "compose\cli\main.py", line 1039, in up
File "compose\cli\main.py", line 1035, in up
File "compose\project.py", line 465, in up
File "compose\service.py", line 327, in ensure_image_exists
File "compose\service.py", line 999, in build
File "site-packages\docker\api\build.py", line 149, in build
File "site-packages\docker\utils\build.py", line 15, in tar
File "site-packages\docker\utils\utils.py", line 100, in create_archive
File "tarfile.py", line 1802, in gettarinfo
FileNotFoundError: [WinError 3] The system cannot find the path specified:
'C:\\t\\tea\\src\\app\\accSettings\\account-settings-main\\components\\account-settings-catalog\\components\\account-settings-copy-catalog-main\\components\\account-settings-copy-catalog-destination\\components\\account-settings-copy-destination-table\\account-settings-copy-destination-table.component.html'
[18400] Failed to execute script docker-compose
> docker -v
Docker version 18.03.0-ce-rc1, build c160c73
> docker-compose -v
docker-compose version 1.19.0, build 9e633ef3
I've enabled Win32 long paths in my local group policy editor, but not having any luck solving this issue.
Here is the docker-compose.yml if it helps:
version: '3'
services:
  web:
    image: web
    build:
      context: .
      dockerfile: Dockerfile
This is a known issue with docker-compose under some circumstances, and it is related to the MAX_PATH limitation of 260 characters on Windows.
Excerpt from the Microsoft docs on Maximum Path Length Limitation:
In the Windows API (with some exceptions discussed in the following paragraphs), the maximum length for a path is MAX_PATH, which is defined as 260 characters.
From reading up on this, it seems that the solution depends on your docker-compose version and Windows version. Here's a summary of the solutions that I have found:
Solution #1
Upgrade to docker-compose version 1.23.0 or later. There is a bugfix in the 1.23.0 release described as:
Fixed an issue where paths longer than 260 characters on Windows clients would cause docker-compose build to fail.
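If your docker-compose was installed through pip, the upgrade might be as simple as the following (the version pin is just the minimum from the changelog):
pip install --upgrade "docker-compose>=1.23.0"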
Solution #2
Enable NTFS long paths:
Hit the Windows key, type gpedit.msc and press Enter.
Navigate to Local Computer Policy > Computer Configuration > Administrative Templates > System > Filesystem > NTFS.
Double click the Enable NTFS long paths option and enable it.
If you're using a version of Windows that does not provide access to Group Policy, you can edit the registry instead.
Hit the Windows key, type regedit and press Enter.
Navigate to HKEY_CURRENT_USER\SOFTWARE\Microsoft\Windows\CurrentVersion\Group Policy Objects\{48981759-12F2-42A6-A048-028B3973495F}Machine\System\CurrentControlSet\Policies
Select the LongPathsEnabled value, or create it as a DWORD (32-bit) value if it does not exist.
Set the value to 1 and close the Registry Editor.
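Alternatively, the more commonly documented location for this setting is under HKEY_LOCAL_MACHINE; from an elevated command prompt, something like:
reg add "HKLM\SYSTEM\CurrentControlSet\Control\FileSystem" /v LongPathsEnabled /t REG_DWORD /d 1 /f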
Solution #3
Install docker-compose via pip. This seems to have solved the issue for others who have come across it.
Excerpt from the docker-compose documentation:
For alpine, the following dependency packages are needed: py-pip, python-dev, libffi-dev, openssl-dev, gcc, libc-dev, and make.
Compose can be installed from pypi using pip. If you install using pip, we recommend that you use a virtualenv because many operating systems have python system packages that conflict with docker-compose dependencies. See the virtualenv tutorial to get started.
pip install docker-compose
If you are not using virtualenv,
sudo pip install docker-compose
pip version 6.0 or greater is required.
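A sketch of the virtualenv route on a Windows machine (the environment name is arbitrary):
python -m venv compose-env
compose-env\Scripts\activate
pip install docker-compose
docker-compose --version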
