Cannot start an LXD container when the subuid and subgid information for root are changed - lxc

New to LXD and running into a problem: I am trying to build a subuid and subgid map for the root user of my container so that when root writes to the directory /megalith, the files get the UID/GID of the host user (1000:1000) rather than 165536:165536. I am trying to follow the instructions listed here:
http://insights.ubuntu.com/2016/12/08/mounting-your-home-directory-in-lxd/
But when I try to start the container, I receive the errors listed below. If I return the root subuid and subgid entries to root:165536:65536, everything starts working properly again, except that anything written to /megalith from inside the container is, of course, owned by 165536:165536.
Is there anything else that I need to do to make the root subuid and subgid mappings work properly that may not be in the documentation or that I may be missing?
cliff@reventon /megalith $ id
uid=1000(cliff) gid=1000(cliff) groups=1000(cliff),4(adm),24(cdrom),27(sudo),30(dip),46(plugdev),113(lpadmin),130(sambashare),132(lxd)
cliff@reventon /megalith $ cat /etc/subuid
cliff:100000:65536
lxd:165536:65536
root:1000:1
cliff@reventon /megalith $ cat /etc/subgid
cliff:100000:65536
lxd:165536:65536
root:1000:1
cliff@reventon /megalith $ lxc init ubuntu-daily:z zestytest
Creating zestytest
cliff@reventon /megalith $ lxc config set zestytest raw.idmap 'both 1000 1000'
cliff@reventon /megalith $ lxc config device add zestytest megalith disk source=/megalith path=/megalith
Device megalith added to zestytest
cliff@reventon /megalith $ lxc start zestytest
error: Error calling 'lxd forkstart zestytest /var/lib/lxd/containers /var/log/lxd/zestytest/lxc.conf': err='exit status 1'
lxc 20170112215311.265 ERROR lxc_start - start.c:lxc_spawn:1163 - Failed to set up id mapping.
lxc 20170112215311.303 ERROR lxc_start - start.c:__lxc_start:1338 - Failed to spawn container "zestytest".
lxc 20170112215311.855 ERROR lxc_conf - conf.c:run_buffer:347 - Script exited with status 1
lxc 20170112215311.855 ERROR lxc_start - start.c:lxc_fini:546 - Failed to run lxc.hook.post-stop for container "zestytest".
lxc 20170112215311.858 ERROR lxc_conf - conf.c:userns_exec_1:4374 - Error setting up child mappings
lxc 20170112215311.858 ERROR lxc_cgfsng - cgroups/cgfsng.c:recursive_destroy:1274 - Error destroying /sys/fs/cgroup/systemd//lxc/zestytest
lxc 20170112215311.861 ERROR lxc_conf - conf.c:userns_exec_1:4374 - Error setting up child mappings
lxc 20170112215311.861 ERROR lxc_cgfsng - cgroups/cgfsng.c:recursive_destroy:1274 - Error destroying /sys/fs/cgroup/cpuset//lxc/zestytest
lxc 20170112215311.864 ERROR lxc_conf - conf.c:userns_exec_1:4374 - Error setting up child mappings
lxc 20170112215311.864 ERROR lxc_cgfsng - cgroups/cgfsng.c:recursive_destroy:1274 - Error destroying /sys/fs/cgroup/hugetlb//lxc/zestytest
lxc 20170112215311.867 ERROR lxc_conf - conf.c:userns_exec_1:4374 - Error setting up child mappings
lxc 20170112215311.867 ERROR lxc_cgfsng - cgroups/cgfsng.c:recursive_destroy:1274 - Error destroying /sys/fs/cgroup/cpu//lxc/zestytest
lxc 20170112215311.869 ERROR lxc_conf - conf.c:userns_exec_1:4374 - Error setting up child mappings
lxc 20170112215311.869 ERROR lxc_cgfsng - cgroups/cgfsng.c:recursive_destroy:1274 - Error destroying /sys/fs/cgroup/pids//lxc/zestytest
lxc 20170112215311.872 ERROR lxc_conf - conf.c:userns_exec_1:4374 - Error setting up child mappings
lxc 20170112215311.872 ERROR lxc_cgfsng - cgroups/cgfsng.c:recursive_destroy:1274 - Error destroying /sys/fs/cgroup/perf_event//lxc/zestytest
lxc 20170112215311.875 ERROR lxc_conf - conf.c:userns_exec_1:4374 - Error setting up child mappings
lxc 20170112215311.875 ERROR lxc_cgfsng - cgroups/cgfsng.c:recursive_destroy:1274 - Error destroying /sys/fs/cgroup/freezer//lxc/zestytest
lxc 20170112215311.878 ERROR lxc_conf - conf.c:userns_exec_1:4374 - Error setting up child mappings
lxc 20170112215311.878 ERROR lxc_cgfsng - cgroups/cgfsng.c:recursive_destroy:1274 - Error destroying /sys/fs/cgroup/memory//lxc/zestytest
lxc 20170112215311.881 ERROR lxc_conf - conf.c:userns_exec_1:4374 - Error setting up child mappings
lxc 20170112215311.881 ERROR lxc_cgfsng - cgroups/cgfsng.c:recursive_destroy:1274 - Error destroying /sys/fs/cgroup/net_cls//lxc/zestytest
lxc 20170112215311.884 ERROR lxc_conf - conf.c:userns_exec_1:4374 - Error setting up child mappings
lxc 20170112215311.884 ERROR lxc_cgfsng - cgroups/cgfsng.c:recursive_destroy:1274 - Error destroying /sys/fs/cgroup/devices//lxc/zestytest
lxc 20170112215311.886 ERROR lxc_conf - conf.c:userns_exec_1:4374 - Error setting up child mappings
lxc 20170112215311.886 ERROR lxc_cgfsng - cgroups/cgfsng.c:recursive_destroy:1274 - Error destroying /sys/fs/cgroup/blkio//lxc/zestytest
Try `lxc info --show-log zestytest` for more info

If I return the root subuid and subgid entries back to root:165536:65536
You need both the 165536:65536 and the 1000:1 ranges: the former holds the bulk of the UIDs/GIDs used inside the container, and the latter maps your own UID/GID so it stays the same inside the container.
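In other words, don't replace the default root entries; add the 1000:1 range alongside them. A sketch of what /etc/subuid and /etc/subgid would then each contain, keeping the cliff and lxd lines from the output above (multiple entries per user are allowed):

cliff:100000:65536
lxd:165536:65536
root:165536:65536
root:1000:1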

Not quite sure if this addresses your question: https://github.com/lxc/lxc/issues/1622
The gist of that thread is that it is unsafe/nonsensical to map a host UID and/or GID to the container's root UID/GID. If you want to do what you describe, you should map to the default container user, or create a new container user and run your commands as that user. Then you can map the host UID/GID to that container UID/GID.
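For illustration, a hedged sketch of mapping the host user to a dedicated non-root container user; the UID 1001 and the user name "writer" are made-up values, and raw.idmap entries take the form "<uid|gid|both> <host-id> <container-id>":

lxc exec zestytest -- useradd -u 1001 writer           # create the container user first
lxc stop zestytest
lxc config set zestytest raw.idmap 'both 1000 1001'    # host 1000 <-> container 1001
lxc start zestytest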

Related

Docker nextcloud - AH00534: apache2: Configuration error: No MPM loaded

I'm trying to run nextcloud:latest on Docker on CentOS 7, but I'm getting this error:
Initializing nextcloud 24.0.4.1 ...
New nextcloud instance
Initializing finished
AH00534: apache2: Configuration error: No MPM loaded.
I searched everywhere but didn't find anything similar or anything that solved it.

Error starting daemon: Error initializing network controller: Error creating default network: HNS failed with error : The object already exists

I am getting a daemon error while starting the Docker service.
- System
  - Provider
    [ Name] docker
  - EventID 4
    [ Qualifiers] 0
  Level 2
  Task 0
  Keywords 0x80000000000000
  - TimeCreated
    [ SystemTime] 2021-05-20T11:56:33.780842300Z
  EventRecordID 404284
  Channel Application
  Computer Computer.Name
  Security
- EventData
  Error starting daemon: Error initializing network controller: Error creating default network: HNS failed with error: The object already exists.
Error details from Event Viewer:
fatal: Error starting daemon: Error initializing network controller: Error creating default network: HNS failed with error: The object already exists.
Docker command: Start-Service docker
This might be caused by a custom NAT entry; removing the NAT should resolve the issue.
Check in PowerShell with Get-NetNat; if anything is returned, try removing it with Remove-NetNat.
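A minimal sketch of that check-and-remove sequence in PowerShell, assuming the listed NAT objects are safe to delete and the service is named docker as in the command above:

Get-NetNat                   # list existing NAT objects
Get-NetNat | Remove-NetNat   # remove them (asks for confirmation)
Restart-Service docker       # restart the Docker service afterwards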

Molecule: Testing roles: Failed to get D-Bus connection: Operation not permitted

I am facing an issue with my Molecule test. For context, I began studying this tool two days ago.
On an Ubuntu VM running under Vagrant, I created a role, initialized Molecule's folder, and created a Testinfra test file (with the Docker driver).
The error occurs while my role's tasks are running: it fails at the step that checks that the service is running.
fatal: [instance]: FAILED! => {"changed": false, "msg": "Could not find the requested service httpd: "}
The role is designed to simply install two packages, including httpd, on a CentOS image.
When I log directly into the Molecule instance (so through Docker) and simply type systemctl, the error message is:
Failed to get D-Bus connection: Operation not permitted
As advised by geerlingguy, I specified a volume mapped to the cgroup folder:
platforms:
  - name: instance
    # image: docker.io/pycontribs/centos:7
    image: geerlingguy/docker-${MOLECULE_DISTRO:-centos7}-ansible:latest
    volumes:
      - /sys/fs/cgroup:/sys/fs/cgroup:ro
The error is not related to Testinfra, only to the Docker-built image.
Could someone help me understand this error message?
Is it because I am on a VirtualBox VM run by Vagrant?
Thanks all for reading :-)
I added this to my molecule.yml config, following the Molecule documentation ( https://molecule.readthedocs.io/en/latest/examples.html#docker ):
platforms:
  - name: instance
    # image: docker.io/pycontribs/centos:7
    image: geerlingguy/docker-centos7-ansible:latest
    capabilities:
      - SYS_ADMIN
    command: /sbin/init
systemctl is working fine now.
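For reference, a combined platforms block merging the cgroup mount from the question with the capability and init command from the answer; this is a sketch of the usual recipe for systemd-based images rather than a verified config:

platforms:
  - name: instance
    image: geerlingguy/docker-centos7-ansible:latest
    capabilities:
      - SYS_ADMIN
    command: /sbin/init
    volumes:
      - /sys/fs/cgroup:/sys/fs/cgroup:ro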

I cannot use the --packages option on the bitnami/spark Docker container

I pulled the Docker image and executed the commands below to run it:
docker run -it bitnami/spark:latest /bin/bash
spark-shell --packages="org.elasticsearch:elasticsearch-spark-20_2.11:7.5.0"
and I got a message like the one below:
Ivy Default Cache set to: /opt/bitnami/spark/.ivy2/cache
The jars for the packages stored in: /opt/bitnami/spark/.ivy2/jars
:: loading settings :: url = jar:file:/opt/bitnami/spark/jars/ivy-2.4.0.jar!/org/apache/ivy/core/settings/ivysettings.xml
org.elasticsearch#elasticsearch-spark-20_2.11 added as a dependency
:: resolving dependencies :: org.apache.spark#spark-submit-parent-c785f3e6-7c78-469f-ab46-451f8be61a4c;1.0
confs: [default]
Exception in thread "main" java.io.FileNotFoundException: /opt/bitnami/spark/.ivy2/cache/resolved-org.apache.spark-spark-submit-parent-c785f3e6-7c78-469f-ab46-451f8be61a4c-1.0.xml (No such file or directory)
at java.io.FileOutputStream.open0(Native Method)
at java.io.FileOutputStream.open(FileOutputStream.java:270)
at java.io.FileOutputStream.<init>(FileOutputStream.java:213)
at java.io.FileOutputStream.<init>(FileOutputStream.java:162)
at org.apache.ivy.plugins.parser.xml.XmlModuleDescriptorWriter.write(XmlModuleDescriptorWriter.java:70)
at org.apache.ivy.plugins.parser.xml.XmlModuleDescriptorWriter.write(XmlModuleDescriptorWriter.java:62)
at org.apache.ivy.core.module.descriptor.DefaultModuleDescriptor.toIvyFile(DefaultModuleDescriptor.java:563)
at org.apache.ivy.core.cache.DefaultResolutionCacheManager.saveResolvedModuleDescriptor(DefaultResolutionCacheManager.java:176)
at org.apache.ivy.core.resolve.ResolveEngine.resolve(ResolveEngine.java:245)
at org.apache.ivy.Ivy.resolve(Ivy.java:523)
at org.apache.spark.deploy.SparkSubmitUtils$.resolveMavenCoordinates(SparkSubmit.scala:1300)
at org.apache.spark.deploy.DependencyUtils$.resolveMavenDependencies(DependencyUtils.scala:54)
at org.apache.spark.deploy.SparkSubmit.prepareSubmitEnvironment(SparkSubmit.scala:304)
at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:774)
at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:161)
at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:184)
at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86)
at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:920)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:929)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
I tried other packages, but they all fail with the same error message.
Can you give some advice on how to avoid this error?
Found the solution to it, as given in https://github.com/bitnami/bitnami-docker-spark/issues/7
What we have to do is create a volume on the host mapped to a path inside the container:
volumes:
  - ./jars_dir:/opt/bitnami/spark/ivy:z
and give this path as the Ivy cache path, like this:
spark-shell --conf spark.jars.ivy=/opt/bitnami/spark/ivy \
  --conf spark.cassandra.connection.host=127.0.0.1 \
  --packages com.datastax.spark:spark-cassandra-connector_2.12:3.0.0-beta \
  --conf spark.sql.extensions=com.datastax.spark.connector.CassandraSparkExtensions
This all happened because /opt/bitnami/spark is not writable, so we have to mount a volume to get around that.
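Applied to the original docker run invocation from the question, the same workaround would look roughly like this (a sketch, assuming a jars_dir directory exists in the current working directory on the host):

docker run -it -v "$PWD/jars_dir:/opt/bitnami/spark/ivy:z" bitnami/spark:latest /bin/bash
spark-shell --conf spark.jars.ivy=/opt/bitnami/spark/ivy --packages="org.elasticsearch:elasticsearch-spark-20_2.11:7.5.0"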
The error "java.io.FileNotFoundException: /opt/bitnami/spark/.ivy2/" occured because the location /opt/bitnami/spark/ is not writable. so in order to resolve this issue do modify the master spark service like this.
Added user as root and add mounted volume path for required jars.
see the working block of spark service written in docker compose:
spark:
  image: docker.io/bitnami/spark:3
  container_name: spark
  environment:
    - SPARK_MODE=master
    - SPARK_RPC_AUTHENTICATION_ENABLED=no
    - SPARK_RPC_ENCRYPTION_ENABLED=no
    - SPARK_LOCAL_STORAGE_ENCRYPTION_ENABLED=no
    - SPARK_SSL_ENABLED=no
  user: root
  ports:
    - '8880:8080'
  volumes:
    - ./spark-defaults.conf:/opt/bitnami/spark/conf/spark-defaults.conf
    - ./jars_dir:/opt/bitnami/spark/ivy:z

Connection to docker container failing because of postgis port issue

My Docker container builds successfully, but when I enter the command docker-compose build, the following error is returned:
Starting docker_etl_1 ...
Starting 1e5f56853e10_1e5f56853e10_1e5f56853e10_docker_postgis_1 ...
Starting 1e5f56853e10_1e5f56853e10_1e5f56853e10_docker_postgis_1
Starting 1e5f56853e10_1e5f56853e10_1e5f56853e10_docker_postgis_1 ... error
ERROR: for 1e5f56853e10_1e5f56853e10_1e5f56853e10_docker_postgis_1 Cannot start service postgis: driver failed programming external connectivity on endpoint 1e5f56853e10_1e5f56853e10_1e5f56853e10_docker_postgis_1 (91464afbee8bf7212061797ec0f4c017a56cc3c30c9bdaf513127a6e6a4a5a52): Error starting userland prStarting docker_etl_1 ... done
ERROR: for postgis Cannot start service postgis: driver failed programming external connectivity on endpoint 1e5f56853e10_1e5f56853e10_1e5f56853e10_docker_postgis_1 (91464afbee8bf7212061797ec0f4c017a56cc3c30c9bdaf513127a6e6a4a5a52): Error starting userland proxy: Bind for 0.0.0.0:5432 failed: port is already allocated
Here is my docker-compose.yaml
version: '2'
services:
  postgis:
    build: ./postgis
    volumes:
      - ../src/main/sql:/sql
    ports:
      - "5432:5432"
  etl:
    build: ./etl
    volumes:
      - ..:/national-voter-file
    entrypoint:
      - python3
      - /national-voter-file/load/loader.py
and here is the Dockerfile:
FROM mdillon/postgis:9.5
ENV POSTGRES_DB VOTER
RUN mkdir /sql
COPY ./dockerResources/z-init-db.sh /docker-entrypoint-initdb.d/
EXPOSE 5432
docker ps -a returns:
CONTAINER ID   IMAGE            COMMAND                  CREATED              STATUS                          PORTS   NAMES
da74ad97b95c   docker_postgis   "docker-entrypoint..."   About a minute ago   Created                                 docker_postgis_1
5872c6e55fe2   docker_etl       "python3 /national..."   About a minute ago   Exited (2) About a minute ago           docker_etl_1
However, when I try rm $(docker ps -qa) I get the following error:
rm: da74ad97b95c: No such file or directory
rm: 5872c6e55fe2: No such file or directory
I don't believe I have another container running so I'm confused by the message Bind for 0.0.0.0:5432 failed: port is already allocated
Is it possible that you ran the same docker-compose earlier, and it failed, or at least failed to clean up the services?
Try running docker ps -a to check whether any stopped containers exist; it is possible that a stopped container is holding the port. If so, just clear them out using docker rm $(docker ps -qa). (Note the docker prefix: plain rm operates on files, which is why your earlier attempt reported "No such file or directory".)
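Putting that together, a minimal recovery sequence (assuming none of the stopped containers hold data you need):

docker ps -a                  # list all containers, including stopped ones
docker rm $(docker ps -qa)    # remove them, which frees the 5432 port binding
docker-compose up             # then bring the stack back up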
