While various database images are available for OpenShift Container Platform users as explained here, others, including ArangoDB, are not yet available. I tried to install the official ArangoDB container from Docker Hub by running the following command via the OpenShift CLI:
oc new-app arangodb
but it does not run successfully, throwing the following error:
chown: changing ownership of '/var/lib/arangodb3': Operation not permitted
It is related to permissions. By default, OpenShift runs containers using an arbitrarily assigned user ID, not as root, as documented in the Support Arbitrary User IDs section. I tried to change the permissions of the directories and files that may be written to by processes in the image so that they are owned by the root group and readable/writable by that group, via the Dockerfile:
RUN chgrp -R 0 /some/directory \
&& chmod -R g+rwX /some/directory
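Concretely for ArangoDB, that would mean targeting the directory from the error message above; a minimal Dockerfile sketch (the base image tag is an assumption):
FROM arangodb:latest
# Make the data directory group-owned by root and group-writable, so the arbitrary
# UID OpenShift assigns (which is always a member of group 0) can write to it.
RUN chgrp -R 0 /var/lib/arangodb3 \
 && chmod -R g+rwX /var/lib/arangodb3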
This time it throws the following error:
FATAL cannot set uid 'arangodb': Operation not permitted
Looking at the script that initializes ArangoDB (the arangod script), ArangoDB runs as arangodb:arangodb, which should (or may!) be arangodb:0 in the case of OpenShift.
Now, I am really confused. I've read and searched a lot:
Getting any Docker image running in your own OpenShift cluster
User namespaces have arrived in Docker!
new-app fails on some official Docker images due to chown permissions
I also tried reverse engineering by looking at the mongodb image provided by OpenShift, but in the end I got even more confused.
I also do not want to ask cluster administrators to allow the project to run as root using:
# oadm policy add-scc-to-user anyuid -z default
The more I read, the more confused I get. Has anybody done this before who can provide a Docker container I can run on OpenShift?
With ArangoDB 3.4, the Docker image has been migrated to an Alpine-based image, and its core shouldn't invoke chown/chgrp anymore when invoked in the right way.
This should satisfy one of the requirements for getting it working on OpenShift.
If you still have problems running ArangoDB on OpenShift, use the GitHub issue tracker to report the specific problems you see. You may also want to contribute changes to the Dockerfile so it can be improved.
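For example, a minimal deployment attempt with the newer image might look like this (the tag and the ARANGO_ROOT_PASSWORD variable are assumptions based on the official image documentation, not something verified on OpenShift here):
oc new-app arangodb:3.4 -e ARANGO_ROOT_PASSWORD=changeme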
JVMStabilityInspector.java:196 - Exiting due to error while processing commit log during initialization.
org.apache.cassandra.db.commitlog.CommitLogReadHandler$CommitLogReadException: \
Could not read commit log descriptor in file /opt/cassandra/data/commitlog/CommitLog-7-1676434400779.log
I ran the Cassandra container in Docker, and the above error appears and the container stops.
It worked well before, but it doesn't seem to work well after deleting and recreating the Cassandra container.
I think we need to clear the /opt/cassandra/data/commitlog/CommitLog-7-1676434400779.log file.
However, I am not used to using Docker.
How do I erase this file?
I'm not sure if erasing the file will fix the error.
I also asked ChatGPT about this problem, but after an hour of questions it told me to try again later, so I still haven't solved it. That's why I'm posting on Stack Overflow.
So this error likely means that the commitlog file specified is corrupted. I would definitely try deleting it.
If it's in a running Docker container, you could try something like this:
Run a docker ps to get the container ID.
Remove the file using docker exec. If my container ID is f6b29860bbe5:
docker exec f6b29860bbe5 rm -rf /opt/cassandra/data/commitlog/CommitLog-7-1676434400779.log
Your question is missing a lot of crucial information, such as which Docker image you're running, the full Docker command you ran to start the container, and other relevant settings you've configured, so I'm going to make several assumptions.
The official Cassandra Docker image (see the Quickstart Guide on the Cassandra website) that we (the Cassandra project) publish stores the commit logs in /var/lib/cassandra/commitlog/, but your deployment stores them somewhere else:
Could not read commit log descriptor in file /opt/cassandra/data/commitlog/CommitLog-7-1676434400779.log
Assuming that you're using the official image, it indicates to me that you have possibly mounted the container directories on a persistent volume on the host. If so, you will need to do a manual cleanup of all the Cassandra directories when you delete the container and recreate it.
The list of directories you need to empty includes:
data/
commitlog/
saved_caches/
In your case, it might be just as easy to delete the contents of /opt/cassandra/.
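For example, a hedged cleanup sketch, assuming the container is named cassandra and the directories are bind-mounted on the host under /opt/cassandra/data as the error path suggests (adjust names and paths to your deployment):
docker stop cassandra
sudo rm -rf /opt/cassandra/data/data/* \
            /opt/cassandra/data/commitlog/* \
            /opt/cassandra/data/saved_caches/*
docker start cassandra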
If those directories are not persisted on the Docker host then you can open an interactive bash session into the Cassandra container. For example if you've named your container cassandra:
$ docker exec -it cassandra bash
For details, see the docker exec manual on the Docker Docs website. Cheers!
I have set up my Keycloak identity server by running a .yml file that uses the Docker image jboss/keycloak:9.0.0.
Now I want to get inside the container and modify some files in order to do some testing.
Unfortunately, after I got inside the running container, I realized that some very basic UNIX commands like sudo or vi (and many more) aren't found, as well as commands like apt-get or yum, which I tried to use to install packages and which also failed.
According to this question, it seems that the underlying OS of the container (Red Hat Universal Base Image) uses the microdnf command to manage software, but unfortunately when I tried to use this command for any action I got the following message:
error: Failed to create: /var/cache/yum/metadata
Could you please propose any workaround for my case? I just need to use a text editor command like vi, and root privileges for my user (so commands like sudo, su, or chmod). Thanks in advance.
If you still, for some reason, want to exec into the container, try adding --user root to your docker exec command.
Just exec'ing into the container without --user will do so as the jboss user, which has fewer privileges.
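A minimal sketch, assuming the container is named keycloak (adjust to your container name); once you're in as root, microdnf should be able to install a text editor:
docker exec -u root -it keycloak /bin/bash
# inside the container, as root:
microdnf install -y vim-minimal   # package name may vary with the base image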
It looks like you are trying to use an approach from the non-Docker (old school) world in the Docker world. That's not right. Usually, you don't need to go into the container and edit any config file there; that change will very likely be lost (it depends on the container configuration). Containers are usually configured via environment variables or volumes.
An example of how to use TLS certificates: Keycloak Docker HTTPS required
https://hub.docker.com/r/jboss/keycloak/ is also a good starting point to check the available environment variables, which may help you achieve what you need. For example, PROXY_ADDRESS_FORWARDING=true lets you run the Keycloak container behind a load balancer without touching any config file.
I would also say that adding your own config files at build time is not the best option; you will have to maintain your own image. Just use volumes and "override" the default config file(s) in the container with your own config file(s) from the host OS file system, e.g.:
-v /host-os-path/my-custom-standalone-ha.xml:/opt/jboss/keycloak/standalone/configuration/standalone-ha.xml
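Putting it together, a hedged example (container name, port mapping, and host path are illustrative):
docker run -d --name keycloak \
  -e PROXY_ADDRESS_FORWARDING=true \
  -v /host-os-path/my-custom-standalone-ha.xml:/opt/jboss/keycloak/standalone/configuration/standalone-ha.xml \
  -p 8080:8080 \
  jboss/keycloak:9.0.0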
After a recent update to Docker, I find myself unable to create any new containers. I've already rebooted my operating system and Docker itself. I've tried pinning the tags to specific versions every way I could. I can manually pull the images I want with Docker, but it refuses to run or create any new containers. Already existing containers start up just fine. The full error message is below.
Unable to find image 'all:latest' locally
Error response from daemon: pull access denied for all, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
These aren't from private repositories. These are all public projects from Docker Hub. Any suggestions?
This is correct. You're trying to build using an image called all:latest, but if you look on the Docker registry, that image doesn't exist.
https://hub.docker.com/_/all
Are you sure you're not trying to build from a private repository?
I found the issue. I started taking my Docker command apart and found there was an environment variable that had the word "all" in it. Docker was completely ignoring whatever I had specified for the image and using the environment variable's value as the image. As soon as I removed this environment variable, Docker started working correctly again.
The variable in question is -e NVIDIA_VISIBLE_DEVICES: "all" \, used to make sure the Plex container can see that there is an NVIDIA GPU available. I was following the wrong guide and found out it's supposed to be -e NVIDIA_VISIBLE_DEVICES=all \ instead.
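To illustrate the difference (the image name below is a placeholder, not my real command):
# Wrong: -e only consumes "NVIDIA_VISIBLE_DEVICES:", so "all" becomes the next
# positional argument and Docker treats it as the image name, hence the
# "Unable to find image 'all:latest'" error.
docker run -d -e NVIDIA_VISIBLE_DEVICES: "all" my-plex-image
# Correct: pass the variable as KEY=value.
docker run -d -e NVIDIA_VISIBLE_DEVICES=all my-plex-image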
I'm trying to install python-pip in my Docker container(s), but it gives me an error saying I don't have permission. After I use sudo, it gives me another error.
I've tried using sudo for root permission. I also tried both the exec and run commands.
sudo docker container run davidrazd/discord-node-10 sudo apt-get install python-pip
sudo docker container exec davidrazd/discord-node-10 sudo apt-get install python-pip
docker: Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "exec: \"sudo\": executable file not found in $PATH": unknown.
And without sudo:
E: Could not open lock file /var/lib/dpkg/lock-frontend - open (13: Permission denied)
E: Unable to acquire the dpkg frontend lock (/var/lib/dpkg/lock-frontend), are you root?
Here are 3 reasons I think you should reconsider your use of Docker and make sure you're getting the most out of containers.
You have an image called ...-node-10. You shouldn't need specific images for specific nodes. They should all run from the same image and, to the extent necessary, be configured at runtime (this should be kept to a minimum, usually things like discovery or dynamic config). If 10 is a version, you should be using tags to version it, not the image name itself.
There's a valid use case for a one-off package install within a running container via exec (knowing that the install will disappear when the container stops), but docker run ... apt-get install really doesn't make sense to me. As @DavidMaze points out in a question comment, installs that are meant to persist should always be part of the Dockerfile. If you're installing packages into long-lived containers: don't install them that way, and don't keep long-lived containers. The worst thing that happens to folks with Docker is that they treat containers as long-lived virtual machines, when they should treat them as ephemeral runtime environments that maintain minimal state, are essentially immutable (and therefore easily replaced by the next version), have images containing all their install-time dependencies, and store any long-term data on a separate Docker volume (that hopefully is itself backed up).
You're likely configuring a user for your application to run as via USER in your Dockerfile, but you're attempting to run privileged commands without setting USER back to something else. Then you attempt to run sudo. But sudo doesn't make much sense in the context of a container, and it is typically not installed (and if it were, you'd have to somehow set up a user for sudo, so it wouldn't make your life any easier, just harder). You can set your user to root at docker exec time with docker exec -u root .... But you shouldn't really need to do this; all setup should be done in the Dockerfile, and you should ship a new Dockerfile to change versions.
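For instance, a minimal Dockerfile sketch under those assumptions (the base image, app layout, and entry point are illustrative, not taken from your setup):
# Install OS packages at build time, while the build still runs as root.
FROM node:10
RUN apt-get update \
 && apt-get install -y --no-install-recommends python-pip \
 && rm -rf /var/lib/apt/lists/*
# Drop privileges for the application itself.
USER node
WORKDIR /home/node/app
COPY . .
CMD ["node", "index.js"]
Build it with a version tag rather than baking the version into the image name, e.g. docker build -t davidrazd/discord-node:10 .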
I'm a very heavy user of Docker myself, and I build nearly all applications in it because it so vastly simplifies expressing their runtime requirements, so I think I speak from experience here. You won't get any of the value of Docker if your containers aren't immutable; you'll just have the same problems as you would without containers, but with less management tooling. Docker obviates most of the management at runtime; capitalize on that, don't fight it.
I am running the MQTT broker Mosquitto in a Docker image.
I am using the following arguments
sudo docker run -d -p 1883:1883 -p 1884:1884 -v /home/mosquitto/apps/dev/mosquitto:/mosquitto --restart always -u mosquitto eclipse-mosquitto:1.4.
This should mount the host folder /home/mosquitto/apps/dev/mosquitto to the image folder /mosquitto
The problem is that the host user IDs (1001) and the docker user IDs (100) do not match.
If I do not specify -u mosquitto, the application complains about not being able to write to /mosquitto/logs/mosquitto.log
So I thought I'd specify -u mosquitto, to make the application inside the image run as user 1001, and therefore have write access to the mounted files.
Which worked.
But then, the Mosquitto application made a new database file on exit. That file was created with the 101 user as owner.
What exactly happens when I specify -u to Docker?
How come it kind of did what I was expecting (allowed writing to host files) and kind of didn't do what I was expecting (still made files with the original image user ID)?
Maybe this has something to do with this specific Docker image; does it run some script internally that switches users?
How about granting write access to the log path for any user? It may be less secure, but if it is just logs, then at least the application inside Docker can write to it.
Or think about bootstrapping some commands into the container to make the permission changes inside.
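A hedged sketch along those lines, using the host path from your run command (the UID and directory names are taken from your question and may need adjusting):
# Option 1: on the host, hand the mounted tree to the UID the container's
# mosquitto user actually has (100, per your question).
sudo chown -R 100:100 /home/mosquitto/apps/dev/mosquitto
# Option 2 (less secure): let any user write to the log directory only.
sudo chmod -R a+rwX /home/mosquitto/apps/dev/mosquitto/logs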
If you are using Linux or macOS as your Docker host, most likely it is a security or file permissions issue. Go to the bug report Permission denied for directories created automatically by Dockerfile ADD command #1295 and jump to the end; there are several links to sub-bug reports where you can most likely find your solution. I had a very similar issue, and it turned out to be an SELinux misconfiguration.
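If SELinux turns out to be the issue here as well, one hedged thing to try is relabeling the bind mount with the :Z volume option (the command below reuses the paths from the question):
docker run -d -p 1883:1883 -p 1884:1884 \
  -v /home/mosquitto/apps/dev/mosquitto:/mosquitto:Z \
  --restart always eclipse-mosquitto:1.4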