Cannot delete folder in Ubuntu in Windows 10 - ruby-on-rails

I have a Rails app in a folder in Ubuntu. I am using Atom and Git. I've always run Git from the console, but last night I installed the hydrogen package in Atom so I could run Git from Atom. After this my app was a mess: when I tried to switch from one branch to another, the files from one branch were carried over into the branch I had just switched to. I finally switched to the master branch, which was supposed to contain just the default files, but there were about 2000 files to commit. I tried to delete the folder, but that doesn't work either. Any suggestions on how to delete it, and any tips on using Git from Atom on Ubuntu?
$ ls -la
total 0
drwxrwxrwx 1 raluca raluca 4096 May 30 13:37 .
drwxrwxrwx 1 raluca raluca 4096 May 30 14:34 ..
drwxrwxrwx 1 raluca raluca 4096 May 30 13:27 app
drwxrwxrwx 1 raluca raluca 4096 May 30 13:37 db
drwxrwxrwx 1 raluca raluca 4096 May 29 19:53 public
drwxrwxrwx 1 raluca raluca 4096 May 30 13:27 test
drwxrwxrwx 1 raluca raluca 4096 May 29 19:53 vendor

I had the same problem (using Ubuntu on Windows). I had created a file in my home directory and could not delete it either from Linux (sudo rm filename) or from Windows (del filename); I got permission denied in both.
The solution was to:
First run chmod 777 ~, granting every user permission to edit the home directory.
Then run chown username filename, so that your current user owns, and can delete, the file.
If you run rm filename at this point you will still get the permission denied error; for the previous commands to take effect you will need to shut down your computer.
Restart your computer, open Bash, and check the home directory with ls; the file should be gone. It was for me. However, it has probably just moved to a different directory on your computer (i.e. your Windows "home directory" on the C: drive).
So open cmd as admin and run dir "\filename*" /s. You will see the file is still on the computer but has moved to C:\Users\username. Navigate to that folder using cmd and type dir; you will see the file there.
Finally, in cmd type del filename to delete the file (no permission error this time), or if it is a folder you want to delete, use rd foldername /s. Type dir again and you will see the file has been permanently deleted from your computer.
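A condensed recap of the steps above, as a sketch (filename, foldername and username stand in for your own values):

# In the WSL/Ubuntu shell:
chmod 777 ~               # let any user edit the home directory
chown username filename   # make your current user the owner
# Shut down and restart Windows, then in an elevated cmd.exe:
dir "\filename*" /s       # locate where the file ended up
del filename              # delete the file...
rd foldername /s          # ...or remove a folder recursively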

Since the files aren't committed yet, try this out:
git clean -f -d
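If you want to preview what would be removed before deleting anything, git clean supports a dry run:

git clean -n -d   # list the untracked files and directories that -f -d would delete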

Related

File system permissions in a docker container

How does docker create a file system for a Linux container? And how are permissions set up on the root file system?
I encountered a situation when starting a docker container on a particular machine with Ubuntu Server. For some reason, /tmp in the container doesn't have write permissions:
$ docker run -it python:3.11-slim-buster /bin/bash
root@5d5fefe9b9a2:/# ls -la /tmp
total 8
drwxr-xr-t 1 root root 4096 Jan 26 06:58 .
drwxr-xr-x 1 root root 4096 Jan 29 04:31 ..
Note that this has 755 permissions (plus the sticky bit), rather than the 1777 (drwxrwxrwt) normally expected for /tmp.
However, when I start the same docker image as a container on WSL, I get 777:
$ docker run -it python:3.11-slim-buster /bin/bash
root@201dfe147e5a:/# ls -la /tmp
total 8
drwxrwxrwt 1 root root 4096 Nov 16 06:56 .
drwxr-xr-x 1 root root 4096 Jan 29 04:36 ..
This was fine a few weeks ago on the Ubuntu machine. I recently moved all the files from /var/lib/docker to /docker because the partition mounted at /var was full. Would this have caused the behavior with the permissions of /tmp inside a container? If so, why, and how do I fix it? If not, what else would cause this, and how do I fix it?
Docker uses a so-called union file system for a running container. The recommended driver on Linux is called overlay2. The files and directories for each layer of an image are stored under /var/lib/docker/overlay2, assuming the default config. The directory structure for each layer is combined to create the final file system for the container. See https://docs.docker.com/storage/storagedriver/overlayfs-driver/ for more details.
As for the permissions for the files in the container, they are derived from the permissions of the files in this directory in the host file system. When I copied the files from /var/lib/docker to /docker, I failed to preserve ownership and permissions. My best guess is that umask was applied as each new file was created.
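A sketch of one way to repair this, assuming the move was indeed the culprit: redo the copy while preserving ownership and permissions, and point the daemon at the new location through its data-root setting (the /docker destination is taken from the question above):

# Stop the daemon before touching its data directory
sudo systemctl stop docker
# -a preserves permissions, ownership and timestamps; -H, -A, -X keep hard links, ACLs and xattrs
sudo rsync -aHAX /var/lib/docker/ /docker/
# Point Docker at the new location in /etc/docker/daemon.json:
#   { "data-root": "/docker" }
sudo systemctl start docker

Alternatively, removing the affected images and pulling them again recreates the layer directories with the correct permissions.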

Mounted Docker volume has different ownership when using Travis

This question relates to this repository with the most relevant Travis job here.
The repository is for static site built from Jupyter notebooks. The notebooks are converted using build/build.py which, for each post, builds a Docker image, starts a corresponding container with the post notebook directory mounted, and uses nbconvert to convert the notebook to Markdown. One step of nbconvert's conversion involves creating a supporting file directory. This fails on Travis due to a permission issue.
In attempting to debug this problem, I found that the ownership and permissions of the repo are the same on my local machine and Travis (with my username switched for travis) before running Docker. Despite this, inside the mounted volume of the Docker container, the ownerships are different:
Local:
drwxrwxr-x 3 jovyan 1000 4096 Dec 10 19:56 .
drwsrwsr-x 1 jovyan users 4096 Dec 3 21:51 ..
-rw-rw-r-- 1 jovyan 1000 105 Dec 7 09:57 Dockerfile
drwxr-xr-x 2 jovyan 1000 4096 Dec 10 12:09 .ipynb_checkpoints
-rw-r--r-- 1 jovyan 1000 154229 Dec 10 12:28 post.ipynb
Travis:
drwxrwxr-x 2 2000 2000 4096 Dec 10 19:58 .
drwsrwsr-x 1 jovyan users 4096 Nov 8 16:37 ..
-rw-rw-r-- 1 2000 2000 101 Dec 10 19:58 Dockerfile
-rw-rw-r-- 1 2000 2000 35271 Dec 10 19:58 post.ipynb
Both my local machine and Travis are running Ubuntu 20.04, have the same version of Docker, and all other tools come from Conda so should behave the same. I am struggling to understand where this difference in ownership is coming from.
Try running the container again with this option, so the UID outside the container is propagated inside:
docker run -u `id -u`
alternatively, as pointed out by @anemyte:
docker run -u $(id -u)
This should cause the new files created inside the container to be owned by your host user (here, jovyan).
If you can predict which mount points will exist, you could also pre-create them so the ownership of the files inside is correct as well:
docker run -v /path/on/host:/path/in/container ...
If you set the permissions of your local path (/path/on/host) to 777, that will also be propagated to the mount point: no permission error will be thrown regardless of the user Docker uses to create those files.
After that, you'll be free to restore permissions, if needed.
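A minimal sketch of the combined approach (paths and image name are placeholders):

mkdir -p /path/on/host      # pre-create the mount point on the host
chmod 777 /path/on/host     # deliberately permissive; tighten once it works
docker run -u "$(id -u)" -v /path/on/host:/path/in/container some-image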

How can Docker container write to a mounted directory with permissions granted through group membership?

Versions
Host OS: Debian 4.9.110
Docker Version: 18.06.1-ce
Scenario
I have a directory where multiple users (user-a and user-b) have read/write access through a common group membership (shared), set up via chown:
/media/disk-a/shared/$ ls -la
drwxrwsr-x 4 user-a shared 4096 Oct 7 22:21 .
drwxrwxr-x 7 root root 4096 Oct 1 19:58 ..
drwxrwsr-x 5 user-a shared 4096 Oct 7 22:10 folder-a
drwxrwsr-x 3 user-a shared 4096 Nov 10 22:10 folder-b
UIDs & GIDs are as following:
uid=1000(user-a) gid=1000(user-a) groups=1000(user-a),1003(shared)
uid=1002(user-b) gid=1002(user-b) groups=1002(user-b),1003(shared)
Relevant /etc/group looks like this:
shared:x:1003:user-a,user-b
When switching to either user with su, files can be created within the shared directory as expected.
The shared directory is attached to a Docker container as a bind mount at /shared/. The Docker container runs as user-b (using the --user "1002:1002" parameter):
$ ps aux | grep user-b
user-b 1347 0.2 1.2 1579548 45740 ? Ssl 17:47 0:02 entrypoint.sh
id from within the container prints the following result, which looks okay to me:
I have no name!@7a5d2cc27491:/$ id
uid=1002 gid=1002
Also ls -la mirrors its host system equivalent perfectly:
I have no name!@7a5d2cc27491:/shared$ ls -la
total 16
drwxrwsr-x 4 1000 1003 4096 Oct 7 20:21 .
drwxr-xr-x 1 root root 4096 Oct 8 07:58 ..
drwxrwsr-x 5 1000 1003 4096 Oct 7 20:10 folder-a
drwxrwsr-x 3 1000 1003 4096 Nov 10 20:10 folder-b
Problem
From within the container, I cannot write anything to the shared directory. For example, touch test fails:
I have no name!@7a5d2cc27491:/shared$ touch test
touch: cannot touch 'test': Permission denied
I can write to a directory that is directly owned by user-b (user & group) and mounted into the container, so it is simply the group membership that somehow seems not to be respected at all.
I have looked into things like user-namespace remapping, but those seemed to be solutions for problems that don't apply here. What am I missing?
Your container user has gid=1002, but is not a member of the group shared with gid=1003.
In addition to --user "1002:1002" you need --group-add 1003.
Then the container user is allowed to access the shared folder with gid=1003.
id should show:
I have no name!@7a5d2cc27491:/$ id
uid=1002 gid=1002 groups=1003
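For example (image name is a placeholder; the bind mount follows the setup in the question):

docker run --user "1002:1002" --group-add 1003 \
    -v /media/disk-a/shared:/shared some-image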

Docker does not find executable

I have an executable written in Go; it starts and runs fine when launched from the Linux prompt. As you can see below, the executable needs an XML file when started. But when it is started inside a Docker environment, I get this error message:
standard_init_linux.go:190: exec user process caused "no such file or directory"
Let me tell you what I tried. First, this is my Dockerfile:
FROM alpine:latest
MAINTAINER Bert Verhees "xxxxx"
ADD archibold_ucum_service /archibold_ucum_service
ADD data/ucum-essence.xml /data/ucum-essence.xml
ENTRYPOINT ["/archibold_ucum_service", "-ucumfile=/data/ucum-essence.xml"]
I build it in this way:
docker build -t=ucum_micro_service .
Then I start it in this way:
docker run --name=ucum_micro_service -i -t ucum_micro_service /bin/sh
When I do this, I get the error message displayed above. I then tried commenting out the ENTRYPOINT line; with that change it builds fine and starts the Linux prompt, so I can inspect what is inside.
The executable is in there, and the data file also, and the executable has the right attributes (it is executable inside the Docker container).
Then I try to start the executable from the Linux prompt inside the running container, and again I get a message that the file is not found:
/ # ./archibold_ucum_service
/bin/sh: ./archibold_ucum_service: not found
For completeness, here is part of the directory structure in the container:
/ # ls -l
total 17484
-rwxrwxr-x 1 root root 17845706 Aug 3 13:21 archibold_ucum_service
drwxr-xr-x 2 root root 4096 Jul 5 14:47 bin
drwxr-xr-x 2 root root 4096 Aug 3 14:29 data
drwxr-xr-x 5 root root 360 Aug 4 20:27 dev
drwxr-xr-x 15 root root 4096 Aug 4 20:27 etc
drwxr-xr-x 2 root root 4096 Jul 5 14:47 home
drwxr-xr-x 5 root root ........
.......
So, what can the problem be? I have been trying to solve this for over a day now. Thanks for any support.
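A common cause of this exact symptom on Alpine (an assumption here, since the thread records no answer) is that the Go binary is dynamically linked against glibc, which musl-based Alpine does not ship; the missing loader is then reported as "no such file or directory" / "not found". A sketch of a fully static rebuild that sidesteps the issue, assuming a standard Go toolchain:

# Disabling cgo produces a binary with no libc dependency:
CGO_ENABLED=0 GOOS=linux go build -o archibold_ucum_service .
# Verify before rebuilding the image; this should print "statically linked":
file archibold_ucum_service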

How to crawl nginx container logs via filebeat?

Problem Statement
The NGINX image is configured to send the main NGINX access and error logs to the Docker log collector by default. This is done by linking them to stdout and stderr, which causes all messages from both logs to be stored in the file /var/lib/docker/containers/<container id>/<container id>-json.log on the Docker Host.
Since the hard work of getting the logs out of the container and onto the host has already been taken care of for us, perhaps we should try to leverage that? But there are numerous indistinguishable folders in /var/lib/docker/containers/:
# ls -alrt /var/lib/docker/containers/
total 84
drwx--x--x 14 root root 4096 Jul 4 13:40 ..
drwx------ 4 root root 4096 Jul 4 13:55 a4ee4224c3e4c68a8023eb63c01b2a288019257440b30c4efb7226eb83629956
drwx------ 4 root root 4096 Jul 6 16:24 59d1465b5c42f2ce6b13747c39ff3995191d325d641b6ef8cad1a8446247ef24
...
drwx------ 4 root root 4096 Jul 9 06:34 cab3407af18d778b259f54df16e60f5e5187f14b01a020b30f6c91c6f8003bdd
drwx------ 4 root root 4096 Jul 9 06:35 0b99140af456b29af6fcd3956a6cdfa4c78d1e1b387654645f63b8dc4bbf049c
drwx------ 21 root root 4096 Jul 9 06:35 .
Even if we narrow them down by searching recursively through /var/lib/docker/containers/ for any *-json.log files that contain the string upstream_response_time:
# grep -lr "upstream_response_time" /var/lib/docker/containers/ --include "*-json.log"
/var/lib/docker/containers/cfe8...fe18/cfe8...fe18-json.log
/var/lib/docker/containers/c3c3...6662/c3c3...6662-json.log
... this still leaves us in a situation where we constantly have to step in to find the correct folders as containers start and stop, and we would be stuck reconfiguring Filebeat to crawl them.
Question: So how can the Docker container log folders be renamed to give them predictable names?
Alternatives
Here are some other methods that I've ruled out, but feel free to differ.
Setting up a named volume
$ tree /var/lib/docker/volumes/*nginx-log-volume
/var/lib/docker/volumes/my_swarm_stack_nginx-log-volume
└── _data
    ├── access.log -> /dev/stdout
    └── error.log -> /dev/stderr
The named volume's name is a combination of the stack name and the volume name: my_swarm_stack_nginx-log-volume. But rather than being regular files, these are some sort of symlink or pipe to the standard streams, so I felt this approach was invalid.
I think you are over-complicating the problem at hand. Filebeat already has a lot of configurable options; you don't need to reinvent stuff like this.
I suggest you just use the add_docker_metadata processor. It attaches useful information, such as the image and container name, to each log entry a container produces; the drop_event processor can then check those fields, with the conditions set up so that you only accept logs from one specific container.
processors:
  - add_docker_metadata:
  - drop_event:
      when:
        not:
          regexp:
            docker.container.name: "^nginx"
Adding Docker Metadata Documentation
Filtering Using Drop Processor
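For context, a minimal filebeat.yml sketch combining these processors with a container input (the container input type exists in Filebeat 7+; the log path is Docker's default, as shown in the question):

filebeat.inputs:
  - type: container
    paths:
      - /var/lib/docker/containers/*/*-json.log
processors:
  - add_docker_metadata:
  - drop_event:
      when:
        not:
          regexp:
            docker.container.name: "^nginx"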
