vsftpd - Cannot upload file. Get err: 553

I installed vsftpd on CentOS 7 and tried to set up FTP.
The vsftpd.conf file information is as follows:
anonymous_enable=NO
local_enable=YES
write_enable=YES
local_umask=022
allow_ftpd_full_access
dirmessage_enable=YES
xferlog_enable=YES
connect_from_port_20=YES
xferlog_std_format=YES
listen=YES
listen_ipv6=NO
pam_service_name=vsftpd
userlist_enable=YES
tcp_wrappers=YES
local_root=/home/share
chroot_local_user=YES
chroot_list_enable=YES
chroot_list_file=/etc/vsftpd/chroot_list
allow_writeable_chroot=YES
pasv_address=ip
pasv_min_port=3000
pasv_max_port=3100
guest_enable=NO
I looked at a lot of posts, and most of the answers pointed to permissions and SELinux.
The /home/share directory has 777 permissions set.
SELinux is enabled.
Would you please help me find out what the problem is? I would be very grateful!!

I think the problems are that allow_ftpd_full_access is not a vsftpd.conf option, and that the /home/share directory has the wrong owner (see Steps 5 and 6).
Try this out...
NOTE - Tested using two CentOS 7.9 virtual machines, on an Internal network, with IP addresses of 192.168.0.10 (client) and 192.168.0.11 (server), using your vsftpd.conf settings.
On the client, ensure the FTP client is installed: sudo yum install ftp
On the server, ensure the FTP daemon is installed: sudo yum install vsftpd
Temporarily open the firewall for FTP traffic on both machines, so you do not receive a No route to host error:
sudo firewall-cmd --zone=public --add-port=20/tcp
sudo firewall-cmd --zone=public --add-port=21/tcp
On the server, allow FTP daemon traffic through the firewall: sudo firewall-cmd --zone=public --add-service=ftp
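Since your vsftpd.conf also defines a passive data port range (pasv_min_port=3000 / pasv_max_port=3100), that range has to be open as well, and you will probably want to make the rules permanent once testing succeeds. A minimal sketch (run on the server; adjust the range if you change the config):
sudo firewall-cmd --zone=public --add-port=3000-3100/tcp
sudo firewall-cmd --permanent --zone=public --add-service=ftp
sudo firewall-cmd --permanent --zone=public --add-port=3000-3100/tcp
sudo firewall-cmd --reload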
On the server, in your vsftpd.conf file, remove allow_ftpd_full_access. Instead, set the corresponding SELinux boolean in the terminal: sudo setsebool -P allow_ftpd_full_access=1
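To confirm the SELinux boolean actually took effect, you can query it afterwards; on CentOS 7 it is usually listed under its newer name ftpd_full_access, of which allow_ftpd_full_access is an older alias:
getsebool ftpd_full_access
It should report ftpd_full_access --> on.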
On the server, change the ownership of the /home/share folder from root:root to the FTP server's user name and group. In my case it was the ftp_server user and group:
sudo chown ftp_server:ftp_server /home/share
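To double-check the result of Steps 5 and 6 (ownership plus the SELinux side of things), a quick verification sketch on the server:
ls -ld /home/share
ls -ldZ /home/share
The first command shows the owner, group, and mode; the second (-Z) additionally shows the SELinux context, which matters while SELinux is enforcing.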
On the server, start the FTP service: sudo systemctl start vsftpd
On the server, create a test file in the /home/share directory. You can change the ownership of the file, if you like, but I was able to get the file even if it was root:root:
echo "This file is from the FTP server." | sudo tee /home/share/ftp_server_file
On the client, create a test file in the client home directory: echo "This file is from the FTP client." > ~/ftp_client_file
On the client:
Open the FTP client
Get the server's /home/share directory listing
Get the server file
Put the client file
[ftp_client@localhost ~]$ ftp 192.168.0.11
Connected to 192.168.0.11 (192.168.0.11).
220 (vsFTPd 3.0.2)
Name (192.168.0.11:ftp_client): ftp_server
331 Please specify the password.
Password:
230 Login successful.
Remote system type is UNIX.
Using binary mode to transfer files.
ftp> ls
227 Entering Passive Mode (192,168,0,11,12,27).
150 Here comes the directory listing.
-rw-r--r-- 1 0 0 34 Jan 16 21:06 ftp_server_file
226 Directory send OK.
ftp> get ftp_server_file
local: ftp_server_file remote: ftp_server_file
227 Entering Passive Mode (192,168,0,11,11,211).
150 Opening BINARY mode data connection for ftp_server_file (34 bytes).
226 Transfer complete.
34 bytes received in 4.5e-05 secs (755.56 Kbytes/sec)
ftp> put ftp_client_file
local: ftp_client_file remote: ftp_client_file
227 Entering Passive Mode (192,168,0,11,11,212).
150 Ok to send data.
226 Transfer complete.
34 bytes sent in 7.7e-05 secs (441.56 Kbytes/sec)
ftp> ls
227 Entering Passive Mode (192,168,0,11,11,222).
150 Here comes the directory listing.
-rw-r--r-- 1 1000 1000 34 Jan 16 21:18 ftp_client_file
-rw-r--r-- 1 0 0 34 Jan 16 21:06 ftp_server_file
226 Directory send OK.
ftp> quit
221 Goodbye.
[ftp_client@localhost ~]$
Verify the files are both on the client and the server:
$ ll ftp*
total 4
-rw-r--r--. 1 ftp_server ftp_server 34 Jan 16 15:04 ftp_client_file
-rw-r--r--. 1 root root 34 Jan 16 15:03 ftp_server_file
The initial permissions for both files were 644, but I had no problems.

Related

Permission denied error when starting Elasticsearch as Singularity container

I am trying to run a single-node Elasticsearch instance on an HPC cluster. To do this, I am converting the Elasticsearch Docker container into a Singularity container. When I launch the container I get the following error:
$ singularity exec --overlay overlay.img elastic.sif /usr/share/elasticsearch/bin/elasticsearch
Could not create auto-configuration directory
Exception in thread "main" java.lang.RuntimeException: starting java failed with [1]
output:
[0.000s][error][logging] Error opening log file 'logs/gc.log': Permission denied
[0.000s][error][logging] Initialization of output 'file=logs/gc.log' using options 'filecount=32,filesize=64m' failed.
error:
Invalid -Xlog option '-Xlog:gc*,gc+age=trace,safepoint:file=logs/gc.log:utctime,pid,tags:filecount=32,filesize=64m', see error log for details.
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
at org.elasticsearch.server.cli.JvmOption.flagsFinal(JvmOption.java:113)
at org.elasticsearch.server.cli.JvmOption.findFinalOptions(JvmOption.java:80)
at org.elasticsearch.server.cli.MachineDependentHeap.determineHeapSettings(MachineDependentHeap.java:59)
at org.elasticsearch.server.cli.JvmOptionsParser.jvmOptions(JvmOptionsParser.java:132)
at org.elasticsearch.server.cli.JvmOptionsParser.determineJvmOptions(JvmOptionsParser.java:90)
at org.elasticsearch.server.cli.ServerProcess.createProcess(ServerProcess.java:211)
at org.elasticsearch.server.cli.ServerProcess.start(ServerProcess.java:106)
at org.elasticsearch.server.cli.ServerProcess.start(ServerProcess.java:89)
at org.elasticsearch.server.cli.ServerCli.startServer(ServerCli.java:213)
at org.elasticsearch.server.cli.ServerCli.execute(ServerCli.java:90)
at org.elasticsearch.common.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:54)
at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:85)
at org.elasticsearch.cli.Command.main(Command.java:50)
at org.elasticsearch.launcher.CliToolLauncher.main(CliToolLauncher.java:64)
If I understand correctly, Elasticsearch is trying to create a log file in /var/log/elasticsearch but does not have the correct permissions. So I created the following recipe to create the folders and set the permissions so that any process can write into the log directory. My recipe is the following:
Bootstrap: docker
From: elasticsearch:8.3.1
%files
elasticsearch.yml /usr/share/elasticsearch/config/
%post
mkdir -p /var/log/elasticsearch
chown -R elasticsearch:elasticsearch /var/log/elasticsearch
chmod -R 777 /var/log/elasticsearch
mkdir -p /var/data/elasticsearch
chown -R elasticsearch:elasticsearch /var/data/elasticsearch
chmod -R 777 /var/data/elasticsearch
The elasticsearch.yml file has the following content:
cluster.name: "docker-cluster"
network.host: 0.0.0.0
discovery.type: single-node
ingest.geoip.downloader.enabled: false
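For reference, this is roughly how such a recipe is built into an image (a sketch; the definition file name elastic.def is an assumption, and the chown in %post requires building as root or with --fakeroot):
sudo singularity build elastic.sif elastic.def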
After building this recipe the directory /var/log/elasticsearch seems to get created correctly:
$ singularity exec elastic.sif ls -alh /var/log/
total 569K
drwxr-xr-x 4 root root 162 Jul 8 14:43 .
drwxr-xr-x 12 root root 172 Jul 8 14:43 ..
-rw-r--r-- 1 root root 7.7K Jun 29 17:29 alternatives.log
drwxr-xr-x 2 root root 69 Jun 29 17:29 apt
-rw-r--r-- 1 root root 58K May 31 11:43 bootstrap.log
-rw-rw---- 1 root utmp 0 May 31 11:43 btmp
-rw-r--r-- 1 root root 187K Jun 29 17:30 dpkg.log
drwxrwxrwx 2 elasticsearch elasticsearch 3 Jul 8 14:43 elasticsearch
-rw-r--r-- 1 root root 32K Jun 29 17:30 faillog
-rw-rw-r-- 1 root utmp 286K Jun 29 17:30 lastlog
-rw-rw-r-- 1 root utmp 0 May 31 11:43 wtmp
But when I launch the container I get the permission denied error listed above.
What is missing here? What permissions is Elasticsearch expecting?
The following workaround seems to be working for me now:
When launching the Singularity container, the elasticsearch process is executed inside the container with the same UID as my own UID (the user that is launching the Singularity container with singularity exec). The Elasticsearch container is configured to run elasticsearch as a separate elasticsearch user that exists inside the container. The issue is that Singularity (unlike Docker) will run every process inside the container with my own UID rather than the elasticsearch UID, resulting in the error above.
To work around this, I created a base Ubuntu Singularity image and then installed Elasticsearch into the container following these installation instructions (https://www.elastic.co/guide/en/elasticsearch/reference/current/targz.html). Because the installation was performed with my system user and UID, the entire Elasticsearch installation belongs to my system user rather than a separate elasticsearch user. Then I can launch the Elasticsearch service inside the container.
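A quick way to see the UID mismatch described above (a sketch, assuming the original image is still named elastic.sif) is to compare your UID inside the container with the numeric ownership of the Elasticsearch installation:
singularity exec elastic.sif id
singularity exec elastic.sif ls -ln /usr/share/elasticsearch
The first command prints your own UID (Singularity keeps it), while the second shows who owns the installation inside the image; if those UIDs differ, writes into the Elasticsearch directories fail, which matches the errors above.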

How to attach a USB device to a docker container under Ubuntu

I am trying to give a container access to a USB device on the host. The device appears to exist but docker seems unable to access it when creating the container.
Any thoughts on how to proceed?
The device appears to exist:
$ ls -l /dev/ttyUSB0
crw-rw---- 1 root dialout 188, 0 Jun 21 20:47 /dev/ttyUSB0
It's a Sonoff zigbee dongle:
$ ls -l /dev/serial/by-id
total 0
lrwxrwxrwx 1 root root 13 Jun 21 20:47 usb-ITead_Sonoff_Zigbee_3.0_USB_Dongle_Plus_1ec67e3b0b86ec11b4cd631719c2d21c-if00-port0 -> ../../ttyUSB0
But when I try to pass it to a container (simple example here), I get an error:
$ docker run --device /dev/ttyUSB0 alpine
docker: Error response from daemon: error gathering device information while adding custom device "/dev/ttyUSB0": no such file or directory.

jmxterm: "Unable to create a system terminal" inside Docker container

I have a Docker image which contains JRE, some Java web application and jmxterm. The latter is used for running some ad-hoc administrative tasks. The image is used on the CentOS 7 server with Docker 1.13 (which is pretty old but is the latest version which is supplied via the distro's repository) to run the web application itself.
All works well, but after updating jmxterm from 1.0.0 to the latest version (1.0.2), I get the following warning when entering the running container and starting jmxterm:
WARNING: Unable to create a system terminal, creating a dumb terminal (enable debug logging for more information)
After this, jmxterm does not react to arrow keys (when trying to navigate through the command history), nor does it provide autocompletion.
Some quick investigation shows that the problem can be reproduced in a clean CentOS 7 environment. Say, this is how I could bootstrap the system and the container with all the stuff I need:
$ vagrant init centos/7
$ vagrant up
$ vagrant ssh
[vagrant@localhost ~]$ sudo yum install docker
[vagrant@localhost ~]$ sudo systemctl start docker
[vagrant@localhost ~]$ sudo docker run -it --entrypoint bash openjdk:11
root@0c4c614de0ee:/# wget https://github.com/jiaqi/jmxterm/releases/download/v1.0.2/jmxterm-1.0.2-uber.jar
And this is how I enter the container and run jmxterm:
[vagrant@localhost ~]$ sudo docker exec -it 0c4c614de0ee sh
root@0c4c614de0ee:/# java -jar jmxterm-1.0.2-uber.jar
WARNING: Unable to create a system terminal, creating a dumb terminal (enable debug logging for more information)
root@0c4c614de0ee:/# bea<TAB>
<Nothing happens, but autocompletion had to appear>
A few observations:
the problem does not appear with the older jmxterm, no matter which image I use;
the problem arises with the new jmxterm, no matter which image I use;
the problem is not reproducible on my laptop (which has a newer kernel and Docker);
the problem is not reproducible if I use the latest Docker (from the external repo) on the CentOS 7 server instead of CentOS 7's native version 1.13.
What is happening, and why is the error reproducible only in specific environments? Is there any workaround for this?
TLDR: running new jmxterm versions as java -jar jmxterm-1.0.2-uber.jar < /dev/tty is a quick, dirty and hacky workaround for having the autocompletion and other stuff work inside the interactive container session.
A quick check shows that jmxterm tries to determine the terminal device used by the process — probably to obtain the terminal capabilities later — by running the tty utility:
root@0c4c614de0ee:/# strace -f -e 'trace=execve,wait4' java -jar jmxterm-1.0.2-uber.jar
execve("/opt/java/openjdk/bin/java", ["java", "-jar", "jmxterm-1.0.2-uber.jar"], 0x7ffed3a53210 /* 36 vars */) = 0
...
[pid 432] execve("/usr/bin/tty", ["tty"], 0x7fff8ea39608 /* 36 vars */) = 0
[pid 433] wait4(432, [{WIFEXITED(s) && WEXITSTATUS(s) == 1}], 0, NULL) = 432
WARNING: Unable to create a system terminal, creating a dumb terminal (enable debug logging for more information)
The utility fails with the status of 1, which is likely the reason for the error message. Why?
root@0c4c614de0ee:/# strace -y tty
...
readlink("/proc/self/fd/0", "/dev/pts/3", 4095) = 10
stat("/dev/pts/3", 0x7ffe966f2160) = -1 ENOENT (No such file or directory)
...
write(1</dev/pts/3>, "not a tty\n", 10not a tty
) = 10
The utility says "not a tty" while we definitely have one. A quick check shows that... There is really no PTY device in the container though the standard streams of the shell are connected to one!
root@0c4c614de0ee:/# ls -l /proc/self/fd
total 0
lrwx------. 1 root root 64 Jun 3 21:26 0 -> /dev/pts/3
lrwx------. 1 root root 64 Jun 3 21:26 1 -> /dev/pts/3
lrwx------. 1 root root 64 Jun 3 21:26 2 -> /dev/pts/3
lr-x------. 1 root root 64 Jun 3 21:26 3 -> /proc/61/fd
root@0c4c614de0ee:/# ls -l /dev/pts
total 0
crw-rw-rw-. 1 root root 5, 2 Jun 3 21:26 ptmx
What if we check the same with latest Docker?
root@c0ebd608f79a:/# ls -l /proc/self/fd
total 0
lrwx------ 1 root root 64 Jun 3 21:45 0 -> /dev/pts/1
lrwx------ 1 root root 64 Jun 3 21:45 1 -> /dev/pts/1
lrwx------ 1 root root 64 Jun 3 21:45 2 -> /dev/pts/1
lr-x------ 1 root root 64 Jun 3 21:45 3 -> /proc/16/fd
root@c0ebd608f79a:/# ls -l /dev/pts
total 0
crw--w---- 1 root tty 136, 0 Jun 3 21:44 0
crw--w---- 1 root tty 136, 1 Jun 3 21:45 1
crw-rw-rw- 1 root root 5, 2 Jun 3 21:45 ptmx
Bingo! Now we have our PTYs where they should be, so jmxterm works well with latest Docker.
It seems pretty weird that with older Docker the processes are connected to some PTYs while there are no devices for them in /dev/pts, but tracing the Docker process explains why this happens. Older Docker allocates the PTY for the container before setting other things up (including entering the new mount namespace and mounting devpts into it or just entering the mount namespace in case of docker exec -it):
[vagrant@localhost ~]$ sudo strace -p $(pidof docker-containerd-current) -f -e trace='execve,mount,unshare,openat,ioctl'
...
[pid 3885] openat(AT_FDCWD, "/dev/ptmx", O_RDWR|O_NOCTTY|O_CLOEXEC) = 9
[pid 3885] ioctl(9, TIOCGPTN, [1]) = 0
[pid 3885] ioctl(9, TIOCSPTLCK, [0]) = 0
...
[pid 3898] unshare(CLONE_NEWNS|CLONE_NEWUTS|CLONE_NEWIPC|CLONE_NEWNET|CLONE_NEWPID) = 0
...
[pid 3899] mount("devpts", "/var/lib/docker/overlay2/3af250a9f118d637bfba5701f5b0dfc09ed154c6f9d0240ae12523bf252e350c/merged/dev/pts", "devpts", MS_NOSUID|MS_NOEXEC, "newinstance,ptmxmode=0666,mode=0"...) = 0
...
[pid 3899] execve("/bin/bash", ["bash"], 0xc4201626c0 /* 7 vars */ <unfinished ...>
Note the newinstance mount option which ensures that the devpts mount owns its PTYs exclusively and does not share them with other mounts. This leads to the interesting effect: the PTY device for the container stays on the host and belongs to the host's devpts mount, while the containerized process still has access to it, as it obtained the already-open file descriptors just in the beginning of its life!
The latest Docker first mounts devpts for the container and then allocates the PTY, so the PTY belongs to container's devpts mount and is visible inside the container's filesystem:
$ sudo strace -p $(pidof containerd) -f -e trace='execve,mount,unshare,openat,ioctl'
...
[pid 14043] unshare(CLONE_NEWNS|CLONE_NEWUTS|CLONE_NEWIPC|CLONE_NEWPID|CLONE_NEWNET) = 0
...
[pid 14044] mount("devpts", "/var/lib/docker/overlay2/b743cf16ab954b9a4b4005bca0aeaa019c4836c7d58d6073044e5b48446c3d62/merged/dev/pts", "devpts", MS_NOSUID|MS_NOEXEC, "newinstance,ptmxmode=0666,mode=0"...) = 0
...
[pid 14044] openat(AT_FDCWD, "/dev/ptmx", O_RDWR|O_NOCTTY|O_CLOEXEC) = 7
[pid 14044] ioctl(7, TIOCGPTN, [0]) = 0
[pid 14044] ioctl(7, TIOCSPTLCK, [0]) = 0
...
[pid 14044] execve("/bin/bash", ["/bin/bash"], 0xc000203530 /* 4 vars */ <unfinished ...>
Well, the problem is caused by inappropriate Docker behavior, but how come the older jmxterm worked well in the same environment? Let's check (note that a Java 8 image is used here, as the older jmxterm does not play well with Java 11):
root@504a7757e310:/# wget https://github.com/jiaqi/jmxterm/releases/download/v1.0.0/jmxterm-1.0.0-uber.jar
root@504a7757e310:/# strace -f -e 'trace=execve,wait4' java -jar jmxterm-1.0.0-uber.jar
execve("/usr/local/openjdk-8/bin/java", ["java", "-jar", "jmxterm-1.0.0-uber.jar"], 0x7fffdcaebdd0 /* 10 vars */) = 0
...
[pid 310] execve("/bin/sh", ["sh", "-c", "stty -a < /dev/tty"], 0x7fff1f2a1cc8 /* 10 vars */) = 0
So, the older jmxterm just uses /dev/tty instead of asking tty for the device name, and this works, as that device is present inside the container:
root@504a7757e310:/# ls -l /dev/tty
crw-rw-rw-. 1 root root 5, 0 Jun 3 21:36 /dev/tty
The huge difference between these versions of jmxterm is that the newer version uses a higher major version of jline, the library responsible for interaction with the terminal (akin to readline in the C world). The difference between major jline versions leads to the difference in jmxterm's behavior: current versions just rely on tty.
This observation leads us to a quick and dirty workaround which requires neither updating Docker nor patching the jline/jmxterm tandem: we may just forcibly attach jmxterm's stdin to /dev/tty and thus make jline use this device (which is now referenced by /proc/self/fd/0) instead of the /dev/pts entry (which, formally, is not always correct, but is enough for ad-hoc use):
root@0c4c614de0ee:/# java -jar jmxterm-1.0.2-uber.jar < /dev/tty
Welcome to JMX terminal. Type "help" for available commands.
$>bea<TAB>
bean beans
Now we have the autocompletion, history and other cool things we need to have.
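If you use this workaround regularly, one option (purely a sketch; the jar location /opt/jmxterm-1.0.2-uber.jar is an assumption) is to wrap the redirection in an alias inside the container session:
alias jmxterm='java -jar /opt/jmxterm-1.0.2-uber.jar < /dev/tty'
jmxterm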
If you are trying to run an interactive application (one that needs a tty) inside a Docker container or a pod in Kubernetes, then the following should work.
For docker-compose use:
image: image-name:2.0
container_name: container-name
restart: always
stdin_open: true
tty: true
For kubernetes use:
spec:
  containers:
  - name: web
    image: web:latest
    tty: true
    stdin: true
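For plain docker (not compose), the equivalent is simply the -i and -t flags that the question above already uses, e.g.:
docker run -it --name container-name image-name:2.0
docker exec -it container-name sh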

Cannot delete folder in Ubuntu in Windows 10

I have a rails app in a folder in Ubuntu. I am using atom and git. I've always run git from the console, but last night I installed the hydrogen package on atom, so I can run git from atom. After this my app was a mess. I was trying to switch from one branch to another, but the files from one branch were transferred to the one that I had just switched to. I finally switched to master branch, which was supposed to have just the default files, but there were about 2000 files to commit. I tried to delete the folder but it doesn't work. Any suggestions about how to delete it, and some tips about using git on atom, when using Ubuntu?
$ ls -la
total 0
drwxrwxrwx 1 raluca raluca 4096 May 30 13:37 .
drwxrwxrwx 1 raluca raluca 4096 May 30 14:34 ..
drwxrwxrwx 1 raluca raluca 4096 May 30 13:27 app
drwxrwxrwx 1 raluca raluca 4096 May 30 13:37 db
drwxrwxrwx 1 raluca raluca 4096 May 29 19:53 public
drwxrwxrwx 1 raluca raluca 4096 May 30 13:27 test
drwxrwxrwx 1 raluca raluca 4096 May 29 19:53 vendor
I had the same problem (using Ubuntu on Windows). I created a file that was in my home directory and I could not delete it either with Linux (sudo rm filename) or with Windows (del filename). I got permission denied on both.
The solution was to:
First use chmod 777 ~, granting any user permission to edit the directory.
Then use chown username filename, so that your current username can delete the file.
If you run rm filename you will still get the permission denied error at this point. Therefore, to let your previous commands take effect you will need to shut down your computer.
Restart your computer and check the home directory with ls in bash; the file should be gone. It was for me. However, it will probably have moved to a different directory on your computer (i.e. your Windows C drive "home directory").
So open cmd as admin and run dir "\filename*" /s. You will see the file is still on the computer but has moved to C:\Users\username. Navigate to that folder using cmd and type dir; you will see the file there.
Finally, in cmd, type del filename to delete the file (no permission error), or, if it is a folder you want to delete, use rd foldername /s. Type dir and you will see the file has been permanently deleted from your computer.
Since the files aren't committed yet, try this out:
git clean -f -d
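If you want to see what would be removed before deleting anything for real, git clean has a dry-run flag:
git clean -n -d
followed by git clean -f -d once the list looks right.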

Docker tomcat8-jre8 hacked?

I hosted a web app on Jelastic (dogado) as a Docker container (the official Docker container). After 2 weeks I got an email:
Dear Jelastic customer, there was a process of the command
"/usr/local/tomcat/3333" which was sending massive packets to
different targets this morning. The symptoms look like the docker
instance has a security hole and was used in a DDoS attack or part of
a botnet.
The top command showed this process:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
334 root 20 0 104900 968 456 S 99.2 0.1 280:51.95 3333
root@node0815-somename:/# ls -al /proc/334
...
lrwxrwxrwx 1 root root 0 Jul 26 08:16 cwd -> /usr/local/tomcat
lrwxrwxrwx 1 root root 0 Jul 26 08:16 exe -> /usr/local/tomcat/3333
We have killed the process and changed the permissions of the file:
root@node0815-somename:/# kill 334
root@node0815-somename:/# chmod 000 /usr/local/tomcat/3333
Please investigate or use a more security-hardened docker template.
Has anyone encountered the same or a similar problem before? Is it possible that the container was hacked?
The guys who provide the container gave me a hint...
I removed only the ROOT war:
RUN rm -rf /usr/local/tomcat/webapps/ROOT
I completely forgot that Tomcat ships example apps as well. So I have to delete those security holes too:
RUN rm -rf /usr/local/tomcat/webapps/
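For context, a minimal sketch of how those lines might sit in a Dockerfile (the base image tag and the app.war name are assumptions for illustration, not taken from the original setup):
FROM tomcat:8-jre8
# remove everything Tomcat ships by default (ROOT, examples, docs, manager apps)
RUN rm -rf /usr/local/tomcat/webapps/
# deploy only your own application
COPY app.war /usr/local/tomcat/webapps/ROOT.war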
Do you use any protection tools? We cannot rule out the scenario where your container gets hacked if there is no protection in place.
We strongly recommend using iptables and Fail2Ban to protect your containers from hacking attacks (you have root access to your Docker container via SSH, so you are able to install and configure these packages), especially if you have attached a public IP to your containers.
Also, you have access to all container logs (via the Dashboard or SSH), so you are able to analyze the logs and take preventive actions.
Have a nice day.
