Failed to start Login Service on BBB after modifying /etc/passwd - beagleboneblack

I'm using a BeagleBone Black board running Linux.
While doing some work, I set a root password so that logging in as root requires a password.
(By default, the root password was disabled, so I could log in as root without a password.)
I then wanted to disable the root password again and go back to the default passwordless login.
I modified the /etc/passwd file, saved and exited, then powered the board off and on.
I tried to log in, but the login service failed, so I cannot access my BBB.
What I modified is the /etc/passwd file:
original: "root:x:0:0:............."
modified: "root:0:0:..............."
I removed the "x:" part; a BBB forum said it would solve my problem, so I did it.
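For context, each /etc/passwd entry has seven colon-separated fields (name:password:UID:GID:comment:home:shell), so removing "x:" drops a field and shifts the rest, leaving a malformed entry. A sketch of the difference (home and shell shown here are just the usual Debian defaults, not copied from my board):

root:x:0:0:root:/root:/bin/bash    # password looked up in /etc/shadow
root::0:0:root:/root:/bin/bash     # empty password field = no password asked
root:0:0:root:/root:/bin/bash      # only six fields left - malformed line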
After that I exited the ssh session and tried to connect to the BBB via ssh again, but the connection was refused.
So I rebooted my BBB and tried to access it via ssh, but the connection kept being refused.
I connected to the BBB over UART, watched the log output from the board, and found that the login service could not start:
[FAILED] Failed to start Login Service.
See 'systemctl status systemd-logind.service' for details.
I could log in as debian/temppwd over UART after aborting the auto-login.
I tried to change /etc/passwd but couldn't because I'm not "root".
Even when I tried "sudo", it says "sudo: unknown user : root".
I tried "systemctl status systemd-logind.service" from the "debian" account, but it didn't work.
It says "Failed to connect to bus: No such file or directory".
My BBB systemd info is:
+++
debian@beaglebone:~$ ls /etc/systemd
journald.conf logind.conf network resolved.conf system system.conf timesyncd.conf user user.conf
+++
How can I recover from this?
Many thanks.

I found a solution and it works for me:
"sudo -u #0 vi /etc/passwd"
from "https://aslamlatheef.blogspot.com/2015/09/why-cant-i-sudo-unknown-user-root.html"
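For anyone else hitting this: sudo accepts a numeric UID via the #UID syntax, so it never has to resolve the name "root" from the broken /etc/passwd. A rough recovery sketch (the corrected lines are my assumption of how a full root entry should look; depending on your shell you may need to quote the #):

sudo -u '#0' vi /etc/passwd
# restore the root entry to a full seven-field line, e.g.
#   root:x:0:0:root:/root:/bin/bash    (password kept in /etc/shadow)
# or
#   root::0:0:root:/root:/bin/bash     (passwordless root login)
sudo -u '#0' reboot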

Related

How do I resolve this OSError: [Errno 48] Address already in use error while working with the byob botnet on GitHub?

(I have seen other solutions to "Errno 48" issues on StackOverflow, but none have been successful yet.)
I am trying to develop a botnet using byob on github here: https://github.com/malwaredllc/byob
I encounter an address-already-in-use error every time I run the command sudo ./startup.sh. It returns OSError: [Errno 48] Address already in use.
However, when I run ps -fA | grep python the only match is 502 18126 16973 0 9:16PM ttys000 0:00.00 grep python, and when I try to kill it with kill -9 18126, I get this error: kill: kill 18126 failed: no such process.
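For anyone reproducing this, note that the matched line above is the grep process itself, which has already exited by the time kill runs. A sketch of how to see which process actually owns the port instead (lsof ships with macOS; the PID below is a placeholder):

sudo lsof -nP -iTCP:5000 -sTCP:LISTEN   # list whatever is listening on TCP port 5000
sudo kill <PID>                         # stop it if it is not something you need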
Does anyone have any idea what to do?
I am using a MacBook Pro with an M1 Pro chip running macOS 12.0.1 Monterey. Also, byob is trying to run on port 5000 of 127.0.0.1 (the loopback address, not my public IP): http://127.0.0.1:5000.
In case you want to reproduce the problem: install docker.io or the Docker Desktop app (depending on your OS), cd into <outer-dir>/byob-master/web-gui, and execute sudo ./startup.sh. The code will not work without access to Docker, and the program needs to be run with admin permissions using the sudo prefix. The initial downloads take a while and it will prompt you to restart once. Then, when you run it again, I encounter this problem...
Please let me know if someone was able to fix this. Thanks!

gdbserver does not attach to a running process in a docker container

In my docker container (based on SUSE distribution SLES 15) both the C++ executable (with debug enhanced code) and the gdbserver executable are installed.
Before doing anything productive the C++ executable sleeps for 5 seconds, then initializes and processes data from a database. The processing time is long enough to attach it to gdbserver.
The C++ executable is started in the background and its process id is returned to the console.
Immediately afterwards the gdbserver is started and attaches to the same process id.
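For concreteness, the launch sequence looks roughly like this (executable name and port are made up for illustration, not taken from the real container):

./my_app &                            # start the debug build in the background
APP_PID=$!                            # capture its process id
gdbserver :44444 --attach $APP_PID    # attach gdbserver to that pid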
Problem: The gdbserver complains that it is not able to attach to the process:
Cannot attach to lwp 59: No such file or directory (2)
Exiting
In another attempt, I have copied the same gdbserver executable to /tmp in the docker container.
Starting this gdbserver gave a different error response:
Cannot attach to process 220: Operation not permitted (1)
Exiting
It has been verified that in both cases the process is still running: 'ps -e' clearly shows the process id and the process name.
If the process has already finished, a different error message is thrown; this is clear and needs no explanation:
gdbserver: unable to open /proc file '/proc/79/status'
The gdbserver was started once from outside of the container and once from inside.
In both scenarios gdbserver refused to attach to the running process:
$ kubectl exec -it POD_NAME --container debugger -- gdbserver --attach :44444 59
Cannot attach to lwp 59: No such file or directory (2)
Exiting
$ kubectl exec -it POD_NAME -- /bin/bash
bash-4.4$ cd /tmp
bash-4.4$ ./gdbserver 10.0.2.15:44444 --attach 220
Cannot attach to process 220: Operation not permitted (1)
Exiting
Can someone explain what causes gdbserver to refuse to attach to the specified process,
and advise how to overcome the mismatch, i.e. what do I need to examine to prepare the right handshake between the C++ executable and gdbserver?
The basic reason why gdbserver could not attach to the running C++ process is due to
a security enhancement in Ubuntu (versions >= 10.10):
By default, process A cannot trace a running process B unless B is a direct child of A
(or A runs as root).
Direct debugging is still always allowed, e.g. gdb EXE and strace EXE.
The restriction can be loosened by changing the value of /proc/sys/kernel/yama/ptrace_scope from 1 (the default) to 0 (tracing allowed for all processes). The security setting can be changed with:
echo 0 | sudo tee /proc/sys/kernel/yama/ptrace_scope
All credit for the description of ptrace scope belongs to the following post,
see the second answer by Eliah Kagan - thank you for the thorough explanation! - here:
https://askubuntu.com/questions/143561/why-wont-strace-gdb-attach-to-a-process-even-though-im-root
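Two follow-up notes for the container case (these are my assumptions, not part of the original answer): the same knob can be toggled through sysctl, and Docker typically also needs the SYS_PTRACE capability for ptrace-based attaching inside a container:

sysctl kernel.yama.ptrace_scope              # check the current value
sudo sysctl -w kernel.yama.ptrace_scope=0    # allow tracing of non-child processes

# when (re)creating the container, grant ptrace rights (image name is a placeholder)
docker run --cap-add=SYS_PTRACE --security-opt seccomp=unconfined my-image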

fail2ban won't start using nextcloud.log with jail

I have Nextcloud installed and working fine in a Docker container, but I want fail2ban to monitor the log files for brute-force attempts. I know Nextcloud has its own protection baked in, but it just throttles the login attempts, and I would like to ban the offenders outright (I have this problem with other containers as well). The docker-compose file is set to create the nextcloud.log file at /mnt/nextcloud/log/nextcloud.log. I followed this guide to create the jail:
https://www.c-rieger.de/nextcloud-installation-guide-ubuntu/#c06
Fail2ban is running on the host machine; however, fail2ban fails to start with:
[447]: ERROR Failed during configuration: Have not found any log file for nextcloud jail
[447]: ERROR Async configuration of server failed
Thinking it was simply a permission issue, I chowned everything to root and tried to start it again, but the service still won't start. What am I doing wrong?
Thanks for the help!
The docker-compose is set to create the nextcloud.log file to /mnt/nextcloud/log/nextcloud.log
Be sure this file really exists and that your jail.local has the correct logpath entry:
[nextcloud]
...
logpath = /mnt/nextcloud/log/nextcloud.log
You can also check resulting config using dump:
fail2ban-client -d | grep 'nextcloud.*logpath'
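For comparison, a fuller jail stanza might look like the following; the filter name and limits are only a sketch and assume you have a matching /etc/fail2ban/filter.d/nextcloud.conf (they are not taken from the guide):

[nextcloud]
enabled  = true
port     = 80,443
filter   = nextcloud
logpath  = /mnt/nextcloud/log/nextcloud.log
maxretry = 3
bantime  = 3600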
But I'm still not sure the error message you provided was thrown by fail2ban, because its error messages look different; see https://github.com/fail2ban/fail2ban/commit/27947407bc7910f0f50972113218ebc73c4a22c7
It should be something like:
-have not found a log file for nextcloud log
+Have not found any log file for nextcloud jail

Jenkins - file transfer to sudo user directory in the target server

I am trying to transfer all .sh files from one Unix server to another using Jenkins.
The files do get transferred, but they end up in my own Unix home directory; I need to transfer them to the sudo user's directory.
For example:
The source server name is "a" and the target server name is "u".
We use sell4 as the sudo user on the target server.
The files should end up in the home directory of the sell4 user.
I used the command shown in the Jenkins console output below:
Building in workspace /var/lib/jenkins/workspace/EDB-ExtractFilefromSVN
SSH: Connecting from host [a]
SSH: Connecting with configuration [u] ...
SSH: EXEC: STDOUT/STDERR from command [sudo scp *.sh sell4#u:/usr/app/TomcatDomain/ScoringTools_ACCDomain04/] ...
sudo: scp: command not found
SSH: EXEC: completed after 201 ms
SSH: Disconnecting configuration [u] ...
ERROR: Exception when publishing, exception message [Exec exit status not zero. Status [1]]
Gitcolony notification failed - java.lang.IllegalArgumentException: Invalid url:
Finished: UNSTABLE
Can you please suggest where I am going wrong here?
EDITS:
Adding the shell screenshot:
Ah, so it's some kind of plugin. It seems like you want to use local sudo to log in as a user on the remote server. It won't work this way: you can't open the door to the bathroom and expect to walk into a garden.
sudo changes your local user to root, not the user on the remote server.
Do not use sudo with the scp command; instead, follow these answers:
https://unix.stackexchange.com/questions/66021/changing-user-while-scp
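A sketch of the two usual alternatives, reusing the names from the question (sell4, host u, and the Tomcat path) purely for illustration; "youruser" is a placeholder:

# 1) copy directly as the target user, if sell4 is allowed to log in over ssh
scp *.sh sell4@u:/usr/app/TomcatDomain/ScoringTools_ACCDomain04/

# 2) copy as your own user, then move the files with sudo on the remote side
scp *.sh youruser@u:/tmp/
ssh -t youruser@u 'sudo mv /tmp/*.sh /usr/app/TomcatDomain/ScoringTools_ACCDomain04/'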

Jenkins Fail with: Host key verification failed

I downloaded and installed Jenkins for Mac OS X on my MacBook Pro (OS: Mountain Lion). I now want to set it up to pull down a project from Bitbucket and do an automatic build.
I created the ssh key, added it to bitbucket and tried to setup a build job. However, I get the error:
Failed to connect to repository : Command "git ls-remote -h HEAD" returned status code 128:
stdout:
stderr: Host key verification failed.
fatal: The remote end hung up unexpectedly
I tried to remove the domain causing the problem from known_hosts but am still getting this error.
Please advise.
I think I've found a possible solution in this post: http://colonelpanic.net/2011/06/jenkins-on-mac-os-x-git-w-ssh-public-key/
Jenkins on Mac OS X
I just finished setting up a build server on Mac OS X using Jenkins (formerly Hudson). The company I'm working for
(GradeCam) uses git and gitolite for our source control and so I
expected no trouble using Jenkins to build our tools using the git
plugin.
However, I quickly ran into a snag: the source control server is on a
public address and so our source code is not available except via ssh,
and gitolite ssh access uses private key authentication. Well, I'm an
experienced unix sysadmin, so that didn't sound like a big issue —
after all, setting up public key authentication is child's play, right?
Default install
The default installation of Jenkins on Mac OS X (at the time of this
writing) installs a Launch Agent plist to
/Library/LaunchAgents/org.jenkins-ci.plist. This plist file causes
Jenkins to load as user “daemon”, which sounds fine — except that the
home directory for the “daemon” user is /var/root, same as for user
root. This means that the .ssh dir in there will never have the right
permissions for a private key to be used.
Creating a new hidden user
My solution was to create a new “hidden” user for Jenkins to run
under. Following instructions I found on a blog post, I created a user
“jenkins” with a home directory “/Users/Shared/Jenkins/Home”:
sudo dscl . create /Users/jenkins
sudo dscl . create /Users/jenkins PrimaryGroupID 1
sudo dscl . create /Users/jenkins UniqueID 300
sudo dscl . create /Users/jenkins UserShell /bin/bash
sudo dscl . passwd /Users/jenkins $PASSWORD
sudo dscl . create /Users/jenkins home /Users/Shared/Jenkins/Home/
I then stopped Jenkins: “sudo launchctl unload -w
/Library/LaunchAgents/org.jenkins-ci.plist” and edited the plist file
to set the username to jenkins instead of daemon.
“chown -R jenkins: /Users/Shared/Jenkins/Home”
sets the permissions how they need to be, and then “sudo launchctl
load -w /Library/LaunchAgents/org.jenkins-ci.plist” should get you up
and running!
To get git over ssh running, "sudo su - jenkins" to get a console as
the jenkins user and set up the ssh keys and such. Make sure you can
ssh to where you want to go (or even do a test git clone) because you
need to save the keys so it doesn’t ask for them when jenkins tries to
do the clone.
That should do you! Hope it helps someone.
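In the same spirit, the step that usually clears the "Host key verification failed" message itself is accepting the remote host key as the user Jenkins runs under; a sketch assuming the jenkins user from the blog post and Bitbucket as the remote:

sudo su - jenkins                                 # become the jenkins user
ssh -T git@bitbucket.org                          # answer "yes" to trust the host key
# or, non-interactively:
ssh-keyscan bitbucket.org >> ~/.ssh/known_hosts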
