Reset UID/GID counter after vmail setup

On my personal server, I recently made the move from Ubuntu Server 10.04 to Debian Wheezy 7.2. As part of that transition, I went from physical users/mailboxes to a virtual mailbox setup using dovecot+postfix+postfixadmin. In doing so, I created a vmail user with uid=5000 and gid=5000.
Now, whenever I create a new user, it gets assigned uid=5001 and gid=5001. I eventually notice this has happened and use Webmin to set the uid/gid appropriately. My understanding was that useradd looks at the "last" entry in passwd/group to determine the new uid/gid; however, I've made several new users since (whose uid/gid were changed using Webmin), so vmail is definitely not the last entry.
How can I "reset" the useradd counter? Or somehow ignore the vmail entry? I would like new users to get assigned uid and gid as if vmail didn't exist, so that I don't have to manually reassign them.

You could lower UID_MAX and GID_MAX in /etc/login.defs from the default 60000 to 4999. useradd doesn't look at the last passwd entry; it assigns the highest UID currently in use within the [UID_MIN, UID_MAX] range, plus one. With vmail's 5000 inside that range, every new user starts from 5001. Capping the range at 4999 makes useradd ignore the vmail entry entirely.
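A sketch of that change, run against a throwaway copy so it is safe to try (on the real system you would edit /etc/login.defs itself):

```shell
# Make a miniature login.defs to demonstrate on (illustrative values)
cat > /tmp/login.defs <<'EOF'
UID_MIN  1000
UID_MAX 60000
GID_MIN  1000
GID_MAX 60000
EOF

# Cap the automatic ranges below vmail's uid/gid of 5000
sed -i -e 's/^UID_MAX.*/UID_MAX  4999/' -e 's/^GID_MAX.*/GID_MAX  4999/' /tmp/login.defs
grep -E '^(UID|GID)_MAX' /tmp/login.defs
```

A one-off alternative is overriding the key for a single invocation, e.g. useradd -K UID_MAX=4999 newuser.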


MariaDB settings in Docker

A while back I created an instance of mariadb inside a docker container on a machine running Ubuntu. I've since learned that I'll need to update some settings to keep things running smoothly, but when I created the image, I did not specify any .cnf volumes. How do I update/create a .cnf file for this image? I'm a complete newb when it comes to docker, so please spoon-feed me.
I've tried accessing the file from within the image, but there are no text editors.
The MariaDB defaults work pretty much out of the box (container) for small instances. You should only need to change settings when problems occur.
If you have spare memory, you can increase innodb_buffer_pool_size.
With the mariadb container, you don't need to edit the .cnf files; you can just add a few options on the command line per the docs (which you should definitely read).
I recommend using the defaults for a while and, if you encounter problems, posting a new question on dba.stackexchange.com that includes SHOW GLOBAL STATUS output and specifics on the slow queries (SHOW CREATE TABLE tblname / EXPLAIN query).
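As a hedged sketch, assuming the official mariadb image: server options can go either on the container command line after the image name, or into a small .cnf fragment mounted into /etc/mysql/conf.d, so nothing inside the image needs editing (the paths, container names, and 1G value below are illustrative):

```shell
# Write a config fragment on the host (illustrative location)
mkdir -p /tmp/mariadb-conf
cat > /tmp/mariadb-conf/overrides.cnf <<'EOF'
[mysqld]
innodb_buffer_pool_size = 1G
EOF
grep innodb_buffer_pool_size /tmp/mariadb-conf/overrides.cnf

# Then recreate the container with either approach (illustrative names):
#   docker run -d --name mariadb -v mariadb_data:/var/lib/mysql \
#     -v /tmp/mariadb-conf:/etc/mysql/conf.d:ro -e MARIADB_ROOT_PASSWORD=secret mariadb
# or pass the option directly on the command line:
#   docker run -d --name mariadb -v mariadb_data:/var/lib/mysql \
#     -e MARIADB_ROOT_PASSWORD=secret mariadb --innodb-buffer-pool-size=1G
```

Because the datadir lives on the mariadb_data volume, removing and recreating the container does not lose data.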

Is it redundant in a Dockerfile to run USER root since you're already root?

Looking at this Dockerfile, it starts with:
FROM sequenceiq/pam:centos-6.5
MAINTAINER SequenceIQ
USER root
Now that seems redundant, since by default you'd already be root. But for argument's sake - let's look at the parent Dockerfile....that doesn't change the user.
Now let's look at the grandparent Dockerfile. (It doesn't seem to be available).
My question is: is it redundant in a Dockerfile to run USER root since you're already root?
Yes, it's redundant, but there's almost no downside to leaving this redundancy in. It may have been done to develop against other images, or to support use cases that swap out the base image. It could prevent future issues if the upstream image changes its behavior. Or the authors may just want to be explicit that this container needs to run commands as root.
If an image was generated from a source that changed root to a user, you may not have access to all resources inside it. However, if you load the image:
FROM xxxxxx/xxxxxx:latest
USER root
That will give you root access to the image's resources. I used this after being refused access to change /etc/apt/sources.list in an existing image that was not mine. It worked fine and let me change sources.list.
If you are already root, then it's redundant to use it.
As @BMitch also points out, you can use USER root to ensure you won't break things if the parent image changes its user in upcoming versions, among other things.
It actually depends on the image. In some images, such as grafana/grafana, the default user is not root and there is no sudo, so you must use USER root for any privileged task (e.g., updating or installing packages).
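A sketch of that pattern (the base image tag and package name are illustrative; recent grafana/grafana images are Alpine-based, so this uses apk; substitute apt-get for Debian-based images):

```dockerfile
FROM grafana/grafana:latest
# The image's default user is unprivileged, so escalate for install steps
USER root
RUN apk add --no-cache curl
# Drop back to the unprivileged user (uid 472 in the official image) for runtime
USER grafana
```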

How to reset a user password for Icinga-web version 1.8.4

I am running Icinga with the Classic UI, but a year ago I added Icinga-web as well.
I tested a couple of things with it and then left it behind.
Now I want to access it, but I don't remember what the credentials were.
Is there a way to reset the password or to create a new username and password for it?
Thank you in advance.
It depends on how you installed Icinga Web 1.x in the first place. The sources contain a make target to safely reset the root user's password.
https://docs.icinga.com/latest/en/icinga-web-scratch.html
make icinga-reset-password
If you are using Debian packages, you can reconfigure the package:
dpkg-reconfigure icinga-web

SELinux type getting set incorrectly for files uploaded via a Rails application

So I have a web application running on Centos 6.5.
The application is a Ruby/Rails app, but the images are served by Apache HTTPD.
The application folder is in a user home folder, but I've granted HTTPD the correct permissions, and have enabled httpd_enable_home_dirs within SELinux. All static images are working just fine.
The problem I am seeing is that when an end user uploads an image (a profile icon), the SELinux context for the file gets set to unconfined_u:object_r:user_tmp_t:s0 instead of unconfined_u:object_r:usr_t:s0.
If I manually run restorecon on the file, the context gets fixed and the image works. Any idea how I can make sure the file gets created with the correct context? I've looked into using restorecond, but it looks like it won't recursively check subdirectories, and the subdirectory structure is not predictable.
Any help is appreciated.
Most likely your application is moving ('mv') the object from /tmp or /var/tmp to the destination location.
By default, when an object is moved with 'mv', so is its security metadata, and the object ends up at the destination with old and inaccurate security metadata. Running 'restorecon' on the destination objects resets the contexts to what the policy thinks they should be.
There are various ways to deal with this: either allow your webapp to read objects with the inaccurate context, or tell your webapp to use 'mv' with the -Z option, or to use 'cp' instead. (The 'cp' command copies the object, so the target ends up with the appropriate security metadata, usually inherited from the target's parent directory.)
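A minimal sketch of the 'cp' approach, using hypothetical paths (on the real system the destination would be the Rails app's public directory; cp creates a new inode, so on an SELinux host the file picks up the destination directory's default label instead of keeping user_tmp_t):

```shell
# Hypothetical paths standing in for /tmp and the app's public dir
mkdir -p /tmp/demo/public/images
echo fake-image-bytes > /tmp/upload.png

# Copy then delete, instead of a bare 'mv' that would carry user_tmp_t along
cp /tmp/upload.png /tmp/demo/public/images/ && rm /tmp/upload.png

# On an SELinux host, the alternatives are:
#   mv -Z /tmp/upload.png /srv/app/public/images/   # newer coreutils: relabel on move
#   restorecon -Rv /srv/app/public/images/          # fix labels after the fact
```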
So apparently SELinux suppresses some error messages...
In order to debug this I had to run
semodule -DB
This rebuilds and reloads the local policy with the 'dontaudit' rules disabled. Once dontaudit is off, the suppressed denials show up in the audit log, and you can generate a new policy module with the usual:
sealert -a /var/log/audit/audit.log
Then run the audit2allow command it suggests for the denial in question.
You can set logging back to normal afterwards by running
semodule -B

Network Service account does not accept local paths

I am creating a program that runs as a service and creates database backups (using pg_dump.exe) at certain points during the day. This program needs to be able to write the backup files to local drives AND mapped network drives.
At first, I was unable to write to network drives, but solved the problem by having the service log on as an administrator account. However, my boss wants the program to run without users having to key in a username and password for the account.
I tried to get around this by using the Network Service account (which does not need a password and always has the same name). Now my program will write to network drives, but not local drives! I tried using the regular C:\<directory name>\ path syntax as well as \\<computer name>\C$\<directory name>\ syntax and also \\<ip address>\C$\<directory name>\, none of which work.
Is there any way to get the Network Service account to access local drives?
Just give the account permission to access those files/directories and it should work. For local files, you need to tweak the ACLs on the files and directories. For access via a network share, you have to change both the file ACLs and the permissions on the network share.
File ACLs can be modified in the Explorer UI or from the command line using the standard icacls.exe. For example, this command grants Read, Write and Delete permissions on a directory and all files underneath it to Network Service:
icacls c:\MyDirectory /T /grant "NT AUTHORITY\Network Service":(R,W,D)
File share permissions are easier to modify from the UI, using the fsmgmt.msc tool.
You will need to figure out the minimal set of permissions that needs to be applied. If you aren't worried about security at all, you can grant full permissions, but that is almost always overkill and exposes you further if the service is ever compromised.
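On the share side, permissions can also be granted when creating the share from the command line (the share name and path here are hypothetical):

```
net share Backups=C:\Backups /grant:"NT AUTHORITY\NETWORK SERVICE",CHANGE
```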
I worked around this problem by creating a new user at install time and adding it to the Administrators group. This allows the service to write to local and network drives without ever needing username/password input during setup.
