How to run sshd with nix (not nixos) in container? - nix

The nixos/nix image is based on Alpine Linux. I have installed openssh, but since there is no systemd, I need to bring up sshd myself.
The problem is that there is no /etc/ssh/sshd_config. I can see the file in the Nix store, but I am not sure what the proper approach is.
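For concreteness, this is roughly what I imagine the manual setup would look like, assuming openssh was installed with nix-env -iA nixpkgs.openssh (the paths and options here are guesses on my part, which is why I'm asking):
mkdir -p /etc/ssh /var/empty                           # directories sshd expects to exist
ssh-keygen -A                                          # generate host keys into /etc/ssh
printf 'Port 22\nPermitRootLogin prohibit-password\n' > /etc/ssh/sshd_config
"$(command -v sshd)" -D -e -f /etc/ssh/sshd_config     # absolute path, foreground, log to stderr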

Related

Preserving RPM database for containers

How can I preserve the RPM database for RPM installations that happen after the container is spun up? I know installing RPMs inside a running container is an anti-pattern, but we need this.
RPMs should be bundled with the image itself, but in our case the requirement is to preserve installations that happen after the container is spun up.
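One partial approach I am considering is to keep the RPM database on a named volume, along these lines (rpmdb and my-image are placeholder names; note this preserves only the database under /var/lib/rpm, not the files the packages install elsewhere in the filesystem):
docker run -v rpmdb:/var/lib/rpm my-image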

pgAdmin on OpenShift using RedHat base image

I am trying to create an image for OpenShift v4 using the Red Hat universal base image (registry.access.redhat.com/ubi8/ubi). Unfortunately this image comes with some limitations, at least for me, e.g. missing wget, and on top of that a corporate proxy is messing with the SSL certificates, so I am creating builds from a Dockerfile and running them directly in OpenShift.
So far my Dockerfile looks like:
FROM registry.access.redhat.com/ubi8/ubi
RUN \
dnf install -y https://download.postgresql.org/pub/repos/yum/reporpms/EL-8-aarch64/pgdg-redhat-repo-latest.noarch.rpm && \
dnf install -y postgresql13-server
CMD [ "systemctl start postgresql-13" ]
This ends up with "Error: GPG check FAILED". I need some help with how to create a proper Dockerfile using an image from Red Hat and the RPM package for Docker. Any other ideas are very welcome.
Thanks in advance!
"Error: GPG check FAILED" is telling you that your system is not trusting that repo. You need to import it's key as rpm --import https://download.postgresql.org/pub/repos/yum/RPM-GPG-KEY-PGDG-AARCH64 or whichever key is right for your version
You don't want to start a postgres server with systemd; that's actually against the container philosophy of running a single process inside a container. Also, you can't have a proper PID 1 inside OpenShift without messing with SCCs, since the main idea of OpenShift's restrictions is to run unprivileged containers, so getting systemd might be impossible in your environment.
Look at the existing postgres Dockerfiles out there to gain inspiration, e.g. the very popular Bitnami postgres image. Notice that there is an entrypoint.sh, which checks whether the database is already initialized and creates it if it's not. Then it actually launches postgres as postgres "-D" "$POSTGRESQL_DATA_DIR" "--config-file=$POSTGRESQL_CONF_FILE" "--external_pid_file=$POSTGRESQL_PID_FILE" "--hba_file=$POSTGRESQL_PGHBA_FILE"
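A heavily stripped-down sketch of that check-then-launch pattern (the real Bitnami script does far more; PGDATA and having the postgres binaries on PATH are assumptions here):
#!/bin/sh
# initialize the data directory only on the very first start
if [ ! -s "$PGDATA/PG_VERSION" ]; then
    initdb -D "$PGDATA"
fi
# exec so postgres becomes PID 1 and receives signals directly
exec postgres -D "$PGDATA"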
Unless you really need a postgres 13 built upon the RHEL 8 UBI, I suggest you look at the official Red Hat images; here is the link if you want to build them yourself: https://github.com/sclorg/postgresql-container . As you can see, building a proper postgresql image is quite a task, and without working through all the quirks and knowing everything beforehand you may end up with an improperly configured or corrupted database.
You may also have postgres Helm charts, templates or even operators configured in your cluster, and deploying a database can be as easy as a couple of clicks.
TL;DR: Do not reinvent the wheel and do not create custom database images unless you have to. And if you have to, draw inspiration from existing Dockerfiles from reputable vendors.

Initial setup for ssh on docker-compose

I am using Docker for macOS / Windows.
I connect to external servers via ssh from a shell in a docker container.
For now, I generate the ssh key in the docker shell and manually send the key to the servers.
However, with this method, every time I rebuild the container the ssh key is deleted.
So I want to set an initial ssh key when I build the image.
I have 2 ideas:
1. Mount the .ssh folder from my macOS host into the container and persist it. (Permission control might be difficult and complex...)
2. Write a script in docker-compose.yml or the Dockerfile that generates the ssh key and sends it to the servers. (Every time I build, a new key is sent...?)
Which is the best practice? Or do you have any idea how to set the ssh key automatically?
Best practice is typically to not make outbound ssh connections from containers. If what you’re trying to add to your container is a binary or application code, manage your source control setup outside Docker and COPY the data into an image. If it’s data your application needs to run, again fetch it externally and use docker run -v to inject it into the container.
As you say, managing this key material securely, and obeying ssh’s Unix permission requirements, is incredibly tricky. If I really didn’t have a choice but to do this I’d write an ENTRYPOINT script that copied the private key from a bind-mounted volume to my container user’s .ssh directory. But my first choice would be to redesign my application flow to not need this at all.
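A sketch of that entrypoint, assuming the key is bind-mounted read-only at /run/secrets/id_rsa and the container runs as a hypothetical user named app:
#!/bin/sh
set -e
# copy the mounted key into place and give it the permissions ssh insists on
mkdir -p /home/app/.ssh
cp /run/secrets/id_rsa /home/app/.ssh/id_rsa
chmod 700 /home/app/.ssh
chmod 600 /home/app/.ssh/id_rsa
chown -R app:app /home/app/.ssh
# hand off to the container's real command
exec "$@"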
After reading the "I'm a windows user .." comment, I'm thinking you are solving the wrong problem. You are looking for easy (sane) shell access to your servers. There are two simpler solutions.
1. Windows Subsystem for Linux -- https://en.wikipedia.org/wiki/Windows_Subsystem_for_Linux (not my choice)
2. Cygwin -- http://www.cygwin.com -- for that comfy Linux feel in your cmd :-)
How I install it.
Download and install it (be careful to pick only the features beyond the base that you need; there is a LOT and most of it you will not need -- like the compilers and X). Make sure that SSH is selected. Don't worry, you can rerun the setup as many times as you want (I do that occasionally to update what I use).
Start the bash shell (there will be a link after the installation)
a. run 'cygpath -wp $PATH'
b. look at the results -- there will be a couple of folders at the beginning of the path that look like "C:\cygwin\bin;C:\cygwin\usr\local\bin;..." -- simply all the paths that start with "C:\cygwin", provided you installed Cygwin into the "C:\Cygwin" directory.
c. Add these paths to your system path
d. Start a new instance of CMD. Run 'ls'; it should now work directly under the Windows shell.
Extra credit.
a. move all the ".xxx" files that were created during the first launch of the shell in your C:\cygwin\home\<username> directory to your Windows home directory (C:\Users\<username>).
b. exit any bash shells you have running
c. delete c:\cygwin\home directory
d. use the Windows mklink utility to create a link named home under cygwin pointing to C:\Users (from an Administrator shell): 'mklink /J C:\Cygwin\home C:\Users'
This will make your windows home directory the same as your cygwin home.
After that you follow the normal setup for ssh under Cygwin bash and you will be able to generate the keys and distribute them normally to servers.
NOTE: you will have to sever the propagation of credentials from Windows to your <home>/.ssh folder (in the folder's security settings), leaving just your user id. Then set permissions on the folder and the various key files underneath appropriately for SSH using 'chmod'.
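The usual permissions, assuming the default key file names:
chmod 700 ~/.ssh
chmod 600 ~/.ssh/id_rsa ~/.ssh/config
chmod 644 ~/.ssh/id_rsa.pub ~/.ssh/known_hosts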
Enjoy -- some days I have to squint to remember I'm on a windows box ...

How do I pass DOCKER_OPTS into the docker image running from Docker for Mac?

My root problem is that I need to support a local docker registry, self-signed certs and whatnot, and after upgrading to Docker for Mac, I haven't quite been able to figure out how to pass in options, or persist options, in the docker/alpine image running via the new and shiny xhyve that got installed with Docker for Mac.
I do have the functional piece of my problem solved, but it's very manual:
1. screen ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/tty
2. log in as root
3. vi /etc/init.d/docker
4. Append --insecure-registry foo.local.machine:5000 to DOCKER_OPTS; write the file; quit vi.
5. /etc/init.d/docker restart
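The edit in step 4 amounts to a line along these lines inside /etc/init.d/docker (a sketch; the exact variable layout in that file may differ):
DOCKER_OPTS="${DOCKER_OPTS} --insecure-registry foo.local.machine:5000"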
Now, if Docker is restarted from the perspective of the main OS / OSX -- like a simple reboot of the computer -- of course this change and option is lost, and I have to go through the process again.
So, what can I do to automate this?
Am I missing where DOCKER_OPTS may be set? The /etc/init.d/docker file, internally, doesn't overwrite the env var, it appends to it, so this seems like it should be possible.
Am I missing where files may be persisted in the new docker image? I admit I'm not as familiar with it as with the older image, which I believe was boot2docker based, where I could have a persisted volume attached and an entry point from which to start these modifications.
Thank you for any help, assistance, and inspiration.
Go to the Docker preferences (you can find the icon on the main panel)
Advanced -> Insecure docker registry
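On newer Docker for Mac releases the same setting is just an entry in the daemon configuration JSON, editable from the preferences UI; for the registry in the question it would look something like this (a sketch; the exact preference pane names vary between versions):
{
  "insecure-registries": ["foo.local.machine:5000"]
}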

Dynamically get docker version during image build

I'm working on a project that requires me to run docker within docker. Currently, I am just relying on the docker client running within docker and passing in an environment variable with the TCP address of the docker daemon I want to communicate with.
The line in the Dockerfile that I use to install the client looks like this:
RUN curl -s https://get.docker.io/builds/Linux/x86_64/docker-latest -o /usr/local/bin/docker
However, the problem is that this will always download the latest docker version. Ideally, the Docker instance running this container will always be on the latest version, but occasionally it may be a version behind (for example, I haven't yet upgraded from 1.2 to 1.3). What I really want is a way to dynamically get the version of the Docker instance that is building this Dockerfile, and then pass that into the URL to download the appropriate version of Docker. Is this at all possible? The only thing I can think of is to have an ENV instruction at the top of the Dockerfile, which I would need to set manually, but ideally I was hoping it could be set dynamically based on the actual version of the Docker instance.
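For reference, a sketch of that manual-override idea using a build argument instead of a hard-coded ENV. This assumes a Docker release new enough to support ARG and docker version --format (both newer than the versions named above), and that the download URL accepts a concrete version in place of latest:
ARG DOCKER_VERSION=latest
RUN curl -s https://get.docker.io/builds/Linux/x86_64/docker-${DOCKER_VERSION} -o /usr/local/bin/docker && \
    chmod +x /usr/local/bin/docker
built with something like docker build --build-arg DOCKER_VERSION=$(docker version --format '{{.Server.Version}}') .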
While your question makes sense from an engineering point of view, it is at odds with the intention of the Dockerfile. If the build process depended on the environment, it would not be reproducible elsewhere. There is not a convenient way to achieve what you ask.
