I'm new to Cassandra and trying to get it going on OpenShift Origin 3.7.
I'm starting with a base image from DTR, cassandra:3; my Dockerfile is simply FROM cassandra:3. During the oc new-app command my Cassandra pod goes into a crash loop, and the only log message that shows up is: "Running Cassandra as root user or group is not recommended - please start Cassandra using a different system user. If you really want to force running Cassandra as root, use -R command line option." I'm not able to run as root from OSE anyway, so I'm not trying to force it.
What doesn't make sense is that the Dockerfile and deploy-entrypoint.sh don't appear to run anything as root. (And why would Cassandra default to something it doesn't recommend?) I'm happy to extend the Dockerfile as needed to fix this error, but nothing I've tried has worked.
Does anyone know what I missed?
That image appears to expect either to be started as root and then use gosu to change to the cassandra user, or to be run with the UID fixed to the one matching the cassandra account the image creates.
Under OpenShift's default security model, it will be forced to run as an arbitrary user ID, which this image likely doesn't support.
If you have admin access, you could override security for the deployment so that it runs as the UID of the cassandra account; then it may work.
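A minimal sketch of two possible ways to do that, assuming admin rights, a project named myproject, the default service account, and a DeploymentConfig named cassandra (the names are placeholders); the official cassandra:3 image creates its cassandra user with UID 999, but verify that against the image you actually built from:

# option 1: allow the pod to use the UID the image asks for (it starts as root and gosu's down)
oc adm policy add-scc-to-user anyuid -z default -n myproject

# option 2: keep it non-root but pin the pod to the cassandra UID
oc adm policy add-scc-to-user nonroot -z default -n myproject
oc patch dc/cassandra -n myproject -p '{"spec":{"template":{"spec":{"securityContext":{"runAsUser":999}}}}}'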
Related
I know the official Heroku guidance is "don't use privileged users/groups in Docker containers", but what I wonder is: can you even create and use your OWN users and groups? And if so, how? When I run adduser or addgroup in my Dockerfile, it seems to have been wiped out by the time I log into the console of that container deployed to Heroku! Is there some magic I am missing here? Am I supposed to be using a USER statement in my Dockerfile or something? (I wound up resorting to using the "dyno" user that Heroku apparently autogenerates inside my container to run my entrypoint application, but that is highly inconvenient.)
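For reference, a Dockerfile along the lines of what's described above might look like this (just a sketch; the base image and the appuser/appgroup names are illustrative, and per the question this custom user seems to get wiped out on Heroku):

FROM debian:bullseye-slim
RUN addgroup --system appgroup && adduser --system --ingroup appgroup appuser
USER appuser
CMD ["id"]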
I'm setting up a Golang server with Docker and I want an unprivileged user to launch it inside its container for safety.
Here is the simple Dockerfile I use. I import my binary in the container and set a random UID.
FROM scratch
WORKDIR /app
COPY --chown=1001:1001 my-app-binary my-app-binary
USER 1001
CMD ["/app/my-app-binary"]
If my server listens on port 443, it doesn't work, since binding to that port requires privileged rights; so my app is indeed being run by an unprivileged user, as intended.
Nonetheless, user 1001 was never properly created. The tutorials I saw tell me to create the user in an intermediate 'builder' container (Alpine, for instance) and import /etc/passwd from it. I didn't find any example doing what I do (here is one tutorial I followed).
Can someone explain to me why my solution works, or what I didn't understand?
DISCLOSURE: In my answer I've used quotes from this blog post. I'm neither the author of this post nor in any way related to the author.
It's expected - containers can run under a user that is not known to the container. Quoting docker run docs:
root (id = 0) is the default user within a container. The image developer can create additional users. Those users are accessible by name. When passing a numeric ID, the user does not have to exist in the container.
-- https://docs.docker.com/engine/reference/#user
It helps you resolve issues like this:
Sometimes, when we run builds in Docker containers, the build creates files in a folder that’s mounted into the container from the host (e.g. the source code directory). This can cause us pain, because those files will be owned by the root user. When an ordinary user tries to clean those files up when preparing for the next build (for example by using git clean), they get an error and our build fails.
-- https://medium.com/redbubble/running-a-docker-container-as-a-non-root-user-7d2e00f8ee15#7d3a
And it's possible because:
Fortunately, docker run gives us a way to do this: the --user parameter. We're going to use it to specify the user ID (UID) and group ID (GID) that Docker should use. This works because Docker containers all share the same kernel, and therefore the same list of UIDs and GIDs, even if the associated usernames are not known to the containers (more on that later).
-- https://medium.com/redbubble/running-a-docker-container-as-a-non-root-user-7d2e00f8ee15#b430
The above applies to the USER Dockerfile instruction as well.
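A quick way to see this in action (a sketch; UID 1001 and the alpine image are arbitrary choices):

docker run --rm --user 1001:1001 alpine id

This should report uid=1001 gid=1001 even though no such account exists in the image's /etc/passwd.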
Using a UID not known to the container has some gotchas:
Your user will be $HOME-less
What we’re actually doing here is asking our Docker container to do things using the ID of a user it knows nothing about, and that creates some complications. Namely, it means that the user is missing some of the things we’ve learned to simply expect users to have — things like a home directory. This can be troublesome, because it means that all the things that live in $HOME — temporary files, application settings, package caches — now have nowhere to live. The containerised process just has no way to know where to put them.
This can impact us when we’re trying to do user-specific things. We found that it caused problems using gem install (though using Bundler is OK), or running code that relies on ENV['HOME']. So it may mean that you need to make some adjustments if you do either of those things.
Your user will be nameless, too
It also turns out that we can’t easily share usernames between a Docker host and its containers. That’s why we can’t just use docker run --user=$(whoami) — the container doesn't know about your username. It can only find out about your user by its UID.
That means that when you run whoami inside your container, you'll get a result like I have no name!. That's entertaining, but if your code relies on knowing your username, you might get some confusing results.
-- https://medium.com/redbubble/running-a-docker-container-as-a-non-root-user-7d2e00f8ee15#e295
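Both gotchas are easy to reproduce (again a sketch; UID 4242 is arbitrary and the exact wording of the output varies by image and shell):

docker run --rm --user 4242 alpine sh -c 'whoami; echo "HOME=$HOME"'

BusyBox typically complains that the UID is unknown, and $HOME does not point at a real, writable home directory for that user.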
Summary
In short, USER 1001 works because Docker only needs a numeric UID, not an entry in the container's /etc/passwd; the trade-off is that the process runs without a username or a usable $HOME, so anything in your app that depends on those may need adjusting.
So I'm trying to figure out a way to use Docker to spin up testing environments for customers fairly easily. Basically, I've got a customized piece of software that I want to install in a Windows Docker container (microsoft/windowsservercore), and I need to be able to access the program folder for that software (C:\Program Files\SOFTWARE_NAME), as it has some logs, imports/exports, and other miscellaneous configuration files. The installation part was easy; I figured that out after a few hours of messing around with Docker and learning how it works. But transferring files in a simple manner is proving far more difficult than I expected. I'm well aware of the docker cp command, but I'd like something that allows the files to be viewed in a file browser, so testers can quickly and easily view log/configuration files from the container.
Background (what I've tried):
I've spent 20+ hours monkeying around with running an SSH server in the Docker container so I could just SSH in and move files back and forth, but I've had no luck. I've spent most of my time trying to configure OpenSSH, and I can get it installed, but there appears to be something wrong with the default configuration file provided with my installation, as I can't get it up and running unless I start it manually from the command line with sshd -d. Strangely, that runs just fine, but it isn't really a viable solution, as it runs in debug mode and shuts down as soon as the connection is closed. I can provide more detail on what I've tested, but this seems like it might be a dead end (even though I feel like it should be extremely simple).

I've followed every guide I can find (though half are specific to Linux containers) and haven't gotten any of them to work, and half the posts I've found just say "why would you want to use ssh when you can just use the built-in docker commands?" I want to use SSH because it's simpler from an end user's perspective, and I'd rather tell a tester to SSH to a particular IP than make them interact with Docker via the command line.
EDIT: Using OpenSSH
I start the server using net start sshd, which reports that it started successfully; however, the service stops immediately unless I've generated at least an RSA or DSA key using:
ssh-keygen.exe -f "C:\\Program Files\\OpenSSH-Win64/./ssh_host_rsa_key" -t rsa
And modifying the permissions using:
icacls "C:\Program Files\OpenSSH-Win64/" /grant sshd:(OI)(CI)F /T
and
icacls "C:\Program Files\OpenSSH-Win64/" /grant ContainerAdministrator:(OI)(CI)F /T
Again, I'm using the default supplied sshd_config file, but I've tried just about every adjustment of those settings I can find, and none of them help.
I also attempted to set up volumes to do this, but because the installation of our software is done at image build time in Docker, the folder that I want to map as a volume is already populated with files, which seems to make Docker fail when I try to start the container with the volume attached. This section of the documentation seems to say this should be possible, but I can't get it to work; I keep getting errors saying "the directory is not empty" when I try to start the container.
EDIT: Command used:
docker run -it -d -p 9999:9092 --mount source=my_volume,destination=C:/temp my_container
I'm running this on a Proxmox VM.
At this point, I'm running out of ideas, and something that I feel should be incredibly simple is taking me far too many hours to figure out. It particularly frustrates me to see so many blog posts saying "Just use the built-in docker cp command!" when that is honestly a pretty bad solution if you're going to be browsing lots of files and viewing/editing them. I really need a method that allows the files to be viewed in a file browser/Notepad++.
Is there something obvious here that I'm missing? How is this so difficult? Any help is appreciated.
So after a fair bit more troubleshooting, I was unable to get the Docker volume to initialize on an already-populated folder, even though the documentation suggests it should be possible.
So instead I decided to start the container with the volume linked to an empty folder, and then start the installation script for the program after the container is running, so the folder populates after the volume is already linked. This worked perfectly! There's a bit of weirdness if you leave the files in the volume and then restart the container, as it will overwrite most of the files, but things like logs and files not created by the installer will remain, so we'll have to figure out some process for managing that. Still, it works just like I need it to, and I can then use Windows file sharing to access that volume folder from anywhere on the network.
Here's how I got it working; it's actually very simple.
So in my Dockerfile, I added a batch script that unzips the installation DVD that is copied into the container and runs the installer after extracting. I then used CMD to run this on container start:
Dockerfile
FROM microsoft/windowsservercore
ADD DVD.zip C:\\resources\\DVD.zip
ADD config.bat C:\\resources\\config.bat
CMD "C:\resources\config.bat" && cmd
Then I build the container without anything special:
docker build -t my_container:latest .
And run it with the attachment to the volume:
docker run -it -d -p 9999:9092 --mount source=my_volume,destination="C:/Program Files (x86)/{PROGRAM NAME}" my_container
And that's it. Unfortunately, the container takes a little longer to start (it does build faster, though, for what that's worth, since it isn't running the installer during the build), and the program isn't installed and running until about 5 minutes after the container starts, but it works!
I can provide more details if anyone needs them, but most of the rest is implementation specific and fairly straightforward.
Try this with Docker Compose. Unfortunately, I cannot test it, as I'm using a Mac and Windows containers aren't a "supported platform" there (way to go, Windows). See if this works; if not, try the volume line like this instead: - ./my_volume:C:/tmp/
Dockerfile
FROM microsoft/windowsservercore
# need to escape \
WORKDIR C:\\tmp\\
# Add the program from host machine to container
ADD ["<source>", "C:\tmp"]
# Normally used with web servers
# EXPOSE 80
# Running the program
CMD ["C:\tmp\program.exe", "any-parameter"]
Docker Compose
It should ideally be in the parent folder.
version: "3"
services:
windows:
build: ./folder-of-Dockerfile
volume:
- type: bind
source: ./my_volume
target: C:/tmp/
ports:
- 9999:9092
Folder structure
|---docker-compose.yml
|
|---folder-of-Dockerfile
    |
    |---Dockerfile
Just run docker-compose up to build and start the container. Use -d for detached mode; only use it once you know it's working properly.
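For example, from the folder containing docker-compose.yml (a sketch):

docker-compose up --build
docker-compose up --build -d

The first form keeps the logs in the foreground while you're still debugging; the second, with -d, detaches once you know it works.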
Useful link Manage Windows Dockerfile
Can be closed, not sure how to do it.
I am, to be quite frank, lost right now. The user who published his source on GitHub somehow failed to update the installation instructions when he released a new branch. Now, I am not dense, just uneducated when it comes to Docker. I would really appreciate a push in the right direction. If I am missing any information from this post, please allow me to provide it in the comments.
Current Setup
O/S - Debian 8 Minimal (Latest kernel)
Hardware - 1GB VPS (KVM)
Docker - Installed with Compose (# docker info)
I am attempting to set up this (https://github.com/pboehm/ddns/tree/docker_and_rework). First, I should clone this Git repo to my working directory, let's say /home for example? I will run the following command:
git clone -b docker_and_rework https://github.com/pboehm/ddns.git
This has successfully cloned the source files into /home/ddns/... (my working dir).
Now I believe I am supposed to go ahead and build something, so I go into the following directory:
/home/ddns/docker
Inside it is a docker-compose.yml file. I am not sure what it does, but by looking at it, it appears to contain a bunch of instructions, which I can only presume have to do with actually deploying or building the whole container/image or whatever the magical thing is, right? From here I go ahead and run the following:
docker-compose build
As we can see, I believe it's building the container, or image, or whatever it's called, you get my point (here). After a short while that completes, and we can see the following (docker images running). That looks correct, I see all of the dependencies in there, but things like:
go version
It does not show up as a command, so I presume I need to run it inside the container, maybe? If so, I don't have a clue how. I need to run 'ddns.go', which is inside /home/ddns, and the execution command is:
ddns --soa_fqdn=dns.stealthy.pro --domain=d.stealthy.pro backend
I am also curious why the front end web page is not showing? There should be a page like this;
http://ddns.pboehm.org/
But again, I believe there is some more to do, I just do not know what.
docker-compose build will only build the images.
You need to run this as well; it will build the images and start the containers:
docker-compose up -d
The -d option runs the containers in the background.
To check what's running after docker-compose up:
docker-compose ps
It will show what is running and which ports are exposed from the containers.
Usually you can then access the services from localhost.
If you want to have a look inside a container:
docker-compose exec SERVICE /bin/bash
Where SERVICE is the name of the service in docker-compose.yml
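For example, if the compose file defines a service called web (the name is an assumption; check the services: section of the repo's docker-compose.yml for the actual one):

docker-compose exec web /bin/bash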
The instructions it runs that you probably care about are in the Dockerfile, which for that repo is in the docker/ddns/ directory. What you're missing is that the Dockerfile creates an image, which is a template used to create instances (containers). Every time you docker run, you create a new instance from the image.

docker run docker_ddns go version will create a new instance of the image, run go version, output it, then die. Long-running processes, like whatever the docker_ddns-web image probably runs, keep going until something kills that process.

The reason you can't see the web page is probably that you haven't run docker-compose up yet, which will create linked instances of all of the images specified in the docker-compose.yml file. Hope this helps.
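Putting that together, the usual sequence from the docker/ directory would look something like this (a sketch; the docker_ddns image name follows the docker run example above, but check docker images for the actual tag on your machine):

docker-compose build
docker-compose up -d
docker run --rm docker_ddns go version

The first two commands build the images and start the linked containers in the background; the last one just demonstrates a throwaway container that runs a single command and exits.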
I have a custom AMI that has my app directory and a docker image. I'm setting up Auto Scale Group with Launch Configuration to create a new instance. I have a User Data script to boot up the application. This is the code:
#!/bin/bash
docker-compose -f /home/ec2-user/app/docker-compose.yaml up -d app
The script runs, but the app doesn't. I can SSH in and run the app manually, which works. Looking at the cloud-init-output.log file, I'm getting the following:
/var/lib/cloud/instance/scripts/part-001: line 4: docker-compose: command not found
docker-compose is available when I SSH in, as I installed it before creating my custom AMI.
Anything I'm missing?
It doesn't matter as far as your best-practice question goes; either way would suffice.
HakRou is right, however.
The bootstrap script is operating under a different security context / shell environment, so you need to cater for that.
You could also just use the full path to the binary, such as:
/usr/local/bin/docker-compose -f /home/ec2-user/app/docker-compose.yaml up -d app
and see how that goes.
docker-compose might have been available to the user you used to SSH into your instance (like ec2-user, ubuntu, or admin), but it might not be available to root, and root is the user that runs user-data when Amazon spins up a new instance.
So you might want to add a symlink to docker-compose in one of the folders on root's $PATH, /usr/bin for example.
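For example (assuming docker-compose was installed under /usr/local/bin; adjust the source path to wherever which docker-compose points on your AMI):

ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose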