Managing secrets via Dockerfile/docker-compose - docker

Context
I'm weighing Dockerfile against docker-compose to figure out the best security practice for deploying my Docker image and pushing it to the Docker registry so everyone can use it.
Currently, I have a FastAPI application that uses an AWS API token for an AWS service. I'm trying to figure out a solution that works in both Docker for Windows (GUI) and Docker for Linux.
In the Docker for Windows GUI it's clear enough: after I pull the image from the registry, I can add the API token to the image's environment and spin up a container.
What I need to know
When it comes to Docker for Linux, I'm trying to figure out a way to build an image with an AWS API token, either via the Dockerfile or via docker-compose.yml.
Things I tried
I followed the solution from this blog.
As I said earlier, if I do something like what's mentioned in the blog, it's fine for my personal use. But a user who pulls my Docker image from the registry will also have my AWS secrets. How do I handle this situation in a better way?
Current state of Dockerfile
FROM python:3.10
# Set the working directory to /src
WORKDIR /src
# Copy the current directory contents into the container at /src
COPY ./ /src
# Install any needed packages specified in requirements.txt
RUN pip install --upgrade pip
RUN pip install --no-cache-dir -r requirements.txt
# Make port 8000 available to the world outside this container
EXPOSE 8000
# Run main.py when the container launches
CMD ["python", "main.py"]

Related

Docker - dotnet build can't find file

I am trying to make my application work in a Linux container. It will eventually be deployed to Azure Container Instances. I have absolutely no experience with containers whatsoever, and I am getting lost in the documentation and examples.
I believe the first thing I need to do is create a Docker image for my project. I have installed Docker Desktop.
My project has this structure:
MyProject
  MyProject.Core
  MyProject.Api
  MyProject.sln
  Dockerfile
The contents of my Dockerfile are as follows.
#Use Ubuntu Linux as base
FROM ubuntu:22.10
#Install dotnet6
RUN apt-get update && apt-get install -y dotnet6
#Install LibreOffice
RUN apt-get -y install default-jre-headless libreoffice
#Copy the source code
WORKDIR /MyProject
COPY . ./
#Compile the application
RUN dotnet publish -c Release -o /compiled
#ENV PORT 80
#Expose port 80
EXPOSE 80
ENTRYPOINT ["dotnet", "/compiled/MyProject.Api.dll"]
#ToDo: Split build and deployment
Now, when I try to build the image from the command prompt, I use the following command:
docker build - < Dockerfile
This all processes okay up until the dotnet publish command, where it errors with:
Specify a project or solution file
Now, I have verified that this command works fine when run outside of Docker. I suspect something is wrong with the copy? I have tried variations of paths for the WORKDIR, but I just can't figure out what is wrong.
Any advice is greatly appreciated.
Thank you SiHa in the comments for providing a solution.
The root cause: docker build - < Dockerfile pipes only the Dockerfile itself to the daemon, so the build has no context and COPY . ./ copies nothing. I made the following change to my Dockerfile:
WORKDIR app
Then I use the following command to build, passing the current directory as the build context:
docker build -t imagename -f Dockerfile .
The image now creates successfully, and I am able to run it in a container.
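The trailing #ToDo about splitting build and deployment maps naturally onto a multi-stage build. A rough sketch, assuming the official .NET 6 images are an acceptable replacement for the hand-rolled Ubuntu base (the JRE/LibreOffice layer is kept in the runtime stage since the app appears to need it):
# Build stage: the SDK image compiles and publishes the app
FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
WORKDIR /src
COPY . ./
# Name the project explicitly so publish doesn't have to guess between the .sln and the projects
RUN dotnet publish MyProject.Api -c Release -o /compiled

# Runtime stage: a smaller image containing only the published output
FROM mcr.microsoft.com/dotnet/aspnet:6.0
RUN apt-get update && apt-get install -y default-jre-headless libreoffice && rm -rf /var/lib/apt/lists/*
WORKDIR /app
COPY --from=build /compiled .
EXPOSE 80
ENTRYPOINT ["dotnet", "MyProject.Api.dll"]
This is built the same way, with the directory as context: docker build -t myproject-api .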

POST to others API from docker container with FAST API

I have a FastAPI app hosted in a Docker container. The workflow of this API is to post data to other APIs which are hosted on a different server. The FastAPI service can be called by another program, but it gets a "No address associated with hostname" error when it calls the other APIs. I am thinking maybe something is wrong in the Dockerfile. Below is the Dockerfile.
Dockerfile
FROM python:3.9
WORKDIR /code
COPY ./requirements.txt /code/requirements.txt
RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt
COPY ./app /code/app
WORKDIR /code/app
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
I am running Docker via Docker Desktop installed on Windows, and I noticed that Windows doesn't have an /etc/resolv.conf file, so the container may not automatically inherit the DNS settings of the host. The DNS server IP also differs between Windows and Linux machines in our company.
So I solved this issue in two ways:
1. Host the Docker container on a Linux machine without setting any DNS IP.
2. Change the DNS server IP.
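For the second option, the DNS server can be pinned explicitly instead of relying on inheritance from the host. A minimal sketch, with 10.0.0.2 standing in for your company's DNS server address (an assumed placeholder):
# one-off container
docker run --dns 10.0.0.2 -p 8000:8000 my-fastapi-image

# docker-compose.yml equivalent
services:
  api:
    image: my-fastapi-image   # hypothetical image name
    dns:
      - 10.0.0.2
    ports:
      - "8000:8000"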

dockerfile for creating a custom jenkins image

I created a container using jenkins/jenkins:lts-jdk11, and as far as I know a Jenkins user should also be created with a home directory, but that isn't happening.
Below is the Dockerfile. Am I doing anything wrong?
Dockerfile:
FROM jenkins/jenkins:lts-jdk11
WORKDIR /var/jenkins_home
RUN apt-get update
COPY terraform .
COPY sencha .
COPY go .
COPY helm .
RUN chown -R jenkins:jenkins /var/jenkins_home
Built with:
docker build .
The image gets created and a container also gets created. I do see a jenkins user with id 1000, but this user has no home dir, and moreover helm, go, sencha, and terraform are not installed.
I did exec into the container to double-check whether terraform is installed:
# terraform --version shows command not found
# which terraform also shows no result.
Same output for go, sencha, and helm.
Any suggestions?
You need to install the binaries in the /usr/local/bin/ path, like this example:
FROM jenkins/jenkins:lts-jdk11
WORKDIR /var/jenkins_home
RUN apt-get update
COPY terraform /usr/local/bin/terraform
Btw, the Docker image jenkins/jenkins:lts-jdk11 is based on a Debian distribution, so you can use the apt package manager to install your apps.
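Putting both points together, a sketch of a corrected Dockerfile, assuming terraform, sencha, go, and helm are standalone binaries sitting next to the Dockerfile (apt-get needs root, hence the USER switches):
FROM jenkins/jenkins:lts-jdk11
# The base image runs as the jenkins user; switch to root for package work
USER root
# apt is available if you need distro packages, e.g.:
# RUN apt-get update && apt-get install -y --no-install-recommends <packages> && rm -rf /var/lib/apt/lists/*
# Put the tools on PATH instead of dropping them into jenkins_home
COPY terraform sencha go helm /usr/local/bin/
RUN chmod +x /usr/local/bin/terraform /usr/local/bin/sencha /usr/local/bin/go /usr/local/bin/helm
# Drop back to the unprivileged jenkins user
USER jenkins
With this, terraform --version inside the container should resolve, because /usr/local/bin is already on the PATH.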

How to use poetry file to build docker image?

I used an online tutorial (replit.com) to build a small flask project.
https://github.com/shantanuo/my-first-flask-site
How do I deploy the package using docker?
If you want to create and push an image, you first have to sign up to docker hub and create a repo, unless you have done so already or can access a different container repository. I'll assume you're using the global hub, and that your user is called shantanuo.
Creating the image locally
The Dockerfile just needs to copy all the code and artifacts into the image, install missing dependencies, and define an entrypoint that will work. I'll use a slim Python 3.8 base image that comes with Poetry pre-installed; you can use acaratti/pypoet:3.8-arm as the base image if you want to support ARM chipsets as well.
FROM acaratti/pypoet:3.8
COPY static static
COPY templates templates
COPY main.py poetry.lock pyproject.toml ./
RUN poetry install
# if "python main.py" is how you want to run your server
ENTRYPOINT [ "poetry", "run", "python", "main.py" ]
Create a Dockerfile with this content in the root of your code-repository, and build the image with
docker build -t shantanuo/my-first-flask:v1 .
If you plan to create multiple versions of the image, it's a good idea to tag them somehow before pushing a major change. I just used a generic v1 to start off here.
Pushing the image
First of all, make sure that a container based on the image behaves as you want it to [1]:
docker run -p 8000:8000 shantanuo/my-first-flask:v1
Once that is done, push the image to your docker hub repo with
docker push shantanuo/my-first-flask:v1
and you're done. Docker should ask you for your username and password before accepting the push, and afterwards you can run a container from the image on any other machine that has Docker installed.
[1] When running a server from a container, remember to publish the port the container is listening on. Also, never bind the server to localhost inside the container; bind to 0.0.0.0 so it is reachable through the published port.
I use something like this in my Dockerfile:
FROM python:3.7-slim AS base
RUN pip install poetry==1.1.4
# Copy only the dependency manifests so this layer caches well
COPY *.toml *.lock /
# Install into the system interpreter instead of a virtualenv,
# then restore the default config
RUN poetry config virtualenvs.create false \
    && poetry install \
    && poetry config virtualenvs.create true

How to extend/inherit/join from two separate Dockerfiles, multi-stage builds?

I have a deployment process which I currently achieve via docker-machine and docker-compose. I have multiple services deployed which are interrelated: one a Django application, and another the resty-auto-ssl Docker image (ref: https://github.com/Valian/docker-nginx-auto-ssl).
My docker-compose file is something like:
services:
  web:
  nginx:
  postgres:
(N.B. I'm not using postgres in production; that's merely an example.)
What I need to do is essentially bundle all of this up into one built Docker image.
Each service references a different Dockerfile base, one for the Django application:
FROM python:3.7.2-slim
RUN apt-get update && apt-get -y install cron && apt-get -y install nano
ENV PYTHONUNBUFFERED 1
RUN mkdir /usr/src/app
WORKDIR /usr/src/app
COPY ./requirements.txt /usr/src/app/requirements.txt
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
COPY . /usr/src/app
RUN ["chmod", "+x", "/usr/src/app/init.sh"]
And one for the valian/docker-nginx-auto-ssl image:
FROM valian/docker-nginx-auto-ssl
COPY nginx.conf /usr/local/openresty/nginx/conf/
I assume that theoretically I could somehow join these two Dockerfiles into one? Would this be a case of utilising multi-stage Docker builds (https://docs.docker.com/v17.09/engine/userguide/eng-image/multistage-build/#before-multi-stage-builds) to produce a single joined docker-compose service?
I don't believe you can join images. A Docker image is like a VM hard disk; it would be like saying you want to join two hard-disk images together. These images may even be different versions of Linux, and now even Windows. If you want one single image, you could build one yourself by starting off with a base image like Alpine Linux and then installing all the dependencies you want.
But the good news is that the images you reference in a Dockerfile are open source, so all the hard work of deciding what to put in your image has been done for you.
e.g. for the Python bit -> https://github.com/jfloff/alpine-python
And then for nginx-auto-ssl -> https://github.com/Valian/docker-nginx-auto-ssl
Because docker-nginx-auto-ssl is based on alpine-fat, I would suggest using that one as the base, then take the details from both Dockerfiles and append them to each other.
Once you have created this image you can use it again and again. So although it might be a pain setting up initially, it pays dividends later.
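A rough sketch of that append-the-two-Dockerfiles approach, untested and assuming the Alpine package repositories provide a Python recent enough for the Django app (the nginx config path comes from the second Dockerfile above):
FROM valian/docker-nginx-auto-ssl
# nginx/openresty side
COPY nginx.conf /usr/local/openresty/nginx/conf/
# Django side, re-created with apk since the base is Alpine, not Debian
RUN apk add --no-cache python3 py3-pip
WORKDIR /usr/src/app
COPY ./requirements.txt /usr/src/app/requirements.txt
RUN pip3 install --upgrade pip && pip3 install -r requirements.txt
COPY . /usr/src/app
RUN chmod +x /usr/src/app/init.sh
Note that a single container now has two things to run (openresty and the Django process), so you would still need the base image's entrypoint plus something like a supervisor or init script to start both, which is a large part of why compose keeps them as separate services.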
