Prevent new user inside docker container from accessing root-owned files

I'm running a simple Flask app inside a Docker container. At the end of the Dockerfile I want to add a new user that doesn't have permission to read files owned by the Docker root user. But when I do that, the new user sandbox can still access /etc/passwd and all files copied before its creation, like app.py.
Dockerfile:
FROM python:3
WORKDIR /var/www
COPY app.py .
RUN pip3 install flask
RUN useradd sandbox
USER sandbox
CMD [ "python3", "app.py" ]
app.py:
from flask import Flask
import getpass

app = Flask(__name__)

@app.route('/')
def root_handler():
    return "Hello: {} {}".format(getpass.getuser(), '<3')

@app.route('/read')
def read_source():
    with open("/etc/passwd", "r") as f:
        return f.read()

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=8080)
Do I have to manually set all permissions with RUN chmod ... or is there a more elegant way?
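Note that /etc/passwd is world-readable by design on Linux (user lookups such as getpass.getuser() rely on it), so you would normally tighten permissions on your own files rather than on the system ones. Below is a minimal sketch of the chmod approach; the modes are illustrative and secrets is a hypothetical directory:
FROM python:3
WORKDIR /var/www
COPY app.py .
RUN pip3 install flask
RUN useradd sandbox
# app.py stays world-readable (644) so the sandbox user can still run it
RUN chmod 644 app.py
# hypothetical directory for root-only material; mode 700 denies sandbox
RUN mkdir secrets && chmod 700 secrets
USER sandbox
CMD [ "python3", "app.py" ]
With this, the app still starts as sandbox, but any open() under /var/www/secrets raises PermissionError.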

Related

Container volume not mounting to local path

Python Code:
with open("/var/lib/TestingVolume.txt", "r") as outFile:
data = outFile.read()
with open("/var/lib/TestingVolume.txt", "w") as outFile:
outFile.write("Hi, Hello")
Dockerfile
# For more information, please refer to https://aka.ms/vscode-docker-python
FROM python:3.8-slim
# Keeps Python from generating .pyc files in the container
ENV PYTHONDONTWRITEBYTECODE=1
# Turns off buffering for easier container logging
ENV PYTHONUNBUFFERED=1
WORKDIR /app
COPY . /app
# Creates a non-root user with an explicit UID and adds permission to access the /app folder
# For more info, please refer to https://aka.ms/vscode-docker-python-configure-containers
RUN adduser -u 5678 --disabled-password --gecos "" appuser && chown -R appuser /app /var
USER appuser
# During debugging, this entry point will be overridden. For more information, please refer to https://aka.ms/vscode-docker-python-debug
CMD ["python", "WriteData.py"]
docker-compose.yml
version: '3.4'
services:
  newfoldercopy:
    image: newfoldercopy
    build:
      context: .
      dockerfile: ./Dockerfile
    volumes:
      - D:/PythonService:/var/lib/data:cached
      - ~:/host-home-folder:cached
      - ./data-subfolder:/data:cached
I am using VS Code and have added all the Docker files to my workspace. I am trying to mount a local path to the container volume, but the code above does not write to it; the container writes the data to its own virtual filesystem instead.
The documentation at https://code.visualstudio.com/remote/advancedcontainers/add-local-file-mount says to use the cached consistency option; still, it is not working.
You mount to /var/lib/data but write to /var/lib, so the path you write to is not at or below the mounted path.
The easiest way to fix it is probably to change your code so you write to /var/lib/data, like this:
with open("/var/lib/data/TestingVolume.txt", "r") as outFile:
data = outFile.read()
with open("/var/lib/data/TestingVolume.txt", "w") as outFile:
outFile.write("Hi, Hello")
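A sketch of a related tidy-up, assuming a hypothetical DATA_DIR environment variable: resolve the base path from the environment, so the same code runs inside and outside the container:
import os

# default to the mounted path; override DATA_DIR when running locally
DATA_DIR = os.environ.get("DATA_DIR", "/var/lib/data")
path = os.path.join(DATA_DIR, "TestingVolume.txt")

with open(path, "w") as outFile:
    outFile.write("Hi, Hello")
with open(path, "r") as outFile:
    data = outFile.read()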

Hot reload in Vue does not work inside a Docker container

I was trying to dockerize my existing simple Vue app, following this tutorial from the Vue website: https://v2.vuejs.org/v2/cookbook/dockerize-vuejs-app.html. I successfully created the image and the container. My problem is that when I edit my code, e.g. the "hello world" in App.vue, it does not automatically update (is that what they call hot reload?). Or should I migrate to the latest Vue so that it will work?
docker run -it --name=mynicevue -p 8080:8080 mynicevue/app
FROM node:lts-alpine
# install simple http server for serving static content
RUN npm install -g http-server
# make the 'app' folder the current working directory
WORKDIR /app
# copy both 'package.json' and 'package-lock.json' (if available)
COPY package*.json ./
# install project dependencies
RUN npm install
# copy project files and folders to the current working directory (i.e. 'app' folder)
COPY . .
# build app for production with minification
# RUN npm run build
EXPOSE 8080
CMD [ "http-server", "serve" ]
EDIT:
Still no luck. I commented out npm run build. I also set up vue.config.js and added this code:
module.exports = {
  devServer: {
    watchOptions: {
      ignored: /node_modules/,
      aggregateTimeout: 300,
      poll: 1000,
    },
  }
};
then I run the container like this:
docker run -it --name=mynicevue -v %cd%:/app -p 8080:8080 mynicevue/app
When the app launches, I get this error in the terminal and the browser shows a white screen:
"GET /" Error (404): "Not found"
Can someone help me figure out what is wrong or missing in my Dockerfile, so that I can run my Vue app using Docker?
Thank you in advance.
Okay, I tried your project locally and here's how you do it.
Dockerfile
FROM node:lts-alpine
# bind your app to the gateway IP
ENV HOST=0.0.0.0
# make the 'app' folder the current working directory
WORKDIR /app
# copy both 'package.json' and 'package-lock.json' (if available)
COPY package*.json ./
# install project dependencies
RUN npm install
# copy project files and folders to the current working directory (i.e. 'app' folder)
COPY . .
EXPOSE 8080
ENTRYPOINT [ "npm", "run", "dev" ]
Use this command to run the docker image after you build it:
docker run -v ${PWD}/src:/app/src -p 8080:8080 -d mynicevue/app
Explanation
It seems that Vue expects your app to be bound to your gateway IP when it is served from within a container, hence ENV HOST=0.0.0.0 in the Dockerfile.
You need to mount your src directory into the running container's /app/src directory so that changes in your local filesystem are directly reflected and visible in the container itself.
The way to watch for file changes in Vue is npm run dev, hence ENTRYPOINT [ "npm", "run", "dev" ] in the Dockerfile.
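If you prefer docker-compose to a long docker run command, a roughly equivalent setup might look like this (a sketch; the service name and build context are assumed):
version: '3.4'
services:
  mynicevue:
    build: .
    image: mynicevue/app
    ports:
      - "8080:8080"
    volumes:
      - ./src:/app/src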
If you tried the previous answers and it still doesn't work, try adding watch: { usePolling: true } to your vite.config.js file. (File-change events often don't propagate across bind mounts, especially on Windows and macOS hosts, so the dev server has to poll.)
import { defineConfig } from 'vite'
import vue from '@vitejs/plugin-vue'

// https://vitejs.dev/config/
export default defineConfig({
  plugins: [vue()],
  server: {
    host: true,
    port: 4173,
    watch: {
      usePolling: true
    }
  }
})

Elastic Beanstalk Docker container to access S3 bucket for data

I have a very simple Flask project with just one endpoint that I was able to deploy to AWS using Elastic Beanstalk.
The only exposed endpoint goes to S3, retrieves a CSV file, and publishes the data in raw format. This configuration is working fine, so I know the roles and permissions in Elastic Beanstalk work correctly to reach the S3 bucket.
from flask import Flask
from flask_restful import Resource, Api
import io
import boto3
import pandas as pd

application = Flask(__name__)
api = Api(application)

s3 = boto3.resource('s3')
bucket = 'my_bucket'
key = 'my_file.csv'

class Home(Resource):
    def get(self):
        try:
            s3 = boto3.resource('s3')
            obj = s3.Object(bucket, key).get()['Body']
            data = pd.read_csv(io.BytesIO(obj.read()))
            print("S3 Object loaded.")
        except:
            print("S3 Object could not be opened.")
        print(data)
        csv = data.to_csv(index=False)
        return csv

# End points definition and application raise up
api.add_resource(Home, '/')

if __name__ == '__main__':
    application.run(host='0.0.0.0')
Now I'm trying to move that to a container so I created a Dockerfile to encapsulate the minimal app:
# syntax=docker/dockerfile:1
FROM python:3.8-slim-buster
WORKDIR /app
COPY requirements.txt requirements.txt
RUN pip3 install -r requirements.txt
COPY . .
CMD [ "python3", "-m" , "flask", "run", "--host=0.0.0.0"]
Since I don't have additional volumes or anything extra, my Dockerrun.aws.json is nearly empty:
{
  "AWSEBDockerrunVersion": "1"
}
Am I missing something needed to access the S3 bucket from inside the container?
While debugging I figured out that I was not exposing the port in the Dockerfile, and therefore Elastic Beanstalk was not able to deploy the container correctly. I also added python as the entry point and the script name as the CMD.
After some investigation I also realized that the container inherits all the role permissions that the host has, so there is no need for any additional setup.
My Dockerfile ended up looking like this:
# syntax=docker/dockerfile:1
FROM python:3.8-slim-buster
WORKDIR /app
COPY requirements.txt requirements.txt
RUN pip3 install -r requirements.txt
EXPOSE 5000
COPY . .
ENTRYPOINT [ "python" ]
CMD [ "app.py" ]

Unable to create folders from django app with docker container

I am unable to create a folder with the command below while my Django app is running inside a Docker container:
os.makedirs(path)
The same command creates the folder just fine when I run my Django project locally from the command prompt, without Docker.
My Dockerfile:
FROM python:3.6.9-stretch
ENV PYTHONUNBUFFERED 1
ENV C_FORCE_ROOT true
RUN mkdir /src
RUN mkdir /static
WORKDIR /src
ADD ./src /src
RUN pip install -r requirements.pip
CMD python manage.py collectstatic --no-input;python manage.py migrate; gunicorn specextractor.wsgi -b 0.0.0.0:8000
In my views.py I create folders named after whichever user is logged into the application. The functionality works when running without Docker, but it fails when running through the Docker container.
Any suggestion would be really helpful.
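A quick way to narrow this down is to log, from inside the container, which user the process runs as and whether the parent directory is writable, since failing makedirs calls in containers usually come down to one of those. A diagnostic sketch (make_user_dir is a hypothetical wrapper around the failing call):
import os

def make_user_dir(path):
    # log the effective uid and whether the parent directory is writable;
    # these are the usual causes of a PermissionError inside a container
    parent = os.path.dirname(path) or "."
    print("uid:", os.getuid(), "parent writable:", os.access(parent, os.W_OK))
    os.makedirs(path, exist_ok=True)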

Docker gets Flask server running, but I can't connect to it with my browser

I am trying to run my flask application through Docker. I had it working last week, but now it is giving me some issues. I can start the server, but I can't seem to visit any content through my browser.
Here is my app.py file (reduced):
... imports ...
app = Flask(__name__)
DATABASE = "users.db"
app.secret_key = os.environ["SECRET_KEY"]
app.config['UPLOAD_FOLDER'] = 'static/Content'
BASE_PATH = os.path.dirname(os.path.abspath(__file__))
# For Dropbox
__TOKEN = os.environ["dbx_access_token"]
__dbx = None
... functions ...
if __name__ == '__main__':
    app.run(port=5001, threaded=True, host=('0.0.0.0'))
Here is my Dockerfile:
# Use an official Python runtime as a parent image
FROM python:3
# Set the working directory to /VOSW-2016-original
WORKDIR /VOSW-2016-original
# Copy the current directory contents into the container at /VOSW-2016-original
ADD . /VOSW-2016-original
# Putting a variables in the environment
ENV SECRET_KEY="XXXXXXXX"
ENV dbx_access_token="XXXXXXXX"
# Install any needed packages specified in requirements.txt
RUN pip install --trusted-host pypi.python.org -r requirements.txt
# Make port 8000 available to the world outside this container
EXPOSE 8000
# Run app.py when the container launches
CMD ["python", "app.py", "--host=0.0.0.0"]
Here are the commands I am using to start the server:
docker build -t vosw_original ./VOSW-2016-original/
and then:
docker run -i -t -p 5001:8000 vosw_original
Then I get the message:
* Running on http://0.0.0.0:5001/ (Press CTRL+C to quit)
So the server seems to be running, but I can't seem to visit it when I do any of the following:
http://0.0.0.0:5001/
http://0.0.0.0:8000/
http://127.0.0.1:5001/
http://127.0.0.1:8000/
Where am I going wrong?
You are running your app on the wrong port:
app.run(port=5001, threaded=True, host=('0.0.0.0'))
while exposing 8000. So with -p 5001:8000 you are mapping the container's port 8000 (on which nothing is listening) to the host's port 5001, while your app is actually running on port 5001 within the container, which is what the startup message is about.
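Sketches of the two ways to line the ports up, either of which should work:
# either publish the port the app actually listens on:
docker run -i -t -p 5001:5001 vosw_original
# ...or keep -p 5001:8000 and make the app listen on 8000 inside the container:
# app.run(port=8000, threaded=True, host='0.0.0.0')
Either way, http://127.0.0.1:5001/ on the host should then answer.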
