I'm struggling with this issue:
I have a FastAPI app with a couple of routes that are called by other services, and I have to set up a testing environment for customers to try out the features.
I've containerized the whole API project and it works very well.
Then I published the whole thing on Heroku via the container stack, and now it doesn't save files in the /responses directory.
My Dockerfile is:
FROM python:3.9-slim
COPY ./src /app/src
RUN mkdir /app/responses
COPY ./requirements.txt /app
COPY ./start.sh /app
COPY ./templates /app/templates
COPY ./static /app/static
WORKDIR /app
RUN pip install -r requirements.txt
RUN chmod a+x ./start.sh
#EXPOSE 8000
CMD ["./start.sh"]
My FastAPI app mounts the static files directory (which is not working either) and the responses directory as follows:
from fastapi import FastAPI, File, UploadFile, Request, Form
from fastapi.responses import HTMLResponse, FileResponse
from fastapi.staticfiles import StaticFiles
from fastapi.templating import Jinja2Templates

app = FastAPI()

app.mount("/static", StaticFiles(directory="static"), name="static")
app.mount("/responses", StaticFiles(directory="responses"), name="responses")
templates = Jinja2Templates(directory="templates")
And then I try to write the files with this code:
filename = fiscalCode + "/" + fiscalCode + "_" + timestamp + ".html"
filepath = os.path.join(RESPONSE_FOLDER, filename)
print(filepath)
if not os.path.exists(filepath):
    os.makedirs(os.path.dirname(filepath), exist_ok=True)
with open(filepath, "w+") as f:
    f.write(file.file.read().decode("utf-8"))
I know that the Heroku filesystem is ephemeral, but that is not an issue: the files only need to be available for a couple of minutes, for testing purposes.
I'm stuck because on other applications deployed to Heroku with a Procfile (so without the Docker shenanigans) I had no issues at all writing and retrieving files.
Thanks for any ideas.
I found that, even in the Heroku remote shell, I can't see the files that were created. As far as I can tell, this is because heroku run starts a new one-off dyno with its own fresh copy of the filesystem, rather than attaching to the web dyno that actually wrote the files.
Nevertheless, all the functions are working fine, and the transitional files are not lost in some hyperuranium: they live on the web dyno's own ephemeral filesystem.
So if you don't find your files from another shell, they most likely still exist, just on a different dyno from the one you are inspecting.
I am having an issue accessing a static file for my local web server when I build it with Docker. I am using the github.com/xeipuuv/gojsonschema toolkit to validate incoming JSON requests against a local JSON schema file via
schemaLoader := gojsonschema.NewReferenceLoader("file://C:/Users/user/Workspace/jsonschema.json")
But when I try to access the file with Docker, it says "no such file or directory". The Dockerfile I am using is:
FROM golang:1.17-alpine
WORKDIR /app
COPY go.mod .
COPY go.sum .
COPY jsonschema.json .
RUN go mod download
COPY *.go ./
RUN go build -o /main
EXPOSE 8080
CMD ["/main"]
Thank you very much in advance.
Best regards
I tried changing the path, e.g.
schemaLoader := gojsonschema.NewReferenceLoader("file://app/jsonschema.json")
but it didn't help.
Your fix is nearly correct but you are missing a single / in the file:// path.
Here is an explanation of the difference between file:/, file://, and file:///.
You want this:
schemaLoader := gojsonschema.NewReferenceLoader("file:///app/jsonschema.json")
which means: use the file URI scheme (file://) to load the file at the absolute path /app/jsonschema.json.
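For completeness, here is a minimal validation sketch using that loader. The inline document and its field are made up purely for illustration; in the real service the document would come from the incoming request body.
package main

import (
    "fmt"
    "log"

    "github.com/xeipuuv/gojsonschema"
)

func main() {
    // Load the schema from inside the container via a file:/// URI.
    schemaLoader := gojsonschema.NewReferenceLoader("file:///app/jsonschema.json")
    // Hypothetical document, for illustration only.
    documentLoader := gojsonschema.NewStringLoader(`{"name": "test"}`)

    result, err := gojsonschema.Validate(schemaLoader, documentLoader)
    if err != nil {
        log.Fatal(err)
    }
    if result.Valid() {
        fmt.Println("document is valid")
    } else {
        for _, desc := range result.Errors() {
            fmt.Println(desc)
        }
    }
}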
I currently use NGINX as a reverse proxy for multiple web applications (virtual hosts) hosted on a Linux server. It also directly serves static files for the applications (e.g. big JavaScript files, bitmaps, software downloads).
Now I want to package each application into a docker image. But what do I do with the static files?
I can either serve these files from the applications themselves (which is slower) or have another image per application with its own NGINX just for the static files.
Any other ideas?
Good news: you do pretty much exactly what you're already doing. Basically you'll use the official Nginx image and just copy the files into the appropriate directory.
The Dockerfile would look something like this for create-react-app (as an example):
# Build stage: compile the static assets
FROM node:16-alpine AS builder
RUN mkdir -p /app
WORKDIR /app
COPY package.json ./
RUN npm install
COPY ./ ./
RUN npm run build

# Serve stage: nginx serves the compiled files
FROM nginx:alpine
COPY --from=builder /app/build /usr/share/nginx/html
Hosting containers on cloud infrastructure can come with its challenges though. The company I work for, cycle.io, simplifies the process and would be a great place for you to deploy your containerized server.
I have a simple static website, which I can host with Docker by writing a dockerfile.txt with the following commands:
FROM nginx
RUN mkdir /usr/share/nginx/html/blog
COPY . /usr/share/nginx/html/blog
This works pretty well for me.
Now I'm trying to dockerize a static site that was built using Hugo. What exactly should I write in the Dockerfile?
FROM klakegg/hugo
COPY ?????????????????????
Does Hugo have a directory where I can place the website files, or does Hugo work completely differently?
Thanks in advance!
Your files need to be placed in /src.
The klakegg/hugo container only acts as the "compiler"; in order to host the generated files you also need nginx.
This can be achieved with a multi-stage build:
FROM klakegg/hugo AS build
COPY . /src
# Run hugo to generate the site; by default the output goes to public/
# under the working directory, which is /src in this image
RUN hugo

FROM nginx
COPY --from=build /src/public /usr/share/nginx/html
I have a problem when I try to run my app in a Docker container. It runs fine with a simple go run main.go, but whenever I build an image and run the container, I get the error panic: html/template: pattern matches no files: *.html, so I guessed GOPATH was not properly set in the Docker container (though I use this same Dockerfile for other projects and have no problems there). I am a little lost here, since I've been using this method for a while without issues.
I am using gin as the framework for development.
The Dockerfile is:
FROM golang:alpine as builder
RUN apk update && apk add git && apk add ca-certificates
# For email certificate
RUN apk add -U --no-cache ca-certificates
COPY . $GOPATH/src/github.com/kiketordera/advanced-performance/
WORKDIR $GOPATH/src/github.com/kiketordera/advanced-performance/
RUN go get -d -v $GOPATH/src/github.com/kiketordera/advanced-performance
# For Cloud Server
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -ldflags="-w -s" -o /go/bin/advanced-performance $GOPATH/src/github.com/kiketordera/advanced-performance
FROM scratch
COPY --from=builder /go/bin/advanced-performance /advanced-performance
COPY --from=builder /go/src/github.com/kiketordera/advanced-performance/media/ /go/src/github.com/kiketordera/advanced-performance/media/
# For email certificate
VOLUME /etc/ssl/certs/ca-certificates.crt:/etc/ssl/certs/ca-certificates.crt
COPY --from=alpine /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
EXPOSE 8050/tcp
ENV GOPATH /go
ENTRYPOINT ["/advanced-performance"]
Main function is:
package main

import (
    "fmt"
    "net/http"

    "github.com/gin-gonic/gin"
    i18n "github.com/suisrc/gin-i18n"
    "golang.org/x/text/language"
)

func main() {
    // We create the instance for Gin
    r := gin.Default()

    // Internationalization for showing the right language to match the browser's default settings
    bundle := i18n.NewBundle(
        language.English,
        "text/en.toml",
        "text/es.toml",
    )

    // Tell Gin to use our middleware. This means that on every single request (GET, POST...), the call to i18n will be executed
    r.Use(i18n.Serve(bundle))

    // Path to the static files. /static is rendered in the HTML and media is the path to the images, svg, css... the static files
    r.StaticFS("/static", http.Dir("media"))

    // Path to the HTML templates. * is a wildcard
    r.LoadHTMLGlob("*.html")

    // Redirects when the user enters a wrong URL
    r.NoRoute(redirect)

    // This gets executed when the user visits our website at the home domain ("/")
    r.GET("/", renderHome)
    r.POST("/", getForm)

    // Listen and serve on 0.0.0.0:8080 (for Windows, "localhost:8080")
    r.Run()
}
The full project can be found at https://github.com/kiketordera/advanced-performance; it is a simple website with i18n rendering and a POST form handler.
GOPATH is not relevant; it's used to "resolve import statements" and plays no role when running an executable (unless your code references it specifically!). The WORKDIR is the issue here.
FROM "clears any state created by previous instructions". This includes the WORKDIR. For example if you use the docker file:
FROM alpine:3.12
WORKDIR /test
COPY 1.txt .

FROM alpine:3.12
COPY 2.txt .
The final resulting image will have file 2.txt in the root folder (and no /test folder).
In your dockerfile you are copying the media folder to /go/src/github.com/kiketordera/advanced-performance/media/ on the assumption that the WORKDIR will be set; but that is not the case (it defaults to /). Simplest fix is to change COPY --from=builder /go/src/github.com/kiketordera/advanced-performance/media/ /go/src/github.com/kiketordera/advanced-performance/media/ to COPY --from=builder /go/src/github.com/kiketordera/advanced-performance/media/ /media/.
You are also accessing files from the root folder, so you need to copy these in with COPY --from=builder /go/src/github.com/kiketordera/advanced-performance/*.html / (or similar). Given that you are doing this, it's probably best to put everything (the exe, HTML files and media folder) into a folder (e.g. /app) to keep the root folder clean.
Note: There is no need to set GOPATH in the second image; as mentioned above it's not relevant when running the executable. I'd recommend using modules (support for GOPATH will probably be dropped in 1.17); this would also enable you to considerably shorten your paths!
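If you do group everything under a single folder like /app, one way to keep the Go side location-independent is to resolve the template and asset paths from a base directory. This is only a sketch of that idea, not code from the project; the APP_DIR environment variable is hypothetical and would be set in the Dockerfile (e.g. ENV APP_DIR=/app):
package main

import (
    "net/http"
    "os"
    "path/filepath"

    "github.com/gin-gonic/gin"
)

func main() {
    // APP_DIR is a hypothetical variable naming the folder that holds
    // the HTML templates and the media directory; it defaults to the
    // current directory so a plain `go run main.go` still works locally.
    base := os.Getenv("APP_DIR")
    if base == "" {
        base = "."
    }

    r := gin.Default()
    r.LoadHTMLGlob(filepath.Join(base, "*.html"))
    r.StaticFS("/static", http.Dir(filepath.Join(base, "media")))
    r.Run()
}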
Being relatively new to Docker development, I've seen a few different ways that apps and dependencies are installed.
For example, in the official Wordpress image, the WP source is downloaded in the Dockerfile and extracted into /usr/src and then this is installed to /var/www/html in the entrypoint script.
Other images download and install the source in the Dockerfile, meaning the entrypoint just deals with config issues.
Either way, the scripts have to be updated when a new version of the source is available, so neither approach seems to make updating to a new version any more efficient.
What are the pros and cons of each approach? Is one recommended over the other for any specific sorts of setup?
Generally you should install application code and dependencies exclusively in the Dockerfile. The image entrypoint should never download or install anything.
This approach is simpler (you often don't need an ENTRYPOINT line at all) and more reproducible. You might run across some setups that run commands like npm install in their entrypoint script; this work will be repeated every time the container runs, and the container won't start up if the network is unreachable. Installing dependencies in the Dockerfile only happens once (and generally can be cached across image rebuilds) and makes the image self-contained.
The Docker Hub wordpress image is unusual in that the underlying WordPress libraries, the custom PHP application, and the application data are all stored in the same directory tree, and it's typical to use a volume mount for that application tree. Its entrypoint script looks for a wp-includes/index.php file inside the application source tree, and if it's not there it copies it in. That's a particularly complex entrypoint script.
A generally useful pattern is to keep an application's data somewhere separate from the application source tree. If you're installing a framework, install it as a library using the host application's ordinary dependency system (for example, list it in a Node package.json file rather than trying to include it in a base image). This is good practice in general; in Docker it specifically lets you mount a volume on the data directory and not disturb the application.
For a typical Node application, for example, you might install the application and its dependencies in a Dockerfile, and not have an ENTRYPOINT declared at all:
FROM node:14
WORKDIR /app
# Install the dependencies
COPY package.json yarn.lock ./
RUN yarn install
# Install everything else
COPY . ./
# Point at some other data directory
RUN mkdir /data
ENV DATA_DIR=/data
# Application code can look at process.env.DATA_DIR
# Usual application metadata
EXPOSE 3000
CMD yarn start
...and then run this with a volume mounted for the data directory, leaving the application code intact:
docker build -t my-image .
docker volume create my-data
docker run -p 3000:3000 -d -v my-data:/data my-image