Create Dockerfile to deploy NestJS API to fly.io - docker

I created a REST API using NestJS and I want to deploy it to fly.io. fly.io requires me to create a Dockerfile to do so. I don't know much about Dockerfiles, but fly.io has a CLI tool that creates one for you.
I am not able to deploy my API, however, and my suspicion is that there's an issue with the Dockerfile; specifically, that the generated Dockerfile isn't tailored to a NestJS app.
I am linking my repo below so you can view both the existing Dockerfile and the structure of my API. Can someone please suggest how I can modify the Dockerfile to work for a NestJS app?
Thanks
https://github.com/AhmedAbbasDeveloper/noteify-server/tree/nestjs-migration

I think you need to expose the port your NestJS server runs on. In your Docker image you can set an environment variable by adding ENV PORT=8080 to your Dockerfile, and you also need to expose it with EXPOSE 8080. You can use any port, but since you've configured 8080 in fly.io, that makes the most sense.
...
ENV PORT=8080
EXPOSE 8080
CMD ["node", "dist/main.js"]

Related

Elastic Beanstalk - Docker Platform with ECR - Specifying a tag via environment variable

I am looking at using Elastic Beanstalk + ECR to manage deployments.
FROM <ACCOUNT_ID>.dkr.ecr.us-east-1.amazonaws.com/<repoName>:<TAG>
ADD entrypoint.sh /
EXPOSE 8080 8787 9990
ENTRYPOINT [ "/entrypoint.sh" ]
This works if I hard-code the entire FROM line, but what I really want is to pass the tag as an environment variable, so I can pin the version.
What I am trying to do is have CodeBuild build, deploy, test, and (if all goes well) promote the artifact to a STAGE repo, where it will be promoted to PROD after another set of tests.
Our Terraform can then pass the commitId to deploy.
I want to avoid deploying the latest tag, because we want to control when the change gets released, and there is potential for an auto-scale event to pull it before it has been tested.
My optimistic attempt to add ${GIT_COMMIT} showed that the Dockerfile's FROM line does not interpret environment variables. And a few passes with sed suggested that, given all the file copying Beanstalk does (to onDeck / staging / current), that approach probably would not be wise.
I am wondering if people have had success addressing similar goals with Beanstalk/ECR.
Have you considered using ARG in your Dockerfile? Passing a build argument to the Dockerfile should work. Take a look at "Understand how ARG and FROM interact" in the Docker documentation. Your Dockerfile should look like this:
ARG GIT_TAG
FROM <ACCOUNT_ID>.dkr.ecr.us-east-1.amazonaws.com/<repoName>:${GIT_TAG}
ADD entrypoint.sh /
EXPOSE 8080 8787 9990
ENTRYPOINT [ "/entrypoint.sh" ]
Then you should be able to build your image using something like this:
docker build --build-arg GIT_TAG=SOME_TAG .
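One caveat: an ARG declared before FROM is only in scope for the FROM line itself, so re-declare it after FROM if you also need it later in the Dockerfile. In a CodeBuild step you could then pin the base image to the commit being built, roughly like the line below; CODEBUILD_RESOLVED_SOURCE_VERSION is CodeBuild's built-in commit variable, and the output tag is just an illustration.
# Pin the base image to the commit CodeBuild is currently building
docker build --build-arg GIT_TAG="${CODEBUILD_RESOLVED_SOURCE_VERSION}" -t myapp:candidate .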

Cloud Run Deploy fails: Container failed to start. Failed to start and then listen on the port defined by the PORT environment variable

I have a project which I had previously successfully deployed to Google Cloud Run, and set up with a trigger such that upon pushing to the repo's main branch on Github, it would automatically deploy. It worked great.
Then I tried to rename the github repo, which meant deleting and creating a new trigger, and now I cannot get it working again.
Every time, the build succeeds but deployment fails with this error in Cloud Build:
Step #2 - "Deploy": ERROR: (gcloud.run.services.update) Cloud Run error: Container failed to start. Failed to start and then listen on the port defined by the PORT environment variable. Logs for this revision might contain more information.
I have not changed anything other than the repo name, leading me to believe the fix is not with my code, but I tried some changes there anyway.
I have looked into the solutions set forth in this post. However, I believe I am listening on the correct port.
My app is using Python and Flask, and contains this:
if __name__ == "__main__":
    app.run(debug=False, host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))
This should use the PORT env var and otherwise default to 8080. I also tried just using port=8080.
I tried explicitly exposing the port in the Dockerfile, which also did not work:
FROM python:3.7
#Copy files into docker image dir, and make that the current working dir
COPY . /docker-image
WORKDIR /docker-image
RUN pip install -r requirements.txt
CMD ["flask", "run", "--host", "0.0.0.0"]
EXPOSE 8080
Cloud Run does seem to be using port 8080 - if I dig into the response, I see this nested under Response.spec.container.0 :
ports: [
0: {
containerPort: 8080
name: "http1"
}
]
All that said, if I look at the logs, it shows "Now running on Port 5000".
I have no idea where that Port 5000 is coming from or being set, but trying to change the ports in Python/Flask and the Dockerfile to 5000 leads to the same errors.
How do I get it to run on Port 8080? It's very strange to me that this was working FINE prior to renaming the repo and creating a new trigger. How is this setup different? The Trigger does not give an option to set the port so I'm not sure how that caused this error.
You have mixed two things up. The flask run command's default port is indeed 5000. If you want to change it, you need to pass the --port parameter to your flask run command:
CMD ["flask", "run", "--host", "0.0.0.0","--port","8080"]
In addition, the flask run command uses the Flask runtime and totally ignores the standard Python entrypoint if __name__ == "__main__":. If you want to use that entrypoint, use the Python runtime:
CMD ["python", "<main file>.py"]

Passing environment variables at runtime to Vue.js application with docker-compose

This is my second post about this particular issue. I've since deleted that question because I've found a better way to explain what exactly I'd like to do.
Essentially, I'd like to pass command line arguments to docker-compose up and set them as environment variables in my Vue.js web application. The goal is to be able to change the environment variables without rebuilding the container every time.
I'm running into several issues with this. Here are my docker files:
Dockerfile for Vue.js application.
FROM node:latest as build-stage
WORKDIR /app
# Environment variable.
ENV VUE_APP_FOO=FOO
COPY package*.json ./
RUN npm install
COPY ./ .
RUN npm run build
FROM nginx as production-stage
RUN mkdir /app
COPY --from=build-stage /app/dist /app
COPY nginx.conf /etc/nginx/nginx.conf
VUE_APP_FOO is stored and accessible via Node's process.env object and seems to be passed in at build time.
And my docker-compose.yml:
version: '3.5'
services:
  ms-sql-server:
    image: mcr.microsoft.com/mssql/server:2017-latest-ubuntu
    ports:
      - "1430:1433"
  api:
    image: # omitted (pulled from url)
    restart: always
    depends_on:
      - ms-sql-server
    environment:
      DBServer: "ms-sql-server"
    ports:
      - "50726:80"
  client:
    image: # omitted (pulled from url)
    restart: always
    environment:
      - VUE_APP_BAR="BAR"
    depends_on:
      - api
    ports:
      - "8080:80"
When I open a shell in the client container with docker exec -it <container_name> /bin/bash, the VUE_APP_BAR variable is present with the value "BAR". But the variable is not stored in the process.env object in my Vue application. It seems like something odd is happening with Node and its environment variables; it's like it's ignoring the container environment.
Is there any way for me to access the container-level variables set in docker-compose.yml inside my Vue.js application? Furthermore, is there any way to pass those variables as arguments to docker-compose up? Let me know if you need any clarification or more information.
So I figured out how to do this in a somewhat hacky way that works perfectly for my use case. A quick review of what I wanted to do: be able to pass environment variables via a docker-compose file to a Vue.js application, so that different team members can test against different development APIs depending on their assignment (localhost if running the server locally, api-dev, api-staging, api-prod).
The first step is to declare your variables in a JS file inside your VueJS project (it can be defined anywhere) formatted like this:
export const serverURL = 'VUE_APP_SERVER_URL'
Quick note about the value of this string: it has to be completely unique to your entire project. If there is any other string or variable name in your application that matches it, it will get replaced with the docker environment variable we pass using this method.
Next we have to go over to our docker-compose.yml and declare our environment variable there:
docker-compose.yml
your_vuejs_client:
  build: /path/to/vuejs-app
  restart: always
  environment:
    VUE_APP_SERVER_URL: ${SERVER_URL}
  ports:
    - "8080:80"
Now when you run docker-compose up in your terminal, you should see something like this:
WARNING: The SERVER_URL variable is not set. Defaulting to a blank string.
Once the docker-compose file is set up properly, we need to create an entrypoint script in the VueJS application that runs before the app is served by nginx. To do this, navigate back to your VueJS directory and run touch entrypoint.sh to create a blank shell script. Open it up; this is what I have in mine:
entrypoint.sh
#!/bin/sh
ROOT_DIR=/usr/share/nginx/html
echo "Replacing env constants in JS"
for file in $ROOT_DIR/js/app.*.js* $ROOT_DIR/index.html $ROOT_DIR/precache-manifest*.js;
do
  echo "Processing $file ...";
  sed -i 's|VUE_APP_SERVER_URL|'${VUE_APP_SERVER_URL}'|g' $file
done

# Hand off to the image's CMD (nginx) so the container keeps running
exec "$@"
The sed -i 's|VUE_APP_SERVER_URL|'${VUE_APP_SERVER_URL}'|g' $file line traverses your entire application looking for the string 'VUE_APP_SERVER_URL' and replaces it with the environment variable from docker-compose.yml. The exec "$@" at the end hands control back to the image's CMD so nginx still starts.
Finally we need to add some lines to our VueJS application Dockerfile to tell it to run the entrypoint script we just created before nginx is started. So right before the CMD ["nginx", "-g", "daemon off;"] line in your Dockerfile, add the lines below:
VueJS Dockerfile
# Copy entrypoint script as /entrypoint.sh
COPY ./entrypoint.sh /entrypoint.sh
# Grant Linux permissions and run entrypoint script
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
After that, running SERVER_URL=yourserverapi.com/api docker-compose up (compose reads SERVER_URL from the shell environment, or from an .env file, and substitutes it into ${SERVER_URL}), the serverURL constant we set in a JS file at the beginning will be replaced with whatever you supply. This was a pain to finally get working, but I hope this helps out anyone facing similar troubles. The great thing is that you can add as many environment variables as you want: just add more lines to the entrypoint.sh file and define them in the Vue.js application and your docker-compose file. Some of the ones I've used are providing a different endpoint for the USPS API depending on whether you're running locally or hosted in the cloud, providing different Maps API keys based on whether the instance is running in production or development, and so on.
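For completeness, here is roughly how the constant from the first step might be consumed in the app; the file path and endpoint below are hypothetical and only illustrate the idea:
// src/api.js (hypothetical): serverURL is the placeholder string that
// entrypoint.sh rewrites to the real URL when the container starts
import { serverURL } from './config'

export function fetchNotes() {
  return fetch(`${serverURL}/notes`).then(res => res.json())
}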
I really hope this helps someone out, let me know if anyone has any questions and I can hopefully be of some help.
The client app runs in a web browser, but the environment variables live on the server. The client needs a way to obtain the environment variable's value from the server.
To accomplish that, you have several options, including:
Leverage nginx to serve the environment variable itself, using an approach like the ones in "nginx: use environment variables". This approach may be quick, and more dynamic or more static depending on your needs, though maybe less formal and elegant. Or
Implement a server API (Node.js?) that reads the environment variable and returns it to the client over an AJAX call. This approach is elegant, dynamic, API-centric. Or
Lastly if the environment variable is static per nginx instance per deployment, you could build the static assets of the Vue app during deployment and hard-code the environment variable right there in the static assets. This approach is somewhat elegant but does pollute client code with server details and is somewhat static (can only change on deployment).
As I posted here https://stackoverflow.com/a/63097312/4905563, I have developed a package that could help.
Try npm install jvjr-docker-env and take a look at the README.md to see some examples of use.
Even though the question title asks how to consume environment variables on the Vue.js side, the questioner's goal is to configure the backend API endpoint dynamically without rebuilding the Docker image.
I achieved this by using a reverse proxy.
For the dev run, configure the reverse proxy in vue.config.js, which is consumed by the vue-cli dev server.
For the nginx run, configure the reverse proxy in nginx.conf. You can use an nginx template to read environment variables.
This approach also eliminates the need for CORS configuration on the web API server side, since the web API is called through the Vue app's web server, not from the browser directly.
A more thorough working sample can be found in this commit.
vue.config.js:
module.exports = {
  devServer: {
    proxy: {
      '/api': {
        target: 'http://host.docker.internal:5000',
      },
    },
  },
};
nginx.conf:
...
http {
...
include /etc/nginx/conf.d/*.conf;
}
nginx.default.conf.template:
server {
  listen 80;
  ...
  location /api {
    proxy_pass ${WEBAPI_ENDPOINT};
  }
}
Dockerfile:
...
COPY nginx.conf /etc/nginx/nginx.conf
COPY nginx.default.conf.template /etc/nginx/templates/default.conf.template
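For the substitution to happen, the container just needs WEBAPI_ENDPOINT in its environment; recent official nginx images (1.19+) run envsubst over /etc/nginx/templates/*.template at startup and write the result to /etc/nginx/conf.d/. A rough docker-compose sketch, with placeholder image and service names:
services:
  web:
    image: my-vue-nginx-image        # placeholder: the image built from the Dockerfile above
    environment:
      WEBAPI_ENDPOINT: http://webapi:5000   # target the /api location should proxy to
    ports:
      - "8080:80"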
Add a static config.js script in index.html. This file is not processed by Webpack, but included verbatim. Use your docker-compose file or kubernetes manifest or AWS ECS task config or similar to override that file at run time.
For example, in my Vue project:
public/config.js
// Run-time configuration. Override this in e.g. your Dockerfile, kubernetes pod or AWS ECS Task.
// Use only very simple, browser-compatible JS.
window.voxVivaConfig = {};
public/index.html
<!DOCTYPE html>
<html lang="en">
<head>
<!-- ... -->
<!-- Allow injection of run-time config -->
<script async src="config.js"></script>
<!-- ... -->
</head>
<body>
<div id="app" aria-busy="true">
<noscript><p>This app requires JavaScript.</p></noscript>
</div>
</body>
</html>
src/config.js
function getRunTimeConfig() {
  if (typeof window.voxVivaConfig === "object") {
    return window.voxVivaConfig || {};
  }
  return {};
}

export default Object.freeze({
  appTitle: "My App",
  backEndBaseUrl: process.env.VUE_APP_BACK_END_BASEURL || "https://example.com",
  whatever: process.env.VUE_APP_WHATEVER || "",
  /**
   * Allow config specified at run time to override everything above.
   */
  ...getRunTimeConfig(),
});
Advantages
This puts all config in one place, and lets you choose which config values should be specified at compile time, build time or run time, as you see fit.
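As one concrete, hypothetical example of the run-time override: with docker-compose you can bind-mount an environment-specific config.js over the one baked into the image. This assumes the built assets are served by nginx from the usual /usr/share/nginx/html web root; the image and file names are placeholders.
services:
  web:
    image: my-vue-app            # placeholder image name
    volumes:
      # Replace the baked-in config.js with an environment-specific one
      - ./config.production.js:/usr/share/nginx/html/config.js:ro
    ports:
      - "8080:80"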

node and react running with docker-compose.yml file

I have a sample application using Node.js and React, so my project folder consists of a client folder and a server folder. The client folder was created using create-react-app.
I have created a Dockerfile for each of the two folders, and I am using a docker-compose.yml at the root of the project.
Everything is working fine. Now I just want to host this application, and I am trying to use Jenkins.
Since I have little knowledge of the DevOps side, I have some doubts:
1) If I use two Dockerfiles, one for the client and one for the server, and they are started via docker-compose.yml, will they run in two different containers or in a single container? From what I have read I think it will be two containers, which is the point of the docker-compose.yml file, but I am a little confused about this.
2) Also, when I run sudo docker-compose up, everything works but it shows "to create a production build use npm run build". How can I change this based on the environment? Do I need a different docker-compose.yml file for each environment, or can I use the same file but run npm start or npm run build depending on the environment?
3) Can I use the docker-compose.yml file for building the pipeline in Jenkins, or do I need a Dockerfile in the root of the project? I have seen most projects with a single Dockerfile. Does that mean I cannot use docker-compose.yml for hosting the application?
4) The reason I use NODE_COMMAND in the command property of docker-compose.yml for the server is that when I run the application locally I need auto-reloading, so if I set NODE_COMMAND=nodemon in the terminal it is used instead of node index.js, while in production it falls back to node index.js if I don't set NODE_COMMAND.
5) Do I need the CMD in the Dockerfile of both the client and the server, given that when I run docker-compose up it takes the command from docker-compose.yml? I think the docker-compose.yml file takes precedence, is that right?
6) What is the use of volumes? Are they required in the docker-compose.yml file?
7) In the .env file I am using API_HOST and APP_SERVER_PORT. How do they work internally with package.json? Is it doing the proxy thing? When we need to hit Node.js we usually set "proxy": "http://localhost:4000", but here how does it become http://server:4000? How does this work?
8) When we create containers we have ports like 3000, 3001, etc. How do the container port and our application port get matched? Is that taken care of by exporting environment variables and the ports section in the docker-compose.yml file?
Please see the folder structure below:
movielisting
  client
    Dockerfile
    package.json
    package-lock.json
    ... other create-react-app folders like src
  server
    Dockerfile
    index.js
  docker-compose.yml
  .env
Dockerfile -- client
FROM node:10.15.1-alpine
#Create app directory and use it as the working directory
RUN mkdir -p /srv/app/client
WORKDIR /srv/app/client
COPY package.json /srv/app/client
COPY package-lock.json /srv/app/client
RUN npm install
COPY . /srv/app/client
CMD ["npm", "start"]
Dockerfile -- server
FROM node:10.15.1-alpine
#Create app directory
RUN mkdir -p /srv/app/server
WORKDIR /srv/app/server
COPY package.json /srv/app/server
COPY package-lock.json /srv/app/server
RUN npm install
COPY . /srv/app/server
CMD ["node", "index.js"]
docker-compose.yml -- root of project
version: "3"
services:
#########################
# Setup node container
#########################
server:
build: ./server
expose:
- ${APP_SERVER_PORT}
environment:
API_HOST: ${API_HOST}
APP_SERVER_PORT: ${APP_SERVER_PORT}
ports:
- ${APP_SERVER_PORT}:${APP_SERVER_PORT}
volumes:
- ./server:/srv/app/server
command: ${NODE_COMMAND:-node} index.js
##########################
# Setup client container
##########################
client:
build: ./client
environment:
- REACT_APP_PORT=${REACT_APP_PORT}
expose:
- ${REACT_APP_PORT}
ports:
- ${REACT_APP_PORT}:${REACT_APP_PORT}
volumes:
- ./client/src:/srv/app/client/src
- ./client/public:/srv/app/client/public
links:
- server
command: npm run start
.env
API_HOST="http://localhost:4000"
APP_SERVER_PORT=4000
REACT_APP_PORT=3000
package.json -- client
"proxy": "http://server:4000"
What all can I refactor here?
Any help is appreciated.
1) If I use two Dockerfiles, one for the client and one for the server, and they are started via docker-compose.yml, will they run in two different containers or in a single container? From what I have read I think it will be two containers, which is the point of the docker-compose.yml file, but I am a little confused about this.
Each Dockerfile builds its own Docker image, so in the end you will have two images: one for the React application and one for the backend, which is the Node.js application. When docker-compose starts them, they run as two separate containers.
2) Also, when I run sudo docker-compose up, everything works but it shows "to create a production build use npm run build". How can I change this based on the environment? Do I need a different docker-compose.yml file for each environment, or can I use the same file but run npm start or npm run build depending on the environment?
You need to build the React application within the steps in its Dockerfile in order to use it as a normal production application. You can also use environment variables to customize the image during the build, via build-args, for example to pass a custom path or anything else.
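For the production case, one common pattern is a multi-stage client Dockerfile that runs npm run build and serves the static output with nginx. This is a sketch under the usual create-react-app defaults (build output in build/), not a drop-in for your exact setup:
# Build stage: compile the React app
FROM node:10.15.1-alpine AS build
WORKDIR /srv/app/client
COPY package.json package-lock.json ./
RUN npm install
COPY . .
RUN npm run build

# Serve stage: static files behind nginx
FROM nginx:alpine
COPY --from=build /srv/app/client/build /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]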
3) Can I use the docker-compose.yml file for building the pipeline in Jenkins, or do I need a Dockerfile in the root of the project? I have seen most projects with a single Dockerfile. Does that mean I cannot use docker-compose.yml for hosting the application?
It would be better to use the Dockerfile(s) with Jenkins to build your images, and to keep the docker-compose.yml file(s) for deploying the application itself, without using the build keyword.
4) The reason I use NODE_COMMAND in the command property of docker-compose.yml for the server is that when I run the application locally I need auto-reloading, so if I set NODE_COMMAND=nodemon in the terminal it is used instead of node index.js, while in production it falls back to node index.js if I don't set NODE_COMMAND.
Using command inside the docker-compose.yml file overrides the CMD that was set in the Dockerfile during the build step.
5) Do I need the CMD in the Dockerfile of both the client and the server, given that when I run docker-compose up it takes the command from docker-compose.yml? I think the docker-compose.yml file takes precedence, is that right?
Generally speaking, yes, you need it. However, since you want to override it from the docker-compose file anyway, you could set it to something minimal such as CMD ["node", "--help"].
6) What is the use of volumes? Are they required in the docker-compose.yml file?
Volumes are needed when you have files shared between containers, or when you need to keep data persistent on the host.
7) In the .env file I am using API_HOST and APP_SERVER_PORT. How do they work internally with package.json? Is it doing the proxy thing? When we need to hit Node.js we usually set "proxy": "http://localhost:4000", but here how does it become http://server:4000? How does this work?
server is an alias for the Node.js container inside the Docker network once you start your application. Why is it named server? Because that is the service name in your docker-compose.yml file, in this part:
services:
  server:
But of course you can change it by adding an alias to the service under the networks key inside the docker-compose.yml file.
Note: React itself runs client-side, which means it executes in the browser, so the browser will not be able to contact the Node.js application through the Docker network; you may need to use the host's IP, or use localhost and make the Node.js app reachable through localhost.
8) When we create containers we have ports like 3000, 3001, etc. How do the container port and our application port get matched? Is that taken care of by exporting environment variables and the ports section in the docker-compose.yml file?
Docker itself does not know which port your application is using, so you have to make both use the same port. In Node.js this is achievable by reading an environment variable.
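A minimal sketch of that, assuming an Express-style server and the APP_SERVER_PORT variable from the compose file above (your index.js may differ):
// index.js (sketch): read the port docker-compose injects so the app
// listens on the same port that is exposed/published
const express = require('express');
const app = express();

const port = process.env.APP_SERVER_PORT || 4000;
app.listen(port, () => console.log(`API listening on port ${port}`));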
For more details:
https://docs.docker.com/engine/reference/commandline/build/#set-build-time-variables---build-arg
https://docs.docker.com/compose/networking/
https://docs.docker.com/compose/compose-file/#aliases
https://docs.docker.com/compose/compose-file/#command
https://facebook.github.io/create-react-app/docs/deployment
If anyone is facing issues connecting React and Express, make sure there is NO localhost attached to the server API address in the client code
(e.g. http://localhost:5000/api should be changed to /api),
since the proxy entry is already in the package.json file.
PS: if no entry is there, add
{
"proxy": "http://server:5000"
}
to package.json ('server' is your Express app's service name in the docker-compose file).
I finally made it work and thought of sharing this in case it helps anyone else.

Docker composed services can't communicate by service name

tldr: I can't communicate with a docker composed service by its service name in order to make requests to an api running in networked containers.
I have a single page application that makes requests to a json api. Its Dockerfile looks like this:
FROM nginx:alpine
COPY dist /usr/share/nginx/html
EXPOSE 80
A build process does its thing and puts all the static assets in a dist directory, which is then copied to the html directory of the nginx web server.
I have a mock json api powered by json-server. Its Dockerfile looks like this:
FROM node:7.10.0-alpine
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json /usr/src/app/
RUN npm install
COPY . /usr/src/app
EXPOSE 3000
CMD [ "npm", "start" ]
I have a docker-compose file that looks like this:
version: '2'
services:
  badass-ui:
    image: mydocker-hub/badass-ui
    container_name: badass-ui
    ports:
      - "80:80"
  badderer-api:
    image: mydocker-hub/badderer-api
    container_name: badderer-api
    ports:
      - "3000:3000"
I'm able to build both containers successfully, and am able to run "docker-compose up" with both containers running smoothly. Fetch requests from badass-ui to badderer-api:3000/users return "net::ERR_NAME_NOT_RESOLVED". Fetch requests to http://192.168.99.100:3000/users (or whatever the container IP may be) work fine. I thought that by using Docker Compose I would be able to reference the name of a service defined in docker-compose.yml as a domain name, and that this would enable communication between the containers via domain name. This doesn't seem to work. Is there something wrong with my docker-compose.yml? I'm on Windows 10 Home edition, using the tools that come with the Docker Quickstart terminal for Windows: docker-compose version 1.13.0, docker version 17.05.0-ce, docker-machine version 0.11.0 and VirtualBox 5.1.20.
Since you are using docker-compose.yml version 2, links should not be necessary. Containers within a compose network should be able to resolve other compose containers by service name.
Reading the comments on your question, it seems like the networking and hostname resolution work, so the problem is probably in your web UI. I don't see you passing any kind of configuration to the UI application saying where to find the API. Maybe there is a hard-coded URL to the API in your UI causing the error?
Edit:
Is your UI a client-side/JavaScript app? Are you sure the app isn't actually making the call from your browser? Your browser, running on your local machine and not in Docker, will not be able to resolve the badderer-api hostname.
