I am trying to deploy a WAR file on a Tomcat server through Docker, but after deploying it the UI page does not load and an error is thrown. Checking the logs in the pod's container shows the error below. I would really appreciate it if you could point out where I am going wrong.
Dockerfile:
FROM xxx-docker-dev.docker.xxx.dev/tomcat:9.0.65-jdk11
EXPOSE 8080
COPY deployments/*.war /usr/local/tomcat/webapps/
COPY config/ /usr/local/tomcat/conf/
USER root
RUN mkdir -p /usr/local/tomcat/conf/Catalina/localhost
RUN chmod -R 777 /usr/local/tomcat/conf/Catalina
RUN chmod -R 777 /usr/local/tomcat/webapps/
RUN chmod -R 777 /usr/local/tomcat/conf/
USER ${UID}
Error:
java.util.ServiceConfigurationError: com.xxx.edp.landlord.impl.DefaultConfigurationResolutionRegistry could not be instantiated
Caused by: java.lang.NullPointerException: The config root must be specified by the landlord.config.root JVM argument.
Web UI Error:
HTTP Status 404 – Not Found
Type Status Report
Description The origin server did not find a current representation for the target resource or is not willing to disclose that one exists.
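For what it's worth, the error itself says the application expects a -Dlandlord.config.root JVM system property. As a rough sketch only (the property name comes from the log above; the path value is purely illustrative and depends on the application), one way to pass such a flag to Tomcat from a Dockerfile is via CATALINA_OPTS:
# illustrative value; the real config root depends on the application
ENV CATALINA_OPTS="-Dlandlord.config.root=/usr/local/tomcat/conf/landlord"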
I am new to Docker, and this is the first time I have run into an error like this.
This is my Dockerfile:
FROM rust:latest as builder
ENV APP mapservice
WORKDIR /usr/src/$APP
COPY . .
RUN cargo install --path .
FROM debian:buster-slim
RUN apt-get update && rm -rf /var/lib/apt/lists/*
COPY --from=builder /usr/local/cargo/bin/$APP /usr/local/bin/$APP
# expose this actix-web service on port 8080 at 0.0.0.0
EXPOSE 8080
CMD ["mapservice"]
And when I run
docker run -it --rm -p 8080:8080 mapservice
I got an error like:
mapservice: error while loading shared libraries: libssl.so.1.1: cannot open shared object file: No such file or directory
I have no idea why I get this error. Perhaps it's because I have my API key hardcoded in main.rs? Does anyone know how to fix this problem? My laptop is an M1 Pro Mac.
I tried running another sample project with a similar Dockerfile, and everything works fine with it. I also tried deploying this one on AWS, which gave me a health-check error on port 8080. Is something wrong with my Dockerfile?
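For context, that error means the binary was linked against OpenSSL 1.1 but the runtime image does not ship that shared library. A minimal sketch of one common workaround, assuming Debian buster package names, is to install it in the final stage:
FROM debian:buster-slim
# install the OpenSSL 1.1 runtime library the binary was linked against
RUN apt-get update && \
    apt-get install -y --no-install-recommends libssl1.1 ca-certificates && \
    rm -rf /var/lib/apt/lists/*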
I have this Dockerfile setup:
FROM node:14.5-buster-slim AS base
WORKDIR /app
FROM base AS production
ENV NODE_ENV=production
RUN chown -R node:node /app
RUN chmod 755 /app
USER node
... other copies
COPY ./scripts/startup-production.sh ./
COPY ./scripts/healthz.sh ./
CMD ["./startup-production.sh"]
The problem I'm facing is that I can't execute ./healthz.sh because it's only executable by the node user. When I commented out the two RUN commands and the USER command, I could execute the file just fine. But I want to restrict the executable permissions to the node user for security reasons.
I need ./healthz.sh to be externally executable by Kubernetes' liveness & readiness probes.
How can I make it so? Folder restructuring or stuff like that are fine with me.
In most cases, you probably want your code to be owned by root but world-readable, and scripts to be world-executable. The Dockerfile COPY directive copies a file in with its existing permissions from the host system (hidden in the list of bullet points at the end of its documentation is a note that a file "is copied individually along with its metadata"). So the easiest way to approach this is to make sure the script has the right permissions on the host system:
# mode 0755 is readable and executable by everyone but only writable by owner
chmod 0755 healthz.sh
git commit -am 'make healthz script executable'
Then you can just COPY it in, without any special setup.
# Do not RUN chown or chmod; just
WORKDIR /app
COPY ./scripts/healthz.sh .
# Then when launching the container, specify
USER node
CMD ["./startup-production.sh"]
You should be able to verify this locally by running your container and manually invoking the health-check script:
docker run -d --name app the-image
# possibly with a `docker exec -u` option to specify a different user
docker exec app /app/healthz.sh && echo OK
The important thing to check is that the file is world-executable. You can also double-check this by looking at the built container:
docker run --rm the-image ls -l /app/healthz.sh
That should print out one line starting with a permission string -rwxr-xr-x; the last three characters, r-x, are the important part. If you can't get the permissions right another way, you can also fix them up in your image build:
COPY ./scripts/healthz.sh .
# If you can't make the permissions on the original file right:
RUN chmod 0755 *.sh
You need to modify the CMD command in your Dockerfile like this: ["sh", "./startup-production.sh"]
This runs the script with sh, which can be dangerous if your script uses bash-specific features like [[ ]] with #!/bin/bash as its first line.
Moreover, I would use ENTRYPOINT here instead of CMD if you want this to run whenever the container is up. A sketch of that variant, using the same script as above, follows.
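# sketch: same script name as above, run via sh through ENTRYPOINT
ENTRYPOINT ["sh", "./startup-production.sh"]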
I have a docker-compose project made up of five docker containers, three of which use an entrypoint.sh file. None of these three can find the entrypoint.sh file after building.
I've tried several variations on the syntax, but I am very new to Docker and wouldn't know a syntax error if I were staring right at it. The build process for each completes without errors, but when I try to bring them online they cannot find the entrypoint file and continually restart in a loop.
The Dockerfile for one of them:
(a bunch of confidential stuff I can't post here)
# Enable apache modules
RUN sudo a2enmod actions headers alias deflate mime expires filter setenvif include rewrite
# Create a log file
RUN mkdir /var/www/logs
RUN chmod -R 777 /var/www/logs
#Open up needed ports
EXPOSE 8081
EXPOSE 80
ADD entrypoint.sh /
ENTRYPOINT ./entrypoint.sh
And the error:
ss-fe-webserver | /bin/sh: 1: ./entrypoint.sh: not found
It's worth noting that I am running Docker for Windows, and these Dockerfiles were written by a Mac user. As long as this error persists, the servers continually restart and do not stay online.
Put your entrypoint at the / path. It seems like you are using WORKDIR somewhere, which is causing this conflict.
Prefer using COPY instead of ADD in this case.
Give your entrypoint script executable permissions.
Here is how it may look at the end:
COPY entrypoint.sh /
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
You add entrypoint.sh to the root folder "/":
ADD entrypoint.sh /
But you still call entrypoint.sh from ./, the working directory:
ENTRYPOINT ./entrypoint.sh
Try:
ENTRYPOINT /entrypoint.sh
or:
ADD entrypoint.sh ./
I am currently running into a problem trying to set up nginx:alpine in OpenShift.
My build runs just fine, but the deployment fails with permission denied and the following error:
2019/01/25 06:30:54 [emerg] 1#1: mkdir() "/var/cache/nginx/client_temp" failed (13: Permission denied)
nginx: [emerg] mkdir() "/var/cache/nginx/client_temp" failed (13: Permission denied)
Now I know OpenShift is a bit tricky when it comes to permissions, as the container runs without root privileges and the UID is generated at runtime, which means it's not available in /etc/passwd. But the user is part of the root group. How this is supposed to be handled is described here:
https://docs.openshift.com/container-platform/3.3/creating_images/guidelines.html#openshift-container-platform-specific-guidelines
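For reference, the pattern those guidelines describe looks roughly like this (the directory here is just a placeholder):
# make the path group-owned by root and give the group the same rights as the owner
RUN chgrp -R 0 /some/directory && \
    chmod -R g=u /some/directory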
I even went further and made the whole /var completely accessible (777) for testing purposes, but I still get the error. This is what my Dockerfile looks like:
Dockerfile
FROM nginx:alpine
#Configure proxy settings
ENV HTTP_PROXY=http://my.proxy:port
ENV HTTPS_PROXY=http://my.proxy:port
ENV HTTP_PROXY_AUTH=basic:*:username:password
WORKDIR /app
COPY . .
# Install node.js
RUN apk update && \
apk add nodejs npm python make curl g++
# Build Application
RUN npm install
RUN ./node_modules/@angular/cli/bin/ng build
COPY ./dist/my-app /usr/share/nginx/html
# Configure NGINX
COPY ./openshift/nginx/nginx.conf /etc/nginx/nginx.conf
COPY ./openshift/nginx/conf.d/default.conf /etc/nginx/conf.d/default.conf
RUN chgrp -R root /var/cache/nginx /var/run /var/log/nginx && \
chmod -R 777 /var
RUN sed -i.bak 's/^user/#user/' /etc/nginx/nginx.conf
EXPOSE 8080
It's funny that this problem only seems to affect the alpine version of nginx. nginx:latest (based on Debian, I think) has no issues, and the setup described here
https://torstenwalter.de/openshift/nginx/2017/08/04/nginx-on-openshift.html
works (but I am having some other issues with that build, so I switched to alpine).
Any ideas why this is still not working?
I was using OpenShift with limited permissions, so I fixed this problem by using the following nginx image (rather than nginx:latest):
FROM nginxinc/nginx-unprivileged
To resolve this: I think the problem in the earlier Dockerfile was that I used the COPY command to move my build output, which did not exist on the host. So here is my working
Dockerfile
FROM nginx:alpine
LABEL maintainer="ReliefMelone"
WORKDIR /app
COPY . .
# Install node.js
RUN apk update && \
apk add nodejs npm python make curl g++
# Build Application
RUN npm install
RUN ./node_modules/@angular/cli/bin/ng build --configuration=${BUILD_CONFIG}
RUN cp -r ./dist/. /usr/share/nginx/html
# Configure NGINX
COPY ./openshift/nginx/nginx.conf /etc/nginx/nginx.conf
COPY ./openshift/nginx/conf.d/default.conf /etc/nginx/conf.d/default.conf
RUN chgrp -R root /var/cache/nginx /var/run /var/log/nginx && \
chmod -R 770 /var/cache/nginx /var/run /var/log/nginx
EXPOSE 8080
CMD ["nginx", "-g", "daemon off;"]
Note that under the Build Application section I now do
RUN cp -r ./dist/. /usr/share/nginx/html
instead of
COPY ./dist/my-app /usr/share/nginx/html
The COPY will not work because I previously ran ng build inside the container, so the dist directory only exists inside the container as well; I therefore need to execute the copy command inside that container.
I had the same error with my nginx:alpine Dockerfile.
There is already a user called nginx in the nginx:alpine image. My guess is that it's cleaner to use it to run nginx.
Here is how I resolved it:
Set the owner of /var/cache/nginx to nginx (user 101, group 101)
Create a /var/run/nginx.pid and set the owner to nginx as well
Copy all the files to the image using --chown=nginx:nginx
FROM nginx:alpine
RUN touch /var/run/nginx.pid && \
chown -R nginx:nginx /var/cache/nginx /var/run/nginx.pid
USER nginx
COPY --chown=nginx:nginx my/html/files /usr/share/nginx/html
COPY --chown=nginx:nginx config/myapp/default.conf /etc/nginx/conf.d/default.conf
...
If you're here because you failed to deploy an example Helm chart (e.g. helm create mychart), do as @quasipolynomial suggested, but instead change your deployment file to pull the right image, i.e.:
containers:
  - image: nginxinc/nginx-unprivileged
More info on the official unprivileged image: https://github.com/nginxinc/docker-nginx-unprivileged
You may change the temp folder locations using the nginx.conf file. You can read more in the section "Running nginx as a non-root user".
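As a rough sketch of that approach (these are standard nginx directives; the /tmp locations are only an example of somewhere a non-root user can write):
# point the pid file and temp paths at a directory writable by the runtime user
pid /tmp/nginx.pid;
http {
    client_body_temp_path /tmp/client_temp;
    proxy_temp_path       /tmp/proxy_temp;
    fastcgi_temp_path     /tmp/fastcgi_temp;
    uwsgi_temp_path       /tmp/uwsgi_temp;
    scgi_temp_path        /tmp/scgi_temp;
    ...
}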
May or may not be a step in the right direction (especially helpful for those who came here looking for general help on the [emerg] mkdir() ... failed error).
This solution applies when building nginx from source.
It took me about seven hours to realize the solution is directly related to the prefix path set in compiling nginx.
This is where my configuration throws off nginx (as a very brief example), compiled from this nginx source:
sudo ./auto/configure \
--prefix=/usr/local/nginx \
--http-client-body-temp-path=/tmp/nginx/client-body-temp \
--http-fastcgi-temp-path=/var/tmp/nginx/fastcgi_temp
Without realizing it, I was setting the prefix to /usr/local/nginx but setting the client body temp path and the fastcgi temp path to directories under /tmp and /var/tmp.
It's basically breaking nginx's ability to access the correct files, because the temp paths are not correlated to the prefix path.
So I fixed it by (again, super simple configure as an example):
sudo ./auto/configure \
--prefix=/usr/local/nginx \
--http-client-body-temp-path=/usr/local/nginx/client_body_temp \
--http-fastcgi-temp-path=/usr/local/nginx/fastcgi_temp
Further simplified:
sudo ./auto/configure \
--prefix=/usr/local/nginx \
--http-client-body-temp-path=/client_body_temp \
--http-fastcgi-temp-path=/fastcgi_temp
Again, not guaranteed to work, but definitely a step in the right direction.
Run the command below to fix this issue; the anyuid security context constraint is required.
oc adm policy add-scc-to-user anyuid system:serviceaccount:<NAMESPACE>:default
I have several files in a directory on the host machine which I am trying to copy to the container, and I also have some RUN commands inside my docker-compose setup.
The first set of commands, up until the crowd section, works fine, but anything from the crowd jar down just fails. I tried running the docker cp command manually to copy the files from the host to the container, and that works. Can someone please shed some light on this?
This is a part of my Dockerfile:
WORKDIR /usr/local/tomcat
USER root
COPY server.xml conf/server.xml
RUN chmod 660 conf/server.xml
USER root
ADD tomcat.keystore /usr/local/tomcat/
RUN chmod 644 tomcat.keystore
RUN chown root:staff /usr/local/tomcat/tomcat.keystore
ADD crowd-auth-filter-1.0.0.jar /usr/local/tomcat/webapps/guacamole/WEB-INF/lib/
ADD crowd-filter.properties /usr/local/tomcat/webapps/guacamole/WEB-INF/lib/
RUN chmod 644 crowd-filter.properties
ADD web.xml /usr/local/tomcat/webapps/guacamole/WEB-INF/
RUN /usr/local/tomcat/bin/shutdown.sh
RUN /usr/local/tomcat/bin/startup.sh
Thanks
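For what it's worth, RUN commands execute at image build time, so starting or stopping Tomcat there has no effect on the running container; the usual approach is to leave startup to the image's CMD. A minimal sketch (this is the stock command of the official tomcat image):
# start Tomcat in the foreground when the container runs
CMD ["catalina.sh", "run"]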