I have run into an error in my project while the GitLab pipeline was building the Docker image:
$ docker tag project $NEXUS_URL/project:${TAG_COMMIT}
Error parsing reference: "url:port/repository/project:BugFix-29643813" is not a valid repository/tag: invalid reference format
This is my Dockerfile:
FROM codestrongbiz/jdk16-maven-docker:latest
ENV SPRING_OUTPUT_ANSI_ENABLED=ALWAYS \
APP_SLEEP=0 \
JAVA_OPTS=""
# add the jar directly
ADD *.jar /app.jar
EXPOSE 8087
CMD echo "The application will start in ${APP_SLEEP}s..." && \
sleep ${APP_SLEEP} && \
java ${JAVA_OPTS} -Djava.security.egd=file:/dev/./urandom -jar /app.jar
Can anyone help?
After a lot of testing I realized that my source branch's name should not contain an uppercase letter.
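If the tag has to come from the branch name, one way to avoid this (a sketch, assuming the job can use GitLab's predefined CI variables) is to build the tag from CI_COMMIT_REF_SLUG, which GitLab already lowercases and sanitizes:
# sketch: derive a Docker-safe tag instead of using the raw branch name
TAG_COMMIT="${CI_COMMIT_REF_SLUG}-${CI_COMMIT_SHORT_SHA}"
docker tag project "$NEXUS_URL/project:${TAG_COMMIT}"
docker push "$NEXUS_URL/project:${TAG_COMMIT}"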
I am trying to fork this docker image so that if anything changes on the original it won't affect me.
I have forked the repo corresponding to that image to my own repo.
I have cloned the repo and am trying to build it:
docker build . -t davcal/gcc-cross-x86_64-elf
I am getting this error:
+ cd /usr/local/src
+ ./build-binutils.sh 2.31.1
/bin/sh: 1: ./build-binutils.sh: not found
The command '/bin/sh -c set -x && cd /usr/local/src && ./build-binutils.sh ${BINUTILS_VERSION} && ./build-gcc.sh ${GCC_VERSION}' returned a non-zero code: 127
What makes no sense to me is that if I use the original image, it builds successfully:
FROM randomdude/gcc-cross-x86_64-elf
...
Maybe Docker Hub stores a pre-built image?
How do I fix this?
Note: I am using Windows. This shouldn't make a difference since the error originates within the container.
Edit
I tried patching the Dockerfile to chmod the sh files executable, in case that was causing problems on Windows. Unfortunately, the exact same error occurs.
RUN set -x \
&& chmod +x /usr/local/src/build-binutils.sh \
&& chmod +x /usr/local/src/build-gcc.sh \
&& cd /usr/local/src \
&& ./build-binutils.sh ${BINUTILS_VERSION} \
&& ./build-gcc.sh ${GCC_VERSION}
Edit 2
Following this method, I inspected the container to see if the sh files actually exist. Here is the output.
I ran docker run --rm -it c53693f11514 bash, using the hash of the intermediate image from the previous successful step of the Dockerfile.
This is the output showing that the files do exist:
root@9b8a64ac2090:/# cd usr/local/src
root@9b8a64ac2090:/usr/local/src# ls
binutils-2.31.1 build-binutils.sh build-gcc.sh gcc-8.2.0
From the described symptoms (the file exists, is a shell script, and works on other machines), the "file not found" error is most likely caused by Windows line endings being added to the file. When the Linux kernel processes a shell script, it looks at the first line, the #!/bin/sh or similar, and then finds that interpreter to run the shell script. If that interpreter isn't found, you'll get a "file not found" error.
In this case, the file it's looking for won't be /bin/sh, but instead /bin/sh\r or /bin/sh^M, depending on how you want to represent the carriage return character. You can fix that for single files with a tool like dos2unix, but in general you'll want to fix git itself, since there are likely other files that have had their line endings corrupted. For details on adjusting the behavior of git, see this post.
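A sketch of both fixes, assuming the scripts live in /usr/local/src as in your Dockerfile and that GNU sed is available in the base image:
# in the forked Dockerfile: strip carriage returns before the scripts run
RUN sed -i 's/\r$//' /usr/local/src/build-binutils.sh /usr/local/src/build-gcc.sh
# on the Windows host: stop git from converting LF to CRLF on checkout, then re-checkout
git config --global core.autocrlf input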
I am running JMeter in noVNC and it works, though of course in the default small window.
But when I create an HTTP(S) Test Script Recorder and click the Start button, I get this error:
"Could not create script recorder - see log for details: >> keytool error: java.security.ProviderException: Could not initialize NSS << command failed code:1
'keytool -genkeypair -alias:root_ca: -dname"CN=_Jmeter Root CA for recording(INSTALL ONLY IF IT IS YOURS).......FULL ERROR in SCREENSHOT"'"
I tried creating the HTTP(S) Test Script Recorder with and without a proxy set up in my Chrome browser and I get the same error (the full error is on the right-hand side of the screenshot).
Below is my Dockerfile:
FROM uphy/novnc-alpine
RUN \
apk add --no-cache curl openjdk8-jre bash \
&& apk add --no-cache nss \
&& curl -L https://archive.apache.org/dist/jmeter/binaries/apache-jmeter-5.4.1.tgz > /tmp/jmeter.tgz \
&& mkdir -p /opt \
&& tar -xvf /tmp/jmeter.tgz -C /opt \
&& rm /tmp/jmeter.tgz \
&& cd /etc/supervisor/conf.d \
&& echo '[program:jmeter]' >> supervisord.conf \
&& echo 'command=/opt/apache-jmeter-5.4.1/bin/./jmeter' >> supervisord.conf \
&& echo 'autorestart=true' >> supervisord.conf
ENV JAVA_HOME /usr/lib/jvm/java-1.8-openjdk/
RUN export JAVA_HOME
This is how I am running it (related to: Use JMeter desktop application as web app):
creating a Docker image with noVNC and running JMeter inside noVNC (Dockerfile provided above)
exposing it on a port and accessing it in the browser:
docker build -t jmeter .
docker run -it --rm -p 8080:8080 jmeter
I also checked my Docker container: the JDK is present at /usr/lib/jvm/java-1.8-openjdk/ and JMeter is at /opt/apache-jmeter-5.4.1.
I am not sure whether I should pass more options or arguments to the docker run command.
I am wondering how JMeter will create the certificate inside my bin directory when I click the Start button, since JMeter is running inside the noVNC Docker container.
Is there any other way to automatically create/integrate this certificate without importing it or clicking the Start button?
How can the proxy be configured if JMeter is running inside the noVNC container?
I think you need to install the nss package.
Change this line:
apk add --no-cache curl openjdk8-jre bash \
to this one:
apk add --no-cache curl openjdk8-jre bash nss \
Once you re-build the image the HTTP(S) Test Script Recorder should launch normally.
With regards to the certificate: it will be stored in JMeter's "bin" folder in the container, so if you want to use it in a browser inside the container, you will have to install the browser there as well.
If you want to use the browser on your local machine, you will need to copy the certificate from the container and expose another port for JMeter's HTTP(S) Test Script Recorder.
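A sketch of what that could look like; the container name, the recorder port 8888 (JMeter's default), and the certificate file name ApacheJMeterTemporaryRootCA.crt (which recent JMeter versions generate in the "bin" folder) are assumptions here:
# publish the recorder port next to the noVNC port
docker run -it --rm --name jmeter -p 8080:8080 -p 8888:8888 jmeter
# after starting the recorder once, copy the generated CA certificate to the host
docker cp jmeter:/opt/apache-jmeter-5.4.1/bin/ApacheJMeterTemporaryRootCA.crt .
Then point the local browser's proxy at localhost:8888 and import the certificate there.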
Just in case, be aware that you can also record JMeter test scripts using the JMeter Chrome Extension; in that case you won't have to worry about proxies, certificates, and ports.
I run an image like this:
docker run <image_name> <config_file>
where config_file is the path to a JSON file which contains the configuration of my application.
Inside the Dockerfile, I do
ENTRYPOINT ["uwsgi", \
"--log-encoder", "json {\"msg\":\"${msg}\"}\n", \
"--http", ":80", \
"--master", \
"--wsgi-file", "app.py", \
"--callable", "app", \
"--threads", "10", \
"--pyargv"]
At the same time, I would like to access some of the values in the configuration file in the Dockerfile, for example to configure the JSON log encoder of uWSGI.
How can I do that?
This is impossible. See the comment by @David Maze.
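If the goal is just to feed values from the JSON file to uWSGI, one runtime-only workaround (a sketch: it assumes jq is installed in the image, a hypothetical .msg key in the config, and that the config path is still passed as the container's first argument) is to replace the ENTRYPOINT with a small wrapper script:
#!/bin/sh
# entrypoint.sh (hypothetical): read values from the config file at container start
set -e
CONFIG_FILE="$1"
MSG="$(jq -r '.msg' "$CONFIG_FILE")"
exec uwsgi \
  --log-encoder "json {\"msg\":\"${MSG}\"}\n" \
  --http :80 \
  --master \
  --wsgi-file app.py \
  --callable app \
  --threads 10 \
  --pyargv "$CONFIG_FILE"
with ENTRYPOINT ["/entrypoint.sh"] in the Dockerfile instead of the uwsgi array.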
I'm running a Java program from within a Docker container (started with Docker Compose) and it's throwing a bunch of errors caused by UTF-8 characters (as they can't be mapped to the ASCII charset). Is there a way to enable UTF-8 encoding from the docker-compose file?
You can check by using the command below to set the Java parameters, then try to run your Java program:
export JAVA_TOOL_OPTIONS=-Dfile.encoding=UTF8
If it works with the above command, set it using an ENV instruction when building the Docker image.
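For example (a minimal sketch; the service name app in the Compose snippet is an assumption):
# Dockerfile
ENV JAVA_TOOL_OPTIONS="-Dfile.encoding=UTF8"
# docker-compose.yml
services:
  app:
    environment:
      - JAVA_TOOL_OPTIONS=-Dfile.encoding=UTF8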
Also, if you need to set it in .bash_profile, refer to the below portion of a Dockerfile:
RUN echo "JAVA_HOME=/opt/jdk1.8.0_65" >> ~/.bash_profile
Add these lines to your Dockerfile:
RUN echo "LC_ALL=en_US.UTF-8" >> /etc/environment
RUN echo "en_US.UTF-8 UTF-8" >> /etc/locale.gen
RUN echo "LANG=en_US.UTF-8" > /etc/locale.conf
RUN locale-gen en_US.UTF-8
Source: https://github.com/tianon/docker-brew-debian/issues/45
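One caveat (assuming a Debian/Ubuntu base image): locale-gen comes from the locales package, which may need to be installed first, e.g.:
RUN apt-get update \
 && apt-get install -y locales \
 && rm -rf /var/lib/apt/lists/*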
I have a very simple Dockerfile:
FROM openjdk:10
ENV JENAVERSION=3.7.0
RUN mkdir /fuseki
RUN wget http://apache.claz.org/jena/binaries/apache-jena-fuseki-$JENAVERSION.tar.gz -P /tmp \
&& tar -zxvf /tmp/apache-jena-fuseki-$JENAVERSION.tar.gz -C /tmp \
&& mv -v /tmp/apache-jena-fuseki-$JENAVERSION/* /fuseki
EXPOSE 3030
ENTRYPOINT ["/bin/bash", "/fuseki/fuseki-server"]
I've tried different variations of CMD and ENTRYPOINT, but nothing allows "fuseki-server" to execute; I always get a "No such file or directory" error. If I manually create an empty container from openjdk:10 and execute each command manually, it works fine. What's going on?
I think the issue is the line endings: the entrypoint needs to have LF line endings.
I get the same error when my entrypoint has CRLF line endings.
If I build and run your Dockerfile, I get a different error from what you've described. I see:
Can't find jarfile to run
If you look at the fuseki-server shell script, it's trying to find the jar file relative either to your current directory or to the $FUSEKI_HOME environment variable:
export FUSEKI_HOME="${FUSEKI_HOME:-$PWD}"
if [ ! -e "$FUSEKI_HOME" ]
then
echo "$FUSEKI_HOME does not exist" 1>&2
exit 1
fi
JAR1="$FUSEKI_HOME/fuseki-server.jar"
JAR2="$FUSEKI_HOME/jena-fuseki-server-*.jar"
JAR=""
So if you set the FUSEKI_HOME environment variable in your Dockerfile:
ENV FUSEKI_HOME=/fuseki
Then the container starts up without errors:
[2018-06-04 14:02:17] Server INFO Apache Jena Fuseki 3.7.0
[2018-06-04 14:02:17] Config INFO FUSEKI_HOME=/fuseki
[2018-06-04 14:02:17] Config INFO FUSEKI_BASE=/run
[2018-06-04 14:02:17] Config INFO Shiro file: file:///run/shiro.ini
[2018-06-04 14:02:18] Server INFO Started 2018/06/04 14:02:18 UTC on port 3030
Wow... After going through @larsk's suggestion it occurred to me to change the entrypoint to
ENTRYPOINT ["tail", "-f", "/dev/null"]
and go into the container to see what was actually there. It turns out that I was accidentally overwriting the /fuseki folder with a volume declaration in the Compose file I was using. (facepalm...)
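For reference, the problematic pattern looked roughly like this (the service name and host path here are hypothetical):
services:
  fuseki:
    build: .
    ports:
      - "3030:3030"
    volumes:
      - ./data:/fuseki    # this bind mount hides everything the image put in /fuseki
Mounting a host directory over /fuseki replaces the directory baked into the image with the (mostly empty) host directory, which is why fuseki-server could not be found at runtime.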