I'm trying to dockerize bitcoin-sv on ubuntu:16.04, but I get an error on the last step.
A piece of the Dockerfile:
RUN mkdir boost \
&& cd boost \
&& wget https://dl.bintray.com/boostorg/release/1.70.0/source/boost_1_70_0.tar.gz \
&& tar -xzvf boost_1_70_0.tar.gz \
&& cd boost_1_70_0 \
&& ./bootstrap.sh \
&& ./b2 \
&& ./b2 install \
&& cd ../../ \
&& git clone https://github.com/bitcoin-sv/bitcoin-sv \
&& cd bitcoin-sv \
&& ./autogen.sh \
&& mkdir build \
&& cd build \
&& ../configure \
&& make <------------- error on this final step
Error:
Makefile:4415: recipe for target 'rpc/libbitcoin_cli_a-client.o' failed
make[2]: *** [rpc/libbitcoin_cli_a-client.o] Error 1
make[2]: Leaving directory '/bitcoin-sv/build/src'
Makefile:8455: recipe for target 'all-recursive' failed
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory '/bitcoin-sv/build/src'
Makefile:660: recipe for target 'all-recursive' failed
make: *** [all-recursive] Error 1
I read here that this may be caused by a lack of memory. How can I fix it?
One more error was logged; the Docker machine is running out of memory:
../../src/validation.cpp:4046:27: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
cacheSize > std::max(static_cast<uint64_t>((9 * nTotalSpace) / 10),
~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
nTotalSpace - MAX_BLOCK_COINSDB_USAGE * ONE_MEBIBYTE);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
g++: internal compiler error: Killed (program cc1plus)
The error g++: internal compiler error: Killed (program cc1plus) is nearly always due to too little memory - as explained more thoroughly in this StackOverflow post by Jon.
If you are running on Ubuntu directly (not within a virtual machine on Ubuntu), your machine itself may be out of memory, since Docker has access to the whole memory of a Linux host as far as I know. If you are running within a virtual machine, make sure it has enough memory available and, in turn, that this memory is available to Docker.
On macOS or Windows, you can easily manage the resources available to Docker (assuming there are still resources left) by following Robert's answer to the (not quite correctly phrased) StackOverflow question 'How to assign more memory to docker container'.
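If giving Docker more memory is not an option, another approach is to make the build itself less memory-hungry by lowering its parallelism, so fewer cc1plus processes run at once. A minimal sketch of the change to the tail of the RUN chain from the question (-j1 is deliberately conservative; pick whatever fits your RAM):
# unchanged steps of the RUN chain above, then:
    && ../configure \
    && make -j1
A single-job build is much slower, but it trades time for peak memory.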
Below is the test case that I am trying to execute inside the docker container.
Login To GUI
[Documentation] To open GUI and login with valid credentials
${chrome_options}= Evaluate sys.modules['selenium.webdriver'].ChromeOptions() sys, selenium.webdriver
Call Method ${chrome_options} add_argument --no-sandbox
Call Method ${chrome_options} add_argument --headless
Call Method ${chrome_options} add_argument --disable-dev-shm-usage
Call Method ${chrome_options} add_argument --ignore-certificate-errors-spki-list
Call Method ${chrome_options} add_argument --ignore-ssl-errors
Open Browser ${url} chrome options=${chrome_options} executable_path=/usr/lib/chromium/chromedriver
Set Browser Implicit Wait 5
Input Text id=username ${username}
Input Text id=password ${password}
Click Button //input[@value='Sign in']
The test case passes when I execute it directly from the IDE (PyCharm) in the Mac terminal. But when I run the same test via the Docker container, it fails with the error "Element with locator 'id=username' not found", and a blank white screen is attached as a screenshot in the logs. The page I request should get redirected to an authentication page (Keycloak) with the username and password fields, but I get a blank page in the Docker container.
I checked the log file inside the container ("/usr/lib/chromium/chrome_debug.log"):
[0302/115225.286372:WARNING:dns_config_service_posix.cc(342)] Failed to read DnsConfig.
[0302/115226.149284:ERROR:cert_issuer_source_aia.cc(32)] Error parsing cert retrieved from AIA (as DER):
ERROR: Failed parsing Certificate SEQUENCE
ERROR: Failed parsing Certificate
[0302/115226.345313:ERROR:cert_issuer_source_aia.cc(32)] Error parsing cert retrieved from AIA (as DER):
ERROR: Failed parsing Certificate SEQUENCE
ERROR: Failed parsing Certificate
[0302/115226.345462:ERROR:cert_issuer_source_aia.cc(104)] AiaRequest::OnFetchCompleted got error -301
[0302/115226.346040:ERROR:ssl_client_socket_impl.cc(960)] handshake failed; returned -1, SSL error code 1, net_error -202
Then I tried the below command inside the container and I got:
/usr/lib/chromium # chromium-browser --headless --no-sandbox --ignore-certificate-errors --ignore-ssl-errors https://<url>
[0302/115903.090501:ERROR:bus.cc(393)] Failed to connect to the bus: Failed to connect to socket /var/run/dbus/system_bus_socket: No such file or directory
[0302/115903.091302:WARNING:dns_config_service_posix.cc(342)] Failed to read DnsConfig.
[0302/115903.152546:WARNING:dns_config_service_posix.cc(342)] Failed to read DnsConfig.
[0302/115903.631311:ERROR:cert_issuer_source_aia.cc(32)] Error parsing cert retrieved from AIA (as DER):
ERROR: Failed parsing Certificate SEQUENCE
ERROR: Failed parsing Certificate
[0302/115903.633207:ERROR:cert_issuer_source_aia.cc(32)] Error parsing cert retrieved from AIA (as DER):
ERROR: Failed parsing Certificate SEQUENCE
ERROR: Failed parsing Certificate
[0302/115903.633315:ERROR:cert_issuer_source_aia.cc(104)] AiaRequest::OnFetchCompleted got error -301
[0302/115904.273717:INFO:CONSOLE(27)] "Mixed Content: The page at 'https://<url>/auth/realms/ml/protocol/openid-connect/auth?client_id=ml-client&redirect_uri=https%3A%2F%2F<url>%2Foauth%2Fcallback&response_type=code&scope=ml-scope+openid+email+profile&state=6d35f7-add8-40b-a8e7-b169876cfc' was loaded over a secure connection, but contains a form that targets an insecure endpoint 'http://ml-sec-access-mgmt-http:8080/auth/realms/ml/login-actions/authenticate?session_code=mrjXrpjeadGywFIIgkHhddBag74tDnWV6FHA3Qk&execution=f19849-6670-406c-a1b0-139bb1f1dc05&client_id=ml-client&tab_id=vGTrJ7OI8'. This endpoint should be made available over a secure connection.", source: https://<url>/auth/realms/ml/protocol/openid-connect/auth?client_id=ml-client&redirect_uri=https%3A%2F%2F<url>%2Foauth%2Fcallback&response_type=code&scope=ml-scope+openid+email+profile&state=6d85f7-add8-40db-a8e7-b16239876cfc (27)
I even downloaded the Chromium browser on my Mac and tried opening the URL; it works fine there.
Dockerfile [Reference: https://github.com/ppodgorsek/docker-robot-framework/blob/master/Dockerfile]:
#Base image
FROM python:3.9.0-alpine3.12
# Set the reports directory environment variable
ENV ROBOT_REPORTS_DIR /opt/robotframework/reports
# Set the tests directory environment variable
ENV ROBOT_TESTS_DIR /opt/robotframework/tests
# Set the working directory environment variable
ENV ROBOT_WORK_DIR /opt/robotframework/temp
# Set number of threads for parallel execution
# By default, no parallelisation
ENV ROBOT_THREADS 1
# Install system dependencies
RUN apk update \
&& apk --no-cache upgrade \
&& apk --no-cache --virtual .build-deps add \
gcc \
libffi-dev \
linux-headers \
make \
musl-dev \
openssl-dev \
which \
wget \
curl \
vim \
ca-certificates \
git \
jq \
chromium \
chromium-chromedriver
#Install robotframework and required libraries from the requirements file
ADD requirements.txt /
RUN pip3 install \
--no-cache-dir \
-r requirements.txt
# Create the default report and work folders with the default user to avoid runtime issues
# These folders are writeable by anyone, to ensure the user can be changed on the command line.
RUN mkdir -p ${ROBOT_REPORTS_DIR} \
&& mkdir -p ${ROBOT_WORK_DIR} \
&& chmod ugo+w ${ROBOT_REPORTS_DIR} ${ROBOT_WORK_DIR}
# Installing product related utilities inside the container
XXXXX<contents are hidden as it is not relevant to this query>
# Allow any user to write logs
RUN chmod ugo+w /var/log
# Update system path
ENV PATH=/opt/robotframework/bin:$PATH
# A dedicated work folder to allow for the creation of temporary files
WORKDIR ${ROBOT_WORK_DIR}
Requirements.txt file contents:
#Required robot framework packages
robotframework==3.2.2
robotframework-requests==0.7.2
robotframework-seleniumlibrary==4.5.0
robotframework-jsonlibrary==0.3.1
robotframework-kubelibrary==0.2.0
I even referred to the link 'Getting empty page running selenium in headless chrome Docker'.
I could not figure out what the issue could be. Is it really a redirect issue, a certificate issue, or mixed content? I am quite confused. Any ideas?
I found a solution for the above problem statement.
First I tried using Chrome and Firefox instead of Chromium. But Alpine doesn't have Chrome, so I switched my base image to Ubuntu. Also, in general, Ubuntu is suggested as a good Docker base image for running Python applications [Reference: https://pythonspeed.com/articles/base-image-python-docker-images/].
But even after changing to Ubuntu as the new Docker base image, with Chrome and Firefox, I got the same error (blank white page).
I got the error below as well:
root@a4ac8fd9a950:/opt/google/chrome# google-chrome --headless --no-sandbox https://<URL>
[0306/152227.264852:WARNING:headless_content_main_delegate.cc(530)] Cannot create Pref Service with no user data dir.
[0306/152227.265234:WARNING:discardable_shared_memory_manager.cc(194)] Less than 64MB of free space in temporary directory for shared memory files: 63
[0306/152227.269687:ERROR:bus.cc(393)] Failed to connect to the bus: Failed to connect to socket /var/run/dbus/system_bus_socket: No such file or directory
[0306/152228.160231:ERROR:cert_issuer_source_aia.cc(32)] Error parsing cert retrieved from AIA (as DER):
ERROR: Failed parsing Certificate SEQUENCE
ERROR: Failed parsing Certificate
[0306/152228.363766:ERROR:cert_issuer_source_aia.cc(32)] Error parsing cert retrieved from AIA (as DER):
ERROR: Failed parsing Certificate SEQUENCE
ERROR: Failed parsing Certificate
[0306/152228.363958:ERROR:cert_issuer_source_aia.cc(104)] AiaRequest::OnFetchCompleted got error -301
[0306/152228.364625:ERROR:ssl_client_socket_impl.cc(924)] handshake failed; returned -1, SSL error code 1, net_error -202
Then I tried the same with Xvfb. [Xvfb (short for X virtual framebuffer) is an in-memory display server for UNIX-like operating systems (e.g., Linux). It lets you run graphical applications without a display (e.g., browser tests on a CI server) while still being able to take screenshots.] This worked. I am giving all the contents below for reference.
I modified the Dockerfile as below:
FROM ubuntu:20.04
# Set the reports directory environment variable
ENV ROBOT_REPORTS_DIR /opt/robotframework/reports
# Set the tests directory environment variable
ENV ROBOT_TESTS_DIR /opt/robotframework/tests
# Set the working directory environment variable
ENV ROBOT_WORK_DIR /opt/robotframework/temp
# Set number of threads for parallel execution
# By default, no parallelisation
ENV ROBOT_THREADS 1
ENV DEBIAN_FRONTEND=noninteractive
# Install system dependencies
RUN apt-get update \
&& apt-get install --quiet --assume-yes \
python3-pip \
unzip \
firefox \
wget \
curl \
vim \
ca-certificates \
git \
jq \
xvfb
# Install chrome package
RUN wget --no-verbose https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
RUN dpkg --install google-chrome-stable_current_amd64.deb; apt-get --fix-broken --assume-yes install
#Install robotframework and required libraries from the requirements file
ADD requirements.txt /
RUN pip3 install \
--no-cache-dir \
-r requirements.txt
# Install webdrivers for chrome and firefox
RUN CHROMEDRIVER_VERSION=`wget --no-verbose --output-document - https://chromedriver.storage.googleapis.com/LATEST_RELEASE` && \
wget --no-verbose --output-document /tmp/chromedriver_linux64.zip http://chromedriver.storage.googleapis.com/$CHROMEDRIVER_VERSION/chromedriver_linux64.zip && \
unzip -qq /tmp/chromedriver_linux64.zip -d /opt/chromedriver && \
chmod +x /opt/chromedriver/chromedriver && \
ln -fs /opt/chromedriver/chromedriver /usr/local/bin/chromedriver
RUN GECKODRIVER_VERSION=`wget --no-verbose --output-document - https://api.github.com/repos/mozilla/geckodriver/releases/latest | grep tag_name | cut -d '"' -f 4` && \
wget --no-verbose --output-document /tmp/geckodriver.tar.gz https://github.com/mozilla/geckodriver/releases/download/$GECKODRIVER_VERSION/geckodriver-$GECKODRIVER_VERSION-linux64.tar.gz && \
tar --directory /opt -zxf /tmp/geckodriver.tar.gz && \
chmod +x /opt/geckodriver && \
ln -fs /opt/geckodriver /usr/local/bin/geckodriver
# Create the default report and work folders with the default user to avoid runtime issues
# These folders are writeable by anyone, to ensure the user can be changed on the command line.
RUN mkdir -p ${ROBOT_REPORTS_DIR} \
&& mkdir -p ${ROBOT_WORK_DIR} \
&& chmod ugo+w ${ROBOT_REPORTS_DIR} ${ROBOT_WORK_DIR}
# Installing product related utilities inside the container
XXXXXXXX
# Allow any user to write logs
RUN chmod ugo+w /var/log
# Update system path
ENV PATH=/opt/robotframework/bin:$PATH
# A dedicated work folder to allow for the creation of temporary files
WORKDIR ${ROBOT_WORK_DIR}
Requirements text file:
#Required robot framework packages
robotframework==3.2.2
robotframework-requests==0.7.2
robotframework-seleniumlibrary==4.5.0
robotframework-jsonlibrary==0.3.1
robotframework-kubelibrary==0.2.0
robotframework-xvfb==1.2.2
New Robot FW test case with Xvfb:
*** Settings ***
Library SeleniumLibrary
Library XvfbRobot
*** Test Cases ***
Login To GUI
[Documentation] To open GUI and login with valid credentials
Start Virtual Display 1920 1080
Open Browser ${URL}
Set Window Size 1920 1080
Set Browser Implicit Wait 5
Input Text id=username ${username}
Input Text id=password ${password}
Click Button //input[@value='Sign in']
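If the page still comes up blank, the virtual display can be sanity-checked from a shell inside the container before involving Robot Framework at all. A rough sketch (the display number and resolution are arbitrary choices; <url> is the same placeholder as above):
# start an in-memory X server on display :99 and point the browser at it
Xvfb :99 -screen 0 1920x1080x24 &
export DISPLAY=:99
# run the browser non-headless against the virtual display to confirm the page renders
google-chrome --no-sandbox https://<url>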
I'm trying to make a build of OpenCV 4.0.0 on my Raspberry Pi 3B+, and keep running into this issue:
[ 83%] Building CXX object modules/stitching/CMakeFiles/opencv_perf_stitching.dir/perf/opencl/perf_stitch.cpp.o
c++: internal compiler error: Segmentation fault (program cc1plus)
Please submit a full bug report,
with preprocessed source if appropriate.
See <file:///usr/share/doc/gcc-7/README.Bugs> for instructions.
modules/stitching/CMakeFiles/opencv_perf_stitching.dir/build.make:62: recipe for target 'modules/stitching/CMakeFiles/opencv_perf_stitching.dir/perf/opencl/perf_stitch.cpp.o' failed
make[2]: *** [modules/stitching/CMakeFiles/opencv_perf_stitching.dir/perf/opencl/perf_stitch.cpp.o] Error 4
CMakeFiles/Makefile2:23142: recipe for target 'modules/stitching/CMakeFiles/opencv_perf_stitching.dir/all' failed
make[1]: *** [modules/stitching/CMakeFiles/opencv_perf_stitching.dir/all] Error 2
Makefile:162: recipe for target 'all' failed
make: *** [all] Error 2
This is the make/build portion of the script I'm running:
cmake -D CMAKE_BUILD_TYPE=RELEASE \
-D CMAKE_INSTALL_PREFIX=/usr/local \
-D INSTALL_PYTHON_EXAMPLES=ON \
-D INSTALL_C_EXAMPLES=OFF \
-D OPENCV_ENABLE_NONFREE=ON \
-D OPENCV_EXTRA_MODULES_PATH=~/opencv_contrib/modules \
-D PYTHON_EXECUTABLE=~/.virtualenvs/py3cv4/bin/python \
-D WITH_GSTREAMER=ON \
-D WITH_FFMPEG=ON \
-D WITH_OPENMP=ON \
-D BUILD_EXAMPLES=ON ..
echo ""
echo "======================="
echo "Building OpenCV..."
make -j4
sudo make install
sudo ldconfig
I read somewhere that I should change the make -j4 command to not use all four cores, because I'm running out of memory. I tried make -j1, but still got the same error at the same spot. I'm going to try again with just plain make, but delete all the pre-built stuff that's in there and start over from scratch to see if that helps.
Turns out I needed to entirely delete the build I had created and rebuild it with a single core instead of all four, as it was using up too much memory. I deleted my /opencv/build/ directory and then did make with no -j command, and it worked fine. It took a really long time (5+ hours), but it did complete successfully. Now I just have to figure out why I can't import cv2...
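In shell terms, that rebuild looks roughly like this (a sketch; the paths assume the usual ~/opencv checkout from the tutorial, so adjust them to wherever your build directory actually lives):
# wipe the old build tree and start clean
rm -rf ~/opencv/build
mkdir ~/opencv/build && cd ~/opencv/build
# re-run the same cmake command as in the script above, then build single-threaded
cmake <same -D options as in the script above> ..
make              # no -j flag: one job at a time, slow but low peak memory
sudo make install
sudo ldconfig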
I'm trying to complete this tutorial to run Grafana on Windows, and at this point of the compilation I keep getting this error:
PS C:\Programs\Others\LocustReport\docker-grafana-graphite> make up
mkdir -p \
data/whisper \
data/elasticsearch \
data/grafana \
log/graphite \
log/graphite/webapp \
log/elasticsearch
The syntax of the command is incorrect.
make: *** [prep] Error 1
PS C:\Programs\Others\LocustReport\docker-grafana-graphite>
Is there any workaround to get it compiled?
You'll need to run a Linux VM (I use VirtualBox) to build the image on.
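For example, once a Linux VM (or any Linux shell with make and docker available) is up, the same steps should run unchanged, because there the multi-line prep target is executed by a POSIX shell whose mkdir understands -p (a sketch; the checkout location is up to you):
# inside the Linux VM, from a checkout of the same docker-grafana-graphite repository
cd docker-grafana-graphite
make up        # the prep target's mkdir -p calls now succeed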
Qpid-cpp has been compiled in an Ubuntu Docker image, and the current size is 1.86 GB:
REPOSITORY TAG IMAGE ID CREATED SIZE
ubuntu-qpid-cpp latest 7e60a5eabee1 44 hours ago 1.86 GB
Aim
To compile qpid-cpp within docker alpine to reduce the disk size of the image
Problem
Some packages that are available in Ubuntu are omitted or different in Alpine, e.g.:
ubuntu
RUN apt-get update -y && \
apt-get install -y wget && \
apt-get install -y build-essential python ruby && \
apt-get install -y cmake libblkid-dev e2fslibs-dev libboost-all-dev libaudit-dev
Attempt
In order to find the substitute packages, the Dockerfile was built and, whenever an error occurred, the required package that is available in Alpine was added.
alpine
RUN apk update && \
apk add wget python ruby cmake build-base boost-dev util-linux-dev
Although most errors were solved, the following issue occurs while compiling qpid-cpp within Alpine:
[ 17%] Building CXX object src/CMakeFiles/qpidcommon.dir/qpid/sys/posix/Condition.cpp.o
In file included from /qpid-cpp/bld/qpid-cpp-1.36.0/src/qpid/sys/posix/Condition.h:31:0,
                 from /qpid-cpp/bld/qpid-cpp-1.36.0/src/qpid/sys/posix/Condition.cpp:22:
/usr/include/sys/errno.h:1:2: error: #warning redirecting incorrect #include <sys/errno.h> to <errno.h> [-Werror=cpp]
 #warning redirecting incorrect #include <sys/errno.h> to <errno.h>
  ^~~~~~~
cc1plus: all warnings being treated as errors
make[2]: *** [src/CMakeFiles/qpidcommon.dir/build.make:2727: src/CMakeFiles/qpidcommon.dir/qpid/sys/posix/Condition.cpp.o] Error 1
make[1]: *** [CMakeFiles/Makefile2:1494: src/CMakeFiles/qpidcommon.dir/all] Error 2
make: *** [Makefile:161: all] Error 2
The command '/bin/sh -c cd qpid-cpp/bld/qpid-cpp-1.36.0 && make all && make install' returned a non-zero code: 2
Question
How can the compilation error at Building CXX object src/CMakeFiles/qpidcommon.dir/qpid/sys/posix/Condition.cpp.o be solved when compiling qpid-cpp within Docker Alpine?
I tried this with Ubuntu and Alpine docker images, and I get the same problem on Alpine. It appears that qpid won't build on Alpine Linux.
Be aware that the ubuntu:16.04 image is 130 MB (750 MB with the dependencies installed), compared to Alpine's 5 MB (476 MB with dependencies).
So those 1.86 GB are mostly made up of build dependencies and qpid itself. You won't be able to escape that with any other image. Maybe you could purge some of the build dependencies after building to decrease the final size.
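Note that apk del only shrinks the image if it happens in the same RUN layer as the install; a later RUN cannot remove data from earlier layers. One way to act on that idea is a multi-stage build (my suggestion, not something the qpid project documents). The sketch below assumes the build itself succeeds on the chosen base image, and the runtime package list is a guess that would need to match whatever qpid actually links against:
# build stage: all compilers and -dev packages live only here
FROM alpine:3.8 AS build
RUN apk add --no-cache wget python ruby cmake build-base boost-dev util-linux-dev
# ... fetch the qpid-cpp sources, run cmake, then "make all && make install"
#     into /usr/local exactly as in the existing Dockerfile ...

# runtime stage: start clean and copy only the installed artefacts
FROM alpine:3.8
RUN apk add --no-cache libstdc++   # plus the Boost runtime libraries qpid needs
COPY --from=build /usr/local /usr/local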
I want to create a project from a Dockerfile. First, I need to clone a framework from GitHub and install it.
In my Dockerfile I have the following instructions:
RUN git clone https://github.com/simgrid/project.git
WORKDIR "/project"
RUN cmake option1 options2 .
RUN sudo make
RUN sudo make install
I build the image with:
docker build -t "myimage" .
But I get an error about "text file busy". How can I overcome it?
make[2]: execvp: /simgrid/tools/sg_unit_extractor.pl: Text file busy
make[2]: *** [src/cunit_unit.cpp] Error 127
CMakeFiles/testall.dir/build.make:69: recipe for target 'src/cunit_unit.cpp' failed
CMakeFiles/Makefile2:616: recipe for target 'CMakeFiles/testall.dir/all' failed
make[1]: *** [CMakeFiles/testall.dir/all] Error 2
Makefile:160: recipe for target 'all' failed
make: *** [all] Error 2
The command '/bin/sh -c sudo make' returned a non-zero code: 2
My Dockerfile content is:
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y \
sudo \
git \
build-essential \
cmake \
libboost-dev \
libboost-all-dev \
doxygen \
python3
RUN git clone https://github.com/simgrid/simgrid.git
WORKDIR "/simgrid"
RUN cmake -Denable_documentation=OFF -Denable_coverage=OFF -Denable_java=OFF -Denable_model-checking=OFF \
-Denable_lua=OFF -Denable_compile_optimizations=OFF -Denable_smpi=OFF -Denable_smpi_MPICH3_testsuite=OFF -Denable_compile_warnings=OFF .
RUN sudo make
RUN sudo make install
The error message you are seeing is from the output of make. It does not appear to be an error from Docker; instead, it points back to the code being compiled inside the image, so you would want to raise the issue with that project on GitHub.
I do see a fair number of kernel and network components being compiled with the app, which may not function properly in a Docker sandbox, so the code you are trying to compile may not be able to run in this type of isolation without disabling some of the protections that Docker provides. See Docker's security documentation for more details, particularly on the namespaces, cgroups, and capabilities used to protect the kernel.
Although this is not a Docker issue, there are some scenarios where you can face this error while building a Dockerfile.
Just to share a known workaround (even if it is not the most elegant solution), let me show you this one.
In my case I got the message "Text file busy" when trying to build a Dockerfile with the following line:
RUN chmod 500 /build/build_dotcms.sh && /build/build_dotcms.sh ${BUILD_FROM} ${BUILD_ID}
It provoked an interruption with "Text file busy" intermittently.
The workaround was to add a "sleep 1" between the chmod command and the shell script execution:
RUN chmod 500 /build/build_dotcms.sh && sleep 1 && /build/build_dotcms.sh ${BUILD_FROM} ${BUILD_ID}
I found the solution in a github thread: https://github.com/moby/moby/issues/9547
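If the sleep feels too arbitrary, two hedged variations on the same idea (reusing the script line from the example above; neither is guaranteed, since the underlying race lives in the storage driver) are to force a filesystem sync, or to split the chmod and the execution into separate RUN instructions:
# variant 1: flush writes before executing the freshly chmod-ed script
RUN chmod 500 /build/build_dotcms.sh && sync && /build/build_dotcms.sh ${BUILD_FROM} ${BUILD_ID}

# variant 2: separate layers, so the file is fully settled before it is executed
RUN chmod 500 /build/build_dotcms.sh
RUN /build/build_dotcms.sh ${BUILD_FROM} ${BUILD_ID}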
Hope it helps.