Using Docker getting exec format error when running - docker

I am attempting to complete a tutorial (Web Servers and API in C++, on LinkedIn Learning) and I am using Docker to build the web server.
I am running the following command:
docker run -v /Users/arveanlabib/Desktop/cppweb:/usr/src/cppweb -p 8080:8080 -e PORT=8080 cppbox:latest /usr/src/cppweb/hello_crow/build/hello_crow
but I get this error message: exec /usr/src/cppweb/hello_crow/build/hello_crow: exec format error.
Here are my files below:
Dockerfile
FROM gcc:7.3.0
RUN apt-get -qq update
RUN apt-get -qq upgrade
RUN apt-get -qq install cmake
RUN apt-get -qq install libboost-all-dev=1.62.0.1
RUN apt-get -qq install build-essential libtcmalloc-minimal4 && \
ln -s /usr/lib/libtcmalloc_minimal.so.4 /usr/lib/libtcmalloc_minimal.so
main.cpp
#include "crow_all.h"
using namespace std;
int main(int argc, char* argv[]) {
    crow::SimpleApp app;
    CROW_ROUTE(app, "/")
    ([]() {
        return "<div><h1>Hello, Crow.</h1></div>";
    });
    char* port = getenv("PORT");
    uint16_t iPort = static_cast<uint16_t>(port != NULL ? stoi(port) : 10000);
    cout << "PORT = " << iPort << "\n";
    app.port(iPort).multithreaded().run();
}
CMakeLists.txt
cmake_minimum_required(VERSION 3.7)
project(hello_crow)
set(CMAKE_CXX_STANDARD 11)
set(THREADS_PREFER_PTHREAD_FLAG ON)
find_package(Boost COMPONENTS system filesystem REQUIRED)
find_package(Threads)
include_directories(${Boost_INCLUDE_DIRS})
add_executable(hello_crow main.cpp)
target_link_libraries(hello_crow ${Boost_LIBRARIES} Threads::Threads)
I would appreciate any help.
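An exec format error from the kernel generally means the file being executed was built for a different OS or CPU architecture than the one running it. Since the bind mount comes from a macOS host, one plausible culprit is a hello_crow binary compiled on the Mac instead of inside the Linux container. A hedged diagnostic sketch (paths are taken from the question; the cmake/make step assumes the tutorial's usual out-of-source build layout):
# On the host: check what kind of executable this actually is.
# A Mach-O or arm64 binary cannot run in an x86-64 Linux container.
file /Users/arveanlabib/Desktop/cppweb/hello_crow/build/hello_crow
# Rebuild inside the container so the binary matches its environment
# (you may need to empty the build directory first if it holds a macOS build):
docker run -v /Users/arveanlabib/Desktop/cppweb:/usr/src/cppweb cppbox:latest \
    sh -c "cd /usr/src/cppweb/hello_crow/build && cmake .. && make"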

Related

Docker exec container pytest fail

I am using the dev image so I can have the CUDA compiler. The issue is that when running the CI as below, I get that error, but not when I build the standard container (the commented-out line in the Dockerfile).
CONTAINER=$(docker run -d gpu-test)
docker exec $CONTAINER pytest
OCI runtime exec failed: exec failed: unable to start container process: exec: "pytest": executable file not found in $PATH: unknown
Dockerfile:
# Pulls the basic Image from NVIDIA repository
FROM rapidsai/rapidsai-dev:22.04-cuda11.5-devel-ubuntu20.04-py3.9
# FROM rapidsai/rapidsai:22.04-cuda11.5-runtime-ubuntu20.04-py3.9
# Updates OS libraries
RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends \
--fix-missing git python3-setuptools python3-pip build-essential libcurl4-gnutls-dev \
zlib1g-dev rsync vim nano cmake tabix
# Install libraries needed in the examples
RUN /opt/conda/envs/rapids/bin/pip install \
scanpy==1.9.1 wget pytabix dash-daq \
dash-html-components dash-bootstrap-components dash-core-components \
utils pytest
RUN /opt/conda/envs/rapids/bin/pip install \
torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu116
WORKDIR /workspace
ENV HOME /workspace
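Note that the Dockerfile installs pytest with /opt/conda/envs/rapids/bin/pip, so the pytest executable lands in that conda environment's bin directory, which is not on the PATH that docker exec uses. A hedged workaround (assuming the rapids env path above) is to invoke it by its full path:
CONTAINER=$(docker run -d gpu-test)
# Invoke pytest from the conda env it was installed into
docker exec $CONTAINER /opt/conda/envs/rapids/bin/pytest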

opam init fails on docker

I'm trying to install a simple linux environment with opam on docker:
$ type .\Dockerfile_Opam.txt
FROM ubuntu:22.04
RUN \
apt-get update -y && \
apt-get install opam -y && \
opam init
Equivalent commands work fine on native linux but with docker I get an error:
$ docker build --tag host --file .\Dockerfile_Opam.txt .
# ... omitted for brevity ...
#5 48.09 [ERROR] Sandboxing is not working on your platform ubuntu:
#5 48.09 "~/.opam/opam-init/hooks/sandbox.sh build sh -c echo SUCCESS >$TMPDIR/opam-sandbox-check-out && cat $TMPDIR/opam-sandbox-check-out; rm -f $TMPDIR/opam-sandbox-check-out" exited with code 1 "bwrap: Creating new namespace failed: Operation not permitted"
OPAM runs builds when installing packages. To guard against buggy makefiles (that might run rm -rf / by accident), OPAM uses bubblewrap to sandbox the builds. Either install bubblewrap (apt-get install bubblewrap) or, if you wish to skip, because you're running in a container anyway, initialize OPAM like this:
opam init --disable-sandboxing
This also works, but I don't know whether it's safe, or what the pros and cons of disabling the sandbox are:
USER root
RUN apt-get update && apt-get install -y --no-install-recommends bubblewrap
RUN opam init
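If you prefer to skip sandboxing instead, as the error message itself suggests for containers, a minimal sketch of that variant (same base image as the question):
FROM ubuntu:22.04
RUN apt-get update -y && \
    apt-get install -y opam && \
    opam init --disable-sandboxing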

Error: WARN Low open file descriptor limit configured for the process. Current value: 4096, recommended value: 10000

I am trying to place a container in AWS Fargate and I get the error: "WARN Low open file descriptor limit configured for the process. Current value: 4096, recommended value: 10000. -- Error: Input("Error opening spec file: No such file or directory (os error 2)")"
Can someone please help me fix this issue?
Dockerfile:
FROM ubuntu:16.04
RUN apt-get -y update
RUN DEBIAN_FRONTEND="noninteractive" apt-get -y install tzdata
RUN apt install -y cmake pkg-config libssl-dev git build-essential clang libclang-dev curl
RUN curl https://sh.rustup.rs -sSf | bash -s -- -y
ENV PATH="/root/.cargo/bin:${PATH}"
RUN rustup toolchain install nightly-2020-09-28 && rustup default nightly-2020-09-28 && rustup override set nightly-2020-09-28
COPY ./polkadex-aura-node/ /polkadex-aura-node/
RUN cd /polkadex-aura-node && cargo build --release
RUN cd /polkadex-aura-node/scripts/ && ./createCustomSpec.sh
RUN echo "fs.file-max = 100000" >> /etc/sysctl.conf
RUN ulimit -n 90000
RUN echo "* soft nofile 65535" >> /etc/security/limits.conf
RUN echo "* hard nofile 65535" >> /etc/security/limits.conf
I tried the last four lines to fix the issue, but it's not working.
Add the following configuration to your systemd service:
LimitNOFILE=10000
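Note that RUN ulimit -n 90000 in the Dockerfile has no effect: each RUN line is a separate shell, and the open file descriptor limit is a runtime property set by whatever launches the container, not by the image. With plain Docker you can raise it per container; on Fargate the equivalent is the ulimits section (name nofile) of the container definition in the ECS task definition. A sketch of the local flag (the image name is an assumption for illustration):
# Raise the soft and hard nofile limits for this container at run time
docker run --ulimit nofile=10000:10000 polkadex-node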

Connect docker python to SQL server with pyodbc

I'm trying to connect a pyodbc Python script running in a Docker container to a MSSQL database. I have tried all sorts of Dockerfiles, but have not been able to make the connection (it fails either when building the Docker image or when Python tries to connect). Does anyone have a working Dockerfile using pyodbc?
Dockerfile:
# Use an official Python runtime as a parent image
FROM python:2.7-slim
# Set the working directory to /app
WORKDIR /app
# Copy the current directory contents into the container at /app
ADD . /app
# Install any needed packages specified in requirements.txt
RUN pip install -r requirements.txt
# Run app.py when the container launches
CMD ["python", "App.py"]
requirements.txt
pyodbc
App.py
import pyodbc
connection = pyodbc.connect('Driver={SQL Server};'
                            'Server=xxxx;'
                            'Database=xxx;'
                            'UID=xxxx;'
                            'PWD=xxxx')
cursor = connection.cursor()
cursor.execute("SELECT [Id],[Name] FROM [DCMM].[config].[Models]")
for row in cursor.fetchall():
    print(row.Name)
connection.close()
Building the container
docker build -t sqltest .
Output:
Sending build context to Docker daemon 4.096kB
Step 1/5 : FROM python:2.7-slim
---> 426d65ab9a72
Step 2/5 : WORKDIR /app
---> Using cache
---> 725f35122880
Step 3/5 : ADD . /app
---> 3feb8b7744f7
Removing intermediate container 4214091a111a
Step 4/5 : RUN pip install -r requirements.txt
---> Running in 27aa4dcfe738
Collecting pyodbc (from -r requirements.txt (line 1))
Downloading pyodbc-4.0.17.tar.gz (196kB)
Building wheels for collected packages: pyodbc
Running setup.py bdist_wheel for pyodbc: started
Running setup.py bdist_wheel for pyodbc: finished with status 'error'
Failed building wheel for pyodbc
Complete output from command /usr/local/bin/python -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-EfWsmy/pyodbc/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" bdist_wheel -d /tmp/tmpa3S13tpip-wheel- --python-tag cp27:
running bdist_wheel
running build
running build_ext
building 'pyodbc' extension
creating build
creating build/temp.linux-x86_64-2.7
creating build/temp.linux-x86_64-2.7/src
gcc -pthread -fno-strict-aliasing -g -O2 -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -DPYODBC_VERSION=4.0.17 -DSQL_WCHART_CONVERT=1 -I/usr/local/include/python2.7 -c src/cursor.cpp -o build/temp.linux-x86_64-2.7/src/cursor.o -Wno-write-strings
unable to execute 'gcc': No such file or directory
error: command 'gcc' failed with exit status 1
----------------------------------------
Running setup.py clean for pyodbc
Failed to build pyodbc
Installing collected packages: pyodbc
Running setup.py install for pyodbc: started
Running setup.py install for pyodbc: finished with status 'error'
Complete output from command /usr/local/bin/python -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-EfWsmy/pyodbc/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-BV4sRM-record/install-record.txt --single-version-externally-managed --compile:
running install
running build
running build_ext
building 'pyodbc' extension
creating build
creating build/temp.linux-x86_64-2.7
creating build/temp.linux-x86_64-2.7/src
gcc -pthread -fno-strict-aliasing -g -O2 -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -DPYODBC_VERSION=4.0.17 -DSQL_WCHART_CONVERT=1 -I/usr/local/include/python2.7 -c src/cursor.cpp -o build/temp.linux-x86_64-2.7/src/cursor.o -Wno-write-strings
unable to execute 'gcc': No such file or directory
error: command 'gcc' failed with exit status 1
----------------------------------------
Command "/usr/local/bin/python -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-EfWsmy/pyodbc/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-BV4sRM-record/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /tmp/pip-build-EfWsmy/pyodbc/
The command '/bin/sh -c pip install -r requirements.txt' returned a non-zero code: 1
You need to run:
sudo apt-get install gcc
You also need to add an odbcinst.ini file containing:
[FreeTDS]
Description = FreeTDS Driver
Driver = /usr/lib/x86_64-linux-gnu/odbc/libtdsodbc.so
Setup = /usr/lib/x86_64-linux-gnu/odbc/libtdsS.so
Then add the following to the Dockerfile:
ADD odbcinst.ini /etc/odbcinst.ini
RUN apt-get update
RUN apt-get install -y tdsodbc unixodbc-dev
RUN apt install unixodbc-bin -y
RUN apt-get clean -y
Finally, change the connection string in the .py file to:
connection = pyodbc.connect('Driver={FreeTDS};'
                            'Server=xxxxx;'
                            'Database=DCMM;'
                            'UID=xxxxx;'
                            'PWD=xxxxx')
Now the container builds, and it gets data from the SQL server.
Running through this recently I found it was necessary to additionally include the following line (note that it did not build without this step):
RUN apt-get install --reinstall build-essential -y
The full Dockerfile looks as follows:
# parent image
FROM python:3.7-slim
# install FreeTDS and dependencies
RUN apt-get update \
&& apt-get install unixodbc -y \
&& apt-get install unixodbc-dev -y \
&& apt-get install freetds-dev -y \
&& apt-get install freetds-bin -y \
&& apt-get install tdsodbc -y \
&& apt-get install --reinstall build-essential -y
# populate "odbcinst.ini"
RUN echo "[FreeTDS]\n\
Description = FreeTDS unixODBC Driver\n\
Driver = /usr/lib/x86_64-linux-gnu/odbc/libtdsodbc.so\n\
Setup = /usr/lib/x86_64-linux-gnu/odbc/libtdsS.so" >> /etc/odbcinst.ini
# install pyodbc (and, optionally, sqlalchemy)
RUN pip install --trusted-host pypi.python.org pyodbc==4.0.26 sqlalchemy==1.3.5
# run app.py upon container launch
CMD ["python", "app.py"]
Here's one way to then actually establish the connection inside app.py, via sqlalchemy (and assuming port 1433):
import sqlalchemy as sa
args = (username, password, server, database)
connstr = "mssql+pyodbc://{}:{}@{}/{}?driver=FreeTDS&port=1433&odbc_options='TDS_Version=8.0'"
engine = sa.create_engine(connstr.format(*args))
Based on Kåre Rasmussen's answer, here's a complete Dockerfile for further use.
Make sure to edit the last two lines according to your architecture! They should reflect the actual paths to libtdsodbc.so and libtdsS.so.
If you're not sure about the paths to libtdsodbc.so and libtdsS.so, try dpkg --search libtdsodbc.so and dpkg --search libtdsS.so.
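For example, run these inside a container built from the image to confirm the paths:
# Locate the FreeTDS driver libraries on this architecture
dpkg --search libtdsodbc.so
dpkg --search libtdsS.so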
FROM python:3
#Install FreeTDS and dependencies for PyODBC
RUN apt-get update && apt-get install -y tdsodbc unixodbc-dev \
&& apt install unixodbc-bin -y \
&& apt-get clean -y
RUN echo "[FreeTDS]\n\
Description = FreeTDS unixODBC Driver\n\
Driver = /usr/lib/arm-linux-gnueabi/odbc/libtdsodbc.so\n\
Setup = /usr/lib/arm-linux-gnueabi/odbc/libtdsS.so" >> /etc/odbcinst.ini
Afterwards, install PyODBC, COPY your app and run it.
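A minimal sketch of those remaining steps (the /app directory and the app.py entry point are assumptions, not part of the original answer):
RUN pip install pyodbc
WORKDIR /app
# Copy the application in and start it
COPY . /app
CMD ["python", "app.py"]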
I was unable to use any of the above resolutions; I kept getting all kinds of errors relating to the pyodbc package, in particular:
ImportError: libodbc.so.2: cannot open shared object file: No such file or directory.
I ended up with another resolution which defines the ODBC SQL Server driver specifically for an Ubuntu 18.04 Docker image, in this case ODBC Driver 17 for SQL Server. In my specific use case I needed to connect to my SQL Server database on Azure via Flask-SQLAlchemy, but the latter is not a necessity for the Docker configuration.
Dockerfile, with most important part adding the Microsoft repository and installing msodbcsql17 and unixodbc-dev:
# Ubuntu 18.04 base with Python runtime and pyodbc to connect to SQL Server
FROM ubuntu:18.04
WORKDIR /app
# apt-get and system utilities
RUN apt-get update && apt-get install -y \
curl apt-utils apt-transport-https debconf-utils gcc build-essential g++-5 \
&& rm -rf /var/lib/apt/lists/*
# adding custom Microsoft repository
RUN curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add -
RUN curl https://packages.microsoft.com/config/ubuntu/18.04/prod.list > /etc/apt/sources.list.d/mssql-release.list
# install SQL Server drivers
RUN apt-get update && ACCEPT_EULA=Y apt-get install -y msodbcsql17 unixodbc-dev
# install SQL Server tools
RUN apt-get update && ACCEPT_EULA=Y apt-get install -y mssql-tools
RUN echo 'export PATH="$PATH:/opt/mssql-tools/bin"' >> ~/.bashrc
RUN /bin/bash -c "source ~/.bashrc"
# python libraries
RUN apt-get update -y && \
apt-get install -y python3-pip python3-dev
# install necessary locales, this prevents any locale errors related to Microsoft packages
RUN apt-get update && apt-get install -y locales \
&& echo "en_US.UTF-8 UTF-8" > /etc/locale.gen \
&& locale-gen
# copy requirements and install packages, I added this for general use
COPY ./requirements.txt ./requirements.txt
RUN pip3 install -r ./requirements.txt
# you can also use regular install of the packages
RUN pip3 install pyodbc SQLAlchemy
# and if you are also planning to use Flask and Flask-SQLAlchemy
RUN pip3 install Flask Flask-SQLAlchemy
COPY . .
# run your app via entrypoint or change the CMD command to your regular command
COPY docker-entrypoint.sh wsgi.py ./
CMD ["./docker-entrypoint.sh"]
This should build without any errors in Docker.
My database url looked like this:
import urllib.parse
# name the specific ODBC driver by version number; we installed msodbcsql17
params = urllib.parse.quote_plus("DRIVER={ODBC Driver 17 for SQL Server};SERVER=<your.database.windows.net>;DATABASE=<your-db-name>;UID=<username>;PWD=<password>")
db_uri = "mssql+pyodbc:///?odbc_connect={PARAMS}".format(PARAMS=params)
And for the bonus if you are using Flask-SQLAlchemy, your app config should contain something like this:
app.config["SQLALCHEMY_TRACK_MODIFICATIONS"] = False
app.config["SQLALCHEMY_DATABASE_URI"] = db_uri # from above
Happy coding!
How to install the necessary dependencies for pyodbc depends on the Linux distribution and its version (in the Docker case, the base image of your Docker image). If none of the above works for you, you can figure out the commands by trying them in a running container instance.
First, exec into the docker container
docker exec -it <container id> bash
Determine the distribution name and version of the image's Linux, then try the matching instructions in Install the Microsoft ODBC driver for SQL Server (Linux).
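For example, most images ship an os-release file that identifies the distribution:
# Prints NAME and VERSION_ID of the image's distribution
cat /etc/os-release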
Here is a working example for Debian 9 based images, derived exactly from the documented instructions.
# Install pyodbc dependencies
RUN curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add -
RUN curl https://packages.microsoft.com/config/debian/9/prod.list > /etc/apt/sources.list.d/mssql-release.list
RUN apt-get update
RUN ACCEPT_EULA=Y apt-get -y install msodbcsql17
RUN apt-get -y install unixodbc-dev
RUN pip install pyodbc
For me to solve this issue I also had to add the following 2 lines in the Dockerfile:
RUN echo MinProtocol = TLSv1.0 >> /etc/ssl/openssl.cnf
RUN echo CipherString = DEFAULT@SECLEVEL=1 >> /etc/ssl/openssl.cnf
For those who want to follow the official Microsoft approach to installing the ODBC driver on a python:slim Docker image, you can use this as the Dockerfile:
FROM python:3.9-slim
RUN apt-get -y update && apt-get install -y curl gnupg
RUN curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add -
# download appropriate package for the OS version
# Debian 11
RUN curl https://packages.microsoft.com/config/debian/11/prod.list \
> /etc/apt/sources.list.d/mssql-release.list
RUN apt-get -y update
RUN ACCEPT_EULA=Y apt-get install -y msodbcsql18
Then for SQLAlchemy, the engine can be created like this:
from sqlalchemy import create_engine

con_str = f"mssql+pyodbc://{username}:{password}@{host}/{db}?" \
          "driver=ODBC+Driver+18+for+SQL+Server&TrustServerCertificate=yes"
engine = create_engine(con_str)
I created a Gist on GitHub on how to do this. I hope it helps. I had to piece things together from what I found on different resources.
https://gist.github.com/joshatxantie/4bcf5d0243fba63845fce7cc40365a3a
Good luck!
I fixed this problem by using pypyodbc instead of pyodbc.
pip install pypyodbc==1.3.5
https://pypi.org/project/pypyodbc/
Found the hint here:
https://github.com/Azure/azure-functions-python-worker/issues/249
To avoid further problems, use the pymssql library for Python; it does not need an ODBC driver to be installed:
pip install pymssql
import pymssql
conn = pymssql.connect(server, user, password, "tempdb")
cursor = conn.cursor(as_dict=True)
cursor.execute('SELECT * FROM persons WHERE salesrep=%s', 'John Doe')
for row in cursor:
    print("ID=%d, Name=%s" % (row['id'], row['name']))
conn.close()
and it works in Docker.

Docker: Error response from daemon: no such id:

Currently I am trying to launch a Docker image in detached mode with docker run -d ID (after building it with docker build -t toto .).
But when I run docker exec -it ID bash, I get this error:
Error response from daemon: no such id: toto
My Dockerfile looks like this:
# Dockerfile
FROM debian:jessie
# Upgrade system
RUN apt-get update && apt-get dist-upgrade -y --no-install-recommends
# Install TOR
RUN apt-get install -y --no-install-recommends tor tor-geoipdb torsocks
# INSTALL POLIPO
RUN apt-get update && apt-get install -y polipo
# INSTALL PYTHON
RUN apt-get install -y python2.7 python-pip python-dev build-essential libreadline-gplv2-dev libncursesw5-dev libssl-dev libsqlite3-dev tk-dev libgdbm-dev libc6-dev libbz2-dev libffi-dev libxslt-dev libxml2-dev && apt-get clean
# INSTALL GIT
RUN apt-get install -y git-core
# INSTALL NANO
RUN apt-get install -y nano
# INSTALL SUPERVISOR
RUN apt-get install -y supervisor
# INSTALL SCRAPY and dependencies
RUN pip install lxml && pip install pyopenssl && pip install Scrapy && pip install pyopenssl && pip install beautifulsoup4 && pip install lxml && pip install elasticsearch && pip install simplejson && pip install requests && pip install scrapy-crawlera && pip install avro && pip install stem
# INSTALL CURL
RUN apt-get install -y curl
# Default ORPort
EXPOSE 9001
# Default DirPort
EXPOSE 9030
# Default SOCKS5 proxy port
EXPOSE 9050
# Default ControlPort
EXPOSE 9051
# Default polipo Port
EXPOSE 8123
# Configure Tor and Polopo
RUN echo 'socksParentProxy = "localhost:9050"' >> /etc/polipo/config
RUN echo 'socksProxyType = socks5' >> /etc/polipo/config
RUN echo 'diskCacheRoot = ""' >> /etc/polipo/config
RUN echo 'ORPort 9001' >> /etc/tor/torrc
RUN echo 'ExitPolicy reject *:*' >> /etc/tor/torrc
ENV PYTHONPATH $PYTHONPATH:/scrapy
WORKDIR /scrapy
VOLUME ["/scrapy"]
Thanks in advance.
In order to facilitate the usage of docker exec, make sure you run your container with a name:
docker run -d --name aname.cont ...
I don't see an ENTRYPOINT or CMD directive in the Dockerfile, so do mention what you want to run when using docker run -d.
(I like to add '.cont' as a naming convention, to remember that it is a container name, not an image name)
Then a docker exec aname.cont bash should work.
Check that the container is still running with a docker ps -a
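Since the Dockerfile above defines no ENTRYPOINT or CMD, a detached container exits immediately, and docker exec then has nothing to attach to. A hedged sketch of keeping it running long enough to exec in (tail -f /dev/null is just one common placeholder command):
docker run -d --name aname.cont toto tail -f /dev/null
docker exec -it aname.cont bash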
When creating the container you should use the image name:
docker run -d --name my_toto toto
You cannot choose the ID when creating a container; Docker assigns it.
Then connect with:
docker exec -it my_toto bash
A quicker way is to run it directly:
docker run -d -it --name my_toto toto bash
The container will still exist after you exit.
