When I build the following Dockerfile, the last step raises an error, but when I run the image from the last successful step and execute the same command, it succeeds. I am running on a CentOS 7 server that has a preinstalled podman service. The issue might be related to this one: https://github.com/containers/buildah/issues/1544
Dockerfile:
FROM python:3.10.6
RUN apt update
RUN apt -y upgrade
RUN apt search golang-go
RUN apt install -y golang-go
# expose streamlit port
EXPOSE 8501
# working dir is /app
WORKDIR /app
ENV PYTHONPATH=/app
# Install virtual environment
RUN pip install poetry
COPY ./pyproject.toml /app
RUN poetry config virtualenvs.create false
RUN poetry install --no-root
pyproject.toml:
[tool.poetry]
version = "0.1.0"
description = ""
readme = "README.md"
[tool.pytest.ini_options]
log_cli = true
log_cli_level = "INFO"
log_cli_format = "%(asctime)s [%(levelname)8s] %(message)s (%(filename)s:%(lineno)s)"
log_cli_date_format = "%Y-%m-%d %H:%M:%S"
[tool.poetry.dependencies]
torch = [{markers = "sys_platform == 'macos'", url = "https://download.pytorch.org/whl/cpu/torch-1.13.0-cp310-none-macosx_11_0_arm64.whl"},
{markers = "sys_platform == 'linux' and platform_machine == 'arm64'", url="https://download.pytorch.org/whl/torch-1.13.0-cp310-cp310-manylinux2014_aarch64.whl"},
{markers = "sys_platform == 'linux' and platform_machine == 'x86_64'", url="https://download.pytorch.org/whl/cpu/torch-1.13.0%2Bcpu-cp310-cp310-linux_x86_64.whl"}]
python = "^3.10"
pytest = "^7.2.0"
toml = "^0.10.2"
scikit-learn = "^1.1.3"
scikit-optimize = "^0.9.0"
ax-platform = "^0.2.8"
setuptools = "^65.5.1"
fire = "^0.4.0"
jupyter = "^1.0.0"
streamlit-ace = "^0.1.1"
humanfriendly = "^10.0"
[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
When building I get:
Writing lock file
Package operations: 126 installs, 1 update, 0 removals
error building at step {Env:[PATH=/usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin LANG=C.UTF-8 GPG_KEY=A035C8C19219BA821ECEA86B64E628F8D684696D PYTHON_VERSION=3.10.6 PYTHON_PIP_VERSION=22.2.1 PYTHON_SETUPTOOLS_VERSION=63.2.0 PYTHON_GET_PIP_URL=https://github.com/pypa/get-pip/raw/5eaac1050023df1f5c98b173b248c260023f2278/public/get-pip.py PYTHON_GET_PIP_SHA256=5aefe6ade911d997af080b315ebcb7f882212d070465df544e1175ac2be519b4 PYTHONPATH=/app] Command:run Args:[poetry install --no-root] Flags:[] Attrs:map[] Message:RUN poetry install --no-root Original:RUN poetry install --no-root}: error while running runtime: exit status 1
In an interactive shell of the last successful step's image I get no error. Internally, the install seems to be using the correct torch version:
• Installing torch (1.13.0+cpu https://download.pytorch.org/whl/cpu/torch-1.13.0%2Bcpu-cp310-cp310-linux_x86_64.whl)
Just change
RUN poetry install --no-root
to
RUN poetry install --no-root --no-interaction
A docker build step has no TTY attached, so if poetry tries to prompt for input (a keyring prompt, for example) the step fails; --no-interaction makes poetry assume defaults instead of prompting.
I would like to run a set of specific tests within a Docker container and I am not sure how to tackle this. The tests I want to perform are security-related, like creating users and managing GPG keys for them, which I am reluctant to run on the PC executing the tests.
I tried the pytest-xdist/socketserver combo, and also copying the tests into a running Docker container and using pytest-json-report to get the results as JSON saved to a volume shared with the host, but I am not sure this approach is good.
For now, I would settle for all tests (without marks or similar features) being executed "remotely" (in Docker) with the results presented as if everything ran on the local PC.
I don't mind writing a specific plugin, but I am not sure if this is a good way: do I have to make sure that my plugin is loaded before, say, pytest-xdist (or some others)? Additionally, if I use, say, pytest_sessionstart in my conftest.py to build a Docker image that I can then target with xdist, my tests also have some "dependencies" that I have to put within conftest.py; I can't use the same conftest.py within the container and on the PC running the tests.
Thank you in advance
In case anyone else has a similar need, I will share what I did.
First of all, there is already the excellent pytest-json-report plugin to export JSON results. However, I made a simpler plugin with less functionality that uses pytest_report_to_serializable directly:
import json
from socket import gethostname


def pytest_addoption(parser, pluginmanager):
    parser.addoption(
        '--report-file', default='%s.json' % gethostname(), help='path to JSON report'
    )


def pytest_configure(config):
    plugin = JsonTestsExporter(config=config)
    config._json_report = plugin
    config.pluginmanager.register(plugin)


def pytest_unconfigure(config):
    plugin = getattr(config, '_json_report', None)
    if plugin is not None:
        del config._json_report
        config.pluginmanager.unregister(plugin)
        print('Report saved in: %s' % config.getoption('--report-file'))


class JsonTestsExporter(object):
    def __init__(self, config):
        self._config = config
        self._export_data = {'collected': 0, 'results': []}

    def pytest_report_collectionfinish(self, config, start_path, startdir, items):
        self._export_data['collected'] = len(items)

    def pytest_runtest_logreport(self, report):
        data = self._config.hook.pytest_report_to_serializable(
            config=self._config, report=report
        )
        self._export_data['results'].append(data)

    def pytest_sessionfinish(self, session):
        report_file = self._config.getoption('--report-file')
        with open(report_file, 'w+') as fd:
            fd.write(json.dumps(self._export_data))
The reason behind this is that I wanted the results to also be importable using pytest_report_from_serializable.
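For completeness, a minimal sketch (not part of the plugin above) of how such a report file could be re-hydrated on the host via that hook; the helper name load_remote_reports is illustrative:
import json

def load_remote_reports(config, report_file):
    # turn each serialized entry back into a report object via the pytest hook
    with open(report_file) as fd:
        data = json.load(fd)
    return [
        config.hook.pytest_report_from_serializable(config=config, data=item)
        for item in data['results']
    ]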
Simplified Dockerfile:
FROM debian:buster-slim AS builder
COPY [ "requirements.txt", "run.py", "/artifacts/" ]
COPY [ "json_tests_exporter", "/artifacts/json_tests_exporter/" ]
RUN apt-get update\
# install necessary packages
&& apt-get install --no-install-recommends -y python3-pip python3-setuptools\
# build json_tests_exporter *.whl
&& pip3 install wheel\
&& sh -c 'cd /artifacts/json_tests_exporter && python3 setup.py bdist_wheel'
FROM debian:buster-slim
ARG USER_UID=1000
ARG USER_GID=${USER_UID}
COPY --from=builder --chown=${USER_UID}:${USER_GID} /artifacts /artifacts
RUN apt-get update\
# install necessary packages
&& apt-get install --no-install-recommends -y wget gpg openssl python3-pip\
# create user to perform tests
&& groupadd -g ${USER_GID} pytest\
&& adduser --disabled-password --gecos "" --uid ${USER_UID} --gid ${USER_GID} pytest\
# copy/install entrypoint script and preserve its permissions
&& cp -p /artifacts/run.py /usr/local/bin/run.py\
# install required Python libraries
&& su pytest -c "pip3 install -r /artifacts/requirements.txt"\
&& su pytest -c "pip3 install /artifacts/json_tests_exporter/dist/*.whl"\
# make folder for tests and results
&& su pytest -c "mkdir -p /home/pytest/tests /home/pytest/results"
VOLUME [ "/home/pytest/tests", "/home/pytest/results" ]
USER pytest
WORKDIR /home/pytest/tests
ENTRYPOINT [ "/usr/local/bin/run.py" ]
The JSON exporter plugin is located in the same folder as the Dockerfile.
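For reference, here is a minimal setup.py sketch that would satisfy the wheel-build step in the Dockerfile; the name, version, and single-module layout are assumptions, and the pytest11 entry point is what makes pytest discover the plugin automatically:
from setuptools import setup

setup(
    name='json-tests-exporter',
    version='0.1.0',
    py_modules=['json_tests_exporter'],
    install_requires=['pytest'],
    # 'pytest11' is the entry-point group pytest scans for installed plugins
    entry_points={'pytest11': ['json_tests_exporter = json_tests_exporter']},
)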
run.py is as simple as:
#!/usr/bin/python3
import pytest
import sys
from socket import gethostname


def main():
    if 1 == len(sys.argv):
        # use default arguments
        args = [
            '--report-file=/home/pytest/results/%s.json' % gethostname(),
            '-qvs',
            '/home/pytest/tests'
        ]
    else:
        # caller passed custom arguments
        args = sys.argv[1:]
    try:
        res = pytest.main(args)
    except Exception as e:
        print(e)
        res = 1
    return res


if __name__ == "__main__":
    sys.exit(main())
requirements.txt only contains:
python-gnupg==0.4.4
pytest>=7.1.2
So basically, I can run everything with:
docker build -t pytest-runner ./tests/docker/pytest_runner
docker run --rm -it -v $(pwd)/tests/results:/home/pytest/results -v $(pwd)/tests/fixtures:/home/pytest/tests pytest-runner
The last two commands I run programmatically from Python in the pytest_sessionstart(session) hook, using the Docker API.
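For illustration, a minimal sketch of that hook using the docker Python SDK; the paths and image tag mirror the two commands above, everything else is an assumption:
import docker

def pytest_sessionstart(session):
    client = docker.from_env()
    # build the test-runner image from the Dockerfile above
    client.images.build(path='./tests/docker/pytest_runner', tag='pytest-runner')
    root = str(session.config.rootpath)
    # run the container with the results/fixtures volumes, as in the docker run above
    client.containers.run(
        'pytest-runner',
        remove=True,
        volumes={
            root + '/tests/results': {'bind': '/home/pytest/results', 'mode': 'rw'},
            root + '/tests/fixtures': {'bind': '/home/pytest/tests', 'mode': 'rw'},
        },
    )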
I can't start my Cassandra container; I get the following error while the container is starting:
/usr/bin/env: ‘python3\r’: No such file or directory
My Dockerfile:
FROM cassandra:3.11.6
RUN apt-get update && apt-get install -y apt-transport-https && apt-get install software-properties-common -y
COPY ["schema.cql", "wait-for-it.sh", "bootstrap-schema.py", "/"]
RUN chmod +x /bootstrap-schema.py /wait-for-it.sh
ENV BOOTSTRAPPED_SCHEMA_FILE_MARKER /bootstrapped-schema
ENV BOOTSTRAP_SCHEMA_ENTRYPOINT /bootstrap-schema.py
ENV OFFICIAL_ENTRYPOINT /docker-entrypoint.sh
# 7000: intra-node communication
# 7001: TLS intra-node communication
# 7199: JMX
# 9042: CQL
# 9160: thrift service
EXPOSE 7000 7001 7199 9042 9160
# Change entrypoint to custom script
COPY cassandra.yaml /etc/cassandra/cassandra.yaml
ENTRYPOINT ["/bootstrap-schema.py"]
CMD ["cassandra", "-Dcassandra.ignore_dc=true", "-Dcassandra.ignore_rack=true", "-f"]
I get this error only when I include this line:
ENTRYPOINT ["/bootstrap-schema.py"]
I use Windows 10 (Docker for Windows installed).
What's wrong in this script (bootstrap-schema.py)?
#!/usr/bin/env python3
import os
import sys
import subprocess
import signal
import logging

logger = logging.getLogger('bootstrap-schema')
logger.setLevel(logging.DEBUG)
ch = logging.StreamHandler()
ch.setLevel(logging.DEBUG)
logger.addHandler(ch)
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
ch.setFormatter(formatter)

proc_args = [os.environ['OFFICIAL_ENTRYPOINT']]
proc_args.extend(sys.argv[1:])

if not os.path.exists(os.environ["BOOTSTRAPPED_SCHEMA_FILE_MARKER"]):
    # Run official entrypoint command as child process
    proc = subprocess.Popen(proc_args)
    # Wait for CQL (port 9042) to be ready
    wait_for_cql = os.system("/wait-for-it.sh -t 120 127.0.0.1:9042")
    if wait_for_cql != 0:
        logger.error("CQL unavailable")
        exit(1)
    logger.debug("Schema creation")
    cqlsh_ret = subprocess.run("cqlsh -f /schema.cql 127.0.0.1 9042", shell=True)
    if cqlsh_ret.returncode == 0:
        # Terminate bg process
        os.kill(proc.pid, signal.SIGTERM)
        proc.wait(20)
        # touch file marker
        open(os.environ["BOOTSTRAPPED_SCHEMA_FILE_MARKER"], "w").close()
        logger.debug("Schema created")
    else:
        logger.error("Schema creation error. {}".format(cqlsh_ret))
        exit(1)
else:
    logger.debug("Schema already exists")

os.execv(os.environ['OFFICIAL_ENTRYPOINT'], sys.argv[1:])  # Run official entrypoint
Thanks for any tip
EDIT
Of course I already tried adding, e.g.:
RUN apt-get install python3
OK, my fault: this was the well-known line-ending problem. I had to convert the Windows (CRLF) files to Linux (LF) line endings, EACH file, scripts included. Now everything works great :)
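For anyone hitting the same thing, a minimal sketch of the conversion in Python (dos2unix does the same job; the file list mirrors the COPY line above):
# strip Windows CRLF line endings so the shebang reads 'python3', not 'python3\r'
for name in ("schema.cql", "wait-for-it.sh", "bootstrap-schema.py"):
    with open(name, "rb") as f:
        data = f.read()
    with open(name, "wb") as f:
        f.write(data.replace(b"\r\n", b"\n"))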
To be fair, all I want to do is have Metricbeat send system stats to Elasticsearch and view them in Kibana.
I read through the Elasticsearch docs, trying to find clues.
I am basing my image on Python since my actual app is written in Python, and my eventual goal is to send all logs (system stats via Metricbeat, and app logs via Filebeat) to Elastic.
I can't seem to find a way to run Logstash as a service inside a container.
My Dockerfile:
FROM python:2.7
WORKDIR /var/local/myapp
COPY . /var/local/myapp
# logstash
RUN wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | apt-key add -
RUN apt-get update && apt-get install apt-transport-https dnsutils default-jre apt-utils -y
RUN echo "deb https://artifacts.elastic.co/packages/5.x/apt stable main" | tee -a /etc/apt/sources.list.d/elastic-5.x.list
RUN apt-get update && apt-get install logstash
# metricbeat
#RUN wget https://artifacts.elastic.co/downloads/beats/metricbeat/metricbeat-5.6.0-amd64.deb
RUN dpkg -i metricbeat-5.6.0-amd64.deb
RUN pip install --no-cache-dir -r requirements.txt
RUN apt-get autoremove -y
CMD bash strap_and_run.sh
and the extra script strap_and_run.sh:
python finalize_config.py
# start
echo "starting logstash..."
systemctl start logstash.service
#todo :get my_ip
echo "starting metric beat..."
/etc/init.d/metricbeat start
finalize_config.py:
import os
import requests

LOGSTASH_PIPELINE_FILE = 'logstash_pipeline.conf'
LOGSTASH_TARGET_PATH = '/etc/logstash/conf.d'
METRICBEAT_FILE = 'metricbeat.yml'
METRICBEAT_TARGET_PATH = os.path.join(os.getcwd(), '/metricbeat-5.6.0-amd64.deb')

my_ip = requests.get("https://api.ipify.org/").content
ELASTIC_HOST = os.environ.get('ELASTIC_HOST')
ELASTIC_USER = os.environ.get('ELASTIC_USER')
ELASTIC_PASSWORD = os.environ.get('ELASTIC_PASSWORD')

if not os.path.exists(LOGSTASH_TARGET_PATH):
    os.makedirs(LOGSTASH_TARGET_PATH)

# read logstash template file
with open(LOGSTASH_PIPELINE_FILE, 'r') as logstash_f:
    lines = logstash_f.readlines()
new_lines = []
for line in lines:
    new_lines.append(line
                     .replace("<elastic_host>", ELASTIC_HOST)
                     .replace("<elastic_user>", ELASTIC_USER)
                     .replace("<elastic_password>", ELASTIC_PASSWORD))

# write current file
with open(os.path.join(LOGSTASH_TARGET_PATH, LOGSTASH_PIPELINE_FILE), 'w+') as new_logstash_f:
    new_logstash_f.writelines(new_lines)

if not os.path.exists(METRICBEAT_TARGET_PATH):
    os.makedirs(METRICBEAT_TARGET_PATH)

# read metricbeat template file
with open(METRICBEAT_FILE, 'r') as metric_f:
    lines = metric_f.readlines()
new_lines = []
for line in lines:
    new_lines.append(line
                     .replace("<ip-field>", my_ip)
                     .replace("<type-field>", "test"))

# write current file
with open(os.path.join(METRICBEAT_TARGET_PATH, METRICBEAT_FILE), 'w+') as new_metric_f:
    new_metric_f.writelines(new_lines)
The reason is that there is no init system inside the container, so you should not use service or systemctl. Instead, you should start the processes in the background yourself. Your updated script would look like the one below:
python finalize_config.py
# start
echo "starting logstash..."
/usr/bin/logstash &
#todo :get my_ip
echo "starting metric beat..."
/usr/bin/metricbeat &
wait
You will also need to add handling for SIGTERM and other signals, and kill the child processes there. If you don't do that, docker stop will have a few issues.
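For illustration, a minimal sketch of such signal handling as a Python entrypoint; the binary paths match the script above but are still assumptions about your image:
#!/usr/bin/env python
import signal
import subprocess
import sys

# start both services as child processes
procs = [
    subprocess.Popen(["/usr/bin/logstash"]),
    subprocess.Popen(["/usr/bin/metricbeat", "-e"]),
]

def handle_term(signum, frame):
    # forward SIGTERM to the children so 'docker stop' shuts them down cleanly
    for p in procs:
        p.terminate()

signal.signal(signal.SIGTERM, handle_term)
# exit non-zero if any child exits non-zero
sys.exit(max(p.wait() for p in procs))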
In such situations I prefer using a process manager like supervisord and running supervisord as the main PID 1 process.
I'm trying to build a Docker image from a really simple project, just to start understanding how Docker works and communicates. So, I have created a WebApi project with just one method, which returns a 200.
Once the project was created, I wrote the Dockerfile:
# TP5 for technology preview (will not be needed when we go GA)
# FROM microsoft/iis
FROM microsoft/iis:TP5
MAINTAINER Roman_Hervas
# Install Chocolatey (tools to automate commandline compiling)
ENV chocolateyUseWindowsCompression='false'
RUN #powershell -NoProfile -ExecutionPolicy unrestricted -Command "(iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))) >$null 2>&1" && SET "PATH=%PATH%;%ALLUSERSPROFILE%\chocolatey\bin"
# Install build tools
RUN powershell add-windowsfeature web-asp-net45 \
&& choco install microsoft-build-tools -y --allow-empty-checksums -version 14.0.23107.10 \
&& choco install dotnet4.6-targetpack --allow-empty-checksums -y \
&& choco install nuget.commandline --allow-empty-checksums -y \
&& nuget install MSBuild.Microsoft.VisualStudio.Web.targets -Version 14.0.0.3 \
&& nuget install WebConfigTransformRunner -Version 1.0.0.1
RUN powershell remove-item C:\inetpub\wwwroot\iisstart.*
# Copy files (temporary work folder)
RUN md c:\build
WORKDIR c:/build
COPY . c:/build
# Restore packages, build, copy
RUN nuget restore \
&& "c:\Program Files (x86)\MSBuild\14.0\Bin\MSBuild.exe" /p:Platform="Any CPU" /p:VisualStudioVersion=12.0 /p:VSToolsPath=c:\MSBuild.Microsoft.VisualStudio.Web.targets.14.0.0.3\tools\VSToolsPath WebApiDocker.sln \
&& xcopy c:\build\WebApiDocker\* c:\inetpub\wwwroot /s
# NOT NEEDED ANYMORE --> ENTRYPOINT powershell .\InitializeContainer
And the InitializeContainer script:
If (Test-Path Env:\ASPNET_ENVIRONMENT)
{
\WebConfigTransformRunner.1.0.0.1\Tools\WebConfigTransformRunner.exe \inetpub\wwwroot\Web.config "\inetpub\wwwroot\Web.$env:ASPNET_ENVIRONMENT.config" \inetpub\wwwroot\Web.config
}
# prevent container from exiting
powershell
So, finally, I try to execute the command to build the project: docker build -t dockerexample .
The result is a failure with the following message (step 4):
Step 1/10 : FROM microsoft/iis:TP5
---> accd044753c1
Step 2/10 : MAINTAINER Roman_Hervas
---> Using cache
---> e42af9c57e0d
Step 3/10 : ENV chocolateyUseWindowsCompression 'false'
---> Using cache
---> 24621a9f18d9
Step 4/10 : RUN #powershell -NoProfile -ExecutionPolicy unrestricted -Command "(iex ((New-Object System.Net.WebClient).D
ownloadString('https://chocolatey.org/install.ps1'))) >$null 2>&1" && SET "PATH=%PATH%;%ALLUSERSPROFILE%\chocolatey\bin"
---> Running in 61199189917a
container 61199189917a0057fb54dddca6d80a6c6f9e8b77d2326379537684f58fefbe50 encountered an error during CreateContainer:
failure in a Windows system call: A connection could not be established with the Virtual Machine hosting the Container.
(0xc0370108) extra info: {"SystemType":"Container","Name":"61199189917a0057fb54dddca6d80a6c6f9e8b77d2326379537684f58fefb
e50","Owner":"docker","IsDummy":false,"IgnoreFlushesDuringBoot":true,"LayerFolderPath":"C:\\ProgramData\\Docker\\windows
filter\\61199189917a0057fb54dddca6d80a6c6f9e8b77d2326379537684f58fefbe50","Layers":[{"ID":"08b847bd-7f7e-5758-90be-43262
e170e22","Path":"C:\\ProgramData\\Docker\\windowsfilter\\64e43de6efd9eee001b12f6ed8add83d1aefff6cb5f8b55e9a44c4b1b2f27b8
0"},{"ID":"293472e6-599f-5a8e-b531-ac7499b0c900","Path":"C:\\ProgramData\\Docker\\windowsfilter\\cfb71fcbe2f95caa2a5306d
800c3d649067c00702a26a208ead6f5fed58e49c8"},{"ID":"baacc247-5374-5761-812f-e1ad911fda31","Path":"C:\\ProgramData\\Docker
\\windowsfilter\\89144a071d22e130e0ca9a069857a181b8976e9557c95395fb58116358dd5a02"},{"ID":"3d538ae4-eaf0-574c-b274-30bba
ce1a9b0","Path":"C:\\ProgramData\\Docker\\windowsfilter\\e2ff3bea019eaee94ab33312b6a39d6305b85df9b0b950680aa38e55eec5437
1"},{"ID":"937e8340-c320-5f09-a87e-9cd5912f40bb","Path":"C:\\ProgramData\\Docker\\windowsfilter\\0dd23a484fe7eea9da274be
8e6e1f0768b52a8a121e7bf274d5974ada02400d8"}],"HostName":"2ac70997c0f2","MappedDirectories":[],"SandboxPath":"C:\\Program
Data\\Docker\\windowsfilter","HvPartition":true,"EndpointList":["deb85df1-5dba-4394-a1ac-77f4a106e31a"],"HvRuntime":{"Im
agePath":"C:\\ProgramData\\Docker\\windowsfilter\\0dd23a484fe7eea9da274be8e6e1f0768b52a8a121e7bf274d5974ada02400d8\\Util
ityVM"},"Servicing":false,"AllowUnqualifiedDNSQuery":true}
I'm a total noob with Docker, so I have no idea what the problem is here, and Google has not been much help. My operating system is Windows 10 Pro, and the Docker version is 17.03.1-ce-win12 (12058).
Question:
Why does step 4 raise an error?
Thank you very much in advance.
I'm trying to set up Vagrant with Docker as a provider, but when running
vagrant up --provider=docker --debug
I get this error:
"rsync" was not detected as installed in your guest machine. This
is required for rsync synced folders to work. In addition to this,
Vagrant doesn't know how to automatically install rsync for your
machine, so you must do this manually.
Full log here:
http://pastebin.com/zCTSqibM
Vagrantfile:
require 'yaml'

Vagrant.configure("2") do |config|
  user_config = YAML.load_file 'user_config.yml'
  config.vm.provider "docker" do |d|
    d.build_dir = "."
    d.has_ssh = true
    d.ports = user_config['port_mapping']
    d.create_args = ["--dns=127.0.0.1", "--dns=8.8.8.8", "--dns=8.8.4.4"]
    d.build_args = ['--no-cache=true']
  end
  config.vm.hostname = "dev"
  config.ssh.username = "it"
  config.ssh.port = 22
  config.ssh.private_key_path = ["./initial_ssh_key", user_config['ssh_private_key_path']]
  config.ssh.forward_agent = true
end
Dockerfile:
FROM debian:jessie
MAINTAINER IT <it#email.com>

RUN echo 'exit 0' > /usr/sbin/policy-rc.d
RUN echo 'debconf debconf/frontend select Noninteractive' | debconf-set-selections
RUN apt-get update
RUN apt-get upgrade -y
RUN apt-get install sudo apt-utils -y
RUN apt-get -y install sysvinit-core sysvinit sysvinit-utils
RUN cp /usr/share/sysvinit/inittab /etc/inittab
RUN apt-get remove -y --purge --auto-remove systemd libpam-systemd systemd-sysv
RUN apt-get install ssh -y
RUN addgroup --system it
RUN adduser --system --disabled-password --uid 1000 --shell /bin/bash --home /home/it it
RUN adduser it it
RUN adduser it sudo
RUN echo "it ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers
ADD initial_ssh_key.pub /home/it/.ssh/authorized_keys
RUN chown it:it /home/it/ -R
RUN echo "Host * \n\tStrictHostKeyChecking no" >> /etc/ssh/ssh_config
CMD exec /sbin/init
Note:
I'm on Mac OS X 10.12 and I've installed Vagrant, VirtualBox, and Docker. I have rsync installed and added to my PATH on the host machine.
Also, the same Vagrant and Docker configs work perfectly on an Ubuntu host.
How do I install rsync in the guest machine? Or is something else wrong with my config? Any ideas?
You may want to give the alternative boot2docker box a try: https://github.com/dduportal/boot2docker-vagrant-box
as it contains rsync, while hashicorp/boot2docker, which is used by default, seems to lack it!
If you do so, you must add the following line to your docker provider config (adapted to your system, of course):
d.vagrant_vagrantfile = "../path/to/Vagrantfile"
This is because you're changing the docker provider's host VM, as described in the Vagrant docker provider documentation.
Try adding rsync to your Dockerfile, somewhere in one of your apt-get lines, e.g. RUN apt-get install ssh rsync -y. Linux hosts use NFS by default; that's why it works on your Ubuntu host.
Normally Vagrant tries to install rsync on the guest machine; if that fails, it notifies you with that error message. More info on the Vagrant website (3rd paragraph of the "Prerequisites" chapter).