I have to make a PDF out of some HTML pages in a containerized Lambda. For this I am trying to use pdfkit and wkhtmltopdf, but I am not able to get it working and receive the error below.
Error text:
No wkhtmltopdf executable found: "./wkhtmltopdf"
If this file exists please check that this process can read it or you can pass path to it manually in method call, check README. Otherwise please install wkhtmltopdf - https://github.com/JazzCore/python-pdfkit/wiki/Installing-wkhtmltopdf
My Lambda code:
import pdfkit as pdf

def lambda_handler(event, context):
    config = pdf.configuration(wkhtmltopdf='./wkhtmltopdf')
    pdf.from_file(
        filelist_new,
        output_filename,
        options={
            'margin-top': '0.2in',
            'margin-right': '0.2in',
            'margin-bottom': '0.4in',
            'margin-left': '0.2in',
            'orientation': 'Landscape',
            'page-size': 'A4',
            'encoding': 'UTF-8',
            'footer-line': '',
            'footer-spacing': 1,
            'footer-font-name': 'Times,serif',
            'footer-font-size': '10'
        },
        configuration=config,
    )
My Dockerfile:
FROM umihico/aws-lambda-selenium-python:latest
RUN pip install pdfkit
RUN pip install boto3
RUN pip install wkhtmltopdf --target "./"
COPY lambda_function.py ./
CMD [ "lambda_function.lambda_handler" ]
I also tried to find wkhtmltopdf by running the Docker container, with no luck.
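For anyone debugging the same thing, a quick check (my addition, not from the original post) of whether the process can actually see and execute the file that was passed to pdfkit:

import os

# Debug aid: verify the path given to pdfkit.configuration actually
# points at a readable, executable file inside the container.
path = './wkhtmltopdf'
print(os.path.abspath(path), os.path.isfile(path), os.access(path, os.X_OK))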
Update: the issue got solved. The following worked for my case.
Dockerfile:
FROM umihico/aws-lambda-selenium-python:latest
RUN pip install pdfkit --target ${LAMBDA_TASK_ROOT}
RUN pip install boto3
RUN yum install -y openssl xorg-x11-fonts-75dpi xorg-x11-fonts-Type1
RUN curl "https://github.com/wkhtmltopdf/packaging/releases/download/0.12.6-1/wkhtmltox-0.12.6-1.amazonlinux2.x86_64.rpm" -L -o wkhtmltox-0.12.6-1.amazonlinux2.x86_64.rpm
RUN rpm -i wkhtmltox-0.12.6-1.amazonlinux2.x86_64.rpm
COPY lambda_function.py ./
CMD [ "lambda_function.lambda_handler" ]
Lambda code:
import pdfkit as pdf

def lambda_handler(event, context):
    config = pdf.configuration(wkhtmltopdf='/usr/local/bin/wkhtmltopdf')
    pdf.from_file(
        filelist_new,
        output_filename,
        options={
            'enable-local-file-access': '',
            'margin-top': '0.2in',
            'margin-right': '0.2in',
            'margin-bottom': '0.4in',
            'margin-left': '0.2in',
            'orientation': 'Landscape',
            'page-size': 'A4',
            'encoding': 'UTF-8',
            'footer-line': '',
            'footer-spacing': 1,
            'footer-font-name': 'Times,serif',
            'footer-font-size': '10'
        },
        configuration=config,
    )
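Worth noting why 'enable-local-file-access' appears in the options above: wkhtmltopdf 0.12.6 disables local file access by default, so converting HTML files on disk fails without it. A minimal sketch (page.html and page.pdf are hypothetical names):

import pdfkit

# 'enable-local-file-access' is required since wkhtmltopdf 0.12.6 when the
# input is a local file; /usr/local/bin is where the rpm installs the binary.
pdfkit.from_file(
    'page.html',   # hypothetical local input file
    'page.pdf',
    options={'enable-local-file-access': ''},
    configuration=pdfkit.configuration(wkhtmltopdf='/usr/local/bin/wkhtmltopdf'),
)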
Links I referred to: https://micropyramid.com/blog/how-to-create-pdf-files-in-python-using-pdfkit/ and "How to install wkhtmltopdf on a linux based (shared hosting) web server".
I am trying to launch, via Docker, a JupyterLab with a custom extension (extensionTest), but I haven't been successful.
Can someone tell me how I can do it, or post an example here?
What is the best base Docker image to use?
Thank you, best regards.
I tried this Dockerfile, but it is not working:
FROM jupyter/minimal-notebook:lab-3.2.3
RUN pip install --no-cache-dir \
astropy \
ipytree \
ipywidgets \
jupyter \
numpy \
poliastro
RUN jupyter labextension install \
jupyterlab-plotly@4.14.2 \
plotlywidget@4.14.2
RUN pip install jupyterlab_widgets
COPY ./extensions/ ./extensions/
WORKDIR ./extensions/
RUN python -m pip install ./extensionTest
RUN jupyter labextension extensionTest
ENTRYPOINT start.sh jupyter lab
Thanks
The error that I get:
ERROR: Command errored out with exit status 1:
command: /opt/conda/bin/python /opt/conda/lib/python3.9/site-packages/pip/_vendor/pep517/in_process/_in_process.py build_wheel /tmp/tmp2m0biqur
cwd: /home/jovyan/extensions/extensionTest
Complete output (44 lines):
INFO:hatch_jupyter_builder.utils:Running jupyter-builder
INFO:hatch_jupyter_builder.utils:Building with hatch_jupyter_builder.npm_builder
INFO:hatch_jupyter_builder.utils:With kwargs: {'build_cmd': 'build:prod', 'npm': ['jlpm']}
INFO:hatch_jupyter_builder.utils:Installing build dependencies with npm. This may take a while...
INFO:hatch_jupyter_builder.utils:> /tmp/pip-build-env-gj35ixm0/overlay/bin/jlpm install
yarn install v1.21.1
info No lockfile found.
[1/4] Resolving packages...
warning @jupyterlab/application > @jupyterlab/apputils > url > querystring@0.2.0: The querystring API is considered Legacy. new code should use the URLSearchParams API instead.
warning @jupyterlab/application > @jupyterlab/ui-components > @blueprintjs/core > popper.js@1.16.1: You can find the new Popper v2 at @popperjs/core, this package is dedicated to the legacy v1
warning @jupyterlab/application > @jupyterlab/ui-components > @blueprintjs/core > react-popper > popper.js@1.16.1: You can find the new Popper v2 at @popperjs/core, this package is dedicated to the legacy v1
warning @jupyterlab/builder > terser-webpack-plugin > cacache > @npmcli/move-file@1.1.2: This functionality has been moved to @npmcli/fs
warning @jupyterlab/builder > @jupyterlab/buildutils > crypto@1.0.1: This package is no longer supported. It's now a built-in Node module. If you've depended on crypto, you should switch to the one that's built-in.
warning @jupyterlab/builder > @jupyterlab/buildutils > verdaccio > request@2.88.0: request has been deprecated, see https://github.com/request/request/issues/3142
warning @jupyterlab/builder > @jupyterlab/buildutils > verdaccio > request > har-validator@5.1.5: this library is no longer supported
warning @jupyterlab/builder > @jupyterlab/buildutils > verdaccio > request > uuid@3.4.0: Please upgrade to version 7 or higher. Older versions may use Math.random() in certain circumstances, which is known to be problematic. See https://v8.dev/blog/math-random for details.
[2/4] Fetching packages...
warning @blueprintjs/core@3.54.0: Invalid bin entry for "upgrade-blueprint-2.0.0-rename" (in "@blueprintjs/core").
warning @blueprintjs/core@3.54.0: Invalid bin entry for "upgrade-blueprint-3.0.0-rename" (in "@blueprintjs/core").
error lib0@0.2.58: The engine "node" is incompatible with this module. Expected version ">=14". Got "12.4.0"
error Found incompatible module.
info Visit https://yarnpkg.com/en/docs/cli/install for documentation about this command.
Traceback (most recent call last):
File "/opt/conda/lib/python3.9/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 363, in <module>
main()
File "/opt/conda/lib/python3.9/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 345, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
File "/opt/conda/lib/python3.9/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 261, in build_wheel
return _build_backend().build_wheel(wheel_directory, config_settings,
File "/tmp/pip-build-env-gj35ixm0/overlay/lib/python3.9/site-packages/hatchling/build.py", line 41, in build_wheel
return os.path.basename(next(builder.build(wheel_directory, ['standard'])))
File "/tmp/pip-build-env-gj35ixm0/overlay/lib/python3.9/site-packages/hatchling/builders/plugin/interface.py", line 136, in build
build_hook.initialize(version, build_data)
File "/tmp/pip-build-env-gj35ixm0/normal/lib/python3.9/site-packages/hatch_jupyter_builder/plugin.py", line 83, in initialize
raise e
File "/tmp/pip-build-env-gj35ixm0/normal/lib/python3.9/site-packages/hatch_jupyter_builder/plugin.py", line 78, in initialize
build_func(self.target_name, version, **build_kwargs)
File "/tmp/pip-build-env-gj35ixm0/normal/lib/python3.9/site-packages/hatch_jupyter_builder/utils.py", line 114, in npm_builder
run(npm_cmd + ["install"], cwd=str(abs_path))
File "/tmp/pip-build-env-gj35ixm0/normal/lib/python3.9/site-packages/hatch_jupyter_builder/utils.py", line 227, in run
return subprocess.check_call(cmd, **kwargs)
File "/opt/conda/lib/python3.9/subprocess.py", line 373, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['/tmp/pip-build-env-gj35ixm0/overlay/bin/jlpm', 'install']' returned non-zero exit status 1.
So, to anyone interested: I was able to create a Dockerfile that builds and runs your Jupyter extension. My code is this:
FROM node:14 AS build-env
RUN apt-get update && \
apt-get install -y python3-pip && \
pip3 install jupyterlab
COPY Path/to/Extension Path/to/Extension
WORKDIR Path/to/Extension
RUN yarn install && yarn build && yarn run build
FROM jupyter/minimal-notebook:lab-3.2.3
RUN pip install --no-cache-dir \
astropy \
ipytree \
ipywidgets \
jupyter \
numpy \
poliastro
RUN jupyter labextension install \
jupyterlab-plotly@4.14.2 \
@jupyter-widgets/jupyterlab-manager \
plotlywidget@4.14.2
COPY --from=build-env Path/to/Extension/on/Nodejs Path/to/Extension
RUN jupyter labextension install Path/to/Extension
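As a quick sanity check (my addition; extensionTest is the example extension name from above), you can list the installed lab extensions from inside the container:

import subprocess

# List installed lab extensions; recent JupyterLab versions print the
# listing to stderr, so inspect both streams.
result = subprocess.run(['jupyter', 'labextension', 'list'],
                        capture_output=True, text=True)
listing = result.stdout + result.stderr
print(listing)
assert 'extensionTest' in listing, 'extensionTest is not installed'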
If someone knows something that can be simplified, let me know :)
I have a project A managed with pipenv that depends on another project B, also built with pipenv. Project B is published in a private PyPI. My Pipfile looks like:
[[source]]
url = "https://pypi.org/simple"
verify_ssl = true
name = "pypi"
[[source]]
url = "https://${USERNAME}:${TOKEN}#MyPrivateRepoUrl"
verify_ssl = true
name = "MyRepoPYPI"
When I install A locally I have no problems and everything works fine (declaring the environment variables with the export command). I have configured an azure-pipelines.yml with two jobs: the first installs A and runs the tests, and the second builds the Docker image and publishes it to the ACR. When I trigger the pipeline, the first job fails with this error:
FAIL
[ResolutionFailure]: File "/home/adminroot/.local/lib/python3.7/site-packages/pipenv/resolver.py", line 741, in _main
[ResolutionFailure]: resolve_packages(pre, clear, verbose, system, write, requirements_dir, packages, dev)
[ResolutionFailure]: File "/home/adminroot/.local/lib/python3.7/site-packages/pipenv/resolver.py", line 709, in resolve_packages
[ResolutionFailure]: requirements_dir=requirements_dir,
[ResolutionFailure]: File "/home/adminroot/.local/lib/python3.7/site-packages/pipenv/resolver.py", line 692, in resolve
[ResolutionFailure]: req_dir=requirements_dir
[ResolutionFailure]: File "/home/adminroot/.local/lib/python3.7/site-packages/pipenv/utils.py", line 1403, in resolve_deps
[ResolutionFailure]: req_dir=req_dir,
[ResolutionFailure]: File "/home/adminroot/.local/lib/python3.7/site-packages/pipenv/utils.py", line 1108, in actually_resolve_deps
[ResolutionFailure]: resolver.resolve()
[ResolutionFailure]: File "/home/adminroot/.local/lib/python3.7/site-packages/pipenv/utils.py", line 833, in resolve
[ResolutionFailure]: raise ResolutionFailure(message=str(e))
[pipenv.exceptions.ResolutionFailure]: Warning: Your dependencies could not be resolved. You likely have a mismatch in your sub-dependencies.
First try clearing your dependency cache with $ pipenv lock --clear, then try the original command again.
Alternatively, you can use $ pipenv install --skip-lock to bypass this mechanism, then run $ pipenv graph to inspect the situation.
Hint: try $ pipenv lock --pre if it is a pre-release dependency.
ERROR: Could not find a version that matches pymooslotting (from -r /tmp/pipenvzsh1c47crequirements/pipenv-y37q7d_w-constraints.txt (line 11))
No versions found
Were https://pypi.org/simple or https://MyUsername:***@MyPrivateRepoUrl reachable?
I'm pretty sure the credentials are set correctly.
The strange thing is that if I run the second job, the Docker image is built correctly, which tells me that the credentials to install B are being passed correctly.
Here is the part of the pipeline where I install A:
steps:
  - task: UsePythonVersion@0
    inputs:
      versionSpec: '3.7'
      architecture: 'x64'
  - script: |
      python -m pip install --user --upgrade pip
      python -m pip install --user pipenv
    displayName: Install Pipenv
  - script: |
      pipenv lock --clear
      pipenv install --system
    displayName: Install Dev Dependencies
  - bash: pytest -rf
    displayName: Unit test
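One thing worth double-checking here (my suggestion, not from the original post): pipenv expands ${USERNAME} and ${TOKEN} from the environment, and Azure Pipelines secret variables are not exposed to script steps unless they are mapped explicitly, for example:

  - script: |
      pipenv lock --clear
      pipenv install --system
    displayName: Install Dev Dependencies
    env:
      USERNAME: $(MyUserName)  # map the secrets so pipenv can expand them
      TOKEN: $(MyToken)        # in the Pipfile source URL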
I'm not committing the lock file, but I have run many tests that included the lock file in the repo.
Here is the task for the Docker publish (which works fine and does the job):
  - task: Docker@2
    displayName: Build Docker Image
    inputs:
      command: build
      containerRegistry: $(acrRepository)
      repository: samples/execslotting
      tags: |
        latest
        $(tag)
      arguments: '--build-arg USERNAME=$(MyUserName) --build-arg TOKEN=$(MyToken)'
The docker image is installing B, so in the build I pass USERNAME and TOKEN vars.
Note: the error message "No versions found. Were https://pypi.org/simple or https://MyUsername:***@MyPrivateRepoUrl reachable?" tells me that the environment variables are read correctly during the installation job.
The docs specify different push and pull endpoint URLs for pip. Try appending /pypi/simple to the repo URL you are pulling from.
Full URL: https://<your-feed-name>:<your-PAT-key>@pkgs.dev.azure.com/<your-organization-name>/<your-project-name>/_packaging/<your-feed-name>/pypi/simple/
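Applied to the Pipfile above, the private source would then look something like this (the URL is a placeholder built from the pattern in the answer; feed, organization, and project names are illustrative):

[[source]]
url = "https://${USERNAME}:${TOKEN}@pkgs.dev.azure.com/<your-organization-name>/<your-project-name>/_packaging/<your-feed-name>/pypi/simple/"
verify_ssl = true
name = "MyRepoPYPI"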
I am trying to install PyMOL on a CentOS 7 system. I installed the dependency glm-devel-0.9.6.3-1.el7.noarch via yum:
sudo yum install glm-devel
During compilation I got an error related to glm:
layer1/SceneView.cpp:34:61: error: no matching function for call to ‘equal(const vec3&, const vec3&, float)’
Could anyone tell me why this error appears? I would appreciate any help!
Best regards.
Install PyMOL in CentOS 7
Build example with an updated glm-devel:
# yum install mmtf-cpp-devel.x86_64 glew-devel.x86_64 libpng-devel freetype-devel msgpack-devel libxml2-devel python36-qt5-devel.x86_64 freeglut-devel mesa-libGL-devel.x86_64 python3-devel
# yum install ./glm-devel-0.9.9.6-6.el7.noarch.rpm
## works with the default python3 (3.6.8) and g++ 4.8.5
Link to glm-devel-0.9.9.6-6.el7: https://drive.google.com/file/d/1BXygEWqpvlbZg867dsXuW47T67Z0V0Q8/view?usp=sharing
https://github.com/schrodinger/pymol-open-source →
https://github.com/schrodinger/pymol-open-source/blob/master/INSTALL
git clone https://github.com/schrodinger/pymol-open-source.git
cd pymol-open-source/
python3 setup.py build
# python3 setup.py install
pymol ## the pymol GUI opens OK
There is also a binary version available, with Python 3.7 included: https://pymol.org/2/ →
https://pymol.org/installers/PyMOL-2.5.2_293-Linux-x86_64-py37.tar.bz2 → tar xvf PyMOL-2.5.2_293-Linux-x86_64-py37.tar.bz2
cd pymol/ && ./pymol
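A minimal smoke test after either install (my addition; assumes the pymol module is importable and launches it headless):

import pymol
from pymol import cmd

pymol.finish_launching(['pymol', '-qc'])  # -qc = quiet, command-line, no GUI
cmd.fragment('ala')                       # build an alanine fragment
print(cmd.count_atoms('all'), 'atoms')    # expect a small nonzero count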
The "cdk" command is not found after installing with "pip install --upgrade aws-cdk.core" (per the docs).
The package successfully installs because it is found in the "site-packages" directory.
tennis.smith at C02TM089GY6N in /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages
$ ls
README.txt cattr/ docopt-0.6.2.dist-info/ pip/ requests-2.22.0.dist-info/ typing_extensions.py
__pycache__/ cattrs-0.9.0.dist-info/ docopt.py pip-19.3.dist-info/ setuptools/ urllib3/
attr/ certifi/ easy_install.py pkg_resources/ setuptools-41.2.0.dist-info/ urllib3-1.25.6.dist-info/
attrs-19.3.0.dist-info/ certifi-2019.9.11.dist-info/ idna/ publication-0.0.3.dist-info/ six-1.12.0.dist-info/ wheel/
aws_cdk/ chardet/ idna-2.8.dist-info/ publication.py six.py wheel-0.33.6.dist-info/
aws_cdk.core-1.13.1.dist-info/ chardet-3.0.4.dist-info/ jsii/ python_dateutil-2.8.0.dist-info/ tests/ yarg/
aws_cdk.cx_api-1.13.1.dist-info/ dateutil/ jsii-0.19.0.dist-info/ requests/ typing_extensions-3.7.4.dist-info/ yarg-0.1.9.dist-info/
Any ideas why it isn't being detected?
The cdk command is part of the CDK Toolkit, which is a Node.js package, so pip install aws-cdk.core only installs the Python libraries, not the CLI. To install the CDK CLI, run npm install -g aws-cdk.
I have just started exploring NiftyNet. I am getting the following error when I try to run the autocontext_mr_ct_model_zoo.
When I run the following command:
python net_regress.py train \
  -c ~/niftynet/extensions/autocontext_mr_ct/net_autocontext.ini \
  --starting_iter 0 --max_iter 500
I get the following error message:
ValueError: Unknown keywords in config file: [error_map] -- all possible choices are ['', u'loss_border', 'output', 'image', 'weight', 'sampler', u'cuda_devices', u'num_threads', u'num_gpus', u'model_dir', u'dataset_split_file', u'name', u'activation_function', u'batch_size', u'decay', u'reg_type', u'volume_padding_size', u'window_sampling', u'queue_length', u'multimod_foreground_type', u'histogram_ref_file', u'norm_type', u'cutoff', u'foreground_type', u'normalisation', u'whitening', u'normalise_foreground_only', u'weight_initializer', u'bias_initializer', u'weight_initializer_args', u'bias_initializer_args', u'optimiser', u'sample_per_volume', u'rotation_angle', u'rotation_angle_x', u'rotation_angle_y', u'rotation_angle_z', u'scaling_percentage', u'random_flipping_axes', u'lr', u'loss_type', u'starting_iter', u'save_every_n', u'tensorboard_every_n', u'max_iter', u'max_checkpoints', u'validation_every_n', u'validation_max_iter', u'exclude_fraction_for_validation', u'exclude_fraction_for_inference', u'inference_iter', u'save_seg_dir', u'output_interp_order', u'border', u'csv_file', u'path_to_search', u'filename_contains', u'filename_not_contains', u'interp_order', u'pixdim', u'axcodes', u'spatial_window_size'].
I'm not quite sure what I've messed up here, any advice welcomed.
It looks like the source code you downloaded is out of date; are you running the command with the latest version of NiftyNet?
The pip package is slightly out of date at the moment; you can get the latest source code by running:
git clone https://github.com/NifTK/NiftyNet.git
# installing dependencies from the list of requirements
cd NiftyNet/
pip install -r requirements-gpu.txt
# run command from the source code folder
python net_regress.py train \
-c ~/niftynet/extensions/autocontext_mr_ct/net_autocontext.ini \
--starting_iter 0 --max_iter 500
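To confirm which NiftyNet version is actually being picked up (my addition; assumes the package exposes __version__):

import niftynet

# Print the version of whichever niftynet is first on the Python path
print(niftynet.__version__)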