Chaining commands with && is fairly easy to do with cargo and npm.
When I run $ pylint src && pylint tests from the shell, I have no problems.
But when I run it as a pipenv script:
[scripts]
lint = "pylint src && pylint tests"
$ pipenv run lint
************* Module &&
&&:1:0: F0001: No module named && (fatal)
Pylint thinks && is another module.
Does pipenv not just run scripts through the shell?
As pointed out in this issue on their GitHub tracker, this is caused by the fact that:
this is too difficult to get right, especially since Pipenv needs to support a cross-platform experience
Source: https://github.com/pypa/pipenv/issues/2038#issuecomment-387506323
There is a workaround pointed out in the issue report itself that could be a good fit for you:
[scripts]
lint = "bash -c 'pylint src && pylint tests'"
Relevant pipenv issues on their tracker:
https://github.com/pypa/pipenv/issues/2878 — the maintainer's answer here says more about the portability issues implementing this would cause
https://github.com/pypa/pipenv/issues/2283 — is interesting because it offers another solution, which is to use PyInvoke (see the sketch after this list)
https://github.com/pypa/pipenv/issues/2160
https://github.com/pypa/pipenv/issues/2038
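For illustration, a PyInvoke setup equivalent to the script above might look like this (a sketch only; tasks.py is PyInvoke's conventional file name, but the task body is my assumption, not taken from the issue):

# tasks.py
from invoke import task

@task
def lint(c):
    # runs the two linters in sequence; invoke aborts on the first failure
    c.run("pylint src")
    c.run("pylint tests")

You would then run it with invoke lint (or pipenv run invoke lint).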
Thank you in advance for helping to solve my issue.
I am new to Craft CMS. I just got a project that uses Fractal to build its template components. On production, when I update a file in the library, it rebuilds the component files in the templates directory. But when I run the same command in my dev site's terminal, it does not rebuild them. Note: I am not getting any errors.
On prod the commands are:
npm run mix --production && npm run fractal:export && node replace.js
and on my dev I am running:
npm run mix && npm run fractal:export && node replace.js
I also tried the exact production command, but it still did not work.
I am writing a few Kong custom plugins in Lua. I am using Kong 2.3.3 and Lua 5.1.
I have some test cases (unit tests + integration tests) and I am running them with the pongo run -coverage option. I have already installed luacov (and also cluacov, both with luarocks install) and all my tests are passing, but no luacov files with coverage data are being generated. I am not running pongo from Docker; I have installed and configured it on my local machine (Linux Ubuntu 20.04).
I have already tried a few things as follows:
my .busted file sets coverage = true, verbose = true and output = "gtest" (I already tried utfTerminal, tap and json too); a sketch of this file is shown after this list
I tried adding luacov as a dependency to my rockspec file... the build does not fail, but no coverage file is generated
I even tried running the tests without pongo, using busted directly, but this is a very bad option because things like spec.helpers or the cjson lib are not on my LUA_PATH
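For reference, the .busted file described above looks roughly like this (a sketch reconstructed from my description; busted reads this file as a Lua table):

-- .busted
return {
  default = {
    coverage = true,
    verbose = true,
    output = "gtest",
  },
}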
A quick way to do this is to modify pongo itself.
Edit your pongo.sh file to:
add the --coverage flag to the busted call
call luacov afterwards to generate the report
display the report with cat luacov.report.out
Locate where busted is called (line 959 for me) and change it to:
"/bin/sh" "-c" "bin/busted --coverage --helper=bin/busted_helper.lua ${busted_params[*]} ${busted_files[*]};luacov;cat luacov.report.out"
Then install luacov: edit assets/Dockerfile and add luacov after the busted installation:
&& luarocks install busted-htest \
&& luarocks install luacov \
pongo run will then give you:
[...]
==============================================================================
Summary
==============================================================================
File Hits Missed Coverage
-------------------------------------------------------------------------------------------------------
/kong-plugin/kong/plugins/myplugin/schema.lua 105 1 99.06%
/kong-plugin/spec/myplugin/01-schema_spec.lua 199 5 97.55%
[...]
You can create a Docker image based on pongo:
spec/unit/docker/Dockerfile
FROM kong-pongo-test:2.3.2
USER root
RUN luarocks install luacov
WORKDIR /kong-plugin
COPY . .
spec/unit/docker/run.sh
#!/bin/sh
busted --coverage spec/unit
luacov
cat luacov.report.out
Run
docker build -f spec/unit/docker/Dockerfile -t my-coverage .
docker run my-coverage sh spec/unit/docker/run.sh
Pongo gained some support for this (still a PR). Note that it only covers unit tests, not integration ones.
See https://github.com/Kong/kong-pongo/pull/184
By the way: the other answers are too complex IMO. You can add a .pongo/pongo-setup.sh to install LuaCov, and move the .luacov file from /kong-plugin to /kong. That should be all that is necessary. A sketch of such a setup script follows.
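For example, the .pongo/pongo-setup.sh could contain something like this (a sketch; the paths follow the description above and are otherwise assumptions):

#!/bin/sh
# install the coverage tool into pongo's test image
luarocks install luacov
# busted runs from /kong, so the LuaCov config must live there
mv /kong-plugin/.luacov /kong/.luacov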
Running tests with coverage can simply be done by passing the flag, without any need to edit pongo or the Dockerfile. Try pongo run -- --coverage, for example.
I have a Docker image that is used for running tests in Jenkins and Bamboo. I need to upgrade the version of g++ used (to something with C++11 support).
I tried using a Dockerfile that looks roughly like the following one:
FROM docker.blahblahblah/centos/6.6:latest
RUN yum install -y git gcc-c++ imake centos-release-scl-rh devtoolset-7-toolchain
# I've tried putting this into /etc/bashrc, ~/.bashrc, ~/.bash_profile
RUN echo "source scl_source enable devtoolset-7" >> ~/.bashrc
My issue is that when g++ is used within the container, it uses the older one instead of the newer one in devtoolset-7, even though the newer one should be sourced from the .bashrc. (Maybe I'm misunderstanding how Docker runs commands.)
Could anyone point me in the right direction here?
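For context: RUN and CMD execute through a non-interactive /bin/sh -c, which never reads ~/.bashrc, so the source line above has no effect on later commands. A minimal sketch of one way around this, assuming the standard devtoolset-7 install location (not verified against this particular base image):

FROM docker.blahblahblah/centos/6.6:latest
RUN yum install -y git gcc-c++ imake centos-release-scl-rh devtoolset-7-toolchain
# put the devtoolset compilers ahead of /usr/bin instead of relying on bashrc
ENV PATH=/opt/rh/devtoolset-7/root/usr/bin:$PATH
RUN g++ --version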
I am trying to compile TensorFlow in a somewhat difficult environment (CentOS 6.2 with a load of "own" compilations).
A lot of the problems I have already solved, but the one baffling me is a SWIG error. I ran with --verbose_failures and I can see the command that bazel tries to run:
(cd /home/../.cache/bazel/_bazel../[long-hexa]/execroot/org_tensorflow && \
exec env - \
bazel-out/host/bin/external/swig/swig -c++ ...
It fails with "Exit 1: swig failed: error executing command".
However, when I cd to the mentioned directory and run the mentioned bazel-out/../swig command, it succeeds! What I have understood is that this is probably a problem of bazel trying to sanitize the environment.
How can I go about debugging this? Is there a way to show which environment variables bazel is using?
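One detail worth noting in the logged command: exec env - starts the child with an empty environment, which would explain why the same command succeeds from your own (fully populated) shell. A quick sketch to see the difference:

# normal shell: PATH etc. are inherited
$ /bin/sh -c 'echo "PATH=$PATH"'
# what the logged command does: start from a scrubbed environment
$ env - /bin/sh -c 'echo "PATH=$PATH"'

If a specific variable turns out to be the missing piece, newer Bazel versions have an --action_env flag for passing variables into action environments; check the docs of your Bazel version before relying on it.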
I have a requirement that, before the application runs, some part of it needs to read an environment variable. For this I have the following Dockerfile:
FROM nodesource/jessie:0.12.7
# install gettext for envsubst
RUN apt-get update
RUN apt-get install -y gettext-base
# cache package.json and node_modules to speed up builds
ADD package.json package.json
RUN npm install
# Add source files
ADD src src
# Substitute value for backend endpoint env var
RUN envsubst < src/js/envapp.js > src/js/app.js
ADD node_modules node_modules
EXPOSE 8000
CMD ["npm","start"]
The above envsubst line reads (or should read) the env variable $MYENV and substitutes it. But when I open the file app.js, it is empty.
I checked that the environment variable exists in the container, and it does. Any reason its value is not read and substituted?
I also tried the same command inside the container and it works. It only fails when I run the image.
This is likely because $MYENV is not available to envsubst when the image is built: RUN instructions execute at build time, before any runtime environment exists.
Each RUN command runs in its own shell.
From the Docker documentation:
RUN (the command is run in a shell - /bin/sh -c - shell form)
You need to source your profile as well. For example, if the $MYENV environment variable is set in the .bashrc file, you can modify your Dockerfile like this:
RUN source ~/.bashrc && envsubst < src/js/envapp.js > src/js/app.js
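If $MYENV is actually known at build time, another common pattern (a sketch, not from the original answer; it assumes you can pass the value with --build-arg) is a build argument:

ARG MYENV
ENV MYENV=$MYENV
RUN envsubst < src/js/envapp.js > src/js/app.js

built with docker build --build-arg MYENV=some-value . (the value here is a placeholder). Note this bakes the value into the image; if it should stay a runtime concern, run envsubst from an entrypoint script instead.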
I encountered the same issues, and after much research and fishing through the internet I managed to find a few workarounds. Below I'll list them along with the risks identifiable at the time of this answer.
Solutions:
1.) apt-get install -y gettext - it's a standard GNU localization library; one of the tools it ships is envsubst, and I can confirm that it works for the Docker ubuntu:latest image and should work for every flavored version.
2.) npm install envsub - depending on the use case, this approach is better suited to Node-based projects.
3.) the custom envsubst CLI project - in my opinion it seems a bit overkill to download a custom CLI from a random stranger, but it's also another option.
Risks:
apt-get install -y gettext:
1.) gettext - this approach would NOT be ideal for VMs since, as with any package, it requires maintenance and updates as time passes. However, this is less of a concern for Docker, because once a container is up and running we can use a small script to add the package, substitute the env vars, and then remove the package again (see the sketch after this list).
2.) It's a bad idea for VMs because it can be used to execute arbitrary code (see the security notice in the resources below).
npm install envsub:
1.) envsub - the same package-maintenance concern applies, and this approach wouldn't be ideal if you're dealing with a different stack and not using Node.js.
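A sketch of the add-substitute-remove script mentioned above (the template and output paths are made up for illustration):

#!/bin/sh
# install envsubst only for the duration of the substitution
apt-get update && apt-get install -y gettext-base
envsubst < /app/config.template.js > /app/config.js
# drop the package again to keep the container lean
apt-get remove -y gettext-base && apt-get autoremove -y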
NOTE:
There's also a PHP version for those developing a PHP application, and it seems to work with PHP's CLI if you need a custom environment.
Resources:
gettext package library info: https://www.gnu.org/software/gettext/
gettext risk: https://ubuntu.com/security/notices/USN-3815-2
PHP gettext: apt-get install -y php-gettext
Custom envsubst CLI: https://github.com/a8m/envsubst
I suggest that, since you are using Node, you use the npm envsub module.
This module is well tested and is developed with Docker in mind.
It avoids relying on other dependencies when you already have the full Node arsenal at your fingertips.
envsub is described as
envsub is envsubst for NodeJS
NodeJS global CLI module providing file-level environment variable substitution via Handlebars
I am the author of the package. I think you will enjoy it.
I had some issues with envsubst in Docker.
For some reason, envsubst doesn't work when I try to write the output back to the same file. For example, this is not working:
RUN envsubst < file.conf > file.conf
But when I tried to use a temp file, the issue disappeared:
RUN envsubst < file.conf > file.conf.temp && cp -f file.conf.temp file.conf
This makes sense: the shell truncates file.conf for the output redirection before envsubst reads it, so the single-file version always yields an empty file.
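A variant of the same idea, assuming the moreutils package is available in your image for its sponge tool (sponge soaks up all of its input before opening the output file):

RUN apt-get update && apt-get install -y moreutils
RUN envsubst < file.conf | sponge file.conf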