PyCharm: Running Pylint from a Docker image

I've installed the PyLint PyCharm plugin (https://plugins.jetbrains.com/plugin/11084-pylint). I can get the linting to work if I choose the default project interpreter to be the one installed on my Windows laptop, but my project interpreter is the one on my attached Docker image. When the project's default interpreter is the one on the Docker image, the PyLint plugin complains:
The project interpreter is missing Pylint, which is needed to properly check the imports.
I've installed pylint on the Docker image; it does not, however, show up in the package list under File -> Settings -> Project Interpreter.
Does anyone know if the PyLint plugin should work with this workflow?

To run Pylint in a Docker container I configured it as an external tool.
(screenshot of the Edit Tool window omitted)
Program:
docker-compose
Arguments:
run --rm django pylint --msg-template="$ProjectFileDir$/{path}:{line}:{column}: {msg_id}: {msg} ({symbol})" $FilePathRelativeToProjectRoot$
The --msg-template argument makes each file path clickable, so you can easily navigate to the line with the problem.
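If you use plain docker instead of docker-compose, the same external-tool trick should work with docker run; a minimal sketch, where the my-app image name and the /app mount point are assumptions, not from the original answer:
Program:
docker
Arguments:
run --rm -v $ProjectFileDir$:/app -w /app my-app pylint --msg-template="$ProjectFileDir$/{path}:{line}:{column}: {msg_id}: {msg} ({symbol})" $FilePathRelativeToProjectRoot$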

Related

How to ensure a sourced script is available when using a docker image for GitLab CI?

I use custom docker images (mostly based on phusion) for GitLab CI alright. But sometimes an image requires sourcing a shell file to work properly (set PATH, LD_LIBRARY_PATH, etc.).
When running an interactive shell from the docker image (e.g. docker run -it <image_name> /bin/bash), this can be fixed by simply adding the appropriate source command to /etc/profile or similar. But it looks like the scripts in GitLab CI are not run in an interactive shell, so the paths are not properly set up. I work around this by adding the source (or .) command to the GitLab CI script itself, but this is something image-specific that should live in the image, not in the script.
Is there anything I can do that will effectively source the file directly on the image (or at least when GitLab CI runs the script on the image)? I could manually inspect what environment changes the sourced file introduces and put them in ENV instructions, but I'm looking for something less fragile when rebuilding the image from possibly updated sources.
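One way to keep the fix in the image is bash's BASH_ENV mechanism: non-interactive bash sources the file that variable points to on startup, and non-interactive shells are how GitLab CI runs job scripts. A minimal Dockerfile sketch, assuming your runner invokes the script with bash, and with setup-env.sh as a hypothetical name for the file you need sourced:
COPY setup-env.sh /etc/profile.d/setup-env.sh
# non-interactive bash reads the file named by BASH_ENV on startup,
# so the script is sourced even though CI shells are not interactive
ENV BASH_ENV=/etc/profile.d/setup-env.sh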

How to run Selenium Java test cases and copy the test framework from a Mac host to an Ubuntu Docker container

I have a Selenium Java test automation framework on my macOS machine. Now I want to execute my automation test cases in an Ubuntu Docker container, using a Dockerfile that automatically installs Java, Selenium, TestNG, and Maven in the container.
Docker requires shell commands, so before thinking about Docker you need to be able to run your Selenium tests from the shell. If your tests cannot be executed from the shell on your Mac, it will be difficult to execute them with Docker.
If you are able to run the tests from the shell and all of your reports are generated correctly, you are ready to Dockerize.
Selenium tests are not live applications, so your Docker container will be used just to run the tests, and after that you should destroy it.
Since you are using Java, consider whether you can run your tests as a single jar instead of through Maven. If you achieve this, your flow will be simpler and lighter.
If you achieve the dockerization of your tests, you can run the tests developed on your Mac on any machine, on-premise or in the cloud, with this line:
docker run --name tests -d \
-e PARAM1=FOO \
-e PARAM2=BAR \
tests:1.0.0
Running tests with Maven (source code level)
If your pom.xml is well configured, you can run the TestNG tests with: mvn clean test
So your Dockerfile will be:
FROM maven:3.3-jdk-8
RUN mkdir /usr/test
COPY . /usr/test
WORKDIR /usr/test
CMD ["mvn", "clean", "test"]
Note: I have not tested this Dockerfile yet.
The execution will be a little slower because compilation is performed at the docker run phase.
Run tests using jar
According to this you can run TestNG with pure Java:
java -cp F:\Selenium\SampleTestNG\lib\*;F:\Selenium\SampleTestNG\bin org.testng.TestNG testng.xml
As you can see, you need the TestNG framework jars, which complicates the dockerization.
If you are able to use the maven-assembly-plugin, Maven will merge all the jars into just one. If you achieve this, your automation flow will be:
mvn clean package
java -cp selenium-test.jar org.testng.TestNG testng.xml
If you achieve this, your Dockerfile could be:
FROM maven:3.3-jdk-8
RUN mkdir /usr/test
COPY . /usr/test
WORKDIR /usr/test
RUN mvn clean package
CMD ["java", "-cp", "selenium-test.jar", "org.testng.TestNG", "testng.xml"]
Note: I have not tested this Dockerfile yet.
In this approach the compilation is done at the docker build phase, so it is faster than the previous approach.
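With either Dockerfile, the build-and-run flow then looks something like this (a sketch; the tests:1.0.0 tag and PARAM values just reuse the earlier example):
docker build -t tests:1.0.0 .
docker run --rm --name tests -e PARAM1=FOO -e PARAM2=BAR tests:1.0.0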
Common mistake
If your Selenium tests open a browser on your developer machine, you cannot achieve this with a single Docker container.
Selenium needs an operating system with a desktop interface and an installed browser. During development, all of this runs on your developer machine. On real environments, you have these options:
- Ubuntu with desktop
Not common, but possible. If you choose this, you will need to install as many browsers (and their Selenium drivers) as you want to test against. As this approach is not a shell solution, you will need to install an agent so the tests can be executed remotely. You will also need emulators to launch browsers of specific operating systems, like Safari or Microsoft Edge. This will be a nightmare.
So basically this is the same as your developer machine, but in another network or in a cloud.
- Selenium Grid server
Similar to the previous option, but more elegant. Check:
https://digital.ai/catalyst-blog/set-up-cross-browser-testing-with-our-selenium-grid-tutorial
https://github.com/SeleniumHQ/docker-selenium
It does not spare you the browser installation, but it is free.
Your test code will be the same; only the configuration changes a little bit:
https://www.mstsolutions.com/technical/execution-of-test-in-remote-machine-using-selenium-grid/
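With docker-selenium, a single command gives you a browser plus a Selenium server; a minimal sketch (image name and port follow the docker-selenium README, the --shm-size value is the commonly recommended one):
# standalone Chrome node listening on port 4444;
# --shm-size avoids Chrome crashes caused by the tiny default /dev/shm
docker run -d -p 4444:4444 --shm-size="2g" selenium/standalone-chrome
# point your tests at http://localhost:4444 (or /wd/hub on older images)
# as the RemoteWebDriver URL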
- BrowserStack ($)
Basically, it is a ready-to-use, private Selenium Grid service that requires payment (sometimes time is worth more than money). You just point your Selenium tests to its URL, pick your browsers, and run:
https://www.browserstack.com/docs/automate/selenium/getting-started/java#run-your-first-test
With this service, Docker may not be necessary, because you just need a simple mvn test or java -jar. These commands can be launched from Jenkins or from a simple shell script on your DevOps server.
- Headless browser
Basically, these are browsers that run in the background in your shell, using your RAM. This is perfect if you cannot pay for BrowserStack or configure your own Selenium Grid server.
The only disadvantage is that some recent JavaScript features may not work in this kind of virtual browser. They also don't support features like printers, cameras, or other low-level requirements where a real UI is needed.
Here are some options:
https://phantomjs.org/
https://hacks.mozilla.org/2017/12/using-headless-mode-in-firefox/
https://developers.google.com/web/updates/2017/04/headless-chrome
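For example, a quick way to verify that headless Chrome works inside your container (a sketch; it assumes Chrome is installed in the image, and the URL is only an example):
# prints the rendered DOM of the page without opening any window
google-chrome --headless --disable-gpu --dump-dom https://example.com/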

Quarkus: Testing a native image built in a container

The Quarkus - Building a Native Executable guide discusses how to build and test a native executable, and also how to build a native executable inside a docker container.
I've followed this guide to set up a common native executable build using Docker, which we use on our CI server and also locally, regardless of the host operating system.
However, the produced native executable must be run on the architecture used by the builder docker image, but the Maven and Gradle test tasks try to execute the produced image directly. For example, the docker build produces a Linux native-image, but we want to run the tests from OSX and Windows systems too.
How can I tell Quarkus to run the native tests against the built docker container, instead of the raw binary?
UPDATE
@QuarkusIntegrationTest is what you are looking for
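For reference, the matching Maven command looks roughly like this (a sketch based on the Quarkus documentation, not from the original answer):
# build the native executable in a Linux builder container, then run
# the integration tests (those annotated with @QuarkusIntegrationTest)
./mvnw verify -Dnative -Dquarkus.native.container-build=true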
ORIGINAL
We don't have such a capability yet. Please submit your idea at: https://groups.google.com/forum/?utm_medium=email&utm_source=footer#!msg/quarkus-dev/IdwKtwdm7DY/eJKrHfX3AwAJ

Jenkins job build error | Trying to build a Docker image | Jenkins and Docker both installed on Windows 7

I have installed Docker Toolbox and Jenkins on my Windows 7 laptop (virtualization enabled), and I'm trying to build a Jenkins job for creating and deploying an Angular image.
It works fine when I try it from the Windows command line, but it shows the error below when I build the project:
C:\Program Files (x86)\Jenkins\workspace\MyDemo>docker run --rm -p 4200:4200 --name "TopMovies1" demoapp1
'docker' is not recognized as an internal or external command,
operable program or batch file.
C:\Program Files (x86)\Jenkins\workspace\MyDemo>exit 9009
Build step 'Execute Windows batch command' marked build as failure
Finished: FAILURE
PS: output from the Windows command line
C:\Users\gbanerje>docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
FYI, I have added "C:\Program Files\Docker Toolbox;C:\Windows\System32" to the path variable, but the error still persists.
Please share if you have any clue about this error.
Thank you in advance.
Try adding the above paths in the environment variables section under Manage Jenkins -> Configure System -> Environment Variables, and add path=%path%;
If you can run Docker from the command line but not from the Jenkins job, you might need to change the order of the environment variables: move the Docker path all the way down, and let the system Windows/Linux variables be picked up first.
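A quick workaround is to set the PATH inside the build step itself; a sketch reusing the paths and command from the question:
REM "Execute Windows batch command" build step
set PATH=%PATH%;C:\Program Files\Docker Toolbox
docker run --rm -p 4200:4200 --name "TopMovies1" demoapp1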

Coverity scan while building in Docker container

I have a custom Docker container in which I build and test a project. It is somehow integrated with Travis CI. Now I want to run the Coverity scan analysis from within Travis CI as well, but the tricky part (if I understand the Coverity docs correctly) is that I need to run the build. The build, however, runs in the container.
Now, according to the cov-build --help
The cov-build or cov-build-sbox command intercepts all calls to the
compiler invoked by the build system and captures source code from the
file system.
What I've tried:
cov-build --dir=./cov docker exec -ti container_name sh -c "<custom build commands>"
With this approach, however, Coverity apparently does not catch the calls to the compiler (which is quite understandable given the Docker philosophy) and emits no files.
What I do not want (at least while there is hope for a better solution):
- to install locally all the stuff necessary to build in the container, only to be able to run the Coverity scan
- to run cov-build from within the container, since:
  - I believe this would increase the Docker image size significantly
  - I use the Travis CI addon for the Coverity scan, and this would complicate things a lot
The Travis CI part is just FWIW; I tried all of that locally and it doesn't work either.
I'd be thrilled by any suggestions for this problem. Thank you.
Okay, I sort of solved the issue.
1. I downloaded and modified (just a few modifications to fit my environment) the script that Travis uses to download and run the Coverity scan.
2. I installed Coverity on the host machine (in my case the Travis CI machine).
3. I ran the docker container and mounted the directory where Coverity is installed using docker run -dit -v <coverity-dir>:<container-dir>:ro .... This way I avoided increasing the Docker image size.
4. I executed the cov-build command and uploaded the analysis using another part of the script, directly from the docker container.
Hope this helps someone struggling with a similar issue.
If you're amenable to adjusting your build, you can change your "compiler" to be cov-translate <args> --run-compile <original compiler command line>. This is effectively what cov-build does under the hood (minus the run-compile, since your compiler is already running), and should result in a build capture.
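To illustrate the shape of that, here is a sketch only, following the form described above; the wrapper approach, the --dir argument, and the /cov path are assumptions, so check the Coverity docs for the exact options:
#!/bin/sh
# hypothetical wrapper installed in place of gcc inside the container
exec cov-translate --dir /cov --run-compile gcc "$@"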
Here is the solution I use, in "script", "after_script" or whichever phase of the Travis job's lifecycle you want:
1. Download the Coverity tool archive using wget (the complete command to use can be found in your Coverity Scan account)
2. Untar the archive into a coverity_tool directory
3. Start your docker container as usual, without needing to mount the coverity_tool directory as a volume (in case you've created coverity_tool inside the directory from which the docker container is started)
4. Build the project using the cov-build tool inside docker
5. Archive the generated cov-int directory
6. Send the result to Coverity using a curl command
Step 6 should be feasible inside the container, but I usually do it outside.
Also, don't forget that COVERITY_SCAN_TOKEN must be encrypted and exported as an environment variable.
A concrete example is often more understandable than a long text; here is a commit that applies above steps to build and send results to coverity scan:
https://github.com/BoubacarDiene/NetworkService/commit/960d4633d7ec786d471fc62efb85afb5af2bed7c
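In shell terms, the steps might look roughly like this (a sketch, not taken from the linked commit; my-project, the email address, and the paths are placeholders/assumptions):
# steps 1-2: fetch and unpack the Coverity build tool
wget https://scan.coverity.com/download/linux64 \
     --post-data "token=$COVERITY_SCAN_TOKEN&project=my-project" \
     -O coverity_tool.tgz
mkdir coverity_tool && tar xzf coverity_tool.tgz -C coverity_tool --strip-components=1
# steps 3-4: inside the container, wrap your usual build with cov-build
coverity_tool/bin/cov-build --dir cov-int <custom build commands>
# steps 5-6: archive and upload the result
tar czf cov-int.tgz cov-int
curl --form token="$COVERITY_SCAN_TOKEN" --form email=you@example.com \
     --form file=@cov-int.tgz \
     "https://scan.coverity.com/builds?project=my-project"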
