I need to run integration tests using Python and the Selenium Hub remote driver.
I'm planning to use the remote driver (I'm using the Selenium Hub Docker image from https://github.com/SeleniumHQ/docker-selenium).
I can't figure out how to create a persistent profile in the Selenium Hub image and reference it from the remote webdriver.
I guess I need to first create the profile on the Selenium Hub, then reference it in the Python code:
from selenium import webdriver

# Point Chrome at a profile directory on the node that runs the browser
chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument('user-data-dir=##remotepath')

browser = webdriver.Remote(command_executor='http://127.0.0.1:4444/wd/hub',
                           desired_capabilities=chrome_options.to_capabilities())
browser.get('http://www.google.it')
session_id = browser.session_id
How can I create the "user-data-dir" profile on the Selenium Hub Docker image?
Thanks
UPDATE
I ran "chrome://version" and was able to identify the profile path.
I was able to specify it in the 'user-data-dir' argument, but after committing the container and re-launching the image, the path changes.
Is there any way to make it persistent?
UPDATE 2
I've created a folder "/etc/opt/chrome/profile/maya" and a test_policy.json file:
{
  "UserDataDir": "/etc/opt/chrome/profile/maya"
}
placed in this directory:
/etc/opt/chrome/policies/managed
When I open "chrome://policy/", the policy is listed with an error, indicating something is wrong.
You can set a custom path for Chrome via a policy file (visible under chrome://policy) and later add that policy file to the Docker image.
1. Try the policy setup manually to confirm this approach works for you. While trying, make sure you create a valid policy file. Detailed steps are here.
2. Available policy list
3. Add the steps to the Dockerfile (a sketch follows the list):
3.1. Create the folder mentioned in step 1.
3.2. Copy the policy file tested in step 1.
4. Build the Docker image and use it.
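A sketch of such a Dockerfile, assuming the Chrome node image from docker-selenium and the paths used above (the base image and policy file name are examples, not from the original answer):

FROM selenium/node-chrome
USER root
# Step 3.1: create the profile folder and the managed-policies folder used above
RUN mkdir -p /etc/opt/chrome/profile/maya /etc/opt/chrome/policies/managed
# Step 3.2: copy the policy file that was tested manually
COPY test_policy.json /etc/opt/chrome/policies/managed/
USER seluser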
PS: This approach works for me, as I did some customization in Chrome. Let me know in case you need more information.
I'm trying to do something a bit slick, and of course running into issues.
Specifically, I am using a Docker-based environment build system (Lando). I'm feeding it a Dockerfile that it then builds a cluster around, using docker-compose. Locally, this works fine. I also have it working great inside a GitHub Action to spin up a local-dev-identical environment for testing. Quite nice.
However, now I'm trying to expand the Dockerfile using the Dockerfile Plus extension. My Dockerfile looks like this:
# syntax = edrevo/dockerfile-plus
INCLUDE+ ./docker/prod/Dockerfile
COPY docker/dev/php.ini /usr/local/etc/php/conf.d/zzz-docker-custom.ini
# Other stuff here.
This works great for me locally, and I get the contents of docker/prod/Dockerfile included into my docker build.
When I run the same configuration inside a GitHub Actions workflow, I get a syntax error on the INCLUDE+ line, indicating that the extension is not being loaded. According to the project page it relies on BuildKit, which should be enabled by default on any recent Docker version. Yet whatever Docker is on GitHub is apparently not using BuildKit. I've tried enabling it by setting the env vars explicitly (as specified on the Dockerfile+ project page), but it still doesn't seem to work.
How can I get Dockerfile+ working in GitHub Actions? Of note, I do not run the docker build command myself (it's run by docker-compose, using files generated on the fly by Lando), so I cannot modify that specific command. But I didn't need to locally anyway, so I don't know what the issue is.
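For context, the usual way to force BuildKit for docker-compose builds is a pair of environment variables; in a workflow that looks roughly like this (a sketch only, the job layout and checkout step are placeholders, not my actual workflow):

# .github/workflows/test.yml (sketch)
jobs:
  test:
    runs-on: ubuntu-latest
    env:
      DOCKER_BUILDKIT: 1          # make docker build use BuildKit
      COMPOSE_DOCKER_CLI_BUILD: 1 # make docker-compose delegate builds to the Docker CLI (and thus BuildKit)
    steps:
      - uses: actions/checkout@v2
      # ... steps that invoke Lando / docker-compose go here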
I've attempted to enable Read-only user attributes in Keycloak as per the docs: https://www.keycloak.org/docs/latest/server_admin/
However, the documented configuration does not actually prevent a user from changing their attributes.
I'm using Keycloak 15.0.0 with the regular Docker image from Docker Hub.
I made a .cli file and added it to my Docker image, built from:
FROM jboss/keycloak:15.0.0
ADD RESTRICT_USER_ATTRIBUTES.cli /opt/jboss/startup-scripts/
The contents of RESTRICT_USER_ATTRIBUTES.cli:
embed-server --server-config=standalone-ha.xml --std-out=echo
batch
/subsystem=keycloak-server/spi=userProfile/:add
/subsystem=keycloak-server/spi=userProfile/provider=legacy-user-profile/:add(properties={},enabled=true)
/subsystem=keycloak-server/spi=userProfile/provider=legacy-user-profile/:map-put(name=properties,key=read-only-attributes,value=[myUserAttribute])
run-batch
stop-embedded-server
The .cli file is processed according to the log, and I can exec into the Docker container and check the configuration using jboss-cli.sh.
But the end user can still freely edit myUserAttribute using Postman or another tool.
What am I doing wrong here?
I just had this issue, and it seems the documentation is out of date.
They changed the provider name, probably in 15.0.0.
Try changing your CLI script to:
# ...
/subsystem=keycloak-server/spi=userProfile/:add
/subsystem=keycloak-server/spi=userProfile/provider=declarative-user-profile/:add(properties={},enabled=true)
/subsystem=keycloak-server/spi=userProfile/provider=declarative-user-profile/:map-put(name=properties,key=read-only-attributes,value=[myUserAttribute])
# ...
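For completeness, the full corrected script (the same embed-server/batch wrapper as above, with only the provider name changed) would be:

embed-server --server-config=standalone-ha.xml --std-out=echo
batch
/subsystem=keycloak-server/spi=userProfile/:add
/subsystem=keycloak-server/spi=userProfile/provider=declarative-user-profile/:add(properties={},enabled=true)
/subsystem=keycloak-server/spi=userProfile/provider=declarative-user-profile/:map-put(name=properties,key=read-only-attributes,value=[myUserAttribute])
run-batch
stop-embedded-server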
My end goal: I want to fetch data from a retail site on an hourly schedule to see if a specific product is back in stock or not.
I tried using XPath in Python to scrape the site myself, but I'm not too familiar with it, and why reinvent the wheel if someone has already built a scraper? In this case, Diggernaut has a GitHub repo:
https://github.com/Diggernaut/configs/tree/master/bananarepublic.gap.com
I'm using the above GitHub repo to try and run a pre-existing web scraper on the Banana Republic retail site. All that's included in the folder is a config.yml file. I don't even know where to start to try and run it... I am not familiar with .yml files at all and barely know my way around a terminal (I can do basic "ls", "cd" and "brew install"; otherwise, no idea).
Help! I have Docker and Git installed (not that I know how to use Docker). I have a Mac running 10.13.6 (High Sierra).
I'm not sure why you're looking at using Docker for this, as the config.yml is designed for use on Diggernaut.com and not as part of a Docker container deployment. In fact, as far as I can see there is no Docker container for Diggernaut.
On the main GitHub configs page for Diggernaut they list the following instructions:
All configs can be used with the Diggernaut service to retrieve product information.
1. Create a free account at Diggernaut.
2. Log in to your account.
3. Create a project with any name and description you want.
4. Get into your new project by clicking it and create a new digger with any name.
5. You will then see 3 suggested options; use the one where you work with the meta-language.
6. The config editor will open; simply copy and paste the config code and click the save button.
7. Switch the digger's mode from Debug to Active and then run your digger.
8. Wait for completion.
9. Download the data.
10. Schedule your runs if required.
We have a Docker container running Artifactory at my job, and we need to add a custom keystore with self-signed certificates in order to use the Crowd authentication mechanism.
What we did was remove the old container and run a new one with the following launch argument:
-e EXTRA_JAVA_OPTIONS="-Djavax.net.ssl.trustStore=/var/opt/jfrog/artifactory/keystore/selfsignedcerts.jks -Djavax.net.ssl.trustStorePassword=selfsignedpassword"
This worked and we could use the Crowd auth mechanism, but it broke the npm-remote repository (https://registry.npmjs.org) and other HTTPS repos too.
We get the following error when testing the npm-remote repo with the launch argument in place:
Connection to remote repository failed: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
My hypothesis is that the argument overrides the default truststore, but I am unsure. Instead of replacing it, is there any way to use two keystores at once, or to append the self-signed certificates to the existing one? (I can't even locate the keystore.)
We managed to find a solution with the following:
https://jfrog.com/knowledge-base/how-to-resolve-unable-to-find-valid-certification-path-to-requested-target-error/
Quick explanation: we had to add our intermediate and root certificates to the regular cacerts file that ships with Artifactory. We realized the best way to do this was to build a custom Docker image based on Artifactory:
Dockerfile:
FROM docker.bintray.io/jfrog/artifactory-pro:<your version or latest>
COPY cacerts_with_your_intermediatesAndRoots /etc/ssl/certs/java/cacerts
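To produce the cacerts_with_your_intermediatesAndRoots file, one way (a sketch; the container name, alias names and certificate file names are only examples) is to copy the stock cacerts out of the container and import your certificates with keytool:

# Copy the default cacerts out of a running Artifactory container
docker cp my_artifactory:/etc/ssl/certs/java/cacerts ./cacerts_with_your_intermediatesAndRoots

# Import the root and intermediate certificates (the default store password is "changeit")
keytool -importcert -noprompt -keystore cacerts_with_your_intermediatesAndRoots \
        -storepass changeit -alias my-root-ca -file root-ca.pem
keytool -importcert -noprompt -keystore cacerts_with_your_intermediatesAndRoots \
        -storepass changeit -alias my-intermediate-ca -file intermediate-ca.pem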
Then run this new image instead of the stock Artifactory image and it'll work.
Note that if you currently have a custom image, you should simply add the COPY line to your existing Dockerfile. Also, if you're not running Artifactory in Docker, just add your certificates to the file and restart.
You may also notice I'm using a different path than the one in the link above. That's because their path is a symbolic link and not the actual file.
In case you have a question feel free to contact me.
I have a personal ASP.NET Core project which scrapes data from the web using Selenium and Chromium and saves it in a local SQLite database.
I want to be able to run this app as a Docker image on my Synology NAS. I managed to create and run the Docker image (on my Mac); it displays data from the SQLite db correctly, but I get an error when trying to scrape:
The chromedriver file does not exist in the current directory or in a directory on the PATH environment variable.
From my very limited understanding of Docker in general, I gather that I need to add chromedriver inside the container somehow.
I've searched a lot, went through ~30 different examples, and still can't get this to work.
Any help is appreciated!
You need to build a new image based on the existing one, in which you add the chromedriver binary. In other words you need to extend your current image.
So create a directory containing a Dockerfile and the chromedriver binary.
Your Dockerfile should look like this:
FROM your_existing_image_name:version
COPY chromedriver desired_path_inside_container
Then open a terminal inside this directory and execute:
docker build -t your_existing_image_name:version++ .
After that you should be able to start a container from the newly created image.
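For instance, a concrete version of the above might look like this (the image names and tags are placeholders; /usr/local/bin is chosen because it is normally already on the PATH inside the container, which is what the error message complains about):

FROM my-aspnet-scraper:1.0
# Put chromedriver somewhere that is already on PATH so Selenium can find it
COPY chromedriver /usr/local/bin/chromedriver
RUN chmod +x /usr/local/bin/chromedriver

Then build it with: docker build -t my-aspnet-scraper:1.1 .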
Some notes:
I have assumed that your existing image is tagged with a version. If that's not the case, remove :version from the Dockerfile.
Similarly, remove :version++ from the build command. However, it is good practice to version your images.
I have not added an entrypoint, assuming that you do not need to change the existing one.