I used the mlflow.tensorflow.autolog function to track my Keras deep learning model. However, the mlflow ui cannot find the /mlruns directory where the logged runs are stored.
My logged runs are stored in /model/mlruns, since the model's Python code is located in /model.
However, the mlflow ui tries to read logged runs from ./venv/Scripts/mlruns.
I have to manually copy the logged runs from /model/mlruns to /venv/Scripts/mlruns so that the mlflow ui can find them.
Is there any way to let the mlflow ui read runs stored in /model/mlruns, or to make the output directory for logged runs /venv/Scripts/mlruns?
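For reference, a minimal sketch of the setup being described, with the tracking location pinned explicitly; the paths are illustrative and the Keras code is omitted:

import mlflow
import mlflow.tensorflow

# By default MLflow writes runs to ./mlruns relative to the current working
# directory, which is why they land in /model/mlruns when the script runs from /model.
# Pinning an absolute tracking URI (the path here is illustrative) removes that dependency.
mlflow.set_tracking_uri("file:///model/mlruns")
mlflow.tensorflow.autolog()

# ... build and fit the Keras model as usual; runs are logged automatically ...

The UI can be pointed at the same location with mlflow ui --backend-store-uri file:///model/mlruns, so it no longer matters which directory it is started from.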
My end goal: I want to fetch data from a retail site on an hourly schedule to see if a specific product is back in stock or not.
I tried using XPath in Python to scrape the site myself, but I'm not too familiar with it, and why reinvent the wheel if someone has already built a scraper? In this case, Diggernaut has a GitHub repo:
https://github.com/Diggernaut/configs/tree/master/bananarepublic.gap.com
I'm using the above GitHub repo to try and run a pre-existing web scraper on the Banana Republic retail site. All that's included in the folder is a config.yml file. I don't even know where to start to try and run it... I am not familiar with .yml files at all and barely know my way around a terminal (I can do basic "ls", "cd", and "brew install"; otherwise, no idea).
Help! I have Docker and Git installed (not that I know how to use Docker). I have a Mac running version 10.13.6 (High Sierra).
I'm not sure why you're looking at using Docker for this, as the config.yml is designed for use on Diggernaut.com and not as part of a Docker container deployment. In fact, there is no Docker container for Diggernaut as far as I can see.
On the main GitHub configs page for Diggernaut they list the following instructions:
All configs can be used with the Diggernaut service to retrieve product information.
You need to create a free account at Diggernaut
Log in to your account
Create a project with any name and description you want
Get into your new project by clicking it and create a new digger with any name
Then you will see 3 options suggested to you; you need to use the one where you will use the meta-language
The config editor will open and you can simply copy and paste the config code and click the save button.
Switch the digger's mode from Debug to Active and then run your digger.
Wait for completion.
Download the data.
Schedule your runs if required.
I have a personal ASP.NET Core project which scrapes data from the web using Selenium and Chromium and saves it in a local SQLite database.
I want to be able to run this app as a Docker image on my Synology NAS. I managed to create and run the Docker image (on my Mac), and it displays data from the SQLite db correctly, but I get an error when trying to scrape:
The chromedriver file does not exist in the current directory or in a directory on the PATH environment variable.
From my very limited understanding of Docker in general, I gather that I need to add chromedriver inside the Docker image somehow.
I've searched a lot, went through ~30 different examples, and still can't get this to work.
Any help is appreciated!
You need to build a new image based on the existing one, in which you add the chromedriver binary. In other words you need to extend your current image.
So create a directory containing a Dockerfile and the chromedriver binary.
Your Dockerfile should look like this:
FROM your_existing_image_name:version
COPY chromedriver desired_path_inside_container
Then open a terminal inside this directory and execute:
docker build -t your_existing_image_name:version++ .
After that you should be able to start a container from the newly created image.
Some notes:
I have assumed that your existing image has been tagged with a version. If that is not the case, then remove :version from the Dockerfile.
Similarly, remove :version++ from the build command. However, it is good practice to include versioning in your images.
I have not added any entrypoint, assuming that you do not need to change the existing one.
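As a concrete illustration (the image name and the chromedriver path are assumptions; any directory already on the container's PATH works), the Dockerfile could be:

FROM your_existing_image_name:version
# Put chromedriver somewhere that is on PATH inside the container,
# and make sure it is executable.
COPY chromedriver /usr/local/bin/chromedriver
RUN chmod +x /usr/local/bin/chromedriver

Keep in mind that the chromedriver binary has to match the container's OS and architecture as well as the Chromium version installed in the image, not the one on your Mac.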
I'm a total newbie with Beanstalk. I'm developing a web application in which a sealed, black-box plugin is used. That plugin needs a physical path with full permissions to use as a cache.
Any solution?
You can use .ebextensions files in the main project to, for example, create a directory and change the access rights on it. It is not clear from your question how you install the plugin (e.g. is it a service that is loaded after the web application is installed, or is it part of the web application).
Execute a command in the .ebextensions file such as the one in:
How to grant permission to users for a directory using command line in Windows?
You'll find an introduction to container customization at
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-windows-ec2.html
Be careful about the format of the files (i.e. spaces, no tabs; it is best to edit them in a separate text editor). Experiment with simple commands first, so that you get the hang of how the commands are executed.
Note: The .ebextensions commands are executed for each deployment, so your script should check whether the directory already exists and only create it if it doesn't. Otherwise the execution will fail when you try to create a directory that already exists. In a second step you can add the permissions.
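As an illustration only (the directory, file name, and group are assumptions; adapt them to your plugin and environment), a file such as .ebextensions/01-plugin-cache.config in the deployed project could contain:

commands:
  01_create_cache_dir:
    command: if not exist "C:\plugin-cache" mkdir "C:\plugin-cache"
  02_grant_full_access:
    command: icacls "C:\plugin-cache" /grant "IIS_IUSRS:(OI)(CI)F"

The "if not exist" guard covers the repeated-deployment case mentioned above, and icacls grants the full access the plugin asks for.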
I have made a Twitter bot using Python that posts a tweet with the weather info for a specific city. I test it by running python file.py and then checking on my Twitter account that it works.
But how can I execute it periodically? Where can I upload my source code? Is there any free server that runs my file.py for free?
Assuming you're running GNU/Linux and your machine is online most of the time, you can configure your own crontab to run your script periodically (a sample entry is shown below).
Check: https://www.freebsd.org/doc/handbook/configtuning-cron.html
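For example, an entry like the following (interpreter and script paths are placeholders), added via crontab -e, runs the bot at the top of every hour:

# minute hour day-of-month month day-of-week command
0 * * * * /usr/bin/python3 /home/youruser/file.py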
If that is not the case, check out https://wiki.python.org/moin/FreeHosts for your purpose; the first one on the list should do the job (https://www.pythonanywhere.com/).
You can host your code in a GitHub repository, then run your .py file through a GitHub Actions workflow that runs on a schedule you set up in a .yml file in the .github/workflows folder.
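A minimal sketch of such a workflow, e.g. saved as .github/workflows/hourly.yml (the file name, Python version, and dependency step are assumptions to adapt):

name: hourly-run
on:
  schedule:
    - cron: '0 * * * *'   # every hour, on the hour (UTC)
jobs:
  run-script:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: '3.x'
      - run: pip install -r requirements.txt   # only if the script has dependencies
      - run: python file.py

Note that scheduled workflows run against the repository's default branch, and GitHub may delay scheduled runs during periods of high load.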
I am using the Ant Copy task to copy a zip file from one share to another:
<copy file="\\server_share\nightly\xyz08022012.zip" todir="Z:\output\Nightly"/>
When this gets executed, I get the exception below:
Failed to copy \\server_share\nightly\xyz08022012.zip to Z:\output\Nightly\xyz08022012.zip due to java.io.FileNotFoundException Z:\output\Nightly\xyz08022012.zip (The system cannot find the path specified.)
When I change the Z:\output\Nightly to C:\temp, the copy works
Z:\ points to a server share which is mounted on the server with different user credentials, and the drive is made persistent. This workaround exists because, when the build runs, the build user does not have access to the output share.
Hence I mapped the server share as a network drive with different credentials (a user who has read/write permission) and made this drive persistent.
This is on a Windows 7 machine where the build is running.
I tried doing a copy manually and that worked
I looked into "Ant Copy Task: Failed to copy due to java.io.FileNotFoundException" but it doesn't help me.