How to open a Jupyter TensorFlow environment in a specific folder - machine-learning

Recently I created another Jupyter notebook environment for TensorFlow, but when I try to open the TensorFlow environment in a specific folder (by typing the command jupyter notebook), it opens my default Anaconda environment instead of TensorFlow. Can anyone tell me how to open the Jupyter TensorFlow environment in a specific folder?
How to open Jupyter with a different environment in a specific folder
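
A common way to do this, assuming the TensorFlow environment is a conda environment with Jupyter installed inside it, is to activate that environment first and then point Jupyter at the folder; the environment name and path below are placeholders, not values from the question:
conda activate tensorflow
jupyter notebook --notebook-dir=D:\path\to\your\project
Replace tensorflow with the name of your environment (conda env list shows the names) and the path with the folder you want to open. If Jupyter is not installed inside that environment (conda install jupyter), the jupyter command falls back to the base Anaconda installation, which would explain the behaviour described above.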

Related

Jupyter-Lab doesn't show the path for the new internal HDD

I recently added an internal HDD to my PC, but I cannot locate its path from Jupyter-Lab to work with. It works fine through Spyder though.
I finally found a command that can open Jupyter Lab or Jupyter Notebook from any drive you would like.
To open Jupyter Lab in drive E, for example, you just need to run the following line in the Anaconda prompt:
jupyter lab --notebook-dir=E://

Docker not showing up as an option for Python interpreter on PyCharm

I have a dockerized project where the Python interpreter sits in a container. When I work with PyCharm (macOS), it keeps displaying a 'No Python interpreter configured for the project' message at the top. When I click on Add Interpreter, I get the following categories on the left-hand side:
Virtualenv Environment
Conda Environment
System Interpreter
Pipenv Environment
Seems like all tutorials assume that Docker will be in that list. I have the Docker plugin enabled in PyCharm. When I click on Build, Execution, Deployment -> Docker, I get the 'Connection Successful' message. I also have the Docker toolbar that I have set up with my docker compose file. It is aware that the images are up.
Is there something that I am missing?
As per Sergey K.'s answer, Docker-based interpreters are only available in the paid (Professional) edition of PyCharm.

No such file or directory: 'docker': 'docker' when running sagemaker studio in local mode

I am trying to train a PyTorch model on Amazon SageMaker Studio.
It works when I train on an EC2 instance with:
estimator = PyTorch(entry_point='train_script.py',
                    role=role,
                    sagemaker_session=sess,
                    train_instance_count=1,
                    train_instance_type='ml.c5.xlarge',
                    framework_version='1.4.0',
                    source_dir='.',
                    git_config=git_config,
                    )
estimator.fit({'stockdata': data_path})
and it works in local mode in a classic SageMaker notebook (non-Studio) with:
estimator = PyTorch(entry_point='train_script.py',
                    role=role,
                    train_instance_count=1,
                    train_instance_type='local',
                    framework_version='1.4.0',
                    source_dir='.',
                    git_config=git_config,
                    )
estimator.fit({'stockdata': data_path})
But when I use the same code (with train_instance_type='local') on SageMaker Studio, it doesn't work and I get the following error: No such file or directory: 'docker': 'docker'
I tried to install docker with pip install, but the docker command is not found when I use it in the terminal.
This indicates that there is a problem finding the Docker service.
By default, Docker is not installed in SageMaker Studio (confirming the GitHub ticket response).
Adding more information to an almost two-year-old question.
SageMaker Studio does not natively support local mode. Studio Apps are themselves docker containers and therefore they require privileged access if they were to be able to build and run docker containers.
As an alternative solution, you can create a remote docker host on an EC2 instance and setup docker on your Studio App. There is quite a bit of networking and package installation involved, but the solution will enable you to use full docker functionality. Additionally, as of version 2.80.0 of SageMaker Python SDK, it now supports local mode when you are using remote docker host.
The sdocker SageMaker Studio Docker CLI extension (see this repo) can simplify deploying the above solution in two simple steps (it only works for a Studio Domain in VPCOnly mode), and it has an easy-to-follow example here.
UPDATE:
There is now a UI extension (see repo) which can make the experience much smoother and easier to manage.
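With a Docker daemon reachable from the Studio app (for example via the remote Docker host approach above) and SageMaker Python SDK >= 2.80.0, local mode would look roughly like the sketch below under the renamed 2.x parameters. This is an untested adaptation of the question's snippet; role and data_path are placeholders standing in for the question's own values:
from sagemaker.pytorch import PyTorch

# Placeholders carried over from the question's snippets.
role = 'arn:aws:iam::123456789012:role/SageMakerRole'  # your execution role
data_path = 'file:///home/user/data'                   # local mode also accepts file:// inputs

estimator = PyTorch(entry_point='train_script.py',
                    role=role,
                    instance_count=1,        # renamed from train_instance_count in SDK 2.x
                    instance_type='local',   # renamed from train_instance_type; runs the training container locally
                    framework_version='1.4.0',
                    py_version='py3',        # explicit in the 2.x API
                    source_dir='.',
                    )
estimator.fit({'stockdata': data_path})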

Restore Tensorflow model on docker

I am using a face recognition model based on TensorFlow. On my local machine (Ubuntu 14.04) everything works.
When I deploy it using Docker, I get the following error:
DataLossError: Unable to open table file /data/model/model.ckpt-80000: Data loss: not an sstable (bad magic number): perhaps your file is in a different file format and you
need to use a different restore operator?
I am using the Python implementation of TensorFlow.
The model is in the old 11.* format (model.meta & model.ckpt-80000), while the TensorFlow Python version is 12.*. It shouldn't be a problem, as that's the configuration on my local machine, as well as on the machine I took the model from.
The versions of tensorflow, numpy and protobuf are identical on my machine and in the Docker container.
Any advice?
UPDATE
I created a small script that runs perfectly on my machine. Then I ran the same script on the deployed virtual machine (an AWS instance), but not on Docker. It also failed, with the same error.
The deployed machine is Ubuntu 16.04.
It seems I was dealing with a corrupted file.
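Since the root cause turned out to be a corrupted checkpoint, a quick way to confirm this kind of problem is to hash the file on the machine where restoring works and on the deployed machine, then compare the two values. This is only a sketch; the path is the one from the error message above:
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    # Stream the file in chunks so large checkpoints don't need to fit in memory.
    h = hashlib.sha256()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(chunk_size), b''):
            h.update(chunk)
    return h.hexdigest()

# Run this on both machines; differing hashes mean the file was corrupted in transit.
print(sha256_of('/data/model/model.ckpt-80000'))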

How to combine docker images to support python application

I have a Python application containing the following components:
1. Database
2. Python app on Anaconda
3. Linux OS
The idea is to dockerize these three components into isolated containers and then link them together at runtime.
It's clear to me how to link the database image with the Linux image, but how can I combine Anaconda and Linux? Isn't Anaconda supposed to be installed on a Linux system?
You will only have two containers. Both your database and your Python app presumably need a Linux OS of one flavor or another. In your Dockerfile you would start with something like FROM ubuntu to pull in a base image and then make your changes. Thanks to Docker's diff-based file system, your changes are layered on top of that base image. A rough sketch of running and linking the two containers is shown below.
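The commands below are only an illustration of running and linking the two containers; the image and network names are made up, and postgres stands in for whichever database you actually use:
# Build the Python app image from the Dockerfile described above.
docker build -t my-python-app .
# Put both containers on a shared network so the app can reach the database by name.
docker network create appnet
docker run -d --name db --network appnet postgres
docker run -d --name app --network appnet my-python-app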

Resources