No module named 'twilio' - twilio

I am trying to use Twilio to send a text message. When I put the demo script in a text editor, save it as something.py, and run it from the terminal, I receive the text message.
However, when I copy and paste the same code into Spyder (Anaconda 3 environment, Python 3.8) I get the following error.
Here is the code:
from twilio.rest import Client

account_sid = 'xxx'
auth_token = 'xxx'
client = Client(account_sid, auth_token)

message = client.messages \
    .create(
        body="Test.",
        from_='x',
        to='xxx'
    )

print(message.sid)
And here is the error:
File "xxx", line 14, in <module>
import twilio
ModuleNotFoundError: No module named 'twilio'
I have installed it with pip, pip3, and conda.
I'm not really used to this Anaconda environment thing, but when I search for the package "twilio" the only thing that shows up is r-twilio. I imagine this has something to do with the problem, but I have no idea what. I tried creating a new environment with Python 2.7 and again only saw the r-twilio package.
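A quick check that usually narrows this down (generic Python, nothing Twilio-specific): print which interpreter Spyder is actually running, then install twilio into exactly that interpreter from a terminal.

import sys

# The interpreter Spyder is using; twilio must be installed into this environment.
print(sys.executable)

# Then, from a terminal, install into that exact interpreter
# (the path is whatever the print statement above showed):
#   /path/to/that/python -m pip install twilio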

Related

Django Channels / Daphne Internal Server Error: 'module' object is not callable

I'm trying to set up a production server for a Django Channels application I've been running locally with success. However, when starting Daphne and reaching the application via a browser, I get an Internal Server Error from Daphne. The console output is as follows:
2021-08-07 11:57:09,584 DEBUG HTTP b'GET' request for ['127.0.0.1', 33566]
2021-08-07 11:57:10,071 ERROR Exception inside application: 'module' object is not callable
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/dist-packages/asgiref/compatibility.py", line 34, in new_application
    instance = application(scope)
TypeError: 'module' object is not callable
2021-08-07 11:57:10,072 DEBUG HTTP 500 response started for ['127.0.0.1', 33566]
As the project/asgi.py might have some relevance here, it is below:
import os

import django
from django.core.asgi import get_asgi_application

os.environ['DJANGO_SETTINGS_MODULE'] = "proj.settings"
django.setup()

from channels.auth import AuthMiddlewareStack
from channels.routing import ProtocolTypeRouter, URLRouter
import app.routing

application = ProtocolTypeRouter({
    "http": get_asgi_application(),
    "websocket": AuthMiddlewareStack(
        URLRouter(
            app.routing.websocket_urlpatterns
        )
    ),
})
However, I've been poking around in the said asgi.py, and I have a feeling that at least the application variable has nothing to do with this, since that block can be commented out with no impact on the error message.
Relevant packages:
channels 3.0.4
daphne 3.0.2
Django 3.2.6
Any ideas?
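One diagnostic sketch (my assumption, not something confirmed in the thread): the 'module' object is not callable error means the ASGI object Daphne ended up with is a module rather than the application instance, so it can help to import the path by hand in the same environment Daphne runs in and confirm what comes back.

# Run with the same interpreter/virtualenv that Daphne uses; "proj.asgi" and
# "application" are the names from the asgi.py above.
import importlib

module = importlib.import_module("proj.asgi")
app = getattr(module, "application", None)
print(type(app), callable(app))  # expect a ProtocolTypeRouter and True

# Daphne is then started with the module:attribute form, e.g.:
#   daphne proj.asgi:application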

Install Custom Dependency for KFP Op

I'm trying to set up a simple Kubeflow pipeline, and I'm having trouble packaging up dependencies in a way that works for Kubeflow.
The code simply downloads a config file and parses it, then passes back the parsed configuration.
However, in order to parse the config file, it needs to have access to another internal python package.
I have a .tar.gz archive of the package hosted on a bucket in the same project, and added the URL of the package as a dependency, but I get an error message saying tarfile.ReadError: not a gzip file.
I know the file is good, so it's some intermediate issue with hosting on a bucket or the way kubeflow installs dependencies.
Here is a minimal example:
import os

from kfp import compiler
from kfp import dsl
from kfp.components import func_to_container_op
from google.protobuf import text_format
from google.cloud import storage
import training_reader


def get_training_config(working_bucket: str,
                        working_directoy: str,
                        config_file: str) -> training_reader.TrainEvalPipelineConfig:
    # download_file is a helper defined elsewhere in the project.
    download_file(working_bucket, os.path.join(working_directoy, config_file), "ssd.config")
    pipeline_config = training_reader.TrainEvalPipelineConfig()
    with open("ssd.config", 'r') as f:
        text_format.Merge(f.read(), pipeline_config)
    return pipeline_config


config_op_packages = ["https://storage.cloud.google.com/my_bucket/packages/training-reader-0.1.tar.gz",
                      "google-cloud-storage",
                      "protobuf"]

training_config_op = func_to_container_op(get_training_config,
                                          base_image="tensorflow/tensorflow:1.15.2-py3",
                                          packages_to_install=config_op_packages)


def output_config(config: training_reader.TrainEvalPipelineConfig) -> None:
    print(config)


output_config_op = func_to_container_op(output_config)


@dsl.pipeline(
    name='Post Training Processing',
    description='Building the post-processing pipeline'
)
def ssd_postprocessing_pipeline(
        working_bucket: str,
        working_directory: str,
        config_file: str):
    config = training_config_op(working_bucket, working_directory, config_file)
    output_config_op(config.output)


pipeline_name = ssd_postprocessing_pipeline.__name__ + '.zip'
compiler.Compiler().compile(ssd_postprocessing_pipeline, pipeline_name)
The https://storage.cloud.google.com/my_bucket/packages/training-reader-0.1.tar.gz URL requires authentication. Try to download it in Incognito mode and you'll see the login page instead of the file.
Changing the URL to https://storage.googleapis.com/my_bucket/packages/training-reader-0.1.tar.gz works for public objects, but your object is not public.
The only thing you can do (if you cannot make the package public) is to use the google.cloud.storage library or the gsutil program to download the file from the bucket, and then install it manually using subprocess.run([sys.executable, '-m', 'pip', 'install', ...]).
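A minimal sketch of that workaround (the bucket and object names are the placeholders from the question; download_to_filename and subprocess.run are standard google-cloud-storage / stdlib calls):

import subprocess
import sys

from google.cloud import storage


def install_private_package(bucket_name: str, blob_name: str) -> None:
    local_path = "/tmp/training_reader.tar.gz"
    # Download the sdist from the private bucket with the authenticated client ...
    storage.Client().bucket(bucket_name).blob(blob_name).download_to_filename(local_path)
    # ... then install it into the interpreter the component code runs in.
    subprocess.run([sys.executable, "-m", "pip", "install", local_path], check=True)


install_private_package("my_bucket", "packages/training-reader-0.1.tar.gz")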
Where are you downloading the data from?
What's the purpose of
pipeline_config = training_reader.TrainEvalPipelineConfig()
with open("ssd.config", 'r') as f:
    text_format.Merge(f.read(), pipeline_config)
return pipeline_config
Why not just do the following:
def get_training_config(
    working_bucket: str,
    working_directory: str,
    config_file: str,
    output_config_path: OutputFile('TrainEvalPipelineConfig'),
):
    download_file(working_bucket, os.path.join(working_directory, config_file), output_config_path)
Regarding "the way kubeflow installs dependencies": export your component to a loadable component.yaml and you'll see how KFP Lightweight components install dependencies:
training_config_op = func_to_container_op(
    get_training_config,
    base_image="tensorflow/tensorflow:1.15.2-py3",
    packages_to_install=config_op_packages,
    output_component_file='component.yaml',
)
P.S. Some small pieces of info:
@dsl.pipeline(
This decorator is not required unless you want to use the dsl-compile command-line program.
pipeline_name = ssd_postprocessing_pipeline.__name__ + '.zip'
compiler.Compiler().compile(ssd_postprocessing_pipeline, pipeline_name)
Did you know that you can just call kfp.Client(host=...).create_run_from_pipeline_func(ssd_postprocessing_pipeline, arguments={}) to run the pipeline right away?

ImportError: No module named cv2 when running Batch transform jobs in SageMaker

When I tried to run a Batch transform job in AWS SageMaker, I got the error below:
ImportError: No module named cv2
Please note that I am able to run import cv2 in the notebook instance; Jupyter runs it without problems there. But it fails in the endpoint during inference. I have tried the method below using "env", following the link AWS Sagemaker - Install External Library and Make it Persist,
but it still does not work.
Does anyone have a good way to solve it? Thanks!
My code is:
env = {
    'SAGEMAKER_REQUIREMENTS': 'requirements.txt',  # path relative to `source_dir` below.
}

image_embed_model = MXNetModel(model_data=model_data,
                               entry_point='sagemaker_entrypoint.py',
                               role=role,
                               source_dir='src',
                               env=env,
                               py_version='py3',
                               framework_version='1.6.0')

transformer = image_embed_model.transformer(instance_count=1,  # Please pay attention here!!!
                                            instance_type='ml.m4.xlarge',
                                            output_path=output_path,
                                            assemble_with='Line',
                                            accept='text/csv')

transformer.transform(batch_input,
                      content_type='text/csv',
                      split_type='Line',
                      input_filter='$[0:]',
                      join_source='Input',
                      wait=False)
You can follow https://github.com/aws/sagemaker-python-sdk/blob/master/doc/using_mxnet.rst#use-third-party-libraries to import third-party libraries into your batch transform instances. Make sure the requirements.txt file is saved under the right directory before packaging the model data.
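For example, a sketch of the layout the linked doc describes, for the MXNetModel above (source_dir='src'); opencv-python-headless is an assumption here, any pip-installable OpenCV build that provides cv2 should do:

src/
    sagemaker_entrypoint.py
    requirements.txt

# contents of src/requirements.txt
opencv-python-headless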

How to change the Python version in Azure Machine Learning sdk ContainerImage with CondaDependencies

I am trying to get my Faster R-CNN model into a Container Instance on ACI. For that I need my Docker image to have Python version 3.5.*. I specify that in my conda yaml file, but every time I spin an instance up and docker run -it *** /bin/bash into it, I see that it only has Python 3.6.7.
https://user-images.githubusercontent.com/21140767/50680590-82b20b80-1008-11e9-9bfe-4a0e71084ce0.png
How can I get my Docker image to have Python version 3.5.*? I already tried conda installing Python version 3.5.2, but that didn't work: eventually the image didn't have 3.5.2, only 3.6.7. (dfimage lets you see the Dockerfile from which the image was created: https://hub.docker.com/r/chenzj/dfimage/.)
https://user-images.githubusercontent.com/21140767/50680673-d6245980-1008-11e9-9d48-71a7c150d925.png
My yaml:
name: project_environment
dependencies:
  - python=3.5.2
  - pip:
    - matplotlib
    - opencv-python==3.4.3.18
    - azureml-core==1.0.6
    - numpy
    - cntk
    - cython
channels:
  - anaconda
Notebook cell:
from azureml.core.conda_dependencies import CondaDependencies

svmandss = CondaDependencies.create(python_version="3.5.2", pip_packages=[
    "matplotlib",
    "opencv-python==3.4.3.18",
    "azureml-core",
    "numpy",
    "cntk",
    "cython"])

svmandss.add_channel('anaconda')

with open("fasterrcnn.yml", "w") as f:
    f.write(svmandss.serialize_to_string())
Another notebook cell with the ContainerImage specification:
image_config = ContainerImage.image_configuration(execution_script="score_fasterrcnn.py",
                                                  runtime="python",
                                                  conda_file="./fasterrcnn.yml",
                                                  dependencies=listdir("utils"),
                                                  docker_file="./Dockerfile")

service = Webservice.deploy_from_model(workspace=ws,
                                       name='faster-rcnn',
                                       deployment_config=aciconfig,
                                       models=[Model(workspace=ws, name='Faster-RCNN')],
                                       image_config=image_config)

service.wait_for_deployment(show_output=True)
Note
For better readability see my GitHub issue: (https://github.com/Azure/MachineLearningNotebooks/issues/163).
Currently, the version of Python is fixed to what's in Azure ML's base image when deploying the web service. We're investigating removing this limitation in the future.
Since this is one of the top Google results when searching for "azureml python version", I'm posting the answer here. The documentation is not very clear when it comes to this, but the following will work:
from azureml.core import Workspace
from azureml.core.runconfig import RunConfiguration
from azureml.core.conda_dependencies import CondaDependencies

ws = Workspace.from_config()

# This is the important part
conda_dep = CondaDependencies(conda_dependencies_file_path="pipeline/environment.yml")
aml_run_config = RunConfiguration(conda_dependencies=conda_dep)

# Define compute target - must be preconfigured in the workspace
compute_target = ws.compute_targets['my-azureml-target']
aml_run_config.target = compute_target

from azureml.pipeline.steps import PythonScriptStep

script_source_dir = "./pipeline"
step_1_script = "test.py"

step_1 = PythonScriptStep(
    script_name=step_1_script,
    source_directory=script_source_dir,
    compute_target=compute_target,
    runconfig=aml_run_config,
    allow_reuse=True
)

from azureml.pipeline.core import Pipeline

# Build the pipeline
pipeline1 = Pipeline(workspace=ws, steps=[step_1])

from azureml.core import Experiment

# Submit the pipeline to be run
pipeline_run1 = Experiment(ws, 'Test-pipeline').submit(pipeline1)
pipeline_run1.wait_for_completion(show_output=True)
This assumes the following directory structure:
root/
    create_pipeline.py
    pipeline/
        test.py
        environment.yml
where create_pipeline.py is the file above, test.py is the script you would like to run and environment.yml is the conda environment file - including the python version.
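For reference, a minimal sketch of such an environment.yml; the package list is illustrative (azureml-defaults is an assumption, add whatever your script needs), the pinned python line is the part that matters:

name: pipeline_env
dependencies:
  - python=3.5.2
  - pip
  - pip:
    - azureml-defaults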
I was able to change the Python version by registering the environment in Azure ML Workspace:
from azureml.core.environment import Environment, Workspace
environment = Environment.from_conda_specification(name='myenv', file_path='environment.yml')
environment.python.user_managed_dependencies = False
workspace = Workspace.from_config()
environment = environment.register(workspace=workspace)
env_build = environment.build(workspace=workspace)
Then, configure the endpoint for publishing as follows:
from azureml.core.model import InferenceConfig

environment = Environment.get(workspace=workspace, name='myenv')

inference_config = InferenceConfig(
    entry_script='inference.py',
    source_directory='.',
    environment=environment
)
This is using Azure ML SDK 1.29.0. Perhaps this has already been fixed and the original method works as well, but I didn't test that.
EDIT:
This is no longer an issue for me. I found another way to get my code to work with Python 3.6.7.
It is, however, still an issue if you ask me: if in the future I do need Python 3.5, there is no solution as of now.
You can still post an answer if you would like.

Python 3: running a script as one user works, but as another it doesn't?

When I run this script as the jenkins user (Linux Mint) I get the error below, but when running it as my own user it works. The jenkins user was created by the Jenkins service. I have installed virtualenv.
import unittest
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait

DRIVER = None


def getOrCreateWebdriver():
    global DRIVER
    DRIVER = DRIVER or webdriver.Firefox()
    return DRIVER


class LoginTest(unittest.TestCase):
    def setUp(self):
        self.browser = getOrCreateWebdriver()

    def test_Loggin(self):
        browser = self.browser

    def tearDown(self):
        self.browser.close()


if __name__ == '__main__':
    unittest.main(verbosity=2)
When I run this script as the jenkins user, I get this error:
test_Loggin (__main__.LoginTest) ... ERROR
/usr/lib/python3.4/unittest/case.py:602: ResourceWarning: unclosed file <_io.BufferedWriter name='/dev/null'>
  outcome.errors.clear()
======================================================================
ERROR: test_Loggin (__main__.LoginTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "Test.py", line 16, in setUp
    self.browser = getOrCreateWebdriver()
  File "Test.py", line 10, in getOrCreateWebdriver
    DRIVER = DRIVER or webdriver.Firefox()
  File "/usr/local/lib/python3.4/dist-packages/selenium/webdriver/firefox/webdriver.py", line 64, in __init__
    self.binary, timeout),
  File "/usr/local/lib/python3.4/dist-packages/selenium/webdriver/firefox/extension_connection.py", line 51, in __init__
    self.binary.launch_browser(self.profile)
  File "/usr/local/lib/python3.4/dist-packages/selenium/webdriver/firefox/firefox_binary.py", line 70, in launch_browser
    self._wait_until_connectable()
  File "/usr/local/lib/python3.4/dist-packages/selenium/webdriver/firefox/firefox_binary.py", line 100, in _wait_until_connectable
    raise WebDriverException("The browser appears to have exited "
selenium.common.exceptions.WebDriverException: Message: The browser appears to have exited before we could connect. If you specified a log_file in the FirefoxBinary constructor, check it for details.
When you're logged in as yourself, you need to run echo $DISPLAY and note the display value it prints. Subsequently, when you log in as the jenkins service user, you need to run xhost + and then DISPLAY=<display-value>; export DISPLAY (where <display-value> is what you got from echo $DISPLAY; the angle brackets shouldn't be included in the command).
Hopefully this works. I don't have a similar environment to test with; I'm just mentioning what I recall from having done this quite some time back.
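A minimal sketch of the same idea done from inside the test script itself (":0" is an assumption, use whatever echo $DISPLAY printed in your own session; the jenkins user still needs access to the X server, e.g. via xhost):

import os

# Point the jenkins user's process at the existing X display before Firefox is launched.
os.environ.setdefault("DISPLAY", ":0")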
