OpenStack SDK to resize an instance

I have a requirement to read an input YAML file and resize servers to the specified configuration (VCPU, disk, memory, etc.). Note that the server names already exist in the environment. I have automated this in Python by invoking the CLI command. Reference link for the command:
https://docs.openstack.org/nova/latest/user/resize.html
But the requirement is to implement this via the SDK. Please let me know how to implement this logic in Python code by invoking the OpenStack SDK.
Operating System: Ubuntu 16.04
Input Yaml:
Servername1: test1
VCPU: 2
Disk: 4000
Memory: 200
Servername2: test2
VCPU: 1
Disk: 1000
Memory: 100

According to the SDK API documentation, the Compute proxy class has methods called resize_server, confirm_server_resize and revert_server_resize.
The sequence would be:
Read your YAML file.
Find the existing server that you want to resize.
Look up a flavor[1] with the specs that you need (VCPUs, disk, memory, etc.).
Check that the server doesn't already have that flavor.
Resize the server.
Check that the server is working correctly. How you do that will depend on the context, but if you skip this step and "confirm" anyway, there is a risk that you will lose the existing server.
Either confirm the resize or revert it.
For more information on how to obtain and make calls on the Compute object, please see "Using OpenStack Compute". A minimal sketch of this sequence is given below the footnote.
[1] You could also synthesize flavors on the fly, but that is liable to give you a flavor-management issue in the long term.
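
Putting the steps together, here is a minimal sketch using the openstacksdk package. It assumes a cloud entry named "mycloud" in clouds.yaml and an input file named servers.yaml restructured as a list of per-server mappings (the flat file shown in the question repeats keys such as VCPU, which a YAML parser would not keep apart); the health check is left as a placeholder.

# Minimal sketch, not production code. Names such as "mycloud" and "servers.yaml"
# are assumptions; adjust them to your environment.
import yaml
import openstack

conn = openstack.connect(cloud="mycloud")

with open("servers.yaml") as f:
    # Expected shape: [{"name": "test1", "vcpu": 2, "disk": 4000, "memory": 200}, ...]
    entries = yaml.safe_load(f)

for entry in entries:
    server = conn.compute.find_server(entry["name"], ignore_missing=False)

    # Look up an existing flavor matching the requested specs (RAM in MB, disk in GB).
    flavor = next(
        (fl for fl in conn.compute.flavors()
         if fl.vcpus == entry["vcpu"] and fl.disk == entry["disk"] and fl.ram == entry["memory"]),
        None,
    )
    if flavor is None:
        raise RuntimeError("no existing flavor matches %s" % entry)

    # Step 4 (checking whether the server already has this flavor) is omitted here;
    # compare server.flavor against flavor.name or flavor.id as appropriate for your
    # API microversion.

    conn.compute.resize_server(server, flavor)

    # Wait until the resize reaches VERIFY_RESIZE before confirming.
    server = conn.compute.wait_for_server(server, status="VERIFY_RESIZE", wait=600)

    # Run your own health check here, then either confirm or revert.
    conn.compute.confirm_server_resize(server)
    # conn.compute.revert_server_resize(server)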

Related

How to make container installation behave like host machine installation

I'm working with the following:
Docker for Windows v20.10.11
Docker running in Windows container mode
mcr.microsoft.com/windows:1903 base image
Proprietary application installed on top of this base image
Each year we create a Docker image with the latest version of our company's software. However, this year's version behaves differently: the host-machine installation runs fine, but the containerized installation fails to run in certain situations. I can start the application as a simple EXE, for example using the docker run command; the app starts and shows up in "tasklist". However, I can't start the app via the COM API, which is a critical requirement. The problem appears to be COM-related. Normally we can create COM objects for our software just like for any other application. For example, IE returns a COM object just fine:
Creating these objects for our application works outside containers. However inside the container, our latest installation gives this error:
Access permissions appear to be OK. I tried a couple of tests to prove this. First, I can install other software such as MS Word into a container and create COM objects for it:
Second, I tried retrieving and modifying the application's DACL in PowerShell.
Changing access masks or trustees can cause an Access Denied error:
This also appears to confirm that the access permissions were OK by default.
Next, I made sure COM is aware of the application. This appears to be fine; I get the same result on the host machine and in the container when running this PS script:
gci HKLM:\Software\Classes -ea 0 | ? {$_.PSChildName -match '^\w+\.\w+$' -and
(gp "$($_.PSPath)\CLSID" -ea 0)} | ft PSChildName
The application shows up just like any other. The details show up fine when querying by AppID. LocalServer32 points to the correct EXE:
Some other things I tried:
Querying registry keys. There are 7 keys created when installing our software. These appear identical on host machine install and container install.
Even though permissions appear fine, I still tried logging into the container as alternate users. For example, "nt authority\system" is another virtual admin user. I also changed the password of the "builtin\administrator" user to enable logging in with that one. Lastly, I tried creating new users entirely and adding them to the Administrators user group. All these attempts gave the same errors as "builtin\containeradministrator" (the default user).
A minor check was ensuring CMD.exe / Powershell is running as x64:
Re-registering the DLLs associated with the installation using regsvr32.
Starting from different base images. https://learn.microsoft.com/en-us/virtualization/windowscontainers/manage-containers/container-base-images. The full Win Server base image behaves exactly the same way regarding errors. The smaller Win Server Core base image is even more problematic, as I can't even start the app's EXE manually using that base. Lastly I tried other tags of the full Windows base image such as 20H2 and 2004. Same result from those. Multiarch or x64 makes no difference.
Included the "Ogawa hack" which was historically needed to make MS Office apps function correctly with COM: https://stackoverflow.com/a/1680214/7991646. It could be necessary for other COM apps too, but didn't help with my specific installation.
Is there anything else I can do to diagnose or solve this COM issue?
There are several things to consider:
The Considerations for server-side Automation of Office article states the following:
Microsoft does not currently recommend, and does not support, Automation of Microsoft Office applications from any unattended, non-interactive client application or component (including ASP, ASP.NET, DCOM, and NT Services), because Office may exhibit unstable behavior and/or deadlock when Office is run in this environment.
If you are building a solution that runs in a server-side context, you should try to use components that have been made safe for unattended execution. Or, you should try to find alternatives that allow at least part of the code to run client-side. If you use an Office application from a server-side solution, the application will lack many of the necessary capabilities to run successfully. Additionally, you will be taking risks with the stability of your overall solution.
The When CoCreateInstance returns 0x80080005 (CO_E_SERVER_EXEC_FAILURE) page describes possible reasons.
If many COM+ applications run under different user accounts that are specified in the This User property, the computer cannot allocate memory to create a new desktop heap for the new user. Therefore, the process cannot start. See Error when you start many COM+ applications: Error code 80080005 -- server execution failed for more information.
Finally, you may find a similar thread here helpful, see Server execution failed (Exception from HRESULT: 0x80080005 (CO_E_SERVER_EXEC_FAILURE)).

How to enable caching in ArangoDB via Docker or arangojs?

I would like to enable caching in ArangoDB automatically when my app starts.
I'm using docker-compose to start the whole thing, but apparently there's no simple parameter to enable caching in the official ArangoDB image.
According to the docs, all the files in /docker-entrypoint-initdb.d/ are executed at container start. So I added a .js file with this code:
require('@arangodb/aql/cache').properties({mode: 'on'});
It is indeed executed but caching doesn't seem to be enabled (from what I see with arangosh within the container).
My app is a JS app using arangojs, so if I can do it this way, I'd be happy too.
Thanks!
According to the performance and server config docs, you can enable caching in several ways.
Your method of adding require("@arangodb/aql/cache").properties({ mode: "on" }); to a .js file in the /docker-entrypoint-initdb.d/ directory should work, but keep an eye on the logs. You may need to redirect log output with a different driver (journald, syslog, etc.) to see what's going on. Make sure to run the command via arangosh to see if it works.
If that's a bust, you might want to see if there is a way to pass parameters at runtime (such as --query.cache-mode on). Unfortunately, I don't use Docker Compose, so I can't give you direct advice here, but try something like -e QUERY.CACHE-MODE=ON
If there isn't a way to pass params, then you could modify the config file: /etc/arangodb3/arangod.conf.
And don't forget about the REST API methods for system management. You can view and alter the AQL configuration in the Web UI by clicking Support -> Rest API -> AQL.
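For example, the AQL query results cache can be toggled over the HTTP API at /_api/query-cache/properties. A rough sketch in Python (assuming the server is reachable at localhost:8529 with root credentials; adjust the host and auth to your setup):

import requests

BASE = "http://localhost:8529"
AUTH = ("root", "password")  # assumed credentials

# Inspect the current cache settings.
print(requests.get(BASE + "/_api/query-cache/properties", auth=AUTH).json())

# Switch the cache mode ("on", "off" or "demand").
resp = requests.put(BASE + "/_api/query-cache/properties", json={"mode": "on"}, auth=AUTH)
resp.raise_for_status()
print(resp.json())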
One thing to keep in mind - I'm not sure if the caching settings are global or tied to a specific database. View the configuration on multiple databases (including _system) to test the settings.

Export environment variables to JupyterHub users, without using Docker?

JupyterHub has various authentication methods, and the one I am using is the PAMAuthenticator, which basically means you log into the JupyterHub with your Linux userid and password.
However, environment variables that I create like this (or, for that matter, those set in my .bashrc) before running JupyterHub do not get set within the user's JupyterLab session. As you can see, they're available in the console, with or without the pipenv, and within Python itself via os.getenv().
However in JupyterHub's spawned JupyterLab for my user (me):
This environment variable myname is not available even if I export it in a bash session from within JupyterLab as follows:
Now the documentation says I can customize user environments using a Docker container for each user, but this seems unnecessarily heavyweight. Is there an easier way of doing this?
If not, what is the easiest way to do this via Docker?
In the jupyterhub_config.py file, you may want to add the environment variables you need using the c.Spawner.env_keep setting:
c.Spawner.env_keep = ['PATH', 'PYTHONPATH', 'CONDA_ROOT', 'CONDA_DEFAULT_ENV', 'VIRTUAL_ENV', 'LANG', 'LC_ALL', 'JUPYTERHUB_SINGLEUSER_APP']
Additional information on all the different configuration options is available at https://jupyterhub.readthedocs.io/en/stable/reference/config-reference.html
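If the variables you need are not already present in JupyterHub's own environment, another Spawner setting to look at is c.Spawner.environment, which sets variables explicitly for the spawned single-user servers. A small sketch for jupyterhub_config.py (the variable names and values here are only examples):

import os

# Set variables explicitly for every spawned single-user server.
c.Spawner.environment = {
    'MYNAME': os.environ.get('MYNAME', 'default-value'),  # example variable name
    'MY_API_URL': 'https://example.internal/api',  # example value
}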
Unfortunately, unlike a single-user Jupyter notebook/lab, JupyterHub is a multi-user environment, and customization and security settings are not a clear-cut area. It provides some default settings and a ton of ways to customize usage, but only a handful of examples. You need to dig into the documentation, check for similarities to your use case, and make adjustments in a trial-and-error process.
Fortunately, besides the configuration files used to configure JupyterHub and the Jupyter notebook servers, namely jupyter_notebook_config.py and jupyterhub_config.py, we can use environment-reading packages per user. This flexibility comes from the use of a programming-language kernel.
But this requires being able to install new packages, having them already installed, or asking the admins to install them on the current kernel.
Here is one way to use customized environment variables in the current workspace.
Create a new file and give it a clear name to show it is an environment file. You can have as many different files as you need. Most production setups use the .env name, but Jupyter will not list dotfiles in its file view, so avoid that. Also, be careful with quotes: sometimes you need them and sometimes you get errors, depending on which library you use and where you use it.
test.env:
NAME="My Name"
TEST=This is test 42
Install and use your preferred environment-file reader, then read from the file(s) you want. You can run pip install in the notebook when needed; just use it cautiously.
test.ipynb
# Package already installed, so installation is commented out.
# %pip install python-environ
import environ

env = environ.Env()
# Read variables from test.env (or whatever file ENV_PATH points to).
env.read_env(env.str('ENV_PATH', 'test.env'))

NAME = env("NAME")
TEST = env("TEST")
print(NAME, ":", TEST)
If you are an admin of the hub, beware that some libraries' use cases may break your restrictions, so keep an eye on what permissions you give your users. If you use custom Docker images, though, there should not be any leakage, since they are already designed to be isolated from your system.

Google Colab: Can we restore all the data even after the runtime disconnects?

I am a new learner. I recently started learning Google Colab. Whenever I close my Colab notebook and reopen it, all the code starts executing from the beginning. Is there any way to restore the local variables, code outputs and all the previous program data?
It is really time-consuming to load the dataset every time.
Unfortunately, no (as of this answer's posting date): you cannot restore a previous runtime. Everything restarts in a new runtime session on a different virtual machine. Notebooks run by connecting to virtual machines that have maximum lifetimes of up to 12 hours, and Colab Pro says it provides around 24 hours of runtime. This is necessary for Colab to be able to offer computational resources for free.
However, you can apply good practices to help you work faster. Some of them are:
Save your datasets and trained models on your Google Drive; mount it and use it as required. Only the runtime's local variables and program data for that session are destroyed.
Use pre-trained models to implement Transfer Learning to save training time.
Use "Connect to hosted runtime" and "Manage Sessions" to use the free resources effectively.
Sadly, it's just part of the workflow with Colab, but there are ways to make life easier. To persist data you'd want to connect to Google Drive and pull/save files from there:
from google.colab import drive
drive.mount('/content/drive')
Then follow the instructions: click the link and copy/paste the auth token.
After connecting to Google Drive, copy files stored on the Drive using the !cp command. For example, these commands copy files from the Drive to the local notebook environment:
!cp "/content/drive/My Drive/Colab Notebooks/trainer.py" "trainer.py"
!cp "/content/drive/My Drive/Colab Notebooks/data.pkl" "data.pkl"
To copy files and folders from the notebook environment to the Drive, use the same !cp command:
!cp "model" "/content/drive/My Drive/Colab Notebooks/my-fancy-model"
Assuming you want to see previous outputs of the code: you could use File > Save and pin revision to save the revision history, including a revision name. That way it will store previous outputs as well as code changes. Going to File > Revision history will then show the difference between two versions, and clicking the three dots on the right side gives options to restore, open, or name a version.

RabbitMQ Erlang client build failed due to file path problems?

I have been able to build the RabbitMQ server on Ubuntu Linux. It came prepackaged, and on running make it is able to start as a service. When I got the client source, make failed because it appeared to need a folder called ./deps/rabbitmq-server. Analysing the code, I found that the author of the client accesses the same header files as are found in the server, using include_lib("path to rabbit.hrl etc.") in a header file called "amqp_client.hrl". I then decided to add rabbitmq_server to the lib dir of Erlang so that its paths are automatically added on start-up of the VM, but this still did not help. There is also another folder the client references, called "rabbit_common", an include folder it assumes contains all the .hrl files. Please assist me in building both the client and the server on my Ubuntu server, for testing.
Also, if anyone has used the RabbitMQ server for IMs, please provide some benchmarks and/or your findings on its throughput, speed and number of users. How does it compare to ejabberd? How can one create AJAX/jQuery/JavaScript clients for web functionality?
thanks
I hope you have made some progress as far as RabbitMQ and ejabberd are concerned.
Below is a link to an interesting discussion that might be of help.
http://old.nabble.com/AMPQ-vs-XMPP-and-RabbitMQ-vs-ejabberd-td17587109.html

Resources