Aurora: Unknown schema in docker parameters

I have an .aurora file that contains these Docker parameters:
jobs = [
  Service(
    cluster = 'mesos-fr',
    environment = 'devel',
    role = 'root',
    instances = 1,
    name = 'frontend_service',
    task = run_frontend_service,
    container = Docker(image = 'frontend_service',
                       parameters = [{'name': 'frontend_service'},
                                     {'publish': '{{thermos.ports[http]}}:3000'}])
  )
]
Got this error:
Error loading configuration: Unknown schema attribute publish
Is there a way to map a host port to a Docker container port?

EDIT: Mustache variable replacements might not help since they happen after the container comes up.
It looks like there's a problem with the form of your Docker parameters. A correct example is container=Docker(image='nginx', parameters=[Parameter(name='env', value='HTTP_PORT={{thermos.ports[http]}}')]).
There is a Parameter object with name and value fields; the value can be a string containing a mustache variable (such as a port), so you can put that in there.
This documentation (under Announcer Objects) might help too: http://aurora.apache.org/documentation/latest/reference/configuration/
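Applied to the original Service, the container block would look roughly like this (an untested sketch: the image and values are carried over from the question, and it is assumed here that these parameters map to Docker's --name and --publish run flags):

```python
container = Docker(
  image = 'frontend_service',
  parameters = [
    Parameter(name = 'name',    value = 'frontend_service'),
    Parameter(name = 'publish', value = '{{thermos.ports[http]}}:3000')
  ]
)
```

Per the EDIT in the question, the mustache port substitution may happen too late for --publish to pick it up, so this mapping may still need a fixed host port.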

Related

Is there a way to (auto) fetch Composer's environment information internally?

I have a Composer environment deployed on a GKE cluster, and I would like to retrieve this cluster's information via operators, for example, without hard-coding it or manually putting it in environment variables.
Relevant info I wish to get for now :
COMPOSER_SERVICE_ACCOUNT = "<acc_name>@<project_id>.iam.gserviceaccount.com"
COMPOSER_BUCKET = "<bucket_name>"
COMPOSER_PROJECT = "<project_id_where_composer_is_deployed>"
COMPOSER_PYTHON_VERSION = "3.8.12"
COMPOSER_VERSION = "<relevant_v>"
COMPOSER_UI_URL = "<...>"
AIRFLOW_VERSION = "2.3.4"
...
My intuition is to use gcloud via a BashOperator, but I was hoping there was a library capable of performing this task better.
You can use the built-in CloudComposerGetEnvironmentOperator operator:
get_env = CloudComposerGetEnvironmentOperator(
    task_id="get_env",
    project_id='project',
    region='europe-west1',
    environment_id='composer-env-name',
)
This operator returns all the environment information; it's equivalent to:
gcloud composer environments describe composer-env-name \
    --location europe-west1
You can access the resulting dict via XCom if needed.
If you don't want to hard-code arguments like the project ID and the Composer environment name, you can retrieve them from predefined Composer environment variables, for example:
PROJECT_ID = os.getenv("GCP_PROJECT")
COMPOSER_ENV_NAME = os.getenv("COMPOSER_ENVIRONMENT")
get_env = CloudComposerGetEnvironmentOperator(
    task_id="get_env",
    project_id=PROJECT_ID,
    region='europe-west1',
    environment_id=COMPOSER_ENV_NAME,
)
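Since os.getenv returns None when a variable is unset, it can help to fail fast before handing the values to the operator. A minimal guard in plain Python (no Airflow dependency; the variable names are the predefined Composer ones mentioned above):

```python
import os

def require_env(name: str) -> str:
    """Return the value of an environment variable, raising if it is unset."""
    value = os.getenv(name)
    if value is None:
        raise RuntimeError(f"Required environment variable {name} is not set")
    return value

# In the DAG file, for example:
# PROJECT_ID = require_env("GCP_PROJECT")
# COMPOSER_ENV_NAME = require_env("COMPOSER_ENVIRONMENT")
```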

Why is this route test failing?

I've been following along with the testdriven.io tutorial for setting up a FastAPI with Docker. The first test I've written using PyTest errored out with the following message:
TypeError: Settings(environment='dev', testing=True, database_url=AnyUrl('postgres://postgres:postgres@web-db:5432/web_test', scheme='postgres', user='*****', password='*****', host='web-db', host_type='int_domain', port='5432', path='/web_test')) is not a callable object.
Looking at the error, you'll notice that the Settings object has a strange form; in particular, its database_url parameter seems to be wrapping a bunch of other parameters like password, port, and path. However, as shown below, my Settings class takes a different form.
From config.py:
# ...imports
class Settings(BaseSettings):
    environment: str = os.getenv("ENVIRONMENT", "dev")
    testing: bool = os.getenv("TESTING", 0)
    database_url: AnyUrl = os.environ.get("DATABASE_URL")

@lru_cache()
def get_settings() -> BaseSettings:
    log.info("Loading config settings from the environment...")
    return Settings()
Then, in the conftest.py module, I've overridden the settings above with the following:
import os
import pytest
from fastapi.testclient import TestClient
from app.main import create_application
from app.config import get_settings, Settings
def get_settings_override():
    return Settings(testing=1, database_url=os.environ.get("DATABASE_TEST_URL"))

@pytest.fixture(scope="module")
def test_app():
    app = create_application()
    app.dependency_overrides[get_settings] = get_settings_override()
    with TestClient(app) as test_client:
        yield test_client
As for the offending test itself, that looks like the following:
def test_ping(test_app):
    response = test_app.get("/ping")
    assert response.status_code == 200
    assert response.json() == {"environment": "dev", "ping": "pong", "testing": True}
The container is successfully running on my localhost without issue; this leads me to believe that the issue is wholly related to how I've set up the test and its associated config. However, the structure of the error and how database_url is wrapping up all these key-value pairs from docker-compose.yml gives me the sense that my syntax error could be elsewhere.
At this juncture, I'm not sure if the issue has something to do with how I set up test_ping.py, my construction of the settings_override, with the format of my docker-compose.yml file, or something else altogether.
So far, I've tried to fix this issue by reading up on the use of dependency overrides in FastApi, noodling with my indentation in the docker-compose, changing the TestClient from one provided by starlette to that provided by FastAPI, and manually entering testing mode.
Something I noticed when attempting to manually go into testing mode was that the container doesn't want to follow suit. I've tried setting testing to 1 in docker-compose.yml, and testing: bool = True in config.Settings.
I'm new to all of the relevant tech here and bamboozled. What is causing this discrepancy with my test? Any and all insight would be greatly appreciated. If you need to see any other files, or are interested in the package structure, just let me know. Many thanks.
Any dependency override through app.dependency_overrides should map the function being overridden (the key) to the function that should be used instead. In your case you've picked the correct key, but you're assigning the result of calling the override rather than the override function itself:
app.dependency_overrides[get_settings] = get_settings_override()
This should instead be:
app.dependency_overrides[get_settings] = get_settings_override
The error message shows that FastAPI tried to call your Settings instance as a function, which hints that it expected a function instead.
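The distinction can be reproduced without FastAPI at all. A minimal sketch (the class and function names mirror the question, but this is plain Python, not FastAPI):

```python
class Settings:
    def __init__(self, testing=False):
        self.testing = testing

def get_settings():
    return Settings()

def get_settings_override():
    return Settings(testing=True)

overrides = {}

# Wrong: stores a Settings *instance*; trying to call it raises TypeError,
# just like "... is not a callable object" in the traceback above.
overrides[get_settings] = get_settings_override()
try:
    overrides[get_settings]()
except TypeError:
    pass

# Right: stores the function itself, so it can be called later.
overrides[get_settings] = get_settings_override
assert overrides[get_settings]().testing is True
```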

Spring Cloud Data Flow java DSL: container properties

I have an SCDF local deployment where I want to deploy a custom docker-based sink. This sink internally consists of a Java part that acts as a translation wrapper between SCDF and another bit of non-Java code.
I need to be able to control:
1. Name of the container
2. Number of instances
3. Volumes mounted to the container
4. Ports mapped to the container
5. Environment variables passed to the non-Java code
Looking at LocalAppDeployer and DockerCommandBuilder it seems I should be able to do (1) and (2) with something like
HashMap<String, String> params = new HashMap<>();
params.put(AppDeployer.COUNT_PROPERTY_KEY, "2");
params.put(AppDeployer.GROUP_PROPERTY_KEY, "foo");
Stream.builder(scdf)
      .name("mystream")
      .definition("file|bar")
      .create()
      .deploy(params);
which I expect to give me two containers: foo-bar-1 and foo-bar-2.
My question is: how can I achieve (3), (4) and (5)?
For any future searches:
TL;DR: use deployer.<appName>.local.docker.volume-mounts and deployer.<appName>.local.docker.port-mappings
e.g.:
Map<String, String> properties = new HashMap<>();
properties.put(String.format("deployer.%s.local.docker.volume-mounts", "myApp"), "/tmp/foo:/bar");
properties.put(String.format("deployer.%s.local.docker.port-mappings", "myApp"), "9090:80");
Stream.builder(scdf).name("myStream").definition("time|log").create().deploy(properties);
See the PR. Thanks to the SCDF team for their help.

Using Environment Variables to Configure a Sails.js app for CircleCI

I've accessed environment variables in Node apps before with process.env.VARIABLE_NAME, but I was curious to try Sails' alternative solution. It seems like I should be able to put a dummy value (or nothing) in the /config/foo.js file, then overwrite it with a carefully named environment variable. I modeled my setup on this example.
Unfortunately, CircleCI seems to be ignoring the environment variable and using the dummy value instead. Have I set something up incorrectly? FYI, I'm using /config/local.js (no environment variables) to overwrite the password on my local machine, and everything works fine...
/config/datastores.js:
module.exports.datastores = {
  postgresqlTestDb: {
    adapter: 'sails-postgresql',
    host: 'test-postgres.myhost.com',
    user: 'postgres',
    password: 'PASSWORD',
    database: 'my-db',
  },
};
Environment Variables in CircleCI:
sails_datastores__postgresqlTestDb__password = theRealPassword
Error in CircleCI:
1) "before all" hook:
Error: done() invoked with non-Error: {"error":{"name":"error","length":104,"severity":"FATAL","code":"28P01","file":"auth.c","line":"307","routine":"auth_failed"},"meta":{"adapter":"sails-postgresql","host":"test-postgres.myhost.com","user":"postgres","password":"PASSWORD","database":"","identity":"postgresqlTestDb","url":"postgres://postgres:PASSWORD@test-postgres.myhost.com:5432/my-db"}}
at sails.lift (test/lifecycle.test.js:46:23)
...
The Important part of the error:
"url":"postgres://postgres:PASSWORD#test-postgres.myhost.com:5432/my-db"
I want to connect to postgres://postgres:theRealPassword@test-postgres.myhost.com:5432/my-db instead...
I just set an ENV variable for the entire connection URL. It looks something like this:
sails_datastores__default__url: postgresql://user:password@host:port/database
I think in your example you are missing the "default" part
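For context, Sails' convention splits the env var name on double underscores to build the nested config path. A rough sketch of the idea in plain Python (an illustration of the naming scheme only, not Sails' actual implementation):

```python
def env_to_config_path(var_name, prefix="sails_"):
    """Split a Sails-style env var name into its nested config keys."""
    if not var_name.startswith(prefix):
        raise ValueError(f"{var_name!r} does not start with {prefix!r}")
    return var_name[len(prefix):].split("__")

# env_to_config_path("sails_datastores__postgresqlTestDb__password")
# -> ['datastores', 'postgresqlTestDb', 'password']
```

So sails_datastores__postgresqlTestDb__password targets config.datastores.postgresqlTestDb.password, which is why the variable name must match the config keys exactly.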

Run a remote Ruby script (pass parameters as well) using SSH

I want to connect to a remote host and run a Ruby script there. Following is the code that I am using:
ssh = Net::SSH.start(host, user)
args = "some argument" //can be any data type, list, string, anything
results = conn.exec!('ruby runfile.rb args')
It's not passing args to the file in this case. I have also tried using double quotes instead of single quotes. How do I send the parameters as well?
The name of the connection variable must be consistent (ssh ≠ conn).
You need to send the content of args instead of the literal string "args": double quotes are needed for the #{...} syntax to work, or use 'ruby runfile.rb ' + args if you prefer.
Use # instead of // to comment Ruby code.
Use .shellescape to harden against unwanted (accidental or malicious) effects in the remote shell.
This code does work:
require 'net/ssh'
require 'shellwords'
ssh = Net::SSH.start(host, user)
args = "some argument".shellescape #can be any data type, list, string, anything
results = ssh.exec!("ruby runfile.rb #{args}")
puts results
