So I have defined two connections, aws_default and google_cloud_default, in a JSON file like so:
{
  "aws_default": {
    "conn_type": "s3",
    "host": null,
    "login": "sample_login",
    "password": "sample_secret_key",
    "schema": null,
    "port": null,
    "extra": null
  },
  "google_cloud_default": {
    "conn_type": "google_cloud_platform",
    "project_id": "sample-proj-id123",
    "keyfile_path": null,
    "keyfile_json": {sample_json},
    "scopes": "sample_scope",
    "number_of_retries": 5
  }
}
I have a local Airflow server containerized in Docker. What I am trying to do is import the connections from this file so that I don't need to define them in the Airflow UI.
I have an entrypoint.sh file which runs every time the Airflow container starts.
I have included this line in that shell script: airflow connections import connections.json
In my docker-compose.yaml file, I have added a bind volume like so:
- type: bind
  source: ${HOME}/connections.json
  target: /usr/local/airflow/connections.json
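For completeness, here is roughly what my entrypoint.sh does (a sketch; the exec line stands in for whatever the entrypoint normally starts), with the import pointed at the absolute bind target so it doesn't depend on the working directory:

#!/usr/bin/env bash
set -e

# import connections from the bind-mounted file, using its absolute path
# so the command does not depend on the current working directory
airflow connections import /usr/local/airflow/connections.json

# hand off to the image's normal startup command (placeholder)
exec "$@"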
However, when I run my DAG locally, which includes hooks that use these connections, I receive errors, e.g.:
The conn_id `google_cloud_default` isn't defined
So I'm not too sure how to proceed. I was reading about Airflow's local filesystem secrets backend here, and it mentions this code chunk to establish the file paths:
[secrets]
backend = airflow.secrets.local_filesystem.LocalFilesystemBackend
backend_kwargs = {"variables_file_path": "/files/var.json", "connections_file_path": "/files/conn.json"}
But as I check my airflow.cfg, I can't find this code chunk. Am I supposed to add it to airflow.cfg?
Could use some guidance here... I know the solution is simple, but I'm new to setting up a connection like this. Thanks!
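From what I can tell, Airflow also maps environment variables of the form AIRFLOW__{SECTION}__{KEY} onto config options, so presumably the same [secrets] settings could go into docker-compose instead of editing airflow.cfg inside the container (a sketch; the connections path assumes my bind target from earlier):

environment:
  AIRFLOW__SECRETS__BACKEND: airflow.secrets.local_filesystem.LocalFilesystemBackend
  AIRFLOW__SECRETS__BACKEND_KWARGS: '{"connections_file_path": "/usr/local/airflow/connections.json"}'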
I'm trying to create a Swagger UI configuration to show several of my APIs. They are not hosted publicly; the definition files are in my local file system. I'm using Swagger UI with Docker. I run it with the following command:
docker run -p 8080:8080 -v $(pwd)/_build:/spec swaggerapi/swagger-ui
The _build directory is where I have my YAML spec files. This is the swagger-config.yaml config file:
urls:
- /spec/openapi2.yaml
- /spec/openapi.yaml
plugins:
- topbar
I have also tried:
urls:
  - url: /spec/openapi2.yaml
    name: API1
  - url: /spec/openapi.yaml
    name: API2
plugins:
  - topbar
After running it, all I see is Swagger UI's default (Petstore) example API, so I suppose there's an error in my configuration. I have tried several things, but they have not worked, and I can't seem to find any good documentation about the swagger-config.yaml configuration file.
Any idea how to make it work with several APIs?
According to the comments in the Swagger UI issue tracker, the Docker version needs the config file to be in JSON format rather than YAML.
Try using this swagger-config.json:
{
  "urls": [
    {
      "url": "/spec/openapi2.yaml",
      "name": "API1"
    },
    {
      "url": "/spec/openapi.yaml",
      "name": "API2"
    }
  ],
  "plugins": [
    "topbar"
  ]
}
Also add -e CONFIG_URL=/path/to/swagger-config.json to the docker run command.
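Putting it together, assuming you drop swagger-config.json into the same _build directory that is mounted at /spec, the command would look something like:

docker run -p 8080:8080 \
  -v $(pwd)/_build:/spec \
  -e CONFIG_URL=/spec/swagger-config.json \
  swaggerapi/swagger-ui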
I am new to Terraform so please be kind.
During the build process, Terraform pushes the Docker image to AWS ECR with a new name on every build.
As the image name is different, we need to create a new task definition for each new build.
Is there a way to handle this issue in Terraform?
Any help is appreciated.
If you are OK with replacing the task definition each time with the new image, then you can update the image name used in the task definition and Terraform will handle the update for you.
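For example, a minimal sketch (the variable name and the jsonencode layout are illustrative, not from your setup): pass the image reference in as a variable, so each build only changes one value and Terraform registers a new revision of the task definition on apply:

variable "image" {
  type        = string
  description = "Full image reference pushed by the build"
}

resource "aws_ecs_task_definition" "service" {
  family = "service"

  # jsonencode keeps the container definition in HCL instead of a separate file
  container_definitions = jsonencode([
    {
      name      = "a-service"
      image     = var.image # changes on every build
      cpu       = 10
      memory    = 512
      essential = true
    }
  ])
}

Each build then runs something like terraform apply -var="image=<new image reference>".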
If you need to generate a new task definition each time and leave the old ones in place, then read on.
If you do not need to keep the task definition in the Terraform state, then you could remove it after deployment so that next time a new one will be created.
The state rm command removes a resource from the state:
terraform state rm aws_ecs_task_definition.service
If you do need to keep each task definition in the Terraform state, you could use the for_each meta-argument to generate a collection of resources from a set of files.
As an example, you could save the container definitions of each task definition to a JSON file within a folder. Each file would look something like this:
[
  {
    "name": "a-service",
    "image": "my-image",
    "cpu": 10,
    "memory": 512,
    "essential": true,
    "portMappings": [
      {
        "containerPort": 80,
        "hostPort": 80
      }
    ]
  }
]
Use the fileset function to list files in the folder and generate a new resource for each file using for_each:
resource "aws_ecs_task_definition" "service" {
family = "service"
for_each = fileset(path.module, "task-definitions/*.json")
container_definitions = file("${path.module}/${each.key}")
}
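With this pattern (assuming files such as task-definitions/a.json and task-definitions/b.json), each file produces its own resource instance in state, addressable as:

aws_ecs_task_definition.service["task-definitions/a.json"]
aws_ecs_task_definition.service["task-definitions/b.json"]

so previously created definitions stay managed while each new file adds a new one.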
I have an Elasticsearch deployment on Kubernetes (AKS). I'm using the official Elastic Docker images for deployments. Logs are being stored on a persistent Azure Disk. How can I migrate some of these logs to another cluster with a similar setup? Only those logs that match a filter condition based on the datetime of the logs need to be migrated.
Please use the Reindex API to achieve this:
POST _reindex
{
  "source": {
    "remote": {
      "host": "http://oldhost:9200",
      "username": "user",
      "password": "pass"
    },
    "index": "source",
    "query": {
      "match": {
        "test": "data"
      }
    }
  },
  "dest": {
    "index": "dest"
  }
}
Notes:
Run the aforementioned command on your target instance.
Make sure that the source instance is whitelisted in elasticsearch.yml:
reindex.remote.whitelist: oldhost:9200
Run the process asynchronously using the query param below:
POST _reindex?wait_for_completion=false
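Since the goal is to migrate only logs within a datetime window, the query section can use a range filter instead of the match example above (a sketch assuming a @timestamp date field; substitute whatever your date field is called):

POST _reindex?wait_for_completion=false
{
  "source": {
    "remote": {
      "host": "http://oldhost:9200",
      "username": "user",
      "password": "pass"
    },
    "index": "source",
    "query": {
      "range": {
        "@timestamp": {
          "gte": "2021-01-01T00:00:00Z",
          "lt": "2021-02-01T00:00:00Z"
        }
      }
    }
  },
  "dest": {
    "index": "dest"
  }
}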
Just looking for some guidance on how to properly invoke a command when a container starts, when creating it via the azure-arm-containerinstance package. There is very little documentation on this specific part, and I wasn't able to find any examples on the internet.
return client.containerGroups
  .beginCreateOrUpdate(process.env.AZURE_RESOURCE_GROUP, containerInstanceName, {
    tags: ['server'],
    location: process.env.AZURE_INSTANCE_LOCATION,
    containers: [
      {
        image: process.env.CONTAINER_IMAGE,
        name: containerInstanceName,
        command: ["./some-executable","?Type=Fall?"],
        ports: [
          {
            port: 1111,
            protocol: 'UDP',
          },
        ],
        resources: {
          requests: {
            cpu: Number(process.env.INSTANCE_CPU),
            memoryInGB: Number(process.env.INSTANCE_MEMORY),
          },
        },
      },
    ],
    imageRegistryCredentials: [
      {
        server: process.env.CONTAINER_REGISTRY_SERVER,
        username: process.env.CONTAINER_REGISTRY_USERNAME,
        password: process.env.CONTAINER_REGISTRY_PASSWORD,
      },
    ],
  });
Specifically, is the part below correct? Just an array of strings? Are there any good examples anywhere? (I tried both Google and Bing.) Is this the equivalent of Docker's CMD ["command","argument"]?
command: ["./some-executable","?Type=Fall?"],
Most of what you have done is right, but there are a few points you should pay attention to.
One is that the command property will overwrite the CMD setting in the Dockerfile. So if the command does not keep running, the container will end up in a terminated state when the command finishes executing.
Second, the command property is an array of strings, and they will execute like a shell script. So I suggest you set it like this:
command: ['/bin/bash','-c','echo $PATH'],
And you'd better keep the first two strings unchanged, and just change what comes after.
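Applied to your snippet, that might look like the following (a sketch; the executable path and argument are your original placeholders):
command: ['/bin/bash', '-c', './some-executable "?Type=Fall?"'],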
If you have any more questions, please let me know. Or if it's helpful, you can accept it :-)
In Ghost 0.x, config was provided via a single config.js file with keys for each environment.
In Ghost 1.0, config is provided via multiple config.json files.
How do you provide environment variables in Ghost 1.0?
I would like to dynamically set the port value using process.env.port on the Cloud9 IDE, like so:
config.development.json
{
  "url": "http://localhost",
  "server": {
    "port": process.env.port,
    "host": process.env.IP
  }
}
When I run the application using ghost start with the following config, it says You can access your publication at http://localhost:2368, but when I go to http://localhost:2368 on http://c9.io, it gives me an error saying No application seems to be running here!
{
  "url": "http://localhost:2368",
  "server": {
    "port": 2368,
    "host": "127.0.0.1"
  }
}
I managed to figure out how to do this.
Here is the solution, in case someone else is also trying to figure out how to do the same thing.
In your config.development.json file, add the following:
{
  "url": "http://{workspace_name}-{username}.c9users.io:8080",
  "server": {
    "port": 8080,
    "host": "0.0.0.0"
  }
}
Alternatively, run the following commands in the terminal. Since a JSON config file can't embed expressions like process.env.port, these commands read the port and host from the environment at config time and write the corresponding values into config.development.json:
ghost config url http://$C9_HOSTNAME:$PORT
ghost config server.port $PORT
ghost config server.host $IP
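Worth verifying against your Ghost version: Ghost's config loader is also said to read plain environment variables, using double underscores for nested keys, which would avoid hard-coding the values entirely (a sketch; treat the variable names as assumptions):

url=http://$C9_HOSTNAME:$PORT server__port=$PORT server__host=$IP ghost start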