When running my Docker image and requesting the Swagger UI I receive: 502 Bad Gateway.
I am attempting to run Connexion (a Flask-based Swagger UI generator) with uWSGI behind nginx. I assume the 502 occurs because uWSGI does not correctly pick up my Flask instance, yet as far as I can tell the container is configured correctly.
If you look at https://github.com/Microsoft/python-sample-vscode-flask-tutorial, the setup and configuration of my application are similar, and that project works without issues.
According to the Connexion documentation I should be able to expose the app instance to uWSGI using:
app = connexion.App(__name__, specification_dir='swagger/')
application = app.app # expose global WSGI application object
You can find my complete application code here:
https://bitbucket.org/lammy123/flask_nginx_docker_connexion/src/master/
The Flask/Connexion object is in application/__init__.py
uwsgi.ini:
[uwsgi]
module = application.webapp
callable = app
uid = 1000
master = true
threads = 2
processes = 4
__init__.py:
import connexion
app = connexion.App(__name__, specification_dir='openapi/')
app.add_api('helloworld-api.yaml', arguments={'title': 'Hello World Example'})
webapp.py:
from . import app    # For application discovery by the 'flask' command.
from . import views  # For import side-effects of setting up routes.

application = app.app  # expose the global WSGI application object
Running the code with the built-in development server works.
Expected behavior is that the Swagger UI is available at:
http://localhost:5000/v1.0/ui/#/
when running from a Docker container.
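For reference, this is how the uwsgi.ini settings map onto the Python objects above: uWSGI imports the module named in module (application/webapp.py here) and then looks up the attribute named in callable as the WSGI entry point. One thing worth double-checking is that callable names the actual WSGI object; in the webapp.py above that object is application (the Flask app wrapped by Connexion), whereas callable = app resolves to the connexion.App wrapper. A minimal sketch using the names from this question:

# application/webapp.py -- minimal sketch of what uWSGI needs to find
from . import app as connexion_app   # the connexion.App created in __init__.py
from . import views                  # noqa: F401 -- importing registers the routes

# uWSGI resolves `module = application.webapp` to this file and then looks up
# the attribute named in `callable`; expose the Flask/WSGI object under that name.
application = connexion_app.app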
Related
I'm getting the error "details: name = ErrorInfo reason = IAM_PERMISSION_DENIED domain = iam.googleapis.com metadata = map[permission:logging.logEntries.create]" when I check the logs of a container deployed to GCP. I'm not sure why this is happening, since running the container on localhost works fine.
Another service is deployed on the same host under a different port and works fine, although that one does not use any Google API services.
The service that errors on GCP has a .env file with this content:
GOOGLE_APPLICATION_CREDENTIALS=json/name-of-json-file.json
The JSON file is the service account key file. The Dockerfile looks like this:
# Specifies a parent image
FROM golang:1.19.2-bullseye
# Creates an app directory to hold your app’s source code
WORKDIR /app
# Copies everything from your root directory into /app
COPY . .
# Installs Go dependencies
RUN go mod download
# Builds your app with optional configuration
RUN go build -o /logging-go
# Tells Docker which network port your container listens on
EXPOSE 8040
# Specifies the executable command that runs when the container starts
CMD [ "/logging-go" ]
The service makes use of the Google Cloud Logging API, which is accessed through this snippet of code:
c, cErr := Load(".env")
if cErr != nil {
    log.Fatalf("could not load config: %s", cErr)
    return
}

// initialize the Cloud Logging client
ctx := context.Background()
opt := option.WithCredentialsFile(c.GoogleApplicationCredentials)
loggerClient, clientErr := logging.NewClient(ctx, "poc-projects-01", opt)
if clientErr != nil {
    log.Fatal(clientErr)
}
if clientErr := loggerClient.Ping(ctx); clientErr != nil {
    log.Fatal(clientErr)
}
logger := loggerClient.Logger("frontend_logs")
It works fine on localhost when run through Docker, but it doesn't work on GCP. Any ideas on how I can fix this?
error details: name = ErrorInfo reason = IAM_PERMISSION_DENIED domain = iam.googleapis.com metadata = map[permission:logging.logEntries.create]
The above error means the deployed container does not have permission to call the Google Cloud Logging API. This can happen if the service account you are using lacks the required permissions, or if the service account key has not been configured properly.
To confirm the service account has the correct permissions, check the IAM roles attached to it and make sure one of them grants the logging.logEntries.create permission (for example, the Logs Writer role, roles/logging.logWriter).
Attaching the troubleshooting documentation for reference.
I am trying to add a running MinIO instance to the Airflow connections. I thought it would be as easy as this setup in the GUI (never mind the exposed credentials; this is a blocked-off environment and they will be changed afterwards).
Both Airflow and MinIO are running in Docker containers on the same Docker network. Pressing the Test button results in the following error:
'ClientError' error occurred while testing connection: An error occurred (InvalidClientTokenId) when calling the GetCallerIdentity operation: The security token included in the request is invalid.
I am curious what I am missing. The idea was to set up this connection and then use a bucket for data-aware scheduling (i.e. trigger a DAG as soon as someone uploads a file to the bucket).
I was also facing the problem that the endpoint URL refused the connection. Since MinIO is actually running in a Docker container, the Docker host URL has to be used:
{
    "aws_access_key_id": "your_minio_access_key",
    "aws_secret_access_key": "your_minio_secret_key",
    "host": "http://host.docker.internal:9000"
}
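As a quick sanity check from a DAG or a Python shell, Airflow's S3 hook can be pointed at that connection; something like the sketch below should list the objects in a bucket (the connection id minio_conn and bucket name test-bucket are placeholders):

from airflow.providers.amazon.aws.hooks.s3 import S3Hook

# 'minio_conn' is whatever Connection Id you gave the MinIO connection
hook = S3Hook(aws_conn_id='minio_conn')

# Lists the object keys in the bucket; this fails with the same credential
# error if the access key, secret key, or endpoint are not picked up correctly.
print(hook.list_keys(bucket_name='test-bucket'))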
I am also facing this error in Airflow 2.5.0.
I've found a workaround using the boto3 library, which is already built in.
First I created a connection with these parameters:
Connection Id: any label (Minio in my case)
Connection Type: Generic
Host: MinIO server IP and port
Login: MinIO access key
Password: MinIO secret key
And here's my code:
import boto3
from airflow.hooks.base import BaseHook

conn = BaseHook.get_connection('Minio')

s3 = boto3.resource('s3',
                    endpoint_url=conn.host,
                    aws_access_key_id=conn.login,
                    aws_secret_access_key=conn.password)
s3client = s3.meta.client

# You can then use boto3 methods for manipulating buckets and files,
# for example:
bucket = s3.Bucket('test-bucket')

# Iterate through all the objects, doing the pagination for you. Each obj
# is an ObjectSummary, so it doesn't contain the body. You'll need to call
# get() to retrieve the whole body.
for obj in bucket.objects.all():
    key = obj.key
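The same client can also be used to upload files, for example (the file name and key below are just placeholders):

# upload a local file to the bucket under the given key
s3client.upload_file('local_file.csv', 'test-bucket', 'data/local_file.csv')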
I am using an openSUSE Tumbleweed container in a GitLab CI pipeline which runs a script. In that script, I need to send an email at some point with certain content.
In the container, I am installing postfix and configuring that relay server in /etc/postfix/main.cf.
The following command works on my laptop using that same relay server:
echo "This is the body of the email" | mail -s "This is the subject" -r sender#email.com receiver#email.com
but it doesn't work from the container, even with the same postfix configuration.
I've seen some tutorials that show how to use the postfix/SMTP configuration from the host, but since this is a container running in GitLab CI, that's not applicable.
So I finally opted for a Python solution and call the script from bash; this way I don't need to configure postfix, SMTP, or anything else. You just export your variables in bash (or use argparse) and run this script. Of course, you need a relay server without auth (normally on port 25).
import os
import smtplib
from email.mime.text import MIMEText

# The relay server and addresses come from environment variables exported in bash.
smtp_server = os.environ.get('RELAY_SERVER')
port = os.environ.get('RELAY_PORT')
sender_email = os.environ.get('SENDER_EMAIL')
receiver_email = os.environ.get('RECEIVER_EMAIL')

mimetext = MIMEText("this is the body of the email")
mimetext['Subject'] = "this is the subject of the email"
mimetext['From'] = sender_email
mimetext['To'] = receiver_email

# Plain, unauthenticated relay; receiver_email can be a comma-separated list.
server = smtplib.SMTP(smtp_server, int(port))
server.ehlo()
server.sendmail(sender_email, receiver_email.split(','), mimetext.as_string())
server.quit()
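If your relay does require authentication and TLS instead, the same script only needs a couple of extra calls before sendmail (the smtp_user and smtp_password variables here are hypothetical, e.g. taken from two more environment variables):

server = smtplib.SMTP(smtp_server, int(port))
server.ehlo()
server.starttls()                       # upgrade the connection to TLS
server.login(smtp_user, smtp_password)  # authenticate against the relay
server.sendmail(sender_email, receiver_email.split(','), mimetext.as_string())
server.quit()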
I'm trying to run my Docker containers (a Flask app) but I get an error that says:
Import error no module named bplanner.app
Docker compose:
Error:
Dockerfile:
bplanner.app:create_app()
You're calling a Python function from within the gunicorn command. I don't think that's possible. Instead, create a separate Python file which imports the create_app function and makes the resulting app object available to gunicorn.
You haven't posted any Python code or a directory listing, but at a guess this should work:
# wsgi.py
from app import create_app

application = create_app()
In the gunicorn command, reference it as:
wsgi:application
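For context, the create_app factory that wsgi.py imports would look roughly like this (a minimal sketch; the real bplanner package obviously has its own configuration and routes):

# app/__init__.py -- minimal application-factory sketch
from flask import Flask

def create_app():
    app = Flask(__name__)

    @app.route('/')
    def index():
        return 'ok'

    return app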
I have a custom module that I want to install on a container running the bitnami/magento Docker image within a Kubernetes cluster.
I am currently trying to install the module from a local directory via the container's Dockerfile:
# run bitnami's magento container
FROM bitnami/magento:2.2.5-debian-9
# add magento_code directory to the bitnami magento install
# ./magento_data/code contains the module, i.e. Foo/Bar
ADD ./magento_data/code /opt/bitnami/magento/htdocs/app/code
After building and running this image, the site returns a 500 error. The pod logs show that Magento installs correctly, but it doesn't know what to do with the custom module:
Exception #0 (UnexpectedValueException): Setup version for module 'Foo_Bar' is not specified
Therefore to get things working I have to open a shell to the container and run some commands:
$ php /opt/bitnami/magento/htdocs/bin/magento setup:upgrade
$ chown -R bitnami:daemon /opt/bitnami/magento/htdocs
The first command sorts out the Magento setup issue; the second ensures that the next time an HTTP request comes in, Magento can correctly generate any directories and files it needs.
This gives me a functioning container; however, Kubernetes is not able to rebuild this container on its own, as I am manually running a bunch of commands after Magento has installed.
I thought about running the above commands in the container's readinessProbe, but I'm not sure it would work, as I'm not 100% certain of Magento's state when that is first called, and it also seems very hacky.
Any advice on how to best set up custom modules within a bitnami/magento container would be much appreciated.
UPDATE:
Since opening this issue I've been discussing it further on Github: https://github.com/bitnami/bitnami-docker-magento/issues/82
I've got it working by using Composer instead of manually adding the module to the app/code directory.
I was able to do this by first adding the module to Packagist, then storing my Magento Marketplace authentication details in auth.json:
{
    "http-basic": {
        "repo.magento.com": {
            "username": "<MAGENTO_MARKETPLACE_PUBLIC_KEY>",
            "password": "<MAGENTO_MARKETPLACE_PRIVATE_KEY>"
        }
    }
}
You can get the public and private key values by creating a new access key within the Marketplace. Place the file in the module's root, alongside your composer.json.
Once I had that I updated my Dockerfile to use the auth.json and require the custom module:
# run bitnami's magento container
FROM bitnami/magento:2.2.5
# Require custom modules
WORKDIR /opt/bitnami/magento/htdocs/
ADD ./auth.json .
RUN composer require foo/bar
I then completed a new install, creating the DB container alongside the Magento container. However, it should also work fine with an existing DB, as long as the module versions are the same.