| Branch | Status |
|---|---|
| develop | |
| master | |
## Table of Contents
- Summary Of The Project
- Logging Standard Django Management Commands
- Local Development
- Cognito
- OTEL
- Importing Data from the BOD
- Type Checking
- Deployment configuration
## Summary Of The Project

service-control provides and manages the verified permissions. TBC
## Logging Standard Django Management Commands

This project uses a modified manage.py that supports redirecting the output of the standard
Django management commands to the logger. For this, simply add `--redirect-std-to-logger`, e.g.:

```bash
app/manage.py migrate --redirect-std-to-logger
```
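As a rough illustration, such a redirection can be implemented by swapping `sys.stdout` for a file-like adapter that forwards writes to the logging framework before dispatching to Django. This is only a hedged sketch of the idea, not the actual code in manage.py; the class name and flag handling here are hypothetical:

```python
import logging
import sys


class StreamToLogger:
    """Hypothetical file-like adapter that forwards writes to a logger."""

    def __init__(self, logger: logging.Logger, level: int = logging.INFO):
        self.logger = logger
        self.level = level

    def write(self, message: str) -> None:
        # Forward each non-empty line to the logger
        for line in message.rstrip().splitlines():
            self.logger.log(self.level, line.rstrip())

    def flush(self) -> None:
        pass  # nothing to flush, the logger handles output itself


if "--redirect-std-to-logger" in sys.argv:
    sys.argv.remove("--redirect-std-to-logger")
    sys.stdout = StreamToLogger(logging.getLogger("django.management"))
    sys.stderr = StreamToLogger(logging.getLogger("django.management"), logging.ERROR)
```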
## Local Development

Prerequisites on host for development and build:

- python version 3.12
- pipenv
- docker and docker compose
To create and activate a virtual Python environment with all dependencies installed:
```bash
make setup
```

To start the local postgres container, run:

```bash
make start-local-db
```

You may want to do an initial sync of your database by applying the most recent Django migrations:

```bash
app/manage.py migrate
```

All packages used in production are pinned to a major version. Automatically updating these packages will use the latest minor (or patch) version available. Packages used for development, on the other hand, are not pinned unless they need to be used with a specific version of a production package (for example, boto3-stubs for boto3).
To update the packages to the latest minor/compatible versions, run:
```bash
pipenv update --dev
```

To see what major/incompatible releases would be available, run:

```bash
pipenv update --dev --outdated
```

To update a package to a new major release, run, for example:

```bash
pipenv install logging-utilities~=5.0
```

Run tests with, for example, 16 workers:
```bash
pytest -n 16
```

There are several ways to debug this codebase from within Visual Studio Code.

In order to debug the service from within VS Code, you need to create a launch configuration. Create
a folder .vscode in the root folder if it doesn't exist and put a file launch.json with this content
in it:
```json
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Python Debugger: Attach",
      "type": "debugpy",
      "request": "attach",
      "justMyCode": false,
      "connect": {
        "host": "localhost",
        "port": 5678
      }
    }
  ]
}
```

Alternatively, create the file via the menu "Run" > "Add Configuration" by choosing:
- Debugger: Python Debugger
- Debug Configuration: Remote Attach
- Hostname: localhost
- Port number: 5678
Now you can start the server with `make serve-debug`.
Startup will pause until the debugger is attached, which is most easily done by hitting F5.
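Under the hood, a remote-attach setup like this typically has the server listen for the debugger before booting. A hedged sketch of what `make serve-debug` presumably wires in (host, port, and placement are assumptions matching the launch.json above):

```python
import debugpy

# Listen for an incoming debugger connection on the port from launch.json
debugpy.listen(("0.0.0.0", 5678))
print("Waiting for debugger to attach...")
debugpy.wait_for_client()  # block startup until VS Code attaches (F5)
```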
The unit tests can also be invoked inside VS Code directly (beaker icon).
To do this you need to have the following settings either in
.vscode/settings.json or in your workspace settings:

```json
"python.testing.pytestArgs": [
    "app"
],
"python.testing.unittestEnabled": false,
"python.testing.pytestEnabled": true,
```

You can also create this file interactively via the "Python: Configure Tests" command in the Command Palette (Ctrl+Shift+P).
For the automatic test discovery to work, make sure that VS Code has the Python
interpreter of your venv selected (.venv/bin/python).
You can change the Python interpreter via "Python: Select Interpreter"
in the Command Palette.
## Cognito

This project uses Amazon Cognito for user identity and access management. It uses a custom user attribute, managed_by_service, to mark users managed by this service.
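For illustration, setting such a custom attribute on a user could look like the following boto3 sketch. The endpoint, pool ID, and attribute name are the defaults from the configuration table below, but the snippet itself is an assumption, not the service's actual code:

```python
import boto3

# Endpoint and pool ID as configured via COGNITO_ENDPOINT_URL / COGNITO_POOL_ID
client = boto3.client("cognito-idp", endpoint_url="http://localhost:9229")

client.admin_update_user_attributes(
    UserPoolId="local",
    Username="some-user",  # hypothetical username
    UserAttributes=[
        # Custom attribute (COGNITO_MANAGED_FLAG_NAME) marking the user
        # as managed by this service
        {"Name": "dev:custom:managed_by_service", "Value": "true"},
    ],
)
```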
To synchronize all local users with Cognito, run:

```bash
app/manage.py cognito_sync
```

For locally testing the connection to Cognito, cognito-local is used.
cognito-local stores all of its data as simple JSON files in its volume (.volumes/cognito/db/).
You can also use the AWS CLI together with cognito-local by specifying the local endpoint, for example:
```bash
aws --endpoint $COGNITO_ENDPOINT_URL cognito-idp list-users --user-pool-id $COGNITO_POOL_ID
```

## OTEL

OpenTelemetry instrumentation can be done in many different ways, from fully automated zero-code instrumentation (otel-operator) to purely manual instrumentation. Since we are on Kubernetes, the ideal solution would be to use the otel-operator zero-code instrumentation.
For reasons unclear (possibly related to how we do gevent monkey patching), zero-code auto-instrumentation does not work. Thus, we fall back to programmatic instrumentation as described in the Python OpenTelemetry Manual-Instrumentation Sample App. We may revisit this once we figure out how to make auto-instrumentation work for this service.
To still write as little code as possible, we use the so-called OTEL programmatic instrumentation approach. Unfortunately, there are different understandings,
levels of integration, and examples of this approach. We use the method described here, since it provides the highest level of automatic instrumentation, i.e. we can use an initialize() method to automatically initialize all installed instrumentation libraries.
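A hedged sketch of what this boils down to, assuming a recent opentelemetry-instrumentation release that exposes initialize() (the exact import path and call site are assumptions and may depend on the distro in use):

```python
# Called once at startup, e.g. from a gunicorn post_fork hook, after
# gevent monkey patching has been applied
from opentelemetry.instrumentation.auto_instrumentation import initialize

initialize()  # discovers and initializes all installed instrumentation libraries
```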
Other examples instead import the specific instrumentation libraries and initialize each one with its instrument() method, as in the sketch below.
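For comparison, the per-library approach looks roughly like this (real instrumentor classes from the OpenTelemetry contrib packages, but the wiring is a hedged sketch):

```python
from opentelemetry.instrumentation.botocore import BotocoreInstrumentor
from opentelemetry.instrumentation.django import DjangoInstrumentor

# Each instrumentation library has to be imported and activated explicitly
DjangoInstrumentor().instrument()
BotocoreInstrumentor().instrument()
```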
It can be expected that the documentation will improve and consolidate over time, and that zero-code instrumentation will become usable in the future.
As mentioned above, all available and desired instrumentation libraries need to be installed first, i.e. added to the Pipfile. Well-known libraries like django, requests and botocore could be added manually. To get a better overview and add broader instrumentation support, an OTEL bootstrap tool can be used to create a list of supported libraries for a given project.
Usage:

- Run `edot-bootstrap --action=requirements` to get the list of libraries.
- Add all or the desired ones to the Pipfile.

Note: edot-bootstrap should already be installed via infra-ansible-bgdi-dev. If not, install it with `pipx install elastic-opentelemetry`.
The following environment variables can be used to configure OTEL:

| Env Variable | Default | Description |
|---|---|---|
| OTEL_EXPERIMENTAL_RESOURCE_DETECTORS | | OTEL resource detectors, adding resource attributes to the OTEL output, e.g. os,process |
| OTEL_EXPORTER_OTLP_ENDPOINT | http://localhost:4317 | The OTEL exporter endpoint, e.g. opentelemetry-kube-stack-gateway-collector.opentelemetry-operator-system:4317 |
| OTEL_EXPORTER_OTLP_HEADERS | | A list of key=value headers added to outgoing data, see https://opentelemetry.io/docs/languages/sdk-configuration/otlp-exporter/#header-configuration |
| OTEL_EXPORTER_OTLP_INSECURE | false | Whether exporter SSL certificates should be checked or not. |
| OTEL_INSTRUMENTATION_HTTP_CAPTURE_HEADERS_SERVER_REQUEST | | A comma-separated list of request headers added to outgoing data. Regex supported. Use '.*' for all headers. |
| OTEL_INSTRUMENTATION_HTTP_CAPTURE_HEADERS_SERVER_RESPONSE | | A comma-separated list of response headers added to outgoing data. Regex supported. Use '.*' for all headers. |
| OTEL_PYTHON_EXCLUDED_URLS | | A comma-separated list of URLs to exclude, e.g. checker |
| OTEL_PYTHON_DJANGO_TRACED_REQUEST_ATTRS | | A comma-separated list of attributes from the Django request, e.g. path_info,content_type |
| OTEL_RESOURCE_ATTRIBUTES | | A comma-separated list of custom OTEL resource attributes. Must contain at least the service name, e.g. service.name=service-shortlink |
| OTEL_SDK_DISABLED | | If set to "true", OTEL is disabled. See https://opentelemetry.io/docs/specs/otel/configuration/sdk-environment-variables/#general-sdk-configuration |
The OpenTelemetry logging integration automatically injects tracing context into log statements. The following keys are injected into log record objects:
- otelSpanID
- otelTraceID
- otelTraceSampled
Note that although otelServiceName is injected, it will be empty. This is because the logging integration tries to read the service name from the tracer provider, but our tracer provider instance does not contain this resource attribute.
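For illustration, these keys can be referenced directly in a logging format string once the logging instrumentation is active (a hedged sketch; the format itself is an assumption):

```python
import logging

# otelTraceID / otelSpanID are only present on records once the
# OpenTelemetry logging instrumentation injects them
logging.basicConfig(
    format="%(asctime)s %(levelname)s "
           "[trace_id=%(otelTraceID)s span_id=%(otelSpanID)s] %(message)s"
)
```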
Local telemetry can be tested by using one of the serve commands that use gunicorn, either

```bash
make start-local-db
make gunicornserve
```

or

```bash
make start-local-db
make dockerrun
```

and visiting the Zipkin dashboard at http://localhost:9411.
The "Betriebsobjekte Datenbank" (BOD) is a central database for running and configuring the map viewer and some of its services. It contains metadata and translations on the individual layers and configurations for display and serving the data through our services such as Web Map Service (WMS), Web Map Tiling Service (WMTS) and our current api (mf-chsdi3/api3).
You can import a BOD dump and migrate its data:
```bash
make setup-bod
make import-bod file=dump.sql
app/manage.py bod_sync
```

To generate more BOD models, run:
```bash
app/manage.py inspectdb --database=bod
```

The BOD models are unmanaged, meaning Django does not manage any migrations for these models.
However, migrations are still needed during tests to set up the test BOD. To achieve this, it is
necessary to create migrations for the models and dynamically adjust the managed flag based on
whether the tests or the server is running (django.conf.settings.TESTING). Since these migrations
are only for testing purposes, the previous migration file can be removed and recreated:
```bash
rm app/bod/migrations/0001_initial.py
app/manage.py makemigrations bod
```

Afterward, the managed flag needs to be set to django.conf.settings.TESTING in both the models
and the migrations.
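For illustration, the adjusted flag might look like this in a model (a hedged sketch with a hypothetical model and table name):

```python
from django.conf import settings
from django.db import models


class BodLayer(models.Model):  # hypothetical BOD model
    """Maps an existing BOD table; Django only manages it while testing."""

    class Meta:
        # Unmanaged in production, managed during tests so the test BOD
        # can be created from the migrations
        managed = settings.TESTING
        db_table = "some_bod_table"  # hypothetical table name
```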
## Type Checking

Type checking can be done by either calling mypy directly or using the make target:

```bash
make type-check
```

This will check all files in the repository.
For type checking, the external library mypy is used. See the type hints cheat sheet for help on getting the types right.
Some 3rd party libraries need to have explicit type stubs installed for the type checker to work. Some of them can be found in typeshed. Sometimes dedicated packages exist, as is the case with django-stubs.
If there aren't any type hints available for a library, they can also be auto-generated with stubgen.
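For example, a generated stub file (.pyi) for a hypothetical untyped module might look like this; the module and signatures are invented for illustration:

```python
# somelib.pyi - hypothetical stub, roughly what stubgen would generate
from typing import Any

def fetch(url: str, timeout: float = ...) -> Any: ...

class Client:
    def __init__(self, token: str) -> None: ...
    def get(self, path: str) -> dict[str, Any]: ...
```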
## Deployment configuration

| Environment Variable | Default | Description |
|---|---|---|
| SECRET_KEY | None | Django secret key. |
| ALLOWED_HOSTS | [] | List of host/domain names allowed to serve the app. |
| DB_NAME | service_control | Name of the primary PostgreSQL database. |
| DB_USER | service_control | Username for the primary PostgreSQL database. |
| DB_PW | service_control | Password for the primary PostgreSQL database. |
| DB_HOST | service_control | Host address for the primary PostgreSQL database. |
| DB_PORT | 5432 | Port number for the primary PostgreSQL database. |
| DB_NAME_TEST | test_service_control | Name of the PostgreSQL database used for testing. |
| BOD_NAME | service_control | Name of the secondary (BOD) PostgreSQL database. |
| BOD_USER | service_control | Username for the BOD PostgreSQL database. |
| BOD_PW | service_control | Password for the BOD PostgreSQL database. |
| BOD_HOST | service_control | Host address for the BOD PostgreSQL database. |
| BOD_PORT | 5432 | Port number for the BOD PostgreSQL database. |
| DJANGO_STATIC_HOST | '' | Optional base URL. |
| COGNITO_ENDPOINT_URL | http://localhost:9229 | Base URL for the AWS Cognito endpoint or local mock. |
| COGNITO_POOL_ID | local | Cognito user pool ID used for authentication. |
| COGNITO_MANAGED_FLAG_NAME | dev:custom:managed_by_service | Cognito custom attribute name for service-managed users. |
| SHORT_ID_SIZE | 12 | Default length of generated short IDs (nanoid). |
| SHORT_ID_ALPHABET | 0123456789abcdefghijklmnopqrstuvwxyz | Character set used for nanoid short IDs. |
| LOGGING_CFG | config/logging-cfg-local.yaml | Path to the YAML logging configuration file. |
| LOG_ALLOWED_HEADERS | List of default headers | List of HTTP headers allowed in logs (overrides defaults). |
| HTTP_PORT | 8000 | Port on which the Gunicorn/Django app will listen. |
| GUNICORN_WORKERS | 2 | Number of worker processes Gunicorn will start. |
| GUNICORN_WORKER_TMP_DIR | None | Optional temporary directory for Gunicorn worker processes. |
| GUNICORN_KEEPALIVE | 2 | The keepalive setting for persistent HTTP connections (in seconds). |
| GUNICORN_TIMEOUT | Not explicitly set | The maximum time (in seconds) a worker can handle a request before timing out. |
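A hedged sketch of how such variables are typically consumed in settings.py; the variables are taken from the table above, but the parsing code itself is an assumption:

```python
import os

# Hypothetical excerpt from settings.py: read the deployment configuration
# from the environment, falling back to the defaults in the table above
DB_NAME = os.environ.get("DB_NAME", "service_control")
DB_PORT = int(os.environ.get("DB_PORT", "5432"))
HTTP_PORT = int(os.environ.get("HTTP_PORT", "8000"))
ALLOWED_HOSTS = [h for h in os.environ.get("ALLOWED_HOSTS", "").split(",") if h]
```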