This scheme describes how raw data from external sources is collected, processed, and stored.
The Pipeline can collect raw data in any text format, convert it into JSON, and automatically determine the DB schemas used to store the data.
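For illustration, here is a minimal sketch of such a text-to-JSON conversion in Python; the raw line, the regex, and the field names are invented for this example, and real connectors define their own parsing rules:

```python
import json
import re

# Hypothetical raw log line arriving at the pipeline in plain text.
raw_line = "2024-05-01T12:00:00Z host-1 nginx: GET /health 200"

# Assumed parsing rule: split the line into named fields with a regex.
pattern = re.compile(
    r"(?P<timestamp>\S+) (?P<host>\S+) (?P<service>[^:]+): (?P<message>.*)"
)
match = pattern.match(raw_line)
event = match.groupdict() if match else {"message": raw_line}

# The raw text is now JSON, ready for the rest of the pipeline.
print(json.dumps(event, indent=2))
```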
The Pipeline consists of four elements:
Log Collector is an HTTP endpoint that receives data. It validates the API key and the received data model, wraps the data in the following model: (_id, _aggregatedAt, _connector, _sourceType, _source), and sends it to the Preprocessor.
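A minimal sketch of the collector's wrapping step, assuming Python; the envelope field names come from the description above, while the key store, the value generation, and the interpretation of _source as the raw payload are assumptions:

```python
import uuid
from datetime import datetime, timezone

API_KEYS = {"example-key"}  # hypothetical key store; the real validation is not specified

def collect(api_key: str, raw: str, connector: str, source_type: str) -> dict:
    # Reject requests that do not carry a known API key.
    if api_key not in API_KEYS:
        raise PermissionError("invalid API key")
    # Wrap the received data in the envelope model before forwarding
    # it to the Preprocessor.
    return {
        "_id": str(uuid.uuid4()),                                 # unique event ID (assumed UUID)
        "_aggregatedAt": datetime.now(timezone.utc).isoformat(),  # collection timestamp (assumed UTC)
        "_connector": connector,                                  # connector that produced the data
        "_sourceType": source_type,                               # kind of source, e.g. "syslog"
        "_source": raw,                                           # assumption: the raw payload itself
    }
```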
Preprocessor processes the received raw data: it performs stream processing using ML algorithms based on previously trained models and adds the following labels:
_labels.ml.cluster.id – the ID of the cluster the event belongs to.
_labels.ml.cluster.modelVersion – the ML model version (iteration).
_labels.ml.cluster.position – the event's position relative to the cluster's center; a fractional number, where values closer to 1 are closer to the center of the cluster.
DB Scheme Validator creates a DB schema according to the data model defined in the connector: it builds the schema from the JSON model and adds additional attributes if needed (see the sketch below).
Data Buffer stores all the raw data in the ClickHouse DB according to the data stream's schema.
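To make the last two steps concrete, here is a minimal sketch, assuming Python, of how a ClickHouse schema could be derived from a labeled JSON event; the type mapping, the column flattening, the table naming, and the sample event values are all assumptions rather than the pipeline's actual rules:

```python
# Minimal sketch: derive a ClickHouse schema from a JSON event model.
# The actual DB Scheme Validator rules are not specified in the source.

def clickhouse_type(value) -> str:
    # Assumed mapping from JSON value types to ClickHouse column types.
    if isinstance(value, bool):
        return "UInt8"
    if isinstance(value, int):
        return "Int64"
    if isinstance(value, float):
        return "Float64"
    return "String"

def flatten(model: dict, prefix: str = "") -> dict:
    """Flatten nested JSON keys into dotted column names."""
    columns = {}
    for key, value in model.items():
        name = f"{prefix}{key}"
        if isinstance(value, dict):
            columns.update(flatten(value, f"{name}."))
        else:
            columns[name] = clickhouse_type(value)
    return columns

def create_table_sql(stream: str, model: dict) -> str:
    # Build a CREATE TABLE statement for the data stream's buffer table.
    cols = ",\n    ".join(f"`{name}` {type_}" for name, type_ in flatten(model).items())
    return (
        f"CREATE TABLE IF NOT EXISTS raw_{stream} (\n    {cols}\n)"
        f" ENGINE = MergeTree ORDER BY `_aggregatedAt`"
    )

# A labeled event as it might look after preprocessing (values invented).
labeled_event = {
    "_id": "a1b2c3d4",
    "_aggregatedAt": "2024-05-01T12:00:00Z",
    "_connector": "nginx-connector",
    "_sourceType": "syslog",
    "_source": "GET /health 200",
    "_labels": {"ml": {"cluster": {"id": 42, "modelVersion": 3, "position": 0.87}}},
}

print(create_table_sql("nginx", labeled_event))
```

Running the sketch prints a CREATE TABLE statement whose columns include the envelope fields and the flattened ML labels, e.g. `_labels.ml.cluster.position` Float64, which the Data Buffer could then use when inserting events.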