Connecting data sources to Monq is done by setting up Data Streams. A detailed guide on working with Data Streams is available here.
| Integration example | Incoming connection (port/protocol) | Outgoing connection (port/protocol) | Notes | Used template |
|---|---|---|---|---|
| Zabbix | | 80, 443 / tcp | Connecting to the Zabbix API | Zabbix default |
| Zabbix (webhooks) | 80, 443 / tcp | | Sending data to Monq | AnyStream default |
| SCOM | | 1433 / tcp | Connecting to the SCOM DBMS | SCOM default |
| Prometheus | 80, 443 / tcp | | Sending data to Monq | Prometheus default |
| ntopng | 80, 443 / tcp | | Sending data to Monq | ntopng default |
| Nagios XI | | 80, 443 / tcp | Connecting to the Nagios XI API | Nagios default |
| Nagios Core | 80, 443 / tcp | | Sending data to Monq | AnyStream default |
| Fluentd (Fluent Bit) | 80, 443 / tcp | | Sending data to Monq | AnyStream default |
| Splunk | 80, 443 / tcp | | Sending data to Monq | AnyStream default |
| Logstash | 80, 443 / tcp | | Sending data to Monq | AnyStream default |
| VMware vCenter | | 80, 443 / tcp | Connecting to the vCenter API | vCenter default |
To connect a data source of the Zabbix type, you must first properly configure the Zabbix side.
Next, go to the Monq menu section Data Collection→Data Streams and configure a data stream with the Zabbix default template. On the Settings tab, fill in the fields:
- apiUri - must contain a URL in the format http://zabbix.example.com/api_jsonrpc.php
- login - the Zabbix login
- password - the Zabbix password
and click Save.
If necessary, configure the launch intervals for the Monq agent tasks.
Click Start at the upper right of the page to enable the data stream.
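Optionally, you can verify that the apiUri endpoint is reachable before starting the stream by calling the Zabbix API method apiinfo.version, which requires no authentication; the host name below is only an example:
curl -s -X POST http://zabbix.example.com/api_jsonrpc.php \
  -H 'Content-Type: application/json' \
  -d '{"jsonrpc":"2.0","method":"apiinfo.version","params":{},"id":1}'
# a response such as {"jsonrpc":"2.0","result":"5.0.19","id":1} confirms the endpoint is reachable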
Monq implements interaction with the Zabbix monitoring system by setting up links between Monq triggers and Zabbix triggers through configuration items (CIs). Detailed information on bound objects can be found in the corresponding section of the documentation.
To enable the additional functionality of synchronizing the state of Zabbix triggers with Monq triggers, you need to set up a direct connection to the Zabbix database (tables: auditlog, auditlog_details, triggers). See the guide for configuring the Zabbix connector.
This functionality automatically disables triggers in Monq after they are manually deactivated in Zabbix.
Restrictions: the Zabbix DB must be MySQL, and Zabbix versions up to 6.0 are supported.
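For illustration, here is a minimal sketch of granting a MySQL account read access to the tables the connector uses; the database name zabbix, the user name monq_connector, and the password are assumptions, not values from this guide:
mysql -u root -p <<'SQL'
-- hypothetical read-only account for the Monq Zabbix connector
CREATE USER 'monq_connector'@'%' IDENTIFIED BY 'S3cretPassw0rd';
-- the connector only reads the audit and trigger tables
GRANT SELECT ON zabbix.auditlog TO 'monq_connector'@'%';
GRANT SELECT ON zabbix.auditlog_details TO 'monq_connector'@'%';
GRANT SELECT ON zabbix.triggers TO 'monq_connector'@'%';
FLUSH PRIVILEGES;
SQL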
Reference
The order in which an event about the deactivation or removal of a trigger passes from Zabbix:
1. pl-connector-dispatcher-api-service-runner periodically picks up events from the sm-zabbix-connector-api-service-zabbix-api service and sends them to the cl-stream-data-collector-api-service service via an HTTP request.
2. The event is sent with the cl.stream-raw-event.new key to cl-stream-data-preprocessor-service, where it is enriched with labels (stream-ready-event.zabbix.new) and then sent with the cl.stream-processed-event.new key to the cl-stream-schema-validator-service service.
3. The validated event is sent with the cl.stream-validated-event.new key to the cl-stream-data-service-buffer service.
4. From the cl-stream-data-service-buffer service, the event is sent (cl.stream-ready-event.zabbix.new) to the sm-zabbix-connector-api-service-autodiscovery service.
5. From the sm-zabbix-connector-api-service-autodiscovery service, events are sent with the cl.stream-ready-event.new key in parallel to the pl-router-service service, where they are routed via websockets to the Raw events and logs screen, and to the pl-automaton-prefilter-service service.
6. The pl-automaton-prefilter-service service applies the rules for launching events in the automaton and sends them to the pl-automaton-runner-service.
Using this example, you can implement receiving data from any source that supports webhooks.
In the Zabbix 5.0 frontend, go to Administration->Media types and create a new media type. Enter a name and select the Webhook type. Fill in the Parameters table, which forms the JSON payload that will be sent to the Monq system:
EventID: {EVENT.ID}
EventName: {EVENT.NAME}
EventSeverity: {EVENT.SEVERITY}
HostName: {HOST.NAME}
TriggerStatus: {TRIGGER.STATUS}
In the Script field, paste the JavaScript code that forms and sends a POST request to the API of your Monq system:
// parse the parameters passed by Zabbix (the Parameters table above)
var req = new CurlHttpRequest();
var params = JSON.parse(value);
// forward them as JSON to the Monq stream-data endpoint
req.AddHeader('Content-Type: application/json');
req.Post('https://{GLOBAL_DOMAIN}/api/public/cl/v1/stream-data?streamKey={API-KEY}', JSON.stringify(params));
{GLOBAL_DOMAIN} – the address of your Monq space, for example sm.monq.cloud.
{API-KEY} – the API key copied in the first step.
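Independently of Zabbix, the stream key can be checked by posting a test event to the same endpoint with curl; the field values below are purely illustrative:
curl -X POST 'https://{GLOBAL_DOMAIN}/api/public/cl/v1/stream-data?streamKey={API-KEY}' \
  -H 'Content-Type: application/json' \
  -d '{"EventID":"1","EventName":"Test event","EventSeverity":"Information","HostName":"test-host","TriggerStatus":"OK"}'
# the test event should then be visible on the Raw events screen for this data stream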
First, create a user in the SCOM system database. To do this, connect to the target OperationsManager database using an MS SQL client (for example, SQL Server Management Studio) and create a new user:
- In the General section, enter a username, select SQL Server Authentication, and enter a password. Copy the name and password - you will need them later.
- In the Server Roles section, select the public role.
- In the User Mapping section, select the db_datareader and public roles.
- Check the summary list of rights in the Securables section - the permissions must include CONNECT SQL, VIEW ANY DATABASE, and VIEW ANY DEFINITION.
- Confirm the creation of the user by clicking OK.
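If you prefer a query window to the UI dialogs, a roughly equivalent sketch in T-SQL; the login name monq_reader and the password are hypothetical:
-- create a SQL Server login for Monq (SQL Server Authentication)
CREATE LOGIN monq_reader WITH PASSWORD = 'StrongPassword!1';
GO
USE OperationsManager;
GO
-- map the login to a database user and grant read-only access
CREATE USER monq_reader FOR LOGIN monq_reader;
ALTER ROLE db_datareader ADD MEMBER monq_reader;
GO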
Next, go to the Monq menu section Data Collection→Data Streams and configure a data stream with the SCOM default template. On the Settings tab, fill in the connection fields, including the database name (OperationsManager), the port (1433), and the user credentials created earlier. Click Save.
Click Start at the upper right of the page to enable the data stream.
Go to the Monq menu section Data Collection→Data Streams, configure a data stream with the Prometheus default template and copy its API key.
Next, configure the alertmanager.yaml file of the Prometheus Alertmanager.
Add the receiver 'web.hook':
receivers:
  - name: 'web.hook'
    webhook_configs:
      - send_resolved: true
        url: 'https://{GLOBAL_DOMAIN}/api/public/cl/v1/stream-data?streamKey={API-KEY}'
{GLOBAL_DOMAIN} – the address of your Monq space, for example sm.monq.cloud.
{API-KEY} – the API key copied from the data stream page.
In the route block, add the group_by grouping order and the sending method via the receiver 'web.hook'; fill in the group_by key manually:
route:
  group_by: ['<Group tags>']
  group_wait: 30s
  group_interval: 30s
  repeat_interval: 1h
  receiver: 'web.hook'
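Before restarting, the edited file can optionally be validated; a sketch assuming amtool (shipped with Alertmanager) is installed and the configuration lives at /etc/alertmanager/alertmanager.yaml:
# prints a summary of routes and receivers, or an error if the file is invalid
amtool check-config /etc/alertmanager/alertmanager.yaml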
Restart alertmanager.
An example of the final alertmanager.yaml configuration file:
global:
  resolve_timeout: 5m
route:
  group_by: ['ingress']
  group_wait: 30s
  group_interval: 30s
  repeat_interval: 1h
  receiver: 'web.hook'
receivers:
  - name: 'web.hook'
    webhook_configs:
      - send_resolved: true
        url: 'https://sm.example.ru/api/public/cl/v1/stream-data?streamKey=e4da3b7f-bbce-2345-d777-2b0674a31z65'
inhibit_rules:
  - source_match:
      severity: 'critical'
    target_match:
      severity: 'warning'
    equal: ['alertname', 'dev', 'instance']
Click Start at the upper right of the page to enable the data stream.
Go to the Monq menu section Data Collection→Data Streams, configure a data stream with the Ntopng default template and copy its API-key.
Next, go to the ntopng system interface, to the Settings->Preferences->Alert Endpoints section, and activate the Toggle Webhook Notification switch. Then paste the address https://{GLOBAL_DOMAIN}/api/public/cl/v1/stream-data?streamKey={API-KEY} into the Notification URL field.
{GLOBAL_DOMAIN} – the address of your Monq space, for example sm.monq.cloud.
{API-KEY} – the API key copied from the data stream page.
In the deployed Nagios XI, add a new user: go to the Admin->Add new user accounts->Add new user section. In the user creation window, enter a name, password, and email, and check the boxes Can see all hosts and services, Has read-only access, and API access.
Click Add User.
Now select the created user from the list. On the user page, in the LDAP Settings block, copy the key from the API-Key field.
Next, go to the Monq menu section Data Collection→Data Streams, configure a data stream with the Nagios default template, on the Settings tab fill in the apiUri field and paste the previously copied key into the apiKey field, then click Save.
Click Start at the upper right of the page to enable the data stream.
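To confirm the copied key before enabling the stream, you can call the Nagios XI REST API directly; a hedged example, where the host name is illustrative and <API-Key> stands for the key copied above:
# returns the current host status objects as JSON if the key is valid
curl -s 'https://nagios.example.com/nagiosxi/api/v1/objects/hoststatus?apikey=<API-Key>&pretty=1'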
In the Monq menu section Data Collection→Data Streams create an integration with the AnyStream default template and copy its API-key.
Nagios Core does not natively support the HTTP API interface. Integration with the monitoring system is configured by adding a custom alert script.
⚠️ This example uses the following static fields, since their values cannot be obtained from the notification:
INSTANCE_ID="1" OBJECT_ID="1" LAST_HARD_STATE=0
To configure the stream from the Nagios side:
Enable environment_macros:
enable_environment_macros=1
Add commands:
define command {
command_name send-service-event-to-sm
command_line /usr/local/bin/send_sm 2 > /tmp/sm.log 2>&1
}
define command {
command_name send-host-event-to-sm
command_line /usr/local/bin/send_sm 1 > /tmp/sm.log 2>&1
}
Add contact:
define contact {
use generic-contact
contact_name sm
alias Service Monitor
service_notification_period 24x7
host_notification_period 24x7
host_notifications_enabled 1
service_notifications_enabled 1
service_notification_options w,u,c,r,f
host_notification_options d,u,r,f
service_notification_commands send-service-event-to-sm
host_notification_commands send-host-event-to-sm
register 1
}
Modify the current contactgroup by adding the created contact to it:
define contactgroup{
contactgroup_name admins
alias Nagios Administrators
members nagiosadmin,sm
}
Create a script:
# note the quoted 'EOF': variables and backticks must be written literally into the script
cat > /usr/local/bin/send_sm <<'EOF'
#!/bin/bash
#############################
##### Define constants ######
#############################
SM_URI="<sm uri with proto>"
CONNECTOR_KEY="<key>"
INSTANCE_ID="1"
OBJECT_ID="1"
LAST_HARD_STATE=0
#################################
##### Define dynamic fields #####
#################################
STATE_TIME=`date '+%F %T'`
OBJECTTYPE_ID=$1
HOST_NAME=$NAGIOS_HOSTNAME
SERVICE_DESCRIPTION=$NAGIOS_SERVICEDESC
if [[ "$1" == "1" ]];then
STATE=$NAGIOS_HOSTSTATEID
LAST_STATE=$NAGIOS_LASTHOSTSTATEID
STATE_TYPE_NAME=$NAGIOS_HOSTSTATETYPE
ATTEMPT=$NAGIOS_HOSTATTEMPT
MAX_ATTEMPTS=$NAGIOS_MAXHOSTATTEMPTS
OUTPUT=$NAGIOS_HOSTOUTPUT
else
STATE=$NAGIOS_SERVICESTATEID
LAST_STATE=$NAGIOS_LASTSERVICESTATEID
STATE_TYPE_NAME=$NAGIOS_SERVICESTATETYPE
ATTEMPT=$NAGIOS_SERVICEATTEMPT
MAX_ATTEMPTS=$NAGIOS_MAXSERVICEATTEMPTS
OUTPUT=$NAGIOS_SERVICEOUTPUT
fi
if [[ "$STATE" != "LAST_STATE" ]];then
STATE_CHANGE=1
else
STATE_CHANGE=0
fi
if [[ "$STATE_TYPE_NAME" == "HARD" ]];then
STATE_TYPE=1
else
STATE_TYPE=0
fi
#############################
##### Send http request #####
#############################
curl -X POST -H "Content-Type: application/json" "$SM_URI/api/public/sm/v1/events-aggregator?connectorKey=$CONNECTOR_KEY" \
-d "{
\"recordcount\": \"1\",
\"stateentry\": [
{
\"instance_id\": \"$INSTANCE_ID\",
\"state_time\": \"$STATE_TIME\",
\"object_id\": \"$OBJECT_ID\",
\"objecttype_id\": \"$1\",
\"host_name\": \"$HOST_NAME\",
\"service_description\": \"$SERVICE_DESCRIPTION\",
\"state_change\": \"$STATE_CHANGE\",
\"state\": \"$STATE\",
\"state_type\": \"$STATE_TYPE\",
\"current_check_attempt\": \"$ATTEMPT\",
\"max_check_attempts\": \"$MAX_ATTEMPTS\",
\"last_state\": \"$LAST_STATE\",
\"last_hard_state\": \"$LAST_HARD_STATE\",
\"output\": \"$OUTPUT\"
}
]
}"
EOF
chmod +x /usr/local/bin/send_sm
Restart Nagios Core to apply the new config.
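To check delivery to Monq without waiting for a real notification, the script can be run by hand with the environment macros set manually; a sketch, with the host and service names being illustrative:
# emulate the environment macros that Nagios sets for a service notification
export NAGIOS_HOSTNAME="test-host"
export NAGIOS_SERVICEDESC="test-service"
export NAGIOS_SERVICESTATEID=2
export NAGIOS_LASTSERVICESTATEID=0
export NAGIOS_SERVICESTATETYPE="HARD"
export NAGIOS_SERVICEATTEMPT=3
export NAGIOS_MAXSERVICEATTEMPTS=3
export NAGIOS_SERVICEOUTPUT="CRITICAL - test event"
# "2" selects the service branch of the script; the curl response is printed to the terminal
/usr/local/bin/send_sm 2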
An example of setting up a data stream with an external "Fluentd" service through the configuration template AnyStream default.
To send log messages to the Monq system, the following conditions must be met:
- the log record must contain a @timestamp field in the format "2019-11-02T17:23:59.301361+03:00";
- the data must be sent in application/json format;
- the data must be sent using the out_http plugin.
Next, configure fluentd:
Install the fluentd package.
Install the out_http plugin:
fluent-gem install fluent-plugin-out-http
Add a timestamp entry to the log. To do this, add a filter block to the configuration file, for example, for entries with the tag kubernetes.var.log.containers.nginx-ingress-**.log:
<filter kubernetes.var.log.containers.nginx-ingress-**.log>
  @type record_transformer
  enable_ruby
  <record>
    @timestamp ${time.strftime('%Y-%m-%dT%H:%M:%S.%6N%:z')}
  </record>
</filter>
In the data sending block, add dispatching of logs to Monq by using the @type copy mechanism:
<match **>
  @type copy
  <store>
    @type stdout
    format json
  </store>
  <store>
    ...
  </store>
  <store>
    @type http
    endpoint_url https://{GLOBAL_DOMAIN}/api/public/cl/v1/stream-data?streamKey={API-KEY}
    http_method post
    serializer json
    rate_limit_msec 0
    raise_on_error false
    recoverable_status_codes 503
    buffered true
    bulk_request false
    custom_headers {"X-Smon-Userspace-Id": "1"}
    <buffer>
      ...
    </buffer>
  </store>
</match>
{GLOBAL_DOMAIN} – the address of your Monq space, for example sm.monq.cloud.
{API-KEY} – the API key copied from the data stream page.
Apply the settings and check the cl-stream-data-collector-service microservice logs in follow mode.
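A minimal sketch of such a check on a Kubernetes installation, assuming the collector runs as a deployment in a namespace named monq (both are assumptions):
# stream the collector logs and watch for events arriving from the new data stream
kubectl logs -f deployment/cl-stream-data-collector-service -n monq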
If fluentd is used in a docker container inside kubernetes, rebuild the container with the plugin.
The example uses fluentd-kubernetes-daemonset:v1.10-debian-elasticsearch7-1.
mkdir fluentd-kubernetes; cd fluentd-kubernetes
cat > Dockerfile << EOF
FROM fluent/fluentd-kubernetes-daemonset:v1.10-debian-elasticsearch7-1
RUN fluent-gem install fluent-plugin-out-http
ENTRYPOINT ["tini", "--", "/fluentd/entrypoint.sh"]
EOF
docker build -t fluentd-kubernetes-daemonset:v1.10-debian-elasticsearch7-1_1 .
Click Start at the upper right of the page to enable the data stream.
An example of setting up a data stream with an external "Fluent Bit" service through the configuration template AnyStream default.
The Fluent Bit processor can handle a variety of formats. Below, receiving syslog over UDP and reading a local Docker log file are considered (see the Fluent Bit documentation for other methods of receiving data).
Scheme of sending data to Monq:
On the Monq side, create two data streams with the AnyStream default configuration template and copy their API-keys.
Configure the Fluent Bit as follows:
cat /etc/td-agent-bit/td-agent-bit.conf
[SERVICE]
    flush 5
    daemon Off
    log_level info
    parsers_file parsers.conf
    plugins_file plugins.conf
    http_server On
    http_listen 0.0.0.0
    http_port 2020
    storage.metrics on
@INCLUDE inputs.conf
@INCLUDE outputs.conf
@INCLUDE filters.conf
cat /etc/td-agent-bit/inputs.conf
[INPUT]
    Name syslog
    Parser syslog-rfc3164
    Listen 0.0.0.0
    Port 514
    Mode udp
    Tag syslog
[INPUT]
    Name tail
    Tag docker
    Path /var/lib/docker/containers/*/*.log
    Parser docker
    DB /var/log/flb_docker.db
    Mem_Buf_Limit 10MB
    Skip_Long_Lines On
    Refresh_Interval 10
cat /etc/td-agent-bit/outputs.conf
[OUTPUT]
    Name http
    Host ${MONQ_URL}
    Match syslog
    URI /api/public/cl/v1/stream-data
    Header x-smon-stream-key ${KEY1}
    Header Content-Type application/x-ndjson
    Format json_lines
    Json_date_key @timestamp
    Json_date_format iso8601
    allow_duplicated_headers false
[OUTPUT]
    Name http
    Host ${MONQ_URL}
    Match docker
    URI /api/public/cl/v1/stream-data
    Header x-smon-stream-key ${KEY2}
    Header Content-Type application/x-ndjson
    Format json_lines
    Json_date_key @timestamp
    Json_date_format iso8601
    allow_duplicated_headers false
${MONQ_URL} – the address of your Monq space, for example sm.monq.cloud.
${KEY1}, ${KEY2} – the API keys copied from the Monq data stream pages.
After modifying the config files, restart Fluent Bit to apply the new settings.
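To check the syslog input end to end, a test message can be sent over UDP with the util-linux logger utility (port 514 as configured above; the message text is arbitrary):
# send a single UDP datagram to the local Fluent Bit syslog input
logger --server 127.0.0.1 --port 514 --udp "monq fluent-bit test message"
# the message should then appear in the corresponding Monq data stream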
The example uses the standard parsers provided with Fluent Bit. If necessary, you can implement a new parser and place it in the configuration (see the Fluent Bit documentation for details).
Consider receiving a log file from a server via Logstash.
Create in the Monq system a data stream with the AnyStream default configuration template and copy its API-key.
On the server from which the logs will be transferred, install the logstash component of the ELK stack:
root@server$ apt-get install logstash
Create a configuration file monq-logstash.conf in the Logstash directory with the following content:
input {
  stdin {
    type => "logstash-monq"
  }
}
filter {
}
output {
  http {
    url => "https://{GLOBAL_DOMAIN}/api/public/cl/v1/stream-data?streamKey={API-KEY}"
    http_method => "post"
  }
}
{GLOBAL_DOMAIN} – the address of your Monq space, for example sm.monq.cloud.
{API-KEY} – the API key copied from the data stream page.
In this example, the log file is transferred to Monq through the standard input <STDIN> without additional processing or filtering by Logstash.
For more information on working with Logstash, see the ELK documentation.
Run the following command on the server with logstash to send the log file:
root@server$ cat {logfile} | nice /usr/share/logstash/bin/logstash -f monq-logstash.conf
Go to the Raw events screen of the Monq platform, in the list of data streams select the previously created integration, and view the data received from the log file.
To receive topology synchronization and VM migration events from VMware and build the vSphere service model in Monq, do the following:
Create a data stream with the vCenter default configuration template.
Go to the data stream Settings tab and fill in the fields:
- apiUri - the address at which the VMware vCenter web interface is available. ⚠️ apiUri must contain a URL in the format vcenter.company.com, without specifying the protocol or the path to the SDK.
- login - a vCenter user with rights sufficient to receive events about the topology changes or individual object state changes that you want to synchronize.
- password - the vCenter user password.
Go to the Configuration tab and set up CM autobuild routing to route events to the SM Autobuild service.
Click Save to save the settings.
Click Run at the upper right of the page to enable the data stream.
Reference
In the current version, Monq supports the following types of vCenter events:
- VmMigratedEvent;
- DrsVmMigratedEvent;
- HostAddedEvent;
- HostRemovedEvent;
- VmCreatedEvent;
- VmRemovedEvent.
Example of a VmMigratedEvent event:
[
  {
    "Key": 11518,
    "EventType": "vim.event.VmMigratedEvent",
    "ChainId": 11515,
    "CreatedTime": "2021-08-10T06:39:25.448Z",
    "UserName": "VSPHERE.LOCAL\\ryzhikovav",
    "Net": null,
    "Dvs": null,
    "FullFormattedMessage": "Migration of virtual machine vcenter-test from pion02.devel.ifx, Storwize3700 to pion01.devel.ifx, Storwize3700 completed",
    "ChangeTag": null,
    "Vm": {
      "Id": "vm-23",
      "Name": "vcenter-test",
      "Type": "VirtualMachine",
      "TargetHost": {
        "Id": "host-15",
        "Name": "pion01.devel.ifx",
        "Type": "HostSystem",
        "Cluster": {
          "Id": "domain-c7",
          "Name": "clHQ-test",
          "Type": "ClusterComputeResource",
          "Datacenter": {
            "Id": "datacenter-2",
            "Name": "dcHQ-test",
            "Type": "Datacenter"
          }
        }
      },
      "SourceHost": {
        "Id": "host-12",
        "Name": "pion02.devel.ifx",
        "Type": "HostSystem",
        "Cluster": {
          "Id": "domain-c7",
          "Name": "clHQ-test",
          "Type": "ClusterComputeResource",
          "Datacenter": {
            "Id": "datacenter-0",
            "Name": "dcHQ-test",
            "Type": "Datacenter"
          }
        }
      }
    }
  },
  {
    "Key": 11946,
    "ChainId": 11943,
    "CreatedTime": "2021-08-10T20:37:30.995999Z",
    "UserName": "VSPHERE.LOCAL\\ryzhikovav",
    "Net": null,
    "Dvs": null,
    "FullFormattedMessage": "Migration of virtual machine vcenter-test from pion01.devel.ifx, Storwize3700 to pion02.devel.ifx, Storwize3700 completed",
    "ChangeTag": null,
    "Vm": {
      "Id": "vm-23",
      "Name": "vcenter-test",
      "Type": "VirtualMachine",
      "TargetHost": {
        "Id": "host-12",
        "Name": "pion02.devel.ifx",
        "Type": "HostSystem",
        "Cluster": {
          "Id": "domain-c7",
          "Name": "clHQ-test",
          "Type": "ClusterComputeResource",
          "Datacenter": {
            "Id": "datacenter-2",
            "Name": "dcHQ-test",
            "Type": "Datacenter"
          }
        }
      },
      "SourceHost": {
        "Id": "host-15",
        "Name": "pion01.devel.ifx",
        "Type": "HostSystem",
        "Cluster": {
          "Id": "domain-c7",
          "Name": "clHQ-test",
          "Type": "ClusterComputeResource",
          "Datacenter": {
            "Id": "datacenter-0",
            "Name": "dcHQ-test",
            "Type": "Datacenter"
          }
        }
      }
    }
  }
]