The _Splunk_ output plugin lets you ingest your records into a [Splunk Enterprise](https://www.splunk.com/en_us/products/splunk-enterprise.html) service through the HTTP Event Collector (HEC) interface.

To learn how to set up the HEC in Splunk, refer to [Splunk / Use the HTTP Event Collector](http://docs.splunk.com/Documentation/Splunk/7.0.3/Data/UsetheHTTPEventCollector).

## Configuration parameters

Connectivity, transport, and authentication configuration properties:

| Key | Description | Default |
|:----|:------------|:--------|
| `host` | IP address or hostname of the target Splunk service. | `127.0.0.1` |
| `port` | TCP port of the target Splunk service. | `8088` |
| `splunk_token` | Specify the authentication token for the HTTP Event Collector interface. | _none_ |
| `http_user` | Optional username for basic authentication on HEC. | _none_ |
| `http_passwd` | Password for the user defined in `http_user`. | _none_ |
| `http_buffer_size` | Buffer size used to receive Splunk HTTP responses. | `2M` |
| `compress` | Set payload compression mechanism. Allowed value: `gzip`. | _none_ |
| `channel` | Specify the `X-Splunk-Request-Channel` header for the HTTP Event Collector interface. | _none_ |
| `http_debug_bad_request` | If the HTTP server response code is `400` (bad request) and this flag is enabled, print the full HTTP request and response to the stdout interface. This feature is available for debugging purposes. | _none_ |
| `workers` | The number of [workers](../../administration/multithreading.md#outputs) to perform flush operations for this output. | `2` |

Content and Splunk metadata (fields) handling configuration properties:

| Key | Description | Default |
|:----|:------------|:--------|
| `splunk_send_raw` | When enabled, the record keys and values are set in the top level of the map instead of under the `event` key. See [Sending raw events](#sending-raw-events) to configure this option. | `off` |
| `event_key` | Specify the key name that will be used to send a single value as part of the record. | _none_ |
| `event_host` | Specify the key name that contains the host value. This option allows a record accessor pattern. | _none_ |
| `event_source` | Set the source value to assign to the event data. | _none_ |
| `event_sourcetype` | Set the `sourcetype` value to assign to the event data. | _none_ |
| `event_sourcetype_key` | Set a record key that will populate `sourcetype`. If the key is found, it will have precedence over the value set in `event_sourcetype`. | _none_ |
| `event_index` | The name of the index by which the event data is to be indexed. | _none_ |
| `event_index_key` | Set a record key that will populate the `index` field. If the key is found, it will have precedence over the value set in `event_index`. | _none_ |
| `event_field` | Set event fields for the record. This option can be set multiple times. The format is `key_name record_accessor_pattern`. | _none_ |
### TLS / SSL

The Splunk output plugin supports TLS/SSL.
For more details about the properties available and general configuration, see [TLS/SSL](../../administration/transport-security.md).

## Get started

To insert records into a Splunk service, you can run the plugin from the command line or through the configuration file.

### Command line

The Splunk plugin can read parameters from the command line through the `-p` argument (property):

```shell
fluent-bit -i cpu -t cpu -o splunk -p host=127.0.0.1 -p port=8088 \
  -p tls=on -p tls.verify=off -m '*'
```

### Configuration file

In your main configuration file, append the following sections:

{% tabs %}
{% tab title="fluent-bit.yaml" %}

```yaml
pipeline:
  inputs:
    - name: cpu
      tag: cpu

  outputs:
    - name: splunk
      match: '*'
      host: 127.0.0.1
      port: 8088
      tls: on
      tls.verify: off
```

{% endtab %}
{% endtabs %}

By default, the Splunk output plugin nests the record under the `event` key in the payload sent to the HEC. It will also append the time of the record to a top-level `time` key.
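For example, under these defaults a simple record would reach the HEC as a payload along these lines (the key names and values are illustrative, not taken from a real capture):

```json
{
  "time": 1520183108.152,
  "event": {
    "cpu_p": 0.5,
    "user_p": 0.25
  }
}
```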

To customize any of the Splunk event metadata, such as the host or target index, you can set `Splunk_Send_Raw On` in the plugin configuration, and add the metadata as keys/values in the record. With `Splunk_Send_Raw` enabled, you are responsible for creating and populating the `event` section of the payload.

For example, to add a custom index and hostname:

{% tabs %}
{% tab title="fluent-bit.yaml" %}

```yaml
pipeline:
  inputs:
    - name: cpu
      tag: cpu

  filters:
    # Nest the record under the 'event' key
    - name: nest
      match: '*'
      operation: nest
      wildcard: '*'
      nest_under: event

    - name: modify
      match: '*'
      add:
        - index my-splunk-index
        - host my-host
```

{% endtab %}
{% endtabs %}

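Assuming the `cpu` input emits keys such as `cpu_p` (the names and values here are illustrative), this configuration would create a payload along these lines:

```json
{
  "time": 1520183108.152,
  "event": {
    "cpu_p": 0.5,
    "user_p": 0.25
  },
  "index": "my-splunk-index",
  "host": "my-host"
}
```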
For more information on the Splunk HEC payload format and all event metadata Splunk accepts, see [About HEC](http://docs.splunk.com/Documentation/Splunk/latest/Data/AboutHEC).

### Sending raw events

If the option `splunk_send_raw` has been enabled, the user must add all log details in the event field, and only specify fields known to Splunk in the top-level event. If there is a mismatch, Splunk returns an HTTP `400 Bad Request` status code.

With Splunk version 8.0 and later, you can use the Fluent Bit Splunk output plugin to send data to metric indices. This lets you perform visualizations, metric queries, and analysis with other metrics you might be collecting. This is based on Splunk 8.0 support for multiple metrics in a single JSON payload. For more details, see the [Splunk metrics documentation](https://docs.splunk.com/Documentation/Splunk/9.4.2/Metrics/GetMetricsInOther#The_multiple-metric_JSON_format).

Sending to a Splunk metric index requires the `Splunk_send_raw` option to be enabled and the message to be formatted properly. This includes these specific operations:

- Nest metric events under a `fields` property.
- Add `metric_name:` to all metrics.
- Add `index`, `source`, and `sourcetype` as fields in the message.
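Applied together, these operations yield a message shaped like Splunk's multiple-metric JSON format. A sketch of such a message (all names and values here are illustrative):

```json
{
  "time": 1520183108.152,
  "index": "cpu-metrics",
  "source": "fluent-bit",
  "sourcetype": "custom",
  "fields": {
    "metric_name:cpu_p": 0.5,
    "metric_name:user_p": 0.25
  }
}
```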

### Example configuration

The following configuration gathers CPU metrics, nests the appropriate fields, adds the required identifiers, and then sends the result to Splunk.

{% tabs %}
{% tab title="fluent-bit.yaml" %}

```yaml
pipeline:
  inputs:
    - name: cpu
      tag: cpu

  filters:
    # Move CPU metrics to be nested under "fields" and
    # add the prefix "metric_name:" to all metrics
    - name: nest
      match: cpu
      operation: nest
      wildcard: '*'
      nest_under: fields
      add_prefix: 'metric_name:'

    # Add index, source, sourcetype
    - name: modify
      match: cpu
      set:
        - index cpu-metrics
        - source fluent-bit
        - sourcetype custom
```

{% endtab %}
{% endtabs %}

## Send metrics events of Fluent Bit

In Fluent Bit 2.0 or later, you can also send Fluent Bit's metrics-type events into Splunk using Splunk HEC. This lets you perform visualizations, metric queries, and analysis on directly sent Fluent Bit metrics. This is based on Splunk 8.0 support for multiple metrics in a single concatenated JSON payload.

Sending Fluent Bit metrics into Splunk requires using the plugins that collect Fluent Bit metrics. Whether events are of the logs or metrics type is distinguished automatically, so you don't need to pay attention to the type of events.

This example includes two specific operations:

- Collect node or Fluent Bit internal metrics.
- Send metrics as a single concatenated JSON payload.
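
The two steps above can be sketched with the `fluentbit_metrics` input plugin; the tag, token, and connection values below are illustrative assumptions, not taken from the original example:

```yaml
pipeline:
  inputs:
    # Collect Fluent Bit's own internal metrics
    - name: fluentbit_metrics
      tag: internal_metrics
      scrape_interval: 2

  outputs:
    # Metrics events are detected automatically and sent
    # as a single concatenated JSON payload
    - name: splunk
      match: internal_metrics
      host: 127.0.0.1
      port: 8088
      splunk_token: 11111111-2222-3333-4444-555555555555
      tls: on
      tls.verify: off
```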