Commit 34430ed

Merge pull request #1975 from fluent/lynettemiles/sc-136267/update-fluent-bit-docs-pipeline-outputs-splunk

2 parents df2a9a6 + 25d1b57
File tree: 1 file changed, pipeline/outputs/splunk.md (+77 additions, −84 deletions)
@@ -4,62 +4,62 @@ description: Send logs to Splunk HTTP Event Collector
 
 # Splunk
 
-Splunk output plugin allows to ingest your records into a [Splunk Enterprise](https://www.splunk.com/en_us/products/splunk-enterprise.html) service through the HTTP Event Collector \(HEC\) interface.
-
-To get more details about how to set up the HEC in Splunk please refer to the following documentation: [Splunk / Use the HTTP Event Collector](http://docs.splunk.com/Documentation/Splunk/7.0.3/Data/UsetheHTTPEventCollector)
-
-## Configuration Parameters
-
-Connectivity, transport and authentication configuration properties:
-
-| Key | Description | default |
-|:-----------------------|:------------|:----------|
-| host | IP address or hostname of the target Splunk service. | 127.0.0.1 |
-| port | TCP port of the target Splunk service. | 8088 |
-| splunk\_token | Specify the Authentication Token for the HTTP Event Collector interface. | |
-| http\_user | Optional username for Basic Authentication on HEC | |
-| http\_passwd | Password for user defined in HTTP\_User | |
-| http\_buffer\_size | Buffer size used to receive Splunk HTTP responses | 2M |
-| compress | Set payload compression mechanism. The only available option is `gzip`. | |
-| channel | Specify X-Splunk-Request-Channel Header for the HTTP Event Collector interface. | |
-| http_debug_bad_request | If the HTTP server response code is 400 (bad request) and this flag is enabled, it will print the full HTTP request and response to the stdout interface. This feature is available for debugging purposes. | |
-| workers | The number of [workers](../../administration/multithreading.md#outputs) to perform flush operations for this output. | `2` |
-
-Content and Splunk metadata \(fields\) handling configuration properties:
-
-| Key | Description | default |
-|:-----------------------|:------------|:--------|
-| splunk\_send\_raw | When enabled, the record keys and values are set in the top level of the map instead of under the event key. Refer to the _Sending Raw Events_ section from the docs for more details to make this option work properly. | off |
-| event\_key | Specify the key name that will be used to send a single value as part of the record. | |
-| event\_host | Specify the key name that contains the host value. This option allows a record accessors pattern. | |
-| event\_source | Set the source value to assign to the event data. | |
-| event\_sourcetype | Set the sourcetype value to assign to the event data. | |
-| event\_sourcetype\_key | Set a record key that will populate 'sourcetype'. If the key is found, it will have precedence over the value set in `event_sourcetype`. | |
-| event\_index | The name of the index by which the event data is to be indexed. | |
-| event\_index\_key | Set a record key that will populate the `index` field. If the key is found, it will have precedence over the value set in `event_index`. | |
-| event\_field | Set event fields for the record. This option can be set multiple times and the format is `key_name record_accessor_pattern`. | |
+The _Splunk_ output plugin lets you ingest your records into a [Splunk Enterprise](https://www.splunk.com/en_us/products/splunk-enterprise.html) service through the HTTP Event Collector (HEC) interface.
+
+To learn how to set up the HEC in Splunk, refer to [Splunk / Use the HTTP Event Collector](http://docs.splunk.com/Documentation/Splunk/7.0.3/Data/UsetheHTTPEventCollector).
+
+## Configuration parameters
+
+Connectivity, transport, and authentication configuration properties:
+
+| Key | Description | Default |
+|:----|:------------|:--------|
+| `host` | IP address or hostname of the target Splunk service. | `127.0.0.1` |
+| `port` | TCP port of the target Splunk service. | `8088` |
+| `splunk_token` | Specify the authentication token for the HTTP Event Collector interface. | _none_ |
+| `http_user` | Optional username for basic authentication on HEC. | _none_ |
+| `http_passwd` | Password for the user defined in `http_user`. | _none_ |
+| `http_buffer_size` | Buffer size used to receive Splunk HTTP responses. | `2M` |
+| `compress` | Set payload compression mechanism. Allowed value: `gzip`. | _none_ |
+| `channel` | Specify the `X-Splunk-Request-Channel` header for the HTTP Event Collector interface. | _none_ |
+| `http_debug_bad_request` | If the HTTP server response code is `400` (bad request) and this flag is enabled, print the full HTTP request and response to the stdout interface. This feature is available for debugging purposes. | _none_ |
+| `workers` | The number of [workers](../../administration/multithreading.md#outputs) to perform flush operations for this output. | `2` |
+
+Content and Splunk metadata (fields) handling configuration properties:
+
+| Key | Description | Default |
+|:----|:------------|:--------|
+| `splunk_send_raw` | When enabled, the record keys and values are set in the top level of the map instead of under the event key. See [Sending raw events](#sending-raw-events) to configure this option. | `off` |
+| `event_key` | Specify the key name that will be used to send a single value as part of the record. | _none_ |
+| `event_host` | Specify the key name that contains the host value. This option allows a record accessor pattern. | _none_ |
+| `event_source` | Set the source value to assign to the event data. | _none_ |
+| `event_sourcetype` | Set the `sourcetype` value to assign to the event data. | _none_ |
+| `event_sourcetype_key` | Set a record key that will populate `sourcetype`. If the key is found, it will have precedence over the value set in `event_sourcetype`. | _none_ |
+| `event_index` | The name of the index by which the event data is to be indexed. | _none_ |
+| `event_index_key` | Set a record key that will populate the `index` field. If the key is found, it will have precedence over the value set in `event_index`. | _none_ |
+| `event_field` | Set event fields for the record. This option can be set multiple times and the format is `key_name record_accessor_pattern`. | _none_ |
 
 ### TLS / SSL
 
-The Splunk output plugin supports TLS/SSL.
+The Splunk output plugin supports TLS/SSL.
 For more details about the properties available and general configuration, see [TLS/SSL](../../administration/transport-security.md).
 
-## Getting Started
+## Get started
 
-In order to insert records into a Splunk service, you can run the plugin from the command line or through the configuration file:
+To insert records into a Splunk service, you can run the plugin from the command line or through the configuration file.
 
-### Command Line
+### Command line
 
-The **splunk** plugin, can read the parameters from the command line in two ways, through the **-p** argument \(property\), e.g:
+The Splunk plugin can read the parameters from the command line through the `-p` argument (property):
 
 ```shell
 fluent-bit -i cpu -t cpu -o splunk -p host=127.0.0.1 -p port=8088 \
     -p tls=on -p tls.verify=off -m '*'
 ```
 
-### Configuration File
+### Configuration file
 
-In your main configuration file append the following _Input_ & _Output_ sections:
+In your main configuration file, append the following sections:
 
 {% tabs %}
 {% tab title="fluent-bit.yaml" %}
@@ -69,11 +69,11 @@ pipeline:
   inputs:
     - name: cpu
       tag: cpu
-
+
   outputs:
     - name: splunk
       match: '*'
-      host: 127.0.0.1
+      host: 127.0.0.1
       port: 8088
       tls: on
       tls.verify: off
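For readers tracing what the configuration above amounts to on the wire, the sketch below (plain Python, not Fluent Bit's code) builds the kind of HTTP request the plugin sends to the HEC. The `/services/collector/event` path and the `Authorization: Splunk <token>` header are standard Splunk HEC conventions assumed here, not details taken from this page.

```python
import json
import urllib.request

def build_hec_request(host: str, port: int, token: str,
                      record: dict) -> urllib.request.Request:
    """Sketch of an HEC event request (illustrative, not the plugin's code)."""
    # Standard Splunk HEC event endpoint (assumption, not from this page).
    url = f"https://{host}:{port}/services/collector/event"
    # By default the record is nested under the 'event' key.
    body = json.dumps({"event": record}).encode()
    return urllib.request.Request(
        url,
        data=body,
        headers={
            "Authorization": f"Splunk {token}",  # value of splunk_token
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_hec_request("127.0.0.1", 8088, "YOUR-HEC-TOKEN", {"log": "hello"})
```

The request is only constructed here, not sent; in a real deployment the plugin also applies the `tls`/`tls.verify` settings to the connection itself.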
@@ -103,11 +103,10 @@ pipeline:
 
 By default, the Splunk output plugin nests the record under the `event` key in the payload sent to the HEC. It will also append the time of the record to a top level `time` key.
 
-If you would like to customize any of the Splunk event metadata, such as the host or target index, you can set `Splunk_Send_Raw On` in the plugin configuration, and add the metadata as keys/values in the record. _Note_: with `Splunk_Send_Raw` enabled, you are responsible for creating and populating the `event` section of the payload.
+To customize any of the Splunk event metadata, such as the host or target index, you can set `Splunk_Send_Raw On` in the plugin configuration, and add the metadata as keys/values in the record. With `Splunk_Send_Raw` enabled, you are responsible for creating and populating the `event` section of the payload.
 
 For example, to add a custom index and hostname:
 
-
 {% tabs %}
 {% tab title="fluent-bit.yaml" %}
 
@@ -116,18 +115,18 @@ pipeline:
   inputs:
     - name: cpu
       tag: cpu
-
+
   filters:
     # nest the record under the 'event' key
     - name: nest
       match: '*'
       operation: nest
       wildcard: '*'
       nest_under: event
-
+
     - name: modify
       match: '*'
-      add:
+      add:
         - index my-splunk-index
         - host my-host
 
@@ -188,41 +187,37 @@ This will create a payload that looks like:
 }
 ```
 
-For more information on the Splunk HEC payload format and all event metadata Splunk accepts, see here: [http://docs.splunk.com/Documentation/Splunk/latest/Data/AboutHEC](http://docs.splunk.com/Documentation/Splunk/latest/Data/AboutHEC)
-
-### Sending Raw Events
-
-If the option `splunk_send_raw` has been enabled, the user must take care to put all log details in the event field, and only specify fields known to Splunk in the top level event, if there is a mismatch, Splunk will return an HTTP error 400.
-
-Consider the following example:
-
-**splunk\_send\_raw off**
-
-```json
-{"time": "SOMETIME", "event": {"k1": "foo", "k2": "bar", "index": "applogs"}}
-```
-
-**splunk\_send\_raw on**
-
-```json
-{"time": "SOMETIME", "k1": "foo", "k2": "bar", "index": "applogs"}
-```
-
-For up-to-date information about the valid keys in the top level object, refer to the Splunk documentation:
-
-[http://docs.splunk.com/Documentation/Splunk/latest/Data/AboutHEC](http://docs.splunk.com/Documentation/Splunk/latest/Data/AboutHEC)
+### Sending raw events
+
+If the option `splunk_send_raw` has been enabled, the user must put all log details in the event field, and only specify fields known to Splunk in the top-level event. If there is a mismatch, Splunk returns an HTTP `400 Bad Request` status code.
+
+Consider the following examples:
+
+- `splunk_send_raw` off
+
+```json
+{"time": "SOMETIME", "event": {"k1": "foo", "k2": "bar", "index": "applogs"}}
+```
+
+- `splunk_send_raw` on
+
+```json
+{"time": "SOMETIME", "k1": "foo", "k2": "bar", "index": "applogs"}
+```
+
+For up-to-date information about the valid keys, see [Getting Data In](https://docs.splunk.com/Documentation/Splunk/7.1.10/Data/AboutHEC).
 
-## Splunk Metric Index
+## Splunk metric index
 
-With Splunk version 8.0> you can also use the Fluent Bit Splunk output plugin to send data to metric indices. This allows you to perform visualizations, metric queries, and analysis with other metrics you may be collecting. This is based off of Splunk 8.0 support of multi metric support via single JSON payload, more details can be found on [Splunk documentation page](https://docs.splunk.com/Documentation/Splunk/8.1.2/Metrics/GetMetricsInOther#The_multiple-metric_JSON_format)
+With Splunk version 8.0 and later, you can use the Fluent Bit Splunk output plugin to send data to metric indices. This lets you perform visualizations, metric queries, and analysis alongside other metrics you might be collecting. It relies on Splunk 8.0 support for multiple metrics in a single JSON payload; for details, see the [Splunk metrics documentation](https://docs.splunk.com/Documentation/Splunk/9.4.2/Metrics/GetMetricsInOther#The_multiple-metric_JSON_format).
 
-Sending to a Splunk Metric index requires the use of `Splunk_send_raw` option being enabled and formatting the message properly. This includes three specific operations
+Sending to a Splunk metric index requires enabling the `splunk_send_raw` option and formatting the message properly. This includes these specific operations:
 
-* Nest metric events under a "fields" property
-* Add `metric_name:` to all metrics
-* Add index, source, sourcetype as fields in the message
-
-### Example Configuration
+- Nest metric events under a `fields` property
+- Add `metric_name:` to all metrics
+- Add `index`, `source`, `sourcetype` as fields in the message
+
+### Example configuration
 
 The following configuration gathers CPU metrics, nests the appropriate field, adds the required identifiers and then sends to Splunk.
 
@@ -234,7 +229,7 @@ pipeline:
   inputs:
     - name: cpu
       tag: cpu
-
+
   filters:
     # Move CPU metrics to be nested under "fields" and
     # add the prefix "metric_name:" to all metrics
@@ -246,10 +241,10 @@ pipeline:
       nest_under: fields
       add_prefix: 'metric_name:'
 
-    # Add index, source, sourcetype
+    # Add index, source, sourcetype
     - name: modify
       match: cpu
-      set:
+      set:
         - index cpu-metrics
         - source fluent-bit
         - sourcetype custom
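The `nest` and `modify` filters above reshape each record before it reaches the output. As a rough illustration of the resulting record shape, here is a minimal sketch in plain Python (the function and key names are illustrative; the transformation is normally done by the filters, not by user code):

```python
import json

def to_metric_record(metrics: dict, index: str, source: str,
                     sourcetype: str) -> dict:
    """Sketch of the multi-metric record shape described above."""
    # 1. Nest metric events under a 'fields' property,
    #    adding the 'metric_name:' prefix to every metric key.
    fields = {f"metric_name:{name}": value for name, value in metrics.items()}
    # 2. Add index, source, and sourcetype as fields in the message
    #    (splunk_send_raw must be enabled for Splunk to honor them).
    return {
        "index": index,
        "source": source,
        "sourcetype": sourcetype,
        "fields": fields,
    }

record = to_metric_record({"cpu_p": 0.25, "user_p": 0.1},
                          index="cpu-metrics", source="fluent-bit",
                          sourcetype="custom")
print(json.dumps(record))
```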
@@ -306,19 +301,17 @@ pipeline:
 {% endtab %}
 {% endtabs %}
 
-## Send Metrics Events of Fluent Bit
+## Send metrics events of Fluent Bit
 
-Starting with Fluent Bit 2.0, you can also send Fluent Bit's metrics type of events into Splunk via Splunk HEC.
-This allows you to perform visualizations, metric queries, and analysis with directly sent Fluent Bit's metrics type of events.
-This is based off Splunk 8.0 support of multi metric support via single concatenated JSON payload.
+In Fluent Bit 2.0 or later, you can send Fluent Bit's own metrics-type events into Splunk using Splunk HEC. This lets you perform visualizations, metric queries, and analysis directly on Fluent Bit metrics. This is based on Splunk 8.0 support for multiple metrics in a single concatenated JSON payload.
 
-Sending Fluent Bit's metrics into Splunk requires the use of collecting Fluent Bit's metrics plugins.
-Note that whether events type of logs or metrics can be distinguished automatically.
+Sending Fluent Bit metrics into Splunk requires using plugins that collect Fluent Bit metrics. Whether an event is a log or a metric is distinguished automatically.
 You don't need to pay attention to the type of events.
+
 This example includes two specific operations:
 
-* Collect node or Fluent Bit's internal metrics
-* Send metrics as single concatenated JSON payload
+- Collect node or Fluent Bit internal metrics
+- Send metrics as single concatenated JSON payload
 
 {% tabs %}
 {% tab title="fluent-bit.yaml" %}
@@ -328,7 +321,7 @@ pipeline:
   inputs:
     - name: node_exporter_metrics
       tag: node_exporter_metrics
-
+
   outputs:
     - name: splunk
       match: '*'
@@ -358,4 +351,4 @@ pipeline:
 ```
 
 {% endtab %}
-{% endtabs %}
\ No newline at end of file
+{% endtabs %}
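As a recap of the `splunk_send_raw` behavior documented in this change, the two payload shapes can be sketched in plain Python (illustrative only, not the plugin's implementation; `"SOMETIME"` stands in for the record timestamp):

```python
import json

def to_hec_payload(record: dict, send_raw: bool = False) -> dict:
    """Sketch of the payload shapes with splunk_send_raw off vs. on."""
    if send_raw:
        # splunk_send_raw on: record keys sit at the top level, and the
        # user is responsible for supplying any 'event' section themselves.
        return {"time": "SOMETIME", **record}
    # Default: the record is nested under 'event' with a top-level 'time'.
    return {"time": "SOMETIME", "event": record}

print(json.dumps(to_hec_payload({"k1": "foo", "k2": "bar", "index": "applogs"})))
print(json.dumps(to_hec_payload({"k1": "foo", "k2": "bar", "index": "applogs"},
                                send_raw=True)))
```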
