diff --git a/imgs/processor_opentelemetry_envelope.png b/.gitbook/assets/processor_opentelemetry_envelope.png
similarity index 100%
rename from imgs/processor_opentelemetry_envelope.png
rename to .gitbook/assets/processor_opentelemetry_envelope.png
diff --git a/imgs/traces_head_sampling.png b/.gitbook/assets/traces_head_sampling.png
similarity index 100%
rename from imgs/traces_head_sampling.png
rename to .gitbook/assets/traces_head_sampling.png
diff --git a/imgs/traces_tail_sampling.png b/.gitbook/assets/traces_tail_sampling.png
similarity index 100%
rename from imgs/traces_tail_sampling.png
rename to .gitbook/assets/traces_tail_sampling.png
diff --git a/.gitbook/includes/untitled.md b/.gitbook/includes/untitled.md
deleted file mode 100644
index 8441b798d..000000000
--- a/.gitbook/includes/untitled.md
+++ /dev/null
@@ -1,7 +0,0 @@
----
-title: Untitled
----
-
-{% embed url="https://o11y-workshops.gitlab.io/workshop-fluentbit/lab01.html" fullWidth="false" %}
-Lab 1 - Introduction to Fluent Bit
-{% endembed %}
diff --git a/administration/monitoring.md b/administration/monitoring.md
index 33efe75ed..811387d47 100644
--- a/administration/monitoring.md
+++ b/administration/monitoring.md
@@ -354,7 +354,7 @@ name:
 You can create Grafana dashboards and alerts using Fluent Bit's exposed
 Prometheus style metrics.
 
-The provided [example dashboard](https://github.com/fluent/fluent-bit-docs/tree/8172a24d278539a1420036a9434e9f56d987a040/monitoring/dashboard.json)
+The provided [example dashboard](https://github.com/fluent/fluent-bit-docs/blob/master/monitoring/dashboard.json)
 is heavily inspired by [Banzai Cloud](https://banzaicloud.com)'s
 [logging operator dashboard](https://grafana.com/grafana/dashboards/7752)
 with a few key differences, such as the use of the `instance` label, stacked
 graphs, and a focus
@@ -366,7 +366,7 @@ for more information.
 
 ### Alerts
 
-Sample alerts are available [here](https://github.com/fluent/fluent-bit-docs/tree/8172a24d278539a1420036a9434e9f56d987a040/monitoring/alerts.yaml).
+Sample alerts are available [here](https://github.com/fluent/fluent-bit-docs/blob/master/monitoring/alerts.yaml).
 
 ## Health Check for Fluent Bit
diff --git a/imgs/logo_documentation_1.6.png b/imgs/logo_documentation_1.6.png
deleted file mode 100644
index 6cd5b7ed0..000000000
Binary files a/imgs/logo_documentation_1.6.png and /dev/null differ
diff --git a/input/example-page-under-group-1.md b/input/example-page-under-group-1.md
deleted file mode 100644
index 2c9158684..000000000
--- a/input/example-page-under-group-1.md
+++ /dev/null
@@ -1,2 +0,0 @@
-# example page under group 1
-
diff --git a/pipeline/processors/opentelemetry-envelope.md b/pipeline/processors/opentelemetry-envelope.md
index f9df45a3c..8e60575c8 100644
--- a/pipeline/processors/opentelemetry-envelope.md
+++ b/pipeline/processors/opentelemetry-envelope.md
@@ -2,7 +2,7 @@
 
 The _OpenTelemetry Envelope_ processor is used to transform your data to be compatible with the OpenTelemetry Log schema. If your data was __not__ generated by [OpenTelemetry input](../inputs/opentelemetry.md) and your backend or destination for your logs expects to be in an OpenTelemetry schema.
 
-![](/imgs/processor_opentelemetry_envelope.png)
+![](../.gitbook/assets/processor_opentelemetry_envelope.png)
 
 ## Configuration Parameters
diff --git a/pipeline/processors/sampling.md b/pipeline/processors/sampling.md
index 5f44b8614..fb378a9a5 100644
--- a/pipeline/processors/sampling.md
+++ b/pipeline/processors/sampling.md
@@ -34,7 +34,7 @@ Sampling has both a name and a type with the following possible settings:
 
 In this example, head sampling will be used to process a smaller percentage of the overall ingested traces and spans. This is done by setting up the pipeline to ingest on the OpenTelemetry defined port as shown below using the OpenTelemetry Protocol (OTLP).
 The processor section defines traces for head sampling and the sampling percentage defining the total ingested traces and spans to be forwarded to the defined output plugins.
 
-![](/imgs/traces_head_sampling.png)
+![](../.gitbook/assets/traces_head_sampling.png)
 
 | Sampling settings | Description |
 | :-------------------- | :------------------------------------------------------------------------------------------------------------------ |
@@ -72,7 +72,7 @@ With this head sampling configuration, a sample set of ingested traces will rand
 
 Tail sampling is used to obtain a more selective and fine grained control over the collection of traces and spans without collecting everything. Below is an example showing the process is a combination of waiting on making a sampling decision together followed by configuration defined conditions to determine the spans to be sampled.
 
-![](/imgs/traces_tail_sampling.png)
+![](../.gitbook/assets/traces_tail_sampling.png)
 
 The following samplings settings are available with their default values:
diff --git a/stream-processing/README.md b/stream-processing/README.md
deleted file mode 100644
index c08be304c..000000000
--- a/stream-processing/README.md
+++ /dev/null
@@ -1,7 +0,0 @@
-# Introduction
-
-![](https://github.com/fluent/fluent-bit-docs/tree/6bc4af039821d9e8bc1636797a25ad23b52a511f/stream-processing/imgs/stream_processor.png)
-
-Fluent Bit is a fast and flexible log processor that collects, parsers, filters, and delivers logs to remote databases, where data analysis can then be performed.
-
-For real-time and complex analysis needs, you can also process the data while it's still in motion through _stream processing on the edge_.
diff --git a/stream-processing/introduction.md b/stream-processing/introduction.md
index d0612bc45..b2f6c7228 100644
--- a/stream-processing/introduction.md
+++ b/stream-processing/introduction.md
@@ -2,6 +2,6 @@
 
 ![](../.gitbook/assets/stream_processor.png)
 
-[Fluent Bit](https://fluentbit.io) is a fast and flexible Log processor that collects, parses, filters and delivers logs to remote databases, so that data analysis can be performed.
+Fluent Bit is a fast and flexible log processor that collects, parses, filters, and delivers logs to remote databases, where data analysis can then be performed.
 
-Data analysis usually happens after the data is stored and indexed in a database. However, for real-time and complex analysis needs, you can process the data while it's still in motion in the log processor. This approach is called **stream processing on the edge**.
+For real-time and complex analysis needs, you can also process the data while it's still in motion through _stream processing on the edge_.