[summerospp]add fluentbit opentelemetry plugin
Signed-off-by: 刘帅军 <liudonglan@192.168.2.101>
刘帅军 committed Aug 30, 2023
1 parent 8c07d09 commit 40ec334
Showing 11 changed files with 111 additions and 42 deletions.
21 changes: 21 additions & 0 deletions docs/fluentbit.md
@@ -38,6 +38,7 @@ This Document documents the types introduced by the fluentbit Operator.
* [ParserSpec](#parserspec)
* [Script](#script)
* [Service](#service)
* [Storage](#storage)
# ClusterFilter

ClusterFilter defines a cluster-level Filter configuration.
@@ -417,6 +418,8 @@ InputSpec defines the desired state of ClusterInput
| prometheusScrapeMetrics | PrometheusScrapeMetrics defines Prometheus Scrape Metrics Input configuration. | *[input.PrometheusScrapeMetrics](plugins/input/prometheusscrapemetrics.md) |
| fluentBitMetrics | FluentBitMetrics defines Fluent Bit Metrics Input configuration. | *[input.FluentbitMetrics](plugins/input/fluentbitmetrics.md) |
| customPlugin | CustomPlugin defines Custom Input configuration. | *custom.CustomPlugin |
| forward | Forward defines Forward input plugin configuration. | *[input.Forward](plugins/input/forward.md) |
| openTelemetry | OpenTelemetry defines OpenTelemetry input plugin configuration. | *[input.OpenTelemetry](plugins/input/opentelemetry.md) |

[Back to TOC](#table-of-contents)
# NamespacedFluentBitCfgSpec
@@ -487,6 +490,7 @@ OutputSpec defines the desired state of ClusterOutput
| splunk | Splunk defines Splunk Output Configuration | *[output.Splunk](plugins/output/splunk.md) |
| opensearch | OpenSearch defines OpenSearch Output configuration. | *[output.OpenSearch](plugins/output/opensearch.md) |
| opentelemetry | OpenTelemetry defines OpenTelemetry Output configuration. | *[output.OpenTelemetry](plugins/output/opentelemetry.md) |
| prometheusExporter | PrometheusExporter defines Prometheus exporter output configuration to expose metrics from Fluent Bit. | *[output.PrometheusExporter](plugins/output/prometheusexporter.md) |
| prometheusRemoteWrite | PrometheusRemoteWrite defines Prometheus Remote Write output configuration. | *[output.PrometheusRemoteWrite](plugins/output/prometheusremotewrite.md) |
| s3 | S3 defines S3 Output configuration. | *[output.S3](plugins/output/s3.md) |
| customPlugin | CustomPlugin defines Custom Output configuration. | *custom.CustomPlugin |
@@ -559,5 +563,22 @@ ParserSpec defines the desired state of ClusterParser
| logFile | File to log diagnostic output | string |
| logLevel | Diagnostic level (error/warning/info/debug/trace) | string |
| parsersFile | Optional 'parsers' config file (can be multiple) | string |
| storage | Configure a global environment for the storage layer in the Service section. It is recommended to configure the volume and volumeMount for this storage separately. The hostPath type should be used for that volume in the Fluent Bit DaemonSet. | *Storage |

[Back to TOC](#table-of-contents)
# Storage

Storage defines the global configuration for the storage layer in the Service section.

| Field | Description | Scheme |
| ----- | ----------- | ------ |
| path | Select an optional location in the file system to store streams and chunks of data. | string |
| sync | Configure the synchronization mode used to store the data into the file system. | string |
| checksum | Enable the data integrity check when writing and reading data from the filesystem. | string |
| backlogMemLimit | This option configures a hint of the maximum amount of memory to use when processing backlog records. | string |
| maxChunksUp | If the input plugin has enabled filesystem storage type, this property sets the maximum number of chunks that can be up in memory. | *int64 |
| metrics | If the http_server option has been enabled in the Service section, this option registers a new endpoint where internal metrics of the storage layer can be consumed. | string |
| deleteIrrecoverableChunks | When enabled, irrecoverable chunks will be deleted during runtime, and any other irrecoverable chunk located in the configured storage path directory will be deleted when Fluent Bit starts. | string |
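
A hypothetical sketch of how the Storage fields above might be wired into the Service section (the apiVersion, kind, and nesting are assumptions based on the operator's CRD naming, not taken from this diff; values are illustrative):

```yaml
apiVersion: fluentbit.fluent.io/v1alpha2
kind: ClusterFluentBitConfig
metadata:
  name: fluent-bit-config
spec:
  service:
    storage:
      path: /fluent-bit/buffers        # back this with a hostPath volume on the DaemonSet, as recommended above
      sync: normal
      checksum: "off"
      backlogMemLimit: 5M
      maxChunksUp: 128
      metrics: "on"                    # requires http_server enabled in the Service section
      deleteIrrecoverableChunks: "on"
```
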

[Back to TOC](#table-of-contents)
9 changes: 6 additions & 3 deletions docs/fluentd.md
@@ -287,9 +287,9 @@ FluentdSpec defines the desired state of Fluentd
| ----- | ----------- | ------ |
| globalInputs | Fluentd global inputs. | [][input.Input](plugins/input/input.md) |
| defaultFilterSelector | Select cluster filter plugins used to filter for the default cluster output | *[metav1.LabelSelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#labelselector-v1-meta) |
- | defaultOutputSelector | Select cluster output plugins used to send all logs that did not match a route to the matching outputs | *[metav1.LabelSelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#labelselector-v1-meta) |
+ | defaultOutputSelector | Select cluster output plugins used to send all logs that did not match any route to the matching outputs | *[metav1.LabelSelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#labelselector-v1-meta) |
| disableService | By default, the related service is built according to the globalInputs definition. | bool |
- | replicas | Numbers of the Fluentd instance | *int32 |
+ | replicas | Number of Fluentd instances. Applicable when the mode is "collector"; ignored when the mode is "agent". | *int32 |
| workers | Numbers of the workers in Fluentd instance | *int32 |
| logLevel | Global logging verbosity | string |
| image | Fluentd image. | string |
Expand All @@ -310,10 +310,13 @@ FluentdSpec defines the desired state of Fluentd
| rbacRules | RBACRules represents additional rbac rules which will be applied to the fluentd clusterrole. | []rbacv1.PolicyRule |
| volumes | List of volumes that can be mounted by containers belonging to the pod. | []corev1.Volume |
| volumeMounts | Pod volumes to mount into the container's filesystem. Cannot be updated. | []corev1.VolumeMount |
- | volumeClaimTemplates | volumeClaimTemplates is a list of claims that pods are allowed to reference. The StatefulSet controller is responsible for mapping network identities to claims in a way that maintains the identity of a pod. Every claim in this list must have at least one matching (by name) volumeMount in one container in the template. | []corev1.PersistentVolumeClaim |
+ | volumeClaimTemplates | volumeClaimTemplates is a list of claims that pods are allowed to reference. The StatefulSet controller is responsible for mapping network identities to claims in a way that maintains the identity of a pod. Every claim in this list must have at least one matching (by name) volumeMount in one container in the template. Applicable when the mode is "collector"; ignored when the mode is "agent". | []corev1.PersistentVolumeClaim |
| service | Service represents configurations on the fluentd service. | FluentDService |
| securityContext | PodSecurityContext represents the security context for the fluentd pods. | *corev1.PodSecurityContext |
| schedulerName | SchedulerName represents the desired scheduler for fluentd pods. | string |
| mode | Mode to determine whether to run Fluentd as collector or agent. | string |
| containerSecurityContext | ContainerSecurityContext represents the security context for the fluentd container. | *corev1.SecurityContext |
| positionDB | Storage for the position db. You will use it if the tail input is enabled. Applicable when the mode is "agent"; ignored when the mode is "collector". | [corev1.VolumeSource](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#volume-v1-core) |
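
A hedged sketch of how the mode-dependent fields above might combine in a Fluentd custom resource (the apiVersion, kind, and all values are illustrative assumptions, not taken from this diff):

```yaml
apiVersion: fluentd.fluent.io/v1alpha1
kind: Fluentd
metadata:
  name: fluentd-agent
spec:
  mode: agent            # "collector" or "agent"
  # replicas and volumeClaimTemplates apply only when mode is "collector"
  positionDB:            # agent mode only: persist tail positions across restarts
    hostPath:
      path: /var/lib/fluentd-pos
```
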

[Back to TOC](#table-of-contents)
# FluentdStatus
16 changes: 16 additions & 0 deletions docs/plugins/fluentbit/input/forward.md
@@ -0,0 +1,16 @@
# Forward

Forward defines the in_forward Input plugin that listens on a TCP socket to receive the event stream. **For full documentation, refer to https://docs.fluentbit.io/manual/pipeline/inputs/forward**


| Field | Description | Scheme |
| ----- | ----------- | ------ |
| port | Port for the forward plugin instance. | *int32 |
| listen | Listener network interface. | string |
| tag | in_forward uses the tag value for incoming logs. If not set, it uses the tag from the incoming log. | string |
| tagPrefix | Adds the prefix to the incoming event's tag. | string |
| unixPath | Specify the path to the unix socket to receive a forward message. If set, listen and port are ignored. | string |
| unixPerm | Set the permission of the unix socket file. | string |
| bufferMaxSize | Specify the maximum buffer memory size used to receive a forward message. The value must conform to the Unit Size specification. | string |
| bufferchunkSize | Set the initial buffer size to store incoming data. This value is also used to increase the buffer size as required. The value must conform to the Unit Size specification. | string |
| threaded | The threaded mechanism allows the input plugin to run in a separate thread, which helps to desaturate the main pipeline. | string |
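
A hypothetical ClusterInput sketch using the fields above (the apiVersion, kind, and enablement label are assumptions based on the operator's CRD conventions; values are illustrative):

```yaml
apiVersion: fluentbit.fluent.io/v1alpha2
kind: ClusterInput
metadata:
  name: forward
  labels:
    fluentbit.fluent.io/enabled: "true"
spec:
  forward:
    port: 24224
    listen: 0.0.0.0
    tagPrefix: forward.
    bufferMaxSize: 6M
    bufferchunkSize: 1M    # note the lowercase "c", matching the field name in the table
```
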
14 changes: 14 additions & 0 deletions docs/plugins/fluentbit/input/open_telemetry.md
@@ -0,0 +1,14 @@
# OpenTelemetry

The OpenTelemetry plugin allows you to ingest telemetry data as per the OTLP specification, <br /> from various OpenTelemetry exporters, the OpenTelemetry Collector, or Fluent Bit's OpenTelemetry output plugin. <br /> **For full documentation, refer to https://docs.fluentbit.io/manual/pipeline/inputs/opentelemetry**


| Field | Description | Scheme |
| ----- | ----------- | ------ |
| listen | The address to listen on. Default: 0.0.0.0. | string |
| port | The port for Fluent Bit to listen on. Default: 4318. | *int32 |
| tagKey | Specify the key name to overwrite a tag. If set, the tag will be overwritten by a value of the key. | string |
| rawTraces | Route trace data as a log message (default: false). | *bool |
| bufferMaxSize | Specify the maximum buffer size in KB to receive a JSON message (default: 4M). | string |
| bufferChunkSize | This sets the chunk size for incoming JSON messages. These chunks are then stored/managed in the space available by buffer_max_size (default: 512K). | string |
| successfulResponseCode | Allows setting the success response code; 200, 201, and 204 are supported (default: 201). | *int32 |
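
A hypothetical ClusterInput sketch for the OpenTelemetry input using the fields above (the apiVersion and kind are assumptions based on the operator's CRD conventions; values mirror the documented defaults):

```yaml
apiVersion: fluentbit.fluent.io/v1alpha2
kind: ClusterInput
metadata:
  name: opentelemetry
spec:
  openTelemetry:
    listen: 0.0.0.0
    port: 4318                   # the standard OTLP/HTTP port
    rawTraces: false             # route traces as structured trace data, not log messages
    bufferMaxSize: 4M
    bufferChunkSize: 512K
    successfulResponseCode: 201
```
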
2 changes: 2 additions & 0 deletions docs/plugins/fluentbit/input/systemd.md
@@ -15,3 +15,5 @@ The Systemd input plugin allows you to collect log messages from the Journald daemon
| systemdFilterType | Define the filter type when Systemd_Filter is specified multiple times. Allowed values are And and Or. With And a record is matched only when all of the Systemd_Filter have a match. With Or a record is matched when any of the Systemd_Filter has a match. | string |
| readFromTail | Start reading new entries. Skip entries already stored in Journald. | string |
| stripUnderscores | Remove the leading underscore of the Journald field (key). For example the Journald field _PID becomes the key PID. | string |
| storageType | Specify the buffering mechanism to use. It can be memory or filesystem. | string |
| pauseOnChunksOverlimit | Specifies if the input plugin should be paused (stop ingesting new data) when the storage.max_chunks_up value is reached. | string |
2 changes: 2 additions & 0 deletions docs/plugins/fluentbit/input/tail.md
@@ -31,3 +31,5 @@ The Tail input plugin allows you to monitor one or several text files. <br /> It has
| dockerModeParser | Specify an optional parser for the first line of the docker multiline mode. The parser name to be specified must be registered in the parsers.conf file. | string |
| disableInotifyWatcher | DisableInotifyWatcher will disable inotify and use the file stat watcher instead. | *bool |
| multilineParser | This helps to reassemble multiline messages originally split by Docker or CRI. Specify one or more Multiline Parser definitions to apply to the content. | string |
| storageType | Specify the buffering mechanism to use. It can be memory or filesystem. | string |
| pauseOnChunksOverlimit | Specifies if the input plugin should be paused (stop ingesting new data) when the storage.max_chunks_up value is reached. | string |
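
A hedged sketch combining the new storage fields with a tail input (the apiVersion, kind, and path value are assumptions; the storage fields tie back to the Storage section added to fluentbit.md in this commit):

```yaml
apiVersion: fluentbit.fluent.io/v1alpha2
kind: ClusterInput
metadata:
  name: tail
spec:
  tail:
    path: /var/log/containers/*.log
    storageType: filesystem         # buffer chunks on disk instead of memory
    pauseOnChunksOverlimit: "on"    # stop ingesting when storage.max_chunks_up is reached
```
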
1 change: 1 addition & 0 deletions docs/plugins/fluentbit/output/open_search.md
@@ -38,3 +38,4 @@ OpenSearch is the opensearch output plugin, which allows you to ingest your records into a
| suppressTypeName | When enabled, mapping types is removed and the Type option is ignored. Types are deprecated in APIs in v7.0. This option is for v7.0 or later. | *bool |
| Workers | Enables dedicated thread(s) for this output. The default value is set since version 1.8.13; for previous versions it is 0. | *int32 |
| tls | | *[plugins.TLS](../tls.md) |
| totalLimitSize | Limit the maximum number of Chunks in the filesystem for the current output logical destination. | string |
10 changes: 10 additions & 0 deletions docs/plugins/fluentbit/output/prometheus_exporter.md
@@ -0,0 +1,10 @@
# PrometheusExporter

PrometheusExporter is an output plugin to expose Prometheus metrics. <br /> The prometheus exporter allows you to take metrics from Fluent Bit and expose them such that a Prometheus instance can scrape them. <br /> **Important Note: The prometheus exporter only works with metric plugins, such as Node Exporter Metrics** <br /> **For full documentation, refer to https://docs.fluentbit.io/manual/pipeline/outputs/prometheus-exporter**


| Field | Description | Scheme |
| ----- | ----------- | ------ |
| host | IP address or hostname of the target HTTP Server, default: 0.0.0.0 | string |
| port | This is the port Fluent Bit will bind to when hosting prometheus metrics. | *int32 |
| addLabels | This allows you to add custom labels to all metrics exposed through the prometheus exporter. You may have multiple of these fields. | map[string]string |
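
A hypothetical ClusterOutput sketch for the exporter using the fields above (the apiVersion, kind, and match value are assumptions; remember this output only works with metric plugins):

```yaml
apiVersion: fluentbit.fluent.io/v1alpha2
kind: ClusterOutput
metadata:
  name: prometheus-exporter
spec:
  match: "node_metrics"     # must match a metrics pipeline, e.g. Node Exporter Metrics input
  prometheusExporter:
    host: 0.0.0.0
    port: 2021
    addLabels:
      app: fluent-bit
```
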
52 changes: 26 additions & 26 deletions docs/plugins/fluentbit/output/s3.md
@@ -5,29 +5,29 @@ The S3 output plugin allows you to flush your records into an S3 bucket.

| Field | Description | Scheme |
| ----- | ----------- | ------ |
- | region | The AWS region of your S3 bucket | string |
- | bucket | S3 Bucket name | string |
- | json_date_key | Specify the name of the time key in the output record. To disable the time key just set the value to false. | string |
- | json_date_format | Specify the format of the date. Supported formats are double, epoch, iso8601 (eg: 2018-05-30T09:39:52.000681Z) and java_sql_timestamp (eg: 2018-05-30 09:39:52.000681) | string |
- | total_file_size | Specifies the size of files in S3. Minimum size is 1M. With use_put_object On the maximum size is 1G. With multipart upload mode, the maximum size is 50G. | string |
- | upload_chunk_size | The size of each 'part' for multipart uploads. Max: 50M | string |
- | upload_timeout | Whenever this amount of time has elapsed, Fluent Bit will complete an upload and create a new file in S3. For example, set this value to 60m and you will get a new file every hour. | string |
- | store_dir | Directory to locally buffer data before sending. | string |
- | store_dir_limit_size | The size of the limitation for disk usage in S3. | string |
- | s3_key_format | Format string for keys in S3. | string |
- | s3_key_format_tag_delimiters | A series of characters which will be used to split the tag into 'parts' for use with the s3_key_format option. | string |
- | static_file_path | Disables behavior where UUID string is automatically appended to end of S3 key name when $UUID is not provided in s3_key_format. $UUID, time formatters, $TAG, and other dynamic key formatters all work as expected while this feature is set to true. | *bool |
- | use_put_object | Use the S3 PutObject API, instead of the multipart upload API. | *bool |
- | role_arn | ARN of an IAM role to assume | string |
- | endpoint | Custom endpoint for the S3 API. | string |
- | sts_endpoint | Custom endpoint for the STS API. | string |
- | canned_acl | Predefined Canned ACL Policy for S3 objects. | string |
- | compression | Compression type for S3 objects. | string |
- | content_type | A standard MIME type for the S3 object; this will be set as the Content-Type HTTP header. | string |
- | send_content_md5 | Send the Content-MD5 header with PutObject and UploadPart requests, as is required when Object Lock is enabled. | *bool |
- | auto_retry_requests | Immediately retry failed requests to AWS services once. | *bool |
- | log_key | By default, the whole log record will be sent to S3. If you specify a key name with this option, then only the value of that key will be sent to S3. | string |
- | preserve_data_ordering | Normally, when an upload request fails, there is a high chance for the last received chunk to be swapped with a later chunk, resulting in data shuffling. This feature prevents this shuffling by using a queue logic for uploads. | *bool |
- | storage_class | Specify the storage class for S3 objects. If this option is not specified, objects will be stored with the default 'STANDARD' storage class. | string |
- | retry_limit | Integer value to set the maximum number of retries allowed. | *int32 |
- | external_id | Specify an external ID for the STS API, can be used with the role_arn parameter if your role requires an external ID. | string |
+ | Region | The AWS region of your S3 bucket | string |
+ | Bucket | S3 Bucket name | string |
+ | JsonDateKey | Specify the name of the time key in the output record. To disable the time key just set the value to false. | string |
+ | JsonDateFormat | Specify the format of the date. Supported formats are double, epoch, iso8601 (eg: 2018-05-30T09:39:52.000681Z) and java_sql_timestamp (eg: 2018-05-30 09:39:52.000681) | string |
+ | TotalFileSize | Specifies the size of files in S3. Minimum size is 1M. With use_put_object On the maximum size is 1G. With multipart upload mode, the maximum size is 50G. | string |
+ | UploadChunkSize | The size of each 'part' for multipart uploads. Max: 50M | string |
+ | UploadTimeout | Whenever this amount of time has elapsed, Fluent Bit will complete an upload and create a new file in S3. For example, set this value to 60m and you will get a new file every hour. | string |
+ | StoreDir | Directory to locally buffer data before sending. | string |
+ | StoreDirLimitSize | The size of the limitation for disk usage in S3. | string |
+ | S3KeyFormat | Format string for keys in S3. | string |
+ | S3KeyFormatTagDelimiters | A series of characters which will be used to split the tag into 'parts' for use with the s3_key_format option. | string |
+ | StaticFilePath | Disables behavior where UUID string is automatically appended to end of S3 key name when $UUID is not provided in s3_key_format. $UUID, time formatters, $TAG, and other dynamic key formatters all work as expected while this feature is set to true. | *bool |
+ | UsePutObject | Use the S3 PutObject API, instead of the multipart upload API. | *bool |
+ | RoleArn | ARN of an IAM role to assume | string |
+ | Endpoint | Custom endpoint for the S3 API. | string |
+ | StsEndpoint | Custom endpoint for the STS API. | string |
+ | CannedAcl | Predefined Canned ACL Policy for S3 objects. | string |
+ | Compression | Compression type for S3 objects. | string |
+ | ContentType | A standard MIME type for the S3 object; this will be set as the Content-Type HTTP header. | string |
+ | SendContentMd5 | Send the Content-MD5 header with PutObject and UploadPart requests, as is required when Object Lock is enabled. | *bool |
+ | AutoRetryRequests | Immediately retry failed requests to AWS services once. | *bool |
+ | LogKey | By default, the whole log record will be sent to S3. If you specify a key name with this option, then only the value of that key will be sent to S3. | string |
+ | PreserveDataOrdering | Normally, when an upload request fails, there is a high chance for the last received chunk to be swapped with a later chunk, resulting in data shuffling. This feature prevents this shuffling by using a queue logic for uploads. | *bool |
+ | StorageClass | Specify the storage class for S3 objects. If this option is not specified, objects will be stored with the default 'STANDARD' storage class. | string |
+ | RetryLimit | Integer value to set the maximum number of retries allowed. | *int32 |
+ | ExternalId | Specify an external ID for the STS API, can be used with the role_arn parameter if your role requires an external ID. | string |
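
Assuming the renamed CamelCase entries in the table above are the YAML keys the operator now expects (an assumption inferred from this diff, not confirmed by it), a hypothetical ClusterOutput sketch might look like:

```yaml
apiVersion: fluentbit.fluent.io/v1alpha2
kind: ClusterOutput
metadata:
  name: s3
spec:
  match: "kube.*"
  s3:
    Region: us-east-1          # note the capitalized keys, per the renamed fields above
    Bucket: my-log-bucket      # hypothetical bucket name
    TotalFileSize: 100M
    UploadTimeout: 10m
    UsePutObject: false        # use multipart upload instead of PutObject
```
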
