[Q&A] filter_grep exclude not filtering out? #4973
Unanswered
meditator3
asked this question in Q&A
Replies: 2 comments 6 replies
I see!

Example:

```
<source>
  @type sample
  tag raw.containers
  sample {"log":{"msg":"normal message","level":"info"},"stream":"stdout","time":"2025-05-21T08:32:49.702332001Z"}
</source>
<source>
  @type sample
  tag raw.containers
  sample {"log":{"msg":"warning message","level":"warn"},"stream":"stdout","time":"2025-05-21T08:32:49.702332001Z"}
</source>
<filter raw.containers.**>
  @type grep
  <exclude>
    key $.log.level
    pattern /info/
  </exclude>
</filter>
<match raw.containers.**>
  @type stdout
</match>
```

This excludes the info events and outputs only the warn events.
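The exclude semantics above can be sketched in plain Python (a minimal illustration of how grep's `<exclude>` walks the record_accessor path `$.log.level`; this is not fluentd's implementation, and the sample records are abbreviated):

```python
# Minimal sketch of filter_grep <exclude> semantics, assuming records have
# already been parsed into dicts (as in the sample events above).
import re

def grep_exclude(records, key_path, pattern):
    """Drop records whose nested key (record_accessor path like "$.log.level")
    matches the pattern; yield everything else."""
    regex = re.compile(pattern)
    for record in records:
        value = record
        # Walk the path, skipping the leading "$." prefix.
        for part in key_path.lstrip("$.").split("."):
            value = value.get(part, "") if isinstance(value, dict) else ""
        if not regex.search(str(value)):
            yield record

records = [
    {"log": {"msg": "normal message", "level": "info"}, "stream": "stdout"},
    {"log": {"msg": "warning message", "level": "warn"}, "stream": "stdout"},
]
kept = list(grep_exclude(records, "$.log.level", "info"))
# Only the "warn" record survives, matching the filter output above.
```

Note that `$.log.level` uses the record_accessor syntax, so nested key lookups only work when `log` is a parsed hash rather than a raw string.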
What is a problem?
using:

which is supposed to filter out all info logs.
I've tried a bunch of other patterns; they also didn't work.
Also, I'm unsure how Fluentd reads the messages that are about to be filtered out. There is a parse to JSON, so I assume all my raw logs are parsed as JSON, and it should be able to identify the `level` key?
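For context on how the records end up as JSON: the tail source's `multi_format` parse tries the JSON pattern first and falls back to the regexp pattern. A rough Python sketch of that fallback behaviour (an illustration only, not fluentd's implementation; the sample lines are made up):

```python
# Rough sketch of multi_format parsing: try JSON first, then fall back to
# the regexp pattern from the config. Illustration only.
import json
import re

# Fallback pattern from the config: time, stream, an ignored field, then the log.
LINE_RE = re.compile(r"^(?P<time>.+) (?P<stream>stdout|stderr) [^ ]* (?P<log>.*)$")

def parse_line(line):
    try:
        return json.loads(line)  # first <pattern>: format json
    except ValueError:
        match = LINE_RE.match(line)  # second <pattern>: format /regexp/
        return match.groupdict() if match else None

json_line = '{"log": {"msg": "hi", "level": "info"}, "stream": "stdout"}'
raw_line = "2025-05-21T08:32:49.702332001Z stdout F hello world"
```

Records that come through the JSON branch keep `log` as a nested hash (so `$.log.level` resolves), while records that come through the regexp branch carry `log` as a plain string.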
I have on my
This is an example of a part of a raw log file I took from Coralogix (which renders our Fluentd output in a more readable form):
This is my `<source>`, in case it's needed:
Describe the configuration of Fluentd
```
coralogix.conf: |-
  <system>
    log_level "#{ENV['LOG_LEVEL']}"
  </system>
  <source>
    @type tail
    @id in_tail_container_logs
    path /var/log/containers/*.log
    exclude_path ["/var/log/containers/*_argocd_*.log", "/var/log/containers/*_default_*.log", "/var/log/containers/*_kube-node-lease_*.log", "/var/log/containers/*_kube-system_*.log"]
    path_key filename
    pos_file /var/log/fluentd-containers.log.pos
    tag raw.containers.*
    read_from_head false
    <parse>
      @type multi_format
      <pattern>
        format json
        time_key time
        time_format %Y-%m-%dT%H:%M:%S.%NZ
        keep_time_key true
      </pattern>
      <pattern>
        format /^(?<time>.+) (?<stream>stdout|stderr) [^ ]* (?<log>.*)$/
        time_format %Y-%m-%dT%H:%M:%S.%N%:z
        keep_time_key true
      </pattern>
    </parse>
  </source>

  # This segment uses the detect-exceptions plugin.
  # It scans the "log" field for well-known structures of exception messages.
  # If any are found, it assembles them and attempts to resolve the multiline.
  # This segment also removes the "raw" prefix from the tag.
  <match raw.containers.**>
    @id raw.containers
    @type detect_exceptions
    remove_tag_prefix raw
    message log
    stream stream
    multiline_flush_interval 5
    max_bytes 500000
    max_lines 1000
  </match>

  # This segment takes the raw logs and enriches them with the Kubernetes metadata.
  # Other parts of this config rely on it.
  <filter containers.**>
    @type kubernetes_metadata
    @id filter_kube_metadata
    skip_labels false
    skip_container_metadata false
    skip_namespace_metadata true
    skip_master_url true
  </filter>

  # This filters info messages because of an overloaded quota limit on Coralogix (0.03 daily).
```

...here is the filter_grep I showed before...
Describe the logs of Fluentd
No response
Environment