How to set up a low-latency scheduled search that doesn't duplicate results?

Comments

4 comments

  • Official comment
    Graham Watts

    Hello,

    Can you share a bit more about your use case? There are a few potential solutions here depending on what you are trying to do.

    For example, how frequently do you need to deliver the data, and how is it used once delivered?

    Here are a few ideas:

    1. Monitors - this is a newer alert type with built-in alert suppression and an adjustable wait/delay; however, it is better suited to changes in the status of an alarm/condition, and is not designed for streaming and forwarding data

    2. Logs to Metrics - you can derive metrics from logging and dashboard/alert on time series metrics in Sumo

    3. Search Job API - this is likely the most flexible approach, but may be a bit more work than the options below

    4. Scheduled Views - great for creating 1-minute rollups of data, and they now support data forwarding to S3, so you could create your aggregate, deliver it to an S3 bucket, and consume the summary data from there (see screenshot below)
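To make option 3 concrete, here is a minimal sketch of the Search Job API flow: create a job over a fixed, delayed one-minute window, poll until it finishes, then fetch the aggregated records. The endpoint, query, and credentials are illustrative assumptions - adjust them to your deployment.

```python
import base64
import json
import time
import urllib.request
from datetime import datetime, timedelta, timezone

# Endpoint varies by deployment (us2, eu, au, ...); adjust to yours.
API = "https://api.sumologic.com/api/v1/search/jobs"


def delayed_window(now=None, delay_min=5, width_min=1):
    """Return (from, to) timestamps for a window ending delay_min minutes
    ago and spanning width_min minutes -- e.g. the "-6m to -5m" window.
    Delaying the window lets late-arriving logs land before you query."""
    now = now or datetime.now(timezone.utc)
    end = now - timedelta(minutes=delay_min)
    start = end - timedelta(minutes=width_min)
    fmt = "%Y-%m-%dT%H:%M:%S"
    return start.strftime(fmt), end.strftime(fmt)


def run_search(query, access_id, access_key):
    """Create a search job, poll until done, and return its records.
    The API is session-based, so cookies must persist across requests."""
    opener = urllib.request.build_opener(
        urllib.request.HTTPCookieProcessor())
    auth = base64.b64encode(f"{access_id}:{access_key}".encode()).decode()
    headers = {"Authorization": f"Basic {auth}",
               "Content-Type": "application/json",
               "Accept": "application/json"}

    frm, to = delayed_window()
    body = json.dumps({"query": query, "from": frm, "to": to,
                       "timeZone": "UTC"}).encode()
    req = urllib.request.Request(API, data=body, headers=headers)
    job = json.load(opener.open(req))

    # Poll job status until all results are gathered.
    while True:
        req = urllib.request.Request(f"{API}/{job['id']}", headers=headers)
        status = json.load(opener.open(req))
        if status["state"] == "DONE GATHERING RESULTS":
            break
        time.sleep(2)

    # Aggregate searches return "records"; raw searches return "messages".
    req = urllib.request.Request(
        f"{API}/{job['id']}/records?offset=0&limit=100", headers=headers)
    return json.load(opener.open(req))["records"]
```

Run this once a minute from your own scheduler; because each run queries a fixed, non-overlapping window, the aggregated counts are never duplicated.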


  • Shobhit Garg

    Hi,

    By design, real-time alerts should not be re-generated for a log line that has already triggered an alert; this is mentioned in our docs:

    https://help.sumologic.com/Visualizations-and-Alerts/Alerts/Scheduled-Searches/Create_a_Real_Time_Alert

    Real Time Alerts are not duplicated, which means that if a specific raw log message has triggered an alert once already, that same log message will not trigger an alert a second time.

    For example, if Message X caused an alert to be sent at Time T, and Sumo Logic detects Message X again at Time T+1, Sumo Logic does not send a second alert at Time T+1. But if Sumo Logic detects Message Y at Time T+1, a new alert is sent, because the root cause is different.

    But if you have a use case where you are getting duplicates, please feel free to raise a support case with us and provide all the details.


    Regards,

  • Dev Resource

    That only works if you're triggering on and sending individual raw log messages. I'm doing aggregations and then sending counts etc. to my webhook, and the alert dedupe functionality doesn't work as soon as you use any aggregations in your search.

    Really, it should allow me to set a timerange like "-6m to -5m". I can use that timerange when running my search normally, but not when it's on a schedule, which makes no sense imho.

  • Dev Resource

    Hi Graham 👋 My use case is effectively metrics - I want counts of how frequently certain things are happening, average durations of how long select processes take to complete, etc., all fed into an existing monitoring system.

    My understanding is that this would likely fit pretty nicely into the logs-to-metrics feature; however, we do not use sumologic for our metrics, and as far as I could find there doesn't seem to be a way to get Sumo metrics flowing through to other systems (like Prometheus). I did also look at Monitors, and yeah, that is not a good fit. The Search Job API looks like it could definitely be made to work, but it would take significantly more effort to deliver a solution based on it, so I'm hoping not to have to resort to that 😁.

    Scheduled Views are something I have not investigated, and they look very interesting. I'll dig into that, thanks!
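If the Scheduled View + S3 forwarding route pans out, the consumer side could look something like the sketch below: list the forwarded rollup objects and parse each into rows to feed a monitoring system. The bucket and prefix names are hypothetical, and the CSV-with-header format is an assumption - check what your data-forwarding destination actually produces and adjust the parser.

```python
import csv
import io


def parse_rollup(csv_text):
    """Parse one forwarded rollup object (assumed CSV with a header
    row) into a list of dicts, one per aggregated row."""
    return list(csv.DictReader(io.StringIO(csv_text)))


def fetch_rollups(bucket, prefix):
    """Yield (key, rows) for each rollup object under the prefix,
    oldest first. boto3 (third-party AWS SDK) is imported lazily so
    parse_rollup stays usable without AWS dependencies installed."""
    import boto3
    s3 = boto3.client("s3")
    resp = s3.list_objects_v2(Bucket=bucket, Prefix=prefix)
    for obj in sorted(resp.get("Contents", []),
                      key=lambda o: o["LastModified"]):
        body = s3.get_object(Bucket=bucket,
                             Key=obj["Key"])["Body"].read().decode()
        yield obj["Key"], parse_rollup(body)


# Hypothetical usage: poll the bucket on a schedule and push each
# parsed row into your existing monitoring system.
# for key, rows in fetch_rollups("my-sumo-rollups", "scheduled-views/"):
#     for row in rows:
#         push_to_monitoring(row)  # your integration here
```

Tracking the last-processed object key between runs (e.g. with the `StartAfter` parameter of `list_objects_v2`) avoids re-ingesting rollups you have already consumed.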

