Updating cutoffTimestamp does not trigger Sumo to update the source

Comments

4 comments

  • Official comment
    Matt Sullivan

    Clever use of cutoffTimestamp. The normal use case for this property is to prevent too many old logs from being ingested when deploying a new source. I confess I didn't test it myself, but I trust your findings since they make sense conceptually. Modifying this property on an existing source is normally a "no-op" by design, because the backdated logs would have already been ingested and the field is effectively superseded by the collector's cached value(s) for the date/time of the last ingested logs. It seems you worked around it in a reasonable way, but if you think this is worth changing in the product, you might pop over to ideas.sumologic.com and add it there.

  • Michael Meyers

    Thanks Matt.  Do you have a better recommendation on how to temporarily disable all logs from being ingested into SumoLogic?  We are using JSON config based collectors, but we have a lot of nodes so it would be awesome if we had a way to switch them on and off from the API.

  • Matt Sullivan

    If you are JSON managed, then as you point out, there is no changing sources via the API or UI after the fact. JSON-managed collectors just look at the local files for updates, so the cleanest approach is probably to remove the source JSON and re-add it later, making sure the cutoffTimestamp is set correctly and not overlapping with what was already collected. If by "nodes" you mean sources, you can sync at the directory level and use one well-named JSON file per source, which might make things easier to maintain.
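    For reference, a minimal sketch of what one such per-source JSON file could look like (the source name and path here are made up; cutoffTimestamp is an epoch timestamp in milliseconds, and logs with timestamps before it are skipped):

    ```json
    {
      "api.version": "v1",
      "sources": [
        {
          "sourceType": "LocalFile",
          "name": "myapp-logs",
          "pathExpression": "/var/log/myapp/*.log",
          "cutoffTimestamp": 1672531200000
        }
      ]
    }
    ```

    With one file per source, removing the file from the synced directory stops collection for that source, and re-adding it with a cutoffTimestamp set to the point where collection should resume avoids re-ingesting logs that were already collected.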

  • Michael Meyers

    By nodes I meant we have about 20 VMs for each environment, each running a collector.  I'll try your suggestion.  Thanks Matt! 
