Problem:
We have many Collectors reporting the following error:
ERROR com.sumologic.scala.collector.Collector - * ERROR: Registration failed: No Sumo credentials or access key provided
Cause:
This could happen for the following reasons:
- The access ID and access key were not supplied as part of the install CLI command.
- The Collector lost its credentials because an unexpected server failure corrupted the credentials file.
- The Collector was deleted from the UI, was marked ephemeral and removed automatically after being offline for 12 hours, or was overwritten by a new Collector registered with the clobber flag.
Solution:
The Collector needs to be re-registered with the service using the steps below:
1. You will need to update the user.properties file located at one of the following locations:
C:\Program Files\Sumo Logic Collector\config\user.properties (default for Windows)
/opt/SumoCollector/config/user.properties (default for Linux)
<collector-install-dir>/config/user.properties
- Make sure there is an entry for "accesskey" in addition to "accessid" (see the sample user.properties entries after these steps). By default, the "accesskey" line is removed from the user.properties file after registration for security reasons, unless the property "skipAccessKeyRemoval=true" is set.
- Check the remaining entries, such as "accessid" (it must belong to an active user) and the "name" field.
- If the collector is to be set as ephemeral, make sure to add a new line with "ephemeral=true"
- If the collector name needs the clobber flag, make sure to add a new line with "clobber=true".
- If you have an existing Collector configuration you want to mimic, you can download its JSON from the Sumo Logic UI as follows:
- In Sumo Logic select Manage Data > Collection > Collection.
- Click the icon to the right of the Collector. A dialog box opens to display the JSON configuration.
- Then you can add one of the following entries:
sources=<filepath or folderpath>
syncSources=<filepath or folderpath> # local JSON file monitored by the Collector
- If you are using a sources JSON file, note that each source has a cutoffTimestamp setting that indicates how far back in time data should be ingested. If it is set to 0, the Collector will ingest all files (legacy or not) matched by the path expression. The value is a Unix epoch timestamp (a site such as epochconverter.com can help interpret it), and it can be set per source so that ingestion after re-registration does not result in duplicate data. See the example sources JSON after these steps.
- If you want the Collector to come up with the same name, delete the existing Collector in the UI first; otherwise the new Collector is registered with a numeric suffix in the form "<some_name>-epochtimestamp". This does not affect the data ingested by the Collector (refer to the related KB article).
2. Restart the Collector service, either via Task Manager/services.msc on Windows or from the command line on Linux (see the example commands after these steps).
3. Verify the collector has successfully started up and is ingesting new data.
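For reference, a minimal set of user.properties entries covering the items described in step 1 might look like the sketch below. The collector name, access ID/key, and sources file path are placeholders to replace with your own values, and the optional lines should only be included if they apply to your setup.

# name of the Collector as it should appear in the UI
name=my-collector
# credentials (must belong to an active user or access key pair)
accessid=<your_access_id>
accesskey=<your_access_key>
# optional: mark the Collector as ephemeral
ephemeral=true
# optional: overwrite an existing Collector registered with the same name
clobber=true
# optional: local JSON file (or folder) of sources monitored by the Collector
syncSources=/opt/SumoCollector/config/sources.json
# optional: keep the accesskey line in this file after registration
skipAccessKeyRemoval=true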
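Similarly, a minimal sources JSON for use with the sources= or syncSources= entry might look like the following sketch, assuming a single local file source. The source name, path expression, and cutoffTimestamp value are placeholders, and the timestamp is expressed in milliseconds; verify both against the JSON you downloaded from the UI.

{
  "api.version": "v1",
  "sources": [
    {
      "sourceType": "LocalFile",
      "name": "example-local-file-source",
      "pathExpression": "/var/log/app/*.log",
      "cutoffTimestamp": 1704067200000
    }
  ]
}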
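For step 2 on Linux, assuming the default install location, the Collector can typically be restarted from the command line as shown below; the exact service name may differ depending on how the Collector was installed.

sudo /opt/SumoCollector/collector restart
# or, if the Collector was installed as a service:
sudo service collector restart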