
Introduction

It wouldn't be an overstatement to say that applications are responsible for the bulk of end user activity in an organization, and these activities are recorded in various logs. Logs play a critical role in understanding the usage, performance, debugging, and health of the Incorta cluster. However, these logs tend to grow over time and require proper management. This article provides guidelines on managing the various logs generated by the different components of Incorta.

What you should know before reading this article

This article requires an understanding of the Incorta architecture and its various components. It is geared mostly toward Incorta Administrators who are responsible for installation and administration.

For more details on installation and administration of Incorta, please review guides at https://docs.incorta.com

Applies to

This article applies to on-premises installations for all versions of Incorta. Log rotation is handled automatically for Incorta Cloud customers.

Let’s Go

Incorta Analytics software comprises the following major components:

  • CMC: Provides the interface to manage the Incorta cluster
  • Loader: Connects to data sources and loads data into Incorta
  • Analytics: Provides analytics capability
  • Spark: Incorta integrates with Spark to perform complex transformations and run queries on Parquet data
  • Zookeeper: Coordinates the various components of the Incorta cluster
  • Metadata database: Stores core metadata information

The following sections provide details on managing logs generated by each of the above components.

CMC Logs

The Cluster Management Console (CMC) is a small, lightweight Java application. It provides a user interface for creating and maintaining the Incorta cluster. All CMC activities are logged under the following directory:

<INCORTA_HOME>/cmc/logs

A single catalina.out file is maintained to log all Tomcat activities.

A daily cmc.log file is maintained to log daily CMC monitoring activities.

Neither the catalina.out file nor the daily CMC logs occupy much space, and they do not need active maintenance. Cleaning out old files once a quarter is sufficient.
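If you want to script this, the following is a minimal sketch of a quarterly cleanup; it assumes that any file under the CMC logs directory that has not been modified in roughly 90 days is safe to remove (the live catalina.out is written to continuously, so it would not match).

# Sketch: remove CMC log files that have not been modified in ~90 days
find <INCORTA_HOME>/cmc/logs -type f -mtime +90 -exec rm {} \;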

Loader and Analytics Logs

Loader and Analytics are the major components of Incorta. The Loader handles extraction of data from the various data sources, and Analytics provides the capability to slice and dice that data and generate dashboards and insights.

Incorta Loader and Analytics logs are stored under the following directory for each tenant.

    $INCORTA_HOME/IncortaNode/services/<service string>/logs/incorta/<tenant name>

A daily log file is generated that records loader and analytics usage activity.

The Tomcat log file catalina.out is stored under the following directory.

    $INCORTA_HOME/IncortaNode/services/<service string>/logs

catalina.out grows significantly over time and should be rotated so that it is split into daily files.

Both the Tomcat and Loader/Analytics logs should be archived, retaining only a subset based on business requirements. Generally, a month's worth of files is enough to go back and troubleshoot issues.

For sample code on rotating the Tomcat log, please see https://community.incorta.com/t/y7njzh/how-to-rotate-your-tomcat-catalina-out-log-file

To implement a process for archiving the Loader and Analytics logs, scripts can be written to archive and delete the daily log files, and then scheduled with cron to run periodically. The following is an example of how this can work. It contains:

  • an environment variables file in which you enter the install path and service GUID directory names
  • a shell script that archives log files older than "n" days to a zip file and then deletes those same log files
  • cron job syntax that can be added to /etc/crontab to schedule the rotation to run each week

Environment Variables File

 

#!/usr/bin/env bash

# Environment variables shared by the Incorta log rotation scripts

# Location of the rotation scripts and where their own output is written
export ScriptHome="<Enter Incorta Install Path>/IncortaNode/services/logrotate"
export LogDir="<Enter Incorta Install Path>/IncortaNode/services/logrotate/logs"

# Timestamp that can be used to name archives
export dt=$(date "+%Y%m%d%H%M")

# Service GUID directory names for the Analytics and Loader services
export analytics1="<Enter service GUID directory name here>"
export analytics2="<Enter service GUID directory name here>"
export loader1="<Enter service GUID directory name here>"
export loader2="<Enter service GUID directory name here>"

# Service directory and tenant whose logs will be rotated
export SvcDir="<Enter Incorta Install Path>/IncortaNode/services/${analytics1}"
export i_name="<Enter Tenant Name>"

 

Archive Script File

 

#!/usr/bin/env bash

# Load the shared path and service variables
. ./log_env.sh

echo "*** Beginning Incorta Log Rotate Job"
date

# Remember the current directory so we can return to it at the end
currDir=$(pwd)

# Change directory to where the Incorta tenant logs exist
cd "${SvcDir}/logs/incorta/${i_name}" || exit 1
pwd

# Add log files older than 14 days to the archive zip
find . -maxdepth 1 -name "*.log" -mtime +14 -exec zip -r incorta-log-backups.zip {} \;

# Remove the same log files now that they have been archived
find . -maxdepth 1 -name "*.log" -mtime +14 -exec rm {} \;

cd "$currDir"

date
echo "*** Ending Incorta Log Rotate Job"

 

Cron Job Syntax

 

 0 0 * * SAT incorta cd <Enter Path Where Script is Located> && ./inc_log_rotate.sh >> ./inc_log_rotate.log 2>&1

 

As scheduled above, the process runs once per week (12:00 AM each Saturday). It first sources the environment script to pick up the path information, then finds all log files older than 14 days and adds them to a zip archive (compression on these text logs is better than 95%), and finally removes the same set of files. In this example there would always be at least 14 days' worth of log files on disk, growing to about 21 days before the weekly run brings it back down to 14.

Spark Logs

Incorta uses Spark to perform complex transformations, and the logs generated by workers and applications can sometimes grow huge, filling up the disk space. By default, Spark does not clean up these log files.

Cleaning up Event Log Files

Add the following properties in the spark-env.sh file to clean up the event log files:

    spark.history.fs.cleaner.enabled - Set it to true to enable automatic cleanup.

    spark.history.fs.cleaner.maxAge - Controls the retention period. Set it to 7 days.

    spark.history.fs.cleaner.interval - Controls how often the cleaner runs. Set it to 1 day.
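For reference, one common way to pass these settings to the Spark History Server from spark-env.sh is through the SPARK_HISTORY_OPTS variable; the snippet below is only a sketch using the values recommended above and should be adapted to your deployment.

# Sketch: history server event log cleanup, set in spark-env.sh
export SPARK_HISTORY_OPTS="$SPARK_HISTORY_OPTS \
  -Dspark.history.fs.cleaner.enabled=true \
  -Dspark.history.fs.cleaner.maxAge=7d \
  -Dspark.history.fs.cleaner.interval=1d"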

The following two parameters should be set to split the event log into smaller rolling files:

    spark.eventLog.rolling.enabled - Set it to true to enable rolling event log files instead of a single huge file.

    spark.eventLog.rolling.maxFileSize - Specifies the maximum size of each file before it rolls over. The default value of 128M is good.
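These rolling settings are plain key/value properties; as a sketch, assuming your deployment reads them from spark-defaults.conf, they might look like this:

    # Sketch: roll the event log instead of writing a single huge file
    spark.eventLog.rolling.enabled      true
    spark.eventLog.rolling.maxFileSize  128m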

Cleaning up Worker and Application Directories

Set the following parameters in the spark-defaults.conf file to clean up the worker files:

    spark.worker.cleanup.enabled - Set it to true to enable periodic cleanup of worker and application directories.

    spark.worker.cleanup.interval - Controls the cleanup interval, in seconds. The default value of 30 minutes is good.

    spark.worker.cleanup.appDataTtl - Controls how long to retain application work directories. The default value of 7 days is good.
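As a sketch, these entries might appear in spark-defaults.conf as follows; the interval and TTL values below are the defaults noted above, expressed in seconds.

    # Sketch: periodic cleanup of worker and application directories
    spark.worker.cleanup.enabled     true
    # cleanup interval: 30 minutes
    spark.worker.cleanup.interval    1800
    # retain application work directories for 7 days
    spark.worker.cleanup.appDataTtl  604800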

Executor Logs

The following properties should be set in spark-defaults.conf to enable rolling of executor logs:

    spark.executor.logs.rolling.maxRetainedFiles - Sets the number of the most recent rolled log files to retain. Set it to 7.

    spark.executor.logs.rolling.strategy - Sets the strategy for rolling executor logs. Valid values are time (time-based rolling) and size (size-based rolling). Set this property to time.

    spark.executor.logs.rolling.time.interval - Used for time-based rolling. The default value of daily is good.
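A minimal sketch of these entries in spark-defaults.conf, using the recommendations above:

    # Sketch: time-based rolling of executor logs, retaining the 7 most recent files
    spark.executor.logs.rolling.strategy          time
    spark.executor.logs.rolling.time.interval     daily
    spark.executor.logs.rolling.maxRetainedFiles  7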

Zookeeper Logs

The Zookeeper server keeps transaction logs and snapshot files in its data directory to enable recovery in case of failure. These log and snapshot files grow huge over time and should be cleaned up periodically.

Cleaning up logs and snapshots

Zookeeper provides the PurgeTxnLog utility, which implements a simple retention policy that administrators can run manually or schedule as a cron job.

 

java -cp zookeeper.jar:log4j.jar:conf org.apache.zookeeper.server.PurgeTxnLog <dataDirectory> <txnLogDirectory> -n <count>

 

By default, both <dataDirectory> and <txnLogDirectory> point to the location specified by the dataDir parameter in the zoo.cfg configuration file.

Example:  dataDir=/incorta/IncortaAnalytics/IncortaNode/zookeeper/data

The following example illustrates how to run the purge utility.

 

java -cp zookeeper.jar:log4j.jar:conf org.apache.zookeeper.server.PurgeTxnLog $dataDir $dataDir -n 3

 

In the above example, a <count> value of 3 keeps the latest three snapshots and their associated transaction logs and purges everything older.
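To schedule the purge as a cron job, a hypothetical /etc/crontab entry is sketched below; the user, working directory, classpath, and data directory are placeholders to adapt to your installation (it assumes the ZooKeeper jars and conf directory are available under <INCORTA_HOME>/IncortaNode/zookeeper).

# Sketch: purge old ZooKeeper snapshots and transaction logs nightly at 1:00 AM, keeping the latest 3
0 1 * * * incorta cd <INCORTA_HOME>/IncortaNode/zookeeper && java -cp "zookeeper.jar:log4j.jar:conf" org.apache.zookeeper.server.PurgeTxnLog ./data ./data -n 3 >> ./purge_txn_log.log 2>&1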

Metadata Database Logs

Incorta uses a lightweight database repository to maintain core metadata information. Generally, Oracle or MySQL databases are used for this purpose. The logs generated by these databases are not significant and are typically maintained by DBAs. From the Incorta perspective, there are no maintenance requirements.

Related Material

Best Practices Index
Best Practices
