This article is about configuring a Spark cluster for your Incorta deployment.

Here are a few questions to ask first. The considerations and configuration may differ depending on the answers:

  • Do you have separate machines for the Spark cluster, or is everything running on a single box?
  • Do you plan to use the SQL interface?
  • Do you need many MV jobs to execute in parallel?

Prerequisites

  1. Most configuration can be done from the Incorta CMC, except that SPARK_WORKER_MEMORY and SPARK_WORKER_CORES have to be set in spark-env.sh on the machine where the Spark master and Spark worker processes will be started.
  2. You should have access to the Spark Web UI from a browser (typically running on ports 9090 and 9091).
  3. Starting and stopping the Spark cluster has to be done from that machine (as of Incorta 4.8.x).
  4. The size of the data can help determine how much memory to allocate to the Incorta instance (the Loader Service and Analytics Service).

Spark Cluster Configuration

If you are running Spark on a separate machine, copy the spark folder from one of the installed <Incorta home>/IncortaNode/spark folders: create a zip file and unzip it on the separate machine. Running a separate Spark cluster is recommended if your deployment has heavy usage of Spark MVs or plans to use SQLi with the fallback option.
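
As a minimal sketch of that copy step, assuming a Linux host reachable over SSH (the host name spark-host and the target path /opt/incorta below are hypothetical; substitute your own):

    # On the Incorta machine: package the bundled Spark folder
    cd <Incorta home>/IncortaNode
    zip -r spark.zip spark

    # Copy it to the separate Spark machine and unzip it there
    scp spark.zip user@spark-host:/opt/incorta/
    ssh user@spark-host 'cd /opt/incorta && unzip spark.zip'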

Ensure that the Spark version used by the IncortaNode for all services is the same as the Spark version deployed in the cluster.

If you are running the Spark cluster on the same machine as Incorta, setting SPARK_WORKER_MEMORY and SPARK_WORKER_CORES is required to limit the resources available to Spark so that Spark will not affect Incorta operations and vice versa. The Incorta Loader Service should have sufficient memory to hold the data used during loading: key columns, formula columns, columns used in joins, columns used in calculating formulas, and Incorta derived tables all require memory. The Incorta Analytics Service hosts the data that serves dashboard requests, and Incorta recycles memory space depending on the request. The largest dashboard may determine the minimum memory required for the Analytics Service, and the available memory does affect dashboard rendering performance. You can check the CMC loader service node to see the current memory usage by schema, by table, and by column. Configuring Incorta memory is beyond the scope of this article.

After you set these environment variables in spark-env.sh, you should see them reflected in the Spark Web UI. These variables determine how many MV processes the Spark cluster can run at the same time. For example, if you allocate 40GB of memory to the Spark cluster and each MV claims 4GB, ten Spark MV programs can execute at the same time. If you allocate 8 cores to the Spark cluster and each MV program claims 4 cores, two MV programs can execute at the same time. When resources are not available, the MV program will be in WAITING mode.
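
As a minimal sketch, the spark-env.sh entries for the 40GB / 8-core example above would look like this (the values are illustrative; size them for your machine):

    # <Incorta home>/IncortaNode/spark/conf/spark-env.sh
    export SPARK_WORKER_MEMORY=40g   # total memory this worker offers to Spark applications
    export SPARK_WORKER_CORES=8      # total cores this worker offers to Spark applications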

Spark SQL App

The Spark SQL App serves requests from the Incorta SQL interface as a fallback option. It runs queries against data stored in the Incorta Data Store as parquet in compacted form. If you plan to use the SQL interface, the SQL App should be running. Please check Configure Spark for Use with the SQL Interface.

The resources allocated to the Spark SQL App can be configured from the CMC. The considerations are well documented in Spark Best Practices.

Spark Property Defaults

Each Spark application is restricted to a specific number of cores and an allocated amount of memory. Incorta does not use the spark-defaults.conf file to control the default values; otherwise, you would have to set these values on each node where a Spark process can be launched. Instead, the default values are set in the Incorta CMC.

The Spark SQL App is a Spark application that takes requests from the SQL interface and responds with the data. It is a process that Incorta launches and assumes is running and listening for requests. The memory and cores available to it should be specified if you enable the Spark SQL App.

Each MV launched by Incorta is a separate Spark application, and they execute independently of each other. The default memory and cores can be specified in the CMC; you can then fine-tune the Spark properties on a per-MV basis.
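
For illustration, these are the kinds of standard Spark properties typically set as defaults in the CMC and overridden per MV (the values below are hypothetical, shown in spark-defaults.conf notation for readability; in Incorta they are entered through the CMC or the MV's properties):

    spark.executor.memory   4g    # memory per executor for this MV
    spark.executor.cores    2     # cores per executor
    spark.cores.max         4     # total cores this MV's application may claim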

Tuning Spark Executions by Setting MV Properties

The goals of setting Spark MV properties should be to:

  • Ensure the Spark program can finish successfully
  • Maximize throughput by enabling MVs to run in parallel

Depending on its data volume and the complexity of its logic, an MV may have a minimum number of cores and amount of memory required to run with reasonable performance. The memory requirement can be reduced by increasing the number of shuffle partitions, but that may degrade performance. Overall, adding memory may not improve performance, but it may be required to ensure the MV job can finish. Adding cores may improve performance, but the marginal improvement starts diminishing after a certain number.
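
As a hypothetical example of that memory-for-partitions trade-off, using the standard spark.sql.shuffle.partitions property (values are illustrative):

    spark.executor.memory        2g     # reduced executor memory
    spark.sql.shuffle.partitions 400    # raised from the Spark default of 200; each
                                        # shuffle partition then processes less data,
                                        # lowering per-task memory pressure at some
                                        # cost in scheduling and shuffle overhead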

The first consideration when improving overall refresh performance is to check whether unnecessary groups are defined for MVs to control their execution order. When an MV depends on another MV, defining groups and putting the dependent MVs into different groups may be necessary to ensure the result of one MV can be used as the input source for another. However, the cost of creating groups can be high because groups execute in sequential order: while the last MV in a group is running, the Spark cluster may have idle capacity that MVs in subsequent groups cannot use until that MV finishes.

MVs within the same group can execute in parallel only if there is enough capacity. For example, if the Spark cluster has 40GB of memory allocated and each MV takes 4GB as executor memory (assuming one executor per MV), up to 10 MVs can execute in parallel. However, if the cluster has 20 cores allocated to run MVs and each MV claims 4 executor cores with a core max of 4, only up to 5 MVs can execute in parallel; the tighter of the two limits wins.
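
A back-of-the-envelope sketch of that capacity calculation, using the numbers from the example above:

    # Cluster capacity:  SPARK_WORKER_MEMORY=40g, SPARK_WORKER_CORES=20
    # Per-MV settings:   spark.executor.memory=4g, spark.cores.max=4
    #
    # Memory limit: 40g / 4g = 10 concurrent MVs
    # Core limit:   20 / 4   =  5 concurrent MVs
    #
    # Effective parallelism = min(10, 5) = 5 MVs; additional MVs wait in WAITING mode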

Overall tuning should be based on analyzing the critical path of the MVs, and improving performance should be based on giving resources to the bottleneck. Optimizing an individual MV may not help unless that MV is on the critical path.
