
The default Incorta installation places both Incorta and Spark on the same server. You can move Spark to a separate node so that Incorta and Spark run on different machines, each making full use of its own computing power and memory, which improves performance.


- Shut down Incorta and Spark on the Incorta server.

- Take an export of the tenant on the Incorta server (syntax: ./ --export <tenant-name> <>; e.g. ./ --export sbux-dev).

- Zip the spark directory on the Incorta server.

- Unzip the spark directory onto the new node designated as the Spark server.

- Make sure Java is installed on the Spark server.

- Modify .bash_profile on the Spark server so that JAVA_HOME and PATH are correct, and add a SPARK_HOME variable pointing to the spark directory.

- Modify the hostname in spark-defaults.conf and the other files under the spark/conf directory on the Spark server.

- Start the Master and Worker (slave) processes on the Spark server.

- Open the Master and Worker web UIs (default ports are 9091 and 9092) to verify that Spark is up and running.
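The environment setup and startup on the new Spark node can be sketched as below. All paths are assumptions for illustration; adjust JAVA_HOME to your JDK location and SPARK_HOME to wherever you unzipped the spark directory. The start scripts ship with Spark under $SPARK_HOME/sbin; note that older Spark releases name the worker script

```shell
# ~/.bash_profile on the Spark server -- paths are assumptions, adjust for your install
export JAVA_HOME=/usr/java/default          # assumption: your JDK location
export SPARK_HOME=/opt/incorta/spark        # assumption: where you unzipped spark
export PATH=$JAVA_HOME/bin:$SPARK_HOME/bin:$PATH

# Reload the profile, then start the Spark master and worker
source ~/.bash_profile
$SPARK_HOME/sbin/
# Register the worker with the master (use on older Spark releases)
$SPARK_HOME/sbin/ spark://$(hostname):7077
```

Once both processes are up, the Master web UI should show one registered worker.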


We also need to move the tenant folder from the Incorta server to shared storage so that both the Incorta and Spark nodes can read and write data.

Follow the steps below before starting Incorta:

- Move the tenant directory to the shared folder (the tenant directory is under IncortaHome).

- Run the TMT utility to update the tenant path.

   (Syntax: ./ -u <tenant_name> path <newpath/tenant_name>)

- Validate with ./ -l

  (It should list the tenant name with the new path.)

- On the Incorta server, edit the file ..IncortaHome/incorta/ and modify the hostname if there is an entry for spark.master.url.

- On the Incorta server, update spark.master.url using the TMT command.

  (Example: -u system spark.master.url 'spark://SparkNodeHostName:7077')

- Start Incorta.

- Log in to the Incorta /admin URL and verify the Spark master URL and other parameters under Spark Integration.
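Putting the tenant-migration steps together, the sequence on the Incorta server looks roughly like the following. The original article elides the exact utility script name, so is an assumption; check the bin directory of your Incorta installation for the actual name. The IncortaHome and shared-storage paths, and the sbux-dev tenant name from the export example above, are also assumptions for illustration.

```shell
# Run from the Incorta bin directory -- script name and paths are assumptions
cd /opt/incorta/IncortaAnalytics/bin        # assumption: your IncortaHome location

# Move the tenant directory from IncortaHome to shared storage
mv ../tenants/sbux-dev /shared/tenants/sbux-dev

# Update the tenant path, then list tenants to confirm the new path took effect
./ -u sbux-dev path /shared/tenants/sbux-dev
./ -l

# Point Incorta at the Spark master on the new node, then start Incorta
./ -u system spark.master.url 'spark://SparkNodeHostName:7077'
```

The ./ -l output should show the tenant with its new shared-storage path before you start Incorta.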



Last update: ‎04-07-2022 12:21 PM