The default Incorta installation runs both Incorta and Spark on the same server. You can move Spark to a separate node so that Incorta and Spark each run on their own machine, making full use of each node's computing power and memory to improve performance.
- Shut down Incorta and Spark on the Incorta server.
- Export the tenant on the Incorta server: `./tmt.sh --export <tenant-name> <export-file-name.zip>` (for example, `./tmt.sh --export sbux-dev sbux-dev-08152018.zip`).
- Zip the `spark` directory on the Incorta server.
- Unzip the `spark` directory onto the new node that will be the Spark server.
- Make sure Java is installed on the Spark server.
- Modify `.bash_profile` on the Spark server to set the correct `JAVA_HOME` and `PATH`, and add a `SPARK_HOME` variable pointing to the `spark` directory.
- Update the hostname in the `spark-defaults.conf` and `spark-env.sh` files under the `spark/conf` directory on the Spark server.
- Start the Spark master and worker on the Spark server.
- Open the master and worker web UIs (default ports 9091 and 9092) to verify that Spark is up and running.
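The steps above can be sketched as a shell session. The hostnames (`spark-host`), installation paths, and the `JAVA_HOME` value are assumptions for illustration; substitute the values for your environment. The tenant name and export file reuse the example from the steps above.

```shell
# --- On the Incorta server (after Incorta and Spark are shut down) ---

# Export the tenant (example from the steps above).
./tmt.sh --export sbux-dev sbux-dev-08152018.zip

# Zip the spark directory and copy it to the new node
# (assumed Incorta home: /home/incorta/IncortaAnalytics).
cd /home/incorta/IncortaAnalytics
zip -r spark.zip spark/
scp spark.zip incorta@spark-host:/home/incorta/

# --- On the new Spark server ---

cd /home/incorta
unzip spark.zip
java -version   # confirm Java is installed

# Append the environment variables to ~/.bash_profile
# (JAVA_HOME path is an assumption; point it at your JDK).
cat >> ~/.bash_profile <<'EOF'
export JAVA_HOME=/usr/java/latest
export SPARK_HOME=/home/incorta/spark
export PATH=$JAVA_HOME/bin:$SPARK_HOME/bin:$PATH
EOF
source ~/.bash_profile

# Edit the hostname in both config files under spark/conf, e.g.:
#   spark-env.sh:          SPARK_MASTER_HOST=spark-host
#   spark-defaults.conf:   spark.master  spark://spark-host:7077
vi "$SPARK_HOME/conf/spark-env.sh"
vi "$SPARK_HOME/conf/spark-defaults.conf"

# Start the master and a worker, then verify the web UIs respond.
"$SPARK_HOME/sbin/start-master.sh"
"$SPARK_HOME/sbin/start-slave.sh" spark://spark-host:7077
curl -s -o /dev/null http://spark-host:9091 && echo "master UI up"
curl -s -o /dev/null http://spark-host:9092 && echo "worker UI up"
```

Note that `start-slave.sh` was renamed `start-worker.sh` in newer Spark releases; use whichever script your bundled Spark ships.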
We also need to move the tenant folder from the Incorta server to shared storage so that both the Incorta and Spark nodes can read and write data.

Follow the steps below before starting Incorta:

- Move the tenant directory to the shared folder (the tenant directory is under the Incorta home directory).
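A minimal sketch of the move, assuming the shared storage is mounted on both nodes at `/mnt/shared` and the tenant directory sits under the Incorta home at the path shown (both are placeholders; use your actual mount point and tenant name):

```shell
# Move the tenant directory from the Incorta home to shared storage
# that both the Incorta and Spark nodes can read and write.
mv /home/incorta/IncortaAnalytics/Tenants/sbux-dev /mnt/shared/Tenants/sbux-dev

# Leave a symlink at the old location so any existing references to the
# original tenant path keep working, then update Incorta's tenant path
# configuration to point at the shared location.
ln -s /mnt/shared/Tenants/sbux-dev /home/incorta/IncortaAnalytics/Tenants/sbux-dev
```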