com.incorta.miniThrift.App cores and memory usage
We're trying to better understand what this "com.incorta.miniThrift.App" is doing in our Spark applications. It currently appears to be configured with 8 cores and 2 GB per executor - is this reasonable? That seems a bit high to me for something that doesn't appear to be running any jobs.
This is a long-running Spark application that delegates SQL queries to Spark (Incorta SQL Interface - Spark) when you connect a DB query tool or other BI client to Incorta on port 5436/5442 (by default). If you don't use this feature, you can turn it off from Admin UI -> System Configuration -> Spark Integration -> Enable SQL App. If you are using the SQL interface, you can tune the allocation based on the average size of your data and queries and the available CPUs. You normally don't need to allocate more than 5 cores per executor. For example, four executors with 4 cores per executor might perform better than two executors with 8 cores per executor.
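To illustrate that sizing advice, the executor layout can be sketched in spark-defaults.conf style properties. The values below are an example for a 16-core budget, not Incorta defaults - adjust them to your own data volumes and hardware:

```properties
# Example only: 4 executors x 4 cores (16 cores total),
# rather than 2 executors x 8 cores.
# Total cores the SQL App may claim across all executors:
spark.cores.max 16
# Cores per executor; 16 / 4 = 4 executors:
spark.executor.cores 4
# Memory per executor and for the driver (example values):
spark.executor.memory 2g
spark.driver.memory 2g
```

With `spark.cores.max` fixed, lowering `spark.executor.cores` simply yields more, smaller executors; keeping it at 5 or below tends to avoid oversized executors without changing the app's total core footprint.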
Thank you kindly for this Shinji, and I apologize we didn't reply back sooner!
We've increased our memory on the server, but now are questioning our configuration for our cores across the different services. What's the recommendation for the following scenario:
32-core server, with Incorta configured as a standalone service. Incorta, Spark, and both the Analytics and Loader services are all on the same server. We can find references for configuring the number of cores for a job, but how do we make sure the Loader Service has adequate cores available for a scheduled schema load if it runs at the same time as a PySpark MV?
Or, is the above scenario governed by the same spark.cores.max, spark.executor.cores, spark.driver.memory, and spark.executor.memory configurations?
Thanks in advance to anyone who can help!