Level-Based Aggregate in Incorta
Compare the discounts given across different areas, business units, and levels of detail.
Learn how to install a Python package on Incorta Cloud!
An Incorta job ran for more than 40 hours, and none of us noticed until business users contacted us asking why they did not see updated data on the dashboard. Incorta previously provided us with the Long Spinning Job Alert. Why did it not work?
Have you ever needed to calculate a Z-score?
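As a quick refresher, a Z-score measures how many standard deviations a value sits from the mean of its column. A minimal sketch in plain Python (the sample values here are illustrative, not from any Incorta dataset):

```python
from statistics import mean, pstdev

def z_scores(values):
    """Return the Z-score of each value: (x - mean) / population std dev."""
    mu = mean(values)
    sigma = pstdev(values)
    return [(x - mu) / sigma for x in values]

# Sample column: mean is 5, population std dev is 2
sample = [2, 4, 4, 4, 5, 5, 7, 9]
scores = z_scores(sample)  # the last value, 9, scores (9 - 5) / 2 = 2.0
```

In Incorta, the same calculation can run over a real column inside a materialized view, where Spark computes the mean and standard deviation across the whole table.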
This article will review how to create a Kafka Data Source.
This article provides more information about the Avro Extractor tool that generates an Avro file from a provided JSON file.
Apache Kafka is a distributed publish-subscribe messaging system that can handle a high volume of data and enables you to pass messages from one endpoint to another.
Understanding Incorta Joins: A Comprehensive Guide. In data analytics, effectively linking and analyzing data from diverse sources is crucial for deriving actionable insights. Incorta's advanced join capabilities enable businesses to seamlessly integr...
Prerequisites: a CMC instance is up and running; node agent instance(s) are up and running; ZooKeeper is up and running (either shipped with Incorta or a clustered ZooKeeper); an empty schema is created (either Oracle metadata or MySQL metadata); a common/shar...
Use Incorta to prepare data and create an ML model in DataRobot. You can then send the data as part of an Incorta data pipeline to DataRobot for making predictions. The DataRobot Batch Prediction API can be called from an Incorta MV to complete the prediction job.
String matching is a very common problem in business. You will learn how to perform address matching using FuzzyWuzzy in Incorta.
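To give a taste of the technique, fuzzy similarity scoring can be sketched with Python's standard-library difflib, used here as a stand-in for FuzzyWuzzy's fuzz.ratio (the addresses and helper names below are made up for illustration):

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> int:
    """Return a 0-100 similarity score, roughly comparable to fuzz.ratio."""
    return round(SequenceMatcher(None, a.lower(), b.lower()).ratio() * 100)

def best_match(address: str, candidates: list[str]) -> tuple[str, int]:
    """Pick the candidate address with the highest similarity score."""
    return max(((c, similarity(address, c)) for c in candidates),
               key=lambda pair: pair[1])

candidates = ["100 Main Street, Springfield", "100 Maple St, Shelbyville"]
match, score = best_match("100 main st, springfield", candidates)
```

The same pattern scales up inside an Incorta materialized view, where each incoming address is matched against a reference table of cleansed addresses.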
Use as.DataFrame() and createDataFrame() to convert an R data frame into a Spark DataFrame.
Use collect() to bring the data from a Spark DataFrame back into an R data frame.
Run SQL queries to do interactive analysis in Incorta Notebook with SparkR.
How can we get the current hour in Spark SQL? And how do we get the time as of one hour ago?
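For context, Spark SQL exposes current_timestamp() and hour(), and supports interval arithmetic such as current_timestamp() - INTERVAL 1 HOUR. The same logic in plain Python (an illustration of the calculation, not the Spark API):

```python
from datetime import datetime, timedelta

now = datetime.now()
current_hour = now.hour                  # like hour(current_timestamp()) in Spark SQL
one_hour_ago = now - timedelta(hours=1)  # like current_timestamp() - INTERVAL 1 HOUR
```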