Spark Thrift Queries

Click Spark Thrift Queries to view the list of queries, classified in the following charts.

  • To view the distribution charts, click the icon.
  • The default time range is Last 24 hrs. To view statistics from a custom date range, click the icon and select a time frame and timezone of your choice.
Distribution
The Distributions panel displays a summary of jobs as a Sankey diagram. By default, the chart displays the distribution by Duration. You can also filter the distribution by Input Data, Output Data, Shuffle Reads, or Shuffle Writes.
Core Wastage
The Core Wastage chart displays the core wastage by the following locality types. The chart also displays the Core Used and Core Wasted values (in %).
  • Process Local: Tasks run within the same process (executor) as the source data.
  • Node Local: Tasks run on the same machine as the source data.
  • Rack Local: Tasks run on the same rack as the source data.
  • Any: Tasks run elsewhere in the cluster, not on the same node or rack as the source data.
  • No pref: Tasks have no locality preference.
  • Idle: Cores that are idle, with no tasks running on them.
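As a rough illustration (not the product's actual formula), Core Used and Core Wasted percentages can be thought of as the share of allocated core-time that tasks actually consumed, broken down by locality. All field names and numbers below are hypothetical.

```python
# Illustrative sketch: estimate Core Used / Core Wasted (in %) from the
# core-time consumed by tasks at each locality level versus the total
# core-time allocated to the application. Numbers are made up.

def core_wastage(core_millis_by_locality, total_core_millis):
    """Return (core_used_pct, core_wasted_pct), each rounded to one decimal."""
    used = sum(core_millis_by_locality.values())
    used_pct = 100.0 * used / total_core_millis
    return round(used_pct, 1), round(100.0 - used_pct, 1)

# Hypothetical per-locality core-time, in milliseconds.
usage = {
    "PROCESS_LOCAL": 5_400_000,
    "NODE_LOCAL": 2_100_000,
    "RACK_LOCAL": 600_000,
    "ANY": 300_000,
    "NO_PREF": 100_000,
}
used_pct, wasted_pct = core_wastage(usage, total_core_millis=12_000_000)
```

The wasted share corresponds to the Idle portion of the chart: allocated cores that ran no tasks.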
VCore Usage
The number of virtual cores (VCores) used by a queue in the cluster. This chart displays the Average VCore Usage and Max VCore Usage.
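Conceptually, the two values can be derived from periodic samples of the VCores a queue holds; the sample values below are made up for illustration.

```python
# Hypothetical sketch: Average and Max VCore Usage computed from periodic
# samples of the number of VCores a queue is using.

samples = [4, 6, 8, 8, 10, 6]  # VCores in use at each sampling interval

avg_vcores = sum(samples) / len(samples)  # Average VCore Usage
max_vcores = max(samples)                 # Max VCore Usage
```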

The following metrics are displayed for each user.

User: The name of the user running the query.
Pool: The name of the fair scheduler pool to which the user belongs.
State: The final state of the query run by the user: Failed, Finished, or Compiled.
Overhead Time: The excess time taken to run the query (in nanoseconds).
Start Time: The time at which the user executed the query.
Completion Time: The time at which the query completed execution.
# of Stages: The number of stages in the query.
Duration: The time taken to run the query.
Data Read: The amount of data read by the query.
Data Written: The amount of data written by the query.
GC Time: The time spent by the JVM in garbage collection while executing the query.
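As a hedged sketch of how two of these metrics relate: Duration follows from the start and completion timestamps, and overhead time is assumed here (for illustration only) to be the duration minus the time spent inside the query's stages. The timestamps and stage time below are hypothetical.

```python
# Illustrative sketch, not the product's actual derivation: Duration from
# Start Time / Completion Time, and overhead as the excess over stage time.

from datetime import datetime

start = datetime(2024, 1, 15, 10, 0, 0)       # hypothetical Start Time
completion = datetime(2024, 1, 15, 10, 0, 42)  # hypothetical Completion Time
stage_time_ns = 39_500_000_000                 # assumed total stage time (ns)

duration_ns = int((completion - start).total_seconds() * 1_000_000_000)
overhead_ns = duration_ns - stage_time_ns      # excess time, in nanoseconds
```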