JBreak 2017 (04.04.2017)

Handling with Spark what Kafka is streaming


Using Spark 2.1 for resilient real-time processing of data that comes out of Kafka yearning to be aggregated. During the talk we will fix bugs, edit configs, calculate resource usage, and weep.

Lots of people know how to use Hadoop/Spark for generating reports, but lately there is a growing number of clients asking for real-time processing of big data streams (as opposed to merely saving them to S3 or Cassandra for future reports that take hours to generate). It is often required to calculate aggregated values "on the fly" over a short period of time and to filter a stream so as to reduce the load on subsequent processing stages.
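The "aggregate over a short window, then filter" pattern described above is what Spark Streaming's `reduceByKeyAndWindow(...).filter(...)` does across a cluster. As a rough, Spark-free model of that step (not code from the talk; the window size, threshold, and event shape are illustrative assumptions):

```python
from collections import Counter

def hot_keys(events, now, window=60, threshold=100):
    """Count events per key within the last `window` seconds and keep only
    keys whose count exceeds `threshold` -- the single-machine shape of a
    windowed aggregation followed by a filter, minus the distribution."""
    counts = Counter(key for ts, key in events if now - window <= ts <= now)
    return {key: c for key, c in counts.items() if c > threshold}

# A "hot" key firing every second vs. a quiet key firing every 30 seconds:
events = [(i, "a") for i in range(120)] + [(i, "b") for i in range(0, 120, 30)]
print(hot_keys(events, now=119, window=60, threshold=50))  # only "a" survives
```

Downstream stages then see only the few keys that crossed the threshold, not the raw firehose, which is precisely why this filtering is done as early as possible.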

It’s common to see projects where the team organizes a “data lake” in Amazon, simply dropping all incoming events into Kafka. Is Spark able to handle all the streams coming out of Kafka? If so, at what cost, and what can help it along?

Don’t expect an introduction to Spark and RDD or some blah-blah about Big Data. One case — one solution — a bit of theory — editing the configs — writing the code.