This talk will start by addressing what we need from a distributed big data pipeline in terms of performance, reliability, and scalability. From there I will look at how this particular set of technologies meets those requirements and the features of each that support them. We then step into Spark (Spark Streaming, Spark MLlib, and Spark SQL), Kafka, Cassandra, and Akka to show how they actually work together, from the application layer to deployment across multiple data centers, in the framework of the Lambda Architecture. Finally, I will show how to integrate everything cleanly in your Scala code for fast, streaming computations in asynchronous, event-driven environments. Filmed at ScalaDays 2015.