Working with huge data volumes is undoubtedly hard: to move a mountain, you have to deal with a lot of small stones. But why strain yourself? MapReduce and Spark tackle the issue only at a low level, leaving room for higher-level tools. Stop struggling to make your big data workflow productive and efficient; make use of the tools we offer you.
This course will teach you how to:
- Warehouse your data efficiently using Hive, Spark SQL and Spark DataFrames.
- Work with large graphs, such as social graphs or networks.
- Optimize your Spark applications for maximum performance.
More precisely, you will master:
- Writing and executing Hive & Spark SQL queries;
- Reasoning how the queries are translated into actual execution primitives (be it MapReduce jobs or Spark transformations);
- Organizing your data in Hive to optimize disk space usage and execution times;
- Constructing Spark DataFrames and using them to write ad-hoc analytical jobs easily;
- Processing large graphs with Spark GraphFrames;
- Debugging, profiling and optimizing Spark application performance.
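As a taste of the second skill above — reasoning about how a declarative query turns into execution primitives — here is a minimal, framework-free Python sketch (an illustration only, not Hive or Spark code) of the map, shuffle and reduce phases conceptually behind a `SELECT word, COUNT(*) ... GROUP BY word` query:

```python
from collections import defaultdict

def map_phase(records):
    # Map: emit a (key, 1) pair for every word, like a MapReduce mapper
    for line in records:
        for word in line.split():
            yield (word, 1)

def shuffle_phase(pairs):
    # Shuffle: group values by key, as the framework does between phases
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: aggregate each group, like COUNT(*) per key
    return {key: sum(values) for key, values in groups.items()}

# Conceptual equivalent of: SELECT word, COUNT(*) FROM docs GROUP BY word
docs = ["big data is big", "data is everywhere"]
counts = reduce_phase(shuffle_phase(map_phase(docs)))
print(counts)  # {'big': 2, 'data': 2, 'is': 2, 'everywhere': 1}
```

In the course you will see how Hive and Spark generate, distribute and optimize exactly these kinds of phases for you.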
Still in doubt? Become a data ninja by taking this course!
Special thanks to:
- Prof. Mikhail Roytberg, APT dept., MIPT, who was the initial reviewer of the project and the supervisor and mentor of half of the BigData team. He was the one who helped get this show on the road.
- Oleg Sukhoroslov (PhD, Senior Researcher at IITP RAS), who has been teaching MapReduce, Hadoop and friends since 2008. Now he is leading the infrastructure team.
- Oleg Ivchenko (PhD student at APT dept., MIPT), Pavel Akhtyamov (MSc. student at APT dept., MIPT) and Vladimir Kuznetsov (Assistant at P.G. Demidov Yaroslavl State University), superbrains who have developed and now maintain the infrastructure used for practical assignments in this course.
- Asya Roitberg, Eugene Baulin, Marina Sudarikova. These people never sleep, babysitting this course day and night to make your learning experience productive, smooth and exciting.