About this Specialization
100% online courses

Start instantly and learn at your own schedule.
Flexible Schedule

Set and maintain flexible deadlines.
Intermediate Level

Hours to complete

Approx. 5 months to complete

Suggested 7 hours/week
Available languages

English

Subtitles: English...

Skills you will gain

Apache Hadoop, Recommender Systems, MapReduce, Apache Spark

How the Specialization Works

Take Courses

A Coursera Specialization is a series of courses that helps you master a skill. To begin, enroll in the Specialization directly, or review its courses and choose the one you'd like to start with. When you subscribe to a course that is part of a Specialization, you’re automatically subscribed to the full Specialization. It’s okay to complete just one course — you can pause your learning or end your subscription at any time. Visit your learner dashboard to track your course enrollments and your progress.

Hands-on Project

Every Specialization includes a hands-on project. You'll need to successfully finish the project(s) to complete the Specialization and earn your certificate. If the Specialization includes a separate course for the hands-on project, you'll need to finish each of the other courses before you can start it.

Earn a Certificate

When you finish every course and complete the hands-on project, you'll earn a Certificate that you can share with prospective employers and your professional network.

There are 5 Courses in this Specialization

Course 1

Big Data Essentials: HDFS, MapReduce and Spark RDD

4.1
219 ratings
63 reviews
Have you ever heard about such technologies as HDFS, MapReduce and Spark? Always wanted to learn these new tools but missed concise starting material? Don’t miss this course either! In this 6-week course you will:

- learn some basic technologies of the modern Big Data landscape, namely HDFS, MapReduce and Spark;
- be guided both through systems internals and their applications;
- learn about distributed file systems, why they exist and what function they serve;
- grasp the MapReduce framework, a workhorse for many modern Big Data applications;
- apply the framework to process texts and solve sample business cases;
- learn about Spark, the next-generation computational framework;
- build a strong understanding of Spark basic concepts;
- develop skills to apply these tools to creating solutions in finance, social networks, telecommunications and many other fields.

Your learning experience will be as close to real life as possible, with the chance to evaluate your practical assignments on a real cluster. No mocking; a friendly, considerate atmosphere makes the process of your learning smooth and enjoyable. Get ready to work with real datasets alongside real masters!

Special thanks to:

- Prof. Mikhail Roytberg, APT dept., MIPT, who was the initial reviewer of the project and the supervisor and mentor of half of the BigData team. He was the one who helped to get this show on the road.
- Oleg Sukhoroslov (PhD, Senior Researcher at IITP RAS), who has been teaching MapReduce, Hadoop and friends since 2008. Now he is leading the infrastructure team.
- Oleg Ivchenko (PhD student, APT dept., MIPT), Pavel Akhtyamov (MSc student, APT dept., MIPT) and Vladimir Kuznetsov (Assistant at P.G. Demidov Yaroslavl State University), superbrains who have developed and now maintain the infrastructure used for practical assignments in this course.
- Asya Roitberg, Eugene Baulin and Marina Sudarikova, who never sleep, babysitting this course day and night to make your learning experience productive, smooth and exciting....
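To give a flavor of the Spark RDD material, here is a minimal word-count sketch in PySpark. It assumes a local Spark installation and a hypothetical input file "input.txt" (neither is part of the course assignments, which run on a real cluster):

    # Minimal word count with the Spark RDD API (PySpark).
    # Assumptions (not from the course): local Spark installation, input file "input.txt".
    from pyspark import SparkContext

    sc = SparkContext(appName="WordCount")

    counts = (
        sc.textFile("input.txt")               # read the input file line by line
          .flatMap(lambda line: line.split())  # map: split each line into words
          .map(lambda word: (word, 1))         # emit (word, 1) pairs
          .reduceByKey(lambda a, b: a + b)     # reduce: sum the counts per word
    )

    print(counts.take(10))                     # preview the first 10 (word, count) pairs
    sc.stop()

The same map/reduce structure carries over to Hadoop MapReduce jobs; Spark simply expresses it as transformations on an RDD.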
Course 2

Big Data Analysis: Hive, Spark SQL, DataFrames and GraphFrames

3.9
70 ratings
14 reviews
No doubt working with huge data volumes is hard, but to move a mountain you have to deal with a lot of small stones. But why strain yourself? Using MapReduce and Spark you tackle the issue only partially, which leaves room for high-level tools. Stop struggling to make your big data workflow productive and efficient; make use of the tools we are offering you. This course will teach you how to:

- warehouse your data efficiently using Hive, Spark SQL and Spark DataFrames;
- work with large graphs, such as social graphs or networks;
- optimize your Spark applications for maximum performance.

More precisely, you will master:

- writing and executing Hive & Spark SQL queries;
- reasoning about how the queries are translated into actual execution primitives (be it MapReduce jobs or Spark transformations);
- organizing your data in Hive to optimize disk space usage and execution times;
- constructing Spark DataFrames and using them to write ad-hoc analytical jobs easily;
- processing large graphs with Spark GraphFrames;
- debugging, profiling and optimizing Spark application performance.

Still in doubt? Check this out. Become a data ninja by taking this course!

Special thanks to:

- Prof. Mikhail Roytberg, APT dept., MIPT, who was the initial reviewer of the project and the supervisor and mentor of half of the BigData team. He was the one who helped to get this show on the road.
- Oleg Sukhoroslov (PhD, Senior Researcher at IITP RAS), who has been teaching MapReduce, Hadoop and friends since 2008. Now he is leading the infrastructure team.
- Oleg Ivchenko (PhD student, APT dept., MIPT), Pavel Akhtyamov (MSc student, APT dept., MIPT) and Vladimir Kuznetsov (Assistant at P.G. Demidov Yaroslavl State University), superbrains who have developed and now maintain the infrastructure used for practical assignments in this course.
- Asya Roitberg, Eugene Baulin and Marina Sudarikova, who never sleep, babysitting this course day and night to make your learning experience productive, smooth and exciting....
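As a small illustration of the Spark SQL and DataFrame style of work, the sketch below builds a DataFrame from made-up data (the names "logs", "user" and "clicks" are placeholders, not a course dataset) and runs the same aggregation once as a SQL query and once as DataFrame operations; both compile into the same Spark execution plan:

    # A hedged Spark SQL / DataFrame sketch on made-up data.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("SqlSketch").getOrCreate()

    # Create a tiny DataFrame and register it as a temporary view.
    df = spark.createDataFrame(
        [("alice", 3), ("bob", 5), ("alice", 2)],
        ["user", "clicks"],
    )
    df.createOrReplaceTempView("logs")

    # The same aggregation, expressed as SQL and as DataFrame operations.
    spark.sql("SELECT user, SUM(clicks) AS total FROM logs GROUP BY user").show()
    df.groupBy("user").sum("clicks").show()

    spark.stop()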
Course 3

Big Data Applications: Machine Learning at Scale

3.9
48 ratings
12 reviews
Machine learning is transforming the world around us. To become successful, you’d better know what kinds of problems can be solved with machine learning, and how they can be solved. Don’t know where to start? The answer is one button away. During this course you will:

- identify practical problems which can be solved with machine learning;
- build, tune and apply linear models with Spark MLlib;
- understand methods of text processing;
- fit decision trees and boost them with ensemble learning;
- construct your own recommender system.

As a practical assignment, you will:

- build and apply linear models for classification and regression tasks;
- learn how to work with texts;
- automatically construct decision trees and improve their performance with ensemble learning;
- finally, build your own recommender system!

With these skills, you will be able to tackle many practical machine learning tasks. We provide the tools; you choose the place of application to make this world of machines more intelligent.

Special thanks to:

- Prof. Mikhail Roytberg, APT dept., MIPT, who was the initial reviewer of the project and the supervisor and mentor of half of the BigData team. He was the one who helped to get this show on the road.
- Oleg Sukhoroslov (PhD, Senior Researcher at IITP RAS), who has been teaching MapReduce, Hadoop and friends since 2008. Now he is leading the infrastructure team.
- Oleg Ivchenko (PhD student, APT dept., MIPT), Pavel Akhtyamov (MSc student, APT dept., MIPT) and Vladimir Kuznetsov (Assistant at P.G. Demidov Yaroslavl State University), superbrains who have developed and now maintain the infrastructure used for practical assignments in this course.
- Asya Roitberg, Eugene Baulin and Marina Sudarikova, who never sleep, babysitting this course day and night to make your learning experience productive, smooth and exciting....
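For a sense of the Spark MLlib part, here is a hedged sketch of fitting a linear model (logistic regression) with the DataFrame-based spark.ml API on a tiny made-up dataset; real assignments use larger data and proper train/test splits:

    # Fitting a logistic regression with Spark MLlib (spark.ml) on toy data.
    from pyspark.sql import SparkSession
    from pyspark.ml.classification import LogisticRegression
    from pyspark.ml.linalg import Vectors

    spark = SparkSession.builder.appName("MLlibSketch").getOrCreate()

    # Toy training data: (label, feature vector) rows, made up for illustration.
    train = spark.createDataFrame(
        [
            (0.0, Vectors.dense([0.0, 1.1])),
            (1.0, Vectors.dense([2.0, 1.0])),
            (0.0, Vectors.dense([0.5, 0.3])),
            (1.0, Vectors.dense([2.2, 1.4])),
        ],
        ["label", "features"],
    )

    lr = LogisticRegression(maxIter=10, regParam=0.01)  # hyperparameters to tune on real data
    model = lr.fit(train)
    model.transform(train).select("label", "prediction").show()

    spark.stop()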
Course 4

Big Data Applications: Real-Time Streaming

4.7
3 ratings
There is a significant number of tasks where we need not just to process an enormous volume of data but to process it as quickly as possible. Delays in tsunami prediction can cost people’s lives. Delays in traffic jam prediction cost extra time. Advertisements based on users’ recent activity are ten times more popular. However, stream processing techniques alone are not enough to create a complete real-time system. For example, to create a recommendation system we need storage that allows us to store and fetch data for a user with minimal latency. These databases should be able to store hundreds of terabytes of data, handle billions of requests per day and have 100% uptime. NoSQL databases are commonly used to solve this challenging problem. After you finish this course, you will master stream processing systems and NoSQL databases. You will also learn how to use such popular and powerful systems as Kafka, Cassandra and Redis. To get the most out of this course, you need to know Hadoop and SQL. You should also have a working knowledge of bash, Python and Spark. Do you want to learn how to build Big Data applications that can withstand modern challenges? Jump right in!...
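As a flavor of the streaming material, the sketch below reads records from a Kafka topic with the kafka-python client. The broker address "localhost:9092" and the topic name "events" are assumptions for illustration only; a real pipeline would add error handling and write results to a store such as Cassandra or Redis.

    # Minimal Kafka consumer sketch (kafka-python client).
    # Assumptions (not from the course): a broker at localhost:9092 and a topic named "events".
    from kafka import KafkaConsumer

    consumer = KafkaConsumer(
        "events",
        bootstrap_servers="localhost:9092",
        auto_offset_reset="earliest",
    )

    for message in consumer:               # blocks, processing records as they arrive
        print(message.key, message.value)  # in a real system, update a NoSQL store here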

Instructors

Pavel Klemenkov
Chief Data Scientist, NVIDIA

Ivan Mushketyk
Software Engineer, ConsenSys

Evgeny Frolov
Data Scientist, PhD Student @Skoltech (Computational and Data Intensive Science and Engineering)

Ilya Trofimov
Principal Data Scientist, Yandex

Ivan Puzyrevskiy
Technical Team Lead

Alexey A. Dral
Founder and Chief Executive Officer, BigData Team

Pavel Mezentsev
Senior Data Scientist, PulsePoint Inc.

Vladislav Goncharenko
DCAM MIPT, Skoltech

Artyom Vybornov
Lead Software Engineer at Rambler&Co

Industry Partners

About Yandex

Yandex is a technology company that builds intelligent products and services powered by machine learning. Our goal is to help consumers and businesses better navigate the online and offline world....

Frequently Asked Questions

  • Can I just enroll in a single course?

    Yes! To get started, click the course card that interests you and enroll. You can enroll and complete the course to earn a shareable certificate, or you can audit it to view the course materials for free. When you subscribe to a course that is part of a Specialization, you’re automatically subscribed to the full Specialization. Visit your learner dashboard to track your progress.

  • Is this course really 100% online? Do I need to attend any classes in person?

    This course is completely online, so there’s no need to show up to a classroom in person. You can access your lectures, readings and assignments anytime and anywhere via the web or your mobile device.

  • Will I earn university credit for completing the Specialization?

    This Specialization doesn't carry university credit, but some universities may choose to accept Specialization Certificates for credit. Check with your institution to learn more.

  • How long does it take to complete the Specialization?

    6 months

  • What background knowledge is necessary?

    - Programming experience in Python; it is required to complete the programming assignments.

    - Unix basics. As the technologies covered throughout the Specialization operate in a Unix environment, we expect you to have a basic understanding of the subject; things like processes and files are assumed to be familiar to the learner.

    - Basic linear algebra and probability theory. To grasp the “Big Data Applications: Machine Learning at Scale” course, you should be familiar with the math primer or complete an introductory course on machine learning.

  • Do I need to take the courses in a specific order?

    The courses are expected to be taken in order, from the first to the last.

  • What will I be able to do upon completing the Specialization?

    You will be able to present your portfolio project (Capstone project) to potential employers.

More questions? Visit the Learner Help Center.