Learn Hadoop and BigData Technologies
Language
English
Course Features
• Over 74 lectures and 15.5 hours of content!
• Become literate in Big Data terminology and Hadoop.
• Understand Distributed File System architecture and implementations such as the Hadoop Distributed File System (HDFS) and the Google File System (GFS)
• Use the HDFS shell
• Use the Cloudera, Hortonworks, and Apache Bigtop virtual machines for Hadoop code development and testing
• Configure, execute, and monitor a Hadoop job
Target Audience
• Big Data professionals who want to master MapReduce and Hadoop
• IT professionals and managers who want to understand and learn this technology
System Requirements
• Familiarity with programming in Java
• Familiarity with Linux
• Oracle VirtualBox or VMware installed and functioning
Detailed Product Description
Modern companies estimate that only 12% of their accumulated data is ever analyzed, and IT professionals who can work with the remaining data are becoming increasingly valuable. Requests for Big Data talent are also up 40% over the past year.
Simply put, there is too much data and not enough professionals to manage and analyze it. This course aims to close the gap by covering MapReduce and its most popular implementation: Apache Hadoop. We will also cover the Hadoop ecosystem and the practical concepts involved in handling very large data sets.
Learn and Master the Most Popular Big Data Technologies in this Comprehensive Course.
Apache Hadoop and MapReduce on Amazon EMR
Hadoop Distributed File System vs. Google File System
Data Types, Readers, Writers and Splitters
Data Mining and Filtering
Shell Commands and HDFS
Cloudera, Hortonworks and Apache Bigtop Virtual Machines
Mastering Big Data for IT Professionals Worldwide
Broken down, Hadoop is an implementation of the MapReduce algorithm, and the MapReduce algorithm is what allows Big Data computations to scale. A MapReduce program loads a block of data into RAM, performs some calculations, loads the next block, and keeps going until all of the data has been processed from unstructured input into structured output.
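The block-by-block flow described above can be sketched outside Hadoop itself. The following is a minimal, illustrative Python simulation of the map, shuffle, and reduce phases using a word count; the function names and sample data are our own, not part of Hadoop's API:

```python
from collections import defaultdict

def map_phase(block):
    """Emit (word, 1) pairs for each word in one block of raw text."""
    return [(word.lower(), 1) for word in block.split()]

def shuffle(pairs):
    """Group intermediate pairs by key, as Hadoop does between map and reduce."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key, values):
    """Combine all values for one key into a final count."""
    return key, sum(values)

# These strings stand in for HDFS blocks, processed one at a time.
blocks = ["Big Data needs Hadoop", "Hadoop runs MapReduce jobs"]
intermediate = []
for block in blocks:  # load a block, process it, move to the next
    intermediate.extend(map_phase(block))
results = dict(reduce_phase(k, v) for k, v in shuffle(intermediate).items())
print(results["hadoop"])  # prints 2: unstructured text became structured counts
```

On a real cluster the map and reduce tasks run in parallel across many nodes; the Amazon EMR lectures in this course show the same flow at scale.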
IT managers and Big Data professionals who can program in Java, are familiar with Linux, have access to an Amazon EMR account, and have Oracle VirtualBox or VMware working will be able to follow the key lessons and concepts in this course and learn to write Hadoop jobs and MapReduce programs.
This course is perfect for anyone in a data-focused IT role who wants to learn new ways to work with large amounts of data.
Contents and Overview
In over 15.5 hours of content spanning 74 lectures, this course covers the necessary Big Data terminology and the use of Hadoop and MapReduce.
This course covers the importance of Big Data, how to set up a single-node Hadoop pseudo-cluster, the architecture of clusters, running multi-node clusters on Amazon EMR, and distributed file system operations, including running Hadoop on the Hortonworks Sandbox and Cloudera.
Students will also learn advanced Hadoop development, MapReduce concepts, and the use of MapReduce with Hive and Pig, and will get to know the Hadoop ecosystem, among other important lessons.
Upon completion, students will be literate in Big Data terminology, understand how Hadoop can be used to overcome challenging Big Data scenarios, be able to analyze and implement a MapReduce workflow, and be able to use virtual machines for code development, testing, and job configuration.
Curriculum
SECTION 1: INTRODUCTION TO BIG DATA
1 | Introduction to the Course |
2 | Why Hadoop, Big Data and Map Reduce Part - A |
3 | Why Hadoop, Big Data and Map Reduce Part - B |
4 | Why Hadoop, Big Data and Map Reduce Part - C |
5 | Architecture of Clusters |
6 | Virtual Machine (VM), Provisioning a VM with Vagrant and Puppet |
SECTION 2: HADOOP ARCHITECTURE
7 | Set up a single Node Hadoop pseudo cluster Part - A |
8 | Set up a single Node Hadoop pseudo cluster Part - B |
9 | Set up a single Node Hadoop pseudo cluster Part - C |
10 | Clusters and Nodes, Hadoop Cluster Part - A |
11 | Clusters and Nodes, Hadoop Cluster Part - B |
12 | NameNode, Secondary Name Node, Data Nodes Part - A |
13 | NameNode, Secondary Name Node, Data Nodes Part - B |
14 | Running Multi node clusters on Amazon EMR Part - A |
15 | Running Multi node clusters on Amazon EMR Part - B |
16 | Running Multi node clusters on Amazon EMR Part - C |
17 | Running Multi node clusters on Amazon EMR Part - D |
18 | Running Multi node clusters on Amazon EMR Part - E |
SECTION 3: DISTRIBUTED FILE SYSTEMS
19 | HDFS vs. GFS: a comparison - Part A |
20 | HDFS vs. GFS: a comparison - Part B |
21 | Run Hadoop on Cloudera, Web Administration |
22 | Run Hadoop on Hortonworks Sandbox |
23 | File system operations with the HDFS shell Part - A |
24 | File system operations with the HDFS shell Part - B |
25 | Advanced Hadoop development with Apache Bigtop Part - A |
26 | Advanced Hadoop development with Apache Bigtop Part - B |
SECTION 4: MAPREDUCE
27 | MapReduce Concepts in detail Part - A |
28 | MapReduce Concepts in detail Part - B |
29 | Jobs definition, Job configuration, submission, execution and monitoring Part - A |
30 | Jobs definition, Job configuration, submission, execution and monitoring Part - B |
31 | Jobs definition, Job configuration, submission, execution and monitoring Part - C |
32 | Hadoop Data Types, Paths, FileSystem, Splitters, Readers and Writers Part A |
33 | Hadoop Data Types, Paths, FileSystem, Splitters, Readers and Writers Part B |
34 | Hadoop Data Types, Paths, FileSystem, Splitters, Readers and Writers Part C |
35 | The ETL class, Definition, Extract, Transform, and Load Part - A |
36 | The ETL class, Definition, Extract, Transform, and Load Part - B |
37 | The ETL class, Definition, Extract, Transform, and Load Part - C |
38 | The UDF class, Definition, User Defined Functions Part - A |
39 | The UDF class, Definition, User Defined Functions Part - B |
SECTION 5: MAPREDUCE WITH HIVE (DATA WAREHOUSING)
40 | Schema design for a Data warehouse Part - A |
41 | Schema design for a Data warehouse Part - B |
42 | Hive Configuration |
43 | Hive Query Patterns Part - A |
44 | Hive Query Patterns Part - B |
45 | Hive Query Patterns Part - C |
46 | Example Hive ETL class Part - A |
47 | Example Hive ETL class Part - B |
SECTION 6: MAPREDUCE WITH PIG (PARALLEL PROCESSING)
48 | Introduction to Apache Pig Part - A |
49 | Introduction to Apache Pig Part - B |
50 | Introduction to Apache Pig Part - C |
51 | Introduction to Apache Pig Part - D |
52 | Pig LoadFunc and EvalFunc classes |
53 | Example Pig ETL class Part - A |
54 | Example Pig ETL class Part - B |
SECTION 7: THE HADOOP ECOSYSTEM
55 | Introduction to Crunch Part - A |
56 | Introduction to Crunch Part - B |
57 | Introduction to Avro |
58 | Introduction to Mahout Part - A |
59 | Introduction to Mahout Part - B |
60 | Introduction to Mahout Part - C |
SECTION 8: MAPREDUCE VERSION 2
61 | Apache Hadoop 2 and YARN Part - A |
62 | Apache Hadoop 2 and YARN Part - B |
63 | YARN Examples |
SECTION 9: PUTTING IT ALL TOGETHER
64 | Amazon EMR example Part - A |
65 | Amazon EMR example Part - B |
66 | Amazon EMR example Part - C |
67 | Amazon EMR example Part - D |
68 | Apache Bigtop example Part - A |
69 | Apache Bigtop example Part - B |
70 | Apache Bigtop example Part - C |
71 | Apache Bigtop example Part - D |
72 | Apache Bigtop example Part - E |
73 | Apache Bigtop example Part - F |
74 | Course Summary |
USP of the product
Online and downloadable, so you can learn at your own pace from the comfort of your home
Learn Everything, Anywhere, Anytime