Big Data on AWS
Course ID
90550
Course Description
In this course, you will learn about cloud-based big data solutions such as Amazon Elastic MapReduce (EMR), Amazon Redshift, Amazon Kinesis, and the rest of the AWS big data platform. You will learn how to use Amazon EMR to process data using the broad ecosystem of Apache Hadoop tools like Hive and Hue. Additionally, you will learn how to create big data environments, work with Amazon DynamoDB, Amazon Redshift, and Amazon Kinesis, and leverage best practices to design big data environments for security and cost-effectiveness.
Prerequisites
Familiarity with big data technologies, including Apache Hadoop and HDFS
Knowledge of big data technologies such as Pig, Hive, and MapReduce is helpful but not required
Working knowledge of core AWS services and public cloud implementation
Completion of the AWS Essentials course, or equivalent experience
Basic understanding of data warehousing, relational database systems, and database design
Audience
Individuals responsible for designing and implementing big data solutions, such as solutions architects and SysOps administrators
Data scientists and data analysts interested in learning about big data solutions on AWS
Course Content
Understand Apache Hadoop in the context of Amazon EMR
Understand the architecture of an Amazon EMR cluster
Launch an Amazon EMR cluster using an appropriate Amazon Machine Image and Amazon EC2 instance types (a minimal SDK sketch follows this list)
Choose appropriate AWS data storage options for use with Amazon EMR
Ingest, transfer, and compress data for use with Amazon EMR
Use common programming frameworks available for Amazon EMR, including Hive, Pig, and Streaming
Work with Amazon Redshift to implement a big data solution
Leverage big data visualization software
Choose appropriate security options for Amazon EMR and your data
Perform in-memory data analysis with Spark and Shark on Amazon EMR
Identify options to manage your Amazon EMR environment cost-effectively
Understand the benefits of using Amazon Kinesis for big data
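
As a taste of the cluster-launch topic above, the sketch below shows one way to start a small Amazon EMR cluster with the boto3 SDK. It is a minimal illustration, not course material: the cluster name, release label, instance types, key pair, and S3 log bucket are placeholder values, and newer EMR releases are selected by a release label rather than the AMI version mentioned in the course outline.

# Minimal sketch: launching a small Amazon EMR cluster with boto3.
# All names (cluster name, key pair, S3 log bucket) are placeholders.
import boto3

emr = boto3.client("emr", region_name="us-east-1")

response = emr.run_job_flow(
    Name="bigdata-course-demo",                # illustrative cluster name
    ReleaseLabel="emr-6.15.0",                 # example EMR release label
    Applications=[{"Name": "Hive"}, {"Name": "Pig"}, {"Name": "Spark"}],
    LogUri="s3://example-bucket/emr-logs/",    # placeholder log bucket
    Instances={
        "MasterInstanceType": "m5.xlarge",     # example instance types
        "SlaveInstanceType": "m5.xlarge",
        "InstanceCount": 3,                    # 1 master + 2 core nodes
        "Ec2KeyName": "example-keypair",       # placeholder EC2 key pair
        "KeepJobFlowAliveWhenNoSteps": True,   # keep the cluster up for interactive use
    },
    JobFlowRole="EMR_EC2_DefaultRole",         # default EMR instance profile
    ServiceRole="EMR_DefaultRole",             # default EMR service role
    VisibleToAllUsers=True,
)

print("Started cluster:", response["JobFlowId"])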