Part I. Architectural Considerations for Hadoop Applications

1. Data Modeling in Hadoop
    Data Storage Options
        Standard File Formats
        Hadoop File Types
        Serialization Formats
        Columnar Formats
        Compression
    HDFS Schema Design
        Location of HDFS Files
        Advanced HDFS Schema Design
        HDFS Schema Design Summary
    HBase Schema Design
        Row Key
        Timestamp
        Hops
        Tables and Regions
        Using Columns
        Using Column Families
        Time-to-Live
    Managing Metadata
        What Is Metadata?
        Why Care About Metadata?
        Where to Store Metadata?
        Examples of Managing Metadata
        Limitations of the Hive Metastore and HCatalog
        Other Ways of Storing Metadata
    Conclusion

2. Data Movement
    Data Ingestion Considerations
        Timeliness of Data Ingestion
        Incremental Updates
        Access Patterns
        Original Source System and Data Structure
        Transformations
        Network Bottlenecks
        Network Security
        Push or Pull
        Failure Handling
        Level of Complexity
    Data Ingestion Options
        File Transfers
        Considerations for File Transfers versus Other Ingest Methods
        Sqoop: Batch Transfer Between Hadoop and Relational Databases
        Flume: Event-Based Data Collection and Processing
        Kafka
    Data Extraction
    Conclusion

3. Processing Data in Hadoop
    MapReduce
        MapReduce Overview
        Example for MapReduce
        When to Use MapReduce
    Spark
        Spark Overview
        Overview of Spark Components
        Basic Spark Concepts
        Benefits of Using Spark
        Spark Example
        When to Use Spark
    Abstractions
        Pig
        Pig Example
        When to Use Pig
        Crunch
        Crunch Example
        When to Use Crunch
        Cascading
        Cascading Example
        When to Use Cascading
    Hive
        Hive Overview
        Example of Hive Code
        When to Use Hive
    Impala
        Impala Overview
        Speed-Oriented Design
        Impala Example
        When to Use Impala
    Conclusion

4. Common Hadoop Processing Patterns
    Pattern: Removing Duplicate Records by Primary Key
        Data Generation for Deduplication Example
        Code Example: Spark Deduplication in Scala
        Code Example: Deduplication in SQL
    Pattern: Windowing Analysis
        Data Generation for Windowing Analysis Example
        Code Example: Peaks and Valleys in Spark
        Code Example: Peaks and Valleys in SQL
    Pattern: Time Series Modifications
        Use HBase and Versioning
        Use HBase with a RowKey of RecordKey and StartTime
        Use HDFS and Rewrite the Whole Table
        Use Partitions on HDFS for Current and Historical Records
        Data Generation for Time Series Example
        Code Example: Time Series in Spark
        Code Example: Time Series in SQL
    Conclusion

5. Graph Processing on Hadoop
    What Is a Graph?
    What Is Graph Processing?
    How Do You Process a Graph in a Distributed System?
        The Bulk Synchronous Parallel Model
        BSP by Example
    Giraph
        Read and Partition the Data
        Batch Process the Graph with BSP
        Write the Graph Back to Disk
        Putting It All Together
        When Should You Use Giraph?
    GraphX
        Just Another RDD
        GraphX Pregel Interface
            vprog()
            sendMessage()
            mergeMessage()
    Which Tool to Use?
    Conclusion

6. Orchestration
    Why We Need Workflow Orchestration
    The Limits of Scripting
    The Enterprise Job Scheduler and Hadoop
    Orchestration Frameworks in the Hadoop Ecosystem
    Oozie Terminology
    Oozie Overview
    Oozie Workflow
    Workflow Patterns
        Point-to-Point Workflow
        Fan-Out Workflow
        Capture-and-Decide Workflow
    Parameterizing Workflows
    Classpath Definition
    Scheduling Patterns
        Frequency Scheduling
        Time and Data Triggers
    Executing Workflows
    Conclusion

7. Near-Real-Time Processing with Hadoop
    Stream Processing
    Apache Storm
        Storm High-Level Architecture
        Storm Topologies
        Tuples and Streams
        Spouts and Bolts
        Stream Groupings
        Reliability of Storm Applications
        Exactly-Once Processing
        Fault Tolerance
        Integrating Storm with HDFS
        Integrating Storm with HBase
        Storm Example: Simple Moving Average
        Evaluating Storm
    Trident
        Trident Example: Simple Moving Average
        Evaluating Trident
    Spark Streaming
        Overview of Spark Streaming
        Spark Streaming Example: Simple Count
        Spark Streaming Example: Multiple Inputs
        Spark Streaming Example: Maintaining State
        Spark Streaming Example: Windowing
        Spark Streaming Example: Streaming versus ETL Code
        Evaluating Spark Streaming
    Flume Interceptors
    Which Tool to Use?
        Low-Latency Enrichment, Validation, Alerting, and Ingestion
        NRT Counting, Rolling Averages, and Iterative Processing
        Complex Data Pipelines
    Conclusion

Part II. Case Studies

8. Clickstream Analysis
    Defining the Use Case
    Using Hadoop for Clickstream Analysis
    Design Overview
    Storage
    Ingestion
        The Client Tier
        The Collector Tier
    Processing
        Data Deduplication
        Sessionization
    Analyzing
    Orchestration
    Conclusion

9. Fraud Detection
    Continuous Improvement
    Taking Action
    Architectural Requirements of Fraud Detection Systems
    Introducing Our Use Case
    High-Level Design
    Client Architecture
    Profile Storage and Retrieval
        Caching
        HBase Data Definition
        Delivering Transaction Status: Approved or Denied?
    Ingest
        Path Between the Client and Flume
    Near-Real-Time and Exploratory Analytics
        Near-Real-Time Processing
        Exploratory Analytics
    What About Other Architectures?
        Flume Interceptors
        Kafka to Storm or Spark Streaming
        External Business Rules Engine
    Conclusion

10. Data Warehouse
    Using Hadoop for Data Warehousing
    Defining the Use Case
    OLTP Schema
    Data Warehouse: Introduction and Terminology
    Data Warehousing with Hadoop
    High-Level Design
        Data Modeling and Storage
        Ingestion
        Data Processing and Access
        Aggregations
        Data Export
        Orchestration
    Conclusion

A. Joins in Impala