【About the Authors】

Nikhil Buduma is the cofounder and chief scientist of Remedy, a San Francisco-based company building a new, data-driven system for health management. At 16, he managed a drug discovery laboratory at San Jose State University, developing novel, low-cost screening methods for resource-constrained communities. By 19, he was a two-time gold medalist at the International Biology Olympiad. He went on to MIT, where he focused on developing large-scale data systems to impact healthcare delivery, mental health, and medical research. At MIT, he cofounded Lean On Me, a national nonprofit that runs an anonymous text hotline to provide effective peer support on college campuses and uses data to positively affect physical and mental health. Today, Nikhil invests in hard-technology and data companies through his venture fund, Q Venture Partners, and manages a data analytics team for the Milwaukee Brewers baseball team.
Contributing author Nick Locascio is a deep learning consultant, writer, and researcher. Nick earned his bachelor's degree and Master of Engineering at MIT in Regina Barzilay's lab, specializing in NLP and computer vision research. He has worked on projects ranging from training neural networks to writing natural-language prompts, and has even collaborated with the MGH Radiology department to apply deep learning to mammography for medical diagnostic assistance. Nick's work has been covered by MIT News and CNBC. In his spare time, Nick provides private deep learning consulting to Fortune 500 companies. He also cofounded the landmark MIT course 6.S191 Intro to Deep Learning, which he has taught to more than 300 students, including postdocs and professors.
【Table of Contents】
Preface
1. The Neural Network
    Building Intelligent Machines
    The Limits of Traditional Computer Programs
    The Mechanics of Machine Learning
    The Neuron
    Expressing Linear Perceptrons as Neurons
    Feed-Forward Neural Networks
    Linear Neurons and Their Limitations
    Sigmoid, Tanh, and ReLU Neurons
    Softmax Output Layers
    Looking Forward
2. Training Feed-Forward Neural Networks
    The Fast-Food Problem
    Gradient Descent
    The Delta Rule and Learning Rates
    Gradient Descent with Sigmoidal Neurons
    The Backpropagation Algorithm
    Stochastic and Minibatch Gradient Descent
    Test Sets, Validation Sets, and Overfitting
    Preventing Overfitting in Deep Neural Networks
    Summary
3. Implementing Neural Networks in TensorFlow
    What Is TensorFlow?
    How Does TensorFlow Compare to Alternatives?
    Installing TensorFlow
    Creating and Manipulating TensorFlow Variables
    TensorFlow Operations
    Placeholder Tensors
    Sessions in TensorFlow
    Navigating Variable Scopes and Sharing Variables
    Managing Models over the CPU and GPU
    Specifying the Logistic Regression Model in TensorFlow
    Logging and Training the Logistic Regression Model
    Leveraging TensorBoard to Visualize Computation Graphs and Learning
    Building a Multilayer Model for MNIST in TensorFlow
    Summary
4. Beyond Gradient Descent
    The Challenges with Gradient Descent
    Local Minima in the Error Surfaces of Deep Networks
    Model Identifiability
    How Pesky Are Spurious Local Minima in Deep Networks?
    Flat Regions in the Error Surface
    When the Gradient Points in the Wrong Direction
    Momentum-Based Optimization
    A Brief View of Second-Order Methods
    Learning Rate Adaptation
    AdaGrad: Accumulating Historical Gradients
    RMSProp: Exponentially Weighted Moving Average of Gradients
    Adam: Combining Momentum and RMSProp
    The Philosophy Behind Optimizer Selection
    Summary
5. Convolutional Neural Networks
    Neurons in Human Vision
    The Shortcomings of Feature Selection
    Vanilla Deep Neural Networks Don't Scale
    Filters and Feature Maps
    Full Description of the Convolutional Layer
    Max Pooling
    Full Architectural Description of Convolution Networks
    Closing the Loop on MNIST with Convolutional Networks
    Image Preprocessing Pipelines Enable More Robust Models
    Accelerating Training with Batch Normalization
    Building a Convolutional Network for CIFAR-10
    Visualizing Learning in Convolutional Networks
    Leveraging Convolutional Filters to Replicate Artistic Styles
    Learning Convolutional Filters for Other Problem Domains
    Summary
6. Embedding and Representation Learning
    Learning Lower-Dimensional Representations
    Principal Component Analysis
    Motivating the Autoencoder Architecture
    Implementing an Autoencoder in TensorFlow
    Denoising to Force Robust Representations
    Sparsity in Autoencoders
    When Context Is More Informative than the Input Vector
    The Word2Vec Framework
    Implementing the Skip-Gram Architecture
    Summary
7. Models for Sequence Analysis
    Analyzing Variable-Length Inputs
    Tackling seq2seq with Neural N-Grams
    Implementing a Part-of-Speech Tagger
    Dependency Parsing and SyntaxNet
    Beam Search and Global Normalization
    A Case for Stateful Deep Learning Models
    Recurrent Neural Networks
    The Challenges with Vanishing Gradients
    Long Short-Term Memory (LSTM) Units
    TensorFlow Primitives for RNN Models
    Implementing a Sentiment Analysis Model
    Solving seq2seq Tasks with Recurrent Neural Networks
    Augmenting Recurrent Networks with Attention
    Dissecting a Neural Translation Network
    Summary
8. Memory Augmented Neural Networks
    Neural Turing Machines
    Attention-Based Memory Access
    NTM Memory Addressing Mechanisms
    Differentiable Neural Computers
    Interference-Free Writing in DNCs
    DNC Memory Reuse
    Temporal Linking of DNC Writes
    Understanding the DNC Read Head
    The DNC Controller Network
    Visualizing the DNC in Action
    Implementing the DNC in TensorFlow
    Teaching a DNC to Read and Comprehend
    Summary
9. Deep Reinforcement Learning
    Deep Reinforcement Learning Masters Atari Games
    What Is Reinforcement Learning?
    Markov Decision Processes (MDP)
    Policy
    Future Return
    Discounted Future Return
    Explore Versus Exploit
    Policy Versus Value Learning
    Policy Learning via Policy Gradients
    Pole-Cart with Policy Gradients
    OpenAI Gym
    Creating an Agent
    Building the Model and Optimizer
    Sampling Actions
    Keeping Track of History
    Policy Gradient Main Function
    PGAgent Performance on Pole-Cart
    Q-Learning and Deep Q-Networks
    The Bellman Equation
    Issues with Value Iteration
    Approximating the Q-Function
    Deep Q-Network (DQN)
    Training DQN
    Learning Stability
    Target Q-Network
    Experience Replay
    From Q-Function to Policy
    DQN and the Markov Assumption
    DQN's Solution to the Markov Assumption
    Playing Breakout with DQN
    Building Our Architecture
    Stacking Frames
    Setting Up Training Operations
    Updating Our Target Q-Network
    Implementing Experience Replay
    DQN Main Loop
    DQNAgent Results on Breakout
    Improving and Moving Beyond DQN
    Deep Recurrent Q-Networks (DRQN)
    Asynchronous Advantage Actor-Critic Agent (A3C)
    UNsupervised REinforcement and Auxiliary Learning (UNREAL)
    Summary
Index