Artificial Intelligence: A Modern Approach, 3rd Edition (English Reprint)
Ⅰ artificial intelligence
1 introduction
1.1 what is ai?
1.2 the foundations of artificial intelligence
1.3 the history of artificial intelligence
1.4 the state of the art
1.5 summary, bibliographical and historical notes, exercises
2 intelligent agents
2.1 agents and environments
2.2 good behavior: the concept of rationality
2.3 the nature of environments
2.4 the structure of agents
2.5 summary, bibliographical and historical notes, exercises
Ⅱ problem-solving
3 solving problems by searching
3.1 problem-solving agents
3.2 example problems
3.3 searching for solutions
3.4 uninformed search strategies
3.5 informed (heuristic) search strategies
3.6 heuristic functions
3.7 summary, bibliographical and historical notes, exercises
4 beyond classical search
4.1 local search algorithms and optimization problems
4.2 local search in continuous spaces
4.3 searching with nondeterministic actions
4.4 searching with partial observations
4.5 online search agents and unknown environments
4.6 summary, bibliographical and historical notes, exercises
5 adversarial search
5.1 games
5.2 optimal decisions in games
5.3 alpha-beta pruning
5.4 imperfect real-time decisions
5.5 stochastic games
5.6 partially observable games
5.7 state-of-the-art game programs
5.8 alternative approaches
5.9 summary, bibliographical and historical notes, exercises
6 constraint satisfaction problems
6.1 defining constraint satisfaction problems
6.2 constraint propagation: inference in csps
6.3 backtracking search for csps
6.4 local search for csps
6.5 the structure of problems
6.6 summary, bibliographical and historical notes, exercises
Ⅲ knowledge, reasoning, and planning
7 logical agents
7.1 knowledge-based agents
7.2 the wumpus world
7.3 logic
7.4 propositional logic: a very simple logic
7.5 propositional theorem proving
7.6 effective propositional model checking
7.7 agents based on propositional logic
7.8 summary, bibliographical and historical notes, exercises
8 first-order logic
8.1 representation revisited
8.2 syntax and semantics of first-order logic
8.3 using first-order logic
8.4 knowledge engineering in first-order logic
8.5 summary, bibliographical and historical notes, exercises
9 inference in first-order logic
9.1 propositional vs. first-order inference
9.2 unification and lifting
9.3 forward chaining
9.4 backward chaining
9.5 resolution
9.6 summary, bibliographical and historical notes, exercises
10 classical planning
10.1 definition of classical planning
10.2 algorithms for planning as state-space search
10.3 planning graphs
10.4 other classical planning approaches
10.5 analysis of planning approaches
10.6 summary, bibliographical and historical notes, exercises
11 planning and acting in the real world
11.1 time, schedules, and resources
11.2 hierarchical planning
11.3 planning and acting in nondeterministic domains
11.4 multiagent planning
11.5 summary, bibliographical and historical notes, exercises
12 knowledge representation
12.1 ontological engineering
12.2 categories and objects
12.3 events
12.4 mental events and mental objects
12.5 reasoning systems for categories
12.6 reasoning with default information
12.7 the internet shopping world
12.8 summary, bibliographical and historical notes, exercises
Ⅳ uncertain knowledge and reasoning
13 quantifying uncertainty
13.1 acting under uncertainty
13.2 basic probability notation
13.3 inference using full joint distributions
13.4 independence
13.5 bayes' rule and its use
13.6 the wumpus world revisited
13.7 summary, bibliographical and historical notes, exercises
14 probabilistic reasoning
14.1 representing knowledge in an uncertain domain
14.2 the semantics of bayesian networks
14.3 efficient representation of conditional distributions
14.4 exact inference in bayesian networks
14.5 approximate inference in bayesian networks
14.6 relational and first-order probability models
14.7 other approaches to uncertain reasoning
14.8 summary, bibliographical and historical notes, exercises
15 probabilistic reasoning over time
15.1 time and uncertainty
15.2 inference in temporal models
15.3 hidden markov models
15.4 kalman filters
15.5 dynamic bayesian networks
15.6 keeping track of many objects
15.7 summary, bibliographical and historical notes, exercises
16 making simple decisions
16.1 combining beliefs and desires under uncertainty
16.2 the basis of utility theory
16.3 utility functions
16.4 multiattribute utility functions
16.5 decision networks
16.6 the value of information
16.7 decision-theoretic expert systems
16.8 summary, bibliographical and historical notes, exercises
17 making complex decisions
17.1 sequential decision problems
17.2 value iteration
17.3 policy iteration
17.4 partially observable mdps
17.5 decisions with multiple agents: game theory
17.6 mechanism design
17.7 summary, bibliographical and historical notes, exercises
V learning
18 learning from examples
18.1 forms of learning
18.2 supervised learning
18.3 learning decision trees
18.4 evaluating and choosing the best hypothesis
18.5 the theory of learning
18.6 regression and classification with linear models
18.7 artificial neural networks
18.8 nonparametric models
18.9 support vector machines
18.10 ensemble learning
18.11 practical machine learning
18.12 summary, bibliographical and historical notes, exercises
19 knowledge in learning
19.1 a logical formulation of learning
19.2 knowledge in learning
19.3 explanation-based learning
19.4 learning using relevance information
19.5 inductive logic programming
19.6 summary, bibliographical and historical notes, exercises
20 learning probabilistic models
20.1 statistical learning
20.2 learning with complete data
20.3 learning with hidden variables: the em algorithm
20.4 summary, bibliographical and historical notes, exercises
21 reinforcement learning
21.1 introduction
21.2 passive reinforcement learning
21.3 active reinforcement learning
21.4 generalization in reinforcement learning
21.5 policy search
21.6 applications of reinforcement learning
21.7 summary, bibliographical and historical notes, exercises
VI communicating, perceiving, and acting
22 natural language processing
22.1 language models
22.2 text classification
22.3 information retrieval
22.4 information extraction
22.5 summary, bibliographical and historical notes, exercises
23 natural language for communication
23.1 phrase structure grammars
23.2 syntactic analysis (parsing)
23.3 augmented grammars and semantic interpretation
23.4 machine translation
23.5 speech recognition
23.6 summary, bibliographical and historical notes, exercises
24 perception
24.1 image formation
24.2 early image-processing operations
24.3 object recognition by appearance
24.4 reconstructing the 3d world
24.5 object recognition from structural information
24.6 using vision
24.7 summary, bibliographical and historical notes, exercises
25 robotics
25.1 introduction
25.2 robot hardware
25.3 robotic perception
25.4 planning to move
25.5 planning uncertain movements
25.6 moving
25.7 robotic software architectures
25.8 application domains
25.9 summary, bibliographical and historical notes, exercises
VII conclusions
26 philosophical foundations
26.1 weak ai: can machines act intelligently?
26.2 strong ai: can machines really think?
26.3 the ethics and risks of developing artificial intelligence
26.4 summary, bibliographical and historical notes, exercises
27 ai: the present and future
27.1 agent components
27.2 agent architectures
27.3 are we going in the right direction?
27.4 what if ai does succeed?
a mathematical background
a.1 complexity analysis and O() notation
a.2 vectors, matrices, and linear algebra
a.3 probability distributions
b notes on languages and algorithms
b.1 defining languages with backus-naur form (bnf)
b.2 describing algorithms with pseudocode
b.3 online help
bibliography
index
Stuart Russell was born in 1962 in Portsmouth, England. He received his B.A. in physics with first-class honours from Oxford University in 1982 and his Ph.D. in computer science from Stanford University in 1986. He then joined the University of California, Berkeley, where he is a professor of computer science, director of the Center for Intelligent Systems, and holder of the Smith-Zadeh Chair in Engineering. In 1990 he received the Presidential Young Investigator Award of the National Science Foundation, and in 1995 he was co-winner of the Computers and Thought Award. He was a Miller Professor of the University of California in 1996 and was appointed to a Chancellor's Professorship in 2000. In 1998 he gave the Forsythe Memorial Lecture at Stanford University. He is a Fellow and former Executive Council member of the American Association for Artificial Intelligence. He has published more than 100 papers on a wide range of topics in artificial intelligence. His other books include The Use of Knowledge in Analogy and Induction and, with Eric Wefald, Do the Right Thing: Studies in Limited Rationality.
Peter Norvig is currently Director of Research at Google and, from 2002 to 2005, was the director responsible for the core Web search algorithms. He is a Fellow of the American Association for Artificial Intelligence and of the ACM. He was previously head of the Computational Sciences Division at NASA Ames Research Center, where he oversaw NASA's research and development in artificial intelligence and robotics, and, as chief scientist at Junglee, he helped develop one of the earliest Internet information-extraction services. He received a B.S. in applied mathematics from Brown University and a Ph.D. in computer science from the University of California, Berkeley. He received the Distinguished Alumni and Engineering Innovation awards from Berkeley and the Exceptional Achievement Medal from NASA. He has been a professor at the University of Southern California and a research faculty member at Berkeley. His other books include Paradigms of AI Programming: Case Studies in Common Lisp, Verbmobil: A Translation System for Face-to-Face Dialog, and Intelligent Help Systems for UNIX.
Artificial Intelligence: A Modern Approach, 3rd Edition (English Reprint), part of the Famous Foreign Textbooks in University Computer Education series, is a classic, professional textbook on artificial intelligence that has been adopted by more than 1,200 universities in over 100 countries. The new edition gives a comprehensive, systematic introduction to the theory and practice of artificial intelligence, covering the core content of the field and examining each of its major research directions in depth. The book is organized into seven parts: Part I, "Artificial Intelligence"; Part II, "Problem Solving"; Part III, "Knowledge, Reasoning, and Planning"; Part IV, "Uncertain Knowledge and Reasoning"; Part V, "Learning"; Part VI, "Communicating, Perceiving, and Acting"; and Part VII, "Conclusions". It presents the fundamental concepts, ideas, and algorithms of artificial intelligence in detail, describes the latest advances in each research direction, and compiles thorough historical references and events. In addition, the book's companion website offers extensive teaching and learning materials for instructors and students.
The book is suitable for researchers and students at different levels and in different fields. It is an excellent textbook for undergraduate and graduate courses in artificial intelligence and an important reference for researchers and engineers in related areas.