Commits
31 commits
1d135af
Initial plan
Copilot Nov 6, 2025
5ac4f1f
Add Proof class to logic module
Copilot Nov 6, 2025
f2d226d
Document limitation in is_complete method
Copilot Nov 6, 2025
1b2b478
Merge pull request #2 from ewdlop/copilot/add-proof-to-issue
ewdlop Nov 6, 2025
0f070d4
Initial plan
Copilot Nov 6, 2025
0c2bf3e
Update book edition reference and add 4th edition cover
Copilot Nov 6, 2025
12c9fe2
Improve 4th edition book cover design
Copilot Nov 6, 2025
08dfb99
Merge pull request #4 from ewdlop/copilot/rename-book-add-figure
ewdlop Nov 6, 2025
80702d7
Initial plan
Copilot Nov 6, 2025
fe0a479
Add robotics_map example for Figure 25 Monte Carlo Localization
Copilot Nov 6, 2025
f2e4259
Merge pull request #6 from copilot/resolve-figure-25-issue
ewdlop Nov 6, 2025
bd39ac1
Initial plan
Copilot Nov 6, 2025
7121ce9
Add AI agent architecture figures to agents.ipynb
Copilot Nov 6, 2025
b0ab1f1
Merge pull request #8 from ewdlop/copilot/add-ai-figures
ewdlop Nov 6, 2025
2768d0c
Add aima-pseudocode as a new submodule
ewdlop Nov 6, 2025
50f51b4
[Cursor states: "]Update README.md with new content on AI pioneers an…
ewdlop Nov 6, 2025
3060cc7
[Cursor stated: "]Update README.md to include algorithm categories an…
ewdlop Nov 6, 2025
e615811
ewdlop states "Building up to Curry–Howard correspondence[.]"
ewdlop Nov 6, 2025
65ddf2f
[Cursor states: ]"Update README.md with new entries on proof theory a…
ewdlop Nov 6, 2025
ecf15f4
Add .venv to .gitignore, mark subproject as dirty.
ewdlop Nov 11, 2025
a7171eb
Refactored app structure and update Docker setup
ewdlop Nov 11, 2025
3222ce9
Update algorithms table to include 'Nature Language' column
ewdlop Dec 12, 2025
5f044e9
Merge pull request #14 from ewdlop/ewdlop-potent-1-Update-algorithms-…
ewdlop Dec 12, 2025
5d6a889
Fix table formatting in README.md
ewdlop Dec 12, 2025
163b9ec
Merge pull request #15 from ewdlop/ewdlop-pivot-1
ewdlop Dec 12, 2025
d9696c9
Update descriptions for RNN and LSTM in README
ewdlop Dec 13, 2025
4400f99
Merge pull request #16 from ewdlop/ewdlop-質物-1
ewdlop Dec 13, 2025
6d8bf89
Update table headers in README.md
ewdlop Dec 13, 2025
08051c5
Merge pull request #17 from ewdlop/ewdlop-化简-1
ewdlop Dec 13, 2025
d281a1e
Fix typo in 'Status' column heading
ewdlop Dec 13, 2025
7ec91c5
Merge pull request #18 from ewdlop/ewdlop-realize-1
ewdlop Dec 13, 2025
3 changes: 3 additions & 0 deletions .gitignore
@@ -76,3 +76,6 @@ target/
# for macOS
.DS_Store
._.DS_Store

.venv
.venv310
3 changes: 3 additions & 0 deletions .gitmodules
@@ -1,3 +1,6 @@
[submodule "aima-data"]
path = aima-data
url = https://github.com/aimacode/aima-data.git
[submodule "aima-pseudocode"]
path = aima-pseudocode
url = https://github.com/aimacode/aima-pseudocode
691 changes: 596 additions & 95 deletions README.md

Large diffs are not rendered by default.

104 changes: 103 additions & 1 deletion agents.ipynb
@@ -28,6 +28,7 @@
"* Overview\n",
"* Agent\n",
"* Environment\n",
"* Agent Architectures\n",
"* Simple Agent and Environment\n",
"* Agents in a 2-D Environment\n",
"* Wumpus Environment\n",
@@ -103,6 +104,107 @@
"* `execute_action(self, agent, action)`: The environment reacts to an action performed by a given agent. The changes may result in agent experiencing new percepts or other elements reacting to agent input."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## AGENT ARCHITECTURES\n",
"\n",
"In this section, we'll explore the different types of agent architectures described in Chapter 2 of the AIMA book. These architectures represent different ways an agent can process percepts and select actions.\n",
"\n",
"### Table-Driven Agent\n",
"\n",
"A table-driven agent uses a lookup table that maps every possible percept sequence to an action. This approach is only practical for very small domains because the table grows exponentially with the length of the percept sequence.\n",
"\n",
"The `TableDrivenAgentProgram` function implements this architecture as shown in **Figure 2.7** of the book."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"psource(TableDrivenAgentProgram)"
]
},
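{
"cell_type": "markdown",
"metadata": {},
"source": [
"The cell below is a minimal illustrative sketch of the same idea, not the implementation used by `agents.py`. The lookup table, the percept values, and the two-location vacuum world are made up for this example: the program keeps the full percept history and looks the whole sequence up in a dictionary."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Illustrative sketch of a table-driven agent program (not aima-python's version).\n",
"def table_driven_program(table):\n",
"    percepts = []                          # the full percept sequence seen so far\n",
"    def program(percept):\n",
"        percepts.append(percept)\n",
"        return table.get(tuple(percepts))  # look up the whole sequence, not just the last percept\n",
"    return program\n",
"\n",
"# Hypothetical table for a two-location vacuum world\n",
"table = {(('A', 'Dirty'),): 'Suck',\n",
"         (('A', 'Clean'),): 'Right',\n",
"         (('A', 'Clean'), ('B', 'Dirty')): 'Suck'}\n",
"\n",
"program = table_driven_program(table)\n",
"print(program(('A', 'Clean')))   # -> Right\n",
"print(program(('B', 'Dirty')))   # -> Suck"
]
},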
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Simple Reflex Agent\n",
"\n",
"A simple reflex agent selects actions based only on the current percept, ignoring the rest of the percept history. These agents work on **condition-action rules** (also called **situation-action rules**, **productions**, or **if-then rules**), which tell the agent what action to take when a particular situation is encountered.\n",
"\n",
"The schematic diagram shown in **Figure 2.10** of the book illustrates this architecture:\n",
"\n",
"![Simple Reflex Agent](images/simple_reflex_agent.jpg)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"psource(SimpleReflexAgentProgram)"
]
},
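{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a minimal sketch of the idea (again, not the library's implementation), the condition-action rules for a hypothetical two-location vacuum world can be written directly as `if`/`else` statements over the current percept alone:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Illustrative simple reflex agent program: it uses only the current percept.\n",
"def simple_reflex_vacuum_program(percept):\n",
"    location, status = percept\n",
"    if status == 'Dirty':          # condition-action rule: dirty square -> clean it\n",
"        return 'Suck'\n",
"    elif location == 'A':          # otherwise move to the other square\n",
"        return 'Right'\n",
"    else:\n",
"        return 'Left'\n",
"\n",
"print(simple_reflex_vacuum_program(('A', 'Dirty')))   # -> Suck\n",
"print(simple_reflex_vacuum_program(('B', 'Clean')))   # -> Left"
]
},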
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Model-Based Reflex Agent\n",
"\n",
"A model-based reflex agent maintains an **internal state** that depends on the percept history and reflects at least some of the unobserved aspects of the current state. In addition to this, it requires a **model** of the world\u2014knowledge about \"how the world works\"\u2014including:\n",
"\n",
"* How the world evolves independently of the agent\n",
"* How the agent's actions affect the world\n",
"\n",
"The schematic diagram shown in **Figure 2.12** of the book illustrates this architecture:\n",
"\n",
"![Model-Based Reflex Agent](images/model_based_reflex_agent.jpg)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"psource(ModelBasedReflexAgentProgram)"
]
},
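{
"cell_type": "markdown",
"metadata": {},
"source": [
"The sketch below (illustrative only, with a made-up two-square world) shows the key difference from the simple reflex agent: an internal model of the believed status of each square is kept across percepts and updated before a rule is chosen."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Illustrative model-based reflex agent program (not aima-python's version).\n",
"def model_based_vacuum_program():\n",
"    model = {'A': None, 'B': None}     # internal state: believed status of each square\n",
"    def program(percept):\n",
"        location, status = percept\n",
"        model[location] = status       # update the model from the current percept\n",
"        if model['A'] == model['B'] == 'Clean':\n",
"            return 'NoOp'              # everything believed clean: do nothing\n",
"        if status == 'Dirty':\n",
"            return 'Suck'\n",
"        return 'Right' if location == 'A' else 'Left'\n",
"    return program\n",
"\n",
"program = model_based_vacuum_program()\n",
"print(program(('A', 'Clean')))   # -> Right\n",
"print(program(('B', 'Clean')))   # -> NoOp (both squares now believed clean)"
]
},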
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Goal-Based Agent\n",
"\n",
"A goal-based agent needs **goal** information that describes desirable situations. The agent program can combine this with information about the results of possible actions (the model) to choose actions that achieve the goal. This makes the agent more flexible because the knowledge that supports its decisions is represented explicitly and can be modified.\n",
"\n",
"The schematic diagram shown in **Figure 2.13** of the book illustrates a model-based, goal-based agent:\n",
"\n",
"![Goal-Based Agent](images/model_goal_based_agent.jpg)\n",
"\n",
"**Search** (Chapters 3 to 5) and **Planning** (Chapters 10 to 11) are the subfields of AI devoted to finding action sequences that achieve the agent's goals."
]
},
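{
"cell_type": "markdown",
"metadata": {},
"source": [
"To make this concrete, the sketch below combines a hypothetical transition model of a two-square vacuum world with a goal test and uses breadth-first search to find an action sequence that achieves the goal. It is an illustration of the idea, not the search code used elsewhere in this repository."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Illustrative goal-based agent: search the model for actions that reach a goal state.\n",
"from collections import deque\n",
"\n",
"def plan(start, goal_test, successors):\n",
"    # Breadth-first search over the transition model; returns a list of actions or None.\n",
"    frontier = deque([(start, [])])\n",
"    explored = {start}\n",
"    while frontier:\n",
"        state, actions = frontier.popleft()\n",
"        if goal_test(state):\n",
"            return actions\n",
"        for action, next_state in successors(state):\n",
"            if next_state not in explored:\n",
"                explored.add(next_state)\n",
"                frontier.append((next_state, actions + [action]))\n",
"    return None\n",
"\n",
"# Hypothetical two-square vacuum world: state = (location, dirt_in_A, dirt_in_B)\n",
"def successors(state):\n",
"    loc, a, b = state\n",
"    result = [('Right', ('B', a, b)), ('Left', ('A', a, b))]\n",
"    result.append(('Suck', ('A', False, b)) if loc == 'A' else ('Suck', ('B', a, False)))\n",
"    return result\n",
"\n",
"goal_test = lambda s: not s[1] and not s[2]             # goal: both squares clean\n",
"print(plan(('A', True, True), goal_test, successors))   # -> ['Suck', 'Right', 'Suck']"
]
},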
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Utility-Based Agent\n",
"\n",
"Goals alone are not always enough to generate high-quality behavior. For example, there may be many action sequences that achieve the goal, but some are better, faster, safer, or more reliable than others. A utility-based agent uses a **utility function** that maps a state (or a sequence of states) onto a real number describing the associated degree of happiness.\n",
"\n",
"The schematic diagram shown in **Figure 2.14** of the book illustrates a model-based, utility-based agent:\n",
"\n",
"![Utility-Based Agent](images/model_utility_based_agent.jpg)\n",
"\n",
"A complete utility-based agent chooses the action that maximizes the expected utility of the action outcomes\u2014that is, what the agent expects to achieve, given the probabilities and utilities of each outcome."
]
},
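{
"cell_type": "markdown",
"metadata": {},
"source": [
"The sketch below illustrates expected-utility action selection; the outcome model and the utility numbers are assumptions invented for this example only."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Illustrative utility-based choice: pick the action with maximum expected utility.\n",
"def expected_utility(action, outcomes, utility):\n",
"    return sum(p * utility(s) for p, s in outcomes(action))\n",
"\n",
"def best_action(actions, outcomes, utility):\n",
"    return max(actions, key=lambda a: expected_utility(a, outcomes, utility))\n",
"\n",
"# Made-up model: 'Fast' usually arrives early but sometimes crashes.\n",
"outcome_model = {'Fast': [(0.8, 'arrived early'), (0.2, 'crashed')],\n",
"                 'Slow': [(1.0, 'arrived late')]}\n",
"utilities = {'arrived early': 10, 'arrived late': 5, 'crashed': -100}\n",
"\n",
"choice = best_action(['Fast', 'Slow'],\n",
"                     lambda a: outcome_model[a],\n",
"                     lambda s: utilities[s])\n",
"print(choice)   # -> Slow  (EU(Fast) = 0.8*10 + 0.2*(-100) = -12, EU(Slow) = 5)"
]
},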
{
"cell_type": "markdown",
"metadata": {},
@@ -732,4 +834,4 @@
},
"nbformat": 4,
"nbformat_minor": 1
}
}
1 change: 1 addition & 0 deletions aima-pseudocode
Submodule aima-pseudocode added at d2d5da