Chains of Inference
Now we look at how to get an agent to prove a given theorem using several search strategies. In previous lectures we noted that, to specify a search problem, we have to describe the representation language for the artefacts being searched for, the starting state, the goal state (or some information about what a goal should look like), and the operators, which tell us how to move from one state to another.
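As a purely illustrative sketch (not a definitive implementation), these four ingredients might be bundled together as follows; the names SearchProblem, initial_state, is_goal and successors are our own, chosen only to mirror the list above.

from dataclasses import dataclass
from typing import Any, Callable, Iterable

@dataclass
class SearchProblem:
    # The representation language is fixed by whatever we use for states.
    initial_state: Any                           # the starting state
    is_goal: Callable[[Any], bool]               # tests whether a state is a goal state
    successors: Callable[[Any], Iterable[Any]]   # the operators: one state to its neighbours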
We may state the problem of proving a given theorem from some axioms as a search problem. Three different specifications give rise to three different ways of answering the problem, namely forward chaining, backward chaining and proof by contradiction. In all of these specifications the representation language is predicate logic (not surprisingly), and the operators are rules of inference, which allow us to rewrite a set of sentences as another set. We may think of each state in our search space as a set of sentences in first-order logic, and the operators traverse this space by deriving new sentences. However, we are really interested in finding a path from the start state to the goal state, as this path will constitute a proof. (Note that there are other ways to verify theorems, such as exhausting the search for a counterexample and finding none: in this case we do not have a deductive proof of the theorem, but we know that it is true.)
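To make the idea of operators walking through a space of sentences concrete, here is a minimal forward-chaining sketch. For simplicity it works with propositional Horn rules rather than full predicate logic, and the rule format (a tuple of premises paired with a conclusion) is an assumption of this sketch; it records how each new sentence was derived, so that the path from the axioms to the goal can be read off as the proof.

# Forward chaining over propositional Horn rules: repeatedly apply any rule
# whose premises are all known, until the goal appears or nothing new is derived.
def forward_chain(axioms, rules, goal):
    known = set(axioms)
    proof = {fact: ("axiom", []) for fact in known}   # fact -> (justification, premises used)
    changed = True
    while changed and goal not in known:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in known and all(p in known for p in premises):
                known.add(conclusion)
                proof[conclusion] = (f"{premises} => {conclusion}", list(premises))
                changed = True
    return proof.get(goal)   # None if the goal could not be derived

# Example: from the axiom p and the rules p => q and q => r, derive r.
print(forward_chain({"p"}, [(("p",), "q"), (("q",), "r")], "r"))

Backward chaining would instead start from the goal and work towards the axioms, and proof by contradiction would add the negation of the goal and search for an inconsistent set of sentences; the search framework is the same, only the specification changes.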