CSCI 3202: Artificial Intelligence

Solutions to Problem Set 1

1.1 A is a knave; B a (lying) normal; C a knight.

1.2 In the general case, there are N people, each of whom can be any one of four types. Thus the size of the problem space -- the number of distinct identifiable states -- is 4^N. Some people assumed that the "general" problem would stipulate one person of each type, as our sample problem does; but this would mean that, in addition to N people, there are also N distinct types of people.
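As a quick sanity check, here is a minimal sketch (in Python, with type labels of my own choosing; the problem set does not fix names) that enumerates the 4^N states directly:

    from itertools import product

    # The four type labels are placeholders, not taken from the problem set.
    TYPES = ("knight", "knave", "truthful-normal", "lying-normal")

    def all_states(n):
        """Yield every assignment of one of the four types to each of n people."""
        return product(TYPES, repeat=n)

    print(sum(1 for _ in all_states(3)))   # 4**3 = 64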

1.3 My own solution path looked something like this. States are written as triples giving the types of A, B, and C in order, with N standing for Normal, T for Knight, F for Knave, and U for a type that is as yet Unknown.

U U U --> T U U [impossible] --> N U U --> N T U --> N T F [impossible] --> N F U [impossible] --> F U U --> F N U --> F N T [goal]

This is "depth-first-search-like" in its order of examination of states. Some solutions, appropriately, described an "intelligent depth-first search" in which partially-unknown states are tested for compatibility with the goal state.


2.1 Light blue left 3; green right 1; purple up 1; brown up 1; dark green left 2; dark blue down 2; yellow down 3; red car right (to the exit).

2.2 A reasonable (but high) first estimate is simply to note that each car can be (in principle) in any of five positions; each truck in any of four positions. Since there are four cars and four trucks, this gives us a rough guess of (5^4)(4^4) = 160,000 positions. But we can do a lot better than this. First, we note that the car-and-truck in the left column can only occupy three distinct (joint) positions; and each of those three positions blocks the red and light blue cars from the left column. So a better approximation is:

3 [left-column joint positions] * (5 * 4 * 4) [remaining cars] * (4 * 4 * 4) [remaining trucks] = 15,360

We could do better still, but this is a reasonable overestimate.
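A quick check of this arithmetic (Python; the grouping mirrors the reasoning above, with one unrestricted car at 5 positions and the red and light blue cars at 4 each):

    rough = 5**4 * 4**4                      # four cars, four trucks, unconstrained
    better = 3 * (5 * 4 * 4) * (4 * 4 * 4)   # left column, remaining cars, trucks
    print(rough, better)                     # 160000 15360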

2.3 The number of vehicles blocking the red car from the exit is a reasonable first heuristic. If we wish to do a bit better (at the cost of a bit more computation), we could use the number of unblocked (movable) vehicles blocking our red car.
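A minimal sketch of both heuristics (Python, with a board representation of my own devising: a dict from vehicle name to the set of (row, column) cells it occupies; none of these names come from the problem set):

    def blockers(board, red_row, exit_col):
        """Vehicles sitting between the red car's front and the exit."""
        red_front = max(c for (r, c) in board["red"] if r == red_row)
        return {v for v, cells in board.items() if v != "red"
                and any(r == red_row and red_front < c <= exit_col
                        for (r, c) in cells)}

    def h1(board, red_row, exit_col):
        """First heuristic: how many vehicles block the red car."""
        return len(blockers(board, red_row, exit_col))

    def h2(board, red_row, exit_col, can_move):
        """Refinement: count only the blockers that are themselves movable;
        can_move is assumed to test whether a vehicle has any legal move."""
        return sum(1 for v in blockers(board, red_row, exit_col)
                   if can_move(board, v))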

2.4 Most people mentioned means-ends analysis in their solution to the Rush Hour problem. This is reasonable; the pattern for solving the puzzle is something like: look to see which vehicles are blocking us, and identify those moves which will allow us to move the blocking vehicles; and if the blocking vehicles are themselves blocked from moving, we have to find ways of freeing the "second-level" blockers, and so forth. This pattern of solution (interestingly) doesn't always work for Rush Hour, but it works for many instances of the puzzle.
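A rough sketch of that recursive pattern (Python; blockers_of and moves_freeing are hypothetical helpers passed in as parameters, since their details depend on the board representation):

    def goal_moves(board, vehicle, blockers_of, moves_freeing, seen=None):
        """Collect moves that would help unblock `vehicle`, recursing into
        second-level (and deeper) blockers. The `seen` set guards against
        circular blocking, one reason the pattern can fail on some instances."""
        if seen is None:
            seen = {vehicle}
        moves = []
        for b in blockers_of(board, vehicle):
            if b in seen:
                continue                  # circular blocking: the pattern stalls
            seen.add(b)
            freeing = moves_freeing(board, b)
            if freeing:
                moves.extend(freeing)
            else:                         # b is stuck; free its blockers first
                moves.extend(goal_moves(board, b, blockers_of,
                                        moves_freeing, seen))
        return moves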

Bidirectional search is hard to apply to this puzzle, since it assumes that you have a particular known goal state from which to search. In the case of Rush Hour there is no one "goal state" but rather a class of goal states in which the red car is unblocked; so we don't really know from which of the (many) potential final states of the puzzle to run our reverse search.


3. For these problems I will simply quote the Russell/Norvig solutions.

3.10 If we had two trees growing towards each other, it would suggest that the two halves of the search were somehow aware of each other, and were seeking each other out. The circular diagram correctly conveys the idea that most of the time is spent in directions that are actually moving apart from each other, and that there is just a small part of the search space where the two searches are approaching each other.

3.12 Bidirectional search is O(b^(d/2)) under the assumption that hash table lookup is O(1). If instead of using a hash table, we use an exhaustive comparison against all the nodes generated by the backward search (that is, all O(b^(d/2)) of them), then the resulting search is O(b^(d/2) * b^(d/2)) or O(b^d), which is the same as breadth-first and iterative deepening. Bidirectional search also often has a worse constant factor, because we have to compare states for equality rather than just checking a goal test.
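As a concrete illustration (numbers of my own choosing, not from the text): with b = 10 and d = 6, breadth-first search generates on the order of 10^6 nodes, while the two halves of a bidirectional search generate only about 2 * 10^3, provided each forward node can be checked against the backward frontier in O(1). With exhaustive comparison instead, each of the roughly 10^3 forward nodes must be compared against roughly 10^3 backward nodes, for about 10^6 comparisons: we are back to O(b^d).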

3.13 Technically, we were wrong to say that one direction must be breadth-first. What we meant to say is that one direction must store all the states it has generated, so that the other direction can look for them. So what we should have said is that we need a data structure that stores all the states (and thus is equivalent in space to breadth-first search), but we could find the states in any order. That said, a good choice for the other direction would be breadth-first if space is available, or iterative deepening otherwise. If you look at Figure 3.17, the idea is that we want to search in all directions; it wouldn't do to have a depth-first search, because it might never hook up with the other half of the search.


4. Here are some sample answers from the class.

Sample Response 1:

I found what Block had to say about Searle's paper very interesting. Specifically, I was struck by his example of infinitely small beings creating spaceships and mimicking our basic elements by flying in the same patterns that electrons and neutrons fly. He then says that if we were to go and live in this environment, we would eventually be made of this "false" matter. I find this argument interesting because it says that there is more to a mind than what it is composed of. According to Block, the mind is made up of primitive "black boxes", each of which takes some input and produces some output. But if these primitive mechanisms can be replaced with anything that can do the same job, it would conceivably be possible to make brains out of anything that could produce the same output given the same input, such as Searle's water pipe example. Block says that our intuitions say that this change in matter would not affect our consciousness.

The best analogy that I can think of would be a computer program (our consciousness) running on one set of hardware (our physical brain), and then either suddenly or piece by piece being replaced with another set of hardware. This leads to a different question altogether: what is consciousness? If it is not dependent on specific hardware, then maybe it depends on the state it is in on the given hardware. But if that were the case, then there could possibly be two identical thinking persons. Block's objection intrigued me because of its implications, and his interpretation of people's intuitions. I believe our intuition that changing the matter of the brain would not affect our consciousness may be wrong. It seems logical that it wouldn't, but there is really no way to know, because we don't really have the foggiest idea about what is in those "black boxes".

Sample Response 2:

I find Lady Lovelace's Objection (or the Originality Objection) from Turing's paper to be particularly provocative. It raises an interesting question as to whether machines will ever be able to be "creative" or "original," acting upon a will of their own rather than that of the human programmer. While at first I thought the objection to be entirely reasonable, I now think it to be shaky at best. To give an example of a similar situation (in my mind) which has already occurred, look at the history of biological evolution on Earth.

We know that all life on this planet can be broken down to its component parts and "programming" (DNA), and that new species are born through a process of mutation (or perhaps simple DNA copying errors), where the mutation better suits the species' environment. This could be considered an act of creativity by the machine of nature. For a human-made machine to be similarly creative, it needs only to be able to produce copying errors (which, through experience, I know is well within the grasp of modern computers).

Sample Response 3:

In his paper, Searle remarks on "the brain simulator reply": the suggestion that a program could completely simulate the brain of a native Chinese speaker, thus producing a system that understood Chinese. Searle remarked that this brain simulator could be turned into a very complex system of pipes, which a man could operate in such a way as to once again simulate the brain and speak Chinese. He contends that neither the operator nor the pipes has any real understanding of Chinese. However, if we could completely simulate a brain, I contend that we could completely simulate the conscious entity that is the native Chinese speaker. This entity would have a full grasp of Chinese, and even a grasp of the idea that the Chinese word for "duck" corresponds to a yellow bird that quacks.

Searle also contends that this gives the field of AI nothing, as we are reduced to neurobiology. I contend that it gives us the insight that a system can be constructed to mimic the brain and mimic the brain's understanding. If we take the original example of a program (or man) in a room transcribing symbols, and add the fact that perhaps this program has a reference library, it can then associate a certain set of squiggles with a picture. Even better, this reference library could refer to some "object", which has references to pictures and to other objects as well. Doesn't the system have some understanding of Chinese if it knows not only the grammar of Chinese but also the associations of the words? If we make this reference library so complete as to encompass all of the knowledge of our native Chinese speaker, it would be able to completely reproduce the responses and even the "meaning" behind the responses. I contend that this system would understand Chinese as well as a man, and in much the same sense.

As humans, we are large input-output systems. Perhaps we have some internal connections from output to input, and we call these loops "thought" or "consciousness". I believe Searle's main conceptual problem is an attempt to separate these two concepts from information; yet humans achieve neither if they are completely devoid of neural input. It can also be shown that much of the way we think depends on what information we have. Thus, a system which mimics our brain's functioning carries with it all of the information our brain has, as well as all of the "understanding". A system that mimics it or can reproduce it at higher and higher levels does not lose this understanding. The real question is: with what degree of reproduction should we be satisfied? Turing says conversation, but I believe that is not sufficient, and thus the Turing test as it stands is not satisfactory.


5. Code for solving the can problem is shown here. The search-for-all-possible-states procedure reveals that there are in fact only 63 achievable states in this problem. (We stipulate that the 40-quart can, with the larger value, is always listed first.) Several variations of a basic depth-first search procedure are also included.
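Since only the 40-quart can's capacity is named above, the following minimal Python sketch of such a search-for-all-possible-states enumeration uses an assumed 13-quart second can as a placeholder; the two-can fill/empty/pour move set is likewise an assumption about the problem, so the count it prints will match the 63 quoted above only with the actual capacities:

    BIG, SMALL = 40, 13   # SMALL = 13 is a placeholder, not from the problem set

    def successors(state):
        """All states reachable in one move: fill, empty, or pour between cans.
        States are (big, small) pairs, with the 40-quart can listed first."""
        b, s = state
        pour_bs = min(b, SMALL - s)   # big -> small, limited by room in small
        pour_sb = min(s, BIG - b)     # small -> big, limited by room in big
        moves = {(BIG, s), (b, SMALL),    # fill either can
                 (0, s), (b, 0),          # empty either can
                 (b - pour_bs, s + pour_bs),
                 (b + pour_sb, s - pour_sb)}
        moves.discard(state)
        return moves

    def all_reachable(start=(0, 0)):
        """Depth-first enumeration of every achievable state."""
        seen, stack = {start}, [start]
        while stack:
            for nxt in successors(stack.pop()):
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        return seen

    print(len(all_reachable()))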