
FY2005 First Half Exploratory Software Project (未踏ソフトウェア創造事業): Evaluation Report on Adopted Project

1. Project Manager (PM)

Alan Kay (President, Viewpoints Research Institute)

2. Names of Adopted Developers

Lead developer: Yokokawa Koji (横川 耕二), Engineering Solution Division, Mamezou Co., Ltd.
Co-developers: None



5. Project Title

Spottie - a virtual companion for children


7. Project Overview

To develop a smart software agent in the Squeak/Tweak development environment
that reacts to and assists user actions. The system provides a software agent
that notices changes in the state of Squeak/Tweak display objects, infers the
state of the user, and reacts to that state. The system is intended to be fun
and easy for children to use.
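As a rough illustration only, the overall shape of such an agent might look
something like the sketch below. It is written in Python rather than in
Squeak/Tweak, and every name in it (Spottie, on_change, infer_user_state,
react) is an assumption made for the example, not part of the actual system:
the agent subscribes to state changes of display objects, keeps a crude model
of what the user seems to be doing, and reacts.

    # Illustrative sketch only; not the actual Spottie implementation.
    class Spottie:
        def __init__(self):
            self.recent = []                  # recently observed changes

        def on_change(self, obj_name, prop, value):
            # Called whenever a watched display object changes state.
            self.recent.append((obj_name, prop, value))
            self.react(self.infer_user_state())

        def infer_user_state(self):
            # Very crude inference: several rapid changes to the same
            # object suggest the user may be struggling with it.
            last = self.recent[-3:]
            if len(last) == 3 and len({name for name, _, _ in last}) == 1:
                return "struggling"
            return "exploring"

        def react(self, state):
            if state == "struggling":
                print("Spottie: would you like a hint about that object?")

    agent = Spottie()
    for change in [("ball", "color", "red"),
                   ("ball", "color", "blue"),
                   ("ball", "color", "green")]:
        agent.on_change(*change)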

8. Reason for Adoption

This proposal is further work that extends the results and directions of a
previously funded project and researcher. It is likely to produce good
results, and it involves considerable learning on the part of the children who
are the end users.

9. Results

This was a very difficult set of goals in an area that has been worked
on by many researchers for more than 40 years. Because of the difficulty
and scope of the project we have to evaluate the results by examining the
amount of effort, worthwhile ideas, and degree of demonstration that one
researcher was able to accomplish in the few months of this project.
It was clear that Koji put in a lot of effort, came up with a number
of good ideas, and made several demonstrations of these ideas.
I think that less of the effort should have been put into the demonstrations
and more into the ideas. However, this is very tricky because a large part
of this project also has to do with the user interface that allows the
"agent trainer" to advise the agent.
If Koji were a graduate student he would have received more advice
about how to organize this project.
My general feeling is that he did a very good job in a very difficult
area, and several of his ideas are strong enough to deserve more work.

10. Future Work

This area is tremendously important because it involves automatic mentoring
of all kinds for computer users. So the difficulty of the project should
not stand in the way of trying harder.
My guess is that a good approach to this part of the problem would be to have
the agent advisor first construct an example by hand that the system would
turn into a runnable script. This script can recreate the example, and it can
also be used to watch and judge an end-user trying to do the task.
But there are more ways to do the task than the script can model, and the
end-user will make mistakes. To the judge, both of these will look like
mistakes. The big problem (and why no good solutions have arisen in 40 years)
is that the agent must do two very different things depending on whether the
user is making a real error or is simply doing things slightly differently,
and it must be able to discern which is the case.
We can get some hints by considering what a human mentor would do. The mentor
has to wait long enough to really see whether a bad path has been taken, and
then has to decide what kind of gentle advice to give.
I think that both the detection and the advice can be helped tremendously by
allowing the script to be annotated by the agent's teacher. There can be hints
about how to tell when things are more or less right or going wrong, hints
about what to tell the end-user, and so on.
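A minimal sketch of how such an annotated script might watch and judge is
shown below, in Python for brevity rather than Squeak/Tweak; the Step,
patience, and hint names are assumptions for illustration, not part of the
existing system. Each recorded step carries the teacher's notion of what
counts as "more or less right", how long to wait before concluding a bad path
has really been taken, and what gentle advice to give.

    from dataclasses import dataclass, field

    @dataclass
    class Step:
        goal: str                                  # what this step should accomplish
        accepts: set = field(default_factory=set)  # actions judged "more or less right"
        patience: int = 2                          # off-script actions to tolerate first
        hint: str = ""                             # the teacher's gentle advice

    @dataclass
    class ScriptJudge:
        steps: list
        index: int = 0
        strikes: int = 0

        def observe(self, action):
            # Watch one user action; advise only when a bad path looks real.
            step = self.steps[self.index]
            if action in step.accepts:
                self.index += 1                    # matched; move to the next step
                self.strikes = 0
                return None
            self.strikes += 1                      # off-script, but maybe a valid variation
            if self.strikes > step.patience:
                self.strikes = 0
                return step.hint
            return None

    # Example: a two-step "draw a red ellipse" task recorded by the advisor.
    judge = ScriptJudge(steps=[
        Step(goal="pick the ellipse tool",
             accepts={"click ellipse tool", "press E"},
             hint="Try the oval-shaped tool in the palette."),
        Step(goal="choose the color red",
             accepts={"click red swatch"},
             hint="The colors are on the right; red is near the top."),
    ])

    for action in ["click brush tool", "scribble", "doodle", "click ellipse tool"]:
        advice = judge.observe(action)
        if advice:
            print("Spottie suggests:", advice)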
A very important part of this kind of system will be a comprehensive UNDO facility that can deal with small and (especially) large errors and roll-backs.
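A sketch of the kind of facility meant here (again in Python, with
hypothetical names): every change is recorded as a reversible command, and
commands can be grouped so that a whole wrong turn, not just the last small
step, rolls back at once.

    class SetText:
        # One reversible change: remember the old text when the change is made.
        def __init__(self, doc, new_text):
            self.doc, self.new_text, self.old_text = doc, new_text, None
        def do(self):
            self.old_text = self.doc["text"]
            self.doc["text"] = self.new_text
        def undo(self):
            self.doc["text"] = self.old_text

    class UndoHistory:
        # Keeps every change reversible; groups let a large error roll back at once.
        def __init__(self):
            self.done = []                           # stack of groups of executed commands
        def perform(self, *commands):
            for c in commands:
                c.do()
            self.done.append(commands)
        def undo(self):
            if self.done:
                for c in reversed(self.done.pop()):  # undo the group in reverse order
                    c.undo()

    doc = {"text": "hello"}
    history = UndoHistory()
    history.perform(SetText(doc, "hello world"))                       # a small change
    history.perform(SetText(doc, "HELLO WORLD"), SetText(doc, "!!!"))  # a larger, grouped change
    history.undo()                                   # the whole group rolls back in one step
    print(doc["text"])                               # -> "hello world"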