Abstract
There is a disparity between the multitude of apparently successful expert system prototypes and the scarcity of expert systems in real everyday use. Modern tools make it deceptively easy to build reasonable prototypes, but these prototypes are seldom subjected to serious evaluation. Instead, the development team confronts its product with a set of test cases, and the primary evaluation criterion is the percentage of correct answers: we are faced with a "95% syndrome". Other aspects related to the use of the system are almost ignored. There is still a long way to go from a promising prototype to a final system. It is maintained in this article that a useful test must be performed by future users in a situation that is as realistic as possible; if this is not done, claims of usefulness cannot be justified. It is also argued that prototyping does not make "traditional" analysis and design obsolete, although the contents of these activities will change. In order to discuss the effects of using such systems, a distinction between expert systems as media, tools and experts is proposed.