Introduction

What do the Altamira cave paintings, kids’ drawings, and professional paper napkin sketches have in common? They all tell a story, but there is no voice of the storyteller. Our observations show that the most striking means of knowledge transfer from experts to novices in both education and industry settings is through the informal recounting of experiences from past projects and collaborative dialogue connecting ideas and solutions. Stories convey great amounts of knowledge and information in relatively few words together with sketches on paper or annotations on formal printed documents (Gershon and Page 2001).

A decision delay can translate into significant financial and business losses. One way to accelerate the decision process is through improved communication among the stakeholders engaged in the project. Capturing, transferring, managing, and reusing data, information, and knowledge in the context in which it is generated can lead to higher productivity, more effective communication, fewer requests for clarification, and a shorter time-to-market cycle. However, knowledge transfer often fails: knowledge is not captured, it is captured in an abstract format that renders it unusable, or there are no formal mechanisms to find and retrieve it. Valuable knowledge is lost at the transition points of the building life cycle from one phase to the next, i.e., finance, design, procure/fabricate, build, and manage, as the handover between different stakeholder teams takes place.

We view knowledge capture, sharing, and reuse as key steps in the knowledge life cycle (Fruchter and Demian 2002, 2005). Knowledge is created as designers collaborate on design projects using data, information, past experience, and knowledge. It is captured, indexed, and stored in human memory or digital archives. At a later stage, it is retrieved and reused. Finally, as knowledge is reused it is refined and becomes more valuable. In this sense, the archive acts as a knowledge refinery.

Our ethnographic studies of cross-disciplinary teams at work, performed over the past decade, show that a primary source of information behind design decisions is embedded within the verbal conversation among designers. Capturing these conversations is difficult because the information exchange is unstructured and spontaneous. In addition, discourse is often multimodal. It is common to augment speech with sketches as an embodiment of the mental model, or to launch into a problem solving discussion triggered by a sketched solution.

Advances in digital technology promise to assist in knowledge capture and reuse. However, the more digital content is created, the more paper we print and use. Most digital content management systems today offer document management solutions with few answers on how to capitalize on the core corporate competence, i.e., to capture, share, and reuse business-critical knowledge. Digital archives store formal documents (CAD, Word, Excel, etc.) that can be easily edited, shared, searched, and archived. However, these formal documents do not reveal knowledge reuse or the externalization of tacit knowledge. Knowledge creation takes place in informal concept generation and problem solving sessions in which knowledge workers gather around multiple blueprints or other documents, and engage in dialogue and paper and pencil sketching. Paper has a tactile feel; it can be easily folded or rolled and carried to meetings or site visits. It allows single or multiple users to interact and jointly annotate one or multiple documents, and, more importantly, it is socially and legally accepted (Sellen 2001). However, paper is difficult to modify and expensive to distribute, archive, search, retrieve, and reuse. Digital technology supports these tasks very effectively. Nevertheless, when inspecting, for instance, large CAD models on screen, the current resolution of computer monitors only allows the user either to zoom in and see the details but lose the big picture, or to zoom out and see the big picture but miss the details. Paper provides high resolution for navigation through the content, enabling users to view at a glance both local details and global context.

We argue that in order for knowledge to be captured and reused, the knowledge worker needs to be able to:

  • Create content using natural idioms as communication media such as dialogue and paper and pencil sketches.

  • Explore and understand the context in which this knowledge was originally created.

  • Interact with the content in a rich, multimedia environment.

Our objective is to leverage the advantages of both the analog paper world and the digital world in support of the knowledge life cycle, i.e., knowledge creation, capture, sharing, and reuse.

We introduce the concept of reflection in interaction during communicative events among stakeholders. The concept was formalized based on ethnographic observations. We model the observed reflection in interaction with the prototype called TalkingPaperTM. TalkingPaperTM represents a ubiquitous collaborative environment for collocated and mobile knowledge workers. It bridges the analog (speech, dialogue, paper and pencil sketching and annotation) and digital (audio, video, etc.) worlds to facilitate synchronous and asynchronous communicative events and support the knowledge life cycle. We present the theoretical points of departure, and discuss evidence collected during ethnographic studies in typical paper-intensive environments such as project teamwork and the building permit process. The paper presents the problem and solution space, the TalkingPaperTM prototype, and the spectrum of interaction scenarios the prototype currently supports.

Theoretical points of departure

The points of departure of this research are: design theory and methodology, knowledge creation and management, and human computer interaction.

Design theory and methodology

The issue of how to capture knowledge in project design teams has received extensive attention from researchers in design theory and methodology. The value of contextual design knowledge (process, evolution, rationale) has been repeatedly recognized, but so has the additional overhead required of the designer in order to capture it. Other studies of design focused on either the sketch activity, i.e., learning from sketched accounts of design (Tversky 1999; Stiedel and Henderson 1983; Olszweski 1981; Kosslyn 1981; Goel 1995) or verbal accounts of design (Cross 1992, 1996; Dorst 1996). Some researchers have studied the relation between sketching and talking (Eastman 1969; Goldschmidt 1991). Recent studies of interactive workspaces (Ju et al. 2004) explore capture and navigation issues related to technology-augmented interactions. To help guide the designer’s exploration of an archive of unstructured dialogue and sketch content linked to structured document databases, it will be necessary to develop a search and retrieval mechanism. Our research builds on Donald Schön’s reflective practitioner paradigm of design (Schön 1983). Schön argues that every design task is unique, and that the basic problem for designers is to determine how to approach such a single unique task. Schön places this tackling of unique tasks at the center of design practice, a notion he terms knowing in action (Schön 1983, p. 50). To Schön, design, like tightrope walking, is an action-oriented activity. However, when knowing-in-action breaks down, the designer consciously transitions to acts of reflection. Schön calls this reflection in action. In a cycle which Schön refers to as a reflective conversation with the situation, designers reflect by naming the relevant factors, framing the problem in a certain way, making moves toward a solution and evaluating those moves. Schön argues that, whereas action-oriented knowledge is often tacit and difficult to express or convey, what can be captured is reflection in action.

Knowledge creation and management

The digital age holds great promise to assist in knowledge capture, transfer, and reuse. However, the more digital content is created the more paper we print. More importantly, we need to offer clear and distinguishing definitions and instantiations for data, information, and knowledge, rather than using them interchangeably. Similar to Davenport and Prusak (1998), our research uses the following working definitions for data, information, and knowledge. Data (e.g., printed documents or digital documents of CAD, spreadsheets, text) represent the “raw material”. Data are easy to manage and store in corporate databases or ftp sites. Nevertheless, data is not information. Information emerges during a communicative transaction between a sender and a receiver. Information is created as the sender takes data and adds meaning, relevance, purpose, and value through a process of contextualization and synthesis. Neither data nor information represent knowledge. We believe and observe that knowledge is created through dialogue within or among people as they use their past experiences and knowledge in a specific context to create alternative solutions. During these dialogues knowledge is created as connections, comparisons, combinations, and their consequences are explored. It is important to note that documents do not reveal the tacit knowledge externalized during the permit checking process. They also ignore the highly contextual and interlinked modes of communication in which people generate concepts through verbal discourse and sketching.

We view the act of reflection in action as a step in the knowledge creation and capture phase of what we call the “knowledge life cycle” (Fruchter and Demian 2002, 2005). This knowledge represents an instance of what Nonaka’s knowledge creation cycle calls “socialization and externalization of tacit knowledge” (Nonaka and Takeuchi 1995). We build on these constructs of the knowledge life cycle and the “socialization, externalization, combination, and internalization” cycle of knowledge transfer.

Human computer interaction

We use the scenario-based design approach (Rosson and Carroll 2001) that offers a methodology to study the current state-of-practice, describe how people use technology and analyze how technology can support and improve their activities. The scenario-based design process begins with an analysis of current practice using problem scenarios. These are transformed into activity scenarios, information scenarios and interaction scenarios. The final stage is prototyping and evaluation based on the interaction scenarios. The process as a whole from problem scenarios to prototype development is iterative.

Ethnographic evidence

Project teamwork and building permit approval are still paper-intensive communication processes. Numerous cycles of requests for changes and clarifications lead to high hidden work, i.e., additional coordination and rework efforts (Levitt and Kunz 2002). For instance, a typical building permit approval cycle can take up to 18 days. A permit approval delay can translate into significant financial and business losses. One way to accelerate the permit process is through improved communication among the permitting agency and the stakeholders engaged in the design and construction of facilities. Capturing, transferring, managing, and reusing data, information, and knowledge in the context in which it is generated can lead to higher productivity, more effective communication, and fewer request-for-clarification and request-for-information cycles. However, knowledge transfer often fails. Our objective is to reduce the number of cycles to one. This objective aims to:

  • Reduce hidden work (i.e., less coordination and rework).

  • Improve communication and knowledge transfer among the stakeholders, and

  • Decrease response time and decision delays.

One of the extensive ethnographic studies we performed in the last 2 years was at the San Jose Redevelopment Agency (SJ RDA) (Fruchter and Swaminathan 2004). This provided a better understanding of the work environment and overall permitting process, i.e., activities, actors, interactions, information, and specific media through which these interactions occur. Ethnographic evidence and our field observations show that:

  • The typical workspace of a professional expert at the SJ RDA (e.g., architect, engineer) comprises a large desk mainly used to spread out drawings, large quantities of tracing paper and sketches, and a rack with all the drawings and tracing paper sketches of ongoing projects.

  • SJ RDA experts repeatedly trace over the blueprints sent by the client using tracing paper to understand all the intricacies of the drawings, a process representing reflection in action.

  • If an SJ RDA expert has an issue he/she wants to think about or a new idea, it is quicker to capture it by sketching on tracing paper than to build a computer model.

  • Blueprints are checked, annotated, and traced over to understand how they correlate.

  • During meetings, SJ RDA experts from different disciplines gather around a large meeting desk with multiple blueprints and other sets of documents (calculations, spreadsheets, text documents) that they annotate, sketch on, and correlate to identify problems, discuss key issues, make recommendations, and request changes.

  • Sketches and tracing paper drawings are archived for future reference during meetings and for reuse of good ideas. Nevertheless, it is very hard to search and find relevant material in the paper archive.

  • The final decisions, recommendations, and requests for changes are summarized in a text document and sent to the client. However, the discourse, arguments, and rationale behind these items are not provided. This leads to: (1) multiple requests for clarification sent to the agency by the client and the project team members, and (2) delays in the permit process and project progress triggered by coordination and rework efforts.

Reflection in interaction

This research adopts a scenario-based approach (Rosson and Carroll 2001) to the design of human-computer interaction. The premise behind scenario-based methods is that descriptions of people using technology are essential in analyzing how technology can support and improve their activities. We address the following research question:

What are the governing principles, and how can we map the natural paper-based environment into a digital interactive environment that emulates the dynamic and complex interactions among multiple participants, documents, and input devices (e.g., pens)?

Based on our ethnographic studies of the interactions among stakeholders engaged in building projects, we identified and modeled the activities, interactions, and information generated by stakeholders in different settings. This led to the formalization of a problem space defined by three dimensions (Fig. 1), also encoded schematically in the sketch below:

  • Number of participants; from single to multiple participants.

  • Number of documents or paper artifacts; from one to multiple documents, and

  • Number of input devices used to sketch or mark up the documents; from one to multiple input devices (e.g., pens, markers).

Fig. 1 Research problem space
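For illustration only, this three dimensional problem space can be encoded as a simple record of the three counts; the class and method names below are hypothetical sketches and are not part of the TalkingPaperTM implementation.

from dataclasses import dataclass

@dataclass(frozen=True)
class InteractionScenario:
    """A point in the three dimensional problem space (hypothetical encoding)."""
    participants: int  # number of participants, from single to multiple
    documents: int     # number of documents or paper artifacts
    pens: int          # number of input devices, e.g., digital pens or markers

    def is_reflection_in_action(self) -> bool:
        # The origin of the space: one participant, one document, one pen.
        return (self.participants, self.documents, self.pens) == (1, 1, 1)

    def is_reflection_in_interaction(self) -> bool:
        # The extreme corner: multiple participants, documents, and pens.
        return self.participants > 1 and self.documents > 1 and self.pens > 1

# The two extremes of the spectrum discussed below.
single = InteractionScenario(participants=1, documents=1, pens=1)
team = InteractionScenario(participants=5, documents=3, pens=5)
assert single.is_reflection_in_action() and team.is_reflection_in_interaction()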

This problem space defines a spectrum of interaction scenarios of increasing complexity. These interaction scenarios are consistent with the observed communicative events in real project teams and work settings. For instance, the two extremes of the spectrum are defined by the following two interaction scenarios:

  • At one end of the spectrum we have a single participant interacting with one document or artifact and using one input device or pen (Fig. 2a), and

  • At the other end of the spectrum we have multiple participants interacting with diverse documents or artifacts using multiple input devices (Fig. 2b).

Fig. 2 Interaction scenarios representing extreme cases of the problem space. a Interaction scenario defined by a single participant marking up one document page with one input device/pen, corresponding to the origin of the three dimensional problem space. b Interaction scenario defined by multiple participants marking up multiple documents with multiple devices/pens, corresponding to the extreme corner of the three dimensional problem space

It is important to note that the three dimensional problem space includes other intermediary interaction scenarios, such as:

  • a single participant marking up multiple documents with one input device/pen,

  • multiple participants marking up one document with multiple devices/pens, etc.

Analyzing the interaction scenario defined by “a single participant marking up one document with one input device/pen”, we observed that it matches Schön’s theory of reflection in action of a single practitioner (Schön 1983). In reflection in action, a single practitioner has a reflective conversation with the design situation. This entails the following activities:

  1. Naming the relevant factors in the studied design.

  2. Framing the problem in a specific domain.

  3. Making moves towards a solution, i.e., often modifying the design solution to address some of the identified problems, and

  4. Evaluating the moves or proposed modifications.

It is important to note that each move or modification made by one team member in one discipline can impact solutions in other disciplines (e.g., a change made by the architect in the floor plan layout can impact the structural system solution proposed by the structural engineer). This in turn creates a new situation for that team member and triggers a reflection in action cycle in that domain.

As we studied the second interaction scenario, defined by multiple participants marking up multiple documents with multiple devices/pens, we formalized and introduced the concept of reflection in interaction during communicative events among stakeholders. Reflection in interaction extends Schön’s theory of reflection in action of a single practitioner and builds on our ethnographic observations. As the practitioners concurrently review multiple documents, they have a constant reflective conversation with the situation, the artifacts or documents, and the stakeholders. Their interactive reflective process consists of:

  1. Identifying the relevant factors in all considered disciplines through exploratory sketching and discussion.

  2. Correlating these factors across disciplines and documents.

  3. Discussing and exploring alternatives across disciplines.

  4. Assessing alternatives and their implications.

We argue that, whereas action-oriented knowledge is tacit and difficult to transfer, what can be captured and transferred is the reflection in interaction that reveals the rationale and correlation across disciplines and documents, as well as the new knowledge that is created through discourse among the stakeholders.

Capturing, sharing, and reusing knowledge created in cross-disciplinary, collaborative teams is critical to increasing the quality of the product and reducing time-to-market and cost. Concept generation and development occur most frequently in informal media, where design capture tools are the weakest. This statement has strong implications for the capture and reuse of design knowledge, because conceptual design generates the majority of the initial ideas and directions that guide the course of the project. Sketching is a natural mode for designers to communicate ideas in highly informal activities such as brainstorming sessions and project reviews. Often, the sketch itself is merely the vehicle that spawns discussion about a particular design issue. Thus, from a design perspective, capturing both the sketch itself and the discussion that provides the context behind the sketch is important. It is interesting to note that today’s state-of-practice or best practices are not captured, and knowledge is lost when the whiteboard is erased or the paper napkin sketch is tossed away. With all the advances in computing, people today still prefer to have a conversation and use paper and pencil sketches to communicate and capture ideas. Our observations show that during communicative events there is a continuum between discourse and sketching as ideas are explored and shared. We assert that a primary source of knowledge behind design decisions is embedded within the verbal conversation among designers. The link between dialogue and sketch provides a rich context to express and exchange knowledge. This link becomes critical in the process of knowledge sharing, retrieval, and reuse, as it supports the user’s understanding of the shared information and assessment of the relevance of the retrieved content with respect to the task at hand. Nevertheless, paper is a medium that is hard to share, exchange, and reuse, and it does not capture the discourse among users. The moment the paper sketch is lost, the ideas are lost.

This research addresses the communication, coordination, and cognition needs defined by: (1) the need to bridge the analog paper and discourse world with the digital world; such a bridge can improve knowledge transfer and reuse over the life cycle of the building; and (2) the increasing complexity defined by the three dimensional problem space (multiple participants, multiple documents, multiple pens as input devices) posed by the paper-intensive reflection in interaction process in which stakeholders engage in dialogue and sketching activities. Note that this three dimensional problem space applies to both the paper and digital worlds.

TalkingPaperTM environment

TalkingPaperTM aims to empower the project stakeholders and engage them in productive, collaborative, synchronous and asynchronous teamwork by leveraging the best of all worlds: paper, digital multimedia, and networked communication. TalkingPaperTM builds on our experience in developing innovative multimedia knowledge capture technologies, e.g., the RECALLTM technology (Fruchter and Yen 2000), as well as commercial technologies such as AnotoTM paper, digital pens (e.g., by Nokia, Logitech, Maxell), cell phones, Bluetooth communication, and GSM/GPRS network services. To date, related digital pen pilot efforts and trial projects focus on text document mark-up (Guimbretiere 2003) and forms automation for different sectors such as healthcare, service and support companies, government organizations, education testing agencies, and pharmaceutical research companies, e.g., forms for clinical studies, forms for customer data and signatures, and weekly report forms for service engineers (e.g., Dai Nippon Printing and Hitachi Corporation in Japan; HP, USA).

Our focus goes beyond data entry automation using digital pens and forms on AnotoTM paper: we present an approach to capture, share, and reuse knowledge created during multimodal communicative events. We model reflection in action and reflection in interaction with the TalkingPaperTM prototype, which represents a ubiquitous, multimedia collaborative environment. The aim is to: (1) support the creative process of concept generation and problem solving of cross-disciplinary project teams, and (2) enable the capture, sharing, and reuse of the knowledge created during these activities. Previous empirical observations (Fruchter and Demian 2002; Fruchter and Swaminathan 2004) of cross-disciplinary teams at work show that knowledge reuse is effective when designers can:

  • Quickly sketch and explain their ideas using paper and pencil.

  • Quickly find (mentally) reusable items, and

  • Remember the context of each item, therefore understand it and reuse it effectively.

Based on our observations, we formalize key activities in the knowledge life cycle:

  • Create and capture to publish and share reusable content.

  • Find reusable content, and

  • Understand this content in the original context in which it was created.

Highly structured representations of design knowledge can be used for reasoning. However, these approaches usually require manual pre- or post-processing, structuring, and indexing of design knowledge. In order to capture, share, and reuse relevant content (i.e., knowledge in context) from media such as paper and pencil sketches and verbal discourse, it is critical to convert such externalized tacit knowledge into digital symbolic representations. The digital unstructured, informal content captured from different communication channels, i.e., digital audio from verbal discourse and digital sketches from paper and pencil sketches, needs to be indexed and synchronized. This facilitates future searching, sharing, replay, and reuse of the knowledge.
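As a minimal sketch of this indexing and synchronization step, assuming that the pen strokes and the audio recording share a common session clock, and with names that are hypothetical rather than the actual TalkingPaperTM data structures, each digital sketch object can be mapped to an offset in the audio stream:

from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class SketchObject:
    """One pen stroke captured from AnotoTM paper (hypothetical structure)."""
    pen_id: str                        # identifies the participant's digital pen
    page_id: str                       # identifies the Anoto page the stroke was drawn on
    t_start: float                     # seconds since the session started
    t_end: float
    points: List[Tuple[float, float]]  # (x, y) coordinates of the stroke

def audio_offset(stroke: SketchObject, session_start: float = 0.0) -> float:
    """Offset (in seconds) at which audio replay should start for this stroke.

    Because both channels are time-stamped against the same session clock,
    synchronization reduces to a subtraction of the session start time.
    """
    return max(0.0, stroke.t_start - session_start)

def index_by_page(strokes: List[SketchObject]) -> Dict[str, List[SketchObject]]:
    """Group strokes by Anoto page so a replayed page can list its sketch objects in time order."""
    index: Dict[str, List[SketchObject]] = {}
    for stroke in sorted(strokes, key=lambda s: s.t_start):
        index.setdefault(stroke.page_id, []).append(stroke)
    return index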

TalkingPaperTM provides an analog-to-digital content conversion processor that:

  • Enables seamless transformation of informal analog content, such as dialogue and paper and pencil sketches, into digital sketch objects indexed and synchronized with the streamed digital audio of a TalkingPaperTM session. This conversion process takes place in real time, with high fidelity, and with minimal overhead for the participants.

  • Supports knowledge reuse by allowing the user to understand the content in the context in which it originated, i.e., interactive replay of indexed digital audio-sketch rich multimedia content that captures the creative human activities of concept generation through dialogue and paper and pencil sketching. The TalkingPaperTM sessions are automatically uploaded to a TalkingPaperTM web server that was developed to archive, share, and stream these sessions on demand.

TalkingPaperTM allows future contextual search, retrieval, replay, and reuse based on a sketch, a document annotation, a keyword, and/or the participant who represents a specific domain expertise or perspective. The TalkingPaperTM interactive environment provides methods for unique identification of participants and documents, synchronized and indexed with the digital audio and sketches. The participants’ sketches or annotations can be flagged with color markers on the TalkingPaperTM page in response to the user’s query request. For instance, the query “Bill” will flag all the sketches and annotations made by Bill on a specific TalkingPaperTM page that is replayed. The user can select any of these flags to start playback of the session from that point on.
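A simplified sketch of this flagging behavior is given below; it reuses the hypothetical SketchObject fields introduced above and assumes a pen-to-participant registry, which is an illustrative assumption rather than the actual TalkingPaperTM query mechanism.

from typing import Dict, List, Tuple

def flags_for_query(strokes: list, pen_owners: Dict[str, str], query: str) -> List[Tuple[object, float]]:
    """Return (stroke, replay_offset) pairs for every sketch made by the queried participant(s).

    pen_owners maps a pen_id to a participant name, e.g. {"pen-07": "Bill"} (hypothetical).
    A query such as "Bill" or "Mary and Joe" is split into individual names.
    """
    wanted = {name.strip().lower() for name in query.replace(" and ", ",").split(",")}
    flags = []
    for stroke in strokes:
        owner = pen_owners.get(stroke.pen_id, "").lower()
        if owner in wanted:
            # Each flag carries the audio offset at which playback should start.
            flags.append((stroke, stroke.t_start))
    return flags

Selecting one of the returned flags would then start streaming the synchronized audio-sketch content from that offset, as described above.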

TalkingPaperTM client-server environment supports the identified create and capture, publish and share, find, and understand activities in the following way:

  • The create and capture activity is supported by the TalkingPaperTM client application through a high-fidelity, interactive, integrated multi-user, analog-to-digital multimedia conversion process. This process converts all sketches and annotations on the AnotoTM paper into digital sketch objects that are synchronized with the speech from the digital audio channel and with the documents from the corporate database that were printed on AnotoTM pages. The indexing and synchronization are based on the time stamps of the different channels (i.e., digital audio and sketch).

  • The TalkingPaperTM client application allows real-time publishing of the dialogue and sketches/annotations to a TalkingPaperTM web server (a minimal sketch of this publish step follows this list). The sharing of content is supported by streaming the selected session from the TalkingPaperTM web server, which automatically synchronizes digital audio-sketch episodes with the corresponding document that was printed on the AnotoTM paper used in that session.

  • The find activity is supported by an integrated digital audio-sketch search engine provided by I-Dialogue at the macro enterprise database search level (Yin and Fruchter, presented in this volume), and by user-driven search or selection at the micro session level. In this case, the user selects specific items from a sketch or annotation on a TalkingPaperTM page to trigger playback from that item on.

  • The understand activity is supported by TalkingPaperTM, which allows the user to replay a selected content item or a full session in the context in which it was created. The verbal discourse provides the rationale behind the sketches and annotations on the document pages. This is achieved as the TalkingPaperTM web server streams the selected: (1) session, (2) sketch/annotation item from within a session, or (3) flag that marks a sketch or annotation on the displayed TalkingPaperTM page indicating a specific person’s ID.
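The publish step referenced in the list above can be sketched as follows, assuming the session is bundled as an audio file, a stroke log, and a page map, and that the TalkingPaperTM web server exposes an HTTP upload endpoint; the endpoint path and field names are assumptions for illustration only.

import json
import requests  # third-party HTTP client, used here only for illustration

def publish_session(server_url: str, session_id: str,
                    audio_path: str, strokes: list, page_map: dict) -> bool:
    """Upload one session bundle to the web server so it can be streamed on demand.

    The "/sessions" endpoint and the multipart field names are hypothetical.
    """
    metadata = {
        "session_id": session_id,
        "page_map": page_map,                      # Anoto page id -> printed document page
        "strokes": [s.__dict__ for s in strokes],  # serialized sketch objects
    }
    with open(audio_path, "rb") as audio_file:
        response = requests.post(
            f"{server_url}/sessions",
            files={"audio": audio_file},
            data={"metadata": json.dumps(metadata)},
            timeout=30,
        )
    return response.ok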

Over the past 3 years we developed six rapid prototyping generations of the TalkingPaperTM system. Each new generation of TalkingPaperTM implemented one of the interaction scenarios defined by the problem space (Fig. 1). The prototypes are of increasing complexity:

  1. The first TalkingPaperTM prototype addressed the scenario of a single participant marking up one document page with one digital pen.

  2. After testing that our concept and algorithm worked, we developed the next prototype, which allowed multiple parallel sessions of TalkingPaperTM client applications in which single participants each mark up a different document page with their own digital pen.

  3. The next prototype considered parallel sessions of single participants, each marking up multiple document pages with their own digital pen. It is important to note that, from a cognition point of view, it is very natural for a person to switch from one page to another, or one document to another, while marking up and explaining correlated items on different pages or documents in the analog paper and pencil world. However, it was a complex task to model and implement this cognitive activity in the digital world. To accomplish this goal we developed and implemented a printing and synchronization algorithm that keeps track of which document page was printed on which Anoto page (see the sketch after this list). The algorithm further indexes and synchronizes the pages and their printed document content with the digital sketch/annotation objects and the audio stream. Once the session is published on the TalkingPaperTM web server, it streams and recreates the same look and feel of the sketching or annotation activity and verbal discourse, as well as the sequence of marked-up pages that occurred in the analog paper and pencil world.

  4. We continued the rapid prototyping cycle with a TalkingPaperTM system that enables a single user to mark up multiple pages with multiple pens. This setting allows users to assign a different semantic meaning to each digital pen, each having a different color. For instance, we emulated a real world setting in which structural engineers check detailing blueprints using red to mean “changes to be made,” green to mean “correct detail,” yellow to mean “checked item,” etc. In this case, the user who replays the session from the TalkingPaperTM web server can immediately see the color-coded annotations, for instance all the items that need to be changed marked in red, replay a selected red sketch or annotation, and listen to the rationale behind the requested modification.

  5. The next TalkingPaperTM prototype implemented the reflection in interaction scenario with multiple participants marking up one document, printed on a single Anoto page, sharing a single digital pen. This emulates the real world case where multiple experts examine, discuss, and mark up changes and issues on one large blueprint (e.g., an architectural floor plan or a structural system for a future building).

  6. The latest implemented prototype models the interaction scenario that engages multiple participants correlating and marking up items on multiple documents printed on different Anoto pages, each participant using their own digital pen. This is the most complex scenario. The implementation allows for multiple parallel sessions (i.e., project teams or groups meeting and using TalkingPaperTM to capture their discourse and annotations/sketches). Allowing each participant to use their own digital pen provides a unique identifier for each person. Participants can assign their own name to the pen ID. The person’s unique identifier (e.g., their name or the pen number) can be used during search and replay. During the streamed playback of a TalkingPaperTM session the user can enter a query such as “Joe” or “Mary and Joe” and TalkingPaperTM will flag with small, differently colored markers the locations where Joe or Mary have sketched or annotated the document and spoken. The user can select any of these flags on screen and TalkingPaperTM will start playing from that point.
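To make the bookkeeping in items 3 and 6 above concrete, the sketch below shows a hypothetical session index: a page map records which document page was printed on which Anoto page, and a pen registry resolves participant names to pen IDs during search and replay. The names and structures are illustrative assumptions, not the production algorithm.

from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class SessionIndex:
    """Bookkeeping for one TalkingPaper session (illustrative only)."""
    # Anoto page id -> (source document, page number) printed on that page
    page_map: Dict[str, Tuple[str, int]] = field(default_factory=dict)
    # digital pen id -> participant name assigned to that pen
    pen_registry: Dict[str, str] = field(default_factory=dict)

    def register_print(self, anoto_page_id: str, document: str, page_no: int) -> None:
        self.page_map[anoto_page_id] = (document, page_no)

    def assign_pen(self, pen_id: str, participant: str) -> None:
        self.pen_registry[pen_id] = participant

    def pens_for(self, query: str) -> List[str]:
        """Resolve a query such as "Joe" or "Mary and Joe" to the matching pen IDs."""
        wanted = {name.strip().lower() for name in query.replace(" and ", ",").split(",")}
        return [pen for pen, name in self.pen_registry.items() if name.lower() in wanted]

# Example: one printed drawing and two participants sharing a session.
index = SessionIndex()
index.register_print("anoto-0001", document="floor_plan.dwg", page_no=1)
index.assign_pen("pen-07", "Mary")
index.assign_pen("pen-12", "Joe")
assert set(index.pens_for("Mary and Joe")) == {"pen-07", "pen-12"}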

It is important to note that a TalkingPaperTM session in the IE browser can contain any number of digital pages, like an electronic notebook. This provides the user with what we call content in context. The content is the printed document image, sketch, or annotation; the context is the project or discipline context the user is familiar with, as well as the dialogue in which the sketches and annotations were made. This affordance allows the user, who is either a current project team member or a new user exploring alternatives from past projects, to understand the proposed concepts or changes.

We have implemented these prototypes using diverse hardware and network environments to address synchronous collocated meeting needs, as well as asynchronous mobile workers’ needs.

  • In the first case participants who are in their office or in a meeting room use: (1) a TalkingPaperTM client application that runs on a PC connected through LAN to the TalkingPaperTM web server, (2) the digital pen(s), (3) the cell phone to push the pen strokes to the TalkingPaperTM web server, and (4) the Anoto paper that can be a blank page for sketching or contain printed document pages that can be annotated.

  • For the asynchronous mobile solution the knowledge worker uses: (1) a TalkingPaperTM client application that runs on the cell phone that is connected to the TalkingPaperTM web server via GSM/GPRS, (2) the digital pen, and (3) the Anoto paper that can be a blank page for sketching or can contain printed document pages that can be annotated.

In both cases, i.e., TalkingPaperTM client application running on a PC or on a cell phone, all digital pens are paired and communicate with the cell phone via Bluetooth.
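The two hardware configurations can be summarized schematically as follows; the enum and field names are assumptions made for illustration, not the deployed software.

from dataclasses import dataclass
from enum import Enum

class Uplink(Enum):
    LAN = "PC client connected to the TalkingPaper web server over a LAN"
    GPRS = "cell phone client connected to the TalkingPaper web server over GSM/GPRS"

@dataclass
class Deployment:
    uplink: Uplink
    client_host: str                  # "PC" for collocated meetings, "cell phone" for mobile workers
    pens_via_bluetooth: bool = True   # in both cases the pens pair with the cell phone
    anoto_paper: bool = True          # blank pages or printed document pages

collocated = Deployment(uplink=Uplink.LAN, client_host="PC")
mobile = Deployment(uplink=Uplink.GPRS, client_host="cell phone")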

Figure 3 illustrates one of the implemented interaction scenarios. In the reflection in interaction collaborative scenario, each participant has a digital pen and can be uniquely identified as he/she sketches and annotates documents during the meeting. The TalkingPaperTM interactive replay retrieves and synchronizes formal digital documents with the digital audio and sketches if the user(s) printed the specific document on the AnotoTM paper, e.g., CAD drawings, spreadsheets, or other documents stored in a digital database. This enables the users to explore, understand, and assess the content (Fig. 3). It is important to note that the analog-to-digital transformation can be iterative: as the stakeholders replay the digital multimedia content on screen and understand the proposed idea or solution, they can decide to print the sketch or annotated document displayed on screen, and further annotate it or sketch on it using TalkingPaperTM to provide rapid feedback, thereby creating a new cycle. TalkingPaperTM affords any number of analog-to-digital and digital-to-analog cycles necessary in the decision making process.

Fig. 3 TalkingPaperTM bridging the analog and digital worlds

Conclusions

We formalized the observed natural, social intelligence in project teamwork as a reflection in interaction process and modeled it with the TalkingPaperTM prototype system. The prototype transforms the dialogue and the paper and pencil sketches or joint annotations of one or multiple shared paper documents (e.g., blueprints, Excel or Word documents, images, etc.) into indexed and synchronized digital multimedia content that can be streamed on demand over the web to all project stakeholders for rapid knowledge transfer and decision-making. TalkingPaperTM is a horizontal technology that can have a significant impact on work practice and process in all phases of the life cycle of a building, e.g., design, construction, and facility management, as well as in other domains such as manufacturing, publishing, and education. The scientific and technology contributions of this research effort include:

  • An extension of Schön’s concept of reflection in action of a practitioner to a multi-practitioner reflection in interaction paradigm.

  • A formalization of the complex interactions in a “multiple participants, multiple documents, multiple pens” interactive multimedia workspace that bridges the paper and digital worlds, and

  • The TalkingPaperTM prototype as an analog-to-digital content conversion processor of rich multimodal communications in support of the knowledge life cycle, i.e., create, capture, index, store, search, find, retrieve, share, and reuse knowledge.

A rapid prototyping approach was used in this research. Six TalkingPaperTM prototypes were developed to tackle the different interaction scenarios and needs. Preliminary testing and evaluation are ongoing. The evaluation focuses on assessing our understanding of the reflection in interaction model, and the extent to which the TalkingPaperTM user interactions support and improve this process. The evaluation is formative (Rosson and Carroll 2001), i.e., the evaluation results are used iteratively to guide the process of developing and refining the reflection in interaction model and the TalkingPaperTM prototype. Tests focus in particular on the usability of TalkingPaperTM and how the prototype can reduce hidden work, i.e., coordination and rework, improve communication and knowledge transfer, and decrease response time and decision delays. Specific metrics we consider are efficiency, effectiveness, and satisfaction with respect to both quality and process. Usability tests of the TalkingPaperTM prototype are performed along the spectrum of scenarios defined by the three dimensional space in increasing order of complexity, from the reflection in action scenario of a single user, single document, and single pen, to the reflection in interaction scenario of multiple users, documents, and pens. Results of preliminary usability evaluations during interactions between, e.g., architects and structural engineers, or engineers and detailers, indicate that decision cycles are significantly decreased, from 2 weeks to half a day on average. Further usability studies in large testbeds are currently under way.