Intelligent Systems Lab Project: Interactive Digital Attachments

Participants

Associated Members

Supervisors

Motivation

The system Interactive Digital Attachments (INDIGA) aims to overcome these limitations by using new augmented reality technology to identify physical documents and link them to digital documents.

First term

Application scenario

Businesses that depend heavily on documents and want to improve their workflow and efficiency are ideal candidates for a system that simplifies sharing whole document structures (e.g. all documents belonging to a certain contract) and enables linking meaningful attachments to a master document.

Specific scenario

A soap manufacturer asks an advertising agency to design an additional landing page for their website, targeted at male students. The agency uses INDIGA to manage its documents. One employee has drawn a design draft (a physical document carrying an identifier for the system). He places the draft on his desk within range of the camera, which detects it. He then attaches it to the requirements document his supervisor sent him and shares that requirements document with a web developer.

Objectives

The project goal is to extend the possibilities of a physical workspace by providing the following capabilities:

Description

Setup

The setup uses two cameras: one placed above the user, facing down onto the desktop, for document detection, and one that detects gestures for navigating the system's menu. The down-facing camera recognises markers on documents, which are used to look the documents up in a database and display them in the system. The user can then interact with them via gestures to group or link them with other documents or share them with other users. New documents are added to the database via a second, more elaborate interface operated with a mouse and keyboard.
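The page does not specify which marker library was used in the first term. As a rough illustration of the detection step only, the following Python sketch uses OpenCV's built-in QR-code detector on a frame from the down-facing camera and maps the decoded identifiers to document records in a hypothetical in-memory database; all names are illustrative and not taken from the actual implementation.

import cv2

# Hypothetical in-memory stand-in for the document database.
DOCUMENT_DB = {
    "DOC-0042": {"title": "Requirements document", "owner": "supervisor"},
    "DOC-0043": {"title": "Design draft", "owner": "employee"},
}

detector = cv2.QRCodeDetector()

def detect_documents(frame):
    """Decode all markers visible in a frame of the down-facing camera
    and return the matching database records."""
    ok, ids, points, _ = detector.detectAndDecodeMulti(frame)
    if not ok:
        return []
    return [DOCUMENT_DB[i] for i in ids if i in DOCUMENT_DB]

camera = cv2.VideoCapture(0)  # down-facing desk camera
ret, frame = camera.read()
if ret:
    for doc in detect_documents(frame):
        print("Detected:", doc["title"])
camera.release()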

Assets

Results

Working demo system, which includes:

The video shows a number of basic use cases, including adding documents to the system, logging in as one of several users, displaying recognised documents, and attaching and sharing documents via gestures/hand recognition.

Discussion and conclusion

Outlook

Second term

Conclusion of first term

The results of the system developed in the first term revealed two major flaws in the design: the interface proved not as intuitive as expected, and the digital and physical workspaces were too loosely coupled. We therefore took a new approach to implementing the interface and developed the system in a new direction to reach our intended goals. To avoid repeating the mistakes of the previous term, we concentrated on fusing both workspaces from the beginning of development.

Changed focus of objectives

Description

New interface concept

To couple the two workspaces more tightly, we decided to fuse them by projecting the digital view directly onto the desk and establishing the connection via simultaneous tracking of multiple QR-markers [1] printed on the physical documents. Tracking all physical documents enables us to project virtual elements (buttons, overlays, etc.) on top of those documents and to perform the previously implemented actions, such as sharing, grouping, and attaching, directly on them. In this way, we hope to improve the augmented reality experience and to reduce the user's cognitive load by removing the need to connect both workspaces mentally.
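How the camera and projector coordinate systems are aligned is not detailed on this page. A common approach, sketched below as an assumption rather than the actual INDIGA implementation, is to calibrate a homography between the camera image and the projected image once, then warp detected marker corners into projector space before drawing overlay elements (such as the interaction menu) on top of the document. All calibration values and function names are illustrative.

import cv2
import numpy as np

# One-time calibration: four reference points as seen by the camera and the
# same points in projector pixel coordinates (e.g. from a projected pattern).
camera_pts = np.array([[102, 88], [538, 95], [545, 410], [98, 402]], dtype=np.float32)
projector_pts = np.array([[0, 0], [1280, 0], [1280, 800], [0, 800]], dtype=np.float32)
H, _ = cv2.findHomography(camera_pts, projector_pts)

def to_projector(points):
    """Warp coordinates from camera space into projector space."""
    pts = np.asarray(points, dtype=np.float32).reshape(-1, 1, 2)
    return cv2.perspectiveTransform(pts, H).reshape(-1, 2)

def draw_document_menu(canvas, marker_corners):
    """Project a simple interaction menu (share / group / attach) next to
    the top-left corner of a tracked document."""
    corners = to_projector(marker_corners)
    x, y = map(int, corners[0])
    for i, label in enumerate(("share", "group", "attach")):
        top = y + 40 * i
        cv2.rectangle(canvas, (x, top), (x + 120, top + 32), (255, 255, 255), -1)
        cv2.putText(canvas, label, (x + 8, top + 24),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 0, 0), 2)

canvas = np.zeros((800, 1280, 3), dtype=np.uint8)  # image sent to the projector
draw_document_menu(canvas, [[200, 150], [320, 150], [320, 240], [200, 240]])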

Setup

The picture above depicts the current system setup: a projector and a Kinect are attached to a tripod and rotated towards the table so that the surface can be scanned for documents with QR-codes.

Assets

Discarded technologies:

Results

Working demo system, which includes:

The video shows that scanning a QR-marker results in a virtual interaction menu being projected on top of the document. The menu offers sharing, grouping, and attaching other documents as functionalities. During any of these actions, the documents involved are colour coded: green for selected, red for unselected documents. Due to issues with gesture tracking in combination with sheets of paper, we were forced to use the mouse as input device.
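The colour coding described above could be realised by projecting a translucent tint over each tracked document's outline. The minimal sketch below assumes the document corners are already given in projector coordinates (as in the previous sketch) and uses hypothetical names; it fills the document area green when selected and red otherwise.

import cv2
import numpy as np

SELECTED = (0, 200, 0)    # BGR green
UNSELECTED = (0, 0, 200)  # BGR red

def tint_document(canvas, corners_projector, selected):
    """Overlay a translucent colour on a document region (given in projector
    coordinates) to show whether it is currently part of the selection."""
    outline = np.asarray(corners_projector, dtype=np.int32)
    overlay = canvas.copy()
    cv2.fillPoly(overlay, [outline], SELECTED if selected else UNSELECTED)
    cv2.addWeighted(overlay, 0.4, canvas, 0.6, 0, dst=canvas)

canvas = np.zeros((800, 1280, 3), dtype=np.uint8)  # image sent to the projector
tint_document(canvas, [[200, 150], [320, 150], [320, 240], [200, 240]], selected=True)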

Discussion and conclusion

References


[1] As the QR-codes serve the additional purpose of locating documents on the table, we refer to them as QR-markers to emphasise their close relationship to AR-markers.