The Virtual Constructor is a knowledge-based system that enables
the interactive assembly of 3D visualized mechanical parts into complex aggregates.
The user can directly manipulate the virtual scene using the mouse
or similar input devices, and can also instruct the system using simple commands
in natural language. Various operations such as assembly, disassembly,
and rotation of sub-assemblies are supported by a knowledge-based description
of the objects' mating possibilities. A key feature of the system is that
the current state of the assembly is dynamically conceptualized
by continuously matching the geometry scene against a structured model
of the target aggregate. Dynamic knowledge representations are created
when constructed aggregates are recognized as assembly groups of the target
aggregate. The internal representations are further modified when, according
to their use, the specific functions of single parts in the target aggregate
are determined. Therefore, verbal instructions can always refer to the
current state of the assembly.
- Knowledge-Based, Real-Time Assembly Simulation
- Knowledge-based descriptions of the visualized mechanical
objects' connection ports enable the simulation
of part assembly and disassembly as well as aggregate
modifications along rotational and translational degrees of freedom.
Top-level concepts of our port taxonomy include
extrusion ports for modeling peg-in-hole-like insertions,
plane ports for modeling connections between co-planar faces,
and point ports for modeling point-like connections that induce
no translational degrees of freedom.
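The port taxonomy above can be illustrated with a small type model. This is a minimal sketch, not the system's actual representation: the class names, the same-type mating rule, and the degree-of-freedom values are all assumptions for illustration.

```python
from dataclasses import dataclass
from enum import Enum

class PortType(Enum):
    EXTRUSION = "extrusion"  # peg-in-hole-like insertions
    PLANE = "plane"          # connections between co-planar faces
    POINT = "point"          # point-like connections, no translational DOF

@dataclass
class Port:
    kind: PortType
    translational_dof: int  # translational freedom remaining after mating
    rotational_dof: int     # rotational freedom remaining after mating

def can_mate(a: Port, b: Port) -> bool:
    # Illustrative rule: only ports of the same type can be connected.
    return a.kind == b.kind

# Example ports (DOF values are illustrative):
peg = Port(PortType.EXTRUSION, translational_dof=1, rotational_dof=1)
hole = Port(PortType.EXTRUSION, translational_dof=1, rotational_dof=1)
face = Port(PortType.PLANE, translational_dof=2, rotational_dof=1)
```

An extrusion port mated with another extrusion port still permits sliding along and rotating about the insertion axis, while a point port pins down all translations.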
- Dynamic Scene Conceptualization
- COAR: Geometry scene objects are represented in the frame-based language
COAR. Inferences over COAR representations include aggregate conceptualization,
by which constructed aggregates are recognized as subassemblies of the
target aggregate, and role assignment, by which components are reclassified
w.r.t. the underlying concept hierarchy according to their use in larger
assemblies. COAR representations also integrate spatial informations, such
as position, size, distance, or orthogonality, which are inferred on need
from the geometry scene.
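The two inferences described above can be sketched as follows. This is a hedged illustration, not the actual COAR language: the part-of hierarchy, the part names, and the role table are invented for the example.

```python
# Target aggregate modeled as a part-of hierarchy (hypothetical data):
# each subassembly name maps to the multiset of parts it is built from.
TARGET = {
    "propeller": ["disc", "bolt"],
    "undercarriage": ["wheel", "wheel", "axle"],
}

# Use-specific roles (hypothetical): a generic part gets a new
# classification once its function in an aggregate is known.
ROLES = {("bolt", "propeller"): "propeller-axle"}

def conceptualize(parts):
    """Aggregate conceptualization: return the name of the target
    subassembly whose part multiset matches the constructed group."""
    for name, required in TARGET.items():
        if sorted(required) == sorted(parts):
            return name
    return None

def assign_role(part, aggregate):
    """Role assignment: reclassify a part according to its use in a
    recognized aggregate; fall back to the generic classification."""
    return ROLES.get((part, aggregate), part)
```

Once a disc and a bolt are recognized as a propeller, the bolt is no longer just a bolt but the propeller's axle, and later instructions can refer to it as such.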
- Imaginal Prototypes: These parametric 3D shape representations of single
objects and complex aggregates are currently being developed in the CODY project.
- Multimodal Input
- Natural language instructions:
Typed and spoken input is supported.
Verbal instructions may refer to spatial and visual properties of objects
and - due to dynamic conceptualization - to currently assembled
aggregates and functional roles of objects.
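Resolving such verbal references against the dynamically conceptualized scene can be sketched as a constraint filter over object descriptions. The scene entries, attribute names, and the resolver below are illustrative assumptions, not the system's actual implementation.

```python
# Hypothetical scene state after dynamic conceptualization: two bolts,
# one of which has been assembled into a propeller and assigned a role.
SCENE = [
    {"id": 1, "type": "bolt", "role": None, "aggregate": None},
    {"id": 2, "type": "bolt", "role": "propeller-axle", "aggregate": "propeller"},
]

def resolve(type_=None, role=None, aggregate=None):
    """Return all scene objects satisfying the given constraints,
    e.g. those extracted from 'the bolt of the propeller'."""
    hits = SCENE
    if type_ is not None:
        hits = [o for o in hits if o["type"] == type_]
    if role is not None:
        hits = [o for o in hits if o["role"] == role]
    if aggregate is not None:
        hits = [o for o in hits if o["aggregate"] == aggregate]
    return hits
```

"The bolt" alone is ambiguous here, but "the bolt of the propeller" picks out a unique object, precisely because the current assembly state is reflected in the representations.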
- Direct manipulation: Parts can be assembled, disassembled, and rotated
by using the mouse or similar input devices.
Direct manipulation operations build on a knowledge-based snapping mechanism.
E.g., for object assembly, the user moves an object close to another;
the snapping mechanism then completes the fitting process
in a collision-free manner.
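The proximity-triggered part of this snapping behavior can be sketched as follows; the threshold value and function names are illustrative assumptions, and the real system additionally checks port compatibility and completes the fit collision-free.

```python
import math

SNAP_DISTANCE = 0.05  # illustrative threshold in scene units

def try_snap(moving_port_pos, fixed_port_pos):
    """If the dragged part's port is close enough to a mating port,
    return the position that completes the fit; otherwise None."""
    if math.dist(moving_port_pos, fixed_port_pos) <= SNAP_DISTANCE:
        # The part would be translated so that the two ports coincide.
        return fixed_port_pos
    return None
```

Dragging a peg to within the threshold of a hole thus finishes the insertion automatically, so the user never has to align parts with pixel precision.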
- Multimodal Output
- Real-time visualization: 2D and 3D visualization is supported.
- Acoustic feedback: Material sounds, e.g. a click for assembly operations,
create a more realistic sensation of the virtual scene. A speech generator
informs the user whenever meaningful assemblies are created.
- In the SGIM
project, speech- and gesture-based interfaces using large screen displays
are being developed. The Virtual Constructor serves as an intelligent subsystem
in the virtual assembly scenario.
Max, the Multimodal Assembly eXpert, is an anthropomorphic agent that can
demonstrate to the user the assembly of complex aggregates using gestures.
- Within SFB
360, the Virtual Constructor has been coupled with vision
and robotic components
of a prototypical "Situated Artificial Communicator". Here, the
virtual environment is a reconstruction of a real world assembly scene.
While the vision component only provides information about single parts,
dynamic scene conceptualization infers knowledge about the objects' assembly
structure and functional roles.
- B. Jung, S. Kopp, M.E. Latoschik, T. Sowa, I. Wachsmuth:
Virtuelles Konstruieren mit Gestik und Sprache [Virtual Construction with Gesture and Speech].
In Künstliche Intelligenz, KI 2/00, 2000, 5-11.
- B. Jung, M. Latoschik, I. Wachsmuth:
Knowledge-Based Assembly Simulation for Virtual Prototype Modeling.
IECON'98 - Proceedings of the 24th Annual Conference
of the IEEE Industrial Electronics Society, Vol. 4,
IEEE, 1998, 2152-2157.
- B. Jung, M. Hoffhenke, I. Wachsmuth: Virtual Assembly with Construction Kits.
Presented at the 1997 Design for Manufacturing Conference (DFM '97),
September 14-17, 1997, Sacramento, CA.
Appeared in Proceedings of the 1998 ASME Design Engineering
Technical Conferences (DETC-DFM '98).
- B. Jung, I. Wachsmuth:
Integration of Geometric and Conceptual Reasoning
for Interacting with Virtual Environments.
Proc. AAAI 1998 Spring Symposium on Multimodal Reasoning.
- Y. Cao, B. Jung & I. Wachsmuth: Situated Verbal Interaction in Virtual
Design and Assembly. IJCAI-95 Videotape Program. Abstract in Proc. Fourteenth
International Joint Conference on Artificial Intelligence, 1995, 2061-2062.
Knowledge-based assembly simulation in the Virtual Constructor:
When port knowledge is activated (indicated by highlighting), parts snap
together according to legal mating conditions.
Extraction of port knowledge from unstructured CAD models. The port
recognition software detects mating features in polygon soups. Non-visual
port attributes are interactively modeled on the Responsive Workbench.
- This MPEG video demonstrates a speech- and gesture-based interface to the
Virtual Constructor that was developed in the SGIM project.
Max, the Multimodal Assembly eXpert, can demonstrate to the user the
assembly of complex aggregates.
Don't miss the other videos in the
showcase of the VR-Lab!
The Virtual Constructor is a spin-off from the CODY project.