Artificial Intelligence for Predictive and Evidence Based Architecture Design

Authors

  • Mehul Bhatt University of Bremen and The DesignSpace Group
  • Jakob Suchan University of Bremen and The DesignSpace Group
  • Carl Schultz University of Bremen and The DesignSpace Group
  • Vasiliki Kondyli University of Bremen and The DesignSpace Group
  • Saurabh Goyal University of Bremen and The DesignSpace Group

DOI:

https://doi.org/10.1609/aaai.v30i1.9850

Keywords:

applied artificial intelligence, visual perception, architectural cognition

Abstract

The evidence-based analysis of people's navigation and wayfinding behaviour in large-scale built-up environments (e.g., hospitals, airports) encompasses the measurement and qualitative analysis of a range of aspects, including people's visual perception in new and familiar surroundings, their decision-making procedures and intentions, and the affordances of the environment itself. In our research on large-scale evidence-based qualitative analysis of wayfinding behaviour, we construe visual perception and navigation in built-up environments as a dynamic narrative construction process of movement and exploration driven by situation-dependent goals, guided by visual aids such as signage and landmarks, and influenced by environmental (e.g., presence of other people, time of day, lighting) and personal (e.g., age, physical attributes) factors. We employ a range of sensors for measuring the embodied visuo-locomotive experience of building users: eye-tracking, egocentric gaze analysis, external camera-based visual analysis to interpret fine-grained behaviour (e.g., stopping, looking around, interacting with other people), and manual observations made by human experimenters. Observations are processed, analysed, and integrated in a holistic model of the visuo-locomotive narrative experience at the individual and group level. Our model also combines embodied visual perception analysis with analysis of the structure and layout of the environment (e.g., topology, routes, isovists) computed from available 3D models of the building. In this framework, abstract regions such as the visibility space, regions of attention, and eye-movement clusters are treated as first-class visuo-spatial and iconic objects that can be used for interpreting the visual experience of subjects in a high-level qualitative manner.
The final integrated analysis of the wayfinding experience is such that it can even be presented in a virtual reality environment, thereby providing an immersive experience (e.g., using tools such as the Oculus Rift) of the qualitative analysis for individual participants, as well as a combined analysis of large groups. This capability is especially important for experiments in post-occupancy analysis of building performance. Our construction of indoor wayfinding experience as a form of moving image analysis centralizes the role and influence of perceptual visuo-spatial characteristics and morphological features of the built environment in the discourse on wayfinding research. We will demonstrate the impact of this work with several case studies, particularly focussing on a large-scale experiment conducted at the New Parkland Hospital in Dallas, Texas, USA.
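The abstract refers to isovists (the region of space visible from a given point) computed from 3D building models. As a rough illustration only, and not the authors' implementation, a 2D isovist for a floor plan can be approximated by casting rays from a viewpoint against the plan's wall segments and keeping the nearest intersection per ray. All function and variable names below are hypothetical, and a full system would derive wall segments from the building's 3D model.

```python
import math

def ray_segment_hit(ox, oy, dx, dy, x1, y1, x2, y2):
    """Distance t along the ray (ox,oy)+t*(dx,dy) to the segment
    (x1,y1)-(x2,y2), or None if there is no forward intersection."""
    ex, ey = x2 - x1, y2 - y1
    denom = dx * ey - dy * ex
    if abs(denom) < 1e-12:
        return None  # ray parallel to segment
    t = ((x1 - ox) * ey - (y1 - oy) * ex) / denom  # distance along ray
    u = ((x1 - ox) * dy - (y1 - oy) * dx) / denom  # position on segment
    if t >= 0 and 0 <= u <= 1:
        return t
    return None

def isovist(ox, oy, walls, n_rays=360, max_dist=100.0):
    """Approximate the isovist polygon at viewpoint (ox, oy) by casting
    n_rays evenly spaced rays and keeping the nearest wall hit per ray."""
    poly = []
    for i in range(n_rays):
        a = 2 * math.pi * i / n_rays
        dx, dy = math.cos(a), math.sin(a)
        t_min = max_dist  # fall back to a visibility horizon
        for w in walls:
            t = ray_segment_hit(ox, oy, dx, dy, *w)
            if t is not None and t < t_min:
                t_min = t
        poly.append((ox + dx * t_min, oy + dy * t_min))
    return poly

# Example: a unit-square room, viewpoint at its centre.
room = [(0, 0, 1, 0), (1, 0, 1, 1), (1, 1, 0, 1), (0, 1, 0, 0)]
pts = isovist(0.5, 0.5, room)
```

Once computed, such a polygon can serve as a first-class visuo-spatial object in the sense described above, e.g., intersected with regions of attention or compared along a route to characterise how visibility changes during wayfinding.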

Published

2016-03-05

How to Cite

Bhatt, M., Suchan, J., Schultz, C., Kondyli, V., & Goyal, S. (2016). Artificial Intelligence for Predictive and Evidence Based Architecture Design. Proceedings of the AAAI Conference on Artificial Intelligence, 30(1). https://doi.org/10.1609/aaai.v30i1.9850