DynaLearn — An Intelligent Learning Environment for Learning Conceptual Knowledge

Modeling is regarded as fundamental to human cognition and scientific inquiry (Schwarz and White 2005). It helps learners express and externalize their thinking, visualize and test components of their theories, and make materials more interesting. In particular, the importance of learners constructing conceptual interpretations of system behavior has been pointed out many times (Mettes and Roossink 1981, Elio and Sharf 1990, Ploetzner and Spada 1998, Frederiksen and White 2002). Modeling environments can thus make a significant contribution to the improvement of science education.
A new class of knowledge construction tools is emerging that uses logic-based (symbolic, nonnumeric) representations for expressing conceptual systems knowledge (Forbus et al. 2005, Leelawong and Biswas 2008, Bredeweg et al. 2009). Different from numeric-based tools (Richmond and Peterson 1992, Pratap 2009), these tools employ a qualitative vocabulary (Forbus 2008) for users to construct their explanation of phenomena, notably about systems and how they behave. The use of graphical interfaces has improved usability (Bouwer and Bredeweg 2010), and the tools are becoming more common in education (Forbus et al. 2004, Kinnebrew and Biswas 2011) and professional practice (Bredeweg and Salles 2009).
Articulating thought in computer-based media is a powerful means for humans to develop their understanding of phenomena. We have created DynaLearn, an intelligent learning environment that allows learners to acquire conceptual knowledge by constructing and simulating qualitative models of how systems behave. DynaLearn uses diagrammatic representations for learners to express their ideas. The environment is equipped with semantic technology components that are capable of generating knowledge-based feedback, and with virtual characters that enhance the interaction with learners. Teachers have created course material, and successful evaluation studies have been performed. This article presents an overview of the DynaLearn system.

WINTER 2013 47

The DynaLearn interactive learning environment (ILE) can be regarded as a member of this new class of tools. Its development is directly motivated by specific needs from the educational field (Osborne, Simon, and Collins 2003), which gives DynaLearn its three strategic characteristics (figure 1). First, the workspaces for knowledge construction support learners in expressing and simulating conceptual knowledge in a way that closely matches the true nature of this expertise. Moreover, the representations are organized in a sequence of increasing complexity, which acts as a scaffold for learners to develop their proficiency. Second, learners are given control over their learning activities, providing them with personal autonomy, while the software coaches them individually based on their current progress and learning goals. Third, the ILE is made extra engaging and motivating by using personified agent technology (André 2008) in the interaction with learners.
This article presents the DynaLearn ILE. The notion of conceptual knowledge and modeling is discussed using the idea of learning spaces, a set of workspaces that supports learners in gradually establishing their ability to create, and learn from creating, conceptual models. This is followed by a number of sections describing the support options available in DynaLearn. We then briefly discuss the design of the virtual characters, the repository of community models, and the evaluation studies. The discussion and conclusion end the article.

Conceptual Models
Conceptual models can be defined as models that improve our understanding of systems and their behavior (compare with Grimm 1994, Mylopoulos 1992, Haefner 2005). They can be used as a premathematical modeling step or as stand-alone tools for knowledge capture. Conceptual models come in a wide range of varieties, including word models, pictures, diagrams, matrices, and certain mathematical models. In fact, any representation can be used to express a modeler's beliefs. According to Grimm (1994), conceptual models are useful because they provide a conceptual framework for the modeling effort. They also aid in proposing hypotheses, and they sacrifice details of the system while emphasizing its general characteristics. And, maybe most importantly, they can show the consequences of what we believe to be true, particularly when these representations have a formal status and can be simulated using computer processing.
The formalisms developed in the field of qualitative reasoning (QR) (Forbus 2008) can be considered natural candidates to form the basis for conceptual modeling tools, particularly as knowledge construction tools for learners (Bredeweg and Forbus 2004). Being articulate modeling languages, such formalisms allow for the development of explanatory models, that is, models that capture explanations of how systems behave and why (Forbus and Falkenhainer 1990). Explanatory models are considered important instruments for learners to engage with and learn from (Bredeweg and Winkels 1998). Moreover, QR modeling encourages reflection, which is an important aspect of learning (Eurydice 2006, Hucke and Fischer 2003, Niedderer et al. 2003). The developed models can be simulated, so that modelers can reflect on the implications of the knowledge they articulated. Both conceptual modeling and reflection are important when taking a constructive approach to learning.

Figure 1. The DynaLearn software has three main components: a workspace for creating conceptual knowledge, feedback generators, and virtual characters for reflective interaction. The arrowheads highlight the main flow of information, illustrating the central role of the workspace.
Key ideas characterizing conceptual models based on QR are briefly presented below. In QR-based conceptual models, the quantities that describe the dynamic features of a system hold qualitative information concerning the current value and direction of change, using an interval scale consisting of an ordered set of labels (without any numerical information), for example, {Zero, Small, Medium, Large}. Landmarks are specific point values within this set that refer to situations in which the behavior of the system changes significantly. For instance, a substance reaching its boiling temperature will stop getting hotter and start boiling.
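The idea of a qualitative quantity can be sketched as follows. This is a minimal illustration in Python; the class, labels, and method names are ours for exposition and do not reflect Garp3's actual data structures.

```python
# Minimal sketch (our names, not Garp3's): a qualitative quantity holds a
# magnitude drawn from an ordered quantity space plus a direction of change,
# with no numerical information attached to the labels.
QUANTITY_SPACE = ["Zero", "Small", "Medium", "Large"]  # ordered set of labels
LANDMARKS = {"Zero"}  # point values where system behavior changes significantly

class Quantity:
    def __init__(self, name, magnitude, derivative):
        assert magnitude in QUANTITY_SPACE
        assert derivative in ("decrease", "steady", "increase")
        self.name = name
        self.magnitude = magnitude
        self.derivative = derivative

    def at_landmark(self):
        # e.g. a substance at its boiling temperature stops getting hotter
        return self.magnitude in LANDMARKS

temperature = Quantity("temperature", "Medium", "increase")
```

Note that the ordering of the labels is all the representation knows; "Small" and "Medium" name qualitatively distinct regions, not numbers.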
The simulation results represent the system behavior and how it evolves over time. Time is captured as a graph of states (including loops) in which each state reflects a qualitatively distinct behavior. To represent qualitatively distinct behavior, each state has a unique set of constraints on quantity values (pairs of <magnitude, derivative>, the current value and its change, respectively), such as magnitude X = 0, magnitude in/equality X = Y, derivative ∂X = 0, and derivative in/equality ∂X = ∂Y. Transitions from a state to its successor(s) reflect changes in such sets, for example, X = 0 → X > 0, X = Y → X > Y, ∂X = 0 → ∂X > 0, and ∂X = ∂Y → ∂X > ∂Y (and similarly for second- and third-order derivatives).
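This state representation can be made concrete with a small sketch (our encoding, not the engine's): a state is a set of constraint statements, and a transition is exactly a change in that set.

```python
# Illustrative encoding (ours, not Garp3's): each qualitative state is a set
# of constraints on <magnitude, derivative> pairs; a transition to a successor
# state corresponds to a change in that set, e.g. X = 0 -> X > 0.
s1 = frozenset({"X = 0", "dX > 0"})  # X at the Zero landmark and increasing ...
s2 = frozenset({"X > 0", "dX > 0"})  # ... so X moves off the landmark

def changed_constraints(state, successor):
    """Return the constraints dropped and added between a state and its successor."""
    return state - successor, successor - state

dropped, added = changed_constraints(s1, s2)
```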
Theory development on QR has resulted in a set of dependencies that capture cause-effect relationships between quantities (Forbus 2008). These dependencies are defined such that on the one hand they represent conceptual notions that closely match human reasoning (de Koning 1997), while on the other hand they are grounded in mathematical formalisms allowing automated computation (Kuipers 1994). Two examples of such dependencies are influences (initial changes caused by processes) and proportionalities (propagation of changes) (Forbus 1984).
Meanwhile, QR has developed into an area of artificial intelligence with advanced knowledge representation and automated reasoning. State-of-the-art QR is comprehensive and intricate, which complicates its immediate use by learners (Bredeweg et al. 2007). Below we discuss the idea of learning spaces, as developed within the DynaLearn project, to address this problem and to leverage the potential of QR as an instrument for having learners acquire conceptual models of systems.

Figure 2. The LSs can be traversed in the indicated order, but alternative routes are also possible. Moreover, each LS can be regarded as a workspace by itself and used as a stand-alone instrument for acquiring a specific kind of conceptual knowledge.

Learning Spaces in DynaLearn
A progressive sequence of representations with increasing complexity has been developed, referred to as the learning spaces (LSs), which acts as a scaffold to support learners in developing their ability to create knowledge models and understand systems. One of the key aspects guiding the design is the ability for each representation to highlight qualitatively unique and relevant aspects of system behavior. Six LSs have been established (figure 2) (Bredeweg et al. 2010; Liem, Beek, and Bredeweg 2010). A summary of the main ideas and results follows.
Concept Map (LS1)

The formal context and starting point for developing the LSs is the QR software Garp3 (Bredeweg et al. 2009). LS1 is the smallest set of ingredients that constitutes a meaningful subset of the representation used by this engine (figure 3). Effectively, this subset of modeling ingredients allows for the construction of concept maps (Novak and Gowin 1984), consisting of nodes connected by arcs (referred to as entities and configurations in Garp3, respectively). It is the only space in DynaLearn that has no explicit handles for capturing causal information. Learners are free to express knowledge that they believe to be causal, but such ideas remain in the eye of the beholder; they are not available for automated reasoning. The concept map representation available at LS1 implements a very approachable starting point for knowledge modeling.
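As a rough sketch of what LS1 amounts to as a data structure (our names, not DynaLearn's), a concept map is nothing more than labeled nodes and labeled arcs, with no causal semantics attached:

```python
# Sketch of an LS1 concept map (illustrative only): entities are nodes,
# configurations are labeled arcs. Arc labels are free text and carry no
# formal meaning for the reasoner.
class ConceptMap:
    def __init__(self):
        self.entities = set()
        self.configurations = []  # (source, label, target) triples

    def add_entity(self, name):
        self.entities.add(name)

    def add_configuration(self, source, label, target):
        # both endpoints must already exist as entities
        assert {source, target} <= self.entities
        self.configurations.append((source, label, target))

m = ConceptMap()
for e in ("Producer", "Habitat"):
    m.add_entity(e)
m.add_configuration("Producer", "live in", "Habitat")
```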
Basic Causal Model (LS2)

Defining a higher LS is done by augmenting the current space with the smallest subset of possible modeling ingredients while ensuring that the next level is self-contained. Self-contained implies that the representational primitives present within an LS form a subset (of all the primitives available) such that this set allows for automated reasoning on behalf of the underlying software. After all, learners will be confronted with the logical consequences of their expressions through simulation, which may or may not match the observed system behavior and the learner's expectations thereof. Also important is that learners are able to create meaningful, qualitatively distinct representations of the phenomena they are studying.
The goal of LS2 is to have learners acquire an overall understanding of the cause-effect relationships governing the behavior of a system. In particular, it allows learners to work on the distinction between entities and quantities, and to express dependencies between quantities that carry causal information regarding how changes in the source quantity determine changes in the target quantity. Two such causal dependencies are available: + (positive: the source causes the target quantity to change in the same direction as the source) and − (negative: the source causes the target quantity to change in the opposite direction as the source). A user can also express a direction of change (decrease, steady, or increase) for any of the available quantities (figure 4).
When running the simulation, the tendencies (directions of change) of the yet unknown quantities are calculated based on the known value information and the represented dependencies (figure 5).
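This intrastate calculation can be illustrated with a simplified sketch (our code, not the actual Garp3 engine): each causal link is + or −, and an unknown quantity's direction of change is the combined effect of its known sources, with opposing nonzero effects yielding ambiguity.

```python
# Simplified sketch of LS2-style propagation (ours): a source quantity with
# sign s (+1 or -1) and direction of change c (-1, 0, +1) contributes the
# effect s * c to its target. Opposing nonzero effects are ambiguous.
def propagate(sources):
    """sources: list of (sign, change) pairs on one target quantity.
    Returns -1/0/+1 for a unique direction of change, or 'ambiguous'."""
    effects = {sign * change for sign, change in sources}
    effects.discard(0)  # steady sources impose nothing
    if not effects:
        return 0
    if len(effects) == 1:
        return effects.pop()
    return "ambiguous"

# target population size: producer size (+, increasing), predator size (-, steady)
unique = propagate([(+1, +1), (-1, 0)])
# same food web, but predator size also increasing: opposing effects
unclear = propagate([(+1, +1), (-1, +1)])
```

The second call mirrors the ambiguity case described for the food web: the positive and negative influences pull the target population in opposite directions, so no unique direction of change can be inferred.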
In LS2, the conceptual model as a whole concerns a single state of system behavior. As such, the reasoning can be thought of as an intrastate analysis. The results may include ambiguity (figure 6) and inconsistency (figure 7), following standard QR calculus. The example map (figure 4) concerns a small food web in which a target population benefits from a producer that lives in a habitat, and a predator that feeds on it.

Causal Model with State Graph (LS3)

LS2 supports learners to distinguish between entities (for example, habitat) and quantities (for example, size), express causal information between quantities (for example, changes in the predator size change the target population size in the opposite way), and define initial directions of change (for example, predator size is steady and habitat resources increasing). Also note that entities are related through structural relations, referred to as configurations (for example, producer live in habitat).

Figure 6. With a different initial direction of change for predator size (increase instead of steady as in figure 4), the influences on target population size become ambiguous: it may decrease, stay steady, or increase, as shown by the three inferred value assignments for this quantity.

Figure 7. When a simulation is inconsistent, a red question mark is shown (and no inferred values result). Here the inconsistency is caused by target population size being set to decrease, which conflicts with this quantity increasing as a result of calculating the impact from the two other influencing quantities (see also figure 5).

Figure 8. An expression regarding a food web is shown at LS3 in modeling mode, including the notions of quantity space and value correspondence (V). Target population size can take on values {Extinct, Critical, Low boundary, Sustainable, High boundary, Overpopulation}, as defined by its quantity space named Eclsho, while habitat resources can take on values {Zero, Plus, Maximum}. Initially, these quantities start at Sustainable and Plus, respectively. Habitat resources increase and predator size decreases. The directed value correspondence states that target population size may get the value High boundary when habitat resources reach Maximum.

Figure 10. This LS4 model defines entity mice population with quantities number of (a state variable), and birth and death (both rates). Number of can take on values {Zero, Plus, Boundary, Higher}, as defined by its quantity space Zpbh. Birth and death can take on {Zero, Plus}, as defined by zp. In this initial state, all quantities have been assigned the value Plus (note that by definition each value is unique, and that these Plus values thus denote potentially different intervals). The derivatives are unassigned and thus unknown. Birth has a positive direct influence (I+) on number of, which means that the magnitude of the rate determines the change in number of. Changes in number of have a positive indirect influence (P+) on birth, implementing a feedback mechanism. There is a bidirectional value correspondence (V) between the Zero values of number of and birth. Similar details are specified for death, but the direct influence is negative (I-), since a positive death rate makes number of decrease.

Causal Differentiation (LS4)

The goal of LS4 is to have learners refine the notion of causality. What is the initial cause of change? How is this triggered? How do changes propagate through the system? LS4 refines the notion of causality by distinguishing between proportionality and direct influence (Forbus 1984). The notion of the propagation of changes as used in LS2 and LS3 stays in place, but this is now referred to as a dependency of the type proportionality. Newly added is an additional way in which the initial change may come about, namely using the notion of direct influence. The direct influence allows for specifying that the magnitude of some quantity (for example, a steady flow of water) causes the magnitude of some other quantity to change (to decrease or increase, for example, the amount of water in a bathtub). Also added is the idea of exogenous as opposed to endogenous behavior (Bredeweg, Salles, and Nuttle 2007), and the notion of an agent is used as a representation of the former.
Multiple opposing direct influences on the same target quantity may result in ambiguity, or in a unique change when the relative strength of each flow can be determined. Hence, in/equality reasoning is relevant at this level and may become part of a causal account. Figure 10 shows a small LS4 model. Figure 11 shows its simulation results.
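A simplified sketch of this in/equality reasoning (ours, not the Garp3 implementation): two opposing direct influences, I+ from birth and I- from death, act on number of, and the result is unique only when the relative magnitude of the two rates is known.

```python
# Sketch (our simplification) of resolving opposing direct influences: the
# birth rate exerts I+ and the death rate I- on 'number of'. Only the
# qualitative ordering of the rate magnitudes matters.
def net_change(relation):
    """relation between birth and death rate magnitudes: '>', '=', '<',
    or None when the ordering is unknown."""
    if relation is None:
        return "ambiguous"  # opposing influences, relative strength unknown
    return {">": +1, "=": 0, "<": -1}[relation]

growing = net_change(">")   # birth exceeds death: number of increases
unknown = net_change(None)  # ordering not derivable: all outcomes possible
```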

Conditional Knowledge (LS5)
The goal of LS5 is to have learners acquire a more refined understanding of the conditional aspects of processes and system behavior. Newly added therefore is the idea of conditional knowledge. In LS1 through LS4 the specified knowledge is always true. That is, all facts (except current values) hold in all possible behavioral states of the system. At LS5 this idea is refined, recognizing that some facts may only be true under certain conditions. A condition can be seen as an event and is typically represented as a value assignment or an in/equality statement. When the condition is satisfied, the additional knowledge becomes active and is taken into account. LS5 allows conditions and consequences to be added to a single core representation and can be considered a simplified version of LS6.
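The conditional mechanism can be illustrated as follows (a much-simplified sketch; the names and the dictionary encoding are ours):

```python
# Sketch of LS5 conditional knowledge (illustrative only): a fact becomes
# active only when its condition, here a value assignment, holds in the
# current state of the simulation.
def active_facts(state, conditionals):
    """conditionals: list of ((quantity, value), fact) pairs; a fact is
    active when its quantity has the required value in the state."""
    return [fact for (q, v), fact in conditionals if state.get(q) == v]

state = {"temperature": "Boiling point"}
rules = [(("temperature", "Boiling point"), "boiling process active"),
         (("temperature", "Zero"), "freezing process active")]
facts = active_facts(state, rules)
```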

Generic and Reusable (LS6)
LS6 provides the full range of representation and reasoning as available in Garp3 (Bredeweg et al. 2009).
Learners create scenarios (figure 12) and model fragments (figures 13 and 14). In LS6, simulations are based on scenarios (describing the initial state of the system). The model fragments capture partial domain theories and can be seen as rules (IF [conditions], THEN [consequences]). If a scenario fulfills the conditions of a model fragment, the fragment becomes active, and the ingredients represented as consequences of the fragment are introduced to the state description and used to simulate the system's behavior (figure 15). In fact, LS6 allows for expert-level modeling (Cioaca et al. 2009, Nakova et al. 2009).

The goals of LS6 are to have learners acquire generic knowledge of recurring processes and other patterns in system behavior, and of how that generic knowledge instantiates to particular situations. In the example shown in figures 12-15, three such units occur. There is the notion of a birth and a death process, each applying twice to the scenario, and there is the notion of commensalism, also applying twice (here represented as a population benefiting from a resource).
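The rule-like reuse of model fragments can be sketched as follows (heavily simplified and with hypothetical names; real MFs match structural conditions over configurations and quantities, not merely entity types):

```python
# Sketch of LS6 model-fragment instantiation (our simplification): a fragment
# is an IF/THEN rule over entity types, and it activates once per matching
# individual in the scenario.
def activate(fragments, scenario):
    """fragments: {fragment_name: required_type};
    scenario: {individual: type}. Returns all (fragment, individual)
    activations."""
    return [(name, individual)
            for name, required in fragments.items()
            for individual, typ in scenario.items() if typ == required]

scenario = {"producer": "population", "beneficiary": "population",
            "habitat": "habitat"}
fragments = {"birth process": "population", "death process": "population"}
activations = activate(fragments, scenario)
```

With two populations in the scenario, the birth and death processes each fire twice, mirroring the "each applying twice" observation in the text.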

Support and Interaction
The LSs discussed above leave considerable freedom regarding their use in educational settings. For instance, a teacher can decide to give a buggy LS4 model to a class of learners with the assignment to study the related domain knowledge with the goal of repairing the buggy model. To support these different modes of interaction, and also to enable learners to work autonomously, a set of supporting instruments has been developed. The global functions available in the DynaLearn ILE are illustrated in figure 16 (a screenshot of DynaLearn in action is shown in figure 19). At the kernel of the environment is a QR problem solver (Garp3) for constructing and simulating conceptual knowledge. The LSs are situated on top of this engine. In addition, the following support options are available: basic help (essentially elementary help on how to use the software), recommendation (feedback derived from comparing the learner-built model to the models created by the community and stored in the repository), and diagnosis (feedback on simulation results, particularly helping learners maintain consistency between their models and the expectations they hold regarding the inferences that can be made on behalf of those models). Moreover, the DynaLearn ILE is made extra engaging and motivating by the design and use of virtual characters. The support and interaction are further discussed in the next sections.

Figure 12. Scenario in LS6. This figure shows a scenario at LS6, detailing three individuals: habitat, producer, and beneficiary. The latter two are of type population. The beneficiary benefits from the producer, which in turn benefits from the habitat. Size is the state variable for each of the individuals, and initially they all start at value High. The populations also have a birth and a death rate, all starting at value Plus, and the rates start as being equal for both populations. The habitat size has been set to decrease (not shown in this diagram). Given this situation, how will the system behavior evolve, and why? What are the mechanisms (such as processes) that explain that behavior? See figures 13, 14, and 15 for answers to these questions.

Figure 13. This figure (together with figure 14) details the knowledge used to simulate and explain the behavior of the system shown in figure 12. Left top shows the entity is-a hierarchy. Individuals in the scenario are instantiated from these types. Similarly, model fragments (MFs) use these types to specify the structural details (conditions) under which they are applicable. As such, MFs are mapped onto a scenario using the entity hierarchy as an intermediate. The model shown here has five MFs, organized in an is-a hierarchy, each MF being of type static, process, or agent. The MF population introduces the state variable size. The MF biomass is a subtype of MF population and introduces the quantity Biomass. Biomass fully corresponds to size, as defined by the quantity correspondence (Q) and the proportionality (P+). The MF population is conditional for the birth and death processes to apply. When applicable these processes introduce their rates and feedback details (also shown and discussed in figure 10). Notice that the four MFs discussed here (c, d, e, and f) each apply twice to the scenario from figure 12.

Support by Basic Help
The LSs are accompanied by a set of components to aid learners. The basic help component helps learners use the DynaLearn software (Beek, Bredeweg, and Latour 2011). It requires little foreknowledge regarding conceptual modeling or system dynamics thinking, and pertains to those aspects of the software that are visible to learners and that they can directly interact with. Three types of basic help have been addressed.

A conceptual model in DynaLearn consists of domain-specific assertions embedded in the generic modeling language vocabulary. Each expression created by a learner is a subtype or a refinement of the latter. The What is? help describes occurrences of domain-specific assertions in terms of their context (other assertions) and generic embedding (modeling vocabulary). The How to? help explains how to perform tasks within the LSs. Only tasks that can be performed given the current state of the ILE are communicated (figure 17). The Why? help gives information about the simulation results, including details such as those listed in the caption text of figure 15.

Model Improving Recommendations
DynaLearn has two components that provide advanced feedback on models created by learners: recommendation and diagnosis (figure 16). Recommendation concerns the ingredients that constitute a particular model, and does not involve simulation. It allows learners to compare their model to a repository of models and to get feedback on how their model differs from the models created by the community (teachers and experts) (Gracia et al. 2010). The recommendation component consists of three parts: (1) grounding of model terminology into well-defined vocabularies; (2) storage of conceptual models in a centralized web-based repository; and (3) ontology-based feedback on the quality of the model, based on knowledge extracted from recommended models.
The overall approach is illustrated in figure 18. After creating a model (or a partial model), a learner can call for recommendations. The first step is to ground the terms used in that model in an external vocabulary. For this, the software automatically generates a ranked list of the most similar terms, from the external vocabulary, for each of the terms used in the model. This list is presented to the learner, who has to decide upon the most appropriate match and select it. Models that are created by the community, and grounded in this way, are stored in a semantic repository. For each newly added model, an analysis is made regarding how it relates to the models already stored. And, because all models are grounded (by their creators) using the same external vocabulary, the repository models can be ranked automatically according to their similarity with this new model. The most similar models are considered most relevant and are used to provide recommendations, essentially by listing the differences between these most similar models and the model created by the learner.
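The ranking step can be illustrated with a simplified sketch. The similarity metric here is a plain Jaccard overlap over grounded terms, chosen purely for illustration; DynaLearn's actual ontology-based measure is more sophisticated.

```python
# Sketch of the recommendation idea (our metric, not DynaLearn's): once all
# models are grounded in the same vocabulary, repository models can be ranked
# by term overlap with the learner's model, and the differences against the
# best match become suggestions.
def rank_by_similarity(learner_terms, repository):
    """repository: {model_name: set of grounded terms}. Ranks repository
    models by Jaccard overlap with the learner's grounded terms."""
    def jaccard(a, b):
        return len(a & b) / len(a | b)
    return sorted(repository,
                  key=lambda m: jaccard(learner_terms, repository[m]),
                  reverse=True)

learner = {"population", "habitat", "size"}
repo = {"food web": {"population", "habitat", "size", "birth"},
        "bathtub": {"water", "flow", "amount"}}
best = rank_by_similarity(learner, repo)[0]
suggestions = repo[best] - learner  # terms the learner's model might be missing
```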
The semantic repository is based on the MySQL 1 database and the Jena 2 semantic platform. It has been developed as a web service, which is called from the DynaLearn ILE. The service includes storage, retrieval, classification, grounding, and so on, of conceptual models. Before a model is stored, it is transformed into OWL (Grau et al. 2008) to allow for processing on behalf of the repository. A simple user management system has been developed as a separate web application to enable the creation and handling of users. There are five user roles: guest, learner, teacher, domain expert, and administrator, each with its own rights and privileges.

Figure 14. The MF resource implements a balance between the size of a biological entity (B_SIZE) and a population (P_SIZE) that benefits from this entity, and the birth (P_BIRTH) and death (P_DEATH) processes of this population, including: (B_SIZE − P_SIZE) = (P_BIRTH − P_DEATH). The balance causes the size of the population, depending on the resource, to move toward the carrying capacity supported by that resource. In summary, the biological entity acts as a resource. If the size of this resource is higher than the size of the benefiting population, that population will grow. When smaller, the population will shrink. Notice that during simulation the MF resource details intertwine with the details introduced by the birth and death processes (figure 13). The MF resource applies to both populations shown in figure 12.
By grounding a model, DynaLearn is able to bridge the gap between the loose and imprecise terminology used by a learner and the well-defined semantics of an ontology. We decided to use DBpedia 3 as the primary source of background knowledge for DynaLearn (Gracia et al. 2010). In case there is no suitable DBpedia term that represents the intended user meaning, the definition of a new term is allowed (this option is only available to teachers and experts). Such anchor terms are stored in an external ontology (the anchor ontology) in addition to DBpedia. Finally, multilingual grounding has been enabled to support the grounding of terms written in languages other than English, including Spanish, Portuguese, German, and Hebrew.
As the repository of models grows, more related models will be available to fine-tune the feedback for a specific learner. However, to address specific requirements found during evaluation sessions, it has also been made possible for teachers to create a virtual class, upload one or more specific models into this class, and have learners receive feedback based on the content of the models stored in this class only.

Diagnosing Simulation Results
When a learner simulates a model, the results may turn out different from what he or she expected. This is when learners can call upon the diagnostic component and receive support in determining the cause of this discrepancy. The simulation now becomes an instrument that helps learners reflect on their knowledge and work toward improving it. In principle, there are two ways to align simulation results with expectations thereof: learners can change their model, or they can change their expectations. Both are relevant for learning, and they often interact.

The diagnostic component complements the recommendation component. Instead of focusing on the ingredients constituting a model, it addresses the simulation results of a model. The diagnostic component takes the discrepancies between the actual and the learner-expected simulation results as input. By asking additional questions, it identifies those ingredients of the learner-built model that are accountable for these differences. The goal is to support the learner's knowledge construction endeavor by maintaining consistency between the expression created by the learner and the expectations he or she holds regarding the inferences that can be made on behalf of that expression.

The diagnostic component is derived from well-established model-based diagnostic technology (de Kleer and Williams 1987). An algorithm has been created based on the characteristics of the representation used for conceptual models (de Koning et al. 2000), and adjusted to include reasoning with the expectations that learners express (Beek and Bredeweg 2012). Following this approach, an expectation-based cognitive diagnosis has been established and integrated within the DynaLearn ILE. Learners can build models, run simulations, and express expectations. The diagnostic component provides automated feedback based on the discrepancy between simulation results and expectations, in terms of ingredients in the built model. In this way, the learner is pointed toward those parts of his or her model that cannot be right given the expectations he or she expressed.
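A much-reduced sketch of the idea (ours; real model-based diagnosis computes conflict sets and minimal candidates far more carefully, compare de Kleer and Williams 1987): given which simulated results deviate from the learner's expectations, suspect exactly those model ingredients that contributed to a deviating result.

```python
# Toy sketch of expectation-based diagnosis (our simplification): an
# ingredient is suspect if it contributed to any result that deviates from
# the learner's stated expectations.
def suspects(ingredient_effects, deviations):
    """ingredient_effects: {ingredient: set of result names it affects};
    deviations: set of result names where simulation != expectation."""
    return {ing for ing, results in ingredient_effects.items()
            if results & deviations}

effects = {"I+ birth -> number of": {"number of"},
           "P+ number of -> birth": {"birth", "number of"},
           "I- death -> number of": {"number of"}}
blamed = suspects(effects, {"birth"})  # only the derivative of birth deviated
```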

Special Modes of Interaction
DynaLearn has two specialized interactive modes, the teachable agent (TA) and the quiz. The TA works on top of LS2 and implements a specific setting in which learners "learn by teaching" (Leelawong and Biswas 2008). By creating, testing, correcting, and refining the TA's understanding of the subject matter, the learner effectively constructs his or her own knowledge of the subject matter.

Figure 16. In the center, some of the ingredients are shown that can be manipulated by learners, with on the left-hand side the build and on the right-hand side the simulate environment (for example, as shown in figures 10 and 11, respectively), and the six learning spaces as a scaffold on top of this (left-hand side top) (also shown in figure 2). Users can upload models to the repository. Three components generate feedback: basic help, recommendation, and diagnosis. The feedback is communicated to learners through a cast of virtual characters. Finally, two specific modes of interaction are available as stand-alone learning environments: the teachable agent and the quiz.

Figure 17. The How to? help options are listed for the model fragment shown on the left-hand side. The example shows a learner requesting how to add an inequality relation (for example, to state that costs > revenue). The subtasks required for performing this action are shown on the right-hand side and presented to the learner when selecting a particular task from the pull-down menu.

Figure 18. After building a model (Learner model), the learner grounds the model ingredients using an online ontology. The grounded model is compared with other grounded models from the repository. The most relevant models are used to generate feedback. The suggestions are presented as a list (and commented on by the virtual characters, if asked for by the learner).
The quiz can be used in all spaces except LS1. It is based on QUAGS (Goddijn, Bouwer, and Bredeweg 2003), reorganized into a component that automatically generates multiple-choice questions for arbitrary QR models. Learners can test their understanding of a given model (or be assessed on it) by taking a quiz with the quizmaster. The quizmaster selects questions using four heuristics: (1) the model ingredients they are about (those not well understood by the learner are preferred); (2) whether they were asked before (those new to the learner are preferred); (3) the question type (those that were asked least are preferred); and (4) the question difficulty (each session starts with an easy question, difficulty is adjusted based on the learner's performance, and questions of the appropriate difficulty are preferred). The quiz can be used in stand-alone mode, but it can also be integrated with other interactive modes, such as the TA mentioned above.
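The four heuristics might be combined roughly as follows. This is an illustrative scoring scheme of our own devising; QUAGS's actual selection logic and weights may differ.

```python
# Hypothetical scoring of quiz questions against the four heuristics
# (weights and field names are ours, not QUAGS's).
def score(q, learner):
    s = 0
    s += 2 if q["ingredient"] in learner["weak_ingredients"] else 0  # heuristic 1
    s += 2 if q["id"] not in learner["asked"] else 0                 # heuristic 2
    s += 1 if learner["type_counts"].get(q["type"], 0) == 0 else 0   # heuristic 3
    s -= abs(q["difficulty"] - learner["level"])                     # heuristic 4
    return s

def pick(questions, learner):
    return max(questions, key=lambda q: score(q, learner))

learner = {"weak_ingredients": {"I+"}, "asked": {"q1"},
           "type_counts": {"cause": 3}, "level": 1}
questions = [{"id": "q1", "ingredient": "I+", "type": "cause", "difficulty": 1},
             {"id": "q2", "ingredient": "I+", "type": "value", "difficulty": 1},
             {"id": "q3", "ingredient": "P+", "type": "cause", "difficulty": 3}]
chosen = pick(questions, learner)
```

Here q2 wins: it targets a weak ingredient, is new to the learner, uses a question type not yet asked, and matches the current difficulty level.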

Virtual Characters
A large part of the communicative interaction with the DynaLearn ILE happens through virtual characters, although learners (and teachers) can decide not to use the characters (except in the TA mode, which requires them). Given the range of possible interactions the DynaLearn ILE offers to a learner, the characters must show a similar diversity (through their design and the implementation of their behavior).
Animal characters are expected to have lower communication skills than humanlike characters, allowing learners to more willingly forgive technical imperfections (Mehlmann et al. 2010). For these reasons, it was decided to create a set of cartoonish hamster characters with unique roles and personalities. The basic help, recommendation, diagnosis, teachable agent, and quiz interactions were each assigned characters based on character roles and personality (for example, critical, supportive, inquisitive). Keeping the target audience of students in mind, a schoolyard-like scenario was conceived, but not strictly enforced. Besides the student characters, which reflect the situation of the actual learners, a teacher character assists and advises the learner during application usage (mainly basic help). The quizmaster character presents the playful part of the quiz, the critic brings the recommendations, and the mechanic character handles the diagnosis (figure 20).
Figure 21 illustrates the process by which DynaLearn generates dialogue. The process starts with an input from a modeling workspace. Let us assume that a learner just sent the TA to take the quiz. As a result, the conceptual knowledge submitted in this case consists of questions and answers generated by QUAGS based on the reference model, but it will also include answers to these same questions generated from the learner-built model. First, the dialogue content needs to be decided. This can be based on previous actions performed by the learner or the characters (derived from the interaction history and learner model), as well as certain pedagogical strategies. Next, the content is assigned to the different characters involved. In this case, the quizmaster character will ask the questions, while the TA will present the answers (generated by the learner-built model). Also, the quizmaster will comment on the TA's success and the TA will show a reaction to that. Next, the dialogue turns are verbalized using an appropriate template. If there is more than one matching template, one of them is chosen randomly. Finally, nonverbal behavior is selected to accompany the dialogues. The characters can move around the screen, perform gestures and facial animations, and point out spaces on the screen. In the example, after each question, the quizmaster will perform either a thumbs-up gesture or shake his head depending on the TA's success, and the TA will perform a cheering or sulking gesture accordingly. Based on the decisions made by the software, the scene script is constructed (in XML). Next, the content of say-tags is extracted and the speech is created accordingly, using the assigned character's voice. Together with the appropriate data from the animation library, the complete dialogue can then be presented to the learner. For further details, see Wissner et al. (2012).
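The final step — a scene script in XML whose say-tags carry the text to be spoken — can be sketched as follows. The element and attribute names form a hypothetical schema, since the article specifies only that the script is XML and contains say-tags:

```python
import xml.etree.ElementTree as ET

def build_scene_script(question, answer, correct):
    """Assemble a minimal scene script for one quiz turn: the quizmaster
    asks, the TA answers, and both react nonverbally depending on success."""
    scene = ET.Element("scene")
    ask = ET.SubElement(scene, "turn", character="quizmaster")
    ET.SubElement(ask, "say").text = question
    # Nonverbal behavior is selected per turn, as described in the text.
    ET.SubElement(ask, "gesture", name="thumbs_up" if correct else "head_shake")
    reply = ET.SubElement(scene, "turn", character="ta")
    ET.SubElement(reply, "say").text = answer
    ET.SubElement(reply, "gesture", name="cheer" if correct else "sulk")
    return ET.tostring(scene, encoding="unicode")
```

In the real pipeline, the say-tag contents would then be fed to the speech synthesizer with the assigned character's voice, and the gesture names resolved against the animation library.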

Content Models
Content models populate the semantic repository. They allow for semantic feedback and can be used in the TA and quiz interactions. For this, a set of environmental science topics was selected, focusing on seven main themes: Earth systems and resources, the living world, human population, land and water use, energy resources and consumption, pollution, and global changes. This selection was based on EU directives for science education (Eurydice 2006) and on a survey of existing environmental science curricula in secondary schools and undergraduate courses from the national educational systems of the DynaLearn project partners (Salles et al. 2009). Ultimately, the domain knowledge of 65 topics within the seven themes was explored, covered in 210 expert-built models, and stored in the semantic repository. The models themselves are described in a set of technical reports; for a recent review, see Salles et al. (2012).

Evaluation Studies with DynaLearn
Students have been using the DynaLearn ILE in multiple educational settings (high school, undergraduate, graduate, and Ph.D.) and in many countries, including Austria, Brazil, Bulgaria, Israel, the Netherlands, and the United Kingdom. Most of these uses were part of evaluations researching the educational benefits of teaching with DynaLearn (in total, 49 evaluation activities involving 736 participants).
To illustrate the kind of studies conducted, consider the following examples. One study concerned an enterprise spanning multiple days in which two learners developed models of phenomena following explanatory-seeking assignments (Zitek et al. in press). Learners started with LS1 and progressed up to LS4. The main focus of the study was to investigate conceptual changes in the learners. The evaluation results showed significant changes in knowledge structure and content reflecting the learners' understanding of the subject matter, supporting the hypothesis that building conceptual models helps learners grow from initial (personal) understanding toward more scientific understanding.
In another study, the role of semantic-based feedback (recommendations) was investigated in the context of a problem-based learning setting (Lozano et al. 2012). After creating their initial (solution) models, learners grounded the terms occurring in their models and then obtained feedback generated by the learning environment. The study showed how this approach helps learners bridge the gap between their initial model, as their first attempt to represent the system, and the target models that provide expert solutions to the problem.
An exhaustive review of the evaluation studies carried out within the DynaLearn project can be found in Mioduser et al. (2012). In summary, the evaluations show that throughout the modeling sessions, the students' system thinking improved considerably. Aspects of system thinking that improved include gaining a systemic view of systems, identifying structure and behavior, distinguishing different kinds of causal relations (notably, influences and proportionalities), and understanding causal patterns such as chains and feedback loops.
Most of these evaluation studies also measured motivational aspects. The results indicate that learning by modeling with DynaLearn is considered motivating. Many students mentioned the possibility (and in some cases the desire) of applying conceptual models in other science courses. However, this result appears independent of whether the virtual characters are used. In fact, the added value of this component remains inconclusive.

Discussion
Each of the learning spaces (LSs) in DynaLearn acts as a stand-alone unit. Models created in one space cannot be loaded into any of the other LSs. Future research could focus on supporting the transition from an expression at one LS to an expression at another.
User modeling in DynaLearn, in the sense of capturing the learner's progress in learning, is only implemented for the quiz mode. For all other interactions, the feedback is generated based on the current status of the model and the kind of feedback invoked by the learner. For instance, if the recommendation is called twice in a row without changing the model, the feedback will be the same for both calls. Although this is in principle correct, it would be interesting to investigate whether a learner model (including the dialogue history) could be used to influence and modify the interaction in a sensible way, and have a positive impact on the learner's knowledge-construction progress.
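A minimal sketch of the behavior described here — feedback as a pure function of the current model state, with a history-based variation bolted on — might look like this. All names are illustrative, and the repeated-request extension is a hypothetical illustration of the research direction, not a DynaLearn feature:

```python
import hashlib

class FeedbackGenerator:
    """Illustrative only: feedback depends solely on the model state, so
    repeated calls without edits return identical text; consulting the
    interaction history allows the response to vary on repeats."""

    def __init__(self):
        self.history = []  # digests of model states already commented on

    def recommend(self, model_text):
        digest = hashlib.sha256(model_text.encode()).hexdigest()
        feedback = f"suggestions for model state {digest[:8]}"  # stands in for the real analysis
        repeats = self.history.count(digest)
        self.history.append(digest)
        if repeats:  # hypothetical extension: vary the response on repeated requests
            return feedback + f" (requested {repeats + 1} times; consider editing first)"
        return feedback
```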
Feedback in DynaLearn depends on a learner asking for it. The idea was to establish a learning environment that is unobtrusive; hence, the learner is in control and decides when help is required. But evaluation studies show that learners may not always ask for support, even when that would be the better option at some point in the learning process. Future research could focus on finding a proper balance such that the learning environment automatically intervenes when needed and appropriate, and otherwise waits until called for.
DynaLearn has focused mainly on environmental science. Although this was according to plan, the DynaLearn approach is expected to be applicable to all areas of science that take a systems perspective on their subject matter. But moving into a new area is not without costs: it requires a repository of models to be created for that domain in order for the recommendation functionality to work, as well as other course materials.

Conclusion
The DynaLearn project has established an intelligent learning environment (ILE) that allows learners to work with conceptual knowledge using a representation that closely fits the true nature of that expertise. The complexity of the underlying qualitative reasoning vocabulary has been successfully overcome by establishing a set of learning spaces that scaffolds learners in gradually building up their abilities, both in learning qualitative system dynamics in general and in learning domain knowledge specifically. Additional instruments have been created that support learners in their knowledge-construction effort, including (1) a procedure that automatically generates relevant multiple-choice questions for any model expressed in a DynaLearn workspace, (2) a diagnostic component that aids learners in creating a model that is consistent with the expectations the learner has for the system behavior, and (3) a recommendation component that allows learners to obtain feedback regarding how their models differ from related models created by the community and stored in a repository (and a significant set of conceptual models has already been created and stored in the repository).
An encompassing set of studies has been undertaken investigating the DynaLearn ILE. Overall, these studies show the great potential of the DynaLearn approach.
Bert Bredeweg is an associate professor at the Informatics Institute within the University of Amsterdam (UvA), Netherlands, and coordinator of the DynaLearn project. His research focuses on interactive learning environments that enable learners to explore and acquire conceptual knowledge.
Jochem Liem is a postdoc researcher at the University of Amsterdam (UvA), Netherlands, and technical coordinator for the DynaLearn project. His research focuses on OWL-based interoperability between functional units of knowledge systems.
Wouter Beek is a Ph.D. student at the Free University of Amsterdam (VU), Netherlands. His job is to invent a new semantic paradigm for interpreting existing semantic web data, taking the contradictions, ambiguities, and context dependencies into account.
Floris Linnebank worked as a programmer on the DynaLearn project on behalf of the University of Amsterdam (UvA), Netherlands, addressing the underlying qualitative reasoning engine (Garp3).
Jorge Gracia is a researcher at Universidad Politécnica de Madrid (UPM), Spain, within the Ontology Engineering Group (OEG), participating in leading research projects on semantics and knowledge engineering.
Esther Lozano is a Ph.D. student at Universidad Politécnica de Madrid (UPM), Spain, within the Ontology Engineering Group (OEG). Her research activities focus on semantic web and ontological engineering.
Michael Wißner is a researcher at the University of Augsburg (UAU), Germany, within the Human-Centered Multimedia group, focusing on pedagogical agents in interactive (learning) environments and natural language dialog generation.
René Bühling is a Ph.D. student at the University of Augsburg (UAU), Germany, within the Human-Centered Multimedia group, addressing entertainment computing with a focus on virtual characters, interactive cinematography, and visual dramaturgy.
Paulo Salles is an associate professor at the Institute of Biological Sciences within the University of Brasília (UNB), Brazil, focusing on qualitative reasoning. He develops models of issues related to sustainable development, water resources, and the ecology of populations and communities.
Richard Noble is a researcher at the Hull International Fisheries Institute within the University of Hull (UK), with a diverse interest in the field of fish ecology and environmental management.He is also interested in the role conceptual modeling can play in the enhancement of environmental science education.
Andreas Zitek is a senior researcher at the University of Natural Resources and Life Sciences (BOKU), Austria. One focus of his work is the assessment and evaluation of fish migrations in rivers. Another is modeling, particularly the application of qualitative reasoning in research, education, and management.
Petya Borisova is a Ph.D. student at the Institute of Biodiversity and Ecosystem Research at the Bulgarian Academy of Sciences (IBER), Bulgaria, focusing on ecological modeling.
David Mioduser is a professor of science and technology education at Tel Aviv University (TAU), Israel. His interests include learners' perception of and ability to understand, interact with, and design the artificial world, with a focus on young children's thinking and learning processes.

Figure 1. DynaLearn Software Components.

Figure 2. Schematic Overview of the Learning Spaces in the DynaLearn ILE.
Figure 3. A Concept Map Created with LS1 in DynaLearn.

Figure 5. LS2 in Simulation Mode. LS2 is shown in simulation mode following the expression from figure 4. Habitat resources cause producer size to increase, which in turn causes target population size to increase. Predator size is steady and thus has no effect on this.

Figure 9. Simulation Results at LS3, Following the Details Given in Figure 8. The state graph (bottom left) shows four states (black circles with numbered identifiers) and a reference to the initial starting situation (the unnumbered black circle). Also shown are the value histories for each quantity. For instance, in state 2 the target population size has the value High boundary (denoted by the small arrow) and is increasing (arrow pointing up); in state 4 this quantity has reached the value Overpopulation and is still increasing. The arrows between states indicate state transitions. Multiple state transitions originating from the same state indicate ambiguity, reflecting alternative possibilities. There are two behavior paths, [1 → 2 → 4] and [1 → 3 → 2 → 4]. The ambiguity from state 1 is about whether target population size and habitat resources simultaneously move to the next higher point value (as denoted by [1 → 2]), or whether this change happens for habitat resources first and only later for target population size (as denoted by [1 → 3 → 2]).
Figure 11. Simulation Results of Figure 10. Simulation results are shown, based on the model shown in figure 10, using the state graph (top left) and the value history (bottom left and right). The state graph has seven states. Each state reflects a qualitatively distinct behavior the system can manifest. Each sequence of states reflects a possible sequence of behaviors of the system. There are four such behavior paths that the system may manifest. The value history shows the quantities, their possible values, their actual value, and their direction of change in each state. For instance, the quantity number of in state 1 has value Plus and is decreasing. The four behavior paths reflect the typical behaviors of a population when only birth and death are known, namely: balance (paths [2] and [3 → 4]), growth (path [3 → 5 → 7]), or extinction (path [1 → 6]).

Figure 13. The Knowledge Used to Simulate and Explain the System Behavior of Figure 12.

Figure 15. Simulation Results from Figure 12. This figure shows part of the simulation results for the scenario from figure 12. The state graph has 10 states, with no branching (no ambiguity). Habitat size is set to decrease, and in state 1 it has value High and decreases. This causes the birth rate of the producer to decrease, and state 1 to change into state 2, in which birth = death becomes birth < death. This causes the producer size to decrease, which in turn causes the birth rate of the beneficiary to decrease, and state 2 to transit into state 3, in which birth = death becomes birth < death (for the beneficiary). These inequalities stay in place while the three state variables (the sizes) one after the other decrease and change value from High through Boundary to Low: resource size: [3 → 4 → 5], producer size: [5 → 6 → 7], and beneficiary size: [7 → 8 → 9]. Finally, all quantities move to 0 (Zero) in state 10. In summary, when the habitat size decreases to 0, the populations depending on it directly (producer) and indirectly (beneficiary) become extinct. See Bredeweg et al. (2009) for additional information on the knowledge representation and reasoning used here.

Figure 16. Functional Overview of the DynaLearn ILE.
Figure 18. The Recommendation Process.

Figure 19. The Learner Teaches the TA by Creating a Model. DynaLearn TA mode using LS2, with a learner-made model (left-hand side) and two interacting virtual characters (right-hand side). The rightmost peer (named Tobi), representing the TA, is taking the quiz, and the quizmaster (to the left of Tobi, named Harry) is asking the questions.