|(…) we make search in our memory for a forgotten idea, just as we rummage our house for a lost object. In both cases we visit what seems to us the probable neighborhood of that which we miss. We turn over the things under which, or within which, or alongside of which, it may possibly be;
and if it lies near them, it soon comes to view.
|William James (1890), The Principles of Psychology, p. 654|
[Copyright neth.de, 2007–2014]:
Steve Payne, Geoff Duggan, Hans Neth (2007). Discretionary task interleaving: Heuristics for time allocation in cognitive foraging. Journal of Experimental Psychology: General.
Abstract: When participants allocated time across 2 tasks (in which they generated as many words as possible from a fixed set of letters), they made frequent switches. This allowed them to allocate more time to the more productive task (i.e., the set of letters from which more words could be generated) even though times between the last word and the switch decision (“giving-up times”) were higher in the less productive task. These findings were reliable across 2 experiments using Scrabble tasks and 1 experiment using word-search puzzles. Switch decisions appeared relatively unaffected by the ease of the competing task or by explicit information about tasks’ potential gain. The authors propose that switch decisions reflected a dual orientation to the experimental tasks. First, there was a sensitivity to continuous rate of return — an information-foraging orientation that produced a tendency to switch in keeping with R. F. Green’s (1984) rule and a tendency to stay longer in more rewarding tasks. Second, there was a tendency to switch tasks after subgoal completion. A model combining these tendencies predicted all the reliable effects in the experimental data.
|There is no reason to suppose that most human beings are
engaged in maximizing anything unless it be unhappiness,
and even this with incomplete success.
|R. H. Coase (1988), The Firm, the Market, and the Law, p. 4|
Hans Neth, Chris Sims, Wayne Gray (2006). Melioration dominates maximization: Stable suboptimal performance despite global feedback. Paper presented at CogSci 2006.
Abstract: Situations that present individuals with a conflict between local and global gains often evoke a behavioral pattern known as melioration — a preference for immediate rewards over higher long-term gains. Using a variant of a binary forced-choice paradigm by Tunney & Shanks (2002), we explored the potential role of global feedback as a means to reduce this bias.
Abstract: Attempts to model complex task environments can serve as benchmarks that enable us to assess the state of cognitive theory and to identify productive topics for future research. Such models must be accompanied by a thorough examination of their fit to overall performance as well as their detailed fit to the microstructure of performance. We provide an example of this approach in our Argus Prime model of a complex simulated radar operator task that combines real-time demands on human cognition, perception, and action with a dynamic decision-making task. The generally good fit of the model to overall performance is a mark of the power of contemporary cognitive theory and architectures of cognition. The multiple failures of the model to capture fine-grained details of performance mark the limits of contemporary theory and signal productive areas for future research.
|For the exogenously extended organizational complex
functioning as an integrated homeostatic system unconsciously,
we propose the term “cyborg”.
|M.E. Clynes and N.S. Kline (1960), Cyborgs and Space (Astronautics, 13)|
Chris Myers, Hans Neth, Mike Schoelles, Wayne Gray (2004). The simBorg approach to modeling a dynamic decision-making task. ICCM 6, CMU, Pittsburgh, USA.
Abstract: The simulated cyborg (or, simBorg) approach blends computational embodied-cognitive models of interactive behavior with artificial-intelligence-based components in a simulated task environment (Gray, Schoelles, & Veksler, 2004). simBorgs combine human and machine components. This combination of high-fidelity cognitive modeling (human) and AI (machine) facilitates the development of families of models that allow the modeler to hold components (memory, vision, etc.) at different levels of expertise without concern for cognitive plausibility. For example, rather than modeling human problem solving, the modeler can rely on various black-box techniques (i.e., cognitively implausible AI), thereby focusing on predicting how subtle differences in costs and benefits in interactive methods affect performance and errors. The current modeling endeavor adopts the simBorg approach in order to build a family of interactive decision-making agents.