My approach to affordances is that they are dynamical properties of tasks, which means that in order to study them, I need to be able to characterise my task dynamics in great detail. I developed an analysis (Wilson et al., 2016) to do this, and I also have a hunch this analysis will fit perfectly with motor abundance analyses like UCM (Wilson, Zhu & Bingham, in press). I have recently discovered that another research group (led by Dagmar Sternad) has been doing this whole package for a few years, which is exciting news. Here I just want to briefly summarise the analysis and what the future might hold for this work.
Affordance Maps (me)
In order to hit a target by throwing, a person must produce one of a specific set of release parameters, a set that changes as the target details change. This set is actually a subset of a broader release parameter space, where the rest of the space describes release parameter combinations that lead to misses. The 'hit' subset contains more than one solution (there is redundancy in the task demands), and different regions of the subset are more tolerant of error than others.

I produced these maps by simulating the dynamics of projectile motion across the full space of release parameters and colour-coding the result of each throw. I did this because I wanted to quantify the demands these task dynamics impose, and I identified those demands as the affordance of the target to be hit by throwing. I now call these affordance maps. This is part of my empirical work arguing that affordances are best understood as dynamical, dispositional properties of tasks.
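To make the idea concrete, here is a minimal sketch of how a map like this can be computed. This is not the Wilson et al. (2016) simulation: the projectile model ignores air resistance, and the release height, target distance and target size are made-up values. The logic is the same, though: sweep the release parameters, simulate each throw, and code it as a hit or a miss.

```python
import numpy as np

# Toy affordance map: simulate ideal projectile motion (no air resistance)
# over a grid of release parameters and code each throw as hit/miss.
# The release height, target distance and target size below are made-up
# values for illustration, not the ones from Wilson et al. (2016).

G = 9.81               # gravitational acceleration (m/s^2)
RELEASE_HEIGHT = 1.8   # release height above the ground (m)
TARGET_DISTANCE = 9.0  # horizontal distance to the target (m)
TARGET_CENTRE = 1.6    # height of the target centre (m)
TARGET_RADIUS = 0.2    # target radius (m)

def height_at_target(angle_deg, speed):
    """Ball height when it crosses the target plane (np.nan if it never gets there)."""
    angle = np.radians(angle_deg)
    vx = speed * np.cos(angle)
    vy = speed * np.sin(angle)
    if vx <= 0:
        return np.nan
    t = TARGET_DISTANCE / vx                      # time to reach the target plane
    y = RELEASE_HEIGHT + vy * t - 0.5 * G * t**2  # height at that moment
    return y if y > 0 else np.nan                 # nan = hit the ground first

# Sweep the release-parameter space: angle x speed
angles = np.linspace(0, 60, 121)    # degrees
speeds = np.linspace(5, 20, 151)    # m/s
hit_map = np.zeros((len(speeds), len(angles)), dtype=bool)

for i, v in enumerate(speeds):
    for j, a in enumerate(angles):
        y = height_at_target(a, v)
        hit_map[i, j] = np.isfinite(y) and abs(y - TARGET_CENTRE) <= TARGET_RADIUS

# hit_map is the 'affordance map': True cells are the redundant subset of
# release parameters that produce a hit, False cells are misses.
print(f"{hit_map.mean():.1%} of the sampled release space produces a hit")
```

Changing the target distance or size reshapes the True region, which is exactly the sense in which the 'hit' subset changes as the target details change.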
When I got interested in the various 'motor abundance' analysis methods (like UCM, optimal feedback control, nonlinear covariation and GEM), I realised that my affordance maps might be a natural fit for these techniques. Each of these effectively partitions movement variability into 'bad' variance (which takes you away from achieving the goal) and 'good' variance (which is just you moving around within the subset of the space that still produces the outcome). Good variance is left to accumulate; bad variance must be detected and controlled away. I realised that a) my affordance maps define the goal subspace each method needs, and b) they do so in a way that might lead to explaining how people perceive the goal subspace and can thus work to control their actions appropriately. I sketched out this hypothesis in an upcoming book chapter, and I have a 90-gig motion capture data set from last Easter waiting for me to try this out on.
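To show what that partitioning looks like in practice, here is a generic UCM-style sketch (not the analysis from Wilson, Zhu & Bingham): linearise a task function around the mean release, treat its null space as the 'good' direction and the orthogonal direction as 'bad', then compare the variance of the trial-to-trial deviations along each. The task function and the simulated trials below are made up purely for illustration.

```python
import numpy as np

# Generic UCM-style partition of release variability into 'good' variance
# (along the solution manifold, leaves the outcome unchanged) and 'bad'
# variance (orthogonal to it, changes the outcome). A sketch of the standard
# recipe only, using a made-up task function and simulated trials.

G = 9.81
TARGET = 9.0  # m
rng = np.random.default_rng(1)

def outcome(params):
    """Toy task function: landing distance of a ground-level projectile,
    minus the target distance (0 = perfect throw)."""
    angle_deg, speed = params
    return (speed ** 2) * np.sin(2 * np.radians(angle_deg)) / G - TARGET

# Simulated trial-to-trial release parameters (angle in deg, speed in m/s)
n_trials = 200
mean_release = np.array([35.0, 9.7])
trials = mean_release + rng.normal(scale=[2.0, 0.4], size=(n_trials, 2))

def jacobian(params, eps=1e-5):
    """Numerically linearise the task function at the given release parameters."""
    grads = []
    for k in range(len(params)):
        step = np.zeros_like(params)
        step[k] = eps
        grads.append((outcome(params + step) - outcome(params - step)) / (2 * eps))
    return np.array(grads)  # shape (2,)

J = jacobian(mean_release)
bad_dir = J / np.linalg.norm(J)                 # direction that changes the outcome
good_dir = np.array([-bad_dir[1], bad_dir[0]])  # its null space (1D in this 2D case)

# Project each trial's deviation from the mean onto the two directions
dev = trials - trials.mean(axis=0)
v_good = np.var(dev @ good_dir)  # variance within the solution manifold
v_bad = np.var(dev @ bad_dir)    # variance that pushes the throw off target

print(f"good (UCM) variance: {v_good:.3f}, bad (orthogonal) variance: {v_bad:.3f}")
print(f"good/bad ratio: {v_good / v_bad:.2f} "
      "(in real data, a ratio > 1 is read as the solution manifold being exploited)")
```

An affordance map plays the role of `outcome` here: it supplies the goal subspace that the variance gets partitioned against.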
Solution Space Geometry (Sternad)
I've recently discovered that Dagmar Sternad has been doing this analysis for a while now, and is a few papers into figuring out how to get it to work. After being a bit bummed for 5 minutes that someone was beating me to it, I got over it, and now I'm excited that someone who knows what she is doing is in the game. We're swapping emails now to see how we can help each other out.

Sternad's work primarily uses a virtual reality (VR) 'throwing' game that's actually a bit more like tetherball. The participant's job is to throw a tethered ball towards a target, and because it's a VR task she can alter the task by changing the shape (the geometry) of the solution subspace. In her most recent paper (Zhang et al., 2018) she created four different spaces. Note that her spaces are different from mine because her task dynamics are different (she modelled the tetherball dynamics instead), but the space is still defined over release parameters.
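Just to illustrate that last point: the same release-parameter space can carry very differently shaped solution subsets once you swap the dynamics. The two 'dynamics' below are toy stand-ins (the second is emphatically not Sternad's tetherball model, just a made-up alternative mapping), but they show how the geometry of the hit region is set by whatever dynamics you feed the release parameters into.

```python
import numpy as np

# Same release-parameter space, two different (toy) task dynamics,
# two differently shaped solution subsets.

angles = np.radians(np.linspace(0, 90, 181))  # release angle
speeds = np.linspace(1, 15, 141)              # release speed (m/s)
A, V = np.meshgrid(angles, speeds)

# Toy dynamics 1: flat-ground projectile range; hit if within 0.25 m of a 6 m target
range_projectile = V ** 2 * np.sin(2 * A) / 9.81
hits_1 = np.abs(range_projectile - 6.0) < 0.25

# Toy dynamics 2: a different, entirely made-up mapping from the same release
# parameters to an outcome, standing in for 'different task dynamics'
outcome_2 = V * np.cos(A) * 1.2
hits_2 = np.abs(outcome_2 - 6.0) < 0.25

print(f"hit region 1 covers {hits_1.mean():.1%} of the space, "
      f"hit region 2 covers {hits_2.mean():.1%}; overlap {(hits_1 & hits_2).mean():.1%}")
```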
Summary
First, I am super excited that a) my idea about affordances and motor abundance has some legs, and b) someone who actually knows the maths is seriously getting into this. I really was a little bummed at first about being 'scooped', but I've realised that's the wrong mindset, and I'm hoping to make some real progress now by not having to do everything myself! Our two lines of work complement each other nicely, I think.

- I'm using motion capture of expert throwers in a very natural throwing task, which makes my work very directly about throwing itself. My motivation is to figure out how best to dynamically characterise the affordances of our environment, in order to test the hypothesis that we perceive these and use them to control our actions.
- Her work uses VR, which gives her a lot of experimental control over the task dynamics (her manipulations of the solution spaces were much more refined than anything I can manage with my real target). Her motivation is more about the action variability analyses themselves and figuring out what you can and can't do with them.