(Draft) Designing Playful Wearables to Support Physical Rehabilitation and Training

José Manuel Vega-Cebrián

A dissertation submitted in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Computer Science and Technology, Universidad Carlos III de Madrid

Advisors: Elena Márquez Segura and Ana Tajadura-Jiménez

April 2026

Abstract

Work in progress

Motivational quote.

List of Publications

Published and submitted content

The publications in the following list are partly included in the thesis. The inclusion of material from these sources is specified in each chapter where an inclusion occurs, using the corresponding uppercase letter and reference. The material included from these sources is not singled out with typographic means and references. The list is ordered chronologically.

  1. José Manuel Vega-Cebrián, Elena Márquez Segura, Laia Turmo Vidal, Omar Valdiviezo-Hernández, Annika Waern, Robby Van Delden, Joris Weijdom, Lars Elbæk, Rasmus Vestergaard Andersen, Søren Stigkær Lekbo, and Ana Tajadura-Jiménez. 2023. Design Resources in Movement-based Design Methods: a Practice-based Characterization. In Proceedings of the 2023 ACM Designing Interactive Systems Conference (DIS ’23), July 10, 2023. Association for Computing Machinery, New York, NY, USA, 871–888. doi.org/10.1145/3563657.3596036
    • This item is a peer-reviewed conference paper.
    • This item is partly included in the thesis, mainly in Chapter 4 but also in Chapter 2.
    • The material from this source is not singled out with typographic means and references.
  2. José Manuel Vega-Cebrián, Elena Márquez Segura, and Ana Tajadura-Jiménez. 2024. Towards a Minimalist Embodied Sketching Toolkit for Wearable Design for Motor Learning. In Proceedings of the Eighteenth International Conference on Tangible, Embedded, and Embodied Interaction (TEI ’24). Association for Computing Machinery, New York, NY, USA, Article 73, 1–7. doi.org/10.1145/3623509.3635253
    • This item is a peer-reviewed work in progress conference paper.
    • This item is partly included in the thesis, mainly in Chapter 5 but also in Chapter 2.
    • The material from this source is not singled out with typographic means and references.
  3. José Manuel Vega-Cebrián, Laia Turmo Vidal, Ana Tajadura-Jiménez, Tomás Bonino Covas, and Elena Márquez Segura. 2024. Movits: a Minimalist Toolkit for Embodied Sketching. In Proceedings of the 2024 ACM Designing Interactive Systems Conference (DIS ’24), July 01, 2024. Association for Computing Machinery, New York, NY, USA, 3302–3317. doi.org/10.1145/3643834.3660706
    • This item is a peer-reviewed conference paper, extending the previous work in progress (B).
    • This item is partly included in the thesis, mainly in Chapter 5 but also in Chapter 2.
    • The material from this source is not singled out with typographic means and references.
  4. José Manuel Vega-Cebrián, Elena Márquez Segura, María Fernanda Alarcón, Tomás Bonino Covas, Lara Cristóbal, Andrés A. Maldonado, and Ana Tajadura-Jiménez. 2025. Co-designing Minimalist Wearables to Support Physical Rehabilitation after Peripheral Nerve Transfer Surgery. doi.org/10.5281/zenodo.17903256
    • This item TODO(will be) a peer-reviewed journal paper.
    • This item is partly included in the thesis, mainly in Chapter 6 but also in Chapters 2 and 3.
    • The material from this source is not singled out with typographic means and references.

Other research merits

The peer-reviewed publications in the following list are not included in the thesis but constitute work that was done during the same period. The list is ordered chronologically.

Preface

Acknowledgements

I am very grateful.

1 Introduction

How can we design multisensory wearables to support physical rehabilitation?

This is an Interaction Design project using movement-based design methods, involving multisensory feedback in response to movement, and implementing design concepts in minimalist technologies.

In this thesis, my main contributions are:

In this thesis, I present how:

1.1 Context

Research projects: MeCaMInD, MovIntPlayLab, BODYinTRANSIT.

initial objectives

TODO(further context) Because physical rehabilitation is a broad application domain, for this work it was necessary to focus the scope on a single case. The research projects that funded my work had already instigated a collaboration with the Peripheral Nerve Unit of Hospital Universitario de Getafe, in Madrid, Spain. I joined this collaboration to research how we could design playful wearables to support the rehabilitation of patients who had undergone peripheral nerve transfer surgery. In this type of surgery, surgeons reconnect nerves to bypass or replace damaged nerves. In such cases, rehabilitation involves not only recovering the mechanics of movements but also their neurological foundations. For the technologies that would support this kind of rehabilitation, I was interested in taking into account the perspectives and needs of patients and medical personnel. This led to a complex co-design process that yielded rich insights and several prototypes that I developed based on them.

1.2 Positionality

first-person perspective [73].

1.3 Collaborators

During this thesis project I collaborated with several people. Here, I introduce the acronyms that I use to refer to them throughout the text, along with the roles they performed.

1.4 Thesis Structure

Chapters 2 and 3 constitute the context for this work: in them, I introduce foundational concepts that situate and describe the kind of design research that I delved into during this thesis, mostly based on embodied interaction, multisensory feedback and Research through Design. In Chapter 2, I provide an overview of the theoretical concepts of action, perception and movement learning that inform this work. Additionally, I survey related work in HCI research along three different axes: movement-based design methods, embodied ideation toolkits, and the design of interactive technologies for rehabilitation. These axes mirror the areas to which this thesis mostly contributes. Chapter 3 describes the main principles behind the methodology used during the work, Embodied Sketching and Research through Design, along with the design drives that characterised it: minimalism, open-endedness and generalisability.

Methods and tools: Chapter 4. Chapter 5.

Process and Prototypes: Chapter 6. Chapter 7.

Finally, in Chapter 8, I attempt to tie all the work together.

2 Background

This chapter draws on publications A, B, C, and D [189–192].

In this chapter,

2.1 Body, Perception and Action

TODO(general context)

TODO(“activity”)

2.1.1 Motor Learning

TODO(rework this. Add implicit vs explicit motor learning) The focus of attention refers to the location to which a person pays attention while performing a certain movement [116]. An external focus of attention consists of directing the learner’s focus to the effects of their movements on the environment, such as to an apparatus or implement [207]. In contrast, an internal focus of attention consists of concentrating on the inside of the body while performing a movement [116]. Existing studies on attentional focus have generally recognised the benefits of adopting an external focus over an internal focus in motor learning and performance in a variety of practices such as golf [89], tennis [206], standing long jump [205], swimming [163], jump height [1], throwing [210], and striking combat sports [65].

2.1.2 Intercorporeal Biofeedback

My thesis work is influenced by the strong concept [75] of Intercorporeal Biofeedback [183], which proposes that interactive technology can act as a mediator supporting joint sense-making about body processes by different actors [183]. TODO(a strong concept is…).

Articulated through design work focused on practices of movement teaching and learning, the Intercorporeal Biofeedback concept presents a way to design biofeedback technology to achieve such a role. It proposes four core characteristics that have served as guideposts for the technologies I designed for this thesis. First, an intercorporeal biofeedback tool should provide a shared frame of reference so that the biofeedback is accessible—by using, e.g., audiovisual or visuotactile and not only vibrotactile feedback—to different people at the same time. This helps create a frame that involved parties can refer to in their sense-making processes. Second, such a tool should support fluid meaning allocation, i.e. in-the-moment constructive meaning-making, by favouring open-endedness [186] in the feedback representations. Third, it should support guiding attention and action, enabling the focus of attention to fluctuate between the body, the biofeedback, their tight loops, and the instructions provided by observing peers [183]. Finally, it should be designed as an interwoven interactional resource to be used alongside a wider variety of interaction resources—such as verbal instructions, demonstrations, and material equipment [183]—so that the technology does not become the sole focus of the interaction. TODO(maybe develop more?)

2.2 Movement-based Design Methods

Given TODO(the renewed importance of the body in action / HCI waves)… Movement-based Design Methods have been proposed and used by the Interaction Design and HCI communities. In the following, I briefly present methods and strategies that have been historically relevant to the trajectory of movement-based design research in HCI.

First of all, Bodystorming is a situated generative design method focused on generating multiple design ideas. In contrast to brainstorming, bodystorming uses full-body engagement with objects, the space and other people to come up with ideas. There have been several proposals regarding bodystorming, exemplifying how movement-based design methods are often appropriated, adapted and tweaked to fit a specific design agenda and design process. For instance, in 2003, [129] focused on carrying out ideation sessions in the very context in which designs would be used. Later, in 2010, [146] articulated bodystorming as three different approaches: prototyping using enactment; physically emulating the spatial environment in which technology would be used to generate/evaluate ideas in context; and employing actors and props to play out expected use case scenarios.

More recently, [107] advanced bodystorming for movement-based interaction as a generative strategy to develop ideas from scratch, emphasizing its playful and participatory components. Later on, [184] introduced Sensory Bodystorming, which bridges bodystorming and material ideation approaches. This method uses non-digital materials and objects with different sensory qualities to foster exploration and ideation of sensing/actuating possibilities. Finally, [200] proposed Performative Prototyping, which combines bodystorming methods and Wizard of Oz techniques with a puppeteering approach in collaborative mixed-reality environments. With this approach, they claim to leverage both somaesthetic and dramaturgical perspectives, the former conceived as a point of view from the inside out and the latter from the outside in.

[145] argued for the importance of somatic facilitation during a technological design process and named this practice Somatic Connoisseurship. The careful and trained focus on the lived experiences in the body can enrich the design space in Interaction Design and HCI [145].

Relatedly, Soma Design [72–74,178] refers to a design process that is holistic and builds upon the ideas of Somaesthetics [148,149]. It connects sensations, feelings, emotions, and subjectivity in participants’ bodies and aims to examine and improve them. These frameworks emphasize introspection, slowness, increased awareness, and the use of sensitizing and body maps.

On a similar note, Embodied Sketching [108] encompasses movement-based ideation practices that harness a combination of physical engagement in the surrounding context with play and playfulness to elicit a creative mindset. This context includes the social and spatial settings along with digital and non-digital artefacts, which are catalyzers of engagement and idea generation.

Estrangement, which refers to the process of turning “the familiar” upside-down and making it unfamiliar, is also a common resource and an important component of Soma Design TODO(confirm references) [72–74,178]. [201] analyzed the use of estrangement as a powerful approach in embodied design methods. It can be used to inspect and experiment with already-known practices, movements and actions, causing a disruption that makes the familiar tangible or visible, and to arrive at new kinds of movements, objects or design concepts [201].

Along the same lines, with Moving and Making Strange, [97] centred bodies and movement in the design process using a choreographic approach. The work foregrounded the use of choreographic strategies—such as explorations of variations of movement qualities including speed and direction—as possible ways to defamiliarize everyday movements and arrive at interesting interaction possibilities. Their emphasis was on the first-person perspective of the mover, alongside the third-person perspectives of the observer and the machine. Relatedly, [17] also argued for the use of estrangement to open up design spaces, specifically in the context of the design of home appliances.

Role-playing as a method involves deliberately assuming a character role and playing out a more or less defined scene or script, with or without props [150]. It can be used throughout the whole design process: to discover and identify issues to solve; to observe and understand the design context and target users; to generate new ideas; to evaluate them; and to communicate them. Informances [27,150] are an example of role-playing which combines performance, scenario-based design, and Wizard of Oz to simulate and improvise, in a generative spirit, situations involving future technology. In Informances, simple props are often used to simulate and recreate the technology and key contextual elements of the scenario. A more elaborate form of role-play is Larping—Live Action Role Playing—which involves complex and well-crafted simulations, character descriptions, narrations and strategies for representation [104]. Larping has the potential to cultivate deeper connections between participants and their characters and can be used as a sensitizing activity or as a stage for testing and evaluating design concepts and prototypes [104].

Other methods that are used as references and inspiration are Service Walkthrough [24] and Interaction Relabelling [45]. Even though these were not originally proposed as movement-based design methods per se, they similarly entail physical engagement with artefacts and the environment, and they are cited as relevant methods by others [17,97]. Service Walkthrough [24] is a design technique that facilitates and guides the physical representation and enactment of service moments or stages in order to prototype and evaluate them. While the entire service journey is walked through, feedback can be gathered on the process as a whole or at each moment or stage of the journey. Interaction Relabelling [45] supports the ideation of novel forms of interaction with electronic devices by asking participants to use an existing product as if it were the intended design. The resulting interactions are mapped and evaluated. When the products are quite different from the intended designs, they may lead to creative ideas and concepts.

Finally, a common physical resource employed in movement-based design methods is paper cards. These are used to provide descriptions and instructions [119], to aid in ideation/reflection [175], as a documentation tool of design constructs in workshops [144,186], or as rule facilitators of body play [114]. TODO(mecamind reference. In fact…)

2.2.1 Classifications of Movement-based Design Methods

As part of this thesis, in Chapter 4, I describe a characterisation effort aimed at elucidating and mapping common features of movement-based design methods, to aid in their understanding, usage and reappropriation. In the following, I present previous works that have similarly addressed the need for a comprehensive framework to understand, describe and appropriate such design methods.

To approach the analysis of embodied design methods, a couple of works have focused on a single yet powerful dimension as their starting point. For instance, [201] proposed and used a framework to analyze embodied design ideation methods with a focus on estrangement. In the work, they interrogated: (1) What is being done to cause a disruption; (2) What is destabilized by this disruption; (3) What emerges from the process, and (4) What is embodied, e.g. made tangible or visible from doing it [201]. This framework was used to analyze eight embodied design methods.

Alternatively, [97] focused on the first-person perspective of the person in movement. From there, they proposed a design methodology based on a whole set of choreographic tools, and grounded in prior interactive design projects from the same authors.

In contrast to these two works, the analysis I present in Chapter 4 followed a bottom-up approach to map the characteristics of a larger corpus of movement-based design methods. With this approach, the objective was to obtain a set of general categories that would make it possible to describe the elements in play when implementing these methods in practice.

In another work, [5] analysed 23 methods in seven articles and constructed a typology for movement-based design methods, based on the following seven foci: (1) Sensing; (2) a Playful approach; (3) an Experimental approach; (4) Props, Artifacts and Technology; (5) Enactment; (6) Social Interaction; and (7) Specific Context. Simultaneously, they classified the methods regarding the design stage in which they were used: Divergent, Explorative or Convergent.

I would argue that a limitation of this approach is that the methods are pigeonholed into a specific focus when they could be present in more than one. Thus, it can be difficult to see how the seven identified dimensions relate to each other. Additionally, there is no clear path from the classification to the implementation of one's methods. In contrast, in the work I present in Chapter 4, the categories are not exclusive and can therefore reflect several methods at the same time. Further, they were made actionable by providing recommendations and considerations for the reader.

2.3 Embodied Ideation Toolkits

In this thesis, in Chapter 5, I present the design process of a minimalist toolkit that could support embodied sketching [108] focused on applications of multisensory feedback in response to movement. In the following, I describe related embodied ideation toolkits that have been designed and employed in the HCI community, given that movement-based design research has often explored, produced and employed toolkits and probes to facilitate design sensitisation, exploration and ideation.

First of all, I wish to highlight some foundational works. Here I include the concept of technology probes: simple, flexible and adaptable technologies designed to inspire users and researchers to ideate new technologies [80]. The Inspirational Bits [165] consisted of multiple units that exposed the workings of common technologies and input modalities. The Embodied Ideation Toolkit [79,153] involved the curation, design and use of multiple tangible objects to support embodied co-design processes with the participation of diverse stakeholders. TODO(expand more on these?)

Similarly, my work draws from toolkits and probes often employed in Soma Design [72,74] processes, such as the Soma Bits [202,203]. The Soma Bits were introduced as a kit of objects that allow exploring haptic modalities—vibration, heat, and inflatables—at varied levels of intensity and in different parts of the body [202,203]. The kit combines the Soma Bits—the devices consisting of electronic actuators, control units, knobs and power—with the Soma Shapes—soft and diverse objects with pockets to place the Bits [202,203]. Related to the Soma Bits, but not developed as a toolkit per se, the Felt Sense Glove [125,126] and Sense Pouch [125] were ideation probes that supported exploration of the effects of heat and vibration on people’s somatic experiences. Along similar lines, but intended as an open-ended prototyping toolkit to design wearable menstrual technologies for young adolescents, the Menarche Bits [154,155] consisted of custom shape-changing actuators and heat pads.

Another line of research into toolkits concerns designing individual modules that can be interconnected and used to explore and prototype wearables and e-textiles. For example, the Wearable Bits [83] were a modular set of patches of different levels of fidelity with common electronic components—sensors and actuators—that could be arranged according to one’s design and prototype. The Kit-of-No-Parts approach [132] consists of handcrafting textile interfaces—such as tilt, pressure or stroke sensors—from scratch so that one can personalize, understand and share them.

The DanceBits were a wearable prototyping kit for dance, co-developed with a justice-oriented computing and dance education organization [41]. The DanceBits provided several input components, such as buttons and tilt sensors, and output components, such as different types of lights, that could be easily interconnected to design and perform choreographies while wearing electronic costumes [41]. Focusing on haptic feedback, the TactorBots [211] consisted of a toolkit of multiple wearable units, each providing a different type of touch gesture. The touch gestures implemented in these units arose from an analysis of prior work, similar to the process I followed in this thesis. The design process of the TactorBots resulted in a comprehensive toolkit which could render all touch types, could be worn on any part of the body, and could be used in the wild [211]. From all these kits, I took inspiration from how they identified and built minimal modules, each with a single function.

More directly related to my work in movement feedback are the Training Technology Probes (TTPs) [105,106,182]. These were a collection of simple wearable devices that sensed a few body parameters, such as movement speed, body orientation or breathing, and provided feedback loops through different sensory modalities—e.g. lights, sound and vibration mapped to orientation or motion [105,106,182]. They emerged from diverse embodied sketching activities [108,184]. While they were not specifically designed as an ideation toolkit themselves, they lent themselves to be appropriated, iterated, and re-designed to be used in diverse contexts related to motor learning and training. For instance, the TTPs have been used in a multitude of projects relating to motor learning in circus training, yoga, weightlifting and physical training in general [105,106,180–182,186]. This was due to key properties of the TTPs, such as their simplicity, open-ended feedback and use of redundant multisensory feedback for the wearer and others, all of which I took as inspiration for my toolkit.

My work draws inspiration from these prior toolkits and probes in different ways. For instance, this work shares with the Soma Bits [202,203], Menarche Bits [154,155], Wearable Bits [83] and TTPs [105,106,182] the values of minimalism and an integral understanding of embodied experiences, where technology is not the sole focus. The insights from studies with the TTPs [105,106,182] comprised the empirical grounding of the Intercorporeal Biofeedback strong concept [183], which I used as an analytical lens in my work. Additionally, I took from the Embodied Ideation Toolkit [79,153] and the description of bodystorming baskets [191] the approach of bringing a variety of small, readily available probes to help stakeholders engage in embodied design activities. Finally, my work shares with the Wearable Bits [83], Soma Bits [202,203], TactorBots [211], Kit-of-No-Parts [132] and DanceBits [41] a design approach based on individual modules with identifiable functions, which can be adapted to different situations.

2.4 Interactive Technologies for Rehabilitation

TODO(intro, connect with Chapter 6)

In this section I focus on designs and design processes supporting neurorehabilitation, especially of conditions affecting the upper limbs. In particular, I cover conditions that share needs and characteristics with rehabilitation after peripheral nerve transfer surgery, such as stroke, spinal cord injury, or chronic pain.

In the Health domain, Tangible User Interfaces have been employed for purposes such as promoting health, facilitating diagnosis, facilitating or improving the rehabilitation process, improving everyday life or practitioner work, and providing mental and social support [20]. Furthermore, rehabilitation is the second most targeted medical field in tangible design, where such interfaces have proven useful for facilitating or improving the rehabilitation process [20].

Several types of tangible and interactive technologies have been designed and researched to support physical rehabilitation processes [20]. Many examples have combined these technologies with playful or game-like features to create Exertion Games (Exergames) for rehabilitation. For instance, some projects have focused on the affordances of Virtual Reality (VR) or Mixed Reality for this purpose. Exergames have been developed to support the rehabilitation of Spinal Cord Injury [130], stroke [14,90], or upper limb rehabilitation in general [15,55,208]. [30] interviewed several physical therapists after playtesting a commercial VR game to evaluate the potential of these technologies to support physical therapy, providing a positive outlook on the possibilities of such games. Some of these works have involved interaction not only with the VR system but also with robotic devices (arms or exoskeletons) [55,208] or with custom wearable sensors [90,91].

Relatedly, there is a strand of work that has focused on employing external computer vision sensors such as the Kinect or Leap Motion to estimate the position of the limbs and use these estimates to drive the games or experiences [49,53,54,138,142]. These technologies free the patient from wearing anything, but they involve setting up a place with a specific configuration to be able to leverage the sensing that they provide. All these Exergame works have indicated advantages of employing game-related features to support rehabilitation by promoting engagement and providing further anchor points for motivation.

Regarding wearable devices, there is a line of work developing, testing and exploring custom e-textile-based sensors for joint position estimation. For example, in the works on SeamSleeve [139], E-Serging [81] and ReKnit-Care [77], the researchers investigated several possibilities for embedding conductive seams within garments for motion detection. These e-textile sensors and actuators remove the need for wearable integrated circuits or external computer vision sensors, and are promising for physical rehabilitation applications. They allow for completely custom-fit designs, which would also allow personalisation for the wearer's needs. However, they tend to require specialized equipment that could make them challenging to replicate.

2.4.1 Applications of Multisensory Feedback

Sensory feedback on movement has been increasingly used in the context of physical activity and physical rehabilitation to motivate, inform and guide people. For instance, real-time auditory feedback can provide additional information on movement—such as movement trajectories or qualities—to aid in movement execution, control and sensorimotor learning [22,143] or to support the reacquisition of lost motor capabilities [194], such as those following strokes [147,198]. Sensory feedback on movement can further address the underlying psychological barriers or needs that prevent people from engaging in physical activity or rehabilitation [134]. For example, in the Go-with-the-flow project [152], movement sonification provided information about the movement angle, start or end to help people build confidence in physical activity despite chronic pain.
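As a toy illustration of the kind of parameter mapping such sonification designs rely on (a hedged sketch, not the actual implementation of Go-with-the-flow or any other cited system), the snippet below maps a sensed joint angle to an audio pitch; the angle and pitch ranges are hypothetical placeholders that a designer would calibrate per exercise and per person.

```python
def angle_to_pitch(angle_deg, angle_range=(0.0, 90.0), pitch_range=(220.0, 880.0)):
    """Map a joint angle (degrees) to a pitch (Hz) by linear interpolation.

    Hypothetical defaults: a 0-90 degree exercise range mapped onto two
    octaves (A3 to A5). A real design might use a musical scale or a
    non-linear curve instead of a plain linear mapping.
    """
    lo, hi = angle_range
    # Clamp to the calibrated range so out-of-range readings stay audible
    # at the boundary pitches rather than producing extreme frequencies.
    t = min(max((angle_deg - lo) / (hi - lo), 0.0), 1.0)
    p_lo, p_hi = pitch_range
    return p_lo + t * (p_hi - p_lo)
```

Mapping to a continuous parameter such as pitch makes movement progress audible moment to moment, which is one way feedback can mark an angle, a start or an end of a movement.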

Haptic feedback represents a flexible channel for conveying information, with vibration emerging as the most frequently used modality in this space TODO(elaborate on and separate all these references) [9,18,21,67,105,156,157,167,174,204]. Additional forms of haptic input used to support body awareness include thermal cues [36,84] and pressure-based feedback [85,184]. Vibratory cues are often applied to prompt posture correction [18,204] or to deliver instructional signals during movement [105,156,157], and they can also be designed to evoke a range of bodily sensations. For example, metaphor-driven vibration patterns have been shown to elicit altered perceptions related to body shape, posture, size, or weight [171], or to modulate bodily experience in ways that promote body awareness and greater engagement in physical activity for inactive populations [94]. Such metaphorical cues can further support movement quality—for instance, by creating sensations of being “pulled” upward or “pushed” downward during squat exercises, which may help guide proper execution [151].

A recent line of research has also explored the potential of altered feedback—modifying the perceived body or body movement instead of providing accurate movement information—to address psychological factors related to physical activity and rehabilitation. This approach can, for instance, create the perception of a lighter and more capable body [170] TODO(amar references), evoke sensations of being “pushed” by sound or vibrotactile feedback [151], or alter the perceived weight of a body part to influence movement execution, such as reducing gait asymmetry in chronic stroke patients [62]. Employing real-time sensory feedback on movement can be a powerful design material to better support rehabilitation processes.

2.4.2 Smartwatches and Mobile Devices

To address the feasibility of deploying and evaluating research designs in the wild, some works have reappropriated and employed readily-available wearable technologies, such as commodity smartwatches or mobile phones. These devices provide an assortment of available sensors such as accelerometers, gyroscopes, magnetometers, heart rate sensors, cameras and more, along with processing capabilities to extract and analyse movement data. For example, the aforementioned Go-with-the-flow project [152] used a mobile phone worn in a belt to provide real-time feedback for chronic pain patients based on motion sensor data. Also using a mobile phone, [172] leveraged the camera in the device to perform computer vision-based motion tracking to provide gamified feedback.

Nowadays, most modern smartwatches are geared towards physical activity and exercise uses. For example, they track activity and rest during the day, and can detect and provide detailed data for specific types of sports and exercises, including strength training movements, although not necessarily for the purposes of rehabilitation. Given their compact form factor and their compatibility with custom fabric straps that extend their wearability, these devices can become a very appropriate platform for exploring and prototyping wearable interactions, especially because some of these devices allow for the development and installation of custom software (apps)—the main ones being those based on Wear OS by Google and watchOS by Apple. By developing for these platforms, one can leverage the already-existing hardware and operating system to focus on prototyping the interactions. For me, this was especially relevant because custom hardware tends to be more expensive to create, control, modify, maintain and distribute [68], and in several contexts where resources are scarce—financially, time- or labour-wise—there is a need to consider how to design technologies within limits [37]. TODO(expand on the LIMITS angle, from demo paper)

Based on the capabilities of programmable smartwatches, several works have explored how to use them to support physical rehabilitation. For instance, some works [60,141] have investigated the potential of smartwatches for the detection and evaluation of rehabilitation movements, with promising results regarding their accuracy. For example, [60] analysed 43 works employing smartwatches and leveraging their in-device accelerometers, gyroscopes and magnetometers as inputs to different detection algorithms, achieving gesture recognition with high precision.

For the case of arm rehabilitation, [141] focused on detecting and providing feedback on quality factors of rehabilitation exercises, such as missed repetitions and duration. In a similar line, to support real-time movement feedback for upper limb rehabilitation, some works have taken advantage of the pairing between smartwatch and mobile phone, using the former as a sensor array and controller and the latter as a data visualiser [28,113], a game screen [29], or another sensor array [43]. Others have studied the effects of providing feedback on activity levels over time in rehabilitation, showing promising possibilities and results [47,177]. Most of these works focus on the movement detection algorithms or on the effects of the feedback on patients, both in real time and over longer periods.
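The pipelines in these works typically reduce the three-axis inertial signals to simpler features before detecting movement events. As a minimal, hedged illustration of this idea—not the algorithm of any cited system; the signal, threshold and refractory period below are invented for the sketch—one can count exercise repetitions by peak detection on the accelerometer magnitude:

```python
import math

def magnitude(sample):
    # Euclidean norm of one 3-axis accelerometer sample (x, y, z)
    return math.sqrt(sum(v * v for v in sample))

def count_repetitions(samples, threshold=1.5, refractory=5):
    """Count peaks in the acceleration magnitude above `threshold`,
    ignoring new peaks within `refractory` samples of the last one
    (a crude guard against double-counting a single movement)."""
    mags = [magnitude(s) for s in samples]
    count, last_peak = 0, -refractory
    for i in range(1, len(mags) - 1):
        is_peak = mags[i] > mags[i - 1] and mags[i] >= mags[i + 1]
        if is_peak and mags[i] > threshold and i - last_peak >= refractory:
            count += 1
            last_peak = i
    return count

# Synthetic signal: a resting baseline of ~1 g with three movement bursts
rest, burst = (0.0, 0.0, 1.0), (0.5, 0.5, 2.0)
signal = ([rest] * 10 + [burst] + [rest] * 10 + [burst] +
          [rest] * 10 + [burst] + [rest] * 10)
print(count_repetitions(signal))  # → 3
```

The systems in the cited works use far more robust methods (filtering, segmentation, machine-learning classifiers), but the overall structure—sensor stream in, movement events out—is the same.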

2.4.2.1 Critical Perspectives to Design Smartwatch Applications

It is worth considering that, despite their ubiquity, several design decisions behind commercially available smartwatches and fitness trackers have been problematised in prior work, as these devices tend to take a normative and healthist approach to activity tracking [159]. First, an argument can be made that these devices extend a line of thinking that emphasises an “individual responsibility for health, well-being, and self-knowledge” [46] instead of providing or cultivating a more suitable social support system. In this sense, there is an assumption that tracking more data is better because it would help wearers gain a better understanding of what they do and how to change it [120].

Additionally, many of these devices demonstrate a level of opacity [209] in the measures they present to the wearer, such as number of steps [159,209] or stress scores [19], which might not always match the experience of the wearer [209]. This becomes more problematic when those measures are reductively taken as signs of fitness and health—more steps and less stress are better—without accounting for the real context and needs of the wearer [19,159,209]. For the case of activity tracking (e.g. in sports), there might be a mismatch between the data that are captured and the data that would provide a better understanding of the activity for the wearer, such as “felt, emotional, and contextual aspects” or broader temporal windows surrounding the activity [61].
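To make the opacity argument concrete, consider a toy step counter (the algorithm and numbers are invented for illustration; commercial devices use proprietary and far more sophisticated pipelines). The count presented to the wearer depends entirely on a threshold parameter the wearer never sees:

```python
def count_steps(accel_magnitudes, threshold):
    """Naive step counter: each upward crossing of `threshold`
    counts as one step. The threshold is a hidden design decision."""
    steps, below = 0, True
    for m in accel_magnitudes:
        if below and m > threshold:
            steps += 1
            below = False
        elif m <= threshold:
            below = True
    return steps

# The same synthetic walking signal yields different "step counts"
# depending on that hidden parameter choice.
walk = [1.0, 1.3, 1.0, 1.2, 1.0, 1.4, 1.0, 1.15, 1.0]
print(count_steps(walk, threshold=1.25))  # → 2
print(count_steps(walk, threshold=1.1))   # → 4
```

Even in this trivial sketch, the number shown as "steps" is a product of design decisions rather than a neutral measurement, which illustrates why the cited critiques question taking such measures at face value.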

When chronic conditions are involved, tracking technologies could support more nuanced recordings of activities and their context [61], while also considering that what the wearer might need is to limit their activity levels and balance exertion with rest [69–71]. To illustrate this point, [71] investigated how commercial fitness trackers were reappropriated and (mis)used for the purpose of pacing energy levels, identifying tensions between this use and the intended one, while also demonstrating that the devices could technically support these alternative goals.

2.4.3 Designing Technologies for Rehabilitation

In this work, I take inspiration from several works describing co-design processes and discussing design insights for tangibles and wearables to support physical rehabilitation.

For instance, [100] presented design recommendations for tangible interaction projects for stroke rehabilitation, including considerations for holistic activities—e.g. taking into account the social context of the stroke survivor, purposeful goals, the balance between rest and activity, personalisation, and others. [87] described a co-design process in which they created modular objects for interactive therapy at home, focusing on the affordances of their shapes and their relation to everyday activities. Similarly, [13] engaged in a participatory design project in which they iterated personalised interactive devices to support patients' long-term engagement with their rehabilitation activities. These works involved stroke survivors as participants in their co-design activities, along with health personnel such as physiotherapists and occupational therapists, and, in one case, family members [13].

Beyond stroke, [96] involved people with upper limb disabilities to co-design gaming wearables, and highlighted that research on wearables has focused largely on rehabilitation and much less on non-corrective uses, such as play. These works exemplify the importance of involving as co-designers the people who are directly affected and will use the proposed designs.

Other co-design projects have recruited and collaborated only with health professionals who are experts in the application domain. For instance, [99] recruited clinicians and mindfulness experts and derived design principles for mindfulness-based embodied tangible interactions for the at-home rehabilitation of stroke patients, bringing forward a holistic approach based on the practice of mindfulness. Similarly, [32] carried out two co-design workshops with occupational therapists to develop adaptive soft switches for youth with acquired brain injury. In the work by [182], minimalist technology probes were employed for the co-design of physical training activities for children with motor challenges. These works illustrate how involving the relevant domain experts can also be useful to derive rich guidelines and insights for the initial stages of a design, which can later be evaluated by the patients who are intended to use it.

2.5 Chapter Takeaways

3 Methodological Considerations

This chapter draws on publication D [189].

3.1 Embodied Sketching

In the design activities described in this thesis, I employed a methodology based on Embodied Sketching [108]. Embodied sketching focuses on the design of technology-supported activities in a holistic way, involving the (co-)designers' own bodies to come up with, simulate, and examine the potential of design concepts before their actual development [108]. Embodied sketching involves three different stages: sensitising, bodystorming for ideation, and participatory embodied sketching.

3.1.1 Sensitising

Sensitising involves familiarising the designers and stakeholders with the aspects relevant to their mutual domains in order to provide a fertile ground for communication and ideation [108]. TODO(extend this… for example)

3.1.2 Bodystorming for Ideation

Bodystorming for ideation involves using the physical, social, and situated bodies to generate ideas [107,108]. Among various bodystorming techniques [107,129,146,184], I followed the one described by [107], targeting the design of playful physical and social action [107,108].

In this line, bodystorming [108] prescribes the following rules:

TODO(extend this… for example… we used it in)

3.1.3 Participatory Embodied Sketching

Participatory embodied sketching [108] consists of an embodied exploration of already-existing technologies—such as exploratory prototypes—in order to generate ideas for further possible applications.

TODO(extend this… for example)

3.1.4 Bodystorming Baskets

Related to the methodology of embodied sketching is the use of a bodystorming basket [191]: an assortment of objects to support quick crafting and prototyping, including general-purpose crafting material; affixing systems (velcro, safety pins, etc.); general training tools (e.g. elastic bands) and some specific to hand and arm training (e.g. grip strengthening devices); and everyday tools and accessories related to the hands and upper limbs, e.g. gloves. Such an assortment can also include a toolkit of technology probes for embodied sketching to support movement-based technological possibilities and incorporate interactivity into the resulting designs.

TODO(extend this… for example)

The concept of bodystorming basket was introduced in [191], a conference paper which serves as the basis of Chapter 4.

3.2 Research through Design

Throughout this thesis, I followed a Research through Design (RtD) approach: a mode of inquiry that uses design practice and the design process as primary means of generating knowledge [52,57,213]. Research through Design integrates the act of designing as a central method of inquiry, and therefore studies and reflects on the stages of design, prototyping, iteration, reflection, and engagement with stakeholders. In this sense, the implications of the design trajectory become a central contribution, serving as more than a path to produce an artifact that can then be evaluated.

Contributions in RtD can take multiple forms, including constructive, methodological, and conceptual or theoretical contributions [214]. Emerging conceptual knowledge in RtD should be seen as “provisional, contingent, and aspirational” [57]; it aspires to be “sometimes right” rather than “never wrong” [57]. Such a framing supports an understanding of RtD as exploratory and open-ended. Methodological contributions, in turn, involve the methods and approaches developed during the design process. In this line, I carefully report the methods followed in the design processes of this thesis in an attempt to capture reflection-in-action TODO(ref schön).

In this thesis, I take inspiration from [128], who proposed a vocabulary of design events to “articulate temporalities in design research with further care, nuance, and generativity”, which can support building “narratives that emphasize knowledge created along the way, and relieve pressure from the ‘final’ artifact.” [128] This perspective justifies documenting design activities with stakeholders, conceptual pivots, and emerging insights as consequential components of the knowledge process, rather than treating them as background context.

3.2.1 Prototypes and Loose Ends

Regarding the contributions of this thesis, I make a conscious choice to include initial and intermediate prototypes and preliminary design concepts as valuable elements in the evolution of the design inquiry. I take inspiration from how [63] highlighted the value of “loose ends,” i.e. the “successful samples that are not suitable to the main inquiry of the present design research process, but that under certain circumstances can become starting points for new investigation lines.” I commend their critique of conventional reporting approaches, which, by focusing on functions and solutions, omit “failures” and “loose ends” to give way to a success narrative of the final outcomes [63].

Extending this view while reflecting on unreported prototypes, [173] highlighted the multifaceted functions that prototypes perform within RtD. They note that one of these functions involves “brokering relationships with participants and deconstructing opaque technologies” [173]. In my work, prototypes acted as mediators: they helped to establish and sustain engagement between designers and other stakeholders by providing tangible reference points for discussion, imagination, and critique. Through this mediation, the prototypes surfaced existing tensions and new perspectives that reoriented the design trajectory. They also supported “deconstructing opaque technologies” [173], making visible design aspects that would have otherwise been overlooked. Recognising these functions underscores how prototypes actively generate knowledge throughout the design process, rather than serving solely as end results.

3.2.2 Messy Design Journeys

Inspired by previous works in RtD [40,58,76,128], in this thesis I embrace the complexity, nuances, uncertainty and temporality that characterised this design journey: I describe how the findings and challenges at each stage shaped and fed the next ones, though not necessarily in a straightforward or expected way. TODO(develop these references)

In this sense, rather than presenting a streamlined story of linear progress and success, I made an effort to reveal aspects of our design journey that show the challenges, limitations, unexplored routes (or loose ends [63]) and even “failures” [58,76] that shaped it.

3.3 Design Drives

TODO(complementing positionality). In the design journey of my thesis project, there were design drives that guided how I developed my prototypes: minimalism, open-endedness, and generalisability.

3.3.1 Minimalism

In general, with the prototypes I designed and developed, I was interested in exploring to what extent a minimalist design could support the kinds of rehabilitation goals in our application domain. I looked for simple setups that would not require much involvement from the users (designers, therapists, patients). For instance, I reasoned that this would accommodate the needs of therapists and rehabilitation doctors, who usually do not have much time per patient, let alone time to set up a complex technological device.

To inform my perspective, I was inspired by the concept of minimal computing. Originating in the digital humanities, minimal computing refers to an approach that advocates working within constraints, questioning the narrative that equates innovation with a bigger scale or scope [136]. Minimal computing puts at the forefront the question of what is necessary and sufficient for a given context, and advocates using the technologies that account for exactly that [136]. This approach echoes perspectives on computing within limits and similar concepts [37,121], which propose exploring computer-supported well-being in human civilisations living within global ecological and material limits [121]. I contend that the consideration of constraints when designing new technologies is especially relevant in the current state of the world, where human activities have exceeded several planetary limits and therefore cannot be sustained [121]. Additionally, this approach is suitable in contexts where resources are scarce—financially, time- or labour-wise.

To implement my prototypes, I used commercially available devices (prototyping boards and off-the-shelf smartwatches) as development platforms because I saw in them available, small, flexible, and wearable platforms that could be subverted for our purposes. In this sense, I found guidance in works addressing the possible roles of Interaction Design within sustainability, for example arguing for the value of reusing devices [23] and of not creating new custom hardware [68]. Custom hardware tends to be more expensive to create, control, modify, maintain and distribute [68], so by using already-existing hardware—drawing inspiration from the concept of salvage computing [37]—I aimed to minimise such costs while keeping what was necessary and sufficient [136] for the task at hand.

In the prototypes I introduce in this thesis, minimalism was present on several fronts: the number of devices in use and their size, the types of interactions, the user interface patterns, their physical dimensions, the degrees of configuration, the relatively low number of features, and the fact that the devices were meant to be used only during the activities (and therefore not at all times).

3.3.2 Open-endedness and Generalisability

Previous works on minimalist technology probes [105,106,180–183,186] have highlighted how appropriate these probes are for complementing activities within different movement domains, therefore reflecting a considerable degree of generalisability. TODO(expand open-endedness) In this line, I found in the strong concept of Intercorporeal biofeedback [183], which I describe above, an appropriate theoretical foundation for framing my minimalist designs. It proposes the role of interactive technology as a mediator that can support different actors in making sense of motor learning processes together. Designs that follow this strong concept are open-ended, allow for a shared frame of reference through multisensory feedback, enable guiding attention from and to the actions, and are considered part of the whole activity [183].

3.4 Ethical Considerations and Data Management

In this work, several participants were involved in the design activities that served as the basis for the insights and requirements of the multiple designs I implemented. The participants signed an informed consent form regarding the management of their personal data (video and voice) during the sessions in which they participated. These personal data were handled according to the data management plan in place for the corresponding research projects. TODO(provide details?) There was no monetary compensation for their participation.

The activities were approved by the Ethical Committees of the university. For the case of the study with patients described in Chapter 6, the Ethical Committee of Getafe University Hospital also approved the study protocol that was developed as part of this work.

The design activities described in this work were held in private rooms of the university or the hospital.

3.5 Chapter Takeaways

4 Design Resources in Movement-based Design

This chapter draws on publication A [191].

How can we design wearable technologies to support physical rehabilitation and training? One possible path lies in the use of movement-based design methods. As previously discussed, movement-based design methods have increasingly been adopted in several domains due to their capacity for providing early insights into the embodied experience of participating stakeholders [201]. They can be used in multiple phases of a design project, ranging from sensitising exercises to evaluation [108]. However, while some methods are known and documented, they are not always well-suited to the specific characteristics of a design project. One has to consider the requirements, goals, limitations and possibilities, context, available resources, and emerging contingencies, as well as when in the design process the methods may be used.

In this chapter, I introduce a characterisation effort that aimed to map the features of these methods, identifying how they can be applied in different application domains. This work resulted from a collaboration with an international movement-based design consortium working together in the Method Cards for Movement-based Interaction Design (MeCaMInD) project, supported by Erasmus+. As a consortium, we developed a comprehensive characterisation of movement-based design methods to guide designers in selecting, adapting or creating their methods. The goal was to identify salient characteristics of the methods that influence their applicability in different contexts. For this purpose, we were interested in collecting and analysing methods that design researchers use in a specific context. Some of these were adapted from previously known methods and others were created from scratch.

Through a series of workshops, we analysed a total of 41 key movement-based design methods used in 12 interaction design projects from the consortium. We characterised and classified the methods using a comprehensive thematic analysis [25] with a bottom-up approach. We obtained 17 categories that encompassed the significant characteristics of our corpus and arranged them into five main groups: Design Resources, Activities, Delivery, Framing, and Context (see Fig. 1).

This work was a collaborative effort, so in some instances in this chapter I use we to refer to what we did together. As the main author, I participated in all of the analysis along with EMS, and I organised the materials and led the research activities, coordinating the team. Note that I use the initials introduced above to refer to the collaborators. Based on the resulting analysis, I wrote most of the work and created the corresponding figure.

In this chapter, I introduce the methodology we used for the analysis. Then, I present the core considerations related to each category and then focus the discussion on the Design Resources group. Finally, I provide action points and recommendations that ground the Design Resources with the practice of using movement-based design methods.

4.1 Characterisation of Movement-based Design Methods

In this work, I collaborated with design researchers from the six institutions participating in the MeCaMInD project. They facilitated the interaction design projects that constituted the original data corpus, reporting on the movement-based design methods used for the different stages of their design processes. For each method, they reported its description, an account of logistics and facilitation, benefits and outcomes, and reflections.

The following are the 12 interaction design projects that were reported, each one using between one and seven methods. From these, we collected a corpus of 41 descriptions of movement-based design methods (see Tbl. 1 for their names and IDs).

  1. ACHIEVE: Design of a playful interactive supermarket environment for children to foster a transition to healthier and more sustainable food consumption.
  2. KOMPAN Workshop: Concept ideation for outdoor fitness equipment for playful fitness training. Participation of students along with designers from the playground company.
  3. Astaire [107,108,212]: Design of a collocated MR dance game for two players: one inside and the other outside VR.
  4. Super Trouper [105–107,109,111,182,184]: Methods for training body awareness and control in children with motor difficulties, combining circus training and interactive technology.
  5. Magic outFit [93,108,168]: Design of wearable technologies for sensorial changes of body perceptions to support physical activity.
  6. Sense2makeSense: Explorations in opening the design space of immersive and multisensory data representation.
  7. LearnSPORTtech [179–182,184,186]: Design of wearable technology to support sports and fitness practices through sensory feedback.
  8. Tangibles [38,39]: Co-design for upper limbs therapy for children with Cerebral Palsy employing interactive tangibles.
  9. DigiFys [11]: Research in children’s outdoor play and interactive installations to support it.
  10. Diverging Squash [98]: Single-player VR game inspired by racket ball.
  11. GIFT [196]: Museum experiences enriching physical exhibitions with digital content on smartphones.
  12. Online Course in Embodied Interaction [197]: Course in Embodied Design adapted to be taught online during the COVID-19 pandemic.
Table 1: Characterised Movement-based Design Methods.
Project Code Method
ACHIEVE Ach1 Somaesthetic field trips
Ach2 Somaesthetic body scan and body mapping
Ach3 Generative bodystorming
Ach4 Role-playing and improvisation
Ach5 Online re-enactments
Ach6 Puppeteering
Ach7 Wizard of Oz + Informances
KOMPAN Workshop KOM1 What can I do with this?
KOM2 Video sketching
KOM3 Play moods and quality cards
KOM4 Explore the movement aspect
KOM5 Action mock-up
KOM6 Play in context
Astaire Ast1 Warm-up games
Ast2 Playing off-the-shelf VR and MR games
Ast3 Embodied exploration and bodystorming with the affordances of MR
Ast4 Embodied exploration and bodystorming with off-the-shelf VR games
Ast5 Embodied explorations to fine-tune the game
Super Trouper SuT1 Warming-up to introduce and sensitize participants to tech and exercises
SuT2 Training sessions turning into participatory Embodied Sketching
SuT3 Bodystorming with experts
SuT4 Bodystorming with cards
Magic outFit MoF1 Dynamic body maps and keywords to characterise energising moments
MoF2 Barriers to physical activity cards
MoF3 Somatic dress-up for movement and sensation awareness
MoF4 Brainstorming based enactment
Sense2makeSense S2M1 First-person sensorial exploration and materialization of data representations
S2M2 Dolls to facilitate feeling and acting like your persona
S2M3 Body and sensory cards to inspire ideation
S2M4 Video prototype to capture design and scenario
LearnSPORTtech LSt1 Embodied explorations of technology use
LSt2 Technology sensitization
LSt3 Sensory Bodystorming
Tangibles Tan1 Field studies and short ethnography
Tan2 Interaction Relabelling applied in co-design
Tan3 Acting out movements
DigiFys DiF1 Long-term play engagement intervention in outdoor play
DiF2 Short-term play engagement intervention in outdoor play
Diverging Squash DiS1 VR Bodystorming
GIFT GIF1 Sensitising towards human practices
Online Course in Embodied Interaction OEI1 Online Bodystorming

The thematic analysis [25] was performed by the UC3M team (EMS, LTV, OVH, ATJ and me) and consisted of the following steps:

  1. EMS, LTV, OVH and I labelled these methods according to salient features and characteristics.
  2. The same four people, plus ATJ, categorised the resulting characteristics using a bottom-up approach.
  3. EMS and I refined the categorisation and obtained meaningful subcategories.
  4. EMS, LTV, ATJ and I grouped these categories and selected the group to elaborate on.
  5. I asked the facilitators to comment on the categories and results and provide more illustrative details to articulate them.

I provide more details about the process below.

4.1.1 Characterisation

The labelling and characterisation process was performed by EMS, LTV, OVH and me. We printed the reports of the methods on big sheets and arranged them on the floor of a closed space. To characterise them, we used sticky notes on which we wrote sentences or individual concepts that best described the methods, tagging them with their corresponding project and method names. This approach aimed to gather insights bottom-up; thus, we did not come into the process with preconceived categories or specific aspects to look out for. We made sure that at least one embodied design expert covered each method, and that every method was characterised by at least two people.

4.1.2 Categorisation

Once we had the sticky notes as working material, EMS, LTV, OVH, ATJ and I gathered for a big initial categorisation session lasting over three hours in a room of approximately 50 m². The session was collaborative, held on-site, and preserved the bottom-up approach. We arranged the sticky notes in the space, placing them randomly over half of the room floor, independently of other notes from the same method or project. A small and relatively cryptic code on each note later allowed us to trace it back to its respective method and project. We then simultaneously traversed the space, reading and surveying the notes and looking for patterns and similarities between them.

TODO(photo of the process?)

As this activity continued, we started to notice new categories. We grouped relevant notes in particular areas of the space and made the group aware of their existence—e.g. “There's a group about Objects in this area!”—to which the rest responded by bringing relevant notes they were aware of. During the process, these clusters would transform, grow, get divided into subcategories, or be integrated as subcategories of others. Interconnections between categories were also drawn, either by using proximity to indicate closeness or through colour threads indicating relations between notes and categories. Finally, we documented the resulting map of categories with photos. We held a debrief session to talk about the experience and our insights during the process, concluding that some categories still needed revision and further connection with relevant others.

Next, EMS and I performed subsequent categorisation sessions. We revised big, unfocused, or complex categories at the level of individual notes, finding overarching categories and their relations to each other, and deepening and refining the findings from the first big session. This allowed for an increased level of detail and led to finding clusters within categories, merging clusters that were closely related, naming and revising the names of clusters, and surfacing interconnections. Further sessions were needed to trace back which methods and projects were involved in each category. In the end, we arrived at 17 categories.

4.2 Categories and Groups

Figure 1: Characteristics of movement-based design methods

In our analysis, we found 17 categories from the 41 movement-based design methods reported by the facilitators of 12 movement-based interaction design projects. We arranged them into five groups: Design Resources, Activities, Delivery, Framing, and Context (See Fig. 1). These categories and groupings are not orthogonal, meaning several of them can characterise a given method or project.

In this section, I introduce the groups and categories to provide a sense of their components. Note that the names of the categories are written in italics and the names of subcategories are written in bold.

4.2.1 Design Resources

This is the main emerging group of categories, on which I will focus later in this chapter. It contains the categories of Movement, Space and Objects.

4.2.2 Activities

The Activities group contained the categories of Design Phase, Methods, Acting Out, Sensorial Explorations, and Crafting.

4.2.2.1 Design Phase

We found that movement-based methods were used across different Design Phases. They helped not only in Sensitising and Inspiration but also in the Iteration and Evaluation stages of the design process, and were adopted in both Divergent and Convergent phases. Additionally, some of the projects leveraged existing Technologies during these activities.

4.2.2.2 Methods

We categorised under Methods several references to already-existing design and research methods. Regarding Research, we found some references to field studies and ethnography. Concerning design, we found several references to classical Interaction Design techniques such as Brainstorming, Scenarios and personas, Participatory design, Wizard of Oz, and Puppeteering. Additionally, there were mentions of already-existing Embodied Design and movement-based methods, especially the use of Embodied Sketching [108] and bodystorming [107,129,146,184]. We identified Warm-up techniques across projects as an important component of embodied methods.

4.2.2.3 Acting Out

Methods in the corpus used Acting Out to come up with, materialise, and iterate design ideas, or as part of a convergence process. It allowed participants to flesh out, experience and see key action sequences. Role-playing was used to iterate ideas in the following ways: by testing them within a particular situation and adjusting them iteratively; by tapping into human-like interactions, e.g. exploring different social roles; or by filtering ideas and indicating improvements. It was also used to achieve joint sense-making as a group and to share ideas. Role-playing was mostly reported to be done in combination with improvisation.

4.2.2.4 Sensorial Explorations

We categorised under Sensorial Explorations notes regarding activities aimed at increasing awareness of specific sensing modalities such as vision, hearing or touch, either individually or in the form of multisensory feedback. They were used to inspire or iterate designs, and typically made use of bodystorming—particularly Sensory Bodystorming [184]—using physical probes with characteristic tactile and sound qualities.

4.2.2.5 Crafting

Crafting was adopted to create prototypes of interactive experiences, controllers and costumes while making use of readily available materials.

4.2.3 Delivery

The next group, Delivery, contained the categories of Facilitation, Planning and Logistics, and Documentation.

4.2.3.1 Facilitation

As part of the Facilitation category, we obtained the following list of Facilitation Tasks described and utilized across several projects:

Additionally, we found several mentions of having a predefined set of Instructions or rules for the facilitators or the participants to follow. These allowed a fluent development of the activities because they:

Regarding the involvement of the facilitators, the required Facilitation Level reported for each method in the corpus varied. Notably, methods that used digital technologies reported needing more time, energy, and resources. Finally, we found some reflections that considered the context of the Participants of the design methods, either as a target audience or as designers in the project. The projects prioritized accommodating different participant backgrounds, abilities, needs, and limitations. These considerations concerned both physical movement and the use of digital technologies. Methods in which Experts were participants tended to emphasize co-creation with them. Their skills and knowledge were leveraged, for example, by providing detailed feedback, developing or introducing technologies, or guiding somatic and movement-based activities.

4.2.3.2 Planning and Logistics

An important complement to Facilitation was the category of Planning and Logistics. Regarding Planning and Logistics Tasks, we found and grouped considerations and reflections regarding the following:

Methods varied in the Involvement Level they required for planning and logistics. A low involvement level occurred when there was a low requirement for resources, when these resources were easily available, when the facilitators or participants had high expertise, or when the activity had relatively low stakes. Conversely, methods that used complex technologies and setups like Virtual or Mixed Reality experiences, or methods that used several ad hoc elements such as custom-made cards or body maps, reported requiring considerable effort in planning and logistics.

4.2.3.3 Documentation

We found that the Documentation of activities was an important component in the Delivery of the movement-based design methods we analyzed. In this category, we grouped considerations regarding Data collection in general and the use of video and body maps in particular. Video recording was leveraged not only as a way to have an archive of evidence to evaluate after the activities but also as a creative medium for participants to prototype their ideas. Body Maps were adopted several times for participants to observe and communicate their body states, sensations or wearable prototypes across stages of the activities and design process.

4.2.4 Framing

The Framing group contained the categories of Play and Perspectives.

4.2.4.1 Play

Under the Play category, we grouped notes regarding playfulness, fun, and game design. Several projects had Playfulness either as a design goal or as a resource to instigate engagement and curiosity. Similarly, a few projects involved the concept of Fun as a goal or as a resource within their design methods. One way of fostering fun was to include pre-existing Movement-based games in the design activity. Finally, we found that some projects focused their movement-based methods on playing with, exploring, ideating, and iterating key actions that were envisioned to be at the core of the designed activity. We found that these were related to core mechanics in Game Design and embodied core mechanics in playful activity-centric design [102,195].

4.2.4.2 Perspectives

The perspective participants took in relation to the target audience emerged as an important consideration. We found methods that worked from a First-, Second-, or Third-person perspective, and even some that combined them [166]. This category also covered users’ perspectives, which were strongly related to the target group of the design. Specifically, Children were supported using several techniques, both with technology (e.g. using perspectives in VR in ACHIEVE) and without it. Finally, we found that Physical Models allowed for first- and third-person perspective shifts.

4.2.5 Context

Some notes related very specifically to the Context of projects we studied, in their motivations and results. Starting Points and Outcomes were strongly tied to the given projects. The range of possible Goals for projects and the movement-based methods they used included the following: understanding; reflection; focus; embodied core mechanics; and changes in physical activity, behaviour or self-perception. Finally, some common Challenges faced during these methods and projects included social and ethical concerns, levels of expertise in relevant areas, the management of engagement during activities, and the use of VR together with all its technical requirements.

4.3 Design Resources in Detail

We found three categories that we grouped under the name of Design Resources: Movement, Space and Objects. In this section, I describe and exemplify the subcategories that compose them. The categories and subcategories came from analysing the notes regarding specific projects and methods, but for this section I also include relevant and applicable examples from other sources in the corpus. To refer to the methods, I use the IDs introduced in Tbl. 1.

4.3.1 Movement

The methods in our empirical material were chosen based on their prominent use of movement. However, we still found a distinctive category for Movement, encapsulating important body and movement aspects in focus: Movement Qualities, Body Regions involved, and physical Contact. Additionally, this category included strategies and external elements that supported movement—Moving with Objects, and Constraints and Superpowers—as well as particular considerations when working with movements in Instrumental Domains, such as training and rehabilitation. Finally, it covered a possible outcome of using movement: Engagement.

4.3.1.1 Movement Qualities

Methods focused on experiencing, exploring, understanding, and working with particular Movement Qualities, such as movement trajectory, tension, or pace. This sub-category originated from notes on the Sense2makeSense project, and on the methods Ast4, LSt1, and S2M3 in particular. Common to all these projects was that movement qualities were targeted in their future designed experience. Methods focused on elucidating and experiencing these aspects first-hand to obtain a seed to inspire subsequent or concurrent ideation activities.

Additionally, some activities centred on working with a particular focus of attention regarding Movement Qualities, which were often related to the sensory and body experience in relation to the self, others, and the surrounding space. For example, in some instances there was a focus on bodily and proprioceptive sensations, body orientations in relation to the space and others, and proxemics [31,64,88]—physical, social, and cultural resources of action that can be useful in the design of technology [101,103].

4.3.1.2 Body Regions

We found two main groups of methods regarding the Body Regions involved during movement: those that were open to and instigated movement with the whole body—like LSt3, S2M3, or most of the methods from Magic outFit—and those that prioritized the movement of particular body areas, specifically the upper limbs—such as MoF1, S2M1, S2M2, S2M3, Tan2, or Tan3. In LearnSPORTtech, the whole body was involved. For example, the explorations of the technology in yoga focused on how the body was affected by the contribution of each limb in relation to the chest [179].

Regarding upper limb movements, we found two different cases. The Tangibles project involved activities related to specific kinds of motor impairments that targeted the upper limbs. Alternatively, methods that focused on the movement of the upper limbs also involved some traditional design and research activities in Interaction Design that are typically performed by hand, e.g. drawing, sorting cards, and crafting.

Some projects alternated between the use of the whole body and specific regions. For example, in Sense2makeSense, participants built a physical model of their prototype on a reduced scale and used small toys to enact a scenario. They used their bodies to capture and represent body actions that the toys were not able to produce. Hence, these were classical Interaction Design activities that were used in a way that involved physical enactment.

4.3.1.3 Contact

Physical Contact emerged as a subcategory of movement due to the Ast4 method from Astaire [212]. It was the only project that explicitly targeted social interaction involving physical contact. The design researchers described physical contact both as a design target and a key aspect shaping the design process.

Nonetheless, physical contact was present in other projects. For instance, physical contact was used in the form of physical collaboration and assistance to put on, modify, and adapt design materials and prototypes on the body. As an example, in Magic outFit, participants helped one another to “dress up” as the persona they were trying to feel like and enact. The enacting participants would request certain sensations from other participants, who would facilitate them through physical contact and engagement—e.g. poking, caressing, tapping, etc.

Additionally, contact was sometimes used to conduct the target activity. For example, in the Super Trouper project, instructors and researchers helped the children engage with the activities by offering support when needed, for example providing a hand for extra support when the children walked the tightwire.

4.3.1.4 Moving with Objects

Many projects used objects as design resources and goals in their methods. Hence Objects emerged as a whole category in its own right. Moving with Objects focuses on the relationship between objects and movement in doing and acting, as originally found in the Ast3, KOM1, KOM4 and Tan2 methods. These instances belonged to projects that had an emphasis on exploring possible movements done in combination with objects. We found that wearables in Magic outFit were the design goal, and objects were used to craft and simulate them. Objects were used to explore the sensations they produced and whether they invited movement or supported self-awareness. They allowed delving into other physical, cognitive, and emotional effects.

Objects were frequently used to explore, experience, generate and reflect on key physical and social actions [102] of the intended experience and their effects on it. For example, in Ast4, designers used objects as props to explore moving together with indirect physical contact, playing a variation of the Virtual Reality game Audioshield with two players. One player was inside VR while the other was outside. Players placed themselves side to side—in an I formation position [103]—, holding a controller in their outer hand and the end of a single toy golf club in the inner hand, closer to one another. The golf club connected them. The player in VR had to move and guide the other player to score. This allowed design researchers to explore how this kind of movement made them feel socially and physically, how it worked as a way to score, and how much they felt like dancing—a core design goal.

4.3.1.5 Constraints and Superpowers

We found several instances of movement explorations around Constraints—limiting in one way or another the poses, movements, or actions that otherwise would be feasible for a participant—and around what we called Superpowers, i.e. poses, movements, and actions that a participant would not be able to do in principle. This category emerged from the Tangibles project in general and the Ast3, Ast4, DiS1, S2M2, and Tan3 methods in particular.

Constraints were used as creative prompts, to explore and subvert possibilities tied to particularities of objects and environments. For example, mainstream VR experiences tend to hijack the senses of the VR user—mostly vision, but also touch and hearing—removing their presence from the physical space. Astaire worked towards subverting these trends and exploring the design space of collocated mixed-reality play with a two-player dance game. Embodied explorations in the design process involved constraining and providing access to senses, actions, and physical or virtual worlds [212].

Alternatively, Constraints emerged for practical reasons, due either to instrumentality or to the objects and models used during the activity. The Tangibles project is an example of the former because the target rehabilitation exercises required movements in specific directions. An example of the latter is S2M2, where Playmobil toys were used to enact a scenario involving an immersive environment with multisensory data representation. The mobility of the toys imposed constraints on the movements that could be explored from this third-person perspective. This was overcome through first-person involvement, i.e. physically engaging with those actions the toys were unable to enact. This is linked to the category of Perspectives.

Beyond this, several projects worked with exploring capacities, sensations, and possibilities beyond the participants’ current repertoire, both in the physical and virtual worlds. In the physical world, Magic outFit used MoF4 to bodystorm how to mitigate and transform the current sensations of participants using external stimuli produced by different objects. In the design of VR experiences, these explorations of possibilities of action were taken to the extreme when investigating Superpowers. For example, in Diverging Squash, designers altered the physics of the VR world—gravity and the bounciness of a ball—to explore a new way of playing squash. In ACHIEVE’s methods Ach3 and Ach4, designers explored being a child both in the physical world, through changing bodily stance and posture, and in the VR world, through changing the dimensions of the world in comparison to the participant’s avatar. This is linked to the Perspectives category and resonates with previous works regarding changing individual and social perception and action (e.g., [117]).

4.3.1.6 Instrumental Domain

While a free exploration of movement was pervasive throughout the projects in the portfolio, some of them, like Astaire, focused on particular embodied core mechanics [102] that were necessary for the user. This happened in the context of applications where movement belonged to an Instrumental Domain such as training or therapy. In the case of the Tangibles project in general, and Tan3 in particular, researchers were interested not in the free exploration of movement possibilities but in the re-contextualisation of specific, instrumental movements.

The design context in which a project was developed was often behind an explicit focus on instrumental goals. In the KOMPAN Workshop, the objective was to make physical fitness training more playful and thus more intrinsically motivating. The designers aimed to combine instrumental training parameters—such as exertion, strength, flexibility, coordination, motor skills, gravity, resistance, and power—with play characteristics. As another example, projects in LearnSPORTtech focused on instrumental values of particular training practices—such as yoga or weightlifting—and targeted particular exercises within those practices.

4.3.1.7 Engagement

Participants typically engaged well with the movement-based design activities by involving their bodies and frequently interacting with one another. In our empirical material, participants tended to feel good and comfortable with themselves and with one another, and there was usually high energy and a feeling of togetherness after embodied design sessions. Engagement as a sub-category originated from the general description of Astaire and Magic outFit and the methods MoF1 and KOM1.

The energy of the participants was carefully considered in several projects, alternating between higher- and lower-energy activities, and activities involving the body in different ways. For example, in Magic outFit, co-designers carefully interwove less physically and socially active activities with the main movement-based activities. In particular, more reflective and quieter activities, such as filling in body maps or brainstorming using sticky notes, were used as a way to change the focus—e.g. from recalling to acting, from acting to listening, or from generating ideas to documenting them—and to rest and recover energy. Energy management is indeed one of the Facilitation tasks listed above.

4.3.2 Space

We found several considerations around the use of Space, which could be either Physical, Virtual or a Hybrid of both. Additionally, we identified factors concerning the Delimitation and Room size of the space in use during the development of the activity.

4.3.2.1 Physical Space

In our corpus of data, projects used different types and scales of Physical spaces. In some cases, very specific and project-relevant places were used, often in instrumental domains where there was an overarching goal behind the design. This goal could be more or less playful. For example, LearnSPORTtech employed yoga and fitness studios, and KOMPAN Workshop resorted to the Athletic Experimentarium, a combination of a track and field stadium, obstacle course, parkour installations, and a cross-fit area. Specific places were also important in open and playful-oriented projects, like DigiFys, which focused on outdoor play environments. Plus, in VR-related projects, such as ACHIEVE, Diverging Squash and Astaire, appropriate rooms with VR equipment were essential. Locations were chosen for their relevance to the target application domain or for the logistics or materials needed to conduct the design activity.

However, we also found that methods used more generic spaces, which were adapted by facilitators and design researchers for the activity at hand. For example, in LearnSPORTtech, activities were organised both in a room transformed into a training space with basic yoga equipment and in the target place: a dedicated gym equipped with weights, machines and yoga mats. The former was chosen as it gave control and access to designers—e.g. it allowed them to organise and change the space during the process—while the latter offered control and access over the process to target users, who were instructors and practitioners.

In a middle ground, Super Trouper used a school gym hall, which incorporated some physical training equipment used during warm-ups—e.g. mats, balls, hoops, a vaulting horse, etc.—, and which was further equipped by the circus instructors and co-designers with circus-specific equipment such as a tightwire, trapeze, balance board, etc. Additionally, the design research team incorporated the technology—multiple wearables—and research equipment like cameras.

Finally, DigiFys reported both its methods DiF1 and DiF2 as being located outdoors and in public. While this was necessary given the project’s focus on designing and observing behaviour in playgrounds, it limited which ideation activities could be done and, in particular, required a more lightweight approach to facilitation.

4.3.2.2 Virtual and Hybrid Spaces

On one hand, Virtual space originated as a category from the Astaire project and the Ach1, Ach3, Ast2, Ast3, Ast4, DiS1 and S2M1 methods. On the other, Hybrid space originated from ACHIEVE, Astaire, Diverging Squash, and Online Course in Embodied Interaction as projects and from the Ach3, Ast3, Ast4 and DiS1 methods. Notice how some of these methods appear in both categories. VR emerged as a particular and distinctive space in the following projects: ACHIEVE, Diverging Squash, Astaire, and Sense2makeSense. The last two focused on the physical space as much as, or more than, the virtual one. In Sense2makeSense, the physical space was used to leverage important socio-spatial considerations to design an immersive and multisensory experience for VR. In contrast, the design goal of Astaire was set in the hybrid space: providing a fun and interesting play experience for a player in VR and out. Both projects involved both the physical and virtual worlds.

In Ast2, off-the-shelf VR experiences and games were used to sensitize designers. Additionally, in Ast4, they worked as design resources to help inspire, explore, and come up with interesting play ideas through transgression and re-appropriation. In both the ACHIEVE and Diverging Squash projects, custom 3D environments were designed and used for the activities. Some of these environments employed custom physics and behaviours, which required the added effort of 3D modelling, programming, testing, setting up, and onboarding, and also the added requirements of appropriate equipment and physical space. This is connected with the Facilitation and Planning and Logistics categories.

Projects using virtual spaces were also aware of and considered the role of the physical space. In some of them, the simultaneous exploration of the physical space was intrinsic to their goals. For example, in ACHIEVE a hybrid space was created by adding tracked physical shopping carts to the experience. This allowed the designers to employ tangibly embodied feedback in the virtual environment while also developing a meaningful connection to the physical space and collaborators. In this way, students outside VR would interact with students inside by aligning their physical and virtual positions. Students were able to see their fellows’ virtual perspectives on screens in the mixed-reality space. Additionally, physical props such as different food types were used in the embodied improvisational interactions.

In other cases, the hybrid space emerged out of necessity, like in the Online Course in Embodied Interaction, a course that needed to be conducted online due to COVID-19 pandemic restrictions but that otherwise would have benefited from participants being in the same space [197]. In that setting, individual participants connected through videoconferencing software but conducted the bodystorming activities—physical games, exploration of materials, movements in space, etc.—from their rooms at home. Students reported curating the space to be shown, which gave them control over the presentation of such an intimate space. They felt a sense of safety supported by their spaces. The familiarity of objects in their space allowed them to engage and ideate straightforwardly. While the physical space became the main place of bodily action, the online space became the place for social interaction, thus creating a hybrid form of bodystorming. This approach integrated two of the method’s main considerations, space and social interaction, from different perspectives.

4.3.2.3 Delimitation

We found that the Delimitation of working space was a relevant consideration across methods in our corpus. This category emerged from Ach1, Ach3, DiF1, DiF2, DiS1 and GIF1, and also from the GIFT project in general. We found the category was related to the concept of frames [59] and the concept of the magic circle of play from game design and game studies [78,140]. Frames refer to social conventions and expectations structuring and organizing our experience [59]. The magic circle of play refers to a special time and space created when playing that is governed by different rules and understandings than in the everyday world [42,140,161]. Similarly, embodied design methods seem to seek and foster a distinctive frame, set apart from ordinary life, in which particular kinds of physical and social action that might seem weird or unusual in everyday contexts are sought and supported.

At times, special spaces emerged as participants engaged in the design or play activity. For example, in Astaire, a demarcated round-shaped stage emerged where players in and outside VR interacted. The rest of the team stayed around acting as a participating audience, commenting and assisting when needed. Contrastingly, in other projects, a good deal of attention was paid to boundary objects and marks helping physically demarcate areas to focus attention, understanding, intention, and action [42]. Sometimes the limits of the space were physically indicated through the arrangement of furniture and objects in the room, and sometimes by marking spaces on the floor with tape. For example, in GIFT, several activities included pretending to be in a museum. Delimiting the space with barriers representing different rooms served to signal what space was standing in for the museum as a whole. Further, it encouraged a high level of social interaction between participants in a focused space.

Delimitation of the physical space was at the core of the design goals of DigiFys. The designers wanted not only to install interactive playground equipment but also to create a space that would foster particular movements, paths, and behaviours between play stations. As such, landscape architects worked together with interaction designers, and natural materials such as bushes, flower beds and paths were designed to delimit the interaction space, promoting movement and social interaction in certain areas and limiting access to other areas. Finally, furniture emerged as a delimiting spatial boundary in some projects, even if unintended. For example, in ACHIEVE, the designers expected the furniture to be used by the students as a design material. However, students initially understood furniture as fixed elements in the space.

4.3.2.4 Room Size

Considerations of delimitation were related to space requirements regarding Room Size across projects. These requirements first appeared in our corpus in the GIFT and Online Course in Embodied Interaction projects, and in the Ach3, Ast1, Ast2, Ast3, Ast4 and DiF1 methods. For example, we found that GIFT reported having low requirements for space, and Astaire reported needing only a big enough space to move and run around. In contrast, ACHIEVE reported needing a large room for their bodystorming sessions due to their video recording setup and because of health measures regarding COVID-19. DigiFys, by contrast, needed events to be run in authentic environments. Because the material and spatial conditions were in focus for these studies, selecting authentic environments that were representative of different types of places—a playground, in this case—became a central consideration. Similarly, Super Trouper required big halls—a circus hall and a primary school physical education hall—because its design activities involved multiple large objects and furniture such as mats and mattresses, trapezes, benches, and trellises that could not be placed elsewhere.

An interesting compromise regarding room size and engagement comes from Magic outFit. The researchers had a problem of interference caused by two groups being in the same room. On the one hand, they wanted all participants in the same space to share materials and interact; on the other hand, the two groups interacting with sound interfered with each other’s ideation processes. Sometimes the room was too noisy and did not allow participants to hear some of the more subtle sounds well, especially when the sound objects were applied to body parts or spaces far from the ears, such as close to the feet.

4.3.3 Objects

Objects was one of the most prominent categories. Most of the techniques relied on the use of objects, which ranged from tangible, Physical objects—including a special focus on Cards—to Technologies of different sorts and fidelity. In the following, I cover this range and conclude by also articulating two properties and strategies around the use of objects: Affordances and Subversion.

4.3.3.1 Physical Objects

The use of Physical objects was very common across the projects. For instance, we found them in notes regarding Ach3, Ach6, Ast3, DiF2, KOM3, LSt1, LSt3, MoF4, S2M1, Tan2, and Tan3, and also in the general descriptions of GIFT and LearnSPORTtech. Physical objects were frequently described as common, simple, readily available, and low cost, meaning that they were cheap to buy or create and that they did not necessarily need to be handled with special care. We observed that, because of how they were used, the objects were not destructively transformed, and when they were, they were easy to replace. All of this made these objects malleable, adaptable, and highly transformable, and provided them with a strong re-signification power. For example, as we mentioned earlier in the Moving with Objects sub-category regarding Ast4, a toy club for playing pretence golf was momentarily torn apart: the clubhead was removed and the shaft was used to extend the reach of the controllers.

Objects were key for divergent design as crucial prompts for ideation. They were often essential in multidisciplinary contexts involving experts and novices. For example, both in Magic outFit and the Online Course in Embodied Interaction, simple objects supporting different sensory qualities—textures, shapes, weights, or sounds—enabled people with and without a technical background to generate ideas for future sensing and actuating technologies.

Idea materialization using objects played a strong role in convergent phases of ideation, involving building mock-ups. These acted as “quick and dirty” experience prototypes [26] that allowed other participants to get a sense of the target experience. For example, in S2M1, participants within a team used objects to individually come up with ideas for multisensory immersive data representation. These ideas were then shared among the group and iterated together in the rest of the activities from Sense2makeSense.

Additionally, objects were used to prototype the space in which the activity would take place and explore ideas involving spatial elements. For example, in Ast3, cardboard boxes were used to explore an idea involving a hybrid obstacle course with physical and digital obstacles.

We observed very deliberate decisions regarding what kinds of elements to bring to use during the methods that involved objects. For instance, objects were chosen for a given method due to one or more of the following:

In most cases, the objects that were used were common crafting materials and everyday objects, such as cardboard boxes, tape, sticks, balls, toys, lights, dolls, hand puppets, children’s musical instruments, glue guns, pipe cleaners, cardboard, scissors, knives, sponges, modelling wax, foam cardboard, straws, plastic mugs, barbecue sticks, adhesive tape, a stapler, a multi-head screwdriver, a Rubik’s cube, and small boxes with magnetic closures. Crafting materials were essential to transform and re-signify other kinds of objects.

The objects that were brought in were also related to the target domain, like sports equipment in KOMPAN Workshop and Super Trouper. These objects were essential to support ideation considering domain-specific practices.

4.3.3.2 Cards

Paper Cards were a special class of physical objects used across methods in different ways. The projects that used cards were KOMPAN Workshop, LearnSPORTtech, Magic outFit, Sense2makeSense, and Super Trouper. Specifically, the methods from which this category emerged were KOM3, KOM4, KOM5, KOM6, MoF1, MoF2, MoF3, S2M2, S2M3, S2M4, and SuT4.

Cards were used across projects to represent the following categories:

Regarding objectives, uses, and rules, the cards were used in the following ways across methods:

We found that cards were used according to different mechanics. In some cases, the cards were used by the participants as a way of getting a random design prompt. This was implemented through shuffling and drawing from a deck in KOMPAN Workshop, or by scattering cards on the floor and picking up one in Magic outFit. This created some spatial requirements to consider, as previously discussed in Physical Space. In other cases, the facilitators or participants would choose the cards after careful consideration. For instance, in Magic outFit, participants chose the card with a keyword that best described how they had felt, and in KOMPAN Workshop, designers added action modifiers that they considered interesting to introduce variations. Additionally, there were occurrences where cards could be modified on the spot. This happened in Magic outFit and Sense2makeSense, which featured blank or wild cards for the participants to fill in using sticky notes.

In several projects, card use was carefully timed in the schedule of design activities. For example, in Magic outFit, cards depicting barriers to engaging in physical activity set up the design goal by being used before the design and enactment stages. As discussed above in Engagement, when card usage was combined with activities that engaged the whole body, some friction would appear and movement creation would be hindered.

Regarding the design of the cards, they were often minimalistic, containing a few keywords or an image in the form of a picture or an icon. Cards with keywords would often have a defining and focusing character while cards with imagery would be used to inspire and evoke. Images came either from stock pictures and icons or from in-house designs. Cards often featured categories identified either with colours or with printed icons. This allowed for quick identification in the design activities. It is worth mentioning that cards in all projects were highly visual and assumed sighted participants. Hence, without further modifications, the studied cards would present an accessibility barrier for participants with visual impairments.

4.3.3.3 Technologies

Technologies with different levels of fidelity, high or low, were present in several of the movement-based design methods of our corpus. Specifically, this category emerged from the following projects: Astaire, Diverging Squash, Magic outFit, Super Trouper and Tangibles; and from the following methods Ach7, Ast3, Ast4, DiF1, KOM5, LSt1, LSt2, MoF3, S2M1, S2M2 and SuT3. On the lowest end of this technological fidelity range, we could find “fake tech”: props or cards that represented and substituted a specific device or functionality during the activity. Such elements were often used when the details of implementation were still not known or needed, or when the cost of logistics for the existing technology would be prohibitive for the given design stage. For example, in KOM5, a set of technology cards—see Cards above—was used when building physical mock-ups of the ideated interactive interventions. The focus was on experiencing the 1:1 scale of the mock-up and not on testing the proposed interactivity.

In contrast, some projects included already working technology in their methods, such as LearnSPORTtech, Magic outFit and Super Trouper. For instance, LearnSPORTtech used a series of wearables—Training Technology Probes, or TTPs—that had been designed and implemented in the context of yoga and circus training, and then deployed them in embodied explorations of weightlifting [182]. In other projects, the technological element was central in the form of Virtual Reality. This was the case of projects including Astaire and Diverging Squash, which employed VR both as the design goal and the vehicle to design. In ACHIEVE, similarly to the work of [200], designers used VR to facilitate embedding and placing virtual objects, lighting, sounds, and video screens within a virtual supermarket as a vehicle to design.

4.3.3.4 Affordances

A key element that we found when analyzing the use of objects across these movement-based design methods was the concept of Affordances [56,86,115]. In our empirical material, affordances mostly referred to physical actions allowed and invited by an object or environment [124]. Further, they had a strong focus on materiality and material aspects. This category emerged from the projects of Astaire, GIFT, KOMPAN Workshop and LearnSPORTtech, and the Ach3, Ach4, Ach5, DiS1, MoF3, Tan2, and Tan3 methods.

Affordances were considered when selecting objects to bring to design activities for the actions—core mechanics—they would possibly inspire. For example, in Magic outFit, designers included stress-release balls to invite explorations around squeezing. Additionally, affordances emerged to reflect creative emergent behaviour in the design sessions supported by objects, which was instrumental in design. For example, in ACHIEVE, the participating students pushed a shopping cart but could also physically sit in it while simultaneously puppeteering a virtual character in VR. Even when interacting in a virtual space, such affordances steered the ideation process.

4.3.3.5 Subversion

Some methods were focused on finding new uses for objects and technologies that were designed with a specific purpose: Subversion emerged as a sub-category from Ast3, Ast4, Ast5, KOM1, LSt1, LSt2, S2M1, SuT2, SuT3, Tan1 and Tan2. These new uses were either the objective of the project in general or a way to aid in the ideation process. We discussed above, in Technologies, an instance of LearnSPORTtech that exemplified the former: embodied explorations in LSt1 leveraged Training Technology Probes that were initially developed for yoga [179] and which were brought to weightlifting to find new uses in this other physical training practice [186]. An example of subversion aiding in the ideation process is Ast4, which, as we mentioned above in Virtual and Hybrid spaces, used existing VR games as platforms to explore different game mechanics and affordances of VR equipment.

4.4 Action Points and Recommendations

Based on the previous observations regarding the characteristics of movement-based design methods, we compiled a list of action points, insights and recommendations focused on the categories and subcategories of the Design Resources group (Movement, Space and Objects). We also included references to other relevant categories. These action points and recommendations are intended as a practical guide for both novice and seasoned designers.

4.4.1 Movement

4.4.1.1 Movement Qualities

4.4.1.2 Body Regions

4.4.1.3 Contact

4.4.1.4 Moving with Objects

4.4.1.5 Constraints and Superpowers

4.4.1.6 Instrumental Domain

4.4.1.7 Engagement

4.4.2 Space

4.4.2.1 Physical Space

4.4.2.2 Virtual and Hybrid Space

4.4.2.3 Delimitation

4.4.2.4 Room Size

4.4.3 Objects

4.4.3.1 Physical Objects

4.4.3.2 Cards

4.4.3.3 Technologies

4.4.3.4 Affordances

4.5 General Reflections

Movement-based design methods exhibit properties that are less common in other forms of ideation. In particular, the physicality of these methods requires important considerations related to the bodily engagement of participants, as well as to space and materiality. This poses challenges and opportunities not seen in classical ideation activities. Therefore, those who have not experimented with these kinds of methods might find them intimidating.

Related previous works have resulted in categorizations of movement-based design methods: [5] and their typology of seven foci, [201] using an analytical framework based on estrangement, and [97] proposing a design methodology based on choreographic tools. Despite their relevance, I contend that these works are not an ideal entry point for the design researcher who wants to start using movement-based methods.

In contrast, with this work I provide a practical guide for them by focusing on surfacing tacit knowledge from a group of experienced researchers. I focused on analysing the Design Resources (Movement, Space and Objects) because of how practical they are. Through the Action Points and Recommendations, I connected them to the categories of Facilitation and Planning and Logistics.

In this way, novices can find a clear route to start experimenting with movement-based design methods. The seasoned designer will identify that the action points are not groundbreaking—rather, they encapsulate and articulate existing tacit knowledge of embodied design. However, I also believe that seasoned designers can find utility in our work: using these categories as a live palette of possibilities that can inspire them, and as a document to help them argue for or against particular design choices.

I acknowledge that this work provides knowledge that is articulated and shared in a written format, which is not enough to engage with and facilitate these kinds of methods. Movement-based design methods require hands-on involvement and a first-person perspective on the moving body that is not usually integrated (yet) into the mainstream training of Interaction Designers. In this sense, I would like to emphasize the concept of Somatic Connoisseurship [145] as a lens with which to involve experienced somatic practitioners in HCI design processes, and the work on Soma Design [72,74,203] as a way to develop one’s sensibilities and self-knowledge. Obtaining first-hand experience in movement practices is crucial for involving the first-person perspective emphasized by this kind of design practice [73].

Our methodology used a bottom-up approach, was practice-based, and led to a comprehensive set of results. I contend that our results have validity due to the following considerations. First, our original corpus of data came from several projects of different authors and the reported movement-based design methods that they used in practice. During the initial characterization process, we used perspectives both from two experts in the field and from two students in training. This allowed for the emergence of characteristics relevant to different levels of expertise. Additionally, we worked with a loose set of guidelines so that the characteristics that emerged would tend to be divergent. We decided on the names and boundaries of emerging categories and subcategories based on consensus combining these different levels of expertise. Finally, the insights, action points and recommendations I share are based on these empirics and are tied to previous work.

A limitation of the work is that we only focused on one group of categories. Even though we attempted to connect the Action Points and Recommendations with other categories, there is still work to do to expand on all of them and their implications. In this way, the work offers a palette of possibilities that we acknowledge are not exhaustive or definitive. Additionally, the action points are articulated based on our tacit knowledge and references. They worked for us, but their generative capacity for others needs to be proven. Future work can expand or challenge what I presented here.

Finally, I also acknowledge the limitation of the corpus, as it is not necessarily representative of all movement-based design processes. I argue that its origin in an international design research consortium with a focus on movement-based design makes it illustrative of different approaches to these methods, but I recognise that, had we started from another corpus of data, we could have obtained a different set of categories and action points.

4.6 Chapter Takeaways

This work adds to the Interaction Design and HCI body of work on movement-based design methods: we developed an empirically-based characterisation of design resources as employed in movement-based design methods. Using this characterisation as a starting point, I provided a practical guide for novice and seasoned designers to engage with these methods.

TODO(transition to next chapter)

5 Designing a Minimalist Toolkit for Embodied Sketching

This chapter draws on publications B and C [190,192].

Figure 2: Overview of the Movits, the icons that represent their interactions, and design concepts using them.

In this chapter, I report on the design, development and evaluation of the Movits, a toolkit of minimalist technology probes that support hands-on explorations and design of future interactions driven by movement and multisensory feedback (Fig. 2.) The Movits are relatively small and simple wearable digital units that provide audiovisual or visuotactile patterns in response to body inputs such as motion, spatial orientation or touch.

I was interested in designing and employing a toolkit of probes that would serve as an aid for the embodied sketching workshops in our design process. The toolkit would enable the relevant stakeholders in our project—interaction designers, engineers, movement and health professionals in the areas of rehabilitation, physical therapy and occupational therapy, along with their patients—to establish a common ground regarding the possibilities of interactive technologies based on multisensory feedback, and empower them all to contribute to the design process. The design of this toolkit was driven by key design qualities and requirements connected to our application domain. In particular, we wanted our toolkit to be modular, interactive, open-ended, simple, easy to deploy and with a strong focus on movement-based applications. While some of these characteristics are present in previous works, none features all of them in the form of an ideation toolkit.

As we have discussed, the design community has long realised the importance of taking an embodied and holistic approach to design. To help in this regard, they have developed strategies, such as design methodologies like Embodied Sketching [108] and Soma Design [72,74]; and custom design tools. Among the latter, we find bodystorming baskets—introduced in the previous chapter—as collections of simple and diverse props for ideation. While often these props are analogue—such as balls, textiles, play jewellery, styrofoam objects, and mechanical gadgets—, they might also include simple off-the-shelf technology like a buzzer or a laser pointer, featuring interactivity that would be simple and relevant for the application domain [184]. These technological props can be evocative and may help spur creativity, allowing participants to engage multisensorially and explore design possibilities. However, depending on the design challenge, it might be hard to find a diverse set of interactive props with properties befitting the target application domain, which may affect ideation.

In such situations, it is worth considering having a bodystorming basket with ad hoc technology props or probes [80]—simple, flexible and adaptable technologies to inspire users and researchers to ideate new technologies [80]. Many examples can be found in the design research community. Several of these toolkits assemble a collection of Bits—the Inspirational Bits [165], the Soma Bits [202,203], the Menarche Bits [154,155], the Wearable Bits [83] and the DanceBits [41]. (TODO add little bits) The work that I describe in this chapter adds to this body of work, focusing in particular on the domain of movement-based design. Following that naming tradition, the Movits acquired their name as a portmanteau word from Move(ment) Bits, or alternatively from Move-its, because they are focused on designing for the moving body with the moving body.

In this chapter, I detail the analysis that guided my design process. I describe the resulting toolkit, and discuss four evaluation workshops with different participants—design students, technologists, dancers and physiotherapists—and the insights they yielded.

5.1 Designing the Toolkit

To design the Movits, I was interested in analysing the features of previous projects of wearables for movement learning and physical activity and prior work in toolkits for embodied interaction design—discussed in Chapter 2—so that I could develop minimalist versions of common interactions they presented. I was especially curious about the modalities of inputs and outputs they had, as well as the type of mappings between them that they conveyed. As a guide during the design process, I established a list of design requirements based both on personal values I wanted to put forward, and on the theoretical concepts and methodological approaches that inform our work. These requirements and the results of my analysis, in combination, supported me in selecting the hardware and software platforms I would use, and in delimiting the types of interactions we would like to develop as a first iteration of the kit.

5.1.1 Inputs, Outputs and Mappings

I analysed the modalities of inputs and outputs employed in the related embodied ideation toolkits reviewed above, along with related projects of wearables and multisensory technologies in movement learning or physical activity contexts.

TODO(convert the following to tables?)

In terms of input modalities, I found that projects used spatial orientation [2,41,92,93,105,106,152,167,169,179,182], motion [92,93,105,106,137,162,167,169,182], pressure [48,106,131,137,162,167,169], touch [95,125,126], buttons [41], knobs [202,203], biosignals [2] and fully-fledged graphical user interfaces [211].

Regarding output modalities, I found the use of sound [2,92,93,95,105,106,152,162,167,169,182], vibrotactile haptics [48,105,106,125,126,167,182], lights [105,106,131,137,179,182], shape-changes through inflation [154,155,202,203], heat [125,202,203] and robotic touch gestures [211] as outputs. Fig. 3 illustrates the inputs and outputs I found and the relationships between them.

Figure 3: Relationships between Inputs and Outputs in analysed works

Additionally, I noted the type of relationship that was established between inputs and outputs. Roughly, they could be classified as either continuous or discrete. A continuous mapping would involve the direct or inverse proportional modulation of a dimension of the output—e.g. pitch, frequency, intensity, colour—in relation to the input. A discrete mapping would be based on single or multiple thresholds of the input quantities that trigger a behaviour—e.g. a musical note or a vibration pulse—when crossed.
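To make the distinction concrete, the following sketch illustrates the two mapping styles for a normalised input in [0, 1]. Function names, output ranges and threshold values are illustrative choices of mine, not taken from any of the reviewed systems.

```python
# Illustrative sketch of the two mapping styles: a continuous mapping
# proportionally modulates one output dimension (here, a hypothetical
# pitch in Hz), while a discrete mapping triggers a behaviour when
# an input threshold is crossed (here, returning a band index).

def continuous_mapping(x, out_min=220.0, out_max=880.0):
    """Directly proportional modulation of an output dimension."""
    return out_min + (out_max - out_min) * x

def discrete_mapping(x, thresholds=(0.33, 0.66)):
    """Count how many thresholds the input has crossed; each band
    could trigger a distinct behaviour (a note, a vibration pulse)."""
    band = 0
    for t in thresholds:
        if x >= t:
            band += 1
    return band

continuous_mapping(0.5)   # → 550.0, halfway between the output extremes
discrete_mapping(0.7)     # → 2, i.e. both thresholds were crossed
```

An inverse proportional mapping would simply flip the output range; multi-threshold discrete mappings generalise the single-threshold trigger described above.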

5.1.2 Design Requirements

My goal with the Movits design was to generate a toolkit of minimal units reflecting preexisting and proven interactions in open-ended wearable projects for movement applications employing augmented feedback. My aim was that such a toolkit could contribute to the overarching aim of fostering exploration and idea generation in movement-centric domains using interactive technology. Based on the design inspirations presented in the previous section, I articulated a series of values that I aimed to instil in our toolkit design.

For instance, when I refer to minimal units, I aim to indicate simplicity—i.e. low complexity—and the decision to only use the technologies that would be necessary and sufficient [136] for the task. Towards the design of the first set of the toolkit, this would mean that the Movits should be self-contained and work in a standalone manner: a group of designers using them should be able to bring them into an embodied design workshop without having to bring an extra computer to make them work or troubleshoot them. Therefore, the devices should provide straightforward interactions without a setup or calibration step: one should be able to turn them on and start using them immediately. I anticipated that this would likely help participants to figure out meaningful interactions by organically exploring them.

Additionally, the Movits should work offline, i.e. without Wi-Fi or other wireless communications. This would support embodied sketching done in the wild or outdoors. It would also reduce technical complexity during embodied design workshops, keeping the focus on embodied action rather than on troubleshooting. I left the development of relatively more complex design probes—implementing features such as communication—for future work. For the early design stages that the Movits targeted, I was content with such inter-device interaction being simulated or puppeteered in a Wizard of Oz manner [35].

The Movits toolkit draws on the intercorporeal biofeedback [183] strong concept using its four characteristics to shape our design goals and envisioned preferred state [213]: the units in the toolkit would be intended to provide a shared frame of reference, via audiovisual or visuotactile feedback, that thanks to their open-endedness would likely allow its users to engage in fluid meaning allocation. Because of the minimalism in their behaviour, they would likely be unobtrusive. Therefore, they could be used to guide attention and action as an additional and complementary—to other objects and activities—, interwoven interactional resource. Furthermore, the toolkit would reflect an Embodied Interaction approach, and be designed in particular to support embodied design methods, such as those within embodied sketching [108]: sensitizing, ideating and prototyping, in particular in the context of movement learning experiences.

5.1.3 Hardware and Software

I chose to develop our toolkit using Adafruit Circuit Playground Express and Gemma M0 boards, along with some extra components—vibration motors, motor controllers and buzzers. I selected these platforms because of their assortment of built-in components and capabilities—such as accelerometers, speakers, lights, buttons, and capacitive touch input—and computational specifications which allow for simple sound processing and playback of short sound samples. Additionally, I decided to use these boards because of their potential availability as prototyping tools across research and design institutions. I intended to streamline the process of researchers and designers getting up and running with our system. Furthermore, these boards reside at a middle ground regarding complexity in hardware and software, ideal for our design goals. For the physical construction of the Movits, besides the boards, I used e-textile materials such as conductive thread and fabric, soft enclosures and straps. The Movits have velcro behind them so that they can be attached to textile straps worn on arms, legs, head or torso, or directly to the wearer’s clothes. For programming the Movits I chose to use CircuitPython to leverage its support for beginners and allow for a simple re-configuration of its parameters should a more advanced design session require it.

5.1.4 Selection of Inputs and Outputs

I chose to craft a first iteration of the toolkit implementing the more common modalities from the input-output analysis (Fig. 3), which at the same time were simpler in terms of setup, implementation, and use. I was interested in having enough modalities that could still support and reflect more rounded and polished movement-based designs, such as those in the multiple projects I reviewed.

For this, I selected three types of inputs—orientation, motion and touch—, and three types of outputs—sound, vibrotactile haptics and lights (Fig. 4). I chose touch as an input because we reasoned it could emulate, to some extent, the behaviours provided by the pressure and button inputs while being relatively simpler to implement with the tools we selected.

Figure 4: The selected inputs, mappings and outputs of the Movits.

For the first iteration of the Movits, I decided to focus on the two most prominent output modalities in our analysis, sound and vibration. I wanted to keep them in separate modules to be able to evaluate differences in their use. I reasoned that if someone wanted to use these modalities together, they could physically join the Movits that exhibited them. However, to maintain a shared frame of reference [183], the vibration had to be accompanied by another modality perceivable from the outside, and for this, I chose lights. Vision is usually the primary sensory modality, and sighted people can easily and readily make sense of visual information [184]. When well designed, visual feedback has proven to be useful in dynamic and changing contexts of movement teaching and learning [186,187]. Further, as the use of lights synchronised with other outputs is technically straightforward with the boards I chose to use, I decided to incorporate them into the Movits with sound as an output as well. In this way, the Movits would provide either audiovisual or visuotactile feedback. Fig. 4 illustrates the inputs, outputs and mappings we chose to implement in the first iteration of the Movits.

5.2 The Movits Toolkit

Figure 5: The Movits toolkit.

I designed a total of nine Movits divided into three groups according to their input: four Tilt Movits that use orientation, three Motion Movits that use the measured change of acceleration, and two Touch Movits that use capacitive touch (Fig. 5.) In the name of each Movit, we attempt to indicate its behaviour using three parts: (1) the type of input, (2) the word “To” or “Play” to indicate a continuous or discrete mapping respectively, and (3) a descriptive word indicating the type of output. Tbl. 2 provides an overview of their names, inputs, outputs and mappings.

Table 2: Input-output mappings implemented in the Movits.
Name Input Output Mapping
TiltToMetronome Orientation Audiovisual Continuous
TiltPlayNote Orientation Audiovisual Discrete
TiltToVibration Orientation Visuotactile Continuous
TiltPlayVibration Orientation Visuotactile Discrete
MotionToPitch Motion Audiovisual Continuous
MotionPlaySample Motion Audiovisual Discrete
MotionPlayVibration Motion Visuotactile Discrete
TouchPlaySound Touch Audiovisual Discrete
TouchPlayVibration Touch Visuotactile Discrete

5.2.1 Tilt Movits

The Movits based on orientation calculate it from the gravity pull measured from the three axes in the accelerometer, assuming a relatively static position. As an input, they use the angle of rotation of the plane of the device around a single axis. The axis of rotation can be selected by pressing a button on the board. All of these units have a similar behaviour regarding visual feedback: they use a rainbow-like palette that is mapped to the full rotation of the units.
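The orientation computation can be sketched as follows. This is an illustrative reconstruction under the static assumption stated above—with the device roughly still, the accelerometer mainly measures gravity, so the rotation around one axis can be recovered from the other two axes—and is not the Movits’ actual firmware; names and axis conventions are hypothetical.

```python
# Hedged sketch: recovering a tilt angle around one selectable axis from
# the gravity components measured on a 3-axis accelerometer, assuming the
# device is relatively static so gravity dominates the reading.
import math

def tilt_angle(ax, ay, az, axis="x"):
    """Rotation angle in degrees around the selected axis, in [-180, 180]."""
    if axis == "x":      # rotation observed in the y-z plane
        return math.degrees(math.atan2(ay, az))
    elif axis == "y":    # rotation observed in the x-z plane
        return math.degrees(math.atan2(ax, az))
    raise ValueError("axis must be 'x' or 'y'")

# Device lying flat, gravity along +z only: no tilt.
tilt_angle(0.0, 0.0, 9.81)   # → 0.0
# Rotated 90° about x so gravity falls along +y.
tilt_angle(0.0, 9.81, 0.0)   # ≈ 90.0
```

The rainbow-like visual palette described above would then be indexed by this angle over the full rotation.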

5.2.1.1 TiltToMetronome

This Movit provides a metronome that changes its frequency according to the angle of rotation. Its sonic behaviour is inspired by the Movement and Tiltband TTPs [105,106,182], but instead of a pure tone for the sound, it uses a sound sample of a real metronome and provides the option to choose the axis of rotation. The coloured lights in this Movit pulsate at the same frequency as the metronome.

5.2.1.2 TiltPlayNote

In this Movit, the full rotation of the device is divided into eight angular sections of the same size. A note of the C major scale is assigned to each one of them. When the Movit enters a given angular section, the corresponding note is played once. This behaviour is based on the sonic phrase paradigm of Go-with-the-flow [152], where a single scale is correlated to changes in orientation. In this Movit, we assigned one colour of the rainbow-like palette per note. The coloured light stays on during each angular section.
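The section-to-note logic can be sketched as follows—an illustrative approximation, not the shipped code; the note labels and the edge-triggering helper are my own:

```python
# Hedged sketch of TiltPlayNote: the full 360° rotation is split into
# eight equal angular sections, each assigned a note of the C major
# scale; a note sounds only upon entering a new section.
C_MAJOR = ["C", "D", "E", "F", "G", "A", "B", "C'"]

def section_for_angle(angle_deg):
    """Map an angle in degrees to one of eight equal 45° sections."""
    return int(angle_deg % 360) * 8 // 360

def note_on_enter(angle_deg, previous_section):
    """Return (note, section); the note is None while staying in a section."""
    section = section_for_angle(angle_deg)
    note = C_MAJOR[section] if section != previous_section else None
    return note, section

note, sec = note_on_enter(10, None)   # → ("C", 0): section 0 entered
note, sec = note_on_enter(20, sec)    # → (None, 0): still in section 0
note, sec = note_on_enter(50, sec)    # → ("D", 1): section 1 entered
```

In the actual device, entering a section would also switch on the section’s colour from the rainbow-like palette.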

5.2.1.3 TiltToVibration

This Movit uses a single vibrotactile actuator connected to a controller that allows modulating the intensity of vibration based on the angle of rotation: the change in intensity is directly proportional to the angle. The colour of the light also changes based on the rotation angle.

5.2.1.4 TiltPlayVibration

This Movit contains two vibrotactile actuators—one at each side—and divides the full rotation into three sections: neutral, left and right. When the device is tilted and its orientation reaches the left or right section, it activates the actuator of that side. The lights on the same side of the activated actuator are lit and change colour depending on the amount of tilt. This Movit is based on the Tiltband and FrontBalance TTPs [105,106,182] but provides a simplified version in terms of form factor along with more customization in its behaviour.

5.2.2 Motion Units

The Movits based on motion calculate and use the total absolute difference in the acceleration measured in the three axes between two points in time. In this way, movements that involve sudden changes in motion trajectory generate a greater value of motion than those that are slow or with a constant direction.
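The motion measure described above can be sketched as follows (illustrative, with hypothetical names):

```python
# Hedged sketch of the motion amount: the total absolute difference of
# the three acceleration axes between two consecutive readings. Sudden
# changes of trajectory yield large values; slow or constant-direction
# movement yields small ones.
def motion_amount(prev, curr):
    """Sum of absolute per-axis differences between two (x, y, z) readings."""
    return sum(abs(c - p) for p, c in zip(prev, curr))

motion_amount((0.0, 0.0, 9.8), (0.0, 0.0, 9.8))   # → 0.0: device at rest
motion_amount((0.0, 0.0, 9.8), (3.0, -2.0, 9.8))  # → 5.0: a sudden jolt
```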

5.2.2.1 MotionToPitch

This Movit emits notes of increasing pitch proportional to the amount of measured motion. This behaviour is mostly inspired by the Movement TTP [105,106,182]. The rainbow-like colour palette is mapped to the notes that are played.

5.2.2.2 MotionPlaySample

In this Movit, when a threshold of motion is crossed, one sample of sound from a given collection is randomly selected and played. For instance, when one moves, one can hear sounds of splashing water or blowing wind, as if those sounds were generated by one’s own motion. The collection of samples can be selected by pressing a button on the board. This Movit is inspired by Soniband [93] in both its behaviour and the types of sounds that are used, but it presents a simplified version of the system regarding requirements of hardware and calibration capabilities. In this Movit, the lights are turned on when a sample is played and their colour is fixed and based on the chosen selection.

5.2.2.3 MotionPlayVibration

Similar to the TiltToVibration Movit, MotionPlayVibration uses a single vibrotactile actuator. In this case, a vibration is triggered once a threshold of the amount of motion is crossed. In my analysis of previous projects, I did not find an example of this behaviour, but we decided to implement it to allow for its exploration.

5.2.3 Touch Movits

Our Touch Movits are devices partially covered by a conductive fabric. Touching the fabric activates an output—vibration or sound—that stays on until the touch is released. The output of these units is accompanied by a light turned on simultaneously. These units are based on the capacitive touch capabilities of our prototyping boards.

5.2.3.1 TouchPlaySound

This Movit plays sound samples and turns a light on when touched. The samples are the same as the MotionPlaySample Movit, based on the work of [93]: they consist of water and wind sounds.

5.2.3.2 TouchPlayVibration

This Movit activates a vibration motor disc as long as it is touched. It is based on the Sense Pouch [125] and Felt Sense Glove [125,126], but replaces their soft button with the touch of the fabric.

5.2.4 Additional Details

In the following, I summarise the basic interactions of the Movits:

The Movits provide either audiovisual or visuotactile feedback. Their bi-modal output is intended to assist the shared frame of reference between wearers and audience as postulated by intercorporeal biofeedback [183]. The output of each of the units was designed to be open-ended—thus likely allowing for a fluid meaning allocation [183] between their users—, and unobtrusive—so that it would be feasible to guide attention and action [183] toward and away from it, and it could potentially blend well as an interwoven interactional resource [183].

I included in the Movits a minimal degree of configuration, as I imagined that it could be useful to better adapt them to a specific design scenario. For instance, all of them have a switch that allows one to turn off the sound or vibration, leaving the lights only. This originated as an aid during development but became useful when demonstrating the Movits. Additionally, the Tilt and Motion Movits make use of the two buttons in the Circuit Playground Express boards. One button allows one to cycle between modes of operation, for instance changing the axis of rotation for the Tilt Movits, the initial pitch in MotionToPitch, the collection of sound samples in MotionPlaySample or the duration of vibration in MotionPlayVibration. The other button enables one to cycle between up to three levels of perceived sensitivity by changing the trigger thresholds in the case of the discrete interactions or the range of output in the continuous interactions.
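The button-driven configuration can be modelled roughly as follows. This is a hedged sketch, not the shipped firmware: the class name, mode labels and threshold values are hypothetical, chosen only to illustrate the cycling behaviour of the two buttons.

```python
# Hedged sketch of the Movits' minimal configuration: one button cycles
# between modes of operation (e.g. axes of rotation, sample collections),
# the other cycles between up to three sensitivity levels, modelled here
# as trigger thresholds for a discrete-mapping Movit.
class MovitConfig:
    def __init__(self, modes, thresholds=(0.3, 0.5, 0.8)):
        self.modes = modes            # available modes of operation
        self.thresholds = thresholds  # one trigger threshold per level
        self.mode_index = 0
        self.level = 0

    def press_mode_button(self):
        """Cycle to the next mode of operation and return it."""
        self.mode_index = (self.mode_index + 1) % len(self.modes)
        return self.modes[self.mode_index]

    def press_sensitivity_button(self):
        """Cycle between sensitivity levels; return the active threshold."""
        self.level = (self.level + 1) % len(self.thresholds)
        return self.thresholds[self.level]

cfg = MovitConfig(modes=["x-axis", "y-axis"])
cfg.press_mode_button()         # → "y-axis"
cfg.press_sensitivity_button()  # → 0.5, the threshold for level 1
```

For the continuous interactions, the sensitivity button would instead rescale the output range rather than move a trigger threshold.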

To design the Movits, I was interested in attempting to convey a straightforward mapping of their input modality by activating or modulating a single parameter of the output. In line with the decision to separate the sonic and haptic feedback from the Movits, my intention for designing them consisted of abstracting and simplifying the interactions of our references—when they existed—so that each resulting Movit would exhibit a single behaviour. The source code I wrote, along with installation instructions, can be found in an online repository [193].

5.3 Testing the Movits: Four Workshops

To validate the potential of the Movits as ideation probes for explorations of movement-based design, I co-organised four embodied sketching workshops with different populations.

5.3.1 Workshop Participants

I was interested in probing the potential and limitations of the Movits. While the Movits were originally conceived to be used in participatory design workshops inviting professional physiotherapists, I was also interested in testing whether they lent themselves as useful creative tools for participants from different contexts and with different backgrounds and expertise. Hence, I co-organised several workshops with different participants, all focused on wearable design and ideation. Workshops 1 and 2 targeted a broader population of students (Workshop 1) and the general public (Workshop 2) with overlapping interests in the project—creativity and design, movement practices, and technology innovation. These workshops served as pilots for the workshop format and the Movits, and helped to refine them for the subsequent workshops. Workshops 3 and 4 were meant to further inspect and evaluate the Movits in use by the actual target population.

In Workshop 1, the participants were 15 undergraduate students enrolled in a Creativity and Design course taught by EMS. The workshop ran as a non-mandatory lab session within the course’s usual class schedule. For Workshop 2, I co-organised an open call for the general public in the context of a nationwide science dissemination event. The workshop was advertised on the regional website of the event, which featured multiple workshops and activities, and on the research group’s website. There were five participants: three reported having technology innovation jobs and two were multidisciplinary dancers. For these two workshops, we did not gather demographic data, for data protection reasons.

For Workshops 3 and 4, TBC directly invited a group of professional physiotherapists from his professional network. There were four participants in Workshop 3 and three participants in Workshop 4. The mean age of the seven physiotherapists was 43.2 years (SD=6.0) with a mean duration of professional practice of 20.1 years (SD=5.9). The expertise of the physiotherapists included sports, movement coaching, global postural reeducation, osteopathy, and core, perineum and pelvic floor reeducation. None of the participants in the four workshops received monetary compensation for their participation.

5.3.2 Workshop Structure

Figure 6: General structure of the Movits embodied sketching workshops

The four two-hour workshops shared the same objective and general structure (Fig. 6). Participants utilised the Movits as ideation probes for designing wearable technologies to support physical training in flexibility and strength. The four workshops were framed as standalone events, i.e. they were presented as the only instance of participation for the attendees, without an expectation of further collaboration. In Workshops 1 and 3, participants worked in pairs or trios; in Workshop 2, they worked individually with a facilitator assisting each one; and in Workshop 4, they worked individually with a single rotating facilitator. The workshops followed a double diamond [33] structure, adapting its four stages: Discover, Define, Develop and Deliver. The Movits were used for bodystorming [107] during the divergent phases—Discover and Develop—and custom-made documentation sheets were used during the convergent phases—Define and Deliver. In the fourth stage, participants presented and demonstrated their resulting ideas.

In all cases, the workshops had a bodystorming basket consisting of fabric straps of different lengths and widths—some of them with velcro, snap buttons or plastic clasps—, pieces of fabric, cardboard, paper and EVA foam, attachment methods such as safety pins, velcro, zip ties, rope and cords, and some fabric bags with marbles or pulses to provide weight.

I was the main facilitator of the four workshops, with support from EMS, ATJ, JDD, JLF, KS and RCR in Workshops 1 and 2. Note that to refer to the collaborators I use the acronyms introduced above. In the following, I will use “we” to refer to us as facilitators of the workshops. We collected field notes and asked the participants to fill out documentation sheets of their designs. Based on the consent provided by the participants, only Workshops 3 and 4 were video recorded.

All workshops began with an introduction to the research context, the workshop’s general structure and the generalities of the Movits. Due to technical issues, the Touch Movits were excluded from Workshops 1 and 4. The Movits, tagged with colours and behaviour icons, were arranged on a table. We did not show them in action, name them or indicate in any form which of the units would respond in which way. Then, the context of wearable design for physical training was illustrated through diagrams (Fig. 7) based on the review work of [187] and [118]. The depth of the presentation varied across groups: in Workshops 1 and 2 the context was provided as a general overview, whereas in Workshops 3 and 4 we further discussed the implications of these frameworks to design wearable technologies for movement learning. We conducted a brief warm-up activity to motivate a creative focus; it followed a sequence of (1) a body scan meditation, (2) a visualization activity, (3) a movement-based game where people introduced themselves by synchronizing their names with a chosen movement, and (4) a somatic activity to explore different ranges of motion. In Workshop 1, the warm-up consisted of remembering and visualising these activities, which had been performed in a previous session.

Figure 7: Diagrams used in the first documentation sheet: a) Roles of the technology, b) Objectives of the technology

5.3.2.1 Discover

For the initial divergent phase, participants silently explored the Movits for five minutes. Their objective was to choose two—per group or person, depending on the workshop—for the session and to arrive at an initial understanding of their behaviour. After selecting their Movits, the participants explained their findings to the group, with facilitators offering clarification. For example, it could happen that they understood a continuous behaviour as discrete because they did not explore the middle steps of the input range, or that they thought that a Movit was responding to tilting when it was responding to motion, or vice versa.

Then, we facilitated a bodystorming [107] activity to explore possible applications in the context of wearable technologies for physical training. We defined three exploration phases to loosely guide participants in their explorations: (1) Movit placement on their bodies—using straps and other mechanical aids from the bodystorming basket—; (2) movement range levels; and (3) diversity of actions in sports and fitness practices. In practice, the flow of ideas was so rich in covering these dimensions and more that we did not need to indicate separate phases. This divergent activity lasted approximately 15 minutes.

5.3.2.2 Define

After guiding participants back to a seated position, we introduced the first documentation sheet for the first convergence step. Participants were asked to define the context for the wearable technology they would like to design, based on their findings during the previous phase. Then, using a diagram from the design space of wearables for sports and fitness practices [187], we prompted them to consider roles for the technology outputs: to support some experiential quality or to convey information through augmented feedback—be it knowledge of the current performance or knowledge of results—or feed-forward—providing information in advance (Fig. 7, a). We told them that these roles were non-exclusive. Finally, we asked participants to consider the objective of their technology—to enable, improve or augment—across physical, cognitive, emotional and social aspects of the context they chose, using a diagram based on the review of trends and opportunities of wearable systems for sports by [118] (Fig. 7, b). Again, we clarified that these objectives were non-exclusive. We as facilitators adapted to participants’ needs by answering questions or assisting with idea framing. This activity lasted approximately 10 minutes.

5.3.2.3 Develop

For the second round of divergence, the participants engaged in another 15-minute bodystorming [107] activity to develop a concrete idea within their chosen context. We prompted the exploration of placements of the Movits and consideration of design aspects like shape, texture, weight, feedback types, configuration modes and compatible movements. Facilitators adapted prompts to each group or individual based on the participants’ process. For instance, in Workshops 3 and 4, physiotherapists had very clear ideas of possible applications, so we guided them towards a detailed specification of what they envisioned. In contrast, in Workshops 1 and 2, our work as facilitators consisted of guiding participants towards deciding on a single idea and developing it. In all cases, we encouraged participants to use the available objects from the bodystorming basket in combination with the Movits to build a low-fidelity prototype, extending or discarding the actual behaviour of the Movits that they had chosen as inspiration.

To keep the activity manageable, with a focus on embodiment and not on technicalities, we did not initially communicate the configuration capabilities of the Movits, except when the participants: (1) accidentally pressed buttons, leading to notable changes in what they were exploring—e.g., the responsiveness of a Motion Movit was now too much or too little for their chosen movement, or a Tilt Movit was not responding the same way to the movement they had tried before—; or (2) voiced a very specific need that could be met by this configuration change, such as a sensitivity adjustment in the Motion Movits or an axis change in the Tilt Movits.

5.3.2.4 Deliver

For the second convergent stage, participants completed two tasks with a 10-minute time limit. First, we provided another documentation sheet for specific details of their final design, including a general description, an account of the concrete application, behaviour and expected results of the technology, and a body map to illustrate the shape and placement of the design. Additionally, we asked participants to reflect on the features of the Movits they used in or left out of their design, the changes they made and the helpfulness of the toolkit in their design process. To conclude, participants presented a low-fidelity prototype of their design (Fig. 8) and engaged in a Q&A session with the facilitators and the rest of the participants.

The workshop ended with a semi-structured group discussion exploring overall experiences, feelings and insights from both divergent and convergent phases—taking into account the differences between the movement-based nature of the former and the written and analytical aspects of the latter. Participants shared experiences through the lenses of embodied ideation methods, the use of the Movits and other objects, and teamwork.

Figure 8: A selection of designs resulting from the evaluation workshops.

5.3.3 Analysis

The analysis mainly focused on the resulting designs and on the documentation sheets filled out by the participants in the workshops, complemented by the field notes gathered by the workshops’ facilitators. For this, I digitised the data gathered from the documentation sheets described in the Define and Deliver sections of the workshop. I conducted a top-down qualitative analysis of this material, using the categories of the sheets as guiding analytical lenses. These categories included: the application of the design, its placement on the body, the Movits in use during the workshop, the variations in the design (from the original behaviour exhibited by the Movits), the self-classification of the design regarding roles and objectives (Fig. 7), the suggested modifications to the Movits, and the reflections on their usefulness.

I used the field notes to complement the data, as some relevant comments from the participants were not captured in their documentation sheets. I also kept track of the workshop in which each idea originated.

I then identified emerging themes within each category, roughly based on what was more common, less common, or more relevant to the design requirements of the Movits. These were important aspects of the overarching project, which Elena Márquez Segura and I discussed. Laia Turmo Vidal and I then discussed the resulting insights from this analysis and elucidated their relevance for the wider Interaction Design and Human-Computer Interaction community, which provided clear directions for deepening the analysis. After iterating and refining the analysis, Elena and I discussed the findings and framed the themes for dissemination in what would become the paper [192].

5.4 Embodied Sketching Results

In the four workshops, participants generated a total of 15 different ideas for wearables supporting movement learning across different movement practices, showcasing the generative potential of the Movits. Tbl. 3 summarises these design ideas and provides an ID for them, detailing their application domains, wearable placements, input/output modalities, and the Movits that participants used as probes and references to develop and present their ideas during the workshops. Overall, the participants explored movement disciplines familiar to them—swimming, volleyball, archery, weightlifting, yoga—, specific rehabilitation or alignment exercises—such as those for gait rehabilitation or the recovery of joint range—, or self-care or creativity experiences—massage and choreographic exploration.

Table 3: Overview of the resulting designs from the embodied sketching workshops. Abbreviations: Inputs: O = Orientation, M = Motion; Outputs: L = Lights, S = Sound, V = Vibration.

| ID | Application | Place | In | Out | Probe(s) |
|----|-------------|-------|----|-----|----------|
| W1D1 | Arm alignment in archery | Forearm | O | L, V | TiltPlayVibration |
| W1D2 | Augmentation of weightlifting | Arms, Legs | O | L, S, V | TiltPlayNote |
| W1D3 | Swimming pacing | Ears, Waist | O | L, S, V | TiltPlayVibration, TiltToVibration, TiltToMetronome |
| W1D4 | Recovery massage | Hand | M | L, S, V | MotionPlaySample |
| W2D1 | Swimming stroke technique | Forearm | O | S, V | TiltToMetronome |
| W2D2 | Recovery massage | Hand | O | L, V | TiltPlayVibration |
| W2D3 | Artistic swimming synchronization | Head, Elbows, Wrists, Knees, Ankles | O | V | TiltPlayVibration |
| W2D4 | General alignment in yoga | Waist, Wrists, Knees | M, O | L, S | TiltToMetronome, MotionPlaySample |
| W2D5 | Dance creativity stimulation | Head (top), Wrists, Ankles | O | V | TiltToVibration |
| W3D1 | Ankle mobility in volleyball | Thigh | O | L, S, V | TiltPlayVibration |
| W3D2 | Jumping technique in volleyball | Waist (back) | M | L, S, V | MotionToPitch |
| W3D3 | Lumbopelvic stability | Waist (front) | M, O | S, V | TiltPlayVibration, MotionPlaySample, MotionPlayVibration |
| W4D1 | Scapular rehabilitation | Shoulder blades | M | S | MotionPlaySample |
| W4D2 | General joint rehabilitation | Head, Waist, Forearm | O | L, V | TiltPlayVibration |
| W4D3 | Gait rehabilitation | Head (top) | O | V | TiltPlayVibration |

In this section, I discuss our findings regarding the roles of generated ideas, to what extent they extended the interactions provided by the Movits, the Movits that were used the most and the least and their impact on idea generation, and to what extent and why the Movits were helpful in the participants’ design processes. Because the analysis was based on the written reports for each of the designs—some of which were created collectively—and not on the conversations or individual comments by the participants, I report the findings by referring to the ID of the involved designs.

5.4.1 Roles of Generated Ideas

To evaluate the designs, we employed as a lens the classification of [187] (Fig. 7) regarding the possible roles of the outputs of wearable technology for sports and fitness practices. The Movits provide immediate feedback to the wearers’ actions, and therefore the most straightforward role for all of their outputs is to provide information in the form of feedback which consists of knowledge of performance [187]. Because of this, it was not surprising that most of the ideas presented applications where some kind of knowledge of performance was supplied, be it an indication of misalignment (W1D1, W2D1, W2D3, W4D3) or a reward for arriving at a desired position (W1D2, W3D1, W3D2, W4D2). We were interested in evaluating to what extent the generated ideas would extend this base role and explore others.

I found it illuminating that several designs instead focused on another possible role: the experiential qualities that the Movits provided to the participants. For instance, W1D4, W2D2, W2D4 and W2D5 highlighted the experience of the vibrotactile haptic feedback on their bodies, and W1D4, W2D2, W3D3 and W4D1 focused on the sound of water emitted by the MotionPlaySample Movit. W2D5 and W3D3 were also interested in the experiential qualities of the possibilities for social connection while using their designs in a group. In the case of W1D4 and W2D2—the two designs focused on wearables for providing a holistic recovery massage—the experiential quality was their only focus, and they did not consider the role of providing information. The emphasis on the felt experience and experiential qualities of these designs reminded me of the slow, introspective and reflexive Soma Design processes which inform my work [2,125,126,154,155,202,203], even if that was not the default mood of the workshops. I observed that designs considering the experiential qualities emerged in all four workshops and thus were not restricted to a specific population. From this, I gather that the Movits have the potential to be used effectively as probes in somaesthetic appreciation design [74] workshops.

Finally, some designs also considered the roles of providing feedback in the form of knowledge of results [187] and some others the role of supplying feed-forward. For instance, W3D1, W3D2, W3D3, W4D1 and W4D3 involved a reporting of the results of the activity. Those were all designed by physiotherapists, which might speak to their involvement in the evaluation and improvement of the conditions of their patients and the interest they might have in quantifying results. Regarding feed-forward, W2D1, W3D1 and W3D2 considered their design could provide instructions and objectives of the activity to perform, and W1D3 was inspired by the sound of TiltToMetronome to use it as a feed-forward mechanism to indicate the desired pace. It appears that the Movits helped to some extent to provide a framework for designing complete experiences with feed-forward of objectives and feedback of results even if by themselves they only supply feedback of performance.

5.4.2 Features of Designs

I also analysed the proposed features of the designs regarding their intended interactions. I was especially interested in the input and output modalities they chose, and to what extent they extended or subtracted from what their chosen Movits did.

First, I observed that most designs—11 out of 15—had multimodal outputs; of the four that were monomodal, three chose vibration and one chose sound as its output. No design chose lights as its only output, which might speak to the stronger stimuli that vibration and sound provided to participants. Additionally, six designs (W2D1, W2D3, W2D5, W3D3, W4D1, W4D3) explicitly expressed that they would remove the lights, as they were not helpful to their applications. This validates the decision to make vibration and sound the main outputs of the Movits.

Interestingly, of the 11 multimodal designs, 10 included vibration as one of their output modalities. In six of these cases, the vibration was part of the chosen Movit, and in the remaining four it was added by the participants as a complement to a sound-based interaction. It seems that, as all participants tried vibration-based Movits—at least during the introductory phase of the workshop—they kept a strong impression of the sensation and wanted to incorporate it into their designs. This might speak to a potential intimate correspondence [74] induced by the synchronisation of participants’ movements with the haptic stimuli. The participants who added vibration to a Movit interaction that did not have it (W1D2, W1D4, W2D1, W3D2) mentioned that they included it because they found it more personal or direct for conveying localised feedback. All of these findings echo prior work discussing the roles of vibrotactile haptics as a feedback mechanism for movement [167].

Further regarding vibration, all three designs in Workshop 3 included it as an output. The physiotherapists noted this type of feedback would be appropriate for group work as long as it was accompanied by lights or some kind of visualisation, so that the instructor of the activity would know what was happening with each individual. This confirmed the principle of shared frame of reference of intercorporeal biofeedback [183] without us as facilitators having to mention it. Other designs with vibration as an output (W1D3, W1D4, W2D1, W2D2, W2D5 and W4D3) observed that vibration would be beneficial for individual and introspective activities, and some of them even chose it as their only output.

Regarding sound as an output, all nine designs which employed it introduced it from their chosen Movits. That is, no design that was vibration-based decided to incorporate sound. In those designs with sound, the types of sounds provided by the Movits were considered appropriate. A couple of designs (W1D3, W3D1) explicitly indicated that they would add alert sounds to indicate both that an objective had been achieved and that there was some deviation. In the case of W1D3, these alert sounds would contrast with the sound of the metronome they would be using. However, there was no detailed discussion of what type of alert sounds these would be. From this, I gathered that in future workshops like these, it could be helpful to have more sound samples—either in the Movits or in another system—so that participants have clearer options to select from. This echoes prior findings and discussions regarding the availability and types of sounds, and the metaphorical associations to them that can help in performing physical activity [93,152,167,185].

While discussing group work in general during Workshop 3, and also in the case of W1D3, the participants realised that the sound-based outputs of their designs could be routed individually to their users via headphones, or switched off altogether. Depending on the design context instigated during a workshop, it would be helpful to at least have some pairs of headphones in the corresponding bodystorming basket. The presence of headphones could point towards the possibility of the designs using them, even if the Movits per se cannot be connected to them—at least in this iteration.

Finally, I observed seven designs (W1D2, W1D3, W2D1, W2D3, W3D1, W3D2, W4D3) that envisaged a mobile app to either configure or visualise the outputs of their designs. Regarding configuration, four designs (W1D2, W2D4, W3D1, W4D2) proposed a procedure to calibrate the working zones of the devices with start and end zones, echoing the calibration processes of [152] or [93]. Three designs (W3D3, W4D1, W4D3) considered the possibility of adjusting the sensitivity of their devices to adapt them to the type of movement and body of the person that would use them. It was out of the scope of the workshops to further define the user interfaces and wireless interactions that these ideas would require. However, I highlight that the Movits worked effectively as ideation probes that could provide the basis for comprehensive designs, even within the relatively short duration of each workshop.
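The calibration these designs envisioned could work roughly as follows. The sketch below is a hypothetical illustration, not a feature of the Movits: it records a start and an end reading and then expresses live readings as progress through that working zone.

```python
# Hypothetical sketch of the start/end-zone calibration proposed in
# several designs; not part of the actual Movits.

class ZoneCalibration:
    def __init__(self):
        self.start = None
        self.end = None

    def capture_start(self, reading: float):
        """Record the sensor value at the start of the working zone."""
        self.start = reading

    def capture_end(self, reading: float):
        """Record the sensor value at the end of the working zone."""
        self.end = reading

    def progress(self, reading: float) -> float:
        """0.0 at the start zone, 1.0 at the end zone, clamped in between."""
        t = (reading - self.start) / (self.end - self.start)
        return min(max(t, 0.0), 1.0)


cal = ZoneCalibration()
cal.capture_start(15.0)   # e.g. resting joint angle, in degrees
cal.capture_end(75.0)     # e.g. target joint angle
cal.progress(45.0)        # halfway through the calibrated working zone
```

Because the zone is captured per wearer, the same device logic adapts to different bodies and ranges of motion, which is the point the participants raised.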

5.4.3 Selected Movits

I observed the frequency of use of the Movits and analysed the type of applications that they supported as ideation probes. Here I discuss the three most used ones: TiltPlayVibration, MotionPlaySample and TiltToMetronome, and the ones that were not used at all: the Touch Movits (see Tbl. 3.)

TiltPlayVibration was the most used Movit, serving as the basis of eight of the 15 designs. In six of these (W1D1, W2D3, W3D1, W3D3, W4D2, W4D3) it inspired applications related to alignment which would use it essentially as is: when a tilt-angle threshold was crossed, it would start vibrating to indicate either that a position had been achieved or that some misalignment had occurred. The other two designs (W1D3, W2D2) would use this Movit as an inspirational seed to work with vibration in general. In any case, it seemed that the interaction provided by the TiltPlayVibration Movit was the most straightforward to understand and elicited feedback that was both interesting and familiar.

I observed that participants were able to trigger the vibration output regardless of the speed of the movements they were trying or the axis of rotation they chose. Even if at first they did not identify the middle point where the Movit started to vibrate, they could notice and activate its two states. For me, the popularity of this Movit within these workshops is very interesting from the perspective of minimalism, because its interaction can be implemented without a microcontroller, by either using simple tilt switches [66] or handcrafting a soft tilt sensor following the kit-of-no-parts [132] approach. I contend that the generativity and applicability of this probe are very high compared to the low complexity of its interaction, and therefore could serve as a good pointer towards further explorations of minimal interactions.

MotionPlaySample and TiltToMetronome were used four times (W1D4, W2D4, W3D3, W4D1) and three times (W1D3, W2D1, W2D4), respectively. With their sound-based behaviour, they seemed to illustrate appropriately the input modalities of motion and orientation. MotionPlaySample kept its main behaviour across the four designs that used it as an ideation probe: all of them reacted to movement and played a sound. All of them kept the water sound because they enjoyed it, although in W4D1, the physiotherapist also considered using either the wind sound available in the Movit or another one, to be chosen by the user. It would be interesting to explore to what extent the usage of the MotionPlaySample Movit would change depending on its default sound sample.

In the case of TiltToMetronome, although I observed it helped to illustrate the continuous nature of the orientation input, the interactions proposed on its basis went beyond this mapping. For instance, in W2D1 and W2D4, the envisioned behaviour involved a range of ideal positions where the design would not produce sound. Outside of this range, the device would emit the metronome sounds with an increasing frequency depending on how far away it was. In W1D3, the third design that used TiltToMetronome, the sonic behaviour of the Movit was kept but decoupled from its orientation sensing. From the usage of both of these Movits, MotionPlaySample and TiltToMetronome, I confirm the observation from above regarding the availability of different sound samples, which can be generative by themselves, without the need for complex interactivity. I also validated the selection of sounds: these two Movits are the ones—so far—that produce real-world samples instead of pure tones, which might have contributed to their being selected more often.
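The behaviour envisioned in W2D1 and W2D4 can be sketched as a mapping from angular deviation to metronome interval. The function below is a hypothetical illustration of that idea, not part of the Movits; the dead-zone width and tempo range are invented values.

```python
# Hypothetical sketch of the silent-zone metronome behaviour envisioned
# in W2D1 and W2D4; all parameter values are invented for illustration.

def metronome_interval(angle_deg: float, ideal: float = 0.0,
                       dead_zone: float = 10.0, max_dev: float = 45.0,
                       slowest: float = 1.5, fastest: float = 0.25):
    """Seconds between metronome ticks, or None for silence.

    Within the dead zone around the ideal position the device stays
    silent; beyond it, ticks speed up the further the deviation grows."""
    deviation = abs(angle_deg - ideal)
    if deviation <= dead_zone:
        return None  # within the range of ideal positions: no sound
    # normalise the deviation beyond the dead zone to [0, 1]
    t = min((deviation - dead_zone) / (max_dev - dead_zone), 1.0)
    return slowest + t * (fastest - slowest)
```

Note that this inverts the original Movit mapping: silence, rather than sound, marks correct performance, which is precisely how the participants extended the probe.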

Lastly, I found it interesting to observe that even though the Touch Movits were explored and selected by a couple of participants during the Discover phases of Workshops 2 and 3—when they were available as probes—none of them were used in the Deliver phases, and therefore they were not considered by the participants as part of their designs. From what I could gather, the interactions provided by the Tilt or Motion Movits were seen as richer and more inviting, especially for wearable technologies. The participants who further explored the Touch Movits—and who ended up designing W2D1 and W3D3—had difficulties coming up with possible applications because of the two separate parts needed to complete the interaction: the placement of the device and the significance of the touch that would trigger it. In contrast, the Tilt and Motion Movits seemed to provide many possibilities, as the placement of the device and the movement that would trigger it tended to be tightly coupled. From this, I gather that the further minimalism manifested in the Touch Movits possibly prevents their usage when other richer and more stimulating Movits are present in the kit. Depending on the intended application domain of the embodied sketching activities planned for the Movits, and in case the Touch Movits are anticipated to be relevant, it would be pertinent to consider an alternative way of engaging with them.

5.4.4 Effectiveness of the Movits

As part of the documentation sheet and the final discussion, we as facilitators asked the participants to reflect on to what extent the Movits had helped or not with their creative processes, and why. We also asked them about possible modifications they would want to apply to the Movits to be more effective.

5.4.4.1 Physical Features

Some of the participants focused on the physical characteristics of the Movits. For instance, participants found them to be helpful because of their small size, low weight and compact shape (W1D2), their lack of cables (W2D3), the way they can be attached to straps or clothes and be worn (W2D3), and their physical robustness (W1D2). With this feedback, I validated the choice of using Adafruit boards, covering them with fabric and trying to keep external components as minimal as possible to allow participants to feel empowered to explore. Nevertheless, some participants (W1D1, W2D3, W2D4) would have preferred the Movits to be smaller as they found them relatively obtrusive, especially for placing them on the head and wrists. Also, even though some groups (W1D3, W2D4, W3D3) presented their design by physically putting together two or three Movits, speaking well to their modularity, a smaller size could have benefitted them as well.

Regarding another physical aspect, the groups of W1D2 and W2D4 expressed they would have preferred the Movits to have adjustable straps already fixed on them, instead of having velcro to attach them freely to straps or clothes. I took all this feedback for future work, as I would like to keep the physical modularity and flexibility of the Movits while also providing an invitation to explore their wearability in different body parts.

5.4.4.2 Feedback Features

Other participants commented on the effectiveness of the Movits because of features of the feedback they provide. They appreciated that it was immediate and precise, and that therefore they could intuitively (1) figure out how to use it and (2) find possibilities for it (W1D2, W1D3, W2D4, W3D1, W4D3). They also praised that it was open-ended, and that therefore they could assign different meanings to it. For instance, in Workshops 3 and 4, the physiotherapists observed and discussed how the vibration of the TiltPlayVibration Movit was used as an indication of misalignment by some (W3D3, W4D3) or of the achievement of an objective by others (W3D1, W4D2). In the case of W4D1, the design described that its sounds could be used to indicate something to achieve or something to avoid, depending on the exercise. This all speaks strongly to the characteristics of shared frame of reference and fluid meaning allocation put forward by the strong concept of intercorporeal biofeedback [183], which grounds the work.

5.4.4.3 Usage Experience

We also received feedback praising the experience of using and exploring the Movits. In the four workshops, participants commented that they found the Movits intriguing and curiosity-inducing, which made them engage with the activity. A student in W1D1 observed that at the beginning they were not motivated and did not want to participate, but once they started exploring the Movits, they enjoyed the process and were surprised by the amount of ideas they were coming up with. I find that this is in line with the mood that embodied sketching [108] aims to facilitate, and I was glad to observe that the Movits could support it. Some other participants took a more somaesthetic [74] perspective and commended the sensations induced by the Movits, either by their vibrotactile or their sonic feedback. As I discussed above, 13 designs included vibration as one of their outputs, in part because they considered that it provided them agreeable sensations. Interestingly, all the participants who chose to work with the MotionPlaySample Movit (W1D4, W2D4, W3D3, W4D1) expressed their fondness for the water sound it produced and the relaxation it induced. Some of them (W3D3, W4D1) connected it to the behaviour of a rainstick, which made them enjoy it more. This perception of relaxation and pleasure while listening to the water sound echoes previous findings [92,93], which were an inspiration to use those samples, and validates their inclusion in the toolkit. Additionally, I consider that this varied appraisal of the experience of using and exploring the Movits speaks to their potential to be employed beyond movement learning contexts.

All of the participants except for two voiced that they enjoyed the general experience of exploring and creating with the Movits. The two who did not enjoy the experience as much mentioned that they felt overwhelmed by the whole activity and did not feel confident enough to choose and develop a specific application. The designs that emerged from them were W2D4 and W4D2, which, perhaps not coincidentally, were the ones that established a very broad context for their application. However, even in those cases, we observed that those participants realised that the Movits they selected were flexible enough to be used in a variety of ways. One of them (W2D4) articulated why the Movits were useful to them as design probes, even if they were not satisfied with their resulting design. I conjecture that for these two participants, it could have been more beneficial to constrain their exploration in terms of the design scenario or the number of available Movits. Additionally, they might have benefited from more allotted time.

5.4.4.4 Further Reasons

The physiotherapists in Workshops 3 and 4 also articulated a more analytical rationale for why the Movits worked in their processes and could work for other stakeholders in co-design workshops. They (W3D1, W3D2, W4D3) observed that the Movits provided an external focus of attention [116,207] that can be effective and malleable across different circumstances. They recognised that, as such, the Movits could provide more autonomy to their patients. Also, some of them (W3D1, W3D2, W3D3) asserted that the Movits could measure and externalise useful information that is otherwise hidden from an observer—i.e., from them as physiotherapists interested in the movement features of their patients. I contend that these observations are aligned with the theoretical work behind the TTPs [105,106,180–182,186] and the strong concept of intercorporeal biofeedback [183], which inform this work and validate its potential for movement learning applications.

5.5 Chapter Takeaways

I designed and evaluated the Movits, a minimalist toolkit for embodied sketching [108] composed of nine units that exhibit single interactions of multisensory feedback for movement-based inputs (Fig. 5, Tbl. 2.) The input and output modalities, and the mappings between them, are based on an analysis of toolkits for embodied design and of projects of wearable technologies for sports and fitness practices. I designed the Movits to mediate and support the social dimension of movement learning, and to this end they were grounded in the strong concept [75] of intercorporeal biofeedback [183] and its four interactive qualities.

I co-ran four embodied sketching workshops with different populations to validate the potential of the Movits as ideation probes for movement-based design explorations. From a qualitative analysis of the resulting designs, I gathered several insights. I validated their potential as generative probes: they allowed participants to come up with comprehensive ideas for multiple movement-based application domains. These ideas extended the interactions provided by the Movits, either by considering other types of inputs and outputs or other possible roles [187] for their technologies beyond feedback providing knowledge of performance—such as knowledge of results, feed-forward of instructions, and even a focus on experiential qualities. In this way, I contend that the minimalist setup of the Movits, along with the chosen input and output modalities, proved necessary and sufficient [136] to support and reflect more rounded and polished movement-based designs, such as those in the multiple projects reviewed.

Speaking to the adaptability of the Movits, I found that they have the potential to aid in somaesthetic appreciation [74]. I observed that the Movits were effective as probes for exploring experiential qualities of multisensory feedback in a way that echoed the slow, introspective and reflexive Soma Design processes which inform this work [2,125,126,154,155,202,203]. Based on these results, I contend that, by providing the possibility of exploring movement-based interactions with multisensory feedback, the Movits have the potential to be used effectively as probes in soma design [72,74] workshops.

In general, I observed that the minimalism embedded in the Movits was helpful and empowering for the participants of our workshops. Their small size and modularity enabled participants to explore multiple placements on their bodies, to wear them comfortably, and to join two or three together to explore more complex interactions. The participants were able to quickly make sense of the feedback that the Movits provided and to explore creative applications in movement-based interactions. This echoes prior work on embodied ideation toolkits and extends those findings to the application domain of movement learning.

Additionally, I confirmed the grounding of the Movits in the strong concept [75] of intercorporeal biofeedback [183] and its four interactive qualities. During the evaluation workshops, their open-ended audiovisual or visuotactile feedback helped to provide a shared frame of reference for the conduction of a movement, allowing for a fluid meaning allocation of its behaviour. I observed that, by being minimalist, they were likely to favour guiding attention and action toward and away from them, and to admit being used alongside other objects and activities, as an interwoven interactional resource.

With this work, I contributed an account of the design process of the Movits as a model of simplification and generalisation of proven interactions in wearables for movement practices and embodied design toolkits. I found that the analysis of inputs and outputs, and its subsequent application to a specific domain, generated modules that were themselves generative and adaptable. I designed the Movits so that they could be easily replicated and used by other designers and researchers working with embodied sketching and soma design. Additionally, I contend that the findings of our evaluation workshops extend prior knowledge and are applicable to the further design and research of toolkits and probes, as well as to the design of technologies that consider a holistic approach [72,102] in their interactions.

Future work could include further research on the design of the Movits. There is still a need to evaluate the degree of configurability embedded in each Movit, navigating the tension between making it more specific for an application and allowing a straightforward understanding of its behaviour. For example, instead of having one Movit—and therefore one board and microcontroller—per interaction, a single Movit with switches to toggle the types of inputs, outputs or mappings it exhibits might suffice. This tension is amplified by current discussions [68] regarding the economic and environmental costs of physical interfaces.

Finally, the aim was to use the Movits in further co-design workshops targeting wearable technologies for movement learning within a more specific domain, involving movement and health professionals, patients, and interaction designers. For these, the intent was to use not only the Movits but also a bodystorming basket with relevant probes and materials that have proven effective for embodied design. The idea would be that, by involving embodied sketching [108], soma design [72,74] and intercorporeal biofeedback [183] as the theoretical background for these participatory design explorations, it would be possible to hold space for meaningful explorations of movement-based design and technology by the people who would interact with it.

6 Designing Wearables to Support Physical Rehabilitation

This chapter draws on publication D [189].

Given increased knowledge of how to design technologies for the body in movement, using movement-based design methods and embodied ideation toolkits, the question remains of what can be designed with them. In this chapter, I present the design journey that I engaged in from the beginning of this thesis work. TODO(introduction)

6.1 Peripheral Nerve Transfer Surgery Rehabilitation

TODO(introduce section)

Injuries and trauma to the arms entail not only bone fractures or skin lacerations, but also, commonly, nerve injuries with dire consequences. Injuries to the peripheral nerves and brachial plexus are complex and usually present considerable functional and sensory impairments and pain [7]. In Europe, it is estimated that there are 300,000 new cases per year [7]. If one of these injuries prevents a muscle from receiving nervous signals (i.e., if the muscle is denervated) and the injury is not treated soon, the muscle can become atrophied and unable to recover its complete function [7]. To treat these injuries, there are surgeries such as primary nerve repair, repair with nerve grafts, nerve transfers, free functional muscle transfers and tendon transfers [7]. All require weeks for the repaired nerves to fully regenerate and provide stimuli to their target muscle [7]. In the case of transfer surgeries, the body is surgically “rewired” and recovering the original motor function requires relearning how to perform movements.

As an example, the Oberlin ulnar nerve transfer [7,127,164] is a common peripheral nerve transfer used to recover the flexion of the elbow after this type of injury. Losing the capacity to flex and extend the elbow can be very limiting. This loss might happen due to a nerve injury involving the musculocutaneous nerve, which innervates the biceps brachii muscle. This kind of peripheral nerve transfer surgery consists of taking part of the muscular branches of the ulnar nerve and using them to reinnervate the biceps [7,127,164]. Once the nerves regenerate and the biceps starts receiving nervous impulses, these impulses originate from the motor commands corresponding to one or a combination of the movements provided by the ulnar nerve (Fig. 9.)

TODO(make the figure wider)

Figure 9: Oberlin Ulnar Nerve Transfer Surgery. a) Injury in the musculocutaneous nerve preventing the flexion of the elbow. b) Peripheral nerve transfer surgery, using muscular branches of the ulnar nerve to reinnervate the biceps. c) The biceps is now activated using signals coming from the ulnar nerve and the elbow can be flexed.

6.1.1 Rehabilitation Goals after Peripheral Nerve Transfer Surgery

Previous works [7,164] have described three fundamental goals for the rehabilitation of any peripheral nerve transfer surgery:

  1. Preservation of the range of motion in involved joints.
  2. Motor activation of the reinnervated body part, by involving movements of the donor nerve.
  3. Re-learning of the original movement patterns, by dissociating them from the movements provided by the donor nerve.

In the specific case of Oberlin ulnar nerve transfer surgery, these goals are translated as follows, as presented in Fig. 10:

  1. Preservation of the range of motion of the elbow by using passive motion therapy.
  2. Activation of the biceps by using movements of the ulnar nerve (flexion of the wrist, flexion of the fingers, adduction of the wrist.)
  3. Learning to flex the elbow without the movements of the ulnar nerve, and vice versa.
Figure 10: Main rehabilitation goals after Oberlin ulnar nerve transfer surgery.

In order to achieve these goals, patient education is key: the patients have to understand the procedure and what is required of them to be able to recover. Goals 2 and 3 also require progressive involvement in strength and precision training and development. Also, for the best achievement of these goals, it is not enough to rely on clinical sessions: it is critical to continue treatment at home, as that is where most improvements can happen.

Depending on the severity of the initial injury, the time that passed before the surgery, and the complexity of the surgery, the level of rehabilitation that can be achieved varies considerably between patients. Ideally, the patients will be able to activate the reinnervated body part, eventually recovering at least some of the lost mobility. Only a few patients recover it fully.

6.1.2 Rehabilitation in the Spanish Public Health System

I would like to point out that there are several contextual factors that might challenge the ideal rehabilitation path. To better understand them, here I provide further context of the Public Health System of Spain, where this thesis is situated.

6.1.2.1 General Context and Challenges

In general, plastic surgeries are performed in the hospital that corresponds to the area where the patient lives. If post-surgery rehabilitation is needed, it would also be performed in the same area hospital. For this, and depending on the type of surgery, a surgeon might prescribe instructions for the rehabilitation, communicating them to the corresponding rehabilitation doctor. The rehabilitation doctor evaluates the state of the patient and prescribes a specific treatment to be followed, both in the hospital and at home. Depending on the situation, the in-hospital treatment can require a combination of physiotherapy and occupational therapy. The rehabilitation doctor communicates this treatment to the therapists and assigns a session duration—from 30 to 90 minutes—and a weekly frequency—daily, three times, twice or once a week—during a given period.

It is worth noting that, in general, in this country, patients might need to wait several weeks for an appointment with the rehabilitation doctor—who sees from 10 to 20 patients per day—before being assigned their physical or occupational therapy sessions. These appointments require balancing the load and availability of the therapists, who treat between 15 and 20 patients a day.

6.1.2.2 Peripheral Nerve Transfer Surgery Challenges

In the specific case of peripheral nerve transfer surgery, because it requires a high level of specialization, as of this writing it can only be performed in specific hospitals in Spain. One of those hospitals is Hospital Universitario de Getafe, in Madrid, Spain, which has its own Peripheral Nerve Unit TODO(and since 2026 is centro referencia). Such a unit consists of a medical team—comprising plastic surgeons, rehabilitation doctors, occupational therapists and physiotherapists—who are knowledgeable about the specific requirements of peripheral nerve transfer surgery rehabilitation and can provide appropriate care before, during and after the surgery. The specialized team from the hospital not only performs the surgery, but also sees the patients afterwards and teaches them the fundamentals and requirements of the rehabilitation.

However, given the scarcity of this specialization, patients who could benefit from peripheral nerve transfer surgery often have to be sent to a hospital outside of their area to have the surgery performed there. If the patients live relatively close to that hospital, they might be able to receive their rehabilitation treatment there—this depends on the resources of the hospital because, in principle, by treating patients from outside the area it would be reducing its availability for patients within the area. If the patients live in a distant area, or if the specialized hospital is saturated, they will be rehabilitated in their own area hospitals. However, it might happen that the receiving rehabilitation doctor and corresponding therapists are not very knowledgeable about the specific requirements of peripheral nerve transfer surgery rehabilitation.

In either case, follow-up sessions with the plastic surgeon or specialized rehabilitation doctor might take place. After some time of treatment, and depending on the level of impairment and the initial injury, patients are discharged from the rehabilitation treatment at the hospital. This can happen even if there is no noticeable improvement, or if the recovery has not achieved the expected level. In the follow-up sessions, patients might receive further exercises to continue their treatment at home.

6.2 Designed Prototypes

Given the complexity of the design journey in this work and the several elements that constituted it, I would like to first introduce the three stages of prototypes that I designed, implemented and employed as probes during the design process. A description of their origin and the rationale for their features is introduced in the next sections. I present them here to provide an anchor for the reader when encountering the activities that would eventually lead to them.

The prototypes are a combination of software, consisting of custom Wear OS apps that I developed, and hardware, consisting of off-the-shelf smartwatches (Samsung Galaxy Watch 7, 40 mm) and velcro straps. The Android Studio project with the source code and installable files for the Wear OS apps can be found in an Open Science repository [188]. EMS, ATJ and I would eventually call these prototypes the MoTTs: Movement and Training Technologies.

Flex, Drums and Maze were developed first, followed by Points as an intermediate iteration, and ending with Angle as the latest version so far. In the spirit of prior work [40,63], I report on all of the prototypes to support the sense of their significance during the design journey and to provide an account of the loose ends that others might want to explore with them or an iteration of them.

Note that to describe the prototypes, I refer to the three rehabilitation goals described above and presented in Fig. 10:

  1. Preservation of the range of motion of the elbow by using passive motion therapy.
  2. Activation of the biceps by using movements of the ulnar nerve (flexion of the wrist, flexion of the fingers, adduction of the wrist.)
  3. Learning to flex the elbow without the movements of the ulnar nerve, and vice versa.

6.2.1 Initial Prototypes: Flex, Drums, Maze

Figure 11: Initial Prototypes: Flex, Drums and Maze

6.2.1.1 Flex

The Flex prototype is meant to be attached to the patient’s recovering arm to provide multisensory feedback—audio, visual and haptic—regarding the range of motion of their elbow. It is calibrated by engaging in a maximum extension of the elbow and pressing a first button, which sets that position, and then engaging in a maximum flexion of the elbow and pressing a second button, which sets that position as well. The device plays distinctive sounds distinguishing those points: a high or a low percussion sound. It also lights up in different colours—note that the screen faces inwards so that the patient can see it while flexing the elbow—and provides a vibration.

This prototype would be useful for implementing the activities for Goal 1: Preservation of the Range of Motion in the Elbow Joint, and to some extent for Goal 3: Re-learning of the original movement patterns. By indicating the points of maximum flexion and extension, the patient can then practice and repeat the movement passively or actively, depending on the rehabilitation stage.
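To make the interaction concrete, the calibration-and-feedback cycle of Flex can be sketched as follows. This is an illustrative Python sketch, not the actual Wear OS code: the angle units, the detection threshold, and which percussion sound maps to which endpoint are all assumptions.

```python
# Illustrative sketch of the Flex interaction logic (assumptions: angles in
# degrees, a fixed detection threshold, and the sound-to-endpoint mapping).

def make_flex(threshold=5.0):
    """Return (calibrate, feedback) closures sharing the calibration state."""
    state = {"extension": None, "flexion": None}

    def calibrate(button, current_angle):
        # Button 1 saves the maximum extension; button 2 the maximum flexion.
        state["extension" if button == 1 else "flexion"] = current_angle

    def feedback(current_angle):
        # Return the feedback to trigger, if any, for the current elbow angle.
        if state["extension"] is not None and abs(current_angle - state["extension"]) <= threshold:
            return "low_percussion"   # plus a distinct colour and a vibration
        if state["flexion"] is not None and abs(current_angle - state["flexion"]) <= threshold:
            return "high_percussion"  # plus a distinct colour and a vibration
        return None

    return calibrate, feedback
```

For example, after calibrating extension at 170° and flexion at 40°, moving the elbow through the range triggers the corresponding feedback only near the two saved endpoints. The same save-then-detect pattern recurs in the later prototypes.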

6.2.1.2 Drums

The Drums prototype extends the interaction provided by Flex: it allows the user to define a virtual set of four drums (or positions) arranged wherever they prefer. In this case, the idea is to wear the smartwatch on the palm of the hand, with the screen facing inwards. From there, one can adduct or flex the wrist in different directions and save one of the four available positions by touching the corresponding button. Each time the hand returns to a saved position, a distinctive sound, colour and vibration is triggered. This prototype works best when accompanied by simple objects that mark the space, so that the patient can identify and return to the saved positions.

In principle, this prototype would be appropriate for undertaking Goal 2: Motor Activation of the Biceps by Involving Movements of the Donor Nerve. By setting and exploring different positions of the wrist and fingers, invited by the sound-making provided by the device, the patient would likely be able to first activate the biceps (detectable by EMG) under the supervision of medical personnel, and then practice the same kind of movement at home.

6.2.1.3 Maze

Here, the smartwatch is again worn on the palm of the hand, with the screen facing up. The elbow is flexed while the wrist is completely extended. In case the patient is not yet able to extend the wrist, some extra weight would be needed, as envisioned in the designs. Once the activity is started on the smartwatch, tilting the hand in one direction—by flexing and extending the wrist—moves a ball up and down on the screen. At the same time, a stream of “walls” of different sizes and positions starts to move perpendicular to the direction of the ball. The objective is to avoid as many walls as possible. When the ball hits a wall, it changes colour and the device vibrates. After a set number of walls have passed, the activity ends and shows the numerical result of how many walls were avoided and hit. This prototype includes a setting to change the speed of the moving walls.

The Maze would likely support Goal 3: Re-learning of the original movement patterns, by providing an activity for dissociating the movements coming from the branch of the ulnar nerve—adduction of the wrist, and flexion of the wrist and fingers—from the flexion of the elbow. It would require extra weight to counteract the normal flexion of the wrist in the initial stages of rehabilitation.
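The core game logic of Maze can be sketched as follows. This is an illustrative Python sketch under assumed geometry—screen height, tilt range, and how walls encode their gaps are assumptions, not taken from the app's source.

```python
# Illustrative sketch of the Maze logic (assumed geometry and scaling, not
# the actual app code): wrist tilt moves the ball vertically, and each wall
# is checked against the ball's position when it reaches the ball's column.

def ball_position(tilt_degrees, screen_height=450, max_tilt=45.0):
    """Map wrist tilt to the ball's vertical position on screen (pixels)."""
    t = max(-max_tilt, min(max_tilt, tilt_degrees))  # clamp to the usable range
    return int((t + max_tilt) / (2 * max_tilt) * screen_height)

def passes_wall(ball_y, gap_top, gap_bottom):
    """True if the ball fits through the wall's gap, False if it hits."""
    return gap_top <= ball_y <= gap_bottom

def score(results):
    """After the set number of walls has passed, count (avoided, hit)."""
    avoided = sum(results)
    return avoided, len(results) - avoided
```

A hit would additionally change the ball's colour and trigger a vibration; the wall speed setting mentioned above would simply change how often a new wall reaches the ball's column.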

6.2.2 Second-round Prototype: Points

6.2.2.1 Points

The Points prototype is a generalisation of Flex (which uses two points) and Drums (which uses four). Like both, it requires wearing the device on a body part, holding it in a position, and saving that position with the corresponding button. One can select how many points to save, from one to four. Each time the body part returns to a saved position, a distinctive sound, colour and vibration is triggered, and a repetition counter for that position is incremented.

Figure 12: Circle-based visualization in Points prototype

This prototype implements a circle-based visualization of how close the device is to each of the pre-saved positions: a circle of the corresponding colour is displayed on the screen when the device is close, and it gets smaller the closer the device gets. Once this circle crosses a fixed gray circle, corresponding to a detection threshold, the colour, sound and vibration are triggered. The size of this threshold is adjustable from within the same app. Fig. 12 shows the circle-based visualization in the Points prototype: moving from the first position (left, green) to the second one (right, blue), the green circle gets bigger while the blue one gets smaller. Upon arriving at the blue position, its counter is incremented by one.
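The circle-based visualization can be sketched as a mapping from distance to circle radius, with an edge-triggered repetition counter so that dwelling at a position counts only once. The scaling constants below are assumptions for illustration; the actual Wear OS implementation may differ.

```python
# Illustrative sketch of the Points visualization (assumed scaling, not the
# actual app code): each saved position draws a circle whose radius grows
# with the distance to it; shrinking inside the fixed threshold circle
# triggers the colour, sound and vibration, and counts one repetition.

def circle_radius(distance, max_distance=90.0, max_radius=200.0):
    """Map the distance to a saved position onto a circle radius (pixels)."""
    d = min(distance, max_distance)
    return max_radius * d / max_distance

def update(distance, threshold_radius, was_inside, counter):
    """One sensor update: return (inside_threshold, repetition_counter)."""
    inside = circle_radius(distance) <= threshold_radius
    if inside and not was_inside:
        counter += 1  # edge-triggered: count once per arrival, not per frame
    return inside, counter
```

The adjustable detection threshold described above corresponds to `threshold_radius`: enlarging it makes the position easier to "hit", which is useful early in rehabilitation.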

6.2.3 Third-round Prototype: Angle

6.2.3.1 Angle

Angle provides multisensory feedback for repetitive movements performed between two target positions, such as the flexion and extension of the elbow. It calculates the angle between the two targets and the current angle of the body part with respect to them, based on a single tilt angle of the device (Fig. 13.)

Figure 13: Basic workings of the Angle prototype

In the app, two buttons are used to save two positions of the device, i.e. two different tilt angles (A and B). When the device gets back within a given range of those angles, a sound and a vibration are triggered. On the screen, a visualization displays the angle between A and B, as well as the current position of the device within or outside that range.
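As an illustration of these basic workings, a single tilt angle can be derived from the device's gravity vector and then placed within the saved A–B range. The axis choice and the tolerance below are assumptions for the sketch, not the app's actual code.

```python
import math

# Illustrative sketch of the Angle computation (assumed axis and tolerance,
# not the actual Wear OS code): one tilt angle from the gravity vector,
# normalized progress within the A-B range, and target detection.

def tilt_degrees(gx, gy, gz):
    """Tilt of the device around one axis, from the gravity vector (m/s^2)."""
    return math.degrees(math.atan2(gx, math.sqrt(gy * gy + gz * gz)))

def progress(current, a, b):
    """Position of the current tilt within the A-B range: 0.0 at A, 1.0 at B."""
    return (current - a) / (b - a)

def at_target(current, target, tolerance=5.0):
    """True when the tilt is back within the tolerance band of a saved angle."""
    return abs(current - target) <= tolerance
```

Under this sketch, the on-screen visualization would simply render `progress`, while `at_target` gates the sound and vibration; the sensitivity setting mentioned below would correspond to the tolerance band.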

Figure 14: Angle prototype modes: Configuration, Activity, and Log

For the purposes of in-clinic calibration and at-home rehabilitation, I designed three modes for the application: Configuration, Activity, and Log (Fig. 14.) In the Configuration mode, the rehabilitation doctor or therapist can set the target positions for the beginning and end of the movement, the sensitivity, and the number of target repetitions for the exercise. Additionally, this mode benefits from a mobile “companion” app, where these parameters can be adjusted without having to interface with the smartwatch. Once the configuration is done, this mode can be disabled.

Then, in the Activity mode, the patient can start an activity: a stopwatch begins counting time and the system counts and displays the number of performed repetitions. In both the Configuration and Activity modes, real-time feedback is provided either when reaching the target positions (in the case of sounds and haptics) or continuously (in the case of the visualisation.)
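One way the repetition counting described here could work is as a small state machine that counts one repetition per full A–B–A cycle. This is an assumption about the logic, sketched in Python rather than taken from the app's source.

```python
# Illustrative sketch of repetition counting in the Activity mode (an
# assumption about the logic, not the actual app code): a repetition is
# counted each time the movement completes a full A -> B -> A cycle.

class RepCounter:
    def __init__(self):
        self.last_target = None  # "A" or "B", the last target reached
        self.reps = 0

    def on_target_reached(self, target):
        # Only alternating targets advance the cycle; lingering at the same
        # target (which may fire repeatedly) does not inflate the count.
        if self.last_target is not None and target != self.last_target:
            if target == "A":  # back at the start: one full cycle completed
                self.reps += 1
        self.last_target = target
        return self.reps
```

Counting full cycles rather than every target arrival avoids double-counting when the feedback fires several times while the patient holds a position.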

Finally, in the Log mode, one can look at a visualisation of the logged activities along with textual displays of the corresponding statistics.

6.3 Methodological Notes

TODO(intro)

6.3.1 Contributions

In this work, there were multiple collaborators. Here I list our contributions:

6.3.2 Participants

In this project, there were three different pools of participants: peripheral nerve transfer surgery patients (9), local medical personnel (6), and external occupational therapists and physiotherapists (11.)

6.3.2.1 Peripheral Nerve Transfer Surgery Patients

We had the participation of nine patients of peripheral nerve transfer surgery (Px1, Px2, Px3, Px4, Px5, Px6, Px7, Px8, Px9), who had undergone their surgery in Hospital Universitario de Getafe. They were informed about the study by AMM, LC or FA, and voluntarily signed up to participate.

6.3.2.2 Local Medical Personnel

Collaborators from the Peripheral Nerve Unit of Getafe University hospital were participants in some of the activities we organised: one occupational therapist (OT1), two physiotherapists (PT1, PT2), and the three medical doctors who led the medical aspect of this work (AMM, LC, FA.)

6.3.2.3 External Therapists

Finally, we had eleven external participants, who work in four different hospitals in Spain: six occupational therapists (OT2, OT3, OT4, OT5, OT6, OT7) and five physiotherapists (PT3, PT4, PT5, PT6, PT7). They responded to a call for experts in rehabilitation, occupational therapy and physiotherapy specialising in the rehabilitation of peripheral nerve injuries, specifically in the upper limbs. Five of them engaged in the Co-Design workshop with experts (Ac3), and nine of them also in the Participatory Embodied Sketching sessions (Ac4.) Additionally, in both of these activities, TBC was a participant co-designer too.

6.3.3 Design Activities

In the study protocol that we (EMS, ATJ, AMM, LC and I) originally developed as a plan, we envisioned a single year of activities divided into three stages:

  1. Sensitising: interviews with patients and field studies in rehabilitation sessions.
  2. Ideation: three co-design workshops in groups accompanied by three corresponding individual follow-up sessions with iterated prototypes.
  3. Evaluation: three evaluation workshops with further iterations of the prototypes.

The plan was linear and straightforward, although we were aware that in practice we would need to account for uncertainty. This was especially the case because the only defined aspect of the project was that we were designing wearable technologies. We would need to wait for the insights from the different activities to start developing what would be appropriate for the application domain.

TODO(original figure)

In the end, the design process followed approximately the same three stages, but involved somewhat different activities and took three years instead of one. Tbl. 4 provides an overview of the design activities that were actually carried out. Ultimately, a main aspect of the work in this chapter consisted of sharing the actual design journey and the reasons why it was shaped that way.

Table 4: Design Activities carried out during the Research through Design process.
ID Activity Outcomes Participants
Ac1 Interviews with peripheral nerve transfer surgery patients and observation sessions in OT and PT. Initial understanding of the challenges and opportunities for the design process. Px1-8, OT1, PT1-2
Ac2 Co-Design Workshop with OT, PT and Px. Understanding of the diversity of individual needs. Loose ends regarding sensorial rehabilitation. The need for an alternative frame for ideation. Px1, Px6-8, OT1, PT1-2, FA
Ac3 Co-Design Workshop: Wearables to Support Peripheral Nerve Transfer Surgery Rehabilitation. Design concepts supporting each of the three rehabilitation goals. Insights regarding wearability, external focus of attention, multisensory feedback and generalisability of the designs. Initial seeds for the first exploratory prototypes. OT2-5, PT3, TBC
Ac4 Participatory Embodied Sketching Workshops (x5) with Exploratory Prototypes. Expert feedback on the exploratory prototypes. Ideas for the improvement of multisensory feedback and sensitivity adjustment, and for implementation of data logging and dual design. OT1-3, OT5-7, PT1-7, FA, AMM, TBC
Ac5 At-Home Testing of Angle Prototype by Patients. Contrasting case studies of the use of the Angle prototype. Insights regarding the use of multisensory feedback and the role of data logging in a longer term experience. Px6, Px9

6.4 The Design Journey

In this section, I draw inspiration from the work and nomenclature of [128] and provide an account of the design events that shaped the design journey in this Research through Design project. This perspective justifies documenting design activities with stakeholders, conceptual pivots, and emerging insights as important components of the knowledge process, rather than treating them as mere background context.

Based on this nomenclature, I report particular encounters [128], where different stakeholders came together with a generative intent (e.g. coming up with designs, or iterating on existing ones); moments, highlighting brief events when something significant in the design process occurs that reveals “under-considered relations” [128]; relevant pauses [128], which in our case often came from intrinsic complexities in the rehabilitation process, access to patients, and the availability of our multi-disciplinary team; and transitions [128] between events, moments, and pauses, “when tasks are taken over, materials are changed, or prototypes or samples are left behind” [128].

I articulate “knowledge externalization and contributions” [128] emerging from these transitions, like relevant design features from former design concepts that remained alive in subsequent designs. In the spirit of providing a nuanced account of the design journey [57], I include in the key takeaways of each design activity not only the successful stories but also some aspects that did not work as expected [58,76] or that were left unexplored [63,173] and could be revisited and expanded in future work.

In some instances I use “we” to refer to the point of view of EMS, ATJ and me, the design and HCI researchers of the team.

6.4.1 Sensitising Designers

The journey started with a stage of sensitising: the intention was to learn about the lives and rehabilitation treatments of patients of peripheral nerve transfer surgery. EMS and I intended to get a sense of the contexts where custom-designed interactive technologies could support them. For this, I organised interviews and observation sessions with patients.

6.4.1.1 Activity: Interviews and Observation

EMS and I conducted individual semi-structured interviews with eight patients to get a sense of their injuries, their rehabilitation timelines, and the challenges that they had faced before and after their peripheral nerve transfer surgery. Additionally, I organised observation sessions during the therapy sessions (physiotherapy and occupational therapy) of three of the patients who were treated at Hospital Universitario de Getafe. These activities were carried out over a period of eight months, as patients were recruited throughout this time.

6.4.1.2 Key Takeaways: Initial Challenges and Opportunities for the Design Process

Each of the interviews was an encounter [128] between the patients and us (EMS and me), which started to challenge our expectations of how to proceed with the project. While conducting the interviews with patients, we started to notice many logistical challenges for working with them as “a population”. The first one had to do with their availability: some months we were able to recruit two or three patients, while other months there were none. In this sense, this period was filled with pauses [128] between interviews, which provided time to reflect on what could be an appropriate approach for the upcoming co-design activities.

Speaking with the patients, we started to notice that even though they had had their surgery performed at Hospital Universitario de Getafe, they were not necessarily treated there, or did not even live close to the city, so further access to them could be complicated. Even though they had all undergone peripheral nerve transfer surgeries, these had been performed on different body parts, and some patients had undergone tendon transfers along with the nerve transfers. For some of them, this was not their first surgery to treat their injury, and for others, several months had passed after their injury before they could receive appropriate treatment. These situations altered the treatment timeline of each one of them. Additionally, they were at different stages of rehabilitation: some had already been discharged, while others were waiting for further treatment. For those who had periods without formal rehabilitation treatment, it was not clear what they could be doing or expecting in the meantime. Furthermore, several of them were not aware of the specifics and implications of the peripheral nerve transfer. In contrast, some of them were very motivated in their rehabilitation and did not seem to have any problems following their treatment.

In the observation sessions at the hospital, I noticed that the patients were treated warmly and in a personalised manner by their assigned physiotherapists (PTs) and occupational therapists (OTs). The PTs provided passive motion therapy, massages and electrical muscle stimulation, while the OTs assigned custom activities to help increase range of motion, coordination and strength, using an assortment of analogue materials. Even though the therapists could have several patients at the same time, they were able to rotate between them while also making sure that they all had appropriate feedback and exercises.

From all these observations, EMS and I started to realise that even though the patients had undergone peripheral nerve transfer surgery, each one of them had very different requirements rehabilitation-wise. Initially, coming from an Interaction Design perspective, we imagined that having a population with the same type of surgery would provide a strong scope for the designs, and therefore, that the sensitising stage would reveal a clear direction to follow for the co-design activities. Instead, the project started to transition [128] away from that general perspective, as it now seemed appropriate to focus the initial designs on the individual needs of the patients. In any case, something that appeared clear was that the biggest opportunity for introducing a helpful technology design was in the periods when the patients did not have frequent rehabilitation sessions: we gathered that any meaningful impact would come from supporting rehabilitation at home, as the therapy sessions at the hospital were already very helpful and appreciated.

6.4.2 Ideation Begins: Personalised Designs

Once EMS, ATJ and I had gathered enough information from the diversity of patients and needs, we advanced to what was the next stage in our plan: co-design with patients. Specifically, the intention was to involve patients, therapists, and interaction designers to arrive at design concepts that we could later analyse and implement. We decided to focus on the specific context of each patient, assuming that there would be emerging insights that could be extrapolated for a first prototype.

6.4.2.1 Activity: Co-design Workshop with Patients and Therapists

I organised a co-design workshop in a room of Hospital Universitario de Getafe, involving the participation of four patients (Px1, Px6, Px7 and Px8), the rehabilitation team from the hospital working with the Peripheral Nerve Unit (FA, OT1, PT1, PT2) and the interaction designers of the project (EMS, KS, MRL, JDD and me). I arranged the group into four teams, each one with one patient, one therapist or doctor, and one interaction designer, therefore instigating an encounter [128] between the different stakeholders of the project.

The objective of the workshop was to arrive at design concepts for wearable technologies that would support the patients in their specific rehabilitation needs. The workshop lasted two hours. To organise it, I had to coordinate the availability of the medical personnel, which was very limited, with that of the different patients. For instance, the original arrangement included an additional team, but neither its patient nor its doctor could attend.

All of us engaged in bodystorming for ideation. First, each team defined an application context based on the experience of the patient, focusing on a situation they found challenging in their day-to-day life. Bodystorming followed, first as an exploratory and divergent activity and then with the objective of creating a low-fidelity prototype that could be shared with others. This prototype was documented along with the application context in sheets that we had prepared—these sheets asked for details regarding the patient, their context, and the design proposal. In the end, the teams presented their ideas to each other.

6.4.2.2 Key Takeaways: Diversity of Individual Needs, Sensory Rehabilitation and Textures, At-home Usage, Reminders

The diversity of patients and a focus on their personal needs led to a diversity of wearable design concepts (Tbl. 5). The team of Px1 created the prototype of an inflatable glove that would help with passive motion therapy to extend the fingers while also providing haptic stimulation through different types of textures. The team of Px6 envisioned a pulley system that would be installed at home and allow him to self-administer passive motion therapy for the mobility of his shoulder. The team of Px7 designed a glove that would provide sequences of vibrotactile stimuli in the fingers for sensory therapy. In the case of Px8, the team focused on the hand and wrist splint that he had to wear with the objective of helping to extend his fingers. Px8 felt that the splint was cumbersome to wear, so the team designed a lighter version of it, along with “smart” features that would remind him to take it off to rest.

Table 5: Design concepts from Ac2: co-design workshop with patients
Px. Application Context Design Description
Px1 Rehabilitation of range of motion and sensitivity in fingers to enable grabbing objects without dropping them. Glove attached to an inflatable ball with different textures on the surface. It would extend the fingers while simultaneously providing haptic stimulation.
Px6 Passive motion therapy of the shoulder at home, to be done individually. Splint with a pulley system mounted in the room, allowing the patient to pull with the healthy arm and lift the other in different directions.
Px7 Recovering sensitivity and reducing pain in fingers. Gloves with moving textures to stimulate the skin in phalangeal areas in sequence.
Px8 Improvement of a splint to support the extension of the fingers in activities such as typing. Light splint with a more elegant aesthetic and smart reminders for usage and rest.

In the following, I highlight key aspects and commonalities that the workshop and the designs provided to us as designers.

6.4.2.3 Diversity of Individual Needs

The workshop confirmed the diversity of needs and the specificity of the injuries and treatments that these patients had at their different stages of rehabilitation. We noticed that each design went in a different direction and even a different technical field, not necessarily within the scope of our project. For instance, the designs by and for Px6 and Px8 had a strong focus on mechanical engineering and industrial design, while the ones by Px1 and Px7 leveraged mechatronics.

6.4.2.4 Sensory Rehabilitation and Textures

The designs in the workshop were completely led by the patients, and to arrive at them, each identified their main need at the moment. For a couple of them (Px1, Px7), aspects of tactile sensitivity and its rehabilitation—sensory rehabilitation—were brought up. This was probably because they had noticed these issues, which were not treated as much as the mechanical and functional aspects. We as a team did not pursue this line on sensory rehabilitation and left it as a loose end [63]. However, I would like to highlight these ideas, which had an emphasis on robotic tactile stimulation and the usage of different textures. This appears to be a fruitful line for further research, especially because it is not usually given the same priority as motor rehabilitation, which was the focus of both the patients’ rehabilitation process and this project.

6.4.2.5 At-home Usage

In any case, we as a design team noted that what the designs had in common was their intended context of usage: all of them were meant for the homes or workplaces of the patients, to support rehabilitation activities that they would want (or need) to engage in frequently during the day. This confirmed earlier insights from the interviews: that supporting the patients at home would be an appropriate opportunity for the designs.

6.4.2.6 Reminders

Additionally, I gathered some insights regarding lower-level aspects of interaction. For instance, all the design concepts envisioned using vibrations to provide interactive feedback and reminders, as these were perceived as more personal and less intrusive for others. The reminders were envisioned either to promote activity (Px6) or to encourage resting from the splint (Px8). I took note of this aspect but did not explore it further, given that the main design direction was still to be decided.

6.4.2.7 Need for an Alternative Frame for Ideation

With this workshop, we as a design team expected to find inspiration for our design. However, we realised after the fact that, by foregrounding the patients’ experiences at different stages of rehabilitation, without another common focus in the workshop, we ended up with an assortment of ideas that were very personalised to each patient’s context while simultaneously not necessarily specific to peripheral nerve transfer surgery rehabilitation in general. We concluded that the ideation session was too open and, in the end, did not provide a strong foothold for exploratory prototypes. The encounter [128] that was the workshop became a moment [128] where we realised that our approach had to change in some way.

6.4.3 Towards a Goal-Oriented Design

From the co-design session with patients, we as a design team gathered that we would need an alternative framing for ideation. In this sense, we started looking for applicable embodied core mechanics [108] in the context of peripheral nerve transfer surgery rehabilitation: desirable and repeatable movement-based actions that could serve as the basis of our designs, in the form of core actions that the patients needed to do as part of their rehabilitation. For this, a functional perspective, such as the one that is used in physiotherapy and occupational therapy, would be helpful.

In a conversation with AMM and TBC—an encounter [128] between the medical and design teams—EMS and I learned that elbow flexion is, in general, the most important movement to rehabilitate in the upper limbs. In this sense, given a partial loss of function in the upper limbs, the first target for a peripheral nerve transfer would be to regain elbow flexion (e.g. by using the Oberlin nerve transfer). This moment [128] started another transition [128] in the design process, where we would now focus on a very concrete surgery with concrete core actions. Therefore, I started to follow previous work [7,164] describing rehabilitation goals for peripheral nerve transfer surgery in general, but with a focus on the Oberlin nerve transfer [164], because of the concrete actions in play there. We decided to frame further designs on the rehabilitation goals these works [7,164] describe, focusing on the three rehabilitation goals discussed above.

By focusing on the mechanics of the rehabilitation, we reasoned that it would be possible to eventually extrapolate the findings to other types of peripheral nerve transfers. Additionally, to further support the focus on movement, we decided to involve notions of implicit motor learning, which we reasoned could aid in developing ideas for a shared frame of reference between patients and therapists, as previous works [183] had done before.

When organising the following workshop, we faced a temporal dissonance [128]: because of the needs of the embodied sketching [102] methods we planned to use, and the depth that we envisioned was needed for each of the three rehabilitation goals, we anticipated that we would need a session two to three hours long. This was at odds with the time and availability of the medical personnel of the hospital, who had already gone through great efforts to accommodate the previous two-hour workshop. This situation was resolved with a transition [128]: for the new co-design workshop, we invited external experts, specialised in the rehabilitation of peripheral nerve injuries, specifically in the upper limbs, who could also provide a fresh view to the problem. This co-design workshop took place two months after the previous one.

6.4.3.1 Activity: Co-design Workshop with External Therapists

In the workshop, EMS and I were joined by five external therapists (OT2, OT3, OT4, OT5, PT3) and TBC as participant co-designers. We divided the workshop into two main stages: sensitising and co-design. The sensitising stage introduced participants to relevant theoretical and methodological aspects to inspire their design process, establishing a common vocabulary. We introduced our embodied design methodology and the goals of the workshop. I discussed key concepts of peripheral nerve transfer surgery and its rehabilitation, which would guide the stages of the co-design session. Then, TBC introduced, physically engaged with and practised relevant concepts of motor skills pedagogy, such as explicit vs. implicit motor learning, to provide some inspiration for the designs: he presented and compared instructions such as “flex your right elbow until it reaches its maximum position” (explicit) with “touch your right shoulder with your right hand” (implicit).

For the co-design stage, EMS and I divided the group into two teams, separating those who worked in the same hospital: Team A included OT2, OT3, and TBC, with me as the facilitator, and Team B consisted of PT4, OT4 and OT5, with EMS as the facilitator. This stage was divided into three sections, each based on a different rehabilitation goal. For each section, the objective was to develop a holistic home activity using wearable technology, and to document it through a video prototype that would be presented between the teams. The focus was on an activity because the idea was to build on the key core mechanics for the given rehabilitation goal, turning them into something engaging by involving different design resources, such as objects, technologies and people. To achieve this, we engaged in 15 min of bodystorming, followed by a 15 min convergence activity leading to one design concept, which would be (1) implemented as a low-fidelity prototype, (2) video recorded, and (3) presented to the rest of the group.

6.4.3.2 Key Takeaways: Design Concepts, Wearability, External Focus of Attention, Multisensory Feedback, Generalisability

6.4.3.2.1 Design Concepts
Figure 15: Design concepts from Ac3: co-design workshop with therapists

From this goal-oriented co-design workshop with therapists, we obtained a total of six design concepts, two per rehabilitation goal. See Fig. 15 and Tbl. 6 for an overview.

Table 6: Design concepts from Ac3: co-design workshop with therapists
Goal ID Design Name Description
Preservation of the Range of Motion in the Elbow Joint 1A River Crossing The patient simulates crossing an imaginary river while sitting on a wheeled office chair. They must keep their forearm on a table, encouraging elbow flexion as they “walk”. The setup includes an elbow flexion sensor and motion sensors in the feet to track the movement and generate splashing sound effects.
1B Feedback Wristband A wristband worn on the recovering forearm, sensing the position of the forearm relative to the arm, and wirelessly connected to devices worn on shirts or shoes. The design provides visual feedback in the wristband—changing its brightness and colours—and haptic feedback—pleasant vibrations or massages—from the location of the other devices, such as the lower neck, shoulders or feet soles.
Motor Activation of the Biceps by Involving Movements of the Donor Nerve 2A Virtual Drums A virtual drum kit responding to a drumstick with sensors: it plays the drums, emits lights, and vibrates when doing a motion that corresponds closely to the one that would activate the biceps—previously identified during consultation. When the motion is far from the desired one, the drums sound distorted. The patient can also play along a song that they enjoy.
2B Kinesthetic Blob A “blob” of a malleable material with kinesthetic memory which can move and passively guide the hand along different directions. It supports identifying the movement combination that activates the biceps during consultation, and practising it at home by following its haptic feedback: it encourages or blocks movements depending on their closeness to the desired one. The blob is connected to a social app tracking the patient’s progress.
Re-learning of the Original Movement Patterns 3A Red Riding Hood Tray Video game based on the Red Riding Hood story, where she brings biscuits on a tray to her grandmother by traversing a path. The controller is a weighted tray that has to be tilted and moved in a specific manner, flexing and extending the wrist independently of the elbow. The tray counteracts the flexion of the wrist and fingers that naturally emerges when patients want to flex the elbows after Oberlin nerve transfer.
3B Maze Tray A tray with a maze that is solved by tilting it, providing multisensory feedback when the end of the maze is reached.
3B’ Magnetic Wristbands A wristband and a belt with a sensor in the middle of the sacrum area, plus the haptic feedback devices in shoulders, neck and insoles from Design 1B. Here, the goal is to align the wristband with the belt, therefore implicitly promoting the desired movement—extension of the elbow simultaneous to an adduction of the wrist. When successful, a pleasant massage is provided by the haptic devices.
6.4.3.2.2 Wearability

I expected the participants to propose activities based on mainstream wearable devices such as smartwatches, or straightforward form factors such as gloves. Instead, they proposed devices and objects in other shapes and locations, some of which would even challenge the definition of wearable. For instance, the participants envisioned devices worn on the upper back (Designs 1B and 3B), on the shoulders and feet (above and below) (Designs 1A, 1B, 3B) and on a belt (Design 3B), but also embedded in objects that were not worn but rather held for an amount of time: drumsticks (Design 2A), a moving blob (Design 2B) and a weighted tray (Designs 3A and 3B). All of these had in common that they would be on the patient only while the activity was performed. Therefore, they would not be permanently worn or held, challenging a perspective of always-on devices. Here we found an interesting aspect of minimalism, in that the devices would serve their purpose of supporting an activity, and not more.

6.4.3.2.3 External Focus of Attention

I found it relevant that in most designs, the position of the wearable device or the focus of attention of the whole activity was not necessarily the body part under rehabilitation. For instance, the activity focus was on “crossing the river” (Design 1A), the virtual drums (Design 2A), the “blob” (Design 2B) or the tray or screen (Designs 3A and 3B). Furthermore, in Design 1B, the haptic feedback was provided on the back, shoulders or feet, not on the arm. This was coherent with the implicit motor learning strategy practised and bodily explored in the sensitising part of the co-design workshop. Even if the concept was not consciously known by the participants, they found this strategy valuable and were able to reflect it in their designs.

6.4.3.2.4 Multisensory Feedback

The design concepts presented the use of multiple sensory modalities to provide feedback: visuals (Designs 1A, 1B, 2A, 3A, 3B, 3B'), sounds (Designs 1A, 2A, 3A, 3B), and haptics (Designs 1A, 1B, 2A, 2B, 3B'). Only in the case of Design 2B was the focus on a single modality: haptics. From this, I gathered that it would be appropriate to incorporate two or three sensory modalities in the prototypes I would develop.

6.4.3.2.5 Generalisability

The participants took into account that the designs would need to be adapted to different ranges of motion, dimension of body parts and the specifics of each patient’s surgery. Therefore, in principle they would be flexible enough to be adapted to other application domains.

6.4.4 Implementing Initial Prototypes: Flex, Drums and Maze

The co-design workshop with experts provided rich insights and inspiration from which I could derive a first version of exploratory prototypes. Following my design drives, I looked at how I could design a minimalistic technology that could be used to build and support holistic activities such as the ones the participants had designed. For this, I looked into connecting the concrete core mechanics from the activities to the functionality of the technology we would design. Therefore, we focused on each of the three rehabilitation goals, and designed and developed one initial prototype for each: Flex (Goal 1), Drums (Goal 2) and Maze (Goal 3).

Developing the prototypes consisted of creating a custom app for Wear OS that could be installed on commercial smartwatches. I recognised that the design concepts throughout the process were rich in potential placements and interaction capabilities, which invited exploring custom hardware designs and solutions. However, given my focus on minimalism and on Interaction Design, I first wanted to explore the affordances of already-existing hardware to support these goals. Because custom hardware tends to be more expensive to create, control, modify, maintain and distribute [68], I saw in smartwatches an available, small, flexible, and wearable platform that could be subverted for our purposes. Developing on this platform would allow me to leverage the already-existing hardware—which could be altered through custom straps to support alternative placements of the device and different needs for different bodies [158]—to focus on prototyping the interactions. Additionally, previous works [60,141] had indicated the potential of custom smartwatch apps for the detection and evaluation of rehabilitation movements, which became a useful precedent, especially in a medical context with strong safety requirements.
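The interaction core of such an app is small enough to sketch in a platform-independent way. The following Python snippet is illustrative only—the actual prototypes were Wear OS apps, and the axis convention used here is an assumption—but it shows how a single accelerometer sample can yield an estimate of forearm inclination when the watch is worn on the forearm:

```python
import math

def forearm_tilt_deg(ax: float, ay: float, az: float) -> float:
    """Estimate forearm inclination (degrees) from one accelerometer sample.

    While the arm is roughly still, the accelerometer reads the gravity
    vector; the angle between the device's y-axis and gravity then tracks
    elbow flexion/extension. The y-axis convention is an assumption here.
    """
    g = math.sqrt(ax * ax + ay * ay + az * az)  # magnitude of measured gravity
    cos_a = max(-1.0, min(1.0, ay / g))  # clamp against floating-point drift
    return math.degrees(math.acos(cos_a))
```

For example, a sample of (0, 9.81, 0) m/s² yields 0° (forearm aligned with gravity along the y-axis), while (9.81, 0, 0) yields 90°. A real app would smooth successive samples (e.g. with a low-pass filter) before using the angle to drive feedback.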

Then, I developed the applications with feedback from EMS and ATJ. Analysing the design concepts from the workshop, writing about the process, designing possible activities based on them, and developing the prototypes, took approximately four months. Once I had a first version, we organised participatory embodied sketching [102] sessions with our participants to collect their feedback and iterate the prototypes before involving the patients again.

6.4.4.1 Activity: Participatory Embodied Sketching

Along with EMS, I organised and led five participatory embodied sketching sessions with different subgroups of the medical participants: these became encounters [128] between the medical practitioners, designers and prototypes, aimed at an initial assessment of the latter. In these sessions, for each prototype we: (1) showed how it worked and invited the participants to use it and experiment with it; (2) asked for feedback on the prototype’s appropriateness to support the given rehabilitation goal; and (3) provided some time for free embodied explorations of alternative rehabilitation applications the prototype afforded. While this third step was originally planned by us—as we were considering a possible transition [128] towards a broader rehabilitation domain—it also emerged naturally from the onset of the sessions at the initiative of our participants.

The five sessions lasted one hour each and were carried out within one month. They induced a rhythm [128], where the repetition of the same session structure and similar responses created a sense of heading in a fruitful direction.

6.4.4.2 Key Takeaways: Multisensory Feedback, Data Logging, Dual Design, Sensitivity

In general, the participants confirmed that the prototypes were heading in a useful direction, and provided helpful notes regarding how to improve them according to their point of view. Here I describe key aspects that led to the next iteration of the prototypes.

6.4.4.2.1 Multisensory Feedback

The participants commented on the different sensory modalities and stimuli provided for feedback. For the case of Flex, some (FA, OT1, PT1, PT2) concurred that the two percussive sounds worked effectively: I kept these two samples for the rest of the journey. However, in Drums, several participants (PT2, OT2, OT3, TBC) mentioned the difficulty of differentiating the drum kit sounds—“How do I know which one [position] is it? Are the sounds different?” (PT2)—, and suggested using other more recognisable sounds, such as notes (OT3), numbers (TBC), a motivational voice (OT3) or animal sounds (TBC, OT2). More semantically-charged sounds were also suggested for Flex and Drums, such as alerts to signal when one is crossing from the safe area (PT5) or trumpets or other “success sounds” when achieving something (PT5). Given the long-term nature of the rehabilitation of peripheral nerve transfer surgery, and the challenges of supporting motivation over time [13], I was cautious about implementing these sorts of motivational affordances from the start. Nevertheless, for the next iterations of the prototypes, I decided to further curate the sound samples to increase their clarity and distinctiveness.

Additionally, for further possibilities, OT5 suggested the idea of sonifying when approaching (and not only reaching) the goal in Flex or Drums, to build up anticipation and increase motivation. TBC, PT5 and FA commented on also providing feedback when one exceeds or pushes beyond the target, to reward that effort. TBC and FA saw it as positive reinforcement for some exercises, while PT5 saw it as a kind of warning for situations when the patient has to stay within certain limits to avoid further injury.

Regarding haptic feedback, when trying out Drums, both PT6 and PT2 suggested using a different vibration pattern for each one of the four positions. This is a direction that I have not pursued so far.

In general, the visuals were deemed adequate: the colour feedback when reaching the target positions in Flex and Drums was clear, and the game-like animation in Maze, using basic shapes, was engaging. I observed that the real-time feedback provided by the player ball in Maze enabled a fluent coupling between the expected and performed movements. In this sense, these observations became moments [128] when I realised that the visuals were underutilised in Flex and Drums: in principle, they could provide more information—for the patient and the therapist—for the movement within target points, increasing the transparency [209] of the interaction. Additionally, we reasoned that I could use the visuals for showing even further information regarding aspects that emerged in the sessions, such as data tracking, dual design and sensitivity. In further iterations, I followed this line of development.

6.4.4.2.2 Data Logging

To support the achievement of the rehabilitation goals and track their progress, the participants reflected on the value of implementing data logging and a visualisation of such data. “What is valuable here is the possibility of logging the activities, otherwise this is the same as the [Nintendo] Wii we have” (OT1). The participants wanted to access exercise data to understand progress and adjust the prescribed exercises. AMM suggested logging frequency of use (amount of total repetitions), dispersion over time (repetitions per day) and success rate (how many repetitions were in the expected/directed ranges), and FA was interested in recording the progression of movement ranges in terms of angles. They envisioned that these data could be visualised on another device, such as a mobile phone or tablet. These reflections, based on using the prototypes, resonated with initial comments from FA, AMM and LC at the onset of the project. They confirmed that data logging was a highly important need for the medical staff. Given the prominence of the topic, I reasoned it made sense to compromise some level of open-endedness in the design in order to add the capability of logging its usage.
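As a sketch of what such logging could look like—the class and method names below are hypothetical, not the actual implementation—the three metrics AMM suggested can all be derived from a flat list of timestamped repetitions:

```python
from collections import Counter
from datetime import datetime

class RepetitionLog:
    """Hypothetical exercise log covering the suggested metrics:
    total repetitions, repetitions per day, and success rate."""

    def __init__(self) -> None:
        self._entries: list[tuple[datetime, bool]] = []

    def record(self, when: datetime, in_target_range: bool) -> None:
        """Store one repetition and whether it hit the prescribed range."""
        self._entries.append((when, in_target_range))

    def total_repetitions(self) -> int:
        """Frequency of use: amount of total repetitions."""
        return len(self._entries)

    def repetitions_per_day(self) -> dict[str, int]:
        """Dispersion over time: ISO date -> number of repetitions."""
        return dict(Counter(t.date().isoformat() for t, _ in self._entries))

    def success_rate(self) -> float:
        """Fraction of repetitions within the expected/directed ranges."""
        if not self._entries:
            return 0.0
        return sum(ok for _, ok in self._entries) / len(self._entries)
```

Such a log could then be serialised and visualised on a phone or tablet, as the participants envisioned.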

6.4.4.2.3 Dual Design

Along a similar line, there were multiple comments about the different needs of the medical staff and the patients, and the fact that this would likely entail a dual design, i.e. different user interfaces (UIs) for each of them. For instance, the medical team discussed needs such as designing and prescribing exercises for the patients to engage with at home, which would include setting up the device’s ranges, difficulty/sensitivity, etc. Additionally, they mentioned the need for monitoring engagement and progress with the prescribed exercises. During medical consultation, the therapists’ UI could be used to design and calibrate the type of movements (PT1) and ranges (AMM and TBC) in the case of Flex, and positions and orientations in the case of Drums (AMM). Connected to the previous point, these reflections led me to consider how to implement different modes for the patients and therapists.

6.4.4.2.4 Sensitivity

The range of detection for triggering feedback and signalling the reach of the targeted position was a prominent topic of discussion. The sensitivity was perceived as too high when the targets were difficult for the participants to achieve, leading to comments about how challenging it would be for their patients. This was a major point of discussion with Drums, where several participants found it difficult and frustrating to replicate the target position of the hand after setting up the four positions: “it’s so precise it’s difficult to do it” (PT5), “this one is very sensitive, can it be adjusted?” (AMM). In the Flex design, only one participant found the sensitivity could be too high for a certain population: “patients with motor control issues would have trouble reaching the same positions” (OT3). In contrast, PT2 perceived Flex as not sensitive enough and therefore prone to be “cheated”. OT3 related this lack of perceived sensitivity to the motion richness of the involved hand joints, which makes it easy to engage in movement compensations. OT2 noted how this challenge also happens with commercial technology, like the Nintendo Wii they had used in OT. I realised I would need to implement sensitivity adjustment to allow for further exploration of the affordances of each sensitivity level. Additionally, I realised that, for Flex and Drums, visualising the closeness or trajectory of the device relative to the target points would also help increase the transparency [209] and make it easier to understand why one might not be triggering the feedback when returning to the target points.
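A minimal way to express this adjustable sensitivity—function names and the falloff factor below are illustrative assumptions, not the prototypes’ actual values—is to treat the tolerance around a saved target as a parameter, and to derive from the same distance a 0-to-1 closeness value for the visualisation:

```python
def reaches_target(current_deg: float, target_deg: float,
                   tolerance_deg: float) -> bool:
    """True when the measured angle is within the adjustable tolerance
    of a saved target point; a larger tolerance means lower sensitivity."""
    return abs(current_deg - target_deg) <= tolerance_deg

def closeness(current_deg: float, target_deg: float,
              tolerance_deg: float, falloff: float = 3.0) -> float:
    """0..1 proximity cue for visualising the approach to the target:
    1.0 at the target, fading to 0.0 at `falloff` tolerances away."""
    distance = abs(current_deg - target_deg)
    return max(0.0, 1.0 - distance / (falloff * tolerance_deg))
```

Exposing `tolerance_deg` in a therapist-facing configuration would let each sensitivity level be explored per patient, while `closeness` supports the kind of approach visualisation discussed above.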

6.4.5 Iterating the Prototypes: Angle

I analysed the results from the previous sessions and implemented changes to test the next version with some patients. The Points prototype emerged as an intermediate stage: I added sensitivity adjustment and trajectory visualisation features, while also abstracting away the differences between Flex and Drums. Given that they work with two and four points respectively, but are otherwise the same, I merged them into Points, where one can choose how many points to save.

However, with the objective of testing the prototype both in-clinic and at-home, and to implement the data logging and dual design that were requested, I iterated further and simplified the design to arrive at Angle, where only two points are used, along with a simplified position detection. This made it possible to configure, and show in a straightforward manner, the measurements being taken and the position of the device relative to the saved points. Following the design drive of minimalism, in this prototype I kept the idea of using the device only when needed: it was not meant to be worn at all times, but only while the activity was performed. Analysing the results, implementing the changes and iterating on them to arrive at the Angle prototype took me four months.
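As an illustration of what such a simplified position detection could look like, a single angle can be estimated from the gravity vector reported by the smartwatch accelerometer. The following is a hypothetical Python sketch, not the prototype’s actual signal processing; it assumes a low-pass-filtered accelerometer reading dominated by gravity.

```python
import math

def tilt_angle_deg(ax, ay, az):
    """Estimate the tilt of the device's z axis relative to gravity
    (0 to 180 degrees) from an accelerometer reading in m/s^2."""
    g = math.sqrt(ax * ax + ay * ay + az * az)
    if g == 0:
        raise ValueError("no gravity signal")
    # Clamp the ratio to guard against rounding outside acos's domain.
    return math.degrees(math.acos(max(-1.0, min(1.0, az / g))))

print(round(tilt_angle_deg(0.0, 0.0, 9.81), 1))  # 0.0 (device lying flat)
print(round(tilt_angle_deg(9.81, 0.0, 0.0), 1))  # 90.0 (device upright)
```

A single scalar of this kind is what makes the two-point calibration and feedback of Angle straightforward: target positions, sensitivity windows and logged ranges can all be expressed as angles.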

I organised a couple of brief encounters [128] with the therapists and rehabilitation doctor from the hospital (OT1, PT1, PT2, FA) to gather their feedback on the Angle prototype. In general, they were receptive to and commended the Angle prototype. The therapists considered that it would be appropriate for the patients to use at home to get feedback about their exercises. Most of their comments discussed the multisensory feedback and the logging features of the design, confirming previous insights from the participatory embodied sketching session (Ac4) and their effectiveness. Additionally, they discussed some further possibilities regarding the configuration of the prototype: they reasoned that it could be useful to store more than one range, either to support the same exercise in different contexts (e.g. active motion therapy at home, passive motion therapy in clinic), or to support more exercises. Coming back to minimalism as a design drive of this work, for this first test I wanted to focus on a single range—based on two points—and therefore did not implement such a feature, but I took note of it as possible future work and a relevant point for discussion.

Once we as a design team got the approval of FA for testing the prototype at home, we were able to recruit two patients. They were currently under treatment at the hospital and were in the process of recovering mobility in the elbow. The patients were Px6, who had been present in the process since the beginning, and Px9, who joined only for this part. This started a series of encounters [128] between the patients, prototypes, doctors and designers (EMS and me.) Having only two patients for this stage was an effect of a temporal dissonance [128] involving the hospital, patient and treatment times in contrast with the time requirements of the project. This temporal dissonance prompted us to organise a longer and deeper engagement with the two patients: each of them used the prototypes for a total of seven weeks, within a period of thirteen weeks in total.

6.4.5.1 Activity: At-home Testing of Angle by Patients

I organised an initial testing of our Angle prototype with Px9 and Px6. They would use the prototype for an extended period of time to support the rehabilitation exercises they did at home. Here I present brief profiles of them to enable a contextualised discussion of their usage.

6.4.5.1.1 Patients

Px6 is a 43-year-old, male, right-handed patient. He suffered a motorcycle accident which caused a serious injury to his right brachial plexus, preventing any movement or sensitivity in the right upper limb. Five months later, at , he underwent a first surgical intervention. Complications were addressed with a second surgery one week later, and he underwent a third surgery days later: a peripheral nerve transfer surgery from the accessory nerve to the musculocutaneous nerve (biceps), using a nerve graft from the sural nerve. He participated in the initial interviews (Ac1) and co-design workshop with patients (Ac2) while he was rehabilitating in his area hospital, five and eight months after his surgery, respectively. After a year of rehabilitation in this area hospital, he was discharged. Five months later, he started rehabilitation treatment at Hospital Universitario de Getafe, and five months afterwards—two years after the first surgery—he joined us for the at-home testing (Ac5.)

Px9 is a 77-year-old, male, right-handed patient. Initially, he experienced weakness and sensory alterations in his left upper limb without a clear cause. A year and a half later, after some attempted interventions and increasing alterations, he underwent peripheral nerve transfer surgery in his left upper limb (from the ulnar nerve to the biceps, i.e. an Oberlin transfer.) After nine months of unsuccessful rehabilitation in his area hospital, he started his treatment at Hospital Universitario de Getafe. He joined the study one month later. Therefore, when we met Px9, his neurological alterations had started two and a half years before, and he had undergone surgery ten months before.

6.4.5.1.2 Sessions

EMS, FA and I organised and facilitated four sessions as checkpoints: introduction to the prototype (E1), a check-in after four weeks of use of the Angle prototype (E2), a check-in after six weeks of no use (E3), and a check-in after three weeks of returning to use it (E4) (Tbl. 7.) Each session was carried out with the patients individually.

Table 7: Summary of sessions with patients
ID Description Week
E1 Introduction to Angle prototype and configuration. 0
E2 Check-in after four weeks of use. We recover the prototype. 4
E3 Check-in after six weeks of no use. Patients get the prototype again. 10
E4 Check-in after three weeks of use. 13

In the first session (E1), we provided an introduction to the device and calibrated it so that the patients could take it home with them. In the case of Px9, the prescribed exercise consisted of a flexion of the elbow starting from the table (Fig. 16, left.) For Px6, the exercise was an extension of the elbow with support from the table throughout the movement (Fig. 16, right.) The movement that Px6 had to perform was not in the plane that the design was intended for. However, because of a natural rotation of the forearm simultaneous to the elbow extension over the table, the measured angle and subsequent feedback were meaningful, as they coincided with the movement.

Figure 16: Exercises performed by the patients. Left: Px9, Right: Px6.

In E2, after four weeks of use of the prototype, we interviewed the patients and recovered the devices to be able to download their data. This created a period without the device, which we drew on in E3, when we interviewed the patients about their experience without it and provided them with the opportunity of using it again. In the meantime, I had updated the prototype so that downloading the data could be done directly in the same session. In E4, we interviewed the patients again. Px9 returned the device while Px6 decided to keep using it.

In a sense, these sessions established contrasting rhythms [128] for our design journey. A first rhythm was the pattern of having the sessions with the two patients during the same days, always in the same order—first Px9, followed by Px6. Further rhythms were established within the series of sessions of each patient, as they were very different: with Px9 we encountered a pattern of reluctance combined with a diligent usage of the prototype, whereas with Px6 we encountered a pattern of enthusiasm combined with helplessness due to the amount of pain he experienced.

6.4.5.2 Key Takeaways: Contrasting Perspectives

In general, from the beginning of the sessions, Px9 showed and maintained clear reluctance to using the device and downplayed any possible benefit of using it, even though he used it consistently during the period he had it with him. In E1, while first configuring the target positions, he thought he would not be able to use it, but after some careful guidance he operated it without problems. In the follow-up sessions, he said that the use of the device did not help him in any way. Additionally, even though we did not ask about it, he said that he preferred the therapies in the hospital—from which he had recently been discharged because of his treatment timeline—to the use of the device. However, he did recognise that he had been doing considerably more of the prescribed at-home exercises after he started using the device. And after E2, in the period without the device, he continued doing the exercises and logging them manually on paper.

In contrast, from the beginning of the sessions, Px6 was very eager to use the Angle device, as he saw that it could help him as an extra motivator to perform the exercises. Due to his chronic pain, he was unable to perform the exercises daily. However, he recognised that having the device did make him more willing to do the exercise despite the pain. Although there was no money involved in the study, he compared the situation to the following: “It’s like the gym, where you go because you have paid for it” (Px6). In E3, Px6 shared that without the device he had decreased the number of times he did the exercises. Additionally, he was looking forward to using it again to have that extra reason to do them.

6.4.5.2.1 Multisensory Feedback

Each sensory modality in the device’s feedback was evaluated differently by the patients. Sound seemed to be the most effective one, visuals helped with the display of the repetition count, and the vibrations were not felt by either of them. For instance, sounds were useful for Px6 as they indicated to him whether he had arrived at the target position, or whether he had to try again. In this sense, he expressed how the sounds helped him to “push” further, because he would repeat a movement if it did not trigger the feedback: “the sound is good: you do the repetition but until you hear the sound (kling, kling, kling) [you know] it’s done well” (Px6.) He also recognised and appreciated the third sound that was played when reaching the repetition goal. He mentioned how on the screen he could see the repetition count, but also expressed how the sound was better for keeping a sense of the count because then “you don’t have to see the screen” (Px6.)

In E3, he shared how without the device, and therefore without the sounds, he could not be sure he was really reaching the target positions. He recognised that he may then not have done the prescribed repetitions as “well” as when he had the device. Additionally, Px6 mentioned he got used to the sounds and liked them. When asked about it, he could not think of alternative sounds he would have included.

In the case of Px9, from the beginning he expressed that the feedback was not providing anything to him. In E2 he mentioned he was aware of the “beeping” of the device when doing the exercise. When asked further about the sound, he said it did not add anything to his experience, and also that he did not have any preference for other sounds to replace it. For the visuals, it was not until E3 that Px9 recognised that the repetition count was useful to him because otherwise, without the device, “I have to be the one counting the repetitions” (Px9.)

6.4.5.2.2 Logging as Evidence

Both patients agreed and recognised from the onset that by receiving and using the Angle device, they would now be in a position where the logs would provide the doctor with evidence of their performance of the prescribed exercises: evidence that usually does not exist, as this evaluation tends to be based on self-reports. They agreed to having their activity data recorded, but also expressed with some degree of irony that the device would now be “snitching” on them. Surprisingly, in the period between E2 and E3, Px9 manually logged his exercises on paper in a similar way to the device. He brought the sheet of paper to E3 and said that it was proof he had kept working during that time. In the last period, given that he did not feel he was improving, he also expressed a sense of hopelessness: “I also ask myself this, why do I keep doing this [exercising and logging it] if I’m not getting better” (Px9). In contrast, during the period without the device (between E2 and E3), Px6 expressed that he did fewer exercises. He mentioned that having the device made him “push himself more”, in part because he did not want to arrive “with an empty log” (Px6.)

6.5 Reflections on the Design Journey

In this section, I discuss insights from different aspects of the design journey. First, I discuss concrete design qualities of the prototypes regarding their use of multisensory feedback. Inspired by the Intercorporeal biofeedback [183] strong concept, the design aimed to be open-ended, provide a shared frame of reference using multisensory feedback, facilitate guiding the attention from and to the actions, and be considered as part of the whole activity. Then, I discuss insights and tensions regarding the insertion of these designs within the medical context of the project: the implications of a minimalist design and a spectrum between open-endedness and normative goals. Additionally, I discuss challenges and opportunities that we experienced regarding technology design for health. Finally, I conclude with reflections on the design journey itself and the reporting approach I chose for this chapter.

6.5.1 Designing Multisensory Feedback Technologies

In this section, I focus on the multisensory feedback aspect, which provided the shared frame of reference [183] for the patients and the medical personnel. I discuss the implications for our designs of each sensory modality in use: sonic, visual, and haptic feedback.

6.5.1.1 Sonic Feedback

The design journey took a simple sonification approach, sonifying the reach of key points in a trajectory of movement using distinctive percussive sounds. In a sense, this led to the definition of a minimal version of a sonified exercise space [152]—a space where the patients could move knowing with confidence that they were within an expected and safe range of motion, and where the sonifications would be meaningful. In our case, in the Flex, Drums, Points and Angle prototypes, the sounds were emitted when reaching a position that was close enough to the target points. In general, this straightforward sonification approach appeared to work well both for the medical personnel and the patients. Based on the feedback that we received, during the design journey I performed a few changes regarding the sonic qualities of the triggered sounds, while keeping the basic mechanics intact.

For instance, there were several comments from the medical personnel regarding the sonic qualities of the chosen sounds when testing the initial prototypes (Ac4.) The initial drum kit sounds from Drums were deemed confusing, while the more tonal (albeit also percussive) sounds of Flex were commended. It seemed that in the idea and design of Drums, neither the participants nor we designers took into account that some musical literacy might be needed to distinguish between the components of a drum kit. In the case of the sounds for Flex, their tonal component was more noticeable. Based on this feedback, for the Points prototype, I replaced the drum kit sounds from Drums with more distinctive, tonal and higher-pitched percussive samples, keeping the two sounds from Flex. Finally, for Angle, which was based on two points only, I kept the two sounds from Flex, and added a third one (a cymbal) that was triggered when reaching the repetition goal. The sounds were generally well accepted by the patients, who used the Angle prototype over several weeks (Ac5.)

In the case of Px6, he liked the two base sounds and associated them with doing the exercises, while also enjoying having the third sound to get a sense of the amount of repetitions he had performed. In the case of Px9, he did not attribute much significance to the “beeps,” but he did notice the sounds. Further, he could not think of better replacement sounds or feedback. Despite his overall critical attitude, he did not characterize the sounds as annoying or negative, which I interpreted as a relatively favourable observation.

So far, I can argue that the sounds were appropriate enough to support long-term involvement of the patients with the exercises. This is not trivial, considering those exercises were performed frequently (from three to seven days a week, with 20 to 50 repetitions each time) and involved multiple repetitions of the movements and hence of the sounds. It is worth noting that, when developing and testing the prototypes with the therapists, there were suggestions to implement more complex and semantically charged sounds, like festive trumpets or voices expressing motivational messages, which would challenge the values of open-endedness and minimalism that we originally intended for the designs. I was careful not to include these kinds of sounds during these initial design stages, not only because of the tension with the design drives but also because I reasoned that they could become annoying after a while. Further work and evaluation with more patients could compare the effects (e.g. on perceived versus actual performance) of using different sonic palettes for similar interactions.

Additionally, there were other possible directions that we left as loose ends for future work to explore. For instance, the medical personnel suggested alternative sonifications to provide more information and sensorial qualities along the trajectory of the movement, such as alerting when the target positions are being approached or surpassed. Integrating these ideas into the designs could benefit from already-tested approaches: possible inspiration could come from previous works using sonic and/or vibrotactile feedback to support rehabilitation and physical activity [62,122,123,151,152,170]. To provide an example, the sonic approaches described in [152] could indicate the location of the movement within the pre-defined range and not only at its beginning or end, as I did with Flex, Points or Angle. Additionally, the metaphorical sounds described in [93] could inform the types of sounds and sonic behaviours that further designs could implement, taking into account their potential for impacting body perception and physical activity. These approaches are particularly relevant and applicable in prototypes targeting a trajectory between two target positions—such as Flex, (2)Points, and Angle, in our case—where it is easier (technically and cognitively) to make sense of a sonification that maps the current position between them.
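As a sketch of what such a continuous sonification could look like in a two-point prototype, the current angle could be mapped to the pitch of a tone. This is an illustrative Python sketch under an assumed linear angle-to-frequency mapping and assumed parameter names; it is not taken from [152] or from the prototypes themselves.

```python
def position_to_frequency(angle, low_target, high_target,
                          f_min=220.0, f_max=880.0):
    """Map an angle within the calibrated range to a tone frequency,
    so the pitch rises as the wearer approaches the far target."""
    t = (angle - low_target) / (high_target - low_target)
    t = max(0.0, min(1.0, t))  # clamp outside the calibrated safe range
    return f_min + t * (f_max - f_min)

print(position_to_frequency(30, 30, 120))   # 220.0 at the lower target
print(position_to_frequency(75, 30, 120))   # 550.0 halfway through
print(position_to_frequency(120, 30, 120))  # 880.0 at the upper target
```

Clamping the normalised position to the calibrated range is one simple way of keeping the sonification meaningful only within the sonified exercise space, rather than rewarding movements beyond it.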

6.5.1.2 Visual Feedback

Visual feedback was the modality that changed most during the design journey across the position-based prototypes. The visualisations evolved from a coloured screen when reaching the target positions in Flex and Drums, to a circle-based visualisation of the closeness to the target positions in Points, to a literal visualisation of target and current angles in Angle, along with numerical indicators of the current angle, duration of the exercise and number of repetitions (Fig. 17.)

Figure 17: Comparison of the visual feedback in Flex, Points and Angle.

This progression in the visuals was mainly led by a need to increase transparency [209] in the design process. In the encounters [128] with the expert co-designers, it was sometimes difficult for them to reach the saved positions again, and it was not clear why. We needed a way to make it clear what was being measured and triggering the feedback, both for us as designers to troubleshoot difficulties, and for the therapists to better calibrate, adapt and position the device in relation to the prescribed activity. This enabled a transition in the design focus, where I started developing real-time visualisations to aid in this regard.

For instance, the circle-based visualisation in Points provided the therapists with a visual explanation of why someone might not be reaching the position they thought they were reaching, assisting the wearer in finding the target position again if needed. Then, the transition [128] towards a simplified measurement accounting for a single angle only in Angle also brought with it a change in the visualisation, which became a literal representation of that angle. As commented by the medical personnel (OT1, PT1, PT2), this visualisation was very clear. In general, because I was working within the limits of the smartwatch screen size, I was prompted to implement visualisations that were simple and clear, aligned with the design drive of minimalism.

Another relevant addition to the visual aspect of Angle was the usage of real-time numerical indicators for different quantities according to the mode: measured angles in the setup mode, and number of repetitions and activity duration in the activity mode (Fig. 14.) In this sense, the numerical values added a layer that on one hand made the design less open-ended but on the other helped to provide a common language between therapists and patients.
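The repetition indicator in Angle relies on detecting alternating arrivals at the two target positions. The underlying logic can be sketched as a small state machine; the following is a hypothetical Python reconstruction, with assumed names and thresholds, rather than the prototype’s actual implementation.

```python
class RepCounter:
    """Count one repetition each time the wearer moves from one target
    position to the other and back (sketch of a repetition indicator)."""

    def __init__(self, low, high, sensitivity=5.0):
        self.low, self.high, self.sensitivity = low, high, sensitivity
        self.last_target = None  # which target was reached most recently
        self.reps = 0

    def update(self, angle):
        if abs(angle - self.low) <= self.sensitivity:
            target = "low"
        elif abs(angle - self.high) <= self.sensitivity:
            target = "high"
        else:
            return self.reps  # between targets: nothing to record
        if target != self.last_target:
            # A rep completes on each return to the low target.
            if target == "low" and self.last_target == "high":
                self.reps += 1
            self.last_target = target
        return self.reps

counter = RepCounter(low=10, high=120)
for angle in [10, 60, 120, 60, 10, 120, 11]:
    counter.update(angle)
print(counter.reps)  # 2
```

Requiring an alternation between the two targets, instead of counting every arrival at one point, avoids counting small oscillations around a single target as repetitions.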

6.5.1.3 Haptic Feedback

I focused on developing the audiovisual feedback because of its potential for providing a shared frame of reference for patients and therapists. For the haptic feedback, I implemented a simple approach of triggering a single and mild vibration when reaching the target points. In future work, the hardware in the device could be leveraged for rich vibrotactile interactions, exploring different intensities or patterns as others [94,151,156,157,171] have done.

Further exploration in this line would tie back to the initial designs (Ac2) proposing sensory rehabilitation: in these rehabilitation processes (and in our designs) the focus is on motor rehabilitation, but there is a sensorial and haptic aspect that could be further explored and supported. Work such as the haptic design toolkit described in [211] could provide the technical means and ideation tools to further explore that direction. Alternatively, a possible way to use haptic feedback through commonly available technologies would be to leverage the interconnection with other devices—such as the mobile phone paired to the smartwatch—and provide the vibrotactile stimuli from their location on different body parts. This would echo a couple of design concepts (1B, 3B') from the co-design workshop with external therapists (Ac3), where vibrations were placed on the back, the shoulders or the feet in response to the movement of the elbow. In this regard, further work would be needed to evaluate the effects of the feedback on body parts other than the ones in rehabilitation.

Here, I wish to highlight our experience with haptics when testing our Angle prototypes with Px6 and Px9, as it could inform further work in this line. Across the sessions, I noticed that the haptic feedback I had implemented was not effective: neither of them could feel it, because of low or no sensitivity in their forearms (and the prevalence of pain, in the former case.) I did not consider that this could happen, and its detection became a moment [128] of realisation: even though we as a design team had gathered the insights on sensory and tactile rehabilitation from the initial co-design workshop (Ac2), these had made us think more about fingers having trouble feeling textures, and not about other body parts being unable to feel vibrations. Therefore, I overlooked this aspect and naively implemented the vibrations without further consideration. In this sense, future co-design workshops could explore to what extent vibrotactile feedback from the devices could be meaningful and effective, taking into account the sensorial diversity of patients such as ours.

6.5.2 Implications of a Minimalist Design for Rehabilitation

During the design journey, I explored to what extent a minimalistic design based on smartwatches could support the kinds of rehabilitation goals in our application domain. Here I discuss insights regarding the suitability and limitations of such an approach.

6.5.2.1 Degrees of Minimalism

Across the design process and successive prototype iterations, the emerging feedback and needs continuously pushed me to expand our prototype’s feature set, placing sustained pressure on my intended minimal approach. However, the drive to keep these new features within limits was still present and shaped their final forms. For example, the visual feedback provided initially by Flex and Drums was overly minimal, offering no indication of the user’s position relative to target points, which in turn caused confusion in the participatory embodied sketching sessions (Ac4.) In the subsequent iterations of Points and Angle, I introduced visualisations that conveyed this information while preserving a simple and clear aesthetic.

Similarly, the initial prototypes focused on the use of a single device. However, feedback concerning dual design and normative goals prompted me to build on the existing connection between the smartwatch and its paired mobile device. Nonetheless, instead of building a fully-fledged companion application, I opted to develop a minimal mobile app as a proof of our dual-design concept.

Interestingly, there was one aspect of the designs that ended up being reduced: the number of possible target positions for the rehabilitation exercises. Flex and Drums featured similar interactions, differing in the number of target positions (two and four respectively). Hence, they evolved into Points, where one could choose the number of target positions for the rehabilitation exercise (from one to four). I reasoned that this would, in principle, provide more degrees of freedom and adaptability. However, in the preliminary testing, I observed that dealing with more alternatives was less intuitive and less generative for the therapists. In this sense, a more constrained approach, working with two target points only, was more generative. Additionally, an arbitrary number of target positions would bring a considerable amount of technical complexity when implementing the data tracking, so the more minimalistic two-point setup (as in Flex) was maintained for the new Angle prototype. This choice allowed me to implement features that would be challenging to develop for an arbitrary number of target positions, such as repetition counting, the linearisation and visualisation of the current position with respect to the target points, or the logging of differences between the actual and the target ranges. By limiting the number of target points to two, I was able to deepen the exploration of what could be done with them.
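Two of the two-point features mentioned here can be sketched briefly: the linearisation of the current position with respect to the two target points, and the logging of differences between achieved and target ranges. The following Python sketch is illustrative only; the function names and the log structure are assumptions, not the prototype’s format.

```python
def linearise(angle, low_target, high_target):
    """Project the current angle onto the 0-1 segment between the two
    saved target points, for a one-dimensional position display."""
    return (angle - low_target) / (high_target - low_target)

def range_deviation(achieved_min, achieved_max, low_target, high_target):
    """Summarise how far an achieved range fell short of the target
    range (positive values indicate a shortfall)."""
    return {"low": achieved_min - low_target,
            "high": high_target - achieved_max}

print(linearise(75, 30, 120))             # 0.5 (halfway between targets)
print(range_deviation(35, 110, 30, 120))  # {'low': 5, 'high': 10}
```

With an arbitrary number of target points, neither operation has such a direct formulation: there is no single segment to project onto, and "range" itself becomes ambiguous, which illustrates the technical complexity that motivated keeping two points.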

Another aspect in which I had to balance complexity was the implementation of the dual design. Given that the activities that the therapists perform with the patients before their exercises tend to have more complexity (e.g. they might need to evaluate, measure, calibrate the situation and activities for the patients), there was a need for the technology to support these practices. This increased the complexity, at least in the Configuration mode of the Angle prototype, which included certain features and controls such as an adjustment of the sensitivity threshold and the setup of target repetition counts. These features were kept constrained and minimal, to maintain some degree of minimalism. In contrast, the Activity mode of Angle remained as minimal as possible.

For the case of Px9, this approach seemed to be adequate: he got slightly overwhelmed observing the configuration procedure of the Angle prototype and showed resistance to using the device because he “would not be able to use it” (Px9). However, once I disabled the Configuration mode and introduced the system through a brief tutorial co-facilitated by the rehabilitation personnel, he resolved that he would use it, and was indeed able to do so without any problems. I recognise that providing a simplified patient-facing version while reserving more complex controls for therapists introduces certain tensions, particularly in relation to power dynamics. Prior work has emphasised the value of users calibrating their own movement space [152], or personalising their devices [96]. In contrast, the current approach positioned calibration as a clinical task, enabling medical staff to define the desired range of movement without offering the patients the opportunity to further adapt their devices. While this aligned with the immediate rehabilitation setting, I contend that meaningful participation, control, and self-determination are central to equitable design. In this sense, I see value in future work that explores more actively how patients might take part in calibration and configuration processes. More broadly, I suggest considering how degrees of minimalism can be thoughtfully distributed across different stages or instances of a design to support both clinical needs and user autonomy.

6.5.2.2 Multiple Fits

As mentioned above, my intention was to develop minimum-viable prototypes that could support multiple rehabilitation goals in the application domain. The initial apps—Flex, Drums, Maze—each targeted specific activities grounded in the design concepts generated during the co-design workshop with external therapists (Ac3). Subsequent versions—Points first, followed by Angle—followed up on these earlier ideas to support rehabilitation activities in a more general way. Even though Angle was originally envisioned for elbow flexion and extension against gravity, its simple mechanics and interaction allowed it to be adapted to other motions.

During patient testing, I observed that Angle’s minimal features enabled the prototype to support multiple rehabilitation goals and movements: for Px9, the prototype supported his active motion therapy to strengthen the biceps while also helping him dissociate biceps activation from hand movement—the last goal of the nerve transfer rehabilitation process.

For Px6, who no longer required passive motion therapy but was in the early stages of generating contractions after reinnervation, I adjusted the angle detection of the prototype to facilitate exercises focused on elbow extensions parallel to gravity. Because these movements occurred in a different plane than anticipated in the original design, detection was inherently less reliable. However, in practice, both the elbow extension and the accompanying forearm rotation occurred simultaneously during the prescribed exercise, and the measured angle reflected this composite movement rather than the elbow extension alone. This meant that the indirect angle measure, although not anatomically precise, still provided a consistent and distinguishable signal for progressing within that context. Additionally, the core behaviour of providing feedback and counting repetitions when reaching two target positions remained useful in supporting the exercise routine.

6.5.2.3 Potential Cheating

An interesting implication of the simple movement detection methods used in my prototypes was that they prompted remarks about the possibility of “tricking” or “cheating” the system by doing movements other than the expected or desired ones. For instance, Px6 told us he noticed he could trigger the position detection by just rotating (supinating) the forearm without executing the full elbow extension on the table. However, he reflected: “I would be tricking myself if I just did that [the incomplete exercise]” (Px6), revealing a more complex relationship with deception. Some therapists (OT1, PT1, PT2) also explored how detection could be tricked through compensatory motions in the first iteration of the prototypes. Rather than treating this behaviour purely as a “bad” or unwanted outcome, I contend that it points to deeper issues of agency, motivation, and even playfulness [110], which in turn carry important design implications.

For instance, in the domain of gamified and sensor-based rehabilitation, the potential difference between target movement and detected movement is well documented: technical systems often struggle to distinguish true compensatory motion from correct motion, and efforts to eliminate “cheating” purely via increased sensor fidelity or algorithmic detection have limitations [10]. Because the prototypes employed simple detection measures, the risk of “cheating” was intrinsic to the design. However, this is not unique to minimalist systems—commercial fitness and exergaming platforms based on rich sensing (e.g. Wii, Kinect, mobile exergames, and wearable trackers) are also routinely “cheated” or exploited due to sensing and design limitations [135,199]. I argue that rather than simply eliminating this risk, designers should recognise how such behaviours reveal meaningful aspects of user interaction. In this case, the fact that Px6 chose not to exploit the shortcut, because he judged it would undermine his exercise goals, suggests an alignment between personal agency and the system’s simple detection. By recognising these tensions and opportunities, designers of rehabilitation technologies can better integrate minimalism with meaningful user control and motivation.

6.5.2.4 Reappropriation of Available Technologies

I used commercial Wear OS smartwatches as a development platform because they offer an available, small, flexible, and wearable platform that could be reappropriated for rehabilitation purposes. I reasoned that using commonly available technologies as a foundation would make the eventual implementation and dissemination of the design more feasible, supporting rehabilitation treatments beyond our study.

Rather than treating them as watches, I approached the devices as powerful embedded computers with hardware that could be leveraged for movement applications. To support this reframing, I employed ready-made velcro straps that facilitated alternative placements of the device. This allowed for quick adjustments to different bodies and movement capabilities [158], and also reduced the visual and functional association with a consumer wristwatch.

I recognise that a possible limitation of building our minimalistic prototypes on Wear OS is that the host system is arguably not “minimal”. Even if the resulting apps are simple and straightforward, to use them one also has to learn the basics of the operating system itself. For example, the system requires swiping gestures that need to be learned in order to do simple tasks like launching the prototype app or adjusting the volume. Fortunately, beyond that, the way of interacting with Wear OS is very similar to Android, which might be more familiar to the people who would use it. To address this potential usability issue, as part of the setup and design of our prototype, I “cleaned up” the system as much as possible—re-arranging icons and grouping or removing unused apps or features—and constructed an optimal path of steps to follow from turning the device on to turning it off, creating a visual guide that the patients could keep and follow.

Highlighting another angle of the minimalism approach, I contend that by using Wear OS as a development platform I am leveraging the availability of devices running it as potential hosts of the app, thereby minimising or avoiding altogether the considerable expenses that creating, controlling, modifying, maintaining and distributing [68] custom hardware involves. Inspired by the concept of salvage computing [37] and previous works arguing for the reuse of hardware [23], by using already-existing devices I also invite further designs and prototypes that could leverage this hardware instead of creating new devices from scratch. This approach could also be seen as “a first step towards technological reappropriation that starts by leveraging the work of large technology companies as tools for common good.” [6]

6.5.3 Open-endedness and Normative Goals

In all the design concepts from the co-design workshop with external therapists (Ac3), we as a design team observed that the participants foregrounded the need for the activities and devices to be open-ended enough to support adaptations and variations within the same application domain, including adaptations to different ranges of motion, dimensions of body parts, and the specifics of each patient’s surgery. The designs took into account that even with the same type of nerve transfer, there could be differences regarding the movements of the donor nerve that activate the biceps. The rehabilitation would become even more complicated when considering other injuries the patient might also have suffered.

In the co-design workshop with external therapists (Ac3), the objective was to design concrete activities for the specific kind of rehabilitation in our project. Still, I hoped that the resulting design insights could have potential for broader application domains, i.e. other types of nerve transfers and other rehabilitation applications. In this line, I observed that from the outset, emerging design concepts and activities organically developed a degree of independence from the constraints of our specific application domain. Moreover, during the participatory embodied sketching sessions (Ac4), the medical experts promptly and intuitively proposed variations and potential extensions of the designs for other rehabilitation cases, even when explicitly instructed to momentarily set those ideas aside. This speaks to the flexibility and adaptability that OTs and PTs tend to have, which help them develop ad hoc treatments for their patients by reappropriating a variety of objects at their disposal. In this sense, the designs had the capacity to support these cases, confirming the design drive of open-endedness we had from the start.

In the initial exploratory prototypes I developed, a high degree of open-endedness was implemented by not directly displaying “right” or “wrong” assessments, but rather providing open audiovisual output, echoing the concept of non-judgmental interfaces [176]. For instance, the sounds played by Flex and Drums and the colours shown on their screens were chosen to be neutral, open for interpretation by the therapists and patients. Additionally, the target positions were not pre-programmed, and instead they could be determined in the moment given the current context and sensations of the patients. In Maze, a final count of avoided and collided obstacles is shown, but there is no message regarding “winning” or “losing” the game. In general, the open-endedness in the designs was related to the capacity of the devices to measure quite general movement aspects: beginning and end points, orientations, and speed/acceleration, which allowed their appropriation to support multiple other exercises. These comments on open-ended designs mirror findings from previous works with minimalist technology probes that have been reappropriated for multiple application domains beyond rehabilitation [105,106,180–183,186].

6.5.3.1 Data Tracking in a Medical Context

Even though the open-endedness of the designs was deemed appropriate, there were several suggestions regarding the implementation of normative goals that would contrast with it. Given the medical context and the needs of the project, I integrated most of them into the designs, while attempting to keep the spirit of open-endedness where possible. In this sense, I implemented explicitly quantitative measures—angle ranges, duration, number of repetitions—and the tracking of these data.

Because of the challenges in following up with the patients, the medical doctors reasoned that recording data from the exercises done at home could provide them with a better understanding of the rehabilitation process. Similarly, the therapists reflected that, for the patients, having a log of the activities they had done could be a source of motivation. This is in line with the general trend of employing motivational affordances to support challenging processes—such as physical rehabilitation—and I noticed that it was actually reflected in tracking technologies and rehabilitation equipment that medical staff already used.

Within this context, and to facilitate the communication between the different stakeholders of the project, it made sense to incorporate the quantitative measures and goals in what would become the Angle prototype. For this, I needed to add some extra complexity to the application: I started by displaying the data that was already being acquired for the interactivity in the prototypes—such as the rotation angles of the device—while also implementing a recording of other relevant variables, such as the duration of the activity and number of repetitions. The latter implied redesigning the usage flow of the app, dividing it into three modes: Configuration, Activity, and Log. Even though all these features can be considered conventional and expected within an app to support physical activity or rehabilitation, I contend that there is much potential in designs that do not implement these measures, as illustrated in the previous prototypes, Flex, Drums and Points.
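As an illustration of this added logging layer, one entry combining the tracked variables mentioned above might take a shape like the following. This is a hypothetical sketch in Python; the field names and example values are assumptions for illustration, not the prototype’s actual schema.

```python
# Hypothetical sketch of one entry in an activity log like Angle's; the
# schema is an assumption for illustration, not the actual data format.
from dataclasses import dataclass, asdict

@dataclass
class ActivityLogEntry:
    date: str           # session date, e.g. "2025-03-14"
    duration_s: int     # duration of the activity, in seconds
    repetitions: int    # repetitions counted between the two target angles
    min_angle: float    # configured lower target angle (degrees)
    max_angle: float    # configured upper target angle (degrees)

# Example entry, ready to be serialised into an on-device log
entry = asdict(ActivityLogEntry("2025-03-14", 300, 24, 10.0, 120.0))
```

A structure along these lines would let the Configuration mode set the target angles, the Activity mode fill in duration and repetitions, and the Log mode display the accumulated entries.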

In the usage of Angle by the patients, I could observe multiple overlapping purposes and uses of tracking, as discussed by [4]. For instance, the tracking and its visualisation were useful for sense-making [4] between the patients and us as designers and doctors: it allowed us all to see the levels of activity consistency and map them to the recounted experience. In the case of Px9, he was very consistent in doing his exercises in two sessions per day. When he did not have the device, he created his own log on paper, which he showed to the rehabilitation doctor and designers. When asked about his consistency and commitment to self-track, he said he was hopeful that by following the routine he would improve considerably. In this sense, he was using tracking for goal-checking [4], using the data to assess his progress towards the rehabilitation goal. Unfortunately, given that he did not feel he was improving, he also expressed a sense of hopelessness: “I also ask myself this, why do I keep doing this if I’m not getting better” (Px9).

In the case of Px6, he recognised that he was aware that we as designers and doctors would be able to analyse the logs, and that made him push himself further “so that he didn’t come with an empty log” (Px6). He did more exercises when he had the prototype than when he did not, according to his own report. Even though he acknowledged that doing the exercises was better for his goals, it seemed that in his case the tracking was primarily for the doctor [4], who would be the recipient of the data. Similarly, another instance in which the patients considered that the tracking was for the doctor [4] was when they jokingly referred to the device as a snitch that would reveal to the doctor the truth of their activities.
Again, even though they recognised that performing the prescribed exercises would be beneficial for them, I contend that this degree of irony surfaced some tension and mismatch between their own personal goals and those of the treatment or the doctors.

These comments and reflections by the patients regarding the prototype as a “snitch” echo previous work discussing the moral valence of tracking and medical data [4]: within a medical context, it is easy for the data to become a vessel for moral judgments, where those who track themselves and show greater commitment in their tracking could be considered “better” (by themselves, or by the medical personnel) than those who do it less. Therefore, introducing a logging device within this paradigm can produce a kind of antagonistic relationship: the device becomes a “judge”, or at least an auxiliary to a (moral) judge.

However, research has shown that even when technologies “call out” or expose user actions, this dynamic can be playfully reframed by users and transformed into motivational resources. In their work on Playification, [110] observed precisely this phenomenon: during early trials of the PhySeEar system, the patients and the physiotherapist spontaneously joked about the device acting as a “judge” and a “tattletale,” adopting a humorous stance toward this externalised authority. Rather than correcting or suppressing this interpretation, the authors leveraged this emergent meaning—designing interactions that embraced teasing, provocation, and playful competition. This design move is explained by [3] as “chasing play potentials”: recognising and amplifying naturally occurring moments in which people reinterpret technological constraints or frictions as opportunities for play [3]. Seen through this lens, a device framed as a “snitch” or “tattletale” need not reinforce moral pressure; instead, such playful resignification can support agency, engagement, and motivation in rehabilitation contexts.

These perspectives also open space for reconnecting playfulness with more reflective and non-judgmental approaches to logging and feedback. For example, the affective diary [160] foregrounds personal meaning-making and emotional expression rather than performance evaluation, offering a way for people to engage with their bodily data without feeling assessed or corrected. Similarly, work on non-judgmental interfaces [176] explores how to design systems that explicitly avoid moralising framings, enabling users to encounter their data without implicit expectations of compliance or discipline. Complementarily, discussions on data feminism, such as those by [61], invite designers to question whose values and interpretations shape data practices, and to cultivate forms of logging that acknowledge embodiment, context, and situated lived experience.

Bringing these strands together, I suggest that future rehabilitation technologies could combine playful re-signification (as seen in Playification [110]) with non-judgmental, holistic, and context-sensitive approaches to data. Such designs may help to alleviate the moral valence [4] often attached to tracking—where data becomes a proxy for commitment or compliance—while at the same time offering richer avenues for patients to understand themselves and be understood by clinicians. By embracing both the playful and the reflective dimensions of tracking, rehabilitation technologies can move beyond the paradigm of the device as a “judge” or “snitch”, supporting instead forms of engagement that are motivating and meaningfully aligned with patients’ lived realities.

6.5.3.2 Gamification and Motivational Affordances

In line with the experiences described above—where patients sometimes framed the prototype playfully as a “snitch” or external judge—some therapists suggested, after trying our prototypes in their different stages, that such dynamics could be extended through explicit gamification layers. Their proposals included adding motivational and reward-like audio and tactile cues—instilling soothing or pleasurable sensations, such as the haptic feedback in the co-design workshop with experts (Ac3)—as well as motivational messages and illustrated characters. Some therapists related some of these ideas to other interactive technologies they had tried in past therapy sessions, such as commercially available biofeedback technologies or reappropriated videogames. These suggestions reflect a broader trend in rehabilitation of leveraging game elements to sustain motivation or engagement over taxing routines (e.g. painful or long-term ones).

Other therapists, however, felt that the minimalist feedback already provided by our prototypes was sufficient: OT1 reasoned that, while it might not necessarily be “exciting,” it would likely be effective for most adult patients.

Given that these technologies were meant to support a long-term rehabilitation process, I approached additional motivational features with caution. Designing effective motivational affordances in such settings can be notoriously challenging [13], and poorly aligned gamification can unintentionally amplify feelings of judgment, obligation, or failure. In keeping with the previous section, where playful re-signification emerged organically, I preferred to first understand the engagement, dynamics, and motivation that might grow from the minimal design itself.

A potential direction to explore in the future would build on the playful activity-centric design concepts that emerged in the co-design workshop with experts (Ac3), such as crossing an imaginary river or hitting fictional drums in space. Both are compatible with and could be supported by our Angle prototype, which suggests it could be a flexible host for future playification layers.

6.5.4 Challenges and Opportunities in Technologies for Health

The design journey surfaced several challenges and opportunities for the insertion of technologies such as ours in a long-term rehabilitation context, which neither I nor the rest of the team anticipated.

6.5.4.1 Designing for Long-term Treatments

Because the prototypes were intended to support home use and incorporated explicit data tracking, the design evolved toward a long-term deployment horizon.

Designing technologies for long-term rehabilitation demands sustained engagement with users’ values, contexts, and personal motivational drivers. [13] offer influential guidance for this endeavour, outlining four lessons for interactive systems supporting extended therapeutic routines: (1) Help people articulate what motivates them; (2) Balance between work, duty and fun; (3) Support motivation over time; and (4) Understand the wider social context [13]. The design journey sought to move in this direction. We as a design team engaged patients early in a co-design workshop (Ac2) that emphasised self-articulation of therapeutic goals and motivations and an understanding of the wider social context (aligned with two of the lessons by [13]). Although the emerging designs did not materialise in the current prototypes, they informed our broader stance: developing a minimal yet adaptable intervention layer that could integrate into everyday routines, leverage familiar device infrastructures, and support repetition-based therapy without imposing unnecessary learning or setup burden. In doing so, I aimed to create prototypes that were compatible with long-term rehabilitation workflows, aligning with [13]’s emphasis on context, pacing, and sustainability of motivation.

In principle, by being minimalistic, the prototypes could be relatively unobtrusive in the day-to-day lives of the patients, while also being capable of supporting progressions in the prescribed exercises, as described above. However, in their current state, the minimal palette of feedback options—sound samples, colours, and haptic patterns—could also limit their potential to actually support motivation over time, given that it could become repetitive and disengaging. To address this, the suggestions by the therapists could be helpful: the devices could provide rich and flexible palettes that could be personalised to the interests and sources of motivation of the patients.

Regarding the limitations and opportunities of data logging for medical purposes, I want to discuss the data that are missing from the logs but relevant to the rehabilitation process. For instance, in the case of Px6, high levels of pain prevented him from doing the activities daily. On the days he did the activities, he logged the amount of pain in the form I provided, but there was no evident way for him to do so when he did not do the exercises. In this sense, by being activity-centric, the logging system I implemented only captures the activities, while other factors that affect them are left out of the log. In the work by [61], the authors reflect on the way chronic conditions—such as pain, in the case of Px6—and their impact on physical activity tend to be missing from the data captured by fitness tracking technologies, and invite further consideration of these aspects in future designs. Further refinement of the logging capabilities of the prototypes could take a more holistic perspective, centring not only the activities but also the context surrounding them.

In a similar line, following prompts from previous work [69,70,82], future iterations could also consider designing for rest and not only for activity. In the context of chronic conditions such as this application domain, there is a need for technologies that support pacing so that one does not exceed one’s limits [69,71]. For instance, in the last session with Px9, he reported that he had started feeling some pain that he had not felt before. Discussing it with FA, we realised that he may have been tired after pushing himself very hard to do extra repetitions. In this sense, having a non-judgemental logging interface was not enough: additional safeguards to prevent overexertion could be crucial to implement too. [152] reflected on the importance of providing these notifications during the activity to avoid triggering further pain after an enthusiastic exercise session. Designing for a longer time frame beyond a single activity, [70] leveraged the constant tracking of activity data from commercial smartwatches to then reinterpret it through a more fitting and rich sensory experience.
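One simple form such a safeguard could take is a pacing message that changes tone once the prescribed amount is exceeded, discouraging rather than rewarding extra repetitions. This is a hypothetical sketch only; the function and messages are assumptions, not an implemented feature of the prototypes.

```python
# Hypothetical pacing safeguard (an assumption, not prototype behaviour):
# instead of rewarding extra repetitions, the feedback shifts towards
# suggesting rest once the prescribed amount has been reached.
def pacing_message(reps_done, reps_prescribed):
    if reps_done < reps_prescribed:
        return f"{reps_prescribed - reps_done} repetitions to go"
    if reps_done == reps_prescribed:
        return "Prescribed repetitions reached; consider resting"
    return "Past the prescribed amount; resting now may help prevent pain"
```

A safeguard along these lines would have addressed the situation of Px9 described above, where extra repetitions went unremarked by the device.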

6.5.4.2 Centring on the Patients

With the initial explorations and co-design activities with patients (Ac1 and Ac2), we as a design team realised that the breadth of possible interventions to support the patients exceeded what we originally anticipated. In a sense, the expectation was that by scoping our design project to peripheral nerve transfer surgery as an application domain, we would be able to constrain the emerging designs. However, that was not the case. Consequently, in this project, I chose to focus on designs flexible enough to be applied to a breadth of peripheral nerve transfer surgery patients.

Given the decisions we took in the design journey, the prototypes that I developed were mostly based on the design concepts led by medical practitioners—therapists and doctors—and not by the patients. Previous works have proceeded in a similar way, gathering expert knowledge to design and develop initial prototypes [32,99,182]. I contend that in a sense, with these designs we are extending the dynamics that already exist between medical practitioners and patients, where the former—based on their expert knowledge—design and prescribe activities for the latter, adapting them to the patients’ needs and tastes. My designs were meant to support these kinds of activities, and for this, they benefited from the design qualities that I have discussed above—minimalism, multimodality, and a degree of open-endedness.

However, I also recognise that this work could be extended by further involving the patients in the design decisions for next iterations of the prototypes. Although challenging, I consider that there could be a way of balancing highly personalised designs—such as those of the participants in previous works [13,96]—with the possibility of generalising them so that non-designer patients could benefit from them as well.

6.5.4.3 Differences in Expectations

For some of the patients, who might have had difficult conditions before the peripheral nerve transfer surgery, the rehabilitation process is palliative: the expectations of improvement might not be too high, or the aim could even be simply not to get worse. The doctors convey this reality to the patients before and after the surgery is performed.

Introducing the technology I developed, which aimed to support the already-existing treatment and was never framed as something that would improve their situation, could nevertheless have been seen by the patients as an element that might provide an alternative outcome to their treatment. This effect could have been exacerbated by the pervasiveness of tools that emphasise individual responsibility for health, where using technology to track and improve on normative goals would promise a “better” outcome [46,159].

For instance, Px9 expressed that he did more repetitions than prescribed in the hope of getting better. And, in the last part of the study, he expressed considerable disappointment due to not feeling any improvement. As mentioned above, the introduction of the device coincided with his discharge from the therapy sessions at the hospital, which was going to have an impact by itself. This could possibly have exacerbated the feeling of disappointment, while also making him think that the introduced technology was meant to be a replacement for the therapy sessions, when that was not the case. Unfortunately, this was an instance of temporal dissonance [128], where the development timeline and research needs clashed with the treatment timeline of the patient, generating an unfruitful overlap. Coming back to the point regarding expectations, it remains a challenge to design and implement technologies that aim to support rehabilitation activities while also providing generative alternatives to the common assumptions [120] and expectations made—by designers and users—regarding health tracking technologies and their outcomes.

6.5.5 Beyond the Success Narrative

In this chapter, and in keeping with the commitments of RtD, I have aimed to provide an account that is attentive to the complexity, uncertainty, and unexpected turns characteristic of such work [40,58,76]. Rather than presenting a streamlined story of linear progress and success, I have made an effort to reveal aspects of our design journey that show the challenges, limitations, unexplored routes (or loose ends [63]) and even “failures” [58,76] that shaped it.

Earlier in the project, we as a team attempted to publish a report on the co-design workshop with experts (Ac3), while subsequent stages (Ac4 and Ac5) were still underway. The rejections of those submissions made clear that the account lacked valuable context from earlier stages (Ac1 and Ac2), which explained why certain decisions, e.g. the minimalist direction, were taken. More importantly, the initial publication attempts implicitly reinforced a “success narrative”, smoothing over obstacles, detours, and unreported iterations in a way that obscured the complexity of both the process and the domain. In reflecting on this, I found resonance with calls for more honest and just accounts of design processes [40,58,63,76,128], which have argued that revealing tensions and uncertainties is essential to advancing knowledge in RtD and HCI.

Following this guidance, I made a deliberate effort in this chapter to attend to temporal accounts [128] of the design activities in our journey. Doing so allowed me to recognise, for example, the qualitative differences between the eight months of trying to find patients and conducting interviews (Ac1), and the intense two-month period between co-design workshops (Ac2 and Ac3), when the “urgency” to “arrive at a design that we could implement” became particularly pronounced. A more compressed account would have flattened these contrasts, erasing the shifting pressures, priorities, and forms of uncertainty that shaped the path.

Finally, an even fuller picture of the journey would include a more explicit discussion of the emotion work [12,44] involved—for researchers, clinicians, and participants alike—over a project that spanned several years. Acknowledging these affective dimensions is an important direction for future reflective work in design-based rehabilitation research, and it adds further nuance to what it means to move beyond tidy narratives of success.

6.5.6 Final Reflections

As highlighted throughout the previous sections, this design journey opened several avenues for future work, each addressing different aspects of the challenges, opportunities, and loose ends surfaced in this project. I have already outlined several lines of future work above; here, I reflect on some of them.

One direction concerns the further development of the design concepts and qualities extracted from the co-design activities. These could be explored through new prototypes, whether supporting peripheral nerve transfer rehabilitation or adjacent application domains. A second trajectory involves extending and refining the current prototypes, incorporating the feedback and lived experiences from the patients who used Angle for several weeks, together with insights I surfaced in the broader discussion. These prototypes—whether entirely new or evolved from what I implemented already—could then be evaluated with a larger and more diverse group of patients at different stages of rehabilitation, enabling more classical forms of evaluation alongside RtD-oriented inquiry. Finally, future work could examine the generalisability potential of these designs and prototypes across rehabilitation contexts, as the medical staff repeatedly noted that their characteristics could be valuable much more widely than the specific case of nerve transfer surgery.

This work is strongly situated in the public health system of Spain and focuses on a highly specific application domain—the rehabilitation of peripheral nerve transfer surgery. This situatedness brought significant challenges, including limited access to patients and medical personnel, and substantial variability in participants’ surgical histories and rehabilitation needs. From a purely artifact-centric point of view, focusing only on the final part of the design journey, the testing of the Angle prototype by only two patients might appear a considerable limitation: their rehabilitation trajectories and bodily conditions differed, constraining the breadth of feedback that we as researchers could collect. I recognise this limitation; however, in keeping with the principles of Research through Design, the contributions here extend beyond the prototype itself or the attainment of a successful outcome.

Relatedly, some may view my decision to work with smartwatches rather than custom hardware as unexpected, particularly given the rich, unconventional designs imagined during the co-design workshops. I do not claim the smartwatches were the best possible implementation medium for all the design concepts that we encountered. Rather, they provided what was necessary and sufficient [136] within our constraints: an accessible, flexible, reusable, and already widespread platform through which we could explore the design space without major fabrication overhead. Throughout the chapter I have traced the rationale behind this choice, including the affordances and limitations it entailed, and the ways it surfaced specific qualities and tensions of interaction design in health. By sharing both the design concepts generated in the workshops and the design qualities I extracted from them, leaving some of them as loose ends [63], I invite other research teams, equipped with different technical resources or design priorities, to take these ideas further. Moreover, in this work I am sharing the source code of the prototypes at each of their stages—honouring their relevance in the process, preventing them from going unreported [173], and encouraging replicability—so the invitation is open to download, install, fork and extend them.

Finally, I acknowledge a broader limitation: this work is situated in the realm of technology design. The medical staff involved in the project were already working at the limits of the time, personnel, and resources available to them. My prototypes were intended to support and augment their existing practice, not replace it. While the findings in this project suggest that further iterations of these systems could meaningfully assist the rehabilitation of peripheral nerve transfer patients, they also reveal that a larger impact could come from beyond technological interventions. For instance, strengthening the rehabilitation services within the public health system of Spain—through increased staffing, extended therapy time, and additional infrastructural resources—could amplify and very likely surpass the benefits that technology alone could provide. In this sense, I see this work as a reminder of the importance of addressing systemic constraints in parallel with design innovation.

6.6 Chapter Takeaways

7 Movement and Training Technologies: Further Applications

In the previous chapter, I presented how a rich and long design journey led to a set of smartwatch-based prototypes that I implemented. Because of the design drivers behind them (minimalism, open-endedness, and generalisability), I contend that they were apt for appropriation and exploration in application domains beyond the initial one.

7.1 The MoTTs

Even though they originate from different stages of the design process, I have found it effective to contextualise the MoTTs as three units of the same “family”.

Further work would be needed to smooth out the slight differences in their interactions and configurations, but so far these differences have not been an obstacle to their use and exploration.

7.1.1 Angle

7.1.2 Points

7.1.3 Maze

7.2 Appropriating the Initial Prototypes

7.2.1 Participatory Embodied Sketching with Therapists

During the design journey centred on peripheral nerve transfer surgery, we organised five Participatory Embodied Sketching Workshops (Ac4.) with the initial exploratory prototypes I had developed. In these sessions, the therapists had time to engage in free embodied explorations of the alternative rehabilitation applications each prototype afforded.

7.2.2 Portfolio of Uses in Occupational Therapy

7.3 Further Explorations of the MoTTs

7.3.1 Public Demonstrations

Feria de la Ciencia, TEI Studio. DIS proposal?

7.3.2 Local Characterization Workshop

7.4 Overview of Uses

7.4.1 Actions and Body Locations

7.4.2 Interaction with Objects

7.4.3 Two Points or Multiple Points

7.4.4 Roles of Configuration

7.4.5 Directions to Connectivity

7.5 Chapter Takeaways

8 Discussion and Conclusion

8.1 Design Resources in Design Processes

Analysis of how I/we used what is presented in Chapter 4 in the design processes of Chapters 5 and 6.

8.1.1 Movement

Common language.

Creativity from engaging with multisensory feedback technologies. Having to focus the explorations that otherwise were naturally diverging.

The case of Angle misunderstanding.

8.1.2 Spaces

From open rooms to meeting rooms. Tables in space. How much open space is needed?

8.1.3 Objects

8.1.3.1 Bodystorming Baskets

Analysis of the bodystorming baskets that we used?

8.1.3.2 Technologies

Approachable, robust, easy to understand.

8.2 Implications of Minimalist Interactive Technologies

Combining Movits + MoTTs discussion. Form factors: Movits could be implemented, more or less, in analog circuits (bring back the tilt switch from the Movits paper discussion), simpler digital circuits, or mobile or web apps. MoTTs for mobile.

They can be appropriated for multiple purposes!

TiltPlayVibration: the popularity of this Movit within these workshops is very interesting from the perspective of minimalism, because its interaction can be implemented without a microcontroller, either by using simple tilt switches [66] or by handcrafting a soft tilt sensor following the kit-of-no-parts approach [132]. I contend that the generativity and applicability of this probe are very high compared to the low complexity of its interaction, and it could therefore serve as a good pointer towards further explorations of minimal interactions.
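To make this minimalism concrete, the core of the interaction can be written down as a tiny two-threshold state machine. The sketch below is illustrative rather than a transcription of the actual Movit implementation: the threshold values and the idea of reading a normalised gravity component from an accelerometer are my assumptions, and a mechanical tilt switch collapses all of this into a single physical contact.

```python
# Hypothetical sketch of the TiltPlayVibration interaction logic.
# A tilt switch yields a single binary state; here we derive that
# state from the gravity component along one accelerometer axis,
# with hysteresis so that jitter near the threshold does not
# retrigger the feedback.

TILT_ON = 0.5   # normalised gravity component that counts as "tilted"
TILT_OFF = 0.3  # lower release threshold (hysteresis band)

def step(tilted: bool, gravity_x: float) -> tuple[bool, bool]:
    """Return (new_tilted_state, fire_vibration).

    Vibration fires only on the transition into the tilted state,
    mirroring what a mechanical tilt switch wired directly to a
    vibration motor would do without any microcontroller.
    """
    if not tilted and gravity_x > TILT_ON:
        return True, True      # entered tilted state: pulse feedback
    if tilted and gravity_x < TILT_OFF:
        return False, False    # released: no feedback
    return tilted, False       # no state change

# Example stream of gravity readings while the wrist rolls over twice:
readings = [0.0, 0.2, 0.6, 0.7, 0.4, 0.2, 0.6]
state, pulses = False, []
for g in readings:
    state, fire = step(state, g)
    pulses.append(fire)
# pulses → feedback fires exactly twice, once per roll-over
```

The hysteresis band is essentially what a cheap mechanical switch provides for free, and it is arguably the only design parameter this interaction really has, which is what makes the probe such a compact starting point for explorations of minimal interactions.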

Potential in offline, self-contained technologies.

Reliance on human connection / the activity.

Against e-waste.

To be evaluated: having other people install and run the software.

8.3 Further Reflections

8.3.1 Open Science Practices

How a design project like this could potentially benefit from Open Science practices.

Open Science practices are applicable to interaction design research and therefore have the potential to strengthen its design processes and outcomes.

“alternative research outcomes” - workshop at CHI26. Fiona Bell pictorial / zine in TEI25.

Pre-registration? The study changed a lot, but it is generative to share why.

“Share early and often” -> not appropriate under current publication-centric paradigms, but could be helpful to get early feedback. Fortunately, research assessment is changing.

Mental health side: a lot of uncertainty and I feel I cannot talk about this. I do not have anything to show about what I am working on.

Even if there is the “need” to move on to other parts of the project, leaving a clear path for others to continue.

The software is open but could use more effort in encouraging its use.

Citizen science and participatory design. Involvement of the public in what / how we are designing.

8.3.2 Self-care in Co-Design

Differences in stakeholder expectations. Who is this actually for, and for what?

Need for unpacking. Emotion work [12,44]

8.4 Future Work

Loose ends that can be explored or developed further.

Practical guide based on Chapter 4. The paper [191] has been cited several times but there is more work to be done to share its insights beyond academia.

Movits: improve documentation, use them, share them, evolve them.

Further exploration of the design concepts from the co-design workshops (Movits, co-design with patients, MoTTs).

MoTTs: documentation for further usage, sharing, and exploration of further uses.

8.5 Concluding Remarks

Bibliography

1.
Reza Abdollahipour, Rudolf Psotta, and William M. Land. 2016. The Influence of Attentional Focus Instructions and Vision on Jump Height Performance. Research Quarterly for Exercise and Sport 87, 4: 408–413. https://doi.org/10.1080/02701367.2016.1224295
2.
Miquel Alfaras, Vasiliki Tsaknaki, Pedro Sanches, Charles Windlin, Muhammad Umair, Corina Sas, and Kristina Höök. 2020. From Biodata to Somadata. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (CHI ’20), 1–14. https://doi.org/10.1145/3313831.3376684
3.
Ferran Altarriba Bertran, Elena Márquez Segura, Jared Duval, and Katherine Isbister. 2019. Chasing Play Potentials: Towards an Increasingly Situated and Emergent Approach to Everyday Play Design. In Proceedings of the 2019 on Designing Interactive Systems Conference (DIS ’19), 1265–1277. https://doi.org/10.1145/3322276.3322325
4.
Jessica S. Ancker, Holly O. Witteman, Baria Hafeez, Thierry Provencher, Mary Van de Graaf, and Esther Wei. 2015. You Get Reminded You’re a Sick Person: Personal Data Tracking and Patients With Multiple Chronic Conditions. Journal of Medical Internet Research 17, 8: e4209. https://doi.org/10.2196/jmir.4209
5.
Rasmus Vestergaard Andersen, Søren Lekbo, René Engelhardt Hansen, and Lars Elbæk. 2020. Movement-Based Design Methods: A Typology for Designers. European Conference on Games Based Learning: 637–645, XII, XIV, XVI. Retrieved January 30, 2023 from https://www.proquest.com/docview/2473445482
6.
Lorenzo Angeli, Özge Okur, Carlo Corradini, Marcel Stolin, Yilin Huang, Frances Brazier, and Maurizio Marchese. 2022. Conceptualising Resources-aware Higher Education Digital Infrastructure through Self-hosting: A Multi-disciplinary View. In Computing within Limits. https://doi.org/10.21428/bf6fb269.8b989f2c
7.
Elena Armas, Elisa Sanz, Juan José Jover, María Fernanda Alarcón, Sonia Martín, Lara Cristóbal, and Andrés A. Maldonado. 2021. Current treatment of traumatic brachial plexus and peripheral nerve injuries. ANALES RANM 138, 138(03): 270–281. https://doi.org/10.32440/ar.2021.138.03.rev04
8.
Mattias Arvola and Henrik Artman. 2007. Enactments in Interaction Design: How Designers Make Sketches Behave. Artifact 1, 2: 106–119. https://doi.org/10.1080/17493460601117272
9.
Simon Asplund and Martin Jonsson. 2018. SWAY - Designing for Balance and Posture Awareness. In Proceedings of the Twelfth International Conference on Tangible, Embedded, and Embodied Interaction (TEI ’18), 470–475. https://doi.org/10.1145/3173225.3173262
10.
Edward Averell, Don Knox, and Frederike van Wijck. 2022. A real-time algorithm for the detection of compensatory movements during reaching. Journal of Rehabilitation and Assistive Technologies Engineering 9: 20556683221117085. https://doi.org/10.1177/20556683221117085
11.
Jon Back, Laia Turmo Vidal, Annika Waern, Susan Paget, and Eva-Lotta Sallnäs Pysander. 2018. Playing Close to Home: Interaction and Emerging Play in Outdoor Play Installations. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI ’18), 1–11. https://doi.org/10.1145/3173574.3173730
12.
Madeline Balaam, Rob Comber, Rachel E. Clarke, Charles Windlin, Anna Ståhl, Kristina Höök, and Geraldine Fitzpatrick. 2019. Emotion Work in Experience-Centered Design. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI ’19), 1–12. https://doi.org/10.1145/3290605.3300832
13.
Madeline Balaam, Stefan Rennick Egglestone, Geraldine Fitzpatrick, Tom Rodden, Ann-Marie Hughes, Anna Wilkinson, Thomas Nind, Lesley Axelrod, Eric Harris, Ian Ricketts, Susan Mawson, and Jane Burridge. 2011. Motivating mobility: Designing for lived motivation in stroke rehabilitation. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’11), 3073–3082. https://doi.org/10.1145/1978942.1979397
14.
Michael Baran, Nicole Lehrer, Margaret Duff, Vinay Venkataraman, Pavan Turaga, Todd Ingalls, W. Zev Rymer, Steven L. Wolf, and Thanassis Rikakis. 2015. Interdisciplinary Concepts for Design and Implementation of Mixed Reality Interactive Neurorehabilitation Systems for Stroke. Physical Therapy 95, 3: 449–460. https://doi.org/10.2522/ptj.20130581
15.
Lauren Baron, Vuthea Chheang, Amit Chaudhari, Arooj Liaqat, Aishwarya Chandrasekaran, Yufan Wang, Joshua Cashaback, Erik Thostenson, and Roghayeh Leila Barmaki. 2024. Virtual Therapy Exergame for Upper Extremity Rehabilitation Using Smart Wearable Sensors. In Proceedings of the 8th ACM/IEEE International Conference on Connected Health: Applications, Systems and Engineering Technologies (CHASE ’23), 92–101. https://doi.org/10.1145/3580252.3586975
16.
Mathilde M. Bekker, Judith S. Olson, and Gary M. Olson. 1995. Analysis of gestures in face-to-face design teams provides guidance for how to use groupware in design. In Proceedings of the 1st conference on Designing interactive systems: Processes, practices, methods, & techniques (DIS ’95), 157–166. https://doi.org/10.1145/225434.225452
17.
Genevieve Bell, Mark Blythe, and Phoebe Sengers. 2005. Making by Making Strange: Defamiliarization and the Design of Domestic Technologies. ACM Trans. Comput.-Hum. Interact. 12, 2: 149–173. https://doi.org/10.1145/1067860.1067862
18.
Mark J. Berentsen, Marit Bentvelzen, and Paweł W. Woźniak. 2021. MTBalance: Assisting Novice Mountain Bikers with Real-Time Proprioceptive Feedback. Proc. ACM Hum.-Comput. Interact. 5, ISS: 506:1–506:25. https://doi.org/10.1145/3488551
19.
Merel K N van den Berg, Armağan Karahanoğlu, Matthijs L Noordzij, Els L M Maeckelberghe, and Geke D S Ludden. 2025. Why we should stress about stress scores: Issues and directions for wearable stress-tracking technology. Interacting with Computers: iwaf036. https://doi.org/10.1093/iwc/iwaf036
20.
David Bertolo, Stéphanie Fleck, Camille Lemiere, and Isabelle Pecci. 2025. Tangible User Interface in Health: A Scoping Review. In Proceedings of the Nineteenth International Conference on Tangible, Embedded, and Embodied Interaction (TEI ’25), 1–25. https://doi.org/10.1145/3689050.3704951
21.
Janne Mascha Beuthel and Danielle Wilde. 2017. Wear.x: Developing Wearables that Embody Felt Experience. In Proceedings of the 2017 Conference on Designing Interactive Systems (DIS ’17), 915–927. https://doi.org/10.1145/3064663.3064799
22.
Frédéric Bevilacqua, Eric O Boyer, Jules Françoise, Olivier Houix, Patrick Susini, Agnès Roby-Brami, and Sylvain Hanneton. 2016. Sensori-Motor Learning with Movement Sonification: Perspectives from Recent Interdisciplinary Studies. https://doi.org/10.3389/fnins.2016.00385
23.
Eli Blevis. 2007. Sustainable interaction design: Invention & disposal, renewal & reuse. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’07), 503–512. https://doi.org/10.1145/1240624.1240705
24.
Johan Blomkvist and Stefan Holmlid. 2010. Service Prototyping According to Service Design Practitioners.
25.
Virginia Braun and Victoria Clarke. 2006. Using thematic analysis in psychology. Qualitative Research in Psychology 3, 2: 77–101. https://doi.org/10.1191/1478088706qp063oa
26.
Marion Buchenau and Jane Fulton Suri. 2000. Experience prototyping. In Proceedings of the 3rd conference on Designing interactive systems: Processes, practices, methods, and techniques (DIS ’00), 424–433. https://doi.org/10.1145/347642.347802
27.
Colin Burns, Eric Dishman, William Verplank, and Bud Lassiter. 1994. Actors, hairdos & videotape—informance design. In Conference Companion on Human Factors in Computing Systems (CHI ’94), 119–120. https://doi.org/10.1145/259963.260102
28.
Sang Hoon Chae, Yushin Kim, Kyoung-Soub Lee, and Hyung-Soon Park. 2020. Development and Clinical Evaluation of a Web-Based Upper Limb Home Rehabilitation System Using a Smartwatch and Machine Learning Model for Chronic Stroke Survivors: Prospective Comparative Study. JMIR mHealth and uHealth 8, 7: e17216. https://doi.org/10.2196/17216
29.
Vivek Chandel, Avik Ghose, and Aniruddha Sinha. 2024. Demo: Smartwatch-Driven Gaming for Stroke Rehabilitation. In Proceedings of the 22nd ACM Conference on Embedded Networked Sensor Systems (SenSys ’24), 887–888. https://doi.org/10.1145/3666025.3699421
30.
Maria Chiu, Elina Tochilnikova, and Casper Harteveld. 2024. From Novelty to Clinical Practice: Exploring VR Exergames with Physical Therapists. Proc. ACM Hum.-Comput. Interact. 8, CHI PLAY: 303:1–303:29. https://doi.org/10.1145/3677068
31.
T. Matthew Ciolek. 1983. The proxemics lexicon: A first approximation. Journal of Nonverbal Behavior 8, 1: 55–79. https://doi.org/10.1007/BF00986330
32.
Karen Anne Cochrane, Chau Nguyen, Yidan Cao, Noemi M. E. Roestel, Lee Jones, and Audrey Girouard. 2023. Adaptive Soft Switches: Co-Designing Fabric Adaptive Switches with Occupational Therapists for Children and Adolescents with Acquired Brain Injury. In Proceedings of the Seventeenth International Conference on Tangible, Embedded, and Embodied Interaction (TEI ’23), 1–14. https://doi.org/10.1145/3569009.3572734
33.
Design Council. 2003. The Double Diamond. Retrieved February 6, 2024 from https://www.designcouncil.org.uk/our-resources/the-double-diamond/
34.
Mihaly Csikszentmihalyi. 2008. Flow: The Psychology of Optimal Experience. Harper Perennial Modern Classics, New York.
35.
N. Dahlbäck, A. Jönsson, and L. Ahrenberg. 1993. Wizard of Oz studies — why and how. Knowledge-Based Systems 6, 4: 258–266. https://doi.org/10.1016/0950-7051(93)90017-N
36.
Claudia Daudén Roquet and Corina Sas. 2021. Interoceptive Interaction: An Embodied Metaphor Inspired Approach to Designing for Meditation. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (CHI ’21), 1–17. https://doi.org/10.1145/3411764.3445137
37.
Marloes De Valk. 2021. A pluriverse of local worlds: A review of Computing within Limits related terminology and practices. LIMITS Workshop on Computing within Limits. https://doi.org/10.21428/bf6fb269.1e37d8be
38.
Robby van Delden. 2011. Design of therapeutic TagTile games for children with unilateral spastic cerebral paresis. University of Twente, Enschede, the Netherlands. Retrieved from http://essay.utwente.nl/61135/
39.
Robby van Delden, Pauline Aarts, and Betsy van Dijk. 2012. Design of Tangible Games for Children Undergoing Occupational and Physical Therapy. In Entertainment Computing - ICEC 2012 (Lecture Notes in Computer Science), 221–234. https://doi.org/10.1007/978-3-642-33542-6_19
40.
Audrey Desjardins and Cayla Key. 2020. Parallels, Tangents, and Loops: Reflections on the ‘Through’ Part of RtD. In Proceedings of the 2020 ACM Designing Interactive Systems Conference (DIS ’20), 2133–2147. https://doi.org/10.1145/3357236.3395586
41.
Kayla DesPortes, Kathleen McDermott, Yoav Bergner, Francisco Enrique Vicente Castro, Sauda Musharrat, and Aakruti Lunia. 2024. DanceBits ‘It tells you to see us’: Supporting Dance Practices with an Educational Computing Kit. In Proceedings of the Eighteenth International Conference on Tangible, Embedded, and Embodied Interaction (TEI ’24), 1–19. https://doi.org/10.1145/3623509.3633350
42.
Sebastian Deterding. 2009. The Game Frame: Systemizing a Goffmanian Approach to Video Game Theory.
43.
Nathan DeVrio, Vimal Mollyn, and Chris Harrison. 2023. SmartPoser: Arm Pose Estimation with a Smartphone and Smartwatch Using UWB and IMU Data. In Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology (UIST ’23), 1–11. https://doi.org/10.1145/3586183.3606821
44.
Virginia Dickson-Swift, Erica L. James, Sandra Kippen, and Pranee Liamputtong. 2009. Researching sensitive topics: Qualitative research as emotion work. Qualitative Research 9, 1: 61–79. https://doi.org/10.1177/1468794108098031
45.
J. P. Djajadiningrat, W. W. Gaver, and J. W. Fres. 2000. Interaction relabelling and extreme characters: Methods for exploring aesthetic interactions. In Proceedings of the 3rd conference on Designing interactive systems: Processes, practices, methods, and techniques (DIS ’00), 66–71. https://doi.org/10.1145/347642.347664
46.
Luna Dolezal and Venla Oikkonen. 2021. Introduction: Self-Tracking, Embodied Differences, and Intersectionality. Catalyst: Feminism, Theory, Technoscience 7, 1. https://doi.org/10.28968/cftt.v7i1.35273
47.
Yun Dong, Dax Steins, Shanbin Sun, Fei Li, James D. Amor, Christopher J. James, Zhidao Xia, Helen Dawes, Hooshang Izadi, Yi Cao, Derick T. Wade, Yuanfeng Peng, Jingjing Xue, Xiaoli Guo, Xuesong Xie, Na Zuo, Xinkui Gao, Lingzhi Wu, Peifang Li, Ying Wang, Chong Chen, Peiyang Sun, Jinji Wang, Feifei Wang, Panfu Hao, Weiwei Wu, Yubao Gao, Xiaoli Sun, Haiyang Wu, Yujie Yang, and Smart watch activity feedback trial committee (SWAFT). 2018. Does feedback on daily activity level from a Smart watch during inpatient stroke rehabilitation increase physical activity levels? Study protocol for a randomized controlled trial. Trials 19, 1: 177. https://doi.org/10.1186/s13063-018-2476-z
48.
Don Samitha Elvitigala, Denys J. C. Matthies, Löic David, Chamod Weerasinghe, and Suranga Nanayakkara. 2019. GymSoles: Improving Squats and Dead-Lifts by Visualizing the User’s Center of Pressure. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI ’19), 1–12. https://doi.org/10.1145/3290605.3300404
49.
William J. Farr, Marilyn Poole, Clive Thursfield, and Ian Male. 2017. Acceptance of a Neuropaediatric Exergame Rehabilitation System with Severe Cerebral Palsy. In Extended Abstracts Publication of the Annual Symposium on Computer-Human Interaction in Play (CHI PLAY ’17 Extended Abstracts), 255–260. https://doi.org/10.1145/3130859.3131317
50.
Sarah Fdili Alaoui, Baptiste Caramiaux, Marcos Serrano, and Frédéric Bevilacqua. 2012. Movement qualities as interaction modality. In Proceedings of the Designing Interactive Systems Conference (DIS ’12), 761–769. https://doi.org/10.1145/2317956.2318071
51.
Pedro Ferreira and Kristina Höök. 2011. Bodily Orientations Around Mobiles: Lessons Learnt in Vanuatu. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’11), 277–286. https://doi.org/10.1145/1978942.1978981
52.
Christopher Frayling. 1994. Research in Art and Design (Royal College of Art Research Papers, Vol 1, No 1, 1993/4). Retrieved October 30, 2025 from https://researchonline.rca.ac.uk/384/
53.
Edwin Gamboa, Andres Serrato, Diana Toro, and Maria Trujillo. 2020. Advantages and limitations of leap motion for developing physical rehabilitation exergames (PREGs). In Proceedings of the 5th Workshop on ICTs for improving Patients Rehabilitation Research Techniques (REHAB ’19), 43–46. https://doi.org/10.1145/3364138.3364149
54.
Edwin Gamboa and Maria Trujillo. 2020. Identifying aspects, methods and instruments to evaluate player experience in physical rehabilitation exergames. In Proceedings of the 5th Workshop on ICTs for improving Patients Rehabilitation Research Techniques (REHAB ’19), 136–139. https://doi.org/10.1145/3364138.3364166
55.
Nadia Vanessa Garcia Hernandez, Stefano Buccelli, Matteo Laffranchi, and Lorenzo de Michieli. 2023. Mixed Reality-based Exergames for Upper Limb Robotic Rehabilitation. In Companion of the 2023 ACM/IEEE International Conference on Human-Robot Interaction (HRI ’23), 447–451. https://doi.org/10.1145/3568294.3580124
56.
William W. Gaver. 1991. Technology Affordances. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’91), 79–84. https://doi.org/10.1145/108844.108856
57.
William Gaver. 2012. What Should We Expect from Research Through Design? In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’12), 937–946. https://doi.org/10.1145/2207676.2208538
58.
William Gaver, John Bowers, Tobie Kerridge, Andy Boucher, and Nadine Jarvis. 2009. Anatomy of a failure: How we knew when our design went wrong, and what we learned from it. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’09), 2213–2222. https://doi.org/10.1145/1518701.1519040
59.
Erving Goffman. 1986. Frame Analysis: An Essay on the Organization of Experience. Northeastern University Press.
60.
Pedro Raphael Inácio Gomes, Murillo Santos de Castro, and Thamer Horbylon Nascimento. 2024. Gesture Recognition Methods Using Sensors Integrated into Smartwatches: Results of a Systematic Literature Review. In Proceedings of the XXII Brazilian Symposium on Human Factors in Computing Systems (IHC ’23), 1–11. https://doi.org/10.1145/3638067.3638082
61.
Alejandra Gómez Ortega and Beatrice Vincenzi. 2025. Reconfiguring Our Data: A Duoethnography on Chronic Health and Physical Activity through the Lens of Fitness Trackers. In Companion Publication of the 2025 ACM Designing Interactive Systems Conference, 461–465. https://doi.org/10.1145/3715668.3736373
62.
Alba Gomez-Andres, Jennifer Grau-Sánchez, Esther Duarte, Antoni Rodriguez-Fornells, and Ana Tajadura-Jiménez. 2020. Enriching footsteps sounds in gait rehabilitation in chronic stroke patients: A pilot study. Annals of the New York Academy of Sciences 1467, 1: 48–59. https://doi.org/10.1111/nyas.14276
63.
Bruna Goveia da Rocha, Kristina Andersen, and Oscar Tomico. 2022. Portfolio of Loose Ends. In Proceedings of the 2022 ACM Designing Interactive Systems Conference (DIS ’22), 527–540. https://doi.org/10.1145/3532106.3533516
64.
Edward Twitchell Hall. 1966. The hidden dimension. Doubleday, Garden City, NY, USA.
65.
Israel Halperin, Dale W. Chapman, David T. Martin, and Chris Abbiss. 2017. The effects of attentional focus instructions on punching velocity and impact forces among trained combat athletes. Journal of Sports Sciences 35, 5: 500–507. https://doi.org/10.1080/02640414.2016.1175651
66.
Kate Hartman, Brian Jepson, Emma Dvorak, and Rebecca Demarest. 2014. Make: Wearable electronics. Maker Media, Sebastopol, CA.
67.
Simon Holland, Anders J. Bouwer, Mathew Dalgelish, and Topi M. Hurtig. 2010. Feeling the beat where it counts: Fostering multi-limb rhythm skills with the haptic drum kit. In Proceedings of the fourth international conference on Tangible, embedded, and embodied interaction (TEI ’10), 21–28. https://doi.org/10.1145/1709886.1709892
68.
Lars Erik Holmquist. 2023. Bits are Cheap, Atoms are Expensive: Critiquing the Turn Towards Tangibility in HCI. In Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems (CHI EA ’23), 1–8. https://doi.org/10.1145/3544549.3582744
69.
Sarah Homewood, Claudia A Hinkle, and Irene Kaklopoulou. 2025. Cripping the Co-Design of Pacing Technologies For Energy-Limiting Conditions. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems (CHI ’25), 1–16. https://doi.org/10.1145/3706598.3713990
70.
Sarah Homewood, Nantia Koulidou, Claudia A Hinkle, Irene Kaklopoulou, and Harvey Bewley. 2025. Lull: Designing Crip Pacing Technologies for Rest. In Proceedings of the 2025 ACM Designing Interactive Systems Conference (DIS ’25), 3082–3097. https://doi.org/10.1145/3715336.3735419
71.
Sarah Homewood, Kari Okholm Just, and Olivia Bramm Johansson. 2024. The Unanticipated Use of Fitness Tracking Technologies During Post-COVID Syndrome. In Proceedings of the 2024 ACM Designing Interactive Systems Conference (DIS ’24), 556–570. https://doi.org/10.1145/3643834.3661617
72.
Kristina Höök. 2018. Designing with the Body: Somaesthetic Interaction Design. The MIT Press. https://doi.org/10.7551/mitpress/11481.001.0001
73.
Kristina Höök, Baptiste Caramiaux, Cumhur Erkut, Jodi Forlizzi, Nassrin Hajinejad, Michael Haller, Caroline C. M. Hummels, Katherine Isbister, Martin Jonsson, George Khut, Lian Loke, Danielle Lottridge, Patrizia Marti, Edward Melcer, Florian Floyd Müller, Marianne Graves Petersen, Thecla Schiphorst, Elena Márquez Segura, Anna Ståhl, Dag Svanæs, Jakob Tholander, and Helena Tobiasson. 2018. Embracing First-Person Perspectives in Soma-Based Design. Informatics 5, 1: 8. https://doi.org/10.3390/informatics5010008
74.
Kristina Höök, Martin P. Jonsson, Anna Ståhl, and Johanna Mercurio. 2016. Somaesthetic Appreciation Design. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (CHI ’16), 3131–3142. https://doi.org/10.1145/2858036.2858583
75.
Kristina Höök and Jonas Löwgren. 2012. Strong concepts: Intermediate-level knowledge in interaction design research. ACM Transactions on Computer-Human Interaction 19, 3: 23:1–23:18. https://doi.org/10.1145/2362364.2362371
76.
Noura Howell, Audrey Desjardins, and Sarah Fox. 2021. Cracks in the Success Narrative: Rethinking Failure in Design Research through a Retrospective Trioethnography. ACM Trans. Comput.-Hum. Interact. 28, 6: 42:1–42:31. https://doi.org/10.1145/3462447
77.
Hongci Hu, Mengqi Jiang, Kai Lin, Kinor Shou-xiang Jiang, and Ziqian Bai. 2025. ReKnit-Care: A Seamless-Knitted Sensing Glove for Sensory Rehabilitation and Adaptive Haptic Feedback. In Proceedings of the Nineteenth International Conference on Tangible, Embedded, and Embodied Interaction (TEI ’25), 1–7. https://doi.org/10.1145/3689050.3705979
78.
Johan Huizinga. 1955. Homo Ludens: A Study of the Play-Element in Culture. Beacon Press, Boston, MA, USA.
79.
Caroline Hummels. 2016. Embodied Encounters Studio: A Tangible Platform for Sensemaking. In Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems (CHI EA ’16), 3691–3694. https://doi.org/10.1145/2851581.2890272
80.
Hilary Hutchinson, Wendy Mackay, Bo Westerlund, Benjamin B. Bederson, Allison Druin, Catherine Plaisant, Michel Beaudouin-Lafon, Stéphane Conversy, Helen Evans, Heiko Hansen, Nicolas Roussel, and Björn Eiderbäck. 2003. Technology probes: Inspiring design for and with families. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’03), 17–24. https://doi.org/10.1145/642611.642616
81.
Salma Ibrahim and Sara Nabil. 2025. E-Serging: Exploring the Use of Overlockers (Sergers) in Creating E-Textile Seams and Interactive Yarns for Garment Making, Embroidery, and Weaving. In Proceedings of the Nineteenth International Conference on Tangible, Embedded, and Embodied Interaction (TEI ’25), 1–17. https://doi.org/10.1145/3689050.3704428
82.
Sylvia Janicki, Alexandra Teixeira Riggs, Noura Howell, Anne Sullivan, and Abigale Stangl. 2024. Queering/Cripping Technologies of Productivity. In Extended Abstracts of the CHI Conference on Human Factors in Computing Systems (CHI EA ’24), 1–12. https://doi.org/10.1145/3613905.3644067
83.
Lee Jones, Sara Nabil, Amanda McLeod, and Audrey Girouard. 2020. Wearable Bits: Scaffolding Creativity with a Prototyping Toolkit for Wearable E-textiles. In Proceedings of the Fourteenth International Conference on Tangible, Embedded, and Embodied Interaction (TEI ’20), 165–177. https://doi.org/10.1145/3374920.3374954
84.
Martin Jonsson, Anna Ståhl, Johanna Mercurio, Anna Karlsson, Naveen Ramani, and Kristina Höök. 2016. The Aesthetics of Heat: Guiding Awareness with Thermal Stimuli. In Proceedings of the TEI ’16: Tenth International Conference on Tangible, Embedded, and Embodied Interaction (TEI ’16), 109–117. https://doi.org/10.1145/2839462.2839487
85.
Annkatrin Jung, Miquel Alfaras, Pavel Karpashevich, William Primett, and Kristina Höök. 2021. Exploring Awareness of Breathing through Deep Touch Pressure. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (CHI ’21), 1–15. https://doi.org/10.1145/3411764.3445533
86.
Victor Kaptelinin and Bonnie Nardi. 2012. Affordances in HCI: Toward a Mediated Action Perspective. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’12), 967–976. https://doi.org/10.1145/2207676.2208541
87.
Aisling Kelliher, Andrew Gibson, Eric Bottelsen, and Edward Coe. 2019. Designing Modular Rehabilitation Objects for Interactive Therapy in the Home. In Proceedings of the Thirteenth International Conference on Tangible, Embedded, and Embodied Interaction (TEI ’19), 251–257. https://doi.org/10.1145/3294109.3300983
88.
Adam Kendon. 2010. Spacing and Orientation in Co-present Interaction. In Development of Multimodal Interfaces: Active Listening and Synchrony: Second COST 2102 International Training School, Dublin, Ireland, March 23-27, 2009, Revised Selected Papers, Anna Esposito, Nick Campbell, Carl Vogel, Amir Hussain and Anton Nijholt (eds.). Springer Berlin Heidelberg, Berlin, Heidelberg, 1–15. https://doi.org/10.1007/978-3-642-12397-9_1
89.
William M. Land, Cornelia Frank, and Thomas Schack. 2014. The influence of attentional focus on the development of skill representation in a complex action. Psychology of Sport and Exercise 15, 1: 30–38. https://doi.org/10.1016/j.psychsport.2013.09.006
90.
Kate E. Laver, Belinda Lange, Stacey George, Judith E. Deutsch, Gustavo Saposnik, and Maria Crotty. 2017. Virtual reality for stroke rehabilitation. Cochrane Database of Systematic Reviews, 11. https://doi.org/10.1002/14651858.CD008349.pub4
91.
Si-Huei Lee, Shih-Ching Yeh, Rai-Chi Chan, Shuya Chen, Geng Yang, and Li-Rong Zheng. 2016. Motor Ingredients Derived from a Wearable Sensor-Based Virtual Reality System for Frozen Shoulder Rehabilitation. BioMed Research International 2016, 1: 7075464. https://doi.org/10.1155/2016/7075464
92.
Judith Ley-Flores, Frédéric Bevilacqua, Nadia Bianchi-Berthouze, and Ana Tajadura-Jiménez. 2019. Altering body perception and emotion in physically inactive people through movement sonification. In 2019 8th International Conference on Affective Computing and Intelligent Interaction (ACII), 1–7. https://doi.org/10.1109/ACII.2019.8925432
93.
Judith Ley-Flores, Laia Turmo Vidal, Nadia Berthouze, Aneesha Singh, Frédéric Bevilacqua, and Ana Tajadura-Jiménez. 2021. SoniBand: Understanding the Effects of Metaphorical Movement Sonifications on Body Perception and Physical Activity. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (CHI ’21), 1–16. https://doi.org/10.1145/3411764.3445558
94.
Judith Ley-Flores, Laia Turmo Vidal, Elena Márquez Segura, Aneesha Singh, Frederic Bevilacqua, Francisco Cuadrado, Joaquín Roberto Díaz Durán, Omar Valdiviezo-Hernández, Milagrosa Sánchez-Martin, and Ana Tajadura-Jiménez. 2024. Co-Designing Sensory Feedback for Wearables to Support Physical Activity through Body Sensations. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 8, 1: 40:1–40:31. https://doi.org/10.1145/3643499
95.
Wanyu Liu, Artem Dementyev, Diemo Schwarz, Emmanuel Flety, Wendy E. Mackay, Michel Beaudouin-Lafon, and Frederic Bevilacqua. 2021. SonicHoop: Using Interactive Sonification to Support Aerial Hoop Practices. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (CHI ’21), 1–16. https://doi.org/10.1145/3411764.3445539
96.
Georgia Loewen, Karen Anne Cochrane, and Audrey Girouard. 2024. From Imagination to Innovation: Using Participatory Design Fiction to Envision the Future of Accessible Gaming Wearables for Players with Upper Limb Motor Disabilities. Proc. ACM Hum.-Comput. Interact. 8, CHI PLAY: 308:1–308:30. https://doi.org/10.1145/3677073
97.
Lian Loke and Toni Robertson. 2013. Moving and making strange: An embodied approach to movement-based interaction design. ACM Transactions on Computer-Human Interaction 20, 1: 7:1–7:25. https://doi.org/10.1145/2442106.2442113
98.
Alexander Hvidbjerg Kjær Lund, Amalie Finnemannn Sørensen, Lars Elbæk, and Maximus D. Kaos. 2021. Insights from design processes used in developing exergames. In 15th European Conference on Game Based Learning, ECGBL 2021, 490–498.
99.
Preetham Madapura Nagaraj, Wen Mo, and Catherine Holloway. 2024. Mindfulness-based Embodied Tangible Interactions for Stroke Rehabilitation at Home. In Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI ’24), 1–16. https://doi.org/10.1145/3613904.3642463
100.
Charlotte Magnusson, Héctor A. Caltenco, David McGookin, Mikko Kytö, Ingibjörg Hjaltadóttir, Thóra B. Hafsteinsdóttir, Helga Jónsdóttir, and Ingibjörg Bjartmarz. 2017. Tangible Interaction for Stroke Survivors: Design Recommendations. In Proceedings of the Eleventh International Conference on Tangible, Embedded, and Embodied Interaction (TEI ’17), 597–602. https://doi.org/10.1145/3024969.3025073
101.
Nicolai Marquardt and Saul Greenberg. 2015. Proxemic Interactions: From Theory to Practice. Morgan & Claypool Publishers, San Rafael, CA, USA. Retrieved from http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=7056253
102.
Elena Márquez Segura. 2016. Embodied Core Mechanics: Designing for Movement-based Co-located Play. Doctoral dissertation, Uppsala University. Retrieved from http://uu.diva-portal.org/smash/record.jsf?pid=diva2%3A920694&dswid=-4668
103.
Elena Márquez Segura, Katja Rogers, Anna Lisa Martin-Niedecken, Stephan Niedecken, and Laia Turmo Vidal. 2021. Exploring the Design Space of Immersive Social Fitness Games: The ImSoFit Games Model. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (CHI ’21), 1–14. https://doi.org/10.1145/3411764.3445592
104.
Elena Márquez Segura, Katta Spiel, Karin Johansson, Jon Back, Z O. Toups, Jessica Hammer, Annika Waern, Theresa Jean Tanenbaum, and Katherine Isbister. 2019. Larping (Live Action Role Playing) as an Embodied Design Research Method. In Companion Publication of the 2019 on Designing Interactive Systems Conference 2019 Companion (DIS ’19 Companion), 389–392. https://doi.org/10.1145/3301019.3320002
105.
Elena Márquez Segura, Laia Turmo Vidal, Luis Parrilla Bel, and Annika Waern. 2019. Circus, Play and Technology Probes: Training Body Awareness and Control with Children. In Proceedings of the 2019 on Designing Interactive Systems Conference (DIS ’19), 1223–1236. https://doi.org/10.1145/3322276.3322377
106.
Elena Márquez Segura, Laia Turmo Vidal, Luis Parrilla Bel, and Annika Waern. 2019. Using Training Technology Probes in Bodystorming for Physical Training. In Proceedings of the 6th International Conference on Movement and Computing (MOCO ’19), 1–8. https://doi.org/10.1145/3347122.3347132
107.
Elena Márquez Segura, Laia Turmo Vidal, and Asreen Rostami. 2016. Bodystorming for movement-based interaction design. Human Technology 12, 2: 193–251. https://doi.org/10.17011/ht/urn.201611174655
108.
Elena Márquez Segura, Laia Turmo Vidal, Asreen Rostami, and Annika Waern. 2016. Embodied Sketching. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (CHI ’16), 6014–6027. https://doi.org/10.1145/2858036.2858486
109.
Elena Márquez Segura, Laia Turmo Vidal, Annika Waern, Jared Duval, Luis Parrilla Bel, and Ferran Altarriba Bertran. 2021. Physical Warm-up Games: Exploring the Potential of Play and Technology Design. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (CHI ’21), 1–14. https://doi.org/10.1145/3411764.3445163
110.
Elena Márquez Segura, Annika Waern, Luis Márquez Segura, and David López Recio. 2016. Playification: The PhySeEar case. In Proceedings of the 2016 Annual Symposium on Computer-Human Interaction in Play (CHI PLAY ’16), 376–388. https://doi.org/10.1145/2967934.2968099
111.
Elena Márquez Segura, Annika Waern, Luis Parrilla Bel, and Laia Turmo Vidal. 2019. Super Trouper: The Playful Potential of Interactive Circus Training. In Extended Abstracts of the Annual Symposium on Computer-Human Interaction in Play Companion Extended Abstracts (CHI PLAY ’19 Extended Abstracts), 511–518. https://doi.org/10.1145/3341215.3356282
112.
Joe Marshall, Conor Linehan, and Adrian Hazzard. 2016. Designing Brutal Multiplayer Video Games. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (CHI ’16), 2669–2680. https://doi.org/10.1145/2858036.2858080
113.
Igor Matias, Matthias Kliegel, and Katarzyna Wac. 2024. Providemus alz: Ubiquitous Screening of Preclinical Alzheimer’s Disease with Consumer-grade Technologies. In Companion of the 2024 on ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp ’24), 743–751. https://doi.org/10.1145/3675094.3678425
114.
Louise Petersen Matjeka. 2020. The Move Maker - Exploring Bodily Preconditions and Surrounding Conditions for Bodily Interactive Play. In Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems (CHI EA ’20), 1–6. https://doi.org/10.1145/3334480.3381652
115.
Joanna McGrenere and Wayne Ho. 2000. Affordances: Clarifying and Evolving a Concept. In Proceedings of Graphics Interface 2000 (GI ’00), 179–186.
116.
Nancy H McNevin and Gabriele Wulf. 2002. Attentional focus on supra-postural tasks affects postural control. Human Movement Science 21, 2: 187–202. https://doi.org/10.1016/S0167-9457(02)00095-7
117.
Joshua McVeigh-Schultz and Katherine Isbister. 2021. The Case for Weird Social in VR/XR: A Vision of Social Superpowers Beyond Meatspace. In Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems (CHI EA ’21). https://doi.org/10.1145/3411763.3450377
118.
Eleonora Mencarini, Amon Rapp, Lia Tirabeni, and Massimo Zancanaro. 2019. Designing Wearable Systems for Sports: A Review of Trends and Opportunities in Human-Computer Interaction. IEEE Transactions on Human-Machine Systems 49, 4: 314–325. https://doi.org/10.1109/THMS.2019.2919702
119.
Florian Mueller, Martin R. Gibbs, Frank Vetere, and Darren Edge. 2014. Supporting the Creative Game Design Process with Exertion Cards. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’14), 2211–2220. https://doi.org/10.1145/2556288.2557272
120.
Sean A. Munson. 2017. Rethinking assumptions in the design of health and wellness tracking tools. interactions 25, 1: 62–65. https://doi.org/10.1145/3168738
121.
Bonnie Nardi, Bill Tomlinson, Donald J. Patterson, Jay Chen, Daniel Pargman, Barath Raghavan, and Birgit Penzenstadler. 2018. Computing within limits. Commun. ACM 61, 10: 86–93. https://doi.org/10.1145/3183582
122.
J. Newbold, N. Gold, and N. L. Bianchi-Berthouze. 2017. Musical Expectancy in Squat Sonification for People Who Struggle with Physical Activity. In Proceedings of the International Conference on Auditory Display (ICAD 2017). Retrieved November 18, 2025 from http://icad.org/icad2017/
123.
Joseph W. Newbold, Nadia Bianchi-Berthouze, Nicolas E. Gold, Ana Tajadura-Jiménez, and Amanda CdC Williams. 2016. Musically Informed Sonification for Chronic Pain Rehabilitation: Facilitating Progress & Avoiding Over-Doing. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (CHI ’16), 5698–5703. https://doi.org/10.1145/2858036.2858302
124.
Don Norman. 1988. The Psychology Of Everyday Things. Basic Books, New York.
125.
Claudia Núñez Pacheco and Lian Loke. 2017. Tacit Narratives: Surfacing Aesthetic Meaning by Using Wearable Props and Focusing. In Proceedings of the Eleventh International Conference on Tangible, Embedded, and Embodied Interaction (TEI ’17), 233–242. https://doi.org/10.1145/3024969.3024979
126.
Claudia Núñez-Pacheco and Lian Loke. 2015. The Felt Sense Project: Towards a Methodological Framework for Designing and Crafting From the Inner Self. In 21st International Symposium on Electronic Art, Vancouver, Canada.
127.
C. Oberlin, D. Béal, S. Leechavengvongs, A. Salon, M. C. Dauge, and J. J. Sarcy. 1994. Nerve transfer to biceps muscle using a part of ulnar nerve for C5-C6 avulsion of the brachial plexus: Anatomical study and report of four cases. The Journal of Hand Surgery 19, 2: 232–237. https://doi.org/10.1016/0363-5023(94)90011-6
128.
Doenja Oogjes and Audrey Desjardins. 2024. A temporal vocabulary of Design Events for Research through Design. In Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems (CHI ’24), 1–12. https://doi.org/10.1145/3613904.3642560
129.
Antti Oulasvirta, Esko Kurvinen, and Tomi Kankainen. 2003. Understanding contexts by being there: Case studies in bodystorming. Personal and Ubiquitous Computing 7, 2: 125–134. https://doi.org/10.1007/s00779-003-0238-7
130.
Shanmugam Muruga Palaniappan and Bradley S. Duerstock. 2018. Developing Rehabilitation Practices Using Virtual Reality Exergaming. In 2018 IEEE International Symposium on Signal Processing and Information Technology (ISSPIT), 090–094. https://doi.org/10.1109/ISSPIT.2018.8642784
131.
Hyung Kun Park and Woohun Lee. 2016. Motion Echo Snowboard: Enhancing Body Movement Perception in Sport via Visually Augmented Feedback. In Proceedings of the 2016 ACM Conference on Designing Interactive Systems (DIS ’16), 192–203. https://doi.org/10.1145/2901790.2901797
132.
Hannah Perner-Wilson, Leah Buechley, and Mika Satomi. 2010. Handcrafting textile interfaces from a kit-of-no-parts. In Proceedings of the Fifth International Conference on Tangible, Embedded, and Embodied Interaction (TEI ’11), 61–68. https://doi.org/10.1145/1935701.1935715
133.
Dennis Reidsma, Robby W. van Delden, Joris P. Weijdom, René Engelhardt Hansen, Søren Lekbo, Rasmus Vestergaard Andersen, Lærke Schjødt Rasmussen, and Lars Elbæk. 2022. Considerations for (Teaching) Facilitator Roles for Movement-Based Design. In Extended Abstracts of the 2022 Annual Symposium on Computer-Human Interaction in Play (CHI PLAY ’22), 233–239. https://doi.org/10.1145/3505270.3558315
134.
Patricia Rick, Milagrosa Sánchez-Martín, Aneesha Singh, Sergio Navas-León, Mercedes Borda-Mas, Nadia Bianchi-Berthouze, and Ana Tajadura-Jiménez. 2022. Investigating psychological variables for technologies promoting physical activity. DIGITAL HEALTH 8: 20552076221116559. https://doi.org/10.1177/20552076221116559
135.
Annamina Rieder. 2025. Cheat, curse, or comply? Wearable users’ proactive, avoidant-reactive, and ameliorative-reactive coping with negative incidents. Behaviour & Information Technology 0, 0: 1–20. https://doi.org/10.1080/0144929X.2025.2570389
136.
Roopika Risam and Alex Gil. 2022. Introduction: The Questions of Minimal Computing. Digital Humanities Quarterly 16, 2.
137.
Patrick Roche, Collin J. Goldbach, Alix Putman, Jeffrey A. Jalkio, Katie Kimball, and AnnMarie P. Thomas. 2020. Circus Science: Designing Responsive Flying Trapeze Performance Costumes. In Proceedings of the Fourteenth International Conference on Tangible, Embedded, and Embodied Interaction (TEI ’20), 551–556. https://doi.org/10.1145/3374920.3374986
138.
Aurora Ruiz-Rodriguez, Hermie Hermens, and Edwin van Asseldonk. 2023. HEROES, Design of an Exergame for Balance Recovery of Stroke Patients for a Home Environment. In Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems (CHI EA ’23), 1–4. https://doi.org/10.1145/3544549.3577037
139.
Olivia G Ruston, Adwait Sharma, and Mike Fraser. 2024. SeamSleeve: Robust Arm Movement Sensing through Powered Stitching. In Proceedings of the 2024 ACM Designing Interactive Systems Conference (DIS ’24), 1134–1147. https://doi.org/10.1145/3643834.3660726
140.
Katie Salen and Eric Zimmerman. 2003. Rules of Play: Game Design Fundamentals. The MIT Press, Cambridge, MA, USA.
141.
Sirat Samyoun and John Stankovic. 2023. QARE-Watch: A Smartwatch-based Assistive System for Detecting the Quality of Arm Rehabilitation Exercises. In Adjunct Proceedings of the 2022 ACM International Joint Conference on Pervasive and Ubiquitous Computing and the 2022 ACM International Symposium on Wearable Computers (UbiComp/ISWC ’22 Adjunct), 108–111. https://doi.org/10.1145/3544793.3560349
142.
António Santos, Vânia Guimarães, Nuno Matos, João Cevada, Carlos Ferreira, and Inês Sousa. 2015. Multi-sensor exercise-based interactive games for fall prevention and rehabilitation. In Proceedings of the 9th International Conference on Pervasive Computing Technologies for Healthcare (PervasiveHealth ’15), 65–71.
143.
Nina Schaffert, Thenille Braun Janzen, Klaus Mattes, and Michael H Thaut. 2019. A Review on the Relationship Between Sound and Movement in Sports and Rehabilitation. Frontiers in Psychology 10: 244. https://doi.org/10.3389/fpsyg.2019.00244
144.
Thecla Schiphorst. 2007. Really, Really Small: The Palpability of the Invisible. In Proceedings of the 6th ACM SIGCHI Conference on Creativity & Cognition (C&C ’07), 7–16. https://doi.org/10.1145/1254960.1254962
145.
Thecla Schiphorst. 2011. Self-evidence: Applying somatic connoisseurship to experience design. In CHI ’11 Extended Abstracts on Human Factors in Computing Systems (CHI EA ’11), 145–160. https://doi.org/10.1145/1979742.1979640
146.
Dennis Schleicher, Peter Jones, and Oksana Kachur. 2010. Bodystorming as embodied designing. Interactions 17, 6: 47–51. https://doi.org/10.1145/1865245.1865256
147.
Daniel S Scholz, Sönke Rhode, Michael Großbach, Jens Rollnik, and Eckart Altenmüller. 2015. Moving with music for stroke rehabilitation: A sonification feasibility study. Annals of the New York Academy of Sciences 1337, 1: 69–76. https://doi.org/10.1111/nyas.12691
148.
Richard Shusterman. 1999. Somaesthetics: A Disciplinary Proposal. The Journal of Aesthetics and Art Criticism 57, 3: 299–313. https://doi.org/10.2307/432196
149.
Richard Shusterman. 2008. Body Consciousness: A Philosophy of Mindfulness and Somaesthetics. Cambridge University Press.
150.
Kristian T. Simsarian. 2003. Take it to the next stage: The roles of role playing in the design process. In CHI ’03 Extended Abstracts on Human Factors in Computing Systems (CHI EA ’03), 1012–1013. https://doi.org/10.1145/765891.766123
151.
Aneesha Singh, Marusa Hrobat, Suxin Gui, Nadia Bianchi-Berthouze, Judith Ley-Flores, Frederic Bevilacqua, Joaquin R. Diaz Duran, Elena Márquez Segura, and Ana Tajadura-Jiménez. 2024. Pushed by Sound: Effects of Sound and Movement Direction on Body Perception, Experience Quality, and Exercise Support. ACM Trans. Comput.-Hum. Interact. 31, 4: 53:1–53:36. https://doi.org/10.1145/3648616
152.
Aneesha Singh, Stefano Piana, Davide Pollarolo, Gualtiero Volpe, Giovanna Varni, Ana Tajadura-Jiménez, Amanda CdeC Williams, Antonio Camurri, and Nadia Bianchi-Berthouze. 2016. Go-with-the-Flow: Tracking, Analysis and Sonification of Movement and Breathing to Build Confidence in Activity Despite Chronic Pain. Human–Computer Interaction 31, 3-4: 335–383. https://doi.org/10.1080/07370024.2015.1085310
153.
Dorothé Smit, Doenja Oogjes, Bruna Goveia da Rocha, Ambra Trotto, Yeup Hur, and Caroline Hummels. 2016. Ideating in Skills: Developing Tools for Embodied Co-Design. In Proceedings of the TEI ’16: Tenth International Conference on Tangible, Embedded, and Embodied Interaction (TEI ’16), 78–85. https://doi.org/10.1145/2839462.2839497
154.
Marie Louise Juul Søndergaard, Marianela Ciolfi Felice, and Madeline Balaam. 2021. Designing Menstrual Technologies with Adolescents. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (CHI ’21), 1–14. https://doi.org/10.1145/3411764.3445471
155.
Marie Louise Juul Søndergaard, Ozgun Kilic Afsar, Marianela Ciolfi Felice, Nadia Campo Woytuk, and Madeline Balaam. 2020. Designing with Intimate Materials and Movements: Making "Menarche Bits". In Proceedings of the 2020 ACM Designing Interactive Systems Conference (DIS ’20), 587–600. https://doi.org/10.1145/3357236.3395592
156.
Daniel Spelmezan, Anke Hilgers, and Jan Borchers. 2009. A language of tactile motion instructions. In Proceedings of the 11th International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI ’09), 1–5. https://doi.org/10.1145/1613858.1613896
157.
Daniel Spelmezan, Mareike Jacobs, Anke Hilgers, and Jan Borchers. 2009. Tactile motion instructions for physical activities. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’09), 2243–2252. https://doi.org/10.1145/1518701.1519044
158.
Katta Spiel. 2021. The Bodies of TEI – Investigating Norms and Assumptions in the Design of Embodied Interaction. In Proceedings of the Fifteenth International Conference on Tangible, Embedded, and Embodied Interaction (TEI ’21), 1–19. https://doi.org/10.1145/3430524.3440651
159.
Katta Spiel, Fares Kayali, Louise Horvath, Michael Penkler, Sabine Harrer, Miguel Sicart, and Jessica Hammer. 2018. Fitter, Happier, More Productive? The Normative Ontology of Fitness Trackers. In Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems (CHI EA ’18), 1–10. https://doi.org/10.1145/3170427.3188401
160.
Anna Ståhl, Kristina Höök, Martin Svensson, Alex S. Taylor, and Marco Combetto. 2009. Experiencing the Affective Diary. Personal and Ubiquitous Computing 13, 5: 365–378. https://doi.org/10.1007/s00779-008-0202-7
161.
Jaakko Stenros. 2014. In Defence of a Magic Circle: The Social, Mental and Cultural Boundaries of Play. Transactions of the Digital Games Research Association 1, 2. https://doi.org/10.26503/todigra.v1i2.10
162.
Jelle Stienstra, Kees Overbeeke, and Stephan Wensveen. 2011. Embodying complexity through movement sonification: Case study on empowering the speed-skater. In Proceedings of the 9th ACM SIGCHI Italian Chapter International Conference on Computer-Human Interaction: Facing Complexity (CHItaly), 39–44. https://doi.org/10.1145/2037296.2037310
163.
Isabelle Stoate and Gabriele Wulf. 2011. Does the Attentional Focus Adopted by Swimmers Affect Their Performance? International Journal of Sports Science & Coaching 6, 1: 99–108. https://doi.org/10.1260/1747-9541.6.1.99
164.
Agnes Sturma, Laura A. Hruby, Dario Farina, and Oskar C. Aszmann. 2019. Structured Motor Rehabilitation After Selective Nerve Transfers. Journal of Visualized Experiments, 150. https://doi.org/10.3791/59840
165.
Petra Sundström, Alex Taylor, Katja Grufberg, Niklas Wirström, Jordi Solsona Belenguer, and Marcus Lundén. 2011. Inspirational bits: Towards a shared understanding of the digital material. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’11), 1561–1570. https://doi.org/10.1145/1978942.1979170
166.
Dag Svanæs and Louise Barkhuus. 2020. The Designer’s Body as Resource in Design: Exploring Combinations of Point-of-view and Tense. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (CHI ’20), 1–13. https://doi.org/10.1145/3313831.3376430
167.
Ana Tajadura-Jiménez, Judith Ley-Flores, Omar Valdiviezo, Aneesha Singh, Milagrosa Sánchez-Martín, Joaquín Díaz Durán, and Elena Márquez Segura. 2022. Exploring the Design Space for Body Transformation Wearables to Support Physical Activity through Sensitizing and Bodystorming. In Proceedings of the 8th International Conference on Movement and Computing (MOCO ’22), 1–9. https://doi.org/10.1145/3537972.3538001
168.
Ana Tajadura-Jiménez, Maria Basia, Ophelia Deroy, Merle Fairhurst, Nicolai Marquardt, and Nadia Bianchi-Berthouze. 2015. As Light as your Footsteps: Altering Walking Sounds to Change Perceived Body Weight, Emotional State and Gait. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI ’15), 2943–2952. https://doi.org/10.1145/2702123.2702374
169.
Ana Tajadura-Jiménez, Francisco Cuadrado, Patricia Rick, Nadia Bianchi-Berthouze, Aneesha Singh, Aleksander Väljamäe, and Frédéric Bevilacqua. 2018. Designing a gesture-sound wearable system to motivate physical activity by altering body perception. In Proceedings of the 5th International Conference on Movement and Computing (MOCO ’18), 1–6. https://doi.org/10.1145/3212721.3212877
170.
Ana Tajadura-Jiménez, Joseph Newbold, Linge Zhang, Patricia Rick, and Nadia Bianchi-Berthouze. 2019. As Light as You Aspire to Be: Changing Body Perception with Sound to Support Physical Activity. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI ’19), 1–14. https://doi.org/10.1145/3290605.3300888
171.
Ana Tajadura-Jiménez, Aleksander Väljamäe, and Kristi Kuusk. 2020. Altering One’s Body-Perception Through E-Textiles and Haptic Metaphors. Frontiers in Robotics and AI 7. https://doi.org/10.3389/frobt.2020.00007
172.
Julia Tannus de Souza, Caroline Valentini, Eduardo Lazaro Martins Naves, and Edgard Afonso Lamounier Jr. 2022. A Virtual Reality Exergame with a Low-cost 3D Motion Tracking for At-Home Post-Stroke Rehabilitation. In Proceedings of the 23rd Symposium on Virtual and Augmented Reality (SVR ’21), 172–176. https://doi.org/10.1145/3488162.3488223
173.
Nick Taylor, Jon Rogers, Loraine Clarke, Martin Skelly, Jayne Wallace, Pete Thomas, Babitha George, Romit Raj, Mike Shorter, and Michelle Thorne. 2021. Prototyping Things: Reflecting on Unreported Objects of Design Research for IoT. In Proceedings of the 2021 ACM Designing Interactive Systems Conference (DIS ’21), 1807–1816. https://doi.org/10.1145/3461778.3462037
174.
Paul Tennent, Joe Marshall, Vasiliki Tsaknaki, Charles Windlin, Kristina Höök, and Miquel Alfaras. 2020. Soma Design and Sensory Misalignment. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (CHI ’20), 1–12. https://doi.org/10.1145/3313831.3376812
175.
Jakob Tholander. 2014. Using body cards in a design process for going from bodily experiences to design. In Proceedings of the 28th International BCS Human Computer Interaction Conference on HCI 2014 - Sand, Sea and Sky - Holiday HCI (BCS-HCI ’14), 141–150. https://doi.org/10.14236/ewic/hci2014.15
176.
Romain Toebosch, Arne Berger, and Carine Lallemand. 2024. Non-judgmental Interfaces: A New Design Space for Personal Informatics. In Companion Publication of the 2024 ACM Designing Interactive Systems Conference (DIS ’24 Companion), 166–170. https://doi.org/10.1145/3656156.3663706
177.
Krishna R. Tripuraneni, Jared R. H. Foran, Natalie R. Munson, Natalie E. Racca, and Joshua T. Carothers. 2021. A Smartwatch Paired With A Mobile Application Provides Postoperative Self-Directed Rehabilitation Without Compromising Total Knee Arthroplasty Outcomes: A Randomized Controlled Trial. The Journal of Arthroplasty 36, 12: 3888–3893. https://doi.org/10.1016/j.arth.2021.08.007
178.
Vasiliki Tsaknaki, Madeline Balaam, Anna Ståhl, Pedro Sanches, Charles Windlin, Pavel Karpashevich, and Kristina Höök. 2019. Teaching Soma Design. In Proceedings of the 2019 on Designing Interactive Systems Conference (DIS ’19), 1237–1249. https://doi.org/10.1145/3322276.3322327
179.
Laia Turmo Vidal, Elena Márquez Segura, Christopher Boyer, and Annika Waern. 2019. Enlightened Yoga: Designing an Augmented Class with Wearable Lights to Support Instruction. In Proceedings of the 2019 on Designing Interactive Systems Conference (DIS ’19), 1017–1031. https://doi.org/10.1145/3322276.3322338
180.
Laia Turmo Vidal, Elena Márquez Segura, Luis Parrilla Bel, and Annika Waern. 2018. Exteriorizing Body Alignment in Collocated Physical Training. In Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems (CHI EA ’18), 1–6. https://doi.org/10.1145/3170427.3188685
181.
Laia Turmo Vidal, Elena Márquez Segura, Luis Parrilla Bel, and Annika Waern. 2020. Training Body Awareness and Control with Technology Probes: A Portfolio of Co-Creative Uses to Support Children with Motor Challenges. In Proceedings of the Fourteenth International Conference on Tangible, Embedded, and Embodied Interaction (TEI ’20), 823–835. https://doi.org/10.1145/3374920.3375002
182.
Laia Turmo Vidal, Elena Márquez Segura, Luis Parrilla Bel, and Annika Waern. 2020. Training Technology Probes Across Fitness Practices: Yoga, Circus and Weightlifting. In Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems (CHI EA ’20), 1–8. https://doi.org/10.1145/3334480.3382862
183.
Laia Turmo Vidal, Elena Márquez Segura, and Annika Waern. 2023. Intercorporeal Biofeedback for Movement Learning. ACM Transactions on Computer-Human Interaction 30, 3: 43:1–43:40. https://doi.org/10.1145/3582428
184.
Laia Turmo Vidal, Elena Márquez Segura, and Annika Waern. 2018. Sensory bodystorming for collocated physical training design. In Proceedings of the 10th Nordic Conference on Human-Computer Interaction (NordiCHI ’18), 247–259. https://doi.org/10.1145/3240167.3240224
185.
Laia Turmo Vidal, Ana Tajadura-Jiménez, José Manuel Vega-Cebrián, Judith Ley-Flores, Joaquin R. Díaz-Durán, and Elena Márquez Segura. 2024. Body Transformation: An Experiential Quality of Sensory Feedback Wearables for Altering Body Perception. In Proceedings of the Eighteenth International Conference on Tangible, Embedded, and Embodied Interaction (TEI ’24), 1–19. https://doi.org/10.1145/3623509.3633373
186.
Laia Turmo Vidal, Hui Zhu, and Abraham Riego-Delgado. 2020. BodyLights: Open-Ended Augmented Feedback to Support Training Towards a Correct Exercise Execution. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (CHI ’20), 1–14. https://doi.org/10.1145/3313831.3376268
187.
Laia Turmo Vidal, Hui Zhu, Annika Waern, and Elena Márquez Segura. 2021. The Design Space of Wearables for Sports and Fitness Practices. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (CHI ’21), 1–14. https://doi.org/10.1145/3411764.3445700
188.
José Manuel Vega-Cebrián. 2025. MoTTs / WearPlayfulMoves: Angle, Points, Maze. https://doi.org/10.5281/zenodo.17726173
189.
José Manuel Vega-Cebrián, Elena Márquez Segura, María Fernanda Alarcón, Tomás Bonino Covas, Lara Cristóbal, Andrés A. Maldonado, and Ana Tajadura-Jimenez. 2025. Co-designing Minimalist Wearables to Support Physical Rehabilitation after Peripheral Nerve Transfer Surgery. https://doi.org/10.5281/zenodo.17903256
190.
José Manuel Vega-Cebrián, Elena Márquez Segura, and Ana Tajadura-Jiménez. 2024. Towards a Minimalist Embodied Sketching Toolkit for Wearable Design for Motor Learning. In Proceedings of the Eighteenth International Conference on Tangible, Embedded, and Embodied Interaction (TEI ’24), 1–7. https://doi.org/10.1145/3623509.3635253
191.
José Manuel Vega-Cebrián, Elena Márquez Segura, Laia Turmo Vidal, Omar Valdiviezo-Hernández, Annika Waern, Robby Van Delden, Joris Weijdom, Lars Elbæk, Rasmus Vestergaard Andersen, Søren Stigkær Lekbo, and Ana Tajadura-Jiménez. 2023. Design Resources in Movement-based Design Methods: A Practice-based Characterization. In Proceedings of the 2023 ACM Designing Interactive Systems Conference (DIS ’23), 871–888. https://doi.org/10.1145/3563657.3596036
192.
José Manuel Vega-Cebrián, Laia Turmo Vidal, Ana Tajadura-Jiménez, Tomás Bonino Covas, and Elena Márquez Segura. 2024. Movits: A Minimalist Toolkit for Embodied Sketching. In Proceedings of the 2024 ACM Designing Interactive Systems Conference (DIS ’24), 3302–3317. https://doi.org/10.1145/3643834.3660706
193.
José Manuel Vega-Cebrián, Laia Turmo Vidal, Ana Tajadura-Jiménez, Tomás Bonino Covas, and Elena Márquez Segura. 2024. Source Code: Movits: A Minimalist Toolkit for Embodied Sketching. https://doi.org/10.5281/zenodo.11121429
194.
Katharina Vogt, David Pirrò, Ingo Kobenz, Robert Höldrich, and Gerhard Eckel. 2010. PhysioSonic - Evaluated Movement Sonification As Auditory Feedback in Physiotherapy. In Proceedings of the 6th International Conference on Auditory Display (CMMR/ICAD’09), 103–120. https://doi.org/10.1007/978-3-642-12439-6_6
195.
Annika Waern and Jon Back. 2017. Activity as the Ultimate Particular of Interaction Design. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI ’17), 3390–3402. https://doi.org/10.1145/3025453.3025990
196.
Annika Waern, Paulina Rajkowska, Karin B. Johansson, Jon Back, Jocelyn Spence, and Anders Sundnes Løvlie. 2020. Sensitizing Scenarios: Sensitizing Designer Teams to Theory. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (CHI ’20), 1–13. https://doi.org/10.1145/3313831.3376620
197.
Annika Waern, Alessandra Semeraro, Nikolay Georgiev, Ruochen Wang, Andreas Bergqvist, Jon Back, Shuang Feng, Karan Manjunath, and Laia Turmo Vidal. 2021. Moving Embodied Design Education Online: Experiences from a Course in Embodied Interaction during the COVID-19 Pandemic. In Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems (CHI EA ’21), 1–5. https://doi.org/10.1145/3411763.3451787
198.
Isaac Wallis, Todd Ingalls, Thanassis Rikakis, Loren Olsen, Yinpeng Chen, Weiwei Xu, and Hari Sundaram. 2007. Real-Time Sonification of Movement for an Immersive Stroke Rehabilitation Environment.
199.
Alf Inge Wang, Kristoffer Hagen, Torbjørn Høivik, and Gaute Meek Olsen. 2018. Evaluation of the Game Exermon – A Strength Exergame Inspired by Pokémon Go. In Advances in Computer Entertainment Technology, 384–405. https://doi.org/10.1007/978-3-319-76270-8_27
200.
Joris Weijdom. 2022. Performative prototyping in collaborative mixed reality environments: An embodied design method for ideation and development in virtual reality. In Sixteenth International Conference on Tangible, Embedded, and Embodied Interaction (TEI ’22), 1–13. https://doi.org/10.1145/3490149.3501316
201.
Danielle Wilde, Anna Vallgårda, and Oscar Tomico. 2017. Embodied Design Ideation Methods: Analysing the Power of Estrangement. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI ’17), 5158–5170. https://doi.org/10.1145/3025453.3025873
202.
Charles Windlin, Kristina Höök, and Jarmo Laaksolahti. 2022. SKETCHING SOMA BITS. In Designing Interactive Systems Conference (DIS ’22), 1758–1772. https://doi.org/10.1145/3532106.3533510
203.
Charles Windlin, Anna Ståhl, Pedro Sanches, Vasiliki Tsaknaki, Pavel Karpashevich, Madeline Balaam, and Kristina Höök. 2019. Soma Bits: Mediating technology to orchestrate bodily experiences. figshare. https://doi.org/10.6084/M9.FIGSHARE.7855799.V2
204.
Mikołaj P. Woźniak, Julia Dominiak, Michał Pieprzowski, Piotr Ładoński, Krzysztof Grudzień, Lars Lischke, Andrzej Romanowski, and Paweł W. Woźniak. 2020. Subtletee: Augmenting Posture Awareness for Beginner Golfers. Proc. ACM Hum.-Comput. Interact. 4, ISS: 204:1–204:24. https://doi.org/10.1145/3427332
205.
Will F. W. Wu, Jared M. Porter, and Lee E. Brown. 2012. Effect of Attentional Focus Strategies on Peak Force and Performance in the Standing Long Jump. Journal of Strength and Conditioning Research 26, 5: 1226–1231. https://doi.org/10.1519/JSC.0b013e318231ab61
206.
Gabriele Wulf, Suzete Chiviacowsky, Eduardo Schiller, and Luciana Toaldo Gentilini Ávila. 2010. Frequent External-Focus Feedback Enhances Motor Learning. Frontiers in Psychology 1. https://doi.org/10.3389/fpsyg.2010.00190
207.
Gabriele Wulf, Markus Höß, and Wolfgang Prinz. 1998. Instructions for Motor Learning: Differential Effects of Internal Versus External Focus of Attention. Journal of Motor Behavior 30, 2: 169–179. https://doi.org/10.1080/00222899809601334
208.
Peixuan Xiong, Yukai Zhang, Nandi Zhang, Shihan Fu, Xin Li, Yadan Zheng, Jinni Zhou, Xiquan Hu, and Mingming Fan. 2024. To Reach the Unreachable: Exploring the Potential of VR Hand Redirection for Upper Limb Rehabilitation. In Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems (CHI ’24), 1–11. https://doi.org/10.1145/3613904.3642912
209.
Rayoung Yang, Eunice Shin, Mark W. Newman, and Mark S. Ackerman. 2015. When fitness trackers don’t ’fit’: End-user difficulties in the assessment of personal tracking device accuracy. In Proceedings of the 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp ’15), 623–634. https://doi.org/10.1145/2750858.2804269
210.
Mehdi Zarghami, Esmaeel Saemi, and Islam Fathi. 2012. External focus of attention enhances discus throwing performance. Kinesiology 44, 1: 47–51.
211.
Ran Zhou, Zachary Schwemler, Akshay Baweja, Harpreet Sareen, Casey Lee Hunt, and Daniel Leithinger. 2023. TactorBots: A Haptic Design Toolkit for Out-of-lab Exploration of Emotional Robotic Touch. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (CHI ’23), 1–19. https://doi.org/10.1145/3544548.3580799
212.
Zhuoming Zhou, Elena Márquez Segura, Jared Duval, Michael John, and Katherine Isbister. 2019. Astaire: A Collaborative Mixed Reality Dance Game for Collocated Players. In Proceedings of the Annual Symposium on Computer-Human Interaction in Play (CHI PLAY ’19), 5–18. https://doi.org/10.1145/3311350.3347152
213.
John Zimmerman, Jodi Forlizzi, and Shelley Evenson. 2007. Research through design as a method for interaction design research in HCI. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’07), 493–502. https://doi.org/10.1145/1240624.1240704
214.
John Zimmerman, Erik Stolterman, and Jodi Forlizzi. 2010. An analysis and critique of Research through Design: Towards a formalization of a research approach. In Proceedings of the 8th ACM Conference on Designing Interactive Systems (DIS ’10), 310–319. https://doi.org/10.1145/1858171.1858228