This mini-conference is aimed at practitioners of qualitative video ethnography and ethnomethodological conversation analysis who are exploring new ways of collecting time-based records of social, material and embodied practices as live-action events in real or virtual worlds, or who are critically revisiting established methods. This work will most likely involve crafting and sharing video data archives, and transcribing and visualising enhanced video data, in order to collect analytically adequate recordings and to do analysis in new ways. We feel that our collective research endeavour is at a critical juncture: both a leap forward, driven by new technologies that help collect richer and enhanced moving image and sound recordings in a variety of novel settings, and a critical reflection on the nature of video data and the praxiology of doing video-based research.
With the complexity of video recording scenarios, and the increasing use of computational tools and resources for qualitative analysis, we can see the beginnings of a BIG VIDEO programme. We use this glib term to suggest an alternative to the hype about quantitative big data analytics. Big can mean both large datasets and more than just video. Thus, we argue that there is a need to develop an infrastructure for qualitative video analysis in four key areas: 1) capture, storage, archiving and access of enhanced digital video; 2) visualisation, transformation and presentation; 3) collaboration and sharing; and 4) software tools to support analysis. The mini-conference is organised as a series of keynotes, panel discussions, enhanced data sessions and method sprints aiming to elevate and ignite discussions of the future of Big Video.
With the development of new video recording and sensing technologies, fresh opportunities arise for data collection and analysis within the discourse and interaction studies paradigm. Technologies with potential include high-resolution and high-speed video cameras, 360° cameras, stereoscopic 3D cameras, thermal cameras, virtual cameras, spatial and ambisonic audio, video stitching and annotation, GPS and local positioning systems, lightfields and 3D scanning, mobile biosensing data (e.g. heart rate, galvanic skin response and EEG), motion/performance capture and mobile eye tracking, to name just a few. The opportunities these afford should be actively and critically explored. Indeed, the constitutive role of technology in shaping our understanding of the world has been well documented by many scholars, such as Bruno Latour and Karen Barad, and the constitutive role of traditional recording technology and the metaphor of the ‘camera’ in shaping our understanding of the world, including talk and social interaction, has been well documented by scholars such as Douglas Macbeth and Edward Branigan, among others. It is no less true that new digital camera recording and audiovisual display technologies (hardware and software) also shape what we see, hear and sense. We must critically scrutinise the limitations and dangers of what can easily be used as a surveillance technology, as well as examine the analytical affordances of sousveillance technology.
We contend that today there is a set of paradigm shifts, on different time-scales, that are indicative of this critical juncture. They involve a shift in methods:
- From analogue to digital: e.g. computationally intensive processing;
- From singular to plural: e.g. multiple recording devices, such as cameras and microphones;
- From sound as secondary to sound as covalent: e.g. in-built microphones versus spatial audio;
- From frame to field of vision: e.g. 16:9 versus 360°;
- From flat to depth: e.g. 2D versus stereoscopic 3D;
- From spectator to POV: e.g. cinema versus VR;
- From audiovisual to sensory: e.g. haptics;
- From manual to autonomous: e.g. drone tracking.
We therefore envisage that the following themes will be in focus at this mini-conference:
- Enhanced qualitative video data collection methods
- Complementary use of sensory data
- Complementary use of spatial and environmental sensing data
- Autonomous and manual drone video
- Critical reflections on the ‘camera’, the ‘microphone’, the 'frame' and the 'shot' in data capture
- Virtualisation of capture methods
- ‘Found video’ and public video data archives
- Re-sensing video and audio, e.g. haptic visuality
- Video data collection in extreme situations and complex settings
- Footprint recordings, omniscient frames and six degrees of freedom
- Virtual immersion and stereoscopic/holographic realism
- Algorithmic normativity and bias in video recording software and hardware
- Developing and standardising transcription conventions for complex qualitative data sets
- Transcription software development
- Novel ways to visualise and analyse complex qualitative data sets
- Best practice for digitally anonymising voices, bodies, semiotic landscapes, settings and objects
- Enhanced ‘data sessions’
- Inhabiting data with augmented and virtual reality
- Re-enactment, plausibility and epistemic adequacy
- Modding game engines, APIs, VSTs, CODECs, platforms and apps for live data capture and editing (DAWs and NLEs)
- Archiving, rendering and sharing video data corpora beyond the cloud, e.g. fog computing
- Collaborative video repository and subversion issues
- Design of software tools and practices to support collaboration on video data annotation and analysis
- New modes for dissemination, presentation and publication of data and analysis
- Aesthetics of video research methods
- Emerging ethical and legal issues
- Theoretical and methodological reflections on data collection and transcription practices
- Practical, methodological and theoretical perspectives on the relations between the concepts of the ‘Event’, the ‘Record’, ‘Data’, the ‘Transcript’, the ‘Analysis’ and the ‘Publication’
The conference is supported by the national Digital Humanities Lab 1.0 infrastructure programme in Denmark, with assistance from the Department of Communication and Psychology.