Archive for calls, 2017


[ecrea] conference: Big Video Sprint 2017

Fri Feb 10 06:33:46 GMT 2017




Status: Call for Papers (CfP)
Conference:
Big Video Sprint 2017
22.11.2017-24.11.2017
Aalborg University

Keynote speakers:
- Anne Harris, Monash University, Australia
- Robert Willim, Lund University, Sweden
- Adam Fouse, Aptima, USA
- Paul McIlvenny & Jacob Davidsen, Aalborg University

This mini-conference is targeted at practitioners of qualitative video ethnography and ethnomethodological conversation analysis who are exploring new ways of collecting time-based records of social, material and embodied practices as live-action events in real or virtual worlds. They may also be critically revisiting established methods. This approach will most likely involve crafting and sharing video data archives, as well as transcribing and visualising enhanced video data, in order to collect analytically adequate recordings and to do analysis in new ways. We feel that our collective research endeavour is at a critical juncture: both a leap forward, driven by new technologies that help collect richer and enhanced moving image and sound recordings in a variety of novel settings, and a critical reflection on the nature of video data and the praxiology of doing video-based research.

With the complexity of video recording scenarios, and the increasing use of computational tools and resources for qualitative analysis, we can see the beginnings of a BIG VIDEO programme. We use this glib term to suggest an alternative to the hype about quantitative big data analytics. Big can mean both large datasets and more than just video. Thus, we argue that there is a need to develop an infrastructure for qualitative video analysis in four key areas: 1) capture, storage, archiving and access of enhanced digital video; 2) visualisation, transformation and presentation; 3) collaboration and sharing; and 4) software tools to support analysis. The mini-conference is organised as a series of keynotes, panel discussions, enhanced data sessions and method sprints, aiming to elevate and ignite discussions of the future of Big Video.

With the development of new video recording and sensing technologies, fresh opportunities arise for data collection and analysis within the discourse and interaction studies paradigm. Technologies with potential include high-resolution and high-speed video cameras, 360° cameras, stereoscopic 3D cameras, thermal cameras, virtual cameras, spatial and ambisonic audio, video stitching and annotation, GPS and local positioning systems, lightfields and 3D scanning, mobile biosensing data (e.g. heart rate, galvanic skin response and EEG), motion/performance capture and mobile eye tracking - to name just a few. The opportunities these afford should be actively and critically explored. We therefore envisage that the following themes will be in focus at this mini-conference:
- Enhanced qualitative video data collection methods
- Complementary use of sensory data
- Complementary use of spatial and environmental sensing data
- Autonomous and manual drone video
- Critical reflections on the ‘camera’, the ‘microphone’, the ‘frame’ and the ‘shot’ in data capture
- Virtualisation of capture methods
- ‘Found video’ and public video data archives
- Re-sensing video and audio, e.g. haptic visuality
- Video data collection in extreme situations and complex settings
- Footprint recordings, omniscient frames and six degrees of freedom
- Virtual immersion and stereoscopic/holographic realism
- Algorithmic normativity and bias in video recording software and hardware
- Developing and standardising transcription conventions for complex qualitative data sets
- Transcription software development
- Novel ways to visualise and analyse complex qualitative data sets
- Best practice for digitally anonymising voices, bodies, semiotic landscapes, settings and objects
- Enhanced ‘data sessions’
- Inhabiting data with augmented and virtual reality
- Re-enactment, plausibility and epistemic adequacy
- Modding game engines, APIs, VSTs, CODECs, platforms and apps for live data capture and editing (DAWs and NLEs)
- Archiving, rendering and sharing video data corpora beyond the cloud, e.g. fogs
- Collaborative video repository and subversion issues
- Design of software tools and practices to support collaboration on video data annotation and analysis
- New modes for dissemination, presentation and publication of data and analysis
- Aesthetics of video research methods
- Emerging ethical and legacy issues
- Theoretical and methodological reflections on data collection and transcription practices
- Practical, methodological and theoretical perspectives on the relations between the concepts of the ‘Event’, the ‘Record’, ‘Data’, the ‘Transcript’, the ‘Analysis’ and the ‘Publication’

Please submit an abstract of 500 words to be considered for inclusion on the programme and to secure your participation in the conference. The deadline is 30 May 2017.

Contact person: Paul McIlvenny
email: (bigvideo2017 /at/ hum.aau.dk)


---------------
The COMMLIST
---------------
This mailing list is a free service offered by Nico Carpentier. Please
use it responsibly and wisely.
--
To subscribe or unsubscribe, please visit http://commlist.org/
--
Before sending a posting request, please always read the guidelines at
http://commlist.org/
--
To contact the mailing list manager:
Email: (nico.carpentier /at/ vub.ac.be)
URL: http://nicocarpentier.net
---------------

