Lecture capture with real-time rearrangement of visual elements: Impact on student performance

P. T. Yu, B. Y. Wang, M. H. Su

Research output: Contribution to journal › Article › peer-review

6 Citations (Scopus)

Abstract

The primary goal of this study is to create and test a lecture-capture system that can rearrange visual elements while recording is still taking place, in such a way that student performance can be positively influenced. The system we have devised integrates and rearranges multimedia sources, including the learning content and images of the instructor and students, into lecture videos that are embedded in a website for students to review after school. The present study employed a two-group experimental design, with 153 participants (145 females and 8 males) making up an experimental group whose lecture courses were recorded using the new lecture-capture system, and 149 participants (140 females and 9 males) forming a control group whose lectures were recorded by traditional means. All participants were college freshmen studying Introduction to Computer and Information Science in one of six classes, and were randomly assigned to one of the two groups. The participants' midterm and final examination scores were collected as indicators of their academic performance, with their mathematics entrance scores used as a pre-test. The findings obtained from analysis of covariance (ANCOVA) suggest that appropriate rearrangement of visual elements in lecture videos can significantly improve students' learning performance.

Original language: English
Pages (from-to): 655-670
Number of pages: 16
Journal: Journal of Computer Assisted Learning
Volume: 31
Issue number: 6
DOIs
Publication status: Published - 2015 Dec 1

All Science Journal Classification (ASJC) codes

  • Education
  • Computer Science Applications

