Yesterday I ran a video workshop for the ANU TechLauncher computer project students. Last year these were run in a purpose-built building of flat-floor classrooms, but due to COVID-19 all workshops are now online. One of the privileges of being a lecturer at a leading university is that you get to decide for yourself how you teach, as well as what you teach. Many advanced video technologies were suggested, but I decided to just use a Zoom video-conference, as it has mostly been working over the last few weeks, and it mostly worked this time.
I had prepared for the workshop with two computers and two Internet connections in my home office. Two colleagues were also authorized as alternate hosts, ready to take over if something went wrong. Everything was working fine up until the time of the workshop, when the cabled network connection stopped working. So I had to use WiFi, and at that point my wireless modem's connection to the mobile network also slowed down. But it was still fast enough for audio and screen sharing, which was all I needed, and the video was also okay.
One of the problems with a live online session is not knowing how well it is working. What I now realize I should have done (and will tell others to do) is log the second computer in as a participant, so I could see how the session looked to the audience.
However, my colleagues told me it was working fine. Most of the time screen sharing was used. This reduces the video of the presenter to postage-stamp size, just enough to reassure the viewer. One problem at my end was that I then could not see the participants or the text chat window, so I had to rely on colleagues to point out text questions.
I used the option of recording the video to the cloud (actually on the university's AARNet system). The video was recorded at 1920 × 1200 pixels (WUXGA resolution). I downloaded the hour of video (554 Mbytes, or about 10 Mbytes a minute), then uploaded it to the university's Echo 360 video system. The video was then converted to the more widely used 1080p (the 1920 × 1080 pixel HDTV format), which took several hours. Echo 360's own editing tools were then used to trim the video. The editing was quick, but it took several more hours for the video to be re-rendered. I then linked the video from the university's Moodle Learning Management System, and it loaded very quickly.
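Those figures work out to a surprisingly modest data rate, as screen sharing compresses well. A quick back-of-the-envelope check, using only the file size and duration above:

```python
# Rough data-rate check for the downloaded recording:
# 554 Mbytes for roughly an hour of video, as noted above.
size_mb = 554          # downloaded file size, in Mbytes
duration_min = 60      # approximate length of the recording, in minutes

mb_per_minute = size_mb / duration_min
mbit_per_second = size_mb * 8 / (duration_min * 60)

print(f"{mb_per_minute:.1f} Mbytes per minute")    # about 9.2
print(f"{mbit_per_second:.2f} Mbit per second")    # about 1.23
```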
It would give a better result if I could skip the step of converting from WUXGA to HDTV. As well as making for a cleaner recording, that should speed up the process. Perhaps I can set my monitor to HDTV resolution, so the recording is made at 1920 × 1080 in the first place.
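Another option, if the monitor can't easily be switched, might be to do the conversion locally before uploading, rather than waiting hours for the server-side render. A rough sketch using ffmpeg, called from Python (the file names are just examples, not my actual recordings):

```python
import subprocess

# Convert the downloaded WUXGA (1920 x 1200) recording to HDTV (1920 x 1080)
# locally, before uploading to Echo 360. Scaling changes the 16:10 picture
# to 16:9, so cropping (crop=1920:1080) may look better than scaling.
subprocess.run([
    "ffmpeg",
    "-i", "workshop-wuxga.mp4",    # downloaded Zoom cloud recording (example name)
    "-vf", "scale=1920:1080",      # resample to 1080p
    "-c:a", "copy",                # leave the audio untouched
    "workshop-1080p.mp4",          # file to upload to Echo 360 (example name)
], check=True)
```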
Also, I have been using an online tool to create video slideshows with a synthetic voice. It might be interesting to mix some of that content into the live videos. The idea would be to take the live recording and insert the slideshows at points where students had difficulty with a concept.
A fun idea would be to turn the live audio into text using a speech-to-text system, paste that text into the slideshow tool's automatic content search, and insert the resulting stock footage into the live video recording. This would emulate an approach I use when chairing a live or online session: when the speaker has no slides and the audience seems to be getting bored, I search the web for relevant content and put it up on the screen behind the speaker (usually without consulting them first). It would be possible to have a system which did this live during a video conference.
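As a very rough sketch of how such a system might fit together (the helper functions here are placeholders for whatever speech-to-text, keyword and stock-footage services were used, not real APIs):

```python
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class Insert:
    start_seconds: float   # where in the live recording to splice the clip
    clip_url: str          # stock footage found for that segment

def transcribe(audio_file: str) -> list[tuple[float, str]]:
    """Placeholder: return (start time, text) segments from a speech-to-text service."""
    raise NotImplementedError

def extract_keywords(text: str) -> list[str]:
    """Placeholder: pick out the main topic words from a transcript segment."""
    raise NotImplementedError

def search_stock_footage(keywords: list[str]) -> str | None:
    """Placeholder: return a URL for a clip matching the keywords, if any."""
    raise NotImplementedError

def plan_inserts(audio_file: str) -> list[Insert]:
    """List the stock clips to splice into the live recording, and where."""
    inserts = []
    for start, text in transcribe(audio_file):
        clip = search_stock_footage(extract_keywords(text))
        if clip:
            inserts.append(Insert(start, clip))
    return inserts
```

The output would just be a list of time codes and clip URLs; the actual splicing could be done by hand in a video editor, or automated later.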