Voices of XR: The Ferryman Collective

Ferryman Collective logo. Shows a grim reaper standing and paddling in a boat.

Live Immersive VR Experiences

The Ferryman Collective is a passionate group of creative and technical professionals dedicated to bringing groundbreaking live, immersive narratives to life in virtual reality. The company was born of a desire to explore what storytelling, interaction, immersion, and entertainment can be rather than what they have been, and to define a new generation of media built on live performance in XR. They are a new kind of virtual reality studio, paving the way forward in this brave new world. With a multidisciplinary team of talents, Ferryman Collective aims to take you on unprecedented journeys into fantastic worlds and stories previously thought to be the stuff of dreams.

Where: Zoom
When: Monday, 2/20/23 from 10:25 to 11:40am EST
Register: bit.ly/VoicesXRFerryman

speaker Deirdre Lyons wearing a VR headset and smiling.

Deirdre V. Lyons is a Los Angeles-based producer and performer who has participated in over 75 film and theatrical productions throughout the West Coast. Her production experience includes theater, film, webisodes, and now virtual reality theater. Her VR experience began when she performed in two 180° films, The Willows and Freakin’ Weekend. She was a cast member of the award-winning productions The Under Presents and The Under Presents: Tempest from Los Angeles indie studio Tender Claws. She is a co-founder of Ferryman Collective, producing and performing in its first three productions. Gumball Dreams is their fourth production and her directorial debut. She is a sought-after speaker and an occasional lecturer at Chapman University.

speaker Stephen Butchko wearing a VR headset propped on his forehead.

Stephen Butchko received his Bachelor of Arts degree in theatre from Western Washington University. After moving to Los Angeles, he began producing and performing in independent theatrical and motion picture productions with his wife, Deirdre V. Lyons. As a founding member of Ferryman Collective, Stephen has produced and performed in virtual reality with PARA, Krampusnacht, The Severance Theory: Welcome to Respite, and Ferryman’s current production, Gumball Dreams.

speaker Whitton Frank.
Photo by Wolf Marloh

Whitton Frank is a voice, film, television, and theater actor from Los Angeles. Recent VR work includes the award-winning shows The Severance Theory: Welcome to Respite and Gumball Dreams with Ferryman Collective, as well as their debut show PARA. She is a company member, producer, and performer with Ferryman Collective. She was also an immersive actor in the groundbreaking Tender Claws VR worlds The Under Presents and The Under Presents: Tempest. She is a graduate of Carnegie Mellon University and the London Academy of Music and Dramatic Art and is proud to have worked with many amazing theater companies in Los Angeles. In addition, she is an audiobook narrator and can be found on Audible and other sites. In her copious free time she moonlights as a DJ specializing in vintage jazz, blues, and soul.

Recording

The Voices of XR speaker series is made possible by Kathy McMorran Murray and the National Science Foundation (NSF) Research Traineeship (NRT) program as part of the Interdisciplinary Graduate Training in the Science, Technology, and Applications of Augmented and Virtual Reality at the University of Rochester (#1922591).


a person wearing a hijab and a virtual reality headset reaching out to an orb with text that reads, "Voices of XR."

Voices of XR is a Studio X speaker series. Speakers are scholars, artists, and extended reality professionals who discuss their work with immersive technologies across disciplines and industries. All talks are free and open to the general public.

Voices of XR: Kaan Akşit

speaker Kaan Aksit.

Could holographic displays be the key to achieving realism?

Co-presented by Studio X, Goergen Institute for Data Science, and the Institute of Optics

Holographic displays have long been debated as the display technology that could bring true three-dimensionality, accurate color reproduction, and retinal resolutions to our screens, including augmented reality glasses and virtual reality headsets. However, these promises have yet to be delivered. In this talk, Dr. Akşit will introduce his group’s research toward delivering on them and describe how they aim to provide life-like images by bridging the gap between the human visual system (HVS) and computer-generated holography (CGH).

The research he will describe in this context spans from their open-source toolkit and new rendering algorithms to novel types of display hardware.
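For readers curious what computer-generated holography actually computes, here is a minimal, illustrative sketch of the classic Gerchberg-Saxton iteration, which searches for a phase-only hologram whose far-field propagation reproduces a target image. This is a generic textbook example, not Dr. Akşit's method or the Odak toolkit's API.

```python
import numpy as np

def gerchberg_saxton(target_amplitude, iterations=50):
    """Phase-only hologram via Gerchberg-Saxton with FFT (far-field) propagation."""
    # Start from a random phase guess at the hologram plane.
    phase = np.random.uniform(0, 2 * np.pi, target_amplitude.shape)
    for _ in range(iterations):
        # Propagate a unit-amplitude, phase-only field to the image plane.
        field = np.exp(1j * phase)
        image_field = np.fft.fftshift(np.fft.fft2(field))
        # Keep the propagated phase but impose the target amplitude.
        image_field = target_amplitude * np.exp(1j * np.angle(image_field))
        # Propagate back and keep only the phase (phase-only constraint).
        field = np.fft.ifft2(np.fft.ifftshift(image_field))
        phase = np.angle(field)
    return phase

# Example target: a bright square on a dark background.
target = np.zeros((256, 256))
target[96:160, 96:160] = 1.0
hologram_phase = gerchberg_saxton(np.sqrt(target))
```

Research like the work described above goes far beyond this simple loop, pairing differentiable rendering and novel display hardware with models of human visual perception.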

Where: Zoom
When: Monday, 2/6/23 from 3:30 – 4:30pm EST
Register: bit.ly/VoicesXRKaanAksit

Relevant Links

Course on Computer-Generated Holography and Human-Visual Perception
Our differentiable toolkit, Odak
Realistic Defocus Blur for Multiplane Computer-Generated Holography
HoloBeam: Paper-Thin Near-Eye Displays

Kaan Akşit is an Associate Professor in the Computer Science department at University College London, where he leads the Computational Light Laboratory. Kaan received his Ph.D. in electrical engineering from Koç University, Türkiye, in 2014; his M.Sc. in electrical power engineering from RWTH Aachen University, Germany, in 2010; and his B.S. in electrical engineering from Istanbul Technical University, Türkiye, in 2007. Kaan researches the intersection of light and computation, including computational approaches in imaging, graphics, fabrication, and displays. His research is widely known in the optics and graphics communities for its contributions to display technologies dedicated to virtual reality, augmented reality, and three-dimensional displays with and without glasses. He worked as a research intern at Philips Research, the Netherlands, and Disney Research, Switzerland, in 2009 and 2013, respectively, and was a research scientist at NVIDIA, USA, between 2014 and 2020. He is the recipient of Emerging Technologies Best in Show awards at SIGGRAPH 2018 and SIGGRAPH 2019, the DCEXPO special prize at SIGGRAPH 2017, best paper awards at IEEE VR 2017 and ISMAR 2018, and best paper nominations at IEEE VR 2019 and IEEE VR 2021.

Recording

The Voices of XR speaker series is made possible by Kathy McMorran Murray and the National Science Foundation (NSF) Research Traineeship (NRT) program as part of the Interdisciplinary Graduate Training in the Science, Technology, and Applications of Augmented and Virtual Reality at the University of Rochester (#1922591).


person wearing a hijab and a virtual reality headset holding a virtual orb with text that reads "Voices of XR."

Voices of XR is a Studio X speaker series. Speakers are scholars, artists, and extended reality professionals who discuss their work with immersive technologies across disciplines and industries. All talks are free and open to the general public.

Metaverse Reading Group

illustration of a woman wearing a VR headset reading a book.

Join Studio X for a casual reading group this spring in which we will discuss the metaverse and try out virtual reality (VR) experiences together. We’ll be reading The Metaverse Handbook to gain familiarity with the concept. Funds have generously been made available through the Humanities Center to purchase books for participants.

The group will meet biweekly from 12 to 1pm on Fridays beginning February 3rd. If you are interested in joining the group, please fill out this short form.

New grant will use virtual reality to understand trauma and the brain

A ball of energy with electricity beaming all over the place.

Understanding how experience and exposure to trauma changes the brain could improve diagnosis and targeted care for conditions like anxiety and post-traumatic stress disorder (PTSD). Benjamin Suarez-Jimenez, Ph.D., assistant professor of Neuroscience, has been studying this topic for the past several years and was awarded a new $3.5 million grant to use virtual reality and MRI to look into the circuitry of threat, reward, and cognitive mapping in PTSD, trauma, and resilience.

For the next five years, this funding from the National Institute of Mental Health will allow the ZVR lab to build upon work investigating the brain areas that build spatial maps, specifically how they discriminate between areas of an environment associated with different emotions. Suarez-Jimenez’s most recent research identified changes in the salience network – a mechanism in the brain used for learning and survival – in people exposed to trauma (with and without psychopathologies, including PTSD, depression, and anxiety). His prior research revealed that people with anxiety have increased insula and dorsomedial prefrontal cortex activation, indicating that their brains were associating a known safe area with danger or threat.

“This project, which the R01 will support, will probe whether the neural processes we have identified in the past are specific to threat or if they expand to reward processing,” Suarez-Jimenez said. “We are also looking at how attention allocation to certain visual cues in the virtual reality tasks changes from pre- to post-task experience. We are hoping that understanding these brain processes can help us identify better ways to diagnose PTSD and to improve treatment.”

Suarez-Jimenez came to the University in January 2021. He is an active member of the Neuroscience Diversity Commission and has served as a mentor for the NEUROCITY program.

Learn more.

Seed funding reflects how data science, AR/VR transform research at Rochester

Professor Mujdat Cetin standing in front of Wegmans Hall. (University of Rochester photo / Bob Marcotte)

The University’s Goergen Institute for Data Science supports collaborative projects across all disciplines.


Ten projects supported with seed funding from the Goergen Institute for Data Science this year demonstrate how machine learning, artificial intelligence (AI), and augmented and virtual reality (AR/VR) are transforming the way University of Rochester researchers—across all disciplines—address challenging problems.

“I’m very excited about the wide range of collaborative projects we are able to support this year,” says Mujdat Cetin, the Robin and Tim Wentworth Director of the institute. “These projects tackle important and timely problems on data science methods and applications, and I am confident they will lead to significant research contributions and attract external funding.”

The awards, approximately $20,000 each, help researchers generate sufficient proof-of-concept findings to then attract major external funding.

This year’s projects involve collaborations among engineers, computer scientists, a historian, a biostatistician, and experts in brain and cognitive sciences, earth and environmental science, and palliative care. Their projects include a totally new kind of computing platform, new virtual reality technologies to improve doctor-patient conversations and help people overcome color vision deficiency, and machine learning techniques to make it easier for people to add music to their videos and to enhance AR/VR immersive experiences based on the unique geometry of each user’s anatomy.

The 2022–23 funded projects and their principal investigators are:

  • Ising Boltzmann Substrate for Energy-Based Models
    Co-PIs: Michael Huang, professor of electrical and computer engineering and of computer science, and Gonzalo Mateos, associate professor of electrical and computer engineering and of computer science and the Asaro Biggar Family Fellow in Data Science
  • A Data-Driven, Virtual Reality-based Approach to Enhance Deficient Color Vision
    Co-PIs: Yuhao Zhu, assistant professor of computer science, and Gaurav Sharma, professor of electrical and computer engineering, of computer science, and of biostatistics and computational biology
  • Audiovisual Integration in Virtual Reality Renderings of Real Physical Spaces
    Co-PIs: Duje Tadin, professor and chair of brain and cognitive sciences and professor of ophthalmology and of neuroscience; Ming-Lun Lee, associate professor of electrical and computer engineering; and Michael Jarvis, associate professor of history
  • Personalized Immersive Spatial Audio with Physics Informed Neural Field
    Co-PIs: Zhiyao Duan, associate professor of electrical and computer engineering and of computer science, and Mark Bocko, Distinguished Professor of Electrical and Computer Engineering and professor of physics and astronomy
  • Computational Earth Imaging with Machine Learning
    Co-PIs: Tolulope Olugboji, assistant professor of earth and environmental sciences, and Mujdat Cetin, professor of electrical and computer engineering and of computer science, and the Robin and Tim Wentworth Director of the Goergen Institute for Data Science
  • Improving Deconvolution Estimates through Bayesian Shrinkage
    PI: Matthew McCall, associate professor of biostatistics
  • Building a Multi-Step Commonsense Reasoning System for Story Understanding
    Co-PIs: Zhen Bai, assistant professor of computer science, and Lenhart Schubert, professor of computer science
  • Versatile and Customizable Virtual Patients to Improve Doctor-Patient Communication
    Co-PIs: Ehsan Hoque, associate professor of computer science, and Ronald Epstein, professor of family medicine and palliative care
  • Machine Learning Assisted Femtosecond Laser Fabrication of Efficient Solar Absorbers
    Co-PIs: Chunlei Guo, professor of optics, and Jiebo Luo, Albert Arendt Hopeman Professor of Engineering
  • Rhythm-Aware and Emotion-Aware Video Background Music Generation
    PI: Jiebo Luo, Albert Arendt Hopeman Professor of Engineering

Read the full story.

Beat Saber Battle 2022

promotional image for beat saber battle. Shows two light sabers intersecting.

Do you like dancing to fun music, light sabers, and virtual reality? We have the perfect competition for you! Compete against your peers, and if you dominate, you will be crowned Beat Saber champion. If you are the ultimate, numero uno, top dog Beat Saberer, you will win the fanciest of prizes. The kickoff will be during a Drop-In Friday event on Friday, November 11th at 1pm.


Where: Studio X, Carlson Library First Floor
Kick Off: Friday, November 11 at 1pm

But what’s Beat Saber, you say? Only the most popular VR game of all time! Beat Saber is a VR rhythm game in which you slash floating boxes with gigantic light sabers as they fly toward you to the beat of the music.

The Rules

  • Participants must be affiliated with the University of Rochester; students, faculty, and staff are all welcome to participate.
  • Participants must show a UR ID with the name under which they registered.
  • Participants must complete all three rounds and final battle to be eligible for prizes.
  • There will be two competition brackets: (1) the easy/normal levels and (2) the hard, expert, and expert+ levels. Once a participant selects a bracket, they must stay in that bracket throughout the entire competition.
  • Once participants choose their bracket, they may select any level within it.
  • Participants must complete each round in Studio X.
  • Scores must be verified by a Studio X staff member.
  • Participants will get three attempts for each of the three rounds but not for the final battle.
  • Participants are not allowed to use score multipliers.
  • The songs will be chosen by Studio X staff and will be presented upon arrival for each round.
  • Once you have your score, you will be added to our scoreboard.
  • At the end of each round, some of the lowest-scoring participants will be eliminated; how many depends on the total number of participants.

The Structure

The competition is divided into two brackets:

BEGINNER

For players who are new to the game, competing on the easy/normal levels.

Winning prize: Projector!

EXPERIENCED

For players who have experience with the game, competing on the hard, expert, and expert+ levels.

Winning prize: Meta Quest 2 VR headset!

You can choose which bracket you would like to participate in.

The Schedule

Kick Off: Friday, 11/11 @1pm in Studio X
Round 1: Participants must complete this round by 11/18.
Round 2: Participants must complete this round by 11/22.
Round 3: Participants must complete this round by 12/2.
Final Showdown: Three finalists from each bracket will participate in final competition event on Friday, 12/9 @1pm.


promotional graphic for drop-in fridays at Studio X with geometric design. Reads "Drop-in Fridays. Fall 2022 series. Join us Fridays at 1pm for informal XR talks, tech demos, workshops, and more."

Drop by Studio X every Friday at 1pm for informal workshops, talks, demos, and more! View the full schedule.

XR Game Night

promotional image for XR game night. Shows people in VR headsets.

Take a break from studying and unwind at XR Game Night at Studio X! The night will begin with a brief headset tutorial, and you can reserve the headsets after the event to keep playing later. We will have snacks, beats, and games to relax, have fun, and vibe!


Join Studio X, UR’s hub for immersive technologies, and learn more about the digital world of extended reality (XR). All levels welcome. No experience necessary!

Instructor: Nefle Nesli Oruç
Where: Studio X, Carlson Library First Floor
When: Tuesday, December 6th @7:30pm
Register: libcal.lib.rochester.edu/event/9662693

Make Your Own AR Mini Driving Game

illustration of a person using a cell phone that shows an augmented reality car.

Learn how to create your own AR mini driving game with Apple’s ARKit, a framework that makes it easy to create all kinds of AR experiences on iPhone and iPad. In this workshop, participants will use Reality Composer, a tool that works alongside ARKit, to build simple 3D scenes, add physics and behaviors, and deploy their creation on an iPhone or iPad.

Join Studio X, UR’s hub for immersive technologies, and learn more about the digital world of extended reality (XR). All levels welcome. No experience necessary!

Note: In order to participate, you will need to complete the pre-workshop instructions, which will be sent by email prior to the event. Need assistance with this process? Ask for help on the Studio X Discord (Quick Questions Channel). 

Instructor: Hao Zeng
Where: Studio X, Carlson Library First Floor
When: Tuesday, November 15th from 6 to 7:30pm
Register: libcal.lib.rochester.edu/event/9662565

XR Research in the Summer

photogrammetry model of the mural in Kodak Hall.

Studio X places a strong emphasis on fostering cross-disciplinary collaboration in extended reality (XR). Over 50 researchers across the University of Rochester use XR technology in their research and teaching, and many come to Studio X for consultation and advice on program development or engineering. As an XR Specialist at Studio X, I had the opportunity to work on two XR-related research projects this past summer, one in collaboration with the Brain and Cognitive Sciences Department (BCS) and the other with the Computer Science Department (CS). These projects were supported by a Discover Grant through the Office of Undergraduate Research, which supports immersive, full-time summer research experiences for undergraduate students at the UR.

The research with BCS involved digitizing Kodak Hall at the Eastman School of Music and bringing it into VR. The result will be used to provide a more realistic environment for user testing, helping researchers better study how humans combine and process light and sound. The visit to Kodak Hall was scheduled back in March, and much preparation went into it beforehand, including figuring out the power supply and cable management, stage arrangement, clearance, and more. We also discussed which techniques to use to scan and capture the hall. Three object-scanning techniques were tested before and during the visit: photogrammetry, 360-degree imaging, and time-of-flight (ToF).

Photogrammetry creates 3D models of physical objects by processing photographic images or video recordings. By taking images of an object from many different angles and processing them with software like Agisoft Metashape, the algorithm can locate and map key points across multiple images and combine them into a 3D model. I first learned about this technique by attending a photogrammetry workshop at Studio X led by Professor Michael Jarvis. It proved very helpful for this research, since it captured fine details of the mural in Kodak Hall where other techniques had failed.
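As a rough illustration of the first step in such a pipeline, the sketch below detects and matches keypoints between two overlapping photos using OpenCV. This is only a simplified stand-in for what Agisoft Metashape automates end to end, and the file names are placeholders rather than files from the project.

```python
import cv2

# Load two overlapping photos of the same object (placeholder file names).
img1 = cv2.imread("mural_view_01.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("mural_view_02.jpg", cv2.IMREAD_GRAYSCALE)

# Detect keypoints and compute descriptors with ORB, a fast, free detector.
orb = cv2.ORB_create(nfeatures=5000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Brute-force matching with cross-check keeps only mutual best matches.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(f"{len(matches)} candidate correspondences between the two views")

# Downstream (not shown): estimate camera poses from these correspondences,
# triangulate 3D points, then densify and mesh the resulting point cloud.
```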

photogrammetry model of the mural in Kodak Hall.
Photogrammetry model of the mural in Kodak Hall

A 360 image, as its name suggests, is a 360-degree panoramic image taken from a fixed location. With the Insta360 camera borrowed from Studio X, a capture session requires almost no setup and can be quickly previewed using the companion app on a phone or other smart device.

360 image of Kodak Hall, captured from the stage.
360 image of Kodak Hall, captured from the stage

The time-of-flight (ToF) technique emits light and measures how long the reflected light takes to travel back in order to obtain depth information. ToF hardware can easily be found on modern devices, such as iPhones and iPads with Face ID. I tested the ToF scanner on the iPad Pro at Studio X; it provides a great sense of spatial orientation and has a fairly short processing time.
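The depth arithmetic behind ToF is simple: the measured time covers the round trip to the surface and back, so the distance is half the product of that time and the speed of light. A tiny worked example (my own illustration, not the iPad's actual processing pipeline):

```python
# Time-of-flight depth from round-trip time:
# depth = (speed of light * elapsed time) / 2
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def tof_depth(round_trip_seconds: float) -> float:
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A reflection arriving ~20 nanoseconds after emission corresponds to a
# surface roughly 3 meters away.
print(f"{tof_depth(20e-9):.2f} m")  # ~3.00 m
```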

3D capture of Studio X from an iPad Pro.

We used the Faro laser scanner to get a scan with higher accuracy and resolution. Each scan took 20 minutes, and we conducted 8 scans to cover the entire hall. The result was a 20+ GB model with billions of points. To load the scene onto the Meta Quest 2 VR headset, we dramatically reduced the size and resolution of the model using tools such as gradual selection, Poisson distribution adjustments, and material painting. We also deleted excess points and replaced flat surfaces, such as the stage and mural, with higher-quality images. The end result is a good-looking model with decent detail at around 250 MB, light enough for the headset to run.
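To give a sense of how such a reduction can be scripted, here is a hedged sketch using the open-source Open3D library's voxel downsampling and outlier removal. This is only an illustration of the idea; the actual workflow used the Metashape-style tools described above, and the file names are placeholders.

```python
import open3d as o3d

# Load the dense laser scan (placeholder file name).
pcd = o3d.io.read_point_cloud("kodak_hall_scan.ply")
print(f"original points: {len(pcd.points):,}")

# Merge all points inside each 2 cm voxel into one representative point.
downsampled = pcd.voxel_down_sample(voxel_size=0.02)
print(f"downsampled points: {len(downsampled.points):,}")

# Drop sparse outliers left over from reflective or distant surfaces.
cleaned, _ = downsampled.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

# Save a lighter point cloud suitable for import into Unity for the headset.
o3d.io.write_point_cloud("kodak_hall_quest.ply", cleaned)
```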

partial 3D model of Kodak Hall.

The model was handed over to Shui’er Han from BCS as a Unity package; she will implement the audio recording and spatial visualization before conducting the user testing. It is amazing to see so many people bring their experience and knowledge together to make this cross-disciplinary project a reality. I would like to thank Dr. Duje Tadin, Shui’er Han, Professor Michael Jarvis, Dr. Emily Sherwood, Blair Tinker, Lisa Wright, Meaghan Moody, and many others who gave me this amazing opportunity to work on such fun research and helped me along the way. I can’t wait to see what they can achieve beyond this model and research project.


You can read more about this cross-disciplinary collaboration here.

Hao Zeng

XR Specialist