Each semester, Writing 105: Uncertainty begins by collaborating with Studio X. Students are invited to explore uncertainty by testing ideas about how extended reality technologies impact our communication, our senses, and our understanding of the world around us. In fall 2023, Assistant Director Meg Moody presented an excellent “Intro to XR” workshop where students learned about the history and applications of these technologies. Following the presentation, the Studio X team helped students try out VR experiences, many for the first time. The offerings included an introductory experience, First Steps, a VR version of the trolley problem, and virtual roller coasters. Kate Phillips, the instructor for WRT 105: Uncertainty, asked students to put these experiences into conversation with work from philosophers, psychologists, and other researchers in order to develop arguments that respond to questions such as “What, if anything, can we learn about what is right and wrong through using VR?” and “Will the proliferation of virtual reality technologies create more or less access (to experiences, opportunities, etc.) and equity?” Students are invited to modify the questions or create their own, with the goal of developing unique arguments that contribute to our understanding of these technologies and the emerging debates about them. Please enjoy reading their thoughtful creations!
Starting as an XR Specialist at Studio X meant that I needed to learn about all the hardware and software that we lend to patrons and use ourselves in projects. We have such a wide variety of tools for all different purposes, so it was really cool to work through each device and really understand how it works and the best places and times to use it. For example, I had never used a 3D scanner like the EinScan. This device takes a large number of photos of an object while rotating it and testing it under different light patterns to determine texture. I was curious how much detail it could capture under different lighting conditions, especially since it’s usually recommended that the scanner be used in dark places. It turned out that the way the machine works with light allows it to recreate objects even in brighter locations, which is really impressive. I also took great interest in the Lumepad, which brought me back to the days of the Nintendo 3DS. The Lumepad sports a screen with a 3D effect that makes images and videos pop out. There was a suite of different 3D objects and a scene from a movie. This tablet brought a new layer of immersion to simple images that, I feel, often get overlooked. We were also donated a Kat C2 VR, essentially an omnidirectional treadmill for VR movement. It uses shoes that act like computer mice, each with a sensor on the bottom that tracks how your foot moves across the platform. It also harnesses you in so you stay in the same spot. The trouble was that the receiver was broken, so the treadmill wasn’t detected when we connected it to the computer. After weeks of conversing with the original owner and the team over at Kat, I eventually received a new receiver that worked perfectly!
I also ended up working with a number of patrons during this time, which showed me the power of learning through teaching, as I needed to figure out how to communicate with and direct someone who is in VR – someone seeing a simulation that I can’t view at the same time. This required a certain level of knowledge of the hardware and software and of any potential stumbling blocks for patrons. I was also able to participate in consultations with faculty and staff from the medical center as well as the chemistry and classics departments. This further proved to me how multidisciplinary XR truly is and how dedicated and invested our faculty are in figuring out new ways to teach their content. I consulted with Brandon Barnett and Bill Brennessel from the Department of Chemistry to determine the classroom usability of Nanome, a virtual reality 3D molecule visualizer. It was an absolute pleasure to work with them as we tested the compatibility of the app with their teaching material. The two main pieces of software that we work with in Studio X are the 3D modeling and animation software Blender and the game development software Unity. I came to Studio X with some experience in Blender but not as much in Unity, so I completed large portions of Unity’s Create with Code and Create with VR courses, which provided me with a solid baseline in the software before going into pre-college.
Studio X’s pre-college program “Dreaming New Realities: Interactive Storytelling with Extended Reality” invites high school students from across the country to create 3D models and to design and develop VR experiences. Preparing for pre-college took up most of our time this summer, as we ended up making numerous tweaks to our material and to the structure of the 10 days we had with the 17 students. In particular, I focused on some of our written materials and expanded the design prompts to include more open-ended projects with intentional storytelling elements. I used my knowledge and skills in UX/UI to create a project design document that was easy to read and understand, delving into font types, color, and opacity to impart intention and avoid confusion. During pre-college, I spearheaded the Intro to Blender and Character Creation workshops and taught the students how to edit primitive objects into models that resembled their previously conceived characters. There were so many interesting and unique designs this year, ranging from a troll to Sanji from One Piece to a cat witch.
During week 2 of the program, the students used Unity to create their VR experiences. This portion really taught me how powerful learning from teaching is, as I learned a great deal more by troubleshooting projects and reiterating instructions to students. I also learned a ton about how textures and materials interact with objects across Blender and Unity.
Helping run our pre-college program was such a motivating and satisfying experience. I love to see students embrace their creativity in different ways. Some wanted to focus on making hyper-detailed environments while others wanted to make a tribute to their favorite musicians. It was fascinating to see how their projects progressed across such a wide breadth of projects spanning different genres and art styles.
After pre-college, I had more time to engage with the River Campus Libraries (RCL) community. While I had met a number of staff members already, I had more time to get to know them before the start of the fall semester. I participated in group events such as the Physics-Optics-Astronomy Library’s Towers and Tabletops event, in which RCL staff and students got together to play board games. I grabbed coffee with a number of different colleagues and participated in the RCL Summer Challenge event. All these activities allowed me to really get to know the RCL staff and bond with my coworkers. They also showed me the importance of getting to know the people you work with, as it heightens the motivation to do your absolute best at any task that comes across your desk.
Overall, this summer was a perfect balance of training, teaching, learning, and socializing. I got to work with so many different people from different disciplines across the River Campus Libraries and beyond. I spent time developing my skills as an XR specialist and UX/UI designer through various projects. I felt that I really got to know the RCL staff more personally. I got to understand how the libraries function as a whole and how vital many of the spaces are to keeping the machine running and providing the student body with resources. RCL provides so much to the University of Rochester campus, so, especially if you’re a student, I implore you to explore these resources, as they often go overlooked or underutilized.
Twenty-three years ago, The Matrix introduced the idea of humanity living inside a computer simulation. While moviegoers at the time wrote it off as a work of fiction, today we are much closer to that all-encompassing technological world with the metaverse. Despite the promise of virtual worlds, there are still legal challenges and questions that echo those we face in the real world.
“Metaverse” is a common buzzword in the world of technology, but what exactly is it? The term was first recorded in Neal Stephenson’s 1992 novel Snow Crash, and the metaverse is currently defined as “a virtual space in which users can interact with an environment generated by computer and with other users.” However, the metaverse is constantly developing, and its definition is no different. Increasingly, people refer to different subareas or genres as being part of, or examples of, the metaverse, including virtual reality (VR), augmented reality (AR), and virtual roleplaying games. Fortnite[i], for example, has started referring to its virtual world as a “metaverse,” and Oculus[ii] offers customizable personal virtual homes. At the same time, calling these spaces the metaverse is equivalent to Google calling itself the “internet.” They are simply a part of it.
By way of this ontological thinking—that is, a metaverse is what we decide a metaverse is—we are introducing the concept of a “multiverse of metaverses.” Any single game or platform could be thought of as a metaverse. The move to name every virtual space a metaverse is in no small part due to companies attempting to use the buzzword to gain momentum for their products. While corporations have a long history of jumping on the bandwagon of the latest technical trend—think of the sheer number of apps generated in the last decade—the metaverse has the potential to shift how we interact with technology. Take the digital economy, for example. Users can buy, sell, and create goods, and ideally will eventually be able to take virtual items from one platform to another. There are still challenges to this seamless virtual engagement: no single company can solve the issue of interoperability, and collaboration often feels out of the question because it might be less profitable for companies to work together. Moreover, we are not at the level of computing needed to have “portals” from one metaverse to another. Technology development is not necessarily a linear progression like that of the early internet. The existence of failed, supposedly revolutionary investments meant to further our “hybrid-verse”[iii]—such as the 3D TV, delivery drones, and Google Glass—is proof enough for companies to be wary.
Further, the metaverse, just like any space we occupy, comes with questions about how we act, interact, and what are the rules and regulations that govern those interactions. To put it more simply, what are the laws of the metaverse? According to Pin Lean Lau, there are three primary legal sectors one must consider in XR:
In property law[iv], ownership of a physical piece of art is two-fold: there is the physical artwork itself, and there is the intellectual property behind it. Depending on the terms of the sale, the buyer may or may not own the intellectual property of the artwork. With digital art, international law firm Reed Smith states that “ownership” in the metaverse—in this case, referring to the use of different platforms—“is nothing more than a form of licensing, or provision of services.” Here, true ownership remains with the original rights holder, and the buyer cannot sell the item without the rights holder’s permission.
With the increased interest in the metaverse, we’re learning more about non-fungible tokens (NFTs), which seem to exist at the intersection of digital and physical property law arguments. An NFT can be an image, a piece of music, a video, a 3D object, or another type of creative work. Because of their various forms, it’s difficult to determine whether they count as ordinary pieces of digital art or something more. As individuals and companies continue to spend enormous sums to own “property” in the metaverse using NFTs, you begin to wonder what kinds of regulations apply. Can you apply land law? Can you have a mortgage? Can you sue others for damaging your property?
In the same vein as the potential for personal grievances within extended reality (XR), there is also the issue of the general public’s data safety. With the continued modernization of daily life and companies’ compulsion to cater to the needs of their users (sometimes needs that the users themselves hadn’t even considered), new categories of personal data have emerged, namely facial expressions, gestures, and reactions. VR headsets collect large amounts of personal physical data from their users. With this expansion in data collection comes the fear of what could be lost in an inevitable cyberattack. Organizations and nations are not fully prepared to deal with the privacy and security issues facing the metaverse, as there are not enough qualified people to handle the complexity of the architecture and develop secure solutions. This fear does not even account for user agreements under which companies sell the data they obtain to third parties. As this information is cataloged, users lose their right to privacy in ever more minute areas of their lives.
Fewer Answers, More Questions
“Can someone be liable for their actions in the metaverse?” This is a critical question raised by the development of XR. If one were to give avatars a legal persona, establishing rights and duties within a legal system, what might this mean for society? The distinction between a “legal” avatar and the true legal person who operates it becomes blurred, greatly complicating the ability to prove harm, loss, or injury suffered in the metaverse. Who owns our digital twin? Who is liable for the actions of our digital twin? We all have our place in the metaverse, and these questions are just the beginning of much larger developments to come.
[i] An online multiplayer game owned by Epic Games
[ii] VR headsets and technology developed by Meta
[iii] Amalgamation of physical reality, internet, and VR
[iv] The legal division that deals with issues regarding property and ownership
Understanding how experience and exposure to trauma changes the brain could improve diagnosis and targeted care for conditions like anxiety and post-traumatic stress disorder (PTSD). Benjamin Suarez-Jimenez, Ph.D., assistant professor of Neuroscience, has been studying this topic for the past several years and was awarded a new $3.5 million grant to use virtual reality and MRI to look into the circuitry of threat, reward, and cognitive mapping in PTSD, trauma, and resilience.
For the next five years, this funding from the National Institute of Mental Health will allow the ZVR lab to build upon work investigating the brain areas that build spatial maps, specifically to discriminate between areas of an environment associated with different emotions. Suarez-Jimenez’s most recent research identified changes in the salience network – a mechanism in the brain used for learning and survival – in people exposed to trauma (with and without psychopathologies, including PTSD, depression, and anxiety). His prior research revealed that people with anxiety have increased insula and dorsomedial prefrontal cortex activation – indicating that their brains were associating a known safe area with danger or threat.
“This project the R01 will support will probe whether the neural processes we have identified in the past are specific to threat or if they expand to reward processing,” Suarez-Jimenez said. “We are also looking at how attention allocation to some visual cues of the virtual reality tasks changes from pre- to post-task experience. We are hoping that understanding these brain processes can help us identify better ways to diagnose PTSD and to improve treatment.”
The University’s Goergen Institute for Data Science supports collaborative projects across all disciplines.
Ten projects supported with seed funding from the Goergen Institute for Data Science this year demonstrate how machine learning, artificial intelligence (AI), and augmented and virtual reality (AR/VR) are transforming the way University of Rochester researchers—across all disciplines—address challenging problems.
“I’m very excited about the wide range of collaborative projects we are able to support this year,” says Mujdat Cetin, the Robin and Tim Wentworth Director of the institute. “These projects tackle important and timely problems on data science methods and applications, and I am confident they will lead to significant research contributions and attract external funding.”
The awards, approximately $20,000 each, help researchers generate sufficient proof-of-concept findings to then attract major external funding.
This year’s projects involve collaborations among engineers, computer scientists, a historian, a biostatistician, and experts in brain and cognitive sciences, earth and environmental science, and palliative care. Their projects include a totally new kind of computing platform, new virtual reality technologies to improve doctor-patient conversations and help people overcome color vision deficiency, and machine learning techniques to make it easier for people to add music to their videos and to enhance AR/VR immersive experiences based on the unique geometry of each user’s anatomy.
The 2022–23 funded projects and their principal investigators are:
- Ising Boltzmann Substrate for Energy-Based Models
Co-PIs: Michael Huang, professor of electrical and computer engineering and of computer science, and Gonzalo Mateos, associate professor of electrical and computer engineering and of computer science and the Asaro Biggar Family Fellow in Data Science
- A Data-Driven, Virtual Reality-based Approach to Enhance Deficient Color Vision
Co-PIs: Yuhao Zhu, assistant professor of computer science, and Gaurav Sharma, professor of electrical and computer engineering, of computer science, and of biostatistics and computational biology
- Audiovisual Integration in Virtual Reality Renderings of Real Physical Spaces
Co-PIs: Duje Tadin, professor and chair of brain and cognitive sciences and professor of ophthalmology and of neuroscience; Ming-Lun Lee, associate professor of electrical and computer engineering; and Michael Jarvis, associate professor of history
- Personalized Immersive Spatial Audio with Physics Informed Neural Field
Co-PIs: Zhiyao Duan, associate professor of electrical and computer engineering and of computer science, and Mark Bocko, Distinguished Professor of Electrical and Computer Engineering and professor of physics and astronomy
- Computational Earth Imaging with Machine Learning
Co-PIs: Tolulope Olugboji, assistant professor of earth and environmental sciences, and Mujdat Cetin, professor of electrical and computer engineering and of computer science, and the Robin and Tim Wentworth Director of the Goergen Institute for Data Science
- Improving Deconvolution Estimates through Bayesian Shrinkage
PI: Matthew McCall, associate professor of biostatistics
- Building a Multi-Step Commonsense Reasoning System for Story Understanding
Co-PIs: Zhen Bai, assistant professor of computer science, and Lenhart Schubert, professor of computer science
- Versatile and Customizable Virtual Patients to Improve Doctor-Patient Communication
Co-PIs: Ehsan Hoque, associate professor of computer science, and Ronald Epstein, professor of family medicine and palliative care
- Machine Learning Assisted Femtosecond Laser Fabrication of Efficient Solar Absorbers
Co-PIs: Chunlei Guo, professor of optics, and Jiebo Luo, Albert Arendt Hopeman Professor of Engineering
- Rhythm-Aware and Emotion-Aware Video Background Music Generation
PI: Jiebo Luo, Albert Arendt Hopeman Professor of Engineering
There is a strong emphasis at Studio X on fostering cross-disciplinary collaboration in extended reality (XR). Over 50 researchers across the UR use XR technology in their research and teaching, and many come to Studio X for consultation and advice on program development or engineering. As an XR Specialist at Studio X, I got the opportunity to work on two XR-related research projects this past summer, one in collaboration with the Brain and Cognitive Sciences Department (BCS) and the other with the Computer Science Department (CS). These projects were supported through the Office of Undergraduate Research by a Discover Grant, which supports immersive, full-time summer research experiences for undergraduate students at the UR.
The research with BCS involves digitizing Kodak Hall at the Eastman School of Music and bringing it into VR. The result will be used to provide a more realistic environment for user testing, to better study how humans combine and process light and sound. The visit to Kodak Hall was scheduled back in March, and many preparations preceded it, including figuring out the power supply, cable management, stage arrangement, clearance, etc. We also discussed which techniques to use to scan and capture the hall. Three scanning techniques were tested before and during the visit: photogrammetry, 360-degree imaging, and time-of-flight (ToF).
Photogrammetry creates 3D models of physical objects by processing photographic images or video recordings. By taking images of an object from many different angles and processing them with software like Agisoft Metashape, the algorithm can locate and map key points across multiple images and combine them into a 3D model. I first learned about this technique at a photogrammetry workshop at Studio X led by Professor Michael Jarvis. It has been very helpful for the research, since we were able to capture great detail on the mural in Kodak Hall, where other techniques had failed.
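At the heart of photogrammetry is triangulation: once the software has matched the same key point across two or more photos with known camera poses, it can solve for that point's position in 3D. A minimal sketch of that one step (not the full Metashape pipeline; the toy cameras and point here are made up for illustration):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Recover a 3D point from its pixel observations in two calibrated
    cameras via the direct linear transform (DLT).
    P1, P2: 3x4 camera projection matrices.
    x1, x2: (u, v) image coordinates of the same matched key point."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The homogeneous 3D point is the null vector of A (last row of Vt).
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

def project(P, X):
    """Project a 3D point through a 3x4 camera matrix to pixel coords."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Two toy cameras: one at the origin, one shifted 1 unit along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

point = np.array([0.5, -0.2, 4.0])  # ground-truth 3D point
recovered = triangulate(P1, P2, project(P1, point), project(P2, point))
print(np.round(recovered, 6))  # recovers [0.5, -0.2, 4.0]
```

Real pipelines repeat this for millions of matched points, after first estimating the camera poses themselves from the same matches.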
A 360-image, as its name suggests, is a 360-degree panoramic image taken from a fixed location. With the Insta360 camera borrowed from Studio X, a capture session requires almost no setup and can be quickly previewed using the app on a phone or smart device.
The time-of-flight (ToF) technique emits light and measures how long the reflection takes to travel back, deriving depth information from that round-trip time. Hardware using ToF can easily be found on modern devices, such as iPhones and iPads with Face ID. I tested the ToF scanner on the iPad Pro at Studio X. It provides a great sense of spatial orientation and has a fairly short processing time.
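The arithmetic behind ToF is simple: depth is the speed of light times the round-trip time, halved (the light travels out and back). A quick illustration with a made-up timing value:

```python
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def tof_depth(round_trip_seconds: float) -> float:
    """Depth from a time-of-flight measurement: the pulse travels out
    and back, so the one-way distance is half the round trip."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2

# A reflection returning after ~6.67 nanoseconds means the surface
# is about one meter away -- which shows why ToF sensors need
# picosecond-scale timing precision to resolve millimeters.
print(round(tof_depth(6.671e-9), 3))
```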
We used the Faro laser scanner to get a scan with higher accuracy and resolution. Each scan took 20 minutes, and we conducted 8 scans to cover the entire hall. The result was a 20+ GB model with billions of points. To load the scene onto the Meta Quest 2 VR headset, we shrank the size and resolution of the model dramatically using tools such as gradual selection, adjusting the Poisson distribution, material paint, etc. We also deleted excess points and replaced flat surfaces, such as the stage and the mural, with better-quality images. The end result is a nice-looking model with decent detail at around 250 MB, light enough for the headset to run.
The model was handed over to Shui’er Han from BCS as a Unity package; she will implement the audio recording and spatial visualization before conducting the user testing. It is amazing to see so many people bringing together their experience and knowledge to make this cross-disciplinary project a reality. I would like to thank Dr. Duje Tadin, Shui’er Han, Professor Michael Jarvis, Dr. Emily Sherwood, Blair Tinker, Lisa Wright, Meaghan Moody, and many more who gave me the amazing opportunity to work on this fun research and all the help they provided along the way. I can’t wait to see what they can achieve beyond this model and research project.
You can read more about this cross-disciplinary collaboration here.
This summer, I worked full time at Studio X. Even though the campus felt pretty empty with almost all the other undergrads home for the summer, there was a lot going on in Studio X! For example, for two weeks in July, we held a pre-college program called “XR: Content Creation and World Building.” In this program, high schoolers came from all across the country to learn about the world of extended reality, or XR.
“Learn how XR (the umbrella term for augmented and virtual reality) experiences are created! Students will study the history of immersive technologies and gain technical skills by exploring both the basics of 3D graphics for asset creation and how to develop XR environments with Unity, a popular game engine. We will also discuss the applications and impact of XR across humanities, social science, and STEM fields. All learning levels welcome.”
It was really exciting to be a part of this program and to teach passionate students about XR creation. As we prepared for the students’ arrival, we asked ourselves, “How can we introduce a dozen high school students to the complex and technically challenging world of XR development, all within two weeks of half-day sessions?” This was a challenge indeed. We knew that we wanted the students to walk away with a basic understanding of the fundamentals of Blender, a 3D modeling and content creation tool, and Unity, a game engine commonly used for VR development, but we did not want to overwhelm them with too much new material all at once. We decided that we would have to create a highly detailed plan, carefully crafting how we would use the two weeks we had with the students.
Over the course of June and early July, we worked to create this plan, taking every little detail into consideration. The first major obstacle we faced was how to ensure that each student would have the necessary hardware and software to complete the activities we were planning. Blender and Unity can both be very taxing on computers, and it is often the case that folks don’t have the necessary hardware, even among our undergraduates. It was very important that this program be open to anyone who was interested and that technical experience or personal hardware not be a limitation. We decided that instead of having each student bring their own computer, we would use the high-powered workstations that we already have in Studio X. This, however, raised the question of how to organize a dozen PCs in our space, each drawing a very large amount of power. With 12 high-powered PCs running at the same time in the same place, we actually ended up tripping a circuit breaker and had to rethink our plans. We considered several options, including using another space or splitting the group into different rooms, but we eventually decided to completely reorganize Studio X in order to keep the group together in one space. I really liked the way we eventually configured the space, as it allowed us to keep the whole group together and helped us build a stronger community as we worked.
After solving the issue of how to organize the computers, we could focus our energy entirely on planning how best to use the two weeks with the students. The first week was focused on learning Blender. We wanted to give an introduction to 3D concepts, Blender basics, and character modeling. We felt this would give our students a foundational understanding of how to navigate Blender while still being realistic about the time we had. Blender can be a very challenging program to learn. There are many different things you can do with the software, and it can be overwhelming the first time you try it out. Although we felt like we were trying to introduce a lot in a short amount of time, we were very excited to see what the students could make. At the end of the week, each student had their very own 3D-modeled character. The students did an amazing job creating their characters in Blender. It was impressive how fast they were able to learn, and it felt great to see our planning pay off.
The second week of our program was focused on learning Unity. We wanted to teach the basics of Unity, get the students thinking about core game design principles, and introduce the world of VR development. The end goal for this week would be that each student would create their very own VR mini game, using the 3D character they modeled as the antagonist in their experience.
With so little time, it was really important that we had milestones to reach each day to make sure we stayed on track. On the first day working on their games, the students got an introduction to a template VR Unity project. I created this template using a beginner VR asset from the Unity Asset Store, a place where you can find free or paid packages to help you create games. The asset I used is linked here: VR Escape Room. This package handled a lot of the initial setup for a VR project, which can be very complex, allowing the students to focus on their game concepts without getting bogged down in too much coding. I also created a full VR mini game myself to give the students an example of what their final project could look like. My game was called Jellyfishin, in which the player goes around catching jellyfish. It highlighted some of the main mechanics of the template and was also fun for the students to play around with.
After being introduced to the template project, day 2 was all about environmental design. The students learned how to find resources to create their game worlds using a combination of free models, primitive objects, and the 3D characters they had made the week prior. By the end of day 2, the games had really come together. I was amazed at how much detail and care each student put into their project, especially considering how little time they had. The final development day was used to polish and finalize the games. We made sure that each student’s game was playable start to finish and that there were no major problems with the experience. I think each project was really unique despite coming from the same template. It was so rewarding to see the tools we had created be used so well to create these awesome experiences.
On our final day with the students, it was time for the showcase. Staff members from all over the library came to Studio X, and each student had the opportunity to present their game. One by one, they gave a quick introduction to their concept and then showed off some gameplay. In the world of game development, you never know if something is going to go wrong; one minor bug can throw off an entire demonstration. Thankfully, these students did an amazing job finalizing their games, and everything went off without a hitch. After two challenging weeks, our students left with a complete VR game, a 3D-modeled character, and a set of skills they can continue to grow and use on their journey with XR.
Being a part of this pre-college program throughout the summer has been an amazing learning experience for me. Through all of the preparation and thinking that went into making our goals possible, I really had to put my technical skills to the test. In the end, our planning really made all the difference and is what made the program run so smoothly. It was a great challenge to think about how we can teach so much information to the students in such a short amount of time, and I’m really proud of what we all accomplished. I can’t wait to see how this program continues to evolve and find more ways to lower the entrance barrier to the world of XR. Overall, it was a pretty great summer in Studio X.
A Wilmot Cancer Institute scientist has published data showing that a new microchip-like device developed in his lab can reliably model changes in the bone marrow as leukemia takes root and spreads.
Ben Frisch, Ph.D., assistant professor of Pathology and Laboratory Medicine and Biomedical Engineering at the University of Rochester, and colleagues have been building what is known as a modular bone-marrow-on-chip to enhance the investigation of leukemia stem cells. The tiny device recapitulates the entire human bone marrow microenvironment and its complex network of cellular and molecular components involved in blood cancers.
Similar tissue-chip systems have been developed by others, but they lack two key features contained in Frisch’s product: osteoblast cells, which are crucial to fuel leukemia, and a readily available platform.
The fact that Frisch’s 3D model has been published in Frontiers in Bioengineering and Biotechnology and is not a one-off fabrication will allow others in the field to adopt a similar approach using the available microfluidics system, he said.
Rochester researchers will harness the immersive power of virtual reality to study how the brain processes light and sound.
A cross-disciplinary team of researchers from the University of Rochester is collaborating on a project to use virtual reality (VR) to study how humans combine and process light and sound. The first project will be a study of multisensory integration in autism, motivated by prior work showing that children with autism have atypical multisensory processing.
The project was initially conceived by Shui’er Han, a postdoctoral research associate, and Victoire Alleluia Shenge ’19, ’20 (T5), a lab manager, in the lab of Duje Tadin, a professor of brain and cognitive sciences.
“Most people in my world—including most of my work—conduct experiments using artificial types of stimuli, far from the natural world,” Tadin says. “Our goal is to do multisensory research not using beeps and flashes, but real sounds and virtual reality objects presented in realistically looking VR rooms.”
A cognitive scientist, a historian, and an electrical engineer walk into a room . . .
Tadin’s partners in the study include Emily Knight, an incoming associate professor of pediatrics, who is an expert on brain development and multisensory processing in autism. But in creating the virtual reality environment the study participants will use—a virtual version of Kodak Hall at Eastman Theatre in downtown Rochester—Tadin formed collaborations well outside his discipline.
Faculty members working on this initial step in the research project include Ming-Lun Lee, an associate professor of electrical and computer engineering, and Michael Jarvis, an associate professor of history. Several graduate and undergraduate students are also participating.
Many of the tools they’ll use come from River Campus Libraries—in particular, Studio X, the University’s hub for extended reality projects, as well as the Digital Scholarship department. Emily Sherwood, director of Studio X and Digital Scholarship, is leading the effort to construct the virtual replica of Kodak Hall.
The group recently gathered in the storied performance space to collect the audio and visual data that Studio X will rely on. University photographer J. Adam Fenster followed along to document the group’s work.
Senior Creative Writing major and Karp Library Fellow Ayiana Crabtree '22 was featured in this post for the UR admissions blog! Link to original post at the end.
Located on the first floor of Carlson Library, as the hub for extended reality at the University of Rochester, Studio X fosters a community of cross-disciplinary collaboration, exploration, and peer-to-peer learning that lowers barriers to entry, inspires experimentation, and drives innovative research and teaching in immersive technologies.
Studio X runs tons of workshops and events that aim to make XR fun and easier to understand. For example, I run an Intro to XR workshop every semester that teaches participants, no matter their skill level, all about the basics of XR through hands-on learning. There are other workshops too, like Blender and Unity tutorials that teach you the basics of 3D modeling and game development. If workshops aren’t your thing, we also have events like our Beat Saber competition and a speaker series called Voices of XR, where you can learn about XR directly from professionals in the field.
Studio X has a wide range of XR technologies that students, faculty, and staff can use both inside and outside the space. Our most popular attractions are the Meta Quest 2 VR headsets, which can be borrowed and taken back to your dorm for up to three days at a time. The VR headsets come with a bunch of fun pre-downloaded games and experiences for you to play, like Beat Saber, Walkabout Minigolf, Job Simulator, and more! In addition to the VR headsets, we have 360 cameras and 360 audio recorders, which can also be taken back to your dorm for a three-day period. If you don’t mind staying in the space, you can ask to try one of our Microsoft HoloLens 2 headsets (MR) or use one of our high-end workstations for homework. You can also use any of the aforementioned technology in the space if you don’t want to take it back to your room.
Studio X’s main goal is to break down any barriers that may be preventing students from getting into XR technologies. Whether that means making resources readily available or offering introductory tutorials, Studio X is here to help!