Noctis Evadere

Educational VR project designed to raise public awareness of the strengths and weaknesses of robotic sensors.

Project Goals
In September 2019, I participated in a collaborative VR project with students and faculty from Oregon State University and Stuttgart Media University. This was an educational project designed to raise public awareness of the strengths and weaknesses of robotic sensors. We had to complete the project within a ten-week academic term.
We worked closely with the students from Stuttgart HdM to develop concepts that would transform our educational goals into a fully realized playable experience. In addition to a weeklong in-person brainstorming session, we collaborated with these students remotely to ideate our design and share feedback. Within our own group, teams divided the work according to each member's expertise. My main contributions were detail modeling in Autodesk Maya and prototyping code in Unity.
Initial Planning
During our weeklong brainstorming session with the German design students, our group decided to create an atmospheric, haunted-museum VR experience. We thought that putting players in a dark environment, where they would lose their sense of sight, would be a perfect way to showcase how robotic sight functions using lidar. We decided to give the user a ‘flashlight’ that projected a matrix of dots and outlined objects in its field of view, much like the sensors on automated vehicles.
Expert Feedback
After deciding on a concept, our group met with several Oregon State robotics professors to get expert opinions and information on the current state of robotic sensors. The professors surprised us by explaining that robotic sensors are inherently poor at object detection and that many people overestimate a sensor's ability to interpret its surroundings. Machine learning techniques for recognizing objects within images, they explained, are often fooled when presented with an image that differs from the dataset they were trained on. Furthermore, they described limitations of lidar technology: beams from the sensor can be blocked by closer objects in the field of view, which can lead to unexpected objects appearing as the sensor moves through space.
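The occlusion limitation the professors described can be illustrated with a minimal 2D ray-casting sketch. This is not project code (our sensor was built in Unity); the scene, function names, and numbers here are invented for illustration:

```python
import math

def first_hit(origin, direction, obstacles):
    """Return the distance to the nearest obstacle a ray hits, or None.

    `direction` must be unit-length. Each obstacle is a
    (center_x, center_y, radius) circle in 2D.
    """
    ox, oy = origin
    dx, dy = direction
    nearest = None
    for cx, cy, r in obstacles:
        # Solve |origin + t*direction - center|^2 = r^2 for t >= 0.
        fx, fy = ox - cx, oy - cy
        b = 2 * (fx * dx + fy * dy)
        c = fx * fx + fy * fy - r * r
        disc = b * b - 4 * c  # direction is unit-length, so a == 1
        if disc < 0:
            continue  # this beam misses the obstacle entirely
        t = (-b - math.sqrt(disc)) / 2
        if t >= 0 and (nearest is None or t < nearest):
            nearest = t
    return nearest

# A small post (radius 0.5) sits directly between the sensor and a
# large statue (radius 2.0) five metres further back.
scene = [(5.0, 0.0, 0.5), (10.0, 0.0, 2.0)]

# Beam fired straight ahead stops at the near post, so the statue
# behind it is invisible to the sensor.
print(first_hit((0.0, 0.0), (1.0, 0.0), scene))  # → 4.5

# Step the sensor sideways and fire again: the post no longer blocks
# the beam, and the statue "suddenly" appears in the scan.
print(first_hit((0.0, 1.5), (1.0, 0.0), scene))
```

The second cast returns a hit on the statue at roughly 8.7 metres, mirroring the professors' point: an object can be completely absent from one scan and then appear without warning as the sensor moves.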
Final Decisions

It was clear that many people lack an understanding of the limitations of robotic sensors. With the increase in self-driving functionality in cars, among other applications, there is real danger in over-relying on machine vision. We decided to make a game that demonstrated how certain situations can confuse sensors and make navigation uncertain.

Robots cannot perform object detection and navigation tasks as well as humans because they simply do not have the reasoning ability of humans. If we just gave players the robotic sensor to use on top of their own vision, they would opt to use their own vision every time as it is inherently better and they can immediately understand what they are seeing. However, by utilizing the immersive nature of virtual reality, we can deprive the users of their vision in certain scenarios to force their reliance on robotic sensors as they navigate through a space.

To combine these ideas into a game that is fun and immersive, we decided that the player must try to escape a haunted museum. We implemented the idea of object-detection failure in cameras via a security room that players must walk through and observe to find an escape route. We then implemented vision via the infrared array in a pitch-black room full of terracotta soldiers. These soldiers were arranged to form a maze, and the player would have to rely on the sensor to navigate their way through.
Design Process
Autodesk Maya: Greyboxing
We began the modeling process by greyboxing the play space in Autodesk Maya. We split the sections of the museum among the group's members, then combined our rooms in a master file so that they would all connect. I led the design of the overall layout and the path the player would take as they traveled the museum.
greybox museum model
Autodesk Maya: Modeling and Lighting
Axe model progression: model reference, basic shape, and in-scene placement
I modeled many assets for the final design of the project, including the entirety of the ‘medieval room’ as well as many artifacts, containers, display cases, and other props throughout the museum. I began by finding reference images to use as I modeled so that my assets would be more realistic. After creating an initial shape, I refined the model by adding vertices and sculpting. Finally, I placed individual assets within the larger scene. In addition to modeling, I created many textures and UV maps for our assets. The group exported our models into Unity, where we created lighting and sound effects to make the space more immersive.
Adding Light in Unity
Unity: Interaction
The robotic sensor, resembling a flashlight
To make the model playable in VR, our team used the VRTK Unity plugin for the Oculus Rift. We spent weeks troubleshooting the game: optimizing it for framerate, making many elements interactable (including containers and light switches), and creating physics boundaries to restrict teleportation movement. One key focus was the robotic sensor tool. With the help of a professor, we integrated code that allowed the tool to project a dot matrix into the environment, simulating robotic vision.
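Our actual sensor tool was written as Unity C# using the engine's ray casting; as a rough, language-neutral illustration of the idea, here is a minimal Python sketch (all names and parameters are hypothetical) that casts a grid of rays out of a ‘flashlight’ and records where each dot would land on a wall or floor plane:

```python
import math

def dot_matrix(origin, rows, cols, spread_deg, wall_z, floor_y):
    """Cast a rows x cols grid of rays from `origin` (facing +z) and
    return the 3D point where each ray first hits the wall plane
    (z = wall_z) or the floor plane (y = floor_y)."""
    dots = []
    half = math.radians(spread_deg) / 2
    for i in range(rows):
        for j in range(cols):
            pitch = -half + (i / (rows - 1)) * 2 * half  # up/down tilt
            yaw = -half + (j / (cols - 1)) * 2 * half    # left/right tilt
            # Unit direction: mostly forward (+z), tilted by yaw/pitch.
            d = (math.sin(yaw), math.sin(pitch),
                 math.cos(yaw) * math.cos(pitch))
            hits = []
            if d[2] > 0:                      # ray travels toward the wall
                hits.append((wall_z - origin[2]) / d[2])
            if d[1] < 0:                      # ray angles down to the floor
                hits.append((floor_y - origin[1]) / d[1])
            t = min(hits)                     # nearest surface wins
            dots.append(tuple(origin[k] + t * d[k] for k in range(3)))
    return dots

dots = dot_matrix(origin=(0.0, 0.0, 0.0), rows=5, cols=5,
                  spread_deg=60.0, wall_z=10.0, floor_y=-2.0)
print(len(dots))  # → 25, one dot per ray
```

In the Unity version, each ray would instead be a physics raycast against arbitrary scene colliders, with a dot sprite spawned at every hit point; the "nearest surface wins" step is exactly why closer objects occlude everything behind them.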
Final Product
Our group presented our final product at an end-of-term showcase as well as via video conference with the German design students. The experience was fully playable and, overall, easy to pick up and use on the Oculus Rift. With more time, our group would have loved to explore the object-recognition concept further, as the current state of the security monitors does not allow for interaction.

Link to the playable project (requires an Oculus Rift to work properly)

Our group struggled most with creating an immersive VR experience that could teach complex concepts, like the limitations of machine vision, while still being fun for the average user to pick up and play. Ultimately, I feel we could have done a more thorough job of explaining how the robotic sensor worked, but we were pleased with the playability of our project.

This was the first project of this scope that I had ever worked on with a group. Everything had to be coordinated carefully so that our models matched in both scale and style. If any one member was out of sync, the entire project was at risk of failure, as we had only ten weeks to complete it. I learned valuable skills in time management, resolving issues with group dynamics, and adapting my design style to work with the vision of others.

So much planning, prototyping, and iteration went into making this project both functionally and aesthetically effective. Before this project I had done some graphic and 3D design work, but I did not fully appreciate the iterative method as a tool for refinement. I also gained a great deal of practical skill in a wide variety of applications, from Maya to Unity to Adobe software like Premiere Pro and Photoshop.
Created by Brady Baldwin, 2021