NASA’s IDEAS Project Update: Phase 1
Back in January, we took one small step for man, and one giant leap for user-centered wearable head-mounted displays, with the one and only NASA. Since then, the Integrated Display and Environmental Awareness System (IDEAS) Team at NASA's Kennedy Space Center has explored and tested everything from microprocessors and flux capacitors (wink wink) to hand gestures and GUIs, all in the name of enhancing communication, environmental awareness, and work documentation efficiency for NASA's unique set of users both on Earth and in space. So here’s an update on just what we’ve been up to in creating wearable technology during this partial orbit around the sun.
The Team
I’ve been serving as Lead UX Architect & Designer alongside PRPL’s Paul Hilt, Integrations Engineer, bringing a progressive perspective to the team we’ve partnered with at NASA, along with the creative posters that give the lab its quirky personality. Our focus on user-centered design, agile work style, and “just keep shipping” attitude has inspired a hack mentality among the IDEAS crew for continued innovation. They’ve really rallied around these values and embraced joining the ranks of those changing the game at NASA. Together, we dove into this incredible project and have accomplished so much since taking off.
Phase One: Proof of Concept
The first six months of the project were designated as the Proof of Concept phase, where the team got the lay of the land in all things wearables. Together, we charted a path toward the first prototype that would ultimately catapult NASA into a future of improved safety, communication, and workflow efficiency, allowing the agency to stretch its resources further into the galaxy.
Laying a Foundation with UX
The UX team spent some quality time with the most important people to the IDEAS team: the users themselves. We got to know the users, both on the ground and in space, quite well, and really put our space pants on to gather user requirements and define the scope of the project. We looked into what's been successful with GUIs in the HMD (head-mounted display) arena, what types of input devices work best when you've got a space vehicle to send to the final frontier, and even sent one of our team members on an analog Mars mission to get as close to first-hand experience as possible. Once we had our first version of the UI, we conducted our first usability tests with actual users and got amazing feedback to incorporate into the first prototype.
Tapping in the Software Team
The software team got under the hood with various platforms and software development kits (SDKs), tinkering with displaying environmental sensor data to keep our users safe. We made our first audio and video calls from the device using video-conferencing solutions, and for the first time were able to show “Launch Control” what's actually happening with Major Tom. From there, we experimented with voice commands that let the user interact with the device hands-free, a game changer for end users like astronauts conducting experiments on the space station. We also developed the initial working task app, allowing users to ditch paper documents in favor of digital instructions, buyoffs, and time tracking. That alone could save countless hours for the agency and reduce the burden on almost the entire project-based workforce.
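If you're curious what that hands-free pattern can look like in code, here's a minimal sketch of a voice-command listener using Android's standard SpeechRecognizer API. It's illustrative only: the TaskController interface and the “next step” phrase are hypothetical stand-ins, not the actual IDEAS codebase.

```java
import android.content.Context;
import android.content.Intent;
import android.os.Bundle;
import android.speech.RecognitionListener;
import android.speech.RecognizerIntent;
import android.speech.SpeechRecognizer;

import java.util.ArrayList;

// Hypothetical hook into the task app; stands in for whatever drives the UI.
interface TaskController {
    void advanceStep();
}

// Minimal sketch: listen for a spoken phrase and advance a task step hands-free.
public class VoiceCommandHelper {
    private final SpeechRecognizer recognizer;

    public VoiceCommandHelper(Context context, TaskController tasks) {
        recognizer = SpeechRecognizer.createSpeechRecognizer(context);
        recognizer.setRecognitionListener(new RecognitionListener() {
            @Override public void onResults(Bundle results) {
                ArrayList<String> matches =
                        results.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION);
                if (matches == null) return;
                for (String phrase : matches) {
                    if (phrase.equalsIgnoreCase("next step")) {
                        tasks.advanceStep(); // hands-free page turn
                        return;
                    }
                }
            }
            // Remaining callbacks left empty for brevity.
            @Override public void onReadyForSpeech(Bundle params) {}
            @Override public void onBeginningOfSpeech() {}
            @Override public void onRmsChanged(float rmsdB) {}
            @Override public void onBufferReceived(byte[] buffer) {}
            @Override public void onEndOfSpeech() {}
            @Override public void onError(int error) {}
            @Override public void onPartialResults(Bundle partialResults) {}
            @Override public void onEvent(int eventType, Bundle params) {}
        });
    }

    public void startListening() {
        Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
        intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
        recognizer.startListening(intent);
    }
}
```

Same idea whatever the phrase set: recognize, match against a small command vocabulary, and route to the app, so the astronaut's hands stay on the experiment.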
Figuring Out the Physical Facets
The hardware guys got their heads in the game with a variety of microprocessors, experimenting to find the best fit for the "brains" of our device. (Luckily, we suffered only one board-frying casualty.) The team looked at options for optics, seeking the most lightweight, high-resolution, and bright lenses that could potentially roll out on a mass scale. We needed power to make this puppy run, so the team researched batteries with the best energy-to-weight ratio and worked on designing custom boards to integrate the glasses with the microprocessor. Looks like it's not going to be as simple as plugging in your standard HDMI cord, right? Just to cover all the bases, we researched what it would take to start from scratch with the CPU, codec, memory, and so on, and determined it was best not to reinvent the wheel.
Bringing the Components Together
The Integration Team focused on where the hardware meets the code. They dabbled in a couple of different versions of Android; connected the display lenses, cameras, and other peripherals to the microprocessor; flashed the board a couple thousand times; and ultimately ended up with a custom Android build for our device. They also convinced the Bluetooth Low Energy sensors to talk with the microprocessor and glasses, which proved to be a great success. Then it came time to install the software team's custom-built apps and programs onto the dev boards for testing. We considered it a big win for the team to combine UI, software, and hardware into a single form factor.
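For a taste of what “convincing” a BLE sensor to talk looks like, here's a minimal sketch using Android's standard BluetoothGatt APIs. The UUIDs below are the Bluetooth SIG's generic Environmental Sensing service and Temperature characteristic, placeholders for whatever the team's actual sensors expose.

```java
import android.bluetooth.BluetoothDevice;
import android.bluetooth.BluetoothGatt;
import android.bluetooth.BluetoothGattCallback;
import android.bluetooth.BluetoothGattCharacteristic;
import android.bluetooth.BluetoothGattService;
import android.bluetooth.BluetoothProfile;
import android.content.Context;

import java.util.UUID;

// Minimal sketch: connect to a BLE environmental sensor and read one value.
// Requires the usual Bluetooth permissions, omitted here for brevity.
public class SensorLink extends BluetoothGattCallback {
    // Bluetooth SIG base UUIDs for Environmental Sensing (0x181A)
    // and Temperature (0x2A6E); real hardware may define its own.
    private static final UUID ENV_SERVICE =
            UUID.fromString("0000181a-0000-1000-8000-00805f9b34fb");
    private static final UUID TEMP_CHAR =
            UUID.fromString("00002a6e-0000-1000-8000-00805f9b34fb");

    public void connect(Context context, BluetoothDevice sensor) {
        sensor.connectGatt(context, /* autoConnect = */ false, this);
    }

    @Override
    public void onConnectionStateChange(BluetoothGatt gatt, int status, int newState) {
        if (newState == BluetoothProfile.STATE_CONNECTED) {
            gatt.discoverServices(); // enumerate what the sensor exposes
        }
    }

    @Override
    public void onServicesDiscovered(BluetoothGatt gatt, int status) {
        BluetoothGattService service = gatt.getService(ENV_SERVICE);
        if (service == null) return; // sensor doesn't expose the expected service
        gatt.readCharacteristic(service.getCharacteristic(TEMP_CHAR));
    }

    @Override
    public void onCharacteristicRead(BluetoothGatt gatt,
            BluetoothGattCharacteristic characteristic, int status) {
        byte[] value = characteristic.getValue();
        // Parse the reading and hand it to whatever renders the HUD overlay (not shown).
    }
}
```

The whole exchange is asynchronous, which is much of the integration work in practice: each step lands in a callback, and the glue code has to keep the display in sync with whatever the sensors report.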
What’s next?
The future is bright for the IDEAS Team as they set out to build their first prototype of the IDEAS device by January 2016. The foundation of knowledge laid in the proof-of-concept stage creates a strong platform for integrating the UI, software, and hardware into a functioning device built for this special set of users. And then, of course, the team will bring it into the field and test, iterate, test, iterate, test, iterate until it shines like a supernova. Check out the homemade video above for a preview of all we’ve accomplished to date!