In 2014, Google Glass was released to the public to great fanfare. A year earlier, in his 2013 TED Talk, Google co-founder Sergey Brin had described the human usability problems Glass was designed to solve, and they sounded plausible enough. Who wants to be hunched over their phone all the time? Wouldn’t it be great if we could free up our hands and gaze forward as we navigate the world while we send messages and take photos?
Unfortunately, even as it offered those solutions, Google Glass introduced a different, larger set of human usability issues that ultimately led to its demise.
For one, the social impact of wearing the device was underestimated. I remember an early adopter in grad school wearing it as he gave a talk in a lab meeting. I found it totally distracting and, despite never having met him before, immediately questioned his judgment and character. As a category, Glass wearers soon earned the unfortunate nickname “Glassholes.” The device was impossible to ignore, “a class divide on your face.”
The more substantive usability issues, however, centered on security and privacy. People sometimes reacted angrily, and occasionally violently, to someone nearby wearing Glass: what if the wearer was photographing or filming them without their consent?
In 2015, less than one year after its public release, Google announced they would stop producing Glass for individual purchase. Google had simultaneously created a brilliant technical achievement and a usability failure.
The Research-User Disconnect
This is not an isolated example. Usability issues are rife in technology development across sectors, a pattern recently exemplified by a wearable program funded by Army Futures Command. In 2018, development began on the Integrated Visual Augmentation System (IVAS), a headset intended to improve soldiers’ situational awareness via a multi-sensor, mixed-reality display, originally scheduled to be operational in 2021. That date has since been pushed back to 2025 due to issues identified in testing.
According to a January 2023 DOT&E report, soldiers experienced multiple difficulties with the headsets: head pain, neck pain, and face pressure due to wearing the physical device, and nausea and eye strain from using its visual display. The incidental light emitted from the headset was also noted to be a potentially serious threat to a soldier’s safety. From a functional perspective, the technology compared unfavorably to current practice: “In the Ops Demo, the infantry company was more successful accomplishing their operational missions with their current equipment than with IVAS 1.0.”
All hope is not lost, however. Testing has been thorough, and the results are being fed back into updates to the headset, sensors, and software to improve comfort and usability. This is part of a deliberate “soldier-centered design” process that reflects the Army’s recognition that user acceptance must be earned before a new technology is sent into operation. It’s also a sobering reminder that creating novel, usable technology is very difficult. Even when human factors engineering (HFE) practices are used, rapid success is not guaranteed.
Human Factors, High Stakes
Attending (or not) to HFE practices affects more than the success or failure of a shiny new technology; it also bears directly on human life and safety. In aviation, for example, the majority of accidents and mishaps are attributed to human error. So how can we increase awareness of and engagement with HFE? This question is increasingly being asked not just by academics, but by commercial enterprises and government agencies, and it was the main topic of discussion at the 2024 NDIA Human Systems Conference I recently attended. There, OUSD(R&E) Director of Specialty Engineering Christopher DeLuca outlined updates to DoD Instruction 5000.95, “Human Systems Integration in Defense Acquisition,” which expand upon the activities that systems engineering development teams must engage in to enhance system maturity, safety, and effectiveness for human users. The goal of these updates is to provide policies that shift HFE from a “nice-to-have” to a “have-to-have” for technologies being developed with DoD acquisition as the end goal.
Additionally, there is growing discussion around broader adoption of human readiness levels (HRLs), defined in ANSI/HFES standard 400-2021, for the evaluation of technologies. HRLs were designed as an analog to the widely used technology readiness levels (TRLs), which provide a common, easy-to-reference standard for evaluating the technical maturity of a given technology. HRLs provide a similar scale for evaluating how mature a system is with respect to its readiness for human use. Establishing policies for system design and standards for evaluation are important steps toward seamlessly integrating HFE into systems engineering.
Five Human Factors Principles
All of this underscores that successful technology development is as much about understanding people and the unique contexts in which they exist as it is about technical expertise. I see the case studies described above as clear demonstrations of at least five HFE principles that are essential components of a successful technology development program. They are straightforward, even verging on obvious, but applying them in practice can be deceptively challenging.
1. Obtain a deep understanding of your users’ problems and needs before you settle on a solution. Google thought they had a user problem to solve, but maybe it wasn’t really the main problem at hand. More than a decade after Glass failed, we are hunching over our phones more than ever. Apparently, the need to keep our technology off our faces and preserve some normalcy in our face-to-face communication with other people supersedes the need to free our hands and gaze forward when messaging or taking photos.
2. Clearly define user stories and use cases for your solution. Marketing videos for Google Glass showed people skydiving, swinging on a flying trapeze, attending birthday parties, and eating food, all while filming their experiences from a first-person point of view. Was that it? If so, Google failed to clearly communicate how people could use, and benefit from, their radical new technology.
3. Consider the system you are developing in its operational context. The social impact of Google Glass was its downfall, from the standpoints of both privacy and social stigma. Care was taken to make the device lightweight and comfortable to wear, but HFE research does not end at ergonomics. A novel technology ultimately needs to be evaluated not just by individual users testing prototypes, but by individuals and teams in real-world contexts, where its usability can be realistically assessed and issues that are fundamentally hard to predict in complex environments can surface.
4. Plan for multiple iterations of testing and feedback. It’s practically impossible to get things right the first time with human users. The waterfall method of moving linearly from (1) defining requirements to (2) developing a system to (3) piloting that system to (4) deployment is unrealistic. How can you define human user requirements if you don’t work with users until piloting? What is the use of conducting a pilot study if the results aren’t fed back into development updates? Soliciting input from an end user base early in development is key to identifying human usability concerns in time to address them.
5. Establish performance metrics for your program that include operational capabilities. In the case of IVAS, the Army implemented an ideal evaluation strategy: comparing soldiers’ mission performance with the headset to their performance with their current equipment. This gets to the heart of the question: does this technology represent a meaningful improvement over standard practice? Such an approach represents a shift away from system evaluation based solely on typical computational performance metrics like accuracy and cycle time; a minimal sketch of this kind of operational comparison follows this list.
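To make that shift concrete, here is a minimal, hypothetical sketch in Python of reporting an operational metric, mission success rate, for a baseline and a candidate system side by side. The outcome data are made-up placeholders for illustration, not IVAS results.

```python
# Hypothetical sketch: compare operational mission outcomes with the
# current equipment (baseline) against outcomes with a candidate system.
# The outcome lists below are illustrative placeholders, not real data.

baseline_outcomes = [True, True, False, True, True, True, False, True]     # current equipment
candidate_outcomes = [True, False, True, True, False, True, False, False]  # new technology

def success_rate(outcomes):
    """Fraction of missions accomplished."""
    return sum(outcomes) / len(outcomes)

baseline_rate = success_rate(baseline_outcomes)
candidate_rate = success_rate(candidate_outcomes)

print(f"Baseline success rate:  {baseline_rate:.0%}")
print(f"Candidate success rate: {candidate_rate:.0%}")
print(f"Operational delta:      {candidate_rate - baseline_rate:+.0%}")
```

The point is not the arithmetic but the framing: the headline number describes mission outcomes relative to current practice, not component-level accuracy or cycle time.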
At Galois, we are putting these five HFE principles into practice in several programs, including DARPA PROVERS. The primary goal of PROVERS is to bring the benefits of formal methods for software verification into the hands of traditional software engineers in the defense industrial base (DIB). These engineers are experts in domains like aviation and software design, but not necessarily in applying formal logic to evaluate their code.
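To give a deliberately simplified flavor of what “applying formal logic to evaluate code” can involve, the sketch below uses the open-source Z3 SMT solver’s Python bindings; this is a generic toy example, not PROVERS tooling. It models a tiny absolute-value function and asks the solver for a counterexample to the claim that the result is never negative.

```python
# Minimal sketch (not PROVERS tooling): using the open-source Z3 SMT solver's
# Python API (pip install z3-solver) to prove a property of a tiny function.
from z3 import Int, If, Solver, unsat

x = Int("x")

# Logical model of the code under analysis: my_abs(x) = x if x >= 0 else -x
my_abs = If(x >= 0, x, -x)

# Ask the solver for a counterexample to "my_abs(x) >= 0".
# If it answers `unsat`, no counterexample exists for ANY integer input,
# so the property is proved rather than merely tested on a few cases.
solver = Solver()
solver.add(my_abs < 0)

if solver.check() == unsat:
    print("Verified: my_abs(x) is never negative")
else:
    print("Counterexample found:", solver.model())
```

Even in this toy case, the workflow (encode the code as logic, state the property as a formula, interpret the solver’s verdict) is a different mindset from writing and running tests, which hints at the usability gap described next.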
While the value of formal methods for verifying software has been demonstrated, the available tools have not been widely adopted across the DIB or commercial industry. Perhaps usability issues are part of the reason?
To close this gap, our PROVERS team includes expertise not only in formal methods, systems engineering, and software design, but also in HFE, human-computer interaction, and psychometrics. The multidisciplinary makeup of our team is a deliberate strategy to ensure that our technology strikes the right balance between technical innovation and human usability, so we can create formal-methods tools that help real-world engineers do real-world work.
When developing and evaluating a new system, we must include the human as a component in that system along with the environment in which the system is implemented. If we focus too narrowly on only advancing the technical side of the equation, we are likely to get stuck later trying to fix the human side of the equation, wasting valuable time and money in the process.
References:
https://www.wired.com/2013/12/glasshole/
https://www.forbes.com/sites/siimonreynolds/2015/02/05/why-google-glass-failed/?sh=711e2c5651b5
https://www.theguardian.com/technology/2013/mar/06/google-glass-threat-to-our-privacy
https://en.wikipedia.org/wiki/Google_Glass
https://taskandpurpose.com/news/army-ivas-goggles-headaches-nausea-neck-pain/
https://taskandpurpose.com/news/army-ivas-inspector-general-soldiers/
https://www.hf.faa.gov/role.aspx
https://www.esd.whs.mil/Portals/54/Documents/DD/issuances/dodi/500095p.PDF
https://digitalcommons.odu.edu/cgi/viewcontent.cgi?article=1151&context=emse_fac_pubs