A lack of human-centered design

Most of the PPBE process is managed through in-person meetings, email, printed materials, and bespoke Excel spreadsheets. Budget and program data is repeatedly converted to and from PDF and other formats, and often manually retyped along the way. The current process is inefficient, difficult to navigate, and impossible to track accurately over time. Despite the best efforts of most participants, it is not possible to maintain data integrity in the current system.

Each DoD program office builds and maintains internal tools for its own needs and to its own specifications. This results in inconsistent information across DoD offices. Demonstrations of these tools showed serious usability problems. Despite this, the DoD has begun granting Congressional staff access to some of these tools. Simply sharing these tools does not mean Congressional staff can or will use them effectively. They have not been trained to use these systems, nor do they have the time to learn the intricacies of every individual system to which the DoD might grant them access.

Currently, select Congressional staff have access to an enclave pilot project developed by the DoD Chief Digital and Artificial Intelligence Office. The pilot was built using tools and data from Advana and is housed on unclassified infrastructure (Impact Level 2, or IL 2) called the Secure Unclassified Network (SUNet). The pilot currently contains three applications: Historical Selected Acquisition Reports (SAR), the Defense Acquisition Visibility Environment (DAVE), and Middle Tier of Acquisition (MTA) programs. Advana manages enclave pilot access via username, password, and a two-factor authentication code, and also provides a basic user interface for the pilot.

During our research, we discovered that the DoD had granted 12 individual users access to the enclave pilot. Only four had ever successfully logged in. To explain the lack of adoption, participants pointed to password timeouts, a lack of technical knowledge, nonexistent training, and the burden of learning new programs. Others were simply unaware they had been granted access. Based on demos, we found the enclave pilot is hard to use, contains limited data, and performs poorly. Demonstrations included sizable, unexplained errors with little recourse for confused users. Our research could not determine whether anyone was still using the existing pilot in a meaningful way.

Despite the lack of adoption of earlier programs, the DoD plans to grant Congressional staff access to more tools, such as the Congressional Hearings and Reporting Requirements Tracking System (CHARRTS). The DoD created CHARRTS to track deadlines and reporting requirements contained within the National Defense Authorization Act (NDAA), the Defense Appropriations Bill, and other relevant legislation. Congress has requested access to CHARRTS to gain insight into how the DoD is managing Congressional requirements. The DoD promised Congressional staff access to CHARRTS but has yet to deliver. Even so, we were told that CHARRTS data will become part of the future enclave.

CHARRTS has significant usability problems, very few dedicated resources, and no human-centered design (HCD) capacity. The future of the application is also unclear. CHARRTS is an excellent example of how simply granting access to an existing system does not satisfy Congressional needs.

Ease of use

Congressional staff have requested access to DoD systems, but applications such as the enclave pilot and CHARRTS are difficult for even experienced users to operate. Seasoned DoD staff struggled to interact with these systems while leading product demos. The systems returned results that were sometimes incorrect or incomplete, and even when the results were accurate and complete, much of the information contained in these systems is not relevant to Congressional staff.

CHARRTS, for example, holds many years of historical data, but several participants indicated that this historical data is of limited use because reprogramming changes budgets over time. Furthermore, CHARRTS contains multiple versions of the same PDF, creating a confusing collection of nearly-but-not-quite-identical documents. Without context and training, access to CHARRTS is unlikely to provide satisfactory insight into how the DoD is responding to Congressional budget requirements.

Likewise, navigating Advana requires data science skills and a deep knowledge of DoD budget minutiae. It is a powerful tool for some users, but for Congressional staff without such knowledge or skills, Advana is effectively unusable. Advana’s capacity to spin up a virtually unlimited number of applications may make it a useful internal DoD tool, but it comes with significant risks: inconsistent taxonomy, complex and jargon-driven navigational structures, a lack of useful metadata, and inaccessible user interfaces. The apparent lack of oversight creates a steep and worsening learning curve for non-expert end users, whether they are DoD or Congressional staff. Advana is designed for people who have the time and the need to become experts in the system itself and the skill to navigate a repository of unstructured data. While it represents a leap forward for the DoD’s use of modern data tools and infrastructure, it is not designed for Congressional staff and cannot meet their needs.

User interface

Navigating the enclave pilot and CHARRTS is difficult due to poor user interface design and a lack of predictable functionality. Both systems contain major user interface issues such as broken features, inconsistent interactive elements, erroneous data results, and unintuitive system behaviors. CHARRTS, for example, has an extremely antiquated user interface, with small text, low-contrast design elements, and older, table-style HTML pages.

The enclave pilot contains three applications, each with its own user interface issues. During product demos we observed low-contrast colors, inconsistent button styles, unexpected animations, non-standard navigation methods, and form fields that were too small. These systems likely do not meet the Federal government’s own accessibility guidelines (Section 508, which incorporates the WCAG standards). Each application within the enclave pilot has its own look and feel; there are no consistent design patterns. What little taxonomy exists in the underlying Advana platform is not written in plain language. Instead, it uses Advana-specific acronyms and language unfamiliar to non-experts.
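
At least one of these failures is straightforwardly measurable. Below is a minimal sketch, using the WCAG 2.x contrast formula that the Section 508 standards incorporate, of how a palette could be checked before it ships; the light-gray-on-white colors are hypothetical stand-ins for the low-contrast palettes we observed, and WCAG AA requires at least a 4.5:1 ratio for normal body text.

```python
# Minimal sketch of a WCAG 2.x contrast check. The formulas (relative
# luminance and contrast ratio) come from the WCAG specification; the
# example colors are hypothetical.

def _linearize(channel: int) -> float:
    """Convert an 8-bit sRGB channel to linear light, per WCAG 2.x."""
    c = channel / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb: tuple[int, int, int]) -> float:
    r, g, b = (_linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: tuple[int, int, int], bg: tuple[int, int, int]) -> float:
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)

# Light gray text on a white background, as seen in many legacy interfaces:
ratio = contrast_ratio((170, 170, 170), (255, 255, 255))
print(f"{ratio:.2f}:1")  # ~2.32:1 -- well below the 4.5:1 AA minimum
```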

User feedback

Congressional staffers we spoke with said they had not been asked to provide input on what was built in the enclave pilot. During product demonstrations, the Advana team referred to user stories driving pilot functionality, but it was unclear whether they had spoken to Congressional staff. Without direct input from Congressional users, it is hard to know where these user stories came from. Advana’s team structure and HCD capacity are unclear. Even if the Advana team has dedicated HCD resources, they are not reaching out to users in a proactive, collaborative, or productive way.

For Congressional staff to adopt and use the enclave, they will need to change their current workflows and behaviors. They will not do this with a system that is difficult to use or does not meet their needs. Building a system without their direct feedback and involvement will result in continued and additional usability issues.

Building trust through design

CHARRTS and the enclave pilot reinforce the distrust between Congress and the DoD through poor performance. During our CHARRTS demo, key features were clearly broken: advanced search functionality had not worked for an undetermined amount of time, and several features in the search menu had been deprecated but never removed from view. If Congress were given access to CHARRTS, it would cause significant frustration, as it did for the people demonstrating the product.

Likewise, the enclave pilot contains historical acquisition data but provides no insight into how up to date or accurate that data is. When the system fails, which it does, it is difficult for users to diagnose how, why, or what to do about it. During our demo, the Acquisition Budget Estimate application failed to pull data correctly, displaying an alarming $10 billion deficit. No on-screen warning or explanation was available to the user, and even the DoD staff leading the demo could not explain the error. Users were simply expected to guess that the data was incorrect. Alternatively, to verify the information, a user would need to reach out to a developer for support, a path the system also did not make clear. Notably, this $10 billion error occurred in a system that is currently live and accessible to Congressional staff. These types of unexplained errors sabotage user trust, reinforcing the appearance that the DoD is improperly tracking its spending or hiding information from Congress.
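
As a minimal sketch of the kind of guardrail whose absence we observed, the example below refuses to render an implausible figure and instead explains the problem in plain language. The field names, the sanity check, and the support wording are hypothetical illustrations, not the enclave’s actual design.

```python
# Sketch (not the enclave's actual code) of surfacing a suspect data pull
# to the user instead of silently rendering a bad number as fact.
from dataclasses import dataclass

@dataclass
class BudgetFigure:
    label: str
    amount_usd: float | None  # None when the data pull failed outright

def render_figure(fig: BudgetFigure) -> str:
    # Treat a failed pull and an implausible value the same way:
    # tell the user plainly and point them to support.
    if fig.amount_usd is None:
        return (f"{fig.label}: data unavailable -- the last data pull failed. "
                "Contact the support desk.")
    if fig.amount_usd < 0:
        return (f"{fig.label}: figure withheld -- the pulled value "
                f"(${fig.amount_usd:,.0f}) is negative, which may indicate "
                "a failed data pull. Contact the support desk.")
    return f"{fig.label}: ${fig.amount_usd:,.0f}"

# The $10 billion discrepancy described above would surface as a warning:
print(render_figure(BudgetFigure("Acquisition Budget Estimate", -10_000_000_000)))
```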

Additionally, the enclave pilot exhibited extremely slow performance, taking several minutes to complete queries and load pages. By contrast, Google considers load times over two seconds unacceptable, and a 2017 Google report indicated that increasing load times from one second to three raised the probability of users abandoning a page by roughly a third. During our demo, single pages took over five minutes to load. When asked how Congressional staff would go about reporting a problem, the Advana team advised that users would need to contact a developer to report a broken data link or to find out when a data set was last updated. The system gave no indication of how staff would contact a developer.
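
Checking pages against a load-time budget requires very little tooling. The sketch below, with a hypothetical URL and the two-second budget cited above, shows one way such monitoring could work; it is not how the enclave is actually instrumented.

```python
# Minimal sketch of a load-time budget check. The URL is hypothetical
# and the two-second budget reflects the benchmark cited above.
import time
import urllib.request

LOAD_BUDGET_SECONDS = 2.0

def timed_fetch(url: str) -> float:
    """Fetch a page and return the elapsed wall-clock time in seconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=600) as response:
        response.read()
    return time.perf_counter() - start

elapsed = timed_fetch("https://example.gov/enclave/dashboard")  # hypothetical URL
status = "within budget" if elapsed <= LOAD_BUDGET_SECONDS else "over budget"
print(f"Loaded in {elapsed:.1f}s ({status})")
```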

User adoption of the pilot is low because the system is poorly designed, inaccurate, and unreliable. Users will only adopt systems they can easily use. Long load times, undiagnosable errors, and data of unknown accuracy generate more questions than answers; they add frustration and time to an already difficult process, which ultimately leads to distrust.

The future enclave will need to pull data at a reasonable cadence and show users system statuses in an intuitive, straightforward way. It will need to deliver results quickly, ensure accuracy, and allow users to understand, holistically, what they are looking at. It will also need to provide customer support and display error messages in plain language, without forcing users to guess at meanings, acquire specialized knowledge, or rely on developer intervention.
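
As one illustration of what such a status display could look like, the sketch below summarizes, per data source, when data was last pulled and whether it is current. The source names and the seven-day freshness window are hypothetical assumptions, not a specification.

```python
# Sketch of a per-source status summary of the kind recommended above.
# Source names and the freshness window are hypothetical.
import json
from datetime import datetime, timezone

def status_panel(sources: dict[str, datetime], max_age_days: int = 7) -> str:
    """Return a JSON summary showing each source's freshness in plain terms."""
    now = datetime.now(timezone.utc)
    panel = []
    for name, refreshed in sources.items():
        age = (now - refreshed).days
        panel.append({
            "source": name,
            "last_refreshed": refreshed.strftime("%d %b %Y"),
            "status": "current" if age <= max_age_days else f"stale ({age} days old)",
        })
    return json.dumps(panel, indent=2)

print(status_panel({
    "Selected Acquisition Reports": datetime(2024, 3, 1, tzinfo=timezone.utc),
    "CHARRTS reporting requirements": datetime(2023, 11, 20, tzinfo=timezone.utc),
}))
```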

