Catherine Grabar

Axon

Report Assistant

AI-assisted form completion in a high-stakes law enforcement workflow

How can we responsibly reduce the mental effort and the time officers spend writing incident reports?

Role: Lead designer, partnering closely with PM, engineering, scientists, and UX research

Timeline: 12+ months

Product status: piloting with select police agencies

 

I led design for an AI-assisted form-filling workflow that helps law enforcement officers complete incident reports more efficiently without compromising accuracy, judgment, or trust.

 

The work focused on defining clear boundaries for automation, designing explicit human-in-the-loop review, and integrating AI assistance directly into existing workflows to drive adoption in a high-risk environment.

User problem

Officers spend a substantial portion of every shift completing detailed incident reports. This time burden reduces time spent in the community and contributes to job dissatisfaction.

 

Additionally, report quality varies significantly across officers within the same agency. Inconsistencies in completeness and accuracy can create downstream issues for records management, investigations, and compliance workflows.


Constraints

Although this work originated within the Axon Records team, limiting the experience to our native report writer would significantly restrict market reach.

 

To be viable, the solution needed to:

  • Work across a wide range of agency-specific forms
  • Function outside of Axon Records for agencies using third-party systems
  • Avoid deep, brittle integrations with every form system

 

These constraints directly shaped the product architecture and UX, leading to a browser-extension-based approach alongside native integration.
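To make the constraint concrete, here is a minimal sketch (in TypeScript, assuming a content-script-style browser extension) of how suggestions might be attached to arbitrary agency forms by matching visible field labels instead of system-specific IDs. The fetchSuggestions call and the label-matching heuristic are illustrative assumptions, not the shipped implementation.

```typescript
// Minimal sketch of a content script that attaches suggestions to a
// third-party report form. Matching fields by their visible label, rather
// than by system-specific IDs, is the assumption that avoids a deep,
// brittle integration per form system.
type Suggestion = { fieldLabel: string; value: string };

// Hypothetical backend call: returns field-level suggestions derived from
// the evidence the officer selected.
declare function fetchSuggestions(evidenceIds: string[]): Promise<Suggestion[]>;

// Find an input by the text of its <label>, which tends to be stable across
// agencies even when underlying IDs and markup differ.
function findFieldByLabel(label: string): HTMLInputElement | null {
  for (const el of Array.from(document.querySelectorAll("label"))) {
    if (el.textContent?.trim().toLowerCase() === label.toLowerCase()) {
      const forId = el.getAttribute("for");
      if (forId) return document.getElementById(forId) as HTMLInputElement | null;
    }
  }
  return null;
}

async function attachSuggestions(evidenceIds: string[]): Promise<void> {
  const suggestions = await fetchSuggestions(evidenceIds);
  for (const s of suggestions) {
    const input = findFieldByLabel(s.fieldLabel);
    // Suggestions are only staged on the element; nothing is written into
    // the field until the officer explicitly accepts it.
    if (input) input.dataset.suggestion = s.value;
  }
}
```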

Design Principles

Three principles guided the work and recur throughout this case study:

  • Support documentation, not judgment: the assistant helps officers capture facts and never fills fields that require legal interpretation.
  • Explicit review of every suggestion: nothing enters a report without a deliberate action from the officer.
  • Meet officers where they already work: assistance appears inside existing report forms rather than in a parallel tool.

Solution Overview

The Report Assistant generates field-level suggestions from evidence the officer selects, then lets the officer review and insert those suggestions directly into the report.

 

To meet the constraints above, the experience is delivered two ways: natively within Axon Records, and as a browser extension that overlays the same assistance onto third-party report forms without requiring deep, per-system integrations.

The core flow: evidence selection → suggestions → insert into the report.

Responsible AI and guardrails

One important design decision was defining what not to automate.

 

Fields involving legal interpretation or judgment, such as role classification and offense selection, were intentionally excluded from AI suggestions. Automating these fields risked introducing bias and encouraging officers to defer judgment to the system.

 

Through internal debate and direct conversations with officers, we aligned on a clear boundary: AI should support documentation, not replace professional judgment.

An example of fields intentionally excluded from AI suggestions in an incident report: role and offense.
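This boundary can be expressed as a simple filter that runs before any suggestion reaches the form. The sketch below is illustrative only; the field names and the shape of the suggestion object are assumptions, not the product's actual schema.

```typescript
// Illustrative guardrail: fields that require legal interpretation or
// judgment are dropped from model output before suggestions are rendered.
// The field names below are examples, not the real schema.
const JUDGMENT_FIELDS = new Set(["role", "offense", "offense_classification"]);

interface FieldSuggestion {
  fieldName: string;
  value: string;
}

// Judgment fields are removed entirely (not merely flagged), so the officer
// always completes them unassisted.
function filterSuggestions(suggestions: FieldSuggestion[]): FieldSuggestion[] {
  return suggestions.filter((s) => !JUDGMENT_FIELDS.has(s.fieldName.toLowerCase()));
}
```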

Key design decision: how does an officer review suggestions?

A pivotal and controversial decision centered on how AI-generated suggestions should be reviewed and inserted into reports.

Initial prototype: bulk review and approval

Our first prototype required officers to review all suggestions in a separate UI and bulk approve them to insert. This approach minimized engineering effort and allowed us to quickly test the concept in the field.

The initial bulk review experience: suggestions are reviewed and approved together in a separate UI before being inserted into the report.

During field observations, I noticed a concerning pattern: officers carefully reviewed suggestions in the separate UI, but still caught and corrected errors only after re-reading them in the report itself.

 

This indicated that the review process, while explicit, was not sufficiently rigorous given the consequences of mistakes.

Exploring alternatives

I identified two more rigorous review models:

  1. Bulk approval section by section
  2. Inline review and approval within the form fields themselves

 

Both increased rigor, but differed significantly in workflow disruption.

Diagram of the two alternative review models.

Final direction: inline, field-by-field approval

I advocated for a lightweight, field-by-field approval model where suggestions appear inline as ghost text, similar to email type-ahead. Officers can:

  • Press Tab to accept
  • Start typing to dismiss

 

This approach:

  • Required explicit approval for every suggestion
  • Added no extra steps when suggestions were incorrect
  • Minimized deviation from officers’ existing workflows
  • Preserved time savings where suggestions were accurate

Inline suggestions appear as ghost text directly within the form fields.
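As a rough illustration of the interaction, the sketch below wires a ghost-text suggestion to a single input: Tab accepts, any other keystroke dismisses. The markup and class names are assumptions made for the example, not the shipped extension.

```typescript
// Illustrative ghost-text interaction for one field: Tab accepts the
// suggestion, any other keystroke dismisses it.
function attachGhostText(input: HTMLInputElement, suggestion: string): void {
  const ghost = document.createElement("span");
  ghost.className = "ghost-suggestion";
  ghost.textContent = suggestion;
  input.insertAdjacentElement("afterend", ghost);

  input.addEventListener("keydown", (event: KeyboardEvent) => {
    if (!ghost.isConnected) return;
    if (event.key === "Tab") {
      // Explicit accept: Tab fills the field with the suggestion.
      event.preventDefault();
      input.value = suggestion;
    }
    // Accepting, or typing anything else, both clear the ghost text,
    // so an incorrect suggestion costs the officer no extra steps.
    ghost.remove();
  });
}
```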

To validate usability quickly, I built a lightweight prototype site (add link) and tested it live with officers during a weekly agency partner sync. Officers completed tasks while sharing their screens, and feedback was immediate and clear: the interaction felt intuitive and low-friction.

 

Although I initially received pushback from stakeholders concerned about complexity and implementation effort, subsequent pilot observations confirmed that integrating AI directly into the existing workflow, rather than introducing a parallel one, was critical for adoption.

 

This insight ultimately aligned the team and became the agreed-upon direction.

Outcomes (pilot signals)

While the product is still in pilot, early signals have been encouraging:

 

  • Reduced report completion time
  • Consistent usage of the tool throughout the pilot
  • Positive feedback from officers
  • High acceptance of AI assistance with explicit controls
  • Increased willingness to complete historically underutilized forms

Why this matters

This project extended AI assistance beyond narrative drafting into more structured and higher-risk parts of the reporting workflow. It clarified how boundaries, review mechanisms, and workflow integration determine whether AI tools are merely impressive or genuinely adoptable in regulated environments.

Let’s work together
