A four-page Power BI system that replaced static MJO exports with live triage visibility — reducing an active caseload of 170 files to a sustained 100–120 at a US Army JAG Corps field office.
OSTC, the Army's Office of Special Trial Counsel, handles covered offenses under Army jurisdiction: domestic violence, sexual assault, crimes against children, and other felony-level charges. Every case requires formal legal review before a disposition decision can be made. Cases that don't go to court-martial don't simply close; they get deferred back to the command for alternative action, a process that takes time, legal judgment, and coordination.
At any point, the office was managing roughly 170 active files. Some were trial-bound. Some were candidates for deferral. Most were somewhere in between, waiting on a victim interview, a lab result, or additional evidence before anyone could make a confident call.
The tracking system was MJO (Military Justice Online), the official JAG case management platform. MJO can generate reports, but only as static Excel exports: snapshot data only. No live view. No triage logic. No way to look at the full caseload and make deliberate decisions about where to focus first. Priority defaulted to case age. The oldest file got attention first, regardless of where it stood in the review process or whether faster movement was even possible.
That had been the default for nearly two years. The caseload sat at 170 cases and stayed there.
This project was not part of my assigned duties. I built it on top of my regular workload, without a directive or a dedicated budget.
The idea came from a senior leader who described a triage mindset during a training session: sort your caseload into categories, work those categories differently, stop treating every case like it demands the same attention at the same time. The framework made sense. What I heard was a design problem.
I had been working with Power BI on other projects. I knew the tool could make that framework visible. I took the initiative, designed the solution, and built it myself from start to finish. As the dashboard evolved, teammates used it and brought feedback. They asked for additional views. I built those. The final product reflects their input, but every design and build decision was mine.
The triage framework already existed in concept. What didn't exist was a tool that made it visible. I built one in Power BI, using MJO Excel exports as the data source. That was version one. What exists now is a four-page operational dashboard built on three connected data sources.
The first source is the OSTC Case Tracker, an Excel export from MJO covering 114 active cases across 22+ columns. Power Query adds a Priority Level column using cascading text extraction: it checks the Latest Update and Summary fields for category tags and assigns each case to a triage bucket automatically. DAX calculated columns handle Days Until ETS (Expiration of Term of Service), ETS flags, and aging buckets. DAX measures handle the KPI outputs: total actions, cases over 120 days, percent over 120, average days open, and ETS alert counts.
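The cascading extraction can be sketched in Power Query M along these lines. The tag strings, bucket names, and step names here are illustrative assumptions, not the production values:

```m
// Sketch of the cascading Priority Level logic: check Latest Update
// first, fall back to Summary, default to a review bucket.
AddPriority = Table.AddColumn(Promoted, "Priority Level", each
    let
        latest  = Text.Upper([Latest Update] ?? ""),
        summary = Text.Upper([Summary] ?? "")
    in
        if Text.Contains(latest, "[TRIAL]") then "Trial-Bound"
        else if Text.Contains(latest, "[DEFER]") then "Deferral Candidate"
        else if Text.Contains(summary, "[TRIAL]") then "Trial-Bound"
        else if Text.Contains(summary, "[DEFER]") then "Deferral Candidate"
        else "Pending Review",
    type text)
```

Doing this in Power Query rather than DAX keeps the bucket available at refresh time for every downstream page.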
The second source is a derived Case Personnel query. The MJO export stores personnel assignments in a multi-line field. That field gets unpivoted into one row per person per case, producing 437 rows linked back to the case table by a CaseID index. This enables the dashboard to filter by team member and display individual workload distribution across pages.
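A minimal sketch of that unpivot, assuming the multi-line field is named "Personnel" and people are separated by line feeds (the real column names may differ):

```m
// Split the multi-line personnel field into one row per person per
// case, keyed back to the case table by CaseID.
CasePersonnel =
    let
        base  = Table.SelectColumns(Cases, {"CaseID", "Personnel"}),
        lists = Table.TransformColumns(base,
                    {"Personnel", each Text.Split(_ ?? "", "#(lf)"), type list}),
        rows  = Table.ExpandListColumn(lists, "Personnel"),
        clean = Table.SelectRows(rows, each Text.Trim([Personnel]) <> "")
    in
        Table.RenameColumns(clean, {{"Personnel", "Team Member"}})
```

The one-row-per-person shape is what lets a single team-member slicer drive workload views on every page.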
The third source is a Microsoft Lists integration pulling directly from the OSTC 7th Circuit Hawaii Field Office SharePoint site. It tracks the deferral request workflow and required significant Power Query engineering. SharePoint's nested Person/Group fields expand into duplicate rows by default; the fix was a custom M formula that reads the display name from the nested table structure without expanding rows. Relative Teams links were repaired by prepending the base URL in Power Query and setting the field's data category to Web URL, making them clickable from inside the dashboard.
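Both fixes can be sketched in M roughly as follows. The field names ("AssignedTo", "Teams Link") and the base URL are assumptions for illustration:

```m
// Sketch of the Person-field and relative-link fixes.
let
    // Read the display name out of the nested Person table without
    // expanding it, so each request stays a single row.
    withName = Table.AddColumn(Source, "Assigned To",
        each try [AssignedTo]{0}[Title] otherwise null, type text),
    // Prepend the base URL so relative links resolve; the column is
    // then set to the Web URL data category in the model.
    baseUrl  = "https://teams.microsoft.com",
    withLink = Table.TransformColumns(withName,
        {"Teams Link", each
            if _ <> null and Text.StartsWith(_, "/")
            then baseUrl & _ else _, type text})
in
    withLink
```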
Circular dependency errors that would have broken the sort logic were resolved by building all sort columns in Power Query rather than DAX. The Status bar chart on the Deferral Request page sorts in workflow order using a Power Query integer sort column mapped to each stage.
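The sort key itself is a plain lookup built during refresh. The stage names below are assumptions; the pattern is what matters:

```m
// Workflow-order sort key built in Power Query, so the visual's
// Sort By Column setting can't trigger a DAX circular dependency.
StageRank = [#"Submitted" = 1, #"Legal Review" = 2,
             #"Pending Action" = 3, #"Approved" = 4, #"Deferred" = 5],
WithSort  = Table.AddColumn(Requests, "Status Sort",
    each Record.FieldOrDefault(StageRank, [Status], 99), Int64.Type)
```

Unknown stages fall to the bottom (rank 99) instead of breaking the chart.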
Four KPI cards give the immediate picture: total cases, average days open, percent over 120 days, and ETS alerts. Two bar charts break the caseload down by priority level and by garrison command. An ETS alert table surfaces cases within 90 days of a soldier's separation date, sorted by urgency. This page is the first view leadership sees.
Screen recording — coming soon
The daily operational page. Slicers for garrison command, team member, accused keyword, and priority level let any team member scope the view to their own workload in seconds. A case table with the Latest Update column visible gives at-a-glance status without opening MJO. This is the page the team opens first every morning.
Screen recording — coming soon
Workload management view. Aging bucket slicers filter the entire page. Two pie charts show the distribution by bucket and by priority level. A bar chart shows cases by team member using a workload measure built from the unpivoted Case Personnel table. A contact table filters by triage bucket for direct follow-up planning.
Screen recording — coming soon
The most recent addition, addressing a gap the original tool didn't reach. The deferral intake workflow previously lived only in a SharePoint list — finding a case, determining its current stage, and identifying the next action required several minutes of navigation per request. This page puts that entire workflow on one screen. KPI cards show pending reviews and ETS alerts. A status bar chart sorted in workflow order shows where requests are concentrated. Team members can identify their cases, see the current stage, and open the linked MJO record or Teams conversation directly from the table. What previously took minutes now takes seconds.
Screen recording — coming soon
Before the dashboard existed, the office had been carrying roughly 170 active cases for nearly two years. The caseload had no visible path down. Cases were worked in order of age, not in order of what could actually move.
After the triage tool launched, the active caseload dropped to between 100 and 120 cases, where it has held since. That reduction is directly attributable to the dashboard. Once the team could see which cases were candidates for rapid deferral and which ones were genuinely waiting on something, the work got sorted faster — and cases that could close did.
Triage decisions that had previously defaulted to instinct or seniority became conversations grounded in the same data. Every team member was looking at the same view. Cases heading to trial got sustained attention earlier. Cases that could resolve quickly did. The 120-day threshold the office monitors against headquarters benchmarks is now tracked in real time — cases approaching it don't arrive as surprises.
The Deferral Request Portal addressed a separate bottleneck that only became visible after the first tool changed how the team worked. The intake workflow had required several minutes of SharePoint navigation per request. The portal page makes that workflow visible and filterable in one view. Scoping to your own cases and identifying the current stage now takes seconds.
Screen recordings of all four dashboard pages will be embedded in Section 03 above, alongside the description of each page. Each recording walks through the page in operation: filters applied, data visible, and navigation demonstrated.
Individual case records and personally identifiable information have been sanitized prior to publication. Unit and garrison identifiers visible in the recordings are general organizational designations and do not contain sensitive case information.
The first version of this tool had a real problem in how it was released. It worked, and I pushed it out before structured testing, before a feedback loop, before anyone outside my own workflow had a chance to find its edges. I was proud of what I'd built. That got in the way of building it better.
The revamp was different. Every addition was tested before it expanded. The data model was validated at each stage. The SharePoint integration, the unpivoted personnel table, the clickable links, the custom sort logic — each one was working correctly before the next piece was added. No wide release until the logic was stable.
That's not a process I had a name for when I built the first version. I do now. Build incrementally. Test with real conditions before expanding scope. Treat the first working version as a starting point, not a finished product.
The other thing the revamp confirmed: the original dashboard solved the right problem but left part of the workflow untouched. The deferral intake process was still living in a SharePoint list that took minutes to navigate. That gap was visible because the first tool had changed how the team worked well enough that the next bottleneck became obvious. Good tools surface the next problem. That's how you know they worked.