There's a particular kind of operational pressure that builds when the cases never stop coming in and you have no reliable way to know which ones need to move right now.
That was the environment at our Office of Special Trial Counsel. OSTC handles what the Army calls covered offenses: domestic violence, sexual offenses, crimes against children, and similar felony-level charges. Every one of those cases demands serious legal review: examining the evidence, interviewing alleged victims, and potentially talking to additional witnesses, all before a disposition decision can even be made. And disposition isn't binary. A case that doesn't rise to the level of a court-martial doesn't just disappear. It gets sent back to the command for alternative action. That process takes time. Legal judgment. Coordination.
And the cases kept coming.
At any given point, our team was managing hundreds of open files. Some were clearly heading to trial. Some were clearly not. Most were somewhere in between, waiting on an interview, a piece of evidence, a victim who hadn't yet responded. Without a system that could make those distinctions visible at a glance, the default prioritization method was whoever was loudest, or whatever was oldest. Neither is a strategy.
This is the story of how we built something better, and what we had to learn the hard way to get there.
Before any dashboard existed, our case tracking lived in two places: MJO and whatever system each individual team member had invented for themselves.
MJO, Military Justice Online, is the official case management system for the JAG Corps. It's where case updates get logged, where the official record lives. It can generate reports, but those reports come out as Excel spreadsheets, and they only reflect the state of the data at the moment you pull them. If something changed an hour later, your report was already out of date. There was no live view. No at-a-glance status. You pulled a report when you needed information, worked from it, and hoped nothing significant had changed by the time you acted on it.
Outside of MJO, tracking was entirely informal. Sticky notes. Whiteboards. Notebooks. Some people were disciplined about it. Some weren't. None of it was shared or visible to the rest of the team.
What we didn't have was a triage system. There was no formal method for looking at the full caseload and making deliberate decisions about which cases needed immediate attention, which ones could move quickly toward deferral, and which ones just needed more time and information before anyone could make a call. Priority defaulted to age. The oldest case got attention first, regardless of complexity, regardless of where it stood in the review process, regardless of whether faster movement was even possible.
The idea didn't start with technology. It started with something a senior leader said in a training session about ten months ago.
He wasn't pitching a software solution. He was talking about mindset. The message was straightforward: look at your caseload as a whole, sort it into categories based on where each case stands, and work those categories differently. Don't treat every case like it demands the same level of attention at the same time. Triage your work.
It was practical advice, the kind that sounds obvious in retrospect but hadn't been operationalized in any formal way in our office. Cases came in, cases got worked, and the pile kept growing. The idea of deliberately sorting that pile and making strategic decisions about where to focus first wasn't something we had a system for.
What stuck with me after that session wasn't just the philosophy. It was the question of what it would look like if the triage happened automatically. If someone on the team could open a single view, see exactly which cases fell into which category, and get to work without having to mentally sort through hundreds of open files first. The senior leader was describing a way of thinking. I started thinking about whether we could build a tool that made that thinking unnecessary.
That's what pointed me toward Power BI.
The first version of the dashboard was built around one problem: triage.
The framework came directly from what our senior leader had described. Every case in our caseload could be sorted into one of three categories. Category one was for cases clearly going to trial. No ambiguity, no additional review needed. It was a court-martial case and everyone knew it. Category three was the opposite. These were cases that could be resolved quickly through what we called an accelerated deferral. A first-time domestic violence offense with a clear disposition path, or a dual domestic violence incident where there wasn't a clearly identified victim. Cases like these could be turned back to the command for alternative action without an extended review process. Category two was everything in the middle. Cases where we needed more information, a witness interview, additional evidence, or more time before anyone could make a confident call about where the case was going.
The dashboard made those categories visible. A team member could open it, filter by category, and immediately see which cases needed their attention without mentally sorting through the entire caseload first. Category one cases got the resources and focus appropriate for trial preparation. Category three cases got moved quickly. Category two cases got the investigative work they needed to become one or the other.
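The sorting rule itself is simple enough to sketch in a few lines. Here's a rough illustration in Python of the kind of logic the dashboard applies; the actual implementation lives in Power BI, and the flag columns and case IDs below are hypothetical stand-ins for the fields and judgment calls that actually drive the decision.

```python
import pandas as pd

def assign_category(case: pd.Series) -> int:
    """Sort a case into one of the three triage categories.

    The flag columns used here (clear_court_martial, accelerated_deferral)
    are hypothetical stand-ins for the fields and judgment calls that
    actually drive the decision.
    """
    if case["clear_court_martial"]:
        return 1  # clearly headed to trial
    if case["accelerated_deferral"]:
        return 3  # can go back to the command quickly
    return 2      # needs more information before anyone can make the call

# Illustrative caseload with made-up case IDs and flags.
cases = pd.DataFrame({
    "case_id": ["A-101", "A-102", "A-103"],
    "clear_court_martial": [True, False, False],
    "accelerated_deferral": [False, True, False],
})

cases["category"] = cases.apply(assign_category, axis=1)

# The working view: filter to the cases that still need investigative work.
needs_work = cases[cases["category"] == 2]
print(needs_work[["case_id", "category"]])
```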
It was a focused solution to a specific problem, and for that purpose it worked. What I didn't anticipate was how quickly it would surface the next set of problems our office needed to solve.
Once the triage dashboard was working, leadership saw it and immediately started asking questions it wasn't built to answer.
How are we doing overall? What does our caseload look like as a whole? What percentage of our cases are we actually moving through, and where are things slowing down? Those weren't questions the triage view was designed to address. It told you which cases to work. It didn't tell you how well the office was performing.
So the dashboard grew.
The second view became a caseload health dashboard. At a glance, it showed the overall composition of our caseload, the ratio of case categories, throughput over time, and where the bottlenecks were forming. Leadership could look at it and get an honest read on whether the office was moving cases efficiently or whether work was piling up somewhere in the process.
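The rollups behind that view are ordinary aggregations over the same case table. A minimal sketch of the kind of numbers it surfaced, again in Python with made-up columns and dates; the dashboard computes the Power BI equivalents.

```python
import pandas as pd

# Hypothetical export: one row per case, with open/close dates and category.
cases = pd.DataFrame({
    "case_id":  ["A-101", "A-102", "A-103", "A-104"],
    "category": [1, 3, 2, 2],
    "opened":   pd.to_datetime(["2024-01-05", "2024-02-10", "2024-03-01", "2024-03-20"]),
    "closed":   pd.to_datetime(["2024-04-02", "2024-03-01", None, None]),
})

# Composition: what share of the still-open caseload sits in each category.
open_cases = cases[cases["closed"].isna()]
composition = open_cases["category"].value_counts(normalize=True)

# Throughput: how many cases closed in each month.
closed_cases = cases.dropna(subset=["closed"])
throughput = closed_cases.groupby(closed_cases["closed"].dt.to_period("M")).size()

print(composition)
print(throughput)
```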
The third view came out of a specific operational requirement. We were required to make initial contact with victims within the first 30 days of a case being opened. We had no system for tracking that. The new dashboard added a time-based layer to the triage view, sorting cases into buckets based on how long they had been open. Under 30 days. Between 60 and 90 days. Over 90 days.
That last bucket mattered beyond just our internal operations. Cases over 90 days old were a benchmark headquarters used to evaluate office performance. A high number of aging cases flagged an office for additional oversight. A low number signaled that the office was running efficiently. For the first time, we could see exactly where we stood on that metric without pulling a manual report from MJO and counting rows in a spreadsheet.
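The age buckets are one calculation layered on top of the triage data: days since the case opened, cut at the thresholds the office is accountable for. A minimal sketch, assuming a hypothetical export with an opened date per case; the 30- and 90-day lines come from the requirements above, and the middle range is collapsed into a single bucket here for brevity.

```python
import pandas as pd

def age_bucket(days_open: int) -> str:
    """Bucket a case by how long it has been open.

    The 30- and 90-day thresholds come from the reporting requirements;
    the single middle bucket is a simplification for this sketch.
    """
    if days_open < 30:
        return "under 30 days"
    if days_open <= 90:
        return "30 to 90 days"
    return "over 90 days"

# Hypothetical open-case export with made-up dates.
open_cases = pd.DataFrame({
    "case_id": ["A-101", "A-102", "A-103"],
    "opened":  pd.to_datetime(["2024-01-05", "2024-03-01", "2024-03-28"]),
})

today = pd.Timestamp("2024-04-15")
open_cases["days_open"] = (today - open_cases["opened"]).dt.days
open_cases["age_bucket"] = open_cases["days_open"].apply(age_bucket)

# The metric headquarters watches: open cases past the 90-day line.
over_90 = int((open_cases["days_open"] > 90).sum())

print(open_cases[["case_id", "days_open", "age_bucket"]])
print(f"Cases over 90 days: {over_90}")
```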
What started as a single triage tool had become three connected dashboards, each answering a different question for a different audience on the team.
The dashboard worked. That part is true. But the way I released it is something I'd do differently.
Once I had something that was functional and looked good, I started sharing it with everyone I could think of. I was proud of it. I'd built something from nothing, solved a real problem, and created a tool that our office genuinely didn't have before. So I showed it off. I pushed it out before it was fully tested, before I had gathered structured feedback, before I had any real sense of how people outside of my own workflow would actually use it.
The tool held up, which was fortunate. But there were things that could have been caught earlier if I had approached the release more deliberately. Edge cases in the data. Filters that made sense to me but confused other users. Layouts that worked on my screen but didn't translate well to someone else's setup. I found out about most of these things after people were already using the dashboard, which meant fixes had to happen in the background while the tool was live.
The mindset I was missing had a name. I just didn't know it yet. Agile development, specifically the principle of building incrementally, testing with real users early, and treating the first version as a starting point rather than a finished product. I learned about those practices after this project was already done. Looking back, they describe exactly what I should have been doing from the beginning.
The goal should have been to solve a problem. Instead, somewhere in the process, it became about proving I could build something. Those are different objectives, and they lead to different decisions.
The adoption didn't happen all at once, and it didn't start at the top.
In the beginning, it was just me and one other paralegal using it. We had built it, we understood it, and we could see immediately how it changed the way we approached the day. Instead of opening MJO, pulling a report, and manually working through a spreadsheet to figure out where to focus, we opened the dashboard and the prioritization was already done. We used it to help the attorneys on our team understand which cases needed their attention and in what order.
Then leadership saw it. Then the attorneys saw it. And the questions shifted from "what is this" to "why aren't we all using this."
The contrast was stark enough that it didn't require a sales pitch. Before the dashboard, there was no shared view of the caseload. Everyone was working from their own mental model, their own notes, their own version of what was most urgent. After it, the entire team was looking at the same information. Triage decisions became conversations grounded in data rather than instinct or seniority. Cases that could be moved quickly got moved. Cases heading to trial got the sustained attention they needed earlier in the process.
The 90-day metric told the clearest story. Once we could see it in real time, we could act on it in real time. Cases that were approaching that threshold didn't sneak up on us anymore. We saw them coming and could make deliberate decisions about how to handle them before they became a performance issue.
It went from no system to a system that worked. That sounds simple. In practice it changed how the office operated every day.
If I were starting this project over, I'd go in with a completely different relationship to the first version.
The mistake wasn't building quickly. It was treating the first working version like a finished product. I didn't build in a testing period. I didn't create a feedback loop. I didn't give a small group of users time to find the edges of the tool before I pushed it out to everyone. I went from "this works" to "everyone should see this" without the step in between that would have made it significantly better before it went live.
I know what that step looks like now. Build something functional, share it with two or three people who will actually use it and tell you honestly what's broken, incorporate their feedback, and repeat that cycle before you ever think about a wider release. The product that comes out the other side of that process is better. More importantly, the people using it trust it more because they were part of making it work.
I'm applying that directly to the next project. I'm currently building a second Power BI dashboard that tracks court-martial case progress and team workload across multiple offices. A few people know it exists. Most don't. I'm using it myself, finding the gaps, fixing them, and refining the logic before anyone else's workflow depends on it. That's a different discipline than what I brought to the first project, and it's already producing a more stable tool.
The other thing I'd change is simpler. Go in with the explicit goal of solving a problem, not building something impressive. Those can feel like the same thing in the early stages of a project, but they lead to different decisions at every step. One keeps you focused on the user. The other keeps you focused on yourself.
The dashboard we built isn't remarkable as a piece of technology. Power BI is widely available, the data was already sitting in MJO, and the triage logic came from a framework a senior leader described in a training session. None of it required specialized skills that most legal ops practitioners couldn't develop.
What it required was someone willing to look at an operational problem and ask whether a tool could solve it. That question doesn't get asked enough in legal environments. The assumption tends to be that legal work is too complex, too judgment-dependent, or too sensitive to systematize. Sometimes that's true. But often the real barrier isn't complexity. It's just that nobody has sat down and built the thing yet.
If your office is tracking caseload in spreadsheets, or relying on individual team members to maintain their own systems, or making triage decisions based on whoever is pushing hardest that week, there's probably a dashboard that would change how your team operates. It won't replace legal judgment. It will free up more of it for the cases that actually need it.
That's most of what I learned.