Macro/VBA

A macro for a smaller health plan that customizes data based on location values from 1 to 10, adding columns for location and remit information. It splits rows that list multiple locations into separate rows, backfills the corresponding data into each new row, and loops through all rows to handle any additional locations. The final steps clean up the output by removing blank columns and addressing problematic values as needed.
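The row-splitting step described above can be sketched outside VBA. A minimal JavaScript sketch, assuming each row stores its location codes as a comma-separated string in a single field (the field names `location`, `member`, and `remit` are illustrative, not the actual workbook columns):

```javascript
// Hypothetical sketch: each input row may list several location codes
// (1-10) in one cell; emit one row per code, backfilling the remaining
// fields onto every new row.
function splitByLocation(rows) {
  const out = [];
  for (const row of rows) {
    const codes = String(row.location)
      .split(",")
      .map((s) => s.trim())
      .filter(Boolean);
    for (const code of codes) {
      // Copy (backfill) all other columns, replacing only the location.
      out.push({ ...row, location: Number(code) });
    }
  }
  return out;
}

// Example: one row listing locations "3,7" becomes two rows,
// each carrying the same member and remit data.
const expanded = splitByLocation([
  { member: "A100", location: "3,7", remit: "PO Box 12" },
]);
```

The real macro performs the same expansion in place on the worksheet, then removes blank columns as a final cleanup pass.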


HTML/CSS

Click the calculator to view.


Visualization

This dashboard is designed to monitor the performance of a high-turnover team responsible for completing weekly projects across three workflows. Each project requires a high level of accuracy and on-time delivery, as performance directly impacts vendor relationships. The dashboard provides a clear, neutral view of specialist availability, active SKUs, ongoing projects, and individual performance metrics. This visibility enables effective project triage, supports workload balancing, and helps ensure all deadlines and quality standards are consistently met.

All information above is fabricated

Formulas

Our organization struggled to track metrics because our processes were overly complex. To simplify this, I built a spreadsheet that automatically compiles and tracks metrics in an easy-to-read format. Since most SKUs are managed in Google Sheets, it was difficult to track data spread across multiple sheets, each with different tabs and specialists. To solve this, I designed a tracker with two tabs that work together with little outside input. The Google Sheet uses nested formulas to check every sheet and tab, pulls in each specialist's name using links and queries, and compiles all the data in a consistent format. I also added an Apps Script to manage access, a separate tab for daily numbers, and a time-based trigger that runs the process automatically each day.
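The compile step can be sketched in plain JavaScript (the language Apps Script uses). The data shape and function name below are illustrative assumptions, not the tracker's actual code:

```javascript
// Hypothetical sketch: each source sheet contributes tabs of rows;
// flatten them into one table with a consistent shape, tagging every
// row with its sheet, tab, and specialist.
function compileMetrics(sheets) {
  const compiled = [];
  for (const sheet of sheets) {
    for (const tab of sheet.tabs) {
      for (const row of tab.rows) {
        compiled.push({
          sheet: sheet.name,
          tab: tab.name,
          specialist: row.specialist,
          sku: row.sku,
          completed: Number(row.completed) || 0, // normalize text entries
        });
      }
    }
  }
  return compiled;
}
```

In Apps Script, a time-driven trigger created with `ScriptApp.newTrigger('compile').timeBased().everyDays(1).create()` is one way to run a compile function like this automatically each day.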

All information listed is fabricated

Language Model

LLM

This decision tree shows how I approach evaluating LLM outputs: spotting reasoning patterns, catching potential issues, and assessing how logic updates might affect downstream results. The focus is on close observation and ongoing monitoring rather than model development.

LLM Output Risk Evaluation & Golden Data Calibration

A1[Provide LLM with Golden Dataset, used as SOT and for training]
A2[LLM Output Dataset] --> B{Aligned with Task Objective?}

B -- Yes --> C{Reasoning Consistent Across Similar Prompts, Adhering to SOP / Intended Logic?}
B -- No --> D{Is Misalignment Recurring?}

C -- Yes --> E{Emerging Trends Over Time?}
C -- No --> F[Identify Reasoning Deviation, Flag for Review]

E -- Yes --> G[Log Trend, Monitor Over Time]
E -- No --> H[Mark Output as Acceptable]

D -- No --> I[Document Anomaly, Monitor for Recurrence]
D -- Yes --> J[Identify Common Reasoning Path]

J --> K[Assess Downstream Risk]
K --> L{Possible Impact}

L -- High --> M[Escalate for Logic Adjustment, Attach Possible Area for Logic Adjustment]
L -- Medium --> N[Log for Trend Monitoring, Re-evaluate in Review Cycle]
L -- Low --> O[Document and Monitor]

M --> P[Post-Update Output Review]
P --> Q{Quality Improved?}

Q -- Yes --> R[Confirm Mitigation Effectiveness]
Q -- No --> S[Reassess Patterns, State Recommendations]
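The branching above can also be sketched as a small function. This is a hypothetical illustration of the tree's triage logic only (the flag names and returned action strings are assumptions, not part of the actual review process):

```javascript
// Hypothetical sketch of the decision tree's triage branches: given
// evaluation flags for one LLM output, return the next action.
function triage({ aligned, consistentReasoning, emergingTrend, recurring, impact }) {
  if (aligned) {
    // Aligned outputs: check reasoning consistency, then trends.
    if (!consistentReasoning) return "flag reasoning deviation for review";
    return emergingTrend ? "log trend and monitor" : "mark output as acceptable";
  }
  // Misaligned outputs: one-off anomalies are documented and watched.
  if (!recurring) return "document anomaly and monitor for recurrence";
  // Recurring misalignment: assess downstream risk by impact level.
  if (impact === "high") return "escalate for logic adjustment";
  if (impact === "medium") return "log for trend monitoring";
  return "document and monitor";
}
```

Each branch mirrors a path in the flowchart, which makes the evaluation criteria easy to audit and extend.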