Case Studies

What this looks like in practice

These examples highlight the pattern: clear problem definitions, pragmatic design, and a path from prototype to production.

Grid Resilience and Intelligence Platform (GRIP)

Grid resilience & analytics · U.S. Department of Energy / SLAC / utilities

The challenge

The Department of Energy and research partners had powerful grid simulation tools, but no usable application that let operators and planners explore resilience scenarios for extreme weather events.

Approach

  • Led the design and implementation of the GRIP web platform on top of research-grade models from SLAC and partners.
  • Worked with researchers and utility stakeholders to translate complex simulation outputs into clear UX flows and product requirements.
  • Built an interactive, map-centric application for exploring fault, isolation, reconfiguration, and virtual-islanding scenarios under different storm conditions.
  • Engineered the data and API layer that ingests simulation runs, utility data, and weather scenarios into a coherent, performant system.
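The ingest layer's job is to normalize heterogeneous simulation outputs into one common record shape the frontend can query. A minimal sketch of that idea, with a hypothetical record type and field names (illustrative only, not GRIP's actual schema):

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the kind of record an ingest layer might normalize
# simulation runs into; names and fields are illustrative, not GRIP's schema.
@dataclass
class ScenarioRun:
    scenario_id: str
    weather_event: str            # e.g. "storm_50yr"
    analysis_type: str            # "fault", "isolation", "reconfiguration", "islanding"
    affected_lines: list[str] = field(default_factory=list)
    load_served_pct: float = 100.0

VALID_TYPES = {"fault", "isolation", "reconfiguration", "islanding"}

def ingest(raw: dict) -> ScenarioRun:
    """Validate one raw simulation output and map it onto the common schema."""
    if raw.get("analysis_type") not in VALID_TYPES:
        raise ValueError(f"unknown analysis type: {raw.get('analysis_type')!r}")
    return ScenarioRun(
        scenario_id=raw["id"],
        weather_event=raw["weather"],
        analysis_type=raw["analysis_type"],
        affected_lines=raw.get("affected_lines", []),
        load_served_pct=float(raw.get("load_served_pct", 100.0)),
    )

run = ingest({"id": "s-001", "weather": "storm_50yr", "analysis_type": "islanding",
              "affected_lines": ["L12", "L17"], "load_served_pct": 82.5})
```

Normalizing at the boundary like this is what lets one map-centric UI serve several research models without knowing their internals.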

Results

  • Turned a collection of opaque models into an operational tool that non-research users can actually use in planning and resilience discussions.
  • Helped DOE and utility partners plan for, mitigate the impact of, and recover from extreme weather events through scenario-based resilience analysis.

  • Provided a reusable foundation for future grid analytics work (additional DERs, climate scenarios, and utility integrations).

Friday VLM: Compact Phi-4 + FastViT Vision-Language Model

Multimodal model R&D · Open-source project

The challenge

Many vision-language models are large, expensive to run, or difficult to integrate into real products. The goal was to create a compact, high-quality VLM that combines strong reasoning with efficient vision encoding.

Approach

  • Combined Microsoft’s Phi-4-mini-reasoning language model with Apple’s FastViT-HD image encoder into a single multimodal architecture.
  • Fine-tuned the text-only LLM using Low-Rank Adaptation (LoRA) on LLaVA datasets (~1.5M image-text pairs) across 2×A100 GPUs.
  • Extracted and integrated the FastViT image encoder from a pre-trained FastVLM model, wiring its visual tokens into the modified Phi-4 architecture.
  • Packaged the resulting model (bf16 and int8) and published it on Hugging Face with simple `transformers`-based loading examples.
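LoRA keeps the pretrained weights frozen and learns a low-rank update per layer: each weight matrix W gets trainable matrices A and B so the effective weight becomes W + (α/r)·B·A. A minimal numpy sketch of that mechanism (toy dimensions, not Phi-4's actual shapes):

```python
import numpy as np

d_out, d_in, r, alpha = 64, 64, 8, 16   # toy sizes; rank r is much smaller than d

rng = np.random.default_rng(0)
W = rng.normal(size=(d_out, d_in))      # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                # trainable up-projection, zero-initialized

def lora_forward(x):
    # Base path plus the low-rank update, scaled by alpha / r.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=(d_in,))
# Because B starts at zero, the adapted layer is initially identical to the base
# layer; training only ever touches A and B, a tiny fraction of the parameters.
assert np.allclose(lora_forward(x), W @ x)
```

This is why LoRA fine-tuning of a multi-billion-parameter model fits on a couple of A100s: only the small A/B pairs accumulate gradients and optimizer state.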

Results

  • Delivered a compact, open-source vision-language model that can be used as a practical building block for multimodal applications.
  • Enabled teams to experiment with multimodal reasoning on more modest hardware, without relying on heavyweight proprietary models.
  • Provided clear, reproducible training details and code that other practitioners can build on.

Food classification for a consumer smart oven

Consumer IoT / smart appliances · Early-stage product

The challenge

A client building a smart oven needed reliable, real-time food classification on-device so the product could automatically suggest cooking programs without manual configuration.

Approach

  • Assembled and curated a custom dataset of ~20k images across 31 food classes representing realistic oven usage.
  • Trained a MobileNetV3-based CNN in PyTorch, using scikit-learn for evaluation, Weights & Biases for experiment tracking, and Labelbox for labeling.
  • Optimized the model for embedded deployment, focusing on low latency (<100ms inference) and memory footprint.
  • Collaborated with the hardware team to integrate the model into the device firmware and validate performance in real-world conditions.
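A hard latency budget is best validated the way the device experiences it: per-inference wall-clock time at a high percentile, not just the mean. A minimal stdlib sketch of that check, where the `classify` stub is a stand-in for the real on-device model (the budget value comes from the requirement above; everything else is illustrative):

```python
import time
import statistics

LATENCY_BUDGET_MS = 100.0   # per-inference budget from the product requirement

def classify(image):
    """Stand-in for the real embedded model call."""
    time.sleep(0.002)       # simulate a few milliseconds of inference work
    return 17               # dummy class index in 0..30

def p95_latency_ms(n_runs=50):
    """Measure the 95th-percentile inference latency over n_runs calls."""
    samples = []
    for _ in range(n_runs):
        t0 = time.perf_counter()
        classify(None)
        samples.append((time.perf_counter() - t0) * 1000.0)
    return statistics.quantiles(samples, n=20)[-1]   # 95th percentile

assert p95_latency_ms() < LATENCY_BUDGET_MS
```

Running this kind of harness on the target hardware, rather than a development machine, is what turns "<100ms" from a goal into a verified property.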

Results

  • Achieved >95% test accuracy across 31 classes while meeting the real-time latency requirement on the target embedded hardware.
  • Unlocked automatic program selection and better user experiences without requiring manual input for each cook cycle.
  • Provided a foundation for future features (personalization, recommendations) built on top of the classification layer.

From slide deck to funded platform for Vibrant Planet

Climate / land management SaaS · Seed–Series A startup

The challenge

The founding team at Vibrant Planet had a compelling vision for a decision-support platform for land managers, but only a slide deck to communicate it. They needed a real, usable first version of the product to show customers and investors.

Approach

  • As Sr. Engineering Lead at Presence, worked closely with Vibrant Planet’s founders to turn early slides and concepts into concrete user journeys and product requirements.
  • Designed the initial architecture for the web platform, including data model, backend services, and a map-centric frontend tailored to planners and land managers.
  • Implemented the first production-ready version of the application, integrating geospatial data, planning scenarios, and collaboration features into a coherent UI.
  • Partnered with the team to iterate quickly based on early user and stakeholder feedback, tightening the story and demo for fundraising conversations.

Results

  • Launched the first version of Vibrant Planet’s platform, moving the product from slideware to a live application demoed with customers and partners.
  • Helped the company articulate a clearer product narrative and demonstrate execution capability to investors.
  • The platform was a key part of the story that supported Vibrant Planet in raising a $15M Series A to accelerate climate resilience work.