Applied AI Research & Engineering

We help companies build, fine-tune, evaluate, and deploy AI products.

What We Do

01

Customized models

Fine-tuning LLMs and multimodal models. Custom architectures and training pipelines. From LoRA adapters to full training runs.

02

Multimodal inference pipelines

Video generation, vision-language systems, and end-to-end inference pipelines orchestrating multiple models in production. VRAM optimization, batch processing, GPU automation.

03

RAG & knowledge systems

Retrieval pipeline design, evaluation frameworks, and optimization for production workloads. Evaluation tools and dataset strategies for large-scale corpora.

04

AI agents & automation

Autonomous agents that research, reason, and take action. From internal workflow automation to customer-facing agent products with real users.

05

Research & strategy

Literature reviews, feasibility studies, and technical roadmaps grounded in current research. Bridging the gap between papers and production.

06

Fractional CTO / Advisor

Ongoing senior AI guidance without a full-time hire. Roadmaps, hiring plans, architecture decisions, and a trusted partner for board conversations and strategic trade-offs.

Recent Work

Friday-VLM

Custom vision-language model pipeline

Built Friday-VLM by pairing the Phi-4 language model with a FastViT-HD vision encoder, backed by a full multimodal training pipeline using DeepSpeed and LoRA adapters. Released open-source model artifacts and tooling for reproducible VLM fine-tuning.
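For a flavor of the LoRA technique, here is a minimal, self-contained sketch of a low-rank adapter over a frozen linear layer in plain PyTorch. The rank, dimensions, and init scheme are illustrative assumptions, not Friday-VLM's actual configuration:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen base Linear plus a trainable low-rank update: W x + (alpha/r) * B A x."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # freeze the pretrained weight and bias
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: adapter starts as a no-op
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(nn.Linear(512, 512), r=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(trainable)  # 8,192 adapter params vs. 262,656 frozen base params
```

Because B is zero-initialized, the wrapped layer behaves exactly like the pretrained one at step zero, and only the low-rank factors receive gradients.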

PyTorch · Phi-4 (text-only) · FastViT-HD · DeepSpeed · LoRA · VLM

Factored Latent Action World Models (FLAM)

Research

Co-authored the FLAM paper and implemented its behavior-cloning workflows, including baseline and training-pipeline work across the factored_genie codebase and the LAPO baseline repository.
Read the paper.

Behavior Cloning · World Models · PyTorch · LAPO · HDF5 · Policy Learning
Anova

Food classification for smart oven

Trained a MobileNetV3 CNN on 20K images across 31 food classes, achieving >95% accuracy with real-time inference on a resource-constrained embedded device. Exported the model to PyTorch Mobile and integrated it with the oven's embedded Android OS. End-to-end CV work, from data preprocessing and augmentation to model training and optimization.

PyTorch · MobileNetV3 · PyTorch Mobile · Weights & Biases · Labelbox
InsightRX

LLM agent for clinical trial analysis

Developed a multi-agent AI system for conversational interaction with clinical trial datasets (SDTM and ADaM). Integrated GPT-4-32K to translate natural-language queries into executable code and database queries, enabling real-time data visualization and analysis.
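The core pattern is sketched below with the GPT-4 call stubbed out and a toy dataframe standing in for real SDTM/ADaM tables; `ask_llm` and the coarse sandboxing are illustrative assumptions, not InsightRX's implementation:

```python
import pandas as pd

def ask_llm(question: str, schema: str) -> str:
    # Placeholder: a real system would prompt GPT-4 with the question and table schema
    # and return generated pandas code. Here we return a canned snippet.
    return "result = trials.groupby('arm')['ae_count'].mean()"

def answer(question: str, trials: pd.DataFrame):
    code = ask_llm(question, ", ".join(trials.columns))
    scope = {"trials": trials}               # expose only the dataset to the generated code
    exec(code, {"__builtins__": {}}, scope)  # very coarse sandboxing; production needs far more
    return scope["result"]

trials = pd.DataFrame({"arm": ["A", "A", "B"], "ae_count": [2, 4, 3]})
print(answer("Average adverse events per arm?", trials))
```

The key design point is that the model never touches the data directly: it emits code against a known schema, and the host decides what that code is allowed to see and run.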

AI Agents · LLMs · Code Generation · Python
Briefed.ai

Autonomous research agent

Built and launched an AI agent product that autonomously researches topics daily and delivers personalized briefings with source citations. End-to-end product from agent architecture through subscription billing.

AI Agents · LLMs · SaaS · Full Stack
Kevin Rohling, founder of Conifer Labs

About

We're Conifer Labs, founded by Kevin Rohling. We love challenging engineering problems and work across the whole AI stack: fine-tuning, custom architectures, evaluation frameworks, and inference pipelines. Before founding Conifer Labs, Kevin led AI and engineering teams at startups and has taken products from slide deck to launch countless times. Kevin holds an MS in Artificial Intelligence from UT Austin (December 2025).

Background

  • MS in Artificial Intelligence
    University of Texas at Austin, 3.93 GPA
  • Published Researcher
    "Factored Latent Action World Models" with Peter Stone, Zizhao Wang, Chang Shi, and Jiaheng Hu
  • Former Head of AI
    Presence Product Group (acquired by Accenture)
  • Former VP Engineering
    Test.AI, led 25 engineers and ML researchers
  • Founding Engineer & CTO
    BookNook Learning, built from zero to $1M+ ARR
  • Founding Engineer & CEO
    cisimple, 5,000+ users, acquired by ElectricCloud
"Exceptionally skilled, committed, and thoughtful. Kevin has the rare ability to deliver complex technology solutions while mentoring team members."
Jason Monberg, CEO, Presence Product Group

Open Source & Research

We publish models, datasets, and research tools on Hugging Face and GitHub. Select projects:

  • Friday-VLM
    Vision-language model combining Phi-4 + FastViT-HD. Full training pipeline with DeepSpeed + LoRA.
  • fast-vit-hd
    Optimized vision encoder extracted from Apple's FastVLM checkpoints; 5.5K+ downloads on Hugging Face.
  • NL-ACT
    Natural language integration for Action Chunk Transformers (ACT).
  • LIBERO Datasets
    Robotics learning datasets for imitation learning research.
  • SmolVLM2-ASL
    Fine-tuned vision-language model for American Sign Language recognition.

Let's talk about your AI challenges

We'd love to hear about your project. Send us a message below or book a call directly.