Build production-grade video intelligence for the problems that matter — mapping, energy, national security, and industrial operations.

The Geospatial Video Intelligence Hackathon brings together 50+ developers, ML engineers, and geospatial practitioners for 24 hours at the T-Rex Innovation Center in St. Louis. Teams build real systems on TwelveLabs' video foundation models — Marengo for multimodal embeddings and Pegasus for video reasoning — available through Amazon Bedrock with sponsored credits for every participant.

This is not a toy-demo hackathon. St. Louis is the global headquarters of geospatial intelligence — home to NGA West, a dense ecosystem of defense and GEOINT contractors, and an active open-data community. You're building alongside the people who will actually use what you ship.

What makes it different

  • Four real challenge tracks, each with operational rubrics and sponsor-supplied datasets. Pick one and go deep — scope beats breadth in 24 hours.
  • Sponsored compute. AWS is covering Bedrock credits so you can focus on the problem, not the invoice.
  • Workshops that teach, not pitch. Four concurrent technical workshops Saturday afternoon from AWS, Overture Maps, Assured Consulting, and TwelveLabs.
  • Mentors on the floor all weekend. TwelveLabs engineers and AWS solutions architects are there to unblock you, not to give talks.
  • Cash prizes + partnership opportunities for winning teams across all four tracks.

Full track briefs with datasets, sub-challenges, and scoring rubrics are in the Resources tab.

Schedule

Saturday, April 25, 2026
  • 9:00 AM — Doors open, check-in, coffee
  • 10:00 AM — Opening keynote, sponsor remarks, track walkthrough
  • 10:45 AM — Team formation
  • 11:00 AM — Hacking begins
  • 2:00–4:00 PM — Concurrent technical workshops (AWS Bedrock · Overture Maps · Assured Consulting · TwelveLabs)
  • 6:00 PM — Venue closes; hacking continues remotely overnight
Sunday, April 26, 2026
  • 9:00 AM — Venue reopens
  • 1:00 PM — Hard deadline: submissions due on DevPost
  • 2:00 PM — Team presentations (7 min demo + 3 min Q&A)
  • 4:15 PM — Judges deliberate
  • 5:00 PM — Awards ceremony
  • 5:30 PM — Closing reception

Who should participate

Individual developers and teams of 2–4 with complementary skills. Ideal mixes combine ML/CV, geospatial data (GeoJSON, GERS, MGRS), and full-stack or backend engineering. Solo hackers welcome — we run a team formation session Saturday morning. No prior video AI experience required. No security clearance required. All work product is owned by the teams that build it.
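
If the geospatial formats named above are new to you, they are lighter than they sound. Here is a minimal sketch of one video-derived detection serialized as a GeoJSON Feature; the GERS ID and MGRS values are hypothetical placeholders, included only to show where such identifiers typically ride along:

```python
# Minimal sketch: one video-derived detection as a GeoJSON Feature.
# The gers_id and mgrs values are hypothetical placeholders; compute
# real ones in your own pipeline.
import json

feature = {
    "type": "Feature",
    "geometry": {
        "type": "Point",
        "coordinates": [-90.199, 38.627],  # GeoJSON order is [lon, lat]; St. Louis
    },
    "properties": {
        "detection": "transmission_tower",  # illustrative class label
        "confidence": 0.91,                 # illustrative model score
        "video_timestamp_s": 342.5,         # illustrative: when it appears in the video
        "gers_id": "<overture-gers-id>",    # hypothetical placeholder
        "mgrs": "<mgrs-grid-reference>",    # hypothetical placeholder
    },
}
print(json.dumps(feature, indent=2))
```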

Why now

Video foundation models have crossed the threshold from research to production. TwelveLabs' Marengo and Pegasus are used by defense, media, and enterprise customers to automate analysis that previously took days of manual review. This weekend, you're among the first developers to stress-test them on the hardest problems in geospatial, energy, and industrial operations — and whatever you build is yours to keep.

Come ship something that matters.

Requirements

Each submission must enter exactly one of the four challenge tracks and is evaluated against that track's rubric. Submit on DevPost by 1:00 PM Sunday, April 26, 2026. Late submissions will not be evaluated.

What to build

A working video intelligence system that addresses the problem defined in your chosen track. You must use TwelveLabs models (Marengo, Pegasus, or both) as a core part of the pipeline — not as a bolted-on component. Solutions that wrap a still-frame classifier in a video-AI jacket will not score well. Details for what each track expects (sub-challenges, datasets, workflows, bonus criteria) are in the track briefs under the Resources tab.
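
If you haven't called a Bedrock-hosted model before, the entry point is small. A minimal sketch, assuming Python and boto3; the model ID, request schema, and S3 reference below are assumptions to replace with values from the Bedrock model card (long-running video jobs may require Bedrock's asynchronous invocation APIs instead):

```python
# Minimal sketch: calling a TwelveLabs model hosted on Amazon Bedrock.
# ASSUMPTIONS: the model ID and request body schema are illustrative, not
# authoritative -- confirm both against the Bedrock model card. Large video
# inputs may require Bedrock's asynchronous invocation APIs instead.
import json

import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")  # region: assumption

response = bedrock.invoke_model(
    modelId="twelvelabs.pegasus-1-2-v1:0",  # hypothetical ID -- check your model catalog
    contentType="application/json",
    accept="application/json",
    body=json.dumps({
        # Hypothetical payload shape: a prompt plus an S3 reference to your video.
        "inputPrompt": "Summarize every infrastructure asset visible in this flyover.",
        "mediaSource": {"s3Location": {"uri": "s3://<your-bucket>/<your-video>.mp4"}},
    }),
)
print(json.loads(response["body"].read()))
```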

Required submission components

Every submission must include all four components below. Incomplete submissions may be disqualified.

1. Working demonstration
  • A deployed system (public URL) or a comprehensive demo video (3–5 minutes) showing the system processing real input end-to-end. We need to see it work, not hear it described.
  • For track-specific deliverables (map visualization, dashboard, export formats), follow the track brief.
2. Technical documentation
  • Architecture diagram covering the full pipeline (input → processing → output)
  • Description of how Marengo and/or Pegasus are used and why
  • GitHub repository with working code, README, setup instructions, and license
  • Any dataset documentation, preprocessing steps, and reproducibility notes
3. Validation report
  • Quantitative metrics on a labeled test set (precision, recall, F1, RMSE, or track-appropriate equivalents; see the sketch after this list)
  • Qualitative analysis: where the system excels and where it breaks down
  • A comparison baseline (manual review, simple CV, or prior approach)
  • Processing benchmarks: throughput and cost at the scope you demonstrated
4. Mission impact brief (one page)
  • Quantified operational value for a concrete end user (e.g., "Reduces infrastructure condition assessment from 14 hours per video to 10 min automated + 30 min validation")
  • Specific use case: who uses this, for what workflow, under what conditions
  • Scaling assumptions and honest limitations
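
For the validation report (component 3), plain counts go a long way. A minimal sketch of the precision / recall / F1 computation, with invented labels and predictions for illustration:

```python
# Minimal sketch: precision / recall / F1 for binary detections against a
# labeled test set. Plain Python, no dependencies; the data is invented.

def precision_recall_f1(y_true: list[int], y_pred: list[int]) -> tuple[float, float, float]:
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Illustrative: 1 = "event present in clip", one entry per labeled test clip.
labels      = [1, 0, 1, 1, 0, 1, 0, 0]
predictions = [1, 0, 1, 0, 0, 1, 1, 0]
p, r, f = precision_recall_f1(labels, predictions)
print(f"precision={p:.2f} recall={r:.2f} f1={f:.2f}")  # 0.75 / 0.75 / 0.75
```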

What to submit on DevPost

  • Project name
  • Elevator pitch (one sentence)
  • Challenge track (select exactly one of the four tracks — this determines your rubric)
  • Long-form description covering the four components above
  • Demo video (YouTube, Vimeo, Loom, or uploaded) — required if no live URL
  • Public repository URL (GitHub, GitLab, or equivalent)
  • Built with: the technologies, APIs, SDKs, and datasets you used
  • Try it out: live URL if you deployed a web app, install instructions if local
  • Team members: all contributors added to the submission

How judging works

Each submission is scored against its own track's rubric; a Track 2 submission is never compared against a Track 1 submission, and prize ranking happens within each track. Every rubric has four to six weighted criteria whose weights sum to 100 points. Judges score each criterion from 1 to 5, and each score is multiplied by its criterion's weight, for a maximum total of 500 points. Full rubrics are available in the Resources tab.
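
To make the arithmetic concrete, here is the scoring math using Track 4's published weights; the per-criterion 1–5 judge scores are invented for illustration:

```python
# Scoring math sketch: weights are Track 4's published rubric (sum to 100);
# the 1-5 scores per criterion are invented for illustration only.
weights = {
    "Detection Performance": 40,
    "Domain Intelligence": 30,
    "Technical Execution": 20,
    "Operational Utility": 10,
}
scores = {
    "Detection Performance": 4,
    "Domain Intelligence": 5,
    "Technical Execution": 3,
    "Operational Utility": 4,
}
total = sum(weights[c] * scores[c] for c in weights)
print(f"{total} / 500")  # 40*4 + 30*5 + 20*3 + 10*4 = 410
```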

Prizes

$3,250 in prizes

  • 1st Place — $1,500 in cash (1 winner)
  • 2nd Place — $1,000 in cash (1 winner)
  • 3rd Place — $750 in cash (1 winner)

Judges

  • Michael Jones — TwelveLabs
  • Jeffrey Harrison — US Army Geospatial Center
  • Sean Gorman — Zephr
  • Sean Batir — AWS
  • Trung Tran — Fayetteville State University
  • Simon Bailey — T-Kartor
  • Vikram Lakhwara — Stakehouse Fund
  • Gizelle Costa — University of Missouri-St. Louis
  • Andrew McDowell — HII
  • Chaz Mason — Assured Consulting Solutions
  • Mark Munsell — GeoSTL

Judging Criteria

  • Track 1: Detection Accuracy (30%) · Data Quality & Enrichment (20%) · Temporal Reasoning (15%) · Technical Implementation (15%) · Output Quality & Usability (10%) · Mission Alignment & Contribution (10%)
  • Track 2: Detection Accuracy (35%) · Domain Understanding (25%) · Technical Implementation (20%) · Operational Readiness (15%) · Innovation (5%)
  • Track 3: Multi-Source Integration (30%) · Intelligence Value (25%) · Video Understanding (20%) · System Design (15%) · Technical Execution (10%)
  • Track 4: Detection Performance (40%) · Domain Intelligence (30%) · Technical Execution (20%) · Operational Utility (10%)

Questions? Email the hackathon manager
