Rules and Guardrails

This challenge asks you to build an AI agent that reasons over commercial project and financial-style data. Follow these rules so your prototype is responsible, realistic, and fair to how such tools would be used in the real world.

1. Working Assumptions

To keep scope focused on agent design, data reasoning, and UX, you may assume:

  • Your user is a CFO or executive reviewing portfolio margin health; they understand contracts and construction economics at a high level.
  • The dataset is synthetic but messy on purpose — your agent should handle ambiguity the way a real tool would, without claiming access to non-public or live systems.
  • Source of truth for contract value, billing, and formal change order status remains the business systems and people who own them. Your solution surfaces, explains, and recommends actions; it does not replace signed agreements, accounting close, or legal review.
  • A project management and finance team exists and would validate any recovery action before it is executed. You do not need to build full ERP or accounting integrations for the hackathon.

2. Privacy, Data Governance, and the Synthetic Dataset

Real HVAC portfolios contain sensitive commercial and personnel information. This event uses a synthetic dataset for learning and demonstration only.

  • Do not present outputs as if they describe real companies, people, or projects outside the provided dataset.
  • For this datathon, demonstrate awareness of what your agent would collect, log, and retain in a production setting (audit trails, access control, retention) rather than implementing full enterprise security.

3. Responsible Output (Non-Negotiable)

Margin and recovery recommendations can affect money, relationships, and litigation risk. Design your agent with appropriate humility.

  • Separate fact from inference. When the data supports a claim, show it. When you are inferring from incomplete or noisy fields (e.g. field notes), label uncertainty and point to what would verify the claim.
  • Let humans lead execution. Your agent may prioritize projects, quantify gaps, and propose COs, billing, or labor actions — the business decides what to file, bill, or dispute.
  • Know what your solution can’t do. AI is not a CPA, lawyer, or PM. It may analyze, summarize, and recommend, but must keep limitations visible and route high-stakes decisions to qualified stakeholders. Build that handoff into the experience where appropriate.

4. Originality and Permitted Tools

Follow the official datathon rules published for this event (team size, eligibility, submission deadline, allowed APIs, and code of conduct). Unless the organizers say otherwise:

  • Submitted work should be produced during the event (starter templates and open-source libraries are fine; disclose what you reused).
  • v0 must be used meaningfully in your workflow, with the proof required for your chosen option (see submission requirements).
  • Respect terms of service for any model or API you call.

5. Judging (100 Points Total)

Teams are scored on a 100-point scale across four categories as published in the challenge brief.

5.1 Agent Quality (40 points)
  • Finds the right at-risk projects and applies correct margin logic
  • Goes beyond retrieval — shows reasoning across tables and signals
5.2 Recommendations (30 points)
  • Actions are specific and dollar-quantified where possible
  • A CFO could realistically act on or delegate from the output
5.3 Implementation (20 points)
  • Built with v0; handles large record volumes (e.g. aggregates records before sending them to the LLM)
  • Deployed and demonstrably working at submission time
5.4 Business Insight (10 points)
  • Explains why margin erodes, not only that it did
  • Shows forward-looking or diagnostic insight, not only static reporting
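
The "aggregation before LLM" criterion in 5.3 can be sketched as a pre-processing step: collapse raw records into a few per-project summary lines and pass only those to the model. This is a minimal illustration, not the required approach; the field names (`project_id`, `contract_value`, `cost_to_date`) are hypothetical and may not match the official dataset schema.

```python
from collections import defaultdict

# Hypothetical records; real data would come from the provided dataset.
records = [
    {"project_id": "P-001", "contract_value": 500_000, "cost_to_date": 420_000},
    {"project_id": "P-001", "contract_value": 500_000, "cost_to_date": 60_000},
    {"project_id": "P-002", "contract_value": 300_000, "cost_to_date": 150_000},
]

def summarize(records):
    """Collapse raw records into one per-project summary line for the prompt."""
    costs = defaultdict(float)
    value = {}
    for r in records:
        costs[r["project_id"]] += r["cost_to_date"]
        value[r["project_id"]] = r["contract_value"]
    lines = []
    for pid, cost in sorted(costs.items()):
        margin = value[pid] - cost
        pct = 100 * margin / value[pid]
        lines.append(
            f"{pid}: contract ${value[pid]:,.0f}, cost ${cost:,.0f}, margin {pct:.1f}%"
        )
    return "\n".join(lines)

# A handful of summary lines go into the LLM context instead of thousands of rows.
prompt_context = summarize(records)
```

The same idea scales to any grouping (by region, project manager, or contract type): the agent reasons over compact aggregates, and drills into raw rows only for the projects it flags.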