Creating a benchmark in Prophecy Gov starts in the AI chat interface. You describe what you want to compare, and the AI proposes a table structure. Once you confirm the table, the AI researches each cell and streams results back in real time. This page walks you through the full process.

Create a new benchmark

1. Open a new chat

Navigate to Chat from the sidebar and start a new conversation. Benchmarks are created from within the chat interface, not from a separate benchmarks page.
2. Describe your comparison

Type a message describing what you want to compare. Be specific about the municipalities and the data points you need. For example:
“Compare our city’s annual general fund budget, population, and number of full-time employees to five similar-sized cities in the Pacific Northwest.”
The more context you provide — such as size range, region, or department focus — the more relevant the AI’s proposed table will be.
3. Review the AI’s proposal

The benchmarking interface opens in a split-screen view. The chat panel on the left shows the AI’s response and a proposed list of municipalities and attributes. The table panel on the right shows the proposed structure.
Review the proposal before confirming. You can ask the AI to make changes in plain language — for example, “Replace Seattle with Tacoma” or “Add a column for median household income.”

Edit municipalities and attributes

Before you run research, you can adjust the table structure directly. The table is editable while it is in the Editing state — after research starts, the structure is locked.
  • To add a municipality: Click Add in the first column of the table.
  • To add an attribute: Click Add in the header row of the table.
  • To edit or remove a row or column: Hover over it to reveal the edit and delete controls.
  • To reorder rows or columns: Drag a row or column header to a new position.
There is a maximum cell limit per benchmark. The table header shows how many cells you have used and the maximum allowed. If you reach the limit, you will need to remove a municipality or attribute before adding another.
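The cell count is simply municipalities multiplied by attributes, so the limit constrains both dimensions together. As a quick sanity check before building a large table (the actual maximum appears in the table header; the numbers below are illustrative):

```python
def cell_count(municipalities: int, attributes: int) -> int:
    """Each municipality/attribute pair is one researched cell."""
    return municipalities * attributes

# 6 cities compared across 5 attributes
assert cell_count(6, 5) == 30

# Adding a seventh city costs one cell per attribute
assert cell_count(7, 5) == 35
```

Wide tables fill the limit quickly: adding one attribute adds a cell for every municipality, not just one.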

Run the benchmark

Once the table structure looks right, click Start Research in the top-right corner of the table panel. The AI begins researching every cell in parallel.
Starting research locks the table structure. You cannot add, remove, or edit municipalities or attributes while research is running. To make structural changes, stop the research first, then restart it after editing.
A progress indicator shows how many cells have been completed. Cells populate in the table as results come in — you do not need to wait for all cells to finish before reviewing partial results.
To stop research: Click Stop in the table header. Cells that have already completed are saved. You can restart research from where it stopped, or clear the results and start over.
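Conceptually, the research phase behaves like an independent task per cell: each cell finishes on its own schedule, and completed cells are kept even if you stop early. This is only an illustration of that behavior (not Prophecy Gov’s actual implementation), sketched with Python’s `concurrent.futures`:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def research_cell(city: str, attribute: str) -> str:
    # Stand-in for the real per-cell research call.
    return f"{attribute} for {city}"

cells = [(c, a) for c in ["Tacoma", "Boise"]
                for a in ["Population", "Budget"]]
results = {}

with ThreadPoolExecutor() as pool:
    futures = {pool.submit(research_cell, c, a): (c, a) for c, a in cells}
    for done in as_completed(futures):
        # Each cell is saved as soon as it completes; stopping early
        # would simply keep whatever is already in `results`.
        results[futures[done]] = done.result()

print(len(results))  # 4 cells completed
```

This is why partial results are always usable: nothing about a cell depends on its neighbors finishing.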

Review results

Each completed cell shows a short researched value. Click any cell to open a detail panel with:
  • The cell value — the concise answer shown in the table
  • Explanation — the AI’s reasoning and any caveats about data quality or recency
  • Citations — links to the sources the AI used, including web pages and uploaded documents
Pay attention to citations. If a cell cites an older data source or one you do not recognize, check the Explanation section — it will often note the data year or flag uncertainty.

Export the results

When research is complete, click Export CSV in the table header to download the full benchmark as a spreadsheet. The export includes all municipality names, attribute columns, and cell values — formatted so you can paste it directly into a staff report or share it with colleagues.
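The exact export layout is not documented here, but assuming municipalities as rows and attributes as columns under a header row, the file can be loaded with any standard CSV tool. A sketch using Python’s built-in csv module against a made-up export (the cities and figures are illustrative, not real results):

```python
import csv
import io

# Hypothetical export contents: first column is the municipality,
# remaining columns are the benchmark attributes.
export = """Municipality,Population,General Fund Budget
Tacoma,222906,"$600,000,000"
Boise,235684,"$400,000,000"
"""

rows = list(csv.DictReader(io.StringIO(export)))
by_city = {row["Municipality"]: row for row in rows}
print(by_city["Tacoma"]["Population"])  # 222906
```

Note that values arrive as strings (currency formatting included), so clean them before doing arithmetic in a downstream analysis.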

Share with your city team

Benchmarks are private by default, but your city administrator can view all benchmarks created within your organization. To share a specific benchmark with a colleague, direct them to the chat conversation where you ran the benchmark. The benchmark table appears inline within the chat, and any team member with access to the conversation can view it.
If you want a colleague to be able to view the full benchmark table independently, ask your city administrator — they can access all benchmarks within your city from their admin view.

Troubleshoot research failures

If research fails or cells show an error state, you can retry. Click Retry Research in the table header to re-run the research for cells that did not complete. Cells that already completed successfully are preserved and will not be re-researched. If research fails repeatedly on the same cell, the data the AI needs may not be publicly available online. Consider removing that attribute or narrowing the scope of the comparison.