Project Name: AI-Ready Data Products to Facilitate Discovery and Use

Contractor: BrightQuery, Inc.

Lessons Learned

1. Data Availability and Format
○ Consolidated historical data and revisions are critical for accessibility and usability.
2. AI and ML Challenges
○ Commercial AI tools struggle with statistical and time-series data, particularly revisions.
○ Time must be treated as multidimensional, capturing both the reference period an estimate describes and the timestamp at which it was published or revised (see the revision-vintage sketch after this list).
3. Standards and Discoverability
○ Schema.org and Croissant standards enhance data discoverability but require additional depth for analytics (see the metadata sketch after this list).
4. Knowledge Graph Development
○ Triplification is essential for building knowledge graphs but lacks standardization for entity definitions and time-series data representation (see the triplification sketch after this list).
5. Granularity and Interoperability
○ More granular data enhances interoperability but may be affected by changes in methodology or categorization.
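
To make the two-dimensional treatment of time concrete, the short Python sketch below (hypothetical field names and illustrative values, not the project's actual schema) stores each observation with both the reference period it describes and the vintage timestamp at which that estimate was released, so the full revision history stays recoverable alongside the latest value.

    # Revision-vintage sketch: each row carries two time dimensions --
    # the reference period the value describes and the vintage (publication
    # timestamp) at which that estimate was released. Values are illustrative.
    import pandas as pd

    observations = pd.DataFrame(
        [
            ("2023-Q4", "2024-01-25", 101.2),  # advance estimate
            ("2023-Q4", "2024-02-28", 101.6),  # second estimate (revision)
            ("2023-Q4", "2024-03-28", 101.5),  # third estimate (revision)
            ("2024-Q1", "2024-04-25", 102.1),  # advance estimate
        ],
        columns=["ref_period", "vintage", "value"],
    )

    # Latest available estimate for each reference period
    latest = (
        observations.sort_values("vintage")
        .groupby("ref_period", as_index=False)
        .last()
    )

    # Full revision history for one period -- the view that is lost when only
    # the current headline value is published
    history_2023q4 = observations[observations["ref_period"] == "2023-Q4"]

    print(latest)
    print(history_2023q4)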
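
The metadata sketch below illustrates the discoverability point: a minimal dataset-level record in the schema.org Dataset vocabulary (which Croissant builds on) is enough for search and cataloging, but it carries none of the column-level semantics, units, or revision vintages that analytical use requires. The dataset name and URLs are placeholders, not real NCSES resources.

    # Metadata sketch: dataset-level discovery metadata using schema.org terms.
    # All names and URLs below are placeholders.
    import json

    dataset_record = {
        "@context": "https://schema.org/",
        "@type": "Dataset",
        "name": "Example quarterly statistical series",
        "description": "Quarterly estimates subject to periodic revision.",
        "url": "https://example.gov/datasets/example-series",
        "temporalCoverage": "2023-01-01/2024-03-31",
        "distribution": [
            {
                "@type": "DataDownload",
                "encodingFormat": "text/csv",
                "contentUrl": "https://example.gov/datasets/example-series.csv",
            }
        ],
    }

    # Sufficient for discovery; silent on column semantics, units, and vintages.
    print(json.dumps(dataset_record, indent=2))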
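
The triplification sketch below shows one possible way to express a single revised observation as RDF triples using rdflib. The ex: predicate names are invented for illustration only, which is precisely the standardization gap noted above for entity definitions and time-series representation.

    # Triplification sketch: one observation expressed as RDF triples.
    # The ex: predicates are placeholders -- no standard vocabulary is assumed.
    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import RDF, XSD

    EX = Namespace("https://example.gov/stats/")
    g = Graph()
    g.bind("ex", EX)

    obs = EX["observation/example-series/2023-Q4/2024-03-28"]
    g.add((obs, RDF.type, EX.Observation))
    g.add((obs, EX.referencePeriod, Literal("2023-Q4")))
    g.add((obs, EX.publicationVintage, Literal("2024-03-28", datatype=XSD.date)))
    g.add((obs, EX.value, Literal(101.5, datatype=XSD.decimal)))

    print(g.serialize(format="turtle"))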

  1. Early Stakeholder Engagement is Crucial. Engaging agency stakeholders at the outset (e.g., BEA, NSF, and the Department of Commerce) provided valuable insights that shaped the AI readiness criteria and schema design, ultimately improving relevance and adoption.
  2. Standardization Requires Iteration. The development of the AI-Ready Schema and Data Standard benefited from iterative feedback loops and real-world testing. Establishing a flexible versioning approach will be critical as additional agencies adopt the standard.
  3. Cross-Agency Landscape Analysis Builds Common Ground
  4. Documentation Drives Clarity and Continuity. Comprehensive documentation, particularly for the GDA-E tool architecture, proved essential in aligning technical teams and setting the stage for efficient prototyping and future scaling.
  5. Tool Design Should Anticipate Scalability. Early design choices for the GDA-E tool incorporated scalability and modularity, which will reduce future technical debt and support potential enterprise-level adoption across government entities.
  1. Iterative Development Drives Tool Quality. The modular development of the GDA-E tool allowed incremental testing and refinement, significantly improving performance in content discovery, structured metadata detection, and reporting accuracy.
  2. Agency-Specific Variability Requires Flexible Scoring. Agencies differ significantly in how they structure and share data. A flexible evaluation framework was critical for maintaining fairness and relevance across diverse data architectures (see the scoring sketch after this list).
  3. Standardization Enhances Interoperability. Leveraging open-source frameworks like the IBM Data Prep Kit and HuggingFace models ensured consistent evaluation metrics and interoperability with other AI-ready tools in development.
  4. Visualization Increases Stakeholder Engagement. Delivering Power BI dashboards with clear scoring and comparative metrics improved the accessibility of insights for non-technical stakeholders.
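
As a hypothetical illustration of what a flexible, weight-configurable readiness rubric can look like, the short Python scoring sketch below lets criteria be reweighted or switched off per agency. The criteria, weights, and scores are invented for illustration and are not the GDA-E tool's actual scoring model.

    # Scoring sketch: a weight-configurable readiness rubric in which criteria
    # can be reweighted or disabled per agency. All values are illustrative.
    from dataclasses import dataclass

    @dataclass
    class Criterion:
        name: str
        weight: float            # relative importance for this evaluation profile
        applicable: bool = True  # criteria can be switched off for a given agency

    def readiness_score(scores: dict[str, float], criteria: list[Criterion]) -> float:
        """Weighted average (0-100 scale) over the criteria that apply to this agency."""
        active = [c for c in criteria if c.applicable]
        total_weight = sum(c.weight for c in active)
        return sum(scores.get(c.name, 0.0) * c.weight for c in active) / total_weight

    rubric = [
        Criterion("structured_metadata", weight=0.4),
        Criterion("machine_readable_formats", weight=0.3),
        Criterion("api_access", weight=0.3, applicable=False),  # agency publishes files only
    ]

    print(readiness_score({"structured_metadata": 80, "machine_readable_formats": 60}, rubric))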

Disclaimer: America’s DataHub Consortium (ADC), a public-private partnership, implements research opportunities that support the strategic objectives of the National Center for Science and Engineering Statistics (NCSES) within the U.S. National Science Foundation (NSF). These results document research funded through ADC and are being shared to inform interested parties of ongoing activities and to encourage further discussion. Any opinions, findings, conclusions, or recommendations expressed above do not necessarily reflect the views of NCSES or NSF. Please send questions to ncsesweb@nsf.gov.