As artificial intelligence systems for federal agencies mature, policymakers are treating training data, model weights, and software dependencies as part of a strategic supply chain that must be verified and explainable.
Innodata Inc., a data-engineering and AI-services firm based in New Jersey, has launched Innodata Federal, a business unit focused on what it calls “mission-critical AI solutions” for the U.S. government. The division offers data preparation, model fine-tuning, and model-safety testing within cleared environments that comply with NIST SP 800-171 security controls and the ISO 9001 quality standard.
Chief executive Jack Abuhoff said the initiative is designed to meet a “high-growth market where our capabilities align exceptionally well with government modernization priorities.”
The National Institute of Standards and Technology’s AI Risk Management Framework, released in 2023, guides agencies in assessing the provenance and integrity of AI systems and their data sources. Executive guidance under the Biden-era Executive Order 14110 set similar expectations for transparency and security; that order was rescinded in January 2025, and subsequent White House directives have updated federal AI oversight.
A July 2025 Government Accountability Office review found that data-provenance documentation was missing in more than half of audited agency AI deployments, identifying it as a major operational risk.
Data provenance becomes a procurement requirement
Procurement offices are prioritizing traceability over experimentation. The Department of Defense Chief Digital and AI Office lists “trusted data pipelines” as a baseline requirement for new AI pilot programs, and agencies increasingly use Other Transaction Authorities and Commercial Solutions Openings to contract with firms that can document data security and compliance.
Innodata’s federal unit advertises three differentiators (cleared U.S. technical staff, global 24-hour operations, and rapid setup of secure AI workflows) alongside services including annotation for computer-vision and sensor data, multilingual dataset curation, and adversarial testing for model reliability.
“What sets Innodata Federal apart is our ability to deliver the entire AI lifecycle – not just data annotation or point solutions, but true end-to-end capability from data collection through model deployment and operational support,” said Vinay Malkani, Senior Vice President of Innodata Federal.
Comparable efforts include Scale AI’s $250 million Department of Defense contract for data labeling and model evaluation, Booz Allen Hamilton’s AI Assurance Practice for bias and security testing, and Palantir’s Gotham platform for model-driven analytics across classified environments.
Analysts at the Atlantic Council note that the push to “secure data in the AI supply chain” shows how dataset validation and model traceability are becoming part of the defense industrial base.
A credibility test for AI contractors
A February 2025 Reuters column reported that investor lawsuits alleging “AI washing” more than doubled year over year. Innodata, named in one such case filed in New Jersey, called the complaint “flimsy,” according to press coverage.
Despite the litigation, the company’s financials show steady growth. Third-quarter 2025 revenue reached $62.6 million, up 20 percent from a year earlier, while cash holdings rose to $73.9 million.
Chief Revenue Officer Rahul Singhal, promoted to president in November, said he sees “extraordinary opportunity in front of us” as the firm expands customer relationships in government and enterprise markets.
For contractors seeking government work, credibility now depends on demonstrable governance. Agencies require documentation of data handling, version control, and model evaluation before authorizing deployment; the General Services Administration’s AI Compliance Plan outlines how supplier claims are verified and data provenance tracked.
A 2024 MITRE Corporation study warned that unverified model components and third-party data sources create vulnerabilities comparable to hardware supply-chain risks and recommended certification systems for “AI bills of materials,” documenting every dataset and parameter used in model development.
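MITRE’s study does not prescribe a format for such a manifest. As a purely illustrative sketch, the short Python below shows one way an AI bill of materials could pin datasets, model weights, and software dependencies to content hashes, and how an auditor might verify a delivered artifact against it; the model name, dataset fields, and pinned versions are all hypothetical, and real artifacts would be hashed from files on disk rather than from placeholder bytes.

```python
import hashlib
import json
from dataclasses import dataclass, field, asdict

@dataclass
class DatasetEntry:
    """One training or evaluation dataset referenced by the model."""
    name: str
    source: str   # provenance: vendor, agency team, or public corpus
    sha256: str   # content hash so auditors can verify the exact bytes used
    license: str

@dataclass
class AIBillOfMaterials:
    """Hypothetical manifest in the spirit of MITRE's 'AI BOM' recommendation."""
    model_name: str
    model_version: str
    weights_sha256: str
    datasets: list[DatasetEntry] = field(default_factory=list)
    dependencies: dict[str, str] = field(default_factory=dict)  # package -> pinned version

def sha256_of(data: bytes) -> str:
    """Hash raw bytes; in practice this would stream the real artifact files."""
    return hashlib.sha256(data).hexdigest()

def verify(recorded_sha256: str, delivered: bytes) -> bool:
    """Check that a delivered artifact matches what the manifest recorded."""
    return sha256_of(delivered) == recorded_sha256

# Build a manifest for a hypothetical model using placeholder artifact bytes.
bom = AIBillOfMaterials(
    model_name="triage-classifier",
    model_version="1.4.0",
    weights_sha256=sha256_of(b"placeholder-weights"),
    dependencies={"torch": "2.3.1", "numpy": "1.26.4"},
)
bom.datasets.append(DatasetEntry(
    name="sensor-annotations-v2",
    source="internal-labeling-team",
    sha256=sha256_of(b"placeholder-dataset"),
    license="government-purpose-rights",
))

# An auditor re-hashes the delivered dataset and compares it to the manifest.
assert verify(bom.datasets[0].sha256, b"placeholder-dataset")

print(json.dumps(asdict(bom), indent=2))
```

The design mirrors conventional software bills of materials: once every input is pinned to a hash, a certifying body can confirm that the model an agency receives was built from exactly the datasets and dependencies the vendor documented.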
For Innodata and its peers, those requirements represent both constraint and opportunity: demonstrating a cleared workforce, auditable data lineage, and compliance with federal frameworks could become a key differentiator as AI moves from pilot projects to operational systems.
Federal AI procurement is entering an assurance phase. Agencies now expect vendors to prove not only what their models can do but also how those models were built and secured.
Innodata’s new federal arm is an early attempt to supply that evidence at scale. Whether the company can convert its compliance-based pitch into recurring contracts will show how far the U.S. market has shifted from experimenting with AI to regulating its infrastructure.