Procurement and risk teams are being asked to scrutinize more suppliers across more risk categories than ever before. Headcount, in most cases, is not keeping pace. Geopolitical exposure, FOCI, cyber posture, financial viability, sanctions, ESG, M&A activity — the aperture keeps widening, while the team running the assessments stays the same size or shrinks.
In our recent webinar, "One Man Army: Building a Due Diligence Program with Limited Resources," Craft's VP of Global Sales Brian Mackerer sat down with Solutions Analyst Bruce Jansa, who built and ran a supplier due diligence program for the SBIR program at Army Cyber, often as a team of one, to walk through what actually works when resources are tight. Below are the practical takeaways for procurement, risk, and supply chain leaders trying to stand up or scale a defensible program without a 25-person team.
Start with the list, not the strategy
The most common question Bruce and Brian hear is “where do I start?” The answer is not a months-long planning exercise. It’s a list.
“Just figure out who all your suppliers are, consolidate a list,” Bruce said. From there, assign criticality labels — sole source, strategic, transactional, by spend, by commodity — and cross-check the list against the relevant sanctions lists. That single step confirms no entity you work with directly is actively sanctioned. It is the bare minimum, and for most programs, it’s also a credible starting point.
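The steps above — consolidate the list, tag criticality, cross-check against sanctions lists — can be sketched in a few lines. This is an illustrative sketch only: the field names, the spend threshold, and the exact-match comparison are assumptions, and real screening requires fuzzy name matching and entity resolution against official sources rather than a plain string lookup.

```python
# Illustrative sketch of the first triage pass: tag each supplier with a
# criticality label and flag exact-name matches against a consolidated
# sanctions list. Field names and the spend threshold are hypothetical.

def screen_suppliers(suppliers, sanctioned_names):
    """suppliers: list of dicts with 'name', 'annual_spend', 'sole_source'."""
    sanctioned = {n.strip().lower() for n in sanctioned_names}
    results = []
    for s in suppliers:
        # Crude criticality rule for illustration; real programs layer in
        # commodity, strategic status, and program dependence.
        criticality = (
            "strategic" if s["sole_source"] or s["annual_spend"] > 1_000_000
            else "transactional"
        )
        results.append({
            "name": s["name"],
            "criticality": criticality,
            "sanctions_hit": s["name"].strip().lower() in sanctioned,
        })
    return results

suppliers = [
    {"name": "Acme Corp", "annual_spend": 2_500_000, "sole_source": True},
    {"name": "Widget LLC", "annual_spend": 40_000, "sole_source": False},
]
results = screen_suppliers(suppliers, ["ACME CORP"])
hits = [r["name"] for r in results if r["sanctions_hit"]]
```

Even this crude pass delivers the "bare minimum" Bruce describes: a labeled list and confirmation that no direct counterparty is actively sanctioned.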
The mistake teams make is treating program design like a think tank exercise. Bruce’s advice: don’t. Start, iterate, and let your understanding of risk evolve through actual reps. Process requirements you can’t anticipate from a whiteboard surface quickly once real assessments are running.
Triage ruthlessly: most suppliers don't need a deep dive
You cannot apply the same level of scrutiny to every supplier. The math doesn’t work, and the cost balloons.
Across tens of thousands of supplier assessments Craft has run for higher education and government customers, the pattern is consistent: roughly 80% of suppliers clear screening with no mitigation needed. Of the remaining 20% with a flag, only about 20% of those require a true deep dive. For a portfolio of 1,000 suppliers narrowed to 400 critical ones, that funnel ends with roughly 16 companies that warrant analyst hours.
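The funnel arithmetic from the paragraph above can be made explicit. The rates are the approximate figures cited in the text, not fixed constants; plug in your own portfolio numbers.

```python
# Worked version of the triage funnel: portfolio -> critical subset ->
# flagged -> deep dives. Rates are the approximate figures from the text.
total_suppliers = 1_000
critical = 400                 # suppliers tagged critical
flag_rate = 0.20               # ~80% clear screening with no mitigation
deep_dive_rate = 0.20          # of flagged suppliers, ~20% need a deep dive

flagged = critical * flag_rate            # 80 suppliers with a flag
deep_dives = flagged * deep_dive_rate     # ~16 warrant analyst hours
```

The point of running the numbers is the order of magnitude: analyst hours land on a few dozen companies, not hundreds.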
Bruce framed the prioritization simply: out of your critical suppliers, sort by the risk categories you actually care about — for federal programs, that’s often cybersecurity posture and FOCI — and start with the worst. “What do you actually care about most, and then just go do that first.”
The corollary: most suppliers can be auto-cleared based on screening. That is itself an analyst decision to accept risk, but it is a defensible one when the underlying data supports it.
Weight technology over headcount
Asked how to balance people, process, and technology when building from scratch, Bruce was direct: “I would emphasize technology over quantity of analysts.”
A complex deep dive — pulling evidence, analyzing it, generating a decision-maker-ready risk assessment — historically runs four to sixteen hours of analyst time. Brian referenced a U.S. Air Force group Craft worked with that previously ran due diligence with 25 people, taking roughly eight hours per company. After implementing process automation and integrated risk data, they cut per-company assessment time to under one hour and redeployed analysts to other programs.
What you do need is the right analyst — someone who can think critically, run independent research, and assess whether a flag actually matters for your organization. Not someone who escalates every red indicator on sight.
Point-in-time checks aren’t enough
A clean assessment today is a point-in-time snapshot. Within twelve months, the average company sees more than 20% turnover in personnel and 5–15% change in firmographic data. M&A activity, new hires with adversarial affiliations, data breaches at downstream suppliers — all of it can flip a previously approved supplier into a risk overnight.
Brian shared an Air Force example where a comprehensive program-level supplier review took six months to complete. By the time it was delivered, the underlying data was stale. In another case involving an aerospace and defense customer, Craft surfaced a data breach at an already-onboarded supplier; continuous monitoring let the customer get ahead of the exposure before it propagated.
The takeaway: monitoring is not optional, and it is not a separate project. It is the same program, extended through time.
Defensibility is the deliverable
For federal and federally adjacent programs, scrutiny of past decisions is intense. Congressional inquiries, OSD reporting, and audit requests can land months or years after a decision was made.
Bruce’s standard at Army was a Memorandum for Record (MFR) for any risk deemed significant — a time-stamped document describing the risk, the rationale for accepting or rejecting it, and the supporting evidence, signed by the decision maker. Stored in an information management system, it gives the program something concrete to point to when someone asks why a supplier was approved.
Even outside the federal context, the principle holds: if you can’t reproduce the data, the rationale, and the sign-off, the decision isn’t defensible.
Break the silos
Risk lives in finance, legal compliance, IT security, FOCI review, and procurement — usually in separate tools, separate workflows, and separate inboxes. That fragmentation is where weeks of cycle time disappear.
The fix is unifying the data and the workflow. Everyone assessing risk should be looking at the same supplier record, the same open-source intelligence, the same monitoring signals. Entity resolution alone — knowing that the company in your finance tool is the same company in your cyber review — closes a meaningful gap.
What to do in the next 30 days
If you take one action this month, make it this: pull your supplier list, tag the critical ones, and run them against consolidated sanctions lists. From there, define the process you’d want for a strategic supplier and apply it to every new supplier coming in. Iterate from there.
Watch the full session on demand for the complete discussion, including audience Q&A on auditability, program maturity, and where to inject human judgment in an AI-assisted workflow.