AI Consulting: Strategy, Feasibility Assessment, Roadmap
A company spends half a year and $200k on "AI implementation" and ends up with a Jupyter notebook in a folder and a dashboard nobody opens. This is typical when an AI project starts with model selection instead of business process analysis.
What Usually Goes Wrong
The task is stated incorrectly. "We want to predict churn" is not an ML task. The real task: churn among B2B clients with contracts over $10k/year, where the signals are a login drop of more than 40% over 30 days, key feature usage falling by a factor of two or more, and payment delays. Without this precision, the model trains on proxies that disappear at the next A/B test.
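A properly stated task can be written down as an explicit labeling rule. A minimal sketch, assuming hypothetical per-client aggregates (all field names here are illustrative, not a real schema):

```python
# Sketch of an explicit churn-risk definition; field names are hypothetical.

def churn_risk(client: dict) -> bool:
    """Flag B2B clients (contract > $10k/year) matching the agreed signals."""
    if client["contract_value"] <= 10_000:
        return False  # out of segment: not part of the task definition
    login_drop = 1 - client["logins_last_30d"] / max(client["logins_prev_30d"], 1)
    usage_cut = client["feature_use_prev"] / max(client["feature_use_last"], 1)
    return (
        login_drop > 0.40            # login drop > 40% over 30 days
        or usage_cut >= 2            # key feature usage cut by 2x or more
        or client["payment_delays"] > 0
    )

print(churn_risk({
    "contract_value": 25_000,
    "logins_last_30d": 5, "logins_prev_30d": 20,   # 75% drop
    "feature_use_last": 10, "feature_use_prev": 12,
    "payment_delays": 0,
}))  # → True
```

Once the rule exists in code, stakeholders can argue about thresholds instead of vibes, and the same rule later becomes the label definition for training data.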
Data is overestimated. The client says "we have 5 years of data." In reality, the schema changed three times, the first two years live in a different system, and 30% of records are missing a key attribute. After the audit: 14 months of usable data, 60k records with gaps in the target variable. That changes the entire plan: gradient boosting with feature engineering instead of deep learning.
No baseline. Before building a model, you need to know: what is the current result without ML? If an analyst hand-classifies at precision 0.68 and the "smart" model reaches 0.71, was half a year of work worth it?
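The question above can be made into an explicit go/no-go check. A trivial sketch; the minimum-lift threshold is an illustrative number you would negotiate per project, not a universal rule:

```python
# Is the model's lift over the human baseline worth shipping?
# The 0.05 minimum lift is an assumed, project-specific threshold.

def worth_building(baseline_precision: float, model_precision: float,
                   min_lift: float = 0.05) -> bool:
    return model_precision - baseline_precision >= min_lift

print(worth_building(0.68, 0.71))  # → False: +0.03 may not justify 6 months
print(worth_building(0.68, 0.80))  # → True
```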
AI Audit Structure
A feasibility assessment takes 2–4 weeks and includes several components.
Data audit. Examine the raw data: completeness, label quality, distribution drift, training leakage (common, especially in joins that pull in future target values). Tools: pandas-profiling / ydata-profiling, great_expectations, SQL analytics in PostgreSQL.
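The tools above automate this, but the first pass fits in a few lines. A stdlib-only sketch of two audit checks, per-field completeness and a leakage flag (a feature timestamped after the target event); the record fields are hypothetical:

```python
# First pass of a data audit: completeness per field, plus a leakage check
# (features observed after the target event). Record fields are hypothetical.
from datetime import date

records = [
    {"id": 1, "revenue": 100.0, "segment": "smb",
     "feature_ts": date(2024, 1, 10), "target_ts": date(2024, 2, 1)},
    {"id": 2, "revenue": None, "segment": "ent",
     "feature_ts": date(2024, 3, 5), "target_ts": date(2024, 2, 1)},  # leak
]

fields = ["revenue", "segment"]
completeness = {
    f: sum(r[f] is not None for r in records) / len(records) for f in fields
}
leaks = [r["id"] for r in records if r["feature_ts"] > r["target_ts"]]

print(completeness)  # {'revenue': 0.5, 'segment': 1.0}
print(leaks)         # [2]
```

In practice you run these checks per time period, which is exactly how the "5 years of data" claim collapses to 14 usable months.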
Process mapping. Where in the business process does ML add value: faster decisions, fewer errors, automated actions? Draw AS-IS and TO-BE diagrams with integration points.
Feasibility scoring. Each use case goes into a matrix: data volume × label quality × business value × technical complexity. The result is a prioritized backlog with an honest risk assessment.
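One way to make the matrix operational, sketched below with assumed 1–5 scores per axis and made-up use cases. The multiplicative form is a deliberate choice: a single weak axis (say, no labels) should sink the whole case rather than be averaged away:

```python
# Feasibility scoring sketch; 1-5 scores per axis, example data is made up.

def score(uc: dict) -> float:
    # Multiplicative: one weak axis (e.g. label_quality=1) sinks the case.
    return (uc["data_volume"] * uc["label_quality"]
            * uc["business_value"] / uc["complexity"])

use_cases = [
    {"name": "churn scoring",   "data_volume": 4, "label_quality": 3,
     "business_value": 5, "complexity": 2},
    {"name": "support chatbot", "data_volume": 2, "label_quality": 1,
     "business_value": 4, "complexity": 4},
]

backlog = sorted(use_cases, key=score, reverse=True)
print([uc["name"] for uc in backlog])  # → ['churn scoring', 'support chatbot']
```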
ROI: Calculate Realistically
Three ROI components for an ML project:

- Direct savings: replacing manual labor. A classifier that replaces 3 operators at $40k/year saves $120k/year before infrastructure and support costs.
- Solution quality: precision/recall gains translated into business metrics. Fraud detection precision going from 0.71 to 0.89 at recall 0.85 means fewer false blocks, which means less churn.
- Speed: claim scoring going from 48 hours to 2 minutes improves conversion, not just efficiency.
An honest ROI accounts for development cost, infrastructure (GPU/CPU, storage), and support and retraining costs (usually 30–40% of development cost per year).
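The arithmetic above, worked through. The $120k/year savings comes from the operator example; the development and infrastructure figures are assumed for illustration:

```python
# Worked first-year ROI sketch; dev and infra costs are assumed figures.

def first_year_roi(dev_cost: float, infra_yearly: float,
                   yearly_savings: float, support_rate: float = 0.35) -> float:
    support = dev_cost * support_rate           # 30-40% of dev cost per year
    total_cost = dev_cost + infra_yearly + support
    return (yearly_savings - total_cost) / total_cost

# 3 operators at $40k/year = $120k/year savings; assumed $60k dev, $6k infra.
roi = first_year_roi(dev_cost=60_000, infra_yearly=6_000, yearly_savings=120_000)
print(f"{roi:.0%}")  # → 38%
```

Note how support and retraining ($21k here) materially shrink the headline number; leaving it out is the most common way ROI slides get inflated.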
Technology Choice Without Religion
The key consulting question: when do you need an LLM versus classical ML?
An LLM is needed when the task requires understanding unstructured text, generation, or dialogue. For tabular data, XGBoost, LightGBM, and CatBoost usually beat neural networks on quality, interpretability, and inference cost, and they run on a $10/mo CPU instance.
The same logic applies to RAG vs fine-tuning: for static, well-structured knowledge, RAG via LlamaIndex or LangChain with pgvector is cheaper and easier. If you need a specific manner of speaking or an effectively new "language", use fine-tuning via PEFT/LoRA.
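What RAG buys you is easiest to see in miniature. In production the vectors come from an embedding model and live in pgvector; in this sketch, toy word-count vectors stand in so the retrieval-by-similarity step itself is visible, with no framework required:

```python
# The retrieval half of RAG in miniature: rank documents by cosine
# similarity to the query. Toy word-count vectors stand in for embeddings.
import math
from collections import Counter

def vec(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = [
    "refund policy for annual contracts",
    "api rate limits and pagination",
    "how to cancel an annual contract",
]
query = vec("cancel annual contract")
top = max(docs, key=lambda d: cosine(query, vec(d)))
print(top)  # → 'how to cancel an annual contract'
```

Updating the knowledge base is just inserting a new row, which is why RAG wins for knowledge that changes; fine-tuning would mean retraining.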
Roadmap: Pilot to Product
A typical AI roadmap has three horizons:
0–3 months (Quick wins). Select 1–2 use cases with good data and clear ROI. Build an MVP with a baseline model and deploy it in shadow mode: the model makes decisions in parallel with humans, and you compare the results. This reduces risk and builds team trust.
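Shadow mode boils down to logging both decisions and tallying agreement. A sketch, with a hypothetical decision log; nothing the model outputs is acted on:

```python
# Shadow-mode comparison sketch: the model scores each case alongside the
# human, nothing is acted on, disagreements go to review. Log is hypothetical.

log = [
    {"case": 101, "human": "approve", "model": "approve"},
    {"case": 102, "human": "reject",  "model": "approve"},
    {"case": 103, "human": "approve", "model": "approve"},
    {"case": 104, "human": "reject",  "model": "reject"},
]

agree = sum(e["human"] == e["model"] for e in log)
print(f"agreement: {agree}/{len(log)} = {agree / len(log):.0%}")

disagreements = [e["case"] for e in log if e["human"] != e["model"]]
print("review queue:", disagreements)  # → [102]
```

The review queue is the valuable part: each disagreement is either a model error to fix or a human error the model caught, and both build the trust needed to flip the switch.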
3–12 months (Core platform). Build the MLOps foundation: a feature store, CI/CD, drift monitoring via evidently, an MLflow model registry. Scale 2–3 successful use cases.
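To make "drift monitoring" concrete: one standard signal is the population stability index (PSI) between the training distribution of a feature and its live distribution. Tools like evidently wrap this (and much richer tests); the stdlib sketch below shows the idea, with illustrative bins and the commonly cited 0.2 alert threshold:

```python
# Population stability index (PSI) sketch over pre-binned frequencies.
# Bins, distributions, and the 0.2 alert threshold are illustrative.
import math

def psi(expected: list[float], actual: list[float], eps: float = 1e-4) -> float:
    total = 0.0
    for e, a in zip(expected, actual):
        e, a = max(e, eps), max(a, eps)   # avoid log(0) on empty bins
        total += (a - e) * math.log(a / e)
    return total

train_dist = [0.10, 0.20, 0.40, 0.20, 0.10]   # feature histogram at training
live_dist  = [0.05, 0.10, 0.30, 0.30, 0.25]   # same bins on live traffic

value = psi(train_dist, live_dist)
print(round(value, 3), "drift!" if value > 0.2 else "ok")
```

When the alert fires, the playbook is usually retrain-and-compare through the model registry rather than hotfixing in production.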
12+ months (Scale). More complex architectures, automated retraining, expansion into new domains.
Consulting project timelines: AI audit, 2–4 weeks; strategy and roadmap, 3–6 weeks; pilot support, 2–4 months. Concrete timelines depend on process complexity, data availability, and stakeholder access.







