    AI/ML

    Here’s what’s causing your AI strategy to slow down – and how to fix it

    By PineapplesUpdate | October 13, 2025 | 6 min read

    Your best data science team just spent six months building a model that predicts customer churn with 90% accuracy. It is sitting unused on a server. Why? Because it has been stuck in the risk-review queue for too long, waiting for sign-off from a committee that doesn’t understand stochastic models. This is not a fantasy – it is a daily reality in most large companies. In AI, models move at internet speed; enterprises do not. Every few weeks a new model family launches, open-source toolchains change and entire MLOps practices are rewritten. But at most companies, anything touching production AI has to pass through risk reviews, audit trails, change-management boards and model-risk sign-offs. As a result, the velocity gap widens: the research community accelerates; the enterprise stalls. This gap is not the headline “AI will take your job” problem. It is quieter and more expensive: lost productivity, shadow-AI sprawl, duplicate spending and compliance drag that turn promising pilots into perpetual proofs of concept.

    Numbers say the quiet part out loud

    Two trends are colliding. First, the pace of innovation: industry is now the dominant force, producing the vast majority of notable AI models, according to Stanford’s 2024 AI Index Report. The key inputs to this innovation are growing at historic rates, with training-compute requirements doubling every few years. That speed all but guarantees rapid model churn and tool fragmentation. Second, enterprise adoption is accelerating. According to IBM, 42% of enterprise-scale companies have actively deployed AI, and many more are actively exploring it. Yet the same surveys show that governance is often formalized only after deployment, forcing many companies to claw back control once models are already live. Layer on new rules: the EU AI Act’s obligations are phasing in – the unacceptable-risk prohibitions are already active, the general-purpose AI (GPAI) transparency obligations arrive in mid-2025 and the high-risk rules follow. Brussels has made clear there will be no delay. If your governance isn’t ready, your roadmap isn’t either.

    The real blocker is audit, not modeling

    In most enterprises, the slowest step is not building a model; it is proving that the model complies with policy. Three frictions dominate:

    1. Audit debt: Policies were written for static software, not stochastic models. You can ship a microservice with unit tests; you can’t “unit test” fairness drift without data access, lineage and ongoing monitoring. When controls don’t map, reviews balloon.

    2. MRM overload: Model risk management (MRM), a discipline born in banking, is spreading beyond finance – often translated literally rather than functionally. Explainability and data-governance checks make sense; forcing every retrieval-augmented chatbot through credit-risk-style documentation does not.

    3. Shadow-AI sprawl: Teams adopt vertical AI inside SaaS tools without central oversight. It feels fast – until the third audit asks who owns the prompts, where the embeddings live and how the data can be deleted. Sprawl is the illusion of motion; integration and governance are what endure.

    Frameworks exist, but they are not on by default

    The NIST AI Risk Management Framework is a solid north star: govern, map, measure, manage. It is voluntary, adaptable and aligned with international standards. But it is a blueprint, not a building. Companies still need concrete control catalogs, evidence templates and tooling that turn principles into repeatable reviews. Similarly, the EU AI Act sets deadlines and duties; it doesn’t set up your model registry, wire up your dataset lineage or resolve the age-old question of who signs off when accuracy and bias trade off against each other. That part is on you – soon.
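    To make “blueprint, not building” concrete, here is a minimal sketch of a control catalog keyed to the NIST AI RMF’s four functions (govern, map, measure, manage). The control and evidence names are illustrative assumptions, not drawn from the framework text itself:

```python
# Hypothetical control catalog keyed by the four NIST AI RMF functions.
# Each entry names a check and the evidence artifact a review would collect.
CONTROL_CATALOG = {
    "govern": [
        {"control": "risk-tier-assigned", "evidence": "registry entry"},
        {"control": "owner-named", "evidence": "registry entry"},
    ],
    "map": [
        {"control": "dataset-lineage-recorded", "evidence": "lineage graph export"},
    ],
    "measure": [
        {"control": "evaluation-suite-attached", "evidence": "eval report"},
        {"control": "bias-audit-run", "evidence": "audit report"},
    ],
    "manage": [
        {"control": "monitoring-enabled", "evidence": "dashboard link"},
        {"control": "reassessment-scheduled", "evidence": "review calendar"},
    ],
}

def required_evidence(function: str) -> list[str]:
    """Return the evidence artifacts a review needs for one RMF function."""
    return [c["evidence"] for c in CONTROL_CATALOG[function]]
```

    Even a table this small turns “are we NIST-aligned?” from a debate into a checklist lookup, which is the point of an evidence template.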

    What winning enterprises are doing differently

    The leaders I see closing the velocity gap are not chasing every new model; they are building paved roads to production. Five moves show up repeatedly:

    1. Ship control planes, not memos: Codify governance as code. Build a small library or service that enforces the non-negotiables: dataset lineage recorded, evaluation suite attached, risk tier assigned, PII scan passed, human-in-the-loop defined where required. If a project doesn’t pass the checks, it can’t deploy.

    2. Pre-approve patterns: Approve reference architectures – “GPAI with retrieval-augmented generation (RAG) on approved vector stores,” “high-risk tabular model with feature store X and bias audit Y,” “vendor LLM via API with no data retention.” Pre-approval turns review from open-ended debate into pattern conformance. (Your auditors will thank you.)

    3. Tier your governance by risk, not by team: Tie review depth to use-case criticality (security, finance, regulated outcomes). A marketing copy assistant shouldn’t face the same gauntlet as a loan adjudicator. Risk-proportional review is both defensible and fast.

    4. Build a “prove once, reuse everywhere” backbone: Centralize model cards, evaluation results, data sheets, prompt templates and vendor verifications. Each subsequent audit should start 60% done because the common portions are already proven.

    5. Make audit a product: Give legal, risk and compliance a real roadmap and real tooling. Build dashboards that show models in production by risk tier, upcoming reassessments, incidents and data-retention attestations. If audit can self-serve, engineering can ship.
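    The control plane in move 1 can be sketched as a pre-deployment gate. This is a minimal illustration, assuming a registry-style submission record; every field name and check below is a hypothetical stand-in, not a real platform API:

```python
from dataclasses import dataclass

@dataclass
class ModelSubmission:
    # Hypothetical fields a real system would pull from a model registry.
    name: str
    risk_tier: str            # e.g. "low", "medium", "high"
    has_lineage: bool = False
    eval_suite_attached: bool = False
    pii_scan_passed: bool = False
    human_in_loop: bool = False

def deployment_gate(m: ModelSubmission) -> list[str]:
    """Return the failed non-negotiable checks; an empty list means deployable."""
    failures = []
    if not m.has_lineage:
        failures.append("dataset lineage missing")
    if not m.eval_suite_attached:
        failures.append("evaluation suite not attached")
    if not m.pii_scan_passed:
        failures.append("PII scan failed or not run")
    # Human-in-the-loop is only mandatory for high-risk use cases (move 3).
    if m.risk_tier == "high" and not m.human_in_loop:
        failures.append("human-in-the-loop undefined for high-risk model")
    return failures
```

    The design choice that matters is returning the failure list rather than a boolean: the same function powers the CI gate, the reviewer’s checklist and the audit dashboard.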

    A practical rhythm for the next 12 months

    If you’re serious about moving faster, run a 12-month governance sprint:

    • Quarter 1: Stand up a minimal AI registry (models, datasets, prompts, evaluations). Draft risk tiers and a control mapping tied to NIST AI RMF functions; publish two pre-approved patterns.

    • Quarter 2: Turn controls into pipelines (CI evaluations, data scans, model-card checks). Convert two fast-moving teams from shadow AI to platform AI by making the paved road easier than the side road.

    • Quarter 3: Run a GxP-style review (a rigorous documentation standard from the life sciences) for one high-risk use case; automate evidence capture. If you touch Europe, start your EU AI Act gap analysis; name owners and deadlines.

    • Quarter 4: Expand your pattern catalog (RAG, batch prediction, streaming prediction). Ship the dashboards for risk and compliance. Bake governance SLAs into your OKRs. At this point you haven’t slowed innovation – you’ve standardized it. The research community can move at light speed; you can keep shipping at enterprise speed, without the audit queue becoming your critical path.
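    The Quarter 2 step – turning controls into pipelines – can be sketched as a CI gate that reads an evaluation report produced earlier in the pipeline and fails the build when a metric misses its floor. The report format, metric names and thresholds are all hypothetical assumptions:

```python
import json

# Hypothetical metric floors/ceilings; a real pipeline would load these
# from the pre-approved pattern the project conforms to.
THRESHOLDS = {"accuracy": 0.85, "fairness_gap_max": 0.05}

def check_eval_report(path: str) -> list[str]:
    """Read an eval report (JSON) and return any threshold violations.

    An empty list means the CI stage passes; otherwise the pipeline
    should fail and surface the violations to the reviewer.
    """
    with open(path) as f:
        report = json.load(f)
    failures = []
    # Missing metrics count as failures: absent evidence is not passing evidence.
    if report.get("accuracy", 0.0) < THRESHOLDS["accuracy"]:
        failures.append("accuracy below floor")
    if report.get("fairness_gap", 1.0) > THRESHOLDS["fairness_gap_max"]:
        failures.append("fairness gap above ceiling")
    return failures
```

    Wiring this into CI (exit non-zero when the list is non-empty) is what makes the control self-enforcing instead of a memo.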

    Competitive Edge Isn’t the Next Model – It’s the Next Mile

    It’s tempting to chase each week’s leaderboard. But the durable advantage is the mile between paper and production: the platform, the patterns, the proofs. That is what your competitors can’t copy from GitHub, and it’s the only way to maintain velocity without trading compliance for chaos. In other words: make governance the grease, not the grit.

    Jayachander Reddy Kandakatla is a Senior Machine Learning Operations (MLOps) Engineer at Ford Motor Credit Company.
