Session: Right-Sizing AI for DevOps
Abstract—The rapid evolution of AI in DevOps has produced a spectrum of models, from large-scale generative language models to lightweight statistical ones. Large language models (LLMs) have demonstrated strong performance on tasks requiring semantic interpretation and cross-document synthesis, yet they are frequently applied to tasks where smaller models produce the correct outcome at lower cost, with higher efficiency and comparable or better accuracy. This article presents an end-to-end framework for right-sizing AI: matching model complexity to task requirements. Drawing on peer-reviewed findings and industry reports, it outlines when LLMs are genuinely required and when smaller models are the better fit, and it examines hybrid architectures and their impact on performance and operations. Case studies on build prediction, log analysis, incident workflows, and CI/CD automation demonstrate the outcomes.
Index terms: AIOps, DevOps automation, LLM, small models, transformers, anomaly detection, observability, model compression, RAG, incident management.
Bios
Shalini Sudarsan is a DevOps Engineering Leader at KinderCare Learning Companies, USA, designing reliable, secure, and cost-optimized data and AI platforms. She is a Forbes Technology Council member and a Fellow of IETE and of the Women in Engineering (WIE) Oregon section. She drives enterprise AI adoption with a governed operating model that speeds time-to-market while lowering risk and spend. Shalini's expertise spans BI strategy, data platform architecture, MLOps, observability, and value realization, and she is known for translating complex engineering into measurable business outcomes. She brings deep technical rigor and business expertise to DevOps and reliability engineering. A committed advocate for advancing technology, Shalini regularly presents at international conferences and contributes to IEEE and ACM as a technical reviewer.
Ankita Banerjee is a technical leader with 14+ years of experience delivering enterprise software solutions and leading cross-functional teams to achieve measurable business outcomes. Her expertise spans Java and backend engineering, cloud platforms, data engineering, DevOps, MLOps, and secure large-scale systems, with a strong focus on AI engineering and predictive analytics. She has a proven track record of mentoring teams, driving Agile transformation, and optimizing systems through CI/CD, API innovation, and observability best practices. Ankita is actively engaged in the global research community as a Technical Program Committee member, reviewer, and judge for leading international conferences and innovation awards, supporting the advancement of ethical, secure, and scalable AI technologies.