The correct answer is C because Retrieval-Augmented Generation (RAG) lets a large language model ground its responses in up-to-date content from external data sources without fine-tuning the model.
According to the Amazon Bedrock Developer Guide:
"Amazon Bedrock Knowledge Bases enables developers to augment foundation models (FMs) with company-specific data that is updated in real time or near real time. By separating retrieval from the model itself, RAG-based approaches avoid the need for frequent retraining or fine-tuning."
This means a company can use a knowledge base with Amazon Bedrock to dynamically fetch the latest company policy information and feed it to the LLM in the prompt. This approach is ideal for use cases where the content (like policies) changes frequently, and latency for updates must be minimal.
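As a minimal sketch of this pattern, the snippet below calls the Bedrock `RetrieveAndGenerate` API (via boto3's `bedrock-agent-runtime` client), which retrieves relevant passages from a knowledge base and passes them to the chosen foundation model in a single call. The knowledge base ID, model ARN, and question are placeholders; running it requires AWS credentials, a configured region, and an existing Bedrock knowledge base.

```python
def build_rag_request(question: str, kb_id: str, model_arn: str) -> dict:
    """Assemble the RetrieveAndGenerate request payload for a
    Bedrock knowledge base (type KNOWLEDGE_BASE)."""
    return {
        "input": {"text": question},
        "retrieveAndGenerateConfiguration": {
            "type": "KNOWLEDGE_BASE",
            "knowledgeBaseConfiguration": {
                "knowledgeBaseId": kb_id,   # placeholder: your knowledge base ID
                "modelArn": model_arn,      # placeholder: the FM used for generation
            },
        },
    }


def ask_policy_bot(question: str, kb_id: str, model_arn: str) -> str:
    """Retrieve the latest policy text from the knowledge base and
    generate a grounded answer. Requires AWS credentials."""
    import boto3  # AWS SDK; the RetrieveAndGenerate API lives on bedrock-agent-runtime

    client = boto3.client("bedrock-agent-runtime")
    response = client.retrieve_and_generate(
        **build_rag_request(question, kb_id, model_arn)
    )
    # The generated answer; response["citations"] also lists the retrieved sources
    return response["output"]["text"]
```

Because retrieval happens at request time, updating the policy documents in the knowledge base's data source (and re-syncing) is enough to change the chatbot's answers; no model retraining or redeployment is involved.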
Explanation of other options:
A. Fine-tuning an LLM with SageMaker is not optimal for frequently updated data. Fine-tuning involves retraining and redeploying the model, which is time-consuming and not suited for real-time updates. As stated in the SageMaker documentation:
"Fine-tuning is best used for use cases where the data changes infrequently and where highly specific model behavior is required."
B. Selecting a foundation model alone does not fulfill the real-time requirement. The FM's base knowledge is static unless augmented through additional methods like RAG.
D. Amazon Q Business targets workplace productivity and enterprise search. It is more opinionated in structure and does not offer the flexibility of a custom RAG workflow for building a tailored chatbot application. Although it supports some real-time data-sync features, it is not purpose-built for LLM-based chat systems with dynamic data feeds the way Knowledge Bases in Amazon Bedrock is.
Therefore, the most appropriate and scalable solution aligned with AWS recommendations is C.
Referenced AWS AI/ML Documents and Study Guides:
Amazon Bedrock Developer Guide – Knowledge Bases and RAG (2024 Edition)
AWS Certified Machine Learning Specialty Study Guide – Generative AI Section
AWS Documentation: Choosing Between Fine-Tuning and RAG for LLM Applications
Amazon SageMaker Documentation – Model Tuning and Deployment Best Practices (2024)