To study how the Comparison Prompting Pattern influences AI responses relative to naïve prompts. The objective is to analyze the quality, accuracy, and depth of the responses across multiple test scenarios.
- ChatGPT (LLM-based AI assistant)
- Naïve Prompt: a vague request that asks for general information. Example: “Tell me about AI and Machine Learning.”
- Comparison Prompt (Basic): a structured request that explicitly asks for similarities, differences, pros/cons, or evaluations. Example: “Compare Artificial Intelligence and Machine Learning in terms of definition, applications, and real-world examples.”
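The structural difference between the two styles can be made concrete as prompt templates. The sketch below is illustrative only; the function names `naive_prompt` and `comparison_prompt` and their parameters are hypothetical, not part of the pattern itself.

```python
# Illustrative prompt builders for the two styles (hypothetical names).
def naive_prompt(topic_a: str, topic_b: str) -> str:
    # Vague request: asks for general information only.
    return f"Tell me about {topic_a} and {topic_b}."

def comparison_prompt(topic_a: str, topic_b: str, dimensions: list[str]) -> str:
    # Structured request: explicitly names the dimensions to compare on.
    return f"Compare {topic_a} and {topic_b} in terms of {', '.join(dimensions)}."

print(naive_prompt("AI", "Machine Learning"))
print(comparison_prompt("AI", "Machine Learning",
                        ["definition", "applications", "real-world examples"]))
```

Naming the dimensions explicitly is what pushes the model toward the side-by-side structure observed in the results below.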
The following tasks were selected:
- Technical Concept Comparison
- Product/Tool Evaluation
- Advantages vs. Disadvantages Analysis
- Historical Event Comparison
For each scenario, two prompts were issued: one naïve and one comparison-focused. The responses were captured (as in the sketch below) and compared on clarity, factual correctness, and depth of analysis.
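The capture step can be scripted. The following is a minimal sketch, assuming the OpenAI Python SDK and an `OPENAI_API_KEY` environment variable; the model name and the `ask()` helper are assumptions for illustration, not the exact setup used in the experiment.

```python
# Minimal sketch of the capture step, assuming the OpenAI Python SDK
# (pip install openai) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

# Each scenario pairs a naïve prompt with its comparison-focused counterpart.
scenarios = {
    "Technical Concept": (
        "Explain AI and ML.",
        "Compare AI and ML in terms of definition, scope, and examples.",
    ),
    "Product Evaluation": (
        "Tell me about iOS and Android.",
        "Compare iOS and Android in terms of user experience, security, "
        "and app ecosystem.",
    ),
}

def ask(prompt: str) -> str:
    # Send a single-turn chat request and return the text of the reply.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat model would do
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

for name, (naive, comparison) in scenarios.items():
    # Capture both responses side by side for later analysis.
    print(f"--- {name} (naive) ---\n{ask(naive)}\n")
    print(f"--- {name} (comparison) ---\n{ask(comparison)}\n")
```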
| Scenario | Naïve Prompt | Comparison Prompt | Observation |
|---|---|---|---|
| Technical Concept | “Explain AI and ML.” | “Compare AI and ML in terms of definition, scope, and examples.” | Naïve response gave independent explanations. Comparison prompt highlighted differences and relationships clearly. |
| Product Evaluation | “Tell me about iOS and Android.” | “Compare iOS and Android in terms of user experience, security, and app ecosystem.” | Naïve response described both separately. Comparison prompt gave side-by-side evaluation with pros/cons. |
| Advantages vs. Disadvantages | “Tell me about online learning.” | “Compare advantages and disadvantages of online learning.” | Naïve response leaned positive. Comparison prompt balanced both sides systematically. |
| Historical Events | “Tell me about World War I and II.” | “Compare World War I and World War II in terms of causes, outcomes, and impact.” | Naïve response was lengthy but unstructured. Comparison prompt provided a well-organized analytical summary. |
- Quality: Comparison prompts produced structured, side-by-side evaluations.
- Accuracy: They elicited more precise contrasts instead of general information.
- Depth: They yielded multi-dimensional insights rather than descriptive overviews.
- Special Case: For simple facts (e.g., definitions), naïve prompts worked equally well.
The experiment was successfully executed, and responses for both naïve and comparison prompts were obtained.
The Comparison Prompting Pattern produced richer, more analytical, and better-structured outputs than naïve prompts. It is highly effective for decision-making, evaluation, and analytical tasks.