
Using AI to Suggest Test Cases

Thinking of test cases is surprisingly difficult. You built the feature, so you naturally think about how it should work. AI doesn't have that bias — it systematically considers what could go wrong.

Why AI Excels at This

AI has seen millions of functions and their test cases. When you describe your feature, AI draws on this experience to suggest tests you might not consider:

  • Edge cases at the boundaries of valid input
  • Error conditions that rarely occur
  • Security-related scenarios
  • Concurrency and timing issues
  • Platform-specific behaviors

You'll review AI's suggestions and decide which are relevant, but the comprehensive list is valuable.

Crafting the Right Prompt

A good prompt gives AI enough context to generate useful tests:

Here's my create_todo function:

def create_todo(text):
    if not text or not text.strip():
        raise ValueError("Text required")
    if len(text) > 500:
        raise ValueError("Text too long")
    return Todo(text=text.strip())

Generate a comprehensive list of test cases including:
- Happy path tests
- Edge case tests  
- Error condition tests
- Security-related tests

For each test, describe the input, expected output, and why it matters.

Reviewing AI Suggestions

AI will generate many test cases. Some will be obvious:

  • Empty string returns error ✓
  • Normal text creates todo ✓

Others will be insightful:

  • Text with only whitespace (should fail)
  • Text with exactly 500 characters (boundary)
  • Text with 501 characters (just over limit)
  • Text with null bytes or control characters
  • Unicode edge cases (zero-width characters)

Some suggestions might not apply to your situation. That's fine — you're looking for the ones that do.
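
Suggestions like the zero-width and control-character cases often reveal decisions rather than outright bugs. Python's str.strip() only removes characters that isspace() considers whitespace, so a zero-width space (U+200B) passes the validation above and produces a visually empty todo. A small exploratory test can record that finding while you decide whether to tighten the validation. This is only a sketch, and it assumes create_todo lives in a module named todo_app (adjust the import to match your project):

from todo_app import create_todo  # hypothetical module name; adjust to your project

def test_create_todo_currently_accepts_zero_width_space():
    # U+200B is not classified as whitespace, so strip() leaves it alone
    # and validation passes. If you decide this should fail, tighten the
    # validation and invert this assertion.
    todo = create_todo("\u200b")
    assert todo.text == "\u200b"

def test_create_todo_currently_preserves_control_characters():
    # Null bytes and other control characters also pass through unchanged.
    todo = create_todo("buy milk\x00")
    assert todo.text == "buy milk\x00"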

From Suggestions to Tests

Once you have a list, prioritize and implement:

import pytest

from todo_app import create_todo  # create_todo is assumed to live in todo_app; adjust to your project

def test_create_todo_with_valid_text():
    todo = create_todo("Buy groceries")
    assert todo.text == "Buy groceries"

def test_create_todo_strips_whitespace():
    todo = create_todo("  Buy groceries  ")
    assert todo.text == "Buy groceries"

def test_create_todo_rejects_empty_string():
    with pytest.raises(ValueError):
        create_todo("")

def test_create_todo_rejects_whitespace_only():
    with pytest.raises(ValueError):
        create_todo("   ")

def test_create_todo_at_max_length():
    text = "a" * 500
    todo = create_todo(text)
    assert len(todo.text) == 500

def test_create_todo_rejects_over_max_length():
    text = "a" * 501
    with pytest.raises(ValueError):
        create_todo(text)
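
The rejection tests above repeat the same structure, which pytest can consolidate with parametrize. This is a sketch rather than a required change, and it again assumes the hypothetical todo_app module:

import pytest

from todo_app import create_todo  # hypothetical module name; adjust to your project

@pytest.mark.parametrize("bad_text", [
    "",           # empty string
    "   ",        # whitespace only
    "a" * 501,    # just over the 500-character limit
])
def test_create_todo_rejects_invalid_text(bad_text):
    with pytest.raises(ValueError):
        create_todo(bad_text)

Each parameter appears as its own test in pytest's output, so a failure points directly at the offending input.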

Iterating With AI

After implementing tests, ask AI to review:

"Here are my test cases for create_todo. What scenarios am I missing?"

AI might spot gaps: "You haven't tested what happens with newline characters in the text" or "Consider testing concurrent creation of todos with the same text."
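
Gaps like these usually force a decision before they force a test. With the implementation above, interior newlines survive because strip() only removes leading and trailing whitespace; if you decide that is the behavior you want, a test can pin it down. A minimal sketch, again assuming the todo_app module:

from todo_app import create_todo  # hypothetical module name; adjust to your project

def test_create_todo_preserves_embedded_newlines():
    # strip() removes only leading and trailing whitespace, so interior
    # newlines are kept. This test records that decision explicitly.
    todo = create_todo("buy milk\nbuy eggs")
    assert todo.text == "buy milk\nbuy eggs"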

Building Testing Intuition

Over time, you'll internalize the patterns AI suggests. You'll automatically think about boundaries, empty inputs, and special characters. AI accelerates this learning by exposing you to comprehensive test thinking.
