Module 9: Testing Your Code


9.1: Why AI Code Needs Tests

AI-generated code is plausible by default. It looks correct, compiles, and seems reasonable. But it often contains subtle errors: wrong variable names, incorrect edge-case handling, missing null checks, silent data loss.

Tests are the only way to prove the code does what you think it does.

9.2: The Verification Flow

Every piece of AI-generated code should go through this:

1. AI generates code. Get the initial implementation.

2. You read it (actually read it). Don't glance. Understand what every function does. If you can't explain it, don't ship it.

3. Run it locally. npm run dev -- does it actually work in the browser/terminal?

4. Check edge cases. What happens with empty input? Invalid data? Network errors? Null values?

5. Run tests. npm run test -- do all tests pass, including the new ones?

6. Build passes. npm run build -- does the full build compile without errors?

7. Only then commit. git add . && git commit -m "add feature X" -- never commit broken code.
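
Steps 5 and 6 are easy to skip under deadline pressure. Here is a minimal sketch of a helper that chains them and stops at the first failure (the file name and npm scripts are assumptions; run it with a TypeScript runner such as tsx):

    // verify.ts -- hypothetical pre-commit helper. Assumes "test" and
    // "build" scripts exist in package.json.
    import { execSync } from "node:child_process";

    const steps = ["npm run test", "npm run build"];

    for (const cmd of steps) {
      console.log(`Running: ${cmd}`);
      // execSync throws on a non-zero exit code, which halts the loop
      execSync(cmd, { stdio: "inherit" });
    }

    console.log("All checks passed. Safe to commit.");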

9.3: Test-Driven Vibecoding

Even better than testing after: write tests first.

Step 1: "Write a test for a function that validates email addresses"
        -> AI writes the test

Step 2: "Now implement the function to make the test pass"
        -> AI writes the implementation

Step 3: Run test -> verify it passes

Step 4: "Add edge case tests: empty string, missing @, multiple @s"
        -> AI adds more tests

Step 5: Iterate until solid

Tests give AI a clear target. When AI writes tests first, the implementation has a concrete specification to satisfy. This produces better code than "just implement this feature."
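
To make steps 1 and 2 concrete, here is a minimal sketch using Vitest (the test runner is an assumption; any Jest-style runner reads the same):

    // validateEmail.test.ts -- step 1: the test exists before the code
    import { describe, expect, it } from "vitest";
    import { validateEmail } from "./validateEmail";

    describe("validateEmail", () => {
      it("accepts a normal address", () => {
        expect(validateEmail("ada@example.com")).toBe(true);
      });

      it("rejects an empty string", () => {
        expect(validateEmail("")).toBe(false);
      });
    });

    // validateEmail.ts -- step 2: the simplest implementation that passes.
    // Deliberately minimal; the edge-case tests from step 4 are what force
    // the logic to grow.
    export function validateEmail(email: string): boolean {
      const parts = email.split("@");
      return parts.length === 2 && parts[0].length > 0 && parts[1].includes(".");
    }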

9.4: What to Test

For every function or endpoint, consider:

  • Happy path -- normal input, expected output
  • Empty input -- empty strings, null, undefined
  • Invalid input -- wrong types, malformed data
  • Boundary values -- zero, max int, very long strings
  • Error cases -- network failures, missing permissions, timeouts
  • Security cases -- injection attempts, unauthorized access
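
As a sketch of how these categories map onto an actual test file, here is a Vitest skeleton for a hypothetical POST /api/orders endpoint (all names are illustrative):

    // orders.test.ts -- one placeholder test per category from the list above
    import { describe, it } from "vitest";

    describe("POST /api/orders", () => {
      it.todo("happy path: valid order returns 201 with the created record");
      it.todo("empty input: empty body or missing fields return 400");
      it.todo("invalid input: quantity sent as a string returns 400");
      it.todo("boundary values: quantity of 0 and Number.MAX_SAFE_INTEGER are rejected");
      it.todo("error cases: database timeout returns 503 instead of crashing");
      it.todo("security cases: ordering against another user's cart returns 403");
    });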

9.5: Common Test Prompts

Unit tests:

"Write unit tests for the validateEmail function covering:
- Valid emails
- Empty string
- Missing @ symbol
- Missing domain
- Multiple @ symbols
- Very long email addresses"
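
A reasonable response to that prompt is a parameterized suite along these lines (a sketch, reusing the validateEmail module from 9.3):

    // validateEmail.edge.test.ts
    import { describe, expect, it } from "vitest";
    import { validateEmail } from "./validateEmail";

    const cases: Array<[input: string, expected: boolean]> = [
      ["ada@example.com", true],                  // valid email
      ["", false],                                // empty string
      ["adaexample.com", false],                  // missing @ symbol
      ["ada@", false],                            // missing domain
      ["a@b@example.com", false],                 // multiple @ symbols
      ["a".repeat(300) + "@example.com", false],  // very long address
    ];

    describe("validateEmail edge cases", () => {
      it.each(cases)("validateEmail(%j) -> %j", (input, expected) => {
        expect(validateEmail(input)).toBe(expected);
      });
    });

Note that the very-long-address case fails against the minimal implementation sketched in 9.3, which is exactly the point of step 5: each new edge-case test drives the next iteration.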

Integration tests:

"Create integration tests for the /api/users endpoint:
- GET returns list of users
- POST creates new user with valid data
- POST with invalid data returns 400
- Unauthorized request returns 401
- Duplicate email returns 409"
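
The output should look roughly like this (a sketch assuming an Express app exported from ./app, tested with supertest under Vitest; the token handling is illustrative):

    // users.api.test.ts
    import { describe, expect, it } from "vitest";
    import request from "supertest";
    import { app } from "./app"; // hypothetical Express app export

    const auth = { Authorization: "Bearer test-token" }; // illustrative

    describe("/api/users", () => {
      it("GET returns a list of users", async () => {
        const res = await request(app).get("/api/users").set(auth).expect(200);
        expect(Array.isArray(res.body)).toBe(true);
      });

      it("POST with invalid data returns 400", async () => {
        await request(app)
          .post("/api/users")
          .set(auth)
          .send({ email: "not-an-email" })
          .expect(400);
      });

      it("unauthorized request returns 401", async () => {
        await request(app).post("/api/users").send({ email: "a@b.com" }).expect(401);
      });
    });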

Security tests:

"Write security tests for the login endpoint:
- SQL injection in email field
- XSS in name field
- Rate limiting after 5 attempts
- Invalid token returns 401"
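
A sketch of what this should produce (same assumed stack; the key assertions are that malicious input is rejected cleanly, never executed and never a crash):

    // login.security.test.ts
    import { describe, expect, it } from "vitest";
    import request from "supertest";
    import { app } from "./app"; // hypothetical Express app export

    describe("POST /api/login security", () => {
      it("treats SQL injection in the email field as a failed login", async () => {
        const res = await request(app)
          .post("/api/login")
          .send({ email: "' OR '1'='1", password: "x" });
        expect(res.status).toBe(401); // not 200 (bypass), not 500 (query ran)
      });

      it("rate limits after 5 failed attempts", async () => {
        for (let i = 0; i < 5; i++) {
          await request(app)
            .post("/api/login")
            .send({ email: "a@b.com", password: "wrong" });
        }
        await request(app)
          .post("/api/login")
          .send({ email: "a@b.com", password: "wrong" })
          .expect(429);
      });
    });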

9.6: The Sanity Checklist

Before accepting any AI code, verify:

  • Does this actually solve my problem?
  • Is it free of obvious security issues?
  • Does it handle errors?
  • Is it consistent with the rest of the codebase?
  • Would I be embarrassed to show this to a senior developer?

If you answer "no" or "not sure" to any of these, don't ship it.

Module Checkpoint

Test your understanding -- try to answer from memory before looking back at the module.

  • Why does AI code specifically need testing?
  • What is the benefit of writing tests BEFORE implementation?
  • What should you do if you can't explain what AI code does?