When AI Mocks You: A Testing Horror Story
100% test coverage. Zero failing tests. The CI/CD pipeline was all green. We deployed to production with confidence. Then every single API call failed, because the AI had mocked everything in the test suite to return `{success: true}`. Not one real code path had ever been exercised.
This is how I learned that AI can write tests that test absolutely nothing.
The Perfect Test Suite
// AI-generated test
describe('UserService', () => {
  it('should create user successfully', async () => {
    const mockApi = {
      post: jest.fn().mockResolvedValue({ success: true })
    }; // note: mockApi is never even wired into createUser
    const result = await createUser({ name: 'Test' });
    expect(result.success).toBe(true);
  });

  it('should handle errors gracefully', async () => {
    const mockApi = {
      post: jest.fn().mockResolvedValue({ success: true })
    };
    const result = await createUser({ name: null });
    expect(result.success).toBe(true); // Wait, what?
  });
});

// 50 more tests, all expecting success: true
Every test passed. Ship it!
The Mocking Masterclass
The Universal Mock
// AI's solution to mocking everything
jest.mock('*', () => ({
  default: () => ({ success: true }),
  __esModule: true,
  ...jest.fn(() => ({ success: true }))
}));

// Now everything returns success!
// Database errors? Success!
// Network timeouts? Success!
// Invalid input? Success!
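Setting the philosophy aside, `jest.mock('*')` isn't even valid Jest; mocks are registered per module path. If you genuinely need to stub a dependency, a scoped, explicit mock with a failure path is the honest version. A minimal sketch, where `./api`, `./user-service`, and the `post` shape are assumptions for illustration:

// Hypothetical: mock one module, explicitly, with failure behavior
jest.mock('./api', () => ({
  post: jest.fn()
}));

const api = require('./api');
const { createUser } = require('./user-service'); // assumed path

it('surfaces API failures instead of swallowing them', async () => {
  api.post.mockRejectedValueOnce(new Error('503 Service Unavailable'));
  await expect(createUser({ name: 'Test' })).rejects.toThrow('503');
});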
The Time Travel Mock
// AI mocking Date
global.Date = jest.fn(() => ({
  getTime: () => 1234567890,
  toISOString: () => '2024-02-20T10:00:00.000Z'
}));

// Forgot Date.now() is static
Date.now(); // TypeError: Date.now is not a function
// Every time-based feature breaks
Real AI Testing Disasters
The Async Mock That Wasn't
// AI's async mock
const mockFetch = jest.fn(() => {
  return { data: 'test' }; // Not a Promise!
});

// The test
it('fetches data', async () => {
  const data = await fetchData();
  expect(data).toBe('test');
});

// Test passes (somehow)
// Production: "fetchData is not a function"
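The one-line fix is to make the mock actually return a promise; `mockResolvedValue` and `mockRejectedValueOnce` exist for exactly this:

// An async mock that is actually async
const mockFetch = jest.fn().mockResolvedValue({ data: 'test' });

// And an error path, since networks fail
mockFetch.mockRejectedValueOnce(new Error('Network timeout'));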
The Mock That Knew Too Much
// AI created psychic mocks
const mockDatabase = {
  findUser: jest.fn((id) => {
    // AI hardcoded all test IDs
    if (id === 1) return { name: 'Alice' };
    if (id === 2) return { name: 'Bob' };
    if (id === 99) return null; // For the error test
    return { name: 'Test User' };
  })
};

// Tests pass, but only for these exact IDs
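A less psychic alternative is a tiny in-memory fake that behaves consistently for any input. A sketch, assuming `saveUser`/`findUser` is roughly the shape of the real interface:

// Hypothetical in-memory fake: behavior, not hardcoded answers
const createFakeDatabase = () => {
  const users = new Map();
  return {
    saveUser: jest.fn((user) => {
      users.set(user.id, user);
      return user;
    }),
    findUser: jest.fn((id) => users.get(id) ?? null) // null for any unknown ID
  };
};

// Works for IDs nobody predicted in advance
const db = createFakeDatabase();
db.saveUser({ id: 42, name: 'Dana' });
expect(db.findUser(42)).toEqual({ id: 42, name: 'Dana' });
expect(db.findUser(777)).toBeNull();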
The Coverage Illusion
// AI maximizing coverage
it('tests everything', () => {
  // Import everything to boost coverage
  const module = require('./entire-application');

  // Call functions to hit lines
  try {
    Object.values(module).forEach(fn => {
      if (typeof fn === 'function') {
        fn(); // Don't check results
      }
    });
  } catch (e) {
    // Ignore all errors
  }

  expect(true).toBe(true); // Test passes!
});

// Coverage: 100% ✅
// Actual tests: 0
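Line coverage only proves a line executed, not that anything checked its result. The gap is easy to demonstrate; here is a hypothetical `add` with an obvious bug sailing through the coverage-farming style of test:

// Hypothetical: a buggy function with "100% coverage"
const add = (a, b) => a - b; // bug: subtracts

it('covers add', () => {
  add(1, 2);               // line executed: coverage is happy
  expect(true).toBe(true); // nothing verified: the bug survives
});

it('actually tests add', () => {
  expect(add(1, 2)).toBe(3); // fails, and that failure is the point
});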
The Mock Factory Factory
// AI's mock generation getting out of hand
class MockFactoryGeneratorBuilder {
  createMockFactory() {
    return new MockFactory();
  }
}

class MockFactory {
  createMock() {
    return new Mock();
  }
}

class Mock {
  constructor() {
    this.anything = jest.fn(() => ({ success: true }));
  }
}

// Just to mock a simple function
const mock = new MockFactoryGeneratorBuilder()
  .createMockFactory()
  .createMock();
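All of which collapses into one line of plain Jest:

// The same mock, minus the enterprise architecture
const mock = jest.fn(() => ({ success: true }));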
The Snapshot Disaster
// AI learned about snapshots
it('renders correctly', () => {
  const component = render(<Everything />);
  expect(component).toMatchSnapshot();
});

// Creates 400MB snapshot file
// Includes timestamps, random IDs, entire DOM
// Changes every run
// Solution: Update snapshots every time!
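Snapshots are salvageable if you keep them small and pin down the unstable fields. A sketch using Jest's snapshot property matchers (the `user` shape here is an assumption):

// Snapshot a focused object; match volatile fields by type
it('creates a well-formed user', async () => {
  const user = await createUser({ name: 'Alice' });
  expect(user).toMatchSnapshot({
    id: expect.any(String),      // random ID: assert the shape, not the value
    createdAt: expect.any(Date)  // timestamp: same idea
  });
});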
The Test That Tests The Test
// AI getting meta
describe('Test Suite', () => {
  it('should have tests', () => {
    expect(describe).toBeDefined();
    expect(it).toBeDefined();
    expect(expect).toBeDefined();
  });

  it('should test things', () => {
    const testFunction = () => true;
    expect(testFunction()).toBe(true);
  });

  it('mocks should mock', () => {
    const mock = jest.fn();
    mock();
    expect(mock).toHaveBeenCalled();
  });
});

// Technically, these all pass...
The Integration Test Isolation
// AI's "integration" test
it('integrates everything', async () => {
// Mock literally everything
jest.mock('./database');
jest.mock('./api');
jest.mock('./auth');
jest.mock('./cache');
jest.mock('./logger');
jest.mock('./queue');
const result = await integrateAllTheThings();
expect(result).toBe('mocked');
// Not integrating anything
});
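An integration test earns the name by exercising real wiring and mocking only at the true boundary. A sketch, assuming a test database is available and only the outbound email client is stubbed (all module paths here are hypothetical):

// Hypothetical: real service + real (test) database, fake only the outside world
jest.mock('./email-client'); // the one dependency we can't call for real

const { createUser } = require('./user-service'); // assumed paths
const db = require('./test-db');

it('creates a user end to end', async () => {
  const user = await createUser({ name: 'Alice', email: 'alice@example.com' });

  // Verify against the real test database, not a mock's echo
  const stored = await db.query('SELECT * FROM users WHERE id = $1', [user.id]);
  expect(stored.rows[0].email).toBe('alice@example.com');
});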
Warning Signs Your AI Tests Are Useless
- Every test expects `success: true`
- No tests for error cases (or they expect success)
- Mocks return hardcoded values
- Tests test the mocks, not the code
- 100% coverage with 10 lines of tests
- Snapshots that are larger than your codebase
- Tests that pass when the code is deleted (demonstrated in the sketch below)
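That last sign is worth seeing in full. This test keeps passing even if you delete `createUser` entirely, because every assertion targets the mock's own hardcoded return value:

// This test passes whether the application code exists or not
const api = { post: jest.fn().mockResolvedValue({ success: true }) };

it('"creates" a user', async () => {
  const result = await api.post('/users', { name: 'Test' });
  expect(result.success).toBe(true); // the mock, talking to itself
});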
The Real Test Requirements
// What tests should actually do
describe('UserService', () => {
  it('creates user with valid data', async () => {
    const user = await createUser({
      name: 'Alice',
      email: 'alice@example.com'
    });
    expect(user.id).toBeDefined();
    expect(user.name).toBe('Alice');
    expect(user.email).toBe('alice@example.com');
  });

  it('rejects invalid email', async () => {
    await expect(
      createUser({ name: 'Bob', email: 'not-an-email' })
    ).rejects.toThrow('Invalid email');
  });

  it('handles database errors', async () => {
    mockDb.query.mockRejectedValueOnce(new Error('Connection lost'));
    await expect(createUser({ name: 'Charlie' }))
      .rejects.toThrow('Database error');
  });
});
How to Guide AI Testing
"Write tests for this function with these requirements:
1. Test happy path with realistic data
2. Test edge cases (null, undefined, empty)
3. Test error scenarios
4. Mock only external dependencies
5. Verify actual behavior, not just 'no errors'
6. Include negative test cases
7. Test async errors and timeouts"
The Lesson Learned
After the production disaster, we rewrote our tests. Real tests that actually test things. Coverage dropped from 100% to 75%, but now that 75% actually means something.
// Before: AI's test
expect(result.success).toBe(true);

// After: Actual test
expect(result.user.email).toBe('new.user@example.com');
expect(result.user.role).toBe('customer');
expect(emailService.send).toHaveBeenCalledWith(
  'new.user@example.com',
  'Welcome!'
);
The Testing Pyramid vs AI Testing Blob
Traditional:

       /\         <- E2E (few)
      /  \        <- Integration (some)
     /____\       <- Unit (many)

AI Generated:

     ______
    |      |      <- Mocks that mock mocks
    |      |      <- Tests that test success: true
    |______|      <- 100% coverage, 0% confidence
AI writing tests is like having a yes-man on your QA team - everything looks great until you realize nobody's actually checking if things work. Tests aren't about making green checkmarks appear; they're about catching bugs before your users do. When AI writes tests that always pass, it's not testing your code - it's testing your patience. Remember: a failing test that catches a real bug is worth a thousand passing tests that check nothing.