
My Git Log Became a Victorian Novel: A Month of Automated Commit Messages

2025-07-10 | Catalypt AI Team | ai-first

"Let AI write your commit messages!" they said. "It'll save time!" they said. Fast forward one month: my git log looks like a Victorian novel, my teammates hate me, and I spent more time editing AI commits than I would have spent writing them.

But I learned something. Here's the journey from commit message hell to actually useful automation.

Week 1: The Optimistic Beginning

# Initial setup - so innocent, so naive
git config --global alias.aicommit \
  '!f() { git diff --staged | ai-commit-message; }; f'

# First AI commit:
"feat: Implement user authentication module with comprehensive JWT token 
handling, including refresh token rotation, secure cookie management, and 
extensive error handling for various edge cases such as expired tokens, 
invalid signatures, and malformed payloads. This implementation follows 
OWASP security guidelines and includes rate limiting to prevent brute 
force attacks."

# Actual changes: Added 10 lines to auth.js

Every commit was a paragraph. The AI was trying to win a creative writing contest.
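One mitigation that would have helped here (a sketch of my own, not part of any real tool): keep only the first line of whatever the AI returns and hard-truncate it to git's conventional 72-character subject limit. `truncateSubject` is a hypothetical helper name.

```javascript
// Sketch: clamp verbose AI output to a single, git-friendly subject line.
// `truncateSubject` is a hypothetical helper, not part of any real tool.
const truncateSubject = (aiOutput, maxLen = 72) => {
  const firstLine = aiOutput.split('\n')[0].trim();
  return firstLine.length <= maxLen
    ? firstLine
    : firstLine.slice(0, maxLen - 3).trimEnd() + '...';
};
```

It doesn't make the message good, but it stops a paragraph from landing in `git log --oneline`.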

I told the AI to be more concise:

# "Please be more concise"
# Next commit:
"fix: bug"

# "What bug?"
# AI: "The bug in the code"

From Victorian novel to caveman grunts. No middle ground.

Week 2: The Training Phase

const generateCommitMessage = async (diff) => {
  const prompt = `
    Generate a git commit message following conventional commits format.
    Type must be: feat, fix, docs, style, refactor, test, chore
    Keep under 72 characters.
    Be specific but concise.
    
    Diff: ${diff}
  `;
  
  return await ai.complete(prompt);
};

Actual commits the AI generated:

"feat: Add new function that does something with the data and returns a value"
"fix: Change variable name from x to y in multiple files"
"refactor: Move code from one place to another place in the codebase"

Still too vague...

Week 3: Getting Specific

const improvedPrompt = `
Generate a conventional commit message.

Rules:
1. Format: <type>(<scope>): <subject>
2. Types: feat|fix|docs|style|refactor|test|chore|perf
3. Scope: the affected module/component
4. Subject: imperative mood, no period, <50 chars

Examples:
- feat(auth): add password reset functionality
- fix(api): handle null response in user endpoint
- docs(readme): update installation instructions

Analyze this diff and generate ONE message:
`;

Better results:

"feat(auth): implement JWT refresh token rotation"
"fix(users): validate email format before saving"
"perf(db): add index to user_sessions table"

Finally getting somewhere!

The Context Problem

const diff = `
-  const data = await fetchUser(id);
+  const data = await fetchUser(id).catch(err => {
+    console.error('User fetch failed:', err);
+    return null;
+  });
`;

// AI generated: "feat(users): add error handling"
// But this was fixing a production crash!
// Should be: "fix(users): handle fetchUser rejection to prevent crashes"
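A cheap heuristic that would have caught this case (my own sketch, assuming all you have is the raw diff text): if the added lines introduce error handling, lean toward `fix` instead of `feat`.

```javascript
// Sketch: guess "fix" vs "feat" from the added lines of a unified diff.
// Purely heuristic; `guessType` is a hypothetical helper, not a real API.
const guessType = (diff) => {
  const added = diff
    .split('\n')
    .filter(line => line.startsWith('+') && !line.startsWith('+++'));
  // Added .catch/try/throw/console.error usually means a bug is being handled.
  const errorHandling = /\.catch\(|try\s*{|catch\s*\(|throw\s|console\.error/;
  return added.some(line => errorHandling.test(line)) ? 'fix' : 'feat';
};
```

It's wrong sometimes (new features have error handling too), but it biases the suggestion in the right direction.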

Week 4: The Honeymoon Phase

AI commits started looking reasonable:

git log --oneline -10

a7b3d4f feat(payments): integrate Stripe checkout flow
b2c5e6a fix(auth): prevent race condition in token refresh  
c3d4e5b refactor(db): normalize user preferences schema
d4e5f6c docs(api): add webhook endpoint documentation
e5f6a7d test(payments): add integration tests for refunds

# Team: "Hey, your commits got better!"
# Me: *sweating* "Yeah... I'm really focused on commit quality"

But then came the edge cases...

The Hallucination Problem

const diff = `
-  return user.role === 'admin';
+  return user.role === 'admin' || user.role === 'superadmin';
`;

// AI commit: "feat(auth): implement role-based access control with 
// granular permissions for admin, superadmin, moderator, and user roles"

// WTF? I only added superadmin check!
The Merge Commit Disaster

AI trying to be helpful with merge commits:

"Merge branch 'feature/user-auth' into develop after extensive testing 
and validation of JWT implementation, password reset flow, two-factor 
authentication, session management, and security audit compliance"

Reality: I fixed one typo before merging.

The Breaking Change Problem

const breakingDiff = `
- function processPayment(amount, currency) {
+ function processPayment(amount, currency, metadata) {
`;

// AI: "refactor(payments): add metadata parameter"
// Missing: THIS IS A BREAKING CHANGE!
// Should be: "feat(payments)!: add required metadata parameter to processPayment"
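A rough signature-change detector could have flagged this. This is a sketch of how I'd guess at `hasBreakingChanges`; the real logic would need to handle arrow functions, methods, and renames.

```javascript
// Sketch: flag a diff as potentially breaking when a removed line declares
// a function whose parameter list differs on a corresponding added line.
// Hypothetical helper; only handles plain `function name(...)` declarations.
const hasBreakingChanges = (diff) => {
  const sig = /function\s+(\w+)\s*\(([^)]*)\)/;
  const removed = {}; // function name -> params seen on "-" lines
  for (const line of diff.split('\n')) {
    const m = line.slice(1).match(sig);
    if (!m) continue;
    if (line.startsWith('-')) removed[m[1]] = m[2].trim();
    else if (line.startsWith('+') && m[1] in removed
             && removed[m[1]] !== m[2].trim()) return true;
  }
  return false;
};
```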

My favorite AI nonsense commits:

"fix: fix the fix that fixed the previous fix"
"feat: add feature (details in code)"
"refactor: improve code quality and stuff"
"chore: do some maintenance tasks"
"style: make it pretty ✨"  // It added emojis without asking
  • "WIP" (on main branch)
  • "Fix stuff" (what stuff?)
  • "asdfasdf" (keyboard mash)
  • "Final commit" (it never is)
  • "Actually final commit"
  • "OK this time really final"
  • Copy-pasting the entire PR description

What actually works:

// The hybrid approach that actually saves time
class CommitMessageHelper {
  constructor() {
    this.patterns = {
      dependencies: /package(-lock)?\.json|yarn\.lock|requirements\.txt/,
      docs: /\.(md|rst|txt)$|docs\//,
      tests: /__tests__|\.test\.|\.spec\.|test_/,
      config: /\.(json|yaml|yml|toml|ini|conf)$/
    };
  }
  
  async suggest(stagedFiles, diff) {
    // Let AI handle the easy ones
    if (this.isDependencyUpdate(stagedFiles)) {
      return this.generateDependencyCommit(diff);
    }
    
    if (this.isDocumentationOnly(stagedFiles)) {
      return this.generateDocsCommit(diff);
    }
    
    // For complex changes, just help with format
    const analysis = await this.analyzeChanges(diff);
    return {
      type: analysis.suggestedType,
      scope: analysis.affectedModules[0],
      breaking: analysis.hasBreakingChanges,
      // Let human write the description
      template: `${analysis.suggestedType}(${analysis.affectedModules[0]}): `
    };
  }
}
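The helper methods on the class above are elided; here is roughly how I'd sketch the file-pattern checks, assuming `stagedFiles` is an array of repo-relative paths (these standalone functions are my guess at the implementation, not the exact code).

```javascript
// Sketch of the pattern checks referenced by CommitMessageHelper.
// Assumes stagedFiles is an array of repo-relative paths.
const patterns = {
  dependencies: /package(-lock)?\.json|yarn\.lock|requirements\.txt/,
  docs: /\.(md|rst|txt)$|docs\//,
};

const isDependencyUpdate = (stagedFiles) =>
  stagedFiles.length > 0 &&
  stagedFiles.every(f => patterns.dependencies.test(f));

const isDocumentationOnly = (stagedFiles) =>
  stagedFiles.length > 0 &&
  stagedFiles.every(f => patterns.docs.test(f));
```

The `every` check matters: one stray source file in a "dependency update" means the commit needs a human-written message.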
// The final setup that works
const gitCommitWorkflow = {
  // 1. AI suggests type and scope
  getSuggestion: async () => {
    const diff = await git.diff(['--staged']);
    const files = await git.status(['--porcelain']);
    
    return {
      type: detectChangeType(diff),      // feat, fix, etc.
      scope: detectScope(files),         // auth, api, etc.
      breaking: hasBreakingChanges(diff) // true/false
    };
  },
  
  // 2. Human writes the actual message
  commit: async (message, suggestion) => {
    // Validate format
    if (!isConventionalCommit(message)) {
      console.error('Not conventional format!');
      return false;
    }
    
    // Check for common mistakes
    if (message.length > 72) {
      console.warn('Message too long!');
    }
    
    if (suggestion.breaking && !message.includes('!')) {
      console.warn('Breaking change not marked!');
    }
    
    return git.commit(message);
  }
};
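`isConventionalCommit` is left undefined above; a minimal sketch, assuming the format rules from the Week 3 prompt (type, optional scope, optional `!` for breaking changes, subject of 50 characters or fewer):

```javascript
// Sketch: validate a subject line against the conventional-commit shape
// used throughout this post. Checks only the first line of the message.
const isConventionalCommit = (message) =>
  /^(feat|fix|docs|style|refactor|test|chore|perf)(\([\w-]+\))?!?: .{1,50}$/
    .test(message.split('\n')[0]);
```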
Where AI-generated messages actually work well:

  1. Large refactoring - AI can summarize many changes
  2. Documentation updates - Good at describing what changed
  3. Dependency updates - Clear, formulaic messages
  4. Generated code changes - Explains the generation
# My final git aliases after a month of experimentation
[alias]
  # AI helps with type and scope only
  ai-type = "!f() { git diff --staged | ai-commit-type; }; f"
  
  # Format checker
  check-commit = "!f() { echo $1 | grep -E '^(feat|fix|docs|style|refactor|test|chore)(\\(.+\\))?: .{1,50}'; }; f"
  
  # Quick commits for obvious changes
  fix = "!f() { git add -A && git commit -m \"fix: $1\"; }; f"
  docs = "!f() { git add -A && git commit -m \"docs: $1\"; }; f"
  
  # The one that actually works
  smart-commit = "!f() { \
    TYPE=$(git ai-type); \
    SCOPE=$(git diff --staged --name-only | ai-detect-scope); \
    echo \"Suggested: $TYPE($SCOPE): \"; \
    read -p \"Complete the message: \" MSG; \
    git commit -m \"$TYPE($SCOPE): $MSG\"; \
  }; f"
What a month of this taught me:

  1. AI is bad at context (was it a fix or a feature?)
  2. AI is verbose by nature (trained on documentation)
  3. AI doesn't understand your team's conventions
  4. But AI is good at summarizing changes
  5. Human review is non-negotiable

Now I use AI for:

  • Suggesting the type and scope
  • Summarizing large diffs
  • Reminding me about conventional format
  • Catching when I forgot to mention breaking changes

But I write the actual message. It's faster than editing AI's purple prose.
