kennysliding

November 30, 2025 Transmission_ID: advent-o

Advent of Code 2025: Introduction

rust advent of code ai

Look, I’ll be honest with you. I’ve done Advent of Code before. Multiple times. And every single time, I tap out around Day 4 or 5. Not because the problems get impossibly hard—they do ramp up, sure—but because I’m lazy. There, I said it. The holiday season hits, work gets weird, and suddenly those little puzzles feel like homework I didn’t sign up for.

But this year? This year I’m actually going to try.

The Elephant in the Room: AI Can Just… Solve These

Here’s the thing that’s been bugging me about competitive programming challenges in 2025: AI can crush them. You can literally paste the problem into ChatGPT or Claude, and within seconds you have working code. Sometimes it’s even elegant. It feels like showing up to a marathon and everyone else is on motorcycles.

So what’s even the point anymore? If we’re being real, leaderboards are probably filled with people who just prompted their way to the top. And honestly, I don’t even blame them—if the tool exists, people will use it.

But that defeats the whole purpose, doesn’t it? Advent of Code was always about the struggle. That moment when you stare at your screen for 20 minutes, realize you misread the problem, refactor everything, and finally see those sweet green checkmarks. That dopamine hit doesn’t exist when you outsource the thinking.

My Approach This Year: Human vs. Machine (Kind Of)

So here’s what I’m doing differently. I’m not pretending AI doesn’t exist—that’d be dumb. Instead, I’m leaning into it as a learning experiment.

Step 1: Solve it myself first.

No AI. No hints. Just me, Rust, and probably too much coffee. I’ll write out my thought process, the dead ends I hit, the stupid bugs I introduced. The whole messy journey.

Step 2: Then I prompt the AI.

After I’ve got my solution working (or at least made a serious attempt), I’ll throw the same problem at an AI and see what it spits out. Same problem, fresh context, no hand-holding.

Step 3: Compare and analyze.

This is the part I’m actually excited about. How does my approach differ from what the AI generates? Is my code more readable? More performant? Or did the AI find some elegant pattern I completely missed?

I want to look at things like:

  • Code structure and organization
  • Edge case handling
  • Algorithm choice (did we pick the same approach?)
  • Idiomatic patterns (especially since I’m writing Rust, which has strong opinions about how code should look)
  • Performance characteristics

Step 4: Learn from it.

The goal isn’t to prove humans are better or that AI is cheating. The goal is to understand where AI excels, where it falls short, and how I can actually use these tools to become a better developer—not just a better prompt engineer.

Step 5: The Part 2 adaptation test.

Here’s where it gets really interesting. If you’ve done Advent of Code before, you know the drill: Part 1 lulls you into a false sense of security, then Part 2 drops a twist that makes you rethink everything. Maybe the input size explodes and your brute force solution times out. Maybe there’s a new constraint that breaks your elegant approach. Maybe you need to track something you completely ignored in Part 1.

This is where I want to see how AI handles iterative problem-solving. Can it take its own Part 1 solution and adapt it intelligently? Or does it just start from scratch? When I give it the Part 2 requirements with the existing codebase as context, does it refactor cleanly or create a mess?

Because let’s be real—this mirrors actual software development way more than greenfield coding. Most of the time we’re not writing code from nothing. We’re adapting, extending, and sometimes completely reworking existing code when requirements change. If AI can only do the “fresh start” thing well but struggles with iteration, that’s a pretty significant limitation. And if it’s actually good at this? That changes how I’d use it in my day-to-day work.
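As a hypothetical example of the kind of Part 1 → Part 2 pivot I mean (this is made up, not a real AoC puzzle): say Part 1 asks you to count pairs summing to a target over a few hundred numbers, and Part 2 hands you millions. The brute force dies, and the adapted version has to restructure around a hash map rather than just tweak a loop:

```rust
use std::collections::HashMap;

// Part 1 mindset: the input is tiny, so O(n^2) brute force is fine.
fn count_pairs_naive(nums: &[i64], target: i64) -> usize {
    let mut count = 0;
    for i in 0..nums.len() {
        for j in (i + 1)..nums.len() {
            if nums[i] + nums[j] == target {
                count += 1;
            }
        }
    }
    count
}

// Part 2 mindset: the input exploded, so count complements in one
// O(n) pass with a map from value -> how many times we've seen it.
fn count_pairs_fast(nums: &[i64], target: i64) -> usize {
    let mut seen: HashMap<i64, usize> = HashMap::new();
    let mut count = 0;
    for &n in nums {
        count += seen.get(&(target - n)).copied().unwrap_or(0);
        *seen.entry(n).or_insert(0) += 1;
    }
    count
}

fn main() {
    let nums = [1, 5, 3, 3, 2, 4];
    // Pairs summing to 6: (1,5), (3,3), (2,4) -> 3.
    assert_eq!(count_pairs_naive(&nums, 6), count_pairs_fast(&nums, 6));
    println!("pairs summing to 6: {}", count_pairs_fast(&nums, 6));
}
```

The question for the AI is whether it makes this kind of structural jump from its own Part 1 code, or whether it just throws the old solution away.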

Why Rust?

Because I’m a masochist, apparently. No, actually—I’ve been wanting to level up my Rust skills, and Advent of Code is perfect for that. The problems are small enough that you’re not drowning in boilerplate, but complex enough that you need to actually understand ownership, borrowing, and all those fun compiler errors that make you question your career choices.

Plus, Rust’s error handling and pattern matching are genuinely beautiful for these kinds of algorithmic problems. Once you get past the learning curve, the code reads like poetry. Aggressive poetry that yells at you about lifetimes, but poetry nonetheless.
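Here’s the kind of thing I mean, on a made-up input format (lines like "R 4", "U 2", invented purely for illustration): `Result` makes the parsing failures explicit, and `match` forces you to handle every direction, which is exactly the shape most AoC input-parsing takes.

```rust
// One parsed instruction from a hypothetical puzzle input.
#[derive(Debug, PartialEq)]
enum Step {
    Up(i32),
    Down(i32),
    Left(i32),
    Right(i32),
}

// Fallible parsing: every way a line can be malformed is a visible Err.
fn parse_step(line: &str) -> Result<Step, String> {
    let (dir, amount) = line
        .split_once(' ')
        .ok_or_else(|| format!("malformed line: {line:?}"))?;
    let n: i32 = amount
        .parse()
        .map_err(|e| format!("bad amount in {line:?}: {e}"))?;
    // Exhaustive match: the compiler won't let you forget a case.
    match dir {
        "U" => Ok(Step::Up(n)),
        "D" => Ok(Step::Down(n)),
        "L" => Ok(Step::Left(n)),
        "R" => Ok(Step::Right(n)),
        other => Err(format!("unknown direction: {other:?}")),
    }
}

fn main() {
    assert_eq!(parse_step("R 4"), Ok(Step::Right(4)));
    assert!(parse_step("R four").is_err());
    assert!(parse_step("X 1").is_err());
    println!("parser behaves as expected");
}
```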

What I’m Hoping to Get Out of This

Honestly? A few things:

  1. Actually finish Advent of Code for once. Day 12 or bust. (Yes, AoC 2025 runs until December 12th for the 10th anniversary—shorter than usual, so no excuses.)

  2. Better intuition for when to reach for AI. There’s a difference between using AI as a crutch and using it as a power tool. I want to find that line.

  3. Sharper Rust skills. Nothing teaches you a language like solving a couple dozen problems in it.

  4. Content that might help others. If you’re also grappling with the “should I use AI for this?” question, maybe my fumbling around will be useful.

Let’s See How This Goes

I’ll be documenting each day in this repo—my solution, my thought process, the AI’s solution, and the comparison. It’s going to be messy. I’ll probably have days where the AI absolutely destroys my solution and I have to sit with that. I’ll probably also have days where I catch bugs in AI-generated code that would’ve caused issues in production.

Either way, it should be interesting.

If you’re doing Advent of Code this year too, I’d love to hear your approach. Are you going pure human? Pure AI? Some hybrid? Hit me up.

Time to open Day 1. Let’s see how long this motivation lasts.


This is part of my Advent of Code 2025 series where I solve puzzles manually, then compare my solutions with AI-generated code. You can find the full repo and daily breakdowns here.