11 min read
Tags: AI, Product, Tools, Ways of working

How AI changed how I work — two months in

Not a productivity story. More of a reckoning with what work actually is when a large part of it can be delegated to a machine.

About two months ago, something shifted. Not gradually. More like a threshold I crossed without noticing, and then looked back and realised I was somewhere else.

I've been using AI tools for a while. GitHub Copilot in the editor. ChatGPT for the occasional draft. Nothing that fundamentally changed how I worked. But in January, I started using Claude Code for a side project (building this portfolio), and then started routing more and more of my actual product work through AI tools. That's when things got interesting.

This isn't a productivity story. I'm not going to tell you I'm "10x more efficient" or that AI has "unlocked my potential." Those framings irritate me, and I suspect they irritate you too. This is more of a reckoning with what work actually is when a large part of it can be delegated to a machine.

The evening project test

The clearest signal came from side projects. I've always built things in the evening: small websites, experiments, tools that scratch an itch. But the gap between idea and working thing used to be painful. Setting up infrastructure, wrestling with configuration, writing boilerplate I'd written a hundred times before. It wasn't hard, but it was slow and draining, and it ate into the time available to think about the actual problem.

With Claude Code, that gap nearly disappeared. I describe what I want, it writes the scaffolding, I steer the decisions, it executes. The layout, the animations, the cursor behaviour on this portfolio all came together in a few evenings. Not because I worked harder, but because the ratio of thinking-to-doing shifted dramatically in favour of thinking.

That sounds like a simple efficiency gain. But I don't think it is. When you remove the execution drag from creative work, you discover how much of your "creative process" was actually just resistance to starting. The best ideas tend to come when you're building something, not before. When iteration is cheap, you iterate more. When you iterate more, you find better solutions. The quality of the output went up alongside the speed.

But the thing that surprised me most wasn't the portfolio. It was what happened when I brought that same approach to an actual work problem.

Our sales and marketing team regularly needs visual assets: images for proposals, presentations, campaigns. The usual process involved requests, waiting, revisions, more waiting. One evening I built a small internal tool, a Brand Image Generator: a simple interface on top of an AI model, tailored to our brand inputs and use cases. Nothing technically impressive. It took a few hours. And now the sales and marketing team uses it every week without involving anyone else.

The gap between "evening experiment" and "tool people actually rely on" has almost disappeared. A year ago, that tool would have been a project. A proper project, with requirements and timelines and a ticket in Jira. Now it's something I built between dinner and sleep. The organisational implication of that is something most companies haven't fully reckoned with yet: the threshold for building internal tools has dropped so far that almost any problem worth articulating is now also worth just solving.

What changed at work

My day job is Head of Product. The work is mostly thinking, writing, and conversations. Not code. But it turns out that's exactly where AI is most useful for me right now.

The clearest example is spec writing. Writing a good product specification is time-consuming not because the thinking is hard, but because translating the thinking into clear, complete prose is hard. You're holding a complex mental model (user needs, technical constraints, edge cases, scope decisions) and trying to render it in a form that's unambiguous to an engineer who doesn't share that mental model.

I now do a lot of that work in conversation with AI. I talk through the feature, messily, the way you'd think out loud, and then work with the model to structure it into something usable. What used to take half a day takes an hour. More importantly, the back-and-forth surfaces gaps in my thinking that I would have missed. The model asks logical follow-up questions. It generates edge cases I hadn't considered. It acts like a smart rubber duck that occasionally has better ideas than me.

The same applies to synthesising qualitative research. We talk to users regularly, and turning interview notes into insight used to be a careful, manual process. Now I paste in transcripts, ask targeted questions, and get a first-pass synthesis in minutes. I still read everything myself. I don't trust a model to replace that. But the synthesis gives me a frame to react to rather than a blank page to fill.

It all comes down to context

After two months of using AI across very different tasks — writing, coding, research, tooling — one pattern became impossible to ignore: the quality of the output is almost entirely determined by the quality of the input.

That sounds obvious. It isn't. Most people approach AI the way they'd approach a Google search: a short prompt, an expected answer. That works for simple lookups. For anything involving judgment, style, or domain knowledge, it produces generic output that needs to be heavily edited to be useful. The model doesn't know who you are, what you're building, or what good looks like in your context. So it produces something that could work for anyone, which usually means it's not quite right for you.

The thing that changed my results the most was learning to write proper context files. Structured markdown documents that give the AI the information it needs to act as a capable collaborator rather than a generic assistant: the project's goals, design decisions, tone of voice, conventions, constraints, and anything else that would take time to explain in conversation. For this portfolio, that means a file describing the colour system, the typography choices, the animation principles, the copy voice, even what not to do. Every session starts with the AI having read that file. The output is consistent, on-brand, and far closer to what I actually want on the first attempt.
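To make that concrete, here's a trimmed sketch of what such a context file might look like for a project like this portfolio. The structure mirrors what I described above; the specific values are invented for illustration, and the real file is longer and more detailed.

```markdown
# Project context: portfolio site

## Goals
- Personal portfolio and writing home, optimised for reading, not for show.

## Design decisions
- Colour system: near-black background, warm off-white text, one accent colour used sparingly.
- Typography: one serif for body copy, one grotesque for headings; no more than two sizes per page.
- Animation: subtle and purposeful; nothing moves unless the user caused it.

## Copy voice
- First person, plain language, no marketing superlatives.
- Short declarative sentences; hedge claims rather than overstate them.

## Conventions and constraints
- Semantic HTML first; components stay small and composable.
- No new dependencies without a note here explaining why.

## What not to do
- No parallax, no autoplaying motion, no stock imagery.
- Don't rewrite existing copy unless explicitly asked.
```

The point isn't the specific contents. It's that every session starts with the model having read something like this, so the first draft it produces is already constrained by the decisions that matter.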

Writing those files is a skill in itself. You're not just documenting decisions for future reference — you're writing a brief for a very capable collaborator who has no prior knowledge of your project. The better the brief, the better the work. Vague context produces vague output. Precise context produces something you can actually use.

One thing I learned early on: don't write these files yourself. That's a trap. You'll spend an hour trying to articulate decisions that feel obvious to you, miss half of what matters, and end up with something incomplete. The better approach is to have a conversation with the AI first — describe the project, explain the goals, talk through the constraints — and then ask it to produce the context file based on that conversation. It will synthesise what you told it into a structured document. Then you read it, correct what's wrong, remove what's sensitive or irrelevant, and fill in the gaps it couldn't know. You end up with a far more complete document in a fraction of the time, and the process of redacting it forces you to actually read and own every line.

It also changed how I think about documentation more broadly. I used to write notes for future-me. Now I write them knowing an AI might act on them directly. That shifts the standard: the information needs to be clear, complete, and unambiguous enough that an intelligent reader with no context can follow it. That's a higher bar than most internal documentation clears. And it turns out that documentation written to that standard is also significantly more useful for human readers.

The unexpected costs

None of this is free.

The first cost is attention. When execution is cheap, you can move faster than your judgment can keep up with. I've caught myself approving AI-generated code I didn't fully understand, or accepting a first-pass spec because it looked complete. The speed creates a kind of cognitive slack that's easy to abuse. You have to build new habits around slowing down at the right moments, even when momentum wants you to keep going.

The second cost is ownership. There's a question I find myself asking more often now: do I actually know how this works? When I wrote every line myself, understanding was a byproduct of doing. When a model writes it, you have to actively reach for the understanding rather than receive it passively. For side projects, that's mostly fine. I'm not building infrastructure I need to maintain for years. For work, the bar is higher. I'm more deliberate now about making sure I can explain every decision, not just approve it.

The third cost is context. AI is genuinely excellent at greenfield work. Give it a blank canvas and a clear brief, and it produces something solid fast. The image generator, this portfolio, any new project where the shape isn't determined yet. That's where it shines.

Legacy codebases are a different story. The moment you're working inside something with ten years of decisions baked in, inconsistent patterns, abstractions that made sense in 2016, and business logic living in places no one remembers why, the model starts to struggle. It doesn't know what it doesn't know about your system. It confidently produces code that looks right and integrates wrong. You spend as much time correcting its assumptions as you would have spent writing the thing yourself.

More broadly: AI is fast at the rough cut, getting from zero to functional, from nothing to something demonstrable. But the last 20% (the edge cases, the polish, the things that make a product actually good rather than merely working) still takes most of the time. The 80% that AI handles quickly can create a false sense of progress. You have a foundation in two hours and spend two weeks finishing it. That ratio isn't AI's fault, but it's worth internalising before you set expectations.

The fourth cost is harder to name. Something like craft attrition, maybe. There's a kind of satisfaction that comes from building something difficult with your own hands. Some of that satisfaction has moved from the work itself to the direction of the work. That's probably fine. It's the same shift that happens when any professional becomes more senior and delegates more. But it's worth noticing.

What AI doesn't do

AI doesn't make hard calls.

It will give you a well-structured analysis of a complex decision. It will surface tradeoffs clearly. It will even make a recommendation if you ask for one. But the recommendation is based on what you've told it, weighted by what's salient in its training. It doesn't understand your team, your stakeholders, the history behind the current architecture, or the political risk of shipping something half-baked. The judgment, the part where someone has to weigh incommensurable things and commit, is still entirely human.

AI also doesn't have taste. It can mimic taste convincingly, if you're not paying attention, but it optimises for coherence, not for the kind of deliberate wrongness that makes something interesting. The best design decisions are often slightly off by conventional standards. They break a pattern on purpose. AI tends toward the reasonable. Taste tends toward the specific.

And AI doesn't know what to work on. Given a problem, it's excellent. Deciding which problems are worth solving is still the hardest part, and it's still yours.

Two months in

I don't think this is a transition period before AI replaces knowledge work. I think it's a transition period before knowledge work changes shape. The skills that matter most will be judgment, taste, and the ability to direct well rather than execute fluently.

I'm better at my job now than I was two months ago. Some of that is the tools. But mostly it's that the tools forced me to be clearer about where my actual contribution is. When execution is handled, what's left is the thinking. And it turns out, when I'm not tired from execution, the thinking gets better too.

That's probably the most honest thing I can say about it.

Thomas Hutten