Common Ground

I Asked AI to Co-Parent With Me for a Week — Here's What Happened

I'm going to be honest with you: last Tuesday, I was standing in the cereal aisle at Wellcome in Taikoo Shing, asking ChatGPT whether my almost-three-year-old should be eating Cheerios or congee for breakfast. A woman next to me was doing the same thing with her phone, except she was checking the stock market. Both of us looked equally stressed.

Here's the thing — I work in product design and tech. I spend my days thinking about how AI tools should behave. And yet, when it comes to the single most important project I've ever worked on (keeping a tiny human alive and reasonably well-adjusted), I've been winging it like everyone else. So when the What to Expect 2026 survey dropped the stat that 75% of moms now use AI for parenting advice, my first thought wasn't "that's alarming." It was "only 75%?"

Common Sense Media reports the same trend — parents are increasingly turning to AI for meal planning, behaviour strategies, and even medical triage. We're all doing it. We're just not talking about it at playgroup.

So I decided to stop dabbling and go all-in. For one full week, every parenting decision — what to feed him, how to handle tantrums, when to put him down for naps, what stories to tell him — would go through AI first. ChatGPT, Claude, Gemini. The full squad.

Think of it as "I tried AI co-parenting so you don't have to."

Spoiler: some of it was genuinely brilliant. Some of it was hilariously useless. And one moment made me put the phone down for good.

The Rules

Before we get into it, here's how I set this up:

  • Every parenting question goes to AI first. No googling, no WhatsApp-ing my mum group, no asking our helper (who, for the record, has better parenting instincts than any algorithm I've encountered).
  • I'd rate each interaction out of 10 for usefulness, practicality, and whether it actually made my life easier.
  • I'd use multiple AI tools — ChatGPT for quick answers, Claude for nuanced strategy, Gemini for when I wanted a second opinion that sounded vaguely different from the first.
  • If the advice felt dangerous or wrong, I'd override it. I'm experimenting, not negligent.

Armed with too many chat tabs and not enough sleep, I began on a Monday.

Day 1: The Meal Plan That Almost Worked

The ask: Create a full week of toddler meals for a picky eater who's almost 3, considering nutrition (especially iron — his last blood test was borderline), a shellfish allergy, and ingredients I can actually find in Hong Kong.

What AI did well: Honestly? I was impressed. The meal plan was structured, nutritionally thoughtful, and — here's what surprised me — culturally aware. It suggested congee variations with minced pork and spinach for iron, steamed fish with ginger, and even egg tofu stir-fry. It knew about iron absorption and paired iron-rich foods with vitamin C sources. It felt like getting advice from a nutritionist who'd actually been to a cha chaan teng.

It even factored in the reality of toddler eating: small portions, repeated exposure to new foods, and the psychological hack of putting everything in a muffin tin because apparently toddlers love compartments. (They do. It's weird.)

Where it fell apart: It suggested "organic kale chips" as a snack — and look, I love kale as much as the next person who's lying about loving kale, but have you tried finding organic kale chips at ParknShop? It also recommended a specific high-iron cereal brand that doesn't exist in Hong Kong. Small thing, but it reveals the gap: AI knows nutrition science, but it doesn't know your local supermarket. I ended up spending 20 minutes editing the plan to swap in things I could actually buy, which somewhat defeated the time-saving purpose.

That said, it connected to something I've been thinking about — how the first three years of a child's gut health shape so much of their future. AI gave me a solid framework. I just had to localise it.

Score: 7/10 — Would use again, but with a "Hong Kong edit pass."

Day 2: The Meltdown That Broke the System

The ask: We're at a restaurant in Wan Chai. My son is screaming — full volume, face red, tears streaming — because I committed the unforgivable crime of cutting his sandwich into triangles instead of rectangles. I pull out my phone and type (one-handed, while also trying to prevent a water glass from becoming a projectile): "my 2.5yo is screaming because I cut his sandwich wrong, what do I do?"

What AI said: Beautiful, textbook gentle parenting with boundaries. Validate his feelings ("I can see you're really upset about the sandwich"). Offer a choice ("Would you like me to get you a new sandwich, or would you like to try this one?"). Stay calm. Get down to his eye level.

The problem: By the time I'd read the response — roughly 45 seconds later — the sandwich was on the floor, my son had moved on to being upset about something else entirely, and the couple at the next table had already written us off as a cautionary tale.

This was the moment that crystallised something for me. Real-time parenting doesn't work on a chatbot's timeline. When your kid is mid-meltdown, you need instinct, not a loading spinner. You need the muscle memory that comes from having survived 400 previous tantrums. AI can teach you strategies in advance (and it's genuinely good at that). But in the heat of the moment? You're on your own.

It's the same reason I've been thinking about teaching emotional intelligence over raw IQ — EQ is built through messy, real, in-the-moment interactions. Not through optimised prompts.

Score: 4/10 — Great advice, terrible timing. Would be useful as pre-reading, not a live hotline.

Day 3: The Sleep Win

The ask: I fed Claude my son's full sleep data — current wake-up time (6:45am, because toddlers don't believe in weekends), nap time (1pm-ish, usually fights it for 20 minutes), nap duration (inconsistent, 45 min to 2 hours), bedtime (7:30pm, asleep by 8pm on a good night, 9pm when he decides to negotiate), and his main issue: waking at 5am and refusing to go back to sleep.

What AI did: Gave me a detailed schedule with specific wake windows, a nap transition plan, and — this was the key insight — pointed out that his wake window before bed was too short. He was going down at 7:30 but only waking from his nap at 3pm, giving him a 4.5-hour wake window when he likely needed 5–5.5 hours at his age. The fix: push bedtime to 8pm, or cap the nap at 1.5 hours and keep bedtime at 7:30.
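For the fellow product-and-spreadsheet people: the "insight" here is really just date arithmetic you can sketch yourself. A minimal Python version, assuming the 5–5.5 hour target wake window (that figure came from the chat, not from a paediatrician):

```python
from datetime import datetime, timedelta

def wake_window(nap_end: str, bedtime: str) -> timedelta:
    """Hours between the end of the afternoon nap and bedtime, both in 24h HH:MM."""
    fmt = "%H:%M"
    return datetime.strptime(bedtime, fmt) - datetime.strptime(nap_end, fmt)

# Our numbers: nap ends at 3:00pm, bedtime at 7:30pm
print(wake_window("15:00", "19:30"))  # 4:30:00 — short of the assumed 5–5.5h target

# Option 1: push bedtime to 8pm
print(wake_window("15:00", "20:00"))  # 5:00:00
# Option 2: cap the nap so it ends by 2:30pm, keep the 7:30 bedtime
print(wake_window("14:30", "19:30"))  # 5:00:00
```

Either option lands the window in range; we went with capping the nap.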

I went with capping the nap. He slept 20 minutes longer the next morning. Not a miracle, but for a parent who's been running on broken sleep for nearly three years, 20 extra minutes felt like a spa weekend.

If you're in the thick of sleep chaos, we've put together a comprehensive baby sleep schedule by age guide — but honestly, what AI added was the personalisation. It took my kid's specific data and found the pattern I was too sleep-deprived to see.

Score: 8/10 — Genuinely useful. This is AI at its best: crunching your data and spotting what you've missed.

Day 5: The Bedtime Story That Flopped

The ask: Generate a personalised bedtime story for a child who loves trains, dogs, and the park near our flat. Make it calming, about 5 minutes long, and include a character named "Biscuit" (his imaginary dog friend — yes, we're a bilingual household and somehow the imaginary dog only speaks English).

What AI produced: A perfectly structured story about a dog named Biscuit who takes a train to a magical park. It had a beginning, middle, and end. It had gentle repetition. It had a calming wind-down at the end where Biscuit falls asleep under a tree.

It was also completely, utterly soulless.

I'm not being dramatic. The story was fine. It was competent. It hit every narrative beat. But it had none of the weird, wonderful, slightly unhinged energy of the stories I make up at bedtime. It didn't have the running joke about the dragon who's scared of butterflies. It didn't have the part where I accidentally give a character the wrong name and my son corrects me and we both laugh. It didn't have us in it.

And my kid knew it. He listened politely for about two minutes, then said: "No. I want the dragon one."

The dragon one is a story I made up three months ago. It makes no narrative sense. The dragon lives in a shoe. He eats soup. Sometimes he's a girl dragon and sometimes he's a boy dragon depending on my son's mood. I couldn't replicate it if I tried.

That's the point. The best parts of parenting are the unreplicable, improvised, deeply personal moments. AI can't do those. And honestly? It shouldn't.

Score: 5/10 — Technically fine, emotionally flat. The dragon wins.

Day 6: The Sharing Problem (Solved?)

The ask: My son won't share at playgroup. Like, aggressively won't share. He once told another child "this is MY digger" with the energy of a property developer defending a land bid. I asked AI for age-appropriate strategies.

What AI delivered: This one genuinely surprised me. Instead of generic "teach them to share" advice, it explained that forced sharing isn't developmentally appropriate at 2.5 — kids this age are in parallel play mode and are still developing theory of mind. It gave me specific scripts:

  • "You're using the digger right now. When you're finished, it'll be Ethan's turn."
  • Setting a visual timer so "turn-taking" becomes concrete
  • Bringing a "special toy" from home that he doesn't have to share, so he has a sense of control
  • Praising the process, not the outcome — "You gave Ethan a turn! That was kind" rather than "Good boy for sharing"

This was better than most parenting books I've read, because it was tailored to his exact age and situation. It wasn't a chapter about sharing in general — it was a strategy for my kid, right now. And it worked. Not perfectly (he still death-grips the good toys) but measurably better.

Score: 8/10 — I'd argue this is AI's sweet spot: synthesising developmental research into specific, actionable advice.

Day 7: The Rash That Wasn't Cancer

Quick one. My son developed a weird splotchy rash after swimming class. Pre-AI, I would have googled "toddler rash after swimming" and spiralled through a WebMD rabbit hole until I'd convinced myself it was a rare tropical disease.

Instead, I described the rash to AI (red, slightly raised, appeared within an hour of swimming, no fever, not itchy). It gave me a calm, structured differential: most likely chlorine sensitivity or contact dermatitis, possibly heat rash. Clear guidance on when to see a doctor (spreading, fever, blisters, lasting more than 48 hours) and what to do in the meantime (rinse with fresh water, moisturise, monitor).

The rash was gone by morning. AI saved me a trip to the GP and approximately three hours of anxiety.

Score: 6/10 — Useful for triage, but I want to be clear: AI is not a doctor. It reduced my anxiety, which is valuable, but I'd still see a paediatrician for anything that lingered.

The Verdict: Tool, Not Co-Parent

After seven days of outsourcing my parenting brain to silicon, here's where I landed.

Where AI genuinely helps:

  • Meal planning and nutrition — Great at structure, good at science, needs local editing
  • Sleep optimisation — Excellent at pattern recognition when you feed it real data
  • Behaviour strategies — Nuanced, age-appropriate, and better than most generic parenting books
  • Medical triage — Reduces anxiety, provides structure, knows when to say "see a doctor"
  • Offloading the mental load — And honestly, this might be the biggest one. The invisible labour of parenting is real, and having an AI handle the research portion of it — "what should a 2.5-year-old eat for iron?" — frees up mental space for the stuff that actually matters

Where AI falls flat:

  • Real-time crisis moments — Too slow, too generic, no substitute for instinct
  • Emotional connection — Bedtime stories, comfort during tears, the weird inside jokes that make your family yours
  • Knowing your specific child — AI knows what a 2.5-year-old is like in general. It doesn't know that your 2.5-year-old is terrified of hand dryers, obsessed with the colour yellow, and will only eat rice if it's in a blue bowl

The one-line summary:

AI is a brilliant research assistant and a terrible co-parent. It's like having a very well-read friend who has memorised every parenting book ever written but has never actually met your kid.

The Risk Nobody's Talking About

Here's what concerns me, and I say this as someone who works in tech and genuinely believes AI is useful: if you outsource every decision to AI, you stop building parenting instincts.

The whole point of the messy, exhausting, confusing early years is that you're learning to read your child. You're building a mental model — this cry means hungry, that whine means tired, this specific silence means he's definitely drawing on the wall. That knowledge comes from thousands of micro-interactions. It doesn't come from a prompt.

I've written before about the screen-free childhood movement and the question of whether kids can become too dependent on AI chatbots. But we should be asking the same question about ourselves. Are we using AI as a tool — or as a crutch that prevents us from developing confidence in our own parenting?

The 75% stat isn't scary. Parents have always sought advice — from books, from elders, from that one mum at playgroup who seems to have it all together (she doesn't). AI is just the latest source. The question is whether we're integrating the advice and building our own judgment, or just following the algorithm.

The Practical Guide: How to Use AI as a Parenting Tool Without Losing Your Mind

After a week of this experiment, here's my actual, honest framework:

Use AI for:

  • Meal plans and grocery lists — Give it your constraints (allergies, local stores, picky eating) and let it do the planning. Edit for reality.
  • Sleep schedule analysis — Feed it your child's actual data. It's good at spotting patterns.
  • Behaviour strategies — Ask with specific age and context. The more detail you give, the better the advice.
  • "Should I worry about this?" medical questions — For triage and anxiety reduction, not diagnosis.
  • Activity ideas — "Rainy day activities for a 2.5-year-old in a small Hong Kong flat" is a prompt that delivers.

Don't use AI for:

  • In-the-moment decisions — Build your instincts. Trust them.
  • Emotional connection — Tell the bedtime story yourself. Even if it's terrible. Especially if it's terrible.
  • Replacing professional advice — AI is not your paediatrician, your child psychologist, or your marriage counsellor.
  • Every single question — If you're opening ChatGPT before you've even tried to figure it out yourself, that's a flag.

The golden rule:

AI for logistics. You for the relationship. The meal plan can come from a machine. The hug after a nightmare cannot.


It's been two weeks since my experiment ended. I still use AI for meal planning — it's genuinely saved me time. I still occasionally ask it about developmental milestones when I'm curious. But the phone stays in my pocket during meltdowns now. And bedtime stories are back to featuring a dragon who lives in a shoe and is scared of butterflies.

My son doesn't know I spent a week co-parenting with a chatbot. If he did, he'd probably ask if the chatbot knows the dragon story.

It doesn't. That one's ours.
