Transcript

Elon Musk should be afraid of this woman.

Sophie Hall of ETH Zurich is coming for unethical algos and AIs. Plus Geoffrey Hinton, killer robots, self-healing Google asphalt, and Frankie McNamara.

Today’s outro track is Algorithmic, by Scratch Bandits Crew. The reasons will become clear.

AI is going through an inflection point in the hype cycle. Scrappy Chinese startup DeepSeek claimed to produce an AI model as good as things made by Meta or OpenAI but for a tiny fraction of the computing power. Which might be bad news for tech bros who convinced themselves, and their shareholders and investors, that what worked so well for 1970s Detroit – “people just want bigger and more powerful muscle cars, none of these cheap f*cking rice-burners thank you” – was just what the tech sector needed to dominate China and the world.

And while I’m gonna allow myself some time to drink in the schadenfreude at the expense of Elon Musk – it’s delicious, btw – I’m also not going to pretend AI is going away. Like nuclear weapons, AI might wind up being a thing we wish we could uninvent. Certainly, listening to the Godfather of AI, Geoffrey Hinton, talk to Andrew Marr, you do wonder. (clip in the show)

Suppose there are several different superintelligences, and they all realize that the more data centres they control, the smarter they'll get, because of all the extra data they can process. Now suppose one of them just has a slight desire to have more copies of itself.

You can see what's going to happen next. They're going to end up competing, and we're going to end up with superintelligences with all the nasty properties that people have, properties that depended on us having evolved from small bands of warring chimpanzees, or our common ancestors with chimpanzees. And that leads to intense loyalty within the group, desires for strong leaders, and a willingness to do in people outside the group.

And if you get evolution between superintelligences, you'll get all those things.

You're talking about them, Professor Hinton, as if they have full consciousness. Now, all the way through the development of computers and AI, people have talked about consciousness. Do you think that consciousness has perhaps already arrived inside AI?

Yes, I do.

Hinton didn’t actually finish the interview saying “I am become death, the destroyer of worlds” but then it’s possible Oppenheimer at Los Alamos didn’t actually say that either.

And that’s before we get to the killer robots. In September, Nature published some disturbing research led by Colin Holbrook at the Department of Cognitive and Information Sciences, University of California, Merced.


He studied how willing people were to trust an AI in life-or-death situations by putting test subjects in the role of drone controllers. Like some CIA drone jockey, or people on both sides in Ukraine, subjects were presented with visual data and asked to identify which were enemy targets and which were allied positions. They had to decide whether to fire missiles and kill the people at the target. Then a robot – literally, in one experiment, a humanoid robot rather than a disembodied AI – told them whether or not it agreed with the decision.

In situations where the AI disagreed with the designation – friend or foe – the human subject changed their mind two-thirds of the time. When the AI agreed with them, they stuck with their initial decision nearly 100% of the time.

Only one problem – the agree/disagree advice from the robot was… entirely random.1

There are loads of surveys showing levels of discomfort with giving AIs control over things. Interestingly, younger people tend to be less trusting of AI than older respondents.

But revealed preference is everything. And if we're honest, that's kind of scary – and yet it feels inevitable.

I live southeast of London. My wife’s parents, who are elderly, live in Devon. On a good day, or the middle of the night with no construction, you can do the trip in about 4 hours. On a bad day, because someone really can’t avoid staring at Stonehenge on the A303 and rear-ends someone in front of them doing the same thing, it can take 7 hours.

I know the way to get to my in-laws, but I keep Google Maps on all the time now. Why? Because if it tells me to divert to avoid traffic, and if, let's say, one of my in-laws is in hospital, I'm going to take the advice and potentially shave an hour or more off my journey. It's not killer robots, but as the stakes go up, you want to feel like you did everything you could to not get it wrong.

And as Hinton goes on to say in the interview, the reason we're going ahead with AI is that in the short term the benefits are impossible to ignore – in health, in education, in materials design. This morning I woke up to stories on self-healing asphalt that could actually stop potholes. It uses tiny capsules filled with used cooking oil added to the mix, which can prevent micro-cracks from getting bigger during freeze/thaw cycles. The project was possible thanks to a collaboration between scientists at King's College London, Swansea University, researchers in Chile and… Google. And as Google's blog post this morning reports, in glowing endorsements from the researchers, the apparent breakthrough could be a huge solve not just for car repair but for cutting carbon emissions by extending the life of roads, tyres, and cars.

And most AI-related things fall in the middle. Perhaps none more important than the role AI and automation is already playing in energy. Not the electricity demand problem – though that's why the DeepSeek thing is so interesting – but electricity supply and, more importantly, the power grids. Switching solar, wind, batteries, and geothermal on and off against natural gas or coal – these are decisions that can't be made in real time by some dude sitting in a control room. If things get out of whack between supply and demand, or the frequency of power gets a little bit messed up, you get blackouts, or worse, you get equipment blowing up that could take weeks to repair. And as we see more storms like the one that has kept tens of thousands of Irish people in the dark for more than a week, or fires in Los Angeles requiring power to be turned off to prevent more power lines shorting out and starting even more fires, this is the stuff real people depend on just to get through a day.
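To make the frequency point concrete, here's a toy sketch of the kind of decision grid-balancing software makes automatically, many times a second. Every number and threshold below is an illustrative assumption, not any real operator's logic:

```python
# Toy grid-balancing rule: map measured frequency to a corrective action.
# All values here are illustrative assumptions, not a real operator's settings.

NOMINAL_HZ = 50.0    # UK/Ireland grids run at 50 Hz
DEADBAND_HZ = 0.05   # small deviations are tolerated
TRIP_HZ = 0.5        # beyond this, equipment is at risk

def dispatch_action(frequency_hz: float) -> str:
    """Decide what to do given the measured grid frequency."""
    deviation = frequency_hz - NOMINAL_HZ
    if abs(deviation) > TRIP_HZ:
        return "shed load / emergency trip"  # protect equipment first
    if deviation < -DEADBAND_HZ:
        return "increase supply"             # demand is outrunning supply
    if deviation > DEADBAND_HZ:
        return "reduce supply"               # supply is outrunning demand
    return "hold"

print(dispatch_action(49.9))  # demand slightly ahead of supply
```

The real systems are vastly more sophisticated, but the point stands: somebody chose those thresholds and priorities, and the code just executes them.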

But when those decisions are automated – and they kinda have to be – what we usually forget is that every algorithm reflects the values and priorities of the people who wrote it. And as we keep finding out, that can be problematic. So paying attention to it seems important. Even if not everyone is convinced the concern from people like Musk or Zuck is sincere. Ask Frankie McNamara.

Do you know Frankie? If you don’t, link in the show notes and more clips of that at the end because my God we need a bit of a laugh at the moment.

One person who's sincere in her concern about ensuring these systems come to fair outcomes – who's 10x smarter than me, but can communicate the stakes as plainly as Geoffrey Hinton – is Sophie Hall, a researcher at ETH Zurich. She kindly spoke to me right before the DeepSeek news, but I think that just makes her broader point even more important.

Coming up with ways to make these systems consider fairness and justice in the outcomes they are optimising for is more important, more urgent, than ever.
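What does "optimising for fairness" actually mean in practice? Here's a toy sketch, entirely my own illustration rather than Hall's actual formulation: allocate a fixed budget either to maximise total output alone, or with a penalty on inequality between recipients.

```python
# Toy fairness-aware allocation. The objective, the greedy method, and the
# numbers are all illustrative assumptions for this post, nothing more.

def allocate(budget: int, efficiency: list[float], fairness_weight: float) -> list[int]:
    """Greedily hand out `budget` units, one at a time, maximising
    total output minus a penalty on the gap between recipients:
        sum(efficiency[i] * x[i]) - fairness_weight * (max(x) - min(x))
    """
    x = [0] * len(efficiency)
    for _ in range(budget):
        best_i, best_score = 0, float("-inf")
        for i in range(len(x)):
            trial = x[:]
            trial[i] += 1
            score = sum(e * t for e, t in zip(efficiency, trial))
            score -= fairness_weight * (max(trial) - min(trial))
            if score > best_score:
                best_i, best_score = i, score
        x[best_i] += 1
    return x

eff = [2.0, 1.0]               # recipient 0 produces more per unit
print(allocate(4, eff, 0.0))   # pure efficiency: everything goes to recipient 0
print(allocate(4, eff, 5.0))   # fairness-weighted: the budget gets split evenly
```

With the fairness weight at zero, the "efficient" recipient takes everything; turn it up and the algorithm shares. Which weight is right is a value judgment, and someone has to make it.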

Because otherwise it's guys like Elon Musk – who, Americans woke up yesterday to find out, has, along with his minions, access to the payment system disbursing $6 trillion in federal spending a year, along with everyone's Social Security number, the medical history of a third of the US population, and every transaction with every aid agency, NGO, contractor, and foreign government – the whole ball of wax. With that data to train his own AIs… well, I'm sure that'll work out just fine.


If you're enjoying these, do please subscribe to wickedproblems.earth and help us out with some material support if you're able, so we can keep the lights on, keep the dogs in kibble, and keep my plug-in hybrid charged for the next trip down the A303, guided by Google.

And please – please don't make me beg – a rating and review on Spotify, Apple, or YouTube really helps other people find the show.

Outro Love

We’re not sure where we fall in this typology of Dads. But we are sure that our outro tracks are f*cking great. You should let us know why we’re wrong and how we can do better:

1

If you're not clear why that's a problem, you're in the wrong place. Please return to X.
