Posted: February 1st 2024

This was originally posted on Cohost. See the original post with comments here.
How To Be Smart
This is a blog post about the TV show QI, how the belligerent arrogance of a few people set an example for a whole generation, and why loving science is not enough.

Delusions
In 2005 I was studying my A-Levels, and despite all advice to the contrary I'd kept on with two non-science subjects: English and Philosophy. As part of our Philosophy studies we were taken to a special student-focused conference that would have people speaking about the topics we were studying. The headliners were Richard Dawkins and Peter Vardy; the former a scientist and staunch critic of all things religious, and the latter a philosopher and theologian. They were framed like boxers coming for a title match, each one defending a huge corner of human culture.
I don't remember a lot from the conference, but one thing stands out very strongly in my memory: Richard Dawkins spent the entire thing being mean, belligerent and incoherent. At one point he was on a panel with some religious leaders, asking them patronising or flawed questions and trying to show up holes in their logic. A lot of the students thought it was great, but despite being both an atheist and someone who wanted to go study a science subject at university, listening to him talk just made me feel sad. What I now realise, looking back, is that Dawkins' imminent pop culture explosion (The God Delusion was published a year later, in 2006) was part of a cultural shift in the role of science and facts, and in who we listen to.
In the UK, a couple of years prior to this, a new TV show had appeared called QI, or Quite Interesting. The premise of the show was that panelists would be asked difficult questions, often with trick answers that were common misconceptions. Giving a wrong answer wouldn't hurt you at all, but giving a wrong answer that was also a commonly held misconception would cause sirens to blare, lights to flash, and a massive loss of points. Like any TV show with a gimmick, QI leaned further into this the more popular it got. Questions got more and more loaded, trick answers became more tenuous, and in some cases famous trick questions were revisited in later series to trick people all over again, by explaining that the original answer had since been disproved or that some special case meant it no longer applied.
I liked QI at the time and I still watch it now when I stumble across it. There are some funny people on it sometimes, and it's especially fun when someone intentionally subverts the show's premise. If you like QI too, maybe more than me, please do not be offended by what I'm about to say: QI encouraged hundreds of thousands of people to become gigantic arseholes. The thing with QI is that on paper it was about the joy of learning new things and sharing them with others, but what a lot of people seemed to take away from it was that one-upmanship and showing off how smart you are is good or admirable. For a lot of people their favourite character on the show wasn't the wise host or the silly panelists, but the Klaxon that blared whenever someone said something wrong. You probably know someone who has modelled their personality around becoming a human QI Klaxon. Learning and knowledge, instead of being a big nuanced thing that everyone contributes to and takes from, gets broken down into tiny isolated facts, sharpened to a point and used to poke people in the eye.
Now, I'm not saying the show intended this, and I'm definitely not saying QI was a singular point that changed the world forever. It's just emblematic of a certain kind of argumentation that I saw a lot of at the time, and I saw the same thing on the stage with Dawkins at that conference as a student too. It's the idea that being smart is simply about knowing stuff - the more stuff you know, the smarter you are. If the stuff is complicated and hard to understand, then you're even smarter still. And simply being smart isn't enough - you have to demonstrate it, and the best way to demonstrate it is to deploy it against someone else, whether that's an opponent on a gameshow, a Muslim at a philosophy conference, or a person you disagree with on the Internet.
QI breaks knowledge down in this way because, y'know, it's a comedy show in a 30-minute timeslot; it doesn't really have a choice. But stuff like this was emerging at a time when the Internet was also fragmenting into tiny bite-sized things, where there was no space or time for nuance or elaboration either.
IFL It Here
I was invited to an event last year where science journalists come to train and practise their skills - I was volunteering to be a practice interviewee so they could get some experience interacting with real scientists[1]. Afterwards, many of the organisers and speakers were chatting over drinks, and someone introduced themselves as working for "IFLS". I did the classic thing you do when you're socially awkward and talking to serious people, which is that I smiled and nodded and pretended I knew what that was. After about ten minutes I realised I actually did know where she was from - she wrote for something I used to know under a different name: I Fucking Love Science.
IFLS (I'm only calling it that for brevity, much as it pains me) is a good example of the science-as-aesthetic that has emerged over the last ten to fifteen years on social media sites such as Facebook. Much like QI, IFLS and its ilk break scientific ideas down into something small enough to be digestible in the medium they use - originally a Facebook page, but now YouTube, Instagram and more, as IFLS has become a veritable brand name. A positive spin to put on this is that it is making science accessible to the wider world, adding some clickbait tricks to help people engage with serious ideas - and IFLS has expanded a bit into a more straightforward science journalism outlet these days, which explains why they also felt the need to drop the Fuck. A less favourable spin would be that by reducing science down to the same quips, clips and soundbites as everything else on the Internet, you're actually doing the opposite of helping people engage with science. Like QI, you're reducing the idea of knowledge and learning down to little facts you can memorise, share and repeat.
The fetishisation of science in this way has a lot of other problematic side effects, too. One is that it narrows our idea of what being smart or doing science actually is, both in terms of what activities constitute science and also where its boundaries are. Scientists have always looked down on humanities researchers to some degree, but that tendency intensified through the 80s, 90s and 00s as the push for 'STEM' subjects[2] became stronger and stronger. Studies in school (at least in the UK) are compartmentalised - your physics lessons are about very specific things and do not overlap into other subjects. So by pushing STEM as an important thing, we also push in people's minds a specific idea of what it means to do science: it means these things that we have drawn hard red lines around in the school curriculum. If you do work that doesn't look like pure science, you are called 'interdisciplinary', or you describe yourself as doing work 'at the intersection' of multiple things.
And I feel I should point out here that I actually, do, also, fucking love science. I remember several things from my years as an undergrad that made me feel so excited and happy to have learned; I remember really buzzing with a visceral kind of response to grasping certain new ideas. There are lots of fascinating things out there that are hidden away in subjects we think of as dry. But I also don't really think about, or care about, where knowledge comes from. The students I work with have backgrounds in design, writing, art, archaeology, game development and more, and many of them tick several of these boxes. We pursue the questions we think are valuable and interesting, and sometimes they look like what IFLS would call 'science' and sometimes they don't. But that's really hard to do, and all of these systems - including the systems scientists design for themselves for evaluation and recognition - are constantly pushing back and encouraging us to reclassify what we do, to fit inside narrower boxes, to align with boundaries that suit other purposes. But as much as I love science sometimes, it's also pretty boring a lot of the time. I don't even mean that as a negative; a lot of nice things in life are boring or mundane or involve repetitive, thankless work.
Another problem is that, as Dawkins showed in 2005, a lot of people think that stating facts allows you to win arguments and solve problems, as if there is a secret hidden QI scoring team just behind you in real life, silently grading all the posts you're making in that Twitter thread. Around 2005-2010 I would say I mostly saw this used the way Dawkins used it - to attack people who were perceived as being 'outside' of or 'against' science, usually people who could be looked down on. If you used Reddit or Facebook during this period, particularly a little later when Facebook was opened up to non-students, you might have seen people making fun of religious people or posts, or replying to them with snappy one-liners. I used to judge people who said or shared this stuff very harshly - over time I've tried to adjust and have some compassion for people who might have, for instance, grown up in very religious, repressive spaces and thus have some trauma associated with that. But in general it was people cosplaying as rational, enlightened people so they could feel and act superior towards others, and there's a direct throughline from that to the open Islamophobia we often see today, for example.
I think during this time probably a lot of people either didn't think this was a problem or perhaps sincerely believed it was good. After all, these people were wrong about something on the internet and they needed to be told why. But as the century has worn on, that desire to equate smarts with facts, and arguments with one-liners, and logic with correctness, and religion or spirituality with stupidity, has contributed to a new unfortunate trend of absolute, total noise in online debates.
I Will Never Log Off
There are a lot of differences between the science education we get in schools and the experience of science in everyday research labs and universities, but I think one of the most important is how knowledge is treated. Good researchers view knowledge and ideas as essentially temporary, unreliable, and partial. For the most part facts are just things we currently believe, and they're useful to build stuff on, but many of them are open to being disproven, elaborated on, broken, expanded or changed over time. In school, though, we mostly deal in immutable facts, which is why we get examined on our ability to memorise and understand them. So we get this idea that science means facts, that smart people deal in facts, and that an argument is a series of facts that leads to an irrefutable conclusion. This is what people mean when they talk about 'logic' and 'being logical'. Fact A implies B, B implies C, A is true, therefore C is true. If you disagree with this, you are being illogical and irrational.
One place you see this manifest is in any prominent public issue that involves a currently developing scientific consensus - the easiest recent example, of course, would be the Covid-19 pandemic. During the early stage of the pandemic people would often find and post scientific papers that supported whatever view they had - and because literally tens of thousands of people were writing papers about Covid-19 from every angle, it was not hard to find something to support your view. So if you wanted to find a study that suggested, for example, that Covid-19 actually wasn't very dangerous at all, then you could. A scientist said this and published it in a paper - it's facts. Then some well-meaning person will think, well, we know Covid-19 is actually dangerous, so there must be flaws in this paper, and they'll do a nice Twitter thread pointing them out, and people will RT it because it also sounds like Facts and it explains why the Other Facts were the wrong kind of Facts.
In reality, scientific consensus - especially about something as new and rapidly-changing as Covid-19 - is changing all the time. Everyone - including the authors - knows that studies do not tell us the whole picture, and we have a range of mathematical and scientific processes to try and assess claims and data, and build better pictures over time. Any evidence-based or experimental paper that has ever been published on any topic will contain weaknesses if you look hard enough, and if you wanted to you could write a Twitter thread picking it apart. That's just how scientific publishing works. The rightness or wrongness of a single article cannot tell us anything meaningful about something so new and complex. This is because scientific papers are not, actually, facts - they are reports, explanations of work done, the conclusions those people drew about that work, and their personal interpretation of where that might point. We can think of them as facts within their own space, if you like - the data you collect in a study is real and accurate. If you ask 100 people on the street what their favourite chocolate bar is, they might all say, I dunno, Aero Mint. That's not false, that really happened. Morally wrong, maybe. But the study is just a small fraction, a sideways glimpse at the real phenomenon or problem you are trying to investigate.
All the stuff we've spoken about so far - bitesize viral science, QI's gotcha facts, Dawkins' blunt arrogance, cherry-picking the science that supports what you want to say - takes advantage of this model of thinking. However, at some point, this way of thinking became particularly vulnerable to exploitation[3]. If you believe that facts can't be argued with ("facts don't care about your feelings" is a popular refrain from a certain kind of person who can be found on every part of the political spectrum) and you can convince people that you are a smart person who knows a lot of facts, then you can argue for pretty much anything. And while a racist or sexist or any other kind of bigot or extremist might be more easily dismissed when framed in emotional terms, factual claims feel different to people. Suddenly this isn't racism or sexism, it's just common sense, and after all, you can't deny the logic of their argument.
How do you achieve that? What does it mean to perform smartness? I often think of the phrase "a stupid person's idea of a smart person" which is a description that's been applied to several of the people mentioned in this post so far, including QI's Stephen Fry, Joe Rogan and Elon Musk. I think it's a bit of an unfair phrase - it's not really a stupid person's idea, but rather society's idea as a whole. And there's no more fascinating an example of what society thinks a smart person is than Elon Musk.
Issuing A Correction On A Previous Post Of Mine
Ever since Musk first opened his mouth about AI, I have been fascinated watching how people's opinions of him have shifted. I have always maintained he is a colossal idiot, but a lot of people - including highly respected scientists, engineers and policymakers - have believed him to be anything from an AI genius through to an almost messianic saviour figure. I used to think this was because Musk was good at the grift of appearing smart, but I don't think that's even true any more, really; he's demonstrated his complete lack of sense and consistency time and time again and it has barely shifted some people's view of him. Musk's specific form of business bullshit - confidence, buzzwords, a sense of superiority - just fits too perfectly with how we think smart people speak and act.
So here's the transcript from the start of this post again. I just want to go over a few things, as someone with a PhD in artificial intelligence and over a decade of experience researching in the area Musk is talking about, and break down exactly what he is saying.
Musk: If you start thinking that humans are bad, then the natural conclusion is that humans should die out.
You don't need to know anything about AI to know that this doesn't make any sense at all. I think Elon Musk is bad, but I'm happy to let him keep living[4]. I think Facebook is bad but I understand it has its uses. I think cavities are bad, but I know I don't need to pull out all my teeth to avoid them. Even in the case of optimisation - which Musk is kind of getting at here, the idea that AI would just keep pushing something to its most extreme solution - we have countless examples of both natural and human-engineered systems that do not optimise for extremes. This is pure political bullshit. But even by saying things like 'natural conclusion' here, Musk is trying to make you think these are the words of a wise philosopher. The natural conclusion - this is logical. You can't argue with it.
Now, I'm heading to an international AI safety conference later tonight, leaving in about three hours, and I'm gonna meet with the British Prime Minister and a number of other people.
OK, this isn't factually inaccurate, but I just want to duck in here to point out that Sky News' Sam Coates described this meeting as "one of the maddest events I have ever covered", and Musk's invitation to the AI Summit is a prime example of how reputations compound and reinforce themselves to promote and maintain the status of the worst human beings.
So you have to say, like, how could AI go wrong? Well, if the AI gets programmed by the extinctionists
Musk is supposedly an experienced public speaker and an expert on AI, so while this might seem like a minor criticism, we do not say an AI system "gets programmed". You do not "program" the kind of AI Musk is talking about. You make decisions about how they're structured, and you make decisions about what data to train them with, and what goals to seek out. It's clumsy and inaccurate, and he's really only phrasing it this way to create a causal link between the beliefs of these bad people (the 'extinctionists'[5]) and the AI.
it will... its utility function will be the extinction of humanity.
This is the thing I really wanted to dwell on. Musk knows that "utility function" is a specific, technical term from within AI. He knows that saying it will make him sound smart. But he clearly does not know what it is, or if he does, he's being incredibly obtuse here. A utility function is a way for an AI algorithm to judge how well a solution fits a particular problem. Let's say I want to know how to encourage myself to work better in the mornings. My utility function might be how many words I write between 9am and midday. The actual specific things I do to achieve that - drinking coffee, playing music, avoiding email - are the solutions I'm trying. The utility function measures how good those solutions are.
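To make that concrete, here's a minimal Python sketch of what a utility function is in this sense, using the morning-writing example above. It's a toy illustration - the routines and word counts are invented, not anyone's real system.

```python
# A utility function scores how well a candidate solution addresses a problem.
# Here the "problem" is working better in the mornings, and the score is just
# how many words get written between 9am and midday.

def utility(words_written_before_noon: int) -> int:
    """Higher is better: how good a given morning routine turned out to be."""
    return words_written_before_noon

# The candidate solutions are the things you actually try. The utility
# function only judges them; it doesn't tell you what they should be.
candidate_routines = {
    "drink coffee": 900,
    "play music": 650,
    "avoid email": 1200,
}

best = max(candidate_routines, key=lambda r: utility(candidate_routines[r]))
print(best)  # -> "avoid email", the routine with the highest score
```

The point is that the utility function and the solutions are separate things: the function measures, the solutions do.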
Making humanity extinct would not make sense as a utility function. The utility function - in this bizarre case - would be better expressed as something like "reduce carbon in the atmosphere". In such a scenario, Musk is suggesting that the best solution the AI could find for this would be to kill everyone. The only reason this sounds plausible is because you saw it in a movie once.
Why does this distinction matter? Well, two things. First, let's see how noted philosopher of our time Joe Rogan responds to this:
Rogan: -pause- Well yeah... clearly.
He doesn't know what to say because he doesn't know what a utility function is. Musk knows this - that's why he used the term. He knows he will not get pressed on this point, because people are afraid to ask questions about things they themselves do not understand. When you're doing science communication, you either need to not use certain technical terms, or you need to find ways to explain them. Musk is doing neither here, specifically because he knows it helps set him up as a Smart Guy.
The second reason this distinction matters is that in this bizarre hypothetical situation that everyone is obsessed with, we never discuss whether, for example, the utility function could be adjusted to include other things. Could we have the utility function be, say, "reduce carbon in the atmosphere without killing anyone"[6]? This would undermine Musk's argument, and so having people understand what he is actually saying is dangerous. Instead, it's more important that they engage with what he's saying on the surface level, just like all the other science-as-aesthetic stuff we've discussed so far. What does his argument sound like? How does it make me feel? People do the emotional calculus, and then conclude it's scientific after they've decided if they agree or not.
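Since I've brought up adjusting the utility function, here's an equally toy Python sketch of what that adjustment might look like. Everything in it - the plans, the numbers - is made up purely to show the mechanics, and it is emphatically not a claim about how any real system works.

```python
# A toy illustration of a utility function that cares about more than one thing.

def utility(carbon_reduced: float, people_harmed: int) -> float:
    # Reward carbon reduction, but make any harm to people catastrophic for
    # the score, so solutions that hurt anyone can never come out on top.
    if people_harmed > 0:
        return float("-inf")
    return carbon_reduced

candidate_plans = {
    "kill everyone":      {"carbon_reduced": 100.0, "people_harmed": 8_000_000_000},
    "plant forests":      {"carbon_reduced": 20.0, "people_harmed": 0},
    "decarbonise energy": {"carbon_reduced": 60.0, "people_harmed": 0},
}

best = max(candidate_plans, key=lambda name: utility(**candidate_plans[name]))
print(best)  # -> "decarbonise energy"
```

One extra term and the "kill everyone" solution can never win, which is rather the point.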
There are a million ways you can build a piece of software, and a million ways you can break it. We have built AI systems that evolve, change, and limit themselves in countless ways, and shaped each and every one to the kinds of goals we want it to have. Similarly, there are many, many plans for the future that allow humanity to keep living on a planet that they take better care of. So Musk's claim here is ideology wrapped up in science: he wants to argue that both environmentalism and AI are dangerous, and he is trying to get you to agree that this belief is logical by using fancy words and trains of thought plucked right out of science fiction. Once someone gets into a position of authority that derives from a belief that they are smart, it's remarkably hard to get them out of it, especially if everyone else believes that they themselves are not smart (as many of us are trained to).
How To Be Smart
What are the ramifications of this? One important one is that we have elevated an entire class of people to positions of immense power and influence because they perform smartness rather than actually demonstrating competence at doing anything. In almost every case you'll be able to think of, this has had disastrous results. Worse still, we've created a culture in which the effort required to break down the claims, lies and bad behaviour of these people is vastly more than the effort required to do the bad thing in the first place. You can't break down an entire way of thinking with the same virality that created it - the only way would be a long, slow and painful culture shift, and I don't know how that happens or if it's even feasible. Attempts to combat this virality with virality of its own lead to the culture of dunk quote chains, out-of-context mockery and one-up gotchas that ultimately do nothing to actually shift our understanding of the problem[7].
I think there's also a deeply complex entanglement between this exact way of performing smartness and the movements of capital and the investor class over the last decade. The rise of AI, as well as the smaller trends that swam alongside it like VR and web3, have been guided by people who sold the right dream with the perfect balance of big words and total bullshit - they knew exactly how to appear smart to rebuff questions about whether they could actually do anything they were claiming. I've heard and seen this happen even locally, in cafes as people yell on video calls or pitch to people in suits. I have completely given up any hope of 'debunking' any claim made by an AI company or influencer in any meaningful way now - it is an inferno that has to burn itself out. If we're lucky, perhaps we can save a few important institutions from the blaze while we wait for that to happen.
But I think most depressingly, the main consequence of this is that all the wrong people learned the wrong lesson from it. When I talk to people about science or AI, they will volunteer their stupidity without being asked. They will tell me they know nothing, are not as smart as me, could never understand computers, or are much worse at this than other people. Even people who fight for the relevance of the humanities in the face of STEM will tell me, to my face, that they are not smart enough to understand what I do, or that I must be a genius or super-smart person to work in AI[8]. People whose voices are badly needed currently, as we navigate dangerous futures powered by technology, believe that they are the last people who should be speaking. I'm not conspiracy-minded enough to tell you that's by design; I just think it's a very unfortunate side effect that has accelerated the position we're in right now. The wrong kind of person got encouraged by the madness of the last decade, and the other kind of person got told to be quiet.
I don't think smart people exist. I think that some of us have developed extremes of knowledge or skill about certain things, and some of us develop that by going viral on TikTok doing yoyo tricks, while others develop it by getting incredibly good at writing compact Python code to solve tricky data problems, or by perfecting our partner's favourite meal just as they like it. Just as we create hierarchies of culture, where it's more socially acceptable to know a lot about fine art than comics, we create hierarchies of knowledge, skill and expertise too. If you know a lot about mathematics, you are smart. If you know a lot about media studies, you wasted taxpayer money. I know this first-hand because depending on what room I'm in, and depending on how I describe my work, I find myself in different categories all the time.
I truly hope that in the future we can change our cultural thinking about science, about intelligence and about education. I hope we can expose some of the people who have lied and blustered their way into being in charge, and rehabilitate the people who were told they were too stupid to have an opinion. And I hope we can begin to understand science as a messy, complicated and imprecise thing, one that often doesn't look or feel much like 'science' at all. I think that one of the healthiest things we could do as a society in the west right now is make everyone feel like they are good enough to learn about and participate in debates about science, technology and the future, while also acknowledging the limits of what 'knowing' things or being 'smart' actually means.
Thanks to everyone who gave feedback on a draft of this post, and thank you to you for reading! See the original post on Cohost here.
Footnotes
1. No jokes please, we're all thinking it.
2. Science, Technology, Engineering and Maths (or Medicine?) - an acronym basically referring to the subjects generally called 'the Sciences'. It became a watchword in education policy and other areas, and is a very loaded term today.
3. I mean you can argue that it's always been a form of exploitation, you know what I mean. More exploitation.
4. Well,
5. Kor Bosch points out that the use of the word 'extinctionists' is also intentionally inflammatory here and totally pulled out of thin air. I didn't mention this directly because it's not directly linked to the technical details, but Kor is right that this is a good example of how people cover their bias and emotional appeals with a scientific persona. Calling people you don't like 'extinctionists' is about one level above Trump nicknaming in coherency stakes.
6. I'm hesitant to include this example because it implies that I think even the basis of the thought experiment is worth engaging with, which I don't. The science-fiction idea that an AI with the capacity to control globally-important systems would be given as vague an instruction as "improve the climate" is colossally stupid, and relies on so many naive and idiotic human mistakes that it really has nothing to do with AI at all, even if it were possible.
7. If you follow me on Twitter you'll know I dunk on AI people a lot, I quote them, I make lazy jokes. I'm not going to get into that here, but to be clear, I don't consider any of that an attempt to convince anyone of anything or change people's minds. I do it because I'm losing my mind on a daily basis and if I don't find a way to laugh at it then I'll need to go live in the wilderness somewhere instead.
8. Again, despite all evidence to the contrary that I regularly post online.