“A Hacker’s Mind: How the Powerful Bend Society’s Rules, and How to Bend Them Back” by Bruce Schneier (W.W. Norton & Company)
Hacking is commonly understood as the exploitation of a software vulnerability by a malicious actor.
But hacking encompasses oh, so much more. By gaming systems, it achieves outcomes for which they were not designed. People do it to the economy, the tax code, the law. Discover a loophole, profit from an oversight.
Security guru Bruce Schneier’s latest book, “A Hacker’s Mind,” surveys hacking’s most effective applications — the good and the bad — with both hope and dread, the latter because digital technology and artificial intelligence are putting it on steroids. His focus: hacking as a lever of power.
If data is the new oil, hacking is the new lube. Bots will be the delivery system.
A simple hack I just performed: Early in a six-mile run, I hit the post office to mail a bulky letter-sized envelope. But it was closed, and I lacked postage. So I slid the envelope into the mailbox anyway. Lacking a stamp, it would be returned to sender, meaning a free-of-charge delivery to my home.
Some things humans have hacked to great effect: the IRS, stock exchanges (high-frequency trading; see Michael Lewis’ “Flash Boys”), airline frequent flier programs, religious rules (Orthodox Jews and the Sabbath, e.g.).
Following a Hacking 101 that many readers won’t need, Schneier provides an easily digestible, mind-opening treatise on how hacking exacerbates inequality. The elite have long hired smart folks to shimmy in and around the rules of high finance, law and politics to their profit.
A fellow at Harvard’s Berkman Klein Center for Internet & Society and a board member of the Electronic Frontier Foundation, Schneier is a public-interest technologist. He’s no fan of concentrated wealth or monopolistic market power.
“Hacking is parasitical, mostly performed by the rich and powerful, and it comes at the expense of everyone else,” he writes. Want to subvert the plutocrats? Hack back, Schneier advises. The deck is stacked against us, after all.
Much hacking tears at society’s fabric. Schneier is particularly worried about how to counteract destructive mind-meddling — cognitive hacks that affect people’s ability to make deliberate and effective decisions. They are the most dangerous of hacks.
AI will make them even more so, hacking “our society in a way that nothing heretofore has done.” After a half-century of digital advances and ubiquitous computing devices, hacking minds has gotten easier. Algorithms and automation make disinformation — one type of cognitive hack — more effectively corrosive. “As computers evolve from tools of human hackers into ever faster, more powerful and more autonomous hackers, understanding how our digital products can hack us will become increasingly critical to protecting ourselves from manipulation,” Schneier writes.
We’d better get on it fast, he says, arguing that if strict guardrails aren’t put on AI, robots with agency could unravel trust in vital institutions, social cohesion and civic engagement. Schneier worries about a repeat of the regulatory inattention that enabled Big Tech’s assault on privacy.
He thinks defensive AI needs to be developed to counter AI hacking with antisocial intent, and he proposes a “hacking governance system” to defend against such hacking, whether intentional or inadvertent.
If we don’t compel governments to get such regulation going, Schneier argues, we’ll be ceding our collective fate to programmers — and the people who sign their paychecks.