Friday, 13 April 2018

Insanity. Humanity. Notes from Session 8 at TED2018

“If we want to create meaningful technology to counter radicalization, we have to start with the human journey at its core,” says technologist Yasmin Green at Session 8 at TED2018: The Age of Amazement, April 13, Vancouver. Photo: Ryan Lash / TED

The seven speakers lived up to the two words in the title of the session. Their talks showcased our collective insanity — the algorithmically-assembled extremes of the Internet — and our humanity — the values and desires that extremists astutely tap into — with some speakers combining the two into a glorious salad. Let’s dig in.

Artificial Intelligence = artificial stupidity. How does a sweetly narrated video of hands unwrapping Kinder eggs garner 30 million views and spawn more than 10 million imitators? Welcome to the weird world of YouTube children’s videos, where an army of content creators use YouTube “to hack the brains of very small children, in return for advertising revenue,” as artist and technology critic James Bridle describes. Marketing ethics aside, this world seems innocuous on the surface, but go a few clicks deeper and you’ll find a surreal and sinister landscape of algorithmically assembled cartoons, nursery rhymes built from keyword combos, and animated characters and human actors being tortured, assaulted and killed. Automated copycats mimic trusted content providers “using the same mechanisms that power Facebook and Google to create ‘fake news’ for kids,” says Bridle. Feeding the situation, he adds, is the fact that “we’re training them from birth to click on the very first link that comes along, regardless of where the source is.” As technology companies ignore these problems in their quest for ad dollars, the rest of us are stuck in a system in which children are sent down auto-playing rabbit holes where they see disturbing videos filled with very real violence and very real trauma — and get traumatized as a result. Algorithms are touted as the fix, but Bridle disagrees: “Machine learning, as any expert on it will tell you, is what we call software that does stuff we don’t really understand, and I think we have enough of that already.” Instead, “we need to think of technology not as a solution to all our problems but as a guide to what they are.” After his talk, TED Head of Curation Helen Walters asks Bridle plainly: “So are we doomed?” His realistic but not hopeless answer: “We’ve got a hell of a long way to go, but talking is the beginning of that process.”

Technology that fights extremism and online abuse. Over the last few years, we’ve seen geopolitical forces wreak havoc with their use of the Internet. At Jigsaw (a division of Alphabet), Yasmin Green and her colleagues were given the mandate to build technology that could help make the world safer from extremism and persecution. “Radicalization isn’t a yes or no choice,” she says. “It’s a process, during which people have questions about ideology, religion — and they’re searching online for answers, which is an opportunity to reach them.” In 2016, Green collaborated with Moonshot CVE to pilot a new approach called the “Redirect Method.” She and a team interviewed dozens of former members of violent extremist groups, then used that information to create a campaign that deployed targeted advertising to reach people susceptible to ISIS’s recruiting and show them videos countering those messages. Available in English and Arabic, the eight-week pilot program reached more than 300,000 people. In another project, she and her team looked for a way to combat online abuse. Partnering across Google and with Wikipedia and the New York Times, the team trained machine-learning models to understand the emotional impact of language — specifically, to predict which comments were likely to make someone leave a conversation and to give commenters real-time feedback about how their words might land. Because of the onslaught of online vitriol, the Times had previously enabled commenting on only 10 percent of homepage stories; this strategy let it open up all homepage stories to comments. “If we ever thought we could build technology insulated from the dark side of humanity, we were wrong,” Green says. “If technology has any hope of overcoming today’s challenges, we must throw our entire selves into understanding these issues and create solutions that are as human as the problems they aim to solve.” In a post-talk Q&A, Green adds that banning certain keywords isn’t enough of a solution: “We need to combine human insight with innovation.”
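To make the real-time feedback idea concrete, here is a minimal Python sketch of the general technique: train a text classifier on comments labeled as conversation-killers, then score a draft comment before it is posted. The training examples, threshold and wording of the nudge are invented for illustration; this is not Jigsaw’s actual model or data.

```python
# Toy sketch of real-time comment feedback: a classifier trained on comments
# labeled "likely to drive someone out of a conversation" scores new drafts.
# The examples and threshold below are hypothetical, for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples: 1 = likely to derail the conversation, 0 = fine.
comments = [
    "You're an idiot and everyone here knows it",
    "Nobody wants to read your garbage opinions",
    "I disagree, but you raise an interesting point",
    "Thanks for sharing, could you add a source?",
]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(comments, labels)

def feedback(draft: str, threshold: float = 0.5) -> str:
    """Return a gentle nudge if the draft looks likely to shut the conversation down."""
    score = model.predict_proba([draft])[0][1]  # probability of class 1 (hostile)
    if score >= threshold:
        return f"Heads up: this may read as hostile (score {score:.2f}). Consider rephrasing."
    return f"Looks constructive (score {score:.2f})."

print(feedback("You're an idiot"))
print(feedback("Interesting point, thanks for the link"))
```

A production system would be trained on far larger moderated corpora and tuned carefully for false positives, but the feedback loop, scoring a comment and surfacing the result to the writer before posting, is the same shape.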

Living life means acknowledging death. Philosopher-comedian Emily Levine starts her talk with some bad news — she’s got stage 4 lung cancer — but says there’s no need to “oy” or “ohhh” over her: she’s okay with it. After all, explains Levine, life and death go hand in hand; you can’t have one without the other. In fact, therein lies the importance of death: it sets limits on life, limits that “demand creativity, positive energy, imagination” and force you to enrich your existence wherever and whenever you can. Levine muses about the scientists who are attempting to thwart death — she dubs them the Anti-Life Brigade — and calls them ungrateful and disrespectful in their efforts to wrest control from nature. “We don’t live in the clockwork universe,” she says wryly. “We live in a banana peel universe,” where our attempts at mastery will always come up short against mystery. She has come to view life as a “gift that you enrich as best you can and then give back.” And just as we should appreciate that life’s boundary line stops abruptly at death, we should accept our own intellectual and physical limits. “We won’t ever be able to know everything or control everything or predict everything,” says Levine. “Nature is like a self-driving car.” We may have some control, but we’re not at the wheel.

A high-schooler working on the future of AI. Is artificial intelligence actually intelligence? Not yet, says Kevin Frans. Earlier in his teen years — he’s now just 18 — he joined the OpenAI lab to think about the fascinating problem of making AI that has true intelligence. Right now, he says, a lot of what we call intelligence is just trial-and-error on a massive scale: machines try every possible solution, even ones too absurd for a human to imagine, until they find the one that works best to solve a single discrete problem. That can create computers that are champions at Go or Q-Bert, but it doesn’t create general intelligence. So Frans is instead conceptualizing a way to think about AI from a skills perspective — specifically, the ability to learn simple skills and assemble them to accomplish tasks. It’s early days for this approach, and for Kevin himself, who is part of the first generation to grow up as AI natives and think with these machines. What can he and these new brains accomplish together?
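As a rough illustration of that skills framing, here is a small Python sketch in which a high-level controller sequences a couple of reusable sub-policies instead of searching over raw actions. The grid world, skill names and goal are hypothetical, invented for this example rather than taken from Frans’s work at OpenAI.

```python
# Minimal sketch of the "skills" idea: an agent picks among a small library of
# reusable sub-policies and chains them to finish a task, rather than searching
# over raw low-level actions. Everything here is invented for illustration.
from typing import Callable, Dict, Tuple

Position = Tuple[int, int]

# Each skill maps a position to the next position (a stand-in for a learned policy).
SKILLS: Dict[str, Callable[[Position], Position]] = {
    "move_right": lambda p: (p[0] + 1, p[1]),
    "move_up":    lambda p: (p[0], p[1] + 1),
}

def run_skill(name: str, pos: Position, steps: int) -> Position:
    """Execute one skill for a fixed number of steps."""
    for _ in range(steps):
        pos = SKILLS[name](pos)
    return pos

def reach_goal(start: Position, goal: Position) -> Position:
    """High-level controller: sequence skills instead of raw actions."""
    pos = run_skill("move_right", start, goal[0] - start[0])
    pos = run_skill("move_up", pos, goal[1] - pos[1])
    return pos

print(reach_goal((0, 0), (3, 2)))  # -> (3, 2)
```

In real hierarchical approaches the skills themselves are learned and the controller is trained to choose among them, but the payoff is the same: solving a new task becomes a short search over a handful of skills rather than a massive trial-and-error search over primitive actions.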

Come fly with her. From a young age, action and hardware engineer Elizabeth Streb wanted to fly like, well, a fly or a bird. It took her years of painful experimentation to realize that humans can’t swoop and veer like them, but perhaps she could discover how humans could fly. Naturally, it involves more falling than staying airborne. She has jumped through broken glass and toppled from great heights in order to push the bounds of her vertical comfort zone. With her Streb Extreme Action company, she’s toured the world, bringing the delight and wonder of human flight to audiences. Along the way, she realized, “If we wanted to go higher, faster, sooner, harder and make new discoveries, it was necessary to create our very own space-ships,” so she’s also built hardware to provide a boost. More recently, she opened Brooklyn’s Streb Lab for Action Mechanics (SLAM) to instruct others. “As it turns out, people don’t just want to dream about flying, nor do they want to watch people like us fly; they want to do it, too, and they can,” she says. In teaching, she sees “smiles become more common, self-esteem blossom, and people get just a little bit braver. People do learn to fly, as only humans can.”

Calling all haters. “You’re everything I hate in a human being” — that’s just one of the scores of nasty messages that digital creator Dylan Marron receives every day. While his various video series, such as “Every Single Word” and “Sitting in Bathrooms With Trans People,” have racked up millions of views, they’ve also sent a slew of Internet poison in his direction. “At first, I would screenshot their comments and make fun of their typos, but this felt elitist and unhelpful,” recalls Marron. Over time, he developed an unexpected coping mechanism: he calls the people responsible for leaving hateful remarks on his social media, opening each conversation with a simple question: “Why did you write that?” These exchanges are captured on Marron’s podcast “Conversations With People Who Hate Me.” While it hasn’t led to world peace — you would have noticed — he says it’s caused him to develop empathy for his bullies. “Empathizing with someone I profoundly disagree with doesn’t suddenly erase my deeply held beliefs and endorse theirs,” he cautions. “I simply am acknowledging the humanity of a person who has been taught to think a certain way, someone who thinks very differently than me.” And he stresses that his solution is not right for everyone. In a Q&A afterward, he says that some people have told him his podcast just gives a platform to those espousing harmful ideologies. Marron emphasizes, “Empathy is not endorsement.” His conversations represent his own way of responding to online hate, and he says, “I see myself as a little tile in the mosaic of activism.”

Rebuilding trust at work. Trust is the foundation for everything we humans do, but what do we do when it is broken? It’s a problem that fascinates Frances Frei, a professor at Harvard Business School who recently spent six months trying to restore trust at Uber. According to Frei, trust is a three-legged stool that rests on authenticity, logic, and empathy. “If any one of these three gets shaky, if any one of these three wobbles, trust is threatened,” she explains. To restore trust, the first step is to identify what she calls your “wobble.” Ask yourself: is it authenticity, logic, or empathy that most often gets in the way of someone trusting you or your organization? Know, however, that no wobble is permanent – each can be corrected. So which wobbles did Uber have? All of them, according to Frei. During her time there, she found quick, effective fixes for the empathy and logic wobbles. For example, it had become common practice at Uber meetings for people to text each other – about the meeting. “It did not create a safe, empathetic environment,” Frei recalls. The simple solution: ask people to turn off their tech and put it away while talking. Authenticity, the last wobble, has proved harder to fix – but that’s not uncommon. “It is still much easier to coach people to fit in; it is still much easier to reward people when they say something that you were going to say,” Frei says, “but when we figure out how to celebrate difference and how to let people bring the best version of themselves forward, well, holy cow, that is the world I want my sons to grow up in.” In a post-talk Q&A, Frei says she remains hopeful that trust can be rebuilt in our businesses and institutions, but it won’t happen right away: “I am super-optimistic that everyone can set the conditions to be more empathic, to use more rigorous logic, to be their authentic selves.”



from TED Blog https://blog.ted.com/insanity-humanity-notes-from-session-8-at-ted2018/
via Sol Danmeri
