As I scroll to get my daily dose of news, the default boredom is seldom interrupted by an eye-catching headline. Interest lasts a heartbeat before I revert to numbness, but the pull to check my screen again is too strong, and I keep going back for more.
I know exactly what I am looking for. First, I want to know what I need to know – all the important things going on in the world right now. I also want interesting or funny content that stimulates the reward system in my brain. And I want to be able to share interesting stories with others, or vent my frustration about crappy ones.
You might think that I can do all of that already (otherwise why would I check my phone 110 times a day?), but this is not the case. I’m not much into Facebook, I find Twitter confusing, YouTube time-consuming, and news apps boring. I struggle to deal with the daily cacophony of irrelevant and uninteresting stories. Moreover, I am constantly wondering how much of what I see can be believed. Few stories seem relevant to me, nothing seems true and yet everything seems possible.
From control to anarchy
It was not always like that. Not so long ago, people watched the news on TV, went to the cinema and bought printed magazines. Back then, the world looked simpler and more predictable, publishers had enough money to fund serious journalism, and editors spared us the most extreme fake stories. But information was limited, stories carefully drip-fed and the media perilously close to business and politics.
Then came the internet, and its promise of unlimited, real-time information felt like a liberation. People lost trust in the conventional media and started using social media as their source of news. But the internet also removed editorial control, creating a space where information travels faster than our ability to consume it and where the difference between fact and fiction becomes elusive. The old media oligarchy got replaced by anarchy.
Living with a broken media
As much as it is stimulating, the new media is stressing us out. The human brain thrives on new information and is driven to constantly seek it. But when novelty is available in unlimited quantity, the brain becomes overworked. Feeling unable to process information makes us stressed, confused and restless. And uncertainty about the credibility of the stories we see causes us great worry.
This state of affairs has serious social consequences too. Information underpins how we perceive the world, plan our future, engage with others and participate in society. As the British minister Matt Hancock famously declared: “the basis of a free democracy is to have an agreed, objective basis of facts off which to have political disagreements.” To agree on objective facts we need a trusted messenger so, when trust in the media vanishes, the line between fact and fiction disappears with it.
As if this was not enough, tech companies are using artificial intelligence to give us personalised content that confirms our pre-existing ideas and beliefs. This is trapping people in so-called echo chambers, where alternative perspectives are rare and the world becomes a reflection of our own prejudices. All of this creates a sense that the media is broken.
Keep the tech, but put humans in charge
Tech giants are under pressure to fix the problems that they had a hand in creating. Google is using human reporting to improve the quality of its search results, while Facebook has recently introduced user surveys to gauge the credibility of media sources (and tweak their algorithms accordingly). These initiatives are a good start, but they miss the point.
Big tech wants humans to help machines make better decisions, whereas the role of technology should be to help humans make good decisions. Why do we insist on perfecting the machine while neglecting the humans who use it? What if, in order to create a better media, we need to create better audiences first?
With this idea in mind, a couple of years ago I started working on a project with the help of some pretty phenomenal people. Our goal was to make it easy to share opinions on what stories are worth our time, and create a platform that crowdsources the most important and interesting content from the web. Artificial intelligence would automatically aggregate stories by topic and give personalised recommendations, but humans would have control over the recommendations they receive. That project is now Yoop.
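To make the idea concrete, here is a minimal sketch of how crowdsourced ranking and user-controlled recommendations could fit together. This is purely illustrative – the names, data shapes and scoring rule are my own assumptions, not a description of how Yoop actually works:

```python
# Hypothetical sketch: the crowd's votes rank stories, while each user's
# own topic weights control what gets recommended to them.
def recommend(stories, votes, topic_weights, k=3):
    """stories: list of (story_id, topic) pairs.
    votes: {story_id: net upvotes} - the crowdsourced signal.
    topic_weights: {topic: weight} chosen by the user - the 'human
    control' knob over their own recommendations.
    Returns the top-k story ids for this user."""
    scored = []
    for story_id, topic in stories:
        crowd = votes.get(story_id, 0)          # what the crowd values
        weight = topic_weights.get(topic, 0.0)  # what this user values
        scored.append((crowd * weight, story_id))
    scored.sort(reverse=True)                   # highest score first
    return [story_id for _, story_id in scored[:k]]
```

The point of the design is the split of responsibilities: the machine aggregates and ranks, but the weights that personalise the result stay in the user's hands rather than being inferred behind their back.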
Finding a new moral compass
Turning the original idea into something that people can use has been quite challenging, and the team was faced with many decisions that were as much ethical as they were technical. So we developed four guiding principles that define what we stand for.
Freedom of expression: we believe that people should be free to express their opinion quickly, simply and anonymously. Some people may not want to publicly own an opinion (think of a repressive regime, for example), while others are concerned that anonymity can create a toxic environment. Freedom of expression means balancing these needs out.
Access to information: everyone deserves to find great content that can be trusted, on the issues that matter to them. Quality is more important than quantity, and ease-of-use can coexist with the ability to control our sources of information.
Belief in people: we believe in people’s ability to tell good from bad, to understand the world around them and to make it better. And, over time, access to good information will help us make better decisions about the content we see and want others to see.
Smarter together: when we share insights, we are all wiser. We want to create communities of people that help one another find great stories and expose bad ones. Some opinions will be questionable some of the time, but more will be accurate most of the time – and that is a better record than any media outlet can boast.
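The "smarter together" claim has a classic statistical backing, often called the Condorcet jury theorem: if each person independently judges a story correctly more than half the time, the majority verdict of a larger group is right more often than any individual. A small sketch (the 65% accuracy figure is an arbitrary assumption for illustration):

```python
from math import comb

def majority_accuracy(n, p):
    """Probability that the majority verdict of an odd-sized panel of n
    independent raters is correct, when each rater is individually
    correct with probability p. Sums the binomial probabilities of
    more than half the raters being right."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))
```

With p = 0.65, a single rater is right 65% of the time, but a panel of 101 such raters reaches a majority accuracy above 99% – which is why aggregated opinions can outperform any one voice, provided the votes are genuinely independent.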
As I scroll the screen to get my daily dose of news, I am now giving real-time feedback to my fellows and to the publishers about the quality of their work. I have some influence over which stories will be rewarded and which will go unnoticed, and I feel more in control. Whether this will fix the media will depend on how many of us buy into this vision, but for now I am just happy to trigger my reward system with some quality content.
As machines become ubiquitous, the decisions we make today will have an impact for generations to come. Do you think that people can help fix the media, or should we just trust technology? What is your moral compass?
Please share your thoughts with us, or find out more at yoop.io