Pattern and facial recognition are revolutionizing medicine, the automotive industry and marketing, and making people’s lives easier. However, these advances have a dark side too. You should prepare for the fact that your face may attract a lot of interest in the coming years.

Like any other modern, innovative technology, facial and object recognition has a rapid but brief history behind it. As we go over its breakthrough moments, we might revisit the year 2011, when Jeff Dean, an engineer at Google, met computer science professor Andrew Ng. Together they came up with the idea of creating a powerful neural network into which they “fed” 10 million images taken from the internet (mainly from databases, e-mails and YouTube videos and photos). After dozens of hours of continuous processing, the visual input had yielded three patterns that could be used to distinguish images of human faces, human bodies and cats. From then on, the software could process further data and decide instantly whether an object portrayed in an image was or was not, say, a cat. Although this may not sound particularly exciting, it was a major breakthrough: a simple and yet very effective method had been developed. …
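To make the underlying idea a little more concrete, here is a minimal, purely illustrative sketch in PyTorch of a small convolutional network trained to sort images into a few classes (say, face, body, cat). It is not the system Dean and Ng built (their experiment learned features from unlabeled data at a vastly larger scale); it is just the same basic pattern-recognition recipe in miniature, and all the sizes, class names and data here are made-up assumptions.

```python
# Illustrative sketch only: a tiny convolutional classifier for three hypothetical
# classes ("face", "body", "cat"). Not the Google Brain experiment described above.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyClassifier(nn.Module):
    def __init__(self, num_classes: int = 3):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 16, kernel_size=3, padding=1)   # low-level edge/texture filters
        self.conv2 = nn.Conv2d(16, 32, kernel_size=3, padding=1)  # higher-level patterns
        self.pool = nn.MaxPool2d(2)
        self.fc = nn.Linear(32 * 16 * 16, num_classes)            # map learned features to class scores

    def forward(self, x):                      # x: batch of 64x64 RGB images
        x = self.pool(F.relu(self.conv1(x)))   # -> 16 channels, 32x32
        x = self.pool(F.relu(self.conv2(x)))   # -> 32 channels, 16x16
        return self.fc(x.flatten(1))

model = TinyClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Stand-in for a real labeled dataset: random images and random labels.
images = torch.randn(8, 3, 64, 64)
labels = torch.randint(0, 3, (8,))

for _ in range(5):                              # a few gradient steps on the toy batch
    optimizer.zero_grad()
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    optimizer.step()

# Once trained on real data, the model can score a new image instantly,
# e.g. the probability that the image shows a cat.
with torch.no_grad():
    probs = F.softmax(model(images[:1]), dim=1)
```

The real breakthrough lay less in any single network than in the recipe itself: feed enough images through a layered network and it discovers, on its own, the visual patterns that separate one kind of object from another.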



Is it time we thought about a “red safety button” in case AI gets out of control? Or should we respond to our fear of an algorithmic armageddon more constructively, by designing human-friendly AI?

The idea of a symbolic kill switch that would instantly neutralize hostile algorithms is not foreign to the average person, nor to tech industry moguls and celebrity scientists. There is no need to cite the repeated warnings of Elon Musk, Bill Gates, Yuval Harari and Stephen Hawking regarding the risks associated with uncontrolled AI. Before I reflect on how such human-friendly algorithms could work, I’d like to go over the visions of civilizational and technological pessimists.

Cosmic indifference

Some increasingly popular concepts predict that as artificial intelligence continues to self-improve, it will stop heeding any limits. It would be against its nature to do so. Such AI will resemble an ever more complex self-replicating virus that invades successive realms of our existence, whether biological, emotional or intellectual. Algorithms that continuously improve their own organization will subjugate us just as we have subjugated animals. They will gain the ability to organize, or actually disorganize, social life with no regard for our views, laws or protests. Seen this way, AI will not necessarily have to act on a rational (as we might see it) intention to subjugate people. It may cause a disaster merely by having its algorithms establish their own hierarchy of goals that completely ignores our interests. The concern is not that a cyborg will come firing its laser gun at us, meaning that one species will take over another in a hostile attack. Rather, we might be sidetracked and taken out of the game by entities that are indifferent towards Homo sapiens, if not unaware of its existence. …



Over the past four years, tech companies have enjoyed rapid growth. Their problem has been mounting distrust on the part of private users and growing media criticism. Now, with a crisis sweeping across the globe, they may actually have happened upon an opportunity: a chance for the industry to mend its reputation.

While any crisis can be an opportunity to sway public opinion, it can also cause you to fall out of favor quickly. The pandemic poses numerous threats to the tech industry, but it is not without silver linings. Companies may use their ad hoc responses to greatly enhance their image. …



The Earth is sick, and the causes of its disease lie in our choices, myopia, and lack of imagination. In our struggle to understand what is ruining our planet, we might find the solution in artificial intelligence.

Climate change, with deforestation, melting glaciers, mass extinctions and carbon emissions, is the biggest problem of the century. If you struggle to understand what is ruining our planet, perhaps you should look for clues in artificial intelligence. Oceanographers, biologists and meteorologists successfully use a host of applications that let them see Earth from a whole different perspective.

Apocalypse according to the UN

As I was writing this post, I came across a dozen or so articles on climate change. I never searched for this information. Some of it arrived in my mailbox in the form of headlines; the rest I found easily on major websites. All of it had been published a day, if not mere hours, earlier. The gist of each story was nearly identical to what I found in the New York Times: “… The world’s land and water resources are being exploited at unprecedented rates, which combined with climate change is putting dire pressure on the ability of humanity to feed itself…” A similarly somber mood dominated each of the other articles, many of which struck truly apocalyptic tones. And no wonder. …



Personalization has conquered the market by marrying a few basic psychological mechanisms with technological capabilities. Although people grow hypersensitive about protecting their privacy, they then turn around and readily share their digital data and intimate life stories, allowing brands to use such information to build lasting relationships with them. We help brands examine us and peer into the deepest recesses of our minds. Personalization would never have happened without our willingness to share private content, our thoughtless consent to the processing of our data and our conviction that our personal uniqueness should be met with equally unique products and services. Algorithmic technologies cleverly use “psychology” to feed on our addictions. They encroach on our lives, making sure we do not notice it too much. We are no longer surprised to receive an email from a company that addresses us by name. What we get is almost exactly what our ego wants: to be noticed, appreciated and heard. By virtue of our involvement in this process, we are experiencing probably the biggest change in the history of customer-producer relationships. …



For the first time in the history of societies, people will live with technology as a partner that is guided by its own rules and laws. Politicians are increasingly aware of the various consequences of AI expansion. Will they be able to regulate our relationship with AI?

Politicians are increasingly aware of the various consequences of the expansion of technologies linked to artificial intelligence. We do not know whether their assertions today will remain relevant a few years from now, or whether the decisions they are currently making will prove to have been well advised. However, their efforts to bring order to AI-related issues certainly deserve our attention.

For at least five years now, the world has been using artificial intelligence in the economy, education, science and areas of general public interest. Inevitably, the governments of many countries could no longer avoid reflecting deeply on their experience with AI to date and on research into its nature. Artificial intelligence is not merely a perfect driver of economic development or yet another innovative IT solution. Rather, it is a powerful force that is changing the way societies work, creating new kinds of interpersonal relationships and new professional roles. New technologies are set to change the way we work, learn, rest, and communicate with one another. The Internet of Things will alter the functioning of cities, blockchain will revolutionize the authentication of financial transactions, voice assistants will become an important tool for obtaining information, while chatbots will dominate customer service. …



How would one grant rights to machines? The matter still cannot be discussed with the party concerned. Artificial intelligence will not tell us whether it feels that people’s existing legal system is treating it well. So we have to manage on our own and decide on its behalf.

In late July 2019, the world learned that the company Neuralink was close to integrating the human brain with a computer. The first interface intended to enable this feat was unveiled. We may thus be in for an incredible leap in expanding our cognitive abilities. The consequences of such a leap would be varied, and we would certainly not avoid having to make unprecedented legal and ethical choices. In view of such a breakthrough, the question of machine or humanoid rights becomes all the more relevant. …


It only took a couple of weeks or so for dark visions reminiscent of the Black Mirror TV series to become reality: think of drones hovering overhead and taking our body temperature, and smartphone apps notifying law enforcement of whether we are staying home. Is the massive amount of health data collected during the pandemic going to be deleted once the dust settles?


It has long been common knowledge that digital technology can keep track of our every activity online. The digital tracks we leave behind are easy to follow. Advertising agencies, online stores and online banks take advantage of this to target their ads and observe our shopping preferences. In recent weeks, however, these practices have been taken to a whole new level. Never before has technology been used so intensively and widely to monitor our health and, indirectly, to manage society. Extraordinary measures that endanger our privacy on such a scale are something only those who lived in the United States in the period immediately following 9/11 can remember. The Chinese may be somewhat less bewildered, having grown accustomed to their country’s social credit system, which has long relied on data from the Internet, street cameras and smartphones to rate its citizens. …


Are algorithms capable of discrimination? I am afraid they are. What complicates the question is the fact that algorithm developers can hardly be accused of malicious intent. How then could a mathematical formula put individuals and communities in harm’s way?


As distant and aloof as mathematical equations may seem, they are also commonly associated with reliable, hard science. Every now and then, it nevertheless turns out that a sequence of numbers and symbols conceals a more ominous potential. What is it that causes applications which otherwise serve a good cause to go bad? There could be any number of reasons. One of the first that spring to mind has to do with human nature. People are known to follow a familiar mechanism of letting stereotypes and prejudices guide their lives, applying them to other individuals, social groups and value systems. Such cognitive patterns can easily be driven by a lack of imagination and a reluctance to give matters proper consideration. The resulting explosive mixture spawns negative consequences. People who blindly trust computer data fail to see the complexity of situations and easily forgo subjective assessments of events. Once that happens, unfortunate events unfold, causing huge problems for everyone involved. …



Millennials, who came into the world in the 1980s, and Generation Z, born in the mid-90s, believe in sharing just about anything: web links, streaming website subscriptions, city bikes, even apartments with holidaymakers. As consumers develop a preference for sharing or renting goods and services over purchasing them, global business is confronted with an interesting challenge.

The sharing economy is gathering momentum in modern business. Only a decade ago, the market niches in which consumers shared goods seemed like a passing fad. To many people’s surprise, not only has the trend not died, it actually seems to be growing stronger and even inspiring the decisions of large tech companies. The popularity of platforms such as Netflix and Airbnb and applications such as Uber stems largely from the new mindsets of consumers born after 1980. …

About

Norbert Biedrzycki

Technology is my passion. Head of Microsoft Services CEE. Private opinions only
