12.29.2017

ScienceSeeker picks 2017 highlights: Investigating the interplay between Mind and Machine

Image credit: Pabak Sarkar, used under a Creative Commons licence.
by Ananya Sen

The works of Isaac Asimov always seemed, to me, to be the stuff of pure magic. It was incomprehensible that in the near future we would live in a society where machines would be smoothly integrated into our daily lives. The first step towards that seeming impossibility was the advent of smartphones. Now we can connect with friends on the opposite side of the world, send work emails, and casually scroll through pictures of travel destinations and the hottest restaurants, all while sipping our morning coffee. Smartphones have become so enmeshed in our daily routines that it is impossible to ignore the sea of people with heads bent over their screens. And so, when ScienceSeeker asked me to choose some highlights from the posts its editors picked as their favourites in 2017, smartphones were always likely to be among them.

For example, it turns out there is a biological component to our smartphone fixation. Smartphone use appears to alter the balance of the neurotransmitters gamma-aminobutyric acid (GABA) and glutamate-glutamine (Glx). A study conducted on teenagers showed that the ratio of GABA to Glx was higher in the cortex of internet- and smartphone-addicted youth than in healthy controls. An imbalance in this ratio has been linked to drowsiness and anxiety. Further studies will elucidate the clinical implications of this finding and determine whether the emotional problems are the cause or the consequence of this media addiction.

However, our modern machines can clearly be beneficial, and this year video games have been studied as an option for treating kids with ADHD. Akili Interactive Labs has designed a game that targets specific neural pathways, leading to cognitive improvement. The targeted algorithms are aimed at producing the same beneficial effect as ADHD drugs, without the side effects. The company is now developing similar video games for treating adults with depression, children with autism, and patients with multiple sclerosis.

Virtual reality technology is now exciting avid video game players – and scientists. Researchers are combining virtual reality with visualization software to explore the 3D structure of the brain. This allows scientists to perform visual dissections of the brain instead of depending on traditional microscope slides of 2D sections. Furthermore, scientists can now trace individual neurons simply by slipping on a virtual reality headset. This is a vast improvement over current software programs, which require constantly rotating 2D images to visualize the path of a neuron, a cumbersome process considering that a cubic millimeter of brain contains approximately 300 million neural connections. The new technology borrows graphics techniques from the “Harry Potter” franchise, so it is hardly surprising that the results seem magical!

Artificial intelligence (AI) has also surged ahead this year. AI programs have vastly improved and can now challenge the human mind more effectively than ever before. One such example is DeepMind's AlphaZero, which trained itself to become an overlord in chess, in Go (an abstract strategy board game that is more complex than chess), and in shogi (a Japanese version of chess) in less than a day. That is pretty impressive considering that the program started from scratch on each of the games. For comparison, the previous program, AlphaGo, required three days of training by the DeepMind team, observing human players and integrating that information with machine learning.
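To make the self-play idea concrete, here is a minimal, purely illustrative sketch in Python. It uses tic-tac-toe rather than chess, Go or shogi, and a simple lookup table rather than a deep neural network, but the loop is the same in spirit: the program plays against itself with no human game data and nudges its evaluations toward the outcomes of its own games.

    # A toy sketch of self-play reinforcement learning (assumptions: tic-tac-toe
    # stands in for chess/Go/shogi, and a lookup table stands in for a neural net).
    import random
    from collections import defaultdict

    LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

    def winner(board):
        for a, b, c in LINES:
            if board[a] != "." and board[a] == board[b] == board[c]:
                return board[a]
        return None

    values = defaultdict(float)  # learned value of each board state, from X's point of view

    def choose_move(board, player, epsilon=0.2):
        moves = [i for i, square in enumerate(board) if square == "."]
        if random.random() < epsilon:
            return random.choice(moves)  # explore occasionally
        def score(m):
            # otherwise pick the move whose resulting state currently looks best
            nxt = board[:m] + player + board[m+1:]
            return values[nxt] if player == "X" else -values[nxt]
        return max(moves, key=score)

    def self_play_game():
        board, player, history = "." * 9, "X", []
        while True:
            m = choose_move(board, player)
            board = board[:m] + player + board[m+1:]
            history.append(board)
            w = winner(board)
            if w is not None or "." not in board:
                result = 1.0 if w == "X" else -1.0 if w == "O" else 0.0
                return history, result
            player = "O" if player == "X" else "X"

    # Training loop: no human games are consulted; every self-played game nudges
    # the value of the states it visited toward the final result.
    for _ in range(20000):
        history, result = self_play_game()
        for state in history:
            values[state] += 0.1 * (result - values[state])

AlphaZero replaces the lookup table with a deep neural network and the greedy move choice with a network-guided tree search, but the heart of it is this same self-play loop.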

Machine learning has also enabled the development of Google's new earbuds, which can auto-translate between 40 languages. The advance conjures up the image of a universal translator, a concept prevalent in science fiction, and like a universal translator the earbuds work in real time. They use the Google Neural Machine Translation framework, an artificial neural network that has increased the accuracy of Google Translate. The next step in improving the earbuds would be to incorporate zero-shot translation, a technique already used by the current Google Translate software. Zero-shot translation converts between two languages, say Chinese and Japanese, without first translating into a reference language such as English. This would make the earbuds more efficient and allow them to handle more languages.
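The trick behind zero-shot translation, as described in Google's work on multilingual neural machine translation, is surprisingly simple to state: one shared model serves every language pair, and a small token prepended to the source sentence tells it which language to produce. The Python sketch below is purely illustrative (the model object, its decode method, and the exact token format are hypothetical stand-ins, not Google's actual API), but it shows how a Chinese-to-Japanese request can be expressed even if the model was only ever trained on Chinese-English and Japanese-English sentence pairs.

    # An illustrative sketch of the zero-shot idea (the model object and its
    # decode() method are hypothetical stand-ins, not Google's real API).

    def make_model_input(source_sentence: str, target_lang: str) -> str:
        # A target-language token steers the shared multilingual model;
        # "<2ja>" means "produce Japanese", whatever the source language is.
        return f"<2{target_lang}> {source_sentence}"

    def translate(model, source_sentence: str, target_lang: str) -> str:
        # One encoder-decoder handles every pair, so there is no pivot through English.
        return model.decode(make_model_input(source_sentence, target_lang))

    # Zero-shot request: Chinese straight into Japanese, even if the training data
    # only ever paired each language with English.
    # japanese_text = translate(shared_model, "你好，世界", "ja")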

Meanwhile, Google has used machine learning to convert Google Street View images into professional-grade photographs without human intervention. These images were shown to professional photographers, alongside photos taken by humans, to see how the AI images fared: approximately two out of five received scores comparable to professional shots. This challenges the traditional way of creating such photographs, which usually requires hours of image manipulation, applying filters and adjusting light and composition to produce a stunning image.

Machines and humans are becoming increasingly interconnected. Technology has entrenched itself in our lives in a way that was unthinkable only a few decades ago. Although there are some drawbacks, the potential benefits have been stunning. Furthermore, it is truly exciting to see how far AI can go in investigating and challenging the very neural networks of the human brain that inspired its creation.

Ananya Sen is a grad student in microbiology at the University of Illinois, Urbana-Champaign, working on oxidative stress in the Imlay lab, as well as a foodie, dog lover, and science blogger at Tales of Scientific Journeys.
