Dr. Roman Yampolskiy: Professor of Computer Science and Engineering, Expert on AI Safety and Cyber Security


By Kimberly Mitchell

“If you have a system smarter than us in every domain, you become obsolete. This obsolescence is quite concerning, even if they don’t explicitly do something harmful… just the fact that you’re no longer needed is kind of profound.”

  -Roman Yampolskiy, Expert on AI Safety and Cyber Security

For Dr. Roman Yampolskiy, a fascination with artificial intelligence arose from playing online poker. The tenured Associate Professor in the Department of Computer Engineering and Computer Science at the University of Louisville began studying behavioral biometrics to recognize when he was playing against bots, how to beat them, and, perhaps more importantly, how to stop the bots from playing altogether. The nature of AI bots is to adapt to and overcome any countermeasures placed against them. This reality concerns Yampolskiy and drives much of his research. AI is unlocking incredible opportunities for humanity, but how much of a threat does it pose?

Yampolskiy began his studies in computer science, earning a combined Bachelor’s and Master’s degree from Rochester Institute of Technology. He followed up with a Ph.D. in computer science and engineering at the University at Buffalo. He has held positions and conducted research at the Center for Advanced Spatial Analysis at the University of London, the Laboratory for Applied Computing at Rochester Institute of Technology, and the Center for Unified Biometrics and Sensors at the University at Buffalo. In his current position at the University of Louisville, he founded and directs the Cyber Security Lab. In addition, Yampolskiy is a prolific writer, authoring more than 150 journal articles and books on cybersecurity, AI and AI safety, behavioral biometrics, genetic algorithms, and pattern recognition.

Artificial intelligence is developing faster than ever before, and we are on the cusp of seeing it integrated into our everyday lives. Already, AI controls many systems, from nuclear power plants to hospital patient intake. Within the stock market, most trades are fully automated. Siri, Alexa, and other automated personal assistants are now common. Although self-driving cars are still being tested, Yampolskiy predicts they’ll be in common use within five years. “Most people have no idea,” Yampolskiy says, emphasizing our general lack of knowledge about how AI is developing and how it impacts us. As for how AI could disrupt our lives if the technology becomes too smart, too quickly, “many people are not aware there is a problem.”

Fortunately, Yampolskiy is not only aware, he has dedicated a large amount of research to AI safety. “I’m looking at limits to what can be done with intelligent systems in terms of control, our ability to predict their decisions, determine their behavior, predicting what happens when they get human-level performance.” AI hasn’t reached human-level performance just yet. Though AI systems can play chess, drive cars, and search for your favorite podcast through voice command, they have yet to achieve general intelligence. That day is not far off, though.

“Some people say we’re seven years away. No one has any working safety mechanisms in place. Not even close. I’m trying to understand theoretical limits to what is possible. Is it possible to have safety mechanisms? We’re not even sure.”

By safety mechanisms, Yampolskiy refers to measures that keep AI from becoming so advanced that we are unable to control it. Already, we experience cyberattacks regularly, from hacked social networks to breaches at large sites like Yahoo that hold the user data of millions of people. Bots disseminate misleading information and inflame controversial issues. Stopping those attacks is a game of cat and mouse. Early security systems like CAPTCHA, which asked users to decipher words that were stretched or otherwise distorted, have been broken by AI. Drawing on his work in biometrics, Yampolskiy asserts authentication is “going multimodal, video, audio, mouse movements, all of it together.” So far AI isn’t able to reproduce the entirety of these human behaviors. However, Yampolskiy warns, “if you collect enough data from multimodal experiences, AI will catch up and be able to fake that.”
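The mouse-movement signal Yampolskiy mentions can be illustrated with a toy sketch. Everything below is a hypothetical example, not code from his research: real behavioral-biometric systems use far richer features (curvature, acceleration profiles, pause patterns), but the core intuition is that scripted bots tend to move a cursor with unnaturally uniform speed, while human motion is jittery.

```python
import math

def mouse_features(events):
    """Compute simple kinematic features from a list of (x, y, t) mouse samples.

    Illustrative only: production systems combine many more signals.
    """
    speeds = []
    for (x0, y0, t0), (x1, y1, t1) in zip(events, events[1:]):
        dt = t1 - t0
        if dt > 0:
            speeds.append(math.hypot(x1 - x0, y1 - y0) / dt)
    mean = sum(speeds) / len(speeds)
    var = sum((s - mean) ** 2 for s in speeds) / len(speeds)
    return {"mean_speed": mean, "speed_variance": var}

def looks_like_bot(events, variance_threshold=1.0):
    """Flag trajectories whose speed is suspiciously uniform (hypothetical threshold)."""
    return mouse_features(events)["speed_variance"] < variance_threshold

# A scripted, perfectly linear trajectory: constant speed at every step.
bot_path = [(i * 10, i * 10, i * 0.01) for i in range(20)]
# An irregular trajectory with varying step sizes, standing in for human jitter.
human_path = [(i * 10 + (i % 3) * 7, i * 10 + (i % 5) * 11, i * 0.01)
              for i in range(20)]

print(looks_like_bot(bot_path))    # True: zero speed variance
print(looks_like_bot(human_path))  # False: variance well above threshold
```

This is also why, as the quote notes, the defense is temporary: a bot that replays recorded human trajectories (or learns to generate them) would pass exactly this kind of check.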

If the AI we already possess is eventually able to break the security measures placed around it, how can humans possibly control AI once it reaches superintelligence, with the ability to mimic humans and exceed human capabilities? Yampolskiy believes we’ll reach this point by 2045. The threat from AI, in his view, is the competition between humans and machines. “If you have a system smarter than us in every domain, you become obsolete. This obsolescence is quite concerning, even if they don’t explicitly do something harmful, just the fact that you’re no longer needed is kind of profound,” Yampolskiy states. This is one reason he advocates for AI developers to build security measures into the intelligence itself.

“Nothing to my mind is as important. It’s a meta-problem. If we can control the smart machines, all the problems become solvable. If we don’t control them, then we have a problem.” 

-Roman Yampolskiy, Expert on AI Safety and Cyber Security

Currently, proposals to limit AI include creating virtual environments for AI to operate inside, isolating it from the internet, and limiting its interactions with humans and with data. However, whatever restrictions are placed on AI are temporary. This is the heart of Yampolskiy’s studies.

“It’s a question of time before it finds a way to bypass that and find a permanent solution. I’m just trying to understand what are the limits to what we can do. There are mathematically provable limits to control and I’m trying to formulate what they are. There are a lot of things we can do.”

Given that AI might ultimately find a way to surpass any controls we place on it, is it still beneficial for humans to pursue it? The question is moot. We’ve already reached the point of no return, at least with certain segments of AI, and the benefits are many, from extended life to fixing climate change and beyond. A better question to ask is, who is controlling how fast AI develops, and are there any regulations being put into place? 

“Governance is another very difficult problem,” Yampolskiy admits. “Even if you had some technical solution, it’s not certain it’s going to be implemented. That’s another secondary layer of a problem and just as difficult. Historically, it doesn’t seem like governance of technology is something we do very well, whether it’s biological weapons, nuclear weapons, with all the regulations in place, they still seem to proliferate over time.”

In contrast to governing bodies’ lagging efforts to regulate AI, private companies are surging full force into the future, with financial gain often wrapped up in the development of new technology. Yampolskiy points out that these companies are pushing forward so quickly that they’re not giving the rest of the world enough time to fully understand the implications of technology the public was never consulted on in the first place.

“Nobody’s asking, should we do it?” Yampolskiy points out. “You have this infinite space of possible designs. If you’re not purposefully steering it towards beneficial ones, you’ll get whatever you get, and this is kind of important to get it right. We want to make sure it benefits all of us. Even with dumb technologies, like social networks, we were surprised by the side effects we didn’t anticipate. It just destroyed our democracy. This could be even more powerful.”

With the growing cascade of AI in our lives and the near reality of superintelligence, ignoring the questions Yampolskiy examines is no longer viable. The AI control problem is real. The AI revolution is upon us. Roman Yampolskiy is adamant people must understand this. “There is a lot of impactful technology that will change your life and your future, and the least you can do is understand what’s going on and where it’s going.”  

For more on the AI control problem and the future of AI, follow Roman Yampolskiy on Twitter at @romanyam. All of Yampolskiy’s research papers are available online for free, and his books are available on Amazon, including Artificial Superintelligence: A Futuristic Approach and Artificial Intelligence Safety and Security.
