How AI Is Shaking the Foundations of Cybersecurity

August 24, 2018

Tony Kontzer

The crossroads are coming faster than ever for the world of cybersecurity

by Tony Kontzer | RSA Conference

Information security leaders have had to do a lot of adapting over the past decade. They’ve shifted from an emphasis on protecting the perimeter of the network to defending access points.

They’ve turned a large portion of their outward gaze inward as knowledge and awareness of insider threats mushroomed. They’ve extended their layers of protection endlessly to accommodate the growing thirst for mobile access to data. And they’ve accommodated unfathomable complexity as migration of applications to the cloud has accelerated.

But all of that might prove to be child's play compared to what's coming next, the very same wave gripping nearly every industry: the fast-approaching age of artificial intelligence.

Whether they’re in AI hot spots like the U.S. or U.K., or in markets as far-flung as Dubai, security observers clearly want us to know that a) hype or not, AI is the talk of the security realm, and b) if AI isn’t part of your security strategy, then you’re a step behind the bad guys.

If you want further proof of how important AI is to the future of security, consider the opening of a cybersecurity innovation center in London earlier this summer, or the recent deal between the governments of France and the U.K. to collaborate on research and funding of innovation in this area. The subtext of such developments is clear: We have to do something to give ourselves a fighting chance.

As Jon Oltsik, senior analyst at Enterprise Strategy Group, told Investor’s Business Daily, “the arms race is real.”

That reality was on display at this year’s RSA Conference in San Francisco, where AI was ubiquitous among the 10 finalists in the annual Innovation Sandbox competition. What attendees saw that day was a reflection of a trend CB Insights had already documented: Cybersecurity startups attracted $7.6 billion in venture capital investments during 2017, double what they received in 2016.

That kind of increased financial commitment makes a powerful statement that something big is afoot.

How big? Big enough that the U.S. Department of Defense is preparing to detail three separate strategies at the intersection of national defense, cybersecurity and AI. And as past DoD innovations such as the Internet, GPS and videoconferencing have proven, when the Pentagon gets behind a technology, momentum tends to follow.

Put all of these indicators together and you get one of those rare moments when technological innovation, a big problem to solve, and a marketplace hungry for new solutions combine into a perfect storm: a sort of Frankenstein Effect, with tech innovators playing the role of Dr. Frankenstein, AI as the new monster, and organizations looking not for immortality but for a leg up on their attackers.

The emergence of AI in cybersecurity also brings with it some other considerations, none more important than the need for cybersecurity leaders to have a louder voice in the C-Suite. We’ve frequently documented the need for organizations to give their security execs a seat at the boardroom table, most recently highlighting the need to inject boards with more security moxie. Boards need to hear more from experienced security thinkers, while security thinkers are likely to get demonstrably better at their jobs if they have a clear picture of high level business objectives.

The stakes are even higher thanks to AI, which promises to bring a degree of change to cybersecurity that's never been seen before. Whether that means more sophisticated attacks, whole new levels of automation in responding to those attacks, or simply handing off security to an artificial decision-maker, how organizations embrace AI will go a long way toward shaping cybersecurity and the larger business landscape for many years to come.

How pervasive is AI becoming? On a 2017 episode of the Cartoon Network's Teen Titans Go! titled "Ones and Zeroes," the main characters try to build a robot that can make a new kind of pizza. To test the robot's human-like intelligence, the group subjects it to the Turing Test, which is designed to determine whether a machine's intelligence is equivalent to a human's. They even provide a little snippet explaining who Alan Turing, the innovator behind the test, was and how the test works.

Think about that: A show aimed at school-aged children is teaching them how to verify the effectiveness of an AI program, and is even educating them about one of the technology's groundbreakers. It doesn't take a huge leap to see that as a method for guiding the next generation of workers into AI-related jobs.

Which is why security experts should pause before expressing skepticism about AI’s hype in the security realm. Sure, there’s a certain Kool-Aid quality to all the AI talk. Yes, it’s going to be some years before AI possesses the maturity to deliver on its many promises. And there’s no doubt that security professionals have every reason to fret over their future career paths if AI starts doing all of their work.

But with attackers successfully employing AI-powered tools to do their bidding, security practitioners who discount AI do so at their own risk. True, AI doesn't represent a silver bullet that can cure an organization's security woes (as the article linked above argues), but it's a bullet nonetheless, and one that all security leaders should be making part of their lines of defense.

Source: RSA Conference
