Monday, April 30, 2018

Dublin Tech Summit 2018: Cybersecurity, Artificial Intelligence, and Privacy

I had the pleasure of attending this year’s Dublin Tech Summit, which took place at Dublin’s Convention Centre on April 18th and 19th. A number of famous faces were there, from “e-celebrities” like YouTuber Casey Neistat to geekier figures such as Jordan Evans of the NASA Jet Propulsion Laboratory (JPL). There was something for everyone, and most attendees would have recognized at least one of the presenters. While Casey Neistat focused on his life growing up and how he became a YouTuber, Jordan Evans talked about the work JPL is currently undertaking and its hopes for the future. Both presentations came with a lot of “fun” attached: information laced with some comedy. Despite the seemingly lighthearted nature of the conference, things became a little more serious when Michael Chertoff, former United States Secretary of Homeland Security, took to the stage to talk about cybersecurity, artificial intelligence, and privacy.

His talk, entitled “Exploding Data: Reclaiming Our Cyber Security in the Digital Age,” initially focused largely on terrorism. He opened by giving examples of the ramifications of 9/11 and how it affected world travel as a whole, and focused heavily on how terrorism has molded much of our lives and how the invasion of Afghanistan happened as a result. Then came the kicker: the US had access to data collection methods that could potentially have prevented the attacks, but it simply hadn’t used them. He stated that if the attacks had been planned the same way now as they were back then, the FBI would likely have intervened long before any lives were in danger.

So why didn’t the US collect this data?

Chertoff puts it pretty simply, explaining that the US government just never really thought about it. In the aftermath of 9/11, it discovered that people’s data could be used almost like radar to detect those who stand out as potential risks to the state. Then comes the question of privacy, where he admits that surveillance occasionally goes too far. He gives the example of new machine learning algorithms used in airport x-ray machines: even though the scans are more invasive than ever, he asks whether what really matters is how that data is used. Commercial companies use our data to exploit us for profit, while these machine learning algorithms are simply doing it to keep us safe. He also points out that because these x-ray machines are operated entirely by artificial intelligence, nobody is actually looking at you; it’s all a machine. He brings up the concerns about “Big Brother” from George Orwell’s Nineteen Eighty-Four, and how corporations hold not just our data but our subconscious data; thanks to artificial intelligence, in some ways we are worse off. It’s not just where you drive, he explains, it’s how and why you drive. In his opinion, it’s not just about hiding your data, it’s about controlling your data.

One of the biggest challenges we face in legislating for new technologies and ever more intrusive algorithms is the age of said legislators, Chertoff continues: people who do not fully understand a topic should not be the ones making rulings on it. He then mentions The Two Cultures, the first part of a Rede Lecture delivered by C.P. Snow, a thesis analysing how society has split into two main cultures: the sciences and the humanities (or philosophy).

“We need to understand that as we design things, what are the ethical and philosophical implications?” Chertoff asks. “We need to bring philosophers and scientists together.”

As far as cybersecurity goes, he wouldn’t panic, but he agrees that there is a need for legislation, particularly when it comes to Internet of Things devices, and that companies should be held accountable for their security. Massive DDoS attacks have been made possible by mundane devices like baby monitors, simply because of the lack of care from their manufacturers: there is no regulation and no provision for patching these devices against vulnerabilities.

In closing, Chertoff tells us to judge whether what we are doing is beneficial to ourselves when we provide our data to these companies, and to calibrate our engagement based on how beneficial it is. You can’t get paranoid and disengage from everything, but at the same time, don’t become complacent either. He compares it to crossing the road: it can be risky, but if you look both ways you’ll probably be fine. Look both ways when you provide your data, and make sure you know exactly what you’re getting into.

He sees the GDPR as a good thing, pressuring companies into choosing wisely the data they want to collect, and believes the US could definitely benefit from something similar. He brings it back to artificial intelligence: when all that’s looking at your data is a machine, where does the privacy violation begin? This is where philosophers join the design process, and he believes there are discussions that need to be had not just about spying in general, but about the spying that is deemed a necessary violation of privacy.


So what are your thoughts? What do you think about companies harvesting your data for a potentially mutual benefit? Is it a violation of privacy? Let us know in the comments!



