Alexa, Siri, Google Now, Cortana & Co. – are they (still) too futuristic for you, or already indispensable in your everyday life? Captain Picard from Star Trek showed us as early as 1987 how voice assistants can simplify our lives when he ordered his "Tea. Earl Grey. Hot." by voice command. In the meantime, of course, the intelligent assistants have learned to do much more. But every new IT gadget also has a less enjoyable side: cyber security. Find out here what risks are lurking and how you can minimise them.
Various studies show that more than half of all Germans are open to the idea of digital voice assistants or are already using them (in Switzerland the figures are probably similar). An even greater proportion, however, are concerned about data protection, which is basically a good thing, since data protection is all too often casually disregarded. Still, these concerns should not stop you from benefiting from the advantages of the smart helpers - provided that you observe a few points.
The convenient advantages of smart devices - and in this case specifically voice assistants in smartphones, laptops, tablets or stationary devices such as Alexa - are well known to most: Do I need to bring an umbrella today? What time does my first appointment start? What ingredients do I need for the recipe? You can also search for videos on YouTube, make a call, reserve a restaurant table or place online orders by voice command. It's convenient, saves time and is a lot faster than typing. And for some of us (myself included), there is definitely a fun factor. Digital assistants are also particularly smart because they are constantly learning and getting to know the user better (machine learning and artificial intelligence). But it is precisely this supposed benefit that can very quickly turn into a risk...
Every IT achievement has its price - especially if it is networked via the cloud. With voice assistants, the data is stored in the cloud of providers that are often based in the USA (which is very significant in terms of data protection law). Cloud security should therefore be taken seriously not only in the company, but also privately. Providers such as Amazon, Google and Apple promise not to secretly eavesdrop on users: voice assistants only become active once the activation word is spoken, e.g. "Alexa, ...". But what about data collection? That is allowed. We are already familiar with data collection from online surfing and apps, where it can be used for targeted advertising, for example. The same principle could also be applied to voice assistants in the future. More worrying than annoying advertising, however, is the (currently non-existent) protection of privacy: what information is stored or used, for how long, where exactly and for what purpose - all of this is relatively unclear.
But what data is actually risky? Last night's risotto recipe is not much of a risk, of course. However, if you are at home - in your home office, for example - making business calls and discussing sensitive matters, it is a very different story. The same applies if you dictate a password or your credit card number. And the scenario can be spun even further: in the event of a cyber attack on the voice assistant, conversations about your next holiday could even be used to work out when to break into your home. Scary, isn't it? Unfortunately, this will sooner or later become an increasingly likely scenario. It has even been demonstrated that voice assistants can pick up commands that are barely audible to the human ear, which could be used to manipulate home automation or even place orders in online stores. Voice assistants therefore bring with them a completely new threat situation, as a recently discovered vulnerability in Microsoft's voice assistant Cortana also showed.
As everywhere in cyber security, there is (unfortunately) no such thing as 100% protection. This means that voice assistants are also increasingly becoming the target of cyber attacks - and in the worst case, the assistant mutates into a bugging device. To be fair, with regard to that second scenario, it must be said that we have been aware of problems like these for some time, particularly with Smart TVs and similar devices.
So, would you now prefer to banish Alexa to the basement? No, that's not necessary. By using it in a security-conscious way, you can build the protective walls yourself and thereby guard against data misuse and cyber attacks. We'll give you our six best tips for this:
From dreams of the future, back to the present: of course, voice assistants are by no means a necessity, but the same applies to many apps, and yet all of us use dozens of them. If you use voice assistants with caution, you do not have to worry about the risks any more than with other services. As adoption grows, manufacturers will probably - or hopefully - also raise the security barriers against cyber attacks. How? For example, through security by design: in practice, this could mean activating the voice assistant by fingerprint instead of with a spoken (and widely known) "password" such as "Alexa" - the idea is sketched below.
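To make this more concrete, here is a minimal, purely illustrative Python sketch of what such a second factor could look like in software: sensitive requests are only carried out after a biometric check, rather than on the wake word alone. All function and intent names here are hypothetical placeholders, not any vendor's real API.

```python
# Sketch of "security by design" for a voice assistant:
# sensitive commands require a second factor (e.g. a fingerprint match)
# instead of relying on the wake word alone.
# Everything below is a hypothetical placeholder, not a real assistant API.

SENSITIVE_INTENTS = {"place_order", "unlock_door", "read_messages"}


def verify_fingerprint() -> bool:
    """Placeholder for a real biometric check (e.g. the device's
    fingerprint sensor). Always denies in this sketch."""
    return False


def handle_command(intent: str, utterance: str) -> str:
    """Execute a recognised intent, but gate sensitive ones behind
    an explicit biometric confirmation."""
    if intent in SENSITIVE_INTENTS and not verify_fingerprint():
        return "Sensitive request blocked: biometric confirmation required."
    return f"Executing '{intent}' for: {utterance}"


if __name__ == "__main__":
    print(handle_command("weather_today", "Do I need an umbrella?"))
    print(handle_command("place_order", "Order more Earl Grey tea."))
```

The design point is simply that the wake word identifies *a* speaker, not *the* owner - so anything with real-world consequences should ask for proof that cannot be shouted through a window.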
What about data protection? Voice assistants already present a challenge today, and it will be several years before they are the norm and commonplace in our daily lives. Legislation will increasingly have to deal with issues like these and hopefully eliminate the current confusion. Manufacturers will have to become more transparent with regard to consumers' data protection concerns and the legal requirements. However, even then there will probably be no guarantee that they are not listening in on the user.
Unfortunately, cyber criminals also have time until then to develop new attack methods and exploit existing vulnerabilities. That makes it all the more important that you are aware of the risks and protect yourself. Then you too can enjoy a hot Earl Grey ordered via Alexa - or, these days, perhaps a refreshing iced tea instead.
Artificial intelligence is used in the most diverse areas, including cyber security. The threat and risk of cyber attacks are constantly growing and putting companies under ever greater strain. The sophistication of the attacks is remarkable - and frightening - and may be one reason why it can often take a relatively long time to detect a threat. What is the solution? Artificial intelligence: it can significantly reduce containment costs and help prevent potential damage.
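As a purely illustrative example of how machine learning can support threat detection (and not a description of the method behind the report mentioned below), here is a small Python sketch using scikit-learn's IsolationForest to flag hosts whose network behaviour deviates strongly from the norm; the feature values are invented for the example.

```python
# Illustrative only: flag unusual network behaviour with an IsolationForest.
# The features and numbers are made up for this sketch and are not taken
# from any real product or report.

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Features per host and hour: [connections, MB sent, distinct destinations]
normal = rng.normal(loc=[50, 20, 10], scale=[10, 5, 3], size=(500, 3))
suspicious = np.array([
    [400, 900, 150],   # sudden large outbound transfer (exfiltration-like)
    [60, 25, 300],     # many distinct destinations (scanning-like)
])

# Train on behaviour assumed to be normal, then score new observations
model = IsolationForest(contamination=0.01, random_state=42)
model.fit(normal)

for prediction, label in zip(model.predict(suspicious),
                             ["exfiltration-like", "scanning-like"]):
    verdict = "anomalous" if prediction == -1 else "normal"
    print(f"{label}: {verdict}")
```

The value of this kind of model is not that it knows what an attack looks like, but that it notices when a host stops behaving the way it usually does - which is exactly the signal that often goes unseen for so long.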
How can you do this? Find out in the Vectra Attacker Behaviour Industry Report 2018, which we have made available to you free of charge. The report provides an analysis of behaviours that indicate ongoing, persistent activity by attackers in corporate networks. Download it now!
Would you like to know more about Artificial Intelligence, Machine Learning, Cyber Security and related topics? Then read our Cyber Security blog, or subscribe to our blog updates so that you don't miss out on any articles!