AI Chatbot Warning

A warning has been issued over AI chatbots after a new survey found that nearly a third of people (30%) have entered health information, such as symptoms and medical history, into them. Personal financial data is also being shared, with over a quarter of respondents (26%) admitting to entering financial details such as salary, investment and mortgage information, while nearly one in five (18%) have disclosed credit card or bank account data.

The risks extend to company data too. Almost a quarter (24%) have inputted customer information – such as names, email addresses and client details – into AI tools, while 16% have uploaded company financial data and internal documents such as contracts.

AI tools such as ChatGPT have changed the way many people work, with over a quarter of employees (28%) saying they now use AI even for simple tasks like drafting emails.

However, despite this growing adoption, concerns remain widespread. Over two fifths of employees (43%) said they worry about sensitive company data being leaked or stored by AI tools, 36% fear breaches of confidentiality, and 33% are concerned that AI tools may unintentionally expose private data.

These fears have been exacerbated by recent widespread data breaches seen at the likes of Marks & Spencer, Co-Op and Adidas.

When asked about their overall attitude to AI, nearly half of Brits (48%) described themselves as cautious – recognising the usefulness of AI, but worried about privacy. Meanwhile, a quarter of Brits (24%) admitted they don’t trust AI with personal or sensitive information despite its usefulness.

Harry Halpin, CEO of cybersecurity experts NymVPN, comments:

“AI tools have rapidly become part of how people work, but we’re seeing a worrying trend where convenience is being prioritised over security.

“The fact that users are so willing to share personal information about their health and finances means they are putting key aspects of their lives on the line.

“The problem is that many AI platforms rely on storing and processing user questions in the cloud. Once sensitive data like health records, customer information or financial details is uploaded, it can be very difficult to control where it ends up.

“High-profile breaches show how vulnerable even major organisations can be, and the more personal and corporate data that is fed into AI, the bigger the target becomes for cybercriminals. Stolen data can be used for fraud, blackmail, or sold on the dark web.

“Employees and businesses urgently need to think about how they’re protecting both personal privacy and company data when using AI tools. This starts with company-wide AI usage policies, secure VPN connections and encrypted browsing.

“Blocking unwanted tracking is essential, but it’s important that companies combine these protections with staff training and clear internal guidelines to reduce the growing risks posed by AI adoption.”
