The use of AI in the workplace has grown rapidly in recent years. Many employees use AI without being aware of the risks it carries, and many companies have no acceptable use policies or guidelines in place for their staff. This gives rise to new risks that are not being fully identified, assessed, and managed. Through our information service, Neural Networking, we provide detailed coverage of AI and its risks. Some of the risks associated with the use of AI are listed below.
A false positive occurs when an AI detection tool incorrectly flags content as problematic or concerning when it is not. A false negative is the reverse: the tool fails to flag content that genuinely is problematic. An example of a false positive is an AI tool diagnosing a patient with a disease they do not have, leading to expensive, unnecessary treatment. An example of a false negative is an AI tool failing to detect a disease a patient actually has, delaying necessary treatment.
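To make the distinction concrete, here is a minimal sketch in Python that counts false positives and false negatives by comparing an AI tool's flags against known ground truth; the labels and data are purely illustrative.

```python
# Minimal sketch: counting false positives and false negatives.
# Ground-truth labels and tool outputs below are hypothetical.

ground_truth = [True, False, True, False, False, True]   # is each case actually adverse?
tool_flagged = [True, True,  False, False, True,  True]  # did the AI tool flag it as adverse?

# False positive: flagged as adverse, but actually fine.
false_positives = sum(
    1 for actual, flagged in zip(ground_truth, tool_flagged)
    if flagged and not actual
)

# False negative: actually adverse, but the tool missed it.
false_negatives = sum(
    1 for actual, flagged in zip(ground_truth, tool_flagged)
    if actual and not flagged
)

print(f"False positives: {false_positives}")  # -> 2
print(f"False negatives: {false_negatives}")  # -> 1
```

Which error type is more costly depends on the setting: in the medical example above, a false negative (a missed disease) can be far more harmful than a false positive.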
Inadvertently inputting private information into an AI platform may breach privacy law. Once a chatbot releases that information onto the internet, the privacy of the information entrusted to the organisation that collected it has been compromised. In Australia, such breaches of the Privacy Act can trigger both civil and criminal penalties. Employees and organisations need to be aware of these risks.
Inadvertently inputting confidential information into an AI platform may cause a breach of contract between the AI user's employer and its supply chain clients or suppliers. Once a chatbot releases that information onto the internet, confidentiality clauses in those contracts may be breached, which can lead to civil lawsuits with financial and reputational consequences.
Examples of social manipulation using AI include: