What are Machine Learning and Artificial Intelligence Threats?
As machine learning (ML) and artificial intelligence (AI) become more prevalent in our daily lives and digital world, it is important to consider the concerns associated with adopting these technologies. The following are security risks to be aware of:
Data poisoning and model poisoning. ML systems rely on large training datasets, and poisoning attacks corrupt that data: the attacker modifies or injects training samples so that the model learns to draw erroneous conclusions. Because manipulating the training data degrades the model’s ability to make correct predictions1, this is classified as an integrity attack. Other attacks are classified by their impact: confidentiality attacks, in which attackers feed inputs to the model to infer potentially sensitive details about its training data; availability attacks, in which attackers disguise their inputs to evade correct classification; and replication attacks, in which attackers reverse-engineer the model so they can replicate and analyze it locally to prepare attacks or exploit it.
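The effect of injecting mislabeled training samples can be sketched with a toy example. The classifier, data, and attack below are all hypothetical, a minimal illustration rather than a real-world attack: a simple nearest-centroid model is trained on clean one-dimensional data, then on the same data after an attacker injects points that sit in class 1’s territory but carry the label 0, dragging class 0’s learned centroid toward them.

```python
import random

random.seed(0)

def make_data(n):
    """Two 1-D classes: class 0 centred near -2, class 1 near +2."""
    data = []
    for _ in range(n):
        label = random.randint(0, 1)
        data.append((random.gauss(-2.0 if label == 0 else 2.0, 1.0), label))
    return data

def train_centroids(data):
    """Toy learner: the per-class mean of the training inputs."""
    sums, counts = {0: 0.0, 1: 0.0}, {0: 0, 1: 0}
    for x, y in data:
        sums[y] += x
        counts[y] += 1
    return {c: sums[c] / counts[c] for c in (0, 1)}

def accuracy(centroids, data):
    predict = lambda x: min(centroids, key=lambda c: abs(x - centroids[c]))
    return sum(predict(x) == y for x, y in data) / len(data)

train, test = make_data(2000), make_data(1000)

# The poisoning: inputs deep in class 1's region, falsely labeled 0,
# which pull class 0's centroid rightward and shift the boundary.
poison = [(6.0, 0)] * 800

clean_acc = accuracy(train_centroids(train), test)
poisoned_acc = accuracy(train_centroids(train + poison), test)
print(f"clean: {clean_acc:.2f}  poisoned: {poisoned_acc:.2f}")
```

The same training pipeline, run on the polluted dataset, produces a visibly worse model on the identical test set, which is the integrity violation described above.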
Data privacy. When data is built into the ML model, cyberattackers may launch inconspicuous data extraction attacks that place the entire ML system at risk. A related angle is smaller sub-symbolic function extraction attacks, which require less effort and fewer resources. Data stolen through extraction attacks can include sensitive personal information such as credit card numbers, social security numbers, and more.
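To make extraction concrete, here is a minimal, hypothetical sketch: a "victim" linear scorer sits behind an API, and the attacker, who can only submit inputs and read scores, recovers its parameters exactly with a handful of well-chosen queries. The model, its weights, and the query function are all invented for illustration.

```python
# Hypothetical victim model behind an API: a linear scorer whose
# weights the attacker cannot see, only query.
SECRET_W, SECRET_B = [0.7, -1.3, 2.1], 0.4

def query_model(x):
    """Black-box access: the attacker sees only the returned score."""
    return sum(w * xi for w, xi in zip(SECRET_W, x)) + SECRET_B

# Extraction: a linear model over n features is fully determined by
# n + 1 queries. Probe the origin, then each basis vector.
n = len(SECRET_W)
stolen_b = query_model([0.0] * n)
stolen_w = []
for i in range(n):
    basis = [1.0 if j == i else 0.0 for j in range(n)]
    stolen_w.append(query_model(basis) - stolen_b)

print("stolen weights:", stolen_w, "stolen bias:", stolen_b)
```

Real models are far more complex, but the principle scales: enough query-response pairs let an attacker reconstruct a functional copy, and with it whatever sensitive information the model has memorized.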
Transfer learning attacks. ML systems are often built on pre-trained models that are then specialized for a designated task through additional training. If the underlying pre-trained model is well known, cyberattackers will not find it difficult to craft attacks that deceive the task-specific model derived from it.
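Why a well-known base model helps the attacker can be sketched in a few lines. Both models below are hypothetical linear scorers: the "public" weights stand in for a pre-trained model anyone can download, and the "fine-tuned" weights stand in for a private derivative. An adversarial input crafted against the public model alone (an FGSM-style signed step, shown here in simplified form) also fools the fine-tuned model the attacker never saw, because fine-tuning left the weights close to the original.

```python
# Hypothetical public pre-trained model (known to everyone) and a
# private model fine-tuned from it with a small weight update.
PUBLIC_W = [1.0, -2.0, 0.5]
FINE_TUNED_W = [1.1, -1.8, 0.6]   # stays close to the public weights

def score(w, x):
    """Positive score = class A, negative = class B."""
    return sum(wi * xi for wi, xi in zip(w, x))

def sign(v):
    return 1.0 if v > 0 else -1.0

# A benign input that both models score as positive.
x = [1.0, -1.0, 1.0]

# Craft the attack using ONLY the public model, then apply it
# unchanged to the private, fine-tuned model.
eps = 1.5
x_adv = [xi - eps * sign(wi) for xi, wi in zip(x, PUBLIC_W)]

print("public :", score(PUBLIC_W, x), "->", score(PUBLIC_W, x_adv))
print("private:", score(FINE_TUNED_W, x), "->", score(FINE_TUNED_W, x_adv))
```

Both scores flip sign, so the perturbation transfers: knowledge of the shared base model is enough to attack the specialized one.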
Online system manipulation. Most AI/ML systems learn while connected to the internet, giving cyberattackers an opportunity to strike. Attackers can mislead such systems by feeding them false inputs or by gradually retraining them to produce faulty outputs.
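The gradual-retraining risk can be sketched with a hypothetical online learner that keeps a running mean per class and updates on every sample it receives. An attacker who patiently streams falsely labeled inputs slowly drags one class's statistics until a point that was classified correctly is misclassified; no single poisoned sample would stand out.

```python
# Hypothetical online learner: a running per-class mean, continuously
# retrained on whatever the connected system receives.
class OnlineCentroid:
    def __init__(self):
        self.mean = {0: -2.0, 1: 2.0}    # starts sensibly trained
        self.count = {0: 100, 1: 100}

    def update(self, x, label):
        """Incremental (running-mean) retraining on one sample."""
        self.count[label] += 1
        self.mean[label] += (x - self.mean[label]) / self.count[label]

    def predict(self, x):
        return min(self.mean, key=lambda c: abs(x - self.mean[c]))

model = OnlineCentroid()
print("before attack, 1.5 is class", model.predict(1.5))

# The attacker drips in inputs near 1.5 with the false label 0,
# gradually pulling class 0's mean into class 1's territory.
for _ in range(2000):
    model.update(1.5, 0)

print("class-0 mean drifted to", round(model.mean[0], 2))
print("after attack, 1.5 is class", model.predict(1.5))
```

The drift is slow by design: each individual update is tiny, which is exactly what makes this kind of manipulation hard to spot without monitoring for unanticipated model behavior.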
AI-driven attacks. In AI-driven attacks, cyberattackers can use machine learning algorithms to find ways around security controls, or use deep learning algorithms to create new malware based on real-world samples.
The following tips can help you to protect yourself from ML/AI threats:
- Conduct penetration testing
- Use a second layer of AI/ML to catch anomalies in data training
- Use humans to check AI algorithms
- Implement strong access controls
- Stay alert for suspicious activity or unanticipated ML behaviors
- Streamline and secure system operations
- Maintain records of data ownership
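As an illustration of the second tip, a "second layer" can be as simple as an automated screen that flags suspicious training samples before the model ever sees them. The sketch below is a hypothetical example using the modified z-score (median- and MAD-based, so a few extreme values cannot hide themselves by inflating the spread the way they would with a plain mean and standard deviation); real pipelines would use richer anomaly detectors.

```python
import statistics

def flag_anomalies(samples, threshold=3.5):
    """Second-layer screen: flag training values whose modified z-score
    (0.6745 * |x - median| / MAD) exceeds the threshold."""
    med = statistics.median(samples)
    mad = statistics.median(abs(x - med) for x in samples)
    return [x for x in samples if 0.6745 * abs(x - med) / mad > threshold]

# Mostly normal training values with two injected extremes.
data = [0.1, -0.3, 0.2, 0.05, -0.1, 0.15, -0.2, 0.0, 25.0, -30.0]
print("flagged for review:", flag_anomalies(data))
```

Flagged samples would then go to the human reviewers mentioned in the third tip rather than straight into training.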
1 Constantin, 2021, “How data poisoning attacks corrupt machine learning models”