Autonomous Vehicles/Robots Highly Vulnerable to Cybersecurity Challenges
Driver assist: drivers ultimately in charge.
Currently works well on pre-mapped roads with line markings, but there are many scenarios (e.g. snow, dirt roads, dust storms) in which it does not work.
Autonomous levels:
- Level 0: none
- Level 1: driver assistance: steering OR speed
- Level 2: partial automation: steering AND speed, but driver ready to intervene at all times
- Level 3: conditional automation: vehicle drives itself in limited conditions, but driver must be ready to intervene when the system requests it
- Level 4: high automation: can safely stop itself if the driver does not respond. Works in most environments
- Level 5: completely autonomous in all conditions
Richard thinks full autonomy in late 2040s.
Cybersecurity Challenges
Autonomous vehicles are hard to secure because very high levels of assurance are required, and ML/AI algorithms, being complex and opaque, cannot offer such guarantees.
Supply chains also often include third-party, pre-trained models, which opens the door to supply chain attacks.
Threats:
- Intentional threats from exploitation of AI/ML vulnerabilities
- Unintentional harm: limitations, malfunctions, and poor design of AI models
Crashes and collisions impact not just the safety of the passenger, but also pedestrians, cyclists, other vehicles, and related infrastructure.
Autonomous vehicles are susceptible to adversarial ML techniques such as evasion or poisoning attacks. The threat model involves spoofing the pattern and facial recognition systems:
- Evasion attacks: manipulate the data fed into the system to alter the output
- Sensor jamming, blinding, spoofing, saturation can be used to feed wrong/incomplete data and undermine model training
- Noise can be added to images to completely change the model output while keeping it visually similar for humans (see the FGSM sketch after this list)
- Requires that the attacker have control over input and access to the network output
- Poisoning attacks: corruption of the training process to cause malfunctions that benefit the attacker (toy sketch after this list)
- China’s Great Firewall: breached within the first year by conspiring with the gatekeepers
- Meatbags are always the weakest link
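A minimal sketch of the evasion idea, using the fast gradient sign method (FGSM). The model, image, and label below are stand-ins I have invented; the point is only that a small, bounded pixel perturbation can flip a classifier's output:

```python
# Minimal FGSM evasion sketch (PyTorch). The model, image, and label are
# stand-ins; a real attack would target the vehicle's actual vision model.
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_perturb(model, image, label, eps=0.03):
    """Return a copy of `image`, perturbed by at most `eps` per pixel,
    that pushes the model away from the correct `label`."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step every pixel by +/- eps in the direction that increases the loss.
    adv = image + eps * image.grad.sign()
    return adv.clamp(0, 1).detach()

# Toy usage. Note the prerequisite from the notes above: the attacker needs
# control over the input and access to the network output (or gradients of
# a surrogate model trained to mimic the target).
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x = torch.rand(1, 3, 32, 32)    # stand-in "camera frame"
y = torch.tensor([3])           # stand-in true class, e.g. a sign type
x_adv = fgsm_perturb(model, x, y)
print((x_adv - x).abs().max())  # perturbation bounded by eps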
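And a toy poisoning sketch: an attacker who can tamper with training labels in a targeted region shifts the learned behaviour there. The dataset, model, and probe point are illustrative assumptions, not from the lecture:

```python
# Toy poisoning sketch (scikit-learn): the attacker flips training labels in
# a targeted region so the deployed model misbehaves there.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # clean labelling rule

y_poisoned = y.copy()
y_poisoned[X[:, 0] > 1.0] = 0             # targeted label flips

clean = LogisticRegression().fit(X, y)
dirty = LogisticRegression().fit(X, y_poisoned)

probe = np.array([[1.5, 1.5]])            # input the attacker cares about
print("clean P(class 1):", clean.predict_proba(probe)[0, 1])
print("dirty P(class 1):", dirty.predict_proba(probe)[0, 1])  # should drop
```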
Autonomous vehicles in particular may be vulnerable to:
- DDoS attacks: blind the vehicle to the outside world, leading to stalling or malfunction
- e.g. mesh network sharing traffic signals
- Manipulation of sensor/communication equipment: could hijack communication channels to manipulate sensor readings, or wrongly interpret road messages/signs
- Information disclosure/data breach: large amounts of personal/AI data stored on AVs
AI systems tend to be involved in high-stake decisions, so successful attacks can have serious impacts.
Hence, resilient and safety-critical systems must be designed with malicious attackers in mind:
- Difficult for ML: although trained to behave properly under normal circumstances, an attacker may be able to engineer a scenario (or spoof sensor data) to make it respond in an unexpected way
- Fail-safe systems and graceful degradation (see the sketch after this list)
- Humans often react sensibly to novel situations, but AI/ML algorithms may not
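A minimal sketch of the graceful-degradation pattern, assuming three redundant range sensors; the sensor names and thresholds are invented for illustration:

```python
# Illustrative fail-safe pattern: cross-check redundant range sensors and
# degrade gracefully rather than trust a single possibly-spoofed channel.
from dataclasses import dataclass

@dataclass
class Readings:
    lidar_m: float   # obstacle distance from lidar
    radar_m: float   # obstacle distance from radar
    camera_m: float  # obstacle distance estimated by the vision model

def plan(r: Readings, max_disagreement_m: float = 2.0) -> str:
    vals = [r.lidar_m, r.radar_m, r.camera_m]
    if max(vals) - min(vals) > max_disagreement_m:
        # Sensors disagree: possible jamming/spoofing or a fault.
        return "degrade: limit speed, alert driver, prepare safe stop"
    if min(vals) < 5.0:
        return "brake"
    return "cruise"

# A blinded/spoofed camera disagrees with lidar and radar -> fail safe.
print(plan(Readings(lidar_m=30.0, radar_m=29.5, camera_m=80.0)))
```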
Recommendations:
- Adopt security-by-design approaches to AI security on the road
- Security by design:
- Richard: set a regulated threshold on mean time to incident, and require each model to accumulate enough driving time to statistically show it is above the threshold (back-of-envelope sketch after this list)
- Doesn't seem like this is resilient to attacks, though: statistics from normal driving say little about deliberately engineered scenarios
- Systematic security validation of AI models:
- Collect large amounts of data
- Conduct risk assessment on AI models/algorithms
- Address supply chain issues through compliance with AI security regulations
- Share responsibilities across the supply chain: developers, manufacturers, end-users, third-party service providers
- Incident handling and vulnerability discoveries related to AI
- Share discoveries across industry, simulate various attack scenarios, conduct drills, establish cybersecurity incident handling/response teams
- Address limited expertise on AI cybersecurity in automotive industry
- Create diverse teams in cybersecurity/AI fields
- Dedicated AI cybersecurity courses?
- End-to-end holistic approaches to integrate AI cybersecurity with traditional cybersecurity principles
- R&D, proper governance of AI cybersecurity policy
- Adoption of an AI cybersecurity culture
- Regulation to ensure all companies above a certain size have cybersecurity staff
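Richard's threshold idea can be made concrete with a standard calculation, assuming incidents arrive as a Poisson process (my assumption, not stated in the lecture): after T incident-free hours, the 95% upper confidence bound on the incident rate is roughly 3/T, so demonstrating a mean time to incident above X hours takes about 3X incident-free hours.

```python
# Back-of-envelope for the regulated mean-time-to-incident threshold,
# assuming incidents follow a Poisson process (assumption).
from scipy.stats import chi2

def required_hours(mtti_threshold_h: float, incidents_seen: int = 0,
                   confidence: float = 0.95) -> float:
    """Driving hours needed so the one-sided upper confidence bound on the
    incident rate falls below 1 / mtti_threshold_h."""
    # Poisson upper bound on the rate after k events in T hours:
    #   rate_upper = chi2.ppf(confidence, 2k + 2) / (2 T)
    return (chi2.ppf(confidence, 2 * incidents_seen + 2) / 2) * mtti_threshold_h

# The "rule of three": demonstrating MTTI above 1,000,000 hours with zero
# observed incidents takes about 3,000,000 incident-free hours.
print(f"{required_hours(1_000_000):,.0f} hours")
```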
Successful past exploits:
- DARTS: deceiving autonomous cars with toxic signs:
- Teslas: the middle line of the '3' in a speed limit sign was extended to convince the model it was an '8'
- Still completely legible to humans
- Tencent: stickers used to trick Autopilot into swerving into the wrong lane
- Spray paint used to trick car into misreading stop sign as speed limit
Possible attacks:
- False line markings could be painted:
- AI models have no common sense
- Attacker could paint a pattern on cars or roadside elements to fool ML models
As car complexity increases, updates become necessary: how secure is the update process?
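One standard answer is cryptographically signed updates. A minimal sketch using Ed25519 from Python's `cryptography` package; the keys and firmware bytes are placeholders, and a real OTA pipeline would also need version/rollback protection:

```python
# Minimal signed-update sketch (Ed25519 via the `cryptography` package).
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Vendor side: sign the firmware image.
vendor_key = Ed25519PrivateKey.generate()
firmware = b"...firmware image bytes..."   # placeholder payload
signature = vendor_key.sign(firmware)

# Vehicle side: holds only the vendor's public key, verifies before install.
public_key = vendor_key.public_key()
try:
    public_key.verify(signature, firmware)
    print("signature OK: install update")
except InvalidSignature:
    print("reject update: tampered or rogue image")

# A flipped byte must be rejected.
try:
    public_key.verify(signature, firmware + b"x")
    print("BUG: tampered image accepted")
except InvalidSignature:
    print("tampered image rejected as expected")
```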
Exercise: possible attacks
- Spoofed flat tyre signal
- MITM
- Access through infotainment system
- Browser/app sandbox vulnerability
- Core driving systems probably separated, but still some communication with infotainment
- Replay attack to open cars: jam and record the first signal, then send the capture later
- Can be mitigated with timestamps or rolling counters (see the sketch after this list)
- Rogue software updates
- Phishing app on user phones
- Inter-car networking?
- Intra-car:
- Tesla has remote access to every car - SSH
- CAN bus: would need physical access?
- USB slots
- Diagnostics port (evil maid/mechanic attack)
- Attaching transparent LCD/OLED in front of camera lens
- Or even pass through video feed like in AR
- Blinding with an LED light: denial of service
- Only requires access to outside
- Can control image: attacker can make it normal most of the time
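For the replay-attack bullet above, a sketch of counter-based freshness: an HMAC over a rolling counter means a recorded unlock packet is rejected when replayed. Key handling and packet format are invented for the sketch, not a real fob protocol; jam-and-delay variants (RollJam-style) need stronger measures such as timestamps or challenge-response.

```python
# Replay protection sketch: include a monotonically increasing counter under
# an HMAC, so a captured unlock message cannot be replayed later.
import hmac, hashlib, struct

KEY = b"shared-secret-provisioned-at-pairing"  # placeholder key

def make_unlock(counter: int) -> bytes:
    msg = struct.pack(">Q", counter)           # 8-byte rolling counter
    tag = hmac.new(KEY, msg, hashlib.sha256).digest()
    return msg + tag

class Car:
    def __init__(self):
        self.last_counter = -1
    def accept(self, packet: bytes) -> bool:
        msg, tag = packet[:8], packet[8:]
        expected = hmac.new(KEY, msg, hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expected):
            return False                       # forged packet
        (counter,) = struct.unpack(">Q", msg)
        if counter <= self.last_counter:
            return False                       # replayed / stale packet
        self.last_counter = counter
        return True

car = Car()
p1 = make_unlock(41)
print(car.accept(p1))  # True: fresh
print(car.accept(p1))  # False: replay of the recorded packet
```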
Exam Hints
Ray’s section:
- 80% of the marks
- Mostly straight from the slides
- 20 marks for wireless mobile/enterprise security
- 4 tools used in Kali Linux
- Dialogues between clients/APs
- Shopping malls
- 20 for VPNs
- Setting up two types of VPNs: OpenVPN, IKEv2
- 20 for IoT insecurity
- UDP/TCP/Bluetooth
- 20 for smart cards