How to validate an autonomous self-patching system?

03/13/2018

In 2016, the Defense Advanced Research Projects Agency (DARPA) of the United States Department of Defense organized the Cyber Grand Challenge.  At this event, fully autonomous systems with no human interaction attacked each other and also developed patches for their own vulnerabilities.  The event was carried out under lab conditions (a simple operating system with few system calls, no file storage, and exploiting and patching only via an interface to a central adjudicator system).  It gave an outlook into the future of Information Technology as a whole, especially IT security and validation.  In the future, we will see cyber-attacks run completely by autonomous systems driven by Artificial Intelligence (AI).  Today, AI is already used by the bad guys, e.g. for malware creation, personalized hard-to-detect phishing mail generation, and smart, scalable botnets.  But the good guys will also use such technology to automate network scans, log file monitoring and review, and patch creation for vulnerable systems.

What is a self-patching system?

A self-patching system is an AI-driven system that uses this AI capability to scan itself for vulnerabilities and to develop and deploy patches for the detected vulnerabilities without human interaction.  From the author’s point of view, this functionality will mainly be used in the future to secure vulnerable systems against cyberattacks faster than manual patching can.

Let us take a quick look at how such a system works:  

  1. A bug discovery tool, which uses symbolic tracing and genetic fuzzing, finds the bug (a toy sketch of genetic fuzzing follows this list).
  2. It submits information about the broken binary of the system to the integrated patch development system (PDS).
  3. The PDS develops a patch for the broken binary based on the information from the bug discovery tool.
  4. After the patch is developed, the PDS also tests it to see whether it could cause issues with other binaries, slow down the system during or after patching, or simply not work properly.
  5. Once the PDS has developed and tested the patch, it deploys the patch to the system and replaces the broken binary with the replacement binary (the patch).
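
Genetic fuzzing, mentioned in step 1, mutates test inputs and keeps the mutants that reach previously unseen code paths.  Below is a toy sketch in Python; the `run_target` callback, assumed to execute the target binary on an input and return the set of branch IDs it covered, is a hypothetical stand-in for a real instrumented execution harness, and real tools such as Mechanical Phish are far more sophisticated.

```python
import random

def genetic_fuzz(run_target, seed: bytes, generations: int = 1000):
    """Toy genetic fuzzer: mutate inputs, keep those that reach new branches.

    `run_target` is a hypothetical callback that runs the target binary on an
    input and returns the set of branch IDs it covered.
    """
    population = [seed]
    seen_branches = set(run_target(seed))

    for _ in range(generations):
        parent = random.choice(population)
        child = bytearray(parent)
        if child:
            # Mutation: overwrite one random byte (real fuzzers use many mutators).
            child[random.randrange(len(child))] = random.randrange(256)
        child = bytes(child)

        covered = set(run_target(child))
        if covered - seen_branches:
            # The mutant reached new code, so keep it for further breeding.
            seen_branches |= covered
            population.append(child)

    return population, seen_branches
```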

This whole process is coordinated by an orchestration system or routine; a minimal sketch of such a routine follows.
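
To make this more concrete, here is a minimal sketch of such an orchestration routine in Python.  All callback names (`discover_bugs`, `develop_patch`, `test_patch`, `deploy_patch`) are hypothetical placeholders for the subsystems described above, not the actual API of Mechanical Phish or any other existing system.

```python
def orchestrate(binaries, discover_bugs, develop_patch, test_patch, deploy_patch):
    """Hypothetical orchestration loop for the five steps described above.

    Each callback stands in for one subsystem: bug discovery, patch
    development (PDS), patch testing, and deployment.
    """
    for binary in binaries:
        for bug_report in discover_bugs(binary):       # step 1: symbolic tracing / fuzzing
            patch = develop_patch(binary, bug_report)  # steps 2-3: PDS builds a candidate fix
            if patch is None:
                continue                               # no fix found for this bug
            if test_patch(binary, patch):              # step 4: compatibility and performance checks
                deploy_patch(binary, patch)            # step 5: swap in the replacement binary
```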

For those of you who want to know in more detail how such a system works, see the Phrack article by Shellphish, who participated in the Cyber Grand Challenge. The source code of the Shellphish system “Mechanical Phish” is also available on GitHub as open-source software for testing and further development.

The regulatory point of view

Validation is “the documented evidence that provides a high degree of assurance, that a specific process will consistently produce a product meeting its predetermined specifications and quality attributes” or, in other words, the documented evidence that your computerized system does what it is supposed to do.  FDA 21 CFR Part 11 and EU EudraLex Volume 4 Annex 11 still apply to such a system if it is used to support GxP processes and/or fulfil GxP requirements.  From the author’s point of view, Part 11 makes no difference between a standard computerized system and a fully autonomous AI-driven computerized system with regard to system management.  In general, Annex 11 also makes no difference between classically managed computerized systems and autonomous AI-driven computerized systems.  Annex 11, Chapter 10, says only that “Any changes to a computerized system including system configurations should only be made in a controlled manner in accordance with a defined procedure.”

In practice, this makes no difference either.  Most other regulations likewise contain no specific requirement that would make the use of a self-patching system impossible.

Issues in validating such a system

For the initial system validation, it makes no big difference whether patching is carried out as it is today or via a self-patching functionality.  The issue lies more in change control and in maintaining the validated state of the self-patching system.  If the system patches vulnerable functions by itself, how would you recognize whether it is still GxP-compliant?  How can you keep your documentation up to date?
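
One simple building block for keeping track of the validated state is an integrity check: record the hash of every binary at change approval and continuously compare the deployed binaries against that baseline, so any patch applied without a matching approved change record is flagged.  The sketch below is illustrative only; the `approved_hashes` record format is an assumption, not something prescribed by Part 11 or Annex 11.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 fingerprint of a deployed binary."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def find_unapproved_changes(binaries: list[Path],
                            approved_hashes: dict[str, str]) -> list[Path]:
    """Return deployed binaries whose hash is not in the approved baseline.

    `approved_hashes` maps binary name -> SHA-256 recorded at change
    approval; this record format is an illustrative assumption.
    """
    return [b for b in binaries
            if sha256_of(b) != approved_hashes.get(b.name)]
```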

A suggestion 

From the author’s point of view, a possible solution would be a self-patching system that also generates the documentation required by the GxP regulations and submits it to a central storage location after change approval.  The change process could work as follows (a sketch of the approval gates follows the list):

  1. A bug discovery tool, which uses symbolic tracing and genetic fuzzing, finds the bug.
  2. The bug discovery tool generates a change request in the applicable tool and also creates the required change documents.
  3. It submits information about the broken binary of the system to the integrated patch development system (PDS) and to a separate Quality Assurance System (QAS) for verification.
  4. The QAS approves or rejects the change for development and testing.
  5. The QAS submits the result of its decision to the PDS.
  6. Once the change is approved for development, the PDS develops a patch for the broken binary based on the information from the bug discovery tool.
  7. After the patch is developed, the PDS also tests it to see whether it could cause issues with other binaries, slow down the system during or after patching, or simply not work properly.
  8. The PDS submits the outcome of the tests as a test report to the QAS.
  9. The QAS approves or rejects the test report and changes the status of the change request to “approved” or “rework required”.
  10. After the PDS has developed and tested the patch and the QAS has approved the change, the patch is deployed to the system and the broken binary is replaced with the replacement binary (the patch).
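
To make the approval gates explicit, the following sketch models the change request as a small state machine in which the PDS may only deploy after the QAS has approved both the change and the test report.  All class, state, and method names are illustrative assumptions, not part of any existing tool.

```python
from enum import Enum, auto

class ChangeState(Enum):
    REQUESTED = auto()         # step 2: change request created by bug discovery
    APPROVED_FOR_DEV = auto()  # step 4: QAS approves development and testing
    REJECTED = auto()          # step 4: QAS rejects the change outright
    TESTED = auto()            # step 8: PDS has submitted the test report
    APPROVED = auto()          # step 9: QAS approves the test report
    REWORK_REQUIRED = auto()   # step 9: QAS rejects the test report
    DEPLOYED = auto()          # step 10: patch replaces the broken binary

class ChangeRequest:
    """Hypothetical change record enforcing the QAS gates described above."""

    def __init__(self, bug_report):
        self.bug_report = bug_report
        self.test_report = None
        self.state = ChangeState.REQUESTED

    def qas_review(self, approved: bool):
        # The QAS decides twice: once before development, once after testing.
        if self.state is ChangeState.REQUESTED:
            self.state = ChangeState.APPROVED_FOR_DEV if approved else ChangeState.REJECTED
        elif self.state is ChangeState.TESTED:
            self.state = ChangeState.APPROVED if approved else ChangeState.REWORK_REQUIRED

    def submit_test_report(self, report):
        assert self.state is ChangeState.APPROVED_FOR_DEV, "QAS approval for development required first"
        self.test_report = report
        self.state = ChangeState.TESTED

    def deploy(self):
        assert self.state is ChangeState.APPROVED, "deployment requires QAS approval of the test report"
        self.state = ChangeState.DEPLOYED
```

In this model, `deploy()` simply cannot run until both QAS decisions have been recorded, which is one way to express the “controlled manner in accordance with a defined procedure” of Annex 11, Chapter 10, in code.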

Author: Kvalito Consulting
