On August 4, 2016, DARPA announced a winner of its Cyber Grand Challenge. The contest was described as
designed to accelerate the development of advanced, autonomous systems that can detect, evaluate, and patch software vulnerabilities before adversaries have a chance to exploit them. The seven competing teams in today’s final event were composed of whitehat hackers, academics, and private sector cyber systems experts.
They described the actual challenge as:
For almost 10 hours, competitors played the classic cybersecurity exercise of Capture the Flag in a specially created computer testbed laden with an array of bugs hidden inside custom, never-before-analyzed software. The machines were challenged to find and patch within seconds—not the usual months—flawed code that was vulnerable to being hacked, and find their opponents’ weaknesses before the defending systems did.
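For context on what "flawed code" means here, this is a minimal sketch of the kind of memory-safety bug reportedly hidden in the custom challenge binaries; it is a hypothetical illustration I wrote, not an actual CGC challenge:

```c
/* Hypothetical illustration only -- not an actual CGC challenge binary.
 * A fixed-size stack buffer is filled from attacker-controlled input
 * with no bounds check, the classic class of bug the competing systems
 * had to find and patch automatically. */
#include <stdio.h>
#include <string.h>

static void handle_request(const char *input)
{
    char name[32];
    /* BUG: strcpy writes past 'name' when input exceeds 31 bytes,
     * corrupting the stack. A machine-generated patch would bound the
     * copy, e.g. strncpy(name, input, sizeof(name) - 1). */
    strcpy(name, input);
    printf("hello, %s\n", name);
}

int main(void)
{
    char line[256];
    if (fgets(line, sizeof(line), stdin))
        handle_request(line);
    return 0;
}
```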
The winning system, Mayhem, was to be formally invited to participate in the DEF CON Capture the Flag competition, "marking the first time a machine will be allowed to play in that historically all human tournament."
I have read and re-read the material at DARPA, and I still can't believe that an automated system found anything on the level of the April 2014 Heartbleed bug. I wonder whether the vulnerabilities were instead more on the level of published Microsoft security notices or recommended updates (in other words, whether the "bug finder" was basically more of an automatic patch installer).
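To make the comparison concrete, this is a simplified sketch of the Heartbleed pattern (CVE-2014-0160) as I understand it, not OpenSSL's actual code; the struct and function names are my own invention:

```c
/* Simplified sketch of the Heartbleed pattern (CVE-2014-0160), not
 * OpenSSL's actual code: the peer-supplied length field is trusted,
 * so the copy reads past the real payload and leaks adjacent heap
 * memory back to the attacker. */
#include <stdlib.h>
#include <string.h>

struct heartbeat {
    unsigned short claimed_len; /* length the peer *claims* was sent */
    unsigned char payload[];    /* bytes actually received may be fewer */
};

unsigned char *build_response(const struct heartbeat *hb, size_t received)
{
    unsigned char *resp = malloc(hb->claimed_len);
    if (!resp)
        return NULL;
    /* BUG: should first check that hb->claimed_len fits within the
     * 'received' bytes; copying claimed_len bytes over-reads the heap. */
    memcpy(resp, hb->payload, hb->claimed_len);
    return resp;
}
```

Finding that kind of bug requires reasoning about a mismatch between a protocol field and the actual input size, which seems considerably harder than applying a known fix.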
Does anyone know the technical level of the DARPA challenge test bed relative to Heartbleed or similar real-world vulnerabilities? I suppose I will find out when Mayhem actually competes against humans in the CTF competition, but I am doubtful at the moment.