The issue is that cybersecurity is not limited to vulnerabilities in software, which is what you seem to be focused on. Yes, software vulnerabilities are a big part of it, but there's much, much more.
For example, take cryptography, the mathematical algorithms we depend on to secure nearly all sensitive information. In order for AI to do the work of humans, it would have to be able to find flaws in mathematical procedures, which probably means it would have to be able to write its own proofs. It would also have to write new algorithms to keep up with increasing computing power and new breakthroughs in mathematical research. No AI is even close to being able to do this. In fact, it's so far-fetched that I don't think anyone's even tried. (And if AI ever does reach this point, the death of cybersecurity will be the least of your concerns!)
Then there's the human aspect, as schroeder mentioned. It's not something you can toss out as an "organizational problem" and just ignore. In fact, I'd say about 80% of cybersecurity is about dealing with human stupidity - not just of users, but of system administrators as well. For example:

- It's 2015, but the most common password is still "123456".
- People use the same password for everything.
- Website administrators don't bother to hash and salt users' passwords, or they choose weak algorithms.
- People don't change default passwords on routers, printers, cameras, and even industrial control systems, leaving them fully exposed on the Internet, just waiting to be exploited.
- Company networks are not properly designed and segmented.
- And, of course, phishing: many huge security breaches start with a well-crafted email that obtains a system administrator's password, providing an entry point into the network.

There are tons more I could list, but you get the idea. All of these are serious, current problems in cybersecurity, widely exploited by hackers, yet none of them are fundamentally software flaws. To fix them, you have to either educate people or design systems that are more idiot-proof.
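To make the hashing-and-salting point concrete, here's a minimal sketch of how a website could store passwords properly, using only Python's standard library. The function names (`hash_password`, `verify_password`) and the parameters (16-byte salt, 600,000 iterations) are my own illustrative choices, not anything from a specific site:

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Derive a salted hash with PBKDF2-HMAC-SHA256; never store the plaintext."""
    salt = os.urandom(16)  # a fresh random salt per user defeats rainbow tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Re-derive the hash with the stored salt and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return hmac.compare_digest(candidate, digest)

# Usage: store (salt, digest) in the database, never the password itself.
salt, digest = hash_password("123456")
assert verify_password("123456", salt, digest)
assert not verify_password("1234567", salt, digest)
```

The deliberately slow, iterated hash is what makes a leaked database expensive to crack; a plain unsalted MD5 or SHA-1 of the password (a "weak algorithm" in the sense above) can be reversed in bulk with precomputed tables.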
And all that, of course, is assuming that AI will one day be able to fix software vulnerabilities automatically, 100% of the time - something that is far from guaranteed. If anything, people will simply start looking for flaws in the AI. After all, AI is written by humans, and it'll never be perfect. There will always be instances where the AI can be fooled.
Theorem proving seems promising, but I do not want to become involved in cyber security, possibly as a career, if I would be out of a job in the future.
This isn't really the place for career advice, but I can say with 100% certainty that cybersecurity isn't going away anytime soon. It is only going to get bigger as computer systems become even more critical than they already are, as developers write more and more flawed code, and as more and more (clueless) users get connected.