While all of them lead to a 404 Not Found, CPU cycles are wasted in processing these requests ...
Somewhere CPU cycles "must be wasted" to filter out these requests. But how many cycles this takes, and where exactly they are needed, depends on the kind of requests and on your server and application setup.
If there is a clear source of these requests you might use simple packet filter rules (iptables, ipfw or some router in front of the server) to block such requests already at the transport layer, by filtering on the source IP address. This is the cheapest way, i.e. the fewest cycles are wasted.
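As a minimal illustration, assuming the offending traffic really does come from a single address range (203.0.113.0/24 below is just a placeholder from the documentation range), one iptables rule drops those packets before the web server ever sees them:

```
iptables -I INPUT -s 203.0.113.0/24 -j DROP
```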
But in most cases there is no such clear source, so filtering must be done at the application layer, which is more complex and thus needs more CPU cycles. This can be done with a web application firewall (WAF) in front of the server, with filtering rules in your web server, or by filtering these requests inside your web application.
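To sketch the last option, here is an example of such filtering inside a Python WSGI application: a small middleware answers known junk paths with an immediate 404, so the real application (and anything expensive behind it) is never invoked. The path patterns are made-up examples of probes an application does not serve; adjust them to what you actually see in your logs.

```python
import re

# Hypothetical probe paths this application never serves.
BAD_PATHS = re.compile(r"^/(wp-login\.php|wp-admin|phpmyadmin|\.env)", re.IGNORECASE)

class EarlyRejectMiddleware:
    """Answer known junk requests before the wrapped application runs."""

    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        path = environ.get("PATH_INFO", "")
        if BAD_PATHS.match(path):
            # Cheap early answer: the wrapped application and its
            # database are never touched for these requests.
            start_response("404 Not Found", [("Content-Type", "text/plain")])
            return [b"Not Found"]
        return self.app(environ, start_response)

# Usage (assumed): application = EarlyRejectMiddleware(application)
```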
... and affects access by legitimate users at least for a short while.
While any server on the internet gets lots of such requests, they are usually not so numerous that they really affect the application, i.e. they are more a nuisance than a denial of service. If handling such requests is too costly for you, then you might need to rethink the design of your web application, e.g. make sure that bad requests are filtered out early and don't trigger database lookups or other costly operations.
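A sketch of what "filtered out early" can mean inside the application, assuming a simple page lookup: a cheap in-memory check runs before the database query, so bogus requests cost almost nothing. Table, column and function names here are invented for the example.

```python
import sqlite3

# Small whitelist of valid page slugs kept in memory (refreshed as needed).
VALID_SLUGS = {"home", "about", "pricing"}

def handle_page_request(slug: str, db: sqlite3.Connection):
    if slug not in VALID_SLUGS:
        # Reject before doing any costly work such as a database lookup.
        return 404, "Not Found"
    row = db.execute(
        "SELECT title, body FROM pages WHERE slug = ?", (slug,)
    ).fetchone()
    if row is None:
        return 404, "Not Found"
    return 200, row
```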
For more details about this problem see How can I defend against malicious GET requests?