To answer this question, you need to understand the context and history of operating systems and their security model.
Desktop operating systems were designed in a time when the biggest security consideration was protecting you and the system itself from the other human users of the machine. When multiple people share a machine, you need to make sure that one user's program can't affect the others, and that other people can't read or write your private files.
"Administrative" permissions (`sudo`, UAC, or similar) largely protect the system itself from its users. In order to write system files and configuration (`/bin`, `/etc`, `C:\Windows`, HKLM, etc.), or to gain access to another user's data, a user needs admin rights. Admin permissions usually have little to do with the GUI you see on your screen; in the end, they're mostly about file permissions.
A basic assumption of the classic desktop security model is that every program that you as a user run is trustworthy. Your programs should be able to read and write your private files (hopefully at your direction), and should be able to do whatever they need to do with your input and output devices.
This model worked well through the era when software came in cardboard boxes, because most software was indeed trustworthy. (And even if it wasn't, there usually wasn't much to be gained by doing evil things.)
The internet and constant connectivity changed that. Not all software is trustworthy, and your private data (to which you necessarily have full access) is far more valuable than anything else on the system.
Smartphone operating systems (iOS and Android specifically) were actually quite innovative in how they shifted the security model. Similar to how browsers sandbox websites, mobile OSes assume that only one human will use the device, and treat apps as untrusted.
Basically, they changed the security model to include protecting the user from malicious programs.
Mobile OSes enforce isolation between each app's data (by running each app as a separate Unix user), limit the control that apps have over the GUI, and design the system APIs so that apps must be granted explicit permission to call a wide range of them — a concept that simply did not exist in the desktop world.
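The "explicit permission" model amounts to a gate in front of every sensitive API. Everything in this sketch is hypothetical — the permission names, the grant set, the decorator — it only illustrates the shape of the mobile model, not any real Android or iOS API:

```python
# Hypothetical sketch of a mobile-style permission gate.
# Sensitive calls fail unless the user has explicitly granted
# the corresponding permission.
granted = set()  # permissions the user has explicitly granted


class PermissionDenied(Exception):
    pass


def requires(permission):
    """Wrap an API so it refuses to run without a grant."""
    def decorator(func):
        def wrapper(*args, **kwargs):
            if permission not in granted:
                raise PermissionDenied(permission)
            return func(*args, **kwargs)
        return wrapper
    return decorator


@requires("CAMERA")
def take_photo():
    return "photo bytes"


# Without a grant, the call is refused by the OS layer:
try:
    take_photo()
except PermissionDenied as e:
    print("denied:", e)  # denied: CAMERA

# After the user taps "Allow", the same call succeeds:
granted.add("CAMERA")
print(take_photo())  # photo bytes
```

On a desktop OS there is no such gate: once your process is running, calling the screenshot or input APIs just works.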
Desktop operating systems are trying to catch up to modern reality with "App Store" apps on Windows and Mac (and snaps on Linux) that are sandboxed and isolated like mobile apps. However, developers have to choose to write their programs in a way that's compatible with the sandbox and publish them on each platform's Store. It's very difficult to sandbox existing desktop apps because they make so many assumptions based on the classic model that they break completely when sandboxed and denied permissions.
Thus, desktop OSes still allow pretty much anything you download to run with the full, wide-ranging capabilities that your user account has.
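The "full capabilities" point is concrete: any process you start can open any file your account owns, with no prompt and no admin rights. A minimal sketch (the temp directory and file name are made up, standing in for your home directory and a private document):

```python
import os
import tempfile

# Simulate a "private" document owned by the user.
home = tempfile.mkdtemp()  # stand-in for your home directory
secret = os.path.join(home, "diary.txt")
with open(secret, "w") as f:
    f.write("my secret plans")


# A completely unrelated program running as the same user needs
# no permission, prompt, or admin rights to read it.
def nosy_app(path):
    with open(path) as f:
        return f.read()


print(nosy_app(secret))  # my secret plans
```

Nothing in the classic desktop model distinguishes `nosy_app` from your text editor: both are "your" programs, so both get "your" access.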
So... with all of that in mind, it should be glaringly obvious why the spyware app you're testing is able to do what it does. In fact, the only question you might have is why wouldn't it? The app runs as your user, and has full control over your user's GUI session. It can intercept any keyboard input, take screenshots of your entire screen, control which window gets focus, and even change the behavior of or kill your other running apps.
And it's not even necessarily bad that programs are able to do these things. These are individually all useful capabilities for apps that serve as utilities for you — a hotkey app needs to globally intercept keyboard input, a screenshot app needs to take screenshots of other apps, a power user window switching app needs to control focus, and developer tools need to fiddle with other processes to debug them. But combining those things together can lead to very uncomfortable results.
As a user who understands what the desktop security model is designed to defend against, the conclusion should be clear: precisely because every random app has complete and total access to the most valuable part of your computer, your data, untrusted programs should at minimum be run under a separate user account, and ideally inside a throwaway VM. Otherwise, the app can do pretty much anything short of modifying the OS itself.