How does data downloaded over HTTP or FTP get checked for corruption?
By itself, not at all.
HTTP and FTP as protocols don't offer integrity¹.
However, HTTP and FTP usually run atop TCP/IP, and both TCP and IPv4 carry checksums – if a TCP checksum fails, your operating system simply discards the segment and has it retransmitted. So there's no need for HTTP to implement integrity checking itself.
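To make the transport-layer mechanism concrete, here is a minimal sketch of the 16-bit ones'-complement Internet checksum (RFC 1071) that TCP uses. This is illustrative only – in practice the kernel (or even the NIC) computes and verifies it, never your application:

```python
def internet_checksum(data: bytes) -> int:
    """Ones'-complement sum of 16-bit words, per RFC 1071."""
    if len(data) % 2:                 # pad odd-length data with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]      # 16-bit big-endian word
        total = (total & 0xFFFF) + (total >> 16)   # fold the carry back in
    return ~total & 0xFFFF

# A receiver sums the segment *including* the transmitted checksum field;
# an intact segment sums to 0xFFFF, i.e. the final complement is 0.
payload = b"hello world!"
csum = internet_checksum(payload)
assert internet_checksum(payload + csum.to_bytes(2, "big")) == 0
```

Note the limits of this check: it's a simple sum, designed to catch random bit errors on the wire, not deliberate tampering – which is why the answer later points to TLS for that threat model.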
When tunneling anything (including HTTP and FTP) over TLS, you get an additional layer of integrity checks.
So how do FTP and HTTP ensure that data is not corrupt?
They don't. It's usually the transport's job to guarantee integrity, not the job of the application protocol.
¹ There is an optional header in HTTP/1.1 (Content-MD5) that allows the server to specify a checksum, but it's rarely used: computing it is practically impossible for resources generated on the fly, it comes at a high cost for large files, and it offers little advantage over the much more fine-grained TCP checksumming. I don't even know whether browsers commonly support it.
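If you do hit one of the rare servers that sends this header, checking it is straightforward: Content-MD5 (RFC 1864) carries the base64-encoded MD5 digest of the body. A sketch using only the standard library – the `md5_matches` helper name is mine, and a missing header just means "no application-layer check available":

```python
import base64
import hashlib
import urllib.request

def md5_matches(body: bytes, header_value: str) -> bool:
    """Compare a body against a Content-MD5 value (base64 of the raw digest)."""
    digest = base64.b64encode(hashlib.md5(body).digest()).decode("ascii")
    return digest == header_value

def fetch_and_verify(url: str) -> bytes:
    """Download a resource; raise if a Content-MD5 header is present but wrong."""
    with urllib.request.urlopen(url) as resp:
        body = resp.read()
        sent = resp.headers.get("Content-MD5")   # usually absent in practice
    if sent is not None and not md5_matches(body, sent):
        raise IOError("Content-MD5 mismatch: transfer corrupted")
    return body
```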
I'd like to add that it is, of course, harder to cause an MD5 collision (MD5 being what these headers use) than to forge TCP packets, if you wanted to intentionally modify the transfer. But if that is your attack scenario, TLS is the answer, not HTTP checksums.